\begin{document}
\title{Higher Rank Askey--Wilson Algebras as Skein Algebras}
\begin{abstract}
In this paper we give a topological interpretation and diagrammatic calculus for the rank $(n-2)$ Askey--Wilson algebra by proving that there is an explicit isomorphism with the Kauffman bracket skein algebra of the $(n+1)$-punctured sphere. To do this we consider the Askey--Wilson algebra in the braided tensor product of $n$ copies of either the quantum group $\mathcal{U}_q(\mathfrak{sl}_2)$ or the reflection equation algebra. We then use the isomorphism of the Kauffman bracket skein algebra of the $(n+1)$-punctured sphere with the $\mathcal{U}_q(\mathfrak{sl}_2)$ invariants of the Alekseev moduli algebra to complete the correspondence. We also find the graded vector space dimension of the $\mathcal{U}_q(\mathfrak{sl}_2)$ invariants of the Alekseev moduli algebra and apply this to find a presentation of the skein algebra of the five-punctured sphere, and hence also a presentation for the rank $2$ Askey--Wilson algebra.
\end{abstract}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
The main goal of this paper is to give a topological interpretation of the higher rank Askey--Wilson algebra. More precisely, we will prove that there is an explicit isomorphism
\[\AW{n} \xrightarrow{\sim }\SkAlg{q}{\Sigma_{0,n+1}}\]
between the rank $(n-2)$ Askey--Wilson algebra $\AW{n}$ and the Kauffman bracket skein algebra $\SkAlg{q}{\Sigma_{0,n+1}}$ of the $(n+1)$-punctured sphere. The Kauffman bracket skein algebra is an invariant of oriented surfaces given by considering framed links in the thickened surface and imposing the following skein relations
\begin{align*}
\skeindiagram{leftnoarrow} &= q^{\frac{1}{2}}\; \skeindiagram{horizontal} + q^{-\frac{1}{2}} \; \skeindiagram{vertical}, \\
\skeindiagram{circle} &= -q - q^{-1}
\end{align*}
which allows one to resolve all crossings and remove trivial links at the cost of a constant. These relations, together with the diagrammatic calculus based on Jones--Wenzl idempotents \cite{MV94,Lickorish93,KL94book}, make skein algebras an ideal setting for carrying out concrete computations. The Kauffman bracket skein relation can be renormalised to give the famous Jones polynomial and the skein algebra $\SkAlg{q}{\Sigma}$ itself is a quantisation of the $\SL_2$ character variety of the surface \cite{Bullock97,PrzytyckiSikora00,BullockFrohmanKania99}.
On the other hand, the original Askey--Wilson algebra $\operatorname{Zh}_q(a,b,c,d)$ was first introduced by Zhedanov \cite{Zhedanov91} in 1991 as the algebra of the bispectral operators of the Askey--Wilson polynomials.
Askey--Wilson polynomials are hypergeometric orthogonal polynomials which can be considered as Macdonald polynomials for the affine root system $(C^{\vee}_1, C_1)$ \cite{NS04}.
Askey--Wilson algebras and polynomials have applications in physics such as to the one-dimensional Asymmetric Simple Exclusion Process (ASEP) statistical mechanics model~\cite{USW04} as well as applications to more algebraic areas such as the theory of Leonard pairs \cite{TV04}.
Askey--Wilson polynomials can also be truncated to give $q$-Racah polynomials which encode the $6j$-symbols or Racah coefficients which occur in angular momentum recoupling when there are three sources of angular momentum.
The Askey--Wilson algebra\footnote{This is the original Askey--Wilson algebra of Zhedanov. In this paper we use the special Askey--Wilson algebra which, if we specialise its parameters, is isomorphic to the truncated $q$-Onsager algebra quotiented by the Sklyanin determinant.} can in turn be considered as a truncated version of the $q$-Onsager algebra; the $q$-Onsager algebra is the reflection equation algebra of the quantum group $\mathcal{U}_q\big(\widehat{\mathfrak{sl}_2}\big)$ of the affine Lie algebra $\widehat{\mathfrak{sl}_2}$ \cite{Terwilliger01,Baseilhac05} and is used in integrable systems such as in the analysis of the XXZ spin chains with non-diagonal boundary conditions \cite{BK05,BB13}.
There are multiple alternative versions of Askey--Wilson algebras. As well as Zhedanov's original algebra $\operatorname{Zh}_q(a,b,c,d)$ there is $\operatorname{aw}(3)$, in which the parameters $a,b,c,d$ have been replaced by central elements, and the universal Askey--Wilson algebra $\Delta_q$ of Terwilliger \cite{Terwilliger11} with a different choice of central elements. We shall use $\AW{3}$ which is a quotient of $\operatorname{aw}(3)$ by a relation involving the quantum Casimir. For a more complete account of the different versions of the Askey--Wilson algebra and their applications see \cite{CFG21} and the references therein.
There are multiple possible approaches to generalising the definition of the Askey--Wilson algebra to higher ranks and we shall follow the approach based on relating $\AW{3}$ to quantum groups. Huang showed that there is an embedding
\[\AW{3} \xhookrightarrow{} \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt)\]
with the generators $\{\,\Lambda_{A} \;\vert\; A \subseteq \{1, 2, 3\} \,\}$ of $\AW{3}$ under the embedding being constructed out of the quantum Casimir of $\mathcal{U}_q(\slt)$ using coproducts and a map $\tau$ \cite{Huang17,CrampeGaboriaudVinetZaimi20}. This definition was then generalised by Post and Walter to $\AW{4}$ \cite{PostWalter17} and by De Clercq et al.\ to $\AW{n}$, which is defined as a subalgebra $\AW{n} \subset \mathcal{U}_q(\slt)^{\otimes n}$ generated by explicit generators $\{\,\Lambda_A\;\vert\; A \subseteq \{1,\ldots,n\}\,\}$ \cite{DeClercq19,DeBieDeClercqVanDeVijver20}. De Clercq et al.\ also showed that an algebra isomorphic to $\AW{n}$, the rank $(n-2)$ Bannai--Ito algebra, is the symmetry algebra of the $\mathbb{Z}^n_2$ $q$-Dirac--Dunkl model \cite{DeBieDeClercqVanDeVijver20}.
\subsection*{Isomorphism between Askey--Wilson and Skein Algebras}
In this paper we shall prove
\begin{thm}
\label{thm:AWskeiniso}
There is an isomorphism
\[\SkAlg{q}{\Sigma_{0,n+1}} \xrightarrow{\sim} \AW{n}: s_A \mapsto -\Lambda_A\]
between the Kauffman bracket skein algebra $\SkAlg{q}{\Sigma_{0,n+1}}$ of the $(n+1)$-punctured sphere and the rank $(n-2)$ Askey--Wilson algebra $\AW{n}$ which sends the simple closed curve $s_A$ around the punctures in $A$\footnote{There are two choices: either the curves always go below or always go above the points they do not include. In this paper we shall choose below.} to the Askey--Wilson generator $\Lambda_A$ with a negative coefficient.
\end{thm}
\noindent To show the power of this theorem, in \cref{sec:commutator} we shall use some skein algebraic calculations to obtain an elegant new proof of the theorem of De Clercq \cite[Theorem~3.2]{DeClercq19} which states that $\AW{n}$ satisfies a generalisation of the commutator relations which are used to define the classical Askey--Wilson algebras.
We also show in \cref{sec:braid_grp} that this isomorphism is compatible with the action of the braid group.
\cref{thm:AWskeiniso} is a generalisation of the classical result that $\AW{3}$ is isomorphic to the Kauffman bracket skein algebra of the four-punctured sphere. This was proven by showing that $\AW{3}$ is isomorphic to the $(C^{\vee}_1, C_1)$ spherical double affine Hecke algebra (DAHA) \cite{Koornwinder07,Terwilliger13} and by comparing the presentation of the $(C^{\vee}_1, C_1)$ spherical DAHA to the presentation of the Kauffman bracket skein algebra of the four-punctured sphere \cite{BS18,Cooke18,Hikami19}.
This approach is not readily generalisable so we will instead prove \cref{thm:AWskeiniso} by chaining together the following three maps
\[
\SkAlg{q}{\Sigma_{0, n+1}} \subseteq \SkAlgSt{\Sigma_{0, n+1}} \xrightarrow{\sim} \mathcal{O}_q(\mathfrak{sl}_2)^{\tilde{\otimes} n} \xrightarrow{\sim} \left( \mathcal{U}_q(\slt)^{\lf} \right)^{\tilde{\otimes} n} \xhookrightarrow{} \mathcal{U}_q(\slt)^{\otimes n},
\]
which we shall now discuss.
\subsection*{Braiding the Tensor Product and the Generators of the Askey--Wilson Algebra}
The last of these maps is an injective homomorphism that unbraids the braided tensor product $\tilde{\otimes}$ of Majid \cite{Majid91,Majid95book} to give the ordinary tensor product. In \cref{sec:braided_tensor_product} we will show that if we consider the Askey--Wilson algebra $\AW{n}$ as a subalgebra of the braided rather than unbraided tensor product of $n$ copies of $\mathcal{U}_q(\slt)$ we obtain a simpler description of the generators with no $\tau$ map:
\begin{thm}
\label{thm:braidedgenerators}
The generator $\Lambda_A$ for $A = \{i_1< \dots< i_k\} \subseteq \{1, \dots, n\}$ as an element of $\left( \mathcal{U}_q(\slt)^{\lf} \right)^{\tilde{\otimes} n}$ is given by $\underline{\Delta}^{(k-1)}(\Lambda)_{\underline{i}}$ where $\Lambda$ is the quantum Casimir, $\underline{\Delta}$ is the `braided' coproduct, and the subscript $\underline{i} = (i_1, \dots, i_k)$ denotes that the $j^{\text{th}}$ tensor factor of the coproduct is placed in position $i_j$ of the tensor product and $1$ is placed in all the empty positions. For example,
\[\Lambda_{134} = \Lambda_{(1)} \otimes 1 \otimes \Lambda_{(2)} \otimes \Lambda_{(3)} \in \left( \mathcal{U}_q(\slt)^{\lf} \right)^{\tilde{\otimes} 4}\]
where we are using Sweedler notation for the coproduct.
\end{thm}
\subsection*{Reflection Equation Algebras and Presentations} The middle map is a Hopf algebra isomorphism and is given componentwise by the Rosso isomorphism from the reflection equation algebra $\mathcal{O}_q(\mathfrak{sl}_2)$ to $\mathcal{U}_q(\slt)^{\lf}$, the locally finite subalgebra of $\mathcal{U}_q(\slt)$ \cite{KS97book}.
The defining relation for the reflection equation algebra is the reflection equation, which first arose in integrable systems related to factorisable scattering on a half-line with a reflecting wall, and is based on the standard $R$-matrix of $\mathcal{U}_q(\slt)$ \cite{Kulish96}. The algebra $\mathcal{O}_q(\mathfrak{sl}_2)^{\tilde{\otimes} n}$ is a special case for the $(n+1)$-punctured sphere of the Alekseev moduli algebra $\mathcal{L}_{\Sigma}$ which is defined combinatorially for more general surfaces with different tensor products depending on how the handles of the handlebody decomposition of the surface interact \cite{Alekseev94,AGS96}.
The subalgebra $\mathcal{L}_{\Sigma}^{\mathcal{U}_q(\slt)}$ of the Alekseev moduli algebra which is invariant under the action of $\mathcal{U}_q(\slt)$ is naturally filtered by degree.
In \cref{sec:dimensions} we will use the explicit algebraic description for $\mathcal{L}_{\Sigma}$ to compute the Hilbert series of $\mathcal{L}_{\Sigma}^{\mathcal{U}_q(\slt)}$ which enumerates the vector space dimension of each graded part of the associated graded algebra.
For a general punctured surface $\Sigma_{g,r}$ of genus $g$ with $r>1$ punctures, the graded algebras associated with $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$ and the skein algebra $\SkAlg{q}{\Sigma_{g,r}}$ are graded isomorphic.
Also, for the punctured sphere $\Sigma_{0,n+1}$, the graded algebras associated with $\mathcal{L}_{\Sigma_{0, n+1}}^{\mathcal{U}_q(\slt)}$ and the Askey--Wilson algebra $\AW{n}$ are graded isomorphic.
Thus, we also obtain Hilbert series for $\SkAlg{q}{\Sigma_{g,r}}$ and $\AW{n}$:
\begin{thm}
\label{thm:hilbert}
The Hilbert series of $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$, $\AW{n}$ and $\SkAlg{q}{\Sigma_{g,r}}$ is
\[
h(t) = \frac{(1+t)^{n-2}}{(1-t)^{n}(1-t^2)^{2n-3}}\left(\sum_{k=0}^{n-2}{\binom{n-2}{k}}^2t^{2k} - \sum_{k=0}^{n-3}\binom{n-2}{k}\binom{n-2}{k+1}t^{2k+1}\right)
\]
where $n = 2g + r - 1$.
\end{thm}
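For example, for the four-punctured sphere ($g=0$ and $r=4$, so $n=3$) the formula specialises to
\[
h(t) = \frac{(1+t)\left(1 - t + t^2\right)}{(1-t)^{3}\left(1-t^{2}\right)^{3}} = \frac{1+t^{3}}{(1-t)^{3}\left(1-t^{2}\right)^{3}}.
\]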
Despite skein algebras dating back to the 1980s, presentations of Kauffman bracket skein algebras are only known for a handful of punctured surfaces: spheres with up to four punctures and tori with up to two punctures \cite{BullockPrzytycki00}. Whilst it is not difficult to find relations between elements of a skein algebra, it is difficult to conclude that you have enough relations. In \cref{sec:presentation} we will use this Hilbert series together with a Poincar\'e--Birkhoff--Witt basis to solve this problem and obtain a presentation for the skein algebra of the five-punctured sphere. This also gives us a presentation for the rank 2 Askey--Wilson algebra $\AW{4}$, which is the first of the generalised Askey--Wilson algebras and also the case originally considered by Post and Walter.
\begin{thm}
\label{thm:presshort}
A presentation for $\SkAlg{q}{\Sigma_{0,5}} \cong \AW{4}$ is given by the simple loops $s_A$ for $A \subseteq \{1,2,3,4\}$ subject to the generalised Askey--Wilson commutator relations, the commuting of non-intersecting loops, and relations of the following types (see \cref{thm:presentation_fivepunctures} and \cref{app:1} for full details):
\begin{gather*}
\diagramhh{present}{4cubic}{0pt}{0pt}{0.25} \;
\diagramhh{present}{cubictriple}{0pt}{0pt}{0.25} \;
\diagramhh{present}{quadratic}{0pt}{0pt}{0.25} \;
\diagramhh{present}{2tripleloop}{0pt}{0pt}{0.25} \\
\diagramhh{present}{cross}{0pt}{0pt}{0.25} \;
\diagramhh{present}{2triplelink}{0pt}{0pt}{0.25} \;
\diagramhh{present}{doubletriplecross}{0pt}{0pt}{0.25}
\end{gather*}
\end{thm}
In particular, this means that $\AW{n}$ contains many relations which are not derived from the original Askey--Wilson operator relations.
\subsection*{Skein Algebras and Generalisations}
Finally, the leftmost map is a Hopf algebra isomorphism from the stated skein algebra $\SkAlgSt{\Sigma_{0, n+1}}$ to the Alekseev moduli algebra. Stated skein algebras are an extension of Kauffman bracket skein algebras which allows tangles with endpoints rather than only closed loops \cite{Le18,CL20}. This extension means that, unlike ordinary skein algebras, stated skein algebras behave well under gluing: they satisfy an excision property. This is crucial in constructing the isomorphism to $\mathcal{L}_{\Sigma_{0,n+1}} = \mathcal{O}_q(\mathfrak{sl}_2)^{\tilde{\otimes} n}$ given in \cite{CL20}.
The stated skein algebra is a special case for $\mathcal{U}_q(\slt)$ of a more general construction based on skein categories $\SkCat{\mathcal{V}}{\Sigma}$
\cite{WalkerTQFT,JohnsonFreyd15,Cooke19} and internal skein algebras $\SkAlg{\mathcal{V}}{\Sigma}^{\operatorname{int}}$
\cite{GunninghamJordanSafronov19} which generalises to other quantum groups $\mathcal{U}_q(\mathfrak{g})$ or indeed any ribbon category $\mathcal{V}$
(for the explicit relation between stated and internal skein algebras see \cite{haioun21}).
Skein categories are categories whose hom-spaces are vector spaces of ribbon tangles (often with coupons) in the thickened surface with skein relations imposed: the skein relations are determined by the choice of quantum group or ribbon category.
The skein algebra $\SkAlg{\mathcal{V}}{\Sigma}$ is then simply $\operatorname{Hom}_{\SkCat{\mathcal{V}}{\Sigma}}(\varnothing, \varnothing)$ and the internal skein algebra is $\SkAlg{\mathcal{V}}{\Sigma}^{\operatorname{int}} = \operatorname{Hom}_{\SkCat{\mathcal{V}}{\Sigma}}(\_, \varnothing): \mathcal{V} \to \operatorname{Vect}$.
These skein categories satisfy excision and can thus be considered as factorisation homology theories \cite{CookeThesis,Cooke19} (and independently when $\mathcal{V}$ is modular by \cite{KT21}).
For any punctured surface $\Sigma$ and any quantum group $\mathcal{U}_q(\mathfrak{g})$, when $q$ is not a root of unity the internal skein algebra is isomorphic to the Alekseev moduli algebra and the skein algebra is the $\mathcal{U}_q(\mathfrak{g})$-invariant subalgebra \cite{GunninghamJordanSafronov19} (and \cite{Faitg2020} for the $G = \SL_2$ case without using factorisation homology).
This gives us a generalisation of the left-hand map to an isomorphism
\[\SkAlg{\mathcal{U}_q(\mathfrak{g})}{\Sigma} \to \mathcal{L}_{G, \Sigma}^{\mathcal{U}_q(\mathfrak{g})}\]
for any punctured surface $\Sigma$ and for any quantum group $\mathcal{U}_q(\mathfrak{g})$ assuming $q$ is generic. For a higher genus surface, $\mathcal{L}_{G, \Sigma_{g,r}} = \mathcal{O}_q(\mathfrak{g})^{\hat{\otimes} 2g} \tilde{\otimes} \mathcal{O}_q(\mathfrak{g})^{\tilde{\otimes} r-1}$ where $\hat{\otimes}$ is a different tensor product from the Majid braided tensor product. Whilst it is beyond the scope of this paper, the consideration of other gauge groups, in particular $\mathfrak{g} = \mathfrak{sl}_n$, using this connection to skein theory may prove fruitful and would be interesting to compare to other generalisations such as the one based on the affine $\widehat{\mathfrak{sl}_n}$ $q$-Onsager algebra in \cite{BCP19}.
\subsection*{Summary of Sections}
\begin{description}
\item[\cref{sec:AWAlgebras}] In this section we define $\AW{n}$ and its set of generators $\Lambda_A$.
\item[\cref{sec:braided_tensor_product}] In this section we define the braided tensor product, show that the unbraiding map is an injective morphism of algebras and prove \cref{thm:braidedgenerators}.
\item[\cref{sec:moduli}] In this section we define the reflection equation algebra $\mathcal{O}_q(\SL_2)$, Alekseev moduli algebra $\mathcal{L}_{\Sigma}$ and the map between the reflection equation algebra and the quantum group $\mathcal{U}_q(\slt)$.
\item[\cref{sec:skein}] In this section we define the stated skein algebra and prove \cref{thm:AWskeiniso}.
\item[\cref{sec:commutator}] In this section we use skein algebras to give a much shorter proof of Theorem 3.2 of \cite{DeClercq19}.
\item[\cref{sec:braid_grp}] In this section we show that the isomorphism given in \cref{thm:AWskeiniso} is compatible with the action of the braid group.
\item[\cref{sec:dimensions}] In this section we find the graded vector space dimension of the graded algebra associated to $\mathcal{L}_{\Sigma}^{\mathcal{U}_q(\slt)}$ (\cref{thm:hilbert}) and consequently of the associated graded algebras of the Askey--Wilson algebra $\AW{n}$ and the skein algebra $\SkAlg{q}{\Sigma_{g,r}}$ for $r>1$.
\item[\cref{sec:presentation}] In this section we prove \cref{thm:presshort} by constructing a confluent terminating term rewriting system based on the relations to obtain a linear basis with the same Hilbert series as $\SkAlg{q}{\Sigma_{0,5}}$.
\end{description}
\subsection*{Notation}
For an algebra $A$ over a field, two integers $k<l$ and $\underline{i} = (i_1,\ldots,i_k)$ with $1\leq i_1< \cdots < i_k \leq l$, we will use an embedding $A^{\otimes k} \rightarrow A^{\otimes l}$. It is defined by $x_1\otimes \cdots \otimes x_k \mapsto 1 \otimes \cdots \otimes 1 \otimes x_{1}\otimes 1 \otimes \cdots \otimes 1 \otimes x_{k} \otimes 1 \otimes \cdots \otimes 1$, where the tensorand $x_{j}$ is at the $i_j$-th position. The image of $x\in A^{\otimes k}$ will then be denoted by $x_{\underline{i}}$.
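For example, with $l=4$ and $\underline{i}=(2,4)$, the element $x\otimes y\in A^{\otimes 2}$ is sent to
\[
(x\otimes y)_{(2,4)} = 1\otimes x\otimes 1\otimes y \in A^{\otimes 4}.
\]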
We will also use the Sweedler notation for coproducts and coaction: if $(C,\;\Delta)$ is a coalgebra and $(M,\;\Delta_M)$ is a right-comodule over $C$, the coproduct of $c\in C$ will be denoted by $\Delta(c) = \sum c_{(1)}\otimes c_{(2)}$ and the coaction of $C$ on $m\in M$ by $\Delta_M(m) = \sum m_{(1)}\otimes m_{(0)}$.
\section{Askey--Wilson Algebras}
\label{sec:AWAlgebras}
In this section we shall define the Askey--Wilson algebra $\AW{3}$, explain how it can be embedded into three tensor copies of the quantum group $\mathcal{U}_q(\slt)$ and thus generalised to the higher rank Askey--Wilson algebra $\AW{n}$.
\subsection{Classical Askey--Wilson algebras}
The Askey--Wilson algebra was originally defined by Zhedanov \cite{Zhedanov91} to study Askey--Wilson orthogonal polynomials as the representations of Askey--Wilson algebras can be used to better understand the associated polynomials.
\begin{defn}
The \emph{Zhedanov Askey--Wilson algebra $\operatorname{Zh}_q(a_1,a_2,a_3,a_{123})$} is the algebra over $\mathbb{C}(q)$ with generators $A$, $B$ and $C$ such that
\begin{align*}
A + \left(q^2 - q^{-2}\right)^{-1} [B, C]_q &= \left(q + q^{-1}\right)^{-1} \left( C_1 C_2 + C_3 C_{123} \right) \\
B + \left(q^2 - q^{-2}\right)^{-1} [C, A]_q &= \left(q + q^{-1}\right)^{-1} \left( C_2 C_3 + C_1 C_{123} \right) \\
C + \left(q^2 - q^{-2}\right)^{-1} [A, B]_q &= \left(q + q^{-1}\right)^{-1} \left( C_3 C_1 + C_2 C_{123} \right)
\end{align*}
where $[X, Y]_q := qXY - q^{-1}YX$ is the \emph{quantum Lie bracket} and
$C_i := q^{a_i} + q^{-a_i}$.
\end{defn}
If we weaken the relations, so that instead of having equalities we simply require that the expressions on the left-hand side are central in the algebra, we obtain the \emph{universal} Askey--Wilson algebra, which was defined by Terwilliger \cite{Terwilliger11}:
\begin{defn}
The \emph{universal Askey--Wilson algebra} $\Delta_q$ is the algebra over $\mathbb{C}(q)$ with generators $A$, $B$, $C$ such that
\[A + \left(q^2 - q^{-2}\right)^{-1} [B, C]_q \quad B + \left(q^2 - q^{-2}\right)^{-1} [C, A]_q \quad C + \left(q^2 - q^{-2}\right)^{-1} [A, B]_q\]
are central.
\end{defn}
If we set
\begin{align*}
\alpha &= \left(q+q^{-1}\right)\left(A + \left(q^2 - q^{-2}\right)^{-1}[B, C]_q\right), \\
\beta &= \left(q+q^{-1}\right)\left( B + \left(q^2 - q^{-2}\right)^{-1} [C, A]_q \right) , \\
\gamma &= \left(q+q^{-1}\right)\left(C + \left(q^2 - q^{-2}\right)^{-1} [A, B]_q \right)
\end{align*}
and also define the `Casimir' element
\[\Omega = qABC + q^2 A^2 + q^{-2} B^2 + q^2 C^2 - q A \alpha - q^{-1} B \beta - q C \gamma\]
we also have the following alternative presentation:
\begin{prop}[{\cite[Proposition~2.8]{Terwilliger13}}]
The \emph{universal Askey--Wilson algebra} $\Delta_q$ is the algebra over $\mathbb{C}(q)$ with generators $A$, $B$, $C$, $\alpha$, $\beta$, $\gamma$ and $\Omega$ such that $\alpha$, $\beta$, $\gamma$ and $\Omega$ are central and
\begin{align*}
[A, B]_q &= -\left( q^2 - q^{-2} \right) C + \left( q - q^{-1} \right) \gamma \\
[B, C]_q &= -\left( q^2 - q^{-2} \right) A + \left( q - q^{-1} \right) \alpha \\
[C, A]_q &= -\left( q^2 - q^{-2} \right) B + \left( q - q^{-1} \right) \beta \\
\Omega &= q ABC + q^2 A^2 + q^{-2} B^2 + q^2 C^2 - q A \alpha - q^{-1} B \beta - q C \gamma
\end{align*}
\end{prop}
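Indeed, rearranging the relation $[A, B]_q = -\left( q^2 - q^{-2} \right) C + \left( q - q^{-1} \right) \gamma$ gives
\[
C + \left(q^2 - q^{-2}\right)^{-1} [A, B]_q = \frac{q - q^{-1}}{q^2 - q^{-2}}\,\gamma = \left(q + q^{-1}\right)^{-1}\gamma,
\]
which recovers the definition of $\gamma$ above, and similarly for $\alpha$ and $\beta$.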
\subsection{Huang's embedding into $\mathcal{U}_q(\slt)^{ \otimes 3 }$}
Huang \cite{Huang17} showed that the universal Askey--Wilson algebra can be embedded into $\mathcal{U}_q(\slt)^{ \otimes 3 }$. We first recall the classical definition of $\mathcal{U}_q(\slt)$ and choose one of its many Hopf algebra structures. We stress that we follow the conventions of \cite{CL20}, which are different from those of \cite{Huang17}.
\begin{defn}
The quantum algebra $\mathcal{U}_q(\slt)$ is the $\mathbb{C}(q)$-algebra generated by $K$, $K^{-1}$, $E$ and $F$, subject to the following relations:
\[
K^{\pm 1}K^{\mp 1} = 1, \quad KE=q^2EK, \quad KF=q^{-2}FK \quad \text{and} \quad [E,F]=\frac{K-K^{-1}}{q-q^{-1}}.
\]
We endow it with a Hopf algebra structure with the following comultiplication $\Delta$, counit $\varepsilon$ and antipode $S$:
\begin{align*}
\Delta(K)&= K\otimes K, & \varepsilon(K) &= 1,& S(K) &= K^{-1},\\
\Delta(E)&= E\otimes K + 1\otimes E, & \varepsilon(E) &= 0,& S(E) &= -EK^{-1},\\
\Delta(F)&= F\otimes 1 + K^{-1} \otimes F, & \varepsilon(F) &= 0,& S(F) &= -KF.
\end{align*}
The \emph{quantum Casimir} of $\mathcal{U}_q(\slt)$ is
\[
\Lambda := \left(q - q^{-1}\right)^2 FE + q K + q^{-1} K^{-1} = \left(q - q^{-1}\right)^2 EF + q^{-1} K + q K^{-1};
\]
this is a central element.
\end{defn}
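For instance, centrality can be verified directly on the generator $E$: using $[E, FE] = [E,F]E$ together with the defining relations,
\[
E\Lambda - \Lambda E = \left(q - q^{-1}\right)^2[E,F]E + q\,[E,K] + q^{-1}\,[E,K^{-1}]
= \left(q - q^{-1}\right)\left(K - K^{-1}\right)E - \left(q - q^{-1}\right)KE + \left(q - q^{-1}\right)K^{-1}E = 0.
\]
A similar computation using the second expression for $\Lambda$ shows that $F$ also commutes with $\Lambda$, and commutation with $K$ is immediate since $FE$ has degree $0$.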
The embedding is given by:
\begin{thm}[{\cite[Theorem 4.1 and Theorem 4.8]{Huang17}}]
Let $\flat: \Delta_{q^{-1}} \rightarrow \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt)$ be the following map
\begin{align*}
&A \mapsto \Delta(\Lambda) \otimes 1, \\
&B \mapsto 1 \otimes \Delta(\Lambda), \\
&C \mapsto \frac{q^{-1}(\Delta(\Lambda)\otimes 1)(1\otimes \Delta(\Lambda))-q(1\otimes \Delta(\Lambda))(\Delta(\Lambda)\otimes 1)}{q^2-q^{-2}}+\frac{\Lambda \otimes 1 \otimes \Lambda + (1 \otimes \Lambda \otimes 1) \Delta^{(2)} (\Lambda)}{q+q^{-1}},\\
&\alpha \mapsto \Lambda \otimes \Lambda \otimes 1 + (1 \otimes 1 \otimes \Lambda) \Delta^{(2)} (\Lambda), \\
&\beta \mapsto 1 \otimes \Lambda \otimes \Lambda + (\Lambda \otimes 1 \otimes 1) \Delta^{(2)} (\Lambda), \\
&\gamma \mapsto \Lambda \otimes 1 \otimes \Lambda + (1 \otimes \Lambda \otimes 1) \Delta^{(2)} (\Lambda),
\end{align*}
where $\Delta^{(2)} = (\Delta\otimes \id)\circ \Delta$. Then $\flat$ is an injective morphism of algebras.
\end{thm}
\begin{rmk}
In Huang's original result, the morphism $\Delta_{q} \rightarrow \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt)$ uses the opposite coproduct to $\Delta$. It is related to $\flat$ by the algebra isomorphism $\Delta_q \rightarrow\Delta_{q^{-1}}$ which sends $(A,B,C,\alpha,\beta,\gamma)$ to $(B,A,C,\beta,\alpha,\gamma)$ and the algebra automorphism of $\mathcal{U}_q(\slt)\otimes\mathcal{U}_q(\slt)\otimes\mathcal{U}_q(\slt)$ sending $x\otimes y \otimes z$ to $z \otimes y \otimes x$.
\end{rmk}
The generator $C$ is not required to generate the algebra but makes the presentation more symmetric; \textcite{CrampeGaboriaudVinetZaimi20} showed that its image has the simpler expression $C^{\flat} = (1 \otimes \tau_L) \Delta(\Lambda)$ where $\tau_L$ is defined below.
\begin{defn}
\label{def:coaction_sl2}
We denote by $\mathcal{I}_L$ the subalgebra of $\mathcal{U}_q(\slt)$ generated by $E$, $FK$, $K$ and $\Lambda$. It is a left $\mathcal{U}_q(\slt)$-comodule with coaction
\[
\tau_L \colon \left\{
\begin{array}{lcl}
\mathcal{I}_L & \longrightarrow & \mathcal{U}_q(\slt) \otimes \mathcal{I}_L \\
E & \longmapsto & K \otimes E\\
FK & \longmapsto & K^{-1} \otimes FK -q F \otimes \Lambda + q\left(q+q^{-1}\right)F\otimes K - q^{-1}(q-q^{-1})^2F^2K\otimes E\\
K & \longmapsto & 1 \otimes K - q^{-1} ( q - q^{-1} )^2 FK \otimes E\\
\Lambda & \longmapsto & 1 \otimes \Lambda
\end{array}
\right.
\]
\end{defn}
Clearly the image of the map $\flat$ is contained in the subalgebra of $\mathcal{U}_q(\slt)^{ \otimes 3 }$ generated by the elements
\[
\Lambda_1 := \Lambda \otimes 1 \otimes 1, \quad \Lambda_2 := 1 \otimes \Lambda \otimes 1, \quad \Lambda_3 := 1 \otimes 1 \otimes \Lambda, \quad
\Lambda_{123} := \Delta^{(2)}(\Lambda),
\]
\[
\Lambda_{12} := \Delta(\Lambda) \otimes 1, \quad \Lambda_{13} := (1 \otimes \tau_L)\Delta(\Lambda), \quad \Lambda_{23} := 1 \otimes \Delta(\Lambda).
\]
It was shown by Huang \cite[Corollary 4.6]{Huang17} that this subalgebra is contained in the centraliser of $\mathcal{U}_q(\slt)$ in $\mathcal{U}_q(\slt)^{ \otimes 3 }$
\[
\mathfrak{C}(\mathcal{U}_q(\slt)) = \Big\{\, X \in \mathcal{U}_q(\slt)^{ \otimes 3 } \Big| \big[(\Delta \otimes \id)\Delta(x),\; X\big] = 0, \; \forall x \in \mathcal{U}_q(\slt) \,\Big\}
\]
Note that whilst there is an injective $\mathbb{C}$-algebra homomorphism $\Delta_{q^{-1}} \to \mathfrak{C}(\mathcal{U}_q(\slt))$ it is not surjective as we do not have $\Lambda_i$ and $\Lambda_{123}$ in the image. We instead need the following centrally extended universal Askey--Wilson algebra:
\begin{defn}
The \emph{Askey--Wilson algebra $\operatorname{AW}(3)$}\footnote{Our Askey--Wilson algebra $\AW{3}$ is the Special Askey--Wilson algebra ${\bf{saw}}(3)$ of \cite{CrampeGaboriaudVinetZaimi20}.} is the algebra over $\mathbb{C}(q)$ with generators $A$, $B$, $C$ and central generators $C_1$, $C_2$, $C_3$, $C_{123}$ such that
\begin{gather*}
A + \left(q^2 - q^{-2}\right)^{-1} [B, C]_q = \left(q + q^{-1}\right)^{-1} \left( C_1 C_2 + C_3 C_{123} \right) \\
B + \left(q^2 - q^{-2}\right)^{-1} [C, A]_q = \left(q + q^{-1}\right)^{-1} \left( C_2 C_3 + C_1 C_{123} \right) \\
C + \left(q^2 - q^{-2}\right)^{-1} [A, B]_q = \left(q + q^{-1}\right)^{-1} \left( C_3 C_1 + C_2 C_{123} \right) \\
\Omega = \left(q + q^{-1} \right)^2 - C_1^2 - C_2^2 - C_3^2 - C_{123}^2 - C_1 C_2 C_3 C_{123}
\end{gather*}
where
\[\Omega := qABC + q^2 A^2 + q^{-2} B^2 + q^2 C^2 - q A \left( C_1 C_2 + C_3 C_{123} \right) - q^{-1} B \left( C_2 C_3 + C_1 C_{123} \right) - q C \left( C_3 C_1 + C_2 C_{123} \right).\]
\end{defn}
\subsection{Higher rank Askey--Wilson algebras}
The embedding of the universal Askey--Wilson algebra into the centraliser of three copies of the quantum group $\mathcal{U}_q(\slt)$ makes it possible to define a higher rank Askey--Wilson algebra by constructing Casimirs $\Lambda_A$ for $A \subseteq \{1, \dots, n\}$ in the centraliser of $\mathcal{U}_q(\slt)$ in $\mathcal{U}_q(\slt)^{ \otimes n }$ for $n \geq 1$.
Post and Walter \cite{PostWalter17} generalised to $n = 4$, and De Clercq, De Bie and Van de Vijver \cite{DeClercq19,DeBieDeClercqVanDeVijver20} gave a definition for general $n$. The following definition is a slight rewriting of \cite[Definition~2.3]{DeClercq19} with our conventions for the coproduct.
\begin{defn}
\label{def:AW_generators}
Let $A=\{i_1 < \cdots <i_k\}$ be a non-empty subset of $\left\{1, \dots, n\right\}$ and let $\Lambda$ be the quantum Casimir of $\mathcal{U}_q(\slt)$. The \emph{Casimir $\Lambda_A$} is defined by
\[
\Lambda_A = \left(\left(\id^{\otimes (i_k-2)}\otimes \alpha_{i_k-1}\right)\circ\cdots\circ (\id\otimes \alpha_2) \circ \alpha_1 (\Lambda)\right)\otimes 1^{n-i_k},
\]
with $\alpha_i=\Delta$ if $i\in A$ and $\alpha_i=\tau_L$ otherwise. We also set $\Lambda_{\emptyset}=q+q^{-1}$ by convention.
\end{defn}
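For example, for $n=3$ and $A=\{1,3\}$ we have $i_k = 3$, $\alpha_1=\Delta$ and $\alpha_2=\tau_L$, so that
\[
\Lambda_{13} = (\id\otimes\tau_L)\circ\Delta(\Lambda),
\]
while for $A=\{2,3\}$ we have $\alpha_1=\tau_L$ and $\alpha_2=\Delta$, so that $\Lambda_{23} = (\id\otimes\Delta)\circ\tau_L(\Lambda) = 1\otimes\Delta(\Lambda)$ since $\tau_L(\Lambda) = 1\otimes\Lambda$. These recover the expressions given in the previous subsection.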
\begin{defn}
\label{def:AW}
The \emph{Askey--Wilson algebra of rank $(n-2)$} denoted $\AW{n}$ is the subalgebra of $\mathcal{U}_q(\slt)^{\otimes n}$ generated by $\Lambda_A$ for all non-empty subsets $A \subseteq \{1, \dots, n\}$.
\end{defn}
\begin{rmk}
This algebra is defined over $\mathbb{C}(q)$, but we will need to consider field extensions of it, notably to $\mathbb{C}(q^{1/4})$ where $q^{1/4}$ is a fixed fourth root of $q$.
\end{rmk}
De Clercq \cite[Theorems 3.1 and 3.2]{DeClercq19} showed that the higher rank Askey--Wilson algebras satisfy a generalised version of the commutator relations which define $\AW{3}$.
We shall give an alternative and much shorter proof of this result in \cref{sec:commutator}.
\subsection{Changing the coproduct or how to obtain isomorphisms}
\label{sec:change_conventions}
We now explain how the change of conventions for the coproduct from those used in \cite{Huang17} to those used in this paper affects the definition of the Askey--Wilson algebra $\AW{n}$. Let us denote by $\widetilde{\mathrm{AW}}(n)$ the Askey--Wilson algebra of \cite{DeClercq19,DeBieDeClercqVanDeVijver20} obtained from the coproduct $\Delta^{\op}$. Explicitly, $\widetilde{\mathrm{AW}}(n)$ is the subalgebra of $\mathcal{U}_q(\slt)^{\otimes n}$ generated by $\tilde{\Lambda}_A$ for $A$ a non-empty subset of $\{1,\ldots,n\}$ given by
\[
\tilde{\Lambda}_A = 1^{\otimes i_1-1}\otimes\left(\left(\alpha_{i_1+1}\otimes\id^{\otimes (n-i_1-1)}\right)\circ\cdots\circ (\alpha_{n-1}\otimes \id) \circ \alpha_n (\Lambda)\right)
\]
for $A=\{i_1< \ldots <i_k\}$, with $\alpha_i = \Delta^{\op}$ if $i\in A$ and $\alpha_i = \tau_L^{\op}$ otherwise.
The following isomorphism is the higher rank version of the algebra isomorphism $\Delta_q \rightarrow \Delta_{q^{-1}}$ of \cite[Lemma 2.11]{Terwilliger13} which sends $(A,B,C,\alpha,\beta,\gamma)$ to $(B,A,C,\beta,\alpha,\gamma)$.
\begin{prop}
The algebra isomorphism $\mathcal{U}_q(\slt)^{\otimes n} \rightarrow \mathcal{U}_q(\slt)^{\otimes n}$ given by $x_1\otimes \cdots \otimes x_n \mapsto x_n \otimes \cdots \otimes x_1$ restricts to an algebra isomorphism $\AW{n} \simeq \widetilde{\mathrm{AW}}(n)$. This isomorphism moreover sends $\Lambda_A$ to $\tilde{\Lambda}_{\tilde{A}}$ where $\tilde{A}=\left\{\,n+1-i \ \middle\vert \ i\in A\,\right\}$.
\end{prop}
\begin{proof}
This is immediate from the definitions of $\Lambda_A$ and $\tilde{\Lambda}_A$.
\end{proof}
The following anti-isomorphism is the higher rank version of the algebra anti-isomorphism $\Delta_q \rightarrow \Delta_{q^{-1}}$ which sends $(A,B,C,\alpha,\beta,\gamma)$ to $(A,B,C,\alpha,\beta,\gamma)$.
\begin{prop}
The algebra anti-isomorphism $S^{\otimes n} \colon \mathcal{U}_q(\slt)^{\otimes n} \rightarrow \mathcal{U}_q(\slt)^{\otimes n}$ restricts to an algebra anti-isomorphism $\AW{n} \simeq \widetilde{\mathrm{AW}}(n)$. This anti-isomorphism moreover sends $\Lambda_A$ to $\tilde{\Lambda}_{A}$.
\end{prop}
\begin{proof}
The anti-isomorphism follows from the facts that $(S\otimes S) \circ \Delta = \Delta^{\op}\circ S$, that $(S\otimes S) \circ \tau_L = \tau_R\circ S$ and from \cite[Proposition~2.3]{DeClercq19}. Here $\tau_R$ is given in \cite[Definition 2.1]{DeClercq19}.
\end{proof}
Composing the two previous isomorphisms, we obtain the higher rank version of the algebra anti-automorphism of $\AW{n}$ of \cite[Lemma 2.9]{Terwilliger13}.
\begin{prop}
There exists an algebra anti-automorphism of $\AW{n}$ sending $\Lambda_A$ to $\Lambda_{\tilde{A}}$.
\end{prop}
\section{Braided Tensor Product of copies of the locally finite part of $\mathcal{U}_q(\slt)$}
\label{sec:braided_tensor_product}
As explained in the introduction, the isomorphism between the skein algebra of the $(n+1)$-punctured sphere and the Askey--Wilson algebra $\AW{n}$ consists of a sequence of steps. In this section, we describe the Askey--Wilson algebra as the image of an injective morphism called the unbraiding map and we give an explicit form of the preimages of the generators $\Lambda_A$. One of the key ideas is the interpretation of the coaction $\tau$ as the conjugation by the $R$-matrix \cite{CrampeGaboriaudVinetZaimi20} and the fact that the left coideal $\mathcal{I}_L$ is the locally finite part of $\mathcal{U}_q(\slt)$ for the left adjoint action.
\subsection{Grading and the left adjoint action}
\label{sec:left_adjoint_action}
There exists a $\mathbb{Z}$-grading on $\mathcal{U}_q(\slt)$ given on the generators by $\lvert E \rvert =1$, $\lvert F \rvert = -1$ and $\lvert K \rvert = 0$. Note that $Kx = q^{2\lvert x \rvert} xK$ for any homogeneous element $x$.
As it is a Hopf algebra, $\mathcal{U}_q(\slt)$ acts on itself via the left adjoint action given by
\[
h\rhd x = \sum h_{(1)}xS\left(h_{(2)}\right),
\]
for any $h,x\in \mathcal{U}_q(\slt)$. Note that $\lvert h\rhd x \rvert = \lvert h\rvert + \lvert x \rvert$.
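Explicitly, using the coproduct and antipode above, the generators act by
\[
K\rhd x = KxK^{-1}, \qquad E\rhd x = (Ex - xE)K^{-1}, \qquad F\rhd x = Fx - K^{-1}xKF,
\]
for any $x\in \mathcal{U}_q(\slt)$.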
\begin{defn}
The locally finite elements of $\mathcal{U}_q(\slt)$ for the left adjoint action are:
\[
\mathcal{U}_q(\slt)^{\lf} = \Big\{ \,x \in \mathcal{U}_q(\slt) \mathrel{\Big\vert} \mathcal{U}_q(\slt)\rhd x \text{ is finite dimensional} \,\Big\}.
\]
\end{defn}
By definition, $\mathcal{U}_q(\slt)$ then acts on $\mathcal{U}_q(\slt)^{\lf}$ via the left adjoint action. It is known that $\mathcal{U}_q(\slt)^{\lf}$ is a subalgebra of $\mathcal{U}_q(\slt)$ and also a left coideal of $\mathcal{U}_q(\slt)$, that is $\Delta\left(\mathcal{U}_q(\slt)^{\lf}\right) \subset \mathcal{U}_q(\slt)\otimes \mathcal{U}_q(\slt)^{\lf}$, see \cite[Lemma 3.112]{voigt-yuncken} for example.
The following is a theorem of Joseph--Letzter \cite{JosephLetzter92} in the specific case of $\mathcal{U}_q(\slt)$. Note that our convention for the coproduct is different from \cite{JosephLetzter92}.
\begin{prop}
The subalgebra and left coideal $\mathcal{U}_q(\slt)^{\lf}$ coincides with $\mathcal{I}_L$.
\end{prop}
\subsection{The quasi-R-matrix}
\label{sec:notations}
In this subsection, we introduce the $R$-matrix for $\mathcal{U}_q(\slt)$ following \cite[Chapter 4]{Lusztig93}. The Hopf algebra $\mathcal{U}_q(\slt)$ fails to be a quasi-triangular Hopf algebra. This problem is usually overcome using the notion of a quasi-$R$-matrix.
First, we define $\Psi\colon \mathcal{U}_q(\slt) \otimes \mathcal{U}_q(\slt) \rightarrow \mathcal{U}_q(\slt)\otimes \mathcal{U}_q(\slt)$ as the algebra isomorphism defined on homogeneous elements by $\Psi(x\otimes y) = x K^{-\lvert y\rvert} \otimes K^{-\lvert x \rvert} y = K^{-\lvert y\rvert}x \otimes yK^{-\lvert x \rvert}$.
\begin{thm}[{\cite[Chapter 4]{Lusztig93}}]
\label{thm:quasi-R-matrix}
There exists an element $\Theta = \sum_{i\geq 0}\Theta_i$ with $\Theta_i = u_i\otimes v_i$ satisfying the following properties:
\begin{itemize}
\item \label{itm:grad_theta} $u_i \otimes v_i = q^{i(i-1)/2}\displaystyle\frac{ \left(q-q^{-1}\right)^i}{[i]!} E^i \otimes F^i$;
\item for all $x\in \mathcal{U}_q(\slt)$, we have $\Psi(\Delta^{\op}(x))\Theta = \Theta\Delta(x)$;
\item \label{itm:inv_theta} $\Theta$ is invertible with inverse $\Gamma = \sum_{i\geq 0} \Gamma_i$ with $\Gamma_i = S(u_i)K^i \otimes v_i = u_i \otimes S^{-1}(v_i)K^{-i}$;
\item $\Delta\otimes \id(\Theta) = \Psi_{23}(\Theta_{13})\Theta_{23}$ and $\id\otimes\Delta(\Theta) = \Psi_{12}(\Theta_{13})\Theta_{12}$.
\end{itemize}
\end{thm}
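Concretely, the first few terms of the quasi-$R$-matrix are
\[
\Theta = 1\otimes 1 + \left(q - q^{-1}\right)E\otimes F + q\,\frac{\left(q - q^{-1}\right)^2}{q + q^{-1}}\,E^2\otimes F^2 + \cdots,
\]
since $[0]! = [1]! = 1$ and $[2]! = q + q^{-1}$.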
In Sweedler notation, one has that for any homogeneous $x\in \mathcal{U}_q(\slt)$
\begin{equation}
\label{eq:coproduct-op-sweedler}
\sum_{i\geq 0} x_{(2)}K^{-\lvert x_{(1)} \rvert}u_i \otimes K^{-\lvert x_{(2)} \rvert}x_{(1)}v_i
=
\sum_{i\geq 0} u_i x_{(1)} \otimes v_i x_{(2)}.
\end{equation}
Thanks to the quasi-$R$-matrix, one can endow the category of type $1$ integrable weight modules with a braiding as follows. Let $M$ and $N$ be two integrable weight modules of type $1$. For $m\in M$ of weight $\mu$ and $n\in N$ of weight $\nu$, we define
\[
c_{M,N}(m \otimes n) = \sum_{i \geq 0}q^{(\mu + 2i)(\nu - 2i)/2}(v_i n)\otimes (u_i m).
\]
Note that in general the coefficients appearing in the formula above are in $\mathbb{Q}(q^{1/2})$. This then defines a map $c_{M,N}\colon M\otimes N \rightarrow N\otimes M$ which is a braiding for the category of type 1 integrable weight modules (see \cite[Section 3]{Jantzen96} for more details).
\subsection{Braided tensor product}
\label{sec:braided_tensor}
For two algebras in a braided monoidal category, Majid defined their braided tensor product \cite[Lemma 9.2.12]{Majid95book}. In the specific case of a category of modules over a quasi-triangular Hopf algebra, the multiplication in the braided tensor product is explicitly given using the $R$-matrix \cite[Corollary 9.2.13]{Majid95book}. In our situation, since we are working in the category of integrable weight modules over $\mathcal{U}_q(\slt)$, we cannot define the braided tensor product of $\mathcal{U}_q(\slt)$ with itself since it is not an integrable module for the left adjoint action. One way to remedy this problem is to work with the locally finite part $\mathcal{U}_q(\slt)^{\lf}$ so that the definition of the multiplication is still valid.
\begin{prop}
For any homogeneous elements $g,h,x$ and $y$ of $\mathcal{U}_q(\slt)^{\lf}$, we define the braided product $\cdot$ by
\begin{equation}
(g\otimes h)\cdot (x\otimes y) = \sum_{i\geq 0}q^{2(\lvert h \rvert + i)(\lvert x \rvert - i)}g(v_i\rhd x) \otimes (u_i \rhd h)y.
\end{equation}
This defines an associative multiplication on the vector space $\mathcal{U}_q(\slt)^{\lf}\otimes \,\mathcal{U}_q(\slt)^{\lf}$. The resulting algebra is the \emph{braided tensor product} and will be denoted by $\mathcal{U}_q(\slt)^{\lf}\tilde{\otimes}\,\mathcal{U}_q(\slt)^{\lf}$.
\end{prop}
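To illustrate the formula, consider the elements $E\otimes 1$ and $1\otimes E$: since $v_i\rhd 1 = \varepsilon(v_i)1$ and $u_i\rhd E = 0$ for $i\geq 1$, only the $i=0$ term of the sum survives in each product, and one finds
\[
(E\otimes 1)\cdot(1\otimes E) = E\otimes E, \qquad (1\otimes E)\cdot(E\otimes 1) = q^{2}\,E\otimes E.
\]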
We can also inductively define a braided product on $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\otimes n}$ and we denote the resulting algebra by $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$. One can check that this multiplication is compatible with the adjoint action: for any $h\in \mathcal{U}_q(\slt)$ and $x,y \in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$, one has
\[
h\rhd (x\cdot y) = \sum (h_{(1)}\rhd x)\cdot (h_{(2)}\rhd y).
\]
This multiplication endows $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$ with an algebra structure in the category of integrable $\mathcal{U}_q(\slt)$-modules. We can also endow this algebra with a bialgebra structure in the category of $\mathcal{U}_q(\slt)$-modules.
Define $\underline{\Delta}$ following \cite[Section 3]{Majid91}:
\[
\underline{\Delta}(x) = \sum_{i\geq 0} x_{(1)}S\left(K^{i+\lvert x_{(2)} \rvert}v_i\right)\otimes u_i\rhd x_{(2)}
\]
One may check that this endows $\mathcal{U}_q(\slt)^{\lf}$ with a bialgebra structure in the category of left $\mathcal{U}_q(\slt)$-modules, that is, the multiplication $\cdot$ and the comultiplication $\underline{\Delta}$ satisfy the usual axioms of a bialgebra and are compatible with the adjoint action. For example,
\[
\underline{\Delta}(h\rhd x) = h\rhd \underline{\Delta}(x).
\]
Furthermore, we inductively define the iterated comultiplication $\Delta^{(n)}$ by
\[
\Delta^{(0)} = \id,\quad\text{and}\quad \Delta^{(n+1)} = \left(\id\otimes\Delta^{(n)}\right)\circ \Delta,
\]
and define the iterated comultiplication $\underline{\Delta}^{(n)}$ similarly. Note that $\Delta^{(1)}=\Delta$ and $\underline{\Delta}^{(1)}=\underline{\Delta}$. One may inductively show that
\begin{align}
\underline{\Delta}^{(n-1)}(x)
&= \sum_{i\geq 0} x_{(1)}S\left(K^{i+\lvert x_{(2)} \rvert}v_i\right)\otimes \underline{\Delta}^{(n-2)}\left(u_i\rhd x_{(2)}\right)\nonumber \\
&= \sum_{i\geq 0} x_{(1)}S\left(K^{i+\lvert x_{(2)} \rvert}v_i\right)\otimes u_i\rhd \underline{\Delta}^{(n-2)}\left(x_{(2)}\right) \label{eq:iterated_underlined_comultiplication_qgr}.
\end{align}
We have the following explicit formulas for $\underline{\Delta}(x)$ when $x$ is a generator of $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$, each of which follows from direct computation.
\begin{lem}\label{lem:formula_bar_coproduct}
We have
\begin{align*}
\underline{\Delta}(E) & = E\otimes K + q^{-1}\left(\Lambda-q^{-1}K\right)\otimes E,\\
\underline{\Delta}(KF) & = K\otimes KF + q^{-1}KF\otimes \left(\Lambda-q^{-1}K\right),\\
\underline{\Delta}(K) & = K\otimes K + q^{-1}\left(q-q^{-1}\right)^2KF\otimes E,\\
\underline{\Delta}(\Lambda) & = q^{-1}\Lambda\otimes \Lambda - q^{-2}\Lambda\otimes K - q^{-2} K \otimes \Lambda + q^{-2}\left(q+q^{-1}\right)K\otimes K \\
&\quad+ \left(q-q^{-1}\right)^2\left(E\otimes KF + q^{-2}KF\otimes E\right).
\end{align*}
\end{lem}
\subsection{Unbraiding the braided tensor product}
\label{sec:unbraiding}
Following \cite{FioreSteinackerWess03}, we unbraid the tensor product into the usual tensor product. Since we work with the quantum group $\mathcal{U}_q(\slt)$, we need to use the quasi-$R$-matrix $\Theta$ and the locally finite part.
\begin{prop}
\label{prop:one-step_unbraiding_qgr}
For any $n\geq 2$, the map $\varphi_n\colon \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n} \rightarrow \mathcal{U}_q(\slt)\otimes \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} (n-1)}$ defined by
\[
\varphi_n(a\otimes b) = \sum_{i\geq 0} aK^{i+\lvert b \rvert}v_i\otimes u_i\rhd b,
\]
for any homogeneous elements $a\in \mathcal{U}_q(\slt)^{\lf}$ and $b\in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} (n-1)}$, is an injective morphism of algebras.
\end{prop}
\begin{proof}
First, it is easy to check that $a\otimes b\mapsto \left( \sum_{i\geq 0} \left(aS(v_i)K^{-(i+\lvert b \rvert)} \right) \otimes u_i\rhd b \right)$ is a left inverse of $\varphi_n$.
We now check that $\varphi_n$ is a morphism of algebras. Let $a,c\in \mathcal{U}_q(\slt)^{\lf}$ and $b,d\in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} (n-1)}$, and compute:
\begin{align*}
\varphi_n\big((a\otimes b)\cdot (c\otimes d)\big)
&=\sum_{i\geq 0}q^{2(\lvert b \rvert + i)(\lvert c\rvert - i)} \varphi_n\big(a(v_i\rhd c)\otimes ((u_i\rhd b)\cdot d)\big)\\
&=\sum_{i,j\geq 0}q^{2(\lvert b \rvert + i)(\lvert c\rvert - i)} a(v_i \rhd c)K^{i+j+\lvert b \rvert + \lvert d \rvert}v_j \otimes u_j \rhd ((u_i\rhd b)\cdot d).
\end{align*}
Now, we use the definition of the left adjoint action, its compatibility with the braided product, together with $\Delta\otimes\id(\Theta) = \Psi_{23}(\Theta_{13})\Theta_{23}$ and $\id\otimes \Delta(\Theta) = \Psi_{12}(\Theta_{13})\Theta_{12}$ to find that
\begin{multline*}
\varphi_n\big((a\otimes b)\cdot (c\otimes d)\big) =\\
\sum_{i,j,k,l \geq 0} q^{2(\lvert b \rvert + i + k)(\lvert c\rvert - i - k)} aK^{-i}v_k c S(v_i)K^{i+j+k+l+\lvert b\rvert +\lvert d \rvert} v_j v_l \otimes (u_j u_i u_k \rhd b)\cdot \left(K^j u_l \rhd d\right).
\end{multline*}
Since for any homogeneous $x\in \mathcal{U}_q(\slt)$ we have $Kx = q^{2\lvert x\rvert} x K$, we find that
\[
\varphi_n\big((a\otimes b)\cdot (c\otimes d)\big) =
\sum_{i,j,k,l \geq 0} aK^{k+\lvert b \rvert}v_k c S(v_i)K^{j} v_j K^{l+\lvert d \rvert}v_l \otimes (u_j u_i u_k \rhd b)\cdot (u_l \rhd d).
\]
Finally, since $\sum_{i,j\geq 0}u_j u_i \otimes S(v_i)K^{j} v_j = 1\otimes 1$, we have
\[
\varphi_n\big((a\otimes b)\cdot (c\otimes d)\big) =
\sum_{k,l\geq 0} \left(aK^{k+\lvert b \rvert}v_k\right)\left(c K^{l+\lvert d \rvert}v_l\right) \otimes (u_k \rhd b)\cdot (u_l \rhd d)\\
= \varphi_n(a\otimes b)\cdot \varphi_n(c\otimes d),
\]
and $\varphi_n$ is a morphism of algebras.
\end{proof}
By composing the one-step unbraiding maps $\varphi_n$ from \cref{prop:one-step_unbraiding_qgr}, one can inductively define an unbraiding map $\gamma_n\colon \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}\rightarrow \mathcal{U}_q(\slt)^{\otimes n}$. We define $\gamma_1$ to be the injection of the locally finite part $\mathcal{U}_q(\slt)^{\lf}$ inside the whole quantum group $\mathcal{U}_q(\slt)$ and for $n\geq 1$ we define $\gamma_{n+1}=(\id\otimes \gamma_n)\circ \varphi_{n+1}$.
\begin{cor}
The map $\gamma_n\colon \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}\rightarrow \mathcal{U}_q(\slt)^{\otimes n}$ is an injective morphism of algebras.
\end{cor}
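For $n = 2$, for example, the map reads $\gamma_2(a\otimes b) = \sum_{i\geq 0} aK^{i+\lvert b\rvert}v_i\otimes u_i\rhd b$; in particular $\gamma_2(1\otimes x) = \sum_{i\geq 0} K^{i+\lvert x\rvert}v_i\otimes u_i\rhd x$, which will be identified with the coaction $\tau_L(x)$ in \cref{sec:inserting_1}.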
We end this subsection by explaining how the adjoint action $\rhd$ of $\mathcal{U}_q(\slt)$ on $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes}n}$ behaves under the unbraiding map $\gamma_n$.
\begin{prop}
\label{prop:unbraiding_action}
For any $x \in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$ and $h \in \mathcal{U}_q(\slt)$ we have
\[
\gamma_n(h\rhd x) = \sum \Delta^{(n-1)}\left(h_{(1)}\right)\gamma_n(x)\Delta^{(n-1)}\left(S\left(h_{(2)}\right)\right).
\]
\end{prop}
\begin{proof}
Once again, we proceed by induction on $n$ with case $n=1$ being true by the definition of the left adjoint action. We start by computing $\varphi_{n+1}(h\rhd(a\otimes b))$ for any $h\in \mathcal{U}_q(\slt)$, $a\in \mathcal{U}_q(\slt)^{\lf}$ and $b\in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes}n}$:
\begin{align*}
\varphi_{n+1}\big(h\rhd(a\otimes b)\big)
&= \sum \varphi_{n+1}\left(h_{(1)}\rhd a\otimes h_{(2)}\rhd b\right)\\
&= \sum_{i\geq 0} \left(h_{(1)}\rhd a\right)K^{i+\lvert h_{(2)} \rvert + \lvert b \rvert} v_i \otimes \left(u_i h_{(2)}\right)\rhd b\\
&= \sum_{i\geq 0} h_{(1)}aS\left(h_{(2)}\right)K^{i+\lvert h_{(3)} \rvert + \lvert b \rvert} v_i h_{(4)}S\left(h_{(5)}\right) \otimes \left(u_i h_{(3)}\right)\rhd b\\
&= \sum_{i\geq 0} h_{(1)}aS\left(h_{(2)}\right)K^{i} v_i K^{\lvert h_{(3)}\rvert} h_{(4)}K^{\lvert b \rvert}S\left(h_{(5)}\right) \otimes \left( u_i K^{-i} h_{(3)} K^{\lvert h_{(4)}\rvert} \right)\rhd b,
\end{align*}
the second to last equality following from the counit and antipode axioms. Now, we use the fact that the quasi-$R$-matrix and $\Psi$ intertwine the coproduct $\Delta$ and its opposite $\Delta^{\op}$ (see \cref{eq:coproduct-op-sweedler}) to obtain
\begin{align*}
\varphi_{n+1}\big(h\rhd(a\otimes b)\big)
&= \sum_{i\geq 0} h_{(1)}aS\left(h_{(2)}\right)h_{(3)}K^i v_i K^{\lvert b \rvert}S\left(h_{(5)}\right) \otimes \left(h_{(4)}u_i K^{-i}\right)\rhd b\\
&= \sum_{i\geq 0} h_{(1)}aK^i v_i K^{\lvert b \rvert}S\left(h_{(3)}\right) \otimes \left(h_{(2)}u_i K^{-i}\right)\rhd b\\
&= \sum_{i\geq 0} h_{(1)}aK^{i+\lvert b \rvert} v_i S\left(h_{(3)}\right) \otimes \left(h_{(2)}u_i\right)\rhd b,
\end{align*}
where the second to last equality follows once again from the counit and antipode axioms. We now use the inductive definition of the unbraiding map and the induction hypothesis and we obtain
\begin{align*}
\gamma_{n+1}\big(h\rhd(a\otimes b)\big)
&= \sum_{i\geq 0} h_{(1)}aK^{i+\lvert b \rvert} v_i S\left(h_{(3)}\right) \otimes \gamma_n\left(\left(h_{(2)}u_i\right)\rhd b\right)\\
&= \sum_{i\geq 0} h_{(1)}aK^{i+\lvert b \rvert} v_i S\left(h_{(4)}\right) \otimes \Delta^{(n-1)}\left(h_{(2)}\right)\gamma_n(u_i\rhd b)\Delta^{(n-1)}\left(S\left(h_{(3)}\right)\right)\\
&= \sum \Delta^{(n)}\left(h_{(1)}\right)\left(\sum_{i\geq 0} aK^{i+\lvert b \rvert} v_i \otimes \gamma_n(u_i\rhd b)\right)\Delta^{(n)}\left(S\left(h_{(2)}\right)\right)\\
&= \sum \Delta^{(n)}\left(h_{(1)}\right)\gamma_{n+1}(a\otimes b)\Delta^{(n)}\left(S\left(h_{(2)}\right)\right),
\end{align*}
as expected.
\end{proof}
This last proposition leads to the following interesting corollary concerning centralizers.
\begin{cor}
\label{cor:invariant-centralizer}
Invariant elements of $\left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes}n}$ under the left adjoint action are sent by the unbraiding map $\gamma_n$ into the centralizer of $\mathcal{U}_q(\slt)$ in $\mathcal{U}_q(\slt)^{\otimes n}$. Explicitly, for any $x\in \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes}n}$ which is invariant under the left adjoint action, the element $\gamma_n(x)$ commutes with $\Delta^{(n-1)}(h)$ for any $h\in \mathcal{U}_q(\slt)$.
\end{cor}
\begin{proof}
Suppose that $z\in \mathcal{U}_q(\slt)^{\otimes n}$ satisfies the following: for any $h\in \mathcal{U}_q(\slt)$ we have
\[
\sum \Delta^{(n-1)}\left(h_{(1)}\right)z\Delta^{(n-1)}\left(S\left(h_{(2)}\right)\right )=\varepsilon(h)z.
\]
Then for such an element $z$ we have
\begin{align*}
\Delta^{(n-1)}(h)z
&= \sum \Delta^{(n-1)}\left(h_{(1)}\right)z\Delta^{(n-1)}\left(S\left(h_{(2)}\right)\right)\Delta^{(n-1)}\left(h_{(3)}\right)\\
&= \sum \varepsilon\left(h_{(1)}\right)z\Delta^{(n-1)}\left(h_{(2)}\right)\\
&= z\Delta^{(n-1)}(h),
\end{align*}
the first equality following from the counit and antipode axioms, the second one from the hypothesis on $z$ and the last one from the counit axiom. The statement then follows immediately from \cref{prop:unbraiding_action}.
\end{proof}
\subsection{Inserting units and unbraiding}
\label{sec:inserting_1}
Our next aim is to understand the images of the elements $\underline{\Delta}^{(k-1)}(x)_{\underline{i}}$ under the unbraiding map $\gamma_n$. If $x=\Lambda$ is the Casimir element of $\mathcal{U}_q(\slt)$, we will show that the image of $\underline{\Delta}^{(k-1)}(\Lambda)_{\underline{i}}$ coincides with the generator $\Lambda_A$ of the Askey--Wilson algebra, for $A=\{i_1,\ldots,i_k\}$. As a corollary we obtain that the Askey--Wilson algebra $\AW{n}$ lies inside the centralizer of $\mathcal{U}_q(\slt)$ in $\mathcal{U}_q(\slt)^{\otimes n}$, generalizing a result of Huang \cite[Corollary 4.6]{Huang17}.
\begin{lem}
For any $x\in \mathcal{U}_q(\slt)^{\lf}$ we have
\begin{equation}
\label{eq:tau_adj_action}
\tau_L(x) = \sum_{i\geq 0} K^{i+\lvert x \rvert}v_i\otimes u_i\rhd x.
\end{equation}
\end{lem}
Only a finite number of terms of the sum are non-zero since $x$ lies in the locally finite part of $\mathcal{U}_q(\slt)$ for the left adjoint action.
\begin{proof}
Let us denote by $\tau'_L(x)$ the right-hand side of \cref{eq:tau_adj_action}. One easily checks that $\tau'_L(xy) = \tau'_L(x)\tau'_L(y)$ using the relation $\Delta\otimes \id(\Theta) = \Psi_{23}(\Theta_{13})\Theta_{23}$. It remains to check that $\tau_L$ and $\tau'_L$ coincide on the generators of $\mathcal{U}_q(\slt)$, which is a straightforward calculation.
\end{proof}
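For example, on the generator $K$ one finds
\[
\tau'_L(K) = 1\otimes K + \left(q-q^{-1}\right)KF\otimes (E\rhd K) = 1\otimes K - q^{-1}\left(q-q^{-1}\right)^2FK\otimes E = \tau_L(K),
\]
using $E\rhd K = \left(1-q^2\right)E$, $KF = q^{-2}FK$ and $u_i\rhd K = 0$ for $i\geq 2$.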
As noted in \cite[Lemma 4.1]{CrampeGaboriaudVinetZaimi20}, the map $\tau_L$ is also given by conjugation by the $R$-matrix.
\begin{lem}
\label{lem:tau-conj}
For any $x\in \mathcal{U}_q(\slt)^{\lf}$ we have $\tau_L(x) = \Psi^{-1}\left(\Theta_{21}(1\otimes x)\Theta_{21}^{-1}\right)$.
\end{lem}
\begin{proof}
We start with the conjugation by $\Theta_{21}$:
\begin{align*}
\Theta_{21}(1\otimes x)\Theta_{21}^{-1}
&= \sum_{i,j\geq 0} v_i v_j \otimes u_ix S(u_j) K^j\\
&= \sum_{i,j\geq 0} v_i v_j \otimes u_i x S\left(K^i u_j\right) K^{i+j}\\
&= \sum_{i\geq 0} v_i\otimes (u_i\rhd x) K^{i},
\end{align*}
the last equality following from $\Delta\otimes \id(\Theta) = \Psi_{23}(\Theta_{13})\Theta_{23}$ and from the definition of the adjoint action. Applying $\Psi^{-1}$ concludes the proof.
\end{proof}
\begin{cor}
The map $\tau_L$ is a morphism of algebras and defines a left coaction of $\mathcal{U}_q(\slt)$ on $\mathcal{U}_q(\slt)^{\lf}$, that is $(\Delta\otimes \id)\circ \tau_L = (\id\otimes \tau_L)\circ \tau_L$ and $(\varepsilon\otimes \id)\circ \tau_L=\id$.
\end{cor}
\begin{proof}
The map $\tau_L$ is clearly a morphism of algebras thanks to \cref{lem:tau-conj}. It is also not difficult to check that it is a left coaction using the relation $\id\otimes \Delta (\Theta) = \Psi_{12}(\Theta_{13})\Theta_{12}$.
\end{proof}
Finally, we obtain an explicit formula for the image of $\underline{\Delta}^{(k-1)}(x)_{\underline{i}}$ under unbraiding.
\begin{prop}
\label{prop:inserting_units}
Let $n\in \mathbb{Z}_{>0}$, $k\in \mathbb{Z}_{\geq0}$ with $k\leq n$, $\underline{i}=(i_1,\ldots,i_k)$ with $1\leq i_1< \cdots < i_k \leq n$ and $x\in \mathcal{U}_q(\slt)^{\lf}$. Then
\[
\gamma_n\left(\underline{\Delta}^{(k-1)}(x)_{\underline{i}}\right) = \left(\id^{\otimes (i_k-2)}\otimes \alpha_{i_k-1}\right)\circ\cdots\circ (\id\otimes \alpha_2) \circ \alpha_1 (x)\otimes 1^{n-i_k},
\]
with $\alpha_i=\Delta$ if $i\in\{i_1,\ldots,i_k\}$ and $\alpha_i=\tau_L$ otherwise.
\end{prop}
\begin{proof}
Once again, we proceed by induction on $n$, and there is nothing to prove if $n=1$. Let us suppose the result proven for some $n\in\mathbb{Z}_{>0}$ and any $k,\underline{i}$ and $x$ as above. Let $k\leq n+1$, $\underline{i}=(i_1,\ldots,i_k)$ with $1\leq i_1 < \cdots < i_k\leq n+1$ and $x\in \mathcal{U}_q(\slt)^{\lf}$.
We first suppose that $i_{1}\neq 1$ so that $\underline{\Delta}^{(k-1)}(x)_{\underline{i}}=1\otimes \underline{\Delta}^{(k-1)}(x)_{\underline{i}-1}$, where $\underline{i}-1 = (i_1-1,\ldots,i_k-1)$. Then
\[
\varphi_{n+1}\left(\underline{\Delta}^{(k-1)}(x)_{\underline{i}}\right)=\sum_{i\geq 0}K^{i+\lvert x \rvert}v_i\otimes u_i\rhd \underline{\Delta}^{(k-1)}(x)_{\underline{i}-1} = \sum_{i\geq 0}K^{i+\lvert x \rvert}v_i\otimes \underline{\Delta}^{(k-1)}(u_i\rhd x)_{\underline{i}-1}.
\]
By the induction hypothesis, we have
\[
\gamma_n\left(\underline{\Delta}^{(k-1)}(u_i\rhd x)_{\underline{i}-1}\right) = \left(\id^{\otimes (i_k-3)}\otimes \beta_{i_k-2}\right)\circ\cdots\circ (\id\otimes \beta_2) \circ \beta_1 (u_i\rhd x)\otimes 1^{n-i_k+1},
\]
with $\beta_i=\Delta$ if $i\in\{i_1-1,\ldots,i_k-1\}$ and $\beta_i=\tau_L$ otherwise. Therefore,
\begin{align*}
\gamma_{n+1}\left(\underline{\Delta}^{(k-1)}(x)_{\underline{i}}\right)
&= (\id\otimes \gamma_{n})\circ \varphi_{n+1}\left(\underline{\Delta}^{(k-1)}(x)_{\underline{i}}\right)\\
&= \sum_{i\geq 0} K^{i+\lvert x \rvert} v_i\otimes \gamma_n\left(\underline{\Delta}^{(k-1)}(u_i\rhd x)_{\underline{i}-1}\right)\\
&= \left(\id^{\otimes (i_k-2)}\otimes \beta_{i_k-2}\right)\circ\cdots\circ (\id\otimes \beta_2) \circ (\id\otimes\beta_1) \left(\sum_{i\geq 0} K^{i+\lvert x \rvert} v_i\otimes (u_i\rhd x)\right)\otimes 1^{n-i_k+1}\\
&=\left(\id^{\otimes (i_k-2)}\otimes \beta_{i_k-2}\right)\circ\cdots\circ (\id\otimes \beta_2) \circ (\id\otimes\beta_1) \circ \tau_L(x)\otimes 1^{n-i_k+1},
\end{align*}
which has the desired form if we set $\alpha_i=\beta_{i-1}$ for $2\leq i \leq i_{k}-1$ and $\alpha_1=\tau_L$.
We now suppose that $i_1=1$ so that
\[
\underline{\Delta}^{(k-1)}(x)_{\underline{i}}=\sum_{i\geq 0} x_{(1)}S\left(K^{i+\lvert x_{(2)}\rvert}v_i\right)\otimes u_i\rhd \underline{\Delta}^{(k-2)}\left(x_{(2)}\right)_{\underline{i}^{-}-1},
\]
where $\underline{i}^{-}-1 = (i_2-1,\ldots,i_k-1)$. Then we have
\begin{align*}
\varphi_{n+1}\left(\underline{\Delta}^{(k-1)}(x)_{\underline{i}}\right)
&=\sum_{i,j\geq 0}x_{(1)}S\left(K^{i+\lvert x_{(2)}\rvert}v_i\right)K^{i+j+\lvert x_{(2)}\rvert}v_j\otimes \left((u_ju_i)\rhd \underline{\Delta}^{(k-2)}\left(x_{(2)}\right)_{\underline{i}^{-}-1}\right)\\
&=\sum_{i,j\geq 0}x_{(1)}S(v_i)K^{j}v_j\otimes \left((u_ju_i)\rhd \underline{\Delta}^{(k-2)}\left(x_{(2)}\right)_{\underline{i}^{-}-1}\right)\\
&= \sum x_{(1)}\otimes \underline{\Delta}^{(k-2)}\left(x_{(2)}\right)_{\underline{i}^{-}-1},
\end{align*}
the last equality following once again from \cref{itm:inv_theta}. As in the case $i_1\neq 1$, we use the inductive definition of $\gamma_{n+1}$, apply the induction hypothesis and obtain the expected result, the only difference being that $\alpha_1=\Delta$ instead of $\alpha_1=\tau_L$.
\end{proof}
\begin{rmk}
In the specific case of $k=n$, the previous proposition states that $\gamma_n\circ\underline{\Delta}^{(n-1)} = \Delta^{(n-1)}\circ \gamma_1$.
\end{rmk}
With $x=\Lambda$ we recover the generator $\Lambda_A$ of $\AW{n}$ for a suitable $A\subset\{1,\ldots,n\}$.
\begin{cor}
\label{cor:AW_generators}
The element $\underline{\Delta}^{(k-1)}(\Lambda)_{\underline{i}}$ is sent to the element $\Lambda_A$ of the Askey--Wilson algebra under the unbraiding map $\gamma_n$, where $A=\{i_1,\ldots,i_k\}$.
\end{cor}
Since the Casimir element $\Lambda$ is central, combining \cref{prop:inserting_units} with \cref{cor:invariant-centralizer}, we obtain the following corollary:
\begin{cor}
\label{cor:AW_centralizer}
The Askey--Wilson algebra $\AW{n}$ is a subalgebra of the centralizer of $\mathcal{U}_q(\slt)$ in $\mathcal{U}_q(\slt)^{\otimes n}$.
\end{cor}
\section{Moduli algebras}
\label{sec:moduli}
In order to state the results of Costantino--Lê for (stated) skein algebras of punctured spheres, we introduce the quantum coordinate algebra and the reflection equation algebra.
\subsection{Quantum coordinate algebra}
\label{sec:qcoor}
We begin by defining the quantum coordinate algebra, otherwise known as the Faddeev--Reshetikhin--Takhtajan algebra \cite{FRT89}, for the following $4\times 4$ $R$-matrix:
\[
R =
\begin{pmatrix}
q & 0 & 0 & 0 \\
0 & 1 & q-q^{-1} & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & q
\end{pmatrix}.
\]
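This matrix is, up to an overall normalisation, the standard $R$-matrix of $\mathcal{U}_q(\slt)$ on $\mathbb{C}^2\otimes\mathbb{C}^2$; in particular it satisfies the quantum Yang--Baxter equation $R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12}$. For the reader who wishes to check this directly, the following short computer algebra sketch (in SymPy; it is purely illustrative and not relied upon elsewhere in the paper) verifies the equation symbolically.
\begin{verbatim}
from sympy import Symbol, Matrix, eye, simplify, zeros
from sympy.physics.quantum import TensorProduct

q = Symbol('q')

# The R-matrix displayed above, in the basis (++, +-, -+, --).
R = Matrix([[q, 0, 0, 0],
            [0, 1, q - 1/q, 0],
            [0, 0, 1, 0],
            [0, 0, 0, q]])

I2 = eye(2)
P = Matrix([[1, 0, 0, 0],   # the flip map on C^2 (x) C^2
            [0, 0, 1, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 1]])

R12 = TensorProduct(R, I2)   # R acting on the first two factors of (C^2)^{(x)3}
R23 = TensorProduct(I2, R)   # R acting on the last two factors
R13 = TensorProduct(I2, P) * R12 * TensorProduct(I2, P)   # conjugate R12 by the flip of factors 2 and 3

# Quantum Yang-Baxter equation: R12 R13 R23 = R23 R13 R12.
assert simplify(R12 * R13 * R23 - R23 * R13 * R12) == zeros(8, 8)
\end{verbatim}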
\begin{defn}
The quantum coordinate algebra $\SL_q(2)$ is the algebra with generators the entries of the matrix $U = \begin{pmatrix}
u^+_+ & u^+_-\\
u^-_+ & u^-_-
\end{pmatrix}$ with the relations given by
\begin{align*}
u^+_-u^+_+ &= q u^+_+u^+_-, & u^-_-u^+_- &= qu^+_-u^-_-, & u^-_+u^+_+ &= q u^+_+u^-_+, & u^-_-u^-_+ &= qu^-_+u^-_-, & u^+_-u^-_+ &= u^-_+u^+_-,
\end{align*}
and
\begin{align*}
u^+_+u^-_--q^{-1}u^+_-u^-_+ &= 1, & u^-_-u^+_+-qu^-_+u^+_- &= 1.
\end{align*}
\end{defn}
We endow this algebra with a structure of a Hopf algebra with coproduct $\Delta$, counit $\varepsilon$ and antipode $S$ given by
\begin{multline*}
\Delta
\begin{pmatrix}
u_+^+ & u_-^+\\
u_+^- & u_-^-
\end{pmatrix}
=
\begin{pmatrix}
u_+^+\otimes u_+^+ + u^+_-\otimes u^-_+ & u^+_-\otimes u^-_- + u^+_+\otimes u^+_-\\
u^-_+\otimes u^+_+ + u^-_-\otimes u^-_+ & u^-_-\otimes u^-_- + u^-_+\otimes u^+_-
\end{pmatrix},
\quad
\varepsilon
\begin{pmatrix}
u_+^+ & u_-^+\\
u_+^- & u_-^-
\end{pmatrix}
=
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\\ \quad\text{and}\quad
S
\begin{pmatrix}
u_+^+ & u_-^+\\
u_+^- & u_-^-
\end{pmatrix}
=
\begin{pmatrix}
u^-_- & -qu^+_-\\
-q^{-1}u^-_+ & u^+_+
\end{pmatrix}.
\end{multline*}
There exists a non-degenerate Hopf pairing $\langle\cdot,\cdot\rangle$ between $\SL_q(2)$ and $\mathcal{U}_q(\slt)$ given by
\[
\left\langle K, \begin{pmatrix}u^+_+ & u^+_-\\u^-_+ & u^-_-\end{pmatrix}\right\rangle = \begin{pmatrix}q & 0\\ 0 & q^{-1}\end{pmatrix},\quad
\left\langle E, \begin{pmatrix}u^+_+ & u^+_-\\u^-_+ & u^-_-\end{pmatrix}\right\rangle = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}\quad\text{and}\quad
\left\langle F, \begin{pmatrix}u^+_+ & u^+_-\\u^-_+ & u^-_-\end{pmatrix}\right\rangle = \begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}.
\]
This pairing implies that any right comodule over $\SL_q(2)$ can be turned into a left module over $\mathcal{U}_q(\slt)$: if $M$ is a right $\SL_q(2)$-comodule with coaction $\Delta_M$ then the action of $\mathcal{U}_q(\slt)$ is
\[
x\cdot m = \sum \langle x,\;m_{(0)}\rangle\, m_{(1)},
\]
with the right coaction on $M$ being $\Delta_M(m) = \sum m_{(1)}\otimes m_{(0)}$.
\begin{ex}
It can be checked that the left $\mathcal{U}_q(\slt)$-module structure arising from the right $\SL_q(2)$-comodule structure on $\SL_q(2)$ is given on the generators by
\begin{align*}
K\cdot u^+_+ &= q u^+_+, & K \cdot u^+_- &= q^{-1}u^+_-,& K\cdot u^-_+ &= qu^-_+,& K\cdot u^-_- &=q^{-1}u^-_-,\\
E\cdot u^+_+ &= 0, & E \cdot u^+_- &= u^+_+,& E\cdot u^-_+ &= 0,& E\cdot u^-_- &=u^-_+,\\
F\cdot u^+_+ &= u^+_-, & F \cdot u^+_- &= 0,& F\cdot u^-_+ &= u^-_-,& F\cdot u^-_- &=0.
\end{align*}
\end{ex}
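The action in this example can be recovered mechanically from the coproduct and the pairing above. The following small sketch (in SymPy; the single-letter names $a,b,c,d$ for $u^+_+,u^+_-,u^-_+,u^-_-$ are ours and purely for illustration) implements the formula $x\cdot m = \sum \langle x,\;m_{(0)}\rangle\, m_{(1)}$ on the generators and reproduces some of the values listed above.
\begin{verbatim}
from sympy import symbols
q = symbols('q')

# generators: a = u^+_+, b = u^+_-, c = u^-_+, d = u^-_-
COPRODUCT = {              # Delta(g) as a list of (g_(1), g_(0)) pairs
    'a': [('a', 'a'), ('b', 'c')],
    'b': [('b', 'd'), ('a', 'b')],
    'c': [('c', 'a'), ('d', 'c')],
    'd': [('d', 'd'), ('c', 'b')],
}
PAIRING = {                # <x, g> for x in {K, E, F}
    'K': {'a': q, 'b': 0, 'c': 0, 'd': 1/q},
    'E': {'a': 0, 'b': 1, 'c': 0, 'd': 0},
    'F': {'a': 0, 'b': 0, 'c': 1, 'd': 0},
}

def act(x, g):
    """x . g = sum <x, g_(0)> g_(1)."""
    result = {}
    for left, right in COPRODUCT[g]:
        coeff = PAIRING[x][right]
        if coeff != 0:
            result[left] = result.get(left, 0) + coeff
    return result

assert act('K', 'a') == {'a': q}   # K . u^+_+ = q u^+_+
assert act('E', 'b') == {'a': 1}   # E . u^+_- = u^+_+
assert act('F', 'a') == {'b': 1}   # F . u^+_+ = u^+_-
\end{verbatim}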
The Hopf algebra $\SL_q(2)$ is cobraided with the co-$R$-matrix $\rho$ being given by $R$:
\[
\rho\left(\begin{pmatrix}
u_+^+ & u_-^+\\
u_+^- & u_-^-
\end{pmatrix}\otimes\begin{pmatrix}
u_+^+ & u_-^+\\
u_+^- & u_-^-
\end{pmatrix}\right) = R.
\]
Therefore, the category of right $\SL_q(2)$-comodules is braided: given two right $\SL_q(2)$-comodules $V$ and $W$ with respective coactions $\Delta_V$ and $\Delta_W$, the co-$R$-matrix $\rho$ defines an isomorphism $c_{V,W}^{\rho} \colon V\otimes W \rightarrow W\otimes V$ given by
\[
c^{\rho}_{V,W}(v\otimes w) = \sum \rho\left(v_{(0)}\otimes w_{(0)}\right) w_{(1)}\otimes v_{(1)}
\]
where $\Delta_V(v) = \sum v_{(1)}\otimes v_{(0)}\in V\otimes\SL_q(2)$ and $\Delta_W(w) = \sum w_{(1)}\otimes w_{(0)}\in W\otimes \SL_q(2)$.
The following proposition is classical and compares the two braidings $c_{V,W}^\rho$ and $c_{V,W}$, where we see $V$ and $W$ as left $\mathcal{U}_q(\slt)$-modules through the pairing.
\begin{prop}
\label{prop:cobr=br}
Let $V$ and $W$ be two right comodules over $\SL_q(2)$ that we also equip with the structure of left modules over $\mathcal{U}_q(\slt)$ through the pairing. If the obtained modules are weight modules with locally nilpotent actions of $E$ and $F$, then we have $c_{V,W}=c^\rho_{V,W}$ as maps from $V\otimes W$ to $W\otimes V$.
\end{prop}
\begin{proof}
By definition, we have
\[
c^\rho_{V,W}(v\otimes w) = \sum \rho\left(v_{(0)}\otimes w_{(0)}\right) w_{(1)}\otimes v_{(1)},
\]
and
\[
c_{V,W}(v\otimes w) = \sum_{n\geq 0} q^{(\lvert v \rvert + 2n)(\lvert w \rvert -2n)/2}q^{n(n-1)/2}\frac{(q-q^{-1})^n}{[n]!}\left\langle E^n,\;v_{(0)}\right\rangle \left\langle F^n ,\; w_{(0)}\right\rangle w_{(1)}\otimes v_{(1)}
\]
for all weight vectors $v \in V$ and $w \in W$. Note that since $\Delta_V(K\cdot v) = (1\otimes K) \cdot \Delta_V(v)$, and similarly for $W$, we have
\[
c_{V,W}(v\otimes w) = \sum_{n\geq 0} q^{(\lvert v_{(0)} \rvert + 2n)(\lvert w_{(0)} \rvert -2n)/2}q^{n(n-1)/2}\frac{(q-q^{-1})^n}{[n]!}\left\langle E^n,\;v_{(0)}\right\rangle \left\langle F^n ,\; w_{(0)}\right\rangle w_{(1)}\otimes v_{(1)}.
\]
Therefore, the claim is proved once we have checked that
\begin{equation}
\rho(x\otimes y) = \sum_{n\geq 0} q^{(\lvert x \rvert + 2n)(\lvert y \rvert -2n)/2}q^{n(n-1)/2}\frac{(q-q^{-1})^n}{[n]!}\langle E^n,\;x\rangle \langle F^n ,\; y\rangle.\label{eq:rho'}
\end{equation}
Denote by $\rho'(x\otimes y)$ the right-hand side of \cref{eq:rho'}. Using the various properties of the $R$-matrix and of the co-$R$-matrix, it is a routine calculation to check that
\begin{align*}
\rho'(xy\otimes z) &= \sum \rho'\left(x \otimes z_{(1)}\right)\rho'\left(y \otimes z_{(2)}\right), & \rho'(1\otimes z) &= \varepsilon(z),\\
\rho'(x\otimes yz) &= \sum \rho'\left(x_{(1)} \otimes z\right)\rho'\left(x_{(2)} \otimes y\right), & \rho'(x\otimes 1) &= \varepsilon(x),
\end{align*}
for all $x,y$ and $z\in \SL_q(2)$. Therefore, it remains to check that $\rho(x\otimes y)=\rho'(x\otimes y)$ for $x$ and $y$ in the set $\{u^\pm_{\pm}\}$, which is an easy and omitted calculation.
\end{proof}
\subsection{Reflection equation algebras}
\label{sec:REA}
We now introduce another algebra which is not isomorphic to the quantum coordinate algebra but is twist-equivalent to it \cite{Donin}.
\begin{defn}
The reflection equation algebra $\mathcal{O}_q(\SL_2)$ is the algebra with generators the entries of the matrix $\mathcal{K}=
\begin{pmatrix}
k_+^+ & k_-^+\\
k_+^- & k_-^-
\end{pmatrix}$ which satisfy the following:
\begin{enumerate}
\item the quantum determinant relation: $k_+^+k_-^--qk_-^+k_+^- = 1$,
\item the reflection equation: $R_{21}(\mathcal{K} \otimes I) R (I\otimes \mathcal{K}) = ( I\otimes \mathcal{K}) R_{21} (\mathcal{K} \otimes I) R$.
\end{enumerate}
\end{defn}
From the theory of $L$-operators, see for example \cite[Proposition 3.116]{voigt-yuncken}, we may deduce an algebra isomorphism between $\mathcal{O}_q(\SL_2)$ and $\mathcal{U}_q(\slt)^{\lf}$. It is explicitly given by
\[
\begin{pmatrix}
k_+^+ & k_-^+\\
k_+^- & k_-^-
\end{pmatrix}
\mapsto
\begin{pmatrix}
q^{-1}\left(\Lambda-q^{-1}K\right) & q^{-1}\left(q-q^{-1}\right)E \\
\left(q-q^{-1}\right)KF & K
\end{pmatrix}.
\]
The bar coproduct $\underline{\Delta}$ on $\mathcal{U}_q(\slt)^{\lf}$ defined in \cref{sec:braided_tensor} has a much nicer expression through this isomorphism: it becomes the usual matrix coproduct given by
\[
\mathcal{K} \mapsto
\begin{pmatrix}
k_+^+\otimes k_+^+ + k^+_-\otimes k^-_+ & k^+_-\otimes k^-_- + k^+_+\otimes k^+_-\\
k^-_+\otimes k^+_+ + k^-_-\otimes k^-_+ & k^-_-\otimes k^-_- + k^-_+\otimes k^+_-
\end{pmatrix}.
\]
Finally, we note also that the element $\operatorname{tr}_q(\mathcal{K}):=qk^+_++q^{-1}k^-_-$ is sent to the Casimir element $\Lambda$.
\subsection{Alekseev moduli algebras}
\label{sec:alekseev}
We end this section by introducing Alekseev moduli algebras, also known as quantum loop algebras, in the case of $\mathcal{U}_q(\slt)$. These algebras are attached to the punctured surfaces $\Sigma_{g,r}$. In the particular case of the punctured sphere $\Sigma_{0,n+1}$, we recover the braided tensor power $\mathcal{O}_q(\SL_2)^{\tilde{\otimes} n}\simeq \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes}n}$ already encountered in \cref{sec:unbraiding}. In general, we also need the notion of the elliptic double $\mathcal{D}_q(\SL_2)$. As a vector space, the elliptic double is isomorphic to $\mathcal{O}_q(\SL_2)^{\otimes 2}$, see \cite{Brochier-Jordan} for further details on the multiplication.
\begin{defn}
The Alekseev moduli algebra associated with the punctured surface $\Sigma_{g,r}$ is
\[
\mathcal{L}_{\Sigma_{g,r}} = \mathcal{D}_q(\SL_2)^{\tilde{\otimes} g}\tilde{\otimes}\mathcal{O}_q(\SL_2)^{\tilde{\otimes} r-1},
\]
where $\tilde{\otimes}$ still denotes the braided tensor product.
\end{defn}
It should be noted that, as a vector space, the algebra $\mathcal{L}_{\Sigma_{g,r}}$ is isomorphic to $\mathcal{O}_q(\SL_2)^{\otimes 2g+r-1}$. Indeed, $\mathcal{D}_q(\SL_2)$ is defined as a vector space by $\mathcal{O}_q(\SL_2)^{\otimes 2}$ and the multiplication is twisted using the $R$-matrix. We will also write $\mathcal{L}_{\Sigma_{g,r}} = \mathcal{O}_q(\SL_2)^{\hat{\otimes} 2g}\tilde{\otimes}\mathcal{O}_q(\SL_2)^{\tilde{\otimes} r-1}$, where $\hat{\otimes}$ emphasises that the multiplication on $\mathcal{O}_q(\SL_2)^{\otimes 2g}$ is not the trivial one, but is such that $\mathcal{O}_q(\SL_2)^{\hat{\otimes} 2g}=\mathcal{D}_q(\SL_2)^{\tilde{\otimes} g}$ as an algebra.
\section{Skein Algebras}
\label{sec:skein}
In this section, we shall prove that the Askey--Wilson algebra $\AW{n}$ is isomorphic to the Kauffman bracket skein algebra $\SkAlg{q}{\Sigma_{0,n+1}}$ of the $(n+1)$-punctured sphere and explicitly match up the generators on both sides of this correspondence. To do this we shall consider the Kauffman bracket skein algebra as a subalgebra of the stated skein algebra. Costantino and L\^{e} proved that the stated skein algebra of an $(n+1)$-punctured sphere is isomorphic to the braided tensor product of $n$ copies of the reflection equation algebra $\mathcal{O}_q(\SL_2)$ \cite{CL20}. The reflection equation algebra is in turn isomorphic to $\mathcal{U}_q(\slt)^{\lf}$ (\cref{sec:REA}). Finally, we shall use the description of the Askey--Wilson algebra $\AW{n}$ as a subalgebra of the braided tensor product of $n$ copies of $\mathcal{U}_q(\slt)^{\lf}$ which we developed in \cref{sec:unbraiding} to obtain our result.
\subsection{Kauffman Bracket Skein Algebras and Stated Skein Algebra}
\label{sec:skeindefn}
The Kauffman bracket skein algebra is based on the Kauffman bracket:
\begin{defn}
Let \(L\) be a link without contractible components (but including the empty link). The \emph{Kauffman bracket polynomial} \(\langle L \rangle\) in the variable \(q\) is defined by the following local \emph{skein relations}:
\begin{align}
\skeindiagram{leftnoarrow} &= q^{\frac{1}{2}}\; \skeindiagram{horizontal} + q^{-\frac{1}{2}} \; \skeindiagram{vertical}, \label{skeinrel_cross} \\
\skeindiagram{circle} &= -q - q^{-1}. \label{skeinrel_loop}
\end{align}
\end{defn}
\noindent These diagrams represent links with blackboard framing which are identical outside a 3-dimensional disc and are as depicted inside the disc. It is an invariant of framed links and it can be `renormalised' to give the Jones polynomial. The Kauffman bracket can also be used to define an invariant of \(3\)-manifolds:
\begin{defn}
Let \(M\) be a smooth 3-manifold, \(R\) be a commutative ring with identity and \(q\) be an invertible element of \(R\). The \emph{Kauffman bracket skein module} \(\SkAlg{q}{M}\) is the \(R\)-module of all formal linear combinations of links, modulo the Kauffman bracket skein relations pictured above.
\end{defn}
\begin{rmk} For the remainder of the paper we will use the coefficient ring \(R := \mathbb{C}\left(q^{1/4}\right)\).
\end{rmk}
For a surface $\Sigma$, we define its \emph{skein algebra} $\SkAlg{q}{\Sigma}$ to be the skein module $\SkAlg{q}{\Sigma\times [0,1]}$ and define multiplication by first stacking the links on top of each other to obtain a link in \(\Sigma \times [0,2]\) and then rescaling the second coordinate to obtain \(\Sigma \times [0,1]\) again. Usually links are drawn by projecting onto $\Sigma$. In this case the multiplication $XY$ is obtained by drawing $Y$ above $X$.
Typically, this algebra is noncommutative; however, its $q = \pm 1$ specialisation is commutative since in this case the right-hand side of the first skein relation is symmetric with respect to switching the crossing. \citeauthor{Bullock97}, \citeauthor{PrzytyckiSikora00}~\cite{Bullock97, PrzytyckiSikora00} showed that at $q=-1$ the skein algebra $\SkAlg{q}{\Sigma}$ is isomorphic to the ring of functions on the $\SL_2$ character variety of $\Sigma$. This statement was strengthened in \cite{BullockFrohmanKania99}, where it was shown that the skein algebra is a quantisation of the $\SL_2$ character variety of $\Sigma$ with respect to the Atiyah--Bott--Goldman Poisson bracket.
Recall that \(\Sigma_{g,r}\) denotes the compact oriented surface with genus $g$ and $r$ punctures. We will restrict ourselves to punctured surfaces so we assume that $r\geq 1$.
Every punctured surface $\Sigma_{g,r}$ has a handlebody decomposition which is given by attaching $n=2g+r-1$ handles to a disc. For example, the handlebody decomposition of $\Sigma_{0,5}$ is shown in \cref{fig:5punctsphere}.
\begin{defn}
Let $\Sigma_{g,r}^{\bullet}$ denote the surface $\Sigma_{g,r}$ with a choice of marking on its boundary. The marking must be on the disc part of the handlebody decomposition of the surface.
\end{defn}
\begin{figure}
\caption{This figure shows the handlebody decomposition of $\Sigma^{\bullet}_{0,5}$.}
\label{fig:5punctsphere}
\end{figure}
For every subset $A \subseteq \{1, \dots, n\}$ there is a simple closed curve $s_A$ which intersects precisely the handles indexed by $A$. These simple closed curves $s_A$ form a generating set for the skein algebra:
\begin{thm}[\cite{Bullock99}]
The curves $s_A$ for all non-empty subsets $A \subseteq \{1, \dots, n\}$ generate the skein algebra $\SkAlg{q}{\Sigma_{0,n+1}}$.
\end{thm}
In order to relate these Kauffman bracket skein algebras to quantum loop algebras, one must consider the skein algebra as a subalgebra of an algebra of skeins which are not all closed loops.
\begin{defn}[\cite{Le18}]
Let $\Sigma$ be an oriented surface with boundary $\partial \Sigma$. Let $T$ be a tangle in $\Sigma \times [0, 1]$ together with a colouring $\pm$ on each point where $T$ meets the boundary $\partial \Sigma$. The \emph{stated skein algebra} $\SkAlgSt{\Sigma}$ is the \(R\)-module of all formal linear combinations of isotopy classes of such tangles $T$, modulo the Kauffman bracket skein relations (\cref{skeinrel_cross,skeinrel_loop}) and the boundary conditions:
\begin{gather}
\label{eq:skeinboundary}
\diagramhh{skeinrelations}{pp}{12pt}{1pt}{0.25} = \diagramhh{skeinrelations}{mm}{12pt}{1pt}{0.25} = 0 \quad \quad \diagramhh{skeinrelations}{mp}{12pt}{1pt}{0.25} = q^{-\frac{1}{4}} \quad \quad \diagramhh{skeinrelations}{cup}{12pt}{1pt}{0.25} = q^{\frac{1}{4}}\diagramhh{skeinrelations}{pmstraight}{12pt}{1pt}{0.25} - q^{\frac{5}{4}}\diagramhh{skeinrelations}{mpstraight}{12pt}{1pt}{0.25}
\end{gather}
\end{defn}
The reason we wish to consider stated skein algebras is that stated skein algebras satisfy excision \cite[Theorem 4.12]{CL20} which in particular means that the stated skein algebra $\SkAlgSt{\Sigma^{\bullet}_{0,n+1}}$ can be constructed out of copies of the simpler skein algebra $\SkAlgSt{\Sigma^{\bullet}_{0,2}}$.
\begin{rmk}
Stated skein algebras are a special case for $\SL_2$ of \emph{internal skein algebras} which were defined by \citeauthor{GunninghamJordanSafronov19}~\cite{GunninghamJordanSafronov19} based on \emph{skein categories} \cite{Cooke19,JohnsonFreyd15}. Skein categories and thus internal skein algebras are defined for any linear ribbon category $\mathcal{V}$ over any unital commutative ring.
\end{rmk}
\subsection{Isomorphism of Skein Algebra and Askey--Wilson Algebra}
In this subsection we shall combine the results of this paper so far together with the results of Costantino and L\^{e} to obtain an explicit isomorphism between the Kauffman bracket skein algebra of the $(n+1)$-punctured sphere $\SkAlg{q}{\Sigma_{0,n+1}^{\bullet}}$ and the rank $(n-2)$ Askey--Wilson algebra $\AW{n}$.
Costantino and Lê \cite[Theorem 3.4]{CL20} show that the quantum coordinate algebra\footnote{Costantino and Lê follow Majid in referring to $\SL_q(2)$ as the quantum coordinate algebra and denote it as $\mathcal{O}_q(G)$. This $\mathcal{O}_q(G)$ does not correspond to our $\mathcal{O}_q(G)$ which denotes the reflection equation algebra.} $\SL_q(2)$ has a straightforward topological interpretation as the stated skein algebra of the bigon $\mathcal{B}$ with isomorphism
\[\SkAlgSt{\mathcal{B}} \xrightarrow{\sim} \SL_q(2): \alpha(a, b) \mapsto u^{a}_{b} \]
By embedding the marked annulus $\Sigma^{\bullet}_{0,2}$ into the bigon as shown in the figure they conclude:
\begin{prop}[{\cite[Proposition 4.25]{CL20}}]
\label{prop:isoskeinrea}
The stated skein algebra $\SkAlgSt{\Sigma^{\bullet}_{0,2}}$ is isomorphic as a Hopf algebra to the reflection equation algebra $\mathcal{O}_q(\SL_2)$.
\end{prop}
The isomorphism is described explicitly in the proof of Proposition 4.25\footnote{Note that the reflection equation algebra is denoted $BSL_q(2)$ by Costantino and Lê and referred to as the transmuted or braided version of the quantum coordinate algebra.} and on generators this isomorphism is given by
\begin{equation}
\label{eq:skeintorea}
\beta(\varepsilon_1, \varepsilon_2) \mapsto \bar{C}(-\varepsilon_1) k^{-\varepsilon_1}_{\varepsilon_2}
\end{equation}
where $C(+) = \bar{C}(-) = -q^{-5/4}$ and $C(-) = \bar{C}(+) = q^{-1/4}$.
\begin{figure}
\caption{The bigon $\mathcal{B}$ with the marked annulus $\Sigma^{\bullet}_{0,2}$ embedded into it.}
\end{figure}
The skein algebra of the (marked) annulus $\SkAlg{q}{\Sigma^{\bullet}_{0,2}}$ is isomorphic to $R[s_1]$ where $s_1$ is the loop around the puncture. If we consider $\SkAlg{q}{\Sigma^{\bullet}_{0,2}}$ as a subalgebra of the associated stated skein algebra we have
\[s_1 = \diagramhh{loopcalc}{s1}{12pt}{1pt}{0.25} = q^{\frac{1}{4}}\diagramhh{loopcalc}{s1pm}{18pt}{1pt}{0.25} - q^{\frac{5}{4}} \diagramhh{loopcalc}{s1mp}{18pt}{1pt}{0.25} \mapsto q^{\frac{1}{4}} \bar{C}(-) k^-_- - q^{\frac{5}{4}} \bar{C}(+) k^+_+ = - q^{-1} k^-_- - q k^+_+, \]
so using \cref{prop:isoskeinrea} together with the results of \cref{sec:REA} we conclude:
\[f_1: \SkAlgSt{\Sigma^{\bullet}_{0,2}} \xrightarrow{\sim} \mathcal{O}_q(\SL_2),\; s_1 \mapsto - \operatorname{tr}_q(\mathcal{K}) \]
Using the excision of stated skein algebras, this result can be extended to multiple punctures.
\begin{prop}[\cite{CL20}]
\label{prop:nisoskeinrea}
The stated skein algebra $\SkAlgSt{\Sigma^{\bullet}_{0,n+1}}$ is isomorphic as an algebra to the quantum loop algebra $\mathcal{L}_{\Sigma_{0,n+1}}=\mathcal{O}_q(\SL_2)^{\tilde{\otimes} n}$. Moreover, this isomorphism sends the closed loop $s_A$, for $A=\{i_1<\cdots<i_k\}$, to the element $-\Delta^{(k-1)}(\operatorname{tr}_q(\mathcal{K}))_{\underline{i}}$ of $\mathcal{O}_q(\SL_2)^{\tilde{\otimes} n}$.
\end{prop}
\begin{proof}
The isomorphism is given in \cite[Proposition 4.25]{CL20}. We untangle the definition of this isomorphism and compute it on the closed loops $s_A$. Let $A=\{i_1 < \cdots < i_k\}$ be a subset of $\{1,\cdots,n\}$.
Using one puncture to flatten the sphere, the loop $s_A$ has the form
\[\diagramhh{loopcalc}{sA}{12pt}{1pt}{0.25}\]
As before we apply \cref{eq:skeinboundary} to obtain
\[\bar{C}(+)^{-1}\diagramhh{loopcalc}{sApm}{18pt}{1pt}{0.25} + \bar{C}(-)^{-1} \diagramhh{loopcalc}{sAmp}{18pt}{1pt}{0.25}\]
Applying the relation again at each puncture leaves us with
\[\sum_{\varepsilon,\varepsilon_i}\bar{C}(-\varepsilon)^{-1} \prod_{i=1}^{k-1} \bar{C}(\varepsilon_i)^{-1} \diagramhh{loopcalc}{sAdecomp}{20pt}{1pt}{0.25}\]
where we sum over all possible values of $\varepsilon,\varepsilon_i \in \{\pm\}$.
The map $g_n$ is now simply given by \cref{eq:skeintorea} on each puncture so we have
\begin{align*}
\sum_{\varepsilon,\varepsilon_i}\left(\bar{C}(-\varepsilon)^{-1}\bar{C}(\varepsilon) \prod_{i=1}^{k-1} \bar{C}(\varepsilon_i)^{-1}\bar{C}(\varepsilon_i)\right) \left(\bigotimes_{i=1}^{k} k^{\varepsilon_i}_{\varepsilon_{i+1}}\right)_{\!\!\underline{i}} &= \sum_{\varepsilon \in \{\pm\}}\bar{C}(-\varepsilon)^{-1} \bar{C}(\varepsilon)
\Delta^{(k-1)}\left(k^{\varepsilon}_{\varepsilon}\right)_{\underline{i}} \\
&= -q\Delta^{(k-1)}\left(k^{+}_{+}\right)_{\underline{i}}-q^{-1}\Delta^{(k-1)}\left(k^{-}_{-}\right)_{\underline{i}}\\
&= -\Delta^{(k-1)}(\operatorname{tr}_q(\mathcal{K}))_{\underline{i}}
\end{align*}
as required.
\end{proof}
\begin{rmk}
In \cite{CL20}, the braided structure on the tensor product $\mathcal{O}_q(\SL_2)^{\tilde{\otimes} n}$ is defined using the co-$R$-matrix $\rho$. But, as an easy consequence of \cref{prop:cobr=br}, this braided structure is the same as the braided structure obtained with the $R$-matrix of $\mathcal{U}_q(\slt)$ and the left action of $\mathcal{U}_q(\slt)$ on $\mathcal{O}_q(\SL_2)$ arising from the adjoint action.
\end{rmk}
Combining \cref{prop:nisoskeinrea} with our previous results we conclude:
\begin{thm}
\label{thm:iso}
If $q$ is generic then there is an algebra isomorphism $\SkAlg{q}{\Sigma^{\bullet}_{0,n+1}} \to \AW{n}$ sending the closed loop $s_A\in \SkAlg{q}{\Sigma^{\bullet}_{0,n+1}}$ to $-\Lambda_A \in \AW{n}$.
\end{thm}
\begin{proof}
By \cref{prop:nisoskeinrea} there is an isomorphism $\SkAlgSt{\Sigma^{\bullet}_{0,n+1}} \to \mathcal{O}_q(\SL_2)^{\tilde{\otimes} n}$. By \cref{sec:REA} there is an isomorphism $\mathcal{O}_q(\SL_2) \to \mathcal{U}_q(\slt)^{\lf}$ which gives us an isomorphism $g_n: \SkAlgSt{\Sigma^{\bullet}_{0,n+1}} \to \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n}$. Combining this with the injective unbraiding map $\gamma_n: \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n} \to \mathcal{U}_q(\slt)^{\otimes n}$ defined in \cref{sec:unbraiding} gives us an injective algebra morphism
\[\SkAlgSt{\Sigma^{\bullet}_{0,n+1}} \xrightarrow{g_n} \left(\mathcal{U}_q(\slt)^{\lf}\right)^{\tilde{\otimes} n} \xrightarrow{\gamma_n} \mathcal{U}_q(\slt)^{\otimes n}. \]
Therefore, to prove the result it is sufficient to show that the generators $s_A \in \SkAlg{q}{\Sigma^{\bullet}_{0,n+1}} \subset \SkAlgSt{\Sigma^{\bullet}_{0,n+1}}$ are sent to the generators $-\Lambda_A \in \AW{n} \subset \mathcal{U}_q(\slt)^{\otimes n}$, which follows from the explicit computations in \cref{prop:nisoskeinrea} and \cref{cor:AW_generators}.
\end{proof}
\section{Commutator Relations}
\label{sec:commutator}
As an illustration of the usefulness of being able to use diagrams for calculations involving Askey--Wilson algebras, we are now going to reprove the main results of \cite{DeClercq19}. The first result is that the generators $\Lambda_A$ of $\AW{n}$ satisfy a generalisation of the commutator relations used to define $\AW{3}$.
\begin{thm}
\label{thm:commutators}
Let $A_1, A_2, A_3, A_4 \subseteq \{1, \dots, n\}$ be such that $\max{A_i} < \min{A_{j}}$ whenever $i<j$ and both $A_i$ and $A_j$ are non-empty.
Let $A$ and $B$ be one of the following
\begin{enumerate}
\item $A = A_1 \cup A_2 \cup A_4$ and $B = A_2 \cup A_3$,
\item $A = A_2 \cup A_3$ and $B = A_1 \cup A_3 \cup A_4$,
\item $A = A_1 \cup A_3 \cup A_4$ and $B = A_1 \cup A_2 \cup A_4$.
\end{enumerate}
We have the commutator
\[[\Lambda_{A},\; \Lambda_{B}]_q = \left(q^{-2}- q^{2}\right) \Lambda_{(A \cup B) \backslash (A \cap B)} + \left(q -q^{-1}\right)\left(\Lambda_{A \cap B} \Lambda_{A \cup B} + \Lambda_{A \backslash (A \cap B)} \Lambda_{B \backslash (A \cap B)}\right).\]
\end{thm}
\begin{proof}
We use the isomorphism $\SkAlg{q}{\Sigma_{0, n+1}} \to \AW{n}\colon s_A \mapsto -\Lambda_A$ and instead prove the result for loops in $\SkAlg{q}{\Sigma_{0, n+1}}$.
As usual, we represent $\Sigma_{0, n+1}$ as $n$ points in a line with the final point used to flatten the sphere onto the page. We omit any point not in $A_1 \cup A_2 \cup A_3 \cup A_4$ as these points make no difference to the calculation: either they are to the left or right of the loops, or the loops pass below them. We note that the condition on the $A_i$ means that all the points are partitioned into $A_1, A_2, A_3, A_4$ in order.
If $A = A_1 \cup A_2 \cup A_4$ and $B = A_2 \cup A_3$ then we have
\begin{align*}
s_A s_B &= \diagramhh{skeincommutatorproofv2}{AB2}{12pt}{0pt}{0.3} = \diagramhh{skeincommutatorproofv2}{A1A42}{12pt}{0pt}{0.3} + q^{-1} \; \diagramhh{skeincommutatorproofv2}{NotStandard2}{12pt}{0pt}{0.3} \\
&+ q \; \diagramhh{skeincommutatorproofv2}{A1A3A42}{12pt}{0pt}{0.3} + \diagramhh{skeincommutatorproofv2}{AllA2}{12pt}{0pt}{0.3} \\
s_B s_A &= \diagramhh{skeincommutatorproofv2}{BA2}{12pt}{0pt}{0.3} = \diagramhh{skeincommutatorproofv2}{A1A42}{12pt}{0pt}{0.3} + q \; \diagramhh{skeincommutatorproofv2}{NotStandard2}{12pt}{0pt}{0.3} \\
&+ q^{-1} \; \diagramhh{skeincommutatorproofv2}{A1A3A42}{12pt}{0pt}{0.3} + \diagramhh{skeincommutatorproofv2}{AllA2}{12pt}{0pt}{0.3}
\end{align*}
Hence, we have
\begin{align*}
q s_A s_B - q^{-1} s_B s_A &= \left(q^{2}- q^{-2}\right) s_{(A \cup B) \backslash (A \cap B)} + \left(q -q^{-1}\right) (s_{A \backslash (A \cap B)} s_{B \backslash (A \cap B)}+ s_{A \cap B} s_{A \cup B}).
\end{align*}
The other cases are similar.
\end{proof}
The second result of \cite{DeClercq19} is even easier to prove:
\begin{thm}
If $B \subseteq A \subseteq \{1, \dots, n\}$, then $\Lambda_A$ and $\Lambda_B$ commute.
\end{thm}
\begin{proof}
If $B \subseteq A \subseteq \{1, \dots, n\}$ then the loops $s_A$ and $s_B$ do not intersect, so they commute.
\end{proof}
As the loops $s_A$ and $s_B$ also do not intersect if $A \cap \{\min{B}, \dots, \max{B}\} = \emptyset$, we also have:
\begin{prop}
Let $A, B \subseteq \{1, \dots, n\}$ be such that $A \cap \{\min{B}, \dots, \max{B}\} = \emptyset$. Then $\Lambda_A$ and $\Lambda_B$ commute.
\end{prop}
\begin{rmk}
This proposition also follows from \cref{thm:commutators} by taking $A_1=\{\,a\in A \;\vert\; a<b,\; \forall b \in B\,\}$, $A_2=\emptyset$, $A_3=B$ and $A_4 = \{\,a\in A \;\vert \;a>b,\; \forall b \in B\,\}$.
\end{rmk}
\section{Action of the braid group}
\label{sec:braid_grp}
As noted in \cite[Section 8]{CFG21}, both the Askey--Wilson algebra $\AW{3}$ and the skein algebra of the $4$-punctured sphere admit an action of the braid group on $3$ strands, and these actions are compatible with the isomorphism between the Askey--Wilson algebra and the skein algebra. We give in this section a higher rank version of this result.
Recall that the braid group on $n$ strands $B_n$ is the group with the following presentation:
\[
\left\langle \; \beta_1,\ldots,\beta_{n-1}\ \middle\vert\ \beta_i\beta_{i+1}\beta_i = \beta_{i+1}\beta_i\beta_{i+1} \text{ for } 1 \leq i < n-1, \beta_i\beta_j= \beta_j\beta_i \text{ for } \lvert i-j \rvert > 1 \; \right\rangle.
\]
\subsection{Action of $B_n$ on the skein algebra of the $(n+1)$-punctured sphere}
\label{sec:braid_skein}
The braid group $B_n$ acts by half Dehn twists on the skein algebra of the $(n+1)$-punctured sphere. It permutes the punctures by anti-clockwise rotations and any framed link on the sphere is continuously deformed during the rotation process.
\begin{ex}
For example, the generator $\beta_2$ permutes anti-clockwise the second and third punctures and the framed link is deformed during the process:
\[\beta_2 \cdot \left( \diagramhh{braidgroup}{s134}{12pt}{1pt}{0.25}
\right) = \diagramhh{braidgroup}{braided}{12pt}{1pt}{0.25}\]
\end{ex}
\begin{prop}
\label{prop:action_skein}
Let $A=\{i_1<\cdots<i_k\}$ be a non-empty subset of $\{1,\ldots,n\}$ and $1 \leq i \leq n-1$. We have
\[
\beta_i\cdot s_A =
\begin{cases}
s_A & \text{if } i,i+1 \in A \text{ or } i,i+1\not\in A,\\
s_{\left(A\backslash \{i\}\right)\cup \{i+1\}} & \text{if } i\in A, i+1\not\in A,\\
\displaystyle \frac{\left[s_{A},s_{\{i,i+1\}}\right]_q-\left(q-q^{-1}\right)\left(s_{i+1}s_{A\cup\{i\}}+s_{i}s_{A\backslash\{i+1\}}\right)}{q^2-q^{-2}} & \text{if } i\not\in A, i+1\in A.
\end{cases}
\]
\end{prop}
\begin{proof}
If $i,i+1 \in A$ or $i,i+1\not\in A$, there is nothing to prove. If $i \in A$ and $i+1\not\in A$, it is clear that $\beta_i\cdot s_A =s_{(A\backslash \{i\})\cup \{i+1\}}$. The case $i\not\in A$ and $i+1\in A$ is a pleasant computation along the lines of the graphical proof of \cref{thm:commutators}.
\end{proof}
\subsection{Action of $B_n$ on the higher rank Askey--Wilson algebra}
\label{sec:braid_AW}
The action of the braid group $B_n$ on $\AW{n}$ by algebra automorphisms is given by conjugation by the $R$-matrix. Given $x\in \mathcal{U}_q(\slt)\otimes \mathcal{U}_q(\slt)$, we define $\beta(x)$ in (a completion of) $\mathcal{U}_q(\slt)\otimes \mathcal{U}_q(\slt)$ by:
\[
\beta(x) = \sigma\left(\Psi^{-1}\left(\Theta x \Theta^{-1}\right)\right)
\]
where $\sigma(a\otimes b) = b\otimes a$. For $1 \leq i \leq n-1$ the action of the generator $\beta_i\in B_n$ is given by the endomorphism $\id^{\otimes i-1}\otimes \beta \otimes \id^{\otimes n-i-1}$. Once again, we should act on a completion of $\mathcal{U}_q(\slt)^{\otimes n}$ because the quasi-$R$-matrix $\Theta$ is an infinite sum.
The properties of the quasi-$R$-matrix $\Theta$ ensure that we obtain an action of $B_n$.
Thanks to \cref{lem:tau-conj}, the generators $\Lambda_A$ of the Askey--Wilson algebra can be rewritten using the action of the braid group. Given $i<j$, let $\beta_{i,j} = \beta_{j-1}\cdots\beta_{i+1}\beta_{i}$. Then if $A=\{i_1<\cdots<i_k\}$ is a non-empty subset of $\{1,\ldots,n\}$, we have
\[
\Lambda_A = \beta_{1,i_1}\beta_{2,i_2}\cdots\beta_{k,i_k}\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right) \in \AW{n}.
\]
\begin{prop}
\label{prop:action_AW}
The action of the braid group on (a completion of) $\mathcal{U}_q(\slt)^{\otimes n}$ restricts to $\AW{n}$. Moreover, the images of the generators $\Lambda_A$ are given as follows:
\[
\beta_i\cdot \Lambda_A =
\begin{cases}
\Lambda_A & \text{if } i,i+1 \in A \text{ or } i,i+1\not\in A,\\
\Lambda_{(A\backslash \{i\})\cup \{i+1\}} & \text{if } i\in A,\; i+1\not\in A ,\\
\displaystyle -\frac{\left[\Lambda_{A},\;\Lambda_{\{i,i+1\}}\right]_q-\left(q-q^{-1}\right)\left(\Lambda_{i+1}\Lambda_{A\cup\{i\}}+\Lambda_{i}\Lambda_{A\backslash\{i+1\}}\right)}{q^2-q^{-2}} & \text{if } i\not\in A,\; i+1\in A\text{.}
\end{cases}
\]
\end{prop}
\begin{proof}
The first assertion follows from the explicit formulas for the action, since these show that $\beta_i\cdot\Lambda_A \in \AW{n}$ for all $1 \leq i \leq n-1$ and $A\subseteq \{1,\ldots,n\}$.
Let $A=\{i_1<\cdots <i_k\}$ be a non-empty subset of $\{1,\ldots,n\}$. We first suppose that $i\in A$. Let $j$ be such that $i_j = i$. Since $\beta_i\beta_{r,s} = \beta_{r,s}\beta_i$ if $i+1<r$ or $s<i$, we have
\begin{align*}
\beta_i\cdot \Lambda_A &= \beta_i\cdot \left(\beta_{1,i_1}\beta_{2,i_2}\cdots\beta_{k,i_k}\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right)\right)\\
&= \beta_{1,i_1}\cdots\beta_{j-1,i_{j-1}}\beta_i\beta_{j,i_j}\cdots\beta_{k,i_k}\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right)\\
&= \beta_{1,i_1}\cdots\beta_{j-1,i_{j-1}}\beta_{j,i+1}\beta_{j+1,i_{j+1}}\cdots\beta_{k,i_k}\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right),
\end{align*}
which is obviously equal to $\Lambda_{(A\backslash \{i\})\cup \{i+1\}}$ if $i+1\not\in A$.
If $i+1\in A$, then $i_{j+1} = i+1$ and since $\beta_{j,i+1}\beta_{j+1,i+1} = \beta_{j,i} \beta_{j,i+1} = \beta_{j,i} \beta_{j+1,i+1}\beta_j$, we find that
\[
\beta_i\cdot\Lambda_A = \beta_{1,i_1}\beta_{2,i_2}\cdots\beta_{k,i_k}\beta_j\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right).
\]
As $\Psi(\Delta^{\op}(x))\Theta = \Theta\Delta(x)$ and $j<k$, we have $\beta_j\cdot\left(\Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}\right) = \Delta^{(k-1)}(\Lambda)\otimes 1^{\otimes n-k}$. Therefore, $\beta_i\cdot\Lambda_A = \Lambda_A$.
We now suppose that $i\not\in A$. If we also have that $i+1\not\in A$ then the arguments are similar to the previous case. If $i+1\in A$, we set $A'=\big(A\backslash\{i+1\}\big)\cup\{i\}$. Thanks to \cref{thm:commutators}, we have
\[
\left[\Lambda_{A'},\;\Lambda_{\{i,i+1\}}\right]_q = -\left(q^2-q^{-2}\right)\Lambda_A + \left(q-q^{-1}\right)\left(\Lambda_{i}\Lambda_{A\cup\{i\}}+\Lambda_{i+1}\Lambda_{A\backslash\{i+1\}}\right).
\]
We now act with $\beta_i$ to obtain
\begin{multline*}
\left[\beta_i\cdot \Lambda_{A'},\;\beta_i\cdot \Lambda_{\{i,i+1\}}\right]_q \\= -\left(q^2-q^{-2}\right)\beta_i\cdot \Lambda_A + \left(q-q^{-1}\right)\left(\left(\beta_i\cdot \Lambda_{i}\right)\left(\beta_i\cdot\Lambda_{A\cup\{i\}}\right)+\left(\beta_i\cdot \Lambda_{i+1}\right)\left(\beta_i\cdot\Lambda_{A\backslash\{i+1\}}\right)\right).
\end{multline*}
But $\beta_i\cdot\Lambda_{A'} = \Lambda_A$, $\beta_i\cdot\Lambda_{\{i,i+1\}}=\Lambda_{\{i,i+1\}}$, $\beta_i\cdot\Lambda_{A\cup\{i\}}=\Lambda_{A\cup\{i\}}$ and $\beta_i\cdot\Lambda_{A\backslash\{i+1\}}=\Lambda_{A\backslash\{i+1\}}$ by the previous cases and $\beta_i\cdot\Lambda_{i} = \Lambda_{i+1}$ and $\beta_i\cdot\Lambda_{i+1} = \Lambda_{i}$ since the Casimir $\Lambda$ is central. We then obtain the formula for $\beta_i\cdot\Lambda_A$.
\end{proof}
From \cref{prop:action_skein} and \cref{prop:action_AW}, we immediately deduce the following:
\begin{prop}
The algebra isomorphism $\SkAlg{q}{\Sigma_{0, n+1}}\rightarrow \AW{n}$ given by $s_A\mapsto -\Lambda_A$ commutes with the action of the braid group $B_n$.
\end{prop}
\section{Graded Dimensions}
\label{sec:dimensions}
In this section we will compute the Hilbert series of the Askey--Wilson algebra $\AW{n}$ and the skein algebra $\SkAlg{q}{\Sigma_{g,r}}$ of the surface $\Sigma_{g,r}$ of genus $g$ with $r>0$ punctures. These algebras are filtered and their Hilbert series encode the vector space dimension of each graded part of the associated graded algebra. In the next section, we will use these Hilbert series to find presentations for $\AW{4}$ and $\SkAlg{q}{\Sigma_{0,5}}$.
\begin{defn}
The \emph{Hilbert series} of the \(\mathbb{Z}_{\geq 0}\) graded vector space \(A = \bigoplus_{n \in \mathbb{Z}_{\geq 0}} A[n]\) is the formal power series
\[
h_A(t) = \sum_{n \in \mathbb{Z}_{\geq 0}} \dim(A[n]) t^n.
\]
The Hilbert series of a \(\mathbb{Z}_{\geq 0}\) graded algebra \(A\) is the Hilbert series of its underlying \(\mathbb{Z}_{\geq 0}\) graded vector space.
\end{defn}
We will use the isomorphism from \cref{sec:moduli} between the Askey--Wilson algebra $\AW{n}$ and the subalgebra $\mathcal{L}_{\Sigma_{0,n+1}}^{\mathcal{U}_q(\slt)}$ of the Alekseev moduli algebra $\mathcal{L}_{\Sigma_{0,n+1}}$
which is invariant under the action of $\mathcal{U}_q(\slt)$, and also the isomorphism from \cref{sec:skein} between $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$ and the skein algebra $\SkAlg{q}{\Sigma_{g,r}}$, so that we can instead compute the Hilbert series of $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$; this computation is a generalisation of the calculations for $\Sigma_{0,4}$ by the first author in \cite{Cooke18}.
Recall that the Alekseev moduli algebra
\[\mathcal{L}_{\Sigma_{g,r}} = \mathcal{D}_q(\SL_2)^{\tilde{\otimes} g}\tilde{\otimes}\mathcal{O}_q(\SL_2)^{\tilde{\otimes} r-1} = \mathcal{O}_q(\SL_2)^{\hat{\otimes} 2g}\tilde{\otimes}\mathcal{O}_q(\SL_2)^{\tilde{\otimes} r-1}\]
is generated as an algebra by elements $(k_i)_{\delta}^{\epsilon} \in \mathcal{K}_i \hookrightarrow{} \mathcal{L}_{\Sigma_{g,r}}$ for $i \in \{1, \dots, 2g+r-1\}$. If we define the degree $ \left| (k_i)_{\delta}^{\epsilon} \right| = 1$ then all the relations of $\mathcal{L}_{\Sigma_{g,r}}$ are homogeneous except the determinant relations $(k_i)_+^+(k_i)_-^--q(k_i)_-^+(k_i)_+^- = 1$ for which the non-homogeneous element is in the ground ring. Hence, $\mathcal{L}_{\Sigma_{g,r}}$ is filtered with the $j^{th}$ filtered part $\left(\mathcal{L}_{\Sigma_{g,r}}\right)(j)$
being the vector space spanned by all monomials in the generators $(k_i)_{\delta}^{\epsilon}$ with degree at most $j$\footnote{Let $A$ be a filtered algebra with generators $\{x_i\}$ to which we assign degrees $|x_i| \in \mathbb{Z}_{>0}$. Then the degree of an element $f \in A$ is the smallest degree of any polynomial in the $\{x_i\}$ which represents $f$. Note that we have $\operatorname{degree}(fg) \leq \operatorname{degree}(f) + \operatorname{degree}(g)$ rather than equality.
}. As $\mathcal{L}_{\Sigma_{g,r}}$ is filtered rather than graded we need to consider its associated graded algebra.
\begin{defn}
The \emph{associated graded algebra} of the \(\mathbb{Z}_{\geq 0}\) filtered algebra \(A = \bigcup_{n \in \mathbb{Z}_{\geq 0}} A(n)\) is
\[
\mathscr{G}(A) = \bigoplus_{n \in \mathbb{Z}_{\geq 0}} A[n] \text{ where } A[n] = \begin{cases}
A(0) & \text{ for }n = 0 \\
\faktor{A(n)}{A(n-1)} &\text{ for } n > 0.
\end{cases}
\]
The \emph{Hilbert series} of the \(\mathbb{Z}_{\geq 0}\) filtered algebra \(A = \bigcup_{n \in \mathbb{Z}_{\geq 0}} A(n)\) is the Hilbert series of the associated graded algebra \(\mathscr{G}(A)\).
\end{defn}
As $\mathcal{L}_{\Sigma_{g,r}}$ is acted on by $\mathcal{U}_q(\slt)$ we can also decompose it into its weight spaces to obtain its character.
\begin{defn}
Let \(V\) be a vector space acted on by \(\mathcal{U}_q(\slt)\) and let \(V^k\) denote the \(q^k\)-weight space of \(V\) where \(k \in \mathbb{Z}\). The \emph{character} of \(V\) is the formal power series
\[
\ch_V(u) = \sum_{k \in \mathbb{Z}} \dim \left(V^{k} \right) u^{k}.
\]
\end{defn}
Using both the decomposition into graded parts and into weight spaces simultaneously gives the graded character.
\begin{defn}
Let \(V = \bigoplus_{n} V[n]\) be a graded vector space acted on by \(\mathcal{U}_q(\slt)\). The \emph{graded character} of \(V\) is
\[
h_V(u,t) := \sum_n \ch_{V[n]}(u) t^n = \sum_{n,k} \dim \left(V[n]^k\right) u^k t^n,
\]
where \(V[n]^k\) is the \(q^k\)-weight space of \(V[n]\). If \(V\) is filtered rather than graded, the graded character \(h_V(u,t)\) of \(V\) is \(h_{\mathscr{G}(V)}(u,t)\), the graded character of the associated graded vector space \(\mathscr{G}(V)\).
\end{defn}
As the Alekseev moduli algebra \(\mathcal{L}_{\Sigma_{g,r}}\) is simply multiple copies of the reflection equation algebra $\mathcal{O}_q(\SL_2)$ tensored together, its graded character is easy to determine:
\begin{prop}
\label{prop:charofL}
The graded character of \(\mathcal{L}_{\Sigma_{g,r}}\) is
\begin{align*}
h_{\mathcal{L}_{\Sigma_{g,r}}}(u,t) = \left(\frac{(1 + t)}{(1 - t) (1-u^2t) (1-u^{-2}t)}\right)^{2g + r - 1}.
\end{align*}
\end{prop}
\begin{proof}
From \cite[Proposition A.6.]{Cooke18} we have that the graded character of \(\mathcal{O}_q(\SL_2)\) is
\begin{align*}
h_{\mathcal{O}_q(\SL_2)}(u,t) = \frac{(1 + t)}{(1 - t) (1-u^2t) (1-u^{-2}t)}.
\end{align*}
As $\mathcal{L}_{\Sigma_{g,r}}$ is isomorphic to $\mathcal{O}_q(\SL_2)^{\otimes 2g + r - 1}$ as a graded vector space, this gives the result.
\end{proof}
\noindent
This can now be used to compute the Hilbert series of its $\mathcal{U}_q(\slt)$ invariant subalgebra:
\begin{thm}
\label{thm:dimensions}
The Hilbert series of $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$ is
\[h(t) = \frac{(1+t)^{n-2}}{(1-t)^{n}(1-t^2)^{2n-3}}\left(\sum_{k=0}^{n-2}{\binom{n-2}{k}}^2t^{2k} - \sum_{k=0}^{n-3}\binom{n-2}{k}\binom{n-2}{k+1}t^{2k+1}\right) \]
where $n = 2g + r -1$.
\end{thm}
\begin{proof}
As we have filtered isomorphisms, it is sufficient to prove this result for $\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$.
We have that
\begin{align*}
(1-ax)^{-n} &= \sum_{k=0}^{\infty} \rchoose{n}{k} a^k x^k,
\end{align*}
where $\rchoose{n}{k} := \displaystyle\binom{n+k-1}{k}$. Applying this to the graded character of $A = \mathcal{L}_{\Sigma_{g,r}}$ from \cref{prop:charofL} gives:
\begin{align*}
h_{A}(u,t) &= \left(\frac{1 + t}{1 - t}\right)^n \frac{1}{(1-u^2t)^n(1-u^{-2}t)^n} \\
&= \left(\frac{1 + t}{1 - t}\right)^n \left( \sum_{k=0}^{\infty} \rchoose{n}{k} u^{2k} t^k\right) \left( \sum_{l=0}^{\infty} \rchoose{n}{l} u^{-2l} t^l \right) \\
&= \left(\frac{1 + t}{1 - t}\right)^n \left( \sum_{k+l = m} \rchoose{n}{k} \rchoose{n}{l} u^{2(k-l)} t^m\right)
\end{align*}
As in \cite{Cooke18}, the Hilbert series \(h_{\mathscr{A}}(t)\) of $\mathscr{A} = \mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)}$ is given by the coefficient of \(u\) in \((u - u^{-1}) h_{A}(u,t)\). This coefficient is
\begin{align*}
h_{\mathscr{A}}(t)=&\left(\frac{1 + t}{1 - t}\right)^n \left( \sum_{k+l = m, k - l = 0} \rchoose{n}{k} \rchoose{n}{l} t^m - \sum_{k+l = m, k - l = 1} \rchoose{n}{k} \rchoose{n}{l} t^m \right) \\
=&\left(\frac{1 + t}{1 - t}\right)^n \left( \sum_{m = 0}^{\infty} \rchoose{n}{m}^2 t^{2m} - \sum_{m = 0}^{\infty} \rchoose{n}{m} \rchoose{n}{m+1} t^{2m+1} \right).
\end{align*}
Using \cite[(6.10)]{CGPdAV}, we have
\begin{multline*}
\frac{1}{(1 - t)^n} \left( \sum_{m = 0}^{\infty} \rchoose{n}{m}^2 t^{2m} - \sum_{m = 0}^{\infty} \rchoose{n}{m} \rchoose{n}{m+1} t^{2m+1} \right) \\ = \frac{(1+t)^{n-2}}{(1-t^2)^{3(n-1)}}\left(\sum_{k=0}^{n-2}{\binom{n-2}{k}}^2t^{2k} - \sum_{k=0}^{n-3}\binom{n-2}{k}\binom{n-2}{k+1}t^{2k+1}\right),
\end{multline*}
which gives the expected formula after multiplying by $(1+t)^n$.
\end{proof}
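The identity quoted from \cite[(6.10)]{CGPdAV} can be spot-checked numerically. The following SymPy sketch (ours, a finite-order check only and not a proof) compares both sides as power series in $t$ for small values of $n$.
\begin{verbatim}
from sympy import symbols, binomial, series

t = symbols('t')
N = 12   # compare coefficients up to t^(N-1)

def rchoose(n, k):   # the coefficient of x^k in (1 - x)^(-n)
    return binomial(n + k - 1, k)

for n in range(2, 5):
    lhs = (sum(rchoose(n, m)**2 * t**(2*m) for m in range(N))
           - sum(rchoose(n, m) * rchoose(n, m + 1) * t**(2*m + 1) for m in range(N)))
    lhs = lhs / (1 - t)**n
    rhs = (1 + t)**(n - 2) / (1 - t**2)**(3*(n - 1)) * (
        sum(binomial(n - 2, k)**2 * t**(2*k) for k in range(n - 1))
        - sum(binomial(n - 2, k) * binomial(n - 2, k + 1) * t**(2*k + 1)
              for k in range(n - 2)))
    # both sides should agree up to the chosen truncation order
    assert series(lhs - rhs, t, 0, N).removeO() == 0
\end{verbatim}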
Using the isomorphisms
\[
\mathcal{L}_{\Sigma_{g,r}}^{\mathcal{U}_q(\slt)} \to \SkAlg{q}{\Sigma_{g,r}}: -\Delta^{(k-1)}(\operatorname{tr}_q(\mathcal{K}))_{\underline{i}} \mapsto s_A \text{ and } \mathcal{L}_{\Sigma_{0, n+1}}^{\mathcal{U}_q(\slt)} \to \AW{n}: \Delta^{(k-1)}(\operatorname{tr}_q(\mathcal{K}))_{\underline{i}} \mapsto \Lambda_A
\]
from \cref{sec:moduli} and \cref{sec:skein} we can induce a filtered structure on $\SkAlg{q}{\Sigma_{g,r}}$ such that $s_A$ has degree $|A|$ and a filtered structure on $\AW{n}$ such that $\Lambda_A$ has degree $|A|$.
\begin{cor}
\label{cor:hilbskein}
The Hilbert series of the skein algebra $\SkAlg{q}{\Sigma_{g, r}}$ and the higher rank Askey--Wilson algebra $\AW{n}$ is
\[h(t) = \frac{(1+t)^{n-2}}{(1-t)^{n}(1-t^2)^{2n-3}}\left(\sum_{k=0}^{n-2}{\binom{n-2}{k}}^2t^{2k} - \sum_{k=0}^{n-3}\binom{n-2}{k}\binom{n-2}{k+1}t^{2k+1}\right) \]
where $n = 2g + r -1$.
\end{cor}
\begin{rmk}
In \cite[Section 6.2]{CGPdAV}, it is shown that $(1-t)^n h(t)$ is the Hilbert series of the centraliser of the diagonal action of $U(\mathfrak{sl}_2)$ in $U(\mathfrak{sl}_2)^{\otimes n}$ and that the numerator has positive coefficients.
The factor $\frac{1}{(1-t)^n}$ comes from the $n$ simple loops $s_1,\ldots,s_n$, which are central and satisfy no relations with the other loops, so that $\SkAlg{q}{\Sigma_{0, n+1}}$ is free over the subalgebra they generate.
\end{rmk}
\begin{rmk}
This Hilbert series can also be written in terms of the hypergeometric function as follows:
\[
h(t) = \frac{(1+t)^n}{(1-t)^n} \left( \prescript{}{2}{F}_1(n,n;1;t^2) - nt \prescript{}{2}{F}_1(n,n+1;2;t^2) \right)
\]
\end{rmk}
\section{Presentation of the Skein Algebra of the Five-Punctured Sphere}
\label{sec:presentation}
In this section we shall use the isomorphisms between the Askey--Wilson algebra $\AW{n}$, the \(\mathcal{U}_q(\slt)\)-invariants of the Alekseev moduli algebra and the skein algebra $\SkAlg{q}{\Sigma_{0, n+1}}$ together with the Hilbert series computed in the previous section to obtain a presentation for $\SkAlg{q}{\Sigma_{0, 5}}$ and therefore also for $\AW{4}$. This case represents the lowest of the higher-rank Askey--Wilson algebras and was the case considered by Post and Walker.
Presentations of the Kauffman bracket skein algebra in the punctured surface case are only known for a handful of the simplest cases: punctured spheres with up to four punctures and punctured tori with either one or two punctures.
The Hilbert series of $\SkAlg{q}{\Sigma_{g,r}}$ only depends on $n = 2g + r -1$ and so the cases for which a presentation for $\SkAlg{q}{\Sigma_{g,r}}$ is known correspond to $n=1,2,3$ whereas in this section we shall consider the five-punctured sphere which corresponds to $n=4$. Applying \cref{cor:hilbskein} for $n=4$ we get:
\begin{cor}
\label{cor:hilbn4}
The Hilbert series of $\SkAlg{q}{\Sigma_{0,5}}$ and $\AW{4}$ is
\[
h(t) = \frac{(1+t)^2}{(1-t)^4(1-t^2)^5}\left(1-2t+4t^2-2t^3+t^4\right)=\frac{1+t^2+4t^3+t^4+t^6}{(1-t)^4(1-t^2)^5}.
\]
\end{cor}
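As a consistency check, the two expressions in \cref{cor:hilbn4} agree as rational functions, and the low-order coefficients can be read off directly; the following short SymPy sketch (ours, purely illustrative) performs this expansion.
\begin{verbatim}
from sympy import symbols, cancel, series

t = symbols('t')

h1 = (1 + t)**2 * (1 - 2*t + 4*t**2 - 2*t**3 + t**4) / ((1 - t)**4 * (1 - t**2)**5)
h2 = (1 + t**2 + 4*t**3 + t**4 + t**6) / ((1 - t)**4 * (1 - t**2)**5)

assert cancel(h1 - h2) == 0          # the two forms of h(t) coincide
print(series(h1, t, 0, 4))           # 1 + 4*t + 16*t**2 + 48*t**3 + O(t**4)
\end{verbatim}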
The difficulty in finding presentations for more complex punctured surfaces is that the number of generators and relations required increases; moreover, whilst it is easy to find relations using diagrams and resolving crossings, it is difficult to prove that all the relations have been found. We overcome this second difficulty using the above Hilbert series.
In order to find the relations we will first generalise the relations in the presentation for the four-punctured sphere found by Bullock and Przytycki\footnote{We have corrected a sign error in the first relation which appears in the published version of the paper \cite{BullockPrzytycki00}.} before considering the additional relations which are of a genuinely different nature.
Let $x_1 = s_{12}, x_2 = s_{23}$ and $x_3 = s_{13}$ and let $s_4$ denote the loop around the outside puncture (see \cref{figure:FPuncSphereEdges}). If curve $x_i$ separates $s_i,s_j$ from $s_k, s_\ell$, let
$ p_i = s_i s_j + s_k s_\ell$. Explicitly,
\[
p_1 = s_1 s_2 + s_3 s_4,\quad \quad p_2 = s_2 s_3 + s_1 s_4,\quad \quad p_3 = s_1 s_3 + s_2 s_4
\]
\begin{thm}[\cite{BullockPrzytycki00}]
\label{thm:presentationofsphereskein}
As an algebra over the polynomial ring $R[s_1,s_2,s_3,s_4]$,
the Kauffman bracket skein algebra \(\SkAlg{q}{\Sigma_{0,4}}\) has a presentation with generators \(x_1,\,x_2,\,x_3\) and relations
\begin{align}
\left[x_i,\; x_{i+1}\right]_{q} &= \left(q^2 - q^{-2}\right)x_{i+2} + \left(q - q^{-1}\right)p_{i+2} \text{ (indices taken modulo 3)}; \label{eq:cubic_commutator}\\
\Omega_K &= \left(q +q^{-1}\right)^2 - \left(p_1 p_2 p_3 p_4 + p_1^2 + p_2^2 + p_3^2 + p_4^2\right); \label{eq:cubic}
\end{align}
where we have used the following Casimir element:
\[\Omega_K := -q x_1 x_2 x_3 + q^2 x_1^2 + q^{-2} x_2^2 + q^2 x_3^2 + q p_1 x_1 + q^{-1}p_2 x_2 + q p_3 x_3.\]
\end{thm}
\begin{figure}
\caption{This figure shows the product $x_1 x_2 x_3$, which is the leading term of the cubic relation, on the four-punctured sphere $\Sigma^{\bullet}_{0,4}$.}
\label{figure:FPuncSphereEdges}
\end{figure}
\begin{figure}
\caption{The leading terms for the four types of relations in $\SkAlg{q}{\Sigma_{0,5}}$.}
\label{figure:4looprels}
\end{figure}
The first set of relations (\cref{eq:cubic_commutator}) are the commutator relations from \cref{thm:commutators} and we will also want the commutator relations from this theorem for our case. The second relation (\cref{eq:cubic}) can be derived by taking $x_1 x_2 x_3$ and resolving all crossings. As there are four ways to embed three points into four points we will end up with four such cubic relations; that is, we will have relations whose left-hand sides are $s_{12} s_{23} s_{13}$, $s_{23} s_{34} s_{24}$, $s_{34} s_{14} s_{13}$ and $s_{12} s_{14} s_{24}$. We shall also have four cubic relations where one of the \emph{diagonal loops}, $s_{13}$ or $s_{24}$, has been replaced by a triple loop; these have left-hand sides $s_{12} s_{23} s_{134}$, $s_{23} s_{34} s_{124}$, $s_{34} s_{14} s_{123}$ and $s_{12} s_{14} s_{234}$.
Furthermore, instead of using three loops to create a closed loop, we can use four loops giving the \emph{quartic relation}:
\begin{align*}
s_{12}&s_{23}s_{34}s_{14} \\
&= s_{1}s_{3}s_{12}s_{23} + s_{2}s_{4}s_{12}s_{14} + s_{2}s_{4}s_{23}s_{34} + s_{1}s_{3}s_{34}s_{14} \\
&+ \left(qs_{3}s_{12} + qs_{2}s_{13} + q^{-1}s_{1}s_{23} + s_{123} + q^{-1}s_{4}s_{1234}+s_{1}s_{2}s_{3}\right)s_{123} \\
&+ \left(qs_{4}s_{13} + qs_{3}s_{14} + q^{-1}s_{1}s_{34} + s_{134} + q^{-1}s_{2}s_{1234}+s_{1}s_{3}s_{4}\right)s_{134} \\
&+ \left( qs_{4}s_{12} + qs_{2}s_{14} + q^{-1}s_{1}s_{24} + s_{124} + q^{-1}s_{3}s_{1234}+s_{1}s_{2}s_{4}\right)s_{124} \\
&+ q^{-2}\left(q^{-1}s_{4}s_{23} + q^{-1}s_{3}s_{24} + q^{-1}s_{2}s_{34} + q^{-2}s_{234} + q^{-1}s_{1}s_{1234} + \left(2-q^2\right)s_{2}s_{3}s_{4}\right)s_{234} \\
&+ \left(q^2s_{12} + \left(q+q^{-1}\right)s_{1}s_{2} +s_{3}s_{4}s_{1234}\right)s_{12} + \left(q^{-2}s_{23}+q^{-1}s_{2}s_{3}+q^{-2}s_{1}s_{4}s_{1234}\right)s_{23} \\
&+ \left(q^2s_{14} + \left(q+q^{-1}\right)s_{1}s_{4} +s_{2}s_{3}s_{1234}\right)s_{14} + \left(q^{-2}s_{34} + q^{-1}s_{3}s_{4}+q^{-2}s_{1}s_{2}s_{1234}\right)s_{34} \\
&+ \left(q^{-2}s_{24} +(q^{-1}-q)s_{2}s_{4}+q^{-2}s_{1}s_{3}s_{1234}\right)s_{24} +\left(q^2s_{13} + s_{2}s_{4}s_{1234}\right)s_{13} \\
&+q^{-2}s_{1234}^2 -s_{2}^2s_{4}^2 -s_{1}^2s_{3}^2 +q^{-1}s_{1}s_{2}s_{3}s_{4}s_{1234} +\left(2+q^{-2}\right)\left(s_{1}^2 + s_{2}^2 + s_{3}^2 + s_{4}^2\right) \\
&-2q^2-5-4q^{-2}-q^{-4}
\end{align*}
or we can create a closed loop from two triple loops:
\[s_{123}s_{134} = s_{12}s_{14} + s_{23}s_{34} -s_{3}s_{234} -s_{1}s_{124} + s_{1234}s_{13} -\left(q+q^{-1}\right)s_{24} -s_{2}s_{4}\]
\begin{figure}
\caption{Loops commute when the points in one are a subset of the points in the other as in the left image or when the loops are in different parts of the surface as in the right image.}
\label{figure:commuting}
\end{figure}
Whenever two loops do not intersect they commute. In the case $n=3$ this only happens when one of the loops contains either a single point or all the points; as such loops are central, this is encoded by adding them to the polynomial ring. In the case $n=4$ we still have these central loops, but we also have commuting pairs of loops neither of which is central. This happens when the points of one loop are a subset of the points of the other loop:
\[s_A s_B = s_B s_A \text{ for } A \subseteq B; \]
or when the loops are simply in different parts of the surface:
\[s_{12} s_{34} = s_{34} s_{12} \text{ and } s_{23} s_{14} = s_{14} s_{23}.\]
Note that determining whether two loops do not intersect is not as simple as noting that $A \cap B = \emptyset$ as we have
\begin{align*}
s_{13}s_{24} &= q^{-2}s_{12} s_{34} + q^2 s_{23} s_{14} + q^{-1} s_{1} s_{2} s_{34} + q^{-1} s_{3} s_{4} s_{12} + q s_{1} s_{4} s_{23} + q s_{2} s_{3} s_{14} \\
&+ s_{1} s_{234} + s_{4} s_{123} + s_{3} s_{124} + s_{2} s_{134}
+ s_{1} s_{2} s_{3} s_{4} + \left(q+q^{-1}\right)s_{1234}
\end{align*}
We shall call this relation the \emph{crossing relation}. Whilst in the case $n=4$ we only have a single crossing relation, relations of this type are a general feature of $\SkAlg{q}{\Sigma_{0, n+1}}$ for higher $n$.
\begin{figure}
\caption{There are three types of relation in $\SkAlg{q}{\Sigma_{0,5}}$ which involve the crossing of loops and the triple loops.}
\label{figure:crossandtriple}
\end{figure}
For the remaining relations we need to consider afresh the proof of \cref{thm:commutators}. The proof considers two loops $s_A$ and $s_B$ which are simply linked; that is to say, their intersection looks like the intersection of $s_{12}$ and $s_{23}$. When the crossings of $s_A s_B$ are resolved one of the resultant terms is not a simple loop as the loop goes around the outside of the punctures $C = A \cap B$ (see \cref{figure:extra} for examples of the non-simple loops you obtain). In the proof this term is eliminated using $s_B s_A$ to yield the commutator relation. However, note that $s_{12} s_{23} s_{34}$ and $s_{123} s_{234}$ both yield the same non-simple term $s_{1 \overline{23} 4}$. Hence, we get a relation
\begin{align*}
s_{123}s_{234} &= qs_{12}s_{23}s_{34} -qs_{3}s_{12}s_{234} -q^{-1}s_{2}s_{34}s_{123} -q^2s_{12}s_{24} - s_{34}s_{13} \\
&+s_{3}s_{134} + q^{-2}s_{2}s_{124} +\left(q^{-1} - q\right)s_{2}s_{4}s_{12} + s_{1234}s_{23} + \left(q+q^{-1}\right)s_{14} \\
&+ q^{-1}s_{2}s_{3}s_{1234}+s_{1}s_{4}
\end{align*}
and by symmetry similar relations with left-hand sides $s_{234} s_{134}$, $s_{134} s_{124}$ and $s_{124} s_{123}$.
Finally, resolving the crossing of $s_{24} s_{123}$ yields the terms $s_{4 \overline{1} 23}$ and $s_{12 \overline{3} 4}$ which can be obtained from $s_{14} s_{123}$ and $s_{34} s_{123}$ respectively. This gives the relation
\begin{align*}
s_{24}s_{123} &= q^2s_{12}s_{234} + q^{-2}s_{23}s_{124} + qs_{4}s_{12}s_{23} \\
&- qs_{2}s_{4}s_{123} - \left(q^3+q^{-3}\right)s_{134} -q^2s_{1}s_{34} -q^{-2}s_{3}s_{14} -q^2s_{4}s_{13} \\
&+ \left(1-q^2-q^{-2}\right)s_{2}s_{1234} -qs_{1}s_{3}s_{4}
\end{align*}
and by symmetry similar relations with left-hand sides $s_{13}s_{234}$, $s_{24} s_{134}$ and $s_{13}s_{124}$.
The remainder of this section will be dedicated to proving that this set of relations is complete.
\begin{thm}
\label{thm:presentation_fivepunctures}
The Kauffman bracket skein algebra $\SkAlg{q}{\Sigma_{0,5}}$ of the five-punctured sphere with generic $q$ has a presentation as an algebra over $R$ given by the simple loops $s_A$ for all $A \subseteq \{1,2,3,4\}$ with commuting relations
\begin{gather}
s_A \text{ for } |A| = 1 \text{ or } 4 \text{ is central }, \\
s_A s_{123} = s_{123} s_A \text{ for } A \in \{12, 23, 13\}, \\
s_{12} s_{34} = s_{34} s_{12} \text{ and } s_{23} s_{14} = s_{14} s_{23},
\end{gather}
commutator relations
\[
[s_{A},\; s_{B}]_q = \left(q^{2}- q^{-2}\right) s_{(A \cup B) \backslash (A \cap B)} + \left(q -q^{-1}\right) \left( s_{A \cap B} s_{A \cup B} + s_{A \backslash (A \cap B)} s_{B \backslash (A \cap B)} \right)
\]
where $A$ and $B$ are sets of points satisfying the conditions stated in \cref{thm:commutators}, and relations for the following terms (see Appendix for full list of relations)
\[\begin{array}{ccccr}
s_{12}s_{23}s_{13} & s_{23}s_{34}s_{24} & s_{34}s_{14}s_{13} & s_{12}s_{14} s_{24} &\text{(cubic relations)} \\
s_{12}s_{23}s_{134} & s_{23}s_{34}s_{124} & s_{34}s_{14}s_{123} & s_{12}s_{14} s_{234} & \text{(cubic relations with triples)} \\
&s_{12} s_{23} s_{34} s_{14} & s_{13} s_{24} && \text{(quartic and cross relations)} \\
& s_{123}s_{134} & s_{234} s_{124} && \text{(triple loop relations)} \\
s_{134} s_{124} & s_{123} s_{124} & s_{123} s_{234} & s_{234} s_{134} & \text{(triple link relations)} \\
s_{13} s_{234} & s_{13} s_{124} & s_{24} s_{123} & s_{24} s_{134} & \text{(double and triple relations)}
\end{array}
\]
\end{thm}
\begin{rmk}
Looking at the skein relations it is easy to see that, given a relation for $s_A s_B$, switching all the over crossings for under crossings gives a relation for $s_B s_A$ which has the same terms with modified coefficients. Furthermore, given a relation, reflecting all the terms in the vertical or horizontal plane (and again modifying the coefficients) gives another relation.
\end{rmk}
\subsection{Term Rewriting Systems and the Diamond Lemma}
In order to prove \cref{thm:presentation_fivepunctures} we shall use a Term Rewriting System (TRWS).
\begin{defn}
An \emph{abstract rewriting system} is a set A together with a binary relation \(\to\) on A called the \emph{reduction relation} or \emph{rewrite relation}.
\begin{enumerate}
\item It is \emph{terminating} if there are no infinite chains \(a_0 \to a_1 \to a_2 \to \dots\).
\item It is \emph{locally confluent} if for all \(y \xleftarrow{} x \xrightarrow{} z\) there exists an element \(y \downarrow z \in A\) such that there are paths \(y \to \dots \to (y \downarrow z) \) and \(z \to \dots \to (y \downarrow z) \).
\item It is \emph{confluent} if for all \(y \xleftarrow{} \dots \xleftarrow{} x \xrightarrow{} \dots \xrightarrow{} z\) there exists an element \(y \downarrow z \in A\) such that there are paths \(y \to \dots \to (y \downarrow z) \) and \(z \to \dots \to (y \downarrow z) \).
\end{enumerate}
In a terminating confluent abstract rewriting system an element \(a \in A\) will always reduce to a unique reduced expression regardless of the order of the reductions used.
\end{defn}
The \emph{diamond lemma} (or Newman's lemma) for abstract rewriting systems states that a terminating abstract rewriting system is confluent if and only if it is locally confluent. Bergman's diamond lemma is an application to ring theory of the diamond lemma for abstract rewriting systems. The definitions given in this section can be found in \cite[Section~1]{diamond}.
Let \(\mathcal{R}\) be a commutative ring with multiplicative identity and \(X\) be an alphabet (a set of symbols from which we form words).
\begin{defn}
A \emph{reduction system} \(S\) consists of term rewriting rules \(\sigma : W_{\sigma} \mapsto f_{\sigma}\) where \(W_{\sigma} \in \langle X \rangle\) is a word in the alphabet \(X\) and \(f_{\sigma} \in \mathcal{R} \langle X \rangle\) is a linear combination of words. A \emph{\(\sigma\)-reduction} \(r_{\sigma}(T)\) of an expression \(T \in \mathcal{R}\langle X \rangle\) is formed by replacing an instance of \(W_{\sigma}\) in \(T\) with \(f_{\sigma}\). A \emph{reduction} is a \(\sigma\)-reduction for some \(\sigma \in S\). If there are no possible reductions for an expression we say it is \emph{irreducible}.
\end{defn}
\begin{defn}
The five-tuple \((\sigma, \tau, A, B, C)\) with \(\sigma, \tau \in S\) and \(A,B, C \in \langle X \rangle\) is an \emph{overlap ambiguity} if \(W_{\sigma} = AB\) and \(W_{\tau} = BC\) and an \emph{inclusion ambiguity} if \(W_{\sigma} = B\) and \(W_{\tau} = ABC\).
These ambiguities are \emph{resolvable} if reducing \(ABC\) by starting with a \(\sigma\)-reduction gives the same result as starting with a \(\tau\)-reduction.
\end{defn}
\begin{ex}
Suppose we have an alphabet \(X = \{a,b\}\) and reduction system \(S=\{\,\sigma : ab \mapsto ba, \tau : ba \mapsto a\,\}\). Then \(r_{\sigma}(T) = aba + a\) is a \(\sigma\)-reduction of \(T= aab + a\). We also have an overlap ambiguity \((\sigma, \tau, a,b,a)\) which is resolvable as \( aba \xmapsto{r_{\sigma}} ba^2 \xmapsto{r_{\tau}} a^2 \) gives the same expression as \(aba \xmapsto{r_{\tau}} a^2\).
\end{ex}
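For concreteness, the following minimal Python sketch (ours, purely illustrative) implements this reduction system and confirms that the overlap ambiguity is resolvable.
\begin{verbatim}
# Illustrative sketch of the reduction system {ab -> ba, ba -> a}:
# apply rules repeatedly until no rule matches.
RULES = {"ab": "ba", "ba": "a"}

def reduce_word(w):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES.items():
            i = w.find(lhs)
            if i != -1:
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
                break
    return w

# The two one-step reductions of the ambiguous word aba are baa and aa;
# both reduce to the same irreducible word, so the ambiguity is resolvable.
assert reduce_word("baa") == reduce_word("aa") == "aa"
print(reduce_word("aab"))   # prints "aa"
\end{verbatim}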
\begin{defn}
A \emph{semigroup partial ordering} \(\leq\) on \(\langle X \rangle\) is a partial order such that \(B \leq B' \) implies that \(ABC \leq AB'C\) for all words \(A,B, B', C\). It is \emph{compatible with the reduction system} \(S\) if for all \(\sigma \in S\) the monomials in \(f_{\sigma}\) are less than \(W_{\sigma}\).
\end{defn}
\begin{defn}
A reduction system \(S\) satisfies the \emph{descending chain condition} or is \emph{terminating} if for any expression \(T \in \mathcal{R} \langle X \rangle\) any sequence of reductions terminates in a finite number of reductions with an irreducible expression.
\end{defn}
\begin{lem}[The Diamond Lemma {\cite[Theorem~1.2]{diamond}}]
Let \(S\) be a reduction system for \(\mathcal{R} \langle X \rangle\) and let \(\leq\) be a semigroup partial ordering on \(\langle X \rangle\) compatible with the reduction system \(S\) with the descending chain condition. The following are equivalent:
\begin{enumerate}
\item All ambiguities in \(S\) are resolvable (\(S\) is \emph{locally confluent});
\item Every element \(a \in \mathcal{R} \langle X \rangle\) can be reduced in a finite number of reductions to a unique expression \(r_S(a)\) (\(S\) is \emph{confluent});
\item The algebra \(K = \mathcal{R} \langle X \rangle / I\), where \(I\) is the two-sided ideal of \(\mathcal{R} \langle X \rangle\) generated by the elements \((W_{\sigma} - f_{\sigma})\), can be identified with the \(\mathcal{R}\)-algebra \(\mathcal{R} \langle X \rangle_{\mathrm{irr}}\) spanned by the \(S\)-irreducible monomials of \(\langle X \rangle\) with multiplication given by \(a \cdot b = r_S(ab)\). These \(S\)-irreducible monomials are called a Poincar\'e--Birkhoff--Witt basis of \(K\).
\end{enumerate}
\end{lem}
\subsection{Linear Basis for $\SkAlg{q}{\Sigma_{0,5}}$}
In this subsection we will construct a locally confluent, terminating term rewriting system from the relations stated in \cref{thm:presentation_fivepunctures} for $\SkAlg{q}{\Sigma_{0,5}}$. This will give a linear basis for $\SkAlg{q}{\Sigma_{0,5}}$ over the commutative ring $\mathcal{R} = R[s_{1}, s_{2}, s_{3}, s_{4}, s_{1234}]$, which we will then use to prove \cref{thm:presentation_fivepunctures}.
The obvious approach would be to take each relation from \cref{thm:presentation_fivepunctures} (except the first as this is implicit in our choice of base ring) and turn it into a rewriting rule. Whilst this approach works well for the relations whose left hand side is pairwise (i.e.\ the product of two loops), adding the non-pairwise relations leads to an infinite TRWS which would be difficult to prove confluent\footnote{In \cite{Cooke18} the confluence of the resulting infinite system was proven by induction for the case of $\SkAlg{q}{\Sigma_{0,4}}$; however, this method does not scale well.}.
\begin{ex}
Assume we construct a TRWS with all the pairwise relations from \cref{thm:presentation_fivepunctures} and the non-pairwise cubic relation for $s_{12} s_{23} s_{13}$. One of the ambiguities for this TRWS is the word $s_{23} s_{12} s_{23} s_{13}$, which on the one hand can be reduced using the cubic relation and on the other hand using the commutator relation for $s_{23} s_{12}$. Using the commutator relation gives as its leading term $s_{12} s_{23}^2 s_{13}$ which cannot be reduced further. However, the term $s_{12} s_{23}^2 s_{13}$ does not arise if you start reducing with the cubic relation, and thus the system is not confluent. In order to make a confluent system you would need to add a rewriting rule for $s_{12} s_{23}^2 s_{13}$. Considering $s_{23} s_{12} s_{23}^2 s_{13}$ and using the same argument as above leads to the conclusion that you also need $s_{12} s_{23}^3 s_{13}$ and indeed $s_{12} s_{23}^n s_{13}$ for all $n \in \mathbb{Z}_{>0}$ in the TRWS: that is, you end up with an infinite TRWS.
\end{ex}
In order to avoid an infinite TRWS we add some extra generators so that all the relations are pairwise.
In order to make the cubic relations pairwise we add the generators $s_{1 \overline{2} 3}$, $s_{2 \overline{3} 4}$, $s_{3 \overline{4} 1}$ and $s_{4 \overline{1} 2}$, and for the quartic relation we add the generators $s_{1 \overline{23} 4}$, $s_{2 \overline{34} 1}$, $s_{3 \overline{14} 2}$ and $s_{4 \overline{12} 3}$. Finally, we also need the generators $s_{12,34}$ and $s_{23,14}$, which are just the products $s_{12} s_{34}$ and $s_{23} s_{14}$ respectively, each considered as a single generator.
We shall order these extended generators and place them into five groups as follows:
\begin{description}[leftmargin=!,labelwidth=\widthof{\bfseries{\textrm{III}.}}]
\item[\textrm{I}.] $s_{12}$ $s_{23}$ $s_{34}$ $s_{14}$
\item[\textrm{II}.] $s_{1 \overline{2} 3}$ $s_{2 \overline{3} 4}$ $s_{3 \overline{4} 1}$ $s_{4 \overline{1} 2}$ $s_{12, 34}$ $s_{23,14}$
\item[\textrm{III}.] $s_{1 \overline{23} 4}$, $s_{2 \overline{34} 1}$, $s_{3 \overline{14} 2}$, $s_{4 \overline{12} 3}$
\item[\textrm{IV}.] $s_{13}$ $s_{24}$
\item[\textrm{V}.] $s_{123}$ $s_{234}$ $s_{134}$ $s_{124}$
\end{description}
\begin{figure}
\caption{Extra generators we add to make all the relations in $\SkAlg{q}{\Sigma_{0,5}}$ pairwise.}
\label{figure:extra}
\end{figure}
\begin{defn}
By convention if a point $x \not\in A$ then the loop $s_A$ passes on the inside of the point $x$ if the points are in a circle or below the point $x$ if the points are in a line. If instead a loop passes on the outside (or above) of the point $x$ we refer to $x$ as a \emph{double point}\footnote{In the handlebody decomposition of the surface, the handle associated to a double point will intersect the loop in two arcs.}. The extra generators have double points denoted by $\overline{x}$.
\end{defn}
Using these generators, the non-pairwise cubic relations become
\begin{align*}
s_{1 \overline{2} 3}s_{13}&= \left(q^{-1}s_{1}s_{23} + qs_{3}s_{12} + s_{1}s_{2}s_{3} + s_{123}\right)s_{123} + q^{-1}\big(s_{2}s_{3}\\ &+ q^{-1} s_{23}\big)s_{23} + q(s_{1}s_{2} + q s_{12})s_{12} + s_{1}^2 + s_{2}^2 + s_{3}^2 - \left(q + q^{-1}\right)^2,\\
s_{1 \overline{2} 3}s_{134}&= \left(q^{-1}s_{1}s_{23} + qs_{3}s_{12} + s_{1}s_{2}s_{3} + s_{123}\right)s_{1234} +q(s_{1}s_{2} + qs_{12})s_{124} \\
&+ q^{-1}\left(s_{2}s_{3} + q^{-1}s_{23}\right)s_{234} + s_{1}s_{14} + s_{2}s_{24} + s_{3}s_{34} + \left(q + q^{-1}\right)s_{4},
\end{align*}
and their symmetries, and the quartic relation becomes the two relations
\begin{align*}
s_{12}s_{2 \overline{34} 1}&= \left(q^{-1}s_{1}s_{234} + qs_{2}s_{134} + s_{1}s_{2}s_{34} + s_{1234}\right) s_{1234} + q(s_{1}s_{34} + qs_{134})s_{134} \\
&+ q^{-1}\left(s_{2}s_{34} + q^{-1}s_{234}\right)s_{234} + s_{1}^2 + s_{2}^2 + s_{34}^2 - \left(q + q^{-1}\right)^2,\\
s_{23}s_{3 \overline{14} 2}&= \left(qs_{3}s_{124} + q^{-1}s_{2}s_{134} + s_{2}s_{3}s_{14} +s_{1234} \right) s_{1234} + q(s_{2}s_{14}+ qs_{124})s_{124} \\
&+ q^{-1}\left(s_{3}s_{14} + q^{-1}s_{134}\right)s_{134} + s_{2}^2 + s_{3}^2 + s_{14}^2 - \left(q + q^{-1}\right)^2.
\end{align*}
Unfortunately, adding extra generators massively increases the number of relations, but these extra relations can be generated from the original relations in a manner which will now be described.
Firstly, we have the relations which relate the new generators to the simple generators; we shall call these the \emph{generator generating relations}. For the generators which consist of two disjoint loops these are trivial:
\[s_{12}s_{34} \mapsto s_{12 , 34} \quad s_{23}s_{14} \mapsto s_{23 , 14} \]
For the generators with a single double point the relations have the form:
\[s_{12}s_{23} \mapsto q^{-1}s_{1 \overline{2} 3} + qs_{13} + s_{1}s_{3} + s_{2}s_{123} \]
For the generators with two double points the relations have the form:
\begin{align*}
s_{12}s_{2 \overline{3} 4}&\mapsto q^{-1}s_{1 \overline{23} 4} + s_{1}s_{4} + qp^{-1}\left(s_{34}s_{13} - p^{-1}s_{14} - s_{1}s_{4} - s_{3}s_{134}\right) \\
&+ p^{-1}s_{2}\left(s_{34}s_{123} - p^{-1}s_{124} - s_{4}s_{12} - s_{3}s_{1234}\right) \\
s_{12}s_{23 , 14}&\mapsto qs_{13}s_{14} + q^{-1}s_{1 \overline{2} 3}s_{14} + s_{2}s_{123}s_{14} + s_{1}s_{3}s_{14}
\end{align*}
where $p = q$, except that when finding the coefficients for the symmetric relations $p$ is not inverted. For the full list of relations see the code\footnote{Code available at \href{https://github.com/jcooke848/Askey-Wilson-Algebras-as-Skein-Code.git}{https://github.com/jcooke848/Askey-Wilson-Algebras-as-Skein-Code.git} \label{code}}.
Now let $\{bc \mapsto r_{bc}\}$ be a relation in \cref{thm:presentation_fivepunctures} excluding the commuting and the commutator relations, and let $\{ab \mapsto r_{ab}\}$ be a relation in the generator generating relations. Hence, we have an equality
\[
r_{ab}c = abc = a r_{bc}
\]
and furthermore $r_{ab}c$ contains a new generator multiplied by $c$ on the right. Rearrange with respect to $mc$ where $m$ is the largest new generator in $r_{ab}$ and turn this equation into a reduction rule for $mc$.
After generating these new relations for all the original $\{bc \mapsto r_{bc}\}$ relations, iterate by generating the relations where $\{bc \mapsto r_{bc}\}$ is one of these newly generated relations. This generates all the new relations involving the extra generators apart from the commuting and commutator relations involving the extra generators. To generate these, consider monomials $ba$ where $b>a$, one of them is a new generator, and $ba$ is not reducible using the generator generating relations or any other relations we have generated so far. Consider the monomial $\prod b_i \prod a_j$ where $\prod b_i$ generates $b$ and $\prod a_j$ generates $a$ as the leading term using the generator generating relations. We can generate a relation if:
\begin{itemize}
\item All the $b_i$ terms are larger than all the $a_j$ terms by considering $x =\prod b_i a_j$
\item All the $b_i$ terms are smaller than all the $a_j$ terms by considering $x =\prod a_j b_i$
\end{itemize}
Take $x$ and apply the generator generating relations to $\prod b_i$ and $\prod a_j$ separately, then order the result using the commutator and commuting relations and rearrange the result to obtain a relation for $ba$. This generates a term rewriting system with 241 relations, which we shall call $S_{\mathcal{B}}$ and which we shall now show is confluent.
\begin{prop}
\label{prop:ambiguities}
All ambiguities in the term rewriting system $S_{\mathcal{B}}$ are resolvable.
\end{prop}
\begin{proof}
As all reductions in $S_{\mathcal{B}}$ are pairwise, it is sufficient to check the words $abc$ where $a,b,c$ are extended generators such that $\{\;ab\mapsto r_{ab},\; bc \mapsto r_{bc}\;\}$ are reductions in $S_{\mathcal{B}}$. As there are 241 relations, there is a very large number of such ambiguities, so we have used a computer to check them\footref{code}.
\end{proof}
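The check itself is straightforward to implement. The following Python sketch (ours, illustrative only and not the actual code used) indicates the idea: since every left-hand side has length two, the ambiguities are exactly the words $abc$ with both $ab$ and $bc$ reducible, and each such word is reduced to a normal form in both ways and the results compared. In practice the coefficients live in $\mathcal{R}$ and are simplified before comparison.
\begin{verbatim}
# Words are tuples of generators; a linear combination is a dict
# {word: coefficient}; each rule sends a length-two word to such a dict.
def normal_form(expr, rules):
    while True:
        new, done = {}, True
        for word, coeff in expr.items():
            for i in range(len(word) - 1):
                if word[i:i + 2] in rules:
                    done = False
                    for w, c in rules[word[i:i + 2]].items():
                        key = word[:i] + w + word[i + 2:]
                        new[key] = new.get(key, 0) + coeff * c
                    break
            else:
                new[word] = new.get(word, 0) + coeff
        expr = {w: c for w, c in new.items() if c != 0}
        if done:
            return expr

def all_ambiguities_resolve(rules):
    for (a, b) in rules:
        for (b2, c) in rules:
            if b2 != b:
                continue
            via_ab = {w + (c,): k for w, k in rules[(a, b)].items()}
            via_bc = {(a,) + w: k for w, k in rules[(b, c)].items()}
            if normal_form(via_ab, rules) != normal_form(via_bc, rules):
                return False
    return True

# Toy example: the three swaps needed to sort words in a, b, c resolve.
toy = {("b", "a"): {("a", "b"): 1},
       ("c", "a"): {("a", "c"): 1},
       ("c", "b"): {("b", "c"): 1}}
print(all_ambiguities_resolve(toy))   # True
\end{verbatim}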
To use the diamond lemma for ring theory we now need to prove that the system $S_{\mathcal{B}}$ terminates. We shall do this by constructing a partial order which is compatible with the term rewriting system. This partial order will be constructed by chaining together three different partial orders. The first ordering is ordering by \emph{reduced degree} \cite[Section~15]{ReducedOrder}:
\begin{defn}
Give the letters of the finite alphabet \(X\) an ordering \(x_1 \leq \dots \leq x_N\). Any word \(W\) of length \(n\) can be written as \(W = x_{i_1} \dots x_{i_n}\) where \(x_{i_j} \in X\). An \emph{inversion} of \(W\) is a pair \(k < l\) with \(x_{i_{k}} > x_{i_{l}}\), i.e.\ a pair with letters in the incorrect order. The number of inversions of \(W\) is denoted \(|W|\).
\end{defn}
\begin{defn}
Any expression \(T\) can be written as a linear combination of words \(T= \sum c_l W_l\). Define \(\rho_n(T):= \sum_{\operatorname{length}(W_l)=n, c_l \neq 0} |W_l|\). The \emph{reduced degree of \(T\)} is the largest \(n\) such that \(\rho_n(T) \neq 0\).
\end{defn}
\begin{defn}
Under the \emph{reduced degree ordering}, \(T \leq S\) if
\begin{enumerate}
\item The reduced degree of \(T\) is less than the reduced degree of \(S\), or
\item The reduced degrees of \(T\) and \(S\) are equal, but \(\rho_n(T) \leq \rho_n(S)\) for the maximal \(n\) such that \(\rho_n\) is nonzero.
\end{enumerate}
\end{defn}
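To illustrate these definitions, the following short Python sketch (ours; the generator ordering is supplied as a list and the sample names are placeholders) counts the inversions of a word and computes $\rho_{n}$ for an expression stored as a dictionary of words and coefficients.
\begin{verbatim}
def inversions(word, order):
    rank = {g: i for i, g in enumerate(order)}
    return sum(1 for k in range(len(word))
                 for l in range(k + 1, len(word))
                 if rank[word[k]] > rank[word[l]])

def rho(expr, n, order):
    # expr: dict {word (tuple of generators): nonzero coefficient}
    return sum(inversions(w, order) for w, c in expr.items()
               if len(w) == n and c != 0)

order = ["s12", "s23", "s13"]                    # an assumed ordering
print(inversions(("s23", "s12", "s13"), order))  # 1: the pair (s23, s12)
\end{verbatim}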
The second ordering is by total degree.
\begin{defn}
The \emph{total degree} of $T \in \mathcal{R}\langle X\rangle$ is the maximal degree of its monomials. Under the \emph{total degree ordering} $T \leq S$ if the total degree of $T$ is less than or equal to the total degree of $S$.
\end{defn}
\begin{defn}
Let $s$ be one of the extended generators. The \emph{degree} of $s$ is
\[\operatorname{degree}(s) = (\text{number of points inside loop}) + 2(\text{number of double points}),\]
so for example $s_{1 \overline{2} 3}$ has degree $4$.
The degree of a monomial is the sum of the degree of its terms.
\end{defn}
The final partial order is based on how near to each other, in the ordered list of groups, the loops that make up a monomial lie.
\begin{defn}
Let $\operatorname{group}(s) \in \{1,\dots,5\}$ denote the group (\textrm{I}--\textrm{V}) containing the loop $s$. For a monomial $m = \prod_{i \in I} s_i$ we define
\[\operatorname{nearness}(m) = \sum_{i,j \in I} | \operatorname{group}(s_i) - \operatorname{group}(s_j) |.\]
Let $\{m_i\}$ and $\{n_j\}$ be the maximal total degree monomials of expressions $T$ and $S$ respectively. Under the \emph{group distance ordering} $T \leq S$ if
\[\max_i(\text{number of distinct loops in }m_i) < \max_j(\text{number of distinct loops in }n_j)\]
or these maxima are equal and
\[\sum_i \operatorname{nearness}(m_i) > \sum_j \operatorname{nearness}(n_j).\]
\end{defn}
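The nearness statistic is equally direct to compute; a short sketch (ours, with the group of each loop supplied as a dictionary and the sum taken over unordered pairs of loops in the monomial):
\begin{verbatim}
def nearness(monomial, group):
    loops = list(monomial)
    return sum(abs(group[x] - group[y])
               for i, x in enumerate(loops)
               for y in loops[i + 1:])

group = {"s12": 1, "s1_2_3": 2, "s13": 4}           # groups I, II and IV
print(nearness(("s12", "s1_2_3", "s13"), group))    # |1-2|+|1-4|+|2-4| = 6
\end{verbatim}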
We now combine these three partial orders to obtain a single partial order of $\mathcal{R}\langle X\rangle$.
\begin{defn}
\label{defn:ordering}
Let $m, n \in \mathcal{R}\langle X\rangle$. We define $m \leq n$ if one of the following conditions is satisfied:
\begin{enumerate}
\item $m<n$ with respect to the reduced degree ordering
\item $m = n$ under the reduced degree ordering and $m<n$ with respect to the total degree ordering
\item $m = n$ under the reduced degree ordering, they have the same total degree and $m<n$ with respect to the group distance ordering.
\end{enumerate}
\end{defn}
\begin{lem}
\label{lem:partialorder}
The term rewriting system $S_{\mathcal{B}}$ is compatible with the ordering defined in \cref{defn:ordering}.
\end{lem}
\begin{proof}
This requires that for every rewriting rule $\{\sigma\mapsto r_{\sigma}\}$ we have $m < \sigma$ for all monomials $m$ in $r_{\sigma}$. This can easily be checked using the code\footref{code}.
\end{proof}
We can now apply the diamond lemma for ring theory.
\begin{thm}
The term-rewriting system $S_{\mathcal{B}}$ is confluent and hence the reduced monomials form a linear basis for the associated algebra $\mathcal{B}$.
\end{thm}
\begin{proof}
The ambiguities are resolvable by \cref{prop:ambiguities}, and by \cref{lem:partialorder} the ordering of \cref{defn:ordering} is a compatible semigroup partial ordering, so the term rewriting system terminates and hence we can apply the diamond lemma for ring theory.
\end{proof}
If we filter $\mathcal{B}$ by degree, we have a surjective filtered algebra homomorphism
\[
\phi: \mathcal{B} \to \SkAlg{q}{\Sigma_{0,5}}.
\]
We now need to prove that $\phi$ is an isomorphism and thus that $\mathcal{B}$ is a presentation for $\SkAlg{q}{\Sigma_{0,5}}$. To do this we shall compute the Hilbert series of $\mathcal{B}$ and show it is the same as the Hilbert series for $\SkAlg{q}{\Sigma_{0,5}}$ which we have already computed in \cref{thm:dimensions}.
\begin{prop}
The Hilbert series of $\mathcal{B}$ is
\[\frac{t^4 - 2t^3 + 4t^2 -2t +1}{(1-t^2)^3(1-t)^6}\]
\end{prop}
\begin{proof}
In order to compute the Hilbert series, we first consider the conditions a monomial $m = \prod_{i \in I} s_i$ must satisfy in order to be reduced and therefore lie in the linear basis. Firstly, note that there is a rewriting rule with left-hand side $yx$ for any $y>x$, so the $s_i$ must appear in increasing order. Furthermore, there is a rule for the product $xy$ of any two distinct loops in the same group; hence,
\[ m = s_{\textrm{I}}^{\alpha} s_{\textrm{II}}^{\beta} s_{\textrm{III}}^{\gamma} s_{\textrm{IV}}^{\delta} s_{\textrm{V}}^{\epsilon} \]
where for example $s_{\textrm{I}}$ is a loop in group 1 and $\alpha, \beta, \gamma, \delta, \epsilon \in \mathbb{Z}_{\geq 0}$.
Also note that as all the relations are pairwise we only need to concern ourselves with neighbouring terms in the monomial.
The Hilbert series of $\{\;x^m \mathrel{|} m \in \mathbb{Z}_{>0},\; \deg(x) = n\;\}$ is $\frac{t^n}{1-t^n}$,
so the Hilbert series when there is only a $s_{\textrm{I}}$ loop is
\[
1 + \frac{4t^2}{1-t^2}.
\]
Given a $s_{\textrm{II}}$ loop there are two possible choices for $s_{\textrm{I}}$; hence the Hilbert series for $s_\textrm{I}^{\alpha} s_{\textrm{II}}^{\beta}$ is
\[
1 + \frac{4t^2}{1-t^2} + \frac{6t^4}{1-t^4}\left(1 + \frac{2t^2}{1-t^2}\right)
= 1 + \frac{4t^2}{1-t^2} + \frac{6t^4}{(1-t^2)^2}
\]
Given a $s_{\textrm{III}}$ loop there are three choices of $s_{\textrm{II}}$ or if there is no $s_{\textrm{II}}$ loop there are three choices for $s_{\textrm{I}}$; hence the Hilbert series for $s_{\textrm{I}}^{\alpha} s_{\textrm{II}}^{\beta} s_{\textrm{III}}^{\gamma}$ is
\[
1 + \frac{4t^2}{1-t^2} + \frac{6t^4}{(1-t^2)^2} + \frac{4t^6}{1-t^6}\left( \frac{3t^4}{(1-t^2)^2} + \frac{3t^2}{1-t^2} + 1 \right) = \frac{1-t^8}{(1-t^2)^4}
\]
Given a $s_{\textrm{IV}}$ loop there are no choices for $s_{\textrm{III}}$ (the relations are derived from the cubic relation) and two choices for $s_{\textrm{V}}$. There are four choices for $s_{\textrm{II}}$ and if there is no $s_{\textrm{II}}$ loop there is a free choice of $s_{\textrm{I}}$; hence the Hilbert series for $s_{\textrm{I}}^{\alpha} s_{\textrm{II}}^{\beta} s_{\textrm{III}}^{\gamma} s_{\textrm{IV}}^{\delta} s_{\textrm{V}}^{\epsilon}$ assuming $\delta \neq 0$ is
\[
\frac{2t^2}{1-t^2}\left(\frac{2t^3}{1-t^3}+1\right)\left( \frac{4t^4}{(1-t^2)^2} + \frac{4t^2}{1-t^2} + 1\right)
\]
Finally, we assume there is no $s_{\textrm{IV}}$ loop but fix a $s_{\textrm{V}}$ loop. There are two choices for $s_{\textrm{III}}$, if there is no $s_{\textrm{III}}$ loop there are five choices for $s_{\textrm{II}}$ and if there is only a $s_{\textrm{I}}$ loop then there is a free choice. Hence, the Hilbert series for $s_{\textrm{I}}^{\alpha} s_{\textrm{II}}^{\beta} s_{\textrm{III}}^{\gamma} s_{\textrm{V}}^{\epsilon}$ assuming $\epsilon \neq 0$ is
\[
\frac{4t^3}{1-t^3} \left(\frac{2t^6}{(1-t^2)^3} + \frac{5t^4}{(1-t^2)^2} + \frac{4t^2}{1-t^2} + 1 \right) = \frac{4t^3}{1-t^3} \frac{(1-t^4)}{(1-t^2)^4}.
\]
Combining these cases gives the Hilbert series of $\mathcal{B}$ as a free module over $\mathcal{R}$; multiplying by the Hilbert series $1/\left((1-t)^4(1-t^4)\right)$ of the base ring $\mathcal{R} = R[s_{1}, s_{2}, s_{3}, s_{4}, s_{1234}]$, whose generators have degrees $1,1,1,1$ and $4$, gives a Hilbert series for $\mathcal{B}$ of
\begin{gather*}
\left(\frac{1-t^8}{(1-t^2)^4} + \frac{2t^2}{1-t^2}\left(\frac{2t^3}{1-t^3}+1\right)\left( \frac{4t^2}{(1-t^2)^2} + 1\right) + \frac{4t^3}{1-t^3} \frac{(1-t^4)}{(1-t^2)^4}\right)\frac{1}{(1-t)^4(1-t^4)} \\
= \frac{t^4 - 2t^3 + 4t^2 -2t +1}{(1-t^2)^3(1-t)^6}
\end{gather*}
\end{gather*}
as required.
\end{proof}
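As a sanity check, the final identity can also be verified symbolically; the following short SymPy computation (ours, independent of the code referenced earlier) confirms that the sum of the cases above, multiplied by the Hilbert series of the base ring $\mathcal{R}$, equals the stated closed form.
\begin{verbatim}
from sympy import symbols, simplify

t = symbols('t')
cases = ((1 - t**8)/(1 - t**2)**4
         + (2*t**2/(1 - t**2))*(2*t**3/(1 - t**3) + 1)
           *(4*t**2/(1 - t**2)**2 + 1)
         + (4*t**3/(1 - t**3))*(1 - t**4)/(1 - t**2)**4)
base_ring = 1/((1 - t)**4*(1 - t**4))   # generator degrees 1,1,1,1,4
closed = (t**4 - 2*t**3 + 4*t**2 - 2*t + 1)/((1 - t**2)**3*(1 - t)**6)
print(simplify(cases*base_ring - closed))   # prints 0
\end{verbatim}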
This means that the surjection $\phi$ is in fact an isomorphism
\[\phi: \mathcal{B} \xrightarrow{\sim} \SkAlg{q}{\Sigma_{0,5}}\]
and therefore we have a presentation for $\SkAlg{q}{\Sigma_{0,5}}$. Finally, we will remove the extra generators to reduce the presentation to that in \cref{thm:presentation_fivepunctures} thus proving this theorem.
\begin{proof}[Proof of \cref{thm:presentation_fivepunctures}]
We have that $\mathcal{B}$ is a presentation for $\SkAlg{q}{\Sigma_{0,5}}$ as the algebras are isomorphic. Using generator generating relations we can eliminate the non-simple loop generators of $\mathcal{B}$. The relations in $\mathcal{B}$ between simple loops are the same as in $\mathcal{A}$ and it is straightforward to check that the new cubic and quartic relations reduce (see code). The extra relations in $\mathcal{B}$ which were generated from these relations reduce by how they were defined. Hence, we can reduce the presentation $\mathcal{B}$ thus concluding the proof.
\end{proof}
\begin{appendices}
\crefalias{section}{appsec}
\section{Appendix}
\label{app:1}
In this appendix we explicitly list the full set of relations for the presentation of $\SkAlg{q}{\Sigma_{0,5}}$ given in \cref{thm:presentation_fivepunctures}.
\subsection{Commuting}
\begin{align*}
s_{124}s_{24} &= s_{24}s_{124} &
s_{124}s_{14} &= s_{14}s_{124} &
s_{134}s_{14} &= s_{14}s_{134}\\
s_{134}s_{13} &= s_{13}s_{134} &
s_{124}s_{12} &= s_{12}s_{124} &
s_{234}s_{34} &= s_{34}s_{234}\\
s_{234}s_{24} &= s_{24}s_{234} &
s_{134}s_{34} &= s_{34}s_{134} &
s_{234}s_{23} &= s_{23}s_{234}\\
s_{123}s_{13} &= s_{13}s_{123} &
s_{123}s_{12} &= s_{12}s_{123} &
s_{123}s_{23} &= s_{23}s_{123}\\
s_{14}s_{23} &= s_{23}s_{14} &
s_{34}s_{12} &= s_{12}s_{34} &
\end{align*}
\subsection{Commutators}
\begin{align*}
s_{23}s_{12} &= \left(1-q^2\right)s_{1}s_{3} + \left(q^{-1} - q^3\right)s_{13} + \left(1-q^2\right)s_{2}s_{123} + q^2s_{12}s_{23}\\
s_{34}s_{23} &= \left(1-q^2\right)s_{2}s_{4} + \left(q^{-1} - q^3\right)s_{24} + \left(1-q^2\right)s_{3}s_{234} + q^2s_{23}s_{34}\\
s_{14}s_{12} &= \left(1-q^{-2}\right)s_{2}s_{4} + \left(q - q^{-3}\right)s_{24} + \left(1-q^{-2}\right)s_{1}s_{124} + q^{-2}s_{12}s_{14}\\
s_{14}s_{34} &= \left(1-q^2\right)s_{1}s_{3} + \left(q^{-1} - q^3\right)s_{13} + \left(1-q^2\right)s_{4}s_{134} + q^2s_{34}s_{14}\\
s_{13}s_{12} &= \left(1-q^{-2}\right)s_{2}s_{3} + \left(q - q^{-3}\right)s_{23} + \left(1-q^{-2}\right)s_{1}s_{123} + q^{-2}s_{12}s_{13}\\
s_{13}s_{23} &= \left(1-q^2\right)s_{1}s_{2} + \left(q^{-1} - q^3\right)s_{12} + \left(1-q^2\right)s_{3}s_{123} + q^2s_{23}s_{13}\\
s_{13}s_{14} &= \left(1-q^2\right)s_{3}s_{4} + \left(q^{-1} - q^3\right)s_{34} + \left(1-q^2\right)s_{1}s_{134} + q^2s_{14}s_{13}\\
s_{24}s_{12} &= \left(1-q^2\right)s_{1}s_{4} + \left(q^{-1} - q^3\right)s_{14} + \left(1-q^2\right)s_{2}s_{124} + q^2s_{12}s_{24}\\
s_{24}s_{23} &= \left(1-q^{-2}\right)s_{3}s_{4} + \left(q - q^{-3}\right)s_{34} + \left(1-q^{-2}\right)s_{2}s_{234} + q^{-2}s_{23}s_{24}\\
s_{24}s_{34} &= \left(1-q^2\right)s_{2}s_{3} + \left(q^{-1} - q^3\right)s_{23} + \left(1-q^2\right)s_{4}s_{234} + q^2s_{34}s_{24}\\
s_{24}s_{14} &= \left(1-q^{-2}\right)s_{1}s_{2} + \left(q - q^{-3}\right)s_{12} + \left(1-q^{-2}\right)s_{4}s_{124} + q^{-2}s_{14}s_{24}\\
s_{13}s_{34} &= \left(1-q^{-2}\right)s_{1}s_{4} + \left(q - q^{-3}\right)s_{14} + \left(1-q^{-2}\right)s_{3}s_{134} + q^{-2}s_{34}s_{13}\\
s_{123}s_{34} &= \left(1-q^{-2}\right)s_{3}s_{1234} + \left(1-q^{-2}\right)s_{4}s_{12} + \left(q - q^{-3}\right)s_{124} + q^{-2}s_{34}s_{123}\\
s_{123}s_{14} &= \left(1-q^2\right)s_{1}s_{1234} + \left(1-q^2\right)s_{4}s_{23} + \left(q^{-1} - q^3\right)s_{234} + q^2s_{14}s_{123}\\
s_{234}s_{12} &= \left(1-q^2\right)s_{2}s_{1234} + \left(1-q^2\right)s_{1}s_{34} + \left(q^{-1} - q^3\right)s_{134} + q^2s_{12}s_{234}\\
s_{234}s_{14} &= \left(1-q^{-2}\right)s_{4}s_{1234} + \left(1-q^{-2}\right)s_{1}s_{23} + \left(q - q^{-3}\right)s_{123} + q^{-2}s_{14}s_{234}\\
s_{134}s_{23} &= \left(1-q^2\right)s_{3}s_{1234} + \left(1-q^2\right)s_{2}s_{14} + \left(q^{-1} - q^3\right)s_{124} + q^2s_{23}s_{134}\\
s_{134}s_{12} &= \left(1-q^{-2}\right)s_{1}s_{1234} + \left(1-q^{-2}\right)s_{2}s_{34} + \left(q - q^{-3}\right)s_{234} + q^{-2}s_{12}s_{134}\\
s_{124}s_{23} &= \left(1-q^{-2}\right)s_{2}s_{1234} + \left(1-q^{-2}\right)s_{3}s_{14} + \left(q - q^{-3}\right)s_{134} + q^{-2}s_{23}s_{124}\\
s_{124}s_{34} &= \left(1-q^2\right)s_{4}s_{1234} + \left(1-q^2\right)s_{3}s_{12} + \left(q^{-1} - q^3\right)s_{123} + q^2s_{34}s_{124}\\
\end{align*}
\subsection{Cubic Relations}
\begin{align*}
s_{34}s_{14}s_{13} &= -\left(q + 2q^{-1} + q^{-3}\right)+q^{-1}s_{4}^2+q^{-1}s_{3}^2+q^{-1}s_{1}^2 \\
&+ s_{3}s_{4}s_{34} + q^{-2}s_{1}s_{4}s_{14} + s_{1}s_{3}s_{13} + q^{-1}s_{1}s_{3}s_{4}s_{134} + qs_{34}^2 + s_{1}s_{34}s_{134} \\
&+ q^{-3}s_{14}^2 + q^{-2}s_{3}s_{14}s_{134} + qs_{13}^2 + s_{4}s_{13}s_{134} + q^{-1}s_{134}^2\\
s_{12}s_{14}s_{24} &= -\left(q^3 + 2q + q^{-1}\right)+qs_{4}^2+qs_{2}^2+qs_{1}^2 \\
&+ s_{1}s_{2}s_{12} + q^2s_{1}s_{4}s_{14} + s_{2}s_{4}s_{24} + qs_{1}s_{2}s_{4}s_{124} + q^{-1}s_{12}^2 + s_{4}s_{12}s_{124} \\
&+ q^3s_{14}^2 + q^2s_{2}s_{14}s_{124} + q^{-1}s_{24}^2 + s_{1}s_{24}s_{124} + qs_{124}^2\\
s_{12}s_{23}s_{13} &= -\left(q + 2q^{-1} + q^{-3}\right)+q^{-1}s_{3}^2+q^{-1}s_{2}^2+q^{-1}s_{1}^2 \\
&+ s_{1}s_{2}s_{12} + q^{-2}s_{2}s_{3}s_{23} + s_{1}s_{3}s_{13} + q^{-1}s_{1}s_{2}s_{3}s_{123} + qs_{12}^2 + s_{3}s_{12}s_{123} \\
&+ q^{-3}s_{23}^2 + q^{-2}s_{1}s_{23}s_{123} + qs_{13}^2 + s_{2}s_{13}s_{123} + q^{-1}s_{123}^2\\
s_{23}s_{34}s_{24} &= -\left(q + 2q^{-1} + q^{-3}\right)+q^{-1}s_{4}^2+q^{-1}s_{3}^2+q^{-1}s_{2}^2 \\
&+ s_{2}s_{3}s_{23} + q^{-2}s_{3}s_{4}s_{34} + s_{2}s_{4}s_{24} + q^{-1}s_{2}s_{3}s_{4}s_{234} + qs_{23}^2 + s_{4}s_{23}s_{234} \\
&+ q^{-3}s_{34}^2 + q^{-2}s_{2}s_{34}s_{234} + qs_{24}^2 + s_{3}s_{24}s_{234} + q^{-1}s_{234}^2\\
\end{align*}
\subsection{Cubic Relations with Triples}
\begin{align*}
s_{12}s_{14}s_{234} &= \left(q^2 + 1\right)s_{3}+qs_{1}s_{2}s_{4}s_{1234}-q^2s_{1}^2s_{3} + s_{4}s_{1234}s_{12} + qs_{2}s_{23} \\
&+ qs_{4}s_{34} + q^2s_{2}s_{1234}s_{14} -q^3s_{1}s_{13} + s_{1}s_{1234}s_{24} + s_{2}s_{4}s_{234} + qs_{1234}\\
&+s_{124} + s_{1}s_{12}s_{23} + q^{-1}s_{12}s_{123} + q^2s_{1}s_{34}s_{14} + q^3s_{14}s_{134} + q^{-1}s_{24}s_{234}\\
s_{34}s_{14}s_{123} &= \left(1+q^{-2}\right)s_{2}+\left(1-q^2-q^{-2}\right)s_{2}s_{4}^2+q^{-1}s_{1}s_{3}s_{4}s_{1234} + q^{-1}s_{1}s_{12} \\
&+ q^{-1}s_{3}s_{23} + s_{1}s_{1234}s_{34} + q^{-2}s_{3}s_{1234}s_{14} + s_{4}s_{1234}s_{13} \\
&+ \left(-q^3+q^{-1}-q^{-3}\right)s_{4}s_{24} + s_{1}s_{3}s_{123} + \left(-q^2+1\right)s_{3}s_{4}s_{234} + q^{-1}s_{1234}s_{134} \\
&+ q^{-2}s_{4}s_{12}s_{14} + q^2s_{4}s_{23}s_{34} + qs_{34}s_{234} + q^{-3}s_{14}s_{124} + qs_{13}s_{123}\\
s_{23}s_{34}s_{124} &= q^{-1}s_{2}s_{3}s_{4}s_{1234}+\left(1+q^{-2}\right)s_{1}-q^2s_{1}s_{3}^2 + q^{-1}s_{2}s_{12} \\
&+ s_{4}s_{1234}s_{23} + q^{-2}s_{2}s_{1234}s_{34} + q^{-1}s_{4}s_{14} + \left(-q^3-q+q^{-1}\right)s_{3}s_{13} \\
&+ s_{3}s_{1234}s_{24} + \left(-q^2+1\right)s_{2}s_{3}s_{123} + q^{-1}s_{1234}s_{234} + \left(q^{-2}-1\right)s_{3}s_{4}s_{134} \\
&+ s_{2}s_{4}s_{124} + q^2s_{3}s_{12}s_{23} + qs_{23}s_{123} + s_{3}s_{34}s_{14} + q^{-3}s_{34}s_{134} + qs_{24}s_{124}\\
s_{12}s_{23}s_{134} &= \left(1+q^{-2}\right)s_{4}- s_{2}^2s_{4}+q^{-1}s_{1}s_{2}s_{3}s_{1234} + s_{3}s_{1234}s_{12} \\
&+ q^{-2}s_{1}s_{1234}s_{23} + q^{-1}s_{3}s_{34} + q^{-1}s_{1}s_{14} + s_{2}s_{1234}s_{13} -qs_{2}s_{24} \\
&+ q^{-1}s_{1234}s_{123} + \left(q^{-2}-1\right)s_{2}s_{3}s_{234} + s_{1}s_{3}s_{134} + s_{2}s_{12}s_{14} + qs_{12}s_{124} \\
&+ s_{2}s_{23}s_{34} + q^{-3}s_{23}s_{234} + qs_{13}s_{134}
\end{align*}
\subsection{Quartic Relation}
\begin{align*}
s_{12}&s_{23}s_{34}s_{14} \\
&= s_{1}s_{3}s_{12}s_{23} + s_{2}s_{4}s_{12}s_{14} + s_{2}s_{4}s_{23}s_{34} + s_{1}s_{3}s_{34}s_{14} \\
&+ \left(qs_{3}s_{12} + qs_{2}s_{13} + q^{-1}s_{1}s_{23} + s_{123} + q^{-1}s_{4}s_{1234}+s_{1}s_{2}s_{3}\right)s_{123} \\
&+ \left(qs_{4}s_{13} + qs_{3}s_{14} + q^{-1}s_{1}s_{34} + s_{134} + q^{-1}s_{2}s_{1234}+s_{1}s_{3}s_{4}\right)s_{134} \\
&+ \left( qs_{4}s_{12} + qs_{2}s_{14} + q^{-1}s_{1}s_{24} + s_{124} + q^{-1}s_{3}s_{1234}+s_{1}s_{2}s_{4}\right)s_{124} \\
&+ q^{-2}\left(q^{-1}s_{4}s_{23} + q^{-1}s_{3}s_{24} + q^{-1}s_{2}s_{34} + q^{-2}s_{234} + q^{-1}s_{1}s_{1234} + (2-q^2)s_{2}s_{3}s_{4}\right)s_{234} \\
&+ \left(q^2s_{12} + \left(q+q^{-1}\right)s_{1}s_{2} +s_{3}s_{4}s_{1234}\right)s_{12} + \left(q^{-2}s_{23}+q^{-1}s_{2}s_{3}+q^{-2}s_{1}s_{4}s_{1234}\right)s_{23} \\
&+ \left(q^2s_{14} + \left(q+q^{-1}\right)s_{1}s_{4} +s_{2}s_{3}s_{1234}\right)s_{14} + \left(q^{-2}s_{34} + q^{-1}s_{3}s_{4}+q^{-2}s_{1}s_{2}s_{1234}\right)s_{34} \\
&+ \left(q^{-2}s_{24} +\left(q^{-1}-q\right)s_{2}s_{4}+q^{-2}s_{1}s_{3}s_{1234}\right)s_{24} +\left(q^2s_{13} + s_{2}s_{4}s_{1234}\right)s_{13} \\
&+q^{-2}s_{1234}^2 -s_{2}^2s_{4}^2 -s_{1}^2s_{3}^2 +q^{-1}s_{1}s_{2}s_{3}s_{4}s_{1234} +\left(2+q^{-2}\right)\left(s_{1}^2 + s_{2}^2 + s_{3}^2 + s_{4}^2\right)\\
&-2q^2-5-4q^{-2}-q^{-4}\\
\end{align*}
\subsection{Loop Triple Relations}
\begin{align*}
s_{123}s_{134} &= -s_{2}s_{4} + s_{1234}s_{13} -\left(q + q^{-1}\right)s_{24} -s_{3}s_{234} -s_{1}s_{124} + s_{12}s_{14} + s_{23}s_{34}\\
s_{134}s_{123} &= \left(1 - q^2 - q^{-2}\right)s_{2}s_{4} + s_{1234}s_{13} -\left(q^3 - q^{-3}\right)s_{24} -q^2s_{3}s_{234} -q^{-2}s_{1}s_{124} \\
&+ q^{-2}s_{12}s_{14} + q^2s_{23}s_{34}\\
s_{234}s_{124} &= -q^2s_{1}s_{3} - \left(q^3+q\right)s_{13} + s_{1234}s_{24} -q^2s_{2}s_{123} -s_{4}s_{134} + q^2s_{12}s_{23} + s_{34}s_{14}\\
s_{124}s_{234} &= -q^2s_{1}s_{3} - \left(q^3+q\right)s_{13} + s_{1234}s_{24} -s_{2}s_{123} -q^2s_{4}s_{134} + s_{12}s_{23} + q^2s_{34}s_{14}\\
\end{align*}
\subsection{Link Triple Relations}
\begin{align*}
s_{123}s_{234} &= q^{-1}s_{2}s_{3}s_{1234}+s_{1}s_{4} + \left(q^{-1} - q\right)s_{2}s_{4}s_{12} + s_{1234}s_{23} + \left(q + q^{-1}\right)s_{14} \\
&+ s_{3}s_{134} + q^{-2}s_{2}s_{124} -q^2s_{12}s_{24} -qs_{3}s_{12}s_{234} -s_{34}s_{13} -q^{-1}s_{2}s_{34}s_{123}\\
&+ qs_{12}s_{23}s_{34}\\
s_{234}s_{123} &= qs_{2}s_{3}s_{1234}+s_{1}s_{4} + \left(-q^3+q\right)s_{2}s_{4}s_{12} + s_{1234}s_{23} + \left(q + q^{-1}\right)s_{14} \\
&+ q^2s_{3}s_{134} + s_{2}s_{124} -q^4s_{12}s_{24} -q^3s_{3}s_{12}s_{234} -q^2s_{34}s_{13} -qs_{2}s_{34}s_{123}\\
&+ q^3s_{12}s_{23}s_{34}\\
s_{124}s_{134} &= s_{2}s_{3}+qs_{1}s_{4}s_{1234} + \left(q + q^{-1}\right)s_{23} + s_{1234}s_{14} + q^2s_{1}s_{123} \\
&+ s_{4}s_{234} -q^2s_{12}s_{13} -qs_{4}s_{12}s_{134} -s_{34}s_{24} -qs_{1}s_{34}s_{124} \\
&+ qs_{12}s_{34}s_{14}\\
s_{134}s_{124} &= s_{2}s_{3}+q^{-1}s_{1}s_{4}s_{1234} + \left(q + q^{-1}\right)s_{23} + s_{1234}s_{14} + s_{1}s_{123} \\
&+ q^{-2}s_{4}s_{234} -s_{12}s_{13} -q^{-1}s_{4}s_{12}s_{134} -q^{-2}s_{34}s_{24} -q^{-1}s_{1}s_{34}s_{124}\\
&+ q^{-1}s_{12}s_{34}s_{14}\\
s_{134}s_{234} &= qs_{3}s_{4}s_{1234}+s_{1}s_{2} + \left(q + q^{-1}\right)s_{12} + \left(-q^3+q\right)s_{1}s_{3}s_{23} + s_{1234}s_{34} \\
&+ s_{3}s_{123} + q^2s_{4}s_{124} -q^4s_{23}s_{13} -q^3s_{4}s_{23}s_{134} -q^2s_{14}s_{24} -qs_{3}s_{14}s_{234} \\
&+ q^3s_{23}s_{34}s_{14}\\
s_{234}s_{134} &= q^{-1}s_{3}s_{4}s_{1234}+s_{1}s_{2} + \left(q + q^{-1}\right)s_{12} + \left(q^{-1} - q\right)s_{1}s_{3}s_{23} + s_{1234}s_{34}\\
&+ q^{-2}s_{3}s_{123} + s_{4}s_{124} -q^2s_{23}s_{13} -qs_{4}s_{23}s_{134} -s_{14}s_{24} -q^{-1}s_{3}s_{14}s_{234}\\
&+ qs_{23}s_{34}s_{14}\\
s_{124}s_{123} &= \left(q^2 - q^{-2} + q^{-4}\right)s_{3}s_{4}+\left(1 - q^{-1} + q^{-3}\right)s_{1}s_{2}s_{1234} + s_{1234}s_{12} \\
&+ \left(q-q^{-1}\right)s_{2}s_{4}s_{23} + \left(q^3+q-q^{-1}+q^{-5}\right)s_{34} + \left(q^{-3}-q^{-1}\right)s_{1}s_{3}s_{14} \\
&+ \left(q^2 - q^{-2} + q^{-4}\right)s_{2}s_{234} + \left(q^2 - 1 + q^{-4}\right)s_{1}s_{134} -q^{-4}s_{23}s_{24} \\
&-q^{-3}s_{1}s_{23}s_{124} -q^2s_{14}s_{13} -qs_{2}s_{14}s_{123} + q^{-1}s_{12}s_{23}s_{14}\\
s_{123}s_{124} &= \left(q^4-q^2+q^{-2}\right)s_{3}s_{4}+\left(q^3-q+q^{-1}\right)s_{1}s_{2}s_{1234} + s_{1234}s_{12} \\
&+ \left(q^3-q\right)s_{2}s_{4}s_{23} + \left(q^5-q+q^{-1}+q^{-3}\right)s_{34} + \left(q^{-1} - q\right)s_{1}s_{3}s_{14} \\
&+ \left(q^4-1+q^{-2}\right)s_{2}s_{234} + \left(q^4-q^2+q^{-2}\right)s_{1}s_{134} -q^{-2}s_{23}s_{24} \\
&-q^{-1}s_{1}s_{23}s_{124} -q^4s_{14}s_{13} -q^3s_{2}s_{14}s_{123} + qs_{12}s_{23}s_{14}\\
\end{align*}
\subsection{Crossing Relations}
\begin{align*}
s_{13}s_{24} &= \left(q + q^{-1}\right)s_{1234}+s_{1}s_{2}s_{3}s_{4} + q^{-1}s_{3}s_{4}s_{12} + qs_{1}s_{4}s_{23} + q^{-1}s_{1}s_{2}s_{34} \\
&+ qs_{2}s_{3}s_{14} + s_{4}s_{123} + s_{1}s_{234} + s_{2}s_{134} + s_{3}s_{124} + q^{-2}s_{12}s_{34} + q^2s_{23}s_{14}\\
s_{24}s_{13} &= \left(q + q^{-1}\right)s_{1234}+s_{1}s_{2}s_{3}s_{4} + qs_{3}s_{4}s_{12} + q^{-1}s_{1}s_{4}s_{23} + qs_{1}s_{2}s_{34} \\
&+ q^{-1}s_{2}s_{3}s_{14} + s_{4}s_{123} + s_{1}s_{234} + s_{2}s_{134} + s_{3}s_{124} + q^2s_{12}s_{34} + q^{-2}s_{23}s_{14}\\
\end{align*}
\subsection{Double and Triple Crossing Relations}
\begin{align*}
s_{123}s_{24} &= -s_{2}s_{1234}-qs_{1}s_{3}s_{4} -s_{1}s_{34} -s_{3}s_{14} -q^2s_{4}s_{13} -qs_{2}s_{4}s_{123} \\
&-\left(q + q^{-1}\right)s_{134} + qs_{4}s_{12}s_{23} + s_{12}s_{234} + s_{23}s_{124}\\
s_{234}s_{13} &= -s_{3}s_{1234}-qs_{1}s_{2}s_{4} -s_{4}s_{12} -s_{2}s_{14} -q^2s_{1}s_{24} -qs_{1}s_{3}s_{234} \\
&-\left(q + q^{-1}\right)s_{124} + qs_{1}s_{23}s_{34} + s_{23}s_{134} + s_{34}s_{123}\\
s_{13}s_{124} &= -q^{-1}s_{2}s_{3}s_{4}+\left(1 - q^2 - q^{-2}\right)s_{1}s_{1234} -q^2s_{4}s_{23} -q^{-2}s_{2}s_{34} -q^{-2}s_{3}s_{24} \\
&-\left(q^3 - q^{-3}\right)s_{234} -q^{-1}s_{1}s_{3}s_{124} + q^{-1}s_{3}s_{12}s_{14} + q^{-2}s_{12}s_{134} + q^2s_{14}s_{123}\\
s_{24}s_{134} &= \left(1 - q^2 - q^{-2}\right)s_{4}s_{1234}-qs_{1}s_{2}s_{3} -q^2s_{3}s_{12} -q^{-2}s_{1}s_{23} -q^2s_{2}s_{13} \\
&-\left(q^3 - q^{-3}\right)s_{123} -qs_{2}s_{4}s_{134} + qs_{2}s_{34}s_{14} + q^2s_{34}s_{124} + q^{-2}s_{14}s_{234}\\
s_{24}s_{123} &= \left(1 - q^2 - q^{-2}\right)s_{2}s_{1234}-qs_{1}s_{3}s_{4} -q^2s_{1}s_{34} -q^{-2}s_{3}s_{14} -q^2s_{4}s_{13} \\
&-qs_{2}s_{4}s_{123} -\left(q^3 - q^{-3}\right)s_{134} + qs_{4}s_{12}s_{23} + q^2s_{12}s_{234} + q^{-2}s_{23}s_{124}\\
s_{13}s_{234} &= \left(1 - q^2 - q^{-2}\right)s_{3}s_{1234}-qs_{1}s_{2}s_{4} -q^{-2}s_{4}s_{12} -q^2s_{2}s_{14} -q^2s_{1}s_{24} \\
&-qs_{1}s_{3}s_{234} -\left(q^3 - q^{-3}\right)s_{124} + qs_{1}s_{23}s_{34} + q^2s_{23}s_{134} + q^{-2}s_{34}s_{123}\\
s_{134}s_{24} &= -s_{4}s_{1234}-qs_{1}s_{2}s_{3} -s_{3}s_{12} -s_{1}s_{23} -q^2s_{2}s_{13} -\left(q + q^{-1}\right)s_{123} \\
&-qs_{2}s_{4}s_{134} + qs_{2}s_{34}s_{14} + s_{34}s_{124} + s_{14}s_{234}\\
s_{124}s_{13} &= -q^{-1}s_{2}s_{3}s_{4}-s_{1}s_{1234} -s_{4}s_{23} -s_{2}s_{34} -q^{-2}s_{3}s_{24} -\left(q + q^{-1}\right)s_{234} \\
&-q^{-1}s_{1}s_{3}s_{124} + q^{-1}s_{3}s_{12}s_{14} + s_{12}s_{134} + s_{14}s_{123}\\
\end{align*}
\end{appendices}
\printbibliography[title={References}]
\end{document}
\begin{document}
\title{Long-range interactions and information transfer in spin chains}
\author{Rebecca Ronke$^{1*}$, Tim Spiller$^{2\dag}$, Irene D'Amico$^{1\ddag}$}
\address{$^1$ Department of Physics, University of York, York, YO10 5DD, UK}
\address{$^2$ School of Physics and Astronomy, E. C. Stoner Building, University of Leeds, Leeds, LS2 9JT, UK}
\ead{$^{*}$[email protected], $^{\dag}$[email protected], $^{\ddag}$[email protected]}
\begin{abstract}
One of the main proposed tools to transfer information in a quantum computational context is the spin chain. While spin chains have been shown to be convenient and reliable, it has to be expected that, as with any implementation of a physical system, they will be subject to various errors and perturbative factors. In this work we consider the transfer of entangled as well as unentangled states to investigate the effects of various errors, paying particular attention to unwanted long-range interactions.
\end{abstract}
\section{Introduction}
The ability to reliably transfer quantum information in a solid state system is one of the key ingredients to quantum information processing, particularly when considering the construction of networked and distributed systems on a larger scale. Spin chains are able to respond to this challenge, while the mathematical framework that they are based on can be applied to a variety of physical systems. Examples of successful applications include for instance encoding into soliton-like packets of excitations \cite{osborne2004}, electrons or excitons trapped in nanostructures \cite{damico2007,damico2006,niko2004}, strings of fullerenes \cite{twamley2003} or nanometer scale magnetic particles \cite{tejada2001}.
Previous work on linear spin chains has shown that in order to ensure perfect state transfer (PST), the coupling strength $J_{i,i+1}$ between two neighboring sites $i$ and $i+1$ on a chain of length $N$ should be pre-engineered according to \cite{chris2005}
\begin{equation}
J_{i,i+1}=J_{0}\sqrt{i(N-i)},\label{PST}
\end{equation}
with $J_{0}$ the characteristic coupling constant. The time-independent perfect-transfer Hamiltonian, where we assume that any single excitation energies are site-independent, is thus:
\begin{equation}
\label{hami}
{\cal{H}} = \sum_{i=1}^{N-1} J_{i,i+1}[ |1\rangle \langle 0|_{i} \otimes |0\rangle \langle 1|_{i+1} + |0\rangle \langle 1|_{i} \otimes |1\rangle \langle 0|_{i+1}].
\end{equation}
This Hamiltonian also preserves the number of excitations. In order to assess the quality of the state transfer of entangled states, we use the entanglement of formation (EoF) \cite{wootters1998}. For unentangled states, we define a fidelity $F$ corresponding to mapping the initial state $|\psi_{in}\rangle$ over a time $t$ into the desired state $|\psi_{fin}\rangle$ via the chain's natural dynamics
\begin{equation}
F=|\langle \psi_{fin} |e^{-i{\cal{H}}t/\hbar}| \psi_{in} \rangle|^{2},
\end{equation}
so that PST is achieved when $F=1$. This system dynamics leads in particular also to the so-called mirroring rule \cite{albanese2004}, which guarantees perfect transfer of any one state to its mirrored image with respect to the centre of the chain. We define the time it takes for a state to then return to its original form as the system or revival time, $t_{S}$.
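To illustrate these definitions, the following minimal numerical sketch (ours, with $\hbar=1$ and restricted to the single-excitation sector) builds the Hamiltonian of Eq. (\ref{hami}) with the couplings of Eq. (\ref{PST}) and confirms that an excitation injected at site 1 arrives at site $N$ with fidelity one at the mirroring time $\pi/(2J_{0})$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, J0 = 8, 1.0
H = np.zeros((N, N))
for i in range(1, N):                    # couplings J_{i,i+1}, i = 1..N-1
    H[i - 1, i] = H[i, i - 1] = J0 * np.sqrt(i * (N - i))

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                            # excitation injected at site 1
t_mirror = np.pi / (2 * J0)              # mirroring time (half of t_S)
psi_t = expm(-1j * H * t_mirror) @ psi0
print(abs(psi_t[-1])**2)                 # fidelity of arrival at site N ~ 1
\end{verbatim}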
\section{Effect of perturbations}
When considering perturbations, we consider fabrication defects in the spin chains themselves on the one hand, and on the other also the effect of non-synchronous or imperfect input operations \cite{me1}. For all perturbations, we consider three types of initial input states: (i) unentangled states (such as $|\psi_{in}\rangle = |110000\rangle$), (ii) initially entangled states (such as $|\psi_{in}\rangle = 1/\sqrt{2}(|100000\rangle+|010000\rangle)$) and (iii) states where the entanglement is created by the system dynamics (such as $|\psi_{in}\rangle = 1/2(|000000\rangle+|100000\rangle+|000001\rangle+|100001\rangle)$). After reviewing the effect of various perturbative factors, we will focus on the central topic of this contribution, the effect of long-range interactions.\\
\subsection{Imperfect input operations}
In the vast majority of quantum information protocols, it is assumed that the system under investigation is governed by a universal, perfect clock. In an experimental context however, this might be overly optimistic, especially when the preparation of an input state involves accessing two or more separate sites of the chain simultaneously. For a type (i) chain, this is relevant when there are multiple excitations involved as otherwise the entire system dynamics is merely shifted in time. The actual effect of non-synchronous input depends greatly on the input mechanism used. We consider the two main possibilities for injection into spin chains, which are injection via SWAP operation (an additional particle is injected into the chain, e.g. via a waveguide) and excitation via a Rabi-flopping pulse (e.g. flipping the spin of an electron confined in the chain) \cite{me1}. Both these methods allow for some correction via measurement of the system but we find that with increasing delay times between injections, systems with Rabi flopping perform noticeably worse than those where the injection was done via SWAP operation. An exception are input states of type (iii), where the access sites are far apart and there is therefore virtually no difference between the effects of the two injection mechanisms.
\subsection{Random noise}
It has to be expected that any fabrication process of spin chains may lead to random, but time-independent errors in the coupling values of the system. We model these by adding to all non-zero entries of the Hamiltonian a random energy $\eta d_{l,m}J_{0}$ for $1 \le l$,$m \le number\,of\,basis\,states$. The scale is fixed by $\eta$ which we set to 0.1 and for each $l \le m$ the different random number $d_{l,m}$ is generated with a flat distribution between zero and unity. In order to preserve hermiticity, we set $d_{l,m}=d_{m,l}$. The specific weight of the noise would have to be determined depending on the experiment under consideration, but we found that for a perturbation of 10\% of $J_0$, near-perfect transfer is still achieved during the first few periods of the system for all considered types of initial input.
\subsection{Single excitation energies}
We previously assumed that single excitation energies are site-independent, and thus did not need to be explicitly included into the Hamiltonian (Eq. (\ref{hami})). However, local magnetic fields or additional single site fabrication defects may result in a loss of this independence, and so we would have to add to (\ref{hami}) the term
\begin{equation}
H_{1} = \sum_{i=1}^{N} \epsilon_{i} |1\rangle \langle1|_{i},
\label{enerc}
\end{equation}
where $\epsilon_{i}$ is not independent of the site \textit{i} any more. We find that this perturbation has little effect on all input types for small values of $\epsilon_{i}$, resulting in a loss of less than 5\% for $\epsilon_{i}\leq0.1 J_{max}$, $J_{max}$ the maximum value of the coupling between neighbouring sites, at the first revival. However for larger values the effect becomes very detrimental, in particular for type (iii) input.
\subsection{Excitation-excitation interactions}
Spin chains containing multiple excitations may also be prone to interactions between excitations in neighbouring sites. To represent this, we consider the perturbative term
\begin{equation}
\label{interc}
H_{2} = \sum_{i=1}^{N-1} \gamma J_{0} |1\rangle \langle1|_{i} \otimes |1\rangle \langle1|_{i+1}.
\end{equation}
An example of this would be biexcitonic interaction in quantum dot-based chains \cite{damico2001,rinaldis2002}. While this perturbation is not relevant for input type (ii) (due to there only being a single excitation), we find that for the other two types, the transfer fidelity is again very well preserved for values of $\gamma$ up to $0.1 J_{max}$, resulting in less than a 5\% loss of the original input state at revival time. It can also be shown that the loss in fidelity is exponential in $N$ with Gaussian dependence on the characteristic noise parameters \cite{me1}.
\subsection{Next-nearest neighbor interactions}
In our original Hamiltonian (\ref{hami}), we assumed that interactions would be restricted to the nearest neighbour of any one spin, which is a fair assumption if the transfer between sites is based on tunnelling. If however instead we consider dipole-dipole interactions, we can model the resulting perturbation as
\begin{equation}
\label{hami2}
\nonumber H_{3}= \sum_{i=1}^{N-2} J_{i,i+2}[ |1\rangle \langle 0|_{i} \otimes |0\rangle \langle 1|_{i+2} + |0\rangle \langle 1|_{i} \otimes |1\rangle \langle 0|_{i+2}],
\end{equation}
with $J_{i,i+2}=\Delta(J_{i,i+1}+J_{i+1,i+2})/2$. We have confirmed in \cite{me1} that this is a realistic model for example for semiconductor quantum dots with excitonic qubits. Of all the perturbations considered so far, this is the most influential. Again, the loss in fidelity is exponential in $N$ with Gaussian dependence on the characteristic noise parameters. Not only do values of $\Delta$ as small as 0.05 lead to a considerable loss in transfer fidelity (for a 10 spin chain, 40\% loss with input (i), 20\% loss with input (ii), 5\% loss with input (iii)), but even for the most stable input (iii) the periodicity of the system is entirely lost after a single period (for any chain length). As some protocols rely on continual periodicity, this is a potentially very serious issue (see also \cite{avellino2006}).
\subsection{Longer range interactions}
The main reason for next-nearest neighbours being the most detrimental perturbation is that it is the only perturbation we have considered so far that effectively opens up ``new channels'' for the system dynamics. We can consider this phenomenon in a more general context by formulating a perturbation factor that opens up all available channels of the system. Assuming that these unwanted longer range interactions are random, we add to all zero entries in the Hamiltonian a coupling $\chi d_{l,m} J_{max}$ for $1 \le l$,$m \le number\,of\,basis\,states$. For each $l \le m$ the different random number $d_{l,m}$ is generated with a flat distribution between zero and unity and, in order to preserve hermiticity, we set $d_{l,m}=d_{m,l}$. In figure \ref{fig:ii}, every point on the graph corresponds to an average obtained from 100 realisations.
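A minimal sketch (ours) of this procedure, acting on the Hamiltonian matrix written in the chosen basis, is the following.
\begin{verbatim}
import numpy as np

def add_long_range_noise(H, chi, rng=None):
    # Add chi * d_{l,m} * J_max to every zero entry of H (for l <= m),
    # with d_{l,m} uniform in [0,1); hermiticity is preserved by symmetry.
    if rng is None:
        rng = np.random.default_rng()
    J_max = np.max(np.abs(H))
    Hp = np.array(H, dtype=float)
    n = Hp.shape[0]
    for l in range(n):
        for m in range(l, n):
            if Hp[l, m] == 0:
                Hp[l, m] = Hp[m, l] = chi * rng.random() * J_max
    return Hp
\end{verbatim}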
\begin{figure}
\caption{Dynamics of a typical realisation of the random long-range perturbation, for the three types of initial input and for $\chi=0.03$ and $\chi=0.1$.}
\label{fig:i}
\end{figure}
\begin{figure}
\caption{Quality of the first revival against chain length $N$ for the three types of initial input, averaged over 100 realisations with $\chi=0.03$.}
\label{fig:ii}
\end{figure}
The result of these additional channels has a different effect on the three types of initial input that we consider. In figure \ref{fig:i} we consider the dynamics of a typical realisation. We see in figure \ref{fig:i} (a) that the unentangled input type (i) suffers by far the most: even for small $\chi$ ($\chi=0.03$) the initial state fidelity is not recovered well after a single period, and subsequently entirely lost. For $\chi=0.1$, the initial state is not even recovered at all. By contrast, input type (ii) in figure \ref{fig:i} (b) conserves an acceptable amount of EoF for $\chi=0.03$ for a few periods before becoming erratic, whereas the plot for $\chi=0.1$ shows that the EoF peaks are shifted from their expected times, making the system entirely unpredictable after the first revival. A similar phenomenon can be observed in figure \ref{fig:i} (c) for $\chi=0.03$ where the EoF peaks of input type (iii) should occur at $0.5 t_{S}, 1.5 t_{S},\cdots$ but are actually starting to be shifted after just the first revival, with additional peaks forming at integer multiples of $t_{S}$. However for $\chi=0.1$, the EoF never reaches decent values. Overall it is worth noting that this is in-keeping with the detrimental effects of next-nearest neighbour interaction we have observed, where the most detrimental effect was also seen for input type (i). However the opening up of new channels even just on a small scale has almost immediate and serious consequences for all systems considered.\\
The qualitative effect of opening up new, undesired channels in a system is also dependent on the number of spins in the chain, $N$, as a higher dimensional Hamiltonian also allows for more perturbative elements. We see this confirmed in figure \ref{fig:ii}, where all three systems show a clear decreasing trend in transfer quality with increasing $N$. Again, this data was sampled and averaged over 100 random runs, keeping a constant $\chi=0.03$. We see in figure \ref{fig:ii} (a) that again, it is the unentangled input type (i) that suffers most from this perturbation, showing that, for $N=15$, the first revival is barely over 40\% of the initial input state. Input types (ii) and (iii) perform better, with type (ii) in figure \ref{fig:ii} (b) losing less than 25\% EoF for $N=15$ and similarly with type (iii) in figure \ref{fig:ii} (c) losing less than 30\% EoF for $N=15$. We see therefore that unentangled states are much more prone to the detrimental effects of new open channels in a system, but also that shorter chains always perform much better than longer ones, regardless of the initial input.
\section{Conclusions}
We have considered the performances of the transfer of both unentangled as well as entangled states under various physically relevant perturbation factors. Overall, we found spin chains to be a very robust system, provided that the errors we considered remain of the order of a few percent of the characteristic coupling constant and that imperfect input will be corrected via subsequent measurement of the system. In particular, we were concerned with the opening up of new channels for the system dynamics, beyond next-nearest neighbour interaction, as this had proved to have the most detrimental effect on PST of any type of input. We have shown that longer chains suffer significantly more from this type of perturbation, with unentangled initial states being particularly affected. Large errors lead to a complete loss of the periodicity of the system, while small errors are tolerable up to a minimum of the first revival for all input types considered. Our studies demonstrate the impact of various potential difficulties with the implementation of spin chains, while giving encouraging results for modest errors. This demonstrates that spin chains are an encouraging candidate for future experiments on information transfer in solid state quantum information systems.\\
\\
RR was supported by EPSRC-GB and HP. IDA acknowledges partial support by HP. IDA and RR acknowledge the kind hospitality of the HP Research Labs Bristol.
\section*{References}
\end{document}
\begin{document}
\title{Kronecker limit functions and an extension of the Rohrlich-Jensen formula}
\author{James Cogdell \and Jay Jorgenson
\footnote{The second named author acknowledges grant support from several PSC-CUNY Awards, which are jointly funded
by the Professional Staff Congress and The City University of New York.}\and Lejla Smajlovi\'{c}}
\maketitle
\begin{abstract}\noindent
In \cite{Ro84} Rohrlich proved a modular analogue of Jensen's formula. Under certain conditions, the Rohrlich-Jensen
formula expresses an integral of the log-norm $\log \Vert f \Vert$ of a $\text{\rm PSL}(2,\mathbb{Z})$ modular form $f$ in terms
of the Dedekind Delta function evaluated at the divisor of $f$. In \cite{BK19} the authors re-interpreted the Rohrlich-Jensen
formula as evaluating a regularized inner product of $\log \Vert f \Vert$ and extended the result to compute a regularized
inner product of $\log \Vert f \Vert$ with what amounts to powers of the Hauptmoduli of $\text{\rm PSL}(2,\mathbb{Z})$. In the present article, we
revisit the Rohrlich-Jensen formula and prove that it can be viewed as a regularized inner product of special values of two
Poincar\'e series, one of which is the Niebur-Poincar\'e series and the other is the resolvent kernel of the Laplacian.
The regularized inner product can be seen as a type of Maass-Selberg relation.
In this form, we develop a Rohrlich-Jensen formula associated to any Fuchsian group $\Gamma$ of the first kind with one cusp by employing a type of Kronecker limit formula associated to the resolvent kernel. We
present two examples of our main result: First, when $\Gamma$ is the full modular group $\text{\rm PSL}(2,\mathbb{Z})$, thus reproving the theorems
from \cite{BK19}; and second when $\Gamma$ is an Atkin-Lehner group $\Gamma_{0}(N)^+$, where explicit computations are given for certain
genus zero, one and two levels.
\end{abstract}
\vskip .15in
\section{Introduction and statement of results}
\subsection{The Poisson-Jensen formula}
Let $D_{R} = \{z =x+iy\in \mathbb{C} : \vert z \vert < R\}$ be the disc of radius $R$ centered at the
origin in the complex plane $\mathbb{C}$. Let $F$ be a non-constant meromorphic function on the closure
$\overline{D_{R}}$ of $D_{R}$. Denote by $c_{F}$ the leading non-zero
coefficient of $F$ at zero, meaning that
for some integer $m$ we have that
$F(z) = c_{F}z^{m} + O(z^{m+1})$ as $z$ approaches zero. For any $a \in D_{R}$, let $n_{F}(a)$ denote
the order of $F$ at $a$; there are a finite number of points $a$ for which $n_{F}(a) \neq 0$.
With this, Jensen's formula, as stated on page 341 of \cite{La99}, asserts that
\begin{equation}\label{Jensen}
\frac{1}{2\pi}\int\limits_{0}^{2\pi}\log\vert F(Re^{i\theta})\vert d\theta
+ \sum\limits_{a \in D_{R}} n_{F}(a) \log (\vert a \vert/R) + n_{F}(0) \log (1/R) = \log \vert c_{F}\vert.
\end{equation}
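As a simple illustration, take $F(z) = z - a$ with $0 < \vert a \vert < R$. Then $m=0$, $c_{F} = -a$, and the only point of $D_{R}$ with $n_{F} \neq 0$ is the zero $a$, with $n_{F}(a) = 1$. Since
$$
\frac{1}{2\pi}\int\limits_{0}^{2\pi}\log\vert Re^{i\theta} - a\vert \, d\theta = \log R
$$
whenever $\vert a \vert < R$, formula \eqref{Jensen} reduces to the identity $\log R + \log(\vert a\vert /R) = \log\vert a \vert = \log\vert c_{F}\vert$.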
One can consider the action of a M\"obius transformation which preserves $D_{R}$ and seek to determine the
resulting expression from \eqref{Jensen}. Such a consideration leads to the
Poisson-Jensen formula, and we refer the reader to page 161 of \cite{La87} for a statement and proof.
On their own, the Jensen formula and the Poisson-Jensen formula paved the way toward Nevanlinna theory, which in
its most elementary interpretation establishes subtle growth estimates for meromorphic functions; see Chapter VI
of \cite{La99}. Going further, Nevanlinna theory provided motivation for Vojta's conjectures whose insight
into arithmetic algebraic geometry is profound. In particular, page 34 of \cite{Vo87} contains a type
of ``dictionary'' which translates between Nevanlinna theory and number theory
where Vojta asserts that Jensen's formula should be viewed as analogous to the Artin-Whaples product formula
from class field theory.
\subsection{A modular generalization}
In \cite{Ro84} Rohrlich proved what he aptly called a modular version of Jensen's formula. We now shall
describe Rohrlich's result.
Let $f$ be a meromorphic function on the upper half plane $\HH$ which is invariant with respect to the
action of the full modular group $\mathrm{PSL}(2,\mathbb{Z})$. Set $\mathcal{F}$ to be the ``usual'' fundamental domain
of the quotient $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$, and let $d\mu$ denote the area form of the hyperbolic metric.
Assume that $f$ does not have a pole at the cusp
$\infty$ of $\mathcal{F}$, and assume further that the Fourier expansion of $f$ at $\infty$ has its constant
term equal to one. Let $P(w)$ be the Kronecker limit function associated to the parabolic Eisenstein series associated to $\mathrm{PSL}(2,\mathbb{Z})$;
below we will write $P(w)$ in terms of the Dedekind Delta function, but for now we want to keep the concept
of a Kronecker limit function in the conversation. With all this, the Rohrlich-Jensen formula
is the statement that
\begin{equation} \label{rohrl thm}
\frac{1}{2\pi}\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash \mathbb{H}} \log|f(z)|d\mu(z) + \sum_{w\in \mathcal{F}}
\frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}P(w) =0.
\end{equation}
In this expression, $\mathrm{ord}_w(f)$ denotes the order of $f$ at $w$ as a meromorphic function,
and $\mathrm{ord}(w)$ denotes the order of the action of $\mathrm{PSL}(2,\mathbb{Z})$ on $\HH$. As a means
by which one can see beyond the above setting, one can view \eqref{rohrl thm} as evaluating the inner product
$$
\langle 1,\log|f(z)| \rangle=\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash \mathbb{H}} 1\cdot \log|f(z)|d\mu(z)
$$
within the Hilbert space of $L^{2}$ functions on $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$.
There are various directions in which \eqref{rohrl thm} has been extended. In \cite{Ro84}, Rohrlich described
the analogue of \eqref{rohrl thm} for general Fuchsian groups of the first kind and for meromorphic modular forms $f$ of
non-zero weight; see page 19 of \cite{Ro84}. In \cite{HIvPT19} the authors studied the
quotient of hyperbolic three space when acted upon by the discrete group $\mathrm{PSL}(2,\mathcal{O}_K)$
where $\mathcal{O}_K$ denotes the ring of integers of an imaginary quadratic field $K$. In that setting, the function
$\log \vert f \vert$ is replaced by a function which is harmonic at all but a finite number of points and
at those points the function has prescribed singularities. As in \cite{Ro84}, the analogue of \eqref{rohrl thm} involves
a function $P$ which is constructed from a type of Kronecker limit formula.
In \cite{BK19} the authors returned to the setting of $\mathrm{PSL}(2,\mathbb{Z})$ acting on $\HH$.
Let $q_{z}=e^{2\pi i z}$ be the standard local coordinate near $\infty$ of $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$. The
Hauptmodul $j(z)$ is the unique $\mathrm{PSL}(2,\mathbb{Z})$ invariant holomorphic function on $\HH$ whose expansion near
$\infty$ is $j(z) = q_{z}^{-1} + o(q_{z}^{-1})$ as $z$ approaches $\infty$. Let $T_{n}$ denote the $n$-th Hecke
operator and set $j_{n}(z) = j|T_n (z)$. The main results of \cite{BK19} are the derivation of formulas for
the regularized scalar product $\langle j_n(z),\log (({\mathrm{Im}} (z))^k|f(z)|) \rangle$ where $f$ is a weight $2k$ meromorphic modular
form with respect to $\mathrm{PSL}(2,\mathbb{Z})$. Below we will discuss further the formulas from \cite{BK19} and describe
the way in which their results are natural extensions of \eqref{rohrl thm}.
\subsection{Revisiting Rohrlich's theorem}
The purpose of this article is to extend the point of view that the Rohrlich-Jensen formula is the
evaluation of a particular type of inner product. To do so, we shall revisit the role of each of the two terms
$j|T_n (z)$ and $\log (({\mathrm{Im}} (z))^k|f(z)|)$.
The function $j|T_n (z)$ can be characterized as the unique holomorphic function which is $\mathrm{PSL}(2,\mathbb{Z})$
invariant on $\HH$ and whose expansion near $\infty$ is $q_{z}^{-n} + o(q_{z}^{-1})$. These properties hold
for the special value $s=1$ of the Niebur-Poincar\'e series $F_{-n}^{\Gammamma}(z,s)$, which is
defined in \cite{Ni73} for any Fuchsian group $\Gammamma$ of the first kind with one cusp and discussed in section \ref{sect_NP-series} below. As proved in \cite{Ni73}, for any non-zero integer $m$,
the Niebur-Poincar\'e series $F_{m}^{\Gammamma}(z,s)$ is an eigenfunction of the hyperbolic Laplacian $\Deltalta_{\mathbb{H}yp}$;
specifically, we have that
$$
\Deltalta_{\mathbb{H}yp} F_{m}^{\Gammamma}(z,s) = s(1-s)F_{m}^{\Gammamma}(z,s).
$$
Also, $F_{m}^{\Gammamma}(z,s)$ is orthogonal to constant functions.
Furthermore, if $\Gammamma = \mathrm{PSL}(2,\mathbb{Z})$,
then for any positive integer $n$ there is an explicitly computable constant $c_{n}$ such that
\begin{equation}\lambdabel{j_via_F}
F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1) = \frac{1}{2\pi\sqrt{n}}j_{n}(z) + c_{n}.
\end{equation}
As a result, the Rohrlich-Jensen formula proved in \cite{BK19},
when combined with Rohrlich's formula from \cite{Ro84}, reduces to computing the regularized inner product of
$F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1)$ with $\log (({\mathrm{Im}} (z))^k|f(z)|)$.
As for the term $\log (({\mathrm{Im}} (z))^k|f(z)|)$, we begin by recalling Proposition 12 from \cite{JvPS19}. Let $2k\geq 4$ be
any even positive integer, and let $f$ be a weight $2k$ meromorphic modular form with respect to $\Gammamma$ whose $q$-expansion at $\infty$
is normalized so that its constant term is equal to one. Set $\Vert f\Vert(z) = y^{k}\varepsilonrt f(z) \varepsilonrt$, where $z=x+iy$. Let
$\mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s)$ be the elliptic Eisenstein
series associated to the aforementioned data; a summary of the relevant properties of $\mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s)$
is given in section \ref{ell_Eisen_series} below. Then, in \cite{JvPS19} it is proved that one has the asymptotic relation
\begin{equation} \lambdabel{ell kroneck limit one cusp}
\sum_{w\in \mathcal{F}_\Gammamma} \mathrm{ord}_w(f) \mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s)=
-s\log\left( |f(z)| |\eta_{\Gammamma,\infty}^4(z)|^{-k}\right) + O(s^2)
\,\,\,\,\,
\textrm{\rm as $s \to 0$}
\end{equation}
where $\mathcal{F}_\Gammamma$ is the fundamental domain for the action of $\Gammamma$ on $\mathbb{H}$ and $\eta_{\Gammamma,\infty}(z)$ is the analogue of the classical eta function for the modular group, see the Kronecker limit formula \eqref{KronLimitPArGen} for the parabolic Eisenstein series.
With this, formula \eqref{ell kroneck limit one cusp} can be written as
\begin{equation} \lambdabel{ell kroneck limit one cusp2}
\log\left( \Vert f\Vert (z) \right) = kP_{\Gammamma}(z) - \sum_{w\in \mathcal{F}_\Gammamma}
\mathrm{ord}_w(f) \lim_{s\to 0} \frac{1}{s} \mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s),
\end{equation}
where $P_{\Gammamma}(z)=\log(|\eta_{\Gammamma,\infty}^4(z)|{\mathrm{Im}} (z))$ is the Kronecker limit function associated to the
parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\Gammamma,\infty}(z,s)$; the precise normalizations and
expressions defining $\mathcal{E}^{\mathrm{par}}_{\Gammamma,\infty}(z,s)$ will be clarified below.
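For the reader's convenience, we record the elementary manipulation behind this rewriting; it uses only the definition of $\Vert f\Vert$ and of $P_{\Gammamma}$, together with \eqref{ell kroneck limit one cusp} divided by $s$ in the limit $s\to 0$:
\begin{align*}
\log\left( \Vert f\Vert (z) \right) &= \log |f(z)| + k\log\left({\mathrm{Im}} (z)\right) \\
&= \log\left( |f(z)|\, |\eta_{\Gammamma,\infty}^4(z)|^{-k}\right) + k\log\left( |\eta_{\Gammamma,\infty}^4(z)|\,{\mathrm{Im}} (z)\right) \\
&= - \sum_{w\in \mathcal{F}_\Gammamma} \mathrm{ord}_w(f)\, \lim_{s\to 0}\frac{1}{s}\,\mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s) + k P_{\Gammamma}(z).
\end{align*}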
Following \cite{CJS20}, one can recast \eqref{ell kroneck limit one cusp2} in terms of
the resolvent kernel, which we now shall undertake.
The resolvent kernel, also called the automorphic Green's function, $G_s^{\Gammamma}(z,w)$ is the integral kernel
which for almost all $s \in \mathbb{C}$ inverts the operator $\Deltalta_{\mathbb{H}yp} + s(s-1)$. In other words,
$$
\Deltalta_{\mathbb{H}yp} G_s^{\Gammamma}(z,w) = s(1-s) G_s^{\Gammamma}(z,w).
$$
The resolvent kernel is closely related to the elliptic Eisenstein series; see \cite{vP16} as well as \cite{CJS20}.
Specifically, from Corollary 7.4 of \cite{vP16}, after taking into account a sign difference in our normalization,
we have that
\begin{equation}\lambdabel{Ell Green connection}
\mathrm{ord}(w) \mathcal{E}^{\mathrm{ell}}_{\Gammamma,w}(z,s) = -\frac{2^{s+1}\sqrt{\pi} \Gammamma(s+1/2)}{\Gammamma(s)}G_s^{\Gammamma}(z,w) +O(s^2)
\,\,\,\,\,
\textrm{\rm as $s \to 0$}
\end{equation}
for all $z,w \in\mathbb{H}$ with $z\neq \gammamma w$ for every $\gammamma\in\Gammamma$.
It is now evident that one can
express $\log\left( \Vert f \Vert(z) \right)$ as a type of Kronecker limit function.
Indeed, upon using the functional equation for the Green's function, we will prove below the following
result. Under certain general conditions the form $f$, as described above, can be realized through a type of factorization
theorem, namely that
\begin{align}\lambdabel{log norm basic}
\log\left( \Vert f\Vert (z) \right)&=-2k + 2\pi \sum_{w\in \mathcal{F}_{\Gammamma}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}
\lim_{s\to 1}\left(G_s^{\Gammamma}(z,w) + \mathcal{E}_{\Gammamma,\infty}^{\mathrm{par}}(z,s)\right)\notag \\ &= 2\pi \sum_{w\in \mathcal{F}_\Gammamma}
\frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \left[ \lim_{s\to 1}\left(G_s^{\Gammamma}(z,w) + \mathcal{E}_{\Gammamma,\infty}^{\mathrm{par}}(z,s)\right) -
\frac{2}{\mathrm{vol}_{\mathbb{H}yp}(\Gammamma \backslash \HH)} \right].
\end{align}
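We point out that the second equality in \eqref{log norm basic} is simply a rewriting of the first one: by the valence (Riemann--Roch) formula recalled in the proof of Corollary \ref{Rohr-Jensen} below, one has
$$
k\,\frac{\vol_{\mathbb{H}yp}(\Gammamma \backslash \HH)}{2\pi}=\sum_{w\in \mathcal{F}_{\Gammamma}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)},
\quad\textrm{and hence}\quad
-2k = -2\pi \sum_{w\in \mathcal{F}_{\Gammamma}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}\cdot \frac{2}{\mathrm{vol}_{\mathbb{H}yp}(\Gammamma \backslash \HH)},
$$
which allows one to absorb the constant $-2k$ into the sum.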
With all this, it is evident that one can view the inner product realization of the Rohrlich-Jensen
formula as a special value of the inner product of the Niebur-Poincar\'e series $F_{m}^\Gammamma(z,s)$ and the resolvent
kernel $G_{s}^{\Gammamma}(z,w)$ plus the parabolic Eisenstein series $\mathcal{E}_{\Gammamma,\infty}^{\mathrm{par}}(z,s)$. Furthermore, because all
terms are eigenfunctions of the Laplacian, one can seek to compute the inner product
in hand in a manner similar to that which yields the Maass-Selberg formula.
\subsection{Our main results}
Unless otherwise explicitly stated, we will assume for the remainder
of this article that $\Gammamma$ is any Fuchsian group of the first kind with one cusp. By conjugating $\Gammamma$, if necessary, we may assume that the cusp is at $\infty$, with the cuspidal width equal to one. The group $\Gammamma$ will be arbitrary, but fixed, throughout this article, so, for the sake of brevity, in the sequel, we will suppress the index $\Gammamma$ in the notation for Eisenstein series, the Niebur-Poincar\'e series, the Kronecker limit function, the fundamental domain and the resolvent kernel.
When $\Gammamma$ is taken to be the modular group or the Atkin-Lehner group, that will be indicated in the notation.
With the above discussion, we have established that one manner in which the Rohrlich-Jensen formula can be
understood is through the study of the regularized inner product
\begin{equation}\lambdabel{RJ_integral}
\lambdangle F_{-n}(\cdot,1), \overline{\lim_{s\to 1} \left(G_s(\cdot,w) + \mathcal{E}_\infty^{\mathrm{par}}(\cdot,s)\right)} \rangle,
\end{equation}
which is defined as follows. Since $\Gammamma$ has one cusp at $\infty$, one can construct a (Ford) fundamental domain $\mathcal{F}$
of the action of $\Gammamma$ on $\HH$. Let $M = \Gammamma \backslash \HH$. A cuspidal neighborhood $\mathcal{F}_\infty(Y)$ of $\infty$
is given by $0 < x \leq 1$ and $y \geq Y$, where $z=x+iy$ and $Y \in \mathbb{R}$ is sufficiently large.
(We recall that we have normalized the cusp to be of width one.) Let
$\mathcal{F}(Y) = \mathcal{F}\setminus \mathcal{F}_\infty(Y)$. Then, we define \eqref{RJ_integral} to be
$$
\lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(z,s)\right)d\mu_{\mathbb{H}yp}(z)
$$
where $d\mu_{\mathbb{H}yp}(z)$ denotes the hyperbolic volume element. The function
$G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(z,s)$ is unbounded as $z \to w$. However, its growth near $w$ is logarithmic and
thus integrable, so it is not necessary to regularize the integral in \eqref{RJ_integral} in a neighborhood of $w$. The need to
regularize the inner product \eqref{RJ_integral} stems solely from the exponential growth of the factor $F_{-n}(z,1)$ as $z\to\infty$.
Our first main result of this article is the following theorem.
\begin{theorem}\lambdabel{thm:main}
For any positive integer $n$ and any point $w\in \mathcal{F}$
\begin{equation}\lambdabel{main f-la}
\lambdangle F_{-n}(\cdot,1), \overline{\lim_{s\to 1} \left(G_s(\cdot,w) + \mathcal{E}_\infty^{\mathrm{par}}(\cdot,s)\right)} \rangle
= - \frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1}.
\end{equation}
\end{theorem}
We can combine Theorem \ref{thm:main} with the factorization theorem \eqref{log norm basic} and
properties of $F_{-n}(z,1)$ proved in \cite{Ni73} and obtain the following extension of the Rohrlich-Jensen formula.
\begin{corollary}\lambdabel{Rohr-Jensen}
In addition to the notation above, assume that the even weight $2k\geq 0$ meromorphic form $f$ has been normalized so its $q$-expansion
at $\infty$ has constant term equal to $1$. Then we have that
\begin{equation}\lambdabel{main f-la - corollary}
\lambdangle F_{-n}(\cdot ,1), \log\Vert f\Vert \rangle = - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1}.
\end{equation}
\end{corollary}
Let $g$ be a non-constant $\Gammamma$-invariant function which is holomorphic on $\HH$; such a function necessarily has a pole at $\infty$. As such, there is a positive
integer $K$ and a set of complex numbers $\{a_{n}\}_{n=1}^{K}$ such that
$$
g(z) = \sum_{n=1}^K a_n q_z^{-n} + O(1)
\,\,\,\,\,
\textrm{as $z \rightarrow \infty$.}
$$
It is proved in \cite{Ni73} that
\begin{equation}\lambdabel{g expr}
g(z) = \sum_{n=1}^K 2\pi \sqrt{n}a_n F_{-n}(z,1) + c(g)
\end{equation}
for some constant $c(g)$ depending only upon $g$. With this, we can combine Corollary \ref{Rohr-Jensen}
and the Theorem on page 19 of \cite{Ro84} to obtain the following result.
\begin{corollary} \lambdabel{cor:j-function}
With notation as above, there is a constant $\beta$, defined by the Laurent expansion of $\mathcal{E}_\infty^{\mathrm{par}}(z,s)$
near $s=1$, such that
\begin{multline}\lambdabel{main f-la - corollary 2}
\lambdangle g, \log\Vert f\Vert \rangle = - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}
\Bigg(2\pi \sum_{n=1}^K \sqrt{n}a_n\frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1} \\ + c(g)(P(w) - \beta \vol_{\mathbb{H}yp}(M) +2) \Bigg).
\end{multline}
\end{corollary}
The constant $\beta$ and the parabolic Kronecker limit function $P$ are given in \eqref{KronLimitPArGen}; we refer the reader to that
equation for further details regarding the normalizations which define them.
Finally, we will consider the generating function of the normalized series constructed from the right-hand side of \eqref{main f-la}.
Specifically, we will prove the following identity.
\begin{theorem} \lambdabel{thm:generating series}
With notation as above, the generating series
$$
\sum_{n\geq 1}2\pi \sqrt{n} \frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1}q_z^n
$$
is, in the $z$ variable, the holomorphic part of the weight two biharmonic Maass form
$$
\mathcal{G}_w(z):=i\frac{\partial}{\partial z} \left( \frac{\partial}{\partial s}
\left( G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right) \Big|_{s=1} \right).
$$
\end{theorem}
Note that a weight two biharmonic Maass form is a function which satisfies the weight two modularity in $z$ and which is annihilated by $\Deltalta_2^2=(\xi_0\circ \xi_2)^2$, where, classically, $\xi_\kappa := 2iy^\kappa \overline{\frac{\partial}{\partial \overline{z}}}$. It is clear from the definition that $\mathcal{G}_w(z)$ satisfies the weight two modularity in the $z$ variable. In section \ref{sect proof of thm 4} we will prove that $(\xi_0\circ \xi_2)^2\mathcal{G}_w(z)=0$.
In the case $\Gammamma = \text{\rm PSL}(2,\mathbb{Z})$, our results will generalize the main theorems from
\cite{BK19}, as we will discuss below.
\subsection{Outline of the paper}
In section 2 we will establish notation and recall certain results from the literature.
There are two specific examples of Poincar\'e series which are particularly important for our
study, the Niebur-Poincar\'e series and the resolvent kernel. Both series are defined, and basic properties
are presented in section 3. In section 4 we state the Kronecker limit formulas associated to parabolic
and elliptic Eisenstein series, and we prove the factorization theorem \eqref{log norm basic}. The
proofs of the main results listed above will be given in section 5.
To illustrate our results, various examples are given in section 6. Our first example is when
$\Gammamma = \text{\rm PSL}(2,\mathbb{Z})$ where, as claimed above, our results yield the main theorems of \cite{BK19}.
We then turn to the case when $\Gammamma$ is an Atkin-Lehner group $\Gammamma_0(N)^+$ for square-free level $N$.
The first examples are when the genus of $\Gammamma_0(N)^+$ is zero and when the function $g$
in Corollary \ref{cor:j-function} is the Hauptmodul $j_N^+(z)$. The next two examples we present are
for levels $N=37$ and $N=103$. For these levels the genus of the quotient by $\Gammamma_0(N)^+$ is one and two, respectively. In these cases,
certain generators of the corresponding function fields were constructed in \cite{JST13}. Consequently, we are able to
employ the results from \cite{JST13} and fully develop Corollary \ref{cor:j-function}.
\section{Background material}
\subsection{Basic notation} \lambdabel{notation}
Let $\Gammamma\subset\text{\rm PSL}(2,\mathbb{R})$ denote a Fuchsian
group of the first kind acting by fractional
linear transformations on the hyperbolic upper half-plane $\mathbb{H}:=\{z=x+iy\in\mathbb{C}\,
|\,x,y\in\mathbb{R};\,y>0\}$. We let $M:=\Gammamma\backslash\mathbb{H}$, which is a finite
volume hyperbolic Riemann surface, and denote by $p:\mathbb{H}\longrightarrow M$
the natural projection. We assume that $M$ has $e_{\Gammamma}$
elliptic fixed points and one cusp at $\infty$ of width one. By an abuse of notation, we also say that $\Gammamma$ has a cusp at $\infty$ of width one, meaning that the stabilizer $\Gammamma_\infty$ of $\infty$ is generated by the matrix $\bigl(\begin{smallmatrix}
1&1\\0&1\end{smallmatrix}\bigr)$. We identify $M$
locally with its universal cover $\mathbb{H}$. By $\mathcal{F}$ we denote the ``usual'' (Ford) fundamental domain for $\Gammamma$ acting
on $\mathbb H$.
We let $\mu_{\mathrm{hyp}}$ denote the hyperbolic metric on $M$, which is compatible with the
complex structure of $M$, and has constant negative curvature equal to minus one.
The hyperbolic line element $ds^{2}_{\mathbb{H}yp}$, resp.~the hyperbolic Laplacian
$\Deltalta_{\mathbb{H}yp}$ acting on functions, are given in the coordinate $z=x+iy$ on $\mathbb{H}$ by
\begin{align*}
ds^{2}_{\mathbb{H}yp}:=\frac{dx^{2}+dy^{2}}{y^{2}},\quad\textrm{resp.}
\quad\Deltalta_{\mathbb{H}yp}:=-y^{2}\left(\frac{\partial^{2}}{\partial
x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right).
\end{align*}
By $d_{\mathrm{hyp}}(z,w)$ we denote the hyperbolic distance between the two points $z\in\mathbb{H}$ and
$w\in\mathbb{H}$. Our normalization of the hyperbolic Laplacian is different from the one considered in \cite{Ni73} and \cite{He83} where the Laplacian is taken with the plus sign.
\subsection{Modular forms}
Following \cite{Se73}, we define a weakly modular form $f$ of even weight $2k$ for $k \geq 0$ associated to $\Gammamma$ to be a
function $f$ which is meromorphic on $\mathbb H$ and satisfies the transformation property
\begin{equation}\lambdabel{transf prop}
f\left(\frac{az+b}{cz+d}\right) = (cz+d)^{2k}f(z),
\quad\textrm{for any $\begin{pmatrix}a&b\\c&d\end{pmatrix} \in \Gammamma$.}
\end{equation}
In the setting of this paper, any weakly modular
form $f$ will satisfy the relation $f(z+1)=f(z)$, so that for some positive integer $N$ we can write
$$
f(z) = \sum\limits_{n=-N}^{\infty}a_{n}q_z^{n},
\quad\text{ where } q_z =e(z)= e^{2\pi iz}.
$$
If $a_{n} = 0$ for all $n < 0$, then $f$ is said to be holomorphic at the cusp at $\infty$.
A holomorphic modular form with respect to $\Gammamma$ is a weakly modular form which is holomorphic on $\mathbb H$ and at all the cusps of $\Gammamma$.
When the weight $k$ is zero, the transformation property \eqref{transf prop} indicates that the function $f$ is invariant with respect to the action of elements of the group $\Gammamma$, so it may be viewed as a meromorphic function on the surface $M=\Gammamma\backslash \mathbb{H}$. In other words, a meromorphic function on $M$
is a weakly modular form of weight $0$.
For any two weight $2k$ weakly modular forms $f$ and $g$ associated to $\Gammamma$, with integrable singularities at finitely many points in $\mathcal{F}$,
the generalized inner product $\lambdangle \cdot,\, \cdot \rangle$ is defined as
\begin{equation} \lambdabel{def:inner prod}
\lambdangle f,g\rangle = \lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}f(z) \overline{g(z)}(\text{\rm Im}(z))^{2k}d\mu_{\mathbb{H}yp}(z)
\end{equation}
where the integration is taken over the portion $\mathcal{F}(Y)$ of the fundamental domain $\mathcal{F}$ equal
to $\mathcal{F}\setminus \mathcal{F}_\infty(Y)$.
\subsection{Atkin-Lehner groups} \lambdabel{sect Atkin Leh groups}
Let $N=p_1\cdot\ldots\cdot p_r$ be a square-free positive integer, including the case $N=1$ (where $r=0$).
The subset of $\text{\rm SL}(2,\mathbb{R})$, defined by
\begin{align*}
\Gammamma_0(N)^+:=\left\{ \frac{1}{\sqrt{e}}\begin{pmatrix}a&b\\c&d\end{pmatrix}\in
\text{\rm SL}(2,\mathbb{R}): \,\,\, ad-bc=e, \,\,\, a,b,c,d,e\in\mathbb{Z}, \,\,\, e\mid N,\ e\mid a,
\ e\mid d,\ N\mid c \right\}
\end{align*}
is an arithmetic subgroup of $\text{\rm SL}(2,\mathbb{R})$. We use the terminology Atkin-Lehner groups of level $N$ to describe $\Gammamma_0(N)^+$
in part because these groups are obtained by adding all Atkin-Lehner involutions to the congruence group $\Gammamma_0(N)$, see \cite{AtLeh70}.
Let $\{\pm \textrm{Id}\}$ denote the two-element subgroup of $\text{\rm SL}(2,\mathbb{R})$ consisting of $\pm\textrm{Id}$, where $\textrm{Id}$ is the identity matrix.
In general, if $\Gammamma$ is a subgroup of $\text{\rm SL}(2,\mathbb{R})$, we let $\overline{\Gammamma} := \Gammamma /\{\pm \textrm{Id}\}$ denote its projection into $\textrm{PSL}(2,\mathbb{R})$.
Set $Y_N^{+}:=\overline{\Gammamma_0(N)^+} \backslash \mathbb{H}$. According to \cite{Cum04},
for any square-free $N$ the quotient space $Y_{N}^{+}$ has one cusp at $\infty$ with the cusp width equal to one. The spaces $Y_{N}^{+}$ will be
used in the last section where we give examples of our results for generators of function fields
of meromorphic functions on $Y_N^{+}$.
\subsection{Generators of function fields of Atkin-Lehner groups of small genus}
An explicit construction of generators of function fields of all meromorphic functions on $Y_N^{+}$ with genus $g_{N,+}\leq 3$ was given in \cite{JST13}.
When $g_{N,+}=0$, the function field of meromorphic functions on $Y_N^+$ is generated by a single function, the Hauptmodul $j_N^+(z)$, which is
normalized so that its $q$-expansion is of the form $q_z^{-1}+O(q_z)$. The Hauptmodul $j_N^+(z)$ appears in the ``Monstrous Moonshine'' and was investigated
in many papers, starting with Conway and Norton \cite{CN79}. The action of the $m$-th Hecke operator $T_m$ on $j_N^+(z)$ produces a meromorphic form on
$Y_N^{+}$ with the $q$-expansion $j_N^+|T_{m}(z)= q_z^{-m} + O(q_z)$.
When $g_{N,+}\geq 1$, the function field associated to $Y_N^+$ is generated by two functions $x_N^+(z)$ and $y_N^+(z)$. Stemming
from the results in \cite{JST13}, we have that for $g_{N,+}\leq 3$
the generators $x_N^+(z)$ and $y_N^+(z)$ can be chosen so that their $q$-expansions are of the form
$$
x_N^+(z)=q_z^{-a}+\sum_{j=1}^{a-1}a_jq_z^{-j}+O(q_z) \quad \text{and} \quad y_N^+(z)=q_z^{-b}+\sum_{j=1}^{b-1}b_jq_z^{-j}+O(q_z)
$$
where $a,b$ are positive integers with $a\leq 1+g_{N,+}$, and $b\leq 2+g_{N,+}$. Furthermore, for $g_{N,+} \leq 3$, it is shown in \cite{JST13} that
all coefficients in the $q$-expansion for $x_N^+(z)$ and $y_N^+(z)$ are integers. For all such $N$, the precise values of
these coefficients out to large order were computed, and the results are available at \cite{jst url}.
\section{Two Poincar\'e series}
In this section we will define the Niebur-Poincar\'e series $F_{m}(z,s)$ and the resolvent kernel, also referred
to as the automorphic Green's function $G_s(z,w)$. We refer the reader to \cite{Ni73} for additional information
regarding $F_{m}(z,s)$ and to \cite{He83} and \cite{Iwa02} and references therein for further details regarding $G_s(z,w)$. As said above, we will suppress the group $\Gammamma$ from the notation.
\subsection{Niebur-Poincar\'e series}\lambdabel{sect_NP-series}
We start with the definition and properties of the Niebur-Poincar\'e series $F_{m}(z,s)$ associated to a co-finite Fuchsian group with one cusp; then we will specialize results to the setting of Atkin-Lehner groups.
\subsubsection{Niebur-Poincar\'e series associated to a co-finite Fuchsian group with one cusp}
Let $m$ be a non-zero integer, $z=x+iy\in\mathbb{H}$, and $s\in\mathbb{C}$ with $\mathbb{R}e(s)>1$.
Recall the notation $e(x):=\exp(2\pi i x)$, and let $I_{s-1/2}$ denote the modified $I$-Bessel function of the first kind; see, for example, Appendix B.4,
formula (B.32) of \cite{Iwa02}. The Niebur-Poincar\'e series $F_{m}(z,s)$ is defined by the series
\begin{equation}\lambdabel{Def:Niebur Poinc series}
F_m(z,s)=F_m^\Gammamma(z,s):= \sum_{\gammamma\in\Gammamma_\infty \backslash \Gammamma} e(m\mathbb{R}e(\gammamma z))({\mathrm{Im}}(\gammamma z))^{1/2}I_{s-1/2}(2\pi |m| {\mathrm{Im}}(\gammamma z)).
\end{equation}
For fixed $m$ and $z$, the series \eqref{Def:Niebur Poinc series} converges absolutely and uniformly on any compact subset of the half plane
$\mathbb{R}e(s)>1$. Moreover, $\Deltalta_{\mathbb{H}yp}F_m(z,s) = s(1-s) F_m(z,s)$ for all $s\in\mathbb{C}$ in the half plane $\mathbb{R}e(s)>1$. From Theorem 5 of \cite{Ni73},
we have that for any non-zero integer $m$, the function $F_m(z,s)$ admits a meromorphic continuation to the whole complex plane $s\in\mathbb{C}$.
Moreover, $F_m(z,s)$ is holomorphic at $s=1$ and, according to the spectral expansion given in Theorem 5 of \cite{Ni73}, $F_m(z,1)$
is orthogonal to constant functions, meaning that
$$
\lambdangle F_m(z,1), 1 \rangle=0.
$$
For our purposes, it is necessary to employ the Fourier expansion of $F_{m}(z,s)$ in the cusp $\infty$. The
Fourier expansion is proved in \cite{Ni73} and involves Kloosterman sums $S(m,n;c)$, which we now define. For
any integers $m$ and $n$, and real number $c$, define
$$
S(m,n;c)=S_\Gammamma(m,n;c):= \sum_{\bigl(\begin{smallmatrix}
a&\ast\\c&d\end{smallmatrix}\bigr)\in \Gammamma_\infty \diagdown \Gammamma \diagup \Gammamma_\infty } e\left( \frac{ma + nd}{c}\right).
$$
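For orientation, we note the presumably well-known special case when $\Gammamma = \mathrm{PSL}(2,\mathbb{Z})$ and $n=0$: the double cosets with lower-left entry $c>0$ are parametrized by the residues $a$ modulo $c$ with $(a,c)=1$, so $S(m,0;c)$ reduces to a Ramanujan sum, and its Dirichlet series admits the classical evaluation
$$
S(m,0;c)=\sum_{\substack{a\,(\mathrm{mod}\,c) \\ (a,c)=1}} e\left(\frac{ma}{c}\right),
\qquad
\sum_{c>0}\frac{S(m,0;c)}{c^{2s}}=\frac{1}{\zeta(2s)}\sum_{d\mid m}d^{1-2s}.
$$
This is the source of the divisor sums which appear in the explicit evaluation \eqref{B0 for A-L groups} below.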
For $\mathbb{R}e(s)>1$ and $z=x+iy\in \mathbb{H}$, the Fourier expansion of $F_m(z,s)$ is given by
\begin{equation}\lambdabel{Four exp Nieb}
F_m(z,s)=e(mx)y^{1/2}I_{s-1/2}(2\pi |m|y) + \sum_{k=-\infty}^{\infty}b_k(y,s;m)e(kx),
\end{equation}
where
$$
b_0(y,s;m) = \frac{y^{1-s}}{(2s-1)\Gammamma(s)}2\pi^s |m|^{s-1/2} \sum_{c>0} S(m,0;c)c^{-2s}=\frac{y^{1-s}}{(2s-1)}B_0(s;m)
$$
and, for $k\neq 0$
$$
b_k(y,s;m)=B_k(s;m)y^{1/2}K_{s-1/2}(2\pi |k|y),
$$
with
$$
B_k(s;m)= 2 \sum_{c>0}S(m,k;c)c^{-1}\cdot \left\{
\begin{array}{ll}
J_{2s-1}\left(\frac{4\pi}{c} \sqrt{mk}\right), & \textrm{\rm if \,}mk>0 \\
I_{2s-1}\left(\frac{4\pi}{c} \sqrt{|mk|}\right), & \textrm{\rm if \,} mk<0.
\end{array}
\right.
$$
In the above expression, $J_{2s-1}$ denotes the $J$-Bessel function and $K_{s-1/2}$ is the modified Bessel function;
see, for example, formula (B.28) in \cite{Iwa02} for $J_{2s-1}$ and formula (B.34) of \cite{Iwa02}
for $K_{s-1/2}$.
According to the proof of Theorem 6 from \cite{Ni73}, the Fourier expansion \eqref{Four exp Nieb} extends by the principle of analytic continuation to the case when $s=1$, hence putting $B_k(1;m):= \lim_{s\downarrow 1} B_k(s;m)$, we have
\begin{equation}\lambdabel{Four exp Nieb at 1}
F_m(z,1)=\frac{\sigmanh(2\pi|m|y)}{\pi \sqrt{|m|}}e(mx) + B_0(1;m)+ \sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{2 \sqrt{|k|}} e^{-2\pi|k|y}B_k(1;m)e(kx).
\end{equation}
It is clear from \eqref{Four exp Nieb at 1} that for $n>0$ one has that
$$
F_{-n}(z,1) = \frac{1}{2\pi\sqrt{n}}q_{z}^{-n} + O(1)
\,\,\,\,\,\textrm{\rm as $z \rightarrow \infty$.}
$$
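Indeed, writing the hyperbolic sine in \eqref{Four exp Nieb at 1} in terms of exponentials, the term with $m=-n$ gives
$$
\frac{\sigmanh(2\pi n y)}{\pi \sqrt{n}}\,e(-nx)
=\frac{1}{2\pi\sqrt{n}}\left(e^{2\pi n y}-e^{-2\pi n y}\right)e(-nx)
=\frac{1}{2\pi\sqrt{n}}\,q_{z}^{-n}+O\!\left(e^{-2\pi n y}\right),
$$
while the constant term $B_0(1;-n)$ and the exponentially damped terms with $k\neq 0$ remain bounded as $y={\mathrm{Im}}(z)\to\infty$.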
Moreover, applying $\frac{\partial }{\partial s}$ to the Fourier expansion \eqref{Four exp Nieb}, taking $s=1$ and reasoning analogously to the proof of Lemma 4.3 (1), p. 19 of \cite{BK19},
we immediately deduce the following crude bound
\begin{equation} \lambdabel{eq: N-P deriv bound}
\left.\frac{\partial}{\partial s} F_{-n}(z,s)\right|_{s=1} \ll \exp\left( 2\pi n {\mathrm{Im}}(z)\right), \quad \text{as} \quad {\mathrm{Im}}(z) \to \infty.
\end{equation}
We note that the value of the derivative of the Niebur-Poincar\'e series at $s=1$ satisfies a
differential equation, namely that
\begin{align} \notag
\Deltalta_{\mathbb{H}yp}\left(\frac{\partial}{\partial s}\left. F_{-n}(z,s) \right|_{s=1} \right) &=\lim_{s\to 1}
\Deltalta_{\mathbb{H}yp} \left(\frac{F_{-n}(z,s) - F_{-n}(z,1)}{(s-1)}\right)= \\&=\lim_{s\to 1} \left(\frac{s(1-s)F_{-n}(z,s) -0}{(s-1)}\right) = -F_{-n}(z,1). \lambdabel{delta of deriv of N-P}
\end{align}
\subsubsection{Fourier expansion when $\Gammamma$ is an Atkin-Lehner group}
One can explicitly evaluate $B_0(1;m)$ for $m > 0$ when $\Gammamma$ is an Atkin-Lehner group. Set $\Gammamma=\overline{\Gammamma_0(N)^+}$
where $N$ is squarefree, which we express as $N=\prod\limits_{\nu=1}^r p_\nu$. Let $B_{0,N}^+(1;m)$ denote
the coefficient $B_0(1;m)$ for $\overline{\Gammamma_0(N)^+}$.
From Theorem 8 and Proposition 9 of \cite{JST13} we get that
\begin{equation}\lambdabel{B0 for A-L groups}
B_{0,N}^+(1;m)= \frac{12\sigmagma(m)}{\pi\sqrt{m}}\prod\limits_{\nu=1}^r \left(1-
\frac{p_\nu^{\alpha_{p_\nu}(m)+1}(p_\nu -1) }{\left(p_\nu^{\alpha_{p_\nu}(m)+1} - 1\right)(p_\nu+1)}\right) ,
\end{equation}
where $\sigmagma(m)$ denotes the sum of divisors of a positive integer $m$ and $\alpha_p(m)$ is the largest integer such that $p^{\alpha_p(m)}$ divides $m$.
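To illustrate the arithmetic in \eqref{B0 for A-L groups}: for the full modular group one has $N=1$ and $r=0$, so the product is empty and the formula reduces to $B_{0,1}^+(1;m)= \frac{12\sigmagma(m)}{\pi\sqrt{m}}$; for $N=2$ and $m=1$ one has $\alpha_{2}(1)=0$, so that
$$
B_{0,2}^+(1;1)= \frac{12}{\pi}\left(1-\frac{2\cdot (2-1)}{(2-1)(2+1)}\right)=\frac{12}{\pi}\cdot\frac{1}{3}=\frac{4}{\pi}.
$$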
These expressions will be used in our explicit examples in section \ref{sect:examples} below.
\subsection{Automorphic Green's function} \lambdabel{sec: aut Green}
The automorphic Green's function, also called the resolvent kernel, for the Laplacian on $M$ is defined on page 31 of \cite{He83}.
In the notation of \cite{He83}, let $\chi$ be the identity character, $z,w\in \mathcal{F}$ with $z\neq w$, and $s\in\mathbb{C}$ with $\mathbb{R}e(s)>1$.
Formally, consider the series
$$
G_s(z,w)= \sum_{\gammamma\in\Gammamma} k_s(\gammamma z,w)
$$
with
$$
k_s(z,w):=-\frac{\Gammamma(s)^2}{4\pi \Gammamma(2s)}\left[1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right]^s
F\left(s,s;2s;1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right)
$$
and where $F(\alpha,\beta;\gammamma;u)$ is the classical hypergeometric function. We should point out that
the normalization we are using, which follows \cite{He83}, differs from the normalization for
the Green's function in Chapter 5 of \cite{Iwa02}; the two normalizations differ by a minus sign.
With this said, it is proved in Proposition 6.5 on p. 33 of \cite{He83} that the series which defines $G_{s}(z,w)$ converges
uniformly and absolutely on compact subsets of $(z,w,s) \in \mathcal{F} \times \mathcal{F}\times \{s\in\mathbb{C} :\mathbb{R}e(s)>1\}$.
Furthermore, for all $s\in\mathbb{C}$ with $\mathbb{R}e(s)>1$, and all $z,w \in\mathbb{H}$ with $z\neq \gammamma w$ for $\gammamma\in\Gammamma$, the function
$G_s(z,w)$ is an eigenfunction of $\Deltalta_{\mathbb{H}yp}$ with eigenvalue $s(1-s)$.
Combining formulas 9.134.1 and 8.703 from \cite{GR07} and applying the identity
$$
\cosh(d_{\mathbb{H}yp}(z,w))=\left(2-\left[1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right]\right)\left(1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right)^{-1}
$$
we deduce that $$k_s(z,w)=-\frac{1}{2\pi}Q^0_{s-1}(\cosh(d_{\mathbb{H}yp}(z,w))),$$ where $Q_\nu^{\mu}$ is the associated Legendre function as defined by formula 8.703 in \cite{GR07}, with $\nu=s-1$ and $\mu=0$.
Now, we can combine Theorem 4 of \cite{Ni73}
with Theorem 5.3 of \cite{Iwa02}, to deduce the Fourier expansion of the automorphic Green function in terms of
the Niebur-Poincar\'e series.
Specifically, let $w\in \mathcal{F}$ be fixed. Assume $z \in \mathcal{F}$
with $y={\mathrm{Im}}(z) > \max\{{\mathrm{Im}}(\gammamma w): \gammamma\in \Gammamma\}$, and assume $s\in\mathbb{C}$ with $\mathbb{R}e(s)> 1$. Then
$G_{s}(z,w)$ admits the expansion
\begin{equation}\lambdabel{Four exp Green}
G_s(z,w)=-\frac{y^{1-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s)-\sum_{k\in\mathbb{Z}\smallsetminus \{0\}} y^{1/2}K_{s-1/2}(2\pi |k| y) F_{-k}(w,s)e(kx)
\end{equation}
where $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$ is the parabolic Eisenstein series associated to the cusp at $\infty$ of $\Gammamma$; see the next section for its full description.
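We note in passing why the combination $G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)$, which appears throughout this article, is holomorphic at $s=1$ once ${\mathrm{Im}}(z)$ is sufficiently large: by \eqref{Four exp Green}, its zeroth Fourier coefficient in $z$ equals
$$
\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_\infty^{\mathrm{par}}(w,s),
$$
and the zero of the factor $1-\frac{y^{1-s}}{2s-1}$ at $s=1$ cancels the simple pole of $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$ there, while the non-zero Fourier modes involve the functions $F_{-k}(w,s)$, which are holomorphic at $s=1$, multiplied by exponentially decaying $K$-Bessel factors; compare \eqref{G+Epar} below.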
The function $G_s(z,w)$ is unbounded as $z\to w$ and, according to Proposition 6.5 of \cite{He83}, we have the asymptotics
$$
G_s(z,w)=\frac{\mathrm{ord}(w)}{2\pi}\log|z-w|+O(1),\quad \text{as}\quad z\to w.
$$
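This asymptotic behavior can also be read off from the Legendre function representation of $k_s$ given above. Since $Q^0_{s-1}(u)=-\tfrac{1}{2}\log(u-1)+O(1)$ as $u\to 1^{+}$ and $\cosh(d_{\mathbb{H}yp}(z,w))-1$ is comparable to $|z-w|^2$ for $z$ near $w$, each of the $\mathrm{ord}(w)$ terms $k_s(\gammamma z,w)$ with $\gammamma$ in the stabilizer of $w$ satisfies
$$
k_s(\gammamma z,w)=\frac{1}{2\pi}\log|z-w|+O(1) \quad \text{as}\quad z\to w,
$$
while the remaining terms of the series defining $G_s(z,w)$ stay bounded near $w$.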
\section{Eisenstein series and their Kronecker limit formulas}
The purpose of this section is two-fold. First, we state the definitions of parabolic and elliptic Eisenstein
series as well as their associated Kronecker limit formulas. Specific examples of the parabolic Kronecker limit
formulas are recalled from \cite{JST13}. Second, we prove the factorization theorem for meromorphic forms in
terms of elliptic Kronecker limit functions, as stated in \eqref{ell kroneck limit one cusp2}.
\subsection{Parabolic Kronecker limit functions}
Associated to the cusp at $\infty$ of $\Gammamma$ one has a parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$.
Let $\Gammamma_{\infty}$ denote the stabilizer subgroup within $\Gammamma$ of $\infty$. For $z\in \mathbb{H}$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$,
${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ is defined by the series
\begin{equation*}
{\cal E}^{\mathrm{par}}_{\infty}(z,s) =
\sum\limits_{\gammamma \in \Gammamma_{\infty}\backslash \Gammamma}\textrm{Im}(\gammamma z)^{s}.
\end{equation*}
It is well-known that ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ admits a meromorphic continuation
to all $s\in \mathbb{C}$ and a functional equation in $s$.
For us, the Kronecker limit formula means the determination of the constant term in the Laurent expansion
of ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ at $s=1$.
Classically, Kronecker's limit formula is the assertion that for $\Gammamma = \textrm{PSL}(2,\mathbb{Z})$
one has that
\begin{equation}\lambdabel{PSL2_KLF}
\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=
\frac{3}{\pi(s-1)}
-\frac{1}{2\pi}\log\bigl(|\Deltalta(z)|{\mathrm{Im}}(z)^{6}\bigr)+C+O(s-1)
\,\,\,\text{\rm as} \,\,\,
s \rightarrow 1.
\end{equation}
where $C=6(1-12\,\zeta'(-1)-\log(4\pi))/\pi$ and $\Deltalta(z)$ is Dedekind's Delta function, which is defined by
\begin{equation}\lambdabel{PSL2_Delta}
\Deltalta(z) = \left[q_{z}^{1/24}\prod\limits_{n=1}^{\infty}\left(1 - q_{z}^{n}\right)\right]^{24} = \eta(z)^{24}.
\end{equation}
We refer to \cite{Siegel80} for a proof of \eqref{PSL2_KLF}, though the above formulation
follows the normalization from \cite{JST13}.
For general Fuchsian groups of the first kind, Goldstein \cite{Go73} studied analogues of Kronecker's limit formula associated to parabolic Eisenstein series. After a slight renormalization and trivial generalization, Theorem 3-1 from \cite{Go73} asserts that the parabolic
Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)$ admits the Laurent expansion
\begin{equation} \lambdabel{KronLimitPArGen}
\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \frac{1}{\vol_{\mathbb{H}yp}(M) (s-1)} + \beta- \frac{1}{\vol_{\mathbb{H}yp}(M)} \log (\varepsilonrt\eta_{\infty}^4(z)\varepsilonrt {\mathrm{Im}}(z)) + O(s-1),
\end{equation}
as $s \to 1$ and where $\beta=\beta_{\Gammamma}$ is a certain real constant depending only on the group $\Gammamma$.
As the notation suggests, the function $\eta_{\infty}(z)$ is a holomorphic form for $\Gammamma$ and can be viewed as
a generalization of the eta function $\eta(z)$ which is defined in \eqref{PSL2_Delta} for the full modular group.
By employing the functional equation for the parabolic Eisenstein series, as stated in Theorem 6.5 of \cite{Iwa02},
one can re-write the Kronecker limit formula as stating that
\begin{equation} \lambdabel{KronLimas s to 0}
\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= 1+ \log (\varepsilonrt \eta_{\infty}^4(z)\varepsilonrt {\mathrm{Im}}(z))\cdot s + O(s^2) \quad\text{ as } s \to 0,
\end{equation}
see Corollary 3 of \cite{JvPS19}. In this formulation, we will call the function
$$
P(z)=P_{\Gammamma}(z):=\log (\varepsilonrt \eta_{\infty}^4(z)\varepsilonrt {\mathrm{Im}}(z))
$$
the parabolic Kronecker limit function of $\Gammamma$.
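For instance, for the full modular group, where $\vol_{\mathbb{H}yp}(M)=\pi/3$, comparing \eqref{KronLimitPArGen} with the classical formula \eqref{PSL2_KLF} and using the identity $|\Deltalta(z)|\,{\mathrm{Im}}(z)^{6}=\bigl(|\eta(z)|^{4}\,{\mathrm{Im}}(z)\bigr)^{6}$ shows that one may take
$$
\eta_{\infty}(z)=\eta(z),\qquad P(z)=\log\bigl(|\eta(z)|^{4}\,{\mathrm{Im}}(z)\bigr) \qquad\text{and}\qquad \beta=C,
$$
with $C$ as in \eqref{PSL2_KLF}.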
\subsection{Atkin-Lehner groups} \lambdabel{sect Atkin Leh groups_KLF}
Let $N=p_1\cdot \ldots \cdot p_r$ be a positive squarefree number, which includes the possibility that $N=1$, and set
$$
\ell_N = 2^{1-r}\textrm{lcm}\Big(4,\ 2^{r-1}\frac{24}{(24,\sigmagma(N))}\Big)
$$
where $\textrm{lcm}$ stands for the least common multiple of two numbers. In \cite{JST13}, Theorem 16, it is proved that
\begin{equation}\lambdabel{DeltaN}
\Deltalta_N(z):=\left( \prod_{v \mid N} \eta(v z) \right)^{\ell_N}
\end{equation}
is a weight $k_N=2^{r-1} \ell_N$ holomorphic form for $\Gammamma_0(N)^+$ vanishing only at the cusp. By the valence formula,
the order of vanishing of $\Deltalta_N(z)$ at the cusp is $\nu_N:=k_N \vol_{\mathbb{H}yp}(Y_N^{+})/(4\pi)$
where $\vol_{\mathbb{H}yp}(Y_N^{+})=\pi\sigmagma(N)/(3\cdot 2^r)$ is the hyperbolic volume of the surface $Y_N^{+}$.
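To illustrate these formulas, take $N=2$, so that $r=1$ and $\sigmagma(2)=3$. Then $\ell_2=\textrm{lcm}(4,8)=8$, so $\Deltalta_2(z)=\left(\eta(z)\eta(2z)\right)^{8}$ is a holomorphic form of weight $k_2=8$ for $\Gammamma_0(2)^+$, the hyperbolic volume is $\vol_{\mathbb{H}yp}(Y_2^{+})=\pi/2$, and the order of vanishing at the cusp is
$$
\nu_2=\frac{k_2\,\vol_{\mathbb{H}yp}(Y_2^{+})}{4\pi}=\frac{8\cdot(\pi/2)}{4\pi}=1,
$$
in agreement with the $q$-expansion $\left(\eta(z)\eta(2z)\right)^{8}=q_z\prod_{n\geq 1}(1-q_z^{n})^{8}(1-q_z^{2n})^{8}$.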
The Kronecker limit formula \eqref{KronLimitPArGen} for the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)$
associated to $Y_N^{+}$ reads as
\begin{equation} \lambdabel{KronLimitPArGen - level N}
\mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)= \frac{1}{\vol_{\mathbb{H}yp}(Y_N^{+}) (s-1)} + \beta_N - \frac{1}{\vol_{\mathbb{H}yp}(Y_N^{+})}P_N(z) + O((s-1))
\end{equation}
as $s \to 1$. From Example 7 and Example 4 of \cite{JvPS19} we have the explicit evaluations of $\beta_{N}$ and $P_{N}(z)$.
Namely,
\begin{equation} \lambdabel{betaN}
\beta_N=- \frac{1}{\vol_{\mathbb{H}yp} (Y_N^{+}) }\left( \sum_{j=1}^{r} \frac{(p_j -1)\log p_j}{2(p_j+1)}- \log N + 2\log (4\pi) + 24\zeta'(-1) - 2\right)
\end{equation}
and the parabolic Kronecker limit function $P_N(z)$ is given by
$$
P_N(z)= \log\left( \sqrt[2^r]{\prod_{v \mid N} \varepsilonrt \eta(vz)\varepsilonrt ^4} \cdot {\mathrm{Im}}(z) \right).
$$
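For example, for $N=2$ this specializes to
$$
P_2(z)= \log\left( \varepsilonrt \eta(z)\eta(2z)\varepsilonrt^{2}\, {\mathrm{Im}}(z) \right).
$$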
\subsection{Elliptic Kronecker limit functions}\lambdabel{ell_Eisen_series}
Elliptic subgroups of $\Gammamma$ have finite order and a unique fixed point within $\mathbb H$. For all
but a finite number of $w \in \mathcal{F}$, the order of the elliptic subgroup $\Gammamma_{w}$ which fixes $w$ is one.
For $z\in \mathbb{H}$ with $z\not=w$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$, the elliptic Eisenstein series
${\cal E}^{\textrm{ell}}_{w}(z,s)$ is defined by the series
\begin{equation}\lambdabel{ell_eisen}
{\cal E}^{\textrm{ell}}_{w}(z,s) =\sum\limits_{\gammamma \in \Gammamma_{w}\backslash \Gammamma}
\sigmanh(d_{\mathrm{hyp}}(\gammamma z, w))^{-s} =
\sum\limits_{\gammamma \in \Gammamma_{w}\backslash \Gammamma}
\left( \frac{2\,\textrm{Im}(w)\textrm{Im}(\gammamma z)}{|\gammamma z-w|\,|\gammamma z-\overline{w}|} \right)^s.
\end{equation}
It was first shown in \cite{vP10} that \eqref{ell_eisen} admits a meromorphic continuation to all $s \in \mathbb{C}$.
The analogue of the Kronecker limit formula for ${\cal E}^{\textrm{ell}}_{w}(z,s)$ was first proved in \cite{vP10}; see also \cite{JvPS19}.
In the setting of this paper, it is shown in \cite{vP10}
that for any $w\in\mathcal{F}$ the series \eqref{ell_eisen} admits the Laurent expansion
\begin{multline}\lambdabel{Kronecker_elliptic}
\mathrm{ord}(w)\,\mathcal{E}^{\mathrm{ell}}_{w}(z,s)-
\frac{2^{s}\sqrt{\pi}\,\Gammamma(s-\frac{1}{2})}{\Gammamma(s)}\mathcal{E}^{\mathrm{par}}_{\infty}(w,1-s)
\,\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \\
=-\frac{2\pi}{\vol_{\mathbb{H}yp}(M)}
-\frac{2\pi}{\vol_{\mathbb{H}yp}(M)}\log\bigl(|H_{\Gammamma}(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr)\cdot s+O(s^2)\quad\textrm{as $s \rightarrow 0$.}
\end{multline}
As a function of $z$, $H(z,w):=H_{\Gammamma}(z,w)$ is holomorphic on $\mathbb{H}$ and uniquely determined up to multiplication by a complex constant of absolute value one;
in addition, $H(z,w)$ is an automorphic form with a non-trivial multiplier system, which depends on $w$, with respect to
$\Gammamma$ acting on $z$. The function $H(z,w)$ vanishes if and only if $z=\gammamma w$ for some $\gammamma\in\Gammamma$.
We call the function
$$
E_w(z)=E_{w,\Gammamma}(z):=\log\bigl(|H(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr)
$$
the elliptic Kronecker limit function of $\Gammamma$ at $w$.
\subsection{A factorization theorem}
We can now prove equation \eqref{ell kroneck limit one cusp2}.
\begin{proposition} \lambdabel{prop: factorization}
With notation as above, let
$f$ be a weight $2k$ meromorphic form on $\mathbb{H}$ with $q$-expansion at $\infty$ given by
\begin{equation} \lambdabel{q exp. of f_2k}
f(z)= 1+ \sum_{n=1}^{\infty} b_{f}(n)q_z^n.
\end{equation}
Let $\mathrm{ord}_w(f)$ denote the order of $f$ at $w$ and define the function
$$
H_{f}(z):= \prod_{w \in \mathcal{F}} H(z,w)^{\mathrm{ord}_w(f)}
$$
where $H(z,w)=H_{\Gammamma}(z,w)$ is given in \eqref{Kronecker_elliptic}. Then there exists a complex constant $c_{f}$ such that
\begin{equation} \lambdabel{factorization fla}
f(z) = c_{f}H_{f}(z).
\end{equation}
Furthermore,
$$
\abs{c_{f}} = \exp \left(-\frac{2\pi}{\vol_{\mathbb{H}yp}(M)} \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}
\left( 2-\log 2 + P(w)- \beta\vol_{\mathbb{H}yp}(M)\right) \right ),
$$
where $P(w)$ and $\beta$ are defined through the parabolic Kronecker limit function
\eqref{KronLimitPArGen}.
\end{proposition}
\begin{proof}
The proof closely follows the proof of Theorem 9 from \cite{JvPS19}.
Specifically, following the first part of the proof almost verbatim, we conclude that the quotient
$$
F_f(z):=\frac{H_f(z)}{f(z)}
$$
is a non-vanishing holomorphic function on $M$ which is bounded and non-zero at the cusp at $\infty$. Hence,
$\log \varepsilonrt F_{f}(z)\varepsilonrt$ is $L^{2}$ on $M$. From its spectral expansion and the fact that $\log \varepsilonrt F_{f}(z)\varepsilonrt$
is harmonic, one concludes $\log \varepsilonrt F_{f}(z)\varepsilonrt$ is constant, hence so is $F_{f}(z)$. The
evaluation of the constant is obtained by considering the limiting behavior as $z$ approaches $\infty$,
which is obtained by using the
asymptotic behavior of $H(z,w)$ as ${\mathrm{Im}}(z)\to\infty$, as given in Proposition 6 of \cite{JvPS19}.
\end{proof}
By following the proof of Proposition 12 from \cite{JvPS19} we obtain \eqref{ell kroneck limit one cusp}, and hence \eqref{ell kroneck limit one cusp2}, for meromorphic
forms $f$ on $\mathbb{H}$ with $q$-expansion \eqref{q exp. of f_2k}. We leave the verification of this simple argument to the reader.
\section{Proofs of main results}
\subsection{Proof of Theorem \ref{thm:main}}
Let $Y>1$ be sufficiently large so that the cuspidal neighborhood $\mathcal{F}_{\infty}(Y)$ of the cusp $\infty$ in $\mathcal{F}$ is of
the form $\{z \in {\mathbb H}: 0 < x < 1, y > Y\}$. For $s\in\mathbb{C}$ with $\mathbb{R}e(s)>1$, and arbitrary, but fixed $w\in\mathcal{F}$, we then have that
\begin{align*}
\int\limits_{\mathcal{F}(Y)}\Deltalta_{\mathbb{H}yp}(F_{-n}(z,1))&\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z) \\ &
-\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Deltalta_{\mathbb{H}yp} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z) \\ &= -s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z).
\end{align*}
Actually, the first summand on the left-hand side is zero since $F_{-n}(z,1)$ is harmonic, that is, $\Deltalta_{\mathbb{H}yp}F_{-n}(z,1)=0$; however, this
judicious form of the number zero is significant since we will use the method behind the Maass-Selberg theorem to
study the left-hand side of the above equation. Before this, note that the integrand on the right-hand side of the above equation is
holomorphic at $s=1$. As a result, we can write
\begin{align*}
\frac{\partial}{\partial s}&\left.\left(-s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_s(z,w) +
\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z)\right)\right|_{s=1}\\&=
\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z).
\end{align*}
Therefore,
\begin{align}\lambdabel{M-S starting f-la}
\lambdangle F_{-n}(z,1), &\overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle \notag \\&=\lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(z,s)\right)d\mu_{\mathbb{H}yp}(z) \notag \\&=
\lim_{Y\to \infty} \left[\frac{\partial}{\partial s}\left( \int\limits_{\mathcal{F}(Y)}\Deltalta_{\mathbb{H}yp}(F_{-n}(z,1))\left(G_s(z,w) +
\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z) \right. \right.\notag \\&-
\left.\left. \left. \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Deltalta_{\mathbb{H}yp} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\mathbb{H}yp}(z) \right) \right|_{s=1}\right]
\end{align}
The quantity on the right-hand side of \eqref{M-S starting f-la} is set up for an application of Green's theorem as
in the proof of the Maass-Selberg relations for the Eisenstein series. As described on page 89 of \cite{Iwa02}, when
applying Green's theorem to each term on the right-side of \eqref{M-S starting f-la} for fixed $Y$, the resulting
boundary terms on the sides of the fundamental domain, which are identified by $\Gammamma$, will sum to zero. As such,
we get that
\begin{align}\lambdabel{M-S interm f-la}
\lambdangle F_{-n}(z,1),& \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle \notag \\&=
\lim_{Y\to \infty} \left[\frac{\partial}{\partial s}\left( \int\limits_{0}^{1}\frac{\partial}{\partial y}F_{-n}(z,1)\left(G_s(z,w) +
\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)dx \right. \right. \notag\\ &-
\left.\left. \left. \int\limits_{0}^{1}F_{-n}(z,1)\frac{\partial}{\partial y} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)dx
\right) \right|_{s=1}\right],
\end{align}
where the functions of $z$ and their derivatives with respect to $y={\mathrm{Im}}(z)$ are evaluated at $z=x+iY$.
In order to compute the difference of the two integrals of the right-hand side of \eqref{M-S interm f-la}, we will use the Fourier expansions
\eqref{Four exp Nieb at 1} and \eqref{Four exp Green} of the series $F_{-n}(z,1)$ and $G_s(z,w)$ respectively. It will be more convenient to write the first term in the expansion \eqref{Four exp Nieb at 1} as $e(-nx)\sqrt{y}I_{\tfrac{1}{2}}(2\pi n y)$, as in \eqref{Four exp Nieb}.
Specifically, since the exponential functions $e(-nx)$ are orthogonal for different values of $n$, we get that
\eqref{M-S interm f-la} is equal to
\begin{multline*}
-F_{-n}(w,s)\sqrt{Y} \left(\frac{\partial}{\partial y}\left.\left(\sqrt{y}I_{\tfrac{1}{2}}(2\pi n y)\right)\right|_{y=Y}\cdot
K_{s-\tfrac{1}{2}}(2\pi n Y) \right. \\ \left.- I_{\tfrac{1}{2}}(2\pi n Y)\cdot \frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-
\tfrac{1}{2}}(2\pi n y) \right)\right|_{y=Y}\right)
\end{multline*}
\begin{multline*}
+B_0(1;-n)(1-s)\frac{Y^{-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s) \\+
\sum_{j\in\mathbb{Z}\smallsetminus\{0\}}F_j(w,s) \left( b_j(Y,1;-n)\cdot \frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-
\tfrac{1}{2}}(2\pi |j| y) \right)\right|_{y=Y} \right. \\ \left.-\frac{\partial}{\partial y}\left.b_j(y,1;-n)\right|_{y=Y}\cdot \sqrt{Y}
K_{s-\tfrac{1}{2}}(2\pi |j| Y) \right) =T_1(Y,s;w) + T_2(Y,s;w)+T_3(Y,s;w),
\end{multline*}
where the last equality above provides the definitions of the functions $T_{1}$, $T_{2}$ and $T_{3}$. Therefore, from
\eqref{M-S interm f-la} we conclude that
\begin{multline}\lambdabel{Three terms pre-comp}
\lambdangle F_{-n}(z,1), \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle =\\= \lim_{Y\to \infty}
\left[\left.\frac{\partial}{\partial s}\left( T_1(Y,s;w) + T_2(Y,s;w)+T_3(Y,s;w) \right)\right|_{s=1}\right]
\end{multline}
We will treat each of the three terms on the right-hand side of \eqref{Three terms pre-comp} separately.
To evaluate the term $T_{1}$ in \eqref{Three terms pre-comp}, we apply
formulas 8.486.2 and 8.486.11 of \cite{GR07} in order to compute derivatives of the Bessel functions.
In doing so, we conclude that
$$
T_1(Y,s;w)=-\frac{X}{2}F_{-n}(w,s)\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right],
$$
where we set $X=2\pi n Y$. Next, we express $K_{s+\tfrac{1}{2}} (X)$ in terms of $K_{s-\tfrac{1}{2}} (X)$ and $K_{s-\tfrac{3}{2}}(X)$, using formula 8.485.10 from \cite{GR07} to get
$$
K_{s+\tfrac{1}{2}} (X)= K_{s-\tfrac{3}{2}} (X) + \frac{2s-1}{X}K_{s-\tfrac{1}{2}} (X).
$$
Then, applying formula 8.486.21 from \cite{GR07}, we deduce that
\begin{align*}
\frac{\partial}{\partial s}&\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]\right|_{s=1}\\&= \sqrt{\frac{\pi}{2X}}e^X\mathrm{Ei}(-2X)\left[ -(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + \sqrt{\frac{2}{\pi X}} (2-1/X) \sigmanh (X) \right]+\frac{2}{X^2}e^{-X}\sigmanh (X),
\end{align*}
where $\mathrm{Ei}(x)$ denotes the exponential integral; see section 8.2 of \cite{GR07}. Continuing, we now employ
formula (B.36) from \cite{Iwa02} which asserts certain asymptotic behavior of the $I$-Bessel function as $X\to\infty$;
we are interested in the cases when $\nu=-1/2$ and when $\nu=3/2$.
This result, together with the bound $\mathrm{Ei}(-2X)\leq e^{-2X}/(2X) $, which follows from the expression 8.212.10 from \cite{GR07} for $\mathrm{Ei}(-x)$ with $x>0$, yields that
$$
\lim_{X\to\infty}\frac{X}{2}\frac{\partial}{\partial s}\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) +
I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]\right|_{s=1} =0.
$$
Therefore,
\begin{multline*}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_1(Y,s;w)\right|_{s=1}= - \frac{\partial}{\partial s}\left. F_{-n}(w,s)\right|_{s=1}\cdot \\\cdot\lim_{X\to \infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right].
\end{multline*}
Finally, by applying (B.36) from \cite{Iwa02} again, we deduce that
$$
\lim_{X\to \infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)
(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]=1.
$$
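Indeed, by the standard large-argument asymptotics of the Bessel functions (see, for example, (B.36) of \cite{Iwa02} for the $I$-Bessel function), one has, for fixed order,
$$
I_{\nu}(X)=\frac{e^{X}}{\sqrt{2\pi X}}\left(1+O\!\left(\tfrac{1}{X}\right)\right)
\qquad\textrm{and}\qquad
K_{\nu}(X)=\sqrt{\frac{\pi}{2X}}\,e^{-X}\left(1+O\!\left(\tfrac{1}{X}\right)\right)
\quad\textrm{as } X\to\infty,
$$
so each of the four products of an $I$-Bessel and a $K$-Bessel function in the bracket equals $\frac{1}{2X}\left(1+O\!\left(\tfrac{1}{X}\right)\right)$, and the bracket itself is $\frac{2}{X}\left(1+O\!\left(\tfrac{1}{X}\right)\right)$.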
Hence
\begin{equation}\lambdabel{term1}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_1(Y,s;w)\right|_{s=1}= - \frac{\partial}{\partial s}\left. F_{-n}(w,s)\right|_{s=1}.
\end{equation}
As for the term $T_{2}$ in \eqref{Three terms pre-comp}, let us use
the Laurent series expansion \eqref{KronLimitPArGen} of $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$, from
which one easily deduces that
$$
\frac{\partial}{\partial s}\left.(s-1)\frac{Y^{-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right|_{s=1} = \frac{1}{Y}\left( \beta -
\frac{P(w)+2+\log Y}{\vol_{\mathbb{H}yp}(M)}\right).
$$
Therefore
\begin{equation}\lambdabel{term2}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_2(Y,s;w)\right|_{s=1}=0.
\end{equation}
It remains to study the term $T_{3}$ in \eqref{Three terms pre-comp}.
Let us set $g(s,y,k):=\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi k y)$ for some positive integers $k$. Then $b_j(y,1;-n)=B_j(1;-n)g(1,y,|n|)$ and
\begin{multline*}
T_3(Y,s;w)=\sum_{j\in\mathbb{Z}\smallsetminus\{0\}} B_j(1;-n) F_j(w,s) \left( g(1,Y,|n|) \frac{\partial}{\partial y}\left.g(s,y,|j|)\right|_{y=Y} \right. \\ - \left. g(s,Y;|j|)\frac{\partial}{\partial y}\left.g(1,y,|n|)\right|_{y=Y} \right).
\end{multline*}
For positive integers $m$ and $\ell$ let us define
$$
G(s,Y,m,\ell):=g(1,Y,m) \frac{\partial}{\partial y}\left.g(s,y,\ell)\right|_{y=Y}- g(s,Y;\ell)\frac{\partial}{\partial y}\left.g(1,y,m)\right|_{y=Y}.
$$
Applying the formula 8.486.11 from \cite{GR07} to differentiate the $K-$Bessel function, together with formula 8.486.10 to express $K_{s+\tfrac{1}{2}}(2\pi |j|Y)$ we arrive at
\begin{multline*}
G(s,Y,|n|,|j|) =\frac{\pi Y}{2}K_{s-\tfrac{1}{2}}(2\pi |j| Y)K_{\tfrac{1}{2}}(2\pi |n| Y) \cdot \\ \cdot \left( |n|(K_{-\tfrac{1}{2}}(2\pi |n|Y) + K_{\tfrac{3}{2}}(2\pi |n|Y)) - |j|\left(2K_{s-\tfrac{3}{2}}(2\pi |j|Y) +\frac{2s-1}{2\pi |j| Y} K_{s-\tfrac{1}{2}}(2\pi |j|Y)\right)\right).
\end{multline*}
Now, we combine the bound (B.36) from \cite{Iwa02} with evaluation of the derivative $\frac{\partial}{\partial \nu} K_{\nu}$ at $\nu=\pm 1/2$ (formula 8.486.21 of \cite{GR07}) and the bound $\mathrm{Ei}(-4\pi |j|Y) \leq \exp(-4\pi |j|Y)/(4\pi |j|Y)$ for the exponential integral function to deduce the following crude bounds
$$
\max\left\{G(s,Y,|n|,|j|), \left.\frac{\partial }{\partial s} G(s,Y,|n|,|j|)\right|_{s=1}\right\}\ll (|n| + |j|)\exp(-2\pi Y (|n|+|j|)), \text{ as } Y\to +\infty,
$$
where the implied constant is independent of $Y,|j|$.
This, together with the bound \eqref{eq: N-P deriv bound} and the Fourier expansion \eqref{Four exp Nieb at 1} yields
$$
\frac{\partial}{\partial s} T_3(Y,s;w)\Big|_{s=1}\ll
\sum_{j\in\mathbb{Z}\smallsetminus\{0\}}(|n| + |j|) | B_j(1;-n)| \exp\left( -2\pi Y (|n|+|j|) + 2\pi|j| {\mathrm{Im}}(w)\right)
$$
It remains to estimate the sum on the right hand side of the above equation as $Y\to\infty$.
The bounds for the Kloosterman sum zeta function, as stated on page 75 of \cite{Iwa02},
yield bounds for $B_j(1;-n)$ for $j\neq 0$. Specifically, one has that
$$
B_j(1;-n)\ll \exp\left(\frac{4\pi \sqrt{|jn|}}{c_\Gammamma}\right)
$$
where $c_\Gammamma$ is a certain positive constant depending on the group $\Gammamma$; in fact, $c_{\Gammamma}$ is equal to the minimal positive
lower-left entry of a matrix from $\Gammamma$. Also, the implied constant in the bound for $B_j(1;-n)$ is independent of $j$. Therefore
$$
\frac{\partial}{\partial s}\left. T_3(Y,s;w)\right|_{s=1}\ll \sum_{j\in\mathbb{Z}\smallsetminus\{0\}} (|n| + |j|)\exp\left( -2\pi \left( (|j|+|n|)Y - 2\sqrt{|jn|}/c_\Gammamma -|j| {\mathrm{Im}}(w) \right) \right).
$$
For $Y > 2{\mathrm{Im}}(w) + 2\sqrt{n}/c_\Gammamma$, this series over $j$ is uniformly convergent and is $o(1)$ as $Y\to\infty$.
In other words,
\begin{equation}\lambdabel{T3_limit}
\lim_{Y\to\infty}\frac{\partial}{\partial s}\left. T_3(Y,s;w)\right|_{s=1} =0.
\end{equation}
When combining \eqref{T3_limit} with \eqref{Three terms pre-comp}, \eqref{term1} and \eqref{term2},
we have that
\begin{equation*}
\lambdangle F_{-n}(z,1), \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle = -\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1},
\end{equation*}
which completes the proof of \eqref{main f-la}.
\subsection{Proof of Corollary \ref{Rohr-Jensen}}
The proof of Corollary \ref{Rohr-Jensen} is a combination of Theorem \ref{thm:main} and the factorization theorem as stated in
Proposition \ref{prop: factorization}. The details are as follows.
To begin we shall prove formula \eqref{log norm basic}. Starting with \eqref{ell kroneck limit one cusp2}, which is written as
$$
\log\left( y^k |f(z)| \right) = kP(z) - \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \lim_{s\to 0} \frac{1}{s} \mathrm{ord}(w)\mathcal{E}^{\mathrm{ell}}_{w}(z,s),
$$
we can express $\lim_{s\to 0} \frac{1}{s}\mathrm{ord}(w) \mathcal{E}^{\mathrm{ell}}_{w}(z,s)$ in terms of the resolvent kernel.
Specifically, using \eqref{Ell Green connection}, we have that
\begin{equation} \lambdabel{ell kroneck limit one cusp3}
\log\left( y^k |f(z)| \right)=kP(z) + \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \lim_{s\to 0}
\left(\frac{2^{s}\sqrt{\pi} \Gammamma(s-1/2)}{\Gammamma(s+1)}(2s-1)G_s(z,w)\right).
\end{equation}
By applying the functional equation for the Green's function, see Theorem 3.5 of \cite{He83} on pages 250--251, we get
\begin{align*}
\lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gammamma(s-1/2)}{\Gammamma(s+1)}(2s-1)G_s(z,w) =
\lim_{s\to 1}&\left(\frac{2^{1-s}\sqrt{\pi} \Gammamma(1/2-s)}{\Gammamma(2-s)}\left((1-2s)G_s(z,w)\right.\right.
\\&- \left.\left.\frac{}{}\mathcal{E}_\infty^{\mathrm{par}}(z,1-s)\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\right).
\end{align*}
From the Kronecker limit formula \eqref{KronLimas s to 0} and standard Taylor series expansion of the gamma function we immediately deduce that
\begin{multline*}
\lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gammamma(s-1/2)}{\Gammamma(s+1)}(2s-1)G_s(z,w)= \lim_{s\to 1}2\pi (-1+(s-1)(2-\log 2))\cdot \\ \cdot \left[2(1-s)G_s(z,w) - (G_s(z,w)+\mathcal{E}_\infty^{\mathrm{par}}(w,s)) - P(z)(1-s)\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right].
\end{multline*}
According to \cite{Iwa02}, p. 106, the point $s=1$ is a simple pole of $G_s(z,w)$ with residue $-1/\vol_{\mathbb{H}yp}(M)$ (note that our $G_s(z,w)$ differs from the automorphic Green's function from \cite{Iwa02} by a factor of $-1$). Therefore, the Kronecker limit formula \eqref{KronLimitPArGen} yields the following equation
\begin{align} \lambdabel{Limit of G as s to 0}
\lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gammamma(s-1/2)}{\Gammamma(s+1)}(2s-1)G_s(z,w)&= -\frac{2\pi}{\vol_{\mathbb{H}yp}(M)}P(z)- \frac{4\pi}{\vol_{\mathbb{H}yp}(M)} \\&
+ 2\pi\lim_{s\to 1}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right).\notag
\end{align}
Recall that the classical Riemann-Roch theorem implies that
$$
k\frac{\vol_{\mathbb{H}yp}(M)}{2\pi}=\sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)};
$$
hence, after multiplying \eqref{Limit of G as s to 0} by $\frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} $ and summing over all $w\in\mathcal{F}$
in \eqref{ell kroneck limit one cusp3}, we arrive at \eqref{log norm basic}, as claimed.
Having proved \eqref{log norm basic}, observe that the left-hand side of \eqref{log norm basic} is real valued.
As proved in \cite{Ni73}, $F_{-n}(z,1)$ is orthogonal to constant functions. Therefore, in order to prove \eqref{main f-la - corollary}
one simply applies \eqref{main f-la}, which was established above.
\subsection{Proof of Corollary \ref{cor:j-function}}
In order to prove \eqref{main f-la - corollary 2}, it suffices to compute
$\lambdangle 1, \overline{\lim_{s\to 1}(G_s(z,w) + \mathcal{E}_{\infty}^{\mathrm{par}}(w,s))} \rangle $, which we will
write as
$$
\int\limits_{\mathcal{F}}\lim_{s\to 1}\left(G_s(z,w) + \frac{1}{\vol_{\mathbb{H}yp}(M)(s-1)}+ \mathcal{E}_{\infty}^{\mathrm{par}}(w,s)-\frac{1}{\vol_{\mathbb{H}yp}(M)(s-1)}\right)d\mu_{\mathbb{H}yp}(z).
$$
From its spectral expansion, the function $\lim_{s\to 1}\left(G_s(z,w) + \frac{1}{\vol_{\mathbb{H}yp}(M)(s-1)}\right)$ is $L^2$ on $\mathcal{F}$ and orthogonal to constant functions.
Therefore, by using the Laurent series expansion \eqref{KronLimitPArGen}, we get that
$$
\lambdangle 1, \overline{\lim_{s\to 1}(G_s(z,w) + \mathcal{E}_{\infty}^{\mathrm{par}}(w,s))} \rangle =\vol_{\mathbb{H}yp}(M)\left(\beta - \frac{P(w)}{\vol_{\mathbb{H}yp}(M)}\right),
$$
which completes the proof.
\subsection{Proof of Theorem \ref{thm:generating series}}\lambdabel{sect proof of thm 4}
Our starting point is the Fourier expansion of the sum $G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)$.
Namely, for $\mathbb{R}e(s)>1$, fixed $w\in\mathcal{F}$ and ${\mathrm{Im}}(z)$ sufficiently large we have that
\begin{align}\lambdabel{G+Epar}
G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)&=\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_\infty^{\mathrm{par}}(w,s) \notag
\\&-\sum_{k\in\mathbb{Z}\setminus\{0\}} \sqrt{y}K_{s-\tfrac{1}{2}}(2\pi |k| y)F_{-k}(w,s)e(kx).
\end{align}
If ${\mathrm{Im}}(z)$ is sufficiently large, exponential decay of $K_{s-\tfrac{1}{2}}(2\pi |k| y)$ is sufficient to ensure that the right-hand side of \eqref{G+Epar} is holomorphic at $s=1$. The Laurent series expansion of $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$, combined with the expansions $y^{1-s}=1+(1-s)\log y + \tfrac{1}{2}(1-s)^2 \log^2y + O((1-s)^3)$ and $(2s-1)^{-1}= (1-2(s-1))^{-1} = 1-2(s-1)+4(s-1)^2 + O((s-1)^3)$ yields
\begin{multline*}
\frac{\partial}{\partial s}\left.\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_\infty^{\mathrm{par}}(w,s) \right|_{s=1} =\frac{1}{\vol_{\mathbb{H}yp}(M)}\left[ -4+2\beta\vol_{\mathbb{H}yp}(M)-2P(w)\right. \\ \left. +
\log y \left(\beta\mathrm{vol}_{\mathbb{H}yp}(M) - P(w)-2\right) - \tfrac{1}{2}\log^2y \right].
\end{multline*}
Additionally, for ${\mathrm{Im}}(z)$ sufficiently large, the series on the right-hand side of \eqref{G+Epar} is a uniformly convergent series of functions
which are holomorphic at $s=1$. As such, we may differentiate the series term by term. By employing formulas 8.469.3 and 8.486.21 of \cite{GR07},
we deduce for $k\neq0$ that
\begin{multline*}
\frac{\partial}{\partial s}\left. \left(\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi |k| y)F_{-k}(w,s) \right) \right|_{s=1}=\frac{e^{-2\pi|k|y}}{2\sqrt{|k|}}\cdot \\ \cdot\left[ \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}- F_{-k}(w,1)e^{4\pi|k|y} \mathrm{Ei}(-4\pi |k|y)\right],
\end{multline*}
where $\mathrm{Ei}(x)$ denotes the exponential integral function; see section 8.21 of \cite{GR07}. From this, we obtain
\begin{multline*}
\frac{\partial}{\partial s}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\Big|_{s=1} = (\log y+2) \left(\beta - \frac{P(w)+2}
{\mathrm{vol}_{\mathbb{H}yp}(M)}\right) - \frac{\log^2 y}{2\mathrm{vol}_{\mathbb{H}yp}(M)} \\- \sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{2\sqrt{|k|}}\left[ \frac{\partial}{\partial s}
\left. F_{-k}(w,s) \right|_{s=1}- F_{-k}(w,1)e^{4\pi|k|y} \mathrm{Ei}(-4\pi |k|y)\right]e^{2\pi i kx - 2\pi|k|y}.
\end{multline*}
Let us now compute the derivative $\frac{\partial}{\partial z}$ of the above expression.
After multiplying by $i=\sqrt{-1}$, we get that
\begin{multline*}
\mathcal{G}_w(z)=\frac{1}{y}\left(\beta - \frac{P(w)+2}{\mathrm{vol}_{\mathbb{H}yp}(M)}\right) - \frac{\log y}{y \mathrm{vol}_{\mathbb{H}yp}(M)} +\sum_{k\geq 1} 2\pi \sqrt{k} \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}q_z^k \\ +\sum_{k\geq 1}\frac{F_{-k}(w,1)}{2\sqrt{k}y}q_z^k -\sum_{k\leq -1} 2\pi \sqrt{|k|} F_{-k}(w,1)
\mathrm{Ei}(4\pi ky) q_z^k + \sum_{k\leq -1}\frac{F_{-k}(w,1)}{2\sqrt{|k|}y}e^{2\pi i k(x-iy)}.
\end{multline*}
The assertion that $\sum_{k\geq 1} 2\pi \sqrt{k} \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}q_z^k$ is the holomorphic part of $\mathcal{G}_w(z)$ follows from the uniqueness of the analytic continuation in $z$.
It remains to prove that $\mathcal{G}_w(z)$ is a weight two biharmonic Maass form. Since $\mathcal{G}_w(z)$ is obtained by taking the derivative $\frac{\partial}{\partial z}$ of a $\Gammamma$-invariant function, it is immediate that $\mathcal{G}_w(z)$ has weight two in $z$. Moreover, the straightforward computation that
$$
iy^2 \frac{\partial}{\partial \bar z}\mathcal{G}_w(z)=\Deltalta_{\mathbb{H}yp}\left(\frac{\partial}{\partial s}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\Big|_{s=1}\right)=-
\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right),
$$
combined with the fact that $\Deltalta_{\mathbb{H}yp}\left( \lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\right)=0$ proves that $\mathcal{G}_w(z)$ is biharmonic.
\section{Examples }\lambdabel{sect:examples}
\subsection{The full modular group}
Throughout this subsection, let $\Gammamma=\mathrm{PSL}(2,\mathbb{Z})$, in which case the parabolic Kronecker limit function $P(w)$ can be expressed, in the notation of \cite{BK19}, as $$P(w)=P_{\mathrm{PSL}(2,\mathbb{Z})}(w)=\log(|\eta(w)|^4 \cdot {\mathrm{Im}}(w))=\mathbbm{j}(w)-1,$$
where $\eta(w)$ is Dedekind's eta function and the last equality follows from the definition of $\mathbbm{j}_0(w)=\mathbbm{j}(w)$ given on p. 1 of \cite{BK19}.
In this setting, Corollary \ref{Rohr-Jensen}, when combined with \eqref{j_via_F} and Rohrlich's theorem
\eqref{rohrl thm}, yields that
\begin{equation}\lambdabel{prod with jn}
\lambdangle j_n,\log||f||\rangle =2\pi\sqrt{n}\left( - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}\left(\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1} -c_nP(w)\right)\right).
\end{equation}
Moreover, equating the constant terms in the Fourier series expansions for $F_{-n}(z,1)$ and $j_n(z)$, one easily deduces that $2\pi\sqrt{n}c_n=24\sigmagma(n)$.
This proves Theorem 1.2 of \cite{BK19} and shows that, in the notation of \cite{BK19}, one has
\begin{equation}\lambdabel{jn for BK}
\mathbbm{j}_n(w)=2\pi\sqrt{n}\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1} -24\sigmagma(n)P(w),
\end{equation}
an identity which provides a description of $\mathbbm{j}_n(w)$, for $n\geq 1$ different from the one given by formula (3.10) of \cite{BK19}.
Furthermore, from the identity \eqref{delta of deriv of N-P}, combined with the fact that $ \Deltalta_{\mathbb{H}yp}P(w)=1$, which is a straightforward implication of the Kronecker limit formula \eqref{KronLimitPArGen}, it follows that $$\Deltalta_{\mathbb{H}yp}\mathbbm{j}_n(w)= 2\pi\sqrt{n}\left( F_{-n}(w,1)-c_n \right)=j_n(w),$$
which agrees with formula (3.10) of \cite{BK19}.
Reasoning as above, we easily see that Theorem 1.3 of \cite{BK19} follows from Corollary \ref{cor:j-function} with $g(z)=j_n(z)$.
Finally, in view of \eqref{prod with jn}, Theorem \ref{thm:generating series} is closely related to the first part of Theorem 1.4 of \cite{BK19}. Namely, for large enough ${\mathrm{Im}}(z)$, in the notation of \cite{BK19} we have
\begin{align*}
\mathbb{H}_w(z)&=\sum_{n\geq 0} \mathbbm{j}_n(w) q_z^n=\mathbbm{j}_0(w)+\sum_{n\geq 1}\left(2\pi\sqrt{n}\frac{\partial}{\partial s}\left. F_{-n}(w,s)
\right|_{s=1} -24\sigmagma(n)P(w)\right)q_z^n\\&=1+P(w)\left(1-24\sum_{n\geq 1}\sigmagma(n)q_z^n\right) + \sum_{n\geq 1} 2\pi \sqrt{n} \frac{\partial}{\partial s}
\left. F_{-n}(w,s) \right|_{s=1}q_z^n.
\end{align*}
Theorem \ref{thm:generating series} implies that the function $\mathbb{H}_w(z)$ is the holomorphic part of the weight two biharmonic Maass form $$\widehat{\mathbb{H}}_w(z)=P(w)\widehat{E}_2(z)+\mathcal{G}_w(z),$$ where
$$
\widehat{E}_2(z)=1-24\sum_{n\geq 1}\sigmagma(n)q_z^n - \frac{3}{\pi y}
$$
is the weight two completed Eisenstein series for the full modular group.
\subsection{Genus zero Atkin-Lehner groups}
Let $N=\prod_{\nu=1}^r p_\nu$ be a positive square-free integer which is one of the $44$ possible values for which the quotient
space $Y_{N}^{+} =\overline{ \Gammamma_{0}^{+}(N)}\backslash \HH$ has genus zero; see \cite{Cum04} for a list of
such $N$ as well as \cite{JST14}. Let $\Deltalta_{N}(z)$ be the Kronecker limit function on $Y_{N}^{+}$ associated to the
parabolic Eisenstein series; it is given by formula \eqref{DeltaN} above.
In the notation of Section \ref{sect Atkin Leh groups_KLF}, the function $\Deltalta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}$ is the weight $k_N=2^{r-1}\ell_N$ holomorphic modular form which possesses the constant term $1$ in its $q$-expansion.
Furthermore, this function vanishes only at the point $z=w$, and, by the Riemann-Roch formula, its order of vanishing is equal to $k_N \vol_{\mathbb{H}yp}(Y_N^{+})\cdot\mathrm{ord}(w)/(4\pi )$.
When $N=1$, one has $k_1=12$, $\ell_1=24$, $\nu_1=1$ and $\vol_{\mathbb{H}yp}(Y_N^{+})=\pi/3$, hence $\Deltalta_1(z)(j_1^+(z)-j_1^+(w))^{\nu_1}$ equals the prime form $(\Deltalta(z)(j(z)-j(w)))^{1/\mathrm{ord}(w)}$ taken to the power $\mathrm{ord}(w)$; see page 3 of \cite{BK19}.
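For instance, the Riemann-Roch count above can be verified directly in this case:
$$
k_1 \vol_{\mathbb{H}yp}(Y_{1}^{+})\cdot\frac{\mathrm{ord}(w)}{4\pi}=12\cdot\frac{\pi}{3}\cdot\frac{\mathrm{ord}(w)}{4\pi}=\mathrm{ord}(w),
$$
in agreement with the identification with the $\mathrm{ord}(w)$-th power of the prime form.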
For any integer $m>1$ the $q$-expansion of the form $j_N^+|T_{m}(z)$ is $q_z^{-m}+O(q_z)$; hence there exists a constant $C_{m,N}$ such that
$j_N^+|T_{m}(z)=2\pi \sqrt{m} F_{-m}(z,1) + C_{m,N}$.
The constant $C_{m,N}$ can be explicitly evaluated in terms of $m$ and $N$ by equating the constant terms in the $q$-expansions.
Upon doing so, one obtains, using equation \eqref{B0 for A-L groups}, that
\begin{align*}
C_{m,N}=-2\pi \sqrt{m} B_{0,N}^+(1;-m)&=-24\sigmagma(m)\prod\limits_{\nu=1}^r \left(1-
\frac{p_\nu^{\alpha_{p_\nu}(m)+1}(p_\nu -1) }{\left(p_\nu^{\alpha_{p_\nu}(m)+1} - 1\right)(p_\nu+1)}\right)\\&=-24\sigmagma(m) \prod\limits_{\nu=1}^r \left(1- \kappa_{m}(p_\nu)\right),
\end{align*}
where we simplified the notation by denoting the second term in the product over $\nu$ by $\kappa_m(p_\nu)$.
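For orientation, if, as the notation suggests, $\alpha_{p_\nu}(m)$ denotes the exponent of $p_\nu$ in $m$, then for $m$ coprime to $N$ one has $\alpha_{p_\nu}(m)=0$, hence
$$
\kappa_{m}(p_\nu)=\frac{p_\nu(p_\nu-1)}{(p_\nu-1)(p_\nu+1)}=\frac{p_\nu}{p_\nu+1}
\qquad\text{and}\qquad
C_{m,N}=-24\sigmagma(m)\prod\limits_{\nu=1}^r \frac{1}{p_\nu+1};
$$
this is consistent with the constants $-60/19$, $-84/19$, $-15/13$ and $-57/13$ appearing in the examples of the next two subsections.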
We can now apply Corollary \ref{cor:j-function} with
$$
g(z)= j_N^+|T_{m}(z)=2\pi \sqrt{m} F_{-m}(z,1)-24\sigmagma(m)\prod\limits_{\nu=1}^r \left(1- \kappa_{m}(p_\nu)\right)
$$
and $f(z)=\Deltalta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}$. Corollary \ref{cor:j-function}
becomes the statement that
\begin{align*}
\lambdangle j_N^+|T_{m}(z), &\log(y^{\tfrac{k_N}{2}} |\Deltalta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}|) \rangle
\\& = -k_N \vol_{\mathbb{H}yp}(Y_N^{+})\left[\pi\sqrt{m}\left.\frac{\partial}{\partial s}F_{-m}(w,s)\right|_{s=1} \right. \\&
\left. +12\sigmagma(m)\prod\limits_{\nu=1}^r
\left(1- \kappa_{m}(p_\nu)\right) \left(\beta_N\vol_{\mathbb{H}yp}(Y_N^{+})-\log\left( |\Deltalta_N(w)|^{2/k_N} \cdot {\mathrm{Im}} (w)\right) -2\right)\right],
\end{align*}
where $\beta_N$ is given by \eqref{betaN}. In this form, we have obtained an alternate
proof and generalization of formula (1.2) from \cite{BK19}, which is the special case $N=1$.
\subsection{A genus one example}
Let us consider the case when $\Gammamma = \overline{\Gammamma_{0}(37)^{+}}$. The choice of $N=37$ is significant since
this level corresponds to the smallest square-free integer $N$ such that
$Y_{N}^{+}$ has genus one. From Proposition 11 of \cite{JST13}, we have that
$\vol_{\mathbb{H}yp}(Y_{37}^{+})= 19\pi/3$ and
$$
\beta_{37}=\frac{3}{19\pi}\left(\frac{10}{19}\log37+2-2\log(4\pi)-24\zeta'(-1)\right).
$$
The function field generators are $x_{37}^+(z)=q_z^{-2} + 2q_z^{-1}+ O(q_z)$ and $y_{37}^+(z)=q_z^{-3} + 3q_z^{-1}+ O(q_z)$, as displayed in Table 5 of \cite{JST13}. The generators $x_{37}^+(z)$ and $y_{37}^+(z)$ satisfy the cubic relation $y^2 - x^3 + 6xy - 6x^2 + 41y + 49x + 300 = 0$.
The functions $x_{37}^+(z)$ and $y_{37}^+(z)$ can be expressed in terms of the Niebur-Poincar\'e series by comparing their
$q$-expansions. The resulting expressions are that
\begin{align*}
x_{37}^+(z)&=2\pi[\sqrt{2}F_{-2}(z,1)+ 2F_{-1}(z,1)]-2\pi(\sqrt{2}B_{0,37}^+(1;-2)+2B_{0,37}^+(1;-1))\\&=2\pi[\sqrt{2}F_{-2}(z,1)+ 2F_{-1}(z,1)]-\frac{60}{19}
\end{align*}
and
\begin{align*}
y_{37}^+(z)&=2\pi[\sqrt{3}F_{-3}(z,1)+ 3F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,37}^+(1;-3)+3B_{0,37}^+(1;-1))\\&=2\pi[\sqrt{3}F_{-3}(z,1)+ 3F_{-1}(z,1)]-\frac{84}{19}.
\end{align*}
It is important to note that $x_{37}^+(z)$ has a pole of order two at $z=\infty$, i.e., its $q$-expansion begins with $q_z^{-2}$.
As such, $x_{37}^+(z)$ is a linear transformation of the
Weierstrass $\wp$-function, in the coordinates of the upper half plane, associated to the elliptic curve obtained by
compactifying the space $Y_{37}^{+}$. Hence, there are three distinct points
$\{w\}$ on $Y_{37}^{+}$, corresponding to the two-torsion points under the group law, such that $x_{37}^+(z)-x_{37}^+(w)$
vanishes as a function of $z$ only when $z=w$. The order of vanishing is necessarily equal to two. The cusp form
$\Deltalta_{37}(z)$ vanishes at $\infty$ to order $19$. Therefore, for such $w$, the form
$$
f_{37,w}(z)=\Deltalta_{37}^2(z)(x_{37}^+(z)-x_{37}^+(w))^{19}
$$
is a weight $2k_{37}=24$ holomorphic form. The constant term in its $q$-expansion is equal to $1$,
and $f_{37,w}(z)$ vanishes for points $z \in \mathcal{F}$ only when $z=w$. The order of vanishing of $f_{37,w}(z)$ at $z=w$ is $38\cdot \mathrm{ord}(w)$.
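As a quick consistency check, the Riemann-Roch count of the previous subsection, applied to the weight $24$ form $f_{37,w}$, gives a total order of vanishing of
$$
24\cdot\vol_{\mathbb{H}yp}(Y_{37}^{+})\cdot\frac{\mathrm{ord}(w)}{4\pi}=24\cdot\frac{19\pi}{3}\cdot\frac{\mathrm{ord}(w)}{4\pi}=38\cdot \mathrm{ord}(w),
$$
in agreement with the statement above.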
With all this, we can apply Corollary \ref{cor:j-function}. The resulting formulas are that
\begin{align*}
\lambdangle x_{37}^+, \log(\Vert f_{37,w}\Vert ) \rangle &= -152\pi^2 \left(\frac{\partial}{\partial s}\left. (\sqrt{2}F_{-2}(w,s) +
2F_{-1}(w,s))\right|_{s=1}\right)\\& +240\pi\left(\log\left(|\eta(w)\eta(37w)|^2\cdot{\mathrm{Im}} (w)\right) -\frac{10}{19}\log37+2\log(4\pi) +24\zeta'(-1)\right)
\end{align*}
and
\begin{multline*}
\lambdangle y_{37}^+, \log (\Vert f_{37,w}\Vert ) \rangle = - 152\pi^2 \left(\frac{\partial}{\partial s}\left. (\sqrt{3}F_{-3}(w,s) +3F_{-1}(w,s))\right|_{s=1}\right)\\ + 336\pi\left(\log\left(|\eta(w)\eta(37w)|^2\cdot{\mathrm{Im}} (w)\right) -\frac{10}{19}\log37+2\log(4\pi) +24\zeta'(-1)\right).
\end{multline*}
Of course, one does not need to assume that $w$ corresponds to a two-torsion point.
In general, Corollary \ref{cor:j-function} yields an expression where the right-hand side
is a sum of two terms, and the corresponding factor in front would be one-half of the factors above.
\subsection{A genus two example}
Consider the level $N=103$.
In this case, $\vol_{\mathbb{H}yp}(Y_{103}^{+})= 52\pi/3$ and the function field generators are $x_{103}^+(z)=q_z^{-3} + q_z^{-1} + O(q_z)$
and $y_{103}^+(z)=q_z^{-4} + 3q_z^{-2} + 3q_z^{-1} + O(q_z)$, as displayed in Table 7 of \cite{JST13}. The generators $x_{103}^+(z)$ and
$y_{103}^+(z)$ satisfy the polynomial relation $y^3 - x^4 - 5yx^2 - 9x^3 + 16y^2 - 21yx - 60x^2 + 65y - 164x + 18 = 0$.
The surface $Y_{103}^{+}$ has genus two.
From Theorem 6 of \cite{Ni73}, we can write $x_{103}^+(z)$ and $y_{103}^+(z)$ in terms of the Niebur-Poincar\'e series.
Explicitly, we have that
\begin{align*}
x_{103}^+(z)&=2\pi[\sqrt{3}F_{-3}(z,1)+ F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,103}^+(1;-3)+B_{0,103}^+(1;-1))\\&=2\pi[\sqrt{3}F_{-3}(z,1)+ F_{-1}(z,1)]-\frac{15}{13}
\end{align*}
and
\begin{align*}
y_{103}^+(z)&=2\pi[\sqrt{4}F_{-4}(z,1)+ 3\sqrt{2}F_{-2}(z,1) +3F_{-1}(z,1)]\\&-2\pi(\sqrt{4}B_{0,103}^+(1;-4)+3\sqrt{2}B_{0,103}^+(1;-2)+3B_{0,103}^+(1;-1))\\&=2\pi[2F_{-4}(z,1)+ 3\sqrt{2}F_{-2}(z,1) +3F_{-1}(z,1)]-\frac{57}{13}.
\end{align*}
The order of vanishing of $\Deltalta_{103}(z)$ at the cusp is $\nu_{103}=(12\cdot 52\pi/3)/(4\pi)=52$. Therefore, for an arbitrary, fixed $w\in\HH$, the form
$$f_{103,w}(z)=\Deltalta_{103}^3(z)(x_{103}^+(z)-x_{103}^+(w))^{52}$$ is the weight $3k_{103}=36$ holomorphic form which
has constant term in the $q$-expansion equal to $1$. Let $\{w_{1}, w_{2}, w_{3}\}$ be the three, not necessarily distinct,
points in the fundamental domain $\mathcal{F}$ where $(x_{103}^+(z)-x_{103}^+(w))$ vanishes. One of the points $w_{j}$ is equal to $w$. The form $f_{103,w}(z)$ vanishes at $z=w_{j}$ to order $52 \cdot\mathrm{ord}(w_j)$, $j=1,2,3$.
From Section \ref{sect Atkin Leh groups_KLF}, we have that
$$
\beta_{103}=\frac{3}{52\pi}\left(\frac{53}{104}\log103+2-2\log(4\pi)-24\zeta'(-1)\right)
$$
and $P_{103}(z)=\log\left(|\eta(z)\eta(103z)|^2\cdot {\mathrm{Im}} (z)\right)$.
Let us now apply Corollary \ref{cor:j-function} with $g(z)= x_{103}^+(z)$, in which case $c(g)=-15/13$.
In doing so, we get that
\begin{align*}
\lambdangle x_{103}^+, \log (\Vert f_{103,w}\Vert ) \rangle &= - 208\pi^2
\sum\limits_{j=1}^{3}\left(\frac{\partial}{\partial s}\left. (\sqrt{3}F_{-3}(w_j,s) +F_{-1}(w_j,s))\right|_{s=1}\right)
\\& +120\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_j)\eta(103w_j)|^2\cdot{\mathrm{Im}} (w_{j})\right)\right)
\\&-360\pi\left(\frac{53}{104}\log103-2\log(4\pi) -24\zeta'(-1)\right).
\end{align*}
Similarly, we can take $g(z)= y_{103}^+(z)$, in which case $c(g)=-57/13$ and we get that
\begin{align*}
\lambdangle y_{103}^+, \log (\Vert f_{103,w}\Vert ) \rangle &= - 208\pi^2 \sum\limits_{j=1}^{3}
\left(\frac{\partial}{\partial s}\left. (2F_{-4}(w_{j},s)+ 3\sqrt{2}F_{-2}(w_{j},s) +3F_{-1}(w_{j},s))\right|_{s=1}\right)
\\& +456\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_{j})\eta(103w_{j})|^2\cdot{\mathrm{Im}} (w_{j})\right)\right)
\\& -1368\pi\left(\frac{53}{104}\log103-2\log(4\pi) -24\zeta'(-1)\right).
\end{align*}
\subsection{An alternative formulation}
In the above discussion, we have written the constant $\beta$ and the Kronecker limit function $P$ separately. However, it should be pointed out that in all instances these terms appear in the combination $\beta \vol_{\mathbb{H}yp}(M)- P(z)$. From \eqref{KronLimitPArGen}, we can write
$$
\beta \vol_{\mathbb{H}yp}(M)- P(z)
= \vol_{\mathbb{H}yp}(M)\, \textrm{\rm CT}_{s=1} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s),
$$
where $\textrm{\rm CT}_{s=1}$ denotes the constant term in the Laurent expansion at $s=1$. It may
be that such a notational change provides additional insight into the formulas presented above.
\noindent
James Cogdell \\
Department of Mathematics \\
Ohio State University \\
231 W. 18th Ave \\
Columbus, OH 43210,
U.S.A. \\
e-mail: [email protected]
\noindent
Jay Jorgenson \\
Department of Mathematics \\
The City College of New York \\
Convent Avenue at 138th Street \\
New York, NY 10031
U.S.A. \\
e-mail: [email protected]
\noindent
Lejla Smajlovi\'c \\
Department of Mathematics \\
University of Sarajevo\\
Zmaja od Bosne 35, 71 000 Sarajevo\\
Bosnia and Herzegovina\\
e-mail: [email protected]
\end{document}
\begin{document}
\draft
\title{Entanglement and Collective Quantum Operations}
\author{Anthony Chefles}
\address{Department of Physical Sciences, University of Hertfordshire \\
Hatfield AL10 9AB, Herts, UK \\ email: [email protected]}
\author{Claire R. Gilson}
\address{Department of Mathematics, University of Glasgow, Glasgow G12 8QQ, UK}
\author{Stephen M. Barnett}
\address{Department of Physics and Applied Physics, University of Strathclyde \\ Glasgow G4 0NG, UK} \input epsf
\epsfverbosetrue
\maketitle
\begin{abstract}
We show how shared entanglement, together with classical communication and local quantum
operations, can be used to perform an arbitrary collective quantum operation upon $N$
spatially-separated qubits. A simple teleportation-based protocol for achieving this,
which requires $2(N-1)$ ebits of shared, bipartite entanglement and $4(N-1)$ classical
bits, is proposed. In terms of the total required entanglement, this protocol is shown to
be optimal for even $N$ in both the asymptotic limit and for `one-shot' applications.
\end{abstract}
\pacs{PACS numbers: 03.67.-a, 03.67.Hk}
Interactions between physical systems involve the transmission of information between
them: the future state of any subsystem depends not only upon its own history, but
also upon those of the other subsystems.
In classical physics, this information is purely classical. In quantum physics, it is
quantum information that is transmitted between the subsystems. Unlike classical
information, quantum information cannot be copied\cite{NoCloning}. This implies that any
quantum information transferred to some system must be lost by its source in the process.
If this transfer of information is incomplete, which it often is, the result is
entanglement between the systems.
Entanglement forms a crucial link between classical and quantum information. Nowhere is
this made more explicit than in the transmission of quantum information by
teleportation\cite{Teleport}. As is well-known, this can only be achieved by sending
classical information and making use of entanglement shared by the sending and receiving
locations.
Interactions between quantum subsystems are represented as collective operations on the
state space of the entire system. In this Letter, we show how shared entanglement (SE),
together with classical communication (CC) and local quantum operations (LQ), can be used
to perform an arbitrary collective operation upon $N$ spatially-separated $2$-level
quantum systems (qubits), using a simple teleportation-based protocol. This requires
$2(N-1)$ ebits of bipartite entanglement to be shared between the locations of the qubits.
Large amounts of entanglement are difficult to produce under controlled circumstances, so
it is natural to enquire as to whether or not this figure is optimal. For even $N$, we
give a graph-theoretic proof of the optimality of the teleportation protocol. This holds
both for `one-shot' applications, where the operation is carried out only once, and also
in the asymptotic limit\cite{Concentration}, where the operation is carried out a large
number of times and we are interested in the average entanglement required per run of the
operation.
We begin by considering the following scenario: take a network of $N$ laboratories,
$A_{j}$, where $j=1,{\ldots},N$, each of which contains a qubit. We label these $q_{j}$.
The laboratories also share a certain amount of pure, bipartite entanglement with each
other. We shall refer to this as the {\em resource entanglement}. Each $A_{j}$ also
contains auxiliary quantum systems, allowing arbitrary local collective operations to be
carried out in each laboratory. The laboratories can also send classical information to
each other.
Let us define the resource entanglement matrix ${\mathbf E}_{R}=\{E_{R}^{ij}\}$, where
$E_{R}^{ij}$ is the number of ebits shared by $A_{i}$ and $A_{j}$. This matrix is clearly
symmetric, has non-negative real elements and zeros on the diagonal.
From this matrix, we can construct a graph, which we term the {\em resource entanglement
graph}, $G_{E}(V,E)$. The vertex set $V$ is that of the laboratories $A_{j}$, and the edge
set $E$ represents the bipartite entanglement shared among them. The edge joining vertices
$A_{i}$ and $A_{j}$ has weight $E_{R}^{ij}$. This weight is equal to the amount of pure,
bipartite entanglement shared by $A_{i}$ and $A_{j}$. An edge of weight zero, which
represents no entanglement, is equivalent to no edge. The total resource entanglement is
\begin{equation}
E_{R}=\frac{1}{2}\sum_{ij}E_{R}^{ij}.
\end{equation}
We wish to use these resources to carry out an arbitrary collective operation upon the
$q_{j}$. Perhaps the most natural way of doing so is by teleportation. Teleportation of a
qubit from one location to another costs 1 ebit of entanglement and requires 2 classical
bits to be sent from the origin to the destination of the qubit\cite{Teleport}.
We can consider the situation in which all laboratories share entanglement and have the
resources for two-way classical communication with one particular laboratory. Let this
laboratory be $A_{1}$. The other laboratories can teleport the states of their qubits to
$A_{1}$. The operation can then be carried out locally at $A_{1}$. The final states of
the other qubits can then be teleported back to their original laboratories, completing
the operation.
This teleportation procedure requires each of the laboratories $A_{2},{\ldots},A_{N}$ to
share 2 ebits of entanglement with $A_{1}$ and for 2 bits of classical information to be
communicated each way between each of them and $A_{1}$. The elements of the corresponding
resource entanglement matrix are
\begin{equation}
E_{R}^{ij}=2|{\delta}_{i1}-{\delta}_{1j}|.
\end{equation}
The corresponding graph $G_{E}$ is depicted in figure (1). The total resource entanglement
is
\begin{equation}
E_{R}=2(N-1).
\end{equation}
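For completeness, this value can be read off directly from Eq. (2): the only nonzero entries of
${\mathbf E}_{R}$ are the $2(N-1)$ entries with exactly one index equal to $1$, each of which
equals $2$, so that Eq. (1) gives
$$
E_{R}=\frac{1}{2}\sum_{ij}E_{R}^{ij}=\frac{1}{2}\cdot 2\cdot 2(N-1)=2(N-1).
$$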
\begin{figure}
\caption{Resource entanglement
graph for the teleportation protocol.}
\end{figure}
Any quantum operation upon $N$ qubits can be performed using this method and thus, at
least for the topology of entanglement in our protocol, the value of $E_{R}$ in Eq. (3) is
sufficient.
This teleportation-based method for carrying out an arbitrary collective quantum operation
upon $N$ spatially separated qubits requires $E_{R}=2(N-1)$ ebits of entanglement. Is
this figure optimal, in the sense that no less bipartite entanglement will suffice?
We can pose this question in the following, alternative way: a network of laboratories
$A_{i}$ possesses shared bipartite entanglement, described by the graph $G_{E}$. If the
corresponding total resource entanglement is sufficient to enable an arbitrary collective
operation to be performed, then what lower bound must $E_{R}$ satisfy?
The first observation we shall make is that if an arbitrary operation can be carried out
using the entanglement described by $G_{E}$, then any graph obtained from $G_{E}$ by a
permutation of the vertices also describes sufficient entanglement to carry out an
arbitrary operation. The permutation invariance of this sufficiency condition is
intuitive. We will provide a proof of it elsewhere \cite{Big}.
Consider the graph ${\tilde G}_{E}$ defined by
\begin{equation}
{\tilde G}_{E}=\sum_{P[V]}G_{E}(V).
\end{equation}
This graph is obtained from $G_{E}$ by summing over all permutations $P$ of the vertex set
$V$. By summing, we mean summing the entanglement represented by the weights of the edges.
The resource entanglement matrix ${\tilde {\mathbf E}}_{R}=\{{\tilde E}^{ij}_{R}\}$ for
this graph is easily obtained. Its elements are
\begin{equation}
{\tilde E}^{ij}_{R}=\sum_{P[V]}E_{R}^{P(i),P(j)}.
\end{equation}
This graph is regular and complete. These properties follow immediately from the fact
that ${\tilde G}_{E}$, being defined as a sum over all vertex permutations, is itself
permutation invariant.
The total resource entanglement for this graph, ${\tilde E}_{R}$, is easily evaluated in
terms of the total resource entanglement of $G_{E}$. There are $N!$ permutations of the
vertex set, implying that ${\tilde G}_{E}$ describes $N!$ times as much entanglement as
$G_{E}$, that is
\begin{equation}
{\tilde E}_{R}=N!E_{R}.
\end{equation}
All $N(N-1)/2$ edges in this graph have the same weight. Denoting this weight simply by
$e$, we obtain
\begin{equation}
e=2(N-2)!E_{R}.
\end{equation}
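In more detail, the graph ${\tilde G}_{E}$ has $N(N-1)/2$ edges, each of weight $e$, so that its
total resource entanglement is $N(N-1)e/2$. Equating this with Eq. (6) gives
$$
\frac{N(N-1)}{2}e=N!E_{R},
\qquad\mbox{so that}\qquad
e=\frac{2N!}{N(N-1)}E_{R}=2(N-2)!E_{R}.
$$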
Since there are $N!$ permutations of the vertex set, the permutation invariance of the
sufficiency condition implies that the entanglement resources represented by ${\tilde
G}_{E}$ can be used to perform any operation $N!$ times. By this, we mean the following:
suppose that each $A_{i}$ contains $N!$ qubits. We can then define $N!$ sets of qubits, where
each contains one from each laboratory. It will be possible to perform the same operation
separately upon each of these sets.
Using the formalism we have set up, we can obtain the minimum value of $E_{R}$ exactly
when $N$ is even. Our approach makes use of the SWAP operation upon 2 qubits. Consider a
pair of qubits, ${\alpha}$ and ${\beta}$, with respective states
$|{\psi}_{1}{\rangle}_{\alpha}$ and $|{\psi}_{2}{\rangle}_{\beta}$. The SWAP operation,
$U_{S}$, exchanges the states of these subsystems:
\begin{equation}
U_{S}|{\psi}_{1}{\rangle}_{\alpha}{\otimes}|{\psi}_{2}{\rangle}_{\beta}=|{\psi}_{2}{\rangle}_{\alpha}{\otimes}|{\psi}_{1}{\rangle}_{\beta}.
\end{equation}
The property of $U_{S}$ that is of particular interest to us is its ability to create 2
ebits of entanglement. To see how, suppose that in the laboratory containing ${\alpha}$
there is another qubit, ${\alpha}'$, and that these two qubits are initially prepared in a
maximally entangled state. Likewise, ${\beta}$ is initially maximally entangled with a
neighbouring qubit ${\beta}'$. If the SWAP operation is performed on ${\alpha}$ and
${\beta}$, then ${\alpha}$ will become maximally entangled with ${\beta}'$, and likewise
${\beta}$ and ${\alpha}'$ will become maximally entangled. Two ebits of entanglement have
been produced.
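Explicitly, taking for concreteness both pairs to be prepared in the state
$|{\Phi}{\rangle}=(|00{\rangle}+|11{\rangle})/{\sqrt{2}}$, the action of the SWAP operation on
${\alpha}$ and ${\beta}$ is
$$
U_{S}\,|{\Phi}{\rangle}_{{\alpha}'{\alpha}}{\otimes}|{\Phi}{\rangle}_{{\beta}{\beta}'}
=\frac{1}{2}\sum_{i,j=0,1}|i{\rangle}_{{\alpha}'}{\otimes}|j{\rangle}_{\alpha}{\otimes}|i{\rangle}_{\beta}{\otimes}|j{\rangle}_{{\beta}'}
=|{\Phi}{\rangle}_{{\alpha}'{\beta}}{\otimes}|{\Phi}{\rangle}_{{\alpha}{\beta}'},
$$
so that ${\alpha}'$ and ${\beta}$, and likewise ${\alpha}$ and ${\beta}'$, end up maximally
entangled, and 2 ebits have been created between the two laboratories.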
The network of $N$ laboratories is assumed to possess
sufficient entanglement resources, described by the graph ${\tilde G}_{E}$, to enable any
operation to be carried out $N!$ times. Here, we consider one particular operation, which
we will refer to as the pairwise-SWAP (PS) operation. Performing this operation once has
the effect of swapping the state of a qubit at $A_{j}$ with that of one at $A_{j+1}$, for
all odd $j$. If we write the two-qubit SWAP operation exchanging the states of qubits
$q_{j}$ and $q_{j+1}$ as $U_{S}^{j+1,j}$, then the PS operation may be written as
\begin{equation}
U_{PS}=U_{S}^{N,N-1}{\otimes}U_{S}^{N-2,N-3}{\otimes}{\ldots}{\otimes}U_{S}^{2,1}.
\end{equation}
This operation is depicted in figure (2).
\begin{figure}
\caption{Depiction of the pairwise-SWAP (PS) operation for $N=4$.}
\end{figure}
The PS operation can then be used to establish $N$ ebits of entanglement. A fuller
discussion of multiqubit operations with this property will be given in \cite{Big}. The
$N!$-fold PS operation can thus produce $N!N$ ebits of entanglement. Our aim is to use the
resources contained in the graph ${\tilde G}_{E}$ to perform the PS operation $N!$ times.
We wish to find the minimum value of $e$, and using Eq. (7), that of $E_{R}$, required to
do so.
To determine the minimum value of $e$ required to establish $2N!$ ebits of entanglement
between each pair of laboratories whose qubits' states will be exchanged by the PS
operation, we will make use of the fact that entanglement cannot increase under LQCC
operations. Consider the situation depicted in figure (3). We partition the entire
network into two sets. One contains the even laboratories $A_{2},A_{4},{\ldots},A_{N}$,
and the other contains the odd ones $A_{1},A_{3},{\ldots},A_{N-1}$. We shall refer to
these sets as $S_{even}$ and $S_{odd}$.
\begin{figure}
\caption{Use of the resource entanglement graph ${\tilde G}_{E}$.}
\end{figure}
According to the graph ${\tilde G}_{E}$, the total entanglement initially shared by these
sets can be calculated in a straightforward manner. Each of the $N/2$ laboratories in
$S_{odd}$ shares {\em e} ebits with each laboratory in $S_{even}$, that is, $Ne/2$ ebits
with $S_{even}$ in total. Adding up the $N/2$ such contributions from the laboratories in
$S_{odd}$ gives $(N/2)^{2}e$ ebits initially shared by $S_{even}$ and $S_{odd}$. The final
entanglement they share is $N!N$ ebits. The total entanglement that $S_{even}$ and
$S_{odd}$ share cannot increase, giving the inequality
\begin{equation}
\left(\frac{N}{2}\right)^{2}e{\geq}N!N.
\end{equation}
Making use of Eq. (7), we find that
\begin{equation}
E_{R}{\geq}2(N-1).
\end{equation}
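In detail, inequality (10) gives $e{\geq}4N!/N=4(N-1)!$, and combining this with Eq. (7) we obtain
$$
2(N-2)!E_{R}=e{\geq}4(N-1)!,
\qquad\mbox{so that}\qquad
E_{R}{\geq}\frac{2(N-1)!}{(N-2)!}=2(N-1).
$$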
This lower bound on the total resource entanglement is a tight bound, since this amount of
entanglement is precisely that which is required by the teleportation protocol. Thus, for
even $N$, the teleportation protocol is optimal with regard to the required total resource
entanglement. Using the above approach, one can also calculate a lower bound on the
resource entanglement for an odd number of qubits \cite{Big}.
We have derived this bound solely on the basis of the fact that, in a multiparticle
system, the entanglement shared by two exhaustive subsets, which will be of bipartite
form, cannot increase under LQCC operations.
Although the entanglement initially shared by each pair of laboratories is in pure,
bipartite form, the transformation shown in figure (3) may, at some point, manipulate the
resource entanglement into, possibly mixed, multiparticle entanglement. This does not
affect our argument. If the final entanglement is in multiparticle form, then in order to
carry out the $N!$-fold PS operation, $A_{j}$ and $A_{j+1}$ will have to be able to {\em
distill} $2N!$ ebits of pure, bipartite entanglement. The {\em total} distillable
entanglement between $S_{even}$ and $S_{odd}$ cannot increase, which leads to inequality
(10) and thus the teleportation bound in (11).
The fact that entanglement cannot increase under LQCC operations is an asymptotic result. It
follows that the teleportation protocol is asymptotically optimal for even $N$. By
asymptotic\cite{Concentration}, we mean that, given a very large number of sets of
separated qubits, where the same, arbitrary operation is to be carried out on each set,
the teleportation protocol uses the minimum {\em average} entanglement that is required
per run of the operation.
In practical situations, it is often the resources required to carry out an operation
successfully just once that will be of interest. For general information processing
tasks, the resources required in the `one-shot' scenario are at least equal to the
resources required asymptotically. For the problem we have considered here, when $N$ is
even, the entanglement resources required in both scenarios are equal. This is because
the teleportation protocol, which requires $2(N-1)$ ebits, can be used to carry out any
collective operation on $N$ qubits once.
The classical communication resources required to carry out an arbitrary collective
operation on $N$ qubits can be analysed using the same technique. We will present a
detailed discussion of this matter in\cite{Big}, but take the opportunity to mention that
the teleportation protocol, which, in addition to $2(N-1)$ ebits of entanglement, also
requires $4(N-1)$ classical bits, is also optimal in terms of classical communication
resources for even $N$.
In this Letter we have determined the minimum amount of bipartite entanglement required to
carry out an arbitrary operation upon an even number of qubits. It is natural to attempt
to solve the same problem for the odd case. Unfortunately, we have not been able to find
a collective operation on an odd number of qubits which yields the minimum resource
entanglement in the same manner as the PS operation. Many aspects of the problem for odd
$N$ will be discussed in \cite{Big}, including the determination of lower bounds on the
minimum entanglement and communication resources. We also examine the consequences of the
assumption that it costs an ebit to move an ebit. This leads to the optimality of the
teleportation protocol in the one-shot case for the resource entanglement except possibly
when $N=3$.
\section*{Acknowledgements.}
We would like to thank Sandu Popescu, Noah Linden, Osamu Hirota and Masahide Sasaki for
interesting discussions. Part of this work was carried out at the Japanese Ministry of
Posts and Telecommunications Communications Research Laboratory, Tokyo, and we would like
to thank Masayuki Izutsu for his hospitality. This work was funded by the UK Engineering
and Physical Sciences Research Council, and by the British Council.
\begin{references}
\bibitem{NoCloning}
W. K. Wootters and W. H. Zurek, {\em Nature} {\bf 299}, 802 (1982); D. Dieks, {\em Phys.
Lett.} {\bf 92A}, 271 (1982).
\bibitem{Teleport}
C. H. Bennett, G. Brassard, C. Crepeau, R. Josza, A. Peres and W. K. Wootters, {\em Phys.
Rev. Lett.} {\bf 70}, 1895 (1993).
\bibitem{Concentration}
C. H. Bennett, H. J. Bernstein, S. Popescu and B. Schumacher, {\em Phys. Rev. A} {\bf 53}
2046 (1996).
\bibitem{Big}
A. Chefles, C. R. Gilson and S. M. Barnett, `Entanglement, Information and Multiparticle
Quantum Operations', In preparation.
\end{references}
\end{document}
\begin{document}
\title{Generation of path-encoded Greenberger-Horne-Zeilinger states}
\author{N. Bergamasco}
\email{[email protected]}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy}
\author{M. Menotti}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy}
\author{J. E. Sipe}
\affiliation{Department of Physics, University of Toronto, 60 St. George Street, Toronto,\\ Ontario M5S 1A7, Canada}
\author{M. Liscidini}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy}
\date{\today}
\begin{abstract}
We study the generation of Greenberger-Horne-Zeilinger (GHZ) states of three path-encoded photons. Inspired by the seminal work of Bouwmeester et al. \cite{Bouwmeester} on polarization-entangled GHZ states, we find a corresponding path representation for the photon states of an optical circuit, identify the elements required for the state generation, and propose a possible implementation of our strategy. Besides the practical advantage of employing an integrated system that can be fabricated with proven lithographic techniques, our example suggests that it is possible to enhance the generation efficiency by using microring resonators.
\end{abstract}
\pacs{42.50.-p,42.82.Et}
\maketitle
\section{Introduction}
Quantum correlations between subsystems are the focus of many studies on the foundations of quantum mechanics, and the ability to generate states that exhibit these correlations is central to quantum information processing.
While quantum correlations in a bipartite state are generally well-understood \cite{Nielsen}, the analysis of multipartite states is more intricate.
Even for tripartite entangled states, where only three subsystems are involved, one can identify separable states, biseparable states, and two inequivalent classes of tripartite entangled states \cite{Cirac}: Greenberger-Horne-Zeilinger (GHZ) states \cite{GHZ_Paper,GHZ_Book}, and W states \cite{Kiesel}. A state in one class cannot be transformed into a state of the other using only local operations and classical communication.
In this communication, we focus on the generation of tripartite GHZ states, the simplest of which can be written as
\begin{equation}\label{GHZ paradigm}
\ket{GHZ}=\frac{1}{\sqrt{2}}\big(\ket{000}+\ket{111}\big),
\end{equation}
where $\ket{0}$ and $\ket{1}$ are orthogonal states.
GHZ states were first studied experimentally by Bouwmeester et al. \cite{Bouwmeester}, where the states $\ket{0}$ and $\ket{1}$ identified orthogonal photon polarizations. But other implementations of the orthogonal states are possible, and have been demonstrated in a variety of platforms including trapped ions \cite{Roos} and superconducting circuits \cite{DiCarlo}. GHZ states have been applied in tests of local realism \cite{Pan}, where the use of tripartite states allows for a demonstration of its conflict with quantum mechanics even in a definite measurement, as opposed to such tests using bipartite states which rely on the statistics of a large number of measurements. They have also been used to devise quantum communication protocols, such as multipartite quantum key distribution, with secret keys shared safely among three parties \cite{Jin}; dense coding \cite{Hao}, where the capacity of a transmission channel is increased by using quantum states of light; and entanglement swapping \cite{Xiaolong}.
When photons are used to produce a GHZ state, the entangled degree of freedom is typically polarization. This choice arises from the fact that polarization can be naturally used as a qubit, and because polarization-entangled photon pairs are now routinely produced by parametric sources \cite{kwiat95}. Moreover, rotation of a polarization-encoded qubit on the Bloch sphere can be easily done by means of linear optical elements such as wave-plates, and routing of the photons can be performed using beam splitters (BSs) and polarizing beam splitters (PBSs).
Yet the use of polarization can be problematic for long distance communication using optical fibers, where polarization can drift during propagation, and for the development of integrated quantum devices, where sophisticated solutions are required to control light polarization on a chip \cite{Matsuda:2012aa}. Thus, the use of other degrees of freedom in photonic implementations of GHZ states is worth investigating.
In this paper we propose a scheme to prepare GHZ states, with the generated photons entangled in the path degree of freedom; the states $\ket{0}$ and $\ket{1}$ here refer to the photon being in different spatial modes \cite{Matthews}, regardless of any other degree of freedom. In presenting our strategy we consider, as an example, an integrated optical circuit in which two photon pairs are generated by spontaneous four-wave mixing (SFWM) in a $\chi^{(3)}$ material. While similar schemes could be implemented in different platforms, the approach we suggest allows us to take advantage of the enhancement of the generation rate provided by integrated microresonators, and to drastically reduce the footprint of the source \cite{Azzini:12}.
In principle, it would be possible to design optical schemes that manipulate path-encoded states and subsequently translate and output them in the polarization representation. This has been proposed recently to achieve chip-to-chip quantum communication \cite{Wang16}. However, here we are mainly interested in both the manipulation and output of path-encoded states on optical chips.
\begin{figure}
\caption{\label{ABC}}
\end{figure}
We envision the situation depicted in Fig. \ref{ABC}: Four photons are generated in an integrated device, of which one is used as a target and three are used as qubits. For each qubit there are two paths, each path associated with a basis state. The three photons are routed to three independent parties (Alice, Bob, and Charlie), who can manipulate them; the rotation of each qubit on the Bloch sphere is performed by means of a Mach-Zehnder interferometer and two phase shifts \cite{Silverstone15}.
The work is organized as follows: in section 2 we establish a correspondence between some relevant optical elements used in polarization optics and the integrated
counterparts in the path-encoding framework. In section 3 we present the integrated approach for generating the GHZ state, we discuss how to post-select the desired state, and we estimate the generation rate. Finally, in section 4 we draw our conclusions.
\section{From polarization- to path-encoded qubits}
Bulk sources used to generate quantum correlated photons typically rely on spontaneous parametric down-conversion (SPDC) in nonlinear crystals, e.g. $\beta$-barium borate (BBO) \cite{kwiat95}, or on SFWM, e.g. in optical fibers \cite{Smith:2009}. The former is a second-order nonlinear process that can be pictured as the spontaneous fission of a pump photon into two daughter photons of lower energy, while the latter is a third-order nonlinear process that can be regarded as the elastic scattering of two pump photons to yield a new photon pair. These two processes can also be used in photonic integrated circuits (PICs) \cite{Azzini:12,Ducci:2013}. In this context, SFWM is particularly useful, for the circuit can be easily fabricated in silicon, with recent implementations employing silicon nitride \cite{Moss}. These materials possess a relatively strong third-order nonlinear susceptibility that favours SFWM, and also provide strong field confinement thanks to the large index contrast with silicon dioxide, which is usually used as the low-index cladding material in the fabrication of ridge waveguides and resonators. In principle, the polarization of the generated photons can be used to implement a qubit either in a bulk or integrated source. Yet in PICs this is particularly challenging, and thus alternative solutions are desirable \cite{Menotti:2016}.
In this section we investigate the possibility of using the path degree of freedom of photons for qubit encoding. To this end, we propose employing two waveguides, or \emph{paths}, for each photon route in a PIC. We assign the state $\ket{1}$ or $\ket{0}$ to a photon when it travels in one waveguide or the other, which we graphically depict as dotted and dashed, respectively, in Fig. \ref{Analogies}. This convention is kept consistent throughout the whole circuit.
In Fig. \ref{Analogies} we show that there is a full correspondence between bulk optical elements used to manipulate polarization states and integrated optical elements necessary to manipulate path-encoded states.
\begin{figure}
\caption{\label{Analogies}}
\end{figure}
The rotation of polarization states is performed in bulk optics by using a $\lambda/2$ plate, while the corresponding evolution of path states is effected with a 50:50 directional coupler (DC) connecting the two waveguides associated with the $\ket{1}$ and $\ket{0}$ states.
Photons in a bulk optical circuit can be routed depending on their polarization using a PBS; the same can be done for the path-encoded states by properly connecting the waveguides of the input ports to the waveguides of the output ports (see Fig. \ref{Analogies}). Finally, photons in a bulk optical circuit can be spatially separated regardless of their polarization state using a BS, and the corresponding operation on path-encoded states is performed in integrated optics using two 50:50 DCs.
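As a simple illustration of this correspondence, in one common convention a 50:50 DC acts on the creation operators of the two path modes as
$$
a^{\dagger}_{0}\;\rightarrow\;\frac{1}{\sqrt{2}}\left(a^{\dagger}_{0}+ia^{\dagger}_{1}\right),
\qquad
a^{\dagger}_{1}\;\rightarrow\;\frac{1}{\sqrt{2}}\left(ia^{\dagger}_{0}+a^{\dagger}_{1}\right),
$$
where $a^{\dagger}_{0}$ and $a^{\dagger}_{1}$ (introduced here only for illustration) create a photon in the waveguides associated with $\ket{0}$ and $\ket{1}$, respectively; up to fixed phases, this reproduces the action of a wave plate on the two polarization modes.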
Two remarks regarding path states and their manipulation are necessary: First, we note that the generation of a meaningful path-encoded state for a photon pair requires a source more complicated than a single-bus-waveguide ring resonator \cite{Mataloni04, Silverstone14, Preble15}.
Second, some of the integrated optical elements used to manipulate path states (see Fig. \ref{Analogies}) display a waveguide crossing that seems problematic in a planar geometry, which is usually the choice for PICs. However, we will see that proper sources can be designed, and a waveguide rearrangement can avoid the problematic waveguide crossing.
\section{State generation and manipulation}
Here we discuss the generation of path-encoded GHZ states and present an integrated circuit based on the fundamental building blocks introduced in the previous section.
Considering a generic parametric source, in the approximation of undepleted pump pulses described classically, the state of the generated photons is of the form \cite{braunstein2005}
\begin{multline}
\ket{\psi} = e^{\beta C^\dagger_{II}-H.c.}\ket{\text{vac}}\\
= \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} + \beta\Up{C}_{II}\ket{\text{vac}} + \frac{1}{2}\left[\beta\Up{C}_{II}\right]^2\ket{\text{vac}}+ \ldots \\
\equiv \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} + \beta\ket{\text{II}} + \frac{1}{2}\left[\beta\Up{C}_{II}\right]^2\ket{\text{vac}} + \ldots,
\label{eq:sfwmstate}
\end{multline}
where $\ket{\text{vac}}$ is the vacuum state, $\Up{C}_{II}$ is the photon pair creation operator, $\left|\beta\right|^2$ is the pair
generation probability per pulse when that number is very small, and $\ket{\text{II}}$ is the normalized two-photon state. In the limit of interest where $\left|\beta\right|^2\ll1$, we can truncate the expansion \eqref{eq:sfwmstate} at the quadratic term in $\beta$, which corresponds to the generation of two pairs.
The properties of the four-photon state contribution to \eqref{eq:sfwmstate}, resulting from the generation of two photon pairs, are directly related to those of the creation operator $\Up{C}_{II}$. Hence, once this has been calculated, the output state of two or more pairs can be obtained immediately.
For this reason, we begin with a discussion of the generation of a single photon pair.
The structure we propose can be divided in two parts: a nonlinear integrated source, which generates a path-encoded initial state, and a linear optical circuit to manipulate it. The full calculation of the output state is reported in the Appendix.
\begin{figure}
\caption{\label{source}}
\end{figure}
The nonlinear integrated source (see Fig. \ref{source}) consists of four identical ring resonators arranged in two blocks, each of which is a Mach-Zehnder interferometer (MZI) unbalanced by a phase $\phi_i$, with one ring resonator per arm. The two blocks are coherently pumped using a 50:50 directional coupler, which splits the pump amplitude into two waveguides, with $\phi$ being the pump phase difference between the two blocks. Although this is not strictly necessary, here we consider degenerate SFWM \cite{Fang:13}, for which we require a dual-pump scheme, where the 50:50 split ratio can be guaranteed by choosing an appropriate length of the directional coupler \cite{Huang:94}. Since the field enhancement inside the rings is much larger than that in the waveguides, we assume that the generation of photons occurs only in the resonators.
It should be noticed that although the use of four identical microring resonators might pose some constraints, the fabrication technique for multiple integrated elements on SOI platforms has constantly improved in recent years, up to the realization of several hundred coupled microrings \cite{Mookherjea}. Moreover, it is possible to tune each resonator almost independently via heaters: this enables the control of the position of its resonances with great precision \cite{Cunningham:10}. If one considers silicon ring resonators, the large nonlinearity ($\gamma\approx 200\ W^{-1}m^{-1}$) guarantees high generation efficiencies with mW pump powers and $Q\approx10000$ \cite{Azzini:12}, which relaxes the constraints on the ring tunability. Finally, the two blocks in Fig. \ref{source} have already been used for the generation of deterministically split photons by the reverse HOM effect, yielding high-visibility quantum interference \cite{Silverstone15}.
Indeed, when $\phi_i=\pi/2[2\pi]$ one observes deterministic splitting of the photon pair exiting the MZI \cite{Silverstone14}. But when the two blocks are pumped with a relative phase shift $\phi=\pi$ (or odd multiple), the two-photon state generated by the source is the Bell state (see the Appendix)
\begin{equation}
\ket{\Psi^-} = \frac{1}{\sqrt{2}}\left(\ket{1}\ket{0} - \ket{0}\ket{1}\right),
\label{eq:pathent}
\end{equation}
where we use the first and fourth waveguide for the first path-encoded qubit and we use the second and the third waveguide for the second path-encoded qubit as depicted in Fig. \ref{source}. This situation is analogous to that considered by Bouwmeester et al. \cite{Bouwmeester}, where the nonlinear crystal generates photon pairs in the corresponding polarization-encoded entangled state.
We now consider the simultaneous generation of two pairs of photons, described by the effect of $(C_{II}^\dagger)^2$ on the vacuum state. This leads to the four-photon state
\begin{multline}
\ket{\mathrm{IV}} = -\frac{1}{2\sqrt{3}}\int dk'_1dk'_2 dk_1dk_2
\phi_{\text{ring}}(k_1,k_2)\phi_{\text{ring}}(k'_1,k'_2)\\
\times e^{i(\psi(k_1,k_2) +\psi'(k_1,k_2))}(\Up{b}_{k_1,1}\Up{b}_{k_2,2}
- \Up{b}_{k_1,3}\Up{b}_{k_2,4})\\
\times (\Up{b}_{k'_1,1}\Up{b}_{k'_2,2}
- \Up{b}_{k'_1,3}\Up{b}_{k'_2,4})\ket{\text{vac}},
\label{eq:fourphostate_text}
\end{multline}
where $\phi_{\text{ring}}(k_1,k_2)$ is the biphoton wave function of a pair generated in a single ring, $\psi(k_1,k_2)$ and $\psi^{\prime}(k_1,k_2)$ are phase factors associated with propagation in the channel (which can be assumed constant) defined in \eqref{eq:psiwave}, and $b^{\dagger}_{k_i,j}$ is the operator associated with the creation of a photon having wavevector $k_i$ and exiting the structure in Fig. \ref{source} from the channel $j$. The state $\ket{\text{IV}}$ is normalized under the assumption that the biphoton wave function $\phi_{\text{ring}}(k_1,k_2)$ is separable (see below).
\begin{figure}
\caption{\label{schemfull}}
\end{figure}
We now turn to the manipulation of the state $\ket{\mathrm{IV}}$, which is done following the recipe Bouwmeester et al. \cite{Bouwmeester} used for polarization-encoded entangled states, but implemented for path-encoded entangled states using the correspondence between polarization bulk elements and the path integrated components illustrated in Fig. \ref{Analogies}. Note that we have avoided the waveguide crossing in the integrated analogue of a beam splitter (see Fig. \ref{Analogies}) by a rearrangement of the circuit waveguides as shown in Fig. \ref{schemfull}.
In strict analogy with Bouwmeester et al. \cite{Bouwmeester}, post-selecting on a three-fold coincidence in detectors $D_1$, $D_2$, and $D_3$ in Fig. \ref{schemfull}, conditioned on the detection of a photon in the target detector $T$, identifies that a GHZ state was generated. Care must be taken to ensure that the generated GHZ state is pure. As in the generation of pairs of photons for heralded photon applications, this requires that the function $\phi_{\text{ring}}(k_1,k_2)$ be separable. To this end we observe that nearly-uncorrelated photons can be obtained by adjusting the duration of the pump pulse, which has to be comparable to or shorter than the dwelling time of the photon inside the ring \cite{Helt10,onodera16}. For complete separability, a more careful design of the ring is required \cite{Gentry:16,Vernon:2017}. A more detailed discussion of the effect of the spectral properties of the BWF goes beyond the scope of this work, but we plan to examine this issue in a future communication.
Following earlier work \cite{Liscidini12}, the state \eqref{eq:fourphostate_text} can be written in terms of the creation operators corresponding to the asymptotic-out field of the structure of Fig. \ref{schemfull}. The relation between the asymptotic-in and -out field operators is reported in \eqref{eq:evo} of the Appendix. This allows us to rewrite the complete output state as:
\begin{align}
\ket{\psi} &= \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} \nonumber \\
&+ \beta\ket{\text{II}} + \frac{\sqrt{3}}{2}\beta^2\left[\ket{\Phi} -\frac{1}{2\sqrt{3}}\ket{\psi_{\text{GHZ}}}\right],
\label{eq:outstate}
\end{align}
where $\ket{\Phi}$ includes other contributions that are second order in $\beta$ but would not lead to a four-fold coincidence event, while
\begin{multline}
\ket{\psi_{\text{GHZ}}} = \int dk_1dk_2dk'_1dk'_2\phi_{\text{ring}}(k_1,k_2)
\phi_{\text{ring}}(k'_1,k'_2)\\
\times e^{i\Gamma}\ket{\text{T}}\ket{\text{GHZ}},
\label{eq:ghz5}
\end{multline}
with $\Gamma$ a phase factor and
\begin{align}
\ket{\text{GHZ}} &= \frac{1}{\sqrt{2}}\left[\Up{b}_{k_1,D_{1,1}} \nonumber
\Up{b}_{k_2,D_{2,1}}\Up{b}_{k'_2,D_{3,0}} \right.\nonumber \\
&+ \left. e^{i\Theta(k_2,k'_1,k'_2)}\Up{b}_{k_2,D_{1,0}}\Up{b}_{k'_2,D_{2,0}}\Up{b}_{k_1,D_{3,1}}\right]\ket{\text{vac}}\nonumber \\
&= \frac{1}{\sqrt{2}}\left[\ket{110} + e^{i\Theta}\ket{001}\right], \qquad\qquad\qquad \nonumber\\
\ket{T}&=\Up{b}_{k'_1,T}\ket{\mathrm{vac}}.
\label{eq:GHZ}
\end{align}
Here $\Theta(k_2,k'_1,k'_2)$ is a relative phase between the two GHZ state components and depends on the relative positions of the detectors (see \eqref{eq:Theta}); the corresponding path-length differences cannot exceed the coherence length of the photons. Such a coherence length can always be increased by filtering, although for typical resonance widths achievable at telecom wavelengths in silicon and silicon nitride resonators it already ranges from centimetres to meters \cite{Moss, Grassani15, Preble15}.
As expected, any four-fold coincidence event results in a GHZ state, where the probability of such an event is $\left|\beta^2/4\right|^2$, when propagation losses are neglected. The magnitude of $\beta$ depends on the pump power, the ring radius and the quality factor of the resonators, and it can vary depending on the device under consideration. Yet values of $\left|\beta\right|^2 \approx 0.1$ have been demonstrated in PICs \cite{Silverstone15}, and assuming MHz pump repetition rates, this would allow for the preparation of path-encoded GHZ states at kHz generation rates with mW pump powers and quality factors of the order of $10^4$. Although our theoretical estimate does not account for losses, device imperfections, or detector efficiencies, we still expect a large improvement in the generation rate with respect to the results presently reported in the literature.
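To make the estimate explicit, with $\left|\beta\right|^2 \approx 0.1$ the four-fold coincidence probability per pulse is
$$
\left|\beta^2/4\right|^2=\frac{\left|\beta\right|^4}{16}\approx\frac{(0.1)^2}{16}\approx 6\times10^{-4},
$$
so that, taking a pump repetition rate of $1$ MHz as a representative value, one obtains roughly $6\times10^{2}$ post-selected events per second, i.e. a generation rate of the order of a kHz.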
\section{Conclusions}
For the generation and processing of quantum correlated photons, we have shown that there is a one-to-one correspondence between components operating in a path encoding scheme and bulk optical elements operating in a polarization encoding scheme. Exploiting this result, we proposed and studied the generation of path-encoded tripartite GHZ states. Although the generation of the desired state is revealed only in post-selection, therefore destroying the quantum state, many protocols involving GHZ states are based on this condition \cite{Jin,Hao,Xiaolong}.
Our approach is suitable for the generation of multipartite states in quantum photonic integrated devices, as it overcomes the difficulties related to the use of the polarization degree of freedom. To demonstrate this, we designed and studied an integrated structure relying on the generation of photon pairs by SFWM in ring resonators, showing the potential of this approach in terms of source footprint and brightness.
\section*{Acknowledgments}
We are grateful to Mario Arnolfo Ciampini for the critical reading and the fruitful discussions on the manuscript.
\appendix
\section{}
Referring to the schematic representation in Fig. \ref{source}, the photon pair creation operator $C_{II}^\dagger$ can be expressed very generally as
\begin{equation}
C^\dagger_{II} = \frac{1}{\sqrt{2}}\sum_{p,q}\int dk_1dk_2\phi_{p,q}\left(k_1,k_2\right)b^\dagger_{k_1,p}b^\dagger_{k_2,q},
\end{equation}
where $p$ and $q$ run over all the output channels, $\phi_{p,q}$ is the amplitude of the biphoton wave function (BWF) that is associated with the photon pair exiting from channels $p$ and $q$ and
\begin{equation}\label{eq:normalcondition}
\sum_{p,q} \int\ dk_1dk_2 \left| \phi_{p,q}(k_1,k_2)\right|^2=1.
\end{equation}
The particular arrangement of the ring resonators in our source design allows for only a restricted number of combinations in $(p,q)$:
\begin{multline}
(p,q)\in\Omega = \left\{(1,1);(2,1);(1,2);(2,2);\right. \\ \left.(3,3);(4,3);(3,4);(4,4)\right\}.
\end{multline}
To lowest order in the pump intensities, $\phi_{p,q}(k_1,k_2)$ can be written as \cite{Yang, Helt10, Liscidini12}
\begin{multline}\label{eq:phipq}
\phi_{p,q}\left(k_1,k_2\right) =\frac{2\sqrt{2}\pi\alpha^2i}{\beta\hbar}\int dk
\phi_P(k)\phi_P(k_1+k_2-k)\\
\times S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right),
\end{multline}
where the coupling term $S_{p,q}$ is related to the superposition of the asymptotic-in fields in the structure by
\begin{multline}
S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right) =\\
= \frac{3}{2\epsilon_0}
\sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1,p})(\hbar\omega_{k_2,q})}{16}}\\
\times\int d\V{r}\Gamma^{ijkl}\left(\V{r}\right)D^{i,\text{asy-in}}_{k_1+k_2-k}(\V{r})D^{j,\text{asy-in}}_{k}(\V{r})\\
\times D^{k,\text{asy-in}}_{k_1,p}(\V{r})D^{l,\text{asy-in}}_{k_2,q}(\V{r}),
\label{Spq}
\end{multline}
\noindent where $\Gamma^{ijkl}\left(\V{r}\right)$ is related to the third-order nonlinear susceptibility tensor \cite{Helt10}.
Working out the explicit form of each term in equation \eqref{Spq} with respect to the scheme in Fig. \ref{source}, we find that
\begin{multline}\label{Spq_sum}
S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right) =\\
= \frac{3}{2\epsilon_0}\sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1,p})(\hbar\omega_{k_2,q})}{16}}\\
\times\sum_{n\in[1,4]}A_n(k_1+k_2-k)A_n(k)B_{n,p}(k_1)B_{n,q}(k_2)\\
\times\bar{\jmath}(k_1+k_2-k,k,k_1,k_2),
\end{multline}
\noindent where $\bar{\jmath}(k_1+k_2-k,k,k_1,k_2)$ is the overlap integral of the asymptotic-in fields $\tilde{D}_k(\mathbf{r})$ in a single ring
\begin{multline}
\bar{\jmath}\left(k_1+k_2-k,k,k_1,k_2\right)=\int_{1^{st}\text{ ring}} d\V{r}\Gamma^{ijkl}(\V{r})\\
\times \tilde{D}^{i}_{k_1+k_2-k}(\V{r})\tilde{D}^{j}_{k}(\V{r})\tilde{D}^{k}_{k_1}(\V{r})\tilde{D}^{l}_{k_2}(\V{r}),
\label{jbar}
\end{multline}
and the coefficients $A_n$ and $B_{n,p(q)}$ in equation \eqref{Spq_sum} are given by
\begin{align}
A_1(k) &= \left(it\right)^2 e^{i\phi_1}e^{ik(L_1+L_2)},\notag\\
A_2(k) &= itre^{ik(L_1+L_2)},\notag\\
A_3(k) &= r^2 e^{i\phi}e^{ik(L_1+L_2)},\notag\\
A_4(k) &= itr\,e^{i(\phi+\phi_2)}e^{ik(L_1+L_2)}
\label{eq:Acoeff}
\end{align}
and
\begin{align}
B_{1,1}(k) &= r\,e^{-ikL_3},\notag\\
B_{2,2}(k) &= r\,e^{-ikL_3},\notag\\
B_{2,1}(k) &= it\,e^{-ikL_3},\notag\\
B_{1,2}(k) &= it\,e^{-ikL_3},
\label{eq:Bcoeff}
\end{align}
where $L_1$, $L_2$, and $L_3$ are the distances between coupling points, and $\phi$, $\phi_1$, and $\phi_2$ are phase delays (see Fig. \ref{source}).
Summing all the contributions in equation \eqref{Spq_sum} we find that, when all the directional couplers have a 50:50 split ratio and the phase $\phi_1=\phi_2=\frac{\pi}{2}$, the only nonvanishing terms in \eqref{eq:phipq} are
\begin{align}
\phi_{1,2}(k_1,k_2) &= \phi_{2,1}(k_1,k_2)\notag \\
&= \frac{-i}{4}e^{i\psi\left(k_1,k_2\right)}\frac{\beta_{ring}}{\beta}\phi_{ring}(k_1,k_2)
\label{eq:phi12}
\end{align}
and
\begin{align}\label{eq:phi34}
\phi_{3,4}(k_1,k_2) &= \phi_{4,3}(k_1,k_2) \notag \\
&= \frac{i}{4}e^{i\psi\left(k_1,k_2\right)}e^{2i\phi}\frac{\beta_{ring}}{\beta}\phi_{ring}(k_1,k_2),
\end{align}
where $|\beta_{ring}|^2$ is the probability of generating a pair in a single ring resonator, with a BWF \cite{Helt10}
\begin{multline}
\phi_{ring}(k_1,k_2) = \frac{2\sqrt{2}\pi\alpha^2i}{\beta_{\text{ring}}\hbar}\int dk
\phi_P(k)\phi_P(k_1+k_2-k)\\
\times \frac{3}{4\epsilon_0}\sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1})(\hbar\omega_{k_2})}{16}}\\
\times \bar{\jmath}(k_1+k_2-k,k,k_1,k_2)
\end{multline}
which is normalized according to
\begin{equation}\label{eq:phiringnorm}
\int\ dk_1dk_2\left|\phi_{ring}(k_1,k_2)\right|^2=1,
\end{equation}
and where we defined $\psi(k_1,k_2)$ as
\begin{equation}
\psi(k_1,k_2) = 2(k_1+k_2)(L_1+L_2-L_3).
\label{eq:psiwave}
\end{equation}
Now we can finally reconstruct the complete output state generated by the source.
Considering the limit of low generation probability, the ket \eqref{eq:sfwmstate} takes the form
\begin{align}
\ket{\psi} &\approx \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}}+\beta C_{II}^\dagger\ket{\text{vac}}\notag\\
&+\frac{1}{2}\left(\beta C_{II}^\dagger\right)^2\ket{\text{vac}}+\cdots \notag\\
&\equiv \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}}+\beta \ket{II}+\frac{\sqrt{3}}{2}\beta^2\ket{IV}+\cdots
\label{eq:expansion}
\end{align}
where the factor $\sqrt{3}$ comes from the normalization of the state $\ket{\text{IV}}$, and the normalized two-photon state is
\begin{multline}
\ket{II} = -\frac{i}{4\sqrt{2}}\int dk_1dk_2\frac{\beta_{\text{ring}}}{\beta}\phi_{\text{ring}}\left(k_1,k_2\right)\\
\times e^{i\psi\left(k_1,k_2\right)}\left\{\Up{b}_{k_1,1}\Up{b}_{k_2,2} + \Up{b}_{k_1,2}\Up{b}_{k_2,1}\right. \\
\left.- e^{2i\phi}\left(\Up{b}_{k_1,3}\Up{b}_{k_2,4}+\Up{b}_{k_1,4}\Up{b}_{k_2,3}\right)\right\}\ket{\text{vac}}.
\label{eq:twophostate3}
\end{multline}
When $\phi=\pi$, \eqref{eq:twophostate3} becomes
\begin{multline}
\ket{II} = -\frac{i}{2\sqrt{2}}\int dk_1dk_2\frac{\beta_{\text{ring}}}{\beta}\phi_{\text{ring}}(k_1,k_2)
e^{i\psi\left(k_1,k_2\right)}\\
\times\left\{\Up{b}_{k_1,1}\Up{b}_{k_2,2}
- \Up{b}_{k_1,3}\Up{b}_{k_2,4}\right\}\ket{\text{vac}},
\label{eq:twophostate4}
\end{multline}
which is equivalent to the Bell state \eqref{eq:pathent} in the path-encoding notation.
It should be noticed that, from the normalization condition \eqref{eq:normalcondition} and equations \eqref{eq:phi12} and \eqref{eq:phi34}, we have
\begin{equation}
4\times\int\ dk_1dk_2 \frac{1}{16}\left|\frac{\beta_{\text{ring}}}{\beta}\right|^2\left| \phi_{\text{ring}}(k_1,k_2)\right|^2=1
\end{equation}
that, together with the normalization condition on the BWF \eqref{eq:phiringnorm}, gives
\begin{equation}
\left|\beta\right|^2=\frac{\left|\beta_{ring}\right|^2}{4}.
\label{eq:efficiency}
\end{equation}
In this context we are interested in the simultaneous generation of two photon pairs, and thus we focus on the next term in the expansion \eqref{eq:expansion}, which involves the four-photon state $\ket{\mathrm{IV}}$; using Eq. \eqref{eq:efficiency}, this leads to Eq. \eqref{eq:fourphostate_text}.
Referring to Fig. \ref{schemfull} and following the notation \cite{Heebner04} for directional couplers, we can express the photon creation operators in \eqref{eq:fourphostate_text} in terms of the photon creation operators in each detector channel $D_{n,m}$ as
\begin{align}\label{eq:evo}
\Up{b}_{k_1,1} &= e^{-ik_1L_T}\Up{b}_{k_1,T} \\
\Up{b}_{k_2,2} &= -it_1e^{-ik_2L_{3,0}}\Up{b}_{k_2,D_{3,0}}+ r_1e^{-ik_2L_{2,0}}\Up{b}_{k_2,D_{2,0}} \notag \\
\Up{b}_{k_1,3} &= -it_2e^{-ik_1L_{3,1}}\Up{b}_{k_1,D_{3,1}}+ r_2e^{-ik_1L_{1,1}}\Up{b}_{k_1,D_{1,1}} \notag \\
\Up{b}_{k_2,4} &= -it_3e^{-ik_2L_{2,1}}\Up{b}_{k_2,D_{2,1}}+ r_3e^{-ik_2L_{1,0}}\Up{b}_{k_2,D_{1,0}},\notag
\end{align}
where $L_{n,m}$ is the distance between the appropriate output directional coupler in the source and the detector $D_{n,m}$, and $L_T$ is the length between the upper directional coupler in the source and the target detector $T$.
Using \eqref{eq:evo} in \eqref{eq:fourphostate_text}, and referring to the output state expansion \eqref{eq:expansion} we find that the state at the detectors is \eqref{eq:outstate}-\eqref{eq:GHZ}, with the relative phase between the terms in the GHZ given by
\begin{align}\label{eq:Theta}
\Theta &= k_1(L_{1,1}-L_{3,1} )+k_2(L_{2,1}-L_{1,0} ) \notag\\
&+ k'_2(L_{3,0}-L_{2,0})+\frac{\pi}{2}.
\end{align}
\end{document} |
\begin{document}
\title{\textbf{\textsc{SIMPLE SEMIGROUPS IN FINITE CATEGORIES}}\thanks{This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 670624).\\
This work has been also supported by the French government, through the 3IA C\^ote d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.}}
\begin{abstract}
In this paper, we classify finite categories with two objects such that one of the endomorphism monoids is a group. We prove that having a group on one side constrains the structure of the other endomorphism monoid: it must contain a simple semigroup. We also prove the converse: starting from a Rees matrix semigroup, we can construct a category with two objects such that one of the endomorphism monoids is a group.\\
\textbf{Keywords:} finite categories, simple semigroups, groups.
\end{abstract}
\section{Introduction}
Studying finite categories mainly depends on studying the algebraic nature of their endomorphism monoids. In \cite{grouplike}, we classified finite categories with two objects such that their endomorphism monoids are grouplike. We studied the structure of such categories when the monoids contain a group. In this paper, we discuss categories in which one of the endomorphism monoids is a group; in this case the group structure on one side constrains the structure of the monoid on the other side. In particular, the second monoid contains a simple semigroup, which is isomorphic to a Rees matrix semigroup over the same group \cite{JEP}.
The notion of categories associated to matrices was initially introduced by T. Leinster and C. Berger in \cite{berger}, and was further discussed in more detail by S. Allouch and C. Simpson in \cite{samer1}. Categories are associated to matrices with positive integer entries, such that the size of the matrix corresponds to the number of objects and each entry corresponds to the number of morphisms between the corresponding pair of objects. In our work, we represent categories as matrices in terms of their monoids and bimodules \cite{grouplike}.
\begin{definition}
A bimodule is a set with actions on the left and the right of the respective monoids, such that the actions commute, i.e. $(g \cdot x) \cdot h = g \cdot (x \cdot h)$. It can be seen as a category in which one of the sets of morphisms is empty; such a category is called an \textit{upper triangulated category}.
\end{definition}
\begin{definition}
Let $\textsf{C}$ be a finite category. Then we get a matrix whose diagonal entries are monoids and whose off-diagonal entries are bimodules. This matrix is called the \textit{algebraic matrix} of $\textsf{C}$.
\end{definition}
\begin{definition}
A category is called \textit{reduced} if it does not have any isomorphic distinct objects.
\end{definition}
\begin{definition}
A semigroup $S$ is \textit{simple} if its only ideals are $\emptyset $ and $S$. The Rees-Sushkevich structure theorem says that a simple semigroup is isomorphic to a \textit{Rees matrix semigroup}. In the following, we proceed without using this result, but some of the lemmas and constructions can also be viewed using this structure.
\end{definition}
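For the reader's convenience we recall the construction, although it is not used in our proofs: given a group $G$, nonempty finite sets $I$ and $\Lambda$, and a matrix $P=(p_{\lambda i})_{\lambda\in\Lambda,\, i\in I}$ with entries in $G$, the \textit{Rees matrix semigroup} $\mathcal{M}(G;I,\Lambda;P)$ is the set $I\times G\times\Lambda$ equipped with the multiplication
$$
(i,g,\lambda)\,(j,h,\mu) = (i,\ g\,p_{\lambda j}\,h,\ \mu).
$$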
\begin{remark}
In our work we exclude empty ideals: whenever we speak of an ideal, we assume it is nonempty unless stated otherwise.
\end{remark}
There are two directions:
\begin{theorem} [\textbf{Monoids to categories}, Theorem \ref{montocatstheorem}] \label{theorem1} Let $A$ be a finite monoid which is not a group, and let $S$ be its minimal ideal. Then $S$ is a simple semigroup, and we can construct a category $\textsf{C}'$ with two objects associated to
\begin{equation}\label{mat1} \tag{M(1)}
\left(\begin{array}{cc}
S^{*1} & L \\
R & G
\end{array}\right) ~; ~ S^{*1} = S \cup \{1\}
\end{equation}
where $L$ and $R$ are minimal left and right ideals of $S$ respectively, and $G = L \cap R$ is a group. Moreover,
$$
|S| = \frac{|L|\cdot |R|}{|G|}.
$$
Then there is an inclusion $\textsf{C}' \subseteq \textsf{C}$ where $\textsf{C}$ is a category associated to the matrix
\begin{equation}\label{mat2} \tag{M(2)}
\left(\begin{array}{cc}
A & L \\
R & G
\end{array}\right).
\end{equation}
\end{theorem}
\begin{remark} \label{remarkgroup}
If $A$ is a group, then $S = A$ and in this case the identity of $S$ is the identity of $A$, and we obtain a groupoid \cite{grouplike}. In particular, we get $A \simeq G$ and $|L| = |R| = |G|$. For this reason, we generally suppose in what follows that $A$ is not a group.
\end{remark}
\begin{theorem} [\textbf{Categories to monoids}] \label{goaltheorem} Let $\textsf{C}$ be a category associated to
\begin{equation*}
\left(\begin{array}{cc} \label{mat3} \tag{M(3)}
A & U \\
V & G
\end{array}\right)
\end{equation*}
where $G$ is a group and $U,V$ are not empty. Then $UV = S$ is the minimal ideal of $A$, and the category associated to (\ref{mat3}) is isomorphic to the category associated to (\ref{mat2}). Moreover, if $A$ is not a group, then
$$
|A| \geq \frac{|L|\cdot |R|}{|G|} + 1.
$$
\end{theorem}
\section{Categories with groups endomorphism monoids}
In this section, we consider categories with two objects where one of the endomorphism monoids is a group. The existence of a group on one of the diagonals of the matrix of a category imposes some properties on the sets of morphisms. These properties eventually characterize the nature of the other monoid.
\begin{lemma} \label{freeaction}
Let $\textsf{C}$ be a category associated to the algebraic matrix
$$
\left(\begin{array}{cc}
A & L \\
R & G
\end{array}\right)
$$
such that $L$ and $R$ are not empty and $G$ is a group. Then $G$ acts freely on $L$ and $R$. Therefore $|L|$ and $|R|$ are multiples of $|G|$.
\begin{proof}
Let $g_1, g_2 \in G$ and $x \in L$ such that
$$
x \cdot g_1 = x \cdot g_2
$$
then
$$
y \cdot x \cdot g_1 = y \cdot x \cdot g_2 \quad \textrm{for some}~y\in R.
$$
We have that
$$
y \cdot x = z \in G
$$
then
$$z^{-1}\cdot z \cdot g_1 = z^{-1} \cdot z \cdot g_2
$$
then $g_1=g_2$.
The same argument, multiplying on the right by some $x\in L$, shows that $G$ acts freely on $R$.
\end{proof}
\end{lemma}
\begin{cor} \label{freeactioncor}
Let $\textsf{C}$ be a category associated to the algebraic matrix
$$
\left(\begin{array}{cc}
A & L \\
R & G
\end{array}\right)
$$
such that $L$ and $R$ are not empty and $G$ is a group. Then $G$ acts freely on $L \times R$.
\begin{proof}
Here $G$ acts on $L \times R$ by $g\cdot(x,y) := (x \cdot g^{-1}, g \cdot y)$. Let $g_1,g_2 \in G$ and let $(x,y) \in L \times R$ such that
\begin{eqnarray*}
g_1 \cdot (x,y) &=& g_2 \cdot (x,y) \\
(x \cdot g_1^{-1}, g_1 \cdot y) &=& (x \cdot g_2^{-1}, g_2 \cdot y)
\end{eqnarray*}
then
$$
x \cdot g_1^{-1} = x \cdot g_2^{-1} ~\textrm{and}~ g_1 \cdot y = g_2 \cdot y
$$
and by the free action of $G$ on $L$ and $R$ we get
$$
g_1 = g_2.
$$
\end{proof}
\end{cor}
\begin{remark}
The condition that $L$ and $R$ are not empty is important to obtain the results. Therefore, in the following we always assume that the bimodules $L$ and $R$ are not empty.
\end{remark}
Let $B$ be a monoid, and let $L$ be a right $B$-module and let $R$ be a left $B$-module. Then there exists a set
$$
L \otimes_B R := L \times R / \sim_B
$$
such that
$$
(xb,y) \sim_B (x,by) \quad \forall x \in L,\ b\in B,\ y\in R,
$$
where $\sim_B$ denotes the equivalence relation generated by these identifications.
Then there is a map
$$
L \times R \rightarrow L \otimes_B R := L \times R /\sim_B \rightarrow LR~;~ (x,y) \mapsto x\otimes y \mapsto x\cdot y.
$$
\begin{lemma}\label{assoc}
Let $A,B,A'$ be monoids.
\begin{enumerate}
\item If $L$ is an $(A,B)$-bimodule then $L \otimes_B R$ is a left $A$-module.
\item If $R$ is a $(B,A')$-bimodule then $L \otimes_B R$ is a right $A'$-module.
\item If $L,R$ are $(A,B)$ and $(B,A)$ bimodules, then $L \otimes_B R$ is an $(A,A)$-bimodule.
\end{enumerate}
\end{lemma}
\begin{lemma}
If $B=G$ is a group, then $G$ acts on $L \times R$ and
$$
(x,y) \sim (x',y') \iff \exists~ g\in G ~\textrm{such that}~ x' = xg^{-1} ~\textrm{and}~ y' = gy.
$$
In this case
$$
g \cdot (x,y) := (xg^{-1}, gy)~\textrm{and}~ L \otimes_G R = L \times R /G.
$$
\end{lemma}
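Indeed, when $B=G$ is a group the generating identifications are exactly the orbit identifications: writing $x' := xg$, the relation $(xg,y)\sim(x,gy)$ reads
$$
(x',y) \sim (x'g^{-1},\, gy) = g\cdot(x',y),
$$
and since the orbit relation of a group action is already an equivalence relation, no further identifications are generated.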
\begin{remark}
If $B=G$ is a group, we can transform a left action to a right action and vice-versa by multiplying with inverses.
\end{remark}
\begin{prop}\label{determinantprop}
Let $\textsf{C}$ be a finite category associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & L \\
R & G
\end{array} \right)$.
\end{center}
Consider the quotient $L \times R /G$ with the equivalence relation
$$
(x,y) \sim (x',y') \iff \exists~ g\in G ~\textrm{such that}~ x' = xg^{-1} ~\textrm{and}~ y' = gy.
$$
Then the function
$$
f : L \times R / G \rightarrow LR~;~ f(x,y) = xy
$$
is bijective. In this case $|LR| = |L \times R / G| = \frac{|L||R|}{|G|}$.
\begin{proof}
\begin{enumerate}
\item \textbf{$f$ is injective:} Let $(x,y), (x',y') \in L\times R /G$ such that $f(x,y) = f(x',y')$, that is, $xy = x'y'$. There exist $c \in L$ and $d\in R$ such that $yc = 1_G$ and $dx = 1_G$: indeed, picking any $c_0\in L$ we have $yc_0\in G$, and $c = c_0\,(yc_0)^{-1}\in L$ satisfies $yc=1_G$; the element $d$ is obtained similarly. This gives us
$$
x = x \cdot (yc) = x'(y'c)
$$
and
$$y = (dx) \cdot y = (dx')y'
$$ where $y'c$ and $dx'$ are in $G$ and they are inverses, indeed
$$
(dx')(y'c) = d(x'y')c = d(xy)c = (dx)(yc) = 1_G.
$$
Then $(x,y) \sim (x',y')$.
\item \textbf{$f$ is surjective:} Evident.
\item Since $G$ acts freely on $L$ and on $R$ (Lemma \ref{freeaction}) then it also acts freely on $L \times R$ (Corollary \ref{freeactioncor}). Then
$|L \times R / G| = \frac{|L||R|}{|G|}$.
\end{enumerate}
\end{proof}
\end{prop}
\begin{conc}
Let $\textsf{C}$ be a category associated to the matrix
$$
\left(\begin{array}{cc}
A & L \\
R & G
\end{array}\right)
$$
where $G$ is a group. Then
\begin{enumerate}
\item $G$ acts freely on $L$ and $R$.
\item $|L|$ and $|R|$ are multiples of $|G|$.
\item If $A$ is not a group then $|A| \geq \frac{|L||R|}{|G|}+1$.
\end{enumerate}
\end{conc}
\section{From simple semigroups to categories} \label{simpletocats}
We aim in this section to construct a finite category with two objects where one of the endomorphism monoids is a group. We start with a simple semigroup $S$, to which we add an identity element; we then take a minimal left ideal and a minimal right ideal of $S$, denoted by $L$ and $R$ respectively. The intersection of $L$ and $R$ is a group, which will be the second endomorphism monoid of the category.
\begin{definition}
Given a monoid (or semigroup) $S$, a \textit{left ideal} in $S$ is a subset $A$ of $S$ such that $SA$ is contained in $A$. Similarly, a \textit{right ideal} is a subset $A$ such that $AS$ is contained in $A$. Finally, a \textit{two-sided ideal}, or simply \textit{ideal}, in $S$ is a subset $A$ that is both a left and a right ideal.
\end{definition}
\begin{prop}
Let $S$ be a simple semigroup, $L$ be a minimal left ideal and $R$ be a minimal right ideal of $S$. Then $LR = S$.
\begin{proof}
We have $L \subset S$ and $R \subset S$ then $LR \subset S$. In addition, $LR$ is a two sided ideal of $S$. Indeed, let $x \in S$ and $ab \in LR$ such that $a \in L$ and $b \in R$, then
$$
x (ab) = \underbrace{(xa)}_{\in L}b \in LR
$$
and
$$
(ab)x = a\underbrace{(bx)}_{\in R} \in LR.
$$
Hence $LR \subset S$ is a two sided ideal, but $S$ is simple, then $LR = S$.
\end{proof}
\end{prop}
\begin{theorem}\label{group}
Let $S$ be a simple semigroup, $L$ be a minimal left ideal and $R$ be a minimal right ideal of $S$. Then:
\begin{enumerate}
\item $G= L\cap R$ is a group.
\item $RL = L\cap R$.
\end{enumerate}
\begin{proof}\
For part (1), note first that $G$ is closed under multiplication: for $g,h\in G$ we have $gh \in S\cdot L \subseteq L$ and $gh \in R\cdot S \subseteq R$, so $gh\in G$. Since $G$ is finite, it is sufficient to prove that $G$ is right and left cancellative; we also exhibit the identity element explicitly.\\
Let $z\in L \cap R$ and consider the right ideal $zR \subseteq R$, $zR = \{zr_1,\dots,zr_n\}$. If $zr_i=zr_j$ for some $r_i\neq r_j$, then $zR \subsetneq R$, which contradicts the minimality of $R$. Hence $z$ is left cancellable.\\
Similarly, consider the left ideal $Lz \subseteq L$, $Lz = \{l_1z,\dots,l_mz\}$. If $l_iz=l_jz$ for some $l_i\neq l_j$, then $Lz \subsetneq L$, which contradicts the minimality of $L$. Hence $z$ is right cancellable.\\
Therefore $G$ is right and left cancellative. We now prove the existence of an identity element.\\
Since $G$ is right cancellative, the map $x\mapsto x\cdot z$ is injective, and as $G$ is finite it is also surjective; thus every $w\in G$ can be written as $w=e_wz$ for some $e_w\in G$. Set $e:=e_z$, so that $ez=z$.\\
Similarly, since $G$ is left cancellative, the map $x\mapsto z\cdot x$ is bijective; thus every $y\in G$ can be written as $y=zx$ for some $x\in G$, and there exists $e'$ such that $ze'=z$.\\
Then:\\
- $ey=e(zx)=(ez)x=zx=y$ for every $y\in G$, so $e$ is a left identity;\\
- $we'=(e_wz)e'=e_w(ze')=e_wz=w$ for every $w\in G$, so $e'$ is a right identity.\\
Hence $e=ee'=e'$ is an identity element of $G$.\\
For part (2), we always have $RL \subseteq L\cap R$.\\
Let $x\in L\cap R=G$, $x=e \cdot x \in RL$ where $e$ is the identity of $G$.
\end{proof}
\end{theorem}
\begin{theorem} \label{simpletocatstheorem}
Let $S$ be a simple semigroup. Let $L$ and $R$ be minimal left and right ideals of $S$ respectively. Then we can construct a category $\textsf{C}$ associated to the matrix
$$
\left( \begin{array}{cc}
S^{*1} & L \\
R & G
\end{array} \right)~;~ S^{*1} = S \cup \{1\}
$$
such that $G = RL$ is a group and $LR = S$ where
\begin{itemize}
\item $|L|$ and $|R|$ are multiples of $|G|$.
\item $|S| = |LR| = \frac{|L||R|}{|G|}$.
\end{itemize}
\begin{proof}
We want to construct a category with two objects starting with a semigroup. The construction we use is called the \textit{Karoubi envelope} or \textit{idempotent completion}. It was first introduced in 1963 by M. Artin, A. Grothendieck and J.L. Verdier \cite{artin269theorie}, and the definition also appears in \cite{borceux1986cauchy} and \cite{amarilli2021locality}. The idea is that if $\textsf{C}$ is a category, then its idempotent envelope $\textsf{C}^{idem}$ (also denoted $\tilde{\textsf{C}}$) is the category whose objects are pairs $(X,p)$ where $X \in Ob(\textsf{C})$ and $p : X \rightarrow X$ is an idempotent. The set of morphisms from $(X,p)$ to $(X',p')$ is
$$
\textsf{C}^{idem}((X,p),(X',p')) = p' \cdot \textsf{C}(X,X')\cdot p \subseteq \textsf{C}(X,X').
$$
Then in the case of a monoid $A$, we view it as a category with one object $\{\ast\}$ and $A$ is the set of endomorphisms of $\{\ast\}$, we denote the category by $(\{\ast\}, A)$. The Karoubi envelope will be the category
$$
Idem(A) := (\{\ast\}, A)^{idem}
$$
where the objects of $Idem(A)$ are the idempotents of $A$.
If we apply this construction to our work here, we get the following: if $S$ is a simple semigroup, $L$ minimal left ideal of $S$, $R$ minimal right ideal of $S$ and $G$ the group obtained by $L \cap R$. Then we take the two idempotents $e_1 = 1_{S^{*1}}$ and $e_2 = 1_G$, and we want to construct a category $Idem_{e_1,e_2}(S^{*1})$, the sets of morphisms are the following
\begin{center}
$S^{*1} = e_1 \cdot S^{*1} \cdot e_1 ~~~~~~~~~~~~ L = e_1 \cdot S^{*1} \cdot e_2$ \\
$R = e_2 \cdot S^{*1} \cdot e_1 ~~~~~~~~~~~~ G = e_2 \cdot S^{*1} \cdot e_2$.
\end{center}
Note that $L$ and $R$ here are the same as in Theorem \ref{group}, and they are minimal left and right ideals of $S$ respectively.
Then $Idem_{e_1,e_2}(S^{*1})$ is a full subcategory of $Idem(S^{*1}) = \widetilde{S^{*1}}$ the Karoubi envelope of $S^{*1}$. The multiplication of morphisms is defined by the multiplication of elements of $S$.
\end{proof}
\end{theorem}
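As a minimal illustration of the theorem, consider the two-element left zero semigroup $S=\{a,b\}$ with multiplication $xy=x$; it is simple, since $S\cdot X=S$ for every nonempty $X\subseteq S$, so the only nonempty ideal (and the only left ideal) is $S$ itself, while $\{a\}$ and $\{b\}$ are the minimal right ideals. Taking $L=S$ and $R=\{a\}$ we get
$$
G = L\cap R = RL = \{a\}, \qquad LR = \{a\cdot a,\ b\cdot a\} = S, \qquad |S| = 2 = \frac{|L|\,|R|}{|G|},
$$
and the construction above yields the category associated to the matrix
$$
\left(\begin{array}{cc} S^{*1} & L \\ R & G \end{array}\right) = \left(\begin{array}{cc} \{1,a,b\} & \{a,b\} \\ \{a\} & \{a\} \end{array}\right).
$$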
\begin{conc}
In this section, we started with a simple semigroup $S$ and we proved that we can construct a category with two objects such that one of the objects is $S$ and the other one is a group.
\end{conc}
\section{From finite monoids to categories}
In this section, we study how we can construct a category with two objects such that one of the objects is a group by starting with a finite monoid instead of a simple semigroup.
\begin{prop}
Let $A$ be a finite monoid, then $A$ has a unique non-empty minimal ideal denoted by $S_0$.
\begin{proof}
Let $S = \{S_i \subset A \mid S_i ~ \textrm{ideal of}~ A\} = \{S_1, S_2, \hdots , S_k\}$ be the set of all (nonempty) ideals of $A$, and set $S_0 = \underset{S_i \in S}{\bigcap}S_i = S_1 \cap S_2 \cap \hdots \cap S_k$. $S_0$ is not empty because it contains at least $S_1 \cdot S_2 \cdot \hdots \cdot S_k$, and it is minimal because it is contained in every ideal. $S_0$ is unique. Indeed, suppose that $S'$ is another minimal ideal; then $S_0 \subseteq S'$ because $S_0$ is the intersection of all ideals, but $S'$ is minimal, hence $S_0 = S'$.
\end{proof}
\end{prop}
\begin{remark}
Let $A$ be a finite monoid and $S_0$ be the minimal ideal of $A$, then $S_0$ is a sub-semigroup of $A$.
\end{remark}
\begin{lemma} \label{leftofM}
Let $A$ be a finite monoid. If $M \subseteq A$ is a left ideal, then $M$ is a sub-semigroup of $A$, and if $L \subseteq M$ is a minimal left ideal of $M$ then $L$ is a minimal left ideal of $A$.
\begin{proof}
We have $M \cdot L \subseteq L$ and $M \cdot L \subseteq M$, so $M\cdot L$ is a left ideal of $M$ contained in $L$; by the minimality of $L$ in $M$ we have $M \cdot L = L$, i.e. for all $l \in L$ there exist $m \in M$ and $l' \in L$ such that $l = ml'$.
Now let $a \in A$ and $l \in L$ then
\begin{eqnarray*}
a \cdot l &=& a \cdot (ml')\\
&=& (am)\cdot l' \\
&\in& M \cdot L~\textrm{(because $M$ is left ideal of $A$)} \\
&\in& L~\textrm{(because $L$ is left ideal of $M$)}.
\end{eqnarray*}
Therefore, $L$ is a left ideal of $A$.
$L$ is minimal in $A$. Indeed, suppose that $L'$ is a nonempty left ideal of $A$ such that $L' \subseteq L$; then
$$
M \cdot L' \subseteq L' \subseteq L
$$
is a left ideal of $M$, but $L$ is minimal in $M$, so $M\cdot L' = L$, hence $L \subseteq L'$ and therefore $L' = L$.
\end{proof}
\end{lemma}
\begin{lemma} \label{rightofM}
Let $A$ be a finite monoid. If $M \subseteq A$ is a right ideal, then $M$ is a sub-semigroup of $A$, and if $R \subseteq M$ is a minimal right ideal of $M$ then $R$ is a minimal right ideal of $A$.
\begin{proof}
Similar to the proof of Lemma \ref{leftofM}.
\end{proof}
\end{lemma}
\begin{lemma} \label{minofAminofS}
Let $A$ be a finite monoid and let $S_0$ be its minimal ideal. Then $L$ (resp. $R$) is a minimal left (resp. minimal right) ideal of $A$ if and only if $L$ (resp. $R$) is a minimal left (resp. minimal right) ideal of $S_0$.
\begin{proof}
We have $S_0\cdot L \subset S_0$, and $S_0 \cdot L \subset L$ is a left ideal of $A$; by the minimality of $L$ in $A$ we obtain that $S_0\cdot L = L$. Then $L \subset S_0$. Also, $L$ is a minimal left ideal of $S_0$. Indeed, if $L' \subset L$ is a left ideal of $S_0$, then $S_0 \cdot L' \subset L' \subset L$, but $S_0 \cdot L'$ is a left ideal of $A$ and $L$ is minimal in $A$, so $S_0 \cdot L' = L$, hence $L' = L$ and $L$ is a minimal left ideal of $S_0$.
For the other direction, use Lemma \ref{leftofM} and Lemma \ref{rightofM}.
\end{proof}
\end{lemma}
\begin{lemma}
Let $A$ be a finite monoid and $S_0$ the minimal ideal of $A$, then $S_0$ is a simple semigroup. In particular, $S_0$ has the structure of a Rees matrix semigroup.
\begin{proof}
If $J \subset S_0$ is an ideal of $S_0$ then $S_0\cdot J \cdot S_0 \subset J \subset S_0$ and $S_0 \cdot J \cdot S_0$ is an ideal of $A$, then by the minimality of $S_0$ in $A$, we obtain $S_0 \cdot J \cdot S_0 = S_0$ hence $J = S_0$ and $S_0$ is simple.
\end{proof}
\end{lemma}
\begin{theorem} \label{montocatstheorem}
Let $A$ be a finite monoid that is not a group, and let $S_0$ be its minimal ideal. Then by Section \ref{simpletocats}, we obtain a category $\textsf{C}'$ associated to the matrix
$$
\left(\begin{array}{cc}
S_0^{*1} & L \\
R & G
\end{array}\right)
$$
where $L$ and $R$ are minimal left and right ideals of $S_0$ respectively, and $G$ is a group.
Then there exists a category $\textsf{C}$ associated to the matrix
$$
\left(\begin{array}{cc}
A & L \\
R & G
\end{array}\right)
$$
such that $\textsf{C}' \subseteq \textsf{C}$.
\begin{proof}
We use the same construction as in Theorem \ref{simpletocatstheorem}. Applying it to the present setting we get the following: $A$ is a monoid, $L$ is a minimal left ideal of $A$ (equivalently, of $S_0$, by Lemma \ref{minofAminofS}), $R$ is a minimal right ideal of $A$, and $G$ is the group $L \cap R$. We take the two idempotents $e_1 = 1_{A}$ and $e_2 = 1_G$ and construct a category $Idem_{e_1,e_2}(A)$ whose sets of morphisms are the following
\begin{center}
$A = e_1 \cdot A \cdot e_1 ~~~~~~~~~~~~ L = e_1 \cdot A \cdot e_2$ \\
$R = e_2 \cdot A \cdot e_1 ~~~~~~~~~~~~ G = e_2 \cdot A \cdot e_2$.
\end{center}
$L$ is a left minimal ideal of $A$ and $R$ is a minimal right ideal of $A$.
Then $Idem_{e_1,e_2}(A)$ is a full subcategory of $Idem(A) = \tilde{A}$ the Karoubi envelope of $A$. The multiplication of morphisms is defined by the multiplication of elements of $A$.
Now since $A$ is not a group, then $1_A \notin S_0$ (Remark \ref{remarkgroup}), hence
$$
S_0^{*1} \rightarrow A
$$
is injective.
$L$ and $R$ are minimal left and right ideals of $S_0$ respectively. By Lemma \ref{minofAminofS}, $L$ and $R$ are also minimal left and right ideals of $A$ respectively, i.e. for all $x\in L, y\in R$ and $a\in A$, we have $a \cdot x \in L$ and $y \cdot a \in R$. By Section \ref{simpletocats} we have $LR = S_0$. Therefore $LR = S_0 \subset A$.
In addition, $x\cdot y \neq 1_A$ for all $x \in L$ and $y \in R$: otherwise, since $G$ is a group, the two objects would be isomorphic, which would force $A$ to be a group, contradicting the hypothesis.
\end{proof}
\end{theorem}
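As an illustrative check of the bound, take $A = S^{*1} = \{1,a,b\}$, where $S=\{a,b\}$ is the left zero semigroup considered after Theorem \ref{simpletocatstheorem}. Then $S$ is the minimal ideal of $A$, and with $L=\{a,b\}$, $R=\{a\}$, $G=\{a\}$ we get
$$
|A| = 3 = \frac{|L|\,|R|}{|G|}+1,
$$
so the lower bound of Theorem \ref{goaltheorem} is attained in this case.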
\begin{conc}
For any finite monoid $A$, we can construct a category with two objects such that one of the objects is a group. This result means that every finite monoid $A$ is connected to a group, which is the intersection of a minimal left and a minimal right ideal of $A$. Therefore, Theorem \ref{theorem1} is proved.
\end{conc}
\section{From categories to simple semigroups}
In this section, we prove the other direction: starting with a category $\textsf{C}$ with two objects such that one of the endomorphism monoids is a group, we obtain a simple semigroup whose cardinality is as computed in the previous sections.
\begin{theorem}\label{determinant}
Let $\textsf{C}$ be a finite category associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & U \\
V & G
\end{array} \right)$
\end{center}
where $G$ is a group and $A$ is not a group, then $S = UV \subseteq A$ is a simple ideal such that $|S| = \frac{|U||V|}{|G|}$ and there exists a sub-category $\textsf{C}'$ of $\textsf{C}$ associated to the matrix
\begin{equation}\label{rees2}
\left( \begin{array}{cc}
S^{*1} & U \\
V & G
\end{array} \right)
\end{equation}
where $S^{*1} = S \cup \{1\}$ and $S$ is a simple semigroup.
\begin{proof}
$\textsf{C}'$ is a sub-category of $\textsf{C}$:
\begin{itemize}
\item $Ob(\textsf{C}') = Ob(\textsf{C})$.
\item Morphisms $= \{S^{*1}, U, V, G\}$.
\end{itemize}
Suppose $S' \subseteq S$ is a nonempty two-sided ideal of $S$; we show that $S \subseteq S'$.
Let $x \in U, y\in V$ and $a \in S'$. We have $y \cdot a \cdot x \in G $, then there exists $g\in G$ such that
\begin{eqnarray*}
g \cdot (y \cdot a \cdot x) &=& 1_G
\end{eqnarray*}
then
\begin{eqnarray*}
x \cdot y &=& x \cdot 1_G \cdot y \\
&=& x \cdot (g\cdot y\cdot a \cdot x) \cdot y \\
&=& \underbrace{(x \cdot g \cdot y)}_{\in S} \cdot \underbrace{a}_{\in S'} \cdot \underbrace{(x \cdot y)}_{\in S} \in S'.
\end{eqnarray*}
Since $x \in U$ and $y \in V$ were arbitrary, $S = UV \subseteq S'$, hence $S' = S$ and $S$ is simple. Note also that $S=UV$ is a two-sided ideal of $A$, since $A\cdot U \subseteq U$ and $V\cdot A \subseteq V$, and the cardinality formula follows from Proposition \ref{determinantprop}.
\end{proof}
\end{theorem}
\begin{definition}
Let $\textsf{C}$ be a finite category associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & U \\
V & G
\end{array} \right)$
\end{center}
where $G$ is a group. Let $x \in U$ and $y\in V$, define $L_y$ and $R_x$ in the following way
$$
L_y := U\cdot y \subseteq A
$$
and
$$
R_x := x \cdot V \subseteq A.
$$
\end{definition}
\begin{lemma}
$L_y$ is a minimal left ideal of $A$ and $R_x$ is a minimal right ideal of $A$.
\begin{proof}
It is clear that $L_y$ is a left ideal of $A$. We prove that it is minimal. Suppose $L' \subseteq L_y$ is a nonempty left ideal of $A$. Let $u \in U$ and $a\in L'$; we have $y\cdot a \cdot u \in G$, so there exists $g\in G$ such that
$$
g \cdot (y \cdot a \cdot u) = 1_G
$$
then
\begin{eqnarray*}
u \cdot y &=& u \cdot 1_G \cdot y \\
&=& u \cdot (g\cdot y\cdot a \cdot u) \cdot y \\
&=& \underbrace{(u \cdot g \cdot y)}_{\in L_y} \cdot \underbrace{a}_{\in L'} \cdot \underbrace{(u \cdot y)}_{\in L_y} \in L'.
\end{eqnarray*}
Since $u\in U$ was arbitrary, $L_y = U\cdot y \subseteq L'$, hence $L' = L_y$ and $L_y$ is minimal. Similarly for $R_x$.
\end{proof}
\end{lemma}
\begin{remark}
For all $g\in G$, if we replace $y$ by $gy$ we obtain
$$
L_{gy} = U \cdot g \cdot y = U \cdot y = L_y~(\textrm{because}~U \cdot g = U).
$$
\end{remark}
\begin{prop} \label{bijection}
Let $\textsf{C}$ be a category associated to the matrix
$$
\left(\begin{array}{cc}
A & U \\
V & G
\end{array}\right)
$$
Let $\mathcal{L}(A) = \{L \subseteq A \mid L~ \textrm{is a minimal left ideal of $A$}\}$ and $\mathcal{R}(A) = \{R \subseteq A \mid R ~\textrm{is a minimal right ideal of $A$}\}$.
Define the quotient sets
$$
_{G}\backslash^{V} := V/(y \sim gy) ~~~\textrm{and}~~~ \bigslant{U}{G} := U/(x \sim xg).
$$
Then the maps $y\mapsto L_y$ and $x\mapsto R_x$ induce bijections
$$
_{G}\backslash^{V} \simeq \mathcal{L}(A) ~~\textrm{and}~~\bigslant{U}{G} \simeq \mathcal{R}(A).
$$
This means that for every minimal left ideal $L$ of $A$, there exists $y \in V$ such that $L = L_y$. Similarly, for every minimal right ideal $R$ of $A$, there exists $x \in U$ such that $R = R_x$.
\end{prop}
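For the reader's convenience we sketch why, when $A$ is not a group, every minimal left ideal $L$ of $A$ is of the form $L_y$ (the argument for minimal right ideals is symmetric). By Theorem \ref{determinant}, $S=UV$ is a simple ideal of $A$; since the minimal ideal $S_0$ is contained in every ideal and is itself an ideal of $S$, simplicity forces $S_0=S$. By Lemma \ref{minofAminofS}, $L$ is then a minimal left ideal of $S_0=UV$, so we may pick $u\in U$ and $v\in V$ with $uv\in L$. By minimality, $L = A\cdot(uv)$, and since $A\cdot u\subseteq U$ we obtain
$$
L = A\cdot(uv) = (A\cdot u)\cdot v \subseteq U\cdot v = L_v;
$$
as $L_v$ is itself a minimal left ideal, it follows that $L=L_v$.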
\begin{lemma}
Let $\textsf{C}$ be a finite category associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & U \\
V & G
\end{array} \right)$
\end{center}
where $G$ is a group and $S= UV$ a simple ideal of $A$. For $x \in U$ and $y \in V$ such that $y\cdot x = 1_G$, we have
$$
L_y R_x = U \cdot y \cdot x \cdot V = UV = S.
$$
\end{lemma}
\begin{prop}
Let $A$ be a finite monoid that is not a group, let $L$ be a minimal left ideal and $R$ be a minimal right ideal of $A$, let $x,y \in A$, then
\begin{center}
$(xR)\cap (Ly) = xRLy$.
\end{center}
\begin{proof}
Theorem \ref{group}.
\end{proof}
\end{prop}
\begin{notation}
$(xR)\cap (Ly) = xRLy= G_{xy}$.
\end{notation}
\begin{theorem} \label{catstosimpletheorem}
Let $\textsf{C}$ be a finite category associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & U \\
V & G
\end{array} \right)$
\end{center}
where $G$ is a group. Let $x \in U, y\in V$ such that $y\cdot x = 1_G$, then by the construction described in Theorem \ref{montocatstheorem} there exists a category $\textsf{C}'$ associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & L_y \\
R_x & G_{xy}
\end{array} \right)$
\end{center}
such that $L_y R_x = S$ the simple ideal of $A$ where $L_y$ is a minimal left ideal of $A$ and $R_x$ is a minimal right ideal of $A$ and $G_{xy} = x \cdot G \cdot y$ is a group. Then $\textsf{C}' \simeq \textsf{C}$.
\begin{proof}
Consider the maps
$$
\phi : G \rightarrow G_{xy}~;~ g \mapsto xgy
$$
$$
\psi : U \rightarrow L_y ~;~ u \mapsto uy
$$
and
$$
\psi' : V \rightarrow R_x ~;~ v \mapsto xv.
$$
The maps $\phi$, $\psi$ and $\psi'$ are bijections compatible with composition (using $y\cdot x = 1_G$), and together with the identity map on $A$ they provide the isomorphism $\textsf{C}' \simeq \textsf{C}$.
\end{proof}
\end{theorem}
\begin{prop}
The category obtained is unique up to isomorphism.
\begin{proof}
Let $A$ be a monoid and choose minimal ideals $L,R$ such that $G = RL$; choose another pair $L',R'$ such that $G' = R'L'$.
Since $G= RL$, we can choose $L_y,R_x \subseteq A$ such that
$$
\textsf{C}(A,L,R,G) \simeq \textsf{C}(A,L_y,R_x,G_{xy}).
$$
Take $L_y = L'$ and $R_x = R'$, then
$$
\textsf{C}(A,L',R',G) = \textsf{C}(A,L_y,R_x,G_{xy}).
$$
Hence $\textsf{C}(A,L,R,G) \simeq \textsf{C}(A,L',R',G')$.
\end{proof}
\end{prop}
\begin{conc}
Finally in this section, we prove that if we have a category $\textsf{C}$ associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & U \\
V & G
\end{array} \right)$
\end{center}
then under a choice of an element $x\in U$ and an element $y\in V$, we obtain a left minimal ideal $L_y$ and a right minimal ideal $R_x$ of $A$ such that $L_yR_x$ is a simple semigroup and $R_xL_y$ is a group. This data gives us a unique category $\textsf{C}'$ associated to the matrix
\begin{center}
$\left( \begin{array}{cc}
A & L_y \\
R_x & G_{xy}
\end{array} \right)$
\end{center}
such that $\textsf{C}' \simeq \textsf{C}$ and $|L_yR_x| = \frac{|L_y||R_x|}{|G_{xy}|}$. If $A$ is not a group, then we get
$$
|A| \geq \frac{|U|\cdot|V|}{|G|} +1.
$$
Therefore, Theorem \ref{goaltheorem} is proved.
\end{conc}
\section{Connectivity of monoids}
\begin{definition}
We say that two monoids $A$ and $B$ are \textit{connected} if there exists a category with two objects such that its endomorphism monoids are $A$ and $B$.
\end{definition}
\begin{definition}
We say that a monoid $A$ \textit{has a group} $G$ if its minimal ideal is connected to $G$.
\end{definition}
\begin{remark}
Let $A$ be a finite monoid. By the discussion in the previous sections, there is always a group connected to $A$ through the minimal ideal $S$ of $A$. The group is of the form $x \cdot G \cdot y$ where $x,y \in S$ and $G = RL$, where $L$ and $R$ are minimal left and right ideals of $S$ (equivalently, of $A$) respectively. In addition, $S$ is a simple semigroup, so by the Rees--Sushkevich theorem it is a Rees matrix semigroup, and $G$ is the group involved in the definition of the Rees matrix semigroup.
\end{remark}
We would like to thank J\'er\'emie Marqu\`es for suggesting this result in the following lemma.
\begin{lemma} [J\'er\'emie Marqu\`es] \label{transitive}
If $A,B$ are connected, and $B,C$ are connected then $A,C$ are connected.
\begin{proof}
$A,B$ are connected, then there exists a category $\textsf{C}_1$ associated to the algebraic matrix
$$
M_1 = \left(\begin{array}{cc}
A & L \\
R & B
\end{array}\right)
$$
$B,C$ are connected, then there exists a category $\textsf{C}_2$ associated to the algebraic matrix
$$
M_2 = \left(\begin{array}{cc}
B & L' \\
R' & C
\end{array}\right).
$$
Let
$$
M = \left(\begin{array}{cc}
A & L \otimes_B L' \\
R' \otimes_B R & C
\end{array}\right)
$$
then $M$ is the matrix of a category $\textsf{C}$. Indeed, $A$ acts on the left on $L$ and $C$ acts on the right on $L'$, so $A$ and $C$ act on $L \otimes_B L'$; the same holds for $R' \otimes_B R$. It remains to construct composition maps $(L \otimes_B L') \otimes_C (R' \otimes_B R) \rightarrow A$ and $(R' \otimes_B R) \otimes_A (L \otimes_B L') \rightarrow C$.
\begin{eqnarray*}
(L \otimes_B L') \otimes_C (R' \otimes_B R) &\simeq& L \otimes_B (L' \otimes_C R') \otimes_B R \\
&\rightarrow& L \otimes_B B \otimes_B R\\
&\simeq& L \otimes_B R \\
&\rightarrow& A.
\end{eqnarray*}
We have that
\begin{equation} \label{tens1}
L \otimes_B R \rightarrow A
\end{equation}
is an $(A,A)$-bimodule morphism, and
\begin{equation} \label{tens2}
R \otimes_A L \rightarrow B
\end{equation}
is a $(B,B)$-bimodule morphism.
To verify associativity, the morphisms (\ref{tens1}) and (\ref{tens2}) should satisfy the conditions that the following diagrams are commutative.
\begin{center}
\begin{tikzcd}
L\otimes_B R \otimes_A L \arrow[r] \arrow[d] & A \otimes_A L \arrow[d] \\
L \otimes_B B \arrow[r] & L
\end{tikzcd} ~\textrm{and}~ \begin{tikzcd}
R\otimes_A L \otimes_B R \arrow[r] \arrow[d] & B \otimes_B R \arrow[d] \\
R \otimes_A A \arrow[r] & R
\end{tikzcd}
\end{center}
Same for $(R' \otimes_B R) \otimes_A (L \otimes_B L')$.
\end{proof}
\end{lemma}
\begin{theorem}
Two monoids are connected if and only if they have the same group.
\begin{proof}
For the first direction, let $A$ be a monoid that has a group $G$; then the minimal ideal $S_0$ of $A$ is a Rees matrix semigroup and $A$ is connected to $G$ (Theorem \ref{montocatstheorem}).
Similarly, let $B$ be a monoid that has a group $H$; then the minimal ideal $J_0$ of $B$ is a Rees matrix semigroup and $B$ is connected to $H$.
If $A$ is connected to $B$, then by transitivity (Lemma \ref{transitive}) $G$ and $H$ are connected, hence $G$ and $H$ are isomorphic (Remark \ref{remarkgroup}).
For the second direction, let $A$ and $B$ be two monoids that have the same group $G$; then $A$ is connected to $G$ and $B$ is connected to $G$, so by transitivity (Lemma \ref{transitive}) $A$ and $B$ are connected.
\end{proof}
\end{theorem}
\end{document} |
\begin{document}
\noindent{\small \bf Ordinary differential equations}\\
\noindent{\small \bf Mechanics of particles and systems}
\begin{center}
{\LARGE \bf The meromorphic non-integrability of the three-body
problem}\\
\ {\Large \bf Tsygvintsev Alexei}
\end{center}
\begin{abstract}
We study the planar three-body problem and prove the absence of a
complete set of complex meromorphic first integrals in a
neighborhood of the Lagrangian solution.
\end{abstract}
\begin{center}
{\bf 1. Introduction}
\end{center}
The three-body problem is a mechanical system which consists of
three mass points $m_1$, $m_2$, $m_3$ which attract each other
according to the Newtonian law [16].
The practical importance
of this problem arises from its applications to celestial
mechanics: the bodies which constitute the solar system attract
each other according to Newton's law, and the stability of this
system over a long period of time is a fundamental question.
Although Sundman [21] gave a power series solution to the
three-body problem in 1913, it was not useful in determining the
growth of the system for long intervals of time. Chazy [3]
proposed in 1922 the first general classification of motion as $t
\rightarrow \infty$. In view of the modern analysis [7], this
stability problem leads to the problem of integrability of a
Hamiltonian system i.e. the existence of a full set of analytic
first integrals in involution. Poincar\'e [18] considered
Hamiltonian functions $H(z,\mu)$ which in addition to
$z_1,\ldots,z_{2n}$ also depended analytically on a parameter
$\mu$ near $\mu=0$. His theorem states that under certain
assumptions about $H(z,0)$, which are in general satisfied, the
Hamiltonian system corresponding to $H(z,\mu)$ can have no
integrals represented as convergent series in $2n+1$ variables
$z_1,\ldots,z_{2n}$ and $\mu$, other than the convergent series in
$H$, $\mu$. Based on this result he proved in 1889 the
non-integrability of the restricted three-body problem [22].
However, this theorem does not assert anything about a fixed
parameter value $\mu$.
Bruns [2] showed in 1882 that the classical integrals are the only
independent algebraic integrals of the problem of three bodies.
His theorem has been extended by Painlev\'e [17], who has shown
that every integral of the problem of $n$ bodies which involves
the velocities algebraically (whether the coordinates are involved
algebraically or not) is a combination of the classical integrals.
However, citing [7]: ``One may agree with Wintner [25] that these
elegant negative results have no importance in dynamics, since
they do not take into account the peculiarities of the behavior of
phase trajectories. As far as first integrals are concerned,
locally, in a neighborhood of a non--singular point, a complete
set of independent integrals always exists. Whether they are
algebraic or transcendent depends explicitly on the choice of
independent variables. Therefore, the problem of the existence of
integrals makes sense only when it is considered in the whole
phase space or in a neighborhood of the invariant set ... ''
Consider a complex-analytic symplectic manifold $M$, a
holomorphic Hamiltonian vector field $X_H$ on $M$ and a
non-equilibrium integral curve $\Gamma \subset M$. The nature of
the relationship between the branching of solutions of a system of
variational equations along $\Gamma$ as functions of the complex
time and the non-existence of first integrals of $X_H$ goes back
to the classical works of Kowalewskaya [6]. Ziglin [27] studied
necessary conditions for an analytic Hamiltonian system with $n
>1$ degrees of freedom to possess $n$ meromorphic independent
first integrals in a sufficiently small neighborhood of the phase
curve $\Gamma$. One can consider the monodromy group $G$ of the
normal variational equations along $\Gamma$. The key idea was that
$n$ independent meromorphic integrals of $X_H$ must induce $n$
independent rational invariants for $G$. Then, in order that
Hamilton's equations have the above first integrals, it is
necessary that for any two non-resonant transformations
$g,g'\in G$, $g$ must commute with $g'$. Although Ziglin
formulated his result in terms of the monodromy group, it became
clear quite recently [15,20] that much more could be achieved, under
mild restrictions, by replacing this with the differential Galois
group. Namely, one should check if its identity component, in
the Zariski topology, is abelian.
The collinear three-body problem was proved to be non-integrable
near triple collisions by Yoshida [26] based on Ziglin's
analysis.
The present paper is devoted to the non-integrability of the
planar three-body problem.
In 1772 Lagrange [8] discovered the particular solution in which
three bodies form an equilateral triangle and each body describes
a conic.
Moeckel [14] has shown that for a small angular momentum there
exist orbits homoclinic to the Lagrangian elliptical orbits and
heteroclinic between them. Consequently in this case the problem
is not-integrable. Nevertheless, it was observed that for a large
angular momentum and for certain masses of two bodies which are
relatively small compared to the third one, the circular
Lagrangian orbits are stable and, a priori, the system can be
integrable near these solutions. Topan [23] found some examples of
such transcendental integrals in certain configurations of the
restricted three-body problem.
Our approach consists of applying the methods related to [27,15]
to the Lagrangian parabolic orbits. This means that we will study
the integrability of the problem in a sufficiently small complex
neighborhood of these solutions.
The plan of the paper is as follows. In Section 2, following
Whittaker, we introduce the reductions of the planar three-body
problem from the Hamiltonian system of 6 degrees of freedom to 3
degrees of freedom. Section 3 is devoted to a parametrization of
the Lagrangian parabolic solution. Section 4 contains the
normal variational equations along this solution. In Section 5 we
study the monodromy group of these equations. In Section 6,
applying the Ziglin's method, we prove that for the three-body
problem there are no two additional meromorphic first integrals in
a connected neighborhood of the Lagrangian parabolic solution
(Theorems 6.2-6.3). Section 7 contains a dynamical interpretation
of above theorems in connection with a theory of splitting and
transverse intersection of asymptotic manifolds.
\begin{center}
{\bf 2. The reduction of the problem}
\end{center}
Following Whittaker [24] let $(x_1,x_2)$ be the coordinates of
$m_1$, $(x_3,x_4)$ the coordinates of $m_2$, and $(x_5,x_6)$ the
coordinates of $m_3$. Let $y_r=m_k\displaystyle\frac{dx_r}{dt}$,
where $k$ denotes the greatest integer in
$\displaystyle\frac{1}{2}(r+1)$. The equations of motion are $$
\displaystyle\frac{dx_r}{dt}=\displaystyle\frac{\partial
H_1}{\partial y_r}, \quad
\displaystyle\frac{dy_r}{dt}=-\displaystyle\frac{\partial
H_1}{\partial x_r}, \quad (r=1,2,\dots,6), \leqno (2.1) $$ where
$$
\begin{array}{ll}
H_1=\displaystyle\frac{1}{2m_1}(y_1^2+y_2^2)+\displaystyle\frac{1}{2m_2}(y_3^2+y_4^2)+\displaystyle\frac{1}{2m_3}(y_5^2+y_6^2)-
m_3m_2\{(x_3-x_5)^2+
(x_4-x_6)^2\}^{-1/2}\\-m_3m_1\{(x_5-x_1)^2+(x_6-x_2)^2\}^{-1/2}-
m_1m_2\{(x_1-x_3)^2+ (x_2-x_4)^2\}^{-1/2}.
\end{array}
$$ This is a Hamiltonian system with $6$ degrees of freedom which
admits $4$ first integrals:
\noindent $T_1=H_1 $ -- the energy,\\
$T_2=y_1+y_3+y_5$, $T_3=y_2+y_4+y_6$ -- the components of the impulse of the system,\\
$T_4=y_1x_2+y_3x_4+y_5x_6-x_1y_2-x_3y_4-x_5y_6$ -- the integral of angular momentum of the system.
The system (2.1) can be transformed to a system with $4$ degrees
of freedom by the following canonical change (Poincar\'e, 1896) $$
x_r=\displaystyle\frac{\partial W_1}{\partial y_r},\quad
g_r=\displaystyle\frac{\partial W_1}{\partial l_r},\quad
(r=1,2,\dots,6), $$ where $$
W_1=y_1l_1+y_2l_2+y_3l_3+y_4l_4+(y_1+y_3+y_5)l_5+(y_2+y_4+y_6)l_6.
\leqno (2.2) $$ Here $(l_1,l_2)$ are the coordinates of $m_1$
relative to axes through $m_3$ parallel to the fixed axes,
$(l_3,l_4)$ are the coordinates of $m_2$ relative to the same
axes, $(l_5,l_6)$ are the coordinates of $m_3$ relative to the
original axes, $(g_1,g_2)$ are the components of impulse of $m_1$,
$(g_3,g_4)$ are the components of impulse of $m_2$, and
$(g_5,g_6)$ are the components of impulse of the system. It can be
shown that, in the center-of-mass frame, the corresponding
equations for $l_5$, $l_6$, $g_5$, $g_6$ disappear from the system
and the reduced system takes the following form $$
\displaystyle\frac{dl_r}{dt}=\displaystyle\frac{\partial
H_2}{\partial g_r},\quad
\displaystyle\frac{dg_r}{dt}=-\displaystyle\frac{
\partial H_2}{\partial l_r}, \quad (r=1,2,3,4), \leqno (2.3)
$$ with the Hamiltonian $$
\begin{array}{ll}
H_2=\displaystyle\frac{M_1}{2}(g_1^2+g_2^2)+\displaystyle\frac{M_2}{2}(g_3^2+g_4^2)+
\displaystyle\frac{1}{m_3}(g_1g_3+g_2g_4)- \displaystyle \frac{
m_3m_2}{\rho_1}- \displaystyle \frac{
m_1m_3}{\rho_2}-\displaystyle\frac{m_1m_2}{\rho_3},
\end{array}
$$ where $$ \rho_1=\sqrt{l_3^2+l_4^2}, \quad
\rho_2=\sqrt{l_1^2+l_2^2}, \quad
\rho_3=\sqrt{(l_1-l_3)^2+(l_2-l_4)^2},$$ are the mutual distances
of the bodies and $M_1=m_1^{-1}+m_3^{-1}$,
$M_2=m_2^{-1}+m_3^{-1}$.
This system admits two first integrals in involution\\ $K_1=H_2$
-- the energy,\\ $K_2=g_2l_1+g_4l_3+g_6l_5-g_1l_2-g_3l_4-g_5l_6=k$
-- the integral of angular momentum.
Let us suppose that the Hamiltonian system (2.3) possesses a first
integral $K$ different from $K_{1,2}$.
\noindent {\bf Definition 2.1} The first integral $K$ of the
system (2.3) is called {\it meromorphic} if it is representable as
a ratio $$ K=\displaystyle \frac{R(l,g)}{Q(l,g)},$$ where $R$, $Q$
are analytic functions of the variables $l_i$, $g_i$, $1 \leq i
\leq 4$.
It can be shown [24] that the system (2.3) possesses an ignorable
coordinate which will make possible a further reduction.
Let us make the following canonical transformation $$
l_r=\displaystyle\frac{\partial W_2}{\partial g_r},\quad
p_r=\displaystyle\frac{\partial W_2}{\partial q_r},\quad
(r=1,2,3,4), \leqno (2.4) $$ where $$
W_2=g_1q_1\mathrm{cos}q_4+g_2q_1
\mathrm{sin}q_4+g_3(q_2\mathrm{cos}q_4-
q_3\mathrm{sin}q_4)+g_4(q_2\mathrm{sin}q_4+q_3\mathrm{cos}q_4). $$
Here $q_1$ is the distance $m_3m_1$; $q_2$ and $q_3$ are the
projections of $m_2m_3$ on, and perpendicular to $m_1m_3$; $p_1$
is the component of momentum of $m_1$ along $m_3m_1$; $p_2$ and
$p_3$ are the components of momentum of $m_2$ parallel and
perpendicular to $m_3m_1$.
One can write the new equations as follows
$$
\displaystyle\frac{dq_r}{dt}=\displaystyle\frac{\partial H}{\partial p_r},\quad \displaystyle\frac{dp_r}{dt}=
-\displaystyle\frac{
\partial H}{\partial q_r}, \quad (r=1,2,3), \leqno (2.5)
$$ and $$ \displaystyle\frac{dq_4}{dt}=\displaystyle\frac{\partial
H}{\partial p_4},\quad \displaystyle\frac{dp_4}{dt}=0, \leqno
(2.5.a) $$ with the Hamiltonian $$
\begin{array}{ll}
H=\displaystyle \frac{M_1}{2}\left\{p_1^2+\displaystyle
\frac{1}{q^2_1}P^2\right\}+\displaystyle \frac{M_2}{2}(p_2^2+
p_3^2)+\displaystyle \frac{1}{m_3}\left\{p_1p_2-\displaystyle
\frac{p_3}{q_1}P\right\} -\displaystyle \frac{ m_1m_3}{r_1}-
\displaystyle \frac{m_3m_2}{r_2}- \displaystyle \frac{
m_1m_2}{r_3}, \\ P=p_3q_2-p_2q_3-p_4,
\end{array}
$$ where $$ r_1=q_1, \quad r_2=\sqrt{q^2_2+q^2_3}, \quad
r_3=\sqrt{ (q_1-q_2)^2+q^2_3},$$ are the mutual distances of the
bodies.
Since $p_4=k=const$ the system (2.5) is a closed Hamiltonian
system with $3$ degrees of freedom. If this system is integrated
then $q_4$ can be found by a quadrature from (2.5.a).
\noindent {\bf Proposition 2.2} {\it If the Hamiltonian system
(2.3) admits the full set of functionally independent meromorphic
first integrals in involution $\{ K_1,K_2,K_3,K_4\}$ then the
system (2.5) possesses two functionally independent additional
first integrals $\{H_1,H_2\}$ which are meromorphic functions of
the variables $q_i$, $p_i$, $1\leq i \leq 3$.}
This is the obvious consequence of the canonical change (2.4).
\begin{center}
{\bf 3. A parametrization of the parabolic Lagrangian solution}
\end{center}
The equations (2.1) admit an exact solution discovered by
Lagrange [8] in which the triangle formed by the three bodies is
equilateral and the trajectories of the bodies are similar conics
with one focus at the common barycenter.
For the reduced form (2.5) the equality of the mutual distances gives
$$ q_1=q,\quad q_2=\displaystyle\frac{q}{2},\quad
q_3=\displaystyle\frac{\sqrt{3}q}{2}, \leqno (3.1) $$ where
$q=q(t)$ is an unknown function. Substituting (3.1) into (2.5) one
can show that $$ p_1=p,\quad p_2=Ap+\displaystyle\frac{B}{q},\quad
p_3=Cp+\displaystyle\frac{D}{q}, \leqno (3.2) $$ with $p=p(t)$
unknown and $A$, $B$, $C$, $D$ are the following constants $$
\begin{array}{ll} A=\displaystyle \frac{m_2(m_3-m_1)}{m_1S_3},\quad
B=-\displaystyle \frac{\sqrt{3}kS_1m_2m_3}{S_2S_3},\quad
C=\displaystyle \frac{\sqrt{3}m_2(m_1+m_3)}{m_1S_3},\\
D=-\displaystyle \frac{km_2(S_2+m_1m_2-m_3^2)}{S_2S_3},
\end{array} $$
where
$$ S_1=m_1+m_2+m_3, \quad
S_2=m_1m_2+m_2m_3+m_3m_1, \quad S_3=m_2+2m_3.$$
Substituting (3.1), (3.2) into the integral of energy $H=h=const$
we obtain the following relation between $q$ and $p$ $$
ap^2+\displaystyle\frac{bp}{q}+\displaystyle\frac{c}{q}+\displaystyle\frac{d}{q^2}=h,
\leqno (3.3) $$ where $$
a=\displaystyle\frac{2S_1S_2}{m_1^2S_3^2},\quad
b=-\displaystyle\frac{2\sqrt{3}km_2S_1}{m_1S_3^2},\quad c=-S_2,
\quad d=\displaystyle\frac{2k^2S_1(m_2^2+m_2m_3+m_3^2)}{S_3^2S_2}.
$$ Moreover, from (2.5) we have $$
\displaystyle\frac{dq}{dt}=\left(M_1+\displaystyle\frac{A}{m_3}\right)p+\displaystyle\frac{B}{m_3q}
\leqno (3.4)
$$ The equations (3.1), (3.2), (3.3), (3.4) define all Lagrangian
particular solutions and contain two free parameters: $k$ and $h$.
Consider the case of zero energy $h=0$ and $k\neq0$. Then there
exists a parabolic particular solution in the sense that the limit
velocity goes to zero when the bodies approach infinity and each
body describes a parabola.
Putting $w=pq$, one can use (3.3) to find $q$ and $p$ as
functions of $w$: $$ q=P(w),\quad p=\displaystyle\frac{w}{P(w)},
\leqno (3.5) $$
where $P(w)=-(aw^2+bw+d)/c$.
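For completeness, (3.5) follows directly from (3.3): setting $h=0$, multiplying (3.3) by $q^{2}$ and substituting $w=pq$ gives
$$
aw^{2}+bw+cq+d=0, \qquad \mbox{hence} \qquad q=-\,\frac{aw^{2}+bw+d}{c}=P(w), \quad p=\frac{w}{P(w)}.
$$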
Let $M={\mathbb C^6}$ be the complexified phase space of the system
(2.5). Then (3.5), (3.1), (3.2) define a parametrized parabolic
integral curve $\Gamma \subset M$ with the parameter $w\in {\mathbb C
\mathbb P^1}$.
\begin{center}
{\bf 4. The normal variational equations}
\end{center}
Let $z=(q_1,q_2,q_3,p_1,p_2,p_3)$, $z\in M$. One can obtain the
variational equations of the system (2.5) along the integral
curve $\Gamma$ $$
\displaystyle\frac{d\zeta}{dt}=JH_{zz}(\Gamma)\zeta, \quad \zeta
\in T_{\Gamma}M, \leqno (4.1) $$ where $H_{zz}$ is the Hessian
matrix of Hamiltonian $H$ at $\Gamma$ and $J$ is the $6\times 6$
matrix
$$
J = \left(\begin{array}{cc} 0&E\\ -E&0\end{array}\right),
$$
where $E$ is the $3 \times 3$ identity matrix.
These equations admit the linear first integral
$F=(\zeta,H_z(\Gamma))$, where $H_z=\mathrm{grad}(H)$, and can be reduced to
the normal $5$-dimensional bundle $G=T_{\Gamma}M/T\Gamma$ of
$\Gamma$. After the restriction of (4.1) to the surface $F=0$ we
obtain {\it normal variational equations} (NVE) [27] which are the
system of $4$ equations $$ \displaystyle\frac{d\eta}{dt}=\tilde
A(\Gamma)\eta, \quad \eta\in{\mathbb C^4}, \leqno (4.2) $$ where
$\tilde A$ is a $4 \times 4$ matrix depending on $\Gamma$.
We can obtain the NVE in the following natural way, by applying
Whittaker's procedure [24] of reducing the order of the
Hamiltonian system (2.5).
Fixing the level of energy $h=0$ one can find $p_1$ as a function
of the other variables from the equation $H(q,p)=0$ which takes
the following form $$ a_1p_1^2+b_1p_1+c_1=0, $$ where $a_1$,
$b_1$, $c_1$ are known functions depending on $p_2$, $p_3$,
$q_1$, $q_2$, $q_3$.
Solving this equation we get two solutions for $p_1$ $$
p_1=\displaystyle\frac{-b_1+\sqrt{\Delta}}{2a_1}=K_+ \quad
\mathrm{and} \quad
p_1=\displaystyle\frac{-b_1-\sqrt{\Delta}}{2a_1}=K_- ,$$ where
$\Delta=b_1^2-4a_1c_1$.
By substituting the Lagrangian solution given by (3.1), (3.2),
(3.5) in these relations we choose the root $p_1=K_-$ as
corresponding to this solution.
The functions $q_r(t)$, $p_r(t)$, $r=2,3$ satisfy the canonical
equations $$
\displaystyle\frac{dq_r}{dq_1}=\displaystyle\frac{\partial
K}{\partial p_r},\quad
\displaystyle\frac{dp_r}{dq_1}=-\displaystyle\frac{
\partial K}{\partial q_r}, \quad (r=2,3), \leqno (4.3)
$$ where $K=-K_-$ and $q_1$ is taken as the new time.
The system (4.3) is a nonautonomous Hamiltonian system with $2$
degrees of freedom which has the same integral curve $\Gamma$.
Notice that $K$ is no longer a first integral.
It is useful to pass now to the new time $(q_1=q)\rightarrow w$.
From the formulas (3.3), (3.5) we have $$
q=\displaystyle\frac{aw^2+bw+d}{c},\quad
dq=-\displaystyle\frac{2aw+b}{c}dw. \leqno (4.4) $$ The resulting
NVE (4.2) are obtained as the variational equations of the system
(4.3) near the integral curve $\Gamma$ and after the substitution
(4.4) take the form $$ \displaystyle\frac{d\eta}{dw}=\tilde
A(\Gamma)\eta, \quad \eta\in{\mathbb C^4}, \leqno (4.5) $$ where
$\tilde A$ is a $4\times 4$ matrix whose elements are rational
functions of $w$.
We can represent $\tilde A$ in the following block form
$$
\tilde A
=
\left(\begin{array}{cc}
M_{3}^{T}&M_{2}\\
-M_{1}&-M_{3}
\end{array}\right),
$$
where $M_1$, $M_2$, $M_3$ are $2\times 2$
matrices and $M_3^T$ denotes the transpose of $M_3$.
The matrix $M_1$ is symmetric and has the following form
$$
M_1
=
\displaystyle\frac{1}{S_1^3L^2Z^2}
\left(
\begin{array}{ll}
n_{11}&n_{12} \\
n_{12}&n_{22}
\end{array}
\right),
$$
where $L(w)$ is the linear polynomial
$$
L=l_1w+l_2,
$$
and $l_1=2S_2$,\quad $l_2=-\sqrt{3}m_1m_2k$.
$Z(w)$ is the following quadratic polynomial
$$ Z=z_1w^2+z_2w+z_3,
$$
where $z_1=S_2^2,\quad z_2=-\sqrt{3}m_1m_2kS_2,\quad
z_3=k^2m_1^2(m_2^2+m_2m_3+m_3^2).$
The coefficients $n_{ij}$ have the form $$
n_{11}=A_1w^2+A_2w+A_3,\quad n_{12}=A_4w^2+A_5w+A_6,\quad
n_{22}=A_7w^2+A_8w+A_9, $$ where $A_i$ are constants depending on
the masses $m_1$, $m_2$, $m_3$ and $k$.
The matrix $M_2$ has the following expression
$$
M_2
=
\displaystyle\frac{4S_1Z}{S_2S_3^3m_1^4m_2m_3} \left(
\begin{array}{ll}
1&0 \\
0&1
\end{array}
\right).
$$
For the matrix $M_3$ we have
$$
M_3
=
\displaystyle\frac{1}{m_1S_1LZ}
\left(
\begin{array}{ll}
m_{11}&m_{12} \\
m_{21}&m_{22}
\end{array}
\right),
$$
where
$$
\begin{array}{ll}
m_{11}=B_1w^2+B_2w+B_3,\quad m_{12}=B_4w^2+B_5w+B_6,\quad m_{21}=B_7w^2+B_8w+B_9,\\
m_{22}=B_{10}w^2+B_{11}w+B_{12},
\end{array}
$$ and $B_j$ are constants depending on $m_1$, $m_2$, $m_3$ and
$k$.
The system (4.5) has four singular points $w_1$, $w_2$, $w_3$,
$w_4$ on the Riemann sphere: $$ w_1=\infty, $$ the point at infinity, $$
w_2=\displaystyle\frac{\sqrt{3}m_1m_2k}{2S_2}, $$ the root of
$L=0$, and $$ w_3=\displaystyle \frac {( \sqrt{3}m_2 + i S_3)km_1}{2
S_2} ,\quad w_4=\displaystyle \frac{( \sqrt{3}m_2 -
i S_3)k m_1}{2S_2}, \leqno (4.6) $$ the
roots of the quadratic equation $Z=0$, where
$i^2=-1$.
Notice that the expressions for $w_{2,3,4}$ depend rationally
on the masses.
The singularities $w_i$, $1 \leq i \leq 4$, have a clear
mechanical meaning: $w_1$ corresponds to the motion of the bodies at
infinity, and $w_2$ defines the moment of maximal approach.
It is easy to see from (4.6) that if the angular momentum constant
$k=0$, then $w_2=w_3=w_4=0$ and we have a triple collision of the
bodies at the moment of time $w=0$. If $k\neq 0$, then by Sundman's
lemma there are no triple collisions in the real phase space
and $w_{3,4}$ become complex.
Since the expression for $p$ given in (3.5) becomes infinite as
$w\rightarrow w_{3,4}$, we can formally consider $w_3$ and $w_4$
as corresponding to ``complex'' collisions which tend to $w=0$
as $k\rightarrow 0$.
It was noted by Schaefke [19] that the equations (4.5) can be
reduced to Fuchsian form.
To do this, consider the linear change of variables
$$\eta=Cx, \leqno (4.7) $$
where
$\eta=(\eta_1,\eta_2,\eta_3,\eta_4)^T$, $x=(x_1,x_2,x_3,x_4)^T$
and $C=\mathrm{diag}(LZ,LZ,1,1)$.
In new variables the system (4.5) takes the following form $$
\displaystyle\frac{dx}{dw}=\left(\displaystyle\frac{A(k)}{w-w_2}+\displaystyle\frac{B(k)}{w-w_3}+
\displaystyle\frac{C(k)}{w-w_4}\right)x, \quad x\in{\mathbb C^4},
\leqno (4.8) $$ where $A(k),B(k),C(k)$ are known constant $4\times
4$ matrices depending on $m_1,m_2,m_3$ and $k$.
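The passage from (4.5) to (4.8) is a standard gauge transformation: if
$\eta=Cx$, then $dx/dw=(C^{-1}\tilde A C-C^{-1}C')x$. The following Python/SymPy
sketch (with a generic coefficient matrix in place of the explicit $M_1$, $M_2$,
$M_3$; it is an illustration, not the computation of [19]) shows how the
transformed coefficient matrix is obtained:
\begin{verbatim}
import sympy as sp

w = sp.symbols('w')
l1, l2, z1, z2, z3 = sp.symbols('l1 l2 z1 z2 z3')
L = l1*w + l2                      # the linear polynomial L(w)
Z = z1*w**2 + z2*w + z3            # the quadratic polynomial Z(w)

# generic coefficient matrix \tilde A(w) of the system (4.5)
A = sp.Matrix(4, 4, lambda i, j: sp.Function('a%d%d' % (i, j))(w))

# gauge transformation eta = C x with C = diag(LZ, LZ, 1, 1), cf. (4.7):
# the new system reads dx/dw = (C^{-1} A C - C^{-1} dC/dw) x
C = sp.diag(L*Z, L*Z, 1, 1)
A_new = (C.inv()*A*C - C.inv()*C.diff(w)).applyfunc(sp.simplify)
\end{verbatim}
For the concrete matrices $M_1$, $M_2$, $M_3$ above, the entries of the
transformed matrix have only simple poles at the roots of $L$ and $Z$, which
yields the Fuchsian form (4.8).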
Under the assumption $k\neq 0$ we can eliminate the parameter $k$
from the system (4.8) by the change of time $w=kt$. As a
result, one obtains $$ \displaystyle\frac{dx}{dt}=\left(
\displaystyle\frac{A}{t-t_0}+\displaystyle\frac{B}{t-t_1}+\displaystyle\frac{C}{t-t_2}\right)x,
\leqno (4.9) $$
where $$ t_0=\displaystyle\frac{\sqrt{3}m_1m_2}{2S_2},\quad
t_1=\displaystyle\frac{m_1(\sqrt{3}m_2+iS_3)}{2S_2}, \quad
t_2=\displaystyle\frac{m_1(\sqrt{3}m_2-iS_3)}{2S_2}. $$ and $$
A=\displaystyle\frac{\tilde M(t_0)}{(t_0-t_1)(t_0-t_2)},\quad
B=\displaystyle\frac{\tilde M(t_1)}{(t_1-t_2)(t_1-t_0)},\quad
C=\displaystyle\frac{\tilde M(t_2)}{(t_2-t_1)(t_2-t_0)}. $$
Here, $\tilde M(w)$ is the following matrix $$ \tilde M(w)= \left(
{\begin{array}{cc} L\,Z\,{M_{3}}^{T} - {\displaystyle \frac {\partial (L\,Z)}{\partial w }}\,E & {M_{2}} \\
- L^{2}\,Z^{2}\,{M_{1}} & - L\,Z\,{M_{3}}
\end{array}}
\right), $$
where one should put $k=1$.
The system (4.9) is defined on the connected Riemann surface
$X={\mathbb C \mathbb P^1 \setminus \{t_0,t_1,t_2,\infty \}}$.
It turns out that the matrix $A$ is real and the matrices
$B=R+iJ$, $C=R-iJ$ are complex conjugate, where $R$ and $J$ are real
matrices. It will simplify matters further if we choose the units
of mass as follows $$m_1=\alpha,\quad m_2=\beta, \quad
m_3=1,\quad 0< \alpha\leq\beta\leq 1 .$$ In Appendix A we write
the expressions for $A$, $R$, $J$ obtained with the help of MAPLE.
\begin{center}
{\bf 5. The monodromy group of the system (4.9)}
\end{center}
Let $\Sigma(t)$ be a solution of the matrix equation (4.9) $$
\displaystyle\frac{d}{dt}\Sigma=\left(
\displaystyle\frac{A}{t-t_0}+\displaystyle\frac{B}{t-t_1}+\displaystyle\frac{C}{t-t_2}\right)\Sigma,
\leqno (5.1) $$ with the initial condition $\Sigma(\tau)=I$, $\tau
\in X$ where $I$ is the unit $4\times 4$ matrix.
It can be continued along a closed path $\gamma$ with end points
at $\tau$. We obtain the function $\tilde \Sigma(t)$ which also
satisfies (5.1). From linearity of (5.1) it follows that there
exists a complex $4\times 4$ matrix $T_{\gamma}$ such that $\tilde
\Sigma(t)=\Sigma(t) T_{\gamma}$. The set of matrices
$G=\{T_{\gamma}\}$ corresponding to all closed curves in $X$ is a
group. This group is called the {\it monodromy group} of the
linear system (4.9). Let $T_i$ be the elements of $G$
corresponding to circuits around the singular points $t=t_i$,
$i=0,1,2$. Then the monodromy group $G$ is generated by $T_0$, $T_1$,
$T_2$. Denote by $T_{\infty}\in G$ the element corresponding to a
circuit around the point $t=\infty$.
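Although the monodromy matrices below are determined analytically, they can
also be approximated numerically by integrating the matrix equation (5.1) along
small loops. The following Python sketch does this with a fixed-step
Runge--Kutta scheme; the matrices $A$, $B$, $C$ and the points $t_0$, $t_1$,
$t_2$ used here are placeholders, not the expressions of Appendix A:
\begin{verbatim}
import numpy as np

A = np.diag([-1.0, -1.0, 0.0, 0.0]).astype(complex)   # placeholder for A
B = np.diag([-2.0, -1.0, 0.0, 1.0]).astype(complex)   # placeholder for B
C = B.copy()                                           # placeholder for C
t0, t1, t2 = 0.5 + 0j, 0.5 + 0.3j, 0.5 - 0.3j          # placeholder poles

def coeff(t):
    return A/(t - t0) + B/(t - t1) + C/(t - t2)

def monodromy(centre, radius, steps=4000):
    # integrate d(Sigma)/ds = coeff(t(s)) Sigma t'(s) along the loop
    # t(s) = centre + radius*exp(2*pi*i*s), 0 <= s <= 1, Sigma(0) = I
    def f(s, X):
        t = centre + radius*np.exp(2j*np.pi*s)
        dt = 2j*np.pi*radius*np.exp(2j*np.pi*s)
        return coeff(t) @ X * dt
    Sigma, h = np.eye(4, dtype=complex), 1.0/steps
    for n in range(steps):
        s = n*h
        k1 = f(s, Sigma)
        k2 = f(s + h/2, Sigma + h/2*k1)
        k3 = f(s + h/2, Sigma + h/2*k2)
        k4 = f(s + h, Sigma + h*k3)
        Sigma = Sigma + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return Sigma

T0 = monodromy(t0, 0.05)   # cf. Lemma 5.1 a): T_0 should be the identity
print(np.linalg.norm(T0 - np.eye(4)))
\end{verbatim}
The spectra of the matrices computed in this way can then be compared with the
analytic expression (5.3).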
\noindent {\bf Lemma 5.1} {\it The following assertions about the
monodromy group $G$ hold
\\
\noindent a) $T_0=I$ is the unit matrix, and $$
T_1T_2=T^{-1}_{\infty}. \leqno (5.2) $$
\noindent b) There exist two non-singular matrices $U$, $V$ such
that $$ U^{-1}T_1U=V^{-1}T_2V=\left(
\begin{array}{cccc}
1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1
\\ 0 & 0 & 0 & 1
\end{array}
\right).$$
\noindent c) The matrix $T_{\infty}$ has the following eigenvalues
$$\mathrm{Spectr}(T_{\infty})=\left\{ e^{2\pi i\lambda_1},\quad
e^{2\pi i\lambda_2},\quad e^{-2\pi i\lambda_1},\quad e^{-2\pi
i\lambda_2}\right\}, \leqno (5.3)$$ where $$
\lambda_1=\displaystyle\frac{3}{2}+\displaystyle\frac{1}{2}\sqrt{13+\sqrt{\theta}},\quad
\lambda_2=\displaystyle\frac{3}{2}+\displaystyle\frac{1}{2}\sqrt{13-\sqrt{\theta}},
\leqno (5.4) $$ and $$ \theta=144\left(1-\displaystyle\frac{3S_2}{S_1^2}\right),\quad S_1=\alpha+\beta+1,\quad
S_2=\alpha\beta+\alpha+\beta.$$ Moreover, $$ \mathrm{Spectr}
(T_{\infty} ) \neq \{1,1,1,1\}. $$}
{\it Proof.} a) The matrix $A$ has the eigenvalues
$\{-1,-1,0,0\}$. Following the general theory of linear
differential equations, let us write the general solution of the
system (4.9) near the singular point $t=t_0$ as follows $$
x(t)=c_1X_1(t)+c_2X_2(t)+c_3X_3(t)+c_4X_4(t),$$ where
$c_{1,\ldots,4}\in {\mathbb C}$ are arbitrary constants and $$
\begin{array}{llcc}
X_1(t)=\displaystyle
\frac{a_{-1}}{t-t_0}+a_0+a_1(t-t_0)+\cdots,\quad
X_2(t)=\displaystyle \frac{b_{-1}}{t-t_0}+b_0+b_1(t-t_0)+\cdots,\\
X_3(t)=c_0+c_1(t-t_0)+\cdots,\quad \quad \quad \quad \quad
X_4(t)=d_0+d_1(t-t_0)+\cdots,
\end{array} \leqno (5.5) $$ where $a_i$, $b_i$, $c_i$, $d_i \in
{\mathbb C^4} $ are some constant vectors.
By substituting (5.5) into (4.9) one can find $a_i$, $b_i$, $c_i$,
$d_i$ and show that the vectors $X_1(t)$, $X_2(t)$, $X_3(t)$,
$X_4(t)$ are linearly independent and meromorphic in a small
neighborhood of the point $t=t_0$. This implies that the element
$T_0$ of the monodromy group $G$ corresponding to a circuit around
$t_0$ is the unit matrix. Obviously we have
$T_0T_1T_2=T^{-1}_{\infty}$, from which the relation (5.2)
follows.
\noindent b) The matrices $B$, $C$ have the same eigenvalues
$\{-2,-1,0,1\}$. It can be shown by a straightforward calculation
that near the singular point $t=t_1$ the general solution of the
system (4.9) can be represented as $$
x(t)=c_1Y_1(t)+c_2Y_2(t)+c_3Y_3(t)+c_4Y_4(t),$$ where
$c_{1,\ldots,4}\in {\mathbb C}$ are arbitrary constants and $$
\begin{array}{lllcc} Y_1(t)=e_1(t-t_1)+e_2(t-t_1)^2+\cdots, \quad
Y_2=f_0+f_1(t-t_1)+\cdots+C_1 \mathrm{ln}(t-t_1)Y_1(t), \\
Y_3(t)=\displaystyle \frac{g_{-1}}{t-t_1}+g_0+g_1(t-t_1)+\cdots,\\
Y_4(t)=\displaystyle \frac{h_{-2}}{(t-t_1)^2}+\displaystyle
\frac{h_{-1}}{t-t_1}+\cdots+C_2\mathrm{ln}(t-t_1)(f_0+f_1(t-t_1)+\cdots)+C_3\mathrm{ln}(t-t_1)Y_1(t),
\end{array}
$$ where $e_i$, $f_i$, $g_i$, $h_i \in {\mathbb C^4}$ are some
constant vectors and $C_1$, $C_2$, $C_3$ are parameters depending
on the masses $\alpha$, $\beta$.
For $C_1$, $C_2$ one can find $$
C_1=\displaystyle \frac{9}{4}\frac{\beta \alpha^3( \beta+2)^2(\alpha \beta+\alpha+
\beta)}{(\alpha+\beta+1)^3 },\quad C_2=iC_1.$$
The matrix $\Sigma(t)=(Y_1,Y_2,Y_3,Y_4)$ represents the solution
of the system (5.1) in a small neighborhood of the point $t=t_1$.
After going around $t_1$ we get $\tilde \Sigma(t) =\Sigma(t) M$
where $$ M=\left(
\begin{array}{cccc}
1 & 2\pi i C_1 & 0 & 2\pi i C_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 &
2\pi iC_2
\\ 0 & 0 & 0 & 1
\end{array}
\right). $$
Since $C_1 \neq 0$, $C_2\neq 0$ for $\alpha>0$, $\beta>0$, there
exists a non-singular matrix $T$ such that $$ T^{-1}MT=\left(
\begin{array}{cccc}
1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1
\\ 0 & 0 & 0 & 1
\end{array}
\right), \leqno (5.6) $$ which is the Jordan form of $M$.
The matrix $T_1$ is similar to $M$ and therefore has the same
Jordan form (5.6). Repeating analogous arguments for the
matrix $T_2$, we deduce that the same assertion holds for the
monodromy matrix $T_2$. Notice that the existence of logarithmic
branching near some Lagrangian solutions in the three-body problem was
first observed by H. Block (1909) and J.F. Chazy (1918) (see for
instance [1]).
\noindent c) Consider the matrix $A_{\infty}=-(A+B+C)$. Then there
exists (see for example [4]) a non-singular matrix $W$ such that
$$ T_{\infty}=W^{-1}e^{2\pi iA_{\infty}}W. \leqno (5.7) $$
Appendix A contains the expressions for the elements of the matrix
$A_{\infty}$. One can calculate its eigenvalues $$
\mathrm{Spectr}(A_{\infty})=\{
\lambda_1,\lambda_2,3-\lambda_1,3-\lambda_2\}, $$ where
$\lambda_{1,2}$ are given in (5.4).
One can easily check that $$ 0\leq \sqrt{\theta} < 12, \leqno (5.8)
$$ for all $\alpha >0$, $\beta > 0$.
With the help of (5.7) we obtain for the eigenvalues of the matrix
$T_{\infty}$ the expression (5.3).
Let us suppose now that $\mathrm{
Spectr}(T_{\infty})=\{1,1,1,1\}$. Then according to (5.4) we
obtain
$$ \sqrt{13+\sqrt{\theta}}=n_1, \quad \sqrt{13-\sqrt{\theta}}=n_2, \quad n_1,n_2
\in{\mathbb Z}.\leqno (5.9) $$ Hence, in view of (5.8), the number $r
=\sqrt{\theta}$ is an integer with $0\leq r \leq 11$. A simple
calculation shows that for these $r$ the relations (5.9) are not
fulfilled. This implies that $$ \mathrm{Spectr} (T_{\infty} ) \neq
\{1,1,1,1\}. $$ The proof of Lemma 5.1 is completed. \quad \quad
$\Box$
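This last step is a finite check which can be automated; a small Python/SymPy
sketch of it (containing nothing beyond the statement just proved) is:
\begin{verbatim}
# For an integer r = sqrt(theta) with 0 <= r <= 11, verify that 13 + r and
# 13 - r are never both perfect squares, so (5.9) has no solutions.
import sympy as sp

bad = [r for r in range(12)
       if sp.sqrt(13 + r).is_integer and sp.sqrt(13 - r).is_integer]
print(bad)   # [] : no such r, hence Spectr(T_infinity) != {1,1,1,1}
\end{verbatim}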
\begin{center}
{\bf 6. Nonexistence of additional meromorphic first integrals}
\end{center}
We call the planar three-body problem (2.1) {\it meromorphically}
integrable near the Lagrangian parabolic solution $\Gamma$,
defined in Section 3, if the corresponding Hamiltonian system
(2.3) possesses a complete set of complex meromorphic first
integrals (see Definition 2.1) in involution in a connected
neighborhood of $\Gamma$. Recall that equations (2.3) describe the
motion of bodies in the system of the center of masses.
From Proposition 2.2 it follows that in this case the system (2.5)
admits two additional first integrals which are meromorphic and
functionally independent in the same neighborhood.
\noindent {\bf Theorem 6.1} {\it For $k\neq 0$ the Hamiltonian
system (2.5) does not admit two additional functionally independent
first integrals, meromorphic in a connected neighborhood of the
Lagrangian parabolic solution $\Gamma$.}
{\it Proof.} Suppose that the Hamiltonian system (2.5) admits two
first integrals $H_1$, $H_2$, meromorphic
in a connected neighborhood of the Lagrangian parabolic solution
$\Gamma$ and functionally independent together with $H$. According
to Ziglin [27] in this case the NVE (4.5) have two functionally
independent meromorphic integrals $F_1$, $F_2$ which are
single-valued in a complex neighborhood of the Riemann surface
$\Gamma={\mathbb C \mathbb P^1}\setminus\{t_0,t_1,t_2,\infty\}$. The linear
system (4.9) was obtained from (4.5) by the linear change of
variables (4.7) and the change of the time $w=kt$, $k\neq 0$.
Therefore, it possesses two functionally independent meromorphic
integrals $I_1$, $I_2$. From this fact the following lemma is
deduced.
\noindent{\bf Lemma 6.2 (Ziglin [27])} {\it The monodromy group
$G$ of the system (4.9) has two rational, functionally independent
invariants $J_1$, $J_2$.}
In appropriate coordinates, according to b) of Lemma 5.1, the monodromy transformation $T_1$
can be written as follows $$T_1= \left(
\begin{array}{cccc}
1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1
\\ 0 & 0 & 0 & 1
\end{array}
\right) =I+D,$$ where $I$ is the unit matrix and $$D=\left(
\begin{array}{cccc}
0& 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1
\\ 0 & 0 & 0 & 0
\end{array}
\right ). \leqno (6.1) $$
For the monodromy matrix $T_2$ one writes $$ T_2=I+R,$$ where
$$R=\tilde V D \tilde V^{-1}=\left(
\begin{array}{cccc}
a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3
& c_4 \\ d_1 & d_2 & d_3 & d_4
\end{array}
\right),\leqno (6.2)$$ with some unknowns $a_i$, $b_i$, $c_i$,
$d_i\in {\mathbb C}$ and a nonsingular matrix $\tilde V$.
Let us introduce the following linear differential operators $$
\delta=x_2\displaystyle\frac{\partial}{\partial
x_1}+x_4\displaystyle\frac{\partial}{\partial x_3},$$ and $$
\Delta=
\left(\sum_{i=1}^4a_ix_i\right)\displaystyle\frac{\partial}{\partial
x_1}+\left(\sum_{i=1}^4b_ix_i\right)\displaystyle\frac{\partial}{\partial
x_2}+\left(\sum_{i=1}^4c_ix_i\right)\displaystyle\frac{\partial}{\partial
x_3}+\left(\sum_{i=1}^4d_ix_i\right)\displaystyle\frac{\partial}{\partial
x_4}. $$
\noindent{\bf Lemma 6.3} {\it Let $J$ be a rational invariant of
the monodromy group $G$, then the following relations hold $$
\delta J=0, \quad \Delta J =0.$$}
{\it Proof.} For an arbitrary $n\in \mathbb N$ we have $T_1^n=I+nD$,
hence $J\left(T_1^n x\right)=J\left( x+nD x\right).$ Expanding the
last expression in Taylor series we obtain $$ J\left(T_1^n
x\right)=J(x)+n\delta J(x)+\sum\limits_{i=2}^{\infty} n^i r_i(x),
\leqno (6.3)$$ where $r_i(x)$ are some rational functions.
In view of $J\left(T_1^n x\right)=J(x)$ and the fact that $J(x)$
is a rational function of $x$, the second term of (6.3) gives
$\delta J=0$. The relation $\Delta J=0$ is deduced by analogy from
the identity $J\left(T_2x\right)=J(x)$. \quad \quad $\Box$
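A direct symbolic check of this expansion (an illustrative Python/SymPy sketch,
with a sample rational function $J$ chosen only for illustration) reads:
\begin{verbatim}
# D is the nilpotent matrix in (6.1); since D^2 = 0 we have T_1^n = I + n*D,
# and for a rational J the coefficient of n in J(T_1^n x) is delta J.
import sympy as sp

x1, x2, x3, x4, n = sp.symbols('x1 x2 x3 x4 n')
D = sp.Matrix([[0,1,0,0],[0,0,0,0],[0,0,0,1],[0,0,0,0]])
assert D*D == sp.zeros(4, 4)

J = (x1 + x3)/x4                                  # a sample rational function
y = (sp.eye(4) + n*D)*sp.Matrix([x1, x2, x3, x4]) # = T_1^n x
Jn = J.subs({x1: y[0], x2: y[1], x3: y[2], x4: y[3]}, simultaneous=True)
deltaJ = x2*sp.diff(J, x1) + x4*sp.diff(J, x3)
print(sp.simplify(Jn - J - n*deltaJ))             # 0
\end{verbatim}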
{\it Case (1).} Assume that the invariants $J_1$, $J_2$ depend on
$x_2$, $x_4$ only. By Lemma 6.3 we have $$ \Delta J_1=0, \quad
\Delta J_2=0. \leqno (6.4)$$
It can be verified that the equations (6.4) imply the conditions
$b_i=0$, $d_i=0$, $1\leq i \leq 4$. Accordingly, the matrix $R$
may be written $$ R=\left(
\begin{array}{cccc}
a_1 & a_2 & a_3 & a_4 \\ 0 & 0 & 0 & 0 \\ c_1 & c_2 & c_3 & c_4
\\ 0 & 0 & 0 & 0
\end{array}
\right). \leqno (6.5)$$
One can find the characteristic polynomial
$P(\lambda)=\det(R-\lambda I)$ of $R$:
$$P(\lambda)=\lambda^4-(a_1+c_3)\lambda^3+(a_1c_3-c_1a_3)\lambda^2.
\leqno (6.6)$$
In view of (6.1), (6.2) all eigenvalues of the matrix $R$ are
equal to $0$, thus, with help of (6.6) we get $$ a_1+c_3=0, \quad
a_1c_3=c_1a_3. \leqno (6.7) $$
The matrix $T_1T_2$ takes the following form $$ T_1T_2=\left(
\begin{array}{cccc}
a_1+1 & a_2+1 & a_3 & a_4 \\ 0 & 1 & 0 & 0 \\ c_1 & c_2 & c_3+1 &
c_4+1
\\ 0 & 0 & 0 & 1
\end{array}
\right), $$ and $$ \mathrm{Spectr}(T_1T_2)=\{1,1,s+f,s-f\},$$
where $$ s=1+\displaystyle \frac{a_1+c_3}{2}, \quad
f=\displaystyle \frac{ \sqrt { a_1^2+ c_3^2+4 c_1a_3-2a_1
c_3}}{2}. \leqno (6.8) $$
A straightforward calculation using (6.7) and (6.8) shows
that the eigenvalues of the matrix $T_1T_2$ are equal to
$\{1,1,1,1\} $. According to (5.2) these must be the eigenvalues
of the matrix $T_{\infty}$, which contradicts part c) of
Lemma 5.1.
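This computation is short enough to be verified symbolically; the following
Python/SymPy sketch (an illustration only) imposes the relations (6.7) in the
form $c_3=-a_1$, $a_3=-a_1^2/c_1$ (assuming $c_1\neq 0$) and confirms that the
characteristic polynomial of $T_1T_2$ collapses to $(\lambda-1)^4$:
\begin{verbatim}
import sympy as sp

a1, a2, a3, a4, c1, c2, c3, c4, lam = sp.symbols(
    'a1 a2 a3 a4 c1 c2 c3 c4 lambda')
T1 = sp.Matrix([[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,1]])
R  = sp.Matrix([[a1,a2,a3,a4],[0,0,0,0],[c1,c2,c3,c4],[0,0,0,0]])  # cf. (6.5)

print(sp.expand((R - lam*sp.eye(4)).det()))   # the polynomial (6.6)

T2 = sp.eye(4) + R
P  = (T1*T2).charpoly(lam).as_expr()
P  = P.subs({c3: -a1, a3: -a1**2/c1})         # the relations (6.7), c1 != 0
print(sp.factor(sp.cancel(P)))                # (lambda - 1)**4
\end{verbatim}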
{\it Case (2).} Assume that at least one of the invariants $J_1$,
$J_2$ depends on $x_1$ or $x_3$. Let, for example, $$
\displaystyle\frac{\partial J_1}{\partial x_1}\neq 0. \leqno (6.9)
$$
It is useful to consider two additional linear operators $
\delta_1=[\delta,\Delta]$ and $
\delta_2=-\displaystyle\frac{1}{2}[\delta,\delta_1].$
One has $$ \delta_1=f_1\displaystyle\frac{\partial}{\partial
x_1}+f_2\displaystyle\frac{\partial}{\partial x_2}+
f_3\displaystyle\frac{\partial}{\partial
x_3}+f_4\displaystyle\frac{\partial}{\partial x_4}, \quad
\delta_2=(b_1x_2+b_3x_4)\displaystyle\frac{\partial}{\partial
x_1}+ (d_1x_2+d_3x_4)\displaystyle\frac{\partial}{\partial x_3},
$$ where $$
\begin{array}{llcc}
f_1=-b_1x_1+(a_1-b_2)x_2-b_3x_3+(a_3-b_4)x_4, &
f_2=b_1x_2+b_3x_4,\\ f_3=-d_1x_1+(c_1-d_2)x_2-d_3x_3+(c_3-d_4)x_4,
& f_4=d_1x_2+d_3x_4.\end{array} \leqno (6.10)$$
We deduce from $\delta J_i=\Delta J_i=0$ that $$ \delta_1 J_i=0,
\quad \delta_2 J_i=0, \quad i=1,2.$$
Consider the partial differential equation $\delta J=0$. Solving
it one finds that $J=K(Y_1,Y_2,Y_3)$ where $K(y_1,y_2,y_3)$ is an
arbitrary function and $$Y_1=x_2, \quad Y_2=x_4, \quad
Y_3=x_4x_1-x_2x_3. \leqno (6.11)$$ Therefore, in view of (6.9),
(6.11) we have $J_1=J_1(Y_1,Y_2,Y_3)$ and $
\displaystyle\frac{\partial J_1}{\partial Y_3}\neq 0.$
Consequently, as $\delta_2 Y_1=\delta_2 Y_2=0$, one gets $$
\delta_2 J_1=\displaystyle \frac{\partial J_1}{\partial
Y_1}\delta_2 Y_1 +\displaystyle \frac{\partial J_1}{\partial
Y_2}\delta_2 Y_2+\displaystyle \frac{\partial J_1}{\partial
Y_3}\delta_2 Y_3=\displaystyle \frac{\partial J_1}{\partial
Y_3}\delta_2 Y_3. $$
This implies $$ \delta_2 Y_3=0. \leqno (6.12) $$
By substituting into (6.12) the expression for $Y_3$ given by
(6.11), we arrive at $$ b_3=d_1=0, \quad b_1=d_3=\rho, \leqno
(6.13) $$ for some $\rho \in {\mathbb C}$.
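The operators $\delta_1$, $\delta_2$ and the conditions (6.13) can be reproduced
symbolically; the Python/SymPy sketch below (an illustration, not part of the
proof) checks that $\delta Y_i=0$ for the functions (6.11) and displays
$\delta_2 Y_3$, whose vanishing forces (6.13):
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
a = sp.symbols('a1:5'); b = sp.symbols('b1:5')
c = sp.symbols('c1:5'); d = sp.symbols('d1:5')
X = [x1, x2, x3, x4]

def delta(J):                    # the operator delta
    return x2*sp.diff(J, x1) + x4*sp.diff(J, x3)

def Delta(J):                    # the operator Delta built from (6.2)
    rows = [a, b, c, d]
    return sum(sum(rows[k][i]*X[i] for i in range(4))*sp.diff(J, X[k])
               for k in range(4))

Y1, Y2, Y3 = x2, x4, x4*x1 - x2*x3                       # cf. (6.11)
assert all(sp.simplify(delta(Y)) == 0 for Y in (Y1, Y2, Y3))

def bracket(P, Q):               # commutator of two derivations (on functions)
    return lambda J: P(Q(J)) - Q(P(J))

delta1 = bracket(delta, Delta)
delta2 = lambda J: -sp.Rational(1, 2)*bracket(delta, delta1)(J)
print(sp.expand(delta2(Y3)))
# b1*x2*x4 + b3*x4**2 - d1*x2**2 - d3*x2*x4 : vanishes iff (6.13) holds
\end{verbatim}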
We now use the equation $\delta_1 J=0$, which can be written as $$
\delta_1 J=\displaystyle \frac{\partial J}{\partial Y_1}\delta_1
Y_1 +\displaystyle \frac{\partial J}{\partial Y_2}\delta_1
Y_2+\displaystyle \frac{\partial J}{\partial Y_3}\delta_1 Y_3=0.
\leqno (6.14) $$
One can show that $$
\begin{array}{lll}
\delta_1Y_1=\rho Y_1,\\ \delta_1Y_2=\rho Y_2, \\
\delta_1Y_3=v_{1}Y_1^2+v_2Y_2^2+v_3Y_1Y_2, \end{array} $$ where
$v_1=d_2-c_1$, $v_2=a_3-b_4$, $v_3=a_1-b_2-c_3+d_4$.
Hence, (6.14) yields $$ \rho Y_1 \displaystyle \frac{\partial
J}{\partial Y_1} +\rho Y_2\displaystyle \frac{\partial J}{\partial
Y_2}+(v_{1}Y_1^2+v_2Y_2^2+v_3Y_1Y_2)\displaystyle \frac{\partial
J}{\partial Y_3}=0.$$
This equation possesses two rational, functionally independent
solutions $J_1(Y_1,Y_2,Y_3)$, $J_2(Y_1,Y_2,Y_3)$ only if $$
\rho=0, \quad v_1=v_2=v_3=0, $$ which gives $$ a_1=\epsilon_1+b_2,
\quad c_3=\epsilon_1+d_4,\quad c_1=d_2=\zeta_1, \quad
a_3=b_4=\zeta_2, \quad \epsilon_1, \zeta_1, \zeta_2 \in
{\mathbb C}. \leqno (6.15) $$
After substituting (6.13) and (6.15) into (6.2), the matrix $R$
becomes $$ R=\left(
\begin{array}{cccc}
b_2+\epsilon_1 & a_2 & \zeta_2 & a_4 \\ 0 & b_2 & 0 & \zeta_2 \\
\zeta_1 & c_2 & d_4+\epsilon_1 & c_4
\\ 0 & \zeta_1 & 0 & d_4
\end{array}
\right).$$
Now, consider the characteristic polynomial $P(\lambda)$ of $R$
$$P(\lambda)=\lambda^4+P_1\lambda^3+P_2\lambda^2+P_3\lambda+P_4,$$
where $$
\begin{array}{llll}
P_1=-2(b_2+d_4+\epsilon_1),\\ P_2=3b_2\epsilon_1-2\zeta_1\zeta_2+
3\epsilon_1d_4+4b_2d_4+b_2^2+\epsilon^2_1 +d_4^2,\\
P_3=-(d_4+b_2+\epsilon_1)(2b_2d_4+b_2\epsilon_1+d_4\epsilon_1-2\zeta_1
\zeta_2), \\ P_4=(b_2d_4
-\zeta_1\zeta_2)(b_2d_4+b_2\epsilon_1+d_4\epsilon_1-\zeta_1\zeta_2+\epsilon_1^2).
\end{array}$$
As above, in view of (6.1), (6.2) all eigenvalues of $R$ must be
equal to $0$ and therefore $P_i=0$, $1 \leq i \leq 4$. This
system gives $$ \epsilon_1=0, \quad b_2=\eta_1, \quad d_4=-\eta_1,
\quad \eta_1^2+\zeta_1 \zeta_2=0,$$ and the monodromy matrix $T_2$
becomes
$$ T_2=\left(
\begin{array}{cccc}
\eta_1+1 & a_2 & \zeta_2 & a_4 \\ 0 & \eta_1+1 & 0 & \zeta_2 \\
\zeta_1 & c_2 & 1-\eta_1 & c_4
\\ 0 & \zeta_1 & 0 & 1-\eta_1
\end{array}
\right). $$
The matrix $T_1T_2$ has the eigenvalues $\{1,1,1,1\}$, which
contradicts part c) of Lemma 5.1 and proves our claim. \quad \quad
$\Box$
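As in Case (1), this last spectrum computation can be confirmed symbolically;
a Python/SymPy sketch (illustrative only, substituting
$\zeta_2=-\eta_1^2/\zeta_1$ to enforce $\eta_1^2+\zeta_1\zeta_2=0$, with
$\zeta_1\neq 0$) is:
\begin{verbatim}
import sympy as sp

eta1, zeta1, zeta2, a2, a4, c2, c4, lam = sp.symbols(
    'eta1 zeta1 zeta2 a2 a4 c2 c4 lambda')
T1 = sp.Matrix([[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,1]])
T2 = sp.Matrix([[eta1+1, a2,     zeta2,  a4    ],
                [0,      eta1+1, 0,      zeta2 ],
                [zeta1,  c2,     1-eta1, c4    ],
                [0,      zeta1,  0,      1-eta1]])

P = (T1*T2).charpoly(lam).as_expr().subs(zeta2, -eta1**2/zeta1)
print(sp.factor(sp.cancel(P)))   # (lambda - 1)**4
\end{verbatim}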
Due to our definition of integrability we deduce from Theorem 6.1
the following
\noindent {\bf Theorem 6.3} {\it The planar three-body problem is
meromorphically non-integrable near the Lagrangian parabolic
solution. }
\begin{center}
{\bf 7. Final remarks}
\end{center}
At the end of the 19th century, Poincar\'e [18] indicated some
qualitative phenomena in the behavior of phase trajectories which
prevent the appearance of new integrals of a Hamiltonian system
besides those which are already present but fail to form a set sufficient
for complete integrability.
Let $M^{2n}$ be the phase space, and $H:M^{2n}\rightarrow \mathbb R$,
$H=H_0+\epsilon H_1 +O(\epsilon^2)$ the Hamiltonian function.
Suppose that for $\epsilon =0$ the corresponding Hamiltonian
system has an $m$--dimensional hyperbolic invariant torus $T^m_0$.
According to Graff's theorem [5], for small $\epsilon$ the
perturbed system has an invariant hyperbolic torus
$T^m_{\epsilon}$ depending analytically on $\epsilon$. It can be
shown that $T^m_{\epsilon}$ has asymptotic invariant manifolds
$\Lambda^+$ and $\Lambda^-$ filled with trajectories which tend to
the torus $T^m_{\epsilon}$ as $t\rightarrow +\infty$ and
$t\rightarrow -\infty$ respectively. In integrable Hamiltonian
systems such manifolds (also called {\it separatrices}), as a
rule, coincide. In the nonintegrable case the situation is
different: the asymptotic surfaces can intersect transversally,
forming a complicated tangle which prevents the appearance of new
integrals. For a modern presentation of these results see, for
example, [7].
The method of splitting of asymptotic surfaces was applied to the
three--body problem by many authors. In his book [13] J.K. Moser
described a technique which uses the symbolic dynamics associated
with a transverse homoclinic point. Applying this method, it was
shown in [9] that under certain assumptions the planar circular
restricted three--body problem does not possess an additional real
analytic integral. Similar results for the Sitnikov problem and
the collinear three--body problem can be found in [13], [10]. The
existence and the transverse intersection of stable and unstable
manifolds along some periodic orbits in the planar three--body
problem where two masses are sufficiently small were established in
[11], using the results obtained in [12].
It is necessary to note that Theorem 6.3 implies the nonexistence
of a complete set of complex analytic first integrals for the
general planar three--body problem. To prove the nonexistence of
real analytic integrals one should use some heteroclinic
phenomena, and the following line of reasoning can be proposed: let
$M^{\infty}$ be the infinity manifold; then the Lagrangian
parabolic orbit considered above is biasymptotic to it. This is a weakly
hyperbolic invariant manifold and the reference orbit is a
heteroclinic orbit connecting different periodic orbits sitting in
$M^{\infty}$. The dynamical interpretation of Theorem 6.3 seems to
be the transversality of the invariant stable and unstable
manifolds of $M^{\infty}$ along this orbit. A combination of
passages near several of these orbits (there is a whole family
obtained by rotation) should allow one to prove the existence of a
heteroclinic chain. This, in turn, gives rise to an embedding of a
suitable subshift, with lack of predictability and chaos, and
implies the nonexistence of real analytic integrals.
\begin{center}
{\bf Acknowledgements}
\end{center}
I would like to thank L. Gavrilov and V. Kozlov for useful
discussions and for the advice to study the present problem. I also
thank J.-P. Ramis, J.J. Morales-Ruiz, J.-A. Weil and D.
Boucher for their attention to the paper. I am very grateful to
the anonymous referee for his useful remarks.
\begin{center}
{\bf References}
\end{center}
\noindent [1] V.I. Arnold, V.V. Kozlov, A.I. Neishtadt,{\it
Dynamical systems III, } Springer--Verlag, p. 63, (1987).
\noindent [2] H. Bruns, {\it Ueber die Integrale des Vielk\"orper-Problems}, Acta Math. 11, p. 25-96, (1887-1888).
\noindent [3] J. Chazy, { \it Sur l'allure du mouvement dans le
probl\`eme des trois corps quand le temps croit ind\'efiniment},
Ann. Sci. Ecole Norm., 39 , 29-130 (1922).
\noindent [4] V.V. Golubev, {\it Lectures on analytic theory of
differential equations}, Gostekhizdat, Moskow, (1950), (Russian).
\noindent [5] S.M. Graff, {\it On the conservation of hyperbolic
invariant tori for Hamiltonian systems}, J. Differential Equations
15, 1--69, (1974).
\noindent [6] S.V. Kowalewskaya, {\it Sur le probl\`eme de la
rotation d'un corps solide autour d'un point fixe}, Acta. Math.
12,177-232 (1889).
\noindent [7] V.V. Kozlov, {\it Symmetries, Topology, and
Resonances in Hamiltonian mechanics,} Springer-Verlag (1996).
\noindent [8] J.L. Lagrange, {\it Oeuvres}. Vol. 6, 272-292, Paris
(1873).
\noindent [9] J. Llibre, C. Sim\'o, {\it Oscillatory solutions in
the planar restricted three--body problem,}Math. Ann., 248:
153--184, 1980.
\noindent [10] J. Llibre, C. Sim\'o, {\it Some homoclinic
phenomena in the three--body problem,} J. Differential Equations,
37, no. 3, 444--465, 1980.
\noindent [11] R. Martinez, C. Sim\'o, {\it A note on the
existence of heteroclinic orbits in the planar three body
problem,} In Seminar on Dynamical Systems, Euler International
Mathematical Institute, St. Petersburg, 1991, S. Kuksin, V.
Lazutkin and J. P\"oschel, editors, 129--139, Birkh\"auser, 1993.
\noindent [12] R. Martinez, C. Pinyol, {\it Parabolic orbits in
the elliptic restricted three body problem,} J. Differential
Equations, 111, 299--339, (1994).
\noindent [13] J.K. Moser, {\it Stable and random motions in
dynamical systems,} Princeton Univ. Press, Princeton, N.J., 1973.
\noindent [14] R. Moeckel, {\it Chaotic dynamics near triple
collision}, Arch. Rational. Mech. Anal. 107, no. 1, 37-69 (1989).
\noindent [15] J.J. Morales-Ruiz, J.P. Ramis, {\it Galoisian
Obstructions to integrability of Hamiltonian Systems}, Preprint
(1998).
\noindent [16] I. Newton, {\it Philosophiae naturalis principia
mathematica}, Imprimatur S. Pepys, Reg. Soc. Praeses, julii 5,
1686, Londini anno MDCLXXXVII.
\noindent [17] P. Painlev\'e, {\it M\'emoire sur les int\'egrales
premi\'eres du probl\`eme des n corps}, Acta Math. Bull. Astr. T
15 (1898).
\noindent [18] H. Poincar\'e, {\it Les m\'ethodes novelles de la
m\'ecanique c\'eleste}, vol. 1-3. Gauthier--Villars, Paris 1892,
1893, 1899.
\noindent [19] R. Schaefke, Private communication.
\noindent [20] M. Singer, A. Baider, R. Churchill, D. Rod, {\it On
the infinitesimal Geometry of Integrable Systems}, in Mechanics
Day, Shadwich et. al., eds, Fields Institute Communications, 7,
AMS, 5-56 (1996).
\noindent [21] K. F. Sundman, { \it Memoire sur le probl\`eme des
trois corps}, Acta Math. 36, 105-107 (1913).
\noindent [22] C.L. Siegel, J.K. Moser, {\it Lectures on Celestial
Mechanics,} Springer-Verlag (1971).
\noindent [23] Gh. Topan, {\it Sur une int\'egrale premi\'ere
transcendante dans certaines configurations du probl\'eme des
trois corps}, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N.S.),
no. 1, 83-91 (1989).
\noindent [24] E.T. Whittaker, {\it A Treatise on the Analytical
Dynamics of Particles and Rigid Bodies}. Cambridge University
Press, New York, (1970).
\noindent [25] A. Wintner, {\it The Analytical Foundations of
Celestial Mechanics,} Princeton Univ. Press, Princeton, (1941).
\noindent [26] H. Yoshida, {\it A criterion for the nonexistence
of an additional integral in Hamiltonian systems with a
homogeneous potential}, Phys. D. 29, no. 1-2, 128-142, (1987).
\noindent [27] S.L. Ziglin, {\it Branching of solutions and
non-existence of first integrals in Hamiltonian Mechanics I},
Func. Anal. Appl. 16 (1982).
\noindent (please use this address for correspondence)
\noindent{\small \bf
Section de Mathematiques,\\
Universit\'e de Gen\`eve\\
2-4, rue du Lievre,\\
CH-1211, Case postale 240, Suisse \\
Tel l.: +41 22 309 14 03 \\
Fax: +41 22 309 14 09 \\
E--mail: [email protected]}
\noindent{\small \bf
Laboratoire Emile Picard, UMR 5580,\\
Universit\'e Paul Sabatier\\
118, route de Narbonne,\\
31062 Toulouse Cedex, France \\
Tel l.: 05 61 55 83 37 \\
Fax: 05 61 55 82 00 \\
E--mail: [email protected]}
\noindent {\bf Tsygvintsev Alexei}
\begin{center}
{\bf Appendix A. The matrices $A_{\infty}$, $A$, $B$, $R$, $J$}
\end{center}
\begin{center}
$A_{\infty}=A_{\infty, ij}$, $1\leq i,j \leq 4$. \\
\end{center}
\noindent ${A_{\infty , \,11}}={\displaystyle \frac {1}{4}} \,
{\displaystyle \frac {12\,\alpha + 5\,\beta + 5\,\beta \,\alpha
^{2} + 26\,\alpha \,\beta + 12\,\alpha ^{2}}{\alpha \,{S_{1}}}
} , \quad {A_{\infty , \,12}}={\displaystyle \frac {3}{4}} \,
{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}},$\\
$
{A_{\infty , \,13}}= - 2\,{\displaystyle \frac {{S_{1}}}{{S_{
2}}^{2}\,{S_{3}}^{3}\,\alpha ^{4}\,\beta }}, \quad {A_{\infty ,
\,14}}=0, $\\
$
{A_{\infty , \,21}}={\displaystyle \frac {3}{4}} \, {\displaystyle
\frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}}, \quad {A_{\infty , \,22}}= - {\displaystyle \frac {1}{4}} \,
{\displaystyle \frac { - 12\,\alpha + \beta + \beta \,\alpha ^{
2} - 2\,\alpha \,\beta - 12\,\alpha ^{2}}{\alpha \,{S_{1}}}} ,
$\\
$
{A_{\infty , \,23}}=0, \quad {A_{\infty , \,24}}= -
2\,{\displaystyle \frac {{S_{1}}}{{S_{
2}}^{2}\,{S_{3}}^{3}\,\alpha ^{4}\,\beta }} , $\\
$
{A_{\infty , \,31}}={\displaystyle \frac {1}{8}} \, {\displaystyle
\frac {\alpha ^{2}\,\beta \,{S_{3}}^{3}\,(\alpha
+ 1)\,{S_{2}}^{3}\,(2\,\alpha + 13\,\beta + 13\,\beta \,\alpha
^{2} + 24\,\alpha \,\beta + 2\,\alpha ^{2})}{{S_{1}}^{3}}} , $\\
$
{A_{\infty , \,32}}={\displaystyle \frac {3}{8}} \, {\displaystyle
\frac {\sqrt{3}\,(\beta + 2\,\alpha + 4\,\alpha \,\beta + \beta
\,\alpha ^{2} + 2\,\alpha ^{2})\,(\alpha - 1)\, \beta \,\alpha
^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\
$
{A_{\infty , \,33}}= - {\displaystyle \frac {1}{4}} \,
{\displaystyle \frac {\beta \,(5\,\alpha ^{2} + 14\,\alpha + 5)
}{\alpha \,{S_{1}}}} \quad {A_{\infty , \,34}}= - {\displaystyle
\frac {3}{4}} \, {\displaystyle \frac {\sqrt{3}\,(\alpha +
1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}} , $\\
$
{A_{\infty , \,41}}={\displaystyle \frac {3}{8}} \, {\displaystyle
\frac {\sqrt{3}\,(\beta + 2\,\alpha + 4\,\alpha \,\beta + \beta
\,\alpha ^{2} + 2\,\alpha ^{2})\,(\alpha - 1)\, \beta \,\alpha
^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\
$
{A_{\infty , \,42}}={\displaystyle \frac {1}{8}} \, {\displaystyle
\frac {\alpha ^{2}\,\beta \,{S_{3}}^{3}\,(\alpha
+ 1)\,{S_{2}}^{3}\,( - 10\,\alpha + 7\,\beta + 7\,\beta \,
\alpha ^{2} - 12\,\alpha \,\beta - 10\,\alpha ^{2})}{{S_{1}}^{3}
}}, $\\
$
{A_{\infty , \,43}}= - {\displaystyle \frac {3}{4}} \,
{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}} , \quad {A_{\infty , \,44}}={\displaystyle \frac {1}{4}} \,
{\displaystyle \frac {\beta \,(10\,\alpha + \alpha ^{2} + 1)}{
\alpha \,{S_{1}}}}. $
\begin{center}
$A=(A_{ij})$, $1\leq i,j \leq 4$.\\
\end{center}
\noindent$ {A_{11}}= - {\displaystyle \frac {1}{4}}
\,{\displaystyle \frac {(\alpha + 1)\,(\alpha \,\beta +
4\,\alpha + \beta )}{ \alpha \,{S_{1}}}} , \quad
{A_{12}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}} , $\\ $ {A_{13}}=2\,{\displaystyle \frac
{{S_{1}}}{{S_{2}}^{2}\,{S_{3 }}^{3}\,\alpha ^{4}\,\beta }},\quad
{A_{14}}=0,\quad {A_{21}}={\displaystyle \frac {1}{4}}
\,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha -
1)}{\alpha \,{ S_{1}}}} , $\\
$
{A_{22}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{10\,\alpha \,\beta + 3\,\beta \,\alpha ^{2} + 3\,\beta
+ 4\,\alpha ^{2} + 4\,\alpha }{\alpha \,{S_{1}}}} ,\quad {A_{24}}=2\,{\displaystyle \frac {{S_{1}}}{{S_{2}}^{2}\,{S_{3
}}^{3}\,\alpha ^{4}\,\beta }}$\\
$
{A_{31}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\alpha ^{2}\,\beta ^{2}\,{S_{3}}^{3}\,(\alpha + 1)\,( \alpha -
1)^{2}\,{S_{2}}^{3}}{{S_{1}}^{3}}} ,\quad {A_{34}}= -
{\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}},$\\
$
A_{23}=0, \quad {A_{32}}={\displaystyle \frac {1}{8}}
\,{\displaystyle \frac {\sqrt{3}\,(\alpha - 1)\,(\alpha +
1)^{2}\,\alpha ^{2}\, \beta
^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} ,\quad
{A_{33}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{(\alpha - 1)^{2}\,\beta }{\alpha \,{S_{1}}}},$\\
${A_{41}}={\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta
^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} ,\quad {A_{42}}= -
{\displaystyle \frac {3}{8}} \,{\displaystyle \frac {\alpha
^{2}\,\beta ^{2}\,{S_{3}}^{3}\,(\alpha + 1)^{3}\,{
S_{2}}^{3}}{{S_{1}}^{3}}} , $\\
$
{A_{43}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}} ,\quad {A_{44}}={\displaystyle \frac {3}{4}}
\,{\displaystyle \frac {(\alpha + 1)^{2}\,\beta }{\alpha
\,{S_{1}}}}. $
\begin{center}
$R=(R_{ij})$, $1\leq i,j \leq 4$.\\
\end{center}
\noindent $ {R_{11}}= - {\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {2\,\alpha + \beta + 6\,\alpha \,\beta +
2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, \quad {R_{12}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle
\frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}} , $\\
$
{R_{13}}=0, \quad {R_{14}}=0, $\\
$
{R_{21}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}},\quad {R_{22}}={\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {(\alpha + 1)\,( - 2\,\alpha + \alpha
\,\beta + \beta ) }{\alpha \,{S_{1}}}} , $\\
$
{R_{23}}=0, \quad {R_{24}}=0, $\\
$
{R_{31}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\beta \,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha +
1)\,(\alpha ^{2} + 6\,\beta \,\alpha ^{2} + \alpha + 13\,\alpha
\,\beta + 6\,\beta )}{{S_{1}}^{3}}} , $\\
$
{R_{32}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\sqrt{3}\,(3\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} + 7\, \alpha
\,\beta + 3\,\alpha + 2\,\beta )\,(\alpha - 1)\,\beta \,\alpha
^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\
$
{R_{33}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{(\alpha + \sqrt{3} + 2)\,(\alpha + 2 - \sqrt{3})\,\beta
}{\alpha \,{S_{1}}}} ,\quad {R_{34}}={\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha -
1)}{\alpha \,{ S_{1}}}}, $\\
$
{R_{41}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\sqrt{3}\,(3\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} + 7\, \alpha
\,\beta + 3\,\alpha + 2\,\beta )\,(\alpha - 1)\,\beta \,\alpha
^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}}, $\\
$
{R_{42}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac
{\beta \,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha + 1)\,( -
5\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} - 5\,\alpha - 9 \,\alpha
\,\beta + 2\,\beta )}{{S_{1}}^{3}}} , $\\
$
{R_{43}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}}, \quad {R_{44}}= - {\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {(\alpha + \sqrt{3} + 2)\,(\alpha + 2 -
\sqrt{3})\,\beta }{\alpha \,{S_{1}}}}. $\\
\begin{center}
$J=(J_{ij})$, $1\leq i,j \leq 4$.\\
\end{center}
\noindent $ {J_{11}}= - {\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha -
1)}{\alpha \,{ S_{1}}}} , \quad {J_{12}}={\displaystyle \frac
{1}{2}} \,{\displaystyle \frac {(\alpha + 1)\,( - 2\,\alpha +
\alpha \,\beta + \beta ) }{\alpha \,{S_{1}}}}, $\\
$
{J_{13}}=0 ,\quad {J_{14}}=0, $\\
$
{J_{21}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{2\,\alpha + \beta + 6\,\alpha \,\beta + 2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, \quad {J_{22}}={\displaystyle \frac {1}{2}} \,{\displaystyle
\frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}}, $\\
$
{J_{23}}=0 , \quad {J_{24}}=0, \quad {J_{31}}= - {\displaystyle
\frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha -
1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta
^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} , $\\
$
{J_{32}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\beta ^{2}\,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha
+ 1)\,(\alpha ^{2} + 4\,\alpha + 1)}{{S_{1}}^{3}}} , $\\
$
{J_{33}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{
S_{1}}}} , \quad {J_{34}}= - {\displaystyle \frac {1}{2}}
\,{\displaystyle \frac {2\,\alpha + \beta + 6\,\alpha \,\beta +
2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, $\\
$
{J_{41}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\beta ^{2}\,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha
+ 1)\,(\alpha ^{2} + 4\,\alpha + 1)}{{S_{1}}^{3}}} , $\\
$
{J_{42}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac
{\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta
^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} , $\\
$
{J_{43}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac
{(\alpha + 1)\,( - 2\,\alpha + \alpha \,\beta + \beta )
}{\alpha \,{S_{1}}}}, \quad {J_{44}}= - {\displaystyle \frac
{1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta
\,(\alpha - 1)}{\alpha \,{ S_{1}}}}. $
\end{document} |
\begin{document}
\title{Intersection forms, topology of maps and motivic
decomposition for resolutions of threefolds}
\tableofcontents
\section{Introduction}
This paper has two aims.
\n
The former is to give an introduction to our earlier work
\ci{decmightam}
and more generally to some of the main themes of the theory of perverse
sheaves and
to some of its
geometric applications. Particular emphasis is put on
the topological properties
of algebraic maps.
\n
The latter is to prove
a motivic version of the
decomposition theorem
for the resolution of a threefold $Y.$ This result allows one
to define a pure motive whose Betti realization
is the intersection cohomology of $Y.$
We assume familiarity with Hodge theory and with the formalism
of derived categories.
On the other hand, we provide a few explicit computations of perverse
truncations and intersection cohomology complexes which
we could not
find in the literature and which may be helpful
to understand the machinery.
We discuss in detail the case of surfaces, threefolds and fourfolds.
In the surface case,
our ``intersection forms" version of
the decomposition theorem stems quite naturally from two well-known
and widely used theorems on surfaces, the Grauert
contractibility criterion for curves on a surface and the so called
``Zariski Lemma,"
cf. \ci{BPV}.
The following assumptions are made throughout the paper
\begin{ass}
\label{s1}
We work with varieties over the complex numbers.
A {\em map} $f:X \to Y$ is a proper morphism of varieties.
We assume that $X$ is smooth. All (co)homology groups are
with rational coefficients.
\end{ass}
These assumptions are placed for ease of exposition only, for
the main results remain valid when $X$ is singular if one
replaces the cohomology
of $X$
with its intersection cohomology, or the constant sheaf
$\rat_{X}$ with the
intersection cohomology complex of $X.$
It is a pleasure to
dedicate this work to J. Murre, with admiration and respect.
\section{Intersection forms}
\label{intforms}
\subsection{Surfaces}
\label{surf}
Let $D = \cup D_{k} \subseteq X$ be a finite union of compact
irreducible curves
on a smooth complex surface. There is a sequence of
maps
\begin{equation}
\label{blle}
H_{2}(D) \stackrel{r_{*}}\longrightarrow H^{BM}_{2}(X) \stackrel{PD}\simeq
H^{2}(X) \stackrel{r^*}\longrightarrow H^{2}(D).
\end{equation}
The group $H_{2}(D)$ is freely generated by the fundamental classes
$[D_{k}].$
\n
The group $H^{2}(D)$ is isomorphic to $H_{2}(D)^{\vee}$ and, via Mayer-Vietoris,
it is freely generated by the classes associated with points $p_{k}
\in
D_{k}.$
The map
$$
H_{2}(D) \stackrel{cl}\longrightarrow H^{2}(X), \qquad cl\,:= \, PD\,\circ \,
r_{*}
$$
is called the {\em class map} and it assigns to the fundamental
class $[D_{k}]$ the cohomology class $c_{1}( {\cal O}_{X}(D_{k}) ).$
\n
The restriction map $r,$ or rather
$r\circ PD,$ assigns to a
Borel Moore $2-$cycle meeting transversely all the $D_{k},$
the points of intersection with the appropriate multiplicities.
\n
The composition
$
H_{2}(D) \longrightarrow H^{2}(D)
$
gives rise to the so-called {\em refined intersection form}
on $D \subseteq X:$
\begin{equation}
\label{iota}
\iota: H_{2}(D) \times H_{2}(D) \longrightarrow \rat
\end{equation}
with associated symmetric intersection matrix $||D_{h}\cdot
D_{k}||.$
\n
If $X$ is replaced by the germ of a neighborhood
of $D,$ then $X$ retracts to $D$ so that
all four spaces appearing in (\ref{blle}) have the same dimension
$b_{2}(D)$, the number of curves in $D.$
\n
In this case the restriction map
$r$ is an isomorphism: the Borel Moore classes
of disks transversal to the $D_{k}$ map to the point of intersection.
\n
On the other hand, $cl$ may fail to be injective, e.g.
$(\comp \times {\Bbb P}^{1}, \{0\}\times {\Bbb P}^{1}).$
The following are two classical results concerning the properties of
the intersection form $\iota$, dealing respectively with resolutions
of
normal surface singularities and with one-dimensional families of curves.
They are known as Grauert's Criterion and the Zariski
Lemma (cf. \ci{BPV}, p.90).
\begin{tm}
\label{tmgra}
Let $f: X \to Y$ be the contraction of a divisor $D$
to a normal surface singularity.
Then the refined intersection form $\iota$ on $H_{2}(D)$
is negative definite.
In particular, the class map $cl$ is an isomorphism.
\end{tm}
\begin{tm}
\label{wzar}
Let $f: X \to Y$ be a surjective proper map of quasi-projective
smooth varieties, $X$ a surface, $Y$ a curve.
Let $D = f^{-1}(y)$ be any fiber.
Then the rank of $cl$ is $b_{2}(D) -1.$
More precisely, let $F= \sum_k a_k D_{k},$ $a_{k}>0,$ be the
cycle-theoretic fiber. Then $F\cdot F=0$ and the induced bilinear form
$$
\frac{H_{2}(D)}{ \langle [F] \rangle } \times
\frac{H_{2}(D)}{ \langle [F] \rangle }
\longrightarrow \rat
$$
is non degenerate and negative definite.
\end{tm}
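To make Theorems \ref{tmgra} and \ref{wzar} concrete, here is a small numerical
illustration (standard textbook configurations used only as an example, not
taken from \ci{BPV}): the exceptional curves of the minimal resolution of an
$A_3$ singularity, and a fiber of type $I_3$ of an elliptic fibration.
\begin{verbatim}
import numpy as np

# A_3 chain of (-2)-curves: D_k.D_k = -2, D_k.D_{k+1} = 1 (Grauert's Criterion)
A3 = np.array([[-2.,  1.,  0.],
               [ 1., -2.,  1.],
               [ 0.,  1., -2.]])
print(np.linalg.eigvalsh(A3))      # all eigenvalues negative

# I_3 fiber: a cycle of three (-2)-curves (Zariski Lemma)
I3 = np.array([[-2.,  1.,  1.],
               [ 1., -2.,  1.],
               [ 1.,  1., -2.]])
print(np.linalg.eigvalsh(I3))      # one zero eigenvalue, the others negative
print(I3 @ np.ones(3))             # [0, 0, 0]: F.D_k = 0 for every k
\end{verbatim}
In the second matrix the kernel is spanned by the class of the full fiber
$F=D_1+D_2+D_3$ and the form is negative definite on the quotient, as predicted
by Theorem \ref{wzar}.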
\begin{rmk}
\label{link}
{\rm Theorem \ref{tmgra} can be interpreted
in terms of the topology of the ``link" ${\cal L}$ of the
singularity.
Let $N$ be a small
contractible neighborhood
of a singular point $y$ and ${\cal L}$ be its boundary.
Choose analytic disks $\Delta_1, \cdots, \Delta_r$ cutting
transversally
the divisors $D_1, \cdots, D_r$ at regular points. The classes of
these disks
generate the Borel-Moore homology $H_2^{BM}(f^{-1}(N)) \simeq
H^2(f^{-1}(N)).$
The statement \ref{tmgra} implies that
each class $\Delta_{i}$ is
homologous
to a rational linear combination of exceptional curves.
Equivalently,
for every index $i$ some multiple of
the $1-$cycle $\Delta_i\cap {\cal L}$ bounds in the link
${\cal L}$ of $y.$
This is precisely what fails in the aforementioned example
$(\comp \times {\Bbb P}^{1}, \{0\}\times {\Bbb P}^{1}).$
A similar interpretation is possible for the ``Zariski lemma.''}
\end{rmk}
In view of the important role played by these theorems
in the theory of complex surfaces, it is natural to ask for
generalizations
to higher dimensions.
We next define the analogue of the intersection form for a
general
map $f:X \to Y$ (cf. \ref{s1}).
\subsection{Intersection forms associated to a map}
\label{psif}
General theorems, due to J. Mather, R. Thom and others
(cf. \ci{g-m}) ensure that a projective map
$f:X \to Y$ can be stratified, i.e. there is a decomposition
${\frak Y}= \coprod S_l$ of $Y$ into locally closed nonsingular
subvarieties
$S_l$,
the strata, so that $f: f^{-1}(S_l)\to S_l$ is, for any $l$, a
topologically locally trivial fibration.
Such a stratification allows us, when $X$ is nonsingular, to define
a sequence of intersection forms.
Let $L$ be the pullback of an ample bundle on $Y.$
The idea is to use sections of $L$ to construct
transverse slices and reduce the strata to points,
and to use a very ample line bundle $\eta$ on $X$
to fix the ranges:
Let $ \dim S_l=l$,
let $s_l$ be a generic point of the stratum $S_l$ and
$Y_s$ a complete intersection of $l$ hyperplane sections of $Y$
passing through $s_l$, transverse to $S_l$.
As we did for surfaces, we consider the maps:
$$
I_{l,0}: H_{n-l}(f^{-1}(s_l)) \times H_{n-l}(f^{-1}(s_l)) \longrightarrow \rat,
$$
obtained by intersecting cycles supported in $f^{-1}(s_l)$
in the smooth $(n-l)-$dimensional ambient variety $f^{-1}(Y_s):$
$$
H_{n-l}(f^{-1}(s_l))\to H_{n-l}(f^{-1}(Y_s)) \simeq H^{n-l
}(f^{-1}(Y_s))
\to H^{n-l}(f^{-1}(s_l)).
$$
We can define other intersection forms, in different ranges,
cutting the cycles in $f^{-1}(s_l)$ with generic sections of $\eta.$
The composition:
$$
H_{n-l -k}(f^{-1}(s_l))\to H_{n-l -k}(f^{-1}(Y_s))
\simeq H^{n-l +k}(f^{-1}(Y_s)) \to H^{n-l +k}(f^{-1}(s_l))
$$
gives the maps
$$
I_{l,k}:H_{n-l-k}(f^{-1}(s_l)) \times H_{n-l+k}(f^{-1}(s_l)) \longrightarrow
\rat.
$$
Let us denote by
$$\cap \eta^k:
H_{n-l+k}(f^{-1}(s_l)) \to H_{n-l-k}(f^{-1}(s_l)),
$$
the operation of cutting a cycle in $f^{-1}(s_l)$ with $k$ generic
sections of $\eta$.
\n
Composing this map with $I_{l,k},$ we obtain the intersection forms we
will consider:
$$
I_{l,k }(\cap \eta^k \cdot, \cdot ): H_{n-l+k}(f^{-1}(s_l))
\times H_{n-l+k}(f^{-1}(s_l)) \longrightarrow \rat.
$$
\begin{rmk}
\label{inde}
{\rm These intersection forms depend on $\eta$ but not on the
particular sections
used to cut the dimension.
They are independent of $L$.
In fact we could define them using a local slice of the stratum $S_l$
and its inverse image,
without reference to sections of $L.$ }
\end{rmk}
\begin{ex}
\label{3fold}
{\rm Let $f:X \to Y$ be a resolution of singularities of a threefold
$Y,$
with a stratification $Y_0 \coprod C \coprod y_0,$
defined so that $f$ is an isomorphism over $Y_0$,
the fibers are one-dimensional over $C$, and there is a divisor
$D=\cup D_i$ contracted to the point $y_0$. We have the following
intersection forms:
-- let $c$ be a general point of $C$ and $s\in H^0(Y, {\cal O}(1))$
be a generic section vanishing at $c;$
there is the form $H_2(f^{-1}(c)) \times H_2(f^{-1}(c)) \longrightarrow \rat$
which is nothing but the Grauert-type form on the surface
$f^{-1}(\{s=0\});$
--
similarly, over $y_0$,
there is the form on $H_4(D)$ given by $\eta \cap [D_i]\cdot [D_j];$
it is a Grauert-type form,
computed on a hyperplane section
of $X$ with respect to $\eta;$
--
finally, we have the more interesting $H_3(D)\times H_3(D) \longrightarrow
\rat$.
}
\end{ex}
One of the dominant themes of this paper is that
{\em Hodge theory affords non-degeneracy results for these forms
and that this non-degeneracy
has strong cohomological consequences.}
To see why Hodge theory is relevant to the study of the intersection
forms,
let us sketch a proof of Theorem \ref{tmgra}
under the hypothesis that $X$ and $Y$ are projective.
The proof we give is certainly not the most natural or economic.
Its interest lies in the fact that,
while the original proof seems difficult to generalize to higher
dimension,
this one can be generalized.
It is based on the observation that the classes $[D_i]$ of the
exceptional curves are
``primitive'' with respect to the cup product with the first Chern
class
of any ample line bundle pulled back from $Y.$
Even though such a line bundle is certainly not ample,
some parts of the ``Hodge package," namely the Hard Lefschetz theorem
and the
Hodge Riemann bilinear relations, go through.
To prove this, we introduce a technique, which we call {\em
approximation of $L-$primitives,}
which plays a decisive role in what follows.
\n
{\em Proof of \ref{tmgra} in the case $X$ and $Y$ are projective.}
\n
Let $L$ be the pullback to $X$ of an ample line bundle on $Y$.
Since the map is dominant, $L^2 \neq 0,$ and we get the
Hodge-Lefschetz
type decomposition:
$$H^2(X,\real) \, = \, \real \langle c_1(L) \rangle \oplus
\hbox{\rm Ker \,} \{c_1(L) \wedge : H^2(X) \to H^4(X)\}.$$
Denote the kernel above by $P^{2}.$
This decomposition is orthogonal with respect to the Poincar\'e
duality
pairing which,
in turn,
is non degenerate when restricted to the two summands.
The decomposition holds with rational coefficients.
However, real coefficients are more convenient in view
of taking limits.
\n
Consider a sequence of Chern classes of ample $\rat -$line bundles
$L_n$,
converging to the Chern class of $L$, e.g. $L_n=L+\frac{1}{n} \eta,$
$\eta $ ample on $X.$
Define
$P^2_{1/n}= \hbox{\rm Ker \,} \{c_1(L_n):H^2(X) \to H^4(X)\}.$
These are $(b_2-1)-$dimensional subspaces of $H^2(X)$.
Any limit point of the sequence $P^2_{1/n}$ in ${\Bbb P}^{b_2}(\real)$
gives a codimension one subspace $W \subseteq H^2(X)$,
contained in $ \hbox{\rm Ker \,} \{c_1(L):H^2(X) \to H^4(X)\}=P^2.$
Since $\dim{W} = b_2-1 = \dim{ P^2},$ we must have $\lim_{n}{
P^2_{1/n}}=P^2.$
\n
The Hodge Riemann Bilinear Relations hold on $P^2_{1/n}$ by classical
Hodge theory.
The duality pairing on the limit
$P^2$ is non degenerate.
It follows that
the Hodge Riemann Bilinear Relations hold on $P^2$ as well.
\n
The classes of the exceptional curves $D_i$ are in $P^2,$
since we
can choose a section of the very ample line bundle on $Y$ not
passing through the singular point and pull it back to $X.$
\n
The fact that these classes are independent is known
classically. Let us briefly mention here that if there is only one
component
$D_{i}$, then $0 \neq [D_{i}] \in H^{2}(X)$ since $X$ is K\"ahler.
In general, one may also argue along the following lines (cf.
\ci{demigsemi}, \ci{deca}, $\S8$): use the Leray spectral sequence
over an affine neighborhood $V$ of the singularity $y$ to show that
$H^{2}(f^{-1}(V)) \to H^{2}(f^{-1}(y))$ is surjective; use the
basic properties of mixed Hodge structures to deduce
that $H^{2}(X) \to H^{2}(f^{-1}(y))$ is also surjective; conclude
by dualizing and by Poincar\'e Duality.
\n
The classes $[D_{i}]$ are real of type $(1,1)$ and for such
classes
$\alpha\in P^2 \cap H^{1,1}$ the Hodge Riemann
bilinear relations give
$$
\int_X \alpha \wedge \alpha \,<\, 0
$$
whence the statement of \ref{tmgra}.
\blacksquare
\subsection{Resolutions of isolated singularities in dimension $3$}
\label{lde}
In this section we study the intersection forms
in the case of the resolution of three-dimensional
isolated singularities. Many of the features and techniques used
in the general case emerge already in this case.
Besides motivating what follows, we believe
that the statements and the techniques
used here are of some independent interest.
We prove all the relevant Hodge-theoretic
results about the intersection forms
associated to the resolution of an isolated singular point on a
threefold.
This example will be reconsidered in the last section, where we give
a motivic version
of the Hodge theoretic decomposition proved here.
As is suggested in the proof of Theorem \ref{tmgra} sketched at the
end of
the previous section, in order to draw conclusions on the
behaviour of the intersection forms, we must investigate
the extent to which the Hard Lefschetz theorem and
the Hodge Riemann Bilinear Relations hold when we
consider the cup product with the Chern class of the
pullback of an ample bundle by a projective map.
In order to motivate what follows let us recall an
inductive proof of the Hard Lefschetz theorem
based on the Hodge Riemann relations:
\n
{\em
Hard Lefschetz and Hodge-Riemann relations in dimension
$(n-1)$
and
Weak Lefschetz in dimension $n$ imply Hard Lefschetz in dimension
$n.$}
\n
Let $X$ be projective nonsingular and $X_H$ be a generic hyperplane
section with respect
to a very ample bundle $\eta$. Consider the map
$c_1(\eta):H^{n-1}(X)\to H^{n+1}(X).$
The Hard Lefschetz theorem
states it is an isomorphism. By the Weak Lefschetz Theorem
$i^*: H^{n-1}(X)\to H^{n-1}(X_H)$ is injective, and its dual
$i_*: H^{n-1}(X_H)\to H^{n+1}(X),$
with respect to Poincar\'e duality on $X$ and $X_H$,
is surjective. The cup product with $c_1(\eta)$
is the composition
$i_* \circ i^*$
$$
\xymatrix{
H^{n-1}(X) \ar[rd]^{i^*} \ar[rr]^{c_1(\eta)} & & H^{n+1}(X) \\
& H^{n-1}(X_H) \ar[ur]^{i_*} &
}
$$
and is therefore an isomorphism if and only if the bilinear form
$ \int_{X_{H}}$
remains non degenerate when restricted
to the subspace
$H^{n-1}(X) \subseteq H^{n-1}(X_H).$
This inclusion is a
Hodge substructure.
The Hodge Riemann relations on $X_{H}$
imply that the Hodge structure $H^{n-1}({X_H})$ is a direct sum of
Hodge
structures polarized by the pairing $\int_{X_{H}}.$
It follows that the restriction of the Poincar\'e
form $\int_{X_{H}}$ to $H^{n-1}(X)$ is non degenerate, as wanted.
The other cases of the Hard Lefschetz Theorem
(i.e. $c_1(\eta)^k$ for $k \geq 2$) follow immediately from the weak
Lefschetz theorem and
the Hard Lefschetz theorem for $X_H$. \blacksquare
\begin{ass}
\label{3foldass}
$Y$ is projective with an isolated singular point $y$,
$\dim Y=3.$
$X$ is a resolution and $f:X \to Y$ is
an isomorphism when restricted to $f^{-1}(Y-y)$.
Suppose $D=f^{-1}(y)$ is a divisor and let
$D_i$ be its irreducible components.
\end{ass}
As usual in this paper, we will denote
by $\eta$ a very ample line bundle on $X,$
and by $L$ the pullback to $X$
of a very ample line bundle on $Y.$
Of course $L$ is not ample.
We want to investigate whether the Hard Lefschetz theorem and the
Hodge
Riemann relations hold
if we consider cup-product with $c_1(L)$ instead of with an ample
line bundle.
\begin{rmk}
{\rm Since $c_1(L)^3 \neq 0$ we have an isomorphism
$c_1(L)^3:H^{0}(X) \to H^{6}(X). $}
\end{rmk}
\begin{rmk}
{\rm Clearly the classes $[ D_i ] \in H^2(X)$ are killed by the cup
product with $c_1(L),$ since
we can pick a generic section of
${\cal O}_Y(1)$ not passing through $y$ and its inverse image in $X$
will not meet the $D_{i}.$
Since $[D_i] \neq 0,$
it follows that
$c_1(L):H^{2}(X) \to H^{4}(X)$ is not an isomorphism.}
\end{rmk}
We now prove that in fact
the subspace $ \hbox{\rm Im \,} \{H_4(D) \to H^2(X) \}$ generated by
the classes $[D_{i}]$ is precisely
$ \hbox{\rm Ker \,} { c_1(L):H^{2}(X) \to H^{4}(X)}.$
\begin{tm}
\label{ht3fold}
Let $s \in \Gamma(Y,{\cal O}_Y(1))$ be
a generic section and $X_s=f^{-1}(\{s=0\}) \stackrel{i}{\to}X$.
Then:
a. $i^*:H^1(X) \to H^1(X_s)$ is an isomorphism.
b. $i_*:H^3(X_s) \to H^5(X)$ is an isomorphism.
c. $i^*:H^2(X)/( \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\}) \to H^2(X_s)$ is injective.
d. $i_*: H^2(X_s) \to \hbox{\rm Ker \,} \{H^4(X) \to H^4(D)\}$ is surjective.
e. The map $H_3(D) \to H^3(X)$ is injective.
\end{tm}
{\em Proof.}
Set $X_0=X \setminus X_s$ and $Y_0=Y \setminus \{s=0\}$, and let us
consider
the Leray spectral sequence
for $f:X_0 \to Y_0.$ Since $Y_0$ is affine, we have $H^k(Y_0)=0$ for
$k>3$.
$$
\xymatrix{
&H^4(D)\ar[rrd]^{d_2} & & &
& \\
&H^3(D)\ar[rrd]^{d_2}\ar@{.}[dddrrr] & & &
& \\
&H^2(D) & & &
& \\
&H^1(D) & & &
& \\
& H^0(Y_0)& H^1(Y_0)& H^2(Y_0) & H^3(Y_0)& \\
\ar@<-4ex>[uuuuu] \ar@<4ex>[rrrr] & &
& & }
$$
The sequence degenerates so that we have surjections
$H^3(X_0) \to H^3(D)$ and
$H^4(X_0) \to H^4(D).$
But from \ci{ho3}, Proposition 8.2.6,
$H^3(X) \to H^3(D) \to 0$ and
$H^4(X) \to H^4(D) \to 0$ are also surjective.
We have the long exact sequence
$$
\xymatrix{
H^1_c(X_0) \ar[r] \ar@{=}[d] & H^1(X) \ar[r] & H^1(X_s) \ar[rr]
\ar[dr]& & H^2_c(X_0) \ar[r] \ar@{=}[d] & H^2(X) \ar[r]
& H^2(X_s) \\
H^5(X_0)^*=\{0 \} &
& & 0 \ar[ur] & H_4(X_0)
\ar@{=}[u] & & \\
& &
& & H_4(D) \ar@{=}[u] \ar@{^{(}->}[uur] &
& }
$$
The other statements are obtained by applying duality.
\blacksquare
Since the Hodge--Riemann bilinear relations hold on $H^2(X_s)$,
the argument given at the beginning of this section shows that
$$
c_1(L)^2:H^1(X)\longrightarrow H^5(X) \; \hbox{ is an isomorphism}
$$
and
$$
c_1(L)\, :\, H^2(X)/H_4(D) \, \longrightarrow \,
\hbox{\rm Ker \,} { \{H^4(X) \to H^4(D) \}} \; \hbox{ is an isomorphism}.
$$
The Hodge Riemann relations hold for $P^1:=H^1(X)$ and
$P^2:= \hbox{\rm Ker \,} \{c_1(L)^2:H^2(X)/H_4(D) \to H^6(X)\}$
since, by the weak Lefschetz Theorem,
they follow from those for $X_s.$
The Hodge Riemann relations for $P^3= \hbox{\rm Ker \,} \{c_1(L):H^3(X) \to
H^5(X)\},$ of which $H_{3}(D)$ is a subspace,
must be considered separately:
the main technique to be used here is the {\em approximation of
primitives}
introduced in the previous section to prove Theorem \ref{tmgra}.
\begin{tm}
\label{polarizeh3}
The Poincar\'e pairing $\int_X $ is a polarization of $P^3$
\end{tm}
{\em Proof.}
Since
$c_1(L)^2:H^1(X)\to H^5(X) $ is an isomorphism, there is a
decomposition,
orthogonal with respect to the Poincar\'e pairing,
$H^3(X)=P^3 \oplus c_1(L)H^1(X)$ and, in particular, $\dim
{P^3}=b_3-b_1, $ just as if
$L$ were ample.
The Poincar\'e pairing remains nondegenerate when restricted to $P^3$.
The classes $c_1(L)+\frac{1}{n}c_1(\eta)$
are Chern classes of ample line bundles,
hence
$P^3_{1/n}= \hbox{\rm Ker \,} \{c_1(L)+\frac{1}{n}c_1(\eta):H^3(X) \to H^5(X) \}$
are $b_3-b_1-$dimensional subspaces of
$H^3(X)$ .
As in the proof of \ref{tmgra} a limit point of the sequence
$P^3_{1/n},$ considered as points in the real Grassmannian
$Gr(b_3-b_1, b_3),$
gives a subspace of $H^3(X)$, contained in $ \hbox{\rm Ker \,} \{c_1(L):H^3(X) \to
H^5(X)\}=P^3$
and, by equality of dimensions, $\lim {P^3_{1/n}}=P^3.$
The Hodge Riemann relations must then hold
on the limit $P^3$ as explained in the proof of Theorem
\ref{tmgra}.
\blacksquare
Finally, let us remark
that the cup-product with $\eta$ gives an isomorphism $c_1(\eta):
H_4(D) \to H^4(D),$
since the associated bilinear form $\int_X c_1(\eta) \wedge [D_i] \wedge [D_j]$
is negative definite.
As we remarked in \ref{3fold} this form is just
the intersection form on the exceptional curves of the restriction
of $f$ to a hyperplane section
(with respect to $\eta$) of $X$.
Summarizing:
$$
\xymatrix{
& & H_4(D) \ar@/_2pc/
@{-->}[ddrr]_>>>>>>>>{c_1(\eta)} &
& & & \\
H^0 \ar@/_2pc/[rrrrrr]^{c_1(L)^3} & H^1
\ar@/^3pc/[rrrr]^{c_1(L)^2} & H^2/H_4(D) \ar@/^1pc/[rr]^{c_1(L)}
& H^3 & \hbox{\rm Ker \,} \{ H^4 \to H^4(D) \} & H^5 & H^6 \\
& & & &
H^4(D) & &
}
$$
{\em the groups in the central row behave, with respect to $L$,
as the cohomology of a projective nonsingular variety on which $L$
is ample.}
We are now in a position to prove the first nontrivial fact on
intersection forms
which generalizes \ref{tmgra}:
\begin{cor}
\lambdaabel{tmgra3dim}
$H^3(D)$ has a Hodge structure which is pure of weight $3$, and the
Poincar\'e form is
a polarization. In particular, the (skew-symmetric) intersection form
$H_3(D) \times H_3(D) \to \rat$ is non degenerate.
\end{cor}
{\em Proof.} This follows because
$H_3(D) \to H^3(X)$
is injective and identifies $H_3(D)$ with a Hodge substructure
of the polarized Hodge structure
$ P^3 \subseteq H^3(X)$.
\blacksquare
This is a nontrivial
criterion for a configuration of (singular) surfaces contained in a
nonsingular threefold
to be contractible to a point. See \ci{decmightam}, Corollary 2.1.11
for a generalization to arbitrary dimension.
\n
For example, the purity of the Hodge structure implies that
$H^3(D)=\oplus H^3(D_i).$
We will see that the non degeneracy statement of \ref{tmgra3dim}
also plays an important role in the motivic decomposition of $X$
described in section \ref{gmdmt}.
\begin{rmk}
{\rm The same analysis can be carried out with only notational changes
for an
arbitrary generically finite map
from a nonsingular threefold $X$, e.g. assuming that there is
also some divisor which is blown down to a curve etc.
In this case the Hodge structure of $X$ can be further decomposed,
splitting
off a piece corresponding
to the contribution to cohomology of this divisor. }
\end{rmk}
\begin{rmk}
{\rm The classical argument of Ramanujam \ci{rama}, \ci{e-v},
to derive the Aki\-zu\-ki-Kodaira-Nakano Vanishing Theorem
from Hodge theory and Weak Lefschetz can be adapted
to give the following sharp version:
if $L$ is a line bundle on a threefold $X$,
with $L^3 \neq 0$,
a multiple of which is globally generated,
then
$$
H^{p}(X, \Omega_X^q \otimes L^{-1})=0
$$
for $p+q<2$, and for $p+q=2$ but $(p,q) \neq (1,1)$.
More precisely $H^{1}(X, \Omega_X^1 \otimes L^{-1}) \neq 0$
if and only if some divisor is contracted to a point.}
\end{rmk}
\subsection{Resolutions of isolated singularities in dimension $4$}
\lambdaabel{lde4}
Let us quickly consider another similar example
in dimension $4$.
\begin{ass}
\lambdaabel{4foldass}
$f:X \to Y$, where $Y$ still has a unique singular point $y$
and $X$ is a resolution. As before, $\eta$ will denote a very ample
bundle on $X$,
and $L$ the pull-back of a very ample bundle on $Y$.
Set $D=f^{-1}(y).$
\end{ass}
An argument completely analogous to the one used in the previous
example
shows that the sequence of spaces
$H^0(X),$ $H^1(X),$ $ H^2(X)/H_6(D),$ $ H^3(X)/H_5(D),$ $ H^4(X),$
$ \hbox{\rm Ker \,} \{H^5(X) \to H^5(D)\}$, $ \hbox{\rm Ker \,} \{H^6(X) \to H^6(D)\}$, $H^7(X),$
$H^8(X) $
satisfies the Hard Lefschetz Theorem with respect to the cup product
with $L$.
The corresponding primitive spaces $P^1,P^2,P^3$ are endowed with
pairings
satisfying the
Hodge Riemann bilinear relations.
The new fact that we have to face shows up when studying the Hodge
Riemann
bilinear relations
on $H^4(X)$.
The ``approximation of primitives'' technique here must be modified,
since
the dimension of $P^4= \hbox{\rm Ker \,} \{c_1(L):H^4(X) \to H^6(X) \}$ is greater
than
$b_4-b_2. $ Hence, if we introduce the primitive spaces
$P^4_{1/n}= \hbox{\rm Ker \,} \{c_1(L)+\frac{1}{n}c_1(\eta):H^4(X) \to H^6(X)\}$
with respect to the ample classes $c_1(L)+\frac{1}{n}c_1(\eta),$
their limit is a proper subspace, of dimension $b_4-b_2$, of $P^4$.
We can determine the exact dimension of $P^4$:
\begin{lm}
$\dim{ \hbox{\rm Ker \,} \{c_1(L):H^4(X) \to H^6(X)\}}\,=\, b_4 -b_2 +
\dim{H_6(D)}.$
\end{lm}
{\em Proof.}
Since $c_1(L)^2:H^2(X)/H_6(D) \stackrel{c_1(L)}{\to}
H^4(X) \stackrel{c_1(L)}{\to} \hbox{\rm Ker \,} \{H^6(X) \to H^6(D)\}$
is an isomorphism, we have an orthogonal decomposition
$$
H^4(X)= P^4 \oplus \hbox{\rm Im \,} \{c_1(L): H^2(X) \to H^4(X)\}.
$$
The statement follows from:
$ \hbox{\rm Ker \,} \{c_1(L): H^2(X) \to H^4(X) \}=H_6(D).$
\blacksquare
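Explicitly, the dimension count implicit in the proof reads:
$$
\dim{P^4} \, = \, b_4 - \dim{ \hbox{\rm Im \,} \{c_1(L): H^2(X) \to H^4(X)\}} \, = \,
b_4 - \big( b_2 - \dim{H_6(D)} \big) \, = \, b_4 -b_2 + \dim{H_6(D)}.
$$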
The ``excess'' dimension of $P^4$ is thus
$\dim{H_6(D)}$. On the other hand $P^4$
contains an obvious subspace of this dimension, namely
$c_1(\eta)H_6(D)$, the subspace generated by the classes obtained
intersecting
the irreducible components of the exceptional divisor with a generic
hyperplane section.
\begin{rmk}
{\rm The intersection form $\int_X c_1(\eta)^2 \wedge [D_i] \wedge
[D_j]$
is negative definite,
as it is just the intersection form on the exceptional curves of a
double
hyperplane section
of $X$.}
\end{rmk}
This last remark implies the following orthogonal decomposition
$$
H^4(X)= c_1(\eta)H_6(D) \oplus (c_1(\eta)H_6(D))^{\perp}\cap P^4
\oplus \hbox{\rm Im \,} \{ c_1(L): H^2(X) \to H^4(X) \}
$$
and $(c_1(\eta)H_6(D))^{\perp}\cap P^4$ has dimension $b_4-b_2.$
This subspace turns out to be the subspace of ``approximable
$L-$primitives''
we are looking for, as shown in the following
\begin{tm}
$$
\lim_{n \to \infty}{
\hbox{\rm Ker \,} { \{ c_1(L)+\frac{1}{n}c_1(\eta) :H^4(X) \to H^6(X)\}}
} \,
=\, (c_1(\eta)\,H_6(D))^{\perp}\cap P^4.
$$
\end{tm}
{\em Proof.} The two subspaces have the same dimension, so it is
enough
to prove that
$$ \hbox{\rm Ker \,} { \{
c_1(L)+\frac{1}{n}c_1(\eta): H^4(X) \to H^6(X)
\}
}
\subseteq (c_1(\eta)H_6(D))^{\perp}.
$$
If $(c_1(L)+\frac{1}{n}c_1(\eta))\wedge \alpha=0,$ then, using
$c_{1}(L) [D_{i}]=0:$
$$
\int_X c_1(\eta)\wedge [D_i] \wedge \alpha \, = \,
-n \int_X c_1(L) \wedge [D_i] \wedge \alpha=0.
$$
\blacksquare
\begin{cor}
The Poincar\'e pairing is a polarization of the weight $4$ pure
Hodge structure
$(c_1(\eta)H_6(D))^{\perp}\cap P^4$.
\end{cor}
Let us spell out the consequences of this analysis for the
intersection form
$ H_4(D) \times H_4(D) \to \rat.$
First notice that the same argument used in the proof of
\ref{ht3fold}.e shows that the map
$H_4(D) \to H^4(X)$ is injective. It follows that
$H_4(D) $ carries a pure Hodge structure.
The next result shows that in fact $H_4(D) $ is the direct sum
of two substructures, polarized (with opposite signs) by the
Poincar\'e pairing.
This gives a clear indication of what happens in
general:
\begin{cor}
\lambdaabel{tmgra4fold}
The intersection form $ H_4(D) \times H_4(D) \to \rat$ is
non degenerate.
There is a direct sum decomposition:
$$
H_4(D)= c_1(\eta)H_6(D) \oplus (c_1(\eta)H_6(D))^{\perp}
$$
orthogonal with respect to the intersection form, which is negative
definite on
the first summand and positive definite on the second.
\end{cor}
\section{Intersection forms and Decomposition in the Derived Category}
\lambdaabel{intformdecomp}
We now show how the results we quoted at the beginning of the first
section
can be translated into statements about
the decomposition in the derived category of sheaves
of the direct image of the constant sheaf.
We will freely use the language of derived categories.
In particular we will use the notion of a constructible sheaf and
the functors
$Rf_*,Rf_!,f^*,f^!.$
In section \ref{subsm}
we briefly review the classical $E_2-$degeneration criterion
of Deligne \ci{dess}, \ci{shockwave} in order to motivate
the construction
of the perverse cohomology complexes.
These complexes are
a natural generalization
of the higher direct image local systems for a smooth map.
The construction of perverse cohomology is carried out in section
\ref{macc}.
We denote by $S(Y)$ the abelian category of sheaves of $\rat -$vector
spaces
on $Y$, and by $D^b(Y)$ the corresponding derived category of
bounded complexes.
We shall make use of the following splitting criterion
in the derived category. We state it in the form we need it in this
paper. For a more general statement and a proof the reader is
referred to \ci{demigsemi} and \ci{decmightam}.
Let $(U,y)$ be a germ of an isolated $n-$dimensional singularity
with the
obvious stratification $U = V \coprod y,$ let $j: V \to U \leftarrow y :
i$ be the
obvious maps, $P$ be a self-dual
complex on $U$ with $P_{|V}= {\cal L}[n],$ ${\cal L}$
a local system on $V,$ and $P \simeq \tau_{\lambdaeq 0}P. $
We wish to compare
$P,$ $IC_{U}({\cal L}):= \td{-1} Rj_{*} {\cal L}[n]$
and the stalk ${\cal H}^{0}(P)_{y}.$
\begin{lm}
\lambdaabel{splitp}
The following are equivalent:
\n
1) there is a canonical isomorphism in the derived category
$$
P \, \simeq \, IC_{U}({\cal L}) \oplus
{\cal H}^{0}(P)_{y}[0];
$$
\n
2) the natural map ${\cal H}^{0}(P) \lambdaongrightarrow {\cal H}^{0}(Rj_{*}
j^{*}P)=R^{n}j_{*}{\cal L}$
is zero.
\end{lm}
\subsection{Resolution of surface singularities }
\lambdaabel{fsts}
For a normal surface $Y$, let $j:Y_{reg} \to Y$ be the open
embedding
of its regular points. The intersection cohomology complex,
which we will consider in much more detail in the next section, is
$IC_Y= \tau_{\lambdaeq -1} Rj_{*}\rat_{Y_{reg} } [2] .$
The following, which we will prove as a consequence of \ref{tmgra},
is the first case of the Decomposition theorem which
needs to be stated in the derived category and not just
in the category of sheaves.
\begin{tm}
\lambdaabel{tmrs}
Let $f:X \to Y$ be a proper birational map of
quasi-projective surfaces, $X$ smooth, $Y$ normal.
There is a canonical isomorphism
$$
Rf_{*} \rat_{X}[2] \stackrel{\simeq}\lambdaongrightarrow IC_{Y} \oplus
R^{2}f_*\rat_X [0].
$$
\end{tm}
{\em Proof.}
We work locally on $Y.$
Let $(Y,p)$ be the germ of an analytic normal surface singularity,
$f:(X,D) \to (Y,p)$ be a resolution. The fiber $D=f^{-1}(p)$ is a
connected union of
finitely many irreducible compact curves $D_k.$
Note that $j^*Rf_*\rat \simeq \rat_{Y\setminus p}.$
Consider the following diagram
$$
\xymatrix{
Rf_{*}\rat_{X}[2] \ar[rd]_{l_{0}} \ar@/_1pc/[rdd]_{l_{-1}}
\ar@/_2pc/[rddd]_{l_{-2}}
\ar[r]^{r} &
Rj_{*}j^{*} Rf_{*}\rat_{X}[2] & = Rj_{*} \rat_{Y
\setminus p}[2] & \\
& \td{0} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar@[=][u]
\ar@[=][u] & & \\
& \td{-1} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[u] &
=: IC_{Y} &\\
& \td{-2} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[u] & =
\rat_{Y}[2] &
}
$$
We are looking for the obstructions to the existence
of the lifts $l_{0},$ $l_{-1}$ and $l_{-2}.$
\n
Since $R^k f_*\rat_X =0,$ $k\geq 3,$
we have that $\td{0} Rf_*\rat_X[2] \simeq Rf_* \rat_X[2].$
In particular, $l_0$ exists and is unique.
\n
From the exact triangle
$$
\to \tau_{\lambdaeq -1}Rj_*j^*Rf_* \rat_X[2]
\to \tau_{\lambdaeq 0}Rj_*j^*Rf_* \rat_X[2]
\to {\cal H}^0 (Rj_*j^*Rf_* \rat_X[2])\simeq R^2j_* \rat_{Y \setminus
p}
\stackrel{+1}{\to}
$$
$l_{-1}$ exists iff
the natural map
$$
\rho: R^2f_* \rat_X \to R^2j_* \rat_{Y \setminus p}
$$
is trivial. Using the isomorphisms $(R^2f_* \rat_X)_{p} \simeq
H^{2}(X)$ and $(R^2j_* \rat_{Y \setminus p})_{p} \simeq
H^{2}(X \setminus D),$ the map $\rho$
can be identified with the restriction map $\rho$ appearing
in the long exact sequence of the
pair $(X, D):$
$$
\lambdadots \lambdaongrightarrow H_2(D) \stackrel{cl}\lambdaongrightarrow H^2(X)\simeq H^2(D)
\stackrel{\rho}\lambdaongrightarrow
H^2( X \setminus D ) \lambdaongrightarrow \lambdadots
$$
where we have identified $H_{2}(D) \simeq H^{2}(X, X \setminus D)$
via Lefschetz Duality.
\n
By Theorem \ref{tmgra}, $cl$ is an isomorphism
and $\rho$ is trivial.
\n
This lift $l_{-1}$ is unique and splits by
Lemma \ref{splitp}.
\blacksquare
\begin{rmk}
\lambdaabel{nolift}
{\rm
It can be shown easily that a lift
$Rf_{*}\rat_{X}[2] \lambdaongrightarrow \td{-2}\, Rj_{*}\rat_{U}[2] = \rat_{Y}[2]$
exists iff $H^{1}(f^{-1}(p), \rat) =\{0\},$ i.e. iff $IC_{Y}\simeq
\rat_{Y}[2],$ iff $(Y,p)$ is a rational homology manifold.
\n
It follows that, in general, the natural map $\rat_{Y } \to
Rf_{*}\rat_{X}$
does {\em not} split and $Rf_{*}\rat_{X}$ does {\em not}
decompose as a direct sum of its shifted cohomology sheaves
as in (\ref{e1}).
}
\end{rmk}
\begin{ex}
\lambdaabel{rag2}
{\rm
Let $f: X = \comp \times {\Bbb P}n{1} \to Y$ be the real algebraic map
contracting precisely $D:= \{0 \} \times {\Bbb P}n{1}$ to a point $p \in
Y.$
One has
a {\em non} split exact sequence in the category
$P(Y)$ of perverse sheaves
on
$Y:$
$$
0 \lambdaongrightarrow IC_{Y} \lambdaongrightarrow Rf_{*}\rat_{X}[2] \lambdaongrightarrow H^{2}({\Bbb P}n{1})_{p}[0]
\lambdaongrightarrow 0.
$$
}
\end{ex}
It is remarkable that while the lift $l_{-2}$ does not exist in
general,
the lift $l_{-1}$ always exists.
While looking for a nontrivial map $Rf_{*} \rat_{X}[2] \to
\rat_{Y}[2],$
one ends up finding another {\em more interesting} map to
$IC_{Y}.$
Recall that the dualizing sheaf $\omega_{X} \simeq \rat_{X}[2n].$
Dualizing the canonical isomorphism of Theorem \ref{tmrs}
and keeping in mind that $IC_{Y}$ and $H_{2}(D)_{p}[0]$
are simple objects in $P(Y)$ (cf. section \ref{simple}), we get
\begin{cor}
\lambdaabel{ctmrs} There are canonical isomorphisms
$$
H_{2}(D)_{p}[0] \oplus IC_{Y}^{*} \stackrel{\simeq}\lambdaongrightarrow
Rf_{*} \omega_{X}[-2] \stackrel{PD}\simeq
Rf_{*} \rat_{X}[2] \stackrel{\simeq}\lambdaongrightarrow IC_{Y} \oplus
H^{2}(D)_{p}[0],
$$
such that the composition is a direct sum map
and induces the intersection form $\iota$ on $H_{2}(D)$ and the
Poincar\'e-Verdier pairing on the self-dual $IC_{Y}.$
\n
In particular, if $X$ is compact, then the induced splitting
injection
$$
I\!H^{\bullet}(Y) \subseteq H^{\bullet}(X)
$$
exhibits the lhs as the pure Hodge substructure of the rhs
orthogonal to the space $cl (H_{2}(D)) \subseteq H^{2}(X)$
with respect to the Poincar\'e pairing on $X.$
\end{cor}
\subsection{Fibrations over curves}
\lambdaabel{fstc}
Let $f: X \to Y$ be a map from a smooth surface onto a smooth
curve.
Denote by $\hat{f}: \hat{X} \to \hat{Y}$
the smooth part of the map $f,$
by $j: \hat{Y} \to Y$ the open immersion,
and set $T^i:= R^i\hat{f}_*\rat_{\hat{X}} = {R^if_*\rat_X}_{|\hat{Y}}.$
For ease of exposition we assume that $f$ has connected fibers.
\n
Fix an ample line bundle $\eta$ on $X.$ The isomorphism
stated in the next proposition will depend on $\eta.$
\begin{pr}
\lambdaabel{rhl1}
There is an isomorphism
$$
Rf_* \rat_X[2] \simeq j_*T^0[2] \oplus P
\oplus j_*T^2[0],
$$
with $P$ a suitable self-dual (with respect to the Verdier duality
functor) object of $D^b(Y).$
\end{pr}
{\em Proof.}
We work around one critical value $p \in Y$
and replace $Y$ by a small disk centered at $p,$ $X$ by the preimage
of this disk, etc.
\n
Since the fibers are connected, $\rat_Y \simeq f_*\rat_X \simeq
j_*T^0 \simeq
j_*T^2.$
\n
Since $\eta$ is $f-$ample, $\eta: T^0 \simeq T^2,$
which in this case implies that $\eta: j_*T^0 \simeq j_*T^2.$
\n
There are the natural truncation
maps $f_*\rat_X \to Rf_*\rat_X \to R^2f_*\rat_X[-2].$
\n
There is the natural adjunction map $R^2f_*\rat_X \to j_*T^2.$
It is split surjective in view of the presence of $\eta.$
\n
Putting these together,
there is a sequence of maps
\begin{equation}
\lambdaabel{som}
j_*T^0[2] \stackrel{c}\to Rf_*\rat_X[2] \stackrel{\eta}\to Rf_*
\rat_X[4]
\stackrel{{\Bbb P}i}\to j_*T^2[2] \stackrel{\sigma}\to
j_*T^0[2] \stackrel{c}\to \lambdadots
\end{equation}
where the composition ${\Bbb P}i \eta c$ is the isomorphism
$\eta: j_*T^0 \simeq j_*T^2$ mentioned above,
and $\sigma := ( {\Bbb P}i \eta c )^{-1}.$
\n
The reader can verify that the composition
$$
j_*T^0[2] \oplus j_*T^2[0] \stackrel{
\gamma:=( c + \eta c \sigma)[-2] }\lambdaongrightarrow
Rf_* \rat_X[2]
\stackrel{ \sigma{\Bbb P}i\eta \oplus {\Bbb P}i [-2]}\lambdaongrightarrow j_*T^0[2] \oplus
j_*T^2[0]
$$
is the identity, i.e. $\gamma$ splits.
\n
Let $P:= \mbox{Cone}(\gamma).$ There is a direct sum decomposition
\begin{equation}
\lambdaabel{chsp}
Rf_* \rat_X [2] \simeq j_*T^0[2] \oplus P \oplus j_*T^2[0].
\end{equation}
The self-duality of $P$ follows from the self-duality of
$Rf_* \rat_X [2]$
and of $j_*T^0[2] \oplus j_*T^2[0].$
\blacksquare
The object $P$
introduced in the previous proposition has a simple structure:
\begin{pr}
\lambdaabel{semisimp1}
Assumptions as in \ref{rhl1}.
The object $P$ splits in $D^b(Y)$ as
$$P=V \oplus j_*T^1[1],$$
where $V = \hbox{\rm Ker \,} \{R^2f_*\rat_X \to j_*T^2\}$
is a skyscraper sheaf supported at
$Y \setminus \hat{Y}.$
This decomposition is canonical and compatible with Verdier duality.
\end{pr}
{\em Proof.}
By inspecting cohomology sheaves we see that
${\cal H}^i(P) =0$ for $i \neq -1,0,$ that
${\cal H}^{-1}(P) = R^1f_*\rat_X$ and that
${\cal H}^0(P) =V.$
\n
In view of Lemma \ref{splitp}, we need to show that
\begin{equation}
\lambdaabel{rpr}
r': {\cal H}^0 (P) \to R^1j_*T^1
\end{equation}
is the zero map.
\n
We now show that this is equivalent to the Zariski Lemma.
\n
By applying adjunction to (\ref{chsp}), we obtain a commutative
diagram
$$
\xymatrix{
Rf_{*}\rat_{X}[2] \ar[d]^{\simeq}
\ar[r] &
Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[d]^{\simeq} \\
j_{*}T^{0}[2] \oplus P \oplus j_{*}T^{2}[0] \ar[r] &
Rj_{*}T^{0}[2] \oplus Rj_{*}P \oplus Rj_{*}T^{2}[0]
}
$$
The associated map of spectral sequences
${\Bbb H}^p(Y, {\cal H}^q (-) ) \Longrightarrow
{\Bbb H}^{p+q}(Y, -)$
gives a commutative diagram
$$
\xymatrix{
H_{2}(D) \ar[r]^{cl} & H^{2}(X) \ar[d]^{\simeq}
\ar[r]^{r} &
H^{2}(X \setminus D) \ar[d]^{\simeq} \\
& {\cal H}^{0}( P) \oplus (j_{*}T^{2})_{p} \ar[r]^{(r',id)} & \;
{\cal H}^{1}( Rj_{*}T^{1}) \oplus (j_{*}T^{2})_{p}.
}
$$
It follows that, using the identifications above,
$\mbox{Im}(cl)= \hbox{\rm Ker \,} {r} = \hbox{\rm Ker \,} {r'}
\subseteq {\cal H}^0(P).$
\n
In particular, $r'=0$ iff $\dim{ \hbox{\rm Ker \,} {r} } = \dim{ {\cal H}^0(P) }.$
Note
that $\dim{ {\cal H}^0(P) }=b_2(f^{-1}(p)) -1,$ since
$(R^2f_*\rat_X)_{p} \simeq H^{2}(f^{-1}(p))$ and $(j_*T^2)_{p} \simeq \rat.$
\n
It follows that $r'=0$ iff $\dim{ \mbox{Im} (cl) } =
b_2(f^{-1}(p)) -1.$ The latter is implied by
Theorem \ref{wzar}.
\n
We conclude by Lemma \ref{splitp}.
\blacksquare
Finally, we have:
\begin{tm}
\lambdaabel{stc}
There is an isomorphism
$$
Rf_* \rat_X[2] \simeq j_*T^0[2] \oplus j_*T^1[1] \oplus V [0]
\oplus j_*T^2[0].
$$
\end{tm}
\begin{rmk}
{\rm From \ref{stc} it follows that $R^1f_* \rat_X \simeq
j_*R^1\hat{f}_*\rat_{\hat{X}}.$
Note that this implies the Local Invariant Cycle Theorem.
\n
Since $R^2f_* \rat_X=j_*R^2\hat{f}_*\rat_{\hat{X}}\oplus V,$ we have
the coarser decomposition
$$
Rf_* \rat_X[2] \simeq R^0f_*\rat_{X}[2] \oplus R^1f_*\rat_{X}[1]
\oplus R^2 f_*\rat_{X}[0].
$$
In particular, the Leray spectral sequence degenerates at $E_2.$
It is easy to see that the Leray filtration on the cohomology of $X$
is by Hodge substructures:
$$
L^2=f^*H(Y), \qquad L^1= \hbox{\rm Ker \,} { \{f_*:H(X) \to H(Y)\}}.
$$ }
\end{rmk}
\subsection{Smooth maps}
\lambdaabel{subsm}
Even in the case of a smooth fibration $f: X\to Y$
of a surface over a curve, the study of the complex
$Rf_{*}\rat_{X}$ is nontrivial, without any projectivity assumptions.
\begin{ex}
\lambdaabel{hopf}
{\rm
Let $X$ be a Hopf surface.
There is a natural holomorphic smooth
fibration $ f:X\to {\Bbb P}n{1}$ with fibers elliptic curves.
Since $b_{1}(X)=1,$ one sees easily that the
Leray Spectral Sequence for $f$ is not $E_{2}-$degenerate.
In particular, $Rf_{*}\rat_{X} $ is {\em not}
isomorphic to $\oplus_{i}{ R^{i}f_{*}\rat_{X}
[-i]}.$
}
\end{ex}
Let us briefly list {\em some}
of the important properties of a smooth projective map
$f:X \to Y$ of
smooth varieties.
The sheaves $R^{i}f_{*}\rat_{X}$ are locally constant over $Y,$
i.e. they are local systems.
In fact, $f$ is differentiably locally trivial over $Y$
in view of Ehresmann Lemma.
The model for a general decomposition theorem for $Rf_{*}\rat_{X}$
is the following
\begin{tm}
Let $f:X^{n} \to Y^{m}$ be a smooth projective map
of smooth quasi-projective varieties
of the indicated dimensions and $\eta$ be an $f-$ample
line bundle on $X.$
\lambdaabel{del}
\begin{equation}
\lambdaabel{e1}
Rf_{*}\rat_{X} \, \simeq_{D(Y)} \, \oplus{R^{i}f_{*}\rat_{X}
[-i]}.
\end{equation}
\begin{equation}
\lambdaabel{e2}
\eta^{i}: R^{n-m-i}f_{*}\rat_{X} \simeq
R^{n-m+i}f_{*}\rat_{X}, \quad \forall \,i \geq 0.
\end{equation}
\begin{equation}
\lambdaabel{e3}
\mbox{ The local systems $R^{j}f_{*}\rat_{X}$ on $Y$
are semisimple.}
\end{equation}
\end{tm}
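To fix ideas, here is a standard illustration (it is not part of the statement): if $f$ is a smooth projective family of curves of genus $g$ with connected fibers, then $R^{0}f_{*}\rat_{X} \simeq \rat_{Y},$ $R^{1}f_{*}\rat_{X}$ is a local system of rank $2g,$ $R^{2}f_{*}\rat_{X} \simeq \rat_{Y},$ and (\ref{e2}) for $i=1$ is the isomorphism
$$
\eta: R^{0}f_{*}\rat_{X} \, \simeq \, R^{2}f_{*}\rat_{X}
$$
given by cup product with the restriction of $\eta$ to the fibers.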
The first Chern class of the
line bundle $\eta \in H^{2}(X, \rat) =
Hom_{D(X)}(\rat_{X}, \rat[2]),$ defines maps
$$
\eta: \rat_{X} \lambdaongrightarrow
\rat_{X}[2], \quad \eta: Rf_{*}\rat_{X} \lambdaongrightarrow Rf_{*}\rat_{X}[2],
\quad
\eta^{r}: Rf_{*}\rat_{X} \lambdaongrightarrow Rf_{*}\rat_{X}[2r]
$$
and finally
$$
\eta^{r}: R^{i}f_{*}\rat_{X} \lambdaongrightarrow R^{i+2r}f_{*}\rat_{X}.
$$
Theorem \ref{del}.\ref{e2} is then just a re-formulation
of the Hard Lefschetz Theorem for the fibers of $f$
and can be named the Relative Hard Lefschetz Theorem
for smooth maps.
We remind the reader that a functor
${\cal T}: D(Y) \to A$, $A$ an abelian category,
is said to be cohomological (cf. \ci{verd} II, 1.1.5.) if, setting
${\cal T}^i(K)= {\cal T}^0(K[i]),$ to a distinguished triangle
$$
K \to L \to M \stackrel{+1}{\to}
$$
corresponds a long exact sequence in $A$
$$
\to {\cal T}^i(K) \to {\cal T}^i(L) \to {\cal T}^i(M) \to {\cal
T}^{i+1}(K) \to...
$$
The cohomology sheaf functor ${\cal H}^{0}: D(Y) \to S(Y)$
is cohomological.
Noting that ${\cal H}^{i}(Rf_{*}) = R^{i}f_{*},$
Theorem \ref{del}.\ref{e1} can be re-phrased by saying
that $Rf_{*}\rat_{X}$ is decomposable with respect to
the functor ${\cal H}^{0}.$
\n
It is important to note that (\ref{e2}) implies (\ref{e1}) by
the Deligne-Lefschetz Criterion.
\n
Theorem \ref{del}.\ref{e3}
states that every local subsystem ${\cal L} \subseteq
R^{j}f_{*}\rat_{X}$ admits a complement, i.e. a local system ${\cal
L}'$
such that ${\cal L} \oplus {\cal L}' = R^{j}f_{*}\rat_{X}.$
Let us note {\em some} of the important consequences of Theorem
\ref{del}.
\n
The ${\cal H}^{0}-$decomposability (\ref{e1}) of $Rf_{*}\rat_{X}$
implies immediately the $E_{2}-$degeneration of the Leray spectral
sequence,
i.e. of the spectral sequence associated with the
cohomological functor ${\cal H}^{0}:$
\begin{equation}
\lambdaabel{e4}
\mbox{ $H^{p}(Y, R^{q}f_{*}\rat_{X}) \Longrightarrow H^{p+q}(X,
\rat)$ is $E_{2}-$degenerate}.
\end{equation}
This degeneration implies the surjection
\begin{equation}
\lambdaabel{e5a}
H^{k}(X, \rat) \lambdaongrightarrow H^{0}(Y, R^{k}f_{*}\rat_{X}) = H^{k}(X_{y},
\rat)^{{\Bbb P}i_{1}(Y, y)},
\end{equation}
i.e. the so-called Global Invariant Cycle Theorem.
\n
The Theory of MHS allows one to show, using a smooth
compactification of $X,$ that in fact the
monodromy invariants are a Hodge substructure of $H^{k}(X_{y},\rat),$
which as a PHS is independent of $y \in Y$ (Theorem of the Fixed
Part).
In fact, (\ref{e3}) is a consequence of this fact.
In general, if $f$ is not smooth, Theorem \ref{del}
fails completely.
\n
The Relative Hard Lefschetz Theorem (\ref{e2}) fails
due to the presence of singular fibers, i.e.
fibers along which
the differential of $f$ drops rank.
\n
The sheaves $R^{j}f_{*}\rat_{X}$ are no longer locally constant.
Moreover, they are not semisimple in the category of constructible
sheaves: e.g. $j_{!}\rat_{\comp^{*}} \to \rat_{\comp}.$
\n
The following example shows that
the ${\cal H}^{0}-$decomposability (\ref{e1}) fails in general and so
does
the $E_{2}-$degeneration of the Leray Spectral Sequence
(\ref{e4}).
\begin{ex}
\lambdaabel{exnondeg}
{\rm
Let $X$ be the blowing up of $\comp {\Bbb P}n{2}$
along ten
points lying on an irreducible cubic $C'$ and let $C$ be the strict
transform of $C'$ on $X.$ Since $C^{2}=-1$ the curve contracts
to a point under a birational map $f: X\to Y.$
We leave to the reader the task of verifying that
1) the Leray Spectral Sequences for $H^{2}(X,\rat)$
and for $I\!H^{2}(Y, \rat)$
are not $E_{2}-$degenerate and that, though
the Leray spectral sequence always degenerates over suitably small
Euclidean neighborhoods on $Y,$
2) the complex $IC_{Y}$ does not split
as a direct sum of its shifted cohomology sheaves.
}
\end{ex}
The following more general class of examples shows that the failure
of the $E_{2}-$degeneration is very frequent.
\begin{ex}
\lambdaabel{exgen}
{\rm
Let $f: X \to Y$ be a projective resolution of the singularities
of a projective and normal variety $Y$ such that
there is at least one index
$i$ such that the natural MHS
on $H^{i}(Y, \rat)$ is not pure (e.g. $i=2$ in \ref{exnondeg}).
Then the Leray Spectral Sequence for $f$ is not
$E_{2}-$degenerate.
If it were, then the edge sequence would
give an injection of MHS $f^{*}: H^{j}(Y, \rat)
\to H^{j}(X, \rat),$ forcing
the MHS of such a $Y$ to be pure.
}
\end{ex}
However, not everything is lost.
\section{Perverse sheaves and the Decomposition Theorem}
\lambdaabel{macc}
One of the main
ideas leading to the theory of perverse
sheaves is that Theorem \ref{del}, which holds for smooth maps,
can be made to hold for arbitrary
proper algebraic maps provided that it is re-formulated using
the perverse cohomology functor $^{p}\!{\cal H}^{0}$ in place of the
cohomology sheaf functor ${\cal H}^{0}.$
Just as this latter is $\tau_{\lambdaeq 0}\tau_{\geq 0},$ with
$\tau$ the standard truncation functors of a complex,
the perverse cohomology functor will be expressed
as $^{p}\!{\cal H}^{0}= \, ^{p}\!\tau_{\lambdaeq 0} \, \!^{p}\!\tau_{\geq
0},$
where $^{p}\!\tau$ is the so called perverse truncation functor.
Roughly speaking,
the perverse truncation functor (with respect to middle perversity,
which is the only case we will consider)
is defined by gluing standard truncations on the strata,
shifted by a term which depends on the dimension of the stratum.
The choice of the shifting is dictated by the behavior of the
standard
truncation with respect to duality,
as we suggest in \ref{troncodiaz}.
In this context and keeping this in mind,
perverse truncation becomes quite natural.
We believe it can be
useful to give a few details of its construction and an example of
computation,
related to the examples given in section
\ref{psif}. In analogy with the cohomology sheaf functor ${\cal
H}^0,$
the perverse cohomology functor $^{p}\!{\cal H}^{0}$ will be a
cohomological
functor which
takes values in an abelian
subcategory of ${D}^{b}(Y),$
whose objects are the so-called perverse sheaves. For a general proper
map these
objects play the role
played by local systems for smooth maps.
\subsection{Truncation and Perverse sheaves}
\lambdaabel{sps}
Let ${D}^{b}(Y)$ be the bounded derived category of the category
$S(Y)$
of sheaves of rational vector spaces on $Y.$
We are interested in the full subcategory $D(Y)$ of those complexes
whose cohomology sheaves are constructible. This means that,
given an object $F$ of $D(Y),$ there is an algebraic Whitney
stratification $Y=\coprod{S_{l}},$ depending on $F,$ such that
${\cal H}^{j}(F)_{|S_{l}}$ is a finite rank local system.
By the Thom Isotopy Lemmata, $Rf_{*}\rat_{X}$,
and in fact any other complex appearing in this paper, is
an object of $D(Y).$ One is interested in direct sum decompositions
of this complex, in the geometric meaning of the summands
and in the consequences, both theoretical and practical,
of such splittings.
We now define the $t$-structure on $D(Y)$ associated with the middle
perversity. Instead of insisting on its axiomatic characterization
(cf.
\ci{bbd}),
we give the explicit construction of
the {\em perverse truncations}
${\Bbb P}td{m}: D(Y) \lambdaongrightarrow D(Y)$,
and ${\Bbb P}tu{m}: D(Y) \lambdaongrightarrow D(Y)$.
These come with natural morphisms
${\Bbb P}td{m}F \lambdaongrightarrow F$ and
$F \lambdaongrightarrow {\Bbb P}tu{m}F.$
\n
We start with the following:
\begin{lm}
\lambdaabel{troncodiaz}
Let $Z$ be nonsingular of complex dimension $r$, and $F \in D(Z)$
with locally constant cohomology sheaves. Then there are
natural isomorphisms:
$$
\tau_{\lambdaeq k} {\cal D}F \simeq {\cal D}\tau_{\geq -k-2r}F
\qquad
\tau_{\geq k} {\cal D}F \simeq {\cal D}\tau_{\lambdaeq -k-2r}F
$$
\end{lm}
{\em Proof.}
Since the dualizing complex is in this case isomorphic to
$\rat_Z[2r],$ it is enough to prove that there are natural
isomorphisms
$$
\tau_{\lambdaeq k} Rhom(F,\rat_Z) \simeq Rhom(\tau_{\geq -k}F, \rat_Z)
\qquad
\tau_{\geq k} Rhom(F,\rat_Z) \simeq Rhom(\tau_{\lambdaeq -k}F, \rat_Z).
$$
We prove the first statement. The proof of the
second is analogous.
Applying $Rhom$ and $\td{k}$ to
the map $F \to \tau_{\geq -k}F,$
we get:
$$
\begin{array}{ccc}
Rhom(\tau_{\geq -k}F, \rat_Z) & \lambdaongrightarrow & Rhom(F, \rat_Z) \\
\uparrow & & \uparrow \\
\td{k} Rhom(\tau_{\geq -k}F, \rat_Z) &
\stackrel{}\lambdaongrightarrow & \td{k} Rhom( F, \rat_Z).
\end{array}
$$
\n
To prove the statement it is enough to show that
the three complexes $ Rhom(\tau_{\geq -k}F, \rat_Z),$
$\td{k} Rhom(\tau_{\geq -k}F, \rat_Z)$ and $ \td{k} Rhom( F, \rat_Z)$
have the same cohomology sheaves.
Since $F$ and $\rat_{Z}$
have locally constant cohomology sheaves,
there are natural isomorphisms of complexes of vector spaces
$Rhom (F,\rat_{Z})_{y} \simeq Rhom(F_{y}, \rat_{y})
\simeq \oplus_i Hom( {\cal H}^{-i}F_{y}, \rat_{y})[-i]$.
The cohomology sheaves of the three complexes, are, therefore,
equal to $Hom( {\cal H}^{-i}F_{y}, \rat_{y})$ for $i\lambdaeq k$ and
vanish otherwise.
\blacksquare
The construction of the perverse truncation is done by induction on
the strata of $Y$ starting from the shifted standard truncation on
the
open stratum $U_{d}.$ In the sequel we will indicate by $U_l$ the
union
of strata of dimension bigger than or equal to $l$.
With a slight abuse of notation, we will write $U_{l+1}=U_l \coprod
S_l,$
with $S_l$ now denoting the union of strata of dimension $l.$
Let
$F \in Ob (D(Y))$ be ${\frak Y}-$constructible for
some stratification $\frak Y= \coprod S_l.$
All the constructions below will lead to
${\frak Y}-$constructible complexes.
We define ${\Bbb P}td{0}^{U_{d}}=\tau_{\lambdaeq -\dim{Y}}$ and
${\Bbb P}tu{0}^{U_{d}}=\tau_{\geq -\dim{Y} }$.
\n
Suppose that ${\Bbb P}td{0}^{U_{l+1}}: D(U_{l+1}) \lambdaongrightarrow D(U_{l+1}) $
and ${\Bbb P}tu{0}^{U_{l+1}}: D(U_{l+1}) \lambdaongrightarrow D(U_{l+1})$
have been defined.
\n
We proceed to define ${\Bbb P}td{0}^{U_l}$ and ${\Bbb P}tu{0}^{U_l}$ on
$U_l=U_{l+1} \coprod S_l$.
Let $i:S_l \to U_l \lambdaongleftarrow U_{l+1}: j $
be the inclusions: the exact triangles
$$
\tau'_{\lambdaeq 0}F \to F \to
Rj_* \,^{p}\tau_{ > 0 }^{U_{l+1}}j^*F \stackrel{[1]}\lambdaongrightarrow
\qquad
\tau''_{\lambdaeq 0}F \to F \to i_* \tau_{>-dim S } i^*F
\stackrel{[1]}\lambdaongrightarrow
$$
and
$$
Rj_!\,^{p}\tau_{ < 0 }^{U_{l+1}}j^!F \lambdaongrightarrow F \lambdaongrightarrow \tau'_{\geq
0}F
\stackrel{[1]}\lambdaongrightarrow
\qquad
i_!\tau_{ < -\dim{S} }i^!F \lambdaongrightarrow F \lambdaongrightarrow \tau''_{\geq 0}F
\stackrel{[1]}\lambdaongrightarrow
$$
define four functors (cf. \ci{bbd},
1.1.10, 1.3.3 and 1.4.10), i.e. the four objects
$\tau'_{\geq 0}F,$ $ \tau'_{\lambdaeq 0}F,$ $\tau''_{\geq 0}F$
and $ \tau''_{\lambdaeq 0}F$
which make the corresponding triangles exact,
are determined up to unique isomorphism.
Define
$$
{\Bbb P}td{0}^{U_{l}}:= \tau''_{\lambdaeq 0} \tau'_{\lambdaeq 0},
\qquad {\Bbb P}tu{0}^{U_{l}}:= \tau''_{\geq 0} \tau'_{\geq 0}.
$$
Define:
$$
{\Bbb P}td{0}:={\Bbb P}td{0}^{U_{0}}, \qquad {\Bbb P}tu{0}:={\Bbb P}tu{0}^{U_{0}}.
$$
We have the following compatibilities with respect to shifts.
$$
{\Bbb P}td{m} (F [l] ) \simeq {\Bbb P}td{m+l} (F) [l], \qquad
{\Bbb P}tu{m} (F [l] ) \simeq {\Bbb P}tu{m+l} (F) [l].
$$
These formulas hold for the ordinary truncation functors as well and
we symbolically
summarize them as follows
$$
(\tau_{m} ( [l] ))[-l] \, = \, \tau_{m+l}.
$$
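For the ordinary truncation functors, for instance, this is just the observation that ${\cal H}^{i}(F[l]) = {\cal H}^{i+l}(F),$ so that
$$
\tau_{\leq m}(F[l]) \, \simeq \, \big(\tau_{\leq m+l}F\big)[l].
$$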
The perverse truncations so defined have the following properties:
\begin{itemize}
\item
By the construction above, if
$F$ is ${\frak Y}-$cc, then so are
${\Bbb P}td{m}F$ and ${\Bbb P}tu{m}F$.
\item
Let $P(Y)$ be the full subcategory of complexes $Q$ such that
$$
\dim \hbox{ Supp }( {\cal H}^{-i}(Q)) \leq i \hbox{ for every } i \in
\zed
$$
and the same holds for ${\cal D}(Q),$ the Verdier dual of $Q$.
$P(Y) $ is an abelian category.
The functor
$$
{\Bbb P}hix{0}{-}: D(Y) \lambdaongrightarrow P(Y), \qquad {\Bbb P}hix{0}{F}: =
{\Bbb P}td{0} {\Bbb P}tu{0}F \simeq {\Bbb P}tu{0} {\Bbb P}td{0} F,
$$
is cohomological.
Define
$$
{\Bbb P}hix{m}{F}:= {\Bbb P}hix{0}{F[m]}.
$$
These functors are called {\em the perverse cohomology functors}.
Any distinguished triangle $F \lambdaongrightarrow G \lambdaongrightarrow H \stackrel{[1]}\lambdaongrightarrow$
in $D(Y)$
gives rise to a long exact sequence in $P(Y)$:
$$
\lambdadots \lambdaongrightarrow {\Bbb P}hix{i}{F} \lambdaongrightarrow {\Bbb P}hix{i}{G} \lambdaongrightarrow
{\Bbb P}hix{i}{H} \lambdaongrightarrow {\Bbb P}hix{i+1}{F} \lambdaongrightarrow \lambdadots.
$$
If $F$ is
${\frak Y}-$cc, then so are ${\Bbb P}hix{m}{F},$ $\forall m \in \zed.$
\item
Poincar\'e- Verdier Duality induces functorial
isomorphisms for $F \in Ob ( D(Y) )$
$${\Bbb P}td{0}{\cal D}F \simeq {\cal D}{\Bbb P}tu{0}F, \qquad {\Bbb P}tu{0}{\cal
D}F
\simeq {\cal D}{\Bbb P}td{0}F
\qquad
{\cal D} ( {\Bbb P}hix{j}{F}) \simeq {\Bbb P}hix{-j}{ {\cal D} (F) }.
$$
This can be seen from the construction above. In fact,
by Lemma \ref{troncodiaz},
the isomorphisms hold for $U=U_{d},$ since
${\Bbb P}td{0}^{U_d}=\tau_{\lambdaeq -\dim{Y}}$ and
${\Bbb P}tu{0}^{U_d}=\tau_{\geq -\dim{Y}}$.
\n
Suppose that
${\Bbb P}td{0}^U{\cal D}\simeq {\cal D}{\Bbb P}tu{0}^U$ and
$ {\Bbb P}tu{0}^U{\cal D} \simeq {\cal D}{\Bbb P}td{0}^U$
for $U=U_{l+1}$. It then follows that the same isomorphisms hold for
$U=U_l.$
In fact, applying the functor ${\cal D}$ to the triangle defining
$\tau'_{\lambdaeq 0}{\cal D}F$,
and the inductive hypothesis ${\Bbb P}td{0}^U{\cal D}\simeq {\cal
D}{\Bbb P}tu{0}^U$,
we get the triangle defining $\tau'_{\geq 0}F$,
so that ${\cal D}\tau'_{\lambdaeq 0}{\cal D}F \simeq \tau'_{\geq 0}F.$
The argument for $\tau''_{\lambdaeq 0}$ is identical. We get
${\cal D}\tau''_{\lambdaeq 0}{\cal D}F \simeq \tau''_{\geq 0}F.$
It follows that
${\cal D} \tau''_{\lambdaeq 0} \tau'_{\lambdaeq 0} \simeq
\tau''_{\geq 0} {\cal D} \tau'_{\lambdaeq 0}\simeq
\tau''_{\geq 0} \tau'_{\geq 0} {\cal D}$
and the first wanted isomorphism follows.
The second is equivalent to the first one. The third
one follows formally:
${\cal D}( {\Bbb P}hix{m}{F}) \simeq
{\cal D} {\Bbb P}td{0} {\Bbb P}tu{0} (F[-m]) \simeq
{\Bbb P}tu{0}{\Bbb P}td{0} ({\cal D}(F) [m]) \simeq
{\Bbb P}hix{-m}{{\cal D}F}.$
\item
For every $F$ and $m$ one constructs,
functorially, a distinguished triangle
$$
{\Bbb P}td{m} F \lambdaongrightarrow F \lambdaongrightarrow {\Bbb P}tu{m+1} F \stackrel{[1]}\lambdaongrightarrow.
$$
\end{itemize}
The objects of the abelian category
$P (Y)$ are called {\em perverse sheaves.}
An object $F$ of $D(Y)$ is perverse if and only if
the two natural maps ${\Bbb P}td{0} F \lambdaongrightarrow F$ and $ F \lambdaongrightarrow {\Bbb P}tu{0} F
$
are isomorphisms.
\begin{ex}
\lambdaabel{semismall}
{\rm Let $f:X \longrightarrow Y$ be a surjective proper map of surfaces, $X$
smooth. The direct image
$Rf_* \rat_X [2]$ is a perverse sheaf.}
\end{ex}
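A brief check (under the additional assumption that $X$ is irreducible, so that the locus of points of $Y$ with one-dimensional fibers is finite): the cohomology sheaves ${\cal H}^{-i}(Rf_* \rat_X [2]) = R^{2-i}f_*\rat_X$ then have support of dimension $\leq i$ for every $i,$ and the same holds for the Verdier dual, since $Rf_* \rat_X[2]$ is self-dual ($X$ smooth, $f$ proper); these are precisely the conditions defining $P(Y)$ in section \ref{sps}.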
\begin{ex}
\lambdaabel{3foldtrunc}
{\rm
To give an example of how the truncation functors can be computed
from
the construction given above, let us examine
the example of section \ref{lde}.
The assumptions in \ref{3foldass} are in force and we use the same
notation.
We show that:
$$
{\Bbb P}td{0}Rf_*
\rat_X[3]\simeq \tau_{\lambdaeq 0}Rf_* \rat_X[3],
\qquad
{\Bbb P}td{-1}Rf_* \rat_X[3]=H_4(D)_y[1].
$$
Since $^{p}\!\tau^{Y-y}_{> 0}=\tau_{> -3}$
and $j^*Rf_*\rat_X[3]=\rat_{Y \setminus y}[3],$
we have $\tau '_{\lambdaeq 0}Rf_* \rat_X[3]= Rf_* \rat_X[3].$
The perverse truncation
${\Bbb P}td{0}Rf_* \rat_X[3]=\tau ''_{\lambdaeq 0} \tau'_{\lambdaeq 0}Rf_* \rat_X[3]=
\tau ''_{\lambdaeq 0}Rf_* \rat_X[3]$
is computed by the triangle
$$
\tau ''_{\lambdaeq 0}Rf_* \rat_X[3] \lambdaongrightarrow Rf_* \rat_X[3] \lambdaongrightarrow i_*\,
\tau_{>0}\,i^*Rf_* \rat_X[3]
\stackrel{+1}{\lambdaongrightarrow}.
$$
Since
$i^*Rf_* \rat_X[3] = \oplus_{j} H^{3-j}(D)_y[j],$ we have
$i_* \tau_{>0}i^*Rf_* \rat_X[3] = H^4(D)_y[-1]$, so that
$$
{\Bbb P}td{0}Rf_* \rat_X[3]\, \simeq \,
\mbox{Cone} \, \{Rf_* \rat_X[3] \to H^4(D)_y[-1] \} \, \simeq
\, \tau_{\lambdaeq 0}Rf_* \rat_X[3].
$$
Keeping in mind the truncation rules, we have the triangle
$$
\tau '_{\lambdaeq -1}Rf_* \rat_X[3] \lambdaongrightarrow Rf_* \rat_X[3] \lambdaongrightarrow Rj_*
\,^{p}\!\tau_{>-1}^{Y-y}
j^* \rat_Y[3] =Rj_* j^*\rat_Y[3] \stackrel{+1}{\lambdaongrightarrow}
$$
from which we deduce that
$$
\tau '_{\lambdaeq -1}Rf_* \rat_X[3] \, \simeq \,
i_!i^! Rf_* \rat_X[3] \, \simeq \, \oplus_{j} H_j(D)_y[j-3].
$$
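To spell out the last isomorphism: ${\Bbb H}^{k}\big(i^! Rf_* \rat_X[3]\big) \simeq H^{k+3}_{D}(X) \simeq H_{3-k}(D),$ by Lefschetz Duality on the smooth threefold $X$ and the compactness of $D;$ this is exactly the summand $H_j(D)_y$ placed in degree $3-j.$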
The truncation
${\Bbb P}td{-1}Rf_* \rat_X[3]=\tau ''_{\lambdaeq -1} \tau'_{\lambdaeq -1}Rf_*
\rat_X[3]=
\tau ''_{\lambdaeq -1}(\oplus_{j}{ H_j(D)_y[j-3])}$
is computed by the triangle
$$
\tau ''_{\lambdaeq -1}(\oplus_{j}{ H_j(D)_y[j-3])} \lambdaongrightarrow \oplus_{j}{
H_j(D)_y[j-3]}
\to i_* \tau_{>-1}(\oplus_{j}{ H_j(D)_y[j-3] ) } \stackrel{+1}{\lambdaongrightarrow}
$$
from which the conclusion follows: since $H_j(D)_y[j-3]$ is concentrated in
degree $3-j,$ the truncation $\tau''_{\leq -1}$ retains only the summands with
$j \geq 4,$ i.e. $H_4(D)_y[1].$
}
\end{ex}
It is remarkable that the category of Perverse sheaves is Artinian
and
Noetherian, that
its simple objects can be completely characterized and
that they have an important geometric
meaning:
they are the intersection cohomology complexes.
\subsection{The simple objects of $P(Y)$}
\lambdaabel{simple}
Goresky and MacPherson introduced the intersection cohomology
groups of $Y$ for an arbitrary perversity. Here we deal with the
case
of middle perversity. These groups were first defined as
the homology of a chain sub-complex of the complex of geometric
chains with twisted coefficients on $Y.$
Later, following a suggestion by Deligne, they realized these groups
as the hypercohomology
of what they called the intersection cohomology complexes with
twisted
coefficients of $Y.$
\n
These complexes are the building blocks of $P(Y).$
They are special examples of perverse sheaves and every perverse sheaf
can be exhibited as a finite series of non trivial extensions
of objects of this kind supported on closed subvarieties of $Y.$
Let $Z \subseteq Y$ be a closed subvariety,
$Z^{o} \subseteq Z_{reg} \subseteq Z $
be an inclusion of Zariski-dense open subsets and $L$
be a local system on $Z^{o}.$
Goresky-MacPherson associate with this data
the intersection cohomology complex
$IC_{Z}(L)$ in $P(Z).$
\n
Up to isomorphism, this complex
is independent of the choice
of $Z^{o}:$ if $L$ and $L'$ are local systems on
$Z^{o}$ and ${Z^{o}}'$ respectively and $L_{|Z^{o}\cap {Z^{o}}' }
\simeq
L'_{| Z^{o}\cap {Z^{o}}' },$ then the associated intersection
cohomology complexes on $Z$ are canonically isomorphic.
\n
The complex $IC_{Z}(L),$ when viewed as a complex on $Y,$ is perverse
on $Y.$
\n
The intersection cohomology complex
of $Y$ is defined to be $IC_{Y}:= IC_{Y}(\rat_{Y_{reg}}).$
If $Y$ is smooth, or a rational homology manifold, then
$IC_{Y}\simeq \rat_{Y}[\dim{Y}].$
\n
If $Z$ is smooth and $L$ is a local system on $Z,$ then
$IC_{Z}(L) \simeq L[\dim{Z}].$
\begin{pr}
\lambdaabel{so}
The simple objects in $P(Y)$ are precisely the ones of the form
$IC_{Z}(L),$ $L$ simple on $Z^{o}.$
In particular, if $L$ is simple, then
$IC_{Z}(L)$ does not decompose into non-trivial
direct summands in $D(Y).$
\n
The semisimple objects of $P(Y)$ are finite direct sums
of such intersection cohomology
complexes on possibly differing subvarieties.
\end{pr}
Every perverse sheaf $Q \in P(Y)$ is supported on a finite union
of closed subvarieties of $Y.$ Let
$Z$ be any one of them. There is a Zariski-dense
open subset
$Z^{o}\subseteq Z_{reg},$ such that
$Q_{|Z^{o}} \simeq L[\dim{Z}],$
where $L$ is a local system on $Z^{o}.$
The object $Q$
admits a finite filtration where one of the quotients
is $IC_{Z}(L)$ and all the others are all supported
on $Supp(Q) \setminus Z^{o}.$ It follows that
$Q$ admits a finite filtration
where the quotients are intersection cohomology complexes
supported on closed subvarieties of $Y.$
An intersection cohomology complex $IC_{Z}(L)$ is characterized by
its
not admitting subquotients supported on smaller dimensional
subspaces of $Z.$ Its eventual splitting is entirely due
to a corresponding splitting of $L.$
Let us define the intersection cohomology complexes.
Assume ${\frak Y}$ is a stratification and $L$ is a local system
on the open stratum $U_d.$ We start by defining $IC_{U_d}(L):=L[\dim Y].$
Now suppose inductively that $IC_{U_{l+1}}(L)$ has been defined on
$U_{l+1}$
and we define it on $U_l$ by
$$
IC_{U_l}(L):=\tau_{\lambdaeq -l-1}Rj_*IC_{U_{l+1}}(L).
$$
Let us give formulae for $IC_{Y}(L)$ when $Y$ and $L$ have isolated
singularities. It suffices to work in the Euclidean topology.
\n
Let $(Y,p)$ be a germ of an isolated singularity,
$j: U: = Y \setminus p \to Y$ be the open embedding
and $L$ be a local system on $U.$
We have
\begin{equation}
\lambdaabel{icis}
IC_{Y}(L) = \td{-1} (Rj_{*}L[\dim{Y}]).
\end{equation}
If $\dim{Y} =1,$ then
$IC_{Y}(L) = j_{*}L[1].$
The stalk at $p$ is the space of invariants of $L,$ i.e.
$H^{0}(U, L).$
\n
In general, when $\dim{Y} \geq 2,$ then
$IC_{Y}(L)$ is a complex, not a sheaf.
If $L$ is simple, then $IC_{Y}(L)$ is simple
and does not split non-trivially in $D(Y).$
The
cohomology sheaves ${\cal H}^{j}(IC_{Y}(L))$
are non trivial only for $j \in [-\dim{Y}, -1]$ and we have
\begin{equation}
\lambdaabel{csisl}
{\cal H}^{-\dim{Y}}(IC_{Y}(L)) = j_{*}L, \qquad
{\cal H}^{-\dim{Y}+l}(IC_{Y}(L)) = H^{l}(U,L)_{p}, \; 1 \lambdaeq l \lambdaeq
\dim{Y} -1,
\end{equation}
where $V_{p}$ denotes a skyscraper sheaf at $p\in Y$
with stalk $V.$
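For instance, if $(Y,p)$ is a surface germ, the formula reads
${\cal H}^{-2}(IC_{Y}(L)) = j_{*}L$ and ${\cal H}^{-1}(IC_{Y}(L)) = H^{1}(U,L)_{p}.$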
In order to familiarize ourselves with these complexes, we compute
two important examples:
\begin{ex}
\lambdaabel{3folddoublepoint}
{\rm We consider a threefold $Y$ with an ordinary double point $y$
and with associated link ${\cal L}.$
Let $j:Y\setminus y \to Y$ be the open embedding, so that
(\ref{icis}) gives
$$
IC_Y= \tau_{\lambdaeq -1}Rj_*\rat[3].
$$
The cohomology sheaves at $y$ are
$$
{\cal H}^{k}(IC_{Y}) = H^{k+3}({\cal L})_{y}, \hbox{ for } k
\leq -1, \qquad {\cal H}^{k}(IC_{Y})= 0
\hbox{ otherwise. }
$$
The singularity is analytically equivalent to a cone over a smooth
quadric
in projective space,
hence its link is homeomorphic to the $S^1-$bundle over $S^2 \times
S^2$
with Chern class $(1,1).$
The long exact sequence for this $S^1-$fibration gives
$$
H^k({\cal L})= \rat \hbox{ for } k=0,2,3,5 \qquad H^k({\cal L})= 0
\hbox{ otherwise, }
$$
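Explicitly, one may use the Gysin sequence of the $S^1-$bundle,
$$
\cdots \to H^{k-2}(S^2 \times S^2) \stackrel{\cup (1,1)}{\longrightarrow}
H^{k}(S^2 \times S^2) \to H^{k}({\cal L}) \to H^{k-1}(S^2 \times S^2) \to \cdots,
$$
in which cup product with the Euler class $(1,1)$ is injective in degree $0$
and surjective with one-dimensional kernel in degree $2.$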
which in turn implies that
$$
{\cal H}^{k}(IC_{Y}) = \rat \hbox{ for } k= -3, -1, \qquad
{\cal H}^{k}(IC_{Y}) = 0 \hbox{ otherwise. }
$$
We have a triangle in $D(Y),$ (not of perverse sheaves)
$$
\rat_Y[3] \to IC_Y \to \rat_y[1] \stackrel{+1}{\to}
$$
The fact that
${\cal H}^{-1}(IC_{Y}) = \rat_y $ should be compared with the
existence
of a small resolution
with fiber a projective line over the singular point, and the
statement
of the Decomposition Theorem \ref{dtbbd}.}
\end{ex}
\begin{ex}
\lambdaabel{cksrecipe}
{\rm
Let $Y=\comp^2,$ and $L$ be a local system on $\comp^2 \setminus (
x_1x_2=0)$
defined by the two monodromies $T_1$ and $T_2$ acting
on the vector space $V=L_{p},$ the stalk of $L$ at $p=(1,1) \in
\comp^{2}.$
We first determine the intersection cohomology complex
over $\comp^2 \setminus \{0\}.$
Denoting by $j:\comp^2 \setminus \{ x_1x_2=0\} \to \comp^2 \setminus
\{0\}$
the natural map, we have
$IC_{\comp^2 \setminus \{0\} }(L)=\tau_{\lambdaeq -2}Rj_*L[2]=(j_*L) [2].$
Denoting by
$j':\comp^2 \setminus \{0\} \to \comp^2$ the natural map, we have
$$
IC_{\comp^2}(L)\, = \, \tau_{\leq -1}Rj'_* IC_{\comp ^2 \setminus
\{0\} }(L)
\, =\, \tau_{ \leq -1}Rj'_*(j_*L [2]).
$$
In order to determine the cohomology sheaves of $IC_{\comp^2}(L),$
we compute
$H^i(\comp^2 \setminus \{0\}, j_*L)$ for $i=0,1.$
More precisely, we should determine these groups for a fundamental
system
of neighborhoods of the origin; however the cohomology groups are in
fact constant.
Set $N_1=T_1-Id,$ $N_2=T_2-Id.$
\n
We have
$H^0(\comp^2 \setminus \{0\}, j_*L)=H^0(\comp^2 \setminus \{
x_1x_2=0\},L)
= \hbox{\rm Ker \,} { N_1} \cap \hbox{\rm Ker \,} { N_2}=V^{{\Bbb P}i_1} .$
Since $j_*L= \tau_{\lambdaeq 0}Rj_*L,$ and fundamental deleted
neighborhoods
around
the axes are homotopic to circles, so that
$$
{\cal H}^{i}(Rj_*L) =0 \hbox{ for } i \geq 2,
$$
we have the following exact triangle in $D(\comp^2 \setminus \{0\})$
$$
j_*L =\tau_{\lambdaeq 0}Rj_*L \lambdaongrightarrow Rj_*L \lambdaongrightarrow
{\cal H}^{1}(Rj_*L)[-1] \stackrel {+1} {\lambdaongrightarrow}.
$$
The sheaf
${\cal H}^{1}(Rj_*L)$ is the local system on
$( x_1x_2=0) \setminus \{(0,0) \}=D_1 \coprod D_2$,
with fiber $ \hbox{\rm Coker \,} { N_1}$ and monodromy $T_2$ on $D_1$, and
fiber $ \hbox{\rm Coker \,} { N_2}$ and monodromy $T_1$ on $D_2$.
Since $\comp^2 \setminus \{ x_1x_2=0\}$ retracts to a torus $T^2$,
the cohomology of $L$ is isomorphic to the group cohomology of $\zed
^2$
with values in $V$ as a $\zed ^2$-module via the monodromies
$T_1,T_2,$
which can be computed by the Koszul complex (see for instance
\ci{weibel} )
$$
0 \lambdaongrightarrow V \stackrel{{\Bbb P}hi}{\lambdaongrightarrow}
V \oplus V \stackrel{ {\Bbb P}si}{\lambdaongrightarrow}V {\lambdaongrightarrow} 0,
$$
with
$$
{\Bbb P}hi(v)= (N_1(v),N_2 (v)) \qquad {\Bbb P}si(v_1,v_2)=N_2(v_1) -N_1(v_2).
$$
The long exact sequence associated to the exact triangle above gives
$$
{\cal H}^{-1}(IC_Y(L))_0 \simeq
H^1(\comp^2 \setminus \{0\}, j_*L) \,= \,
\frac{\{\,(N_1(v_1),N_2(v_2))\hbox{ such that }N_1N_2(v_1-v_2)=0 \,\}}
{ \{\,(N_1( v),N_2( v))\,:\, v \in V\,\}}.
$$
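As a quick consistency check: for the trivial local system, $N_1=N_2=0,$ both the numerator and the denominator vanish, so that ${\cal H}^{-1}(IC_Y(L))_0 =0,$ as it must be, since in that case $IC_{\comp^2}(L) \simeq \rat_{\comp^2}[2].$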
More generally, a similar recipe holds for the cohomology sheaves
of the intersection cohomology complex of a local system
defined on the complement of a normal crossing, see \ci{cks}.}
\end{ex}
\subsection{Decomposability, $E_{2}-$degenerations
and filtrations}
\lambdaabel{de2}
\begin{defi}
\lambdaabel{hdec}
{\rm
Let ${\cal H}= {\cal H}^{0}$ be the sheaf cohomology functor.
We say that $F$ in $D(Y)$ is {\em ${\cal H}-$decomposable} if
$$
F \simeq_{D(Y)} \bigoplus_{i}{ {\cal H}^{i}(F)[-i] }
$$
We say that $F$ in $D(Y)$ is {\em $^{p}{\cal H}-$decomposable} if
$$
F \simeq_{D(Y)} \dsdix{i}{F}.
$$
}
\end{defi}
If $F$ is ${\cal H}-$decomposable, then the spectral sequence
$$
{H}^{p}( Y, {\cal H}^{q}(F) ) \Longrightarrow {\Bbb H}^{p+q}(Y,F)
$$
is $E_{2}-$degenerate.
This spectral sequence is the Leray Spectral Sequence when
$F= Rf_{*}(G).$ In this case the corresponding filtration
is called the Leray filtration.
\n
The analogous statements hold for $^{p}\!{\cal H}-$decomposability.
The corresponding spectral sequence is called the Perverse
Leray Spectral Sequence:
$$
{\Bbb H}^{p}(Y, ^{p}{\cal H}^{q}(Rf_{*}G) ) \Longrightarrow {\Bbb
H}^{p+q}(X,G)
$$
and the corresponding filtration is called the perverse filtration.
\begin{defi}
\lambdaabel{pf}
{\rm
Let $f: X \to Y$ be a map, $n = \dim{X}.$
The perverse filtration $H^{n+j}_{\lambdaeq b}(X) \subseteq H^{n+j}(X),$
$
b,j \in \zed$
is defined to be the perverse filtration
on ${\Bbb H}^{j}(Y, Rf_*\rat_{X}[n]).$
}
\end{defi}
It coincides, up to a shift, with the Leray filtration, when $f$ is
smooth.
If these decomposing isomorphisms exist, they are seldom unique.
We now give the statement (not in the most general form)
of one of the more general criteria for decomposability,
see \ci{dess} and \ci{shockwave}:
\begin{tm}
\lambdaabel{delilefsdege}
{\bf \rm (Deligne degeneration criterion.)}
Let $K$ be an object of $D^b(Y)$, and let
$\eta \in H^2(X).$ Suppose that $\eta^l :
{\Bbb P}hix{-l}{K} \to {\Bbb P}hix{l}{K}$
is an isomorphism for all $l.$
Then $K$ is $^{p}\!{\cal H}-$decomposable.
The same statement holds if we consider the functor ${\cal H}$.
\end{tm}
\begin{ex}
\lambdaabel{3foldancor}
{\rm By the computation done in \ref{3foldtrunc},
we have the following description of the perverse filtration
for the resolution of a threefold:
$$
H^{i}_{\lambdaeq -2}(X)=\{0\}, \qquad H^2_{\lambdaeq -1}(X)= \hbox{\rm Im \,} \{H_4(D) \to
H^2(X)\}, \qquad
H^i_{\lambdaeq -1}(X)=0 \hbox{ otherwise, }
$$
$$
H^4_{\lambdaeq 0}(X)= \hbox{\rm Ker \,} \{H^4(X) \to H^4(D)\}, \qquad
H^i_{\lambdaeq 0}(X)=H^i(X) \hbox{ otherwise },
$$
$$
H^i_{\leq 1}(X)=H^i(X) \hbox { for all } i.
$$
The condition \ref{delilefsdege}, that
$$\eta:
{\Bbb P}hix{-1}{Rf_* \rat_X[3]}=
H_4(D)_y \lambdaongrightarrow H^4(D)_y={\Bbb P}hix{1}{Rf_* \rat_X[3]}
$$
be an isomorphism, is just
the non degeneracy of the intersection form
$\int_X c_1(\eta) \wedge [D_i]\wedge[D_j].$
Note that in this case, the explicit description makes it clear that
the perverse filtration is given by Hodge substructures.}
\end{ex}
\subsection{The Decomposition Theorem of
Beilinson, Bernstein, Deligne and Gabber}
\lambdaabel{sdtbbd}
We can now state the generalization of
Deligne's Theorem \ref{del}
to the case of arbitrary proper maps.
Recall that if $X$ is smooth, then $IC_{X} = \rat_{X}[n].$
\begin{tm}
\lambdaabel{dtbbd}
Let $f:X \to Y$ be a proper map of algebraic
varieties. Then
\begin{equation}
\lambdaabel{fd}
\mbox{the complex $Rf_{*}IC_{X} \simeq
\dsdix{i}{Rf_{*}IC_{X}}$ is $^p\!{\cal H}-$decomposable }
\end{equation}
The complexes
${\Bbb P}hix{j}{Rf_{*}IC_{X}}$ are semisimple, i.e.
there is a canonical isomorphism
\begin{equation}
\lambdaabel{semisi}
{\Bbb P}hix{j}{Rf_{*}IC_{X}} \,\simeq_{P(Y)} \,
\oplus{IC_{Z_{a}}(L_{a} )}
\end{equation}
for some finite collection, depending on $j,$
of semisimple local systems $L_{a}$ on smooth
\underline{distinct} varieties $Z_{a}^{o} \subseteq Z_{a} \subseteq Y.$
\n
Let $\eta$ be an $f-$ample line bundle on $X.$
Then
\begin{equation}
\lambdaabel{rhl}
\eta^{r} \, : \, {\Bbb P}hix{-r }{ Rf_{*}IC_{X} } \, \simeq \,
{\Bbb P}hix{r }{ Rf_{*}IC_{X} }.
\end{equation}
\end{tm}
The Verdier Duality functor is an autoequivalence ${\cal D}: D(Y)
\to D(Y)$ which preserves $P(Y)$ and for which
one has
$$
{\cal D} \circ \, ^{p}\!{\cal H}^{-j} \, \simeq \,
^{p}\!{\cal H}^{j} \circ {\cal D}.
$$
This fact
implies that
the summands appearing in the semisimplicity statement
for $j$ are pairwise isomorphic to the ones appearing
for $-j$ and that the local systems $L$ are self-dual.
Theorem \ref{dtbbd} is the deepest known fact concerning the homology
of algebraic maps.
\noindent
The original proof uses algebraic geometry in positive characteristic
in an essential way.
\n
M. Saito has given a transcendental proof of a more general statement
concerning his mixed Hodge modules in the series of papers
\ci{samhp}, \ci{samhm}, \ci{samhp}.
\n
We give a proof for the
push-forward of intersection cohomology (with constant coefficients)
first in the case of semismall maps (cf. \ci{demigsemi}) and
then for arbitrary maps in \ci{decmightam}. Though at present
our methods do not afford results concerning the push-forward with
more general coefficients, they give new and precise results on the
perverse filtration and on the refined intersection forms.
\n
C. Sabbah \ci{sa} has recently proved a decomposition theorem for
push-forwards of semisimple local systems.
\begin{rmk}
\lambdaabel{infatt}
{\em It is now evident that the computations in \ref{fsts} and
\ref{fstc}
establish the Decomposition Theorem for maps from a smooth surface.
In the case of the proper birational map $f:X \to Y$ of \ref{fsts},
in fact,
the complex $Rf_* \rat_X[2]$ is perverse, as observed in
\ref{semismall}, and \ref{tmrs}
states that it splits into $IC_Y$ and $R^2f_* \rat_X[0].$
In the case of the family of curves treated in \ref{fstc} we have that
$j_*T^0[2]={\Bbb P}hix{-1}{ Rf_{*}\rat_{X}[2] }[1],$ and
$j_*T^2[0]={\Bbb P}hix{1}{ Rf_{*}\rat_{X}[2] }[-1],$ and we
showed in \ref{rhl1} that
$\eta: j_*T^0 \to j_*T^2$ is an isomorphism.
The perverse sheaf $P$ splits, see \ref{semisimp1}, into
$j_*T^1[1]=IC_Y(T^1)$
and $V$, which is concentrated at points. }
\end{rmk}
\begin{rmk}
\lambdaabel{infatt3fold}
{\em For the case of the
resolution of a threefold with isolated singularities,
whose Hodge theory has been
treated in \ref{lde}, we have,
as seen in \ref{3foldtrunc},
$${\Bbb P}hix{-1}{ Rf_{*}\rat_{X}[3] }\simeq H_4(D)_y, \qquad
{\Bbb P}hix{1}{ Rf_{*}\rat_{X}[3] }\simeq H^4(D)_y \simeq \eta \wedge
H_4(D)_y,$$
and we have the splitting
$$
{\Bbb P}hix{0}{ Rf_{*}\rat_{X}[3] }\simeq IC_Y \oplus H_3(D)_y.
$$
Similarly, for the 4-fold with isolated singularities, see \ref{lde4},
$${\Bbb P}hix{-2}{ Rf_{*}\rat_{X}[4] }\simeq H_6(D)_y, \qquad
{\Bbb P}hix{-1}{ Rf_{*}\rat_{X}[4] }\simeq H_5(D)_y ,$$
$${\Bbb P}hix{2}{ Rf_{*}\rat_{X}[4] }\simeq H^6(D)_y\simeq \eta^2 \wedge
H_6(D)_y, \qquad
{\Bbb P}hix{1}{ Rf_{*}\rat_{X}[4] }\simeq H^5(D)_y \simeq \eta \wedge
H_5(D)_y,$$
and we have the splitting
$$
{\Bbb P}hix{0}{ Rf_{*}\rat_{X}[4] }\simeq IC_Y \oplus H_4(D)_y.
$$}
\end{rmk}
\subsection{Results on intersection forms }
\lambdaabel{inr}
In this section we list some of the results of
\ci{decmightam} which are related to the theme of this paper.
For simplicity, we state them in the special case when $f: X \to Y$
is a map of projective
varieties, $X$ smooth. Let $\eta$ and $A$
be ample line bundles on $X$ and $Y$ respectively, $L:= f^{*}A.$
\begin{tm}
\lambdaabel{uf}
For $l\geq 0$ and $b\in \zed,$ the subspaces given by the perverse
filtration (cf. \ref{de2})
$$
H^{l}_{\lambdaeq b}(X) \, \subseteq\, H^{l}(X)
$$
are pure Hodge sub-structures. The quotient spaces
$$
H^l_b(X)\, = \, H^{l}_{\lambdaeq b}(X) /H^{l}_{\lambdaeq b-1}(X)
$$
inherit a pure Hodge structure
of weight $l.$
\end{tm}
The cup product with $\eta$ verifies
$\eta \, H^l_{\lambdaeq a}(X) \subseteq H^{l+2}_{\lambdaeq a+2}(X)$
and induces maps, still denoted
$\eta: H^l_{a}(X) \to H^{l+2}_{a+2}(X)$.
The cup product with $L$ is compatible with the Decomposition Theorem
\ref{dtbbd}
and induces maps $L: H^l_{a}(X) \to H^{l+2}_{a}(X).$
\n
These maps satisfy graded Hard Lefschetz Theorems (cf.
\ci{decmightam}, Theorem 2.1.4).
\n
Define
$P^{-j}_{-i}:= \hbox{\rm Ker \,} {\, \eta^{i+1}} \cap \hbox{\rm Ker \,} {\, L^{j+1}}
\subseteq H^{n-i-j}_{-i}(X),$
$i,\,j \geq 0$ and $P^{-j}_{-i} :=0$
otherwise.
In the same way in which the classical Hard Lefschetz implies
the Primitive Lefschetz Decomposition for the cohomology of $X,$
the graded Hard Lefschetz Theorems imply the
double direct sum decomposition stated in the following theorem.
\begin{tm}
\lambdaabel{etaldecompo}
Let $i, \, j \in \zed.$
There is a Lefschetz-type direct sum decomposition into
pure Hodge sub-structures of weight $(n-i-j),$
called the $(\eta,L)-$decomposition:
$$
H^{n-i-j}_{-i}(X) = \bigoplus_{l,\,m\, \in \zed }{
\eta^{-i+l} \, L^{-j+m} \, P^{j-2m}_{i-2l}.}
$$
\end{tm}
One can define bilinear forms
$S^{\eta L}_{ij}$ on $H^{n-i-j}_{-i}(X)$ by
modifying the Poincar\'e pairing
$$
S^{\eta L}_{ij} ([\alpha], [\beta] ) \, := \,
\int_X{ \eta^i \wedge L^j \wedge \alpha \wedge \beta}
$$
and descending it to the graded groups.
These forms are non degenerate. In fact their signature
can be determined in the following generalization of the Hodge
Riemann relations.
\begin{tm}
\lambdaabel{tmboh}
The $(\eta,L)-$decomposition of Theorem \ref{etaldecompo} is orthogonal with respect
to $S^{\eta L}_{ij}.$ The forms $S^{\eta L}_{ij}$
induce polarizations
of each $(\eta,L)-$direct summand.
\end{tm}
The homology groups $H^{BM}_{*}(f^{-1}(y))=H_{*}(f^{-1}(y)),$
$y \in Y,$
are filtered by virtue of the decomposition theorem (one may call this
the perverse filtration).
The natural cycle class map
$cl: H^{BM}_{n-*}(f^{-1}(y)) \to H^{n+*}(X)$ is filtered strict.
The following generalizes the Grauert Contractibility Criterion.
\begin{tm}
\lambdaabel{nhrbr}
Let $b \in \zed$, $y \in Y$.
The natural class map
$$
cl_{b}: H^{BM}_{n-b,b}(f^{-1}(y)) \lambdaongrightarrow H^{n+b}_b(X)
$$
is injective and identifies
$H^{BM}_{n-b,b}(f^{-1}(y))\subseteq \hbox{\rm Ker \,} {\, L}\subseteq
H^{n+b}_b(X)$ with a pure Hodge substructure, compatibly with
the $(\eta,L)-$decomposition. Each $(\eta,L)-$direct summand
of $H^{BM}_{n-b,b}(f^{-1}(y))$ is polarized up to sign
by $S^{\eta L}_{-b,0}.$
\n
In particular, the restriction of $S^{\eta L}_{-b,0}$ to
$H^{BM}_{n-b,b}( f^{-1} (y ))$
is non degenerate.
\end{tm}
By intersecting in $X$ cycles supported on $f^{-1}(y),$
we get the refined intersection form (see section \ref{psif})
$H^{BM}_{n-*}(f^{-1}(y)) \to H^{n+*}(f^{-1}(y))$
which is filtered strict as well.
\begin{tm}
\lambdaabel{rcffv}
({\bf The Refined Intersection Form Theorem})
Let $b \in \zed$, $y \in Y$.
The graded refined intersection form
$$
H^{BM}_{n-b,a}(f^{-1}(y)) \longrightarrow H^{n+b}_{a}(f^{-1}(y))
$$
is zero if $a\neq b$ and it is an isomorphism if $a=b.$
\end{tm}
We have seen in earlier sections how these results can be made
explicit in the case of
surfaces, threefolds and fourfolds.
For more applications in any dimension see \ci{decmightam}.
In fact, the method of proof of the results stated
in this section is inspired by the low
dimensional examples
of surfaces, threefolds and fourfolds.
\subsection{The decomposition mechanism}
\label{decmec}
It is quite hard to describe what kind of geometric phenomena
are expressed by the Decomposition Theorem.
The complex $Rf_* \rat_X$ essentially
describes $H^*(f^{-1}(U))$ for any neighborhood $U$ of a point $y.$
We gain some geometric insight if we represent, via Poincar\'e
duality, the cohomology classes with Borel-Moore cycles in $f^{-1}(U).$
Let $S$ be the stratum containing $y.$ By the exact sequence
$$
H_*^{BM}(f^{-1}(U \cap S)) \stackrel{i_*}{\to}
H_*^{BM}(f^{-1}(U)) \stackrel{j^*}{\to}
H_*^{BM}(f^{-1}(U \setminus S)),
$$
the Borel--Moore cycles in $f^{-1}(U)$ are of two kinds:
those in $ \hbox{\rm Im \,} i_*$, which are homologous to cycles supported
on the inverse image of the stratum $S,$ and those whose restriction to
$f^{-1}(U \setminus S)$ is not trivial.
In general, $i_*$ is not injective and $j^*$ is not surjective:
there are non trivial cycles in $f^{-1}(U \cap S)$
which become homologous to zero in $f^{-1}(U )$, and there are
cycles in $f^{-1}(U \setminus S)$ which cannot be closed
to cycles in $f^{-1}(U).$
The Decomposition Theorem gives strong information on both types.
The first deep aspect of the Theorem is that the subspace
$ \hbox{\rm Im \,} i_*$ has a uniform behavior for all projective maps,
related to the non degeneracy of the intersection forms.
For instance, we already noticed in \ref{link}
how the Grauert Theorem \ref{tmgra} implies that the classes
of disks transverse to exceptional curves are homologous to
linear combinations of the classes of these curves.
Such non degeneracy results, see \ref{nhrbr}, \ref{rcffv},
are peculiar to algebraic maps and stem from ``weight'' considerations,
either in characteristic $0$ (Hodge theory) or in positive characteristic
(weights of Frobenius, cf. \ci{bbd}).
The Decomposition Theorem, though, contains other deep information.
Since $Rf_* \rat_X$ splits as a direct sum of terms associated
with the strata, we have a splitting map, which can be made canonical after
an ample line bundle $\eta$ on $X$ has been chosen, from the subspace
$ \hbox{\rm Im \,} { j^* } \subseteq H_*^{BM}(f^{-1}(U \setminus S))$ to
$H_*^{BM}(f^{-1}(U))$; i.e., the following sequence is split exact:
$$
0 \longrightarrow \hbox{\rm Im \,} {i_*} \longrightarrow H_*^{BM}(f^{-1}(U)) \longrightarrow \hbox{\rm Im \,} {j^*} \longrightarrow 0.
$$
The image of this map defines a subspace
of $H_*^{BM}(f^{-1}(U))$ which is complementary to $ \hbox{\rm Im \,} i_*$
and consists of classes which are closures of some Borel--Moore cycles
in $f^{-1}(U \setminus S).$
The deep fact here is that these cycles are governed by the
intersection cohomology complex construction on $Y;$ each stratum having $S$
in its closure contributes to $H_*^{BM}(f^{-1}(U))$ via the intersection
cohomology of a local system on the stratum.
\section{Grothendieck motive decomposition for maps of threefolds}
\label{gmdmt}
We assume we are in the situation \ref{3foldass}.
Again, this is for ease of exposition only. See Remark
\ref{rmkfi}.
We have already shown that
$$
H^{2}_{-1}(X) \, = \, \hbox{\rm Im \,} \{ H_{4}(D) \to H^{2}(X) \} \, =: \,
\hbox{\rm Im \,} {i_{*}},
$$
$$
H^4_{0}(X)= \hbox{\rm Ker \,} { \{H^4(X) \to H^4(D)\}} \, =: \, \hbox{\rm Ker \,} { i^*}.
$$
The choice of an ample line bundle $\eta$ allows one to split
the perverse filtration:
$$H^2(X)= \hbox{\rm Im \,} i_* \oplus (c_1( \eta) \wedge \hbox{\rm Im \,} i_*)^{\perp},$$
$$H^4(X)= \hbox{\rm Ker \,} i^* \oplus ( \hbox{\rm Im \,} i_*)^{\perp}.$$
We have, canonically, that
$$H^3(X)= \hbox{\rm Im \,} { H_3(D)} \oplus ( \hbox{\rm Im \,} {H_3(D)})^{\perp},$$
so that
$$
I\!H^i(Y)\,=\,H^i(X) \hbox{ for } i=0,\,1, \,5,\,6,
\qquad I\!H^2(Y)\, = \, (c_1( \eta) \wedge \hbox{\rm Im \,} H_4(D))^{\perp}
$$
$$
I\!H^3(Y)\, = \, ( \hbox{\rm Im \,} {H_3(D)})^{\perp}, \qquad I\!H^4(Y)\, =\, ( \hbox{\rm Im \,}
H_4(D))^{\perp};
$$
here we are using the convention for intersection cohomology
compatible with singular cohomology:
$I\!H^i(Y):= {\Bbb H}^{i-n}(Y, IC_Y).$
We want to realize these splittings by algebraic cycles
on $X \times X$, in order to find a Grothendieck motive
for the intersection cohomology of $Y.$
These cycles will be supported on $D \times D.$
We start with the following simple lemma.
\begin{lm}
\label{support}
Let $X$ be a projective $n$-fold,
and $Y \subseteq X$ be a subvariety.
Let $W \subseteq
\hbox{\rm Im \,} { \{ H_s(Y) \to H^{2n-s}(X) \} } \subseteq H^{2n-s}(X)$
be a vector subspace
on which the restriction of the Poincar\'e pairing remains
non degenerate, i.e.
$H(X)=W \oplus W^{\perp}.$
Then the projection
$P_W \in End(H(X))\simeq H(X \times X)$ on $W$
relative to the above splitting can be represented by a cycle
supported on
$Y \times Y$.
\end{lm}
{\em Proof. }
Let $\{e_i\}$ be a basis for $H(X)$
such that $ e_1, \cdots, e_k \in W$ and
$ e_{k+1}, \cdots, e_N \in W^{\perp}.$
For $i=1, \cdots, k,$ we can represent $e_i$ by a cycle $\gamma_i$
contained in $Y$. By the hypothesis, the dual basis
$\{ {e_i}\check{} \}$ is of the form
$$
e_i \check{}= \sum_{j=1}^k a_{ij}e_j \qquad \hbox{ for }1 \leq i \leq
k
\qquad
e_i \check{}= \sum_{j=k+1}^N a_{ij}e_j \qquad \hbox{ for }k+1 \leq i
\leq N.
$$
In particular
$e_1 \check{}, \cdots,e_k \check{} $ are represented by the cycles
$
\gamma_i \check{}= \sum_{j=1}^k a_{ij}\gamma_j
$ supported on $Y$. The projector
$P_W= \sum_{i=1}^k e_i \otimes e_i \check{}$ is thus represented by
the cycle
$ \sum_{i,j=1}^k a_{ij}\gamma_i \times \gamma_j$, which is supported
on
$Y \times Y$.
\blacksquare
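In concrete terms, the proof just inverts the Gram matrix of the pairing restricted to $W$. The following minimal numerical sketch (our own illustration; the pairing and the subspace are random placeholders, not data from the text) reproduces the recipe and checks that the resulting operator is indeed the projector onto $W$:
\begin{verbatim}
import numpy as np

def projector(Q, W):
    # Q: Gram matrix of a nondegenerate pairing on the ambient space.
    # W: matrix whose columns span the subspace; the pairing is assumed
    #    to stay nondegenerate on it, as in the lemma.
    G = W.T @ Q @ W          # pairing restricted to W
    A = np.linalg.inv(G)     # coefficients a_{ij} of the dual basis
    return W @ A @ W.T @ Q   # P_W = sum_{i,j} a_{ij} e_i (e_j, . )

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5)); Q = Q + Q.T   # placeholder pairing
W = rng.standard_normal((5, 2))                # placeholder subspace
P = projector(Q, W)
assert np.allclose(P @ P, P) and np.allclose(P @ W, W)
\end{verbatim}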
The following is a standard but very useful application of
``strictness'' in Hodge theory.
\begin{lm}
\label{pesi}
Let $Y \subseteq X$ be a codimension $d$
subvariety of an $n$-fold
and let $\pi:\tilde{Y} \to Y$ be a resolution of singularities.
Suppose $\beta \in \hbox{\rm Im \,} \{H_{2k}(Y) \to H^{2(n-k)}(X)\}
\cap H^{n-k,n-k}(X).$ Then there is
$\tilde{\beta}\in H^{n-k-d,n-k-d}(\tilde{Y})$ such that
$(i\circ \pi)_*(\tilde{\beta})= \beta.$
\end{lm}
{\em Proof.} We consider the weights of the homology groups as given
by
their being dual to the cohomology groups.
Thus $H_{2k}(Y)$ has weights $\geq -2k.$
The map $H_{2k}(Y) \to H^{2(n-k)}(X)$ is of type $(n,n)$. Since the
Hodge
structure on $ H^{2(n-k)}(X)$ is pure,
the strictness of maps of Hodge structures implies that
$$
\hbox{\rm Im \,} { \{ H_{2k}(Y) \to H^{2(n-k)}(X) \} } \,
= \, \hbox{\rm Im \,} { \{ W_{-2k}H_{2k}(Y) \to H^{2(n-k)}(X) \} }.
$$
It follows that $\beta =i_*\beta '$ for some
$\beta ' \in W_{-2k}H_{2k}(Y).$ On the other hand this group
coincides with
$ \hbox{\rm Im \,} { \{ \pi_* : H_{2k}(\tilde{Y}) \to H_{2k}(Y) \}}$
for any resolution, whence the statement.
\blacksquare
\begin{tm}
\label{cicli}
Let $f:X \to Y$, $D$ as before.
Then there exist algebraic 3-dimensional cycles $Z_{-1}, Z_{0}, Z_1,$
supported on $D \times D$ such that:
\n
$Z_1$ defines the projection of $H(X)$ onto
$H^4_1(X)= c_1( \eta) \wedge \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\} \subseteq
H^4(X);$
\n
$Z_{-1}$ defines the projection of $H(X)$ onto
$H^2_{-1}(X)= \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\} \subseteq H^2(X);$
\n
$Z_0$ defines the projection of $H(X)$ on
$ \hbox{\rm Im \,} \{H_3(D) \to H^3(X)\} \subseteq H^3(X).$
\end{tm}
{\em Proof.}
Let $\Lambda$ be the inverse of the negative-definite intersection
matrix
$I_{ij}= \int_X c_1(\eta) \wedge[D_i]\wedge [D_j].$
We denote by $\eta \cap D_i$ the curve obtained by intersecting the
divisor $D_i$
with a general section of $\eta$.
Set:
$$
Z_{-1}= \sum \lambda_{ij}[(\eta \cap D_i) \times D_j]
\qquad Z_{1}= \sum \lambda_{ij}[D_i \times (\eta \cap D_j)].
$$
It is immediate to verify that $Z_{-1} $ and $Z_1$ define the
sought-for projectors.
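For the reader's convenience, here is a sketch of that verification for $Z_{-1}$, viewed as a correspondence acting on $H^{2}(X)$: for the classes $[D_k]$ one finds
$$
Z_{-1}\big([D_k]\big) \, = \, \sum_{i,j}\lambda_{ij}\Big(\int_X [D_k]\wedge c_1(\eta)\wedge [D_i]\Big)\,[D_j]
\, = \, \sum_{i,j}\lambda_{ij}\, I_{ki}\,[D_j]\, =\, [D_k],
$$
since $\Lambda$ is the inverse of $I$, while $Z_{-1}$ annihilates the classes orthogonal to $c_1(\eta)\wedge \hbox{\rm Im \,}{i_*}$; this is precisely the projection onto $\hbox{\rm Im \,}{i_*}=H^2_{-1}(X)$ relative to the splitting of $H^2(X)$ recalled above. The verification for $Z_1$ is analogous.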
\n
The construction of $Z_0$ is not so direct.
Since, by \ref{polarizeh3}, the Poincar\'e pairing is non degenerate
on
$ \hbox{\rm Im \,} \{H_3(D) \to H^3(X)\}$, by \ref{support} we can represent
the projection on $H_3(D)$ by a cycle supported on $D \times D$.
Furthermore, the projection is a map of Hodge structures, hence its
representative cycle
$P_3 \in H^6(X \times X)$
has type
$(3,3).$ By \ref{pesi} we have $P_3=i_* \pi_* \beta$ for some
$\beta \in H^{1,1}(\widetilde{D \times D}),$ where $\widetilde{D
\times D}$
is any resolution of
${D \times D}.$
By the Lefschetz Theorem on $(1,1)$ classes, there is an algebraic
cycle $\tilde{Z}$ such
that $\beta =[\tilde{Z}]$. It is clear that $Z_0=i_* \pi_* \tilde{Z}$ does
the job.
\blacksquare
The following corollary follows immediately.
\begin{cor}
\label{gm}
The Grothendieck motive,
$(X, \Delta_{X} -Z_0 -Z_1 -Z_{-1})$
is a Betti realization of the intersection cohomology of $Y.$
\end{cor}
We can be more specific: the projector $\Delta_{X} -Z_0 -Z_1 -Z_{-1}$
is supported on the fiber product $X \times_Y X$,
and therefore defines a relative motive over $Y$
in the sense of \ci{ch} (see also \ci{decmigmot}).
By \ci{ch}, Lemma 2.23, the isomorphism of algebras
$$
End(Rf_* \rat_X [3])= H^{BM}_6(X \times_Y X)
$$
ensures that
the Betti realization of this relative motive is the projector
associated with the splitting for $Rf_* \rat_X [3]$ we have used
in this section.
\begin{rmk}
{\rm From the construction of the cycles it is evident that $Z_{-1}$
and $Z_1$ define in fact Chow motives, not only Grothendieck motives.
Under some hypotheses it is possible to construct a Chow projector
for $Z_0$ as well, for instance if $D$ is smooth and irreducible and its
conormal bundle ${\cal I}_D/{\cal I}_D^2$ is ample. In this case,
let $Z_0$ be the
cycle in $D \times D$ representing the Hodge $\Lambda$ operator with
respect to the polarization given by the conormal bundle. It is
immediate to verify that $Z_0$ defines the Chow motive we need. In
general some knowledge of the nature of the resolution may allow one
to find a Chow motive whose Betti realization is intersection
cohomology.}
\end{rmk}
\begin{rmk}
\label{rmkfi}
{\rm It is not difficult to modify the proofs to produce a
Grothendieck motive for the intersection cohomology of $Y$ for an
arbitrary
three-dimensional variety $Y$ (e.g., with non-isolated
singularities).
If, for example,
some divisor $D'$ is blown down to a curve $C,$ then one needs to
construct a
further projector, represented by a cycle which is a linear
combination of the components of $D' \times_C D'.$ This projector
splits off the
contribution of $D'$ to the cohomology of $X$. We leave this task to
the reader.}
\end{rmk}
\begin{rmk}
\label{chissa}
{ \rm If $Y$ is a fourfold with isolated singularities, then the
computations in
\ref{lde4} express its intersection cohomology as a Hodge
substructure
of the cohomology of a resolution $X.$ The method developed in this
section does not apply in general since we do not know whether the
classes of the projectors, which are pushforward of classes of type
$(p,p)$ on a resolution of the product of the exceptional divisor
with itself, are represented by algebraic cycles.
On the other hand,
this can be achieved in the presence of supplementary
information on the singularities of $Y$ or on the exceptional
divisor, for example if the singularities are locally isomorphic to toric
singularities. This allows one to define a motive for the intersection
cohomology in several interesting cases.}
\end{rmk}
\end{document} |
\begin{document}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\newcommand{{\mathbb{C}}}{{\mathbb{C}}}
\newcommand{{\mathbb{Q}}}{{\mathbb{Q}}}
\newcommand{{\mathbb{F}}}{{\mathbb{F}}}
\renewcommand{{\mathbb{H}}}{{\mathbb{H}}}
\renewcommand{{\mathbf{A}}}{{\mathbf{A}}}
\newcommand{{\mathbf{B}}}{{\mathbf{B}}}
\newcommand{{\mathbb{C}}C}{{\mathbf{C}}}
\newcommand{{\mathbf{D}}}{{\mathbf{D}}}
\newcommand{{\mathbf{E}}}{{\mathbf{E}}}
\newcommand{{\mathbb{F}}F}{{\mathbf{F}}}
\newcommand{{\mathbf{G}}}{{\mathbf{G}}}
\newcommand{{\mathcal{M}}}{{\mathcal{M}}}
\newcommand{{\mathcal{M}}}{{\mathcal{M}}}
\newcommand{{{\sf X}}}{{{\sf X}}}
\newcommand{{\mathbb{G}}}{{\mathbb{G}}}
\newcommand{{\mathrm{i}}}{{\mathrm{i}}}
\renewcommand{{\boldsymbol{a}}}{{\boldsymbol{a}}}
\newcommand{{\boldsymbol{b}}}{{\boldsymbol{b}}}
\newcommand{{\boldsymbol{c}}}{{\boldsymbol{c}}}
\newcommand{{\boldsymbol{d}}}{{\boldsymbol{d}}}
\newcommand{{\boldsymbol{t}}}{{\boldsymbol{t}}}
\newcommand{{\boldsymbol{q}}}{{\boldsymbol{q}}}
\newcommand{{\boldsymbol{p}}}{{\boldsymbol{p}}}
\newcommand{{\bar{z}}}{{\bar{z}}}
\newcommand{{\bar{g}}}{{\bar{g}}}
\newcommand{{\bar{n}}}{{\bar{n}}}
\newcommand{{\bar{x}}}{{\bar{x}}}
\newcommand{{\widetilde{H}}}{{\widetilde{H}}}
\newcommand{{\mathcal{M}}til}{{\widetilde{T}}}
\newcommand{{\tilde t}}{{\tilde t}}
\newcommand{{\varkappa}}{{\varkappa}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\varepsilonv}{\varepsilon^\varepsilone}
\renewcommand{{\mathfrak{g}}}{{\mathfrak{g}}}
\newcommand{{\mathfrak{t}}}{{\mathfrak{t}}}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\isoto}{\overset{\sim}{\to}}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\labelto}[1]{\xrightarrow{\makebox[1.5em]{\scriptsize ${#1}$}}}
\newcommand{{\mathbb{G}}L}{{\bf{GL}}}
\newcommand{{\bf{SL}}}{{\bf{SL}}}
\newcommand{{\bf{Sp}}}{{\bf{Sp}}}
\newcommand{{\bf{PSp}}}{{\bf{PSp}}}
\newcommand{{{\bf SO}}}{{{\bf SO}}}
\newcommand{{\bf{PSO}}}{{\bf{PSO}}}
\newcommand{{\bf{Sp}}in}{{{\bf Spin}}}
\newcommand{{\bf{HSpin}}}{{\bf{HSpin}}}
\newcommand{{\bf{PGL}}}{{\bf{PGL}}}
\newcommand{{\bf SU}}{{\bf SU}}
\newcommand{{\bf PSU}}{{\bf PSU}}
\newcommand{{\rm Hom}}{{\rm Hom}}
\newcommand{{\rm Inn}}{{\rm Inn}}
\newcommand{{\rm Aut}}{{\rm Aut}}
\newcommand{{\rm Lie\,}}{{\rm Lie\,}}
\newcommand{{\mathbb{G}}al}{{\rm Gal}}
\newcommand{{\rm coker\,}}{{\rm coker\,}}
\newcommand{{\rm tors}}{{\rm tors}}
\newcommand{{\rm Ext}}{{\rm Ext}}
\newcommand{{\rm Stab}}{{\rm Stab}}
\newcommand{{\rm res}}{{\rm res}}
\newcommand{{\rm Ad}}{{\rm Ad}}
\newcommand{{\mathbb{C}}l}{{\rm Cl}}
\newcommand{{\rm ad}}{{\rm ad}}
\newcommand{{\rm im\,}}{{\rm im\,}}
\newcommand{{{\rm id}}}{{{\rm id}}}
\newcommand{{\rm diag}}{{\rm diag}}
\newcommand{\operatorname{Orb}}{\operatorname{Orb}}
\newcommand{{\rm Orb}}{{\rm Orb}}
\newcommand{{\rm Orb}bs}[1]{ \# \mathrm{Orb}( {#1} ) }
\newcommand{ *+[F]{1} }{ *+[F]{1} }
\newcommand{{\boldsymbol{c}}c}{{ \lower0.20ex\hbox{{\text{\Large$\circ$}}}}}
\newcommand{{\lower0.20ex\hbox{\text{\Large$\bullet$}}}}{{\lower0.20ex\hbox{\text{\Large$\bullet$}}}}
\newcommand{\bc}[1]{{\overset{#1}{{\boldsymbol{c}}c}}}
\newcommand{\bcu}[1]{{\underset{#1}{{\boldsymbol{c}}c}}}
\newcommand{\bcb}[1]{{\overset{#1}{{\lower0.20ex\hbox{\text{\Large$\bullet$}}}}}}
\newcommand{\bcbu}[1]{{\underset{#1}{{\lower0.20ex\hbox{\text{\Large$\bullet$}}}}}}
\newcommand{\sxymatrix}[1]{ \xymatrix@1@R=5pt@C=9pt{#1} }
\newcommand{\mxymatrix}[1]{ \xymatrix@1@R=0pt@C=9pt{#1} }
\newcommand{ \ar@{-}[r] }{ \ar@{-}[r] }
\newcommand{ \ar@{-}[l] }{ \ar@{-}[l] }
\newcommand{ \ar@{-}[d] }{ \ar@{-}[d] }
\newcommand{ \ar@{-}[u] }{ \ar@{-}[u] }
\newcommand{\ar@{=>}[r]}{\ar@{=>}[r]}
\newcommand{{\mathbb{R}}RR}{{{\mathbb{R}}ightarrow}}
\newcommand{\! > \!}{\! > \!}
\newcommand{ \!\! < \!\! }{ \!\! < \!\! }
\newcommand{\!\Leftarrow\!}{\!\Leftarrow\!}
\newcommand{{\sxymatrix{\boxone}}}{{\sxymatrix{ *+[F]{1} }}}
\renewcommand{\!-\!}{\!-\!}
\newcommand{\half}{{\tfrac{1}{2}}}
\newcommand{\Ss}{\sideset{}{'}\sum_{k\succeq i}}
\newcommand{\Ssd}{\sideset{}{'}\sum_{k\succeq i,\,k\in D^\tau}}
\newcommand{{K}}{{K}}
\newcommand{0}{0}
\newcommand{\alpha'}{\alpha'}
\newcommand{{\widetilde{D}}}{{\widetilde{D}}}
\newcommand{{\widetilde{D}}bar}{{\widetilde{D'}}}
\newcommand{m'}{m'}
\newcommand{{i'}}{{i'}}
\newcommand{{j'}}{{j'}}
\newcommand{{\Pi'}}{{\Pi'}}
\newcommand{{n'}}{{n'}}
\newcommand{\mathrm{SAut}}{\mathrm{SAut}}
\newcommand{\kern 0.8pt}{\kern 0.8pt}
\newcommand{\kern 1.0pt}{\kern 1.0pt}
\newcommand{\kern 2.0pt}{\kern 2.0pt}
\newcommand{\bfseries}{\bfseries}
\begin{abstract}
Let $G$ be a simply connected absolutely simple algebraic group defined over the field of real numbers ${\mathbb{R}}$.
Let $H$ be a simply connected semisimple ${\mathbb{R}}$-subgroup of $G$.
We consider the homogeneous space $X=G/H$.
We ask: how many connected components does $X({\mathbb{R}})$ have?
We give a method of answering this question.
Our method is based on our solutions of generalized Reeder puzzles.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction}
In this paper by a semisimple or reductive group
we always mean a {\em connected} semisimple or reductive group, respectively.
Let $G$ be a {\em simply connected} absolutely simple algebraic group over the field of real numbers ${\mathbb{R}}$.
Let $H\subset G$ be a {\em simply connected} semisimple ${\mathbb{R}}$-subgroup.
We consider the homogeneous space $X=G/H$, which is an algebraic variety over ${\mathbb{R}}$.
The topological space $X({\mathbb{R}})$ of ${\mathbb{R}}$-points of $X$ need not be connected.
We ask
\begin{question}\label{q:1}
How many connected components does $X({\mathbb{R}})$ have?
\end{question}
The group of ${\mathbb{R}}$-points $G({\mathbb{R}})$ acts on the left on $X({\mathbb{R}})$, and we consider the orbits of this action.
By Lemma \ref{lem:orbits-components} below, the set of connected components of $X({\mathbb{R}})$
is the set of orbits $G({\mathbb{R}})\backslash X({\mathbb{R}})$ of $G({\mathbb{R}})$ in $X({\mathbb{R}})$.
On the other hand, there is a canonical bijection
\begin{equation}\label{e:Serre}
G({\mathbb{R}})\backslash X({\mathbb{R}})\isoto \ker\left[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)\right],
\end{equation}
see Serre \cite[Section I.5.4, Corollary 1 of Proposition 36]{Serre},
where $H^1({\mathbb{R}},G)$ denotes the first (nonabelian) Galois cohomology of $G$.
We see that Question \ref{q:1} is equivalent to the following question:
\begin{question}\label{q:2}
What is the cardinality of the finite set $\ker\left[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)\right]$?
\end{question}
In this paper we give a method of answering Question \ref{q:2} and hence, Question \ref{q:1}.
Namely, we give an explicit description of the Galois cohomology sets
$H^1({\mathbb{R}},G)$ for all simply connected ${\mathbb{R}}$-groups $G$,
permitting one to compute the kernel in Question \ref{q:2}.
We describe $H^1({\mathbb{R}},G)$ using our solutions of generalized Reeder puzzles.
Let $G$ be a {\em simply connected,} absolutely simple, simply-laced, compact ${\mathbb{R}}$-group.
Let $T\subset G$ be a maximal torus.
Let $\Pi$ be a basis of the root system $R(G_{\mathbb{C}}, T_{\mathbb{C}})$.
Let $D=D(G_{\mathbb{C}},T_{\mathbb{C}},\Pi)$ be the Dynkin diagram of $G$ with the set of vertices numbered by $1,2,\dots,n$;
then $D$ is simply-laced, i.e., it has no multiple edges.
By a {\em labeling of} $D$ we mean a family ${\boldsymbol{a}}=(a_i)_{i=1,\dots,n}$, where $a_i\in{\mathbb{Z}}/2{\mathbb{Z}}$.
In other words, at any vertex $i$ we write a label $a_i=0,1$.
We consider the set $L(D)$ of the labelings of $D$; it is an $n$-dimensional vector space over the field ${\mathbb{Z}}/2{\mathbb{Z}}$.
For any vertex $i$ we define the {\em move} ${\mathcal{M}}_i$ applied to a labeling ${\boldsymbol{a}}$:
if the vertex $i$ has an {\em odd} number of neighbors with 1,
${\mathcal{M}}_i$ {\em changes} $a_i$ (from 0 to 1 or from 1 to 0), otherwise it does nothing.
Clearly ${\mathcal{M}}_i({\mathcal{M}}_i({\boldsymbol{a}}))={\boldsymbol{a}}$.
We say that two labelings ${\boldsymbol{a}},{\boldsymbol{a}}'$ are {\em equivalent} if we can pass from ${\boldsymbol{a}}$ to ${\boldsymbol{a}}'$ by a finite sequence of moves.
This is indeed an equivalence relation on $L(D)$.
We denote the corresponding set of equivalence classes by ${\rm Cl}(D)$.
It is the set of orbits of the Weyl group $W$ acting on $L(D)$,
and we denote it also by ${\rm Orb}(D)$ in Sections \ref{sec:An}--\ref{sec:G2} below.
The set ${\rm Cl}(D)$ has a neutral element $[0]$, the class of the zero labeling $0$.
To solve the puzzle means to describe the set of equivalence classes ${\rm Cl}(D)$ and to describe each equivalence class.
This is Reeder's original puzzle \cite{Reeder}, except that Reeder formulated it for an arbitrary simply-laced graph,
not necessarily a simply-laced Dynkin diagram.
For a compact, simply connected, simply-laced group $G$ with Dynkin diagram $D$,
the pointed set ${\rm Cl}(D)$ is in a bijection with $H^1({\mathbb{R}},G)$.
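As a toy illustration (not part of the results below): for $D$ of type ${\mathbf{A}}_2$, the labeling $(0,0)$ is fixed by both moves, while $(1,0)$, $(1,1)$ and $(0,1)$ are related to one another by moves at the two vertices; hence ${\rm Cl}(D)$ has exactly two classes, consistent with the fact that $H^1({\mathbb{R}},{\bf SU}_3)$ has two elements.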
In order to deal with non-simply-laced and noncompact groups, we generalize the puzzle.
We permit non-simply-laced Dynkin diagrams.
Then, when counting the number of neighbors with 1 of a given vertex $i$,
we do not count the {\em shorter} neighbors of $i$ connected with $i$ by a {\em double} edge.
In other words, ``the long roots don't see the short roots''.
We consider also colored Dynkin diagrams, which correspond to non-compact inner forms of compact groups.
A {\em coloring} of a Dynkin diagram $D$ is a family
\[{\boldsymbol{t}}=(t_i)_{i=1,\dots,n},\quad t_i\in{\mathbb{Z}}/2{\mathbb{Z}}.\]
If $t_i=1$, we color vertex $i$ in black, otherwise we leave it white.
When vertex $i$ is white, the move ${\mathcal{M}}_i$ acts as above.
When $i$ is black, the move ${\mathcal{M}}_i$ changes $a_i$ if $i$ has an {\em even} number of neighbors with 1,
and does nothing otherwise.
We write sometimes $L(D,{\boldsymbol{t}})$ for the set $L(D)$ with this Reeder puzzle.
We denote the corresponding set of equivalence classes by ${\rm Cl}(D,{\boldsymbol{t}})$.
If ${\boldsymbol{t}}=\boldsymbol{0}=(0,\dots, 0)$, we have ${\rm Cl}(D,\boldsymbol{0})={\rm Cl}(D)$.
Note that if $D$ has a {\em black} vertex $i$, then the move ${\mathcal{M}}_i$
takes the zero labeling $0$ to a nonzero labeling
and hence does not respect the group structure in $L(D)$.
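For small diagrams the equivalence classes can be enumerated by brute force. The following Python sketch (our own illustration; the function and variable names are not taken from this paper) implements the moves just described, for white and black vertices alike:
\begin{verbatim}
from itertools import product

def orbits(neighbors, t):
    # neighbors[i]: the vertices counted when moving at vertex i
    # (for a simply-laced diagram, simply the neighbors of i);
    # t[i] = 1 if vertex i is black, 0 if it is white.
    n = len(t)

    def move(a, i):
        s = sum(a[k] for k in neighbors[i]) % 2
        flip = (s == 1) if t[i] == 0 else (s == 0)
        return a[:i] + (a[i] ^ flip,) + a[i + 1:]

    classes, seen = [], set()
    for a in product((0, 1), repeat=n):
        if a in seen:
            continue
        orbit, stack = {a}, [a]
        while stack:
            b = stack.pop()
            for i in range(n):
                c = move(b, i)
                if c not in orbit:
                    orbit.add(c)
                    stack.append(c)
        seen |= orbit
        classes.append(orbit)
    return classes

# White diagram of type A_3 (vertices 0-1-2): three classes,
# matching the three elements of H^1(R, SU(4)).
print(len(orbits([[1], [0, 2], [1]], [0, 0, 0])))
\end{verbatim}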
We recall the definition of $H^1({\mathbb{R}},G)$, cf. \cite[Section III.4.5]{Serre}.
Let $G$ be a linear algebraic group over ${\mathbb{R}}$.
We denote by $G({\mathbb{C}})$ the set of ${\mathbb{C}}$-points of $G$.
The first Galois cohomology set $H^1({\mathbb{R}},G)$ is, by definition,
$Z^1({\mathbb{R}},G)/\sim$, where the set of $1$-cocycles $Z^1({\mathbb{R}},G)$ is defined by $Z^1({\mathbb{R}},G)=\{z\in G({\mathbb{C}})\ |\ z{\bar{z}}=1\}$,
and two $1$-cocycles $z,z'\in Z^1({\mathbb{R}},G)$ are cohomologous (we write $z\sim z'$) if
$z'=gz\bar g^{-1}$ for some $g\in G({\mathbb{C}})$.
Here the bar denotes the complex conjugation in $G({\mathbb{C}})$; note that $G({\mathbb{R}})=\{g\in G({\mathbb{C}})\ |\ \bar g=g\}$.
By definition, the neutral element $[1]\in H^1({\mathbb{R}},G)$ is the class of the neutral cocycle $1\in Z^1({\mathbb{R}},G)\subset G({\mathbb{C}})$.
We write $G({\mathbb{R}})_2$ for the set of elements $g\in G({\mathbb{R}})$ such that $g^2=1$.
Then $g\bar{g}=g^2=1$, hence $G({\mathbb{R}})_2\subset Z^1({\mathbb{R}},G)$, and so we obtain a canonical map $G({\mathbb{R}})_2\to H^1({\mathbb{R}},G)$.
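As a simple illustration of these definitions (not needed later): for $G={\mathbb{G}}_{m,{\mathbb{R}}}$ we have $Z^1({\mathbb{R}},G)=\{z\in{\mathbb{C}}^\times \mid z{\bar{z}}=1\}$, the unit circle, and writing $z=e^{2{\mathrm{i}}\theta}$, $g=e^{{\mathrm{i}}\theta}$ gives $z=g{\bar{g}}^{-1}\sim 1$; thus $H^1({\mathbb{R}},{\mathbb{G}}_m)=1$ (Hilbert's Theorem 90). In particular, both elements of $G({\mathbb{R}})_2=\{\pm 1\}$ map to the neutral class, so the map $G({\mathbb{R}})_2\to H^1({\mathbb{R}},G)$ is in general not injective.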
Let $G$ be a simply connected absolutely simple ${\mathbb{R}}$-group.
For simplicity we assume in the Introduction
that $G$ is an {\em inner form of a compact group.}
Then $G$ has a compact maximal torus $T$, see Subsection \ref{subsec:t}.
Choose a basis $\Pi$ of the root system $R=R(G_{\mathbb{C}},T_{\mathbb{C}})$.
We obtain an isomorphism
\[ \gamma\colon L(D)\isoto T({\mathbb{R}})_2\subset Z^1({\mathbb{R}}, G),\]
see formula \eqref{e:gamma} in Section \ref{sec:compact}.
This isomorphism induces a map $L(D)\to H^1({\mathbb{R}},G)$, which is surjective
by a result of Kottwitz \cite[Lemma 10.2]{Kottwitz}.
By Theorem \ref{thm:inner} the fibers of this map are equivalence classes
of the Reeder puzzle for $(D,{\boldsymbol{t}})$ for a certain coloring ${\boldsymbol{t}}$ of $D$.
In other words, we obtain a bijection
\[ {\rm Cl}(D,{\boldsymbol{t}})\isoto H^1({\mathbb{R}},G). \]
Moreover, for a suitable basis $\Pi$ the coloring ${\boldsymbol{t}}$
can be obtained from a Kac diagram by removing vertex 0, see Section \ref{ss:Kac}.
In Sections \ref{sec:An}--\ref{sec:G2} we solve case by case the generalized Reeder puzzles for all such pairs $(D,{\boldsymbol{t}})$.
Namely, in each case we give a set $\Xi$ of representatives for all equivalence classes in ${\rm Cl}(D,{\boldsymbol{t}})$
and describe explicitly the equivalence class $[0]\subset L(D,{\boldsymbol{t}})$ of the zero labeling $0$.
Now let $H$ be a simply connected semisimple ${\mathbb{R}}$-subgroup of a simply connected absolutely simple ${\mathbb{R}}$-group $G$.
For simplicity, we assume in the Introduction that $H$ is absolutely simple
and that $G$ and $H$ are inner forms of compact groups.
Then $G$ contains a compact maximal torus $T_G$ and $H$ contains a compact maximal torus $T_H$.
We may and shall assume that $T_H\subset T_G$.
We denote by $(D_H,{\boldsymbol{t}}_H)$ and $(D_G,{\boldsymbol{t}}_G)$ the corresponding Reeder puzzles.
For a good choice of bases $\Pi_H$ and $\Pi_G$ we obtain colorings ${\boldsymbol{t}}_H$ and ${\boldsymbol{t}}_G$ coming from Kac diagrams
(in particular, not more than one vertex of each of $D_H$ and $D_G$ is black).
We describe our method of answering Question \ref{q:2}.
The embedding $T_H\hookrightarrow T_G$ induces an embedding $T_H({\mathbb{R}})_2\hookrightarrow T_G({\mathbb{R}})_2\,$.
Thus we obtain an injective homomorphism
\[\iota\colon L(D_H)\to L(D_G),\]
which can be computed explicitly.
Using results of Sections \ref{sec:An}--\ref{sec:G2} for the group $H$,
we construct a finite subset $\Xi\subset L(D_H,{\boldsymbol{t}}_H)$ containing exactly one representative of each equivalence class
for the corresponding Reeder puzzle.
For any $\xi\in\Xi\subset L(D_H,{\boldsymbol{t}}_H)$ we compute $\iota(\xi)\in L(D_G,{\boldsymbol{t}}_G)$.
Using results of Sections \ref{sec:An}--\ref{sec:G2} for the group $G$,
namely, the description of the equivalence class $[0]$ of $0$ in $L(D_G,{\boldsymbol{t}}_G)$\,,
we can check whether $\iota(\xi)\in L(D_G,{\boldsymbol{t}}_G)$ lies in $[0]$ or not.
We obtain a subset $\Xi_0$ of $\Xi$ consisting of all $\xi\in\Xi$ such that $\iota(\xi)\in[0]$.
One can show (see Section \ref{sec:examples}) that $\Xi_0$
is in a bijection with $\ker\left[ H^1({\mathbb{R}},H)\to H^1({\mathbb{R}}, G)\right]$ and therefore,
the cardinality of $\Xi_0$ answers Questions \ref{q:2} and \ref{q:1}.
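Schematically, and reusing the \texttt{orbits} helper sketched above, the whole computation can be phrased as follows (our own illustration; the mod-$2$ matrix \texttt{iota} of $\iota$ on the coroot bases has to be supplied by hand, and all names are ours, not the paper's):
\begin{verbatim}
def kernel_size(nb_H, t_H, nb_G, t_G, iota):
    # Representatives of the classes in L(D_H, t_H).
    reps = [min(c) for c in orbits(nb_H, t_H)]
    # The class [0] of the zero labeling in L(D_G, t_G).
    zero_class = next(c for c in orbits(nb_G, t_G)
                      if tuple(0 for _ in t_G) in c)
    # iota applied to a labeling, mod 2.
    def embed(a):
        return tuple(sum(iota[j][i] * a[i] for i in range(len(a))) % 2
                     for j in range(len(t_G)))
    # |Xi_0| = number of representatives landing in [0].
    return sum(1 for a in reps if embed(a) in zero_class)
\end{verbatim}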
Note that in order to answer Question \ref{q:2},
we compute in Sections \ref{sec:An}--\ref{sec:G2} the sets ${\rm Cl}(D,{\boldsymbol{t}})$
for inner forms of a compact group and certain sets ${\rm Cl}(D,\tau,{\boldsymbol{t}})$ for outer forms.
Since each of these sets is in a bijection with the corresponding Galois cohomology set,
we in particular compute the cardinalities of the Galois cohomology sets $H^1({\mathbb{R}},H)$
for all absolutely simple simply connected ${\mathbb{R}}$-groups $H$.
These cardinalities were already known.
The Galois cohomology of classical groups and adjoint groups is well known.
S.~Garibaldi and N.~Semenov \cite[Example 5.1]{GS} computed $H^1({\mathbb{R}},H)$ for a certain nonsplit
simply connected group $H$ of type ${\mathbf{E}}_7$.
B.~Conrad \cite[Proof of Lemma 4.9]{Conrad} computed $H^1({\mathbb{R}},H)$
for the split simply connected groups $H$ of types of ${\mathbf{E}}_6$ and ${\mathbf{E}}_7$.
The cardinalities of the Galois cohomology sets for ``most'' simple ${\mathbb{R}}$-groups, in particular,
for all absolutely simple simply connected ${\mathbb{R}}$-groups,
were recently computed by J.~Adams \cite{A} by a method different from ours.
Our results agree with the previous results, in particular with the tables of Adams \cite{A}.
Later, after the first version of the present paper appeared in arXiv,
Borovoi and Timashev \cite{BoT} proposed a combinatorial method based on the notion of a Kac diagram,
permitting one to compute easily the cardinality of $H^1({\mathbb{R}},H)$
when $H$ is an inner form of any compact semisimple ${\mathbb{R}}$-group, not necessarily simply connected.
However, it seems that neither of these alternative approaches
permits one to answer Question \ref{q:1} about $(G/H)({\mathbb{R}})$,
except for the case when $H^1({\mathbb{R}},G)=1$ (which happens only when $G={\bf{SL}}(n)$ or $G={\bf{Sp}}(2n,{\mathbb{R}})$).
The rest of the paper is structured as follows.
In Section \ref{sec:1} we recall results of \cite{Bo}.
In Sections \ref{sec:compact} and \ref{sec:inner} we compute the moves ${\mathcal{M}}_i$
in the case when $G$ is compact and when it is a noncompact inner form of a compact group, respectively.
In particular, in Section \ref{sec:inner} we prove Theorem \ref{thm:inner} describing the pointed set $H^1({\mathbb{R}},G)$
for an {\em inner} form $G$ of a compact simply connected simple group in terms of the corresponding generalized Reeder puzzle.
In Section \ref{sec:outer} we prove Theorem \ref{cor:Theorem-3-Bo},
which reduces computing the Galois cohomology of an {\em outer} form of a compact, simply connected, simple ${\mathbb{R}}$-group to
computing Galois cohomology of an {\em inner} form of another compact group.
In Section \ref{sec:Kac} we describe the generalized Reeder puzzle for $G$
in terms of the Kac diagram of $G$ from \cite[Table 7]{OV}.
In Sections \ref{sec:An}--\ref{sec:G2}
we solve the generalized Reeder puzzles
for all isomorphism classes of simply connected absolutely simple ${\mathbb{R}}$-groups $G$.
We state the assertions necessary for our calculations, but omit straightforward proofs for brevity.
In the last Section \ref{sec:examples} we describe our method of answering Questions \ref{q:2} and \ref{q:1}
for all simply connected $H$ (not necessarily simple),
and we give examples of calculations using results of Sections \ref{sec:An}--\ref{sec:G2}.
\section{Galois cohomology of reductive real groups}
\label{sec:1}
In this section we state briefly the necessary results of \cite{Bo}.
For details see \cite{Bo} or \cite{Borovoi-arXiv}.
Let $G$ be a reductive group over ${\mathbb{R}}$.
Let $T$ be a {\em fundamental torus} of $G$, i.e.,
a maximal torus of $G$ (defined over ${\mathbb{R}}$) containing a maximal compact torus $T_0$ of $G$.
Then $T$ is the centralizer of $T_0$ in $G$; see \cite[Section 7]{Borovoi-arXiv}.
Let $T_1$ be the largest {\em split} subtorus of $T$.
We write $T({\mathbb{R}})_2$ for the group of elements of $T({\mathbb{R}})$ of order dividing 2.
\begin{lemma}[{\cite[Lemma 1.1]{Bo}, see also \cite[Lemma 3(a)]{Borovoi-arXiv}}]
\label{lem:Bo88}
The map $T({\mathbb{R}})_2\to H^1({\mathbb{R}},T)$ induces a canonical isomorphism $T({\mathbb{R}})_2/T_1({\mathbb{R}})_2\isoto H^1({\mathbb{R}},T)$.
\end{lemma}
Set $N_0=\mathcal{N}_G(T_0)$, $W_0=N_0/T$.
We have $W_0({\mathbb{C}})=W_0({\mathbb{R}})$; see \cite[Section 7]{Borovoi-arXiv}.
We define a left action of the group $W_0({\mathbb{R}})$ on the set $H^1({\mathbb{R}},T)$.
Let $w\in W_0({\mathbb{R}})$ be represented by $n\in N_0({\mathbb{C}})$ and let $\xi\in H^1({\mathbb{R}},T)$, $\xi=[z]$,
where $z\in Z^1({\mathbb{R}},T)$ is a cocycle and $[z]$ denotes the cohomology class of $z$.
We set
\begin{equation}\label{eq:Bo-action}
w* \xi:= [nz{\bar{n}}^{-1}]=[nzn^{-1}\cdot n{\bar{n}}^{-1}],
\end{equation}
where the bar denotes the complex conjugation in $G({\mathbb{C}})$.
This is a well-defined action; see \cite[Construction 8]{Borovoi-arXiv}.
(Note that in general the action $*$ does not respect the group structure on $H^1({\mathbb{R}},T)$.\,)
It is easy to see that the images of $\xi$ and $w*\xi$ in $H^1({\mathbb{R}},G)$ coincide.
Therefore, we obtain a canonical map
$$W_0({\mathbb{R}})\backslash H^1({\mathbb{R}},T)\to H^1({\mathbb{R}},G).$$
\begin{proposition}[{\cite[Theorem 1]{Bo}, see also \cite[Theorem 9]{Borovoi-arXiv}}]
\label{prop:Bo88}
The map
$$W_0({\mathbb{R}})\backslash H^1({\mathbb{R}},T)\to H^1({\mathbb{R}},G)$$
induced by the map $H^1({\mathbb{R}},T)\to H^1({\mathbb{R}},G)$ is a bijection.
\end{proposition}
\section{Weyl action for compact groups}
\label{sec:compact}
{\em We change our notation.}
In Sections \ref{sec:compact} -- \ref{sec:Kac},
$G$ is a {\em simply connected, simple, {\bfseries compact}} (i.e., anisotropic) linear algebraic group over ${\mathbb{R}}$.
Let $T$ be a maximal torus of $G$. Let $X^*={{\sf X}}^*(T_{\mathbb{C}}):={\rm Hom}(T_{\mathbb{C}}, {\mathbb{G}}_{m,{\mathbb{C}}})$ denote the character group of $T_{\mathbb{C}}$,
where ${\mathbb{G}}_{m,{\mathbb{C}}}$ is the multiplicative group over ${\mathbb{C}}$.
Let $R=R(G_{\mathbb{C}},T_{\mathbb{C}})\subset X^*$ denote the root system of $G_{\mathbb{C}}$ with respect to $T_{\mathbb{C}}$,
then we have a root decomposition
\[ {\rm Lie\,} G_{\mathbb{C}}={\rm Lie\,} T_{\mathbb{C}}\oplus\bigoplus_{\beta\in R}{\mathfrak{g}}_\beta\,.\]
Let
$\Pi\subset R$ be a basis of $R$ (a system of simple roots).
Note that $\Pi$ does not have to be a basis of $X^*$.
Write $\Pi=\{\alpha_1,\dots,\alpha_n\}$, then a simple root $\alpha_i$ is a homomorphism $\alpha_i\colon T_{\mathbb{C}}\to {\mathbb{G}}_{m,{\mathbb{C}}}$.
Let $R_+\subset R$ denote the set of positive roots with respect to the basis $\Pi$,
and let $B\subset G_{\mathbb{C}}$ denote the corresponding Borel subgroup of $G_{\mathbb{C}}$ containing $T_{\mathbb{C}}$, then
\[ {\rm Lie\,} B={\rm Lie\,} T_{\mathbb{C}}\oplus\bigoplus_{\beta\in R_+}{\mathfrak{g}}_\beta\,.\]
Let $D=D(G_{\mathbb{C}},T_{\mathbb{C}},\Pi)=D(G_{\mathbb{C}},T_{\mathbb{C}},B)$ denote the Dynkin diagram of $G_{\mathbb{C}}$ with respect to $T_{\mathbb{C}}$ and $\Pi$,
then the set of vertices of $D$ is $\Pi$.
Let $W=W(G,T)=N/T$ denote the Weyl group, where $N$ is the normalizer of $T$ in $G$.
By abuse of notation we write $W$ also for the group of points $W({\mathbb{R}})=W({\mathbb{C}})$.
Let $X_*={{\sf X}}_*(T_{\mathbb{C}}):={\rm Hom}({\mathbb{G}}_{m,{\mathbb{C}}}, T_{\mathbb{C}})$ denote the cocharacter group of $T$.
There is a canonical pairing
$$
\langle\ ,\,\rangle\colon X^*\times X_*\to{\mathbb{Z}},\quad (\chi,x)\mapsto \langle \chi,x\rangle\in {\mathbb{Z}},\quad \chi\in X^*,\ x\in X_*
$$
defined by
$$
\chi\circ x\,=\ ( z\mapsto z^{\langle \chi,x\rangle}\, ) \colon \ {\mathbb{G}}_{m,{\mathbb{C}}}\to {\mathbb{G}}_{m,{\mathbb{C}}}\,.
$$
We have a canonical basis $\Pi^\vee= \{\alpha_1^\vee,\dots,\alpha_n^\vee\}$ of the dual root system $R^\vee$,
where the simple coroot $\alpha_i^\vee\colon {\mathbb{G}}_{m,{\mathbb{C}}}\to T_{\mathbb{C}}$ is the coroot corresponding to the simple root $\alpha_i$;
see \cite[Sections 7.4 and 7.5]{Springer}. Note that $\langle \alpha_i , \alpha_i^\vee \rangle = 2$.
Since $G$ is {\em simply connected}, $\Pi^\vee$ is a basis of $X_*$
(this is one of the definitions of a simply connected semisimple algebraic group,
cf.~\cite[Section 2.15]{SpringerAMS}).
\begin{lemma}[well-known]\label{lem:repr}
Let $G$, $T,\ N$, and $W$ be as above (in particular, $G$ is {\em compact}).
Then for any $w\in W({\mathbb{R}})=W({\mathbb{C}})$ there exists a representative $n\in N({\mathbb{R}})$ (and not just in $N({\mathbb{C}})$).
\end{lemma}
\begin{proof}
The group $W$ is generated by the reflections $r_1,\dots,r_n$, hence, it suffices to find such an $n$ for a reflection $w=r_i$.
This reduces to the case where $G={\bf SU}_2$ and $T$ is the diagonal torus, when we can take
\[n=\begin{pmatrix}0 &1\\-1 & 0\end{pmatrix}.\]
\end{proof}
Since $G$ is {\em compact,} by Borel and Serre \cite[Theorem 6.8, Example (a)]{Borel-Serre}, see also Serre \cite[III.4.5, Example (a)]{Serre}
(or by Lemma \ref{lem:Bo88} and Proposition \ref{prop:Bo88} above, where $T$ is compact, $N_0=N$, and $W_0=W$)
we have a bijection $W\backslash T({\mathbb{R}})_2\isoto H^1({\mathbb{R}},G)$.
Here $W$ acts on $T({\mathbb{R}})_2$ in the standard way.
Namely, since $G$ is compact, by Lemma \ref{lem:repr} we can choose a representative $n$ of $w\in W$ in $N({\mathbb{R}})$,
and for $a\in T({\mathbb{R}})_2$ we set
\[ w*a=na{\bar{n}}^{-1}=nan^{-1}.\]
Therefore, we are interested in the standard action of $W$ on $T({\mathbb{R}})_2$.
We identify $X_*/2X_*$ with $T({\mathbb{R}})_2$ by $x+ 2 X_*\mapsto x(-1)\in T({\mathbb{R}})_2$ for $x\in X_*$.
The canonical ${\mathbb{Z}}$-basis $\alpha_1^\vee,\dots,\alpha_n^\vee$ of $X_*$ gives a ${\mathbb{Z}}/2{\mathbb{Z}}$-basis of $X_*/2X_*$,
which we shall again write as $\alpha_1^\vee,\dots,\alpha_n^\vee$.
By a {\em labeling} of the Dynkin diagram $D$ we mean a vector ${\boldsymbol{a}}=(a_i)_{i=1,\dots,n}$, where $a_i\in {\mathbb{Z}}/2{\mathbb{Z}}$, i.e., $a_i=0,1$.
In other words, at each vertex $i$ of $D$ we write a label $a_i\in {\mathbb{Z}}/2{\mathbb{Z}}$.
We denote the abelian group of labelings of $D$ by $L(D)$.
We have a canonical isomorphism
\begin{equation}\label{e:gamma}
\gamma\colon L(D)\isoto T({\mathbb{R}})_2\subset Z^1({\mathbb{R}},G),\quad {\boldsymbol{a}}\mapsto a= \prod_{i=1}^n \left(\alpha_i^\vee(-1)\right)^{a_i}.
\end{equation}
By abuse of notation we denote by $\gamma$ both the isomorphism $\gamma\colon L(D)\isoto T({\mathbb{R}})_2$
and the embedding $\gamma\colon L(D)\isoto T({\mathbb{R}})_2\hookrightarrow Z^1({\mathbb{R}},G)$.
Thus with ${\boldsymbol{a}}\in L(D)$ we associate $a=\gamma({\boldsymbol{a}})\in T({\mathbb{R}})_2\subset Z^1({\mathbb{R}},G)$.
We also associate with ${\boldsymbol{a}}$ the element $\sum_k a_k\alpha^\vee_k\in X_*/2X_*$.
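For example (our illustration): for $G={\bf SU}_{n+1}$ with $T$ the diagonal maximal torus and the standard simple roots, the coroot $\alpha_i^\vee$ satisfies $\alpha_i^\vee(-1)={\rm diag}(1,\dots,1,-1,-1,1,\dots,1)$ with the entries $-1$ in positions $i$ and $i+1$; thus $\gamma({\boldsymbol{a}})$ is the diagonal matrix of signs determined by the labeling ${\boldsymbol{a}}$, and it automatically has determinant $1$.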
We wish to compute the orbits of $W$ in $T({\mathbb{R}})_2$ with respect to the standard left action.
The Weyl group $W$ is generated by the reflections $r_i=r_{\alpha_i}$.
We define the {\em moves}
${\mathcal{M}}_i\colon L(D)\to L(D)$ on the set of labelings $L(D)$ by ${\mathcal{M}}_i\kern 1.0pt {\boldsymbol{a}}={\boldsymbol{a}}'$, where
\begin{equation}\label{eq:r-i-action}
r_i\left(\prod_{j=1}^n \left(\alpha_j^\vee(-1)\right)^{a_j}\right)=
\prod_{j=1}^n\left( \alpha_j^\vee(-1)\right)^{a'_j} \
\text{ i.e., }\
r_i \left(\sum_{j=1}^n a_j\,\alpha_j^\vee\right)=\sum_{j=1}^n a'_j\,\alpha_j^\vee\in X_*/2X_*.
\end{equation}
Note that if ${\boldsymbol{a}}'={\mathcal{M}}_i\kern 1.0pt {\boldsymbol{a}}$, then ${\boldsymbol{a}}={\mathcal{M}}_i\kern 1.0pt{\boldsymbol{a}}'$, because $r_i^2=1$.
We say that two labelings ${\boldsymbol{a}},{\boldsymbol{a}}'\in L(D)$ are {\em equivalent}
if we can relate them by a series of moves.
The set of orbits of $W$ in $T({\mathbb{R}})_2$ is in a canonical bijection
with the set of equivalence classes of labelings ${\boldsymbol{a}}\in L(D)$
of the Dynkin diagram $D$ of $(G_{\mathbb{C}},T_{\mathbb{C}},\Pi)$ with respect to the moves.
The following Lemma \ref{prop:non-twisted} says that the moves
defined in this section are indeed the moves of the Reeder puzzle on $D$.
\begin{lemma}\label{prop:non-twisted}
Let $G$ be a simply connected, simple, {\em compact} ${\mathbb{R}}$-group of absolute rank $n$,
and $D$ its Dynkin diagram, as above.
Define the moves ${\mathcal{M}}_i\colon L(D)\to L(D)$ by \eqref{eq:r-i-action}.
Then we have $a'_j=a_j$ for $j\neq i$, and $a'_i$ is given by
\begin{equation}\label{non-twisted-simply-laced}
a'_i = a_i + \Ss a_k
\end{equation}
(addition in ${\mathbb{Z}}/2{\mathbb{Z}}$), where $\Ss$ means
that the sum is taken over all the {\em neighbors} $k\neq i$ of $i$
except for the vertices $k$ connected to $i$ by a double edge
such that the root $\alpha_k$ is {\em shorter} than $\alpha_i$.
\end{lemma}
\begin{proof}
A reflection $r_i$ acts on $X_*$ by
\begin{equation}\label{eq:Springer-reflection}
r_i(y)=y-\langle\alpha_i,y\rangle \alpha_i^\vee,
\end{equation}
cf. \cite[Section 7.4.1]{Springer}.
If $y=\sum_k a_k\alpha_k^\vee\in X_*$, then
$$
r_i (y) =y-\sum_k a_k\langle\alpha_i,\alpha_k^\vee\rangle\alpha_i^\vee,
$$
and the same formula holds if $y=\sum_k a_k\alpha_k^\vee\in X_*/2X_*$.
If we write $r_i (y)=\sum_k a'_k\alpha_k^\vee$, then clearly $a'_j=a_j$ for $j\neq i$, and
\begin{equation}\label{eq:action-sum}
a'_i=a_i+\sum_k (-a_k)\langle \alpha_i,\alpha_k^\vee\rangle,
\end{equation}
so we need only to compute (in ${\mathbb{Z}}/2{\mathbb{Z}}$) the sum in \eqref{eq:action-sum}.
We may assume that our root system $R$ is a root system in a Euclidean space $V$.
Then
$$
\langle \alpha_i,\alpha_k^\vee\rangle=\frac{2(\alpha_i,\alpha_k)}{(\alpha_k,\alpha_k)},
$$
where $(\alpha_i,\alpha_k)$ is the scalar product in $V$.
If $k=i$, then $\langle\alpha_i,\alpha_k^\vee\rangle=\langle\alpha_i,\alpha_i^\vee\rangle=2\equiv 0 \pmod{2}$.
If two different vertices $i$ and $k$ are not connected by an edge, then $\langle\alpha_i,\alpha_k^\vee\rangle=0$.
Thus the sum in \eqref{eq:action-sum} is taken over vertices $k$ different from $i$ that are connected to $i$ by an edge.
Now we consider cases.
If vertices $i$ and $k$ are connected by a single edge, then $\langle\alpha_i,\alpha_k^\vee\rangle=-1$
\cite[VI.1.3, possibility (3)\,]{Bourbaki},
hence vertex $k$ gives $a_k$ to the sum in \eqref{eq:action-sum}.
If they are connected by a triple edge, then either $\langle\alpha_i,\alpha_k^\vee\rangle=-1$
or $\langle\alpha_i,\alpha_k^\vee\rangle=-3\equiv -1\pmod{2}$
\cite[VI.1.3, possibility (7)\,]{Bourbaki},
and again vertex $k$ gives $a_k$ to the sum.
If they are connected by a double edge and the root $\alpha_k$ is {\em longer} than $\alpha_i$,
then $\langle\alpha_i,\alpha_k^\vee\rangle=-1$ \cite[VI.1.3, possibility (5)\,]{Bourbaki},
and again vertex $k$ gives $a_k$ to the sum.
However, if the vertices $i$ and $k$ are connected by a double edge
and the root $\alpha_k$ is {\em shorter} than $\alpha_i$,
then $\langle\alpha_i,\alpha_k^\vee\rangle=-2\equiv 0\pmod{2}$ \cite[VI.1.3, possibility (5)\,]{Bourbaki},
hence vertex $k$ gives nothing to the sum in \eqref{eq:action-sum}.
We conclude that formula \eqref{eq:action-sum} can be written as \eqref{non-twisted-simply-laced}.
\end{proof}
\begin{corollary}\label{cor:non-twisted}
If $G$ is as in Lemma \ref{prop:non-twisted}, in particular $G$ is compact, then the map \eqref{e:gamma}
induces a bijection ${\rm Cl}(D)\isoto H^1({\mathbb{R}},G)$, where the moves ${\mathcal{M}}_i$ act on $L(D)$ by
formula \eqref{non-twisted-simply-laced}.
\end{corollary}
\section{Weyl action for inner forms}
\label{sec:inner}
In this section $G$, $T$, $R$, $\Pi$, $D$, and $W$ are as in Section \ref{sec:compact},
in particular $G$ is a simply connected, simple,
{\em compact} linear algebraic group over ${\mathbb{R}}$.
\subsection{The $t$-twisted action}
\label{subsec:t}
Write $G^{\rm ad}=G/Z_G,\ T^{\rm ad}=T/Z_G$, where $Z_G$ denotes the center of $G$.
Then $T^{\rm ad}$ is a maximal torus in the adjoint group $G^{\rm ad}$.
Consider an inner twisted form (inner twist) $_z G$ of $G$,
where $z\in Z^1({\mathbb{R}},G^{\rm ad})$.
It is well known that $z$ is cohomologous to some $t\in T^{\rm ad}({\mathbb{R}})_2$
(see e.g., \cite[III.4.5, Example (a)]{Serre}). {\em We fix such an element} $t$.
Then $_z G\simeq \kern 0.8pt_t G$.
We have $_t G({\mathbb{C}})=G({\mathbb{C}})$,
but the complex conjugation in $_t G({\mathbb{C}})$ is given by
$$
g\mapsto {}^*{\bar{g}}={\rm Inn}(t)({\bar{g}}) .
$$
This means that if we lift $t\in T^{\rm ad}({\mathbb{R}})_2$ to some ${\tilde t}\in T({\mathbb{C}})$,
then the complex conjugation in $_t G({\mathbb{C}})$ is given by
$$
^*{\bar{g}}={\tilde t}\,{\bar{g}}\, {\tilde t}^{-1}.
$$
Since ${\tilde t}\in T({\mathbb{C}})$, we have $_t T=T$, hence $_t T$ is a compact maximal torus in $_t G$,
hence it is a fundamental torus of $_t G$.
Thus any inner form of a compact semisimple ${\mathbb{R}}$-group has a compact maximal torus.
Let $T_0$ of Section \ref{sec:1} be the maximal compact subtorus of $_t T$, then clearly $T_0=\kern 0.8pt_t T=T$.
Let $W_0:=W_0(\kern 0.8pt_t G,\kern 0.8pt_t T)$ be the group $W_0$ of Section \ref{sec:1},
then $W_0=W(G,T)= W$, because $W_0$ was defined in terms of $T_0$.
We consider the $t$-twisted action of $W_0=W$ given by formula \eqref{eq:Bo-action}
on $H^1({\mathbb{R}},\kern 0.8pt_t T)=H^1({\mathbb{R}},T)=T({\mathbb{R}})_2$.
Let $w\in W({\mathbb{R}})=W({\mathbb{C}})$, $w=nT$, where $n\in N({\mathbb{R}})$.
Then
\[{\bar{n}}=n,\quad {}^* {\bar{n}}={\tilde t}{\bar{n}}{\tilde t}^{-1}={\tilde t} n{\tilde t}^{-1}. \]
For $a\in T({\mathbb{R}})_2=T({\mathbb{C}})_2$ the $t$-twisted action of $w$ is given by
\begin{equation}\label{eq:ect-0-gen}
w* a= n\, a\, {}^* {\bar{n}}^{-1}
=n\, a \, {\tilde t}\, {\bar{n}}^{-1} {\tilde t}^{-1}
=n\, a \, {\tilde t}\, n^{-1} {\tilde t}^{-1}
= n a n^{-1}\cdot n{\tilde t} n^{-1} {\tilde t} ^{-1}.
\end{equation}
In particular, let $r_j\in W({\mathbb{R}})=W({\mathbb{C}})$ be the reflection corresponding to a simple root $\alpha_j$.
Write $r_j=n_j T$ for some $n_j\in N({\mathbb{R}})$.
For $a\in T({\mathbb{R}})_2$ the $t$-twisted action of $r_j$ is given by
\begin{equation}\label{eq:ect-0}
r_j* a= n_j\, a\, {}^* {\bar{n}}_j^{-1}
=n_j\, a \, {\tilde t}\, n_j^{-1} {\tilde t}^{-1}= n_j a n_j^{-1}\cdot n_j{\tilde t} n_j^{-1} {\tilde t} ^{-1}.
\end{equation}
Note that
\begin{equation}\label{eq:twisting}
r_j* a=r_j(a) \cdot n_j{\tilde t} n_j^{-1} {\tilde t}^{-1},
\end{equation}
where $r_j(a)=n_j a n_j^{-1}$.
In particular, we have $r_j * 1= n_j{\tilde t} n_j^{-1} {\tilde t}^{-1}$, so in general $r_j* 1 \neq 1$ and therefore,
the $t$-twisted action does not preserve the group structure in $T({\mathbb{R}})_2$.
Define
\begin{equation}\label{eq:t-bold}
{\boldsymbol{t}}=(t_i)\in({\mathbb{Z}}/2{\mathbb{Z}})^n, \quad\text{where}\quad (-1)^{t_i}=\alpha_i(t).
\end{equation}
We regard ${\boldsymbol{t}}$ as a {\em coloring} of the diagram $D$.
We color a vertex $i$ in black if $t_i=1$, and leave $i$ uncolored (i.e., white) if $t_i=0$.
Denote by $_{\boldsymbol{t}} D:=(D,{\boldsymbol{t}})$ the Dynkin diagram $D=D(G_{\mathbb{C}},T_{\mathbb{C}},\Pi)$
together with the coloring ${\boldsymbol{t}}$.
The notation $_{\boldsymbol{t}} D$ suggests that we regard $_{\boldsymbol{t}} D=(D,{\boldsymbol{t}})$ as an (inner) twist of $D$ by ${\boldsymbol{t}}$.
We compute the moves corresponding to the $t$-twisted action.
For each vertex $i$ of $D$,
we define the move ${\mathcal{M}}_i$ by
${\mathcal{M}}_i\kern 1.0pt {\boldsymbol{a}}={\boldsymbol{a}}'$, where
\begin{equation*}\label{eq:r-i-action-twisted}
r_i*\left(\prod_{j=1}^n \left(\alpha_j^\vee(-1)\right)^{a_j}\right)=
\prod_{j=1}^n \left(\alpha_j^\vee(-1)\right)^{a'_j}\
\text{ i.e., }\
r_i *\left(\sum_{j=1}^n a_j\,\alpha_j^\vee\right)=\sum_{j=1}^n a'_j\,\alpha_j^\vee\in X_*/2X_*.
\end{equation*}
\begin{lemma}\label{prop:twisted}
For the $t$-twisted action of $W$ and the move ${\mathcal{M}}_i$ just defined,
we have, as in Lemma \ref{prop:non-twisted}, $a'_j=a_j$ for $j\neq i$,
while in formula \eqref{non-twisted-simply-laced}
the term $t_i\in{\mathbb{Z}}/2{\mathbb{Z}}$ defined by $(-1)^{t_i}=\alpha_i(t)$ must be added.
Thus we have
\begin{equation}\label{twisted-simply-laced}
a'_i = a_i+t_i + \Ss a_k \ ,
\end{equation}
where the meaning of $\Ss$ is the same as in formula \eqref{non-twisted-simply-laced}.
\end{lemma}
\begin{proof}
By \eqref{e:gamma}, \eqref{eq:twisting} and Lemma \ref{prop:non-twisted} it suffices to show
that $n_j{\tilde t} n_j^{-1} {\tilde t}^{-1}=\left(\alpha_j^\vee(-1)\right)^{t_j}$.
We are indebted to Dmitry A. Timashev for the idea of the following proof.
Consider the ${\mathbb{C}}$-torus $T_{\mathbb{C}}$. As above, we
write $X_*$ for ${{\sf X}}_*(T_{\mathbb{C}})={\rm Hom}({\mathbb{G}}_{m,{\mathbb{C}}},T_{\mathbb{C}})$.
We have a canonical isomorphism of abelian complex Lie groups
$$
X_*\underset{{\mathbb{Z}}}{\otimes} {\mathbb{C}}^\times\isoto T({\mathbb{C}}),\quad x\otimes u\mapsto x(u),\quad x\in X_*,\ u\in {\mathbb{C}}^\times={\mathbb{G}}_{m,{\mathbb{C}}}({\mathbb{C}}).
$$
Thus we obtain an isomorphism of abelian complex Lie algebras (vector spaces over ${\mathbb{C}}$)
$$
X_* \underset{{\mathbb{Z}}}{\otimes} {\mathbb{C}} \isoto {\rm Lie\,} T_{\mathbb{C}},\quad x\otimes v\mapsto dx(v),\quad x\in X_*,\ v\in{\mathbb{C}},\
dx:=d_1 x\,\colon {\mathbb{C}}={\rm Lie\,}{\mathbb{G}}_{m,{\mathbb{C}}}\to {\rm Lie\,} T_{\mathbb{C}}\,.
$$
In particular, we obtain a canonical embedding
\begin{equation}\label{eq:embedding-X*}
X_*\hookrightarrow X_* \underset{{\mathbb{Z}}}{\otimes} {\mathbb{C}} \isoto {\rm Lie\,} T_{\mathbb{C}},\quad x\mapsto x\otimes 1\mapsto dx(1).
\end{equation}
Now it is an easy exercise to deduce from \eqref{eq:Springer-reflection} and \eqref{eq:embedding-X*}
that for $1\le j\le n$ and for any $y\in{\rm Lie\,} T_{\mathbb{C}}$ we have
\begin{equation}\label{eq:Springer-reflection-Lie}
r_j(y)=y-\langle d \alpha_j,y\rangle d\alpha_j^\vee(1),
\end{equation}
where we write $d\alpha_j$ for $d_1\alpha_j\, \colon {\rm Lie\,} T_{\mathbb{C}}\to{\rm Lie\,}{\mathbb{G}}_{m,{\mathbb{C}}}= {\mathbb{C}}$, and
we write $\langle d \alpha_j,y\rangle$ for $d\alpha_j(y)\in{\mathbb{C}}$.
Let $\omega_k^\vee\in {\rm Lie\,} T_{\mathbb{C}}$ be the element such that
$\langle d\alpha_j,\omega_k^\vee\rangle=\delta_{jk}$\,, where $\delta_{jk}$ is Kronecker's delta symbol.
We set
\[{\tilde t}=\exp\left(\pi {\mathrm{i}}\,\sum_k\kern 1.0pt t_k \omega_k^\vee \right) \in T({\mathbb{C}}),\quad\text{where }{\mathrm{i}}^2=-1.\]
Then
\begin{equation*}
\alpha_j({\tilde t})=\exp \left\langle d\alpha_j,\,\pi {\mathrm{i}}\,\sum_k t_k \omega_k^\vee\right\rangle=
\exp \left(\pi{\mathrm{i}}\,\sum_k t_k\langle d\alpha_j,\omega_k^\vee\rangle\right)=
\exp(\pi{\mathrm{i}}\, t_j)=(-1)^{t_j},
\end{equation*}
because the exponential map commutes with homomorphisms of Lie groups; see \cite[Section 1.2.7, p.~29, Problem 26]{OV}.
It follows that the image of ${\tilde t}$ in $T^{\rm ad}({\mathbb{C}})$ is indeed $t$.
By \eqref{eq:Springer-reflection-Lie} we have
\begin{align*}\label{eq:Dima}
n_j{\tilde t} n_j^{-1} {\tilde t}^{-1}&=r_j({\tilde t}) {\tilde t}^{-1}
=\exp\left(\pi {\mathrm{i}}\, \sum_k\, t_k\left(r_j(\omega_k^\vee)-\omega_k^\vee\right)\,\right)\\
&=\exp\left(-\pi{\mathrm{i}}\sum_k t_k\langle d\alpha_j,\omega_k^\vee\rangle d\alpha_j^\vee(1)\right)
=\exp\left( t_j\, d\alpha_j^\vee(-\pi{\mathrm{i}})\right)
=\left(\alpha_j^\vee(-1)\right)^{t_j}.
\end{align*}
Thus $n_j{\tilde t} n_j^{-1} {\tilde t}^{-1}=\left(\alpha_j^\vee(-1)\right)^{t_j}$, as required.
\end{proof}
According to Lemma \ref{prop:twisted}, the twisted action of ${\mathcal{M}}_i$ on a labeling ${\boldsymbol{a}}=(a_i)\in ({\mathbb{Z}}/2{\mathbb{Z}})^n$
is given by formula \eqref{twisted-simply-laced}.
This means that for any vertex $i$ of $D$,
the action of ${\mathcal{M}}_i$ is given
by formula \eqref{non-twisted-simply-laced} if vertex $i$ is white (i.e., $t_i=0$),
and by formula
\begin{equation}\label{twisted-action}
a'_i = a_i + 1+ \Ss a_k \ ,
\end{equation}
if vertex $i$ is black (i.e., $t_i=1$).
In other words, this is exactly the generalized Reeder puzzle as described in the Introduction.
We denote by $L(D,{\boldsymbol{t}})$ (or $L(\kern 0.8pt_{\boldsymbol{t}} D)$) the set of labelings $({\mathbb{Z}}/2{\mathbb{Z}})^n$
with this twisted action of the moves ${\mathcal{M}}_i$.
By Lemma \ref{prop:twisted} the action of ${\mathcal{M}}_i$ on $L(D,{\boldsymbol{t}})$
is compatible with the $t$-twisted action of the reflection $r_i\in W$
on $T({\mathbb{R}})_2=\kern 0.8pt_t T({\mathbb{R}})_2$ with respect to the canonical bijection
\begin{equation*}
\gamma_{\boldsymbol{t}}\colon L(D,{\boldsymbol{t}})\isoto T({\mathbb{R}})_2\subset Z^1({\mathbb{R}},\kern 0.8pt_t G), \quad {\boldsymbol{a}}\mapsto a=\prod_i \left(\alpha_i^\vee(-1)\right)^{a_i}.
\end{equation*}
By abuse of notation we denote by $\gamma_{\boldsymbol{t}}$ both the isomorphism $\gamma_{\boldsymbol{t}}\colon L(\kern 0.8pt_{\boldsymbol{t}} D)\isoto T({\mathbb{R}})_2$
and the embedding $\gamma_{\boldsymbol{t}}\colon L(\kern 0.8pt_{\boldsymbol{t}} D)\isoto T({\mathbb{R}})_2\hookrightarrow Z^1({\mathbb{R}},G)$.
We regard the twisted diagram $_{\boldsymbol{t}} D=(D,{\boldsymbol{t}})$ as the {\em colored Dynkin diagram of the twisted group $\kern 0.8pt_t G$}
(with respect to $T$ and $\Pi$).
We denote by ${\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)$ the set of equivalence classes (orbits) in $L(\kern 0.8pt_{\boldsymbol{t}} D)$
with respect to the equivalence relation given by the moves of Lemma \ref{prop:twisted}
(in the Introduction we denoted this set of equivalence classes by ${\rm Cl}(D,{\boldsymbol{t}})$\kern 0.8pt).
The following theorem describes the Galois cohomology of an {\em inner} form $_t G$
of a compact, simply connected, simple ${\mathbb{R}}$-group $G$
in terms of labelings of the corresponding colored Dynkin diagram $_{\boldsymbol{t}} D$.
\begin{theorem}\label{thm:inner}
Let $G$, $T$, $R$, $\Pi$, $D$, and $W$ be as in Section \ref{sec:compact}.
Let $t\in T^{\rm ad}({\mathbb{R}})_2$ and let ${\boldsymbol{t}}\in({\mathbb{Z}}/2{\mathbb{Z}})^n$ be defined by \eqref{eq:t-bold}.
Let $L(_{\boldsymbol{t}} D)$ be the set of labelings of the colored Dynkin diagram $_{\boldsymbol{t}} D$
with the moves given by formula \eqref{twisted-simply-laced}.
Then the canonical map
$$
\gamma_{\boldsymbol{t}}\colon L(\kern 0.8pt_{\boldsymbol{t}} D)\isoto T({\mathbb{R}})_2 \hookrightarrow Z^1({\mathbb{R}},\kern 0.8pt_t G)
$$
induces a canonical bijection
$$
\lambda_{\boldsymbol{t}}\colon {\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)\isoto H^1({\mathbb{R}},\kern 0.8pt_t G).
$$
\end{theorem}
The theorem follows immediately from Proposition \ref{prop:Bo88} and Lemma \ref{prop:twisted}.
We specify that $\lambda_{\boldsymbol{t}}$ takes the orbit (class) of a labeling ${\boldsymbol{a}}=(a_j)$ to the cohomology class of the cocycle
\[\prod_{j=1}^n(\alpha_j^\vee(-1))^{a_j}\in T({\mathbb{R}})_2\subset Z^1({\mathbb{R}},\kern 0.8pt_t G).\]
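As a sanity check (our illustration): for type ${\mathbf{A}}_1$ the diagram has a single vertex. If the vertex is white ($t_1=0$), the move ${\mathcal{M}}_1$ never changes the label, so ${\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)$ has two classes, matching the two elements of $H^1({\mathbb{R}},{\bf SU}_2)$; if the vertex is black ($t_1=1$), the move always changes the label, all labelings are equivalent, and $H^1({\mathbb{R}},\kern 0.8pt_t G)$ is trivial, as it must be for $\kern 0.8pt_t G\simeq{\bf{SL}}(2)$.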
\section{Weyl action for outer forms}
\label{sec:outer}
In this section again $G$, $T$, $R$, $\Pi$, $B$, $D$, $N$ and $W$ are as in Section \ref{sec:compact},
in particular $G$ is a simply connected, simple,
{\em compact} linear algebraic group over ${\mathbb{R}}$, and $B$ is
the Borel subgroup of $G_{\mathbb{C}}$ containing $T_{\mathbb{C}}$, corresponding to the basis $\Pi$ of $R$.
Let $\rho\in{\rm Gal}({\mathbb{C}}/{\mathbb{R}})$ denote the complex conjugation.
Since $G$ is defined over ${\mathbb{R}}$, the Galois group ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})=\{1,\rho\}$ acts on ${\rm Aut}\,G_{\mathbb{C}}$.
The {\em group of semi-automorphisms} $\mathrm{SAut}\,G_{\mathbb{C}}:=({\rm Aut}\,G_{\mathbb{C}})\rtimes {\rm Gal}({\mathbb{C}}/{\mathbb{R}})$ acts on $D$,
see \cite[Proposition 3.1]{BKLR}. We describe this action here.
We construct a homomorphism
\begin{equation*}
\psi_S\colon \mathrm{SAut}\, G_{\mathbb{C}}\to{\rm Aut}\,D.
\end{equation*}
Let $s\in\mathrm{SAut}\,G_{\mathbb{C}}$.
We have $T_{\mathbb{C}}\subset B\subset G_{\mathbb{C}}$.
Consider the pair $(s(T_{\mathbb{C}}),s(B))$.
There exists $g\in G({\mathbb{C}})$ such that
\[g\cdot s(T_{\mathbb{C}})\cdot g^{-1}=T_{\mathbb{C}},\quad g\cdot s(B)\cdot g^{-1}=B,\]
and if $g'\in G({\mathbb{C}})$ is another such element, then $g'=t'g$ for some $t'\in T({\mathbb{C}})$.
We obtain a semi-automorphism
\[{\rm Inn}(g)\circ s \in\mathrm{SAut}(G_{\mathbb{C}},T_{\mathbb{C}},B),\]
which induces a well-defined automorphism
\[\psi_S(s)\in{\rm Aut}\, D.\]
The restriction of $\psi_S$ to the subgroup \ ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})\subset \mathrm{SAut}\, G$ \
gives the {\em $^*$-action} of ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})$ on $D$, see \cite[Section 2.3]{Tits}.
Let
\begin{equation}\label{e:psi}
\psi\colon {\rm Aut}\, G_{\mathbb{C}}\to{\rm Aut}\,D
\end{equation}
denote the restriction of $\psi_S$ to the subgroup ${\rm Aut}\, G_{\mathbb{C}}\subset \mathrm{SAut}\, G_{\mathbb{C}}$; then
$\psi$ is clearly ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})$-equivariant with respect to the $^*$-action of ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})$ on $D$.
The homomorphism $\psi$ fits into the exact sequence
\begin{equation}\label{split-Springer}
1\to G^{\rm ad}({\mathbb{C}})\to {\rm Aut}\,G_{\mathbb{C}}\labelto{\psi} {\rm Aut}\,D\to 1
\end{equation}
which admits a {\em splitting,} that is, a homomorphism
$\phi\colon{\rm Aut}\,D\to{\rm Aut}\,G_{\mathbb{C}}$ such that $\psi\circ\phi={{\rm id}}_{{\rm Aut} D}$\,,
see \cite[Expos\'e XXIV, Theorem 1.3]{SGA3} or \cite[Corollary 2.14]{SpringerAMS}, or \cite[Proposition 1.5.5]{Conrad-RGS}.
We construct a splitting of \eqref{split-Springer} of a special kind in the next lemma.
\begin{lemma}\label{lem:phi}
Let $G$ be as above, in particular compact and simply connected.
Then there exists a homomorphism
\[\phi\colon{\rm Aut}\,D\to{\rm Aut}\, G_{\mathbb{C}},\quad \theta\mapsto\phi_\theta\]
such that $\psi\circ\phi={{\rm id}}_{{\rm Aut} D}$
and for any $\theta\in{\rm Aut}\,D$ the automorphism $\phi_\theta\in{\rm Aut}\,G_{\mathbb{C}}$ is defined over ${\mathbb{R}}$.
\end{lemma}
\begin{proof}
Consider the complexification ${\mathfrak{g}}_{\mathbb{C}}$ of ${\mathfrak{g}}={\rm Lie\,} G$ and the root decomposition
\[ {\mathfrak{g}}_{\mathbb{C}}={\rm Lie\,} T_{\mathbb{C}} \oplus\bigoplus_{\beta\in R}{\mathfrak{g}}_\beta\,. \]
Consider a ``canonical system of generators'' $h_i\,,e_i\,,f_i\ (i=1,\dots,n)$ of ${\mathfrak{g}}_{\mathbb{C}}$
satisfying
\begin{align*}
&[h_i\,,h_j]=0,\quad [e_i\,,f_i]=h_i\,,\quad [e_i\,,f_j]=0\text{ for }i\neq j\\
&[h_i\,,e_j]=a_{ji}e_j\,,\quad [h_i\,, f_j]=-a_{ji} f_j\,,
\end{align*}
see \cite[Section 4.3.2]{OV}.
Here $(a_{ij})$ is the Cartan matrix,
\[e_i\in {\mathfrak{g}}_{\alpha_i},\quad f_i\in {\mathfrak{g}}_{-\alpha_i},\quad h_i\in {\rm Lie\,} T_{\mathbb{C}},\quad \Pi=\{\alpha_1,\dots,\alpha_n\}.\]
Since $G$ is compact, one can choose the generators $h_i\,,e_i\,,f_i$ such that
\[^\rho h_i=-h_i\,, \quad ^\rho e_i=-f_i\,, \quad ^\rho f_i=-e_i\,, \]
see \cite[Section 5.1.3, Problem 19]{OV}.
Now let $\theta\in {\rm Aut}\,D$.
We define an automorphism $\phi_\theta$ of ${\mathfrak{g}}_{\mathbb{C}}$ on the generators by
\[\phi_\theta(h_i)=h_{\theta(i)}\,,\quad \phi_\theta(e_i)=e_{\theta(i)}\,,\quad \phi_\theta(f_i)=f_{\theta(i)}\,.\]
Clearly $\phi_\theta$ commutes with $\rho$, hence the automorphism $\phi_\theta$ of ${\mathfrak{g}}_{\mathbb{C}}$ is defined over ${\mathbb{R}}$.
The ${\mathbb{R}}$-automorphism $\phi_\theta$ of ${\mathfrak{g}}$ induces a unique automorphism of the connected simply connected algebraic ${\mathbb{R}}$-group $G$;
by abuse of notation we denote this automorphism again by $\phi_\theta$.
We have $\phi_\theta(T)=T,\ \phi_\theta(B)=B$, and it is clear from our construction of $\psi$ that $\psi(\phi_\theta)=\theta$, hence
$\psi\circ\phi={{\rm id}}_{{\rm Aut} D}$\,.
Clearly
\[\phi\colon{\rm Aut}\,D\to{\rm Aut}_{\mathbb{R}}\, G,\quad \theta\mapsto \phi_\theta\]
is a homomorphism.
\end{proof}
\begin{corollary}
The complex conjugation $\rho$ acts trivially on ${\rm Aut}\,D$, where ${\rm Gal}({\mathbb{C}}/{\mathbb{R}})$ acts on ${\rm Aut}\,D$ via its $^*$-action on $D$.
\end{corollary}
\begin{proof}
Indeed, if $\theta\in{\rm Aut}\,D$, then
\[^\rho\theta=\kern 0.8pt^\rho(\psi(\phi_\theta))=\psi(\kern 0.8pt^\rho(\phi_\theta))=\psi(\phi_\theta)=\theta,\]
because $\phi_\theta\in{\rm Aut}_{\mathbb{R}}\,G$.
\end{proof}
Let $_z G$ be an outer twisted form (outer twist) of $G$, where $z\in Z^1({\mathbb{R}},{\rm Aut}\, G)$ and $z\notin Z^1({\mathbb{R}},{\rm Inn}\, G)$.
The homomorphism $\psi$ of \eqref{e:psi} induces a map
\[ Z^1({\mathbb{R}},{\rm Aut}\,G)\to Z^1({\mathbb{R}},{\rm Aut}\,D)=({\rm Aut}\,D)_2\,.\]
We obtain an element $\tau=\psi(z)\in ({\rm Aut}\,D)_2$; since $z\notin Z^1({\mathbb{R}},{\rm Inn}\, G)$, the element $\tau$ is a nontrivial involutive automorphism of $D$.
It acts on the set of vertices $\Pi$ of $D$; we write $\alpha_j\mapsto \alpha_{\tau(j)}$, $j=1,\dots,n$.
We write $\Pi^\tau$ for the set of fixed points of $\tau$ in $\Pi$, and $D^\tau$ for the corresponding Dynkin subdiagram.
Furthermore, $\tau$ acts on $\Pi^\vee$ by $\alpha_j^\vee\mapsto\alpha_{\tau(j)}^\vee$
and on $W=\langle r_j\rangle_{j=1,\dots,n}$ by $\tau(r_j)=r_{\tau(j)}$.
We write $W^\tau$ for the algebraic subgroup of fixed points of $\tau$ in $W$.
The homomorphism $\phi$ of Lemma \ref{lem:phi} gives an involutive automorphism $\phi_\tau$ of $(G,T,B)$.
By abuse of notation, we shall denote this ``diagrammatic'' automorphism $\phi_\tau$ again by $\tau$.
Then $\tau\in{\rm Aut}_{\mathbb{R}}(G,T)_2$ and $\tau$ acts on $W$.
We write $_\tau T$, $_\tau T^{\rm ad}$, $_\tau G$, $_\tau G^{\rm ad}$, and $_\tau W$ for the corresponding twisted algebraic groups.
We consider the action of $\tau$ on $\Pi$ and on $\Pi^\vee$.
The decomposition
$$
\Pi=\Pi^\tau\cup (\Pi\smallsetminus \Pi^\tau)
$$
of the basis $\Pi$ of the character group ${{\sf X}}^*(T^{\rm ad})$ of the adjoint torus $T^{\rm ad}$
induces a $\tau$-invariant decomposition into a direct product
\begin{equation}\label{e:T-product}
T^{\rm ad}=T^{\rm ad}(D^\tau)\times_{\mathbb{R}} T^{\rm ad}(D\smallsetminus D^\tau)
\end{equation}
with ${{\sf X}}^*(T^{\rm ad}(D^\tau))=\langle\Pi^\tau\rangle$ and ${{\sf X}}^*(T^{\rm ad}(D\smallsetminus D^\tau))=\langle\,\Pi\smallsetminus \Pi^\tau\rangle$.
Here for a subset $S\subset {{\sf X}}^*(T^{\rm ad})$, we denote by $\langle S\rangle$ the subgroup generated by $S$.
Concerning the corresponding $\tau$-twisted tori, we see that $_\tau T^{\rm ad}(D^\tau)=T^{\rm ad}(D^\tau)$ is a compact torus,
while the ${\mathbb{R}}$-torus $_\tau T^{\rm ad}(D\smallsetminus D^\tau)$
is isomorphic to the Weil restriction of scalars $R_{{\mathbb{C}}/{\mathbb{R}}} T'$ of some ${\mathbb{C}}$-torus $T'$.
It follows that
$$
H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}(D^\tau))=T^{\rm ad}(D^\tau)({\mathbb{R}})_2,\quad \text{while}\quad H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}(D\smallsetminus D^\tau))=1,
$$
and therefore, the embedding ${}_\tau T^{\rm ad}(D^\tau)\hookrightarrow\kern 0.8pt_\tau T^{\rm ad}$ induces a canonical isomorphism
\begin{equation}\label{eq:isomorphism-ad}
T^{\rm ad}(D^\tau)({\mathbb{R}})_2=H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}(D^\tau))\isoto H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}).
\end{equation}
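(The equality $H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}(D^\tau))=T^{\rm ad}(D^\tau)({\mathbb{R}})_2$ holds because ${}_\tau T^{\rm ad}(D^\tau)=T^{\rm ad}(D^\tau)$ is a compact torus, while the vanishing of $H^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad}(D\smallsetminus D^\tau))$ follows from Shapiro's lemma: $H^1({\mathbb{R}},R_{{\mathbb{C}}/{\mathbb{R}}}T')\cong H^1({\mathbb{C}},T')=1$.)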
Similarly, we have a decomposition
$$
\Pi^\vee=\Pi^{\vee\,\tau}\,\cup\, (\Pi^\vee\smallsetminus \Pi^{\vee\,\tau}),
$$
where we write $\Pi^{\vee\,\tau}$ for $(\Pi^\vee)^\tau$. This decomposition
of the basis $\Pi^\vee$ of the cocharacter group ${{\sf X}}_*(T)$ induces a $\tau$-invariant decomposition into a direct product
$$
T=T(D^\tau)\times T(D\smallsetminus D^\tau)
$$
with ${{\sf X}}_*(T(D^\tau))=\langle\,\Pi^{\vee\,\tau} \rangle$ and
${{\sf X}}_*(T(D\smallsetminus D^\tau))=\langle\,\Pi^\vee\smallsetminus \Pi^{\vee\,\tau} \rangle$.
As above, we have $_\tau T(D^\tau)=T(D^\tau)$, hence
$$
H^1({\mathbb{R}},\kern 0.8pt_\tau T(D^\tau))=T(D^\tau)({\mathbb{R}})_2,\quad \text{while}\quad H^1({\mathbb{R}},\kern 0.8pt_\tau T(D\smallsetminus D^\tau))=1.
$$
The involutive automorphism $\tau$ of $D$ acts on the set of labelings $L(D)$, and we denote by $L(D)^\tau$ the subset of invariants.
The homomorphism $\gamma\colon L(D)\to T({\mathbb{C}})_2$ given by formula \eqref{e:gamma} induces an isomorphism $L(D)^\tau\to\kern 0.8pt_\tau T({\mathbb{R}})_2$
(because the complex conjugation acts on $_\tau T({\mathbb{C}})_2$ as $\tau$).
We obtain a commutative diagram
\begin{equation}\label{eq:isomorphism}
\xymatrix{
L(D^\tau)\ar[r]^-\sim \ar@/_1pc/[d] &T(D^\tau)({\mathbb{R}})_2\ar[r]^-\sim\ar@/_1pc/[d] &H^1({\mathbb{R}}, T(D^\tau))\ar[d]^\sim \\
L(D)^\tau\ar[r]^-\sim\ar[u] &_\tau T({\mathbb{R}})_2\ar[r]\ar[u] &H^1({\mathbb{R}},\kern 0.8pt_\tau T)
}
\end{equation}
with obvious maps.
For our $z\in Z^1({\mathbb{R}},{\rm Aut}\,G_{\mathbb{C}})$ and $\tau=\phi_{\psi(z)}$ we have $\psi(z)=\psi(\tau)$.
It follows from the exact sequence \eqref{split-Springer} and \cite[I.5.5, Corollary 2 of Proposition 39]{Serre}
that our outer form $_z G$ of $G$ is an {\em inner} twist of $_\tau G$,
i.e. $_z G\simeq\kern 0.8pt_{z'} (\kern 0.8pt_\tau G)$ for some $z'\in Z^1({\mathbb{R}},\kern 0.8pt_\tau G^{\rm ad})$.
By Proposition \ref{prop:Bo88} the cocycle $z'$ is cohomologous to some $t\in Z^1({\mathbb{R}},\kern 0.8pt_\tau T^{\rm ad})\subset Z^1({\mathbb{R}},\kern 0.8pt_\tau G^{\rm ad})$,
and by \eqref{eq:isomorphism-ad} we may assume that
$t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2 \subset \kern 0.8pt_\tau T^{\rm ad}({\mathbb{R}})_2$.
We denote by ${\rm Inn}(t)$ the corresponding inner automorphism of $_\tau G$ of order dividing 2.
We set $\sigma={\rm Inn}(t)\circ\tau$.
Note that ${\rm Inn}(t)$ and $\tau$ commute, hence $\sigma$ is an outer automorphism of order 2 of $G$.
We write $_{\sigma} G=\, _{{\rm Inn}(t)}(_\tau G)$ for the corresponding twisted form of $G$, then $\sigma\sim z$ and $_\sigma G\simeq\kern 0.8pt_z G$.
For simplicity we also write $_{\sigma} G=\kern 0.8pt_{t\tau} G$.
We have $_\sigma G({\mathbb{C}})= G({\mathbb{C}})$, but the complex conjugation in $_\sigma G({\mathbb{C}})$ is given by
$$
^*{\bar{g}}=\sigma({\bar{g}})={\rm Inn}(t)(\tau({\bar{g}})) .
$$
Note that ${\rm Inn}(t)$ acts trivially on $_\tau T$, hence also on $_\tau W$,
because $_\tau W\subset{\rm Aut}(_\tau T)$.
We see that $_{t\tau} T=\kern 0.8pt_\tau T$ and $_{t\tau} W=\kern 0.8pt_\tau W$.
We consider the group $W_0:=W_0({}_{t\tau} G)=W_0({}_\tau G)$; see Section \ref{sec:1}.
We have $W_0({\mathbb{C}})=W_0({\mathbb{R}})={}_\tau W({\mathbb{R}})$; see \cite[Section 7]{Borovoi-arXiv}.
Clearly ${}_\tau W({\mathbb{R}})=W^\tau({\mathbb{C}})$, hence $W_0({\mathbb{R}})=W^\tau({\mathbb{C}})$.
The group $W_0({\mathbb{R}})$ acts on $H^1({\mathbb{R}},{}_{t\tau} T)=H^1({\mathbb{R}},{}_\tau T)$ as in formula \eqref{eq:Bo-action},
and it acts on the set of labelings $L(D^\tau)$
via \eqref{eq:isomorphism}.
We wish to describe this action explicitly.
Note that if $D$ is of type ${\mathbf{A}}_{2n}$, then $D^\tau=\emptyset$, $T(D^\tau)=1$,
$H^1({\mathbb{R}},{}_\tau T)=1$, $H^1({\mathbb{R}},{}_\tau G)=1$ (in this case $_\tau G\simeq {\bf{SL}}_{2n+1}$).
From now till the end of this section we shall assume that {\em $D$ is not of type ${\mathbf{A}}_{2n}$.}
Then from the classification of Dynkin diagrams we know that for any $j\in D\smallsetminus D^\tau$,
the vertices $j$ and $\tau(j)$ are not connected by an edge,
and therefore, the reflections $r_j$ and $r_{\tau(j)}$ commute.
\begin{lemma}\label{lem-pairs}
Assume that $D$ is not of type ${\mathbf{A}}_{2n}$.
Then the group $W_0({\mathbb{R}})$
is generated by the reflections $r_i$ for $i\in D^\tau$
and by the products $r_j\cdot r_{\tau(j)}$ for $j\in D\smallsetminus D^\tau$.
\end{lemma}
\begin{proof}
We have
$$
W_0({\mathbb{R}})={}_\tau W({\mathbb{R}})=W^\tau({\mathbb{C}}).
$$
Now the lemma follows from \cite[Proposition 13.1.2]{Ca}.
\end{proof}
\begin{lemma}\label{lem:prod-pairs}
Assume that $D$ is not of type ${\mathbf{A}}_{2n}$.
Let $j\in D\smallsetminus D^\tau$.
Then the product $r_j\cdot r_{\tau(j)}$ acts trivially on $H^1({\mathbb{R}},\kern 0.8pt_\sigma T)=H^1({\mathbb{R}},\kern 0.8pt_\tau T)$,
where $\sigma={\rm Inn}(t)\circ\tau$.
\end{lemma}
\begin{proof}
Let $b\in Z^1({\mathbb{R}},{}_\tau T)$.
By diagram \eqref{eq:isomorphism} we may assume that $b\in T(D^\tau)({\mathbb{R}})_2\subset T({\mathbb{C}})_2$.
Let ${\boldsymbol{b}}=(b_k)\in ({\mathbb{Z}}/2{\mathbb{Z}})^D$ be the corresponding labeling of $D$ such that $b=\prod_k (\alpha_k^\vee(-1))^{b_k}$.
We set $w_{j,\tau(j)}=r_j r_{\tau(j)}$.
Let $G_j=G_{\alpha_j}$ denote the simple 3-dimensional subgroup of $G$ corresponding to the simple root $\alpha_j$.
Choose a representative $n_j\in G_j({\mathbb{R}})\cap \mathcal{N}_G(T)({\mathbb{R}})$ of $r_j\in W({\mathbb{R}})$.
Set $n_{\tau(j)}=\tau(n_j)\in G_{\tau(j)}({\mathbb{R}})$, then $\tau(n_{\tau(j)})=n_j$, because $\tau^2=1$.
Since the vertices $j$ and $\tau(j)$ are not connected by an edge,
the subgroups $G_j$ and $G_{\tau(j)}$ of $G$ commute,
hence $n_j$ and $n_{\tau(j)}$ commute.
Set $n_{j,\tau(j)}:=n_j n_{\tau(j)}$, then
\[\tau(n_{j,\tau(j)})=\tau(n_j n_{\tau(j)})=n_{\tau(j)} n_j= n_j n_{\tau(j)}=n_{j,\tau(j)}\, ,\]
hence $n_{j,\tau(j)}\in \mathcal{N}_G(T)({\mathbb{R}})^\tau$
and $n_{j,\tau(j)}$ represents $w_{j,\tau(j)}$.
We consider the action \eqref{eq:Bo-action} of $w_{j,\tau(j)}$ on $H^1({\mathbb{R}},\,_{t\tau}T)$.
We write $w$ for $w_{j,\tau(j)}$ and $n$ for $n_{j,\tau(j)}$.
Recall that $\sigma={\rm Inn}(t)\circ\tau$, where $t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2\subset T^{\rm ad}({\mathbb{C}})_2$.
We lift $t$ to some ${\tilde t}\in T({\mathbb{C}})$.
Then we have
\begin{equation*}
w* [b]:=[nbn^{-1}\cdot n\,{\rm Inn}(t)(\tau({\bar{n}})^{-1})]=
[nbn^{-1}\cdot n{\tilde t}\tau({\bar{n}})^{-1}{\tilde t}^{-1}]=[nbn^{-1}\cdot n{\tilde t} n^{-1}{\tilde t}^{-1}],
\end{equation*}
because $\tau({\bar{n}})=n$.
Thus the action \eqref{eq:Bo-action} of $w\in W_0({\mathbb{R}})$ on $Z^1({\mathbb{R}},\,_{t\tau}T)$
is compatible with the action \eqref{eq:ect-0-gen} of $w\in W({\mathbb{C}})$ on $T({\mathbb{C}})_2$.
We consider the $t$-twisted action \eqref{eq:ect-0-gen} of $W({\mathbb{C}})$ on $T({\mathbb{C}})_2$.
Then Lemma \ref{prop:twisted} is applicable, and it
implies that the move ${\mathcal{M}}_j$ corresponding to the reflection $r_j\in W({\mathbb{C}})$
can change only the $j$-coordinate $b_j$ of ${\boldsymbol{b}}$.
Now consider $w_{j,\tau(j)}=r_{\tau(j)}r_j\in W_0({\mathbb{R}})\subset W({\mathbb{C}})$ for $j\in D\smallsetminus D^\tau$,
then we see that ${\mathcal{M}}_{\tau(j)}{\mathcal{M}}_j$ can change only the $j$- and the $\tau(j)$-coordinates of ${\boldsymbol{b}}$.
In particular, if we write ${\boldsymbol{b}}'= ({\mathcal{M}}_j {\mathcal{M}}_{\tau(j)}) {\boldsymbol{b}}$, then $b'_i=b_i$ for any $i\in D^\tau$.
Since $w_{j,\tau(j)}\in W_0({\mathbb{R}})$ and $b\in Z^1({\mathbb{R}},\kern 0.8pt_\sigma T)$,
we see that $b':=w_{j,\tau(j)}(b)$ is contained in $Z^1({\mathbb{R}},\kern 0.8pt_\sigma T)$.
Since $b'_i=b_i$ for any $i\in D^\tau$, by diagram \eqref{eq:isomorphism} $b'\sim b$ in $Z^1({\mathbb{R}},\kern 0.8pt_\sigma T)$.
Thus $w_{j,\tau(j)}=r_{\tau(j)}\,r_j$ acts trivially on $H^1({\mathbb{R}},\kern 0.8pt_\sigma T)$.
\end{proof}
\begin{lemma}\label{lem:r-i-t-tau}
Let $a\in T(D^\tau)({\mathbb{R}})_2\subset Z^1({\mathbb{R}},\kern 0.8pt_\sigma T)$.
Let $i\in D^\tau$, and write $[a']= r_i[a]$, where $a'\in T(D^\tau)({\mathbb{R}})_2\subset Z^1({\mathbb{R}},\kern 0.8pt_\sigma T)$,
and $[a]\mapsto r_i[a]$ refers to the action \eqref{eq:Bo-action} of $r_i\in W_0({\mathbb{R}})$ on $H^1({\mathbb{R}},\,_\sigma T)$.
Write
\begin{equation*}
a=\prod_{j\in D^\tau} \left(\alpha_j^\vee(-1)\right)^{a_j},\quad
a'=\prod_{j\in D^\tau} \left(\alpha_j^\vee(-1)\right)^{a'_j}.
\end{equation*}
Then $a'_j=a_j$ for $j\neq i$ and
\begin{equation}\label{eq:twisted-outer}
a'_i=a_i+t_i+\Ssd a_k\,,
\end{equation}
where $(-1)^{t_i}=\alpha_i(t)$ and the sum is taken over the neighbors $k$ of $i$ {\em lying in $D^\tau$}.
\end{lemma}
\begin{proof}
Write $a=\prod_{j\in D} \left(\alpha_j^\vee(-1)\right)^{a_j}$, then $a_j=0$ for $j\in {D\smallsetminus D^\tau}$.
Now let $i\in D^\tau$, then arguing as in the proof of Lemma \ref{lem:prod-pairs},
we see that the action \eqref{eq:Bo-action} of $r_i\in W_0({\mathbb{R}})$
is compatible with the action \eqref{eq:ect-0-gen}, where $n=n_i\in G_i({\mathbb{R}})\cap \mathcal{N}_G(T)({\mathbb{R}})$.
By Lemma \ref{prop:twisted} this action is given
by formula \eqref{twisted-simply-laced}, i.e., by formula \eqref{eq:twisted-outer},
where the sum is taken over {\em all} neighbors $k$ of $i$ in $D$.
However, if $k\in D\smallsetminus D^\tau$, then $a_k=0$.
We see that in formula \eqref{eq:twisted-outer}
we may take the sum only over neighbors $k$ of $i$ contained in $D^\tau$, as required.
\end{proof}
The following theorem was announced in \cite{Bo}.
It reduces computing the Galois cohomology of an outer form of a simply connected compact group $G$
to computing the Galois cohomology of an inner form of some other
simply connected compact group (of type ${\mathbf{A}}_l$ for some $l$).
\begin{theorem}[{\cite[Theorem 3]{Bo}}]
\label{cor:Theorem-3-Bo}
Let $G$ be a simply connected, simple,
compact linear algebraic group over ${\mathbb{R}}$.
Let\, $T$, $R$, $\Pi$ and $D$ be as in Section \ref{sec:compact}.
Let $\tau$ be an automorphism of order $2$ of the Dynkin diagram $D$ of $G_{\mathbb{C}}$
(then $D$ is simply-laced).
Let $l=\# D^\tau$.
Let $G(D^\tau)\subset G$ be the ${\mathbb{R}}$-subgroup of type ${\mathbf{A}}_l$
corresponding to the Dynkin subdiagram $D^\tau$ of $D$.
Let $t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2\subset \kern 0.8pt_\tau T^{\rm ad}({\mathbb{R}})_2$.
Then the natural embedding
$\kern 0.8pt_t G(D^\tau)\hookrightarrow {}_{t\,\tau} G,$
obtained by twisting by $t$ from the embedding $G(D^\tau)\hookrightarrow\kern 0.8pt_\tau G$,
induces a bijection
$$
H^1({\mathbb{R}},\kern 0.8pt_t G(D^\tau))\isoto H^1({\mathbb{R}},{}_{t\,\tau} G).
$$
\end{theorem}
\begin{proof}
The embedding $T(D^\tau)\hookrightarrow {}_\tau T$ induces an isomorphism
\begin{equation}\label{eq:isom-2}
H^1({\mathbb{R}},T(D^\tau))\isoto H^1({\mathbb{R}}, {}_\tau T).
\end{equation}
The group $W(D^\tau)({\mathbb{R}})$ acts on the left-hand side of \eqref{eq:isom-2};
this group is generated by $r_i$ for $i\in D^\tau$.
The group $W_0({\mathbb{R}})$ acts on the right-hand side; by Lemma \ref{lem-pairs}
this group is generated by $r_i$ for $i\in D^\tau$ and by $r_{\tau(j)} r_j$ for $j\in {D\smallsetminus D^\tau}$.
By Lemma \ref{lem:prod-pairs} the products $r_{\tau(j)} r_j$ for $j\in {D\smallsetminus D^\tau}$
act trivially on the right-hand side of \eqref{eq:isom-2}.
Comparing formulas \eqref{twisted-simply-laced} and \eqref{eq:twisted-outer},
we see that the actions of a reflection $r_i$ for $i\in D^\tau$
on the left-hand side and the right-hand side of \eqref{eq:isom-2} are compatible.
Thus we obtain a bijection of the quotients:
$$
H^1({\mathbb{R}},\kern 0.8pt_t G(D^\tau))=W(D^\tau)({\mathbb{R}})\backslash T(D^\tau)({\mathbb{R}})_2 \isoto W_0({\mathbb{R}})\backslash H^1({\mathbb{R}},{}_\tau T)= H^1({\mathbb{R}},{}_{t\,\tau} G),
$$
where the left-hand and right-hand equalities are bijections of Proposition \ref{prop:Bo88}.
\end{proof}
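For example, for $D$ of type ${\mathbf{A}}_{2k+1}$ and the nontrivial automorphism $\tau$ we have $\# D^\tau=1$, so the theorem reduces the Galois cohomology of the outer forms of ${\bf SU}(2k+2)$ to that of a group of type ${\mathbf{A}}_1$; this computation is carried out in Subsection \ref{ssec:An-outer} below.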
From diagram \eqref{eq:isomorphism} and Theorem \ref{cor:Theorem-3-Bo} we obtain a commutative diagram
\begin{equation}\label{eq:bijections}
\xymatrix{
L(D^\tau)\ar[r]^-\sim \ar@/_1pc/[d] &T(D^\tau)({\mathbb{R}})_2\ar[r]^-\sim\ar@/_1pc/[d] &H^1({\mathbb{R}}, T(D^\tau))\ar[r]\ar[d]^\sim &H^1({\mathbb{R}},\kern 0.8pt_t G(D^\tau))\ar[d]^\sim \\
L(D)^\tau\ar[r]^-\sim\ar[u] &_\tau T({\mathbb{R}})_2\ar[r]\ar[u] &H^1({\mathbb{R}},\kern 0.8pt_\tau T)\ar[r] &H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G)
}
\end{equation}
Recall that $t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2$.
We define a coloring ${\boldsymbol{t}}=(t_j)_{j\in D^\tau}\ (t_j\in {\mathbb{Z}}/2{\mathbb{Z}})$ of $D^\tau$
as in \eqref{eq:t-bold}, i.e., by $(-1)^{t_j}=\alpha_j(t)$.
\begin{proposition}\label{c:restriction}
\begin{enumerate}
\item[(i)]
In diagram \eqref{eq:bijections}, the map $L(D)^\tau\to H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G)$
of the bottom row of the diagram is surjective.
\item[(ii)]
Two labelings ${\boldsymbol{a}},{\boldsymbol{a}}'\in L(D)^\tau$ have the same image in $H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G)$
if and only if their images in $L(D^\tau)$
(i.e., their restrictions to $D^\tau$) lie in the same equivalence class in $L(D^\tau,{\boldsymbol{t}})$.
In particular, a labeling ${\boldsymbol{a}}\in L(D)^\tau$ maps to $[1]\in H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G)$
if and only if its restriction to $D^\tau$
lies in the equivalence class of 0 in $L(D^\tau,{\boldsymbol{t}})$.
\end{enumerate}
\end{proposition}
\begin{proof} (i) This follows from Lemma \ref{lem:Bo88} and Proposition \ref{prop:Bo88}.
(ii) Indeed, by Theorem \ref{thm:inner}
two labelings ${\boldsymbol{b}},{\boldsymbol{b}}'\in L(D^\tau)$ have the same image in $H^1({\mathbb{R}},\kern 0.8pt_t G(D^\tau))$
if and only if they lie in the same equivalence class in $L(D^\tau,{\boldsymbol{t}})$.
\end{proof}
We say that two labelings ${\boldsymbol{a}},{\boldsymbol{a}}'\in L(D)^\tau$ are {\em ${\boldsymbol{t}}$-equivalent}
if their restrictions to $D^\tau$ lie in the same equivalence class in $L(D^\tau,{\boldsymbol{t}})$.
We denote by ${\rm Cl}(D,\tau,{\boldsymbol{t}})$ the set of ${\boldsymbol{t}}$-equivalence classes in $L(D)^\tau$,
then the restriction map $L(D)^\tau\to L(D^\tau)$ induces a bijection
\begin{equation}\label{e:res-bij}
{\rm Cl}(D,\tau,{\boldsymbol{t}})\isoto {\rm Orb}(D^\tau,{\boldsymbol{t}}).
\end{equation}
By Proposition \ref{c:restriction} we have a bijection
\begin{equation}\label{e:general-bijection}
{\rm Cl}(D,\tau,{\boldsymbol{t}})\isoto H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G).
\end{equation}
We obtain a bijection ${\rm Orb}(D^\tau,{\boldsymbol{t}})\isoto H^1({\mathbb{R}},\kern 0.8pt_{t\kern 0.8pt\tau} G)$.
We specify that this bijection takes the class of a labeling ${\boldsymbol{b}}=(b_\alpha)_{\alpha\in \Pi^\tau}$
of $D^\tau$ to the cohomology class of the cocycle
\[\prod_{\alpha\in \Pi^\tau}(\alpha^\vee(-1))^{b_{\alpha}}\in T(D^\tau)({\mathbb{R}})_2\subset Z^1({\mathbb{R}}, \kern 0.8pt_{t\tau} G).\]
In the case when $_z G$ is an {\em inner} form of the compact group $G$,
we again set $\tau=\psi(z)\in{\rm Aut}(D)$, then $\tau=1$, and we set ${\rm Cl}(D,\tau,{\boldsymbol{t}})={\rm Orb}(D,{\boldsymbol{t}})$ in this case.
In particular, when $_z G=G$ is {\em compact},
we have $\tau=1, t=1,{\boldsymbol{t}}=0$, and we set ${\rm Cl}(D,\tau,{\boldsymbol{t}})={\rm Orb}(D)$ in this case.
Then we have bijection \eqref{e:general-bijection} in all the cases.
\section{Reeder puzzles from Kac diagrams}
\label{sec:Kac}
In this section $G$, $T$, $R$, $\Pi$, $B$, $D$, and $W$ are as in Section \ref{sec:compact},
in particular $G$ is a simply connected, simple, {\em compact} linear algebraic group over ${\mathbb{R}}$.
\subsection{Inner forms}
\label{ss:Kac}
Let $_zG$ be a noncompact inner twisted form (inner twist) of $G$, where $z\in Z^1({\mathbb{R}}, G^{\rm ad})$.
By \cite[III.4.5, Example (a)]{Serre}, $z$ is cohomologous to some $t\in T^{\rm ad}({\mathbb{R}})_2\subset Z^1({\mathbb{R}}, G^{\rm ad})$.
We regard $t\in T^{\rm ad}({\mathbb{R}})_2\subset ({\rm Aut}\, G)_2$ as an involutive inner automorphism of $G$.
Since $_zG$ is noncompact, we have $t\neq 1$.
By Kac \cite{Kac}, see also Helgason \cite[Ch.~X, \S\,5]{Helgason},
Onishchik and Vinberg \cite[Ch.~4, \S\,4 and Ch.~5, \S\,1]{OV},
and Gorbatsevich, Onishchik, and Vinberg \cite[Section 3.3.7]{OV2},
involutive {\em inner} automorphisms of $G$ can be described
using Kac diagrams of types I and II in \cite[Table 7]{OV}.
(The Kac diagrams of type III in \cite[Table 7]{OV} correspond to involutive {\em outer} automorphisms.)
The relation between Kac diagrams and involutive inner automorphisms
is as follows, see \cite[Problem 5.1.38]{OV}.
Consider the extended Dynkin diagram ${\widetilde{D}}$ of $G$.
Its vertices correspond to the roots $\alpha_0,\alpha_1,\dots,\alpha_n$,
where $\alpha_1,\dots,\alpha_n$ are the simple roots and $\alpha_0$ is the {\em lowest} root.
There is a unique linear dependence
$$
m_0\alpha_0+m_1\alpha_1+\dots+m_n\alpha_n=0
$$
normalized so that $m_0=1$, and then $m_j$ are positive integers tabulated in \cite[Table 6]{OV}.
A {\em Kac $2$-marking of} ${\widetilde{D}}$ is a family of nonnegative integral
numerical marks ${\boldsymbol{q}}=(q_j)_{j=0,1,\dots,n}\in{\mathbb{Z}}^{n+1}_{\ge 0}$
at the vertices ${j=0,1,\dots,n}$ of ${\widetilde{D}}$, satisfying
\begin{equation}\label{eq:2markings}
m_0 q_0+m_1 q_1+\dots+m_n q_n=2.
\end{equation}
A Kac 2-marking ${\boldsymbol{q}}$ determines a unique element $t\in T^{\rm ad}({\mathbb{R}})_2\subset G^{\rm ad}({\mathbb{R}})_2$ such that
\begin{equation}\label{e:qj}
\alpha_j(t)=(-1)^{q_j},\quad j=1,\dots,n.
\end{equation}
Involutive inner automorphisms of $G$ are classified, up to conjugacy, by Kac 2-markings of ${\widetilde{D}}$.
Two Kac 2-markings ${\boldsymbol{p}}$ and ${\boldsymbol{q}}$ give conjugate inner automorphisms
if and only if ${\boldsymbol{p}}$ can be obtained from ${\boldsymbol{q}}$ by an automorphism of ${\widetilde{D}}$,
see \cite[3.3.6, Theorem 3.11]{OV2}.
\begin{lemma}\label{l:special-transitive}
The group ${\rm Aut}\,{\widetilde{D}}$ acts transitively on the set $\{j\in{\widetilde{D}}\ |\ m_j=1\}$.
\end{lemma}
\begin{proof}
By \cite[Section VI.2.3, Corollary of Proposition 6]{Bourbaki},
the center $Z(G)$ acts simply transitively on this set when acting on ${\widetilde{D}}$.
\end{proof}
Let ${\boldsymbol{q}}$ be a Kac 2-marking of ${\widetilde{D}}$.
It follows from \eqref{eq:2markings} that there are three possibilities:
\begin{enumerate}
\item[(0)] $q_i=2$ for some $i\in {\widetilde{D}}$ with $m_i=1$, and $q_j=0$ for all $j\neq i$.
\item[(1)] (Type I) $q_i=1$ for some $i$ with $m_i=2$, and $q_j=0$ for all $j\neq i$.
\item[(2)] (Type II) $q_{i_1}=1,\ q_{i_2}=1$ for some $i_1\neq i_2$
with $m_{i_1}=1,\ m_{i_2}=1$, and $q_j=0$ for all $j\neq i_1,i_2$.
\end{enumerate}
In case (0) clearly $t=1$, hence $_tG=G$ is compact, so we do not consider this case.
In case (1) we have $i\neq 0$, because $m_i=2$, while $m_0=1$.
We color vertex $i$ of ${\widetilde{D}}$ in black and leave the other vertices white.
We say that $({\widetilde{D}},{\boldsymbol{q}})$ is a Kac diagram of type I.
In case (2) by Lemma \ref{l:special-transitive} we may and shall assume that $i_1=0$.
We write $i$ for $i_2$, then $i\neq 0$.
We color vertices $i_1=0$ and $i_2=i$ in black and leave all the other vertices white.
We say that $({\widetilde{D}},{\boldsymbol{q}})$ is a Kac diagram of type II.
The Kac diagrams of types I and II up to isomorphism are tabulated in \cite[Table 7]{OV}.
From now on, when we consider Kac diagrams of types I and II, we assume that they are taken from that table.
Then in both cases I and II, ${\widetilde{D}}$ has exactly one nonzero black vertex $i$.
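For example, for type ${\mathbf{A}}_n$ all the marks $m_j$ are equal to $1$, so only cases (0) and (2) can occur, and (after normalizing $i_1=0$) the Kac diagrams of type II are exactly those of the groups ${\bf SU}(m,n+1-m)$ displayed in Subsection \ref{subsec:An^m} below.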
We compute the coloring ${\boldsymbol{t}}$ of $D$, induced by $t$, in terms of ${\boldsymbol{q}}$.
Comparing \eqref{e:qj} and \eqref{eq:t-bold}, we obtain that
\[(-1)^{t_j}=(-1)^{q_j}\quad\text{for } j=1,\dots,n. \]
Since $t_j=0,1$ and $q_j=0,1$, we see that $t_j=q_j$ for $j=1,\dots,n$.
Thus the map ${\boldsymbol{t}}\colon D\to \{0,1\}$ is the restriction of ${\boldsymbol{q}}\colon {\widetilde{D}}\to{\mathbb{Z}}_{\ge0}$ to the subset $D$ of ${\widetilde{D}}$.
We conclude that if a noncompact inner form $_tG$ of $G$
is given by a Kac diagram $({\widetilde{D}},{\boldsymbol{q}})$ of type I or II from \cite[Table 7]{OV},
then we can obtain a colored Dynkin diagram $(D,{\boldsymbol{t}})$ of the twisted group $_t G$
by removing vertex 0 from $({\widetilde{D}},{\boldsymbol{q}})$.
We say that $(D,{\boldsymbol{t}})$ is a {\em twisting diagram} for $_t G$.
It has exactly one black vertex $i$.
We call $i$ the {\em twisting vertex.}
The coloring ${\boldsymbol{t}}$ of $D$ defines the action of Lemma \ref{prop:twisted} of the Weyl group $W$ on $L(D)$;
we say that this action of $W$ is {\em twisted at} $i$.
The next proposition follows immediately from Lemma \ref{prop:twisted}.
\begin{proposition}\label{prop:twisted-i}
Let $_z G$ be a noncompact inner form of $G$.
Then $_z G\simeq \kern 0.8pt_t G$, where $t\in T^{\rm ad}({\mathbb{R}})_2$, $t\neq 1$, and $t$
comes from a Kac diagram $({\widetilde{D}},{\boldsymbol{q}})$ of type I or II in \cite[Table 7]{OV}.
The corresponding colored Dynkin diagram
$(D,{\boldsymbol{t}})$ is obtained from $({\widetilde{D}},{\boldsymbol{q}})$ by removing vertex 0.
It has a unique black vertex $i$.
For the (twisted at $i$) action of $W$ on $L(D,{\boldsymbol{t}})$, we have the same formula \eqref{non-twisted-simply-laced}
for ${\mathcal{M}}_j$ for $j\neq i$ as in Lemma \ref{prop:non-twisted}.
For ${\mathcal{M}}_i$ we have, as in Lemma \ref{prop:non-twisted}, $a'_j=a_j$ for $j\neq i$,
while in formula \eqref{non-twisted-simply-laced} for $a'_i$ we must add $1$.
Namely, we have
\begin{equation}\label{twisted-simply-laced-new}
a'_i = a_i+1 + \Ss a_k\, ,
\end{equation}
where the meaning of $\Ss$ is the same as in formula \eqref{non-twisted-simply-laced}.
\end{proposition}
\begin{construction} \label{const:augmented-diag}
Assume we have a twisting diagram with a black vertex $i$:
\begin{equation*}
\sxymatrix{ \cdots \ar@{-}[r] &\bc{} \ar@{-}[r] & \bcb{i} \ar@{-}[r] &\bc{} \ar@{-}[r] & \cdots }
\end{equation*}
We have formula \eqref{twisted-simply-laced-new} for ${\mathcal{M}}_i$.
In order to get formula \eqref{non-twisted-simply-laced} instead, we uncolor the vertex $i$
and {\em augment} our diagram by formally adding a new vertex which we call the {\em boxed 1},
connected by a simple edge to vertex $i$:
\begin{equation*}
\sxymatrix{ \cdots \ar@{-}[r] &\bc{} \ar@{-}[r] \ar@{-}[r] & \bc{i} \ar@{-}[r] \ar@{-}[d] &\bc{} \ar@{-}[r] & \cdots \\
& & *+[F]{1} & & }
\end{equation*}
Here 1 in the box means that we put 1 as the label at this new vertex.
Now the formula for ${\mathcal{M}}_i$ becomes \eqref{non-twisted-simply-laced}
where the boxed 1 is included in the sum.
Thus this boxed 1 accounts for twisting at $i$.
Note that we do not add a move corresponding to the boxed 1,
so the label 1 at the boxed 1 cannot be changed by moves.
We call the obtained diagram the {\em augmented diagram} corresponding to the twisting vertex $i$.
\end{construction}
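The moves are easy to experiment with by computer. The following minimal sketch (in Python; it is an illustration added here, not part of the construction above, and the function name and the encoding of a diagram by adjacency lists are our own choices) implements the move ${\mathcal{M}}_i$ on an augmented simply-laced diagram in the form of formula \eqref{non-twisted-simply-laced}: the label at $i$ changes by the sum, modulo $2$, of the labels of the neighbors of $i$, where a boxed 1 attached to $i$ contributes a frozen $1$.
\begin{verbatim}
# Sketch: the move M_i on a labeling of an augmented simply-laced diagram.
# A boxed 1 attached to a vertex is modeled as an extra neighbor with frozen label 1.
def move(a, i, neighbors, boxed=frozenset()):
    """a: dict vertex -> label in {0,1}; neighbors: dict vertex -> adjacent vertices."""
    s = sum(a[k] for k in neighbors[i]) + (1 if i in boxed else 0)
    b = dict(a)
    b[i] = (a[i] + s) % 2
    return b

# Augmented diagram of A_5^(2): the path 1-2-3-4-5 with a boxed 1 at vertex 2.
nbrs = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
a = {1: 0, 2: 1, 3: 1, 4: 1, 5: 0}
print(move(a, 2, nbrs))             # move at vertex 2, no twisting
print(move(a, 2, nbrs, boxed={2}))  # move at the twisting vertex 2
\end{verbatim}
In the first call the label at vertex $2$ changes from $1$ to $0$; in the second call the boxed 1 makes the total contribution of the neighbors even, so the labeling is unchanged.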
\begin{example}
These are the Kac diagram, the twisting diagram and the augmented diagram
for the group $EIII$ of type ${\mathbf{E}}_6$ (see Subsection \ref{subsec:E6(1)} below):
\begin{equation*}
\mxymatrix
{ \bcb{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] &
\bc{5} &&& \bcb{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5} &&& *+[F]{1} \ar@{-}[r] &\bc{1}
\ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5}
\\ & & {\phantom{\hbox{\SMALL\kern 0.8pt 6}}}{\boldsymbol{c}}c \hbox{\SMALL\kern 0.8pt 6} \ar@{-}[d] & &
&&&& &\bcu{6} & & &&& & & &\bcu{6} & &
\\ & & \bcbu{0} & & }
\end{equation*}
\end{example}
\subsection{Outer forms}
\label{ss:Kac-outer}
Let $_z G$ be an outer twisted form (outer twist) of $G$, where $z\in Z^1({\mathbb{R}},{\rm Aut}\, G)$.
Using the homomorphism $\psi\colon{\rm Aut}\,G_{\mathbb{C}}\to{\rm Aut}\,D$ of \eqref{e:psi},
we obtain an element $\tau=\psi(z)\in Z^1({\mathbb{R}},{\rm Aut}\,D)=({\rm Aut}\,D)_2$; then $\tau^2=1$.
Since $_z G$ is an {\em outer} form, $\tau\neq 1$.
Thus $\tau$ is a nontrivial involutive automorphism of $D$.
We write $\Pi^\tau$ for the set of fixed points of $\tau$ in $\Pi$,
and $D^\tau$ for the corresponding Dynkin subdiagram.
As in Section \ref{sec:outer}, we denote the ``diagrammatic''
automorphism $\phi_\tau$ of $(G,T,B)$ again by $\tau$.
There exists $t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2$ such that
$_z G\simeq\kern 0.8pt_{t\tau}G$, see Section \ref{sec:outer}.
(Here $T^{\rm ad}= T^{\rm ad}(D^\tau)\times_{\mathbb{R}} T^{\rm ad}(D\smallsetminus D^\tau)$, see \eqref{e:T-product}.)
By \cite{Kac}, see also \cite[Ch.~X, \S\,5]{Helgason},
\cite[Ch.~4, \S\,4 and Ch.~5, \S\,1]{OV}, and \cite[Section 3.3.11]{OV2},
involutive {\em outer} automorphisms $\sigma=t\tau\in ({\rm Aut}\, G)_2$
can be described using Kac diagrams of type III in \cite[Table 7]{OV}
as follows.
Set ${\mathfrak{g}}={\rm Lie\,} G$.
The diagrammatic automorphism $\tau$ acts on $T^{\rm ad}$, and we consider the torus $(T^{\rm ad})^\tau$
of dimension $n'=\#\Pi^\tau+\tfrac{1}{2}\,\#(\Pi\smallsetminus\Pi^\tau)$.
The nonzero weights $\alpha'$ of $(T^{\rm ad})^\tau_{\mathbb{C}}$ in ${\mathfrak{g}}_{\mathbb{C}}$
are the restrictions of the roots $\alpha$ of ${\mathfrak{g}}_{\mathbb{C}}$ with respect to $T^{\rm ad}_{\mathbb{C}}$.
The restricted roots form a (possibly nonreduced) root system, see \cite[Section 3.3.9, Theorem 3.14]{OV2}.
In particular, for any restricted root $\alpha'$ the coroot $(\alpha')^\vee$ is defined.
Write ${\mathfrak{g}}^\tau$ and ${\mathfrak{g}}^{-\tau}$
for the $+1$ and $-1$ eigenspaces of $\tau$, respectively.
It is known (see \cite[Section 3.3.9]{OV2}) that ${\mathfrak{g}}^\tau_{\mathbb{C}}$ is a simple Lie algebra
and the representation of $G^\tau$ in ${\mathfrak{g}}^{-\tau}_{\mathbb{C}}$ is irreducible.
The set ${\Pi'}=\{\alpha'_1,\dots,\alpha'_{n'}\}$ of {\em distinct} restrictions
of simple roots $\{\alpha_1,\dots,\alpha_n\}$ to $(T^{\rm ad})^\tau$
is in a bijection with the set of orbits of $\tau$ in $\Pi=\{\alpha_1,\dots,\alpha_n\}$.
This set ${\Pi'}$ is a set of simple roots of ${\mathfrak{g}}^\tau_{\mathbb{C}}$
with respect to $(T^{\rm ad})_{\mathbb{C}}^\tau$ (see \cite[Section 3.3.9]{OV2}).
Let $\alpha'_0$ denote the {\em lowest weight} of $(T^{\rm ad})^\tau_{\mathbb{C}}$ in ${\mathfrak{g}}^{-\tau}_{\mathbb{C}}$.
Then $\{\alpha'_0,\alpha'_1,\dots,\alpha'_{n'}\}$ is an admissible system of roots
in the sense that the Cartan numbers $\langle \alpha'_{i'},(\alpha'_{j'})^\vee\rangle$
are non-positive for ${i'}\neq {j'}$ (see \cite[Section 3.3.9]{OV2}).
The Cartan matrix is encoded by a twisted affine Dynkin diagram $\overline{\widetilde{D}}$, see \cite[Section 3.1.7]{OV2}.
There is a unique linear dependence
\[ m'_0\alpha'_0+ m'_1\alpha'_1+\dots +m'_{n'}\alpha'_{n'}=0,\]
normalized so that $m'_0=1$, then $m'_{j'}$ are positive integers.
Set
\[{\mathfrak{t}}={\rm Lie\,} T={\rm Lie\,} T^{\rm ad},\quad {\mathfrak{t}}_0={\mathfrak{t}}^\tau={\rm Lie\,}(T^{\rm ad})^\tau,\quad {\mathfrak{t}}_1=\{x\in{\mathfrak{t}}\ |\ \tau(x)=-x\},\]
and set $V={\mathrm{i}}{\mathfrak{t}}\subset{\mathfrak{t}}_{\mathbb{C}},\quad V_0={\mathrm{i}}{\mathfrak{t}}_0,\ V_1={\mathrm{i}}{\mathfrak{t}}_1,$
where ${\mathrm{i}}^2=-1$.
Let $V^*$, $V_0^*$, and $V_1^*$ denote the dual spaces to the vector ${\mathbb{R}}$-spaces $V$, $V_0$, and $V_1$, resp.
Let $Q={{\sf X}}^*(T^{\rm ad}_{\mathbb{C}})$ and $Q_0={{\sf X}}^*(\kern 0.8pt (T^{\rm ad})_{\mathbb{C}}^\tau)$ be the corresponding character groups,
then $Q$ embeds into $V^*$ and $Q_0$ embeds into $V_0^*$.
The Killing form is positive definite on $V\subset {\mathfrak{t}}_{\mathbb{C}}\subset {\rm Lie\,} G_{\mathbb{C}}$.
The orthogonal decomposition $V=V_0\oplus V_1$ with respect to the Killing form
induces an identification $V^*=V_0^*\oplus V_1^*$,
and the restriction map $Q\to Q_0$ is compatible with the orthogonal projection
\[ V^*=V_0^*\oplus V_1^*\to V_0^*\]
with respect to this identification.
Note that all the simple roots $\alpha\in\Pi$ have the same length
(because $D$ admits a nontrivial automorphism).
We see that the restrictions (projections onto $V_0^*=(V^*)^\tau\subset V^*$\kern 0.8pt)
of the $\tau$-fixed roots $\alpha\in\Pi^\tau$ are longer
than the restrictions of the nonfixed roots $\alpha\in \Pi\smallsetminus\Pi^\tau$.
The involutive {\em outer} automorphisms $\sigma=t\tau\in ({\rm Aut}\, G)_2$
are classified, up to conjugacy, by Kac 1-markings of $\overline{\widetilde{D}}$.
A {\em Kac $1$-marking of} $\overline{\widetilde{D}}$ is a family of nonnegative integer numerical marks
${\boldsymbol{q}}=(q_{j'})_{{j'}=0,1,\dots,{n'}}\in{\mathbb{Z}}_{\ge 0}^{{n'}+1}$
at the vertices of $\overline{\widetilde{D}}$, satisfying
\begin{equation}\label{e:1-marking}
m'_0 q_0+ m'_1 q_1+\dots +m'_{n'} q_{n'}=1.
\end{equation}
A Kac 1-marking of $\overline{\widetilde{D}}$ determines a unique element $t\in(T^{\rm ad})^\tau({\mathbb{R}})_2$ such that
\begin{equation}\label{e:aut-1-marking}
\alpha'_{j'}(t)=(-1)^{q_{j'}},\quad {j'}=1,\dots,{n'}.
\end{equation}
With ${\boldsymbol{q}}$ one associates the outer involutive automorphism $\sigma=t\tau$ of $G$.
Two Kac 1-markings ${\boldsymbol{p}}$ and ${\boldsymbol{q}}$ of $\overline{\widetilde{D}}$ give conjugate automorphisms of $G$
if and only if ${\boldsymbol{p}}$ can be obtained from ${\boldsymbol{q}}$ by an automorphism of $\overline{\widetilde{D}}$,
see \cite[3.3.10, Theorem 3.16]{OV2}.
For a 1-marking ${\boldsymbol{q}}$ of $\overline{\widetilde{D}}$, it follows from \eqref{e:1-marking}
that there exists a unique vertex ${i'}$ with $m'_{i'} =1$ such that $q_{i'}=1$;
for all ${j'}\neq {i'}$ we have $q_{j'}=0$.
We color vertex ${i'}$ of $\overline{\widetilde{D}}$ in black and leave all the other vertices white.
We obtain a Kac diagram $(\overline{\widetilde{D}},{\boldsymbol{q}})$ of type III.
The Kac diagrams of type III up to isomorphism are tabulated in \cite[Table 7]{OV}.
From now on, when we consider a Kac diagram of type III, we assume that it is from \cite[Table 7]{OV}.
Let ${i'}\in \overline{\widetilde{D}}$ be the black vertex. We see from \eqref{e:aut-1-marking} that
\begin{equation}\label{e:Dynkin-twisted}
\alpha'_{{i'}}(t)=-1,\quad \alpha'_{{j'}}(t)=1\text{ for }1\le{j'}\le{n'},\ {j'}\neq {i'}.
\end{equation}
Clearly, if ${i'}=0$, then $\alpha'_{{j'}}(t)=1$ for all ${j'}\neq 0$, hence $t=1$.
We use the English (not Russian) version of \cite{OV}.
In the English version of \cite[Table 7]{OV} the vertices of Kac diagrams are numbered,
and one can easily see from the table that when ${i'}\neq 0$,
the restricted root $\alpha'_{{i'}}$ is long, hence
$\alpha'_{{i'}}$ is the restriction to $(T^{\rm ad})^\tau_{\mathbb{C}}$ of
some {\em $\tau$-fixed} simple root $\alpha_i\in\Pi^\tau$.
The element $t\in (T^{\rm ad})^\tau({\mathbb{R}})_2$ is determined by \eqref{e:Dynkin-twisted}.
For any $\alpha_j\in \Pi\smallsetminus\Pi^\tau$,
let $\alpha'_{{j'}}$ denote the restriction of $\alpha_j$ to $(T^{\rm ad})^\tau_{\mathbb{C}}$,
then the restricted root $\alpha'_{j'}$ is short, hence ${j'}\neq {i'}$
and therefore, $\alpha_j(t)=\alpha'_{{j'}}(t)=1$.
We see that $t\in T^{\rm ad}(D^\tau)({\mathbb{R}})_2$ and
\begin{equation}\label{e:Dynkin-non-twisted}
\alpha_{i}(t)=-1,\quad \alpha_{j}(t)=1\text{ for all }j\in D^\tau,\ j\neq i.
\end{equation}
Thus we can compute the Galois cohomology of $_z G$ as follows.
We take the Kac diagram of $_z G$ from \cite[Table 7]{OV} (the English version).
We remove vertex 0 and all vertices corresponding to the short restricted roots.
What remains is a simply-laced colored Dynkin diagram $(D^\tau,{\boldsymbol{t}})$
with one black vertex or without black vertices;
this is a colored Dynkin diagram for $_t G(D^\tau)$, where $_{t\tau} G\simeq\kern 0.8pt_z G$.
The map ${\boldsymbol{t}}\colon D^\tau\to\{0,1\}$ is the restriction of the map
${\boldsymbol{q}}\colon \overline{\widetilde{D}}\to{\mathbb{Z}}_{\ge0}$ to the subset $D^\tau$ of $\overline{\widetilde{D}}$.
If ${\boldsymbol{t}}=0$ (no black vertices), then the moves ${\mathcal{M}}_j$ of the Reeder puzzle for $(D^\tau,{\boldsymbol{t}})$
are given by formula \eqref{non-twisted-simply-laced} for $D^\tau$.
If ${\boldsymbol{t}}\neq 0$, i.e., there is one black vertex $i$ in $D^\tau$,
then the moves ${\mathcal{M}}_j$ for $j\in D^\tau,\ j\neq i$ are given by formula \eqref{non-twisted-simply-laced},
while the move ${\mathcal{M}}_i$ is given by formula \eqref{twisted-simply-laced-new}.
In these formulas the sum is taken over the neighbors $k$ {\em lying in $D^\tau$}.
By solving the Reeder puzzle for $(D^\tau,{\boldsymbol{t}})$, we compute
\[{\rm Orb}(D^\tau,{\boldsymbol{t}})\cong H^1({\mathbb{R}},\kern 0.8pt_t G(D^\tau))\cong H^1({\mathbb{R}},\kern 0.8pt_{t\tau} G)\simeq H^1({\mathbb{R}},\kern 0.8pt_z G).\]
\begin{example}
These are the Kac diagram for the group $_{t\tau} G=EIV$
of type ${\mathbf{E}}_6$ (see Subsection \ref{subsec:EIV} below)
and the twisting diagram for $_t G(D^\tau)=G(D^\tau)$ with trivial twisting,
i.e., the uncolored Dynkin diagram ${\mathbf{A}}_2$:
\begin{equation*}
\mxymatrix
{ \bcb{0} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] &\bc{4} &&&
\bc{3} \ar@{-}[r] & \bc{4} &&&
{\phantom{\hbox{AAAAA}}}
}
\end{equation*}
\end{example}
\begin{example}
These are the Kac diagram for the group $_{t\tau} G=EI$
of type ${\mathbf{E}}_6$ (see Subsection \ref{subsec:EI} below),
the twisting diagram for $_t G(D^\tau)$, and the augmented diagram for $_t G(D^\tau)$:
\begin{equation*}
\mxymatrix
{ \bc{0} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] &\bcb{4} &&&
\bc{3} \ar@{-}[r] & \bcb{4} &&&
\bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] & *+[F]{1}
}
\end{equation*}
\end{example}
\section{Orbits: definitions and terminology}
\label{sec:term}
Starting with the next section, we solve the Reeder puzzles case by case,
i.e., describe the sets of equivalence classes ${\rm Cl}(D,\tau,{\boldsymbol{t}})$.
Proposition \ref{c:restriction} reduces the case of an outer form of a compact group
to the case of an inner form of another compact group.
In the case of an inner form we determine the set ${\rm Orb}(D,{\boldsymbol{t}})$ of
the orbits of the group $W$ generated by the moves ${\mathcal{M}}_i$
(i.e., reflections $r_{\alpha_i}$) acting on the set $L(\kern 0.8pt_{\boldsymbol{t}} D)$.
Here $L(\kern 0.8pt_{\boldsymbol{t}} D)$ is the set of labelings ${\boldsymbol{a}}=(a_1,\dots,a_n)$
corresponding to a twisting diagram $\kern 0.8pt_{\boldsymbol{t}} D$ with vertices $i=1,\dots,n$,
where $a_i\in {\mathbb{Z}}/2{\mathbb{Z}}$ and each $i$ corresponds to the simple root $\alpha_i$.
We number the vertices of $D$ as in Onishchik and Vinberg \cite[Table 1]{OV}.
By {\em (connected) components} of a labeling of a Dynkin diagram we mean the connected
components of the graph obtained by removing the vertices with zeros and the corresponding edges.
For example, the following labeling of ${\mathbf{A}}_9$ has $3$ connected components:
\[
\sxymatrix{ 1 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 1 \ar@{-}[r] & 1}
\quad\longmapsto\quad
\sxymatrix{ 1 \ar@{-}[r] & 1 & & 1 & & 1 \ar@{-}[r] & 1 \ar@{-}[r] & 1.}
\]
For some diagrams $D$ the number of components of a labeling is an invariant of the action of $W$.
For some others, the parity of the number of components is an invariant.
By a {\em fixed labeling} we mean a fixed point of the action of $W$, that is, a labeling
which is fixed under all moves ${\mathcal{M}}_i$\,.
For example, for the action of Lemma \ref{prop:non-twisted} on ${\mathbf{A}}_5$, the labelings
\[
\sxymatrix{ 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0} \quad \text{and}\quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1}
\]
are fixed.
We say that two vertices $i,j$ of a Dynkin diagram $D$ are {\em neighbors} if they are connected by an
edge (single or multiple).
We say that $i$ is a vertex {\em of degree $d$} if it has exactly $d$ neighbors. We are especially
interested in vertices of degree 3. The Dynkin diagrams ${\mathbf{D}}_n$ $(n\ge4)$, ${\mathbf{E}}_6$, ${\mathbf{E}}_7$ and ${\mathbf{E}}_8$
have vertices of degree 3. Now let $D$ be a Dynkin diagram with a vertex $i$ of degree 3, and let
${\boldsymbol{a}}$ be a labeling of $D$ that looks near $i$ like
\begin{eqnarray}
\sxymatrix{\dots \ar@{-}[r] &1 \ar@{-}[r] & 1 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] &\dots
\\& & 1 & }
\end{eqnarray}
The move ${\mathcal{M}}_i$ of Lemma \ref{prop:non-twisted} splits the component of $i$
into three components (because $D$ has no cycles):
\begin{eqnarray}
\sxymatrix{\dots \ar@{-}[r] &1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] &\dots
\\& & 1 & }
\end{eqnarray}
and therefore increases the number of components by 2. We call this process {\em splitting at $i$}.
The reverse process is called {\em unsplitting}.
Let $G$ be a group with twisting diagram $_{\boldsymbol{t}} D$ and corresponding set of labelings $L(\kern 0.8pt_{\boldsymbol{t}} D)$.
We denote by ${\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)$ the set of orbits in $L(\kern 0.8pt_{\boldsymbol{t}} D)$ under the action of the Weyl group,
i.e., the set of equivalence classes of labelings with respect to the moves.
We denote the number of orbits by $\#{\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)$.
In Sections \ref{sec:An} -- \ref{sec:G2} below we describe the pointed sets
${\rm Cl}(D,\tau,{\boldsymbol{t}})$ for simply connected groups of types ${\mathbf{A}}_n$ -- ${\mathbf{G}}_2$.
Since these sections may be regarded as parts of a table, most proofs are omitted.
In Section \ref{sec:An} we introduce notation which will be used in subsequent sections.
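Since $L(\kern 0.8pt_{\boldsymbol{t}} D)$ is finite, the orbit decompositions described below can also be checked by brute force. The following self-contained sketch (in Python; again only an illustration, with the diagram and the set of twisting vertices supplied by hand) closes each labeling under all moves and returns the list of orbits; the two sample calls treat ${\mathbf{A}}_5^{(0)}$ and ${\mathbf{A}}_5^{(2)}$.
\begin{verbatim}
# Sketch: brute-force enumeration of the orbits of the moves on L(_t D).
from itertools import product

def orbits(neighbors, boxed=frozenset()):
    verts = sorted(neighbors)
    idx = {v: k for k, v in enumerate(verts)}
    def move(a, i):
        s = sum(a[idx[k]] for k in neighbors[i]) + (1 if i in boxed else 0)
        b = list(a)
        b[idx[i]] = (a[idx[i]] + s) % 2
        return tuple(b)
    seen, orbs = set(), []
    for start in product((0, 1), repeat=len(verts)):
        if start in seen:
            continue
        orbit, stack = {start}, [start]
        while stack:                      # close the orbit under all moves
            a = stack.pop()
            for i in verts:
                b = move(a, i)
                if b not in orbit:
                    orbit.add(b)
                    stack.append(b)
        seen |= orbit
        orbs.append(orbit)
    return orbs

path5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # Dynkin diagram A_5
print(len(orbits(path5)))        # A_5^(0)
print(len(orbits(path5, {2})))   # A_5^(2): boxed 1 at the twisting vertex 2
\end{verbatim}
Both calls should print $4$, in agreement with formulas \eqref{eq:An-num-of-orbits} and \eqref{eq:An^m-num-of-orbits} below; the same routine can be used to double-check the orbit counts listed in Sections \ref{sec:An} -- \ref{sec:G2}.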
\section{Groups of type ${\mathbf{A}}_n$}
\label{sec:An}
\subsection{The compact group ${\bf SU}(n+1)$ of type ${\mathbf{A}}_n^{(0)}$}
\label{sect:An}\label{subsec:An}
Here $n\ge 1$.
The Dynkin diagram is
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n} }
\end{equation*}
The Weyl group acts by the moves that are described in Lemma \ref{prop:non-twisted}.
We denote the compact form of the complex group of type ${\mathbf{A}}_n$ by ${\mathbf{A}}_n^{(0)}$.
The superscript 0 shows that the group is compact and the diagram is uncolored.
\begin{lemma} \label{lem:basic-An}
For ${\mathbf{A}}_n^{(0)}$:
\begin{enumerate}
\item[(a)] The moves do not change the number of components.
\item[(b)] Every component can be reduced to length 1, e.g.
\[ 0 \!-\! 1 \!-\! 1 \!-\! 1 \!-\! 0\quad \mapsto\quad 0 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0 \ . \]
\item[(c)] Components may be pushed so that the space between components is of length 1, e.g.
\[ 1 \!-\! 0 \!-\! 0 \!-\! 1 \!-\! 0\quad \mapsto\quad 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0 \ .\]
\end{enumerate}
\end{lemma}
\begin{notation} \label{def:xi-form}
By $\xi_r^n$ (or just $\xi_r$) we mean the
labeling of ${\mathbf{A}}_n^{(0)}$ of the form
\begin{equation*}
\xi_r\quad = \quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots\cdots} & \ar@{-}[l]
\underset{r}{1} \ar@{-}[r] & 0 \ar@{-}[r] & \cdots }
\end{equation*}
which has $r$ components packed maximally to the left, namely,
\[ (\xi^n_r)_i =
\begin{cases}
1 & \mbox{ if } i=1,3,\dots,2r-1 \ , \\
0 & \mbox{ otherwise}\ .
\end{cases}
\]
By $\eta_r^n$ (or just $\eta_r$) we mean the labeling of ${\mathbf{A}}_n^{(0)}$
which has $r$ components packed maximally to the right, namely
\[ (\eta^n_r)_i = \begin{cases} 1 & \mbox{ if } i=n,n-2,\dots,n-2(r-1) \ , \\ 0 & \mbox{ otherwise}\ .
\end{cases}
\]
\end{notation}
\begin{example}\
$\xi_3^7 = \ 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0 $,\quad\ $\eta_2^7= \ 0 \!-\! 0 \!-\! 0 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1$.
\end{example}
\begin{lemma} \label{cor:basic-An}
Two labelings of ${\mathbf{A}}_n^{(0)}$ are equivalent if and only if they have the same number of components.
In particular, any labeling of ${\mathbf{A}}_n^{(0)}$ with $r$ components is equivalent to $\xi_r^n$ and to $\eta_r^n$.
\end{lemma}
Thus, in ${\mathbf{A}}_n^{(0)}$ the number of components is an invariant which fully characterizes orbits.
\begin{corollary}\label{cor:An-reps}
{\bfseries The orbit of zero} in $L({\mathbf{A}}_n^{(0)})$ consists of one labeling $\xi_0$.
As {\bfseries representatives of orbits} we can take $\xi_0,\xi_1,\dots,\xi_r$, where $r=\lceil
n/2\rceil$. We have
\begin{equation} \label{eq:An-num-of-orbits}
\#{\rm Orb}({\mathbf{A}}_n^{(0)})=r+1=\lceil n/2\rceil+1 = \begin{cases} k+1 & \mbox{ if }n=2k \ , \\ k+2 &
\mbox{ if }n=2k+1 \ . \end{cases}
\end{equation}
\end{corollary}
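For instance, for ${\mathbf{A}}_5^{(0)}$ we have $r=3$, the representatives are $\xi_0=0\!-\!0\!-\!0\!-\!0\!-\!0$, $\xi_1=1\!-\!0\!-\!0\!-\!0\!-\!0$, $\xi_2=1\!-\!0\!-\!1\!-\!0\!-\!0$ and $\xi_3=1\!-\!0\!-\!1\!-\!0\!-\!1$, and $\#{\rm Orb}({\mathbf{A}}_5^{(0)})=4$.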
\subsection{The group ${\bf SU}(m,\,n+1-m)$ \ $(\,1 \le m \le\lceil n/2\rceil\,)$ with twisting diagram ${\mathbf{A}}_n^{(m)}$}
\label{sect:AnTwistm}\label{subsec:An^m}
The group $G$ is the special unitary group ${\bf SU}(m,n+1-m)$ of the diagonal Hermitian form
with $m$ times $-1$ and $n+1-m$ times $+1$ on the diagonal.
Our results are valid for all $1\le m\le n$, though ${\bf SU}(m,n+1-m)\simeq {\bf SU}(n+1-m,m)$
and therefore, it suffices to consider only the case $1 \le m \le\lceil n/2\rceil$.
The Kac diagram of $G$ is
\begin{equation*}
\sxymatrix{
& & \ar@{-}[lld] \bcb{0} \ar@{-}[rrd] & & &
\\
\bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bcb{m} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n}
}
\end{equation*}
see \cite[Table 7]{OV}.
We obtain the {\em twisting diagram} ${\mathbf{A}}_n^{(m)}$ by removing vertex 0 from the Kac diagram:
\begin{equation*}
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bcb{m} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n}
}
\end{equation*}
The superscript in ${\mathbf{A}}_n^{(m)}$ refers to the twisting (black) vertex $m$ in the twisting diagram,
with respect to the numbering of Onishchik and Vinberg \cite{OV}.
We construct the augmented diagram
\begin{equation*}
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{m} \ar@{-}[r] \ar@{-}[d] & \cdots \ar@{-}[r] & \bc{n} \\
& & *+[F]{1} & &
}
\end{equation*}
as in Construction \ref{const:augmented-diag},
i.e., by formally adding a new vertex with constant label 1, called the boxed 1,
connected to the twisting vertex by a simple edge.
This retains the set of labelings and the set of orbits.
\begin{notation}\label{n:l,r}
Let ${\boldsymbol{a}}=(a_i)\in L({\mathbf{A}}_n^{(m)})$.
We have a schematic diagram:
\begin{equation} \label{schematic}
\sxymatrix{ \mathrm{LHS} \ar@{-}[r] &a_m \ar@{-}[r] \ar@{-}[d] & \mathrm{RHS} \\ & *+[F]{1} & }
\end{equation}
where LHS denotes the left-hand side and RHS denotes the right-hand side.
We denote by $l({\boldsymbol{a}})$ the number of components of ${\boldsymbol{a}}$ in LHS (to the left of the twisting vertex $m$),
and by $r({\boldsymbol{a}})$ the number of components of ${\boldsymbol{a}}$ in RHS (to the right of the twisting vertex $m$),
in both cases not taking into account the component of the boxed 1.
\end{notation}
\begin{remark}
For ${\mathbf{A}}_n^{(m)}$:
\begin{enumerate}
\item[(i)] Any labeling ${\boldsymbol{a}}$ is equivalent to a labeling ${\boldsymbol{a}}'$ with $a'_m=0$.
\item[(ii)] For the schematic diagram \eqref{schematic}, if $l({\boldsymbol{a}})\ge 1$ and $r({\boldsymbol{a}})\ge 1$,
then the rightmost component in LHS and the leftmost component in RHS
can be made to cancel each other out by unsplitting at vertex $m$
(which is a vertex of degree 3 for $1 < m < n$).
In other words, if $l({\boldsymbol{a}}),r({\boldsymbol{a}}) \ge 1$, then ${\boldsymbol{a}}$ is equivalent to some labeling ${\boldsymbol{a}}'$ with
$l({\boldsymbol{a}}')=l({\boldsymbol{a}})-1$ and $r({\boldsymbol{a}}')=r({\boldsymbol{a}})-1$.
\item[(iii)] A component cannot pass from one side of the twisting vertex to the other.
In other words, if ${\boldsymbol{a}}\sim{\boldsymbol{a}}'$, then $r({\boldsymbol{a}})-l({\boldsymbol{a}})=r({\boldsymbol{a}}')-l({\boldsymbol{a}}')$.
\end{enumerate}
\end{remark}
\begin{proposition}\label{prop:An^(m)-invariant}
Two labelings ${\boldsymbol{a}},{\boldsymbol{a}}'\in L({\mathbf{A}}_n^{(m)})$ are equivalent if and only if
$r({\boldsymbol{a}})-l({\boldsymbol{a}})=r({\boldsymbol{a}}')-l({\boldsymbol{a}}')$, and this invariant $r({\boldsymbol{a}})-l({\boldsymbol{a}})$ can take values
between $-\lceil ({m-1})/{2}\rceil$ and $\lceil ({n-m})/{2}\rceil$.
\end{proposition}
\begin{notation}\label{not:pq}
Let $p,q$ be integers, $0 \le p \le \lceil ({m-1})/{2}\rceil$ and $0 \le q \le \lceil ({n-m})/{2}\rceil$.
We write
\begin{equation}\label{eq:(p|q)}
(p|q) \ := \quad \sxymatrix{ \eta_p^{m-1} \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & \xi_q^{n-m} \\ & *+[F]{1} & }
\end{equation}
We have $l(p|q)=p$ and $r(p|q)=q$.
\end{notation}
\begin{corollary}\label{cor:An^(m)-reps}
For ${\mathbf{A}}_n^{(m)}$:
\begin{enumerate}
\item[(i)]
{\bfseries The orbit of zero} is the set of the labelings ${\boldsymbol{a}}$ such that $l({\boldsymbol{a}})=r({\boldsymbol{a}})$.
\item[(ii)]
As {\bfseries representatives of orbits} we can take
\[
(0|0)\ = \quad \sxymatrix{ \eta_0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & \xi_0 \\ & *+[F]{1} & } ,
\quad (p|0)\ = \quad \sxymatrix{ {\eta_p} \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & {\xi_0} \\ & *+[F]{1} & },
\quad (0|q)\ = \quad \sxymatrix{ {\eta_0} \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & {\xi_q} \\ &
*+[F]{1} & }
\]
with $1 \le p \le \lceil ({m-1})/{2}\rceil$ and $1 \le q \le \lceil ({n-m})/{2}\rceil$.
\item[(iii)]
The number of orbits is
\begin{eqnarray} \label{eq:An^m-num-of-orbits}
\#{\rm Orb}({\mathbf{A}}_n^{(m)}) & = & \left\lceil ({m-1})/{2}\right\rceil + 1 + \left\lceil
({n-m})/{2}\right\rceil \\ & = & \begin{cases}
k+1 & \mbox{ if }n=2k \ , \\
k+1 & \mbox{ if }n=2k+1 \mbox{ and } m \mbox{ is odd}, \\
k+2 & \mbox{ if }n=2k+1 \mbox{ and } m \mbox{ is even}.
\end{cases} \nonumber
\end{eqnarray}
\end{enumerate}
\end{corollary}
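For instance, for ${\bf SU}(2,4)$, i.e., for the twisting diagram ${\mathbf{A}}_5^{(2)}$, the representatives are $(0|0)$, $(1|0)$, $(0|1)$ and $(0|2)$, and $\#{\rm Orb}({\mathbf{A}}_5^{(2)})=\lceil 1/2\rceil+1+\lceil 3/2\rceil=4$.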
\subsection{Outer forms of ${\bf SU}(n+1)$} \label{ssec:An-outer}
Here $\tau$ is the nontrivial involutive automorphism of the Dynkin diagram $D={\mathbf{A}}_n$, where $n\ge 2$.
Case $G={\bf{SL}}(n+1)$, $n=2k$. Then $D^\tau=\emptyset$, and it is well known that $H^1({\mathbb{R}},{\bf{SL}}(n+1))=1$
(this follows, for example, from \cite[III.1.1, Proposition 1]{Serre}).
Case $G={\bf{SL}}(n+1)$, $n=2k+1$. Then $\#D^\tau=1$, $D^\tau=\bcb{}$\,, and again it is well known that
$H^1({\mathbb{R}} ,{\bf{SL}}(n+1))=1$.
Case $G={\bf{SL}}(k+1,{\mathbb{H}})$, where ${\mathbb{H}}$ denotes the Hamilton quaternions.
Then $n=2k+1$, $\#D^\tau=1$, $D^\tau=\bc{}$, $\#{\rm Orb}({\mathbf{A}}_1)=2$.
The orbit of zero in $L({\mathbf{A}}_1)$ consists of 0.
{\bfseries The class of zero} in $L(D)^\tau$ consists of the labelings
whose restriction to $D^\tau$ is 0, namely with $a_{k+1}=0$.
The other class consists of the labelings with $a_{k+1}=1$.
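In particular, $\# H^1({\mathbb{R}},{\bf{SL}}(k+1,{\mathbb{H}}))=2$; by \eqref{e:general-bijection} and the description following it, the nontrivial class is represented by the cocycle $\alpha_{k+1}^\vee(-1)\in T({\mathbb{R}})_2$.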
\section{Groups of type ${\mathbf{B}}_n$}
\label{sec:Bn}
\subsection{The compact group ${\bf Spin}(2n+1)$ of type ${\mathbf{B}}_n^{(0)}$} \label{subsec:Bn}
The Dynkin diagram is
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} \ar@{=>}[r] & \bc{n} }\, ,
\end{equation*}
where $n\ge 2$.
We write a labeling ${\boldsymbol{b}}\in L({\mathbf{B}}_n^{(0)})$ as ${\boldsymbol{b}} = ({\boldsymbol{a}} \Rightarrow {\varkappa})$, where ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(0)})$ and
${\varkappa} \in \{0,1\}$.
Note that the labeling
\begin{equation*}
\ell_1^{(0)} = \ (\xi_0 \Rightarrow 1) \ = \quad (0 \!-\! ... \!-\! 0 \Rightarrow 1)
\end{equation*}
is a fixed labeling.
We denote by $[\ell_1^{(0)}] \in {\rm Orb}({\mathbf{B}}_n^{(0)})$ the orbit of $\ell_1^{(0)}$ (consisting of one labeling),
and also, by slight abuse of notation, the subset
$\{\,[\ell_1^{(0)}]\,\}\subset {\rm Orb}({\mathbf{B}}_n^{(0)})$ consisting of this orbit.
We also note that if ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(0)})$, ${\boldsymbol{a}}\neq 0$,
then $({\boldsymbol{a}}\Rightarrow 1)$ is equivalent to $({\boldsymbol{a}}\Rightarrow 0)$ in $L({\mathbf{B}}_n^{(0)})$.
\begin{proposition}
The map $\varphi \colon L({\mathbf{A}}_{n-1}^{(0)}) \to L({\mathbf{B}}_n^{(0)}) $ defined by
\[
{\boldsymbol{a}} \,{\longmapsto}\, ({\boldsymbol{a}} \Rightarrow 0) \]
induces a bijection $\varphi_* \colon {\rm Orb}({\mathbf{A}}_{n-1}^{(0)}) \isoto
{\rm Orb}({\mathbf{B}}_n^{(0)}) \smallsetminus [\ell_1^{(0)}]$.
\end{proposition}
\begin{corollary} \label{cor:Bn-reps} For ${\mathbf{B}}_n^{(0)}$:
\begin{enumerate}
\item[(i)]
{\bfseries The orbit of zero} consists of the fixed labeling $\xi_0\Rightarrow 0$.
\item[(ii)]
As {\bfseries representatives of orbits} we can take
$$\xi_0\Rightarrow 1, \quad \xi_0 \Rightarrow 0, \quad
\xi_1 \Rightarrow 0, \quad \xi_2 \Rightarrow 0 \ , \quad ... \ ,\quad \xi_r \Rightarrow 0
$$
with $r = \lceil ({n-1})/{2} \rceil$.
\item[(iii)]
\begin{equation*}
\#{\rm Orb}({\mathbf{B}}_n^{(0)}) = \#{\rm Orb}({\mathbf{A}}_{n-1}^{(0)}) + 1 = \begin{cases} k+2 & \mbox{ if } n=2k \ , \\ k+2 & \mbox{ if }
n=2k+1 \ . \end{cases}
\end{equation*}
\end{enumerate}
\end{corollary}
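For instance, for ${\mathbf{B}}_3^{(0)}$ the representatives are $\xi_0\Rightarrow 1$, $\xi_0\Rightarrow 0$ and $\xi_1\Rightarrow 0$, and $\#{\rm Orb}({\mathbf{B}}_3^{(0)})=3$.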
\subsection{The group ${\bf Spin}(2m,\,2n+1-2m)$ \ $(\,1 \le m < n\,)$ with twisting diagram ${\mathbf{B}}_n^{(m)}$}
\label{subsec:Bn^m}
The group $G$ is the universal covering ${\bf Spin}(2m,\,2n+1-2m)$
of the special orthogonal group ${{\bf SO}}(2m,\,2n+1-2m)$
of the diagonal quadratic form
with $2m$ times $-1$ and $2n+1-2m$ times $+1$ on the diagonal.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bcb{m} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} \ar@{=>}[r] & \bc{n}
& \quad & \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{m} \ar@{-}[d] \ar@{-}[r] & \cdots &
\ar@{-}[l] \bc{n-1} \ar@{=>}[r] & \bc{n} \\
& & & & \quad & & & & & & *+[F]{1} & & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
Note that if $m$ is even, the labeling
\[ \ell_1^{(m)} \ = \quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 1 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & \cdots & \ar@{-}[l] 0
\ar@{=>}[r] & 1 \\ & & & & *+[F]{1} & & }
\]
is a fixed labeling.
Note also that if ${\boldsymbol{b}}=({\boldsymbol{a}}\Rightarrow 1)\in L({\mathbf{B}}_n^{(m)})$ and ${\boldsymbol{b}}\neq\ell_1^{(m)}$, then ${\boldsymbol{b}}\sim ({\boldsymbol{a}}\Rightarrow 0)$.
\begin{proposition} \label{lem:Bn^m-bijection}
The map $\varphi\colon L({\mathbf{A}}_{n-1}^{(m)})\to L({\mathbf{B}}_n^{(m)})$ defined by
$$
{\boldsymbol{a}}\longmapsto (\sxymatrix{{\boldsymbol{a}} \ar@{=>}[r] &0})
$$
induces an injection
$$
{\rm Orb}({\mathbf{A}}_{n-1}^{(m)})\to {\rm Orb}({\mathbf{B}}_n^{(m)})
$$
which is bijective when $m$ is odd, and whose image is
${\rm Orb}({\mathbf{B}}_n^{(m)})\smallsetminus[\ell_1^{(m)}]$
when $m$ is even.
\end{proposition}
We write
\begin{equation}\label{eq:(>)}
(p|q\!\! > \!\!{\varkappa}):=\,{\boldsymbol{a}}\Rightarrow{\varkappa},\quad\text{where}\quad{\boldsymbol{a}}=(p|q)\in L({\mathbf{A}}_{n-1}^{(m)}),\ {\varkappa} \in \{0,1\}.
\end{equation}
It follows from Proposition \ref{lem:Bn^m-bijection} that as a set of representatives for the
orbits in $L({\mathbf{B}}_n^{(m)})$ we can take the labelings of the form $\sxymatrix{{\boldsymbol{a}} \ar@{=>}[r] &0}$, where ${\boldsymbol{a}}$ runs
over the set of representatives of orbits in $L({\mathbf{A}}_{n-1}^{(m)})$ from Corollary \ref{cor:An^(m)-reps},
and when $m$ is even we should add the fixed labeling $\ell_1^{(m)}\in L({\mathbf{B}}_n^{(m)})$. Explicitly, we obtain:
\begin{corollary} \label{prop:Bn^(m)-reps}
{\bfseries The orbit of zero} in $L({\mathbf{B}}_n^{(m)})$ is the set of the labelings
${\boldsymbol{b}}=(\sxymatrix{{\boldsymbol{a}} \ar@{=>}[r] &{\varkappa}})$ with $r({\boldsymbol{a}})=l({\boldsymbol{a}})$.
As {\bfseries representatives of orbits} in $L({\mathbf{B}}_n^{(m)})$ we can take
$(p|0\! > \! 0)$ for $0\le p\le \lceil({m-1})/{2}\rceil$,\ \ $(0|q\! > \! 0)$ for $0<q\le \lceil({n-1-m})/{2}\rceil$,
and $\ell_1^{(m)}$ when $m$ is even.
\end{corollary}
\begin{corollary} \label{cor:Bn^(m)-number}
We have:
\[
\#{\rm Orb}({\mathbf{B}}_n^{(m)}) = \begin{cases} \#{\rm Orb}({\mathbf{A}}_{n-1}^{(m)}) & \mbox{ if }m\mbox{ is odd,} \\
\#{\rm Orb}({\mathbf{A}}_{n-1}^{(m)}) + 1 & \mbox{ if }m\mbox{ is even.} \end{cases}
\]
Using Corollary \ref{cor:An^(m)-reps}(iii), we obtain
\begin{equation*} \label{eq:Bn^m-number-of-orbits}
\#{\rm Orb}({\mathbf{B}}_n^{(m)}) = \begin{cases} k & \mbox{ if } n=2k \mbox{ and } m \mbox{ is odd,} \\
k+2 & \mbox{ if } n=2k \mbox{ and } m \mbox{ is even,} \\
k+1 & \mbox{ if } n=2k+1 \mbox{ and } m \mbox{ is odd,} \\
k+2 & \mbox{ if } n=2k+1 \mbox{ and } m \mbox{ is even.}
\end{cases}
\end{equation*}
\end{corollary}
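For instance, for ${\mathbf{B}}_4^{(2)}$ Corollary \ref{prop:Bn^(m)-reps} gives the representatives $(0|0\! > \! 0)$, $(1|0\! > \! 0)$, $(0|1\! > \! 0)$ and the fixed labeling $\ell_1^{(2)}$, so $\#{\rm Orb}({\mathbf{B}}_4^{(2)})=4$, in agreement with the value $k+2$ for $n=2k=4$ and $m=2$ even.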
\subsection{The group ${\bf Spin}(2n,1)$ with twisting diagram ${\mathbf{B}}_n^{(n)}$} \label{subsec:Bn^n}
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} \ar@{=>}[r] & \bcb{n} }
\qquad
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} \ar@{=>}[r] & \bc{n} \ar@{-}[r] &
*+[F]{1} }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
If $n=2k$, we have a fixed labeling in $L({\mathbf{B}}_n^{(n)})$
\begin{equation*}
\ell_1^{(n)} \ =\quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & \overset{0-1}{\cdots} & \ar@{-}[l] 0
\ar@{-}[r] & 1 \ar@{=>}[r] & 1 \ar@{-}[r] & *+[F]{1} } \quad = \quad \xi_k \!\Rightarrow\! 1\!-\!{\sxymatrix{\boxone}}\quad \in L({\mathbf{B}}_n^{(n)}) \ .
\end{equation*}
\begin{proposition}
Define a map $\varphi\colon L({\mathbf{A}}_{n-1}^{(0)}) \to L({\mathbf{B}}_n^{(n)})$ by
\[ \varphi({\boldsymbol{a}}) = (\, {\boldsymbol{a}} \!\Rightarrow\! 0 \!-\! {\sxymatrix{\boxone}} \, ) \, , \]
then the induced map
$$
\varphi_*\colon {\rm Orb}({\mathbf{A}}_{n-1}^{(0)})\to {\rm Orb}({\mathbf{B}}_n^{(n)})
$$
is injective.
If $n$ is odd, then $\varphi_*$ is bijective;
if $n$ is even, the image of $\varphi_*$ is
${\rm Orb}({\mathbf{B}}_n^{(n)})\smallsetminus[\ell_1^{(n)}]$.
\end{proposition}
\begin{proposition}
{\bfseries The orbit of zero} in $L({\mathbf{B}}_n^{(n)})$ consists of two labelings:
\[
\sxymatrix{ 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{=>}[r] & 0 \ar@{-}[r] & *+[F]{1} }
\text{ \quad and \quad }
\sxymatrix{ 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{=>}[r] & 1 \ar@{-}[r] & *+[F]{1} } \ .
\]
\end{proposition}
\begin{corollary}\label{cor:Bnn-reps}
As {\bfseries representatives of orbits} in $ L({\mathbf{B}}_n^{(n)})$ we can take
\[\sxymatrix{ \xi_0 \ar@{=>}[r] & 0 \ar@{-}[r] & *+[F]{1} } \ ,\quad
\sxymatrix{ \xi_1 \ar@{=>}[r] & 0 \ar@{-}[r] & *+[F]{1} } \ ,\quad \dots\ ,\quad
\sxymatrix{ \xi_r \ar@{=>}[r] & 0 \ar@{-}[r] & *+[F]{1} }
\]
where $r = \lceil ({n-1})/{2} \rceil$, together with $\ell_1^{(n)}$ when $n$ is even.
\end{corollary}
\begin{corollary}\label{Bnn-orbits}
\begin{equation*}
\#{\rm Orb}({\mathbf{B}}_n^{(n)}) = \begin{cases} \#{\rm Orb}({\mathbf{A}}_{n-1}^{(0)}) +1=k+2 &
\mbox{ if }\ n=2k \ , \\ \#{\rm Orb}({\mathbf{A}}_{n-1}^{(0)})=k+1 & \mbox{ if }\
n=2k+1 \ .
\end{cases}
\end{equation*}
\end{corollary}
\section{Groups of type ${\mathbf{C}}_n$}
\label{sec:Cn}
\subsection{The compact group ${\bf Sp}(n)$ with diagram ${\mathbf{C}}_n^{(0)}$} \label{subsec:Cn}
The group $G$ is the compact ``quaternionic'' group ${\bf Sp}(n)$
of type ${\mathbf{C}}_n$ ($n\ge 3$) with Dynkin diagram
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} & \ar@{=>}[l] \bc{n} }.
\end{equation*}
\begin{construction}\label{con:Cn}
Let $L({\mathbf{A}}_{n-1}^{(0)}) \sqcup L({\mathbf{A}}_{n-1}^{(n-1)})$ denote the disjoint union of the sets of labelings
$L({\mathbf{A}}_{n-1}^{(0)})$ and $L({\mathbf{A}}_{n-1}^{(n-1)})$.
We define a map
$$
\varphi\colon L({\mathbf{A}}_{n-1}^{(0)}) \sqcup L({\mathbf{A}}_{n-1}^{(n-1)})\to L({\mathbf{C}}_n^{(0)})
$$
sending ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(0)})$ to ${\boldsymbol{a}}\!\Leftarrow\! 0$ and sending
${\boldsymbol{a}}' \! \!-\! {\sxymatrix{\boxone}} \ \in L({\mathbf{A}}_{n-1}^{(n-1)})$ to ${\boldsymbol{a}}'\!\Leftarrow\! 1$.
Clearly $\varphi$ is a bijection.
\end{construction}
Note that
for any ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(0)})$ and ${\boldsymbol{a}}'\! \!-\! {\sxymatrix{\boxone}}\ \in L({\mathbf{A}}_{n-1}^{(n-1)})$,
the labelings ${\boldsymbol{a}}\!\Leftarrow\! 0$ and ${\boldsymbol{a}}'\!\Leftarrow\! 1$
are not equivalent in $L({\mathbf{C}}_n^{(0)})$.
\begin{proposition}
The bijection $\varphi$ of Construction \ref{con:Cn} induces a bijection on orbits
$$
\varphi_*\colon {\rm Orb}({\mathbf{A}}_{n-1}^{(0)}) \sqcup {\rm Orb}({\mathbf{A}}_{n-1}^{(n-1)})\isoto {\rm Orb}({\mathbf{C}}_n^{(0)}).
$$
\end{proposition}
\begin{corollary} \label{cor:Cn}
For ${\mathbf{C}}_n^{(0)}$:
\begin{itemize}
\item[(i)]
{\bfseries The orbit of zero} is just 0.
\item[(ii)]
As {\bfseries representatives for orbits} we can take
\[ \xi_0 \!\Leftarrow\! 0 , \quad \xi_1 \!\Leftarrow\! 0 , \quad \cdots , \quad \xi_r \!\Leftarrow\! 0, \]
where $r = \lceil ({n-1})/{2} \rceil$,
and
\[ \xi_0 \!\Leftarrow\! 1 , \quad \xi_1 \!\Leftarrow\! 1 , \quad \cdots , \quad \xi_s \!\Leftarrow\! 1, \]
where $s = \lceil {n}/{2} \rceil - 1$.
\item[(iii)] $\#{\rm Orb}({\mathbf{C}}_n^{(0)}) = \#{\rm Orb}({\mathbf{A}}_{n-1}^{(0)}) + \#{\rm Orb}({\mathbf{A}}_{n-1}^{(n-1)})= n+1.$
\end{itemize}
\end{corollary}
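The count in (iii) can be checked directly against the representatives in (ii): the two families contain $r+1=\lceil ({n-1})/{2} \rceil + 1$ and $s+1=\lceil {n}/{2} \rceil$ labelings, respectively, and
\[
\Bigl(\Bigl\lceil \frac{n-1}{2}\Bigr\rceil + 1\Bigr) + \Bigl\lceil \frac{n}{2}\Bigr\rceil = n+1,
\]
since $\lceil ({n-1})/{2} \rceil + \lceil {n}/{2} \rceil = n$.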
(Of course, it is well known that $\# H^1({\mathbb{R}},G)=n+1$ in this case;
this follows, for example, from \cite[III.1.1, Proposition 1]{Serre}.)
\subsection{The diagram ${\mathbf{A}}_n^{(m,n)}$}
(We shall need this diagram in Subsection \ref{subsec:Cm,n-m}.)
Denote by ${\mathbf{A}}_n^{(m,n)}$ the Dynkin diagram ${\mathbf{A}}_n$ with {\em two} black vertices $m$ and $n$, where $1\le m<n$:
$$
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{m-1} \ar@{-}[r] & \bcb{m} \ar@{-}[r] & \bc{m+1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-1} \ar@{-}[r] & \bcb{n} \,.
}
$$
We denote by $L({\mathbf{A}}_n^{(m,n)})$ the set of labelings ${\boldsymbol{a}}=(a_i)$ of ${\mathbf{A}}_n^{(m,n)}$.
We consider the moves ${\mathcal{M}}_i$ given by formula \eqref{non-twisted-simply-laced}
for white vertices and by formula \eqref{twisted-simply-laced-new} for black vertices.
We construct the augmented diagram
\begin{equation*}
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{m} \ar@{-}[r] \ar@{-}[d] & \cdots \ar@{-}[r] & \bc{n} \ar@{-}[r] & *+[F]{1} \\
& & *+[F]{1} & &
}\ ,
\end{equation*}
by adding $\sxymatrix{ *+[F]{1} }$ two times, and now the moves ${\mathcal{M}}_i$
are given by formula \eqref{non-twisted-simply-laced} for all $i=1,\dots,n$.
We consider the orbits (equivalence classes) in $L({\mathbf{A}}_n^{(m,n)})$. Note that when $m$ is odd,
the labeling
\begin{equation*}
\ell_1^{(m,n)}\ =\quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \stackrel{1-0}{\cdots} & \ar@{-}[l] 1 \ar@{-}[r] \ar@{-}[d] &
1 \ar@{-}[r] & \stackrel{1}{\cdots} & \ar@{-}[l] 1 \ar@{-}[r] & *+[F]{1} \\ & & & *+[F]{1} & & & & }
\end{equation*}
is a fixed labeling.
\begin{lemma}
For ${\mathbf{A}}_n^{(m,n)}$,
we can take the following labelings as representatives of orbits:
$(0|0)$, $(p|0)$ for $p=1,...,\lceil (m-1)/2 \rceil$ and $(0|q)$ for $q=1,...,\lceil
(n-1-m)/2 \rceil$, and when $m$ is odd, also the fixed labeling $\ell_1^{(m,n)}$.
\end{lemma}
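For example, for ${\mathbf{A}}_4^{(2,4)}$ the lemma gives the three representatives $(0|0)$, $(1|0)$ and $(0|1)$; since $m=2$ is even, no fixed labeling occurs.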
\subsection{The group ${\bf Sp}(m,\,n-m)$ \ $(\, 1\le m\le \lfloor n/2\rfloor\,)$
with twisting diagram ${\mathbf{C}}_n^{(m)}$}\label{subsec:Cm,n-m}
The group $G$ is the ``quaternionic'' group ${\bf{Sp}}(m,\,n-m)$,
the unitary group of the diagonal quaternionic Hermitian form with $m$ times $-1$ and $n-m$ times $+1$ on the diagonal.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bcb{m} \ar@{-}[r] & \cdots & \ar@{-}[l]
\bc{n-1} & \ar@{=>}[l] \bc{n} }
\qquad
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{m} \ar@{-}[d] \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} & \ar@{=>}[l] \bc{n} \\
& & *+[F]{1} & & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition}\label{prop0:Cn^(m)}
The bijection
$$
\varphi\colon L({\mathbf{A}}_{n-1}^{(m)}) \sqcup L({\mathbf{A}}_{n-1}^{(m,n-1)})\to L({\mathbf{C}}_n^{(m)})
$$
sending ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(m)})$ to ${\boldsymbol{a}}\!\Leftarrow\! 0$ and sending ${\boldsymbol{a}}'\! \!-\!{\sxymatrix{\boxone}} \in L({\mathbf{A}}_{n-1}^{(m,n-1)})$ to ${\boldsymbol{a}}'\!\Leftarrow\! 1$,
induces a bijection
$$
\varphi_*\colon {\rm Orb}({\mathbf{A}}_{n-1}^{(m)}) \sqcup {\rm Orb}({\mathbf{A}}_{n-1}^{(m,n-1)})\isoto {\rm Orb}({\mathbf{C}}_n^{(m)}).
$$
\end{proposition}
Denote $(p|q \!\! < \!\! {\varkappa})\,=\,{\boldsymbol{a}}\!\!\Leftarrow\!\! {\varkappa}$, where ${\boldsymbol{a}}=(p|q)\in L({\mathbf{A}}_{n-1}^{(m)})$ and ${\varkappa} \in \{0,1\}$.
For example, for ${\mathbf{C}}_5^{(3)}$ we have
\[
(1|0 \!\! < \!\! 1)\ =\quad \sxymatrix{ 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 0 & \ar@{=>}[l] 1 \\ & & *+[F]{1} & & } \ .
\]
\begin{corollary} \label{prop:Cn^(m)}
For ${\mathbf{C}}_n^{(m)}$:
\begin{enumerate}
\item[(i)] {\bfseries The orbit of zero} is
$$
\{\,({\boldsymbol{a}}\!\Leftarrow\! 0)\ |\ {\boldsymbol{a}}\in L({\mathbf{A}}_{n-1}^{(m)}),\, l({\boldsymbol{a}})=r({\boldsymbol{a}})\,\}.
$$
\item[(ii)] As {\bfseries representatives of orbits} we can take $(p|0 \!\! < \!\! 0)$ with $p=0,...,\lceil ({m-1})/{2}
\rceil$, $(0|q \!\! < \!\! 0)$ with $q=1,...,\lceil ({n-1-m})/{2} \rceil$,
$(p|0 \!\! < \!\! 1)$ with $p = 0,...,\lceil ({m-1})/{2} \rceil$,
$(0|q \!\! < \!\! 1)$ with $q = 1,..., \lfloor ({n-1-m})/{2} \rfloor = \lceil ({n-2-m})/{2} \rceil$,
and when $m$ is odd, the fixed labeling
\[
\ell_1^{(m,n)}\ =\quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots} & \ar@{-}[l] 1 \ar@{-}[d] \ar@{-}[r] &
\overset{1}{\cdots} & \ar@{-}[l] 1 & \ar@{=>}[l] 1 \\ & & & *+[F]{1} & & }\ .
\]
\item[(iii)] $\#{\rm Orb}({\mathbf{C}}_n^{(m)}) =\#{\rm Orb}({\mathbf{A}}_{n-1}^{(m)}) + \#{\rm Orb}({\mathbf{A}}_{n-1}^{(m,n-1)})= n+1$.
\end{enumerate}
\end{corollary}
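The total in (iii) can also be read off from the list in (ii): it contains
\[
\Bigl(\Bigl\lceil\frac{m-1}{2}\Bigr\rceil+1\Bigr)+\Bigl\lceil\frac{n-1-m}{2}\Bigr\rceil
+\Bigl(\Bigl\lceil\frac{m-1}{2}\Bigr\rceil+1\Bigr)+\Bigl\lfloor\frac{n-1-m}{2}\Bigr\rfloor+\varepsilon
\;=\; m+2+(n-1-m)\;=\;n+1
\]
labelings, where $\varepsilon=1$ if $m$ is odd and $\varepsilon=0$ otherwise; here we use $2\lceil ({m-1})/{2}\rceil+\varepsilon=m$ and $\lceil x/2\rceil+\lfloor x/2\rfloor=x$.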
(Of course, it is well known that $\# H^1({\mathbb{R}},G) = n+1$ in this case;
this follows, for example, from \cite[III.1.1, Proposition 1]{Serre}.)
\subsection{The split group ${\bf Sp}(2n,{\mathbb{R}})$ with twisting diagram ${\mathbf{C}}_n^{(n)}$}
The twisting diagram and the augmented diagram are
\begin{equation*}
\sxymatrix{
\bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1} & \ar@{=>}[l] \bcb{n} & \quad\quad & \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bc{n-1}
& \ar@{=>}[l] \bc{n} \ar@{-}[r] & *+[F]{1}
}
\end{equation*}
In this case there is only one orbit, $\#{\rm Orb}({\mathbf{C}}_n^{(n)}) = 1$
(it is well known that $H^1({\mathbb{R}},G)=1$ in this case,
see for example \cite[III.1.2, Proposition 3]{Serre}).
\section{Groups of type ${\mathbf{D}}_n$}
\label{sec:Dn}
\subsection{The compact group ${\bf Spin}(2n)$ of type ${\mathbf{D}}_n^{(0)}$} \label{subsec:Dn}
The group $G$ is the spin group ${\bf Spin}(2n)$, the universal covering of the special orthogonal group ${{\bf SO}}(2n)$, where $n\ge 4$.
The Dynkin diagram of $G$ is
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-3} \ar@{-}[r] & \bc{n-2} \ar@{-}[d] \ar@{-}[r] & \bc{n-1} \\
& & & \bcu{n} & }
\end{equation*}
This diagram has a vertex of degree 3, the vertex $n-2$.
For brevity we introduce the following notation:
if ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-2}^{(0)})$, ${\boldsymbol{a}}=(a_i)_{i=1}^{n-2}$, ${\varkappa},\lambda\in\{0,1\}$, we write
\begin{equation}\label{eq:frac-Dn}
{\boldsymbol{a}} \frac{{\varkappa}}{\lambda} \quad:= \quad
\xymatrix@1@R=15pt@C=9pt
{a_1 \ar@{-}[r] &\cdots &\ a_{n-2} \ar@{-}[l] \ar@{-}[d] \ar@{-}[r] &\ {\varkappa}\ \\ & &\overset{\ }{\lambda} } \ .
\end{equation}
Note that for ${\mathbf{D}}_n^{(0)}$ the labelings
$$\ell_2^{(0)}=0=\xi_0^{n-2} \dfrac{0}{0}=0...0\frac{0}{0} \quad \text{and}\quad \ell_4^{(0)}= \xi_0^{n-2} \dfrac{1}{1}= 0...0\frac{1}{1}$$
are fixed labelings. If $n$ is even, $n=2k$, then the labelings
$$
\ell_1^{(0)}=\xi_{k-1}^{n-2}\dfrac{1}{0}= 10..10\frac{1}{0}\quad \text{and}\quad \ell_3^{(0)}=\xi_{k-1}^{n-2}\dfrac{0}{1}= 10..10\frac{0}{1}
$$
are fixed labelings.
\begin{proposition}
Define a map
\[
\varphi\colon L({\mathbf{A}}_{n-2}^{(0)}) \longrightarrow L({\mathbf{D}}_n^{(0)}), \quad {\boldsymbol{a}} \longmapsto {\boldsymbol{a}} \frac{0}{0}\, .
\]
Then the induced map $\varphi_*\colon \operatorname{Orb}({\mathbf{A}}_{n-2}^{(0)}) \to \operatorname{Orb}({\mathbf{D}}_n^{(0)})$ is injective.
If $n$ is even, $n=2k$, then the image of $\varphi_*$ is
$$\operatorname{Orb}({\mathbf{D}}_n^{(0)}) \smallsetminus \left\{ [\ell_4^{(0)}] , \ [\ell_1^{(0)}], \ [\ell_3^{(0)}] \right\}.$$
If $n$ is odd, $n=2k+1$, then the image of $\varphi_*$ is
$$\operatorname{Orb}({\mathbf{D}}_n^{(0)}) \smallsetminus [\ell_4^{(0)}].$$
\end{proposition}
\begin{corollary} \label{cor:Dn-reps}
For ${\mathbf{D}}_n^{(0)}$:
\begin{itemize}
\item[(i)]
{\bfseries The orbit of zero} is just the labeling $\ell_2^{(0)}=0$.
\item[(ii)]
{\bfseries Representatives of orbits} are:
\begin{itemize}
\item For $n=2k+1$ we can take the following representatives coming from $L({\mathbf{A}}_{n-2}^{(0)})$:
\[
\xi_0^{n-2} \frac{0}{0} = 0...0\frac{0}{0} \ , \quad \xi_1^{n-2} \frac{0}{0} = 10...0\frac{0}{0} \
, \quad ... ,
\quad \xi_{k}^{n-2}\frac{0}{0} = 101..01\frac{0}{0}
\]
and the fixed labeling $\ell_4^{(0)}$.
\item For $n=2k$ we can take the following representatives coming from $L({\mathbf{A}}_{n-2}^{(0)})$:
\[
\xi_0^{n-2} \frac{0}{0} = 0...0\frac{0}{0} \ , \quad \xi_1^{n-2} \frac{0}{0} = 10..0\frac{0}{0} \ ,
\quad ... , \quad \xi_{k-1}^{n-2} \frac{0}{0} = 10..10\frac{0}{0} \ ,
\]
and the fixed labelings $\ell_4^{(0)}$, $\ell_1^{(0)}$, and $\ell_3^{(0)}$.
\end{itemize}
\item[(iii)] We have
\begin{equation*}
\#{\rm Orb}({\mathbf{D}}_n^{(0)}) = \begin{cases} \#{\rm Orb}({\mathbf{A}}_{n-2}^{(0)}) + 3=k+3 & \mbox{ if } \ n = 2k \ , \\
\#{\rm Orb}({\mathbf{A}}_{n-2}^{(0)}) + 1=k+2 & \mbox{ if } \ n = 2k+1 \ . \end{cases}
\end{equation*}
\end{itemize}
\end{corollary}
\begin{example}
For ${\mathbf{D}}_5^{(0)}$ we have representatives of orbits
\[ 000\frac{0}{0} \ , \quad 100\frac{0}{0} \ , \quad 101\frac{0}{0} , \quad 000\frac{1}{1} \ . \]
For ${\mathbf{D}}_6^{(0)}$ we have representatives of orbits
\[
0000\frac{0}{0} \ , \quad 1000\frac{0}{0} \ , \quad 1010\frac{0}{0} \ , \quad 1010\frac{1}{0} \ ,
\quad 1010\frac{0}{1} \ , \quad 0000\frac{1}{1} \ .
\]
\end{example}
\subsection{The group ${\bf Spin}(2m,\,2n-2m)$ \ $(\,1 \le m \le \lfloor n/2 \rfloor\,)$ with twisting diagram ${\mathbf{D}}_n^{(m)}$}
\label{subsec:Dn^(m)}
The group $G$ is ${\bf Spin}(2m,\,2n-2m)$, the universal covering of the special orthogonal group ${{\bf SO}}(2m,\,2n-2m)$
of the diagonal quadratic form with $2m$ times $-1$ and $2n-2m$ times $+1$ on the diagonal.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bcb{m} \ar@{-}[r] & \cdots & \ar@{-}[l]
\bc{n-2} \ar@{-}[d] \ar@{-}[r] & \bc{n-1} &\quad\quad& \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l]
\bc{m} \ar@{-}[d] \ar@{-}[r] & \cdots & \ar@{-}[l]
\bc{n-2} \ar@{-}[d] \ar@{-}[r] & \bc{n-1} \\
& & & & \bcu{n} & &\quad\quad& & & *+[F]{1} & & \bcu{n} & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{remark}\label{rem:when-they-occur}
For ${\mathbf{D}}_n^{(m)}$:
\begin{enumerate}
\item[(a)] When $m$ is even, we have fixed labelings
\[ \
\ell_2^{(m)}\ =\quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots} & \ar@{-}[l] 1 \ar@{-}[r] & 0
\ar@{-}[d] \ar@{-}[r] & 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{-}[d] \ar@{-}[r] & 0
\\ & & & & *+[F]{1} & & & 0 & }
\]
and
\[ \
\ell_4^{(m)}\ =\quad \sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots} & \ar@{-}[l] 1 \ar@{-}[r] & 0
\ar@{-}[d] \ar@{-}[r] & 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{-}[d] \ar@{-}[r] & 1
\\ & & & & *+[F]{1} & & & 1 & } \ .
\]
\item[(b)] When $n-m$ is even, we have fixed labelings
\[ \ell_1^{(m)}\ = \quad
\sxymatrix{ \xi_0 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots} & \ar@{-}[l] 1
\ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 1 \\ & *+[F]{1} & & & & & 0 & }
\]
and
\[ \ell_3^{(m)}\ =\quad
\sxymatrix{ \xi_0 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & \overset{1-0}{\cdots} & \ar@{-}[l] 1
\ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 0 \\ & *+[F]{1} & & & & & 1 & } \ .
\]
\end{enumerate}
(Cases (a) and (b) can occur together.)
\end{remark}
Note that $[\ell_2^{(m)}]$ is in the image of the map $\varphi_*$ of Theorem \ref{lem:Dn(m)} below,
while $[\ell_1^{(m)}], [\ell_3^{(m)}]$ and $[\ell_4^{(m)}]$ are not.
\begin{theorem}\label{lem:Dn(m)}
Consider the map $\varphi \colon L({\mathbf{A}}_{n-2}^{(m)}) \to L({\mathbf{D}}_n^{(m)})$ defined by
${\boldsymbol{a}} \mapsto {\boldsymbol{a}} \frac{0}{0}$\,.
Then the induced map on orbits $\varphi_* \colon {\rm Orb}({\mathbf{A}}_{n-2}^{(m)}) \to {\rm Orb}({\mathbf{D}}_n^{(m)})$ is injective,
and its image is the whole set ${\rm Orb}({\mathbf{D}}_n^{(m)})$
except for the fixed labelings $\ell_1^{(m)}$, $\ell_3^{(m)}$, and $\ell_4^{(m)}$ when they occur;
see Remark \ref{rem:when-they-occur}.
\end{theorem}
\begin{proof}
We prove the injectivity.
Let ${\boldsymbol{d}}={\boldsymbol{a}}\frac{{\varkappa}}{\lambda}\in L({\mathbf{D}}_n^{(m)})$, where ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-2}^{(m)})$\,.
Set
$$
\delta({\boldsymbol{d}})=({\varkappa}+\lambda\ {\rm mod}\ 2)(1-d_{n-2}) +r({\boldsymbol{a}})-l({\boldsymbol{a}}),
$$
where ${\varkappa}+\lambda\ {\rm mod}\ 2\in \{0,1\}\subset {\mathbb{Z}}$, $d_{n-2}=a_{n-2} \in \{0,1\}\subset {\mathbb{Z}}$.
It is easy to check that $\delta({\boldsymbol{d}})$ does not change under the moves in $L({\mathbf{D}}_n^{(m)})$.
Clearly we have $\delta({\boldsymbol{a}}\frac{0}{0})=r({\boldsymbol{a}})-l({\boldsymbol{a}})$.
Now if ${\boldsymbol{a}},{\boldsymbol{a}}'\in L({\mathbf{A}}_{n-2}^{(m)})$ and ${\boldsymbol{a}}\not\sim {\boldsymbol{a}}'$ in $L({\mathbf{A}}_{n-2}^{(m)})$,
then by Proposition \ref{prop:An^(m)-invariant} $r({\boldsymbol{a}})-l({\boldsymbol{a}})\neq r({\boldsymbol{a}}')-l({\boldsymbol{a}}')$,
hence $\delta({\boldsymbol{a}}\frac{0}{0})\neq \delta({\boldsymbol{a}}'\frac{0}{0})$,
and therefore, $({\boldsymbol{a}}\frac{0}{0})\not\sim ({\boldsymbol{a}}'\frac{0}{0})$ in $L({\mathbf{D}}_n^{(m)})$.
We prove the assertion about the image. There are two cases: (1) $n-m$ is odd, and (2) $n-m$ is even.
Case (1): $n-m$ is odd. Let ${\boldsymbol{d}}\in L({\mathbf{D}}_n^{(m)})$.
We prove that either ${\boldsymbol{d}}\sim(\dots\frac{0}{0})$ or ${\boldsymbol{d}}=\ell_4^{(m)}$.
Up to equivalence, we may assume that
\begin{equation}\label{eq:schematic-Dn(m)}
{\boldsymbol{d}}={\boldsymbol{a}}\frac{{\varkappa}}{\lambda}=\ \sxymatrix{ {\boldsymbol{a}}^l \ar@{-}[r] &0 \ar@{-}[r] \ar@{-}[d] & {\boldsymbol{a}}^r\dfrac{{\varkappa}}{\lambda} \\ & *+[F]{1} & }\ ,
\end{equation}
where ${\boldsymbol{a}}^l\in L({\mathbf{A}}_{m-1}^{(0)})$ is the left-hand side of ${\boldsymbol{a}}$,
${\boldsymbol{a}}^r\in L({\mathbf{A}}_{n-2-m}^{(0)})$ is the right-hand side of ${\boldsymbol{a}}$,
and ${\varkappa},\lambda\in\{0,1\}.$
If ${\varkappa}=\lambda=0$, then ${\boldsymbol{d}}={\boldsymbol{a}}\frac{0}{0}$, as required.
If ${\varkappa}=1$, $\lambda=0$, then ${\boldsymbol{a}}^r{\varkappa}={\boldsymbol{a}}^r 1\sim(\dots 0)$ in $L({\mathbf{A}}_{n-m-1}^{(0)})$, because $n-m-1$ is even.
Thus ${\boldsymbol{d}}\sim(\dots\frac{0}{0})$, as required.
The case ${\varkappa}=0$, $\lambda=1$ is similar to the case ${\varkappa}=1$, $\lambda=0$.
Now assume that ${\varkappa}=\lambda=1$. If ${\boldsymbol{a}}^r\neq 0$, then ${\boldsymbol{a}}^r\sim(\dots 1)$.
Thus ${\boldsymbol{d}}\sim(\dots 1\frac{1}{1})\sim(\dots 1\frac{0}{0})$, as required.
If ${\boldsymbol{a}}^r=0$ and either $m$ is odd or $m$ is even and ${\boldsymbol{a}}^l\neq\xi_{m/2}$,
then we may assume that $d_{m-1}=({\boldsymbol{a}}^l)_{m-1}=0$. Then, applying moves, we can change $d_m$ to 1,
then change $d_{m+1}$ to 1, \dots then change $d_{n-2}$ to 1, and finally we obtain
that ${\boldsymbol{d}}\sim(\dots 1\frac{1}{1})\sim (\dots 1\frac{0}{0})$, as required.
If ${\boldsymbol{a}}^r=0$, $m$ is even and ${\boldsymbol{a}}^l=\xi_{m/2}$, then ${\boldsymbol{d}}=\ell_4^{(m)}$, which completes the proof in Case (1).
Case (2): $n-m$ is even. Let ${\boldsymbol{d}}\in L({\mathbf{D}}_n^{(m)})$.
Up to equivalence, we may assume that ${\boldsymbol{d}}$ is as in \eqref{eq:schematic-Dn(m)}.
If ${\varkappa}=\lambda=0$, we have nothing to prove.
If ${\varkappa}=\lambda=1$ and ${\boldsymbol{d}}\neq\ell_4^{(m)}$, then the argument in Case (1)
shows that ${\boldsymbol{d}}\sim(\dots\frac{0}{0})$, as required.
Two cases remain: ${\varkappa}=1$, $\lambda=0$, and ${\varkappa}=0$, $\lambda=1$.
They are similar; we treat only the case ${\varkappa}=1$, $\lambda=0$.
Consider ${\boldsymbol{a}}{\varkappa}={\boldsymbol{a}} 1\in L({\mathbf{A}}_{n-1}^{(m)})$.
Using moves in $L({\mathbf{A}}_{n-1}^{(m)})$, we can reduce ${\boldsymbol{a}}1$ to a labeling which has
either no components to the right of the vertex $m$, or no components to the left of $m$.
In the former case ${\boldsymbol{d}}\sim(\dots\frac{0}{0})$, as required.
In the latter case, if ${\boldsymbol{d}}$ is as in \eqref{eq:schematic-Dn(m)} and ${\boldsymbol{a}}^r 1$
has fewer than $k:=(n-m)/2$ components, then ${\boldsymbol{a}}^r 1\sim(\dots 0)$ and ${\boldsymbol{d}}\sim(\dots \frac{0}{0})$, as required.
If ${\boldsymbol{a}}^r 1$ has $k$ components, then ${\boldsymbol{a}}^r1=\xi_k$. Since ${\boldsymbol{a}}^l=0$, we see that ${\boldsymbol{d}}=\ell_1^{(m)}$.
This completes the proof in Case (2).
\end{proof}
\begin{corollary} \label{cor:Dn^(m)-zero}
Set $A_0=\{{\boldsymbol{a}} \in L({\mathbf{A}}_{n-2}^{(m)})\ |\ l({\boldsymbol{a}})=r({\boldsymbol{a}})\}$ (this is the orbit of zero in $L({\mathbf{A}}_{n-2}^{(m)})$).
We write ${\boldsymbol{a}}=(a_i)$.
Then {\bfseries the orbit of zero} in $L({\mathbf{D}}_n^{(m)})$ is
$$
\left\{ {\boldsymbol{a}}\frac{0}{0},\, {\boldsymbol{a}}\frac{1}{1}\ |\ {\boldsymbol{a}}\in A_0\right\}\,\cup\,
\left\{ {\boldsymbol{a}}\frac{1}{0},\, {\boldsymbol{a}}\frac{0}{1}\ |\ {\boldsymbol{a}}\in A_0,\, a_{n-2}=1\right\}.
$$
\end{corollary}
Set
\begin{equation}\label{eq:(p|q)frac}
(p|q)\frac{{\varkappa}}{\lambda}:={\boldsymbol{a}}\frac{{\varkappa}}{\lambda}\in L({\mathbf{D}}_n^{(m)}),\quad
\text{where}\quad {\boldsymbol{a}}=(p|q)\in L({\mathbf{A}}_{n-2}^{(m)}),\ \ {\varkappa},\lambda \in \{0,1\},
\end{equation}
see formulas \eqref{eq:(p|q)} and \eqref{eq:frac-Dn}.
\begin{corollary} \label{cor:Dn^(m)-reps}
For $L({\mathbf{D}}_n^{(m)})$,
as {\bfseries representatives of orbits} we can take
the labelings $(p|0)\frac{0}{0}$ \ for \ $0 \le p \le \lfloor m/2 \rfloor = \lceil (m-1)/2 \rceil$,
the labelings $(0|q)\frac{0}{0}$ \ for \ $1 \le q \le \lceil ((n-2)-m)/2 \rceil$,
and the fixed labelings $\ell_4^{(m)}$ and $\ell_1^{(m)}$, $\ell_3^{(m)}$ when they occur;
see Remark \ref{rem:when-they-occur}.
\end{corollary}
\begin{corollary} \label{prop:Dn^(m)-number}
\begin{equation*}
\#{\rm Orb}({\mathbf{D}}_n^{(m)}) = \begin{cases} k+2 & \mbox{ if } n=2k+1 \ , \\
k+3 & \mbox{ if } n=2k \mbox{ and } m \mbox{ is even} , \\
k & \mbox{ if } n=2k \mbox{ and } m \mbox{ is odd}.
\end{cases}
\end{equation*}
\end{corollary}
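Indeed, counting the representatives of Corollary \ref{cor:Dn^(m)-reps}, the labelings $(p|0)\frac{0}{0}$ and $(0|q)\frac{0}{0}$ contribute
\[
\Bigl\lceil\frac{m-1}{2}\Bigr\rceil+1+\Bigl\lceil\frac{n-2-m}{2}\Bigr\rceil
=\begin{cases} k & \mbox{ if } n=2k \ , \\ k & \mbox{ if } n=2k+1 \mbox{ and } m \mbox{ is odd}, \\ k+1 & \mbox{ if } n=2k+1 \mbox{ and } m \mbox{ is even}, \end{cases}
\]
to which one adds $1$ for $\ell_4^{(m)}$ when $m$ is even and $2$ for $\ell_1^{(m)}$, $\ell_3^{(m)}$ when $n-m$ is even (Remark \ref{rem:when-they-occur}).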
\subsection{The group ${\bf Spin}^*(2n)$ with twisting diagram ${\mathbf{D}}_n^{(n)}$}
\label{sec:Dn(n)} \label{subsec:Dn^(n)}
The group $G$ is the ``quaternionic'' spin group ${\bf Spin}^*(2n)$,
the universal covering of ${{\bf SO}}^*(2n)$, the special unitary group
of the diagonal quaternionic skew-Hermitian form in $n$ variables
$$
{\mathrm{i}} x_1 {\bar{x}} _1+\dots+{\mathrm{i}} x_n {\bar{x}}_n.
$$
The twisting diagram and augmented diagram are:
\begin{equation*}
\xymatrix
@1@R=1pt@C=9pt
{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-3} \ar@{-}[r] & \bc{n-2}
\ar@{-}[d] \ar@{-}[r] & \bc{n-1}&& && \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-3} \ar@{-}[r] & \bc{n-2} \ar@{-}[d]
\ar@{-}[r] & \bc{n-1}
\\
& & & \bcbu{n} &
&&&& & & & {\boldsymbol{c}}c \ar@{-}[d] &
\\
&&&&&&& &&& & *+[F]{1} &
}
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
We consider the following labelings of ${\mathbf{D}}_n^{(n)}$:
\begin{equation*} \label{diag:Dn(n)-odd-rep}
m_1\ =\quad \sxymatrix{ 0 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 0 \\ & & & 0 \ar@{-}[d] & \\ & & & *+[F]{1} & }
\quad\text{and}\quad
m_2\ =\quad\sxymatrix{ 1 \ar@{-}[r] & \cdots & \ar@{-}[l] 0 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 0 \\ & & & 0 \ar@{-}[d] & \\ & & & *+[F]{1} & }
\end{equation*}
\begin{proposition}
For ${\mathbf{D}}_n^{(n)}$ there are exactly two orbits:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero}, which consists of the labelings with an odd number of components (including the boxed 1);
we can take $m_1$ as a representative.
\item[2.] The other orbit, which consists of the labelings with an even number of components (including the boxed 1);
we can take $m_2$ as a representative.
\end{enumerate}
\end{proposition}
\subsection{The group ${\bf Spin}(2m+1,\, 2(n-m)-1)$}
\label{ssec:Dn-outer}
Here $0\le m \le \lfloor (n-1)/2\rfloor$.
The group $G$ is an outer form of the compact group ${\bf Spin}(2n)$ of type ${\mathbf{D}}_{n}$.
Here for $n>4$ we consider the nontrivial involutive automorphism $\tau$ of the Dynkin diagram ${\mathbf{D}}_n$,
while for $n=4$ $\tau$ is {\em a} nontrivial involutive automorphism of ${\mathbf{D}}_4$.
The Kac diagram is:
\[
\sxymatrix{ \bc{0} \ar@{<=}[r] & \bc{1} \ar@{-}[r] & \cdots & \ar@{-}[l] \bcb{m} \ar@{-}[r] & \cdots & \ar@{-}[l]
\bc{n-2} \ar@{=>}[r] & \bc{n-1} }\,
\]
see \cite[Table 7]{OV}.
We erase vertex 0 and also the ``short'' vertex $n-1$ (which comes from $D\smallsetminus D^\tau$).
If $m=0$, we obtain $D^\tau={\mathbf{A}}_{n-2}^{(0)}$ (non-twisted). By formula \eqref{eq:An-num-of-orbits}
$$
\#{\rm Orb}(D^\tau)=\lceil (n-2)/2\rceil +1.
$$
The orbit of zero in $L(D^\tau)$ is 0.
{\bfseries The class of 0} in $L(D)^\tau$ consists of the labelings with zero restriction to $D^\tau$.
As {\bfseries representatives of equivalence classes} we can take $\xi_0$,
$\xi_1$, \dots, $\xi_r$, where $r=\lceil (n-2)/2\rceil$.
These representatives lie in $L(D^\tau)$ and hence in $L(D)^\tau$.
If $m \neq 0$, then after erasing the vertices $0$ and $n-1$ of the Kac diagram we obtain the twisted diagram
${\mathbf{A}}_{n-2}^{(m)}$. We add boxed 1 as a neighbor to vertex $m$ and obtain the augmented diagram
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{m} \ar@{-}[r] \ar@{-}[d] & \cdots \ar@{-}[r] & \bc{n-2} \\
& & *+[F]{1} & & }
\end{equation*}
By formula \eqref{eq:An^m-num-of-orbits}
\begin{eqnarray*} \label{eq:outer-num-of-orbits}
\#{\rm Orb}(D^\tau) & = & \left\lceil ({m-1})/{2}\right\rceil + 1 + \left\lceil
({n-2-m})/{2}\right\rceil \\ & = & \begin{cases}
k & \mbox{ if }n=2k \ , \\
k & \mbox{ if }n=2k+1 \mbox{ and } m \mbox{ is odd}, \\
k+1 & \mbox{ if }n=2k+1 \mbox{ and } m \mbox{ is even}.
\end{cases}
\end{eqnarray*}
The orbit of zero in $L(D^\tau,{\boldsymbol{t}})$ consists of the labelings ${\boldsymbol{a}}\in L({\mathbf{A}}_{n-2}^{(m)})$ such that $l({\boldsymbol{a}})=r({\boldsymbol{a}})$,
see Notation \ref{n:l,r}.
{\bfseries The class of zero in $L(D)^\tau$ } consists of the labelings ${\boldsymbol{d}}$
whose restriction ${\boldsymbol{a}}={\rm res}_{D^\tau}({\boldsymbol{d}})$ to $D^\tau$ satisfy $l({\boldsymbol{a}})=r({\boldsymbol{a}})$.
As {\bfseries representatives of equivalence classes} we can take
$(p|0)$ where $0\le p\le\lceil (m-1)/2\rceil$, and $(0|q)$ where $1\le q\le\lceil (n-2-m)/2\rceil $.
Again, these representatives lie in $L(D^\tau)$ and hence in $L(D)^\tau$.
\section{Groups of type ${\mathbf{E}}_6$}
\label{sec:E6}
\subsection{The compact group of type ${\mathbf{E}}_6^{(0)}$}
The Dynkin diagram of $G$ is
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5} \\
& & \bcu{6} & & }
\end{equation*}
\begin{proposition} [Reeder \cite{Reeder}]\label{prop:E6}
The diagram ${\mathbf{E}}_6^{(0)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of $0$, which is a fixed labeling.
\item[2.] The orbit consisting of all the labelings with $1$ or $3$ components with representative
\[ \ell_1\ = \quad
\sxymatrix{
1 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & & 0 & & }
\]
\item[3.] The orbit consisting of all the labelings with $2$ components with representative
\[ \ell_2\ = \quad
\sxymatrix{ 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & & 1 & & }
\]
\end{enumerate}
\end{proposition}
\begin{remark}
The moves in $L({\mathbf{E}}_n)$ for $n=6,7,8$ preserve the parity of the number of components.
\end{remark}
\begin{remark}
By \cite[Example 4.4]{Reeder} each of the graphs $D={\mathbf{E}}_6$ and $D={\mathbf{E}}_8$ is nonsingular
(namely, a certain quadratic form introduced by Reeder is nonsingular).
By \cite[Theorem 7.3 and Lemma 2.2(2)]{Reeder}, in both cases we have exactly $3$ orbits in $L(D)$: $\{0\}$,
the orbit consisting of all nonzero labelings with an even number of components,
and the orbit consisting of all labelings with an odd number of components.
\end{remark}
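Since the label sets involved are small, orbit decompositions of this kind can also be checked by brute force. The sketch below is only an illustration and not part of the argument: it assumes that the non-twisted simply-laced move ${\mathcal{M}}_i$ of formula \eqref{non-twisted-simply-laced} replaces $a_i$ by $a_i+\sum_{j\sim i}a_j \pmod 2$, and it encodes the edges of ${\mathbf{E}}_6$ as in the Dynkin diagram displayed above; under that assumption, enumerating all $2^6$ labelings should reproduce the three orbits of Proposition \ref{prop:E6}.
\begin{verbatim}
# Brute-force orbit enumeration for 0/1-labelings of a simply-laced diagram.
# Assumption (our reading of the move rule): M_i replaces a_i by
# a_i + (sum of the labels of the neighbours of i) mod 2.
from itertools import product

# Edges of E_6: the chain 1-2-3-4-5 with the extra vertex 6 attached to vertex 3.
E6_EDGES = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 6)]

def orbits(edges):
    """Partition all 0/1-labelings of the graph into orbits under the moves."""
    nbr = {}
    for u, v in edges:
        nbr.setdefault(u, set()).add(v)
        nbr.setdefault(v, set()).add(u)
    verts = sorted(nbr)
    idx = {v: i for i, v in enumerate(verts)}
    todo = set(product((0, 1), repeat=len(verts)))
    orbs = []
    while todo:
        start = todo.pop()
        orbit, stack = {start}, [start]
        while stack:                      # close up under all moves M_i
            a = stack.pop()
            for v in verts:
                b = list(a)
                b[idx[v]] = (a[idx[v]] + sum(a[idx[w]] for w in nbr[v])) % 2
                b = tuple(b)
                if b not in orbit:
                    orbit.add(b)
                    stack.append(b)
        todo -= orbit
        orbs.append(orbit)
    return orbs

if __name__ == "__main__":
    # Expected under the assumed move rule: three orbits, one of which is {0}.
    print(sorted(len(o) for o in orbits(E6_EDGES)))
\end{verbatim}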
\subsection{The group $EII$ with twisting diagram ${\mathbf{E}}_6^{(2)}$}\label{subsec:E6(2)}
A maximal compact subgroup is of type ${\mathbf{A}}_1 {\mathbf{A}}_5$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bcb{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5} \\ & &
\bcu{6} & &
}
\qquad
\mxymatrix{
\bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] \ar@{-}[d] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5} \\ & *+[F]{1} & \bcu{6} & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E6^(2)}
The diagram ${\mathbf{E}}_6^{(2)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of all the labelings with $1$ or $3$ components (including the boxed 1).
\item[2.] The orbit consisting of the labelings with $2$ components excluding the fixed labeling $\ell'_1$, with representative
\[ \ell_3\ =\quad \sxymatrix{
1 \ar@{-}[r] & 1 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & *+[F]{1} & 1 & & } \]
\item[3.] The fixed labeling
\[ \ell'_1\ = \quad
\sxymatrix{
1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & *+[F]{1} & 0 & & }
\]
\end{enumerate}
\end{proposition}
\subsection{The group $EIII$ of Hermitian type with twisting diagram ${\mathbf{E}}_6^{(1)}$}\label{subsec:E6(1)}
A maximal compact subgroup of $G$ is of type ${\mathbf{D}}_5 T^1$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\mxymatrix{ \bcb{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] \ar@{-}[d] & \bc{4} \ar@{-}[r] & \bc{5} \\ & &
\bcu{6} & &
}
\qquad
\mxymatrix{ *+[F]{1} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[d] \ar@{-}[r] & \bc{4} \ar@{-}[r] &
\bc{5}
\\ & & & \bcu{6} & &
}
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E6^(1)}
The diagram ${\mathbf{E}}_6^{(1)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of the labelings with $1$ or $3$ components excluding the fixed labeling $\ell'_2$.
\item[2.] The orbit consisting of all the labelings with $2$ components with representative
\[ \ell'_3\ = \quad
\sxymatrix{ *+[F]{1} \ar@{-}[r] & 1 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & & & 1 & & }
\]
\item[3.] The fixed labeling
\[ \ell'_2\ = \quad
\sxymatrix{ *+[F]{1} \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & & & 1 & & }
\]
\end{enumerate}
\end{proposition}
\subsection{The group $EIV$ of type ${\mathbf{E}}_6$}\label{subsec:EIV}
This is an outer form of the compact group of type ${\mathbf{E}}_6$ with
maximal compact subgroup of type ${\mathbf{F}}_4$.
The Kac diagram is
$$
\sxymatrix{ \bcb{0} \ar@{-}[r] & \bc{} & \bc{} \ar@{=>}[l] \ar@{-}[r] & \bc{} }
$$
We denote by $\tau$ the nontrivial automorphism of the Dynkin diagram $D={\mathbf{E}}_6$.
We erase vertex 0 and the other ``short'' vertex of the Kac diagram.
We obtain $D^\tau=\sxymatrix{ \bc{3} \ar@{-}[r] & \bc{6}}$ and
$\#{\rm Orb}(D^\tau)=2$.
The orbit of zero in $L(D^\tau)$ consists of one labeling 0 of $D^\tau$.
{\bfseries The equivalence class of zero} in $L(D)^\tau$ consists of the labelings whose restriction to $D^\tau$ is 0.
\subsection{The split group $EI$ of type ${\mathbf{E}}_6$}\label{subsec:EI}
This is an outer form of the compact group of type ${\mathbf{E}}_6$ with maximal compact subgroup of type ${\mathbf{C}}_4$.
The Kac diagram is
$$\sxymatrix{ \bc{0} \ar@{-}[r] & \bc{} & \bc{} \ar@{=>}[l] \ar@{-}[r] & \bcb{} }$$
We erase vertex 0 and the other ``short'' vertex of the Kac diagram.
We obtain $(D^\tau,{\boldsymbol{t}})=\sxymatrix{ \bc{3} \ar@{-}[r] & \bcb{6}}$.
The augmented diagram is
\[ \sxymatrix{ \bc{3} \ar@{-}[r] & \bc{6} \ar@{-}[r] & *+[F]{1} } \]
We have
$\#{\rm Orb}(D^\tau,{\boldsymbol{t}})=2$.
The orbit of zero in $L(D^\tau,{\boldsymbol{t}})$ consists of the labelings with one component (including the boxed 1).
{\bfseries The equivalence class of zero} in $L(D)^\tau$ consists of the labelings ${\boldsymbol{a}}$ such that either $a_3=0$ or $a_6=1$.
Note that $H^1({\mathbb{R}},EI)$ was earlier computed by B.~Conrad \cite[Proof of Lemma 4.9]{Conrad}.
\section{Groups of type ${\mathbf{E}}_7$}
\label{sec:E7}
\subsection{The compact group of type ${\mathbf{E}}_7^{(0)}$}
The Dynkin diagram is
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & \bcu{7} & & }
\end{equation*}
\begin{proposition}[Weng \cite{Weng}] \label{prop:E7}
The diagram ${\mathbf{E}}_7^{(0)}$ has $4$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of the fixed labeling 0.
\item[2.] The fixed labeling
\[ \ell_3\ =\quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& & & 1 & & }
\]
\item[3.] The orbit consisting of the labelings with $1$ or $3$ components
excluding the fixed labeling $\ell_3$, with representative
\[ \ell_1\ =\quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& & & 0 & & }
\]
\item[4.] The orbit consisting of all the labelings with $2$ or $4$ components, with representative
\[ \ell_2\ =\quad
\sxymatrix{ 0 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& & & 1 & & }
\]
\end{enumerate}
\end{proposition}
\begin{remark}
By a lemma of Chih-wen Weng \cite{Weng}, for any (uncolored) simply-laced tree
(not necessarily a Dynkin diagram) containing ${\mathbf{E}}_6$ as a subgraph,
any {\em movable} (non-fixed) labeling is equivalent either to a labeling
with one component or to a labeling with two components.
\end{remark}
\subsection{The split group $EV$ with twisting diagram ${\mathbf{E}}_7^{(7)}$}
\label{sec:E7(7)}
A maximal compact subgroup is of type ${\mathbf{A}}_7$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\xymatrix@1@R=0pt@C=9pt
{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & \bcbu{7} & & &}
\qquad
\sxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & {\boldsymbol{c}}c \ar@{-}[d] & & \\
& & & *+[F]{1} & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E7^(7)}
The diagram ${\mathbf{E}}_7^{(7)}$ has $2$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero}
is the orbit consisting of all the labelings with $1$ or $3$ components (including the boxed 1).
\item[2.] The orbit consisting of all the labelings with $2$ or $4$ components (including the boxed 1), with representative
\[ m_3\ =\quad
\sxymatrix{ 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 1 \\
& & & 0 \ar@{-}[d] & & \\
& & & *+[F]{1} & & } \ .
\]
\end{enumerate}
\end{proposition}
Note that $H^1({\mathbb{R}},EV)$ was earlier computed by B.~Conrad \cite[Proof of Lemma 4.9]{Conrad}.
\subsection{The group $EVI$ with twisting diagram ${\mathbf{E}}_7^{(2)}$}
A maximal compact subgroup is of type ${\mathbf{A}}_1 {\mathbf{D}}_6$.
The twisting diagram and augmented diagram are:
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bcb{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & \bcu{7} & & & }
\qquad
\mxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] \ar@{-}[d] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& *+[F]{1} & & \bcu{7} & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E7^(2)}
The diagram ${\mathbf{E}}_7^{(2)}$ has $4$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of the labelings with $1$ or $3$ or $5$ components
(including the boxed 1), excluding the fixed labeling
$\ell'_2$ (see below).
\item[2.] The fixed labeling
\[\ell'_1\ =\quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& *+[F]{1} & & 0 & & }
\]
\item[3.] The fixed labeling
\[ \ell'_2\ =\quad
\sxymatrix{ 0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& *+[F]{1} & & 1 & & }
\]
\item[4.] The orbit consisting of the labelings with $2$ or $4$ components (including the boxed 1)
excluding the fixed labeling $\ell'_1$, with representative
\[ \ell'_3\ =\quad
\sxymatrix{ 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\
& *+[F]{1} & & 1 & & }
\]
\end{enumerate}
\end{proposition}
Note that $H^1({\mathbb{R}},EVI)$ was earlier computed by Garibaldi and Semenov \cite[Example 5.1]{GS} by a different method.
\subsection{The group $EVII$ of Hermitian type with twisting diagram ${\mathbf{E}}_7^{(1)}$}
A maximal compact subgroup is of type ${\mathbf{E}}_6 T^1$.
The twisting diagram and augmented diagram are:
\begin{equation*}
\mxymatrix{ \bcb{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & \bcu{7} & & & }
\qquad
\mxymatrix{ *+[F]{1} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & & \bcu{7} & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition}\label{prop:E7^(1)}
The diagram ${\mathbf{E}}_7^{(1)}$ has $2$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of all the labelings with $1$ or $3$ components (including the boxed 1).
\item[2.] The orbit consisting of all the labelings with $2$ or $4$ components (including the boxed 1), with representative
\[ m'_3\ =\quad
\sxymatrix{ *+[F]{1} \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1
\ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 1 \\
& & & & 0 & & }
\]
\end{enumerate}
\end{proposition}
\section{Groups of type ${\mathbf{E}}_8$}
\label{sec:E8}
\subsection{The compact group of type ${\mathbf{E}}_8^{(0)}$}
The Dynkin diagram is
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] &
\bc{5} \ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bc{7} \\ & & & & \bcu{8} & & }
\end{equation*}
\begin{proposition} [Reeder \cite{Reeder}] \label{prop:E8}
The diagram ${\mathbf{E}}_8^{(0)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} which contains only $0$.
\item[2.] The orbit consisting of all the labelings with an odd number of components, with representative
\[\ell_3\ =\quad
\sxymatrix{ 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 0 \ar@{-}[r] & 0 \\
& & & & 1 & & } \ .
\]
\item[3.] The orbit consisting of all the labelings with a nonzero even number of components, with representative
\[\ell_2\ =\quad
\sxymatrix{ 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[d] \ar@{-}[r] & 1 \ar@{-}[r] & 0 \\
& & & & 1 & & } \ .
\]
\end{enumerate}
\end{proposition}
\subsection{The split group $EVIII$ with twisting diagram ${\mathbf{E}}_8^{(7)}$}
\label{subsec:EE8(7)}
A maximal compact subgroup is of type ${\mathbf{D}}_8$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\mxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] & \bc{5}
\ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bcb{7} \\ & & & & \bcu{8} & & }
\qquad
\mxymatrix{ \bc{1} \ar@{-}[r] &\bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] &
\bc{5} \ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bc{7} \ar@{-}[r] & *+[F]{1} \\ & & & &
\bcu{8} & & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E8^(7)}
The diagram ${\mathbf{E}}_8^{(7)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of the labelings with an odd number of components (including the boxed 1),
excluding the fixed labeling $\ell'_2$.
\item[2.] The fixed labeling
\[ \ell'_2\ = \quad
\sxymatrix{ 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & *+[F]{1} \\
& & & & 1 & & &}
\]
\item[3.] The orbit consisting of all the labelings with an even number of components, with representative
\[ m_3\ = \quad
\sxymatrix{ 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & *+[F]{1} \\
& & & & 0 & & &}
\]
\end{enumerate}
\end{proposition}
\subsection{The group $EIX$ with twisting diagram ${\mathbf{E}}_8^{(1)}$}
A maximal compact subgroup is of type ${\mathbf{A}}_1 {\mathbf{E}}_7$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\mxymatrix{ \bcb{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] & \bc{5}
\ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bc{7} \\ & & & & \bcu{8} & & }
\qquad
\mxymatrix{ *+[F]{1} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] &
\bc{4} \ar@{-}[r] & \bc{5} \ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bc{7} \\ & & & & &
\bcu{8} & & }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:E8^(1)}
The diagram ${\mathbf{E}}_8^{(1)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of all the labelings
with an odd number of components (including the boxed 1).
\item[2.] The fixed labeling
\[ \ell'_3\ =\quad \sxymatrix{ *+[F]{1} \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] &
1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 0 \ar@{-}[r] & 0 \\ & & & & & 1 & & } \]
\item[3.] The orbit consisting of the labelings with an even number of components,
excluding the fixed labeling $\ell'_3$, with representative
\[ m'_3 \ =\quad
\sxymatrix{ *+[F]{1} \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] & 1 \ar@{-}[r] & 0 \ar@{-}[r] \ar@{-}[d] & 1 \ar@{-}[r] & 0 \\
& & & & & 0 & & } \]
\end{enumerate}
\end{proposition}
\section{Groups of type ${\mathbf{F}}_4$}
\label{sec:F4}
\subsection{The compact group of type ${\mathbf{F}}_4^{(0)}$}
The Dynkin diagram is
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] & \bc{4} }\,.
\end{equation*}
\begin{proposition} \label{prop:F4}
The diagram ${\mathbf{F}}_4^{(0)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} which contains only \ \ $0 \!-\! 0 \!\Leftarrow\! 0 \!-\! 0$\,.
\item[2.] The orbit
\[ \left\{\ 1\!-\! 0 \!\Leftarrow\! 0\!-\! 0\, , \quad 1 \!-\! 1 \!\Leftarrow\! 0\!-\! 0\, , \quad 0 \!-\! 1 \!\Leftarrow\! 0 \!-\! 0 \ \right\} \,. \]
\item[3.] The orbit that contains the rest, with representative \ $\ell_2=\ 1 \!-\! 0 \!\Leftarrow\! 1 \!-\! 0 $\,.
\end{enumerate}
\end{proposition}
\subsection{The split group $FI$ with twisting diagram ${\mathbf{F}}_4^{(4)}$}
A maximal compact subgroup is of type ${\mathbf{C}}_3 {\mathbf{A}}_1$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] & \bcb{4} }
\qquad
\sxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] & *+[F]{1} }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:F4^(4)}
The diagram ${\mathbf{F}}_4^{(4)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} which
consists of the labelings of the form ${\boldsymbol{a}}\Leftarrow {\boldsymbol{a}}'$, where
$$
{\boldsymbol{a}}\in L(\sxymatrix{ \bc{1} \ar@{-}[r] & \bc{2} }), \quad {\boldsymbol{a}}'\in L( \sxymatrix{ \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] & *+[F]{1} }\ )\, ,
$$
and ${\boldsymbol{a}}'$ has only one component.
\item[2.] The fixed labeling $\ell'_2\ =\quad 1 \!-\! 0 \!\Leftarrow\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}} $\ .
\item[3.] The orbit
\[
\left\{\ 0 \!-\! 0 \!\Leftarrow\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\ , \quad 0 \!-\! 1 \!\Leftarrow\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\ ,
\quad 1 \!-\! 1 \!\Leftarrow\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}} \ \right\} \, .
\]
\end{enumerate}
\end{proposition}
\subsection{The group $FII$ with twisting diagram ${\mathbf{F}}_4^{(1)}$}
A maximal compact subgroup is of type ${\mathbf{B}}_4$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{ \bcb{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] & \bc{4} }
\qquad
\sxymatrix{ *+[F]{1} \ar@{-}[r] & \bc{1} \ar@{-}[r] & \bc{2} & \ar@{=>}[l] \bc{3} \ar@{-}[r] & \bc{4} }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
\begin{proposition} \label{prop:F4^(1)}
The diagram ${\mathbf{F}}_4^{(1)}$ has $3$ orbits. The orbits are:
\begin{enumerate}
\item[1.] {\bfseries The orbit of zero} consisting of
\[ \left\{\ {\sxymatrix{\boxone}}\!-\! 0 \!-\! 0 \!\Leftarrow\! 0 \!-\!0 \, , \quad {\sxymatrix{\boxone}}\!-\! 1 \!-\! 0 \!\Leftarrow\! 0 \!-\! 0 \, , \quad
{\sxymatrix{\boxone}}\!-\! 1 \!-\!1 \!\Leftarrow\!0 \!-\! 0 \ \right\} \, . \]
\item[2.] The fixed labeling \ $\ell'_1\ =\quad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!\Leftarrow\! 0 \!-\! 0 $\,.
\item[3.] The orbit that contains the rest, with representative \ $\ell'_3=\ \ {\sxymatrix{\boxone}}\!-\! 1 \!-\! 1 \!\Leftarrow\! 1 \!-\! 0 $\,.
\end{enumerate}
\end{proposition}
\section{Groups of type ${\mathbf{G}}_2$}
\label{sec:G2}
\subsection{The compact group of type ${\mathbf{G}}_2^{(0)}$}
The Dynkin diagram is
\begin{equation*}
\sxymatrix{ \bc{1} & \ar@3{->}[l] \bc{2} }\, .
\end{equation*}
The description of orbits is similar to the case ${\mathbf{A}}_2^{(0)}$, because $3 \equiv 1 \pmod{2}$.
We have $\#{\rm Orb}({\mathbf{G}}_2^{(0)})=2$.
The two orbits are
\[ \{\, 0 \!-\! 0\, \} \quad \mbox{ and } \quad \{\, 1 \!-\! 0\,,\quad 1 \!-\! 1\, , \quad 0 \!-\! 1\, \} \, . \]
\subsection{The split group with twisting diagram ${\mathbf{G}}_2^{(2)}$}
A maximal compact subgroup is of type ${\mathbf{A}}_1 {\mathbf{A}}_1$.
The twisting diagram and the augmented diagram are:
\begin{equation*}
\sxymatrix{ \bc{1} & \ar@3{->}[l] \bcb{2} }
\qquad
\sxymatrix{ \bc{1} & \ar@3{->}[l] \bc{2} \ar@{-}[r] & *+[F]{1} }
\end{equation*}
(see \cite[Table 7]{OV} and Construction \ref{const:augmented-diag}).
The description of orbits is similar to the case ${\mathbf{A}}_2^{(2)}$.
We have $\#{\rm Orb}({\mathbf{G}}_2^{(2)})=2$.
The two orbits are
\[ \{\, 0 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\ , \quad 0 \!-\! 1 \!-\! {\sxymatrix{\boxone}}\ , \quad 1 \!-\! 1 \!-\! {\sxymatrix{\boxone}}\ \} \quad
\mbox{ and } \quad \{\, 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\ \} \,. \]
\section{Connected components in real homogeneous spaces}
\label{sec:examples}
Let $G$ be a simply connected absolutely simple algebraic group over ${\mathbb{R}}$.
Let $H\subset G$ be a simply connected semisimple ${\mathbb{R}}$-subgroup.
Set $X=G/H$.
In this section we describe our method of calculation
of the number of connected components $\#\pi_0(X({\mathbb{R}}))$,
and give examples.
\subsection{Triple $(D,\tau,{\boldsymbol{t}})$}
\label{ss:triple}
Let $G$ be a simply connected absolutely simple ${\mathbb{R}}$-group.
If $G$ is an {\em outer} form of a compact group $G_0$,
we can write $G=\kern 0.8pt_{t\kern 0.8pt \tau}G_0$ as in Section \ref{sec:outer},
where $\tau$ is an automorphism of order 2 of the Dynkin diagram $D$ of $G_{\mathbb{C}}$.
The element $t\in T^{\rm ad}({\mathbb{R}})_2$ defines a coloring ${\boldsymbol{t}}$ of $D^\tau$,
and we may assume that the coloring comes from a Kac diagram.
We obtain a triple $(D,\tau,{\boldsymbol{t}})$.
If $G$ is an {\em inner} form of a compact group $G_0$,
we can write $G=\kern 0.8pt_{t}G_0$ as in Section \ref{sec:inner},
and the element $t\in T^{\rm ad}({\mathbb{R}})_2$ defines a coloring ${\boldsymbol{t}}$ of $D$.
In this case we set $\tau=1$, then again ${\boldsymbol{t}}$ is a coloring of $D^\tau$,
and again we may assume that the coloring comes from a Kac diagram.
Again we obtain a triple $(D,\tau,{\boldsymbol{t}})$.
In both cases we have the bijection \eqref{e:general-bijection}
${\rm Cl}(D,\tau,{\boldsymbol{t}})\isoto H^1({\mathbb{R}},G)$.
\subsection{Describing the connected components}
Let $H$ be a simply connected semisimple ${\mathbb{R}}$-subgroup
of a simply connected absolutely simple ${\mathbb{R}}$-group $G$.
We do not assume that $H$ is simple, nor that $H$ and $G$ are inner forms of compact groups.
Let $H=H_1\times\dots\times H_r$ be the decomposition of $H$ into the product of simple ${\mathbb{R}}$-groups.
We may and shall assume that each $H_i$ is absolutely simple.
Let $T_H$ be a fundamental torus of $H$, i.e., a maximal torus containing a maximal compact torus.
Then $T_H=\prod_i T_{i}$ where each $T_{i}\subset H_i$ is a fundamental torus of $H_i$.
We present $H_i$ as a twisted form of a compact group as in Subsection \ref{ss:triple}
and obtain a triple $(D_i,\tau_i,{\boldsymbol{t}}_i)$,
where $D_i$ is the Dynkin diagram of $H_i$, $\tau_i$ is an automorphism of $D_i$ with $\tau_i^2=1$,
and ${\boldsymbol{t}}_i$ is a coloring of $D_i^{\tau_i}$.
Then we have an isomorphism $L(D_i)^{\tau_i}\isoto T_i({\mathbb{R}})_2$.
We set $D_H=\sqcup_i D_i$ (disjoint union), $\tau_H=\prod_i \tau_i\in{\rm Aut}(D_H)$ (direct product of automorphisms),
$L(D_H)=\bigoplus_i L(D_i)$, then $L(D_H)^{\tau_H}=\bigoplus_i L(D_i)^{\tau_i}$,
and we have an isomorphism $L(D_H)^{\tau_H}\isoto T_H({\mathbb{R}})_2$.
We have a coloring ${\boldsymbol{t}}_H$ of $D_H^{\tau_H}$: a vertex $v\in D_i\subset D_H$
is black in $D_H$ if and only if it is black in $D_i$.
We write also $L(D_H,\tau_H,{\boldsymbol{t}}_H)$ for $L(D_H)$.
We define ${\rm Cl}(D_H,\tau_H,{\boldsymbol{t}}_H)$ to be $\prod_i{\rm Cl}(D_i,\tau_i,{\boldsymbol{t}}_i)$,
then we have a bijection ${\rm Cl}(D_H,\tau_H,{\boldsymbol{t}}_H)\isoto H^1({\mathbb{R}},H)$.
Using results of Sections \ref{sec:An}--\ref{sec:G2}, for each $i$
we find a set of representatives $\Xi_i\subset L(D_i,\tau_i,{\boldsymbol{t}}_i)^{\tau_i}$
of all equivalence classes in ${\rm Cl}(D_i,\tau_i,{\boldsymbol{t}}_i)$.
We set $\Xi=\prod_i\Xi_i\subset L(D_H,\tau_H,{\boldsymbol{t}}_H)^{\tau_H}$, then $\Xi$
is a set of representatives of all equivalence classes in ${\rm Cl}(D_H,\tau_H,{\boldsymbol{t}}_H)$,
i.e., the composite map $\Xi\hookrightarrow L(D_H,\tau_H,{\boldsymbol{t}}_H)^{\tau_H}\to {\rm Cl}(D_H,\tau_H,{\boldsymbol{t}}_H)$ is bijective.
Let $T_G$ be a fundamental torus of $G$.
We may and shall assume that $T_H\subset T_G$.
We present $G$ as a twisted form of a compact ${\mathbb{R}}$-group,
then we have a triple $(D_G,\tau_G,{\boldsymbol{t}}_G)$.
Using results of Sections \ref{sec:An}--\ref{sec:G2},
we compute {\em the class of zero} $[0]_G\subset L(D_G,\tau_G,{\boldsymbol{t}}_G)^{\tau_G}$.
The embedding $T_H({\mathbb{R}})_2\hookrightarrow T_G({\mathbb{R}})_2$ induces an injective homomorphism
\[\iota\colon L(D_H)^{\tau_H}\to L(D_G)^{\tau_G},\]
which can be computed explicitly.
Let $\Xi_0$ denote the preimage in $\Xi$ of $[0]_G\subset L(D_G,\tau_G,{\boldsymbol{t}}_G)^{\tau_G}$
under the map $\Xi\hookrightarrow L(D_H,\tau_H,{\boldsymbol{t}}_H)^{\tau_H}\to L(D_G,\tau_G,{\boldsymbol{t}}_G)^{\tau_G}$, see the commutative diagram:
\begin{equation*}
\xymatrix{
\Xi\ar[r] &L(D_H,\tau_H,{\boldsymbol{t}}_H)^{\tau_H}\ar[r] \ar[d]^\iota &{\rm Cl}(D_H,\tau_H,{\boldsymbol{t}}_H)\ar[r]^-\sim\ar[d] &H^1({\mathbb{R}},H) \ar[d] \\
&L(D_G, \tau_G,{\boldsymbol{t}}_G)^{\tau_G}\ar[r] &{\rm Cl}(D_G,\tau_G,{\boldsymbol{t}}_G)\ar[r]^-\sim\ &H^1({\mathbb{R}},G)
}
\end{equation*}
We see that $\Xi_0$ is in a bijection with $\ker\left[ H^1({\mathbb{R}},H)\to H^1({\mathbb{R}}, G)\right]$, and therefore,
the cardinality of $\Xi_0$ answers Questions \ref{q:2} and \ref{q:1}.
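Since all the sets involved are finite and explicitly listed, this last step is purely mechanical. The following sketch is again only an illustration, with hypothetical ingredients that must be supplied for the concrete pair $H\subset G$: labelings are encoded as integer tuples, \texttt{iota} is the map induced on labelings by $T_H({\mathbb{R}})_2\hookrightarrow T_G({\mathbb{R}})_2$, and \texttt{in\_zero\_class} is a membership test for the class of zero $[0]_G$.
\begin{verbatim}
# Count #Xi_0: the number of representatives in Xi whose image under iota
# lies in the class of zero [0]_G.  The representatives, the map iota and the
# membership test are assumed to have been computed beforehand.
from typing import Callable, Iterable, Tuple

Labeling = Tuple[int, ...]

def size_of_xi0(representatives: Iterable[Labeling],
                iota: Callable[[Labeling], Labeling],
                in_zero_class: Callable[[Labeling], bool]) -> int:
    """#Xi_0, which is in bijection with ker[H^1(R,H) -> H^1(R,G)]."""
    return sum(1 for x in representatives if in_zero_class(iota(x)))
\end{verbatim}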
\subsection{Generalities on reductive groups and Galois cohomology}
Let $G$ be a simply connected semisimple algebraic ${\mathbb{R}}$-group, $H\subset G$ be an ${\mathbb{R}}$-subgroup.
The group $G({\mathbb{R}})$ of ${\mathbb{R}}$-points acts on the left on $(G/H)({\mathbb{R}})$.
\begin{lemma}\label{lem:orbits-components}
Any orbit of $G({\mathbb{R}})$ in $(G/H)({\mathbb{R}})$
is a connected component of $(G/H)({\mathbb{R}})$.
\end{lemma}
\begin{proof}
Write $X=G/H$.
Let $x\in X({\mathbb{R}})$, then we have a map
$$
\phi_x\colon G({\mathbb{R}})\to X({\mathbb{R}}),\quad g\mapsto g\cdot x.
$$
The differential of $\phi_x$ at any point $g\in G({\mathbb{R}})$ is surjective,
hence by the implicit function theorem the map $\phi_x$ is open, hence
the orbits of $G({\mathbb{R}})$ in $X({\mathbb{R}})$ are open, hence they are open and closed.
Since $G$ is semisimple and simply connected,
by \cite[Corollary 4.7]{Borel-Tits}
or \cite[Proposition 7.6]{PR}
the group $G({\mathbb{R}})$ is connected,
hence the orbits of $G({\mathbb{R}})$ in $X({\mathbb{R}})$ are connected,
hence they are the connected components of $X({\mathbb{R}})$.
\end{proof}
\begin{lemma}\label{lem:pi1}
Let $\varphi\colon S\to T$ be a homomorphism of $k$-tori
over an algebraically closed field $k$ (of arbitrary characteristic),
and let $\varphi_*\colon {{\sf X}}_*(S)\to {{\sf X}}_*(T)$ denote the induced homomorphism of the cocharacter groups.
Then
\begin{enumerate}
\item[(i)] There is a canonical isomorphism
${\rm Hom}(\,({{\sf X}}^*(\ker\varphi))_{\rm tors},\,{\mathbb{Q}}/{\mathbb{Z}})\isoto({\rm coker\,}\varphi_*)_{\rm tors}$\,,
where by $A_{\rm tors}$ we denote the torsion subgroup of an abelian group $A$.
\item[(ii)]$\#\,({\rm coker\,}\varphi_*)_{\rm tors}=\#\, ({{\sf X}}^*(\ker\varphi))_{\rm tors}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Set $T_1={\rm im\,}\varphi$, then $T_1$ is a subtorus of $T$, and there exists a subtorus $T_2\subset T$
such that $T=T_1\times_k T_2$.
Let $\varphi_1\colon S\to T_1$ be the canonical surjective homomorphism,
then ${\rm coker\,}\varphi_*={\rm coker\,}\varphi_{1,*}\oplus{{\sf X}}_*(T_2)$, whence
\[({\rm coker\,}\varphi_*)_{\rm tors}\cong({\rm coker\,}\varphi_{1,*})_{\rm tors}\,.\]
Therefore, we may assume that $\varphi$ is surjective. Write $K=\ker\varphi$.
From the short exact sequence
$$
1\to K\to S\labelto{\varphi} T\to 1
$$
we obtain a short exact sequence
$$
0\to{{\sf X}}^*(T)\labelto{\varphi^*}{{\sf X}}^*(S)\to{{\sf X}}^*(K)\to 0,
$$
whence, by taking ${\rm Hom}(\cdot , {\mathbb{Z}})$, we obtain an exact sequence for the functor ${\rm Ext}_{\mathbb{Z}}$
(see e.g. \cite[Theorem III.3.2]{ML})
$$
{\rm Hom}({{\sf X}}^*(S),{\mathbb{Z}})\labelto{\varphi_*} {\rm Hom}({{\sf X}}^*(T),{\mathbb{Z}})\to{\rm Ext}^1_{\mathbb{Z}}({{\sf X}}^*(K),{\mathbb{Z}})\to{\rm Ext}^1_{\mathbb{Z}}({{\sf X}}^*(S),{\mathbb{Z}})=0,
$$
where the last equality follows from the fact that ${{\sf X}}^*(S)$ is a free abelian group.
We have ${\rm Hom}({{\sf X}}^*(S),{\mathbb{Z}})={{\sf X}}_*(S)$ and ${\rm Hom}({{\sf X}}^*(T),{\mathbb{Z}})={{\sf X}}_*(T)$.
For a finitely generated abelian group $A$ we have ${\rm Ext}^1_{\mathbb{Z}}(A,{\mathbb{Z}})={\rm Hom}(A_{\rm tors},{\mathbb{Q}}/{\mathbb{Z}})$
\cite[Exercise 4 in Section III.9]{ML},
whence
$$
({\rm coker\,}\varphi_*)_{\rm tors}={\rm coker\,}\varphi_*={\rm Ext}^1_{\mathbb{Z}}({{\sf X}}^*(K),{\mathbb{Z}})={\rm Hom}({{\sf X}}^*(K)_{\rm tors},{\mathbb{Q}}/{\mathbb{Z}}),
$$
which proves (i), and (ii) follows immediately.
\end{proof}
\begin{corollary}\label{cor:pi1}
Let $\varphi\colon H\to G$
be a homomorphism of reductive ${\mathbb{C}}$-groups with finite kernel.
Let $T_H\subset H$ and $T\subset G$ be maximal tori such that $\varphi(T_H)\subset T$.
Let $\varphi_*\colon {{\sf X}}_*(T_H)\to {{\sf X}}_*(T)$ denote the induced homomorphism of the cocharacter groups.
Then $\#\ker[\varphi\colon H\to G]=\#\,({\rm coker\,}\varphi_*)_{\rm tors}$\,.
\end{corollary}
\begin{proof}
Write $K=\ker\varphi=\ker[T_H\to T]$, then
$$
\#\ker[\varphi\colon H\to G]=\#K=\#{{\sf X}}^*(K)=\#\,({\rm coker\,}\varphi_*)_{\rm tors}\,,
$$
where the last equality follows from Lemma \ref{lem:pi1}(ii).
\end{proof}
\begin{construction}\label{lem:sc-subgroup}
Let $G$ be a simply connected absolutely simple ${\mathbb{R}}$-group.
Let $T\subset G$ be a fundamental torus, $R=R(G_{\mathbb{C}},T_{\mathbb{C}})$ be the root system,
$\Pi\subset R$ be a basis, $D=D(R,\Pi)$ be the Dynkin diagram.
We may and shall assume that $G=\kern 0.8pt_{t\tau}G_0$, where $G_0$ is a compact group,
$\tau\in({\rm Aut}\,D)_2$, and $t\in T^{\rm ad}({\mathbb{R}})_2$, see Section \ref{sec:outer}.
Let $\Pi_H\subset \Pi$ be a $\tau$-invariant subset, and let $R_H\subset R$ denote the subset
consisting of integer linear combinations of simple roots $\alpha\in\Pi_H$
(then $R_H$ is a root system with basis $\Pi_H$).
Let $H_1$ denote the algebraic subgroup of $G_{\mathbb{C}}$
generated by $T_{\mathbb{C}}$ and the unipotent ``root'' subgroups $U_\beta$ for all roots $\beta\in R_H$.
Let $H$ denote the derived subgroup of $H_1$.
Then by \cite[Proposition 12.6]{MT} $H$ is a semisimple group with root system $R_H$.
Since $G$ is simply connected, by \cite[Proposition 12.14]{MT} $H$ is simply connected as well.
Since the complex conjugation $\rho$ acts on $R$ by $\rho(\beta)=-\tau(\beta)$ for $\beta\in R$,
the subset $R_H$ of $R$ is $\rho$-invariant,
and hence, the subgroups $H_1$ and $H$ are defined over ${\mathbb{R}}$.
\end{construction}
\begin{lemma}\label{lem:odd}
Let
$$
1\to A\to B\labelto{\psi} C\to 1
$$
be a short exact sequence of algebraic ${\mathbb{R}}$-groups,
where $A$ is finite and central in $B$.
If the order $\# A({\mathbb{C}})$ of $A({\mathbb{C}})$ is odd, then
the induced map
$$
\psi_*\colon H^1({\mathbb{R}},B)\to H^1({\mathbb{R}},C)
$$
is bijective.
\end{lemma}
\begin{proof}
Since $A$ is central, we have a cohomology exact sequence
$$
C({\mathbb{R}})\to H^1({\mathbb{R}},A)\to H^1({\mathbb{R}},B)\labelto{\psi_*} H^1({\mathbb{R}},C)\to H^2({\mathbb{R}},A);
$$
see \cite[I.5.7, Proposition 43]{Serre}.
Since $\#{\rm Gal}({\mathbb{C}}/{\mathbb{R}})=2$ and $\# A({\mathbb{C}})$ is odd, by \cite[Section 6, Corollary 1 of Proposition 8]{AW}
we have $H^1({\mathbb{R}},A)=1$ and $H^2({\mathbb{R}},A)=1$.
It follows that the map $\psi_*$ is surjective and that $\ker\psi_*=1$.
We show that any fiber of $\psi_*$ contains only one element.
Indeed, let $\beta\in H^1({\mathbb{R}},B)$ and let $b\in Z^1({\mathbb{R}},B)$ be a cocycle representing $\beta$.
By \cite[I.5.5, Corollary 2 of Proposition 39]{Serre}, the fiber $\psi_*^{-1}(\psi_*(\beta))$
is in a bijection with the quotient of $H^1({\mathbb{R}}, A)$ by an action of the group $_b C({\mathbb{R}})$.
Since $H^1({\mathbb{R}},A)=1$, our fiber $\psi_*^{-1}(\psi_*(\beta))$ indeed contains only one element.
Thus $\psi_*$ is bijective.
\end{proof}
In Subsections \ref{ex:E7}\,--\,\ref{ex-E8}
we give examples of calculations of $\#\pi_0(\, (G/H)({\mathbb{R}})\,)$ using results of Sections \ref{sec:An}--\ref{sec:G2}.
\subsection{Example with ${\mathbf{E}}_7$}
\label{ex:E7}
Let $G=EV$, the split simply connected simple ${\mathbb{R}}$-group with compact maximal torus $T$,
of type ${\mathbf{E}}_7^{(7)}$ with twisting diagram and augmented diagram
\begin{equation*}
\mxymatrix
{ \bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4}
\ar@{-}[r] \ar@{-}[d] & \bc{5} \ar@{-}[r] & \bc{6} \\
& & & \bcbu{7} & & }
\qquad\qquad
\xymatrix
@1@R=9pt@C=9pt
{ {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c
\ar@{-}[r] \ar@{-}[d] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \\
& & & {\boldsymbol{c}}c \ar@{-}[d] & & \\
& & & *+[F]{\text{\small1}} & & }
\end{equation*}
see Subsection \ref{sec:E7(7)}.
Let $\Pi_G=\{ \alpha_1, \dots, \alpha_7\}$ be the simple roots (numbered as on the twisting diagram above).
We remove vertex $3$.
Set $\Pi_H=\Pi_G\smallsetminus \{\alpha_3\}$, and let $H$ be the corresponding semisimple ${\mathbb{R}}$-subgroup,
see Construction \ref{lem:sc-subgroup},
with maximal torus $T_H$ (contained in $T$)
and with twisting diagram of type ${\mathbf{A}}_2^{(0)}\sqcup {\mathbf{A}}_4^{(1)}$
\begin{equation*}
\xymatrix@1@R=9pt@C=9pt
{ {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c
&\ \
& {\boldsymbol{c}}c
\ar@{-}[r] \ar@{-}[d] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \\
& & &{\lower0.20ex\hbox{\text{\Large$\bullet$}}} & & }
\end{equation*}
and augmented diagram:
\begin{equation*}
\xymatrix@1@R=9pt@C=9pt
{ {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c
&\ \
& {\boldsymbol{c}}c
\ar@{-}[r] \ar@{-}[d] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \\
& & & {\boldsymbol{c}}c \ar@{-}[d] & & \\
& & & *+[F]{\text{\small1}} & & }
\end{equation*}
Then the semisimple group $H$ is simply connected, see Construction \ref{lem:sc-subgroup}.
We have $H=H_1\times H_2$, where $H_1$ is a compact group of type
${\mathbf{A}}_2^{(0)}$ and $H_2$ is a twisted (noncompact) group of type ${\mathbf{A}}_4^{(1)}$. By Subsection \ref{sect:An},
for $H_1$ we have $\# {\rm Orb}({\mathbf{A}}_2^{(0)})=2$ with a set of representatives
$$
{{\sf X}}i_1=\ \{\ 0 \!-\! 0,\quad 1 \!-\! 0\ \}.
$$
By Subsection \ref{subsec:An^m}, for $H_2$ we have $\# {\rm Orb}({\mathbf{A}}_4^{(1)})=3$ with a set of representatives
$$
{{\sf X}}i_2=\ \{\ {\sxymatrix{\boxone}} \!-\! 0 \!-\! 0 \!-\! 0 \!-\! 0,\quad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0, \quad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1\ \}.
$$
We set ${{\sf X}}i={{\sf X}}i_1\times{{\sf X}}i_2\subset L({\mathbf{A}}_2^{(0)}\sqcup{\mathbf{A}}_4^{(1)})=L({\mathbf{A}}_2^{(0)})\times L({\mathbf{A}}_4^{(1)})$,
hence $\#{{\sf X}}i=2\cdot 3=6$.
We write down ${{\sf X}}i$:
\begin{align*}
& 0 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 0 \!-\! 0 \!-\! 0\\
& 0 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0\\
& 0 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1\\
& 1 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 0 \!-\! 0 \!-\! 0\\
& 1 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0\\
& 1 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1
\end{align*}
We must compute the subset ${{\sf X}}i_0$ of ${{\sf X}}i$ consisting of the labelings
whose images in $L({\mathbf{E}}_7^{(7)})$ are contained in the orbit of zero $[0]$.
The homomorphism $L({\mathbf{A}}_2^{(0)}\sqcup{\mathbf{A}}_4^{(1)})\to L({\mathbf{E}}_7^{(7)})$ is induced
by the embedding ${\mathbf{A}}_2^{(0)}\sqcup{\mathbf{A}}_4^{(1)}\hookrightarrow {\mathbf{E}}_7^{(7)}$.
By Subsection \ref{sec:E7(7)} the labelings of ${\mathbf{E}}_7^{(7)}$ in the orbit of zero are those
with 1 or 3 components (including the boxed 1).
Thus ${{\sf X}}i_0$ consists of the following labelings of ${\mathbf{A}}_2^{(0)}\times {\mathbf{A}}_4^{(1)}$:
\begin{align*}
& 0 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 0 \!-\! 0 \!-\! 0\\
& 0 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 1\\
& 1 \!-\! 0\qquad {\sxymatrix{\boxone}}\!-\! 0 \!-\! 1 \!-\! 0 \!-\! 0
\end{align*}
We conclude that
$$\# \pi_0(\,(G/H)({\mathbb{R}})\,)\ =\ \#\ker \left[ H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G) \right] =\#{{\sf X}}i_0=3.$$
Similar calculations show that if we remove vertex 2 instead of vertex 3, then $\# \pi_0(\,(G/H)({\mathbb{R}})\,)=2$,
and if we remove vertex 1 instead of vertex 3, then $\# \pi_0(\,(G/H)({\mathbb{R}})\,)=1$,
i.e., $(G/H)({\mathbb{R}})$ is connected.
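
The selection of ${{\sf X}}i_0$ in the case of removing vertex 3 is easy to reproduce by a short enumeration.
The following minimal sketch (in Python) encodes each representative simply by its number of components
(including the boxed 1 for the ${\mathbf{A}}_4^{(1)}$ factor) and keeps the pairs whose total number of components is 1 or 3;
it is only a check of the count, not of the underlying Galois-cohomological computation.
\begin{verbatim}
# Representatives encoded by their number of components.
comps_A2  = [0, 1]        # Xi_1: the labelings 0-0 and 1-0
comps_A41 = [1, 2, 3]     # Xi_2: the three labelings of A_4^(1), boxed 1 included

# Keep pairs whose image lies in the orbit of zero of E_7^(7),
# i.e. those with 1 or 3 components in total.
kept = [(a, b) for a in comps_A2 for b in comps_A41 if a + b in (1, 3)]
print(len(kept))          # prints 3
\end{verbatim}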
\subsection{Examples with ${\bf{Sp}}in^*(2n)$}
Let $G={\bf{Sp}}in^*(2n)\ (n\ge 4)$, the simply connected ``quaternionic'' ${\mathbb{R}}$-group of type ${\mathbf{D}}_n^{(n)}$
with twisting diagram and augmented diagram
\begin{equation*}
\xymatrix@1@R=0pt@C=9pt
{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-3} \ar@{-}[r] & \bc{n-2} \ar@{-}[d]
\ar@{-}[r] & \bc{n-1} \\
& & & \bcbu{n} & }
\qquad\qquad
\sxymatrix{ {\boldsymbol{c}}c \ar@{-}[r] & \cdots \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[d]
\ar@{-}[r] & {\boldsymbol{c}}c \\
& & & {\boldsymbol{c}}c \ar@{-}[d] & \\
& & & *+[F]{1} & }
\end{equation*}
see Subsection \ref{subsec:Dn^(n)}.
Let $\Pi_G=\{ \alpha_1, \dots, \alpha_n\}$ be the simple roots (numbered as on the twisting diagram above).
We remove vertex $n-1$.
Set $\Pi_H=\Pi_G\smallsetminus \{\alpha_{n-1}\}$,
and let $H$ be the corresponding semisimple ${\mathbb{R}}$-subgroup, see Construction \ref{lem:sc-subgroup},
with twisting diagram of type ${\mathbf{A}}_{n-1}^{(1)}$
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-2} \ar@{-}[r] &\bcb{n}}
\end{equation*}
and augmented diagram
\begin{equation*}
\sxymatrix{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] & \bc{n-2} \ar@{-}[r] & \bc{n} \ar@{-}[r] & *+[F]{1} }\ .
\end{equation*}
Then the semisimple ${\mathbb{R}}$-subgroup $H$ is simply connected, see Construction \ref{lem:sc-subgroup}.
By Subsection \ref{subsec:An^m}
we can take for representatives of orbits in $L({\mathbf{A}}_{n-1}^{(1)})$ the set
\[ {{\sf X}}i=\left\{\eta_i\ |\ 1\le i\le\left\lceil n/2\right\rceil\right\},\]
where $\eta_i$ denotes the labeling with $i$ components (including the boxed 1) maximally packed to the right.
By Subsection \ref{sec:Dn(n)},
the orbit of zero in $L({\mathbf{D}}_n^{(n)})$ is the set of labelings
with an {\em odd} number of components (including the boxed 1).
Thus
\[ {{\sf X}}i_0=\left\{\eta_i\ |\ 1\le i\le\left\lceil n/2\right\rceil,\ i \text{ is odd}\right\}.\]
We see that $\#{{\sf X}}i_0$ is the number of odd numbers $i$ between 1 and $\lceil n/2\rceil$, i.e., $\#{{\sf X}}i_0=\lceil n/4\rceil$.
We conclude that
$$\# \pi_0(\,(G/H)({\mathbb{R}})\,)=\#\ker[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)]=\#{{\sf X}}i_0= \left\lceil n/4\right\rceil.$$
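
As a quick consistency check of this count, one can enumerate the representatives $\eta_i$ directly;
a minimal sketch (in Python):
\begin{verbatim}
from math import ceil

for n in range(4, 41):
    # eta_i has i components, 1 <= i <= ceil(n/2); keep the odd i.
    count = sum(1 for i in range(1, ceil(n / 2) + 1) if i % 2 == 1)
    assert count == ceil(n / 4), (n, count)
print("number of odd i in 1..ceil(n/2) equals ceil(n/4) for n = 4,...,40")
\end{verbatim}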
Now, instead of removing vertex $n-1$, let us remove vertex $m$ with $1\le m\le n-2$:
\begin{equation*}
\xymatrix@1@R=0pt@C=9pt
{ \bc{1} \ar@{-}[r] & \cdots \ar@{-}[r] &\bc{m-1} & &\bc{m+1} \ar@{-}[r] &\cdots \ar@{-}[r] & \bc{n-3} \ar@{-}[r] & \bc{n-2} \ar@{-}[d]
\ar@{-}[r] & \bc{n-1} \\
& & &&&&& \bcbu{n} & &}
\end{equation*}
We obtain a subgroup $H=H_1\times H_2$, where $H_1$ is of type ${\mathbf{A}}_{m-1}^{(0)}$ (where $m-1=0$ is possible)
and $H_2$ is of type ${\mathbf{D}}_{n-m}^{(n-m)}$ (where $n-m=2$ is possible).
On the left of the removed vertex we can take
\[{{\sf X}}i_1=\{\xi_k\ |\ 0\le k\le \lceil (m-1)/2\rceil\} \]
for representatives of orbits in $L({\mathbf{A}}_{m-1}^{(0)})$, see Subsection \ref{subsec:An}.
On the right of the removed vertex we can take
\[{{\sf X}}i_2=\{\ell_1,\ell_2\} \]
for representatives of orbits in $L({\mathbf{D}}_{n-m}^{(n-m)})$
(where the labeling $\ell_1$ has one component and $\ell_2$ has two components, including the boxed 1),
see Subsection \ref{subsec:Dn^(n)}.
By Subsection \ref{subsec:Dn^(n)} applied to $G$, the orbit of zero in $L({\mathbf{D}}_n^{(n)})$ is the set of labelings
with an {\em odd} number of components (including the boxed 1).
Now with any $\xi_k\in{{\sf X}}i_1$ we associate the pair $(\xi_k,\ell)\in {{\sf X}}i_1\times{{\sf X}}i_2$,
where $\ell$ is either $\ell_1$ or $\ell_2$
such that the total number of components in $\xi_k$ and $\ell$ is odd.
We obtain a bijection ${{\sf X}}i_1\isoto{{\sf X}}i_0$.
Thus in this case
\begin{align*}
\# \pi_0(\,(G/H)({\mathbb{R}})\,)=\ &\#\ker[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)]\\
=\ &\#{{\sf X}}i_0=\, \#{{\sf X}}i_1=\lceil (m-1)/2\rceil+1=\lceil (m+1)/2\rceil.
\end{align*}
In particular, if $m=n-2$, we obtain $\# \pi_0(\,(G/H)({\mathbb{R}})\,)=\lceil (n-1)/2\rceil$.
\subsection{Example with ${\bf{Sp}}in(2m+1,2n+1)$}
Let $G={\bf{Sp}}in(2m+1,2n+1)\ (m\ge 2,\ n\ge 3)$, which is an outer form of a compact group.
The Kac diagram of $G$ is
\[
\sxymatrix{\bc{0} &\bc{1}\ar@{=>}[l] \ar@{-}[r] & \cdots & \ar@{-}[l] \bcb{m} \ar@{-}[r] &\cdots & \ar@{-}[l] \bc{\ell-1} \ar@{=>}[r] & \bc{\ell} }\, ,
\]
see Subsection \ref{ssec:Dn-outer},
where $\ell=m+n$; see also \cite[Table 7]{OV}.
We write $G=\kern 0.8pt_{t\tau}G_0$ as in Construction \ref{lem:sc-subgroup}.
We remove the $\tau$-stable vertex $m+n-k$ $(2\le k<n)$ of the Dynkin diagram
and denote the obtained semisimple ${\mathbb{C}}$-subgroup by $H$, then
by Construction \ref{lem:sc-subgroup} the subgroup $H$ is simply connected and defined over ${\mathbb{R}}$,
and we have $H={\bf SU}(m,n-k)\times{\bf{Sp}}in(2k+2)$.
We are interested in $\pi_0(\, (G/H)({\mathbb{R}})\,)$.
By Theorem \ref{cor:Theorem-3-Bo} applied to $G$ and $H$ we have a bijection
$\pi_0(\, (G'/H')({\mathbb{R}})\,)\isoto\pi_0(\, (G/H)({\mathbb{R}})\,)$,
where $G'={\bf SU}(m,n)$ of type ${\mathbf{A}}_{m+n-1}^{(m)}$ and $H'=H_1\times H_2$
with $H_1={\bf SU}(m,n-k)$ of type ${\mathbf{A}}_{m+n-k-1}^{(m)}$ and $H_2={\bf SU}(k)$ of type ${\mathbf{A}}_{k-1}^{(0)}$.
Although one could probably compute $\#\pi_0(\, (G'/H')({\mathbb{R}})\,)$ using real algebraic geometry,
we compute this number using Galois cohomology.
Namely, for $H_1$ of type ${\mathbf{A}}_{m+n-k-1}^{(m)}$ we can take
\[ {{\sf X}}i_1=\{(p|0)\ |\ 0\le p\le \lceil (m-1)/2\rceil\}\ \cup\
\{(0|q)\ |\ 1\le q\le \lceil (n-k-1)/2\rceil\} \]
for representatives of orbits in $L({\mathbf{A}}_{m+n-k-1}^{(m)})$, see Subsection \ref{subsec:An^m}.
For $H_2$ of type ${\mathbf{A}}_{k-1}^{(0)}$ we can take
\[{{\sf X}}i_2=\{\xi_i\ |\ 0\le i\le \lceil (k-1)/2\rceil\}\]
for representatives of orbits in $L({\mathbf{A}}_{k-1}^{(0)})$, see Subsection \ref{subsec:An}.
For $G'$, the orbit of zero in $L({\mathbf{A}}_{m+n-1}^{(m)})$
is the set of labelings with the same number of components on the left and on the right of $m$,
see Subsection \ref{subsec:An^m}.
Thus
\[ {{\sf X}}i_0=\{(\kern 0.8pt(p|0),\xi_p\kern 0.8pt)\in{{\sf X}}i_1\times{{\sf X}}i_2\}.\]
Here $0\le p\le \lceil (m-1)/2\rceil$ and $0\le p\le \lceil (k-1)/2\rceil$, hence
\[\#{{\sf X}}i_0=1+\min(\lceil (m-1)/2\rceil,\lceil (k-1)/2\rceil)=\min(\lceil (m+1)/2\rceil,\lceil (k+1)/2\rceil).\]
We conclude that
\[\#\pi_0(\, (G/H)({\mathbb{R}})\,)=\#\pi_0(\, (G'/H')({\mathbb{R}})\,)=\#{{\sf X}}i_0= \min(\lceil (m+1)/2\rceil,\lceil (k+1)/2\rceil).\]
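
The simplification $1+\min(\lceil (m-1)/2\rceil,\lceil (k-1)/2\rceil)=\min(\lceil (m+1)/2\rceil,\lceil (k+1)/2\rceil)$
used in the last two displays can be verified directly; a minimal sketch (in Python):
\begin{verbatim}
from math import ceil

for m in range(2, 30):
    for k in range(2, 30):
        lhs = 1 + min(ceil((m - 1) / 2), ceil((k - 1) / 2))
        rhs = min(ceil((m + 1) / 2), ceil((k + 1) / 2))
        assert lhs == rhs, (m, k, lhs, rhs)
print("identity verified on the tested range")
\end{verbatim}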
\subsection{Example with ${\mathbf{E}}_8$}
\label{ex-E8}
Let $G=EVIII$, the split form ${\mathbf{E}}_8^{(7)}$ of ${\mathbf{E}}_8$ with compact maximal torus $T$,
with Kac diagram and augmented diagram
\begin{equation*}
\xymatrix@1@R=0pt@C=9pt
{\bc{0} \ar@{-}[r] &\bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} \ar@{-}[r] & \bc{4} \ar@{-}[r] &
\bc{5} \ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bcb{7} \\ & & & & & \bcu{8} & & }
\qquad\qquad
\xymatrix@1@R=9pt@C=9pt
{ {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] &
{\boldsymbol{c}}c \ar@{-}[r] \ar@{-}[d] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & *+[F]{1} \\ & & & &
{\boldsymbol{c}}c & & & }
\end{equation*}
see Subsection \ref{subsec:EE8(7)}.
In this example, in contrast to the two previous examples, we construct an ${\mathbb{R}}$-subgroup $H$ of $G$
of the same rank, and not of smaller rank.
We remove vertex 4 from the Kac diagram (the extended Dynkin diagram), and we do not erase vertex 0.
This means that we consider the semisimple ${\mathbb{C}}$-subgroup $H$ of $G$,
generated by $T_{\mathbb{C}}$ and the unipotent ``root'' subgroups $U_\beta$ with $\beta\in R_H$,
where $R_H$ is the set of $\beta\in R$ that are integer linear combinations of the roots
$\alpha_i$, $0\le i\le 8,\ i\neq 4$, where $\alpha_1,\dots,\alpha_8$
are the simple roots and $\alpha_0$ is the lowest root.
Since $-R_H=R_H$, the ${\mathbb{C}}$-subgroup $H$ is defined over ${\mathbb{R}}$.
We obtain a maximal connected algebraic subgroup $H$ of $G$ \cite[Table 5]{OV2}
with twisting diagram
\begin{equation*}
\xymatrix@1@R=0pt@C=9pt
{\bc{0} \ar@{-}[r] &\bc{1} \ar@{-}[r] & \bc{2} \ar@{-}[r] & \bc{3} & &
\bc{5} \ar@{-}[r] \ar@{-}[d] & \bc{6} \ar@{-}[r] & \bcb{7} \\ & & & & & \bcu{8} & & }
\end{equation*}
and augmented diagram
\begin{equation} \label{eq:Htil}
\xymatrix@1@R=9pt@C=9pt
{ {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c & &
{\boldsymbol{c}}c \ar@{-}[r] \ar@{-}[d] & {\boldsymbol{c}}c \ar@{-}[r] & {\boldsymbol{c}}c \ar@{-}[r] & *+[F]{1}
\\ & & & & & {\boldsymbol{c}}c & & & }
\end{equation}
We compute the fundamental group $\pi_1(H_{\mathbb{C}})$ of the semisimple group $H$.
Let ${\widetilde{H}}$ denote the universal covering of $H$.
Consider the composite morphism
$$
\varphi\colon {\widetilde{H}}\to H\to G,
$$
and let ${\widetilde{T}}_H$ denote the maximal torus of ${\widetilde{H}}$ such that $\varphi({\widetilde{T}}_H)=T$.
We denote by $\varphi_*\colon{{\sf X}}_*({\widetilde{T}}_H)\to{{\sf X}}_*(T)$ the induced homomorphism of the cocharacter groups.
The cocharacter group ${{\sf X}}_*({\widetilde{T}}_H)$ has a basis
\begin{equation}\label{eq:basis}
\alpha_0^\vee,\ \alpha_1^\vee,\ \alpha_2^\vee,\ \alpha_3^\vee,\ \widehat{\alpha_4^\vee},
\ \alpha_5^\vee,\ \alpha_6^\vee,\ \alpha_7^\vee,\ \alpha_8^\vee,
\end{equation}
where $\widehat{\alpha_4^\vee}$ means that $\alpha_4^\vee$ is removed from the list.
The cocharacter group ${{\sf X}}_*(T)$ has a basis $\alpha_1^\vee, \dots, \alpha_8^\vee$,
while the subgroup ${\rm im\,}\varphi_*\subset{{\sf X}}_*(T)$
is generated by the cocharacters \eqref{eq:basis}.
There is a linear relation between $\alpha^\vee_0,\ \alpha^\vee_1,\dots,\alpha^\vee_8$,
in which the removed simple coroot $\alpha^\vee_4$ appears with coefficient 5,
while $\alpha^\vee_0$ appears with coefficient 1;
see \cite[Table 6]{OV} or \cite[Planche VII, (IV)]{Bourbaki}.
We see that ${\rm im\,}\varphi_*\subset {{\sf X}}_*(T)$ contains $\alpha_i^\vee$ for $i\neq 4$,
and it contains $5\alpha_4^\vee$, but not $\alpha_4^\vee$.
Thus ${\rm im\,}\varphi_*$ is a subgroup of index 5 in ${{\sf X}}_*(T)$.
By Corollary \ref{cor:pi1} the kernel of the canonical epimorphism ${\widetilde{H}}\to H$ is of order 5,
hence $\pi_1(H_{\mathbb{C}})$ is of order 5.
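
This index computation can also be organized as a determinant: in the basis $\alpha_1^\vee,\dots,\alpha_8^\vee$ of ${{\sf X}}_*(T)$,
the subgroup ${\rm im\,}\varphi_*$ is generated by $\alpha_0^\vee$ and the $\alpha_i^\vee$ with $i\neq 4$,
and its index is the absolute value of the determinant of the matrix formed by these generators.
A minimal sketch (in Python, using {\tt sympy}), with the coefficients of the linear relation kept symbolic
and normalized so that $\alpha_0^\vee$ appears with coefficient 1:
\begin{verbatim}
import sympy as sp

c = sp.symbols('c1:9')                 # alpha_0^vee = -(c1*alpha_1^vee + ... + c8*alpha_8^vee)
rows = [[-ci for ci in c]]             # coordinates of alpha_0^vee
rows += [[1 if j == i else 0 for j in range(8)]
         for i in range(8) if i != 3]  # alpha_i^vee for i != 4 (0-based index 3)
index = sp.Abs(sp.Matrix(rows).det())
print(index)                           # prints Abs(c4); with c4 = 5 the index is 5
\end{verbatim}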
Since the order 5 of $\ker\varphi$ is odd, by Lemma \ref{lem:odd}
the induced map $H^1({\mathbb{R}},{\widetilde{H}})\to H^1({\mathbb{R}},H)$ is bijective,
whence
$$
\#\ker[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)]\ =\ \#\ker[H^1({\mathbb{R}},{\widetilde{H}})\to H^1({\mathbb{R}},G)].
$$
We compute $\#\ker[H^1({\mathbb{R}},{\widetilde{H}})\to H^1({\mathbb{R}},G)]$.
We have ${\widetilde{H}}={\widetilde{H}}_1\times {\widetilde{H}}_2$, where ${\widetilde{H}}_1$ is compact of type ${\mathbf{A}}_4^{(0)}$
and ${\widetilde{H}}_2$ is of type ${\mathbf{A}}_4^{(4)}$.
By Subsection \ref{sect:An} we can take
$$
{{\sf X}}i_1=\ \{\ 0 \!-\! 0 \!-\! 0 \!-\! 0, \quad 1 \!-\! 0 \!-\! 0 \!-\! 0, \quad 1 \!-\! 0 \!-\! 1 \!-\! 0\ \}
$$
as a set of representatives of orbits in $L({\mathbf{A}}_4^{(0)})$.
By Subsection \ref{subsec:An^m} we can take
$$
{{\sf X}}i_2=\ \{\ 0 \!-\! 0 \!-\! 0 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\,, \quad 0 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\,, \quad 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\ \}
$$
as a set of representatives of orbits in $L({\mathbf{A}}_4^{(4)})$.
Set ${{\sf X}}i={{\sf X}}i_1\times{{\sf X}}i_2$.
We denote by ${{\sf X}}i_0$ the preimage in ${{\sf X}}i$ of the orbit of zero in $L({\mathbf{E}}_8^{(7)})$.
By Subsection \ref{subsec:EE8(7)} the orbit of zero $[0]\subset L({\mathbf{E}}_8^{(7)})$
consists of the labelings with an odd number of components (including the boxed 1),
excluding the fixed labeling $\ell'_2$.
The subset of ${{\sf X}}i$ consisting of labelings with an odd number of components contains the following 5 labelings:
\begin{align*}
& 0 \!-\! 0 \!-\! 0 \!-\! 0 \qquad 0 \!-\! 0 \!-\! 0 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\\
& 0 \!-\! 0 \!-\! 0 \!-\! 0 \qquad 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\\
& 0 \!-\! 0 \!-\! 0 \!-\! 1 \qquad 0 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\\
& 0 \!-\! 1 \!-\! 0 \!-\! 1 \qquad 0 \!-\! 0 \!-\! 0 \!-\! 0 \!-\! {\sxymatrix{\boxone}}\\
& 0 \!-\! 1 \!-\! 0 \!-\! 1 \qquad 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}
\end{align*}
and one of them (namely $0 \!-\! 0 \!-\! 0 \!-\! 0 \quad 1 \!-\! 0 \!-\! 1 \!-\! 0 \!-\! {\sxymatrix{\boxone}}$) is the preimage of $\ell'_2$.
We see that $\#{{\sf X}}i_0 = 4$.
We conclude that
\begin{align*}
\# \pi_0(\,(G/H)({\mathbb{R}})\,)\ =\ &\#\ker[H^1({\mathbb{R}},H)\to H^1({\mathbb{R}},G)]\\
=\ &\#\ker[H^1({\mathbb{R}},{\widetilde{H}})\to H^1({\mathbb{R}},G)]=\#{{\sf X}}i_0=4.
\end{align*}
\noindent{\sc Acknowledgements.}
The authors are very grateful to Dmitry A.~Timashev for his help in proving Lemma \ref{prop:twisted}
and also for reading Subsection \ref{ss:Kac-outer} and correcting an inaccuracy.
We thank the anonymous referee for carefully reading the paper and for his/her comments, which helped to improve the exposition.
We note that Erwann Rozier computed in 2009 the cardinalities $\#{\rm Orb}(\kern 0.8pt_{\boldsymbol{t}} D)$ for some colored graphs $\kern 0.8pt_{\boldsymbol{t}} D$
(in particular, for all Dynkin diagrams and twisting diagrams) under the guidance of the first-named author.
\end{document}
\begin{document}
\newcommand{\removableFootnote}[1]{}
\newtheorem{theorem}{Theorem}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{lemma}[theorem]{Lemma}
\title{Stochastic Perturbations of Periodic Orbits with Sliding.}
\author{
D.J.W.~Simpson$^{\dagger}$ and R.~Kuske$^{\ddagger}$\\\\
$^{\dagger}$Institute of Fundamental Sciences\\
Massey University\\
Palmerston North\\
New Zealand\\\\
$^{\ddagger}$Department of Mathematics\\
University of British Columbia\\
Vancouver, BC\\
Canada
}
\maketitle
\begin{abstract}
Vector fields that are discontinuous on codimension-one surfaces are known as Filippov systems
and can have attracting periodic orbits involving segments
that are contained on a discontinuity surface of the vector field.
In this paper we consider the addition of small noise to a general Filippov system
and study the resulting stochastic dynamics near such a periodic orbit.
Since a straight-forward asymptotic expansion in terms of the noise amplitude is
not possible due to the presence of discontinuity surfaces,
in order to quantitatively determine the basic statistical properties of the dynamics,
we treat different parts of the periodic orbit separately.
Dynamics distant from discontinuity surfaces are analyzed using a series expansion of the transitional probability density function.
Stochastically perturbed sliding motion is analyzed through stochastic averaging methods.
The influence of noise on points at which the periodic orbit escapes a
discontinuity surface is determined by zooming into the transition point.
We combine the results to quantitatively determine the effect of
noise on the oscillation time for a three-dimensional canonical model of relay control.
For some parameter values of this model,
small noise induces a significant reduction in the average oscillation time.
By interpreting our results geometrically, we are able to identify four features
of the relay control system that contribute to this phenomenon.
\end{abstract}
\section{Introduction}
\label{sec:INTRO}
\setcounter{equation}{0}
Filippov systems are vector fields with codimension-one surfaces, termed switching manifolds,
on which the vector field is discontinuous.
Subsets of switching manifolds at which
the vector field on either side of the manifold points towards the manifold
are known as stable sliding regions.
Whenever a trajectory of the system arrives at a stable sliding region,
future evolution is constrained to the switching manifold until it exits the sliding region.
This evolution is known as sliding motion \cite{DiBu08,Fi88}.
In Filippov models of stick-slip oscillators,
sliding motion corresponds to the sticking phase of the dynamics \cite{OeHi96},
and for relay control, sliding motion models extremely rapid switching \cite{DiBu08,Jo03,ZhMo03}.
So-called sliding-mode controllers specifically utilize sliding motion
to achieve superior control objectives \cite{TaLa12,Su06}.
Stable periodic orbits that involve segments of sliding motion arise in models of
stick-slip oscillators \cite{LuGe06,SzOs08},
relay control \cite{DiJo01,JoRa99,JoBa02,ZhFe10},
and population dynamics \cite{DeGr07,AmOl13,TaLi12}.
The purpose of the present paper is to quantitatively determine
the effects of noise on such periodic orbits.
\begin{figure}
\caption{
The attracting periodic orbit $\Gamma$ of (\ref{eq:relayControlSystem}) with (\ref{eq:ABCDvalues3d})--(\ref{eq:paramValues}); it has two sliding segments.
\label{fig:ppSketch}}
\end{figure}
Throughout this paper, we use a canonical relay control model, given in \cite{DiBu08,Jo03,ZhMo03}, as an example.
The general model equations are
\begin{equation}
\begin{split}
\dot{{\bf X}} &= A{\bf X} + B\nu \;, \\
\varphi &= C^{\sf T} {\bf X} \;, \\
\nu &= -{\rm sgn}(\varphi) \;,
\end{split}
\label{eq:relayControlSystem}
\end{equation}
where ${\bf X} \in \mathbb{R}^N$ represents the state of the system,
$\varphi$ is the control measurement,
and $\nu$ is the control response.
We consider the following three-dimensional example
($N=3$) of (\ref{eq:relayControlSystem}), given in \cite{DiBu08,DiJo01},
\begin{equation}
A = \left[ \begin{array}{ccc}
-2 \zeta \omega - \lambda & 1 & 0 \\
-2 \zeta \omega \lambda - \omega^2 & 0 & 1 \\
-\lambda \omega^2 & 0 & 0
\end{array} \right] \;, \qquad
B = \left[ \begin{array}{c} 1 \\ -2 \\ 1 \end{array} \right] \;, \qquad
C = \left[ \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right] \;,
\label{eq:ABCDvalues3d}
\end{equation}
with parameter values,
\begin{equation}
\zeta = 0.5 \;, \qquad \lambda = 0.05 \;, \qquad \omega = 5 \;.
\label{eq:paramValues}
\end{equation}
The system (\ref{eq:relayControlSystem})-(\ref{eq:paramValues})
has an attracting symmetric periodic orbit, call it $\Gamma$, with two sliding segments, Fig.~\ref{fig:ppSketch}\removableFootnote{
When I export a figure that uses fill3 and has axes, to eps, the image is blurry.
}.
The parameter values (\ref{eq:paramValues}) are typical in the sense that
(\ref{eq:relayControlSystem}) with (\ref{eq:ABCDvalues3d}) exhibits
an attracting periodic orbit with one or more sliding segments over a relatively large range of parameter values \cite{DiJo01}.
From the viewpoint of control,
it is important to understand the robustness of (\ref{eq:relayControlSystem})
to random fluctuations, parameter uncertainty, and unmodelled nonlinear dynamics.
Conditions ensuring the robustness of equilibria of
general hybrid control systems under various assumptions have been established \cite{RaMi10,FeZh06,ChLi04,SkEv99}.
In \cite{DiJo02}, the robustness of attracting periodic orbits of (\ref{eq:relayControlSystem}) is investigated numerically
by altering the switching condition in different ways, such as by incorporating time delay.
The authors conclude that periodic orbits with sliding appear to be less robust
than periodic orbits that only have transversal intersections with the switching manifold.
In \cite{TaOs09}, a model of anti-lock brakes is shown to exhibit attracting periodic orbits with sliding and the
robustness of the periodic orbits is correlated with the size of their basins of attraction.
In addition, unlike attracting periodic orbits of smooth systems,
attracting periodic orbits with sliding segments may be destroyed by stable singular perturbations \cite{SiKo10}\removableFootnote{
Would be nice to include a short paragraph on stochastically perturbed periodic orbits in smooth systems.
However, I actually don't know of any paper that studies this specifically.
One can infer basic results from Theorem 2.3 of Chapter 2 of \cite{FrWe12},
and similar descriptions in \cite{Sc10,GrVa99}.
For coloured noise a stochastic Poincar\'{e} map is derived in \cite{WeKn90}.
Large deviations from a periodic orbit in a smooth system are described in \cite{HiMe13}.
}.
Randomness or uncertainty enters into relay control systems in various ways,
such as via the input and output of the controlling component
or through the action of circuit elements,
and is present in modelling by means of parameter uncertainty
and modelling approximations \cite{Ts84,FrPo02,DoBi01,AsMu08}.
For simplicity,
we incorporate randomness in (\ref{eq:relayControlSystem})
by adding white Gaussian noise to the control response.
Specifically, the stochastic model is
\begin{equation}
d{\bf X}(t) = \left( A {\bf X}(t) - B \,{\rm sgn} \left( C^{\sf T} {\bf X}(t) \right) \right) \, dt +
\sqrt{\varepsilon} B \, dW(t) \;,
\label{eq:relayControlSystem2}
\end{equation}
where $W(t)$ is standard Brownian motion and $0 < \varepsilon \ll 1$.
Let us consider a sample solution to (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues})
from an arbitrary initial point.
Once sufficient time has passed to allow the solution to become close to $\Gamma$,
with high probability the solution follows a random path near $\Gamma$ for a long period of time.
Throughout this paper we ignore transient dynamics.
We define an {\em oscillation time} of a sample solution to (\ref{eq:relayControlSystem2})
as the difference between successive times at which the solution returns to the switching manifold
after a large excursion with $X_1 > 0$.
The oscillation time represents a stochastic analogue of the period of $\Gamma$.
To investigate the effect of the noise, for a handful of fixed values of $\varepsilon$ we numerically
solved (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues})
over a long time frame
and recorded the oscillation times, $t_{\rm osc}$.
For all Monte-Carlo simulations in this paper we used the Euler-Maruyama method with a fixed step size.
We found that different step sizes produced essentially the same results.
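
A minimal version of this experiment is sketched below (in Python).
The step size, simulation length, initial condition and the threshold used to recognize a ``large excursion'' with $X_1>0$
are illustrative choices made for the purpose of the sketch, not the values used to produce Fig.~\ref{fig:manyPeriod}.
\begin{verbatim}
import numpy as np

zeta, lam, omega = 0.5, 0.05, 5.0                      # parameter values of the model
A = np.array([[-2*zeta*omega - lam,           1.0, 0.0],
              [-2*zeta*omega*lam - omega**2,  0.0, 1.0],
              [-lam*omega**2,                 0.0, 0.0]])
B = np.array([1.0, -2.0, 1.0])

eps, dt, nstep = 1e-4, 1e-4, 5_000_000                 # illustrative choices
thr = 0.05                                             # "large excursion" threshold (illustrative)
rng = np.random.default_rng(1)

X, t = np.array([0.1, 0.0, 0.0]), 0.0
osc_times, t_prev, x1max = [], None, 0.0
for _ in range(nstep):
    drift = A @ X - B*np.sign(X[0])                    # Filippov drift with nu = -sgn(X_1)
    Xn = X + drift*dt + np.sqrt(eps*dt)*B*rng.standard_normal()
    t += dt
    x1max = max(x1max, Xn[0])
    if X[0] > 0.0 >= Xn[0] and x1max > thr:            # return after a large excursion
        if t_prev is not None:
            osc_times.append(t - t_prev)
        t_prev, x1max = t, 0.0
    X = Xn

osc = np.array(osc_times[5:])                          # crudely discard transients
if osc.size:
    print("mean oscillation time:", osc.mean(), "  std:", osc.std())
\end{verbatim}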
We let $t_{{\rm osc},\Gamma}$ denote the period of $\Gamma$, and let
\begin{equation}
{\rm Diff}(t_{\rm osc}) \equiv \mathbb{E}[t_{\rm osc}] - t_{{\rm osc},\Gamma} \;,
\label{eq:Difftosc}
\end{equation}
where $\mathbb{E}$ denotes expectation.
Roughly speaking, we say that the noise alters the oscillation time {\em significantly} if
$|{\rm Diff}(t_{\rm osc})|$ is larger than, or comparable to, ${\rm Std}(t_{\rm osc})$.
Fig.~\ref{fig:manyPeriod} shows the variation in ${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$ with $\varepsilon$,
given by our numerical experiment.
As for an analogous stochastic perturbation of a periodic orbit in a smooth system \cite{FrWe12},
${\rm Diff}(t_{\rm osc}) \sim K_1 \varepsilon$,
and ${\rm Std}(t_{\rm osc}) \sim K_2 \sqrt{\varepsilon}$,
for some constants $K_1, K_2$.
Consequently, as $\varepsilon \to 0$, ${\rm Std}(t_{\rm osc})$ is large relative to ${\rm Diff}(t_{\rm osc})$.
Yet the noise significantly alters the oscillation time for relatively small values of $\varepsilon$ because
$|{\rm Diff}(t_{\rm osc})| \approx {\rm Std}(t_{\rm osc})$ for $\varepsilon = 0.001$.
As we may infer from Fig.~\ref{fig:manyPeriod}, this is because the magnitude of $K_1$ is extremely large.
In our earlier work \cite{SiKu13c},
we gave numerical results similar to Fig.~\ref{fig:manyPeriod} for parameter values different to (\ref{eq:paramValues}).
We analyzed stochastically perturbed sliding motion
and showed that the noise may cause this motion to be significantly faster (or slower) than without noise, on average.
We suggested that this mechanism may be the cause for the reduction in oscillation time.
In this paper we use analytical methods to approximate
${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$
and explain why we may have $|{\rm Diff}(t_{\rm osc})| \approx {\rm Std}(t_{\rm osc})$ for relatively small values of $\varepsilon$.
We find that the mechanism described in \cite{SiKu13c}
is one of four phenomena that induce a significant reduction in oscillation time for (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues}).
\begin{figure}
\caption{
Plots of ${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$ against the noise amplitude $\varepsilon$ for (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})--(\ref{eq:paramValues}), computed from Monte-Carlo simulations.
\label{fig:manyPeriod}}
\end{figure}
The remainder of the paper is organized as follows.
As detailed in \S\ref{sec:OUTLINE},
to perform our analysis we split the stochastic dynamics into three phases.
A {\em regular} phase corresponds to dynamics near a section of the periodic orbit that does not involve sliding motion.
This is analyzed in \S\ref{sec:EXCUR} for which the dynamics
are described by a stochastic differential equation with a smooth drift coefficient.
A {\em sliding} phase corresponds to random motion about the switching manifold
near a sliding section of the periodic orbit and is strongly influenced by the discontinuity.
The stochastically perturbed dynamics are analyzed using stochastic averaging principles in \S\ref{sec:SLIDE}.
Our methods for both of these phases are invalid at points where the periodic orbit escapes from the switching manifold,
and are impractical near such points.
Consequently, in \S\ref{sec:ESCAPE} we provide a separate analysis for
the transition from sliding to regular motion that we refer to as an {\em escaping} phase.
In \S\ref{sec:COMB} we combine the results to determine the statistics of
${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$
for (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues}).
Sections \ref{sec:EXCUR}-\ref{sec:COMB}
involve fundamentally different analytical methods and may be read independently.
Conclusions are given in \S\ref{sec:CONC}.
\section{General equations and three phases of stochastic dynamics}
\label{sec:OUTLINE}
\setcounter{equation}{0}
In this section we begin by introducing general equations and a coordinate system that is most convenient for our analysis, \S\ref{sub:COORDASSUM}.
In \S\ref{sub:DIVISON} we precisely partition the dynamics into regular, sliding and escaping phases.
Lastly in \S\ref{sub:COORDRCS} we construct the coordinate system of \S\ref{sub:COORDASSUM} for the relay control example.
\subsection{A stochastically perturbed Filippov system and assumptions on the equations}
\label{sub:COORDASSUM}
For an $N$-dimensional Filippov system ($N \ge 2$) with a single switching manifold,
we suppose that we may choose our coordinate system such that
the switching manifold coincides with $x_1 = 0$,
where $x_1 = e_1^{\sf T} {\bf x}$ is the first component of the state variable, ${\bf x}$.
We then write the Filippov system perturbed by noise as
\begin{equation}
d{\bf x}(t) = \left\{ \begin{array}{lc}
\phi^{(L)}({\bf x}(t)) \;, & x_1(t) < 0 \\
\phi^{(R)}({\bf x}(t)) \;, & x_1(t) > 0
\end{array} \right\} \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;,
\label{eq:sde}
\end{equation}
where $\phi^{(L)}$ and $\phi^{(R)}$ are functions that are $C^2$ on the closure of their respective half-spaces,
${\bf W}(t)$ is a standard $N$-dimensional vector Brownian motion,
$0 < \varepsilon \ll 1$ controls the noise amplitude, and
$D$ is an $N \times N$ matrix that specifies the relative strengths and correlations of the noise in different directions.
Throughout this paper it is convenient to separate the component of the noise in the $x_1$-direction
from the remaining directions, and we write
\begin{equation}
D D^{\sf T} = \left[ \begin{array}{c|c} \alpha & \beta^{\sf T} \\ \hline \beta & \gamma \end{array} \right] \;,
\label{eq:alphaBetaGamma}
\end{equation}
where $\alpha \in \mathbb{R}$, $\beta \in \mathbb{R}^{N-1}$ and $\gamma$ is an $(N-1) \times (N-1)$ matrix.
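
For instance, if the noise enters through a single vector $b \in \mathbb{R}^N$, say $D = b \, e_1^{\sf T}$
(as happens for the relay control example treated below, where $b = PB$, see \S\ref{sub:COORDRCS}),
then $D D^{\sf T} = b b^{\sf T}$, so that
\begin{equation*}
\alpha = b_1^2 \;, \qquad
\beta = b_1 \, (b_2,\ldots,b_N)^{\sf T} \;, \qquad
\gamma = \big( b_i b_j \big)_{i,j = 2,\ldots,N} \;.
\end{equation*}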
Many generalizations of (\ref{eq:sde}) are possible.
We anticipate that Filippov systems with multiple switching manifolds, nonsmooth switching manifolds,
coloured noise or multiplicative noise,
can be analyzed by extensions of the methods presented below.
We assume that when $\varepsilon = 0$, (\ref{eq:sde}) has an attracting periodic orbit $\Gamma$ that includes at least one sliding segment.
We now reorient the coordinate axes relative to one such sliding segment
in order to analyze the stochastically perturbed dynamics
relating to this segment, and relating to the subsequent segment of $\Gamma$ that does
not intersect the switching manifold.
Our underlying strategy is to repeat this coordinate change and analysis
for all sliding segments in order to determine the overall effect of noise on $\Gamma$.
Without loss of generality, we may assume that the chosen sliding segment ends at the origin,
and that from the origin $\Gamma$ then enters the right half-space,
as shown in Fig.~\ref{fig:phaseSchem}.
Consequently, the right half-flow $\phi^{(R)}$ is tangent to the switching manifold at the origin:
\begin{equation}
e_1^{\sf T} \phi^{(R)}(0) = 0 \;.
\label{eq:a1}
\end{equation}
To ensure a non-degenerate scenario we also require
\begin{equation}
e_1^{\sf T} \phi^{(L)}(0) > 0 \;.
\label{eq:a2}
\end{equation}
For simplicity we choose the coordinate $x_2$ such that at the origin
$\Gamma$ is tangent to the $x_2$-axis and locally the value of $x_2$ increases with time on $\Gamma$.
Therefore
\begin{equation}
e_2^{\sf T} \phi^{(R)}(0) > 0 \;, \qquad
e_j^{\sf T} \phi^{(R)}(0) = 0 \;,~\forall j > 2 \;.
\label{eq:a3}
\end{equation}
In order to ensure $\Gamma$ enters the right half-space from the origin
in a non-degenerate fashion we require
\begin{equation}
e_1^{\sf T} \frac{\partial \phi^{(R)}}{\partial x_2}(0) > 0 \;.
\label{eq:a4}
\end{equation}
Lastly, if the system is at least three-dimensional we may choose the remaining axes
so that the boundary of the stable sliding region is tangent to $x_2 = 0$ at the origin.
This requirement is indicated in Fig.~\ref{fig:phaseSchem}
and simplifies our expansions about the origin in \S\ref{sec:ESCAPE}.
Algebraically this requirement equates to
\begin{equation}
e_1^{\sf T} \frac{\partial \phi^{(R)}}{\partial x_j}(0) = 0 \;,~\forall j > 2 \;.
\label{eq:a5}
\end{equation}
\begin{figure}
\caption{
A schematic showing part of a periodic orbit, $\Gamma$, of (\ref{eq:sde}) satisfying (\ref{eq:a1})-(\ref{eq:a5}); the points ${\bf x}_{\Gamma}^M$, ${\bf x}_{\Gamma}^S$, ${\bf x}_{\Gamma}^E$ and ${\bf x}_{\Gamma}^R$ and the three dynamical phases are described in \S\ref{sub:DIVISON}.
\label{fig:phaseSchem}}
\end{figure}
\subsection{Three dynamical phases}
\label{sub:DIVISON}
The periodic orbit $\Gamma$ may have many sliding segments\removableFootnote{
Periodic orbits that involve transversal intersections with the switching manifold
can be treated as having consecutive excursions.
}.
The assumptions (\ref{eq:a1})-(\ref{eq:a5}) ensure that
our coordinate system is centred at the end point of an arbitrarily chosen sliding segment
in a convenient fashion.
Such a point corresponds to a transition from sliding dynamics to regular dynamics.
In the presence of noise, we prefer to treat this transition as a sequence of three phases:
a sliding phase, an escaping phase, and a regular phase.
We now define these phases precisely and introduce notation used in the remaining sections of the paper.
In order to treat escape from the vicinity of the switching manifold separately, we introduce small constants
$\delta^-$ and $\delta^+$ that satisfy
\begin{equation}
\delta^- < 0 < \delta^+ \;, \qquad
\left| \delta^{\pm} \right| \ll 1 \;.
\label{eq:deltaMinusdeltaPlus}
\end{equation}
For a sample solution to (\ref{eq:sde}) with (\ref{eq:a1})-(\ref{eq:a5}) and $\varepsilon > 0$
that follows a path close to $\Gamma$,
we define the {\em sliding phase} as the part of the solution between
the point at which it arrives at the switching manifold and its first intersection with $x_2 = \delta^-$.
This is followed by an {\em escaping phase}
defined as the part of the solution between the end point of the sliding phase
and its first intersection with $x_2 = \delta^+$.
Lastly we refer to the subsequent part of the solution ending with its
next intersection with the switching manifold as a {\em regular phase}.
Throughout this paper we consider approximations to quantities such as transitional PDFs
and first passage times and locations.
Since these relate to finite intervals of time,
with the assumption that $\varepsilon$ is sufficiently small,
a wild departure of a sample solution from close proximity to $\Gamma$
occurs sufficiently rarely that such large deviations may be ignored.
As indicated in Fig.~\ref{fig:phaseSchem}, for the periodic orbit $\Gamma$ we let ${\bf x}_{\Gamma}^M$
denote the point at which the sliding segment starts\removableFootnote{
$M$ for manifold.
},
let ${\bf x}_{\Gamma}^S$ denote the point at which the sliding segment intersects $x_2 = \delta^-$,
let ${\bf x}_{\Gamma}^E$ denote the point at which $\Gamma$ intersects $x_2 = \delta^+$,
and let ${\bf x}_{\Gamma}^R$ denote the next point at which $\Gamma$ intersects the switching manifold.
We let $t_{\Gamma}^S$, $t_{\Gamma}^E$ and $t_{\Gamma}^R$
denote the deterministic evolution times for the sliding, escaping and regular phases, respectively.
\subsection{The coordinate change for the relay control system}
\label{sub:COORDRCS}
Here we change the coordinates of the relay control example,
(\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues}),
so that it conforms to the general system (\ref{eq:sde}) with assumptions (\ref{eq:a1})-(\ref{eq:a5}).
The periodic orbit $\Gamma$, shown in Fig.~\ref{fig:ppSketch}, has two sliding segments.
We choose the end point of the upper sliding segment
(the sliding segment with $X_3 > 0$) as the centre of the new coordinates.
Since $\Gamma$ is symmetric
(specifically (\ref{eq:relayControlSystem2}) is unchanged under ${\bf X} \mapsto -{\bf X}$),
the results obtained for the three phases associated with this end point
may be applied directly to the remaining half of $\Gamma$.
To determine the location of the end point of the upper sliding segment,
we note that, while $X_1 < 0$, trajectories rapidly contract to a one-dimensional weakly stable manifold.
The intersection of this stable manifold with the switching manifold
provides a suitable approximation to the starting point of the upper sliding segment.
Then from Filippov's solution for sliding motion we find
that the upper sliding segment of $\Gamma$ ends at ${\bf X} = (0,1,Z)^{\sf T}$, where $Z \approx 2.561$.
The relevant calculations for this derivation
and an exact expression for $Z$ are given in Appendix \ref{sec:DET}.
Further calculations reveal that the affine change of coordinates
\begin{equation}
{\bf x} = P {\bf X} + Q \;,
\label{eq:Xtox}
\end{equation}
where
\begin{equation}
P = \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & \frac{1}{Z + 2} & 1
\end{array} \right] \;, \qquad
Q = \left[ \begin{array}{c}
0 \\ -1 \\ -\frac{1}{Z + 2} - Z
\end{array} \right] \;,
\end{equation}
transforms (\ref{eq:relayControlSystem2}) with $\varepsilon = 0$ to a system
satisfying (\ref{eq:a1})-(\ref{eq:a5}).
Specifically, under (\ref{eq:Xtox}) the system
(\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues}) becomes
\begin{equation}
d{\bf x}(t) = \left\{ \begin{array}{lc}
\mathcal{A} {\bf x}(t) + \mathcal{B}^{(L)} \;, & x_1(t) < 0 \\
\mathcal{A} {\bf x}(t) + \mathcal{B}^{(R)} \;, & x_1(t) > 0
\end{array} \right\} \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;,
\label{eq:relayControlSystem5}
\end{equation}
where\removableFootnote{
Also
$\mathcal{B}^{(L)} = P B - P A P^{-1} Q$,
$\mathcal{B}^{(R)} = - P B - P A P^{-1} Q$,
$D = P B e_1^{\sf T}$.
}
\begin{equation}
\mathcal{A} = P A P^{-1} =
\left[ \begin{array}{ccc}
-2 \zeta \omega - \lambda & 1 & 0 \\
-2 \zeta \omega \lambda - \omega^2 & \frac{-1}{Z+2} & 1 \\
-\lambda \omega^2 - \frac{2 \zeta \omega \lambda + \omega^2}{Z+2} &
\frac{-1}{(Z+2)^2} &
\frac{1}{Z+2}
\end{array} \right] \;, \label{eq:calA}
\end{equation}
\begin{equation}
\mathcal{B}^{(L)} =
\left[ \begin{array}{c}
2 \\ Z-2 \\ \frac{2 Z}{Z+2}
\end{array} \right] \;, \qquad
\mathcal{B}^{(R)} =
\left[ \begin{array}{c}
0 \\ Z+2 \\ 0
\end{array} \right] \;, \qquad
D =
\left[ \begin{array}{ccc}
1 & 0 & 0 \\
-2 & 0 & 0 \\
\frac{Z}{Z+2} & 0 & 0
\end{array} \right] \;.
\label{eq:calBLBRD}
\end{equation}
The system (\ref{eq:relayControlSystem5}) with (\ref{eq:calA})-(\ref{eq:calBLBRD})
is used as an example to illustrate our methods in the next three sections.
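
The transformed coefficients above can be checked numerically.
The following minimal sketch (in Python) takes only the approximate value $Z \approx 2.561$ quoted in \S\ref{sub:COORDRCS} as given
and verifies $\mathcal{A} = PAP^{-1}$, $\mathcal{B}^{(L)} = PB - PAP^{-1}Q$, $\mathcal{B}^{(R)} = -PB - PAP^{-1}Q$ and $D = PB\,e_1^{\sf T}$
against (\ref{eq:calA})-(\ref{eq:calBLBRD}); it is a sanity check only, not part of the analysis.
\begin{verbatim}
import numpy as np

zeta, lam, omega, Z = 0.5, 0.05, 5.0, 2.561     # Z: approximate value quoted in the text
A = np.array([[-2*zeta*omega - lam,           1.0, 0.0],
              [-2*zeta*omega*lam - omega**2,  0.0, 1.0],
              [-lam*omega**2,                 0.0, 0.0]])
B = np.array([[1.0], [-2.0], [1.0]])
P = np.array([[1.0, 0.0,         0.0],
              [0.0, 1.0,         0.0],
              [0.0, 1.0/(Z + 2), 1.0]])
Q = np.array([[0.0], [-1.0], [-1.0/(Z + 2) - Z]])

calA = P @ A @ np.linalg.inv(P)                  # should reproduce the matrix above
BL   = P @ B - calA @ Q                          # should equal (2, Z-2, 2Z/(Z+2))^T
BR   = -P @ B - calA @ Q                         # should equal (0, Z+2, 0)^T
D    = P @ B @ np.array([[1.0, 0.0, 0.0]])       # noise matrix D = P B e_1^T
print(np.round(calA, 4), BL.ravel(), BR.ravel(), np.round(D, 4), sep="\n")
\end{verbatim}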
\section{Regular stochastic dynamics}
\label{sec:EXCUR}
\setcounter{equation}{0}
Here we consider sample solutions to (\ref{eq:sde})-(\ref{eq:a5}) that start
from an initial point, ${\bf x}_0$, on $x_2 = \delta^+$, and follow them up to their first intersection with the switching manifold, $x_1 = 0$.
For an arbitrary sample solution,
we let $t^R$ denote the first passage time to $x_1 = 0$ and let ${\bf x}^R$ denote
the corresponding arrival location of the solution.
When $\varepsilon = 0$, these values are deterministic and we denote them by
$t_{d}^R$ and ${\bf x}_{d}^R$ respectively.
Naturally the values of $t_{d}^R$ and ${\bf x}_{d}^R$ depend on ${\bf x}_0$,
but in this section it is convenient to ignore this dependency because
here we are not interested in variations in ${\bf x}_0$.
Such variations are considered in \S\ref{sec:COMB}
where it is necessary to calculate exactly how deviations
in one phase of the dynamics influence dynamics in subsequent phases.
Note, if ${\bf x}_0 = {\bf x}_{\Gamma}^E$,
then $t_{d}^R = t_{\Gamma}^R$ and ${\bf x}_{d}^R = {\bf x}_{\Gamma}^R$, Fig.~\ref{fig:phaseSchem}.
We assume $\varepsilon$ is small and ${\bf x}_0$ is sufficiently far from $x_1 = 0$
such that $t^R \approx t_{d}^R$ and ${\bf x}^R \approx {\bf x}_{d}^R$, with high probability.
Indeed, we have $t^R - t_{d}^R = O(\sqrt{\varepsilon})$
and ${\bf x}^R - {\bf x}_{d}^R = O(\sqrt{\varepsilon})$.
Dynamics for this phase are governed purely by the right half-system of (\ref{eq:sde}):
\begin{equation}
d{\bf x}(t) = \phi^{(R)}({\bf x}(t)) \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;.
\label{eq:sdeR}
\end{equation}
It suffices to use classical methods of analysis to study (\ref{eq:sdeR}).
We begin by obtaining $t^R$ and ${\bf x}^R$ to $O(\sqrt{\varepsilon})$
by sample path methods following \cite{FrWe12}.
\subsection{First order approximations}
\label{sub:REGVAR}
For the smooth stochastic differential equation, (\ref{eq:sdeR}),
we can expand ${\bf x}(t)$ as a series involving powers of $\sqrt{\varepsilon}$.
Specifically, by Theorem 2.2 of Chapter 2 of \cite{FrWe12} we can write
\begin{equation}
{\bf x}(t) = {\bf x}_{d}(t;{\bf x}_0) + \sqrt{\varepsilon} \,{\bf x}^{(1)}(t) + o(\sqrt{\varepsilon}) \;,
\label{eq:xExpFW}
\end{equation}
where ${\bf x}_{d}$ denotes the solution to $\dot{{\bf x}} = \phi^{(R)}({\bf x})$, from ${\bf x}_0$,
and ${\bf x}^{(1)}$ satisfies
\begin{equation}
d{\bf x}^{(1)}(t) = {\rm D}_{\bf x} \phi^{(R)}({\bf x}_{d}(t)) \,{\bf x}^{(1)}(t) \,dt + D \,d{\bf W}(t) \;,
\qquad {\bf x}^{(1)}(0) = 0 \;,
\label{eq:OUFW}
\end{equation}
where ${\rm D}_{\bf x} \phi^{(R)}$ is the Jacobian of $\phi^{(R)}$.
Equation (\ref{eq:OUFW}) is a time-dependent Ornstein-Uhlenbeck process \cite{Sc10,Ga09}.
By using an integrating factor we obtain the explicit solution
\begin{equation}
{\bf x}^{(1)}(t) = \int_0^t {\rm e}^{\int_s^t {\rm D}_{\bf x} \phi^{(R)}({\bf x}_{d}(\tilde{s})) \,d\tilde{s}} D \,d{\bf W}(s) \;.
\end{equation}
Consequently ${\bf x}^{(1)}(t)$ is a Gaussian random variable with zero mean and covariance matrix
\begin{equation}
K(t) = \int_0^t H(s,t) H(s,t)^{\sf T} \,ds \;,
\label{eq:XiFW}
\end{equation}
where
\begin{equation}
H(s,t) = {\rm e}^{\int_s^t {\rm D}_{\bf x} \phi^{(R)}({\bf x}_{d}(\tilde{s})) \,d\tilde{s}} D \;.
\label{eq:HFW}
\end{equation}
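
For numerical purposes it is convenient to note (this is a standard reformulation, used here only for computation)
that differentiating (\ref{eq:XiFW}) with respect to $t$ shows that $K$ satisfies the differential Lyapunov equation
\begin{equation*}
\frac{dK}{dt}(t) = {\rm D}_{\bf x} \phi^{(R)}({\bf x}_{d}(t)) \,K(t) + K(t) \,{\rm D}_{\bf x} \phi^{(R)}({\bf x}_{d}(t))^{\sf T} + D D^{\sf T} \;, \qquad K(0) = 0 \;,
\end{equation*}
which may be integrated alongside $\dot{{\bf x}}_{d} = \phi^{(R)}({\bf x}_{d})$.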
When $\varepsilon = 0$, first passage to the switching manifold occurs at the point,
${\bf x}_{d}^R = {\bf x}_{d}(t_{d}^R)$.
We assume that the deterministic solution intersects the switching manifold transversely
at this point, as is generically the case.
That is, we assume $e_1^{\sf T} v \ne 0$ where
\begin{equation}
v = \phi^{(R)}({\bf x}_{d}(t_{d}^R)) \;.
\label{eq:v}
\end{equation}
Then, for $\varepsilon > 0$, by Theorem 2.3 of Chapter 2 of \cite{FrWe12},
the first passage statistics satisfy
\begin{eqnarray}
\mathbb{E}[t^R] &=& t_{d}^R + o(\sqrt{\varepsilon}) \;,
\label{eq:meantex0} \\
{\rm Var}(t^R) &=& \frac{e_1^{\sf T} K(t_{d}^R) e_1}{(e_1^{\sf T} v)^2} \,\varepsilon + o(\varepsilon) \;,
\label{eq:vartex}
\end{eqnarray}
and
\begin{eqnarray}
\mathbb{E}[{\bf x}^R] &=& {\bf x}_{d}^R + o(\sqrt{\varepsilon}) \;,
\label{eq:meanxex0} \\
{\rm Cov}({\bf x}^R) &=& \left( I - \frac{v e_1^{\sf T}}{e_1^{\sf T} v} \right)
K(t_{d}^R) \left( I - \frac{v e_1^{\sf T}}{e_1^{\sf T} v} \right)^{\sf T} \varepsilon + o(\varepsilon) \;.
\label{eq:covxex}
\end{eqnarray}
In Fig.~\ref{fig:checkExcur}, the formulas (\ref{eq:vartex}) and (\ref{eq:covxex}) are compared with Monte-Carlo simulations
for the relay control example, (\ref{eq:relayControlSystem2}).
The upper curve of panel A is the square root of the leading order term of (\ref{eq:vartex}).
We compute $K$ in this expression by numerically evaluating the integral (\ref{eq:XiFW}).
The upper curves of the lower two panels of Fig.~\ref{fig:checkExcur}
correspond to analogous lowest-order approximations using (\ref{eq:covxex}).
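
To indicate how these leading-order curves can be generated, the following minimal sketch (in Python)
integrates ${\bf x}_{d}$ together with the Lyapunov equation noted after (\ref{eq:HFW})
for the right half-system of (\ref{eq:relayControlSystem5}) with (\ref{eq:calA})-(\ref{eq:calBLBRD}),
detects the first return to $x_1 = 0$, and evaluates the leading-order terms of (\ref{eq:vartex}) and (\ref{eq:covxex}).
The initial point (taken on $x_2 = \delta^+$ with $\delta^+ = 0.01$), the crude forward-Euler integration and the step size are illustrative choices.
\begin{verbatim}
import numpy as np

zeta, lam, om, Z = 0.5, 0.05, 5.0, 2.561        # Z: approximate value quoted in the text
calA = np.array([[-2*zeta*om - lam,                           1.0,            0.0],
                 [-2*zeta*om*lam - om**2,                    -1.0/(Z+2),      1.0],
                 [-lam*om**2 - (2*zeta*om*lam + om**2)/(Z+2), -1.0/(Z+2)**2,  1.0/(Z+2)]])
BR  = np.array([0.0, Z + 2.0, 0.0])
Dm  = np.array([[1.0, 0, 0], [-2.0, 0, 0], [Z/(Z+2), 0, 0]])
DDt = Dm @ Dm.T

eps, dt = 1e-4, 1e-5                             # illustrative noise amplitude and step size
x = np.array([0.0, 0.01, 0.0])                   # illustrative start on x_2 = delta^+
K = np.zeros((3, 3))
t, left = 0.0, False
for _ in range(10_000_000):
    x = x + (calA @ x + BR)*dt                   # deterministic right half-flow
    K = K + (calA @ K + K @ calA.T + DDt)*dt     # differential Lyapunov equation
    t += dt
    left = left or x[0] > 1e-3
    if left and x[0] <= 0.0:                     # first return to the switching manifold
        break

e1 = np.array([1.0, 0.0, 0.0])
v  = calA @ x + BR                               # vector field at the crossing point
var_t = eps * (e1 @ K @ e1) / (e1 @ v)**2        # leading-order variance of t^R
proj  = np.eye(3) - np.outer(v, e1) / (e1 @ v)
cov_x = eps * proj @ K @ proj.T                  # leading-order covariance of x^R
print("t_d^R ~", t, "  Std(t^R) ~", np.sqrt(var_t))
print("Std(x_2^R), Std(x_3^R) ~", np.sqrt(np.diag(cov_x))[1:])
\end{verbatim}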
\begin{figure}
\caption{
First passage statistics for a regular phase of the dynamics for the relay control
system (\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})--(\ref{eq:paramValues}); the upper curves show the leading-order approximations obtained from (\ref{eq:vartex}) and (\ref{eq:covxex}) and are compared with Monte-Carlo simulations.
\label{fig:checkExcur}}
\end{figure}
\subsection{Deviations in the mean}
\label{sub:REGMEAN}
From (\ref{eq:meantex0}) and (\ref{eq:meanxex0}) we see that to
determine the lowest order nonzero terms of
${\rm Diff}(t^R) = \mathbb{E}[t^R] - t_{\Gamma}^R$ and
${\rm Diff}({\bf x}^R) = \mathbb{E}[{\bf x}^R] - {\bf x}_{\Gamma}^R$,
more powerful methods are required.
To calculate $\mathbb{E}[t^R]$ to $O(\varepsilon)$ we express this quantity
in terms of the transitional PDF (probability density function) $p^R({\bf x},t;{\bf x}_0)$ for (\ref{eq:sdeR}),
\begin{equation}
\mathbb{E}[t^R] = \int_0^\infty \int_{\mathbb{R}^N} p^R({\bf x},t) \,d{\bf x} \,dt \;.
\label{eq:meantex1}
\end{equation}
The Fokker-Planck equation for $p^R$ with corresponding boundary conditions, including an absorbing barrier
at $x_1=0$, is considered below in (\ref{eq:FPEex})-(\ref{eq:FPEremainingBCs}).
A determination of $\mathbb{E}[{\bf x}^R]$ requires using the joint PDF for the first passage time and location, $k(x_2,\ldots,x_N,t;{\bf x}_0)$,
\begin{equation}
\mathbb{E}[x_j^R] = \int_0^\infty \int_{\mathbb{R}^{N-1}} x_j
k(x_2,\ldots,x_N,t) \,dx_2 \ldots \,dx_N \,dt \;.
\label{eq:meanxjex0}
\end{equation}
The joint PDF $k$ can be written in terms of $p^R$ through the probability current $J$ \cite{Sc10,Ga09} of (\ref{eq:sdeR}),
\begin{equation}
k(x_2,\ldots,x_N,t) = -e_1^{\sf T} J(0,x_2,\ldots,x_N,t) \;,
\end{equation}
where $J$ is given by
\begin{equation}
J({\bf x},t) = \phi^{(R)} p^R - \frac{\varepsilon}{2} \left[ \begin{array}{c}
\sum_{j=1}^N (D D^{\sf T})_{1j} \frac{\partial p^R}{\partial x_j} \\
\vdots \\
\sum_{j=1}^N (D D^{\sf T})_{Nj} \frac{\partial p^R}{\partial x_j}
\end{array} \right] \;.
\end{equation}
In view of the absorbing barrier at $x_1=0$, we have
\begin{equation}
k(x_2,\ldots,x_N,t) = \frac{\varepsilon}{2} (D D^{\sf T})_{11} \frac{\partial p^R}{\partial x_1}(0,x_2,\ldots,x_N,t) \;,
\label{eq:k2}
\end{equation}
so that (\ref{eq:meanxjex0}) becomes
\begin{equation}
\mathbb{E}[x_j^R] = \frac{\varepsilon}{2} (D D^{\sf T})_{11}
\int_0^\infty \int_{\mathbb{R}^{N-1}} x_j
\frac{\partial p^R}{\partial x_1}(0,x_2,\ldots,x_N,t)
\,dx_2 \ldots \,dx_N \,dt \;.
\label{eq:meanxjex1}
\end{equation}
The expressions for $\mathbb{E}[t^R]$ and $\mathbb{E}[x_j^R]$ involve $p^R$, which satisfies the following boundary value problem
\begin{eqnarray}
\frac{\partial p^R}{\partial t} &=&
-\sum_{i=1}^N \frac{\partial}{\partial x_i} \left( \phi_i^{(R)}({\bf x}) p^R \right) +
\frac{\varepsilon}{2} \sum_{i=1}^N \sum_{j=1}^N
(D D^{\sf T})_{ij} \frac{\partial^2 p^R}{\partial x_i \partial x_j} \;, \label{eq:FPEex} \\
p^R({\bf x},0;{\bf x}_0) &=& \delta({\bf x} - {\bf x}_0) \;, \label{eq:FPEIC} \\
p^R({\bf x},t;{\bf x}_0) &=& 0 \;, {\rm ~whenever~} x_1=0 \;, \label{eq:FPEabsorbingBC} \\
p^R({\bf x},t;{\bf x}_0) &\to& 0 {\rm ~as~} ||{\bf x}|| \to \infty \;, {\rm ~with~} x_1 \ge 0 \label{eq:FPEremainingBCs} \;.
\end{eqnarray}
Here (\ref{eq:FPEIC}) captures the initial condition,
(\ref{eq:FPEabsorbingBC}) is the absorbing boundary condition representing that the switching manifold is an absorbing barrier,
with (\ref{eq:FPEremainingBCs}) ensuring physically realistic behaviour.
The form of (\ref{eq:FPEex})-(\ref{eq:FPEremainingBCs}) suggests that the problem
can be solved using an asymptotic approach \cite{Za06,Ve05,KaPa03}.
The solution to (\ref{eq:FPEex})-(\ref{eq:FPEremainingBCs})
can be constructed using the solution to (\ref{eq:FPEex}) without the boundary conditions,
that is, the free-space solution $p_f({\bf x},t;{\bf x}_0)$.
With $\varepsilon \ll 1$, $p_f$ is concentrated around the deterministic trajectory ${\bf x}_{d}(t)$.
The PDF $p_f$ does not satisfy the boundary condition at $x_1=0$,
in particular near the point ${\bf x}_{d}^R$.
A boundary layer analysis in terms of the local variable $z = \frac{x_1}{\varepsilon}$
indicates that near the switching manifold
the boundary layer behaviour of the PDF is given by
the sum $p_{\ell}(z,x_2,\ldots,x_N,t) + p_f|_{x_1=0}$,
where $p_{\ell}$ is a local contribution to be determined.
The uniform solution is then obtained by matching the boundary layer solution to the outer free space solution,
$\lim_{z \to \infty}(p_{\ell} + p_f|_{x_1=0}) = \lim_{x_1 \to 0} p_f({\bf x},t;{\bf x}_0)$.
This implies $\lim_{z \to \infty} p_{\ell} = 0$, and hence
the uniform solution has the form $p^R = p_{\ell} + p_f$.
Furthermore, since $p_f$ decays exponentially away from its main concentration
around the deterministic trajectory, ${\bf x}_{d}$,
it remains to find the local contribution $p_{\ell}$ that decays away from ${\bf x}_{d}^R$.
To this end it is appropriate to use the local variables,
\begin{equation}
z = \frac{x_1}{\varepsilon} \;, \qquad
u_j = \frac{x_j - x_{{d},j}^R}{\sqrt{\varepsilon}} \;, \forall j \ne 1 \;, \qquad
\tau = \frac{t - t_{d}^R}{\sqrt{\varepsilon}} \;,
\label{eq:regularScaling}
\end{equation}
where the particular scaling for $\tau$ and $u_j$
is motivated by (\ref{eq:vartex}) and (\ref{eq:covxex}) respectively.
Then the local contribution can be written
\begin{eqnarray}
&& p_{\ell} \left( \varepsilon z,\sqrt{\varepsilon} u_2 + x_{{d},2}^R,\ldots,\sqrt{\varepsilon} u_N +
x_{{d},N}^R,\sqrt{\varepsilon} \tau + t_{d}^R \right)
= \varepsilon^{-\frac{N}{2}} \mathcal{P}(z,u_2,\ldots,u_N,\tau) \nonumber \\
&&= \varepsilon^{-\frac{N}{2}} \left( \mathcal{P}^{(0)}(z,u_2,\ldots,u_N,\tau)
+ \sqrt{\varepsilon} \mathcal{P}^{(1)}(z,u_2,\ldots,u_N,\tau) + O(\varepsilon) \right) \;,
\label{eq:pell}
\end{eqnarray}
with the expansion in powers of $\sqrt{\varepsilon}$.
The PDE and behaviour at infinity for $\mathcal{P}$ are given by
\begin{align}
\frac{1}{\sqrt{\varepsilon}} \frac{\partial \mathcal{P}}{\partial \tau} &=
-\frac{1}{\varepsilon} \phi_1^{(R)}({\bf x}_d^R)
\frac{\partial \mathcal{P}}{\partial z}
-\frac{1}{\sqrt{\varepsilon}} \sum_{i=2}^N
\frac{\partial \phi_1^{(R)}}{\partial x_i}({\bf x}_d^R) u_i \frac{\partial \mathcal{P}}{\partial z}
-\frac{1}{\sqrt{\varepsilon}} \sum_{i=2}^N
\phi_i^{(R)}({\bf x}_d^R) \frac{\partial \mathcal{P}}{\partial u_i} \nonumber \\
&+~\frac{1}{2 \varepsilon} \left( D D^{\sf T} \right)_{1,1} \frac{\partial^2 \mathcal{P}}{\partial z^2}
+\frac{1}{\sqrt{\varepsilon}} \sum_{i=2}^N
\left( D D^{\sf T} \right)_{i,1} \frac{\partial^2 \mathcal{P}}{\partial z \partial u_i} + O(\varepsilon^0) \;, \label{eq:fpe3} \\
\mathcal{P} &\to 0 {\rm ~as~} z \to \infty {\rm ~or~} u_j \to \pm \infty \;, \forall j \ne 1 \;. \label{eq:remainingBCs}
\end{align}
The absorbing boundary condition for $p^R$ (\ref{eq:FPEabsorbingBC})
gives the condition for $\mathcal{P}$ at $z=0$,
\begin{eqnarray}
&& -p_f(0,\sqrt{\varepsilon} u_2 + x_{{d},2}^R,\ldots,\sqrt{\varepsilon} u_N + x_{{d},N}^R,\sqrt{\varepsilon} \tau + t_{d}^R)
= \varepsilon^{-\frac{N}{2}} \mathcal{P}(0,u_2,\ldots,u_N,\tau) \nonumber \\
&&= \varepsilon^{-\frac{N}{2}} \left( f^{(0)}(u_2,\ldots,u_N,\tau) + \sqrt{\varepsilon} f^{(1)}(u_2,\ldots,u_N,\tau) + O(\varepsilon) \right)
\;, \label{eq:exBC0}
\end{eqnarray}
for some functions $f^{(i)}$.
\subsection{Regular dynamics for the relay control model}
\label{sub:REGRELAY}
Here we summarize the key steps in calculating the functions $f^{(i)}$ that appear in (\ref{eq:exBC0}),
and ultimately calculating $\mathbb{E}[t^R]$ and $\mathbb{E}[{\bf x}^R]$ for the relay control example.
Details are deferred to Appendix \ref{sec:CALEX}.
As described in \S\ref{sub:COORDRCS},
in transformed coordinates the right half-system of the relay control example is
\begin{equation}
d{\bf x}(t) = \left( \mathcal{A} {\bf x}(t) + \mathcal{B}^{(R)} \right) dt
+ \sqrt{\varepsilon} D \,d{\bf W}(t) \;,
\label{eq:relayControlSystem5R}
\end{equation}
with (\ref{eq:calA}) and (\ref{eq:calBLBRD}).
The transitional PDF of (\ref{eq:relayControlSystem5R}) takes the form
$p^R({\bf x},t) = p_f({\bf x},t) + \varepsilon^{-\frac{N}{2}} \mathcal{P}(z,u_2,\ldots,u_N,\tau)$
with initial and boundary conditions (\ref{eq:FPEIC})-(\ref{eq:FPEremainingBCs}).
The free-space contribution is given by
\begin{equation}
p_f({\bf x},t) =
\frac{1}{(2 \pi \varepsilon)^{\frac{3}{2}} \sqrt{\det(K(t))}}
{\rm e}^{-\frac{1}{2 \varepsilon} ({\bf x} - {\bf x}_{d}(t))^{\sf T}
K(t)^{-1} ({\bf x} - {\bf x}_{d}(t))} \;,
\label{eq:pfs}
\end{equation}
with covariance matrix
\begin{equation}
K(t) = \int_0^t {\rm e}^{\mathcal{A} s}
D D^{\sf T}
{\rm e}^{\mathcal{A}^{\sf T} s} \,ds \;.
\end{equation}
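For concreteness, $K(t)$ can be evaluated numerically by integrating the equivalent matrix differential equation
$\dot{K} = \mathcal{A} K + K \mathcal{A}^{\sf T} + D D^{\sf T}$ with $K(0) = 0$.
The following Python sketch illustrates this; the matrices \texttt{A} and \texttt{D} are placeholder values, not those of our example.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder system matrices (assumed values, not those of the example).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
D = np.array([[1.0], [0.0], [0.5]])
Q = D @ D.T

def dK(t, k_flat):
    # dK/dt = A K + K A^T + D D^T, equivalent to the integral defining K(t)
    K = k_flat.reshape(3, 3)
    return (A @ K + K @ A.T + Q).ravel()

sol = solve_ivp(dK, (0.0, 2.0), np.zeros(9), rtol=1e-8, atol=1e-10)
K = sol.y[:, -1].reshape(3, 3)
print(K[0, 0])   # kappa_11(2), the entry used in the passage time formula
\end{verbatim}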
The local contribution $\mathcal{P}$ satisfies (\ref{eq:fpe3})
with $\phi_1^{(R)}({\bf x}_{\Gamma}^R) = x_{\Gamma,2}^R$
and $\left( D D^{\sf T} \right)_{1,1} = 1$
(coefficients for higher order terms are given in Appendix \ref{sec:CALEX}).
Note that $t_{\Gamma}^R$ and ${\bf x}_\Gamma^R$ are
obtained by solving (\ref{eq:relayControlSystem5R}) with $\varepsilon = 0$
(refer to (\ref{eq:bxdetrelay})).
With $\mathcal{P}$ expanded as in (\ref{eq:pell}), the $O(1)$ equation is
\begin{equation}
\frac{1}{2} \mathcal{P}^{(0)}_{z z} - x_{\Gamma,2}^R \mathcal{P}^{(0)}_{z} = 0 \;.
\label{eq:calP0}
\end{equation}
Therefore
\begin{equation}
\mathcal{P}^{(0)} = -f^{(0)}(u_2,u_3,\tau) \,{\rm e}^{2 x_{\Gamma,2}^R z} \;,
\label{eq:calP02}
\end{equation}
where $f^{(0)}$ is determined from (\ref{eq:exBC0}) and (\ref{eq:pfs}) and given by (\ref{eq:f0}).
Note $x_{\Gamma,2}^R < 0$,
because $\phi_1^{(R)}({\bf x}_{\Gamma}^R) = x_{\Gamma,2}^R$ is the component of the vector field
orthogonal to the switching manifold evaluated at the deterministic passage location,
and this component must be negative for the deterministic trajectory to reach the switching manifold.
Higher order corrections to $\mathcal{P}$ satisfy
\begin{equation}
\frac{1}{2} \mathcal{P}^{(j)}_{z z} - x_{\Gamma,2}^R \mathcal{P}^{(j)}_{z} =
\mathcal{F} \left( \mathcal{P}^{(0)},\ldots,\mathcal{P}^{(j-1)} \right) \;,
\label{eq:calPj}
\end{equation}
where the right-hand side is a function of the lower order components and their derivatives.
By solving (\ref{eq:calPj}) with $j = 1$ using (\ref{eq:calP02}) and the boundary conditions
(\ref{eq:remainingBCs})-(\ref{eq:exBC0}), we obtain
\begin{equation}
\mathcal{P}^{(1)} = \left( -f^{(1)}(u_2,u_3,\tau) + g^{(1)}(u_2,u_3,\tau) z \right) {\rm e}^{2 x_{\Gamma,2}^R z} \;,
\label{eq:calP12}
\end{equation}
where $f^{(1)}$ is found from (\ref{eq:exBC0}) and (\ref{eq:f0})
and $g^{(1)}$ depends on $f^{(0)}$ and its derivatives (\ref{eq:g1}).
By evaluating (\ref{eq:meantex1}) with (\ref{eq:pfs}) and (\ref{eq:calP02}),
we obtain the following formula for the mean first passage time,
\begin{equation}
\mathbb{E}[t^R] = t_{\Gamma}^R +
\frac{1}{2 \left( \phi_1^{(R)}({\bf x}_{\Gamma}^R) \right)^2} \left( \dot{\kappa}_{11}(t_{\Gamma}^R)
+ \frac{\kappa_{11}(t_{\Gamma}^R) \ddot{x}_{{d},1}(t_{\Gamma}^R)}{\phi_1^{(R)}({\bf x}_{\Gamma}^R)} - 1 \right) \varepsilon
+ O \left( \varepsilon^{\frac{3}{2}} \right) \;,
\label{eq:meantex2}
\end{equation}
where $\kappa_{11}$ denotes the $(1,1)$-element of $K$.
As discussed in \S\ref{sub:COORDRCS},
the deterministic passage location ${\bf x}_{\Gamma}^R$ is well approximated
by the intersection of the weakly stable manifold of the right half-space with the switching manifold.
Repeating (\ref{eq:xRint}), in transformed coordinates this intersection point is
\begin{equation}
{\bf x}_{\rm int}^{(R)} =
\left[ 0 \;, -\frac{1}{\omega^2} \;, -2-\frac{2 \zeta}{\omega} - Z - \frac{1}{\omega^2(Z+2)} \right]^{\sf T} \;.
\label{eq:xRint2}
\end{equation}
Indeed, for the parameter values we have used, numerical calculations reveal that
$|| {\bf x}_{\Gamma}^R - {\bf x}_{\rm int}^{(R)} || \approx 0.000087$.
By combining (\ref{eq:meantex2}) and (\ref{eq:xRint2}) we obtain
the useful approximation\removableFootnote{
A necessary calculation is:
\begin{eqnarray}
&\dot{{\bf x}}_{d}(t_{d}^R)
= \mathcal{A} {\bf x}_{d}(t_{d}^R) + \mathcal{B}^{(R)}
\approx \mathcal{A} {\bf x}_{\rm int}^{(R)} + \mathcal{B}^{(R)}
= \left[ \begin{array}{c}
-\frac{1}{\omega^2} \\
-\frac{2 \zeta}{\omega} \\
-1 - \frac{2 \zeta}{\omega (Z+2)}
\end{array} \right]& \;, \\
&\ddot{{\bf x}}_{d}(t_{d}^R)
= \mathcal{A} \dot{{\bf x}}_{d}(t_{d}^R)
\approx \left[ \begin{array}{c}
\frac{\lambda}{\omega^2} \\
\frac{2 \zeta}{\omega} \\
\lambda + \frac{2 \zeta}{\omega (Z+2)}
\left( 1 + \lambda - \frac{1}{Z+2} \right)
\end{array} \right]& \;.
\end{eqnarray}
}
\begin{equation}
\mathbb{E}[t^R] \approx t_{d}^R +
\frac{\omega^4}{2} \left( \dot{\kappa}_{11}(t_{d}^R)
+ \lambda \kappa_{11}(t_{d}^R) - 1 \right) \varepsilon
+ O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:meantex3}
\end{equation}
As shown in Fig.~\ref{fig:checkExcur}-A, (\ref{eq:meantex3}) is consistent with Monte-Carlo simulations.
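The agreement with Monte-Carlo simulations can be reproduced with a direct Euler--Maruyama estimate of $\mathbb{E}[t^R]$.
The Python sketch below simulates a generic linear right half-system with absorption at $x_1 = 0$;
the matrices, starting point and step sizes are assumed values for illustration only, not those of (\ref{eq:relayControlSystem5R}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Placeholder right half-system (assumed values for illustration only).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([0.0, 0.0, -1.0])        # equilibrium at x_1 = -1, so x_1 = 0 is reached
D = np.array([[1.0], [0.0], [0.5]])
eps, dt, n_paths, max_steps = 1e-4, 1e-3, 500, 20000
x0 = np.array([1.0, 0.0, 0.0])        # hypothetical starting point with x_1 > 0

passage_times = []
for _ in range(n_paths):
    x, t = x0.copy(), 0.0
    for _ in range(max_steps):
        if x[0] <= 0.0:                # first crossing of the switching manifold
            break
        dW = rng.normal(0.0, np.sqrt(dt), D.shape[1])
        x = x + (A @ x + B) * dt + np.sqrt(eps) * (D @ dW)
        t += dt
    passage_times.append(t)

print("estimated E[t^R]:", np.mean(passage_times))
\end{verbatim}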
The mean values $\mathbb{E}[x_2^R]$ and $\mathbb{E}[x_3^R]$ cannot be expressed as concisely.
Numerically it is convenient to compute these values by evaluating $p_f$ directly, rather than calculating $f^{(1)}$,
as shown in Appendix \ref{sec:CALEX}.
The results, shown in panels B and C of Fig.~\ref{fig:checkExcur},
were computed by numerically evaluating (\ref{eq:meanxjex2}).
\section{Stochastically perturbed sliding dynamics}
\label{sec:SLIDE}
\setcounter{equation}{0}
Here we consider sample solutions to (\ref{eq:sde}) with assumptions (\ref{eq:a1})-(\ref{eq:a5})
from an initial point, ${\bf x}_0$, on $x_1 = 0$, until an intersection $x_2 = \delta^-$.
For an arbitrary sample solution,
we let $t^S$ and ${\bf x}^S$ denote the first passage time and location to $x_2 = \delta^-$.
When $\varepsilon = 0$, these values are deterministic and we denote them by
$t_{d}^S$ and ${\bf x}_{d}^S$ respectively.
In this section it is convenient to write
\begin{equation}
{\bf y} = [ x_2, \ldots, x_N ]^{\sf T} \;, \qquad
\psi^{(L)} = [ \phi^{(L)}_2, \ldots, \phi^{(L)}_N ]^{\sf T} \;, \qquad
\psi^{(R)} = [ \phi^{(R)}_2, \ldots, \phi^{(R)}_N ]^{\sf T} \;,
\label{eq:slidingVariables}
\end{equation}
with which the piecewise-smooth stochastic differential equation (\ref{eq:sde}) may be written as
\begin{equation}
\left[ \begin{array}{c} dx_1(t) \\ d{\bf y}(t) \end{array} \right] =
\left\{ \begin{array}{lc}
\left[ \begin{array}{c} \phi^{(L)}_1(x_1(t),{\bf y}(t)) \\ \psi^{(L)}(x_1(t),{\bf y}(t)) \end{array} \right]
\;, & x_1(t) < 0 \\
\left[ \begin{array}{c} \phi^{(R)}_1(x_1(t),{\bf y}(t)) \\ \psi^{(R)}(x_1(t),{\bf y}(t)) \end{array} \right]
\;, & x_1(t) > 0
\end{array} \right\} \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;.
\label{eq:sde2}
\end{equation}
Since sample solutions remain near the switching manifold with high probability,
it is profitable to expand in $x_1$.
We rewrite (\ref{eq:sde2}) as
\begin{equation}
\left[ \begin{array}{c} dx_1(t) \\ d{\bf y}(t) \end{array} \right] =
\left\{ \begin{array}{lc}
\left[ \begin{array}{c}
a_L({\bf y}(t)) + c_L({\bf y}(t)) x_1(t) + O(x_1(t)^2) \\
b_L({\bf y}(t)) + d_L({\bf y}(t)) x_1(t) + O(x_1(t)^2)
\end{array} \right]
\;, & x_1(t) < 0 \\
\left[ \begin{array}{c}
-a_R({\bf y}(t)) + c_R({\bf y}(t)) x_1(t) + O(x_1(t)^2) \\
b_R({\bf y}(t)) + d_R({\bf y}(t)) x_1(t) + O(x_1(t)^2)
\end{array} \right]
\;, & x_1(t) > 0
\end{array} \right\} \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;.
\label{eq:sde3}
\end{equation}
Take care to note that $a_L$, $a_R$, $c_L$ and $c_R$ are scalars,
and $b_L$, $b_R$, $d_L$ and $d_R$ are $(N-1)$-dimensional vectors.
When $\varepsilon = 0$ we use Filippov's convention to define a deterministic sliding solution \cite{Fi88,Fi60}.
On $x_1 = 0$, for values of ${\bf y}$ for which $a_L > 0$ and $a_R > 0$, we define
\begin{equation}
\left[ \begin{array}{c} \dot{x}_{{d},1} \\ \dot{{\bf y}}_{d} \end{array} \right] =
(1-\mu({\bf y}_{d})) \left[ \begin{array}{c} a_L({\bf y}_{d}) \\ b_L({\bf y}_{d}) \end{array} \right] +
\mu({\bf y}_{d}) \left[ \begin{array}{c} -a_R({\bf y}_{d}) \\ b_R({\bf y}_{d}) \end{array} \right] \;,
\end{equation}
where $\mu$ is given by the requirement $\dot{x}_{{d},1} = 0$.
That is,
$\mu = \frac{a_L}{a_L+a_R}$,
and hence
\begin{equation}
\dot{{\bf y}}_{d} = \Omega \equiv \frac{a_L b_R + a_R b_L}{a_L + a_R} \;.
\label{eq:Omega}
\end{equation}
The sliding solution, ${\bf y}_{d}(t;{\bf y}_0)$,
satisfies ${\bf y}_{d}(0;{\bf y}_0) = {\bf y}_0$,
and $\dot{{\bf y}}_{d} = \Omega({\bf y}_{d})$.
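The sliding vector field $\Omega$ is straightforward to evaluate and integrate numerically.
The following Python sketch does this with hypothetical coefficient functions $a_L$, $a_R$, $b_L$, $b_R$
(placeholders only, chosen so that $a_L, a_R > 0$ along the solution).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficient functions of y = (x_2, ..., x_N); placeholders only.
def a_L(y): return 1.0 + 0.1 * y[0]
def a_R(y): return 0.5 - y[0]            # positive while x_2 < 0.5
def b_L(y): return np.array([0.3, -0.2 * y[1]])
def b_R(y): return np.array([0.4, 0.1])

def Omega(y):
    # Filippov sliding vector field (a_L b_R + a_R b_L) / (a_L + a_R)
    return (a_L(y) * b_R(y) + a_R(y) * b_L(y)) / (a_L(y) + a_R(y))

y0 = np.array([-0.5, 0.2])
sol = solve_ivp(lambda t, y: Omega(y), (0.0, 1.0), y0)
print(sol.y[:, -1])    # deterministic sliding solution y_d(1)
\end{verbatim}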
\subsection{Stochastic averaging}
\label{sub:AVERAGING}
The technique of stochastic averaging applies to stochastic systems with distinct time scales \cite{FrWe12,PaSt08,PaKo74,MoCu11}.
The underlying principle is to average fast variables in order
to obtain a simpler description of the behaviour of slow variables over a relatively long time frame.
Stochastic averaging has been an invaluable tool for understanding
periodically forced oscillators \cite{AnAs02,RoSp86,Da98},
and excitable systems \cite{BeGe06,Ba04b}.
Here we apply stochastic averaging
to determine the first passage statistics of stochastically perturbed sliding motion to
the plane $x_2 = \delta^-$.
From previous investigations \cite{SiKu13c} we know that $x_1(t) = O(\varepsilon)$,
for stochastically perturbed sliding motion of (\ref{eq:sde3}).
This motivates the scaling
\begin{equation}
z = \frac{x_1}{\varepsilon} \;,
\end{equation}
with which (\ref{eq:sde3}) may be written as
\begin{eqnarray}
dz(t) &=& \frac{1}{\varepsilon} \left\{ \begin{array}{lc}
a_L({\bf y}(t)) + \varepsilon c_L({\bf y}(t)) z(t) + O(\varepsilon^2) \;, & z(t) < 0 \\
-a_R({\bf y}(t)) + \varepsilon c_R({\bf y}(t)) z(t) + O(\varepsilon^2) \;, & z(t) > 0
\end{array} \right\} \,dt +
\frac{1}{\sqrt{\varepsilon}} \,e_1^{\sf T} D \,d{\bf W}(t) \;, \label{eq:dhatx3} \\
\,d{\bf y}(t) &=& F(z(t),{\bf y}(t)) \,dt +
\sqrt{\varepsilon} \left[ \begin{array}{c}
e_2^{\sf T} \\ \vdots \\ e_N^{\sf T}
\end{array} \right]
D \,d{\bf W}(t) \;, \label{eq:dhaty2}
\end{eqnarray}
where
\begin{equation}
F(z,{\bf y}) =
\frac{b_L+b_R}{2}
- \frac{b_L-b_R}{2} \,{\rm sgn}(z)
+ \varepsilon \frac{d_L+d_R}{2} z
- \varepsilon \frac{d_L-d_R}{2} z \,{\rm sgn}(z)
+ O(\varepsilon^2) \;.
\label{eq:F}
\end{equation}
In view of the manner by which $\varepsilon$ appears in (\ref{eq:dhatx3})-(\ref{eq:dhaty2}),
we may treat $z(t)$ and ${\bf y}(t)$ as fast and slow variables respectively.
Furthermore, the averaging approximation of (\ref{eq:dhaty2})
involves only terms in the drift coefficients that are of lower order than $\varepsilon$
(i.e.~terms involving $a_L$, $a_R$, $b_L$ and $b_R$).
Below we show that with this approximation the mean of ${\bf y}$ coincides with the deterministic solution, ${\bf y}_{d}$.
A full computation of the noise-induced correction to the mean
due to terms of the next order is beyond the scope of this paper\removableFootnote{
I started to compute fourth order terms in our full asymptotic expansion of the PDF
(given to third order in \cite{SiKu14b}).
Despite using {\sc matlab} to do algebraic manipulations the calculations quickly became very complicated.
(I appear to get a PDE for $f^{(1)}$ that is a nonhomogeneous version
of the PDF for $f^{(0)}$ involving terms $y f^{(0)}$ and $y^3 f^{(0)}$.)
It seems possible to be able to follow the calculations all the way through and obtain the correction semi-explicitly,
but there are so many terms to keep track of,
and some of the expressions are much nastier than those at third order,
that the effort required is far greater than the value of the result itself.
}.
However, for the relay control example some of the coefficients $c_L$, $c_R$, $d_L$ and $d_R$ take relatively large values.
Here we suppose these coefficients are $O \left( \varepsilon^{-\eta} \right)$, for some $\eta > 0$,
so that we can formally derive the correction via a straightforward averaging approximation.
For any fixed ${\bf y}$ satisfying $a_L, a_R > 0$, as detailed in \cite{SiKu13c},
(\ref{eq:dhatx3}) has the quasi-steady-state density
\begin{equation}
p_{\rm qss}(z;{\bf y}) =
\left( \frac{2 a_L a_R}{\alpha (a_L+a_R)}
- \frac{a_L^3 c_R + a_R^3 c_L}{a_L a_R (a_L+a_R)^2} \,\varepsilon + O(\varepsilon^2) \right)
\left\{ \begin{array}{lc}
{\rm e}^{\frac{1}{\alpha} \left( 2 a_L z
+ c_L z^2 \varepsilon + O(\varepsilon^2) \right)} \;, & z < 0 \\
{\rm e}^{-\frac{1}{\alpha} \left( 2 a_R z
- c_R z^2 \varepsilon + O(\varepsilon^2) \right)} \;, & z > 0
\end{array} \right. \;,
\label{eq:pqss}
\end{equation}
where $\alpha = (D D^{\sf T})_{11}$ as in (\ref{eq:alphaBetaGamma}).
Given ${\bf y}$, it is suitable to assume $z$ is distributed according to (\ref{eq:pqss}).
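As a consistency check, the leading-order part of (\ref{eq:pqss}) is a two-sided exponential,
and integrating against it reproduces the conditional expectations used below,
for instance $\mathbb{E}[{\rm sgn}(z) \,|\, {\bf y}] = \frac{a_L - a_R}{a_L + a_R} + O(\varepsilon)$.
A minimal Python sketch, with assumed values of $a_L$, $a_R$ and $\alpha$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Assumed values for illustration.
aL, aR, alpha = 1.2, 0.8, 1.0

pref = 2.0 * aL * aR / (alpha * (aL + aR))
def p_qss(z):
    # leading-order (epsilon -> 0) quasi-steady-state density
    return pref * (np.exp(2.0 * aL * z / alpha) if z < 0
                   else np.exp(-2.0 * aR * z / alpha))

left = quad(p_qss, -np.inf, 0.0)[0]
right = quad(p_qss, 0.0, np.inf)[0]
print(left + right)                        # ~ 1 (normalisation)
print(right - left, (aL - aR) / (aL + aR)) # E[sgn(z)|y] to leading order
\end{verbatim}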
Averaging $F$ over $p_{\rm qss}$ yields\removableFootnote{
From (\ref{eq:pqss}) we readily obtain
\begin{eqnarray}
\mathbb{E} \left[ {\rm sgn}(z) \,\big|\, {\bf y} \right] &=&
\frac{a_L-a_R}{a_L+a_R}
+ \frac{a_L^2 c_R - a_R^2 c_L}{a_L a_R (a_L+a_R)^2} \alpha \varepsilon + O(\varepsilon^2) \\
\mathbb{E} \left[ z \,\big|\, {\bf y} \right] &=&
\frac{a_L-a_R}{2 a_L a_R} \alpha + O(\varepsilon) \\
\mathbb{E} \left[ z \,{\rm sgn}(z) \,\big|\, {\bf y} \right] &=&
\frac{a_L^2+a_R^2}{2 a_L a_R (a_L+a_R)} \alpha + O(\varepsilon) \;.
\end{eqnarray}
}
\begin{eqnarray}
\overline{F}({\bf y}) &\equiv& \mathbb{E} \left[ F(z,{\bf y}) \,\big|\, {\bf y} \right] \nonumber \\
&=& \int_{-\infty}^\infty
F(z,{\bf y}) p_{\rm qss}(z;{\bf y}) \,dz \nonumber \\
&=& \Omega({\bf y}) + \Lambda({\bf y}) \alpha \varepsilon + O(\varepsilon^2) \;,
\label{eq:meanFcal}
\end{eqnarray}
where $\Omega$ is given by (\ref{eq:Omega}) and
\begin{equation}
\Lambda = \frac{(a_L^2 d_R - a_R^2 d_L)(a_L+a_R)
- (a_L^2 c_R - a_R^2 c_L)(b_L-b_R)}
{2 a_L a_R (a_L+a_R)^2} \;.
\label{eq:Lambda}
\end{equation}
The quantity $\Lambda$ was obtained in \cite{SiKu13c}
in the case that $\phi^{(L)}$ and $\phi^{(R)}$ are independent of ${\bf y}$.
The averaged equation of (\ref{eq:dhaty2}) is
$d\overline{{\bf y}}(t) = \overline{F}(\overline{{\bf y}}(t)) dt$.
Therefore we have the ODE
\begin{equation}
\frac{d\overline{{\bf y}}}{dt} = \Omega(\overline{{\bf y}})
+ \Lambda(\overline{{\bf y}}) \alpha \varepsilon + O(\varepsilon^2) \;.
\label{eq:dyBar}
\end{equation}
Note that in the limit $\varepsilon \to 0$, (\ref{eq:dyBar}) reduces to (\ref{eq:Omega}).
Formally there exists a sequence of stochastic solutions to (\ref{eq:sde2})
that converges to ${\bf y}_{d}(t)$ as $\varepsilon \to 0$ \cite{BuOu09}.
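As an illustration of the quantities entering (\ref{eq:dyBar}),
the following Python sketch evaluates $\Omega$ and $\Lambda$ at a fixed ${\bf y}$ from assumed coefficient values
(for brevity the vectors $b$ and $d$ are taken one-dimensional here).
\begin{verbatim}
import numpy as np

# Assumed coefficient values at a fixed y (placeholders; b and d scalar here).
aL, aR = 1.2, 0.8
bL, bR = 0.3, 0.4
cL, cR = -0.5, 0.2
dL, dR = 0.1, -0.3
alpha, eps = 1.0, 1e-3

Omega = (aL * bR + aR * bL) / (aL + aR)
Lam = ((aL**2 * dR - aR**2 * dL) * (aL + aR)
       - (aL**2 * cR - aR**2 * cL) * (bL - bR)) / (2.0 * aL * aR * (aL + aR)**2)

drift = Omega + Lam * alpha * eps   # right-hand side of the averaged ODE
print(Omega, Lam, drift)
\end{verbatim}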
Since (\ref{eq:dyBar}) is the averaged equation
we can use it to obtain the mean of the first passage statistics for small $\varepsilon$.
The mean of the first passage time, $t^S$, is approximated by
\begin{equation}
e_1^{\sf T} \overline{{\bf y}} \left( \mathbb{E} \left[ t^S \right] \right) \approx \delta^- \;,
\label{eq:tBarDef}
\end{equation}
and the mean of the ${\bf y}$-component of the first passage location is approximated by
\begin{equation}
\mathbb{E} \left[ {\bf y}^S \right] \approx \overline{{\bf y}} \left( \mathbb{E} \left[ t^S \right] \right) \;.
\label{eq:byBarDef}
\end{equation}
We expect these approximations to be accurate to at least $O(\varepsilon)$, and write
\begin{eqnarray}
\overline{{\bf y}}(t) &=& {\bf y}_{d}(t) + \overline{{\bf y}}^{(1)}(t) \varepsilon + O(\varepsilon^2) \;,
\label{eq:yBarSeries} \\
\mathbb{E}[t^S] &=& t_{d}^S + t^{S,1} \varepsilon + O(\varepsilon^2) \;,
\label{eq:tSlMeanSeries} \\
\mathbb{E}[{\bf y}^S] &=& {\bf y}_{d}^S + {\bf y}^{S,1} \varepsilon + O(\varepsilon^2) \;.
\label{eq:bySlMeanSeries}
\end{eqnarray}
By substituting (\ref{eq:yBarSeries}) into (\ref{eq:dyBar}) we obtain
\begin{equation}
\overline{{\bf y}}^{(1)}(t) = \alpha
\int_0^t {\rm e}^{\int_s^t ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(u)) \,du}
\Lambda({\bf y}_{d}(s)) \,ds \;,
\label{eq:yBar1}
\end{equation}
which indicates the leading-order deviation of $\overline{{\bf y}}(t)$ from
the deterministic value ${\bf y}_{d}(t)$.
We then express the leading-order deviations of $\mathbb{E}[t^S]$ and $\mathbb{E}[{\bf y}^S]$ in terms of $\overline{{\bf y}}^{(1)}(t_{d}^S)$.
First, from (\ref{eq:tBarDef}) and (\ref{eq:tSlMeanSeries}),
\begin{equation}
t^{S,1} =
-\frac{e_1^{\sf T} \overline{{\bf y}}^{(1)}(t_{d}^S)}
{e_1^{\sf T} \Omega({\bf y}_{d}(t_{d}^S))} \;.
\label{eq:meantsl}
\end{equation}
Second, from (\ref{eq:byBarDef}) and (\ref{eq:bySlMeanSeries}),
\begin{equation}
{\bf y}^{S,1} = \Omega({\bf y}_{d}(t_{d}^S)) t^{S,1}
+ \overline{{\bf y}}^{(1)}(t_{d}^S) \;.
\label{eq:meanbysl1}
\end{equation}
The $x_1$-value of the mean first passage location is found using (\ref{eq:pqss}):
\begin{equation}
\mathbb{E} \left[ x_1^S \right] = \varepsilon \int_{-\infty}^{\infty} z p_{\rm qss}(z;{\bf y}) \,dz =
\frac{a_L-a_R}{2 a_L a_R} \bigg|_{{\bf y} = {\bf y}_{d}^S} \alpha \varepsilon + O(\varepsilon^2) \;.
\label{eq:meanx1sl}
\end{equation}
\subsection{Linear diffusion approximation}
Here we calculate deviations in $t^S$ and ${\bf x}^S$ from their respective mean values.
Our approach is to use a linear diffusion approximation
to obtain a stochastic differential equation for the difference between ${\bf y}$ and its averaged value,
and analyze first passage to $x_2 = \delta^-$.
We write the slow-fast system (\ref{eq:dhatx3})-(\ref{eq:dhaty2}) as
\begin{eqnarray}
dz(t) &=& \frac{1}{\varepsilon} \left( \left\{ \begin{array}{lc}
a_L({\bf y}(t)) \;, & z(t) < 0 \\
-a_R({\bf y}(t)) \;, & z(t) > 0
\end{array} \right\} + O(\varepsilon) \right) \,dt +
\frac{1}{\sqrt{\varepsilon}} \,e_1^{\sf T} D \,d{\bf W}(t) \;, \label{eq:dhatx4} \\
d{\bf y}(t) &=& \big( F_0(z(t),{\bf y}(t)) + O(\varepsilon) \big) \,dt +
\sqrt{\varepsilon} \left[ \begin{array}{c}
e_2^{\sf T} \\ \vdots \\ e_N^{\sf T}
\end{array} \right]
D \,d{\bf W}(t) \;, \label{eq:dhaty3}
\end{eqnarray}
where
\begin{equation}
F_0(z,{\bf y}) = \frac{b_L({\bf y})+b_R({\bf y})}{2} - \frac{b_L({\bf y})-b_R({\bf y})}{2} \,{\rm sgn}(z) \;,
\label{eq:F0}
\end{equation}
constitutes the leading order component of $F$ (\ref{eq:F}).
From (\ref{eq:meanFcal}), the averaged value of $F_0$ is
$\mathbb{E} \left[ F_0(z,{\bf y}) | {\bf y} \right] = \Omega({\bf y})$.
In view of (\ref{eq:Omega}), the averaged value of ${\bf y}(t)$ is ${\bf y}_{d}(t)$,
and for this reason we define
\begin{equation}
\hat{{\bf y}}(t) = {\bf y}(t) - {\bf y}_{d}(t) \;.
\label{eq:yhat}
\end{equation}
In \cite{SiKu14b} we performed an asymptotic expansion of the Fokker-Planck equation
for (\ref{eq:dhatx4})-(\ref{eq:dhaty3}).
Assuming the validity of this expansion, we showed that as $\varepsilon \to 0$ the distribution of
\begin{equation}
{\bf Y}(t) = \frac{\hat{{\bf y}}(t)}{\sqrt{\varepsilon}} \;,
\label{eq:checkby}
\end{equation}
converges weakly to that of
\begin{equation}
d{\bf Y}(t) = ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(t)) {\bf Y}(t) \,dt
+ M({\bf y}_{d}(t)) \,d{\bf W}(t) \;,
\label{eq:linearDiffusionApprox2}
\end{equation}
where
\begin{equation}
M({\bf y}) = \left[ -\frac{b_L({\bf y})-b_R({\bf y})}{a_L({\bf y})+a_R({\bf y})} \,\bigg|\, I \right] D \;,
\label{eq:M}
\end{equation}
and $I$ is the $(N-1)$-dimensional identity matrix.
In order to obtain statistics for $t^S$ and ${\bf x}^S$ by applying
standard first passage theory to (\ref{eq:linearDiffusionApprox2}),
we require strong convergence.
For this reason we use the method of averaging to derive a linear diffusion approximation,
which provides strong convergence \cite{Ki03,Ar03}, and compare it to (\ref{eq:linearDiffusionApprox2}).
Expanding (\ref{eq:dhaty3}) about ${\bf y} = {\bf y}_{d}$ produces\removableFootnote{
We start with (here $r = \frac{t}{\varepsilon}$)
\begin{align}
\frac{1}{\varepsilon} d\hat{{\bf y}}(r) &= \frac{1}{\varepsilon} d{\bf y}(r) - \frac{1}{\varepsilon} d{\bf y}_{d}(\varepsilon r) \nonumber \\
&= F_0(z(r),{\bf y}(r)) \,dr + \left[ \begin{array}{c}
e_2^{\sf T} \\ \vdots \\ e_N^{\sf T}
\end{array} \right]
D \,d{\bf W}(r) - \Omega({\bf y}_{d}(\varepsilon r)) \,dr + O(\varepsilon) \;.
\end{align}
We then substitute
\begin{equation}
F_0(z(r),{\bf y}(r)) = F_0(z(r),{\bf y}_{d}(\varepsilon r))
+ \left( {\rm D}_{\bf y} F_0 \right)(z(t),{\bf y}_{d}(\varepsilon r)) \left( {\bf y}(r) - {\bf y}_{d}(\varepsilon r) \right)
+ O \left( \big| {\bf y} - {\bf y}_{d} \big|^2 \right) \;,
\end{equation}
and note that the error term is $O(\varepsilon)$.
Finally we add and subtract the term
$({\rm D}_{\bf y} \Omega)({\bf y}_{d}(\varepsilon r)) \hat{{\bf y}}(r)$
and claim that
$\left( \left( {\rm D}_{\bf y} F_0 \right)(z(t),{\bf y}_{d}(\varepsilon r))
- ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(\varepsilon r)) \right) \hat{{\bf y}}(r) = O \left( \sqrt{\varepsilon} \right)$,
although I don't know how to justify this.
}
\begin{equation}
d\hat{{\bf y}}(t) = ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(t)) \hat{{\bf y}}(t) \,dt +
\big( F_0(z(t),{\bf y}_{d}(t)) - \Omega({\bf y}_{d}(t)) \big) \,dt
+ \sqrt{\varepsilon} \left[ \begin{array}{c}
e_2^{\sf T} \\ \vdots \\ e_N^{\sf T}
\end{array} \right]
D \,d{\bf W}(t) + O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:linearDiffusionApprox0}
\end{equation}
The fast variable $z(t)$ appears in only the middle term of (\ref{eq:linearDiffusionApprox0}).
The essence of the linear diffusion approximation is to replace this term with an equivalent diffusion
\cite{FrWe12,PaSt08,MoCu11,Kh66b}.
Such a computation is beyond the scope of this paper in the general situation
that the noise terms of (\ref{eq:dhatx4}) and (\ref{eq:dhaty3}) are correlated.
In this case the middle term of (\ref{eq:linearDiffusionApprox0})
and the noise term of (\ref{eq:linearDiffusionApprox0}) are not independent
and it seems necessary to study the occupation times of $z(t)$ on
either side of zero, as in \cite{SiKu14b}\removableFootnote{
To describe short time dynamics with correlated noise in the simplest case
it seems that we require knowledge of the joint PDF of $W(r)$ and the positive occupation time of $z(t)$.
On the other hand it is not clear that a ``diffusion'' approximation is even appropriate,
that is it is possible that $\hat{{\bf y}}$ cannot be accurately described by an equation of the form
$d \hat{{\bf y}} = a(\hat{{\bf y}}) \,dt + b(\hat{{\bf y}}) \,d{\bf V}(t)$.
Over long times we know that the distribution of $\hat{{\bf y}}$ is roughly Gaussian \cite{SiKu14b}.
}.
With the correlation matrix of (\ref{eq:dhatx4})-(\ref{eq:dhaty3})
partitioned as in (\ref{eq:alphaBetaGamma}),
if $\beta = 0$ then the noise terms of (\ref{eq:dhatx4}) and (\ref{eq:dhaty3}) are uncorrelated.
In this case (\ref{eq:linearDiffusionApprox0}) admits the linear diffusion approximation
\begin{equation}
d\hat{{\bf y}}(t) = ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(t)) \hat{{\bf y}}(t) \,dt
+ \sigma({\bf y}_{d}(t)) \sqrt{\alpha \varepsilon} \,dW(t)
+ \sqrt{\varepsilon} \tilde{D} \,d{\bf V}(t) \;,
\label{eq:linearDiffusionApprox}
\end{equation}
where $W(t)$ is a one-dimensional Brownian motion,
${\bf V}(t)$ is an $(N-1)$-dimensional Brownian motion independent of $W(t)$,
$\tilde{D} \tilde{D}^{\sf T} = \gamma$, and
\begin{equation}
\sigma \sigma^{\sf T} = \frac{(b_L-b_R) (b_L-b_R)^{\sf T}}{(a_L+a_R)^2} \;.
\label{eq:sigma2}
\end{equation}
The formula (\ref{eq:sigma2}) is derived in Appendix \ref{sec:SIGMA}
by using an explicit expression for the transitional PDF of the leading order
truncation of (\ref{eq:dhatx4}).
The approximation (\ref{eq:linearDiffusionApprox}) is called ``linear''
because the drift term of (\ref{eq:linearDiffusionApprox}) is linear.
The validity of (\ref{eq:linearDiffusionApprox})
requires that the drift term of (\ref{eq:dhatx4}) is Lipschitz in ${\bf y}$
and is not influenced by the fact that this drift term is discontinuous in $z$.
The correlation matrices for the noise terms in (\ref{eq:linearDiffusionApprox})
are $\sigma \sigma^{\sf T} \alpha$ and $\gamma$, respectively.
In comparison, the correlation matrix for (\ref{eq:linearDiffusionApprox2}) is
\begin{equation}
M M^{\sf T} = \sigma \sigma^{\sf T} \alpha - \sigma \beta^{\sf T} - \beta \sigma^{\sf T} + \gamma \;,
\label{eq:MMT}
\end{equation}
and so (\ref{eq:linearDiffusionApprox}) is equivalent to (\ref{eq:linearDiffusionApprox2})
when $\beta = 0$.
Given this agreement we conjecture that (\ref{eq:linearDiffusionApprox2}) has strong convergence for any $D$,
with which we may apply standard first passage theory to (\ref{eq:linearDiffusionApprox2})
and obtain statistics for $t^S$ and ${\bf x}^S$.
Indeed the first passage theory
provides a good match to the results of Monte-Carlo simulations of the relay control system
for various choices of $D$ as discussed in the next section.
Equation (\ref{eq:linearDiffusionApprox2}) is a time-dependent Ornstein-Uhlenbeck process \cite{Sc10,Ga09},
and thus the PDF for ${\bf Y}(t)$ is the Gaussian
\begin{equation}
p^S({\bf Y},t) = \frac{1}{(2 \pi \varepsilon)^{\frac{N-1}{2}} \sqrt{\det(\Theta(t))}}
{\rm e}^{-\frac{1}{2} {\bf Y}^{\sf T} \Theta(t)^{-1} {\bf Y}} \;,
\label{eq:pycheck}
\end{equation}
where
\begin{equation}
\Theta(t) = \int_0^t
{\rm e}^{\int_s^t ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(\tilde{s})) \,d\tilde{s}}
M({\bf y}_{d}(s)) M({\bf y}_{d}(s))^{\sf T}
{\rm e}^{\int_s^t ({\rm D}_{\bf y} \Omega)({\bf y}_{d}(\tilde{s}))^{\sf T} \,d\tilde{s}} \,ds \;.
\label{eq:Theta}
\end{equation}
Then the leading order terms of ${\rm Var} \left( t^S \right)$ and ${\rm Cov} \left( {\bf y}^S \right)$
are found via standard first passage theory \cite{FrWe12}
(employed in \S\ref{sec:EXCUR}) applied to (\ref{eq:linearDiffusionApprox2}):
\begin{eqnarray}
{\rm Var} \left( t^S \right) &=& \frac{\theta_{11}(t_{d}^S)}
{e_1^{\sf T} \Omega({\bf y}_{d}(t_{d}^S))} \varepsilon + O(\varepsilon^2) \;,
\label{eq:vartsl} \\
{\rm Cov} \left( {\bf y}^S \right) &=&
\left( I - \frac{\Omega e_1^{\sf T}}{e_1^{\sf T} \Omega} \right)
\Theta
\left( I - \frac{\Omega e_1^{\sf T}}{e_1^{\sf T} \Omega} \right)^{\sf T}
\bigg|_{{\bf y} = {\bf y}_{d}(t_{d}^S)} \varepsilon + O(\varepsilon^2) \;,
\label{eq:covbysl}
\end{eqnarray}
where $\theta_{11}$ is the top left entry of $\Theta$.
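In practice $\Theta(t)$ can be computed by integrating the matrix differential equation
$\dot{\Theta} = ({\rm D}_{\bf y} \Omega) \Theta + \Theta ({\rm D}_{\bf y} \Omega)^{\sf T} + M M^{\sf T}$ with $\Theta(0) = 0$,
interpreting the exponentials in (\ref{eq:Theta}) as the state-transition matrix of the linearised flow.
A Python sketch with placeholder time-dependent coefficients:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n = 2   # dimension of y (placeholder)

# Hypothetical time-dependent Jacobian D_y Omega(y_d(t)) and diffusion M M^T.
def J(t):
    return np.array([[-0.5, 0.2 * t],
                     [0.0, -1.0]])
def MMt(t):
    return np.array([[0.3, 0.1],
                     [0.1, 0.2]])

def dTheta(t, th_flat):
    Th = th_flat.reshape(n, n)
    return (J(t) @ Th + Th @ J(t).T + MMt(t)).ravel()

sol = solve_ivp(dTheta, (0.0, 1.5), np.zeros(n * n), rtol=1e-8)
Theta = sol.y[:, -1].reshape(n, n)
print(Theta[0, 0])   # theta_11(1.5), as used in the variance formula
\end{verbatim}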
Lastly, since $x_1$ operates on a fast time-scale relative to ${\bf y}$,
to leading order, $x_1^S$ is uncorrelated with ${\bf y}^S$.
From (\ref{eq:pqss}), we have
\begin{equation}
{\rm Var}(x_1^S) =
\frac{a_L^2 + a_R^2}{4 a_L^2 a_R^2} \bigg|_{{\bf y} = {\bf y}_{d}(t_{d}^S)} \varepsilon^2
+ O(\varepsilon^3) \;.
\label{eq:varx1sl}
\end{equation}
\subsection{Summary and comparison to numerical simulations}
\begin{figure}
\caption{
A comparison of first passage statistics for stochastically perturbed sliding motion for
the relay control example, (\ref{eq:relayControlSystem2}).
\label{fig:checkSlide}
}
\end{figure}
\begin{figure}
\caption{
First passage statistics of the sliding phase for the relay control example
as in Fig.~\ref{fig:checkSlide}.
\label{fig:checkSlide2}
}
\end{figure}
Fig.~\ref{fig:checkSlide} compares the above theoretical results
with Monte-Carlo simulations of the relay control example
from ${\bf x}_0 = {\bf x}_{\Gamma}^M$ to $x_2 = \delta^-$.
For panel A, ${\rm Diff}(t^S)$ is approximated by $\varepsilon t^{S,1}$ using (\ref{eq:meantsl}),
and ${\rm Std}(t^S)$ is approximated using (\ref{eq:vartsl}).
For panel B, ${\rm Diff}(x_1^S)$ is approximated by (\ref{eq:meanx1sl}),
and ${\rm Std}(x_1^S)$ by (\ref{eq:varx1sl}).
Lastly for panel C, ${\rm Diff}(x_3^S)$ is approximated
with (\ref{eq:byBarDef}) and (\ref{eq:meanbysl1}),
and the standard deviation with (\ref{eq:covbysl}).
Notice that the approximations to the standard deviations of $t^S$ and $x_3^S$ are zero.
This is because, remarkably, for our example the matrix $M M^{\sf T}$ is identically zero, and thus so is $\Theta$.
To see why this is the case,
we notice that the left and right half-systems of (\ref{eq:relayControlSystem5})
are identical up to a constant vector, so that
\begin{equation}
a_L + a_R = e_1^{\sf T}
\left( \mathcal{B}^{(L)} - \mathcal{B}^{(R)} \right) \;, \qquad
b_L - b_R =
\left[ \begin{array}{c} e_2^{\sf T} \\ \vdots \\ e_N^{\sf T} \end{array} \right]
\left( \mathcal{B}^{(L)} - \mathcal{B}^{(R)} \right) \;,
\label{eq:aLplusaRbLplusbR}
\end{equation}
are constant.
Moreover, in view of (\ref{eq:calBLBRD}) we can write
\begin{equation}
D = \frac{1}{2} \left( \mathcal{B}^{(L)} - \mathcal{B}^{(R)} \right) e_1^{\sf T} \;.
\label{eq:calD2}
\end{equation}
By substituting (\ref{eq:sigma2}), (\ref{eq:aLplusaRbLplusbR}) and (\ref{eq:calD2}) into (\ref{eq:MMT}),
we immediately obtain $M M^{\sf T} = 0$.
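This cancellation is easily confirmed numerically.
In the Python sketch below the vector \texttt{v} plays the role of $\mathcal{B}^{(L)} - \mathcal{B}^{(R)}$
(its entries are arbitrary placeholder values);
constructing $D$ and $M$ as above gives $M M^{\sf T} = 0$ identically.
\begin{verbatim}
import numpy as np

N = 3
v = np.array([1.5, -0.7, 2.0])         # plays the role of B^(L) - B^(R) (placeholder)

D = 0.5 * np.outer(v, np.eye(N)[0])    # D = (1/2)(B^(L) - B^(R)) e_1^T
aL_plus_aR = v[0]                      # = e_1^T (B^(L) - B^(R))
bL_minus_bR = v[1:]

sigma = -bL_minus_bR / aL_plus_aR      # first column block of the bracket in (eq:M)
M = np.hstack([sigma[:, None], np.eye(N - 1)]) @ D
print(np.allclose(M @ M.T, 0.0))       # True: M M^T vanishes identically
\end{verbatim}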
This can also be explained geometrically.
The drift term of the stochastic differential equation (\ref{eq:sde2})
is piecewise-defined, and in general points in unrelated directions for $x_1 < 0$ and $x_1 > 0$.
When we rewrite (\ref{eq:sde2}) in terms of the variables
$x_1$ and $\hat{{\bf y}} = {\bf y} - {\bf y}_{d}(t)$
(representing deviations from the deterministic solution),
to lowest order the drift for $x_1 > 0$ is a scalar multiple of the drift for $x_1 < 0$.
For our example (\ref{eq:relayControlSystem2}),
both left and right drift vectors are multiples of $B$
and the noise term in (\ref{eq:relayControlSystem2})
is one-dimensional and also a multiple of $B$.
Consequently, to leading order deviations occur along a line in the direction of $B$.
Deviations in any direction orthogonal to the switching manifold are $O(\varepsilon)$,
hence in this case overall deviations are $O(\varepsilon)$.
For this reason, for our example, the $O(\sqrt{\varepsilon})$ term
representing deviations in $y_i$ is zero.
In Fig.~\ref{fig:checkSlide2} we repeat the numerical comparison
using $D = e_1 e_1^{\sf T}$ in place of $D = B e_1^{\sf T}$ in (\ref{eq:ABCDvalues3d})\removableFootnote{
See {\sc drawSlide3b.m}.
}.
Now $M M^{\sf T}$ is nonzero and
${\rm Std}(t^S)$ and ${\rm Std}(x_3^S)$ are $O(\sqrt{\varepsilon})$.
Again the theoretical calculations are consistent with the numerical simulations.
We have found that the first passage predictions match the results of Monte-Carlo simulations
for other choices of $D$, including those with $\beta \ne 0$\removableFootnote{
I performed the experiment using, $D = [1,0,1]^{\sf T} e_1^{\sf T}$,
for which the noise terms are correlated,
and found that the first passage theory predictions still seem to work, see {\sc drawSlide4.m}.
}.
\section{Escaping analysis}
\label{sec:ESCAPE}
\setcounter{equation}{0}
In this section we study (\ref{eq:sde}) near the origin
and in the range $\delta^- \le x_2 \le \delta^+$.
As in the previous section it is convenient to write
${\bf y} = [ x_2, \ldots, x_N ]^{\sf T}$ and, repeating (\ref{eq:sde3}), write (\ref{eq:sde}) as
\begin{equation}
\left[ \begin{array}{c} dx_1(t) \\ d{\bf y}(t) \end{array} \right] =
\left\{ \begin{array}{lc}
\left[ \begin{array}{c}
a_L({\bf y}(t)) + c_L({\bf y}(t)) x_1(t) + O(x_1(t)^2) \\
b_L({\bf y}(t)) + d_L({\bf y}(t)) x_1(t) + O(x_1(t)^2)
\end{array} \right]
\;, & x_1(t) < 0 \\
\left[ \begin{array}{c}
-a_R({\bf y}(t)) + c_R({\bf y}(t)) x_1(t) + O(x_1(t)^2) \\
b_R({\bf y}(t)) + d_R({\bf y}(t)) x_1(t) + O(x_1(t)^2)
\end{array} \right]
\;, & x_1(t) > 0
\end{array} \right\} \,dt + \sqrt{\varepsilon} D \,d{\bf W}(t) \;.
\label{eq:sde4}
\end{equation}
Here we expand the coefficients in (\ref{eq:sde4}) about ${\bf y} = 0$.
In view of assumptions (\ref{eq:a1})-(\ref{eq:a5}), we can write\removableFootnote{
Specifically,
(i) (\ref{eq:a2}) implies the first equation and $a_L > 0$,
(ii) (\ref{eq:a1}) and (\ref{eq:a5}) together imply the second equation,
(iii) (\ref{eq:a4}) implies $\frac{\partial a_R}{\partial x_2} > 0$,
(iv) (\ref{eq:a3}) implies the third equation and $b_{R1} > 0$, and
(v) (\ref{eq:a3}) implies the fourth equation.
}
\begin{equation}
\begin{gathered}
a_L({\bf y}) = a_L + O(||{\bf y}||) \;, \qquad
a_R({\bf y}) = \frac{\partial a_R}{\partial x_2} x_2 + O(||{\bf y}||^2) \;, \\
b_{R1}({\bf y}) = b_{R1} + O(||{\bf y}||) \;, \qquad
b_{Ri}({\bf y}) = \sum_{j=2}^N \frac{\partial b_{Ri}}{\partial x_j} x_j + O(||{\bf y}||^2) \;,
\forall i \ne 1 \;, \\
c_R({\bf y}) = c_R + O(||{\bf y}||) \;, \qquad
d_R({\bf y}) = d_R + O(||{\bf y}||) \;,
\end{gathered}
\end{equation}
where, on the right-hand sides, and in the remainder of this section, the coefficients are evaluated at ${\bf y} = 0$, and
\begin{equation}
a_L > 0 \;, \qquad
\frac{\partial a_R}{\partial x_2} < 0 \;, \qquad
b_{R1} > 0 \;.
\end{equation}
To study dynamics near the origin asymptotically in $\varepsilon$, we scale space and time.
Consider the general scaling\removableFootnote{
Note, \LaTeX~can't render $\mathcal{Y}$ bold!
}
\begin{equation}
\mathcal{X}_1 = \frac{x_1}{\varepsilon^{\lambda_1}} \;, \qquad
\mathcal{X}_i = \frac{x_i}{\varepsilon^{\lambda_2}} \;, \forall i \ne 1 \;, \qquad
\mathcal{T} = \frac{t}{\varepsilon^{\lambda_3}} \;,
\label{eq:tildes}
\end{equation}
where $\lambda_1, \lambda_2, \lambda_3 > 0$.
By substituting (\ref{eq:tildes}) into (\ref{eq:sde4}), for $\mathcal{X}_1 > 0$ we obtain\removableFootnote{
Originally I had this in terms of a PDE, but there are several reasons why I prefer the SDE.
\begin{enumerate}
\item
Scaling in the PDE appears more rigorous but neither way is properly rigorous.
To validate the asymptotic expansion one could try to apply the maximum principle,
but this is very difficult and well beyond the scope of this paper.
\item
In the context of the SDE the scaling seems easier to understand:
The values of $\lambda_1$, $\lambda_2$, $\lambda_3$ are such that the following three terms are leading order:
(i) noise in $\mathcal{X}_1$,
(ii) drift in $\mathcal{X}_1$,
and (iii) drift in $\mathcal{X}_2$.
\item
By doing the scaling in terms of a PDE,
one ends up with the desired BVP (\ref{eq:fpeu})-(\ref{eq:icu}),
but an extra leap is required to extract back the corresponding SDE (\ref{eq:du}).
Strictly speaking this SDE is not really necessary here,
but it does significantly add to our understanding of what is going on.
\item
If the scaling is done in the context of the elliptic BVP corresponding to escape from a
neighbourhood of the origin,
then since the PDE is independent of time,
the analysis gives us $\lambda_1$ and $\lambda_2$, but not $\lambda_3$.
\end{enumerate}
},
\begin{equation}
\begin{split}
d\mathcal{X}_1(\mathcal{T}) &=
\left( -\varepsilon^{\lambda_2+\lambda_3-\lambda_1}
{\textstyle \frac{\partial a_R}{\partial x_2}} \mathcal{X}_2(\mathcal{T})
+ \varepsilon^{\lambda_3} c_R \mathcal{X}_1(\mathcal{T}) +
O \left( \varepsilon^{\lambda_3-\lambda_1+2{\rm min}(\lambda_1,\lambda_2)} \right) \right) \,d\mathcal{T} \\
&+ \varepsilon^{\frac{\lambda_3+1}{2} - \lambda_1} e_1^{\sf T} D \,d{\bf W}(\mathcal{T}) \;, \\
d\mathcal{X}_2(\mathcal{T}) &= \left( \varepsilon^{\lambda_3-\lambda_2} b_{R1} +
O \left( \varepsilon^{\lambda_3-\lambda_2+{\rm min}(\lambda_1,\lambda_2)} \right) \right) \,d\mathcal{T}
+ \varepsilon^{\frac{\lambda_3+1}{2} - \lambda_2} e_2^{\sf T} D \,d{\bf W}(\mathcal{T}) \;, \\
d\mathcal{X}_i(\mathcal{T}) &=
O \left( \varepsilon^{\lambda_3-\lambda_2+{\rm min}(\lambda_1,\lambda_2)} \right) \,d\mathcal{T}
+ \varepsilon^{\frac{\lambda_3+1}{2} - \lambda_2} e_{i+1}^{\sf T} D \,d{\bf W}(\mathcal{T}) \;,
\quad \forall i \ge 3 \;,
\end{split}
\label{eq:sdetildeR}
\end{equation}
and for $\mathcal{X}_1 < 0$,
\begin{equation}
\begin{split}
d\mathcal{X}_1(\mathcal{T}) &= \left( \varepsilon^{\lambda_3-\lambda_1} a_L +
O \left( \varepsilon^{\lambda_3-\lambda_1+{\rm min}(\lambda_1,\lambda_2)} \right) \right) \,d\mathcal{T}
+ \varepsilon^{\frac{\lambda_3+1}{2} - \lambda_1} e_1^{\sf T} D \,d{\bf W}(\mathcal{T}) \;, \\
d\mathcal{X}_i(\mathcal{T}) &= \left( \varepsilon^{\lambda_3-\lambda_2} b_{Li} +
O \left( \varepsilon^{\lambda_3-\lambda_2+{\rm min}(\lambda_1,\lambda_2)} \right) \right) \,d\mathcal{T}
+ \varepsilon^{\frac{\lambda_3+1}{2} - \lambda_2} e_{i+1}^{\sf T} D \,d{\bf W}(\mathcal{T}) \;,
\quad \forall i \ge 2 \;.
\end{split}
\label{eq:sdetildeL}
\end{equation}
To identify the appropriate choice for each $\lambda_j$, $j=1,2,3$,
one would normally consider the asymptotic behaviour as $\varepsilon \to 0$
of the Fokker-Planck equation for the joint probability density of $\mathcal{X}_i$ for all $i$.
We do not provide such calculations here,
and instead consider the stochastic differential equation directly for the sake of brevity.
Both approaches lead to the same conclusions.
First note that the ratio of the drift in the $\mathcal{X}_1$-direction for $\mathcal{X}_1 > 0$,
to the drift in the $\mathcal{X}_1$-direction for $\mathcal{X}_1 < 0$, approaches zero as $\varepsilon \to 0$,
and for $\mathcal{X}_1 < 0$ this drift is directed to the right.
Therefore we expect sample solutions to be located almost entirely in the right half-space.
We then choose $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that three
terms in (\ref{eq:sdetildeR}) are $O(1)$ and all other terms are of higher order.
This gives two possibilities.
First, we may have $\lambda_1 = \lambda_2 = \lambda_3 = 1$,
but this proves unhelpful because in this case the drift in $\mathcal{X}_1$ is a higher order term
and so this scaling does not capture dynamics escaping a neighbourhood of the switching manifold,
which is what we are trying to describe.
This suggests we need to look on a longer time-scale, that is, we should choose $\lambda_3 < 1$.
Indeed the second possibility is:
\begin{equation}
\lambda_1 = \frac{2}{3} \;, \qquad
\lambda_2 = \frac{1}{3} \;, \qquad
\lambda_3 = \frac{1}{3} \;.
\label{eq:chirholambda}
\end{equation}
Then for $\mathcal{X}_1 < 0$,
\begin{equation}
d\mathcal{X}_1(\mathcal{T}) = \frac{1}{\varepsilon^{\frac{1}{3}}} \,a_L \,d\mathcal{T} + O(\varepsilon^0) \;.
\label{eq:dXEscapeLeft}
\end{equation}
Consideration of the corresponding Fokker-Planck equation for $\mathcal{X}_1 < 0$
leads to the conclusion that the probability that $\mathcal{X}_1 < 0$ is negligible as $\varepsilon \to 0$.
Consequently we consider the dynamics for $\mathcal{X}_1> 0$ only,
providing an appropriate boundary condition at $\mathcal{X}_1 =0$ below in (\ref{eq:bc0u}).
With (\ref{eq:sdetildeR}) and (\ref{eq:chirholambda}), for $\mathcal{X}_1 > 0$
\begin{equation}
\begin{split}
d\mathcal{X}_1(\mathcal{T}) &= -\frac{\partial a_R}{\partial x_2} \mathcal{X}_2(\mathcal{T}) \,d\mathcal{T}
+ e_1^{\sf T} D \,d{\bf W}(\mathcal{T})
+ O \left( \varepsilon^{\frac{1}{3}} \right) \;, \\
d\mathcal{X}_2(\mathcal{T}) &= b_{R1} \,d\mathcal{T}
+ O \left( \varepsilon^{\frac{1}{3}} \right) \;, \\
d\mathcal{X}_i(\mathcal{T}) &= O \left( \varepsilon^{\frac{1}{3}} \right) \;,
\quad \forall i \ge 3 \;.
\end{split}
\label{eq:sdetildeR2}
\end{equation}
As $\varepsilon \to 0$, (\ref{eq:sdetildeR2}) approaches
\begin{equation}
\begin{split}
d\mathcal{X}_1(\mathcal{T}) &= -\frac{\partial a_R}{\partial x_2} \mathcal{X}_2(\mathcal{T}) \,d\mathcal{T}
+ \sqrt{\alpha} \,dW(\mathcal{T}) \;, \\
d\mathcal{X}_2(\mathcal{T}) &= b_{R1} \,d\mathcal{T} \;, \\
d\mathcal{X}_i(\mathcal{T}) &= 0 \;, \quad \forall i \ge 3 \;,
\end{split}
\label{eq:sdetildeR3}
\end{equation}
where $W(\mathcal{T})$ is a scalar Brownian motion,
and $\alpha = (D D^{\sf T})_{11}$ (\ref{eq:alphaBetaGamma}).
We now perform an additional scaling to simplify (\ref{eq:sdetildeR3}).
Note that $\mathcal{X}_2(\mathcal{T})$, as governed by the limiting equation (\ref{eq:sdetildeR3}), is deterministic.
Let $\mathcal{T}_0$ be the time at which $\mathcal{X}_2 = 0$.
Then, with
\begin{equation}
u = \frac{\left| \frac{\partial a_R}{\partial x_2} \right|^{\frac{1}{3}} b_{R1}^{\frac{1}{3}}}
{\alpha^{\frac{4}{3}}} \,\mathcal{X}_1 \;, \qquad
s = \frac{\left| \frac{\partial a_R}{\partial x_2} \right|^{\frac{2}{3}} b_{R1}^{\frac{2}{3}}}
{\alpha^{\frac{2}{3}}} \left( \mathcal{T} - \mathcal{T}_0 \right) \;,
\label{eq:us}
\end{equation}
(\ref{eq:sdetildeR3}) reduces to\removableFootnote{
The appropriate scaling of $\mathcal{X}_2$ is
$\tilde{Y} = \frac{\left| \frac{\partial a_R}{\partial x_2} \right|^{\frac{2}{3}}}
{b_{R1}^{\frac{1}{3}} \alpha^{\frac{2}{3}}} \,\mathcal{X}_2$,
with which we have $\tilde{Y}(s) = s$.
}
\begin{equation}
du(s) = s \,ds + dW(s) \;.
\label{eq:du}
\end{equation}
For the purposes of describing escaping dynamics,
we determine the transitional PDF for (\ref{eq:du}), call it $p^E(u,s)$,
by writing it as the solution to a boundary value problem.
The PDF satisfies the Fokker-Planck equation
\begin{equation}
p^E_s = -s p^E_u + \frac{1}{2} p^E_{uu} \;.
\label{eq:fpeu}
\end{equation}
To produce meaningful solutions we impose the following boundary condition at infinity:
\begin{equation}
p^E(u,s) \to 0 {\rm ~as~} u \to \infty \;.
\end{equation}
At $u=0$ we use the boundary condition
\begin{equation}
s p^E(0,s) - \frac{1}{2} p^E_u(0,s) = 0 \;,
\label{eq:bc0u}
\end{equation}
which may be justified in two ways.
First, the requirement that $u > 0$ with probability $1$ is equivalent to
$\frac{\partial}{\partial s} \int_0^\infty p^E(u,s) \,du = 0$,
and applying this identity to (\ref{eq:fpeu}) produces (\ref{eq:bc0u}).
Second, by (\ref{eq:dXEscapeLeft}) dynamics for $\mathcal{X}_1 < 0$
has drift directed to the right, which in the limit $\varepsilon \to 0$ is infinitely large.
Therefore sample solutions to (\ref{eq:du}) that reach $u=0$ are reflected back to the right;
indeed (\ref{eq:bc0u}) is a reflecting boundary condition \cite{Sc10,Ga09}.
Also we suppose $u(s_0) = u_0$ at some initial time $s_0$, which corresponds to the initial condition
\begin{equation}
p^E(u,s_0)
= \delta(u - u_0) \;.
\label{eq:icu}
\end{equation}
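The boundary value problem can also be probed by direct simulation of (\ref{eq:du}), reflecting sample paths at $u = 0$.
The Python sketch below uses an Euler--Maruyama scheme with reflection by taking absolute values;
the finite values of $s_0$ and $u_0$, and the step size, are assumptions made for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
s0, u0, s_end, ds, n_paths = -5.0, 0.01, 2.0, 1e-3, 2000

n_steps = int((s_end - s0) / ds)
u = np.full(n_paths, u0)
s = s0
for _ in range(n_steps):
    u = u + s * ds + rng.normal(0.0, np.sqrt(ds), n_paths)
    u = np.abs(u)          # reflecting boundary at u = 0
    s += ds

print("sample mean of u at s =", s_end, ":", u.mean())
\end{verbatim}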
An explicit expression for the solution to (\ref{eq:fpeu})-(\ref{eq:icu}) is derived in \cite{KnYa01}
(by scaling $p^E$ in order to remove the
time-dependency in the coefficients of (\ref{eq:fpeu}) and taking Laplace transforms)
but takes a rather complicated form.
For our purposes it is useful to take $s_0 \to -\infty$,
because $s_0 \propto \frac{\delta^-}{\varepsilon^{\frac{1}{3}}} \to -\infty$, as $\varepsilon \to 0$,
when taking an initial point with $x_2 = \delta^-$ in the original equation (\ref{eq:sde4}).
Also, it is reasonable to take $u_0 \to 0$
because, as shown in \S\ref{sec:SLIDE}, $x_1(t)$ is a fast variable
and with high probability repeatedly intersects the switching manifold
as the sample solution approaches the escaping phase.
As shown by Knessl \cite{Kn00},
the solution to (\ref{eq:fpeu})-(\ref{eq:icu}) with $s_0 \to -\infty$ and $u_0 \to 0$
is given by
\begin{equation}
p^E(u,s) = 2^{\frac{2}{3}} {\rm e}^{-\frac{s^3}{6}} {\rm e}^{u s} Y(u,s) \;,
\label{eq:Kn00}
\end{equation}
where $Y$ is given by the inverse Laplace transform\removableFootnote{
The integral is over some Bromwich contour
(here the imaginary axis is a suitable contour
because all singularities of $\frac{{\rm Ai} \left( 2^{\frac{1}{3}} (u+\nu) \right)}
{{\rm Ai} \left( 2^{\frac{1}{3}} \nu \right)^2}$ have negative real part,
and this is what we use for numerical evaluation of $Y(u,s)$).
The integral converges because $\frac{{\rm Ai} \left( 2^{\frac{1}{3}} (u+{\rm i}\beta) \right)}
{{\rm Ai} \left( 2^{\frac{1}{3}} {\rm i}\beta \right)^2} \to 0$ sufficiently quickly as $\beta \to \infty$.
However, since $\frac{{\rm Ai} \left( 2^{\frac{1}{3}} (u+\nu) \right)}
{{\rm Ai} \left( 2^{\frac{1}{3}} \nu \right)^2} \to \infty$ as $\nu \to \infty$,
this function cannot be the Laplace transform of anything!
Thus, oddly, the Laplace transform of $Y(u,s)$ is equal to something other than
$\frac{{\rm Ai} \left( 2^{\frac{1}{3}} (u+\nu) \right)}
{{\rm Ai} \left( 2^{\frac{1}{3}} \nu \right)^2}$.
}
\begin{equation}
Y(u,s) = \frac{1}{2 \pi {\rm i}} \int_{\rm Br}
\frac{{\rm Ai} \left( 2^{\frac{1}{3}} (u+\nu) \right)}
{{\rm Ai} \left( 2^{\frac{1}{3}} \nu \right)^2} \,{\rm e}^{\nu s} \,d\nu \;,
\end{equation}
and ${\rm Ai}(u)$ is the Airy function\removableFootnote{
${\rm Ai}(u) = \frac{1}{\pi} \int_0^\infty \cos \left( \frac{1}{3} s^3 + u s \right) \,ds$
}.
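Numerically, $Y(u,s)$ may be evaluated by integrating along the imaginary axis, as noted above.
The Python sketch below does this with a simple truncated quadrature
(the truncation limit and grid size are assumptions), and then assembles $p^E(u,s)$ from (\ref{eq:Kn00}).
\begin{verbatim}
import numpy as np
from scipy.special import airy
from scipy.integrate import trapezoid

def Y(u, s, beta_max=8.0, n=4001):
    # Bromwich integral along nu = i*beta; truncation limits are assumptions.
    beta = np.linspace(-beta_max, beta_max, n)
    nu = 1j * beta
    cbrt2 = 2.0 ** (1.0 / 3.0)
    integrand = airy(cbrt2 * (u + nu))[0] / airy(cbrt2 * nu)[0] ** 2 * np.exp(nu * s)
    # d nu = i d beta cancels the i in the 1/(2 pi i) prefactor
    return np.real(trapezoid(integrand, beta)) / (2.0 * np.pi)

u, s = 0.5, 1.0
pE = 2.0 ** (2.0 / 3.0) * np.exp(-s ** 3 / 6.0) * np.exp(u * s) * Y(u, s)
print(pE)   # value of p^E(u,s) in (eq:Kn00)
\end{verbatim}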
The PDF (\ref{eq:Kn00}) is shown in Fig.~\ref{fig:Kn00}.
\begin{figure}
\caption{
The PDF, (\ref{eq:Kn00}).
\label{fig:Kn00}
}
\end{figure}
\begin{figure}
\caption{
First passage statistics of an escaping phase for
the relay control example, (\ref{eq:relayControlSystem2}).
\label{fig:checkEscape}
}
\end{figure}
Fig.~\ref{fig:checkEscape} shows the result of Monte-Carlo simulations of the relay control example
(\ref{eq:relayControlSystem2}) with (\ref{eq:ABCDvalues3d})-(\ref{eq:paramValues})
for first passage from ${\bf x} = {\bf x}_{d}^S$ to $x_2 = \delta^+$
using $\delta^- = -0.1$ and $\delta^+ = 0.2$.
The curves in panel B were obtained by using (\ref{eq:Kn00}).
If smaller values of $\delta^-$ and $\delta^+$ are used,
then the approximation of ${\rm Diff}(x_1^E)$ for small $\varepsilon$ improves
because the approximation is applied over a smaller region.
Since $\delta^-$ and $\delta^+$ are small,
the values in Fig.~\ref{fig:checkEscape} are significantly smaller
than the analogous values of the previous two sections.
In the next section we find that,
in agreement with this observation,
the escaping phase does not have a significant effect on the statistics of the oscillation time, $t_{\rm osc}$,
because the escaping phase corresponds to a relatively short time-frame.
Moreover, the leading-order description of the escaping dynamics derived in this section
does not provide us with a way to accurately approximate the data in panels A and C.
\section{Combining the results}
\label{sec:COMB}
\setcounter{equation}{0}
Here we use the results of Sections \ref{sec:EXCUR}--\ref{sec:ESCAPE}
to construct approximations to ${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$
for the relay control system (\ref{eq:ABCDvalues3d})-(\ref{eq:relayControlSystem2}).
We employ approximations at various stages of the construction
in order to obtain results that can be interpreted in terms
of the values of the parameters and geometric features of the system.
The first passage times and locations discussed in Sections \ref{sec:EXCUR}--\ref{sec:ESCAPE}
are stochastic quantities that depend significantly on one another.
We were able to ignore this interdependence in these sections
because in each case our attention was restricted to an individual phase.
In this section, however, it is necessary to consider the dependence carefully.
As evident from Fig.~\ref{fig:ppSketch},
on either side of the switching manifold, $\Gamma$ rapidly approaches a slow manifold.
For this reason, each point at which a regular phase ends, denoted ${\bf x}^R$,
is practically independent of the point at which the regular phase starts, denoted ${\bf x}^E$.
Consequently, we ignore the distribution of ${\bf x}^E$ in the computation of the distribution of ${\bf x}^R$.
For systems for which such a simplification is not possible,
one could use the results of the previous three sections
to numerically evaluate the statistics of the oscillation time via an iterative procedure.
In such a procedure one would consecutively apply
the distributions of the various phases of the dynamics in order,
rather than simulating many realizations of the equations.
The final approximations are affected by the values of $\delta^-$ and $\delta^+$.
In particular, the results for escaping are based on a series expansion of the system about ${\bf x} = 0$
that is applied for values of $x_2$ over the range $\delta^- < x_2 < \delta^+$.
For this reason, errors in approximations relating to escaping increase with the magnitude of $\delta^-$ and $\delta^+$,
and therefore we need $\delta^-$ and $\delta^+$ to be small.
However, the results for sliding are singular in the limit $\delta^- \to 0$,
and for this reason the accuracy of approximations relating to sliding decreases as $\delta^-$ approaches zero.
Also, the results for the regular phase assume that initial points on $x_2 = \delta^+$
are sufficiently far from $x_1 = 0$ so that a sample solution from an initial point is highly
unlikely to reach $x_1 = 0$ before undergoing a large excursion with $x_1 > 0$.
Hence the values of $\delta^-$ and $\delta^+$ cannot be too small.
For simplicity we take $\delta^-$ and $\delta^+$ independent of $\varepsilon$.
In view of the above points, and based on using $\varepsilon \le 0.0001$, throughout this paper we have used
\begin{equation}
\delta^- = -0.1 \;, \qquad \delta^+ = 0.2 \;.
\label{eq:specificdeltas}
\end{equation}
The final approximations are not substantially altered by using other values of $\delta^-$ and $\delta^+$
that are the same order of magnitude as the values in (\ref{eq:specificdeltas}).
Next we introduce additional notation,
for which it is helpful to refer to Fig.~\ref{fig:phaseSchem}.
From an initial point ${\bf x}^M$ that lies on the switching manifold and is near ${\bf x}_{\Gamma}^M$,
we let $t^S$ and ${\bf x}^S$ denote the first passage time and location to $x_2 = \delta^-$.
We let
\begin{eqnarray}
{\rm Diff} \left( t^S \big| {\bf x}^M \right) &\equiv& \mathbb{E} \left[ t^S \big| {\bf x}^M \right]
- t_{d}^S \left( {\bf x}^M \right) \;,
\label{eq:Difftsl} \\
{\rm Diff} \left( {\bf x}^S \big| {\bf x}^M \right) &\equiv& \mathbb{E} \left[ {\bf x}^S \big| {\bf x}^M \right]
- {\bf x}_{d}^S \left( {\bf x}^M \right) \;,
\label{eq:Diffbxsl}
\end{eqnarray}
denote the differences between their means and deterministic values.
Below we evaluate (\ref{eq:Difftsl}) and (\ref{eq:Diffbxsl}) at ${\bf x}_{\Gamma}^M$ by using
(\ref{eq:tSlMeanSeries}) to compute $\mathbb{E} \left[ t^S \big| {\bf x}_{\Gamma}^M \right]$, and
(\ref{eq:bySlMeanSeries}) and (\ref{eq:meanx1sl})
to compute $\mathbb{E} \left[ {\bf x}^S \big| {\bf x}_{\Gamma}^M \right]$.
Also, we evaluate ${\rm Std} \left( t^S \big| {\bf x}_{\Gamma}^M \right)$ with (\ref{eq:vartsl}),
and ${\rm Cov} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)$ with (\ref{eq:covbysl}) and (\ref{eq:varx1sl}).
Similarly, from an initial point ${\bf x}^E$ that lies on $x_2 = \delta^+$ and near ${\bf x}_{\Gamma}^E$,
we let $t^R$ and ${\bf x}^R$ denote the first passage time and location to the switching manifold.
Below we evaluate
\begin{eqnarray}
{\rm Diff} \left( t^R \big| {\bf x}^E \right) &\equiv& \mathbb{E} \left[ t^R \big| {\bf x}^E \right]
- t_{d}^R \left( {\bf x}^E \right) \;,
\label{eq:Difftex} \\
{\rm Diff} \left( {\bf x}^R \big| {\bf x}^E \right) &\equiv& \mathbb{E} \left[ {\bf x}^R \big| {\bf x}^E \right]
- {\bf x}_{d}^R \left( {\bf x}^E \right) \;,
\label{eq:Diffbxex}
\end{eqnarray}
at ${\bf x}_{\Gamma}^E$ by using (\ref{eq:meantex3}) and (\ref{eq:meanxjex2}), respectively.
${\rm Std} \left( t^R \big| {\bf x}_{\Gamma}^E \right)$ and ${\rm Cov} \left( {\bf x}^R \big| {\bf x}_{\Gamma}^E \right)$
are given by (\ref{eq:vartex}) and (\ref{eq:covxex}), respectively.
Calculations relating to the escaping phase do not enter into our final approximations
because escaping phases occur over significantly shorter time-frames than sliding and regular phases,
as discussed at the end of \S\ref{sec:ESCAPE}.
\subsection{An approximation to ${\rm Diff}(t_{\rm osc})$}
\label{sub:DIFF}
From a sample solution to (\ref{eq:ABCDvalues3d})-(\ref{eq:relayControlSystem2})
computed over a length of time that is substantially greater than the period of $\Gamma$,
we can identify first passage locations,
${\bf x}^M$, ${\bf x}^S$, ${\bf x}^E$ and ${\bf x}^R$,
and first passage times,
$t^S$, $t^E$ and $t^R$
corresponding to the beginning and end of sliding, escaping and regular phases.
Since (\ref{eq:ABCDvalues3d})-(\ref{eq:relayControlSystem2}) exhibits a simple symmetry about $x_1 = 0$,
the distributions of ${\bf x}^M$ and ${\bf x}^R$ are related by this symmetry.
Also, as discussed above, each ${\bf x}^R$ is practically independent of the previous point ${\bf x}^E$.
Consequently it is suitable to use the approximation
\begin{equation}
{\rm Diff} \left( {\bf x}^M \right) \approx -{\rm Diff} \left( {\bf x}^R \big| {\bf x}_{\Gamma}^E \right) \;.
\label{eq:Diffbxpr2}
\end{equation}
We can compute ${\rm Diff} \left( t^S \right)$
by evaluating the following expression that is derived in Appendix \ref{sec:TSL}
via a Taylor series expansion
\begin{equation}
{\rm Diff} \left( t^S \right) = {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right)
+ \sum_{i=1}^N \sum_{j=1}^N {\rm D}_{{\bf x}}^2 t_{d}^S \left( {\bf x}_{\Gamma}^M \right)_{i,j}
{\rm Cov} \left( x_{\Gamma}^M \right)_{i,j} + O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:Difftsl2}
\end{equation}
Note, $t_{d}^S$ is a function of the point ${\bf x}^M$ at which the sliding phase begins.
In (\ref{eq:Difftsl2}), $t_{d}^S$ and its derivatives are evaluated at ${\bf x}^M = {\bf x}_{\Gamma}^M$.
Each term in (\ref{eq:Difftsl2}) is $O(\varepsilon)$,
but, for our example, components of the vector ${\rm Diff} \left( {\bf x}^M \right)$ are of much larger magnitude than
elements of the matrix ${\rm Cov} \left( x_{\Gamma}^M \right)$.
For this reason we use the approximation
\begin{equation}
{\rm Diff} \left( t^S \right) \approx {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right) \;,
\label{eq:Difftsl3}
\end{equation}
which is evaluated using (\ref{eq:Diffbxpr2}).
Similarly we use
\begin{align}
{\rm Diff} \left( t^E \right) &\approx {\rm Diff} \left( t^E \big| {\bf x}_{\Gamma}^S \right)
+ {\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T} {\rm Diff} \left( {\bf x}^S \right) \;, \\
{\rm Diff} \left( t^R \right) &\approx {\rm Diff} \left( t^R \big| {\bf x}_{\Gamma}^E \right)
+ {\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T} {\rm Diff} \left( {\bf x}^E \right) \;, \\
{\rm Diff} \left( {\bf x}^S \right) &\approx {\rm Diff} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right) {\rm Diff} \left( {\bf x}^M \right) \;, \\
{\rm Diff} \left( {\bf x}^E \right) &\approx {\rm Diff} \left( {\bf x}^E \big| {\bf x}_{\Gamma}^S \right)
+ {\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right) {\rm Diff} \left( {\bf x}^S \right) \;.
\label{eq:DiffMany}
\end{align}
The difference for the time of half an oscillation is given simply by\removableFootnote{
Despite weak dependencies between the $t^S$, $t^E$ and $t^R$, this expression is exact
because it concerns mean values.
Compare equation (\ref{eq:Varthalfosc3}).
}
\begin{equation}
{\rm Diff} \left( t_{\frac{1}{2} {\rm osc}} \right) = {\rm Diff} \left( t^S \right)
+ {\rm Diff} \left( t^E \right) + {\rm Diff} \left( t^R \right) \;.
\label{eq:Diffthalfosc}
\end{equation}
Substituting (\ref{eq:Difftsl3})-(\ref{eq:DiffMany}) into (\ref{eq:Diffthalfosc}) and expanding brackets
yields an approximation for ${\rm Diff}(t_{\frac{1}{2} {\rm osc}})$
that is a sum of nine terms\removableFootnote{
Specifically
\begin{eqnarray}
{\rm Diff} \left( t_{\frac{1}{2} {\rm osc}} \right)
&=& {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right)
+ {\rm Diff} \left( t^E \big| {\bf x}_{\Gamma}^S \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T} {\rm Diff} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T} {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Diff} \left( {\bf x}^M \right) \nonumber \\
&&+~{\rm Diff} \left( t^R \big| {\bf x}_{\Gamma}^E \right)
+ {\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T} {\rm Diff} \left( {\bf x}^E \big| {\bf x}_{\Gamma}^S \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T} {\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm Diff} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T} {\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right) {\rm Diff} \left( {\bf x}^M \right) \;.
\end{eqnarray}
}.
Monte-Carlo simulations reveal that for our example
three of these terms have significantly larger values than the remaining six terms,
and for simplicity we approximate ${\rm Diff}(t_{\frac{1}{2} {\rm osc}})$
using the three largest terms:
\begin{equation}
{\rm Diff} \left( t_{\frac{1}{2} {\rm osc}} \right)
\approx {\rm Diff} \left( t^R \big| {\bf x}_{\Gamma}^E \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right)
+ {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right) \;.
\label{eq:Diffthalfosc2}
\end{equation}
The first term in (\ref{eq:Diffthalfosc2}) represents
the additional time that regular phases take, on average, due to the presence of noise.
This term is negative-valued and large because by (\ref{eq:meantex3})
its magnitude is proportional to $\omega^4$, and we have used $\omega = 5$.
The second term in (\ref{eq:Diffthalfosc2}) represents
the average additional time that sliding phases take
due to noise causing the points ${\bf x}^M$, at which sliding phases start,
to be deviated from the deterministic value ${\bf x}_{\Gamma}^M$ in a particular direction, on average.
Indeed, as evident in Fig.~\ref{fig:checkExcur}-C,
small noise induces a large positive shift in the $x_3$-component of the average value of ${\bf x}^M$.
Finally the third term of (\ref{eq:Diffthalfosc2}) represents
the additional time that sliding phases take, on average, due to the noise.
In view of (\ref{eq:tSlMeanSeries}), (\ref{eq:yBar1}) and (\ref{eq:meantsl}),
this term is proportional to $\Lambda$ (\ref{eq:Lambda}).
The third term of (\ref{eq:Diffthalfosc2}) is large,
but not as large as the first term because $\Lambda$ is proportional to $\omega^2$.
The approximation (\ref{eq:Diffthalfosc2}) is compared with Monte-Carlo simulations in Fig.~\ref{fig:checkOsc}-A.
Also
\begin{equation}
{\rm Diff}(t_{\rm osc}) = 2 {\rm Diff}(t_{\frac{1}{2} {\rm osc}}) \;,
\label{eq:Difftosc2}
\end{equation}
is used for Fig.~\ref{fig:checkOsc}-B.
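For reference, assembling (\ref{eq:Diffthalfosc2}) and (\ref{eq:Difftosc2}) from previously computed quantities involves only a dot product and a few additions.
The short Python sketch below illustrates this; the numerical values are placeholders standing in for the conditioned differences and Jacobian obtained in the preceding sections, and are not the values used to produce Fig.~\ref{fig:checkOsc}.
\begin{verbatim}
# Sketch: assembling the three-term approximation to Diff(t_half_osc)
# and Diff(t_osc). All numerical inputs are illustrative placeholders.
import numpy as np

diff_tR_given_xE = -0.050              # Diff(t^R | x_Gamma^E), regular phase
diff_tS_given_xM = 0.012               # Diff(t^S | x_Gamma^M), sliding phase
grad_tS = np.array([0.0, 0.30, 0.10])  # D_x t_d^S(x_Gamma^M)
diff_xM = np.array([0.0, 0.02, 0.04])  # Diff(x^M)

diff_t_half = diff_tR_given_xE + grad_tS @ diff_xM + diff_tS_given_xM
diff_t_osc = 2.0 * diff_t_half         # Diff(t_osc) = 2 Diff(t_half_osc)
print(diff_t_half, diff_t_osc)
\end{verbatim}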
\begin{figure}
\caption{
A comparison of Monte-Carlo simulations with the theoretical approximations derived in the text
for oscillation times of the system with parameter values (\ref{eq:paramValues}).
\label{fig:checkOsc}}
\end{figure}
\subsection{An approximation to ${\rm Std}(t_{\rm osc})$}
Taylor expanding ${\bf x}^M$ about its deterministic value ${\bf x}_{\Gamma}^M$,
as in \S\ref{sub:DIFF}, also leads to the formula
\begin{equation}
{\rm Var} \left( t^S \right) = {\rm Var} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;,
\label{eq:combvartsl}
\end{equation}
which expresses ${\rm Var} \left( t^S \right)$ in terms of elements
that we can evaluate using equations derived in earlier sections.
A derivation of (\ref{eq:combvartsl}) is given in Appendix \ref{sec:TSL}.
Via similar calculations we obtain
\begin{eqnarray}
{\rm Var} \left( t^E \right) &=& {\rm Var} \left( t^E \big| {\bf x}_{\Gamma}^S \right)
+ {\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm Cov} \left( {\bf x}^S \right)
{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;, \\
{\rm Var} \left( t^R \right) &=& {\rm Var} \left( t^R \big| {\bf x}_{\Gamma}^E \right)
+ {\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T}
{\rm Cov} \left( {\bf x}^E \right)
{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;, \\
{\rm Cov} \left( {\bf x}^S \right) &=& {\rm Cov} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} + O \left( \varepsilon^{\frac{3}{2}} \right) \;, \\
{\rm Cov} \left( {\bf x}^E \right) &=& {\rm Cov} \left( {\bf x}^E \big| {\bf x}_{\Gamma}^S \right)
+ {\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm Cov} \left( {\bf x}^S \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T} + O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:VarMany}
\end{eqnarray}
We use the approximation
\begin{equation}
{\rm Var} \left( t_{\frac{1}{2} {\rm osc}} \right) \approx {\rm Var} \left( t^S \right)
+ {\rm Var} \left( t^E \right) + {\rm Var} \left( t^R \right) \;,
\label{eq:Varthalfosc3}
\end{equation}
because, for our example, the value of each $t^E$ is practically independent
of the preceding value of $t^S$, and the value of each $t^R$ is practically
independent of the preceding value of $t^E$.
This is due to strong attraction to $x_1=0$ for the duration of the sliding phase,
which is inherent in stochastically perturbed sliding motion
and causes the $x_1$-value of ${\bf x}^S$ (which is the primary influence on the value of $t^E$)
to have negligible correlation with $t^S$.
In an analogous fashion to the calculations in \S\ref{sub:DIFF},
by substituting (\ref{eq:combvartsl})-(\ref{eq:VarMany}) into (\ref{eq:Varthalfosc3})
and expanding brackets, we produce an approximation to ${\rm Var}(t_{\frac{1}{2} {\rm osc}})$
that is a sum of nine terms\removableFootnote{
Specifically,
\begin{eqnarray}
{\rm Var} \left( t_{\frac{1}{2} {\rm osc}} \right)
&=& {\rm Var} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)
+ {\rm Var} \left( t^E \big| {\bf x}_{\Gamma}^S \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm Cov} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm D}_{{\bf x}} t_{d}^E \left( {\bf x}_{\Gamma}^S \right) \nonumber \\
&&+~{\rm Var} \left( t^R \big| {\bf x}_{\Gamma}^E \right)
+ {\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T}
{\rm Cov}\left( {\bf x}^E \big| {\bf x}_{\Gamma}^S \right)
{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm Cov} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right) \nonumber \\
&&+~{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Cov} \left( {\bf x}^M \right) \nonumber \\
&& \qquad {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right) \;.
\end{eqnarray}
}.
Monte-Carlo simulations reveal that three of these terms dominate.
By dropping the other six terms we generate the approximation
\begin{align}
&{\rm Var} \left( t_{\frac{1}{2} {\rm osc}} \right)
\approx {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)
+ {\rm Var} \left( t^R \big| {\bf x}_{\Gamma}^E \right) \nonumber \\
&+~{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Cov} \left( {\bf x}^M \right) {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm D}_{{\bf x}} {\bf x}_{d}^E \left( {\bf x}_{\Gamma}^S \right)^{\sf T}
{\rm D}_{{\bf x}} t_{d}^R \left( {\bf x}_{\Gamma}^E \right) \;.
\label{eq:Varthalfosc2}
\end{align}
The three terms in (\ref{eq:Varthalfosc2}) can be interpreted geometrically.
Noise creates variability in the values of ${\bf x}^M$.
The variance that this induces in the values of the sliding times $t^S$
is represented by the first term of (\ref{eq:Varthalfosc2}).
Variability in ${\bf x}^M$ is also responsible for variance in the time of regular phases;
this is represented by the third term of (\ref{eq:Varthalfosc2}).
The second term of (\ref{eq:Varthalfosc2})
simply represents the variance in the time of the regular phases
given that regular phases start at the deterministic location ${\bf x}_{\Gamma}^E$.
The approximation (\ref{eq:Varthalfosc2}) is compared with Monte-Carlo simulations in Fig.~\ref{fig:checkOsc}-A.
Lastly we determine ${\rm Var} \left( t_{\rm osc} \right)$ from ${\rm Var}(t_{\frac{1}{2} {\rm osc}})$.
Each oscillation time $t_{\rm osc}$ is the sum of two consecutive half oscillation times,
call them $t_{\frac{1}{2} {\rm osc},1}$ and $t_{\frac{1}{2} {\rm osc},2}$.
For our example, the value of each $t_{\frac{1}{2} {\rm osc},2}$
depends heavily on the value of $t_{\frac{1}{2} {\rm osc},1}$.
This is because if $t_{\frac{1}{2} {\rm osc},1}$ is, say,
less than its deterministic value $\frac{t_{{\rm osc},\Gamma}}{2}$,
then the point at which the first half oscillation ends, ${\bf x}^R$,
is likely to be skewed in a particular direction from ${\bf x}_{\Gamma}^R$.
The second half oscillation begins at this end point
which affects the value of $t_{\frac{1}{2} {\rm osc},2}$.
To treat this difficulty, we define
\begin{equation}
\varrho = \frac{d \mathbb{E} \left[ t_{\frac{1}{2} {\rm osc},2} \big| t_{\frac{1}{2} {\rm osc},1} \right]}
{d t_{\frac{1}{2} {\rm osc},1}}
\Bigg|_{t_{\frac{1}{2} {\rm osc},1} \,=\, t_{\frac{1}{2} {\rm osc},\Gamma}} \;,
\label{eq:varrho}
\end{equation}
which measures the rate at which the mean value of $t_{\frac{1}{2} {\rm osc},2}$,
given $t_{\frac{1}{2} {\rm osc},1}$,
changes with $t_{\frac{1}{2} {\rm osc},1}$.
If $t_{\frac{1}{2} {\rm osc},1}$ and $t_{\frac{1}{2} {\rm osc},2}$ were independent
then we would have $\varrho = 0$.
Via straightforward calculations based upon conditioning over the value of ${\bf x}^R$\removableFootnote{
Let us derive this formula in a slightly more general context.
Consider a stochastic trajectory that travels first to a surface, $\Sigma_1$,
then on to another surface, $\Sigma_2$.
Let $t_1$ and $t_2$ be the respective passage times
and ${\bf x}_{\rm mid}$ the intersection point on $\Sigma_1$.
We assume $t_1$ and ${\bf x}_{\rm mid}$
are correlated normally distributed random variables with known statistics:
\begin{eqnarray}
t_1 &\sim& N(t_{{d},1},\varepsilon t_{1,{\rm var}}) \;, \\
{\bf x}_{\rm mid} &\sim& N({\bf x}_{\rm mid,det},\varepsilon {\bf x}_{\rm mid,cov}) \;.
\end{eqnarray}
We also assume that, for any given ${\bf x}_{\rm mid}$,
$t_2$ is a normally distributed random variable with known statistics, and we write
\begin{equation}
t_2 ~\big|~ {\bf x}_{\rm mid} \sim N(t_{{d},2}({\bf x}_{\rm mid}), \varepsilon t_{2,{\rm var}}) \;,
\end{equation}
where in this exposition we ignore the dependence of
$t_{2,{\rm var}}$ on ${\bf x}_{\rm mid}$ because this provides a final contribution
that is of an order higher than $\varepsilon$.
Our goal is to derive ${\rm Var}(t_1 + t_2)$.
This is non-trivial because $t_1$ and ${\bf x}_{\rm mid}$ are correlated,
and therefore $t_1$ and $t_2$ are correlated.
Let $t_{\rm osc} = t_1 + t_2$.
Then
\begin{eqnarray}
p(t_{\rm osc}) &=& \int_{-\infty}^\infty
p(t_2 = t_{\rm osc} - t_1 | t_1) p(t_1) \,dt_1 \nonumber \\
&=& \int_{-\infty}^\infty
\frac{1}{\sqrt{2 \pi \varepsilon t_{2,{\rm var}}}}
{\rm e}^{\frac{-(t_{\rm osc} - t_1 - \mathbb{E}[t_2|t_1])^2}{2 \varepsilon t_{2,{\rm var}}}}
\frac{1}{\sqrt{2 \pi \varepsilon t_{1,{\rm var}}}}
{\rm e}^{\frac{-(t_1 - t_{{d},1})^2}{2 \varepsilon t_{1,{\rm var}}}} \,dt_1 \;.
\label{eq:derivptosc}
\end{eqnarray}
Expanding the function $\mathbb{E}[t_2|t_1]$ as a Taylor series
centred at $t_1 = t_{{d},1}$ produces
\begin{equation}
\mathbb{E}[t_2|t_1] = t_{{d},2} + \varrho (t_1 - t_{{d},1})
+ O(|t_1 - t_{{d},1}|^2) \;,
\end{equation}
where we let
\begin{equation}
\varrho = \frac{d \,\mathbb{E}[t_2|t_1]}{d t_1} \bigg|_{t_1 = t_{{d},1}} \;.
\end{equation}
Consequently,
$t_{\rm osc} - t_1 - \mathbb{E}[t_2|t_1]
= t_{\rm osc} - t_{{\rm osc},\Gamma} - (1+\varrho)(t_1 - t_{{d},1})$,
and by substituting this into (\ref{eq:derivptosc}) we arrive at
\begin{equation}
p(t_{\rm osc}) = \frac{1}{2 \pi \varepsilon \sqrt{t_{1,{\rm var}} t_{2,{\rm var}}}}
\int_{-\infty}^\infty
{\rm e}^{\frac{-1}{2 \varepsilon t_{1,{\rm var}} t_{2,{\rm var}}}
\left( t_{1,{\rm var}} \left( t_{\rm osc} - t_{{\rm osc},\Gamma} - (1+\varrho)(t_1 - t_{{d},1}) \right)^2
+ t_{2,{\rm var}} (t_1 - t_{{d},1})^2 \right)} \,dt_1 \;.
\end{equation}
By completing the square in the exponent,
we can simplify this to
\begin{equation}
p(t_{\rm osc}) = \frac{1}{\sqrt{2 \pi \varepsilon \left( t_{2,{\rm var}} + (1+\varrho)^2 t_{1,{\rm var}} \right)}}
{\rm e}^{\frac{-(t_{\rm osc} - t_{{\rm osc},\Gamma})^2}
{2 \varepsilon \left( t_{2,{\rm var}} + (1+\varrho)^2 t_{1,{\rm var}} \right)}} \;.
\end{equation}
Therefore,
\begin{equation}
{\rm Var}(t_1 + t_2) = \varepsilon \left( t_{2,{\rm var}} + (1+\varrho)^2 t_{1,{\rm var}} \right) \;.
\end{equation}
},
it can be shown that
\begin{equation}
{\rm Var} \left( t_{\rm osc} \right) = {\rm Var} \left( t_{\frac{1}{2} {\rm osc},2} \right)
+ (1+\varrho)^2 {\rm Var} \left( t_{\frac{1}{2} {\rm osc},1} \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\end{equation}
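This variance relation is straightforward to check by direct simulation.
The following Python sketch, with purely illustrative parameter values, draws pairs $(t_1,t_2)$ whose conditional mean $\mathbb{E}[t_2|t_1]$ is linear in $t_1$ with slope $\varrho$, and compares the sample variance of $t_1+t_2$ with the formula above.
\begin{verbatim}
# Monte-Carlo check of Var(t_1 + t_2) = eps*(t2_var + (1+rho)^2 * t1_var)
# when E[t_2 | t_1] = t_d2 + rho*(t_1 - t_d1). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
eps, t1_var, t2_var, rho = 0.001, 2.0, 1.5, -0.68
t_d1 = t_d2 = 3.0
n = 10**6

t1 = t_d1 + np.sqrt(eps * t1_var) * rng.standard_normal(n)
t2 = t_d2 + rho * (t1 - t_d1) + np.sqrt(eps * t2_var) * rng.standard_normal(n)

print(np.var(t1 + t2))                           # sample variance
print(eps * (t2_var + (1.0 + rho)**2 * t1_var))  # predicted value
\end{verbatim}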
Due to the symmetry of the relay control system,
${\rm Var}(t_{\frac{1}{2} {\rm osc},1}) = {\rm Var}(t_{\frac{1}{2} {\rm osc},2})$,
and therefore
\begin{equation}
{\rm Var} \left( t_{\rm osc} \right) =
\left( 1 + (1+\varrho)^2 \right) {\rm Var} \left( t_{\frac{1}{2} {\rm osc}} \right)
+ O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:Vartosc2}
\end{equation}
For our example, $\varrho \approx -0.68$.
We obtained this value numerically by first computing the adjusted mean value of ${\bf x}^R$
given that $t_{\frac{1}{2} {\rm osc},1} = \frac{t_{{\rm osc},\Gamma}}{2} + \Delta t$
(using $\Delta t = 0.0001$),
then numerically solving the system with $\varepsilon = 0$ for half an oscillation from ${\bf x}^R$
in order to obtain the expected value of $t_{\frac{1}{2} {\rm osc},2}$,
and lastly using a first order finite difference approximation to evaluate (\ref{eq:varrho}).
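The structure of this numerical procedure is summarised by the Python sketch below.
The two functions are toy stand-ins for the problem-specific steps (the conditional mean of ${\bf x}^R$, and half an oscillation of the $\varepsilon = 0$ system); they are chosen only so that the sketch runs, and are not derived from the relay control model.
\begin{verbatim}
# Sketch of the first-order finite-difference estimate of varrho.
# Both helper functions are hypothetical stand-ins, constructed so that
# the example returns -0.68; they are not the model computations.
import numpy as np

t_half_det = 3.0                      # placeholder deterministic value

def mean_xR_given_t1(t1):
    # stand-in for the conditional mean of x^R given t_{1/2 osc,1} = t1
    return np.array([0.0, 1.0, -2.5]) + 0.4 * (t1 - t_half_det)

def half_oscillation_time_from(xR):
    # stand-in for integrating the eps = 0 system for half an oscillation
    return t_half_det - 1.7 * (xR[2] + 2.5)

dt = 1e-4
xR_pert = mean_xR_given_t1(t_half_det + dt)
varrho = (half_oscillation_time_from(xR_pert) - t_half_det) / dt
print(varrho)                         # -0.68 for these stand-ins
\end{verbatim}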
The approximation (\ref{eq:Vartosc2}) is compared with Monte-Carlo simulations in Fig.~\ref{fig:checkOsc}-B.
\section{Conclusions}
\label{sec:CONC}
\setcounter{equation}{0}
In this paper we have quantitatively analyzed the effect of noise on periodic orbits
of Filippov systems that involve segments of sliding motion.
Our results apply to the general $N$-dimensional stochastic differential equation, (\ref{eq:sde}),
which is formed by adding white Gaussian noise of amplitude $\sqrt{\varepsilon}$
to a Filippov system with a single switching manifold.
We assume that in the absence of noise, i.e.~with $\varepsilon = 0$, (\ref{eq:sde})
has an attracting periodic orbit $\Gamma$ of period $t_{{\rm osc},\Gamma}$.
For small $\varepsilon > 0$, sample solutions to (\ref{eq:sde}) are likely to follow paths near $\Gamma$.
From such solutions we can identify oscillation times, $t_{\rm osc} \approx t_{{\rm osc},\Gamma}$,
defined by measuring the time taken between appropriate returns to the switching manifold ($x_1 = 0$).
In order to determine the statistics of $t_{\rm osc}$ for small $\varepsilon > 0$,
we split the stochastic dynamics into three phases: regular, sliding and escaping, see Fig.~\ref{fig:phaseSchem},
and analyzed each phase separately.
\subsubsection*{Regular dynamics}
From an initial point ${\bf x}_0$ in the right half-space,
we let $t^R$ and ${\bf x}^R$ denote the time and location for first passage to $x_1 = 0$.
To derive the mean values of these quantities to $O(\varepsilon)$,
we searched for an asymptotic solution to the Fokker-Planck equation of (\ref{eq:sde})
with an absorbing boundary condition at $x_1 = 0$, (\ref{eq:FPEex})-(\ref{eq:FPEremainingBCs}),
by introducing a boundary layer near $x_1 = 0$
and expanding about the deterministic passage time and location (\ref{eq:regularScaling}).
With the solution expanded in the form (\ref{eq:pell}),
only the first term of the local PDF $\mathcal{P}^{(0)}$
is required to obtain $\mathbb{E} \left[ t^R \big| {\bf x}_0 \right]$ to $O(\varepsilon)$.
To determine $\mathbb{E} \left[ {\bf x}^R \big| {\bf x}_0 \right]$ to $O(\varepsilon)$,
we also require the second term, $\mathcal{P}^{(1)}$.
We computed $\mathcal{P}^{(1)}$ by numerically evaluating integrals, see Appendix \ref{sec:CALEX}.
Standard calculations based on a sample path methodology are
sufficient to determine ${\rm Var} \left( t^R \big| {\bf x}_0 \right)$ and
${\rm Cov} \left( {\bf x}^R \big| {\bf x}_0 \right)$ to $O \left( \sqrt{\varepsilon} \right)$.
\subsubsection*{Sliding dynamics}
In \S\ref{sec:SLIDE} we analyzed stochastically perturbed sliding motion.
We let $t^S$ and ${\bf x}^S$ denote the time and location for
the first passage of (\ref{eq:sde}) to $x_2 = \delta^-$ from an initial point ${\bf x}_0$
that lies on the switching manifold.
We assumed that the deterministic solution from ${\bf x}_0$ to $x_2 = \delta^-$
is contained entirely within the interior of a stable sliding region.
Stochastic dynamics of (\ref{eq:sde}) in the $x_1$-direction, i.e.~orthogonal to the switching manifold,
occurs on an $O(\varepsilon)$ time-scale and for this reason it is suitable to employ stochastic averaging
to analyze the overall dynamics from ${\bf x}_0$ to $x_2 = \delta^-$.
To leading order, the averaged solution is identical to Filippov's solution of the deterministic equations.
We estimated ${\rm Diff} \left( t^S \big| {\bf x}_0 \right)$ and
${\rm Diff} \left( {\bf x}^S \big| {\bf x}_0 \right)$ to $O(\varepsilon)$
by including terms of the next order in the averaging calculation.
The key quantity affecting the magnitude of these differences is $\Lambda$ (\ref{eq:Lambda})
which denotes the $O(\varepsilon)$ component of the average drift in directions parallel to the switching manifold.
We obtained the leading order terms of ${\rm Var} \left( t^S \big| {\bf x}_0 \right)$
and ${\rm Cov} \left( {\bf x}^S \big| {\bf x}_0 \right)$
through the use of a linear diffusion approximation derived via averaging.
In particular we found that deviations of the first passage location ${\bf x}^S$
orthogonal to the switching manifold are $O(\varepsilon)$,
whereas deviations in a direction parallel to the switching manifold are $O(\sqrt{\varepsilon})$,
as evident in panels B and C of Fig.~\ref{fig:checkSlide2}.
This is because the discontinuity in the equations along $x_1 = 0$
inhibits deviations in the $x_1$-direction.
Furthermore, for the relay control system with noise added purely to the control response,
the leading order terms of ${\rm Std} \left( t^S \big| {\bf x}_0 \right)$
and ${\rm Std} \left( x_j^S \big| {\bf x}_0 \right)$ for $j > 2$ vanish
because the noise effectively acts only in the $x_1$-direction.
Consequently these standard deviations are $O(\varepsilon)$, Fig.~\ref{fig:checkSlide}.
\subsubsection*{Escaping dynamics}
We defined escaping dynamics as sections of solutions that lie within the strip, $\delta^- < x_2 < \delta^+$,
where $\delta^-$ and $\delta^+$ are suitably small, (\ref{eq:deltaMinusdeltaPlus}).
As shown in \S\ref{sec:ESCAPE},
the spatial and time scales for escaping dynamics are
$x_1 = O(\varepsilon^{\frac{2}{3}})$,
$x_j = O(\varepsilon^{\frac{1}{3}})$ for $j > 1$, and
$t = O(\varepsilon^{\frac{1}{3}})$.
We derived the leading order component of the transitional PDF of (\ref{eq:sde}) during an escaping phase by assuming $x_1 > 0$,
imposing a reflecting boundary condition at $x_1 = 0$, and solving the corresponding Fokker-Planck equation.
The result is Knessl's solution (\ref{eq:Kn00}).
However, escaping phases make up only a small fraction of dynamics
over a full oscillation and have little effect on $t_{\rm osc}$.
Indeed our final approximations of
${\rm Diff} \left( t_{\rm osc} \right)$ and ${\rm Std} \left( t_{\rm osc} \right)$ in \S\ref{sec:COMB}
do not involve calculations relating to escaping.
\subsubsection*{The statistics of $t_{\rm osc}$ for relay control}
In \S\ref{sec:COMB} we combined the results
to approximate ${\rm Diff} \left( t_{\rm osc} \right)$ and ${\rm Std} \left( t_{\rm osc} \right)$
for the relay control system (\ref{eq:ABCDvalues3d})-(\ref{eq:relayControlSystem2}).
Fig.~\ref{fig:checkOsc} reveals that the approximations (\ref{eq:Diffthalfosc2}), (\ref{eq:Difftosc2}), (\ref{eq:Varthalfosc2}) and (\ref{eq:Vartosc2})
fit the results of Monte-Carlo simulations reasonably well.
In view of the complexity in evaluating these approximations,
a geometric understanding of the terms in these equations is arguably more useful than the approximations themselves.
Here we use the results to obtain four reasons why, at relatively small values of $\varepsilon$,
the noise significantly reduces the average oscillation time for the relay control example,
and why the spread in oscillation times remains small, as seen in Fig.~\ref{fig:manyPeriod}.
${\rm Diff}(t_{\frac{1}{2} {\rm osc}})$ is approximated by (\ref{eq:Diffthalfosc2})
as a sum of three terms that we have ordered by decreasing magnitude.
The first term, ${\rm Diff} \left( t^R \big| {\bf x}_{\Gamma}^E \right)$,
is negative and represents the difference created by the noise causing solutions
to return to the switching manifold earlier, on average, than in the absence of noise.
By (\ref{eq:meantex2}), this term is proportional to the square of the inverse of the velocity of $\Gamma$ at ${\bf x}^R$ in the $x_1$-direction.
For the relay control system the velocity is $-\frac{1}{\omega^2}$,
where $\omega = 5$, and for this reason the first term is relatively large.
The second term of (\ref{eq:Diffthalfosc2}),
${\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right)$,
represents the difference created by sliding phases taking, on average, less time than they would without noise
due to sliding phases starting at points ${\bf x}^M$ that are, on average,
deviated from ${\bf x}_{\Gamma}^M$ in a particular direction.
The description of these two terms provides one reason why small noise significantly decreases $t_{\rm osc}$:
{\em Since $\Gamma$ slowly approaches the switching manifold along a path
that has a sharp angle relative to the switching manifold,
small noise tends to push solutions onto the switching manifold early
and at points deviated from ${\bf x}_{\Gamma}^R$}.
Loosely speaking, the noise causes solutions to ``cut the corner'' at ${\bf x}_{\Gamma}^R$.
The third term of (\ref{eq:Diffthalfosc2}), ${\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)$,
is proportional to $\Lambda$ (\ref{eq:Lambda}).
For the relay control system, $\Lambda$ involves terms in the first column of $A$, such as $\omega^2$, which is relatively large.
This suggests that the noise-induced effect observed in Fig.~\ref{fig:checkExcur} is due in part to this term
which we interpret as the result of {\em noise pushing solutions slightly off the switching manifold,
and causing the nature of the vector field away from the switching manifold to influence dynamics}.
Equation (\ref{eq:Varthalfosc2}) approximates ${\rm Var}(t_{\frac{1}{2} {\rm osc}})$.
The second term of (\ref{eq:Varthalfosc2}) represents the variance in the times of the regular phases,
and the first [resp.~third] term of (\ref{eq:Varthalfosc2}) represents the variance in $t^S$ [resp.~$t^R$]
due to the variability in the points ${\bf x}^M$.
Therefore deviations in $t_{\rm osc}$ are due primarily to
the variability in the first passage statistics of the regular phase.
For the relay control example, the deviations in these statistics are not as large
as one might expect because away from $x_1 = 0$ solutions rapidly contract onto a slow manifold\removableFootnote{
Using $D = e_1 e_1^{\sf T}$ in place of $D = B e_1^{\sf T}$ in (\ref{eq:ABCDvalues3d})
gives similar theoretical values for ${\rm Var}(t^R)$
and ${\rm Cov}({\bf x}^R)$.
}.
{\em Hence the slow-fast nature of the system
inhibits large deviations in $t_{\rm osc}$}.
For this reason ${\rm Std}(t_{\rm osc})$ is relatively small;
this constitutes a third reason for the nature of Fig.~\ref{fig:manyPeriod}.
Finally, in (\ref{eq:relayControlSystem2}) {\em the noise is added purely to the control response
causing the leading order contribution of the noise during stochastically perturbed sliding motion to vanish}.
Specifically, $M M^{\sf T} = 0$ where $M$ is the diffusion matrix in (\ref{eq:linearDiffusionApprox2})
-- the averaging approximation to the difference between stochastic solutions and the deterministic solution in the sliding phase.
Hence, again, for our particular example, ${\rm Std} \left( t_{\rm osc} \right)$ is less than
we would expect it to be in general.
\subsubsection*{Issues and future work}
The many approximations in our calculations combine to form discrepancies
between numerical results, obtained by Monte-Carlo simulations, and theoretical results, as evident in Fig.~\ref{fig:checkOsc}.
For instance, we used only the three largest terms in our expressions for
the statistics of $t_{\rm osc}$ and $t_{\frac{1}{2} {\rm osc}}$ for Fig.~\ref{fig:checkOsc}.
Our calculations for each of the three phases involve expansions in $\varepsilon$
and approximations are obtained by truncating these expansions.
Consequently the accuracy of the approximations decreases with increasing values of $\varepsilon$.
Calculations regarding escaping involve the assumption $x_2 = O(\varepsilon^{\frac{1}{3}})$.
Thus, strictly speaking, the values $\delta^-$ and $\delta^+$ should be $O(\varepsilon^{\frac{1}{3}})$,
but for simplicity we have set them as constants (\ref{eq:specificdeltas}).
Another source of error is that for the relay control example
the distance of the point ${\bf x}_{\Gamma}^M$, at which the sliding phase begins,
to the boundary of the stable sliding region is approximately equal to $\frac{1}{\omega^2}$, see Appendix \ref{sec:DET}.
With $\omega = 5$ this distance is relatively small
causing inaccuracy in the analysis for the sliding phase\removableFootnote{
With $\varepsilon = 0.0001$, $6.4\%$ of ${\bf x}^M$ values were actually located outside the stable sliding region.
(Naturally this percentage increases with $\varepsilon$ and decreases with $\omega$.)
}
because the calculations are singular in the limit that the distance of $\Gamma$ to the sliding boundary goes to zero.
Also, we have not attempted to compute the $O(\varepsilon)$ term of ${\rm Std} \left( t_{\rm osc} \right)$
necessary to fairly compare this value to ${\rm Diff} \left( t_{\rm osc} \right)$.
It remains to study large deviations of periodic orbits with sliding segments \cite{HiMe13}.
For systems with discontinuous drift, the small noise asymptotics of large deviations
may be fundamentally different to that of smooth systems \cite{GrHe01}.
In addition, it remains to investigate the effects of noise on sliding bifurcations
at which a segment of sliding motion is created or destroyed.
\appendix
\section{Calculations for the relay control example in the absence of noise}
\label{sec:DET}
\setcounter{equation}{0}
With $\varepsilon = 0$, (\ref{eq:relayControlSystem2}) is the piecewise-linear ODE system
\begin{equation}
\dot{{\bf X}} = \left\{ \begin{array}{lc}
A {\bf X} + B \;, & X_1 < 0 \\
A {\bf X} - B \;, & X_1 > 0
\end{array} \right. \;,
\label{eq:relayControlSystem4}
\end{equation}
where $A$ (\ref{eq:ABCDvalues3d}) has eigenvalues
$-\lambda$ and $-\omega \zeta \pm {\rm i} |\omega| \sqrt{1 - \zeta^2}$.
For the parameter values (\ref{eq:paramValues}), $\lambda$ is relatively small,
thus solutions to each linear half-system of (\ref{eq:relayControlSystem4})
rapidly approach the eigenspace corresponding to the eigenvalue $-\lambda$.
The eigenvector of $A$ for $-\lambda$ is
\begin{equation}
v_{-\lambda} = \left[ 1,~2 \zeta \omega,~\omega^2 \right]^{\sf T} \;,
\end{equation}
and the equilibria of the left and right half-systems of (\ref{eq:relayControlSystem4}) are, respectively,
\begin{equation}
{\bf X}^{*(L)} = \left[ \begin{array}{c}
\frac{1}{\lambda \omega^2} \\
-1 + \frac{2 \zeta}{\lambda \omega} + \frac{1}{\omega^2} \\
2 + \frac{2 \zeta}{\omega} + \frac{1}{\lambda}
\end{array} \right] \;, \qquad
{\bf X}^{*(R)} = \left[ \begin{array}{c}
-\frac{1}{\lambda \omega^2} \\
1 - \frac{2 \zeta}{\lambda \omega} - \frac{1}{\omega^2} \\
-2 - \frac{2 \zeta}{\omega} - \frac{1}{\lambda}
\end{array} \right] \;.
\end{equation}
${\bf X}^{*(L)}$ and ${\bf X}^{*(R)}$ are both {\em virtual} equilibria of (\ref{eq:relayControlSystem4}).
The weak stable manifold for each equilibrium is the line that passes through the equilibrium in the direction $v_{-\lambda}$.
These manifolds intersect $X_1 = 0$ at
\begin{equation}
{\bf X}_{\rm int}^{(L)} = \left[
0,~-1+\frac{1}{\omega^2},~2+\frac{2 \zeta}{\omega}
\right]^{\sf T} \;, \qquad
{\bf X}_{\rm int}^{(R)} = \left[
0,~1-\frac{1}{\omega^2},~-2-\frac{2 \zeta}{\omega}
\right]^{\sf T} \;.
\label{eq:XintLR}
\end{equation}
Consequently $\Gamma$ arrives at $X_1 = 0$ at points extremely close to
${\bf X}_{\rm int}^{(L)}$ and ${\bf X}_{\rm int}^{(R)}$.
For the purposes of applying the coordinate change described in \S\ref{sub:COORDRCS},
it is appropriate to approximate the point with $X_3 > 0$ at which $\Gamma$ returns to $X_1 = 0$ by ${\bf X}_{\rm int}^{(L)}$.
Stable sliding motion occurs on $X_1 = 0$
when $\dot{X_1} > 0$ for the left half-system of (\ref{eq:relayControlSystem4})
and $\dot{X_1} < 0$ for the right half-system of (\ref{eq:relayControlSystem4}).
By (\ref{eq:ABCDvalues3d}), stable sliding motion occurs on the strip
\begin{equation}
\left\{ (0,X_2,X_3)^{\sf T} ~\big|~ -1 < X_2 < 1 \right\} \;.
\label{eq:stableSlidingRegion}
\end{equation}
Sliding motion is specified by Filippov's solution \cite{Fi88,Fi60}, which yields
\begin{equation}
\left[ \begin{array}{c}
\dot{X}_2 \\ \dot{X}_3
\end{array} \right] =
\left[ \begin{array}{cc}
2 & 1 \\ -1 & 0
\end{array} \right]
\left[ \begin{array}{c}
X_2 \\ X_3
\end{array} \right] \;.
\label{eq:slidingDyns}
\end{equation}
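The sliding vector field (\ref{eq:slidingDyns}) is the unique convex combination of the two half-system vector fields whose $X_1$-component vanishes on $X_1 = 0$.
A minimal Python sketch of this construction is given below; the matrices $A$ and $B$ used here are arbitrary placeholders for illustration, not those of (\ref{eq:ABCDvalues3d}).
\begin{verbatim}
# Sketch: Filippov's convex combination on the switching manifold X_1 = 0
# for X' = A X + B (X_1 < 0) and X' = A X - B (X_1 > 0).
# A and B below are placeholders, not the relay control matrices.
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [-2.0, 0.0, 1.0],
              [-3.0, 0.0, 0.0]])
B = np.array([1.0, 2.0, 3.0])

def filippov_sliding(X):
    fL, fR = A @ X + B, A @ X - B           # left and right vector fields
    alpha = fR[0] / (fR[0] - fL[0])         # weight making the X_1-component zero
    assert 0.0 < alpha < 1.0, "outside the stable sliding region"
    return alpha * fL + (1.0 - alpha) * fR  # sliding vector field

print(filippov_sliding(np.array([0.0, 0.3, -0.2])))  # first component is 0
\end{verbatim}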
Equation (\ref{eq:slidingDyns}) has the explicit solution
\begin{equation}
\left[ \begin{array}{c}
X_{{d},2}(t;{\bf X}_0) \\
X_{{d},3}(t;{\bf X}_0)
\end{array} \right] =
{\rm e}^t \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] X_{0,2} +
{\rm e}^t \left(
t \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] +
\left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \right)
(X_{0,2}+X_{0,3}) \;,
\label{eq:slidingSoln}
\end{equation}
for any initial point ${\bf X}_0 = (0,X_{0,2},X_{0,3})$ in the stable sliding region.
The upper sliding segment of $\Gamma$ ends at $X_2 = 1$.
With the approximation that the sliding segment starts at ${\bf X}_{\rm int}^{(L)}$, (\ref{eq:XintLR}),
the deterministic sliding time, here call it $T$, is therefore determined by
\begin{equation}
X_{{d},2} \left( T;{\bf X}_{\rm int}^{(L)} \right) = 1 \;,
\end{equation}
and sliding ends at
\begin{equation}
(0,1,Z)^{\sf T} \;, {\rm ~where~}
Z \equiv X_{{d},3} \left( T;{\bf X}_{\rm int}^{(L)} \right) \approx 2.561 \;.
\end{equation}
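For illustration, $T$ and $Z$ are easily obtained numerically from (\ref{eq:slidingSoln}).
In the Python sketch below $\omega = 5$, as in the text, while the value of $\zeta$ is an assumed placeholder; the actual values are those of (\ref{eq:paramValues}).
\begin{verbatim}
# Sketch: deterministic sliding time T (where X_2 reaches 1) and end point
# Z = X_3(T), starting from X_int^(L), using the explicit solution above.
# omega = 5 as stated in the text; zeta is an assumed placeholder value.
import numpy as np
from scipy.optimize import brentq

omega, zeta = 5.0, 0.5
X2_0 = -1.0 + 1.0 / omega**2          # second component of X_int^(L)
X3_0 = 2.0 + 2.0 * zeta / omega       # third component of X_int^(L)

def X2(t):
    return np.exp(t) * (X2_0 + t * (X2_0 + X3_0))

def X3(t):
    return np.exp(t) * (-X2_0 + (1.0 - t) * (X2_0 + X3_0))

T = brentq(lambda t: X2(t) - 1.0, 0.0, 5.0)   # sliding ends when X_2 = 1
Z = X3(T)
print(T, Z)
\end{verbatim}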
In the transformed coordinates (\ref{eq:Xtox}), the initial point for the sliding phase
and the end point for the excursion phase are, respectively,
\begin{eqnarray}
{\bf x}_{\rm int}^{(L)} = P {\bf X}_{\rm int}^{(L)} + Q &=&
\left[ \begin{array}{c}
0 \\
-2+\frac{1}{\omega^2} \\
2+\frac{2 \zeta}{\omega} - Z - \left( 2-\frac{1}{\omega^2} \right)
\frac{1}{Z+2}
\end{array} \right] \;, \\
{\bf x}_{\rm int}^{(R)} = P {\bf X}_{\rm int}^{(R)} + Q &=&
\left[ \begin{array}{c}
0 \\
-\frac{1}{\omega^2} \\
-2-\frac{2 \zeta}{\omega} - Z - \frac{1}{\omega^2(Z+2)}
\end{array} \right] \;. \label{eq:xRint}
\end{eqnarray}
\section{Calculations of the regular phase for relay control}
\label{sec:CALEX}
\setcounter{equation}{0}
Here we provide details of calculations for the relay control example
that were outlined in \S\ref{sub:REGRELAY}.
The deterministic solution to (\ref{eq:relayControlSystem5R}) is given by
\begin{equation}
{\bf x}_{d}(t) =
{\rm e}^{\mathcal{A} t} \left( {\bf x}_0 + \mathcal{A}^{-1} \mathcal{B}^{(R)} \right) -
\mathcal{A}^{-1} \mathcal{B}^{(R)} \;,
\label{eq:bxdetrelay}
\end{equation}
where ${\bf x}_0 = {\bf x}_{d}(0)$ denotes the initial point.
Here we take ${\bf x}_0 = {\bf x}_{\Gamma}^E$ (the deterministic end point of the
previous escaping phase, refer to Fig.~\ref{fig:phaseSchem})
with which first passage to the switching manifold
occurs at ${\bf x}_{\Gamma}^R \approx {\bf x}_{\rm int}^{(R)}$ (\ref{eq:xRint}), see \S\ref{sub:COORDRCS}.
Through elementary use of (\ref{eq:calA})-(\ref{eq:calBLBRD}),
the coefficients in the PDE for $\mathcal{P}$ (\ref{eq:fpe3}) are found to be
\begin{eqnarray}
\phi_1^{(R)}({\bf x}_{d}^R) &=& x_{\Gamma,2}^R \;, \label{eq:phi1R} \\
\phi_2^{(R)}({\bf x}_{d}^R) &=& \frac{-1}{Z+2} x_{\Gamma,2}^R + x_{\Gamma,3}^R + Z + 2 \;, \label{eq:phi2R} \\
\phi_3^{(R)}({\bf x}_{d}^R) &=& \frac{-1}{(Z+2)^2} x_{\Gamma,2}^R + \frac{1}{Z+2} x_{\Gamma,3}^R \;, \label{eq:phi3R}
\end{eqnarray}
\begin{equation}
\frac{\partial \phi_1^{(R)}}{\partial x_2}({\bf x}_{d}^R) = 1 \;, \qquad
\frac{\partial \phi_1^{(R)}}{\partial x_3}({\bf x}_{d}^R) = 0 \;,
\end{equation}
\begin{equation}
\left( D D^{\sf T} \right)_{1,1} = 1 \;, \qquad
\left( D D^{\sf T} \right)_{2,1} = -2 \;, \qquad
\left( D D^{\sf T} \right)_{3,1} = \frac{Z}{Z+2} \;.
\end{equation}
Then by substituting
\begin{equation}
{\bf x}_{d} \left( \sqrt{\varepsilon} \tau + t_{\Gamma}^R \right) =
{\bf x}_{\Gamma}^R + \sqrt{\varepsilon} \phi^{(R)}({\bf x}_{d}^R) \tau + O(\varepsilon) \;,
\end{equation}
with (\ref{eq:phi1R})-(\ref{eq:phi3R}) into
the expression for the free-space PDF (\ref{eq:pfs}),
we obtain an expression for $f^{(0)}$ by the absorbing boundary condition (\ref{eq:exBC0}).
Specifically
\begin{equation}
f^{(0)}(u_2,u_3,\tau) =
\frac{1}{(2 \pi)^{\frac{3}{2}} \sqrt{\det(K(t_{\Gamma}^R))}}
\,{\rm exp} \left( -\frac{1}{2} \chi^{\sf T} K(t_{\Gamma}^R)^{-1} \chi \right) \;,
{\rm ~where~}
\chi = \left[ \begin{array}{c}
-\phi_1^{(R)}({\bf x}_{d}^R) \tau \\
u_2 - \phi_2^{(R)}({\bf x}_{d}^R) \tau \\
u_3 - \phi_3^{(R)}({\bf x}_{d}^R) \tau
\end{array} \right] \;,
\label{eq:f0}
\end{equation}
which is used in (\ref{eq:calP02}) to obtain $\mathcal{P}^{(0)}$.
The function $g^{(1)}$ (which appears in the second term of the expression for $\mathcal{P}^{(1)}$ (\ref{eq:calP12}))
is determined from (\ref{eq:calPj}) and is given by
\begin{eqnarray}
g^{(1)}(u_2,u_3,\tau) &=&
- \frac{2}{\left( D D^{\sf T} \right)_{1,1}} \left( \frac{\partial \phi_1^{(R)}}{\partial x_2}({\bf x}_{d}^R) u_2
+ \frac{\partial \phi_1^{(R)}}{\partial x_3}({\bf x}_{d}^R) u_3 \right) f^{(0)}
- \frac{1}{\phi_1^{(R)}({\bf x}_{d}^R)} f^{(0)}_{\tau} \nonumber \\
&&+~\left( \frac{2 \left( D D^{\sf T} \right)_{2,1}}{\left( D D^{\sf T} \right)_{1,1}} - \frac{\mu_1}{\phi_1^{(R)}({\bf x}_{d}^R)}
\right) f^{(0)}_{u_2}
+ \left( \frac{2 \left( D D^{\sf T} \right)_{3,1}}{\left( D D^{\sf T} \right)_{1,1}} - \frac{\mu_2}{\phi_1^{(R)}({\bf x}_{d}^R)}
\right) f^{(0)}_{u_3} \;.
\label{eq:g1}
\end{eqnarray}
\subsubsection*{Calculation of $\mathbb{E}[t^R]$}
From (\ref{eq:meantex1}) we can write
\begin{equation}
\mathbb{E}[t^R] =
\int_0^\infty \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
p_f({\bf x},t) \,dx_3 \,dx_2 \,dx_1 \,dt +
\int_0^\infty \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\mathcal{P}(z,u_2,u_3,\tau) \,dx_3 \,dx_2 \,dx_1 \,dt \;.
\label{eq:meantex4}
\end{equation}
Using
$\Psi(s) \equiv -\frac{x_{{d},1} \left( s+t_{\Gamma}^R \right)}
{\sqrt{2 \kappa_{11} \left( s+t_{\Gamma}^R \right)}}$,
$\xi = \frac{x_1 - x_{{d},1}(t)}{\sqrt{2 \kappa_{11}(t)}}$
and $s = t - t_{\Gamma}^R$,
the first integral in (\ref{eq:meantex4}) is
\begin{eqnarray}
&& \int_0^\infty \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
p_f({\bf x},t) \,dx_3 \,dx_2 \,dx_1 \,dt \nonumber \\
&=& \int_0^\infty \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\frac{1}{(2 \pi \varepsilon)^{\frac{3}{2}} \sqrt{\det(K(t))}}
{\rm e}^{-\frac{1}{2 \varepsilon} ({\bf x} - {\bf x}_{d}(t))^{\sf T}
K(t)^{-1} ({\bf x} - {\bf x}_{d}(t))}
\,dx_3 \,dx_2 \,dx_1 \,dt \nonumber \\
&=& \int_0^\infty \int_0^\infty
\frac{1}{\sqrt{2 \pi \varepsilon \kappa_{11}(t)}}
{\rm e}^{-\frac{(x_1 - x_{{d},1}(t))^2}{2 \varepsilon \kappa_{11}(t)}}
\,dx_1 \,dt \nonumber \\
&=& \frac{1}{\sqrt{\pi \varepsilon}} \int_{-t_{\Gamma}^R}^\infty \int_{\Psi(s)}^\infty
{\rm e}^{-\frac{\xi^2}{\varepsilon}} \,d\xi \,ds \;.
\end{eqnarray}
Then reversing the order of integration and expanding $s = \Psi^{-1}(\xi)$
as a Taylor series centred at $\xi = 0$ produces
\begin{eqnarray}
\frac{1}{\sqrt{\pi \varepsilon}} \int_{-t_{\Gamma}^R}^\infty \int_{\Psi(s)}^\infty
{\rm e}^{-\frac{\xi^2}{\varepsilon}} \,d\xi \,ds
&=& \frac{1}{\sqrt{\pi \varepsilon}} \int_{-\infty}^\infty
\left(
t - \frac{\sqrt{2 \kappa_{11}}}{\dot{x}_{{d},1}} \xi
+ \left( \frac{\dot{\kappa}_{11}}{\dot{x}_{{d},1}^2}
- \frac{\kappa_{11} \ddot{x}_{{d},1}}{\dot{x}_{{d},1}^3} \right) \xi^2 + O(\xi^3)
\right) \Bigg|_{t = t_{\Gamma}^R}
{\rm e}^{-\frac{\xi^2}{\varepsilon}} \,d\xi \nonumber \\
&=& t_{\Gamma}^R
+ \frac{1}{2} \left( \frac{\dot{\kappa}_{11}}{\left( \phi_1^{(R)}({\bf x}_{d}^R) \right)^2}
+ \frac{\kappa_{11} \ddot{x}_{{d},1}}{\left( \phi_1^{(R)}({\bf x}_{d}^R) \right)^3} \right)
\Bigg|_{t = t_{\Gamma}^R} \varepsilon + O(\varepsilon^2) \;.
\label{eq:intPart1}
\end{eqnarray}
The second integral in (\ref{eq:meantex4}) is
\begin{eqnarray}
&& \int_0^\infty \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\mathcal{P}(z,u_2,u_3,\tau) \,dx_3 \,dx_2 \,dx_1 \,dt \nonumber \\
&=& -\varepsilon \int_{-\frac{t_{\Gamma}^R}{\sqrt{\varepsilon}}}^\infty
\int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
f^{(0)}(u_2,u_3,\tau) {\rm e}^{2 x_{\Gamma,2}^R z}
\,du_3 \,du_2 \,dz \,d\tau + O \left( \varepsilon^{\frac{3}{2}} \right) \nonumber \\
&=& -\frac{\varepsilon}{2 \left( \phi_1^{(R)}({\bf x}_{d}^R) \right)^2} + O \left( \varepsilon^{\frac{3}{2}} \right) \;,
\label{eq:intPart2}
\end{eqnarray}
and the sum of (\ref{eq:intPart1}) and (\ref{eq:intPart2}) produces (\ref{eq:meantex2}).
\subsubsection*{Calculation of $\mathbb{E}[{\bf x}^R]$}
Here we briefly describe the manner by which we evaluate $\mathbb{E}[{\bf x}^R]$ numerically.
Equation (\ref{eq:meanxjex1}) gives\removableFootnote{
The $\frac{1}{\varepsilon}$ arises from $z = \frac{x_1}{\varepsilon}$.
}
\begin{equation}
\mathbb{E}[x_j^R] = \frac{\varepsilon}{2} (D D^{\sf T})_{11}
\int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
x_j^R \left( \frac{\partial p_f}{\partial x_1}(0,x_2,x_3,t)
+ \frac{1}{\varepsilon} \frac{\partial \mathcal{P}}{\partial z}(0,u_2,u_3,\tau) \right)
\,dx_2 \,dx_3 \,dt \;,
\end{equation}
for $j = 2,3$, and changing to the local variables (\ref{eq:regularScaling}) yields
\begin{eqnarray}
\mathbb{E}[x_j^R] &=& \frac{\varepsilon^{\frac{5}{2}}}{2} (D D^{\sf T})_{11}
\int_{\frac{t_{\Gamma}^R}{\sqrt{\varepsilon}}}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\left( \sqrt{\varepsilon} u_j + x_{{\Gamma},j}^R \right)
\bigg( \frac{\partial p_f}{\partial x_1}
(0,\sqrt{\varepsilon} u_2 + x_{{\Gamma},2}^R,\sqrt{\varepsilon} u_3 + x_{{\Gamma},3}^R,
\sqrt{\varepsilon} \tau + t_{\Gamma}^R) \nonumber \\
&&+~\frac{1}{\varepsilon} \frac{\partial \mathcal{P}}{\partial z}
(0,u_2,u_3,\tau) \bigg)
\,du_2 \,du_3 \,d\tau \;.
\end{eqnarray}
Since $p_f$ is Gaussian with covariance matrix, $K(t)$,
it is straightforward to derive
\begin{eqnarray}
\frac{\partial p_f}{\partial x_1}(0,x_2,x_3,t) &=&
-\frac{1}{\varepsilon \det(K)}
\Big( -(\kappa_{22} \kappa_{33} - \kappa_{23}^2) x_{\Gamma,1}^R
+ (\kappa_{13} \kappa_{23} - \kappa_{12} \kappa_{33}) (x_2 - x_{\Gamma,2}^R) \nonumber \\
&&+~(\kappa_{12} \kappa_{23} - \kappa_{13} \kappa_{22}) (x_3 - x_{\Gamma,3}^R) \Big)
p_f(0,x_2,x_3,t) \;.
\end{eqnarray}
We also have from (\ref{eq:exBC0})
\begin{eqnarray}
\frac{\partial \mathcal{P}}{\partial z}(0,u_2,u_3,\tau) &=&
\frac{1}{\varepsilon^{\frac{3}{2}}}
\left( -\frac{2 \phi_1^{(R)}({\bf x}_{d}^R)}{(D D^{\sf T})_{11}} \left( f^{(0)} + \sqrt{\varepsilon} f^{(1)} \right)
+ \sqrt{\varepsilon} g^{(1)} + O(\varepsilon) \right) \nonumber \\
&=& -2 x_{\Gamma,2}^R p_f|_{x_1 = 0}
+ \frac{1}{\varepsilon} g^{(1)} + O \left( \frac{1}{\sqrt{\varepsilon}} \right) \;.
\end{eqnarray}
Finally we obtain
\begin{eqnarray}
\mathbb{E}[x_j^R] &=& \frac{\varepsilon^{\frac{3}{2}}}{2}
\int_{\frac{t_{\Gamma}^R}{\sqrt{\varepsilon}}}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\left( \sqrt{\varepsilon} u_j + x_{{\Gamma},j}^R \right)
\bigg( -\frac{1}{\det(K)} \Big( -(\kappa_{22} \kappa_{33} - \kappa_{23}^2) x_{\Gamma,1}^R \nonumber \\
&&+~(\kappa_{13} \kappa_{23} - \kappa_{12} \kappa_{33}) (x_2 - x_{\Gamma,2}^R)
+ (\kappa_{12} \kappa_{23} - \kappa_{13} \kappa_{22}) (x_3 - x_{\Gamma,3}^R) \Big) \nonumber \\
&&\times~p_f \left( 0,\sqrt{\varepsilon} u_2 + x_{{\Gamma},2}^R,\sqrt{\varepsilon} u_3 + x_{{\Gamma},3}^R,
\sqrt{\varepsilon} \tau + t_{\Gamma}^R \right) \nonumber \\
&&+~\frac{2 \kappa}{\alpha} p_f \big|_{x_1 = 0}
+ \frac{1}{\varepsilon} \,g^{(1)}(u_2,u_3,\tau) \bigg)
\,du_2 \,du_3 \,d\tau + O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\label{eq:meanxjex2}
\end{eqnarray}
To produce the black lines in panels B and C of Fig.~\ref{fig:checkExcur} we have numerically evaluated
the leading order component of (\ref{eq:meanxjex2}), which is $O(\varepsilon)$.
\section{Calculation of $\sigma$}
\label{sec:SIGMA}
\setcounter{equation}{0}
Here we derive the formula (\ref{eq:sigma2}):
\begin{equation}
\sigma \sigma^{\sf T} = \frac{(b_L-b_R) (b_L-b_R)^{\sf T}}{(a_L+a_R)^2} \;,
\label{eq:sigma4}
\end{equation}
where $\sigma$ appears in (\ref{eq:linearDiffusionApprox}).
This is achieved by employing a linear diffusion approximation: the drift term,
$\big( F_0(z(t),{\bf y}_{d}(t)) - \Omega({\bf y}_{d}(t)) \big) \,dt$,
of (\ref{eq:linearDiffusionApprox0}) is replaced by a diffusion term that,
in the limit $\varepsilon \to 0$, has an equivalent distribution.
This is possible because the evolution of $z(t)$ is fast relative to that of ${\bf y}_{d}(t)$.
Since we are taking the limit $\varepsilon \to 0$,
we may neglect higher order terms
in the stochastic differential equation for $z(t)$, (\ref{eq:dhatx4}).
Furthermore, the vector noise term in (\ref{eq:dhatx4}) is equivalent to a scalar noise term
$\sqrt{\alpha} \,dW(t)$, where $\alpha = (D D^{\sf T})_{11}$.
It is convenient to further replace this term with simply $dW(t)$,
as the noise amplitude $\sqrt{\alpha}$ appears as only a multiplicative factor in the final result.
We let
\begin{equation}
r = \frac{t}{\varepsilon} \;,
\end{equation}
represent the fast time-scale.
Then (\ref{eq:dhatx4}) becomes
\begin{equation}
dq(r;{\bf y}) = \left\{ \begin{array}{lc}
a_L({\bf y}) \;, & q < 0 \\
-a_R({\bf y}) \;, & q > 0
\end{array} \right\} \,dr + \,dW(r) \;,
\label{eq:dxcheck}
\end{equation}
where we have replaced $z$ with the symbol $q$ to indicate that
changes mentioned above have been made.
In (\ref{eq:dxcheck}) ${\bf y}$ is treated as a constant,
so (\ref{eq:dxcheck}) represents {\em Brownian motion with two-valued drift} \cite{KaSh91}.
In order to approximate the behaviour of the drift term,
$\big( F_0(z(t),{\bf y}_{d}(t)) - \Omega({\bf y}_{d}(t)) \big) \,dt$, in distribution, we let
\begin{equation}
R(r,{\bf y}) = \mathbb{E} \left[
\left( F_0(q(\tilde{r}+r;{\bf y}),{\bf y}) - \Omega({\bf y}) \right)
\left( F_0(q(\tilde{r};{\bf y}),{\bf y}) - \Omega({\bf y}) \right)^{\sf T}
\right] \;.
\label{eq:R}
\end{equation}
For $r \ge 0$, $R(r,{\bf y})$ denotes the autocovariance
of the function $F_0$ (\ref{eq:F0}) with (\ref{eq:dxcheck}).
In (\ref{eq:R}), we take $q(\tilde{r};{\bf y})$ to be at steady-state
and thus $R(r,{\bf y})$ is independent of the value of $\tilde{r}$.
By stochastic averaging theory \cite{FrWe12,PaSt08,MoCu11,Kh66b},
in the limit $\varepsilon \to 0$ the drift term may be replaced by
the diffusion term $\sigma({\bf y}_{d}(t)) \sqrt{\alpha \varepsilon} \,dW(t)$, where
\begin{equation}
\sigma({\bf y}) \sigma({\bf y})^{\sf T} =
2 \int_0^\infty R(r,{\bf y}) \,dr \;.
\label{eq:sigma}
\end{equation}
Below we derive (\ref{eq:sigma4}) by evaluating (\ref{eq:sigma}).
Let $p(q,r|q_0)$ denote the transitional PDF of (\ref{eq:dxcheck}) with $q(0) = q_0$.
When $a_L, a_R > 0$, (\ref{eq:dxcheck}) has the steady-state PDF
\begin{equation}
p_{\rm ss}(q) = \frac{2 a_L a_R}{a_L+a_R}
\left\{ \begin{array}{lc}
{\rm e}^{2 a_L q} \;, & q \le 0 \\
{\rm e}^{-2 a_R q} \;, & q \ge 0
\end{array} \right. \;.
\label{eq:pss2}
\end{equation}
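Before evaluating (\ref{eq:sigma}) analytically, we note that the final result (\ref{eq:sigma4}) can be checked by brute force.
The Python sketch below simulates (\ref{eq:dxcheck}) by the Euler--Maruyama method and compares the long-run variance of the time-integral of the scalar $h(q)$, equal to $\frac{a_L}{a_L+a_R}$ for $q<0$ and $\frac{-a_R}{a_L+a_R}$ for $q>0$ (so that $F_0 - \Omega = h(q)(b_L-b_R)$), with the scalar factor $\frac{1}{(a_L+a_R)^2}$ in (\ref{eq:sigma4}).
The parameter values and step sizes are illustrative, and the discretisation of the discontinuous drift introduces a small bias, so this is only a rough check.
\begin{verbatim}
# Rough Monte-Carlo check of the scalar factor 1/(a_L+a_R)^2 in sigma*sigma^T:
# simulate dq = drift(q) dr + dW by Euler-Maruyama and use the fact that, for
# a stationary process, Var(int_0^T h(q) dr) ~ T * 2*int_0^inf R(r) dr.
# All numerical parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
aL, aR = 1.0, 2.0
dr, T_burn, T_run, n_paths = 0.005, 20.0, 200.0, 400

def drift(q):
    return np.where(q < 0.0, aL, -aR)

def h(q):                                   # (F_0 - Omega) = h(q)*(b_L - b_R)
    return np.where(q < 0.0, aL, -aR) / (aL + aR)

q = np.zeros(n_paths)
integral = np.zeros(n_paths)
n_burn, n_run = round(T_burn / dr), round(T_run / dr)
for step in range(n_burn + n_run):
    if step >= n_burn:                      # integrate only after a burn-in
        integral += h(q) * dr
    q += drift(q) * dr + np.sqrt(dr) * rng.standard_normal(n_paths)

print(np.var(integral) / T_run)             # Monte-Carlo estimate
print(1.0 / (aL + aR)**2)                   # scalar factor in sigma*sigma^T
\end{verbatim}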
Then, by (\ref{eq:sigma}) we can write
\begin{equation}
\sigma({\bf y}) \sigma({\bf y})^{\sf T} =
2 \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\big( F_0(q,{\bf y}) - \Omega({\bf y}) \big)
\big( F_0(q_0,{\bf y}) - \Omega({\bf y}) \big)^{\sf T}
p(q,r|q_0) p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \;,
\end{equation}
where $F_0$ is given by (\ref{eq:F0}).
Since,
\begin{equation}
\mathbb{E} \left[ F_0(q,{\bf y}) - \Omega({\bf y}) \right] \equiv 0 \;,
\end{equation}
it follows that
\begin{equation}
\sigma({\bf y}) \sigma({\bf y})^{\sf T} =
2 \int_0^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
\big( F_0(q,{\bf y}) - \Omega({\bf y}) \big)
\big( F_0(q_0,{\bf y}) - \Omega({\bf y}) \big)^{\sf T}
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \;.
\end{equation}
By (\ref{eq:Omega}), (\ref{eq:meanFcal}) and (\ref{eq:F0}) we have
\begin{equation}
F_0(q,{\bf y}) - \Omega({\bf y}) =
\left\{ \begin{array}{lc}
\frac{a_L (b_L-b_R)}{a_L+a_R} \;, & q < 0 \\
\frac{-a_R (b_L-b_R)}{a_L+a_R} \;, & q > 0
\end{array} \right. \;.
\end{equation}
Therefore we can write
\begin{eqnarray}
\sigma({\bf y}) \sigma({\bf y})^{\sf T} &=&
2 \frac{(b_L-b_R)(b_L-b_R)^{\sf T}}{(a_L+a_R)^2} \Bigg(
a_L^2 \int_0^\infty \int_{-\infty}^0 \int_{-\infty}^0
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \nonumber \\
&&-~a_L a_R \int_0^\infty \int_{-\infty}^0 \int_0^\infty
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \nonumber \\
&&-~a_L a_R \int_0^\infty \int_0^\infty \int_{-\infty}^0
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \nonumber \\
&&+~a_R^2 \int_0^\infty \int_0^\infty \int_0^\infty
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr \Bigg) \;.
\label{eq:sigma3}
\end{eqnarray}
Next we show that
\begin{equation}
\int_0^\infty p(q,r|q_0) - p_{\rm ss}(q) \,dr = \left\{ \begin{array}{lc}
\left( \frac{a_L^3+a_R^3}{a_L a_R (a_L+a_R)^2} +
\frac{2 a_R}{a_L+a_R} (q+q_0) \right)
{\rm e}^{2 a_L q} \\
+~\frac{1}{a_L} \left( {\rm e}^{a_L(q-q_0)-a_L|q-q_0|} - {\rm e}^{2 a_L q} \right)
\;, & q_0 \le 0,\, q \le 0 \\
\left( \frac{a_L^3+a_R^3}{a_L a_R (a_L+a_R)^2} -
\frac{2 a_L}{a_L+a_R} q + \frac{2 a_R}{a_L+a_R} q_0 \right)
{\rm e}^{-2 a_R q} \;, & q_0 \le 0,\, q \ge 0 \\
\left( \frac{a_L^3+a_R^3}{a_L a_R (a_L+a_R)^2} +
\frac{2 a_R}{a_L+a_R} q - \frac{2 a_L}{a_L+a_R} q_0 \right)
{\rm e}^{2 a_L q} \;, & q_0 \ge 0,\, q \le 0 \\
\left( \frac{a_L^3+a_R^3}{a_L a_R (a_L+a_R)^2} -
\frac{2 a_L}{a_L+a_R} (q+q_0) \right)
{\rm e}^{-2 a_R q} \\
+~\frac{1}{a_R} \left( {\rm e}^{-a_R(q-q_0)-a_R|q-q_0|} - {\rm e}^{-2 a_R q} \right)
\;, & q_0 \ge 0,\, q \ge 0
\end{array} \right. \;,
\label{eq:LapDiff}
\end{equation}
and from (\ref{eq:pss2}) and (\ref{eq:LapDiff}) straightforward integration reveals that
the integrals that appear in (\ref{eq:sigma3}) are given simply by
\begin{equation}
\begin{split}
\int_0^\infty \int_{-\infty}^0 \int_{-\infty}^0
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr &=
\frac{1}{2 (a_L+a_R)^2} \;, \\
\int_0^\infty \int_{-\infty}^0 \int_0^\infty
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr &=
\frac{-1}{2 (a_L+a_R)^2} \;, \\
\int_0^\infty \int_0^\infty \int_{-\infty}^0
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr &=
\frac{-1}{2 (a_L+a_R)^2} \;, \\
\int_0^\infty \int_0^\infty \int_0^\infty
\big( p(q,r|q_0) - p_{\rm ss}(q) \big)
p_{\rm ss}(q_0) \,dq \,dq_0 \,dr &=
\frac{1}{2 (a_L+a_R)^2} \;,
\end{split}
\label{eq:sigma3aux}
\end{equation}
with which we immediately arrive at the desired result (\ref{eq:sigma4}).
To prove (\ref{eq:LapDiff}), we first note that,
as shown in \cite{KaSh84}, $p(q,r|q_0)$ is given by
\begin{equation}
p(q,r|q_0) = \left\{ \begin{array}{lc}
2 {\rm e}^{2 a_L q} \int_0^\infty
h(r,b,a_R) * h(r,b-q-q_0,a_L) \,db +
G(q,r,a_L|q_0) \;, & q_0 \le 0,\, q \le 0 \\
2 {\rm e}^{-2 a_R q} \int_0^\infty
h(r,b+q,a_R) * h(r,b-q_0,a_L) \,db \;, & q_0 \le 0,\, q \ge 0 \\
2 {\rm e}^{2 a_L q} \int_0^\infty
h(r,b+q_0,a_R) * h(r,b-q,a_L) \,db \;, & q_0 \ge 0,\, q \le 0 \\
2 {\rm e}^{-2 a_R q} \int_0^\infty
h(r,b+q+q_0,a_R) * h(r,b,a_L) \,db +
G(q,r,-a_R|q_0) \;, & q_0 \ge 0,\, q \ge 0
\end{array} \right. \;,
\label{eq:p}
\end{equation}
where
\begin{eqnarray}
h(r,q_0,\omega) &=& \frac{|q_0|}{\sqrt{2 \pi r^3}}
{\rm e}^{-\frac{(q_0 - \omega r)^2}{2 r}} \;, \label{eq:h} \\
G(q,r,\omega|q_0) &=& \frac{1}{\sqrt{2 \pi r}}
{\rm e}^{-\frac{(q-q_0-\omega r)^2}{2 r}} - {\rm e}^{-2 \omega q_0}
\frac{1}{\sqrt{2 \pi r}}
{\rm e}^{-\frac{(q+q_0-\omega r)^2}{2 r}} \;, \label{eq:Gabsorb}
\end{eqnarray}
and $*$ denotes convolution with respect to $r$.
Here we derive (\ref{eq:LapDiff}) from (\ref{eq:p})-(\ref{eq:Gabsorb}) for $q_0, q \ge 0$.
The case $q_0 \ge 0$, $q \le 0$ is similar and
the remaining two cases follow by symmetry.
For $q_0, q \ge 0$, direct integration yields\removableFootnote{
$\int_0^\infty G(q,r,-a_R|q_0) \,dr =
\frac{1}{a_R} \left( {\rm e}^{-a_R(q-q_0)-a_R|q-q_0|} - {\rm e}^{-2 a_R q} \right)$
}
\begin{align}
&\int_0^\infty p(q,r|q_0) - p_{\rm ss}(q) \,dr =
\frac{1}{a_R} \left( {\rm e}^{-a_R(q-q_0)-a_R|q-q_0|} - {\rm e}^{-2 a_R q} \right) \nonumber \\
&+~\lim_{\nu \to 0^+}
\mathcal{L} \left( 2 {\rm e}^{-2 a_R q} \int_0^\infty
h(r,b+q+q_0,a_R) * h(r,b,a_L) \,db - p_{\rm ss}(q) \right) \;,
\end{align}
where
\begin{equation}
\mathcal{L}[f(r)] = \int_0^\infty {\rm e}^{-\nu r} f(r) \,dr
\end{equation}
denotes a Laplace transform in $r$.
Next, we recall (\ref{eq:pss2}) and note that
\begin{equation}
\mathcal{L}[h(r,q_0,\omega)] = {\rm e}^{\omega q_0 - \sqrt{\omega^2 + 2 \nu} |q_0|} \;,
\end{equation}
to obtain
\begin{align}
&\int_0^\infty p(q,r|q_0) - p_{\rm ss}(q) \,dr
= \frac{1}{a_R} \left( {\rm e}^{-a_R(q-q_0)-a_R|q-q_0|} - {\rm e}^{-2 a_R q} \right) \nonumber \\
&+~2 {\rm e}^{-2 a_R q} \lim_{\nu \to 0^+}
\left( \frac{{\rm e}^{\left( a_R - \sqrt{a_R^2 + 2 \nu} \right)(q+q_0)}}
{-a_R + \sqrt{a_R^2 + 2 \nu} - a_L + \sqrt{a_L^2 + 2 \nu}} -
\frac{a_L a_R}{\nu (a_L+a_R)} \right) \;.
\end{align}
Finally, by substituting
$-a + \sqrt{a^2 + 2 \nu} = \frac{\nu}{a} - \frac{\nu^2}{2 a^3} + O(\nu^3)$,
with $a = a_L,a_R$ in the above equation,
terms involving $\frac{1}{\nu}$ vanish and we arrive at (\ref{eq:LapDiff}) for $q_0, q \ge 0$.
\section{Derivations of formulas for ${\rm Diff} \left( t^S \right)$ and ${\rm Var}(t^S)$}
\label{sec:TSL}
\setcounter{equation}{0}
In this section we derive (\ref{eq:Difftsl2}) and (\ref{eq:combvartsl}):
\begin{align}
{\rm Diff} \left( t^S \right) &= {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right)
+ \sum_{i=1}^N \sum_{j=1}^N {\rm D}_{{\bf x}}^2 t_{d}^S \left( {\bf x}_{\Gamma}^M \right)_{i,j}
{\rm Cov} \left( {\bf x}^M \right)_{i,j} + O \left( \varepsilon^{\frac{3}{2}} \right) \;, \label{eq:DifftslA} \\
{\rm Var} \left( t^S \right) &= {\rm Var} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;, \label{eq:combvartslA}
\end{align}
which express the leading order terms for ${\rm Diff} \left( t^S \right)$ and ${\rm Var}(t^S)$
in terms of conditioned quantities and may be evaluated using the results of \S\ref{sec:SLIDE}.
Analogous formulas in \S\ref{sec:COMB} relating to other components
of the stochastic dynamics may be derived in the same fashion\removableFootnote{
For example,
\begin{eqnarray}
{\rm Cov} \left( {\bf x}^S \right)
&=& \int \left( {\bf x}^S - \mathbb{E} \left[ {\bf x}^S \right] \right)
\left( {\bf x}^S - \mathbb{E} \left[ {\bf x}^S \right] \right)^{\sf T} p \left( {\bf x}^S \right) \,d{\bf x}^S \nonumber \\
&=& \int \int \left( {\bf x}^S - \mathbb{E} \left[ {\bf x}^S \big| {\bf x}^M \right]
+ \mathbb{E} \left[ {\bf x}^S \big| {\bf x}^M \right] - \mathbb{E} \left[ {\bf x}^S \right] \right)
\left( \cdots \right)^{\sf T}
p \left( {\bf x}^S \big| {\bf x}^M \right) \,d{\bf x}^S p \left( {\bf x}^M \right) \,d{\bf x}^M \nonumber \\
&=& \int \int \left( {\bf x}^S - \mathbb{E} \left[ {\bf x}^S \big| {\bf x}^M \right]
+ {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right) \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right)
+ O(\varepsilon) \right) \left( \cdots \right)^{\sf T} p \left( {\bf x}^S \big| {\bf x}^M \right)
\,d{\bf x}^S p \left( {\bf x}^M \right) \,d{\bf x}^M \nonumber \\
&=& \int {\rm Cov} \left[ {\bf x}^S \big| {\bf x}^M \right] p \left( {\bf x}^M \right) \,d{\bf x}^M \nonumber \\
&&+~{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
\int \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right) \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right)^{\sf T}
p \left( {\bf x}^M \right) \,d{\bf x}^M {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
+ O \left( \varepsilon^{\frac{3}{2}} \right) \nonumber \\
&=& {\rm Cov} \left( {\bf x}^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} {\bf x}_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} \;.
\end{eqnarray}
}.
First, by definition,
\begin{equation}
{\rm Diff} \left( t^S \right) \equiv \mathbb{E} \left[ t^S \right] - t_{\Gamma}^S
= \int t^S p \left( t^S \right) \,dt^S - t_{\Gamma}^S \;,
\end{equation}
where throughout this exposition $p(\cdot)$ denotes the PDF of the indicated variable.
Conditioning over the starting point ${\bf x}^M$ gives
\begin{equation}
{\rm Diff} \left( t^S \right) =
\int t^S \int p \left( t^S \big| {\bf x}^M \right) p \left( {\bf x}^M \right) \,d{\bf x}^M \,dt^S - t_{\Gamma}^S \;.
\end{equation}
By then reversing the order of integration and using
${\rm Diff} \left( t^S \big| {\bf x}^M \right) \equiv
\mathbb{E} \left[ t^S \big| {\bf x}^M \right] - t_{d}^S \left( {\bf x}^M \right)$, we obtain
\begin{equation}
{\rm Diff} \left( t^S \right) =
\int \left( t_{d}^S \left( {\bf x}^M \right) + {\rm Diff} \left( t^S \big| {\bf x}^M \right) \right)
p \left( {\bf x}^M \right) \,d{\bf x}^M - t_{\Gamma}^S \;.
\label{eq:DifftslA2}
\end{equation}
By replacing $t_{d}^S \left( {\bf x}^M \right)$ in (\ref{eq:DifftslA2})
with its Taylor series centred at the deterministic value ${\bf x}^M = {\bf x}_{\Gamma}^M$:
\begin{equation}
t_{d}^S \left( {\bf x}^M \right) =
t_{d}^S \left( {\bf x}_{\Gamma}^M \right) +
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right) +
\left( {\bf x}^M - {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm D}_{{\bf x}}^2 t_{d}^S \left( {\bf x}_{\Gamma}^M \right)
\left( {\bf x}^M - {\bf x}_{\Gamma}^M \right) + O \left( \varepsilon^{\frac{3}{2}} \right) \;,
\label{eq:tdetSTaylor}
\end{equation}
and evaluating the integral in (\ref{eq:DifftslA2}) we arrive at (\ref{eq:DifftslA}).
The error term in (\ref{eq:tdetSTaylor}) is $O \left( \varepsilon^{\frac{3}{2}} \right)$
because ${\bf x}^M - {\bf x}_{\Gamma}^M = O \left( \sqrt{\varepsilon} \right)$.
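To make the last step transparent, here is a sketch of the evaluation (an intermediate form only; the precise statement is (\ref{eq:DifftslA})).
Substituting (\ref{eq:tdetSTaylor}) into (\ref{eq:DifftslA2}) and integrating term by term against $p \left( {\bf x}^M \right)$,
using $t_{d}^S \left( {\bf x}_{\Gamma}^M \right) = t_{\Gamma}^S$,
writing ${\rm Diff} \left( {\bf x}^M \right) \equiv \mathbb{E} \left[ {\bf x}^M \right] - {\bf x}_{\Gamma}^M$,
and assuming that ${\rm Diff} \left( t^S \big| {\bf x}^M \right)$ depends smoothly on ${\bf x}^M$
so that it may be evaluated at ${\bf x}_{\Gamma}^M$ at the cost of an $O \left( \varepsilon^{\frac{3}{2}} \right)$ error, we find
\begin{eqnarray}
{\rm Diff} \left( t^S \right) &=& {\rm Diff} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} {\rm Diff} \left( {\bf x}^M \right) \nonumber \\
&&+~\int \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm D}_{{\bf x}}^2 t_{d}^S \left( {\bf x}_{\Gamma}^M \right)
\left( {\bf x}^M - {\bf x}_{\Gamma}^M \right) p \left( {\bf x}^M \right) \,d{\bf x}^M
+ O \left( \varepsilon^{\frac{3}{2}} \right) \;, \nonumber
\end{eqnarray}
where the last integral equals the corresponding contraction with ${\rm Cov} \left( {\bf x}^M \right)$ up to $O \left( \varepsilon^2 \right)$.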
Second, to derive (\ref{eq:combvartslA}) we begin by writing
\begin{equation}
{\rm Var} \left( t^S \right)
= \int \left( t^S - \mathbb{E} \left[ t^S \right] \right)^2 p \left( t^S \right) \,dt^S \;.
\end{equation}
Conditioning over ${\bf x}^M$ gives
\begin{equation}
{\rm Var} \left( t^S \right)
= \int \left( t^S - \mathbb{E} \left[ t^S \right] \right)^2 \int p\left( t^S \big| {\bf x}^M \right)
p \left( {\bf x}^M \right) \,d{\bf x}^M \,dt^S \;.
\end{equation}
Reversing the order of integration and adding and subtracting $\mathbb{E} \left[ t^S \big| {\bf x}^M \right]$ produces
\begin{equation}
{\rm Var} \left( t^S \right)
= \int \int \left( t^S - \mathbb{E} \left[ t^S \big| {\bf x}^M \right]
+ \mathbb{E} \left[ t^S \big| {\bf x}^M \right] - \mathbb{E} \left[ t^S \right] \right)^2
p \left( t^S \big| {\bf x}^M \right) \,dt^S p \left( {\bf x}^M \right) \,d{\bf x}^M \;.
\label{eq:combvartslA2}
\end{equation}
Since the mean values differ from their deterministic values by $O(\varepsilon)$, we have
\begin{equation}
\mathbb{E} \left[ t^S \big| {\bf x}^M \right] - \mathbb{E} \left[ t^S \right] =
t_{d}^S \left( {\bf x}^M \right) - t_{\Gamma}^S + O(\varepsilon) \;.
\label{eq:meanSubstraction}
\end{equation}
By substituting (\ref{eq:tdetSTaylor}) and (\ref{eq:meanSubstraction}) into (\ref{eq:combvartslA2}),
and noting $t_{d}^S \left( {\bf x}_{\Gamma}^M \right) = t_{\Gamma}^S$,
we obtain
\begin{equation}
{\rm Var} \left( t^S \right)
= \int \int \left( t^S - \mathbb{E} \left[ t^S \big| {\bf x}^M \right]
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T} \left( {\bf x}^M - {\bf x}_{\Gamma}^M \right) + O(\varepsilon) \right)^2
p \left( t^S \big| {\bf x}^M \right) \,dt^S p \left( {\bf x}^M \right) \,d{\bf x}^M \;.
\label{eq:combvartslA3}
\end{equation}
Finally, by expanding the square in (\ref{eq:combvartslA3}) and evaluating the
double integral we arrive at (\ref{eq:combvartslA}).
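For completeness, here is a sketch of that final step (an intermediate form only; the precise statement is (\ref{eq:combvartslA})).
When the square in (\ref{eq:combvartslA3}) is expanded, the cross term vanishes upon integrating over $t^S$,
because $\int \left( t^S - \mathbb{E} \left[ t^S \big| {\bf x}^M \right] \right) p \left( t^S \big| {\bf x}^M \right) \,dt^S = 0$
and the remaining factor does not involve $t^S$.
Assuming that ${\rm Var} \left( t^S \big| {\bf x}^M \right)$ depends smoothly on ${\bf x}^M$,
so that it may be evaluated at ${\bf x}_{\Gamma}^M$ at the cost of an $O \left( \varepsilon^{\frac{3}{2}} \right)$ error,
the two surviving terms give
\[
{\rm Var} \left( t^S \right)
= {\rm Var} \left( t^S \big| {\bf x}_{\Gamma}^M \right)
+ {\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)^{\sf T}
{\rm Cov} \left( {\bf x}^M \right)
{\rm D}_{{\bf x}} t_{d}^S \left( {\bf x}_{\Gamma}^M \right)
+ O \left( \varepsilon^{\frac{3}{2}} \right) \;.
\]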
\begin{comment}
\section{List of symbols}
\label{sec:GLOS}
\setcounter{equation}{0}
The symbols $i$ and $j$ are reserved for indices.\\
Numerical superscripts in parentheses are used to indicate terms in asymptotic expansions.\\
Bars are used to denote an averaged quantity.\\
Vector-valued state variables are written in bold.\\
The remaining symbols are summarized below.
{\footnotesize
\begin{tabular}{r@{~~--~~}l}
$a_L$ & term of $\phi^{(L)}$\\
$a_R$ & term of $\phi^{(R)}$\\
$b$ & Brownian motion local time\\
$b_L$ & term of $\phi^{(L)}$\\
$b_R$ & term of $\phi^{(R)}$\\
$c_L$ & term of $\phi^{(L)}$\\
$c_R$ & term of $\phi^{(R)}$\\
$d_L$ & term of $\phi^{(L)}$\\
$d_R$ & term of $\phi^{(R)}$\\
$e_i$ & coordinate vector\\
$f^{(i)}$ & piece of $p^{(i)}$\\
$g^{(i)}$ & piece of $p^{(i)}$\\
$h$ & inverse Gaussian PDF\\
$k$ & joint PDF for first passage time and location\\
$o$ & order (little-$o$ notation)\\
$p$ & PDF for Brownian motion with constant drift in Appendix \ref{sec:SIGMA}\\
$p^R$ & PDF for the regular phase\\
$p^S$ & PDF for sliding\\
$p^E$ & PDF for escaping\\
$p_{\rm ss}$ & steady-state PDF\\
$p_{\rm qss}$ & quasi-steady-state PDF\\
$p_f$ & free-space PDF\\
$p_{\ell}$ & boundary layer component of PDF\\
$q$ & Brownian motion with two-valued drift\\
$q_0$ & initial condition for $q$\\
$r$ & scaled $t$ for sliding\\
$\tilde{r}$ & dummy $r$ in \S 4.2\\
$s$ & scaled $t$ for escaping; also dummy $t$ in \S 3.1\\
$\tilde{s}$ & dummy $t$ in \S 3.1 and \S 4.2\\
$t$ & time\\
$t^S$ & sliding time\\
$t^E$ & escaping time\\
$t^R$ & excursion time\\
$t_{\frac{1}{2} {\rm osc}}$ & half oscillation time\\
$t_{\rm osc}$ & oscillation time\\
$u$ & scaled $x_1$ for escaping\\
$u_i$ & scaled $x_i$ for the regular phase\\
$v$ & the value of the vector field at the deterministic location of first passage\\
$v_{-\lambda}$ & weakly stable eigenvector\\
$x$ & variable for Brownian motion with two-valued drift\\
$x_i$ & element of ${\bf x}$\\
${\bf x}$ & state of general system\\
${\bf x}^M$ & point at which previous excursion ends\\
${\bf x}^S$ & point at which sliding ends\\
${\bf x}^E$ & point at which escaping ends\\
${\bf x}^R$ & point at which excursion ends\\
${\bf x}_0$ & initial condition\\
${\bf x}_{d}$ & deterministic value dependent on initial condition\\
${\bf x}_{\Gamma}$ & value for periodic orbit\\
${\bf x}_{\rm int}^{(L)}$ & transformed ${\bf X}_{\rm int}^{(L)}$\\
${\bf x}_{\rm int}^{(R)}$ & transformed ${\bf X}_{\rm int}^{(R)}$\\
${\bf x}^{(i)}$ & component of ${\bf x}$ in asymptotic expansion\\
${\bf y}$ & the vector ${\bf x}$ omitting first element\\
$\hat{{\bf y}}$ & ${\bf y} - {\bf y}_{d}$\\
$z$ & equals $\frac{x_1}{\varepsilon}$ for regular and sliding phases
\end{tabular}
\begin{tabular}{r@{~~--~~}l}
$A$ & matrix in relay control normal form\\
$\mathcal{A}$ & transformed $A$\\
$B$ & vector in relay control normal form\\
$\mathcal{B}^{(L)}$ & transformed $B$ for the left\\
$\mathcal{B}^{(R)}$ & transformed $B$ for the right\\
$C$ & vector in relay control normal form\\
$D$ & diffusion matrix in transformed relay control and general system\\
${\rm D}_{{\bf x}}$ & derivative with respect to the vector ${\bf x}$\\
$F$ & stochastic function for sliding vector field\\
$F_0$ & stochastic function for sliding vector field in the limit $\varepsilon \to 0$\\
$G$ & absorbing PDF\\
$H$ & component of $K$\\
$J$ & probability current\\
$K$ & covariance matrix of ${\bf x}^{(1)}$\\
$K_i$ & coefficients in $\varepsilon$ expansions of ${\rm Diff}(t_{\rm osc})$ and ${\rm Std}(t_{\rm osc})$\\
$L$ & left\\
$M$ & diffusion matrix for sliding\\
$N$ & number of dimensions\\
$O$ & order (big-$O$ notation)\\
$P$ & matrix for linear change of coordinates, ${\bf X} \to {\bf x}$\\
$\mathcal{P}$ & scaled PDF for the regular phase\\
$\mathcal{P}^{(i)}$ & component of $\mathcal{P}$ in asymptotic expansion\\
$Q$ & vector for linear change of coordinates, ${\bf X} \to {\bf x}$\\
$R$ & right; correlation\\
$T$ & deterministic sliding time for relay control\\
$\mathcal{T}$ & scaled $t$ for escaping\\
${\bf V}$ & another vector Brownian motion\\
$W$ & Brownian motion\\
${\bf W}$ & vector Brownian motion\\
$X_i$ & element of ${\bf X}$\\
$\mathcal{X}$ & scaled $x$ for escaping\\
${\bf X}$ & state of relay control system\\
${\bf X}^{*(L)}$ & equilibrium solution of left half-system\\
${\bf X}^{*(R)}$ & equilibrium solution of right half-system\\
${\bf X}_{\rm int}^{(L)}$ & intersection of weakly stable manifold of left half-system with switching manifold\\
${\bf X}_{\rm int}^{(R)}$ & intersection of weakly stable manifold of right half-system with switching manifold\\
$Y$ & function in solution to PDF near escaping\\
${\bf Y}$ & scaled ${\bf y}$ for sliding\\
$Z$ & denotes $X_{\rm det,3}(T;{\bf X}_{\rm int}^{(L)})$\\
\end{tabular}
\begin{tabular}{r@{~~--~~}l}
$\alpha$ & $(1,1)$-element of $D D^{\sf T}$\\
$\beta$ & first column of $D D^{\sf T}$ omitting first element\\
$\gamma$ & the matrix $D D^{\sf T}$ omitting first row and column\\
$\delta$ & Dirac-delta function\\
$\delta^-$ & lower bound of $x_2$ for escaping region (negative valued)\\
$\delta^+$ & lower bound of $x_2$ for escaping region (positive valued)\\
$\varepsilon$ & square of noise amplitude\\
$\zeta$ & parameter for relay control\\
$\eta$ & power of $\varepsilon$ for averaging argument\\
$\theta_{ij}$ & element of $\Theta$\\
$\kappa_{ij}$ & element of $K$\\
$\lambda$ & parameter for relay control\\
$\lambda_i$ & scaling exponents for escaping\\
$\mu$ & ratio in Filippov's solution\\
$\nu$ & frequency variable for Laplace transform; also control output\\
$\xi$ & value of $\Psi$\\
$\varrho$ & variance correction parameter\\
$\sigma$ & averaged deviation, integral of correlation\\
$\tau$ & scaled $t$ for regular phase\\
$\phi^{(L)}$ & left half vector field\\
$\phi^{(R)}$ & right half vector field\\
$\varphi$ & control input\\
$\chi$ & vector in quadratic form part of $f^{(0)}$\\
$\psi^{(L)}$ & $\phi^{(L)}$ omitting first element\\
$\psi^{(R)}$ & $\phi^{(R)}$ omitting first element\\
$\omega$ & parameter for relay control; also dummy magnitude of drift\\
$\Gamma$ & periodic orbit\\
$\Theta$ & covariance matrix of ${\bf y}$, to lowest order\\
$\Lambda$ & correction to sliding vector field\\
$\Phi_i$ & auxiliary function in sliding PDF expansion\\
$\Psi$ & auxiliary function in double integral\\
$\Omega$ & sliding vector field
\end{tabular}
}
\end{comment}
\end{document} |
\begin{document}
\title{Derivation of a refined 6-parameter shell model: Descent from the three-dimensional Cosserat elasticity using a method of classical shell theory}
\author{ Mircea B\^irsan\thanks{Mircea B\^irsan, \ \ Lehrstuhl f\"{u}r Nichtlineare Analysis und Modellierung, Fakult\"{a}t f\"{u}r Mathematik,
Universit\"{a}t Duisburg-Essen, Thea-Leymann Str. 9, 45127 Essen, Germany; and Department of Mathematics, Alexandru Ioan Cuza University of Ia\c si, Blvd.
Carol I, no. 11, 700506 Ia\c si,
Romania; email: [email protected]}
}
\maketitle
\begin{center}
\thanks{\textit{Dedicated to Professor Sanda Cleja-\c Tigoiu on the occasion of her 70th birthday}}
\end{center}
\begin{abstract}
Starting from the three-dimensional Cosserat elasticity, we derive a two-dimensional model for isotropic elastic shells. For the dimensional reduction, we employ a derivation method similar to that used in classical shell theory, as presented systematically by Steigmann in [J. Elast. \textbf{111}: 91-107, 2013]. As a result, we obtain a geometrically nonlinear Cosserat shell model with a specific form of the strain-energy density, which has a simple expression with coefficients depending on the initial curvature tensor and on three-dimensional material constants. The explicit forms of the stress-strain relations and the local equilibrium equations are also recorded. Finally, we compare our results with other 6-parameter shell models and discuss the relation to the classical Koiter shell model.
\end{abstract}
\textbf{Keywords:} Shell theory; 6-parameter shells; Elastic Cosserat material; Strain-energy density; Curvature.
\section{Introduction}\label{Intro}
Elastic shell theory is an important branch of the mechanics of deformable bodies, in view of its applications in engineering. It also remains a domain of active research, since shell models with better properties are still being sought. This task is not easy: a shell model should be simple enough to be manageable in practical engineering problems, but at the same time complex enough to account for relevant curvature and three-dimensional effects.
The classical shell theory, also called the first-order approximation theory, provides relatively simple shell models (e.g., the well-known Koiter shell model), but it is not applicable to all shell problems. The classical approach can be employed only if the Kirchhoff-Love hypotheses are satisfied; moreover, a loss of accuracy of the classical theory can be observed for certain problems (see, e.g., \cite{Berdic-Mis92}). Therefore, more refined shell theories are needed.
One of the most general theories of shells, which has been developed considerably in recent decades, is the so-called 6-parameter shell theory. This approach was initially proposed by Reissner \cite{Reissner74}. The theory of 6-parameter shells, presented in the books \cite{Libai98,Pietraszkiewicz-book04}, involves two independent kinematic fields:
the translation vector (3 degrees of freedom) and the rotation tensor (3 additional degrees of freedom).
Some of the achievements of this general shell theory have been presented in \cite{Pietraszkiewicz10,Eremeyev11,Pietraszkiewicz11}. We mention that the kinematical structure of 6-parameter shells is identical to the kinematical structure of Cosserat shells, which are regarded as deformable surfaces with a triad of rigid directors describing the orientation of material points. Thus, the rotation tensor in the 6-parameter model accounts for the orientation change of the triad of directors. General results concerning the existence of minimizers in the 6-parameter shell theory have been presented in \cite{Birsan-Neff-MMS-2014}.
In order to be useful in practice, the shell model should present a concrete (specific) form of the constitutive relations and strain-energy density. The specific form should satisfy these two requirements: the coefficients of the strain-energy density should be determined in terms of the three-dimensional material constants and they should depend on the (initial) curvature tensor $ \boldsymbol b $ of the reference configuration. In the literature on 6-parameter shells, we were not able to find a satisfactory strain-energy density for isotropic shells: the available specific forms are either too simple (in the sense that the coefficients are constant, i.e. independent of the initial curvature $ \boldsymbol b $), or they are general functions of the strain measures whose coefficients are not identified in terms of three-dimensional material constants.
Our present work aims to fill this gap and establishes a specific form for the strain-energy density of isotropic 6-parameter (Cosserat) elastic shells, together with explicit stress-strain relations, which fulfill the above requirements. In this model, we retain the terms up to the order $ O(h^3) $ with respect to the shell thickness $ h $ and derive a relatively simple expression of the strain-energy density, which can be used in applications. To obtain the two-dimensional strain-energy density (i.e., written as a function of $ (x_1,x_2) $, the surface curvilinear coordinates), we descend from a Cosserat three-dimensional elastic model and apply the derivation method from the classical theory of shells, which was systematically presented by Steigmann in \cite{Steigmann08,Steigmann12,Steigmann13}. Thus, in Section \ref{Sect2} we introduce the three-dimensional Cosserat continuum in curvilinear coordinates, with the appropriate strain measures \eqref{f1}, \eqref{f2}, equilibrium equations \eqref{f3} and constitutive relations \eqref{f4}-\eqref{f7}.
In Section \ref{Sect3}, we describe briefly the geometry of surfaces and the kinematics of 6-parameter shells, and define the shell strain tensor and bending-curvature tensor \eqref{f31}.
In the main Section \ref{Sect4}, we derive the two-dimensional shell model by performing the integration over the thickness and using the aforementioned derivation method \cite{Steigmann13}, inspired by the classical shell theory. Here, we adopt some assumptions which are common in shell theories (such as, for instance, the assumption that the stress vectors on the major faces of the shell are of order $ O(h^3) $)
and are able to neglect some higher-order terms to obtain a simplified form of the strain-energy density \eqref{f61}.
For the sake of completeness, we also present the equilibrium equations for 6-parameter (Cosserat) shells \eqref{f82}, which we deduce from the condition that the equilibrium state is a stationary point of the energy functional.
Section \ref{Sect5} is devoted to further remarks and comments on the derived Cosserat shell model. We introduce the fourth-order tensor of elastic moduli for shells \eqref{f89}, \eqref{f93} and present the explicit form of the stress-strain relations \eqref{f100}.
In order to compare our results with other 6-parameter shell models, we write the strain-energy density in an alternative useful form \eqref{f107}. We pay special attention to the comparison with the Cosserat shell model of order $ O(h^5) $ which has been presented recently in \cite{Birsan-Neff-MMS-2019}. Although the derivation methods are different, we obtain the same form of the strain-energy density, except for the coefficients of the transverse shear energy, which are unequal. The value of the transverse shear coefficient derived in the present work is confirmed by the results obtained previously through $ \Gamma $-convergence in \cite{Neff_Hong_Reissner08} for the case of plates.
Finally, we discuss in Subsection \ref{Sect5.3} the relation between our 6-parameter shell model and the classical Koiter model. We show that, if we adopt appropriate restrictions (the material is a Cauchy continuum and the Kirchhoff-Love hypotheses are satisfied), we are able to reduce the form of our strain-energy density to obtain the classical Koiter energy, see \eqref{f125}.
\subsection*{Notations}\label{Not}
Let us next present some useful notation which will be used throughout this paper. The Latin indices $ i,j,k,... $ range over the set $ \{1,2,3\} $, while the Greek indices $ \alpha,\beta,\gamma,... $ range over the set $ \{1,2\} $. The Einstein summation convention over repeated indices is used. A subscript comma preceding an index $ i $ (or $ \alpha $) designates partial differentiation with respect to the variable $ x_i $ (or $ x_\alpha\, $, respectively), e.g. $ f,_i = \dfrac{\partial f}{\partial x_i}\; $.
We denote by $ \,\delta_i^j\, $ the Kronecker symbol, i.e. $ \,\delta_i^j=1 $ for $ i=j $, while $\,\delta_i^j=0 $ for $ i\neq j $.
We employ the direct tensor notation. Thus, $ \otimes $ designates the dyadic product, $ {\boldsymbol{\mathbbm{1}}}_3 = \boldsymbol g_i\otimes\, \boldsymbol g^i\, $ is the unit second order tensor in the 3-space, and $ \mathrm{axl}(\boldsymbol W) $ stands for the axial vector of any skew-symmetric tensor $ \boldsymbol W $.
Let $ \mathrm{tr}(\boldsymbol X) $ denote the trace of any second order tensor $ \boldsymbol X $. The symmetric part, skew-symmetric part, and deviatoric part of $ \boldsymbol X $ are defined by
\[
\mathrm{sym}\, \boldsymbol X = \dfrac12\big( \boldsymbol X + \boldsymbol X^T\big),\qquad
\mathrm{skew}\, \boldsymbol X = \dfrac12\big( \boldsymbol X - \boldsymbol X^T\big),\qquad
\mathrm{dev}_3 \boldsymbol X = \boldsymbol X - \dfrac13\, \big(\mathrm{tr}\,\boldsymbol X\big)\,{\boldsymbol{\mathbbm{1}}}_3\,.
\]
The scalar product between any second order tensors $ \,\boldsymbol A = A^{ij}\boldsymbol g_i\otimes \boldsymbol g_j = A_{ij}\,\boldsymbol g^i\otimes \boldsymbol g^j\,$ and $\, \boldsymbol B = B^{kl}\boldsymbol g_k\otimes \boldsymbol g_l = B_{kl}\,\boldsymbol g^k\otimes \boldsymbol g^l\,$
is denoted by
\[
\boldsymbol A : \boldsymbol B = \mathrm{tr}\big(\boldsymbol A^T \boldsymbol B \big) = A^{ij} B_{ij} = A_{kl} B^{kl}\,.
\]
If $ \, \underline{\boldsymbol C} = C^{ijkl}\boldsymbol g_i\otimes
\boldsymbol g_j\otimes \boldsymbol g_k\otimes \boldsymbol g_l \, $
is a fourth-order tensor, then we use the corresponding notations
\[
\underline{\boldsymbol C} : \boldsymbol B = C^{ijkl}B_{kl}\,\boldsymbol g_i\otimes
\boldsymbol g_j\,,\qquad
\boldsymbol A : \underline{\boldsymbol C} = C^{ijkl} A_{ij}\,
\boldsymbol g_k\otimes \boldsymbol g_l \,,\qquad
\boldsymbol A : \underline{\boldsymbol C} : \boldsymbol B = C^{ijkl} A_{ij}\,B_{kl}\,.
\]
For any vector $ \boldsymbol v = v^i \boldsymbol g_i = v_i\, \boldsymbol g^i$ we write as usual
\[
\boldsymbol A \boldsymbol v = A^{ij}v_j\, \boldsymbol g_i = A_{ij}v^j\, \boldsymbol g^i
\qquad\mathrm{and}\qquad
\boldsymbol v \boldsymbol A = \boldsymbol A^T \boldsymbol v
= A^{ij}v_i\, \boldsymbol g_j = A_{ij}v^i\, \boldsymbol g^j\,.
\]
\section{Three-dimensional Cosserat elastic continua}\label{Sect2}
Let us consider a three-dimensional Cosserat body which occupies the domain
$\Omega_\xi\subset\mathbb{R}^3$ in its reference configuration.
The deformation is characterized by the vectorial map
$ \boldsymbol \varphi_\xi:\Omega_\xi \rightarrow\Omega_c $ (here $ \Omega_c\subset\mathbb{R}^3 $ is the deformed configuration) and the microrotation tensor $ \boldsymbol{R}_\xi:\Omega_\xi \rightarrow \mathrm{SO}(3) $ (the special orthogonal group).
On the reference configuration $ \Omega_\xi $ we consider a system of curvilinear coordinates $(x_1,x_2,x_3)$, which are induced by the parametric representation $ \boldsymbol\Theta:\Omega_h \rightarrow\Omega_\xi\, $ with $ (x_1,x_2,x_3)\in \Omega_h\, $. Using the common notations, we introduce the covariant base vectors $\boldsymbol g_i : =\dfrac{\partial\boldsymbol\Theta }{\partial x_i}\,= \boldsymbol\Theta,_i$ and the contravariant base vectors $\boldsymbol g^i$ with $ \boldsymbol g^j\cdot \boldsymbol g_i=\delta^j_i\, $.
Let
$$ \boldsymbol \varphi :\Omega_h \rightarrow\Omega_c\,, \qquad \boldsymbol \varphi(x_1,x_2,x_3): = \boldsymbol \varphi_\xi\big( \boldsymbol\Theta(x_1,x_2,x_3)\big) ,$$
be the \textit{deformation function} and
$$ \boldsymbol F_\xi = \boldsymbol \varphi,_i\otimes\, \boldsymbol g^i\, $$
the \textit{deformation gradient}. We refer the domain $ \Omega_h $ to the orthonormal vector basis $ \{\boldsymbol e_1, \boldsymbol e_2 , \boldsymbol e_3 \} $, such that $ (x_1,x_2,x_3)= x_i\boldsymbol e_i\, $ and $ \nabla_x \boldsymbol\Theta=\boldsymbol\Theta,_i\,\otimes \boldsymbol e_i=\boldsymbol g_i\otimes \boldsymbol e_i$\,. The microrotation tensor can be represented as
\[ \boldsymbol{R}_\xi = \boldsymbol d_i \otimes \boldsymbol d_i^0\,,\]
where $ \{ \boldsymbol d_1^0\,, \boldsymbol d_2^0\,, \boldsymbol d_3^0\, \} $ is the orthonormal triad of directors in the reference configuration $ \Omega_\xi $ and $ \{ \boldsymbol d_1\,, \boldsymbol d_2\,, \boldsymbol d_3\, \} $ is the orthonormal triad of directors in the deformed configuration $ \Omega_c\, $. We denote by $\boldsymbol Q_e$ the \emph{elastic microrotation} given by
$$
\boldsymbol Q_e :\Omega_h \rightarrow \mathrm{SO}(3 ),\qquad \boldsymbol Q_e (x_1,x_2,x_3):= \boldsymbol R_\xi\big(\boldsymbol\Theta(x_1,x_2,x_3)\big).
$$
We choose the initial microrotation tensor $ \boldsymbol Q_0 $ such that
\begin{equation}\label{f0,5}
{\boldsymbol Q}_0=\polar{(\nabla_x \boldsymbol \Theta)}\in \rm{SO}(3 )\qquad\mbox{and}\qquad \boldsymbol Q_0=\boldsymbol d_i^0\otimes\boldsymbol e_i\,.
\end{equation}
Let
\begin{equation}\label{f1}
\overline{\boldsymbol{E}}:=\boldsymbol Q_e^T \boldsymbol F_\xi -{\boldsymbol{\mathbbm{1}}}_3
\end{equation}
denote the (non-symmetric) \emph{strain tensor} for nonlinear micropolar media and
\begin{equation}\label{f2}
\boldsymbol \Gamma := \mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,i}\big)\otimes \boldsymbol g^i
\end{equation}
be the \emph{wryness tensor} (see e.g., \cite{Neff_curl08,Pietraszkiewicz09,Birsan-Neff-L58-2017}), which is a strain measure for curvature (orientation change).
The local equations of equilibrium can be written in the form
\begin{equation}\label{f3}
\mathrm{Div}\, \boldsymbol T +\boldsymbol f=\boldsymbol 0,\qquad \mathrm{Div}\, \overline{\boldsymbol M} - \mathrm{axl}\big(\boldsymbol F_\xi \boldsymbol T^T - \boldsymbol T^T \boldsymbol F_\xi \big) + \boldsymbol c = \boldsymbol 0,
\end{equation}
where $ \boldsymbol T $ and $ \overline{\boldsymbol M} $ are the stress tensor and the couple stress tensor (of the first Piola-Kirchhoff type), $ \boldsymbol f $ and $ \boldsymbol c $ are the external body force and couple vectors. To the balance equations (\ref{f3}) one can adjoin boundary conditions.
Under hyperelasticity assumptions, the stress tensors $ \boldsymbol T $ and $ \overline{\boldsymbol M} $ are expressed by the constitutive equations
\begin{equation}\label{f4}
\boldsymbol Q_e^T \boldsymbol T = \dfrac{\partial W}{\partial\overline{\boldsymbol{E}}}\;
,\qquad
\boldsymbol Q_e^T \overline{\boldsymbol M} = \dfrac{\partial W}{\partial\boldsymbol \Gamma}\;,
\end{equation}
where $ W=W(\overline{\boldsymbol{E}}, \boldsymbol \Gamma ) $ is the elastically stored energy density. Using the Cosserat model for isotropic materials presented in \cite{Birsan-Neff-Ost_L56-2015,Birsan-Neff-MMS-2019}, we assume the following representation for the energy density
\begin{align}
W(\overline{\boldsymbol{E}}, \boldsymbol \Gamma)=\; & W_{\mathrm{mp}}(\overline{\boldsymbol{E}})+ W_{\mathrm{curv}}( \boldsymbol \Gamma), \vspace{6pt}\label{f5}\\
W_{\mathrm{mp}}(\overline{\boldsymbol{E}} ) =\; &
\mu\,\|\,\mathrm{dev_3\,sym}\, \overline{\boldsymbol{E}}\,\|^2\, + \,\mu_c \, \|\,\mathrm{skew}\, \overline{\boldsymbol{E}}\,\|^2\, + \, \dfrac{\kappa}{2}\,\big(\mathrm{tr}\,\overline{\boldsymbol{E}}\,\big)^2
\vspace{6pt}\label{f6}\\
=\; & \mu\,\|\,\mathrm{sym}\, \overline{\boldsymbol{E}}\,\|^2\, + \,\mu_c \, \|\,\mathrm{skew}\, \overline{\boldsymbol{E}}\,\|^2\, + \, \dfrac{\lambda}{2}\,\big(\mathrm{tr}\,\overline{\boldsymbol{E}}\,\big)^2
,\vspace{6pt} \notag\\
W_{\mathrm{curv}}( \boldsymbol{\Gamma}) =\; & \mu\,L_c^2\,\Big(\,b_1\,\|\,\mathrm{dev_3\,sym}\, \boldsymbol{\Gamma}\|^2\, + \,b_2\, \|\,\mathrm{skew}\, \boldsymbol{\Gamma}\|^2\, + \, b_3\big(\mathrm{tr}\,\boldsymbol{\Gamma}\big)^2\,\Big) \vspace{6pt}\label{f7}\\
=\; & \mu\,L_c^2\,\Big(\,b_1\,\|\,\mathrm{sym}\, \boldsymbol{\Gamma}\|^2\, + \,b_2\, \|\,\mathrm{skew}\, \boldsymbol{\Gamma}\|^2\, + \, \big(b_3- \dfrac{b_1}{3}\big)\big(\mathrm{tr}\,\boldsymbol{\Gamma}\big)^2\,\Big)\,, \notag
\end{align}
where $\mu>0$ is the shear modulus, $ \lambda $ is the Lam\'e constant, $\kappa=\frac13(3\lambda+2\mu)$ is the bulk modulus of classical isotropic elasticity, $\,\mu_c\ge 0$ is the so-called \emph{Cosserat couple modulus}, $b_1\,,\, b_2 \,,\, b_3>0$ are dimensionless constitutive coefficients, and the parameter $\,L_c>0\,$ introduces an internal length which is characteristic of the material.
We remark that the model is geometrically nonlinear (since the strain measures $ \overline{\boldsymbol{E}}\,,\, \boldsymbol \Gamma $ are nonlinear functions of $ \boldsymbol\varphi, \boldsymbol Q_e $), but it is physically linear in view of \eqref{f4}-\eqref{f7}. Thus, let us denote by
$$ \underline{\boldsymbol C} = C^{ijkl}\boldsymbol g_i\otimes
\boldsymbol g_j\otimes \boldsymbol g_k\otimes \boldsymbol g_l
\qquad \mathrm{and} \qquad \underline{\boldsymbol G} = G^{ijkl}\boldsymbol g_i\otimes
\boldsymbol g_j\otimes \boldsymbol g_k\otimes \boldsymbol g_l $$
the fourth-order tensors of the elastic moduli such that
\begin{equation}\label{f8}
\begin{array}{l}
\boldsymbol Q_e^T\boldsymbol T \,=\, \underline{\boldsymbol C} : \overline{\boldsymbol{E}} \,=\, 2\mu\, \mathrm{dev_3\,sym}\, \overline{\boldsymbol{E}} + 2\mu_c \, \mathrm{skew}\, \overline{\boldsymbol{E}} + \, \kappa (\mathrm{tr}\,\overline{\boldsymbol{E}}){\boldsymbol{\mathbbm{1}}}_3 \,=\, 2\mu\, \mathrm{sym}\, \overline{\boldsymbol{E}} + 2\mu_c \, \mathrm{skew}\, \overline{\boldsymbol{E}} + \, \lambda (\mathrm{tr}\,\overline{\boldsymbol{E}}){\boldsymbol{\mathbbm{1}}}_3\,,
\vspace{6pt}\\
\boldsymbol Q_e^T\overline{\boldsymbol M} \,=\, \underline{\boldsymbol G} : \boldsymbol \Gamma \,=\, 2\mu\,L_c^2\,\Big(\,b_1\, \mathrm{dev_3\,sym}\, \boldsymbol{\Gamma} + \,b_2\, \mathrm{skew}\, \boldsymbol{\Gamma} + \, b_3\big(\mathrm{tr}\,\boldsymbol{\Gamma}\big){\boldsymbol{\mathbbm{1}}}_3\Big).
\end{array}
\end{equation}
By virtue of \eqref{f8}, we see that the tensor components are
\begin{equation}\label{f9}
\begin{array}{l}
C^{ijkl} = \mu\, \big(g^{ik} g^{jl} + g^{il} g^{jk}\big) + \mu_c \big(g^{ik} g^{jl} - g^{il} g^{jk}\big) + \lambda\, g^{ij} g^{kl}\,,
\vspace{6pt}\\
G^{ijkl} = \mu\,L_c^2\,\Big( b_1 \big(g^{ik} g^{jl} + g^{il} g^{jk}\big) + b_2 \big(g^{ik} g^{jl} - g^{il} g^{jk}\big) + 2 \big(b_3- \dfrac{b_1}{3}\big)\, g^{ij} g^{kl} \Big),
\end{array}
\end{equation}
which satisfy the major symmetries $\; C^{ijkl} = C^{klij} $ , $\; G^{ijkl} = G^{klij} \;$.
Hence, we have
\begin{equation}\label{f10}
\begin{array}{l}
W_{\mathrm{mp}}(\overline{\boldsymbol{E}} ) = \dfrac12 \big(\boldsymbol Q_e^T\boldsymbol T\big) : \overline{\boldsymbol{E}} =
\dfrac12\; \overline{\boldsymbol{E}} : \underline{\boldsymbol C} : \overline{\boldsymbol{E}} \,,\qquad
W_{\mathrm{curv}}(\boldsymbol{\Gamma} ) = \dfrac12 \big(\boldsymbol Q_e^T\overline{\boldsymbol M}\big) : \boldsymbol{\Gamma} =
\dfrac12\; \boldsymbol{\Gamma} : \underline{\boldsymbol G} : \boldsymbol{\Gamma}\,.
\end{array}
\end{equation}
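As a quick consistency check (a verification left implicit in the text), differentiating the quadratic forms \eqref{f10} and using the major symmetries recovers the constitutive relations \eqref{f4}, \eqref{f8}:
\[
\dfrac{\partial W_{\mathrm{mp}}}{\partial\overline{\boldsymbol{E}}} \,=\,
\dfrac12\Big( \underline{\boldsymbol C} : \overline{\boldsymbol{E}} + \overline{\boldsymbol{E}} : \underline{\boldsymbol C}\Big)
\,=\, \underline{\boldsymbol C} : \overline{\boldsymbol{E}} \,=\, \boldsymbol Q_e^T\boldsymbol T\,,
\qquad
\dfrac{\partial W_{\mathrm{curv}}}{\partial\boldsymbol{\Gamma}} \,=\,
\underline{\boldsymbol G} : \boldsymbol{\Gamma} \,=\, \boldsymbol Q_e^T\overline{\boldsymbol M}\,,
\]
since $ C^{ijkl} = C^{klij} $ implies $ \overline{\boldsymbol{E}} : \underline{\boldsymbol C} = \underline{\boldsymbol C} : \overline{\boldsymbol{E}} $, and similarly for $ \underline{\boldsymbol G} $.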
Under these assumptions, the deformation function $ \boldsymbol\varphi $ and microrotation tensor $ \boldsymbol Q_e $ are the solution of the following minimization problem
\begin{equation}\label{f10,5}
I =\displaystyle\int_{\Omega_\xi} W \big(\overline{\boldsymbol{E}} ,\boldsymbol{\Gamma}\big)\, \mathrm dV\quad\to \quad \textrm{\ \ min\ \ w.r.t.\ \ } ( \boldsymbol\varphi, \boldsymbol Q_e\,)\, .
\end{equation}
For the sake of simplicity, we assume here that no external body or surface loads are present. The existence of minimizers to this energy functional has been proved by the direct methods of the calculus of variations (see, e.g., \cite{Neff_Edinb06,Birsan-Neff-Ost_L56-2015}).
\section{Geometry and kinematics of three-dimensional Cosserat shells} \label{Sect3}
For a shell-like three-dimensional Cosserat body, the parametric representation $ \boldsymbol\Theta $ has the special form (see, e.g., \cite{Ciarlet00,Libai98,Pietraszkiewicz-book04})
\begin{equation}\label{f11}
\boldsymbol\Theta(\boldsymbol x )=\boldsymbol y_0(x_1,x_2)+x_3\, \boldsymbol n_0(x_1,x_2),
\end{equation}
where $\boldsymbol n_0=\dfrac{\boldsymbol y_{0,1}\times \boldsymbol y_{0,2}}{\|\boldsymbol y_{0,1}\times \boldsymbol y_{0,2}\|}\, $ is the unit normal vector to the surface $\omega_\xi\,$, defined by the position vector $ \boldsymbol y_0(x_1,x_2) $. The parameter domain $ \Omega_h $ has the special form
$$\Omega_h=\left\{ (x_1,x_2,x_3) \,\Big|\,\, (x_1,x_2)\in\omega\subset \mathbb{R}^2 , \;\; x_3 \in \Big( -\frac{h}{2} , \frac{h}{2}\, \Big)\, \right\} ,$$
where $ h $ is the thickness. Thus, $ (x_1,x_2) $ are curvilinear coordinates on the midsurface $ \omega_\xi= \boldsymbol y_0(\omega) $ and $ x_3 $ is the coordinate through the thickness of the shell-like body $ \Omega_\xi\, $.
We denote the covariant and contravariant base vectors in the tangent plane of $ \omega_\xi $ as usual by
\[
\boldsymbol a_\alpha=\,\dfrac{\partial \boldsymbol y_0}{\partial x_\alpha}\,=\boldsymbol y_{0,\alpha}\,,\qquad \boldsymbol a^\beta\cdot\boldsymbol a_\alpha=\delta_\alpha^\beta\,\quad (\alpha,\beta=1,2) \qquad\mbox{and set}\qquad \boldsymbol a_3=\boldsymbol a^3=\boldsymbol n_0\,.
\]
The surface gradient and surface divergence are then defined by
\[
\mathrm{Grad}_s\,\boldsymbol f\,=\, \dfrac{\partial\boldsymbol f}{\partial x_\alpha}\,\otimes \boldsymbol a^\alpha= \boldsymbol f,_{\alpha}\otimes\, \boldsymbol a^\alpha\,,\qquad \mathrm{Div}_s\,\boldsymbol T\,=\, \boldsymbol T,_{\alpha} \boldsymbol a^\alpha\,.
\]
We introduce the first and second fundamental tensors of the surface $ \omega_\xi\, $ by
\begin{equation}\label{f12}
\begin{array}{l}
\boldsymbol{a}:= \text{Grad}_s\,\boldsymbol{y}_0 =\boldsymbol{a}_\alpha\otimes \boldsymbol{a}^\alpha=
a_{\alpha\beta}\boldsymbol{a}^\alpha\otimes \boldsymbol{a}^\beta= a^{\alpha\beta}\boldsymbol{a}_\alpha\otimes \boldsymbol{a}_\beta ,
\vspace{4pt}\\
\boldsymbol{b}:= -\text{Grad}_s\,\boldsymbol{n}_0=- \boldsymbol{n}_{0,\alpha}\otimes\boldsymbol{a}^\alpha= b_{\alpha\beta}\,\boldsymbol{a}^\alpha\otimes \boldsymbol{a}^\beta=b^\alpha_\beta\,\boldsymbol{a}_\alpha\otimes \boldsymbol{a}^\beta,
\end{array}
\end{equation}
which are symmetric. We shall also need the skew-symmetric tensor $ \boldsymbol c \,$, called the alternator tensor in the tangent plane, defined by
\begin{equation}\label{f13}
\boldsymbol c:= \dfrac{1}{a}\,\,\varepsilon_{\alpha\beta}\,\boldsymbol{a}_\alpha\otimes \boldsymbol{a}_\beta = a\,\,\varepsilon_{\alpha\beta}\,\boldsymbol{a}^\alpha\otimes \boldsymbol{a}^\beta,\qquad\mbox{with}\qquad
a:=\sqrt{\mathrm{det} \big( a_{\alpha\beta} \big)}\,>0,
\end{equation}
where $\varepsilon_{\alpha\beta}\,$ is the two-dimensional alternator ($\varepsilon_{12}=-\varepsilon_{21}=1\,,\,\varepsilon_{11}=\varepsilon_{22}=0$) and $ a(x_1,x_2) $ determines the elemental area of the surface $ \omega_\xi \,$. In view of \eqref{f0,5} and \eqref{f11}, we can show that (see \cite[f. (46)]{Birsan-Neff-MMS-2019})
\begin{equation}\label{f13,5}
\boldsymbol n_0 = \boldsymbol d_3^0 = \boldsymbol Q_0 \boldsymbol e_3\,.
\end{equation}
The fundamental tensors satisfy the relation of Cayley-Hamilton type
\begin{equation}\label{f14}
\boldsymbol b^2-2H \boldsymbol b+K\boldsymbol a=\boldsymbol 0,\qquad 2H:=\mathrm{tr}\,\boldsymbol b=b^{\alpha}_{\alpha}\,,\qquad K:=\mathrm{det}\,\boldsymbol b=\mathrm{det} \big( b^{\alpha}_{\beta} \big),
\end{equation}
where $ H $ and $ K $ are the mean curvature and the Gau\ss{} curvature of the surface $ \omega_\xi\, $, respectively. We note that $ \boldsymbol a $ plays the role of the identity tensor in the tangent plane and designate by
\begin{equation}\label{f15}
\boldsymbol b^* := -\boldsymbol b + 2H \boldsymbol a
\end{equation}
the cofactor of $ \boldsymbol b $ in the tangent plane, since $ \boldsymbol b\,\boldsymbol b^* = K\boldsymbol a $ in view of \eqref{f14}$ _1\, $. Let us introduce the tensors
\begin{equation}\label{f16}
\boldsymbol \mu := \boldsymbol a - x_3\,\boldsymbol b,\qquad
\boldsymbol \mu^{-1} := \dfrac{1}{b} (\boldsymbol a - x_3\,\boldsymbol b^*),\qquad\mbox{with}\qquad
\boldsymbol \mu \,\boldsymbol \mu^{-1} = \boldsymbol \mu^{-1} \boldsymbol \mu = \boldsymbol a,
\end{equation}
where $ b $ is the determinant
\begin{equation}\label{f17}
b := \mathrm{det}\,\boldsymbol \mu = 1-2H\,x_3+K\,x_3^2\, .
\end{equation}
By virtue of $ \boldsymbol g_i = \boldsymbol\Theta,_i\, $ and \eqref{f11}, \eqref{f16}, we find the relations
\begin{equation}\label{f18}
\boldsymbol g_\alpha= \boldsymbol \mu\, \boldsymbol a_\alpha\,,\qquad
\boldsymbol g^\alpha= \boldsymbol \mu^{-1} \boldsymbol a^\alpha\,,\qquad
\boldsymbol g_3 = \boldsymbol g^3 = \boldsymbol n_0\,,
\end{equation}
which are well-known in the literature on shells. Hence, we have
\begin{equation}\label{f19}
\boldsymbol \mu = \boldsymbol g_\alpha \otimes \boldsymbol a^\alpha = \boldsymbol a^\alpha \otimes \boldsymbol g_\alpha \,,\qquad
\boldsymbol \mu^{-1} = \boldsymbol g^\alpha \otimes \boldsymbol a_\alpha = \boldsymbol a_\alpha \otimes \boldsymbol g^\alpha \,.
\end{equation}
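For instance, the first relation in \eqref{f18} can be checked directly (a step left implicit here): by \eqref{f11}, \eqref{f12} and \eqref{f16},
\[
\boldsymbol g_\alpha = \boldsymbol\Theta,_\alpha = \boldsymbol y_{0,\alpha} + x_3\, \boldsymbol n_{0,\alpha}
= \boldsymbol a_\alpha - x_3\, \boldsymbol b\, \boldsymbol a_\alpha = \boldsymbol \mu\, \boldsymbol a_\alpha\,,
\]
and since $ \boldsymbol \mu^{-1} \boldsymbol a^\alpha $ is tangential and satisfies $ \big(\boldsymbol \mu^{-1} \boldsymbol a^\alpha\big)\cdot \boldsymbol g_\beta = \boldsymbol a^\alpha\cdot \boldsymbol a_\beta = \delta^\alpha_\beta $ (by the symmetry of $ \boldsymbol \mu $), the second relation $ \boldsymbol g^\alpha = \boldsymbol \mu^{-1}\boldsymbol a^\alpha $ follows as well.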
In the derivation of the shell model we shall employ the expansion of various functions with respect to $ x_3 \,$ about 0. Therefore, we denote the derivative of functions with respect to $ x_3 $ with a prime, i.e. $ \,f' := \dfrac{\partial f}{\partial x_3}\, $ .\\ We can decompose the deformation gradient as follows
\begin{equation}\label{f20}
\boldsymbol F_\xi = \boldsymbol F_\xi\,{\boldsymbol{\mathbbm{1}}}_3 = \boldsymbol F_\xi( \boldsymbol a + \boldsymbol n_0\otimes \boldsymbol n_0) = \boldsymbol F_\xi\, \boldsymbol a + (\boldsymbol F_\xi \boldsymbol n_0)\otimes \boldsymbol n_0\,,
\end{equation}
where
\begin{align}
& \boldsymbol F_\xi \boldsymbol n_0 = ( \boldsymbol \varphi,_i\otimes\, \boldsymbol g^i )\boldsymbol n_0 = \boldsymbol \varphi,_3 = \boldsymbol \varphi' \qquad\mbox{and}
\label{f21}\vspace{6pt}\\
& \boldsymbol F_\xi \,\boldsymbol a = (\mathrm{Grad}_s\,\boldsymbol \varphi) \boldsymbol \mu^{-1}\,.
\label{f22}
\end{align}
To prove \eqref{f22}, we use \eqref{f18}, \eqref{f19} and write
\[
\boldsymbol F_\xi\, \boldsymbol a = ( \boldsymbol \varphi,_i\otimes\, \boldsymbol g^i )\boldsymbol a = \boldsymbol \varphi,_\alpha\otimes\, \boldsymbol g^\alpha = (\boldsymbol\varphi,_\alpha\otimes\, \boldsymbol a^\alpha) (\boldsymbol a_\beta\otimes\, \boldsymbol g^\beta) = (\mathrm{Grad}_s\,\boldsymbol \varphi) \boldsymbol \mu^{-1}\,.
\]
Substituting \eqref{f21} and \eqref{f22} into \eqref{f20}, we get
\begin{equation}\label{f23}
\boldsymbol F_\xi = (\mathrm{Grad}_s\,\boldsymbol \varphi)\, \boldsymbol \mu^{-1} + \boldsymbol \varphi'\otimes \boldsymbol n_0\,.
\end{equation}
We shall also need the derivatives of $ \boldsymbol F_\xi $ with respect to $ x_3\, $. These are
\begin{equation}\label{f24}
\begin{array}{l}
\boldsymbol F_\xi' = (\mathrm{Grad}_s\,\boldsymbol \varphi')\,\boldsymbol \mu^{-1} +
(\mathrm{Grad}_s\,\boldsymbol \varphi) \big(\boldsymbol \mu^{-1}\big)'
+ \boldsymbol \varphi''\otimes \boldsymbol n_0\,,
\vspace{6pt}\\
\boldsymbol F_\xi'' = (\mathrm{Grad}_s\,\boldsymbol \varphi'')\,\boldsymbol \mu^{-1} +
2(\mathrm{Grad}_s\,\boldsymbol \varphi') \big(\boldsymbol \mu^{-1}\big)'
+ (\mathrm{Grad}_s\,\boldsymbol \varphi) \big(\boldsymbol \mu^{-1}\big)''
+ \boldsymbol \varphi'''\otimes \boldsymbol n_0\,.
\end{array}
\end{equation}
Differentiating \eqref{f16} with respect to $ x_3\, $, we deduce
\begin{equation}\label{f25}
\boldsymbol \mu'=-\boldsymbol b, \qquad \boldsymbol \mu''=\boldsymbol 0,
\qquad \big(\boldsymbol \mu^{-1}\big)'= \boldsymbol \mu^{-1} \boldsymbol b \,\boldsymbol \mu^{-1}, \qquad
\big(\boldsymbol \mu^{-1}\big)''= 2 \boldsymbol \mu^{-1} \boldsymbol b\, \boldsymbol \mu^{-1} \boldsymbol b\, \boldsymbol \mu^{-1}.
\end{equation}
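As a quick check of \eqref{f25}$ _3 $ (a step left implicit above): differentiating the identity $ \boldsymbol \mu \,\boldsymbol \mu^{-1} = \boldsymbol a $ from \eqref{f16} with respect to $ x_3 $ and using $ \boldsymbol \mu'=-\boldsymbol b $ gives
\[
\boldsymbol 0 = \boldsymbol \mu'\,\boldsymbol \mu^{-1} + \boldsymbol \mu \,\big(\boldsymbol \mu^{-1}\big)'
\qquad\Longrightarrow\qquad
\big(\boldsymbol \mu^{-1}\big)' = -\,\boldsymbol \mu^{-1} \boldsymbol \mu'\,\boldsymbol \mu^{-1} = \boldsymbol \mu^{-1} \boldsymbol b \,\boldsymbol \mu^{-1},
\]
and differentiating once more (note that $ \boldsymbol b $ and $ \boldsymbol \mu^{-1} $ commute) yields \eqref{f25}$ _4 $.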
Let us take $ x_3=0 $ in relations \eqref{f23}-\eqref{f25}. In what follows, we employ the notation $ \; \boldsymbol f_0 := \boldsymbol f_{\big| x_3=0} $ for any function $ \boldsymbol f $. Thus, we have
\begin{equation}\label{f25,5}
\boldsymbol \mu_0=\boldsymbol a, \qquad \big(\boldsymbol \mu^{-1}\big)_0= \boldsymbol a, \qquad \big(\boldsymbol \mu^{-1}\big)'_0= \boldsymbol b, \qquad \big(\boldsymbol \mu^{-1}\big)''_0= 2\boldsymbol b^2
\end{equation}
and
\begin{equation}\label{f26}
\begin{array}{l}
(\boldsymbol F_\xi)_0 = (\mathrm{Grad}_s\,\boldsymbol \varphi)_0 + \boldsymbol \varphi'_0\otimes \boldsymbol n_0\,,
\vspace{6pt}\\
(\boldsymbol F_\xi)_0' = (\mathrm{Grad}_s\,\boldsymbol \varphi')_0 + (\mathrm{Grad}_s\,\boldsymbol \varphi)_0\, \boldsymbol b + \boldsymbol \varphi''_0\otimes \boldsymbol n_0\,,
\vspace{6pt}\\
(\boldsymbol F_\xi)_0'' = (\mathrm{Grad}_s\,\boldsymbol \varphi'')_0 + 2(\mathrm{Grad}_s\,\boldsymbol \varphi')_0\, \boldsymbol b +
2(\mathrm{Grad}_s\,\boldsymbol \varphi)_0 \,\boldsymbol b^2 + \boldsymbol \varphi'''_0\otimes \boldsymbol n_0\,.
\end{array}
\end{equation}
Let us write the Taylor expansion of the deformation function $ \boldsymbol \varphi(x_1,x_2,x_3) $ with respect to $ x_3 $ in the form
\begin{equation}\label{f27}
\boldsymbol \varphi(x_1,x_2,x_3) = \boldsymbol m(x_1, x_2) + x_3\, \boldsymbol \alpha(x_1, x_2) + \dfrac{x_3^2}{2}\,\boldsymbol \beta(x_1, x_2) + \dfrac{x_3^3}{6}\,\boldsymbol \gamma(x_1, x_2) + \cdots \;,
\end{equation}
where
\begin{equation}\label{f28}
\boldsymbol m = \boldsymbol \varphi_{\big| x_3=0} = \boldsymbol \varphi_0\,, \qquad
\boldsymbol \alpha = \boldsymbol \varphi'{}_{\big| x_3=0} = \boldsymbol \varphi'_0\,,
\qquad
\boldsymbol \beta = \boldsymbol \varphi''{}_{\big| x_3=0} = \boldsymbol \varphi''_0\qquad\mathrm{etc.}
\end{equation}
On the other hand, we assume that the microrotation tensor $ \boldsymbol Q_e $ does not depend on $ x_3\, $, i.e.
\begin{equation}\label{f29}
\boldsymbol Q_e(x_i) = \boldsymbol Q_e(x_1, x_2).
\end{equation}
By virtue of \eqref{f26}-\eqref{f29}, we can write the strain tensor $ \overline{\boldsymbol{E}}=\boldsymbol Q_e^T \boldsymbol F_\xi -{\boldsymbol{\mathbbm{1}}}_3 $ and its derivatives on the midsurface $ x_3=0\, $:
\begin{equation}\label{f30}
\begin{array}{l}
\overline{\boldsymbol{E}}_0 =\boldsymbol Q_e^T \big(\boldsymbol F_\xi\big)_0 -{\boldsymbol{\mathbbm{1}}}_3 =
\boldsymbol Q_e^T \big(\mathrm{Grad}_s\,\boldsymbol m + \boldsymbol \alpha\otimes \boldsymbol n_0 \big) -{\boldsymbol{\mathbbm{1}}}_3 \,,
\vspace{6pt}\\
\overline{\boldsymbol{E}}_0^{\,\prime} =
\boldsymbol Q_e^T \big(\boldsymbol F_\xi\big)'_0 =
\boldsymbol Q_e^T \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b +
\boldsymbol \beta\otimes \boldsymbol n_0 \big]\,,
\vspace{6pt}\\
\overline{\boldsymbol{E}}_0^{\,\prime\prime} =
\boldsymbol Q_e^T \big(\boldsymbol F_\xi\big)''_0 =
\boldsymbol Q_e^T \big[\mathrm{Grad}_s\,\boldsymbol \beta +
2\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b +
2\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b^2 +
\boldsymbol \gamma\otimes \boldsymbol n_0 \big]\,.
\end{array}
\end{equation}
We note that the surface $ \omega_\xi $ (characterized by $ x_3=0 $) is the midsurface of the reference shell $ \Omega_\xi\, $, while $ \boldsymbol m (x_1,x_2) $ and $ \boldsymbol Q_e(x_1, x_2) $ represent the deformation vector and microrotation tensor, respectively, for this reference midsurface $ \omega_\xi \,$. Corresponding to
$ \boldsymbol m $ and $ \boldsymbol Q_e $ we introduce now the \emph{elastic shell strain tensor} $\boldsymbol E^e$ and the \emph{elastic shell bending-curvature tensor} $\boldsymbol K^e$, which are usually employed in the 6-parameter shell theory \cite{Libai98,Pietraszkiewicz-book04,Eremeyev06,Birsan-Neff-MMS-2014,Birsan-Neff-L54-2014}
\begin{equation}\label{f31}
\boldsymbol E^e := \boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol m - \boldsymbol a, \qquad
\boldsymbol K^e :=
\,\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \boldsymbol a^\alpha.
\end{equation}
These strain measures describe the deformation of the midsurface $ \omega_\xi\, $, see e.g. \cite{Birsan-Neff-L57-2016,Birsan-Neff-L58-2017}. With the help of \eqref{f31}$_1 $ and the decomposition $ {\boldsymbol{\mathbbm{1}}}_3= \boldsymbol a + \boldsymbol n_0\otimes \boldsymbol n_0 $ we can write the relation \eqref{f30}$_1 $ in the form
\begin{equation}\label{f32}
\overline{\boldsymbol{E}}_0 \;=\; \boldsymbol E^e + \big( \boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big)\otimes \boldsymbol n_0 \;= \;
\boldsymbol E^e + \boldsymbol Q_e^T \big( \boldsymbol \alpha - \boldsymbol d_3 \big)\otimes \boldsymbol n_0\,.
\end{equation}
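The second equality in \eqref{f32} uses the fact that $ \boldsymbol Q_e \boldsymbol n_0 = \boldsymbol Q_e \boldsymbol d_3^0 = \boldsymbol d_3 $ (by \eqref{f13,5} and the definition of the directors), a small step left implicit here:
\[
\big(\boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big)\otimes \boldsymbol n_0
= \boldsymbol Q_e^T\big(\boldsymbol \alpha - \boldsymbol Q_e\boldsymbol n_0\big)\otimes \boldsymbol n_0
= \boldsymbol Q_e^T\big(\boldsymbol \alpha - \boldsymbol d_3\big)\otimes \boldsymbol n_0\,.
\]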
In the same way, we can compute the wryness tensor $ \boldsymbol \Gamma $ and its derivatives on the midsurface $ x_3=0 $ in terms of the bending-curvature tensor $\boldsymbol K^e$. In view of \eqref{f18}, \eqref{f25,5} and \eqref{f29} we have
\begin{equation}\label{f33}
\begin{array}{l}
\boldsymbol \Gamma_0 = \Big(\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,i}\big)\otimes \boldsymbol g^i\Big)_{x_3=0}
= \,\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \boldsymbol a^\alpha = \boldsymbol K^e,
\vspace{6pt}\\
\boldsymbol \Gamma_0^{\,\prime} = \Big(\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,i}\big)\otimes \boldsymbol g^i\Big)'_{x_3=0}
= \,\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \big[ \big(\boldsymbol \mu^{-1}\big)'_0 \;\boldsymbol a^\alpha\big] =
\big[\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \boldsymbol a^\alpha \big]\boldsymbol b\,
= \boldsymbol K^e\boldsymbol b,
\vspace{6pt}\\
\boldsymbol \Gamma_0^{\,\prime\prime} = \,\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \big[ \big(\boldsymbol \mu^{-1}\big)''_0 \;\boldsymbol a^\alpha\big] = 2
\big[\mathrm{axl}\big(\boldsymbol Q_e^T\boldsymbol Q_{e,\alpha}\big)\otimes \boldsymbol a^\alpha \big]\boldsymbol b^2\,
= 2\boldsymbol K^e\boldsymbol b^2.
\end{array}
\end{equation}
These expressions will be useful in the sequel.
\section{Derivation of the two-dimensional shell model} \label{Sect4}
In order to obtain the expression of the elastically stored energy density for the two-dimensional shell model, we shall integrate the strain energy density $ W $ over the thickness and then perform some simplifications, suggested by the classical shell theory. Thus, in view of \eqref{f10,5} the total elastically stored strain-energy is
\begin{equation}\label{f34}
I =\displaystyle\int_{\Omega_\xi} W \big(\overline{\boldsymbol{E}} ,\boldsymbol{\Gamma}\big)\, \mathrm dV = \int_{\omega_\xi}\Big( \int_{-h/2}^{h/2} W \big(\overline{\boldsymbol{E}} ,\boldsymbol{\Gamma}\big)\,b(x_1,x_2,x_3)\, \mathrm d x_3\Big)\mathrm d a,
\end{equation}
where $ b(x_i) $ is given by \eqref{f17} and $ \mathrm{d}a = a(x_1,x_2)\, \mathrm{d}x_1\mathrm{d}x_2 = \sqrt{\mathrm{det}(a_{\alpha\beta}) }\,\mathrm{d}x_1\mathrm{d}x_2 $ is the elemental area of the midsurface $ \omega_\xi\, $.
\subsection{Integration over the thickness} \label{Sect4.1}
With a view toward integrating with respect to $ x_3 $\,, we expand the integrand from \eqref{f34} in the form
\[
Wb = \big(Wb\big)_0 + x_3\,\big(Wb\big)'_0 + \dfrac12\, x_3^2\,\big(Wb\big)''_0 + O(x_3^3)
\]
and find
\begin{equation}\label{f35}
\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3\, = \,h\,\big(Wb\big)_0 \,+\, \dfrac{h^3}{24}\, \big(Wb\big)''_0 + o(h^3).
\end{equation}
By differentiating \eqref{f17} we get $ b_0=1 $ , $ b_0'=-2H $ , $ b_0''= 2K \,$. Hence, we have
\begin{equation}\label{f36}
\begin{array}{l}
\big(Wb\big)_0 = W_0\, b_0 = W_0\,,
\vspace{6pt}\\
\big(Wb\big)_0' = \big(W'b+Wb'\big)_0 = W_0'-2H\,W_0\,,
\vspace{6pt}\\
(Wb)_0'' = W_0'' -4H\,W_0'+2K\,W_0\,.
\end{array}
\end{equation}
Inserting \eqref{f36} into \eqref{f35} we obtain the expression
\begin{equation}\label{f37}
\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3\, = \Big(h+ \dfrac{h^3}{12}\,K\Big) W_0 + \dfrac{h^3}{24}\, \big(W_0'' -4H\,W_0'\big) + o(h^3).
\end{equation}
According to our constitutive assumptions \eqref{f5}-\eqref{f10}, we can write
\begin{equation}\label{f38}
\begin{array}{l}
W_0 \;= W_{\mathrm{mp}}(\overline{\boldsymbol{E}}_0)+ W_{\mathrm{curv}}( \boldsymbol \Gamma_0) =
\dfrac12\, \overline{\boldsymbol{E}}_0 : \underline{\boldsymbol C} : \overline{\boldsymbol{E}}_0 +
\dfrac12\, \boldsymbol{\Gamma}_0 : \underline{\boldsymbol G} : \boldsymbol{\Gamma}_0\,
= \dfrac12 \big(\boldsymbol Q_e^T\boldsymbol T_0\big) : \overline{\boldsymbol{E}}_0 + \dfrac12 \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol{\Gamma}_0\,,
\vspace{6pt}\\
W_0' \;= \, \overline{\boldsymbol{E}}'_0 : \underline{\boldsymbol C} : \overline{\boldsymbol{E}}_0 +
\boldsymbol{\Gamma}'_0 : \underline{\boldsymbol G} : \boldsymbol{\Gamma}_0\,
= \big(\boldsymbol Q_e^T\boldsymbol T_0\big) : \overline{\boldsymbol{E}}'_0 + \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol{\Gamma}'_0\,,
\vspace{6pt}\\
W_0''\, = \, \overline{\boldsymbol{E}}''_0 : \underline{\boldsymbol C} : \overline{\boldsymbol{E}}_0 + \overline{\boldsymbol{E}}'_0 : \underline{\boldsymbol C} : \overline{\boldsymbol{E}}'_0 +
\boldsymbol{\Gamma}''_0 : \underline{\boldsymbol G} : \boldsymbol{\Gamma}_0 + \boldsymbol{\Gamma}'_0 : \underline{\boldsymbol G} : \boldsymbol{\Gamma}'_0
\vspace{6pt}\\
\qquad = \, \big(\boldsymbol Q_e^T\boldsymbol T_0\big) : \overline{\boldsymbol{E}}''_0
+ \big(\boldsymbol Q_e^T\boldsymbol T'_0\big) : \overline{\boldsymbol{E}}'_0
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol{\Gamma}''_0
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \boldsymbol{\Gamma}'_0
\,.
\end{array}
\end{equation}
If we use the relations \eqref{f30}-\eqref{f33} in \eqref{f38} and substitute this in \eqref{f37}, we deduce the following successive expressions
\[
\begin{array}{rl}
\displaystyle\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3\, =& \,
\dfrac12 \Big(h+ \dfrac{h^3}{12}\,K\Big) \big[
\boldsymbol Q_e^T\boldsymbol T_0: \big(\boldsymbol E^e + \big( \boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big)\otimes \boldsymbol n_0\big) +
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol K^e
\big]
\vspace{6pt}\\
&+ \dfrac{h^3}{24} \Big\{
\boldsymbol T_0: \big[\mathrm{Grad}_s\,\boldsymbol \beta +
2\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b +
2\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b^2 +
\boldsymbol \gamma\otimes \boldsymbol n_0 \big]
\vspace{6pt}\\
&+ \boldsymbol T'_0: \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b +
\boldsymbol \beta\otimes \boldsymbol n_0 \big]
+ 2 \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \big(\boldsymbol K^e\boldsymbol b^2\big)
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\vspace{6pt}\\
&-4H \,\boldsymbol T_0: \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b +
\boldsymbol \beta\otimes \boldsymbol n_0 \big]
-4H \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\Big\} \; + \; o(h^3)
\end{array}
\]
or, using the decomposition $ \boldsymbol T_0 = \boldsymbol T_0 \boldsymbol a + \boldsymbol T_0 \boldsymbol n_0\otimes \boldsymbol n_0\, $,
\[
\begin{array}{rl}
\displaystyle\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3 & = \,
\dfrac12 \Big(h+ \dfrac{h^3}{12}\,K\Big) \big[
\big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol a\big) : \boldsymbol E^e + \big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol n_0\big) \cdot \big( \boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big) +
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol K^e
\big]
\vspace{6pt}\\
&+ \dfrac{h^3}{24} \Big\{
\big(\boldsymbol T_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \beta +
2\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b +
2\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b^2 \big]
+ \big(\boldsymbol T_0 \boldsymbol n_0\big) \cdot \boldsymbol \gamma
\vspace{6pt}\\
& + \big(\boldsymbol T'_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b \big]
+ \big(\boldsymbol T'_0 \boldsymbol n_0\big) \cdot \boldsymbol \beta
+ 2 \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \big(\boldsymbol K^e\boldsymbol b^2\big)
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\vspace{6pt}\\
&-4H \big(\boldsymbol T_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b \big]
-4H \big(\boldsymbol T_0 \boldsymbol n_0\big) \cdot \boldsymbol \beta
-4H \big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\Big\} \; + \; o(h^3).
\end{array}
\]
Making some further calculations using \eqref{f14} and \eqref{f15}, we obtain
\begin{equation}\label{f39}
\begin{array}{rl}
\displaystyle\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3 & = \,
\dfrac12 \Big(h- K\, \dfrac{h^3}{12}\Big) \big[
\big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol a\big) : \boldsymbol E^e +
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol K^e
\big]
+ \dfrac12 \Big(h+ \dfrac{h^3}{12}\,K\Big) \big(\boldsymbol T_0 \boldsymbol n_0\big) \cdot \big( \boldsymbol \alpha - \boldsymbol d_3\big)
\vspace{6pt}\\
& + \dfrac{h^3}{24} \Big\{ \big(\boldsymbol T'_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b \big]
+ \big(\boldsymbol T'_0 \boldsymbol n_0\big) \cdot \boldsymbol \beta
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\vspace{6pt}\\
&+
\big(\boldsymbol T_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \beta -
2\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b^* -
2K\big(\boldsymbol Q_e\boldsymbol a \big) \big]
+ \big(\boldsymbol T_0 \boldsymbol n_0\big) \cdot \big(\boldsymbol \gamma- 4H\boldsymbol\beta \big)
\Big\} \; + \; o(h^3).
\end{array}
\end{equation}
\subsection{Reduced form of the strain energy density} \label{Sect4.2}
The expression \eqref{f39} of the strain energy density per unit area of $ \omega_\xi $ can be further reduced, provided we make some assumptions and simplifications which are common in the classical shell theory. Thus, let us denote by $ \boldsymbol t^{\pm} $ the stress vectors on the major faces (upper and lower surfaces) of the shell, given by $ x_3=\pm\frac{h}{2} $\,. We notice that $ \boldsymbol n_0 $ is orthogonal to the major faces and write
\[
\begin{array}{l}
\boldsymbol t^{+} \,=\, \boldsymbol T\big(x_\alpha\,,\,\dfrac{h}{2}\;\big)\, \boldsymbol n_0 \,=\, \boldsymbol T_0 \boldsymbol n_0 +
\dfrac{h}{2}\,\boldsymbol T'_0 \boldsymbol n_0 +
\dfrac{h^2}{8}\,\boldsymbol T''_0 \boldsymbol n_0 + O(h^3),
\vspace{6pt}\\
\boldsymbol t^{-} \,=\, \boldsymbol T\big(x_\alpha\,,\,\dfrac{-h}{2}\;\big)\, (-\boldsymbol n_0) \,=\, -\boldsymbol T_0 \boldsymbol n_0 +
\dfrac{h}{2}\,\boldsymbol T'_0 \boldsymbol n_0 -
\dfrac{h^2}{8}\,\boldsymbol T''_0 \boldsymbol n_0 + O(h^3),
\end{array}
\]
which yields
\begin{equation}\label{f40}
\boldsymbol t^{+} + \boldsymbol t^{-} \,=\, h\,\boldsymbol T'_0 \boldsymbol n_0 + O(h^3)
\qquad\mathrm{and}\qquad
\boldsymbol t^{+} - \boldsymbol t^{-} \,=\, 2\,\boldsymbol T_0 \boldsymbol n_0 + O(h^2).
\end{equation}
We assume as in the classical theory that $ \boldsymbol t^{\pm} $ are of order $ O(h^3) $ and from \eqref{f40} we find
\begin{equation}\label{f41}
\boldsymbol T_0 \boldsymbol n_0 = O(h^2)
\qquad\mathrm{and}\qquad
\boldsymbol T'_0 \boldsymbol n_0 = O(h^2).
\end{equation}
On the basis of \eqref{f41} and following the same rationale as in the classical shell theory (see, e.g., \cite{Steigmann13}), we shall neglect these quantities and replace
\begin{equation}\label{f42}
\boldsymbol T_0 \boldsymbol n_0 = \boldsymbol 0 \qquad\mathrm{and}\qquad
\boldsymbol T_0^{\,\prime} \boldsymbol n_0 = \boldsymbol 0
\end{equation}
in all terms of the energy density \eqref{f39}.
Moreover, we regard the relations \eqref{f42} as two equations for the determination of the vectors $ \boldsymbol\alpha $ and $ \boldsymbol\beta $ in the expansion \eqref{f27}. Thus, from \eqref{f39} and \eqref{f42} we obtain
\begin{equation}\label{f43}
\begin{array}{rl}
\displaystyle\int_{-h/2}^{h/2} Wb\,\mathrm{d}x_3 \; = &
\dfrac12 \Big(h- K\, \dfrac{h^3}{12}\Big) \big[
\big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol a\big) : \boldsymbol E^e +
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol K^e
\big]
\vspace{6pt}\\
& + \dfrac{h^3}{24} \Big\{ \big(\boldsymbol T'_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b \big]
+ \big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \big(\boldsymbol K^e\boldsymbol b\big)
\vspace{6pt}\\
& \qquad \;\; +
\big(\boldsymbol T_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \beta -
2\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b^* -
2K\big(\boldsymbol Q_e\boldsymbol a \big) \big]
\Big\}.
\end{array}
\end{equation}
In view of \eqref{f30}-\eqref{f32}, the equations \eqref{f42} can be written in the form
\begin{equation}\label{f44}
\begin{array}{l}
\Big[\, \underline{\boldsymbol C} :
\big(\boldsymbol E^e + \big( \boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big)\otimes \boldsymbol n_0\big)\Big] \boldsymbol n_0 =\boldsymbol 0,
\vspace{6pt}\\
\Big[\, \underline{\boldsymbol C} :
\big(\boldsymbol Q_e^T \mathrm{Grad}_s\,\boldsymbol \alpha+ (\boldsymbol E^e+ \boldsymbol a)\boldsymbol b + \boldsymbol Q_e^T \boldsymbol \beta \otimes \boldsymbol n_0\big)\Big] \boldsymbol n_0 =\boldsymbol 0.
\end{array}
\end{equation}
The first equation \eqref{f44}$ _1 $ can be used to determine the vector $ \boldsymbol\alpha $\,: we obtain successively
\[
\Big[ (\mu+\mu_c)\boldsymbol a + (\lambda+2\mu)\, \boldsymbol n_0\otimes \boldsymbol n_0\Big] \big( \boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0\big) = - \big( \underline{\boldsymbol C} :
\boldsymbol E^e\big) \boldsymbol n_0\,,
\]
or equivalently,
\[
\boldsymbol Q_e^T \boldsymbol \alpha - \boldsymbol n_0 =
- \Big[ \dfrac{1}{\mu+\mu_c}\;\boldsymbol a + \dfrac{1}{\lambda+2\mu}\; \boldsymbol n_0\otimes \boldsymbol n_0\Big]
\big[
(\mu-\mu_c)\big( \boldsymbol n_0 \boldsymbol E^e\big)
+\lambda \big( \mathrm{tr}\, \boldsymbol E^e\big)\boldsymbol n_0
\big],
\]
which yields (since $ \boldsymbol Q_e\boldsymbol n_0 = \boldsymbol Q_e\boldsymbol d_3^0 =\boldsymbol d_3$)
\begin{equation}\label{f45}
\boldsymbol\alpha = \Big(1- \dfrac{\lambda}{\lambda+2\mu}\,\mathrm{tr}\, \boldsymbol E^e\Big)\boldsymbol d_3 \;-\;
\dfrac{\mu-\mu_c}{\mu+\mu_c}\; \boldsymbol Q_e\big( \boldsymbol n_0 \boldsymbol E^e\big).
\end{equation}
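The inversion used in the step above is elementary and is spelled out here only for convenience: since $ \boldsymbol a $ and $ \boldsymbol n_0\otimes \boldsymbol n_0 $ are complementary orthogonal projectors ($ \boldsymbol a + \boldsymbol n_0\otimes \boldsymbol n_0 = {\boldsymbol{\mathbbm{1}}}_3 $ and $ \boldsymbol a\, \boldsymbol n_0 = \boldsymbol 0 $), we have
\[
\Big[ (\mu+\mu_c)\,\boldsymbol a + (\lambda+2\mu)\, \boldsymbol n_0\otimes \boldsymbol n_0\Big]^{-1} =
\dfrac{1}{\mu+\mu_c}\;\boldsymbol a + \dfrac{1}{\lambda+2\mu}\; \boldsymbol n_0\otimes \boldsymbol n_0\,,
\]
which is exactly the operator applied in passing from the first to the second displayed relation above.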
Further, we solve the second equation \eqref{f44}$ _2 $ to determine the vector $ \boldsymbol\beta $. To this end, we insert $ \boldsymbol\alpha $ given by \eqref{f45}
into \eqref{f44}$ _2 $ and (in order to avoid quadratic terms and derivatives of the strain measures $ \boldsymbol E^e, \boldsymbol K^e $) we use the approximation
\[
\boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol\alpha\; \simeq\;
\boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol d_3\,.
\]
Since $ \;\boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol d_3 = \boldsymbol c \boldsymbol K^e -\boldsymbol b \;$ (see \cite[f. (70)]{Birsan-Neff-MMS-2019}), we use
\begin{equation}\label{f46}
\boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol\alpha\;= \; \boldsymbol c \boldsymbol K^e -\boldsymbol b
\end{equation}
and the equation \eqref{f44}$ _2 $ becomes
\[
\Big[\, \underline{\boldsymbol C} :
\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e+ \boldsymbol Q_e^T \boldsymbol \beta \otimes \boldsymbol n_0\big)\Big] \boldsymbol n_0 =\boldsymbol 0,
\]
which can be solved similarly as the equation \eqref{f44}$ _1 $ and yields
\begin{equation}\label{f47}
\boldsymbol \beta =
- \dfrac{\lambda}{\lambda+2\mu}\,\mathrm{tr}\,\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)\, \boldsymbol d_3
\;-\;
\dfrac{\mu-\mu_c}{\mu+\mu_c}\; \boldsymbol Q_e\big( \boldsymbol n_0 \boldsymbol E^e\boldsymbol b\big).
\end{equation}
In view of \eqref{f45}-\eqref{f47}, we can write the tensors $ \overline{\boldsymbol{E}}_0 $ and $
\overline{\boldsymbol{E}}_0^{\,\prime} $ in \eqref{f32} and \eqref{f30}$ _2 $ in compact form
\begin{equation}\label{f48}
\begin{array}{l}
\overline{\boldsymbol{E}}_0 = \boldsymbol E^e -\Big[\;
\dfrac{\lambda}{\lambda+2\mu}\,\big(\mathrm{tr}\,\boldsymbol E^e\big)\,\boldsymbol n_0
+ \dfrac{\mu-\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0 \boldsymbol E^e\big)
\Big] \otimes \boldsymbol n_0 = L_{n_0} \big( \boldsymbol E^e\big),
\vspace{6pt}\\
\overline{\boldsymbol{E}}_0^{\,\prime} =
\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big) -\Big[\;
\dfrac{\lambda}{\lambda+2\mu}\,\mathrm{tr}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)\,\boldsymbol n_0
+ \dfrac{\mu-\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0 \boldsymbol E^e\boldsymbol b\big)
\Big] \otimes \boldsymbol n_0 = L_{n_0} \big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big),
\end{array}
\end{equation}
where we have denoted for convenience with $ L_{n_0} $ the following linear operator
\begin{equation}\label{f49}
L_{n_0}(\boldsymbol X) := \,
\boldsymbol X \,-\,
\,\dfrac{\lambda}{\lambda+2\mu}\,\big(\mathrm{tr}\,\boldsymbol X\big)\,\boldsymbol n_0 \otimes \boldsymbol n_0
\,-\,
\dfrac{\mu-\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0 \boldsymbol X\big)
\otimes \boldsymbol n_0
\qquad \text{for any}\qquad\boldsymbol X = X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha .
\end{equation}
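For orientation, we note that $ L_{n_0} $ acts as the identity on planar tensors with vanishing trace (both correction terms in \eqref{f49} drop out), whereas for a pure transverse shear tensor $ \boldsymbol X = \boldsymbol n_0\otimes \boldsymbol v $ with a tangent vector $ \boldsymbol v $ (so that $ \boldsymbol n_0\boldsymbol X = \boldsymbol v $ and $ \mathrm{tr}\,\boldsymbol X = 0 $) we obtain
\[
L_{n_0}\big(\boldsymbol n_0\otimes \boldsymbol v\big) \,=\, \boldsymbol n_0\otimes \boldsymbol v \,-\, \dfrac{\mu-\mu_c}{\mu+\mu_c}\;\boldsymbol v\otimes \boldsymbol n_0\,.
\]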
To write the strain-energy density in a condensed form, we designate by
\begin{equation}\label{f49,5}
\begin{array}{rl}
W_{\mathrm{mixt}}(\boldsymbol X,\boldsymbol Y) :=& \mu\, (\mathrm{sym}\, \boldsymbol X) : (\mathrm{sym}\, \boldsymbol Y) + \mu_c (\mathrm{skew}\, \boldsymbol X) : (\mathrm{skew}\, \boldsymbol Y) +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr} \boldsymbol X\big)\,\big(\mathrm{tr} \boldsymbol Y\big)
\vspace{6pt}\\
= & \mu\,(\mathrm{dev_3\,sym}\, \boldsymbol X) : (\mathrm{dev_3\,sym}\, \boldsymbol Y) + \mu_c (\mathrm{skew} \boldsymbol X) : (\mathrm{skew}\, \boldsymbol Y) +\,\dfrac{2\mu(2\lambda+\mu)}{3(\lambda+2\mu)}\,\big( \mathrm{tr} \boldsymbol X\big)\,\big(\mathrm{tr} \boldsymbol Y\big)
\end{array}
\end{equation}
the bilinear form corresponding to the quadratic form
\begin{equation}\label{f49,6}
\begin{array}{rl}
W_{\mathrm{mixt}}(\boldsymbol X) :=& W_{\mathrm{mixt}}(\boldsymbol X,\boldsymbol X) \,\,=\,\,
W_{\mathrm{mp}}(\boldsymbol X) - \, \dfrac{\lambda^2}{2(\lambda+2\mu)}\,\big( \mathrm{tr} \boldsymbol X\big)^2
\vspace{6pt}\\
= & \mu\,\|\, \mathrm{sym}\, \boldsymbol X\,\|^2 + \mu_c\| \,\mathrm{skew}\, \boldsymbol X\,\|^2 +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr} \boldsymbol X\big)^2 .
\end{array}
\end{equation}
For Cosserat shells, it is convenient to introduce the following bilinear form
\begin{equation}\label{f50}
W_{\mathrm{Coss}}(\boldsymbol X,\boldsymbol Y) := W_{\mathrm{mixt}}(\boldsymbol X,\boldsymbol Y) - \, \dfrac{(\mu-\mu_c)^2}{2(\mu+\mu_c)}\,\big(\boldsymbol n_0 \boldsymbol X\big)\cdot\big(\boldsymbol n_0 \boldsymbol Y\big)
\end{equation}
for any two tensors of the form $ \boldsymbol X = X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $, $\; \boldsymbol Y = Y_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $, and the corresponding quadratic form
\begin{equation}\label{f51}
W_{\mathrm{Coss}}(\boldsymbol X) := W_{\mathrm{Coss}}(\boldsymbol X,\boldsymbol X) \;=\; W_{\mathrm{mixt}}(\boldsymbol X) - \, \dfrac{(\mu-\mu_c)^2}{2(\mu+\mu_c)}\,\|\boldsymbol n_0 \boldsymbol X\|^2,
\end{equation}
where $ \boldsymbol n_0 \boldsymbol X = X_{3\alpha} \boldsymbol a^\alpha$. We shall prove later that the quadratic form
$
W_{\mathrm{Coss}}(\boldsymbol X) $ is positive definite, see \eqref{f99}.
With these notations, we can prove by a straightforward calculation the following useful relation
\begin{equation}\label{f52}
W_{\mathrm{Coss}}(\boldsymbol X)
\;=\; \dfrac12\;
\boldsymbol X : \underline{\boldsymbol C} : L_{n_0}(\boldsymbol X)
\qquad \text{for any}\quad\boldsymbol X = X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha .
\end{equation}
Indeed, we have from \eqref{f10}, \eqref{f49}, \eqref{f49,6}, \eqref{f51}
\[
\begin{array}{rl}
\boldsymbol X : \underline{\boldsymbol C} : L_{n_0}(\boldsymbol X) &= \boldsymbol X : \underline{\boldsymbol C} : \boldsymbol X -
\boldsymbol X :\underline{\boldsymbol C} : \Big[
\,\dfrac{\lambda}{\lambda+2\mu}\,\big(\mathrm{tr}\,\boldsymbol X\big)\,\boldsymbol n_0 \otimes \boldsymbol n_0
+
\dfrac{\mu-\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0 \boldsymbol X\big)
\otimes \boldsymbol n_0\Big]
\vspace{6pt}\\
&=2W_{\mathrm{mp}}(\boldsymbol X) - \boldsymbol X : \Big[
\,\dfrac{\lambda^2}{\lambda+2\mu}\,\big(\mathrm{tr}\,\boldsymbol X\big)\,{\boldsymbol{\mathbbm{1}}}_3
+
(\mu-\mu_c)\,\big( \boldsymbol n_0 \boldsymbol X\big)
\otimes \boldsymbol n_0
+
\dfrac{(\mu-\mu_c)^2}{\mu+\mu_c}\,\boldsymbol n_0 \otimes \big( \boldsymbol n_0 \boldsymbol X\big)
\Big]
\vspace{6pt}\\
&=2W_{\mathrm{mp}}(\boldsymbol X) -
\,\dfrac{\lambda^2}{\lambda+2\mu}\,\big(\mathrm{tr}\,\boldsymbol X\big)^2
-
\dfrac{(\mu-\mu_c)^2}{\mu+\mu_c}\,\| \boldsymbol n_0 \boldsymbol X\|^2
\vspace{6pt}\\
&=2W_{\mathrm{mixt}}(\boldsymbol X) -
\dfrac{(\mu-\mu_c)^2}{\mu+\mu_c}\,\| \boldsymbol n_0 \boldsymbol X\|^2 \;=\; 2W_{\mathrm{Coss}}(\boldsymbol X)
\end{array}
\]
and the relation \eqref{f52} is proved.
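As an independent sanity check, the identity \eqref{f52} can also be verified numerically. The following Python sketch does this in the flat case $ \boldsymbol n_0 = \boldsymbol e_3 $ with an orthonormal basis; the material constants are chosen arbitrarily for illustration only, and the action of $ \underline{\boldsymbol C} $ is the isotropic one used in the computation above.
\begin{verbatim}
import numpy as np

lam, mu, mu_c = 1.3, 0.8, 0.4          # arbitrary illustrative material constants
n0 = np.array([0.0, 0.0, 1.0])         # flat case: n0 = e3

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))
X[:, 2] = 0.0                          # X = X_{i alpha} a^i (x) a^alpha has no e3-column

sym  = lambda T: 0.5*(T + T.T)
skew = lambda T: 0.5*(T - T.T)

def C(T):                              # action of the elasticity tensor, cf. (f10)
    return 2*mu*sym(T) + 2*mu_c*skew(T) + lam*np.trace(T)*np.eye(3)

def L_n0(T):                           # linear operator of (f49)
    n0T = T.T @ n0                     # the tangent vector n0 X = X_{3 alpha} a^alpha
    return (T - lam/(lam + 2*mu)*np.trace(T)*np.outer(n0, n0)
              - (mu - mu_c)/(mu + mu_c)*np.outer(n0T, n0))

def W_Coss(T):                         # quadratic form (f51), with W_mixt from (f49,6)
    n0T = T.T @ n0
    W_mixt = (mu*np.sum(sym(T)**2) + mu_c*np.sum(skew(T)**2)
              + lam*mu/(lam + 2*mu)*np.trace(T)**2)
    return W_mixt - (mu - mu_c)**2/(2*(mu + mu_c))*np.dot(n0T, n0T)

# identity (f52):  W_Coss(X) = 1/2 * X : C : L_n0(X)
print(np.isclose(W_Coss(X), 0.5*np.sum(X * C(L_n0(X)))))   # -> True
\end{verbatim}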
Now, we can simplify the terms appearing in the strain-energy density \eqref{f43}: making use of \eqref{f42}, \eqref{f46}, \eqref{f48} and \eqref{f52} we find
\begin{equation}\label{f53}
\big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol a\big) : \boldsymbol E^e =
\boldsymbol E^e : \big(\boldsymbol Q_e^T\boldsymbol T_0 \big) = \boldsymbol E^e : \big(\underline{\boldsymbol C} :
\overline{\boldsymbol{E}}_0\big) = \boldsymbol E^e : \underline{\boldsymbol C} :
L_{n_0} \big( \boldsymbol E^e\big) = 2W_{\mathrm{Coss}}\big( \boldsymbol E^e\big)
\end{equation}
and
\begin{equation}\label{f54}
\begin{array}{l}
\big(\boldsymbol T'_0 \boldsymbol a\big) : \big[\mathrm{Grad}_s\,\boldsymbol \alpha +
\big(\mathrm{Grad}_s\,\boldsymbol m \big)\boldsymbol b \big] =
\big(\boldsymbol Q_e^T\boldsymbol T'_0 \boldsymbol a\big) : \big[ \big( \boldsymbol c \boldsymbol K^e -\boldsymbol b \big) + \big( \boldsymbol E^e + \boldsymbol a\big)\boldsymbol b\big]
= \big(\boldsymbol Q_e^T\boldsymbol T'_0 \big) : \big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)
\vspace{6pt}\\
\qquad\qquad
= \big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big) :
\big(\underline{\boldsymbol C} : \overline{\boldsymbol{E}}_0^{\,\prime}\big) =
\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big) :
\underline{\boldsymbol C} : L_{n_0}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big) =
2 W_{\mathrm{Coss}}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)
\end{array}
\end{equation}
and
\begin{equation}\label{f55}
\begin{array}{l}
\big(\boldsymbol T_0 \boldsymbol a\big) : \big[\big(\mathrm{Grad}_s\,\boldsymbol\alpha \big)\boldsymbol b^*
+K\big(\boldsymbol Q_e\boldsymbol a \big) \big] =
\big(\boldsymbol Q_e^T\boldsymbol T_0 \boldsymbol a\big) : \big[ \big( \boldsymbol c \boldsymbol K^e -\boldsymbol b \big)\boldsymbol b^* + K \boldsymbol a\big]
=
\big(\boldsymbol Q_e^T\boldsymbol T_0 \big) : \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
\vspace{6pt}\\
\qquad\qquad
=
\big(\underline{\boldsymbol C} : \overline{\boldsymbol{E}}_0\big) : \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big) =
\big[ 2\mu\, \mathrm{sym}\, \overline{\boldsymbol{E}}_0 + 2\mu_c \, \mathrm{skew}\, \overline{\boldsymbol{E}}_0 + \, \lambda (\mathrm{tr}\,\overline{\boldsymbol{E}}_0){\boldsymbol{\mathbbm{1}}}_3\big]: \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
\vspace{6pt}\\
\qquad\qquad
= 2\mu\, \mathrm{sym}\big( \boldsymbol E^e\big) : \mathrm{sym}\, \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big) + 2\mu_c\, \mathrm{skew}\big( \boldsymbol E^e ) : \mathrm{skew}\, \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big) +\,\dfrac{2\lambda\,\mu}{\lambda+2\mu}\, \mathrm{tr} \big(\boldsymbol E^e \big)\,\mathrm{tr} \big( \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
\vspace{6pt}\\
\qquad\qquad
= 2 W_{\mathrm{Coss}}\big(\boldsymbol E^e\,,\, \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big),
\end{array}
\end{equation}
since $ \,\,\mathrm{tr}\,\overline{\boldsymbol{E}}_0 = \,\dfrac{2\,\mu}{\lambda+2\mu}\,\mathrm{tr} \,\boldsymbol E^e \,\,$ and the tensor $ \,\boldsymbol c \boldsymbol K^e \boldsymbol b^* \,$ is a planar tensor with basis $ \{\boldsymbol a^\alpha\otimes\boldsymbol a^\beta \} $.
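For completeness, the trace relation used above follows directly from \eqref{f48}$ _1 $, since $ \boldsymbol n_0\cdot\boldsymbol n_0 = 1 $ and the tangent vector $ \boldsymbol n_0\boldsymbol E^e $ is orthogonal to $ \boldsymbol n_0 $:
\[
\mathrm{tr}\,\overline{\boldsymbol{E}}_0 \,=\, \mathrm{tr}\,\boldsymbol E^e - \dfrac{\lambda}{\lambda+2\mu}\,\mathrm{tr}\,\boldsymbol E^e \,=\, \dfrac{2\mu}{\lambda+2\mu}\,\mathrm{tr}\,\boldsymbol E^e\,.
\]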
Further, the two terms involving the bending-curvature tensor $ \boldsymbol K^e $ in the strain-energy density \eqref{f43} can be transformed as follows: by virtue of \eqref{f8}, \eqref{f10} and \eqref{f33} we have
\begin{equation}\label{f56}
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}_0\big) : \boldsymbol K^e = \boldsymbol K^e : \big( \underline{\boldsymbol G} : \boldsymbol{\Gamma}_0 \big)
= \boldsymbol K^e : \underline{\boldsymbol G} : \boldsymbol K^e = 2 W_{\mathrm{curv}} \big(\boldsymbol K^e\big)
\end{equation}
and
\begin{equation}\label{f57}
\big(\boldsymbol Q_e^T\overline{\boldsymbol M}'_0\big) : \big(\boldsymbol K^e \boldsymbol b\big) = \big(\boldsymbol K^e \boldsymbol b\big) : \big( \underline{\boldsymbol G} : \boldsymbol{\Gamma}'_0 \big)
= \big(\boldsymbol K^e \boldsymbol b\big) : \underline{\boldsymbol G} : \big(\boldsymbol K^e \boldsymbol b\big) = 2 W_{\mathrm{curv}} \big(\boldsymbol K^e \boldsymbol b\big).
\end{equation}
Finally, the term $ (\boldsymbol T_0\boldsymbol a) : \mathrm{Grad}_s\boldsymbol\beta \; $ appearing in the strain-energy density \eqref{f43} can be discarded. To justify this, we proceed as in the classical shell theory, see e.g. \cite{Steigmann12,Steigmann13}: the three-dimensional equilibrium equation
$\; \mathrm{Div}\,\boldsymbol T = \boldsymbol 0\; $
can be written as $\; \boldsymbol T,_i\boldsymbol g^i = \boldsymbol 0\, $, or equivalently
\[
\boldsymbol T,_\alpha\boldsymbol g^\alpha + \boldsymbol T^{\,\prime}\boldsymbol n_0 = \boldsymbol 0.
\]
Therefore, on the midsurface $ x_3=0 $ we have
\begin{equation}\label{f58}
\boldsymbol T_{0,\alpha}\boldsymbol a^\alpha + \boldsymbol T_0^{\,\prime}\boldsymbol n_0 = \boldsymbol 0.
\end{equation}
On the other hand, we see that
\[
\boldsymbol T_{0,\alpha}\boldsymbol a^\alpha = \big( \boldsymbol T_0 \boldsymbol a + \boldsymbol T_0 \boldsymbol n_0\otimes \boldsymbol n_0\big),_\alpha \boldsymbol a^\alpha =
\big( \boldsymbol T_0 \boldsymbol a \big),_\alpha \boldsymbol a^\alpha
+ \boldsymbol T_0 \boldsymbol n_0\big(\boldsymbol n_{0,\alpha} \cdot \,\boldsymbol a^\alpha\big)
= \mathrm{Div}_s(\boldsymbol T_0\boldsymbol a) -2H \,\boldsymbol T_0 \boldsymbol n_0\,.
\]
Inserting the last relation into \eqref{f58} we find
\begin{equation}\label{f59}
\mathrm{Div}_s(\boldsymbol T_0\boldsymbol a) + \boldsymbol T_0^{\,\prime}\boldsymbol n_0 -2H \,\boldsymbol T_0 \boldsymbol n_0 = \boldsymbol 0.
\end{equation}
With help of \eqref{f42}, \eqref{f59} and the divergence theorem for surfaces we get
\begin{equation}\label{f60}
\begin{array}{l}
\displaystyle\int_{\omega_\xi} \big(\boldsymbol T_0 \boldsymbol a\big) : \big(\mathrm{Grad}_s\,\boldsymbol \beta \big) \mathrm{d}a =
\int_{\omega_\xi} \big[ \mathrm{Div}_s\big(\boldsymbol \beta(\boldsymbol T_0\boldsymbol a)\big) - \boldsymbol \beta\cdot \mathrm{Div}_s(\boldsymbol T_0\boldsymbol a)\big] \mathrm{d}a =
\vspace{6pt}\\
\qquad\qquad\displaystyle
= \int_{\partial\omega_\xi} \boldsymbol \beta\big(\boldsymbol T_0\boldsymbol a\big) \cdot \boldsymbol \nu\, \mathrm{d}\ell -
\int_{\omega_\xi} \boldsymbol \beta\cdot \big( 2H \,\boldsymbol T_0 \boldsymbol n_0 - \boldsymbol T_0^{\,\prime}\boldsymbol n_0 \big) \mathrm{d}a
= \int_{\partial\omega_\xi} \boldsymbol \beta\cdot \big(\boldsymbol T_0\boldsymbol a\big) \boldsymbol \nu\, \mathrm{d}\ell\,,
\end{array}
\end{equation}
where $ \boldsymbol\nu $ is the unit normal to the boundary curve $ \partial \omega_\xi $ lying in the tangent plane. The last integral in
\eqref{f60} represents a prescribed constant (determined by the boundary data on $ \partial \omega_\xi $), which can be omitted, since its variation vanishes identically and thus does not influence the minimizers of the energy functional.
In conclusion, using the results \eqref{f53}-\eqref{f57} in the equation \eqref{f43} we obtain the following expression of the areal strain-energy density for Cosserat shells
\begin{equation}\label{f61}
\begin{array}{rl}
W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) = &
\Big(h- K\, \dfrac{h^3}{12}\Big) \big[ W_{\mathrm{Coss}}\big( \boldsymbol E^e\big) + W_{\mathrm{curv}} \big(\boldsymbol K^e\big) \big]
\vspace{6pt}\\
& + \;\dfrac{h^3}{12}\,\big[ W_{\mathrm{Coss}}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)
-2 W_{\mathrm{Coss}}\big(\boldsymbol E^e\,,\, \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
+ W_{\mathrm{curv}} \big(\boldsymbol K^e \boldsymbol b\big) \big],
\end{array}
\end{equation}
where $ W_{\mathrm{Coss}} $ is defined by \eqref{f50}, \eqref{f51} (see also equations \eqref{f88} and \eqref{f99})
and $ W_{\mathrm{curv}} $ is given in \eqref{f7}. This is the elastically stored strain-energy density for our model, which determines the constitutive equations. In Section \ref{Sect5} we shall present a useful alternative form of the
energy $ W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) $, together with explicit stress-strain relations (see \eqref{f100}, \eqref{f107}).
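To illustrate the structure of \eqref{f61}, consider the special case of a flat reference midsurface (a Cosserat plate): then $ \boldsymbol b = \boldsymbol 0 $ and $ H = K = 0 $, and $ \boldsymbol b^* $ vanishes as well (assuming, as in \cite{Birsan-Neff-MMS-2019}, that $ \boldsymbol b^* = 2H\boldsymbol a - \boldsymbol b $ denotes the cofactor of $ \boldsymbol b $, which satisfies the relation $ \boldsymbol b\,\boldsymbol b^* = K\boldsymbol a $ used in Section \ref{Sect5.3}). In this case \eqref{f61} reduces to the familiar membrane--bending splitting
\[
W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) \,=\, h\,\big[ W_{\mathrm{Coss}}\big( \boldsymbol E^e\big) + W_{\mathrm{curv}} \big(\boldsymbol K^e\big) \big] \,+\, \dfrac{h^3}{12}\; W_{\mathrm{Coss}}\big(\boldsymbol c \boldsymbol K^e \big)\,.
\]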
\subsection{The field equations for Cosserat shells} \label{Sect4.3}
For the sake of completeness, we record here the governing field equations of the derived shell model.
We deduce the form of the equilibrium equations for Cosserat shells from the condition that the solution is a stationary point of the energy functional $ I $\,, i.e. we impose that the variation of the energy functional is zero:
\begin{equation}\label{f62}
\delta I =0\,, \qquad \mathrm{with}\quad I=\int_{\omega_\xi}
W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) \,\mathrm{d}a.
\end{equation}
For simplicity we have assumed in \eqref{f62} that the external body loads are vanishing and the boundary conditions are null. To compute the variation $ \delta I $ we write
\begin{equation}\label{f63}
\delta\,
W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) \, = \,\dfrac{\partial\, W_{\mathrm{shell}}}{\partial \boldsymbol E^e}\,
: \big(\delta\boldsymbol E^e\big) +
\dfrac{\partial\, W_{\mathrm{shell}}}{\partial \boldsymbol K^e}\,: \big(\delta\boldsymbol K^e\big)
\, = \,\big(\boldsymbol Q_e^T\boldsymbol N \big)
: \big(\delta\boldsymbol E^e\big) +
\big(\boldsymbol Q_e^T\boldsymbol M\big) : \big(\delta\boldsymbol K^e\big) ,
\end{equation}
where we have introduced the tensors $ \boldsymbol N $ and $ \boldsymbol M $ such that
\begin{equation}\label{f64}
\boldsymbol Q_e^T\boldsymbol N = \dfrac{\partial\, W_{\mathrm{shell}}}{\partial \boldsymbol E^e}\qquad
\textrm{and}\qquad
\boldsymbol Q_e^T\boldsymbol M = \dfrac{\partial\, W_{\mathrm{shell}}}{\partial \boldsymbol K^e}\,.
\end{equation}
Let us denote by
\begin{equation}\label{f65}
\boldsymbol F_s :=\, \mathrm{Grad}_s\boldsymbol m \,=\, \boldsymbol m,_\alpha \otimes\, \boldsymbol a^\alpha
\end{equation}
the \textit{shell deformation gradient} (i.e., the surface gradient of the midsurface deformation $ \boldsymbol m $). Then, in view of \eqref{f31}$ _1 $ we have $\, \boldsymbol E^e = \boldsymbol Q_e^T \boldsymbol F_s - \boldsymbol a\,$ and, hence,
\begin{equation}\label{f66}
\delta\boldsymbol E^e = \delta\big( \boldsymbol Q_e^T \boldsymbol F_s - \boldsymbol a \big) = \delta \big( \boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol m \big)
= ( \delta \boldsymbol Q_e)^T \mathrm{Grad}_s\boldsymbol m +
\boldsymbol Q_e^T \mathrm{Grad}_s\big( \delta\boldsymbol m \big).
\end{equation}
To compute $ \delta\boldsymbol Q_e\, $, we notice that the tensor $ (\delta\boldsymbol Q_e)\boldsymbol Q_e^T $ is skew-symmetric and we denote
\begin{equation}\label{f67}
\boldsymbol\Omega := (\delta\boldsymbol Q_e)\boldsymbol Q_e^T, \qquad \boldsymbol\omega:=\mathrm{axl} (\boldsymbol\Omega),\qquad
\mathrm{with}\quad \boldsymbol\Omega=\boldsymbol\omega\times {\boldsymbol{\mathbbm{1}}}_3\,.
\end{equation}
In the above relations, the axial vector $ \boldsymbol\omega $ is the virtual rotation vector and $ \delta\boldsymbol m$ is the virtual translation. From \eqref{f67} we get
\begin{equation}\label{f68}
\delta\boldsymbol Q_e = \boldsymbol\Omega \,\boldsymbol Q_e
=-(\boldsymbol Q_e^T \boldsymbol\Omega)^T
\end{equation}
and substituting into \eqref{f66} we obtain
\begin{equation}\label{f69}
\delta\boldsymbol E^e = \boldsymbol Q_e^T\big( \mathrm{Grad}_s (\delta \boldsymbol m) - \boldsymbol\Omega \,\boldsymbol F_s \big)
.
\end{equation}
Further, in order to compute $ \delta\boldsymbol K^e\, $, we recall the formula (see \cite[f. (63)]{Birsan-Neff-L57-2016})
\begin{equation}\label{f70}
\boldsymbol K^e = \dfrac12\,\big[ \boldsymbol Q_e^T \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
- \boldsymbol d^0_i\times \mathrm{Grad}_s\, \boldsymbol d^0_i \,\big]
\end{equation}
and write (in view of \eqref{f68})
\begin{equation}\label{f71}
\delta\boldsymbol d_i = \delta \big( \boldsymbol Q_e \boldsymbol d_i^0\big) = (\delta\boldsymbol Q_e) \boldsymbol d_i^0 = \boldsymbol\Omega \boldsymbol Q_e \boldsymbol d_i^0 = \boldsymbol\Omega \boldsymbol d_i = \boldsymbol\omega\times
\boldsymbol d_i\,.
\end{equation}
Then, from \eqref{f70} it follows
\begin{equation}\label{f72}
\begin{array}{rl}
\delta\boldsymbol K^e =& \dfrac12 \, \delta \big[ \boldsymbol Q_e^T \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big) \big]
\vspace{6pt}\\
= &
\dfrac12 \, \big[ (\delta\boldsymbol Q_e)^T \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
+ \boldsymbol Q_e^T \big( (\delta\boldsymbol d_i)\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
+ \boldsymbol Q_e^T \big( \boldsymbol d_i\times \mathrm{Grad}_s (\delta\boldsymbol d_i)\big)\big]
\vspace{6pt}\\
= &
\dfrac12 \, \boldsymbol Q_e^T \big[ -\boldsymbol \Omega \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
+ ( \boldsymbol \Omega\boldsymbol d_i)\times \mathrm{Grad}_s\, \boldsymbol d_i
+ \boldsymbol d_i\times \mathrm{Grad}_s (\boldsymbol \Omega\boldsymbol d_i)\big]
\vspace{6pt}\\
= & \dfrac12 \, \boldsymbol Q_e^T \big[ -\boldsymbol\omega\times \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
+ ( \boldsymbol\omega\times\boldsymbol d_i)\times \mathrm{Grad}_s\, \boldsymbol d_i
+ \boldsymbol d_i\times \mathrm{Grad}_s (\boldsymbol\omega\times\boldsymbol d_i)\big] .
\end{array}
\end{equation}
By virtue of the Jacobi identity for the cross product, we have
\[
-\boldsymbol\omega\times \big( \boldsymbol d_i\times \mathrm{Grad}_s\, \boldsymbol d_i\big)
+ ( \boldsymbol\omega\times\boldsymbol d_i)\times \mathrm{Grad}_s\, \boldsymbol d_i =
- \boldsymbol d_i\times\big( \boldsymbol\omega\times\mathrm{Grad}_s\, \boldsymbol d_i\big)
\]
and inserting this in \eqref{f72} we get
\begin{equation}\label{f72,5}
\begin{array}{rl}
\delta\boldsymbol K^e =& \dfrac12 \, \boldsymbol Q_e^T \big[ \boldsymbol d_i \times \big( \mathrm{Grad}_s (\boldsymbol\omega\times\boldsymbol d_i) - \boldsymbol\omega\times\mathrm{Grad}_s\, \boldsymbol d_i \big)\big] .
\end{array}
\end{equation}
For the square brackets in \eqref{f72,5} we can write
\begin{equation}\label{f73}
\boldsymbol d_i \times \big( \mathrm{Grad}_s (\boldsymbol\omega\times\boldsymbol d_i) - \boldsymbol\omega\times\mathrm{Grad}_s\, \boldsymbol d_i \big)
= - \boldsymbol d_i \times \big( \boldsymbol d_i\times\mathrm{Grad}_s\, \boldsymbol\omega \big)
= 2 \,\mathrm{Grad}_s\, \boldsymbol\omega\,,
\end{equation}
since (summing over $ i $ and using the orthonormality of the directors $ \boldsymbol d_i $)
\[
- \boldsymbol d_i \times \big( \boldsymbol d_i\times \boldsymbol\omega,_\alpha \big) = - (\boldsymbol d_i\cdot \boldsymbol\omega,_\alpha ) \boldsymbol d_i + (\boldsymbol d_i\cdot\boldsymbol d_i) \boldsymbol\omega,_\alpha
=-\boldsymbol\omega,_\alpha + 3\,\boldsymbol\omega,_\alpha
= 2\,\boldsymbol\omega,_\alpha\,.
\]
We substitute \eqref{f73} into \eqref{f72,5} and find
\begin{equation}\label{f74}
\delta\boldsymbol K^e = \boldsymbol Q_e^T\, \mathrm{Grad}_s \boldsymbol \omega.
\end{equation}
By virtue of \eqref{f69} and \eqref{f74}, the relation \eqref{f63} becomes
\begin{equation}\label{f75}
\delta\,
W_{\mathrm{shell}} \, = \, \boldsymbol N
: \big( \mathrm{Grad}_s (\delta \boldsymbol m) - \boldsymbol\Omega \,\boldsymbol F_s \big) +
\boldsymbol M : \mathrm{Grad}_s \boldsymbol \omega\,.
\end{equation}
We can rewrite the term $ \boldsymbol N :(\boldsymbol \Omega\boldsymbol F_s) $ as follows
\begin{equation}\label{f76}
\boldsymbol N :(\boldsymbol \Omega\boldsymbol F_s) = - \boldsymbol \Omega :(\boldsymbol F_s\boldsymbol N^T) = -
\boldsymbol \omega\cdot \mathrm{axl} \big( \boldsymbol F_s \boldsymbol N^T- \boldsymbol N\boldsymbol F_s^T \big),
\end{equation}
since
\[
\boldsymbol \Omega : \boldsymbol X = \mathrm{axl}(\boldsymbol \Omega) \cdot \mathrm{axl}(\boldsymbol X- \boldsymbol X^T)
\]
for any second order tensor $ \boldsymbol X $ and any skew-symmetric tensor $ \boldsymbol \Omega $. We use \eqref{f76} in \eqref{f75} and deduce
\begin{equation}\label{f77}
\delta\,
W_{\mathrm{shell}} \, = \, \boldsymbol N
: \mathrm{Grad}_s (\delta \boldsymbol m) +
\boldsymbol M : \mathrm{Grad}_s \boldsymbol \omega
+ \mathrm{axl} \big( \boldsymbol F_s \boldsymbol N^T- \boldsymbol N\boldsymbol F_s^T \big)\cdot \boldsymbol \omega\,.
\end{equation}
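The algebraic identity invoked in \eqref{f76} is elementary; as a quick numerical sanity check (a Python sketch with arbitrarily chosen $ \boldsymbol\omega $ and $ \boldsymbol X $, given only for illustration) one may use:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)                    # axial vector omega
X = rng.normal(size=(3, 3))               # arbitrary second order tensor

# Omega = omega x 1_3: the skew-symmetric tensor with axial vector omega
Om = np.array([[   0., -w[2],  w[1]],
               [ w[2],    0., -w[0]],
               [-w[1],  w[0],    0.]])
axl = lambda A: np.array([A[2, 1], A[0, 2], A[1, 0]])   # axial vector of a skew tensor

# identity:  Omega : X = axl(Omega) . axl(X - X^T)
print(np.isclose(np.sum(Om * X), np.dot(w, axl(X - X.T))))   # -> True
\end{verbatim}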
For the first two terms in the right-hand side of equation \eqref{f77} we employ relations of the type
\[
\boldsymbol S : \mathrm{Grad}_s \boldsymbol v = \mathrm{Div}_s(\boldsymbol S^T\boldsymbol v) - \big(\mathrm{Div}_s\boldsymbol S\big)\cdot \boldsymbol v\,,
\]
together with the divergence theorem on surfaces. Thus, in view of the null boundary conditions on $ \partial \omega_\xi $ we derive
\begin{equation}\label{f78}
\displaystyle\int_{\omega_\xi} \boldsymbol N : \mathrm{Grad}_s (\delta \boldsymbol m)\, \mathrm{d}a =
\int_{\partial\omega_\xi} (\delta \boldsymbol m)\cdot(\boldsymbol N \boldsymbol \nu)\, \mathrm{d}\ell
- \displaystyle\int_{\omega_\xi} \big(\mathrm{Div}_s\boldsymbol N\big) \cdot (\delta \boldsymbol m)\, \mathrm{d}a
= - \displaystyle\int_{\omega_\xi} \big(\mathrm{Div}_s\boldsymbol N\big) \cdot (\delta \boldsymbol m)\, \mathrm{d}a
\end{equation}
and similarly
\begin{equation}\label{f79}
\displaystyle\int_{\omega_\xi} \boldsymbol M : \mathrm{Grad}_s \boldsymbol\omega\, \mathrm{d}a
= - \displaystyle\int_{\omega_\xi} \big(\mathrm{Div}_s\boldsymbol M\big) \cdot \boldsymbol \omega\, \mathrm{d}a .
\end{equation}
Finally, in view of \eqref{f77}-\eqref{f79} we obtain
\begin{equation}\label{f80}
0\,=\, \delta\, I \,=\, \displaystyle\int_{\omega_\xi} \delta\, W_{\mathrm{shell}} \,\, \mathrm{d}a =
- \int_{\omega_\xi} \Big[ \big(\mathrm{Div}_s\boldsymbol N\big) \cdot (\delta \boldsymbol m) + \big(\mathrm{Div}_s\boldsymbol M+ \mathrm{axl} \big(\boldsymbol N\boldsymbol F_s^T - \boldsymbol F_s \boldsymbol N^T\big)\big) \cdot \boldsymbol \omega \Big]\, \mathrm{d}a,
\end{equation}
for any virtual translation $ \delta\boldsymbol m $ and any virtual rotation $ \boldsymbol\omega = \mathrm{axl} \big((\delta\boldsymbol Q_e)\boldsymbol Q_e^T\big) $. Relation \eqref{f80} yields the following local forms of the equilibrium equations
\begin{equation}\label{f81}
\mathrm{Div}_s\boldsymbol N = \boldsymbol 0\qquad\mathrm{and}\qquad
\mathrm{Div}_s\boldsymbol M + \mathrm{axl} \big(\boldsymbol N\boldsymbol F_s^T - \boldsymbol F_s \boldsymbol N^T\big)= \boldsymbol 0.
\end{equation}
\textbf{Remark:} The principle of virtual work for 6-parameter shells corresponding to equation
\eqref{f80} has been presented in \cite{Eremeyev06,Birsan-Neff-L58-2017}.
$ \Box $
If we consider now external body forces $ \boldsymbol f $ and couples $ \boldsymbol c $\,, we can write the equilibrium equations for Cosserat shells in the general form (see, e.g. \cite{Eremeyev06,Birsan-Neff-L58-2017})
\begin{equation}\label{f82}
\mathrm{Div}_s\boldsymbol N + \boldsymbol f = \boldsymbol 0,\qquad
\mathrm{Div}_s\boldsymbol M + \mathrm{axl} \big(\boldsymbol N\boldsymbol F_s^T - \boldsymbol F_s \boldsymbol N^T\big) + \boldsymbol c= \boldsymbol 0.
\end{equation}
The tensors $ \boldsymbol N $ and $ \boldsymbol M $ are the internal surface stress tensor and the internal surface couple tensor (of the first Piola-Kirchhoff type), respectively. They are given by the relations \eqref{f64}.
The general form of the boundary conditions of mixed type on $ \partial \omega_\xi $ is (see, e.g. \cite{Pietraszkiewicz04,Pietraszkiewicz11,Birsan-Neff-MMS-2014})
\begin{equation}\label{f83}
\begin{array}{rcl}
\boldsymbol{N}\boldsymbol{\nu} & = & \boldsymbol{N}^*,\qquad \boldsymbol{M}\boldsymbol{\nu} = \boldsymbol{M}^*\quad\mathrm{along}\,\,\,\partial \omega_f\,,
\vspace{4pt}\\
\,\,\boldsymbol{m} & = & \boldsymbol{m}^* ,\qquad\quad \boldsymbol{Q}_e = \boldsymbol{Q}^* \quad\mathrm{along}\,\,\,\partial \omega_d\,,
\end{array}
\end{equation}
where $\partial \omega_f$ and $ \partial \omega_d $ form a disjoint partition of the boundary curve $\partial \omega_\xi\,$. Here, $\boldsymbol{N}^*$ and $\boldsymbol{M}^*$ are the external boundary force and couple vectors, respectively, applied along the deformed boundary curve, but measured per unit length of $\partial \omega_f\,$. On the portion of the boundary $\partial \omega_d$ we have Dirichlet-type boundary conditions for the deformation vector $ \boldsymbol{m} $ and the microrotation tensor $ \boldsymbol{Q}_e\, $.
Using the obtained form of the energy density \eqref{f61} and the relations \eqref{f64}, we can give the stress-strain relations in explicit form for our shell model. These will be written in the next section.
\section{Remarks and discussions on the Cosserat shell model} \label{Sect5}
In this section we write the strain-energy density \eqref{f61} in some alternative useful forms and give the explicit expression for the constitutive equations \eqref{f64}. This allows us to compare the derived shell model with other approaches to 6-parameter shells and with the classical Koiter shell model.
We notice that the shell strain measures $ \boldsymbol E^e $ and $ \boldsymbol K^e $ (as well as the shell stress tensors $ \boldsymbol{Q}_e^T\boldsymbol N $ and $ \boldsymbol{Q}_e^T\boldsymbol M $) are tensors of the form $ \boldsymbol X=X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $ (where $ \boldsymbol a^3=\boldsymbol n_0 $). In what follows, we shall decompose any such tensor $ \boldsymbol X=X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $ in its ``planar'' part $ \boldsymbol a \boldsymbol X = X_{\beta\alpha}\boldsymbol a^\beta\otimes \boldsymbol a^\alpha $ and its ``transversal'' part $ \boldsymbol n_0\boldsymbol X = X_{3\alpha}\boldsymbol a^\alpha$ according to
\begin{equation}\label{f84}
\boldsymbol X = {\boldsymbol{\mathbbm{1}}}_3 \boldsymbol X = (\boldsymbol a + \boldsymbol n_0 \otimes \boldsymbol n_0)\boldsymbol X =\boldsymbol a \boldsymbol X + \boldsymbol n_0 \otimes (\boldsymbol n_0 \boldsymbol X).
\end{equation}
Note that $ \boldsymbol a \boldsymbol X $ is a planar tensor in the tangent plane, while $ \boldsymbol n_0 \boldsymbol X $ is a vector in the tangent plane. For instance, the decomposition of the shell strain tensor $ \boldsymbol E^e $ yields
\begin{equation}\label{f85}
\boldsymbol E^e=
\boldsymbol a \boldsymbol E^e + \boldsymbol n_0 \otimes (\boldsymbol n_0 \boldsymbol E^e), \qquad \boldsymbol a\boldsymbol E^e= E^e_{\beta\alpha}\boldsymbol a^\beta\otimes \boldsymbol a^\alpha,\qquad
\boldsymbol n_0 \boldsymbol E^e = E^e_{3\alpha}\boldsymbol a^\alpha,
\end{equation}
where $ \boldsymbol n_0 \boldsymbol E^e $ describes the transverse shear deformations and $ \boldsymbol a\boldsymbol E^e $ the in-plane deformation of the shell.
With this representation, we can decompose the constitutive equations
\eqref{f64} in the following way
\begin{equation}\label{f87}
\boldsymbol a \boldsymbol Q_e^T \boldsymbol N = \dfrac{\partial W_{\mathrm{shell}}}{\partial (\boldsymbol a \boldsymbol E^e)}\;,\quad
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol N = \dfrac{\partial W_{\mathrm{shell}}}{\partial (\boldsymbol n_0 \boldsymbol E^e)}\;,\quad
\boldsymbol a \boldsymbol Q_e^T \boldsymbol M = \dfrac{\partial W_{\mathrm{shell}}}{\partial (\boldsymbol a \boldsymbol K^e)}\;,\quad
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol M = \dfrac{\partial W_{\mathrm{shell}}}{\partial (\boldsymbol n_0 \boldsymbol K^e)}\;.
\end{equation}
\subsection{Explicit stress-strain relations} \label{Sect5.1}
In order to write the stress-strain relations explicitly, let us put the equations \eqref{f50} and \eqref{f51} in the forms
\begin{equation}\label{f88}
\begin{array}{rl}
W_{\mathrm{Coss}}(\boldsymbol X,\boldsymbol Y) =&
\mu\, \mathrm{sym}( \boldsymbol a\boldsymbol X) : \mathrm{sym} (\boldsymbol a \boldsymbol Y) + \mu_c\, \mathrm{skew}(\boldsymbol a \boldsymbol X) : \mathrm{skew}(\boldsymbol a \boldsymbol Y) +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr} \boldsymbol X\big)\,\big(\mathrm{tr} \boldsymbol Y\big)
\vspace{6pt}\\
&+ \,\dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0\boldsymbol X\big)\cdot\big(\boldsymbol n_0\boldsymbol Y\big),
\vspace{6pt}\\
W_{\mathrm{Coss}}(\boldsymbol X) = &
\mu\,\| \mathrm{sym}( \boldsymbol a\boldsymbol X) \|^2 + \mu_c\| \mathrm{skew}(\boldsymbol a \boldsymbol X) \|^2 +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr} \boldsymbol X\big)^2
+ \,\dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\| \boldsymbol n_0\boldsymbol X\|^2
\end{array}
\end{equation}
and note that $ \mathrm{tr} \boldsymbol X= \mathrm{tr} (\boldsymbol a\boldsymbol X) $. Suggested by \eqref{f88}, we introduce the fourth order planar tensor $ \underline{\boldsymbol C}_S $ of elastic moduli for the shell
\begin{equation}\label{f89}
\begin{array}{c}
\underline{\boldsymbol C}_S = C_S^{\alpha\beta\gamma\delta} \boldsymbol a_\alpha\otimes\boldsymbol a_\beta\otimes\boldsymbol a_\gamma\otimes\boldsymbol a_\delta\qquad \mathrm{with}
\vspace{6pt}\\
C_S^{\alpha\beta\gamma\delta} = \mu\,\big( a^{\alpha\gamma}a^{\beta\delta} + a^{\alpha\delta}a^{\beta\gamma} \big)
+ \mu_c\,\big( a^{\alpha\gamma}a^{\beta\delta} - a^{\alpha\delta}a^{\beta\gamma} \big)
+ \,\dfrac{2\lambda\,\mu}{\lambda+2\mu}\; a^{\alpha\beta}a^{\gamma\delta} \,.
\end{array}
\end{equation}
Then, the tensor $ \underline{\boldsymbol C}_S $ satisfies the major symmetries $ C_S^{\alpha\beta\gamma\delta} = C_S^{\gamma\delta\alpha\beta} $ and we have
\begin{equation}\label{f90}
\underline{\boldsymbol C}_S : \boldsymbol T = 2\mu\, \mathrm{sym}\, \boldsymbol T + 2 \mu_c\, \mathrm{skew}\,\boldsymbol T +\,\dfrac{2\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr}\, \boldsymbol T\big)\,\boldsymbol a\,,
\end{equation}
for any planar tensor $ \boldsymbol T = T_{\alpha\beta} \boldsymbol a^\alpha\otimes \boldsymbol a^\beta$. Due to the symmetry, the relations
\eqref{f88} can be written in a simple way
\begin{equation}\label{f91}
\begin{array}{rl}
W_{\mathrm{Coss}}(\boldsymbol X,\boldsymbol Y) =&
\dfrac12( \boldsymbol a\boldsymbol X) : \underline{\boldsymbol C}_S : ( \boldsymbol a\boldsymbol Y)+ \,\dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\big( \boldsymbol n_0\boldsymbol X\big)\cdot\big(\boldsymbol n_0\boldsymbol Y\big)
=\, \dfrac12 \, C_S^{\alpha\beta\gamma\delta} X_{\alpha\beta}Y_{\gamma\delta} + \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,X_{3\alpha}Y_{3\alpha},
\vspace{6pt}\\
W_{\mathrm{Coss}}(\boldsymbol X) = &
\dfrac12( \boldsymbol a\boldsymbol X) : \underline{\boldsymbol C}_S : ( \boldsymbol a\boldsymbol X)+ \,\dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\| \boldsymbol n_0\boldsymbol X\|^2,
\end{array}
\end{equation}
for any tensors $ \boldsymbol X=X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $, $ \boldsymbol Y=Y_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $.
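As a consistency check of the planar/transversal splitting, the representation \eqref{f91}$ _2 $ can be compared numerically with \eqref{f88}$ _2 $. The following Python sketch does this in the flat orthonormal case $ a^{\alpha\beta} = \delta^{\alpha\beta} $, with arbitrarily chosen material constants (illustration only):
\begin{verbatim}
import numpy as np

lam, mu, mu_c = 1.3, 0.8, 0.4                  # arbitrary illustrative material constants
d = np.eye(2)                                  # flat case: a^{alpha beta} = delta^{alpha beta}

# fourth order planar tensor C_S of (f89)
C_S = (mu  *(np.einsum('ac,bd->abcd', d, d) + np.einsum('ad,bc->abcd', d, d))
     + mu_c*(np.einsum('ac,bd->abcd', d, d) - np.einsum('ad,bc->abcd', d, d))
     + 2*lam*mu/(lam + 2*mu)*np.einsum('ab,cd->abcd', d, d))

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 3)); X[:, 2] = 0.0     # X = X_{i alpha} a^i (x) a^alpha
aX, n0X = X[:2, :2], X[2, :2]                  # planar part a X and transversal part n0 X

sym  = lambda T: 0.5*(T + T.T)
skew = lambda T: 0.5*(T - T.T)
W_88 = (mu*np.sum(sym(aX)**2) + mu_c*np.sum(skew(aX)**2)
        + lam*mu/(lam + 2*mu)*np.trace(aX)**2
        + 2*mu*mu_c/(mu + mu_c)*np.dot(n0X, n0X))              # (f88)_2
W_91 = (0.5*np.einsum('ab,abcd,cd->', aX, C_S, aX)
        + 2*mu*mu_c/(mu + mu_c)*np.dot(n0X, n0X))              # (f91)_2
print(np.isclose(W_88, W_91))                                  # -> True
\end{verbatim}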
Similarly, the quadratic form $ W_{\mathrm{curv}} $ defined by \eqref{f7} can be put in the form
\begin{equation}\label{f92}
\begin{array}{rl}
W_{\mathrm{curv}}(\boldsymbol X) =& \mu\,L_c^2\Big(\,
b_1\| \mathrm{sym}( \boldsymbol a\boldsymbol X) \|^2 + b_2\| \mathrm{skew}(\boldsymbol a \boldsymbol X) \|^2 +\,\big(b_3-\,\dfrac{b_1}{3}\big)\big( \mathrm{tr} \boldsymbol X\big)^2
+ \,\dfrac{b_1+b_2}{2}\,\| \boldsymbol n_0\boldsymbol X\|^2
\Big)
\vspace{6pt}\\
= &
\dfrac12( \boldsymbol a\boldsymbol X) : \underline{\boldsymbol G}_S : ( \boldsymbol a\boldsymbol X)+ \mu\,L_c^2\,\dfrac{b_1+b_2}{2}\,\| \boldsymbol n_0\boldsymbol X\|^2
\,= \dfrac12 \, G_S^{\alpha\beta\gamma\delta} X_{\alpha\beta}X_{\gamma\delta} + \mu\,L_c^2\,\dfrac{b_1+b_2}{2}\,X_{3\alpha}X_{3\alpha}
\end{array}
\end{equation}
for any tensor $ \boldsymbol X=X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $, where the fourth order planar tensor $ \underline{\boldsymbol G}_S $ is given by
\begin{equation}\label{f93}
\begin{array}{c}
\underline{\boldsymbol G}_S = G_S^{\alpha\beta\gamma\delta} \boldsymbol a_\alpha\otimes\boldsymbol a_\beta\otimes\boldsymbol a_\gamma\otimes\boldsymbol a_\delta\qquad \mathrm{with}
\vspace{6pt}\\
G_S^{\alpha\beta\gamma\delta} = \mu\,L_c^2\Big(
b_1\,\big( a^{\alpha\gamma}a^{\beta\delta} + a^{\alpha\delta}a^{\beta\gamma} \big)
+ b_2\,\big( a^{\alpha\gamma}a^{\beta\delta} - a^{\alpha\delta}a^{\beta\gamma} \big)
+ \big(b_3-\,\dfrac{b_1}{3}\,\big)\, a^{\alpha\beta}a^{\gamma\delta} \,
\Big).
\end{array}
\end{equation}
We see that $ G_S^{\alpha\beta\gamma\delta} = G_S^{\gamma\delta\alpha\beta} $ and for any planar tensor $ \boldsymbol T = T_{\alpha\beta} \boldsymbol a^\alpha\otimes \boldsymbol a^\beta$ it holds
\begin{equation}\label{f94}
\underline{\boldsymbol G}_S : \boldsymbol T = 2\mu\,L_c^2 \Big(b_1\,\mathrm{sym}\, \boldsymbol T + b_2\, \mathrm{skew}\,\boldsymbol T +\big(b_3-\,\dfrac{b_1}{3}\big)\big( \mathrm{tr}\, \boldsymbol T\big)\,\boldsymbol a\Big).
\end{equation}
In order to show that the quadratic forms $ W_{\mathrm{Coss}} $ and $ W_{\mathrm{curv}} $ are positive definite, let us introduce the \textit{surface deviator operator} $ \mathrm{dev}_s $ defined by \cite{Birsan-Neff-L58-2017}
\begin{equation}\label{f95}
\mathrm{dev}_s\, \boldsymbol X \,:=\, \boldsymbol X \,-\,
\dfrac12 \big( \mathrm{tr}\, \boldsymbol X\big)\,\boldsymbol a.
\end{equation}
According to Lemma 2.1 in \cite{Birsan-Neff-L58-2017} we can decompose any tensor $ \boldsymbol X=X_{i\alpha}\boldsymbol a^i\otimes \boldsymbol a^\alpha $ as a \textit{direct sum} (orthogonal decomposition) as follows
\begin{equation}\label{f96}
\boldsymbol{X}\,=\, \mathrm{dev_ssym}\, \boldsymbol{X}\, + \, \mathrm{skew}\, \boldsymbol{X}\, + \, \frac{1}{2}\,\big(\mathrm{tr}\,\boldsymbol{X}\big)\,\boldsymbol a\,.
\end{equation}
Then, relations \eqref{f95} and \eqref{f96} imply
\begin{equation}\label{f97}
\mathrm{sym}\,\boldsymbol{X}\,=\, \mathrm{dev_ssym}\, \boldsymbol{X}\, + \, \frac{1}{2}\,\big(\mathrm{tr}\,\boldsymbol{X}\big)\,\boldsymbol a
\qquad \mathrm{and}\qquad
\|\mathrm{sym}\,\boldsymbol{X}\|^2=\| \mathrm{dev_ssym}\, \boldsymbol{X}\|^2 + \, \frac{1}{2}\,\big(\mathrm{tr}\,\boldsymbol{X}\big)^2.
\end{equation}
Substituting \eqref{f97} into the relations \eqref{f90} and \eqref{f94}, we get (for any $ \boldsymbol T = T_{\alpha\beta} \boldsymbol a^\alpha\otimes \boldsymbol a^\beta$)
\begin{equation}\label{f98}
\begin{array}{l}
\underline{\boldsymbol C}_S : \boldsymbol T = 2\mu\, \mathrm{dev_ssym}\, \boldsymbol T + 2 \mu_c\, \mathrm{skew}\,\boldsymbol T +\,\dfrac{\mu(3\lambda+2\mu)}{\lambda+2\mu}\,\big( \mathrm{tr}\, \boldsymbol T\big)\,\boldsymbol a\,,
\vspace{6pt}\\
\underline{\boldsymbol G}_S : \boldsymbol T = 2\mu\,L_c^2 \Big(b_1\,\mathrm{dev_ssym}\, \boldsymbol T + b_2\, \mathrm{skew}\,\boldsymbol T +\big(b_3+\,\dfrac{b_1}{6}\big)\big( \mathrm{tr}\, \boldsymbol T\big)\,\boldsymbol a\Big)
\end{array}
\end{equation}
and the quadratic forms \eqref{f88}$ _2 $ and \eqref{f92} become
\begin{equation}\label{f99}
\begin{array}{rl}
W_{\mathrm{Coss}}(\boldsymbol X) = &
\mu\,\| \mathrm{dev_ssym}( \boldsymbol a\boldsymbol X) \|^2 + \mu_c\| \mathrm{skew}(\boldsymbol a \boldsymbol X) \|^2 +\,\dfrac{\mu(3\lambda+2\mu)}{2(\lambda+2\mu)}\,\big( \mathrm{tr} \boldsymbol X\big)^2
+ \,\dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\| \boldsymbol n_0\boldsymbol X\|^2 ,
\vspace{6pt}\\
W_{\mathrm{curv}}(\boldsymbol X) =& \mu\,L_c^2\Big(\,
b_1\| \mathrm{dev_ssym}( \boldsymbol a\boldsymbol X) \|^2 + b_2\| \mathrm{skew}(\boldsymbol a \boldsymbol X) \|^2 +\,\big(b_3+\,\dfrac{b_1}{6}\big)\big( \mathrm{tr} \boldsymbol X\big)^2
+ \,\dfrac{b_1+b_2}{2}\,\| \boldsymbol n_0\boldsymbol X\|^2
\Big).
\end{array}
\end{equation}
Under the usual assumptions on the material constants $ \mu>0 $, $ 3\lambda+2\mu>0 $ (from classical elasticity), together with $ \mu_c>0 $ and $ b_i>0 $, we see now that the quadratic forms \eqref{f99} are positive definite, since all the coefficients are positive.
Finally, we substitute \eqref{f91}, \eqref{f92} into the strain-energy density \eqref{f61} and, performing the differentiation according to the relations \eqref{f87}, we obtain the following explicit forms of the constitutive equations for
the internal surface stress tensor $\boldsymbol Q_e^T \boldsymbol N $ and the internal surface couple tensor $ \boldsymbol Q_e^T \boldsymbol M $ of Cosserat shells
\[
\boldsymbol Q_e^T \boldsymbol N =
\boldsymbol a \boldsymbol Q_e^T \boldsymbol N + \boldsymbol n_0 \otimes (\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol N),\qquad
\boldsymbol Q_e^T \boldsymbol M =
\boldsymbol a \boldsymbol Q_e^T \boldsymbol M + \boldsymbol n_0 \otimes (\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol M)
\]
with
\begin{equation}\label{f100}
\begin{array}{rl}
\boldsymbol a \boldsymbol Q_e^T \boldsymbol N = &
\Big(h- K\, \dfrac{h^3}{12}\Big) \,
\underline{\boldsymbol C}_S : \big(\boldsymbol a \boldsymbol E^e\big)
+ \,\dfrac{h^3}{12}\,\big[\,
\underline{\boldsymbol C}_S : \big(\boldsymbol a \boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e\big)
\big]\boldsymbol b
-\dfrac{h^3}{12}\; \underline{\boldsymbol C}_S : \big( \boldsymbol c \boldsymbol K^e\boldsymbol b^*\big),
\vspace{6pt}\\
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol N = &
\,\dfrac{4\mu\,\mu_c}{\mu+\mu_c}\,\Big[
\Big(h- 2K\,\dfrac{h^3}{12}\,\Big) \big(\boldsymbol n_0 \boldsymbol E^e\big) + 2H\,\dfrac{h^3}{12}\, \big(\boldsymbol n_0 \boldsymbol E^e\boldsymbol b\big)
\Big]
,
\vspace{6pt}\\
\boldsymbol a \boldsymbol Q_e^T \boldsymbol M = & \Big(h- K\, \dfrac{h^3}{12}\Big) \,
\underline{\boldsymbol G}_S : \big(\boldsymbol a \boldsymbol K^e\big)
+ \,\dfrac{h^3}{12}\,\boldsymbol c\,\big[\,
\underline{\boldsymbol C}_S : \big(\boldsymbol a \boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e\big)
\big]
-\dfrac{h^3}{12}\,\boldsymbol c\, \big[\underline{\boldsymbol C}_S : \big( \boldsymbol a \boldsymbol E^e\big)\big]\boldsymbol b^*
\vspace{6pt}\\
& + \dfrac{h^3}{12}\,\, \big[\underline{\boldsymbol G}_S : \big( \boldsymbol a \boldsymbol K^e\boldsymbol b\big)\big]\boldsymbol b\,,
\vspace{6pt}\\
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol M = &
\mu\,L_c^2 \,(b_1+b_2) \Big[
\Big(h- 2K\,\dfrac{h^3}{12}\,\Big) \big(\boldsymbol n_0 \boldsymbol K^e\big) + 2H\,\dfrac{h^3}{12} \, \big(\boldsymbol n_0 \boldsymbol K^e\boldsymbol b\big)
\Big]
,
\end{array}
\end{equation}
where the tensors of elastic moduli $ \underline{\boldsymbol C}_S $ and $ \underline{\boldsymbol G}_S $ are given in \eqref{f89}, \eqref{f90} and \eqref{f93}, \eqref{f94}.
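For orientation, we also record the specialization of \eqref{f100} to a flat reference midsurface ($ \boldsymbol b = \boldsymbol 0 $, hence $ H = K = 0 $ and, with $ \boldsymbol b^* $ the cofactor of $ \boldsymbol b $ as before, $ \boldsymbol b^* = \boldsymbol 0 $): the relations \eqref{f100} reduce to
\[
\begin{array}{ll}
\boldsymbol a \boldsymbol Q_e^T \boldsymbol N = h\, \underline{\boldsymbol C}_S : \big(\boldsymbol a \boldsymbol E^e\big),
&\qquad
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol N = \dfrac{4\mu\,\mu_c}{\mu+\mu_c}\, h\, \big(\boldsymbol n_0 \boldsymbol E^e\big),
\\[3mm]
\boldsymbol a \boldsymbol Q_e^T \boldsymbol M = h\, \underline{\boldsymbol G}_S : \big(\boldsymbol a \boldsymbol K^e\big) + \dfrac{h^3}{12}\,\boldsymbol c\,\big[\,\underline{\boldsymbol C}_S : \big(\boldsymbol c \boldsymbol K^e\big)\big],
&\qquad
\boldsymbol n_0 \boldsymbol Q_e^T \boldsymbol M = \mu\,L_c^2\,(b_1+b_2)\, h\, \big(\boldsymbol n_0 \boldsymbol K^e\big),
\end{array}
\]
i.e. to this order the surface stress tensor of a Cosserat plate depends only on the shell strain tensor $ \boldsymbol E^e $, while the surface couple tensor depends only on the bending-curvature tensor $ \boldsymbol K^e $.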
\subsection{Comparison with other 6-parameter shell models}\label{Sect5.2}
We present a detailed comparison with the
related shell model of order $ O(h^5) $ which has been presented recently in \cite{Birsan-Neff-MMS-2019}. The Cosserat shell model derived in \cite{Birsan-Neff-MMS-2019} has many similarities with the present model, but there are also some differences, which we indicate now.
First of all, the derivation method and starting point in \cite{Birsan-Neff-MMS-2019} are different, since the deformation function $ \boldsymbol \varphi $ is assumed to be quadratic in $ x_3\, $. More precisely, the following ansatz is adopted (see \cite[f. (65)]{Birsan-Neff-MMS-2019})
\begin{equation}\label{f101}
\boldsymbol \varphi( x_i) = \boldsymbol m(x_1,x_2)+ x_3\,\alpha(x_1,x_2)\, \boldsymbol d_3 +\displaystyle\frac{x_3^2}{2}\,\beta(x_1,x_2) \,\boldsymbol d_3\, .
\end{equation}
If we compare this ansatz with the expansion \eqref{f27}, we see that the assumption \eqref{f101} is more restrictive.
On the other hand, the hypotheses \eqref{f42} from the classical shell theory were replaced in \cite{Birsan-Neff-MMS-2019} by the weaker requirements (see \cite[f. (60)]{Birsan-Neff-MMS-2019})
\begin{equation}\label{f102}
\boldsymbol n_0\cdot \boldsymbol T_0 \boldsymbol n_0 = 0 \qquad\mathrm{and}\qquad
\boldsymbol n_0\cdot \boldsymbol T_0^{\,\prime} \boldsymbol n_0 = 0 ,
\end{equation}
i.e. only the normal components of the stress vectors $ \boldsymbol t^+ $ , $ \boldsymbol t^- $ on the upper and lower surfaces of the shell are assumed to be zero. The two scalar equations \eqref{f102} are then employed in \cite{Birsan-Neff-MMS-2019} to determine the two scalar coefficients $ \alpha(x_1,x_2) $ and $ \beta(x_1,x_2) $ appearing in \eqref{f101}.
Moreover, we note that the paper \cite{Birsan-Neff-MMS-2019} presents a shell model of order $ O(h^5) $.
This different approach leads to a slightly different form of the strain-energy density. If we retain only the terms up to the order $ O(h^3) $ in the strain-energy density (see \cite[f. (104)]{Birsan-Neff-MMS-2019}), we get
\begin{equation}\label{f103}
\begin{array}{rl}
\widehat{W}_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) = &
\Big(h- K\, \dfrac{h^3}{12}\Big) \big[ W_{\mathrm{mixt}}\big( \boldsymbol E^e\big) + W_{\mathrm{curv}} \big(\boldsymbol K^e\big) \big]
\vspace{6pt}\\
& + \;\dfrac{h^3}{12}\,\big[ W_{\mathrm{mixt}}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c \boldsymbol K^e \big)
-2 W_{\mathrm{mixt}}\big(\boldsymbol E^e\,,\, \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
+ W_{\mathrm{curv}} \big(\boldsymbol K^e \boldsymbol b\big) \big],
\end{array}
\end{equation}
where $ {W}_{\mathrm{mixt}} $ is given by \eqref{f49,5}. We compare this expression with our energy \eqref{f61}.
Using the decomposition of tensors in planar and transversal parts \eqref{f84}, we deduce from
\eqref{f49,5} and \eqref{f50} the relations
\begin{equation}\label{f104}
\begin{array}{l}
W_{\mathrm{mixt}}(\boldsymbol S,\boldsymbol T) = W_{\mathrm{mixt}}(\boldsymbol a\boldsymbol S,\boldsymbol a\boldsymbol T)+ \, \dfrac{\mu+\mu_c}{2}\,\big(\boldsymbol n_0 \boldsymbol S\big)\cdot\big(\boldsymbol n_0 \boldsymbol T\big),
\vspace{6pt}\\
W_{\mathrm{Coss}}(\boldsymbol S,\boldsymbol T) = W_{\mathrm{mixt}}(\boldsymbol a\boldsymbol S,\boldsymbol a\boldsymbol T)+ \, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\big(\boldsymbol n_0 \boldsymbol S\big)\cdot\big(\boldsymbol n_0 \boldsymbol T\big).
\end{array}
\end{equation}
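The transverse shear coefficients in \eqref{f104} can be read off directly: the mixed planar--transversal terms drop out, and since $ \mathrm{sym}(\boldsymbol n_0\otimes\boldsymbol v):\mathrm{sym}(\boldsymbol n_0\otimes\boldsymbol w) = \mathrm{skew}(\boldsymbol n_0\otimes\boldsymbol v):\mathrm{skew}(\boldsymbol n_0\otimes\boldsymbol w) = \frac12\,\boldsymbol v\cdot\boldsymbol w $ for tangent vectors $ \boldsymbol v,\boldsymbol w $, the transversal parts contribute $ \frac{\mu+\mu_c}{2}\,(\boldsymbol n_0\boldsymbol S)\cdot(\boldsymbol n_0\boldsymbol T) $ in \eqref{f49,5}, which is \eqref{f104}$ _1 $. Subtracting the last term of \eqref{f50} then gives
\[
\dfrac{\mu+\mu_c}{2} \,-\, \dfrac{(\mu-\mu_c)^2}{2(\mu+\mu_c)} \,=\, \dfrac{(\mu+\mu_c)^2-(\mu-\mu_c)^2}{2(\mu+\mu_c)} \,=\, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,,
\]
which is the coefficient appearing in \eqref{f104}$ _2 $.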
Thus, using the relation \eqref{f104}$ _1 $ the strain-energy density \eqref{f103} (obtained in \cite{Birsan-Neff-MMS-2019} for order $ O(h^3) $) becomes
\begin{equation}\label{f106}
\begin{array}{l}
\widehat{W}_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) =
\Big(h- K\, \dfrac{h^3}{12}\Big) \big[ W_{\mathrm{mixt}}\big( \boldsymbol a \boldsymbol E^e\big)
+ \, \dfrac{\mu+\mu_c}{2}\,\|\boldsymbol n_0 \boldsymbol E^e \|^2
+ W_{\mathrm{curv}} \big(\boldsymbol K^e\big) \big]
\vspace{6pt}\\
\qquad\qquad + \;\dfrac{h^3}{12}\,\big[ W_{\mathrm{mixt}}\big(\boldsymbol a \boldsymbol E^e\boldsymbol b
+ \boldsymbol c \boldsymbol K^e \big)
+ \, \dfrac{\mu+\mu_c}{2}\,\|\boldsymbol n_0 \boldsymbol E^e\boldsymbol b \|^2
-2 W_{\mathrm{mixt}}\big(\boldsymbol a \boldsymbol E^e\,,\, \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
+ W_{\mathrm{curv}} \big(\boldsymbol K^e \boldsymbol b\big) \big].
\end{array}
\end{equation}
On the other hand, our strain-energy density \eqref{f61} can be written with the help of \eqref{f104}$ _2 $ in the following alternative form
\begin{equation}\label{f107}
\begin{array}{l}
W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) =
\Big(h- K\, \dfrac{h^3}{12}\Big) \big[ W_{\mathrm{mixt}}\big( \boldsymbol a \boldsymbol E^e\big)
+\, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\|\boldsymbol n_0 \boldsymbol E^e \|^2
+ W_{\mathrm{curv}} \big(\boldsymbol K^e\big) \big]
\vspace{6pt}\\
\qquad\qquad + \;\dfrac{h^3}{12}\,\big[ W_{\mathrm{mixt}}\big(\boldsymbol a \boldsymbol E^e\boldsymbol b
+ \boldsymbol c \boldsymbol K^e \big)
+ \, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\,\|\boldsymbol n_0 \boldsymbol E^e\boldsymbol b \|^2
-2 W_{\mathrm{mixt}}\big(\boldsymbol a \boldsymbol E^e\,,\, \boldsymbol c \boldsymbol K^e \boldsymbol b^* \big)
+ W_{\mathrm{curv}} \big(\boldsymbol K^e \boldsymbol b\big) \big].
\end{array}
\end{equation}
By comparison of \eqref{f106} and \eqref{f107} we see that the only difference between these two
strain-energy densities resides in the coefficients of the transverse shear deformation terms $ \| \boldsymbol n_0 \boldsymbol E^e \|^2$ and $ \| \boldsymbol n_0 \boldsymbol E^e \boldsymbol b \|^2 $. All other terms and coefficients in \eqref{f106} and \eqref{f107} are identical.
Note that the transverse shear coefficient in the present model \eqref{f107} is the harmonic mean $ \, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\, $ , while in the energy density \eqref{f106} (derived in \cite{Birsan-Neff-MMS-2019}) it is the arithmetic mean $ \, \dfrac{\mu+\mu_c}{2}\, $ . We mention that the same coefficient $ \, \dfrac{2\mu\,\mu_c}{\mu+\mu_c}\, $ for the transverse shear energy has been obtained using $ \Gamma $-convergence in \cite{Neff_Hong_Reissner08} in the case of plates.
This confirms the result \eqref{f107} obtained in our present work. We recall that this coefficient is adjusted in many plate and shell models by a correction factor, the so-called \textit{shear correction factor} (see for instance the discussions in \cite{Altenbach00,Pietraszkiewicz10,Vlachoutsis}).
\textbf{Further remarks:}
1. We remark that the strain-energy density \eqref{f107} obtained in this paper satisfies the invariance properties required by the local symmetry group of isotropic 6-parameter shells. These invariance requirements have been established in a general theoretical framework in \cite[Section 9]{Eremeyev06}.
2. The form of the constitutive relation \eqref{f107} (equivalent to \eqref{f61}) is remarkable, since one cannot find in the literature on 6-parameter shells appropriate expressions of the
strain-energy density $ W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) $ with coefficients depending on the initial curvature $ \boldsymbol b $ and expressed in terms of the three-dimensional material constants. Indeed, the strain-energy densities proposed in the literature are either simple expressions with constant coefficients (see, e.g. \cite[f. (72)]{Birsan-Neff-MMS-2014}, \cite[f. (50)]{Birsan-Neff-L58-2017}, \cite{Pietraszkiewicz-book04,Pietraszkiewicz10}), or general quadratic forms of $ \boldsymbol E^e,\;\boldsymbol K^e $ with unidentified coefficients (see, e.g. \cite[f. (52)]{Eremeyev06}).
3. We mention that the numerical treatment for the related \textit{planar} Cosserat shell model derived in \cite{Neff_plate04_cmt,Neff_plate07_m3as} has been presented in \cite{Sander-Neff-Birsan-16}, using geodesic finite elements.
4. If the thickness $ h $ is sufficiently small, one can show that the strain-energy density $ W_{\mathrm{shell}}(\boldsymbol E^e,\boldsymbol K^e) $ is a coercive and convex function of its arguments. Then, in view of Theorem 6 from \cite{Birsan-Neff-MMS-2014}, one can prove the existence of minimizers for our nonlinear Cosserat shell model.
\subsection{Relation to the classical Koiter shell model}\label{Sect5.3}
In this section, we discuss the relation to the classical shell theory and show that our strain-energy density \eqref{f107} can be reduced, in a certain sense, to the strain-energy of the classical Koiter model.
Thus, if we consider that the three-dimensional material is a Cauchy continuum (with no microrotation), then the Cosserat couple modulus and the curvature energy $ W_{\mathrm{curv}} $ vanish in the model
\eqref{f5}-\eqref{f6}:
\begin{equation}\label{f108}
\mu_c = 0, \qquad W_{\mathrm{curv}}\equiv 0 .
\end{equation}
Hence, the fourth order constitutive tensor for shells \eqref{f89} reduces to
\begin{equation}\label{f109}
C_S^{\alpha\beta\gamma\delta} = \mu\,\big( a^{\alpha\gamma}a^{\beta\delta} + a^{\alpha\delta}a^{\beta\gamma} \big)
+ \,\dfrac{2\lambda\,\mu}{\lambda+2\mu}\; a^{\alpha\beta}a^{\gamma\delta}\,,
\end{equation}
which coincides with the tensor of linear plane-stress elastic moduli that appears in the Koiter model (see, e.g. \cite{Koiter60}, \cite[Sect. 4.1]{Ciarlet05}, \cite[f. (101)]{Steigmann13}). In view of \eqref{f49,6} and \eqref{f108}$ _1\, $, we notice that in this case
\begin{equation}\label{f110}
W_{\mathrm{mixt}}(\boldsymbol S)= W_{\mathrm{Koit}}( \boldsymbol S),
\end{equation}
where
\begin{equation}\label{f110,5}
W_{\mathrm{Koit}}(\boldsymbol S) := \mu\,\|\mathrm{sym}\, \boldsymbol S\|^2 + \dfrac{\lambda\, \mu}{\lambda+2 \mu}\,(\mathrm{tr}\, \boldsymbol S)^2
\end{equation}
is the quadratic form appearing in the Koiter model. We recall that the areal strain-energy density for Koiter shells has the expression \cite{Koiter60,Ciarlet05,Steigmann13}
\begin{equation}\label{f111}
h\,W_{\mathrm{Koit}}(\boldsymbol \varepsilon) + \dfrac{h^3}{12}\,W_{\mathrm{Koit}}(\boldsymbol \rho),
\end{equation}
where the \textit{change of metric} tensor $ \boldsymbol \varepsilon $ and the \textit{change of curvature} tensor $ \boldsymbol \rho $ are the nonlinear shell strain measures, which are given by
\begin{equation}\label{f112}
\begin{array}{l}
\boldsymbol \varepsilon \,=\, \dfrac12\,\big( \boldsymbol m,_\alpha \cdot\, \boldsymbol m,_\beta -a_{\alpha\beta}\big) \,\boldsymbol a^\alpha \otimes \boldsymbol a^\beta\, =\, \dfrac12\,\big[\, (\mathrm{Grad}_s\boldsymbol m)^T (\mathrm{Grad}_s\boldsymbol m) - \boldsymbol a\,\big],
\vspace{6pt}\\
\boldsymbol \rho \,= \,\big( \boldsymbol n\cdot \boldsymbol m,_{\alpha\beta} - \boldsymbol n_0\cdot \boldsymbol a_{\alpha,\beta}\big) \,\boldsymbol a^\alpha \otimes \boldsymbol a^\beta\, =\, - (\mathrm{Grad}_s\boldsymbol m)^T (\mathrm{Grad}_s\boldsymbol n) - \boldsymbol b\,.
\end{array}
\end{equation}
Here, $ \boldsymbol n $ designates the unit normal vector to the deformed midsurface and we note that $ \boldsymbol \varepsilon $ and $ \boldsymbol \rho $ are symmetric planar tensors.
To obtain the classical shell model as a special case of our approach, we adopt the Kirchhoff-Love hypotheses. Thus, we assume that the reference unit normal $ \boldsymbol n_0 $ becomes after deformation the unit normal to the deformed midsurface, i.e. $ \boldsymbol n_0 $ transforms to $ \boldsymbol n $. But since we have $ \boldsymbol Q_e\boldsymbol n_0 = \boldsymbol Q_e\boldsymbol d_3^0 = \boldsymbol d_3\, $, this assumption means that
\begin{equation}\label{f113}
\boldsymbol n = \boldsymbol d_3\,.
\end{equation}
Then, we have $ \;\boldsymbol d_3\cdot\boldsymbol m,_\alpha = \boldsymbol n\cdot\boldsymbol m,_\alpha =0\; $ and the transverse shear deformations vanish, since
\begin{equation}\label{f114}
\boldsymbol n_0 \boldsymbol E^e = \boldsymbol n_0
\big(\boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol m - \boldsymbol a\big)
= \big(\boldsymbol n_0
\boldsymbol Q_e^T\big)\mathrm{Grad}_s\boldsymbol m
= \boldsymbol d_3 \big( \boldsymbol m,_\alpha \otimes \boldsymbol a^\alpha\big)
= (\boldsymbol d_3 \cdot \boldsymbol m,_\alpha ) \boldsymbol a^\alpha
=\boldsymbol 0.
\end{equation}
This shows that the shell strain tensor is a planar tensor in this case, i.e.
\[
\boldsymbol E^e = E^e_{\alpha\beta} \boldsymbol a^\alpha\otimes \boldsymbol a^\beta \qquad \mathrm{and}\qquad
\boldsymbol a\boldsymbol E^e = \boldsymbol E^e .
\]
In view of \eqref{f108}, \eqref{f114} and $ \,\boldsymbol b\, \boldsymbol b^* =K\boldsymbol a\, $, we can put the strain-energy density \eqref{f107} in the following reduced form
\begin{equation}\label{f115}
\widetilde W_{\mathrm{shell}} =
\Big(h+ K\, \dfrac{h^3}{12}\Big) W_{\mathrm{mixt}}\big( \boldsymbol E^e\big)
+\,\dfrac{h^3}{12}\, W_{\mathrm{mixt}}\big( \boldsymbol E^e\boldsymbol b
+ \boldsymbol c \boldsymbol K^e \big)
-2\;\dfrac{h^3}{12}\, W_{\mathrm{mixt}}\big( \boldsymbol E^e\,,\, ( \boldsymbol E^e\boldsymbol b
+ \boldsymbol c \boldsymbol K^e ) \boldsymbol b^* \big).
\end{equation}
We see that the right-hand side of \eqref{f115} is a quadratic form of the planar tensors $ \boldsymbol E^e $ and $ \boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e $. Let us express these two tensors in terms of the Koiter shell strain measures $ \boldsymbol \varepsilon $ and $ \boldsymbol \rho $.
From \eqref{f112}$ _1 $ and \eqref{f31}$ _1 $ it follows
\begin{equation}\label{f115,5}
\begin{array}{rl}
\boldsymbol \varepsilon =&
\dfrac12\,\big[\, (\boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol m)^T (\boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol m) - \boldsymbol a\,\big]
\;=\;
\dfrac12\,\big[\, (\boldsymbol E^e+ \boldsymbol a)^T (\boldsymbol E^e+ \boldsymbol a) - \boldsymbol a\,\big]
\vspace{6pt}\\
=& \dfrac12\,\big(\boldsymbol E^{e,T}\boldsymbol E^e + \boldsymbol a\boldsymbol E^e+ \boldsymbol E^{e,T}\boldsymbol a\big)
\;=\; \dfrac12\,\boldsymbol E^{e,T}\boldsymbol E^e + \mathrm{sym}\big(\boldsymbol a\boldsymbol E^e\big),
\end{array}
\end{equation}
which means
\begin{equation}\label{f116}
\mathrm{sym}\,\boldsymbol E^e\;=\; \boldsymbol \varepsilon \,-\, \dfrac12\,\boldsymbol E^{e,T}\boldsymbol E^e \,.
\end{equation}
Similarly, using \eqref{f112}$ _2\, $, \eqref{f113} and the relation $ \;\boldsymbol Q_e^T \mathrm{Grad}_s\boldsymbol d_3 = \boldsymbol c \boldsymbol K^e -\boldsymbol b \;$ (see \cite[f. (70)]{Birsan-Neff-MMS-2019}), we find
\begin{equation}\label{f117}
\begin{array}{rl}
\boldsymbol \rho =&
- (\boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol m)^T (\boldsymbol Q_e^T\mathrm{Grad}_s\boldsymbol d_3) - \boldsymbol b
\;=\;
- (\boldsymbol E^e+ \boldsymbol a)^T (\boldsymbol c \boldsymbol K^e -\boldsymbol b) - \boldsymbol b
\vspace{6pt}\\
=& -\boldsymbol E^{e,T}\boldsymbol c \boldsymbol K^e - \boldsymbol c \boldsymbol K^e + \boldsymbol E^{e,T}\boldsymbol b
\;=\; -\boldsymbol E^{e,T}\boldsymbol c \boldsymbol K^e - \big( \boldsymbol E^e\boldsymbol b
+\boldsymbol c \boldsymbol K^e\big) + 2 \big(\mathrm{sym}\,\boldsymbol E^e\big)\boldsymbol b \,.
\end{array}
\end{equation}
Substituting \eqref{f116} in \eqref{f117}, we derive
\begin{equation}\label{f118}
\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e =
2\, \boldsymbol \varepsilon\,\boldsymbol b - \boldsymbol \rho - \boldsymbol E^{e,T}(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e).
\end{equation}
With the help of \eqref{f116} and \eqref{f118} we can now write the strain-energy \eqref{f115} as a function of the strain measures $ \boldsymbol \varepsilon $ and $ \boldsymbol \rho $\,: for the first term in
\eqref{f115} we obtain (from \eqref{f110,5} and \eqref{f115,5})
\begin{equation}\label{f119}
\begin{array}{rl}
W_{\mathrm{Koit}}(\boldsymbol \varepsilon) & = \; \mu\,\|\, \boldsymbol \varepsilon\,\|^2 +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big( \mathrm{tr} \,\boldsymbol \varepsilon\big)^2
\vspace{6pt}\\
& = \;
\mu\,\|\, \mathrm{sym}\,\boldsymbol E^e + \dfrac12\,\boldsymbol E^{e,T}\boldsymbol E^e \,\|^2 +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big[ \,\mathrm{tr}\, \big(\mathrm{sym}\,\boldsymbol E^e + \dfrac12\,\boldsymbol E^{e,T}\boldsymbol E^e\big)\big]^2 .
\end{array}
\end{equation}
Since our model is physically linear (the strain-energy is quadratic in the strain measures) we can neglect the terms in \eqref{f119} which are more than quadratic in $ \boldsymbol E^e $ and find
\[
W_{\mathrm{Koit}}(\boldsymbol \varepsilon) = \mu\,\|\, \mathrm{sym}\,\boldsymbol E^e \,\|^2 +\,\dfrac{\lambda\,\mu}{\lambda+2\mu}\,\big(\mathrm{tr}\, \boldsymbol E^e \big)^2
\]
i.e.
\begin{equation}\label{f120}
h W_{\mathrm{Koit}}(\boldsymbol \varepsilon) \,= \,h W_{\mathrm{mixt}}(\boldsymbol E^e).
\end{equation}
Thus, the extensional part of our strain-energy density \eqref{f115} coincides in this case with the extensional part of the Koiter model \eqref{f111}.
Similarly, we compute the other two terms of the energy \eqref{f115} and discard the terms of higher than quadratic order in the strain measures
$ \boldsymbol E^e $, $ \boldsymbol K^e $\,: in view of \eqref{f110} and \eqref{f118} we have
\[
\begin{array}{rl}
W_{\mathrm{Koit}}(\boldsymbol \rho) & =\; W_{\mathrm{mixt}}(\boldsymbol \rho)
\;=\;
W_{\mathrm{mixt}}\big(2\, \boldsymbol \varepsilon\,\boldsymbol b - (\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e) - \boldsymbol E^{e,T}(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e)\big)
\vspace{6pt}\\
& = \;
W_{\mathrm{mixt}}\big(2\, \boldsymbol \varepsilon\,\boldsymbol b - (\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e)\big)
\vspace{6pt}\\
& = \;
W_{\mathrm{mixt}}\big(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e\big)
+ 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b\big)
- 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b \,,\, \boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e\big).
\end{array}
\]
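For the reader's convenience we record the elementary expansion used in this and the subsequent steps (stated here only as a reading aid): since $ W_{\mathrm{mixt}} $ is a quadratic form, with associated symmetric bilinear form denoted $ W_{\mathrm{mixt}}(\,\cdot\,,\,\cdot\,) $, we have
\[
W_{\mathrm{mixt}}( \boldsymbol u + \boldsymbol v) \,=\, W_{\mathrm{mixt}}( \boldsymbol u) + W_{\mathrm{mixt}}( \boldsymbol v) + 2\, W_{\mathrm{mixt}}( \boldsymbol u ,\, \boldsymbol v),
\qquad W_{\mathrm{mixt}}( \boldsymbol u ,\, \boldsymbol u) = W_{\mathrm{mixt}}( \boldsymbol u);
\]
the last line above is this identity applied to $ \boldsymbol u = 2\,\boldsymbol \varepsilon\,\boldsymbol b $ and $ \boldsymbol v = -(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e) $.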
It follows
\[
W_{\mathrm{mixt}}(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e) \;=\; W_{\mathrm{Koit}}(\boldsymbol \rho) - 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b\big)
+ 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b \,,\, \boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e\big)
\]
and inserting \eqref{f118} here we find for the second term in the energy \eqref{f115}:
\begin{equation}\label{f121}
\begin{array}{rl}
W_{\mathrm{mixt}}(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e) & = \; W_{\mathrm{Koit}}(\boldsymbol \rho) - 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b\big)
+ 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b \,,\, 2\, \boldsymbol \varepsilon\,\boldsymbol b - \boldsymbol \rho \big)
\vspace{6pt}\\
& = \;
W_{\mathrm{Koit}}(\boldsymbol \rho) + 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b\big)
- 4\,
W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,\boldsymbol b \,,\, \boldsymbol \rho \big).
\end{array}
\end{equation}
For the last term in \eqref{f115} we write with the help of \eqref{f118}:
\begin{equation}\label{f122}
(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e)\boldsymbol b^* \;=\; 2K\, \boldsymbol \varepsilon - \boldsymbol \rho\, \boldsymbol b^* -
\boldsymbol E^{e,T}(\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e)
\boldsymbol b^*
\end{equation}
and derive from \eqref{f116} and \eqref{f122}
\begin{equation}\label{f123}
\begin{array}{l}
W_{\mathrm{mixt}}\big(\boldsymbol E^e, (\boldsymbol E^e\boldsymbol b + \boldsymbol c\boldsymbol K^e)\boldsymbol b^*\big) \; = \;
W_{\mathrm{mixt}}\big(\mathrm{sym}\,\boldsymbol E^e,\, 2K\, \boldsymbol \varepsilon - \boldsymbol \rho\, \boldsymbol b^*\big)
\vspace{6pt}\\
\qquad\qquad = \;
W_{\mathrm{mixt}}\big(\, \boldsymbol\varepsilon\,,\, 2K\, \boldsymbol \varepsilon - \boldsymbol \rho\, \boldsymbol b^*\big)
\;=\; 2K\, W_{\mathrm{Koit}}( \boldsymbol\varepsilon ) - W_{\mathrm{mixt}}\big(\, \boldsymbol\varepsilon\,,\, \boldsymbol \rho\, \boldsymbol b^*\big).
\end{array}
\end{equation}
We substitute \eqref{f120}, \eqref{f121} and \eqref{f123} into \eqref{f115} and obtain
\[
\begin{array}{rl}
\widetilde W_{\mathrm{shell}}(\boldsymbol \varepsilon,\boldsymbol \rho) \; = &
\Big(h+ K\, \dfrac{h^3}{12}\Big) W_{\mathrm{Koit}}\big( \boldsymbol \varepsilon\big)
+\,\dfrac{h^3}{12}\,\Big( W_{\mathrm{Koit}}( \boldsymbol \rho )
+ 4 \, W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\, \boldsymbol b\big)
- 4 \, W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\, \boldsymbol b
\,,\, \boldsymbol \rho\big)
\Big)
\vspace{6pt}\\
&
-2\;\dfrac{h^3}{12} \,\Big( 2K\, W_{\mathrm{Koit}}( \boldsymbol \varepsilon )
- W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon
\,,\, \boldsymbol \rho\, \boldsymbol b^*\big)
\Big),
\end{array}
\]
which can be written in view of \eqref{f110} in the form
\begin{equation}\label{f124}
\begin{array}{c}
\widetilde W_{\mathrm{shell}}(\boldsymbol \varepsilon,\boldsymbol \rho) \, = \,
h\, W_{\mathrm{Koit}}\big( \boldsymbol \varepsilon\big)
+\,\dfrac{h^3}{12}\, W_{\mathrm{Koit}}( \boldsymbol \rho )
+\,\dfrac{h^3}{12}\, \Big[
4 \, W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\, \boldsymbol b
\,,\, \boldsymbol \varepsilon\, \boldsymbol b - \boldsymbol \rho
\big)
- \, W_{\mathrm{mixt}}\big( \boldsymbol \varepsilon\,,\, 3K\,\boldsymbol \varepsilon - 2 \boldsymbol\rho \,\boldsymbol b^*\big)
\Big].
\end{array}
\end{equation}
The terms in the square brackets in \eqref{f124} involve the initial curvature of the shell through the tensor $ \,\boldsymbol b \,$, the cofactor $ \,\boldsymbol b^* = 2H \boldsymbol a - \boldsymbol b\, $ and the determinant $\, K=\mathrm{det}\, \boldsymbol b\, $ (Gau\ss{} curvature). These terms vanish in the case of plates (since $ \boldsymbol b = \boldsymbol 0 $); moreover, they
can also be neglected for sufficiently thin shells, provided the midsurface strain is small.
We note that the corresponding terms in the classical shell theory have been neglected using
similar arguments; see the discussion of the term $ W_3 $ in \cite[f. (57)]{Steigmann13}. Finally, if we retain only the leading extensional and bending terms in \eqref{f124}, we obtain the reduced classical form
\begin{equation}\label{f125}
\widetilde W_{\mathrm{shell}}(\boldsymbol \varepsilon,\boldsymbol \rho)
\, = \,
h\, W_{\mathrm{Koit}}\big( \boldsymbol \varepsilon\big)
+\,\dfrac{h^3}{12}\, W_{\mathrm{Koit}}( \boldsymbol \rho )\,,
\end{equation}
in accordance with the Koiter energy density \eqref{f111}.
In conclusion, our model can be regarded as a generalization of the classical Koiter model in the framework of 6-parameter shell theory.
\noindent
{\small \textbf{Acknowledgements}\quad
This research has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project no.~415894848.}
\end{document}
\begin{document}
\begin{abstract}
Recently, Bollen, Draisma, and Pendavingh have introduced the Lindstr\"om
valuation on the algebraic matroid of a field extension of characteristic~$p$.
Their construction passes through what they call a matroid flock and builds on
some of the associated theory of matroid flocks which they develop. In this
paper, we give a direct construction of the Lindstr\"om valuated matroid
using the theory of inseparable field extensions. In particular, we
give a description of the valuation, the valuated circuits, and the valuated
cocircuits.
\end{abstract}
\maketitle
The algebraic matroid of a field extension records which subsets of a fixed set
of elements of the field are algebraically independent. In characteristic~0, the
algebraic matroid coincides with the linear matroid of the vector configuration
of differentials, and, as a consequence, the class of matroids with algebraic
realizations over a field of characteristic 0 is exactly equivalent to the class
of matroids with linear realizations in characteristic~0~\cite{ingleton}.
However, in positive characteristic, there are strictly more algebraic matroids
than linear matroids, and without an equivalence to linear matroids, the class
of algebraic matroids is not well understood.
Pioneering work of Lindstr\"om has shown the power of first applying well-chosen
powers of the Frobenius morphism to the field elements, before taking
differentials. In particular, he constructed an infinite family of matroids
(the Fano matroid among them) for which any algebraic realization over a
field of finite characteristic, after applying appropriate powers of Frobenius
and taking differentials, yields a linear representation of the same
matroid~\cite{lindstrom}.
In general, a single choice of powers of Frobenius may fail to capture the full
algebraic matroid, and so Bollen, Draisma, and Pendavingh went one step further
by looking at the matroids of differentials after all possible powers of
Frobenius applied to the chosen field elements~\cite{bollen-draisma-pendavingh}.
These matroids fit together to form what they call a \defi{matroid flock}, and
they show that a matroid flock is equivalent to a valuated
matroid~\cite{bollen-draisma-pendavingh}*{Thm.~7}. Therefore, the matroid flock
of differentials defines a valuation on the algebraic matroid of the field
extension, called the \defi{Lindstr\"om valuation} of the algebraic matroid. In
this paper we give a direct construction of this valuation, without reference to
matroid flocks.
We now explain the construction of the Lindstr\"om valuation of an algebraic
matroid. Throughout this paper, we will work with an extension of fields $L
\supset K$ of characteristic $p>0$ as well as fixed elements $x_1, \ldots, x_n
\in L$. We also assume that $L$ is a finite extension of $K(x_1, \ldots, x_n)$;
this can always be arranged, for example, by replacing $L$ with $K(x_1, \ldots, x_n)$. The algebraic matroid
of this extension can be described in terms of its bases, which are subsets $B
\subset E = \{1, \ldots, n\}$ such that the extension of~$L$ over $K(x_B) =
K(x_i : i \in B)$ is algebraic. We recall from~\cite{lang}*{Sec.~V.6} that if
$K(x_B)^{\sep}$ denotes the set of elements of~$L$ which are separable over
$K(x_B)$, then $L$ is a purely inseparable extension of $K(x_B)^{\sep}$, and the
degree of this extension, $[L : K(x_B)^{\sep}]$, is called the \defi{inseparable
degree} and denoted by adding a subscript: $[L : K(x_B)]_i$.
Now, we define a valuation on the algebraic matroid of $L$ as the following
function $\nu$ from the set of bases to $\ZZ$:
\begin{equation}\label{eq:valuation}
\nu(B) = \log_p [L : K(x_B)]_i.
\end{equation}
Note that $\nu(B)$ is finite because we assumed that $L$ was a finitely generated
algebraic extension of $K(x_B)$ and it is an integer because $[L : K(x_B)]_i$ is
the degree of a purely inseparable extension, which is always a power
of~$p$~\cite{lang}*{Cor.~V.6.2}.
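As a quick illustration of \eqref{eq:valuation} (not needed in what follows): if $K = \mathbb F_p$, $L = K(t)$, $n = 1$ and $x_1 = t^p$, then $B = \{1\}$ is a basis, $L \supset K(x_1)$ is purely inseparable of degree $p$, and so $\nu(\{1\}) = \log_p p = 1$; taking $x_1 = t$ instead gives $\nu(\{1\}) = 0$.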
\begin{thm}\label{t:agree}
The function $\nu$ in \eqref{eq:valuation} defines a valuation on the algebraic
matroid of $L \supset K$, such that the associated matroid flock is the matroid
flock of the extension.
\end{thm}
In addition to the valuation given in~\eqref{eq:valuation}, we give
descriptions of the valuated circuits of the Lindstr\"om
valuated matroid in the beginning of Section~\ref{sec:valuated-matroid} and of
the valuated cocircuits and minors in Section~\ref{sec:cocircuits-minors}. The
description of the
circuits gives an algorithm for computing the Lindstr\"om valuated matroid using
Gr\"obner bases, assuming that $L$ is finitely generated over a prime field
(see Remark~\ref{r:computing} for details).
\begin{rmk}\label{r:sign-convention}
There are two different sign conventions used in the literature on valuated
matroids. We use the convention which is compatible with the ``min-plus''
convention in tropical geometry, which is the opposite of what was used in the
original paper of Dress and Wenzel~\cite{dress-wenzel}, but is consistent
with~\cite{bollen-draisma-pendavingh}.
\end{rmk}
\subsection*{Acknowledgments}
I'd like to thank Jan Draisma for useful discussion about the results
in~\cite{bollen-draisma-pendavingh}, which prompted this paper, Rudi Pendavingh
for suggesting the results appearing in
Section~\ref{sec:cocircuits-minors}, and Felipe Rinc\'on for helpful feedback.
The author was supported by NSA Young
Investigator grant H98230-16-1-0019.
\section{The Lindstr\"om valuated matroid}\label{sec:valuated-matroid}
In this section, we verify that the function~\eqref{eq:valuation} from the
introduction is a valuation on the algebraic matroid of the extension $L \supset
K$ and the elements $x_1, \ldots, x_n$. We do this by first constructing the
valuated matroid in terms of its valuated circuits, and then showing that the
corresponding valuation agrees with the function~\eqref{eq:valuation}.
Throughout the rest of the paper, we will use $F$ to denote the Frobenius
morphism $x \mapsto x^p$.
Recall that a (non-valuated) circuit of the algebraic matroid of the elements
$x_1, \ldots, x_n$ in the extension $L \supset K$ is an inclusion-wise minimal
set $C \subset E$ such that $K(x_{C})$ has transcendence degree $\lvert C \rvert
- 1$ over $K$. Therefore, there is a unique (up to scaling) polynomial
relation among the~$x_i$, which we call the \defi{circuit polynomial},
following~\cite{kiraly-rosen-theran}. More precisely, we let $K[X_C]$ be the
polynomial ring whose variables are denoted $X_i$ for $i \in C$. The
aforementioned circuit polynomial is a (unique up to scaling) generator $f_C$ of the kernel of the
homomorphism $K[X_C] \rightarrow K(x_C)$ which sends $X_i$ to $x_i$. We write
this polynomial:
\begin{equation*}
f_{C} = \sum_{\bfu \in J} c_{\bfu} X^{\bfu} \in K[X_C] \subset K[X_E]
\end{equation*}
where $J \subset \ZZ_{\geq 0}^n$ is a finite set of exponents and $c_{\bfu} \neq 0$
for all $\bfu \in J$. Then, we
define $\vecC(f_{C})$ to be the vector in $(\ZZ \cup \{\infty\})^n$ with
components:
\begin{equation}\label{eq:c-function}
\vecC(f_{C})_i =
\min \{\val_p \bfu_i \mid \bfu \in J, \bfu_i \neq 0 \},
\end{equation}
where $\val_p \bfu_i$ denotes the $p$-adic valuation, which is defined to be the
power of~$p$ in the prime factorization of the positive integer $\bfu_i$. If
$\bfu_i = 0$
for all $\bfu \in J$, then we take $\vecC(f_C)_i$ to be $\infty$.
For any vector $\vecC \in (\ZZ \cup
\{\infty\})^n$, the \defi{support} of~$\vecC$, denoted $\supp \vecC$, is the set
$\{i \in E \mid \vecC_i < \infty\}$. Since $f_C$ is a polynomial in the
variables~$X_i$
for $i \in C$, but not in any proper subset of them, the support of $\vecC(f_C)$
is exactly the circuit~$C$.
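For instance (purely as an illustration of \eqref{eq:c-function}, not a claim about any particular extension), if a circuit polynomial were $f_C = X_1^{p} + X_1^{p^2} X_2^{p} + X_2^{p^3}$, then the exponents of $X_1$ appearing in $f_C$ are $p$ and $p^2$, and those of $X_2$ are $p$ and $p^3$, so $\vecC(f_C) = (1, 1, \infty, \ldots, \infty)$.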
We will take the valuated circuits of the Lindstr\"om valuation to be the set of
vectors:
\begin{equation}\label{eq:valuated-circuits}
\mathcal C = \{\vecC(f_C) + \lambda \onevector \mid \mbox{$C$ is a circuit of $L
\supset K$},
\lambda \in \ZZ\}
\subset (\ZZ \cup \{\infty\})^n,
\end{equation}
where $\onevector$ denotes the vector $(1, \ldots, 1)$.
Before verifying that this collection of vectors satisfies the axioms, we prove
the following preliminary lemma relating the
definition in~\eqref{eq:c-function} to the inseparable
degree:
\begin{lem}\label{l:poly-insep-degree}
Let $S \subset E$ be a set of rank $\lvert S \rvert - 1$, and let $C$ be the
unique circuit contained in $S$. If we abbreviate the vector $\vecC(f_C)$ as
$\vecC$, then
\begin{equation*}
[K(x_{S}) : K(x_{S \setminus \{i\}})]_i = p^{\vecC_i}
\end{equation*}
for any $i \in C$.
In particular, $K(x_{S})$ is a separable extension of $K(x_{S \setminus
\{i\}})$ if and only if $\vecC_i = 0$.
\end{lem}
\begin{proof}
For $i \in C$, we let $Y_i$ denote the monomial $X_i^{p^{\vecC_i}}$ in $K[X_S]$.
Then, the polynomial~$f_C$ lies in the polynomial subring $K[X_{S \setminus
\{i\}}, Y_i]$, by the definition of~$\vecC_i$. Similarly, we let $y_i$ denote
the element $x_i^{p^{\vecC_i}} = F^{\vecC_i} x_i$ in $K(x_S)$. Then, $f_C$, as a
polynomial in $K[X_{S \setminus \{i\}}, Y_i]$, is the minimal defining relation
for $K(x_{S\setminus \{i\}}, y_i)$ as an extension of $K(x_{S \setminus
\{i\}})$. By the definition of $\vecC_i$, some term of $f_C$ is of the form
$X^uY_i^a$, where $a$ is not divisible by $p$, and so $\partial f_C/\partial
Y_i$ is a non-zero polynomial. Therefore, $f_C$ is a separable polynomial of
$Y_i$, and so $K(x_{S \setminus \{i\}}, y_i)$ is a separable extension of
$K(x_{S \setminus \{i\}})$.
On the other hand, $K(x_S)$ is a purely inseparable extension of $K(x_{S
\setminus \{i\}}, y_i)$, defined by the minimal relation
\begin{equation*}
x_i^{p^{\vecC_i}} - y_i = 0.
\end{equation*}
Therefore, this extension has degree $p^{\vecC_i}$, which is thus the inseparable
degree $[K(x_S) : K(x_{S \setminus \{i\}})]_i$, as desired.
\end{proof}
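As a simple illustration of Lemma~\ref{l:poly-insep-degree} (not used later): take $L = K(t)$, $x_1 = t$, $x_2 = t^p$ and $S = C = \{1, 2\}$, with circuit polynomial $f_C = X_2 - X_1^{p}$, so that $\vecC(f_C) = (1, 0)$. Indeed, $K(x_S) = K(t)$ is trivially (hence separably) generated over $K(x_{S \setminus \{2\}}) = K(t)$, corresponding to $\vecC_2 = 0$, while $[K(x_S) : K(x_{S \setminus \{1\}})]_i = [K(t) : K(t^p)]_i = p = p^{\vecC_1}$.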
We now verify that the collection~\eqref{eq:valuated-circuits} satisfies the
axioms of valuated circuits. Several equivalent characterizations of valuated
circuits are given in~\cite{murota-tamura}, and we will use the characterization
in the following proposition:
\begin{prop}[Thm.~3.2 in \cite{murota-tamura}]\label{p:valuated-circuits}
A set of vectors $\mathcal C \subset (\ZZ \cup \{\infty\})^n$ is the set of
\defi{valuated circuits} of a valuated matroid if and only if it satisfies the
following properties:
\begin{enumerate}
\item
The collection of sets $\{\supp \vecC \mid \vecC \in \mathcal C\}$ satisfies
the axioms of the circuits of a non-valuated matroid.
\item
If $\vecC$ is a valuated circuit, then $\vecC + \lambda \onevector$ is a
valuated circuit for all $\lambda \in \ZZ$.
\item
Conversely, if $\vecC$ and $\vecC'$ are valuated circuits with $\supp \vecC =
\supp \vecC'$, then $\vecC = \vecC' + \lambda \onevector$ for some integer
$\lambda$.
\item
Suppose $\vecC$ and $\vecC'$ are in $\mathcal C$ such that
\begin{equation*}
\rank(\supp \vecC \cup \supp \vecC') = \lvert \supp \vecC \cup \supp \vecC'
\rvert - 2,
\end{equation*}
and $u, v \in E$ are elements such that $\vecC_u = \vecC_u'$ and $\vecC_v <
\vecC_v' = \infty$. Then there exists a vector $\vecC'' \in \mathcal C$ such
that $\vecC_u'' = \infty$, $\vecC_v'' = \vecC_v$, and $\vecC_i'' \geq \min\{
\vecC_i, \vecC_i'\}$ for all $i \in E$ .
\end{enumerate}
\end{prop}
The first property from Proposition~\ref{p:valuated-circuits} is
equivalent to axioms $\mathrm{VC1}$, $\mathrm{VC2}$, and $\mathrm{MCE}$
from~\cite{murota-tamura} and the three after that are denoted $\mathrm{VC3}$,
$\mathrm{VC3_e}$, $\mathrm{VCE_{loc1}}$, respectively.
\begin{prop}\label{p:circuit-axioms}
The collection~$\mathcal C$ of vectors given in~\eqref{eq:valuated-circuits}
defines the valuated circuits of a valuated matroid.
\end{prop}
\begin{proof}
The first axiom from Proposition~\ref{p:valuated-circuits} follows because each
valuated circuit is constructed to have support equal to a non-valuated circuit.
The second axiom follows immediately from the construction, and the third
follows from the uniqueness of circuit polynomials.
Thus, it remains only to check (4) from Proposition~\ref{p:valuated-circuits}.
Suppose that $\vecC$ and~$\vecC'$ are valuated circuits and $u, v \in E$ are
elements satisfying the hypotheses of condition~(4). We can write $\vecC =
\vecC(f) + \lambda \onevector$ and $\vecC' = \vecC(f') + \lambda'\onevector$ for
circuit polynomials~$f$ and~$f'$ in $K[X_1, \ldots, X_n]$. Note that $\vecC(F^m
f) = \vecC(f) + m \onevector$, and so by either replacing $f$ with $F^m f$ or
replacing $f'$ with $F^m f'$, for some integer~$m$,
we can assume that $\lambda = \lambda'$. Moreover,
since the fourth axiom only depends on the relative values of the entries of
$\vecC$ and $\vecC'$, it is sufficient to check the axiom for $\vecC$
and~$\vecC'$ replaced by $\vecC(f) = \vecC - \lambda \onevector$ and $\vecC(f')
= \vecC' - \lambda \onevector$, respectively.
We now define an injective homomorphism $\psi$ from
the polynomial ring $K[Y_1, \ldots, Y_n]$ to $K[X_1, \ldots, X_n]$ by
\begin{equation*}
\psi(Y_i) = F^{\min\{\vecC_i, \vecC_i'\}} X_i
\end{equation*}
Thus, there exist polynomials $g$ and $g'$ in $K[Y_1, \ldots
Y_n]$ such that $f = \psi(g)$ and $f' =
\psi(g')$. In particular, since $\vecC(g)_i = \vecC_i -
\min\{\vecC_i, \vecC_i'\}$ and $\vecC(g')_i = \vecC_i' - \min\{\vecC_i,
\vecC_i'\}$, our assumptions on $u$ and $v$ imply that $\vecC(g)_u =
\vecC(g')_u = \vecC(g)_v = 0$.
Likewise, we define $y_i = F^{\min\{\vecC_i, \vecC_i'\}} x_i$ so that the
elements $y_i \in L$
satisfy the polynomials $g$ and $g'$. Thus,
Lemma~\ref{l:poly-insep-degree} shows that $g$ is separable in the variable
$Y_v$, and so if $S$ denotes the set $\supp \vecC \cup \supp \vecC'$, then
$K(y_S)$ is a separable extension of $K(y_{S \setminus \{v \}})$. Likewise,
$g'$ is separable in the variable $Y_u$ and doesn't use the variable
$Y_v$, and so $K(y_{S \setminus \{v\}})$ is a separable extension of $K(y_{S
\setminus \{v, u\}})$. Since the composition of separable extensions is
separable, $y_v$ is separable over $K(y_{S \setminus
\{v,u\}})$~\cite{lang}*{Thm.~V.4.5}.
Since algebraic extensions have transcendence degree $0$, the field $K(y_{S
\setminus \{v, u\}})$ has the same transcendence degree over $K$ as $K(y_S)$ does,
and that transcendence degree is $\lvert S
\rvert - 2$, because we assumed that $\rank(S) = \lvert S \rvert - 2$. In
addition, we
have containments
$K(y_{S \setminus \{v,u\}}) \subset K(y_{S \setminus\{u\}}) \subset K(y_{S})$,
so that $K(y_{S \setminus \{u\}})$ also has transcendence degree $\lvert S \rvert
-2$, and therefore there exists a unique (up to scaling) polynomial relation
$g'' \in K[Y_{S \setminus \{u\}}]$ among the elements $y_i$ for $i \in S
\setminus \{u\}$. Since $y_v$ is finite and separable over $K(y_{S \setminus
\{u\}})$, $\vecC(g'')_v = 0$ by Lemma~\ref{l:poly-insep-degree}.
We claim that $\vecC''= \vecC(\psi(g''))$ satisfies the desired conclusions
of the axiom.
First,
\begin{equation*}
\vecC_v'' = \vecC(g'')_v + \min\{\vecC_v, \vecC_v'\} = 0 + \min\{\vecC_v,
\infty\} = \vecC_v,
\end{equation*}
as desired. Similarly,
\begin{equation*}
\vecC''_i = \vecC(g'')_i + \min \{\vecC_i, \vecC_i'\} \geq
\min \{\vecC_i, \vecC_i'\},
\end{equation*}
and, finally, $\vecC''_u = \infty$ because $g''$ was chosen to be a polynomial
in the variables $Y_{S \setminus \{u\}}$.
\end{proof}
\begin{rmk}\label{r:computing}
The valuated circuits defined in Proposition~\ref{p:circuit-axioms} are
effectively computable from a suitable description of $L$ and the $x_i$. More
precisely, suppose $K$ is a finitely generated extension of $\mathbb F_p$ and
$L$ is given as the fraction field of $K[x_1,\ldots, x_n]/I$ for a prime
ideal~$I$.
Then $I$ can be represented in computer algebra software, and the elimination
ideals $I \cap K[x_S]$ can be computed for any subset $S \subset E$ using
Gr\"obner basis methods. The circuits of the algebraic matroid are the minimal
subsets $C$ for which $I \cap K[x_C]$ is not the zero ideal, in which case the
elimination ideal will be principal, generated by the circuit polynomial~$f_C$. By computing all of these elimination
ideals, we can determine the circuits of the algebraic matroid, and from the
corresponding generators, we get the valuated circuits
by the formula~\eqref{eq:valuated-circuits}.
\end{rmk}
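To make the preceding remark concrete, here is a short computation (our own illustrative sketch, not part of the original text; it uses the Python library SymPy, and the device of adjoining the parameters $t_1, t_2, t_3$ to carry out the implicitization is specific to this example) for the monomial realization of Example~\ref{ex:non-fano} below. It computes the elimination ideal $I \cap \mathbb F_2[x_4, \ldots, x_7]$ by a lex Gr\"obner basis and then reads off the vector \eqref{eq:c-function} from the resulting circuit polynomial; the output is consistent with the valuation found in Example~\ref{ex:non-fano}.
\begin{verbatim}
from sympy import symbols, Poly, groebner

t1, t2, t3 = symbols('t1 t2 t3')
x = symbols('x1:8')            # x1, ..., x7

# Relations of the monomial realization of Example ex:non-fano over GF(2):
# x1=t1, x2=t2, x3=t3, x4=t1*t2, x5=t1*t3, x6=t2*t3, x7=t1*t2*t3.
relations = [x[0] - t1, x[1] - t2, x[2] - t3,
             x[3] - t1*t2, x[4] - t1*t3, x[5] - t2*t3,
             x[6] - t1*t2*t3]

# Lex Groebner basis with the variables to be eliminated listed first;
# the basis elements involving only x4,...,x7 generate I /\ GF(2)[x4,...,x7].
G = groebner(relations, t1, t2, t3, *x, order='lex', modulus=2)
elim = [g for g in G.exprs if g.free_symbols <= set(x[3:])]

def val_p(n, p):
    # p-adic valuation of a positive integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def valuated_circuit(f, gens, p):
    # entries of the vector C(f): minimum p-adic valuation of the nonzero
    # exponents of each variable; None plays the role of infinity
    monoms = Poly(f, *gens).monoms()
    vec = []
    for i in range(len(gens)):
        vals = [val_p(m[i], p) for m in monoms if m[i] != 0]
        vec.append(min(vals) if vals else None)
    return vec

for f in elim:
    print(f, valuated_circuit(f, x, 2))
# Expected output: the single circuit polynomial x4*x5*x6 + x7**2 with
# valuated circuit [None, None, None, 0, 0, 0, 1], i.e. (oo,oo,oo,0,0,0,1).
\end{verbatim}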
\begin{ex}\label{ex:toric}
One case where the connection between the Lindstr\"om valuated matroid and linear
algebraic valuated matroids is most transparent is when the variables $x_i$ are
monomials. This example is given in~\cite{bollen-draisma-pendavingh}*{Thm.~45},
but we discuss it here in terms of our description of the valuated circuits.
We let $A$ be any $d \times n$ integer matrix, and then we take $L = K(t_1,
\ldots, t_d)$ for any field $K$ of characteristic~$p$, and we let $x_i$ be the
monomial $t_1^{A_{1i}} \cdots t_d^{A_{di}}$, whose exponents are the $i$th
column of $A$. Then the algebraic matroid of $x_1, \ldots, x_n$ is the same as
the linear matroid of the vector configuration formed by taking the columns
of~$A$. Moreover, we claim that the Lindstr\"om valuated matroid is the same as
the valuated matroid of the same vector configuration with respect to the
$p$-adic valuation on~$\mathbb Q$.
To see this, we look at the valuated circuits of both valuated matroids. A
circuit of the linear matroid is determined by an $n \times 1$ vector $\bfu$ with
minimal support such that $A \bfu = \mathbf 0$. The circuit is the support of
the vector $\bfu$, and the valuated circuit is the entry-wise $p$-adic valuation
of~$\bfu$. The support of~$\bfu$ is also a circuit of $x_1, \ldots, x_n$ with
circuit polynomial
\begin{equation*}
f = X_1^{\bfu^{(+)}_1} \cdots X_n^{\bfu^{(+)}_n} -
X_1^{\bfu^{(-)}_1} \cdots X_n^{\bfu^{(-)}_n}
\end{equation*}
where
\begin{equation*}
\bfu^{(+)}_i = \max\{0, \bfu_i\} \qquad \bfu^{(-)}_i = -\min\{0, \bfu_i\}
\end{equation*}
so that $\bfu = \bfu^{(+)} - \bfu^{(-)}$. Then, since one of
$\val_p(\bfu_i^{(-)})$ and $\val_p(\bfu_i^{(+)})$ equals $\val_p(\bfu_i)$ and
the other is infinite, $\vecC(f)$ is the same as the entry-wise $p$-adic valuation
of $\bfu$, which is the valuated circuit of the linear matroid. Thus, the
valuated circuits of the linear and algebraic matroids are the same.
\end{ex}
\begin{prop}\label{p:valuation-circuits}
The Lindstr\"om valuated matroid given by the circuits
in~\eqref{eq:valuated-circuits} agrees with the valuation~\eqref{eq:valuation}
given in the introduction.
\end{prop}
\begin{proof}
The essential relation between the valuation and the valuated circuits is that
if $B$ is a basis, $u \in B$, $v \in E \setminus B$, and $\vecC$ is a valuated
circuit whose support is contained in $B \cup \{v\}$, then:
\begin{equation}\label{eq:valuated-basis-circuit}
\nu(B) + \vecC_u = \nu(B \setminus \{u\} \cup \{v\}) + \vecC_v
\end{equation}
This relation is used at the beginning of \cite{murota-tamura}*{Sec. 3.1} to
define the valuated circuits in terms of the valuation, and in the other
direction with~(10) from~\cite{murota-tamura}.
In~\eqref{eq:valuated-basis-circuit}, we adopt the convention that $\nu(B
\setminus \{u\} \cup \{v\})$ is $\infty$ if $B \setminus \{u\} \cup \{v \}$ is
not a basis.
The only quantities in \eqref{eq:valuated-basis-circuit} which can
be infinite are $\vecC_u$ and $\nu(B \setminus \{u\} \cup \{v\})$, because if $\vecC_v$
were infinite, then $\supp \vecC$ would be contained in $B$, which contradicts $B$
being a basis. However, $B \setminus \{u\} \cup \{v\}$ is not a basis if and only
if the support of $\vecC$ is contained in $B \setminus \{u \} \cup \{v\}$, which is
true if and only if $\vecC_u = \infty$. Therefore, the left hand side
of~\eqref{eq:valuated-basis-circuit} is infinite if and only if the right hand
side is, so for the rest of the proof, we can assume that all of the terms
of~\eqref{eq:valuated-basis-circuit}
are finite.
By the multiplicativity of inseparable degrees~\cite{lang}*{Cor.~V.6.4}, we have
\begin{align*}
\nu(B) &= \log_p [L : K(x_B)]_i \\
&= \log_p [L : K(x_{B \cup \{v\}})]_i +
\log_p [K(x_{B \cup \{v\}}) : K(x_B)]_i \\
&= \log_p [L : K(x_{B \cup \{v\}})]_i + {\vecC_v},
\end{align*}
by Lemma~\ref{l:poly-insep-degree}. Similarly, we also have
\begin{align*}
\nu(B \setminus \{u\} \cup \{v\}) &=
\log_p [L : K(x_{B \setminus \{u\} \cup \{v\}})]_i \\
&= \log_p [L : K(x_{B \cup \{v\}})]_i
+ \log_p [K(x_{B \cup \{v\}}) : K(x_{B \setminus \{u\} \cup \{v\}})]_i \\
&= \log_p [L : K(x_{B \cup \{v\}})]_i + \vecC_u,
\end{align*}
again, using Lemma~\ref{l:poly-insep-degree} for the last step.
Therefore,
\begin{equation*}
\nu(B) - \vecC_v = \log_p [L : K(x_{B \cup \{v\}})]_i =
\nu(B \setminus \{u\} \cup \{v \}) - \vecC_u,
\end{equation*}
which is just a rearrangement of the desired equation
\eqref{eq:valuated-basis-circuit}.
\end{proof}
Thus, we've proved the first part of Theorem~\ref{t:agree}, namely that the
function~$\nu$ given in~\eqref{eq:valuation} defines a valuation on the
algebraic matroid $M$. In the next section, we turn to the second part of
Theorem~\ref{t:agree} and show that this valuation is compatible with the
matroid flock studied in~\cite{bollen-draisma-pendavingh}.
\section{Matroid flocks}
We now show that the matroid flock defined by the valuated matroid from the
previous section is the same as the matroid flock defined from the extension $L
\supset K$ in~\cite{bollen-draisma-pendavingh}. A \defi{matroid flock} is a
function~$M$ which maps each vector $\alpha \in \ZZ^n$ to a matroid
$M_\alpha$ on the set $E$, such that:
\begin{enumerate}
\item $M_\alpha \slash i = M_{\alpha + e_i} \backslash i$ for all $\alpha \in
\ZZ^n$ and $i \in E$,
\item $M_\alpha = M_{\alpha + \onevector}$ for all $\alpha \in \ZZ^n$.
\end{enumerate}
In the first axiom, the matroids $M_\alpha \slash i$ and $M_{\alpha + e_i} \backslash i$
are the contraction and deletion of the respective matroids with respect to the
single element~$i$.
To any valuated matroid $M$, the associated
matroid flock, which we also denote by $M$, is defined by letting $M_\alpha$ be
the matroid whose bases
consist of those bases of $M$ such that
$\mathbf e_B \cdot \alpha - \nu(B) = g(\alpha)$,
where $\mathbf e_B$ is the indicator vector with entry $(\mathbf e_B)_i = 1$ for
$i \in B$ and $(\mathbf e_B)_i = 0$
otherwise, and where
\begin{equation}\label{eq:def-g}
g(\alpha) = \max\{ \mathbf e_B \cdot \alpha - \nu(B) \mid B \mbox{ is a basis of
$M$}\}.
\end{equation}
Moreover, any matroid flock comes from a valuated matroid in this way by
Theorem~7 in~\cite{bollen-draisma-pendavingh}.
On the other hand, \cite{bollen-draisma-pendavingh} also associates a matroid
flock directly to the extension $L \supset K$ and the elements $x_1, \ldots,
x_n$. Their construction is in terms of algebraic varieties and the tangent
spaces at sufficiently general points. Here, we recast their definition using
the language of field theory and derivations. Define $\tilde L$ to be the
perfect closure of $L$, which is equal to the union
$\bigcup_{k \geq 0} L(x^{1/p^k}_1, \ldots, x_n^{1/p^k})$
of the infinite tower of
purely inseparable extensions of~$L$.
For a vector $\alpha \in \ZZ^n$, we define $F^{-\alpha} x_E$ to be the vector in
$\tilde L^n$ with $(F^{-\alpha} x_E)_i = F^{-\alpha_i} x_i$, and $K(F^{-\alpha}
x_E)$ to be the field generated by these elements. Recall from field theory,
e.g.\ \cite{lang}*{Sec. XIX.3}, that the vector space of differentials
$\Omega_{K(F^{-\alpha}x_E)/ K}$ is defined algebraically over
$K(F^{-\alpha}x_E)$, generated by the differentials $d (F^{-\alpha_i} x_i)$ as
$i$ ranges over the set~$E$. We define $N_\alpha$ to be the matroid on $E$ of
the configuration of these vectors $d(F^{-\alpha_i} x_i)$ in
$\Omega_{K(F^{-\alpha}x_E)/K}$, and then the function $N$ which sends $\alpha$
to $N_\alpha$ is a matroid flock~\cite{bollen-draisma-pendavingh}*{Thm.~34}.
\begin{proof}[Proof of Theorem~\ref{t:agree}]
The function $\nu$ is a valuation on $M$ by
Propositions~\ref{p:circuit-axioms} and~\ref{p:valuation-circuits}, so it only
remains to show that the matroid flock associated to this valuation
coincides with the matroid flock $N$ defined above.
Let $\alpha$ be a vector in $\ZZ^n$. Since both $M$ and $N$ are
matroid flocks, they are invariant under shifting $\alpha$ by the vector
$\onevector$, as in the second axiom of a matroid flock.
Therefore, we can shift
$\alpha$ by a multiple of $\onevector$ such that all entries of $\alpha$ are
non-negative and it suffices to show $M_\alpha = N_\alpha$ in this case.
Now let $B$ be a basis of $M$ and we want to show that the differentials
$d(F^{-\alpha_i}x_i)$, for $i \in B$, form a basis for $\Omega_{K(F^{-\alpha}
x_E) / K}$ if and only if ${\mathbf e_B \cdot \alpha} - \nu(B)$ equals
$g(\alpha)$, as defined in~\eqref{eq:def-g}. Since the field $K(F^{-\alpha}
x_B)$ is generated by the algebraically independent elements $F^{-\alpha_i} x_i$
as $i$ ranges over the elements of $B$, the differentials $d(F^{-\alpha_i}x_i)$ do form a basis for
$\Omega_{K(F^{-\alpha} x_B)/K}$. Moreover, the natural map
$\Omega_{K(F^{-\alpha} x_B)/K} \rightarrow \Omega_{K(F^{-\alpha} x_E)/K}$ is an
isomorphism if and only if the $K(F^{-\alpha} x_E)$ is a separable extension of
$K(F^{-\alpha} x_B)$~\cite{lang}*{Prop.~VIII.5.2}, i.e.\ if and only if its
inseparable degree is~$1$. Therefore, $B$ is a basis for $N_\alpha$ if and only
if $[K(F^{-\alpha} x_E) : K(F^{-\alpha} x_B)]_i = 1$.
We list the inseparable degrees:
\begin{align*}
[L : K(x_B)]_i &= p^{\nu(B)} \\
[K(F^{-\alpha} x_B) : K(x_B)]_i &= p^{\mathbf e_B \cdot \alpha} \\
[K(F^{-\alpha} x_E) : K(x_E)]_i &= p^{m(\alpha)} \\
[L : K(x_E)]_i &= p^\ell
\end{align*}
The first of these equalities is by definition, the second is because
$K(F^{-\alpha} x_B)$
is the purely inseparable extension of $K(x_B)$ defined by adjoining
a $p^{\alpha_i}$-th root of $x_i$ for each~$i \in B$, and the third and fourth we take to
be the
definitions of the integers $m(\alpha)$ and~$\ell$, respectively.
By the
multiplicativity of inseparable degrees, and taking logarithms, we have:
\begin{align}\label{eq:insep-degree}
\log_p [K(F^{-\alpha} x_E) : K(F^{-\alpha} x_B)]_i &=
\log_p [K(F^{-\alpha} x_E) : K(x_B)]_i - \mathbf e_B \cdot \alpha \notag \\
&= m(\alpha) + \log_p [K(x_E) : K(x_B)]_i - \mathbf e_B \cdot \alpha \notag \\
&= m(\alpha) - \ell + \nu(B) - \mathbf e_B \cdot \alpha
\end{align}
As noted above, $B$ is a basis of $N_\alpha$ if and only if the left hand
side of \eqref{eq:insep-degree} is zero, and $B$ is a basis of $M_{\alpha}$ if
and only if $\mathbf e_B \cdot \alpha - \nu(B) = g(\alpha)$.
Thus, it suffices to show that $m(\alpha) - \ell$ equals $g(\alpha)$.
Since \eqref{eq:insep-degree} is always non-negative, we have the inequality
\begin{equation*}
m(\alpha) - \ell \geq {\mathbf e_B \cdot \alpha} - \nu(B)
\end{equation*}
for all bases $B$,
and thus $m(\alpha) - \ell \geq g(\alpha)$. On the other hand, if $m(\alpha) -
\ell > g(\alpha)$, then
\eqref{eq:insep-degree} will always be positive, so no subset of the
differentials $d(F^{-\alpha_i}x_i)$ will form a basis for $\Omega_{K(F^{-\alpha}
x_E)/K}$. However, this would contradict the fact that the complete set of
differentials $d(F^{-\alpha_i}x_i)$ for all $i \in E$ forms a generating set for
$\Omega_{K(F^{-\alpha} x_E)/K}$, and therefore, some subset forms a basis. Thus,
$m(\alpha)$ must equal $g(\alpha) + \ell$, which completes the proof that the two
matroid flocks coincide.
\end{proof}
\begin{rmk}\label{r:equivalence}
By \cite{bollen-draisma-pendavingh}*{Thm.~7}, any matroid flock, such as that of
an algebraic extension, comes from a valuated matroid, but the valuation is not
unique. In particular, two valuations $\nu$ and $\nu'$ are called
\defi{equivalent} if they differ by a shift $\nu'(B) = \nu(B) + \lambda$ for
some constant $\lambda$~\cite{dress-wenzel}*{Def.~1.1}, and equivalent valuations define the same matroid
flock. However, among all equivalent valuations giving the matroid flock of an
algebraic extension, the formula~\eqref{eq:valuation} nevertheless gives a
distinguished valuation. For example, if $L = K(x_E)$, then this distinguished
valuation $\nu$ is the unique representative such that the minimum $\min_B
\nu(B)$ over all bases $B$ is $0$. If $L$ is a proper extension of $K(x_E)$,
then the valuation $\nu$ records the inseparable degree $[L: K(x_E)]_i$, which
was denoted $p^\ell$ in the proof of Theorem~\ref{t:agree}.
\end{rmk}
\begin{ex}\label{ex:non-fano}
We look at the matroid flock and Lindstr\"om valuation of an algebraic
realization of the non-Fano matroid~$M$ over $K = \mathbb F_2$, which is a
special case of the construction
in Example~\ref{ex:toric}. The realization is given by the elements
\begin{align*}
x_1 &= t_1 &
x_3 &= t_3 &
x_5 &= t_1 t_3 &
x_7 &= t_1t_2t_3 \\
x_2 &= t_2 &
x_4 &= t_1 t_2 &
x_6 &= t_2 t_3
\end{align*}
in the field $L = K(t_1, t_2, t_3)$. The differentials of these
elements in $\Omega_{L/K}$ are:
\begin{align*}
dx_1 &= dt_1 &
dx_4 &= t_2 \,dt_1 + t_1\, dt_2 \\
dx_2 &= dt_2 &
dx_5 &= t_3\, dt_1 + t_1 \,dt_3 \\
dx_3 &= dt_3 &
dx_6 &= t_3 \, dt_2 + t_2 \,dt_3 \\
&& dx_7 &= t_2 t_3 \,dt_1 + t_1 t_3 \,dt_2 + t_1 t_2 \,dt_3.
\end{align*}
These vectors are projectively equivalent to the Fano configuration, and,
therefore, the matroid $M_{(0,0,0,0,0,0,0)}$ of the matroid flock is the Fano
matroid. In particular, we have the linear relation $t_3 \, dx_4 + t_2 \,dx_5 +
t_1 \, dx_6 = 0$, among the differentials, even though $\{4, 5, 6\}$ is a basis
of the algebraic matroid.
On the other hand, if we let $\alpha = (-1, -1, -1, 0, 0, 0, -1)$, then
$K(F^{-\alpha}x_E)$ is the subfield $K(x_4, x_5, x_6) \subset L$, because
\begin{align*}
Fx_1 = x_1^2 &= x_4 x_5 x_6^{-1} &
Fx_3 = x_3^2 &= x_4^{-1} x_5 x_6 \\
Fx_2 = x_2^2 &= x_4 x_5^{-1} x_6 &
Fx_7 = x_7^2 &= x_4 x_5 x_6
\end{align*}
Therefore, $\{4, 5, 6\}$ is a basis for the matroid $M_\alpha$.
Using the basis $dx_4, dx_5, dx_6$ for $\Omega_{K(F^{-\alpha}x_E)/K}$, one can
check that the vectors $d(Fx_i)$, for $i = 1, 2, 3, 7$ are all parallel to each
other, and thus a basis of $M$ which contains at least two of these indices is
not a basis for $M_{\alpha}$.
We claim that the Lindstr\"om valuation~$\nu$ of the field extension $L$ of $K$
takes the value $0$ for every basis of $M$ except that $\nu(\{4, 5, 6\}) = 1$.
This can be seen directly from the definition~\eqref{eq:valuation} because one
can check that every basis other than $\{4, 5, 6\}$ generates the field $L$, and
$L \supset K(x_4, x_5, x_6)$ is a degree~$2$, purely inseparable extension.
Alternatively, the fact that the vector configuration of the differentials
$dx_i$ in $\Omega_{L/K}$ realizes the Fano matroid means that its bases are all
bases of $M$ except for $\{4, 5, 6\}$; hence those bases all have
the same (minimal) valuation, while $\{4,5,6\}$ has strictly larger valuation. As in
Remark~\ref{r:equivalence}, the matroid flock only determines the valuation up
to equivalence, so we can take $\nu(B) = 0$ for $B$ a basis of the Fano
matroid. Then, the computation of $M_{\alpha}$ above shows that both $\{4, 5,
6\}$ and $\{3, 5, 6\}$ are bases of $M_\alpha$, and thus,
\begin{equation*}
\mathbf e_{\{4, 5, 6\}}
\cdot \alpha - \nu(\{4, 5, 6\}) =
\mathbf e_{\{3,5,6\}} \cdot \alpha - \nu(\{3, 5, 6\}) = -1 - 0 = -1
\end{equation*}
and so we can solve for $\nu(\{4, 5, 6\}) = 1$.
Finally, a third way of computing the Lindstr\"om valuation is to use
Example~\ref{ex:toric}, which shows that the valuation is the same as that of
the vector configuration given by the columns of the matrix
\begin{equation*}
A =
\begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 1
\end{pmatrix}
\end{equation*}
over the field of rational numbers $\QQ$ with the $2$-adic valuation. The
valuation of a basis of the vector configuration is given by the $2$-adic valuation of the
determinant of the corresponding submatrix. The submatrices of $A$ corresponding to bases
of~$M$ all have determinant $\pm 1$ except for the one with
columns $\{4,
5, 6\}$, whose determinant is $-2$, which has $2$-adic valuation equal to $1$.
\end{ex}
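The determinant computation in the last paragraph is also easy to check mechanically; the following lines (again our own sketch, not part of the paper) print the nonzero maximal minors of $A$ together with their $2$-adic valuations.
\begin{verbatim}
from itertools import combinations
from sympy import Matrix

A = Matrix([[1, 0, 0, 1, 1, 0, 1],
            [0, 1, 0, 1, 0, 1, 1],
            [0, 0, 1, 0, 1, 1, 1]])

def val2(n):
    # 2-adic valuation of a nonzero integer
    n, v = abs(n), 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

for cols in combinations(range(7), 3):
    d = A[:, list(cols)].det()
    if d != 0:  # the chosen columns form a basis
        print(tuple(c + 1 for c in cols), d, val2(d))

# Every basis yields determinant +1 or -1 (valuation 0), except for the
# columns (4, 5, 6), whose determinant is -2 (valuation 1).
\end{verbatim}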
\section{Cocircuits and minors}\label{sec:cocircuits-minors}
In this section, we consider further properties of the Lindstr\"om valuated
matroid which can be understood in terms of the field theory of the extension.
In particular, we give constructions of the valuated cocircuits and minors of
the Lindstr\"om valuated matroid.
First, a \defi{hyperplane} of the algebraic matroid of $L$ is a maximal subset
$H$ of~$E$ such that $L$ has transcendence degree $1$ over $K(x_{H})$. For any
hyperplane~$H$, we define a vector in $(\ZZ \cup
\{\infty\})^n$:
\begin{equation*}
\vecCco(H)_i = \begin{cases}
\infty &\mbox{if } i \in H \\
\log_p [L : K(x_{H \cup \{i\}})]_i & \mbox{if } i \notin H
\end{cases}
\end{equation*}
The expression in the second case is an integer by \cite{lang}*{Cor.~V.6.2} and
finite because, by the assumption that $H$ is a hyperplane, $L$ must be an
algebraic extension of $K(x_{H \cup \{i\}})$, for $i \notin H$.
\begin{prop}
The collection of vectors:
\begin{equation*}
\{\vecCco(H) + \lambda \onevector \mid
H \mbox{ is a hyperplane of the algebraic matroid of } L, \lambda \in \ZZ \}
\end{equation*}
defines the valuated cocircuits of the Lindstr\"om valuation of the field $L$ and the
elements $x_1, \ldots, x_n$.
\end{prop}
\begin{proof}
By definition, the cocircuits of a valuated matroid~$M$ are the circuits of the
dual $M^*$, and the dual valuation is defined by $\nu^*(B^*) = \nu(E \setminus
B^*)$ for any subset $B^* \subset E$ such that $E \setminus B^*$ is a basis
of~$M$. Suppose $B^*$ and $B^* \setminus \{u\} \cup \{v\}$ are bases of $M^*$,
and $\vecCco(H)$ is a cocircuit whose support is contained in $B^* \cup \{v\}$. Then, as in the
proof of Proposition~\ref{p:valuation-circuits}, we have to show the relation:
\begin{equation}\label{eq:cocircuit}
\nu^*(B^*) + \vecCco(H)_u = \nu^*(B^* \setminus \{u\} \cup \{v\}) + \vecCco(H)_v
\end{equation}
We write $B$ for the complement $E \setminus B^*$, which is a basis of~$M$.
We can then expand these expressions using their definitions and
multiplicativity of the inseparable degree:
\begin{align*}
\nu^*(B^*) &= \log_p [L : K(x_{H \cup \{v\}})]_i
+ \log_p [K(x_{H \cup \{v\}}) : K(x_{B})]_i \\
\vecCco(H)_u &= \log_p [L : K(x_{H \cup \{u\}})]_i \\
\nu^*(B^* \setminus \{u\} \cup \{v\}) &=
\log_p [L : K(x_{H \cup \{u\}})]_i \\
&\qquad\qquad + \log_p [K(x_{H \cup \{u\}}) :
K(x_{B \setminus \{v\} \cup \{u\}})]_i \\
\vecCco(H)_v &= \log_p [L : K(x_{H \cup \{v\}})]_i
\end{align*}
Therefore, to show the relation \eqref{eq:cocircuit}, it is sufficient to show
that
\begin{equation}\label{eq:cocircuit-rewritten}
[K(x_{H \cup \{v\}}) : K(x_{B})]_i =
[K(x_{H \cup \{u\}}) : K(x_{B \setminus \{v\} \cup \{u\}})]_i
\end{equation}
We claim that \eqref{eq:cocircuit-rewritten} is true because both sides are
equal to the inseparable degree $[K(x_{H}) : K(x_{B \setminus \{v\}})]_i$.
Indeed, the extensions on either side of \eqref{eq:cocircuit-rewritten} are
given by adjoining to the extension $K(x_H) \supset K(x_{B \setminus \{v\}})$ a
single transcendental element, namely, $x_v$ on the left, and $x_u$ on the
right. Such a transcendental element has no relations with the other elements of
$x_H$ and so doesn't affect the inseparable degree.
\end{proof}
Minors of a valuated matroid are defined in~\cite{dress-wenzel}*{Prop. 1.2
and~1.3}. Note that the definition of the valuation on the minor depends on an
auxiliary choice of a set of vectors, and the valuation is only defined up to
equivalence.
\begin{prop}
Let $F$ and $G$ be disjoint subsets of $E$ (here $F$ denotes a subset of $E$, not the Frobenius morphism). Then the minor $M \backslash G / F$,
denoting the deletion of $G$ and the contraction of~$F$,
is equivalent to the Lindstr\"om valuation of the extension $K(x_{E \setminus G}) \supset
K(x_{F})$ with the elements $x_i$ for $i \in E \setminus (F \cup G)$.
\end{prop}
\begin{proof}
The valuated circuits of the deletion $M \backslash G$ are equal to the restriction of
the valuated circuits $\vecC$ such that $\supp \vecC \cap G = \emptyset$ to the
indices $E \setminus G$. Likewise, the circuits and circuit polynomials of the
algebraic extension
$K(x_{E \setminus G}) \supset K$ are those of $L \supset K$ such that the support and
variable indices, respectively, are disjoint from~$G$. Therefore, the valuated
circuits of the Lindstr\"om matroid of $K(x_{E \setminus G})$ as an extension of $K$ are
the same as those of the deletion~$M \backslash G$.
Dually, the valuated cocircuits of the contraction $M \backslash G / F$ are
the restrictions of the cocircuits $\vecCco$ of $M \backslash G$ such that $\supp
\vecCco \cap F = \emptyset$ to the indices in $E \setminus (F \cup G)$. The hyperplanes of
the extension $K(x_{E \setminus G}) \supset K(x_F)$ are the hyperplanes of
$K(x_{E \setminus G}) \supset K$ which contain $F$ and so the valuated cocircuits are the
valuated cocircuits which are disjoint from $F$ and with indices restricted to
the indices $E \setminus (
F \cup G)$. Therefore, the Lindstr\"om valuated matroid of $K(x_{E \setminus G}) \supset K(x_F)$ is
the same as the minor $M \backslash G / F$.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{bollen-draisma-pendavingh}{article}{
author = {Bollen, Guus P.},
author = {Draisma, Jan},
author = {Pendavingh, Rudi},
title = {Algebraic matroids and Frobenius flocks},
date = {2018},
journal = {Adv. Math.},
volume = {323},
pages = {688--719},
}
\bib{dress-wenzel}{article}{
author = {Dress, Andreas W. M.},
author = {Wenzel, Walter},
title = {Valuated matroids},
journal = {Adv. Math.},
volume = {93},
number = {2},
date = {1992},
pages = {214--250},
}
\bib{ingleton}{article}{
author={Ingleton, A. W.},
title={Representation of matroids},
conference={
title={Combinatorial Mathematics and its Applications (Proc. Conf.,
Oxford, 1969)}},
book={
publisher={Academic Press, London}},
date={1971},
pages={149--167},
}
\bib{kiraly-rosen-theran}{unpublished}{
author = {Kir\'aly, Franz J.},
author = {Rosen, Zvi},
author = {Theran, Louis},
title = {Algebraic matroids with graph symmetry},
year = {2013},
note = {preprint, \arxiv{1312.3777}},
}
\bib{lang}{book}{
author = {Lang, Serge},
title = {Algebra},
year = {2002},
publisher = {Springer},
series = {Graduate Texts in Mathematics},
volume = {211},
}
\bib{lindstrom}{article}{
author = {Lindstr\"om, Bernt},
title = {On the algebraic characteristic set for a class of matroids},
journal = {Proc. Amer. Math. Soc.},
volume = {95},
number = {1},
pages = {147--151},
year = {1985},
}
\bib{murota-tamura}{article}{
author = {Murota, Kazuo},
author = {Tamura, Akihisa},
title = {On circuit valuations of matroids},
journal = {Adv. Appl. Math.},
volume = {26},
pages = {192--225},
year = {2001},
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title[Bijective proofs for Schur function identities]{Bijective
proofs for Schur function identities which imply
Dodgson's condensation formula and Pl\"ucker relations}
\begin{abstract}
We present a ``method'' for bijective proofs for
determinant identities, which is based on translating determinants
to Schur functions by the Jacobi--Trudi identity. We illustrate
this ``method'' by generalizing a bijective construction (which was
first used by Goulden) to a class of Schur function identities, from
which we shall obtain bijective proofs for Dodgson's condensation
formula, Pl\"ucker relations and a recent identity
of the second author.
\end{abstract}
\author{Markus Fulmek\\Michael Kleber}
\address{
Institut f\"ur Mathematik der Universit\"at Wien\\
Strudlhofgasse 4, A-1090 Wien, Austria.\newline\leavevmode\indent
Massachusetts Institute of Technology\\
77 Massachusetts Avenue, Cambridge, MA 02139, USA.
}
\email{
{\tt [email protected]}\\
{\tt [email protected]}
}
\date{\today}
\rm
\maketitle
\def\qp#1#2{[{#1}^{#2}]}
\def\sqp#1#2{s_{\qp{#1}{#2}}}
\def\pa#1{(#1)}
\def\sla#1{s_{\pa{#1}}}
\def\minor#1#2#3#4#5{#1_{\{#2,#3\},\{#4,#5\}}}
\let\epsilon\varepsilon
\newcommand{\schf}[2]{{s}_{#1}\of{#2,\bold{x}}}
\newcommand{\SG}[1]{\boldsymbol{S}_{#1}}
\newcommand{\GF}[1]{\bold{GF}\of{#1}}
\newcommand{\sschf}[2]{{sp}_{#1}\of{#2,\bold{x}}}
\newcommand{\isschf}[2]{{sp}_{n,m}\of{#1,#2;\bold{x},\bold z}}
\newcommand{\osschf}[2]{{sp}_{n,1}\of{#1,#2,\bold{x},z}}
\newcommand{\oschf}[2]{{o}_{#1}\of{#2,\bold{x}}}
\newcommand{\oschftriv}[2]{{o}_{#1}\of{#2,\seqof{1,1,\dots,1}}}
\newcommand{\thmref}[1]{Theorem~\ref{#1}}
\newcommand{\secref}[1]{\S\ref{#1}}
\newcommand{\lemref}[1]{Lemma~\ref{#1}}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\newcommand{\seqof}[1]{\left\langle#1\right\rangle}
\newcommand{\setof}[1]{\left\{#1\right\}}
\newcommand{\of}[1]{\left(#1\right)}
\newcommand{\parof}[1]{\left(#1\right)}
\newcommand{\numof}[1]{\left|#1\right|}
\newcommand{\detof}[2]{\left|{#2}\right|_{#1\times #1}}
\newcommand{\firstcolumn}[1]{
\quad \vdots\quad }
\newcommand{\firstrow}[1]{
\quad \vdots\quad }
\newcommand{\brkof}[1]{\left[#1\right]}
\newcommand{\intof}[1]{\lfloor #1\rfloor}
\newcommand{\absof}[1]{\left|#1\right|}
\newtheorem{thm}{Theorem}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{dfn}[thm]{Definition}
\newtheorem{obs}[thm]{Observation}
\newtheorem{rem}[thm]{Remark}
\font\scalefont = cmti8
\unitlength = 5mm
\thicklines
\newcount\xmin
\newcount\xmax
\newcount\ymin
\newcount\ymax
\newcount\gridwidth
\newcount\gridheight
\newcount\nofxpoints
\newcount\nofypoints
\newcount\dcnta
\newcount\dcntb
\newcount\dcntc
\def\begingrid#1#2#3#4{
\global\xmin = #1
\global\ymin = #2
\global\xmax = #3
\global\ymax = #4
\ifnum\xmin > \xmax\errmessage{PATHS: \xmin > \xmax|}\fi
\ifnum\ymin > \ymax\errmessage{PATHS: \ymin > \ymax|}\fi
\global\gridwidth = \xmax
\global\gridheight = \ymax
\global\advance\gridwidth by -\xmin
\global\advance\gridheight by -\ymin
\nofxpoints = \gridwidth
\advance\nofxpoints by 1
\nofypoints = \gridheight
\advance\nofypoints by 1
\begin{picture}(\gridwidth, \gridheight)(\xmin, \ymin)
\global\dcnta = \ymin
\loop\ifnum\dcnta<\ymax
\makegridline\dcnta
\global\advance\dcnta by 1
\repeat
\makegridline\ymax
}
\def\makegridline#1{
\begingroup
\global\dcntb = \xmin
\loop\ifnum\dcntb<\xmax
\put(\dcntb, #1){\circle*{0.1}}
\global\advance\dcntb by 1
\repeat
\put(\xmax, #1){\circle*{0.1}}
\endgroup
}
\def\makexaxis{\thinlines
\dcnta = \gridwidth
\advance\dcnta by 1
\put(\xmin, 0){\vector(1,0){\dcnta}}
\thicklines
}
\def\makeyaxis{\thinlines
\dcntb = \gridheight
\advance\dcntb by 1
\put(0, \ymin){\vector(0,1){\dcntb}}
\thicklines
}
\def\makeaxes{\thinlines
\dcnta = \gridwidth
\advance\dcnta by 1
\put(\xmin, 0){\vector(1,0){\dcnta}}
\dcntb = \gridheight
\advance\dcntb by 1
\put(0, \ymin){\vector(0,1){\dcntb}}
\thicklines
}
\def\makexscale{
\dcnta = \xmin
\loop\ifnum\dcnta<0
\put(\dcnta,0){\line(0,-1){0.1}}
\put(\dcnta,-0.4){\makebox(0,0)[tr]{
{\scalefont\number\dcnta}}}
\advance\dcnta by 1
\repeat
\dcnta = 1
\dcntb=\xmax
\loop\ifnum\dcnta<\dcntb
\put(\dcnta,0){\line(0,-1){0.1}}
\put(\dcnta, -0.4){\makebox(0,0)[tr]{
{\scalefont\number\dcnta}}}
\advance\dcnta by 1
\repeat
\put(\dcntb,0){\line(0,-1){0.1}}
\put(\dcntb, -0.4){\makebox(0,0)[tr]{{\scalefont\number\dcntb}}}
}
\def\makeyscale{
\dcnta = \ymin
\loop\ifnum\dcnta<0
\put(0,\dcnta){\line(-1,0){0.1}}
\put(-0.3, \dcnta){\makebox(0,0)[r]{
{\scalefont\number\dcnta}}}
\advance\dcnta by 1
\repeat
\dcnta = 1
\loop\ifnum\dcnta<\ymax
\put(0,\dcnta){\line(-1,0){0.1}}
\put(-0.3, \dcnta){\makebox(0,0)[r]{
{\scalefont\number\dcnta}}}
\advance\dcnta by 1
\repeat
\put(0,\ymax){\line(-1,0){0.1}}
\put(-0.3, \ymax){\makebox(0,0)[r]{{\scalefont\number\ymax}}}
}
\def\gridcaption#1{
\dcnta=\xmin
\dcntb=\gridwidth
\divide\dcntb by 2
\advance\dcnta by \dcntb
\dcntb=\ymax
\advance\dcntb by 1
\put(\dcnta,\dcntb){ \begin{picture}(1,1)(0,0)
\put(0.5,0.5){\makebox(0,0){#1}}
\end{picture}
}
}
\newsavebox{\skewdotteddownbox}
\newsavebox{\skewdottedupbox}
\savebox{\skewdotteddownbox}(1,1)[lb]{
\put(0.2,-0.2){\circle*{0.01}}
\put(0.4,-0.4){\circle*{0.01}}
\put(0.6,-0.6){\circle*{0.01}}
\put(0.8,-0.8){\circle*{0.01}}
}
\savebox{\skewdottedupbox}(1,1)[lb]{
\put(0.2,0.2){\circle*{0.01}}
\put(0.4,0.4){\circle*{0.01}}
\put(0.6,0.6){\circle*{0.01}}
\put(0.8,0.8){\circle*{0.01}}
}
\def\downskewline#1#2#3{
\multiput(#1,#2)(1,-1){#3}{\usebox{\skewdotteddownbox}}
}
\def\upskewline#1#2#3{
\multiput(#1,#2)(1,1){#3}{\usebox{\skewdottedupbox}}
}
\newsavebox{\hplainbox}
\savebox{\hplainbox}{\put(0,0){\line(1,0){1}}}
\newsavebox{\hdottedbox}
\savebox{\hdottedbox}(1,0)[bl]{
\put(0.2,0){\circle*{0.01}}
\put(0.4,0){\circle*{0.01}}
\put(0.6,0){\circle*{0.01}}
\put(0.8,0){\circle*{0.01}}
}
\newsavebox{\vplainupbox}
\savebox{\vplainupbox}{\put(0,0){\line(0,1){1}}}
\newsavebox{\vdottedupbox}
\savebox{\vdottedupbox}(0,1)[bl]{
\put(0,0.2){\circle*{0.01}}
\put(0,0.4){\circle*{0.01}}
\put(0,0.6){\circle*{0.01}}
\put(0,0.8){\circle*{0.01}}
}
\newsavebox{\vplaindownbox}
\savebox{\vplaindownbox}{\put(0,0){\line(0,-1){1}}}
\newsavebox{\vdotteddownbox}
\savebox{\vdotteddownbox}(0,1)[bl]{
\put(0,-0.2){\circle*{0.01}}
\put(0,-0.4){\circle*{0.01}}
\put(0,-0.6){\circle*{0.01}}
\put(0,-0.8){\circle*{0.01}}
}
\newsavebox{\skewupbox}
\savebox{\skewupbox}{\put(0,0){\line(1,1){1}}}
\newsavebox{\skewdownbox}
\savebox{\skewdownbox}{\put(0,0){\line(1,-1){1}}}
\newsavebox{\hstep}
\newsavebox{\vstep}
\newsavebox{\sstep}
\newsavebox{\rvstep}
\newsavebox{\rsstep}
\newcount\updownincrement
\newif\iflabelled
\newif\ifelabel
\newcount\lambdabelinfty
\def\setlabelinfty#1{\global\lambdabelinfty=#1}
\newcount\lambdabelcount
\setlabelinfty{1000}
\def\printlabel{
\ifnum\lambdabelcount<\lambdabelinfty{\scalefont\number\lambdabelcount}
\else$\scriptstyle\infty$
\fi
}
\def\uppath#1#2#3{
\global\lambdabelcount=1
\savebox{\hstep}{\usebox{\hplainbox}}
\savebox{\vstep}{\usebox{\vplainupbox}}
\savebox{\sstep}{\usebox{\skewupbox}}
\savebox{\rvstep}{\usebox{\vplaindownbox}}
\savebox{\rsstep}{\usebox{\skewdownbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\thickpoint{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\thickpoint{\dcnta}{\dcntb}
}
\def\updottedpath#1#2#3{
\global\lambdabelcount=1
\savebox{\hstep}{\usebox{\hdottedbox}}
\savebox{\vstep}{\usebox{\vdottedupbox}}
\savebox{\sstep}{\usebox{\skewdottedupbox}}
\savebox{\rvstep}{\usebox{\vdotteddownbox}}
\savebox{\rsstep}{\usebox{\skewdotteddownbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\thickpoint{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\thickpoint{\dcnta}{\dcntb}
}
\def\downpath#1#2#3#4{
\global\lambdabelcount=#4
\savebox{\hstep}{\usebox{\hplainbox}}
\savebox{\vstep}{\usebox{\vplaindownbox}}
\savebox{\sstep}{\usebox{\skewdownbox}}
\savebox{\rvstep}{\usebox{\vplainupbox}}
\savebox{\rsstep}{\usebox{\skewupbox}}
\updownincrement=-1
\dcnta=#1
\dcntb=#2
\thickpoint{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\thickpoint{\dcnta}{\dcntb}
\elabelfalse
}
\def\downdottedpath#1#2#3#4{
\global\lambdabelcount=#4
\savebox{\hstep}{\usebox{\hdottedbox}}
\savebox{\vstep}{\usebox{\vdotteddownbox}}
\savebox{\sstep}{\usebox{\skewdotteddownbox}}
\savebox{\rvstep}{\usebox{\vdottedupbox}}
\savebox{\rsstep}{\usebox{\skewdottedupbox}}
\updownincrement=-1
\dcnta=#1
\dcntb=#2
\thickcircle{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\thickcircle{\dcnta}{\dcntb}
\elabelfalse
}
\def\segment#1#2#3{
\labelledfalse
\savebox{\hstep}{\usebox{\hplainbox}}
\savebox{\vstep}{\usebox{\vplainupbox}}
\savebox{\sstep}{\usebox{\skewupbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\gridpoint{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\gridpoint{\dcnta}{\dcntb}
}
\def\dottedsegment#1#2#3{
\labelledfalse
\savebox{\hstep}{\usebox{\hdottedbox}}
\savebox{\vstep}{\usebox{\vdottedupbox}}
\savebox{\sstep}{\usebox{\skewdottedupbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\gridcircle{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\gridcircle{\dcnta}{\dcntb}
}
\def\uparrowpath#1#2#3{
\global\lambdabelcount=1
\savebox{\hstep}{\usebox{\hplainbox}}
\savebox{\vstep}{\usebox{\vplainupbox}}
\savebox{\sstep}{\usebox{\skewupbox}}
\savebox{\rvstep}{\usebox{\vplaindownbox}}
\savebox{\rsstep}{\usebox{\skewdownbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\thickpoint{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\put(\dcnta,\dcntb){\vector(0,1){0.3}}
}
\def\updottedarrowpath#1#2#3{
\global\lambdabelcount=1
\savebox{\hstep}{\usebox{\hdottedbox}}
\savebox{\vstep}{\usebox{\vdottedupbox}}
\savebox{\sstep}{\usebox{\skewdottedupbox}}
\savebox{\rvstep}{\usebox{\vdotteddownbox}}
\savebox{\rsstep}{\usebox{\skewdotteddownbox}}
\updownincrement=1
\dcnta=#1
\dcntb=#2
\thickcircle{\dcnta}{\dcntb}
\afterassignment\handlenextstep\let\next=#3\mendlist
\put(\dcnta,\dcntb){\vector(1,0){0.3}}
}
\newsavebox{\dummybox}
\def\xboxes{\savebox{\dummybox}{\usebox{\vstep}}
\savebox{\vstep}{\usebox{\rvstep}}
\savebox{\rvstep}{\usebox{\dummybox}}
\savebox{\dummybox}{\usebox{\sstep}}
\savebox{\sstep}{\usebox{\rsstep}}
\savebox{\rsstep}{\usebox{\dummybox}}
}
\def\mendlist{\mendlist}
\def\handlenextstep{
\ifx\next\mendlist
\let\next=\relax
\else
\ifx\next-
\put(\dcnta,\dcntb){\usebox{\hstep}}
\iflabelled\put(\dcnta,\dcntb){\makebox(1,0.6){\printlabel}}\fi
\ifelabel\advance\lambdabelcount by \updownincrement\fi
\advance\dcnta by 1
\else
\ifx\next|
\advance\lambdabelcount by \updownincrement
\put(\dcnta,\dcntb){\usebox{\vstep}}
\advance\dcntb by\updownincrement
\else
\ifx\next/
\put(\dcnta,\dcntb){\usebox{\sstep}}
\iflabelled
\ifnum\updownincrement>0
\advance\lambdabelcount by 1
\put(\dcnta,\dcntb){\begin{picture}(1,1)(0,0)
\put(-0.5,0.5){\printlabel}\end{picture}}
\else
\put(\dcnta,\dcntb){\begin{picture}(1,1)(0,0)
\put(0.3,-0.5){\printlabel}\end{picture}}
\fi
\fi
\advance\dcnta by 1
\advance\dcntb by\updownincrement
\else
\ifx\next*
\ifelabel\elabelfalse\else\elabeltrue\fi
\xboxes
\updownincrement=-\updownincrement
\else
\errmessage{PATHS: Wrong symbol.}
\fi
\fi
\fi
\fi
\let\next=\getnextstep
\fi
\next
}
\def\thickpoint#1#2{\put(#1, #2){\circle*{0.4}}}
\def\thickcircle#1#2{\put(#1, #2){\circle{0.4}}}
\def\thincircle#1#2{\put(#1, #2){\circle{0.2}}}
\def\gridpoint#1#2{\put(#1, #2){\circle*{0.2}}}
\def\gridcircle#1#2{\put(#1, #2){\circle{0.2}}}
\def\hlabel#1#2#3{\put(#1,#2){\makebox(1,0.6){\scalefont #3}}}
\def\lambdabelpointal#1#2#3{
\put(#1,#2){
\begin{picture}(1,1)(0,0)
\put(-0.85,0.35){$\scriptstyle #3$}
\end{picture}
}
}
\def\lambdabelpointar#1#2#3{
\put(#1,#2){
\begin{picture}(1,1)(0,0)
\put(-0.15,0.35){$\scriptstyle #3$}
\end{picture}
}
}
\def\lambdabelpointbl#1#2#3{
\put(#1,#2){
\begin{picture}(1,1)(0,0)
\put(-0.85,-0.8){$\scriptstyle #3$}
\end{picture}
}
}
\def\lambdabelpointbr#1#2#3{
\put(#1,#2){
\begin{picture}(1,1)(0,0)
\put(-0.15,-0.8){$\scriptstyle #3$}
\end{picture}
}
}
\def\endpath{\end{picture}}% macro name reconstructed; the original name was lost in extraction
\def\drawrect#1#2#3#4{
\put(#1,#2){
\begin{picture}(#3,#4)(0,0)\thicklines
\put(0,0){\line(0,1){#4}}
\put(0,0){\line(1,0){#3}}
\put(#3,#4){\line(0,-1){#4}}
\put(#3,#4){\line(-1,0){#3}}
\end{picture}
}
}
\newsavebox{\smallrectbox}
\savebox{\smallrectbox}(0.2,0.2)[bl]{
\thinlines
\put(-0.2,-0.2){\line(1,0){0.4}}
\put(0.2,-0.2){\line(0,1){0.4}}
\put(0.2,0.2){\line(-1,0){0.4}}
\put(-0.2,0.2){\line(0,-1){0.4}}
}
\def\gridrect#1#2{\put(#1, #2){\usebox{\smallrectbox}}}
\newcount\rowcount
\newcount\columncount
\def\begintableau#1#2{
\global\rowcount=#1
\begin{picture}(#2,#1)(0,0)
\thinlines
}
\def\endtableau{\end{picture}}% macro name reconstructed; the original name was lost in extraction
\def\row#1{
\global\advance\rowcount by -1
\global\columncount=0
\afterassignment\handlenextentry\let\next=#1\mendlist
}
\def\mendlist{\mendlist}
% \getnextentry is a reconstructed name (same extraction issue as above)
\def\getnextentry{\afterassignment\handlenextentry\let\next= }
\def\handlenextentry{
\ifx\next\mendlist
\let\next=\relax
\else
\ifx\next=-
\put(\columncount,\rowcount){\framebox(1,1){\space}}
\global\advance\columncount by 1
\else
\put(\columncount,\rowcount){\framebox(1,1){$\next$}}
\global\advance\columncount by 1
\fi
\let\next=\getnextentry
\fi
\next
}
\section{Introduction}
\label{intro}
Usually, bijective proofs of determinant identities involve
the following steps (cf., e.g., \cite[Chapter 4]{Stanton-White} or \cite{Zeilberger,Zeilberger2}):
\begin{itemize}
\item Expansion of the determinant as sum over the symmetric group,
\item Interpretation of this sum as the generating function of some
set of combinatorial objects which are equipped with some signed weight,
\item Construction of an explicit weight-- and sign--preserving bijection
between the respective combinatorial objects, maybe supported by the
construction of a sign--reversing involution for certain objects.
\end{itemize}
Here, we will present another ``method'' of bijective proofs for
determinant identities, which involves the following steps:
\begin{itemize}
\item First, we replace the entries $a_{i,j}$ of the determinants by
$h_{\lambda_i-i+j}$ (where $h_m$ denotes the $m$--th complete
homogeneous symmetric function),
\item Second, by the Jacobi--Trudi identity we transform the original
determinant identity into an equivalent identity for Schur functions,
\item Third, we obtain a bijective proof for this equivalent
identity by using the interpretation of Schur functions in
terms of nonintersecting lattice paths.
(In this paper, we shall achieve this with a
construction which was used for the proof of a Schur function
identity \cite[Theorem~1.1]{Fulmek:Ciucu} conjectured by Ciucu.)
\end{itemize}
We show how this method applies naturally to provide elegant bijective
proofs of Dodgson's Condensation Rule \cite{Dodgson}
and the Pl\"ucker relations.
The bijective construction we use here was (to the best of our knowledge) first used by I.~Goulden \cite{Goulden:Schur}.
(The first author is grateful to A.~Hamel \cite{Hamel}
for drawing his attention to Goulden's work.)
Goulden's exposition, however, left open a small gap, which we shall close
here.
The paper is organized as follows: In Section~\ref{expo}, we
present the theorems we want to prove, and explain Steps 1 and 2
of our above ``method'' in greater detail. In
Section~\ref{background}, we briefly recall the combinatorial
definition of Schur functions and the Gessel--Viennot approach.
In Section~\ref{bijection}, we explain the bijective construction
employed in Step 3 of our ``method'' by
using the proof of a Theorem from Section~\ref{expo} as an illustrating
example. There, we shall also close the small gap in Goulden's
work. In Section~\ref{general}, we ``extract'' the general
structure underlying the bijection: As it turns out, this is just a
simple graph--theoretic statement. From this we may easily derive a
general ``class'' of Schur function identities which follow from
these considerations. In order to show that these quite general
identities specialize to something useful, we shall deduce the
Pl\"ucker relations, using again our ``method''. In
Section~\ref{kleber}, we turn to a theorem
\cite[Theorem 3.2]{Kleber}
recently proved by the second author by using Pl\"ucker relations: We explain how this theorem fits into our
construction and give a bijective proof using inclusion--exclusion.
\section{Exposition of identities and proofs}
\label{expo}
The origin of this paper was the attempt to give a bijective proof of
the following identity for Schur functions, which arose in work of Kirillov \cite{Kirillov}:
\begin{thm}
\label{thm:kirillov}
Let $c,r$ be positive integers; denote by $\qp{c}{r}$ the partition
consisting of $r$
rows with constant length $c$. Then we have the following identity
for Schur functions:
\begin{equation}
\label{eq:kirillov}
\left(\sqp{c}{r}\right)^2=
\sqp{c}{r-1}\cdot\sqp{c}{r+1}+\sqp{(c-1)}{r}\cdot\sqp{(c+1)}{r}.
\end{equation}
\end{thm}
(See \cite[7.10]{Stanley2}, \cite{Fulton-Harris:Representation-Theory}, \cite{Macdonald:Symmetric-Functions} or \cite{Sagan:The-Symmetric}
for background information on Schur functions; in order to keep our exposition
self--contained, a combinatorial definition is given in
Section~\ref{background}.)
The identity \eqref{eq:kirillov} was recently considered by
the second author
\cite[Theorem 4.2]{Kleber}, who also gave a bijective proof,
and generalized it considerably \cite[Theorem 3.2]{Kleber}.
The construction we use here
does in fact prove a more general statement:
\begin{thm}
\label{thm:general}
Let $\pa{\lambda_1,\lambda_2,\dots,\lambda_{r+1}}$ be a
partition, where $r>0$ is some integer.
Then we have the following identity
for Schur functions:
\begin{multline}
\label{eq:general}
\sla{\lambda_1,\dots,\lambda_r}\cdot\sla{\lambda_2,\dots,\lambda_{r+1}}\\
=\sla{\lambda_2,\dots,\lambda_r}\cdot\sla{\lambda_1,\dots,\lambda_{r+1}} +
\sla{\lambda_2-1,\dots,\lambda_{r+1}-1}\cdot
\sla{\lambda_1+1,\dots,\lambda_r+1}.
\end{multline}
\end{thm}
Clearly, Theorem~\ref{thm:kirillov} is a direct consequence of Theorem~\ref{thm:general}: Simply set $\lambda_1=\dots=\lambda_{r+1}=c$.
Theorem~\ref{thm:general}, however, is in fact equivalent to Dodgson's
condensation formula \cite{Dodgson}, which is also known as
Desnanot--Jacobi's adjoint matrix theorem (see \cite[Theorem 3.12]{Bressoud}:
According to \cite{Bressoud}, Lagrange discovered this theorem for $n=3$,
Desnanot proved it for $n\leq 6$
and Jacobi published the general theorem \cite{Jacobi},
see also \cite[vol.~I, pp.~142]{MuirAB}):
\begin{thm}
\label{lem:general-minors}
Let $A$ be an arbitrary $(r+1)\times(r+1)$--determinant. Denote by
$\minor{A}{r_1}{r_2}{c_1}{c_2}$
the minor consisting of rows $r_1,r_1+1,\dots,r_2$ and columns
$c_1,c_1+1,\dots,c_2$ of $A$. Then we have the following identity:
\begin{multline}
\label{eq:general-minors}
\minor{A}{1}{r+1}{1}{r+1}\minor{A}{2}{r}{2}{r} \\=
\minor{A}{1}{r}{1}{r}\minor{A}{2}{r+1}{2}{r+1} -
\minor{A}{2}{r+1}{1}{r}\minor{A}{1}{r}{2}{r+1}.
\end{multline}
\end{thm}
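\begin{rem}
Theorem~\ref{lem:general-minors} is easily checked numerically. The following
small Python sketch (ours, purely illustrative; the helper \texttt{minor}
mimics the notation $\minor{A}{r_1}{r_2}{c_1}{c_2}$) verifies the identity for
a random integer matrix using exact arithmetic:
\begin{verbatim}
import random
from sympy import Matrix

def minor(A, r1, r2, c1, c2):
    # determinant of the minor with rows r1..r2 and columns c1..c2 (1-based)
    return Matrix([row[c1 - 1:c2] for row in A[r1 - 1:r2]]).det()

r = 4                                       # A is an (r+1) x (r+1) matrix
A = [[random.randint(-5, 5) for _ in range(r + 1)] for _ in range(r + 1)]
lhs = minor(A, 1, r + 1, 1, r + 1) * minor(A, 2, r, 2, r)
rhs = (minor(A, 1, r, 1, r) * minor(A, 2, r + 1, 2, r + 1)
       - minor(A, 2, r + 1, 1, r) * minor(A, 1, r, 2, r + 1))
assert lhs == rhs
\end{verbatim}
\end{rem}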
The transition from Theorem~\ref{lem:general-minors} to
Theorem~\ref{thm:general} is established by the Jacobi--Trudi identity (see
\cite[I, (3.4)]{Macdonald:Symmetric-Functions}), which
states that for any partition $\lambda=(\lambda_1,\dots,\lambda_r)$
of length $r$ we have
\begin{equation}
\label{eq:Jacobi-Trudi}
s_\lambda = \det(h_{\lambda_i-i+j})_{i,j=1}^r\;,
\end{equation}
where $h_m$ denotes the $m$--th complete homogeneous symmetric function:
Setting $A_{i,j}:= h_{\lambda_i-i+j}$ for $1\leq i,j\leq r+1$ in
Theorem~\ref{lem:general-minors} and using identity \eqref{eq:Jacobi-Trudi}
immediately yields \eqref{eq:general}.
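\begin{rem}
In the same illustrative spirit, the following Python sketch (again ours, not
part of the proof; the helper names \texttt{h} and \texttt{schur} are our own)
evaluates the Schur functions in \eqref{eq:general} via the Jacobi--Trudi
determinant \eqref{eq:Jacobi-Trudi} at fixed numerical values of the variables
and checks the identity for a sample partition:
\begin{verbatim}
from itertools import combinations_with_replacement
from math import prod
from sympy import Matrix

def h(m, xs):
    # complete homogeneous symmetric polynomial h_m at the numbers xs
    if m < 0:
        return 0
    return sum(prod(c) for c in combinations_with_replacement(xs, m))

def schur(lam, xs):
    # Jacobi-Trudi: s_lambda = det(h_{lambda_i - i + j}), 1 <= i, j <= r
    r = len(lam)
    return Matrix(r, r, lambda i, j: h(lam[i] - (i + 1) + (j + 1), xs)).det()

xs = [2, 3, 5, 7]            # any fixed values of x_1, ..., x_N
lam = [5, 4, 3, 2]           # the partition (lambda_1, ..., lambda_{r+1}), r = 3
lhs = schur(lam[:-1], xs) * schur(lam[1:], xs)
rhs = (schur(lam[1:-1], xs) * schur(lam, xs)
       + schur([p - 1 for p in lam[1:]], xs) * schur([p + 1 for p in lam[:-1]], xs))
assert lhs == rhs
\end{verbatim}
\end{rem}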
That the seemingly weaker statement of Theorem~\ref{thm:general} does
in fact imply Theorem~\ref{lem:general-minors} is due to the
following observation: Choose $\lambda$ so that the numbers
$\lambda_i-i+j$ are all distinct for $1\leq i,j\leq (r+1)$ (e.g.,
$\lambda=\left((r+1)r,r^2,(r-1)r,\dots,r\right)$ would suffice)
and rewrite \eqref{eq:general} as a
determinantal expression according to the Jacobi--Trudi identity
\eqref{eq:Jacobi-Trudi}.
This yields a special case of identity
\eqref{eq:general-minors}
with $A_{i,j}:= h_{\lambda_i-i+j}$ as above.
Now recall that the complete homogeneous symmetric functions are
algebraically independent (see, e.g., \cite{Sturmfels}), whence the
identity
\eqref{eq:general-minors} is true for generic $A_{i,j}$. For later
use, we record this simple observation in a more general fashion:
\begin{obs}
\label{obs:christian}
Let ${\mathcal I}$ be an identity involving determinants
of homogeneous symmetric
functions $h_{n}$, where $n$ is some nonnegative integer.
Then ${\mathcal I}$ is, in fact, equivalent to a
general determinant identity which is obtained from ${\mathcal I}$ by considering
each $h_n$ as a formal variable.
\end{obs}
So far, the promised proof (to be given in Section~\ref{bijection})
of Theorem~\ref{thm:general}
would give a new bijective proof of Dodgson's
Determinant--Evaluation Rule (a beautiful bijective proof was also
given by Zeilberger \cite{Zeilberger}). But we can do a little
better: Our bijective construction does, in fact, apply to a quite
general ``class of Schur function identities'', a special case of
which implies the Pl\"ucker relations (also known as
Grassmann--Pl\"ucker syzygies), see, e.g., \cite{Sturmfels}, or
\cite[Chapter 3, Section 9, formula II]{Turnbull}:
\begin{thm}[Pl\"ucker relations]
\label{thm:pluecker}
Consider an arbitrary $2n\times n$--matrix with row indices
$1,2,\dots,2n$.
Denote the $n\times n$--minor
of this matrix consisting of rows $i_1,\dots,i_n$ by $[i_1,\dots,i_n]$.
Consider some fixed
list of integers $1\leq r_1< r_2<\dots< r_k\leq n$, $0\leq k\leq n$.
Then we have:
\begin{multline}
\label{eq:pluecker}
[1,2,\dots,n]\cdot [n+1,n+2,\dots,2n]=\\
\sum_{n+1\leq s_1< s_2<\dots< s_k\leq 2n}
[1,\dots,s_1,\dots,s_k,\dots,n]\cdot
[n+1,\dots,r_1,\dots,r_k,\dots,2n],
\end{multline}
where the notation of the summands means that rows $r_i$ were exchanged
with rows $s_i$, respectively.
\end{thm}
This is achieved by observing that \eqref{eq:pluecker} can be specialized
to a Schur function identity of the form
\begin{equation*}
\label{eq:pluecker-schur}
s_\lambda s_\mu=
\sum_{\lambda^\prime\!\!,\,\mu^\prime} s_{\lambda^\prime}s_{\mu^\prime},
\end{equation*}
where $\lambda$ and $\mu$ are partitions with the same number $n$ of parts,
and where the sum is over certain pairs $\lambda^\prime,\mu^\prime$ derived
from $\lambda, \mu$ (to be described later). This Schur function identity
belongs to the ``class of identities'' which follow from the bijective
construction. By applying Observation~\ref{obs:christian} with
suitable $\lambda$ and $\mu$, we may deduce \eqref{eq:pluecker}.
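\begin{rem}
For small $n$ and $k$, the relation \eqref{eq:pluecker} can also be checked
directly. The following Python sketch is ours and purely illustrative; the
helper \texttt{bracket} computes the minor $[i_1,\dots,i_n]$ with the rows
taken in the given order:
\begin{verbatim}
import random
from itertools import combinations
from sympy import Matrix

n, k = 3, 1
rlist = [1]                                  # fixed 1 <= r_1 < ... < r_k <= n
M = [[random.randint(-4, 4) for _ in range(n)] for _ in range(2 * n)]

def bracket(rows):
    # the n x n minor [i_1, ..., i_n], rows taken in the given order
    return Matrix([M[i - 1] for i in rows]).det()

lhs = bracket(range(1, n + 1)) * bracket(range(n + 1, 2 * n + 1))
rhs = 0
for slist in combinations(range(n + 1, 2 * n + 1), k):
    top = list(range(1, n + 1))
    bottom = list(range(n + 1, 2 * n + 1))
    for r, s in zip(rlist, slist):
        top[r - 1], bottom[s - n - 1] = s, r  # exchange rows r_i and s_i
    rhs += bracket(top) * bracket(bottom)
assert lhs == rhs
\end{verbatim}
\end{rem}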
\begin{rem}
Summing equation~\eqref{eq:pluecker} over all possible choices of
subsets $\{r_1,\dots,r_k\}$ yields the determinant identity behind
Ciucu's Schur function identity \cite[Theorem~1.1]{Fulmek:Ciucu}
\begin{equation}
\label{eq:ciucu}
\sum_{A\subset T:\;\vert A\vert =k} s_{\lambda(A)}s_{\lambda(T-A)} =
2^k s_{\lambda(t_2,\dots,t_{2k})}s_{\lambda(t_1,\dots,t_{2k-1})},
\end{equation}
where $T=\{t_1<\dots<t_{2k}\}$ is some set of positive integers
and $\lambda(\{t_{i_1}<\dots <t_{i_r}\})$ denotes the partition with parts
$t_{i_r}-r+1\geq\dots\geq t_{i_2}-1\geq t_{i_1}$.
\end{rem}
\begin{rem}
The Pl\"ucker relations \eqref{eq:pluecker} appear in a slightly different
notation as Theorem~2 in \cite{Stanley-Propp}, together with another
elegant proof.
\end{rem}
Moreover, the bijective method yields a proof of the second author's
theorem \cite[Theorem 3.2]{Kleber}: Since this theorem is rather
complicated to state, we defer it to Section~\ref{kleber}.
\section{Combinatorial background and definitions}
\label{background}
As usual,
an $r$-tuple $\lambda = \parof{\lambda_1, \lambda_2,\dots,
\lambda_r}$ with $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_r\geq 0$
is called a {\em partition of length $r$\/}. The {\em Ferrers
board\/} $F(\lambda)$ of $\lambda$
is an array of cells with $r$ left-justified rows and $\lambda_i$
cells in row $i$.
An {\em $N$--semistandard Young tableau\/} of shape $\lambda$ is a filling of the cells of $F(\lambda)$ with integers from the set $\{1,2,\dots,N\}$, such
that the numbers filled into the cells weakly increase in rows
and strictly increase in columns
(see the right picture of Figure~\ref{fig:GV} for an illustration).
Schur functions, which are irreducible general linear characters, can be combinatorially defined by means of $N$--semistandard Young tableaux (see
\cite[I, (5.12)]{Macdonald:Symmetric-Functions},
\cite[Def.~4.4.1]{Sagan:The-Symmetric}, \cite[Def.~5.1]{Stanley}):
\begin{equation*}
s_\lambda(x_1, x_2, x_3, \dots, x_N) = \sum_{{\bold T}}w({\bold T}),
\end{equation*}
where the sum is over all $N$--semistandard Young tableaux ${\bold T}$ of
shape $\lambda$. Let $m({\bold T}, k)$ be the number of entries $k$
in the tableau ${\bold T}$. The weight $w({\bold T})$ of ${\bold T}$
is defined as follows:
\begin{equation*}
w({\bold T}) = \prod_{k = 1}^{N} x_k^{m({\bold T}, k)}.
\end{equation*}
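For instance, for $N=2$ and $\lambda=(2,1)$ there are exactly two
$2$--semistandard Young tableaux of shape $\lambda$ (with first row $11$ or
$12$, and second row $2$ in both cases), whence
$s_{(2,1)}(x_1,x_2)=x_1^2x_2+x_1x_2^2$.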
The Gessel-Viennot interpretation \cite{Gessel-Viennot:Determinants-paths}
of semistandard Young tableaux of shape $\lambdambda$ as nonintersecting lattice paths (see the left picture of Figure~\ref{fig:GV} for an illustration)
allows an equivalent definition of Schur functions:
\begin{equation*}
s_\lambda(x_1, x_2, x_3, \dots, x_N) = \sum_{{\bold P}}w({\bold P}),
\end{equation*}
where the sum is over all
$r$-tuples ${\bold P} = \left(P_1, P_2, \dots, P_r\right)$ of
lattice paths (in the integer lattice, i.e., the directed graph with
vertices $\mathbb Z\times\mathbb Z$ and arcs from $(j, k)$ to $(j+1, k)$ and
from $(j, k)$ to $(j, k+1)$ for all $j, k$), where $P_i$ starts at $(-i, 1)$
and ends at $(\lambdambda_{i}-i, N)$, and where no two paths $P_i$ and $P_j$
have a lattice point in common (such an $r$-tuple is called nonintersecting).
The weight $w({\bold P})$ of
an
$r$-tuple ${\bold P} = \left(P_1, P_2, \dots, P_r\right)$ of paths
is defined by:
\begin{equation*}
w({\bold P}) = \prod_{i = 1}^{r} w({P_i}).
\end{equation*}
The weight $w(P)$ of a single path $P$ is defined as follows:
Let $n({P}, k)$ be the number of horizontal steps at height $k$
(i.e., directed arcs from some $(j, k)$ to $(j+1, k)$) that belong to
path $P$, then we define
\begin{equation*}
w({P}) = \prod_{k = 1}^{N} x_k^{n({P}, k)}.
\end{equation*}
That these definitions are in fact equivalent is due to a weight--preserving
bijection between tableaux and nonintersecting lattice paths: roughly speaking,
the entries in row $i$ of the tableau record the heights of the horizontal
steps of path $P_i$, read from left to right. The
Gessel--Viennot method \cite{Gessel-Viennot:Determinants-paths}
builds on the lattice path definition to give a bijective proof of
the Jacobi--Trudi identity \eqref{eq:Jacobi-Trudi} (see, e.g.,
\cite[ch.~4]{Sagan:The-Symmetric}, \cite{Stembridge} or
\cite{Fulmek-Krattenthaler}).
\begin{figure}
\caption{Illustration of a $6$--semistandard Young tableau and
its associated lattice paths for $\lambdambda=(4,3,2)$.}
\end{figure}
Next, we give a combinatorial definition for {\em skew\/} Schur
functions: Let $\lambda=(\lambda_1,\dots,\lambda_r)$ and
$\mu=(\mu_1,\dots,\mu_r)$ be partitions with $\mu_i\leq\lambda_i$
for $1\leq i\leq r$; here, we allow $\mu_i=0$.
The {\em skew Ferrers
board\/} $F(\lambda/\mu)$ of $(\lambda,\mu)$
is an array of cells with $r$ left-justified rows and $\lambda_i-\mu_i$
cells in row $i$, where the first $\mu_i$ cells in row $i$ are missing.
\begin{figure}
\caption{Illustration of a $6$--semistandard skew Young tableau and
its associated lattice paths
for $\lambdambda=(4,3,2)$ and $\mu=(1,0,0)$.}
\end{figure}
An {\em $N$--semistandard skew Young tableau\/} of shape $\lambdambda/\mu$ is a filling of the cells of $F(\lambdambda/\mu)$ with integers from the set
$\{1,2,\dots,N\}$,
such that the numbers filled into the cells weakly increase in rows and
strictly increase in columns
(see the right picture of Figure~\ref{fig:skew} for an illustration).
Then we have the following definition for skew Schur functions:
\begin{equation*}
s_{\lambda/\mu}(x_1, x_2, x_3, \dots, x_N) = \sum_{{\bold T}}w({\bold T}),
\end{equation*}
where the sum is over all $N$--semistandard skew Young tableaux ${\bold T}$ of
shape $\lambdambda/\mu$, where the weight $w({\bold T})$ of ${\bold T}$
is defined as before.
Equivalently, we may define:
\begin{equation*}
s_{\lambda/\mu}(x_1, x_2, x_3, \dots, x_N) = \sum_{{\bold P}}w({\bold P}),
\end{equation*}
where the sum is over all $r$-tuples ${\bold P} = \left(P_1, P_2, \dots,
P_r\right)$ of nonintersecting lattice paths, where $P_i$ starts at
$(\mu_i-i, 1)$ and ends at $(\lambdambda_{i}-i, N)$ (see the left
picture of Figure~\ref{fig:skew} for an illustration), and where the
weight $w({\bold P})$ of such an $r$-tuple ${\bold P}$ is defined as before.
\section{Bijective proof of Theorem~\ref{thm:general}}
\label{bijection}
\begin{proof}
Let us start with a combinatorial description for the objects
involved in \eqref{eq:general}: By the Gessel--Viennot
interpretation of Schur functions as generating functions of
nonintersecting lattice paths, we may view the left--hand side of
the equation as the weight of all {\em pairs\/} $({\boldP}^g,
{\boldP}^b)$, where ${\boldP}^g$ and ${\boldP}^b$ are
$r$-tuples of nonintersecting lattice paths. The paths of
${\boldP}^g$ are coloured green, the paths of ${\boldP}^b$ are
coloured blue. The $i$-th green path $P^g_i$ starts at $(-i, 1)$
and ends in $(\lambdambda_{i}-i, N)$. The $i$-th blue path $P^b_i$
starts at $(-i-1, 1)$ and ends in $(\lambdambda_{i+1}-i-1, N)$. For an
illustration, see the upper left pictures in
Figures~\ref{fig:caseA} and \ref{fig:caseB}, where green paths are
drawn with full lines and blue paths are drawn with dotted lines.
For the right--hand side of \eqref{eq:general}, we use the same
interpretation. We may view the first term as the weight of all
{\em pairs\/} $({\boldA}^g, {\boldA}^b)$, where ${\boldA}^g$
is an $(r-1)$-tuple of nonintersecting lattice paths and
${\boldA}^b$ is an $(r+1)$-tuple of nonintersecting lattice
paths. The paths of ${\boldA}^g$ are coloured green, the paths of
${\boldA}^b$ are coloured blue. The $i$-th green path $A^g_i$
starts at $(-i-1, 1)$ and ends in $(\lambdambda_{i+1}-i-1, N)$. The
$i$-th blue path $A^b_i$ starts at $(-i, 1)$ and ends in
$(\lambdambda_{i}-i, N)$. For an illustration, see the upper right
picture in Figure~\ref{fig:caseA}.
In the same way, we may view the second term as the weight of all
{\em pairs\/} $({\boldB}^g, {\boldB}^b)$, where ${\boldB}^g$
and ${\boldB}^b$ are $r$-tuples of nonintersecting lattice paths.
The paths of ${\boldB}^g$ are coloured green, the paths of
${\boldB}^b$ are coloured blue. The $i$-th green path $B^g_i$
starts at $(-i, 1)$ and ends in $(\lambdambda_{i+1}-i-1, N)$. The
$i$-th blue path $B^b_i$ starts at $(-i-1, 1)$ and ends in
$(\lambdambda_{i}-i, N)$. For an illustration, see the upper right
picture in Figure~\ref{fig:caseB}.
In any case, the weight of some pair of paths $({\bold P},{\bold Q})$ is defined as follows:
\begin{equation*}
w({\bold P},{\bold Q}):=
w({\bold P})\cdot w({\bold Q}).
\end{equation*}
\begin{figure}
\caption{Illustration of the construction in the proof, case A: $r=3$,
$(\lambdambda_1,\lambdambda_2,\lambdambda_3,\lambdambda_4)=(5,4,3,2)$.
}
\end{figure}
\begin{figure}
\caption{Illustration of the construction in the proof, case B: $r=3$,
$(\lambdambda_1,\lambdambda_2,\lambdambda_3,\lambdambda_4)=(5,4,3,2)$.
}
\end{figure}
What we want to do is to give a weight--preserving bijection between the
objects on the left side and on the right side:
\begin{equation}
\lambdabel{eq:bijection}
\{({\boldP}^g, {\boldP}^b)\}\leftrightarrow
\left(\{({\boldA}^g, {\boldA}^b)\}\cup
\{({\boldB}^g, {\boldB}^b)\}\right).
\end{equation}
Clearly, such a bijection would establish \eqref{eq:general}.
The basic idea is very simple and was already used in
\cite{Goulden:Schur} and in \cite{Fulmek:Ciucu}: Since it will be
reused later, we state it here quite generally:
\begin{dfn}
\lambdabel{dfn:graph-auxil}
Let ${\boldP}^1, {\boldP}^2$ be two arbitrary families of
nonintersecting lattice paths. The paths $P^1_i$ of the first
family are coloured with colour blue, the paths $P^2_j$ of the
second family are coloured with colour green.
Let $G({\boldP}^1, {\boldP}^2)$ be the ``two--coloured'' graph
made up by ${\boldP}^1$ and ${\boldP}^2$ in the obvious sense.
Observe that there are the two possible orientations for any edge
in that graph: When traversing some path, we may either move
``right--upwards'' (this is the ``original'' orientation of the
paths) or ``left--downwards''.
A {\em changing trail} is a trail in $G({\boldP}^1, {\boldP}^2)$
with the following properties:
\begin{itemize}
\item Subsequent edges of the same colour are traversed in the same
orientation, subsequent edges of the opposite colour are traversed
in the opposite orientation.
\item At every intersection of green and blue paths,
colour {\em and\/} orientation are changed {\em if this is possible\/}
(i.e., if there is an adjacent edge of opposite colour and
opposite orientation);
otherwise the trail must stop there.
\item The trail is {\em maximal\/} in the sense that
it cannot be extended by adjoining edges (in a way which is
consistent with the above conditions) at its
start or end.
\end{itemize}
Note that for every edge $e$, there is a {\em unique\/} changing trail which
contains $e$: E.g., consider some blue edge which is right-- or
upwards--directed and enters vertex $v$. If there is an intersection at $v$,
and if there is a green edge leaving $v$ (in opposite direction left or
downwards), then the trail must continue with this edge; otherwise it must stop
at $v$. If there is no intersection at $v$,
and if there is a blue edge leaving $v$ (in the same direction right or
upwards), then the trail must continue with this edge; otherwise it must stop
at $v$.
Note that a changing trail is either
``path--like'', i.e., has obvious starting point and end point
(clearly, these must be the end points or starting points of some path from
either ${\boldP}^1$ or ${\boldP}^2$),
or it is ``cycle--like'', i.e., is a closed trail.
\end{dfn}
Let us return from general definitions to our concrete case: Starting
with an object $({\boldP}^g, {\boldP}^b)$ from the left--hand side of
\eqref{eq:bijection}, we interpret this pair of lattice paths
as a graph $G({\boldP}^g, {\boldP}^b)$ with
green and blue edges. (See the upper left
pictures in Figures~\ref{fig:caseA} and \ref{fig:caseB}.)
Next, we determine the changing trail which starts at the rightmost
endpoint $(\lambdambda_{1}-1,N)$: Follow the green edges downward or to
the left; at every intersection, change colour and orientation, if this
is possible; otherwise stop there.
Clearly, this changing trail is ``path--like''.
(See Figures~\ref{fig:caseA} and
\ref{fig:caseB} for an illustration: There, the orientation of edges
is indicated by small arrows in the upper pictures; the lower pictures
show the corresponding changing trails.)
Now we change colours green to blue and vice versa along this
changing trail: It is easy to see that this recolouring
yields nonintersecting tuples of green and blue lattice paths.
Note that there are exactly two possible cases:
\noindent{\bf Case A:} The changing trail stops at the rightmost
starting point, $(-1,1)$, of the lattice paths. In this case, from
the recolouring procedure we obtain an object $({\boldA}^g,
{\boldA}^b)$; see the upper right picture in
Figure~\ref{fig:caseA}.
\noindent{\bf Case B:} The changing trail stops at the leftmost endpoint,
$(\lambda_{r+1}-r-1,N)$, of the lattice paths. In this case, from
the recolouring procedure we obtain an object $({\boldB}^g,
{\boldB}^b)$; see the upper right picture in
Figure~\ref{fig:caseB}.
It is clear that altogether this gives a mapping of the set of all
objects $({\boldP}^g, {\boldP}^b)$ into the union of the two sets of all
objects $({\boldA}^g, {\boldA}^b)$ and $({\boldB}^g, {\boldB}^b)$,
respectively. Of course, this mapping is weight--preserving. It is also
injective since the above construction is reversed by simply repeating
it, i.e., determine the changing trail starting at the rightmost
endpoint $(\lambda_{1}-1,N)$ (this trail is exactly the same as before,
only the colours are exchanged) and change colours.
For an illustration, read Figures~\ref{fig:caseA} and
\ref{fig:caseB} from right to left.
So what is left to prove is surjectivity: To this end, it suffices
to prove that if we apply our (injective) recolouring construction
to an {\em arbitrary\/} object $({\boldA}^g, {\boldA}^b)$ or
$({\boldB}^g, {\boldB}^b)$, we do {\em always\/} get an object
$({\boldP}^g, {\boldP}^b)$; i.e., two $r$--tuples of
nonintersecting lattice paths, coloured green and blue, and with
the appropriate starting points and endpoints.
We {\em do\/} have something to prove: Note that in both cases, A
(see Figure~\ref{fig:caseA}) and B (see Figure~\ref{fig:caseB}),
there is {\em prima vista\/} a second possible endpoint for the
changing trail, namely the leftmost starting point, $(-r-1,1)$,
of the lattice paths, where the leftmost blue path starts. If this
endpoint could actually be reached, then the resulting object would
clearly not be of type $({\boldP}^g, {\boldP}^b)$. So we have
to show that this is impossible. (Goulden left out this
indispensable step in \cite[Theorem~2.2]{Goulden:Schur}, but we
shall close this small gap immediately.)
\begin{obs}
\lambdabel{obs:colour-changing}
The following properties of changing trails are immediate:
\begin{itemize}
\item If some edge of a changing trail is used by paths of {\em both\/}
colours green and blue, then it is necessarily traversed in both
orientations and thus forms a {\em changing trail\/} (which is ``cycle--like'')
by itself.
\item Two changing trails may well {\em touch\/}
each other (i.e., have some vertex in common), but can {\em never cross}.
\end{itemize}
\end{obs}
Now observe that in Case A, there is also a second possible
starting point of a ``path--like'' changing trail, namely the
left--most endpoint $(\lambdambda_{r+1}-r-1,N)$ of the lattice paths
(see the left picture in Figure~\ref{fig:2paths}). Likewise, in
Case B, there is a second possible starting point of a
``path--like'' changing trail, namely the rightmost starting point
$(-1,1)$ of the lattice paths (see the right picture in
Figure~\ref{fig:2paths}).
In both cases, if the changing trail starting in $(\lambdambda_1-1,N)$
would reach the
leftmost starting point $(-r-1,1)$ of the lattice paths, it clearly would
{\em cross\/}
this other ``path--like'' changing trail; a contradiction to
Observation~\ref{obs:colour-changing}. (The pictures in Figure~\ref{fig:2paths}
show these other changing trails for the examples in Figures
\ref{fig:caseA} and \ref{fig:caseB}, respectively.)
This finishes the proof.
\end{proof}
\begin{figure}
\caption{Illustration of the second changing trails for cases A and B.}
\end{figure}
\section{The bijective construction, generalized}
\label{general}
It is immediately obvious that the bijective construction used in the
proof of Theorem~\ref{thm:general} is not at all restricted to the
special situation of Theorem~\ref{thm:general}: We can {\em always\/}
consider the product of two (arbitrary) skew Schur functions as
generating functions of certain ``two--coloured graphs'' derived
from the lattice path interpretation, as above. Determining the
changing trails which start in some fixed set of
starting points and recolouring their edges will {\em always\/}
yield an injective (and, clearly, weight--preserving) mapping: The
only issue which needs extra care is surjectivity.
In the proof of Theorem~\ref{thm:general} we saw that the argument showing
surjectivity boils down to a very simple graph--theoretic reasoning. We
shall recast this simple reasoning into a general statement:
\begin{obs}
\lambdabel{obs:Kn}
Consider the complete graph $K_{2n}$ with $2n$ vertices,
numbered $1,2,\dots,2n$, and represent its vertices
as points on the unit circle (i.e., vertex number $m$ is represented as
$e^{2m\pi \sqrt{-1} }$); represent the edges as straight lines connecting
the corresponding vertices.
Call a matching in this graph
{\em noncrossing\/} if no two of its edges cross each other in this
geometric representation (see Figure~\ref{fig:matchings} for an illustration).
Then we have:
Any edge which belongs to a {\em perfect} noncrossing matching must
connect an odd--numbered vertex to an even--numbered vertex.
\end{obs}
\begin{rem}
Note that the number of {\em perfect\/} noncrossing matchings in $K_{2n}$
is the Catalan number $C_n$ (see \cite[p.~222]{Stanley2}).
\end{rem}
\begin{rem}
Note that the argument proving surjectivity in Theorem~\ref{thm:general}
amounts to the fact that the two possible ``path--like'' changing
trails connecting the four possible starting points and end points
$(\lambdambda_1-1,N)$, $(\lambdambda_{r+1}-r-1,N)$, $(-r-1,1)$ and $(-1,1)$ must
correspond to a {\em noncrossing\/} perfect matching of the complete graph
$K_4$.
\end{rem}
\begin{figure}
\caption{Illustration of a {\em perfect\/} noncrossing matching.}
\end{figure}
We shall derive a general statement for skew Schur functions:
Let $\lambda=(\lambda_1,\dots,\lambda_r)$ and
$\mu=(\mu_1,\dots,\mu_r)$ be partitions with
$0\leq\mu_i\leq\lambda_i$ for $1\leq i\leq r$; let
$\sigma=(\sigma_1,\dots,\sigma_r)$ and $\tau=(\tau_1,\dots,\tau_r)$
be partitions with $0\leq\tau_i\leq\sigma_i$ for
$1\leq i\leq r$.
\begin{rem}
\label{rem:zero-parts}
We intentionally allow parts of length 0 in the partitions $\lambda$
and $\sigma$: This is equivalent to allowing them to have
different numbers of parts.
\end{rem}
Interpret $s_{\lambda/\mu}$ as the generating function of the
family of nonintersecting
lattice paths $(P^b_1,\dots,P^b_r)$, where
$P^b_i$ starts at $(\mu_{i}-i, 1)$ and ends at $(\lambda_{i}-i, N)$.
Colour the corresponding lattice paths blue.
Interpret $s_{\sigma/\tau}$ as the generating function of the family of nonintersecting
lattice paths $(P^g_1,\dots,P^g_r)$, where $P^g_i$ starts at
$(\tau_{i}+t-i, 1)$
and ends at $(\sigma_{i}+t-i, N)$. Colour the corresponding lattice paths
green. Here, $t$ is an arbitrary but fixed integer which indicates the
horizontal offset of the green paths with respect to the blue paths.
Consider the sequence of possible starting points of ``path--like''
changing trails of the corresponding two-coloured graph, in the
sense of Section~\ref{bijection}, where the end--points of the
lattice paths appear in order from right to left in this sequence,
followed by the starting points of the lattice paths in order from
left to right. Note that the number of such points is even, $2k$,
say. More precisely, consider $(x_1,N),\dots (x_l,N),$ followed by
$(x_{l+1},1),\dots,(x_{2k},1)$, where
\begin{multline*}
\{x_1,\dots,x_l\}=
\{i:\;\lambdambda_{i}-i\neq\sigmagma_{j}+t-j\text{ for }1\leq j\leq r\} \cup\\
\{j:\;\sigmagma_{j}+t-j\neq\lambdambda_{i}-i\text{ for }1\leq i\leq r\},
\end{multline*}
$x_1>x_2>\dots >x_l$, and where
\begin{multline*}
\{x_{l+1},\dots,x_{2k}\}=
\{i:\;\mu_{i}-i\neq\tau_{j}+t-j\text{ for }1\leq j\leq r\} \cup\\
\{j:\;\tau_{j}+t-j\neq\mu_{i}-i\text{ for }1\leq i\leq r\},
\end{multline*}
$x_{l+1}<\dots <x_{2k}.$
Denote this sequence of points $(x_i,.)$ by $(Q_i),1\leq i\leq 2k$.
For $1\leq i\leq l$, blue points $Q_i$ are coloured black and green
points $Q_i$ are coloured white. For $l+1\leq i\leq 2k$, blue
points $Q_i$ are coloured white and green points $Q_i$ are coloured
black. Points with even index are called even, points with odd
index are called odd. Then the following lemma is immediate:
\begin{lem}
\lambdabel{lem:parity}
A path--like changing trail in the two--coloured graph defined above can only
connect points of different colours (out of black and white) {\em
and\/} of different parity (by Observation \ref{obs:Kn});
e.g., some white $Q_{2m}$ and some
black $Q_{2n+1}$.
\end{lem}
Now fix an arbitrary subset of points $\{Q_{i_1},\dots,Q_{i_m}\}$.
Start with an arbitrary two--coloured graph from
$s_{\lambdambda/\mu}s_{\sigmagma/\tau}$ (interpreted again as the product of the
generating functions of the corresponding families of
nonintersecting lattice paths) and recolour the changing trails
starting in $Q_{i_1},\dots,Q_{i_m}$. In general, this will give
another two--coloured graph, which can be interpreted as belonging
to some other
$s_{\lambdambda^\prime/\mu^\prime}s_{\sigmagma^\prime/\tau^\prime}$. Take
an arbitrary object (i.e., two--coloured graph) from
$s_{\lambdambda^\prime/\mu^\prime}s_{\sigmagma^\prime/\tau^\prime}$ and
repeat the same recolouring operation as long as it generates some
``new'' (yet unseen) object.
The set of objects thus generated decomposes into two disjoint
sets: One set, $O_0$, encompasses all objects which show the same
colouring of points $Q_{i_1},\dots,Q_{i_m}$ as in the starting
object; the other, $O_1$ encompasses the objects with the opposite
colouring for these points.
It is clear that recolouring changing trails which start in points
$Q_{i_1},\dots,Q_{i_m}$ establishes a bijection between $O_0$ and
$O_1$.
On the other hand, each object in $O_0$ belongs to some
$s_{\lambdambda^{\prime\prime}/\mu^{\prime\prime}}s_{\sigmagma^{\prime\prime}/\tau^{\prime\prime}}$:
Denote the set of all the corresponding quadruples
$(\lambdambda^{\prime\prime},\mu^{\prime\prime},\sigmagma^{\prime\prime},\tau^{\prime\prime})$ which
occur in this sense by $S_0$. The same consideration applies to
$O_1$: Denote by $S_1$ the corresponding set of
quadruples $(\lambdambda^\prime,\mu^\prime,\sigmagma^\prime,\tau^\prime)$.
\begin{lem}
\label{lem:most-general}
Given the above definitions, we have the following ``generic''
identity for skew Schur functions:
\begin{equation}
\label{eq:most-general}
\sum_{(\lambda^\prime,\mu^\prime,\sigma^\prime,\tau^\prime)\in S_1}
s_{\lambda^\prime/\mu^\prime}s_{\sigma^\prime/\tau^\prime}=
\sum_{(\lambda^{\prime\prime},\mu^{\prime\prime},\sigma^{\prime\prime},\tau^{\prime\prime})\in S_0}
s_{\lambda^{\prime\prime}/\mu^{\prime\prime}}s_{\sigma^{\prime\prime}/\tau^{\prime\prime}}.
\end{equation}
\end{lem}
This statement is certainly as general as useless: Let us
specialize to a somewhat ``friendlier'' assertion.
\begin{lem}
\label{lem:general-skew}
Given the above definitions, assume that all black points have the same
parity, and that all white points have the same parity. Then \eqref{eq:most-general} specializes to
\begin{equation}
s_{\lambda/\mu}s_{\sigma/\tau}=
\sum_{(\lambda^\prime,\mu^\prime,\sigma^\prime,\tau^\prime)\in S_1}
s_{\lambda^\prime/\mu^\prime}s_{\sigma^\prime/\tau^\prime},
\end{equation}
where $S_1$ encompasses all the quadruples
$(\lambda^\prime,\mu^\prime,\sigma^\prime,\tau^\prime)$
which correspond to any two--coloured graph object that can be obtained
by recolouring the changing trails starting in points
$Q_{i_1},\dots,Q_{i_m}$ in any ``initial'' two--coloured graph object
from $s_{\lambda/\mu}s_{\sigma/\tau}$.
\end{lem}
\begin{proof}
Without loss of generality we may assume that all even points are
white and all odd points are black in
$s_{\lambda/\mu}s_{\sigma/\tau}$. By recolouring changing trails,
all the points $Q_{i_1},\dots,Q_{i_m}$ are matched with points of
opposite colour and parity.
So if $Q_i$ is odd and black, then the recolouring trail starting at
$Q_i$ connects it with some other point $Q_k$ which is even and white:
After recolouring, $Q_i$ is odd and white, and the recolouring operation
altogether yields some two--coloured graph object belonging to some
$s_{\lambda^\prime/\mu^\prime}s_{\sigma^\prime/\tau^\prime}$.
Now if we apply the recolouring operation to an {\em arbitrary\/}
object from
$s_{\lambda^\prime/\mu^\prime}s_{\sigma^\prime/\tau^\prime}$, the
only possible partner for a ``wrongly--coloured'' $Q_i$ (odd, but
white) is another ``wrongly--coloured'' $Q_j$ (even, but black).
Hence this operation takes objects from
$s_{\lambda^\prime/\mu^\prime}s_{\sigma^\prime/\tau^\prime}$ back
to $s_{\lambda/\mu}s_{\sigma/\tau}$.
\end{proof}
\section{Proof of the Pl\"ucker relations}
\label{pluecker}
In order to show that the general assertions of
Section~\ref{general} do in fact lead to some interesting
identities, we give a proof of the Pl\"ucker relations
(Theorem~\ref{thm:pluecker}), which is based on
Lemma~\ref{lem:general-skew}.
\begin{proof}
In the notation of Section~\ref{general}, let
$$\lambda=\left(2n(n-1),2(n-1)^2,\dots,4(n-1),2(n-1)\right)$$
and
$$\sigma=\left((2n-1)(n-1),(2n-3)(n-1),\dots, 3(n-1),n-1\right),$$
$\mu=\tau=(0,\dots,0)$; and choose horizontal offset $t=0$. I.e., interpret
$s_\lambda s_\sigma$ as the generating function of two--coloured
graph objects consisting of two $n$-tuples of nonintersecting
lattice paths, coloured
green and blue, respectively,
where green path $P^g_i$ starts at $(-i,1)$ and ends at
$(\lambda_i-i,N)$, and where blue path $P^b_i$ starts at $(-i,1)$
and ends at $(\sigma_i-i,N)$.
Observe that this setting obeys the assumptions of
Lemma~\ref{lem:general-skew}.
Now consider the set of green endpoints $\{Q_1,\dots,Q_k\}$, where
$Q_i=(\lambda_{r_i}-r_i,N)$.
(Here, $1\leq r_1< r_2<\dots< r_k\leq n$ is the fixed
list of integers from Theorem~\ref{thm:pluecker}.)
Recolouring changing trails which
start at these points amounts to determining the set
$\{R_1,\dots,R_k\}$ of respective
endpoints of the changing trails, and changing colours.
Assume that $R_i=(\sigma_{s_i}-s_i,N)$; then, in terms of the associated
Schur functions, Lemma~\ref{lem:general-skew} directly leads to the identity:
\begin{equation}
\label{eq:schur-pluecker}
s_\lambda s_\sigma = \sum_{1\leq s_1<\dots< s_k\leq n}
s_{(\lambda_1,\dots,\sigma_{s_1},\dots,\sigma_{s_k},\dots,\lambda_n)}
s_{(\sigma_1,\dots,\lambda_{r_1},\dots,\lambda_{r_k},\dots,\sigma_n)},
\end{equation}
where the notation of the summands means that parts $\lambda_{r_i}$
were exchanged with parts $\sigma_{s_i}$, respectively.
By the Jacobi--Trudi identity \eqref{eq:Jacobi-Trudi} and
Observation~\ref{obs:christian},
\eqref{eq:pluecker} and \eqref{eq:schur-pluecker} are in fact equivalent.
\end{proof}
\begin{rem}
In fact, even the quite general assertion of Lemma~\ref{lem:most-general}
can be generalized further: So far, our lattice paths always had
starting points and end points at the same horizontal lines $(.,1)$ and
$(.,N)$, corresponding to the range of variables $x_1,\dots,x_N$. Dropping
this constraint yields Schur functions with different ranges of variables
(e.g., $s_\lambdambda(x_4,x_5,x_6)$). Recalling that (see
Remark~\ref{rem:zero-parts}) we actually also do allow partitions of different
lengths, it is easy to see that Theorem~5 in \cite{Krattenthaler}
(which is a generalization of Ciucu's Schur function identity
\eqref{eq:ciucu}) can be proved in the same way as
Lemma~\ref{lem:most-general}.
\end{rem}
\section{Kleber's Theorem}
\label{kleber}
The theorem \cite[Thm.~3.2]{Kleber} is expressed in terms of
certain operations on Ferrers boards (called Young
diagrams in \cite{Kleber}): In order to state it, we need to describe the
relevant notation.
First, we introduce a particular way of drawing the Ferrers board
of $\lambda=(\lambda_1,\dots,\lambda_r)$ in the plane: Let
$x_1>x_2>\dots>x_n>x_{n+1}=0$ be the ordered list of the {\em
distinct\/} parts contained in $\lambda$; set $y_i=\text{the number
of parts of }\lambda\text{ which are }\geq x_i$.
Setting $y_0=0$, we have $0=y_0<y_1<\dots<y_n$, and $(x_i),(y_i)$
simply yield another encoding of the partition $\lambda$:
$$\lambda=(x_1^{y_1-y_0},x_2^{y_2-y_1},\dots,x_n^{y_n-y_{n-1}}).$$
Now consider the $n$ points $(x_1,-y_1),(x_2,-y_2),\dots,(x_n,-y_n)$ in
the plane: The Ferrers board of $\lambdambda$ is represented as the set of
points $(x,-y)$ such that:
\begin{align*}
x\geq 0\text{ and } y\geq 0,&\\
x\leq x_i\text{ and }y\leq y_i&\text{ for some } i.
\end{align*}
Figure \ref{fig:outer-corners} illustrates this concept. The $n$ points
$c_1=(x_1,-y_1),c_2=(x_2,-y_2),\dots,c_n=(x_n,-y_n)$ are called {\em outside corners\/},
the $n+1$ points $(x_1,-y_0),(x_2,-y_1),\dots,(x_{n+1},-y_n)$
are called {\em inside corners\/}.
\begin{figure}
\caption{Illustration of outer corners and special drawing of Ferrers
board for partition $\lambdambda=(8,6,5,3,3,1,1)$.}
\end{figure}
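For instance, for the partition $\lambda=(8,6,5,3,3,1,1)$ of
Figure~\ref{fig:outer-corners} we have $n=5$,
$(x_1,\dots,x_6)=(8,6,5,3,1,0)$ and $(y_1,\dots,y_5)=(1,2,3,5,7)$, so that
$\lambda=(8^1,6^1,5^1,3^2,1^2)$; the outside corners are
$(8,-1),(6,-2),(5,-3),(3,-5),(1,-7)$, and the inside corners are
$(8,0),(6,-1),(5,-2),(3,-3),(1,-5),(0,-7)$.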
Now we are in a position to define two operations on partitions:
In the above notation, take two integers $i,j$ such that $1\leq i\leq j\leq n$
and define two partitions derived from the original $\lambda$
via manipulating the inside and outside corners of its
associated Ferrers board:
\begin{align*}
\pi^i_j(\lambda):&\text{ add $1$ to each of }x_{i+1},\dots,x_j;y_i,\dots,y_j,\\
\mu^i_j(\lambda):&\text{ add $-1$ to each of }x_{i+1},\dots,x_j;y_i,\dots,y_j.
\end{align*}
These operations add or remove, respectively, a {\em border strip\/}
that reaches from the $i$-th outside corner to the $j$-th inside corner
(see Figure \ref{fig:kleber}).
\begin{figure}
\caption{Illustration of operations $\pi_j^i$ and $\mu_j^i$ for
$i=2$, $j=5$ applied to $\lambdambda=(8,6,5,3,3,1,1)$.}
\end{figure}
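In the example of Figure~\ref{fig:kleber}, i.e., for
$\lambda=(8,6,5,3,3,1,1)$, $i=2$ and $j=5$, one obtains
$\pi^2_5(\lambda)=(8,6,6,6,4,4,2,2)$ and $\mu^2_5(\lambda)=(8,4,2,2)$;
the border strip which is added, respectively removed, consists of $11$ cells.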
We need to add or remove {\em nested border strips\/}: Given integers
$1\leq i_1<\dots<i_k\leq j_k<\dots< j_1\leq n$, we define
\begin{align*}
\pi^{i_1,\dots,i_k}_{j_1,\dots,j_k} & =
\pi^{i_1}_{j_1}\circ \dots\circ\pi^{i_k}_{j_k},\\
\mu^{i_1,\dots,i_k}_{j_1,\dots,j_k} & =
\mu^{i_1}_{j_1}\circ \dots\circ\mu^{i_k}_{j_k}.
\end{align*}
Note that the corners which are shifted by these operations might
not appear as corners in the geometric sense any more; nevertheless
we consider them as the object for subsequent operations $\pi$ and $\mu$:
Nesting $\pi$ and $\mu$ in this sense yields something which can be interpreted
again as a partition, since we always have $x_i\geq x_{i+1}$
and $y_i\leq y_{i+1}$ (see Figure~\ref{fig:kleber}.)
The last operation we need is the following: In the above notation,
let $k$ be an integer, $1\leq k\leq n$. Clearly, the Ferrers board
contains at least one column of length $l=y_k$: Adding or removing
some column of length $l$ amounts to adding $\pm 1$ to all
coordinates $x_i$, $1\leq i\leq k$. We denote this operation by
$\lambda\pm\omega_l$. (See Figure~\ref{fig:kleber2}.)
\begin{figure}
\caption{Illustration of operation $\lambdambda\pm\omega_l$ for
$l=y_4=5$, applied to $\lambdambda=(8,6,5,3,3,1,1)$.}
\end{figure}
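Again for $\lambda=(8,6,5,3,3,1,1)$ and $k=4$, so that $l=y_4=5$ as in
Figure~\ref{fig:kleber2}, we obtain
$\lambda+\omega_5=(9,7,6,4,4,1,1)$ and $\lambda-\omega_5=(7,5,4,2,2,1,1)$.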
\begin{thm}[Theorem 3.2 in \cite{Kleber}]
\label{thm:Kleber}
Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_r)$ be a partition
with $n$ outside corners. For an arbitrary integer $k$, $1\leq
k\leq n$, set $l=y_k$ (in the above notation). Then we have:
\begin{multline}
\label{eq:kleber}
s_\lambda s_\lambda = \\
s_{\lambda+\omega_l}s_{\lambda-\omega_l} +
\sum_{m\geq 1}
\sum_{\substack{
1\leq i_1<\dots<i_m\leq k\\
k\leq j_m<\dots<j_1\leq n
}}
(-1)^{m-1}s_{\pi^{i_1,\dots,i_m}_{j_1,\dots,j_m}(\lambda)}
s_{\mu^{i_1,\dots,i_m}_{j_1,\dots,j_m}(\lambda)}.
\end{multline}
\end{thm}
The connections between Ferrers boards and nonintersecting lattice
paths were illustrated in Section~\ref{background}: Here we have to
give the proper ``translation'' of operations
$\pi_{j}^{i}$ and $\mu_{j}^{i}$ to nonintersecting lattice paths.
First observe that the outside corners of a partition correspond to {\em
blocks of consecutive endpoints\/} (here, consecutive means ``having distance
1 in the horizontal direction'') in the lattice path interpretation: Number
these blocks from right to left by $1,2,\dots,n$, and denote the additional
block of (consecutive) starting points by $n+1$ (see the upper picture
in Figure~\ref{fig:kleber-lattice}).
Interpret some object from $s_\lambda s_\lambda$ in the same way as in
Section~\ref{general}. More precisely, let $\sigma=\lambda$, $\mu=\tau=0$
and horizontal offset $t=1$ in the general definitions preceding
Lemma~\ref{lem:most-general}. Figure~\ref{fig:kleber-lattice}
illustrates the position of starting points and end points of the corresponding
lattice paths: Blue points are drawn as black dots,
green points are drawn as white dots; blocks are indicated by horizontal
braces.
It is easy to see that the simultaneous
application of $\pi_j^i$ to the ``green object'' and of $\mu_j^i$ to the
``blue
object'' amounts to interchanging colours of the leftmost point in blue block
$i$ and of the appropriate endpoint of a corresponding
changing trail in block $j+1$ (i.e., the rightmost point in green block $j+1$
if $j<n$, or the leftmost point in blue block $n+1$ if $j=n$; see Figure
\ref{fig:kleber-lattice}).
Likewise, adding some column of height $l=y_k$ to the ``blue object'' and
simultaneously removing such column from the ``green object'' amounts to interchanging colours of the
leftmost blue point and the rightmost green point in blocks $1,2,\dots,k$
if $k<n$; if $k=n$, then the same effect can be achieved by interchanging
colours of the leftmost blue point and the rightmost green point in block
$n+1$. (See Figure~\ref{fig:kleber-lattice2}.)
\begin{figure}
\caption{Illustration of operations $\pi_j^i$ and $\mu_j^i$ for
$i=2$, $j=4$ applied to $\lambdambda=(8,6,5,3,3,1,1)$,
translated to lattice paths.}
\end{figure}
\begin{figure}
\caption{Illustration of operations $\lambdambda\pm\omega_l$ for
$l=y_4=5$, applied to $\lambdambda=(8,6,5,3,3,1,1)$,
translated to lattice paths.}
\end{figure}
\noindent
{\em Proof of Theorem~\ref{thm:Kleber}:\/}
Consider a two-coloured object from $s_\lambdambda s_\lambdambda$
in the lattice path interpretation. As in Section~\ref{general}, we
look at the noncrossing perfect matching that the changing trails
induce among their $2n+2$ endpoints, the leftmost and rightmost points
in blocks $1,\ldots,n,n+1$. Note that in the case of $s_\lambdambda
s_\lambdambda$, the parity constraint and the colour constraint of
Lemma~\ref{lem:parity} coincide.
Now consider the $k$ changing trails which begin at the leftmost
(blue) endpoints of blocks $1,2,\ldots,k$. There are exactly two
cases:
\begin{enumerate}
\item
The changing trails match these points up with the
rightmost (green) endpoints of blocks $1,2,\ldots,k$:
Then recolouring these $k$ trails results in an object of type
$s_{\lambdambda+\omega_l}s_{\lambdambda-\omega_l}$. Conversely, given an
object of type $s_{\lambdambda+\omega_l}s_{\lambdambda-\omega_l}$, the parity
and colour constraints of Lemma~\ref{lem:parity} force the points
in blocks $1,2,\ldots,k$ to be matched amongst themselves, so they are
in bijection with this subset of $s_\lambdambda s_\lambdambda$ objects.
\item
Otherwise, some of those $k$ points must match up with points in
blocks $k+1,\ldots,n,n+1$. Suppose there are $m$ such matchings, and
that they match the leftmost points in blocks $i_1<i_2<\cdots<i_m$
with points in blocks $j_1>j_2>\cdots>j_m$. (Since the changing trails
cannot cross, we in fact know that $i_r$ is matched with $j_r$,
for $1\leq r\leq m$.) Recolouring these $m$ trails gives an object of
type
$s_{\pi^{i_1\ldots i_m}_{j_1\ldots j_m}}(\lambdambda)
s_{\mu^{i_1\ldots i_m}_{j_1\ldots j_m}}(\lambdambda)$.
This time, though, we do not have a bijection. Given an object of
$s_{\pi^{i_1\ldots i_m}_{j_1\ldots j_m}}(\lambdambda)
s_{\mu^{i_1\ldots i_m}_{j_1\ldots j_m}}(\lambdambda)$,
the same parity and colour constraints of Lemma~\ref{lem:parity} do
guarantee that $m$ changing trails connect each $i_r$ with $j_r$.
However, when we recolour them to get an object of $s_\lambdambda
s_\lambdambda$, we may arrive at an object that has {\em other}
changing trails leaving blocks $1,\ldots,k$, aside from the $m$ we
considered.
Thus, we are in the ``typical'' situation for an inclusion--exclusion
argument, which immediately yields equation~\eqref{eq:kleber}.
\end{enumerate}
This finishes the proof.
\qed
\begin{rem}
When $k=1$, both cases of the above proof amount to recolouring the trail
beginning
at the rightmost green endpoint, so this is a special case of
Lemma~\ref{lem:general-skew}. The $k=n$ case follows similarly, after
exchanging blue and green.
\end{rem}
\ifx\undefined\bysame
% \bysame reconstructed: the macro name was mangled during extraction
\newcommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\,}
\fi
\end{document}
\begin{document}
\begin{center}
\Large \textbf{Almost periodic solutions of retarded SICNNs with functional response on piecewise constant argument}
\end{center}
\begin{center}
\normalsize \textbf{Marat Akhmet$^{a,} \symbolfootnote[1]{Corresponding Author Tel.: +90 312 210 5355, Fax: +90 312 210 2972, E-mail: [email protected]}$, Mehmet Onur Fen$^b$, Mokhtar Kirane$^{c}$} \\
\textit{\textbf{\footnotesize$^a$Department of Mathematics, Middle East Technical University, 06800, Ankara, Turkey}} \\
\textit{\textbf{\footnotesize$^b$Neuroscience Institute, Georgia State University, Atlanta, Georgia 30303, USA}} \\
\textit{\textbf{\footnotesize$^c$Laboratoire de Mathématiques, Image et Applications, Pôle Sciences et Technologies, Université de La Rochelle, Avenue Michel Crépeau, 17042 La Rochelle, France}}
\end{center}
\begin{center}
\textbf{Abstract}
\end{center}
\noindent\ignorespaces
We consider a new model for shunting inhibitory cellular neural networks, retarded functional differential equations with piecewise constant argument. The existence and exponential stability of almost periodic solutions are investigated. An illustrative example is provided.
\noindent\ignorespaces \textbf{Keywords:} Shunting inhibitory cellular neural networks; Retarded functional differential equations; Alternate constancy of argument; Bohr almost periodic solutions; Exponential stability
\section{Introduction}
Cellular neural networks ($CNNs$) have received much attention in the past two decades \cite{chua}-\cite{1}. An exceptional role in psychophysics, speech, perception, robotics, adaptive pattern recognition, vision, and image processing is played by shunting inhibitory cellular neural networks ($SICNNs$), which were introduced by Bouzerdoum and Pinter \cite{bouzer1}. One of the most attractive subjects for this type of neural network is the existence of almost periodic solutions. This problem has been investigated for models with different types of activation functions \cite{10}-\cite{12}. In the present study, we investigate a new model of $SICNNs$ by considering deviated as well as piecewise constant time arguments, and prove the existence of exponentially stable almost periodic solutions. All the results are stated for a general class of activation functions, but they can easily be specialized for applications.
Extended information about differential equations with generalized piecewise constant argument \cite{a1} can be found in the book \cite{akhmet}. As a subclass, they contain differential equations with piecewise constant argument ($EPCA$) \cite{aw1}-\cite{w1}, where the piecewise constant argument is assumed to be a multiple of the greatest integer function.
Differential equations with piecewise constant argument are very useful as models for neural networks. This was shown in the studies \cite{hul}-\cite{xue}, where the authors utilized $EPCA.$ We propose to involve a new type of systems, retarded functional differential equations with piecewise constant argument of generalized type, in the modeling. It will help to investigate a larger class of neural networks.
In paper \cite{a1}, differential equations with piecewise constant argument of generalized type ($EPCAG$) were introduced. We not only maximally generalized the argument functions, but also proposed to reduce investigation of $EPCAG$ to integral equations. Due to that innovation, it is now possible to analyze essentially non-linear systems, that is, systems non-linear with respect to values of solutions at discrete moments of time, where the argument changes its constancy. Previously, the main and unique method for $EPCA$ was reduction to discrete equations and, hence, only equations in which values of solutions at the discrete moments appear linearly \cite{aw1}-\cite{w1} have been considered.
The crucial novelty of the present paper is that the piecewise constant argument in the functional differential equations is of alternate (advanced-delayed) type. In the literature, biological reasons for the argument to be delayed were discussed \cite{murray,peskin}. However, the role of advanced arguments has not been analyzed properly yet. Nevertheless, the importance of anticipation for biology was mentioned by some authors. For example, in the paper \cite{bucks}, it is supposed that synchronization of biological oscillators may request anticipation of counterparts behavior. Consequently, one can assume that equations for neural networks may also need anticipation, which is usually reflected in models by advanced argument. Therefore, the systems taken into account in the present study can be useful in future analyses of $SICNNs.$ Furthermore, the idea of involving both advanced and delayed arguments in neural networks can be explained by the existence of retarded and advanced actions in a model of classical electrodynamics \cite{driver}. Moreover, mixed type deviation of the argument may depend on traveling waves emergence in $CNNs$ \cite{waves}. Understanding the structure of such traveling waves is important due to their potential applications including image processing (see, for example, \cite{chua}-\cite{waves}). More detailed analysis of deviated arguments in neural networks can be found in \cite{ay}-\cite{aay2}.
Shunting inhibition is a phenomenon in which the cell is ``clamped'' to its resting potential when the reversal potential of $Cl^-$ channels are close to the membrane resting potential of the cell \cite{bouzer1,Shepherd04}. It occurs through the opposition of an inward current, which would otherwise depolarize the membrane potential to threshold, by an inward flow of $Cl^-$ ions \cite{Shepherd04}. From the biological point of view, shunting inhibition has an important role in the dynamics of neurons \cite{Vida06}-\cite{Graham98}. According to the results of Vida et al. \cite{Vida06} networks with shunting inhibition are advantageous compared to the networks with hyperpolarizing inhibition such that in the former type networks oscillations are generated with smaller tonic excitatory drive, network frequencies are tuned to the $\gamma$ band, and robustness against heterogeneity in the excitatory drive is markedly improved. It was demonstrated by Mitchell and Silver \cite{Mitchell03} that shunting inhibition can modulate the gain and offset of the relationship between output firing rate and input frequency in granule cells when excitation and/or inhibition are mediated by time dependent synaptic input. Besides, Borg-Graham et al. \cite{Graham98} proposed that nonlinear shunting inhibition may act during the initial stage of visual cortical processing, setting the balance between opponent `On' and `Off' responses in different locations of the visual receptive field \cite{Graham98}. On the other hand, shunting neural networks are important for various engineering applications \cite{bouzer1},\cite{bouzer2}-\cite{Arulampalam01}. For example, in vision, shunting lateral inhibition enhances edges and contrast, mediates directional selectivity, and causes adaptation of the organization of the spatial receptive field and of the contrast sensitivity function \cite{bouzer1},\cite{bouzer2}-\cite{Pinter83}. Moreover, such networks are appropriate to be used in medical diagnosis \cite{Arulampalam01}. Therefore, the investigation of the dynamics of $SICNNs,$ which are biologically inspired networks designed upon the shunting inhibition concept \cite{bouzer1}, is important for the improvement of the techniques used in medical diagnosis, adaptive pattern recognition, image processing etc. \cite{bouzer2}-\cite{Arulampalam01} and may shed light on neuronal activities concerning shunting inhibition \cite{Vida06}-\cite{Graham98}.
Exponential stability of neural networks has been widely studied in the literature (see, for example, \cite{Chunxia}-\cite{9},\cite{Liao02}-\cite{7}). According to Liao et al. \cite{Liao02}, exponential stability is important in neural networks when the exponential convergence rate is used to determine the speed of neural computations. The studies \cite{Liao02,Yi99} were concerned with the exponential stability and the estimation of exponential convergence rates in neural networks. In the paper \cite{Liao02}, Lyapunov-Krasovskii functionals and the linear matrix inequality (LMI) approaches were combined to investigate the problem, whereas the boundedness of the Dini derivative of the neuron input-output activations was required in \cite{Yi99}. The exponential stabilization problem of memristive neural networks was considered in \cite{Wen15a} by means of the Lyapunov-Krasovskii functional and free weighting matrix techniques. Additionally, the Lyapunov-Krasovskii functional method was considered by Wen et al. \cite{Wen15c} to analyze the passivity of stochastic impulsive memristor-based piecewise linear systems, and the free weighting matrix approach was utilized in \cite{He06} to derive an LMI based delay dependent exponential stability criterion for neural networks with a time varying delay. On the other hand, exponential stability criteria were derived by Dan et al. \cite{Dan13} for an error system in order to achieve lag synchronization of coupled delayed chaotic neural networks. The concept of lag synchronization was also considered in the papers \cite{Wen14} and \cite{Wen15b} for memristive neural networks and for a class of switched neural networks with time-varying delays, respectively. Furthermore, the Banach fixed point theorem and a variant of a certain integral inequality with an explicit estimate were used to investigate the global exponential stability of pseudo almost periodic solutions of $SICNNs$ with mixed delays in the study \cite{cherif}.
Almost periodic and, in particular, quasi-periodic motions are important for the theory of neural networks. According to Pasemann et al. \cite{Pasemann03}, periodic and quasi-periodic solutions are of fundamental importance in biological and artificial systems, as they are associated with central pattern generators, establishing stability properties and bifurcations (leading to the discovery of periodic solutions). Besides, the sinusoidal shape of neural output signals is, in general, associated with appropriate quasi-periodic attractors for discrete-time dynamical systems. In the book \cite{Izhikevich07}, the dynamics of the brain activity is considered as a system of many coupled oscillators with different incommensurable periods. Signals from the neurons have a phase shift of $\pi/2,$ and may be useful for various kinds of applications; for instance, controlling the gait of legged robots \cite{Kimura99}. Furthermore, an alternative discrete time model of coupled quasi-periodic and chaotic neural network oscillators was considered by Wang \cite{Wang92}.
Let us describe the model of $SICNNs$ in its most original form \cite{bouzer1}. Consider a two dimensional grid of processing cells arranged into $m$ rows and $n$ columns, and let $C_{ij},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ denote the cell at the $(i,j)$ position of the lattice. In $SICNNs,$ neighboring cells exert mutual inhibitory interactions of the shunting type. The dynamics of a cell $C_{ij}$ are described by the following nonlinear ordinary differential equation,
\begin{eqnarray}
\begin{array}{l} \label{1}
\displaystyle \frac{dx_{ij}}{dt}=-a_{ij}x_{ij}-\sum_{C_{kl}\in N_{r}(i,j)} C_{ij}^{kl}f(x_{kl}(t))x_{ij} + L_{ij}(t),
\end{array}
\end{eqnarray}
where $x_{ij}$ is the activity of the cell $C_{ij};$ $L_{ij}(t)$ is the external input to the cell $C_{ij};$ the constant $a_{ij}>0$ represents the passive decay rate of the cell activity; $C_{ij}^{kl}\geq 0$ is the coupling strength of postsynaptic activity of the cell $C_{kl}$ transmitted to the cell $C_{ij};$ the activation function $f(x_{kl})$ is a positive continuous function representing the output or firing rate of the cell $C_{kl};$ and the $r-$neighborhood of the cell $C_{ij}$ is defined as
\begin{eqnarray*}
N_{r}(i,j)=\{C_{kl}: \max(|k-i|,|l-j|)\leq r, \ 1\leq k\leq m, 1\leq l \leq n \}.
\end{eqnarray*}
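As an illustration of this formulation, the following short Python sketch (an illustrative aid, not part of the original model description; the array shapes and the identification $C_{ij}^{kl}=C_{kl}$ are assumptions made only for this example) computes the $r-$neighborhood and the right hand side of the above equation on an $m\times n$ grid.
\begin{verbatim}
import numpy as np

def neighborhood(i, j, r, m, n):
    # indices (k, l) with max(|k - i|, |l - j|) <= r inside the m x n grid
    return [(k, l) for k in range(m) for l in range(n)
            if max(abs(k - i), abs(l - j)) <= r]

def sicnn_rhs(x, t, a, C, L, f, r):
    # dx_ij/dt = -a_ij x_ij - sum_{C_kl in N_r(i,j)} C_kl f(x_kl) x_ij + L_ij(t)
    m, n = x.shape
    dx = np.empty_like(x)
    for i in range(m):
        for j in range(n):
            shunt = sum(C[k, l]*f(x[k, l]) for (k, l) in neighborhood(i, j, r, m, n))
            dx[i, j] = -a[i, j]*x[i, j] - shunt*x[i, j] + L(t)[i, j]
    return dx
\end{verbatim}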
It is worth noting that even if the activation function is supposed to be globally bounded and Lipschitzian, these properties are not valid for the nonlinear terms in the right hand sides of the differential equations describing the dynamics of $SICNNs,$ and this is one of the reasons why a sophisticated mathematical analysis is required for $SICNNs$ in general. Another reason is that the connections between neurons in $SICNNs$ act locally, only within $r-$neighborhoods. This necessitates estimation techniques in the mathematical analysis of the models that differ from those customary for earlier developed neural networks.
It is reasonable to say that the use of deviated arguments in neural networks brings the models much closer to applications. For example, in \cite{8} the model was considered with variable delays,
\begin{eqnarray}\label{2}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kl}(t- \tau(t)))x_{ij} + L_{ij}(t).
\end{eqnarray}
In the present study, we introduce and investigate more general neural networks. The model will be described in the next section.
\section{Preliminaries}
Let $\mathbb Z$ and $\mathbb R$ denote the sets of all integers and real numbers, respectively. Throughout the paper, the norm $\left\|u\right\|=\displaystyle \max_{(i,j)} \left|u_{ij}\right|,$ where $u=\left\{u_{ij}\right\} = (u_{11},\ldots,u_{1n}, \ldots, u_{m1},\ldots,u_{mn}) \in \mathbb R^{m\times n},$ will be used.
Suppose that $\theta=\{\theta_p\}$ and $\zeta=\{\zeta_p\},$ $p \in \mathbb Z,$ are sequences of real numbers such that the first one is strictly increasing, $|\theta_p| \to \infty$ as $|p| \to \infty,$ and the second one satisfies $\theta_p \le \zeta_p \le \theta_{p+1}$ for all $p \in \mathbb Z.$ The sequence $\zeta$ is not necessarily strictly ordered. We say that a function is of $\gamma-$type, and denote it by $\gamma(t),$ if $\gamma(t) = \zeta_p$ for $\theta_p \le t < \theta_{p+1},$ $p \in \mathbb Z.$ One can verify, for example, that $\displaystyle 2 \left[\frac{t+1}{2} \right]$ is a $\gamma-$type function with $\theta_p=2p-1$, $\zeta_p=2p.$
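The following small Python sketch (an auxiliary illustration; the simple index search is an assumption made only for this example) constructs a $\gamma-$type function from the sequences $\theta$ and $\zeta$ and checks the example above.
\begin{verbatim}
import math

def make_gamma(theta, zeta):
    # gamma(t) = zeta_p whenever theta_p <= t < theta_{p+1}
    def gamma(t):
        p = 0
        while theta(p) > t:        # walk left until theta_p <= t
            p -= 1
        while theta(p + 1) <= t:   # walk right until t < theta_{p+1}
            p += 1
        return zeta(p)
    return gamma

# theta_p = 2p - 1, zeta_p = 2p reproduces gamma(t) = 2[(t + 1)/2]
gamma = make_gamma(lambda p: 2*p - 1, lambda p: 2*p)
for t in [-0.7, 0.3, 0.9, 1.0, 2.4]:
    assert gamma(t) == 2*math.floor((t + 1)/2)
# on [theta_p, zeta_p) one has gamma(t) > t (advanced argument),
# on (zeta_p, theta_{p+1}) one has gamma(t) < t (delayed argument)
\end{verbatim}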
Fix a non-negative number $\tau \in \mathbb R$ and let ${\cal C}^0$ be the set of all continuous functions mapping the interval $[-\tau,0]$ into $\mathbb R,$ with the uniform norm $\displaystyle \|\phi\|_0 = \max_{t\in[-\tau,0]} \left| \phi(t) \right|.$ Moreover, we denote by ${\cal C}$ the set consisting of continuous functions mapping the interval $[-\tau,0]$ into $\mathbb R^{m \times n},$ with the uniform norm $\displaystyle \|\phi\|_0 = \max_{t\in[-\tau,0]} \left\| \phi(t) \right\|.$
In the present study, we propose to investigate retarded $SICNNs$ with functional response on piecewise constant argument of the following form,
\begin{eqnarray}\label{3}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{klt},x_{kl\gamma(t)})x_{ij} + L_{ij}(t),
\end{eqnarray}
where $f:{\cal C}^0 \times {\cal C}^0 \to \mathbb R$ is a continuous functional.
In network (\ref{3}), the terms $x_{klt}$ and $x_{kl\gamma(t)}$ must be understood in the way used for functional differential equations \cite{burton}-\cite{kuang}. That is, $x_{klt}(s) = x_{kl}(t+s)$ and $x_{kl\gamma(t)}(s) = x_{kl}(\gamma(t) + s)$ for $s \in [-\tau,0].$ Let us clarify that the argument function $\gamma(t)$ is of the alternate type. Fix an integer $p$ and consider the function on the interval $[\theta_{p},\theta_{p+1}).$ Then, the function $\gamma(t)$ is equal to $\zeta_{p}.$ If the argument $t$ satisfies $\theta_p \leq t < \zeta_p,$ then $\gamma(t)> t$ and it is of advanced type. Similarly, if $\zeta_p < t < \theta_{p+1},$ then $\gamma(t)< t$ and, hence, it is of delayed type. Consequently, it is worth noting that the $SICNN$ (\ref{3}) is with {\it alternate constancy} of argument. It is known that $\gamma(t)$ is the most general among piecewise constant argument functions \cite{akhmet}. Our model is much more general than the equations investigated in \cite{s3}-\cite{wyz1}, where the delay is constant $\tau = 1$ and it is equal to the step of the greatest integer function $[t].$ Differential equations with functional response on the piecewise constant argument were first introduced in the paper \cite{a9}. In the present study, we apply the theory to the analysis of neural networks. Previous authors dealt at most with terms of the form $x({\gamma(t)}).$ Thus, one can say that retarded functional differential equations with piecewise constant argument are investigated in this paper in their most general form.
Since the model (\ref{3}) is a new one, we have to investigate not only the existence of almost periodic solutions and their stability, but also common problems of the existence and uniqueness of solutions, their continuation to infinity and boundedness.
One can easily see that system (\ref{2}) is a particular case of (\ref{3}). Additionally, results of the present paper are true or can be easily adapted to the following systems,
\begin{eqnarray}\label{4}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kl}(\gamma(t)))x_{ij} + L_{ij}(t),
\end{eqnarray}
that is, differential equations with piecewise constant argument, $EPCAG,$
\begin{eqnarray}\label{5}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kl}(t - \tau(t)),x_{kl}(\gamma(t)- \tau(t)))x_{ij} + L_{ij}(t),
\end{eqnarray}
differential equations with variable delay and piecewise constant argument,
\begin{eqnarray}\label{6}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kl}(t - \tau(t)))x_{ij} - \sum_{C_{kl} \in N_r(i,j)} D_{ij}^{kl}g(x_{kl}(\gamma(t)))x_{ij} + L_{ij}(t).
\end{eqnarray}
In other words, what we have suggested are sufficiently general models, which can be easily specified for concrete applications.
Let us introduce the initial condition for $SICNN$ (\ref{3}). Fix a number $\sigma \in \mathbb R$ and functions $\phi=\left\{\phi_{ij}\right\},$ $\psi=\left\{\psi_{ij} \right\} \in {\cal C},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n.$ In the case $\gamma(\sigma) < \sigma,$ we say that a solution $x(t)= \left\{x_{ij}(t)\right\}$ of (\ref{3}) satisfies the initial condition and write $x(t) = x(t,\sigma,\phi,\psi),$ $t \ge \sigma,$ if $x_{\sigma}(s) = \phi(s),$ $x_{\gamma(\sigma)}(s) = \psi(s)$ for $s \in [-\tau,0].$ In what follows, we assume that if the set $[\gamma(\sigma)-\tau,\gamma(\sigma)] \cup [\sigma -\tau,\sigma]$ is connected, then the equation $\phi(s) = \psi(s+\sigma - \gamma(\sigma))$ is true for all $s \in [-\tau, \gamma(\sigma) - \sigma].$
If $\gamma(\sigma) \ge \sigma,$ then we look for a solution $x(t)= x(t,\sigma,\phi),$ $t \ge \sigma,$ such that $x_{\sigma}(s) = \phi(s), s \in [-\tau,0].$
Thus, if $\theta_p \le \sigma < \theta_{p+1}$ for some $p \in \mathbb Z,$ then there are two cases of the initial condition:
\begin{itemize}
\item[\bf (IC$_1$)] $x_{\sigma}(s) = \phi(s),$ $\phi \in {\cal C},$ $s \in [-\tau,0]$ if $\theta_p \le \sigma \le \zeta_p < \theta_{p+1};$
\item[\bf (IC$_2$)] $x_{\sigma}(s) = \phi(s),$ $x_{\gamma(\sigma)}(s) = \psi(s),$ $\phi,\psi \in {\cal C},$ $s \in [-\tau,0],$ if $\theta_p \le \zeta_p < \sigma < \theta_{p+1}.$
\end{itemize}
Considering $SICNN$ (\ref{3}) with these conditions, we shall speak of the initial value problem $(IVP)$ for (\ref{3}). For brevity, we shall refer to the $IVP$ in the form $x(t,\sigma,\phi,\psi),$ specifying $x(t,\sigma,\phi)$ for $(IC_1)$ if needed. Thus, we can now provide the following definitions.
\begin{defn}\label{d2}
A function $x(t)= \left\{x_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ is a solution of (\ref{3}) with $(IC_1)$ or $(IC_2)$ on an interval $[\sigma, \sigma + a)$ if:
\begin{enumerate}
\item [(i)] it satisfies the initial condition;
\item[(ii)] $x(t)$ is continuous on $[\sigma, \sigma + a);$
\item[(iii)] the derivative $x'(t)$ exists for
$t\geq \sigma$ with the possible exception of the points
$\theta_p$, where one-sided derivatives exist;
\item[(iv)] equation $(\ref{3})$ is satisfied by $x(t)$ for all $t > \sigma$ except possibly at the points of $\theta,$ and it holds for the right derivative of $x(t)$ at the points $\theta_p.$
\end{enumerate}
\end{defn}
\begin{defn}\label{defn1}
A function $x(t)= \left\{x_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ is a solution of (\ref{3}) on $\mathbb R$ if:
\begin{enumerate}
\item[(i)] $x(t)$ is continuous;
\item[(ii)] the derivative $x'(t)$ exists for all
$t \in \mathbb R$ with the possible exception of the points
$\theta_p,$ $p \in \mathbb Z,$ where one-sided derivatives exist;
\item[(iii)] equation $(\ref{3})$ is satisfied by $x(t)$ for all $t \in \mathbb R$ except at the points of $\theta,$ and it holds for the right derivative of $x(t)$ at the points $\theta_p,$ $p \in \mathbb Z.$
\end{enumerate}
\end{defn}
The existence and uniqueness of solutions of (\ref{3}) will be investigated in the next section.
\section{Existence and uniqueness}
Throughout the paper we suppose in $SICNN$ (\ref{3}) that $\displaystyle \gamma_0 = \min_{(i,j)} a_{ij} > 0$ and $ C_{ij}^{kl}$ are non-negative numbers.
The following assumptions are required.
\begin{itemize}
\item[\bf (C1)] The functional $f$ satisfies the Lipschitz condition
\[
|f(\phi_1,\psi_1) - f(\phi_2,\psi_2)| \le L( \|\phi_1 - \phi_2\|_0 + \|\psi_1 - \psi_2\|_0 ),
\]
for some positive constant $L,$ where $(\phi_1,\psi_1)$ and $(\phi_2,\psi_2)$ are from ${\cal C}^0 \times {\cal C}^0;$
\item [\bf (C2)] There exists a positive number $M$ such that $\displaystyle \sup_{(\phi,\psi) \in {\cal C}^0\times {\cal C}^0} \left|f (\phi,\psi)\right| \le M;$
\item [\bf (C3)] There exists a positive number $\bar \theta$ such that $\theta_{p+1} - \theta_{p} \leq \bar \theta$ for all $p \in \mathbb Z;$
\item [\bf (C4)] $|L_{ij}(t)| \le L_{ij}$ for all $i,j$ and $t \in \mathbb R,$ where $L_{ij}$ are non-negative real constants.
\end{itemize}
In the remaining parts of the paper, the notations
$\mu = \displaystyle \max_{(i,j)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl},$ $\displaystyle \bar c = \max_{(i,j)}\frac{\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{a_{ij}},$ $\displaystyle \bar d = \max_{(i,j)}\frac{\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{2a_{ij}-\gamma_0},$ $\displaystyle \bar L= \max_{(i,j)} L_{ij}$ and $\displaystyle \bar l = \max_{(i,j)}\frac{L_{ij}}{a_{ij}}$ will be used. We assume that $\mu \bar{\theta}M<1$ and $M \bar c<1.$
Let us denote ${\cal C}_{H_0} = \{\phi \in {\cal C} : \|\phi\|_0\leq H_0\},$ where $H_0$ is a positive number.
\begin{lem} \label{tip-tap} Suppose that the conditions $(C1)-(C4)$ hold and fix an integer $p.$ If $H_0$ is a positive number such that $\displaystyle \mu \bar{\theta} \Big[ M + \frac{2L(H_0+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big]<1,$ then for every $(\sigma,\phi,\psi) \in [\theta_p, \theta_{p+1}] \times {\cal C}_{H_0} \times {\cal C}_{H_0}$ there exists a unique solution $x(t)=x(t,\sigma, \phi,\psi)$ of (\ref{3}) on $[\sigma, \theta_{p+1}].$
\end{lem}
\noindent {\bf Proof.} We assume without loss of generality that $\theta_p\leq \sigma \le \zeta_p < \theta_{p+1}.$ That is, we consider $(IC_1)$ and the solution $x(t,\sigma, \phi).$
Fix an arbitrary function $\phi \in {\cal C}_{H_0}.$ Let us denote by $\Lambda$ the set of continuous functions $u(t)=\left\{u_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ defined on $[\sigma-\tau,\theta_{p+1}]$ such that $u_{\sigma}(t)=\phi(t),$ $t\in [-\tau,0],$ and $\left\|u\right\|_1 \le K_0,$ where $\displaystyle \left\|u\right\|_1 = \max_{t\in [\sigma,\theta_{p+1}]} \left\|u(t)\right\|$ and $K_0=\displaystyle \frac{H_0+\bar{\theta}\bar L}{1-\mu \bar{\theta}M}.$
Define on $\Lambda$ an operator $\Phi$ such that
\begin{eqnarray*}
(\Phi u (t))_{ij}= \left\{\begin{array}{ll} \phi_{ij}(t-\sigma), ~t \in [\sigma-\tau,\sigma],\\
e^{-a_{ij}(t-\sigma)}\phi_{ij}(0) - \displaystyle \int_{\sigma}^{t}e^{-a_{ij}(t-s)}\Big[\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}\\
\times f(u_{kls}, u_{kl\gamma(s)} )u_{ij}(s) -L_{ij}(s) \Big] ds, ~t \in [\sigma, \theta_{p+1}].
\end{array}\right.
\end{eqnarray*}
One can confirm that
$
\left| (\Phi u (t))_{ij} \right| \le H_0 + \Big( MK_0 \displaystyle \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} + \bar{L} \Big) \bar{\theta},
$
$t\in [\sigma,\theta_{p+1}].$
Accordingly, the inequality
$
\left\|\Phi u\right\|_1 \le H_0 + (\mu M K_0 + \bar{L})\bar{\theta} = K_0
$
is valid. Therefore, $\Phi(\Lambda) \subseteq \Lambda.$
On the other hand, if $u(t)=\left\{u_{ij}(t)\right\}$ and $v(t)=\left\{v_{ij}(t)\right\}$ belong to $\Lambda,$ then we have for $t\in [\sigma,\theta_{p+1}]$ that
\begin{eqnarray*}
&& \left| (\Phi u(t))_{ij} - (\Phi v(t))_{ij} \right| \le \displaystyle \int_{\sigma}^{t}e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left|f(u_{kls},u_{kl\gamma(s)})\right| \left|u_{ij}(s)-v_{ij}(s)\right| ds \\
&& + \displaystyle \int_{\sigma}^{t}e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left| f(u_{kls},u_{kl\gamma(s)}) - f(v_{kls},v_{kl\gamma(s)})\right| \left|v_{ij}(s)\right| ds \\
&& \le \bar{\theta} (M+2K_0L) \left\|u-v\right\|_1 \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}.
\end{eqnarray*}
Hence, the inequality $\left\|\Phi u - \Phi v \right\|_1 \le \mu \bar{\theta} (M+2K_0L) \left\|u-v\right\|_1$ holds. Because $\mu \bar{\theta} (M+2K_0L) = \displaystyle \mu \bar{\theta} \Big[ M + \frac{2L(H_0+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big] <1,$ the operator $\Phi$ is a contraction.
Consequently, there exists a unique solution of (\ref{3}) on $[\sigma, \theta_{p+1}].$ $\square$
The next assertion can be proved exactly in the way that is used to verify Lemma $2.2$ from \cite{akhmet}, if we use Lemma \ref{tip-tap}.
\begin{lem}\label{lemi2}
Suppose that the conditions $(C1)-(C4)$ hold and fix an integer $p.$ If $H_0$ is a positive number such that $\displaystyle \mu \bar{\theta} \Big[ M + \frac{2L(H_0+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big]<1,$ then for every $(\sigma,\phi,\psi) \in [\theta_p, \theta_{p+1}] \times {\cal C}_{H_0} \times {\cal C}_{H_0}$ there exists a unique solution $x(t)=x(t,\sigma, \phi,\psi),$ $t \ge \sigma,$ of (\ref{3}), and it satisfies the integral equation
\begin{eqnarray}\label{pshik2}
x_{ij}(t)= {\rm e}^{-a_{ij}(t-\sigma)}\phi_{ij}(0) - \int_{\sigma}^{t}{\rm e}^{-a_{ij}(t-s)}\Big[\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kls},x_{kl\gamma(s)})x_{ij}(s) -L_{ij}(s)\Big]ds.
\end{eqnarray}
\end{lem}
\section{Bounded solutions}
In this section, we will investigate the existence of a unique bounded solution of $SICNN$ (\ref{3}). Moreover, the exponential stability of the bounded solution will be considered. An auxiliary result is presented in the following lemma.
\begin{lem}\label{lemi3}
Assume that the conditions $(C1)-(C4)$ are fulfilled. If $H_0$ is a positive number such that $\displaystyle \mu \bar{\theta} \Big[ M + \frac{2L(H_0+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big]<1,$ then a function $x(t)=\left\{x_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ satisfying $\displaystyle\sup_{t\in\mathbb R} \left\|x(t)\right\|\le H_0$ is a solution of (\ref{3}) if and only if it satisfies the following integral equation
\begin{eqnarray}\label{pshik3}
&& x_{ij}(t)= - \int_{-\infty}^{t}{\rm e}^{-a_{ij}(t-s)}\Big[\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kls},x_{kl\gamma(s)})x_{ij}(s) -L_{ij}(s)\Big]ds.
\end{eqnarray}
\end{lem}
\noindent {\bf Proof.} We consider only sufficiency. The necessity can be proved by using (\ref{pshik2}) in a very similar way to the ordinary differential equations case.
One can obtain that
\begin{eqnarray*}
&& \displaystyle \Big| \int_{-\infty}^{t}{\rm e}^{-a_{ij}(t-s)}\Big[\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}f(x_{kls},x_{kl\gamma(s)})x_{ij}(s) -L_{ij}(s)\Big]ds \Big|\\
&& \le \frac{1}{a_{ij}} \Big(\displaystyle M H_0 \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} + L_{ij} \Big).
\end{eqnarray*}
Therefore, the integral in (\ref{pshik3}) is convergent.
By differentiating (\ref{pshik3}), one can verify that $x(t)$ is a solution of (\ref{3}).
$\square$
The following conditions are needed.
\begin{itemize}
\item [\bf (C5)] $\displaystyle \mu \bar{\theta}\Big[ M + \frac{2L(H+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big]<1,$ where $H=\displaystyle \frac{\bar l}{1- M \bar c};$
\item [\bf (C6)] $(M+2LH)\bar c<1;$
\item [\bf (C7)] $2\bar{d} \left[M+LHe^{\gamma_0\tau/2} \left(1+e^{\gamma_0 \bar{\theta}/2}\right)\right]<1.$
\end{itemize}
The main result concerning the existence and exponential stability of bounded solutions of (\ref{3}) is given in the next theorem.
\begin{thm} \label{thm2} Suppose that the conditions $(C1)-(C6)$ hold. Then, (\ref{3}) admits a unique solution which is bounded on $\mathbb R,$ and this solution satisfies (\ref{pshik3}). If, additionally, the condition $(C7)$ is valid, then the solution is exponentially stable with exponential convergence rate $\gamma_0/2.$
\end{thm}
\noindent {\bf Proof.} Let $C_0(\mathbb R)$ be the set of uniformly continuous functions defined on $\mathbb R$ such that if $u(t) \in C_0(\mathbb R),$ then $\|u\|_{\infty} \le H,$ where $\|u\|_{\infty} = \displaystyle \sup_{t\in\mathbb R}\|u(t)\|.$ Define on $C_0(\mathbb R)$ the operator $\Pi$ as
\begin{eqnarray} \label{theorem_operator_defn}
(\Pi u(t))_{ij} \equiv - \int_{-\infty}^{t}{\rm e}^{-a_{ij}(t-s)}\Big[\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} f(u_{kls}, u_{kl\gamma(s)})u_{ij}(s) - L_{ij}(s)\Big]ds.
\end{eqnarray}
If $u(t)=\left\{u_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ belongs to $C_0(\mathbb R),$ then we have that
\begin{eqnarray*}
\left|(\Pi u(t))_{ij} \right| \le \displaystyle \int_{-\infty}^t e^{-a_{ij}(t-s)} \Big( \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M H + L_{ij} \Big)ds =\frac{1}{a_{ij}} \Big( \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M H + L_{ij} \Big).
\end{eqnarray*}
Utilizing the last inequality one can show that
$
\left\|(\Pi u) \right\|_{\infty} \le \bar c MH+\bar l=H.
$
Therefore, $\Pi u(t)\in C_0(\mathbb R).$
Let us verify that this operator is contractive. Indeed, if $u(t)=\left\{u_{ij}(t)\right\}$ and $v(t)=\left\{v_{ij}(t)\right\}$ belong to $C_0(\mathbb R),$ then
\begin{eqnarray*}
&& \left| (\Pi u(t))_{ij} -(\Pi v(t))_{ij} \right| \le \displaystyle \int_{-\infty}^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left| f(u_{kls},u_{kl\gamma(s)}) \right| \left|u_{ij}(s)-v_{ij}(s)\right| ds \\
&& + \displaystyle \int_{-\infty}^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left| f(u_{kls},u_{kl\gamma(s)}) - f(v_{kls},v_{kl\gamma(s)}) \right| \left|v_{ij}(s)\right| ds \\
&& \le \displaystyle \int_{-\infty}^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M \left|u_{ij}(s)-v_{ij}(s)\right| ds \\
&& + \displaystyle \int_{-\infty}^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} HL \left( \left\|u_{kls}-v_{kls}\right\|_0 + \left\|u_{kl\gamma(s)}-v_{kl\gamma(s)}\right\|_0 \right) ds\\
&& \le (M+2LH) \frac{\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{a_{ij}} \left\|u-v\right\|_{\infty}.
\end{eqnarray*}
Hence, the inequality $ \left\|\Pi u-\Pi v\right\|_{\infty}\le (M+2LH)\bar c \left\|u-v\right\|_{\infty}$ is valid.
In accordance with condition $(C6),$ the operator $\Pi$ is contractive. Consequently, $SICNN$ (\ref{3}) admits a unique solution $\widetilde v(t)=\left\{\widetilde v_{ij}(t)\right\}$ that belongs to $C_0(\mathbb R).$
We will continue with the investigation of the exponential stability. Fix an arbitrary number $\epsilon>0$ and let $\delta$ be a sufficiently small positive number such that $\alpha_1<1,$ $\alpha_2<1,$ $\alpha_3<1$ and ${\cal K}(\delta) < \epsilon,$ where
$\displaystyle {\cal K}(\delta) = \frac{\delta}{1 - 2\bar d [M + LHe^{\gamma_0\tau/2}(1+e^{\gamma_0 \bar{\theta}/2})]},$ $\alpha_1=\displaystyle \mu \bar{\theta}\Big[ M + \frac{2L(H+\delta+\bar{\theta} \bar{L})}{1-\mu \bar{\theta}M} \Big],$
$\alpha_2=(M+2LH)\bar c + 4L \bar d {\cal K}(\delta)$ and
$\alpha_3=\mu \bar{\theta}(M+2LH) + 2 \mu \bar{\theta} L{\cal K}(\delta).$
Suppose that $\widetilde v_{\sigma}(s)=\eta(s),$ $s\in [-\tau,0].$ Let $u(t)=\left\{u_{ij}(t)\right\}$ be a solution of the network (\ref{3}) with $u_{\sigma}(s)=\phi(s),$ $s\in [-\tau,0],$ where the function $\phi$ satisfies the inequality $\|\phi - \eta\|_0 < \delta.$ Without loss of generality we assume that $\gamma(\sigma) \ge \sigma.$ Using Lemma \ref{lemi2} one can verify for $t\ge \sigma$ that
\begin{eqnarray*}
&& u_{ij}(t) - \widetilde v_{ij}(t) = {\rm e}^{-a_{ij}(t-\sigma)} \left( \phi_{ij}(0)-\eta_{ij}(0) \right)\\
&& - \int_{\sigma}^{t}{\rm e}^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \Big[ f(u_{kls},u_{kl\gamma(s)})u_{ij}(s)- f(\widetilde v_{kls},\widetilde v_{kl\gamma(s)})\widetilde v_{ij}(s) \Big] ds.
\end{eqnarray*}
Denote by $w(t)=\left\{w_{ij}(t)\right\},$ the difference $u(t) - \widetilde v(t).$ Then, $w(t)$ satisfies the relation
\begin{eqnarray} \label{Shunting_almost_periodic_stability_proof}
&& w_{ij}(t) = {\rm e}^{-a_{ij}(t-\sigma)} \left(\phi_{ij}(0)-\eta_{ij}(0)\right) - \int_{\sigma}^{t}{\rm e}^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}\Big[ f(\widetilde v_{kls} +\nonumber\\
&& w_{kls}, \widetilde v_{kl\gamma(s)}+w_{kl\gamma(s)} )(\widetilde v_{ij}(s) +w_{ij}(s)) - f(\widetilde v_{kls}, \widetilde v_{kl\gamma(s)}) \widetilde v_{ij}(s) \Big]ds .
\end{eqnarray}
We will consider equation (\ref{Shunting_almost_periodic_stability_proof}) for $\sigma =0.$ Let $\Psi_{\delta}$ be the set of all continuous functions $w(t)=\left\{w_{ij}(t)\right\}$ which are defined on $[-\tau, \infty)$ such that:
\begin{itemize}
\item [(i)] $w(t) = \phi(t) - \eta(t),$ $t\in[-\tau,0];$
\item [(ii)] $w(t)$ is uniformly continuous on $[0,+ \infty);$
\item [(iii)]$|| w(t)|| \leq {\cal K}(\delta) e^{-\gamma_0 t/2}$ for $t\geq 0.$
\end{itemize}
Define on $\Psi_{\delta}$ an operator $\tilde \Pi$ such that
\begin{eqnarray*}
&& (\tilde \Pi w (t))_{ij}= \left\{\begin{array}{ll} \phi_{ij}(t) - \eta_{ij}(t), ~t \in [-\tau,0],\\
e^{-a_{ij}t}(\phi_{ij}(0)-\eta_{ij}(0)) - \displaystyle \int_{0}^{t}e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}\Big[f(\widetilde v_{kls} +\\
w_{kls},\widetilde v_{kl\gamma(s)}+w_{kl\gamma(s)} )(\widetilde v_{ij}(s) + w_{ij}(s)) - f(\widetilde v_{kls},\widetilde v_{kl\gamma(s)})\widetilde v_{ij}(s)\Big]ds, ~t > 0.
\end{array}\right.
\end{eqnarray*}
We shall show that $\tilde \Pi : \Psi_{\delta} \rightarrow \Psi_{\delta}.$
Indeed, it is true for $t\geq 0$ that
\begin{eqnarray*}
&& |(\tilde \Pi w (t))_{ij}| \leq e^{-a_{ij}t}\delta + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \\
&& \times \left| f(\widetilde v_{kls} + w_{kls},\widetilde v_{kl\gamma(s)}+w_{kl\gamma(s)} ) - f(\widetilde v_{kls},\widetilde v_{kl\gamma(s)}) \right| \left|\widetilde v_{ij}(s)\right| ds \\
&& + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left|f(\widetilde v_{kls} + w_{kls},\widetilde v_{kl\gamma(s)}+w_{kl\gamma(s)} )\right| \left|w_{ij}(s)\right| ds \\
&& \le e^{-a_{ij}t}\delta + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} H L \left( \left\| w_{kls} \right\|_0 + \left\| w_{kl\gamma(s)} \right\|_0 \right) ds \\
&& + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M {\cal K}(\delta) e^{-\gamma_0 s/ 2}ds \\
&& \le e^{-a_{ij}t}\delta + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} H L {\cal K}(\delta) \left( e^{\gamma_0 \tau /2} +e^{\gamma_0(\bar{\theta}+\tau)/2} \right) e^{-\gamma_0 s/ 2} ds \\
&& + \int_{0}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M {\cal K}(\delta) e^{-\gamma_0 s/ 2}ds\\
&& = e^{-a_{ij}t}\delta + \left(\frac{2\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{2a_{ij}-\gamma_0}\right) {\cal K}(\delta) [M + LHe^{\gamma_0\tau/2}(1+e^{\gamma_0 \bar{\theta}/2})] e^{-\gamma_0 t/ 2}.
\end{eqnarray*}
Thus, the inequality
\begin{eqnarray*}
\left\|\tilde \Pi w (t) \right\| \leq e^{-\gamma_0 t} \delta + 2 \bar d {\cal K}(\delta) [M + LHe^{\gamma_0\tau/2}(1+e^{\gamma_0 \bar{\theta}/2})] e^{-\gamma_0 t/ 2} \le {\cal K}(\delta)e^{-\gamma_0 t/ 2}
\end{eqnarray*}
is valid for $t\ge 0.$
Now, let $w^1(t)=\left\{w^1_{ij}(t)\right\},$ $w^2(t)=\left\{w^2_{ij}(t)\right\}$ be elements of $\Psi_{\delta}.$ One can confirm for $t\ge 0$ that
\begin{eqnarray*}
&& \left\| (\tilde \Pi w^1 (t))_{ij} - (\tilde \Pi w^2 (t))_{ij} \right\| \le \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left(\left| \widetilde{v}_{ij}(s) \right| + \left| w^2_{ij}(s) \right| \right)\\
&& \times \left| f(\widetilde v_{kls} + w^1_{kls}, \widetilde v_{kl\gamma(s)}+w^1_{kl\gamma(s)}) - f(\widetilde v_{kls} + w^2_{kls}, \widetilde v_{kl\gamma(s)} + w^2_{kl\gamma(s)}) \right| ds \\
&& + \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left| f(\widetilde v_{kls} + w^1_{kls}, \widetilde v_{kl\gamma(s)}+w^1_{kl\gamma(s)}) \right| \left| w^1_{ij}(s)-w^2_{ij}(s) \right| ds \\
&& \le \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} L\Big( H + {\cal K}(\delta)e^{-\gamma_0 s /2} \Big) \Big( \left\| w^1_{kls}-w^2_{kls} \right\|_0 + \left\| w^1_{kl\gamma(s)}-w^2_{kl\gamma(s)} \right\|_0 \Big) ds \\
&& + \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M \left| w^1_{ij}(s)-w^2_{ij}(s) \right| ds \\
&& \le (M+2LH) \displaystyle \sup_{t \ge 0} \left\| w^1(t)-w^2(t) \right\| \frac{\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{a_{ij}} \left(1-e^{-a_{ij}t}\right) \\
&& + 4L{\cal K}(\delta) \displaystyle \sup_{t \ge 0} \left\| w^1(t)-w^2(t) \right\| \frac{\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{2a_{ij}-\gamma_0} \left( e^{-\gamma_0t/2} -e^{-a_{ij}t} \right).
\end{eqnarray*}
Therefore, we have that
$\displaystyle \sup_{t\geq 0}||\tilde \Pi w^1 (t) - \tilde \Pi w^2 (t) || \leq \alpha_2 \sup_{t\geq 0}||w^1 (t) - w^2 (t) ||.$
Since $\alpha_2<1,$ one can conclude by using a contraction mapping argument that there exists a unique fixed point $\widetilde w(t)=\left\{\widetilde w_{ij}(t)\right\}$ of the operator $\tilde \Pi :\ \Psi_{\delta} \rightarrow \Psi_{\delta},$ which is a solution of (\ref{Shunting_almost_periodic_stability_proof}).
To complete the proof, we need to show that there does not exist a solution of (\ref{Shunting_almost_periodic_stability_proof}) with $\sigma=0$ different from $\widetilde w(t).$ Suppose that $\theta_p \le 0 < \theta_{p+1}$ for some $p \in \mathbb Z.$ Assume that there exists a solution $\overline w(t)=\left\{\overline w_{ij}(t)\right\}$ of (\ref{Shunting_almost_periodic_stability_proof}) different from $\widetilde w(t).$ Denote by $z(t)=\left\{z_{ij}(t)\right\}$ the difference $\overline w(t)-\widetilde w(t),$ and let $\displaystyle \max_{t\in [0, \theta_{p+1}]} ||z(t)|| = \bar m.$
It can be verified for $t\in[0,\theta_{p+1}]$ that
\begin{eqnarray*}
&& |z_{ij}(t)| \le \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left| \widetilde v_{ij}(s) + \widetilde w_{ij}(s) \right| \\
&& \times \left| f(\widetilde{v}_{kls}+\overline{w}_{kls}, \widetilde{v}_{kl\gamma(s)}+\overline{w}_{kl\gamma(s)}) - f(\widetilde{v}_{kls}+\widetilde{w}_{kls}, \widetilde{v}_{kl\gamma(s)}+\widetilde{w}_{kl\gamma(s)}) \right| ds \\
&& + \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \left|f(\widetilde{v}_{kls}+\overline{w}_{kls}, \widetilde{v}_{kl\gamma(s)}+\overline{w}_{kl\gamma(s)})\right| \left| z_{ij}(s) \right| ds \\
&& \le \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} L \left( H + {\cal K}(\delta) \right) \left( \left\| z_{kls} \right\|_0 + \left\| z_{kl\gamma(s)} \right\|_0 \right) ds \\
&& + \displaystyle \int_0^t e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M \left| z_{ij}(s) \right| ds \\
&& \le \bar{\theta} \bar{m}\left[M+2L \left( H + {\cal K}(\delta) \right)\right] \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}.
\end{eqnarray*}
The last inequality yields $\left\|z(t)\right\| \le \alpha_3 \bar m.$ Because $\alpha_3<1$ we obtain a contradiction. Therefore, $\overline w(t)=\widetilde w(t)$ for $t\in[0,\theta_{p+1}].$ Utilizing induction one can easily prove the uniqueness for all $t \ge 0.$ $\square$
\begin{remark} In the proof of Theorem \ref{thm2}, we make use of the contraction mapping principle to prove the exponential stability. In the literature, Lyapunov-Krasovskii functionals, LMI technique, free weighting matrix method and differential inequality technique were used to investigate the exponential stability in neural networks \cite{10,Liao02,He06}. They may also be considered in the future to prove the exponential stability in networks of the form (\ref {3}).
\end{remark}
The next section is devoted to the existence as well as the exponential stability of almost periodic solutions of (\ref{3}).
\section{Almost periodic solutions}
Let us denote by $B_0(\mathbb R)$ the set of all bounded and continuous functions defined on $\mathbb R.$ For $g\in B_0(\mathbb R)$ and $ \alpha \in \mathbb R,$
a translation of $g$ by $\alpha$ is a function $Q_{\alpha} g (t)= g(t+\alpha),$ $t \in \mathbb R.$ A number $\alpha \in \mathbb R$ is called an $\epsilon-$translation number of a function $g\in B_0(\mathbb R)$ if $\,|| Q_{\alpha} g (t)- g(t)||< \epsilon\,$ for every $t \in \mathbb R.$ Besides, a set $S \subset \mathbb R$ is said to be relatively dense if there exists a number $h > 0$ such that $[\vartheta, \vartheta+ h] \cap S \not = \emptyset $ for all $\vartheta \in \mathbb R.$ A function $g \in B_0 (\mathbb R)$ is said to be almost periodic, if for every positive number $\epsilon,$ there exists a relatively dense set of $\epsilon-$translation numbers of $g$ \cite{cord}.
On the other hand, an integer $k_0$ is called an $\epsilon-$almost period of a sequence $\left\{a_p\right\},$ $p\in\mathbb Z,$ of real numbers if $\left|a_{p+k_0}-a_p\right|<\epsilon$ for any $p\in\mathbb Z$ \cite{sp}. Let $\zeta_p^q = \zeta_{p+q} -\zeta_p$ and $\theta_p^q = \theta_{p+q} - \theta_p$ for all $p$ and $q.$ We call the family of sequences $\left\{\zeta_p^q\right\},$ $q \in \mathbb Z,$ equipotentially almost periodic \cite{akhmet,sp,hala} if for an arbitrary positive number $\epsilon$ there exists a relatively dense set of $\epsilon-$almost periods, common for all sequences $\left\{\zeta_p^q\right\},$ $q \in \mathbb Z.$
The following conditions are required.
\begin{itemize}
\item[\bf (C8)] The sequences $\left\{\zeta_p^q\right\},$ $q \in \mathbb Z,$ as well as the sequences $\left\{\theta_p^q\right\},$ $q \in \mathbb Z,$ are equipotentially almost periodic;
\item [\bf (C9)] There exist positive numbers $\underline \theta$ and $\underline \zeta$ such that $\theta_{p+1} - \theta_{p} \ge \underline \theta$ and $\zeta_{p+1} - \zeta_{p} \ge \underline \zeta$ for all $p \in \mathbb Z.$
\end{itemize}
It follows from condition $(C8)$ that there exists a positive number $\bar \theta$ such that condition $(C3)$ is valid, and $|\theta_p|,|\zeta_p| \to \infty$ as $|p| \to \infty$ \cite{akhmet,sp,hala}.
The next assertion can be proved by the method of common almost periods developed in \cite{Wexler66} (see also \cite{sp,hala}).
\begin{lem} \label{lem1} \cite{hala} Assume that $L(t)=\left\{L_{ij}(t)\right\},$ $i = 1,2,\ldots,m,$ $j=1,2,\ldots,n$ is almost periodic and the conditions $(C8),$ $(C9)$ are valid. Then, for arbitrary $\eta >0,$ $0 <\nu <\eta,$ there exist relatively dense sets of real numbers $\Omega$ and integers $Q$ such that
\begin{enumerate}
\item[(i)] $\|L(t+\alpha) - L(t)\|< \eta,$ $t \in \mathbb R;$
\item[(ii)] $|\zeta_p^q - \alpha| < \nu,$ $p \in \mathbb Z;$
\item[(iii)] $|\theta_p^q - \alpha| < \nu,$ $p \in \mathbb Z,$ $\alpha \in \Omega,$ $q \in Q.$
\end{enumerate}
\end{lem}
The existence and exponential stability of the almost periodic solution of the network (\ref{3}) are established in the following theorem.
\begin{thm} \label{thm3} Assume that the conditions $(C1),(C2),$ $(C4)-(C6),$ $(C8)$ and $(C9)$ are fulfilled. Then, the $SICNN$ (\ref{3}) admits a unique almost periodic solution. If, additionally, the condition $(C7)$ is valid, then the solution is exponentially stable with exponential convergence rate $\gamma_0/2.$
\end{thm}
{\bf Proof.} It follows from Theorem \ref{thm2} that (\ref{3}) admits a unique solution $u(t)=\left\{u_{ij}(t)\right\},$ $i=1,2,\ldots,m,$ $j=1,2,\ldots,n,$ which is bounded on $\mathbb R$ and is exponentially stable provided that the condition $(C7)$ is valid. We will show that it is an almost periodic function.
Consider the operator $\Pi$ defined by equation (\ref{theorem_operator_defn}) again. It is sufficient to verify that $\Pi u(t)$ is almost periodic, if $u(t)$ is.
Let us denote
$
\displaystyle \beta=\max_{(i,j)} \left[\frac{1}{a_{ij}} + (M+3LH) \frac{\sum_{C_{kl}\in N_r(i,j)}C_{ij}^{kl}}{a_{ij}} + \frac{4LH^2\sum_{C_{kl}\in N_r(i,j)}C_{ij}^{kl}}{1-e^{-a_{ij} \underline{\theta}}} \right].
$
Fix an arbitrary positive number $\epsilon.$ Because $\Pi u$ is uniformly continuous, there exists a positive number $\eta$ satisfying $\displaystyle \eta < \frac{\underline{\theta}}{5}$ and $\displaystyle \eta \leq \frac{\epsilon}{3\beta}$ such that if $\left|t'-t''\right|<4\eta,$ then
\begin{eqnarray} \label{SICNN_ap_proof_3}
\left\|\Pi u(t') - \Pi u (t'')\right\| < \displaystyle \frac{\epsilon}{3}.
\end{eqnarray}
Next, we take into account a number $\nu$ with $0<\nu<\eta$ such that $\left\|u(t')-u(t'')\right\|<\eta$ whenever $\left|t'-t''\right|<\nu,$ and let $\alpha$ and $q$ be numbers as mentioned in Lemma \ref{lem1} such that $\alpha$ is an $\eta-$translation number for $u(t).$
Assume that $t\in (\theta_{\overline{p}} + \eta, \theta_{\overline{p}+1} - \eta)$ for some $\overline p \in \mathbb Z.$ Making use of the equation
\begin{eqnarray*}
&& \displaystyle(\Pi u(t+\alpha))_{ij} - (\Pi u(t))_{ij} = -\int_{-\infty}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} \\
&& \times \displaystyle \Big[ f(u_{kl(s +\alpha)},u_{kl\gamma(s+\alpha)})u_{ij}(s +\alpha) - f(u_{kls},u_{kl\gamma(s)})u_{ij}(s) \Big]ds \\
&& + \displaystyle \int_{-\infty}^{t} e^{-a_{ij}(t-s)} \left[ L_{ij}(s+\alpha)- L_{ij}(s) \right] ds,
\end{eqnarray*}
we obtain that
\begin{eqnarray} \label{SICNN_ap_proof_1}
\begin{array}{l}
\displaystyle \left|(\Pi u(t+\alpha))_{ij} - (\Pi u(t))_{ij}\right| \le \int_{-\infty}^{t} e^{-a_{ij}(t-s)}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M \left|u_{ij}(s+\alpha)-u_{ij}(s)\right|ds \\
+ \displaystyle\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl(s+\alpha)}-u_{kls} \right\|_0 ds \\
+ \displaystyle\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl\gamma(s)} \right\|_0 ds \\
+ \displaystyle\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \left|L_{ij}(s+\alpha)- L_{ij}(s)\right| ds.
\end{array}
\end{eqnarray}
According to Lemma \ref{lem1}, $(i),$ the inequality
\begin{eqnarray*}
\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \left|L_{ij}(s+\alpha)- L_{ij}(s)\right| ds < \frac{\eta}{a_{ij}}
\end{eqnarray*}
is valid. Moreover, since $\alpha$ is an $\eta-$translation number for $u(t),$ one can confirm that
\begin{eqnarray*}
\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} M \left|u_{ij}(s+\alpha)-u_{ij}(s)\right|ds < \frac{M \eta}{a_{ij}} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}
\end{eqnarray*}
and
\begin{eqnarray*}
\displaystyle\int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl(s+\alpha)}-u_{kls} \right\|_0 ds < \frac{LH \eta}{a_{ij}}\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}.
\end{eqnarray*}
On the other hand, we have
\begin{eqnarray}\label{SICNN_ap_ineq2}
\begin{array}{l}
\displaystyle \int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl\gamma(s)} \right\|_0 ds \\
\le \displaystyle\int_{\theta_{\overline{p}}+\eta}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl(\gamma(s)+\alpha)} \right\|_0 ds \\
+ \displaystyle \sum_{\lambda=0}^{\infty} \int_{\theta_{\overline{p}-\lambda-1}+\eta}^{\theta_{\overline{p}-\lambda}-\eta} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl(\gamma(s)+\alpha)} \right\|_0 ds \\
+ \displaystyle \sum_{\lambda=0}^{\infty} \int_{\theta_{\overline{p}-\lambda}-\eta}^{\theta_{\overline{p}-\lambda}+\eta} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl(\gamma(s)+\alpha)} \right\|_0 ds \\
+ \displaystyle \int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl(\gamma(s)+\alpha)}-u_{kl\gamma(s)} \right\|_0 ds.
\end{array}
\end{eqnarray}
For any $p\in\mathbb Z,$ if $s\in (\theta_p+\eta, \theta_{p+1}-\eta),$ then one can show by using Lemma $\ref{lem1}, (iii)$ that the number $s +\alpha$ belongs to the interval $(\theta_{p + q},\theta_{p + q+1})$ so that
$$
\left\| u_{\gamma(s+\alpha)} -u_{\gamma(s)+\alpha} \right\|_0 = \max_{\kappa\in [-\tau,0]} \left\| u(\kappa+\zeta_{p+q}) - u(\kappa+\zeta_p+\alpha) \right\| < \eta,
$$
since $\left| (\kappa+\zeta_{p+q})-(\kappa+\zeta_p+\alpha) \right| = \left| \zeta^q_p - \alpha \right|<\nu$ by Lemma \ref{lem1}, $(ii).$
Besides, the inequality
$$
\displaystyle \sum_{\lambda=0}^{\infty} \int_{\theta_{\overline{p}-\lambda}-\eta}^{\theta_{\overline{p}-\lambda}+\eta} e^{-a_{ij}(t-s)} ds \le 2\eta \displaystyle \sum_{\lambda=0}^{\infty} e^{-a_{ij}\underline{\theta}\lambda}=\frac{2\eta}{1-e^{-a_{ij}\underline{\theta}}}
$$
is valid.
Therefore, (\ref{SICNN_ap_ineq2}) yields
\begin{eqnarray*}
&& \displaystyle \int_{-\infty}^{t} e^{-a_{ij}(t-s)} \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl} LH \left\| u_{kl\gamma(s+\alpha)}-u_{kl\gamma(s)} \right\|_0 ds \\
&& < LH \eta \left( \frac{2\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{a_{ij}} + \frac{4H\sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}}{1-e^{-a_{ij}\underline{\theta}}} \right).
\end{eqnarray*}
It can be verified by means of (\ref{SICNN_ap_proof_1}) that
$$
\displaystyle \left|(\Pi u(t+\alpha))_{ij} - (\Pi u(t))_{ij} \right|
< \eta\left[\frac{1}{a_{ij}} + (M+3LH) \frac{\sum_{C_{kl}\in N_r(i,j)}C_{ij}^{kl}}{a_{ij}} + \frac{4LH^2\sum_{C_{kl}\in N_r(i,j)}C_{ij}^{kl}}{1-e^{-a_{ij} \underline{\theta}}} \right].
$$
Hence,
\begin{eqnarray} \label{SICNN_ap_proof_4}
\left\|\Pi u (t+\alpha) - \Pi u(t)\right\|<\beta \eta \le \epsilon/3
\end{eqnarray}
for each $t$ that belongs to the intervals $(\theta_{p} + \eta, \theta_{p+1} - \eta),$ $p\in \mathbb Z.$
The inequality $\eta < \underline{\theta}/5$ ensures that $t+3\eta \in (\theta_{p} + \eta, \theta_{p+1} - \eta)$ if $\left| t-\theta_p \right| \le \eta.$
Now, by means of the inequalities (\ref{SICNN_ap_proof_3}) and (\ref{SICNN_ap_proof_4}) we attain for $\left| t-\theta_p \right| \le \eta,$ $p\in\mathbb Z,$ that
\begin{eqnarray*}
&& \left\|\Pi u(t+\alpha) - \Pi u(t)\right\| \le \left\|\Pi u(t+\alpha) - \Pi u(t+\alpha+3\eta)\right\| \\
&&+ \left\|\Pi u(t+\alpha+3\eta) - \Pi u(t+3\eta)\right\| + \left\|\Pi u(t+3\eta) - \Pi u(t)\right\| \\
&&<\epsilon.
\end{eqnarray*}
The last inequality implies that $\alpha$ is an $\epsilon-$translation number of $\Pi u(t).$ Consequently, the $SICNN$ (\ref{3}) admits a unique almost periodic solution.
$\square$
\begin{remark}
The Bohr definition of almost periodicity is also suitable for the application of the Lyapunov functional method and the technique of the Young inequality \cite{Jiang05} to show the existence, uniqueness and exponential stability of almost periodic solutions in $CNNs.$
\end{remark}
\section{An example}
Consider the sequence $\theta=\left\{\theta_p\right\}$ defined as $\displaystyle \theta_p = p + \frac{1}{4}|\sin(p) - \cos(p \sqrt 2)|,$ $p\in\mathbb Z.$ Utilizing the technique provided in \cite{sp,akhmet54}, one can verify that the sequences $\left\{\theta_p^q\right\},$ $q \in \mathbb Z,$ are equipotentially almost periodic. We take the function $\gamma(t)$ with $\zeta_p=\theta_p.$ One can confirm that the conditions $(C3)$ and $(C9)$ hold with $\bar{\theta}=3/2$ and $\underline{\theta}=1/2,$ respectively.
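Indeed, since $0\le \frac{1}{4}|\sin(p) - \cos(p \sqrt 2)|\le \frac{1}{2},$ every gap $\theta_{p+1}-\theta_p$ lies in $[1/2,3/2].$ These bounds can also be observed numerically; a minimal Python check (an assumed auxiliary computation, not part of the verification in \cite{sp,akhmet54}) is as follows.
\begin{verbatim}
import numpy as np

def theta(p):
    return p + 0.25*np.abs(np.sin(p) - np.cos(p*np.sqrt(2)))

p = np.arange(-10000, 10000)
gaps = theta(p + 1) - theta(p)
print(gaps.min(), gaps.max())   # both values lie inside [0.5, 1.5]
\end{verbatim}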
Let us take into account the $SICNN$
\begin{eqnarray}\label{pca_example}
\frac{dx_{ij}}{dt} = - a_{ij} x_{ij} - \sum_{C_{kl} \in N_1(i,j)} C_{ij}^{kl}f(x_{kl}(\gamma(t)-\tau))x_{ij} + L_{ij}(t),
\end{eqnarray}
in which $i,j = 1,2,3,$ $f(s)=\displaystyle \frac{s^2}{2}$ if $|s| \le 0.1,$ $f(s)=0.005$ if $|s|>0.1,$ $\tau=0.3,$
$$\left( \begin{array}{ccc}
a_{11}&a_{12}&a_{13} \\
a_{21}&a_{22}&a_{23} \\
a_{31}&a_{32}&a_{33} \end{array} \right)= \left( \begin{array}{ccc}
9&3&5 \\
6&5&4 \\
3&12&9 \end{array} \right),$$
$$\left( \begin{array}{ccc}
C_{11}&C_{12}&C_{13} \\
C_{21}&C_{22}&C_{23} \\
C_{31}&C_{32}&C_{33} \end{array} \right)= \left( \begin{array}{ccc}
0.08&0.01&0.02 \\
0.05&0.03&0.06 \\
0.04&0.07&0.02 \end{array} \right),$$
\begin{eqnarray*}
&& \left( \begin{array}{ccc}
L_{11}(t)&L_{12}(t)&L_{13}(t) \\
L_{21}(t)&L_{22}(t)&L_{23}(t) \\
L_{31}(t)&L_{32}(t)&L_{33}(t) \end{array} \right) \\
&& = \left( \begin{array}{ccc}
0.1\cos(t) + 0.2\sin(\sqrt{2}t)&0.2\cos(\pi t) + 0.1\sin(\sqrt{2}t)&0.15\cos(2t)- 0.12\cos(\pi t)\\
0.15\cos(3t) - 0.1\sin(\pi t)&0.2\cos(t) - 0.15\sin(\sqrt{2}t)&0.1\sin(t)+ 0.2\cos(\sqrt{3}t)\\
0.2\cos(\sqrt{2}t) + 0.14\sin(\pi t)&0.2\cos(\sqrt{2}t) + 0.1\sin(t)&0.15\cos(\sqrt{2}t)- 0.13\cos(4t) \end{array} \right).
\end{eqnarray*}
One can calculate that
$ \sum_{C_{kl} \in N_1(1,1)} C_{11}^{kl} =0.17,$ $\sum_{C_{kl} \in N_1(1,2)} C_{12}^{kl} = 0.25,$ $\sum_{C_{kl} \in N_1(1,3)} C_{13}^{kl} = 0.12,$
$\sum_{C_{kl} \in N_1(2,1)} C_{21}^{kl} = 0.28,$ $\sum_{C_{kl} \in N_1(2,2)} C_{22}^{kl} = 0.38,$ $\sum_{C_{kl} \in N_1(2,3)} C_{23}^{kl} = 0.21,$ $\sum_{C_{kl} \in N_1(3,1)} C_{31}^{kl} = 0.19,$ $\sum_{C_{kl} \in N_1(3,2)} C_{32}^{kl} = 0.27,$ $\sum_{C_{kl} \in N_1(3,3)} C_{33}^{kl} = 0.18.$ The conditions $(C5)-(C7)$ are valid for (\ref{pca_example}) with $\gamma_0=3,$ $\mu=0.38,$ $\bar{c}=\bar{d}=0.25/3,$ $M=0.005,$ $L=0.1,$ $\bar{L}=0.35,$ $\bar{l}=0.34/3.$ According to Theorem \ref{thm3}, the network (\ref{pca_example}) has a unique almost periodic solution, which is exponentially stable with the rate of convergence $3/2.$
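These quantities and the conditions $(C5)-(C7)$ can be reproduced by the following short Python check (an assumed auxiliary computation; the bounds $L_{ij}$ on $|L_{ij}(t)|$ are read off as the sums of the amplitudes of the inputs above, consistent with $\bar{L}=0.35$ and $\bar{l}=0.34/3$).
\begin{verbatim}
import numpy as np

a = np.array([[9., 3., 5.], [6., 5., 4.], [3., 12., 9.]])
C = np.array([[0.08, 0.01, 0.02], [0.05, 0.03, 0.06], [0.04, 0.07, 0.02]])
Lbnd = np.array([[0.3, 0.3, 0.27], [0.25, 0.35, 0.3], [0.34, 0.3, 0.28]])
M, Lip, tau, th_bar, g0 = 0.005, 0.1, 0.3, 1.5, a.min()

S = np.array([[sum(C[k, l] for k in range(3) for l in range(3)
                   if max(abs(k - i), abs(l - j)) <= 1)
               for j in range(3)] for i in range(3)])   # sums over N_1(i,j)
mu, c_bar = S.max(), (S/a).max()
d_bar, L_bar, l_bar = (S/(2*a - g0)).max(), Lbnd.max(), (Lbnd/a).max()
H = l_bar/(1 - M*c_bar)

C5 = mu*th_bar*(M + 2*Lip*(H + th_bar*L_bar)/(1 - mu*th_bar*M))
C6 = (M + 2*Lip*H)*c_bar
C7 = 2*d_bar*(M + Lip*H*np.exp(g0*tau/2)*(1 + np.exp(g0*th_bar/2)))
print(S)             # reproduces the neighborhood sums 0.17, 0.25, ..., 0.18
print(C5, C6, C7)    # all three values are smaller than 1
\end{verbatim}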
Consider the constant function $\phi(t)=\left\{\phi_{ij}(t)\right\}$ such that $\phi_{11}(t)=-0.025,$ $\phi_{12}(t)=0.036,$ $\phi_{13}(t)=-0.014,$ $\phi_{21}(t)=0.012,$ $\phi_{22}(t)=-0.021,$ $\phi_{23}(t)=0.042,$ $\phi_{31}(t)=0.023,$ $\phi_{32}(t)=-0.015,$ $\phi_{33}(t)=0.012.$ We depict in Figure \ref{pca_fig1} the solution $x(t)=\left\{x_{ij}(t)\right\}$ of (\ref{pca_example}) with $x(t)=\phi(t),$ $t\le \sigma= \theta_0= \displaystyle \frac{1}{4}.$
Figure \ref{pca_fig1} supports the result of Theorem \ref{thm3}: the represented solution converges to the unique almost periodic solution of $SICNN$ (\ref{pca_example}).
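One possible way to reproduce such a figure is a direct Euler discretization; the following Python sketch (the step size, the time horizon and the nearest-grid-point evaluation of the deviated argument are illustrative assumptions, and this is not necessarily the code used for the figure) integrates (\ref{pca_example}) from the above initial data.
\begin{verbatim}
import numpy as np

tau, h, T = 0.3, 1e-3, 20.0
a = np.array([[9., 3., 5.], [6., 5., 4.], [3., 12., 9.]])
C = np.array([[0.08, 0.01, 0.02], [0.05, 0.03, 0.06], [0.04, 0.07, 0.02]])
phi = np.array([[-0.025, 0.036, -0.014], [0.012, -0.021, 0.042],
                [0.023, -0.015, 0.012]])

def f(s):
    return 0.5*s**2 if abs(s) <= 0.1 else 0.005

def L(t):
    r2, r3, pi = np.sqrt(2), np.sqrt(3), np.pi
    return np.array([
        [0.1*np.cos(t) + 0.2*np.sin(r2*t), 0.2*np.cos(pi*t) + 0.1*np.sin(r2*t),
         0.15*np.cos(2*t) - 0.12*np.cos(pi*t)],
        [0.15*np.cos(3*t) - 0.1*np.sin(pi*t), 0.2*np.cos(t) - 0.15*np.sin(r2*t),
         0.1*np.sin(t) + 0.2*np.cos(r3*t)],
        [0.2*np.cos(r2*t) + 0.14*np.sin(pi*t), 0.2*np.cos(r2*t) + 0.1*np.sin(t),
         0.15*np.cos(r2*t) - 0.13*np.cos(4*t)]])

def theta(p):
    return p + 0.25*abs(np.sin(p) - np.cos(p*np.sqrt(2)))

def gamma(t):
    # gamma(t) = theta_p on [theta_p, theta_{p+1}); here zeta_p = theta_p
    p = int(np.floor(t))
    while theta(p) > t:
        p -= 1
    while theta(p + 1) <= t:
        p += 1
    return theta(p)

sigma = theta(0)                       # = 1/4
steps = int(T/h)
x = np.empty((steps + 1, 3, 3)); x[0] = phi

def past(s, n):
    # value of x at time s <= t_n; equal to phi for s <= sigma
    return phi if s <= sigma else x[min(int(round((s - sigma)/h)), n)]

for n in range(steps):
    t = sigma + n*h
    fx = np.vectorize(f)(past(gamma(t) - tau, n))   # f(x_kl(gamma(t) - tau))
    rhs = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            nbrs = [(k, l) for k in range(3) for l in range(3)
                    if max(abs(k - i), abs(l - j)) <= 1]
            shunt = sum(C[k, l]*fx[k, l] for (k, l) in nbrs)
            rhs[i, j] = -a[i, j]*x[n, i, j] - shunt*x[n, i, j] + L(t)[i, j]
    x[n + 1] = x[n] + h*rhs
\end{verbatim}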
\begin{figure}
\caption{The unique almost periodic solution of $SICNN$ (\ref{pca_example}).}
\label{pca_fig1}
\end{figure}
\section{Conclusion}
In this paper, we investigate the existence as well as the exponential stability of almost periodic solutions in a new model of $SICNNs.$ The use of a functional response on piecewise constant argument of alternate (advanced-delayed) type is the main novelty of our study, and it is useful for the investigation of a large class of neural networks. An illustrative example is provided to show the effectiveness of the theoretical results.
Our approach concerning exponential stability may be used in the future to investigate synchronization of chaos in coupled neural networks and control of chaos in large communities of neural networks with piecewise constant argument.
Differential equations with functional response on piecewise constant argument can be applied for the development of other kinds of recurrent networks such as Hopfield and Cohen-Grossberg neural networks \cite{Hopfield84,Cohen93} and others. This will provide new opportunities for the analysis and applications of neural networks.
\section*{Acknowledgments}
The authors wish to express their sincere gratitude to the referees for the helpful criticism and valuable suggestions, which helped to improve the paper significantly.
The second author is supported by the 2219 scholarship programme of T\"{U}B\.{I}TAK, the Scientific and Technological Research Council of Turkey.
\end{document}
\begin{document}
\title{\Large $(2,3)$-GENERATION OF THE SPECIAL LINEAR GROUPS OF DIMENSIONS $9$, $10$ and $11$}
\author{\Large E. Gencheva, Ts. Genchev and K. Tabakov}
\date{}
\maketitle
\begin{abstract}
In the present paper we prove that the groups $PSL_{n}(q)$ ($n = 9$, $10$ or $11$) are $(2,3)$-generated for any $q$. Actually, we provide explicit generators $x_{n}$ and $y_{n}$ of respective orders $2$ and $3$, for the special linear group $SL_{n}(q)$.
\indent\\
\noindent\textbf{Key words:}\quad(2,3)-generated group.\\
\noindent\textbf{2010 Mathematics Subject Classification:} \,20F05, 20D06.
\end{abstract}
\indent\indent\indent\textbf{1.\,\,Introduction.} $(2,3)$-generated groups are those groups which can be generated by an involution and an element of order $3$ or, equivalently, they appear to be homomorphic images of the famous modular group $PSL_{2}(\mathbb{Z})$. It is known that many series of finite simple groups are
$(2,3)$-generated. The most powerful result, due to Liebeck-Shalev and L\"{u}beck-Malle, states that all finite simple groups, except the infinite families $PSp_{4}(2^{m})$, $PSp_{4}(3^{m})$, $^{2}B_{2}(2^{2m+1})$, and a finite number of other groups, are $(2,3)$-generated (see \cite{12}). We have especially focused our attention on the projective special linear groups defined over finite fields. Many authors have investigated the groups $PSL_{n}(q)$ with respect to that generation property. $(2,3)$-generation has been proved in the cases $n=2$, $q\neq 9$ \cite{7}, $n=3$, $q\neq 4$ \cite{5},\cite{2}, $n=4$, $q\neq 2$ \cite{14}, \cite{13}, \cite{8}, \cite {10}, $n=5$, any $q$ \cite{17}, \cite{9}, $n=6$, any $q$ \cite{16}, $n=7$, any $q$ \cite{15}, $n=8$, any $q$ \cite{6}, $n\geq 5$, odd $q\neq 9$ \cite{3}, \cite{4}, and $n\geq 13$, any $q$ \cite{11}. In this way the only cases that still remain open are those for $9\leq n \leq12$, even $q$ or $q=9$. In the present work we continue our investigation by considering the next portion of the infinite series of finite special linear groups and their projective images. We shall treat the groups $SL_{9}(q)$ and $SL_{10}(q)$ simultaneously. Based on the results obtained below, we deduce the following.\\
\indent\indent\textbf{Theorem.} \emph{The groups $SL_{9}(q)$, $SL_{10}(q)$, $SL_{11}(q)$ and their simple projective images $PSL_{9}(q)$, $PSL_{10}(q)$ and $PSL_{11}(q)$ are $(2,3)$-generated for all $q$ }.\\
\indent\indent\textbf{2.\,\,Proof of the Theorem.} First let $G = SL_{n}(q)$ and $\overline{G} = G/Z(G) = PSL_{n}(q)$, where $n = 9$ or $10$, and $q = p^{m}$ for a prime number $p$. Set $Q = q^{n-1}-1$ if $q \neq 3, 7$ and $Q = (q^{n-1}-1)/2$ if $q = 3, 7$. The group $G$ acts (naturally) on the left on the $n$-dimensional column vector space $V = F^{n}$ over the field $F = GF(q)$. We denote by $v_{1}$, . . . , $v_{n}$ the standard basis of the space $V$, i.e. $v_{i}$ is a column which has $1$ as its $i$-th coordinate, while all other coordinates are zeros.\\
\indent\indent\ We shall need the following result, which can be easily obtained from the list of maximal subgroups of $G$ given in \cite{1} and simple arithmetic considerations using (for example) Zsigmondy's well-known theorem. \\
\indent\indent\textbf{Lemma 1.} \emph{For any maximal subgroup $M$ of the group $G$, either it stabilizes a one-dimensional subspace or a hyperplane of $V$ ($M$ is reducible on the space $V$) or $M$ has no element of order $Q$}.\\
\indent\indent \textbf{2.1.} We suppose first that $q \neq 2, 4$ if $n = 9$ and $q > 4$ if $n = 10$. Let us choose an element $\omega$ of order $Q$ in the multiplicative group of the field $GF(q^{n-1})$ and set
\begin{center}
$f_{n}(t) = (t - \omega)(t - \omega^{q})(t - \omega^{q^{2}})(t - \omega^{q^{3}}) . . . (t - \omega^{q^{n-3}}) (t - \omega^{q^{n-2}}) = t^{n-1} - \alpha_{1}t^{n-2} + \alpha_{2}t^{n-3} - \alpha_{3}t^{n-4} + . . . + (-1)^{n-2}\alpha_{n-2}t + (-1)^{n-1}\alpha_{n-1}$.
\end{center}
Then $f_{n}(t) \in F[t]$ and the polynomial $f_{n}(t)$ is irreducible over the field $F$. Note that $\alpha_{n-1} = \omega^\frac{q^{n-1} - 1}{q - 1}$ has order $q - 1$ if $q \neq 3, 7$, $\alpha_{n-1} = 1$ if $q = 3$, and $\alpha_{n-1}^{3} = 1 \neq \alpha_{n-1}$ if $q = 7$.\\
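\indent\indent For the reader's convenience, here is a brief justification of the last claim (a routine check, added here and not part of the original argument): since $\omega$ has order $Q$ and $\alpha_{n-1}=\omega^{e}$ with $e=\frac{q^{n-1}-1}{q-1},$ the order of $\alpha_{n-1}$ equals $Q/\gcd(Q,e).$ If $q\neq 3,7,$ then $Q=q^{n-1}-1$ and $e$ divides $Q,$ so this order is $Q/e=q-1.$ If $q=3,$ then $e=\frac{q^{n-1}-1}{2}=Q$ and hence $\alpha_{n-1}=\omega^{Q}=1.$ If $q=7,$ then $e=\frac{q^{n-1}-1}{6}=Q/3,$ so $\alpha_{n-1}$ has order $3$, i.e. $\alpha_{n-1}^{3}=1\neq\alpha_{n-1}.$\\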
\indent\indent Now let
\begin{center}
\[ x_{9} = \left[ \begin{array}{ccccccccc}
-1 & 0 & 0 & 0 & 0 & 0 & \alpha_{5}\alpha_{8}^{-1} & 0 & \alpha_{5}\\
0 & -1 & 0 & 0 & 0 & 0 & \alpha_{4}\alpha_{8}^{-1} & 0 & \alpha_{4}\\
0 & 0 & 0 & -1 & 0 & 0 & \alpha_{3}\alpha_{8}^{-1} & 0 & \alpha_{6}\\
0 & 0 & -1 & 0 & 0 & 0 & \alpha_{6}\alpha_{8}^{-1} & 0 & \alpha_{3}\\
0 & 0 & 0 & 0 & -1 & 0 & \alpha_{2}\alpha_{8}^{-1} & 0 & \alpha_{2}\\
0 & 0 & 0 & 0 & 0 & 0 & \alpha_{1}\alpha_{8}^{-1} & -1 & \alpha_{7}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}\\
0 & 0 & 0 & 0 & 0 & -1 & \alpha_{7}\alpha_{8}^{-1} & 0 & \alpha_{1}\\
0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}^{-1} & 0 & 0\\
\end{array} \right],\]
\[y_{9} = \left[ \begin{array}{ccccccccc}
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right];\]
\[ x_{10} = \left[ \begin{array}{cccccccccc}
0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{2}\alpha_{9}^{-1} & 0 & \alpha_{3}\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{4}\alpha_{9}^{-1} & 0 & \alpha_{7}\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{5}\alpha_{9}^{-1} & 0 & \alpha_{5}\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{3}\alpha_{9}^{-1} & 0 & \alpha_{2}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{1}\alpha_{9}^{-1} & -1 & \alpha_{8}\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{7}\alpha_{9}^{-1} & 0 & \alpha_{4}\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{6}\alpha_{9}^{-1} & 0 & \alpha_{6}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{8}\alpha_{9}^{-1} & 0 & \alpha_{1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}^{-1} & 0 & 0\\
\end{array} \right],\]
\[y_{10} = \left[ \begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
\end{center}
Then $x_{n}$ and $y_{n}$ are elements of $G (= SL_{n}(q))$ of orders $2$ and $3$, respectively. Denote
\begin{center}
\[z_{9} = x_{9}y_{9} = \left[ \begin{array}{ccccccccc}
0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{5} & \alpha_{5}\alpha_{8}^{-1}\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{4} & \alpha_{4}\alpha_{8}^{-1}\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{6} & \alpha_{3}\alpha_{8}^{-1}\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{3} & \alpha_{6}\alpha_{8}^{-1}\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{2} & \alpha_{2}\alpha_{8}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{7} & \alpha_{1}\alpha_{8}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8} & 0\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{1} & \alpha_{7}\alpha_{8}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}^{-1}\\
\end{array} \right],\]
\[z_{10} = x_{10}y_{10} = \left[ \begin{array}{cccccccccc}
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{3} & \alpha_{2}\alpha_{9}^{-1}\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{7} & \alpha_{4}\alpha_{9}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{5} & \alpha_{5}\alpha_{9}^{-1}\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{2} & \alpha_{3}\alpha_{9}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{8} & \alpha_{1}\alpha_{9}^{-1}\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{4} & \alpha_{7}\alpha_{9}^{-1}\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{6} & \alpha_{6}\alpha_{9}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9} & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{1} & \alpha_{8}\alpha_{9}^{-1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}^{-1}\\ \end{array} \right].\]
\end{center}
The characteristic polynomial of $z_{n}$ is $f_{z_{n}}(t) = (t - \alpha_{n-1}^{-1})f_{n}(t)$ and the characteristic roots $\alpha_{n-1}^{-1}$, $\omega$, $\omega^{q}$, $\omega^{q^{2}}$, $\omega^{q^{3}}$, . . . , $\omega^{q^{n-3}}$, and $\omega^{q^{n-2}}$ of $z_{n}$ are pairwise distinct. Then, in $GL_{n}(q^{n-1})$, $z_{n}$ is conjugate to the matrix diag $(\alpha_{n-1}^{-1}$, $\omega$, $\omega^{q}$, $\omega^{q^{2}}$, $\omega^{q^{3}}$, . . . , $\omega^{q^{n-3}}$, $\omega^{q^{n-2}})$ and hence $z_{n}$ is an element of $SL_{n}(q)$ of order $Q$.\\
\indent\indent Let $H_{n}$ be the subgroup of $G (= SL_{n}(q))$ generated by the above elements $x_{n}$ and $y_{n}$.\\
\indent\indent\textbf{Lemma 2.} \emph{The group $H_{n}$ cannot stabilize one-dimensional subspaces or hyperplanes of the space $V$; equivalently, $H_{n}$ acts irreducibly on $V$.}\\
\indent\indent P r o o f. Assume that $W$ is an $H_{n}$-invariant subspace of $V$ with $k = \dim W$, where $k=1$ or $n-1$.\\
\indent\indent First let $k = 1$ and $0 \neq w \in W$. Then $y_{n}(w) = \lambda w$ where $\lambda \in F$ and $\lambda^{3} = 1$. This yields
\begin{center}
$w = \mu_{1}(v_{1} + \lambda^{2} v_{2} + \lambda v_{3}) + \mu_{2}(v_{4} + \lambda^{2} v_{5} + \lambda v_{6}) + \mu_{3}(v_{7} + \lambda^{2} v_{8} + \lambda v_{9})$ $(\mu_{1}, \mu_{2}, \mu_{3} \in F)$
\end{center}
if $n = 9$, and
\begin{center}
$w = \mu_{1}^{'}v_{1} + \mu_{2}^{'}(v_{2} + \lambda v_{3}) + \mu_{3}^{'}(\lambda v_{4} + v_{5} + \lambda^{2} v_{6}) + \mu_{2}^{'} \lambda^{2} v_{7} + \mu_{4}^{'}(\lambda v_{8} + v_{9} + \lambda^{2} v_{10}) (\mu_{i}^{'} \in F)$
\end{center}
if $n = 10$. Moreover $\mu_{1}^{'} = 0$ if $\lambda \neq 1$.\\
Now $x_{n}(w) = \nu w$ where $\nu = \pm 1$. This successively yields $\mu_{3} \neq 0$, $\mu_{4}^{'} \neq 0$, $\alpha_{n - 1} = \lambda^{2}\nu$, and (in case $n = 9$)
\begin{align*}
\lambda\nu\mu_{1} + \mu_{2} &= (\lambda\nu\alpha_{3} + \lambda\alpha_{6})\mu_{3}, \tag{1}\\
\mu_{2} &= (\alpha_{1} - \lambda\nu + \nu\alpha_{7})\mu_{3}, \tag{2}\\
(\nu + 1)(\lambda\mu_{2} - \alpha_{2}\mu_{3}) &= 0, \tag{3}\\
(\nu + 1)(\mu_{1} - \lambda\alpha_{5}\mu_{3}) &= 0, \tag{4}\\
(\nu + 1)(\mu_{1} - \lambda^{2}\alpha_{4}\mu_{3}) &= 0; \tag{5}
\end{align*}
in case $n = 10$ we obtain the following relations:
\begin{align*}
\mu_{1}^{'} &= -\nu\lambda\mu_{3}^{'} + (\lambda\alpha_{3}\alpha_{9}^{-1} + \lambda^{2}\alpha_{2})\mu_{4}^{'}, \tag{1.}\\
\mu_{2}^{'} &= -\nu\lambda^{2}\mu_{3}^{'} + (\lambda\alpha_{7}\alpha_{9}^{-1} + \lambda^{2}\alpha_{4})\mu_{4}^{'}, \tag{2.}\\
\mu_{3}^{'} &= (-\nu + \lambda^{2}\alpha_{1} + \lambda\alpha_{8}\alpha_{9}^{-1})\mu_{4}^{'}, \tag{3.}\\
(\nu + 1)(\mu_{2}^{'} - \lambda\alpha_{5}\mu_{4}^{'}) &= 0, \tag{4.}\\
(\nu + 1)(\lambda^{2}\alpha_{6} - \alpha_{5}) &= 0. \tag{5.}
\end{align*}
In particular, we have $\alpha_{n-1}^{3} = \nu$ and $\alpha_{n-1}^{6} = 1$. This is impossible if $q = 5$ or $q > 7$ since then $\alpha_{n-1}$ has order $q - 1$.\\
Next, let us continue with the case $n = 9$. According to our assumption ($q \neq 2, 4$) only two possibilities are left: $q = 3$ (and $\alpha_{8} = 1$) or $q = 7$ (and $\alpha_{8}^{3} = 1 \neq \alpha_{8}$). So $\nu = 1$, $\alpha_{8} = \lambda^{2}$ and (1), (2), (3), (4), (5) produce $\alpha_{1} = \lambda^{2}\alpha_{2} - \alpha_{7} + \lambda$, $\alpha_{3} = \lambda\alpha_{2} + \lambda^{2}\alpha_{4} - \alpha_{6}$ and $\alpha_{5} = \lambda\alpha_{4}$. Now $f_{9}(-1) = (1 + \lambda + \lambda^{2})(1 + \alpha_{2} + \alpha_{4}) = 0$ both for $q = 3$ and $q = 7$, an impossibility as $f_{9}(t)$ is irreducible over the field $F$.\\
Lastly, we treat the case $n = 10$; since $q > 4$, the only remaining possibility is $q = 7$ (and $\alpha_{9}^{3} = 1 \neq \alpha_{9}$). Thus $\nu = 1$ and $\alpha_{9} = \lambda^{2} \neq 1$. So
$\lambda \neq 1$, $\mu_{1}^{'} = 0$ and from $(1.)$, $(2.)$, $(3.)$, $(4.)$, $(5.)$ we can extract that $\alpha_{1} = \lambda^{2}\alpha_{2} + \lambda^{2}\alpha_{3} - \alpha_{8} + \lambda$, $\alpha_{5} = -\lambda^{2}\alpha_{2} - \lambda^{2}\alpha_{3} + \lambda\alpha_{4} + \lambda\alpha_{7}$ and $\alpha_{6} = -\alpha_{2} -\alpha_{3}+ \lambda^{2}\alpha_{4} + \lambda^{2}\alpha_{7}$. Then $f_{10}(-1) = -(1 + \lambda + \lambda^{2})(1 + \alpha_{4} + \alpha_{7}) = 0$, again an impossibility as $f_{10}(t)$ is irreducible over the field $F$.\\
\indent\indent Now let $k=n-1$. The subspace $U$ of $V$ which is generated by the vectors $v_{1}$, $v_{2}$, $v_{3}$, \dots, $v_{n-1}$ is $\left\langle {z_{n}}\right\rangle$-invariant. If $W \neq U$ then $U \cap W$ is $\left\langle {z_{n}}\right\rangle$-invariant and $\dim (U \cap W) = n-2$. This means that the characteristic polynomial of $z_{n}|_{U \cap W}$ has degree $n-2$ and must divide $f_{z_{n}}(t)$, which is impossible as $f_{n}(t)$ is irreducible over $F$. Thus $W = U$ but obviously $U$ is not $\left\langle {y_{n}}\right\rangle$-invariant, a contradiction.\\
\indent\indent The lemma is proved. (Note that the statement is false if $q = 2$ or $4$ in both cases, and additionally if $q = 3$ in case $n = 10$.)
$\square$\\
\indent\indent Now, as $H_{n} = \left\langle{x_{n},y_{n}}\right\rangle$ acts irreducibly on the space $V$ and contains an element of order $Q$, we conclude (by Lemma 1) that $H_{n}$ cannot be contained in any maximal subgroup of $G (= SL_{n}(q))$. Thus $H_{n} = G$ and $G = \left\langle {x_{n},y_{n}}\right\rangle$ is a $(2,3)$-generated group. Obviously $\overline{x_{n}}$ and $\overline{y_{n}}$ are elements of respective orders $2$ and $3$ in the group $\overline{G} = PSL_{n}(q)$, and $\overline{G} = \left\langle {\overline{x_{n}},\overline{y_{n}}}\right\rangle$ is a $(2,3)$-generated group too.\\
\indent\indent \textbf{2.2.} Now we proceed to prove the $(2,3)$-generation of the remaining groups $SL_{9}(2)$, $SL_{9}(4)$, $SL_{10}(2)$, $SL_{10}(3)$ and $SL_{10}(4)$. Below we provide elements $x_{n}^{(q)}$ and $y_{n}^{(q)}$ of orders $2$ and $3$, respectively, for each of the groups $SL_{n}(q)$ in this list, and prove that $\left\langle {x_{n}^{(q)},y_{n}^{(q)}}\right\rangle = SL_{n}(q)$. When computing the orders of certain elements of the corresponding groups $\left\langle {x_{n}^{(q)},y_{n}^{(q)}}\right\rangle$, we rely on the Magma Computational Algebra System. We also use the orders of the maximal subgroups of the groups in the list above. (The maximal subgroups of these groups are classified in \cite{1}.)\\
\indent\indent Take the following two matrices of $SL_{9}(2)$:
\[ x_{9}^{(2)} = \left[ \begin{array}{ccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\
\end{array} \right],
y_{9}^{(2)} = \left[ \begin{array}{ccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
Then $x_{9}^{(2)}$ and $y_{9}^{(2)}$ are elements of respective orders $2$ and $3$ in the group $SL_{9}(2)$, and $x_{9}^{(2)}y_{9}^{(2)}$ has order $73$; also $x_{9}^{(2)}y_{9}^{(2)}(x_{9}^{(2)}(y_{9}^{(2)})^{2})^{2}$ is an element of $\left\langle {x_{9}^{(2)},y_{9}^{(2)}}\right\rangle$ of order $3.127$. Since in $SL_{9}(2)$ there is no maximal subgroup of order divisible by $73.127$, it follows that $SL_{9}(2) = \left\langle {x_{9}^{(2)},y_{9}^{(2)}}\right\rangle$.\\
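\indent\indent The order computations above and below were carried out in Magma, but they can also be checked informally with any software that supports matrix arithmetic over finite fields. The following sketch (in Python with NumPy; a minimal illustration only, not the Magma verification referred to above, and all function names are ours) computes the multiplicative order of a matrix modulo a prime by repeated multiplication; applied to $x_{9}^{(2)}$, $y_{9}^{(2)}$ and their product it should reproduce the orders $2$, $3$ and $73$ reported above.
\begin{verbatim}
import numpy as np

def order_mod_p(M, p, max_order=10**6):
    # Multiplicative order of an invertible matrix M over GF(p),
    # found by repeated multiplication (adequate for small examples).
    I = np.eye(M.shape[0], dtype=np.int64)
    A = M % p
    for k in range(1, max_order + 1):
        if np.array_equal(A, I):
            return k
        A = (A @ M) % p
    raise RuntimeError("order exceeds max_order")

# x_9^(2) and y_9^(2) as given above (entries over GF(2)).
x9 = np.array([
    [1,0,0,0,0,0,0,0,0],
    [0,0,1,0,0,0,0,0,0],
    [0,1,0,0,0,0,0,0,0],
    [0,0,0,0,1,0,0,0,0],
    [0,0,0,1,0,0,0,0,0],
    [0,0,0,0,0,1,1,0,1],
    [0,0,0,0,0,1,0,1,1],
    [0,0,0,0,0,0,1,1,1],
    [0,0,0,0,0,1,1,1,0]], dtype=np.int64)
y9 = np.array([
    [0,1,0,0,0,0,0,0,0],
    [1,1,0,0,0,0,0,0,0],
    [0,0,0,1,0,0,0,0,0],
    [0,0,1,1,0,0,0,0,0],
    [0,0,0,0,0,1,0,0,0],
    [0,0,0,0,1,1,0,0,0],
    [0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,1,0,0],
    [0,0,0,0,0,0,0,1,0]], dtype=np.int64)

# The text reports orders 2, 3 and 73, respectively.
print(order_mod_p(x9, 2), order_mod_p(y9, 2), order_mod_p((x9 @ y9) % 2, 2))
\end{verbatim}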
\indent\indent We now continue with the desired matrices for $SL_{9}(4)$:
\[ x_{9}^{(4)} = \left[ \begin{array}{ccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \eta\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} \right],
y_{9}^{(4)} = \left[ \begin{array}{ccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
(Here $\eta$ is a generator of $GF(4)^{*}$.)\\
In addition to $x_{9}^{(4)}$ and $y_{9}^{(4)}$ having orders $2$ and $3$, respectively, $x_{9}^{(4)}y_{9}^{(4)}$ has order $3.5.43.127$, and in $\left\langle {x_{9}^{(4)},y_{9}^{(4)}}\right\rangle$ the order of the following element
\begin{center}
$(x_{9}^{(4)}(y_{9}^{(4)})^{2})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{3}x_{9}^{(4)}(y_{9}^{(4)})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{2}x_{9}^{(4)}(y_{9}^{(4)})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{2}x_{9}^{(4)}(y_{9}^{(4)})^{2}x_{9}^{(4)}y_{9}^{(4)}$
\end{center}
is $3.7.19.73$. But no maximal subgroup of $SL_{9}(4)$ has order divisible by $43.73$. Thus $SL_{9}(4) = \left\langle {x_{9}^{(4)},y_{9}^{(4)}}\right\rangle$ is a $(2,3)$-generated group too.\\
\indent\indent Next, we consider the appropriate pair of elements in the group $SL_{10}(2)$:
\[ x_{10}^{(2)} = \left[ \begin{array}{cccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\
\end{array} \right],
y_{10}^{(2)} = \left[ \begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
Here we obtain that the order of $x_{10}^{(2)}y_{10}^{(2)}$ is $3.11.31$ and $x_{10}^{(2)}y_{10}^{(2)}(x_{10}^{(2)}(y_{10}^{(2)})^{2})^{2}$ has order $73$. But there is no maximal subgroup in $SL_{10}(2)$ of order divisible by $11.73$. So $SL_{10}(2) = \left\langle {x_{10}^{(2)},y_{10}^{(2)}}\right\rangle$.\\
\indent\indent Further, let us deal with the group $SL_{10}(3)$ and choose:
\[ x_{10}^{(3)} = \left[ \begin{array}{cccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} \right],
y_{10}^{(3)} = \left[ \begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
The product of the last two matrices has order $11^{2}.61$ and the following element
\begin{center}
$(x_{10}^{(3)}y_{10}^{(3)})^{2}x_{10}^{(3)}(y_{10}^{(3)})^{2}(x_{10}^{(3)}y_{10}^{(3)})^{2}x_{10}^{(3)}(y_{10}^{(3)})^{2}x_{10}^{(3)}y_{10}^{(3)}x_{10}^{(3)}(y_{10}^{(3)})^{2}x_{10}^{(3)}y_{10}^{(3)}$
\end{center}
is of order $2.13.757$. Checking the orders of the maximal subgroups of $SL_{10}(3)$ we can see that no one of them is a multiple of $61.757$ which means that $SL_{10}(3) = \left\langle {x_{10}^{(3)},y_{10}^{(3)}}\right\rangle$.\\
\indent\indent Lastly, we finish with the proof of the $(2,3)$ - generation of the group $SL_{10}(4)$ by taking its elements:
\[ x_{10}^{(4)} = \left[ \begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \eta\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} \right],
y_{10}^{(4)} = \left[ \begin{array}{cccccccccc}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right].\]
(Recall that $\eta$ is a generator of $GF(4)^{*}$.)\\
In this case $x_{10}^{(4)}y_{10}^{(4)}$ has order $3.19.73$ and the element $(x_{10}^{(4)}(y_{10}^{(4)})^{2})^{3}x_{10}^{(4)}y_{10}^{(4)}(x_{10}^{(4)}(y_{10}^{(4)})^{2})^{6}$ has order $5.11.31.41$. As in the previous cases, we conclude that $SL_{10}(4)$ is generated by the above matrices because none of its maximal subgroups has order divisible by $41.73$.\\
\indent\indent Finally, it is obvious that the projective images (where necessary) of all these elements $x_{n}^{(q)}$ and $y_{n}^{(q)}$ (again of orders $2$ and $3$, respectively) generate the corresponding simple group $PSL_{n}(q)$.\\
\indent\indent \textbf{Acknowledgement}. We express our gratitude to Prof. Marco Antonio Pellegrini who provided us with all these generators for the groups $SL_{9}(2)$, $SL_{9}(4)$, $SL_{10}(2)$, $SL_{10}(3)$ and $SL_{10}(4)$.\\
\indent\indent \textbf{2.3.} Finally let $G = SL_{11}(q)$ and $\overline{G} = G/Z(G) = PSL_{11}(q)$, where $q = p^{e}$ and $p$ is a prime. Set $d = (11,q - 1)$ and $Q=(q^{11}-1)/(q-1)$. It is easily seen that here $(6,Q)=1$. The group $G$ acts (naturally) on an eleven-dimensional vector space $V = F^{11}$ over the field $F = GF(q)$.\\
\indent\indent\ We shall make use of the known list of maximal subgroups of ${G}$ given in \cite{1}. In Aschbacher's notation any maximal subgroup of ${G}$ belongs to one of the following families \emph{$C_{1}, C_{2}, C_{3}, C_{5}, C_{6}, C_{8}$}, and \emph{S}. Roughly speaking, they are:
\begin{itemize}
\item \emph{$C_{1}$}: stabilizers of subspaces of $V$,
\item \emph{$C_{2}$}: stabilizers of direct sum decompositions of $V$,
\item \emph{$C_{3}$}: stabilizers of extension fields of $F$ of prime degree,
\item \emph{$C_{5}$}: stabilizers of subfields of $F$ of prime index,
\item \emph{$C_{6}$}: normalizers of extraspecial groups in absolutely irreducible representations,
\item \emph{$C_{8}$}: classical groups on $V$ contained in $G$,
\item \emph{S}: almost simple groups, absolutely irreducible on $V$, whose (simple) \emph{socles} have representations on $V$ that cannot be realized over proper subfields of $F$; not contained in members of \emph{$C_{8}$}.
\end{itemize}
In \cite{1} the representatives of the conjugacy classes of maximal subgroups of ${G}$ are specified in Tables $8.70$ and $8.71$. For the reader's convenience we provide the exact list of maximal subgroups of $G$ together with their orders. The notation used here for group structures is standard group-theoretic notation as in \cite{1}. In particular, $A \times B$ is the direct product of groups $A$ and $B$, and we write $A:B$ or $A.B$ to denote a split extension of $A$ by $B$ or an extension of $A$ by $B$ of unspecified type, respectively; the cyclic group of order $n$ is simply denoted by $n$, and $E_{q^{k}}$ stands for an elementary abelian group of order $q^{k}$.\\
If $M$ is a maximal subgroup of $G$ then one of the following holds.
\begin{enumerate}
\item $M \cong E_{q^{10}}:GL_{10}(q)$ of order $q^{55}(q - 1)(q^{2} - 1)(q^{3} - 1)(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)(q^{9} - 1)(q^{10} - 1)$.
\item $M \cong E_{q^{18}}:(SL_{9}(q)\times SL_{2}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)(q^{9} - 1)$.
\item $M \cong E_{q^{24}}:(SL_{8}(q)\times SL_{3}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)$.
\item $M \cong E_{q^{28}}:(SL_{7}(q)\times SL_{4}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)^{2}(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)$.
\item $M \cong E_{q^{30}}:(SL_{6}(q)\times SL_{5}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)^{2}(q^{5} - 1)^{2}(q^{6} - 1)$.
\item $M \cong (q - 1)^{10}: S_{11}$ (if $q \geq 5$) of order $2^{8}.3^{4}.5^{2}.7.11.(q - 1)^{10}$.
\item $M \cong \frac{q^{11} - 1}{q - 1}:11$ of order $11.\frac{q^{11} - 1}{q - 1}$.
\item $M \cong SL_{11}(q_{0}). (11,\frac{q - 1}{q_{0} - 1})$ (if $q = q_{0}^{r}$, $r$ prime) of order $q_{0}^{55}(q_{0}^{2} - 1)(q_{0}^{3} - 1)(q_{0}^{4} - 1)(q_{0}^{5} - 1)(q_{0}^{6} - 1)(q_{0}^{7} - 1)(q_{0}^{8} - 1)(q_{0}^{9} - 1)(q_{0}^{10} - 1)(q_{0}^{11} - 1).(11,\frac{q - 1}{q_{0} - 1})$.
\item $M \cong 11_{+}^{1+2}:Sp_{2}(11)$ (if $q = p \equiv 1$ (mod $11$) or $q=p^{5}$ and $p \equiv 3, 4, 5, 9$ (mod $11$)) of order $2^{3}.3.5.11^{4}$ (here $11_{+}^{1+2}$ stands for an extraspecial group of order $11^{3}$ and exponent $11$).
\item $M \cong d \times SO_{11}(q)$ (if $q$ is odd) of order $d.q^{25}(q^{2} - 1)(q^{4} - 1)(q^{6} - 1)(q^{8} - 1)(q^{10} - 1)$.
\item $M \cong (11,q_{0} - 1) \times SU_{11}(q_{0})$ (if $q = q_{0}^{2}$) of order $q_{0}^{55}(q_{0}^{2} - 1)(q_{0}^{3} + 1)(q_{0}^{4} - 1)(q_{0}^{5} + 1)(q_{0}^{6} - 1)(q_{0}^{7} + 1)(q_{0}^{8} - 1)(q_{0}^{9} + 1)(q_{0}^{10} - 1)(q_{0}^{11} + 1).(11,q_{0} - 1)$.
\item $M \cong d \times L_{2}(23)$ (if $q = p \equiv 1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18$ (mod $23$), $q \neq 2$) of order $2^{3}.3.11.23.d$.
\item $M \cong d \times U_{5}(2)$ (if $q = p \equiv 1$ (mod $3$)) of order $2^{10}.3^{5}.5.11.d$.
\item $M \cong M_{24}$ (if $q = 2$) of order $2^{10}.3^{3}.5.7.11.23$.
\end{enumerate}
\indent\indent Now we proceed as follows. First we prove that there is only one type of maximal subgroups of $G$ whose order is a multiple of $Q$; these are precisely the groups (in Aschbacher's class \emph{$C_{3}$}) in case $7$ above, of order $11.\frac{q^{11} - 1}{q - 1}$. In the second step we find two elements $x_{11}$ and $y_{11}$ of respective orders $2$ and $3$ in $G$ such that their product has order $Q$. Finally, we deduce that the group $G$ is generated by these two elements. Then the projective images of these elements will generate the group $\overline{G}$.\\
\indent\indent Let us start with the first step in our strategy. In order to prove the arithmetic fact mentioned above, we use the well-known theorem of Zsigmondy and take a primitive prime divisor of $p^{11e} - 1$, i.e., a prime $r$ which divides $p^{11e} - 1$ but does not divide $p^{i} - 1$ for $0 < i < 11e$. Obviously $r \geq 23$ (as $r - 1$ is a multiple of $11e$) and also $r$ divides $Q$. Now it is easy to see that the only maximal subgroups of order divisible by $r$ are those in case $11$, and those in cases $12$ or $14$ when $r = 23$. In case $11$, if $Q = \frac{q_{0}^{22} - 1}{q_{0}^{2} - 1}$ divides the order of $M$, then $\frac{q_{0}^{11} - 1}{q_{0} - 1}$ should be a factor of the integer $q_{0}^{55}(q_{0} + 1)(q_{0}^{2} - 1)(q_{0}^{3} + 1)(q_{0}^{4} - 1)(q_{0}^{5} + 1)(q_{0}^{6} - 1)(q_{0}^{7} + 1)(q_{0}^{8} - 1)(q_{0}^{9} + 1)(q_{0}^{10} - 1).(11,q_{0} - 1)$, an impossibility, again by Zsigmondy's theorem. As for the groups in case $12$, we have $Q = \frac{p^{11} - 1}{p - 1} \geq \frac{3^{11} - 1}{3 - 1} > 2^{3}.3.11^{2}.23 \geq |M|$. Lastly, in case $14$, $Q = 2^{11} - 1 = 23.89$ does not divide the order of $M_{24}$.\\
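\indent\indent (As a side illustration, not part of the argument: for small parameters the primitive prime divisors guaranteed by Zsigmondy's theorem can be listed directly. The following Python sketch, which assumes the SymPy library for integer factorization, returns the primes dividing $p^{n} - 1$ but none of $p^{i} - 1$ for $0 < i < n$.)
\begin{verbatim}
from sympy import primefactors

def primitive_prime_divisors(p, n):
    # Primes r with r | p**n - 1 such that r does not divide p**i - 1
    # for any 0 < i < n (Zsigmondy primes for the pair (p, n)).
    return [r for r in primefactors(p**n - 1)
            if all((p**i - 1) % r != 0 for i in range(1, n))]

# Example: p = 2, n = 11 gives 2**11 - 1 = 2047 = 23 * 89.
print(primitive_prime_divisors(2, 11))   # [23, 89]
\end{verbatim}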
\indent\indent Further, let us choose for $x_{11}$ the matrix
\begin{center}
\[ x_{11} = \left[ \begin{array}{rrrrrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\end{array} \right]\]
\end{center}
and $y_{11}$ to be in the form
\begin{center}
\[ y_{11} = \left[ \begin{array}{rrrrrrrrrrl}
-1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{1}\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{2}\\
0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{3}\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{4}\\
0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & \delta_{5}\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \delta_{6}\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & \delta_{7}\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & \delta_{8}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & \delta_{9}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \delta_{10}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} \right].\]
\end{center}
Then $x_{11}$ is an involution of $G$ and $y_{11}$ is an element of order $3$ in $G$ for any $\delta_{1}$, $\delta_{2}$, $\delta_{3}$, $\delta_{4}$, $\delta_{5}$, $\delta_{6}$, $\delta_{7}$, $\delta_{8}$, $\delta_{9}$, $\delta_{10} \in GF(q)$, also
\begin{center}
\[ z_{11}=x_{11}y_{11} = \left[ \begin{array}{rrrrrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \delta_{10}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & \delta_{9}\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & \delta_{8}\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & \delta_{7}\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -\delta_{6}\\
0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & \delta_{5}\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{4}\\
0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{3}\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{2}\\
-1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{1}\\
\end{array} \right].\]
\end{center}
The characteristic polynomial of $z_{11}$ is
\begin{center}
$f_{z_{11}}(t) = t^{11}-\delta_{1}t^{10}+(\delta_{10}-1)t^{9}+(2\delta_{1}+\delta_{3}+1)t^{8}-(\delta_{1}+\delta_{8}+\delta_{9}+2\delta_{10}+1)t^{7}-(\delta_{1}-\delta_{2}+\delta_{3}+\delta_{5}-\delta_{10})t^{6}+(\delta_{1}+\delta_{3}-\delta_{6}+\delta_{7}+\delta_{8}+\delta_{9}+\delta_{10}+1)t^{5}+(\delta_{1}-\delta_{2}-\delta_{4}-\delta_{7}-\delta_{8}-\delta_{9}-\delta_{10})t^{4}-(\delta_{1}-\delta_{2}-\delta_{4}+\delta_{9}+\delta_{10}+2)t^{3}+(\delta_{2}+\delta_{9}+\delta_{10}+2)t^{2}-(\delta_{2}-1)t-1$
\end{center}
Now let us take an element $\omega$ of order $Q$ in the multiplicative group of the field $GF(q^{11})$ and put
\begin{center}
$l(t) = (t - \omega)(t - \omega^{q})(t - \omega^{q^{2}})(t - \omega^{q^{3}})(t - \omega^{q^{4}})(t - \omega^{q^{5}})(t - \omega^{q^{6}})(t - \omega^{q^{7}})(t - \omega^{q^{8}})(t - \omega^{q^{9}})(t - \omega^{q^{10}}) = t^{11} - at^{10} + bt^{9} - ct^{8} + dt^{7} - et^{6} + ft^{5}-gt^{4}+ht^{3}-kt^{2}+mt- 1$.
\end{center}
The last polynomial has all its coefficients in the field $GF(q)$ and the roots of $l(t)$ are pairwise distinct (in fact, the polynomial $l(t)$ is irreducible over $GF(q)$, although this is not needed for our considerations). The polynomials $f_{z_{11}}(t)$ and $l(t)$ are identically equal if
\begin{center}
$\delta_{1}=a$, $\delta_{2}=-m+1$, $\delta_{3}=-2a-c-1$, $\delta_{4}=a+2m-2-k+h$, $\delta_{5}=a-m+3+c+b+e$, $\delta_{6}=-a-c+1+g-m+k-h-f$, $\delta_{7}=3-m+k-h+g+a+b+d$, $\delta_{8}=-a+k-m+1-b-d$, $\delta_{9}=m-4-b-k$, $\delta_{10}= b+1$
\end{center}
For these values of $\delta_{i}$ $(i=1,\dots,10)$ we have $f_{z_{11}}(t)=l(t)$; then, in $GL_{11}(q^{11})$, $z_{11}$ is conjugate to $\mathrm{diag}(\omega, \omega^{q}, \omega^{q^{2}}, \omega^{q^{3}}, \omega^{q^{4}}, \omega^{q^{5}}, \omega^{q^{6}}, \omega^{q^{7}}, \omega^{q^{8}}, \omega^{q^{9}}, \omega^{q^{10}})$ and hence $z_{11}$ is an element of $G$ of order $Q$.\\
\indent\indent Now, $H = \left\langle x_{11},y_{11}\right\rangle$ is a subgroup of $G$ of order divisible by $6Q$. We have already proved above that the only maximal subgroup of $G$ whose order is a multiple of $Q$ is that in Aschbacher's class \emph{$C_{3}$}, of order $11Q$, which means that $H$ cannot be contained in any maximal subgroup of $G$. Thus $H = G$ and $G = \left\langle x_{11},y_{11}\right\rangle$ is a $(2,3)$-generated group; $\overline{G} = \left\langle {\overline{x_{11}},\overline{y_{11}}}\right\rangle$ is a $(2,3)$-generated group too. \\
\indent\indent This completes the proof of the theorem.
$\square$\\
E. Gencheva and Ts. Genchev\\
Department of Mathematics\\
Technical University of Varna \\
Varna, Bulgaria\\
e-mail: [email protected]; [email protected]\\
K. Tabakov\\
Faculty of Mathematics and Informatics\\
Department of Algebra\\
"St. Kliment Ohridski" University of Sofia\\
Sofia, Bulgaria\\
e-mail: [email protected]\\
\end{document} |
\begin{document}
\title{A robust multivariate, non-parametric outlier identification method for scrubbing in fMRI}
\begin{abstract}
Functional magnetic resonance imaging (fMRI) data contain high levels of noise and artifacts due to head motion, scanner instabilities, and other sources. To avoid contamination of downstream analysis, fMRI-based studies must identify and remove these sources of noise prior to statistical analysis. One common approach is the ``scrubbing'' or ``censoring'' of fMRI volumes that are thought to contain high levels of noise. However, existing scrubbing techniques are based on subject head motion measures or ad hoc measures of signal change. Here, we consider scrubbing through the lens of outlier detection, where volumes containing artifacts are thought of as multidimensional outliers. Robust multivariate outlier detection methods have been proposed using robust distances, which are related to the Mahalanobis distance. These robust distances have a known distribution when the data are i.i.d. Gaussian, and that distribution can be used to determine an appropriate threshold for outliers. However, in the fMRI context, we observe clear violations of the assumptions of Gaussianity and independence. Here, we develop a robust multivariate outlier detection method that is applicable to non-Gaussian data. The objective is to obtain threshold values to flag outlying volumes based on their robust distances. The main steps of our procedure are: (1) dimension reduction and selection to identify primarily artifactual latent directions in the data; (2) robust univariate outlier imputation to remove the influence of outliers from the distribution; and (3) estimation of a threshold for outliers based on an upper quantile of the outlier-free distribution of robust distances. We propose two threshold choices, which share the same initial steps; the choice between them depends on the researcher's purpose (i.e., greater data retention vs. greater sensitivity). The first threshold is an upper quantile (e.g., the $99^{th}$) of the empirical distribution of robust distances obtained from the imputed data. The second threshold is an estimate of the upper quantile of the robust distance distribution obtained via a nonparametric bootstrap, to account for uncertainty in the empirical quantile. We compare our proposed approach with existing approaches for scrubbing in fMRI, including motion scrubbing, data-driven scrubbing, and existing multivariate outlier detection methods based on restrictive parametric assumptions.
\end{abstract}
\keywords{outliers, fMRI, robust estimates}
\section{Introduction}
Functional magnetic resonance imaging (fMRI) is a non-invasive technique that can be used to localize task-specific active brain regions and assists in predicting psychological or disease states \citep{lindquist2008statistical}. During neuronal activity, the brain hemodynamics alter due to an increase in the blood oxygenation level in the brain cells. These changes in the brain are captured in a magnetic resonance imaging (MRI) scanner. These MR images are collected over the duration of the experiment (typically $2 \sim 60$ min.) and consist of $\sim100,000$ evenly sized cubes known as voxels, which collectively represent an individual's whole brain. Therefore, fMRI data are a form of high-dimensional data containing the received signals from each voxel at each time point. An acquired image from all voxels at one time point is called a \textit{volume}.
Employing artifact-contaminated fMRI data reduces the quality of the results and distorts statistical analysis by reducing the signal-to-noise ratio (SNR) and violating common statistical assumptions. A low SNR is one of the shortcomings of fMRI that makes it more difficult to identify active brain regions associated with an activation task, because artifactual signals might mask the real brain signals. Artifacts may be either participant-related (such as head movements, eye movements, breathing, and heartbeats \citep{friston1996movement, beauchamp2003fmri, frank2001estimation, kruger2001physiological}) or equipment-related (spikes, scanner drift). The identification of artifact-contaminated volumes before data analysis is therefore crucial.
In this work, we propose a robust outlier detection approach for identifying artifact-contaminated fMRI volumes. Our approach considers the high-dimensional, auto-correlated, and non-Gaussian aspects of fMRI data. Standard outlier detection methods fail in high-dimensional data. However, in many cases dimension reduction techniques can be used, after which existing multivariate outlier detection methods can be applied. The literature provides various distance-based outlier detection methods for multivariate data. One of the oldest is the Mahalanobis distance \citep{mahalanobis1936generalised}, which is based on the sample mean and sample covariance. However, both the center and scaling factors are obtained by using all observations and therefore may be influenced by outliers. The influence of outliers on the sample mean and covariance can cause ``masking'' and ``swamping'' effects \citep{rousseeuw1990unmasking}. The masking effect is the failure to identify true outliers, while the swamping effect is the flagging of non-outliers as outliers. As such, the Mahalanobis distance is a non-robust measure.
\cite{rousseeuw1990unmasking} proposed the use of robust minimum covariance determinant (MCD) estimates of the mean and covariance in place of the conventional ones to produce a robust distance (RD) measure. MCD estimates are calculated by splitting the data into two subsets; the subset containing the observations located close to the center of the data, which are unlikely to represent outliers, is used to estimate the location and scale parameters robustly.
\cite{hardin2005} derived the theoretical distribution of MCD-based RDs for Gaussian-distributed data. An upper quantile of that distribution can be used to identify outliers. For example, using the $99^{th}$ quantile of the theoretical distribution, on average $1\%$ of non-outlying observations will be flagged as outliers along with the true ones. Unfortunately, the empirical distribution of RDs has often been observed to deviate from the theoretical distribution. Previous work employing MCD-based RDs attempted ad-hoc approaches to improve the distributional fit, for example median matching \citep{mejia2017pca, filzmoser2008, maronna2002}. Some reasons for this deviation are violations of the assumptions of independence, identical distribution, and Gaussianity. fMRI data typically violate these key assumptions, in particular those of Gaussianity and independence. Therefore, applying the Hardin \& Rocke approach to fMRI data can produce a high rate of false positives and incorrectly scrub useful volumes as artifactual.
Here, we propose a novel non-parametric robust multivariate outlier detection method that is applicable to fMRI data. We consider two approaches to thresholding outliers: an empirical quantile calculation of the robustly transformed data's RDs, and an upper quantile estimation of the original data's RDs via a non-parametric bootstrap. These methods can be applied to any type of low-dimensional data, so we assume that the fMRI data have undergone dimension reduction using previously proposed techniques \cite{pham2023less}. First, we define the two MCD subsets. Second, we apply a robust univariate outlier imputation proposed by \cite{raymaekers2021transforming} to mitigate the influence of outliers on the procedure. Finally, we identify outliers based on the threshold values. One way of thresholding the RDs is to calculate an empirical quantile of the RDs of the robustly transformed data. Another way is to employ a non-parametric bootstrap within each subset to estimate the distribution of RDs and the quantile for thresholding outliers. These thresholds are applied to the RDs of the original data to identify outliers.
The remainder of this paper is organized as follows. In Section \ref{sec:methods}, we introduce our proposed method and compare it with \cite{hardin2005}'s theoretical approach using simulated data and two toy fMRI datasets. In Section \ref{sec:dimred}, we apply a recently proposed dimension reduction and selection approach to the fMRI data. In Section \ref{sec:RD}, we briefly describe how MCD subset selection and RD calculation are employed from a recently proposed method. In Section \ref{sec:distRD}, we explain and illustrate the consequences of utilizing Hardin \& Rocke's approach for the distribution of RDs in terms of identifying outliers in both fMRI data and simulated fMRI data. In Section \ref{sec:bootRD}, we describe our proposed approach to threshold the distribution of RDs to identify outliers. In Section \ref{sec:uoi}, we describe our robust univariate outlier imputation algorithm. In Section \ref{sec:estQ}, we describe the estimation of an upper quantile of the RDs, considering two quantile calculations to threshold RDs. In Section \ref{sec:EDA}, we apply our method to fMRI data
from the Human Connectome Project (HCP) 42-subject retest \footnote{http://humanconnectome.org} and compare it with existing methods \citep{afyouni2018insight, smyser2010longitudinal, pham2023less, power2012, power2014}. In Section \ref{sec:discussion}, we briefly summarize the results and discuss limitations, suggesting future work.
\section{Method}
\label{sec:methods}
Here, we propose a novel, robust, high-dimensional outlier detection method that can be used to identify artifact-contaminated fMRI volumes. Since neuronal activity accounts for only a small portion of variance in fMRI data, temporal spikes or outliers in the BOLD signal are assumed to be of artifactual origin. Our approach consists of four steps, which are described in the following subsections. The first step is to reduce the dimensions and select high-kurtosis components, which are likely to represent artifactual volumes. For low-dimensional data, this step can be skipped. The second step is to compute a robust distance (RD) measure. The third step, which is our novel contribution, is to estimate the null distribution of RDs non-parametrically and obtain an upper quantile of this distribution to serve as an outlier detection threshold. The fourth step is to apply this threshold to identify artifactual volumes.
\subsection{Dimension reduction and selection}
\label{sec:dimred}
Here, we adopt a dimension-reduction and selection approach that we recently proposed and validated \citep{pham2023less}. Briefly, we reduce the dimensionality of the data by applying independent component analysis (ICA) and then select high-kurtosis components likely to represent artifactual patterns in the data. This approach improves upon the PCA leverage method proposed by \cite{mejia2017pca}, selecting components based on kurtosis. ICA is a popular tool for dimension reduction and for identifying spatially independent components in fMRI, which can be of neuronal or artifactual origin; artifactual components can be identified and removed from fMRI datasets using ICA-based tools such as ICA-FIX \citep{griffanti2014} and ICA-AROMA \citep{pruim2015}. \cite{pham2023less} comprehensively compared the effectiveness of this novel ICA-based scrubbing approach, termed ``projection scrubbing,'' with data-driven (DVARS \cite{afyouni2018insight, smyser2010longitudinal}) and head-motion-based scrubbing techniques \citep{power2012, power2014}. They found that projection scrubbing tended to produce more reliable and valid estimates of functional connectivity (FC), while retaining much more data than motion scrubbing. In this work, we therefore adopt the proposed dimension-reduction and selection approach, described next, while introducing an improved technique for identifying artifactual volumes.
Consider an fMRI dataset, ${\bf{Y}}_{T \times V}$, where $T$ is the number of time points (volumes) of an fMRI experiment and $V$ is the total number of voxels of the brain ($T \ll V$). ICA decomposes ${\bf{Y}}$ into a spatial source signal matrix $({\bf{S}})$ containing independent components (ICs) and a temporal mixing matrix $({\bf{A}}_{T \times Q})$ containing the temporal activation profile of each IC. That is, ${\bf{Y}}={\bf{A}} \ {\bf{S}} + {\bf{E}}$. ${\bf{A}}$ can be considered a dimension-reduced version of ${\bf{Y}}$ along the spatial dimension. While the ICs of ${\bf{S}}$ may represent signal or noise, artifacts tend to appear as burst noise causing high spikes in the corresponding time courses. Therefore, the columns of ${\bf{A}}$ related to artifactual ICs are more likely to contain extreme values. Kurtosis is an indicator of the presence of potential outliers: since outliers make the tails of a distribution heavier, a high kurtosis value is an indication of potential outliers. Therefore, the components having a \textit{high kurtosis} value are selected from the dimension-reduced data $({\bf{A}}_{T \times Q})$. Suppose we select $K$ high-kurtosis components from the $Q$ independent components. The resultant matrix, ${\bf{X}}_{T \times K}$, consists of the columns of $({\bf{A}}_{T \times Q})$ corresponding to the rows of ${\bf{S}}$ that are likely to represent artifacts. The kurtosis values are calculated as:
\begin{align}
\text{Kurt} = \frac{1}{T} \sum_{t=1}^{T} \Bigg( \frac{a_{t} - \bar{a}}{s} \Bigg)^4 - 3,
\label{eqn:KURT}
\end{align}
for each column ${\bf{a}} = (a_1, a_2, ..., a_T)^T$ of $({\bf{A}}_{T \times Q})$, where $\bar{a}$ and $s$ are the sample mean and standard deviation of $\bf{a}$, respectively. The goal of our method, described below, is to detect $K$-dimensional outliers among the $T$ observations (rows) of ${\bf{X}}$. These extreme observations can then be excluded from analysis via ``scrubbing'' to avoid undue influence.
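As a concrete illustration of this selection step, the following sketch (Python/NumPy; the function names are ours and the kurtosis cutoff is an arbitrary placeholder rather than the cutoff used in practice) computes the excess kurtosis of each column of a mixing matrix ${\bf{A}}$ and keeps the high-kurtosis columns:
\begin{verbatim}
import numpy as np

def excess_kurtosis(a):
    # Sample excess kurtosis of a 1-D array, as in the equation above.
    a = np.asarray(a, dtype=float)
    z = (a - a.mean()) / a.std()
    return float(np.mean(z ** 4) - 3.0)

def select_high_kurtosis(A, cutoff=2.0):
    # Keep columns of the T x Q mixing matrix A whose excess kurtosis
    # exceeds `cutoff` (an arbitrary placeholder threshold).
    kurt = np.array([excess_kurtosis(A[:, q]) for q in range(A.shape[1])])
    return A[:, kurt > cutoff], kurt

# Tiny synthetic example: a large spike makes component 3 heavy-tailed.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
A[50, 3] += 8.0
X, kurt = select_high_kurtosis(A)
print(np.round(kurt, 2), X.shape)
\end{verbatim}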
\subsection{Robust distance calculation}
\label{sec:RD}
Let ${\bf{X}} = ({\bf{x}}_1, {\bf{x}}_2, ..., {\bf{x}}_n)^T$ represent data with $n$ observations across $p$ dimensions. The Mahalanobis distance (MD) of the $i^{th}$ observation ${\bf{x}}_i = ({x}_{i1}, {x}_{i2}, ..., {x}_{ip})^T$ is defined as
$\text{MD}({\bf{x}}_i) = \sqrt{({\bf{x}}_i - \hat{{\boldsymbol{\mu}}})^T \ {\hat{{\boldsymbol{\Sigma}}}}^{-1} \ ({\bf{x}}_i - \hat{{\boldsymbol{\mu}}})}$, where ${\hat{\boldsymbol{\mu}}} = ({\hat{\mu}}_1, \hat{\mu}_2, ..., \hat{\mu}_p)^T$ and $\hat{{\boldsymbol{\Sigma}}}_{p \times p}$ are the sample mean and sample covariance, respectively, across the $n$ observations. \cite{rousseeuw1990unmasking} proposed using minimum covariance determinant (MCD) estimators of the mean, $\hat{{\boldsymbol{\mu}}}_{MCD}$, and covariance, $\hat{{\boldsymbol{\Sigma}}}_{MCD}$, to mitigate the influence of outliers on the MD. These MCD estimates are obtained by choosing a subset of $h < n$ observations closest to the center of the distribution, and therefore unlikely to represent outliers. The MCD achieves its maximum breakdown point by choosing $h = \lfloor{(n+p+1)/2}\rfloor$ \citep{lopuhaa1991}. For $n$ observations ${\bf{x}}_{1}, {\bf{x}}_{2}, ..., {\bf{x}}_{n}$ with $p$ dimensions, the subset of $h$ observations, ${\bf{x}}_{i_1}, {\bf{x}}_{i_2}, ..., {\bf{x}}_{i_h}$, is chosen to provide the minimum possible determinant of the covariance among any subset. That is, letting $S_1 = \{i_1, i_2, ..., i_h\}$, $\det ( \hat{{\boldsymbol{\Sigma}}}_{S_1}) \le \det ( \hat{{\boldsymbol{\Sigma}}}_{S})$ for any set $S = \{k_1, k_2, ..., k_h\}$. We call the observations ${\bf{x}}_{i_1}, {\bf{x}}_{i_2}, ..., {\bf{x}}_{i_h}$ \textit{included} observations and the remaining observations \textit{excluded} observations. Let $S_2 = \{ 1,2, ...,n \} \backslash S_1$ index the excluded observations. The determination of the set $S_1$ requires an assessment of $\binom{n}{h}$ possibilities, which is computationally demanding as $h \approx \frac{n}{2}$. To overcome this computational challenge, we use an algorithm, \texttt{FastMCD}, developed by \cite{rousseeuw1999}. The MCD estimators, $\hat{{\boldsymbol{\Sigma}}}_{MCD}$ and $\hat{{\boldsymbol{\mu}}}_{MCD}$, are obtained based only on the included observations. Since MCD estimators have a breakdown point of $1-\frac{h}{n}$, the estimator of the mean and covariance remains robust as long as the proportion of outliers in the data does not exceed $1-\frac{h}{n}$. Using MCD estimates of the mean and covariance in the MD formula results in a robust distance (RD) metric.
$$
\begin{aligned}
RD({\bf{x}}_i) &= \sqrt{({\bf{x}}_i - \hat{{\boldsymbol{\mu}}}_{MCD})^T \ {\hat{{\boldsymbol{\Sigma}}}_{MCD}}^{-1} \ ({\bf{x}}_i - \hat{{\boldsymbol{\mu}}}_{MCD})}
\end{aligned}
$$
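For readers who wish to experiment with this quantity, a minimal sketch using scikit-learn's \texttt{MinCovDet} (which implements FastMCD; note that by default it returns reweighted rather than raw MCD estimates, and the variable names below are our own illustration) is:
\begin{verbatim}
import numpy as np
from sklearn.covariance import MinCovDet

def robust_distances(X, seed=0):
    # MCD-based robust distances RD(x_i) for the rows of an n x p array X.
    mcd = MinCovDet(random_state=seed).fit(X)
    rd = np.sqrt(mcd.mahalanobis(X))   # mahalanobis() returns squared distances
    return rd, mcd.support_            # support_ flags the observations used
                                       # for the robust ("included"-like) fit

# Toy example: 200 observations in 5 dimensions with a few gross outliers.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
X[:5] += 10.0
rd, support = robust_distances(X)
print(np.round(rd[:5], 1), support[:5])   # outliers get large RDs, False support
\end{verbatim}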
Figure \ref{fig:matrixRD} provides an illustration of the RD computation using toy fMRI data based on the publicly available Autism Brain Imaging Data Exchange (ABIDE) data \citep{di2014ABIDE}. These toy data are available in the \texttt{fMRIscrub} R package. The data consist of a single slice from one session, represented by a $T \times V$ matrix, where $T = 145$ and $V = 4679$. This session was determined to be relatively free of artifacts based on visual inspection.
\begin{figure}[H]
\centering
\includegraphics[height=0.6\textwidth, trim=0 0in 0in 0.195in, clip]{image/Figure1.pdf} \\
\includegraphics[width = 0.4\linewidth, trim=0 0in 0in 3.6in, clip]{image/Figure3_Distributions_of_RD_from_ABIDE2.pdf}
\caption{\small \textbf{Illustration of MCD-based robust distance calculation.} For illustration purposes, we chose $6$ components (out of 26) from the dimension-reduced and high-kurtosis components selected from fMRI data. Each component is a time series. The orange lines indicate included observations (volumes), while the turquoise ones indicate excluded observations. We obtain RDs of each observation in the time series based on the mean and covariance calculated from the included observations. As we can see from the left image, volumes with lower signal intensities are labeled as included, while volumes with higher signal intensities are labeled as excluded based on the MCD calculation. The right image shows the RDs of each volume across all six components. The excluded observations tend to have higher signal intensities and higher RDs than the included observations. This figure visually illustrates the idea of expecting outliers in the excluded subset, as it contains volumes with relatively higher signal intensities than the volumes in the included subset. The RD metric provides a single value for each volume that indicates the location of the volume relative to the center of the data.}
\label{fig:matrixRD}
\end{figure}
\subsection{Theoretical distribution of MCD-based robust distance}
\label{sec:distRD}
To identify outliers based on RDs, it is necessary to know the distribution of RDs among non-outlying observations. An upper quantile of that distribution can be used as a threshold to identify outliers. \cite{hardin2005} derived the distribution of MCD-based RDs of i.i.d. Gaussian data. They proved that the RDs of excluded observations approximately follow a scaled $\textit{F}$ distribution. However, this theoretical result becomes invalid if any of the assumptions of Gaussianity, independence, or identical distribution are violated. As we show below, these assumptions are usually violated for fMRI data, making this theoretical result inapplicable, especially given that the chosen dimensions are selected based on having higher kurtosis than Gaussian data.
In Figure \ref{fig:sim}, we visualize Hardin \& Rocke's theoretical result on three outlier-free, Gaussian simulated datasets. Because the excluded observations are a subset of the dataset with a distribution truncated at the $(1-\frac{h}{n})$-th quantile, both included and excluded observations are displayed on the histograms for ease of visualization. The scaled F distribution is displayed to show the empirical versus theoretical distribution fit of the excluded observations' RDs. Observations are identified as outliers when they lie beyond the vertical lines at the $99^{th}$ quantile of the theoretical F distribution. We replicate these three datasets $100$ times with the same settings to compute the average false positive rate (FPR), which indicates the rate of observations being wrongly labeled as outliers. Since the $99^{th}$ quantile is used, an FPR near $1 \%$ is expected.
Figure \ref{fig:sim}a shows the RDs of the first dataset, which is generated from independent and identically distributed Normal observations. Recall that under these assumptions, the excluded observations' RDs follow a scaled F distribution. The second and third datasets, shown in Figure \ref{fig:sim}b and Figure \ref{fig:sim}c, respectively, are generated from a Gaussian first-order autoregressive ($AR(1)$) model to reflect one feature of fMRI data: autocorrelation. To mimic typical fMRI data, we choose $\Phi = 0.4$, a value that is commonly assumed for the temporal correlation of single-band fMRI data. For the third dataset, we choose $\Phi = 0.9$, which imitates a more modern fMRI acquisition with fast temporal resolution. The theoretical F distribution fits reasonably well on the three histograms displayed in Figure \ref{fig:sim}. As expected, the $99^{th}$ quantile lies at the tail of the distribution of excluded observations' RDs for each dataset, near the nominal level of $1 \%$. The average FPRs in the three scenarios are $1.3 \%$, $1.5 \%$, and $6.1\%$, respectively. The assumption of independence is violated in the second and third datasets, and these FPRs suggest that the violation of the independence assumption has an effect on the validity of the theoretical F distribution.
While the distributional fit could be improved by accounting for the dependence through an effective sample size, fMRI data also exhibit major deviations from the assumption of Gaussianity. The degree of dependence is known to vary dramatically across the brain \citep{Parlak2022-sz}. Therefore, determining the effective sample size in fMRI data is non-trivial, and would need to be done in a robust manner to avoid the influence of outliers. Although the spatial autocorrelation pattern can be accounted for via bias correction, the strength of this correction also varies across the brain \cite{afyouni2019effective}. That is, adjusting the effective sample size would not be an appropriate solution to the problem, and a more flexible non-parametric approach is required to address the various violated assumptions.
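The simulated datasets just described are straightforward to generate. A sketch of the data-generating step (Python/NumPy; the sample size and dimension below are placeholders, and the innovations are scaled so that the marginal variance equals one) is:
\begin{verbatim}
import numpy as np

def simulate_ar1_gaussian(T=1000, p=5, phi=0.0, seed=0):
    # T x p Gaussian data whose columns follow an AR(1) process with
    # coefficient phi; phi = 0 gives i.i.d. Gaussian data.
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((T, p))
    X = np.empty((T, p))
    X[0] = eps[0]
    for t in range(1, T):
        X[t] = phi * X[t - 1] + np.sqrt(1.0 - phi ** 2) * eps[t]
    return X

# The three settings considered above:
X_iid  = simulate_ar1_gaussian(phi=0.0)
X_ar04 = simulate_ar1_gaussian(phi=0.4)
X_ar09 = simulate_ar1_gaussian(phi=0.9)
\end{verbatim}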
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=0.9\textwidth, trim=0 0.5in 0in 0in, clip]{{image/Figure2a_Distributions_of_RD_from_simualted_data.pdf}} \\
\hbox{}
\caption{i.i.d. Gaussian data}
\end{subfigure} &
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=0.9\textwidth, trim=0 0.5in 0in 0in, clip]{{image/Figure2b_Distributions_of_RD_from_simualted_data.pdf}} \\
\includegraphics[width = 0.9\linewidth, trim=0 0in 0in 3.6in, clip]{image/Figure3_Distributions_of_RD_from_ABIDE2.pdf}
\caption{AR(1) model, $\Phi=0.4$}
\end{subfigure} &
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=0.9\textwidth, trim=0 0.5in 0in 0in, clip]{{image/Figure2c_Distributions_of_RD_from_simualted_data.pdf}} \\
\hbox{}
\caption{AR(1) model, $\Phi=0.9$}
\end{subfigure}
\end{tabular} \\
\caption{\small \textbf{Distributions of RDs in simulated Gaussian data with and without dependence and fit of the theoretical F distribution.} The vertical dashed lines indicate the 99th quantile of the F distribution proposed by \cite{hardin2005}. Reported false positive rates (FPR), written at the top right of each panel, are obtained by averaging over $100$ replications of each simulation setting, with a nominal rate of $0.01$. (A) i.i.d. Gaussian data are simulated. The mean FPR is very close to the nominal rate of 0.01. (B) A first-order autoregressive (AR(1)) model with a correlation coefficient of $\Phi=0.4$ is simulated. Even though the independence assumption of the theoretical method is violated, the average FPR does not show a significant change. (C) A first-order autoregressive (AR(1)) model with a correlation coefficient of $\Phi=0.9$ is simulated. Minor violations of the independence assumption have negligible effects on the validity of the theoretical F distribution, but stronger dependence patterns may alter the distribution more substantially. }
\label{fig:sim}
\end{figure}
In addition to violating the independence assumption, fMRI data also violate the Gaussianity assumption. To demonstrate this and the subsequent failure of the theoretical distribution, we employ two ``toy'' fMRI sessions, each based on a single brain slice, from the publicly available ABIDE database \citep{di2014ABIDE}. Based on visual inspection of the original fMRI data, the first fMRI session, \textit{ABIDE 1}, is known to be highly contaminated with artifacts, while the second fMRI session, \textit{ABIDE 2}, is relatively free of artifacts. Figure \ref{fig:hist-ABIDE} displays the empirical distribution of RDs for each dataset after dimension reduction, as described in Section \ref{sec:dimred}. Q-Q plots are shown to check the Normality assumption for $5$ of the components. Both datasets show violations of Gaussianity. The Q-Q plots of the selected components highlight that they deviate from Gaussianity in ways that cannot be explained solely by the presence of outliers. Their distributions are non-Gaussian in different ways, which prevents the application of a common transformation to achieve Normality. As a result, the scaled F distributions clearly fail to fit the distribution of the excluded observations' RDs for both datasets. The empirical RDs are larger than would be expected based on the theoretical distribution. The $99^{th}$ quantile of the theoretical F distribution falls within the middle of the distribution of excluded observations for both datasets. This would lead to many likely non-outlying observations being classified as outliers. The conclusion is that, while Hardin \& Rocke's theoretical result holds for i.i.d. Gaussian data and even moderately correlated Gaussian data, their approach fails when the assumption of Gaussianity is violated, making it inappropriate for fMRI data.
\begin{figure}[H]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.3\linewidth, trim=0 0.85in 0in 0, clip]{image/Figure3_Distributions_of_RD_from_ABIDE1.pdf} &
\includegraphics[width = 0.3\linewidth, trim=0 0.85in 0in 0, clip]{image/Figure3_Distributions_of_RD_from_ABIDE2.pdf} \\
\end{tabular}
\begin{tabular}{c}
\includegraphics[width = 0.4\linewidth, trim=0 0in 0in 3.6in, clip]{image/Figure3_Distributions_of_RD_from_ABIDE2.pdf} \\
\end{tabular}
\caption{Distribution of RDs of toy fMRI datasets, theoretical distribution and threshold value (dashed) based on the theoretical F distribution (solid line) }
\hspace{10em}
\end{subfigure}
\hfill
\hspace{2em}
\begin{subfigure}{\linewidth}
\begin{tabular}{c|c}
\begin{picture}(10,90)\put(0,45){\rotatebox[origin=c]{90}{ABIDE1}}\end{picture} & \includegraphics[width = 0.95\linewidth,trim=0in 0in 0in 0.53in, clip]{image/Figure3_QQplot_ABIDE1.pdf} \\
\hline
\begin{picture}(10,90)\put(0,45){\rotatebox[origin=c]{90}{ ABIDE2}}\end{picture} & \includegraphics[width = 0.95\linewidth,trim=0in 0in 0in 0.53in, clip]{image/Figure3_QQplot_ABIDE2.pdf}
\end{tabular}
\caption{Normal QQ-plots of several components for ABIDE1 (out of 26) and ABIDE2 (out of 5)}
\end{subfigure}
\caption{\small \textbf{fMRI data violate the Gaussianity assumption, leading to an RD distribution that does not follow the theoretical F distribution.} (A) The dashed vertical lines represent the $99^{th}$ quantile of the F distribution. RDs of ABIDE1 range from $10$ to over $10000$. Volumes with RD greater than $150$ are labeled as outliers based on the threshold value. Considering the range of the RDs, the threshold value is not reasonable. Unlike ABIDE1, the range of RDs of ABIDE2 is very narrow. Given that ABIDE2 is known to be cleaner, the RDs tend to be smaller overall, but the distribution still deviates from the theoretical F distribution, and the $99^{th}$ quantile of the F distribution may be overly aggressive. (B) QQ-plots of several components after applying dimension reduction, showing significant and heterogeneous deviations from the Gaussian distribution for both datasets.}
\label{fig:hist-ABIDE}
\end{figure}
\subsection{Proposed method to determine the distribution of robust distances non-parametrically}
\label{sec:bootRD}
Since fMRI data are autocorrelated and non-Gaussian, two key assumptions of the theoretical approach to determining a distribution for the RDs of excluded observations are violated. Although typical fMRI data could also violate the third assumption, namely that the data are identically distributed, we seek to satisfy it by applying mean and variance detrending to each independent component. Therefore, a method of determining the distribution of RDs and identifying outliers is required that overcomes the violations of both independence and Gaussianity. Here, we introduce a novel technique for estimating the $(1-\alpha)$ quantile of the distribution of RDs based on univariate outlier imputation and (optionally) a non-parametric bootstrap procedure. The following subsections describe the procedure. Section \ref{sec:uoi} describes a robust univariate outlier identification and imputation procedure used to prevent outliers from being masked; an \textit{empirical} quantile can be calculated after this step. Section \ref{sec:estQ} describes the details of estimating the $(1-\alpha)$ quantile of the distribution of RDs by using a non-parametric bootstrap, and the choice of a threshold measure summarizing over bootstrap samples to identify outliers.
\subsubsection{Robust univariate outlier imputation}
\label{sec:uoi}
The existence of outliers in the data alters the distribution of RDs, particularly the upper quantiles. For instance, for a dataset of size $n$ containing a single outlier, a bootstrap sample from that dataset would contain that outlier with probability $1 - \left(1- \frac{1}{n} \right)^{n} \to 1 - \frac{1}{e} \approx 0.63$ as $n \to \infty$. In fMRI data, there are typically numerous outlying volumes. Prior to estimation of a quantile, we therefore apply univariate outlier imputation to each component of the dimension-reduced dataset. Algorithm \ref{app:ruoi} describes the steps to obtain the imputed data. First, we robustly transform each component to achieve central Normality, as proposed by \cite{raymaekers2021transforming}. The goal of this transformation is to prevent outliers from being masked, as can happen after standard power transformations (e.g. Box-Cox or Yeo-Johnson). This transformation aims to achieve Normality for the center of the data distribution while retaining the outlying observations in the tail of the distribution.
Following the transformation, we identify outliers by using the median absolute deviation (MAD), a robust measure of variability for univariate data. For a given sample ${\bf{x}} = \{ {x}_1,{x}_2, ..., {x}_n \}$, the MAD is defined as the median of the absolute deviations from the median of the data, $MAD = med(|{\bf{x}} - M|)$, where $M = med({\bf{x}})$. We use a scaling factor of $1.4826$ to make the MAD a consistent estimator of the standard deviation for Gaussian data \citep{rousseeuw1993alternatives}. Observations lying beyond $4$ scaled MADs of the median are considered outliers. To obtain an imputed dataset, we replace each outlying observation by the mean of the nearest preceding and following non-outlier observations. We will use this imputed dataset, ${\bf{X}}^{0}_{T \times K}$, to determine the distribution of RDs in the absence of outliers, as described next.
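For concreteness, the identification and imputation step can be sketched in a few lines of R. This is illustrative code with our own function and variable names, not the implementation used in our analyses; the robust central-Normality transformation of \cite{raymaekers2021transforming} is assumed to have already been applied to the component \texttt{x}.
\begin{verbatim}
# Flag univariate outliers in one (transformed) component and impute each by
# the mean of the nearest preceding and following non-outlier values.
impute_univariate_outliers <- function(x, n_mads = 4) {
  M      <- median(x)
  mad_x  <- mad(x, center = M, constant = 1.4826)  # consistent for Gaussian data
  is_out <- abs(x - M) > n_mads * mad_x
  x_imp  <- x
  good   <- which(!is_out)
  for (t in which(is_out)) {
    nbrs <- c(tail(good[good < t], 1), head(good[good > t], 1))
    if (length(nbrs) > 0) x_imp[t] <- mean(x[nbrs])
  }
  x_imp
}
# Column-wise application to the dimension-reduced data:
# X0 <- apply(X, 2, impute_univariate_outliers)
\end{verbatim}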
\subsubsection{Estimation of $(1-\alpha)$ quantile of robust distance}
\label{sec:estQ}
Our goal is to estimate the $(1-\alpha)$ quantile of the distribution of RDs, which will be used to threshold the observed RDs to identify outliers. Here, $\alpha$ represents the proportion of non-outliers that are expected to be labeled as outliers, i.e. false positives. To estimate the $(1-\alpha)$ quantile while accounting for uncertainty, we propose a bootstrap procedure described in Algorithm \ref{alg:boot}. First, we divide observations into included and excluded MCD subsets as described in Section \ref{sec:RD}. We then bootstrap included and excluded observations separately to preserve the MCD structure. For each bootstrap sample, we compute the RD of each observation in the bootstrap sample using the bootstrap MCD mean and the main MCD covariance. We record the $(1-\alpha)$ quantile of the RD in each bootstrap sample. This process is repeated across $B$ bootstrap samples to obtain a bootstrap distribution of the $(1-\alpha)$ quantile of the distribution of RDs. Finally, we can choose a threshold measure among several possible summary statistics from this bootstrap distribution.
\begin{algorithm}[H]
\DontPrintSemicolon
\SetAlgoLined
\textbf{Input}: univariate outlier imputed data ${\bf{X}}^{0}_{T \times K}$ and $\alpha \in (0,1)$ \;
Define the index sets $S_1$ and $S_2$ for $\textit{included}$ and $\textit{excluded}$ observations of ${\bf{X}}^0$, respectively \;
Compute the main MCD covariance $\hat{\boldsymbol{\Sigma}}_{MCD}$ from $\{{\bf{x}}^0_{t} : t \in S_1 \}$, where ${\bf{x}}^0_{t}$ is the $t$-th row of ${\bf{X}}^0$ \;
\For{b = 1,2,.., B}{
use $F_1 = \frac{1}{h}$ probability mass to sample each of the $h$ included observations to create $S_{1}^{(b)}$ \;
use $F_2 = \frac{1}{n-h}$ probability mass to sample each of the $n-h$ excluded observations to create $S_{2}^{(b)}$ \;
compute the MCD estimate of the sample mean $\hat{\boldsymbol{\mu}}_{MCD}^{(b)}$ from $\{ {\bf{x}}^0_{t} : t \in S^{(b)}_{1} \}$ \;
compute bootstrap-based RDs of $\{ {\bf{x}}^{0}_t : t \in S^{(b)}_1 \bigcup S^{(b)}_2 \}$ by using $\hat{\boldsymbol{\Sigma}}_{MCD}$ and $\hat{\boldsymbol{\mu}}_{MCD}^{(b)}$ \;
calculate the $(1-\alpha)$ quantile estimate, $\hat{Q}^{(b)}_{(1-\alpha)}$, from the bootstrap-based RDs
}
\Return the estimated $\hat{Q}^{(b)}_{(1-\alpha)}$ for $b=1,2,...,B$
\caption{Bootstrap-based Estimation of the $(1-\alpha)$ Quantile of the Distribution of RDs}
\label{alg:boot}
\end{algorithm}
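A minimal R sketch of Algorithm \ref{alg:boot} is given below. The code is illustrative, with our own variable names rather than the analysis implementation: \texttt{X0} is the imputed $T \times K$ matrix, \texttt{incl} and \texttt{excl} are the index sets $S_1$ and $S_2$, and \texttt{Sigma\_mcd} is the main MCD covariance; the mean of the resampled included rows is used as the bootstrap location estimate, since the included rows already form the MCD subset.
\begin{verbatim}
boot_rd_quantile <- function(X0, incl, excl, Sigma_mcd, alpha = 0.01, B = 500) {
  Q <- numeric(B)
  for (b in seq_len(B)) {
    s1 <- sample(incl, length(incl), replace = TRUE)  # resample included rows
    s2 <- sample(excl, length(excl), replace = TRUE)  # resample excluded rows
    mu_b <- colMeans(X0[s1, , drop = FALSE])          # bootstrap location estimate
    rd_b <- sqrt(mahalanobis(X0[c(s1, s2), , drop = FALSE], mu_b, Sigma_mcd))
    Q[b] <- quantile(rd_b, 1 - alpha)                 # (1 - alpha) quantile of RDs
  }
  Q  # bootstrap distribution of the (1 - alpha) quantile
}
# Possible thresholds: mean(Q) or median(Q) for nominal specificity,
# or quantile(Q, 0.025) for the lower bound of a two-sided 95% CI.
\end{verbatim}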
Depending on the aim of a study, there may be different objectives for outlier detection, including nominal specificity, higher sensitivity, or higher specificity. If the objective is nominal specificity, equivalent to a nominal false positive rate of $\alpha$, then the mean or median of the bootstrap distribution could be used to estimate the ($1-\alpha$) quantile of the RD distribution. However, if the objective is high sensitivity to outliers (as in the fMRI scrubbing context), then one could use the lower bound of a bootstrap confidence interval (CI) of the quantile. On the other hand, if the objective is high specificity, to avoid discarding observations that are not true outliers, one could use the upper bound of a bootstrap CI. Hence, we consider two approaches: the empirical quantile and the lower bound (LB) of a bootstrap CI. To compare these two and assess their effects on FPR, we generate $1000$ outlier-free replicates of i.i.d. Gaussian data of size $n = 1000$ and set $\alpha$ to $0.01$. We start by applying Algorithm \ref{alg:boot}, which results in a distribution of the bootstrap-based $99^{th}$ quantile.
For the objective of high sensitivity, we consider using the lower bound of a one-sided bootstrap confidence interval (CI) of the $(1-\alpha)$ quantile as a potential cutoff. Specifically, we consider the $97.5\%$ one-sided lower confidence bound, which corresponds to the lower bound of a two-sided $95\%$ CI ($95\%$ CI LB). The FPR for each replicate is illustrated in Figure \ref{fig:FPR}, along with the means across replicates. As seen in Figure \ref{fig:FPR}, this threshold achieves an FPR above $0.01$ in all replicates, indicating that it avoids a below-nominal FPR. We also consider the $0.99$ empirical quantile as a possibility to reduce the time complexity. Its FPR across replicates achieves the strictest type-1 error control, maintaining an FPR below $0.02$ in all of the replicates in this simulated example. For comparison, we also consider the $0.99$ quantile of the theoretical F distribution. The FPR based on the theoretical threshold is more variable than for the other cutoff types and often above the nominal rate of $0.01$. The empirical quantile FPR is closer to the nominal rate and less variable than for the theoretical threshold. Given that these data are Gaussian, the theoretical threshold achieves close to nominal FPR here, but it would be expected to perform worse on fMRI data.
For a given choice of threshold approach, there is variability in the sensitivity to outliers using that approach. That can be due to sampling variability in the estimate and/or variability in the distribution of the data. As a result, a particular choice of threshold may yield outlier sensitivity rates that are higher or lower than desired. It is important to take this into account in order to avoid being overly conservative or overly liberal when identifying outliers, depending on the data context. In the context of fMRI, it is important to identify all outliers, since any artifactual volumes retained can have negative effects on downstream analysis. This is consistent with the approach often adopted in head motion-based scrubbing, where low (stringent) thresholds for head motion are often used in order to remove as much noise as possible, even though some signal is also discarded.
As demonstrated in \cite{pham2023less}, data-driven scrubbing techniques, such as the one proposed here, can effectively reduce noise while removing much less data, by better distinguishing signal from noise. However, it remains crucial to perform stringent noise removal in fMRI analysis. Therefore, we consider the empirical threshold. Figure \ref{fig:FPR} shows that the empirical threshold generally achieves removal rates of at least $1\%$, while consistently maintaining a removal rate below $2\%$. On the other hand, the LB of the $95\%$ bootstrap CI consistently achieves at least $1\%$, but has an average removal rate of $3\%$. We also show the removal rate of the theoretical threshold, which is around the nominal level of $1\%$ on average (note that the data used here are Gaussian and match the assumptions of this method, unlike real fMRI data), but exhibits a higher level of variability than the other two threshold approaches. Additionally, the empirical threshold is less variable than the LB of the $95\%$ bootstrap CI and the theoretical threshold. Figure \ref{fig:FPR} illustrates that the empirical quantile provides a sufficiently sensitive and more stable threshold for outliers than the LB of the $95\%$ bootstrap CI.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{image/Figure4_FPR_rep1000_N250_noimp.pdf}
\caption{\small \textbf{Rate of removed outliers in simulated standard Normal data, mean across replicates (black dot), and standard error bars of the mean across the replicates (gray).} For fMRI data, it is crucial to identify all outliers, since any artifactual volumes retained can negatively affect the downstream analysis. The empirical threshold generally achieves at least a $1\%$ removal rate while maintaining a removal rate consistently below $2\%$. On the other hand, the LB of the $95\%$ bootstrap CI consistently achieves at least a $1\%$ removal rate, but has an average removal rate of $3\%$. We also show the removal rate of the theoretical threshold, which is around the nominal level of $1\%$ on average (note that the data here are Gaussian, matching the assumptions of this method, unlike real fMRI data), but exhibits a higher level of variability than the other two threshold approaches. In addition, the empirical threshold is less variable than the LB of the $95\%$ bootstrap CI and the theoretical threshold.}
\label{fig:FPR}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.53\textwidth]{image/Figure5_RD_with_cutoff_ABIDE.pdf}
\caption{\small \textbf{Distribution of RDs of the ABIDE datasets, with the theoretical threshold and the proposed thresholds.} The theoretical quantile performs poorly, with more than half of the excluded observations labeled as outliers, an excessive removal rate even given the high-sensitivity goal of fMRI scrubbing analysis. By contrast, the empirical and LB of $95\%$ bootstrap CI cutoffs perform well, successfully identifying the long upper tail likely to consist of outliers in both datasets. The bootstrap CI LB method is slightly more stringent than the empirical threshold. \\}
\label{fig:ABIDE-cutoffs}
\end{figure}
We apply the same approach to both ABIDE toy datasets. As illustrated in Figure \ref{fig:ABIDE-cutoffs}, it is clear that the theoretical quantile performs poorly, with more than half of the excluded observations labeled as outliers. This is particularly bad for ABIDE1, the dataset with more artifacts. By contrast, the empirical and LB of $95\%$ bootstrap CI cutoffs perform well, successfully identifying the long upper tail likely to consist of outliers in both datasets.
\section{Experimental data results}
\label{sec:EDA}
We apply both the empirical and the LB of bootstrap CI outlier identification methods to fMRI data and compare their performance with existing methods. This section is organized as follows. In Section \ref{sec:fMRI}, we introduce the fMRI data we employ, apply our proposed outlier identification procedure to selected sessions, and present the resulting RD distributions. In Section \ref{sec:eval}, we compare our proposed approaches with existing scrubbing methods in terms of their effect on functional connectivity. In Section \ref{sec:spatial}, we illustrate the spatial patterns of the artifactual volumes identified by our proposed methods.
\subsection{fMRI datasets}
\label{sec:fMRI}
We employed publicly available data from the Human Connectome Project (HCP) \citep{van2013wu}. This data resource includes resting-state fMRI (rs-fMRI) from $1200$ healthy young adults; details of the acquisition and processing can be found in \cite{van2013wu} and \cite{glasser2013minimal}. Each subject was scanned over two sessions (REST1 and REST2), and at each session, two runs were acquired with different phase encodings (LR and RL). $45$ participants were retested (RETEST) with the two sessions and two acquisition directions. Since we compared our approach with the recently developed scrubbing approach of \cite{pham2023less}, we employed the same data from $42$ of these subjects; that is, for each subject, there were $8$ sessions. Each session included $1200$ volumes acquired every $0.72$ seconds over approximately $15$ minutes. The first $15$ volumes were excluded from the analysis to eliminate magnetic field instabilities, resulting in $1185$ volumes per session. The total number of voxels inside the brain was approximately $200,000$ but varied across subjects. After masking out non-brain areas, a matrix ${\bf{Y}}_{T \times V}$ was created, where $T$ is the number of acquired volumes for an experiment and $V$ is the number of voxels in the brain. We employed the \texttt{ciftiTools R} package \citep{pham2022ciftitools} to read, process, and analyze the data.
First, we visualize our two threshold candidates (empirical vs. bootstrap $95\%$ CI LB) together with the RDs of fMRI data. To illustrate our method, we selected $5$ highly noisy sessions, shown in Figure \ref{fig:fMRIRD}: two sessions from the same subject, ``A'', and three rs-fMRI sessions from different subjects (``B'', ``C'', and ``D''), all exhibiting high levels of noise and artifacts. We refer to these five sessions as \textit{A1}, \textit{A2}, \textit{B}, \textit{C}, and \textit{D}. For each session, we first apply the dimension reduction and selection method described in Section \ref{sec:estQ}. Second, we compute the RD of each volume and obtain threshold values based on the empirical and bootstrap $95\%$ CI LB approaches of Section \ref{sec:estQ}. Figure \ref{fig:fMRIRD} shows the empirical distribution of RDs for each fMRI session. Since both cutoff values are very close to each other, we adopt the empirical quantile as the threshold for further analysis, avoiding the computational cost of the bootstrap procedure.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{image/Figure6_RD_with_cutoff_5HCP.pdf}
\caption{\small \textbf{Distributions of the robust distances for $5$ noisy HCP sessions}. The vertical lines show two estimates of the $0.99$ quantile of the RDs: empirical (solid) and bootstrap $95\%$ CI LB (dotted). Both threshold values identify quite similar artifactual volumes, with the bootstrap $95\%$ CI LB cutoff being slightly more stringent. Both approaches appear to threshold the RDs of the data at a reasonable level.\\}
\label{fig:fMRIRD}
\end{figure}
\subsection{Effect of scrubbing on validity of functional connectivity}
\label{sec:eval}
Here, we compare our proposed outlier detection approach with existing scrubbing techniques for fMRI data, including data-driven scrubbing (e.g., projection scrubbing, DVARS) and motion scrubbing. ``Projection scrubbing'' is based on the same dimension reduction and selection procedure but employs a non-robust distance measure, leverage, and uses an ad-hoc threshold for outliers \citep{pham2023less}. Specifically, artifactual volumes are identified as having a leverage value greater than $3$ times the median leverage across all volumes. While this approach has been shown to outperform existing data-driven and hardware-based approaches \citep{pham2023less}, it may suffer from masking and/or swamping effects due to the relationship of leverage with the Mahalanobis distance.
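For reference, this leverage computation can be sketched in R as follows. The code is our own illustration of the description above, not the implementation of \cite{pham2023less}; \texttt{A\_sel} denotes the $T \times K$ matrix of selected high-kurtosis independent components.
\begin{verbatim}
# Leverage of each volume: diagonal of the hat matrix of A_sel.
lev     <- rowSums((A_sel %*% solve(crossprod(A_sel))) * A_sel)
flagged <- lev > 3 * median(lev)   # ad-hoc projection-scrubbing cutoff
\end{verbatim}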
Another common practice to detect artifactual volumes in fMRI data is motion scrubbing using frame-wise displacement (FD), a measure of subject head motion. This approach aims to detect artifacts caused by head movement \citep{power2012, power2014}. FD is a summary measure based on six rigid body realignment parameters used to align a subject's brain across volumes within an fMRI session. We adopted a lagged and filtered version of FD designed for HCP-style multi-band data (``modified FD''), as described by \cite{pham2023less}. A threshold is applied to these time-specific FD measures to flag high-motion volumes. Although this approach is commonly used in fMRI analysis, there is no principled or universally accepted threshold value. Following the most common practice across studies, we use $0.2$ as the FD threshold.
\cite{pham2023less} recently conducted a comprehensive comparison of data-driven scrubbing techniques (projection scrubbing, DVARS) and motion scrubbing. They found that data-driven scrubbing was at least as effective as motion scrubbing for downstream analyses based on functional connectivity (FC), while removing much less data ($\sim3\%$ versus $\sim18\%$ of volumes). One of the metrics considered by \cite{pham2023less} was the mean absolute change (MAC), proposed as a measure of the impact of scrubbing on validity by \cite{power2014,power2014methods, williams2022advancing}. Broadly, MAC compares two scenarios. In the first scenario, volumes are removed from the data based on a particular scrubbing method. In the second scenario, the same number of volumes is removed randomly, and this random removal is repeated across many permutations. MAC is then based on the difference in the change in FC under scrubbing versus random removal. If $\Delta z_{rsp}$ is the change in the FC value of pair $p$ for subject $s$ between scrubbing and the $r$-th random removal, then MAC is calculated as follows:
\begin{align}
\text{MAC} = \frac{1}{SP} \sum_{s=1}^{S} \sum_{p=1}^{P} \Bigg| \frac{1}{R} \sum_{r=1}^{R} \Delta z_{rsp} \Bigg| , \label{eqn:MAC}
\end{align}
where $S$ is the number of subjects, $P$ is the number of unique FC pairs, and $R$ is the number of random scrubbing permutations. At a fixed censoring rate, higher MAC values indicate improved validity of FC. Because MAC is expected to increase with the censoring rate, comparisons of MAC are only meaningful at a fixed censoring rate.
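As a concrete illustration of \eqref{eqn:MAC}, the following R sketch (the array layout and names are our own assumptions) averages the signed FC changes over the $R$ random-removal permutations before taking absolute values and averaging over subjects and FC pairs.
\begin{verbatim}
# dz: an R x S x P array of Delta z_{rsp}, the change in an FC value between
# scrubbing and the r-th random removal, for subject s and FC pair p.
mac <- function(dz) {
  per_pair <- apply(dz, c(2, 3), mean)  # (1/R) * sum over r of Delta z_{rsp}
  mean(abs(per_pair))                   # mean of |.| over all subjects and pairs
}
\end{verbatim}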
In Figure \ref{fig:MAC}, we illustrate the effects of various scrubbing approaches on the validity of FC using the MAC metric. Note that since MAC is expected to increase with a higher censoring rate, it should only be compared at a fixed censoring rate. The left-hand plot displays all methods and shows that motion scrubbing (modified FD, dark red) and RD with the theoretical cutoff (RD theoretical, pink) result in very high censoring rates near $20\%$. Comparing these two, modified FD seems to be better than RD theoretical based on MAC. The right-hand plot shows the same results on a narrower x-axis scale to facilitate comparison across the remaining methods, including our proposed approach. Our proposed method is displayed in purple for three different bootstrap-based cutoffs ($50\%$, $80\%$, $95\%$) and in green for the empirical cutoff. A reference line is plotted to facilitate comparison with ICA projection scrubbing and DVARS. Our proposed method flags slightly more volumes than projection scrubbing, suggesting it may help to avoid the masking of smaller outliers that can occur with projection scrubbing due to its use of a non-robust distance measure. Our proposed method may also be slightly better in terms of MAC. Both projection scrubbing and our proposed robust method perform better than DVARS in terms of MAC.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{image/RD_plot_comp.pdf}
\caption{\textbf{Effect of different scrubbing methods on MAC.} The x-axis shows the censoring rate ($\%$ of frames removed) and the y-axis shows MAC. Since MAC is expected to increase with a higher censoring rate, it is compared at a fixed censoring rate. (A) Comparison of the aforementioned scrubbing methods, showing that motion scrubbing (modified FD, dark red) and RD with the theoretical cutoff (RD theoretical, pink) result in very high censoring rates near $20\%$. Comparing these two, modified FD seems to be better than RD theoretical based on MAC. (B) Comparison of the scrubbing methods producing closer MAC measures. Our proposed method is displayed in purple for three different lower bounds of bootstrap CIs ($50\%$, $80\%$, $95\%$) and in green for the empirical cutoff. The reference line is displayed to compare the methods. Our proposed method flags slightly more volumes than projection scrubbing, suggesting it may help to avoid the masking of smaller outliers that can occur with projection scrubbing due to its use of a non-robust distance measure. Our proposed method may also be slightly better in terms of MAC. Both projection scrubbing and our proposed robust method perform better than DVARS in terms of MAC.}
\label{fig:MAC}
\end{figure}
\subsection{Spatial patterns of the artifactual volumes}
\label{sec:spatial}
Finally, we visualize the spatial patterns of artifacts associated with RD-based artifactual volumes. Recall that after performing ICA to obtain ${\bf{Y}} = {\bf{AS}} + {\bf{E}}$, $\bf{X}$ is obtained by selecting a subset of high-kurtosis columns of $\bf{A}$. Let ${\bf{A}}^*$ and ${\bf{S}}^*$ contain only the selected columns of ${\bf{A}}$ and the corresponding rows of ${\bf{S}}$. For each outlying volume $t$, an image of artifact intensity can be obtained by multiplying the $t$-th row of ${\bf{A}}^*$ by ${\bf{S}}^*$. Since it is not possible to display each of these volumes due to space constraints, we visualize the average across all outlying volumes, which gives an overall measure of artifact intensity at every voxel of the brain.
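This construction can be written compactly; the following R sketch uses our own variable names and assumes the selected matrices and the set of flagged volumes are available.
\begin{verbatim}
# A_star: T x K selected mixing matrix;  S_star: K x V selected source maps;
# out_idx: indices of volumes whose RD exceeds the chosen threshold.
artifact_imgs <- A_star[out_idx, , drop = FALSE] %*% S_star  # one image per outlying volume
mean_artifact <- colMeans(artifact_imgs)                     # average intensity at each voxel
\end{verbatim}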
Using the same five high-noise sessions shown in Figure \ref{fig:fMRIRD}, we display two orthogonal views of brain images (slices) acquired from each subject, containing artifact intensities, in Figure \ref{fig:brain}. The intensities are lower when the color is more red and higher when the color is more yellow. The axial views of each subject show a ring of intensity on the outer edge of the brain, which is often caused by head movements during data acquisition, leading to mislocalization of signals. The sagittal view of each image demonstrates high artifact intensity around the spinal cord (white rectangle), which is another indication of head movement-related artifacts. It is interesting to see similar artifact patterns across different sessions of the same subject (A1 and A2). In contrast to the patterns of artifacts seen in sessions \textit{A1}, \textit{A2}, \textit{B}, and \textit{C}, the sagittal view of session \textit{D} illustrates tissue activation around the edge of the cerebellum (white circle on the sagittal views), which could be an indication of pulsatile artifact from blood being pumped in brain blood vessels with each heart beat. In addition to these spatial differences, the subject-based color scales highlight the signal changes across subjects. For instance, session \textit{D} exhibits more intense artifacts compared to subject \textit{A}. This agrees with the higher magnitude of RDs of session D compared to the other sessions, as seen in Figure \ref{fig:fMRIRD}.
\begin{figure}[H]
\centering
\includegraphics[width=0.84\textwidth]{image/spatial_brain.pdf}
\caption{\small \textbf{Average spatial patterns of artifact intensity for five high-noise sessions.} The bottom color scales indicate the subject-specific signal magnitudes. Note that the images are thresholded at the mean to aid visualization. The white rectangle on each sagittal view marks the spinal cord region, where high artifact intensity indicates head motion-related artifacts. The white circle on each sagittal view highlights the edge of the cerebellum, where high intensity may indicate pulsatile artifacts. \\}
\label{fig:brain}
\end{figure}
\section{Discussion and future work}
\label{sec:discussion}
We have proposed a robust non-parametric multivariate outlier detection method that is applicable to fMRI data, which exhibit markedly non-Gaussian and non-independent features. We have compared our proposed approach with an established robust but parametric outlier detection method based on the theoretical F distribution of RDs derived by \cite{hardin2005}. While that method is suitable for independent, identically distributed Gaussian data, it cannot be applied to fMRI data \citep{hardin2005,filzmoser2008, cerioli2010multivariate, mejia2017pca}. Violation of these assumptions causes the distribution of RDs to deviate from the theoretical distribution, which can lead to an increase in the false positive rate or a decrease in sensitivity to outliers, depending on the nature of the violations. Prior work has attempted to reduce the deviation between the theoretical and empirical distributions through ad hoc adjustments. Our proposed non-parametric approach instead uses univariate outlier imputation, combined with a bootstrap procedure, to estimate relevant summary statistics of the true distribution of RDs.
The proposed method also offers a statistically principled alternative to motion scrubbing for identifying artifactual volumes in fMRI data; motion scrubbing is known to result in high rates of volume censoring. Motion scrubbing often employs an ad-hoc but strict threshold value that can cause more than half of the volumes to be flagged as outliers, excluding potentially valuable data. Moreover, motion scrubbing can only detect artifacts coinciding with head motion and may therefore miss other types of artifacts. Our proposed method also improves upon existing data-driven scrubbing techniques such as projection scrubbing \citep{pham2023less} and DVARS \citep{smyser2010longitudinal, afyouni2018insight} by providing a formal, robust outlier detection framework that is appropriate for fMRI data.
Although the proposed method outperforms existing methods, it has several limitations. First, the univariate imputation uses only neighboring observations within each column of the data. To improve on this, a more general outlier imputation approach could borrow information from other columns, and incorporating multiple imputation into the bootstrap samples could also provide better results. Second, we apply a robust transformation to individual components to reduce skew prior to univariate outlier detection. However, components often exhibit other non-Gaussian features such as heavy tails. In the future, we plan to develop a more flexible robust transformation that is appropriate for a wider range of non-Gaussian distributions.
Additional validation studies are required to better understand the performance of our proposed method. First, while the proposed method performs well in simulated data, additional tests are still needed; for example, the method could be tested on non-normal and outlier-contaminated simulated data. Second, the method has been applied to resting-state fMRI data, which contain task-free signals; it could also be applied and tested on task fMRI data after regressing out the task-related signals. Third, we employed data from one subject with two separate imaging sessions that showed very similar artifact patterns. This could be studied in more subjects to explore whether there is recognizable inter-individual variability in the spatial patterns of artifacts in fMRI datasets. Analyzing more subjects would also allow us to better quantify the performance of our method compared with existing data-driven and motion-based scrubbing techniques. Fourth, identification of eye movement-related artifacts remains an important topic of ongoing research. Future work may apply our proposed approach to data where eye tracking is available, to assess whether it detects blinks and eye movements better than existing scrubbing techniques, since eye movements can reduce the quality of the acquired fMRI data.
\appendix
\renewcommand\thefigure{\thesection.\arabic{figure}}
\setcounter{figure}{0}
\setcounter{page}{1}
\section{Robust univariate outlier imputation algorithm}\label{app:ruoi}
\begin{algorithm}[H]
\DontPrintSemicolon
\SetAlgoLined
\textbf{Input}: ${\bf{X}}_{T \times K}$ (dimension reduced, detrended $\&$ selected fMRI data) \;
Initialize ${\bf{X}}^{0}_{T \times K}$ $\leftarrow$ ${\bf{X}}_{T \times K}$ \;
\For{k = 1,2,.., K}{
Let ${\bf{x}}_k$ be the $k$-th column vector of ${\bf{X}}$ \;
Robustly transform ${\bf{x}}_k$ to obtain ${\bf{x}}^{'}_k$
and compute $M_{k} = med({\bf{x}}^{'}_k)$\;
Calculate $\text{MAD}_k = med(|{\bf{x}}^{'}_k - M_{k}|)$ \;
Define $\mathcal{T}_k = \left\{ t : |x^{'}_{t,k} - M_{k}| > 4 \cdot (1.4826 \cdot \text{MAD}_k) \right\}$, the set of univariate outlier indices \;
\For{${t} \in \mathcal{T}_k$}{
${x}^{0}_{t,k} \leftarrow \text{mean} \left\{ {x}^{'}_{t-a,k}, {x}^{'}_{t+b,k} \right\}$ where $a = \min \{1,2,..., t-1 : t-a \notin \mathcal{T}_k \}$ and $b = \min \{1,2,..., T-t : t+b \notin \mathcal{T}_k \}$. If no such $a$ (or $b$) exists, the corresponding term is dropped from the mean.
}
}
\Return{${\bf{X}}^{0}_{T \times K}$} the imputed data matrix
\caption{Univariate Outlier Imputation}
\end{algorithm}
\end{document} |
\begin{document}
\author{Pablo Ramacher}
\email{[email protected]}
\address{Philipps-Universit\"at Marburg, FB 12 Mathematik und Informatik, Hans-Meerwein-Str., 35032 Marburg}
\title{Addendum to ``The equivariant spectral function of an invariant elliptic operator''}
\begin{abstract} Let $M$ be a compact boundaryless Riemannian manifold, carrying an effective and isometric action of a torus $T$, and $P_0$ an invariant elliptic classical pseudodifferential operator on $M$. In this note, we strengthen the asymptotics for the equivariant (or reduced) spectral function of $P_0$ derived in \cite{ramacher16}, which are already sharp in the eigenvalue aspect, to become almost sharp in the isotypic aspect. In particular, this leads to hybrid equivariant ${\rm L}^p$-bounds for eigenfunctions that are almost sharp in the eigenvalue and isotypic aspect.
\end{abstract}
\title{Addendum to ``The equivariant spectral function of an invariant elliptic operator''}
\setcounter{tocdepth}{1}
\tableofcontents{}
\section{Introduction}
Let $M$ be a closed $n$-dimensional Riemannian manifold with an effective and isometric action of a compact Lie group $G$. In this paper, we strengthen the asymptotics derived in \cite{ramacher16} for the equivariant (or reduced) spectral function of an invariant elliptic operator on $M$, which are already sharp in the eigenvalue aspect, to become almost sharp also in the isotypic aspect in case that $G=T$ is a torus, that is, a compact connected Abelian Lie group. In particular, if $T$ acts on $M$ with orbits of the same dimension, we obtain hybrid equivariant ${\rm L}^p$-bounds for eigenfunctions that are almost sharp up to a logarithmic factor.
To explain our results, consider an elliptic classical pseudodifferential operator
\begin{equation}n
P_0:{\mathbb C}inft(M) \, \longrightarrow \, {\rm L}^2(M)
\end{equation}n
of degree $m$ on $M$ acting on the Hilbert space of square integrable functions on $M$ with the space of smooth functions on $M$ as domain. We assume that $P_0$ is positive and symmetric, so that it has a unique self-adjoint extension $P$, which has discrete spectrum. Let $\mklm{E_\lambda}$ be a spectral resolution of $P$, and denote by $e(x,y,\lambda)$ the \emph{spectral function} of $P$ which is given by the Schwartz kernel of $E_\lambda$.
Further, assume that $M$ carries an effective and isometric action of a compact Lie group $G$ with Lie algebra ${\bf \mathfrak g}$ and orbits of dimension less or equal $n-1$.
Suppose that $P$ commutes with the {left-regular representation} $({\bf \mathfrak p}i,{\rm L}^2(M))$ of $G$ so that each eigenspace of $P$ becomes a unitary $G$-module. If $\widehat G$ denotes the set of equivalence classes of irreducible unitary representations of $G$, the Peter-Weyl theorem asserts that
\begin{equation}
\label{eq:PW}
{\rm L}^2(M)=\bigoplus_{{\bf \mathfrak g}amma \in \widehat G} {\rm L}^2_{\bf \mathfrak g}amma(M),
\end{equation}
a Hilbert sum decomposition, where ${\rm L}^2_{\bf \mathfrak g}amma(M):={\mathcal P}i_{\bf \mathfrak g}amma {\rm L}^2(M)$ denotes the ${\bf \mathfrak g}amma$-isotypic component, and ${\mathcal P}i_{\bf \mathfrak g}amma$ the corresponding projection. Let $e_{\bf \mathfrak g}amma(x,y,\lambda)$ be the spectral function of the operator $P_{\bf \mathfrak g}amma:={\mathcal P}i_{\bf \mathfrak g}amma \circ P\circ {\mathcal P}i_{\bf \mathfrak g}amma$, which is also called the \emph{reduced spectral function} of $P$. Further, let ${\mathcal J }bb:T^\ast M {\bf \mathfrak t}o {\bf \mathfrak g}^\ast$ denote the momentum map of the Hamiltonian $G$-action on $T^\ast M$, induced by the action of $G$ on $M$, and write ${\mathcal O}mega:={\mathcal J }bb^{-1}(\mklm{0})$. In \cite[Theorem 4.3]{ramacher16}, the \emph{equivariant local Weyl law}
\begin{equation}n
\left |e_{\bf \mathfrak g}amma(x,x,\lambda)- \lambda^{\frac{n-{\bf \mathfrak k}appa_x}{m}} \frac{d_{\bf \mathfrak g}amma [{\bf \mathfrak p}i_{{\bf \mathfrak g}amma|G_x}:{\bf 1}]}{(2{\bf \mathfrak p}i)^{n-{\bf \mathfrak k}appa_x}} \int_{\{ (x,\xi) \in {\mathcal O}mega, \, p(x,\xi)< 1\}} \frac{ \,d \xi}{{\bf \mathfrak t}ext{vol}\, {\mathcal O}_{(x,\xi)}} \right | \leq C_{x,{\bf \mathfrak g}amma} \, \lambda^{\frac{n-{\bf \mathfrak k}appa_x-1}{m}}, \quad x \in M,
\end{equation}n
was shown as $\lambda {\bf \mathfrak t}o +\infty $, where ${\bf \mathfrak k}appa_x:=\,dim {\mathcal O}_x$ is the dimension of the $G$-orbit through $x$, $d_{\bf \mathfrak g}amma$ denotes the dimension of an irreducible $G$-representation ${\bf \mathfrak p}i_{\bf \mathfrak g}amma$ belonging to ${\bf \mathfrak g}amma$ and $ [{\bf \mathfrak p}i_{{\bf \mathfrak g}amma|G_x}:{\bf 1}]$ the multiplicity of the trivial representation in the restriction of ${\bf \mathfrak p}i_{\bf \mathfrak g}amma$ to the isotropy group $G_x$ of $x$, while $C_{x,{\bf \mathfrak g}amma}>0$ is a constant satisfying
\begin{equation}
\label{eq:25.5.2018a}
C_{x,{\bf \mathfrak g}amma} =O_x\big ( d_{\bf \mathfrak g}amma \sup_{l \leq \lfloor {\bf \mathfrak k}appa_x/2+3 \rfloor} \norm{{\mathcal D}^l {\bf \mathfrak g}amma}_\infty\big ),
\end{equation}
and $D^l$ are differential operators on $G$ of order $l$. Both the leading term and the constant $C_{x,{\bf \mathfrak g}amma}$ in general depend in a highly non-uniform way on $x\in M$, exhibiting a caustic behaviour in the neighborhood of singular orbits.
A precise description of this caustic behaviour was achieved in \cite{ramacher16} by relying on the results \cite{ramacher10} on singular equivariant asymptotics obtained via resolution of singularities. More precisely, consider the stratification $M=M(H_1) \, \,dot \cup \,dots \,dot \cup \, M(H_L)$ of $M$ into orbit types, arranged in such a way that
$(H_i) \leq (H_j)$ implies $i {\bf \mathfrak g}eq j$, and let ${\rm L}ambda$ be the maximal length that a maximal totally ordered subset of isotropy types can have. Write $M_\mathrm{prin}:=M(H_L)$, $M_\mathrm{except}$, and $M_\mathrm{sing}$ for the union of all orbits of principal, exceptional, and singular type, respectively, so that
\begin{equation}n
M= M_\mathrm{prin}\, \,dot \cup \, M_\mathrm{except}\, \,dot \cup \, M_\mathrm{sing},
\end{equation}n
and denote by ${\bf \mathfrak k}appa:=\,dim G/H_L$ the dimension of an orbit of principal type. Then, by \cite[Theorem 7.7]{ramacher16} one has for $x \in M_\mathrm{prin}\cup M_\mathrm{except}$ and $\lambda {\bf \mathfrak t}o +\infty $ the \emph{singular equivariant local Weyl law}
\begin{align*}
\begin{split}
{\mathcal B}ig |e_{\bf \mathfrak g}amma(x,x,\lambda)&- \frac{d_{\bf \mathfrak g}amma \lambda^{\frac{n-{\bf \mathfrak k}appa}{m}}}{(2{\bf \mathfrak p}i)^{n-{\bf \mathfrak k}appa}} \sum_{N=1}^{{\rm L}ambda-1} \, \sum_{{i_1<\,dots< i_{N} }} \, {\bf \mathfrak p}rod_{l=1}^{N} |{\bf \mathfrak t}au_{i_l}|^{\,dim G- \,dim H_{i_l}-{\bf \mathfrak k}appa} \mathcal{L}^{0,0}_{i_1\,dots i_{N} }(x,{\bf \mathfrak g}amma) {\mathcal B}ig |\\
&\leq \widetilde C_{\bf \mathfrak g}amma \lambda^{\frac{n-{\bf \mathfrak k}appa-1}m} \sum_{N=1}^{{\rm L}ambda-1}\, \sum_{{i_1<\,dots< i_{N}}} {\bf \mathfrak p}rod_{l=1}^N |{\bf \mathfrak t}au_{i_l}|^{\,dim G- \,dim H_{i_l}-{\bf \mathfrak k}appa-1},
\end{split}
\end{align*}
where the multiple sums run over all possible maximal totally ordered subsets $\mklm{(H_{i_1}),\,dots, (H_{i_N})}$ of singular isotropy types, the coefficients $\mathcal{L}^{0,0}_{i_1\,dots i_{N}}$ are explicitly given and bounded functions in $x$, and ${\bf \mathfrak t}au_{i_j} ={\bf \mathfrak t}au_{i_j}(x)\in (-1,1)$ are desingularization parameters that arise in the resolution process satisfying $|{\bf \mathfrak t}au_{i_j}|\approx {\bf \mathfrak t}ext{dist}\, (x, M(H_{i_j}))$, while $\widetilde C_{\bf \mathfrak g}amma>0$ is a constant independent of $x$ and $\lambda$ that fulfills
\begin{equation}
\label{eq:25.5.2018b}
\widetilde C_{{\bf \mathfrak g}amma} =O\big ( d_{\bf \mathfrak g}amma \sup_{l \leq \lfloor {\bf \mathfrak k}appa/2+3 \rfloor} \norm{{\mathcal D}^l {\bf \mathfrak g}amma}_\infty\big ).
\end{equation}
As a major consequence, the above expansions lead to equivariant bounds for eigenfunctions. In the non-singular case, that is, when only principal and exceptional orbits are present, and consequently all $G$-orbits have the same dimension ${\bf \mathfrak k}appa$, the hybrid ${\rm L}^q$-estimates
\begin{equation}
\label{eq:Lqbound}
\norm{u}_{{\rm L}^q(M)} \leq \begin{cases} C_{\bf \mathfrak g}amma \, \lambda^{\frac{\,delta_{n-{\bf \mathfrak k}appa}(q)}{m}} \norm{u}_{{\rm L}^2}, & \frac{2(n-{\bf \mathfrak k}appa+1)}{n-{\bf \mathfrak k}appa-1} \leq q \leq \infty,
\\ C_{\bf \mathfrak g}amma \, \lambda^{\frac{(n-{\bf \mathfrak k}appa-1)(2-q')}{4m q'}} \norm{u}_{{\rm L}^2}, & 2 \leq q \leq \frac{2(n-{\bf \mathfrak k}appa+1)}{n-{\bf \mathfrak k}appa-1}, \end{cases}
\end{equation}
were shown in \cite[Theorem 5.4]{ramacher16} for any eigenfunction $u \in {\rm L}^2_{\bf \mathfrak g}amma(M)$ of $P$ with eigenvalue $\lambda$, where $\frac 1q+\frac 1{q'}=1$, $\,delta_n(p):=\max \left ( n \left |1/ 2 - 1/p \right | -1/2,0 \right )$, and $C_{{\bf \mathfrak g}amma}>0$ is a constant independent of $\lambda$ satisfying the estimate
\begin{equation}
\label{eq:24.07.2017}
C_{\bf \mathfrak g}amma \ll \sqrt{d_{\bf \mathfrak g}amma \sup_{l \leq \lfloor {\bf \mathfrak k}appa/2+1\rfloor} \norm{D^l{\bf \mathfrak g}amma}_\infty},
\end{equation}
provided that the co-spheres $S_x^\ast M$ are strictly convex. Note that for the proof of ${\rm L}^p$-bounds it is necessary to describe the caustic behaviour of the relevant spectral kernels as $\mu{\bf \mathfrak t}o +\infty $ in a neighborhood of the diagonal, which makes things considerably more envolved.
In case that singular orbits are present, one has the pointwise bound
\begin{equation}
\label{eq:4.12.2015}
\sum_{\stackrel{\lambda_j \in (\lambda,\lambda+1],}{ e_j \in {\rm L}^2_{\bf \mathfrak g}amma(M)}} |e_j(x)|^2 \leq \begin{cases} C \, \lambda^{\frac{n-1}m}, & \hspace{-.0cm} x\in M_\mathrm{sing}, \\
& \\
\widetilde C_{\bf \mathfrak g}amma \, \lambda^{\frac{n-{\bf \mathfrak k}appa-1}m} \sum\limits_{N=1}^{{\rm L}ambda-1}\, \sum\limits_{{i_1<\,dots< i_{N}}} {\bf \mathfrak p}rod\limits_{l=1}^N |{\bf \mathfrak t}au_{i_l}|^{\,dim G- \,dim H_{i_l}-{\bf \mathfrak k}appa-1}, & x\in M- M_\mathrm{sing}, \end{cases}
\end{equation}
for a constant $C>0$ independent of ${\bf \mathfrak g}amma$, where $\mklm{e_j}_{j {\bf \mathfrak g}eq 0}$ is an orthonormal basis of ${\rm L}^2(M)$ compatible with the decomposition \end{equation}ref{eq:PW},
showing that eigenfunctions tend to concentrate along lower dimensional orbits.
The aim of this note is to sharpen the above results in the isotypic aspect in case that $G=T$ is a torus, and show that instead of the bounds \end{equation}ref{eq:25.5.2018a} and \end{equation}ref{eq:25.5.2018b} one has the better estimates
\begin{equation}n
C_{x,{\bf \mathfrak g}amma}=O_x{\mathcal B}ig (\sup_{l \leq 1 }\norm{D^l {\bf \mathfrak g}amma}_\infty{\mathcal B}ig ), \qquad \widetilde C_{{\bf \mathfrak g}amma}=O{\mathcal B}ig (\sup_{l \leq 1 }\norm{D^l {\bf \mathfrak g}amma}_\infty{\mathcal B}ig ), \qquad {\bf \mathfrak g}amma \in {\mathcal W}_\lambda,
\end{equation}n
where ${\mathcal W}_\lambda$ denotes the set of representations
\begin{equation}n
{\mathcal W}_\lambda:=\mklm{{\bf \mathfrak g}amma \in \widehat T' \mid |{\bf \mathfrak g}amma | \leq \frac{\lambda^{1/m}}{\log \lambda}}.
\end{equation}n
Here $\widehat T'\subset \widehat T$ stands for the subset of representations occurring in the Peter-Weyl decomposition \end{equation}ref{eq:PW}, and we denoted the differential of a character ${\bf \mathfrak g}amma\in \widehat T$, which corresponds to an integral linear form ${\bf \mathfrak g}amma:{\bf \mathfrak t} \rightarrow i{\mathbb R}$, by the same letter.
Similarly, it will be shown that the constant $C_{\bf \mathfrak g}amma$ in \end{equation}ref{eq:24.07.2017} actually satisfies the bound
\begin{equation}n
C_{\bf \mathfrak g}amma \ll 1, \qquad {\bf \mathfrak g}amma \in {\mathcal W}_\lambda.
\end{equation}n
By the equivariant Weyl law \cite{ramacher10} and Gauss' law, $|{\bf \mathfrak g}amma|$ can grow at most of rate $\lambda^{1/m}$.
Thus, the bounds \end{equation}ref{eq:Lqbound} hold for \emph{almost any} eigenfunction $u\in {\rm L}^2(M)$ with $C_{\bf \mathfrak g}amma$ independent of ${\bf \mathfrak g}amma$, which is consistent with recent results of Tacy \cite{tacy18}.
As will be discussed, the improved bounds are almost sharp in this sense, being already attained for $\mathrm{SO}(2)$-actions on the $2$-sphere and the $2$-torus. For their proof, a careful examination of the remainder in the stationary phase expansion of the relevant spectral kernels is necessary. These bounds are crucial for deriving hybrid subconvex bounds for Hecke-Maass forms on compact arithmetic quotients of semisimple Lie groups in the eigenvalue and isotypic aspect \cite{ramacher-wakatsuki17}.
Through the whole document, the notation $O(\mu^{k}), k \in {\mathbb R} \cup \mklm{{\bf \mathfrak p}m \infty},$ will mean an upper bound of the form $C \mu^k$ with a constant $C>0$ that is uniform in all relevant variables, while $O_\aleph(\mu^{k})$ will denote an upper bound of the form $C_\aleph \, \mu^k$ with a constant $C_\aleph> 0$ that depends on the indicated variable $\aleph$. In the same way, we shall write $a\ll_\aleph b$ for two real numbers $a$ and $b$, if there exists a constant $C_\aleph>0$ depending only on $\aleph$ such that $|a| \leq C_\aleph b$, and similarly $a \ll b$, if the bound is uniform in all relevant variables. Finally, ${\mathbb N}$ will denote the set of natural numbers $0,1,2,3,\,dots$. \\
\section{The reduced spectral function of an invariant elliptic operator}
\label{sec:RSF}
Let $M$ be a closed connected Riemannian manifold of dimension $n$ with Riemannian volume density $dM$, and $P_0$ an elliptic classical pseudodifferential operator on $M$
of degree $m$ which is positive and symmetric. The principal symbol $p(x,\xi)$ of $P_0$ constitutes a strictly positive function on $T^\ast M\setminus\mklm{0}$, where $T^\ast M$ denotes the cotangent bundle of $M$. The operator $P_0$ has a unique self-adjoint extension $P$, its domain being the $m$-th Sobolev space $H^m(M)$. It is well known that there exists an orthonormal basis $\mklm{e_j}_{j{\bf \mathfrak g}eq 0}$ of ${\rm L}^2(M)$ consisting of eigenfunctions of $P$ with eigenvalues $\mklm{\lambda_j}_{j {\bf \mathfrak g}eq 0}$ repeated according to their multiplicity, and that $Q:=\sqrt[m]{P}$ constitutes a classical pseudodifferential operator of order $1$ with principal symbol $q(x,\xi):=\sqrt[m]{p(x,\xi)}$ and domain $H^1(M)$. Again, $Q$ has discrete spectrum, and its eigenvalues are given by $\mu_j:=\sqrt[m]{\lambda_j}$. The spectral function $e(x,y,\lambda)$ of $P$ can then be described by studying the spectral function of $Q$, which in terms of the basis $\mklm{e_j}$ is given by
\begin{equation}n
e(x,y,\mu):=\sum_{\mu_j\leq \mu} e_j(x) \overline{e_j(y)}, \qquad \mu\in {\mathbb R},
\end{equation}n
and belongs to ${\mathbb C}inft(M {\bf \mathfrak t}imes M)$ as a function of $x$ and $y$. Let $\chi_\mu$ be the spectral projection onto the sum of eigenspaces of $Q$ with eigenvalues in the interval $(\mu, \mu+1]$, and denote its Schwartz kernel by $\chi_\mu(x,y):=e(x,y,\mu+1) - e(x,y,\mu)$. To obtain an asymptotic description of the spectral function of $Q$ let $\varrho \in {\mathcal S}({\mathbb R},{\mathbb R}_+)$ be such that $\varrho(0)=1$ and $\supp \hat \varrho\in (-\,delta/2,\,delta/2)$ for a given $\,delta>0$, and define the {approximate spectral projection operator}
\begin{equation}
\label{eq:2.1}
\widetilde \chi_\mu u := \sum_{j=0}^\infty \varrho(\mu-\mu_j) E_{j} u, \qquad u \in {\rm L}^2(M),
\end{equation}
where $E_j$ denotes the orthogonal projection onto the subspace spanned by $e_j$. Clearly, $K_{\widetilde \chi_\mu}(x,y):=\sum_{j=0}^\infty \varrho(\mu-\mu_j) e_j(x) \overline{e_j(y)}\in {\mathbb C}inft(M{\bf \mathfrak t}imes M)$ constitutes the kernel of $\widetilde \chi_\mu$.
As H\"ormander \cite{hoermander68} showed, $\widetilde \chi_\mu$ can be approximated by Fourier integral operators yielding an asymptotic formula for the kernels of $\widetilde \chi_\mu$ and $\chi_\mu$, and finally for the spectral function of $Q$ and $P$.
Now, assume that $M$ carries an effective and isometric action of a compact Lie group $G$. Let $P$ commute with the left-regular representation $({\bf \mathfrak p}i,{\rm L}^2(M))$ of $G$. Consider the Peter-Weyl decomposition of ${\rm L}^2(M)$, and let ${\mathcal P}i_{\bf \mathfrak g}amma$ be the projection onto the isotypic component belonging to ${\bf \mathfrak g}amma \in \widehat G$, which is given by the Bochner integral
\begin{equation}n
{\mathcal P}i_{\bf \mathfrak g}amma=d_{\bf \mathfrak g}amma \intop_G \overline{{\bf \mathfrak g}amma(g)} {\bf \mathfrak p}i(g) \,d_G(g),
\end{equation}n
where $d_{\bf \mathfrak g}amma$ is the dimension of an unitary irreducible representation of class ${\bf \mathfrak g}amma$, and $d_G(g) \end{equation}uiv dg$ Haar measure on $G$, which we assume to be normalized such that ${\bf \mathfrak t}ext{vol}\, G=1$. If $G$ is finite, $d_G$ is simply the counting measure. In addition, let us suppose that the orthonormal basis $\mklm{e_j}_{j{\bf \mathfrak g}eq 0}$ is compatible with the Peter-Weyl decomposition in the sense that each vector $e_j$ is contained in some isotypic component ${\rm L}^2_{\bf \mathfrak g}amma(M)$. In order to describe the spectral function of the operator $Q_{\bf \mathfrak g}amma:={\mathcal P}i_{\bf \mathfrak g}amma \circ Q\circ {\mathcal P}i_{\bf \mathfrak g}amma=Q\circ {\mathcal P}i_{\bf \mathfrak g}amma={\mathcal P}i_{\bf \mathfrak g}amma \circ Q$ given by
\begin{equation}
\label{eq:24.09.2015}
e_{\bf \mathfrak g}amma (x,y,\mu):=\sum_{\mu_j\leq \mu,\, e_j \in {\rm L}^2_{\bf \mathfrak g}amma(M)} e_j(x) \overline{e_j(y)},
\end{equation}
we consider the composition $ \chi_\mu\circ {\mathcal P}i_{\bf \mathfrak g}amma$ with kernel
$
K_{\chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma}(x,y)=e_{\bf \mathfrak g}amma(x,y,\lambda+1)-e_{\bf \mathfrak g}amma(x,y,\lambda)
$,
together with the corresponding equivariant approximate spectral projection
\begin{align}
\label{eq:1004}
(\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma) u = \sum_{j{\bf \mathfrak g}eq 0,\, e_j \in {\rm L}^2_{\bf \mathfrak g}amma(M)} \varrho(\mu-\mu_j) E_{j} u.
\end{align}
Its kernel can be written as
\begin{equation}n
K_{\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma}(x,y):=\sum_{j{\bf \mathfrak g}eq 0, e_j \in {\rm L}^2_{\bf \mathfrak g}amma(M)} \varrho(\mu-\mu_j) e_j(x) \overline{e_j(y)}\in {\mathbb C}inft(M{\bf \mathfrak t}imes M).
\end{equation}n
By using Fourier integral operator methods, it was shown in \cite{ramacher16} that the kernel of $\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma$ can be expressed as follows. Let $\mklm{({\bf \mathfrak k}appa_\iota, Y_\iota)}_{\iota \in I}$, ${\bf \mathfrak k}appa_\iota:Y_\iota \stackrel{\simeq}{\bf \mathfrak t}o \widetilde Y_\iota \subset {\mathbb R}^n$, be an atlas for $M$, $\mklm{f_\iota}$ a corresponding partition of unity, and $\mklm{\bar f_\iota}$ a set of test functions with compact support in $Y_\iota$ satisfying $\bar f_\iota \end{equation}uiv 1$ on $\supp f_\iota$.
Consider further a test function $0 \leq \alpha \in {\mathbb C}T(1/2, 3/2)$ such that $\alpha \end{equation}uiv 1$ in a neighborhood of $1$, and set
\begin{align}
\begin{split}
\label{eq:02.05.2015}
I^{\bf \mathfrak g}amma_\iota(\mu, R, s, x,y):= & \int _G \int_{{\mathcal S}igma^{R,s}_{\iota,x}} e^{i{ \mu} {\mathcal P}hi_{\iota,x,y}(\omega,g)} \hat \varrho(s) \overline{{\bf \mathfrak g}amma(g)} f_\iota( x) \\ &\cdot a_\iota(s, {\bf \mathfrak k}appa_\iota( x) , \mu \omega) \bar f _\iota (g \cdot y) \alpha(q( x, \omega)) J_\iota(g,y) {\,d{\mathcal S}igma^{R,s}_{\iota,x}(\omega) \,d g},
\end{split}
\end{align}
where ${\mathcal P}hi_{\iota,x,y}(\omega,g):=\eklm{{\bf \mathfrak k}appa_\iota( x) - {\bf \mathfrak k}appa_\iota(g \cdot y),\omega}$, $a_\iota \in S^0_{\mathrm{phg}}$ is a suitable classical polyhomogeneous symbol satisfying $a_\iota(0,{\bf \mathfrak t}ilde x, \eta)=1$, $J_\iota(g,y)$ a Jacobian, and
\begin{equation}
\label{eq:20.04.2015}
{\mathcal S}igma^{R,s}_{\iota,x}:=\mklm{\omega \in {\mathbb R}^n \mid \zeta_\iota (s, {\bf \mathfrak k}appa_\iota (x),\omega) = R}
\end{equation}
is a smooth compact hypersurface given in terms of a smooth function $\zeta_\iota$ which is homogeneous in $\eta$ of degree $1$ and satisfies $\zeta_\iota(0, {\bf \mathfrak t}ilde x, \eta) = q({\bf \mathfrak k}appa_\iota^{-1}({\bf \mathfrak t}ilde x), \eta)$.
Then, by \cite[Corollary 2.2]{ramacher16} one has for $\mu {\bf \mathfrak g}eq 1$, $x,y \in M$, and each ${\bf \mathfrak t}ilde N \in {\mathbb N}$ the asymptotic expansion
\begin{align}
\label{eq:13.06.2016}
K_{\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma}(x,y)
= &{\mathcal B}ig(\frac{\mu}{2{\bf \mathfrak p}i}{\mathcal B}ig )^{n-1} \frac{d_{\bf \mathfrak g}amma}{2{\bf \mathfrak p}i} \sum _\iota {\mathcal B}ig [ \sum_{j=0}^{{\bf \mathfrak t}ilde N-1} D^{2j}_{R,s} I^{\bf \mathfrak g}amma_\iota(\mu, R, s,x,y)_{|(R,s)=(1,0)} \, \mu^{-j} + \mathcal{R}^{\bf \mathfrak g}amma_{\iota}(\mu,x,y) {\mathcal B}ig ]
\end{align}
up to terms
of order $O(|\mu|^{-\infty}\norm{{\bf \mathfrak g}amma}_\infty)$ which are uniform in $x,y$, where $D^{2j}_{R,s}$ are known differential operators of order $2j$ in $R,s$, and
\begin{align*}
|\mathcal{R}^{\bf \mathfrak g}amma_{\iota}(\mu,x,y)| \leq& C\mu^{-{\bf \mathfrak t}ilde N} \sum_{|\beta| \leq 2{\bf \mathfrak t}ilde N +3} \sup_{R,s} \big |{\bf \mathfrak g}d_{R,s}^\beta I^{\bf \mathfrak g}amma_\iota(\mu,R,s,x,y) \big |
\end{align*}
for some constant $C>0$. On the other hand, $K_{\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma}(x,y)$ is rapidly decaying as $\mu {\bf \mathfrak t}o -\infty$ and uniformly bounded in $x,y$ by $\norm{{\bf \mathfrak g}amma}_\infty$.
\section{Equivariant asymptotics of oscillatory integrals}
Let the notation be as in the previous section. As we have seen there, the question of describing the spectral function in the equivariant setting reduces to the study of oscillatory integrals of the form
\begin{equation}
\label{eq:03.05.2015}
I^{\bf \mathfrak g}amma_{x,y}(\mu):=\int_{G}\int_{{\mathcal S}igma^{R,s}_x} e^{i\mu {\mathcal P}hi_{x,y}(\omega,g)} \overline{{\bf \mathfrak g}amma(g)} a (x,y, \omega,g) \,d {\mathcal S}igma^{R,s}_x (\omega) \,d g, \qquad \mu {\bf \mathfrak t}o + \infty,
\end{equation}
with ${\mathcal S}igma^{R,s}_x$ as in \end{equation}ref{eq:20.04.2015} and phase function
\begin{equation}n
{\mathcal P}hi_{x,y}(\omega,g):= \eklm{{\bf \mathfrak k}appa(x) - {\bf \mathfrak k}appa( g\cdot y), \omega},
\end{equation}n
where we have skipped the index $\iota$ for simplicity of notation, and $a \in {\mathbb C}T$ is an amplitude that might depend on $\mu$ and other parameters such that $(x,y,\omega, g) \in \supp a$ implies $x, g \cdot y \in Y$. In what follows, we shall write $^yG:=\mklm{g \in G \mid g\cdot y \in Y}$, as well as
\begin{equation}
\label{eq:11.9.2017}
I^{\bf \mathfrak g}amma_x(\mu) :=I^{\bf \mathfrak g}amma_{x,x}(\mu), \qquad {\mathcal P}hi_x:={\mathcal P}hi_{x,x}.
\end{equation}
Let us assume in the following that $G$ is a continuous group, and write ${\bf \mathfrak k}appa(x)=({\bf \mathfrak t}ilde x_1, \,dots, {\bf \mathfrak t}ilde x_n)$ so that the canonical local trivialization of $T^\ast Y$ reads
\begin{equation}n
Y {\bf \mathfrak t}imes {\mathbb R}^n \, \ni (x,\eta) \quad \end{equation}uiv \quad \sum_{k=1}^n \eta_k (d{\bf \mathfrak t}ilde x_k)_{x} \in \, T^\ast_xY.
\end{equation}n
With respect to this trivialization, we shall identify ${\mathcal S}igma^{R,s}_{x'}$ with a subset in $T^\ast_{x} Y$ for eventually different $x$ and $x'$, if convenient.
Let ${\mathcal O}mega:={\mathcal J }bb^{-1}(\mklm{0})$ be the zero level set of the momentum map ${\mathcal J }bb: T^\ast M {\bf \mathfrak t}o {\bf \mathfrak g}^\ast$ of the underlying Hamiltonian $G$-action on $T^\ast M$.
Let ${\mathcal O}_x:=G\cdot x$ denote the $G$-orbit and $G_x:=\mklm{g \in G\mid g\cdot x=x}$ the stabilizer or isotropy group of a point $x\in M$. Throughout the paper, it is assumed that
\begin{equation}n
\,dim {\mathcal O}_x \leq n-1 \qquad {\bf \mathfrak t}ext{for all } x \in M.
\end{equation}n
Let further $N_y{\mathcal O}_x$ be the normal space to the orbit ${\mathcal O}_x$ at a point $y \in {\mathcal O}_x$, which can be identified with $\mathrm{Ann}(T_y{\mathcal O}_x)$ via the underlying Riemannian metric. For $x\in Y$ and ${\mathcal O}_y \cap Y \not=\emptyset$ let
\begin{equation}n
{\mathbb C}rit \, {\mathcal P}hi_{x,y}:={\mathcal B}ig \{(\omega,g) \in {\mathcal S}igma^{R,s}_x {\bf \mathfrak t}imes \, ^y G\mid \, d({\mathcal P}hi_{x,y})_{(\omega,g)}=0{\mathcal B}ig \}
\end{equation}n
be the critical set of ${\mathcal P}hi_{x,y}$. With $M_{\bf \mathfrak t}ext{prin}$, $M_{\bf \mathfrak t}ext{except}$, and $M_{\bf \mathfrak t}ext{sing}$ denoting the principal, exceptional, and singular stratum, respectively, it was shown in \cite[Lemma 3.1]{ramacher16} that
\begin{itemize}
\item if $y\in \mathcal{O}_x$, the set ${\mathbb C}rit \, {\mathcal P}hi_{x,y}$ is clean and given by the smooth submanifold
\begin{equation}n
{\mathcal J }=\big \{(\omega,g)\mid (g \cdot y, \omega) \in {\mathcal O}mega, \, x=g\cdot y\big \}= V_{\mathcal J } {\bf \mathfrak t}imes G_{\mathcal J }
\end{equation}n
of codimension $2\,dim {\mathcal O}_x$, with $V_{\mathcal J }={\mathcal S}igma^{R,s}_x \cap N_x{\mathcal O}_x$ and $G_{\mathcal J }=\mklm{g \in G \mid x=g\cdot y}\subset \, ^y G$.
\item if $y \not\in \mathcal{O}_x$,
$${\mathbb C}rit \, {\mathcal P}hi_{x,y}={\mathcal B}ig \{(\omega,g)\mid (g \cdot y,\omega) \in {\mathcal O}mega, \, {\bf \mathfrak k}appa(x)-{\bf \mathfrak k}appa(g \cdot y) \in N_\omega {\mathcal S}igma^{R,s}_x{\mathcal B}ig \};$$
furthermore, assume that $G$ acts on $M$ with orbits of the same dimension ${\bf \mathfrak k}appa$, that is, $M=M_\mathrm{prin}\, \cup\, M_\mathrm{except}$, and that the co-spheres $S_x^\ast M$ are strictly convex. Then, either ${\mathbb C}rit \, {\mathcal P}hi_{x,y}$ is empty, or, choosing $Y$ sufficiently small, ${\mathbb C}rit \, {\mathcal P}hi_{x,y}$ is clean and of codimension $n-1+{\bf \mathfrak k}appa$, its finitely many connected components being of the form
\begin{equation}n
{\mathcal J }=V_{\mathcal J } {\bf \mathfrak t}imes G_{\mathcal J }
\end{equation}n
with $V_{\mathcal J }= \mklm{\omega_{\mathcal J }}$ and $G_{\mathcal J } = g_{\mathcal J } \cdot G_y\subset \, ^y G$ for some $\omega_{\mathcal J }\in {\mathcal S}igma^{R,s}_x$ and $g_{\mathcal J } \in G$.
\end{itemize}
From this an asymptotic expansion for the integrals $I^{\bf \mathfrak g}amma_{x,y}(\mu)$ was deduced in \cite[Theorem 3.3]{ramacher16}, yielding a corresponding asymptotic formula for $K_{\widetilde \chi_\mu \circ {\mathcal P}i_{\bf \mathfrak g}amma}(x,y)$. In this paper, we improve the estimate for the remainder in the isotypic aspect in case that $G=T$ is a torus, which we assume from now on.
For this, recall that the exponential function $\exp$ is a covering homomorphism of ${\bf \mathfrak t}$ onto $T$, and its kernel $L$ a lattice in ${\bf \mathfrak t}$. Let $\widehat T$ denote the \emph{set of characters of $T$}, that is, of all continuous homomorphisms of $T$ into the circle, which we identify with the unitary dual of $T$. The differential of a character ${\bf \mathfrak g}amma: T {\bf \mathfrak t}o S^1$, denoted by the same letter, is a linear form ${\bf \mathfrak g}amma:{\bf \mathfrak t}{\bf \mathfrak t}o i {\mathbb R}$ which is \emph{integral} in the sense that ${\bf \mathfrak g}amma(L) \subset 2{\bf \mathfrak p}i i \, {\mathbb Z}$. On the other hand, if ${\bf \mathfrak g}amma$ is an integral linear form, one defines
\begin{equation}n
t^{\bf \mathfrak g}amma= e^{{\bf \mathfrak g}amma(X)}, \qquad t= \exp X \in T,
\end{equation}n
setting up an identification of $\widehat T$ with the integral linear forms on ${\bf \mathfrak t}$ via ${\bf \mathfrak g}amma(t)\end{equation}uiv t^{\bf \mathfrak g}amma$. Further, all irreducible representations of $T$ are $1$-dimensional. We now make the following
\begin{definition}
Denote by $\widehat T'\subset \widehat T$ the subset of representations occurring in the decomposition \end{equation}ref{eq:PW} of ${\rm L}^2(M)$, and let $\mklm{\mathcal{V}_\mu}_{\mu \in (0,\infty)}$ be a family of finite subsets $\mathcal{V}_\mu\subset \widehat T'$ such that
\begin{equation}n
\max_{{\bf \mathfrak g}amma \in \mathcal{V}_\mu} |{\bf \mathfrak g}amma| \leq C \frac\mu{\log \mu}
\end{equation}n
for a constant $C>0$ independent of $\mu$.
\end{definition}
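To fix ideas, consider the simplest case $T={\mathbb R}/2\pi{\mathbb Z}$, where $\mathfrak{t}={\mathbb R}$ and $L=2\pi {\mathbb Z}$: the characters are given by $\gamma_m(\exp X)=e^{imX}$ with $m \in {\mathbb Z}$, the corresponding integral linear forms read $\gamma_m(X)=imX$ and satisfy $\gamma_m(L)=2\pi i m\,{\mathbb Z}\subset 2\pi i\,{\mathbb Z}$, and an admissible family in the sense of the above definition is given, for instance, by $\mathcal{V}_\mu=\{\gamma_m \in \widehat T'\mid |m|\leq \mu/\log \mu\}$.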
Our main result is the following improvement of the remainder and coefficient estimates in \cite[Theorem 3.3]{ramacher16}.
\begin{thm}
\label{thm:12.05.2015}
Assume that $T$ is a torus acting on $M$ with orbits of dimension less than or equal to $n-1$, and let $\mathcal{V}_\mu$ be as in the previous definition.
\begin{enumerate}
\item[(a)] Let $y \in \mathcal{O}_x$. Then, for every $\gamma \in \widehat T$ and $\tilde N=0,1,2,\dots$ one has the asymptotic formula
\begin{equation*}
I^\gamma_{x,y}(\mu)=(2\pi/\mu)^{\dim \mathcal{O}_x} \left [\sum_{k=0}^{\tilde N-1} \mathcal{Q}_{k}(x,y) \mu^{-k} +\mathcal{R}_{\tilde N}(x,y,\mu)\right ], \qquad \mu \to +\infty,
\end{equation*}
where the coefficients and the remainder depend smoothly on $R$ and $s$. The coefficients satisfy the bounds
\begin{align*}
|\mathcal{Q}_k(x,y)|&\leq C_{k,\Phi_{x,y}} \text{vol}\, (\supp a(x,y,\cdot,\cdot)\cap \mathcal{C}_{x,y}) \sup _{l\leq k} \norm{(D_\omega^{2l} D_t^l \gamma a)(x,y, \cdot,\cdot)}_{\infty}
\end{align*}
while the remainder satisfies
\begin{align*}
|\mathcal{R}_{\tilde N}(x,y, \mu) | &\leq \widetilde C_{\tilde N,\Phi_{x,y}} \text{vol}\, (\supp a (x,y,\cdot,\cdot )) \\ & \cdot \sup_{l\leq 2\tilde N+ \dim \mathcal{O}_x +1} \norm{(D_\omega^l D_t^l a)(x,y,\cdot,\cdot )}_{\infty} \, \sup_{l\leq \tilde N} \norm{ D_t^l \gamma}_{\infty} \mu^{-\tilde N}, \qquad \gamma \in \mathcal{V}_\mu.
\end{align*}
The bounds are uniform in $R,s$ for suitable constants $C_{k,\Phi_{x,y}}>0$ and $\widetilde C_{\tilde N,\Phi_{x,y}}>0$, where $D_\omega^l$ and $D_t^l$ denote differential operators of order $l$ on $\Sigma^{R,s}_x$ and $T$, respectively. As functions in $x$ and $y$, $\mathcal{Q}_k(x,y)$ and $\mathcal{R}_{\tilde N}(x,y,\mu)$ are smooth on $Y \cap M_\mathrm{prin}$, and the constants $C_{k,\Phi_{x,y}}$ and $\widetilde C_{\tilde N,\Phi_{x,y}}$ are uniformly bounded in $x$ and $y$ if $M= M_\mathrm{prin} \cup M_\mathrm{except}$.
\item[(b)] Let $y \not \in \mathcal{O}_x$. Assume that $M=M_\mathrm{prin}\, \cup\, M_\mathrm{except}$ and that the co-spheres $S_x^\ast M$ are strictly convex. Then, for sufficiently small $Y$ and every $\tilde N=0,1,2,\dots$ one has the asymptotic formula
\begin{equation*}
I^\gamma_{x,y}(\mu)=\sum_{\mathcal{J} \in \pi_0(\mathrm{Crit}\, \Phi_{x,y})}(2\pi/\mu)^{\frac{n-1+\kappa}{2}} e^{i\mu \,^0\Phi_{x,y}^{\mathcal{J}}} \left [ \sum_{k=0}^{\tilde N-1} \mathcal{Q}_{\mathcal{J},k}(x,y) \mu^{-k} +\mathcal{R}_{\mathcal{J}, \tilde N}(x,y,\mu)\right ]
\end{equation*}
as $\mu \to +\infty$, where $\kappa:=\dim M/T$ and $^0\Phi_{x,y}^{\mathcal{J}}$ stands for the constant values of $\Phi_{x,y}$ on the connected components $\mathcal{J}$ of its critical set. The coefficients $\mathcal{Q}_{\mathcal{J},k}(x,y)$ and the remainder term $\mathcal{R}_{\mathcal{J},\tilde N}(x,y,\mu)$ depend smoothly on $R,s$, and $x,y \in Y \cap M_\mathrm{prin}$. Furthermore, they satisfy bounds analogous to the ones in (a), where now derivatives in $t$ up to order $2k$ and $2\tilde N$ can occur, and
the constants $C_{k,\Phi_{x,y}}$ and $\widetilde C_{\tilde N,\Phi_{x,y}}$ are no longer uniformly bounded, but satisfy
\begin{align*}
C_{k,\Phi_{x,y}}& \ll \text{dist}\,(y, \mathcal{O}_x)^{-(n-1-\kappa)/2 -k}, \qquad
\widetilde C_{\tilde N,\Phi_{x,y}} \ll \text{dist}\,(y, \mathcal{O}_x)^{-(n-1-\kappa)/2-\tilde N}.
\end{align*}
\end{enumerate}
\end{thm}
\begin{proof}
The asymptotic expansion for the integral $I^\gamma_{x,y}(\mu)$, the smoothness of the coefficients $\mathcal{Q}_{k}(x,y)$, $\mathcal{Q}_{\mathcal{J},k}(x,y)$, and the remainder terms in the parameters $R,s$, and $x,y \in Y\cap M_\text{prin}$, as well as corresponding bounds for the coefficients and the remainder term were shown in \cite[Theorem 3.3]{ramacher16}. To improve on the remainder estimate concerning its dependence on $\gamma$ as $\mu \to +\infty$, we rewrite $I^\gamma_{x,y}(\mu)$ up to a volume factor as
\begin{equation*}
I^\gamma_{x,y}(\mu)\equiv \int_{\mathfrak{t}}\int_{\Sigma^{R,s}_x} e^{i\mu \Phi_{x,y}(\omega,\exp (-X))} e^{-\gamma(X)} a (x,y, \omega,X) \,d \Sigma^{R,s}_x (\omega) \,d X, \qquad \gamma \in \widehat T,
\end{equation*}
where we can assume that $a$ is compactly supported with respect to $X\in \mathfrak{t}$ in a small open connected subset $^y\mathfrak{t}\subset \mathfrak{t}$ by choosing $Y$ small.
If we were to apply the stationary and non-stationary phase principles to $I^\gamma_{x,y}(\mu)$ with $\Phi_{x,y}$ as phase function, which was the approach followed in \cite{ramacher16}, this would involve derivatives of the amplitude $\overline \gamma a$ and generate non-optimal powers in $\gamma$ in the remainder estimate. Instead, note that the character $\gamma(t)=e^{\gamma(X)}\in S^1$ itself constitutes a phase, which can oscillate rather quickly as $\gamma$ increases. To deal with these oscillations, we shall absorb them into the phase function, and define for arbitrary $\xi \in \mathfrak{t}^\ast$
\begin{equation*}
\Phi^\xi_{x,y}(\omega,X):= \Phi_{x,y}(\omega,e^{-X})-\xi(X), \qquad t=\exp X \in T.
\end{equation*}
The idea is then to apply the stationary and non-stationary phase principles to the integrals $I^\gamma_{x,y}(\mu)$ with phase function $\Phi^\xi_{x,y}(\omega,X)$ and $\xi=\gamma/i\mu$ as parameter, compare \cite[Theorem 7.7.6]{hoermanderI}, to obtain remainder estimates that are optimal in $\gamma\in \mathcal{V}_\mu$.
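Indeed, with this choice of parameter the character is completely absorbed into the phase, since
\begin{equation*}
e^{i\mu \Phi^{\gamma/i\mu}_{x,y}(\omega,X)}= e^{i\mu \Phi_{x,y}(\omega,\exp (-X))}\, e^{-\gamma(X)},
\end{equation*}
so that $I^\gamma_{x,y}(\mu)$ becomes an oscillatory integral with a $\xi$-dependent phase and an amplitude which no longer involves $\gamma$.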
If $\{X_1,\dots,X_d\}$ denotes a basis of $\mathfrak{t}$, the $X$-derivatives of $\Phi^\xi_{x,y}(\omega,X)$ read
\begin{equation*}
\sum_{k=1}^n \omega_k (d \tilde x_k)_{e^{-X} \cdot y} (\widetilde X_j) -\xi(X_j)=[\mathbb{J}(e^{-X} \cdot y, \omega)-\xi](X_j),
\end{equation*}
so that
\begin{equation*}
\mathrm{Crit}\, \Phi^\xi_{x,y}=\big\{(\omega,X) \mid \kappa(x) - \kappa (e^{-X} \cdot y) \in N_\omega(\Sigma^{R,s}_x), \quad (e^{-X} \cdot y, \omega) \in \mathbb{J}^{-1}(\{\xi\})\big\}.
\end{equation*}
A repetition of the arguments given in \cite[Proof of Lemma 3.1]{ramacher16} then shows that for sufficiently small $|\xi|$
\begin{itemize}
\item if $y\in \mathcal{O}_x$, the set $\mathrm{Crit}\, \Phi^\xi_{x,y}$ is clean and given by the smooth submanifold
\begin{equation*}
\mathcal{J}=\big \{(\omega,X)\mid (e^{-X} \cdot y, \omega) \in \mathbb{J}^{-1}(\{\xi\}), \, x=e^{-X}\cdot y\big \}
\end{equation*}
of codimension $2\dim \mathcal{O}_x$;
\item if $y \not\in \mathcal{O}_x$ and $T$ acts on $M$ with orbits of the same dimension $\kappa$
and the co-spheres $S_x^\ast M$ are strictly convex, then either $\mathrm{Crit}\, \Phi^\xi_{x,y}$ is empty, or, choosing $Y$ sufficiently small, $\mathrm{Crit}\, \Phi^\xi_{x,y}$ is clean and of codimension $n-1+\kappa$,
\end{itemize}
which would also just follow from \cite[Proof of Lemma 3.1]{ramacher16} and the implicit function theorem.
In addition, note that for $(\omega,X)\in \mathrm{Crit}\, \Phi^\xi_{x,y}$
\begin{equation*}
{\mathcal M}_{x,y}(\omega,X):=\text{Trans Hess } \Phi^\xi_{x,y}(\omega,X) \quad \text{is independent of $\xi$}.
\end{equation*}
Next, notice that under the assumptions in (a) and (b), respectively, there is an open tubular neighborhood $U_0$ of $\mathrm{Crit}\, \Phi_{x,y}$ and a constant $\mu_0>0$ such that for all $\mu \geq \mu_0$ and $\gamma \in \mathcal{V}_\mu$
\begin{itemize}
\item $\mathrm{Crit}\, \Phi_{x,y}^{\gamma / i\mu} \subset U_0$,
\item $\mathrm{Crit}\, \Phi_{x,y}^{\gamma/ i\mu}$ is clean, that is, $\Phi_{x,y}^{\gamma/i\mu}$ is a Morse-Bott function.
\end{itemize}
Let $U_1$ and $U_2$ be two further open tubular neighborhoods of $\mathrm{Crit}\, \Phi_{x,y}$ and $\mu_0 >\mu_1 >\mu_2>0$ be such that $U_0 \subset U_1 \subset U_2$ are proper inclusions and the pairs $(U_1,\mu_1)$, $(U_2,\mu_2)$ have the same properties as $(U_0,\mu_0)$. Let $u \in {\rm C^\infty}(U_2,{\mathbb R}^+)$ be a test function with $u_{|U_1}\equiv 1$ and define
\begin{align*}
^1I^\gamma_{x,y}(\mu)&:= \int_{\mathfrak{t}}\int_{\Sigma^{R,s}_x} e^{i\mu \Phi^{\gamma/i\mu}_{x,y}(\omega,X)}u(\omega,X)\, a(x,y, \omega,X) \,d \Sigma^{R,s}_x (\omega) \,d X, \\
^2I^\gamma_{x,y}(\mu)&:=I^\gamma_{x,y}(\mu)-{}^1I^\gamma_{x,y}(\mu).
\end{align*}
By construction, for $\gamma \in\mathcal{V}_\mu$ and $\mu \geq \mu_0$ all critical sets $\mathrm{Crit}\, \Phi_{x,y}^{\gamma / i\mu}$ have a minimal, non-vanishing\footnote{At least on the intersection of the support of $a(x,y,\cdot,\cdot)$ and $U_1$.} distance to $\partial U_1$, so that
\begin{equation*}
|\mathrm{grad}\, \Phi^{\gamma/i\mu}_{x,y}| \geq C >0 \quad \text{on $\supp(1-u)\, a(x,y,\cdot,\cdot )$ for all $\gamma \in\mathcal{V}_\mu$ with $\mu \geq \mu_0$.}
\end{equation*}
An application of the non-stationary phase principle \cite[Theorem 7.7.1]{hoermanderI} with respect to the phase function $\Phi^{\gamma/i\mu}_{x,y}$ then yields for every $k \in {\mathbb N}$ the uniform bound
\begin{equation*}
^2I^\gamma_{x,y}(\mu) =O_{k,a}(\mu^{-k}) \qquad \text{for all $\gamma \in\mathcal{V}_\mu$ with $\mu \geq \mu_0$.}
\end{equation*}
It remains to estimate the integral $^1I^\gamma_{x,y}(\mu)$ by means of the stationary phase principle with $\xi=\gamma/i\mu$ as parameter, for which we shall follow \cite[Theorem 7.7.5]{hoermanderI} and its proof. Assume as we may that $U_2$ is sufficiently small, and introduce normal tubular coordinates on $U_2$ in form of an atlas $\{(\zeta_\iota,\mathcal{Y}_\iota)\}_{\iota \in I}$ such that
\begin{enumerate}
\item $\supp u\, a(x,y,\cdot,\cdot ) \subset \bigcup_\iota \mathcal Y_\iota$,
\item $\zeta_\iota^{-1}(m',m'') \in \mathrm{Crit}\, \Phi^\xi_{x,y}$ iff ${\mathbb R}^{d''}\ni m''=m''_\xi$, where
\begin{equation*}
d''=\begin{cases} 2 \dim \mathcal{O}_x & \text{in case (a)}, \\ n-1+\kappa & \text{in case (b)}, \end{cases}
\end{equation*}
\item the $\mathfrak{t}$-coordinates are given by standard Euclidean coordinates, so that in each chart
$$X=\sum_\alpha m'_{\mathfrak{t},\alpha} X_\alpha ' + \sum_\beta m''_{\mathfrak{t},\beta} X_\beta ''$$
for a suitable basis $\{X_\alpha',X_\beta''\}$ of $\mathfrak{t}$.
\end{enumerate}
Let $\{p_\iota\}$ be a partition of unity subordinated to the covering $\{\mathcal{Y}_\iota\}$, and write $ a_\iota(x,y,\omega,X):= p_\iota(\omega,X) a(x,y,\omega,X) $ as well as $a_\iota (x,y,m):=a_\iota (x,y, \zeta_\iota^{-1}(m))\beta_\iota(m) $, $\beta_\iota$ being a Jacobian. Denote the product of $u \circ \zeta^{-1}_\iota$ with the Taylor expansion of $a_\iota(x,y,\cdot )$ in the variable $m''$ at the point $m''_\xi$ of order $2k$ by $T^\xi_\iota(x,y,m)$, which is smooth and bounded in $\xi$. Let ${\mathcal M}_{x,y}(\omega,X)$ be as above and set ${\mathcal M}^\iota_{x,y}(m',m''_\xi):=({\mathcal M}_{x,y} \circ \zeta_\iota^{-1})(m',m''_\xi)$. Since for sufficiently small $|m''-m''_\xi|$
\begin{equation*}
\frac{|m''-m''_\xi|}{|\mathrm{grad}_{m''} \Phi^\xi_{x,y} (m',m'')|} \ll \norm{ {\mathcal M}^\iota_{x,y}(m',m''_\xi)^{-1}} \ll 1
\end{equation*}
for all $\xi$, \cite[Theorem 7.7.1]{hoermanderI} yields with respect to $\Phi^\xi_{x,y}(m):=(\Phi^\xi_{x,y}\circ \zeta_\iota^{-1})(m)$ for any $k \in {\mathbb N}$
\begin{equation*}
^1I^\gamma_{x,y}(\mu)= \sum_\iota \int_{{\mathbb R}^{d'}} \int_{{\mathbb R}^{d''}} e^{i\mu \Phi^{\gamma/i\mu}_{x,y}(m)} T^{\gamma/i\mu}_\iota(x,y,m) \,d m'' \,d m' +O_{k,a}(\mu^{-k})
\end{equation*}
uniformly in $\gamma$. Next, note that for fixed $m'$
\begin{equation}
\label{eq:quadrform}
m'' \longmapsto \langle{\mathcal M}^\iota_{x,y}(m',m''_\xi) (m''-m''_\xi),(m''-m''_\xi)\rangle
\end{equation}
defines a non-degenerate quadratic form, and introduce the auxiliary function
\begin{equation*}
H^\xi(m):=\Phi^\xi_{x,y}(m)-\Phi^\xi_{x,y}(m',m''_\xi)-\langle{\mathcal M}^\iota_{x,y}(m',m''_\xi) (m''-m''_\xi),(m''-m''_\xi)\rangle/2,
\end{equation*}
which vanishes of third order at $m''=m''_\xi$. The function
\begin{equation*}
^s\Phi^\xi_{x,y}(m):=\langle{\mathcal M}^\iota_{x,y}(m',m''_\xi) (m''-m''_\xi),(m''-m''_\xi)\rangle/2+s H^\xi(m)
\end{equation*}
interpolates between $\Phi^\xi_{x,y}(m)-\Phi^\xi_{x,y}(m',m''_\xi)=\, ^1\Phi^\xi_{x,y}(m)$ and the quadratic form \eqref{eq:quadrform}, and we define
\begin{equation*}
{\mathcal I}(s):=\int_{{\mathbb R}^{d''}} e^{i\mu \, ^s\Phi^{\xi}_{x,y}(m)} T^\xi_\iota(x,y,m) \,d m''.
\end{equation*}
Taylor expansion then yields
\begin{equation*}
\Big | {\mathcal I}(1)- \sum_{l=0}^{2k-1} {\mathcal I}^{(l)} (0)/l! \Big | \ll \sup_{0 \leq s \leq 1} |{\mathcal I}^{(2k)} (s)|/(2k)!.
\end{equation*}
Now, differentiation with respect to $s$ gives
\begin{equation*}
{\mathcal I}^{(l)}(s)= \int_{{\mathbb R}^{d''}} e^{i\mu \, ^s\Phi^{\xi}_{x,y}(m)} (i\mu H^\xi(m))^l\, T^\xi_\iota(x,y,m) \,d m''.
\end{equation*}
In view of the uniform bounds
\begin{equation*}
\frac{|m''-m''_\xi|}{|\mathrm{grad}_{m''} \, ^s\Phi^\xi_{x,y} (m',m'')|} \ll \norm{ {\mathcal M}^\iota_{x,y}(m',m''_\xi)^{-1}} \ll 1 \qquad \text{for all $\xi$ and $s$}
\end{equation*}
and\footnote{Note that $D^\alpha_{m''} H^\xi(m)=D^\alpha_{m''} \Phi_{x,y}(m)$ for $|\alpha|\geq 3$, while for $|\alpha| \leq 2$ Taylor expansion at $m''_\xi$ implies
\begin{align*}
|D^\alpha_{m''} H^\xi(m)| \ll |m''-m''_\xi|^{3-|\alpha|} \sum_{|\beta|=3} \sup |D^\beta_{m''} \Phi_{x,y}(m)| \ll |m''-m''_\xi|^{3-|\alpha|}
\end{align*}
uniformly in $\xi$, since $H^\xi(m)$ depends on $\xi$ only via the term $\xi\big (\sum_\alpha m'_{\mathfrak{t},\alpha} X_\alpha ' + \sum_\beta m''_{\mathfrak{t},\beta} X_\beta ''\big )$, which vanishes when differentiated more than once.}
\begin{equation*}
\big |D^\alpha_{m''} [H^\xi(m)^{2k}\, T^\xi_\iota(x,y,m)]\big | \ll |m''-m''_\xi|^{6k-|\alpha|} \quad \text{for all $\xi$}
\end{equation*}
we obtain from \cite[Theorem 7.7.1]{hoermanderI} with $k$ replaced by $3k$ there the important uniform bound
\begin{equation*}
{\mathcal I}^{(2k)}(s) =O(\mu^{-k}) \qquad \text{for all $\gamma \in \mathcal{V}_\mu$ with $\mu \geq \mu_0$ and all $s$.}
\end{equation*}
Next, denote by ${\mathcal H}^\xi(m)$ the Taylor expansion of $H^\xi(m)$ of order $3k$, and notice that one has
\begin{equation*}
(H^\xi)^l-({\mathcal H}^\xi)^l=O(|m''-m''_\xi|^{2k+2l})
\end{equation*}
uniformly in $\xi$. Applying again \cite[Theorem 7.7.1]{hoermanderI} gives
\begin{equation*}
{\mathcal I}^{(l)}(0)= \int_{{\mathbb R}^{d''}} e^{i\mu \, ^0\Phi^{\xi}_{x,y}(m)} (i\mu {\mathcal H}^\xi(m))^l\, T^\xi_\iota(x,y,m) \,d m'' + O_{k,a}(\mu^{-k})
\end{equation*}
uniformly in $\xi$. The assertion now follows by taking into account \cite[Lemma 7.7.3]{hoermanderI} and the final arguments in the proof of \cite[Theorem 7.7.5]{hoermanderI}.
Note that the Taylor expansion ${\mathcal H}^\xi$ starts with terms of degree $3$ and depends on $\xi$ in that the coefficients are evaluated at $m''=m''_{\xi}$. Consequently, when applied to ${\mathcal I}^{(l)}(0)$, the remainder estimate in \cite[Lemma 7.7.3]{hoermanderI} can be uniformly estimated in $\xi$. The final remainder estimate results from the above uniform estimates, and local contributions of higher order where additional derivatives of $\gamma$ arise. The local terms are unique, and coincide with the ones with phase function $\Phi_{x,y}$ and amplitude $\overline \gamma a$ considered in \cite[Theorem 3.3]{ramacher16}, from which the corresponding bounds are deduced. The fact that in case (a) only $t$-derivatives of order $k$ and $\tilde N$ occur follows from the particular form of the transversal Hessian, \cite[Proof of Theorem 3.3]{ramacher16}.
\end{proof}
Similarly, one derives
\begin{thm}
\label{thm:14.05.2017}
Consider the integrals $I^\gamma_{x,y}(\mu)$ defined in \eqref{eq:03.05.2015}. Assume that the torus $T$ acts on $M$ with orbits of the same dimension $\kappa \leq n-1$, and that the co-spheres $S_x^\ast M$ are strictly convex. Then, for sufficiently small $Y$ and arbitrary $\tilde N_1, \tilde N_2 \in {\mathbb N}$ one has the asymptotic formula
\begin{gather*}
I^\gamma_{x,y}(\mu)\\
= \sum_{\mathcal{J} \in \pi_0(\mathrm{Crit}\, \Phi_{x,y})} \frac{e^{i \mu \,^0\Phi_{ x,y}^{\mathcal{J}}}}{\mu^\kappa (\mu \norm{ \kappa(x)-\kappa(g_{\mathcal{J}} \cdot y)}+1 )^{\frac{n-1-\kappa}2}} \left [ \sum_{k_1,k_2 =0}^{\tilde N_1-1,\tilde N_2-1} \frac { \mathcal{Q}_{\mathcal{J}, k_1,k_2} (x,y)}{\mu^{k_1} (\mu \norm{ \kappa(x)-\kappa(g_{\mathcal{J}} \cdot y)}+1)^{k_2}}\right. \\ \left. + \mathcal{R}_{\mathcal{J},\tilde N_1, \tilde N_2}(x,y,\mu) \right ]
\end{gather*}
as $\mu \to +\infty$. The coefficients and the remainder term depend smoothly on $R,t$,
while $^0\Phi_{x,y}^{\mathcal{J}}:= R \, c_{x,g_{\mathcal{J}}\cdot y} (t)$ denotes the constant value of $\Phi_{x,y}$ on $\mathcal{J}$. Furthermore, the coefficients are uniformly bounded in $R,s, x$, and $y$ by derivatives of $\gamma$ up to order $2k_1$,
and the remainder term satisfies
\begin{equation*}
\mathcal{R}_{\mathcal{J},\tilde N_1, \tilde N_2}(x,y,\mu)= O_{\mathcal{J},\tilde N_1, \tilde N_2} \Big ( \mu^{-\tilde N_1} (\mu \norm{ \kappa(x)-\kappa(g_{\mathcal{J}} \cdot y)}+1)^{-\tilde N_2} \Big )
\end{equation*}
with a bound involving derivatives of $\gamma$ up to order $2 \tilde N_1$, provided that $\gamma \in \mathcal{V}_\mu$.
\end{thm}
\begin{proof}
The proof is essentially the same as that of \cite[Theorem 3.4]{ramacher16}, using the arguments given in the proof of the previous theorem.
\end{proof}
\section{The equivariant local Weyl law}
We shall now prove an improved version of the equivariant local Weyl law derived in \cite{ramacher16}. For this, we first prove the following refinement of \cite[Proposition 4.1]{ramacher16}.
\begin{proposition} [\bf Point-wise asymptotics for the kernel of the equivariant approximate projection]
\label{thm:kernelasymp}
For any fixed $x \in M$, $\gamma \in \widehat T$, and $\tilde N\in {\mathbb N}$ one has as $\mu \to +\infty$
\begin{align}
\label{eq:13.05.2015}
\begin{split}
K_{\widetilde \chi_\mu \circ \Pi_\gamma}(x,x)&=\sum_{j\geq 0, \, e_j \in {\rm L}^2_\gamma(M)} \varrho(\mu-\mu_j) |{e_j(x)}|^2 \\ & = \Big (\frac{\mu}{2\pi}\Big )^{n-\dim \mathcal{O}_x-1} \frac{d_\gamma}{2\pi} \left [\sum_{k=0}^{\tilde N-1} {\mathcal L}_k(x,\gamma) \mu^{-k}+ \mathcal{R}_{\tilde N}(x,\gamma) \right ]
\end{split}
\end{align}
with coefficients and remainder depending smoothly on $x \in M_\mathrm{prin}$. They satisfy the bounds
\begin{equation*}
|\mathcal{L}_k(x,\gamma)| \leq C_{k,x} \sup_{l \leq k} \norm{D^l \gamma}_\infty,
\end{equation*}
as well as
\begin{equation*}
|\mathcal{R}_{\tilde N}(x,\gamma)| \leq \widetilde C_{\tilde N,x} \sup_{l \leq \tilde N} \norm{D^l \gamma}_\infty \mu^{-\tilde N}, \qquad \gamma \in \mathcal{V}_\mu,
\end{equation*}
where $D^l$ denotes a differential operator on $T$ of order $l$, and the constants $C_{k,x}$, $\widetilde C_{\tilde N,x}$ are uniformly bounded in $x$ if $M= M_\mathrm{prin} \cup M_\mathrm{except}$. In particular, the leading coefficient is given by
\begin{align*}
{\mathcal L}_0(x,\gamma) = \hat \varrho(0) [{\pi_\gamma}_{|T_x}:{\bf 1}] \, \mbox{vol} \, [( \Omega \cap S_x^\ast M)/T],
\end{align*}
where $S^\ast M:=\{(x,\xi) \in T^\ast M\mid p(x,\xi)=1\}$. If $\mu \to -\infty$, the function $K_{\widetilde \chi_\mu \circ \Pi_\gamma}(x,x)$ is rapidly decreasing in $\mu$.
\end{proposition}
\begin{proof}
We only have to prove the bounds for the coefficients and the remainder, since all other assertions have been shown in \cite{ramacher16}. Let the notation be as in Section \ref{sec:RSF}, and $R,s \in {\mathbb R}$, $x \in Y_\iota$ be fixed. As a direct consequence of Theorem \ref{thm:12.05.2015} (a) we have for any $\tilde N\in {\mathbb N}$
\begin{equation*}
\partial_{R,s}^\beta I^\gamma_\iota(\mu, R, s, x,x)= (2\pi/\mu)^{\dim \mathcal{O}_x} \left [ \sum_{k=0}^{\tilde N-1} {\mathcal L}^k_{\iota,\beta}(R,s,x,\gamma) \mu^{-k}+ \mathcal{R}^{\tilde N}_{\iota,\beta}(R,s,x,\gamma,\mu) \right ],
\end{equation*}
where the coefficients and the remainder term are explicitly given and depend smoothly on $R,s$, and $x\in Y \cap M_\mathrm{prin}$. Furthermore, both the coefficients ${\mathcal L}^k_{\iota,\beta}(R,s,x,\gamma)$ and the remainder are bounded by expressions involving derivatives of $\gamma$ up to order $k$ and $\tilde N$, respectively, which are uniformly bounded in $x$ if $M=M_\mathrm{prin} \cup M_\mathrm{except}$. Equation \eqref{eq:13.06.2016} then implies the asymptotic expansion \eqref{eq:13.05.2015} with the specified estimate for the remainder.
\end{proof}
We can now sharpen \cite[Theorem 4.3]{ramacher16} in the isotypic aspect as follows.
\begin{thm}[\bf Equivariant local Weyl law]
\label{thm:main}
Let $M$ be a closed connected Riemannian manifold of dimension $n$ carrying an isometric and effective action of a torus $T$, and $P_0$ a $T$-invariant elliptic classical pseudodifferential operator on $M$
of degree $m$. Let $p(x,\xi)$ be its principal symbol, and assume that $P_0$ is positive and symmetric. Denote its unique self-adjoint extension by $P$, and for a given $\gamma \in \widehat T$ let $e_\gamma(x,y,\lambda)$ be its reduced spectral function. Further, let $\mathbb{J}:T^\ast M \to \mathfrak{t}^\ast$ be the momentum map of the $T$-action on $M$, and put $\Omega:=\mathbb{J}^{-1}(\{0\})$. Then, for fixed $x \in M$ one has
\begin{equation}
\label{eq:29.10.2015}
\left |e_\gamma(x,x,\lambda)-\frac{ [\pi_{\gamma|T_x}:{\bf 1}]}{(2\pi)^{n-\kappa_x}} \lambda^{\frac{n-\kappa_x}{m}} \int_{\{\xi\mid \, (x,\xi) \in \Omega, \, p(x,\xi)< 1\}} \frac{ \,d \xi}{\text{vol}\, \mathcal{O}_{(x,\xi)}} \right | \leq C_{x,\gamma} \, \lambda^{\frac{n-\kappa_x-1}{m}}
\end{equation}
as $\lambda \to +\infty$, where $\kappa_x:=\dim \mathcal{O}_x$ and $ [\pi_{\gamma|T_x}:{\bf 1}]\in \{0,1\}$ denotes the multiplicity of the trivial representation in the restriction of $\pi_\gamma$ to the isotropy group $T_x$ of $x$. Furthermore, for arbitrary $\gamma \in {\mathcal W}_\lambda:=\{\gamma \in \widehat T' \mid |\gamma | \leq \frac{\lambda^{1/m}}{\log \lambda}\}$
\begin{equation}
\label{eq:4.6.2017}
C_{x,\gamma}=O_x\Big (\sup_{l \leq 1 }\norm{D^l \gamma}_\infty\Big )=O_x (|\gamma| )
\end{equation}
is a constant that depends smoothly on $x\in M_\mathrm{prin}$ and is uniformly bounded in $x$ if $M= M_\mathrm{prin} \cup M_\mathrm{except}$.
\end{thm}
\begin{proof}
This follows directly by taking $\tilde N=1$ in \eqref{eq:13.05.2015} and integrating with respect to $\mu$ from $-\infty$ to $\sqrt[m]{\lambda}$ with the arguments given in \cite[Proof of Eq. (2.25)]{duistermaat-guillemin75}.
\end{proof}
\begin{rem}
\label{rem:23.04.2017}
\hspace{0cm}
\begin{enumerate}
\item
With the same constant $C_{x,\gamma}$ as in \eqref{eq:29.10.2015} one also has the bound
\begin{equation*}
\left | e_\gamma(x,y,\lambda+1)-e_\gamma(x,y,\lambda) \right | \leq \sqrt{C_{x,\gamma} \lambda^{\frac{n-\kappa_x-1}m}} \sqrt{C_{y,\gamma} \lambda^{\frac{n-\kappa_y-1}m}}, \qquad x, y \in M, \, \gamma \in {\mathcal W}_\lambda,
\end{equation*}
compare \cite[Remark 4.4]{ramacher16}.
\item As a consequence of Theorem \ref{thm:main}, the constant $C_{x,\gamma}$ in \cite[Corollary 4.6]{ramacher16} can be improved accordingly, as can all examples given in \cite[Section 4]{ramacher16}.
\end{enumerate}
\end{rem}
\section{Equivariant ${\rm L}^p$-bounds of eigenfunctions for non-singular group actions}
\label{sec:equivLp}
Let the notation be as in the previous sections. As a consequence of the improved point-wise asymptotics for the kernel of the equivariant approximate projection, one obtains in the non-singular case the following sharpened equivariant ${\rm L}^\infty$-bounds for eigenfunctions.
\begin{proposition}[\bf ${\rm L}^\infty$-bounds for isotypic spectral clusters]
\label{thm:bounds}
Assume that $T$ acts on $M$ with orbits of the same dimension $\kappa$, and denote by $\chi_\lambda$ the spectral projection onto the sum of eigenspaces of $P$ with eigenvalues in the interval $(\lambda, \lambda+1]$. Then, for any $\gamma \in {\mathcal W}_\lambda$,
\begin{equation}
\label{eq:5}
\norm{(\chi_\lambda\circ \Pi_\gamma) u}_{{\rm L}^\infty(M)} \leq C (1+ \lambda)^{\frac{n-\kappa-1}{2m}} \norm{u}_{{\rm L}^2(M)}, \qquad u \in {\rm L}^2(M),
\end{equation}
for a positive constant $C$ independent of $\gamma$. In particular, we obtain
\begin{equation*}
\norm{u}_{{\rm L}^\infty(M)} \ll \lambda^{\frac{n-\kappa-1}{2m}}
\end{equation*}
for any eigenfunction $u \in {\rm L}^2_\gamma(M)$ of $P$ with eigenvalue $\lambda$ satisfying $\norm{u}_{{\rm L}^2}=1$ and $\gamma \in {\mathcal W}_\lambda$.
\end{proposition}
\begin{proof}
By Proposition \ref{thm:kernelasymp} we have for $\gamma \in {\mathcal W}_\lambda$ the uniform bound
\begin{equation*}
|K_{ \widetilde \chi_\lambda\circ \Pi_\gamma}(y,y)| \ll (1+\lambda)^{\frac{n-\kappa-1}m}, \qquad y \in M=M_\mathrm{prin} \cup M_\mathrm{except}.
\end{equation*}
The assertion now follows by a repetition of the arguments in the proof of \cite[Proposition 5.1 and Equation (5.4)]{ramacher16}.
\end{proof}
Similarly, we are able to sharpen the ${\rm L}^p$-bounds for isotypic spectral clusters derived in \cite[Theorem 5.4]{ramacher16} in the isotypic aspect.
\begin{thm}[\bf ${\rm L}^p$-bounds for isotypic spectral clusters]
\label{thm:20.02.2016}
Let $M$ be a closed connected Riemannian manifold of dimension $n$ on which a torus $T$ acts effectively and isometrically with orbits of the same dimension $\kappa$. Further, let $P$ be the unique self-adjoint extension of a $T$-invariant elliptic positive symmetric classical pseudodifferential operator on $M$
of degree $m$, and assume that its principal symbol $p(x,\xi)$ is such that the co-spheres $S_x^\ast M:=\{(x,\xi) \in T^\ast M\mid \, p(x,\xi)=1\}$ are strictly convex. Denote by $\chi_\lambda$ the spectral projection onto the sum of eigenspaces of $P$ with eigenvalues in the interval $(\lambda, \lambda+1]$, and by $\Pi_\gamma$ the projection onto the isotypic component ${\rm L}^2_\gamma(M)$, where $\gamma \in \widehat T$. Then, for $u \in {\rm L}^2(M)$ and arbitrary $\gamma\in {\mathcal W}_\lambda$
\begin{equation}
\label{eq:31.12.2015}
\norm{(\chi_\lambda \circ \Pi_\gamma) u}_{{\rm L}^q(M)} \leq \begin{cases} C \, \lambda^{\frac{\delta_{n-\kappa}(q)}{m}} \norm{u}_{{\rm L}^2(M)}, & \frac{2(n-\kappa+1)}{n-\kappa-1} \leq q \leq \infty,
\\ C \, \lambda^{\frac{(n-\kappa-1)(2-q')}{4m q'}} \norm{u}_{{\rm L}^2(M)}, & 2 \leq q \leq \frac{2(n-\kappa+1)}{n-\kappa-1}, \end{cases}
\end{equation}
for a positive constant $C$ independent of $\gamma$, where $\frac 1q+\frac 1{q'}=1$ and
\begin{equation*}
\delta_{n-\kappa}(q):=\max \left ( (n-\kappa) \left | \frac 12-\frac 1q \right| -\frac 12,0 \right ).
\end{equation*}
In particular,
\begin{equation*}
\norm{u}_{{\rm L}^q(M)} \ll \begin{cases} \lambda^{\frac{\delta_{n-\kappa}(q)}{m}}, & \frac{2(n-\kappa+1)}{n-\kappa-1} \leq q \leq \infty,
\\ \lambda^{\frac{(n-\kappa-1)(2-q')}{4m q'}}, & 2 \leq q \leq \frac{2(n-\kappa+1)}{n-\kappa-1}, \end{cases}
\end{equation*}
for any eigenfunction $u \in {\rm L}^2_\gamma(M)$ of $P$ with eigenvalue $\lambda$ satisfying $\norm{u}_{{\rm L}^2}=1$ and $\gamma \in {\mathcal W}_\lambda$.
\end{thm}
\begin{proof}
The proof is a verbatim repetition of the proof of \cite[Theorem 5.4]{ramacher16}, where instead of \cite[Theorem 3.4]{ramacher16} the improved estimates from Theorem \ref{thm:14.05.2017} are used.
\end{proof}
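Note that the two regimes in \eqref{eq:31.12.2015} match at the critical exponent $q_c=\frac{2(n-\kappa+1)}{n-\kappa-1}$: a direct computation shows that
\begin{equation*}
\delta_{n-\kappa}(q_c)=\frac{(n-\kappa-1)(2-q_c')}{4 q_c'}=\frac{n-\kappa-1}{2(n-\kappa+1)},
\end{equation*}
so that both bounds coincide there; these are the classical Sogge exponents with the dimension $n$ replaced by $n-\kappa$.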
As a consequence of the previous theorem, all examples given in \cite[Section 5]{ramacher16} can be sharpened in the isotypic aspect.
\section{The singular equivariant local Weyl law. Caustics and concentration of \\ eigenfunctions}
\label{sec:5}
Using the improved remainder estimates from Theorem \ref{thm:12.05.2015} all results in \cite[Section 7]{ramacher16} can be sharpened. In particular, the singular equivariant local Weyl law proved in \cite[Theorem 7.7]{ramacher16} can be improved in the isotypic aspect.
As before, let $M$ be a closed connected Riemannian manifold and $T$ a torus acting on $M$ by isometries, and consider the decomposition of $M$ into orbit types
\begin{equation}
\label{eq:2.19}
M=M(H_1) \, \dot \cup \, \cdots \, \dot \cup \, M(H_L),
\end{equation}
where we suppose that the isotropy types are numbered in such a way that $(H_i) \geq (H_j)$ implies $i \leq j$, $(H_L)$ being the principal isotropy type. We then have the following
\begin{thm}[\bf Singular equivariant local Weyl law]
\label{thm:15.11.2015}
Let $M$ be a closed connected Riemannian manifold of dimension $n$ with an isometric and effective action of a torus $T$ and $P_0$ a $T$-invariant elliptic classical pseudodifferential operator on $M$
of degree $m$. Let $p(x,\xi)$ be its principal symbol, and assume that $P_0$ is positive and symmetric. Denote its unique self-adjoint extension by $P$, and for a given $\gamma \in \widehat T$ let $e_\gamma(x,y,\lambda)$ be its reduced spectral function. Write
$\kappa$ for the dimension of a $T$-orbit in $M$ of principal type. Then, for $x \in M_\mathrm{prin}\cup M_\mathrm{except}$ one has the asymptotic formula
\begin{gather*}
\left |e_\gamma(x,x,\lambda)- \frac{ \lambda^{\frac{n-\kappa}{m}}}{(2\pi)^{n-\kappa}} \sum_{N=1}^{\Lambda-1} \, \sum_{i_1<\dots< i_{N}} \, \prod_{l=1}^{N} |\tau_{i_l}|^{\dim G- \dim H_{i_l}-\kappa} \mathcal{L}_{i_1\dots i_{N} }^{0,0}(x,\gamma) \right | \\
\leq \widetilde C_\gamma \, \lambda^{\frac{n-\kappa-1}m} \sum_{N=1}^{\Lambda-1}\, \sum_{i_1<\dots< i_{N}} \prod_{l=1}^N |\tau_{i_l}|^{\dim G- \dim H_{i_l}-\kappa-1}
\end{gather*}
as $\lambda \to +\infty$, where the multiple sum runs over all possible totally ordered subsets $\{(H_{i_1}),\dots, (H_{i_N})\}$ of singular isotropy types, and the coefficients satisfy the bounds
$
\mathcal{L}_{i_1\dots i_{N}}^{0,0}(x,\gamma) \ll \norm{\gamma}_\infty
$
uniformly in $x$, while
\begin{equation*}
\widetilde C_\gamma \ll \sup_{l\leq 1} \norm{D^l \gamma}_\infty
\end{equation*}
is a constant independent of $x$ and $\lambda$, the $D^l$ are differential operators on $T$ of order $l$, and the $\tau_{i_j}=\tau_{i_j}(x)$ are parameters satisfying $|\tau_{i_j}|\approx \text{dist}\, (x, M(H_{i_j}))$.
\end{thm}
\begin{proof}
The proof consists in a verbatim repetition of the proof of \cite[Theorem 7.7]{ramacher16} using the improved remainder estimate in Theorem \ref{thm:12.05.2015} (a).
\end{proof}
As an immediate consequence this yields
\begin{cor}[\bf Singular point-wise bounds for isotypic spectral clusters]
\label{cor:2.12.2015}
In the setting of Theorem \ref{thm:15.11.2015} we have
\begin{equation*}
\sum_{\stackrel{\lambda_j \in (\lambda,\lambda+1],}{ e_j \in {\rm L}^2_\gamma(M)}} |e_j(x)|^2 \leq \begin{cases} C \, \lambda^{\frac{n-1}m}, & x\in M_\mathrm{sing}, \\
& \\
C_\gamma \, \lambda^{\frac{n-\kappa-1}m} \sum\limits_{N=1}^{\Lambda-1}\, \sum\limits_{i_1<\dots< i_{N}}\prod\limits_{l=1}^N |\tau_{i_l}|^{\dim G- \dim H_{i_l}-\kappa-1}, & x\in M-M_\mathrm{sing}, \end{cases}
\end{equation*}
with $C>0$ independent of $\gamma$. In particular, the bound holds for each individual $e_j \in {\rm L}^2_\gamma(M)$ with $\lambda_j \in (\lambda, \lambda+1]$.
\end{cor}
\qed
Integrating the asymptotic formulae in Theorems \ref{thm:main} and \ref{thm:15.11.2015} over $x\in M$ yields a sharpened remainder estimate for the equivariant Weyl law derived in \cite{ramacher10}.
In addition, as a consequence of the previous theorem, the example given in \cite[Section 7]{ramacher16} can be sharpened in the isotypic aspect.
\section{Sharpness}
\label{sec:sharpness}
By the arguments given in \cite[Section 8]{ramacher16} the remainder estimates in Theorems \ref{thm:main} and \ref{thm:15.11.2015} are sharp in the spectral parameter $\lambda$, and already attained on the $2$-dimensional sphere $S^2$.
To see that they are almost sharp in the isotypic aspect, endow $M=S^2$ with the induced metric, and let $\Delta$ be the corresponding Laplace-Beltrami operator. The eigenvalues of $-\Delta$ are given by the numbers $ \lambda_k=k(k+1)$ with $k=0,1,2,3,\dots$, and the corresponding $(2k+1)$-dimensional eigenspaces ${\mathcal H}_k$ are spanned by the classical spherical functions $Y_{km}$, $m \in {\mathbb Z}$, $|m| \leq k$.
The $Y_{km}$ are orthonormal to each other, and by the spectral theorem we have the decomposition ${\rm L}^2(M)= \bigoplus _{k=0}^\infty {\mathcal H}_k$. Furthermore, by restricting the left regular representation of $\mathrm{SO}(3)$ in ${\rm L}^2(S^2)$ to the eigenspaces ${\mathcal H}_k$ one obtains realizations of all elements in the unitary dual $\widehat{\mathrm{SO}(3)}\simeq \{k=0,1,2,3,\dots\}$. Now, let $T= \mathrm{SO}(2)$ be isomorphic to the isotropy group of a point in $S^2\simeq \mathrm{SO}(3)/\mathrm{SO}(2)$. The irreducible representations of $\mathrm{SO}(2)$ are $1$-dimensional, and the corresponding characters are given by the exponentials $\theta \mapsto e^{im\theta}$, where $\theta \in [0,2\pi)\simeq \mathrm{SO}(2)$, $m \in {\mathbb Z}\simeq \widehat{\mathrm{SO}(2)}$. Each ${\mathcal H}_k$ decomposes into $\mathrm{SO}(2)$ representations with multiplicity $1$ according to
${\mathcal H}_k=\bigoplus_{|m|\leq k} {\mathcal H}_k^m$, where ${\mathcal H}_k^m$ is spanned by $Y_{km}$.
Consequently, if $N_{m}(\lambda):=\int_{S^2} e_m(x,x,\lambda)\, dS^2(x)$ denotes the equivariant counting function of $\Delta$ we obtain the estimate
\begin{align}
\label{eq:3.6.2017}
N_{m}(\lambda) =\sum_{k(k+1) \leq \lambda, \, |m| \leq k} 1\approx \sum_{|m| \leq k \leq \sqrt{\lambda}} 1\approx \sqrt{\lambda}-|m|,
\end{align}
as $\lambda \to +\infty$,
showing that the remainder estimates in Theorems \ref{thm:main} and \ref{thm:15.11.2015} are almost sharp both in the eigenvalue and in the isotypic aspect. \\
To see that the equivariant ${\rm L}^p$-bounds in Section \ref{sec:equivLp} are almost sharp in the eigenvalue and isotypic aspect, let us consider the standard $2$-torus $M=T^2\subset {\mathbb R}^3$ on which $G=\mathrm{SO}(2)$ acts by rotations around the symmetry axis. Then all orbits are $1$-dimensional and of principal type.
Proposition \ref{thm:bounds} then implies the bound
\begin{equation*}
\norm{u}_{{\rm L}^\infty(T^2)} =O(1), \qquad u \in {\rm L}^2(T^2), \, \norm{u}_{{\rm L}^2}=1,
\end{equation*}
for any eigenfunction of the Laplace-Beltrami operator $\Delta$ on $T^2$. Now, via the identification
\begin{equation*}
{\mathbb R}^2/{\mathbb Z}^2 \stackrel{\simeq}{\longrightarrow} T^2 \simeq S^1 \times S^1, \qquad (x_1,x_2) \, \longmapsto \, (e^{2\pi i x_1}, e^{2\pi i x_2}),
\end{equation*}
the standard orthonormal basis of eigenfunctions of $\Delta$ is given by $\{e^{2\pi i k_1 x_1}e^{2\pi i k_2 x_2}\mid (k_1,k_2) \in {\mathbb Z}^2\}$, showing that the bounds in Proposition \ref{thm:bounds} and Theorem \ref{thm:20.02.2016} are almost sharp both in the eigenvalue and isotypic aspect.
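Indeed, each basis eigenfunction $u_{k_1,k_2}(x)=e^{2\pi i (k_1 x_1+k_2 x_2)}$ has constant modulus, so that all its ${\rm L}^q$-norms are comparable to its ${\rm L}^2$-norm; since in this example $n-\kappa-1=0$, all the exponents in Proposition \ref{thm:bounds} and Theorem \ref{thm:20.02.2016} vanish, and consequently none of them can be improved.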
\end{document} |
\begin{document}
\sloppy
\newenvironment{proo}{\begin{trivlist} \item{\sc {Proof.}}}
{
$\square$ \end{trivlist}}
\long\def\symbolfootnote[#1]#2{\begingroup
\def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup}
\title{Twisting of properads}
\author{Sergei~A. Merkulov}
\address{Sergei~Merkulov: Department of Mathematics, University of Luxembourg, Grand Duchy of Luxembourg}
\email{[email protected]}
\begin{abstract} We study T.\ Willwacher's twisting endofunctor $\mathsf{tw}$ in the category of dg prop(erad)s ${\mathcal P}$ under the operad of (strongly homotopy) Lie algebras, $i:\mathcal{L} \mathit{ie}\rightarrow {\mathcal P}$. It is proven that if ${\mathcal P}$ is a properad under the properad of Lie bialgebras $\mathcal{L}\mathit{ieb}$, then the associated twisted properad $\mathsf{tw}{\mathcal P}$ becomes in general a properad under quasi-Lie bialgebras (rather than under $\mathcal{L}\mathit{ieb}$). This result implies that the cyclic cohomology of any cyclic homotopy associative algebra has in general an induced structure of a quasi-Lie bialgebra. We show that the cohomology of the twisted properad $\mathsf{tw}\mathcal{L}\mathit{ieb}$ is highly non-trivial --- it contains the cohomology of the so-called haired graph complex introduced and studied recently in the context of the theory of long knots and the theory of moduli spaces ${\mathcal M}_{g,n}$ of algebraic curves of arbitrary genus $g$ with $n$ punctures.
Using a polydifferential functor from the category of props to the category of operads, we introduce the notion of a Maurer-Cartan element of a strongly homotopy Lie bialgebra, and use it to construct
a new twisting endofunctor $\mathsf{Tw}$ in the category of dg prop(erad)s ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}$, the minimal resolution of $\mathcal{L}\mathit{ieb}$. We prove that $\mathsf{Tw} \mathcal{H}\mathit{olieb}$ is quasi-isomorphic to $\mathcal{L}\mathit{ieb}$, and establish its relation to the homotopy theory of triangular Lie bialgebras.
It is proven that the dg Lie algebra $\mathsf{Def}(\mathcal{H}\mathit{olieb}\stackrel{}{\rightarrow} {\mathcal P})$ controlling deformations of $i$ acts on $\mathsf{Tw}{\mathcal P}$ by derivations. In some important examples this dg Lie algebra has a rich and interesting cohomology
(containing, for example, the Grothendieck-Teichm\"uller Lie algebra).
Finally, we introduce a diamond version $\mathsf{Tw}^\lozenge$ of $\mathsf{Tw}$ which works in the category of dg properads under {\em involutive}\, (strongly homotopy) Lie bialgebras, and discuss its applications in string topology.
\end{abstract}
\maketitle
\markboth{}{}
{\small
{\small
\tableofcontents
}
}
{\Large
\section{\bf Introduction}
}
In this paper we study some new aspects of a well-known twisting endofunctor $\mathsf{tw}$ \cite{W} in a certain subcategory of the category of properads, and introduce a new one. Both of them have applications in several areas of modern research --- string topology, the theory of moduli spaces of algebraic curves, the theory of cyclic strongly homotopy associative algebras, etc.\ --- which we discuss below.
Let ${\mathcal P}$ be a properad under the operad $\mathcal{L} \mathit{ie}_d$ of (degree $d\in {\mathbb Z}$ shifted) Lie algebras, that is, one equipped with a morphism
$$
i: \mathcal{L} \mathit{ie}_d \longrightarrow {\mathcal P}.
$$
Thomas Willwacher introduced in \cite{W} a {\it twisting endofunctor}\,
$$
\mathsf{tw}: {\mathcal P} \longrightarrow \mathsf{tw}{\mathcal P}
$$
in the category of such properads; the twisted properad $\mathsf{tw}{\mathcal P}$ is obtained from ${\mathcal P}$ by adding to it a new generator of degree $d$, $\begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$, with no inputs and precisely one output, encoding the defining property
of a Maurer-Cartan element of a generic $\mathcal{L}\mathit{ie}_d$-algebra. This twisting construction originated in the formality theory of the operad of chains of the little disks operad \cite{Ko2, LV, W} and found many other important applications including the Deligne conjecture \cite{DW}, the homotopy theory of configuration spaces \cite{CW} and the theory of moduli spaces of algebraic curves \cite{MW1, Me2}. The twisted properad comes equipped with a canonical morphism
$$
\mathcal{L} \mathit{ie}_d \longrightarrow \mathsf{tw}{\mathcal P}
$$
so the twisting construction is indeed an endofunctor in the category, $\mathsf{PROP}_{\mathcal{L} \mathit{ie}_d}$, of operads under $\mathcal{L} \mathit{ie}_d$. It can be naturally extended \cite{W} to the category $\mathsf{PROP}_{\mathcal{H} \mathit{olie}_d}$ of properads under $\mathcal{H} \mathit{olie}_d$, the minimal resolution of $\mathcal{L} \mathit{ie}_d$. It is proven in \cite{DW} that $\mathsf{tw} \mathcal{H} \mathit{olie}_d$ is quasi-isomorphic to $\mathcal{L} \mathit{ie}_d$; the cohomologies of twisted versions of some other classical operads under $\mathcal{L} \mathit{ie}_d$ have been computed in \cite{DW, DSV}.
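Recall that in the classical setting of a differential graded Lie algebra $(\mathfrak{g},{\mathrm d},[\,,\,])$ a Maurer-Cartan element is a degree one element $\alpha\in \mathfrak{g}^1$ satisfying
$$
{\mathrm d}\alpha + \frac{1}{2}[\alpha,\alpha]=0,
$$
and twisting by $\alpha$ replaces the differential ${\mathrm d}$ by ${\mathrm d}+[\alpha,\,\cdot\,]$; in the degree shifted case of $\mathcal{L}\mathit{ie}_d$-algebras the same formulae hold up to the appropriate shift of degrees, the new generator of $\mathsf{tw}{\mathcal P}$ being a universal avatar of such an element $\alpha$.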
The main purpose of this paper is to study the restriction of T.\ Willwacher's twisting endofunctor $\mathsf{tw}$,
$$
\mathsf{tw}: \mathsf{PROP}_{\mathcal{L}\mathit{ieb}_{c,d}} \longrightarrow \mathsf{PROP}_{\mathcal{L} \mathit{ie}_d}
$$
to the subcategory $\mathsf{PROP}_{\mathcal{L}\mathit{ieb}_{c,d}} \subset \mathsf{PROP}_{\mathcal{L} \mathit{ie}_d}$ of properads under the properad $\mathcal{L}\mathit{ieb}_{c,d}$ of (degree shifted) Lie bialgebras, i.e.\ the ones which come equipped with a non-trivial morphism of properads,
\begin{equation}\label{1: i from LBcd to P}
i: \mathcal{L}\mathit{ieb}_{c,d}\longrightarrow {\mathcal P},
\end{equation}
and then to modify $\mathsf{tw} \rightsquigarrow \mathsf{Tw}$ appropriately so that the new functor $\mathsf{Tw}$ becomes an {\em endofunctor}\, of $ \mathsf{PROP}_{\mathcal{L}\mathit{ieb}_{c,d}}$. Everything will work, of course, in the category $\mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$ of properads under
$\mathcal{H}\mathit{olieb}_{c,d}$, the minimal resolution of $\mathcal{L}\mathit{ieb}_{c,d}$.
The notion of a Lie bialgebra was introduced by Vladimir Drinfeld in \cite{Dr1} in the context of the theory of Yang-Baxter equations and the deformation theory of universal enveloping algebras. This notion and its involutive version have since found many applications in algebra, string topology, contact homology, the theory of associators and the theory of Riemann surfaces with punctures. The properad $\mathcal{L}\mathit{ieb}_{c,d}$ controls Lie bialgebras with Lie bracket of degree $1-d$ and Lie cobracket of degree $1-c$; the properads $\mathcal{L}\mathit{ieb}_{c,d}$ with the same parity of $c+d\in {\mathbb Z}$ are isomorphic to each other up to degree shift, so that there are essentially two different types of such properads, even and odd ones.
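For orientation we recall the classical (unshifted) notion: a Lie bialgebra is a Lie algebra $(\mathfrak{g},[\,,\,])$ equipped with a Lie cobracket $\delta: \mathfrak{g}\rightarrow \mathfrak{g}\wedge \mathfrak{g}$ satisfying Drinfeld's compatibility (cocycle) condition
$$
\delta([a,b])= a\cdot \delta(b) - b\cdot \delta(a), \qquad a,b\in \mathfrak{g},
$$
where $\cdot$ denotes the adjoint action of $\mathfrak{g}$ extended to $\mathfrak{g}\wedge\mathfrak{g}$ as a derivation; quasi-Lie bialgebras relax the co-Jacobi identity for $\delta$ by a correction term governed by an element of $\wedge^3\mathfrak{g}$.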
In general, given ${\mathcal P}\in \mathsf{PROP}_{\mathcal{L}\mathit{ieb}_{c,d}}$, the associated twisted properad $\mathsf{tw}{\mathcal P}$ is no longer a properad under $\mathcal{L}\mathit{ieb}_{c,d}$ --- a generic Maurer-Cartan element of the Lie bracket need not respect the Lie cobracket. Surprisingly enough, the cohomology of any twisted properad $\mathsf{tw}{\mathcal P}$ always comes equipped with a morphism of properads induced from $i$,
\begin{equation}\label{1: i^q map}
i^q: q\mathcal{L}\mathit{ieb}_{c-1,d}\longrightarrow H^\bullet(\mathsf{tw}{\mathcal P}),
\end{equation}
where $q\mathcal{L}\mathit{ieb}_{c-1,d}$ is the properad of (degree shifted) {\it quasi}-Lie bialgebras, which were also introduced by Vladimir Drinfeld in \cite{Dr1} in the context of the theory of quantum groups. The map $i^q$ is described explicitly in Theorem {\ref{2: qLien to twP}} below. In the special case when ${\mathcal P}$ is the properad of ribbon graphs $\mathcal{R} \mathcal{G} ra_d$ introduced in \cite{MW1}, the map $i^q$ has been found (in a slightly different but equivalent form) in \cite{Me2}. As $\mathsf{tw}\mathcal{R} \mathcal{G} ra_d$ acts canonically (almost by its very construction) on the reduced cyclic cohomology $H^\bullet(Cyc(A))$ of an arbitrary cyclic strongly homotopy associative algebra $A$ (equipped with a degree $-d$ scalar product), we deduce a new observation that {\em $H^\bullet(Cyc(A))$ is always a quasi-Lie bialgebra}, see \S {\ref{3: subsec on qLieb on Cyc(A)}} for full details. It is worth noting that the twisted properad of ribbon graphs $\mathsf{tw} \mathcal{R} \mathcal{G} ra_d$ controls \cite{Me2} the totality of compactly supported cohomology groups $\prod_{n\geq 1, 2g+n\geq 3} H_c^{\bullet- d(2g-2+n)}({\mathcal M}_{g,n})$ of moduli spaces ${\mathcal M}_{g,n}$ of genus $g$ algebraic curves with $n$ punctures, and the associated map
$$
i^q: q\mathcal{L}\mathit{ieb}_{d-1,d}\longrightarrow H(\mathsf{tw}\mathcal{R} \mathcal{G} ra_d)\simeq \prod_{n\geq 1, 2g+n\geq 3} H_c({\mathcal M}_{g,n})
$$
is non-trivial on infinitely many elements of $q\mathcal{L}\mathit{ieb}_{-1,0}$ (see \S 3.9 in \cite{Me2}).
The deformation complex
$$
\mathsf{Def}\left(q\mathcal{L}\mathit{ieb}_{c-1,d}\stackrel{i^q}{\longrightarrow} H^\bullet(\mathsf{tw}{\mathcal P})\right)
$$
of the morphism $i^q$ has, in general, a much richer cohomology than the complex $\mathsf{Def}(\mathcal{L} \mathit{ie}_{d}\stackrel{i}{\longrightarrow} \mathsf{tw}{\mathcal P})$; moreover, that cohomology always comes equipped with a morphism of cohomology groups,
$$
H^\bullet(\mathsf{GC}^{\geq 2}_{c+d-1})\longrightarrow H^\bullet\left(
\mathsf{Def}\left(q\mathcal{L}\mathit{ieb}_{c-1,d}\stackrel{i^q}{\longrightarrow} H^\bullet(\mathsf{tw}{\mathcal P})\right)\right)
$$
where $\mathsf{GC}^{\geq 2}_n$ is the famous Maxim Kontsevich graph complex \cite{Ko1} (more precisely, its extension allowing graphs with bivalent vertices). The case $c+d=3$ is of special interest as the dg Lie algebra
$H^\bullet(\mathsf{GC}^{\geq 2}_{2})$ contains the Grothendieck-Teichm\"uller Lie algebra \cite{W}. The case $c+d=2$ is also of interest as it corresponds to the odd Kontsevich graph complex $H^\bullet(\mathsf{GC}_1^{\geq 2})$ which contains a rich subspace generated by trivalent graphs.
Another important example of a dg properad in the subcategory $\mathsf{PROP}_{\mathcal{L}\mathit{ieb}_{c,d}}$ is the properad
$\mathcal{L}\mathit{ieb}_{c,d}$ itself; the cohomology $H^\bullet(\mathsf{tw}\mathcal{L}\mathit{ieb}_{c,d})$ of the associated twisted properad is highly non-trivial:
we show in \S {\ref{3: subsec on reduced twisting of (prop)erads under HoLBcd}} that, for any natural number $N\geq 1$, there is an injection of cohomology groups,
$$
H^\bullet(\mathsf{HGC}_{c+d}^N) \longrightarrow H^{\bullet + dN}(\mathsf{tw} \mathcal{L}\mathit{ieb}_{c,d})
$$
where $\mathsf{HGC}_d^N$ is a version of the Kontsevich graph complex $\mathsf{GC}_d$ with $N$ labelled hairs which has been introduced and studied recently in the context of the theory of moduli spaces ${\mathcal M}_{g,n}$ of algebraic curves of arbitrary genus $g$ with $n$ punctures \cite{CGP} and the theory of long knots \cite{FTW}.
The functor $\mathsf{tw}$ is {\em not}\, an endofunctor of the category $\mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$
and, contrary to the canonical projection $\mathsf{tw} \mathcal{H} \mathit{olie}_d \rightarrow \mathcal{H} \mathit{olie}_d$, the analogous ``forgetful'' map
$$
\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d} \longrightarrow \mathcal{H}\mathit{olieb}_{c,d}
$$
is {\em not}\, a quasi-isomorphism (as the above result with the haired graph complex demonstrates).
In \S 4 we introduce a new twisting endofunctor
$$
\mathsf{Tw}: \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}} \longrightarrow \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}
$$
which is an enlargement of $\mathsf{tw}$ in the sense of the number of new generators (hence the notation) and which fixes both of these ``not''s.
The key point is to introduce the correct notion of a Maurer-Cartan element of a generic $\mathcal{H}\mathit{olieb}_{c,d}$-algebra. The idea is to use a polydifferential functor \cite{MW1}
$$
\begin{array}{rccc}
{\mathcal O}: & \text{\sf Category of dg props} & \longrightarrow & \text{\sf Category of dg operads}\\
& {\mathcal P} &\longrightarrow & {\mathcal O}{\mathcal P}
\end{array}
$$
whose main property is that,
given any dg prop ${\mathcal P}$ and any representation of it
in a dg vector space $V$, the dg operad ${\mathcal O}{\mathcal P}$ comes equipped canonically with an induced representation
in the graded commutative tensor algebra ${\odot^\bullet} V$ given in terms of polydifferential --- with respect to the standard multiplication in ${\odot^\bullet} V$ --- operators. The point is that the dg operad ${\mathcal O}\mathcal{H}\mathit{olieb}_{c,d}$ comes equipped with a highly non-trivial morphism of dg properads which was discovered in \cite{MW1},
$$
\mathcal{H} \mathit{olie}_{c+d}^+ \longrightarrow {\mathcal O}\mathcal{H}\mathit{olieb}_{c,d},
$$
and whose image brings into play {\em all}\, generators of the properad $\mathcal{H}\mathit{olieb}_{c,d}$, not just the
ones spanning the sub-properad $\mathcal{H} \mathit{olie}_d$ of $\mathcal{H}\mathit{olieb}_{c,d}$. Here the symbol $+$ means a slight extension of $\mathcal{H} \mathit{olie}_d$ which takes care of deformations of the differential in representation spaces \cite{Me1}. It makes sense to talk about Maurer-Cartan elements of representations of $\mathcal{H} \mathit{olie}_{c+d}^+$ as usual, and hence it makes sense to talk about {\em Maurer-Cartan elements $\gamma\in {\odot^{\bullet\geq 1}}( V[c])$ of an arbitrary $\mathcal{H}\mathit{olieb}_{c,d}$-algebra}\, $V$ via the above morphism. Rather surprisingly, these MC elements $\gamma$ can be used to twist not only the dg operad ${\mathcal O}\mathcal{H}\mathit{olieb}_{c,d}$ but the dg properad $\mathcal{H}\mathit{olieb}_{c,d}$ itself, thereby giving rise to a new twisting endofunctor $\mathsf{Tw}$ on the category $\mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$! As explained in \S 4, the twisting endofunctor $\mathsf{Tw}$ adds to a generic properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$ infinitely many new (skew)symmetric generators,
$$
\begin{array}{c}\resizebox{14mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array} =
(-1)^{c|\sigma|}
\begin{array}{c}\resizebox{17mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};
<-0.6mm,0.44mm>*{};<-11mm,7mm>*{^{\sigma(1)}}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,7mm>*{^{\sigma(2)}}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,6mm>*{}**@{-},
<0.6mm,0.44mm>*{};<10mm,7mm>*{^{\sigma(m)}}**@{-},
\end{xy}}\end{array} \ \ \ \forall \sigma\in {\mathbb S}_m, \ m\geq 1,
$$
with the $m=1$ corolla $\begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ corresponding to the original functor $\mathsf{tw}$. We show in \S {\ref{4: subsec on trinagular}} that the quotient of $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}$ by the ideal generated by $\begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ gives us a dg free properad closely related to the properad of strongly homotopy {\em triangular}\, Lie bialgebras.
It is proven in \S 4 that the natural projection
$$
\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d} \longrightarrow \mathcal{H}\mathit{olieb}_{c,d}
$$
is a quasi-isomorphism. We also prove for any ${\mathcal P}\in \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$ t the dg Lie algebra
$\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d}\stackrel{i}{\rightarrow} {\mathcal P})$ controlling deformations of the given morphism
$
i: \mathcal{H}\mathit{olieb}_{c,d} \rightarrow {\mathcal P}$
acts on $\mathsf{Tw}{\mathcal P}$ by derivations. In the case ${\mathcal P}=\mathcal{H}\mathit{olieb}_{c,d}$ the associated complex
$$
\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d}\stackrel{{\mathrm{Id}}}{\rightarrow} \mathcal{H}\mathit{olieb}_{c,d})
$$
has cohomology equal (up to one rescaling class) to $H^\bullet(\mathsf{GC}_{c+d+1}^{\geq 2})$ \cite{MW2} so that, for any dg properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$, there is always a morphism of cohomology groups
$$
H^\bullet(\mathsf{GC}_{c+d+1}^{\geq 2}) \longrightarrow H^\bullet(\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d}\stackrel{i}{\rightarrow} {\mathcal P}))
$$
and hence an action of $H^\bullet(\mathsf{GC}_{c+d+1}^{\geq 2})$ on the cohomology of the twisted dg properad $\mathsf{Tw}{\mathcal P}$ by derivations (which can, in concrete cases, be homotopy trivial).
A similar trick via the polydifferential functor ${\mathcal O}$ works fine in the case of strongly homotopy {\em involutive}\, Lie bialgebras; we denote the associated properad by $\mathcal{L}\mathit{ieb}^\diamond_{c,d}$ and its minimal resolution by $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$. There is again a highly non-trivial morphism of dg properads \cite{MW1},
$$
\mathcal{H} \mathit{olie}^{\diamond +}_{c+d} \to {\mathcal O}\mathcal{H}\mathit{olieb}_{c,d}^\diamond,
$$
where $\mathcal{H} \mathit{olie}^{\diamond +}$ is a {\em diamond}\, extension of $\mathcal{H} \mathit{olie}_d$ which was introduced and studied in \cite{CMW}. Maurer-Cartan elements of $\mathcal{H} \mathit{olie}^{\diamond +}$-algebras are defined in the standard way, so that the above morphism of properads can be immediately translated into the notion of a {\em Maurer-Cartan element of a $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra}. However, this time we obtain essentially nothing new: this approach just re-discovers the notion introduced earlier, in full generality, in \cite{CFL}, and in the special case of $\mathcal{L}\mathit{ieb}^\diamond_{c,d}$-algebras of cyclic words in \cite{Ba1,Ba2}. Therefore the diamond extension $\mathsf{Tw}^\diamond$ of the twisting endofunctor $\mathsf{Tw}$ from \S 3 gives us essentially nothing new as well: we obtain just a properadic incarnation of the twisting constructions in \cite{Ba1,Ba2,CFL}. This incarnation fits nicely with the beautiful approach to string topology developed in \cite{NW} via the so-called partition functions of closed manifolds. Full details are given in \S 5.
{\bf Notation}. We work over a field ${\mathbb K}$ of characteristic zero.
The set $\{1,2, \ldots, n\}$ is abbreviated to $[n]$; its group of automorphisms is
denoted by ${\mathbb S}_n$;
the trivial (resp., sign) one-dimensional representation of
${\mathbb S}_n$ is denoted by ${\mbox{1 \hskip -7pt 1}}_n$ (resp., by ${\mathit s \mathit g\mathit n}_n$). We often abbreviate ${\mathit s \mathit g\mathit n}_n^d:= {\mathit s \mathit g\mathit n}_n^{\otimes |d|}$, $d\in {\mathbb Z}$.
The cardinality of a finite set $A$ is denoted by $\# A$.
We work throughout in the category of ${\mathbb Z}$-graded vector spaces over a field ${\mathbb K}$
of characteristic zero.
If $V=\oplus_{i\in {\mathbb Z}} V^i$ is a graded vector space, then
$V[d]$ stands for the graded vector space with $V[d]^i:=V^{i+d}$; the canonical isomorphism $V\rightarrow V[d]$ is denoted by ${\mathfrak s}^d$.
For $v\in V^i$ we set $|v|:=i$.
For a
prop(erad) ${\mathcal P}$ we denote by ${\mathcal P}\{d\}$ a prop(erad) which is uniquely defined by
the following property:
for any graded vector space $V$ a representation
of ${\mathcal P}\{d\}$ in $V$ is identical to a representation of ${\mathcal P}$ in $V[d]$; in particular, for the
endomorphism properad one has ${\mathcal E} nd_V\{-d\}={\mathcal E} nd_{V[d]}$.
Thus a map ${\mathcal P}\{d\} \rightarrow {\mathcal E} nd_V$ is the same as $
{\mathcal P} \rightarrow {\mathcal E} nd_{V[d]}\equiv {\mathcal E} nd_V\{-d\}$. The operad controlling Lie algebras with Lie bracket of degree $-d$ is denoted by ${\mathcal L} ie_{d+1}$ and its minimal resolution by ${\mathcal H} \mathit{olie}_{d+1}$; thus ${\mathcal L} ie_{d+1}$ is equal to ${\mathcal L} \mathit{ie}\{d\}$ if one uses the standard notation ${\mathcal L} ie:=\mathcal{L} \mathit{ie}_1$ for the ordinary operad of Lie algebras.
We often use the following elements
$$
\oint_{123}\hspace{-1mm}:=\sum_{k=1}^3 (123)^k\in {\mathbb K}[{\mathbb S}_3], \ \ \ \ \ \
\text{Alt}^d_{{\mathbb S}_n}:=\sum_{\sigma\in {\mathbb S}_n} (-1)^{d|\sigma|} \sigma\in {\mathbb K}[{\mathbb S}_n]
$$
as linear operators on ${\mathbb S}_3$- and, respectively, ${\mathbb S}_n$-modules.
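In plain terms, $\oint_{123}$ is the sum of the three cyclic permutations in ${\mathbb S}_3$, while $\text{Alt}^d_{{\mathbb S}_n}$ is the full symmetrization (for $d$ even) or skew-symmetrization (for $d$ odd) operator, taken without the customary factor $\frac{1}{n!}$.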
{\Large
\section{\bf Twisting of operads under $\mathcal{L} \mathit{ie}_d$ --- an overview in pictures}
}
\subsection{Introduction} This section is a more or less self-contained exposition of Thomas Willwacher's construction \cite{W} of the twisting endofunctor in the category of operads under the operad of Lie algebras. For purely pedagogical purposes, we consider here a new intermediate step based on the ``plus" endofunctor from \cite{Me1}. We try to show all the details (including elementary ones) with emphasis on the action of the deformation complexes on twisted operads. Many calculations in the later sections, where we discuss some new material, are more tedious but analogous to the ones reviewed here.
\subsection{Reminder about $\mathcal{H} \mathit{olie}_d$}
Recall that the operad of degree shifted Lie algebras is defined, for any integer $d\in {\mathbb Z}$, as a quotient,
$$
\mathcal{L} \mathit{ie}_{d}:={\mathcal F} ree\langle e\rangle/\langle{\mathcal R}\rangle,
$$
of the free prop generated by an ${\mathbb S}$-module $e=\{e(n)\}_{n\geq 2}$ with
all $e(n)=0$ except\footnote{When representing elements of all operads and props
below as (decorated) graphs we tacitly assume that all edges and legs are {\em directed}\, along the flow going from the bottom of the graph to the top.}
$$
e(2):= {\mathit s \mathit g\mathit n}_2^{d}\otimes {\mbox{1 \hskip -7pt 1}}_1[d-1]=\mbox{span}\left\langle
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
\end{xy}\end{array}
=(-1)^{d}
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_1}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_2}}**@{},
\end{xy}\end{array}
\right\rangle
$$
modulo the ideal generated by the following relation
\begin{equation}\label{2: Lie operad Jacobi relation}
\oint_{123}\hspace{-1mm}\betagin{array}{c}\resizebox{9mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array}
\equiv
\betagin{array}{c}\resizebox{9mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array}
+
\betagin{array}{c}\resizebox{9mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{},
\end{xy}}\end{array}
+
\betagin{array}{c}\resizebox{9mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{},
\end{xy}}\end{array}
=0.
\end{equation}
Its minimal resolution $\mathcal{H} \mathit{olie}_d$ is a dg free operad whose (skew)symmetric generators,
\begin{equation}\label{2: Lie_inf corolla}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}
=(-1)^{d|\sigma|}
\betagin{array}{c}\resizebox{23mm}{!}{\xy
(1,-6)*{\ldots},
(-13,-7)*{_{\sigma(1)}},
(-6.7,-7)*{_{\sigma(2)}},
(13,-7)*{_{\sigma(n)}},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array},
\ \ \ \forall \sigma\in {\mathbb S}_n,\ n\geq2,
\end{equation}
have degrees $1+d-nd$.
The differential in $\mathcal{H} \mathit{olie}_d$ is given by
\begin{equation}\label{2: d in Lie_infty}
\delta\hspace{-3mm}
\betagin{array}{c}\resizebox{21mm}{!}{\xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}
=
\sum_{A\subsetneq [n]\atop
\# A\geq 2}\pm
\betagin{array}{c}\resizebox{19mm}{!}{\betagin{xy}
<10mm,0mm>*{\bulletlet},
<10mm,0.8mm>*{};<10mm,5mm>*{}**@{-},
<0mm,-10mm>*{...},
<14mm,-5mm>*{\ldots},
<13mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ }},
<14mm,-10mm>*{_{[n]\setminus A}};
<10.3mm,0.1mm>*{};<20mm,-5mm>*{}**@{-},
<9.7mm,-0.5mm>*{};<6mm,-5mm>*{}**@{-},
<9.9mm,-0.5mm>*{};<10mm,-5mm>*{}**@{-},
<9.6mm,0.1mm>*{};<0mm,-4.4mm>*{}**@{-},
<0mm,-5mm>*{\bulletlet};
<-5mm,-10mm>*{}**@{-},
<-2.7mm,-10mm>*{}**@{-},
<2.7mm,-10mm>*{}**@{-},
<5mm,-10mm>*{}**@{-},
<0mm,-12mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ }},
<0mm,-15mm>*{_{A}}.
\end{xy}}
\end{array}
\end{equation}
If $d$ is even, all the signs above are equal to $-1$.
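For example, for $d=1$ the generating corollas (\ref{2: Lie_inf corolla}) span the sign representations of ${\mathbb S}_n$ and have degree $2-n$, so that a representation of $\mathcal{H} \mathit{olie}_1$ in a dg vector space $V$ is just an $L_\infty$-algebra structure on $V$ in one of its standard incarnations, with $n$-ary operations of degree $2-n$; accordingly, $\mathcal{L} \mathit{ie}_1=\mathcal{L} \mathit{ie}$ is the ordinary operad of Lie algebras.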
\subsection{``Plus" extension}\lambdabel{2: subsection on plus functor}
We also consider a dg operad $\mathcal{H}\mathit{olie}_d^+$ which is an extension of $\mathcal{H}\mathit{olie}_{d}$ by an extra degree $1$ generator $\begin{array}{c}\resizebox{1.7mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$, with the differential given by the formula (\ref{2: d in Lie_infty}) above but with the summation now running over all non-empty subsets $A\subset [n]$.
More generally, there is an endofunctor on the category of dg props (or dg operads)
introduced in \cite{Me1}
$$
\begin{array}{rccc}
^+: & \text{category of dg props} & \longrightarrow & \text{category of dg props} \\
& ({\mathcal P},{\p}) & \longrightarrow & ({\mathcal P}^+, {\p}^+)
\end{array}
$$
defined as follows. For any dg prop ${\mathcal P}$,
let ${\mathcal P}^+$ be the free prop generated by ${\mathcal P}$ and one other operation
$\begin{array}{c}\resizebox{2.2mm}{!}{
\begin{xy}
<0mm,-0.55mm>*{};<0mm,-4mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ of arity $(1,1)$ and of cohomological degree $+1$. On ${\mathcal P}^+$ one defines a differential ${\p}^+$ by setting its value on the new generator to be
$$
{\p}^+
\betagin{array}{c}\resizebox{2.0mm}{!}{
\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-4mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bulletletllet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
:=-
\betagin{array}{c}\resizebox{2.8mm}{!}{ \betagin{xy}
<0mm,0mm>*{};<0mm,-4mm>*{}**@{-},
<0mm,0mm>*{};<0mm,8mm>*{}**@{-},
<0mm,0mm>*{\bulletletllet};
<0mm,4mm>*{\bulletletllet};
\end{xy}
}\end{array}
$$
and on any other element $a\in {\mathcal P}(m,n)$ (which we identify pictorially with the $(m,n)$-corolla
whose vertex is decorated with $a$) by the formula
\begin{equation}\label{2: delta^+}
{\p}^+
\betagin{array}{c}\resizebox{15mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}:= {\p}
\betagin{array}{c}\resizebox{15mm}{!}{ \betagin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}
\end{array}
-
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{16mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\bulletletllet};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<6mm,5mm>*{..}**@{},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-4mm,5.5mm>*{^i}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
+ (-1)^{|a|}
\overset{n-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{16mm}{!}{\betagin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0mm>*{};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-6mm,-5mm>*{..}**@{},
<0mm,0mm>*{};<0mm,-5mm>*{}**@{-},
<0mm,-5mm>*{\bulletletllet};
<0mm,-5mm>*{};<0mm,-8mm>*{}**@{-},
<0mm,-5mm>*{};<0mm,-10mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<6mm,-5mm>*{..}**@{},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-4mm,-6.9mm>*{^i}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}.
\end{equation}
where ${\partial}$ is the original differential in ${\mathcal P}$.
The dg prop $({\mathcal P}^+, {\p}^+)$ is uniquely characterized by the property: there is a one-to-one correspondence between representations
$$
\rho: {\mathcal P}^+ \longrightarrow {\mathcal E} nd_V
$$
of $({\mathcal P}^+, {\p}^+)$ in a dg vector space $(V,d)$, and representations of ${\mathcal P}$ in the same space $V$
but equipped with a deformed differential $d+d'$, where $d':=\rho(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})$.
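For example, for ${\mathcal P}=\mathcal{H} \mathit{olie}_d$ a representation of ${\mathcal P}^+$ in a dg vector space $(V,d)$ is nothing but a $\mathcal{H} \mathit{olie}_d$-algebra structure on the graded vector space $V$ taken with respect to the perturbed differential $d+d'$, in agreement with the description of $\mathcal{H} \mathit{olie}_d^+$ given at the beginning of this subsection.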
\subsection{From morphisms from $\mathcal{H} \mathit{olie}_d^+$ to twisted morphisms from $\mathcal{H} \mathit{olie}_d$}\label{2: subsec on twisting d in operads} Given any dg operad $({\mathcal A}=\{{\mathcal A}(n)\}_{n\geq 0},{\p})$,
it is well-known that any element $h\in {\mathcal A}(1)$ defines a derivation $D_h$ of the (non-differential) operad ${\mathcal A}$ by the formula analogous to (\ref{2: delta^+}),
$$
D_ha= h\circ_1 a - (-1)^{|h||a|} \sum_{i=1}^n a\circ_i h, \ \ \forall \ a\in {\mathcal A}(n).
$$
Moreover, if $|h|=1$ and
${\p} h=-h\circ_1 h$, then the operator
$$
{\partial}c:= {\partial}+ D_h
$$
is also a differential in ${\mathcal A}$ (which acts on $h$ by
$
{\partial}c h= h\circ_1 h
$).
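Indeed, the assignment $h\rightarrow D_h$ intertwines the natural Lie brackets, so that $[{\partial}, D_h]=D_{{\partial} h}$ and, for $|h|=1$, $D_h^2=\frac{1}{2}[D_h,D_h]=D_{h\circ_1 h}$; hence
$$
({\partial}+D_h)^2={\partial}^2+[{\partial},D_h]+D_h^2=D_{{\partial} h+ h\circ_1 h}=0 .
$$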
Assume we have a morphism of dg operads
\begin{equation}\label{2: g^+ map from Holie}
g^+: (\mathcal{H} \mathit{olie}_d^+, \delta^+) \longrightarrow ({\mathcal A},{\p})
\end{equation}
Then the element $h:=g^+(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})$
satisfies all the conditions specified above so that the sum
$$
{\partial}c:= {\partial} + D_{g^+(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})}
$$
is a differential in ${\mathcal A}$. Hence we have the following
\subsubsection{\bf Proposition}\label{2: Prop on Holie -> O with d+} {\it For any morphism of dg operads (\ref{2: g^+ map from Holie}) there is an associated morphism of dg operads,
$$
g: (\mathcal{H} \mathit{olie}_d, \delta) \longrightarrow ({\mathcal A}, {\partial}c)
$$
given by the restriction of $g^+$ to the generators of $\mathcal{H} \mathit{olie}_d$.}
\begin{proof} Abbreviating $C_n:=\left(\hspace{-2.5mm} \begin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\hspace{-2.5mm}\right)$, we have for any $n\geq 2$,
\ \ \ \
$
{\partial}c g(C_n)\equiv {\partial}c g^+(C_n)={\partial} g^+(C_n) + D_{g^+(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})}(C_n)
=g^+(\delta^+ C_n) + D_{g^+(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})}(C_n)= g(\delta C_n).
$
\end{proof}
\subsection{Twisting of $\mathcal{H} \mathit{olie}_d$ by a Maurer-Cartan element}\label{2: subsec on the def of twA} Let $\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d$ be a dg free operad generated by degree $1+d-nd$ corollas (\ref{2: Lie_inf corolla}) of type $(1,n)$ with $n\geq 2$, and also by an additional corolla $\begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,3.8mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ of type $(1,0)$ and of degree $d$. The differential is defined on the $(1,n\geq 2)$ generators by the standard formula (\ref{2: d in Lie_infty}), while on the new generator it is defined as follows
\begin{equation}\label{2: delta on (1,0) MC corolla}
\delta \begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}= -
\sum_{{ k\geq 2}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-5,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{equation}
\subsubsection{\bf Lemma}\label{2: Lemma on d^2 for MC Lie} $\delta^2=0$, {\em i.e.\ $\delta$ is indeed a differential in $\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d$}.
\begin{proof} We have (assuming that $d$ is even to simplify signs)
\begin{eqnarray}
\delta^2\hspace{-1mm} \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
&=&
-\sum_{{ k\geq 2}}
\frac{1}{k!}
\delta\left(
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-5,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\right)\\
&=& +\sum_{{ k\geq 2}}
\frac{1}{k!}
\left(\sum_{k=k'+k''\atop k'\geq 2,k''\geq 1}
\frac{k!}{k'!k''!}
\betagin{array}{c}\resizebox{16mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-5,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\right)
- \sum_{{ k\geq 2}}
\frac{1}{k!}
\left(\sum_{l\geq 2}
\frac{k}{l!}
\betagin{array}{c}\resizebox{16mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{l}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k-1}},
(-5,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\right)=0,
\end{eqnarray}
where the first summand comes from (\ref{2: d in Lie_infty}) and the second one from (\ref{2: delta on (1,0) MC corolla}).
\end{proof}
A representation
$$
\rho: \widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d \longrightarrow {\mathcal E} nd_V
$$
of $\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d$ in a dg (appropriately filtered) vector space $(V,d)$ is given by
a $\mathcal{H} \mathit{olie}_d$-algebra structure $\{\mu_n\}_{n\geq 1}$ on $V$,
$$
\mu_1:=d,\ \ \mu_n:=\rho\left(\hspace{-2mm} \begin{array}{c}\resizebox{13mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\hspace{-2mm}\right): \odot^n (V[d]) \rightarrow V[d+1], \ \ n\geq 2,
$$
together with a special element
$
m:= \rho(\hspace{-1mm}\begin{array}{c}\resizebox{1.4mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}\hspace{-1mm})
$
satisfying the equation (a filtration on $V$ is assumed to be such that this sum, which is in general infinite, makes sense)
$$
dm + \sum_{k\geq 2} \frac{1}{k!} \mu_k(m,\ldots, m)=0.
$$
Such an element is called a Maurer-Cartan element of the given $\mathcal{H} \mathit{olie}_d$-algebra structure on $V$.
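For example, if only $\mu_1=d$ and $\mu_2$ are non-zero, i.e.\ if the $\mathcal{H} \mathit{olie}_d$-algebra structure is that of a dg Lie algebra, the above equation reduces to the classical Maurer-Cartan equation
$$
dm+\frac{1}{2}\mu_2(m,m)=0 .
$$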
\subsubsection{\bf Proposition}\label{2: map from HoLB+ to tilda twA} {\em There is a morphism of dg operads
$$
c^+: (\mathcal{H} \mathit{olie}_d^+,\delta^+) \longrightarrow (\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d,\delta)
$$
given on the generators as follows:}
\begin{equation}\label{2: c^+ on Holie-corollas}
\betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
\stackrel{c^+}{\longrightarrow}
\sum_{{ k\geq 1}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-25,8)*{}="1";
(-25,-5)*+{_1}="l1";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\endxy}
\end{array}
, \ \
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\stackrel{c^+}{\longrightarrow}
\sum_{{ k\geq 0}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{26mm}{!}{ \xy
(-25,8)*{}="1";
(-30,-7)*+{_1}="l2";
(-27,-7)*+{_2}="l1";
(-24,-6)*{...};
(-20,-7)*+{_n}="ln";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\ar @{-} "l2";"L" <0pt>
\ar @{-} "ln";"L" <0pt>
\endxy}
\end{array}
\ \forall n\geq 2.
\end{equation}
\begin{proof}
One has to check that $\delta \circ c^+ = c^+\circ \delta^+$. One has (assuming again, for simplicity of signs, that $d$ is even)
\begin{eqnarray}
\delta\circ c^+(\hspace{-1mm} \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}\hspace{-1mm})
\hspace{-2mm}
&=&
\hspace{-2mm}
\sum_{{ k\geq 1}}
\frac{1}{k!}
\ \delta \left(\hspace{-1mm} \betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-25,8)*{}="1";
(-25,-5)*+{_1}="l1";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\endxy}
\end{array}\hspace{-1mm} \right)
=
-\sum_{{ k\geq 1}}
\frac{1}{k!}
\ \left(\sum_{k+1=k'+k''\atop k'\geq 2,k''\geq 0}\frac{k!}{k'!k''!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,9)*{}="1";
(-27,-5)*+{_1}="d";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-27,+3)*{\bulletlet}="L";
(-22,-5)*{\bulletlet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d";"L" <0pt>
\endxy}
\end{array}
\right.\\
&& \hspace{-3mm}+ \left.
\sum_{k=k'+k''\atop k',k''\geq 1}\frac{k!}{k'!k''!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,9)*{}="1";
(-24,-13)*+{_1}="d";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-27,+3)*{\bulletlet}="L";
(-24,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d";"B" <0pt>
\endxy}
\end{array}
\right)
+
\sum_{{ k\geq 1}}
\frac{1}{k!}\hspace{-1mm}
\ \left(\sum_{l\geq 2}\frac{k}{l!}\hspace{-1mm}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,9)*{}="1";
(-27,-5)*+{_1}="d";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{l}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k-1}},
(-27,+3)*{\bulletlet}="L";
(-22,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}\right)
\\
&=&
-\hspace{-1mm} \sum_{k',k''\geq 1}\frac{1}{k'!k''!}\hspace{-2mm}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,9)*{}="1";
(-24,-13)*+{_1}="d";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-27,+3)*{\bulletlet}="L";
(-24,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d";"B" <0pt>
\endxy}
\end{array}
= - c^+\left(\hspace{-2mm}
\betagin{array}{c}\resizebox{2.8mm}{!}{ \betagin{xy}
<0mm,0mm>*{};<0mm,-4mm>*{}**@{-},
<0mm,0mm>*{};<0mm,8mm>*{}**@{-},
<0mm,0mm>*{\bulletletllet};
<0mm,4mm>*{\bulletletllet};
\end{xy}
}\end{array}\hspace{-2mm}\right)
=
c^+\circ \delta^+\left(\hspace{-1mm}\betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bulletletllet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}\hspace{-1mm}\right).
\end{eqnarray}
Similarly one checks the required equality for any $n\geq 2$
\begin{eqnarray}
\delta\circ c^+\hspace{-1mm}\left(\hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\hspace{-3mm}\right)\hspace{-3mm}
&=&
\hspace{-2mm}
\sum_{{ k\geq 0}}
\frac{1}{k!}\, \delta\left(\hspace{-2mm}
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-25,8)*{}="1";
(-30,-7)*+{_1}="l2";
(-27,-7)*+{_2}="l1";
(-24,-6)*{...};
(-20,-7)*+{_n}="ln";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\ar @{-} "l2";"L" <0pt>
\ar @{-} "ln";"L" <0pt>
\endxy}
\end{array}\right)
=
\hspace{-0.5mm} -\sum_{k\geq 1}\frac{1}{k!}\left(
\sum_{k=k'+k''\atop k'\geq 0,k''\geq 1}\frac{k!}{k'!k''!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,9)*{}="1";
(-31,-13)*{_1}="l1";
(-27,-12)*{...};
(-24,-13)*+{_n}="ln";
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-27,+3)*{\bulletlet}="L";
(-24,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"B" <0pt>
\ar @{-} "ln";"B" <0pt>
\endxy}
\end{array}
\right.\\
&& \hspace{-21mm}+\left.
\sum_{k=k'+k''\atop k'\geq 1, k''\geq 0}\frac{k!}{k'!k''!}\sum_{i=1}^n
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-27,9)*{}="1";
(-24,-13)*+{_i}="d";
(-24,-5)*+{}="n1";
(-31,-5)*+{}="n2";
(-27.5,-3)*{...};
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
<-28mm,-8mm>*{\underbrace{ \ \ \ }_{[n]\setminus i}},
(-27,+3)*{\bulletlet}="L";
(-18,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d";"B" <0pt>
\ar @{-} "n1";"L" <0pt>
\ar @{-} "n2";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm} \right)
-\sum_{k\geq 2}\frac{1}{k!}
\left(
\sum_{k=k'+k''\atop k'\geq 2, k''\geq 0}\frac{k!}{k'!k''!}\sum_{i=1}^n
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-27,9)*{}="1";
(-24,-5)*+{_n}="n1";
(-31,-5)*+{_1}="n2";
(-27,-4)*{...};
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(-27,+3)*{\bulletlet}="L";
(-18,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n1";"L" <0pt>
\ar @{-} "n2";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm} \right)
\\
&&\hspace{-21mm}
+\sum_{{ k\geq 1}}
\frac{1}{k!}
\ \left(\sum_{l\geq 2}\frac{k}{l!}
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-27,9)*{}="1";
(-24,-5)*+{_n}="n1";
(-31,-5)*+{_1}="n2";
(-27,-4)*{...};
(-3,-5)*{...};
(-15,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{l}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k-1}},
(-27,+3)*{\bulletlet}="L";
(-18,-5)*{\bulletletllet}="B";
(-20,-13)*{\bulletlet}="b1";
(-11,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n1";"L" <0pt>
\ar @{-} "n2";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}\right)
- \sum_{k\geq 0}\frac{1}{k!}
\sum_{k=k'+k''\atop k',k''\geq 0}\frac{k!}{k'!k''!}\sum_{[n]=I'\sqcup I'' \atop \# I', \#I'' \geq 2}
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-27,9)*{}="1";
(-26,-13)*+{}="d1";
(-20,-13)*+{}="d2";
(-22,-11.2)*{...};
(-24,-5)*+{}="n1";
(-31,-5)*+{}="n2";
(-27.5,-3)*{...};
(-3,-5)*{...};
(-15,-13)*{...};
<-11mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ }_{k'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
<-28mm,-7.5mm>*{\underbrace{ \ \ \ }_{I''}},
<-22mm,-14mm>*{\underbrace{ \ \ \ }_{I'}},
(-27,+3)*{\bulletlet}="L";
(-18,-5)*{\bulletletllet}="B";
(-15,-13)*{\bulletlet}="b1";
(-8,-13)*{\bulletlet}="b2";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "d1";"B" <0pt>
\ar @{-} "d2";"B" <0pt>
\ar @{-} "n1";"L" <0pt>
\ar @{-} "n2";"L" <0pt>
\endxy}
\end{array}
\\
&=&\hspace{-2mm} -c^+\left(\hspace{-2mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,11)*{}="1";
(-0,6)*{\bulletlet}="0";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"0" <0pt>
\ar @{-} "1";"0" <0pt>
\endxy}
\end{array}
\hspace{-3mm}
+
\sum_{i=1}^n\hspace{-2mm}
\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-0,5)*{}="1";
(3,-13)*+{_i}="d1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(3.3,-7)*{\bulletlet}="Li";
(-3,-7)*+{_2}="L2";
(0,-6)*{...};
(6,-6)*{...};
(10,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"Li" <0pt>
\ar @{-} "C";"1" <0pt>
\ar @{-} "d1";"Li" <0pt>
\endxy}
\end{array} \hspace{-2mm} \right) - 0 + c^+\hspace{-1mm}\left(\delta\hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\hspace{-2mm}\right)
=
c^+\hspace{-1mm}\left(\delta^+\hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\hspace{-2mm}\right).
\end{eqnarray}
\end{proof}
Hence by Proposition {\ref{2: Prop on Holie -> O with d+}}, the differential in the operad $\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d$ can be twisted,
$$
\delta\rightarrow
\delta_{\centerdot}=\delta + D_{c^+(\hspace{-1.4mm}\begin{array}{c}\resizebox{1mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}\hspace{-1.4mm})}.
$$
The operad $\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d$ equipped with the twisted differential $\delta_{\centerdot}$ is denoted from now on by $\mathsf{tw}\mathcal{H} \mathit{olie}_d$.
\subsubsection{\bf Definition-proposition} The data $\mathsf{tw}\mathcal{H} \mathit{olie}_d:= (\{\mathsf{tw}\mathcal{H} \mathit{olie}_d(n)\}_{n\geq 0}, \delta_{\centerdot})$
is called the {\it twisted operad of strongly homotopy Lie algebras}.
It comes equipped with a
monomorphism
$$
c: (\mathcal{H} \mathit{olie}_d,\delta) \longrightarrow (\mathsf{tw} \mathcal{H} \mathit{olie}_d, \delta_{\centerdot})
$$
given on the generators of $\mathcal{H} \mathit{olie}_d$ by the second expression in formula (\ref{2: c^+ on Holie-corollas}).
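On the level of representations the monomorphism $c$ reproduces the familiar formula for twisting a $\mathcal{H} \mathit{olie}_d$-algebra structure $\{\mu_n\}$ on $V$ by a Maurer-Cartan element $m$: up to the standard Koszul signs, and assuming the relevant sums make sense, the twisted operations read
$$
\mu^{m}_n(v_1,\ldots,v_n)=\sum_{k\geq 0}\frac{1}{k!}\, \mu_{n+k}(\underbrace{m,\ldots,m}_{k},v_1,\ldots,v_n),
\ \ \ n\geq 2.
$$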
\subsection{Twisting of operads under $\mathcal{H} \mathit{olie}_d$ \cite{W}}
Let $({\mathcal A}, {\p})$ be a dg operad equipped with a non-trivial morphism of operads
$$
i: \mathcal{H}\mathit{olie}_d \longrightarrow {\mathcal A}
$$
Such an operad is called an {\it operad under $\mathcal{H} \mathit{olie}_d$}. {\em Generic}\, elements of ${\mathcal A}=\{{\mathcal A}(n)\}_{n\geq 0}$ are denoted in this paper as decorated corollas with, say, white vertices (to distinguish them from generators of $\mathcal{H} \mathit{olie}_d$),
$$
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\circ}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array} \in {\mathcal A}(n),\ \ \ n\geq 1.
$$
The images of the generators (\ref{2: Lie_inf corolla}) of $\mathcal{H} \mathit{olie}_d$ under the map $i$ are denoted by decorated corollas with vertices shown as $\circledcirc$ (to emphasize the special status of these elements of ${\mathcal A}$),
$$
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\circledcirc}="a",
(0,0)*{\bulletlet},
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}
:=i\left(
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}\right)\in {\mathcal A}(n),\ \ \ n\geq 2.
$$
It is worth noting that some of these elements can stand for the zero vector in ${\mathcal A}(n)$ as we do not assume in general that the map $i$ is an injection on every generator.
We define a dg operad $\widetilde{\mathsf{tw}}{\mathcal A}=\{\widetilde{\mathsf{tw}}{\mathcal A}(n)\}_{n\geq 0}$ as an operad generated freely by ${\mathcal A}$ and one new generator $\begin{array}{c}\resizebox{2.5mm}{!}{ \xy
(0,0)*{\bullet}="a",
(0,4)*{}="0",
\ar @{-} "a";"0" <0pt>
\endxy}\end{array}$ of type $(1,0)$ and of cohomological degree $d$. The differential in $\widetilde{\mathsf{tw}}{\mathcal A}$, still denoted by ${\p}$, agrees with the original differential of ${\mathcal A}$ when acting on elements of ${\mathcal A}$, and its action on the new generator is defined by
\begin{equation}\label{2: sd on (1,0) generator in A}
{\p} \begin{array}{c}\resizebox{1.4mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}= -
\sum_{{ k\geq 2}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
<-5mm,-8mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-5,+3)*{\circledcirc}="L";(-5,+3)*{\bulletlet};
(-14,-4)*{\bulletlet}="B";
(-8,-4)*{\bulletlet}="C";
(3,-4)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{equation}
There is a chain of operadic morphisms,
$$
i^+: (\mathcal{H} \mathit{olie}_d^+, \delta^+) \stackrel{c^+}{\longrightarrow} (\widetilde{\mathsf{tw}}\mathcal{H} \mathit{olie}_d, \delta) \stackrel{i}{\longrightarrow} (\widetilde{\mathsf{tw}}{\mathcal A}, {\p})
$$
where the map $i$ is extended to the extra generator as the identity map.
Using Proposition {\ref{2: Prop on Holie -> O with d+}}, one concludes that the differential ${\p}$ in $\widetilde{\mathsf{tw}}{\mathcal A}$ can be twisted,
$$
{\p} \rightarrow {\p}_\centerdot:= {\p} + D_{i^+(\hspace{-1.4mm}\begin{array}{c}\resizebox{1mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}\hspace{-1.4mm})}
$$
This makes the ${\mathbb S}$-module $\widetilde{\mathsf{tw}}{\mathcal A}$ into a new dg operad denoted from now on by $\mathsf{tw}{\mathcal A}$.
\subsubsection{\bf Definition-proposition} For any dg operad $({\mathcal A},\delta)$ under $\mathcal{H} \mathit{olie}_d$, the associated dg operad
$$
\mathsf{tw}{\mathcal A}:= (\{\mathsf{tw}{\mathcal A}(n)\}_{n\geq 0}, {\p}_{\centerdot})
$$
is called the {\it twisted extension of ${\mathcal A}$} or the {\it twisted operad of ${\mathcal A}$}. There is
\begin{itemize}
\item[(i)] a morphism of dg operads
$$
\iota: (\mathcal{H} \mathit{olie}_d,\delta) \longrightarrow (\mathsf{tw}{\mathcal A}, {\p}_{\centerdot})
$$
which factors through the composition
$$
(\mathcal{H} \mathit{olie}_d,\delta) \stackrel{c}{\longrightarrow} (\mathsf{tw}\mathcal{H} \mathit{olie}_d, \delta_{\centerdot}) \stackrel{\mathsf{tw}(i)}\longrightarrow (\mathsf{tw}{\mathcal A}, {\p}_{\centerdot})
$$
and hence is given explicitly by
\begin{equation}\label{2: i on Holie-corollas to Tw(O)}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(7,-7)*{_{n-1}},
(13,-7)*{_n},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}
\ \
\stackrel{\iota}{\longrightarrow}
\ \
\sum_{{ k\geq 0}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{27mm}{!}{ \xy
(-25,8)*{}="1";
(-30,-7)*+{_1}="l2";
(-27,-7)*+{_2}="l1";
(-24,-6)*{...};
(-20,-7)*+{_n}="ln";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\circledcirc}="L";(-25,+3)*{\bulletlet};
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\ar @{-} "l2";"L" <0pt>
\ar @{-} "ln";"L" <0pt>
\endxy}
\end{array} \ \ \ \forall\ n\geq 2.
\end{equation}
\item[(ii)] a natural epimorphism of dg operads
$$
p: (\mathsf{tw}{\mathcal A}, {\p}_\centerdot) \longrightarrow ({\mathcal A}, {\p})
$$
which sends the extra generator $\begin{array}{c}\resizebox{2.5mm}{!}{ \xy
(0,0)*{\bullet}="a",
(0,4)*{}="0",
\ar @{-} "a";"0" <0pt>
\endxy}\end{array}$ to zero.
\end{itemize}
\subsubsection{\bf Proposition \cite{DW}}\label{2: Prop of DW on exactness of Tw} {\em The endofunctor $\mathsf{tw}$ in the category of operads under
$\mathcal{H} \mathit{olie}_d$ is exact, i.e.\ for any diagram
$$
\mathcal{H} \mathit{olie}_d \longrightarrow {\mathcal A} \stackrel{g}{\longrightarrow} {\mathcal A}'
$$
with $g$ a quasi-isomorphism, the map $\mathsf{tw}(g)$ in the associated diagram
$$
\mathcal{H} \mathit{olie}_d \longrightarrow \mathsf{tw}{\mathcal A} \stackrel{\mathsf{tw}(g)}{\longrightarrow} \mathsf{tw}{\mathcal A}'
$$
is also a quasi-isomorphism.}
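In particular, the canonical quasi-isomorphism $\mathcal{H} \mathit{olie}_d\rightarrow \mathcal{L} \mathit{ie}_d$ induces a quasi-isomorphism of dg operads $\mathsf{tw}\mathcal{H} \mathit{olie}_d \rightarrow \mathsf{tw}\mathcal{L} \mathit{ie}_d$ (cf.\ the subsection on the twisting of $\mathcal{L} \mathit{ie}_d$ below).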
\subsection{An action of the deformation complex of $\mathcal{H} \mathit{olie}_d\stackrel{i}{\rightarrow} {\mathcal A}$ on $\mathsf{tw}{\mathcal A}$}
Given a dg operad $({\mathcal A},{\p})$ under $\mathcal{H} \mathit{olie}_d$,
$$
\begin{array}{rccc}
i: & \mathcal{H}\mathit{olie}_d & \longrightarrow & {\mathcal A} \\
&
\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\bulletlet}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
&\longrightarrow &
\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circledcirc}="C";(0,+1)*{\bulletlet};
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
\end{array}
$$
Consider a dg Lie algebra controlling deformations of the morphism $i$ (see \cite{MV} for several equivalent constructions of such a dg Lie algebra),
$$
\mathsf{Def}\left(\mathcal{H} \mathit{olie}_d \stackrel{i}{\rightarrow} {\mathcal A}\right)=\prod_{n\geq 2} {\mathcal A}(n)\otimes_{{\mathbb S}_n} {\mathit s \mathit g\mathit n}_n^{|d|}[d(1-n)]
$$
Its Maurer-Cartan elements are in 1-1 correspondence with morphisms $\mathcal{H} \mathit{olie}_d\rightarrow {\mathcal A}$ which are deformations of $i$; in particular the zero MC element corresponds to $i$ itself.
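For example, for ${\mathcal A}={\mathcal E} nd_V$, with $i$ corresponding to a $\mathcal{H} \mathit{olie}_d$-algebra structure on a dg vector space $V$, Maurer-Cartan elements of this dg Lie algebra correspond to $\mathcal{H} \mathit{olie}_d$-algebra structures on $V$ deforming the given one (with the differential on $V$ kept fixed).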
An element $\gamma$ of the above complex can be represented pictorially as a collection of $(1,n)$-corollas,
$$
\gamma=\{\hspace{-2mm} \betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,-9.4)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ }_n};
(0,+1)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}\hspace{-2mm} \}_{n\geq 2},
$$
whose vertices are decorated with elements of ${\mathcal A}(n)\otimes_{{\mathbb S}_n} {\mathit s \mathit g\mathit n}_n^{|d|}$ and whose input legs are (skew)symmetrized (so that we can omit their labels); the degrees of the decorations of vertices are shifted by $d(1-n)$. To distinguish these elements from the generic elements of ${\mathcal A}$
as well as from the images of $\mathcal{H} \mathit{olie}_d$-generators under $i$, we denote the vertices of such corollas from now on by $\circledast$. A formal sum of such corollas is homogeneous of degree $p$ if and only if the degree of each contributing $(1,n)$-corolla is equal to $p+d-dn$. The differential $\delta$ in the deformation complex $\mathsf{Def}\left(\mathcal{H} \mathit{olie}_d \stackrel{i}{\rightarrow} {\mathcal A}\right)$ can then be given explicitly by
\begin{equation}\label{2: MC eqn in Def(Lie to A)}
\delta \hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,-9.5)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ }_n};
(0,+1)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
=
{\p} \hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-0,6)*{}="1";
(0,-9.5)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ }_n};
(0,+1)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
+
\sum_{[n]=[n']\sqcup [n'']\atop n'\geq 2,n''\geq 1}\left(
\pm
\betagin{array}{c}\resizebox{19mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
(-12.5,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n''}},
(-5,+3)*{\circledcirc}="L";(-5,+3)*{\bulletlet};
(-14,-5)*{\circledast}="B";
(-20,-13)*{}="b1";
(-17,-13)*{}="b2";
(-10,-13)*{}="b3";
(-8,-5)*{}="C";
(3,-5)*{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "B";"b3" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\mp (-1)^{|\circledast|}
\betagin{array}{c}\resizebox{19mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
(-12.5,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n''}},
(-5,+3)*{\circledast}="L";
(-14,-5)*{\circledcirc}="B";(-14,-5)*{\bulletlet};
(-20,-13)*{}="b1";
(-17,-13)*{}="b2";
(-10,-13)*{}="b3";
(-8,-5)*{}="C";
(3,-5)*{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "B";"b3" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\right)
\end{equation}
where the rule of signs depends on $d$ and is read from (\ref{2: d in Lie_infty}); for $d$ even the first $\pm$-symbol is $+1$, while the second one is $-1$.
Let $(\mathrm{Der}(\mathsf{tw}{\mathcal A}),\ [\ ,\ ])$ be the Lie algebra of derivations of the non-differential operad $\mathsf{tw}{\mathcal A}$. The differential ${\p}_\centerdot$ in $\mathsf{tw}{\mathcal A}$ is, of course, a Maurer-Cartan element of the latter Lie algebra and hence makes $\mathrm{Der}(\mathsf{tw}{\mathcal A})$ into a dg Lie algebra with the differential given by the commutator $[{\p}_\centerdot,\ ]$.
\subsubsection{\bf Proposition}\label{2: Theroem on Def action on TwA}{\it
There is a canonical morphism of dg Lie algebras
\begin{equation}\label{2: Def to Der(A)}
\begin{array}{rccc}
\Phi: & \mathsf{Def}\left(\mathcal{H} \mathit{olie}_d \stackrel{i}{\rightarrow} {\mathcal A}\right) &\longrightarrow & \mathrm{Der}(\mathsf{tw}{\mathcal A})\\
& \gamma & \longrightarrow & \Phi_\gamma
\end{array}
\end{equation}
where the derivation $\Phi_\gamma$ is given on the generators by
$$
\begin{array}{rccc}
\Phi_\gamma: &\mathsf{tw}{\mathcal A} & \longrightarrow & \mathsf{tw}{\mathcal A} \ \ \ \ \ \ \ \ \ \ \ \\
& \begin{array}{c}\resizebox{1.4mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
& \longrightarrow & \displaystyle
\sum_{{ k\geq 2}}
\frac{1-k}{k!}
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-5,8)*{}="1";
(-3,-5)*{...};
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-5,+3)*{\circledast}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} \ \ \ \ \ \ \ \ \
\\
& \betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
&\longrightarrow &\displaystyle
\sum_{k\geq 1} \frac{1}{k!}
\left( -
\betagin{array}{c}\resizebox{17mm}{!}{ \xy
(-0,11)*{}="1";
(-0,6)*{\circledast}="0";
(5,1)*{\bulletlet}="r1";
(13,1)*{\bulletlet}="r2";
(9,1)*{_{...}};
<9mm,-3mm>*{\underbrace{ \ \ \ \ \ \ \ \ }_{k}},
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"0" <0pt>
\ar @{-} "1";"0" <0pt>
\ar @{-} "r1";"0" <0pt>
\ar @{-} "r2";"0" <0pt>
\endxy}
\end{array}
{ +}
(-1)^{|\circledast||\circ|}\sum_{i=1}^n\hspace{-2mm}
\betagin{array}{c}\resizebox{17mm}{!}{ \xy
(-0,5)*{}="1";
(3,-15)*+{_i}="d1";
(7,-14.5)*{\bulletlet}="d2";
(15,-14.5)*{\bulletlet}="d3";
(11,-14.5)*{_{...}};
<11mm,-19mm>*{\underbrace{ \ \ \ \ \ \ \ \ }_{k}},
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(3,-8)*{\circledast}="Li";
(-3,-7)*+{_2}="L2";
(0,-6)*{...};
(6,-6)*{...};
(10,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"Li" <0pt>
\ar @{-} "C";"1" <0pt>
\ar @{-} "d1";"Li" <0pt>
\ar @{-} "d2";"Li" <0pt>
\ar @{-} "d3";"Li" <0pt>
\endxy}
\end{array}
\right)
\end{array}
$$
}
\begin{proof} Any derivation of $\mathsf{tw} {\mathcal A}$ is uniquely determined by its values on the generators, i.e.\ on $\begin{array}{c}\resizebox{1.5mm}{!}{\begin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ and on every element $\begin{array}{c}\resizebox{11mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}$ of ${\mathcal A}$. The first value can be chosen arbitrarily, while the second ones are subject to the condition that they are derivations of the operad structure in ${\mathcal A}$; as $\Phi_\gamma$ applied to $\begin{array}{c}\resizebox{11mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}$ is precisely of the form $D_h$ discussed in \S{\ref{2: subsec on twisting d in operads}}, we conclude that the above formulae do define a derivation of $\mathsf{tw}{\mathcal A}$ as a non-differential operad. Hence it remains to show that the map $\Phi$ respects the differentials in both dg Lie algebras, that is, satisfies the equation
\begin{equation}\label{2: compatibility of Der with d}
\Phi_{\delta\gamma}=[{\partial}_\centerdot, \Phi_\gamma].
\end{equation}
It is straightforward to check that the operator equality (\ref{2: compatibility of Der with d}) holds true when applied to generators of $\mathsf{tw}{\mathcal A}$ if and only if one has the equality,
$$
\sum_{{ k\geq 1}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-25,8)*{}="1";
(-25,-7)*+{}="l1";
(-8,-5)*{...};
<-9mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+1)*{(\delta\circledast)}="L";
(-18,-5)*{\bulletlet}="B";
(-13,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\endxy}
\end{array}
=
\sum_{{ k\geq 1}}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-25,8)*{}="1";
(-25,-7)*+{}="l1";
(-8,-5)*{...};
<-9mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+1)*{({\p}_\centerdot\circledast)}="L";
(-18,-5)*{\bulletlet}="B";
(-13,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "l1";"L" <0pt>
\endxy}
\end{array}
+
\sum_{k\geq 1\atop l\geq 2} \frac{(-1)^{|\circledast|}}{k!l!}
\betagin{array}{c}\resizebox{20mm}{!}{ \xy
(0,11)*{}="1";
(-5,1)*{}="l";
(-0,6)*{\circledcirc}="0";(-0,6)*{\bulletlet};
(5,1)*{\bulletlet}="r1";
(13,1)*{\bulletlet}="r2";
(9,1)*{_{...}};
<9mm,-3mm>*{\underbrace{ \ \ \ \ \ \ \ \ }_{k}},
(0,+1)*{\circledast}="C";
(-7,-7)*{_\bulletlet}="L1";
(-3,-7)*{_\bulletlet}="L2";
(2,-5)*{...};
(7,-7)*{_\bulletlet}="L3";
<0mm,-11mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{l}},
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"0" <0pt>
\ar @{-} "1";"0" <0pt>
\ar @{-} "l";"0" <0pt>
\ar @{-} "r1";"0" <0pt>
\ar @{-} "r2";"0" <0pt>
\endxy}
\end{array}
$$
which is indeed the case due to (\ref{2: MC eqn in Def(Lie to A)}). The compatibility of the map $\Phi$ with Lie brackets is almost obvious.
\end{proof}
Thus the Lie algebra $H^\bulletlet(\mathsf{Def}(\mathcal{H} \mathit{olie}_d \rightarrow {\mathcal A}))$ acts on the cohomology of the twisted operad $\mathsf{tw}{\mathcal A}$ by derivations. For some operads (see the Example in \S{\ref{2: Example Graphs_d operad}} below) this cohomology Lie algebra can be extremely rich and interesting.
\subsection{Twisting of $\mathcal{L} \mathit{ie}_d$} Assume, in the above notation, that ${\mathcal A}$ is $\mathcal{L} \mathit{ie}_d$ and the morphism
$$
i: \mathcal{H} \mathit{olie}_d \longrightarrow \mathcal{L} \mathit{ie}_d
$$
is the canonical quasi-isomorphism. Then $\mathsf{tw}\mathcal{L} \mathit{ie}_d$ is a dg operad generated by
the degree $1-d$ corolla \hspace{-2mm}
$
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,5)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm} = (-1)^d\hspace{-2mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,5)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_2}="C";
(-2,-5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
$
(modulo the Jacobi relations) and the degree $d$ $(1,0)$-corolla $\betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$.
The twisted differential $\delta_\centerdot$ is trivial on the first generator (due to the Jacobi identities)
$$
\delta_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\equiv
\betagin{array}{c}\resizebox{11.0mm}{!}{ \xy
(-9,8)*{}="1";
(-9,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-18,-10)*+{_1}="b1";
(-10,-10)*+{_2}="b2";
(-3,-5)*{\bulletlet}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+(-1)^d
\betagin{array}{c}\resizebox{11.5mm}{!}{ \xy
(-9,8)*{}="1";
(-9,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-18,-10)*+{_1}="b1";
(-10,-10)*{\bulletlet}="b2";
(-3,-5)*+{_2}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{11.5mm}{!}{ \xy
(-9,8)*{}="1";
(-9,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-18,-10)*+{_2}="b1";
(-10,-10)*{\bulletlet}="b2";
(-3,-5)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}=0,
$$
while it acts on the second generator by
$$
\delta_\centerdot \betagin{array}{c}\resizebox{1.4mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}=
\frac{1}{2}
\betagin{array}{c}\resizebox{6.0mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}.
$$
The twisted morphism $\mathsf{tw}(i): \mathcal{L} \mathit{ie}_d \longrightarrow \mathsf{tw}\mathcal{L} \mathit{ie}_d$ becomes in this case the obvious inclusion.
This example is important for understanding those twisted operads $\mathsf{tw}{\mathcal A}$ for which the map $\mathcal{H} \mathit{olie}_d\rightarrow {\mathcal A}$ factors through the above projection,
$$
\mathcal{H} \mathit{olie}_d \longrightarrow \mathcal{L} \mathit{ie}_d \longrightarrow {\mathcal A}.
$$
We call such dg operads {\em operads under $\mathcal{L} \mathit{ie}_d$}. The deformation complex of the epimorphism $i$ has almost trivial cohomology, $H^\bulletlet(\mathsf{Def}(\mathcal{H}\mathit{olie}_{d}\rightarrow \mathcal{L} \mathit{ie}_d))= {\mathbb R}[-1]$; the only non-trivial cohomology class acts on $\mathsf{tw}\mathcal{L}\mathit{ie}_d$ by rescaling its generators.
\subsection{Twisting of (prop)operads under $\mathcal{L} \mathit{ie}_d$}\lambdabel{2: subsec on twisting of (prop)erads under Lie} Let $({\mathcal A}, {\p})$ be a dg operad equipped with an operadic morphism
$$
\betagin{array}{rccc}
\imath: & (\mathcal{L} \mathit{ie}_d, 0) & \longrightarrow & ({\mathcal A},{\p})\\
& \betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,4mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
\end{xy}\end{array} & \longrightarrow & \betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";(-5,+1)*{\bulletlet};
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{array}
$$
where $\mathcal{L} \mathit{ie}_d$ is understood as a differential operad with the trivial differential. Then $(\mathsf{tw}{\mathcal A},{\p}_\centerdot)$ is an operad freely generated by ${\mathcal A}$ and one new generator $\betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ of degree $d$. The differential ${\p}_\centerdot$ acts, by definition, on an element $a$ of ${\mathcal A}(n)$ (identified with the $a$-decorated $(1,n)$-corolla) by a formula similar to (\ref{2: delta^+})
\betagin{equation}\lambdabel{2: sd_c on generators of TwO}
{\p}_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
:=
{\p}\hspace{-3mm}
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-0,6)*{}="1";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"1" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-0,11)*{}="1";
(-0,6)*{\circledcirc}="0";(-0,6)*{\bulletlet};
(5,1)*{\bulletlet}="r";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"0" <0pt>
\ar @{-} "1";"0" <0pt>
\ar @{-} "r";"0" <0pt>
\endxy}
\end{array}
-
(-1)^{|a|}\sum_{i=1}^n\hspace{-2mm}
\betagin{array}{c}\resizebox{16mm}{!}{ \xy
(-0,5)*{}="1";
(3,-15)*+{_i}="d1";
(9,-14.5)*{\bulletlet}="d2";
(0,+1)*{\circ}="C";
(-7,-7)*+{_1}="L1";
(3,-8)*{\circledcirc}="Li";(3,-8)*{\bulletlet};
(-3,-7)*+{_2}="L2";
(0,-6)*{...};
(6,-6)*{...};
(10,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"Li" <0pt>
\ar @{-} "C";"1" <0pt>
\ar @{-} "d1";"Li" <0pt>
\ar @{-} "d2";"Li" <0pt>
\endxy}
\end{array}
\end{equation}
and on the extra generator as follows
\betagin{equation}\lambdabel{2: d_c on boxdot}
{\p}_\centerdot \betagin{array}{c}\resizebox{1.4mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}=
\frac{1}{2}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";(-5,+1)*{\bulletlet};
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}.
\end{equation}
By Proposition~{\ref{2: Prop of DW on exactness of Tw}}, the canonical epimorphism
$$
\mathsf{tw} \mathcal{H} \mathit{olie}_d \longrightarrow \mathsf{tw}\mathcal{L} \mathit{ie}_d
$$
is a quasi-isomorphism. Moreover, it is proven in \cite{DW} that the natural projections,
\betagin{equation}\lambdabel{2: quasi-iso twHoLB to HoLB, twLB to LB}
\mathsf{tw}\mathcal{H} \mathit{olie}_d \longrightarrow \mathcal{H} \mathit{olie}_d, \ \ \ \mathsf{tw}\mathcal{L} \mathit{ie}_d\longrightarrow \mathcal{L} \mathit{ie}_d
\end{equation}
are quasi-isomorphisms as well.
\subsubsection{\bf Example: Twisting of ${\mathcal A} ss$} The operad of associative algebras ${\mathcal A} ss$ is obviously an operad under $\mathcal{L} \mathit{ie}_1$ and hence can be twisted. It is proven in \cite{CL} that the natural projection
$$
\mathsf{tw} {\mathcal A} ss \longrightarrow {\mathcal A} ss
$$
is a quasi-isomorphism.
\subsection{Example: M.\ Kontsevich's operad of graphs}\lambdabel{2: Example Graphs_d operad} Here is an example of the twisting procedure used in \cite{W} to reproduce an important dg operad of graphs ${\mathcal G} raphs_d$ which has been invented by M.\ Kontsevich in \cite{Ko2} in the context of a new proof of the formality of the little disks operad, and which was further studied in \cite{LV,W}.
By a {\em graph}\, $\Gamma$ we understand a 1-dimensional $CW$ complex whose 0-cells are called vertices and 1-cells are called edges; the set of vertices of $\Gamma$ is denoted by $V(\Gamma)$ and the set of edges by $E(\Gamma)$.
Let $\mathcal{G} ra_d(n)$, $d\in {\mathbb Z}$, stand for the graded vector space generated by graphs $\Gamma$ such that
\betagin{itemize}
\item[(i)] $\Gamma$ has precisely $n$ vertices which are labelled, that is an isomorphism $V(\Gamma)\rightarrow [n]$ is fixed;
\item[(ii)] $\Gamma$ is equipped with an orientation which for $d$ even is defined as an ordering of edges (up to the sign action of ${\mathbb S}_{\# E(\Gamma)}$), while for $d$ odd it is defined as a choice of a direction on each edge (up to the sign action of ${\mathbb S}_2$ whose generator flips the direction);
\item[(iii)] $\Gamma$ is assigned the cohomological degree $(1-d)\# E(\Gamma)$.
\end{itemize}
For example,
$$
\betagin{array}{c}\resizebox{9mm}{!}{
\xy
(0,2)*{^1},
(8,2)*{^2},
(0,0)*{\circ}="a",
(8,0)*{\circ}="b",
\ar @{-} "a";"b" <0pt>
\endxy}\end{array} \in {\mathcal G} ra_{d}(2), \ \ \
\betagin{array}{c}\resizebox{10mm}{!}{
\xy
(-7,4)*{^1},
(-7,-4)*{_2},
(4,2)*{^3},
{\ar@{-}(-5,4)*{\circ};(-5,-4)*{\circ}};
{\ar@{-}(-5,4)*{\circ};(4,0)*{\circ}};
{\ar@{-}(4,0)*{\circ};(-5,-4)*{\circ}};
\endxy}\end{array}\in {\mathcal G} ra_{d}(3)
$$
where for $d$ odd one should assume a choice of directions on edges (defined up to a flip
and multiplication by $-1$).
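As a quick illustration of the grading convention in item (iii) above (our own remark, not a statement from the cited sources): every edge contributes $1-d$ to the cohomological degree, so the two displayed graphs sit in degrees
$$
\deg\Big(\text{one-edge graph}\Big)=1-d,
\qquad
\deg\Big(\text{triangle}\Big)=3(1-d);
$$
in particular, for $d=2$ these degrees are $-1$ and $-3$ respectively.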
The ${\mathbb Z}$-graded vector space $\mathcal{G} ra_d(n)$ is an ${\mathbb S}_n$-module with the permutation group acting on graphs by relabeling their vertices. The ${\mathbb S}$-module
$$
\mathcal{G} ra_d:= \{
\mathcal{G} ra_d(n)\}
$$
is an operad \cite{W} with the
operadic compositions
\betagin{equation}\lambdabel{3: operad comp in Gra}
\betagin{array}{rccc}
\circ_i: & {\mathcal G} ra_d (n)\otimes {\mathcal G} ra_d (m) & \longrightarrow & {\mathcal G} ra_d (m+n-1)\\
& \Gamma_1 \otimes \Gamma_2 &\longrightarrow & \Gamma_1\circ_i \Gamma_2
\end{array}
\end{equation}
defined as follows: $\Gamma_1\circ_i \Gamma_2$ is the linear combination of graphs obtained by substituting the graph $\Gamma_2$ into the $i$-labeled vertex of $\Gamma_1$ and taking a sum over all possible re-attachments of dangling edges (attached earlier to that vertex) to the vertices of $\Gamma_2$. Here is an example (for $d$ odd),
$$
\betagin{array}{c}\resizebox{10mm}{!}{
\xy
(-5,2)*{^1},
(5,2)*{^2},
{\ar@/^0.6pc/(-5,0)*{\bulletlet};(5,0)*{\bulletlet}};
{\ar@/^0.6pc/(5,0)*{\bulletlet};(-5,0)*{\bulletlet}};
\endxy}
\end{array}
\ \ \circ_1\
\betagin{array}{c}\resizebox{4mm}{!}{
\xy
(-2,7)*{^1},
(-2,0)*{_2},
{\ar@{->}(0,7)*{\bulletlet};(0,0)*{\bulletlet}};
\endxy}\end{array}
=
\betagin{array}{c}\resizebox{12mm}{!}{
\xy
(-7,7)*{^1},
(-7,0)*{_2},
(5,2)*{^3},
{\ar@{->}(-5,7)*{\bulletlet};(-5,0)*{\bulletlet}};
{\ar@/^0.6pc/(-5,0)*{\bulletlet};(5,0)*{\bulletlet}};
{\ar@/^0.6pc/(5,0)*{\bulletlet};(-5,0)*{\bulletlet}};
\endxy}\end{array}
+
\betagin{array}{c}\resizebox{12mm}{!}{
\xy
(-7,0)*{^1},
(-7,-7)*{_2},
(5,2)*{^3},
{\ar@{->}(-5,0)*{\bulletlet};(-5,-7)*{\bulletlet}};
{\ar@/^0.6pc/(-5,0)*{\bulletlet};(5,0)*{\bulletlet}};
{\ar@/^0.6pc/(5,0)*{\bulletlet};(-5,0)*{\bulletlet}};
\endxy}\end{array}
+
\betagin{array}{c}\resizebox{11mm}{!}{
\xy
(-7,4)*{^1},
(-7,-4)*{_2},
(5,2)*{^3},
{\ar@{->}(-5,4)*{\bulletlet};(-5,-4)*{\bulletlet}};
{\ar@{->}(-5,4)*{\bulletlet};(5,0)*{\bulletlet}};
{\ar@{->}(5,0)*{\bulletlet};(-5,-4)*{\bulletlet}};
\endxy}\end{array}
+
\betagin{array}{c}\resizebox{11mm}{!}{
\xy
(-7,4)*{^1},
(-7,-4)*{_2},
(5,2)*{^3},
{\ar@{->}(-5,4)*{\bulletlet};(-5,-4)*{\bulletlet}};
{\ar@{<-}(-5,4)*{\bulletlet};(5,0)*{\bulletlet}};
{\ar@{<-}(5,0)*{\bulletlet};(-5,-4)*{\bulletlet}};
\endxy}
\end{array}
$$
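The number of summands in such a composition is easy to predict (a simple count on our part, not a statement taken from \cite{W}): if the $i$-labelled vertex of $\Gamma_1$ carries $k$ dangling edges and $\Gamma_2$ has $p$ vertices, then $\Gamma_1\circ_i \Gamma_2$ is a sum of $p^k$ graphs. In the example just displayed $k=2$ and $p=2$, which accounts for the $2^2=4$ summands.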
There is a morphism of operads \cite{W}
$$
\betagin{array}{ccc}
\mathcal{L} \mathit{ie}_d & \longrightarrow & \mathcal{G} ra_d\\
\betagin{array}{c}
\xy
<0mm,0.55mm>*{};<0mm,3.5mm>*{}**@{-},
<0.5mm,-0.5mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.48mm,-0.48mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.5mm,-0.5mm>*{};<2.7mm,-3.2mm>*{_2}**@{},
<-0.48mm,-0.48mm>*{};<-2.7mm,-3.2mm>*{_1}**@{},
\endxy\end{array}
&\longrightarrow &
\xy
(0,2)*{_{1}},
(5,2)*{_{2}},
(0,0)*{\circ}="a",
(5,0)*{\circ}="b",
\ar @{-} "a";"b" <0pt>
\endxy
\end{array}
$$
so that one can apply the twisting endofunctor to $\mathcal{G} ra_d$. The resulting dg operad $\mathsf{tw} \mathcal{G} ra_d$ is generated by graphs with two types of vertices, white ones which are labelled and black ones which are unlabelled and assigned the cohomological degree $d$, e.g.
$$
\betagin{array}{c}\resizebox{10mm}{!}{
\xy
(-7,4)*{^2},
(-7,-4)*{_1},
{\ar@{-}(-5,4)*{\circ};(-5,-4)*{\circ}};
{\ar@{-}(-5,4)*{\circ};(4,0)*{\circ}};
{\ar@{-}(4,0)*{\bulletlet};(-5,-4)*{\circ}};
\endxy}\end{array}\in {\mathcal G} raphs_{d}(2)
$$
The differential acts on white vertices and black vertices by splitting them,
\betagin{equation}\lambdabel{2: d in Graphs}
\xy
(0,2)*{_{i}},
(0,0)*{\circ}="a",
\endxy
\rightsquigarrow
\xy
(0,2)*{_{i}},
(0,0)*{\circ}="a",
(5,0)*{\bulletlet}="b",
\ar @{-} "a";"b" <0pt>
\endxy
,
\ \ \ \
\xy
(0,0)*{\bulletlet}="a",
\endxy
\rightsquigarrow
\xy
(0,0)*{\bulletlet}="a",
(5,0)*{\bulletlet}="b",
\ar @{-} "a";"b" <0pt>
\endxy
\end{equation}
and re-attaching edges.
The dg sub-operad of $\mathsf{tw}\mathcal{G} ra_d$ generated by graphs with at least one white vertex is denoted by ${\mathcal G} raphs_d$. It is proven in \cite{Ko2,LV} that its cohomology $H^\bulletlet({\mathcal G} raphs_d)$ is the operad of $d$-algebras. The case $d=2$ is of special interest as $2$-algebras are precisely the Gerstenhaber algebras which have many applications in algebra, geometry and mathematical physics.
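For the reader's convenience we recall (a standard definition, not specific to \cite{Ko2,LV}) that a Gerstenhaber algebra is a graded commutative algebra $(V,\cdot)$ equipped with a Lie bracket $[\ ,\ ]$ of degree $-1$ which is compatible with the product via the Leibniz rule
$$
[a, b\cdot c]=[a,b]\cdot c+(-1)^{(|a|-1)|b|}\, b\cdot [a,c].
$$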
{\Large
\section{\bf Partial twisting of properads under $\mathcal{L}\mathit{ieb}_d$ and quasi-Lie bialgebras}
}
\subsection{Reminder on the properads of (degree shifted) Lie bialgebras and quasi-Lie bialgebras} The properad of degree shifted Lie bialgebras is defined, for any pair of integers $c,d\in {\mathbb Z}$, as the quotient
$$
\mathcal{L}\mathit{ieb}_{c,d}:={\mathcal F} ree\langle E_0\rangle/\langle{\mathcal R}\rangle,
$$
of the free prop generated by an ${\mathbb S}$-bimodule $E_0=\{E_0(m,n)\}_{m,n\geq 0}$ with
all $E_0(m,n)=0$ except
$$
E_0(2,1):={\mbox{1 \hskip -7pt 1}}_1\otimes {\mathit s \mathit g\mathit n}_2^{c}[c-1]=\mbox{span}\left\langle
\betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_2}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_1}}**@{},
\end{xy}\end{array}
=(-1)^{c}
\betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_1}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_2}}**@{},
\end{xy}\end{array}
\right\rangle
$$
$$
E_0(1,2):= {\mathit s \mathit g\mathit n}_2^{d}\otimes {\mbox{1 \hskip -7pt 1}}_1[d-1]=\mbox{span}\left\langle
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
\end{xy}\end{array}
=(-1)^{d}
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_1}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_2}}**@{},
\end{xy}\end{array}
\right\rangle
$$
by the ideal generated by the following relations
\betagin{equation}\lambdabel{3: R for LieB}
{\mathcal R}:\left\{
\betagin{array}{c}
\oint_{123}\hspace{-1mm} \betagin{array}{c}\resizebox{8.1mm}{!}{
\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bulletlet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy}}\end{array}
=0
\ \ , \ \
\oint_{123}\hspace{-1mm} \betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array}
=0
\\
(-1)^{c+d}\betagin{array}{c}\resizebox{5mm}{!}{\betagin{xy}
<0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-},
<0mm,3mm>*{\bulletlet};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\bulletlet};<0mm,-0.8mm>*{}**@{},
<-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-},
<0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{},
\end{xy}}\end{array}
+(-1)^{cd}\left(
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}
+ (-1)^{d}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}
+ (-1)^{d+c}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array}
+ (-1)^{c}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array}=0\right).
\end{array}
\right.
\end{equation}
where the vertices are ordered implicitly in such a way that the ones on the top come first.
V.\ Drinfeld introduced \cite{Dr2} the notion of {\it quasi-Lie bialgebra}\ or {\it Lie quasi-bialgebra}.
The prop(erad) $q\mathcal{L}\mathit{ieb}_{c,d}$ controlling degree shifted quasi-Lie bialgebras can be defined, for any pair of integers $c,d\in {\mathbb Z}$, as the quotient
$$
q\mathcal{L}\mathit{ieb}_{c,d}:={\mathcal F} ree\langle Q\rangle/\langle{\mathcal R}_q\rangle,
$$
of the free prop(erad) generated by an ${\mathbb S}$-bimodule $Q=\{Q(m,n)\}_{m,n\geq 0}$ with
all $Q(m,n)=0$ except
\begin{eqnarray*}
Q(2,1)&:=&{\mbox{1 \hskip -7pt 1}}_1\otimes {\mathit s \mathit g\mathit n}_2^{c}[c-1]=\mbox{span}\left\lambdangle
\betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_2}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_1}}**@{},
\end{xy}\end{array}
=(-1)^{c}
\betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_1}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_2}}**@{},
\end{xy}\end{array}
\right\rangle,\\
Q(1,2)&:=& {\mathit s \mathit g\mathit n}_2^{d}\otimes {\mbox{1 \hskip -7pt 1}}_1[d-1]=\mbox{span}\left\lambdangle
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
\end{xy}\end{array}
=(-1)^{d}
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_1}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_2}}**@{},
\end{xy}\end{array}
\right\rangle,\\
Q(3,0)&:=& ({\mathit s \mathit g\mathit n}_3)^{\otimes|c|}[2c-d-1]=\mbox{span}\left\lambdangle
\betagin{array}{c}\betagin{xy}
<0mm,-1mm>*{\bulletlet};<-4mm,3mm>*{^{_1}}**@{-},
<0mm,-1mm>*{\bulletlet};<0mm,3mm>*{^{_2}}**@{-},
<0mm,-1mm>*{\bulletlet};<4mm,3mm>*{^{_3}}**@{-},
\end{xy}\end{array}= (-1)^{c|\sigma|}
\betagin{array}{c}\betagin{xy}
<0mm,-1mm>*{\bulletlet};<-6mm,3mm>*{^{_{\sigma(1)}}}**@{-},
<0mm,-1mm>*{\bulletlet};<0mm,3mm>*{^{_{\sigma(2)}}}**@{-},
<0mm,-1mm>*{\bulletlet};<6mm,3mm>*{^{_{\sigma(3)}}}**@{-},
\end{xy}\end{array}\ \ \forall \sigma\in {\mathbb S}_3
\right\rangle,
\end{eqnarray*}
modulo the ideal generated by the following relations
\betagin{equation}\lambdabel{3: R for qLieB}
{\mathcal R}_q:\left\{\hspace{-2mm}
\betagin{array}{c}
\displaystyle
\oint_{123}
\left(\hspace{-2mm} \betagin{array}{c}\resizebox{9.4mm}{!}{
\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bulletlet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy}}\end{array}
+
\betagin{array}{c}\resizebox{12.5mm}{!}{
\betagin{xy}
(0,0)*{\bulletlet};(-4,5)*{^{1}}**@{-},
(0,0)*{\bulletlet};(0,5)*{^{2}}**@{-},
(0,0)*{\bulletlet};(4,5)*{\bulletlet}**@{-},
(4,5)*{\bulletlet};(4,10)*{^{3}}**@{-},
(4,5)*{\bulletlet};(8,0)*{_{\, 1}}**@{-},
\end{xy}
}
\end{array}
\hspace{-2mm}
\right)
=0
\ , \ \ \
\oint_{123}\hspace{-1mm} \betagin{array}{c}\resizebox{10.0mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array} =0
\ , \ \ \
\text{Alt}^c_{{\mathbb S}_4}\hspace{-1.5mm}
\betagin{array}{c}\resizebox{11.4mm}{!}{
\betagin{xy}
(0,0)*{\bulletlet};(-4,5)*{^{1}}**@{-},
(0,0)*{\bulletlet};(0,5)*{^{2}}**@{-},
(0,0)*{\bulletlet};(4,5)*{\bulletlet}**@{-},
(4,5)*{\bulletlet};(1.5,10)*{^{3}}**@{-},
(4,5)*{\bulletlet};(6.5,10)*{^{4}}**@{-},
\end{xy}}
\end{array}
=0
\\
\hspace{-1mm} \betagin{array}{c}\resizebox{6mm}{!}{\betagin{xy}
<0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-},
<0mm,3mm>*{\bulletlet};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\bulletlet};<0mm,-0.8mm>*{}**@{},
<-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-},
<0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{},
\end{xy}}\end{array}\hspace{-2mm}
+(-1)^{cd+c+d}\left(\hspace{-2mm}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}\hspace{-1mm}
+ (-1)^{d}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}\hspace{-1mm}
+ (-1)^{d+c}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array}\hspace{-1mm}
+ (-1)^{c}
\betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,-0.8mm>*{\bulletlet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bulletlet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array}\hspace{-2mm}\right)=0.
\end{array}
\right.
\end{equation}
Its minimal resolution $\mathcal{H}\mathit{oqlieb}_{c,d}$ is a free properad
$
\mathcal{H}\mathit{oqlieb}_{c,d}:={\mathcal F} ree \left\langle E_q\right\rangle
$
generated by an ${\mathbb S}$-bimodule
$
E_q=\{E_q(m,n)\}_{m\geq 1, n\geq 0, m+n\geq 3}
$
with
$$
{E}_q(m,n):={\mathit s \mathit g\mathit n}_m^{\otimes |c|}\otimes {\mathit s \mathit g\mathit n}_n^{\otimes |d|}[cm+dn-1-c-d]\equiv\text{span}\left\langle\hspace{-1mm}
\betagin{array}{c}\resizebox{15mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-10.5mm,5.9mm>*{^{\sigma(1)}}**@{},
<0mm,0mm>*{};<-4mm,5.9mm>*{^{\sigma(2)}}**@{},
<0mm,0mm>*{};<10.0mm,5.9mm>*{^{\sigma(m)}}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-10.5mm,-6.9mm>*{^{\tau(1)}}**@{},
<0mm,0mm>*{};<-4mm,-6.9mm>*{^{\tau(2)}}**@{},
<0mm,0mm>*{};<10.0mm,-6.9mm>*{^{\tau(n)}}**@{},
\end{xy}}\end{array}\hspace{-3mm}
=(-1)^{c|\sigma|+d|\tau|}\hspace{-1mm}
\betagin{array}{c}\resizebox{12mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}\hspace{-1mm}
\right\rangle_{ \forall \sigma\in {\mathbb S}_m \atop \forall\tau\in {\mathbb S}_n}
$$
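As a consistency check on the degree shift above (our own verification): the generator of ${E}_q(m,n)$ is thereby placed in cohomological degree $1+c+d-cm-dn$, which for $(m,n)=(2,1)$, $(1,2)$ and $(3,0)$ equals
$$
1-c, \qquad 1-d, \qquad 1+d-2c,
$$
respectively, i.e.\ the same degrees as those of the corresponding generators of $q\mathcal{L}\mathit{ieb}_{c,d}$, as it must be for a resolution.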
The differential in $\mathcal{H}\mathit{oqlieb}_{c,d}$ is given on the generators by
\betagin{equation}\lambdabel{3: d in qLBcd_infty}
\delta
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44
mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\ \ = \ \
\sum_{[m]=I_1\sqcup I_2\atop
{|I_1|\geq 0, |I_2|\geq 1}}
\sum_{[n]=J_1\sqcup J_2\atop
{|J_1|, |J_2|\geq 0}
}\hspace{0mm}
\pm
\betagin{array}{c}\resizebox{22mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-},
<0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{},
<0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \
}}**@{},
<0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{},
<13mm,5mm>*{};<13mm,5mm>*{\bulletlet}**@{},
<12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-},
<12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-},
<13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{},
<13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-},
<13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-},
<13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{},
<13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{},
<12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-},
<12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{},
<13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-},
<13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-},
<13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{},
<13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{},
\end{xy}}\end{array}
\end{equation}
where the signs on the r.h.s.\ are uniquely fixed for $c+d\in 2{\mathbb Z}$ by the fact that they are all equal to $-1$ if $c$ and $d$ are even integers.
Taking the quotient of $\mathcal{H}\mathit{oqlieb}_{c,d}$ by the ideal generated by all $(m,0)$-corollas, $m\geq 3$, gives us the minimal model $\mathcal{H}\mathit{olieb}_{c,d}$ of the properad $\mathcal{L}\mathit{ieb}_{c,d}$.
Properads $\mathcal{H}\mathit{olieb}_{c,d}$ (respectively, $\mathcal{H}\mathit{oqlieb}_{c,d}$) corresponding to pairs $(c,d)$ with the same parity of $c+d$ are isomorphic to each other up to a degree shift,
$$
\mathcal{H}\mathit{olieb}_{c,d}=\mathcal{H}\mathit{olieb}_{c+d,0}\{d\}, \ \ \ \mathcal{H}\mathit{oqlieb}_{c,d}=\mathcal{H}\mathit{oqlieb}_{c+d,0}\{d\},
$$
i.e.\ there are essentially only two different types of (quasi-)Lie bialgebra properads, the even and the odd ones.
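For instance (a direct instantiation of the displayed identities, added here only as an illustration), taking $(c,d)=(1,1)$ and $(c,d)=(0,1)$ gives
$$
\mathcal{H}\mathit{olieb}_{1,1}=\mathcal{H}\mathit{olieb}_{2,0}\{1\},
\qquad
\mathcal{H}\mathit{olieb}_{0,1}=\mathcal{H}\mathit{olieb}_{1,0}\{1\},
$$
the first properad being of the even type ($c+d=2$) and the second of the odd type ($c+d=1$).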
\subsection{A short reminder on graph complexes}\lambdabel{3: subsec on GC and HGC}
The M.\ Kontsevich graph complexes come in a family $\mathsf{GC}_d$ parameterized by an integer $d\in {\mathbb Z}$. The complex $\mathsf{GC}_d$ for fixed $d$ is generated by arbitrary graphs $\Gamma$ with valencies $|v|$ of vertices $v$ of $\Gamma$ satisfying $|v|\geq 3$, and with the orientation $or$ defined on each graph $\Gamma\in \mathsf{GC}_d$ as an ordering of edges (up to an even permutation) for $d$ even, and as an ordering of vertices and half-edges (again up to an even permutation) for $d$ odd; each graph $\Gamma$ has precisely two different orientations, $or$ and $-or$, and one identifies $(\Gamma,or)=-(\Gamma,-or)$ and abbreviates the pair $(\Gamma,or)$ to $\Gamma$. The cohomological degree of $\Gamma\in \mathsf{GC}_d$ is defined by
$$
|\Gamma|=d(\# V(\Gamma)-1) + (1-d) \#E(\Gamma)
$$
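As a quick check of this formula (our own computation): the complete graph on four vertices, which appears below as the cycle $\mathfrak{w}_3$, has $\# V(\Gamma)=4$ and $\# E(\Gamma)=6$, so that in $\mathsf{GC}_2$
$$
|\Gamma|=2\cdot(4-1)+(1-2)\cdot 6=0,
$$
consistent with $\mathfrak{w}_3$ being a cycle of degree zero.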
The differential $\delta$ on $\mathsf{GC}_d$ is given by an action,
$
\delta\Gamma=\sum_v \delta_v\Gamma
$,
on each vertex
$
v= \betagin{array}{c}\resizebox{7mm}{!}{\xy
(0,0)*{\bulletletllet}="a",
(2.3,4)*{}="1",
(-2.3,4)*{}="2",
(5,1)*{}="3",
(-5,1)*{}="4",
(-3.6,-3.6)*{}="5",
(3.6,-3.6)*{}="6",
(0,-4.5)*{}="7",
\ar @{-} "a";"1" <0pt>
\ar @{-} "a";"2" <0pt>
\ar @{-} "a";"3" <0pt>
\ar @{-} "a";"4" <0pt>
\ar @{-} "a";"5" <0pt>
\ar @{-} "a";"6" <0pt>
\ar @{-} "a";"7" <0pt>
\endxy}
\end{array}
$
of a graph $\Gamma\in \mathsf{GC}_d$
by splitting $v$ into two new vertices connected by an edge, and then re-attaching the edges attached earlier to $v$ to the new vertices in all possible ways,
$$
\delta_v:
\betagin{array}{c}\resizebox{7mm}{!}{\xy
(0,0)*{\bulletletllet}="a",
(2.3,4)*{}="1",
(-2.3,4)*{}="2",
(5,1)*{}="3",
(-5,1)*{}="4",
(-3.6,-3.6)*{}="5",
(3.6,-3.6)*{}="6",
(0,-4.5)*{}="7",
\ar @{-} "a";"1" <0pt>
\ar @{-} "a";"2" <0pt>
\ar @{-} "a";"3" <0pt>
\ar @{-} "a";"4" <0pt>
\ar @{-} "a";"5" <0pt>
\ar @{-} "a";"6" <0pt>
\ar @{-} "a";"7" <0pt>
\endxy}
\end{array}
\ \longrightarrow \ \sum
\betagin{array}{c}\resizebox{7mm}{!}{\xy
(0,-2.3)*{\bulletletllet}="a",
(0,2.3)*{\bulletletllet}="b",
(-7,-5)*{}="1",
(7,-6)*{}="2",
(-3,-8)*{}="3",
(3,-8)*{}="4",
(5,7)*{}="5",
(-5,7)*{}="6",
(0,8)*{}="7",
\ar @{-} "a";"b" <0pt>
\ar @{-} "a";"1" <0pt>
\ar @{-} "a";"2" <0pt>
\ar @{-} "a";"3" <0pt>
\ar @{-} "a";"4" <0pt>
\ar @{-} "b";"5" <0pt>
\ar @{-} "b";"6" <0pt>
\ar @{-} "b";"7" <0pt>
\endxy}
\end{array}.
$$
It is very hard to compute the cohomology of $\mathsf{GC}_d$ explicitly. Here are two examples of degree zero cycles in $\mathsf{GC}_2$
$$
\mathfrak{w}_3 =
\betagin{array}{c}\resizebox{12mm}{!}{
\xy
(0,0)*{\bulletletllet}="a",
(0,8)*{\bulletletllet}="b",
(-7.5,-4.5)*{\bulletletllet}="c",
(7.5,-4.5)*{\bulletletllet}="d",
\ar @{-} "a";"b" <0pt>
\ar @{-} "a";"c" <0pt>
\ar @{-} "b";"c" <0pt>
\ar @{-} "d";"c" <0pt>
\ar @{-} "b";"d" <0pt>
\ar @{-} "d";"a" <0pt>
\endxy}
\end{array}, \
\mathfrak{w}_5 =
\betagin{array}{c}\resizebox{13mm}{!}{
\xy
(0,0)*{\bulletletllet}="0",
(0,8)*{\bulletletllet}="1",
(-8,3)*{\bulletletllet}="5",
(8,3)*{\bulletletllet}="2",
(-5,-7)*{\bulletletllet}="4",
(5,-7)*{\bulletletllet}="3",
\ar @{-} "0";"1" <0pt>
\ar @{-} "0";"2" <0pt>
\ar @{-} "0";"3" <0pt>
\ar @{-} "0";"4" <0pt>
\ar @{-} "0";"5" <0pt>
\ar @{-} "1";"2" <0pt>
\ar @{-} "2";"3" <0pt>
\ar @{-} "3";"4" <0pt>
\ar @{-} "4";"5" <0pt>
\ar @{-} "5";"1" <0pt>
\endxy}\end{array}
\ +\ \frac{5}{2}
\betagin{array}{c}\resizebox{12mm}{!}{
\xy
(1,0)*{\bulletletllet}="0",
(0,8)*{\bulletletllet}="1",
(-8,3)*{\bulletletllet}="5",
(8,3)*{\bulletletllet}="2",
(-5,-7)*{\bulletletllet}="4",
(5,-7)*{\bulletletllet}="3",
\ar @{-} "0";"1" <0pt>
\ar @{-} "0";"2" <0pt>
\ar @{-} "0";"4" <0pt>
\ar @{-} "1";"4" <0pt>
\ar @{-} "5";"3" <0pt>
\ar @{-} "1";"2" <0pt>
\ar @{-} "2";"3" <0pt>
\ar @{-} "3";"4" <0pt>
\ar @{-} "4";"5" <0pt>
\ar @{-} "5";"1" <0pt>
\endxy}
\end{array},
$$
which represent non-trivial cohomology classes. It has been proven in \cite{W} that
$H^0(\mathsf{GC}_2)={\mathfrak g}{\mathfrak r}{\mathfrak t}_1$, the Lie algebra of the Grothendieck-Teichm\"uller group $GRT_1$. Interestingly in the present context, the graph complexes $\mathsf{GC}_{c+d+1}$ control \cite{MW2} the homotopy theory of the properads $\mathcal{H}\mathit{olieb}_{c,d}$ and $\mathcal{L}\mathit{ieb}_{c,d}$, i.e.\ there is a quasi-isomorphism, up to one rescaling class, of dg Lie algebras
$$
\mathsf{GC}_{c+d+1} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}) \simeq \mathsf{Def}(\mathcal{L}\mathit{ieb}_{c,d} \stackrel{{\mathrm{Id}}}{\rightarrow} \mathcal{L}\mathit{ieb}_{c,d})[1]
$$
where $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d})$ is the derivation complex of the genus completion of the properad $\mathcal{H}\mathit{olieb}_{c,d}$.
The graph complex $\mathsf{HGC}_d^N$ with $N$ labelled hairs is defined similarly --- the only novelty is that each graph $\Gamma$ in $\mathsf{HGC}_d^N$ has precisely $N$ hairs (or legs) attached to its vertex or vertices. Again each vertex must be at least trivalent (with hairs counted), and the differential $\delta$ acts on vertices as before. One can understand hairs as a kind of special univalent vertices on which $\delta$ does not act; they are assigned the same cohomological degree $1-d$ as edges.
The haired graph complexes have been introduced and studied recently in the context of the theory of moduli spaces ${\mathcal M}_{g,N}$ of algebraic curves of arbitrary genus $g$ with $N$ punctures \cite{CGP}
and the theory of long knots \cite{FTW}. It has been proven in \cite{CGP} that there is an isomorphism of cohomology groups
$$
H^\bulletlet(\mathsf{HGC}_0^N)= \prod_{2g+N\geq 4} W_0 H_c^{\bulletlet - N} {\mathcal M}_{g,N}
$$
where $W_0 H_c^{\bulletlet} {\mathcal M}_{g,N}$ stands for the weight zero summand of the compactly supported cohomology of the moduli space ${\mathcal M}_{g,N}$.
\subsection{Partial twisting of properads under $\mathcal{L}\mathit{ieb}_{c,d}$}\lambdabel{3: subsec on reduced twisting of (prop)erads under LBcd} Let ${\mathcal P}=\{{\mathcal P}(m,n), {\partial}\}_{m,n\geq 0}$ be a dg properad. We represent its generic elements pictorially as $(m,n)$-corollas
\betagin{equation}\lambdabel{3: generic elements of cP as (m,n)-corollas}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\circ}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array}
\end{equation}
whose white vertex is decorated by an element of ${\mathcal P}(m,n)$. Properadic compositions in ${\mathcal P}$ are represented pictorially by gluing out-legs of such decorated corollas to in-legs of other decorated corollas.
Assume ${\mathcal P}$ comes equipped with a non-trivial morphism
\betagin{equation}\lambdabel{3: i from LBcd to P}
i: \mathcal{L}\mathit{ieb}_{c,d} \longrightarrow {\mathcal P}:\ \ \ \
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{}="1";
(-5,-0.2)*{\bulletlet}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\stackrel{i}{\rightarrow}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{}="1";
(-5,-0.2)*{\circledcirc}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array},
\ \ \ \
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\stackrel{i}{\rightarrow}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{equation}
The images under $i$ of the generators of $\mathcal{L}\mathit{ieb}_{c,d}$ are special elements of ${\mathcal P}$ and hence we reserve a special notation $\circledcirc$ for the decoration of the associated corollas. In particular, ${\mathcal P}$ is a properad under $\mathcal{L} \mathit{ie}_d$ and hence can be twisted in full analogy to the case of operads discussed in the previous section: applying T.\ Willwacher's twisting endofunctor to $({\mathcal P},{\partial})$ we obtain a dg properad $(\mathsf{tw} {\mathcal P}, {\p}_\centerdot)$ called the {\em partial twisting of a properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$}. The latter is freely generated by ${\mathcal P}$ and an extra generator $\betagin{array}{c}\resizebox{1.2mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}$ of degree $d$. The twisted differential ${\p}_\centerdot$ acts on the latter generator by the standard formula (\ref{2: d_c on boxdot}), while its action on elements of ${\mathcal P}$ is given by the following obvious analogue of (\ref{2: sd_c on generators of TwO}),
\betagin{equation}\lambdabel{2: d_centerdot on twP under Lieb}
{\p}_\centerdot \betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
=
{\p} \betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
+
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{15mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,5mm>*{}**@{-},
<0mm,13mm>*{\circledcirc};
<0mm,13mm>*{};<5mm,10mm>*{_\bulletlet}**@{-},
<0mm,5mm>*{};<0mm,12mm>*{}**@{-},
<0mm,14mm>*{};<0mm,17mm>*{}**@{-},
<0mm,5mm>*{};<0mm,19mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<6mm,5mm>*{..}**@{},
<-8.5mm,5.5mm>*{^1}**@{},
<-4mm,5.5mm>*{^i}**@{},
<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-5mm>*{}**@{-},
<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<-8.5mm,-6.9mm>*{^1}**@{},
<-5mm,-6.9mm>*{^2}**@{},
<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
- (-1)^{|a|}
\overset{n-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,-5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,-5mm>*{}**@{-},
<0mm,-11mm>*{\circledcirc};
<0mm,-12mm>*{};<5mm,-16mm>*{_\bulletlet}**@{-},
<0mm,-5mm>*{};<0mm,-10mm>*{}**@{-},
<0mm,-12mm>*{};<0mm,-17mm>*{}**@{-},
<0mm,-5mm>*{};<0mm,-19mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,-5mm>*{}**@{-},
<6mm,-5mm>*{..}**@{},
<-8.5mm,-6.9mm>*{^1}**@{},
<-4mm,-6.9mm>*{^i}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,5mm>*{}**@{-},
<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<-8.5mm,5.5mm>*{^1}**@{},
<-5mm,5.5mm>*{^2}**@{},
<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array},
\end{equation}
The twisted properad comes equipped with a natural epimorphism of dg properads
$$
(\mathsf{tw}{\mathcal P}, {\p}_\centerdot) \longrightarrow ({\mathcal P}, {\p})
$$
which sends the MC generator to zero. According to the general twisting machinery, the element $\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}$ remains a cocycle in ${\mathcal P}$ even after the twisting of the original differential so that the original morphism $i$ extends to the twisted version by the same formula,
\betagin{equation}\lambdabel{2: map i from Lie to twP}
\betagin{array}{rccc}
\imath: & (\mathcal{L} \mathit{ie}_d, 0) & \longrightarrow & (\mathsf{tw}{\mathcal P},{\p}_\centerdot)\\
&
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
& \longrightarrow & \betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{array}.
\end{equation}
However, the image of the co-Lie generator in ${\mathcal P}$ is, in general, {\em not} a cocycle with respect to the twisted
differential,
$$
{\p}_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{_1}="1";
(-5,0)*{\circledcirc}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-2mm} = \hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-9,8)*{^2}="1";
(-9,+2.5)*{\circledcirc}="L";
(-13,-3)*{\circledcirc}="B";
(-17,4)*+{^1}="b1";
(-13,-9)*+{_1}="b2";
(-5,-3)*{\bulletlet}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c\hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-9,8)*{^1}="1";
(-9,+2.5)*{\circledcirc}="L";
(-13,-3)*{\circledcirc}="B";
(-17,4)*+{^2}="b1";
(-13,-9)*+{_1}="b2";
(-5,-3)*{\bulletlet}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
-(-1)^{c-1}\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-18,7)*{^1}="1";
(-14,2.8)*{\circledcirc}="L";
(-14,-2.8)*{\circledcirc}="B";
(-18,-8)*{_1}="b1";
(-10,-8)*{\bulletlet}="b2";
(-10,7)*{^2}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
=\hspace{-2mm}
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-19,8)*{^1}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c\hspace{-2mm}
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-19,8)*{^2}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^1}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}.
$$
where we used the image under $i$ of the third relation in (\ref{3: R for LieB}) (and ordered the vertices from bottom to top).
The first equality in the formula just above, the formula (\ref{2: d_c on boxdot}) and the Drinfeld compatibility condition (that is, the bottom relation in (\ref{3: R for qLieB})) imply
$$
{\p}_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5.5)*{\bulletlet}="1";
(-5,0)*{\circledcirc}="L";
(-8,5.5)*+{_1}="C";
(-2,5.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
=\hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-9,8)*{^2}="1";
(-9,+2.5)*{\circledcirc}="L";
(-13,-3)*{\circledcirc}="B";
(-17,4)*+{^1}="b1";
(-13,-9)*{\bulletlet}="b2";
(-5,-3)*{\bulletlet}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c\hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-9,8)*{^1}="1";
(-9,+2.5)*{\circledcirc}="L";
(-13,-3)*{\circledcirc}="B";
(-17,4)*+{^2}="b1";
(-13,-9)*{\bulletlet}="b2";
(-5,-3)*{\bulletlet}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
-(-1)^{c-1}\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-18,7)*{^1}="1";
(-14,2.8)*{\circledcirc}="L";
(-14,-2.8)*{\circledcirc}="B";
(-18,-8)*{\bulletlet}="b1";
(-10,-8)*{\bulletlet}="b2";
(-10,7)*{^2}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+\frac{(-1)^{c-1}}{2}\hspace{-1mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-18,7)*{^1}="1";
(-14,2.8)*{\circledcirc}="L";
(-14,-2.8)*{\circledcirc}="B";
(-18,-8)*{\bulletlet}="b1";
(-10,-8)*{\bulletlet}="b2";
(-10,7)*{^2}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}=0
$$
which in turn implies that the element
$ \hspace{-2mm}\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-19,8)*{^1}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
\in \mathsf{tw} {\mathcal P}
$
is a cycle with respect to the twisted differential ${\p}_\centerdot$. The linear combination
$$
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-19,8)*{^1}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+\lambda
(-1)^c\hspace{-2mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-19,8)*{^2}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^1}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
$$
is a ${\p}_\centerdot$-coboundary for $\lambda=1$ (indeed, the first computation above exhibits precisely this combination as ${\p}_\centerdot$ applied to the corolla $\circledcirc$ with one input and two outputs), but for other values of the parameter $\lambda$, say for $\lambda=-1$, it represents, in general, a non-trivial cohomology class in $H^\bulletlet(\mathsf{tw}{\mathcal P}, {\p}_\centerdot)$ of cohomological degree $2-c$.
\subsection{Theorem on partial twisting and quasi-Lie bialgebras}\lambdabel{2: qLien to twP}
{\it Let ${\mathcal P}$ be a dg properad under $\mathcal{L}\mathit{ieb}_{c,d}$ and $\mathsf{tw} {\mathcal P}$ the associated twisting of ${\mathcal P}$ as a properad under $\mathcal{L} \mathit{ie}_d$. Then there is an explicit morphism of properads
$$
\betagin{array}{rccl}
i^Q: & q\mathcal{L}\mathit{ieb}_{c-1,d} & \longrightarrow & H^\bulletlet(\mathsf{tw}{\mathcal P},{\p}_\centerdot)
\\
&
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
&\longrightarrow &
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} \mod {\mathsf I\mathsf m}\, {\p}_\centerdot
\\
&
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6.5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8.5,6)*+{_1}="C";
(-1.5,6)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
& \longrightarrow &\betagin{array}{c}\resizebox{9mm}{!}{ \xy
(-5,-6.5)*+{_1}="1";
(-5,0)*{\boxtimes}="L";
(-8.5,6)*+{_1}="C";
(-1.5,6)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}:=
\frac{1}{2}
\left( \hspace{-2mm}
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-19,8)*{^1}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
-
(-1)^c
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-19,8)*{^2}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^1}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-2mm}
\right)\mod {\mathsf I\mathsf m}\, {\p}_\centerdot
\\
&
\betagin{array}{c}\resizebox{9mm}{!}{ \betagin{xy}
<0mm,-1mm>*{\bulletlet};<-4mm,3mm>*{^{_{{1}}}}**@{-},
<0mm,-1mm>*{\bulletlet};<0mm,3mm>*{^{_{{2}}}}**@{-},
<0mm,-1mm>*{\bulletlet};<4mm,3mm>*{^{_{{3}}}}**@{-},
\end{xy}}\end{array}
& \longrightarrow &
\oint_{123} \hspace{-2.5mm} \betagin{array}{c}\resizebox{17mm}{!}{ \xy
(0,9)*{^2}="c";
(0,+3)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(-11,4)*+{^1}="u1";
(11,4)*+{^3}="u2";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"u1" <0pt>
\ar @{-} "d2";"u2" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\endxy}
\end{array}
\mod {\mathsf I\mathsf m}\, {\p}_\centerdot \\
\end{array}
$$
}
The graphs on the r.h.s.\ are cycles in $\mathsf{tw}{\mathcal P}$ which are not, in general, coboundaries.
What about ${\mathcal P}=\mathcal{L}\mathit{ieb}_{c,d}$? Is the image of the bottom graph a non-trivial cohomology class in that case?
\begin{proof}
The proof is a straightforward but rather tedious calculation. Remarkably, the first and the fourth relations in the list ${\mathcal R}_Q$ above hold true exactly. However, the remaining third relation holds true only up to ${\p}_\centerdot$-exact terms. Let us check in full detail that the map $i^Q$ satisfies
\begin{equation}\label{3: relation (4,0) in qLieb}
\text{Alt}_{{\mathbb S}_4}^{c-1}\ i^Q\left(\hspace{-1.5mm}
\betagin{array}{c}\resizebox{9.0mm}{!}{
\betagin{xy}
(0,0)*{\bulletlet};(-4,5)*{^{1}}**@{-},
(0,0)*{\bulletlet};(0,5)*{^{2}}**@{-},
(0,0)*{\bulletlet};(4,5)*{\bulletlet}**@{-},
(4,5)*{\bulletlet};(1.5,10)*{^{3}}**@{-},
(4,5)*{\bulletlet};(6.5,10)*{^{4}}**@{-},
\end{xy}}
\end{array}
\hspace{-1mm}\right)
=0 \bmod {\mathsf I\mathsf m}\, {\p}_\centerdot .
\end{equation}
We have
$$
i^Q\left(\hspace{-2.8mm}
\betagin{array}{c}\resizebox{9.4mm}{!}{
\betagin{xy}
(0,0)*{\bulletlet};(-4,5)*{^{1}}**@{-},
(0,0)*{\bulletlet};(0,5)*{^{2}}**@{-},
(0,0)*{\bulletlet};(4,5)*{\bulletlet}**@{-},
(4,5)*{\bulletlet};(1.5,10)*{^{3}}**@{-},
(4,5)*{\bulletlet};(6.5,10)*{^{4}}**@{-},
\end{xy}}
\end{array}
\hspace{-1.8mm}\right)\hspace{-0.5mm}
=\hspace{-0.5mm}
\frac{{\mathrm{Id}} + (-1)^{c-1}(34)}{2}
\left(\hspace{-3mm}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(0,9)*{^2}="c";
(0,+2)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,-3.5)*{\circledcirc}="d3";
(-10,4)*+{^1}="l";
(10,9)*+{^3}="3";
(20,4)*+{^4}="4";
(10,2)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,-10)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "d2";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
\hspace{-1mm}
+(-1)^{c-1}
\hspace{-2.7mm}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(0,9)*{^1}="c";
(0,+2)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,-3.5)*{\circledcirc}="d3";
(-10,4)*+{^2}="l";
(10,9)*+{^3}="3";
(20,4)*+{^4}="4";
(10,2)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,-10)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "d2";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
+ \hspace{-2mm}
\betagin{array}{c}\resizebox{20mm}{!}{ \xy
(10,4)*{^2}="c";
(0,+3)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,9.5)*{\circledcirc}="d3";
(-10,5)*+{^1}="l";
(10,22)*+{^3}="3";
(20,17)*+{^4}="4";
(10,15)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,2)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"d2" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "C";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
\hspace{-2mm}\right).
$$
The Jacobi identity for the Lie generator implies the following vanishing
$$
\text{Alt}_{{\mathbb S}_4}^{c-1}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(10,4)*{^2}="c";
(0,+3)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,9.5)*{\circledcirc}="d3";
(-10,5)*+{^1}="l";
(10,22)*+{^3}="3";
(20,17)*+{^4}="4";
(10,15)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,2)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"d2" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "C";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
=0
$$
The symmetry properties of the generators imply, for any $c,d\in {\mathbb Z}$, the equality
$$
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(0,9)*{^2}="c";
(0,+2)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,-3.5)*{\circledcirc}="d3";
(-10,4)*+{^1}="l";
(10,9)*+{^3}="3";
(20,4)*+{^4}="4";
(10,2)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,-10)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "d2";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
=
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(0,9)*{^3}="c";
(0,+2)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,-3.5)*{\circledcirc}="d3";
(-10,4)*+{^4}="l";
(10,9)*+{^2}="3";
(20,4)*+{^1}="4";
(10,2)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,-10)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "d2";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
$$
Therefore the first two summands in the above formula do not cancel out upon (skew)symmetrization. However, one has the following equality modulo ${\p}_\centerdot$-exact terms,
$$
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-19,8)*{^2}="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^x}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_y}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
=(-1)^{c-1}
\hspace{-2mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-19,8)*{^x }="1";
(-19,+3)*{\circledcirc}="L";
(-14,-2.5)*{\circledcirc}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
$$
which can be used to transform the first two terms in the above formula into the third one (up to a permutation), which has just been considered. Hence
$$
\text{Alt}_{{\mathbb S}_4}^{c-1} \betagin{array}{c}\resizebox{22mm}{!}{ \xy
(0,10)*{^2}="c";
(0,+3)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,-3.5)*{\circledcirc}="d3";
(-13,5)*+{^1}="l";
(10,10)*+{^3}="3";
(22,5)*+{^4}="4";
(10,3)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,-10)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"C" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "d2";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
= (-1)^{c-1}
\text{Alt}_{{\mathbb S}_4}^{c-1}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(10,4)*{^2}="c";
(0,+3)*{\circledcirc}="C";
(-5,-3.5)*{\circledcirc}="d1";
(5,-3.5)*{\circledcirc}="d2";
(15,9.5)*{\circledcirc}="d3";
(-13,5)*+{^1}="l";
(10,22)*+{^3}="3";
(22,17)*+{^4}="4";
(10,15)*{\circledcirc}="r";
(-5,-10)*{\bulletlet}="b1";
(5,-10)*{\bulletlet}="b2";
(15,2)*{\bulletlet}="b3";
\ar @{-} "d1";"C" <0pt>
\ar @{-} "d2";"C" <0pt>
\ar @{-} "c";"d2" <0pt>
\ar @{-} "d1";"l" <0pt>
\ar @{-} "C";"r" <0pt>
\ar @{-} "3";"r" <0pt>
\ar @{-} "d1";"b1" <0pt>
\ar @{-} "d2";"b2" <0pt>
\ar @{-} "d3";"r" <0pt>
\ar @{-} "d3";"b3" <0pt>
\ar @{-} "d3";"4" <0pt>
\endxy}
\end{array}
\bmod {\p}_\centerdot =
\ \ \ \ 0 \bmod {\mathsf I\mathsf m}\, {\p}_\centerdot
$$
and the claim follows.
\end{proof}
\subsection{Haired graph complexes and $\mathsf{tw}\mathcal{L}\mathit{ieb}_{c,d}$}\label{3: subsec on reduced twisting of (prop)erads under HoLBcd}
The epimorphism
$$
\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d}=\{\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d}(N,M)\} \longrightarrow \mathsf{tw}\mathcal{L}\mathit{ieb}_{c,d}=\{\mathsf{tw}\mathcal{L}\mathit{ieb}_{c,d}(N,M)\}
$$
is a quasi-isomorphism for any $M,N\geq 1$ as $\mathsf{tw}$ is an exact functor. A straightforward inspection shows that the complex $\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d}(N,0)$ is identical to the oriented haired graph complex $\mathsf{HHOGC}^N_{c+d+1}[-dN]$ introduced
in \S 3.4.2 of \cite{AWZ}. One of the main results in that paper says that there is an isomorphism of cohomology groups
$$
H^\bullet(\mathsf{HGC}^N_d)\simeq H^\bullet(\mathsf{HHOGC}^N_{d+1})
$$
Hence we can conclude that {\em for any natural number $N\geq 1$ one has isomorphisms of cohomology groups
$$
H^\bullet\left(\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d}(N,0)\right)\simeq H^\bullet(\mathsf{tw}\mathcal{L}\mathit{ieb}_{c,d}(N,0))
\simeq H^{\bullet-dN}(\mathsf{HGC}_{c+d}^N).
$$
}
In particular, we have an induced morphism
$$
H^\bullet(\mathsf{HGC}^N_{c+d}) \longrightarrow H^{\bullet + dN}(\mathsf{tw}{\mathcal P}(N,0))
$$
for any dg properad ${\mathcal P}\in \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$.
\subsection{An example: (chain) gravity properad} A ribbon graph $\Gamma$ is a graph with an extra structure: the set of half-edges attached to each vertex comes equipped with a cyclic ordering
(a detailed definition can be found, e.g., in \S 4.1 of the paper \cite{MW1} to which we refer often in this subsection). Thickening each vertex $v\in V(\Gamma)$ of a ribbon graph $\Gamma$ into a closed disk, and every edge $e\in E(\Gamma)$ attached to $v$ into a 2-dimensional strip glued to that disk, one associates to $\Gamma$ a unique topological $2$-dimensional surface with boundaries; the set of such boundaries is denoted by $B(\Gamma)$. Shrinking 2-strips back into 1-dimensional edges, one represents each boundary $b$ as a closed path comprising some vertices and edges of $\Gamma$. We work with {\em connected}\, ribbon graphs only; their genus is defined by
\begin{equation}\label{4: genus of a ribbon graph}
g= 1+\frac{1}{2}\left(\# E(\Gamma) - \# V(\Gamma)- \# B(\Gamma)\right).
\end{equation}
Let $\mathcal{R} \mathcal{G} ra_d(m,n)$, $d\in {\mathbb Z}$, stand for the graded vector space generated by ribbon graphs $\Gamma$ such that
\betagin{itemize}
\item[(i)] $\Gamma$ has precisely $n$ vertices and $m$ boundaries which are labelled, i.e.\ some isomorphisms $V(\Gamma)\rightarrow [n]$ and $B(\Gamma)\rightarrow [\bar{m}]:=\{\bar{1},\ldots, \bar{m}\}$ are fixed;
\item[(ii)] $\Gamma$ is equipped with an orientation, which for $d$ even is defined as an ordering of the edges (up to the sign action of ${\mathbb S}_{\# E(\Gamma)}$), while for $d$ odd it is defined as a choice of direction on each edge (up to the sign action of ${\mathbb S}_2$);
\item[(iii)] $\Gamma$ is assigned the cohomological degree $(1-d)\# E(\Gamma)$.
\end{itemize}
For example,
$$
\betagin{array}{c}\resizebox{8.2mm}{!}{ \xy
(3.5,4)*{^{\bar{1}}};
(7,0)*+{_2}*{\mathfrak r}m{o}="A";
(0,0)*+{_1}*{\mathfrak r}m{o}="B";
\ar @{-} "A";"B" <0pt>
\endxy} \end{array}\hspace{-2mm} \in \mathcal{R} \mathcal{G} ra_d(1,2),
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(0.5,1)*{^{{^{\bar{1}}}}},
(0.5,5)*{^{{^{\bar{2}}}}},
(0,-2)*+{_{_1}}*{\mathfrak r}m{o}="A";
"A"; "A" **\crv{(7,7) & (-7,7)};
\endxy}\end{array} \hspace{-2mm} \in \mathcal{R} \mathcal{G} ra_d(2,1),
\betagin{array}{c}\resizebox{9mm}{!}{ \xy
(-4,0)*{^{\bar{_1}}};
(-1,0)*{^{\bar{_2}}};
(1.5,0)*{^{\bar{_3}}};
(0,5)*+{_1}*{\mathfrak r}m{o}="1";
(0,-4)*+{_2}*{\mathfrak r}m{o}="3";
"1";"3" **\crv{(4,0) & (4,1)};
"1";"3" **\crv{(-4,0) & (-4,-1)};
\ar @{-} "1";"3" <0pt>
\endxy}\end{array} \hspace{-2mm}\in \mathcal{R} \mathcal{G} ra_d(3,2),
\betagin{array}{c}\resizebox{9mm}{!}{ \xy
(-3,1)*{^{\bar{_1}}};
(0,8)*+{_1}*{\mathfrak r}m{o}="1";
(0,-4)*+{_2}*{\mathfrak r}m{o}="3";
"1";"3" **\crv{(-5,2) & (5,2)};
"1";"3" **\crv{(5,2) & (-5,2)};
"1";"3" **\crv{(-7,7) & (-7,-7)};
\endxy}\end{array} \hspace{-2mm}\in \mathcal{R} \mathcal{G} ra_d(1,2).
$$
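As an illustration of formula (\ref{4: genus of a ribbon graph}), the first of the graphs above has $\# E(\Gamma)=1$, $\# V(\Gamma)=2$, $\# B(\Gamma)=1$, while the last one has $\# E(\Gamma)=3$, $\# V(\Gamma)=2$, $\# B(\Gamma)=1$, so that their genera are
$$
g=1+\frac{1}{2}(1-2-1)=0
\qquad\text{and}\qquad
g=1+\frac{1}{2}(3-2-1)=1,
$$
respectively.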
The subspace of $\mathcal{R} \mathcal{G} ra_d(m,n)$ spanned by ribbon graphs of genus $g$ is denoted by $\mathcal{R} \mathcal{G} ra_d(g;m,n)$.
The permutation group ${\mathbb S}_m^{op}\times {\mathbb S}_n$ acts on $\mathcal{R} \mathcal{G} ra_d(m,n)$ by relabelling vertices and boundaries. The ${\mathbb S}$-bimodule
$$
\mathcal{R} \mathcal{G} ra_d=\{\mathcal{R} \mathcal{G} ra_d(m,n)\}
$$
has the structure of a properad \cite{MW1} given by substituting a boundary $b$ of one ribbon graph into a vertex $v$ of another one, and reattaching the half-edges (attached earlier to $v$) among the vertices belonging to $b$ in all possible ways while respecting the cyclic orders of both sets. One of the main motivations behind this definition of $\mathcal{R} \mathcal{G} ra_d$ is that it comes with a morphism of properads,
\begin{equation}\label{3: Lieb_dd to RGra_d}
\betagin{array}{rccc}
i: & \mathcal{L}\mathit{ieb}_{d,d} & \longrightarrow & \mathcal{R} \mathcal{G} ra_d\\
& \betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
<0mm,4mm>*{^{_{\bar{1}}}}**@{},
\end{xy}\end{array}
&\longrightarrow&
\betagin{array}{c}\resizebox{8.2mm}{!}{ \xy
(3.5,4)*{^{\bar{1}}};
(7,0)*+{_2}*{\mathfrak r}m{o}="A";
(0,0)*+{_1}*{\mathfrak r}m{o}="B";
\ar @{-} "A";"B" <0pt>
\endxy} \end{array}
\\
& \betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_{\bar{1}}}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_{\bar{2}}}}**@{},
\end{xy}\end{array}
&\longrightarrow &
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(0.5,1)*{^{{^{\bar{1}}}}},
(0.5,5)*{^{{^{\bar{2}}}}},
(0,-2)*+{_{_1}}*{\mathfrak r}m{o}="A";
"A"; "A" **\crv{(7,7) & (-7,7)};
\endxy}\end{array}
\end{array}
\end{equation}
In particular, $\mathcal{R} \mathcal{G} ra_d$ is a properad under $\mathcal{L} \mathit{ie}_d$ and hence can be twisted: $\mathsf{tw}\mathcal{R} \mathcal{G} ra_d$
is generated by ribbon graphs with two types of vertices, white ones which are labelled and black ones which are unlabelled and assigned the cohomological degree $d$ (cf.\ \S {\ref{2: Example Graphs_d operad}}), e.g.
$$
\betagin{array}{c}\resizebox{9mm}{!}{ \xy
(-3,1)*{^{\bar{_1}}};
(0,8)*+{_1}*{\mathfrak r}m{o}="1";
(0,-4)*{\bulletlet}="3";
"1";"3" **\crv{(-5,2) & (5,2)};
"1";"3" **\crv{(5,2) & (-5,2)};
"1";"3" **\crv{(-7,7) & (-7,-7)};
\endxy}\end{array} \in \mathsf{tw}{\mathcal R}{\mathcal G} ra_d(1,1).
$$
The differential $\delta_\centerdot$ in $\mathsf{tw}\mathcal{R} \mathcal{G} ra_d$ is determined by its action on vertices as in (\ref{2: d in Graphs}). One of the main results in \cite{Me2} is the proof of the following
\subsubsection{\bf Theorem} (i) {\em For any $g\geq 0$, $m\geq 1$ and $n\geq 0$ with $2g+m+n\geq 3$ one has an isomorphism of ${\mathbb S}_m^{op}\times {\mathbb S}_n$-modules,}
$$
H^\bullet(\mathsf{tw}\mathcal{R} \mathcal{G} ra_d(g;m,n))= H^{\bullet-m +d(2g-2+m+n)}_c({\mathcal M}_{g,m+n}\times {\mathbb R}^m)
$$
where ${\mathcal M}_{g,m+n}$ is the moduli space of genus $g$ algebraic curves with $m+n$ marked points, and
$H^\bullet_c$ stands for the compactly supported cohomology functor.
(ii) {\em For any $g\geq 0$, $m\geq 1$ and $n\geq 0$ with $2g+m+n< 3$ one has}
$$
H^k(\mathsf{tw}\mathcal{R} \mathcal{G} ra_d(g;m,n))=\left\{\betagin{array}{ll} {\mathbb K} & \text{if} \ g=n=0,m=2, k=(1-d)p \ \text{with} \ p\geq 1\ \& \ p\equiv 2d+1 \bmod 4 \\
0 & \text{otherwise}.
\end{array}
\right.
$$
{\em where ${\mathbb K}$ is generated by the unique polytope-like ribbon graph with $p$ edges and $p$ bivalent vertices which are all black}.
This result says that the most important part of $\mathsf{tw}\mathcal{R} \mathcal{G} ra_d$ is the dg sub-properad
${\mathcal C} h\mathcal{G} rav_d$ spanned by ribbon graphs with black vertices at least trivalent; it is called the {\em chain gravity properad}. Its cohomology
$$
\mathcal{G} rav_d:=\left\{\prod_{g\geq 0\atop 2g\geq 3-m-n} H^{\bullet-m +d(2g-2+m+n)}_c({\mathcal M}_{g,m+n}\times {\mathbb R}^m)\right\}_{m\geq1, n\geq 0}
$$
is called the {\em gravity}\, properad.
The general morphism $i^Q$ from Theorem {\ref{2: qLien to twP}} reads in this concrete situation as follows
$$
\betagin{array}{rccc}
i^Q: & q\mathcal{L}\mathit{ieb}_{d-1,d} & \longrightarrow & \mathcal{G} rav_d
\\
&
\betagin{array}{c}\betagin{xy}
<0mm,0.66mm>*{};<0mm,4mm>*{^{_{\bar{1}}}}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{},
\end{xy}\end{array}
&\longrightarrow &
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(3.5,4)*{^{\bar{1}}};
(7,0)*+{_2}*{\mathfrak r}m{o}="A";
(0,0)*+{_1}*{\mathfrak r}m{o}="B";
\ar @{-} "A";"B" <0pt>
\endxy} \end{array}
\\
&
\betagin{array}{c}\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_{\bar{2}}}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_{\bar{1}}}}**@{},
\end{xy}\end{array}
& \longrightarrow &
\frac{1}{2}
\left(\hspace{-1.5mm}
\betagin{array}{c}\resizebox{6mm}{!}{
\mbox{$\xy
(0.5,0.9)*{^{{^{\bar{1}}}}},
(0.5,5)*{^{{^{\bar{2}}}}},
(0,-8)*+{_{_1}}*{\mathfrak r}m{o}="C";
(0,-2)*{\bulletlet}="A";
(0,-2)*{\bulletlet}="B";
"A"; "B" **\crv{(6,6) & (-6,6)};
\ar @{-} "A";"C" <0pt>
\endxy$}}
\end{array}
- (-1)^d
\betagin{array}{c}\resizebox{6mm}{!}{
\mbox{$\xy
(0.5,0.9)*{^{{^{\bar{2}}}}},
(0.5,5)*{^{{^{\bar{1}}}}},
(0,-8)*+{_{_1}}*{\mathfrak r}m{o}="C";
(0,-2)*{\bulletlet}="A";
(0,-2)*{\bulletlet}="B";
"A"; "B" **\crv{(6,6) & (-6,6)};
\ar @{-} "A";"C" <0pt>
\endxy$}}
\end{array}
\hspace{-1.5mm}\right)
\\
&
\betagin{array}{c}\betagin{xy}
<0mm,-1mm>*{\bulletlet};<-4mm,3mm>*{^{_{\bar{1}}}}**@{-},
<0mm,-1mm>*{\bulletlet};<0mm,3mm>*{^{_{\bar{2}}}}**@{-},
<0mm,-1mm>*{\bulletlet};<4mm,3mm>*{^{_{\bar{3}}}}**@{-},
\end{xy}\end{array}
& \longrightarrow &
\frac{1}{2}\left(\hspace{-1.5mm}
\betagin{array}{c}\resizebox{13.5mm}{!}{
\mbox{$\xy
(0.5,0.9)*{^{{^{\bar{1}}}}},
(5,0)*{^{{^{\bar{2}}}}},
(10.5,0.9)*{^{{^{\bar{3}}}}},
(10,-2)*{\bulletlet}="B";
(0,-2)*{\bulletlet}="A";
"A"; "A" **\crv{(6,6) & (-6,6)};
"B"; "B" **\crv{(16,6) & (4,6)};
\ar @{-} "A";"B" <0pt>
\endxy$}}
\end{array}
-(-1)^d
\betagin{array}{c}\resizebox{13.5mm}{!}{
\mbox{$\xy
(0.5,0.9)*{^{{^{\bar{2}}}}},
(5,0)*{^{{^{\bar{1}}}}},
(10.5,0.9)*{^{{^{\bar{3}}}}},
(10,-2)*{\bulletlet}="B";
(0,-2)*{\bulletlet}="A";
"A"; "A" **\crv{(6,6) & (-6,6)};
"B"; "B" **\crv{(16,6) & (4,6)};
\ar @{-} "A";"B" <0pt>
\endxy$}}
\end{array}
\hspace{-1.5mm}\right)
\\
\end{array}
$$
We have in $\mathsf{tw}\mathcal{R} \mathcal{G} ra_d$
$$
\delta_\centerdot \betagin{array}{c}
\mbox{$\xy
(-3,-0.2)*{^{{^{\bar{1}}}}},
(3,-0.2)*{^{{^{\bar{3}}}}},
(0,3)*{^{{^{\bar{2}}}}},
(0,1)*{\bulletlet}="A1";
(0,1)*{\bulletlet}="A2";
"A1"; "A2" **\crv{(-6,7) & (-6,-5)};
"A1"; "A2" **\crv{(6,7) & (6,-5)};
\endxy$}
\end{array}
=
\betagin{array}{c}\resizebox{13.5mm}{!}{
\mbox{$\xy
(0.5,0.9)*{^{{^{\bar{1}}}}},
(5,0)*{^{{^{\bar{2}}}}},
(10.5,0.9)*{^{{^{\bar{3}}}}},
(10,-2)*{\bulletlet}="B";
(0,-2)*{\bulletlet}="A";
"A"; "A" **\crv{(6,6) & (-6,6)};
"B"; "B" **\crv{(16,6) & (4,6)};
\ar @{-} "A";"B" <0pt>
\endxy$}}
\end{array}
-
\betagin{array}{c}\resizebox{9mm}{!}{
\xy
(2.0,-3.5)*{^{{^{\bar{2}}}}},
(-2,-3.5)*{^{{^{\bar{1}}}}},
(0.5,4)*{^{{^{\bar{3}}}}},
(0,-8)*{\bulletlet}="C";
(0,3)*{\bulletlet}="A1";
(0,3)*{\bulletlet}="A2";
"C"; "A1" **\crv{(-5,-9) & (-5,4)};
"C"; "A2" **\crv{(5,-9) & (5,4)};
\ar @{-} "A1";"C" <0pt>
\endxy}
\end{array}
$$
so that the above map can be re-written exactly in the form first found in \cite{Me2} via a completely independent calculation using ribbon graphs only,
$$
i^Q: \betagin{array}{c}\betagin{xy}
<0mm,-1mm>*{\bulletlet};<-4mm,3mm>*{^{_1}}**@{-},
<0mm,-1mm>*{\bulletlet};<0mm,3mm>*{^{_2}}**@{-},
<0mm,-1mm>*{\bulletlet};<4mm,3mm>*{^{_3}}**@{-},
\end{xy}\end{array}
\longrightarrow
\frac{1}{2}\left(-\hspace{-1mm}
\betagin{array}{c}\resizebox{9mm}{!}{
\xy
(2.0,-3.5)*{^{{^{\bar{2}}}}},
(-2,-3.5)*{^{{^{\bar{1}}}}},
(0.5,4)*{^{{^{\bar{3}}}}},
(0,-8)*{\bulletlet}="C";
(0,3)*{\bulletlet}="A1";
(0,3)*{\bulletlet}="A2";
"C"; "A1" **\crv{(-5,-9) & (-5,4)};
"C"; "A2" **\crv{(5,-9) & (5,4)};
\ar @{-} "A1";"C" <0pt>
\endxy}
\end{array}
+\hspace{-1mm}
\betagin{array}{c}\resizebox{9mm}{!}{
\xy
(2.0,-3.5)*{^{{^{\bar{1}}}}},
(-2,-3.5)*{^{{^{\bar{2}}}}},
(0.5,4)*{^{{^{\bar{3}}}}},
(0,-8)*{\bulletlet}="C";
(0,3)*{\bulletlet}="A1";
(0,3)*{\bulletlet}="A2";
"C"; "A1" **\crv{(-5,-9) & (-5,4)};
"C"; "A2" **\crv{(5,-9) & (5,4)};
\ar @{-} "A1";"C" <0pt>
\endxy}
\end{array}\right)
$$
This version of the map $i^Q$ was used in \cite{Me2} to show that this map is injective
on infinitely many elements of $q\mathcal{L}\mathit{ieb}_{d-1,d}$, thereby constructing infinitely many higher genus cohomology classes in $H^\bullet({\mathcal M}_{g,m+n})$ from the unique cohomology class in $H^\bullet({\mathcal M}_{0,3})$ via properadic compositions in $\mathcal{G} rav_d\subset H^\bullet(\mathsf{tw}\mathcal{R} \mathcal{G} ra_d)$.
\subsection{Quasi-Lie bialgebra structures on Hochschild cohomologies of cyclic $A_\infty$-algebras}\label{3: subsec on qLieb on Cyc(A)} Let $A$ be a graded vector space equipped with a degree $-n$
non-degenerate scalar product
\begin{equation}\label{3: scalar product in A}
\begin{array}{rccc}
\langle\ ,\ \rangle: & A\odot A & \longrightarrow & {\mathbb K}[-n]\\
&a\odot b & \longrightarrow & \langle a,b \rangle=(-1)^{|a||b|} \langle b,a \rangle.
\end{array}
\end{equation}
One has an associated isomorphism of graded vector spaces,
$$
A\simeq A^*[-n]:={\mathrm H\mathrm o\mathrm m}(A,{\mathbb K})[-n],
$$
and an induced non-degenerate pairing
$$
\betagin{array}{rccc}
\Theta: & \otimes^2\left(A[n-1]\right) & \longrightarrow & {\mathbb K}[n-2]\equiv {\mathbb K}[1-(3-n)]\\
& (a'={\mathfrak s}^{n-1}a, b'= {\mathfrak s}^{n-1} b) & \longrightarrow & {\mathfrak s}^{2n-2} \langle a,b \rangle
\end{array}
$$
which satisfies the following equation (cf.\ \S 2.3 in \cite{MW1} with $d=3-n$ in the notation of that paper)
\begin{eqnarray*}
\Theta(b', a') &=& {\mathfrak s}^{2n-2} \langle b,a \rangle \\
&=& (-1)^{|a||b|} {\mathfrak s}^{2n-2} \langle a,b \rangle \\
&=& (-1)^{(|a'|+n-1)(|b'|+n-1)} \Theta(a', b') \\
&=& (-1)^{|a'| + |b'| + (3-n)}\Theta(a', b')
\end{eqnarray*}
where we used the fact that $\langle a,b \rangle=0$ unless $|a|+|b|=n$. By Theorem 4.2.2 in \cite{MW1} this symmetry equation
implies that the (reduced) space of cyclic words
$$
Cyc(A):=\bigoplus_{p\geq 2} \left(\otimes^p(A[n-1])\right)^{{\mathbb Z}_p}
$$
carries canonically a representation of the properad $\mathcal{R} \mathcal{G} ra_{3-n}$ discussed in the previous subsection. In particular this space is a $\mathcal{L}\mathit{ieb}_{3-n,3-n}$-algebra (see (\ref{3: Lieb_dd to RGra_d})) with the Lie bracket given by a simple formula,
$$
\{(a'_1\otimes\cdots\otimes a'_k)^{{\mathbb Z}_k}, (b'_1\otimes \cdots\otimes b'_l)^{{\mathbb Z}_l}\}:=\hspace{100mm}
$$
$$
\hspace{9mm} \sum_{i=1}^k\sum_{j=1}^l
\pm\,
\Theta(a'_i,b'_j)\, (a'_1\otimes \cdots\otimes a'_{i-1}\otimes b'_{j+1}\otimes \cdots \otimes b'_l\otimes b'_1\otimes \cdots \otimes b'_{j-1}\otimes a'_{i+1}\otimes\cdots\otimes a'_k)^{{\mathbb Z}_{k+l-2}}
$$
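For instance, in the simplest non-trivial case $k=l=2$ the bracket above reads (with the signs left unspecified, as in the general formula)
$$
\{(a'_1\otimes a'_2)^{{\mathbb Z}_2}, (b'_1\otimes b'_2)^{{\mathbb Z}_2}\}
= \pm\Theta(a'_1,b'_1)\,(b'_2\otimes a'_2)^{{\mathbb Z}_2}
\pm\Theta(a'_1,b'_2)\,(b'_1\otimes a'_2)^{{\mathbb Z}_2}
\pm\Theta(a'_2,b'_1)\,(a'_1\otimes b'_2)^{{\mathbb Z}_2}
\pm\Theta(a'_2,b'_2)\,(a'_1\otimes b'_1)^{{\mathbb Z}_2},
$$
each summand being again a cyclic word of length $k+l-2=2$.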
Maurer-Cartan elements of this $\mathcal{L} \mathit{ie}_d$-algebra are degree $d=3-n$ elements
$
\gamma\in Cyc(A)$ such that $\{\gamma, \gamma\}=0$. There is a one-to-one correspondence
between such Maurer-Cartan elements and\footnote{One can use this statement as a definition of a degree $n$ cyclic $A_\infty$-algebra structure on $A$.} {\em degree $n$ cyclic strongly homotopy associative algebra structures in $A$}. The dg Lie algebra
$$
CH(A):= \left( Cyc(A), d_\gamma:=\{\gamma,\ \} \right)
$$
is precisely the (reduced) cyclic Hochschild complex of the cyclic $A_\infty$-algebra $(A,\gamma)$.
By the very definition of the twisting endofunctor $\mathsf{tw}$, the chain gravity properad ${\mathcal C} h\mathcal{G} rav_{3-n}$
admits a canonical representation in $CH(A)$ for any degree $n$ cyclic $A_\infty$-algebra $A$.
In particular, the gravity properad $\mathcal{G} rav_{3-n}$ acts on its cohomology $H^\bullet(CH(A))$ implying, by Theorem {\ref{2: qLien to twP}}, the following observation.
\subsubsection{\bf Corollary} {\em The Hochschild cohomology $H^\bullet(CH(A))$ of any degree $n$ cyclic $A_\infty$-algebra is a quasi-Lie bialgebra; more precisely, it carries a representation of the properad $q\mathcal{L}\mathit{ieb}_{2-n,3-n}$.}
If $A$ is a dg Poincar\'e model of some compact $n$-dimensional manifold $M$, then
there is a linear map
$$
\bar{H}_\bullet^{S^1}(LM) \longrightarrow H^\bullet(CH(A))
$$
from the reduced $S^1$-equivariant homology $\bar{H}_\bullet^{S^1}(LM)$ of the free loop space $LM$ of $M$. If $M$ is simply connected, this map is an isomorphism, so that the gravity properad $\mathcal{G} rav_d$ acts
on $\bar{H}_\bullet^{S^1}(LM)$. However, the Maurer-Cartan element associated to any Poincar\'e model is a relatively simple linear combination of cyclic words, and the action just mentioned is largely trivialized. That ``trivialization'' is studied in detail in
\cite{Me4}, where it is shown that the action of ${\mathcal C} h\mathcal{G} rav_{3-n}$ on $CH(A)$
factors through a quotient properad ${\mathcal S}{\mathcal T}_{3-n}$ which contains $\mathcal{L}\mathit{ieb}_{d,d}$ \cite{CS}, the gravity operad \cite{Ge, We} and the four $\mathcal{H}\mathit{olieb}^\diamond_{d-1}$-operations found in \cite{Me3}.
One has to consider a less trivial class of cyclic $A_\infty$-algebras $A$ (compared to the
class of Poincar\'e models) in order to get a chance to see a less trivial action of the gravity properad on the associated cyclic Hochschild cohomologies.
{\Large
\section{\bf A full twisting of properads under $\mathcal{H}\mathit{olieb}_{c,d}$}
}
\subsection{Reminder on the polydifferential functor ${\mathcal O}$}
There is an exact polydifferential functor \cite{MW1}
$$
\betagin{array}{rccc}
{\mathcal O}: & \text{\sf Category of dg props} & \longrightarrow & \text{\sf Category of dg operads}\\
& {\mathcal P} &\longrightarrow & {\mathcal O}{\mathcal P}
\end{array}
$$
which has the following property:
given any dg prop ${\mathcal P}$ and an arbitrary representation $\rho: {\mathcal P}\rightarrow {\mathcal E} nd_V$
in a dg vector space $V$, the associated dg operad ${\mathcal O}{\mathcal P}$ has an associated representation, ${\mathcal O}\rho: {\mathcal O}({\mathcal P})\rightarrow {\mathcal E} nd_{\odot^\bullet V}$,
in the graded commutative tensor algebra ${\odot^\bullet} V$ given in terms of polydifferential operators (with respect to the standard multiplication in ${\odot^\bullet} V$). Roughly speaking, the functor ${\mathcal O}$ symmetrizes all outputs of elements of ${\mathcal P}$, and splits all inputs into symmetrized blocks; pictorially, if we identify elements of ${\mathcal P}$ with decorated corollas as in (\ref{3: generic elements of cP as (m,n)-corollas}), then every element of ${\mathcal O}({\mathcal P})$ can be identified with a decorated corolla
which is allowed to have the {\em same}\, numerical labels assigned to its different in-legs, and also with the same label $1$ assigned to all of its outgoing legs\footnote{Treating output and input legs in this procedure on equal footing, one gets a polydifferential functor ${\mathcal D}$ in the category of dg props such that ${\mathcal O}({\mathcal P})$ is a sub-operad of ${\mathcal D}({\mathcal P})$. It was introduced and studied in \cite{MW3}.}
$$
\betagin{array}{c}\resizebox{16mm}{!}{\xy
(-9,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-7.5,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-6,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-1,-6)*{};
(0,0)*{\circ }
**\dir{-};
(0,-6)*{};
(0,0)*{\circ }
**\dir{-};
(1,-6)*{};
(0,0)*{\circ }
**\dir{-};
(9,-6)*{};
(0,0)*{\circ }
**\dir{-};
(7.5,-6)*{};
(0,0)*{\circ }
**\dir{-};
(6,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-3,-5)*{...};
(3,-5)*{...};
(-9,-7.5)*{_1};
(-7.5,-7.5)*{_1};
(-6,-7.5)*{_1};
(-1.1,-7.5)*{_i};
(1.1,-7.5)*{_i};
(0,-7.5)*{_i};
(7.8,-7.5)*{_k};
(6.3,-7.5)*{_k};
(9.6,-7.5)*{_k};
(0,7)*{{ }^{1111}};
(-2,6)*{};
(0,0)*{\circ }
**\dir{-};
(-0.7,6)*{};
(0,0)*{\circ }
**\dir{-};
(0.7,6)*{};
(0,0)*{\circ }
**\dir{-};
(2,6)*{};
(0,0)*{\circ }
**\dir{-};
\endxy}\end{array} \ \ \ {\simeq} \ \ \
\betagin{array}{c}\resizebox{16mm}{!}{\xy
(-9,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-7.5,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-6,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-1,-6)*{};
(0,0)*{\circ }
**\dir{-};
(0,-6)*{};
(0,0)*{\circ }
**\dir{-};
(1,-6)*{};
(0,0)*{\circ }
**\dir{-};
(9,-6)*{};
(0,0)*{\circ }
**\dir{-};
(7.5,-6)*{};
(0,0)*{\circ }
**\dir{-};
(6,-6)*{};
(0,0)*{\circ }
**\dir{-};
(-3,-5)*{...};
(3,-5)*{...};
(-2,6)*{};
(0,0)*{\circ }
**\dir{-};
(-0.7,6)*{};
(0,0)*{\circ }
**\dir{-};
(0.7,6)*{};
(0,0)*{\circ }
**\dir{-};
(2,6)*{};
(0,0)*{\circ }
**\dir{-};
(-8.2,-7.9)*+\hbox{${{1}}$}*{\mathfrak r}m{o};
(8.2,-7.9)*+\hbox{${{\, k\, }}$}*{\mathfrak r}m{o};
(0,-7.9)*+\hbox{${{\, i\, }}$}*{\mathfrak r}m{o};
(0,8)*+\hbox{${{\, 1\, }}$}*{\mathfrak r}m{o};
\endxy}\end{array}
$$
Since we want to apply the above construction to dg props ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$, in this paper we are more interested in its degree-shifted version, ${\mathcal O}_{c,d}$, which was also introduced in \cite{MW1},
$$
{\mathcal O}_{c,d}{\mathcal P}:= {\mathcal O}({\mathcal P}\{c\}).
$$
The notation may be slightly misleading as ${\mathcal O}_{c,d}$ does not depend on $d$, but it suits us well in the context of this paper.
We refer to \cite{MW2} for more details about the functor ${\mathcal O}_{c,d}$ (and to \cite{MW3} for its extension ${\mathcal D}$) and discuss next the particular example, the dg operad
$$
{\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}\simeq {\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}\equiv\{{\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(k)\}_{k\geq 1}.
$$
The ${\mathbb S}_k$-module ${\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(k)$, $k\geq 1$, is generated by graphs $\gamma$ constructed from arbitrary decorated graphs $\Gamma$ from $\mathcal{H}\mathit{olieb}_{0,c+d}(m,n)$, $\forall\, m,n\geq 1$,
as follows:
\betagin{itemize}
\item[(i)]
draw $k$ new big white vertices labelled from $1$ to $k$ (these will be the inputs of $\gamma$) and one extra output big white vertex,
\item[(ii)] symmetrize all $m$ output legs of $\Gamma$ and attach them to the unique output white vertex;
\item[(iii)] partition the set $[n]$ of input legs of $\Gamma$ into $k$ ordered disjoint (not necessarily non-empty) subsets
$$
[n]=I_1\sqcup \ldots \sqcup I_k, \ \ \ \ \#I_i\geq 0, i\in [k],
$$
then symmetrize the legs in each subset $I_i$ and attach them (if any) to the $i$-labelled input white vertex.
\end{itemize}
For example, the element
$$
\Gamma=\betagin{array}{c}\resizebox{11mm}{!}{
\xy
(0,0)*{\bulletlet}="o",
(-2,5)*{}="2",
(4,5)*{\bulletlet}="3",
(4,10)*{}="u",
(4,0)*{}="d1",
(7,0)*{}="d2",
(10,0)*{}="d3",
(-1.5,-5)*{}="5",
(1.5,-5)*{}="6",
(4,-5)*{}="7",
(-4,-5)*{}="8",
(-2,7)*{_1},
(4,12)*{_2},
(-1.5,-7)*{_2},
(1.5,-7)*{_3},
(10.4,-1.6)*{_6},
(-4,-7)*{_1},
(4,-1.6)*{_4},
(7,-1.6)*{_5},
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"8" <0pt>
\ar @{-} "3";"u" <0pt>
\ar @{-} "3";"d1" <0pt>
\ar @{-} "3";"d2" <0pt>
\ar @{-} "3";"d3" <0pt>
\endxy}\end{array}
\in \mathcal{H}\mathit{olieb}_{0,c+d}(2,6)
$$
can produce the following generator
$$
\gamma=\betagin{array}{c}\resizebox{15mm}{!}{ \xy
(-1.5,5)*{}="1",
(1.5,5)*{}="2",
(9,5)*{}="3",
(0,0)*{\bulletlet}="A";
(9,3)*{\bulletlet}="O";
(5,12)*+{\hspace{2mm}}*{\mathfrak r}m{o}="X";
(-6,-10)*+{_1}*{\mathfrak r}m{o}="B";
(6,-10)*+{_2}*{\mathfrak r}m{o}="C";
(14,-10)*+{_3}*{\mathfrak r}m{o}="D";
(22,-10)*+{_4}*{\mathfrak r}m{o}="E";
"A"; "B" **\crv{(-5,-0)};
"A"; "D" **\crv{(5,-0.5)};
"A"; "C" **\crv{(-5,-7)};
"A"; "O" **\crv{(5,5)};
\ar @{-} "O";"C" <0pt>
\ar @{-} "O";"D" <0pt>
\ar @{-} "O";"X" <0pt>
\ar @{-} "A";"X" <0pt>
\ar @{-} "O";"B" <0pt>
\endxy}
\end{array} \in {\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(4)\simeq{\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}(4)
$$
in the associated polydifferential operad (note that one and the same element $\Gamma\in \mathcal{H}\mathit{olieb}_{0,c+d}$ can give rise to several different generators of ${\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}$). The labelled white vertices of elements of ${\mathcal O}(\mathcal{H}\mathit{olieb}_{0,c+d})$ are called {\em external}, while the unlabelled black vertices (more precisely, the vertices of the underlying elements of $\mathcal{H}\mathit{olieb}_{0,c+d}$) are called {\em internal}. The same terminology can be applied to ${\mathcal O}{\mathcal P}$ for any dg prop ${\mathcal P}$.
For any $k,l\geq 1$ and $i\in [k]$ the operadic composition
$$
\betagin{array}{rccc}
\circ_i: &{\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(k)\otimes {\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(l) & \longrightarrow &
{\mathcal O}\mathcal{H}\mathit{olieb}_{0,c+d}(k+l-1)\\
& \Gamma_1\otimes \Gamma_2 &\longrightarrow & \Gamma_1\circ_i \Gamma_2
\end{array}
$$
is defined by
\betagin{itemize}
\item[(i)]
substituting the graph $\Gamma_2$ (with the output external vertex erased so that all edges connected to that external vertex are hanging at this step loosely) inside the big circle of the $i$-labelled external vertex of $\Gamma_1$,
\item[(ii)] erasing that big $i$-th labelled external circle (so that all edges of $\Gamma_1$ connected to that $i$-th external vertex, if any, are also hanging loosely), and
\item[(iii)] finally taking the sum over all possible ways to do the following three operations in any order,
\betagin{itemize}
\item[(a)] glue some (or all or none) hanging edges of $\Gamma_2$ to the same number of hanging edges of $\Gamma_1$,
\item[(b)] attach some (or all or none) hanging edges of $\Gamma_2$ to the output external vertex of $\Gamma_1$,
\item[(c)] attach some (or all or none) hanging edges of $\Gamma_1$ to the external input vertices of $\Gamma_2$,
\end{itemize}
in such a way that no hanging edges are left.
\end{itemize}
We refer to \cite{MW1,MW3} for concrete examples of such compositions.
\subsubsection{\bf Proposition \cite{MW1}}\label{4: Prop on map from Lie^+ to OHoLB}
{\em There is a morphism of dg operads
\begin{equation}\label{3: map Holie^+_{c+d} to f_{c,d}(HoLBcd)}
\mathcal{H} \mathit{olie}^+_{c+d} \to {\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}
\end{equation}
given explicitly on the $(1,1)$-generator by
$$
\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
\end{xy}
\longrightarrow
\sum_{m\geq 2} \betagin{array}{c}\resizebox{12mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^1}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^1}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^1}**@{-},
<0mm,9mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,11mm>*{^m},
<0.0mm,-0.44mm>*{};<0mm,-5mm>*{}**@{-},
\end{xy}}\end{array}
$$
and on the remaining $(1,n)$-generators with $n\geq 2$ by}
$$
\betagin{array}{c}\resizebox{20mm}{!}{ \xy
(1,-5)*{\ldots},
(-13,-7)*{_1},
(-8,-7)*{_2},
(-3,-7)*{_3},
(13,-7)*{_n},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-12,-5)*{}="b_1",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
(12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_1" <0pt>
\ar @{-} "a";"b_4" <0pt>
\ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}
\longrightarrow
\sum_{m\geq 1}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(0,8)*{^1}="1";
(-2,8)*{^1}="2";
(2,8)*{^1}="3";
(2,-4)*{\ldots};
(0,+3)*{\bulletlet}="L";
(-8,-5)*+{_1}*{\mathfrak r}m{o}="B";
(-3,-5)*+{_2}*{\mathfrak r}m{o}="C";
(8,-5)*+{_n}*{\mathfrak r}m{o}="D";
<0mm,12mm>*{\overbrace{ \ \ \ \ \ \ \ \ }},
<0mm,14.6mm>*{_m},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\endxy}
\end{array}
$$
The proof is a straightforward calculation (cf.\ \S 5.5 and \S 5.7 in \cite{MW1}).
Using T.\ Willwacher's twisting endofunctor discussed in \S 2 one obtains via the morphism
(\ref{3: map Holie^+_{c+d} to f_{c,d}(HoLBcd)}) a dg operad
$
\mathsf{tw}\, {\mathcal O}_{c,d} \mathcal{H}\mathit{olieb}_{c,d}
$.
\subsubsection{\bf Properads under $\mathcal{H}\mathit{olieb}_{c,d}$} Assume ${\mathcal P}$ is a dg properad {\em under}\, $\mathcal{H}\mathit{olieb}_{c,d}$, i.e.\ one which comes equipped with a non-trivial morphism
\begin{equation}\label{4: i from HoLB to P}
\betagin{array}{rccc}
i: & \mathcal{H}\mathit{olieb}_{c,d} & \longrightarrow & {\mathcal P}
\\
&
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\bulletlet}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array}
&\longrightarrow &
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\circledcirc}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array}
\end{array}
\end{equation}
Note that the corollas on the right hand side (the ones with $\circledcirc$ as the vertex) stand from now on for the images of the generators of $\mathcal{H}\mathit{olieb}_{c,d}$ under the map $i$, so that {\it some (or all) of them can in fact be equal to zero}.
Applying the functor ${\mathcal O}_{c,d}$ and using the above proposition, we obtain an associated chain of morphisms of dg operads,
$$
\iota: \mathcal{H} \mathit{olie}_{c+d}^+ \longrightarrow {\mathcal O}_{c,d} \mathcal{H}\mathit{olieb}_{c,d} \longrightarrow {\mathcal O}_{c,d}{\mathcal P}
$$
and hence a morphism of the associated twisted dg operads,
$$
\mathsf{tw}{\mathcal O}(i): \mathsf{tw}{\mathcal O}_{c,d} \mathcal{H}\mathit{olieb}_{c,d}\simeq \mathsf{tw}{\mathcal O}(\mathcal{H}\mathit{olieb}_{0,c+d}) \longrightarrow \mathsf{tw}{\mathcal O}_{c,d}{\mathcal P}
$$
The degree of the generating $(m,n)$-corolla $\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\bulletlet}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array}$ of $\mathcal{H}\mathit{olieb}_{0,c+d}$ is equal to $1+c+d- (c+d)n$, so its $m$ out-legs carry the trivial representation of ${\mathbb S}_m$ (and are assigned degree $0$), while its in-legs are (skew)symmetrized according to the parity of $c+d\in {\mathbb Z}$ (and are assigned degree $c+d$); the vertex is assigned the degree $1+c+d$.
{\em Hence it is only the sum $c+d$ of our integer parameters which plays a role in this story}.
Therefore we can assume without loss of generality that
$$
c=0,\ \ \ d\ \text{is an arbitrary integer},
$$
from now on, i.e. work solely with dg props under $\mathcal{H}\mathit{olieb}_{0,d}$.
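As a quick consistency check of the degree count just given (now with $c=0$), the generating $(m,n)$-corolla of $\mathcal{H}\mathit{olieb}_{0,d}$ has degree
$$
1+d-dn=\left\{\begin{array}{ll} 1 & \text{for}\ n=1,\\ 1-d & \text{for}\ n=2,\\ 1-2d & \text{for}\ n=3,\end{array}\right.
$$
which agrees with the cohomological degree of the structure maps $\mu_{m,n}$ used in the next subsection.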
\subsection{Maurer-Cartan elements of strongly homotopy Lie bialgebras}\label{4: subsec on MC elements of HoLBcd} Consider a $\mathcal{H}\mathit{olieb}_{0,d}$-algebra structure in a dg vector space $(V,\delta)$, i.e.\ a morphism of properads
$$
\rho: \mathcal{H}\mathit{olieb}_{0,d} \longrightarrow {\mathcal E} nd_V.
$$
Its {\it Maurer-Cartan element}\, is, by definition, a Maurer-Cartan element $\gamma\in \odot^{\geq 1}V$ of the associated $\mathcal{H} \mathit{olie}_d^+$ structure induced on $\odot^{\geq 1}V$
via the canonical monomorphism
\[
\mathcal{H} \mathit{olie}^+_{d} \to {\mathcal O}\mathcal{H}\mathit{olieb}_{0,d}
\]
described explicitly in Proposition {\ref{4: Prop on map from Lie^+ to OHoLB}}. Let us describe it in more detail.
The $\mathcal{H}\mathit{olieb}_{0,d}$-structure on $V$ is given by a collection of linear maps of cohomological degree $1+d-dn$,
$$
\rho\left(\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\bulletlet}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array}
\right)=:\mu_{m,n}: \otimes^n V \longrightarrow \odot^m V
$$
satisfying compatibility conditions. Each such linear map gives rise to a map
$$
\hat{\mu}_{m,n}: \otimes^n (\odot^{\geq 1}V) \longrightarrow \odot^{\geq 1}V
$$
given, in an arbitrary basis $\{p_\alpha\}$ of $V$, as follows
$$
\hat{\mu}_{m,n}(f_1,\ldots, f_n):= \sum \pm\, \mu_{m,n}(p_{\alpha_1}\otimes p_{\alpha_2}\otimes \ldots \otimes p_{\alpha_n})\cdot
\frac{{\partial} f_1}{{\partial} p_{\alpha_1}} \frac{{\partial} f_2}{{\partial} p_{\alpha_2}}\cdots \frac{{\partial} f_n}{{\partial} p_{\alpha_n}},\
\ \ \forall\, f_1,\ldots, f_n\in \odot^{\geq 1}V.
$$
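For instance, in the simplest case $n=1$ the above formula specializes (with the same unspecified Koszul signs) to
$$
\hat{\mu}_{m,1}(f)= \sum_{\alpha} \pm\, \mu_{m,1}(p_{\alpha})\cdot \frac{{\partial} f}{{\partial} p_{\alpha}},\ \ \ \forall\, f\in \odot^{\geq 1}V,
$$
i.e.\ $\hat{\mu}_{m,1}$ acts as a first order differential operator on the symmetric algebra $\odot^{\geq 1}V$, determined by the values of $\mu_{m,1}$ on the generators $p_\alpha$.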
Then a degree $d$ element $\gamma \in \odot^{\geq 1}V$ is a Maurer-Cartan element of the (appropriately filtered or nilpotent) $\mathcal{H}\mathit{olieb}_{0,d}$-algebra structure on $V$ if and only if the following equation holds,
\begin{equation}\label{4: MC eqn for HoLB0d}
\delta\gamma+ \sum_{m,n\geq 1}\frac{1}{n!}\hat{\mu}_{m,n}(\underbrace{\gamma,\ldots,\gamma}_n)=0.
\end{equation}
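For orientation, if the $\mathcal{H}\mathit{olieb}_{0,d}$-structure on $V$ is a genuine Lie bialgebra structure, i.e.\ the only non-zero operations are the bracket $\mu_{1,2}$ and the cobracket $\mu_{2,1}$, then equation (\ref{4: MC eqn for HoLB0d}) reduces to
$$
\delta\gamma\ +\ \hat{\mu}_{2,1}(\gamma)\ +\ \frac{1}{2}\,\hat{\mu}_{1,2}(\gamma,\gamma)\ =\ 0 .
$$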
The operator
\begin{equation}\label{4: twisted by MC differentiail in V}
\delta_\gamma v :=\delta v + \sum_{n\geq 1}\frac{1}{n!}{\mu}_{1,n+1}(\underbrace{\gamma_1,\ldots,\gamma_1}_n, v ),\ \ \ \ \forall\, v\in V,
\end{equation}
with $\gamma_1$ being the image of $\gamma$ under the projection $\odot^{\geq 1}V \rightarrow V$,
is a twisted differential on $V$.
\subsubsection{\bf Combinatorial incarnation}\label{4: subsec on comb incarn of HoLB MC elements}
Maurer-Cartan elements of $\mathcal{H}\mathit{olieb}_{0,d}$-algebras admit a simple combinatorial description as (a representation in $V$ of) an infinite linear combination
$$
\gamma \simeq \sum_{m\geq 1}{\mathfrak r}ac{1}{m!} \betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m},
\end{xy}}\end{array}
$$
of degree $d$ $(m,0)$-corollas with symmetrized outgoing legs. One extends the standard differential $\delta$ in $\mathcal{H}\mathit{olieb}_{0,d}$ to such new generating corollas as follows
\begin{equation}\label{4: d on HoLB MC}
\delta\ \betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m},
\end{xy}}\end{array}
=
- \sum_{k\geq 1, [m]=\sqcup [m_\bullet]\atop { m_{0}\geq 1, k+m_0\geq 3\atop m_1,...,m_k\geq 0}}\frac{1}{k!}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+3)*{\bulletlet}="L";
(-15,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_n}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array} \ \ \ \ \forall\ m\geq 1.
\end{equation}
where we take the sum over all partitions of the ordered set $[m]$ into $k+1$ ordered subsets. For $k=1$ we recover the standard formula (cf.\ (\ref{2: delta on (1,0) MC corolla})).
Let us check first that the above definition makes sense.
\subsubsection{\bf Lemma}
$\delta^2\ \betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m},
\end{xy}}\end{array}\equiv 0$ \ {\it for any $m\geq 1$}.
\begin{proof} The proof is based on a straightforward calculation which is similar to the one made in the proof of Lemma {\ref{2: Lemma on d^2 for MC Lie}} above. The only really new phenomenon is the appearance in $\delta^2$ of summands of the form
\begin{equation}\label{4: terms of the for lastochka}
\sum_{[m]= [m'_\bullet] \sqcup [m''_\bullet]\sqcup [m_0] }\ \ \frac{1}{k'!} \frac{1}{k''!}
\betagin{array}{c}\resizebox{55mm}{!}{ \xy
(8,8)*{}="1";
(10,8)*{}="2";
(12,8)*{}="3";
(17,8)*{}="n11";
(20,7)*{...};
(22,8)*{}="n12";
(24,8)*{}="n21";
(26.4,7)*{...};
(29,8)*{}="n22";
(36,8)*{}="nn1";
(38,7)*{...};
(40,8)*{}="nn2";
(-8,8)*{}="1'";
(-10,8)*{}="2'";
(-12,8)*{}="3'";
(-17,8)*{}="n11'";
(-20,7)*{...};
(-22,8)*{}="n12'";
(-24,8)*{}="n21'";
(-26.4,7)*{...};
(-29,8)*{}="n22'";
(-36,8)*{}="nn1'";
(-38,7)*{...};
(-40,8)*{}="nn2'";
(10,+3)*{\bulletlet}="R";
(20,-5)*{\bulletlet}="B1";
(27,-5)*{\bulletlet}="C1";
(38,-5)*{\bulletlet}="D1";
<30mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k''}},
(32,-5)*{...};
<10mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m''_0}},
<19mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m''_1}},
<26mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m''_2}},
<38mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m''_n}},
(0,-5)*{\bulletlet}="0";
(-3,8)*{}="s1";
(-0,7)*{...};
(3,8)*{}="s2";
<0mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
(-10,+3)*{\bulletlet}="L";
(-20,-5)*{\bulletlet}="B1'";
(-27,-5)*{\bulletlet}="C1'";
(-38,-5)*{\bulletlet}="D1'";
<-30mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k'}},
(-32,-5)*{...};
<-10mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m'_0}},
<-19mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m'_1}},
<-26mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m'_2}},
<-38mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m'_n}},
\ar @{-} "D1";"R" <0pt>
\ar @{-} "C1";"R" <0pt>
\ar @{-} "B1";"R" <0pt>
\ar @{-} "1";"R" <0pt>
\ar @{-} "2";"R" <0pt>
\ar @{-} "3";"R" <0pt>
\ar @{-} "1";"R" <0pt>
\ar @{-} "2";"R" <0pt>
\ar @{-} "n11";"B1" <0pt>
\ar @{-} "n12";"B1" <0pt>
\ar @{-} "n21";"C1" <0pt>
\ar @{-} "n22";"C1" <0pt>
\ar @{-} "nn1";"D1" <0pt>
\ar @{-} "nn2";"D1" <0pt>
\ar @{-} "0";"L" <0pt>
\ar @{-} "0";"R" <0pt>
\ar @{-} "0";"s1" <0pt>
\ar @{-} "0";"s2" <0pt>
\ar @{-} "D1'";"L" <0pt>
\ar @{-} "C1'";"L" <0pt>
\ar @{-} "B1'";"L" <0pt>
\ar @{-} "1'";"L" <0pt>
\ar @{-} "2'";"L" <0pt>
\ar @{-} "3'";"L" <0pt>
\ar @{-} "1'";"L" <0pt>
\ar @{-} "2'";"L" <0pt>
\ar @{-} "n11'";"B1'" <0pt>
\ar @{-} "n12'";"B1'" <0pt>
\ar @{-} "n21'";"C1'" <0pt>
\ar @{-} "n22'";"C1'" <0pt>
\ar @{-} "nn1'";"D1'" <0pt>
\ar @{-} "nn2'";"D1'" <0pt>
\endxy}
\end{array}
\end{equation}
which cancel each other for symmetry reasons.
\end{proof}
\subsection{Full twisting of properads under $\mathcal{H}\mathit{olieb}_{c,d}$}\label{4: Subsec on Def of Tw of Tw(HoLB) and Tw(P)}
Let ${\mathcal P}$ be a properad under $\mathcal{H}\mathit{olieb}_{0,d}$ (as in (\ref{4: i from HoLB to P})). We construct
the associated {\em fully twisted}\, properad $(\mathsf{Tw} {\mathcal P},{\p}_\centerdot)$ in several steps.
First we define $\widetilde{\mathsf{Tw}}{\mathcal P}$
to be the properad generated freely by ${\mathcal P}$ and a family of new $(m,0)$-corollas, $\begin{array}{c}\resizebox{11mm}{!}{\begin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m} \end{xy}}\end{array}$, $m\geq 1$, of cohomological degree $d$ which are called {\it MC generators}.
We make this into a dg properad by using
the original differential ${\p}$ on elements of ${\mathcal P}$ and extending its action to the new generators by (cf.\ (\ref{4: d on HoLB MC}))
\begin{equation}\label{3: d on MC in TW(P)}
{\p}\ \betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m},
\end{xy}}\end{array}
=
- \sum_{k\geq 1, [m]=\sqcup [m_\bullet]\atop { m_{0}\geq 1, k+m_0\geq 3\atop m_1,...,m_k\geq 0}}\frac{1}{k!}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+3)*{\circledcirc}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_n}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array} \ \ \ \ \forall\ m\geq 1.
\end{equation}
Note that the corollas with $\circledcirc$-vertices are the images of the generators of $\mathcal{H}\mathit{olieb}_{0,d}$
in ${\mathcal P}$ under the morphism $i$ (and hence some of them can, in principle, be zero). The map (\ref{4: i from HoLB to P}) extends to a morphism
$$
\widetilde{\mathsf{Tw}}(i): \widetilde{\mathsf{Tw}}\mathcal{H}\mathit{olieb}_{0,d} \longrightarrow \widetilde{\mathsf{Tw}}{\mathcal P}
$$
which restricts to the identity map on the MC generators.
Next we notice the following surprising result, which tells us that, essentially, the MC elements
originating in ${\mathcal O}(\mathcal{H}\mathit{olieb}_{c,d})$ can be used to twist not only the operad ${\mathcal O}(\mathcal{H}\mathit{olieb}_{c,d})$ (which is obvious) but also the
properad $\mathcal{H}\mathit{olieb}_{c,d}$ itself!
\subsubsection{\bf Theorem}\label{3: Th on HoLB^+ to Tw(HoLB)} {\em
There is a canonical monomorphism of dg props
$$
\mathcal{H}\mathit{olieb}_{0,d}^+ \longrightarrow \widetilde{\mathsf{Tw}}\mathcal{H}\mathit{olieb}_{0,d}
$$
given on the generating $(m,n)$-corollas with $m,n\geq 1$ as follows,
$$
\betagin{array}{c}\resizebox{12mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44
mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\longrightarrow
\sum_{k\geq 0, [m]=\sqcup [m_\bullet],\atop { m_{0}\geq 1, k+m_0+n\geq 3\atop m_1,...,m_k\geq 0}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{23mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-29,-8)*+{_1}="l2";
(-27,-8)*+{_2}="l1";
(-24,-6)*{...};
(-20,-8)*+{_n}="ln";
(-3,-5)*{...};
(-25,+3)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_k}},
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\ar @{-} "l1";"L" <0pt>
\ar @{-} "l2";"L" <0pt>
\ar @{-} "ln";"L" <0pt>
\endxy}
\end{array}.
$$
}
\begin{proof} One has to check that the above explicit map of properads respects their differentials, $\delta^+$ on the l.h.s.\ and ${\partial}$ on the r.h.s. This is a straightforward calculation which is analogous to (but much more tedious than) the one used in the proof of Theorem {\ref{2: map from HoLB+ to tilda twA}}. The only really new aspect is again the appearance of terms (\ref{4: terms of the for lastochka}) which cancel out for symmetry reasons. We omit full details.
\end{proof}
Hence for any properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{0,d}$ there is an associated morphism
of dg properads
$$
i^+: \mathcal{H}\mathit{olieb}_{0,d}^+ \longrightarrow \widetilde{\mathsf{Tw}}{\mathcal P}
$$
which factors through the morphism described in the Theorem just above.
\subsubsection{\bf Twisting of the differential} The argument in \S {\ref{2: subsec on twisting d in operads}} about twisting of differentials in operads extends straightforwardly to properads. Indeed, given any dg prop $({\mathcal P}=\{{\mathcal P}(m,n)\},{\p})$
and any $h\in {\mathcal P}(1,1)$, there is an associated derivation $D_h$ of the (non-differential) prop ${\mathcal P}$ given by the formula analogous to (\ref{2: delta^+}),
\begin{equation}\label{3: formula for D_h}
D_h(a)= \sum_{i=1}^m h _1\circ_i a - (-1)^{|h||a|} \sum_{j=1}^n a _j\circ_1 h, \ \ \forall \ a\in {\mathcal P}(m,n).
\end{equation}
Moreover, if $|h|=1$ and
$h\circ_1 h=-{\p} h$, then the sum
$
{\partial}_h:= {\partial}+ D_h
$
is a differential in ${\mathcal P}$.
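Schematically, the verification that ${\partial}_h$ squares to zero is the standard twisting argument: for $h$ of degree $1$ the derivation property gives $[{\p},D_h]=D_{{\p} h}$ and $D_h^2=D_{h\circ_1 h}$, so that
$$
{\partial}_h^2\ =\ {\p}^2+[{\p},D_h]+D_h^2\ =\ D_{{\p} h\, +\, h\circ_1 h}\ =\ 0
$$
by the assumption $h\circ_1 h=-{\p} h$.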
Assume we have a morphism of dg props
\begin{equation}\label{3: g^+ map from HoLB}
g^+: (\mathcal{H}\mathit{olieb}_{0,d}^+, \delta^+) \longrightarrow ({\mathcal P},{\p})
\end{equation}
Then the element
\begin{equation}\label{3: blacklozenge (1,1) element}
\begin{xy}
<0mm,-3mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{_\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}
:=g^+(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
\end{xy})=
\sum_{k=1}^\infty \frac{1}{k!}\ \resizebox{17mm}{!}{ \xy
(-25,8)*{}="1";
(-9,-4)*{...};
<-11mm,-8mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\circledast}="L";
(-25,-3)*{}="N";
(-19,-4)*{\bullet}="B";
(-13,-4)*{\bullet}="C";
(-2,-4)*{\bullet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "N";"L" <0pt>
\endxy} \in \widetilde{\mathsf{Tw}}{\mathcal P}
\end{equation}
satisfies all the conditions specified above for $h$ so that the sum
$$
\partial_{\centerdot}:= \partial + D_{\hspace{-1.8mm}\begin{array}{c}\resizebox{1.8mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}}
$$
is a differential in $\widetilde{\mathsf{Tw}}{\mathcal P}$.
\subsubsection{\bf Main definition} {\em Let ${\mathcal P}$ be a dg properad under $\mathcal{H}\mathit{olieb}_{0,d}$. The full twisting, $\mathsf{Tw}{\mathcal P}$, of ${\mathcal P}$ is the dg properad defined as the properad $\widetilde{\mathsf{Tw}}{\mathcal P}$ equipped with the twisted differential $\partial_\centerdot$.}
Thus
$\mathsf{Tw}{\mathcal P}$ is identical to $\widetilde{\mathsf{Tw}}{\mathcal P}$ as a non-differential properad, i.e.\ it is generated freely by ${\mathcal P}$ and the family of extra generators $\begin{array}{c}\resizebox{11mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array}$, $m\geq 1$, of degree $d$.
If we represent elements of ${\mathcal P}$ as decorated corollas (\ref{3: generic elements of cP as (m,n)-corollas}), then the twisted differential $\partial_\centerdot$ acts on elements of ${\mathcal P}$ as follows,
\begin{equation}\label{4: d_centerdot on Tw(P)}
\partial_\centerdot \begin{array}{c}\resizebox{13mm}{!}{
\begin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
=
\partial \begin{array}{c}\resizebox{13mm}{!}{
\begin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
+
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\blacklozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<6mm,5mm>*{..}**@{},
<-8.5mm,5.5mm>*{^1}**@{},
<-4mm,5.5mm>*{^i}**@{},
<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-5mm>*{}**@{-},
<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<-8.5mm,-6.9mm>*{^1}**@{},
<-5mm,-6.9mm>*{^2}**@{},
<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
- (-1)^{|a|}
\overset{n-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,-5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,-5mm>*{}**@{-},
<0mm,-5mm>*{\blacklozenge};
<0mm,-5mm>*{};<0mm,-8mm>*{}**@{-},
<0mm,-5mm>*{};<0mm,-10mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,-5mm>*{}**@{-},
<6mm,-5mm>*{..}**@{},
<-8.5mm,-6.9mm>*{^1}**@{},
<-4mm,-6.9mm>*{^i}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,5mm>*{}**@{-},
<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<-8.5mm,5.5mm>*{^1}**@{},
<-5mm,5.5mm>*{^2}**@{},
<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
\end{equation}
where $\betagin{array}{c}\resizebox{1.8mm}{!}{\betagin{xy}
<0mm,3mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
$ is given by (\ref{3: blacklozenge (1,1) element}).
On the other hand, the action of $\partial_\centerdot$ on the MC generators is given by,
\begin{equation}\label{4: d_centerdot on MC generators of Tw(P)}
\partial_\centerdot
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}:=
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\blacklozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<6mm,5mm>*{..}**@{},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-4mm,5.5mm>*{^i}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
- \sum_{k\geq 1, [m]=\sqcup [m_\bullet],\atop { m_{0}\geq 1, k+m_0\geq 3\atop m_1,...,m_k\geq 0}}\frac{1}{k!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+2)*{\circledcirc}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_k}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array} \ \ \ \ \forall\ m\geq 1.
\end{equation}
Note that for $m\geq 2$ the first sum on the r.h.s.\ of (\ref{4: d_centerdot on MC generators of Tw(P)}) cancels out with all the summands corresponding to $k\geq 2 ,m_0=1,m_i=m-1, i\in [k]$, in the second sum.
By its very construction, this twisted prop $\mathsf{Tw}{\mathcal P}$ has the following properties:
\begin{itemize}
\item[(a)] There is a canonical chain of morphisms of dg prop(erad)s
$$
(\mathcal{H}\mathit{olieb}_{0,d},\delta) \longrightarrow (\mathsf{Tw}\mathcal{H}\mathit{olieb}_{0,d}, \delta_\centerdot) \longrightarrow (\mathsf{Tw}{\mathcal P},\partial_\centerdot)
$$
given explicitly by
\begin{equation}\label{4: map frpm HoLBcd to TwP}
\begin{array}{c}\resizebox{13mm}{!}{\begin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44
mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\longrightarrow
\sum_{k\geq 0, [m]=\sqcup [m_\bullet]\atop { m_{0}\geq 1, k+m_0+n\geq 3\atop m_1,...,m_k\geq 0}}
\frac{1}{k!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-29,-8)*+{_1}="l2";
(-27,-8)*+{_2}="l1";
(-24,-6)*{...};
(-20,-8)*+{_n}="ln";
(-3,-5)*{...};
(-25,+3)*{\circledcirc}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_k}},
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\ar @{-} "l1";"L" <0pt>
\ar @{-} "l2";"L" <0pt>
\ar @{-} "ln";"L" <0pt>
\endxy}
\end{array},\ \ \ \ \ \ m,n\geq 1, m+n\geq 3
\end{equation}
\item[(b)] There is a canonical epimorphism of dg prop(erad)s,
$$
(\mathsf{Tw}{\mathcal P}, \partial_\centerdot) \longrightarrow ({\mathcal P}, \partial)
$$
which sends all the MC generators $\begin{array}{c}\resizebox{11mm}{!}{\begin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m} \end{xy}}\end{array}$, $m\geq 1$, to zero.
Note that the natural inclusion of ${\mathbb S}$-bimodules ${\mathcal P}\rightarrow \mathsf{Tw}{\mathcal P}$ is not, in general, a morphism of {\em dg}\, properads.
\end{itemize}
\subsubsection{\bf Remark} Assume ${\mathcal P}$ is a properad under $\mathcal{L}\mathit{ieb}_{0,d}$, i.e.\
all corollas on the r.h.s.\ of the map (\ref{4: i from HoLB to P}) vanish except the following two, $\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-6)*+{}="1";
(-5,-0.2)*{\circledcirc}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}$ and
$ \betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\circledcirc}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}$. Then the associated twisted properad $\mathsf{Tw}{\mathcal P}$ is, in general, a properad under the minimal resolution $\mathcal{H}\mathit{olieb}_{0,d}$ of $\mathcal{L}\mathit{ieb}_{0,d}$, not just under $\mathcal{L}\mathit{ieb}_{0,d}$! Put another way, {\em the full twisting of ${\mathcal P}$ produces, in general, higher homotopy Lie bialgebra operations},
a new phenomenon compared to what we get in $\mathsf{tw}{\mathcal P}$ under the partial twisting of ${\mathcal P}$.
\subsubsection{\bf Full twisting for general values of the integer parameters $c$ and $d$}
{\em The full twisting $\mathsf{Tw}{\mathcal P}$ of a dg properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$}\, is defined as $(\mathsf{Tw}({\mathcal P}\{c\}))\{-c\}$; note that ${\mathcal P}\{c\}$ is a dg properad under $\mathcal{H}\mathit{olieb}_{0,c+d}$ so that the above twisting functor $\mathsf{Tw}$ applies.
Thus the full twisting $\mathsf{Tw}{\mathcal P}$ of ${\mathcal P}\in \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}}$ is generated freely by ${\mathcal P}$ and extra MC generators
$$
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array} =
(-1)^{c|\sigma|}
\betagin{array}{c}\resizebox{17mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-11mm,7mm>*{^{\sigma(1)}}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,7mm>*{^{\sigma(2)}}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,6mm>*{}**@{-},
<0.6mm,0.44mm>*{};<10mm,7mm>*{^{\sigma(m)}}**@{-},
\end{xy}}\end{array} \ \ \ \forall \sigma\in {\mathbb S}_m, \ m\geq 1,
$$
of cohomological degree $(1-m)c+d$. It comes equipped with a canonical morphism
$
\mathcal{H}\mathit{olieb}_{c,d} \rightarrow \mathsf{Tw}{\mathcal P}
$
given by (\ref{4: map frpm HoLBcd to TwP}).
Next we show that quasi-isomorphisms (\ref{2: quasi-iso twHoLB to HoLB, twLB to LB}) extend to their full twisting analogues.
\subsubsection{\bf Theorem}\label{4: theorem on H(TwHoLBcd)} {\em The canonical projection $\pi: \mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d} \rightarrow \mathcal{H}\mathit{olieb}_{c,d}$ is a quasi-isomorphism, i.e.\
$H^\bullet(\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d})=\mathcal{L}\mathit{ieb}_{c,d}$.}
\begin{proof} For any $m\geq 2$ the r.h.s.\ of formula (\ref{4: d_centerdot on MC generators of Tw(P)}) applied to ${\mathcal P}=\mathcal{H}\mathit{olieb}_{c,d}$ contains a unique summand of the form
$$
\partial_\centerdot
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}:=
-
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-5mm>*{\bulletlet};<0mm,0mm>*{\bulletlet}**@{-},
<0mm,-0mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
+ \ldots.
$$
Let us call the unique edge of such a summand {\em special}, and consider a filtration of $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d}$ by the number of non-special edges plus the total number of MC generators. On the initial page $E_0$ of the associated spectral sequence the induced differential acts only on MC generators with $m\geq 2$ by the formula given just above (with no additional terms). Hence the next page $E_1$ of the spectral sequence is equal to the quotient complex $\mathsf{tw} \mathcal{H}\mathit{olieb}_{c,d}'$ of the partially twisted properad $\mathsf{tw}\mathcal{H}\mathit{olieb}_{c,d}$ by the differential ideal generated by graphs with at least one special edge; the induced differential acts only on the generators
of $\mathcal{H}\mathit{olieb}_{c,d}$ by the standard formula (\ref{3: d in qLBcd_infty}). We consider next a filtration of $E_1$ by the total number of paths connecting in-legs and univalent MC generators $\betagin{array}{c}\resizebox{2.5mm}{!}{ \xy
(0,0)*{\bulletlet}="a",
(0,3.5)*{}="0",
\ar @{-} "a";"0" <0pt>
\endxy}\end{array}$ to the out-legs of elements of $E_1$. The induced differential $d$ on the associated graded complex $gr E_1$ is precisely the $\frac{1}{2}$-prop differential\footnote{The notion of $\frac{1}{2}$-prop (as well as the closely related notion of the path filtration) was introduced by Maxim Kontsevich \cite{Ko3}. A nice exposition of this theory can be found in \cite{MaVo}.} in $\mathcal{H}\mathit{olieb}_{c,d}$ given explicitly by those summands in (\ref{3: d in qLBcd_infty}) whose lower (or upper) corolla has type $(1,p\geq 2)$ (or, resp., $(p\geq 2,1)$) only. The point is that such summands never create new special edges which would have to be set to zero by hand, i.e.\ the fact that we have to take the quotient by graphs with at least one special edge does not complicate the action of the induced differential any further. Hence the next page $E_2 \simeq H^\bullet(gr E_1)$ of our spectral sequence is spanned by graphs generated by the following three corollas
$$
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
= (-1)^d
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_2}="C";
(-2,-5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
, \ \ \ \ \
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_1}="C";
(-2,4.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
=(-1)^c
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_2}="C";
(-2,4.5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array},
\ \ \ \betagin{array}{c}\resizebox{2.5mm}{!}{ \xy
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
\ar @{-} "a";"0" <0pt>
\endxy}\end{array},
$$
subject to the relations
$$
\oint_{123}\hspace{-1mm} \betagin{array}{c}\resizebox{7mm}{!}{
\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bulletlet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy}}\end{array}
=0, \ \ \ \
\oint_{123}\hspace{-1mm} \betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array}
=0, \ \ \ \betagin{array}{c}\resizebox{5mm}{!}{\betagin{xy}
<0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-},
<0mm,3mm>*{\bulletlet};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\bulletlet};<0mm,-0.8mm>*{}**@{},
<-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-},
<0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{},
\end{xy}}\end{array}=0, \ \ \ \betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*{\bulletlet}="1";
(-5,-1)*{\bulletlet}="L";
(-8,4.5)*+{_1}="C";
(-2,4.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}=0.
$$
The induced differential acts only on the MC generator by the standard formula
$$
\betagin{array}{c}\resizebox{1.4mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
\longrightarrow
\frac{1}{2}
\betagin{array}{c}\resizebox{6.1mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-4)*{\bulletlet}="C";
(-2,-4)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}.
$$
As $H^\bullet(\mathsf{tw}\mathcal{L} \mathit{ie}_d)=\mathcal{L} \mathit{ie}_d$, we conclude that the cohomology is spanned by the standard two generators of $\mathcal{L}\mathit{ieb}_{c,d}$ modulo the above three relations. Hence $H^\bullet(\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d})=\mathcal{L}\mathit{ieb}_{c,d}$ and the Theorem is proven.
\end{proof}
\subsection{An action of the deformation complex $\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d}\rightarrow {\mathcal P})$ on $\mathsf{Tw}{\mathcal P}$}
Given a dg properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$ (see (\ref{4: i from HoLB to P})), one can consider ${\mathcal P}$ as a dg properad under $\mathcal{H}\mathit{olieb}_{c,d}^+$ using the composition
$$
i^+: \mathcal{H}\mathit{olieb}_{c,d}^+ \longrightarrow \mathcal{H}\mathit{olieb}_{c,d} \stackrel{i}{\longrightarrow} {\mathcal P}
$$
where the first arrow is the unique morphism which sends the $(1,1)$-generator to zero and is the identity on all other generators. Following \cite{MW1} we define the deformation complex of the morphism $i$ in (\ref{4: i from HoLB to P}) as the deformation complex of the morphism $i^+$,
$$
\mathsf{Def}\left(\mathcal{H}\mathit{olieb}_{c,d}^+ \stackrel{i^+}{\longrightarrow} {\mathcal P}\right)=\prod_{m,n\geq 1} {\mathcal P}(m,n)\otimes_{{\mathbb S}_m^{op}\times {\mathbb S}_n}\left({\mathit s \mathit g\mathit n}_m^{|c|}\otimes {\mathit s \mathit g\mathit n}_n^{|d|}\right)[c(1-m)+d(1-n)].
$$
Note that even in the case when the properad ${\mathcal P}$ is generated by $(m,n)$-operations with $m,n\geq 1$ and $m+n\geq 3$ (as, e.g., in the case ${\mathcal P}=\mathcal{H}\mathit{olieb}_{c,d}$) the term ${\mathcal P}(1,1)$ is often non-zero and should not be ignored in the deformation theory (cf.\ \cite{MW3}); hence the $+$-extension. As in \cite{MW2}, we abuse notation and re-denote $\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d}^+ \stackrel{i^+}{\rightarrow} {\mathcal P})$ by
$\mathsf{Def}(\mathcal{H}\mathit{olieb}_{c,d} \stackrel{i}{\rightarrow} {\mathcal P})$.
Following \cite{MV} one can describe the differential (and Lie brackets) in
$\mathsf{Def}\left(\mathcal{H}\mathit{olieb}_{c,d} \stackrel{i}{\rightarrow} {\mathcal P}\right)$ very explicitly. Let us represent a generic element of this complex pictorially as a collection of $(m,n)$-corollas,
$$
\gamma=\left\{
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(0,9)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ }^m};
(0,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }_n};
(-7,7)*+{}="U1";
(-3,7)*+{}="U2";
(2,5)*{...};
(7,7)*+{}="U3";
(0,0)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"U1" <0pt>
\ar @{-} "C";"U2" <0pt>
\ar @{-} "C";"U3" <0pt>
\endxy}
\end{array}
\right\}_{m,n\geq 1},
\ \ \text{or as a formal sum}\
\gamma=\sum_{m,n\geq 1}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(-7,7)*+{_1}="U1";
(-3,7)*+{_2}="U2";
(2,5)*{...};
(7,7)*+{_m}="U3";
(0,0)*{\circledast}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"U1" <0pt>
\ar @{-} "C";"U2" <0pt>
\ar @{-} "C";"U3" <0pt>
\endxy}
\end{array},
$$
of corollas whose vertices are decorated with elements\footnote{A formal sum $\gamma$ of such $(m,n)$-corollas is homogeneous of degree $k$ as an element of the deformation complex if and only if their degrees $|\circledast|$
as elements of ${\mathcal P}(m,n)\otimes_{{\mathbb S}_m^{op}\times {\mathbb S}_n} ({\mathit s \mathit g\mathit n}_m^{|c|}\otimes {\mathit s \mathit g\mathit n}_n^{|d|})$ are equal to $k+c(1-m)+d(1-n)$. This explains the grading consistency of the explicit formulae
shown
in Theorem {\ref{3: Theorem on Def action on TwP}} below.} of ${\mathcal P}(m,n)\otimes_{{\mathbb S}_m^{op}\times {\mathbb S}_n} ({\mathit s \mathit g\mathit n}_m^{|c|}\otimes {\mathit s \mathit g\mathit n}_n^{|d|})$ and denoted by $\circledast$ in order to distinguish them from the generic elements of ${\mathcal P}$ which are represented as corollas (\ref{3: generic elements of cP as (m,n)-corollas})
and also from the images of the $\mathcal{H} \mathit{olie}_d$-generators under $i$ which are represented pictorially as $\circledcirc$-vertex corollas. Since the labels of in- and out-legs are (skew)symmetrized, one can omit them in pictures. The differential in the deformation complex is given by
\begin{equation}\label{3: differential in Def(Lieb to P)}
\delta \hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(0,9)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ }^m};
(0,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }_n};
(-7,7)*+{}="U1";
(-3,7)*+{}="U2";
(2,5)*{...};
(7,7)*+{}="U3";
(0,0)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"U1" <0pt>
\ar @{-} "C";"U2" <0pt>
\ar @{-} "C";"U3" <0pt>
\endxy}
\end{array}
=
\partial \hspace{-3mm}
\betagin{array}{c}\resizebox{12mm}{!}{ \xy
(0,9)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ }^m};
(0,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }_n};
(-7,7)*+{}="U1";
(-3,7)*+{}="U2";
(2,5)*{...};
(7,7)*+{}="U3";
(0,0)*{\circledast}="C";
(-7,-7)*+{}="L1";
(-3,-7)*+{}="L2";
(2,-5)*{...};
(7,-7)*+{}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"U1" <0pt>
\ar @{-} "C";"U2" <0pt>
\ar @{-} "C";"U3" <0pt>
\endxy}
\end{array}
+
\sum_{[n]=[n']\sqcup [n'']\atop n'\geq 2,n''\geq 1}\left(
\pm
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-10,8)*{}="11";
(-7,8)*{}="12";
(0,8)*{}="13";
(-3,-5)*{...};
(-13,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n'}},
<-16mm,5mm>*{\overbrace{ \ \ \ \ \ \ \ \ \ \ }^{m'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ }_{n''}},
<-5mm,12mm>*{\overbrace{ \ \ \ \ \ \ \ \ \ \ \ }^{m''}},
(-5,+2)*{\circledcirc}="L";
(-14,-6)*{\circledast}="B";
(-20,-13)*{}="b1";
(-17,-13)*{}="b2";
(-10,-13)*{}="b3";
(-20,1)*{}="a1";
(-17,1)*{}="a2";
(-11,1)*{}="a3";
(-8,-5)*{}="C";
(1,-5)*{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "B";"b3" <0pt>
\ar @{-} "B";"a1" <0pt>
\ar @{-} "B";"a2" <0pt>
\ar @{-} "B";"a3" <0pt>
\ar @{-} "11";"L" <0pt>
\ar @{-} "12";"L" <0pt>
\ar @{-} "13";"L" <0pt>
\endxy}
\end{array}
\mp
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-10,8)*{}="11";
(-7,8)*{}="12";
(0,8)*{}="13";
(-3,-5)*{...};
(-13,-13)*{...};
<-15mm,-17mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ }_{n'}},
<-16mm,5mm>*{\overbrace{ \ \ \ \ \ \ \ \ \ \ }^{m'}},
<-3mm,-9mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ }_{n''}},
<-5mm,12mm>*{\overbrace{ \ \ \ \ \ \ \ \ \ \ \ }^{m''}},
(-5,+2)*{\circledast}="L";
(-14,-6)*{\circledcirc}="B";
(-20,-13)*{}="b1";
(-17,-13)*{}="b2";
(-10,-13)*{}="b3";
(-20,1)*{}="a1";
(-17,1)*{}="a2";
(-11,1)*{}="a3";
(-8,-5)*{}="C";
(1,-5)*{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "B";"b3" <0pt>
\ar @{-} "B";"a1" <0pt>
\ar @{-} "B";"a2" <0pt>
\ar @{-} "B";"a3" <0pt>
\ar @{-} "11";"L" <0pt>
\ar @{-} "12";"L" <0pt>
\ar @{-} "13";"L" <0pt>
\endxy}
\end{array}
\right)
\end{equation}
where the rule of signs depends on $d$ and is read from (\ref{2: d in Lie_infty}); for $d$ even the first ambiguous sign symbol on the r.h.s.\ is $+1$, while the second one is $-(-1)^{|\circledast|}$.
Let $(\mathrm{Der}(\mathsf{Tw}{\mathcal P}),\ [\ ,\ ])$ be the Lie algebra of derivations of the properad $\mathsf{Tw}{\mathcal P}$. The twisted differential $\partial_\centerdot$ is a Maurer--Cartan element of this Lie algebra, making $\mathrm{Der}(\mathsf{Tw}{\mathcal P})$ into a dg Lie algebra with the differential $[\partial_\centerdot,\ ]$.
\subsubsection{\bf Theorem}\label{3: Theorem on Def action on TwP}{\it
There is a morphism of dg Lie algebras
\begin{equation}\label{3: Def to Der(P)}
\begin{array}{rccc}
\Phi: & \mathsf{Def}\left(\mathcal{H}\mathit{olieb}_{c,d} \stackrel{i}{\rightarrow} {\mathcal P}\right) &\longrightarrow & \mathrm{Der}(\mathsf{Tw}{\mathcal P})\\
& \gamma & \longrightarrow & \Phi_\gamma
\end{array}
\end{equation}
where the derivation $\Phi_\gamma$ is given on the generators by
$$
\begin{array}{rccc}
\Phi_\gamma: &\mathsf{Tw}{\mathcal P} & \longrightarrow & \mathsf{Tw}{\mathcal P} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
& \longrightarrow &
\displaystyle
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{15mm}{!}{
\betagin{xy}
<0mm,0mm>*{\bulletlet};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{\bulletlet};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\lozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\bulletlet};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};<3.5mm,5mm>*{}**@{-},
<6mm,5mm>*{..}**@{},
<-8.5mm,5.5mm>*{^1}**@{},
<-4mm,5.5mm>*{^i}**@{},
<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
+
\sum_{k\geq 1, m=\sum m_\bullet,\atop { m_{0}\geq 1, k+m_0\geq 3\atop m_1,...,m_k\geq 0}}\frac{1}{k!}
\betagin{array}{c}\resizebox{22mm}{!}{ \xy
(-28,12)*{}="1";
(-25,12)*{}="2";
(-22,12)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+4)*{\circledast}="L";
(-15,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,14.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_k}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array}
\\
&
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
& \longrightarrow &
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\lozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<6mm,5mm>*{..}**@{},
<-8.5mm,5.5mm>*{^1}**@{},
<-4mm,5.5mm>*{^i}**@{},
<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-5mm>*{}**@{-},
<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<-8.5mm,-6.9mm>*{^1}**@{},
<-5mm,-6.9mm>*{^2}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
- (-1)^{|a|}
\overset{n-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,-5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,-5mm>*{}**@{-},
<0mm,-5mm>*{\lozenge};
<0mm,-5mm>*{};<0mm,-8mm>*{}**@{-},
<0mm,-5mm>*{};<0mm,-10mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,-5mm>*{}**@{-},
<6mm,-5mm>*{..}**@{},
<-8.5mm,-6.9mm>*{^1}**@{},
<-4mm,-6.9mm>*{^i}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,5mm>*{}**@{-},
<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<-8.5mm,5.5mm>*{^1}**@{},
<-5mm,5.5mm>*{^2}**@{},
<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
\end{array}
$$
where (cf.\ (40))
\begin{equation}\label{3: (1,1) part Def action on TwP}
\begin{xy}
<0mm,-3mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{_\lozenge};<0mm,0mm>*{}**@{},
\end{xy}:= -
\sum_{k=1}^\infty \frac{1}{k!}\ \resizebox{18mm}{!}{ \xy
(-25,8)*{}="1";
(-9,-4)*{...};
<-11mm,-8mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,+3)*{\circledast}="L";
(-25,-3)*{}="N";
(-19,-4)*{\bulletlet}="B";
(-13,-4)*{\bulletlet}="C";
(-2,-4)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "N";"L" <0pt>
\endxy}
\end{equation}
}
\hspace{-2mm}
\begin{proof} (A sketch). Any derivation of $\mathsf{Tw} {\mathcal P}$ (viewed as a non-differential properad) is uniquely determined by its values on the MC generators $\begin{array}{c}\resizebox{10mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}$ and on arbitrary elements of ${\mathcal P}$.
The first values can be chosen arbitrarily, while the second ones must be compatible with the properad compositions; as the second values are, by definition, of the form (\ref{3: formula for D_h}), we conclude that the above formulae do define a derivation of $\mathsf{Tw}{\mathcal P}$ as a non-differential prop(erad). Hence the main point is to show that $\Phi$ respects the differentials in both dg Lie algebras, i.e.\ satisfies the equation
\begin{equation}\label{3: compatibility of Def to Der with d}
\Phi_{\delta\gamma}=[\partial_\centerdot, \Phi_\gamma].
\end{equation}
Consider first a simpler morphism of non-differential graded Lie algebras,
$$
\begin{array}{rccc}
\widetilde{\Phi}: & \mathsf{Def}\left(\mathcal{H}\mathit{olieb}_{c,d} \stackrel{i}{\rightarrow} {\mathcal P}\right) &\longrightarrow & \mathrm{Der}(\mathsf{Tw}{\mathcal P})\\
& \gamma & \longrightarrow & \widetilde{\Phi}_\gamma
\end{array}
$$
where the derivation $\widetilde{\Phi}_\gamma\in \mathrm{Der}(\mathsf{Tw} {\mathcal P})$ is given on the generators by
$$
\widetilde{\Phi}_\gamma\left(\hspace{-2mm}\begin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}\hspace{-1mm}\right)
=
\sum_{k\geq 1, m=\sum m_\bullet,\atop { m_{0}\geq 1, k+m_0\geq 3\atop m_1,...,m_k\geq 0}}\frac{1}{k!}
\betagin{array}{c}\resizebox{20mm}{!}{ \xy
(-28,12)*{}="1";
(-25,12)*{}="2";
(-22,12)*{}="3";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+4)*{\circledast}="L";
(-15,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,14.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_k}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array}, \ \ \ \ \ \
\widetilde{\Phi}_\gamma\left(\hspace{-1.5mm} \begin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}\hspace{-1.5mm}\right) = 0.
$$
The map $\widetilde{\Phi}$ respects the Lie brackets, while the obstruction for this map to respect
the differentials is given by a derivation of type (\ref{3: formula for D_h}),
$$
[\partial_\centerdot, \widetilde{\Phi}_\gamma] - \widetilde{\Phi}_{\delta\gamma} = D_{\gamma_1}, \ \ \
\gamma_1=\widetilde{\Phi}_\gamma\left(\hspace{-1mm} \begin{array}{c}\resizebox{2mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
\hspace{-1mm}
\right) \in \mathsf{Tw}{\mathcal P}(1,1),
$$
with $\betagin{array}{c}\resizebox{1.8mm}{!}{\betagin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
$ given by (\ref{3: blacklozenge (1,1) element}). It is a straightforward calculation to check
that the adjustment of the derivation $\widetilde{{\mathbb P}hi}_\gamma$ with an extra term of the type (\ref{3: formula for D_h}),
$$
\widetilde{{\mathbb P}hi}_\gamma \longrightarrow {\mathbb P}hi_\gamma = \widetilde{{\mathbb P}hi}_\gamma + D_{\betagin{xy}
<0mm,-2.5mm>*{};<0mm,2.5mm>*{}**@{-},
<0mm,0mm>*{_\lozenge};<0mm,0mm>*{}**@{},
\end{xy}}
$$
solves the problem of the compatibility with the differentials.
\end{proof}
\subsection{Grothendieck-Teichm\"uller group and twisted properads} Let $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}$ be the genus completion of the properad $\mathcal{H}\mathit{olieb}_{c,d}$. It was proven in \cite{MW2} that for any $c,d\in {\mathbb Z}$ there is a morphism of dg Lie algebras
$$
F \colon \mathsf{GC}_{c+d+1}^{or}\to \mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d})
$$
which is a quasi-isomorphism up to one rescaling class (which controls the
automorphism of $\mathcal{H}\mathit{olieb}_{c,d}$ given by rescaling each $(m,n)$-generator by $\lambda^{m+n-2}$ for any $\lambda \in
{\mathbb K}^*$). Here $\mathsf{GC}_{c+d+1}^{or}$ stands for the oriented version of the Kontsevich graph complex from \S {\ref{3: subsec on GC and HGC}} which was studied in \cite{W2} and where it was proven that
$$
H^\bullet(\mathsf{GC}_{3}^{or})=H^\bullet(\mathsf{GC}_{2})= {\mathfrak g}{\mathfrak r}{\mathfrak t}_1.
$$
This result implies that for any $c,d\in {\mathbb Z}$ with $c+d=2$, one has an isomorphism of Lie algebras,
$$
H^0(\mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}))={\mathfrak g}{\mathfrak r}{\mathfrak t}
$$
where ${\mathfrak g}{\mathfrak r}{\mathfrak t}$ is the Lie algebra of the ``full'' Grothendieck-Teichm\"uller group $GRT_1$
\cite{Dr2}.
Let $\widehat{{\mathcal P}}$ be a dg properad under $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}$ and let $\mathsf{Tw}\widehat{{\mathcal P}}$ be the associated twisted properad. One has morphisms
$$
\mathsf{Tw}(i): (\widehat{\mathcal{H}\mathit{olieb}}_{c,d},\delta) \longrightarrow (\mathsf{Tw}\widehat{{\mathcal P}},\partial_\centerdot), \ \ \ \
\Phi:\mathsf{Def}\left(\widehat{\mathcal{H}\mathit{olieb}}_{c,d} \stackrel{i}{\rightarrow} \widehat{{\mathcal P}}\right) \longrightarrow \mathrm{Der}(\mathsf{Tw}\widehat{{\mathcal P}})
$$
given explicitly by the same formulae as in (\ref{4: map frpm HoLBcd to TwP}) and in Theorem {\ref{3: Theorem on Def action on TwP}}.
\subsubsection{\bf Proposition} {\em For any dg properad $\widehat{{\mathcal P}}$ under $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}$ there is an associated morphism of complexes,
$$
{\mathcal F}: \mathsf{GC}_{c+d+1}^{or} \longrightarrow \mathrm{Der}(\mathsf{Tw}\widehat{{\mathcal P}})[1],
$$
where $\mathsf{GC}_{c+d+1}^{or}$ is the oriented version of the Kontsevich graph complex. If $c+d=2$, there is an associated linear map
$
{\mathfrak g}{\mathfrak r}{\mathfrak t}\longrightarrow H^1\left(\mathrm{Der}(\mathsf{Tw}\widehat{{\mathcal P}})\right).
$
}
\begin{proof} The morphism $\mathsf{Tw}(i)$ induces a morphism of dg Lie algebras,
$$
\mathsf{Def}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d} \stackrel{{\mathrm{Id}}}{\rightarrow} \widehat{\mathcal{H}\mathit{olieb}}_{c,d}) \longrightarrow \mathsf{Def}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d} \stackrel{\mathsf{Tw}(i)}{\longrightarrow} \mathsf{Tw}\widehat{{\mathcal P}}).
$$
The l.h.s.\ can be identified as a complex (but not as a Lie algebra) with the degree-shifted
derivation complex $\mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d})[-1]$, while the r.h.s.\ can be mapped, according to
Theorem {\ref{3: Theorem on Def action on TwP}}, into the complex $\mathrm{Der}(\mathsf{Tw}\widehat{{\mathcal P}})$. Thus we obtain a chain of morphisms of complexes
$$
{\mathcal F}: \mathsf{GC}_{c+d+1}^{or} \stackrel{F}{\longrightarrow} \mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}) \longrightarrow \mathrm{Der}(\mathsf{Tw}\widehat{{\mathcal P}})[1]
$$
which proves the claim.
\end{proof}
Thus fully twisted completed properads under $\mathcal{H}\mathit{olieb}_{c,d}$ can potentially have a highly non-trivial homotopy theory, depending on the properties of the above map ${\mathcal F}$ at the cohomology level.
\subsection{From representations of ${\mathcal P}$ to representations of $\mathsf{Tw}{\mathcal P}$}
Assume a dg properad ${\mathcal P}$ under $\mathcal{H}\mathit{olieb}_{c,d}$ admits a representation in a dg space $(V,d)$. Then the graded vector space $V[-c]$ is a $\mathcal{H}\mathit{olieb}_{0,c+d}$-algebra. For any Maurer--Cartan element $\gamma\in \odot^{\geq 1}(V[-c])$, that is, for any solution of the equation (\ref{4: MC eqn for HoLB0d}), we obtain a representation of $\mathsf{Tw}{\mathcal P}$ in $V$ equipped with the twisted differential (\ref{4: twisted by MC differentiail in V}). Let us consider such twisted representations in the case ${\mathcal P}=\mathcal{H}\mathit{olieb}_{c,d}$ in detail.
\subsubsection{\bf Twisted $\mathcal{H}\mathit{olieb}_{c,d}$-algebra structures: an explicit description} Let $(V,d)$ be a dg vector space equipped with a basis $\{e_\alpha\}$ and $V^*$ its dual equipped with the dual basis $\{e^\alpha\}$. Consider the graded
commutative tensor algebra
$$
\odot^{\geq 1}(V[-c])\otimes\odot^{\geq 1}(V^*[-d]) \subset \odot^{\bullet}\left(V[-c]\oplus V^*[-d]\right)\simeq {\mathbb K}[x^\alpha, p_\alpha]
$$
where $x^\alpha:={\mathfrak s}^{-d} e^\alpha$, $p_\alpha:={\mathfrak s}^{-c} e_\alpha$.
The pairing $V[-c]\otimes V^*[-d]\rightarrow {\mathbb K}[-c-d]$ makes this space into a Lie algebra with respect to Poisson-type brackets $\{\ ,\ \}$ (of degree $-c-d$).
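For orientation only, here is a coordinate sketch of these brackets; the precise signs depend on the parities of $c$ and $d$ and on the conventions fixed above (compare the expression for $\{\pi,\pi\}$ used below):
$$
\{f,g\} = \sum_{\alpha} \pm\, \frac{\partial f}{\partial x^{\alpha}}\frac{\partial g}{\partial p_{\alpha}}
\ \pm\ (-1)^{|f||g|}\sum_{\alpha} \frac{\partial g}{\partial x^{\alpha}}\frac{\partial f}{\partial p_{\alpha}},
\qquad f,g\in {\mathbb K}[x^{\alpha},p_{\alpha}],
$$
so that $\{x^{\alpha},p_{\beta}\}=\pm\,\delta^{\alpha}_{\beta}$, in agreement with the degree $-c-d$ of the pairing.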
There is a 1-1 correspondence between representations of $\mathcal{H}\mathit{olieb}_{c,d}$ in $V$,
$$
\rho: \mathcal{H}\mathit{olieb}_{c,d} \longrightarrow {\mathcal E} nd_V \Rightarrow \left\{\rho\left(\hspace{-2mm} \begin{array}{c}\resizebox{8mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\bulletlet}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array} \hspace{-2mm} \right)=: \pi_n^m(x,p)\in \odot^n(V^*[-d]) \otimes \odot^m(V[-c])\right\}_{m,n\geq 1, m+n\geq 3},
$$
and Maurer--Cartan elements of the Lie algebra $(\odot^{\geq 1}(V[-c])\otimes\odot^{\geq 1}(V^*[-d]), \{\ , \ \})$, that is, degree $1+c+d$ elements
$$
\pi(x,p)=\sum_{n,m\geq 1} \pi_n^m(x,p)=\sum_{m,n\geq 1}\sum_{\alpha_\bullet, \beta_\bullet} \frac{1}{m!n!} \pi^{\alpha_1\ldots \alpha_m}_{\beta_1\ldots\beta_n} p_{\alpha_1}\cdots p_{\alpha_m} x^{\beta_1}\cdots x^{\beta_n}
$$
such that
$$
\{\pi,\pi\}=2\sum_{\alpha} \pm \frac{\partial \pi}{\partial x^\alpha}\frac{\partial \pi}{\partial p_\alpha}=0
$$
and the degree $1$ summand $\pi_1^1\in V\otimes V^*$ is precisely the given differential $d$ in $V$.
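As a minimal illustrative special case (assuming the conventions above, and introducing the coefficient names $\mu$ and $\delta$ only for this example): if $c=d=1$ and $V$ is concentrated in degree zero, then all $x^\alpha$ and $p_\alpha$ have degree $1$, so the only summands of $\pi$ of the required total degree $1+c+d=3$ are $\pi_2^1$ and $\pi_1^2$,
$$
\pi=\tfrac{1}{2}\,\mu^{\alpha}_{\beta_1\beta_2}\, p_{\alpha}x^{\beta_1}x^{\beta_2}
+\tfrac{1}{2}\,\delta^{\alpha_1\alpha_2}_{\beta}\, p_{\alpha_1}p_{\alpha_2}x^{\beta},
$$
encoding a bracket $\mu=[\ ,\ ]:\wedge^2 V\rightarrow V$ and a cobracket $\delta: V\rightarrow \wedge^2 V$; the single equation $\{\pi,\pi\}=0$ then unfolds into the Jacobi identity for $\mu$, the co-Jacobi identity for $\delta$ and the Drinfeld compatibility between them, i.e.\ into an ordinary Lie bialgebra structure on $V$.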
The representation $\rho$ induces a representation of the associated polydifferential operad
$$
{\mathcal O}_{c,d}(\rho):\ \ \ {\mathcal O}_{c,d}(\mathcal{H}\mathit{olieb}_{c,d}) \longrightarrow {\mathcal E} nd_{\odot^{\bullet} (V[-c])}
$$
and hence a $\mathcal{H} \mathit{olie}_{c+d}$-algebra structure on $\odot^{\bullet} (V[-c])$ via the composition of ${\mathcal O}_{c,d}(\rho)$ with the map (\ref{3: map Holie^+_{c+d} to f_{c,d}(HoLBcd)}). Let
$$
\gamma(p)=\sum_{m\geq 1} \gamma_m(p),\ \ \ \gamma_m\in \odot^{m} (V[-c])
\subset {\mathbb K}[p_\alpha]
$$
be a Maurer-Cartan element of that $\mathcal{H} \mathit{olie}_{c+d}$-algebra structure, that is, a degree $c+d$ solution of the following explicit coordinate incarnation of the Maurer-Cartan equation
(\ref{4: MC eqn for HoLB0d}),
$$
\sum_{n\geq 1} \pm \frac{1}{n!}\frac{\partial^n \pi}{\partial x^{\alpha_1}\ldots \partial x^{\alpha_n}}\Big|_{x=0}
\frac{\partial \gamma(p)}{\partial p_{\alpha_1}}\ldots\frac{\partial \gamma(p)}{\partial p_{\alpha_n}}=0.
$$
Then the data
$\pi(x,p)$ and $\gamma(p)$ give rise to a representation,
$$
\begin{array}{rccc}
\rho^{Tw}: & \mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d} & \longrightarrow & {\mathcal E} nd_V
\\
&\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(0,4.5)*+{...},
(0,-4.5)*+{...},
(0,0)*{\bulletlet}="o",
(-5,5)*{}="1",
(-3,5)*{}="2",
(3,5)*{}="3",
(5,5)*{}="4",
(-3,-5)*{}="5",
(3,-5)*{}="6",
(5,-5)*{}="7",
(-5,-5)*{}="8",
(-5.5,7)*{_1},
(-3,7)*{_2},
(3,6)*{},
(5.9,7)*{m},
(-3,-7)*{_2},
(3,-7)*+{},
(5.9,-7)*{n},
(-5.5,-7)*{_1},
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"2" <0pt>
\ar @{-} "o";"3" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"5" <0pt>
\ar @{-} "o";"6" <0pt>
\ar @{-} "o";"7" <0pt>
\ar @{-} "o";"8" <0pt>
\endxy}\end{array} &\longrightarrow & \pi_n^m
\\
&
\betagin{array}{c}\resizebox{12mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
<0mm,9mm>*{^m},
\end{xy}}\end{array}
&\longrightarrow&
\gamma_m
\end{array}
$$
of the twisted prop $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d}$
in $V=\text{span}\langle e_\alpha\rangle$ equipped with the deformed differential
\begin{equation}\label{4: differential twisted in V}
d_\centerdot=d + \sum_{k\geq 1} \pm\, e_\beta\, \frac{1}{k!}\frac{\partial^{k+2} \pi}{\partial p_\beta\,\partial x^{\alpha_0}\,\partial x^{\alpha_1}\ldots \partial x^{\alpha_k}}\Big|_{x=p=0}
\frac{\partial \gamma(p)}{\partial p_{\alpha_1}}\Big|_{p=0}\ldots \frac{\partial \gamma(p)}{\partial p_{\alpha_k}}\Big|_{p=0}\, \frac{\partial}{\partial e_{\alpha_0}}
\end{equation}
The associated twisted $\mathcal{H}\mathit{olieb}_{c,d}$ structure on $V$ is given explicitly by (cf.\ (\ref{4: map frpm HoLBcd to TwP}))
$$
\pi^{\mathsf{Tw}}
:=\hspace{-1mm}\sum_{m,n\geq 1\atop \alpha_\bullet, \beta_\bullet} \frac{1}{m!n!} \pi_{\alpha_1\ldots \alpha_n}^{\beta_1\ldots\beta_m} p_{\beta_1}\ldots p_{\beta_m} \Big(x^{\alpha_1}+ \frac{\partial \gamma}{\partial p_{\alpha_1}}\Big) \ldots \Big(x^{\alpha_n}+ \frac{\partial \gamma}{\partial p_{\alpha_n}}\Big).
$$
The MC equation for $\gamma$ ensures that $\pi^{\mathsf{Tw}}|_{x=0}=0$. As $\pi^{\mathsf{Tw}}$ is produced from $\pi$ by the change of variables $x^{\alpha} \rightarrow x^{\alpha} + \frac{\partial \gamma(p)}{\partial p_{\alpha}}$, it is easy to check --- using the vanishing of the sum
$$
\sum \pm \frac{\partial^2 \gamma(p)}{\partial p_{\alpha}\partial p_{\beta}}\frac{\partial \pi(x,p)}{\partial x^{\alpha}}\frac{\partial \pi(x,p)}{\partial x^{\beta}}\equiv 0
$$
solely for degree and symmetry reasons --- that the equation
$
\{\pi^{\mathsf{Tw}}, \pi^{\mathsf{Tw}}\}=0$ indeed holds true. Finally, one notices that the $(1,1)$ summand in $\pi^{\mathsf{Tw}}$ (which is responsible for the differential on $V$) is precisely the twisted differential (\ref{4: differential twisted in V}) or, equivalently, $d + \sum_{k\geq 2}\hat{\mu}_{k,1}$ in the notation of \S {\ref{4: subsec on MC elements of HoLBcd}}. This gives a short and independent ``local coordinate'' check of many ``properadic'' claims made above.
\subsection{Homotopy triangular Lie bialgebras and Lie trialgebras}\label{4: subsec on trinagular} Assume $V$ is a vector space concentrated in degree zero, say, $V={\mathbb K}^N$ for some $N\in {\mathbb N}$, and let $\mathcal{H}\mathit{olieb}_{1,1}$
be the prop of ordinary Lie bialgebras. Any representation $\rho$ of $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{1,1}$ in $V$ is uniquely determined by its values on generators of cohomological degree zero only, i.e.\ only on the following three generators of $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{1,1}$,
$$
\rho\, (\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-5)*+{}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{}="C";
(-2,4.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
): V\rightarrow \wedge^2 V,
\ \ \ \
\rho\, (\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,5)*+{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-4.5)*+{}="C";
(-2,-4.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
): \wedge^2 V \rightarrow V,
\ \ \ \
\rho\, (\hspace{-1mm}
\betagin{array}{c}\resizebox{3.8mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,5mm>*{}**@{-},
<0mm,0.5mm>*{};<3mm,5mm>*{}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array} \hspace{-1mm} ) \in \wedge^2V
$$
which satisfy the standard relations (\ref{3: R for LieB}) as well as the following one,
\begin{equation}\label{3: Tw(LB) relation}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*{\bulletlet}="1";
(-5,0)*{\bulletlet}="L";
(1,-0)*+{_3}="R";
(-8,5)*{^1}="C";
(-2,5)*{^2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "1";"R" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*{\bulletlet}="1";
(-5,0)*{\bulletlet}="L";
(1,-0)*+{_2}="R";
(-8,5)*{^3}="C";
(-2,5)*{^1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "1";"R" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*{\bulletlet}="1";
(-5,0)*{\bulletlet}="L";
(1,-0)*+{_1}="R";
(-8,5)*{^2}="C";
(-2,5)*{^3}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "1";"R" <0pt>
\endxy}
\end{array}
+
(-1)^c
\left(
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^1}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
(-12,+1)*{^2}="l";
(+2,+1)*{^3}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^2}="1";
(-5,+1)*{\bullet}="L";
(-8,-5)*{\bullet}="C";
(-2,-5)*{\bullet}="D";
(-12,+1)*{^3}="l";
(+2,+1)*{^1}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^3}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
(-12,+1)*{^1}="l";
(+2,+1)*{^2}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\right)=0.
\end{equation}
If $\rho\, (\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-5)*+{}="1";
(-5,0)*{\bullet}="L";
(-8,4.5)*+{}="C";
(-2,4.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
)$ happens to be zero, then the new relation (\ref{3: Tw(LB) relation}) reduces to the classical Yang-Baxter equation so that the associated $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{1,1}$-algebra structure on $V$ becomes precisely a so-called {\it triangular Lie bialgebra structure}\, on $V$ \cite{Dr1}. Thus a generic $\mathsf{Tw}\mathcal{H}\mathit{olieb}_{1,1}$-algebra structure on ${\mathbb K}^N$ is a version of that notion in which $V$ carries two Lie bialgebra structures: one is given by the pair
$\rho\, (\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,5)*+{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-4.5)*+{}="C";
(-2,-4.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
)$ and $\rho\, (\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-5)*+{}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{}="C";
(-2,4.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
)$
and the other is given by the pair
$$
\rho \left(\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}
\right) \ \ \ \text{and}\ \ \
\rho\left(\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_1}="C";
(-2,4.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^1}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^2}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
-
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^2}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^1}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-1.5mm}\right)
$$
in which the Lie cobracket is twisted by the coboundary term.
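For comparison with the classical literature, here is a sketch of the coordinate form of the triangular case discussed above (the notation $r$, $r_{12}$, etc.\ is introduced only for this remark): writing $r\in\wedge^2 V$ for the value of $\rho$ on the $(2,0)$-generator and $[\ ,\ ]$ for the Lie bracket on $V$, the classical Yang--Baxter equation reads
$$
[\![r,r]\!]:=[r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}]=0,
$$
where $r_{12}:=\sum_i a_i\otimes b_i\otimes 1$ etc.\ for $r=\sum_i a_i\otimes b_i$, and the cobracket of the resulting triangular Lie bialgebra is the coboundary $\delta(v)=\mathrm{ad}_v\, r := [v\otimes 1+1\otimes v,\, r]$.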
Motivated by the above observation, we introduce a properad of {\em Lie trialgebras}\,
$\mathcal{L}\mathit{ieb}_{c,d}^\vee$ which is generated by the ${\mathbb S}$-bimodule $T=\{T(m,n)\}_{m,n\geq 0}$ with
all $T(m,n)=0$ except
$$
T(2,1):={\mbox{1 \hskip -7pt 1}}_1\otimes {\mathit s \mathit g\mathit n}_2^{|c|}[c-1]=\mbox{span}\left\lambdangle\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_1}="C";
(-2,4.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
=(-1)^c
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_2}="C";
(-2,4.5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-1mm} \right\rangle
$$
$$
T(1,2):= {\mathit s \mathit g\mathit n}_2^{|d|}\otimes {\mbox{1 \hskip -7pt 1}}_1[d-1]=\mbox{span}\left\lambdangle\hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
= (-1)^d
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_2}="C";
(-2,-5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-1mm}
\right\rangle
$$
$$
T(2,0):={\mathit s \mathit g\mathit n}_2^{|c|}[c-d]=\mbox{span}\left\lambdangle
\betagin{array}{c}\resizebox{5.0mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
=(-1)^c
\betagin{array}{c}\resizebox{5.8mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^2}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
\right\rangle
$$
modulo relations (\ref{3: R for LieB}) and
(\ref{3: Tw(LB) relation}). This properad comes equipped with two morphisms from $\mathcal{L}\mathit{ieb}cd$: the one which is the identity on the generators of $\mathcal{L}\mathit{ieb}cd$, and the twisted one given by
\betagin{equation}\lambdabel{4: map LBcd to TwLBcd}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\rightarrow
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
, \ \ \ \
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.7)*+{_1}="C";
(-2,4.7)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\rightarrow
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.7)*+{_1}="C";
(-2,4.7)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^1}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^2}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+ (-1)^c
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^2}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^1}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\end{equation}
The full twisting construction gives us a minimal resolution of $\mathcal{L}\mathit{ieb}cd^\vee$ as follows. Consider a quotient dg properad
$$
\mathcal{H}\mathit{olieb}_{c,d}^\vee :=\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d}/ \lambdangle \betagin{array}{c}\resizebox{1.6mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{_\bulletlet};
\end{xy}}\end{array}\rangle
$$
by the ideal generated by the univalent MC generator.
\subsubsection{\bf Theorem} {\em The canonical projection $\mathcal{H}\mathit{olieb}_{c,d}^\vee \rightarrow \mathcal{L}\mathit{ieb}cd^\vee$ is a quasi-isomorphism.}
\betagin{proof} Consider a filtration of $\mathcal{H}\mathit{olieb}_{c,d}^\vee$ by the number of MC generators. The differential $d$ in the associated graded complex $gr \mathcal{H}\mathit{olieb}_{c,d}^\vee$ acts on the generators coming from $
\mathcal{H}\mathit{olieb}_{c,d}$ by the standard formula (\ref{3: d in qLBcd_infty}) while on the MC generators by
$$
d \betagin{array}{c}\resizebox{5.0mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}=0, \ \ \ d
\betagin{array}{c}\resizebox{13mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}:=
- \sum_{[m]=[m_0]\sqcup [m_1]\atop {\# m_0= 2, \# m_1\geq 1}}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-27,8)*{}="1";
(-22,8)*{}="3";
(-18,8)*{}="n11";
(-13,8)*{}="n12";
(-15.6,7.1)*{...};
(-24.9,7.2)*{...};
(-25,+2)*{\bulletlet}="L";
(-15.5,-5)*{_\bulletlet}="B";
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\endxy}
\end{array} \ \ \ {\mathfrak o}rall m\geq 3.
$$
Since the number of MC generators is preserved, we can assume that they are distinguished, say, labelled by integers. Then the direct summand of $gr \mathcal{H}\mathit{olieb}_{c,d}^\vee$
with, say, $k$ MC generators (labelled by integers from $[k]$) can be identified with a direct summand in $\mathcal{H}\mathit{olieb}_{c,d}$ whose first in-legs (labelled by integers from $[k]$) are attached to ``operadic type" $(m_i,1)$-corollas with $m_i\geq 2$, $i\in [k]$. The cohomology of this summand is spanned by trivalent corollas only; trivalent $(2,1)$ corollas whose unique in-legs are labelled by integers from $[k]$ correspond in this approach precisely to $k$ copies of the MC generator $\betagin{array}{c}\resizebox{5.0mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}$. This result proves the claim.
\end{proof}
\subsubsection{\bf Properad of triangular Lie bialgebras and its minimal resolution} Triangular Lie bialgebras appear naturally in the representation theory of the twisted properad $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}$. Consider a quotient properad
$$
\mathcal{L}\mathit{ieb}^\triangle_{c,d}:= \mathcal{L}\mathit{ieb}^\vee_{c,d}/I
$$
of the properad $\mathcal{L}\mathit{ieb}^\vee_{c,d}$ defined above by the ideal $I$ generated by
the coLie corolla $\hspace{-3mm}
\betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-4)*+{}="1";
(-5,0)*{\bulletlet}="L";
(-8,3.5)*+{}="C";
(-2,3.5)*+{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}\hspace{-2mm}$. Thus $\mathcal{L}\mathit{ieb}^\triangle_{c,d}$ governs two operations of degrees $1-d$ and $d-c$ respectively,
$$
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
= (-1)^d
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_2}="C";
(-2,-5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
, \ \ \
\betagin{array}{c}\resizebox{5.0mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
=(-1)^c
\betagin{array}{c}\resizebox{5.8mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^2}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
$$
which are subject to the following relations
\betagin{equation}\lambdabel{3: R for triangular LieB}
{\mathcal R}^\triangle:\left\{
\betagin{array}{c}
\betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array}
+
\betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{},
\end{xy}}\end{array}
+
\betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bulletlet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{},
\end{xy}}\end{array}=0,
\ \ \
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^1}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
(-12,+1)*{^2}="l";
(+2,+1)*{^3}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^1}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
(-12,+1)*{^2}="l";
(+2,+1)*{^3}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,6)*{^3}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*{\bulletlet}="C";
(-2,-5)*{\bulletlet}="D";
(-12,+1)*{^1}="l";
(+2,+1)*{^2}="r";
\ar @{-} "D";"L" <0pt>
\ar @{-} "D";"r" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "C";"l" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}=0.
\end{array}
\right.
\end{equation}
Its representations in a dg vector space $V$ are precisely degree shifted triangular Lie bialgebra structures in $V$, the case $c=d=1$ corresponding to the ordinary triangular Lie bialgebras \cite{Dr1}. There is a morphism of properads
$$
f: \mathcal{L}\mathit{ieb}cd \longrightarrow \mathcal{L}\mathit{ieb}cd^\triangle
$$
given on the generators by
$$
f\, ( \hspace{-2mm} \betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} \hspace{-2mm} )
= \hspace{-2mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
, \ \ \ \ \
f(\hspace{-3mm} \betagin{array}{c}\resizebox{7mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.7)*+{_1}="C";
(-2,4.7)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-2mm})
=\hspace{-2mm}
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^1}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^2}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+ (-1)^c
\betagin{array}{c}\resizebox{12.5mm}{!}{ \xy
(-18,8)*{^2}="1";
(-18,+2.5)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,4)*+{^1}="b1";
(-23,-4)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
$$
Consider an ideal $I^{\triangle}$ in $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}$ generated by all $(m,n)$-corollas
$\betagin{array}{c}\resizebox{10mm}{!}{ \xy
(-7,7)*+{_1}="U1";
(-3,7)*+{_2}="U2";
(2,5)*{...};
(7,7)*+{_m}="U3";
(0,0)*{\circledast}="C";
(-7,-7)*+{_1}="L1";
(-3,-7)*+{_2}="L2";
(2,-5)*{...};
(7,-7)*+{_n}="L3";
\ar @{-} "C";"L1" <0pt>
\ar @{-} "C";"L2" <0pt>
\ar @{-} "C";"L3" <0pt>
\ar @{-} "C";"U1" <0pt>
\ar @{-} "C";"U2" <0pt>
\ar @{-} "C";"U3" <0pt>
\endxy}
\end{array}$ with $m\geq 2$, $n\geq 1$. This ideal is differential, and, moreover, the quotient
properad $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}/I^{\triangle}$
is a dg {\em free}\, properad with generators
$$
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(1,-5)*{\ldots},
(-8,-7)*{_1},
(-3,-7)*{_3},
(8,-7)*{_{n}},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_4" <0pt>
\endxy}\end{array}
\hspace{-3mm}
=(-1)^d \hspace{-2.6mm}
\betagin{array}{c}\resizebox{17mm}{!}{\xy
(1,-6)*{\ldots},
(-7.5,-7)*{_{\sigma(1)}},
(11,-7)*{_{\sigma(n)}},
(0,0)*{\bulletlet}="a",
(0,5)*{}="0",
(-8,-5)*{}="b_2",
(-3,-5)*{}="b_3",
(8,-5)*{}="b_4",
\ar @{-} "a";"0" <0pt>
\ar @{-} "a";"b_2" <0pt>
\ar @{-} "a";"b_3" <0pt>
\ar @{-} "a";"b_4" <0pt>
\endxy}\end{array} \hspace{-3mm}{\mathfrak o}rall \sigma\in {\mathbb S}_{n\geq 2}\ \ , \ \
\betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array}\hspace{-2mm} =
(-1)^{c|\tau|}\hspace{-1mm}
\betagin{array}{c}\resizebox{16mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-11mm,7mm>*{^{\tau(1)}}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,7mm>*{^{\tau(2)}}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,6mm>*{}**@{-},
<0.6mm,0.44mm>*{};<10mm,7mm>*{^{\tau(m)}}**@{-},
\end{xy}}\end{array} {\mathfrak o}rall \tau\in {\mathbb S}_{m\geq 1},
$$
of degrees $1+d-nd$ and $d+c-mc$ respectively. This is an extension of $\mathsf{Tw}\mathcal{H} \mathit{olie}_d$ by the MC $(m,0)$-generators with $m\geq 2$. The induced differential acts on the unique $(1,2)$- and $(2,0)$-generators trivially, i.e.\ they are cohomology classes. In fact every cohomology class in $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}/I^{\triangle}$ is generated by this pair via properadic compositions. Indeed, consider a further quotient of $\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}/I^{\triangle}$ by the ideal generated by $\betagin{array}{c}\resizebox{2.5mm}{!}{ \xy
(0,0)*{\bulletlet}="a",
(0,3.5)*{}="0",
\ar @{-} "a";"0" <0pt>
\endxy}\end{array}$ and denote that quotient by $\mathcal{H}\mathit{olieb}_{c,d}^\triangle$; notice that the induced differential in $\mathcal{H}\mathit{olieb}_{c,d}^\triangle$ simplifies considerably: this properad is freely generated by the operad $\mathcal{H} \mathit{olie}_d$ equipped with the standard differential (\ref{2: d in Lie_infty}) and the MC elements $\betagin{array}{c}\resizebox{12.5mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array}$, $m\geq 2$ with the differential given by
$$
{\partial}c \betagin{array}{c}\resizebox{5.8mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\bulletlet};
\end{xy}}\end{array}
=0
, \ \ \
{\p}_\centerdot\hspace{-2mm}
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}=
- \hspace{-2mm} \sum_{k\geq 2, [m]=\sqcup [m_\bulletlet],\atop { m_{0}=1, m_1,...,m_k\geq 1}}{\mathfrak r}ac{1}{k!}
\betagin{array}{c}\resizebox{21mm}{!}{ \xy
(-25,8)*{}="1";
(-18,8)*{}="n11";
(-15,7)*{...};
(-13,8)*{}="n12";
(-11,8)*{}="n21";
(-8.4,7)*{...};
(-6,8)*{}="n22";
(1,8)*{}="nn1";
(3,7)*{...};
(5,8)*{}="nn2";
(-25,+2)*{\bulletlet}="L";
(-14,-5)*{\bulletlet}="B";
(-8,-5)*{\bulletlet}="C";
(3,-5)*{\bulletlet}="D";
<-5mm,-10mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-3,-5)*{...};
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
<-9mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<3mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_n}},
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\ar @{-} "nn1";"D" <0pt>
\ar @{-} "nn2";"D" <0pt>
\endxy}
\end{array} \ \ {\mathfrak o}rall\ m\geq 3.
$$
Let us show that the projection
$$
\mathsf{Tw} \mathcal{H}\mathit{olieb}_{c,d}/I^{\triangle} \longrightarrow \mathcal{H}\mathit{olieb}_{c,d}^\triangle
$$
is a quasi-isomorphism. Consider a filtration of both sides by the number of MC $(m,0)$ generators with $m\geq 2$ (this number cannot decrease). The induced differential on the associated graded of the r.h.s.\ acts only on $\mathcal{H} \mathit{olie}_d$ generators, so that its cohomology is a properad, say $P$, generated freely by $\mathcal{L} \mathit{ie}_d$ and the MC generators with $m\geq 2$. On the other hand, the associated graded of the l.h.s.\ is isomorphic to tensor products (modulo the action of finite permutation groups) of complexes $\mathsf{tw}\mathcal{H} \mathit{olie}_d$ and the trivial complex spanned by $(m,0)$ generators with $m\geq 2$. According to \cite{DW}, $H^\bulletlet(\mathsf{tw}\mathcal{H} \mathit{olie}_d)=\mathcal{L} \mathit{ie}_d$, so that on the l.h.s.\ we get the same properad $P$. Thus at the second page of the spectral sequence the above map becomes the identity map, implying, by the Comparison Theorem, the required quasi-isomorphism.
Using Gr\"obner basis techniques, A.\ Khoroshkin has proven \cite{Kh} that $\mathcal{H}\mathit{olieb}_{c,d}^\triangle$ is a minimal resolution of $\mathcal{L}\mathit{ieb}cd^\triangle$ at the dioperadic level; perhaps this result holds true at the properadic level as well.
Homotopy triangular Lie bialgebras (in a different, non-properadic, context) have been studied in \cite{LST} where their relation to homotopy Rota-Baxter Lie algebras has been established.
\subsection{Full twisting of properads under $\mathcal{L}\mathit{ieb}_{c,d}$} Assume ${\mathcal P}=\mathcal{L}\mathit{ieb}_{c,d}$ and the map $i: \mathcal{H}\mathit{olieb}_{c,d}\rightarrow \mathcal{L}\mathit{ieb}cd$ is the canonical projection. The associated twisted dg prop(erad) $(\mathsf{Tw}\mathcal{L}\mathit{ieb}cd, \delta_\centerdot)$ is generated by
the standard corollas
$$
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
= (-1)^d
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,4)*{}="1";
(-5,0)*{\bulletlet}="L";
(-8,-5)*+{_2}="C";
(-2,-5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
, \ \ \ \ \
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_1}="C";
(-2,4.5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
=(-1)^c
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-5)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,4.5)*+{_2}="C";
(-2,4.5)*+{_1}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
$$
of degrees $1-d$ and $1-c$ respectively, modulo relations (\ref{3: R for LieB}), as well as by the
family of extra generators,
$$
\betagin{array}{c}\resizebox{12.5mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{^1}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{^2}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{^{}}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{^m}**@{-},
\end{xy}}\end{array} =
(-1)^{c|\sigma|}
\betagin{array}{c}\resizebox{15mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};
<-0.6mm,0.44mm>*{};<-11mm,7mm>*{^{\sigma(1)}}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,7mm>*{^{\sigma(2)}}**@{-},
<0mm,5mm>*{\ldots},
<0.4mm,0.7mm>*{};<4.5mm,6mm>*{}**@{-},
<0.6mm,0.44mm>*{};<10mm,7mm>*{^{\sigma(m)}}**@{-},
\end{xy}}\end{array} \ \ \ {\mathfrak o}rall \sigma\in {\mathbb S}_m, \ m\geq 1,
$$
of cohomological degree $(1-m)c+d$. The twisted differential $\delta_\centerdot$ is given explicitly on the first pair of generators by
$$
\delta_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} =0,
\ \ \
\delta_\centerdot\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{_1}="1";
(-5,0)*{\bulletlet}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\hspace{-2mm}
=\hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-19,8)*{^1}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,3)*+{^2}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c\hspace{-2mm}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-19,8)*{^2}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-2.5)*{\bulletlet}="B";
(-9,3)*+{^1}="b1";
(-14,-8)*{\bulletlet}="b2";
(-23,-3)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "B";"b2" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}.
$$
where we used relations (\ref{3: R for LieB}) (and the ordering of the vertices in the second summand just above goes from the bottom to the top).
The action of ${\p}_\centerdot$ on the remaining MC generators is given by
$$
{\partial}_\centerdot \betagin{array}{c}\resizebox{1.6mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<0mm,4mm>*{}**@{-},
<0mm,0mm>*{_\bulletlet};
\end{xy}}\end{array}= +
{\mathfrak r}ac{1}{2}
\betagin{array}{c}\resizebox{6.1mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-4)*{\bulletlet}="C";
(-2,-4)*{\bulletlet}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\ \ \ \text{and}\ \ \ \
{\partial}_\centerdot
\betagin{array}{c}\resizebox{6.0mm}{!}{\betagin{xy}
<0mm,0.5mm>*{};<-3mm,6mm>*{^1}**@{-},
<0mm,0.5mm>*{};<3mm,6mm>*{^2}**@{-},
<0mm,0mm>*{_\bulletlet};
\end{xy}}\end{array}=
-
\betagin{array}{c}\resizebox{7.5mm}{!}{ \xy
(-5,-6)*{_\bulletlet}="1";
(-5,-0.5)*{\bulletlet}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
$$
and, for $m\geq 3$, by
$$
{\p}_\centerdot
\betagin{array}{c}\resizebox{13.5mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
=
\sum_{[m]=[m_0]\sqcup [m_1]\atop {\# m_0= 2, \# m_1\geq 1}}(-1)^{1+c\sigma'}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-27,8)*{}="1";
(-22,8)*{}="3";
(-18,8)*{}="n11";
(-13,8)*{}="n12";
(-15,7)*{...};
(-25,+2)*{\bulletlet}="L";
(-15.5,-5)*{\bulletlet}="B";
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "3";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\endxy}
\end{array}
-
\sum_{[m]=[m_0]\sqcup [m_1]\sqcup [m_2] \atop {\# m_0=1, \# m_1,\# m_2\geq 1}}{\mathfrak r}ac{(-1)^{c\sigma''}}{2}
\betagin{array}{c}\resizebox{17mm}{!}{ \xy
(-27,8)*{}="1";
(-25,8)*{}="2";
(-23,8)*{}="3";
(-19,8)*{}="n11";
(-16.8,7)*{...};
(-14,8)*{}="n12";
(-36,8)*{}="n21";
(-33.4,7)*{...};
(-31,8)*{}="n22";
(-25,+2)*{\bulletlet}="L";
(-17,-5)*{\bulletlet}="B";
(-33,-5)*{\bulletlet}="C";
<-25mm,10.6mm>*{\overbrace{ \ \ \ \ \ \ }^{m_0}},
<-17mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_2}},
<-34mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "2";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\ar @{-} "n21";"C" <0pt>
\ar @{-} "n22";"C" <0pt>
\endxy}
\end{array},
$$
where $\sigma'$ (resp. $\sigma''$) is the parity of the permutation $[m]\rightarrow[m_0]\sqcup [m_1]$ (resp.\ $[m]\rightarrow[m_0]\sqcup [m_1]\sqcup [m_2]$) associated with the partition of the ordered set $[m]$ into two (resp.\ three) disjoint ordered subsets.
These relations imply
$$
{\p}_\centerdot\left(\hspace{-3mm}
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{_1}="1";
(-5,-0.2)*{\bulletlet}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{13.5mm}{!}{ \xy
(-19,9)*{^1}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-7,5)*+{^2}="b1";
(-25,-5)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c
\betagin{array}{c}\resizebox{13.5mm}{!}{ \xy
(-19,9)*{^2}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-7,5)*+{^1}="b1";
(-25,-5)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
\right)=0
$$
which is in agreement with the general result saying that $\mathsf{Tw}\mathcal{L}\mathit{ieb}cd$ is a properad under $\mathcal{H}\mathit{olieb}_{c,d}$; the morphism (\ref{4: map frpm HoLBcd to TwP}) takes in this case the following form
\betagin{equation}\lambdabel{4: map frpm HoLBcd to TwP for P under LBcd}
\betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44
mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\longrightarrow
\left\{\betagin{array}{cl}
0 & \text{if}\, n\geq 2\ \text{and}\ m+n> 3,\\
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,6)*{}="1";
(-5,+1)*{\bulletlet}="L";
(-8,-5)*+{_1}="C";
(-2,-5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} & \text{if}\ n=2,m=1,
\\
\betagin{array}{c}\resizebox{8mm}{!}{ \xy
(-5,-6)*+{_1}="1";
(-5,-0.2)*{\bulletlet}="L";
(-8,5)*+{_1}="C";
(-2,5)*+{_2}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
\betagin{array}{c}\resizebox{13.5mm}{!}{ \xy
(-19,9)*{^1}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-7,5)*+{^2}="b1";
(-25,-5)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array}
+
(-1)^c
\betagin{array}{c}\resizebox{13.5mm}{!}{ \xy
(-19,9)*{^2}="1";
(-19,+3)*{\bulletlet}="L";
(-14,-3.5)*{\bulletlet}="B";
(-7,5)*+{^1}="b1";
(-25,-5)*+{_1}="C";
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "B";"b1" <0pt>
\ar @{-} "1";"L" <0pt>
\endxy}
\end{array} & \text{if}\ n=1,m=2,
\\
\sum_{[m]=[m_0]\sqcup [m_1]\atop {\# m_0= 1, \# m_1\geq 1}}(-1)^{c\sigma_{m_0,m_1}}
\betagin{array}{c}\resizebox{11mm}{!}{ \xy
(-25,8)*{}="1";
(-18,8)*{}="n11";
(-13,8)*{}="n12";
(-15,7)*{...};
(-25,+2)*{\bulletlet}="L";
(-15.5,-5)*{\bulletlet}="B";
(-27,-5)*{}="C";
<-25mm,10.6mm>*{\overbrace{ \ \ }^{m_0}},
<-16mm,10.6mm>*{\overbrace{ \ \ \ \ }^{m_1}},
\ar @{-} "B";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "n11";"B" <0pt>
\ar @{-} "n12";"B" <0pt>
\endxy}
\end{array}
&
\text{if}\ n=1, m\geq 3.
\end{array}
\right.
\end{equation}
Note that the quotient of the dg properad $\mathsf{Tw}\mathcal{L}\mathit{ieb}cd$ by the (differential) ideal generated by the corollas $\betagin{array}{c}\resizebox{11mm}{!}{
\betagin{xy}
<0mm,-1mm>*{\bulletlet};
<0mm,0mm>*{};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}$ with $m\neq 2$
gives us precisely the properad of Lie trialgebras $\mathcal{L}\mathit{ieb}cd^\vee$.
Theorem {\ref{4: theorem on H(TwHoLBcd)}} implies that the canonical
projection
$
\mathsf{Tw}\mathcal{L}\mathit{ieb}cd \rightarrow \mathcal{L}\mathit{ieb}cd
$
is a quasi-isomorphism.
Similarly one can describe explicitly the twisted properad $\mathsf{Tw}{\mathcal P}$ associated to any properad ${\mathcal P}\in \mathsf{PROP}_{\mathcal{L}\mathit{ieb}cd}$.
Note that $\mathsf{Tw}{\mathcal P}$ is {\em not}\, in general a properad under $\mathcal{L}\mathit{ieb}cd$ as higher homotopy operations of type $(m\geq 3,1)$ can be non-trivial.
{\Large
\section{\bf Full twisting endofunctor in the case of involutive Lie bialgebras}
}
\subsection{Introduction}
This section adapts the full twisting endofunctor $\mathsf{Tw}$ in the category of properads under $\mathcal{H}\mathit{olieb}_{c,d}$ to the case when the (strongly homotopy) Lie bialgebras satisfy the {\em involutivity or diamond}\, condition (which is often satisfied in applications). The corresponding twisting endofunctor
$$
\mathsf{Tw}^\diamond: \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}^\diamond} \longrightarrow \mathsf{PROP}_{\mathcal{H}\mathit{olieb}_{c,d}^\diamond}
$$
admits a much shorter and nicer formulation than $\mathsf{Tw}$ due to the equivalence of $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structures and the so-called homotopy ${\mathcal B}{\mathcal V}^{com}$-structures, which are used heavily in the Batalin-Vilkovisky formalism of mathematical physics and QFT.
We show many formulae explicitly but omit the calculations proving them, because the proofs are closely analogous to the ones given in the previous sections, and because this {\em diamond}\, version $\mathsf{Tw}^\diamond$ of $\mathsf{Tw}$ gives us essentially nothing new --- on representations of the twisted properad $\mathsf{Tw}^\diamond\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ defined below one recovers the well-known constructions from
\cite{CFL,NW}. Thus the only small novelty is that we arrive at these beautiful results using a different language, the properadic one.
\subsection{Reminder on involutive Lie bialgebras} Given any pair of integers $c,d$ of the same parity, $c=d\bmod 2{\mathbb Z}$, the properad $\mathcal{L}\mathit{ieb}^\diamondcd$ of {\em involutive}\, Lie bialgebras is defined as the quotient of $\mathcal{L}\mathit{ieb}cd$ by the ideal generated by the involutivity, or ``diamond", relation
$$
\betagin{array}{c}\resizebox{5.5mm}{!}{
\xy
(0,0)*{\bulletlet}="a",
(0,6)*{\bulletlet}="b",
(3,3)*{}="c",
(-3,3)*{}="d",
(0,9)*{}="b'",
(0,-3)*{}="a'",
\ar@{-} "a";"c" <0pt>
\ar @{-} "a";"d" <0pt>
\ar @{-} "a";"a'" <0pt>
\ar @{-} "b";"c" <0pt>
\ar @{-} "b";"d" <0pt>
\ar @{-} "b";"b'" <0pt>
\endxy}
\end{array}=0.
$$
Note that this relation is void in $\mathcal{L}\mathit{ieb}cd$ for $c$ and $d$ of opposite parities.
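On representations (and up to the degree shifts and sign conventions used throughout, which we do not repeat here), the diamond relation admits a very simple unpacking: writing, just for this remark, $[\ ,\ ]\colon V^{\otimes 2}\rightarrow V$ for the binary bracket and $\nu\colon V\rightarrow V^{\otimes 2}$ for the binary cobracket, it says that
$$
[\ ,\ ]\circ \nu \;=\;0\ \colon\ V\longrightarrow V,
$$
i.e.\ the Lie bracket composed with the Lie cobracket vanishes.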
It was proven in \cite{CMW} that the minimal resolution $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ of the properad $\mathcal{L}\mathit{ieb}^\diamondcd$ is a free properad generated
by the following (skew)symmetric corollas of degree $1+c(1-m-a) + d(1-n-a)$
\betagin{equation}\lambdabel{5: generators of HoLoBcd}
\betagin{array}{c}\resizebox{13mm}{!}{\xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_n};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_m};
\endxy}\end{array}
=
(-1)^{(d+1)(\sigma+\tau)}
\betagin{array}{c}\resizebox{16mm}{!}{\xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-12,-8)*{_{\tau(1)}};
(-6,-8)*{_{\tau(2)}};
(12,-8)*{_{\tau(n)}};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-12,8)*{_{\sigma(1)}};
(-6,8)*{_{\sigma(2)}};
(12,8)*{_{\sigma(m)}};
\endxy}\end{array}\ \ \ {\mathfrak o}rall \sigma\in {\mathbb S}_m, {\mathfrak o}rall \tau\in {\mathbb S}_n,
\end{equation}
where $m+n+ a\geq 3$, $m\geq 1$, $n\geq 1$, $a\geq 0$. The differential in
$\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ is given on the generators by
\betagin{equation}\lambdabel{5: d on HoLoBcd}
\delta
\betagin{array}{c}\resizebox{13mm}{!}{\xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_n};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_m};
\endxy}\end{array}
=
\sum_{l\geq 1}\sum_{a=b+c+l-1}\sum_{[m]=I_1\sqcup I_2\atop
[n]=J_1\sqcup J_2} {\partial}m
\betagin{array}{c}
\betagin{array}{c}\resizebox{18mm}{!}{\xy
(0,0)*+{b}*\cir{}="b",
(10,10)*+{c}*\cir{}="c",
(-9,6)*{}="1",
(-7,6)*{}="2",
(-2,6)*{}="3",
(-3.5,5)*{...},
(-4,-6)*{}="-1",
(-2,-6)*{}="-2",
(4,-6)*{}="-3",
(1,-5)*{...},
(0,-8)*{\underbrace{\ \ \ \ \ \ \ \ }},
(0,-11)*{_{J_1}},
(-6,8)*{\overbrace{ \ \ \ \ \ \ }},
(-6,11)*{_{I_1}},
(6,16)*{}="1'",
(8,16)*{}="2'",
(14,16)*{}="3'",
(11,15)*{...},
(11,6)*{}="-1'",
(16,6)*{}="-2'",
(18,6)*{}="-3'",
(13.5,6)*{...},
(15,4)*{\underbrace{\ \ \ \ \ \ \ }},
(15,1)*{_{J_2}},
(10,18)*{\overbrace{ \ \ \ \ \ \ \ \ }},
(10,21)*{_{I_2}},
(0,2)*-{};(8.0,10.0)*-{}
**\crv{(0,10)};
(0.5,1.8)*-{};(8.5,9.0)*-{}
**\crv{(0.4,7)};
(1.5,0.5)*-{};(9.1,8.5)*-{}
**\crv{(5,1)};
(1.7,0.0)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(5,5)*+{...};
\ar @{-} "b";"1" <0pt>
\ar @{-} "b";"2" <0pt>
\ar @{-} "b";"3" <0pt>
\ar @{-} "b";"-1" <0pt>
\ar @{-} "b";"-2" <0pt>
\ar @{-} "b";"-3" <0pt>
\ar @{-} "c";"1'" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"3'" <0pt>
\ar @{-} "c";"-1'" <0pt>
\ar @{-} "c";"-2'" <0pt>
\ar @{-} "c";"-3'" <0pt>
\endxy}\end{array}
\end{array}
\end{equation}
where the summation parameter $l$ counts the number of internal edges connecting the two vertices
on the r.h.s., and the signs are fixed by the fact that they are all equal to $-1$ when $c$ and $d$ are
odd integers.
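As a consistency check of the degree formula $1+c(1-m-a)+d(1-n-a)$ above (a direct specialisation recorded here only for the reader's convenience), the two binary generators with $(m,n,a)=(2,1,0)$ and $(1,2,0)$ have degrees
$$
1+c(1-2-0)+d(1-1-0)=1-c,
\qquad
1+c(1-1-0)+d(1-2-0)=1-d,
$$
i.e.\ they reproduce the degrees of the Lie cobracket and of the Lie bracket generating $\mathcal{L}\mathit{ieb}cd$.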
The ``plus" extension (see \S {\ref{2: subsection on plus functor}}), $\mathcal{H}\mathit{olieb}_{c,d}^{\diamond +}$, of this properad looks especially natural --- one adds just one extra $(1,1)$-generator (which we denote from now on by $\xy
(0,-4)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(0,4)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
\endxy$) to the list while keeping the differential (\ref{5: d on HoLoBcd}) formally the same.
Let $\hbar$ be a formal variable of degree $c+d$ and, for a vector space $V$, let $V[[\hbar]]$ stand for the topological vector space of formal power series with coefficients in $V$; it is a module over the topological ring ${\mathbb K}[[\hbar]]$ of formal power series in $\hbar$. Consider a dg properad $\mathcal{H}\mathit{olieb}_{c,d}^{\hbar+}$ which is identical to
$\mathcal{H}\mathit{olieb}_{c,d}^+[[\hbar]]$ as a topological ${\mathbb K}[[\hbar]]$-module but is equipped with a different $\hbar$-dependent differential
\betagin{equation}\lambdabel{5: d_hbar differential in HoLoB[[h]]}
\delta
\betagin{array}{c}\resizebox{13mm}{!}{\xy
(-9,-6)*{};
(0,0)*+{\bulletlet}
**\dir{-};
(-5,-6)*{};
(0,0)*+{}
**\dir{-};
(9,-6)*{};
(0,0)*+{}
**\dir{-};
(5,-6)*{};
(0,0)*+{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_m};
(-9,6)*{};
(0,0)*+{}
**\dir{-};
(-5,6)*{};
(0,0)*+{}
**\dir{-};
(9,6)*{};
(0,0)*+{}
**\dir{-};
(5,6)*{};
(0,0)*+{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_n};
\endxy}\end{array}
=
\sum_{l\geq 1}\sum_{[m]=I_1\sqcup I_2\atop
[n]=J_1\sqcup J_2}{\partial}m \hbar^{l-1}
\betagin{array}{c}
\betagin{array}{c}\resizebox{18mm}{!}{\xy
(0,0)*+{\bulletlet}="b",
(10,10)*+{\bulletlet}="c",
(-9,6)*{}="1",
(-7,6)*{}="2",
(-2,6)*{}="3",
(-3.5,5)*{...},
(-4,-6)*{}="-1",
(-2,-6)*{}="-2",
(4,-6)*{}="-3",
(1,-5)*{...},
(0,-8)*{\underbrace{\ \ \ \ \ \ \ \ }},
(0,-11)*{_{J_1}},
(-6,8)*{\overbrace{ \ \ \ \ \ \ }},
(-6,11)*{_{I_1}},
(6,16)*{}="1'",
(8,16)*{}="2'",
(14,16)*{}="3'",
(11,15)*{...},
(11,6)*{}="-1'",
(16,6)*{}="-2'",
(18,6)*{}="-3'",
(13.5,6)*{...},
(15,4)*{\underbrace{\ \ \ \ \ \ \ }},
(15,1)*{_{J_2}},
(10,18)*{\overbrace{ \ \ \ \ \ \ \ \ }},
(10,21)*{_{I_2}},
(0,2)*-{};(8.0,10.0)*-{}
**\crv{(0,10)};
(0.5,1.8)*-{};(8.5,9.0)*-{}
**\crv{(0.4,7)};
(1.5,0.5)*-{};(9.1,8.5)*-{}
**\crv{(5,1)};
(1.7,0.0)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(5,5)*+{...};
\ar @{-} "b";"1" <0pt>
\ar @{-} "b";"2" <0pt>
\ar @{-} "b";"3" <0pt>
\ar @{-} "b";"-1" <0pt>
\ar @{-} "b";"-2" <0pt>
\ar @{-} "b";"-3" <0pt>
\ar @{-} "c";"1'" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"3'" <0pt>
\ar @{-} "c";"-1'" <0pt>
\ar @{-} "c";"-2'" <0pt>
\ar @{-} "c";"-3'" <0pt>
\endxy}\end{array}
\end{array}
\end{equation}
where $l$ counts the number of internal edges connecting the two vertices on the r.h.s. The symbol ${\partial}m$ stands for $-1$ in the case $c,d\in 2{\mathbb Z}$. There is a morphism of dg properads (cf.\ \cite{CMW})
$$
{\mathcal F}^+: \mathcal{H}\mathit{olieb}_{c,d}^{\hbar+} \longrightarrow \mathcal{H}\mathit{olieb}_{c,d}^{\diamond +}[[\hbar]]
$$
given on the generators as follows (cf.\ \cite{CMW})
\betagin{equation}\lambdabel{5: formal power series of generators}
{\mathcal F}^+:\
\betagin{array}{c}\resizebox{13mm}{!}{\betagin{xy}
<0mm,0mm>*{\bulletlet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\longrightarrow
\sum_{a=0}^\infty \hbar^{a}
\betagin{array}{c}\resizebox{14mm}{!}{ \xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_m};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_n};
\endxy}\end{array}
\ \ \ \ \ \ \ \ \ \ \ \ \ {\mathfrak o}rall \ m,n\geq 1.
\end{equation}
There is obviously a 1-1 correspondence between morphisms of dg properads
$\mathcal{H}\mathit{olieb}_{c,d}^\diamond \longrightarrow {\mathcal P}
$
in the category of graded vector spaces over ${\mathbb K}$, and continuous morphisms
of dg properads
$\mathcal{H}\mathit{olieb}_{c,d}^{\hbar+} \longrightarrow {\mathcal P}[[\hbar]]
$
in the category of topological ${\mathbb K}[[\hbar]]$-modules.
Let $\mathcal{H} \mathit{olie}_d^\diamond$ be the quotient of $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ by the (differential) ideal generated
by all corollas with the number of outgoing legs $\geq 2$.
It is generated by the following family of (skew)symmetric corollas with $a\geq 0$, $n\geq 1$ and $a+n\geq 2$,
$$
\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-7.5,-8.6)*{_{_1}};
(-4.1,-8.6)*{_{_2}};
(9.0,-8.5)*{_{_{n}}};
(0.0,-6)*{...};
(0,6)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-7,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(8,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
\endxy}\end{array}
=(-1)^{d|\sigma|}
\betagin{array}{c}\resizebox{15.2mm}{!}{ \xy
(-8.5,-8.6)*{_{_{\sigma(1)}}};
(-3.1,-8.6)*{_{_{\sigma(2)}}};
(9.0,-8.5)*{_{_{\sigma(n)}}};
(0.0,-6)*{...};
(0,6)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-7,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(8,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
\endxy}\end{array}\ \ \ {\mathfrak o}rall\ \sigma\in {\mathbb S}_n,
$$
which are assigned degree $1 - d(n-1 +a)$; the induced differential acts as follows
$$
\delta\betagin{array}{c}
\resizebox{12mm}{!}{ \xy
(-7.5,-8.6)*{_{_1}};
(-4.1,-8.6)*{_{_2}};
(9.0,-8.5)*{_{_{n}}};
(0.0,-6)*{...};
(0,6)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-7,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(8,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
\endxy}\end{array}
=
\sum_{a=p+q\atop [n]=I_1\sqcup I_2}{\partial}m
\betagin{array}{c}
\resizebox{15mm}{!}{ \xy
(0,0)*+{p}*\cir{}="b",
(10,10)*+{q}*\cir{}="c",
(-4,-6)*{}="-1",
(-2,-6)*{}="-2",
(4,-6)*{}="-3",
(1,-5)*{...},
(0,-8)*{\underbrace{\ \ \ \ \ \ \ \ }},
(0,-11)*{_{I_1}},
(10,16)*{}="2'",
(11,4)*{}="-1'",
(16,4)*{}="-2'",
(18,4)*{}="-3'",
(13.5,4)*{...},
(15,2)*{\underbrace{\ \ \ \ \ \ \ }},
(15,-1)*{_{I_2}},
\ar @{-} "b";"c" <0pt>
\ar @{-} "b";"-1" <0pt>
\ar @{-} "b";"-2" <0pt>
\ar @{-} "b";"-3" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"-1'" <0pt>
\ar @{-} "c";"-2'" <0pt>
\ar @{-} "c";"-3'" <0pt>
\endxy}
\end{array}
$$
Representations, $\rho: \mathcal{H} \mathit{olie}^\diamond_{d}\rightarrow {\mathcal E} nd_V$, of this operad in a dg vector space $(V,{\partial})$ can be identified
with continuous representations of the topological operad $\mathcal{H} \mathit{olie}_{d}[[\hbar]]$
in the topological vector space $V[[\hbar]]$ equipped with the differential
$$
{\partial} + \sum_{p\geq 1}\hbar^p \Delta_p, \ \ \ \Delta_p:=\rho \left(\betagin{array}{c}
\resizebox{4mm}{!}{ \xy
(0,5)*{};
(0,0)*+{_p}*\cir{}
**\dir{-};
(0,-5)*{};
(0,0)*+{_p}*\cir{}
**\dir{-};
\endxy}\end{array}\right).
$$
Here the formal parameter $\hbar$ is assumed to have homological degree $d$.
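Note that this degree assignment is consistent: by the degree formula above, the $(1,1)$-corolla of weight $a=p$ has degree $1-d(1-1+p)=1-dp$, so that each summand of the twisted differential has total degree
$$
\big|\hbar^{p}\Delta_p\big|\;=\;dp+(1-dp)\;=\;1,
$$
as it must.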
It is easy to see that the quotient of the dg properad
$\mathcal{H}\mathit{olieb}_{c,d}^{\hbar+}$ by the (differential) ideal generated
by all corollas with the number of outgoing legs $\geq 2$ is identical to $\mathcal{H} \mathit{olie}_d^+[[\hbar]]$ as a dg properad.
Hence we obtain from (\ref{5: formal power series of generators}) a canonical morphism of dg properads
\betagin{equation}\lambdabel{5: f^+ from Holie^h to Holie[h]}
f^+: \mathcal{H} \mathit{olie}_d^{+}[[\hbar]] \longrightarrow \mathcal{H} \mathit{olie}_d^{\diamond+}[[\hbar]]
\end{equation}
It gives us a compact presentation of any morphism $\mathcal{H} \mathit{olie}_d^{\diamond+}\rightarrow {\mathcal P}$ as an associated continuous morphism of properads
$\mathcal{H} \mathit{olie}_d[[\hbar]] \rightarrow {\mathcal P}[[\hbar]]$ in the category of topological ${\mathbb K}[[\hbar]]$-modules.
\subsubsection{\bf Proposition}\lambdabel{4: Prop on map from diamond Lie^+ to OHoLB}
{\em There is a morphism of dg operads
$$
F^+: \mathcal{H} \mathit{olie}^{\diamond +}_{c+d} \to {\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}^\diamond
$$
given explicitly on the $(1,1)$-generators by
$$
\betagin{array}{c}
\resizebox{3.5mm}{!}{ \xy
(0,5)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(0,-5)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
\endxy}\end{array}
\longrightarrow
\sum_{m\geq 2} \betagin{array}{c}\resizebox{9mm}{!}{\betagin{xy}
(0,-6)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(-2,6)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
(2,6)*{};
(0,0)*+{_0}*\cir{}
**\dir{-};
<0mm,8mm>*{\overbrace{ \ \ \ \ \ \ \ \ \ \ }},
<0mm,10mm>*{^m},
\end{xy}}\end{array},
$$
and on the remaining $(1,n)$-generators with $a+n\geq 2$ by}
\betagin{equation}\lambdabel{5: map Holie^diom+c+d to fc,d}
\betagin{array}{c}
\resizebox{12mm}{!}{ \xy
(-7.5,-8.6)*{_{_1}};
(-4.1,-8.6)*{_{_2}};
(9.0,-8.5)*{_{_{n}}};
(0.0,-6)*{...};
(0,6)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(-7,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(8,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
(4,-7)*{};
(0,0)*+\hbox{$_{{a}}$}*{\mathfrak r}m{o}
**\dir{-};
\endxy}\end{array}
\longrightarrow
\sum_{m\geq 1, l_i\geq 1 \atop a=c+\sum_{i=1}^n(l_i-1)}
\betagin{array}{c}\resizebox{16mm}{!}{\xy
(0,0)*+{_1}*\cir{}="b",
(10,10)*+{c}*\cir{}="c",
(20,0)*+{_n}*\cir{}="r",
(6,16)*{}="1'",
(8,16)*{}="2'",
(14,16)*{}="3'",
(11,15)*{...},
(10,0)*{\ldots},
(10,18)*{\overbrace{ \ \ \ \ \ \ \ \ }},
(10,21)*{_{m}},
(0,2)*-{};(8.0,10.0)*-{}
**\crv{(0,10)};
(0.5,1.8)*-{};(8.5,9.0)*-{}
**\crv{(0.4,7)};
(1.8,0.6)*-{};(9.1,8.5)*-{}
**\crv{(5,1)};
(2.0,0.1)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(5,5)*+{_{l_1}};
(20,2)*-{};(12.0,10.0)*-{}
**\crv{(20,10)};
(19.5,1.8)*-{};(11.5,9.0)*-{}
**\crv{(20.4,7)};
(17.9,0.6)*-{};(10.9,8.5)*-{}
**\crv{(15,1)};
(1.7,0.0)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(16,5)*+{_{l_n}};
\ar @{-} "c";"1'" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"3'" <0pt>
\endxy}\end{array}.
\end{equation}
The proof is a straightforward direct calculation (cf.\ \S{\ref{4: Prop on map from Lie^+ to OHoLB}}).
The existence of such a map follows also from Proposition 5.4.1 and Lemma B.4.1 proven in \cite{CMW}.
Given a $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structure,
$$
\rho: \mathcal{H}\mathit{olieb}_{c,d}^\diamond \longrightarrow {\mathcal E} nd_V,
$$
$$
\mu^a_{m,n}:=\rho\left(\betagin{array}{c}\resizebox{13mm}{!}{ \xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_m};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_n};
\endxy}\end{array}
\right): \odot^n (V[-c]) \longrightarrow (\odot^m (V[-c])) [1+(c+d)(1-n-a)],
$$
in a graded vector space $V$, there is an associated ${\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}$-algebra structure
in $\odot^\bulletlet(V[-c])$ given in terms of polydifferential operators, and hence a
continuous $\mathcal{H} \mathit{olie}_d^{+}[[\hbar]]$-algebra structure on $\odot^\bulletlet(V[-c])[[\hbar]]$ given by the composition
$$
\mathcal{H} \mathit{olie}_d^{+}[[\hbar]] \stackrel{f^+}{\rightarrow} \mathcal{H} \mathit{olie}_d^{\diamond+}[[\hbar]] \stackrel{F^+}{\rightarrow} {\mathcal O}_{c,d}\mathcal{H}\mathit{olieb}_{c,d}[[\hbar]] \rightarrow {\mathcal E} nd_{\odot^\bulletlet(V[-c])}[[\hbar]]
$$
Assuming that the latter is nilpotent (or appropriately filtered, which is often the case in applications), one defines a {\em Maurer-Cartan element $\gamma$ of the given
$\mathcal{H}\mathit{olieb}_{c,d}$-algebra structure in $V$}\, as a Maurer-Cartan element of the induced continuous $\mathcal{H} \mathit{olie}_{c+d}^{\hbar+}$-algebra structure in $\odot^\bulletlet(V[-c])[[\hbar]]$. Using
(\ref{5: map Holie^diom+c+d to fc,d}) one can describe such an MC element as a homogeneous
(of degree $c+d$) formal power series
\betagin{equation}\lambdabel{5: ga-MC as h-series}
\gamma=\sum_{a\geq 0, m\geq 0}\hbar^a\gamma_{a,m}\in \odot^{\bulletlet\geq 1}(V[-c])[[\hbar]], \ \
\gamma_{a,m}\in \odot^m (V[-c]),
\end{equation}
satisfying the equation
\betagin{equation}\lambdabel{5: MC eqn Delta(e^g)}
\Delta_\rho\left(e^{{\mathfrak r}ac{\gamma}{\hbar}}\right)=0
\end{equation}
where $\Delta_\rho$ is a degree $+1$ polydifferential operator on $\odot^{\bulletlet}(V[-c])[[\hbar]]$ given, in an arbitrary basis $\{p_\alpha\}$ of $V[-c]$ and with summation over the repeated basis indices understood, as the sum (cf. \S {\ref{4: subsec on MC elements of HoLBcd}})
\betagin{equation}\lambdabel{5: Delta_rho}
\Delta_\rho:= \sum_{a\geq 0\atop
m,n\geq 1}{\partial}m \hbar^{a+n-1} \mu^a_{m,n}(p_{\alpha_1}\otimes ...\otimes p_{\alpha_n}){\mathfrak r}ac{{\partial}^n}{{\partial} p_{\alpha_1}\ldots {\partial} p_{\alpha_n}}
\end{equation}
Here the differential in $V$ is encoded as $\mu^0_{1,1}$. The operator $\Delta_\rho$ fully encodes the given $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structure $\rho$ in $V$: there is a {\em one-to-one correspondence}\, \cite{CMW,Me3} between $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structures in $V$ and degree
$1$ operators on $\odot^\bulletlet(V[-c])[[\hbar]]$ of the form
$$
\Delta=\sum_{a\geq 0} \hbar^a \Delta_a
$$
such that $\Delta_a$ is a derivation of the graded commutative algebra $\odot^\bulletlet(V[-c])$ of order $\leq a+1$ (such structures are often called ${\mathcal B}{\mathcal V}_\infty^{com}$-algebra structures in the literature).
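For illustration we spell out the simplest and well-known special case (cf.\ \cite{CFL,CMW}); it plays no role in the sequel. If $\rho$ is a genuine involutive Lie bialgebra structure, i.e.\ only $\mu^0_{1,1}$ (the differential), $\mu^0_{2,1}$ (the cobracket) and $\mu^0_{1,2}$ (the bracket) are non-zero, then (\ref{5: Delta_rho}) truncates, up to the signs fixed there, to
$$
\Delta_\rho=\Delta_0+\hbar\Delta_1,
\qquad
\Delta_0=\big(\mu^0_{1,1}+\mu^0_{2,1}\big)(p_\alpha)\frac{\partial}{\partial p_\alpha},
\qquad
\Delta_1=\mu^0_{1,2}(p_{\alpha_1}\otimes p_{\alpha_2})\frac{\partial^2}{\partial p_{\alpha_1}\partial p_{\alpha_2}},
$$
so that $\Delta_0$ is a derivation (order $\leq 1$) while $\Delta_1$ has order $\leq 2$; the equation $\Delta_\rho^2=0$ is then equivalent to the axioms of a (degree shifted) involutive Lie bialgebra structure on $V$ compatible with the differential.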
The sum
\betagin{equation}\lambdabel{5: twisted by MC differentiail in V}
\delta_\gamma v :=\sum_{n\geq 0}{\mathfrak r}ac{1}{n!}{\mu}^0_{1,n+1}(\underbrace{\gamma_1,\ldots,\gamma_1}_n, v ),
\end{equation}
is a twisted differential on $V$ which is used in the following definition-proposition.
\subsection{Diamond twisting endofunctor} The {\em diamond twisting}, $(\mathsf{Tw}^\diamond {\mathcal P},{\partial}c)$, of a dg properad $({\mathcal P},{\partial})$ under $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$
is, by definition, a properad freely generated by ${\mathcal P}$ and the ${\mathbb S}$-bimodule $M=\{M(m,n)\}$ such that $M(m,n)=0$ for $n\geq 1$ and $M(m,0)=\oplus_{a\geq 0} M_a(0,m)$ with $M_a(0,m)$ being the following 1-dimensional representations of ${\mathbb S}_m$,
$$
M_a(0,m):={\mathit s \mathit g\mathit n}_m^{|c|}[(c+d)(1-a)-cm]=\text{span}\left\lambdangle
\betagin{array}{c}\resizebox{9mm}{!}{\betagin{xy}
(0,6)*{...};
(-5.5,7)*{^1};
(0,0)*+{_a}*\cir{}
**\dir{-};
(5.5,7)*{^m};
(0,0)*+{_a}*\cir{}
**\dir{-};
(-2.5,7)*{^2};
(0,0)*+{_a}*\cir{}
**\dir{-};
(2.5,6)*{};
(0,0)*+{_a}*\cir{}
**\dir{-};
\end{xy}}\end{array}
\right\rangle
$$
The differential in $\mathsf{Tw}^\diamond {\mathcal P}$ is defined on the generators as follows
\betagin{equation}\lambdabel{5: d_centerdot on Tw^diamond P}
{\p}_\centerdot \betagin{array}{c}\resizebox{13mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
=
{\p} \betagin{array}{c}\resizebox{13mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,6mm>*{^1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,6mm>*{^2}**@{-},
<0mm,0mm>*{\circ};<0mm,5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,6mm>*{^m}**@{-},
<0mm,0mm>*{\circ};<-8mm,-6mm>*{_1}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-6mm>*{_2}**@{-},
<0mm,0mm>*{\circ};<0mm,-5.5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-6mm>*+{}**@{-},
<0mm,0mm>*{\circ};<8mm,-6mm>*{_n}**@{-},
\end{xy}}\end{array}
+
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{
\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\blacklozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,5mm>*{}**@{-},
<6mm,5mm>*{..}**@{},
<-8.5mm,5.5mm>*{^1}**@{},
<-4mm,5.5mm>*{^i}**@{},
<9.0mm,5.5mm>*{^m}**@{},
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,-5mm>*{}**@{-},
<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<-8.5mm,-6.9mm>*{^1}**@{},
<-5mm,-6.9mm>*{^2}**@{},
<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
- (-1)^{|a|}
\overset{n-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy}
<0mm,0mm>*{\circ};<-8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-3.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-6mm,-5mm>*{..}**@{},
<0mm,0mm>*{\circ};<0mm,-5mm>*{}**@{-},
<0mm,-5mm>*{\blacklozenge};
<0mm,-5mm>*{};<0mm,-8mm>*{}**@{-},
<0mm,-5mm>*{};<0mm,-10mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{\circ};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{\circ};<3.5mm,-5mm>*{}**@{-},
<6mm,-5mm>*{..}**@{},
<-8.5mm,-6.9mm>*{^1}**@{},
<-4mm,-6.9mm>*{^i}**@{},
<9.0mm,-6.9mm>*{^n}**@{},
<0mm,0mm>*{\circ};<-8mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<-4.5mm,5mm>*{}**@{-},
<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{\circ};<4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{\circ};<8mm,5mm>*{}**@{-},
<-8.5mm,5.5mm>*{^1}**@{},
<-5mm,5.5mm>*{^2}**@{},
<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<9.0mm,5.5mm>*{^m}**@{},
\end{xy}}\end{array}
\end{equation}
\betagin{equation}\lambdabel{5: d_centerdot on involutive MC generators of Tw^diamondP}
{\p}_\centerdot
\betagin{array}{c}\resizebox{9mm}{!}{\betagin{xy}
(0,6)*{...};
(-5.5,7)*{^1};
(0,0)*+{_a}*\cir{}
**\dir{-};
(5.5,7)*{^m};
(0,0)*+{_a}*\cir{}
**\dir{-};
(-2.5,7)*{^2};
(0,0)*+{_a}*\cir{}
**\dir{-};
(2.5,6)*{};
(0,0)*+{_a}*\cir{}
**\dir{-};
\end{xy}}\end{array}
:=
\overset{m-1}{\underset{i=0}{\sum}}
\betagin{array}{c}\resizebox{15mm}{!}{
\betagin{xy}
<0mm,-1mm>*+{_a}*\cir{};
<-0.9mm,0.2mm>*{};<-8mm,5mm>*{}**@{-},
<-0.3mm,0.4mm>*{};<-3.5mm,5mm>*{}**@{-},
<-6mm,5mm>*{..}**@{},
<0mm,0.5mm>*{};<0mm,5mm>*{}**@{-},
<0mm,5mm>*{\blacklozenge};
<0mm,5mm>*{};<0mm,8mm>*{}**@{-},
<0mm,5mm>*{};<0mm,9mm>*{^{i\hspace{-0.2mm}+\hspace{-0.5mm}1}}**@{},
<0.9mm,0.2mm>*{};<8mm,5mm>*{}**@{-},
<0.3mm,0.4mm>*{};<3.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<6mm,5mm>*{..}**@{},
<0mm,0mm>*{};<-8.5mm,5.7mm>*{^1}**@{},
<0mm,0mm>*{};<-4mm,5.7mm>*{^i}**@{},
<0mm,0mm>*{};<9.0mm,5.7mm>*{^m}**@{},
\end{xy}}\end{array}
-
\sum_{k\geq 1, [m]=\sqcup [m_\bulletlet]\atop
a=b+\sum_{i=1}^k(c_i+l_i-1)}
{\mathfrak r}ac{1}{k!}
\betagin{array}{c}\resizebox{21mm}{!}{\xy
(0,0)*+{_{c_1}}*\cir{}="b",
(10,10)*+{b}*\cir{}="c",
(20,0)*+{_{c_k}}*\cir{}="r",
(6,16)*{}="1'",
(8,16)*{}="2'",
(14,16)*{}="3'",
(11,15)*{...},
(10,0)*{\ldots},
(10,18)*{\overbrace{ \ \ \ \ \ \ \ \ }},
(10,21)*{_{m_0}},
(0,2)*-{};(8.0,10.0)*-{}
**\crv{(0,10)};
(0.5,1.8)*-{};(8.5,9.0)*-{}
**\crv{(0.4,7)};
(1.8,0.6)*-{};(9.1,8.5)*-{}
**\crv{(5,1)};
(2.0,0.1)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(5,5)*+{_{l_1}};
(20,2)*-{};(12.0,10.0)*-{}
**\crv{(20,10)};
(19.5,1.8)*-{};(11.5,9.0)*-{}
**\crv{(20.4,7)};
(17.9,0.6)*-{};(10.9,8.5)*-{}
**\crv{(15,1)};
(1.7,0.0)*-{};(9.5,8.6)*-{}
**\crv{(6,-1)};
(16,5)*+{_{l_k}};
(-7,10)*{}="1l";
(-5,10)*{}="2l";
(-3,10)*{}="3l";
(-5,12)*{\overbrace{ \ \ \ \ \ \ }},
(-5,14)*{_{m_1}},
(27,10)*{}="1r";
(25,10)*{}="2r";
(23,10)*{}="3r";
(25,12)*{\overbrace{ \ \ \ \ \ \ }},
(25,14)*{_{m_k}},
\ar @{-} "c";"1'" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"3'" <0pt>
\ar @{-} "b";"1l" <0pt>
\ar @{-} "b";"2l" <0pt>
\ar @{-} "b";"3l" <0pt>
\ar @{-} "r";"1r" <0pt>
\ar @{-} "r";"2r" <0pt>
\ar @{-} "r";"3r" <0pt>
\endxy}\end{array}\ \ \ \ \forall\ a,m\geq 1.
\end{equation}
where $\begin{array}{c}\resizebox{1.8mm}{!}{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0.5mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}}\end{array}
$ is given by
\begin{equation}\label{5: involutive blacklozenge (1,1) element}
\begin{xy}
<0mm,-3mm>*{};<0mm,3mm>*{}**@{-},
<0mm,0mm>*{_\blacklozenge};<0mm,0mm>*{}**@{},
\end{xy}
:=
\sum_{k=1}^\infty \frac{1}{k!}\ \resizebox{17mm}{!}{ \xy
(-25,9)*{}="1";
(-9,-4)*{...};
<-11mm,-8mm>*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{k}},
(-25,3)*+{_0}*\cir{}="L";
(-25,-3)*{}="N";
(-19,-4)*+{_0}*\cir{}="B";
(-13,-4)*+{_0}*\cir{}="C";
(-2,-4)*+{_0}*\cir{}="D";
\ar @{-} "D";"L" <0pt>
\ar @{-} "C";"L" <0pt>
\ar @{-} "B";"L" <0pt>
\ar @{-} "1";"L" <0pt>
\ar @{-} "N";"L" <0pt>
\endxy}
\end{equation}
Note that for $m+a\geq 1$ the first sum on the r.h.s.\ of (\ref{4: d_centerdot on MC generators of Tw(P)}) cancels out with all the summands corresponding to $k\geq 2$, $m_0=1$, $m_i=m-1$, $c_i=a$, $i\in [k]$, in the second sum.
\subsubsection{\bf Theorem}\label{5: Th on HoLB^h to Tw(HoLB)[[h]]}\label{5: Theorem on map from HoLoBcd to TwP} {\em
For any dg properad ${\mathcal P}$ equipped with a map
$$
f: \mathcal{H}\mathit{olieb}^\diamond_{c,d} \longrightarrow {\mathcal P}
$$
there is an associated map of dg properads
$$
\mathsf{Tw} f: \mathcal{H}\mathit{olieb}^\diamond_{c,d} \longrightarrow \mathsf{Tw}^\diamond{\mathcal P}
$$
given explicitly by
$$
\begin{array}{c}\resizebox{13mm}{!}{\xy
(-9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,-6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,-6)*{\ldots};
(-10,-8)*{_1};
(-6,-8)*{_2};
(10,-8)*{_n};
(-9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(-5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(9,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(5,6)*{};
(0,0)*+{a}*\cir{}
**\dir{-};
(0,6)*{\ldots};
(-10,8)*{_1};
(-6,8)*{_2};
(10,8)*{_m};
\endxy}\end{array}
\longrightarrow
\sum_{k\geq 1, [m]=\sqcup [m_\bullet]
\atop
a=b+\sum_{i=1}^k(c_i+l_i-1)}
\pm
\frac{1}{k!}
\begin{array}{c}\resizebox{21mm}{!}{\xy
(-3,0)*+{_{c_1}}*\cir{}="b",
(10,10)*+{b}*\cir{}="c",
(23,0)*+{_{c_k}}*\cir{}="r",
(6,16)*{}="1'",
(8,16)*{}="2'",
(14,16)*{}="3'",
(11,15)*{...},
(8,2)*{}="1''",
(10,2)*{}="2''",
(12,2)*{}="3''",
(3,-1)*{...},
(17,-1)*{...},
(10,18)*{\overbrace{ \ \ \ \ \ \ \ \ }},
(10,21)*{_{m_0}},
(10,1)*{\underbrace{ \ }},
(10,-1)*{_{n}},
(-3,2)*-{};(8.0,10.0)*-{}
**\crv{(0,10)};
(-2.5,1.8)*-{};(8.5,9.0)*-{}
**\crv{(0.4,7)};
(-1.2,0.6)*-{};(9.1,8.5)*-{}
**\crv{(5,1)};
(5,5)*+{_{l_1}};
(23,2)*-{};(12.0,10.0)*-{}
**\crv{(20,10)};
(22.5,1.8)*-{};(11.5,9.0)*-{}
**\crv{(20.4,7)};
(20.9,0.6)*-{};(10.9,8.5)*-{}
**\crv{(15,1)};
(18,5)*+{_{l_k}};
(-7,10)*{}="1l";
(-5,10)*{}="2l";
(-3,10)*{}="3l";
(-5,12)*{\overbrace{ \ \ \ \ \ \ }},
(-5,14)*{_{m_1}},
(27,10)*{}="1r";
(25,10)*{}="2r";
(23,10)*{}="3r";
(25,12)*{\overbrace{ \ \ \ \ \ \ }},
(25,14)*{_{m_k}},
\ar @{-} "c";"1'" <0pt>
\ar @{-} "c";"2'" <0pt>
\ar @{-} "c";"3'" <0pt>
\ar @{-} "c";"1''" <0pt>
\ar @{-} "c";"2''" <0pt>
\ar @{-} "c";"3''" <0pt>
\ar @{-} "b";"1l" <0pt>
\ar @{-} "b";"2l" <0pt>
\ar @{-} "b";"3l" <0pt>
\ar @{-} "r";"1r" <0pt>
\ar @{-} "r";"2r" <0pt>
\ar @{-} "r";"3r" <0pt>
\endxy}\end{array}
$$
}
(In the case $c,d\in 2{\mathbb Z}$ the symbol $\pm$ above can be replaced by $+1$.)
To prove this statement one has to check the compatibility of $\mathsf{Tw} f$ with the differentials on both sides.
This can be done either by a direct (but tedious) computation or by studying generic representations of both properads involved in the above statement as is done briefly in
\S {\ref{5: subsec on repr of Tw^diamond HoLoB}} below in the most important and illustrative case ${\mathcal P}=\mathcal{H}\mathit{olieb}_{c,d}^\diamond$.
In full analogy with \S {\ref{3: Theorem on Def action on TwP}}, the deformation complex of any morphism $f$ as above acts on $\mathsf{Tw}^\diamond {\mathcal P}$ by derivations, that is,
there is a morphism of dg Lie algebras
$$
\mathsf{Def}\left(\mathcal{H}\mathit{olieb}_{c,d}^\diamond \stackrel{f}{\rightarrow} {\mathcal P}\right) \longrightarrow \mathrm{Der}(\mathsf{Tw}^\diamond{\mathcal P}).
$$
\subsection{Representations of $\mathsf{Tw}^\diamond \mathcal{H}\mathit{olieb}_{c,d}^\diamond$}\label{5: subsec on repr of Tw^diamond HoLoB} Let $\rho: \mathcal{H}\mathit{olieb}_{c,d}^\diamond\rightarrow {\mathcal E}nd_V$ be a homotopy involutive Lie bialgebra structure on a graded vector space $V$ and let $\Delta_\rho$ be its
equivalent incarnation as a differential operator (\ref{5: Delta_rho}) on $\odot^{\bullet}(V[-c])[[\hbar]]$. Assume a Maurer-Cartan element $\gamma$ of this $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-structure is fixed, that is, a formal power series (\ref{5: ga-MC as h-series}) satisfying the equation
(\ref{5: MC eqn Delta(e^g)}). These data $(\rho, \gamma)$ give us
\begin{itemize}
\item[(i)]
a representation of
$\mathsf{Tw}\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ in $V$ which sends the MC generators,
$$
\begin{array}{c}\resizebox{9mm}{!}{\begin{xy}
(0,6)*{...};
(-5.5,7)*{^1};
(0,0)*+{_a}*\cir{}
**\dir{-};
(5.5,7)*{^m};
(0,0)*+{_a}*\cir{}
**\dir{-};
(-2.5,7)*{^2};
(0,0)*+{_a}*\cir{}
**\dir{-};
(2.5,6)*{};
(0,0)*+{_a}*\cir{}
**\dir{-};
\end{xy}}\end{array} \longrightarrow \gamma_{a,m} \in \odot^m(V[-c])
$$
to the corresponding summands of the MC series (\ref{5: ga-MC as h-series}).
\item[(ii)] a {\em twisted}\, $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structure on $V$ which can be encoded as the following
$\gamma$-twisted differential operator
$$
\Delta_\gamma:= e^{-\frac{\gamma}{\hbar}}\circ \Delta_\rho \circ e^{\frac{\gamma}{\hbar}}=:\sum_{a\geq 0} \hbar^a\Delta_{(a)\gamma}
$$
\end{itemize}
The MC equation (\ref{5: MC eqn Delta(e^g)}) guarantees that the summands $\Delta_{(a)\gamma}$ are differential operators of order $\leq a+1$, so that $\Delta_\gamma$ does indeed induce a $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$-algebra structure on $V$. A straightforward combinatorial inspection of $\Delta_\gamma$ recovers the universal properadic formula shown in Theorem \S {\ref{5: Theorem on map from HoLoBcd to TwP}}.
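Note that, since multiplication by $\gamma$ commutes with itself, the conjugated operator admits, at least formally, the standard expansion
$$
\Delta_\gamma=\sum_{k\geq 0}\frac{1}{k!}\,
[\cdots[[\Delta_\rho,\tfrac{\gamma}{\hbar}],\tfrac{\gamma}{\hbar}]\cdots,\tfrac{\gamma}{\hbar}]
\qquad (\text{$k$ iterated commutators in the $k$-th summand}),
$$
where $\tfrac{\gamma}{\hbar}$ acts by multiplication on $\odot^{\bullet}(V[-c])[[\hbar]]$; collecting terms of fixed $\hbar$-degree recovers the operators $\Delta_{(a)\gamma}$ above.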
Thus, contrary to the twisting endofunctor $\mathsf{Tw}$ introduced in the previous section, its diamond version $\mathsf{Tw}^\diamond$ gives us essentially nothing new --- it reproduces, in terms of generic representations of the properads $\mathcal{H}\mathit{olieb}_{c,d}^\diamond$ and $\mathsf{Tw}^\diamond\mathcal{H}\mathit{olieb}_{c,d}^\diamond$, the well-known twisting construction introduced in \S 9 of \cite{CFL}. In the special class of representations of $\mathcal{L}\mathit{ieb}^\diamond_{c,d}$ on the spaces of cyclic words, the MC equation (\ref{5: MC eqn Delta(e^g)}) has been introduced and studied by S.\ Barannikov \cite{Ba1,Ba2} in the context of the deformation theory of modular operads and its applications in the theory of Kontsevich moduli spaces.
A beautiful concrete solution $\gamma$ of the MC equation (\ref{5: MC eqn Delta(e^g)}) has been constructed by F.\ N\"aef and T.\ Willwacher in \cite{NW} when studying string topology of not necessarily simply connected manifolds $M$; that MC element $\gamma$ has been obtained in \cite{NW} from the so-called {\em partition function}\, $Z_M$ on $M$ which has been constructed earlier by R.\ Campos and T.\ Willwacher in \cite{CW} when studying new graph models of configuration spaces of manifolds. Thus the $\mathcal{H}\mathit{olieb}^\diamond_{3-n}$-algebra structure constructed in \cite{NW} on the space of cyclic words $Cyc(H^\bullet(M)[1])$ of the de Rham cohomology $H^\bullet(M)$ of an $n$-dimensional closed manifold $M$ gives us an example of the action of the twisting endofunctor $\mathsf{Tw}^\diamond$ on the standard $\mathcal{L}\mathit{ieb}^\diamond_{3-n}$-algebra structure on $Cyc(H^\bullet(M)[1])$.
\def$'${$'$}
\begin{thebibliography}{10}
\bibitem[AWZ]{AWZ} A.\ Andersson, T.\ Willwacher and M.\ \v Zivkovi\' c,
{\it Oriented hairy graphs and moduli spaces of curves}, arXiv:2005.00439 (2020).
\bibitem[B1]{Ba1} S.\ Barannikov, {\em Modular operads and Batalin-Vilkovisky geometry}.
Int. Math. Res. Not. IMRN (2007), no. 19, Art. ID rnm075, 31 pp.
\bibitem[B2]{Ba2} S.\ Barannikov, {\em
Noncommutative Batalin-Vilkovisky geometry and matrix integrals}.
C. R. Math. Acad. Sci. Paris {\bf 348} (2010), 359--362.
\bibitem[CMW]{CMW} R.\ Campos, S.\ Merkulov and T.\ Willwacher {\em The Frobenius properad is Koszul}, Duke Math.\ J. {\bf 165}, No.1 (2016), 2921-2989.
\bibitem[CW]{CW} R.\ Campos and T.\ Willwacher, {\em A model for configuration spaces of points}, to appear in Algebraic \& Geometric Topology,
\bibitem[CS]{CS} M.\ Chas and D.\ Sullivan, {\em Closed string operators in topology leading to Lie bialgebras and higher string algebra}, in: { The legacy of Niels Henrik Abel}, pp.\ 771--784, Springer, Berlin, 2004.
\bibitem[CFL]{CFL} K. Cieliebak, K. Fukaya and J. Latschev, {\em Homological algebra related to surfaces with boundary}, Quantum Topology, {\bf 11}, No.4 (2020) 691-837.
\bibitem[CGP]{CGP}
M.\ Chan, S.\ Galatius and S.\ Payne, {\em Topology of moduli spaces of tropical curves with marked points}, preprint arXiv:1903.07187 (2019)
\bibitem[CL]{CL} J.\ Chuang and A.\ Lazarev, Combinatorics and formal geometry of the master equation, Lett. Math. Phys. {\bf 103}, 1 (2013)
79-112.
\bibitem[C1]{Co1} K.\ Costello, {\em The $A_\infty$ operad and the moduli space of curves}, arXiv:
math.AG/0402015 (2004)
\bibitem[C2]{Co2} K.\ Costello, {\em
A dual version of the ribbon graph decomposition
of moduli space}, Geometry \& Topology {\bf 11} (2007) 1637-1652.
\bibitem[D1]{Dr1}
V.\ Drinfeld,
{\em Hamiltonian structures on Lie groups, Lie bialgebras and the geometric
meaning of the classical Yang-Baxter equations}, Soviet Math. Dokl. {\bf 27} (1983) 68--71.
\bibitem[D2]{Dr2}
V. Drinfeld, {\em On quasitriangular quasi-Hopf algebras and a group closely connected
with $Gal(\bar{Q}/Q)$}, Leningrad Math. J. {\bf 2}, No.\ 4 (1991), 829--860.
\bibitem[DW]{DW} V.\ Dolgushev and T.\ Willwacher, {\it Operadic twisting --- with an application to Deligne's conjecture}, Journal of Pure and Applied Algebra {\bf 219} (2015) 1349-1428
\bibitem[DSV]{DSV} V.\ Dotsenko, S.\ Shadrin and B.\ Vallette, {\em The twisting procedure}, arXiv:1810.02941 (2018)
\bibitem[FTW]{FTW}
B.\ Fresse, V.\ Turchin and T.\ Willwacher, {\em The rational homotopy of mapping spaces of $E_n$ operads} Preprint,
arXiv:1703.06123, 2017.
\bibitem[G]{Ge} E.\ Getzler, {\em Two-dimensional topological gravity and equivariant cohomology}, Comm.\ Math.\ Phys. {\bf 163} (1994), no. 3,
473-489.
\bibitem[Kh]{Kh} A.\ Khoroshkin, private communication.
\bibitem[Ko1]{Ko1} M.\ Kontsevich, {\em Formality Conjecture}, In: D. Sternheimer et al. (eds.),
Deformation Theory and Symplectic
Geometry, Kluwer 1997, 139-156.
\bibitem[Ko2]{Ko2} M.\ Kontsevich, {\em
Operads and motives in deformation quantization}, Lett.\ Math.\ Phys.
{\bf 48}(1) (1999), 35-72.
\bibitem[Ko3]{Ko3} M.\ Kontsevich, unpublished.
\bibitem[LV]{LV} P.\ Lambrechts and I.\ Volic, {\em Formality of the little N-disks operad}, Memoirs of the AMS {\bf 230} (2013), 116pp.
\bibitem[LST]{LST}
A.\ Lazarev, Y.\ Sheng and R.\ Tang, {\em
Homotopy relative Rota-Baxter Lie algebras, triangular $L_\infty$ -bialgebras and higher derived brackets}, arXiv:2008.00059 (2020).
\bibitem[MaVo]{MaVo} M.\ Markl and A.A.\ Voronov,
{\em PROPped up graph cohomology}. In: ``Algebra, Arithmetic
and Geometry - Manin Festschrift" (eds. Yu.\ Tschinkel and Yu.\ Zarhin),
Vol.\ II,
Progr. Math.\ vol.\ 270, Birkhauser (2010) pp. 249-281.
\bibitem[Me1]{Me1} S.A.\ Merkulov, {\em Formality theorem for
quantizations of Lie bialgebras}, Lett.\ Math.\ Phys. {\bf 106} (2016) 169-195
\bibitem[Me2]{Me2} S.A.\ Merkulov, {\it Gravity prop and moduli spaces ${\mathcal M}_{g,n}$}, arXiv:2108.10644 (2021)
\bibitem[Me3]{Me3} S.A.\ Merkulov, {\em Prop of ribbon hypergraphs and strongly homotopy involutive Lie bialgebras}, Internat.\ Math.\ Res.\ Notices (2022) rnac023.
\bibitem[Me4]{Me4} S.A.\ Merkulov, {\em From gravity to string topology}, arXiv:2201.01122 (2022)
\bibitem[MeVa]{MV} S.\ Merkulov and B.\ Vallette,
{\em Deformation theory of representations of prop(erad)s I \& II},
J.\ f\"ur die reine und angewandte Mathematik (Qrelle) {\bf 634}, 51-106,
\& {\bf 636}, 123-174 (2009)
\bibitem[MW1]{MW1} S.A.\ Merkulov and T.\ Willwacher, {\em Props of ribbon graphs, involutive Lie bialgebras and moduli spaces of curves}, preprint arXiv:1511.07808 (2015) 51pp.
\bibitem[MW2]{MW2} S.\ Merkulov and T.\ Willwacher, {\em Deformation theory of Lie bialgebra properads}, In: Geometry and Physics: A Festschrift in honour of Nigel Hitchin, Oxford University Press 2018, pp. 219-248.
\bibitem[MW3]{MW3} S.A. Merkulov and T.\ Willwacher, {\em Classification of universal formality maps
for quantizations of Lie bialgebras}, Compositio Mathematica, {\bf 156} (2020) 2111-2148
\bibitem[NW]{NW} F.\ Naef and T.\ Willwacher, {\it String topology and configuration spaces of two points}, preprint arXiv:1911.06202 (2019).
\bibitem[Pe]{Pe} R.C.\ Penner, {\em The decorated Teichm\"uller space of punctured surfaces}, Comm.\ Math.\
Phys. {\bf 113} (1987), 299-339.
\bibitem[We]{We} C.\ Westerland, {\it Equivariant operads, string topology, and Tate cohomology}, Math.\ Ann. {\bf 340} (2008), no. 1, 97-142.
\bibitem[W1]{W} T.\ Willwacher, {\em M.\ Kontsevich's graph complex and the Grothendieck-Teichmueller Lie algebra},
Invent. Math. {\bf 200} (2015), 671-760.
\bibitem[W2]{W2} T.\ Willwacher, {\em The oriented graph complexes},
Comm. Math. Phys. 334 (2015), no. 3, 1649--1666.
\end{thebibliography}
\end{document}
\begin{document}
\footskip30pt
\title{On the triangulated category of framed motives $\text{DFr}_{-}^{eff}(k)$}
\author{Ivan Panin}
\address{St. Petersburg Branch of V. A. Steklov Mathematical Institute,
Fontanka 27, 191023 St. Petersburg, Russia}
\email{[email protected]}
\thanks{
}
\keywords{Motivic homotopy theory, framed correspondences, spectral
categories}
\subjclass[2010]{14F42, 19E08, 55U35}
\begin{abstract}
The category of framed correspondences $Fr_*(k)$ was invented by
Voevodsky \cite[Section 2]{Voe2} in order to give
another framework for $\text{SH}(k)$ more amenable
to explicit calculations.
Based on \cite{Voe2} and \cite{GP4} Garkusha and the author introduced in
\cite[Section 2]{GP5}
a triangulated category of framed bispectra $\text{SH}_{nis}^{fr}(k)$.
It is shown in \cite[Section 2]{GP5} that $\text{SH}_{nis}^{fr}(k)$ recovers the classical Morel--Voevodsky triangulated
category of bispectra $\text{SH}(k)$.
For any infinite perfect field $k$ a triangulated category of $\mathbb {F}\text{r}$-motives
$\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$ is constructed in the style of Voevodsky's
construction of the category $\text{DM}_-^{eff}(k)$.
In our approach the Voevodsky category of Nisnevich sheaves with transfers is replaced
with the category of $\mathbb {F}\text{r}$-modules.
To each smooth
$k$-variety $X$ the $\mathbb {F}\text{r}$-motive $\text{M}_{\mathbb {F}\text{r}}(X)$ is associated in the
category $\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$.
We identify the triangulated category $\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$ with the full triangulated subcategory $\text{SH}^{eff}_{-}(k)$
of the classical Morel--Voevodsky triangulated category $\text{SH}^{eff}(k)$ of effective motivic bispectra
\cite{Jar2}. Moreover, the triangulated category $\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$ is naturally
{\it symmetric monoidal}.
In particular,
$\text{M}_{\mathbb {F}\text{r}}(X)\otimes_{\mathbb {F}\text{r}} \text{M}_{\mathbb {F}\text{r}}(Y)=\text{M}_{\mathbb {F}\text{r}}(X\times Y)$.
The mentioned
identification of the triangulated categories respects the symmetric monoidal structures on both sides.
We work with the derived category $\text{D}\mathbb {F}\text{r}_-(k)$ of bounded below $\mathbb {F}\text{r}$-modules
rather than with the homotopy category $\text{SH}_{nis}(k)$ of bispectra as in \cite[Section 2]{GP5}.
\end{abstract}
\maketitle
\thispagestyle{empty} \pagestyle{plain}
\newdir{ >}{{}*!/-6pt/@{>}}
\section{Introduction}
The Voevodsky triangulated category of motives
$\text{DM}_-^{eff}(k)$~\cite{Voe1} provides a natural framework to study
motivic cohomology.
In this paper a new short approach to constructing the part $\text{SH}^{eff}_{-}(k)$
of the classical triangulated
category $\text{SH}(k)$ is presented, provided the base field is infinite and perfect.
We work in the framework of
strict $V$-spectral categories introduced in
\cite[Definition~\ref{vsp}]{GP2}.
The main new feature of our spectral category $\mathbb {F}\text{r}$
is that {\it it is symmetric monoidal}. It is also connective and Nisnevich excisive in the sense
of~\cite{GP}.
Each $\pi_0(\mathbb {F}\text{r})$-presheaf $\mathcal F$ of Abelian groups
is automatically a radditive framed presheaf of Abelian groups
in the sense of \cite{Voe2}. By \cite[Lemma 2.15]{GP3} such an $\mathcal F$
is a $\mathbb ZF_*(k)$-presheaf of Abelian groups
in the sense of \cite[2.13]{GP3}.
By \cite[Lemma 4.5]{Voe2} and \cite[Lemma 2.15]{GP3} its associated Nisnevich sheaf
$\mathcal F_{nis}$ is canonically a $\mathbb ZF_*(k)$-presheaf of Abelian groups.
If $\mathcal F$ is homotopy invariant and stable in the sense of \cite{Voe2}
(see also \cite[Def. 2.13, 2.14]{GP3}), then by
\cite[Thm. 1.1]{GP3} the framed Nisnevich sheaf
$\mathcal F_{nis}$ is strictly homotopy invariant and stable.
The main symmetric monoidal strict $V$-spectral category $\mathbb {F}\text{r}$ is constructed in
Section \ref{The_Category}. It is strict over infinite perfect fields.
Denote by $\text{D}\mathbb {F}\text{r}_-(k)$ the full triangulated subcategory of
$\text{SH}^{nis}(\mathbb {F}\text{r})$ of bounded below $\mathbb {F}\text{r}$-modules. We also denote by
$\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
the full triangulated subcategory of
$\text{D}\mathbb {F}\text{r}_-(k)$
of those $\mathbb {F}\text{r}$-modules $M$ such that each
$\mathbb ZF_*(k)$-presheaf
$\pi_i(M)|_{\mathbb ZF_*(k)}$ is
{\it homotopy invariant and stable}
in the sense of
\cite[Def. 2.13, 2.14]{GP3}.
We call $\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
{\it the triangulated category of $\mathbb {F}\text{r}$-motives}.
The category $\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$ is naturally symmetric monoidal.
For each
$X\in Sm/k$ the $\mathbb {F}\text{r}$-module
$$C_*(\mathbb {F}\text{r}(X)):=|d\mapsto\underline{\Hom}(\Delta^d,\mathbb {F}\text{r}(X))|$$
belongs to $\text{D}\mathbb {F}\text{r}_{-}^{eff}(k)$ and is called
{\it the} $\mathbb {F}\text{r}$-{\it motive of} $X$; \
$\text{M}_{\mathbb {F}\text{r}}(X)\otimes_{\mathbb {F}\text{r}} \text{M}_{\mathbb {F}\text{r}}(Y)=\text{M}_{\mathbb {F}\text{r}}(X\times Y)$.
The latter triangulated category {\it is identified} with the full triangulated subcategory $\text{SH}^{eff}_{-}(k)$
of the classical Morel--Voevodsky triangulated category $SH^{eff}(k)$ of effective motivic bispectra
({\it this is the main result of the preprint}). See Theorem \ref{VeryMain}.
The mentioned
identification respects {\it the symmetric monoidal structures} on both sides.
It can be shown that the identifying triangulated functor of Theorem \ref{VeryMain}
$$\mathbb M_{\text{SH}}: \text{D}\mathbb {F}\text{r}_-^{eff}(k)\to SH^{eff}_-(k)$$
takes the $\mathbb {F}\text{r}$-motive
$\text{M}_{\mathbb {F}\text{r}}(X)$ of $X$ to
the symmetric bispectrum
$\Sigma_{\mathbb G_m}\Sigma_{S^1}(X_+)$.
Sections 2 and 3 contain the material of
\cite[Sections 2 and 3]{GP2}
adapted to the symmetric monoidal spectral category
$\mathbb {F}\text{r}$,
which is defined in Section 4.
In Section 4 the language of triangulated categories is used
as opposed to the language of model categories. This allows us to state all constructions
and results in a very explicit form.
The main result here is Theorem \ref{neploho}.
However, it seems that this language does not allow one to prove
Theorem 6.2 (the main result of this preprint).
Nor does this language allow one
to state and prove the following result, which nevertheless holds:
there is a triangulated equivalence of the triangulated categories
$$\text{SH}^{mot}(\mathbb {F}\text{r})\to \text{SH}^{mot}(k).$$
Triangulated subcategories $\text{SH}^{nis}(\mathbb {F}\text{r})$,
$\text{D}\mathbb {F}\text{r}_-(k)$
and
$\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
are defined in Section 5.
The main result of the preprint (Theorem \ref{VeryMain})
is stated in Section 6.
Its proof is postponed to the next preprint.
Throughout the paper we denote by $Sm/k$ the category of smooth
separated schemes of finite type over the base field $k$. The base field $k$
is supposed to be infinite and perfect. The paper
\cite{DP} shows that there is no restriction on the characteristic of $k$. \\
{\bf Acknowledgements}. The author is very grateful to G.~Garkusha for his deep interest
in the topic of this preprint. I am also very grateful to my mother-in-law
K.~Shahbazian for
her very stimulating interest in the present work at all its stages.
\section{Preliminaries}
We work in the framework of spectral categories and modules over
them in the sense of Schwede--Shipley~\cite{SS}. We start with
preparations.
We follow \cite[Definition 2.1.1, Remark 2.1.5]{HSS}.
A symmetric sequence of objects in a category $\mathcal C$ is a functor $\Sigma \to \mathcal C$, and
the category of symmetric sequences of objects in $\mathcal C$ is the functor category $\mathcal C^{\Sigma}$.
The category $\Sigma$ is a skeleton of the category of finite sets and
isomorphisms. Hence every symmetric sequence has an extension, which is unique
up to isomorphism, to a functor on the category of all finite sets and isomorphisms.
We will use both viewpoints (often the second one).
Recall that symmetric spectra have two sorts of homotopy groups
which we shall refer to as {\it naive\/} and {\it true homotopy
groups\/}, respectively, following the terminology of~\cite{Sch}.
Precisely, the $k$th naive homotopy group of a symmetric spectrum
$X$ is defined as the colimit
$$\hat\pi_k(X)=\colim_n\pi_{k+n}X_n.$$
Denote by $\gamma X$ a stably fibrant model of $X$ in $Sp^\Sigma$.
The $k$-th true homotopy group of $X$ is given by
$$\pi_kX=\hat\pi_k(\gamma X),$$
the naive homotopy groups of the symmetric spectrum $\gamma X$.
Naive and true homotopy groups of $X$ can be considerably different
in general (see, e.g.,~\cite{HSS,Sch}). The true homotopy groups
detect stable equivalences, and are thus more important than the
naive homotopy groups. There is an important class of {\it
semistable\/} symmetric spectra within which
$\hat\pi_*$-isomorphisms coincide with $\pi_*$-isomorphisms. Recall
that a symmetric spectrum is semistable if some (hence any) stably
fibrant replacement is a $\hat\pi_*$-isomorphism. Suspension spectra,
Eilenberg--Mac Lane spectra, $\Omega$-spectra or $\Omega$-spectra
from some point $X_n$ on are examples of semistable symmetric
spectra (see~\cite{Sch}).
Semistability
is preserved under suspension, loop, wedges and shift.
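For instance, for the suspension spectrum $\Sigma^\infty K$ of a pointed simplicial set $K$ one has
$$\hat\pi_k(\Sigma^\infty K)=\colim_n\pi_{k+n}(K\wedge S^n)=\pi^s_k(K),$$
the stable homotopy groups of $K$; since suspension spectra are semistable, these coincide with the true homotopy groups $\pi_k(\Sigma^\infty K)$.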
A symmetric spectrum $X$ is {\it $n$-connected\/} if the true
homotopy groups of $X$ are trivial for $k\leqslant n$. The spectrum $X$
is {\it connective\/} if it is $(-1)$-connected, i.e., its true
homotopy groups vanish in negative dimensions. $X$ is {\it bounded
below\/} if $\pi_i(X)=0$ for $i\ll 0$.
\begin{defs}\label{basic}{\rm
(1) Following~\cite{SS} a {\it spectral category\/} is a category
$\mathcal O$ which is enriched over the category $Sp^\Sigma$ of symmetric
spectra (with respect to smash product, i.e., the monoidal closed
structure of \cite[2.2.10]{HSS}). In other words, for every pair of
objects $o,o'\in\mathcal O$ there is a morphism symmetric spectrum $\mathcal
O(o,o')$, for every object $o$ of $\mathcal O$ there is a map from the
sphere spectrum $S$ to $\mathcal O(o,o)$ (the ``identity element" of
$o$), and for each triple of objects there is an associative and
unital composition map of symmetric spectra $\mathcal O(o',o'')\wedge\mathcal
O(o,o') \to\mathcal O(o,o'')$. An $\mathcal O$-module $M$ is a contravariant
spectral functor to the category $Sp^\Sigma$ of symmetric spectra,
i.e., a symmetric spectrum $M(o)$ for each object of $\mathcal O$
together with coherently associative and unital maps of symmetric
spectra $M(o)\wedge\mathcal O(o',o)\to M(o')$ for pairs of objects
$o,o'\in\mathcal O$. A morphism of $\mathcal O$-modules $M\to N$ consists of
maps of symmetric spectra $M(o)\to N(o)$ strictly compatible with
the action of $\mathcal O$. The category of $\mathcal O$-modules will be
denoted by $\Mod\mathcal O$.
(2) A {\it spectral functor\/} or a {\it spectral homomorphism\/}
$F$ from a spectral category $\mathcal O$ to a spectral category $\mathcal O'$
is an assignment from $\Ob\mathcal O$ to $\Ob\mathcal O'$ together with
morphisms $\mathcal O(a,b)\to\mathcal O'(F(a),F(b))$ in $Sp^\Sigma$ which
preserve composition and identities.
(3) The {\it monoidal product\/} $\mathcal O\wedge\mathcal O'$ of two spectral
categories $\mathcal O$ and $\mathcal O'$ is the spectral category where
$\Ob(\mathcal O\wedge\mathcal O'):=\Ob\mathcal O\times\Ob\mathcal O'$ and $\mathcal
O\wedge\mathcal O'((a,x),(b,y)):= \mathcal O(a,b)\wedge\mathcal O'(x,y)$.
(3') A monoidal spectral category consists of a spectral category $\mathcal O$ equipped with
a spectral functor
$\diamond: \mathcal O \wedge \mathcal O \to \mathcal O$,
a unit $u \in Ob \mathcal O$, a $Sp^{\Sigma}$-natural associativity isomorphism
and two $Sp^{\Sigma}$-natural unit isomorphisms. Symmetric monoidal spectral
categories are defined similarly.
(4) A spectral category $\mathcal O$ is said to be {\it connective\/} if
for any objects $a,b$ of $\mathcal O$ the spectrum $\mathcal O(a,b)$ is
connective.
(5) By a ringoid over $Sm/k$ we mean a preadditive category $\mathcal R$
whose objects are those of $Sm/k$ together with a functor
$$\rho:Sm/k\to\mathcal R,$$
which is identity on objects. Every such ringoid gives rise to a
spectral category $\mathcal O_{\mathcal R}$ whose objects are those of $Sm/k$
and the morphisms spectrum $\mathcal O_{\mathcal R}(X,Y)$, $X,Y\in Sm/k$, is
the Eilenberg--Mac~Lane spectrum $H\mathcal R(X,Y)$ associated with the
abelian group $\mathcal R(X,Y)$. Given a map of schemes $\alpha$, its
image $\rho(\alpha)$ will also be denoted by $\alpha$, dropping
$\rho$ from notation.
(6) By a spectral category over $Sm/k$ we mean a spectral category
$\mathcal O$ whose objects are those of $Sm/k$ together with a spectral
functor
$$\sigma:\mathcal O_{naive}\to\mathcal O,$$
which is identity on objects. Here $\mathcal O_{naive}$ stands for the
spectral category whose morphism spectra are defined as
$$\mathcal O_{naive}(X,Y)_p=\Hom_{Sm/k}(X,Y)_+\wedge S^p$$
for all $p\geqslant 0$ and $X,Y\in Sm/k$.
It is straightforward to verify that the category of $\mathcal
O_{naive}$-modules can be regarded as the category of presheaves
$Pre^\Sigma(Sm/k)$ of symmetric spectra on $Sm/k$. This is used in
the sequel without further comment.
}\end{defs}
Let $\mathcal O$ be a spectral category and let $\Mod\mathcal O$ be the
category of $\mathcal O$-modules. Recall that the projective stable model
structure on $\Mod\mathcal O$ is defined as follows (see~\cite{SS}). The
weak equivalences are the objectwise stable weak equivalences and
fibrations are the objectwise stable projective fibrations. The
stable projective cofibrations are defined by the left lifting
property with respect to all stable projective acyclic fibrations.
Recall that the Nisnevich topology is generated by elementary
distinguished squares, i.e. pullback squares
\begin{equation}\label{squareQ}
\xymatrix{\ar@{}[dr] |{\textrm{$Q$}}U'\ar[r]\ar[d]&X'\ar[d]^\varphi\\
U\ar[r]&X}
\end{equation}
where $\varphi$ is \'etale, the lower horizontal arrow $U\to X$ is an open embedding and
$\varphi^{-1}(X\setminus U)\to(X\setminus U)$ is an isomorphism of
schemes (with the reduced structure). Let $\mathcal Q$ denote the set of
elementary distinguished squares in $Sm/k$ and let $\mathcal O$ be a
spectral category over $Sm/k$. By $\mathcal Q_{\mathcal O}$ denote the set of
squares
\begin{equation}\label{squareOQ}
\xymatrix{\ar@{}[dr] |{\textrm{$\mathcal O Q$}}\mathcal O(-,U')\ar[r]\ar[d]&\mathcal O(-,X')\ar[d]^\varphi\\
\mathcal O(-,U)\ar[r]&\mathcal O(-,X)}
\end{equation}
which are obtained from the squares in $\mathcal Q$ by taking $X\in Sm/k$
to $\mathcal O(-,X)$. The arrow $\mathcal O(-,U')\to\mathcal O(-,X')$ can be
factored as a cofibration $\mathcal O(-,U')\rightarrowtail Cyl$ followed
by a simplicial homotopy equivalence $Cyl\to\mathcal O(-,X')$. There is a
canonical morphism $A_{\mathcal O Q}:=\mathcal O(-,U)\bigsqcup_{\mathcal O(-,U')}
Cyl\to\mathcal O(-,X)$.
\begin{defs}[see~\cite{GP}]{\rm
I. The {\it Nisnevich local model structure\/} on $\Mod\mathcal O$ is the
Bousfield localization of the stable projective model structure with
respect to the family of projective cofibrations
\begin{equation*}\label{no}
\mathcal N_{\mathcal O}=\{\cyl(A_{\mathcal O Q}\to\mathcal O(-,X))\}_{\mathcal Q_{\mathcal O}}.
\end{equation*}
The homotopy category for the Nisnevich local model structure will
be denoted by $SH^{\nis}_{S^1}\mathcal O$. In particular, if $\mathcal O=\mathcal O_{naive}$
then we have the Nisnevich local model structure on
$Pre^\Sigma(Sm/k)=\Mod\mathcal O_{naive}$ and we shall write $SH^{\nis}_{S^1}(k)$
to denote $SH^{\nis}_{S^1}\mathcal O_{naive}$.
II. The {\it motivic model structure\/} on $\Mod\mathcal O$ is the
Bousfield localization of the Nisnevich local model structure with
respect to the family of projective cofibrations
\begin{equation*}\label{ao}
\mathcal A_{\mathcal O}=\{\cyl(\mathcal O(-,X\times\mathbb A^1)\to\mathcal O(-,X))\}_{X\in Sm/k}.
\end{equation*}
The homotopy category for the motivic model structure will be
denoted by $SH^{\mot}_{S^1}\mathcal O$. In particular, if $\mathcal O=\mathcal O_{naive}$
then we have the motivic model structure on
$Pre^\Sigma(Sm/k)=\Mod\mathcal O_{naive}$ and we shall write
$SH^{\mot}_{S^1}(k)$ to denote $SH^{\mot}_{S^1}\mathcal O_{naive}$.
}\end{defs}
\begin{defs}[see~\cite{GP}]\label{Nis_and_Mot_exc}{\rm
I. We say that $\mathcal O$ is {\it Nisnevich excisive\/} if for every
elementary distinguished square $Q$
\begin{equation*}
\xymatrix{\ar@{}[dr] |{\textrm{$Q$}}U'\ar[r]\ar[d]&X'\ar[d]^\varphi\\
U\ar[r]&X}
\end{equation*}
the square $\mathcal O Q$~\eqref{squareOQ} is a homotopy pushout in the
Nisnevich local model structure on $Pre^\Sigma(Sm/k)$.
II. $\mathcal O$ is {\it motivically excisive\/} if:
\begin{itemize}
\item[(A)] for every elementary distinguished square $Q$ the square $\mathcal O
Q$~\eqref{squareOQ} is a homotopy pushout in the motivic model
structure on $Pre^\Sigma(Sm/k)$ and
\item[(B)] for every $X\in Sm/k$ the natural map
$$\mathcal O(-,X\times\mathbb A^1)\to\mathcal O(-,X)$$
is a weak equivalence in the motivic model structure on
$Pre^\Sigma(Sm/k)$.
\end{itemize}
}\end{defs}
Recall that a sheaf $\mathcal F$ of abelian groups in the Nisnevich
topology on $Sm/k$ is {\it strictly $\mathbb A^1$-invariant\/} if for
any $X\in Sm/k$, the canonical morphism
$$H^*_{\nis}(X,\mathcal F)\to H^*_{\nis}(X\times\mathbb A^1,\mathcal F)$$
is an isomorphism.
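Note that in degree $*=0$ this condition recovers the ordinary $\mathbb A^1$-invariance of sections,
$$\mathcal F(X)=H^0_{\nis}(X,\mathcal F)\xrightarrow{\ \sim\ }H^0_{\nis}(X\times\mathbb A^1,\mathcal F)=\mathcal F(X\times\mathbb A^1),$$
while the requirement in all cohomological degrees is what the adjective ``strictly'' refers to.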
\begin{defs}\label{vsp}{\rm
Let $(\mathcal O, \diamond, pt)$ be a {\it symmetric monoidal} spectral category over $Sm/k$ together with the
structure spectral functor $\sigma:\mathcal O_{naive}\to\mathcal O$
and an additive functor
$\mathbb ZF_*(k)\xrightarrow{\varepsilon}\pi_0\mathcal O$. We say
that $((\mathcal O,\diamond,pt),\sigma,\varepsilon)$ is a {\it symmetric monoidal $V$-spectral category\/} if
\begin{enumerate}
\item $\mathcal O$ is connective and Nisnevich excisive;
\item the structure map $\rho: Sm/k \to \pi_0\mathcal O$ induced by $\sigma$ equals $\varepsilon \circ in$,
where $in: Sm/k \to \mathbb ZF_*(k)$ is the graph functor.
\end{enumerate}
}
\end{defs}
\begin{rem}\label{additivity} {\rm
Since $\mathcal O$ is connective and Nisnevich excisive, for each
$\mathcal O$-module $M$ and each integer $i$ the presheaf
$\pi_i(M)|_{Sm/k}$ is {\it radditive} (the restriction is taken via $\rho$).
That is, $\pi_i(M)(\emptyset)=0$ and $\pi_i(M)(X_1\sqcup X_2)=\pi_i(M)(X_1)\times \pi_i(M)(X_2)$.
In particular, the functor $\pi_i(M)|_{\mathbb ZF_*(k)}$ is additive. So, $\pi_i(M)|_{\mathbb ZF_*(k)}$ is
a {\it presheaf of Abelian groups on} $\mathbb ZF_*(k)$
in the sense of \cite[Def. 2.13]{GP3}
(the restriction is taken via $\varepsilon$).
}
\end{rem}
We note that if $(\mathcal O, \diamond, pt)$ is a symmetric monoidal spectral category over $Sm/k$,
then for every
$\mathcal O$-module $M$ and any smooth scheme $U$, the presheaf of
symmetric spectra
$$\underline{\Hom}(U,M):=M(-\times U)$$
is an $\mathcal O$-module. Moreover, $M(-\times U)$ is functorial in $U$.
\begin{lem}\label{pepe}
Every symmetric monoidal $V$-spectral category $\mathcal O$ is motivically excisive.
\end{lem}
\begin{proof}
Every symmetric monoidal $V$-spectral category is, by definition, Nisnevich excisive.
Since there is an action of smooth schemes on $\mathcal O$, the
fact that $\mathcal O$ is motivically excisive is proved similarly
to~\cite[5.8]{GP}.
\end{proof}
\begin{defs}\label{SHnis}{\rm
Let $((\mathcal O,\diamond,pt),\sigma,\varepsilon)$ be a symmetric monoidal $V$-spectral category. Since it is both Nisnevich
excisive and motivically excisive, it follows from~\cite[5.13]{GP} that the
pair of natural adjoint functors
$$\xymatrix{{\Psi_*}:Pre^\Sigma(Sm/k)\ar@<0.5ex>[r]&\Mod\mathcal O:{\Psi^*}\ar@<0.5ex>[l]}$$
induces a Quillen pair for the Nisnevich local projective
(respectively motivic) model structures on $Pre^\Sigma(Sm/k)$ and
$\Mod\mathcal O$. In particular, one has adjoint functors between
triangulated categories
\begin{equation}\label{adjoint}
{\Psi_*}: \text{SH}^{nis}(\mathcal O_{naive})\rightleftarrows \text{SH}^{nis}(\mathcal O):{\Psi^*}\quad\textrm{ and }
\quad {\Psi_*}:\text{SH}^{mot}(\mathcal O_{naive})\rightleftarrows \text{SH}^{mot}(\mathcal O):{\Psi^*}.
\end{equation}
}
\end{defs}
\section{The triangulated category $D\mathcal O_-^{eff}(k)$}\label{dominus}
In this section we work with a symmetric monoidal $V$-spectral category
$((\mathcal O,\diamond,pt),\sigma,\varepsilon)$ in the sense of Definition \ref{vsp}.
We work in this section with the category $\text{SH}^{nis}(\mathcal O)$ as in Definition \ref{SHnis}.
Let $M$ be an $\mathcal O$-module. By Remark \ref{additivity} its $\pi_0\mathcal O$-presheaves $\pi_i(M)$
restricted via the $\varepsilon$ to the additive category $\mathbb ZF_*(k)$
are $\mathbb ZF_*(k)$-{\it presheaves of Abelian groups} in the sense of \cite[Def. 2.13]{GP3}.
Thus, by \cite[Lemma 4.5]{Voe2} and \cite[Cor. 2.17]{GP3} the associated Nisnevich sheaf
$\pi^{nis}_i(M)$ is canonically a $\mathbb ZF_*(k)$-presheaf of Abelian groups
(possibly it is not a $\pi_0\mathcal O$-presheaf).
We shall often work with simplicial $\mathcal O$-modules
$M[\bullet]$. The {\it realization\/} of $M[\bullet]$ is the $\mathcal
O$-module $|M|$ defined as the coend
$$|M|=\Delta[\bullet]_+\wedge_{\Delta} M[\bullet]$$
of the functor $\Delta[\bullet]_+\wedge
M[\bullet]:\Delta\times\Delta^{{\textrm{\rm op}}}\to\Mod\mathcal O$. Here $\Delta[n]$
is the standard simplicial $n$-simplex.
Recall that the simplicial ring $k[\Delta]$ is defined as
$$k[\Delta]_n=k[x_0,\ldots,x_n]/(x_0+\cdots+x_n-1).$$
By $\Delta^{\cdot}$ we denote the cosimplicial affine scheme
$\spec(k[\Delta])$.
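Note that each algebraic simplex is simply an affine space,
$$\Delta^n=\spec\big(k[x_0,\ldots,x_n]/(x_0+\cdots+x_n-1)\big)\cong\mathbb A^n,$$
with the cosimplicial structure maps induced by the coface maps which set one coordinate equal to zero and the codegeneracy maps which add two adjacent coordinates.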
Given an $\mathcal O$-module $M$, we set
$$C_*(M):=|\underline{\Hom}(\Delta^{\cdot},M)|.$$
Note that $C_*(M)$ is an $\mathcal O$-module and is functorial in $M$.
{\bf Our} $C_*(M)$ {\bf is different from the} $C_*(M)$ {\bf used in} \cite[Sect. 3]{GP2}.
\begin{defs}[Definition 3.3 in \cite{GP2}]
\label{boundedOmod}{\rm
The $\mathcal O$-motive $M_{\mathcal O}(X)$ of a smooth algebraic variety
$X\in Sm/k$ is the $\mathcal O$-module $C_*(\mathcal O(-,X))$. We say that an
$\mathcal O$-module $M$ is {\it bounded below\/} if for $i\ll 0$ the
Nisnevich sheaf $\pi_i^{\nis}(M)$ is zero. $M$ is {\it
$n$-connected\/} if $\pi_i^{\nis}(M)$ are trivial for $i\leqslant n$. $M$
is {\it connective\/} if it is $(-1)$-connected, i.e.,
$\pi_i^{\nis}(M)$ vanish in negative dimensions.
}
\end{defs}
\begin{defs}[\cite{GP2}]\label{DOminus}{\rm
Denote by $\Mod_{-}\mathcal O$ the full subcategory
of bounded below $\mathcal O$-modules. \\
Denote by $D\mathcal O_-(k)$ the full triangulated subcategory of
$SH^{nis}(\mathcal O)$ of bounded below $\mathcal O$-modules. We also denote by
$D\mathcal O_-^{eff}(k)$ the full triangulated subcategory of $D\mathcal
O_-(k)$ of those $\mathcal O$-modules $M$ such that each
$\pi_0\mathcal O$-presheaf
$\pi_i(M)$ regarded via the functor $\varepsilon$ as a $\mathbb ZF_*(k)$-presheaf of Abelian groups
is {\it homotopy invariant and stable} in the sense of
\cite[Def. 2.13, 2.14]{GP3}.
The category $D\mathcal O_-^{eff}(k)$ is an
analog of Voevodsky's triangulated category
$DM_-^{eff}(k)$.
}
\end{defs}
\begin{lem}[Corollary 3.4 in \cite{GP2}]
\label{porto}{\rm
If an $\mathcal O$-module $M$ is bounded below (respectively
$n$-connected) then so is $C_*(M)$. In particular, the
$\mathcal O$-motive $M_{\mathcal O}(X)$ of any smooth algebraic variety $X\in Sm/k$
is connective.
}
\end{lem}
\begin{rem}\label{1st_endo_funct}{\rm
By Lemma \ref{porto} the assignment $M\mapsto C_*(M)$ is a functor $C_*: \Mod_{-}\mathcal O\to \Mod_{-}\mathcal O$.
}
\end{rem}
\begin{lem}[Compare with Lemma 3.5 in \cite{GP2}]
\label{spain}{\rm
The functor $C_*: \Mod_{-}\mathcal O\to \Mod_{-}\mathcal O$ respects local equivalences and
induces a triangulated endofunctor
$$C_*: D\mathcal O_-(k)\to D\mathcal O_-(k)$$
}
\end{lem}
\begin{thm}[Compare with Theorem 3.5 in \cite{GP2}]
\label{neploho}{\rm
Let $(\mathcal O,\diamond, pt)$ be a symmetric monoidal $V$-spectral category.
Consider the full triangulated subcategory $\mathcal T$ of $SH^{nis}(\mathcal O)$ generated by the compact objects
$\cone(\mathcal O(-,X\times\mathbb A^1)\to\mathcal O(-,X)),\ X\in Sm/k.$
Then the triangulated endofunctor
$$C_*:D\mathcal O_-(k)\to D\mathcal O_-(k)$$
as in Lemma \ref{spain} lands in $D\mathcal O_-^{eff}(k)$. The kernel of $C_*$ is $\mathcal T_-:=\mathcal
T\cap D\mathcal O_-(k)$. Moreover, $C_*$ is left adjoint to the inclusion
functor
$$i:D\mathcal O_-^{eff}(k)\to D\mathcal O_-(k)$$
and $D\mathcal O_-^{eff}(k)$ is triangle equivalent to the quotient
category $D\mathcal O_-(k)/\mathcal T_-$ \ .
}
\end{thm}
\section{The main symmetric monoidal strict $V$-spectral category}\label{The_Category}
We construct in this section our
main symmetric monoidal strict $V$-spectral category $(\mathbb {F}\text{r},\diamond, pt)$.
First construct a spectral category $\mathbb {F}\text{r}$.
Its objects are those of $Sm/k$.
To each pair $Y,X\in Sm/k$ we assign a symmetric spectrum
$\mathbb {F}\text{r}(Y,X)$. The latter is described as follows. Its terms are the functors
$A \mapsto \mathbb {F}\text{r}(Y,X)_A=Fr_A(Y,X\otimes S^A)$ (here $A$ runs over the category of finite sets
and their isomorphisms). The structure maps are defined by the obvious compositions
$$\varepsilon_{A,B}: Fr_A(Y,X\otimes S^A)\wedge S^B\to Fr_{A}(Y,X\otimes S^{A\sqcup B})\hookrightarrow Fr_{A\sqcup B}(Y,X\otimes S^{A\sqcup B}).$$
For each triple $Z,Y,X\in Sm/k$ there is an obvious symmetric spectra morphism
$$\circ_{Z,Y,X}: \mathbb {F}\text{r}(Y,X)\wedge \mathbb {F}\text{r}(Z,Y)\to \mathbb {F}\text{r}(Z,X) \ \ \text{(the composition law)}.$$
It is uniquely determined by simplicial set morphisms
$\mathbb {F}\text{r}(Y,X)_A\wedge \mathbb {F}\text{r}(Z,Y)_B\to \mathbb {F}\text{r}(Z,X)_{A\sqcup B}$
which on $n$-simplices are given by the set maps
$$Fr_A(Y,X\otimes (S^A)_n)\wedge Fr_B(Z,Y\otimes (S^B)_n)\to Fr_A(Y\otimes (S^B)_n,X\otimes (S^A)_n\otimes (S^B)_n)\wedge Fr_B(Z,Y\otimes (S^B)_n)\to$$
$$\to Fr_{A\sqcup B}(Z,X\otimes (S^{A\sqcup B})_n).$$
In detail, the set map is given by
$$(\alpha,\beta)\mapsto (\alpha\otimes id_{(S^B)_n},\beta)\mapsto (\alpha\otimes id_{(S^B)_n})\circ \beta.$$
For each $X\in Sm/k$ the identity morphism $id_X$ gives rise to the symmetric spectra morphism
$u_X: \mathbb S\to \mathbb {F}\text{r}(X,X)$.
We have thus formed a spectral category $\mathbb {F}\text{r}$ and
a spectral functor
$\sigma: \mathcal O_{naive}\to \mathbb {F}\text{r}$,
which is identity on objects. The pair
$(\mathbb {F}\text{r},\sigma)$
is a spectral category over $Sm/k$
in the sense of Definition \ref{basic}(6).
Equip now the spectral category $\mathbb {F}\text{r}$ with a spectral functor
$\diamond: \mathbb {F}\text{r}\wedge \mathbb {F}\text{r}\to \mathbb {F}\text{r}$
(taking $(X_1,X_2)$ to $X_1\times X_2$),
a unit $u\in \mathbb {F}\text{r}$,
a $Sp^{\Sigma}$-natural associativity isomorphism $a$
and two $Sp^{\Sigma}$-natural unit isomorphisms $u_l$, $u_r$
and a twist isomorphism
$tw: \mathbb {F}\text{r}\wedge \mathbb {F}\text{r}\to \mathbb {F}\text{r}\wedge \mathbb {F}\text{r}$
and a spectral functor isomorphism
$\Phi: \diamond\to \diamond\circ tw$
such that the data
$$(\mathbb {F}\text{r}, \diamond, tw, \Phi, u, a, u_l, u_r)$$
form a symmetric monoidal spectral category.
First construct the spectral functor
$\diamond$. On objects it takes an object $(X_1,X_2)\in Sm/k\times Sm/k$ to $X_1\times X_2 \in Sm/k$.
To construct $\diamond$ on morphisms it is sufficient to construct certain symmetric spectra morphisms
$$\diamond_{(V,Y),(U,X)}: \mathbb {F}\text{r}(V,U)\wedge \mathbb {F}\text{r}(Y,X)\xrightarrow{} \mathbb {F}\text{r}(V\times Y,U\times X)$$
and check that they satisfy the expected properties. To construct the morphism $\diamond_{(V,Y),(U,X)}$ it is sufficient to construct
simplicial set morphisms
$$\boxtimes_{(V,Y),(U,X),A,B}: \mathbb {F}\text{r}(V,U)_A\wedge \mathbb {F}\text{r}(Y,X)_B \xrightarrow{} \mathbb {F}\text{r}(V\times Y,U\times X)_{A\sqcup B}$$
subject to the expected properties.
The latter are given on $n$-simplices by the exterior product maps
$$\boxtimes_{(V,Y),(U,X),A,B,\ n}: Fr_A(V,U\otimes (S^A)_n)\wedge Fr_B(Y,X\otimes (S^B)_n) \to Fr_{A\sqcup B}(V\times Y,(U\times X) \otimes (S^{A\sqcup B})_n).$$
We have thus constructed the spectral functor
$\diamond$.
Second we take the point $pt:=Spec (k)$ as the unit of the spectral category $\mathbb {F}\text{r}$
and we skip constructions of desired $a$, $u_l$, $u_r$ (they are obvious).
Third we construct the twist isomorphism of spectral categories
$tw: \mathbb {F}\text{r}\wedge \mathbb {F}\text{r}\to \mathbb {F}\text{r}\wedge \mathbb {F}\text{r}$.
On objects it takes $(X_1,X_2)$ to $(X_2,X_1)$. On morphisms it is determined by certain symmetric spectra isomorphisms
$$tw_{(V,Y),(U,X)}: \mathbb {F}\text{r}(V,U)\wedge \mathbb {F}\text{r}(Y,X)\xrightarrow{} \mathbb {F}\text{r}(Y,X)\wedge \mathbb {F}\text{r}(V,U).$$
In turn the $tw_{(V,Y),(U,X)}$ is determined by the family of simplicial set isomorphisms (switching factors)
$$tw^C_{A,B}: \mathbb {F}\text{r}(V,U)_A\wedge \mathbb {F}\text{r}(Y,X)_B \to \mathbb {F}\text{r}(Y,X)_B\wedge \mathbb {F}\text{r}(V,U)_A.$$
Here for each finite set $C$ the ordered pairs $(A,B)$ run over all subsets $A\subseteq C$, $B\subseteq C$ such that
$A\cup B=C$ and $A\cap B=\emptyset$.
Finally we construct the desired spectral functor isomorphism
$\Phi: \diamond\to \diamond\circ tw$. It is the assignment
$(V,Y)\mapsto \Phi(V,Y)=[\tau_{V,Y}: V\times Y \to Y\times V]$. Here the factor-switching
isomorphism $\tau_{V,Y}$ is regarded as a point in $Fr_0(V\times Y, Y\times V)$.
So, it is regarded as a symmetric spectra morphism
$\mathbb S\xrightarrow{\Phi(V,Y)} \mathbb {F}\text{r}(V\times Y, Y\times V)$.
It is easy to check that $\Phi$ is indeed a spectral functor isomorphism.
We leave it to the reader to check that the data
$(\mathbb {F}\text{r}, \diamond, tw, \Phi, u, a, u_l, u_r)$
form a symmetric monoidal spectral category.
\section{Properties of the main spectral category}
Let $((\mathbb {F}\text{r},\diamond,pt), \sigma:\mathcal O_{naive}\to \mathbb {F}\text{r})$ be the {\it symmetric monoidal} spectral category over $Sm/k$
as in Section \ref{The_Category}.
\begin{lem}\label{epsilon}{\rm
There is an additive functor $\mathbb ZF_*(k)\xrightarrow{\varepsilon}\pi_0(\mathbb {F}\text{r})$ such that
the data $((\mathbb {F}\text{r}, \diamond, pt),\sigma,\varepsilon)$
is a symmetric monoidal $V$-spectral category in the sense of Definition \ref{vsp}.
}
\end{lem}
Applying now Lemma \ref{pepe} we get the following
\begin{cor}\label{Nis_and_Mot_exc_true}{\rm
The symmetric monoidal spectral category $(\mathbb {F}\text{r}, \diamond, pt, tw, \Phi, u, a, u_l, u_r)$
as in Section \ref{The_Category} is Nisnevich excisive and motivically excisive in the sense of
\cite{GP}
(see Definition \ref{Nis_and_Mot_exc}).
}
\end{cor}
The following definition is just Definition \ref{boundedOmod} adapted to the category $\Mod \mathbb {F}\text{r}$.
\begin{defs}\label{boundedFrmod}{\rm
The $\mathbb {F}\text{r}$-motive $M_{\mathbb {F}\text{r}}(X)$ of a smooth algebraic variety
$X\in Sm/k$ is the $\mathbb {F}\text{r}$-module $C_*(\mathbb {F}\text{r}(-,X))$. We say that an
$\mathbb {F}\text{r}$-module $M$ is {\it bounded below\/} if for $i\ll 0$ the
Nisnevich sheaf $\pi_i^{\nis}(M)$ is zero. $M$ is {\it
$n$-connected\/} if $\pi_i^{\nis}(M)$ are trivial for $i\leqslant n$. $M$
is {\it connective\/} if it is $(-1)$-connected, i.e.,
$\pi_i^{\nis}(M)$ vanish in negative dimensions.
}\end{defs}
\begin{defs}{\rm
Denote by $\Mod \mathbb {F}\text{r}_{-}$ the full subcategory
of bounded below $\mathbb {F}\text{r}$-modules. \\
Denote by $\text{D}\mathbb {F}\text{r}_-(k)$ the full triangulated subcategory of
$SH^{nis}(\mathbb {F}\text{r})$ of bounded below $\mathbb {F}\text{r}$-modules. We also denote by
$\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
the full triangulated subcategory of
$\text{D}\mathbb {F}\text{r}_-(k)$
of those $\mathbb {F}\text{r}$-modules $M$ such that each
$\mathbb ZF_*(k)$-presheaf
$\pi_i(M)|_{\mathbb ZF_*(k)}$ is
{\it homotopy invariant and stable}
in the sense of
\cite[Def. 2.13, 2.14]{GP3}.
}
\end{defs}
In a certain sense
$\text{D}\mathbb {F}\text{r}_-^{eff}(k)$ is an
analog of Voevodsky's triangulated category
$DM_-^{eff}(k)$
\cite{Voe1}.
\begin{defs}\label{prekrasno}{\rm
The triangulated category $\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
is called
{\it the triangulated category of effective $\mathbb {F}\text{r}$-motives.
}
}
\end{defs}
One can prove the following
\begin{thm}\label{DFr_eff_and_SH_eff}{\rm
There is a natural triangulated equivalence between the triangulated categories
$\text{D}\mathbb {F}\text{r}_-^{eff}(k)$
and the Voevodsky category
$SH^{eff}_-(k)$.
}
\end{thm}
A sketch of a proof of this result will be presented in the next section.
\section{Triangulated equivalences $SH^{eff}(k)\rightleftarrows \text{D}\mathbb {F}\text{r}_-^{eff}(k)$ }
We construct in this section triangulated equivalences (quasi-inverse to each other)
$$\mathbb M^{\mathbb {F}\text{r}}_{\text{eff}}: SH^{eff}_-(k)\rightleftarrows \text{D}\mathbb {F}\text{r}_-^{eff}(k): \mathbb M_{\text{SH}}^{\text{eff}}.$$
To construct these functors we need preliminaries.
Let $\mathbb G^{\wedge 1}_m\in \Delta^{op}(Fr_0(k))$ be as in \cite[Notation 8.1]{GP4}.
Let $\mathbb G_m^{\wedge n}$ be the $n$th monoidal power of $\mathbb G_m^{\wedge 1}$, as in \cite[Notation 8.1]{GP4}.
The category $Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)$ of presheaves of symmetric bispectra
can be regarded as the category of
symmetric $\mathbb G^{\wedge 1}_m$-spectra in the category
$\Mod\mathcal O_{naive}$
of presheaves of symmetric spectra
(see Definition \ref{basic}).
Similarly we can (and will) consider a category of symmetric $\mathbb G^{\wedge 1}_m$-spectra in the category
$\Mod\mathbb {F}\text{r}$.
It follows from~\cite[5.13]{GP} that there is
a pair of natural adjoint functors
$$\xymatrix{{\Phi_*}: Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)=Sp_{\mathbb G^{\wedge 1}_m}(\Mod\mathcal O_{naive})\ar@<0.5ex>[r]& Sp_{\mathbb G^{\wedge 1}_m}(\Mod\mathbb {F}\text{r}):{\Phi^*}\ar@<0.5ex>[l]}$$
There is another pair of adjoint functors
$$\xymatrix{{\Sigma^{\infty}_{\mathbb {F}\text{r}(\mathbb G^{\wedge 1}_m)}}: \Mod\mathbb {F}\text{r} \ar@<0.5ex>[r]& Sp_{\mathbb G^{\wedge 1}_m}(\Mod\mathbb {F}\text{r}):
\Omega^{\infty}_{\mathbb {F}\text{r}(\mathbb G^{\wedge 1}_m)}\ar@<0.5ex>[l]}$$
Here $\mathbb {F}\text{r}(\mathbb G^{\wedge 1}_m)$ stands for the $\mathbb {F}\text{r}$-module represented by the simplicial scheme $\mathbb G^{\wedge 1}_m$.
For each $\mathbb {F}\text{r}$-module $M$ consider the $\mathbb {F}\text{r}$-module
$C_*(M):=|\underline{\Hom}(\Delta^{\cdot},M)|$ as in Section \ref{dominus}.
By Lemma \ref{spain} and Theorem \ref{neploho}
the endo-functor $C_*: \Mod\mathbb {F}\text{r}_{-}\to \Mod\mathbb {F}\text{r}_{-}$
induces a triangulated functor
$C_*: \text{D}\mathbb {F}\text{r}_-(k) \to \text{D}\mathbb {F}\text{r}_-^{eff}(k)$.
By Theorem \ref{neploho}
the pair of triangulated functors
\begin{equation}\label{Sigma_Omega_2}
C_*: \text{D}\mathbb {F}\text{r}_-(k) \rightleftarrows \text{D}\mathbb {F}\text{r}_-^{eff}(k):i
\end{equation}
is a pair of adjoint triangulated functors (here $i$ is the inclusion functor).\\
Let $\mathbb {F}\text{r}(n)=\text{M}_{\mathbb {F}\text{r}}(\mathbb G^{\wedge n}_m)$ be the $\mathbb {F}\text{r}$-motive of $\mathbb G^{\wedge n}_m$.
For each cofibrant object $E$ in the projective model structure on $\Mod \mathbb {F}\text{r}$ put
$E(n)=E\otimes^{\mathbb {F}\text{r}}\mathbb {F}\text{r}(n)$. It is a cofibrant object
in the projective model structure on $\Mod \mathbb {F}\text{r}$. Clearly,
$\Sigma^{\infty}_{\mathbb {F}\text{r}(1)}(E):=(E, E(1), E(2), \ldots )$ is naturally an object of $Sp_{\mathbb G^{\wedge 1}_m}(\Mod\mathbb {F}\text{r})$.
\begin{defs}{\rm
Let $E\mapsto E^c$ be the cofibrant replacement in the projective model structure on $Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)$. Put\\
$\mathbb M^{\mathbb {F}\text{r}}(E)=(C_* \circ \Omega^{\infty}_{\mathbb {F}\text{r}(1)} \circ \Phi_*)(E^c)=
\Omega^{\infty}_{\mathbb G^{\wedge 1}_m}C_*\mathbb F\text{r}(E^c) \in \Mod \mathbb {F}\text{r}$. \\
Let $\mathcal E\mapsto \mathcal E^c$ be the cofibrant replacement in the projective model structure on $\Mod \mathbb {F}\text{r}$. Put\\
$\mathbb M_{\text{SH}}(\mathcal E)=\Phi^*(\Sigma^{\infty}_{\mathbb {F}\text{r}(1)}(\mathcal E^c))\in Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)$.
Thus,\\
$\mathbb M_{\text{SH}}(\mathcal E)=\text{the object} \ \Sigma^{\infty}_{\mathbb {F}\text{r}(1)}(\mathcal E^c) \ \text{of} \ Sp_{\mathbb G^{\wedge 1}_m}(\Mod\mathbb {F}\text{r}) \
\text{regarded as an object in} \ Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)$.
}
\end{defs}
A proof of the following result is postponed to the next preprint.
It can be given in the spirit of the proofs as in \cite[Section 2]{GP5}.
\begin{thm}\label{VeryMain}{\rm
$\bullet$ The functor $\mathbb M_{\text{SH}}$ induces a triangulated equivalence \\
$\mathbb M^{eff}_{\text{SH}}: \text{D}\mathbb {F}\text{r}_-^{eff}(k)\to SH^{eff}_-(k)$ \\
between these triangulated categories; \\
$\bullet$ A triangulated functor $\mathbb M^{\mathbb {F}\text{r}}_{eff} : SH^{eff}_-(k)\to \text{D}\mathbb {F}\text{r}_-^{eff}(k)$ \\
quasi-inverse to
$\mathbb M^{eff}_{\text{SH}}$
is induced by the functor \\
$\mathbb M^{\mathbb {F}\text{r}}: Pre^{\Sigma}_{S^1, \mathbb G^{\wedge 1}_m}(Sm/k)\to \Mod \mathbb {F}\text{r}$.
}
\end{thm}
\end{document}
\begin{document}
\title{A randomness test for functional panels}
\author[1]{Piotr Kokoszka \thanks{[email protected]}}
\author[2]{Matthew Reimherr \thanks{\textit{Corresponding author}, [email protected]}}
\author[3]{Nikolas W\"olfing \thanks{[email protected], funding provided by the Helmholtz Association through the Helmholtz Alliance Energy-Trans is gratefully acknowledged.}}
\affil[1]{Colorado State University}
\affil[2]{Pennsylvania State University}
\affil[3]{Centre for European Economic Research, ZEW Mannheim}
\date{}
\maketitle
\begin{abstract}
Functional panels are collections of functional time series, and
arise often in the study of high frequency multivariate data. We
develop a portmanteau-style test to determine if the cross-sections
of such a panel are independent and identically distributed. Our
framework allows the number of functional projections and/or the
number of time series to grow with the sample size. A large sample
justification is based on a new central limit theorem for random
vectors of increasing dimension. With a proper normalization, the
limit is standard normal, potentially making this result easily
applicable in other FDA contexts in which projections on a subspace
of increasing dimension are used. The test is shown to have correct
size and excellent power using simulated panels whose random
structure mimics the realistic dependence encountered in real panel
data. It is expected to find application in climatology, finance,
ecology, economics, and geophysics. We apply it to Southern Pacific
sea surface temperature data, precipitation patterns in the
South--West United States, and temperature curves in Germany.
\end{abstract}
\section{Introduction} \label{s:int}
We define a {\em functional panel} as a
stochastic process of the form
\begin{equation} \label{e:panel}
\bX_n(t) = [ X_{1, n}(t), \ldots, X_{I, n}(t)]^\top, \ \ \ \
1 \le n \le N,
\end{equation}
where each $X_{i, n}$ is a function of time $t$. The dimension $I$ can increase
with the series length $N$, with examples discussed below.
For the applications that motivate the present research, it is enough
to think of the $X_{i, n}$ as curves defined on the same time
interval, but in principle, functions on more general domains, e.g.,
volumes or surfaces, can be considered. The discrete time index $n$
refers to a unit like a day, week or year. The index $t$ is the
continuous time argument of the function $X_{i, n}$. The index $i$
refers to the $i^\text{th}$ time series in the panel. This paper develops a
test of the null hypothesis
$H_0$: the random elements $\bX_n, \ 1 \le n \le N,$ are independent
and identically distributed.
\noindent Our test is designed to detect serial dependence, and we assume stationarity across $n$ even under the alternative.
To illustrate the functional panel concept, Figure~\ref{f:ninoregions}
shows four curves, $I=4$, for a fixed $n$. The index $n$ refers to
years, and the four curves describe the sea surface temperature in
four regions used to measure the El Ni\~{n}o climatic phenomenon.
Figure~\ref{f:SantaCruz} shows another example, now with $i$ fixed.
The data point $X_{i,n}(t)$ is the log--precipitation at location $i$
on day $t$ of year $n$. The construction of this series is explained
in detail in Section~\ref{s:fsp}. Data structures of this type are
very common in climate studies; $X_{i, n}(t)$ can be total
precipitation or maximum temperature on day $t, \ 1 \le t \le 365,$ of
year $n$ at location $i$ in some region. In such climate
applications, $I$ is comparable to $N$ because records often start at
the end of the 19$^{\text{th}}$ or towards the middle of the
20$^{\text{th}}$ century; thus, they are about 60 to 120 years long
($N\approx 60\text{ to }120$), and there are several dozen measurement
stations in a region ($I\approx 40 \text{ to }120$). (The United
States Historical Climatology Network -- USHCN -- contains weather
data collected at 1,218 stations across the 48 contiguous states,
starting from ca. 1900.)
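As a purely illustrative sketch (not part of the paper; the file and column names below are hypothetical), such a climate panel can be arranged in memory as an array indexed by station $i$, year $n$ and day $t$:
\begin{verbatim}
# Illustrative only: arrange daily station records into a panel X[i, n, t]
# (station i, year n, day t).  File and column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("daily_precipitation.csv")  # columns: station, year, day, value
stations = sorted(df["station"].unique())
years = sorted(df["year"].unique())
I, N, T = len(stations), len(years), 365

X = np.full((I, N, T), np.nan)
for row in df.itertuples(index=False):
    if row.day <= T:                         # drop Feb 29 to keep a common grid
        i = stations.index(row.station)
        n = years.index(row.year)
        X[i, n, row.day - 1] = row.value
\end{verbatim}
Smoothing each row of daily values would then produce curves of the type shown in Figure~\ref{f:SantaCruz}.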
\begin{figure}
\caption{Sea Surface Temperature curves of the four El Ni\~{n}o regions in 2012.}
\label{f:ninoregions}
\end{figure}
\begin{figure}
\caption{Smoothed log--precipitation, Santa Cruz, California, 1982 to 2013.}
\label{f:SantaCruz}
\end{figure}
Climate data do not exhaust possible applications. Intraday
financial data typically come in panels. For example, $X_{i,n}(t)$
can be the exchange rate (against the US dollar) of currency $i,\ 1
\le i \le I,$ at minute $t$ of the $n^\text{th}$ trading day. Panels of
exchange rates contain information on the intraday strength of the US
dollar. Corporate bond yield curves are
large panels because a bond portfolio includes hundreds of companies,
$I\sim 10^3$; in economic studies, government bond yields curves form
small panels because only a few countries are considered to assess
risk in a region, see, e.g., \citet{hardle:majer:2015}.
At the intersection of climate and financial panels,
\citet{hardle:osipienko:2012} use a functional panel framework
in which $i$ refers to a spatial location, and the interest lies in
pricing a financial derivative product whose value depends on the
weather at location $i$. Modeling electricity data involves functional
panels indexed by regions or power companies with the daily index $n$,
see \citet{liebl:2013} for an overview. Daily pollution (particulate,
oxide or ozone) curves at several locations within a city form a functional
panel of moderate size.
In these examples, the dependence between the $X_{i,n}, 1 \le i \le
I$, for fixed $n$, is strong, and, generally, the temporal dependence,
indexed by $n$, cannot be neglected. In specific applications, this
dependence is modeled by deterministic trends or periodic functions
(climate data) or by common factors (financial data). To validate a
model, it is usual to verify that residual curves computed in some
manner form a random sample.
(See, e.g., \citet{kowal:matteson:ruppert:2014} for a model applied
to functional panels of government yield curves and neurological
measurements with an explicit residual iid assumption.)
It is thus important to develop a test of
randomness, i.e., to test the null hypothesis $H_0$ stated above. Such a
test could be viewed as analogous to tests of randomness which are
crucial in time series analysis, see, e.g., Section 1.6 of
\citet{brockwell:davis:2002}. They can be applied to original or
transformed data, or to model residuals. The purpose of this paper is
to develop a suitable test for functional panels.
Before discussing our approach, we provide
some historical background. Our methodology builds on the well-established
paradigm of testing for randomness in time series which
can be traced back to the work of \citet{box:pierce:1970}, which
was followed by a number of influential contributions including
\citet{chitturi:1976,hosking:1980,ljung:box:1978} and \citet{mcleod:1978}.
These tests use as a starting point
the asymptotic distribution of the sample autocorrelations
of a white noise: the $\hat\rho_h$ are approximately independent
normal random variables with mean zero and variance $1/N$, where
$N$ is the sample size.
Therefore, $N\sum_{h=1}^H {\hat\rho_h}^2$ is approximately
chi--square with $H$ degrees of freedom.
This research is
now reported in textbook expositions including
\citet{brockwell:davis:1991}, \citet{li:2004}, and
\citet{lutkepohl:2005}. More recent contributions include
\citet{fisher:gallagher:2012} and \citet{pena:rodriguez:2002}.
For a single functional time series, a randomness test was derived by
\citet{gabrys:kokoszka:2007} and elaborated on by
\citet{horvath:huskova:rice:2013} and \citet{jiofack:nkiet:2010}. In the context of {\em scalar} panel
data, the only work we are aware of is \citet{fu:etal:2002} who
define residual autocorrelations in the autoregressive panel model of
\citet{hjellvik:tjostheim:1999} as
\[
\hat r_h = \frac{\sum_{i=1}^I \sum_{n=1}^{N-h} \hat\eg_{i, n+h}
\hat\eg_{i,n}}
{\sum_{i=1}^I \sum_{n=1}^{N-h}\hat\eg_{i,n}^2},
\]
where the $\hat\eg_{i,n}$ are appropriately defined residuals. In
their asymptotic setting, the number of temporal points, $N$, is
fixed, and the number of time series, $I$, increases to infinity. They
show that for any fixed $H$, the vector $[\hat r_1, \ldots,
\hat r_H]^\top$ is asymptotically normal with the asymptotic covariance
matrix that can be estimated. By constructing a suitable quadratic
form, they derive a portmanteau test statistic whose asymptotic
distribution is $\chi^2_H$. There is at present no randomness test
for functional panels, and it is our objective to derive a practically
useable test which is supported by asymptotic arguments.
\citet{hsiao:2003} provides an excellent account of the methodology
for scalar panel data.
We reduce the dimension of the functions $X_{i,n}$ by using
projections on functional principal components. Denote the number of
such projections for the $i^\text{th}$ series in the panel by $p(i)$. The
total number of scalar time series we must consider is thus
$p=\sum_{i=1}^I p(i)$. For climate applications discussed above, $p$
can approach several hundred. It is thus natural to consider
asymptotics with $p$ increasing to infinity with $N$. Our theory
applies to cases of fixed $p(i)$ and increasing $I$, fixed $I$ and
increasing $p(i)$, or both increasing with $N$. In this framework, we
show that it is possible to construct a test statistic that is
asymptotically standard normal. The work of \citet{fu:etal:2002}
can be viewed as considering $p(i)=1$, $N$ fixed, and $I\to\infty$.
Our setting is thus quite different, and requires a new asymptotic
framework. Despite some theoretical complexity, our approach leads to
a test whose asymptotic null distribution is standard normal, and
which can be algorithmically implemented. It is therefore hoped that
our work will find application in the analysis of functional panels of
the type specified above. In particular, it could motivate the
development of suitable change point tests that target more
specific alternatives; \citet{aston:kirch:2012AAS} and \citet{zhang:shao:hayhoe:wuebles:2011} consider the case of $I=1$.
Since the goal of our methodology is to check the independence
assumption, our tests are based on the usual functional principal
component scores; functional principal components form the optimal
basis under the null hypothesis. They have an established place in FDA
research with readily available {\tt R} and {\tt matlab}
implementations. In principle, other basis systems could be used,
especially those custom--developed for time series of functions, see, e.g.,
\citet{hormann:2015} and \citet{panaretos:2013}, or even those
going beyond linear dimension reduction, see \citet{li:song:2016}.
In each of these cases, our general approach could be applied to the
resulting scores, but new asymptotic justifications and numerical
implementations would have to be developed.
The remainder of the paper is organized as follows. In
Section~\ref{ss:results}, we formalize the asymptotic framework,
derive the test statistic and establish its asymptotic normality.
Section~\ref{ss:fsi} describes the practical implementation of the
test procedure in algorithmic steps. Section \ref{s:fsp} illustrates
the application of the test on three climate data sets which form
functional panels: sea surface temperatures in the pacific ocean, US
regional precipitation data and temperature curves in Germany.
Section \ref{ss:sim} further examines the finite sample performance of
our test by applying it to simulated data which resemble the above
mentioned data sets. The proofs of the asymptotic results are
presented in Section \ref{s:multi} and in the supplemental material.
In addition to these mathematical calculations, the supplemental
material contains a zipped folder containing the complete {\tt R}
code, a corresponding {\tt README} file and the data sets.
\section{Testing procedure} \label{s:test}
\subsection{Assumptions and large sample
results} \label{ss:results}
We assume that all functions have been rescaled so that their
domain is the unit interval $[0,1]$. We also assume that
they have mean zero: $\E X_{i,n}(t) = 0$ for almost all $t\in[0,1]$.
In practice, the mean is removed by subtracting the sample mean, see Section~\ref{ss:fsi}, so that the functional time series forming the
panel each have sample mean zero. Subtracting the sample
mean introduces additional terms of the
order $O_P(N^{-1})$, and so does not affect the limiting distribution.
Denote by $L^2= L^2([0,1])$ the Hilbert space of square integrable
functions with the usual inner product $\langle \cdot, \cdot \rangle$
and the norm $\| \cdot \|$ it generates. The assumptions and the
definition of the test statistic involve the Kronecker product, and
its properties are heavily used in the proofs. Readers are referred to
\citet{graham:1981} for a very useful exposition. In the context
of matrices and vectors, we take $\otimes$ to be the usual Kronecker
product. Between two functions or operators $x$ and $y$, we take $x
\otimes y$ to be the operator $\langle x, \cdot \rangle y$. We will
often not distinguish notationally between the cases, as it will
always be clear from the context which we mean. Further details will
be provided as needed. By $| \cdot |$ we denote the Euclidean norm of
a vector.
Our first assumption states the functions forming the panel
are in $L^2$ and have uniformly bounded fourth moments.
\begin{assumption} \label{a:1}
Assume that $\{\bX_n\}$ is a {\em zero mean}
sequence of random
functional vectors taking values in $\{L^2\}^I$. Furthermore, assume
that there exists a constant $M$ such that
\[
\E \| X_{i,n}\|^4 \leq M < \infty,
\quad i=1, \ldots, I, \quad n=1,\ldots,N.
\]
\end{assumption}
Our second assumption connects the rate of growth of $I$ and the
$p(i)$ to the rate of decay of the gaps between the eigenvalues
of individual series and the rate of decay of the eigenvalues of
the whole panel. Assumptions of this type go back at least to the work
of \citet{dauxois:1982}.
To the best of our knowledge, only the case of a single functional
series or sample, possibly with explanatory variables or functions,
has been considered, see \citet{cai:hall:2006}, \citet{crambes:kneip:sarda:2009},
\citet{fremdt:2014}, \citet{hall:muller:wang:2006}, and \citet{paul:peng:2009},
among many others.
The complexity of our Assumption~\ref{a:pn2} is due to the
panel structure of the data. To formulate it, define
\[
{\bf X}_{in} = [ X_{1in}, \ldots, X_{p(i)in}]^\top, \ \ \ \
X_{jin} = \langle X_{i,n}, v_{i,j} \rangle
\]
and column vectors
$
{\bf X}_n^\star
= [{\bf X}_{1n}^\top, \ldots, {\bf X}_{In}^\top]^\top
$
of length $p_N:=p= \sum_{i=1}^{I} p(i)$.
The panel $\lbr {\bf X}_n^\star\rbr$ is thus an approximation
of dimension $p_N$ to the functional panel $\lbr {\bf X}_n \rbr$
given by \eqref{e:panel}.
Let
\[
\bC_{0,N} = \E \lb\bX_n^\star \bX_n^{\star \top} \rb
\]
be the $p_N\times p_N$ covariance matrix whose eigenvalues are
$\gamma_1 \ge \ldots \ge \gamma_{p_N}$. Denote by
$\lambda_{i,1} > \lambda_{i,2}> \ldots $ the eigenvalues of the
covariance operator $\E[ X_{i,1} \otimes X_{i,1}]$ and define
\[
\Gamma_N = \sum_{i,\ip=1}^I \sum_{j = 1}^{p(i)}
\sum_{\jp = 1}^{p(\ip)}
(\alpha_{i,j}^{-1} + \alpha_{\ip, \jp}^{-1})^2,
\]
where $\alpha_{i,1} = \lambda_{i,1} - \lambda_{i,2}$ and for $j \geq 2$,
$\alpha_{i,j}
= \min\{\lambda_{i, j-1} - \lambda_{i,j},
\lambda_{i,j} - \lambda_{i, j+1} \}$.
\begin{assumption} \label{a:pn2} Assume that the sequence $p_N$
is such that $p_N \to \infty$ and
\[
N^{-1/2} p_N^{-1} \gamma_{p_N}^{-3} I^3\Gamma_N^{1/2} \to 0.
\]
(The number of series in the panel, $I$, can either stay fixed or tend to infinity.)
\end{assumption}
Assumption~\ref{a:pn2} has the following interpretation. The first
two terms, $N^{-1/2} p_N^{-1}$, indicate the rate at which information
accumulates as $N \to \infty$, while the third and fourth terms,
$\gamma_{p_N}^{-3} I^3$, indicate the rate at which the panel
structure dilutes this information (with $\gamma_{p_N}$ governing the
correlation between series). The last term $\Gamma_N^{1/2}$
incorporates the spacing of the eigenvalues and is common in
asymptotics with an increasing number of projections. A more readily
interpretable form of this assumption is stated in
Section~\ref{ss:I=1} for the case of a single time series ($I=1$). We
do not impose any specific dependence structure, and prefer to use a
general, admittedly rather technical, Assumption~\ref{a:pn2}. An
alternative approach would be to impose some temporal dependence
structure, e.g., as in \citet{jirak:2015}, and establish analogous
results under such assumptions. Instead, we give a brief example to
help shed further light on Assumption \ref{a:pn2}.
Assume that each element of the panel has the same covariance operator
so that $\lambda_{i,j} \equiv \lambda_j$ and $\alpha_{i,j} \equiv
\alpha_j$ for all $i$ and $j$. In this case, it makes sense to also
assume that $p(i) \equiv p$ so that $p_N = I p$. Collect the
$\lambda_j$ into a diagonal matrix $\Lambda$. Furthermore, assume
that the panels are independent so $\bC_{0,N} = \Lambda \otimes \bI_{I
\times I}$, where $\otimes$ denotes the Kronecker product and
$\bI_{I\times I}$ the $I \times I$ identity. This then implies that
$\gamma_{p_N} = \lambda_{p}$.
We now assume explicitly that $\lambda_j = j^{-\alpha}$ and $\lambda_j - \lambda_{j+1} = j^{-\alpha - 1}$. This implies that
\begin{align*}
\Gamma_N & \leq I^2 \sum_{j} \sum_{j^\prime} \left[j^{\alpha + 1} + (j')^{\alpha + 1} \right]^2 \\
& = I^2 \left[ 2 p \sum_{j=1}^p j^{2\alpha + 2} + 2 \left(\sum_{j=1}^p j^{\alpha + 1}\right)^2 \right] \\
& \approx I^2 \left[ 2 p^{2\alpha +4} (2 \alpha + 3)^{-1} + 2 p^{2 \alpha + 4} ( \alpha + 2)^{-2}
\right] \sim I^{2} p^{2 \alpha + 4}.
\end{align*}
Here $\approx$ means the limit of their ratio tends to 1, while $\sim $ means the limit of their ratio is a finite nonzero constant. So then we have
\[
N^{-1/2} p_N^{-1} \gamma_{p_N}^{-3} I^3\Gamma_N^{1/2}
\sim N^{-1/2} p^{-1} I^{-1} p^{3 \alpha} I^{3} I p^{\alpha + 2}
= N^{-1/2} p^{4 \alpha +1} I^3.
\]
The parameter $\alpha$ is usually viewed as the smoothness of the $X_{i,n}$ processes. We can see that for rougher processes, we can actually take larger panels and more principal components, since $\alpha$ will be smaller in these cases. For example, $\alpha = 2$ for Brownian motion. The same calculations will show that in the single panel case of Section \ref{ss:I=1}, the rate becomes
\begin{align*}
\frac{N^{-1/2} \sum_{j=1}^p \alpha_j^{-1} }{\lambda_p^2}
\sim N^{-1/2} p^{3 \alpha +2}.
\end{align*}
Since it must be the case that $\alpha > 1$, we have $3\alpha + 2 < 4\alpha + 1$, which quantifies the price we pay for the lack of structure in the panel. This price increases for smoother processes, i.e., larger $\alpha$.
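The rate claimed for $\Gamma_N$ in this example is easy to check numerically. The following {\tt R} snippet is a small sketch with illustrative values of $I$, $p$ and $\alpha$ (the helper name \verb|gamma_N| is ours); it evaluates $\Gamma_N$ in the equal-covariance case and compares it with $I^2 p^{2\alpha+4}$.
\begin{verbatim}
gamma_N <- function(I, p, alpha) {
  a_inv <- (1:p)^(alpha + 1)              # alpha_j^{-1} = j^(alpha+1)
  I^2 * sum(outer(a_inv, a_inv, "+")^2)   # sums over i, i', j and j'
}
alpha <- 2                                # Brownian-motion-like decay
for (p in c(10, 20, 40, 80)) {
  ratio <- gamma_N(I = 5, p = p, alpha = alpha) / (5^2 * p^(2 * alpha + 4))
  cat(sprintf("p = %3d   ratio = %.4f\n", p, ratio))
}
# the ratio approaches 2/(2*alpha + 3) + 2/(alpha + 2)^2 as p grows
\end{verbatim}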
We now proceed to define the test statistic.
Let $\hat v_{i, j}$ be the $j^\text{th}$ estimated functional principal
component (EFPC) of the $i^\text{th}$ functional time series, see, e.g.,
Chapter 3 of \citet{HKbook}. Set
\[
\widehat {\bf X}_{in} = [\widehat X_{1in}, \ldots, \widehat X_{p(i)in}]^\top, \ \ \ \
\widehat X_{jin} = \langle X_{i,n}, \hat v_{i,j} \rangle.
\]
Next, we form column vectors of length $p_N:=p= \sum_{i=1}^I p(i)$
given by
\[
\widehat {\bf X}_n
= [\widehat {\bf X}_{1n}^\top, \ldots, \widehat {\bf X}_{In}^\top]^\top.
\]
To form a portmanteau test statistic using the $\widehat \bX_n$,
we introduce
\begin{align*}
\widehat \bV_h = N^{-1} \sum_{n=1}^{N-h}
\widehat \bX_n \otimes \widehat \bX_{n+h}; \ \ \ \
\widehat \bC_0 = N^{-1} \sum_{n=1}^N
\widehat \bX_n \widehat \bX_{n}^\top.
\end{align*}
Observe that $\widehat \bV_h$ is a column vector of length $p_N^2$ and
$\widehat \bC_0 \otimes \widehat \bC_0$ is a $p_N^2\times p_N^2$
symmetric matrix. The test statistic is defined by
\begin{align*}
\widehat Q_N & = N \sum_{h=1}^H \widehat \bV_h^\top
(\widehat \bC_0 \otimes \widehat \bC_0)^{-1} \widehat \bV_h.
\end{align*}
The summation limit $H$ plays the same role as the maximal number
of lags in the usual Box--Pierce--Ljung type statistics.
It is fixed in the asymptotic theory.
Our first result states that $\widehat Q_N$ is asymptotically normal
under $H_0$, i.e.,\ when the data are iid.
\begin{theorem} \label{t:multi}
If Assumptions \ref{a:1} and \ref{a:pn2} hold, then under $H_0$,
\[
\frac{\widehat Q_N - p_N^2 H}{p_N \sqrt{2H}} \overset{\cD}{\to} \mathcal{N}(0,1).
\]
\end{theorem}
Theorem~\ref{t:multi} is proven in Appendix~\ref{s:multi}. The proof
involves a sequence of vectors of
projections of increasing dimension. In
\citet{cardot:fms:2003} and \citet{horvath:huskova:rice:2013},
this problem is avoided by making extensive use of the Prokhorov--Levy
metric. However, such a technique is limited due to the difficulty of
incorporating dimension into Berry--Esseen type convergence results,
which typically rely on highly complex smoothing arguments.
Furthermore, such an approach typically does not yield results as
sharp as proving the CLT directly due to the way they depend on
dimension. In contrast, our approach adds no additional assumptions
beyond those needed to replace the estimated eigenvalues and
eigenfunctions with their theoretical counterparts. We therefore view
the following theorem, which establishes the asymptotic normality of
general quadratic forms based on autocorrelations, as an important
contribution of this paper.
\begin{theorem}\label{t:norm}
Let $\{\bZ_{n,N}, 1 \le n \le N\}$ be an array of random vectors with
$\bZ_{n,N} \in \mbR^{p_N}$. For each $N$,
assume that $\bZ_{1,N}, \dots, \bZ_{N,N}$ are iid and that
\begin{equation} \label{e:m-cond-Z}
\E[\bZ_{1,N}] ={ \bf{0}} \quad \mbox{and}
\quad \E[\bZ_{1,N} \bZ_{1,N}^\top]
= \bI_{p_N}.
\end{equation}
If, as $N \to \infty$,
\begin{equation} \label{e:p-cond-Z}
p_N \to \infty, \ \ \ p_N N^{-2/3} \to 0, \ \ \
{\rm and} \ \ \ N^{-1/2} \E| \bZ_{1,N}|^4 \to 0,
\end{equation}
then for $H$ fixed
\[
\frac{N^{-1}\sum_{h=1}^H \left| \sum_{n=h+1}^{N} \bZ_{n-h,N}
\otimes \bZ_{n,N} \right|^2 - p_N^2 H}{p_N \sqrt{2H}}
\overset{\cD}{\to} \mathcal{N}(0,1).
\]
\end{theorem}
The proof of Theorem~\ref{t:norm} is presented in the supplemental
material.
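Theorem~\ref{t:norm} can be illustrated by a small Monte Carlo experiment. The {\tt R} sketch below uses illustrative values of $N$, $p_N$ and $H$ (not taken from our simulations); the empirical mean and standard deviation of the normalized statistic should be close to $0$ and $1$, up to a small negative bias of order $H/N$ which motivates the finite sample correction used in Section~\ref{ss:fsi}.
\begin{verbatim}
set.seed(1)
N <- 2000; p <- 20; H <- 5; R <- 500
stats <- replicate(R, {
  Z <- matrix(rnorm(N * p), N, p)                     # rows are Z_1, ..., Z_N
  s <- 0
  for (h in 1:H) {
    M_h <- crossprod(Z[1:(N - h), ], Z[(h + 1):N, ])  # sum_n Z_{n-h} Z_n^T
    s <- s + sum(M_h^2)                               # |sum_n Z_{n-h} (x) Z_n|^2
  }
  (s / N - p^2 * H) / (p * sqrt(2 * H))
})
c(mean = mean(stats), sd = sd(stats))                 # approximately 0 and 1
\end{verbatim}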
Limit results for random vectors with the dimension increasing with
the sample size appear in the asymptotic theory for empirical
likelihood, see, e.g., \citet{hjort:mckeague:vankeilegom:2009}
and \citet{peng:schick:2013}. The Central Limit Theorem
established by \citet{peng:schick:2012} is motivated by
such theory. Using the notation of Theorem~\ref{t:norm},
a corollary to their main result can be stated as
\[
\frac{N^{-1}\left | \sum_{n=1}^N \bZ_{n, N} \right |^2 - p_N}{\sqrt{2p_N}}
\overset{\cD}{\to} \mathcal{N}(0,1).
\]
Their focus is not on the lagged Kronecker products, but on the case
where $\E[\bZ_{1,N} \bZ_{1,N}^\top]$ is a general covariance matrix (i.e.,\ not the identity),
and the centering is with respect to its trace. Other results on CLT
convergence rates which incorporate dimension are given in
\citet{senatov:1998}.
We conclude this section with a general framework under which the test
rejects the null. If the sequence is stationary and
weakly dependent and at least
one element of the panel exhibits nonzero correlation with another
element (at some lagged time index), then the test will reject with
power approaching one. As a specific assumption for stationarity
and weak dependence we use the concept of $L^p$--$m$--approximability,
see \citet{HKbook}, Chapter 16.
\begin{assumption}{(Alternative Hypothesis)} \label{a:HA}
Assume that $\{(X_{i,1},\dots,X_{i,N})\}$ is a stationary $L^4$--$m$--approximable
sequence (for each $i$) and that there exists a nonempty
subset of lags $\cH^\star \subset \{1,\dots,H\}$ and, for each $h
\in \cH^\star$, a nonempty subset of pairs of indices $\cI_h^\star
\subset \{1, \dots, I\}^2$ such that
\[
\left(\E[\langle X_{i,n}, v_{i}
\rangle \langle X_{j,n+h}, v_{j} \rangle ]\right)^2
\geq R > 0,
\]
for all $h \in \cH^\star$ and $(i,j) \in \cI_h^\star$.
\end{assumption}
\begin{theorem} \label{t:multi:HA}
If Assumptions \ref{a:1}, \ref{a:pn2}, and \ref{a:HA} hold, then
\[
\frac{\widehat Q_N - p_N^2 H}{p_N \sqrt{2H}}
\gtrsim \gamma_1^{-2} R N (1+ o_P(1)) \sum_{h \in \cH^\star} | \cI_h^\star|
\overset{P}{\to} \infty.
\]
\end{theorem}
\noindent Theorem~\ref{t:multi:HA} is proven in Appendix~\ref{s:multi}.
In the next section, we consider the case of a single series to illustrate
our assumptions. We emphasize that $H$ is assumed
to be fixed, but can be arbitrarily large. Asymptotics under $H$ diverging
to infinity (for a single series) were investigated by
\citet{horvath:huskova:rice:2013}. It should, in principle, be possible
to let the number of panel series, $I$, the number of projections
$p$ and the maximal lag $H$ tend simultaneously to infinity, but we do not
develop this more complex theory here. We thus stay within the
framework of traditional time series analysis where asymptotics are
derived for a finite number of lags, see, e.g., Chapter 7 of
\citet{brockwell:davis:1991}.
\subsection{Case of $I=1$ (a single functional series)} \label{ss:I=1}
To provide a more tangible intuition behind the form of the statistic
$\widehat Q_N$, we discuss the simpler scenario where we only have one
time series, i.e., $I=1$. Define
\[
\Delta_{N,h} = N^{-1/2} \sum_{n=1}^{N-h} X_n \otimes X_{n+h}.
\]
The autocovariance operator $\Delta_{N,h}$ is Hilbert--Schmidt. Recall
that Hilbert--Schmidt operators form a separable Hilbert space with
the inner product
\begin{equation} \label{e:HS-prod}
\lip \Psi_1, \Psi_2 \rip_\cS = \sum_{k} \lip \Psi_1(e_k), \Psi_2(e_k) \rip,
\end{equation}
where $\lbr e_k \rbr$ is any orthonormal basis, see, e.g., Chapter 2 of
\citet{HKbook}. A direct application of this definition with the
functions $\hv_k$ (extended to a complete system) shows that
\[
\langle \Delta_{N,h}, \hat v_j \otimes \hat v_{\jp} \rangle_\cS
= N^{-1/2} \sum_{n=1}^{N-h} \widehat X_{jn} \widehat X_{\jp, n+h}.
\]
Therefore, by Lemma 7.1 of \citet{HKbook}, the statistic
$\widehat Q_N$ can be expressed as
\[
\widehat Q_N = \sum_{h=1}^H \sum_{j, \jp=1}^{p_N}
\frac{\langle \Delta_{N,h}, \hat v_j \otimes \hat v_{\jp} \rangle_\cS^2}
{\hat \lambda_j \hat \lambda_{\jp}}.
\]
The summands are the squares of the sample cross--correlations of
all projections
under consideration. These are added over all projections and all lags up
to lag $H$.
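The agreement between the Kronecker form of $\widehat Q_N$ and the double sum above can be verified numerically. In the {\tt R} sketch below, generic random vectors stand in for the score vectors, while {\tt S} and {\tt lam} play the roles of the empirical scores and eigenvalues; the sizes are illustrative.
\begin{verbatim}
set.seed(2)
N <- 200; p <- 4; H <- 3
X <- matrix(rnorm(N * p), N, p)               # stand-in for the score vectors
C0 <- crossprod(X) / N                        # empirical covariance matrix
e  <- eigen(C0, symmetric = TRUE)
S  <- X %*% e$vectors                         # scores on the empirical eigenvectors
lam <- e$values
Q_kron <- 0; Q_sum <- 0
for (h in 1:H) {
  V_h <- as.vector(crossprod(X[1:(N - h), ], X[(h + 1):N, ])) / N
  Q_kron <- Q_kron + N * drop(t(V_h) %*% solve(kronecker(C0, C0), V_h))
  M_h <- crossprod(S[1:(N - h), ], S[(h + 1):N, ]) / N
  Q_sum <- Q_sum + N * sum(sweep(sweep(M_h^2, 1, lam, "/"), 2, lam, "/"))
}
all.equal(Q_kron, Q_sum)                      # TRUE up to rounding error
\end{verbatim}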
In the case of a single time series, Assumption~\ref{a:pn2} can be replaced
by a more interpretable assumption:
\begin{assumption}\label{a:pn}
We assume that the sequence $p_N$ is nondecreasing,
$p_N \to \infty$, and satisfies
\[
\frac{N^{-1/2} \sum_{j=1}^{p_N} \alpha_j^{-1} }
{ \lambda_{p_N}^2} \to 0,
\]
where $\alpha_1 = \lambda_1 - \lambda_2$ and for $j \geq 2$,
$\alpha_j
= \min\{\lambda_{j-1} - \lambda_j, \lambda_j - \lambda_{j+1} \}$.
\end{assumption}
Assumption \ref{a:pn} quantifies the intuition that $p$ should
increase to infinity at a rate slower than $N$, depending on the rate
of decay of the eigenvalues $\lambda_j$ and the gaps
between them. Direct verification shows
that if the $\lambda_j$ decay exponentially fast, then $p$ must increase
at a rate slower than $\ln N$. If the $\lambda_j$ decay like a
power function, Assumption~\ref{a:pn} will hold if $\ln(p)/\ln(N) \to
0$.
The proof of Theorem~\ref{t:multi} for $I=1$ is presented in the
supplemental material. It is less abstract than the general proof
in Appendix~\ref{s:multi}; its study may facilitate the understanding
of the general case.
\subsection{Details of implementation} \label{ss:fsi}
In this section, we provide a step-by-step description of the testing
procedure. All steps listed below can be easily implemented in {\tt R}
or {\tt Matlab} using basic routines and the functional principal
component toolbox (see \citet{ramsay:hooker:graves:2009}).
The supplemental material contains a ready-to-use {\tt R}
function implementing the test, and a condensed {\tt R} sketch of the
steps is given after the list. For step 6, we provide two alternative
but equivalent procedures, where the first might be more intuitive
while the second is computationally more efficient. The finite sample
bias correction in step 7 follows from an extension of the arguments
of \citet{ljung:box:1978}. It is clearly asymptotically negligible.
\begin{enumerate}
\item Center each functional time series, i.e., compute
\[
X_{i, n}^c (t) = X_{i,n}(t) - \hat\mu_i(t), \ \ \
\hat\mu_i(t) = N^{-1} \sum_{n=1}^N X_{i,n}(t).
\]
\item Calculate the eigenfunctions $\hat v_{i,j}$ and the eigenvalues
$\hat \lambda_{i,j}$ of the empirical covariance operator defined by
\[
\widehat C_i (x) = \frac{1}{N} \sum_{n=1}^N
\langle X_{i,n}^c, x \rangle X_{i,n}^c.
\]
(This step is implemented as \verb|pca.fd| in {\tt R} and as \verb|pca_fd|
in {\tt Matlab}. Both functions, by default,
center their arguments as in step 1.)
\item For each $1 \le i \le I$, determine $p(i)$ as the smallest $k$ for
which
\[
\frac{\sum_{j=1}^k\hat\lambda_{i,j}}
{\sum_{j=1}^{N}\hat\lambda_{i,j}} > 0.85.
\]
\item Construct the vectors of scores
\[
\widehat {\bf X}_{in} = [\widehat X_{1in}, \ldots, \widehat X_{p(i)in}]^\top, \ \ \ \
\widehat X_{jin} = \langle X_{i,n}, \hat v_{i,j} \rangle.
\]
and the vectors
\[
\widehat {\bf X}_n
= [\widehat {\bf X}_{1n}^\top, \ldots, \widehat {\bf X}_{In}^\top]^\top.
\]
Using the vectors $\{\widehat {\bf X}_n \}$, generate the $N \times p_N$ matrix
\[
\widehat{\bf X} = [\widehat{\bf X}_1,..., \widehat{\bf X}_N]^\top
\]
and calculate the empirical covariance matrix
\[
\widehat{\bf C}_0 = N^{-1} \widehat{\bf X}^\top \widehat{\bf X} .
\]
\item
Calculate the spectral decomposition
\[
\widehat \bC_0 = {\bf U} {\bf D} {\bf U}^\top,
\]
where ${\bf D}$ is the diagonal matrix of eigenvalues and
${\bf U}$ the matrix of eigenvectors.
Let $d_1, \dots, d_{p_N}$ be the eigenvalues of $\widehat \bC_0$.
Choose a cutoff point $q$ defined as the smallest integer for which
\[
\frac{\sum_{i=1}^q d_i}{\sum_{i=1}^{p_N} d_i} \geq 0.85.
\]
Set ${\bf D}^{-1}[i,i] = d_i^{-1}$ if $1\leq i \leq q$ and
zero otherwise. Calculate the generalized inverse
\[
\widehat \bC_0^{-1} = {\bf U} {\bf D}^{-1} {\bf U}^\top.
\]
(Note that $\widehat \bC_0$ might be singular, e.g., when $p_N>N$.)
\item
Using the vectors $\{\widehat {\bf X}_n \}$ from step 4, compute the terms
\[
\widehat \bV_h = N^{-1} \sum_{n=1}^{N-h}
\widehat \bX_n \otimes \widehat \bX_{n+h}
\]
for $1 \leq h \leq H$, where $H$ is chosen by the user.
(A discussion of this issue is presented at the end of Section~\ref{ss:sim}.)
The test statistic
\[
\widehat Q_N = N \sum_{h=1}^H \widehat \bV_h^\top
(\widehat \bC_0 \otimes \widehat \bC_0)^{-1} \widehat \bV_h.
\]
can be calculated using
$(\widehat \bC_0 \otimes \widehat \bC_0)^{-1}
\approx {\widehat \bC_0}^{-1} \otimes {\widehat \bC_0}^{-1}$,
where ${\widehat \bC_0}^{-1}$ is the generalized inverse from step 5.
An alternative procedure for step 6 which avoids the Kronecker product and
therefore requires less computational resources is the following:\\
For each $h, 1 \leq h \leq H$, take two submatrices of the matrix $\widehat{\bf X}$
defined in step 4,
\begin{align*}
\widehat{\bf X}_{-h} = [\widehat{\bf X}_1,..., \widehat{\bf X}_{N-h}]^\top \\
\widehat{\bf X}_{+h} = [\widehat{\bf X}_{1+h},..., \widehat{\bf X}_{N}]^\top
\end{align*}
and construct the matrices
\[
\widehat{\bf M}_h = N^{-1} ( \widehat{\bf X}_{-h}^\top \widehat{\bf X}_{+h} )^\top ,\,\,\, h=1,\, ...,\, H.
\]
(Note that the vectorized form $\text{vec}\!\! \left(\widehat{\bf M}_h \! \right)$ is equivalent to the
vectors $\widehat{\bf V}_h$ defined before.)
Calculate the test statistic as
\[
\widehat Q_N = N \sum_{h=1}^H
\text{vec} \!\! \left(\widehat{\bf M}_h\right)^\top
\text{vec} \!\! \left(
\left(\widehat \bC_0^{-1}\right)^\top \widehat{\bf M}_h \, \widehat \bC_0^{-1}
\right),
\]
where ${\widehat \bC_0}^{-1}$ is the generalized inverse from step 5.
\item Reject the null hypothesis at significance level $0< \alpha < 1$, if
\[
\frac{\widehat Q_N - q^2 H \left(1-\frac{H+1}{2N}\right)}{q\sqrt{2H \left(1-\frac{H+1}{2N}\right)}}
> \Phi^{-1}(1-\alpha),
\]
where $q$ is determined in step 5 and where $\Phi^{-1}(1-\alpha)$ is the
$(1-\alpha)$th quantile of the standard normal distribution.
\end{enumerate}
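The following {\tt R} sketch condenses steps 1--7 for curves that are already evaluated on a common grid; a discretized principal component analysis is used in place of a basis-expansion FPCA, and the function name \verb|panel_test| is ours, not that of the supplemental code. For basis-expanded curves, steps 1--2 would instead use \verb|pca.fd|.
\begin{verbatim}
panel_test <- function(Xlist, H, cut = 0.85) {
  N <- nrow(Xlist[[1]])
  scores <- lapply(Xlist, function(X) {
    Xc  <- scale(X, center = TRUE, scale = FALSE)      # step 1: center
    e   <- eigen(crossprod(Xc) / N, symmetric = TRUE)  # step 2: PCA on the grid
    p_i <- which(cumsum(e$values) / sum(e$values) >= cut)[1]  # step 3: p(i)
    Xc %*% e$vectors[, 1:p_i, drop = FALSE]            # step 4: score vectors
  })
  S  <- do.call(cbind, scores)                         # N x p_N score matrix
  C0 <- crossprod(S) / N
  e  <- eigen(C0, symmetric = TRUE)                    # step 5: spectral decomposition
  q  <- which(cumsum(e$values) / sum(e$values) >= cut)[1]
  U  <- e$vectors[, 1:q, drop = FALSE]
  C0inv <- U %*% (t(U) / e$values[1:q])                # generalized inverse
  Q <- 0
  for (h in 1:H) {                                     # step 6: efficient M_h form
    M_h <- crossprod(S[1:(N - h), , drop = FALSE],
                     S[(h + 1):N, , drop = FALSE]) / N
    Q <- Q + N * sum(M_h * (C0inv %*% M_h %*% C0inv))
  }
  corr <- 1 - (H + 1) / (2 * N)                        # step 7: finite sample correction
  z <- (Q - q^2 * H * corr) / (q * sqrt(2 * H * corr))
  list(stat = z, p.value = 1 - pnorm(z), q = q)
}
\end{verbatim}
For example, with {\tt Xlist} a list of $I$ matrices of dimension $N\times(\text{grid size})$, \verb|panel_test(Xlist, H = 5)| returns the normalized statistic, the one-sided $p$\,-value and the cutoff $q$.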
\section{Applications and finite sample performance}
\label{s:fsp}
As discussed in Section~\ref{s:int}, there are many examples of
functional panels, with various temporal and cross-sectional
dependence structures, and various shapes of the curves. This
paper focuses on methodology and theory. It is therefore not
possible to present a simulation study which covers the wide range
of possibly relevant scenarios. However, rather than considering
some ad hoc artificial data generating processes (DGP), we focus on
three real data sets taken from climate studies and then
simulate panels whose random structure resembles the one of these
real data sets closely. Our goal is to evaluate the performance of
the test in realistic settings and so to provide additional
guidance for its application.
\subsection{Application to climate data} \label{ss:data}
We consider three climate data sets with different
values of $I$ and $N$ and different levels of noise.
Each of them consists of $N$ annual curves at $I$ locations.
Before describing these data in more detail,
we provide the following summary.
\begin{enumerate}
\item {\bf El Ni\~{n}o SST:} $N=63, \ I=4$,\ smooth.
\item {\bf US precipitation:} $N=113, \ I = 103$,\ noisy.
\item {\bf German temperature:} $N=61,\ I=42$,\ noisy.
\end{enumerate}
El Ni\~{n}o is a phenomenon of semi--periodic variation
of sea surface temperature (SST) in the southern Pacific Ocean. The
phenomenon is measured by an index for SST variation in several
regions, generally referred to as Ni\~{n}o-1+2, Ni\~{n}o-3,
Ni\~{n}o-4, and Ni\~{n}o-3.4, see, e.g., \citet{trenberth:1997}, and
\citet{trenberth:stepaniak:2001}. Data with monthly SST
measurements for all four regions from 1950 onwards are available
online:\\
{\tt http://www.cpc.ncep.noaa.gov/data/indices/ersst3b.nino.mth.81-10.ascii}.
Panels with $I$ comparable to $N$ arise in regional climate studies.
As an example, we use monthly precipitation data from the United
States Historical Climatology Network for all stations in California,
Arizona, and New Mexico, a region known for reoccurring precipitation
deficits. Screening for completeness yields a panel of $I=103$
stations from 1901 to 2013, $N=113$. Increasing all records by 0.01
inch (which is the smallest discrete unit of measurement for these
data) allows us to use a log-transformation which leads to
approximately normal data. (Any invertible transformation preserves
the iid property stated as $H_0$.) As our last example, we consider
daily temperature data at $I=42$ weather stations in Germany over $N=
61$ years (accessible online at: {\tt www.dwd.de}).
To remove long term trends that would lead to a rejection, all data
were detrended by fitting a least squares line to observations at each
location and each month. This operation had a visible effect only on
the German temperature data, as illustrated in Figure
\ref{fig:august}. Arguably, more sophisticated models of long term
behavior could be used, for which testing the residuals for independence
would be crucial to assess model
validity. However, our objective is not a climatological study, but merely
an illustration of statistical methodology which can be applied to
residual curves obtained from more complex models.
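A minimal {\tt R} sketch of this detrending step is given below; the helper \verb|detrend_station| is ours, and {\tt Y} is assumed to be a year-by-month matrix of observations for one station.
\begin{verbatim}
detrend_station <- function(Y) {
  years <- seq_len(nrow(Y))                           # one row per year
  apply(Y, 2, function(y) residuals(lm(y ~ years)))   # linear trend removed per month
}
\end{verbatim}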
\begin{figure}
\caption{Monthly averages of maximum daily temperature in the month of August, 1953 to
2013, in Mannheim, Germany. Note the very high record of almost
$32^\circ\text{C}$.}
\label{fig:august}
\end{figure}
The El Ni\~{n}o indices are already smoothed regional averages and do not
require further smoothing. We therefore expand the monthly
measurements using a B-spline basis of order four with one knot placed
at each month, so that the expansion closely matches the data points.
Figure~\ref{f:ninoregions} shows the spline-expansion for the year
2012, while Figure~\ref{f:ninofullsample} exhibits the SST
measurements in each region over the whole span of the sample. The
data from individual weather stations in the US and in Germany are
noisy. Due to pronounced annual periodicity, we expand each annual
curve using a Fourier series with 25 basis functions and apply the
harmonic acceleration roughness penalty. An example for the US smoothed,
log-transformed precipitation data is given in Figure \ref{f:SantaCruz}.
The smoothness parameter was chosen to minimize the standard
generalized cross--validation criterion. Details of the procedure
are described in Section 5.3 of \citet{ramsay:hooker:graves:2009}.
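A sketch of this smoothing step using the {\tt fda} package is given below. The matrix {\tt y} is a placeholder for one station's daily values, and the penalty weight {\tt lambda} is an illustrative value; in practice it would be chosen by minimizing the GCV values returned by \verb|smooth.basis|.
\begin{verbatim}
library(fda)
days    <- 1:365
y       <- matrix(rnorm(365 * 61), 365, 61)   # placeholder: rows = days, cols = years
fbasis  <- create.fourier.basis(rangeval = c(0, 365), nbasis = 25)
harmLfd <- vec2Lfd(c(0, (2 * pi / 365)^2, 0), c(0, 365))  # harmonic acceleration
fdpar   <- fdPar(fbasis, harmLfd, lambda = 1e2)           # illustrative lambda
fit     <- smooth.basis(days, y, fdpar)                   # fit$gcv holds the GCV values
curves  <- fit$fd                                         # smoothed functional data
\end{verbatim}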
We apply the test to the detrended and smoothed curves. As described
in Section~\ref{ss:fsi}, Step 3, the numbers $p(i)$ are determined
separately for each time series $i$ such that at least 85\% of
variance within the time series is captured by the $p(i)$ principal
components. For the El Ni\~{n}o SST curves, this requires two
principal components for each region. For the log-precipitation and
the temperature data, the same criterion selects three principal
components for each station. The fact that all time series within a
panel are assigned the same number of principal components underlines
the fact that the climate measures we use have structurally similar
data generating processes. This, however, does not necessarily need to
be the case for different panels. Our testing procedure accounts
for such a possibility by
choosing $p(i)$ individually for each $i$. Additional simulations show
that the test is very robust to the choice of the cut-off criterion of
this first dimension reduction.
The second principal component analysis described in Step 5,
Section~\ref{ss:fsi}, determines the number of dimensions $q$ for each
sample that are finally used to derive our test statistic. The 85\% of
variance criterion selects $q=2$ for the El Ni\~{n}o data with
$I=4$. It selects $q=3$ for the German temperature data with $I=42$,
and $q=37$ for the US precipitation data with $I=103$. The fact that
the second dimension reduction for the German temperature data selects
just 3 principal components for the whole panel can be attributed to
the rather dense geographical coverage and thus large homogeneity of
the cross-sectional temperature curves in this sample.
\begin{figure}
\caption{Sea Surface Temperature in four El Ni\~{n}o regions over the whole span of the sample.}
\label{f:ninofullsample}
\end{figure}
\begin{table}[htb]
\centering
\caption{Test results (normalized $\widehat{Q}_N$ and $p$\,-values)
for the three data sets:
El Ni\~{n}o SST curves with $I=4$, $N=63$;
US log-precipitation data with $I=103$, $N=113$;
German temperature data with $I=42$, $N=61$.}
\label{t:testresults}
\begin{tabular}{l|rr|rr|rr}
\hline\hline
& \multicolumn{2}{c|}{El Ni\~{n}o SST}
& \multicolumn{2}{c|}{US precip.}
& \multicolumn{2}{c}{German temp.} \\
& stat. & $p$\,-value & stat. & $p$\,-value & stat. & $p$\,-value\\
\hline
$H=1$ & 13.483 & $<0.001$ & 1.704 & 0.044 & -0.701 & 0.758 \\
$H=2$ & 11.778 & $<0.001$ & 2.080 & 0.019 & -1.000 & 0.841 \\
$H=3$ & 10.282 & $<0.001$ & 2.827 & 0.002 & -0.138 & 0.555 \\
$H=4$ & 8.973 & $<0.001$ & 2.587 & 0.005 & 0.193 & 0.424 \\
$H=5$ & 7.617 & $<0.001$ & 2.714 & 0.003 & -0.349 & 0.636 \\
$H=6$ & 6.508 & $<0.001$ & 2.762 & 0.003 & -0.119 & 0.547 \\
$H=7$ & 5.756 & $<0.001$ & 2.696 & 0.004 & 0.066 & 0.474 \\
$H=8$ & 5.142 & $<0.001$ & 3.060 & 0.001 & 0.324 & 0.373 \\
$H=9$ & 4.883 & $<0.001$ & 3.008 & 0.001 & 0.011 & 0.496 \\
$H=10$& 4.750 & $<0.001$ & 2.871 & 0.002 & 0.231 & 0.408 \\
\hline\hline
\end{tabular}
\end{table}
Table~\ref{t:testresults} shows the normalized test statistic and
the corresponding $p$\,-values for $H=1,...,10$ for all three panels.
The El Ni\~{n}o anomaly typically happens at irregular intervals of
three to six, sometimes seven, years; thus, importance should be
given to the results for these lags.
However, the test rejects $H_0$ for the SST panel for all lags tested
at any reasonable level of significance.
For the South--West precipitation data, the rejection is also
convincing, even though less strong. Both data sets reflect
known semi--periodic cycles extending over several years, and this is
a likely reason for the rejections. For the German temperature data,
in contrast, there is no evidence for a violation of
$H_0$. After a simple detrending, the panel of annual temperature
curves over Germany can be taken to consist of iid observations. Due
to a dense spatial coverage, one might say that after local trends
have been removed, the annual temperature pattern over Germany can be
treated each year as an independent replication.
\subsection{Simulation scheme and finite sample performance}
\label{ss:sim}
We now use the estimated stochastic structure of the panels
described in Section~\ref{ss:data} to generate artificial functional
panels. This section serves a twofold purpose: we want to validate
the conclusions implied by Table \ref{t:testresults}, and we want to
evaluate the empirical size and power of the test in realistic settings.
\subsubsection{Simulation scheme}
We first describe the procedure to simulate functional panels which
closely resemble the original climate data, but do not violate the null
hypothesis. Then, we explain how we generate panels with increasing
temporal dependence.
Functional panel data are expected to exhibit correlation of curves
for a fixed period $n$ (`between' individuals, i.e., stations or regions),
as well as dependence over periods for a fixed individual $i$
(`within' a time series).
The following data generating process features the first type of dependence,
i.e., the `between' correlation, but excludes dependence over time:
For every individual station or region $i$, we calculate the
empirical mean function $\hat{\mu}_i(t)$ together with $k=1,...,12$
empirical principal components $\hat{v}_{k,i}(t)$ (12 is the maximum
number of EFPC's for these data). We calculate the score
$\hat{\xi}_{k,i,n}$ for each principal component $k$, for every
individual $i$, and for every year $n=1,...,N$. Let
\[
\sigma_k(i, \ip) = \frac{1}{N-1} \sum_{n=1}^N
\lp \hat\xi_{k,i,n} - \bar \xi_{k,i} \rp
\lp \hat\xi_{k,\ip,n} - \bar \xi_{k,\ip} \rp
\]
and
\[
\bSig_k = \lb \sigma_k(i, \ip), 1 \le i, \ip \le I \rb.
\]
The matrix $\bSig_k$ is the empirical covariance matrix
of the scores from different individuals. Calculate
its Cholesky decomposition $\bSig_k = \bL_k \bL_k^\top$ and
obtain simulated scores of the form
\[
\boldsymbol{\zeta}_k = \mathbf{z}_k \bL^\top_k, \quad 1 \le k \le 12,
\]
where $\mathbf{z}_k$ is an $N \times I$ matrix of independent
standard normal random variables. Each matrix
$\boldsymbol{\zeta}_k$, $1\leq k\leq 12$,
has $I$ column vectors $\boldsymbol{\zeta}_{k,i}$
of length $N$ which are correlated among each other much like
the scores of the individual stations or regions from the original
climate data. With these scores, the data generating process
for the simulated iid functional panel is
\[
\mathbf{X}^{H_0}_{i}(t) = \hat{\mu}_i(t) + \sum_{k=1}^{12} \boldsymbol{\zeta}_{k,i} \hat{v}_{k,i}(t), \quad i=1,...,I,
\]
where each $\mathbf{X}^{H_0}_{i}(t)$ is a vector of random curves
of length $N$ and the superscript $H_0$ indicates that the artificial
sample satisfies the null hypothesis of independence. The essence of
the above procedure is that if the original data satisfied $H_0$ and
were normal, then the data generating process would have the same random
structure as the estimated structure of the data. Normal QQ--plots show
that the scores of the three data sets in question are approximately
normal ($i, k$ fixed, $N$ points per plot).
To construct an alternative to $H_0$, we impose autocorrelation on
each time series in the form of a functional autoregressive process of
order 1, FAR(1) (see Chapters 3 and 4 of
\citet{bosq:2000}, or Chapter 13 of \citet{HKbook}).
An appropriate Cholesky factor $\mathbf{L}_{\text{ac}}$ is defined
as follows:
Choose $\rho\neq0$, $-1<\rho<1$, the level of autocorrelation to
be imposed (one could also specify different levels of $\rho$ for
each $k$). Construct an $N \times N$ Toeplitz matrix
such that the first column corresponds to the sequence
$\{\rho^{n-1}\}_{n=1,...,N}$.
For the Cholesky factor $\mathbf{L}_{\text{ac}}$, take the lower
triangular part of this Toeplitz matrix and divide each element by
$[(\rho^{2n}-1)/(\rho^2-1)]^{1/2}$, where $n=1,...,N$
denotes the row number of the corresponding element. This ensures
that $\mathbf{L}_{\text{ac}} \mathbf{L}_{\text{ac}}^\top$ is positive
semi-definite and has all diagonal elements equal to $1$. Thus,
applying this factor to a vector of length $N$ imposes autocorrelation
among the vectors' elements but does not change the
overall level of variance. With the generic scores defined before,
the data generating process for the autocorrelated functional
panel is
\[
\mathbf{X}^{\text{ac}}_{i}(t) = \hat{\mu}_i(t)
+ \sum_{k=1}^{12} \mathbf{L}_{\text{ac}}\boldsymbol{\zeta}_{k,i} \hat{v}_{k,i}(t)
, \quad i=1,...,I.
\]
In summary, we can generate siblings of functional panels of the type
$\{X^{H_0}_{i,n}(t), X^{\text{ac}}_{i,n}(t) \}$,
$i=1,...,I$, $n=1,...,N$, whose cross--sectional dependence structure
is the same and similar to that of the real data, but where one sample
obeys the hypothesis of independence while the other sample follows an
explicit FAR(1) process. This procedure allows us to vary the length of
the panel, $N$.
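A sketch of this construction for a single principal component $k$ is given below; the values of $N$, $I$, $\rho$ and the stand-in score covariance are illustrative, and {\tt chol} in {\tt R} returns an upper triangular factor, which plays the role of the lower triangular factor used above.
\begin{verbatim}
set.seed(3)
N <- 60; I <- 10; rho <- 0.4
Sigma_k <- 0.5^abs(outer(1:I, 1:I, "-"))      # stand-in for the score covariance
L_k     <- chol(Sigma_k)                      # Sigma_k = t(L_k) %*% L_k
zeta_k  <- matrix(rnorm(N * I), N, I) %*% L_k # N x I scores, correlated across i
Toep    <- toeplitz(rho^(0:(N - 1)))
L_ac    <- Toep; L_ac[upper.tri(L_ac)] <- 0   # lower triangular part
L_ac    <- L_ac / sqrt((rho^(2 * (1:N)) - 1) / (rho^2 - 1))  # unit row norms
zeta_ac <- L_ac %*% zeta_k                    # lag-1 autocorrelation of about rho
\end{verbatim}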
\subsubsection{Finite sample performance}
To evaluate the empirical size and power, we simulate $R=10^3$
replications of panels with $I=4$, $I=42$, and $I=103$, and with the
cross--sectional dependence structure resembling that of the
respective data sets. We report results for the length $N=60$ and
$N=120$ (years), typical sample sizes encountered in historical
climate data. We test for $H=3,...,6$ which is the relevant range of
years for which dependence in the El Ni\~no driven climatic measures
is expected. Table~\ref{t:empiricalsize} shows the point estimates of
the rejection frequency for the simulation under $H_0$ together with
the Clopper--Pearson confidence intervals for the probability of
success (see \citet{clopper:pearson:1934}).\footnote{ Clopper--Pearson
confidence intervals are almost identical to the confidence intervals
based on the normal approximation to the binomial distribution, except
for cases of empirical power close to 1 when the right end point of
the latter exceeds 1.}
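In {\tt R}, these intervals are available directly, for instance
\begin{verbatim}
binom.test(55, 1000)$conf.int   # exact Clopper-Pearson interval for 55/1000
\end{verbatim}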
\begin{table}[hb]
\centering
\caption{Rejection frequencies (and confidence bands) for the test
with $\alpha=0.05$ obtained from 1000 simulations under $H_0$ for
each of the data sets.}
\label{t:empiricalsize}
\begin{tabular}{l|ll|ll}
\multicolumn{5}{c}{El Ni\~{n}o data, $I=4$, i.i.d.}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.055 & (0.042, 0.071) & 0.053 & (0.040, 0.069) \\
$H=4$ & 0.071 & (0.056, 0.089) & 0.051 & (0.038, 0.067) \\
$H=5$ & 0.060 & (0.046, 0.077) & 0.061 & (0.047, 0.078) \\
$H=6$ & 0.063 & (0.049, 0.080) & 0.057 & (0.043, 0.073) \\
\hline
\multicolumn{5}{c}{} \\
\multicolumn{5}{c}{US precipitation data, $I=103$, i.i.d.}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.033 & (0.023, 0.046) & 0.033 & (0.023, 0.046) \\
$H=4$ & 0.046 & (0.034, 0.061) & 0.041 & (0.030, 0.055) \\
$H=5$ & 0.054 & (0.041, 0.070) & 0.048 & (0.036, 0.063) \\
$H=6$ & 0.079 & (0.063, 0.097) & 0.058 & (0.044, 0.074) \\
\hline
\multicolumn{5}{c}{} \\
\multicolumn{5}{c}{German temperature data, $I=42$, i.i.d.}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.062 & (0.048, 0.079) & 0.061 & (0.047, 0.078) \\
$H=4$ & 0.060 & (0.046, 0.077) & 0.052 & (0.039, 0.068) \\
$H=5$ & 0.063 & (0.049, 0.080) & 0.054 & (0.041, 0.070) \\
$H=6$ & 0.059 & (0.045, 0.075) & 0.054 & (0.041, 0.070) \\
\hline\hline
\end{tabular}
\end{table}
The empirical sizes reported in Table~\ref{t:empiricalsize} validate
the results obtained in Section~\ref{ss:data}. For the DGPs we
considered, the test has overall a satisfactory, often excellent,
empirical size. The evaluation of power is more subjective as it
depends on the distance of the DGP from $H_0$. For every
simulated panel following $H_0$, we obtained an autocorrelated sibling
with a fixed level of autocorrelation: $\rho=0.38$ for $I=4$,
$\rho=0.37$ for $I=42$, and $\rho=0.19$ for $I=103$. These are the
correlation levels for which the power is almost or exactly equal to
1 if $N=120$. In light of these moderate levels of autocorrelation
and the rejection frequencies reported in Table~\ref{t:powertable}, we
conclude that our test has excellent power together with good
empirical size, such that the rejections as well as the non--rejection
of $H_0$ for the data presented in Section~\ref{ss:data} provide
reliable insights.
\begin{table}[htb]
\centering
\caption{Rejection frequencies (and confidence bands) for the test with $\alpha=0.05$ obtained from 1000 simulations of an AR(1) process for each of the data sets; $\rho$ indicates the degree of autocorrelation.}
\label{t:powertable}
\begin{tabular}{l|ll|ll}
\multicolumn{5}{c}{El Ni\~{n}o SST curves, $I=4$, $\rho=0.38$}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.987 & (0.978, 0.993) & 1.000 & (0.996, 1.000) \\
$H=4$ & 0.927 & (0.909, 0.942) & 1.000 & (0.996, 1.000) \\
$H=5$ & 0.778 & (0.751, 0.803) & 1.000 & (0.996, 1.000) \\
$H=6$ & 0.607 & (0.576, 0.637) & 0.997 & (0.991, 0.999) \\
\hline
\multicolumn{5}{c}{} \\
\multicolumn{5}{c}{US precipitation data, $I=103$, $\rho=0.19$}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.951 & (0.936, 0.964) & 1.000 & (0.996, 1.000) \\
$H=4$ & 0.981 & (0.970, 0.989) & 1.000 & (0.996, 1.000) \\
$H=5$ & 0.994 & (0.987, 0.998) & 1.000 & (0.996, 1.000) \\
$H=6$ & 0.994 & (0.987, 0.998) & 1.000 & (0.996, 1.000) \\
\hline
\multicolumn{5}{c}{} \\
\multicolumn{5}{c}{German temperature curves, $I=42$, $\rho=0.37$}\\
\hline\hline
& \multicolumn{2}{l|}{$N=60$} & \multicolumn{2}{l}{$N=120$} \\
\hline
$H=3$ & 0.790 & (0.763, 0.815) & 0.996 & (0.990, 0.999) \\
$H=4$ & 0.690 & (0.660, 0.719) & 0.986 & (0.977, 0.992) \\
$H=5$ & 0.615 & (0.584, 0.645) & 0.974 & (0.962, 0.983) \\
$H=6$ & 0.554 & (0.523, 0.585) & 0.966 & (0.953, 0.976) \\
\hline\hline
\end{tabular}
\end{table}
In our simulation study, the power generally decreases with $H$, but
it increases with $H$ for the DGP mimicking the US precipitation
panel. This agrees with the $p$\,-values reported in
Table~\ref{t:testresults}. The issue of the selection of $H$ is a
difficult one, and it is not satisfactorily solved even for the
standard Ljung--Box--Pierce test for a single scalar time series.
Statistical software packages display the $p$\,-values, often in the
form of a graph, as a function of $H$. If for some relatively large
range of $H$ the $p$\,-values are above the 5\% level, $H_0$ is
accepted; if they are below, $H_0$ is rejected. In mixed cases, the
test is found to be inconclusive. The same strategy can be followed
in the application of our test. In addition, some background
knowledge of the science problem may be utilized. In the SST and US
precipitation examples, the temperature and rainfall patterns are
known to reoccur every 3--6 years, so importance was attached to these
lags $H$.
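With the \verb|panel_test| sketch from Section~\ref{ss:fsi}, such a graph of $p$\,-values can be produced along the following lines (again assuming {\tt Xlist} holds the panel on a common grid):
\begin{verbatim}
pv <- sapply(1:10, function(H) panel_test(Xlist, H)$p.value)
plot(1:10, pv, type = "b", xlab = "H", ylab = "p-value")
abline(h = 0.05, lty = 2)
\end{verbatim}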
\appendix
\section{Proof of Theorems~\ref{t:multi} and \ref{t:multi:HA}} \label{s:multi}
The plan of the proof of Theorem~\ref{t:multi} is as follows.
In Lemma~\ref{l:2}, we show that the convergence to the
normal limit holds if instead of projections on the
EFPCs $\hv_{i,j}$, projections on the $v_{i,j}$ are used.
Recall that $v_{i,j}$ is the $j^\text{th}$ FPC of the $i^\text{th}$ functional
time series in the panel.
The statistic constructed using the $v_{i,j}$ is denoted by
$Q_N$.
The proof of Lemma~\ref{l:2} relies on Theorem~\ref{t:norm}.
Next, we show in Lemmas \ref{l:3} and \ref{l:4} that the
transition from $Q_N$ to $\widehat Q_N$ involves
asymptotically negligible terms. Lemma~\ref{l:props}
collects several properties referred to in the proofs. {The proof of Theorem \ref{t:multi:HA} is given at the end of this section.}
Let $\bX_{i,n}$ be the column vector of length $p(i)$ defined by
$
\bX_{i,n} = [\langle X_{i,n}, v_{i,j} \rangle, \ 1 \le j \le p(i)]^\top.
$
By stacking these $I$ vectors on top of each other, we
construct a column vector of length $p= \sum_{i=1}^I p(i)$
defined by
\[
\bX_n = [ \bX_{1,n}^\top, \ldots, \bX_{I,n}^\top]^\top.
\]
We abuse notation slightly as $\bX_n$ was used earlier to reference the functional panel vector, but throughout this section $\bX_n$ will be defined as above. We allow the number of time series, $I$, and/or the number of
FPCs, $p(i)$, used for each series to increase with the temporal sample
size, $N$, in any way which implies that $p=p_N$ increases to
infinity. Recall that
$
\bC_{0,N} = \E \lb\bX_n \bX_n^\top \rb
$
is the $p_N\times p_N$ covariance matrix whose eigenvalues are
$\gamma_1 \ge \ldots \ge \gamma_{p_N}$.
Our first lemma contains two bounds involving the $\gamma_j$,
which will be used throughout the proof of Theorem~\ref{t:multi}.
\begin{lemma}\label{l:bound} We have the bounds
\[
\sum_{j=1}^{p_N} \gamma_j \leq I M^{1/2}
\ \ \ \mbox{\rm and} \ \ \
p_N \leq \gamma_{p_N}^{-1} I M^{1/2},
\]
where $M$ is the bound in Assumption~\ref{a:1}.
\begin{proof}
Notice that
\[
\sum_{j=1}^{p_N} \gamma_j = \mbox{trace}(\bC_{0,N}) = \E |\bX_n|^2
\leq \sum_{i=1}^I \E \|X_{i,n}\|^2.
\]
Applying Jensen's inequality and Assumption \ref{a:1} gives the first claim.
We then immediately obtain the second claim since
$
\gamma_{p_N} p_N \leq \sum_{j=1}^{p_N} \gamma_j.
$
\end{proof}
\end{lemma}
\begin{lemma} \label{l:2}
If Assumptions \ref{a:1} and \ref{a:pn2} hold then,
under $H_0$,
\[
\frac{Q_N - p_N^2 H}{p_N \sqrt{2H}} \overset{\cD}{\to} \mathcal{N}(0,1).
\]
\end{lemma}
\begin{proof}
The standardized vectors used in Theorem~\ref{t:norm} are given by
\[
\bZ_{n,N} = \bC^{-1/2}_{0,N} \bX_{n}.
\]
Using the mixed--product property of the Kronecker product, we can express each lag $h$ term of $Q_N$ as
\begin{align*}
N V_h^\top(\bC_{0,N} \otimes \bC_{0,N})^{-1} V_h
& = N^{-1} \sum_{n=1}^{N-h} \sum_{\np=1}^{N-h} (\bX_n \otimes \bX_{n+h}) ^\top(\bC_{0,N}^{-1/2} \otimes \bC_{0,N}^{-1/2})
(\bC_{0,N}^{-1/2} \otimes \bC_{0,N}^{-1/2})
(\bX_{\np} \otimes \bX_{\np+h}) \\
& = N^{-1} \sum_{n=1}^{N-h} \sum_{\np=1}^{N-h} (\bZ_{n,N} \otimes \bZ_{n+h,N})^\top (\bZ_{\np,N} \otimes \bZ_{\np+h,N}) \\
& = N^{-1} \left| \sum_{n=1}^{N-h} \bZ_{n,N} \otimes \bZ_{n+h,N} \right|^2,
\end{align*}
where $V_h$ denotes the analog of $\widehat V_h$ computed with the true FPCs $v_{i,j}$; summing over $h$ shows that $Q_N$ has exactly the form appearing in Theorem~\ref{t:norm}.
So we need to establish that
\[
N^{-1/2} \E|\bZ_{1,N}|^4 \to 0.
\]
By definition we have that
\[
|\bZ_{1,N}|^4 = (\bX_{1}^\top \bC^{-1}_{0,N} \bX_{1})^2.
\]
Applying the Cauchy--Schwarz and operator norm inequality we have that
\[
\bX_{1}^\top \bC^{-1}_{0,N} \bX_{1} \leq | \bX_{1} | |\bC^{-1}_{0,N}
\bX_{1}| \leq | \bX_1|^2 \| \bC^{-1}_{0,N} \|.
\]
Since $\|\bC^{-1}_{0,N} \|$ is the largest eigenvalue of
$\bC^{-1}_{0,N} $, it is simply the reciprocal of the smallest
eigenvalue of $\bC_{0,N} $. Therefore we have
\[
|\bZ_{1,N}|^4 \leq | \bX_{1}|^4 \gamma_{p_N}^{-2}.
\]
The norm of $\bX_{1}$ can be expressed as
\[
|\bX_{1}|^2 = \sum_{i=1}^I \sum_{j=1}^{p(i)} \langle X_{i,1}, v_{j,i} \rangle^2
\leq \sum_{i=1}^I \|X_{i,1}\|^2.
\]
A final application of the Cauchy--Schwarz inequality yields
\[
|\bZ_{1,N}|^4
\leq \gamma_{p_N}^{-2} I \sum_{i=1}^I \|X_{i,1}\|^4.
\]
Taking expected values we have that
\[
\E|\bZ_{1,N}|^4 \leq \gamma_{p_N}^{-2} I^2 M.
\]
Therefore, by Assumption \ref{a:pn2},
\[
N^{-1/2} \E|\bZ_{1,N}|^4 \leq N^{-1/2} \gamma_{p_N}^{-2} I^2 M = o(1).
\]
Finally, to apply Theorem \ref{t:norm} we only need to show that $p_N N^{-2/3} \to 0$. Using Lemma \ref{l:bound} we have that
\[
p_N N^{-2/3}\leq N^{-2/3} \gamma_{p_N}^{-1} I M^{1/2},
\]
which is $o(1)$ by Assumption \ref{a:pn2}.
\end{proof}
For the next lemma it will be notationally useful to define the lag $h$ cross covariance operators:
\[
\Delta_{i,\ip,h} = N^{-1/2} \sum_{n=1}^{N - h} X_{i,n} \otimes X_{\ip, n+h}.
\]
\begin{lemma} \label{l:3}
If Assumptions \ref{a:1} and \ref{a:pn2} hold then,
under $H_0$,
\[
\frac{Q_N - N \sum_{h=1}^H \widehat V_h^\top
(\bC_{0,N} \otimes \bC_{0,N})^{-1} \widehat V_h}{p_N \sqrt{2H}}
= o_P(1).
\]
\end{lemma}
\begin{proof}
A minor rearrangement yields
\[
\frac{Q_N - N \sum_{h}^H \widehat V_h^\top (\bC_{0,N} \otimes \bC_{0,N})^{-1} \widehat V_h}{p_N \sqrt{2H}}
= \frac{N \sum_{h}^H ( V_h - \widehat V_h )^\top (\bC_{0,N} \otimes \bC_{0,N})^{-1} (V_h+\widehat V_h)}{p_N \sqrt{2H}}.
\]
The Cauchy--Schwarz and operator norm inequality yield
\begin{align*}
|( V_h - \widehat V_h )^\top (\bC_{0,N} \otimes \bC_{0,N})^{-1} (V_h+\widehat V_h)|
\leq \|V_h - \widehat V_h\| \|V_h + \widehat V_h\| \gamma_{p_N}^{-2}.
\end{align*}
For each coordinate of $V_h$, there exists $i,j, \ip, \jp$ such that the coordinate can be expressed as
\[
N^{-1} \sum_{n=1}^{N-h} \langle X_{i,n}, v_{j,i} \rangle \langle X_{i^\prime,n+h}, v_{j^\prime,i^\prime} \rangle
= N^{-1} \sum_{n=1}^{N-h} \langle X_{i,n} \otimes X_{i^\prime,n+h}, v_{j,i}\otimes v_{j^\prime,i^\prime} \rangle
= N^{-1/2} \langle \Delta_{i,\ip,h} , v_{j,i}\otimes v_{j^\prime,i^\prime} \rangle.
\]
Therefore
\begin{align*}
\|V_h - \widehat V_h\|^2
& = N^{-1} \sum_{i, \ip}^I \sum_{j}^{p(i)} \sum_{\jp}^{p(\ip)} \langle \Delta_{i,\ip,h} , v_{j,i}\otimes v_{j^\prime,i^\prime}
- \hat v_{j,i}\otimes \hat v_{j^\prime,i^\prime} \rangle^2 \\
& \leq N^{-1} \max \|\Delta_{i,\ip,h}\|^2 \sum_{i, \ip}^I \sum_{j}^{p(i)} \sum_{\jp}^{p(\ip)} \| v_{j,i}\otimes v_{j^\prime,i^\prime}
- \hat v_{j,i}\otimes \hat v_{j^\prime,i^\prime}\|^2 \\
& \leq N^{-1} \max \|\Delta_{i,\ip,h}\|^2 \sum_{i, \ip}^I \sum_{j}^{p(i)} \sum_{\jp}^{p(\ip)} ( \alpha_{i,j}^{-1} \|C_{ii} - \widehat C_{ii}\| +
\alpha_{\ip,\jp}^{-1} \|C_{\ip \ip} - \widehat C_{\ip \ip}\| )^2 \\
& \leq N^{-1} \max \|\Delta_{i,\ip,h}\|^2 \max \|C_{ii} - \widehat C_{ii}\|^2 \Gamma_{N}\\
& = N^{-2} I^3 \Gamma_{N} O_P(1),
\end{align*}
where the last equality follows from Lemma \ref{l:props}. By Parseval's inequality
\[
\| V_h \|^2 =
N^{-1} \sum_{i, \ip}^I \sum_{j}^{p(i)} \sum_{\jp}^{p(\ip)} \langle \Delta_{i,\ip,h} , v_{j,i}\otimes v_{j^\prime,i^\prime} \rangle^2
\leq N^{-1} \sum_{i, \ip} \|\Delta_{i,\ip,h}\|^2
= N^{-1} O_P(I^2),
\]
and the same holds for $\| \widehat V_h\|^2$.
Combining everything, the original difference is of the order
\[
N p_N^{-1} \gamma_{p_N}^{-2} N^{-1} I^{3/2} \Gamma_N^{1/2}N^{-1/2} I
O_P(1) = N^{-1/2} p_N^{-1} I^{5/2} \gamma_{p_N}^{-2}
\Gamma_N^{1/2}O_P(1),
\]
which is $o_P(1)$ by Assumption \ref{a:pn2}.
\end{proof}
\begin{lemma} \label{l:4}
If Assumptions \ref{a:1} and \ref{a:pn2} hold,
then under $H_0$,
\[
\frac{ N \sum_{h=1}^H
[\widehat V_h^\top (\bC_{0,N} \otimes \bC_{0,N})^{-1} \widehat V_h -
\widehat V_h^\top (\widehat \bC_{0,N} \otimes \widehat \bC_{0,N})^{-1}
\widehat V_h]}{p_N \sqrt{2H}} =o_P(1).
\]
\end{lemma}
\begin{proof}
Using the Cauchy--Schwarz inequality and
operator norm inequality we have that
\begin{align*}
& |\widehat V_h^\top (\bC_{0,N} \otimes \bC_{0,N})^{-1} \widehat V_h -
\widehat V_h^\top (\widehat \bC_{0,N} \otimes \widehat \bC_{0,N})^{-1} \widehat V_h |\\
\leq & \| \widehat V_h\|^2 \|(\bC_{0,N} \otimes \bC_{0,N})^{-1} - (\widehat \bC_{0,N} \otimes \widehat \bC_{0,N})^{-1} \|_{\cS} \\
\leq & \| \widehat V_h\|^2 [\gamma_{p_N}^{-1} + \widehat \gamma_{p_N}^{-1} ] \| \bC_{0,N}^{-1} - \widehat \bC_{0,N}^{-1} \|_{\cS} \\
\leq & \| \widehat V_h\|^2 [\gamma_{p_N}^{-1} + \widehat \gamma_{p_N}^{-1} ] [ \gamma_{p_N}^{-1} \widehat \gamma_{p_N}^{-1} ]\| \bC_{0,N} - \widehat \bC_{0,N} \|_{\cS} \\
\leq & \| \widehat V_h\|^2 [\gamma_{p_N}^{-1} + \widehat \gamma_{p_N}^{-1} ] [ \gamma_{p_N}^{-1} \widehat \gamma_{p_N}^{-1} ]\| \bC_{0,N} - \widehat \bC_{0,N} \|.
\end{align*}
As before, we can apply Lemma \ref{l:props}.1 to obtain
\[
\| \widehat V_h\|^2 = \widehat V_h^\top \widehat V_h
= N^{-1} \sum_{i, \ip}^I \sum_{j}^{p(i)} \sum_{\jp}^{p(\ip)} \langle \Delta_{i,\ip,h} , \hat v_{j,i}\otimes \hat v_{j^\prime,i^\prime} \rangle^2
\leq N^{-1} \sum_{i, \ip}^I \| \Delta_{i,\ip,h}\|^2
= O_P(N^{-1} I^2).
\]
Turning to the eigenvalues we have that
\[
[\gamma_{p_N}^{-1} + \hat \gamma_{p_N}^{-1} ] [ \gamma_{p_N}^{-1} \hat \gamma_{p_N}^{-1} ]
= \gamma_{p_N}^{-3} \left[ \frac{\gamma_{p_N} }{\hat \gamma_{p_N}} + \frac{\gamma_{p_N}^2 }{\hat \gamma_{p_N}^2}\right].
\]
Applying Lemma \ref{l:props}.4, we can bound the difference
\[
\left|\frac{\hat \gamma_{p_N} }{ \gamma_{p_N}} - 1\right|
= \frac{|\hat \gamma_{p_N} - \gamma_{p_N}| }{ \gamma_{p_N}}
\leq \frac{ \| \widehat \bC_{0,N} - \bC_{0,N}\| }{\gamma_{p_N}}
= O_P(\gamma_{p_N}^{-1} I N^{-1/2} \Gamma_N^{1/2}) =o_P(1).
\]
Putting everything together, the original difference is
\[
O_P(\gamma_{p_N}^{-3} I^3 N^{-1/2} p_N^{-1} \Gamma_N^{1/2} ) = o_P(1),
\]
by Assumption \ref{a:pn2}.
\end{proof}
Our last lemma contains several properties which were
used in the arguments developed above.
\begin{lemma} \label{l:props} If Assumptions \ref{a:1} and \ref{a:pn2}
hold, then we have the following properties:
\begin{enumerate}
\item $\max \| \Delta_{i,\ip,h}\|^2 = O_P(I^2)$ under $H_0$.
\item $\max \| C_{i \ip}\| = O(1)$.
\item $\max \| \widehat C_{ii} - C_{ii} \| = O_P(N^{-1/2} I )$ and $\max \| \widehat C_{ii} - C_{ii} \|^2 = O_P(N^{-1} I )$.
\item $\| \widehat \bC_{0,N} - \bC_{0,N} \|
= O_P(I N^{-1/2} \Gamma_N^{1/2})$.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item For each fixed $i$ and $\ip$, we have that
\begin{align*}
\E\| \Delta_{i,\ip,h}\|^2 & = N^{-1} \sum_{n=1}^{N-h} \sum_{\np=1}^{N-h} \E \langle X_{i,n} \otimes X_{\ip,n+h}, X_{i,\np} \otimes X_{\ip,\np+h} \rangle \\
& = N^{-1} \sum_{n=1}^{N-h} \E \langle X_{i,n} \otimes X_{\ip,n+h}, X_{i,n} \otimes X_{\ip,n+h} \rangle \\
& \leq \E\|X_{i,1}\|^2 \E\|X_{\ip,1}\|^2 \leq M.
\end{align*}
Therefore we have that
\[
\E \left[\max \| \Delta_{i,\ip,h}\|^2\right] \leq I^2 M,
\]
and the result follows from Markov's inequality.
\item By Jensen's inequality we have that
\begin{align*}
\|C_{i \ip}\| &\leq \E\|X_{i,n}\otimes X_{\ip,n}\| = \E[\|X_{i,n}\| \|X_{\ip,n}\|] \leq \sqrt{\E\|X_{i,n}\|^2 \E\|X_{\ip,n}\|^2} \\
& \leq (\E\|X_{i,n}\|^4 \E\|X_{\ip,n}\|^4)^{1/4} \leq M^{1/2},
\end{align*}
which proves the claim.
\item The argument is the same as in 1.
\item By the triangle inequality we have
\[
\| \widehat \bC_{0,N} - \bC_{0,N} \|
\leq \| \widehat \bC_{0,N} - \tilde\bC_{0,N} \|
+ \| \tilde \bC_{0,N} - \bC_{0,N}\|
\]
where $ \tilde\bC_{0,N}$ is formed by projecting the $C_{i\ip}$
onto the estimated PCs. So the square of the first term is given by
\begin{align*}
\| \widehat \bC_{0,N} - \tilde \bC_{0,N} \|^2
& = \sum_{i, \ip}^I \sum_{j=1}^{p(i)} \sum_{\jp=1}^{p(\ip)}
\left( N^{-1} \sum_{n=1}^N \langle X_{i,n} \otimes X_{\ip,n}, \hat v_{j,i} \otimes \hat v_{\jp,\ip} \rangle
-\langle C_{i,\ip}, \hat v_{j,i} \otimes \hat v_{\jp,\ip} \rangle \right)^2 \\
& \leq \sum_{i, \ip}^I \left\| N^{-1} \sum_{n=1}^N X_{i,n} \otimes X_{\ip,n} - C_{i, \ip}\right\|^2
= O_P(N^{-1} I^2).
\end{align*}
The square of the second term is given by
\begin{align*}
\| \tilde \bC_{0,N} - \bC_{0,N}\|^2
& = \sum_{i, \ip}^I \sum_{j=1}^{p(i)} \sum_{\jp=1}^{p(\ip)} \langle C_{i, \ip}, \hat v_{j,i} \otimes \hat v_{\jp,\ip} - v_{j,i} \otimes v_{\jp,\ip} \rangle^2 \\
& \leq \sum_{i, \ip}^I \sum_{j=1}^{p(i)} \sum_{\jp=1}^{p(\ip)} \|C_{i,\ip}\|^2 (\|v_{j,i} - \hat v_{j,i}\| + \|v_{\jp,\ip} - \hat v_{\jp,\ip}\| )^2 \\
& \leq \sum_{i, \ip}^I \sum_{j=1}^{p(i)} \sum_{\jp=1}^{p(\ip)} \|C_{i,\ip}\|^2 (\sqrt{2} \alpha_{j,i}^{-1} \| C_{ii} - \widehat C_{i i}\| + \sqrt{2} \alpha_{\jp,\ip}^{-1} \| C_{\ip \ip} - \widehat C_{\ip \ip}\| )^2 \\
& \leq 2 \max \|C_{i,\ip}\|^2 \max \| C_{ii} - \widehat C_{ii} \|^2
\sum_{i, \ip}^I \sum_{j=1}^{p(i)} \sum_{\jp=1}^{p(\ip)} (\alpha_{j,i}^{-1} + \alpha_{\jp,\ip}^{-1})^2 \\
& = I N^{-1}\Gamma_N O_P(1).
\end{align*}
Therefore both terms are asymptotically bounded by
$I^2 N^{-1} \Gamma_N O_P(1)$, which proves the claim.
\end{enumerate}
\end{proof}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{t:multi:HA}]
Analogous results to Lemmas \ref{l:3} and \ref{l:4} are obtained in the same way and are thus omitted for brevity. We mention that the key difference is that the norms $\|\Delta_{i,\ip,h}\|$ are no longer of order $O_P(1)$, but of order $O_P(N^{1/2})$. The $\widehat C_{ii}$ are still root-$N$ consistent since the series is assumed to be $L^4$-$m$-approximable. We therefore only show that
\[
\frac{Q_N - p_N^2 H}{p_N \sqrt{2H}} \overset{P}{\to} \infty.
\]
We assume that those lag terms which exhibit correlation are contained in the set $\cH^\star$; we therefore begin with the lower bound
\[
Q_N \geq N \sum_{h \in \cH^\star} \bV_{h}^\top ( \bC_0 \otimes \bC_0 )^{-1}\bV_{h}.
\]
Since the smallest eigenvalue of $( \bC_0 \otimes \bC_0 )^{-1}$ is $\gamma_1^{-2}$ we can bound $Q_N$ below using
\[
Q_N \geq N \gamma_1^{-2} \sum_{h \in \cH^\star} \| \bV_{h}\|^2.
\]
Isolating the pairs $\cI^\star_h$ which are assumed to be correlated (at a lag of $h$) we can further bound below as
\[
Q_N \geq \gamma_1^{-2}N \sum_{h \in \cH^\star} \sum_{(i,j) \in \cI_h^\star} \left( N^{-1}\sum_{n=1}^N \langle X_{n}, v_{i} \rangle \langle X_{n+h}, v_{j} \rangle \right) ^2
=\gamma_1^{-2} N R (1+o_P(1)) \sum_{h \in \cH^\star} | \cI_h^\star|
\]
where the last equality holds since, by Assumption \ref{a:HA}, the summands form a stationary and ergodic sequence. Combining Lemma \ref{l:bound} with Assumption \ref{a:pn2}, $N \gamma_1^{-2}$ tends to infinity faster than $p_N^2$, and the claim follows. We also see that the effect of having more indices which exhibit correlation is additive.
\end{proof}
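
For illustration only, a statistic of this type can be computed along the following lines; this is a schematic Python sketch in which the normalisation and the construction of the scores may differ in detail from the definitions used above, and all names are placeholders.
\begin{verbatim}
import numpy as np

# Schematic sketch of a portmanteau-type statistic built from lag-h
# cross-covariances of projected scores, whitened by the lag-0 covariance
# and standardised as (Q_N - p_N^2 H) / (p_N sqrt(2 H)).
rng = np.random.default_rng(0)
N, p, H = 500, 6, 5                      # sample size, number of scores, lags
scores = rng.standard_normal((N, p))     # stand-in for the projected scores

C0 = scores.T @ scores / N               # lag-0 covariance of the scores
C0inv = np.linalg.inv(C0)

Q = 0.0
for h in range(1, H + 1):
    Ch = scores[:-h].T @ scores[h:] / N  # lag-h cross-covariance
    Q += N * np.trace(Ch.T @ C0inv @ Ch @ C0inv)

stat = (Q - p**2 * H) / (p * np.sqrt(2 * H))
print(stat)                              # approximately standard normal under H0
\end{verbatim}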
\end{document} |
\begin{document}
\title{Controlling the transport of an ion: Classical and quantum
mechanical solutions}
\author{H A F\"urst$^1$, M H Goerz$^2$, U G Poschinger$^1$, M
Murphy$^3$, S Montangero$^3$, T
Calarco$^3$, F Schmidt-Kaler$^1$, K Singer$^1$, C P Koch$^2$}
\address{$^1$QUANTUM, Institut f\"ur Physik, Universit\"at Mainz,
D-55128 Mainz, Germany}
\address{$^2$Theoretische Physik, Universit\"at Kassel,
Heinrich-Plett-Stra{\ss}e 40, D-34132 Kassel, Germany}
\address{$^3$Institut f\"ur Quanteninformationsverarbeitung,
Universit\"at Ulm, D-89081 Ulm, Germany}
\ead{[email protected]}
\begin{abstract}
We investigate the performance of different control techniques for
ion transport
in state-of-the-art segmented miniaturized ion
traps. We employ numerical optimization of classical
trajectories and quantum wavepacket propagation as well as analytical
solutions derived from invariant based inverse engineering and
geometric optimal control. We find that accurate shuttling can be
performed with operation times below the
trap oscillation period. The maximum speed is limited
by the maximum acceleration that can be exerted on the ion.
When using controls obtained from classical dynamics for wavepacket
propagation, wavepacket squeezing is the only quantum effect that
comes into play for a large range of trapping parameters. We show that
this can be corrected by a compensating
force derived from invariant based inverse engineering, without a
significant increase in the operation time.
\end{abstract}
\pacs{37.10.Ty,03.67.Lx,02.30.Yy}
\maketitle
\section{Introduction}
Trapped laser-cooled ions represent a versatile experimental platform
offering near-perfect control and tomography of a few body system in
the classical and quantum
domain~\cite{CIRAC1995,Blatt2008,WINELAND1998,CASANOVA2012}. The fact
that both internal (qubit) and external (normal modes of oscillation)
degrees of freedom can be manipulated in the quantum regime allows for
many applications in the fields of quantum information processing and
quantum simulation~\cite{MONZ2011,GERRITSMA2011,RICHERME2013}.
Currently, a significant research effort is devoted to scaling
these experiments up to larger numbers of qubits. A promising
technology to achieve this goal is given by \textit{microstructured segmented
ion traps}, where small ion groups are stored in local potentials
and ions are shuttled within the trap by applying suitable voltage
ramps to the trap electrodes~\cite{KIELPINSKY2002}. In order to enable
scalable experiments in the quantum domain, these shuttling operations
have to be performed such that the required time is much shorter than
the timescales of the relevant decoherence processes. At the same
time, one needs to avoid excitation of the ion's motion after the
shuttling operation. These opposing requirements clearly call for the
application of advanced control techniques.
Adiabatic ion shuttling operations in a segmented trap have been demonstrated
in Ref.~\cite{ROWE}. Recent experiments have achieved
non-adiabatic shuttling of single ions within a few trap oscillation
cycles while retaining the quantum ground state of
motion~\cite{WALTHER2012,Bowler2012}. This was made possible by
finding `sweet spots'\ in the shuttling time or by removing the excess
energy accumulated during the shuttling with kicks of the trap
potential. Given the experimental constraints, it is natural to ask
what the speed limitations for the shuttling process are. The impact
of quantum effects
for fast shuttling operations, i.e., distortions of the wavepacket,
also needs to be analyzed, and it needs to be assessed whether
quantum control techniques~\cite{SomloiCP93,ZhuJCP98,ReichJCP12}
may be applied to avoid these.
Moreover, from a control-theoretical perspective and in view of
possible future application in experiment, it is of interest to
analyze how optimized voltage ramps can be obtained.
Optimal control theory (OCT) combined with
classical equations of motion was employed in Ref.~\cite{Schulz2006}
to obtain optimized voltage
ramps. Quantum effects were predicted not to play a role unless the
shuttling takes place on a timescale of a single oscillation period.
In Refs.~\cite{CHEN2011,Torrontegui2011}, control techniques such as
inverse engineering were applied to atomic shuttling problems. The
transport of atomic wavepackets in optical dipole potentials was
investigated using OCT with quantum mechanical equations of
motion~\cite{CALARCO2004,deChiaraPRA08,MurphyPRA09}.
The purpose of the present paper is to assess available
optimization strategies for the specific problem of transporting a
single ion in a microchip ion trap and to utilize them to study the
quantum speed limit for this
process~\cite{GiovannettiPRA03,CanevaPRL09}, i.e., to determine the
shortest possible time for the transport. Although parameters of the
trap architecture of Ref.~\cite{SCHULZ2008} are used
throughout the entire manuscript, we strongly emphasize that the
qualitative results we obtain hold over a wide parameter regime. They
are thus generally valid for current segmented ion traps, implemented
with surface electrode geometry~\cite{SCHULZ2008,AMINI2011}
or more traditional multilayer geometry.
The paper is organized as
follows. We start by outlining the theoretical framework in
Sec.~\ref{subs:ttcp}. In
particular we review the combination of numerical optimization with
classical dynamics in Sec.~\ref{subs:kloct} and with wavepacket motion
in Sec.~\ref{subs:qmoct}. Analytical solutions to the control problem,
obtained from the harmonic approximation of the trapping potential,
are presented in Secs.~\ref{subsec:geometric} and~\ref{subs:invmeth0}.
Section~\ref{subs:apcontrol} is devoted to the presentation and
discussion of our results. The control solutions for purely classical
dynamics of the ion, obtained both numerically and analytically, yield
a minimum transport duration as shown in Sec.~\ref{subs:kloct2}. We
discuss in Sec.~\ref{subs:qmprop2}, how far these solutions
correspond to the quantum speed limit. Our results obtained by
invariant-based inverse engineering are presented in
Sec.~\ref{subs:invmeth}, and we analyze the feasibility of quantum optimal
control in Sec.~\ref{subs:qmoct2}. Section~\ref{sec:concl} concludes
our paper.
\section{Methods for trajectory control and wavepacket
propagation}\label{subs:ttcp}
In the following we present the numerical methods we employ to control
the transport of a single trapped ion. Besides numerical optimization
describing the motion of the ion either with classical mechanics or
via wavepacket propagation, we also utilize two analytical
methods. This is made possible by the trap geometry which leads to an
almost perfectly harmonic trapping potential for the ion at all
times.
\subsection{Prerequisites}
We assume ponderomotive confinement of the ion
at the rf-node of a linear segmented Paul trap and a purely
electrostatic confinement along the trap axis $x$, see Fig.~\ref{fig:electrodes}.
This enables us to treat the dynamics only along this dimension. We consider
transport of a single ion with mass $m$ between two neighboring electrodes,
which give rise to individual potentials centered at $x_1$ and $x_2$. This may
be scaled up to $N$ electrodes and longer transports without any loss of
generality.
\begin{figure}
\caption{(a) Ion shuttling in a segmented linear trap. The
dc electrodes form the axial potential for the ion transport along
the $x$-axis. The rf electrodes
providing radial confinement of the ions perpendicular to the $x$-axis are not shown. (b)
Axial electrode potentials formed by applying a
dc voltage to a facing pair of trap segments.
For the specific scenario presented in this manuscript, we use $d =
280\,\mu$m, $g=30\,\mu$m and $h=500\,\mu$m.
Each potential is
generated from a single pair of segments, depicted in red in
(a) and biased to 1$\,$V with all the other dc electrodes grounded.
}
\label{fig:electrodes}
\end{figure}
The ion motion is controlled by a time-dependent electrostatic potential,
\begin{equation}\label{eq:V}
V(x,t)= U_1(t)\phi_1(x)+U_2(t)\phi_2(x)\,,
\end{equation}
with segment voltages $U_i(t)$, and normalized electrode potentials on the
trap axis, $\phi_i(x)$. They are dimensionless electrostatic potentials obtained
with a bias of $+1$~V at electrode $i$ and the remaining electrodes
grounded (see Fig.~\ref{fig:electrodes}(b)).
These potentials are calculated by using a fast multipole boundary element
method~\cite{SINGER2010} for the trap geometry used in recent
experiments~\cite{WALTHER2012} and shown in Fig.~\ref{fig:electrodes}. In order to
speed up numerics and obtain smooth derivatives, we calculate values for
$\phi_i(x)$ on a mesh and fit rational functions to the resulting data. The spatial derivatives $\phi_i'(x)$ and $\phi_i''(x)$ are obtained by differentiation of the fit functions.
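
As an illustration of this fit-and-differentiate step, a minimal sketch in Python (with synthetic, dimensionless test data in place of the boundary-element output and illustrative orders; this is not the code used to produce the results below) could read:
\begin{verbatim}
import numpy as np

# Illustrative sketch: fit a low-order rational function P(x)/Q(x) to sampled
# values of one electrode potential and differentiate the fit analytically.
def fit_rational(x, phi, deg_p=4, deg_q=4):
    # linearised least squares for phi*Q(x) - P(x) = 0, Q(x) = 1 + sum_j b_j x^j
    A = np.hstack([np.vander(x, deg_p + 1, increasing=True),
                   -phi[:, None] * np.vander(x, deg_q + 1, increasing=True)[:, 1:]])
    coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
    P = np.polynomial.Polynomial(coef[:deg_p + 1])
    Q = np.polynomial.Polynomial(np.concatenate(([1.0], coef[deg_p + 1:])))
    return P, Q

def phi_and_derivatives(P, Q, x):
    # phi = P/Q, phi' = (P'Q - PQ')/Q^2, phi'' from differentiating once more
    P1, Q1 = P.deriv(), Q.deriv()
    P2, Q2 = P1.deriv(), Q1.deriv()
    dphi  = (P1(x) * Q(x) - P(x) * Q1(x)) / Q(x)**2
    d2phi = (P2(x) * Q(x) - P(x) * Q2(x)) / Q(x)**2 \
            - 2.0 * Q1(x) * (P1(x) * Q(x) - P(x) * Q1(x)) / Q(x)**3
    return P(x) / Q(x), dphi, d2phi

# synthetic test data (dimensionless x); stands in for the calculated potentials
xs  = np.linspace(-2.0, 2.0, 200)
phi = 0.3 / (1.0 + xs**2)
P, Q = fit_rational(xs, phi)
print(np.abs(phi - phi_and_derivatives(P, Q, xs)[0]).max())  # small fit residual
\end{verbatim}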
Previous experiments have shown that the calculated potentials allow for
the prediction of ion positions and trap frequencies with an accuracy
of one per cent~\cite{Huber2010,brownnutt2012spatially} which
indicates the precision of the microtrap fabrication process. An
increase in the precision can be achieved by calibrating the trapping
potentials using
resolved sideband spectroscopy. This is sufficient to warrant the
application of control techniques as studied here.
For the geometry of the trap described in Ref.~\cite{WALTHER2012}, we
obtain harmonic trap
frequencies of about $\omega=2\pi\cdot$1.3~MHz with a bias voltage of -7~V
at a single trapping segment. The individual segments are spaced 280~$\mu$m
apart. Our goal is to shuttle a single ion along this distance within a time span on
the order of the oscillation period by changing the voltages $U_1$ and $U_2$,
which are supposed to stay within a predetermined range that is set by
experimental constraints. We seek to minimize the amount of motional excitation
due to the shuttling process.
\subsection{Numerical optimization with classical dynamics}
\label{subs:kloct}
Assuming the ion dynamics to be well described classically,
we optimize the time dependent voltages in order to reduce the amount
of transferred energy. This corresponds to minimizing
the functional $J$,
\begin{equation}\label{eq:J}
J = (E(T)-E_{\rm T})^2 + \sum_i \int_0^T \frac{\lambda_a}{S(t)} \Delta U_i(t)^2 \ensuremath{\, \textnormal{d}} t\,,
\end{equation}
i.e., to minimizing
the difference between desired energy $E_{\rm T}$ and the energy
$E(T)$ obtained at the final time $T$.
$\Delta U_i(t)= U_i^{n+1}(t) - U_i^{n}(t)$ is the update of each
voltage ramp in an iteration step $n$, and the second term in
Eq.~\eref{eq:J} limits the
overall change in the integrated voltages during one iteration.
The weight $\lambda_a$ is used to tune the convergence and limit the
updates. To suppress updates near $t=0$ and $t=T$ the shape function
$S(t) \geq 0$ is chosen to be zero at these points in time. For a
predominantly harmonic axial confinement, the final energy is given by
\begin{equation}\label{eq:ET}
E(T) = \frac{1}{2} m \dot{x}^2(T) + \frac{1}{2} m \omega^2 (x(T)-x_2)^2\,.
\end{equation}
In order to obtain transport without motional excitation, we choose
$E_{\rm T} = 0$. Evaluation of Eq.~\eref{eq:ET} requires solution of
the classical equation of motion. It reads
\begin{equation}\label{eq:class:eom}
\ddot x(t) = -\frac{1}{m} \left.\frac{\partial}{\partial x}
V(x,t)\right|_{x=x(t)} = -\frac{1}{m} \sum_{i=1}^2
U_i(t)\phi_i'\left(x(t)\right)
\end{equation}
for a single ion trapped in the potential of Eq.~\eref{eq:V} and is
solved numerically using a \textit{Dormand-Prince Runge-Kutta}
integrator~\cite{SINGER2010}.
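
For orientation, the forward propagation and the evaluation of the final energy entering Eq.~\eref{eq:J} can be sketched as follows. This is a schematic Python example in the harmonic approximation, with the trap centre dragged along a smooth trajectory $\alpha(t)$ rather than driven by the calculated electrode potentials; all parameters are illustrative, and SciPy's \texttt{RK45} integrator is a Dormand--Prince scheme.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Schematic example (harmonic approximation, illustrative parameters):
# propagate the classical equation of motion for a trap centre dragged along
# a smooth trajectory alpha(t) and evaluate the final energy E(T).
m     = 40 * 1.66054e-27           # 40Ca+ mass [kg]
hbar  = 1.054571817e-34
omega = 2 * np.pi * 1.3e6          # axial trap frequency [rad/s]
x1, x2, T = 0.0, 280e-6, 1.0e-6    # start/end position [m], transport time [s]

def alpha(t):                      # smooth transport function (illustrative)
    s = t / T
    return x1 + (x2 - x1) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def eom(t, y):                     # harmonic well centred at alpha(t)
    x, v = y
    return [v, -omega**2 * (x - alpha(t))]

sol = solve_ivp(eom, (0.0, T), [x1, 0.0], method="RK45",
                rtol=1e-10, atol=1e-12)
xT, vT = sol.y[0, -1], sol.y[1, -1]
E_T = 0.5 * m * vT**2 + 0.5 * m * omega**2 * (xT - x2)**2
print("final excitation [phonons]:", E_T / (hbar * omega))
\end{verbatim}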
Employing Krotov's method for optimal control~\cite{Konnov99}
together with the classical equation of motion,
Eq.~\eref{eq:class:eom}, we obtain the
following iterative update rule:
\begin{equation}\label{eqn:krotklupd}
\Delta U_i(t) = -
\frac{S(t)}{\lambda_{a}} p_2^{(n)}(t) \phi_i'(x^{(n+1)}(t))\,,
\end{equation}
where $n$ denotes the previous iteration step. $\mathbf{p}=(p_1,p_2)$
is a costate vector which evolves according to
\begin{equation}
\dot{\mathbf{p}}(t) = - \left(\begin{array}{c} \frac{p_2}{m} V''(U_i(t), x(t)) \\ p_1 \end{array}\right)\,,
\end{equation}
with its `initial' condition defined at the final time $T$:
\begin{equation}\label{eqn:pTkrot}
\mathbf{p}(T) = - 2 m \left(E(T)-E_{\rm T}\right) \left(\begin{array}{c} \omega^2 (x(T) - x_2) \\ \dot{x}(T) \end{array}\right)\,.
\end{equation}
The algorithm works by propagating $x(t)$ forward in time, solving
Eq.~\eref{eq:class:eom} with an initial guess for $U_i(t)$ and
iterating the following steps until the desired value of $J$ is
achieved:
\begin{enumerate}
\item Obtain $\mathbf{p}(T)$ according to Eq.~\eref{eqn:pTkrot} and propagate $\mathbf{p}(t)$ backwards in time using its equation of motion.
\item Update the voltages according to Eq.~\eref{eqn:krotklupd} at each time
step while propagating $x(t)$ forward in time with the immediately
updated voltages.
\end{enumerate}
The optimization algorithm shows rapid convergence and brings the final
excitation energy $E(T)$ as close to zero as desired.
An example of an optimized voltage ramp is shown in
Fig.~\ref{fig:guesscfoct}(a). The voltages obtained are not symmetric under
time reversal in contrast to the initial guess. This is rationalized
by the voltage updates occurring only during forward propagation, which
breaks the time reversal symmetry.
We find this behavior to be typical for the Krotov algorithm combined
with the classical equation of motion.
\subsection{Numerical optimization of wavepacket propagation}\label{subs:qmoct}
When quantum effects are expected to influence the transport, the ion has to
be described by a wave function $\Psi(x,t)$. The control target is
then to perfectly transfer the initial wavefunction, typically the
ground state of the trapping potential centered around position $x_1$,
to a target wavefunction, i.e., the
ground state of the trapping potential centered around position $x_2$.
This is achieved by minimizing the functional
\begin{equation}\label{eq:qmkrotj}
J = 1 - \left\vert \int\limits_{-\infty}^{\infty}
\Psi(x,T)^* \Psi^{\operatorname{tgt}}(x) \ensuremath{\, \textnormal{d}} x
\right\vert^2
+ \int\limits_{0}^{T} \frac{\lambda_a}{S(t)} \sum_i
\Delta U_i(t)^2 \ensuremath{\, \textnormal{d}} t\,.
\end{equation}
Here, $\Psi(x,T)$ denotes the wave function of the single ion propagated
with the set of voltages $U_i(t)$, and $\Psi^{\operatorname{tgt}}(x)$ is the
target wave function.
The voltage updates $\Delta U_i(t)$, scaling factor $\lambda_a$ and shape
function $S(t)$ have identical meanings as in
Sec.~\ref{subs:kloct}. $\Psi(x,T)$ is obtained by solving
the time-dependent Schr\"odinger equation (TDSE),
\begin{eqnarray}
i \hbar \frac{\partial}{\partial t} \Psi(x,t) = \Op H(t) \Psi(x,t)
= \left( -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}
+ \sum_{i=1}^{N} U_i(t) \phi_i(x)
\right) \Psi(x,t)\,.
\label{eq:tdse}
\end{eqnarray}
As in the classical case, optimization of the transport problem is
tackled using Krotov's
method~\cite{SomloiCP93,ReichJCP12}. The update equation derived from
Eq.~\eref{eq:qmkrotj} is given by
\begin{equation}\label{eqn:krotovupdate}
\Delta U_i(t)
= \frac{S(t)}{\lambda_a}
\mathfrak{Im}\int\limits_{x_{\min}}^{x_{\max}}
\chi^{n}(x,t)^*\,
\phi_i(x) \,
\Psi^{n+1}(x,t)
\ensuremath{\, \textnormal{d}} x\,,
\end{equation}
with $n$ denoting the iteration step. $\chi(x,t)$ is a costate wave
function obeying the TDSE with `initial' condition
\begin{equation}\label{eqn:chioft}
\chi(x,T) =
\left[
\int\limits_{x_{\min}}^{x_{\max}}
(\Psi(x,T))^* \Psi^{\operatorname{tgt}}(x) \ensuremath{\, \textnormal{d}} x \;
\right]
\Psi^{\operatorname{tgt}}(x)\,.
\end{equation}
Optimized voltages $U_i(t)$ are obtained similarly to
Sec.~\ref{subs:kloct}, i.e., one starts with the ground state,
propagates $\Psi(x,t)$ forward in time according to
Eq.~\eref{eq:tdse}, using an
initial guess for the voltage ramps, and iterates the following steps
until the desired value of $J$ is achieved:
\begin{enumerate}
\item Compute the costate wave function at the final time $T$
according to Eq.~\eref{eqn:chioft} and
propagate $\chi(x,t)$ backwards in time, storing $\chi(x,t)$ at each
timestep.
\item Update the control voltages according to
Eq.~\eref{eqn:krotovupdate} using the stored $\chi(x,t)$, while
propagating $\Psi(x,t)$ forward using the immediately updated
control voltages.
\end{enumerate}
Equations~\eref{eqn:krotovupdate}
and~\eref{eqn:chioft} require a sufficiently large overlap
between the wave function, which is
forward propagated under the initial guess, and the target state
in order to obtain a reasonable voltage update. This emphasizes the
need for good initial guess ramps and illustrates the difficulty
of the control problem when large phase space volumes need to be
covered.
To solve the TDSE numerically, we use the Chebyshev
propagator~\cite{Tal-EzerJCP84} in
conjunction with a Fourier grid~\cite{RonnieReview88,RonnieReview94}
for efficient and accurate application of the kinetic
energy part of the Hamiltonian.
Denoting the transport time by $T$ and the inter-electrode spacing by
$d$, the average momentum during the shuttling is given by $\bar{p}=m
d / T$. Typical values of these parameters yield a phase space volume of
$d \cdot \bar{p}/h\approx 10^7$.
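For illustration, for a $^{40}$Ca$^{+}$ ion ($m\approx 6.6\times 10^{-26}\,$kg) shuttled over $d=280\,\mu$m within $T=0.3\,\mu$s one finds $\bar{p}\approx 6\times 10^{-23}\,$kg\,m\,s$^{-1}$ and $d\cdot\bar{p}/h\approx 3\times 10^{7}$.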
This requires the numerical integration to be extremely stable. In
order to ease the numerical treatment, we can exploit the fact that
the wavefunction's spatial extent is much smaller than $d$ and most
excess energy occurs in the form of classical oscillations. This
allows for propagating the wave function on a small \textit{moving
grid} that extends around the
instantaneous position and momentum expectation values~\cite{SINGER2010}.
The details of our implementation combining the Fourier representation
and a moving grid are described in~\ref{subs:qmprop}.
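
To give a flavour of the grid representation, the following minimal Python sketch performs split-operator time steps on a fixed Fourier grid. It is only a simpler stand-in for the Chebyshev propagation and the moving grid used in this work, with a static harmonic potential and illustrative parameters.
\begin{verbatim}
import numpy as np

# Minimal split-operator step on a fixed Fourier grid (illustrative stand-in
# for the Chebyshev propagator and the moving grid; static harmonic well).
hbar  = 1.054571817e-34
m     = 40 * 1.66054e-27
omega = 2 * np.pi * 1.3e6
nx, L = 512, 400e-9                        # grid points, grid length [m]
x  = np.linspace(-L / 2, L / 2, nx, endpoint=False)
dx = x[1] - x[0]
k  = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
dt = 1e-9                                  # time step [s]

sigma0 = np.sqrt(hbar / (2 * m * omega))   # ground-state width
psi = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2))
V   = 0.5 * m * omega**2 * x**2            # static harmonic potential

def step(psi):
    # Strang splitting: half potential, full kinetic, half potential step
    psi = np.exp(-0.5j * V * dt / hbar) * psi
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt / hbar) * psi

for _ in range(1000):
    psi = step(psi)
print("norm:", np.sum(np.abs(psi)**2) * dx)   # stays close to 1
\end{verbatim}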
\subsection{Initial guess voltages} \label{subs:guessgen}
Any optimization, no matter whether it employs classical or quantum
equations of motion, starts from an initial guess. For many
optimization problems, and in particular when using gradient-based methods
for optimization, a physically motivated initial guess is crucial
for success of the optimization~\cite{KochPRA04}. Here,
we design the initial guess for the voltage
ramps such that the ion is dragged from
position $x_1$ to $x_2$ in a smooth fashion. This is
achieved as follows:
The trapping potential $V(x,t)$ can be described by the position of
its local minimum $\alpha(t)$. Obviously, $\alpha(t)$ needs
to fulfill the boundary conditions $\alpha(0) = x_1$, $\alpha(T) =
x_2$. In order to ensure smooth acceleration and deceleration of the
center of the trap, we also demand $\dot\alpha(0) =
\dot\alpha(T)=\ddot{\alpha}(0)=\ddot{\alpha}(T)=0$.
A possible ansatz fulfilling these boundary conditions is given by
a fifth-order polynomial,
\begin{eqnarray}\label{eqn:transfunc1}
\alpha(t) = x_1 + d (10 s^3-15s^4+6s^5)\,,
\end{eqnarray}
where $d=x_2-x_1$ denotes the transport distance and $s=t/T$ is a
dimensionless time.
To derive initial guess voltages $U_i^0(t)$,
we use as a first condition that the local minimum of the potential
coincides with $\alpha(t)$. Second, we fix the trap frequency $\omega$
to a constant value throughout the whole shuttling process,
\begin{equation}
\begin{array}{rl}
\left.\frac{\partial V}{\partial x}\right|_{x=\alpha(t)} &=\phi_1'(\alpha(t)) U_1^0(t) + \phi_2'(\alpha(t)) U_2^0(t) \stackrel{!}{=} 0,\\
\left.\frac{\partial^2 V}{\partial x^2}\right|_{x=\alpha(t)} &=\phi_1''(\alpha(t)) U_1^0(t) + \phi_2''(\alpha(t)) U_2^0(t) \stackrel{!}{=} m \omega^2\,.
\end{array}\label{eqn:guesspots}
\end{equation}
These equations depend on first and second order spatial derivatives of the
electrode potentials. Solving for $U_1^0(t)$, $U_2^0(t)$, we obtain
\begin{equation}
U_i^0 (t) = \frac{(-1)^i m \omega ^2 \phi_j'(\alpha(t))}{
\phi_2''(\alpha(t)) \phi_1'(\alpha(t)) - \phi_2'(\alpha(t))\phi_1''(\alpha(t))
}, \quad i,j \in \{1,2\}, \quad j \neq i \label{eqn:solguess}.
\end{equation}
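
In practice, Eq.~\eref{eqn:solguess} amounts to solving the linear system of Eq.~\eref{eqn:guesspots} at each instant. The following schematic Python sketch does so with Gaussian placeholder potentials in place of the fitted $\phi_i$, with the ion's charge absorbed into them so that $U_i\phi_i$ is an energy; it is not the code used in this work.
\begin{verbatim}
import numpy as np

# Schematic evaluation of the guess voltages U_i^0(t): at each instant solve
#   phi_1'(a)  U_1 + phi_2'(a)  U_2 = 0
#   phi_1''(a) U_1 + phi_2''(a) U_2 = m omega^2,      a = alpha(t).
# dphi/d2phi are Gaussian placeholders for the fitted electrode potentials,
# with the elementary charge absorbed into them (an assumption of this sketch).
qe = 1.602e-19                                   # elementary charge [C]
m, omega = 40 * 1.66054e-27, 2 * np.pi * 1.3e6   # ion mass [kg], trap frequency
x1, x2, T = 0.0, 280e-6, 1.0e-6                  # electrode centres [m], time [s]
w = 100e-6                                       # fictitious potential width [m]

def dphi(i, x):                                  # placeholder for q_e phi_i'(x)
    xi = x1 if i == 1 else x2
    return -qe * (x - xi) / w**2 * np.exp(-(x - xi)**2 / (2 * w**2))

def d2phi(i, x):                                 # placeholder for q_e phi_i''(x)
    xi = x1 if i == 1 else x2
    return qe * ((x - xi)**2 / w**4 - 1.0 / w**2) * np.exp(-(x - xi)**2 / (2 * w**2))

def alpha(t):                                    # transport function, cf. above
    s = t / T
    return x1 + (x2 - x1) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def guess_voltages(t):
    a = alpha(t)
    A = np.array([[dphi(1, a),  dphi(2, a)],
                  [d2phi(1, a), d2phi(2, a)]])
    return np.linalg.solve(A, [0.0, m * omega**2])   # [U_1^0(t), U_2^0(t)]

print(guess_voltages(0.0), guess_voltages(0.5 * T))
\end{verbatim}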
An example is shown in Fig.~\ref{fig:guesscfoct}. If the
electrode potentials have translational symmetry, i.e., $\phi_j(x)=\phi_i(x+d)$,
then $U^0_1(t)=U^0_2(T-t)$. This condition is approximately met for
sufficiently homogeneous trap architectures.
\begin{figure}
\caption{Control voltages applied to the electrodes for transporting a
$^{40}$Ca$^{+}$ ion: (a) an optimized voltage ramp together with the initial
guess voltages, (b) a voltage sequence including the compensating force
obtained from invariant based inverse engineering.}
\label{fig:guesscfoct}
\end{figure}
\subsection{Geometric optimal control}\label{subsec:geometric}
Most current ion traps are fairly well described by a simple harmonic
model,
\begin{equation}
V(x,t) = -u_1(t) \frac{1}{2}m \omega_0^2 (x-x_1)^2 -
u_2(t) \frac{1}{2}m \omega_0^2 (x-x_2)^2\,,
\end{equation}
where $\omega_0$ is the trap frequency and $u_i$ are dimensionless control
parameters which correspond to the electrode voltages.
Since the equations of motion can be solved analytically, one can also
hope to solve the control problem analytically. One option is given by
Pontryagin's maximum principle~\cite{PONTRY,CHEN2011}, which allows one to
determine time-optimal controls. Compared to numerical optimization
which always yields local optima, Pontryagin's maximum principle
guarantees the optimum to be global.
In general, the cost functional,
\begin{equation}
J[\mathbf{u}] = \int_0^T g(\mathbf{y},\mathbf{u}) \ensuremath{\, \textnormal{d}} t\,,
\end{equation}
is minimized
for the equation of motion $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y},
\mathbf{u})$ and a running cost $g(\mathbf{y},\mathbf{u})$ with
$\mathbf{u} = (u_1, u_2)$ and $\mathbf{y} = (x,v)$ in our
case. The optimization problem is formally equivalent to finding a
classical trajectory by the principle of least action.
The corresponding classical control Hamiltonian that completely
captures the optimization problem is given by
\begin{equation}
H_c(\mathbf{p},\mathbf{y},\mathbf{u}) = p_0 g(\mathbf{y},\mathbf{u}) + \mathbf{p} \cdot \mathbf{f}(\mathbf{y}, \mathbf{u}) \label{eqn:hamcontrol}
\end{equation}
with costate $\mathbf{p}$, obeying
\begin{equation}\label{eq:costate}
\dot{\mathbf{p}}=-\frac{\partial H_c}{\partial \mathbf{y}}\,,
\end{equation}
and $p_0 < 0$ a constant to compensate dimension.
Pontryagin's principle states that $H_c$ becomes maximal for the
optimal choice of $\mathbf u(t)$~\cite{PONTRY,CHEN2011}.
Here we seek to minimize the transport time $T$. The cost functional
then becomes
\[
J[\mathbf{u}] = \int_0^{T_{\rm{min}}} \ensuremath{\, \textnormal{d}} t = T_{\rm{min}}\,,
\]
which is independent of $\mathbf {u}$ itself and leads to
$g(\mathbf{y},\mathbf{u})= 1$. Inserting the classical equations of motion
$\dot{\mathbf{y}} = (v, -\partial_x V)$, the control Hamiltonian becomes
\begin{equation}
\label{eq:H_c}
H_c(\mathbf{p},\mathbf{y},\mathbf{u}) = p_0 + p_1 v
+ p_2 \left( u_1 \cdot (x-x_1) + u_2 \cdot (x-x_2)\right) \omega_0^2\,.
\end{equation}
We bound $u_1$ and $u_2$ by $u_{\rm
max}$ which corresponds to the experimental voltage limit.
Since $H_c$ is linear in $u_i$ and $x_1 \leq x \leq x_2$,
$H_c$ becomes maximal depending on the sign of $p_2$,
\begin{equation}
u_1(t)= - u_2(t)= \mathrm{sign}(p_2)\, u_{\rm max}\,,
\label{eqn:biasmax}
\end{equation}
Evaluating Eq.~\eref{eq:costate} for $H_c$ of Eq.~\eref{eq:H_c} leads to
\begin{eqnarray}
\dot{p_1} = -p_2 \omega_0^2 \left(u_1 + u_2\right)\\
\dot{p_2} = -p_1.
\end{eqnarray}
In view of Eq.~\eref{eqn:biasmax}, the only useful choice is $p_2(0) > 0$.
Otherwise the second electrode would be biased to a positive voltage, leading to
a repulsive instead of an attractive potential acting on the ion.
The equations of motion for the costate thus become
\begin{eqnarray}
\dot{p_1} = 0 & \Rightarrow p_1(t) = c_1\\
\dot{p_2} = -p_1& \Rightarrow p_2(t) = p_2(0) - c_1 t.\label{eq:p2}
\end{eqnarray}
For a negative constant $c_1$, $p_2$ is never going to cross
zero. This implies that there will not be a switch in voltages
leading to continuous acceleration. For positive $c_1$ there will be a zero
crossing at time $t_{\rm sw} = p_2(0)/c_1$. The optimal solution thus
corresponds to a single switch of the voltages. We will analyze this
solution and compare it to the solutions obtained by numerical
optimization below in Section~\ref{subs:apcontrol}.
\subsection{Invariant based inverse engineering}\label{subs:invmeth0}
For quantum mechanical equations of motion,
geometric optimal control is limited to very simple dynamics such as
that of three- or four-level systems, see
e.g. Ref.~\cite{HaidongPRA12}.
A second analytical approach that is perfectly adapted to the quantum
harmonic oscillator
utilizes the Lewis-Riesenfeld theory which introduces dynamical
invariants and their eigenstates~\cite{lewis1969}. This
invariant-based inverse engineering approach (IEA) has recently been
applied to the transport problem~\cite{TORR2011,PalmeroPRA13}. The basic idea is
to compensate the inertial force occurring during the transport
sequence. To this end, the potential is written in the following form:
\begin{equation}\label{eqn:invpot}
V(x,t) = -F(t) x + \frac{m}{2} \Omega^2(t) x^2 + \frac{1}{\rho^2(t)} U\left(\frac{x-\alpha(t)}{\rho(t)}\right)\,.
\end{equation}
The functions $F$, $\Omega$, $\rho$ and $\alpha$ have to fulfill
constraints,
\begin{eqnarray}
\ddot \rho(t) + \Omega^2(t)\rho(t) = \frac{\Omega_0^2}{\rho^3(t)}\,,\\
\ddot \alpha(t) + \Omega^2(t)\alpha(t) = F(t)/m\,, \label{eqn:compforce0}
\end{eqnarray}
where $\Omega_0$ is a constant and $U$ an arbitrary function. We choose
$\Omega(t)=\Omega_0=0$, $\rho(t) = 1$, and $\alpha(t)$ to be the
transport function of Sec.~\ref{subs:guessgen}. This enables us to
deduce the construction rule for $F(t)$, using Eq.~\eref{eqn:compforce0},
\begin{equation}\label{eq:F}
\ddot\alpha(t) = F(t)/m\,,
\end{equation}
such that $F(t)$ compensates the inertial force given by the
acceleration of the trap center.
For the potential of Eq.~\eref{eqn:invpot}, the Hermitian operator
\begin{equation}
\Op{I} = \frac{1}{2m}
\left[\rho\left(p-m\dot\alpha\right)-m\dot\rho\left(x-\alpha\right)\right]^2
+\frac{1}{2} m \Omega_0^2 \left(\frac{x-\alpha}{\rho}\right)^2
+ U\left(\frac{x-\alpha}{\rho}\right)
\end{equation}
fulfills the invariance condition for all conceivable quantum states $\ket{\Psi(t)}$:
\begin{equation}
\frac{\rmd }{\rmd t }\braket{\Psi(t)|\Op{I}(t)|\Psi(t)} = 0 \quad \Leftrightarrow \quad \frac{\rmd \Op{I}}{\rmd t } = \frac{\partial \Op{I} }{\partial t} + \frac{1}{\rmi \hbar} [\Op{I}(t),\Op{H}(t)] = 0
\end{equation}
with $\Op{H}$ the Hamiltonian of the ion.
The requirement for transporting the initial ground state to the
ground state of the trap at the final time corresponds to $\Op{H}$ and
$\Op{I}$ having a common set
of eigenfunctions at initial and final time. This is the case for
$\dot\alpha(0)= \dot\alpha(T) = \dot{\rho}(t) =0$~\cite{DHARA1984,TORR2011}.
We can now identify $U$ in Eq.~\eref{eqn:invpot} with the trapping
potential of Eq.~\eref{eq:V}.
The additional compensating force
is generated using the same trap electrodes
by applying an additional voltage $\delta U_i$. For a given
transport function $\alpha(t)$ we therefore have to solve the underdetermined
equation,
\begin{equation}
m \ddot{\alpha}(t) = -\phi_1'(x(t)) \delta U_1(t) - \phi_2'(x(t)) \delta U_2(t),\label{eqn:regul}
\end{equation}
where $x(t)$ is given by the classical trajectory. Since the ion
is forced to follow the center of the trap we can set
$x(t)=\alpha(t)$. The compensating force is supposed to be a function
of time only, cf. Eq.~\eref{eq:F}, whereas
changing the electrode voltages by $\delta U_i$ will, via the
$\phi_i(x)$, in general yield a
position-dependent force. This leads to a modified second derivative
of the actual potential:
\begin{equation}
m \omega_c(t)^2 =\sum_{i=1}^2\phi_i''(\alpha(t))(U_i^0(t)+\delta U_i(t)) = m( \omega^2 + \delta\omega(t)^2)\,,
\end{equation}
where $\delta\omega(t)^2$ denotes the change in trap frequency due to the
compensation voltages $\delta U_i$, $\omega$ is the initially desired trap
frequency, and $U_i^0(t)$ is found in Eq.~\eref{eqn:solguess}.
A time-varying actual frequency $\omega_c(t)$ might lead to wavepacket
squeezing.
However, since Eq.~\eref{eqn:regul} is underdetermined, we can set
$\delta\omega(t)^2 = 0$ leading to $\omega_c(t) = \omega$ as desired.
With this condition we can solve Eq.~\eref{eqn:regul} and obtain
\begin{equation}\label{eqn:compconstw}
\delta U_i (t) = \frac{\ddot \alpha(t) \,(-1)^i \, m \,\phi_j''(\alpha(t))}{
\phi_2''(\alpha(t)) \phi_1'(\alpha(t)) - \phi_2'(\alpha(t))\phi_1''(\alpha(t))
}\,,~i,j \in \{1,2\}\,,~j \neq i\,.
\end{equation}
Note that Eq.~\eref{eqn:compconstw} depends only on the trap
geometry. The transport duration
$T$ enters merely as a scaling parameter via $\ddot \alpha(t) =
\alpha''(s)/T^2$. An example
of a voltage sequence obtained by this method is shown in
\fref{fig:guesscfoct}(b). The voltage curves are symmetric under time
inversion, like the guess voltages, since both are derived from the same potential
functions $\phi_i(x)$.
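For the transport function of Eq.~\eref{eqn:transfunc1}, for instance, the peak acceleration is $|\ddot\alpha|_{\max}\approx 5.8\,d/T^2$, attained at $s\approx 0.21$ and $s\approx 0.79$, so that halving the transport time roughly quadruples the required peak compensation voltages.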
\section{Application and comparison of the control methods}\label{subs:apcontrol}
We now apply the control strategies introduced in Sec.~\ref{subs:ttcp}
to a scenario with the parameters chosen to correspond to a
typical experimental setting. The
scaling of the classical speed limit is studied for a fixed maximum control
voltage range and we show how in the limiting case the \textit{bang-bang} solution is
obtained. To verify the validity of the classical solutions, we
apply the obtained voltage ramps to a quantum mechanical wave
packet propagation. Similarly, we use the invariant-based approach and
verify the result by quantum mechanical propagation.
\subsection{Experimental constraints and limits
to control for classical ion transport}\label{subs:kloct2}
\begin{figure}
\caption{Final energy vs. transport time for different voltage
ramps and classical dynamics. (a) shows the improvement over the
initial guess (black) by numerical optimization for a
maximum voltage of 10$\,$V (blue) and (b) compares the results of
numerical optimization for maximum voltages of 10$\,$V (blue),
20$\,$V (purple), and 30$\,$V (green). The spikes in (b) are due to
voltage truncation.
}
\label{fig:transportopts}
\end{figure}
\begin{figure}
\caption{(a) Minimum transport time $T^\mathrm{opt}_{\rm min}$ versus maximum
electrode voltage $U_{\rm max}$. (b) Optimized voltage ramps for the left
electrode for $U_{\rm max}=10\,$V and different transport times.}
\label{fig:classlimit}
\end{figure}
In any experiment, there is an upper limit to the electrode voltages
that can be applied. It is the range of electrode voltages that limits
the maximum transport speed.
Typically this range is given by $\pm$10$\,$V for technical reasons. It
could be increased by the development of better voltage
supplies. We define the minimum possible transport time $T_{\rm
{min}}$ to be the smallest time $T$ for which
less than $0.01$ phonons are excited due to the total transport.
To examine how $T_{\rm min}$ scales as a
function of the maximum electrode voltages $U_{\rm max}$, we
have carried out numerical optimization combined with classical
equations of motion. The initial guess voltages, cf.
Eqs.~\eref{eqn:transfunc1} and~\eref{eqn:solguess}, were taken to
preserve a constant trap frequency of $\omega = 2 \pi \cdot 1.3$~MHz for
a $^{40}\rm{Ca}^+$ ion. The transport ramps were optimized for a
range of maximum voltages between 10 and 150~V and transport times between
10 ns and 300 ns with voltages truncated to
$\pm\,U_{\rm max}$ during the updates. The results are shown in
Figs.~\ref{fig:transportopts} and~\ref{fig:classlimit}.
Figure~\ref{fig:transportopts} depicts the final excitation energy
versus transport time, comparing the initial guess (black) to an
optimized ramp with $U_{\rm max}=10\,$V (blue) in
Fig.~\ref{fig:transportopts}(a). For the initial guess, the final
energy displays an oscillatory behavior with respect to the
trap period ($T_{\rm{per}}=0.769\,\mu$s for $\omega = 2 \pi \cdot 1.3\,$MHz)
as it has been experimentally observed in
Ref.~\cite{WALTHER2012}, and an overall decrease of the final energy
for longer transport times. The optimized transport with
$U_{\rm max}=10\,$V (blue line in Fig.~\ref{fig:transportopts}(a))
shows a clear speed up of energy neutral transport:
An excitation energy of less than 0.01 phonons is obtained for
$T^\mathrm{opt}_\mathrm{min}=0.284\,\mu$s compared to $T^\mathrm{guess}_\mathrm{min}=1.391\,\mu$s.
The speedup increases with maximum voltage as shown in
Fig.~\ref{fig:transportopts}(b).
The variation of $T^\mathrm{opt}_{\rm min}$ on $U_{\rm max}$ is
studied in Fig.~\ref{fig:classlimit}(a). We find
a functional dependence of
\begin{equation}
T^\mathrm{opt}_{\rm min}(U_{\rm max}) \approx
a \left(\frac{U_{\rm max}}{1\,{\rm V}}\right)^{-b}
\label{eqn:fitfn}
\end{equation}
with $a = 0.880(15)\,\mu$s and $b = 0.487(5)$.
Optimized voltages are shown in Fig.~\ref{fig:classlimit}(b)
for the left electrode with $U_{\rm{max}}=10\,$V. As the transport
time decreases, the voltage ramp approaches a square shape. A
bang-bang-like solution is attained at $T=280\,$ns. However, for such
a short transport time,
classical control of energy neutral transport breaks down due to
an insufficient voltage range and the final excitation amounts to 5703
mean phonons.
In the following we show that for purely harmonic potentials, the
exponent $b$ in Eq.~\eref{eqn:fitfn} is universal, i.e., it depends neither
on the trap frequency nor on the ion mass. It is solely determined by the
bang-bang like optimized voltage sequences, where instantaneous
switching between maximum acceleration and deceleration guarantees
shuttling within minimum time. The technical feasibility of bang-bang
shuttling is thoroughly analyzed in Ref.~\cite{ALONSO2013}.
The solution is obtained by the application of
Pontryagin's maximum principle \cite{PONTRY,CHEN2011} as discussed in
Sec.~\ref{subsec:geometric} and assumes instantaneous
switches. Employing Eqs.~\eref{eqn:biasmax} and~\eref{eq:p2},
the equation of motion becomes
\begin{equation}
\ddot x = \omega_0^2 u_{\rm max} \cdot \left\{
\begin{array}{cc}
d , & t < t_{\rm sw}\\ -d, & t > t_{\rm sw}
\end{array} \right. .
\end{equation}
This can be integrated to
\begin{equation}
x(t)= \left\{
\begin{array}{lc}
x_1 + u_{\rm max} d \omega_0^2t^2 , & 0 \leq t \leq t_{\rm sw}\\
x_1 + d - u_{\rm max} d \omega_0^2(t-T_{\rm{min}})^2 , & t_{\rm sw} \leq t \leq T_{\rm{min}}
\end{array} \right.
\end{equation}
with the boundary conditions $x(0) = x_1$, $x(T_{\rm{min}}) = x_2$ and $\dot{x}(0) =
\dot{x}(T_{\rm{min}}) = 0$. Using the continuity of $\dot x$ and $x$
at $t=t_{\rm sw}$, we obtain
\begin{equation}
t_{\rm sw} = \frac{T_{\rm{min}}}{2} ,\quad T_{\rm{min}} = \frac{\sqrt{2}}{\omega_0} \sqrt{\frac{1}{u_{\rm max}}}. \label{eqn:Ttheo}
\end{equation}
Notably, the minimum transport time is proportional to $u_{\rm
max}^{-1/2}$ which explains the behavior of the numerical data
shown in Fig.~\ref{fig:classlimit}.
This scaling law can be understood intuitively by considering that in the bang-bang
control approach, the minimum shuttling time is given by the shortest attainable
trap period, which scales as $u_{\rm max}^{-1/2}$.
Assuming a trap frequency of $\omega_0=2 \pi \cdot 0.55$~MHz in
Eq.~\eref{eqn:Ttheo}, corresponding to a trapping voltage of $-1\,$V
for our trap geometry, we find a prefactor $\sqrt{2}/\omega_0 =
0.41 \mu$s. This is smaller than $a=0.880(15)\mu$s obtained by
numerical optimization for realistic trap potentials. The difference
can be rationalized in terms of the average acceleration
provided by the potentials. For realistic trap geometries, the force
exerted by the electrodes is inhomogeneous along the transport path.
Mutual shielding of the electrodes reduces the electric field
feedthrough of an electrode to the neighboring ones. Thus, the
magnitude of the accelerating force that a real electrode can
exert on the ion
when it is located at a neighboring electrode is reduced with respect
to the constant force generated by an ideal harmonic potential with the same
trap frequency.
The minimum transport time of $T^\mathrm{opt}_\mathrm{min}=0.284\,\mu$s
identified here for $U_\mathrm{max}=10\,$V, cf. the blue line in
Fig.~\ref{fig:transportopts}(a), is significantly shorter than
operation times realized experimentally. For comparison, an ion has
recently been shuttled within $3.6\,\mu$s, leading to a final
excitation of $0.10\pm0.01$ motional quanta~\cite{WALTHER2012}.
Optimization may
not only improve the transport time but also the stability with
respect to uncertainties in the time. This is in contrast to the
extremely narrow minima of the final excitation energy for the guess
voltage ramps shown in black in Fig.~\ref{fig:transportopts}(a),
implying a very high
sensitivity to uncertainties in the transport time. For example, for
the fourth minimum of the black curve, located at $3.795\,\mu$s and close
to the operation time of Ref.~\cite{WALTHER2012} (not shown in
Fig.~\ref{fig:transportopts}(a)), final excitation energies of less
than 0.1 phonons are observed only within a window of 3$\,$ns.
Optimization of the voltage ramps for $T=3.351\,\mu$s increases
the stability against variations in transport time to
more than $60\,$ns.
In conclusion we find that optimizing the classical motion of an ion
allows us to identify the
minimum operation time for a given maximum voltage and improve the
stability with respect to timing uncertainties for longer operation
times. The analytical solution derived from Pontryagin's maximum
principle is helpful to understand the minimum time control strategy.
Numerical optimization accounts for all typical features of realistic
voltage ramps. It allows
for identifying the minimum transport time, predicting $36.9\%$ of the
oscillation period for current maximum voltages and a trap frequency of
$\omega = 2 \pi\cdot 1.3\,$MHz. This number can be
reduced to $12.2\%$ when increasing the maximum voltage by one order of
magnitude.
However, these predictions may be rendered invalid by a breakdown of
the classical approximation.
\subsection{Validity of classical solutions in the
quantum regime}\label{subs:qmprop2}
\begin{figure}
\caption{Testing control strategies obtained with classical dynamics
for wavepacket motion:
(a) Final excitation energy of the ion wavepacket
with the initial guess (black) and the optimized voltage ramps with
$U_\mathrm{max}=10\,$V, comparing classical (light blue) and quantum
mechanical (red) propagation as well as the compensating-force approach of
Sec.~\ref{subs:invmeth} (green). (b) Deviation of the final wavepacket from
the target state for transport times close to $T^\mathrm{opt}_\mathrm{min}$.}
\label{fig:sqlimit}
\end{figure}
We now employ quantum wavepacket dynamics to test the classical
solutions, obtained in Sec.~\ref{subs:kloct2}.
Provided the trap frequency is constant and the trap is perfectly
harmonic, the wave function will only be displaced during the
transport. For a time-varying trap frequency, however, squeezing may
occur~\cite{scu11}. In extreme cases, anharmonicities of the potential
might lead to wavepacket dispersion. Since these two
effects are not accounted for by numerical optimization of classical
dynamics, we discuss in the following at which timescales such
genuine quantum effects become significant. To this end, we have
employed the optimized voltages shown in Fig.~\ref{fig:classlimit}(b)
in the propagation of a quantum wavepacket. We compare the results of
classical and quantum mechanical motion in Fig.~\ref{fig:sqlimit}(a),
cf. the red and light blue lines. A clear deviation is observed.
Also, as can be seen in Fig.~\ref{fig:sqlimit}(b), the
wavefunction fails to reach the target wavefunction for transport
times close to the classical limit $T^\mathrm{opt}_{\rm{min}}$. This is
exclusively caused by squeezing and can be verified by inspecting
the time evolution of the wavepacket in the final potential: We find the
width of the wavepacket to oscillate, indicating a squeezed
state. No wavepacket dispersion effects are observed, i.e., the final
wavepackets are still minimum uncertainty states, with $\min(\Delta
x\cdot\Delta p) = \hbar/2$. This means that no effect of
anharmonicities in the potential is observed.
An impact of anharmonicities is expected once the size of the
wavefunction becomes comparable to the segment distance $d$ (see
Fig.~\ref{fig:electrodes}). Then the wavefunction extends over
spatial regions in which the potentials
deviate substantially from harmonic potentials.
For the ion shuttling problem, this effect does not play a role over the
relevant parameter regime.
The effects of anharmonicities in
the quantum regime for trapped ions were thoroughly analyzed in
Ref.~\cite{HOME2011}.
Squeezing increases $T_{\rm min}$ from $0.28\,\mu$s to $0.86\,\mu$s
for the limit of exciting less than 0.01 phonons, see the red curve in
Fig.~\ref{fig:sqlimit}(a), i.e., it only triples the minimum
transport time.
We show that squeezing can be suppressed altogether in the following
section.
\subsection{Application of a compensating force approach}\label{subs:invmeth}
\begin{figure}
\caption{Minimum transport time $T_{\rm min}$ versus maximum electrode
voltage $U_{\rm max}$ for the invariant based inverse engineering approach
and for numerical optimization of classical dynamics.}
\label{fig:compforcelimit}
\end{figure}
In the invariant-based IEA, the minimal transport time is determined by the
maximum voltages that are required for attaining zero motional
excitation. The total voltage that needs to be applied is given by
$U_i(t)=U_i^0(t)+\delta U_i(t)$ with $U_i^0(t)$ and $\delta U_i(t)$
found in Eqs.~\eref{eqn:solguess} and~\eref{eqn:compconstw}.
The maximum of $U_i(t)$, and thus the minimum in $T$,
is strictly
related to the acceleration of the ion provided by the transport
function $\alpha(t)$, cf. Eq.~\eref{eqn:compconstw}.
If the acceleration is too high, the voltages
will exceed the feasibility limit $U_{\rm max}$.
At this point it can also be understood why the acceleration should be zero
at the beginning and end of the transport: For $\ddot\alpha(0)\neq 0$
a non-vanishing correction voltage
$\delta U_i \neq 0$ is obtained from Eq.~\eref{eqn:compconstw}. This
implies that the voltages do not match the initial trap conditions,
where the ion should be located at the center of the initial
potential.
We can derive a transport function $\alpha(t)$ compliant with the
boundary conditions using Eq.~\eref{eqn:transfunc1}.
For this case,
Fig.~\ref{fig:compforcelimit} shows the transport time
$T^\mathrm{IEA}_{\rm{min}}$ versus the maximum voltage $U_{\rm{max}}$
that is applied to the electrodes during the transport sequence.
For large transport times, the initial guess voltages $U_i^0(t)\propto\omega^2$
dominate the compensation voltages $\delta U_i(t)\propto\ddot \alpha(t) =
\alpha''(s)/T^2 $. This leads to the bend of the red curve. When the trap
frequency $\omega$ is lowered, the bend decreases. For the
limiting case of no confining potential $\omega=U_i^0(t)=0$,
$T^\mathrm{IEA}_\mathrm{min}$ is solely determined by the compensation
voltages.
In this case the same scaling of $T^\mathrm{IEA}_\mathrm{min}$ with
$U_\mathrm{max}$ as for
the optimization of classical dynamics is observed, cf. black and blue
lines in Fig.~\ref{fig:compforcelimit}. For large $U_\mathrm{max}$,
this scaling also applies to the case of non-zero trap frequency,
cf. red line in Fig.~\ref{fig:compforcelimit}.
We have tested the performance of the compensating force by employing
it in the time evolution of the wavefunction. It leads to near-perfect
overlap with the target state with an infidelity of less than
$10^{-9}$. The final excitation energy of the propagated wave function
is shown in Fig.~\ref{fig:sqlimit} (green line) for a maximum voltage of
$U_{\rm max} =10\,$V. For the corresponding minimum transport time,
$T^\mathrm{IEA}_{\rm min}(10\,\rm{V}) = 418\,$ns, a final
excitation energy six orders of magnitude below that found by
optimization of the classical dynamics is obtained. This demonstrates
that the invariant-based IEA is capable of avoiding the wavepacket
squeezing that was observed in Sec.~\ref{subs:qmprop2}
when employing classically optimized
controls in quantum dynamics.
It also confirms that anharmonicities do
not play a role since these would not be accounted for by the
IEA-variant employed here. Note that an adaptation of the
invariant-based IEA to anharmonic traps is found in
Ref.~\cite{PalmeroPRA13}.
Similarly to numerical optimization of classical dynamics,
IEA is capable of improving the
stability against variations in transport time $T$.
The final excitation energy obtained for $T=3.351\,\mu$s stays below
0.1 phonons within a window of more than $13\,$ns.
A further reduction of the minimum transport time may be achieved
due to the freedom of choice in the transport function
$\alpha(t)$, by employing higher polynomial orders in order
to reduce the compensation voltages $\delta U_i(t)$, cf.
Eq.~\eref{eqn:compconstw}.
However, the fastest quantum mechanically valid transport
has to be slower than the solutions obtained for
classical ion motion. This follows from the bang-bang control being
the time-optimal solution for a given voltage limit and the IEA
solutions requiring additional voltage to compensate the wavepacket
squeezing.
We can thus conclude that the time-optimal quantum solution
will be in between the blue and black curves of
Fig.~\ref{fig:compforcelimit}.
\subsection{Feasibility analysis of quantum optimal control}\label{subs:qmoct2}
Numerical optimization of the wavepacket motion is expected to become
necessary once the dynamics explores spatial regions in which the
potential is strongly anharmonic or is subject to strongly anharmonic
fluctuations. This can be expected, for example, when the spatial
extent of the wavefunction is not too different from that of the
trap. Correspondingly, we introduce the parameter $\xi=\sigma_0/d$,
which is the wavefunction size normalized to the transport distance.
While for current trap
architectures, such a scenario is rather unlikely, further
miniaturization might lead to this regime. Also, it is currently
encountered in the transport of neutral atoms in tailored optical
dipole potentials~\cite{Ivanov2010,WaltherTwo2012}.
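For orientation: if $\sigma_0$ is identified with the ground-state width $\sqrt{\hbar/(2m\omega)}$, one finds $\sigma_0\approx 10\,$nm for a $^{40}$Ca$^{+}$ ion at $\omega=2\pi\cdot 1.3\,$MHz, i.e. $\xi\approx 3.5\times 10^{-5}$ for $d=280\,\mu$m.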
Gradient-based quantum OCT requires an initial guess voltage that
ensures a finite overlap of the propagated wave function $\Psi(T)$
with the target state $\Psi^{\rm tgt}$, see
Eq.~\eref{eqn:chioft}. Otherwise, the amplitude
of the co-state $\chi$ vanishes. The overlap can also be
analyzed in terms of phase space volume.
For a typical ion trap setting with parameters as in
Fig.~\ref{fig:electrodes}, the total covered phase space volume in
units of Planck's constant is $ m\,d^2\, \omega /(2 \pi h) \approx
10^7$. This leads to very slow convergence of the optimization
algorithm, unless an extremely good initial
guess is available.
\begin{figure}
\caption{(a) Mean improvement of the optimization functional per iteration,
$\overline{\Delta J}$, averaged over 100 iterations, versus the scale
parameter $\xi=\sigma_0/d$ for different optimization weights $\lambda_a$.
(b) Comparison of the invariant based approach and quantum optimal control
for $\xi=0.4$.}
\label{fig:qmconv}
\end{figure}
We utilize the results of the optimization for
classical dynamics of Sec.~\ref{subs:kloct2} as initial guess
ramps for optimizing the wavepacket dynamics and investigate the
convergence rate as a function of the system
dimension, i.e., of $\xi$. The results are shown in
Fig.~\ref{fig:qmconv}(a), plotting the mean improvement per
optimization step, $\Delta J$, averaged over 100 iterations, versus the
scale parameter $\xi$.
We computed the convergence rate $\overline{\Delta J}$ for different,
fixed optimization weights $\lambda_a$ in
Eq.~\eref{eqn:krotovupdate}. The curves in Fig.~\ref{fig:qmconv}(a)
are truncated for large values of $\overline{\Delta J}$, where the
algorithm becomes numerically unstable. Values below
$\overline{\Delta J}=10^{-6}$ (dashed grey line in
Fig.~\ref{fig:qmconv}(a)) indicate an insufficient convergence rate
for which no significant gain of fidelity is obtained with
reasonable computational resources. In this case the potentials are
insufficiently anharmonic to provide \textit{quantum} control of the
wavefunction.
Numerical optimization of the wavepacket dynamics
is applicable and useful for scale parameters of $\xi\approx
0.05$ and larger, indicated by arrows (2) and (3) in
Fig.~\ref{fig:qmconv}(a). Then the wavefunction size becomes
comparable to the transport distance, leading for example to a phase
space volume of around $10\,h$ for arrow (2). At this scale the force
becomes inhomogeneous across the wavepacket. This leads to a
breakdown of the IEA, as
illustrated for $\xi = 0.4$ in Figs.~\ref{fig:qmconv}(b)
and~\ref{fig:schema}.
The fidelity $\mathfrak{F}_{\rm{IEA}}$ for the IEA drops below
$94.6\%$, whereas $\mathfrak{F}_{\rm{qOCT}}=0.999$ is achieved by
numerical optimization of the quantum dynamics.
\begin{figure}
\caption{Limitation of the compensating force approach. A force
inhomogeneity $\Delta F =
\sum_i[\phi_i'(\alpha(t)+\sigma_0)-\phi_i'(\alpha(t)-\sigma_0)]\delta
U_i(t)$ across the wavefunction is caused by anharmonicities of
the potential $\Delta V = F(t)\, x$ used to implement the
compensating force. The relative spread of the force
$\Delta F/F$ across the wavefunction is taken at the point in
time where the acceleration $\ddot\alpha(t)$ is maximal.}
\label{fig:schema}
\end{figure}
\section{Summary and Conclusions}\label{sec:concl}
Manipulation of motional degrees of freedom is very widespread in
trapped-ion experiments. However, most theoretical calculations
involving ion transport over significant distances are based on
approximations that in general do not guarantee the level of precision
needed for high-fidelity quantum control, especially in view of
applications in the context of quantum technologies. As a consequence,
before our work little was known about how to apply optimal control
theory to large-scale manipulation of ion motion in traps, concerning
in particular the most efficient simulation and control methods to be
employed in different parameter regimes, as well as the level of
improvement that optimization could bring.
With this in mind, in the present work we have investigated the
applicability of several classical and
quantum control techniques for the problem of moving an ion across a
trap in a fast and accurate way. When describing the ion dynamics
purely classically, numerical optimization yields transport times
significantly shorter than a trapping period. The minimum transport
duration depends on the maximal electrode voltage that can be applied
and was found to scale as $1/\sqrt{U_{\rm{max}}}$. The same scaling is
observed for time-optimal bang-bang-like solutions that can be
derived using Pontryagin's maximum principle and assuming perfectly
harmonic traps. Not surprisingly, the classically optimized solutions
were found to fail when tested in quantum wavepacket motion for
transport durations of about one third of a trapping period. Wavepacket
squeezing turns out to be the dominant source of error with the
final wavepacket remaining a minimum uncertainty state.
Anharmonic effects were found to play no significant role for single-ion
shuttling over a wide range of parameters.
Wavepacket squeezing can be perfectly compensated by the control strategy
obtained with the invariant-based inverse engineering approach. It
amounts to applying correction voltages which can be generated by the
trapping electrodes and which exert a compensating force on the
ion. This is found to be the method of choice for current experimental
settings.
Control methods do not only allow to assess the minimum time required
for ion transport but can also yield more robust solutions.
For transport times that have been used in recent
experiments~\cite{WALTHER2012}, significantly larger than the minimum
times identified here, the classical solutions are valid also for the
quantum dynamics. In this regime, both numerical optimization of
classical ion motion and the inverse engineering approach yield a
significant improvement of stability against uncertainties in
transport time. Compared to the initial guess voltages, the time
window within which less than 0.1 phonons are excited after transport
is increased by a factor of twenty for numerical optimization and
a factor of five for the inverse engineering approach.
Further miniaturization is expected to yield trapping potentials where
the wavepacket samples regions of space in which the potential, or
potential fluctuations, are strongly anharmonic. Also, for large
motional excitations recent experiments have shown
nonlinear Duffing oscillator behavior \cite{AKERMAN2010}, nonlinear
coupling of modes in linear ion crystals \cite{ROOS2008,nie2009theory}
and amplitude dependent modifications of normal modes frequencies and
amplitude due to nonlinearities \cite{HOMENJP}. In these cases,
numerical optimization of the ion's quantum dynamics presents itself
as a well-adapted and efficient approach capable of providing
high-fidelity control solutions.
The results presented in this paper provide us with a
systematic recipe, based on a single parameter (the relative wave
packet size $\xi$), to assess which simulation and control methods are
best suited in different regimes. We observe a crossover between
applicability of the invariant-based IEA, for a very small
wavefunction extension, and that of quantum OCT, when the width of the
wave function becomes comparable with the extension of the potential.
Both methods combined cover the full range of conceivable trap
parameters. That is, no matter what the trapping parameters are,
control solutions for fast, high-fidelity transport are available. In
particular, in the regime $\xi\ll 1$, relevant for ion transport in
chip traps, solutions obtained with the inverse engineering approach
are fully adequate for the
purpose of achieving high-fidelity quantum operations. This provides a
major advantage in terms of efficiency over optimization algorithms based
on the solution of the Schrödinger equation. The latter in turn becomes
indispensable when processes involving motional excitations inside the
trap and/or other anharmonic effects are relevant. In this case, the
numerical quantum OCT method demonstrated in this paper provides a
comprehensive way to deal with the manipulation of the ions’ external
states.
\ack
KS, UP, HAF and FSK thank Juan
Gonzalo Muga and Mikel Palmero for the discussions about the invariant
based approach. HAF thanks Henning Kaufmann for useful contributions to the
numerical framework.
The Mainz team acknowledges financial support by the Volkswagen-Stiftung, the
DFG-Forschergruppe (FOR 1493) and the EU-projects DIAMANT (FP7-ICT),
IP-SIQS, the IARPA MQCO project and the MPNS COST Action MP1209. MHG and CPK are grateful to
the DAAD for financial support. SM, FSK and TC acknowledge support from
EU-projects SIQS, DIAMANT and PICC and from the DFG
SFB/TRR21. MHG, SM, TC and CPK enjoyed the hospitality of KITP and
acknowledge support in part by the National Science Foundation under
Grant No. NSF PHY11-25915.
\appendix
\section{Quantum wavepacket propagation with a moving Fourier grid}\label{subs:qmprop}
For transport processes using realistic trap parameters, naive
application of the standard Fourier grid
method~\cite{RonnieReview88,RonnieReview94} will lead to infeasible
grid sizes. This is due to the
transport distance usually being 3 to 5 orders of magnitude larger
than the spatial width of the wavepacket and possible acceleration of
the wavepacket requiring a sufficiently
dense coordinate space grid. To limit the number of grid points, a
\emph{moving grid} is introduced. Instead of using a spatial grid that
covers the entire transport distance, the grid is defined to only
contain the initial wavepacket, in a window between $x_{\min}$ and
$x_{\max}$. The wavepacket $\Psi(x, t_0)$ is now propagated for a
single time step to
$\Psi(x, t_0+dt)$. For the propagated wave function, the expectation value
\begin{equation}
\left\langle x \right\rangle
= \int_{x_{\min}}^{x_{\max}}
\Psi^{*}(x, t_0 + dt)\, x \, \Psi(x, t_0 + dt) \ensuremath{\, \textnormal{d}} x
\end{equation}
is calculated, and from that an offset is obtained,
\begin{equation}
\bar{x} = \left\langle x \right\rangle - \frac{x_{\max} - x_{\min}}{2}\,,
\end{equation}
by which $x_{\min}$ and $x_{\max}$ are shifted.
The wavepacket is now moved to the center of the new grid,
and the propagation continues to the next time step.
The same idea can also be applied to momentum space. After the
propagation step, the expectation value $\left\langle k \right\rangle$
is calculated and stored as an offset $\bar{k}$. The wave function is
then shifted in momentum space by this offset, which is achieved by
multiplying it by $e^{-i\bar{k}x}$. This cancels out the fast
oscillations in $\Psi(x,t_0 + dt)$. When applying the kinetic operator
in the next propagation step, the offset has to be taken into account,
i.e.,
the kinetic operator in momentum space becomes $(k+\bar{k})^2/2m$.
The combination of the moving grid in coordinate and momentum space
allows one to choose the grid window with the mere requirement that it be
larger than the extension of the wavepacket at any point of the
propagation. We find typically 100 grid points to be sufficient
to represent the acceleration within a single time step. The procedure
is illustrated in Fig.~\ref{fig:moving_grid} and the steps of the
algorithm are summarized in Table~\ref{tab:howtomovgrid}.
\begin{figure}
\caption{Illustration of the moving grid procedure for the propagation of
the wave function $\Psi(x,t_{0})$.}
\label{fig:moving_grid}
\end{figure}
\begin{table}[tbp]
\caption{\label{tab:howtomovgrid}Necessary steps for wavepacket propagation over long distances.}
\centering
\begin{tabular}
{l l l l }
\br
& & Mathematical step& Possible implementation \\
\mr
1. & Calculate position mean & $\langle x\rangle=\bra{\Psi}\Op{x}\ket{\Psi}$ & $\langle x\rangle=\sum_i x_i \Psi_i^*\Psi_i$ \\
2. & Transform to momentum space & & $\{\Phi_i\}=\mathcal{FFT}(\{\Psi_i\})$ \\
3. & Calculate momentum mean & $\langle p\rangle=\bra{\Psi}\Op{p}\ket{\Psi}$ & $\langle p\rangle=\sum_i \hbar k_i \Phi_i^*\Phi_i$ \\
4. & Shift position & $\ket{\Psi}\rightarrow\exp\left(\frac{i}{\hbar}\langle x\rangle \Op{p}\right)\ket{\Psi}$ & $\Phi_i\rightarrow\exp\left(i k_i \langle x\rangle\right)\Phi_i$\\
5. & Transform to position space & & $\{\Psi_i\}=\mathcal{FFT}^{-1}(\{\Phi_i\})$ \\
6. & Shift momentum & $\ket{\Psi}\rightarrow\exp\left(\frac{i}{\hbar}\langle p\rangle \Op{x}\right)\ket{\Psi}$ & $\Psi_i\rightarrow\exp\left(\frac{i}{\hbar} \langle p\rangle x_i\right)\Psi_i$\\
7. & Update classical quantities & & $x_{cl}+=\langle x\rangle, p_{cl}+=\langle p\rangle$\\ \br
\end{tabular}
\end{table}
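For concreteness, the re-centring step can be sketched in C++ as follows. This is a minimal
illustration, not the code used for this work: a naive $O(N^2)$ DFT stands in for the FFT,
$\hbar=1$, the grid coordinates in \texttt{x} are assumed to be taken relative to the window
centre, and the sign conventions follow the description above and Table~\ref{tab:howtomovgrid}.
\begin{verbatim}
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;
const double PI = std::acos(-1.0);

// Naive DFT (sign = -1: to momentum space, sign = +1: back, normalised).
std::vector<cplx> dft(const std::vector<cplx>& in, int sign) {
  const std::size_t N = in.size();
  std::vector<cplx> out(N, 0.0);
  for (std::size_t j = 0; j < N; ++j)
    for (std::size_t l = 0; l < N; ++l)
      out[j] += in[l] *
        std::polar(1.0, sign * 2.0 * PI * double(j * l % N) / double(N));
  if (sign > 0) for (auto& v : out) v /= double(N);
  return out;
}

// Momentum of index j in standard FFT ordering.
double kval(std::size_t j, std::size_t N, double dx) {
  double m = (j <= N / 2) ? double(j) : double(j) - double(N);
  return 2.0 * PI * m / (double(N) * dx);
}

// One re-centring step (steps 1-7 of the table); x_cl, p_cl accumulate
// the classical offsets of the moving frame.
void recentre(std::vector<cplx>& psi, const std::vector<double>& x,
              double dx, double& x_cl, double& p_cl) {
  const std::size_t N = psi.size();
  double nrm = 0.0, xbar = 0.0;
  for (std::size_t i = 0; i < N; ++i) {                // 1. position mean
    nrm  += std::norm(psi[i]) * dx;
    xbar += x[i] * std::norm(psi[i]) * dx;
  }
  xbar /= nrm;
  std::vector<cplx> phi = dft(psi, -1);                // 2. to momentum space
  double n2 = 0.0, kbar = 0.0;
  for (std::size_t j = 0; j < N; ++j) {                // 3. momentum mean
    n2   += std::norm(phi[j]);
    kbar += kval(j, N, dx) * std::norm(phi[j]);
  }
  kbar /= n2;
  for (std::size_t j = 0; j < N; ++j)                  // 4. shift position
    phi[j] *= std::polar(1.0, kval(j, N, dx) * xbar);
  psi = dft(phi, +1);                                  // 5. back to position space
  for (std::size_t i = 0; i < N; ++i)                  // 6. remove mean momentum
    psi[i] *= std::polar(1.0, -kbar * x[i]);
  x_cl += xbar;                                        // 7. update classical offsets
  p_cl += kbar;
}
\end{verbatim}
In an actual implementation the two transforms would be FFT calls; the accumulated offsets
\texttt{x\_cl} and \texttt{p\_cl} correspond to the classical quantities updated in step~7 of the
table.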
\section*{References}
\providecommand{\newblock}{}
\end{document} |
\begin{document}
\title[$p$-Proximal contraction]{A remark on the paper ``A note on the paper Best proximity point results for $p$-proximal contractions"}
\author[S.\ Som]
{Sumit Som}
\address{ Sumit Som,
Department of Mathematics,
School of Basic and Applied Sciences, Adamas University, Barasat-700126, India.}
\email{[email protected]}
\subjclass {$54H25$, $47H10$}
\keywords{Best proximity point, $p$-proximal contraction, Banach contraction principle}
\begin{abstract}
Recently, in 2020, Altun et al.\ \cite{AL} introduced the notion of $p$-proximal contractions and discussed best proximity point results for this class of mappings. Then, in 2021, Gabeleh and Markin \cite{GB} showed that the best proximity point theorem proved by Altun et al.\ in \cite{AL} follows from fixed point theory. In this short note, we show that if the $p$-proximal contraction constant satisfies $k<\frac{1}{3}$, then the existence of best proximity points for $p$-proximal contractions follows from the celebrated Banach contraction principle.
\end{abstract}
\maketitle
\section{\bf{Introduction}}
Metric fixed point theory is an essential part of mathematics as it gives sufficient conditions which ensure the existence of solutions of the equation $F(x)=x,$ where $F$ is a self-mapping defined on a metric space $(M,d).$ The Banach contraction principle for standard metric spaces is one of the most important results in metric fixed point theory and it has many applications. Let $A,B$ be non-empty subsets of a metric space $(M,d)$ and $Q:A\rightarrow B$ be a non-self mapping. A necessary condition to guarantee the existence of solutions of the equation $Qx=x$ is $Q(A)\cap A\neq \phi.$ If $Q(A)\cap A= \phi$ then the mapping $Q$ has no fixed points. In this case, one seeks an element in the domain space whose distance from its image is minimal, i.e., one interesting problem is to $\mbox{minimize}~d(x,Qx)$ such that $x\in A.$ Since $d(x,Qx)\geq d(A,B)=\inf~\{d(x,y):x\in A, y\in B\},$ one searches for an element $x\in A$ such that $d(x,Qx)= d(A,B).$ Best proximity point problems deal with this situation. Best proximity point theorems are usually established to generalize fixed point theorems in metric spaces. In 2020, Altun et al.\ \cite{AL} introduced the notion of $p$-proximal contractions and discussed best proximity point results for this class of mappings. Then, in 2021, Gabeleh and Markin \cite{GB} showed that the best proximity point theorem proved by Altun et al.\ in \cite{AL} follows from a result in fixed point theory. In this short note, we show that if the $p$-proximal contraction constant satisfies $k<\frac{1}{3}$, then the existence of best proximity points for $p$-proximal contractions follows from the Banach contraction principle.
\section{\bf{Main results}}
We first recall the following definition of $p$-proximal contraction from \cite{AL}.
\begin{definition}\cite{AL}
Let $(A, B)$ be a pair of nonempty subsets of a metric space $(M,d).$ A mapping $f:A\rightarrow B$ is said to be a $p$-proximal contraction if there exists $k\in (0,1)$ such that \[
\begin{rcases}
d(u_1, f(x_1))= d(A, B)\\
d(u_2, f(x_2))= d(A, B)
\end{rcases}
{\Longrightarrow d(u_1,u_2)\leq k \Big(d(x_1,x_2)+|d(u_1,x_1)-d(u_2,x_2)|\Big)}
\] for all $u_1, u_2, x_1, x_2 \in A,$ where $d(A,B) = \inf\Big\{d(x, y): x\in A,\ y\in B\Big\}.$
\end{definition}
In this paper, we call the constant $k$ in the above definition the $p$-proximal contraction constant. The following notations will be needed.
Let $(M,d)$ be a metric space and $A,B$ be nonempty subsets of $M.$ Then
$$A_0=\{x\in A: d(x,y)=d(A,B)~\mbox{for some}~y\in B\}.$$
$$B_0=\{y\in B: d(x,y)=d(A,B)~\mbox{for some}~x\in A\}.$$
\begin{definition}\cite{BS}
Let $(M,d)$ be a metric space and $A,B$ be two non-empty subsets of $M.$ Then $B$ is said to be approximatively compact with respect to $A$ if every sequence $\{y_n\}$ in $B$ satisfying $d(x,y_n)\rightarrow d(x,B)$ as $n\rightarrow \infty$ for some $x\in A$ has a convergent subsequence.
\end{definition}
We need the following lemma from \cite{FE}.
\begin{lemma}\cite[Proposition 3.3]{FE}\label{b}
Let $(A,B)$ be a nonempty and closed pair of subsets of a metric space $(X,d)$ such that $B$ is approximatively compact with respect to $A.$ Then $A_0$ is closed.
\end{lemma}
In \cite{AL}, Altun et al. proved the following best proximity point result.
\begin{theorem}\cite{AL}\label{a}
Let $A,B$ be nonempty and closed subsets of a complete metric space $(M,d)$ such that $A_0$ is nonempty and $B$ is approximatively compact with respect to $A.$ Let $T:A\rightarrow B$ be a $p$-proximal contraction such that $A_0\neq \phi$ and $T(A_0)\subseteq B_0.$ Then $T$ has a unique best proximity point.
\end{theorem}
In \cite{GB}, Gabeleh and Markin showed that Theorem \ref{a} follows from the following fixed point theorem.
\begin{theorem}\cite{OP}\label{c}
Let $(M,d)$ be a complete metric space and $T:M\rightarrow M$ be a $p$-contraction mapping. Then $T$ has a unique fixed point and, for any $x_0\in M,$ the Picard iteration sequence $\{T^{n}(x_0)\}$ converges to the fixed point of $T.$
\end{theorem}
We now state our main result.
\begin{theorem}
If the $p$-proximal contraction constant satisfies $0<k<\frac{1}{3},$ then Theorem \ref{a} follows from the Banach contraction principle.
\end{theorem}
\begin{proof}
Let $x\in A_0.$ As $T(A_0)\subseteq B_0,$ we have $T(x)\in B_0.$ This implies there exists $y\in A_0$ such that $d(y,T(x))=d(A,B).$ We first show that such a $y\in A_0$ is unique. Suppose there exist $y_1,y_2\in A_0$ such that $d(y_1,T(x))=d(A,B)$ and $d(y_2,T(x))=d(A,B).$ Since $T:A\rightarrow B$ is a $p$-proximal contraction, we have
$$d(y_1,y_2)\leq k\Big(d(x,x)+|d(y_1,x)-d(y_2,x)|\Big)\leq k\, d(y_1,y_2),$$
where the last inequality uses $|d(y_1,x)-d(y_2,x)|\leq d(y_1,y_2)$ (triangle inequality). Since $k<1,$ this forces $y_1=y_2.$
Let $S_1:A_0\rightarrow A_0$ be defined by $S_1(x)=y.$ Now, we will show that $S_1$ is a contraction mapping. Let $x_1,x_2\in A_0.$ As $d(S_1(x_1),T(x_1))=d(A,B)$ and $d(S_1(x_2),T(x_2))=d(A,B)$ and $T$ is a $p$-proximal contraction, we have
$$d(S_1(x_1),S_1(x_2))\leq k\Big(d(x_1,x_2)+|d(S_1(x_1),x_1)-d(S_1(x_2),x_2)|\Big)$$
$$\Longrightarrow d(S_1(x_1),S_1(x_2))\leq k\Big(d(x_1,x_2)+ d(S_1(x_1),S_1(x_2))+d(x_1,x_2)\Big),$$
since $|d(S_1(x_1),x_1)-d(S_1(x_2),x_2)|\leq d(S_1(x_1),S_1(x_2))+d(x_1,x_2)$ by the triangle inequality.
$$\Longrightarrow d(S_1(x_1),S_1(x_2))\leq \frac{2k}{1-k} d(x_1,x_2).$$
Since $0<k<\frac{1}{3},$ we have $0<\frac{2k}{1-k}<1.$ This shows that $S_1:A_0\rightarrow A_0$ is a Banach contraction mapping. From Lemma \ref{b}, $A_0$ is closed, so $A_0$ is a complete metric space. Then, by the Banach contraction principle, the mapping $S_1$ has a unique fixed point $z\in A_0.$ Now, $d(z,T(z))=d(S_1(z),T(z))=d(A,B).$ This shows that $z$ is a best proximity point for $T.$ Uniqueness follows from the definition of a $p$-proximal contraction. Also, we can conclude that for any $x_0\in A_0$ the sequence $\{S_1^{n}(x_0)\}$ converges to the unique best proximity point of $T.$
\end{proof}
\section{Conclusion}
The main contribution of the current paper is to show that if the $p$-proximal contraction constant satisfies $0<k<\frac{1}{3},$ then the best proximity point theorem by Altun et al.\ \cite{AL} follows from the Banach contraction principle. If $\frac{1}{3}\leq k<1,$ then the best proximity point theorem by Altun et al.\ \cite{AL} follows from Theorem \ref{c}, as already shown by Gabeleh and Markin in \cite{GB}.
\end{document} |
\begin{document}
\title{Concurrent Hash Tables: Fast and General(?)!}
\author{Tobias Maier$^1$ \and Peter Sanders$^1$ \and Roman Dementiev$^2$}
\institute{$^1$ Karlsruhe Institute of Technology, Karlsruhe, Germany \quad \email{\{t.maier,sanders\}@kit.edu}\\
$^2$ Intel Deutschland GmbH \hspace{4.53cm} \email{[email protected]}}
\maketitle
\begin{abstract}
Concurrent hash tables are among the most important concurrent data
structures and are used in numerous applications. Since hash table accesses
can dominate the execution time of whole applications, we need implementations
that achieve good speedup even in these cases. Unfortunately, currently
available concurrent hashing libraries fall far short of this
requirement, in particular when adaptively sized tables are necessary or
contention on some elements occurs.
Our starting point for better performing data structures is a fast and simple
lock-free concurrent hash table based on linear probing that is however
limited to word sized key-value types and does not support dynamic size
adaptation. We explain how to lift these limitations in a provably scalable
way and demonstrate that dynamic growing has a performance overhead comparable
to the same generalization in sequential hash tables.
We perform extensive experiments comparing the performance of our
implementations with six of the most widely used concurrent hash tables. Ours
are considerably faster than the best algorithms with similar restrictions and
an order of magnitude faster than the best more general tables. In some
extreme cases, the difference even approaches four orders of magnitude.
\end{abstract}
{\bf Category: }
[D.1.3] Programming Techniques Concurrent Programming
[E.1] Data Structures Tables
[E.2] Data Storage Representation Hash-table representations
{\bf Terms: }
Performance, Experimentation, Measurement, Design, Algorithms
{\bf Keywords: }
Concurrency, dynamic data structures, experimental analysis, hash table, lock-freedom, transactional memory
\section{Introduction}\label{sec:int}
A hash table is a dynamic data structure which stores a set of elements that are
accessible by their key. It supports insertion, deletion, find and update in
constant expected time. In a concurrent hash table, multiple threads have access
to the same table. This allows threads to share information in a flexible and
efficient way. Therefore, concurrent hash tables are one of the most important
concurrent data structures. See Section~\ref{sec:semantics} for a more detailed
discussion of concurrent hash table functionality.
To show the ubiquity of hash tables we give a short list of example
applications: A very simple use case is storing sparse sets of precomputed
solutions (e.g. \cite{precomPassHash},
\cite{BFMSS07}). A more complicated one
is aggregation as it is frequently used in analytical data base queries of the
form {\tt SELECT FROM}\ldots {\tt COUNT}\ldots {\tt GROUP BY} $x$
\cite{muller2015cache}. Such a query selects rows from one or several relations
and counts for every key $x$ how many rows have been found (similar queries work
with {\tt SUM}, {\tt MIN}, or {\tt MAX}). Hashing can also be used for a
data-base join \cite{chen2007improving}. Another group of examples is the
exploration of a large combinatorial search space where a hash table is used to
remember the already explored elements (e.g., in dynamic programming
\cite{StivalaCASTable10}, itemset mining \cite{PCY95}, a chess program, or when
exploring an implicitly defined graph in model checking
\cite{stornetta1996implementation}). Similarly, a hash table can maintain a set
of cached objects to save I/Os \cite{nishtala2013scaling}. Further examples are
duplicate removal, storing the edge set of a sparse graph in order to support
edge queries \cite{MehSan08}, maintaining the set of nonempty cells in a
grid-data structure used in geometry processing
(e.g. \cite{dietzfelbinger1997reliable}), or maintaining the children in tree
data structures such as van Emde-Boas search trees \cite{DKMS04} or suffix trees
\cite{McCreight:1976:SES}.
Many of these applications have in common that -- even in the sequential version
of the program -- hash table accesses constitute a significant fraction of the
running time. Thus, it is essential to have highly scalable concurrent hash
tables that actually deliver significant speedups in order to parallelize these
applications. Unfortunately, currently available general purpose concurrent
hash tables do not offer the needed scalability (see Section~\ref{sec:exp} for
concrete numbers). On the other hand, it seems to be folklore that a lock-free
linear probing hash table where keys and values are machine words, which is
preallocated to a bounded size, and which supports no true deletion operation
can be implemented using atomic compare-and-swap (CAS) instructions
\cite{StivalaCASTable10}. Find-operations can even proceed naively and without
any write operations. In Section~\ref{sec:nongrow} we explain our own
implementation (\textit{folklore}) in detail, after elaborating on some related work,
and introducing the necessary notation (in \autoref{sec:rel} and
\ref{sec:prelim} respectively).
To see the potential big performance differences, consider an exemplary
situation with mostly read only access to the table and heavy contention for a
small number of elements that are accessed again and again by all threads.
\textit{folklore}\ actually profits from this situation because the contended
elements are likely to be replicated into local caches. On the other hand, any
implementation that needs locks or CAS instructions for find-operations, will
become much slower than the sequential code on current machines. The purpose of
our paper is to document and explain performance differences, and, more
importantly, to explore to what extent we can make \textit{folklore}\ more general
with an acceptable deterioration in performance.
These generalizations are discussed in \autoref{sec:general}. We explain how to
grow (and shrink) such a table, and how to support deletions and more general
data types. In \autoref{sec:add_tsx} we explain how hardware transactional
memory can be used to speed up insertions and updates and how it may help to
handle more general data types.
After describing implementation details in \autoref{sec:impl}, \autoref{sec:exp}
experimentally compares our hash tables with six of the most widely used
concurrent hash tables for microbenchmarks including insertion, finding, and
aggregating data. We look at both uniformly distributed and skewed input
distributions. \autoref{sec:conclusion} summarizes the results and discusses
possible lines of future research.
\section{Related Work}\label{sec:rel}
This publication follows up on our previous findings about generalizing fast
concurrent hash tables~\cite{ppoppShort}. In addition to
describing how to generalize a fast linear probing hash table, we offer an
extensive experimental analysis comparing many concurrent hash tables from
several libraries.
There has been extensive previous work on concurrent hashing. The widely used
textbook ``The Art of Multiprocessor Programming''
\cite{HerSha12} by Herlihy and Shavit devotes an entire chapter to
concurrent hashing and gives an overview over previous work. However, it seems
to us that a lot of previous work focuses more on concepts and correctness but
surprisingly little on scalability.
For example, most of the discussed growing mechanisms assume that the
size of the hash table is known exactly, without discussing that this
introduces a performance bottleneck limiting the speedup to a
constant. Similarly, the actual migration is often done sequentially.
Stivala et al.~\cite{StivalaCASTable10} describe a bounded concurrent linear
probing hash table specialized for dynamic programming that only support insert
and find. Their insert operation starts from scratch when the CAS fails which
seems suboptimal in the presence of contention. An interesting point is that
they need only word size CAS instructions at the price of reserving a special
empty value. This technique could also be adapted to port our code to machines
without 128-bit CAS.
Kim and Kim \cite{kim2013performance} compare this table with a cache-optimized
lockless implementation of hashing with chaining and with hopscotch hashing
\cite{herlihy2008hopscotch}. The experiments use only uniformly distributed
keys, i.e., there is little contention. Both linear probing and hashing with
chaining perform well in that case. The evaluation of find-performance is a bit
inconclusive: chaining wins but uses more space than linear probing. Moreover,
it is not specified whether this is for successful (use key of inserted
elements) or mostly unsuccessful (generate fresh keys) accesses. We suspect
that varying these parameters could reverse the result.
Gao et al.~\cite{GaoGrooteHesselink05} present a theoretical dynamic linear
probing hash table that is lock-free. The main contribution is a formal
correctness proof. Not all details of the algorithm are given, nor is an
implementation. There is also no analysis of the complexity of the growing procedure.
Shun and Blelloch \cite{shun2014phase} propose \emph{phase concurrent hash tables} which are
allowed to use only a single operation within a globally synchronized
phase. They show how phase concurrency helps to implement some operations more
efficiently and even deterministically in a linear probing context. For
example, deletions can adapt the approach from \cite{Knu98} and rearrange
elements. This is not possible in a general hash table since this might cause
find-operations to report false negatives. They also outline an elegant growing
mechanism, albeit without implementing it and without filling in all the details,
like how to initialize newly allocated tables. They propose to trigger a
growing operation when any operation has to scan more than $k\log n$ elements
where $k$ is a tuning parameter. This approach is tempting since it is somewhat
faster than the approximate size estimator we use. We actually tried that but
found that this trigger has a very high variance -- sometimes it triggers late
making operations rather slow, sometimes it triggers early wasting a lot of
space. We also have theoretical concerns since the bound $k\log n$ on the length
of the longest probe sequence implies strong assumptions on certain properties
of the hash function. Shun and Blelloch make extensive experiments including
applications from the problem based benchmark suite
\cite{shun2012brief}.
Li et al.~\cite{AlgoImpCuckoo14} use
the bucket cuckoo-hashing method by Dietzfelbinger and Weidling \cite{DieWei07} and develop a concurrent
implementation. They exploit that using a BFS-based insertion algorithm, the
number of element moves for an insertion is very small. They use fine grained
locks which can sometimes be avoided using transactional memory (Intel
TSX). As a result of their work, they implemented the small open source library libcuckoo,
which we measure against (which does not use TSX). This approach has the
potential to achieve very good space efficiency. However, our measurements
indicate that the performance penalty is high.
The practical importance of concurrent hash tables also leads to new and
innovative implementations outside of the scientific community. A good example
of this is the Junction library, that was published by Preshing \cite{junctionGit}
in the beginning of 2016, shortly after our initial publication \cite{hashTRArxiv}.
\section{Preliminaries}\label{sec:prelim}
We assume that each application thread has its own designated hardware thread or
processing core and denote the number of these threads with $p$. A data
structure is non-blocking if no blocked thread currently accessing this data
structure can block an operation on the data structure by another thread. A data
structure is lock-free if it is non-blocking and guarantees global progress,
i.e., there must always be at least one thread finishing its operation in a
finite number of steps.
\emph{Hash Tables} store a set of $\langle\Id{Key}, \Id{Value}\rangle$ pairs
(elements).
\footnote{Much of what is said here can be generalized to the case when
\Id{Element}s are black boxes from which keys are extracted by an accessor
function.}
A hash function $h$ maps each key to a cell of a table (an array). The
number of elements in the hash table is denoted $n$ and the number of operations
is $m$. For the purpose of algorithm analysis, we assume that $n$ and $m$ are
$\gg p^2$ -- this allows us to simplify algorithm complexities by hiding $O(p)$
terms that are independent of $n$ and $m$ in the overall cost.
Sequential hash tables support the insertion of elements, and finding, updating,
or deleting an element with given key -- all of this in constant expected
time. Further operations compute $n$ (\Id{size}), build a table with a given
number of initial elements, and iterate over all elements (\Id{forall}).
\emph{Linear Probing} is one of the most popular sequential hash table
schemes used in practice. An element $\langle x,a\rangle$ is stored at the first free table
entry following position $h(x)$ (wrapping around when the end of the table is
reached). Linear probing is at the same time simple and efficient -- if the
table is not too full, a single cache line access will be enough most of the
time. Deletion can be implemented by rearranging the elements locally
\cite{Knu98} to avoid holes violating the invariant mentioned above. When the
table becomes too full or too empty, the elements can be migrated to a larger or
smaller table respectively. The migration cost can be charged to insertions and
deletions causing amortized constant overhead.
\section{Concurrent Hash Table Interface and Folklore Implementation}
\label{sec:semantics}\label{sec:nongrow}
Although it seems quite clear what a hash table is and how this generalizes to
concurrent hash tables, there is a surprising number of details to
consider. Therefore, we will quickly go over some of our interface decisions,
and detail how this interface can be implemented in a simple, fast, lock-free
concurrent linear probing hash table.
This hash table will have a bounded capacity $c$ that has to be specified when
the table is constructed. It is the basis for all other hash table variants
presented in this publication. We call this table the \emph{folklore} solution,
because variations of it are used in many publications
and it is not clear to us by whom it was first published.
The most important requirement for concurrent data structures is that they
should be \emph{linearizable}, i.e., it must be possible to order the hash table
operations in some sequence -- without reordering two operations of the same
thread -- so that executing them sequentially in that order yields the same
results as the concurrent processing. For a hash table data structure, this
basically means that all operations should be executed atomically some time
between their invocation and their return. For example, it has to be avoided
that a \texttt{find} returns an inconsistent state, e.g.~a half-updated data field
that was never actually stored at the corresponding key.
Our variant of the folklore solution ensures the atomicity of operations using
2-word atomic CAS operations for all changes of the table. As long as the key
and the value each only use one machine word, we can use 2-word CAS operations
to atomically manipulate a stored key together with the corresponding
value. There are other variants that avoid the need for 2-word compare-and-swap
operations, but they often need a designated empty value (see \cite{junctionGit}).
Since the corresponding machine instructions are widely available on modern
hardware, using them should not be a problem. If the target architecture does not
support the needed instructions, the implementation can easily be switched to
a variant of the folklore solution which does not use 2-word CAS. As can
easily be deduced from the context, we will usually omit the ``2-word'' prefix and
use the abbreviation CAS for both single and double word CAS operations.
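To make the cell layout concrete, the following C++ sketch (our own naming, not the code of any
of the libraries discussed here) wraps a 16-byte key--value cell in \texttt{std::atomic}. Whether
the compare-exchange compiles to a genuine 128-bit CAS instruction (e.g.\ \texttt{cmpxchg16b} on
x86-64) depends on platform and compiler flags, which is why lock-freedom should be verified via
\texttt{is\_lock\_free()}.
\begin{verbatim}
// Sketch of a 16-byte cell manipulated with a 2-word CAS (our naming).
#include <atomic>
#include <cstdint>

struct Cell {
  std::uint64_t key;
  std::uint64_t value;
};

constexpr std::uint64_t EMPTY_KEY = 0;    // assumption: key 0 is reserved

struct Slot {
  std::atomic<Cell> cell{Cell{EMPTY_KEY, 0}};

  // Claim an empty slot for <k, v>; exactly one competing thread succeeds.
  bool tryInsert(std::uint64_t k, std::uint64_t v) {
    Cell expected{EMPTY_KEY, 0};
    return cell.compare_exchange_strong(expected, Cell{k, v});
  }

  // CAS-based update of an occupied cell (the default atomicUpdate path).
  bool tryUpdate(Cell expected, std::uint64_t newValue) {
    return cell.compare_exchange_strong(expected, Cell{expected.key, newValue});
  }
};
\end{verbatim}
A full table would hold an array of such slots and combine \texttt{tryInsert} and
\texttt{tryUpdate} in the probing loop of Algorithm~\ref{alg:mod}.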
\begin{algorithm*}[t]
\caption{Pseudocode for the \texttt{insertOrUpdate} operation}
\label{alg:mod}
\Indm
\KwIn{Key $k$, Data Element $d$, Update Function $\textit{up}:\ \textit{Key}\times \textit{Val} \times \textit{Val} \rightarrow \textit{Val}$}
\KwOut{Boolean \texttt{true} when a new key was inserted, \texttt{false} if an update occurred}
\Indp
$i$\ \texttt{=}\ \texttt{h(}$k$\texttt{)}\;
\While{\texttt{true}} {
$i\ \texttt{=}\ i\ \texttt{\%}\ c$\;
$\textit{current}\ \texttt{=}\ \textit{table}\texttt{[}i\texttt{]}$\;
\uIf(\tcp*[f]{\label{l:upretf}Key is not present yet \dots}){\textit{current.key} == \texttt{empty\_key}}{
\eIf{\textit{table}\texttt{[}$i$\texttt{].CAS(}\textit{current}$, \langle k, d\,\rangle$\texttt{)}}
{\Return{\texttt{true}}}
{ $i$\texttt{-\,\!-}\; }
}
\uElseIf(\tcp*[f]{\label{l:inretf}Same key already present \dots}){$\textit{current.key}\ \texttt{==}\ k$}{
\eIf{\textit{table}\texttt{[}$i$\texttt{].atomicUpdate(}$\textit{current},\ d,\ \textit{up}$\texttt{)}}
{
\tcp*[f]{\label{l:atomup} default:
{\it\texttt{atomicUpdate(}$\cdot$\texttt{) = CAS(}
$current$\texttt{, }\textit{up}\texttt{(}
$k, \textit{current.data},\ d$\texttt{))}}}\\
\Return{false} \label{l:uprett}
}
{ $i$\texttt{-\,\!-}\; }
}
$i$\texttt{++}\;
}
\end{algorithm*}
\paragraph*{Initialization} The constructor allocates an array of size $c$
consisting of 128-bit aligned cells whose keys are initialized to the empty
value.
\paragraph*{Modifications} We propose to categorize all changes to the hash
table content into one of the following three functions, which can be implemented
very similarly (deletions are not covered here).
\noindent\texttt{insert(}$k, d$\texttt{)}: Returns \texttt{false} if an element with
the specified key is already present. Only one operation should succeed if
multiple threads are inserting the same key at the same time.
\noindent\texttt{update(}$k, d, \textit{up}$\texttt{)}: Returns \texttt{false} if there is no
value stored at the specified key; otherwise this function atomically updates
the stored value to $\textit{new} = \textit{up}(\textit{current}, d)$. Notice
that the resulting value can depend on both the current value and the
input parameter $d$.
\noindent\texttt{insertOrUpdate(}$k, d, \textit{up}$\texttt{)}: This operation updates the
current value, if one is present, otherwise the given data element is inserted
as the new value. The function returns \texttt{true}, if \texttt{insertOrUpdate}
performed an \texttt{insert} (key was not present), and \texttt{false} if an
\texttt{update} was executed.
We choose this interface for two main reasons. It allows
applications to quickly differentiate between inserting and changing an element
-- this is especially useful since the thread that first inserted a key can be
identified uniquely. Additionally it allows transparent, lockless updates that
can be more complex than just replacing the current value (think of CAS or Fetch-and-Add).
The update interface using an update function deserves some special attention,
as it is a novel approach compared to most interfaces we encountered during our
research. Most implementations fall into one of two categories: They return
mutable references to table elements -- forcing the user to implement atomic
operations on the data type; or they offer an \texttt{update} function which
usually replaces the current value with a new one -- making it very hard to
implement atomic changes like a simple counter (\texttt{find} +
\texttt{increment} + \texttt{overwrite} not necessarily atomic).
In Algorithm~\ref{alg:mod} we show the pseudocode of the \texttt{insertOrUpdate}
function. The operation computes the hash value of the key and proceeds to look for an
element with the appropriate key (beginning at the corresponding position). If
no element matching the key is found (when an empty space is encountered), the
new element has to be inserted. This is done using a CAS operation. A failed
swap can only be caused by another insertion into the same cell. In this case,
we have to revisit the same cell, to check if the inserted element matches the
current key. If a cell storing the same key is found, it will be updated using
the \texttt{atomicUpdate} function. This function is usually implemented by
evaluating the passed update function (\textit{up}) and using a CAS operation,
to change the cell. In the case of multiple concurrent updates, at least one
will be successful.
In our (C++) implementation, partial template specialization can be used to
implement more efficient \texttt{atomicUpdate} variants using atomic operations
-- changing the default \autoref{l:atomup}, e.g.~overwrite (using single word
store), increment (using fetch and add).
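As a hedged illustration of this idea (our own names, with the update function simplified to act
on the value only): the generic path retries a CAS with the user-supplied function, whereas a
plain addition can target the value word with a single fetch-and-add.
\begin{verbatim}
#include <atomic>
#include <cstdint>

struct CellWords {
  std::atomic<std::uint64_t> key;
  std::atomic<std::uint64_t> value;
};

// Generic update: CAS loop applying an arbitrary function to the old value.
template <class Update>
std::uint64_t atomicUpdate(CellWords& c, std::uint64_t d, Update up) {
  std::uint64_t old = c.value.load();
  while (!c.value.compare_exchange_weak(old, up(old, d))) { /* retry */ }
  return old;
}

// "Specialized" update for counting: a single fetch_add, no retry loop.
std::uint64_t atomicIncrement(CellWords& c, std::uint64_t d) {
  return c.value.fetch_add(d);
}
\end{verbatim}
Note that such single-word shortcuts only touch the value half of a cell; in the growing variant
that marks keys (\autoref{ss:async}), updates must fall back to CAS so that the mark bit is
respected.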
The code presented in Algorithm~\ref{alg:mod} can easily be modified to implement the
\texttt{insert} (return \texttt{false} when the key is already present -- \autoref{l:inretf}) and
\texttt{update} (return \texttt{true} after a successful update -- \autoref{l:uprett} and
\texttt{false} when the key is not found -- \autoref{l:upretf}) functions. All modification functions
have a constant expected running time.
\paragraph*{Lookup} Since this folklore implementation does not move elements
within the table, it would be possible for \texttt{find($k$)} to return a
reference to the corresponding element. In our experience, returning references
directly tempts inexperienced programmers to opperate on these references in a
way that is not necessarily threadsafe. Therefore, our implementation returns a
copy of the corresponding cell ($\langle k, d\,\rangle$), if one is found
($\langle\texttt{empty\_key}, \cdot \rangle$ otherwise). The \texttt{find}
operation has a constant expected running time.
Our implementation of \texttt{find} is somewhat non-trivial, because it is not
possible to read two machine words at once using an atomic
instruction\footnote{The element is not read atomically, because x86 does not
support that. One could use a 2-word CAS to achieve the same effect but this
would have disastrous effects on performance when many threads try to find the
same element.}. Therefore it is possible for a cell to be changed in between
reading its key and its value -- this is called a \emph{torn read}. We have to
make sure that torn reads cannot lead to any wrong behavior. There are two
kinds of interesting torn reads: First, an empty key is read while the searched
key is inserted into the same cell; in this case the element is not found
(consistent since it has not been fully inserted). Second, the element is updated
between the key being read and the data being read; since the data is read
second, only the newer data is read (consistent with a finished update).
\paragraph*{Deletions} The folklore solution can only handle deletions using
dummy elements -- called tombstones. Usually the key stored in a cell is
replaced with \texttt{del\_key}. Afterwards the cell cannot be used anymore.
This method of handling deleted elements is usually not feasible, as it does not
increase the capacity for new elements. In \autoref{ss:delete} we will show how
our generalizations can be used to handle tombstones more efficiently.
\paragraph*{Bulk Operations} While not often used in practice, the folklore table
can be modified to support operations like \texttt{buildFrom($\cdot$)} (see
\autoref{ss:bulk}) -- using a bulk insertion which can be more efficient than
element-wise insertion -- or \texttt{forall($f$)} -- which can be implemented in an embarrassingly parallel way by splitting the table between threads.
\paragraph*{Size} Keeping track of the number of contained elements deserves
special notice here because it turns out to be significantly harder in
concurrent hash tables. In sequential hash tables, it is trivial to count the
number of contained elements -- using a single counter. This same method is
possible in parallel tables using atomic fetch-and-add operations, but it
introduces a massive amount of contention on a single counter, creating a
performance bottleneck.
Because of this we did not include a counting method in the folklore implementation. In
\autoref{ss:size} we show how this can be alleviated using an approximate count.
\section{Generalizations and Extensions}\label{sec:general}
In this section, we detail how to adapt the concurrent hash table implementation
-- described in the previous section -- to be universally applicable to all hash
table workloads. Most of our efforts have gone into a scalable migration method
that is used to move all elements stored in one table into another table. It
turns out that a fast migration can solve most shortcomings of the folklore
implementation (especially deletions and adaptable size).
\subsection{Storing Thread-Local Data}\label{ss:handle}
By itself, storing thread specific data connected to a hash table does not offer
additional functionality, but it is necessary to efficiently implement some of
our other extensions. Per-thread data can be used in many different ways, from
counting the number of insertions to caching shared resources.
From a theoretical point of view, it is easy to store thread specific data. The
additional space is usually only dependent on the number of threads ($\Oh{p}$
additional space), since the stored data is often constant sized. Compared to
the hash table this is usually negligible ($p \ll n < c$).
Storing thread specific data is challenging from a software design and performance
perspective. Some of our competitors use a \texttt{register($\cdot$)} function
that each thread has to call before accessing the table. This allocates some
memory, that can be accessed using the global hash table object.
Our solution uses explicit handles. Each thread has to create a handle, before
accessing the hash table. These handles can store thread specific data, since
they are not shared between threads. This is not only in line with the RAII
idiom (resource acquisition is initialization~\cite{meyers2005effective}), it also protects our
implementation from some performance pitfalls like unnecessary indirections and
false sharing\footnote{Significant slow down created by the cache coherency
protocol due to multiple threads repeatedly changing distinct values within
the same cache line.}. Moreover, the data can easily be deleted once the
thread does not use the hash table anymore (delete the handle).
\subsection{Approximating the Size}\label{ss:size}
Keeping an exact count of the elements stored in the hash table can often lead
to contention on one count variable. Therefore, we propose to support only an
approximate size operation.
To keep an approximate count of all elements, each thread maintains a local
counter of its successful insertions (using the method described in
\autoref{ss:handle}). Every $\Theta(p)$ such insertions this counter is
atomically added to a global insertion counter $I$ and then reset. Contention at
$I$ can be provably made small by randomizing the exact number of local
insertions accepted before adding to the global counter, e.g., between $1$ and
$p$. $I$ underestimates the size by at most $\Oh{p^2}$. Since we assume the size
to be $\gg p^2$ this still means a small relative error. By adding the maximal
error, we also get an upper bound for the table size.
If deletions are also allowed, we maintain a global counter $D$ in a similar
way. $S=I-D$ is then a good estimate of the total size as long as $S\gg
p^2$.
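A minimal sketch of this counting scheme (our own structure and names, not the library's code) is
the following; each thread buffers its successful insertions and flushes them to the shared
counter after a randomized number of local insertions between $1$ and $p$.
\begin{verbatim}
#include <atomic>
#include <cstdint>
#include <random>

struct ApproxCount {
  std::atomic<std::int64_t> inserted{0};  // global counter I
  std::atomic<std::int64_t> deleted{0};   // global counter D
  int p = 1;                              // number of threads

  std::int64_t estimate() const { return inserted.load() - deleted.load(); }
};

struct LocalCount {
  ApproxCount* global;
  std::int64_t pendingInserts = 0;
  std::int64_t threshold = 1;
  std::mt19937 rng{std::random_device{}()};

  explicit LocalCount(ApproxCount* g) : global(g) { resetThreshold(); }

  void resetThreshold() {  // randomized flush point in 1..p reduces contention
    threshold = std::uniform_int_distribution<std::int64_t>(1, global->p)(rng);
  }

  void onInsert() {        // called after each successful insertion
    if (++pendingInserts >= threshold) {
      global->inserted.fetch_add(pendingInserts);
      pendingInserts = 0;
      resetThreshold();
    }
  }
};
\end{verbatim}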
When a table is migrated for growing or shrinking (see \autoref{ss:growShrink}), each
migration thread locally counts the elements it moves. At the end of the migration, local
counters are added to create the initial count for $I$ ($D$ is set to $0$).
This method can also be extended to give an exact count -- in absence of
concurrent insertions/deletions. To do this, a list of all handles has to be
stored at the global hash table object. A thread can now iterate over all
handles computing the actual element size.
\subsection{Table Migration}
While Gao et al.~\cite{GaoGrooteHesselink05} have shown that lock-free dynamic
linear probing hash tables are possible, there is no result on their practical
feasibility. Our focus is geared more towards engineering the fastest migration
possible, therefore, we are fine with small amounts of locking, as long as it
improves the overall performance.
\subsubsection{Eliminating Unnecessary Contention from the Migration}
\label{ss:growShrink}
If the table size is not fixed, it makes sense to assume that the hash function
$h$ yields a large pseudorandom integer which is then mapped to a cell position
in $0..c-1$ where $c$ is the current capacity.\footnote{We use $x..y$ as a
shorthand for $\set{x,\ldots,y}$ in this paper.} We will discuss a way to do
this by scaling. If $h$ yields values in the global range $0..U-1$ we map key
$x$ to cell $h_c(x)\ensuremath{\mathbin{:=}}\lfloor h(x)\frac{c}{U}\rfloor$. Note that when both $c$ and $U$
are powers of two, the mapping can be implemented by a simple shift operation.
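For illustration (a sketch assuming $U=2^{64}$ and a power-of-two capacity $c=2^{L}$), the scaling
function reduces to a single right shift:
\begin{verbatim}
#include <cstdint>

// h_c(x) = floor(h(x) * c / U) with U = 2^64 and c = 2^logCapacity.
// Requires 1 <= logCapacity <= 63; for general c, a 64x64 -> 128 bit
// multiply realises the same monotone mapping.
std::uint64_t scale(std::uint64_t hash, unsigned logCapacity) {
  return hash >> (64 - logCapacity);
}
\end{verbatim}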
\myparagraph{Growing} Now suppose that we want to migrate the table into a table
that has at least the same size (growing factor $\gamma\geq 1$). Exploiting the
properties of linear probing and our scaling function, there is a surprisingly
simple way to migrate the elements from the old table to the new table in
parallel which results in exactly the same order a sequential algorithm would
take and that completely avoids synchronization between threads.
\begin{mylemma}\label{lem:grow} Consider a range $a..b$ of nonempty cells in the
old table with the property that the cells $a-1\bmod c$ and $b+1\bmod c$ are both
empty -- call such a range a \emph{cluster} (see \autoref{fig:cluster}). When
migrating a table, sequential migration will map the elements stored in that
cluster into the range $\floor{\gamma a}.. \floor{\gamma (b+1)}$
in the target table, regardless of the rest of the source array.
\end{mylemma}
\begin{figure*}
\caption{Two neighboring clusters and their non-overlapping target areas ($\gamma = 2$).}
\caption{Left: table split into even blocks. Right: resulting cluster distribution (moved implicit block borders).}
\caption{Cluster migration and work distribution}
\label{fig:cluster}
\label{fig:blocks}
\end{figure*}
\begin{proof}
Let $x$ be an element stored in the cluster $a..b$ at position
$p(x) = h_c(x)+d(x)$. Then $h_c(x)$ has to be in the cluster $a..b$, because
linear probing does not displace elements over empty cells
($h_c(x) = \lfloor h(x)\frac{c}{U}\rfloor \geq a$), and therefore,
$h(x)\frac{c'}{U} \geq a\frac{c'}{c} \geq \gamma a$.
Similarly, from $\lfloor h(x)\frac{c}{U} \rfloor\leq b$ follows
$h(x)\frac{c}{U} < b+1$, and therefore, $h(x)\frac{c'}{U} < \gamma (b+1)$.
\end{proof}
Therefore, two distinct clusters in the source table cannot overlap in the
target table. We can exploit this lemma by assigning entire clusters to
migrating threads which can then process each cluster completely independently.
Distributing clusters between threads can easily be achieved by first splitting
the table into blocks (regardless of the tables contents) which we assign to
threads for parallel migration. A thread assigned block $d..e$ will migrate
those clusters that start within this range -- implicitly moving the block
borders to free cells as seen in \autoref{fig:blocks}). Since the average
cluster length is short and $c=\Om{p^2}$, it is sufficient to deal out blocks of
size $\Om{p}$ using a single shared global variable and atomic fetch-and-add
operations. Additionally each thread is responsible for initializing all cells
in its region of the target table. This is important, because sequentially
initializing the hash table can quickly become infeasible.
Note that waiting for the last thread at the end of the migration introduces
some waiting (locking), but this does not create significant work imbalance,
since the block/cluster migration is really fast and clusters are expected to be
short.
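The work distribution itself needs only a single shared cursor. The following sketch (our
simplification; the per-cluster copy into the target table is abstracted behind a callback) deals
out blocks with one fetch-and-add per block:
\begin{verbatim}
#include <algorithm>
#include <atomic>
#include <cstddef>

struct BlockDealer {
  std::atomic<std::size_t> next{0};
  std::size_t capacity;   // size of the source table
  std::size_t blockSize;  // Omega(p) cells per block, e.g. 4096

  // Each worker repeatedly grabs a block and migrates exactly those
  // clusters that start inside it (cluster borders are found by scanning
  // for empty cells), so no further synchronization between workers is
  // needed.
  template <class MigrateClustersStartingIn>
  void work(MigrateClustersStartingIn migrate) {
    for (;;) {
      std::size_t begin = next.fetch_add(blockSize);
      if (begin >= capacity) break;
      migrate(begin, std::min(begin + blockSize, capacity));
    }
  }
};
\end{verbatim}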
\myparagraph{Shrinking} Unfortunately, the nice structural Lemma~\ref{lem:grow}
no longer applies. We can still parallelize the migration with little
synchronization. Once more, we cut the source table into blocks that we assign
to threads for migration. The scaling function maps each block $a..b$ in the
source table to a block $a'..b'$ in the target table. We have to be careful
with rounding issues so that the blocks in the target table are non-overlapping.
We can then proceed in two phases. First, a migrating thread migrates those
elements that move from $a..b$ to $a'..b'$. These migrations can be done in a
sequential manner, since target blocks are disjoint. The majority of elements
will fit into the target block. Then, after a barrier synchronization, all
elements that did not fit into their respective target blocks are migrated using
concurrent insertion i.e., using atomic operations. This has negligible overhead
since elements like this only exist at the boundaries of blocks. The resulting
allocation of elements in the target table will no longer be the same as for a
sequential migration but as long as the data structure invariants of a linear
probing hash table are fulfilled, this is not a problem.
\subsubsection{Hiding the Migration from the Underlying Application}
\label{ss:async}
To make the concurrent hash table more general and easy to use, we would like to
avoid all explicit synchronization. The growing (and shrinking) operations
should be performed asynchronously when needed, without involvement of the
underlying application.
The migration is triggered once the table is filled to a factor $\geq\alpha$
(e.g.\ $50\,\%$); the fill ratio is estimated using the approximate count from
\autoref{ss:size} and checked whenever the global count is updated. When a
growing operation is triggered, the capacity will be increased by a factor of
$\gamma\geq1$ (usually $\gamma = 2$). The difficulty is ensuring that this
operation is done in a transparent way without introducing any inconsistent
behavior and without incurring undue overheads.
To hide the migration process from the user, two problems have to be solved.
First, we have to find threads to grow the table, and second, we have to ensure
that changing elements in the source table will not lead to any inconsistent
states in the target table (possibly reverting changes made during the
migration). Each of these problems can be solved in multiple ways. We
implemented two strategies for each of them resulting in four different variants
of the hash table (mix and match).
\myparagraph{Recruiting User-Threads} A simple approach to dynamically
allocate threads to growing the table is to ``enslave'' threads that
try to perform table accesses that would otherwise have to wait for the
completion of the growing process anyway. This works really well when the table
is regularly accessed by all user-threads, but is inefficient in the worst case when
most threads stop accessing the table at some point, e.g., waiting for the
completion of a global computation phase at a barrier. The few threads still
accessing the table at this point will need a lot of time for growing (up to
$\Om{n}$) while most threads are waiting for them. One could try to also enslave
waiting threads but it looks difficult to do this in a sufficiently general and
portable way.
\myparagraph{Using a Dedicated Thread Pool} A provably efficient approach is to
maintain a pool of $p$ threads dedicated to growing the table. They are blocked
until a growing operation is triggered. This is when they are awoken to
collectively perform the migration in time $\Oh{n/p}$ and then get back to
sleep. During a migration, application threads might have to sleep until the
migration threads are finished. This will increase the CPU time of our migration
threads making this method nearly as efficient as the enslavement variant.
Using a reasonable computation model, one can show that using thread pools for
migration increases the cost of each table access by at most a constant in a
globally amortized sense (over the non-growing folklore solution). We omit the
relatively simple proof.
To remain fair to all competitors, we used exactly as many threads for the
thread pool as there were application threads accessing the table. Additionally
each migration thread was bound to a core, that was also used by one
corresponding application thread.
\myparagraph{Marking Moved Elements for Consistency (asynchronous)} During the
migration it is important that no element can be changed in the old table after
it has been copied to the new table. Otherwise, it would be hard to
guarantee that changes are correctly applied to the new table. The easiest
solution to this problem is to mark each cell before it is copied. Marking each
cell can be done using a CAS operation to set a special marked bit which is
stored in the key. In practice this reduces the possible key space. If this
reduction is a problem, see \autoref{ss:restoring_key_space} on how to circumvent
it. To ensure that no copied cell can be changed, it suffices to ensure that no
marked cell can be changed. This can easily be done by checking the bit before
each writing operation, and by using CAS operations for each update. This
prohibits the use of fast atomic operations to change element values.
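Setting the marked bit itself is a small CAS loop on the key word; a sketch (our naming):
\begin{verbatim}
#include <atomic>
#include <cstdint>

constexpr std::uint64_t MARK_BIT = std::uint64_t(1) << 63;

// Mark a cell's key word before copying it; writers in the old table check
// this bit and fail once it is set.
bool markCell(std::atomic<std::uint64_t>& keyWord) {
  std::uint64_t k = keyWord.load();
  while (!(k & MARK_BIT)) {
    if (keyWord.compare_exchange_weak(k, k | MARK_BIT))
      return true;               // we marked it; safe to copy this cell
  }
  return false;                  // someone else marked (and copies) it
}
\end{verbatim}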
After the migration, the old hash table has to be deallocated. Before
deallocating an old table, we have to make sure that no thread is currently
using it anymore. This problem can generally be solved by using reference
counting. Instead of storing the table with a usual pointer, we use a reference
counted pointer (e.g.~\texttt{std::shared\_ptr}) to ensure that the table is
eventually freed.
The main disadvantage of counting pointers is that acquiring a counting pointer
requires an atomic increment on a shared counter. Therefore, it is not feasible
to acquire a counting pointer for each operation. Instead a copy of the shared
pointer can be stored locally, together with the increasing version number of
the corresponding hash table (using the method from \autoref{ss:handle}). At the
beginning of each operation, we can use the local version number to make sure
that the local counting pointer still points to the newest table version. If
this is not the case, a new pointer will be acquired. This happens only once per
version of the hash table. The old table will automatically be freed once every
thread has updated its local pointer. Note that counting pointers cannot be
exchanged in a lock-free manner increasing the cost of changing the current
table (using a lock). This lock could be avoided by using a hazard pointer. We did not do this.
\myparagraph{Prevent Concurrent Updates to ensure Consistency (synchronized)} We
propose a simple protocol inspired by read-copy-update protocols
\cite{mckenney1998read}. The thread $t$ triggering the growing operation sets
some global growing flag using a CAS instruction. A thread $t'$ performing a
table access sets a local busy flag when starting an operation. Then it
inspects the growing flag; if the flag is set, the local flag is unset. Then
the local thread waits for the completion of the growing operation, or helps
with migrating the table depending on the current growing strategy. Thread $t$
waits until all busy flags have been unset at least once before starting the
migration. When the migration is completed, the growing flag is reset, signaling
to the waiting threads that they can safely continue their table-operations.
Because this protocol ensures that no thread is accessing the previous table
after the beginning of the migration, it can be freed without using reference
counting.
We call this method (semi-)synchronized, because grow and update operations are
disjoint. Threads participating in one growing step still arrive asynchronously,
e.g.~when the parent application called a hash table operation. Compared to the
marking based protocol, we save cost during migration by avoiding CAS
operations. However, this is at the expense of setting the busy flags for
\emph{every} operation. Our experiments indicate that overall this is only
advantageous for updates using atomic operations like fetch-and-add that cannot
coexist with the marker flags.
\subsection{Deletions}\label{ss:delete}
For concurrent linear probing, we combine \emph{tombstoning} (see
\autoref{sec:nongrow}) with our migration algorithm to clean the table once it
is filled with too many \emph{tombstones}.
A \emph{tombstone} is an element, that has a \texttt{del\_key} in place of its
key. The key $x$ of a deleted entry $\langle x,a\rangle$ is atomically changed
to $\langle\texttt{del\_key}, a\rangle$. Other table operations scan over these
deleted elements like over any other nonempty entry. No inconsistencies can
arise from deletions. In particular, a concurrent find-operation with a torn
read will return the element before the deletion since the delete-operation will
leave the value-slot $a$ untouched. A concurrent insert $\langle x,b\rangle$ might read the
key $x$ before it is overwritten by the deletion and return \Id{false} because
it concludes that an element with key $x$ is already present. This is consistent
with the outcome when the insertion is performed before the deletion in a
linearization.
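In a word-based cell layout, tombstoning therefore amounts to a single CAS on the key word, with
the value word left untouched (a sketch, assuming a reserved \texttt{del\_key} value):
\begin{verbatim}
#include <atomic>
#include <cstdint>

constexpr std::uint64_t DEL_KEY = ~std::uint64_t(0);  // assumption: reserved key

// Turn <k, a> into the tombstone <del_key, a>; the value stays intact, so a
// concurrent torn read still observes the element as it was before deletion.
bool erase(std::atomic<std::uint64_t>& keyWord, std::uint64_t k) {
  std::uint64_t expected = k;
  return keyWord.compare_exchange_strong(expected, DEL_KEY);
}
\end{verbatim}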
This method of deletion can easily be implemented in the folklore solution from
\autoref{sec:nongrow}. But the starting capacity has to be set dependent on the
number of overall insertions, since this form of deletion does not free up any
of the deleted cells. Even worse, tombstones will fill up the table and slow
down find queries.
Both of these problems can be solved by migrating all non-tombstone elements into
a new table. The decision when to migrate the table should be made solely based
on the number of insertions $I$ ($= \textit{number of nonempty cells}$). The count of all non-deleted elements $I-D$ is then used
to decide whether the table should grow, keep the same size (notice
$\gamma=1$ is a special case for our optimized migration), or shrink. Either way,
all tombstones can be removed in the course of the element migration.
\subsection{Bulk Operations}\label{ss:bulk}
Building a hash table for $n$ elements passed to the constructor can be
parallelized using integer sorting by the hash function value. This works in
time $O(n/p)$ regardless of how many times an element is inserted, i.e., sorting
circumvents contention.
See the work of Müller et al.\cite{muller2015cache} for a discussion of this phenomenon
in the context of aggregation.
This can be generalized for processing batches of size $m=\Om{n}$ that may even
contain a mix of insertions, deletions, and updates. We outline a simple
algorithm for bulk-insertion that works without explicit sorting albeit does not
avoid contention. Let $a$ denote the old size of the hash table and $b$ the
number of insertions. Then $a+b$ is an upper bound for the new table size. If
necessary, grow the table to that size or larger (see below). Finally, in
parallel, insert the new elements.
More generally, processing batches of size $m=\Om{n}$ in a globally synchronized
way can use the same strategy. We outline it for the case of bulk
insertions. Generalization to deletions, updates, or mixed batches is possible:
Integer sort the elements to be inserted by their hash key in expected time
$\Oh{m/p}$. Among elements with the same hash value, remove all but the last.
Then ``merge'' the batch and the hash table into a new hash table (that may have
to be larger to provide space for the new elements). We can adapt ideas from
parallel merging \cite{HagRue89}. We co-partition the sorted insertion array
and the hash table into corresponding pieces of size $\Oh{m/p}$. Most of the
work can now be done on these pieces in an embarrassingly parallel way -- each
piece of the insertion array is scanned sequentially by one thread. Consider an
element $\langle x,a\rangle$ and previous insertion position $i$ in the table. Then we start
looking for a free cell at position $\max(h(x),i)$.
\subsection{Restoring the Full Key Space}\label{ss:restoring_key_space}
Our table uses special keys, like the empty key (\texttt{empty\_key}) and the deleted key
(\texttt{del\_key}). Elements that actually have these keys cannot be stored in
the hash table. This can easily be fixed by using two special slots in the
global hash table data structure. This makes some case distinction necessary
but should have rather low impact on the overall performance.
One of our growing variants (asynchronous) uses a marker bit in its key field. This halves the
possible key space from $2^{64}$ to $2^{63}$. To regain the lost key space, we
can store the lost bit implicitly. Instead of using one hash table that holds
all elements, we use the two subtables $t_0$ and $t_1$. The subtable $t_0$ holds
all elements whose key does not have its topmost bit set, while $t_1$ stores all
elements whose key does have the topmost bit set; instead of storing the
topmost bit explicitly, it is simply dropped.
Each element can still be found in constant time, because when looking for a
certain key, it is immediately obvious in which table the corresponding element
will be stored. After choosing the right table, comparing the 63 explicitly
stored bits can uniquely identify the correct element. Notice that both empty
keys have to be stored distinctly (as described above).
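A sketch of this two-subtable scheme; the \texttt{Table} template parameter stands for any of the bounded tables described above storing 63-bit keys, and the interface names are illustrative:
\begin{verbatim}
#include <cstdint>

// Table is any bounded hash map storing 63-bit keys (the 64th bit is used as
// marker bit internally); the insert/find interface below is illustrative.
template <class Table>
struct SplitTable {
    Table t0;   // elements whose key has the topmost bit unset
    Table t1;   // elements whose key has the topmost bit set (bit dropped)

    bool insert(std::uint64_t key, std::uint64_t value) {
        std::uint64_t stored = key & ~(std::uint64_t(1) << 63); // drop topmost bit
        return (key >> 63 ? t1 : t0).insert(stored, value);
    }

    bool find(std::uint64_t key, std::uint64_t& out) {
        std::uint64_t stored = key & ~(std::uint64_t(1) << 63);
        return (key >> 63 ? t1 : t0).find(stored, out);
    }
};
\end{verbatim}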
\subsection{Complex Key and Value Types}\label{sec:add_string_keys}
Using CAS instructions to change the content of hash table cells makes our data
structure fast but limits its use to cases where keys and values fit into memory
words. Lifting this restriction is bound to have some impact on performance but
we want to outline ways to keep this penalty small. The general idea is to
replace the keys and/or values by references to the actual data.
\paragraph*{Complex Keys} To make things more concrete we outline a way where
the keys are strings and the hash table data structure itself manages space for
the keys. When an element $\langle s,a\rangle$ is inserted, space for string $s$
is allocated. The hash table stores $\langle r,a\rangle$ where $r$ is a pointer
to $s$. Unfortunately, we get a considerable performance penalty during table
operations because looking for an element with a given key now has to follow
this indirection for every key comparison -- effectively destroying the
advantage of linear probing over other hashing schemes with respect to cache
efficiency. This overhead can be reduced by two measures: First, we can make the
table bigger thus reducing the necessary search distance -- considering that the
keys are large anyway, this has a relatively small impact on overall storage
consumption. A more sophisticated idea is to store a \emph{signature} of the key
in some unused bits of the reference to the key (on modern machines pointers
actually only use $48\,$bits). This signature can be obtained from the master
hash function $h$ by extracting bits that were \emph{not} used for finding the
position in the table (i.e.~the least significant bits). While searching for a
key $y$ one can then first compare the signatures before actually making a full
key comparison that involves a costly pointer dereference.
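The following sketch shows one way to pack a signature into the unused upper bits of a 48-bit pointer; the choice of 16 signature bits and the helper names are assumptions for illustration:
\begin{verbatim}
#include <cstdint>

// On current x86-64 machines only the lower 48 bits of a pointer are used,
// so 16 bits remain for a key signature.  Using the low 16 hash bits as the
// signature is an assumption for illustration.
constexpr std::uint64_t ptr_mask = (std::uint64_t(1) << 48) - 1;

std::uint64_t pack(const char* str, std::uint64_t hash) {
    std::uint64_t sig = hash & 0xffff;   // bits not used for the table position
    return (reinterpret_cast<std::uint64_t>(str) & ptr_mask) | (sig << 48);
}

const char* unpack(std::uint64_t packed) {
    return reinterpret_cast<const char*>(packed & ptr_mask);
}

// While searching for a key y, compare signatures first; the costly pointer
// dereference (full string comparison) is only done when they match.
bool signature_matches(std::uint64_t packed, std::uint64_t hash_of_y) {
    return (packed >> 48) == (hash_of_y & 0xffff);
}
\end{verbatim}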
Deletions do \emph{not} immediately deallocate the space for the key because
concurrent operations might still be scanning through them. The space for
deleted keys can be reclaimed when the array grows. At that time, our migration
protocols make sure that no concurrent table operations are going on.
The memory management is challenging since we need high-throughput allocation
for very fine-grained, variable-sized objects and a kind of garbage
collection. On the positive side, we can find all the pointers to the strings
using the hash function. All in all, these properties might be sufficiently
unique that a carefully designed special purpose implementation is faster than
currently available general purpose allocators. We outline one such approach:
New strings are allocated into memory pages of size $\Om{p}$. Each thread has
one current page that is only used locally for allocating short strings. Long
strings are allocated using a general purpose allocator. When the local page of
a thread is full, the thread allocates a fresh page and remembers the old one on a
stack. During a shrinking phase, a garbage collection is done on the string
memory. This can be parallelized on a page by page basis. Each thread works on
two pages $A$ and $B$ at a time where $A$ is a partially filled page. $B$ is
scanned and the strings stored there are moved to $A$ (updating their pointer in
the hash table). When $A$ runs full, $B$ replaces $A$. When $B$ runs empty, it is
freed. In either case, an unprocessed page is obtained to become $B$.
\paragraph*{Complex Values} We can take a similar approach as for complex
keys -- the hash table data structure itself allocates space for complex
values. This space is only deallocated during migration/cleanup phases that make
sure that no concurrent table operations are affected. The find-operation only
hands out \emph{copies} of the values so that there is no danger of stale
data. There are now two types of update operations. One that modifies part of a
complex value using an atomic CAS operation and one that allocates an entirely
new value object and performs the update by atomically setting the
value-reference to the new object. Unfortunately it is not possible to use both
types concurrently.
\paragraph*{Complex Keys \emph{and} Values} Of course we can combine the
two approaches described above. However, in that case it will be more efficient to store a
single reference to a combined key-value object together with a signature.
\section{Using Hardware Memory Transactions}\label{sec:add_tsx}
The biggest difference between a concurrent and a sequential hash table is the
use of atomic processor instructions. We use
them for accessing and modifying data which is shared between threads. An
additional way to achieve atomicity is the use of hardware transactional memory
synchronization introduced recently by Intel and IBM. The new instruction
extensions can group many memory accesses into a single transaction. All changes
from one transaction are committed at the same time. For other threads they
appear to be atomic. General purpose memory transactions do not have progress
guarantees (i.e.~they can always be aborted); therefore, they require a fall-back path
implementing atomicity (a lock or an implementation using traditional atomic
instructions).
We believe that transactional memory synchronization is an important opportunity
for concurrent data structures. Therefore, we analyze how to efficiently use
memory transactions for our concurrent linear probing hash tables. In the following,
we discuss which aspects of our hash table can be improved by using restricted
transactional memory implemented in Intel Transactional Synchronization
Extensions (Intel TSX).
We use Intel TSX by wrapping sequential code into a memory transaction. Since
the sequential code is simpler (e.g.~fewer branches, more freedom for compiler
optimizations) it can outperform inherently more complex code based on
(expensive 128-bit CAS) atomic instructions. As a transaction fall-back
mechanism, we employ our atomic variants of hash table operations. Replacing the
insert and update functions of our specialized growing hash table with Intel TSX
variants increases the throughput of our hash table by up to $28\,\%$ (see
\autoref{sec:exp_results}). Speedups like this are easy to obtain on workloads
without contentious accesses (simultaneous write accesses on the same
cell). Contentious write accesses lead to transaction aborts which have a higher
latency than the failure of a CAS. Our atomic fall-back minimizes the penalty
for such scenarios compared to the classic lock-based fall-back that causes more
overhead and serialization.
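As an illustration, the following sketch shows the general pattern of wrapping a sequential cell update into an Intel TSX (RTM) transaction with an atomic fall-back; the cell layout and the fall-back helper are placeholders, not our actual implementation:
\begin{verbatim}
#include <immintrin.h>   // RTM intrinsics (_xbegin/_xend); compile with -mrtm
#include <cstdint>

// Illustrative cell layout and fall-back; not the exact code of tsxfolklore.
struct Cell { std::uint64_t key, value; };

// Assumed fall-back based on a 128-bit CAS (e.g. cmpxchg16b).
bool overwrite_atomic(Cell& cell, std::uint64_t key, std::uint64_t new_value);

bool overwrite_tsx(Cell& cell, std::uint64_t key, std::uint64_t new_value) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Inside the transaction: plain sequential code; the whole block is
        // committed atomically at _xend().
        bool found = (cell.key == key);
        if (found) cell.value = new_value;
        _xend();
        return found;
    }
    // The transaction aborted (contention, capacity, interrupt, ...):
    // fall back to the atomic variant.
    return overwrite_atomic(cell, key, new_value);
}
\end{verbatim}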
Another aspect that can be improved through the use of memory transactions is
the key and value size. On current x86 hardware, there is no atomic instruction
that can change words bigger than 128 bits at once. The amount of memory that
can be manipulated during one memory transaction can be far greater than 128
bits. Therefore, one could easily implement hash tables with complex keys and
values using transactional memory synchronization. However, using atomic
functions as fall-back will not be possible. Solutions with fine-grained locks
that are only needed when the transactions actually fail are still possible.
With general purpose memory transactions it is even possible to atomically
change multiple values that are not stored consecutively. Therefore, it is
possible to implement a hash table that separates the keys from the values
storing each in a separate table. In theory this could improve the cache
locality of linear probing.
Overall, transactional memory synchronization can be used to improve performance
and to make the data structure more flexible.
\section{Implementation Details}\label{sec:impl}
\myparagraph{Bounded Hash Tables.} All of our implementations are constructed
around a highly optimized variant of the circular bounded \emph{folklore} hash
table that was described in \autoref{sec:nongrow}. The main performance
optimization was to restrict the table size to powers of two -- replacing
expensive modulo operations with fast bit operations.
When initializing the capacity $c$, we compute the smallest power of
two that is still at least twice as large as the expected number of insertions
$n$ ($2n \leq c \leq 4n$).
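The following sketch illustrates the power-of-two sizing and the resulting bit-mask index computation (names are illustrative):
\begin{verbatim}
#include <cstdint>

// Smallest power of two that is at least twice the expected number of
// insertions n, giving 2n <= capacity <= 4n.
std::uint64_t table_capacity(std::uint64_t n) {
    std::uint64_t c = 1;
    while (c < 2 * n) c <<= 1;
    return c;
}

// With a power-of-two capacity, the expensive modulo reduces to a bit mask.
inline std::uint64_t cell_index(std::uint64_t hash, std::uint64_t capacity) {
    return hash & (capacity - 1);   // equivalent to hash % capacity
}
\end{verbatim}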
We also built a second non growing hash table variant called \emph{tsxfolklore},
this variant forgoes the usual CAS operations that are used to change
cells. Instead, tsxfolklore uses TSX transactions to change elements in the table
atomically. As described in \autoref{sec:add_tsx}, we use our usual atomic
operations as fallback in case a TSX transaction is aborted.
\myparagraph{Growing Hash Tables.} All of our growing hash tables use folklore
or tsxfolklore to represent the current status of the hash table. When the table
is approximately 60\% filled, a migration is started. With each migration, we
double the capacity. The migration works in cell blocks of size 4096. Blocks are
migrated with a minimal number of atomic operations by using the cluster migration described in
\autoref{ss:growShrink}.
We use a user-space memory pool from Intel's TBB library to prevent a slowdown
due to the re-mapping of virtual to physical memory (protected by a coarse lock
in the Linux kernel). This improves the performance of our growing variants,
especially when using more than 24 threads. By allocating memory from this
memory pool, we ensure that the virtual memory that we receive is already mapped
to physical memory, bypassing the kernel lock.
In \autoref{ss:growShrink} we identified two orthogonal problems that have to be
solved to migrate hash tables: which threads execute the migration, and how can
we make sure that copied elements cannot be changed? For each of these problems
we formulated two strategies. The table can either be migrated by user-threads
that execute operations on the table (\emph{u}), or by using a pool of threads
which is only responsible for the migration (\emph{p}). To ensure that copied
elements cannot be changed, we proposed to wait for each currently running
operation synchronizing update and growing phases (\emph{s}), or to mark
elements before they are copied, thus proceeding fully asynchronously
(\emph{a}).
All strategies can be combined -- creating the following four growing hash table
variants: \emph{uaGrow} uses enslavement of user threads and asynchronous marking for
consistency; \emph{usGrow} also uses user threads, but ensures
consistency by synchronizing updates and growing routines;
\emph{paGrow} uses a pool of dedicated migration threads for the migration and asynchronous marking of
migrated entries for consistency; and \emph{psGrow} combines the use of a
dedicated thread pool for migration with the synchronized exclusion
mechanism.
All of these versions can also be instantiated using the TSX based non-growing
table tsxfolklore as a basis.
\section{Experimental Evaluation}\label{sec:exp}
We performed a large number of experiments to investigate the performance of
different concurrent hash tables in a variety of circumstances (an overview of all tested hash tables can be found in \autoref{tab:functionality}). We begin by
describing the tested competitors (\autoref{sec:exp_variants}; our variants are
introduced in \autoref{sec:impl}), the test instances
(\autoref{sec:exp_instance}), and the test environment
(\autoref{sec:exp_hardware}). Then \autoref{sec:exp_results} discusses the
actual measurements. In \autoref{ss:price}, we conclude the section by summarizing our experiments and reflecting how different generalizations affect the performance of hash tables.
\subsection{Competitors}\label{sec:exp_variants}
To compare our implementation to the current state of the art we use a broad
selection of other concurrent hash tables.
These tables were chosen on the basis of their popularity in applications and
academic publications. We split these hash table implementations into the
following three groups depending on their growing functionality.
\subsubsection{Efficiently Growing Hash Tables}
This group contains all hash tables that are able to grow efficiently from a
very small initial size. They are used in our growing benchmarks, where we
initialize tables with an initial size of $4096$, thus making growing
necessary.
\paragraph*{Junction Linear {\color{hone} \fcirc}\xspace, Junction Grampa {\color{htwo} \bcirc}\xspace, and Junction Leapfrog {\color{htwo} \lcirc}\xspace}
The junction library consists of three different variants of a dynamic
concurrent hash table. It was published by Jeff Preshing on GitHub
\cite{junctionGit}, after our first publication on the subject
\cite{hashTRArxiv}. There is no scientific publication describing them, but on his blog
\cite{preshBlog} Preshing writes some insightful posts on his implementation.
In theory, junction's hash tables use an approach to growing which is similar to
ours. A filled bounded hash table is migrated into a newly allocated bigger
table. Although they are constructed from a similar idea, the execution seems
to differ quite significantly. The junction hash tables use a
\emph{quiescent-state based reclamation} (QSBR) protocol for memory
reclamation. Using this protocol, the user has to regularly call a designated
function in order to reclaim freed hash table memory.
Contrary to the other hash tables, we used the provided standard hash function
(avalanche), because junction requires its hash function to be
invertible. Therefore, the hash function used for all other tables (see
\autoref{sec:exp_instance}) is not usable here.
The different hash tables within junction all perform different variants of open
addressing. These variants are described in more detail in one of Preshing's
blog posts (see~\cite{preshBlog}).
\paragraph*{tbbHM \tbba and tbbUM \tbbb} (correspond to the TBB maps
\texttt{tbb::concurrent\_hash\_map} and \texttt{tbb::concurrent\_unordered\_map}
respectively) The Threading Building Blocks \cite{TBBCite} (TBB) library
(Version 4.3 Update 6) developed by Intel is one of the most widely used
libraries for shared memory concurrent programming. The two different
concurrent hash tables it contains behave relatively similarly in our tests.
Therefore, we sometimes only plot the results of tbbHM \tbba. But they have some
differences concerning the locking of accessed elements. Therefore, they behave
very differently under contention.
\subsubsection{Hash Tables with Limited Growing Capabilities}
This group contains all hash tables that can only grow by a limited amount
(constant factor of the initial size) or become very slow when growing is
required. When testing their growing capabilities, we usually initialize these
tables with half their target size.
This is comparable to a workload where the approximate number of elements is
known but cannot be bounded strictly.
\paragraph*{folly {\color{hone} $+$}\xspace} (\texttt{folly::AtomicHashMap}) This hash table was
developed at Facebook as a part of their open source library
folly~\cite{FollyCite} (Version 57:0). It uses restrictions on key and data
types similar to our folklore implementation. In contrast to our growing
procedure, the folly table grows by allocating additional hash tables. This
increases the cost of future queries and it bounds the total growing factor to
$\approx 18$ ($\times \textit{initial
size}$).
\paragraph*{cuckoo \cuck} (\texttt{cuckoohash\_map}) This hash table, which uses (bucket)
cuckoo hashing as its collision resolution method, is part of the small
libcuckoo library (Version 1.0). It uses a fine-grained locking approach
presented by Li et al.~\cite{AlgoImpCuckoo14} to ensure consistency. Cuckoo is
notable for its interesting interface, which combines easy container-style
access with an update routine similar to our update interface.
\paragraph*{RCU \rcua/RCU QSBR \rcub}
This hash
table is part of the Userspace RCU library (Version 0.8.7) \cite{lwnRCU}, that brings the read
copy update principle to userspace applications. Read copy update is a set of
protocols for concurrent programming that are popular in the Linux kernel
community \cite{mckenney1998read}. The hash table uses split-ordered lists to grow in a lock-free manner. This approach has been proposed by Shalev and Shavit \cite{splitOrdered}.
RCU uses the recommended read-copy-update variant called urcu. RCU QSBR uses a
QSBR based protocol that is comparable to the one used by junction hash
tables. It forces the user to repeatedly call a function with each participating
thread. We tested both variants, but in many plots we show only RCU \rcua
because both variants behaved very similarly in our tests.
\subsubsection{Non-Growing Hash Tables}
One of the most important subjects of this publication is offering a
scalable asynchronous migration for the simple folklore hash table. While this
makes it usable in circumstances where bounded tables cannot be used, we want to
show that even when no growing is necessary we can compete against bounded hash
tables. Therefore, it is reasonable to use our growing hash table even in
applications where the number of elements can be bounded in a reasonable manner,
offering a graceful degradation in edge cases and allowing improved memory usage
if the bound is not reached.
\paragraph*{Folklore \folk}
Our implementation of the folklore solution described in
\autoref{sec:nongrow}. Notice that this hash table is the core of our
growing variants. Therefore, we can immediately determine the overhead that the
ability for growing places on this implementation (overhead for approximate
counting and shared pointers).
\paragraph*{Phase Concurrent {\color{hone} $\blacklozenge$}\xspace}
This hash table implementation proposed by Shun and Blelloch \cite{shun2014phase} is designed to
support only phase concurrent accesses, i.e.~no reads can occur concurrently
with writes. We tested this table anyway, because several of our test instances
satisfy this constraint and it showed promising running times.
\paragraph*{Hopscotch Hash \hops}
Hopscotch hashing (ver 2.0) is one of the more popular variants of open addressing. The
version we tested was published by Herlihy et al.~\cite{herlihy2008hopscotch} in connection with their original
publication proposing the technique.
Interestingly, the provided implementation only implements the functionality of a
hash set (unable to retrieve/update stored data). Therefore, we had to adapt
some tests to account for that (\texttt{insert}$\cong$\texttt{put} and
\texttt{find}$\cong$\texttt{contains}).
\paragraph*{LeaHash \lea}
This hash table is designed by Lea \cite{leahash} as part of Java's Concurrency
Package. We have obtained a C++ implementation which was published together
with the hopscotch table. It was previously used for experiments by
Herlihy et al.~\cite{herlihy2008hopscotch} and Shun and Blelloch \cite{shun2014phase}. LeaHash uses hashing with
chaining and the implementation that we use has the same hash set interface as
hopscotch.
As previously described, we used hash set implementations for
Hopscotch hashing as well as LeaHash (they were published like this). They
should easily be convertible into common hash map implementations without
losing too much performance, but probably using quite a bit more memory.
\subsubsection{Sequential Variants}
To report absolute speedup numbers, we implemented sequential variants of
growing and fixed-size tables. They do not use atomic instructions or
similar sources of overhead. They outperform popular choices like Google's dense hash map
significantly (80\% increased insert throughput), making them a reasonable
approximation for the optimal sequential performance.
\begin{table*}[ht]
\caption{\label{tab:functionality}Overview over Table Functionalities.}
\small
\centering
\begin{tabular} {lc|p{2.3cm}p{1.7cm}p{2.5cm}p{1.2cm}p{2.5cm}}
\bf name & \bf plot & \bf std. interface & \bf growing & \bf atomic updates & \bf deletion & \bf arbitrary types \\
\hline
xyGrow & & & & & & \\
\ \ uaGrow & \uag & using handles & \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \\
\ \ usGrow & \usg & \hspace{5mm} '' & \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \\
\ \ paGrow & \pag & \hspace{5mm} '' & \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \\
\ \ psGrow & \psg & \hspace{5mm} '' & \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ &
\\
Junction & & & & & & \\
\ \ linear & {\color{hone} \fcirc}\xspace & qsbr function & \centering$\checkmark$ & only overwrite & \centering$\checkmark$ & \\
\ \ grampa & {\color{htwo} \bcirc}\xspace & \hspace{5mm} '' & \centering$\checkmark$ & \hspace{7mm} ''& \centering$\checkmark$ & \\
\ \ leapfrog&{\color{htwo} \lcirc}\xspace & \hspace{5mm} '' & \centering$\checkmark$ & \hspace{7mm} ''& \centering$\checkmark$ &
\\
TBB & & & & & & \\
\ \ hash map & \tbba& \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \hspace{1cm}$\checkmark$ \\
\ \ unordered& \tbbb& \centering$\checkmark$ & \centering$\checkmark$ & \centering$\checkmark$ & \centering unsafe & \hspace{1cm}$\checkmark$ \\
\hline
Folly & {\color{hone} $+$}\xspace & \centering$\checkmark$ & const factor & \centering$\checkmark$ & & \\
Cuckoo & \cuck & \centering$\checkmark$ & slow & \centering$\checkmark$ & \centering$\checkmark$ & \hspace{1cm}$\checkmark$ \\
RCU & & & & & & \\
\ \ urcu & \rcua & register thread & very slow & \centering$\checkmark$ & \centering$\checkmark$ & \hspace{1cm}$\checkmark$ \\
\ \ qsbr & \rcub & qsbr function & \hspace{5mm}''& \centering$\checkmark$ & \centering$\checkmark$ & \hspace{1cm}$\checkmark$ \\
\hline
Folklore & \folk & \centering$\checkmark$ & & \centering$\checkmark$ & & \\
Phase &{\color{hone} $\blacklozenge$}\xspace & sync phases & & only overwrite & \centering$\checkmark$ & \\
Hopscotch & \hops & \centering$\checkmark$ & & set interface & \centering$\checkmark$ & \\
Lea Hash & \lea & \centering$\checkmark$ & & set interface & \centering$\checkmark$ & \\
\end{tabular}
\end{table*}
\subsubsection{Color/Marker Choice}
For practicality reasons, we chose not to print a legend with all of our
figures. Instead, we use this section to explain the color and marker choices
for our plots (see \autoref{sec:exp_variants} and \autoref{tab:functionality}), hopefully making them more readable.
Some of the tested hash tables are part of the same library. In these cases, we
use the same marker for all hash tables within that library. The different variants of the hash table are then differentiated using the line color (and filling of the marker).
For our own tables, we mostly use \uag and \usg for uaGrow and usGrow
respectively.
\subsection{Hardware Overview}
\label{sec:exp_hardware}
Most of our experiments were run on a two-socket machine with Intel Xeon E5-2670
v3 processors (previously codenamed Haswell-EP). Each processor has 12 cores
running at $2.3\,\text{GHz}$ base frequency. The two sockets are connected by
two Intel QPI links. Distributed over the two sockets there are $128\,\text{GB}$
of main memory ($64\,\text{GB}$ each). The processors support Intel
Hyper-Threading, AVX2, and TSX technologies.
This system runs an Ubuntu distribution with kernel version
3.13.0-91-generic. We compiled all our tests with gcc 5.2.0 -- using
optimization level \texttt{-O3} and the necessary compiler flags
(e.g.~\texttt{-mcx16}, \texttt{-msse4.2} among others).
Additionally, we executed some experiments on a 32-core 4-socket Intel Xeon
E5-4640 (SandyBridge-EP) machine with $512\,\text{GB}$ main memory (using the
same operating system and compiler) to verify our findings and to show improved
scalability even on 4-socket machines.
\subsection{Test Methodology}\label{sec:exp_instance}
Each test measures the time it takes to execute $10^8$ hash table operations
(\emph{strong scaling}). Each data point was computed by taking the average of
five separate execution times. Different tests use different hash table
operations and key distributions. The used keys are pre-computed before the
benchmark is started. Each speedup given in this section is computed as the
\emph{absolute speedup} over our hand-optimized sequential hash table.
The work is distributed between threads dynamically. While there is
work to do, threads reserve blocks of 4096 operations to execute (using an
atomic counter). This ensures a minimal amount of work imbalance, making the
measurements less prone to variance.
Two executions of the same test will always use the same input keys. Most
experiments are performed with uniformly random generated keys (using the
Mersenne twister random number generator \cite{MatNis98}). Since real world
inputs may have recurring elements, there can be contention which can
potentially lead to performance issues. To test hash table performance under
contention, we use Zipf's distribution to create skewed key sequences. Using
the Zipf distribution, the probability for any given key $k$ is
$P(k)=\frac{1}{k^s\cdot H_{N,s}}$, where $H_{N,s}$ is the $N$-th generalized
harmonic number $\sum^N_{k=1}\frac{1}{k^s}$ (normalization factor) and $N$ is
the universe size ($N=10^8$). The exponent $s$ can be altered to regulate the
contention. We use the Zipf distribution, because it closely models some real
world inputs like natural language, natural size distributions (e.g.~of firms or
internet pages), and even user behavior (\cite{zipfFirms}, \cite{zipfWebCache},
\cite{zipfInternet}). Notice that key generation is done prior to the benchmark
execution so as not to influence the measurements unnecessarily (this is especially
important for skewed inputs).
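For illustration, the following sketch generates such a skewed key sequence via the inverse CDF of the Zipf distribution; it is one possible implementation, not necessarily the exact generator we used:
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Generates `count` Zipf-distributed keys from 1..N with exponent s by
// inverting the CDF; P(k) = 1 / (k^s * H_{N,s}).  The prefix-sum table needs
// O(N) memory, which is acceptable for offline key generation.
std::vector<std::uint64_t> zipf_keys(std::size_t count, std::uint64_t N,
                                     double s, std::uint64_t seed) {
    std::vector<double> prefix(N);
    double sum = 0.0;
    for (std::uint64_t k = 1; k <= N; ++k) {
        sum += 1.0 / std::pow(static_cast<double>(k), s);
        prefix[k - 1] = sum;               // unnormalized CDF, i.e. H_{k,s}
    }
    std::mt19937_64 rng(seed);             // Mersenne twister, as in the text
    std::uniform_real_distribution<double> uni(0.0, sum);
    std::vector<std::uint64_t> keys(count);
    for (auto& key : keys) {
        double u = uni(rng);
        key = std::upper_bound(prefix.begin(), prefix.end(), u)
              - prefix.begin() + 1;
    }
    return keys;
}
\end{verbatim}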
As a hash function, we use two CRC32C x86 instructions with different seeds to
generate the upper and lower 32 bits of each hash value. Their hardware
implementation minimizes the computational overhead.
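A sketch of such a hash function using the SSE4.2 CRC32C intrinsic; the seed constants are placeholders, not the ones used in our experiments:
\begin{verbatim}
#include <nmmintrin.h>   // SSE4.2 intrinsics; compile with -msse4.2
#include <cstdint>

// Two CRC32C computations with different seeds yield the lower and upper
// 32 bits of the 64-bit hash value.
inline std::uint64_t hash64(std::uint64_t key) {
    std::uint64_t lo = _mm_crc32_u64(0x12345678u, key) & 0xffffffffu;
    std::uint64_t hi = _mm_crc32_u64(0x9abcdef0u, key) & 0xffffffffu;
    return (hi << 32) | lo;
}
\end{verbatim}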
\subsection{Experiments}\label{sec:exp_results}
The most basic functionality of each hash table is inserting and finding
elements. The performance of many parallel algorithms depends on the
scalability of parallel insertions and finds. Therefore, we begin our
experiments with a thorough investigation into the scalability of these basic
hash table operations.
\paragraph*{Insert Performance}
\begin{figure*}
% Plots omitted.
\caption{\label{fig:in_prein}Insert throughput on a preinitialized (non-growing) table.}
\caption{\label{fig:in_grow}Insert throughput when growing from a small initial table.}
\caption{\label{fig:in_test}Insert performance tests.}
\end{figure*}
We begin with the very basic test of inserting $10^8$ different uniformly random
keys into a previously empty hash table. For this first test, all hash tables
have been initialized to the final size, making growing unnecessary. The results
presented in \autoref{fig:in_prein} show clearly that the folklore \folk
solution is optimal in this case, since no migration is necessary and the
table can be initialized large enough that long search distances become
very improbable. The large discrepancy between the folklore \folk solution and
all previous growable hash tables is what motivated us to work with growable
hash tables in the first place. As shown in the plot, our growing hash table
uaGrow \uag loses about $10\,\%$ of performance over folklore \folk ($9.6\times$
speedup vs.~$8.7\times$). This performance loss can be explained by some
overheads that are necessary for eventually growing the table (e.g.~estimating
the number of elements). All hash tables that have a reasonable performance
($>50\,\%$ of folklore~\folk performance) are variants of open addressing
(junction leapfrog {\color{htwo} \lcirc}\xspace $4.4$ at $p=12$, folly {\color{hone} $+$}\xspace $5.1$, phase {\color{hone} $\blacklozenge$}\xspace
$8.3$) that have similar restrictions on key and value types. All hash tables that can handle generic data types are severely outclassed (\tbba, \tbbb, \cuck, \rcua, and \rcub).
After this introductory experiment, we take a look at the growing
capability of each table. We again insert $10^8$ elements into a previously empty table. This
time, the table has only been initialized to hold 4096 elements ($5\cdot10^7$ for
all \emph{semi-growing} tables). We can clearly see from the plots in
\autoref{fig:in_grow} that our hash table variants are significantly faster
than any comparable tables. The difference becomes especially obvious once two
sockets are used ($>12$ cores). With more than one socket, none of our
competitors could achieve any significant speedups. On the contrary, many tables
become slower when executed on more cores. This effect does not happen for our
table.
Junction grampa {\color{htwo} \bcirc}\xspace is the only growing hash table -- apart from our growing
variants -- which achieves absolute speedups higher than $2$. Overall, it is
still severely outperformed by our hash table uaGrow \uag (factor
$2.5\times$). Compared to all other tables, we achieve at least seven times their
performance (descending order; using 48 threads): folly {\color{hone} $+$}\xspace
($7.4\times$), junction leapfrog {\color{htwo} \lcirc}\xspace ($7.7\times$), tbb hm \tbba ($9.6\times$), tbb um \tbbb
($10.7\times$), junction linear {\color{hone} \fcirc}\xspace ($22.6\times$), cuckoo \cuck ($61.3\times$), rcu \rcua
($63.2\times$), and rcu with qsbr \rcub ($64.5\times$).
The speedup in this growing instance is even better than the speedup in our
non-growing tests. Overall we reach absolute speedups of $>9\times$ compared
to the sequential version (also with growing). This is slightly better than the
absolute speedup in the non-growing test ($\approx 8.5$), suggesting that our
migration is at least as scalable as hash table accesses. Overall, the insert
performance of our implementation behaves as one would have hoped. It performs
similarly to folklore \folk in the non-growing case, while also performing well in
tests where growing is necessary.
\paragraph*{Find Performance}
When looking for a key in a hash table there are two possible outcomes: either it is in the
table or it is not. For most hash tables, not finding an element
takes longer than finding said element. Therefore, we present two distinct
measurements, one for each case (\autoref{fig:in_find_s} and \autoref{fig:in_find_u}).
The measurement for successful finds has been made by looking for $10^8$
elements that have previously been inserted into a hash table. For the
unsuccessful measurement, $10^8$ uniformly random keys are searched in this same
hash table.
All the measurements made for these plots were done on a preinitialized table
(preinitialized before insertion). This does not make a difference for our
implementation, but it has an influence on some of our competitors. All tables
that grow by allocating additional tables (namely cuckoo \cuck and folly {\color{hone} $+$}\xspace) have
significantly worse find performance on a grown table, as they can have multiple
active tables at the same time (all of them have to be checked).
\begin{figure*}
% Plots omitted.
\caption{\label{fig:in_find_s}Throughput of successful find operations.}
\caption{\label{fig:in_find_u}Throughput of unsuccessful find operations.}
\caption{\label{fig:in_find}Find performance tests.}
\end{figure*}
Obviously, find workloads achieve higher throughputs than insert-heavy workloads
-- no memory is changed and no coordination is necessary between
processors (i.e.~atomic operations). It is interesting that find operations
seem to scale better with multiple processors. Here, our growable implementations
achieve speedups of $12.8$ compared to $9$ in the insertion case.
When comparing the find performance between different tables, we can see that
other implementations with open addressing narrow the gap towards our
implementation. In particular, hopscotch hashing \hops and the phase concurrent
approach {\color{hone} $\blacklozenge$}\xspace seem to perform well when finding elements. Hopscotch hashing \hops performs
especially well in the unsuccessful case; here it outperforms all other hash
tables by a significant margin. However, this has to be taken with a grain of
salt, because the tested implementation only offers the functionality of a hash
set (contains instead of find). Therefore, less memory is needed per element and
more elements can be hashed into one cache line, making lookups significantly
more cache efficient.
For our hash tables, the performance reduction between successful and
unsuccessful finds is around $20$ to $23\,\%$. The difference of absolute speedups
between both cases is relatively small -- suggesting that sequential hash
tables suffer from the same performance penalties. The biggest difference
has been measured for folly {\color{hone} $+$}\xspace
($51$ to $55\,\%$ reduced performance). Later we see that the reason for this is likely that folly
{\color{hone} $+$}\xspace is configured to use only relatively little memory (see
\autoref{fig:mem_test}). When initialized with more memory, its performance gets closer to the
performance of other hash tables using open addressing.
\paragraph*{Performance under Contention}
Up to this point, all data sets we looked at contained uniformly random keys
sampled from the whole key space. This is not necessarily the case in real
world data sets. In some data sets one key might appear many times; in some
sets one key might even dominate the input. Access to this key's element can
slow down the global progress significantly, especially if hash table operations use
(fine-grained) locking to protect hash table accesses.
To benchmark the robustness of the compared hash tables on these degenerate
inputs, we construct the following test setup. Before the execution, we compute
a sequence of skewed keys using the Zipf distribution described in
\autoref{sec:exp_instance} ($10^8$ keys from the range $1..10^8$). Then the
table is filled with all keys from the same range $1..10^8$.
For the first benchmark we execute an update operation for each key of the
skewed key sequence, overwriting its previously stored element
(\autoref{fig:co_upd}). These update operations will create contending write
accesses to the hash table. Note that updates perform simple overwrites, i.e.,
the resulting value of the element is not dependent on the previous value. The
hash table will remain at a constant size for the whole execution, making it
easy to compare different implementations independent of effects introduced
through growing. In the second benchmark, we execute find operations instead of
updates, thus creating contending read accesses.
\begin{figure*}
% Plots omitted.
\caption{\label{fig:co_upd}Throughput of contentious update operations over the skew $s$.}
\caption{\label{fig:co_find}Throughput of contentious find operations over the skew $s$.}
\caption{\label{fig:co_test}Performance under contention.}
\end{figure*}
For sequential hash tables, contention on some elements can have very positive
effects. When one cell is visited repeatedly, its contents will be cached and
future accesses will be faster. The sequential performance is shown in our
figures using a dashed black line. For concurrent hash tables, contention has
very different effects.
Unsurprisingly, the effects experienced from contention are different between
writing and reading operations. The reason is that multiple threads can read
the same value simultaneously, but only one thread at a time can change a value
(on current CPU architectures). Therefore, read accesses can profit from cache
effects -- much like in a sequential hash table -- while write accesses are hindered
by the contention. This goes so far that for workloads with high contention no
concurrent hash table can achieve the performance of a sequential table.
Apart from the slowdown caused by exclusive write accesses, there is also the additional problem of cache
invalidation. When a value is repeatedly changed by different cores of a
multi-socket architecture, then cached copies have to be invalidated whenever
this value is changed. This leads to bad cache efficiency and also to high
traffic on QPI Links (connections between sockets).
From the update measurement shown in \autoref{fig:co_upd} it is clearly visible
that the serious impact of contention begins between $s = 0.85$ and
$0.95$. Up until that point, contention has a positive effect even on update
operations. For a skew between $s = 0.85$ and $0.95$, about $1\,\%$ to $3\,\%$ of
all accesses go to the most common element (key $k_1$). This is exactly the
point where $1/p \approx P(k_1)$, therefore, on average there will be one thread
changing the value of $k_1$.
It is noteworthy that the usGrow \usg version of our hash table is more
efficient when updating than the uaGrow \uag version. The reason for this is
that uaGrow uses 128\,bit CAS operations to update elements while simultaneously
making sure that the marked bit of the element has not been set before the
change. This can be avoided using the usGrow \usg variant by specializing the
update method to use atomic operations on the data part of the element. This is
possible because updates and grow routines cannot overlap in this variant.
The plot in \autoref{fig:co_find} shows that concurrent hash tables achieve
performance improvements similar to sequential ones when repeatedly accessing
the same elements. Our hash table can even increase its speedups over uniform
access patterns; the highest speedup of uaGrow \uag is $17.9$ at $s=1.25$. Since
the speedup is this high, we also included scaled plots showing $5\times$ and
$10\times$ the throughput of the sequential variant. Unfortunately, our growable
variants cannot improve as much with contention as the non-growing folklore
\folk and phase concurrent {\color{hone} $\blacklozenge$}\xspace tables (both $23.2$ at $s=1.25$). This is
probably due to minor overheads compared to the folklore \folk implementation
which become more pronounced since the overall function execution time is reduced.
Overall, we see that our folklore \folk implementation, which our growable variants are
based upon, outperforms all other competitors. Our growable variant usGrow \usg is
consistently close to folklore's performance -- outperforming all hash tables that
have the ability to grow.
\paragraph*{Aggregation -- a common Use Case}
Hash tables are often used for key aggregation. The idea is that all data
elements connected to the same key are aggregated using a commutative and
associative function. For our test,
we implemented a simple key count program. To implement the key count routine
with a concurrent hash table, an insert-or-increment function is necessary. For
some tables, we were not able to implement an update function where the
resulting value depends on the previous value within the given interface
(junction tables, rcu tables, phase concurrent, hopscotch, and leahash). This
was mainly a problem of the provided interfaces; therefore, it could probably be
solved by reimplementing a richer interface. For our table this can
easily be achieved with the \texttt{insertOrUpdate} interface using an increment as update function (see
\autoref{sec:nongrow}).
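For illustration, a hypothetical rendering of such an insert-or-increment based key count; the exact signature of \texttt{insertOrUpdate} in our implementation may differ:
\begin{verbatim}
#include <cstdint>
#include <vector>

// Table stands for our growing hash table; the insertOrUpdate signature shown
// here (key, initial value, update functor) is a hypothetical rendering.
template <class Table>
void count_keys(Table& table, const std::vector<std::uint64_t>& keys) {
    for (std::uint64_t k : keys) {
        // Insert <k,1> if k is not yet present, otherwise apply the update
        // functor to the stored value.  With a fetch-and-add specialization
        // this stays efficient even under heavy contention.
        table.insertOrUpdate(k, 1,
                             [](std::uint64_t& value) { value += 1; });
    }
}
\end{verbatim}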
The aggregation benchmark uses the same Zipf key distribution as the other contention
tests. For $10^8$ skewed keys, the insert-or-increment function is
called. Contrary to the previous contention test, there is no pre-initialization.
Therefore, the number of distinct elements in the hash table is dependent on the
contention of the key sequence (given by $s$). This makes growable hash tables
even more desirable, because the final size can only be guessed before the
execution.
\begin{figure*}
% Plots omitted.
\caption{\label{fig:agg_test_f}Aggregation (insert-or-increment) throughput with growing.}
\caption{\label{fig:agg_test_u}Aggregation (insert-or-increment) throughput without growing.}
\caption{\label{fig:special_tests}Aggregation tests.}
\end{figure*}
As in previous tests, we make two distinct measurements: one with growing
(\autoref{fig:agg_test_f}) and one without (\autoref{fig:agg_test_u}). In the test
without growing, we initialize the table with a size of $10^8$ to ensure that
there is enough room for all keys, even if they are all distinct. We excluded the
semi-growing tables from \autoref{fig:agg_test_u} as approximating the number of
unique keys can be difficult. To put the growing performance into perspective, we
also show some non-growing measurements. Growing actually costs less in the presence of
contentious updates, because the resulting table will be smaller than without
contention; therefore, fewer growing steps have to be amortized over the same number
of operations.
The result of this measurement is clearly related to the result of the
contentious overwrite test shown in \autoref{fig:co_upd}. However, changing a
value by incrementing differs slightly from overwriting it, since the
updated value of an insert-or-increment is dependent on its previous value. In
the best case, this increment can be implemented using an atomic fetch-and-add
operation (e.g.~usGrow \usg, folklore \folk, and folly {\color{hone} $+$}\xspace). However, this is
not possible in all hash tables; sometimes dependent updates are implemented
using a read-modify-CAS cycle (e.g.~uaGrow \uag) or fine-grained locking
(e.g.~tbb hash map \tbba or cuckoo \cuck).
Until $s=0.85$, uaGrow \uag seems to be the more efficient option, since it has an
increased writing performance and the update cycle will be successful most of
the time. From that point on, usGrow \usg is clearly more efficient because fetch-and-add behaves better under contention. For highly skewed workloads, it
comes very close to the performance of our folklore implementation \folk, which
again performs the best out of all implementations.
\paragraph*{Deletion Tests}
\begin{figure}
% Plot omitted.
\caption{\label{fig:del_test}Throughput of the deletion benchmark (alternating insertions and deletions).}
\end{figure}
As described in \autoref{ss:delete}, we use migration, not only to implement an
efficiently growing hash table, but also to clean up the table after
deletions. This way, all tombstones are removed and the freed cells are
reclaimed. But how does this fare against other ways of removing elements?
This is what we investigate with the following benchmark.
The test starts on a prefilled table ($10^7$ elements) and consists of $10^8$
insertions -- each immediately followed by a deletion. Therefore, the table
remains at approximately the same size throughout the test ($\pm p$ elements). All
keys used in the test are generated before the benchmark execution (uniform distribution). As described
in \autoref{sec:exp_instance}, all keys are stored in one array. Each insert
uses an entry from this array distributed in blocks of $4096$ from the
beginning. The corresponding deletion uses the key that is $10^7$ elements prior
to the corresponding insert. The keys stored within the hash table are
contained in a sliding window of the key array.
We constructed the test to keep a constant table size, because this allows us to test non-growing tables without significantly overestimating the necessary
capacity. All hash tables are initialized with $1.5\times 10^7$ capacity,
therefore, it is necessary to reclaim deleted cells to successfully execute the
benchmark.
The measurements shown in \autoref{fig:del_test} indicate that only the phase
concurrent hash table {\color{hone} $\blacklozenge$}\xspace by Shun and Blelloch \cite{shun2014phase} can outperform our
table. The reason for this is quite simple. Their table performs linear
probing comparable to our technique, but it does not use any tombstones for
deletion. Instead, deleted cells are reclaimed immediately (possibly moving
elements). This is only possible because the table does not allow concurrent lookup
operations, thus removing the possibility of the so-called ABA problem (a lookup of
an element while it is deleted returns wrong data if there is also a concurrent
insert into the newly freed cell).
Of all remaining hash tables that support fully concurrent access, ours is
clearly the fastest, even though there are other hash tables like cuckoo \cuck and
hopscotch \hops that also get around full table migrations.
\paragraph*{Mixed Insertions and Finds}
It can be argued that some of our tests are just micro-benchmarks which are not
representative of real world workloads that often mix insertions with lookups.
To address these concerns, we want to show that mixed function workloads
(i.e.~combined find and insert workloads) behave similarly.
As in previous tests, we generate a key sequence for our test. Each key of this
sequence is used for an insert or a find operation. Overall, we
generate $10^8$ keys for our benchmark. For
each key, insert or find is chosen at random according to the write percentage
$wp$. In addition to the keys used in the benchmark, we generate a small number
of keys ($pre = 8192\cdot p = 2\,\text{blocks}\cdot p$) that are inserted prior
to the benchmark. This ensures that the table is not empty and there are keys
that can be found with lookups.
The keys used for insertions are drawn uniformly from the key space. Our goal
is to pre-construct the find keys in a way that makes find
operations successful and is also fair to all data structures. If all find
operations were executed on the pre-inserted keys, then linear probing hash
tables would have an unfair advantage, because elements that are inserted early
have very short probing distances, while later elements can take much longer to
find. Therefore, any find will look for a random key that is inserted at least
$8192 \cdot p$ elements earlier in the key sequence. This key is usually
already in the table when the find operation is called. Looking for a random
inserted element is representative of the overall distribution of probing
distances in the table.
Notice that this method does not strictly ensure that all search keys are
already inserted. In our practical tests we found that the number of
keys which were not found was negligible for performance purposes (usually below $1000$).
\begin{figure*}
% Plots omitted.
\caption{\label{fig:mix_test_ng}Mixed insert/find workload without growing.}
\caption{\label{fig:mix_test_g}Mixed insert/find workload with growing.}
\caption{\label{fig:mix_tests}Mixed insertion and find tests.}
\end{figure*}
As in previous tests, we test all hash tables with and without the
necessity to grow the table. In the non-growing test, the size of each table is
pre-initialized to be $c = pre + (wp\cdot 10^8)$. In the growing tests,
semi-growing hash tables are initialized with half that capacity.
Similar to previous tests, it is obvious that our non-growing linear probing hash
table folklore \folk outperforms most other tables, especially on find-heavy
workloads. Overall, our hash tables behave similarly to the sequential solution,
with a roughly constant speedup of about $10\times$. Interestingly, the
running time does not seem to be a linear function (over $wp$). Instead,
performance decreases super-linearly. One reason for this could be that for find-heavy workloads, the table remains relatively small for most of the
execution. Therefore, cache effects and similar influences could play a role,
since lookups only look for a small sample of elements that is already in the
table.
\paragraph*{Using Dedicated Growing Threads}
In \autoref{ss:async} and \autoref{sec:impl} we describe the possibility of using a
pool of dedicated migration threads which grow the table cooperatively. Usually,
the performance of this method does not differ greatly from the performance of
the enslavement variant used throughout our testing. This can be seen in
\autoref{fig:thread_grow}. Therefore, we omitted these variants from most
plots.
\begin{figure*}
% Plots omitted.
\caption{\label{fig:p_ins}Insert throughput using dedicated migration threads.}
\caption{\label{fig:p_del}Deletion benchmark using dedicated migration threads.}
\caption{\label{fig:thread_grow}Comparison of user-thread and thread-pool migration.}
\end{figure*}
In \autoref{fig:p_ins} one can clearly see the similarities between the
variants using a thread pool and their counterparts (uaGrow \uag $\cong$ paGrow \pag and usGrow \usg $\cong$ psGrow \psg). The biggest consistent
difference we found between the two options has been measured during the
deletion benchmark in \autoref{fig:p_del}. During this benchmark, insert and
delete are called alternately. This keeps the actual table size constant. For
our implementation, this means that there are frequent migrations on a
relatively small table size. This is difficult when using additional migration
threads, since the threads have to be awoken regularly, introducing some
operating system overhead (scheduling and notification).
\paragraph*{Using Intel TSX Technology}
As described in \autoref{sec:add_tsx}, concurrent linear probing hash tables can
be implemented using Intel TSX technology to reduce the number of atomic
operations. \autoref{fig:tsx} shows some of the results using this approach.
The implementation used in these tests changes only the operations within our
bounded hash table (folklore) to use TSX transactions. Atomic fallback
implementations are used when a transaction fails. We also instantiated our
growing hash table variants to use the TSX-optimized table as the underlying
hash table implementation.
We tested this variant with a uniform insert workload (see ``Insert
Performance''), because the lookup implementation does not actually need a
transaction. We also show the non-TSX variant, using dashed lines, to indicate the relative performance benefits.
\begin{figure*}
% Plots omitted.
\caption{\label{fig:tsx0}Insert throughput of the TSX variants without growing.}
\caption{\label{fig:tsx1}Insert throughput of the TSX variants with growing.}
\caption{\label{fig:tsx}Performance using Intel TSX (non-TSX variants shown dashed).}
\end{figure*}
In \autoref{fig:tsx0} one can clearly see that TSX-optimized hash tables offer
improved performance as long as growing is not necessary. Unfortunately,
\autoref{fig:tsx1} paints a different picture for instances where growing is
necessary. While TSX can be used to improve the usGrow \usg variant of our hash table
especially when using hyperthreading, it offers no performance benefits in the
uaGrow \uag variant. The reason for this is that the running time in these
measurements is dominated by the table migration which is not optimized for
TSX-transactions.
In theory, the migration algorithm can make use of transactions similarly to
single operations. It would be interesting to see whether an optimized migration could
further improve the growing instances of this test. We have not implemented such
a migration, as it introduces the need for some complex parameter optimizations
-- partitioning the migration into smaller blocks or splitting each block migration
into multiple transactions. We estimate that a well-optimized TSX migration could
gain performance increases of the order of those observed in the non-growing
case.
\paragraph*{Memory Consumption}
One aspect of parallel hash tables that we have not discussed until now
is memory consumption. Overall, a low memory consumption is preferable, but
having fewer cells means that there will be more hash collisions. This
leads to longer running times especially for non-successful find operations.
Most hash tables do not allow the user to set a specific table size
directly. Instead they are initialized using the expected number of elements.
We use this mechanism to create tables of different sizes. Using these
different hash tables with different sizes, we find out how well any one hash
table scales when it is given more memory. This is interesting for applications
where the hash table speed is more important than its memory footprint (lookups
to a small or medium-sized hash table within an application's inner loop).
The values presented in the following plot are obtained by initializing the hash
tables with different table capacities
($4096, 0.5\times, 1.0\times, 1.25\times, 1.5\times, 2.0\times, 2.5\times,
3.0\times 10^8$
expected elements; semi- and non-growing hash tables start at $0.5 \times$ and
$1\times$ respectively). During the test, the memory consumption is measured by
logging the size of each allocation and deallocation during the execution (done
by replacing allocation methods, e.g.~\texttt{malloc} and
\texttt{memalign}). Measurements with growing (initial capacity $<10^8$)
are marked with dashed lines. After the initialization, the table is filled with $10^8$
elements. The plotted measurements show the throughput that can be achieved when
doing $10^8$ unsuccessful lookups on the filled table. This throughput
is plotted over the amount of allocated memory each hash table used.
The minimum size for any hash table should be around
$1.49\,\text{GiB} \approx 10^8\cdot (8\,\text{B}+8\,\text{B})$ (key and value each occupy
$8\,\text{B}$). Our hash table uses a number of cells equal to the smallest
power of $2$ that is at least two times as large as the expected number of
elements. In this case this means we use $2^{28} \approx 2.7\cdot 10^8$ cells;
therefore, the table will be filled to $\approx 37\,\%$ and use exactly
$4\,\text{GiB}$. We believe that this memory usage is reasonable, especially for
heavily accessed tables where the performance is important. This is
supported by our measurements, as all hash tables that use less memory show bad
performance.
Most hash tables round the number of cells in some convenient
way. Therefore, there are often multiple measurement points using the same
amount of memory. As expected, using the same amount of memory will usually
achieve a comparable performance. Out of the tested hash tables, only the folly {\color{hone} $+$}\xspace
hash table grows linearly with the expected final size. It is also the hash
table that gains the most performance by increasing its memory. This makes a
lot of sense considering that it uses linear probing and is by default
configured to use more than $50\,\%$ of its cells.
The plot also shows that some hash tables do not gain any performance benefits
from the increased size. Most notable here are cuckoo \cuck, all variations
of junction {\color{hone} \fcirc}\xspace {\color{htwo} \bcirc}\xspace {\color{htwo} \lcirc}\xspace, and the urcu hash tables \rcua. The TBB hash tables \tbba and
\tbbb seem to use a constant amount of memory, independent of the
preinitialized number of elements. This might be a measurement error, caused by
the fact that they use different memory allocation methods (not logged in our test).
There are also some things that can be learned about growing hash tables from
this plot. Our migration technique ensures that our hash table has the exact
same size when growing is required as when it is preinitialized using the same
number of elements. Therefore, lookup operations on the grown table take the
same time as they would on a preinitialized table. This is not true for many of
our competitors. All junction tables and RCU produce smaller tables when growing
was used; they also suffer from a minor slowdown when performing lookups on these
smaller tables. Folly is even worse: it produces a bigger table -- when
growing is needed -- and still suffers from significantly worse performance.
\begin{figure*}
\caption{\label{fig:mem_test}Performance of unsuccessful find operations
over the size of the data structure.}
\end{figure*}
\paragraph*{Scalability on a 4-Socket Machine}
Bad performance on multi-socket workloads is a recurring theme throughout our
testing. This is especially true for some of our competitors, where 2-socket
running times are often worse than 1-socket running times. To further our
understanding of this problem we ran additional tests on the 4-socket machine described in \autoref{sec:exp_hardware}.
The used test instances are generated similar to the insert/find tests described
in the beginning of this section ($10^8$ executed operations with uniformly
random keys). The results can be seen in \autoref{fig:in_127} (Insertions) and
\autoref{fig:look_127} (unsuccessful finds).
\begin{figure*}
\caption{\label{fig:in_127}Insertion throughput on the 4-socket machine.}
\caption{\label{fig:look_127}Throughput of unsuccessful finds on the 4-socket machine.}
\caption{\label{fig:127_tests}Scalability tests on the 4-socket machine.}
\end{figure*}
Our competitors' hash tables
seem to be a lot more effective when using only one of the four sockets
(compared to one of two sockets on the two-socket machine). This is especially
true for the lookup workload, where the junction hash tables {\color{hone} \fcirc}\xspace start out more
efficient than our implementation. However, this effect seems to invert once
multiple sockets are used.
In the lookup test, our hash table shows a performance problem: it scales
sub-optimally on one socket. On two sockets, however, it scales significantly
better.
Overall the four-socket machine reconfirms our observations. None of our
competitors scale well when a growing hash table is used over multiple
sockets. On the contrary, using multiple sockets will generally reduce the
throughput. This is not the case for our hash table. The efficiency is reduced
when using more than two sockets, but the absolute throughput at least remains
stable.
\subsection{The Price of Generality} \label{ss:price}
Having looked at many detailed
measurements, let us now try to get a bigger picture by asking
which hash table performs well for specific requirements and how much
performance has to be sacrificed for additional flexibility. This will give us
an intuition, where performance is sacrificed on our way to a fully general hash
table. Seeing that all tested hash tables fail to scale linearly on
multi-socket machines we try to answer the question if concurrent hash tables
are worth their overhead at all.
At the most restricted level -- no growing/deletions and word-sized key and
value types -- we have shown that common linear probing hash tables offer the
best performance (over a number of operations). Our implementation of this
``folklore'' solution outperforms different approaches consistently, and
performs at least as well as other similar implementations (i.e.,~the phase
concurrent approach). We also showed that this performance can be improved by
using Intel TSX technology. Furthermore, we have shown that our approach to
growing hash tables does not significantly affect the performance on known input
sizes (i.e., when the table is preinitialized to the correct size).
Sticking to fixed data types but allowing dynamic growing, the best data
structures are our growing variants
($\{\text{ua}, \text{us}, \text{pa}, \text{ps}\}$Grow). The differences in our
measurements between pool growing (pxGrow) and the corresponding variants with
enslavement (uxGrow) are not very big. Growing with marking performs better than globally
synchronized growing except for update heavy workloads. The price of
growing compared to a fixed size is less than a factor of two for insertions and
updates (aggregation) and negligible for find-operations. Moreover, this
slowdown is comparable to the slowdown experienced in sequential hash tables
when growing is necessary. None of the other data structures that support
growing comes even close to our data structures. For insertions and updates we
are an order of magnitude faster than many of our competitors. Furthermore, only
one competitor achieves speedups above one when inserting into a growing table
(junction grampa).
Among the tested hash tables, only TBB, Cuckoo, and RCU have the ability to
store arbitrary key-/value-type combinations. Therefore, using arbitrary data
objects with one of these hash tables can be considered to cost at least an
order of magnitude in performance
($\text{TBB}[\textit{arbitrary}] \leq \text{TBB}[\textit{word sized}] \approx 1/10\cdot
\text{xyGrow}$).
In our opinion, this restricts the use of these data structures to situations
where hash table accesses are not a computational bottleneck. For more
demanding applications the only way to go is to get rid of the general data
types or the need for concurrent hash tables altogether. We believe that the
generalizations we have outlined in \autoref{sec:add_string_keys} will be able
to close this gap. Actual implementations and experiments are therefore
interesting future work.
Finally, let us consider the situation where we need general data types but no
growing. Again, all the competitors are an order of magnitude slower for
insertion than our bounded hash tables. The single exception is cuckoo, which is
only five times slower for insertion and six times slower for successful
reads. However, it suffers severely from contention, being an almost
record-breaking factor of 5\,600 slower for find-operations under contention. Again,
it seems that better data structures should be possible.
\section{Conclusion}\label{sec:conclusion}
We demonstrate that a bounded linear probing hash table specialized to pairs of
machine words has much higher performance than currently available general
purpose hash tables like Intel TBB, Leahash, or RCU based implementations. This is
not surprising from a qualitative point of view given previous publications
\cite{StivalaCASTable10,kim2013performance,shun2014phase}. However, we found it
surprising how big the differences can be in particular in the presence of
contention. For example, the simple decision to require a lock for reading can
decrease performance by almost four orders of magnitude.
Perhaps our main contribution is to show that integrating an adaptive growing
mechanism into that data structure has only a moderate performance
penalty. Furthermore, the migration algorithm used for growing can also be used to implement
deletions in a way that reclaims freed memory. We also explain how to further
generalize the data structure to allow more general data types.
The next logical steps are to implement these further generalizations
efficiently and to integrate them into an easy to use library that hides most of
the variants from the user, e.g., using programming techniques like partial
template specialization.
Further directions of research could be to look for a practical growable
lock-free hash table.
\paragraph*{Acknowledgments}
We would like to thank Markus Armbruster, Ingo M\"uller, and Julian Shun for
fruitful discussions.
\end{document} |
\begin{document}
\title[Multivariate generating functions]{Multivariate generating functions built of Chebyshev polynomials and some of
their applications and generalizations.}
\author{Pawe\l \ J. Szab\l owski}
\address{Emeritus in Department of Mathematics and Information Sciences,\\
Warsaw University of Technology\\
ul Koszykowa 75, 00-662 Warsaw, Poland}
\email{[email protected]}
\thanks{The author is grateful to the anonymous referee for his detailed and in-depth
remarks and suggestions.}
\date{January, 2018}
\subjclass[2000]{Primary 42C10, 33C47, Secondary 26B35 40B05}
\keywords{multivariate generating functions, Kibble-Slepian formula, Chebyshev
polynomials, q-Hermite polynomials, inversion of Poisson-Mehler formula}
\begin{abstract}
We sum multivariate generating functions composed of products of Chebyshev
polynomials of the first and the second kind. That is, we find closed forms of
expressions of the type $\sum_{j\geq0}\rho^{j}\prod_{m=1}^{k}T_{j+t_{m}}
(x_{m})\prod_{m=k+1}^{n+k}U_{j+t_{m}}(x_{m}),$ for different integers $t_{m},$
$m=1,...,n+k.$ We also find a Kibble-Slepian formula of $n$ variables with
Hermite polynomials replaced by Chebyshev polynomials of the first or the
second kind. In all the considered cases, the obtained closed forms are
rational functions with positive denominators. We show how to apply the
obtained results to integrate some rational functions or to sum some related
series of Chebyshev polynomials. We hope that the obtained formulae will be
useful in the so-called free probability. We also expect the obtained
results to inspire further research and generalizations. In particular,
following the methods presented in this paper, one should be able to obtain
similar formulae for the so-called $q$-Hermite polynomials, since the
Chebyshev polynomials of the second kind considered here are the $q$-Hermite
polynomials for $q=0$. We have applied these methods in the one- and
two-dimensional cases and were able to obtain nontrivial identities concerning
$q$-Hermite polynomials.
\end{abstract}
\maketitle
\section{Introduction}
In this work we obtain closed forms of the following expressions:
Case I. The multivariate generating functions:
\begin{equation}
\chi_{k,n}^{(t_{1},...,t_{k+n})}(x_{1},...,x_{n+k}|\rho)=\sum_{j\geq0}\rho
^{j}\prod_{m=1}^{k}T_{j+t_{m}}(x_{m})\prod_{m=k+1}^{n+k}U_{j+t_{m}}(x_{m}),
\label{_ktnu}
\end{equation}
where $\left\vert t_{m}\right\vert ,k,n\in\{0,1,...\},$ $k+n\geq1,$
$\left\vert \rho\right\vert <1,$ $\left\vert x_{m}\right\vert \leq1$ and
$T_{j},U_{j}$ denote $j-$th Chebyshev polynomials respectively of the first
and second kind.
Case II. The so-called Kibble--Slepian formula for Chebyshev polynomials i.e.
closed forms of the expressions:
\begin{align}
f_{T}(\mathbf{x}|K_{n}) & =\sum_{S}(\prod_{1\leq i<j\leq n}\left( \rho
_{ij}\right) ^{s_{ij}})\prod_{m=1}^{n}T_{\sigma_{m}}(x_{m}),\label{_t}\\
f_{U}(\mathbf{x}|K_{n}) & =\sum_{S}(\prod_{1\leq i<j\leq n}\left( \rho
_{ij}\right) ^{s_{ij}})\prod_{m=1}^{n}U_{\sigma_{m}}(x_{m}), \label{_u}
\end{align}
where $\mathbf{x}\allowbreak=\allowbreak(x_{1},...,x_{n})$. $K_{n}$ denotes
the symmetric, non-singular, $n\times n$ matrix with ones on its diagonal and
with $\rho_{ij}$ as its non-diagonal $ij$-th entry. $\sum_{S}$ denotes
summation over all $n(n-1)/2$ non-diagonal entries of a symmetric $n\times n$
matrix $S_{n}$ with zeros on the main diagonal and entries $s_{ij}$ being
nonnegative integers, while $\sigma_{m}$ is the sum of the entries $s_{ij}$
along the $m$-th row of the matrix $S_{n}.$
We will show that in the case I. all functions $\chi_{k,n}$ are rational with
common denominator $w_{n+k}(x_{1},...,x_{k+n}|\rho)$ which is a symmetric
polynomial in $x_{1},...,x_{n+k}$ of degree $2^{n+k-1}$ as well as in $\rho$
of degree $2^{n+k}$ defined recursively by (\ref{rek}).
In case II. both functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}
|K_{n})$ are rational with the same denominator
\begin{equation}
V_{n}(\mathbf{x|}K_{n})=\prod_{j=1}^{n-1}\prod_{k=j+1}^{n}w_{2}(x_{k}
,x_{j}|\rho_{kj}), \label{mianKib}
\end{equation}
where $w_{2}$ is defined by (\ref{w2}), below.
The fact that these functions are rational is not very surprising, given that
Chebyshev polynomials can be expressed through trigonometric
functions and that, by the Euler formulae, the series (\ref{_ktnu}),
(\ref{_t}) and (\ref{_u}) are sums of geometric series. However, obtaining
the exact forms of the denominators, and especially of the numerators, is nontrivial.
Both statements will be proved in the sequel: the first one in Section
\ref{one} and the second in Section \ref{kibb}.
Chebyshev polynomials of the second kind (which are orthogonal with respect to
the semicircle distribution) play a similar role in the rapidly
developing ``free probability'' as the Hermite polynomials (which are
orthogonal with respect to the normal distribution) play in classical
probability. This is so because the central role in free probability is
played by the semicircle distribution, while in the classical theory the central
role is played by the normal distribution. Hence the results presented below
are of significance for free probability theory.
The results of the paper may also find other applications; for example, they can
help in the following:
\begin{enumerate}
\item To simplify calculations of some of the multiple integrals of the form
\[
\underset{k~fold}{\int...\int}\frac{v_{m}(x_{1},...,x_{n}|\mathbf{p})}
{\Omega_{n}(x_{1},...,x_{n}|\mathbf{p})}\prod_{j=1}^{k}(1-x_{j}^{2})^{m_{j}
/2}dx_{1}...dx_{k},
\]
where $v_{m}$ denotes some polynomial in variables $x_{1},...,x_{n}$ and
numbers $m_{j}\allowbreak\in\{-1,1\}$, $\mathbf{p}$\textbf{\ }denotes a set of
parameters. Thus, this set might be different in cases I. or II. $\Omega_{n}
$\textbf{\ }is equal to $w_{n}$ in the case I, (see iterative formula
(\ref{rek})) or $V_{n}$ in the case II (see formula (\ref{mianKib})). This is
based on the observation that the closed forms in Case I and Case II are the
rational functions with the denominators of the form $\Omega_{n}$ while the
numerators are, depending on the case and on numbers $t_{m},$ $m\allowbreak
=\allowbreak1,\ldots,n,$ polynomials of degree at most $\sum_{m=1}^{n}
(t_{m}+1).$ For example, for $n\allowbreak=\allowbreak2$ see Proposition
\ref{2wym}. Hence, one could imagine expanding $\frac{v_{m}(x_{1}
,...,x_{n}|\mathbf{p})}{\Omega_{n}(x_{1},...,x_{n}|\mathbf{p})}$ into the
linear combinations of the series of the forms (\ref{_ktnu}), (\ref{_t}) or
(\ref{_u}), depending on the case considered, Case I or Case II. Now notice
that, thanks to the absolute and uniform convergence of the appropriate series
($\left\vert \rho\right\vert ,$ $\left\vert \varrho_{ij}\right\vert <1$ and
$\left\vert T_{i}(x)\right\vert ,|U_{i}(x)|\leq i+1$ for $\left\vert x\right\vert
\leq1,$ $i\geq0$), one can integrate each summand separately, which
is very easy.
Below we present a few examples illustrating this idea (a numerical check of
the first of them is sketched right after this list). In the first three of
these examples we will use the fact that, following Proposition \ref{2wym},
iii), the numerators of the functions $\chi_{0,2}^{0,0}(x,y,\rho)$ and
$\chi_{0,2}^{2,0}(x,y,\rho)$ are equal respectively to
\[
1-\rho^{2}\text{ and }4x^{2}-4xy-1+\rho^{2}.
\]
Thus for $\left\vert x\right\vert ,\left\vert y\right\vert \leq1$ and
$\left\vert \rho\right\vert <1$ we get
\begin{equation}
\int_{-1}^{1}\frac{2(1-\rho^{2})\sqrt{1-y^{2}}dy}{\pi((1-\rho^{2})^{2}
-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=1, \label{E0}
\end{equation}
\begin{equation}
\int_{-1}^{1}\frac{2(4x^{2}-4xy-1+\rho^{2})\sqrt{1-y^{2}}dy}{\pi((1-\rho
^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=4x^{2}-1, \label{E1}
\end{equation}
since $U_{2}(x)\allowbreak=\allowbreak4x^{2}-1.$ In the next example we use
the (\ref{00}) to sum
\begin{equation}
\sum_{j\geq0}\rho^{2j}U_{2j}(x)=\chi_{0,2}^{0,0}(x,0,i\rho)=\frac{1+\rho^{2}
}{(1+\rho^{2})^{2}-4\rho^{2}x^{2}} \label{oddU}
\end{equation}
and then (\ref{IU}) and the form of $\chi_{0,2}^{2,0}(x,y,\rho)$ to get the
following result:
\begin{equation}
\int_{-1}^{1}\frac{(4x^{2}-4xy-1+\rho^{2})dy}{\pi\sqrt{1-y^{2}}((1-\rho
^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=\frac{4x^{2}-1-\rho
^{2}}{(1+\rho^{2})^{2}-4x^{2}\rho^{2}}. \label{E2}
\end{equation}
In the example below, we used the fact that, following Proposition \ref{2wym},
iv), the numerator of the function $\chi_{1,1}^{1,0}(y,x,\rho)$ is equal to
$(y(1+\rho^{2})-2\rho x).$ Hence taking into account (\ref{IT}) and the fact
that $U_{1}(x)\allowbreak=\allowbreak2x$ we get:
\begin{equation}
\int_{-1}^{1}\frac{2(y(1+\rho^{2})-2\rho x)\sqrt{1-y^{2}}dy}{\pi((1-\rho
^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=-\rho x. \label{E3}
\end{equation}
The following two examples exploit the form given in Corollary \ref{3wym}, ii) and
either (\ref{oU})
\begin{equation}
\frac{2}{\pi}\int_{-1}^{1}\frac{(1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho
^{2}(1+\rho^{2})(x^{2}+y^{2}+z^{2})}{w_{3}(x,y,z|\rho)}\sqrt{1-z^{2}}dz=1,
\label{E4}
\end{equation}
or (\ref{IU}) and then, of course, one of the formulae given in Proposition
\ref{2wym} to sum the obtained infinite series:
\begin{gather}
\frac{1}{\pi}\int_{-1}^{1}\frac{(1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho
^{2}(1+\rho^{2})(x^{2}+y^{2}+z^{2})}{\sqrt{1-z^{2}}w_{3}(x,y,z|\rho
)}dz\label{E5}\\
=\frac{(1-\rho^{2})^{3}+4\rho^{2}(1-\rho^{2})(x^{2}+y^{2})}{(1-\rho^{2}
)^{4}+16\rho^{4}(x^{4}+y^{4})+8\rho^{2}(1-\rho^{2})^{2}(x^{2}+y^{2}
)-16\rho^{2}(1+\rho^{4})x^{2}y^{2}},\nonumber
\end{gather}
where $w_{3}(x,y,z|\rho)$ is given by (\ref{w3}).
\item To derive several expansions of the type (\ref{_u}) and (\ref{_t}) for
the special choices of the parameters $x_{j}$. To illustrate this idea we have
the following examples:
\begin{equation}
\sum_{j=0}^{\infty}(j+1)\rho^{j}U_{j}(x)U_{j}(y)=\frac{(1+\rho^{2})(1-\rho
^{2})^{2}-4\rho^{2}(1+\rho^{2})(x^{2}+y^{2})+16\rho^{3}xy}{((1-\rho^{2}
)^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))^{2}}, \label{EE1}
\end{equation}
\begin{gather}
\sum_{j\geq0}t^{j}T_{2j+1}(x)T_{2j+1}(y)=\label{EE2}\\
\frac{(1-t)xy(1+6t+t^{2}-4t(x^{2}+y^{2}))}{(1-t)^{4}+8t(1-t)^{2}(x^{2}
+y^{2})-16t(1+t^{2})x^{2}y^{2}+16t^{2}(x^{4}+y^{4})}.\nonumber
\end{gather}
To get these identities we used the formulae given in (\ref{00}), (\ref{T11}),
(\ref{U11}) as well as in Corollary \ref{3wym}.
\item To obtain families of multivariate distributions in $\mathbb{R}^{n}$
with compact support of the form:
\[
f_{n}(x_{1},...,x_{n})=\frac{p_{m}(x_{1},...,x_{n}|\mathbf{p})}{\Omega
_{n}(x_{1},...,x_{n}|\mathbf{p})}\prod_{j=1}^{n}(1-x_{j}^{2})^{m_{j}/2},
\]
where the polynomial $p_{m}$ can depend on many parameters and can have any degree,
but must be positive on $\mathbf{S}\allowbreak=\allowbreak[-1,1]^{n}$ and such
that $f_{n}$ integrates to $1$ on $\mathbf{S}$; the indices $m_{j}\in\{-1,1\}.$
\end{enumerate}
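As announced above, integral identities of this kind are easy to check
numerically. The following sketch (our illustration, not part of the derivations;
it assumes SciPy's adaptive quadrature routine \texttt{scipy.integrate.quad})
verifies (\ref{E0}) at a few points:
\begin{verbatim}
# Numerical check of identity (E0): the integral should equal 1 for
# |x| <= 1 and |rho| < 1.
import numpy as np
from scipy.integrate import quad

def w2(x, y, rho):
    # the denominator w_2(x, y | rho), see (w2)
    return (1 - rho**2)**2 - 4*x*y*rho*(1 + rho**2) + 4*rho**2*(x**2 + y**2)

def lhs_E0(x, rho):
    f = lambda y: 2*(1 - rho**2)*np.sqrt(1 - y**2) / (np.pi * w2(x, y, rho))
    value, _ = quad(f, -1.0, 1.0)
    return value

for x, rho in [(0.3, 0.5), (-0.8, 0.25), (0.99, -0.7)]:
    print(x, rho, lhs_E0(x, rho))   # each value should be close to 1
\end{verbatim}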
There is one more reason for which the results are important. Namely, the
Chebyshev polynomials of the second kind are, as stated above, identical with
the so-called $q-$Hermite polynomials for $q\allowbreak=\allowbreak0$. Thus
the results of the paper can be an inspiration to obtain similar results for
the $q-$Hermite polynomials. All these ideas are explained and made more
precise in the sequence of observations, remarks, hypotheses and conjectures
presented in Section \ref{gen}.
An interesting, nontrivial application of the method of Theorem \ref{main}
to well-known cases, leading to
non-obvious identities such as (\ref{id2wym}), (\ref{sumk}) and
(\ref{00k}), is presented in Subsection \ref{Ident}.
The paper is organized as follows. In the next section we present some
elementary observations, recall the basic properties of Chebyshev
polynomials and prove some important auxiliary results. The main
results of the paper are presented in the two successive Sections \ref{one}
and \ref{kibb}, presenting respectively the closed forms of the one-parameter
multivariate generating functions and the closed form of the analogue of the
Kibble--Slepian formula. Section \ref{gen} presents generalizations,
observations, conjectures and examples. Finally, the last Section \ref{dow}
contains the longer proofs.
\section{Auxiliary results and elementary observations\label{pom}}
Let us recall (following \cite{Mason2003}), the definitions of the Chebyshev
polynomials:
\begin{equation}
U_{n}(\cos(\alpha))\allowbreak=\allowbreak\sin((n+1)\alpha)/\sin(\alpha)\text{
and }T_{n}(\cos(\alpha))\allowbreak=\allowbreak\cos(n\alpha) \label{Czebysz}
\end{equation}
and the orthogonality relations they satisfy:
\begin{align}
\int_{-1}^{1}T_{i}(x)T_{j}(x)\frac{1}{\pi\sqrt{1-x^{2}}}dx & =\left\{
\begin{array}
[c]{ccc}
0 & if & i\neq j\\
1/2 & if & i=j\neq0\\
1 & if & i=j=0
\end{array}
\right. ,\label{oT}\\
\int_{-1}^{1}U_{i}(x)U_{j}(x)\frac{2}{\pi}\sqrt{1-x^{2}}dx\allowbreak &
=\allowbreak\left\{
\begin{array}
[c]{ccc}
0 & if & i\neq j\\
1 & if & i=j
\end{array}
\right. . \label{oU}
\end{align}
We have also some simple properties of Chebyshev polynomials that were useful
in obtaining examples (\ref{E1}-\ref{E5}) and (\ref{EE1},\ref{EE2}):
\begin{equation}
T_{j}(0)\allowbreak=\allowbreak U_{j}(0)=\left\{
\begin{array}
[c]{ccc}
0 & if & j\text{ is odd}\\
(-1)^{j/2} & if & j\text{ is even}
\end{array}
\right. , \label{00}
\end{equation}
\begin{gather}
T_{i}(1)=1,T_{j}(-1)=(-1)^{j-2\left\lfloor j/2\right\rfloor },\label{T11}\\
U_{j}(\pm1)=\pm(j+1), \label{U11}
\end{gather}
for $j\geq0,$
\begin{equation}
\int_{-1}^{1}T_{j}(x)\frac{2\sqrt{1-x^{2}}}{\pi}dx=\left\{
\begin{array}
[c]{ccc}
1 & if & j=0\\
-1/2 & if & j=2\\
0 & if & j\notin\{0,2\}
\end{array}
\right. , \label{IT}
\end{equation}
and
\begin{equation}
\int_{-1}^{1}U_{j}(x)\frac{1}{\pi\sqrt{1-x^{2}}}dx=\left\{
\begin{array}
[c]{ccc}
0 & if & j\text{ is odd}\\
1 & if & j\text{ is even}
\end{array}
\right. . \label{IU}
\end{equation}
In the sequel, if all integer parameters $t_{1},...,t_{n+k}$ are equal to
zero, then they will be dropped from the notation of $\chi$. Notice also that the
functions $\chi$ are known for $n+k\allowbreak=\allowbreak1$ and for $n+k\allowbreak
=\allowbreak2$ with $t_{1}\allowbreak=\allowbreak0,$ $t_{2}\allowbreak
=\allowbreak0$. By (\ref{_ktnu}) we have:
\begin{gather}
\chi_{0,1}(x|\rho)=\frac{1}{w_{1}(x|\rho)};\chi_{1,0}(x|\rho)=\frac{1-\rho
x}{w_{1}(x|\rho)},\label{_1}\\
\chi_{0,2}(x,y|\rho)\allowbreak=\allowbreak\sum_{n\geq0}\rho^{n}U_{n}
(x)U_{n}(y)=\frac{1-\rho^{2}}{w_{2}(x,y|\rho)},\label{_2}\\
\chi_{2,0}(x,y|\rho)=\sum_{n\geq0}\rho^{n}T_{n}(x)T_{n}(y)=\frac{1-\rho
^{2}+2\rho^{2}\left( x^{2}+y^{2}\right) -\left( \rho^{2}+3\right) \rho
xy}{w_{2}(x,y|\rho)},\label{_3}\\
\chi_{1,1}(x,y|\rho)=\sum_{n\geq0}\rho^{n}U_{n}(x)T_{n}(y)=\frac{1-\rho
^{2}-2\rho xy+2\rho^{2}y^{2}\allowbreak}{w_{2}(x,y|\rho)}, \label{_4}
\end{gather}
where:
\begin{align}
w_{1}(x|\rho) & =1-2\rho x+\rho^{2},\label{w1}\\
w_{2}(x,y|\rho) & =(1-\rho^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}
(x^{2}+y^{2}). \label{w2}
\end{align}
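To keep the exposition self-contained, let us sketch the elementary
geometric-series argument behind the second formula in (\ref{_1}); the remaining
formulae in (\ref{_1})--(\ref{_4}) can be obtained in the same spirit. Writing
$x\allowbreak=\allowbreak\cos(\alpha)$ we have
\[
\sum_{n\geq0}\rho^{n}T_{n}(\cos(\alpha))=\operatorname{Re}\sum_{n\geq0}\left(
\rho e^{i\alpha}\right) ^{n}=\operatorname{Re}\frac{1}{1-\rho e^{i\alpha}}
=\frac{1-\rho\cos(\alpha)}{1-2\rho\cos(\alpha)+\rho^{2}}=\frac{1-\rho x}
{w_{1}(x|\rho)}.
\]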
Notice also that both $\chi_{2,0}$ and $\chi_{0,2}$ are positive on
$[-1,1]\times\lbrack-1,1].$ The formulae in (\ref{_1}) are well known, e.g.,
within the theory of the Poisson kernel. The formula in (\ref{_2}) is the famous
Poisson--Mehler formula for $q$-Hermite polynomials with $q=0.$ Both
can be found in \cite{Mason2003}. The formulae in (\ref{_3}) and
(\ref{_4}) have been obtained recently in \cite{Szab-Cheb}.
To calculate the functions $\chi_{k,n}^{(t_{1},...,t_{k+n})}$ we need the
following auxiliary results. They are very simple, based on the elementary
properties of the trigonometric functions. We present them for the sake of the
completeness of the paper. We have:
\begin{proposition}
\begin{equation}
w_{1}(\cos(\alpha+\beta)|\rho)w_{1}(\cos(\alpha-\beta)|\rho)\allowbreak
=w_{2}(\cos(\alpha),\cos(\beta)|\rho). \label{pro2}
\end{equation}
\end{proposition}
\begin{proof}
We have
\begin{gather*}
(1-2\rho\cos(\alpha+\beta)+\rho^{2})((1-2\rho\cos(\alpha-\beta)+\rho
^{2})\allowbreak=\allowbreak\\
(1+\rho^{2})^{2}-2\rho(1+\rho^{2})(\cos(\alpha+\beta)+\cos(\alpha
-\beta))+4\rho^{2}\cos(\alpha+\beta)\cos(\alpha-\beta).
\end{gather*}
Now recall that $\cos(\alpha+\beta)+\cos(\alpha-\beta)\allowbreak
=\allowbreak2\cos(\alpha)\cos(\beta)$ and $\cos(\alpha+\beta)\cos(\alpha
-\beta)\allowbreak=\allowbreak\cos^{2}\alpha\allowbreak+\allowbreak\cos
^{2}\beta\allowbreak-\allowbreak1.$
\end{proof}
\begin{proposition}
\label{ilocz}
\begin{gather}
\prod_{j=1}^{k}\cos(\alpha_{j})\allowbreak=\allowbreak\frac{1}{2^{k}}
\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{k}\in\{-1,1\}}\cos(\sum_{l=1}^{k}
i_{l}\alpha_{l}),\label{cos}\\
\prod_{j=1}^{n}\sin(\alpha_{j})\allowbreak\prod_{j=n+1}^{n+k}\cos(\alpha
_{j})=\allowbreak\nonumber\\
\left\{
\begin{array}
[c]{ccc}
\begin{array}
[c]{c}
(-1)^{(n+1)/2}\frac{1}{2^{n+k}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n+k}
\in\{-1,1\}}\\
(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\sin(\sum_{l=1}^{n+k}i_{l}\alpha_{l})
\end{array}
& if & n\text{ is odd}\\%
\begin{array}
[c]{c}
(-1)^{n/2}\frac{1}{2^{n+k}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n+k}\in
\{-1,1\}}\\
(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\cos(\sum_{l=1}^{n+k}i_{l}\alpha_{l})
\end{array}
& if & n\text{ is even}
\end{array}
\right. . \label{sin}
\end{gather}
\end{proposition}
\begin{proof}
See section \ref{dow}.
\end{proof}
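For instance, for $k\allowbreak=\allowbreak2$ formula (\ref{cos}) is nothing but
the familiar product-to-sum identity: since cosine is an even function,
\[
\frac{1}{2^{2}}\sum_{i_{1}\in\{-1,1\}}\sum_{i_{2}\in\{-1,1\}}\cos(i_{1}\alpha
_{1}+i_{2}\alpha_{2})=\frac{1}{2}\left( \cos(\alpha_{1}+\alpha_{2})+\cos
(\alpha_{1}-\alpha_{2})\right) =\cos(\alpha_{1})\cos(\alpha_{2}).
\]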
\begin{lemma}
\label{aux1}Let us take $n\in\mathbb{N},$ $\left\vert \rho_{i}\right\vert <1,$
$\alpha_{i}\in\mathbb{R},$ $i\allowbreak\in S_{n}\allowbreak=\allowbreak
\{1,...,n\}.$ Let $M_{i,n}$ denote a subset of the set $S_{n}$ containing $i$
elements. Let us denote by $\sum_{M_{i,n}\subseteq S_{n}}$ summation over all
$M_{i,n}$ contained in $S_{n}.$ We have:
\begin{gather}
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{i=1}^{n}\rho_{i}^{k_{i}}
)\cos(\beta+\sum_{i=1}^{n}k_{i}\alpha_{i})\allowbreak=\label{ksc}\\
\allowbreak\frac{\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}}
(\prod_{k\in M_{j,n}}\rho_{k})\cos(\beta-\sum_{k\in M_{j,n}}\alpha_{k})}
{\prod_{i=1}^{n}(1+\rho_{i}^{2}-2\rho_{i}\cos(\alpha_{i}))},\nonumber\\
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{i=1}^{n}\rho_{i}^{k_{i}}
)\sin(\beta+\sum_{i=1}^{n}k_{i}\alpha_{i})=\label{kss}\\
\frac{\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}}(\prod_{k\in M_{j,n}
}\rho_{k})\sin(\beta-\sum_{k\in M_{j,n}}\alpha_{k})}{\prod_{i=1}^{n}
(1+\rho_{i}^{2}-2\rho_{i}\cos(\alpha_{i}))}.\nonumber
\end{gather}
\end{lemma}
\begin{proof}
See section \ref{dow}.
\end{proof}
We will also need the following almost trivial special cases of formulae
(\ref{ksc}) and (\ref{kss}). We will formulate them as corollary.
\begin{corollary}
\label{suma}For all $\left\vert \rho\right\vert <1$ we have
\begin{align}
\sum_{n\geq0}\rho^{n}\sin(n\alpha+\beta) & =(\sin(\beta)-\rho\sin
(\beta-\alpha))/(1-2\rho\cos(\alpha)+\rho^{2}),\label{s_si}\\
\sum_{n\geq0}\rho^{n}\cos(n\alpha+\beta) & =(\cos(\beta)-\rho\cos
(\beta-\alpha))/(1-2\rho\cos(\alpha)+\rho^{2}). \label{s_g_c}
\end{align}
\end{corollary}
\begin{proof}
Set $n\allowbreak=\allowbreak1$ and $\alpha\allowbreak=\allowbreak\alpha_{1}$
in (\ref{kss}) and (\ref{ksc}).
\end{proof}
\section{One parameter sums. Multivariate generating functions of Chebyshev
polynomials\label{one}}
The theorem below is obtained by very elementary methods. Given the definition
of the function $\chi_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$
presented by (\ref{_ktnu}) it is obvious that it must be in the form of a
rational function. Even many properties of the denominator of these functions
can be more or less deduced from the definition. However the exact forms of
the numerators of these functions are not trivial. For the sake of
completeness of the paper, we present all these trivial and nontrivial
observations in one theorem.
\begin{theorem}
\label{main}For all integers $n,k\geq0,$ $\left\vert x_{s}\right\vert
<1,t_{s}\in\mathbb{Z},$ $s\allowbreak=\allowbreak1,...,n+k,$ we have:
\begin{equation}
\chi_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)=\allowbreak
\frac{l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)}{w_{n+k}
(x_{1},...,x_{n+k}|\rho)}, \label{formula}
\end{equation}
where $w_{m}(x_{1},...,x_{m}|\rho)$ is a symmetric polynomial of degree $2^{m-1}$
in $x_{1},...,x_{m}$ and of degree $2^{m}$ in $\rho$ defined by the following
recurrence:
\begin{gather}
w_{m+1}(x_{1},...,x_{m-1},\cos(\alpha),\cos(\beta)|\rho)=\label{rek}\\
w_{m}(x_{1},...,x_{m-1},\cos(\alpha+\beta)|\rho)w_{m}(x_{1},...,x_{m-1}
,\cos(\alpha-\beta)|\rho),\nonumber
\end{gather}
$m\geq1$, with $w_{1}(x|\rho)$ given by (\ref{w1}).
$l_{k,n}^{(t_{1},\ldots,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$ is another
polynomial given by the relationship:
\begin{gather}
l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)=\label{diff}\\
\sum_{j=0}^{2^{n+k}-1}\rho^{j}\sum_{m=0}^{j}\frac{1}{m!}\left. \frac{d^{m}
}{d\rho^{m}}w_{k+n}(x_{1},...,x_{k+n}|\rho)\right\vert _{\rho=0}\nonumber\\
\times\prod_{s=1}^{k}T_{(j-m)+t_{s}}(x_{s})\prod_{s=1+k}^{n+k}U_{(j-m)+t_{s}
}(x_{s}).\nonumber
\end{gather}
\end{theorem}
\begin{proof}
See section \ref{dow}.
\end{proof}
\begin{corollary}
Theorem \ref{main} provides for free the following important set of identities
involving Chebyshev polynomials of the first and the second kind. Namely, we
have: $\forall n,k\geq0:n+k\geq1,\forall t_{1},\ldots,t_{n+k}\geq0,\forall
j\geq2^{n+k},\forall(x_{1},\ldots,x_{k+n})\in(-1,1)^{n+k}$
\begin{equation}
\sum_{m=0}^{j}\frac{1}{m!}\left. \frac{d^{m}}{d\rho^{m}}w_{k+n}
(x_{1},...,x_{k+n}|\rho)\right\vert _{\rho=0}\times\prod_{s=1}^{k}
T_{(j-m)+t_{s}}(x_{s})\prod_{s=1+k}^{n+k}U_{(j-m)+t_{s}}(x_{s})=0.
\label{identities}
\end{equation}
In particular we have for $n+k\allowbreak=\allowbreak1:$
\[
U_{k}(x)-2xU_{k+1}(x)+U_{k+2}(x)=0,
\]
which is nothing else than the well-known three-term recurrence satisfied by the
Chebyshev polynomials. However, for say $k=0$ and $n\allowbreak=\allowbreak2$
we get for all $s,m\geq0$
\begin{gather*}
U_{s}(y)U_{m}(x)-4xyU_{s+1}(y)U_{m+1}(x)+2(2x^{2}+2y^{2}-1)U_{s+2}(y)U_{m+2}(x)\\
-4xyU_{s+3}(y)U_{m+3}(x)+U_{s+4}(y)U_{m+4}(x)=0,
\end{gather*}
which is, to our knowledge, unknown.
\end{corollary}
\begin{proof}
Since $l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$ is a polynomial
of degree $2^{k+n}-1$ in $\rho$, the coefficients of $\rho^{j}$ with $j\geq2^{k+n}$
in the expansion (\ref{diff}) must all be equal to zero.
\end{proof}
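The last identity can easily be checked numerically. The following sketch (our
illustration, not part of the proofs; it assumes that
\texttt{scipy.special.eval\_chebyu} implements the Chebyshev polynomials $U_{n}$
used here) evaluates its left-hand side at randomly chosen points:
\begin{verbatim}
import numpy as np
from scipy.special import eval_chebyu as U

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.uniform(-1, 1, size=2)
    s, m = rng.integers(0, 10, size=2)
    value = (U(s, y)*U(m, x)
             - 4*x*y*U(s + 1, y)*U(m + 1, x)
             + 2*(2*x**2 + 2*y**2 - 1)*U(s + 2, y)*U(m + 2, x)
             - 4*x*y*U(s + 3, y)*U(m + 3, x)
             + U(s + 4, y)*U(m + 4, x))
    print(abs(value))   # each printed value should be numerically zero
\end{verbatim}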
\begin{corollary}
For $n\geq2,$ using the symmetry of $w_{n}$ and taking $\beta\allowbreak
=\allowbreak0$ in (\ref{rek}), we get:
\[
w_{n}(1,x_{2},...,x_{n}|\rho)=(w_{n-1}(x_{2},...,x_{n}|\rho))^{2}.
\]
In particular
\[
w_{3}(x_{1},\cos(\alpha_{2}),\cos(\alpha_{3})|\rho)=w_{2}(x_{1},\cos
(\alpha_{3}+\alpha_{2})|\rho)w_{2}(x_{1},\cos(\alpha_{3}-\alpha_{2})|\rho),
\]
which, after replacing $\cos(\alpha_{2})$ by $x_{2}$ and $\cos(\alpha_{3})$ by
$x_{3}$ and with the help of Mathematica, yields:
\begin{gather}
w_{3}(x_{1},x_{2},x_{3}|\rho)=16\rho^{4}(x_{1}^{4}+x_{2}^{4}+x_{3}^{4}
)-8\rho^{2}(1+\rho^{2})^{2}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})\label{w3}\\
+16\rho^{2}(1+\rho^{4})(x_{1}^{2}x_{2}^{2}+x_{1}^{2}x_{3}^{2}+x_{2}^{2}
x_{3}^{2})+64\rho^{4}x_{1}^{2}x_{2}^{2}x_{3}^{2}-32\rho^{3}(1+\rho^{2}
)x_{1}x_{2}x_{3}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})\nonumber\\
-8\rho(1+\rho^{2})(1+\rho^{4}-6\rho^{2})x_{1}x_{2}x_{3}+(1+\rho^{2}
)^{4}.\nonumber
\end{gather}
\end{corollary}
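The closed form (\ref{w3}) can also be confronted numerically with the recurrence
(\ref{rek}). The following sketch (our illustration, not used in the proofs)
compares both sides at random points:
\begin{verbatim}
import numpy as np

def w2(x, y, rho):
    return (1 - rho**2)**2 - 4*x*y*rho*(1 + rho**2) + 4*rho**2*(x**2 + y**2)

def w3_closed(x, y, z, rho):           # formula (w3)
    return (16*rho**4*(x**4 + y**4 + z**4)
            - 8*rho**2*(1 + rho**2)**2*(x**2 + y**2 + z**2)
            + 16*rho**2*(1 + rho**4)*(x**2*y**2 + x**2*z**2 + y**2*z**2)
            + 64*rho**4*x**2*y**2*z**2
            - 32*rho**3*(1 + rho**2)*x*y*z*(x**2 + y**2 + z**2)
            - 8*rho*(1 + rho**2)*(1 + rho**4 - 6*rho**2)*x*y*z
            + (1 + rho**2)**4)

rng = np.random.default_rng(1)
for _ in range(5):
    a2, a3 = rng.uniform(0, np.pi, size=2)
    x, rho = rng.uniform(-1, 1), rng.uniform(-0.99, 0.99)
    lhs = w3_closed(x, np.cos(a2), np.cos(a3), rho)
    rhs = w2(x, np.cos(a2 + a3), rho) * w2(x, np.cos(a2 - a3), rho)
    print(abs(lhs - rhs))   # should be numerically zero
\end{verbatim}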
\begin{remark}
Notice that from Theorem \ref{main} we deduce that for all integers
$t_{1},...,t_{k+n}$ the ratio
\[
\frac{\chi_{k,n}^{(t_{1},...,t_{k+n})}(x_{1},...,x_{n+k}|\rho)}{\chi
_{k,n}^{(0,...,0)}(x_{1},...,x_{n+k}|\rho)}
\]
is a rational function of arguments $x_{1},...,x_{n+k},\rho.$
Such an observation was first made by Carlitz for $k+n\allowbreak
=\allowbreak2,$ nonnegative integers $t_{1}$ and $t_{2},$ the
so-called Rogers--Szeg\"{o} polynomials and two variables $x_{1}$ and $x_{2}$
in \cite{Carlitz72} (formula 1.4). Later it was generalized by Szab\l owski in
\cite{Szab5} to the so-called $q$-Hermite polynomials, also for two
variables. Now, it turns out that for $q\allowbreak=\allowbreak0$ the
$q$-Hermite polynomials are equal to the Chebyshev polynomials of the second kind,
hence one can state that so far the above-mentioned observation was known only for
$k\allowbreak=\allowbreak0$ and $n\allowbreak=\allowbreak2.$ Hence we deal
with a far-reaching generalization, both in the number of variables and in
allowing the Chebyshev polynomials of the first kind.
\end{remark}
\begin{corollary}
\label{gest}For $\left\vert x_{i}\right\vert \leq1$ and $\left\vert
\rho\right\vert <1,$ $n\geq1:$
\begin{gather*}
\chi_{n,0}(x_{1},...,x_{n}|\rho)\geq0,\\
\underset{j\text{ fold}}{\int_{-1}^{1}...\int_{-1}^{1}}(\prod_{s=1}^{n}
\frac{1}{\pi\sqrt{1-x_{s}^{2}}})\chi_{n,0}(x_{1},...,x_{n}|\rho)dx_{1}
...dx_{j}\allowbreak=\allowbreak\prod_{s=j+1}^{n}\frac{1}{\pi\sqrt{1-x_{s}
^{2}}},
\end{gather*}
for $j=1,...,n.$
\end{corollary}
\begin{proof}
For the first assertion recall that based on Theorem \ref{main} we have
\begin{gather*}
\chi_{n,0}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|\rho)\allowbreak
=\allowbreak\sum_{k\geq0}\rho^{k}\prod_{j=1}^{n}T_{k}(\cos(\alpha
_{j}))\allowbreak=\\
\allowbreak\frac{1}{2^{n}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n}\in
\{-1,1\}}\frac{(1-\rho\cos(\sum_{k=1}^{n}i_{k}\alpha_{k}))}{(1-2\rho\cos
(\sum_{k=1}^{n}i_{k}\alpha_{k})+\rho^{2})},
\end{gather*}
which is nonnegative for all $\alpha_{i}\in\mathbb{R}$, $i\allowbreak
=\allowbreak1,...,n$ and $\left\vert \rho\right\vert <1.$ \newline The
remaining part follows directly from the definition (\ref{_ktnu}) of $\chi_{n,0}$
and from the orthogonality relations (\ref{oT}) of the polynomials $T_{i}$.
\end{proof}
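The case $n\allowbreak=\allowbreak2,$ $j\allowbreak=\allowbreak1$ of the above
integral identity is equivalent to $\int_{-1}^{1}\chi_{2,0}(x,y|\rho)\frac{dx}
{\pi\sqrt{1-x^{2}}}=1,$ which can be verified numerically from the closed form
(\ref{_3}). The sketch below (our illustration, assuming SciPy's
\texttt{scipy.integrate.quad}; the substitution $x=\cos(t)$ removes the endpoint
singularity of the weight) does exactly that:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def chi20(x, y, rho):                  # closed form (_3)
    num = 1 - rho**2 + 2*rho**2*(x**2 + y**2) - (rho**2 + 3)*rho*x*y
    den = (1 - rho**2)**2 - 4*x*y*rho*(1 + rho**2) + 4*rho**2*(x**2 + y**2)
    return num / den

for y, rho in [(0.4, 0.6), (-0.7, -0.3), (0.95, 0.8)]:
    val, _ = quad(lambda t: chi20(np.cos(t), y, rho) / np.pi, 0.0, np.pi)
    print(y, rho, val)   # each value should be close to 1
\end{verbatim}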
Let us now finish the case of two variables. That is, let us
calculate $\chi_{2,0}^{n,m}(x,y|\rho)$ and $\chi_{1,1}^{n,m}(x,y|\rho)$. The case
of $\chi_{0,2}^{n,m}(x,y|\rho)$ has been solved, e.g., in \cite{SzablAW} (Lemma
3, with $q\allowbreak=\allowbreak0).$
\begin{proposition}
\label{2wym}i)
\begin{align*}
\chi_{1,0}^{m,0}(x|\rho) & =\sum_{i=0}^{\infty}\rho^{i}T_{i+m}
(x)\allowbreak=\allowbreak\frac{T_{m}(x)-\rho T_{m-1}(x)}{w_{1}(x|\rho)},\\
\chi_{0,1}^{0,m}(x|\rho) & =\sum_{i=0}^{\infty}\rho^{i}U_{i+m}
(x)=\frac{U_{m}(x)-\rho U_{m-1}(x)}{w_{1}(x|\rho)},
\end{align*}
ii)
\begin{gather*}
\chi_{2,0}^{n,m}(x,y|\rho)\allowbreak=\allowbreak\sum_{k\geq0}\rho^{k}
T_{k+n}(x)T_{k+m}(y)\allowbreak=\allowbreak\\
(T_{n}(x)T_{m}(y)(w_{2}(x,y|\rho)-\rho^{4})\\
+\rho T_{n+1}(x)T_{m+1}(y)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\
+\rho^{2}T_{n+2}(x)T_{m+2}(y)(1-4\rho xy)+\rho^{3}T_{n+3}(x)T_{m+3}
(y))/w_{2}(x,y|\rho),
\end{gather*}
iii)
\begin{gather*}
\chi_{0,2}^{n,m}(x,y|\rho)\allowbreak=\sum_{j\geq0}\rho^{j}U_{j+n}
(x)U_{j+m}(y)\allowbreak=\\
(U_{n}(x)U_{m}(y)(w_{2}(x,y|\rho)-\rho^{4})\\
+\rho U_{n+1}(x)U_{m+1}(y)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\
+\rho^{2}U_{n+2}(x)U_{m+2}(y)(1-4\rho xy)+\rho^{3}U_{n+3}(x)U_{m+3}
(y))/w_{2}(x,y|\rho)
\end{gather*}
$\allowbreak$\newline
iv)
\begin{gather*}
\chi_{1,1}^{n,m}(x,y|\rho)\allowbreak=\allowbreak\sum_{j\geq0}\rho^{j}
U_{m+j}(x)T_{n+j}(y)\allowbreak=\\
(T_{n}(y)U_{m}(x)(w_{2}(x,y|\rho)-\rho^{4})\\
+\rho T_{n+1}(y)U_{m+1}(x)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\
+\rho^{2}T_{n+2}(y)U_{m+2}(x)(1-4\rho xy)+\rho^{3}T_{n+3}(y)U_{m+3}
(x))/w_{2}(x,y|\rho).
\end{gather*}
$\allowbreak\allowbreak\allowbreak$
\end{proposition}
\begin{proof}
We apply formula (\ref{diff}). For i) we work with one variable and notice that the
values of the derivatives of $w_{1}$ with respect to $\rho$ at $\rho=0$ are $1,$ $-2x,$ $2.$
To get ii) we notice that the successive derivatives of $w_{2}$ with respect to
$\rho$ at $\rho=0$ are $1,$ $-4xy,$ $8x^{2}+8y^{2}-4,$ $-24xy,$ $24$. Having this
and applying (\ref{diff}) directly we get a formula expanded in
powers of $\rho.$ Mathematica then yields the form presented above.
For iii) and iv) we argue similarly, getting expansions in powers of $\rho;$ then,
using Mathematica, we obtain a more friendly form.
\end{proof}
As a corollary we get formulae presented in (\ref{_2}) and (\ref{_3}) when
setting $n\allowbreak=\allowbreak m\allowbreak=\allowbreak0$ and remembering
that $T_{-i}(x)\allowbreak=\allowbreak T_{i}(x),$ $U_{-i}(x)=-U_{i-2}(x),$ for
$i\allowbreak=\allowbreak0,1,2$.
\begin{corollary}
\label{3wym}$\forall x,y,z\in\lbrack-1,1],\left\vert \rho\right\vert <1:$
i)
\begin{gather*}
\chi_{3,0}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)T_{i}(y)T_{i}
(z)\allowbreak=\allowbreak((1+\rho^{2})^{3}\allowbreak\allowbreak
+\allowbreak8\rho^{4}\left( x^{4}+y^{4}+z^{4}\right) \allowbreak
+\allowbreak\allowbreak32\rho^{4}x^{2}y^{2}z^{2}\allowbreak\\
-\allowbreak2\left( \rho^{2}+1\right) \left( \rho^{2}+3\right) \rho
^{2}\left( x^{2}+y^{2}+z^{2}\right) \allowbreak+\allowbreak4\left( \rho
^{4}+3\right) \rho^{2}\left( x^{2}y^{2}+x^{2}z^{2}+y^{2}z^{2}\right)
\allowbreak\\
-\allowbreak4\left( 3\rho^{2}+5\right) \rho^{3}xyz\left(
x^{2}+y^{2}+z^{2}\right) \allowbreak-\allowbreak\left( \rho^{6}-15\rho
^{4}-25\rho^{2}+7\right) \rho xyz)\allowbreak/w_{3}(x,y,z|\rho),
\end{gather*}
ii)
\begin{gather*}
\chi_{0,3}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}U_{i}(x)U_{i}(y)U_{i}(z)=\\
((1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho^{2}(1+\rho^{2})(x^{2}+y^{2}
+z^{2}))/w_{3}(x,y,z|\rho),
\end{gather*}
iii)
\begin{gather*}
\chi_{1,2}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)U_{i}(y)U_{i}(z)=\\
(\left( \rho^{2}+1\right) ^{3}\allowbreak+\allowbreak8\rho^{4}
x^{4}\allowbreak-\allowbreak16\rho^{3}x^{3}yz\allowbreak-\allowbreak2\left(
\rho^{2}+1\right) \left( \rho^{2}+3\right) \rho^{2}x^{2}\allowbreak\\
\allowbreak+\allowbreak8\rho^{2}x^{2}\left( y^{2}+z^{2}\right)
-\allowbreak4\rho\left( 5-(\rho^{2}+2)^{2}\right) xyz\allowbreak
-\allowbreak4\left( \rho^{2}+1\right) \rho^{2}(y^{2}+z^{2})\allowbreak
)/w_{3}(x,y,z|\rho),
\end{gather*}
iv)
\begin{gather*}
\chi_{2,1}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)T_{i}(y)U_{i}(z)=\\
(\left( \rho^{2}+1\right) ^{3}\allowbreak+\allowbreak8\rho^{4}\left(
x^{4}+y^{4}\right) \allowbreak-\allowbreak2\left( \rho^{2}+1\right) \left(
\rho^{2}+3\right) \rho^{2}\left( x^{2}+y^{2}\right) \allowbreak\\
+\allowbreak4\left( \rho^{4}+3\right) \rho^{2}x^{2}y^{2}\allowbreak
+\allowbreak16\rho^{4}x^{2}y^{2}z^{2}\allowbreak+\allowbreak8\rho^{2}
z^{2}\left( x^{2}+y^{2}\right) \allowbreak-\allowbreak8\left( \rho
^{2}+2\right) \rho^{3}xyz\left( x^{2}+y^{2}\right) \allowbreak\\
-\allowbreak8\rho^{3}xyz^{3}\allowbreak-\allowbreak2\left( -5\rho^{4}
-10\rho^{2}+3\right) \rho xyz\allowbreak-\allowbreak4\left( \rho
^{2}+1\right) \rho^{2}z^{2})/w_{3}(x,y,z|\rho),
\end{gather*}
where $w_{3}(x,y,z|\rho)\allowbreak$ is given by (\ref{w3}).
\end{corollary}
\begin{proof}
Again we apply formula (\ref{diff}). Moreover, we take $k\allowbreak
=\allowbreak3,$ $n\allowbreak=\allowbreak0$ for i), $k\allowbreak
=\allowbreak0,$ $n\allowbreak=\allowbreak3$ for ii), $k\allowbreak
=\allowbreak1,$ $n\allowbreak=\allowbreak2$ for iii) and $k\allowbreak
=\allowbreak2,$ $n\allowbreak=\allowbreak1$ for iv). Now we have to remember
that successive derivatives of $w_{3}$ with respect to $\rho$ taken at
$\rho\allowbreak=\allowbreak0$ are respectively $1,$ $-8xyz,$ $8(1\allowbreak
-\allowbreak(x^{2}+y^{2}+z^{2})\allowbreak+\allowbreak4(x^{2}y^{2}
\allowbreak+\allowbreak x^{2}z^{2}\allowbreak+\allowbreak y^{2}z^{2})),$
$48xyz(5\allowbreak-\allowbreak4(x^{2}+y^{2}+z^{2})),$ $48(3\allowbreak
-\allowbreak8(x^{2}+y^{2}+z^{2})\allowbreak+\allowbreak8(x^{4}+y^{4}
+z^{4})\allowbreak+\allowbreak32x^{2}y^{2}z^{2}),$ $960xyz(5\allowbreak
-\allowbreak4(x^{2}+y^{2}+z^{2})),$ $2880(1\allowbreak-\allowbreak(x^{2}
+y^{2}+z^{2})\allowbreak+\allowbreak4(x^{2}y^{2}\allowbreak+\allowbreak
x^{2}z^{2}\allowbreak+\allowbreak y^{2}z^{2})),$ $-40320xyz.$ Then we get
certain formulae by applying formula (\ref{diff}) directly. The expressions are
long and not very legible. We applied Mathematica to get the forms presented in
i), ii), iii) and iv).
\end{proof}
\section{Kibble--Slepian formula and related sums for Chebyshev polynomials
\label{kibb}}
Let $f_{n}(x_{1},...,x_{n}|K_{n})$ denote the density of the normal
distribution with zero expectations and non-singular covariance matrix $K_{n}$
such that $\operatorname*{var}(X_{i})=\allowbreak1$ for $i\allowbreak
=\allowbreak1,...,n,$ i.e., having ones on the diagonal. Let
$\rho_{ij}$ denote $ij-$th entry of matrix $K_{n}.$ Consequently, the
one-dimensional marginals $f_{1}$ are given by:
\[
f_{1}(x)\allowbreak=\allowbreak\exp(-x^{2}/2)/\sqrt{2\pi}.
\]
Let us also denote by $S_{n}$ a symmetric $n\times n$ matrix with zeros on the
diagonal and nonnegative integers as off-diagonal entries. Let us denote the
$ij-$th entry of the matrix $S_{n}$ by $s_{ij}.$ Recall that Kibble in the 40s
and Slepian in the 70s presented the following formula:
\begin{equation}
\frac{f_{n}(x_{1},...,x_{n}|K_{n})}{\prod_{m=1}^{n}f_{1}(x_{m})}=\sum
_{S}(\prod_{1\leq i<j\leq n}\frac{\left( \rho_{ij}\right) ^{s_{ij}}}
{s_{ij}!}\prod_{m=1}^{n}H_{\sigma_{m}}(x_{m})), \label{K-S}
\end{equation}
where $H_{i}(x)$ denotes the $i$-th (so-called probabilistic) Hermite polynomial,
i.e., an element of the orthogonal base of the space of functions square integrable
with respect to the weight $f_{1}(x)$, $\sigma_{m}\allowbreak=\allowbreak
\sum_{j=1}^{m-1}s_{jm}\allowbreak+\allowbreak\sum_{j=1+m}^{n}s_{mj},$
$\sum_{S}$ denotes, as before, summation over all $n(n-1)/2$ non-diagonal
entries of the matrix $S_{n}.$ To see more details on Kibble--Slepian formula
see e.g. recent paper by Ismail \cite{Ismal2016}. A partially successful
attempt was made by Szab\l owski in \cite{SzablKib} where for $n\allowbreak
=\allowbreak3$ the author replaced polynomials $H_{n}$ by the so called
$q-$Hermite polynomials $H_{n}(x|q)$ and $s_{ij}!$ substituted by
$[s_{ji}]_{q}!$ where $[n]_{q}\allowbreak=\allowbreak(1-q^{n})/(1-q)$ for
$\left\vert q\right\vert <1,$ $[n]_{1}\allowbreak=\allowbreak n$ and
$[n]_{q}!\allowbreak=\allowbreak\prod_{i=1}^{n}[i]_{q}$ with $[0]_{q}
!\allowbreak=\allowbreak1.$ Taking into account that $H_{n}(x|0)\allowbreak
=\allowbreak U_{n}(x/2)$ and $[n]_{0}!\allowbreak=\allowbreak1$ we see that
(\ref{K-S}) has already been generalized and summed for other polynomials. The
intention of the summation in \cite{SzablKib} was to find a generalization of the
normal distribution that has compact support. The attempt was only partially
successful: a relatively closed form of the sum was obtained; however, the
obtained sum is not positive for some values of the
parameters $\rho_{ij}$, for any value of $q$ with $\left\vert q\right\vert
<1.$
In the present paper, we are going to present a closed form of the sum
(\ref{K-S}) where the polynomials $H_{n}$ are replaced by Chebyshev polynomials of
either the first or the second kind and $s_{ij}!$ is replaced by $1.$ This last
replacement is justified by the fact that $\left[ s_{ji}\right]
_{q}!\allowbreak=\allowbreak1$ if $q\allowbreak=\allowbreak0.$ For more
details, see publications on the so-called $q$-series and also the brief
introduction at the beginning of Section \ref{gen} below.
In other words, we are going to find closed forms for the sums (\ref{_t}) and
(\ref{_u}), where $\mathbf{x\allowbreak}$ and $K_{n},$ used below, mean, as
before, $\mathbf{x=\allowbreak(}x_{1},...,x_{n})$ while $K_{n}$ denotes
symmetric $n\times n$ matrix with ones on its diagonal and $\rho_{ij}$ as its
$ij$-th entry. We will assume that all $\rho_{ij}$ are from the interval
$(-1,1)$ and additionally that the matrix $K_{n}$ is positive definite.
We have the following result:
\begin{theorem}
\label{kibble}Let us denote $\mathcal{K}_{n}\allowbreak=\allowbreak\left\{
(i,j):1\leq i<j\leq n\right\} $, $\beta_{n,m}\allowbreak=\beta_{n,m}
(i_{n},i_{m})\allowbreak=\allowbreak i_{n}\alpha_{n}+i_{m}\alpha_{m} $. For
$S\subseteq\mathcal{K}_{n}$ let $\rho_{S}\allowbreak=\allowbreak
\prod_{(n,m)\in S}\rho_{nm}$, $b_{S}\allowbreak=\allowbreak\sum_{(n,m)\in
S}\beta_{n,m}$, $B_{1,\ldots,n}\allowbreak=\allowbreak B(i_{1},\ldots
,i_{n})\allowbreak=\allowbreak\sum_{j=1}^{n}i_{j}\alpha_{j} $.
We have i)
\begin{gather*}
f_{T}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})\allowbreak=\\
\allowbreak\frac{1}{2^{n}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n}\in
\{-1,1\}}\frac{\sum_{k=0}^{n}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n}
}^{\prime}\rho_{S_{k}}\cos(b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1}
^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})},
\end{gather*}
ii) If $n$ is even then
\begin{gather*}
f_{U}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})=\\
(-1)^{n/2}\frac{1}{2^{n}\prod_{j=1}^{n}\sin(\alpha_{j})}\sum_{i_{1}
\in\{-1,1\}}...\sum_{i_{n}\in\{-1,1\}}(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\\
\frac{\sum_{k=0}^{n}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime}
\rho_{S_{k}}\cos(B_{1,\ldots,n}-b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1}
^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})},
\end{gather*}
while if $n$ is odd then
\begin{gather*}
f_{U}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})=\\
(-1)^{(n+1)/2}\frac{1}{2^{n}\prod_{j=1}^{n}\sin(\alpha_{j})}\sum_{i_{1}
\in\{-1,1\}}...\sum_{i_{n}\in\{-1,1\}}(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\\
\frac{\sum_{k=0}^{n-1}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime
}\rho_{S_{k}}\sin(B_{1,\ldots,n}-b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1}
^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})}
\end{gather*}
where $S_{k}$ denotes any subset of $\mathcal{K}_{n}$ that contains $k$
elements and $\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime}$ means summation over all such $S_{k}$.
\end{theorem}
\begin{proof}
Let us consider (\ref{_t}) first. Keeping in mind assertions of Proposition
\ref{ilocz} we see that $f_{T}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})$
is the sum of $2^{n}$ summands depending on different arrangement of values of
variables $i_{k}\in\{-1,1\},$ $k\allowbreak=\allowbreak1,...,n.$ Each summand
is equal to cosine taken at $\sum_{j=1}^{n}i_{j}s_{j}\alpha_{j}.$ Recalling
the definition of numbers $s_{j}$ we see that in such sum $s_{mj},$ $1\leq
m<j\leq n$ appears twice, once as $s_{mj}\alpha_{m}i_{m}$ and secondly as
$s_{mj}\alpha_{j}i_{j}.$ Or in other words, we have $\sum_{j=1}^{n}i_{j}
s_{j}\alpha_{j}\allowbreak=\allowbreak\sum_{m=1}^{n-1}\sum_{j=m+1}^{n}
s_{mj}(\alpha_{m}i_{m}+\alpha_{j}i_{j}).$ Having this in mind, we can now
apply summation formula (\ref{ksc}) with $\beta\allowbreak=\allowbreak0$ and
have summed each cosine with a particular system of values of the set
$\{i_{j}:j\allowbreak=\allowbreak1,...,n\}.$ Now it remains to sum over, all
such systems of values.
As far as other assertions are concerned, we use the definition of Chebyshev
polynomials of the second kind, formulae presented in Proposition \ref{ilocz}.
We have in this case $\sum_{j=1}^{n}i_{j}(\sigma_{j}+1)\alpha_{j}\allowbreak
=\allowbreak\sum_{j=1}^{n}i_{j}\alpha_{j}\allowbreak+\allowbreak\sum
_{m=1}^{n-1}\sum_{j=m+1}^{n}s_{mj}(\alpha_{m}i_{m}+\alpha_{j}i_{j}).$ As the
result we deal with a signed sum of either sines or cosines, depending on
whether $n$ (the number of sine factors) is
odd or even. Now we again refer to either (\ref{kss}) or (\ref{ksc}), depending
on the parity of $n$, this time with $\beta\allowbreak=\allowbreak
\sum_{j=1}^{n}i_{j}\alpha_{j}.$
\end{proof}
\begin{corollary}
Both functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}|K_{n})$ are
rational functions of all its arguments. Moreover, they have the same
denominators given by the following formula:
\[
V_{n}(\mathbf{x|}K_{n})=\prod_{j=1}^{n-1}\prod_{k=j+1}^{n}w_{2}(x_{j}
,x_{k}|\rho_{jk}),
\]
where $w_{2}$ is given by the formula (\ref{w2}).
\end{corollary}
\begin{proof}
First of all, notice that following formulae given in Theorem \ref{kibble} the
functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}|K_{n})$ are rational
functions of $x_{1}\allowbreak=\allowbreak\cos(\alpha_{1}),...,x_{n}
\allowbreak=\allowbreak\cos(\alpha_{n}).$ Moreover, it is easy to notice that
all formulae have the same denominators. To find these denominators notice
that the factors in each denominator referring to $(i_{j},i_{m})$ and
$(-i_{j},i_{m})$ are the same since cosine is an even function and that
cosines appear solely in denominators. Further, we can group factors
$(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})$ and
$(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},-i_{m}))+\rho_{jm}^{2})$ and apply
(\ref{pro2})
\begin{gather*}
(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})(1-2\rho_{jm}
\cos(\beta_{j,m}(i_{j},-i_{m}))+\rho_{jm}^{2})\\
=w_{2}(\cos(\alpha_{j}),\cos(\alpha_{m})|\rho_{jm}).
\end{gather*}
since $\beta_{n,m}(i_{n},i_{m})\allowbreak=\allowbreak i_{n}\alpha_{n}
+i_{m}\alpha_{m}.$
\end{proof}
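As a quick consistency check (immediate from the definitions and not needed for the
proof), consider $n\allowbreak=\allowbreak2$: then $\mathcal{K}_{2}$ consists of the
single pair $(1,2),$ $\sigma_{1}\allowbreak=\allowbreak\sigma_{2}\allowbreak
=\allowbreak s_{12},$ and (\ref{_u}) reduces to
\[
f_{U}(\mathbf{x}|K_{2})=\sum_{s_{12}\geq0}\rho_{12}^{s_{12}}U_{s_{12}}(x_{1}
)U_{s_{12}}(x_{2})=\chi_{0,2}(x_{1},x_{2}|\rho_{12})=\frac{1-\rho_{12}^{2}}
{w_{2}(x_{1},x_{2}|\rho_{12})},
\]
in agreement with (\ref{_2}) and with $V_{2}(\mathbf{x|}K_{2})=w_{2}(x_{1}
,x_{2}|\rho_{12}).$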
\begin{corollary}
Let us denote $\beta_{kj}=i_{k}\alpha_{k}+i_{j}\alpha_{j},$ $k\allowbreak
=\allowbreak1,2,$ $j=2,3,$ $k<j,$ $p\allowbreak=\allowbreak\rho_{12}\rho
_{13}\rho_{23},$ $B_{1,2,3}\allowbreak=\sum_{j=1}^{3}i_{j}\alpha_{j},$
\newline$\allowbreak$
\begin{gather*}
c(i_{1},i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho_{12},\rho_{13}
,\rho_{23})\allowbreak=\allowbreak(1-\sum_{1\leq k<j\leq3}\rho_{k,j}\cos
(\beta_{k,j})\allowbreak+\allowbreak\\
p\sum_{1\leq k<j\leq3}\rho_{k,j}^{-1}\cos(2B_{1,2,3}\allowbreak-\allowbreak
\beta_{kj})\allowbreak-\allowbreak p\cos(2B_{1,2,3}))/\prod_{1\leq k<j\leq
3}(1-2\rho_{kj}\cos(\beta_{kj})+\rho_{kj}^{2}),
\end{gather*}
\begin{gather*}
s(i_{1},i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho_{12},\rho_{13}
,\rho_{23})\allowbreak=(\sin(B_{1,2,3})(1+p)\\
-(\rho_{12}\sin(i_{3}\alpha_{3})+\rho_{13}\sin(i_{2}\alpha_{2})+\rho_{23}
\sin\left( i_{1}\alpha_{1}\right) )\allowbreak\\
-\allowbreak(\rho_{12}\rho_{13}\sin(i_{1}\alpha_{1})+\rho_{12}\rho_{23}
\sin(i_{2}\alpha_{2})+\rho_{13}\rho_{23}\sin(i_{3}\alpha_{3})))\\
/\allowbreak\prod_{1\leq k<j\leq3}(1-2\rho_{kj}\cos(\beta_{kj})+\rho_{kj}^{2}).
\end{gather*}
Then:
i) $f_{T}(\cos(\alpha_{1}),\cos(\alpha_{2}),\cos(\alpha_{3}),\rho_{12}
,\rho_{13},\rho_{23})=$\newline$\frac{1}{4}\sum_{i_{2}\in\{-1,1\}}\sum
_{i_{3}\in\{-1,1\}}c(1,i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho
_{12},\rho_{13},\rho_{23}),$
ii) $f_{U}(\cos(\alpha_{1}),\cos(\alpha_{2}),\cos(\alpha_{3}),\rho_{12}
,\rho_{13},\rho_{23})\allowbreak=\allowbreak$\newline$\frac{1}{8}\sum
_{i_{1}\in\{-1,1\}}\sum_{i_{2}\in\{-1,1\}}\sum_{i_{3}\in\{-1,1\}}
(-1)^{\sum_{k=1}^{3}(i_{k}+1)/2}s(i_{1},i_{2},i_{3},\alpha_{1},\alpha
_{2},\alpha_{3},\rho_{12},\rho_{13},\rho_{23})$
$p/\rho_{kj}$ in case of $\rho_{kj}\allowbreak=\allowbreak0$ is understood as
the limit when $\rho_{kj}\allowbreak\rightarrow\allowbreak0.$
iii) $f_{U}(x,y,z,\rho_{12},\rho_{13},\rho_{23})\allowbreak=\allowbreak
(4\rho_{12}\rho_{13}(\rho_{23}-\rho_{12}\rho_{13})(1-\rho_{23}^{2}
)x^{2}\allowbreak+\allowbreak4\rho_{12}\rho_{23}(\rho_{13}-\rho_{12}\rho
_{23})(1-\rho_{13}^{2})y^{2}\allowbreak+\allowbreak4\rho_{13}\rho_{23}
(\rho_{12}-\rho_{13}\rho_{23})(1-\rho_{12}^{2})z^{2}\allowbreak-\allowbreak
4(\rho_{13}-\rho_{12}\rho_{23})(\rho_{23}-\rho_{12}\rho_{13})(1+\rho_{12}
\rho_{13}\rho_{23})xy\allowbreak-\allowbreak4(\rho_{12}-\rho_{13}\rho
_{23})(\rho_{23}-\rho_{12}\rho_{13})(1+\rho_{12}\rho_{13}\rho_{23}
)xz\allowbreak-\allowbreak4(\rho_{13}-\rho_{12}\rho_{23})(\rho_{12}-\rho
_{23}\rho_{13})(1+\rho_{12}\rho_{13}\rho_{23})yz\allowbreak+\allowbreak
(1-\rho_{12}^{2})(1-\rho_{13}^{2})(1-\rho_{23}^{2})(1-\rho_{12}\rho_{13}
\rho_{23}))$\newline$/(w_{2}(x,y|\rho_{12})w_{2}(x,z|\rho_{13})w_{2}
(y,z|\rho_{23}))$
\end{corollary}
\begin{proof}
First of all, notice that $\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj}
\allowbreak=\allowbreak2B_{1,2,3}$ hence in particular $B_{1,2,3}
\allowbreak-\allowbreak\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj}\allowbreak
=\allowbreak-B_{1,2,3}.$ Then the formula i) is clear based on (\ref{ksc})
with $\beta\allowbreak=\allowbreak B_{1,2,3}$. To get ii) notice that
$B_{1,2,3}\allowbreak-\allowbreak\beta_{12}\allowbreak=\allowbreak i_{3}
\alpha_{3}$ and $B_{1,2,3}\allowbreak-\allowbreak\beta_{12}\allowbreak
-\allowbreak\beta_{13}\allowbreak=\allowbreak-i_{1}\alpha_{1},$ similarly for
the other pairs $(1,3)$ and $(2,3)$. Recall also that $B_{1,2,3}
\allowbreak-\allowbreak\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj}\allowbreak
=\allowbreak-B_{1,2,3}.$ Now based on (\ref{kss}) ii) is also clear.
iii) was obtained with the help of Mathematica.
\end{proof}
\begin{remark}
With the help of Mathematica one can show, for example, that the numerator of
$f_{T}(x,y,z|K_{3})$ is a polynomial of degree $6$ and it consists of $265$
monomials. Numerical simulations suggest that it is nonnegative on
$(-1,1)^{3}.$ Unfortunately $f_{U}(x,y,z|K_{3})$ is not nonnegative there
since we have for example \newline$f_{U}(-.9,-.95,.94|\left[
\begin{array}
[c]{ccc}
0 & .6 & .8\\
.6 & 0 & .9\\
.8 & .9 & 0
\end{array}
\right] )\allowbreak=\allowbreak-0.0912121.$ Besides notice that it happens
in the case when matrix $\left[
\begin{array}
[c]{ccc}
1, & .6 & .8\\
.6 & 1 & .9\\
.8 & .9 & 1
\end{array}
\right] $ is positive definite. This observation is in accordance with the
general negative result presented in \cite{SzablKib}, Theorem 1. Recall that
\cite{SzablKib} concerns a generalization of $f_{U}$ to all
parameters $q\in(-1,1)$, taking into account that the $q$-Hermite polynomials
$H_{n}(x|q)$ can be identified for $q\allowbreak=\allowbreak0$ with the
polynomials $U_{n}(x/2).$ The example presented in \cite{SzablKib} concerns
the case (adapted to $q\allowbreak=\allowbreak0)$ when say $\rho
_{12}\allowbreak=\allowbreak0.$ Hence we see that there are many sets of $6$
tuples $x,y,z,\rho_{12},\rho_{13},\rho_{23}$ leading to negative values of
$f_{U}.$
\end{remark}
\section{Remarks on generalization\label{gen}}
In this section, we are firstly going to present a $q$-generalization of the
Chebyshev polynomials of the first kind and secondly to present some remarks and
observations that might help to obtain formulae similar to the ones presented in Theorem
\ref{main}, with Chebyshev polynomials replaced by the so-called $q$-Hermite
polynomials $\left\{ h_{n}\right\} $ and related polynomials. Here $q$ is a certain real
(in general) number such that $\left\vert q\right\vert <1.$ Since in the
previous sections we considered, so to say, the case $q=0$, we will assume in
this section that $q\neq0.$
To proceed further we need to recall certain notions used in $q-$series
theory: $\left[ 0\right] _{q}\allowbreak=\allowbreak0;$ $\left[ n\right]
_{q}\allowbreak=\allowbreak1+q+\ldots+q^{n-1},$ $\left[ n\right]
_{q}!\allowbreak=\allowbreak\prod_{j=1}^{n}\left[ j\right] _{q},$ with
$\left[ 0\right] _{q}!\allowbreak=1,$
\[
\genfrac{[}{]}{0pt}{}{n}{k}
_{q}\allowbreak=\allowbreak\left\{
\begin{array}
[c]{ccc}
\frac{\left[ n\right] _{q}!}{\left[ n-k\right] _{q}!\left[ k\right]
_{q}!} & , & 0\leq k\leq n\\
0 & , & otherwise
\end{array}
\right. .
\]
$\binom{n}{k}$ will denote ordinary, well known binomial coefficient. \newline
It is useful to use the so-called $q-$Pochhammer symbol for $n\geq1:$
\[
\left( a|q\right) _{n}=\prod_{j=0}^{n-1}\left( 1-aq^{j}\right) ,~~\left(
a_{1},a_{2},\ldots,a_{k}|q\right) _{n}\allowbreak=\allowbreak\prod_{j=1}
^{k}\left( a_{j}|q\right) _{n}.
\]
with $\left( a|q\right) _{0}\allowbreak=\allowbreak1$. Note that $n$ can be
equal to $\infty,$ then the $q-$Pochhammer symbol is well defined provided
$\left\vert q\right\vert <1.$
Often $\left( a|q\right) _{n},$ as well as, $\left( a_{1},a_{2}
,\ldots,a_{k}|q\right) _{n}$ will be abbreviated to $\left( a\right) _{n}$
and \newline$\left( a_{1},a_{2},\ldots,a_{k}\right) _{n},$ if it will not
cause misunderstanding.
It is easy to notice that $\left( q\right) _{n}=\left( 1-q\right)
^{n}\left[ n\right] _{q}!$ and that
\[
\genfrac{[}{]}{0pt}{}{n}{k}
_{q}\allowbreak=\allowbreak\allowbreak\left\{
\begin{array}
[c]{ccc}
\frac{\left( q\right) _{n}}{\left( q\right) _{n-k}\left( q\right) _{k}}
& , & n\geq k\geq0\\
0 & , & otherwise
\end{array}
\right. .
\]
\newline The above-mentioned formula is just an example where directly setting
$q\allowbreak=\allowbreak1$ is senseless; however, the passage to the limit
$q\longrightarrow1^{-}$ makes sense.
Notice that in particular $\left[ n\right] _{1}\allowbreak=\allowbreak
n,\left[ n\right] _{1}!\allowbreak=\allowbreak n!,$ $
\genfrac{[}{]}{0pt}{}{n}{k}
_{1}\allowbreak=\allowbreak\binom{n}{k},$ $(a)_{1}\allowbreak=\allowbreak1-a,$
$\left( a;1\right) _{n}\allowbreak=\allowbreak\left( 1-a\right) ^{n}$ and
$\left[ n\right] _{0}\allowbreak=\allowbreak\left\{
\begin{array}
[c]{ccc}
1 & if & n\geq1\\
0 & if & n=0
\end{array}
\right. ,$ $\left[ n\right] _{0}!\allowbreak=\allowbreak1,$ $
\genfrac{[}{]}{0pt}{}{n}{k}
_{0}\allowbreak=\allowbreak1,$ $\left( a;0\right) _{n}\allowbreak
=\allowbreak\left\{
\begin{array}
[c]{ccc}
1 & if & n=0\\
1-a & if & n\geq1
\end{array}
\right. .$
$i$ will denote, as before, the imaginary unit, unless otherwise clearly
stated. In the sequel we will need also the so-called $q-$Hermite polynomials.
There exists a very large literature on the properties as well as applications
of these polynomials. Let us recall only that the three-term recurrence
satisfied by these polynomials is the following
\[
h_{n+1}(x|q)=2xh_{n}(x|q)-(1-q^{n})h_{n-1}(x|q),
\]
with $h_{-1}(x|q)\allowbreak=\allowbreak0,$ $h_{0}(x|q)\allowbreak
=\allowbreak1.$ It is well known that the density, which makes these
polynomials orthogonal is the following
\[
f_{h}\left( x|q\right) =\frac{2\left( q\right) _{\infty}\sqrt{1-x^{2}}
}{\pi}\prod_{k=1}^{\infty}l\left( x|q^{k}\right) ,
\]
where $l\left( x|a\right) =(1+a)^{2}-4x^{2}a.$ Moreover, generating
functions of these polynomials, are equal to:
\begin{equation}
\sum_{j=0}^{\infty}\frac{t^{j}}{\left( q\right) _{j}}h_{j}\left(
x|q\right) =\frac{1}{\prod_{k=0}^{\infty}v\left( x|tq^{k}\right) },
\label{ch1}
\end{equation}
where $v\left( x|a\right) =1-2ax+a^{2}.$
\begin{remark}
For the sake of completeness of the paper, let us recall that $h_{n}
(x|0)\allowbreak=\allowbreak U_{n}(x),$ for $n\geq-1.$
\end{remark}
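As a complement to the recurrence and the generating function above, the following Python sketch (again only an illustration, with names of our choosing) computes $h_{n}(x|q)$ from the three-term recurrence and checks formula (\ref{ch1}) numerically by truncating both the series and the infinite product.
\begin{verbatim}
def h(n, x, q):
    """q-Hermite polynomials: h_{n+1} = 2x h_n - (1 - q^n) h_{n-1}."""
    hm1, h0 = 0.0, 1.0
    for k in range(n):
        hm1, h0 = h0, 2 * x * h0 - (1 - q**k) * hm1
    return h0

def q_poch(q, n):       # (q)_n = prod_{j=1}^{n} (1 - q^j)
    p = 1.0
    for j in range(1, n + 1):
        p *= 1 - q**j
    return p

# numerical check of the generating function (ch1), truncated at N terms
x, q, t, N = 0.4, 0.5, 0.2, 60
lhs = sum(t**j / q_poch(q, j) * h(j, x, q) for j in range(N))
rhs = 1.0
for k in range(200):    # truncated infinite product of 1 / v(x | t q^k)
    rhs /= 1 - 2 * (t * q**k) * x + (t * q**k) ** 2
print(lhs, rhs)         # the two numbers agree up to truncation error
\end{verbatim}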
\subsection{Conjectures, remarks and interesting identities\label{Ident}}
Theorem \ref{main} suggests a new method of summing characteristic
functions. One can formulate it in the following way.
\emph{Suppose that we can guess that a certain multivariate
characteristic function, say for example }
\begin{equation}
\chi_{n}^{(l_{1},\ldots l_{n})}(x_{1},\ldots,x_{n}|\rho,q)=\sum_{j\geq0}
\frac{\rho^{j}}{(q)_{j}}\prod_{k=1}^{n}h_{j+l_{k}}(x_{k}|q), \label{gen_ch}
\end{equation}
\emph{where the numbers }$l_{1},\ldots,l_{n}$ \emph{are integers and
}$\left\vert \rho\right\vert ,\left\vert q\right\vert <1$, \emph{is a
ratio of two functions. Moreover, suppose that we can guess the
form of the denominator }$W_{n}(x_{1},\ldots,x_{n}|\rho,q)$ \emph{of this
ratio. Then the numerator can be obtained by a formula similar to
(\ref{diff}), i.e. by: }
\[
\sum_{j=0}^{\infty}\rho^{j}\sum_{k=0}^{j}\frac{1}{k!}\left. \frac{d^{k}
}{d\rho^{k}}W_{n}(x_{1},\ldots,x_{n}|\rho,q)\right\vert _{\rho=0}\frac
{1}{(q)_{j-k}}\prod_{s=1}^{n}h_{j-k+l_{s}}(x_{s}|q).
\]
\begin{remark}
There are classes of characteristic functions that have common denominators
like for example bivariate ones described in \cite{Szab5}, Proposition 7 (iv)
or, more generally, bivariate functions of the form similar to (\ref{gen_ch})
that were considered by Carlitz in \cite{Carlitz72}. The point is that all
these functions are at most bivariate. There are no results concerning more
variables. Thus we have the following conjecture.
\end{remark}
\begin{conjecture}
Functions $\chi_{n}^{(l_{1},\ldots l_{n})}(x_{1},\ldots,x_{n}|\rho,q)$ for all
$n$ and $l_{1},\ldots,l_{n}$ are ratios of functions with a common
denominator of the form
\[
W_{n}(x_{1},\ldots,x_{n}|\rho,q)=\prod_{i=0}^{\infty}w_{n}(x_{1}
,...,x_{n}|\rho q^{i}),
\]
where functions $w_{n}(x_{1},\ldots,x_{n}|\rho)$ are given by the iterative
relationship (\ref{rek}).
\end{conjecture}
\subsubsection{One-dimensional case}
Now we will present a one-dimensional example, in order to show that even in
this simplest case we obtain interesting identities. In this example we will,
in effect, derive formula (\ref{ch1}) once more. First of all, notice that
$(1-ae^{i\varphi})(1-ae^{-i\varphi})\allowbreak=\allowbreak1+a^{2}-2ax$
$\overset{df}{=}v(x|a)$, where $x\allowbreak=\allowbreak\cos\varphi$. Moreover,
we have:
\[
W_{1}(x|\rho,q)=\prod_{j=0}^{\infty}v(x|\rho q^{j})\allowbreak=\allowbreak
(\rho e^{i\varphi})_{\infty}(\rho e^{-i\varphi})_{\infty}.
\]
Let us define the functions $d_{n}(x|q)$ implicitly by the relationship $\frac
{n!}{(q)_{n}}d_{n}(x|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}
}W_{1}(x|\rho,q)\right\vert _{\rho=0}.$ Notice that $d_{n}(x|q)$ are the
coefficients of the expansion of $W_{1}(x|\rho,q)$ in the following series
\begin{equation}
W_{1}(x|\rho,q)=\sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}}d_{n}(x|q). \label{expW}
\end{equation}
For the sake of symmetry let us also denote by $f_{n}(x|q)$ the coefficients of
the expansion of $1/W_{1}(x|\rho,q)$ in the following series
\[
1/W_{1}(x|\rho,q)\allowbreak=\allowbreak\sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}
}f_{n}(x|q).
\]
\begin{remark}
Let us recall the polynomials $\left\{ b_{n}\right\} $ defined in \cite{bms} and
later analyzed in \cite{Szab-rev} (2.43). These polynomials satisfy the
following three-term recurrence:
\[
b_{n+1}(x|q)=-2q^{n}xb_{n}(x|q)+q^{n-1}(1-q^{n})b_{n-1}(x|q),
\]
with $b_{-1}(x|q)\allowbreak=\allowbreak0,$ $b_{0}(x|q)\allowbreak
=\allowbreak1.$ Moreover, as follows from \cite{SzablAW} (3.18), after some
trivial transformations, the polynomials $\left\{ b_{n}\right\} $ satisfy the
following identity:
\begin{equation}
\sum_{j=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{j}
_{q}b_{n-j}(x|q)h_{j+k}(x|q)=\left\{
\begin{array}
[c]{ccc}
0 & if & k<n\\
(-1)^{n}q^{\binom{n}{2}}\frac{(q)_{k}}{(q)_{k-n}}h_{k-n}(x|q) & if & k\geq n
\end{array}
\right. . \label{idb}
\end{equation}
Recall also that the two families of polynomials $\left\{ h_{n}\right\} $
and $\left\{ b_{n}\right\} $ are related to one another by
\[
b_{n}(x|q)=(-1)^{n}q^{\binom{n}{2}}h_{n}(x|q^{-1}),
\]
for $q\neq0$; for $q\allowbreak=\allowbreak0$ we have $b_{0}(x|0)\allowbreak
=\allowbreak1,$ $b_{1}(x|0)\allowbreak=\allowbreak-2x,$ $b_{2}(x|0)\allowbreak
=\allowbreak1,$ and $b_{-1}(x|0)\allowbreak=\allowbreak b_{n}(x|0)\allowbreak
=\allowbreak0$ for $n\geq3.$
In the sequel, the case $q\allowbreak=\allowbreak0$ will be
understood as the limit $q\rightarrow0$ of the function in question.
\end{remark}
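For the reader who wishes to verify the above statements numerically, here is a small Python sketch (our illustration; it assumes the conventions fixed above) that computes $b_{n}$ from its recurrence and checks both the relation $b_{n}(x|q)=(-1)^{n}q^{\binom{n}{2}}h_{n}(x|q^{-1})$ and identity (\ref{idb}) for small $n$ and $k$.
\begin{verbatim}
def h(n, x, q):                   # q-Hermite via the three-term recurrence
    hm1, h0 = 0.0, 1.0
    for k in range(n):
        hm1, h0 = h0, 2 * x * h0 - (1 - q**k) * hm1
    return h0

def b(n, x, q):                   # b_{n+1} = -2 q^n x b_n + q^(n-1)(1-q^n) b_{n-1}
    bm1, b0 = 0.0, 1.0
    for k in range(n):
        bm1, b0 = b0, -2 * q**k * x * b0 + q**(k - 1) * (1 - q**k) * bm1
    return b0

def q_poch(q, n):                 # (q)_n
    p = 1.0
    for j in range(1, n + 1):
        p *= 1 - q**j
    return p

def q_binom(n, j, q):
    return q_poch(q, n) / (q_poch(q, j) * q_poch(q, n - j))

x, q = 0.3, 0.6
# relation b_n(x|q) = (-1)^n q^{n(n-1)/2} h_n(x|1/q)
for n in range(7):
    assert abs(b(n, x, q) - (-1)**n * q**(n*(n-1)//2) * h(n, x, 1/q)) < 1e-8
# identity (idb), with the sum starting at j = 0
for n in range(4):
    for k in range(5):
        s = sum(q_binom(n, j, q) * b(n - j, x, q) * h(j + k, x, q)
                for j in range(n + 1))
        if k < n:
            assert abs(s) < 1e-8
        else:
            rhs = ((-1)**n * q**(n*(n-1)//2)
                   * q_poch(q, k) / q_poch(q, k - n) * h(k - n, x, q))
            assert abs(s - rhs) < 1e-8
\end{verbatim}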
One can notice that we have
\[
\frac{n!}{(q)_{n}}f_{n}(x|q)\allowbreak=\allowbreak\left. \frac{d^{n}}
{d\rho^{n}}W_{1}^{-1}(x|\rho,q)\right\vert _{\rho=0}.
\]
We have the following lemma.
\begin{lemma}
\label{1-dim}For $\left\vert x\right\vert \leq1,\left\vert q\right\vert <1,$
we have
\begin{align}
d_{n}(x|q)\allowbreak & =\allowbreak b_{n}(x|q),\label{1dimb}\\
f_{n}(x|q) & =h_{n}(x|q). \label{1dimh}
\end{align}
\end{lemma}
\begin{proof}
To prove (\ref{1dimb}) let us recall formula (1.7) of \cite{bms}:
\[
W_{1}(x|\rho,q)=\sum_{j\geq0}\frac{\rho^{j}}{(q)_{j}}b_{j}(x|q).
\]
To get (\ref{1dimh}) we recall (\ref{ch1}). A separate proof is needed for
the case $q\allowbreak=\allowbreak0.$ Then $W_{1}(x|\rho,0)\allowbreak
=\allowbreak v(x|\rho)\allowbreak=\allowbreak1-2x\rho+\rho^{2}$, which,
compared with our definition of the polynomials $b_{n}$ for $q\allowbreak
=\allowbreak0$, shows that (\ref{1dimb}) is true in this case as well.
\end{proof}
Now, following formula (\ref{diff}) adapted to the present situation, we have,
for $\left\vert q\right\vert ,\left\vert \rho\right\vert <1 $ and
$\left\vert x\right\vert \leq1$:
\begin{align*}
\chi_{1}^{t}(x|\rho,q)\allowbreak & =\allowbreak\sum_{j=0}^{\infty}\frac
{\rho^{j}}{(q)_{j}}h_{t+j}(x|q)\allowbreak=\allowbreak\frac{1}{W_{1}
(x|\rho,q)}\\
& \times\sum_{j=0}^{\infty}\rho^{j}\sum_{m=0}^{j}\frac{1}{(j-m)!}
\frac{(j-m)!}{(q)_{j-m}(q)_{m}}b_{j-m}(x|q)h_{m+t}(x|q)\\
& =\allowbreak\frac{1}{W_{1}(x|\rho,q)}\sum_{j=0}^{\infty}\frac{\rho^{j}
}{(q)_{j}}\sum_{m=0}^{j}
\genfrac{[}{]}{0pt}{}{j}{m}
_{q}b_{j-m}(x|q)h_{m+t}(x|q)\\
& =\frac{1}{W_{1}(x|\rho,q)}\sum_{j=0}^{t}
\genfrac{[}{]}{0pt}{}{t}{j}
_{q}(-\rho)^{j}q^{\binom{j}{2}}h_{t-j}(x|q).
\end{align*}
In particular, for $t\allowbreak=\allowbreak0,$ we recover formula
(\ref{ch1}). This can be regarded as yet another proof of this formula, since
we started from (\ref{expW}).
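The closed form just obtained is easy to test numerically. The following Python sketch (only an illustration; the parameter values are arbitrary) compares a truncation of $\chi_{1}^{t}(x|\rho,q)$ with the right-hand side derived above.
\begin{verbatim}
def h(n, x, q):
    hm1, h0 = 0.0, 1.0
    for k in range(n):
        hm1, h0 = h0, 2 * x * h0 - (1 - q**k) * hm1
    return h0

def q_poch(q, n):
    p = 1.0
    for j in range(1, n + 1):
        p *= 1 - q**j
    return p

def q_binom(n, j, q):
    return q_poch(q, n) / (q_poch(q, j) * q_poch(q, n - j))

x, q, rho, t, N = 0.3, 0.5, 0.4, 3, 80
lhs = sum(rho**j / q_poch(q, j) * h(t + j, x, q) for j in range(N))
W1 = 1.0
for k in range(200):                       # truncated product defining W_1
    W1 *= 1 - 2 * rho * q**k * x + (rho * q**k) ** 2
rhs = sum(q_binom(t, j, q) * (-rho)**j * q**(j*(j-1)//2) * h(t - j, x, q)
          for j in range(t + 1)) / W1
print(lhs, rhs)                            # agree up to truncation error
\end{verbatim}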
\subsubsection{Two-dimensional case}
Again, as before, let us denote\newline$\frac{n!}{(q)_{n}}d_{n}^{(2)}
(x,y|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}}W_{2}
(x,y|\rho,q)\right\vert _{\rho=0},$ $\frac{n!}{(q)_{n}}f_{n}^{(2)}
(x,y|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}}W_{2}
^{-1}(x,y|\rho,q)\right\vert _{\rho=0}$, where $W_{2}(x,y|\rho,q)\allowbreak
=\allowbreak\prod_{j=0}^{\infty}w_{2}(x,y|\rho q^{j}),$ with $w_{2}(x,y|a)$
defined by (\ref{w2}).
\begin{lemma}
\label{2-dim}For $\theta,\varphi\in\lbrack0,2\pi),\left\vert q\right\vert <1,$
we have
\begin{align}
d_{n}^{(2)}(\cos\theta,\cos\varphi|q)\allowbreak & =\allowbreak\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}b_{m}(\cos(\theta+\varphi)|q)b_{n-m}(\cos(\theta-\varphi)|q),\label{d2b}\\
f_{n}^{(2)}(\cos\theta,\cos\varphi|q) & =\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q). \label{d2h}
\end{align}
\end{lemma}
\begin{proof}
First of all, notice that $w_{2}(\cos\theta,\cos\varphi|\rho)$ can be
decomposed as
\begin{equation}
w_{2}(\cos\theta,\cos\varphi|\rho)\allowbreak=\allowbreak w_{1}(\cos
(\theta+\varphi)|\rho)\,w_{1}(\cos(\theta-\varphi)|\rho) \label{w22}
\end{equation}
hence, taking into account the Leibniz rule, we get:
\begin{gather*}
d_{n}^{(2)}(x,y|q)=\frac{(q)_{n}}{n!}\left. \frac{d^{n}}{d\rho^{n}}
(W_{1}(\cos(\theta+\varphi)|\rho,q)W_{1}(\cos(\theta-\varphi)|\rho
,q))\right\vert _{\rho=0}\\
=\frac{(q)_{n}}{n!}\sum_{m=0}^{n}\binom{n}{m}\left. \frac{d^{m}}{d\rho^{m}
}W_{1}(\cos(\theta+\varphi)|\rho,q)\right\vert _{\rho=0}\left. \frac
{d^{n-m}}{d\rho^{n-m}}W_{1}(\cos(\theta-\varphi)|\rho,q)\right\vert _{\rho=0}\\
=\frac{(q)_{n}}{n!}\sum_{m=0}^{n}\binom{n}{m}\frac{m!}{(q)_{m}}b_{m}
(\cos(\theta+\varphi)|q)\frac{(n-m)!}{(q)_{n-m}}b_{n-m}(\cos(\theta
-\varphi)|q).
\end{gather*}
To get (\ref{d2h}), we argue in a similar way using Lemma \ref{1-dim} on the way.
\end{proof}
\begin{theorem}
\label{wazny}We have, for all $x,y,q\in\mathbb{R}$ and all $n\geq0:$
\begin{gather}
d_{n}^{(2)}(x,y|q)=\label{con1}\\
(-1)^{n}\sum_{j=0}^{\left\lfloor n/2\right\rfloor }(-1)^{j}q^{-\binom{n-2j}
{2}-j+\binom{j}{2}}\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}b_{n-2j}(x|q)b_{n-2j}
(y|q),\nonumber\\
f_{n}^{(2)}(x,y|q)\allowbreak=\allowbreak\sum_{j=0}^{\left\lfloor
n/2\right\rfloor }\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}h_{n-2j}(x|q)h_{n-2j}(y|q).
\label{con2}
\end{gather}
\end{theorem}
\begin{proof}
The proof is deferred to Section \ref{dow}.
\end{proof}
\begin{remark}
Notice that, in accordance with our agreement that the case $q\allowbreak
=\allowbreak0$ will be understood as the limit when $q\rightarrow0,$ we have
$d_{0}^{(2)}(x,y|0)\allowbreak=\allowbreak1,$ $d_{1}^{(2)}(x,y|0)\allowbreak
=\allowbreak-4xy$, $d_{2}^{(2)}(x,y|0)\allowbreak=\allowbreak4(x^{2}
+y^{2})-2,$ $d_{3}^{(2)}(x,y|0)\allowbreak=\allowbreak-4xy,$ $d_{4}
^{(2)}(x,y|0)\allowbreak=\allowbreak1$, $d_{n}^{(2)}(x,y|0)\allowbreak
=\allowbreak0$ for all $n\geq5$.
\end{remark}
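A quick numerical sanity check of (\ref{con2}), combined with (\ref{d2h}), can be carried out with the following Python sketch (only an illustration; the parameter values are arbitrary).
\begin{verbatim}
import math

def h(n, x, q):
    hm1, h0 = 0.0, 1.0
    for k in range(n):
        hm1, h0 = h0, 2 * x * h0 - (1 - q**k) * hm1
    return h0

def q_poch(q, n):
    p = 1.0
    for j in range(1, n + 1):
        p *= 1 - q**j
    return p

def q_binom(n, m, q):
    return q_poch(q, n) / (q_poch(q, m) * q_poch(q, n - m))

theta, phi, q, n = 0.7, 1.1, 0.4, 6
x, y = math.cos(theta), math.cos(phi)
lhs = sum(q_binom(n, m, q)
          * h(m, math.cos(theta + phi), q) * h(n - m, math.cos(theta - phi), q)
          for m in range(n + 1))
rhs = sum(q_poch(q, n) / (q_poch(q, j) * q_poch(q, n - 2 * j))
          * h(n - 2 * j, x, q) * h(n - 2 * j, y, q)
          for j in range(n // 2 + 1))
print(lhs, rhs)   # the two values agree up to rounding
\end{verbatim}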
As a corollary we get the following interesting, nontrivial identities involving
the polynomials $\left\{ b_{n}\right\} $ and $\left\{ h_{n}\right\} .$
\begin{corollary}
For all complex $x,y,q,$ $k\geq0$ and $t,s\in\mathbb{N}\cup\{0\},$ we get
\begin{equation}
\sum_{m=0}^{k}
\genfrac{[}{]}{0pt}{}{k}{m}
_{q}d_{m}^{(2)}(x,y|q)h_{k-m+t}(x|q)h_{k-m+s}(y|q)=P_{t,s}^{(k)}(x,y|q)
\label{sumk}
\end{equation}
where $P_{t,s}^{(k)}(x,y|q)$ is a polynomial of degree at most $t+s$ in $x$ and $y.$
In particular, we have
\begin{equation}
\sum_{m=0}^{k}
\genfrac{[}{]}{0pt}{}{k}{m}
_{q}d_{m}^{(2)}(x,y|q)h_{k-m}(x|q)h_{k-m}(y|q)\allowbreak=\left\{
\begin{array}
[c]{ccc}
0 & if & k\text{ is odd}\\
(-1)^{l}q^{\binom{l}{2}}(q^{l+1})_{l} & if & k=2l
\end{array}
\right. . \label{00k}
\end{equation}
\end{corollary}
\begin{proof}
Knowing that
\[
\sum_{j=0}^{\infty}\frac{\rho^{j}}{(q)_{j}}h_{j+t}(x|q)h_{j+s}(y|q)\allowbreak
=\allowbreak\frac{(\rho^{2})_{\infty}V_{t,s}(x,y|\rho,q)}{W_{2}(x,y|\rho,q)},
\]
for $t,s\in\mathbb{N}\cup\{0\}$, which is a modification of the formula given
in assertion i) of Lemma 3 in \cite{SzablAW}, where $V_{t,s}(x,y|\rho,q)$
denotes a certain polynomial of degree $t+s$ in $x$ and $y$. Using our expansion
of $W_{2}(x,y|\rho,q)$ and then applying the Cauchy multiplication of series, we get
the identity
\begin{equation}
\sum_{j=0}^{\infty}\frac{\rho^{j}}{(q)_{j}}\sum_{m=0}^{j}
\genfrac{[}{]}{0pt}{}{j}{m}
_{q}
d_{m}^{(2)}(x,y|q)h_{j-m+t}(x|q)h_{j-m+s}(y|q)\allowbreak=\allowbreak
V_{t,s}(x,y|\rho,q)(\rho^{2})_{\infty}, \label{id2wym}
\end{equation}
true for all $\left\vert x\right\vert ,\left\vert y\right\vert \leq1,$
$\left\vert \rho\right\vert ,\left\vert q\right\vert <1.$ Now, knowing the form
of the polynomial $V_{t,s}$ given either in \cite{SzablAW}, \cite{SzabP-M} or
\cite{Szab-rev}, we deduce that the expansion of the polynomial $V_{t,s}$ in
powers of $\rho$ is a power series in $\rho$ whose coefficients are
polynomials in $x$ and $y$ of degree at most $t+s$; indeed, a linear
combination of polynomials of degree at most $t+s$ is again a polynomial of
degree at most $t+s$. A similar argument applies to the product
$V_{t,s}(x,y|\rho,q)(\rho^{2})_{\infty}.$ Now, comparing the coefficients of
the powers of $\rho$ on the two sides of (\ref{id2wym}), one proves the first
part of the statement.
Finally, knowing that $V_{0,0}\allowbreak=\allowbreak1$, expanding $\left(
\rho^{2}\right) _{\infty}$ in the standard way and comparing the
coefficients of equal powers of $\rho$, we arrive at (\ref{00k}).
\end{proof}
\section{Proofs\label{dow}}
\begin{proof}
[Proof of Proposition \ref{ilocz}.]We will use the well-known formulae for
the products of sines and cosines. The proof is by induction. For
$n\allowbreak=\allowbreak1$: in the case of (\ref{cos}) with $k\allowbreak
=\allowbreak0$ we have $\cos(\alpha)\allowbreak
=\allowbreak\frac{1}{2}(\cos(\alpha)+\cos(-\alpha))$, while in the case of
(\ref{sin}) with $k\allowbreak=\allowbreak1$ we get
\begin{gather*}
\sin(\alpha_{1})\cos(\alpha_{2})=\frac{-1}{4}(\sin(-\alpha_{1}-\alpha
_{2})\allowbreak+\allowbreak\sin(-\alpha_{1}+\alpha_{2})-\sin(\alpha
_{1}-\alpha_{2})-\sin\left( \alpha_{1}+\alpha_{2})\right) \\
=\frac{1}{2}(\sin(\alpha_{1}+\alpha_{2})+\sin(\alpha_{1}-\alpha_{2})).
\end{gather*}
Now let us assume that the assertions are true for $n\allowbreak=\allowbreak m.$
In the case of the first one, we have
\begin{gather*}
\prod_{j=1}^{m+1}\cos(\xi_{j})\allowbreak=\allowbreak\cos(\xi_{m+1}
)\prod_{j=1}^{m}\cos(\xi_{j})\allowbreak=\allowbreak\\
\frac{1}{2^{m}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}\cos
(\sum_{k=1}^{m}i_{k}\xi_{k})\cos(\xi_{m+1})\allowbreak\\
=\allowbreak\frac{1}{2^{m+1}}\allowbreak\times\allowbreak\sum
_{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(\cos(\sum_{k=1}^{m}i_{k}\xi
_{k}+\xi_{m+1})\allowbreak+\allowbreak\cos(\sum_{k=1}^{m}i_{k}\xi_{k}
-\xi_{m+1})).
\end{gather*}
Along the way we used the fact that $\cos(\alpha)\cos(\beta)\allowbreak
=\allowbreak(\cos(\alpha-\beta)+\cos(\alpha+\beta))/2.$ Let us also observe
that the product $\prod_{j=1}^{m}\cos(\xi_{j})$ equals $2^{-(m-1)}$ times a
sum of cosines of certain linear combinations of the arguments $\xi_{j},$
$j\allowbreak=\allowbreak1,\ldots,m.$
In the case of the second one we first consider the case of $k\allowbreak
=\allowbreak0$. Assuming that $m$ is even we get:
\begin{gather*}
\prod_{j=1}^{m+1}\sin(\xi_{j})=\allowbreak\sin(\xi_{m+1})\prod_{j=1}^{m}
\sin(\xi_{j})\allowbreak=\allowbreak(-1)^{m/2}\frac{1}{2^{m}}\allowbreak
\times\allowbreak\\
\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m}
(i_{k}+1)/2}\cos(\sum_{k=1}^{m}i_{k}\xi_{k})\sin(\xi_{m+1})\allowbreak\\
=(-1)^{m/2}\frac{1}{2^{m+1}}\allowbreak\sum_{i_{1}\in\{-1,1\}}...\sum
_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m}(i_{k}+1)/2}\allowbreak\times\\
(\sin(\sum_{k=1}^{m}i_{k}\xi_{k}\allowbreak+\allowbreak\xi_{m+1}
)\allowbreak-\allowbreak\sin(\sum_{k=1}^{m}i_{k}\xi_{k}\allowbreak
-\allowbreak\xi_{m+1}))\allowbreak=\\
-(-1)^{m/2}\frac{1}{2^{m+1}}\sum_{i_{m+1}\in\{-1\}}\sum_{i_{1}\in
\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m+1}(i_{k}+1)/2}\sin
(\sum_{k=1}^{m+1}i_{k}\xi_{k})\allowbreak\\
-(-1)^{m/2}\frac{1}{2^{m+1}}\sum_{i_{m+1}\in\{1\}}\sum_{i_{1}\in
\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m+1}(i_{k}+1)/2}\sin
(\sum_{k=1}^{m+1}i_{k}\xi_{k}).
\end{gather*}
We used the fact that $\sin(\alpha)\cos(\beta
)\allowbreak=\allowbreak(\sin(\alpha-\beta)+\sin(\alpha+\beta))/2.$ The case
of $m$ odd is treated in a similar way.
Now, to treat the general case, we expand both products of sines and cosines.
\end{proof}
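Since the statement of Proposition \ref{ilocz} is not repeated here, the following Python sketch (an illustration only) checks numerically the two expansions used in the proof above, namely the expansion of a product of cosines and, for an even number of factors, the expansion of a product of sines.
\begin{verbatim}
import math, itertools, random

def prod_cos_expansion(xs):
    """Sum-of-cosines expansion of prod cos(xi_j) used in the proof."""
    m = len(xs)
    total = 0.0
    for signs in itertools.product((-1, 1), repeat=m):
        total += math.cos(sum(i * x for i, x in zip(signs, xs)))
    return total / 2**m

def prod_sin_expansion(xs):
    """Expansion of prod sin(xi_j) for an even number m of factors."""
    m = len(xs)
    assert m % 2 == 0
    total = 0.0
    for signs in itertools.product((-1, 1), repeat=m):
        sign = (-1) ** sum((i + 1) // 2 for i in signs)
        total += sign * math.cos(sum(i * x for i, x in zip(signs, xs)))
    return (-1) ** (m // 2) * total / 2**m

xs = [random.uniform(0, math.pi) for _ in range(4)]
assert abs(math.prod(math.cos(x) for x in xs) - prod_cos_expansion(xs)) < 1e-12
assert abs(math.prod(math.sin(x) for x in xs) - prod_sin_expansion(xs)) < 1e-12
\end{verbatim}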
\begin{proof}
[Proof of Lemma \ref{aux1}](\ref{ksc}) Using Euler's identity $\cos
(\theta)\allowbreak=\allowbreak(e^{i\theta}\allowbreak+\allowbreak
e^{-i\theta})/2$ we get
\[
\cos(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\exp
(i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak+\allowbreak\exp
(-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/2.
\]
So
\[
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\exp(i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak=\allowbreak\frac
{1}{2}\exp(i\beta)\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(i\alpha_{j})}.
\]
Similarly:
\[
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\exp(-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak=\allowbreak\frac
{1}{2}\exp(-i\beta)\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(-i\alpha_{j})}.
\]
Thus
\begin{gather*}
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\cos(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\\
=\frac{\exp(i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(-i\alpha_{j}))+\exp
(-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))}{2\prod_{j=1}^{n}
(1+\rho_{j}^{2}-2\rho_{j}\cos(\alpha_{j}))}.
\end{gather*}
Now, notice that
\begin{gather*}
\exp(-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))\allowbreak=\\
\allowbreak\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}}\prod_{k\in
M_{j,n}}\rho_{k}\exp(-i\beta+i\sum_{k\in M_{j,n}}\alpha_{k}).
\end{gather*}
To verify (\ref{kss}), we use the fact that $\sin(\theta)\allowbreak
=\allowbreak(e^{i\theta}\allowbreak-\allowbreak e^{-i\theta})/(2i)$, getting
\[
\sin(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\exp
(i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/2i\allowbreak-\allowbreak
\exp(-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/2i.
\]
So we have:
\[
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\exp(i\beta+i\sum_{j=1}^{n}k_{j}\alpha_{j})/2i\allowbreak=\allowbreak
\exp(i\beta)\frac{1}{2i}\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(i\alpha_{j})}.
\]
Similarly we get\newline
\[
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\exp(-i\beta-i\sum_{j=1}^{n}k_{j}\alpha_{j})/2i\allowbreak=\allowbreak
\exp(-i\beta)\frac{1}{2i}\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(-i\alpha_{j}
)}.
\]
So
\begin{gather*}
\sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}}
)\sin(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\\
\frac{1}{2i}\frac{\exp(i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(-i\alpha
_{j}))-\exp(-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))}{\prod
_{j=1}^{n}(1+\rho_{j}^{2}-2\rho_{j}\cos(\alpha_{j}))}\allowbreak.
\end{gather*}
\end{proof}
\begin{proof}
[Proof of Theorem \ref{main}]The proof is based on the following observations.
The first is that we convert the products of Chebyshev polynomials to products of
$\sin(j\alpha_{s}+(t_{s}+1)\alpha_{s})$ and $\cos(j\alpha_{s}+t_{s}\alpha
_{s})$ according to (\ref{Czebysz}). Secondly, we change these products to sums
of either cosines (if $n$ is even or zero) or sines (if $n$ is odd), according to
the assertion of Proposition \ref{ilocz}. The arguments of these sines and
cosines are linear combinations of the arguments of the sines and cosines that
participated in the products. The coefficients of these linear
combinations are $j\geq0$ and $i_{m}\in\left\{ -1,1\right\} ,$
$m\allowbreak=\allowbreak1,\ldots,n+k.$ Thus we can sum first with respect to
$j$ and apply Corollary \ref{suma}. There the r\^{o}le of $\alpha$ is played by
$\sum_{s=1}^{k+n}i_{s}\alpha_{s}$ for the chosen combination of $i$'s,
while the r\^{o}le of $\beta$ is played by the similar combination $\sum_{s=1}^{n}i_{s}
(t_{s}+1)\alpha_{s}\allowbreak+\allowbreak\sum_{s=n+1}^{n+k}i_{s}t_{s}
\alpha_{s}.$ The point is that the sum of such sines or cosines with respect
to $j$ is a ratio of two trigonometric expressions. Moreover, all the
expressions in the denominators depend only on $\sum_{s=1}^{k+n}i_{s}
\alpha_{s},$ i.e. they do not depend on the indices $t_{s}$ (note that the
denominators of the sums in Corollary \ref{suma} do not depend on $\beta$). For $\alpha_{s}
\in\mathbb{R}$, $t_{s}\in\mathbb{Z}$, $s=1,...,n+k$, $\left\vert
\rho\right\vert <1$ we have, depending on the parity of $n$, the following equations.
If $n$ is odd then,
\begin{gather}
\sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}U_{j+t_{s}}(\cos(\alpha_{s}))\prod
_{s=n+1}^{n+k}T_{j+t_{s}}(\cos(\alpha_{s}))=\label{si}\\
\frac{(-1)^{(n+1)/2}}{2^{n+k}\prod_{i=1}^{n}\sin(\alpha_{i})}\sum_{i_{1}
\in\{-1,1\}}...\sum_{i_{n+k}\in\{-1,1\}}(-1)^{\sum_{k=1}^{n}(i_{k}+1)/2}
\times\nonumber\\
\frac{(\sin(\sum_{s=1}^{n}i_{s}(t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k}
i_{s}t_{s}\alpha_{s})-\rho\sin(\sum_{s=1}^{n}i_{s}t_{s}\alpha_{s}+\sum
_{s=n+1}^{n+k}i_{s}(t_{s}-1)\alpha_{s}))}{(1-2\rho\cos(\sum_{s=1}^{n+k}
i_{s}\alpha_{s})+\rho^{2})},\nonumber
\end{gather}
while, when $n$ is even or zero, we get:
\begin{gather}
\sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}U_{j+t_{s}}(\cos(\alpha_{s}))\prod
_{s=n+1}^{n+k}T_{j+t_{s}}(\cos(\alpha_{s}))=\label{chi}\\
\frac{(-1)^{n/2}}{2^{n+k}\prod_{i=1}^{n}\sin(\alpha_{i})}\sum_{i_{1}
\in\{-1,1\}}...\sum_{i_{n+k}\in\{-1,1\}}(-1)^{\sum_{k=1}^{n}(i_{k}+1)/2}
\times\nonumber\\
\frac{\cos(\sum_{s=1}^{n}i_{s}(t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k}i_{s}
t_{s}\alpha_{s})-\rho\cos(\sum_{s=1}^{n}i_{s}t_{s}\alpha_{s}+\sum
_{s=n+1}^{n+k}i_{s}(t_{s}-1)\alpha_{s})}{(1-2\rho\cos(\sum_{s=1}^{n+k}
i_{s}\alpha_{s})+\rho^{2})}.\nonumber
\end{gather}
To justify this, we use (\ref{Czebysz}) first; then, based on Proposition
\ref{ilocz}, we convert the products to sums of sines or cosines (sines if $n$ is
odd, cosines if $n$ is even) with the following arguments:
\begin{gather*}
\sum_{s=1}^{n}l_{s}((j+1)\alpha_{s}+t_{s}\alpha_{s})\allowbreak+\allowbreak
\sum_{s=n+1}^{n+k}l_{s}(j\alpha_{s}+t_{s}\alpha_{s})\allowbreak\\
=\allowbreak j\sum_{s=1}^{n+k}l_{s}\alpha_{s}+\sum_{s=1}^{n}l_{s}
(t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k}l_{s}t_{s}\alpha_{s}.
\end{gather*}
Then we change the order of summation and sum over $j$ first. We identify
``$\alpha$'' with $\sum_{s=1}^{n+k}l_{s}\alpha_{s}$ and ``$\beta$'' with
$\sum_{s=1}^{n}l_{s}(t_{s}+1)\alpha_{s}\allowbreak+\allowbreak\sum
_{s=n+1}^{n+k}l_{s}t_{s}\alpha_{s}$ and apply formula (\ref{s_si}) or
(\ref{s_g_c}), depending on the parity of $n$.
Now let us analyze the polynomial $w_{n}.$ Notice that the denominator in both
(\ref{si}) and (\ref{chi}) is of the form
\begin{gather}
w_{k+n}(\cos(\alpha_{1}),...,\cos(\alpha_{k+n})|\rho)=\label{wkn}\\
\prod_{i_{1}\in\{-1,1\}}...\prod_{i_{k+n}\in\{-1,1\}}(1-2\rho\cos(\sum
_{s=1}^{n+k}i_{s}\alpha_{s})+\rho^{2}).\nonumber
\end{gather}
To get (\ref{wkn}) we argue by induction. Let us replace $n+k$ by $m$ to
avoid confusion. For $m\allowbreak=\allowbreak1$ and $m\allowbreak
=\allowbreak2$ we recall (\ref{pro2}); hence (\ref{wkn}) is true for
$m\allowbreak=\allowbreak1,2.$
Let us assume that the formula is true for $m\allowbreak=\allowbreak k$ and
prove it for $m\allowbreak=\allowbreak k+1.$
Taking $\alpha\allowbreak=\allowbreak\alpha_{k+1}$ and $\beta
\allowbreak=\allowbreak\sum_{s=1}^{k}i_{s}\alpha_{s}$ and noting that
$i_{k}^{2}\allowbreak=\allowbreak1$ we get:
\begin{gather*}
w_{k+1}(\cos(\alpha_{1}),...,\cos(\alpha_{k+1})|\rho)=\\
\prod_{i_{2}\in\{-1,1\}}...\prod_{i_{k}\in\{-1,1\}}((1-2\rho\cos(\sum
_{s=1}^{k-1}i_{s}\alpha_{s}+i_{k}(\alpha_{k}-i_{k}\alpha_{k+1}))+\rho^{2})\\
\times(1-2\rho\cos(\sum_{s=1}^{k-1}i_{s}\alpha_{s}+i_{k}(\alpha_{k}
+i_{k}\alpha_{k+1}))+\rho^{2}))\\
=w_{k}(\cos(\alpha_{1}),...,\cos(\alpha_{k}+\alpha_{k+1})|\rho)w_{k}
(\cos(\alpha_{1}),...,\cos(\alpha_{k}-\alpha_{k+1})|\rho).
\end{gather*}
by the induction assumption. Now it is elementary to see that the polynomials $w_{n}$
satisfy relationship (\ref{rek}). Similarly, the remarks concerning the
symmetry and the degree of the polynomials $w_{n}$ follow directly from (\ref{wkn}).
Now, let us multiply both sides of (\ref{si}) and (\ref{chi}) by
$w_{n+k}(x_{1},...,x_{n+k}|\rho)$. We see that this product is equal to the
right-hand sides of these equalities with the obvious replacement $\cos
(\alpha_{s})\rightarrow x_{s},$ $s=1,...,n+k.$ Inspecting (\ref{si}) and (\ref{chi}), we
notice that these right-hand sides are polynomials of degree $2(2^{n+k-1}
-1)+1\allowbreak=\allowbreak2^{n+k}-1$ in $\rho.$ Thus, these polynomials can
be recovered by using the well-known formula:
\[
p_{n}(x)\allowbreak=\allowbreak\sum_{i=0}^{n}a_{i}x^{i}\allowbreak
=\allowbreak\sum_{j=0}^{n}\frac{x^{j}}{j!}\left. \frac{d^{j}}{dx^{j}}
p_{n}(x)\right\vert _{x=0}.
\]
This leads directly to the differentiation of the product of $w_{n+k}
(x_{1},...,x_{n+k}|\rho)$ and the right-hand side of (\ref{_ktnu}). Now we apply
the Leibniz formula:
\[
\left. \frac{d^{n}}{dx^{n}}[f(x)g(x)]\right\vert _{x=0}=\sum_{j=0}^{n}
\binom{n}{j}\left. \frac{d^{j}}{dx^{j}}f(x)\right\vert _{x=0}\left.
\frac{d^{n-j}}{dx^{n-j}}g(x)\right\vert _{x=0}.
\]
and notice that
\[
\left. \frac{d^{k}}{d\rho^{k}}\sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}T_{j+t_{s}
}(x_{s})\prod_{s=1+n}^{n+k}U_{j+t_{s}}(x_{s})\right\vert _{\rho=0}
=k!\prod_{s=1}^{n}T_{k+t_{s}}(x_{s})\prod_{s=1+n}^{n+k}U_{k+t_{s}}(x_{s}).
\]
Having this we get directly (\ref{formula}).
\end{proof}
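Formulae (\ref{si}) and (\ref{chi}) are easy to test numerically. The following Python sketch (an illustration only; the truncation length and parameter values are arbitrary) compares a truncation of the left-hand side of (\ref{chi}) with its right-hand side for $n=2$ and $k=1$.
\begin{verbatim}
import math, itertools

def T(m, a):                      # Chebyshev T_m(cos a) = cos(m a)
    return math.cos(m * a)

def U(m, a):                      # Chebyshev U_m(cos a) = sin((m+1)a)/sin(a)
    return math.sin((m + 1) * a) / math.sin(a)

def lhs(rho, alphas, ts, n, k, N=400):
    return sum(rho**j
               * math.prod(U(j + ts[s], alphas[s]) for s in range(n))
               * math.prod(T(j + ts[s], alphas[s]) for s in range(n, n + k))
               for j in range(N))

def rhs(rho, alphas, ts, n, k):
    pref = ((-1) ** (n // 2)
            / (2 ** (n + k) * math.prod(math.sin(alphas[s]) for s in range(n))))
    total = 0.0
    for signs in itertools.product((-1, 1), repeat=n + k):
        sf = (-1) ** sum((signs[s] + 1) // 2 for s in range(n))
        a1 = (sum(signs[s] * (ts[s] + 1) * alphas[s] for s in range(n))
              + sum(signs[s] * ts[s] * alphas[s] for s in range(n, n + k)))
        a2 = (sum(signs[s] * ts[s] * alphas[s] for s in range(n))
              + sum(signs[s] * (ts[s] - 1) * alphas[s] for s in range(n, n + k)))
        den = (1 - 2 * rho * math.cos(sum(signs[s] * alphas[s]
                                          for s in range(n + k))) + rho**2)
        total += sf * (math.cos(a1) - rho * math.cos(a2)) / den
    return pref * total

n, k = 2, 1
alphas, ts, rho = [0.7, 1.3, 0.4], [1, 0, 2], 0.35
print(lhs(rho, alphas, ts, n, k), rhs(rho, alphas, ts, n, k))  # should agree
\end{verbatim}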
\begin{proof}
[Proof of Theorem \ref{wazny}]The proof consists of several steps. First,
we prove that for all $\theta,\varphi\in\mathbb{R}$ we have
\begin{equation}
\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q)=\sum
_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}
}h_{n-2j}(\cos\theta|q)h_{n-2j}(\cos\varphi|q). \label{exKM}
\end{equation}
This formula follows, firstly from the fact that we have
\[
\left. \frac{d^{n}}{d\rho^{n}}W_{1}^{-1}(x|\rho,q)\right\vert _{\rho=0}
=\frac{n!}{(q)_{n}}h_{n}(x|q),
\]
which follows directly from (\ref{ch1}). Secondly, arguing in a similar way
as in the proof of Lemma \ref{2-dim}, we deduce that
\begin{align*}
& \left. \frac{d^{n}}{d\rho^{n}}W_{1}^{-1}(\cos(\theta+\varphi)|\rho
,q)W_{1}^{-1}(\cos(\theta-\varphi)|\rho,q)\right\vert _{\rho=0}\\
& =\frac{n!}{(q)_{n}}\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q).
\end{align*}
Thirdly, we notice that
\[
\frac{1}{W_{1}(\cos(\theta+\varphi)|\rho,q)W_{1}(\cos(\theta-\varphi)|\rho
,q)}\allowbreak=\allowbreak\frac{1}{W_{2}(\cos(\theta),\cos(\varphi)|\rho
,q)},
\]
which follows directly from (\ref{w22}).
Now, let us calculate
\[
\sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}}\sum_{j=0}^{\left\lfloor n/2\right\rfloor
}\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}h_{n-2j}(\cos(\theta)|q)h_{n-2j}
(\cos(\varphi)|q).
\]
After changing the order of summation, we get
\[
\sum_{j\geq0}\frac{\rho^{2j}}{(q)_{j}}\sum_{n\geq2j}\frac{\rho^{n-2j}
}{(q)_{n-2j}}h_{n-2j}(\cos(\theta)|q)h_{n-2j}(\cos(\varphi)|q)\allowbreak
=\allowbreak\frac{1}{(\rho^{2})_{\infty}}\frac{(\rho^{2})_{\infty}}{W_{2}
(\cos(\theta),\cos(\varphi)|\rho,q)},
\]
by the $q$-binomial theorem and the Poisson--Mehler summation formula. Thus we have proved
(\ref{exKM}) as well as (\ref{con2}) at least for $\left\vert q\right\vert
<1.$ The formula can easily be extended to all values of $q\neq1$ since both
sides are polynomials in $q$. Similarly, we can extend it to all values of $x$
and $y$ by substituting $x$ for $\cos(\theta)$ and $y$ for $\cos(\varphi).$ Now,
having proven (\ref{exKM}), we recall the polynomials
$b_{n}(x|q)$ defined before Lemma \ref{1-dim} above. Recall also that
\[
(\frac{1}{q}|\frac{1}{q})_{n}=(-1)^{n}q^{-\binom{n+1}{2}}(q)_{n},
\]
and consequently that we have:
\[
\genfrac{[}{]}{0pt}{}{n}{j}
_{1/q}=
\genfrac{[}{]}{0pt}{}{n}{j}
_{q}q^{-j(n-j)}.
\]
Hence, for the left hand side of (\ref{exKM}), we have after changing $q$ to
$1/q$
\begin{gather*}
\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{1/q}h_{m}(\cos(\theta+\varphi)|\frac{1}{q})h_{n-m}(\cos(\theta
-\varphi)|\frac{1}{q})\\
=\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}q^{-m(n-m)}(-1)^{m}q^{-\binom{m}{2}}\\
\times b_{m}(\cos(\theta+\varphi)|q)(-1)^{n-m}q^{-\binom{n-m}{2}}b_{n-m}
(\cos(\theta-\varphi)|q)\\
=(-1)^{n}q^{-\binom{n}{2}}\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}b_{m}(\cos(\theta+\varphi)|q)b_{n-m}(\cos(\theta-\varphi)|q).
\end{gather*}
Now let us consider the right-hand side of (\ref{exKM}) and change $q$ to
$1/q.$ We have
\begin{gather*}
\sum_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q^{-1}|q^{-1})_{n}}
{(q^{-1}|q^{-1})_{j}(q^{-1}|q^{-1})_{n-2j}}h_{n-2j}(x|q^{-1})h_{n-2j}
(y|q^{-1})\\
=\sum_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q)_{n}(-1)^{n}
q^{-\binom{n+1}{2}}}{(q)_{j}(-1)^{j}q^{-\binom{j+1}{2}}(q)_{n-2j}
(-1)^{n-2j}q^{-\binom{n-2j+1}{2}}}(-1)^{n-2j}\\
\times q^{-\binom{n-2j}{2}}b_{n-2j}(x|q)(-1)^{n-2j}q^{-\binom{n-2j}{2}
}b_{n-2j}(y|q).
\end{gather*}
We deduce that (\ref{con1}) is true since we have $\binom{n}{2}+n\allowbreak
=\allowbreak\binom{n+1}{2}.$
\end{proof}
\end{document}
\begin{document}
\title{Improved Local Search Based Approximation Algorithm for Hard Uniform Capacitated k-Median Problems}
\maketitle
\begin{center}
\author{Neelima Gupta$^1$,}
\author{Aditya Pancholi$^2$}
\end{center}
\begin{enumerate}
\item {Department of Computer Science, University of Delhi, India.\\
\texttt{[email protected]}}
\item {Department of Computer Science, University of Delhi, India.\\
\texttt{[email protected]}}
\end{enumerate}
\begin{abstract}
In this paper, we study the hard uniform capacitated $k$- median problem
using a local search heuristic. Obtaining a constant factor approximation for the problem is open. All the existing solutions giving a constant-factor approximation violate at least one of the constraints (\emph{cardinality}/\emph{capacity}). All except Korupolu \textit{et~al}.~ \cite{KPR} are based on LP-relaxation.
We give a $(3+\epsilon)$ factor approximation algorithm for the problem, violating the cardinality by a factor of $8/3 \approx 2.67$.
There is a trade-off between the approximation factor and the cardinality violation between our work and the existing work.
Korupolu \textit{et~al}.~ \cite{KPR} gave a $(1 + \alpha)$ approximation factor with a $(5 + 5/\alpha)$ factor loss in cardinality using the local search paradigm. Though the approximation factor can be made arbitrarily close to $1$, the cardinality loss is at least $5$.
On the other hand, we improve upon the result of Aardal \textit{et~al}.~ \cite{capkmGijswijtL2013} in terms of factor loss. They gave a $(7+\epsilon)$ factor approximation, violating the cardinality by a factor of $2$.
Most importantly, their result is obtained using LP-rounding, whereas local search techniques are straightforward, simple to apply and have been shown to perform well in practice via empirical studies.
We extend the result to hard uniform capacitated $k$-median with penalties.
To the best of our knowledge, ours is the first result for the problem.
\end{abstract}
\section{Introduction}
The $k$-median problem is one of the extensively studied problems in the literature {\cite{capkmGijswijtL2013, Archer03lagrangianrelaxation, kmedian, arya,Byrkaesa2015, charikar,Charikar:1999,jain2002new, Jain:2001,li2013approximating}}.
The problem is known to be NP-hard.
The input instance consists of a set ${\cal F}$ of facilities, a set ${\cal C}$ of clients, a non-negative integer $k$
and a non-negative cost function $c$ defining the cost of connecting clients to facilities. {The metric version of the problem assumes that $c$ is symmetric and satisfies the triangle inequality}. The goal is to select a subset ${\cal S} \subseteq {\cal F}$ as centers with $|{\cal S}| \leq k$ (cardinality constraint) and to assign clients to them such that the total cost of serving the clients from the centers is minimized. In the {\em capacitated} version of the problem, we are also given a bound $u_i$ on the maximum number of clients that facility $i$ can serve.
{The {\em soft} capacity version allows a facility to be opened any number of times, whereas the {\em hard} capacity version restricts the facilities to be opened at most once.} In $k$-median with penalties, each client $j$ has an associated penalty $p_j$ and we are allowed not to serve some clients at the cost of paying penalties for them. In this paper, we address the hard-capacitated $k$-median ($\text{CkM}$) problem and its penalty variant ($\text{CkM}p$), when the capacities are uniform, $\textit{i}.\textit{e}.$ $u_i = \textit{U}$ for all $i \in {\cal F}$. Our results are stated in Theorems~\ref{theo-ckm} and~\ref{theo-ckmpen}. For these problems, we define an $(a,b)$-approximation algorithm as a polynomial-time algorithm that computes a solution using at most $bk$ facilities with cost at most $a$ times the cost of an optimal solution using at most $k$ facilities.
\begin{theorem}
\label{theo-ckm}
There is a polynomial time local search heuristic that approximates the hard uniform capacitated $k$-median problem within a $(3 + \epsilon)$ factor of the optimal, violating the cardinality by a factor of $\frac{8}{3}$.
\end{theorem}
In contrast to the LP-based algorithms, the local search technique is known to be straightforward, simple to apply, and has been shown to perform well in practice via empirical studies \cite{KPR,KH:warehouse}.
The power of the local search technique over LP-based algorithms is well exhibited by the fact that there are constant-factor approximations ($3$ for uniform and $5$ for non-uniform capacities) \cite{Aggarwal,Bansal} for the capacitated facility location problem, whereas the natural LP is known to have an unbounded integrality gap.
On the other hand, local search heuristics are
notoriously hard to analyze. This is evident from the fact that the only known work based on local search heuristics for $\text{CkM}$ is due to Korupolu \textit{et~al}.~ \cite{KPR}, more than $15$ years ago.
Our work provides a trade-off between the approximation factor and the cardinality violation with respect to the existing work. Korupolu \textit{et~al}.~ \cite{KPR} gave a $(1 + \alpha)$ approximation factor with a $(5 + 5/\alpha)$ factor loss in cardinality using the local search paradigm. Though the approximation factor can be made arbitrarily close to $1$, the cardinality loss is at least $5$.
A small approximation factor is obtained at a big loss in cardinality.
For example, for any $\alpha$ less than $1$, the cardinality violation is more than $10$.
To achieve a $3$ factor approximation using their heuristic, the cardinality violation is $7.5$. Thus, we improve upon their result in terms of cardinality.
On the other hand, we improve upon the results in \cite{capkmGijswijtL2013,capkmshili2014,Lisoda2016} in terms of factor loss, though the cardinality loss is a little more in our case. Aardal \textit{et~al}.~ \cite{capkmGijswijtL2013} gave a $(7+\epsilon)$ factor approximation, violating the cardinality by a factor of $2$, using LP rounding. An $O(1/\epsilon^2)$ factor approximation is given in \cite{capkmshili2014,Lisoda2016}, violating the cardinality by a factor of $(1 + \epsilon)$, using sophisticated strengthened LPs.
\begin{theorem}
\label{theo-ckmpen}
There is a polynomial time local search heuristic that approximates the hard uniform capacitated $k$-median problem with penalties within a $(3 + \epsilon)$ factor of the optimal, violating the cardinality by a factor of $\frac{8}{3}$.
\end{theorem}
To the best of our knowledge, no result is known for $\text{CkM}p$.
\subsection{Related Work}
Both LP-based algorithms as well as local search heuristics have been used to obtain good approximation algorithms for the (uncapacitated) $k$-median problem {\cite{capkmGijswijtL2013, Archer03lagrangianrelaxation, kmedian, arya,Byrkaesa2015, charikar,Charikar:1999,jain2002new, Jain:2001,li2013approximating}}. The best known factor of $2.611 + \epsilon$ was given by Byrka \textit{et~al}.~ \cite{Byrkaesa2015}.
Obtaining a constant approximation factor for $\text{CkM}$ is an open problem. Natural LP is known to have an unbounded integrality gap when one of the constraints (cardinality/capacity) is allowed to be violated by a factor of less than $2$ without violating the other constraint, even for uniform capacities.
Several constant factor approximations are known~\cite{capkmByrkaFRS2013,charikar,Charikar:1999,capkmshanfeili2014,Groveretal2016} for the problem that violate the capacities by a factor of $2$ or more.
A $(7 + \epsilon)$ algorithm was given by Aardal \textit{et~al}.~ ~\cite{capkmGijswijtL2013}, violating the cardinality constraint by a factor of $2$.
Korupolu \textit{et~al}.~ \cite{KPR} gave a $(1 + \alpha)$ approximation factor with a $(5 + 5/\alpha)$ factor loss in cardinality.
Very recently, Byrka \textit{et~al}.~ ~\cite{ByrkaRybicki2015} broke the barrier of $2$ in capacities and gave an $O(1/\epsilon^2)$ approximation violating capacities by a factor of $(1 + \epsilon)$ factor for uniform capacities. For non-uniform capacities, a similar result has been obtained by Demirci \textit{et~al}.~\ in ~\cite{Demirci2016}.
Li~\cite{capkmshili2014,Lisoda2016} strengthened the LP to break the barrier of $2$ in cardinality and gave an $O((1/\epsilon^2) \log (1/\epsilon))$ approximation using at most $(1 + \epsilon) k$ facilities.
Though the algorithm violates the cardinality only by $1 + \epsilon$, it introduces a softness bounded by a factor of $2$. The running time of the algorithm is $n^{O(1/\epsilon)}$.
The other commonly used technique for the problem is local search \cite{kmedian,charikar,KPR} with the best factor of $3+ \epsilon$ given by Arya \textit{et~al}.~ \cite{kmedian}. Local search technique has been particularly useful to deal with capacities for the facility location problem~\cite{Chudak, paltree,zhangchenye, gupta2008simpler, vygen,Aggarwal,Bansal}.
Some results are known for the penalty variant of the (uncapacitated) facility location problem, TSP and Steiner network problems~\cite{charikar2001algorithms, Jain, xu2005lp,Xu2017,Goemans:1995:GAT:207985.207993, Bienstock1993}.
For the capacitated variant of the facility location problem with penalties, a $5.83 + \epsilon$ factor approximation for uniform and an $8.532 + \epsilon$ factor for non-uniform capacities were given by Gupta and Gupta in~\cite{guptangupta}. This is the only result known for the problem with both capacities and penalties.
\subsection{High Level Idea}
Let ${\cal S}$ denote any feasible solution. The algorithm performs one of the following operations if it reduces the cost and it halts otherwise.
The local search operations are ${\mathcal S_W}apone{s}{o}; ~s\in {\cal S}, o \in {\cal F}\setminus{\cal S}$ and ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}; ~s_1, s_2 \in {\cal S}, o_1,o_2 \in {\cal F}\setminus{\cal S}$. Given a set of open facilities, min-cost flow problem is solved to obtain the optimal assignments of clients to opened facilities.
To define the swaps and the reassignments for the purpose of analysis, we extend the ideas of Arya \textit{et~al}.~ \cite{kmedian}. Swaps are defined so that every facility in the optimal solution is swapped in at least once and at most thrice, whereas every facility in our locally optimal solution is swapped out at most thrice.
When a facility in our locally optimal solution is swapped out, some of its clients are reassigned to other facilities in our solution via a mapping similar to the one defined in \cite{kmedian}. However, for the capacitated case, the mapping needs to be done a little carefully. An almost fully utilized facility may not be able to accommodate all the clients mapped to it and, conversely, a partially utilized facility may not be able to accommodate the load of an almost fully utilized facility. To address this concern, we partition the facilities of our locally optimal solution into {\em{heavy}} (denoted by ${\mathcal S_H}$) and {\em{light}} (denoted by ${\mathcal S_L}$). A facility is said to be {\em{heavy}} if it serves more than $3\textit{U}/5$ clients in our solution and is called {\em{light}} otherwise. Heavy facilities participate neither in the swaps nor in the mapping.
Thus, the mapping is defined between the clients of light facilities only. We allow opening $\frac{8}{3}k$ facilities in our solution so that we have at least $k$ light facilities.
There are two situations in which we may not be able to define a feasible mapping between the clients of two light facilities. The first situation is explained as follows: let ${\cal O}$ denote some optimal solution and let $\mathcal{M}_o$ be the number of clients that a facility $o \in {\cal O}$ shares with the light facilities of our solution. Not all the clients of a facility $s \in {\mathcal S_L}$ can be mapped to clients of other facilities of our solution if $s$ shares more than $\mathcal{M}_o/2$ clients with $o$. The second situation arises when $s$ shares more than $2\textit{U}/5$ clients with $o$. In this case, a mapping may be possible but it may not be feasible, as the other facility $s'$, to which its clients are mapped, may not have sufficient available capacity to accommodate the clients of $s$. We say that $s$ {\em dominates} $o$ in the first case and that $s$ {\em covers} $o$ in the second case.
Although a facility $s \in {\mathcal S_L}$ may dominate several facilities in ${\cal O}$, it can cover at most one facility in ${\cal O}$. On the other hand, a facility $o \in {\cal O}$ can be dominated by at most one facility in ${\mathcal S_L}$, but
it can be covered by at most two facilities in ${\mathcal S_L}$. The scenario in which a facility $o\in {\cal O}$ is covered by exactly $2$ facilities, say $s_1$ and $s_2$, needs to be handled carefully.
In this case, we say that $s_1$ as well as $s_2$ {\em{specially covers}} $o$. We denote the set of such facilities in ${\cal O}$ as ${\mathcal O}_{sp}$. Since the mapping of the clients of $s_1$ and $s_2$ cannot be done in $o$, we would like to swap $s_1$ and $s_2$ with $o$ and assign their clients to $o$, i.e. we would like to perform ${\mathcal S_W}apone{\{s_1, s_2\}}{o}$. However, since we do not have this operation, we look for one more facility $o'$ in ${\cal O}$ so that we can perform a double-swap of $\{s_1, s_2\}$ with $\{o, o'\}$. First we look for $o'$ such that $\{s_1,s_2\}$ together either dominate or cover $o'$. Clearly, neither $s_1$ nor $s_2$, being light, can cover any facility other than $o$. Thus we look for $o'$ that is dominated by them. If $\{s_1, s_2\}$ do not dominate any facility other than $o$, we form a triplet $<s_1, s_2, o>$ and keep it aside. We call such triplets nice triplets. They will be used to swap in some facilities of ${\cal O}$ which are not swapped in otherwise.
If they dominate exactly one facility $o'$, then we perform a double-swap of $\{s_1, s_2\}$ with $\{o, o'\}$. If they dominate at least two facilities other than $o$, then we cannot swap them out at all. We call such a pair of facilities a bad pair.
The remaining facilities in ${\mathcal S_L}$ are classified as good, bad and nice. A facility that does not dominate any facility in ${\cal O}$ is termed {\em nice}. A nice facility can be swapped in with any facility in ${\cal O}$. A facility $s$ that dominates exactly one facility $o$ in ${\cal O}$ is termed {\em good}. We perform (single) $swap(s, o)$ in this case. A facility $s$ that dominates more than one facility in ${\cal O}$ is termed {\em bad}. Bad facilities cannot participate in swaps. Let ${\cal O}hat$ denote the set of facilities of ${\cal O}$ that are either dominated by bad facilities or by bad pairs in ${\mathcal S_L}$. Facilities of ${\cal O}hat$ are swapped in using the triplets (using ${\mathcal S_W}aptwo{.}{.}{.}{.}$) or the nice facilities (using ${\mathcal S_W}apone{.}{.}$). We show that the total number of triplets and nice facilities is at least one third of $|{{\cal O}hat}|$, so that each facility of ${\mathcal S_L}$ is swapped out at most $3$ times and each facility of ${\cal O}$ is swapped in at least once and at most $3$ times (note that in the process, the facilities of ${\cal O}$ which were in the triplets also get swapped in thrice). Swapping in a facility of ${\cal O}$ thrice contributes a factor of $3$ and swapping out a facility of ${\mathcal S_L}$ thrice contributes a factor of $6$, making a total of a $9$ factor approximation.
Extending the swap and double-swap to a multi-swap, where up to $p$ $(p > 2)$ facilities can be swapped simultaneously, we are able to ensure that every $s \in {\cal S}$ is swapped out at most $1 + 4/ (p-2)$ times, and every $o \in {\cal O}$ is swapped in at most $ 1 + 4/(p-2)$ times, thereby reducing the factor to $(3+\epsilon)$.
For $\text{CkM}p$, we start with an initial feasible solution with $8k/3$ facilities from ${\cal F}$. The clients are assigned by solving a min-cost flow problem over the facilities ${\cal S} \cup \{\delta\}$, where $u_{\delta}=\abs{{\cal C}}$ and $\forall j \in {\cal C}, ~\dist{\delta}{j} = \pj{j}$. Clients assigned to $\delta$ pay the penalty in the solution ${\cal S}$. We bound the cost of the locally optimal solution in the same manner as for $\text{CkM}$.
\subsection{Local Search Paradigm}
Given a problem $\mathtt{P}$, a local search algorithm starts with a candidate feasible solution ${\cal S}$. A set of operations is defined such that performing an operation results in a new solution ${\cal S}'$, called a neighbourhood solution of ${\cal S}$. A solution ${\cal S}$ may have more than one neighbourhood solution. An operation is performed if it results in an improvement in the cost.
We formally describe the steps of the algorithm for a minimization problem.\\
\textbf{The paradigm:}
\begin{enumerate}
\item Compute an arbitrary feasible solution ${\cal S}$ to $\mathtt{P}$.
\item \textbf{while} ${\cal S}'$ is a neighborhood solution of ${\cal S}$ such that $\cost{{\cal S}'}<\cost{{\cal S}}$ \\
\textbf{do}
${\cal S} \leftarrow {\cal S}'$.
\end{enumerate}
The algorithm terminates at a locally optimal solution ${\cal S}$, \textit{i}.\textit{e}. $\cost{{\cal S}'}\geq\cost{{\cal S}}$ for every neighborhood solution ${\cal S}'$.
In the algorithm presented above, we move to a new solution if it gives some improvement in the cost, however small that improvement may be. This may lead to the algorithm taking a long (possibly super-polynomial) time. To ensure that the algorithm terminates in polynomial time, a local search step is performed only when the cost of the current solution ${\cal S}$ is reduced by at least $\frac{\cost{{\cal S}}}{p(n,\epsilon)}$, where $n$ is the size of the problem instance and $p(n,\epsilon)$ is an appropriate polynomial in $n$ and $1/\epsilon$ for a fixed $\epsilon > 0$. This modification of the algorithm incurs an additive $\epsilon$ in the approximation factor. A sketch of the resulting procedure is given below.
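The following Python-style sketch illustrates this modified paradigm; the neighbourhood generator, the cost function and the particular polynomial $p(n,\epsilon)$ shown are problem-specific placeholders of our own choosing and are not part of the formal analysis.
\begin{verbatim}
def local_search(initial_solution, neighbors, cost, eps, n):
    """Generic local search with polynomial-time termination: a move is
    accepted only if it improves the cost by at least cost(S)/p(n, eps).
    `neighbors`, `cost` and the choice of p are problem-specific placeholders."""
    p = n**4 / eps                 # one illustrative choice of p(n, eps)
    S = initial_solution
    improved = True
    while improved:
        improved = False
        for S_prime in neighbors(S):
            if cost(S_prime) <= (1 - 1.0 / p) * cost(S):
                S = S_prime        # accept the improving move
                improved = True
                break
    return S
\end{verbatim}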
\subsection{Organization of the paper}
For the sake of an easy exposition of ideas, we first present a weaker result for $\text{CkM}$ in Section~\ref{sec-CkM-algo1}. The algorithm uses two operations, a (single) swap and a double swap, and provides a ($9 + \epsilon, 8/3$) solution. The factor is subsequently improved to $(3 + \epsilon)$ in Section~\ref{sec-CkM-algo2} using a multi-swap operation.
The results are then extended to $\text{CkM}p$ in Section~\ref{sec-CkMP-algo}.
\section{($9 + \epsilon, 8/3$) algorithm for Capacitated $k$-Median Problem}
\label{sec1} \label{sec-CkM-algo1}
In this section, we present a local search algorithm that computes a solution with cost at most $9+\epsilon$ times the cost of an optimal solution.
We start with an initial feasible solution selected as an arbitrary set of $8k/3$ facilities. Given a set of open facilities, an optimal assignment of the clients is obtained by solving a min-cost flow problem.
For any feasible solution ${\cal S}$, the algorithm performs one of the following operations if it reduces the cost, and terminates when it is no longer possible to improve the cost using these operations (a sketch of the resulting neighbourhood appears after the list).
\begin{enumerate}
\item
${\mathcal S_W}apone{s}{o}$: ${\cal S} \leftarrow {\cal S} \setminus \{s\} \cup \{o\}$,
$o \in {\cal F} \setminus {\cal S}$,
$s \in {\cal S}$.
\item
${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$: ${\cal S} \leftarrow {\cal S} \setminus \{s_1, s_2\} \cup \{o_1, o_2\}$,
$o_1, o_2 \in {\cal F} \setminus {\cal S}$,
$s_1, s_2 \in {\cal S}$.
\end{enumerate}
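The neighbourhood explored by the algorithm can be sketched as follows (Python, an illustration only; the min-cost flow computation of the client assignment, and hence of the cost of each neighbour, is treated as a black box and is not shown).
\begin{verbatim}
from itertools import combinations

def swap_neighbors(S, F):
    """Yield the neighbourhood of a solution S under swap(s, o) and
    swap({s1, s2}, {o1, o2}); S and F are sets of facilities.  The cost of
    each neighbour is meant to be evaluated by a min-cost flow oracle."""
    closed = [o for o in F if o not in S]
    for s in S:                                   # single swaps
        for o in closed:
            yield (S - {s}) | {o}
    for s1, s2 in combinations(S, 2):             # double swaps
        for o1, o2 in combinations(closed, 2):
            yield (S - {s1, s2}) | {o1, o2}
\end{verbatim}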
\begin{claim}
\label{suniono}
For the locally optimal solution ${\cal S}$, and optimal solution ${\cal O}$ we have,
\begin{enumerate}
\item $\cost{{\cal S} \setminus \{s\} \cup \{o\}} \geq \cost{{\cal S}}; \ \forall s\in {\cal S}, o \in {\cal O}$
\item $\cost{{\cal S} \setminus \{s_1, s_2\} \cup \{o_1, o_2\}} \geq \cost{{\cal S}}; \ \forall s_1, s_2\in {\cal S}, o_1, o_2 \in {\cal O}$
\end{enumerate}
\end{claim}
\begin{proof}
The claims follow trivially when ${\cal S} \cap {\cal O} = \phi$ by the local optimality of ${\cal S}$. Next, suppose ${\cal S} \cap {\cal O} \ne \phi$. Let $s_1 \in {\cal S}$ and $o_1 \in {\cal S} \cap {\cal O}$. Then ${\cal S} \setminus \{s_1\} \cup \{o_1\} = {\cal S} \setminus \{s_1\}$ if $s_1 \ne o_1$ and it is $= {\cal S}$ otherwise. Clearly, the cost of assignment to facilities in ${\cal S} \setminus \{s_1\}$ cannot be smaller than the cost of assignment to facilities in ${\cal S}$.
If $s_2 \in {\cal S} \setminus {\cal O}$ and $o_2 \in {\cal O} \setminus {\cal S}$, then $\cost{{\cal S} \setminus \{s_2\} \cup \{o_2 \}} \ge \cost{{\cal S}}$ (by the argument for the single swap) and $\cost{{\cal S} \setminus \{s_1, s_2\} \cup \{o_1, o_2\}} = \cost{{\cal S} \setminus \{s_1, s_2\} \cup \{o_2\}} \ge \cost{{\cal S} \setminus \{ s_2\} \cup \{o_2\}}$. All the other cases can be argued similarly.
\end{proof}
\subsection{Notations} \label{noatations}
Let ${\cal S}$ denote the locally optimal solution and ${\cal O}$ denote an optimal solution to the problem. Let $\bs{s}$ be the set of clients served by $s \in {\cal S}$ and $\bo{o}$ be the set of clients served by $o \in {\cal O}$.
Let $\cal{B}l{s}{o}$ denote the set of clients served by $s\in{\cal S}$ and $o\in{\cal O}$ \textit{i}.\textit{e}. $\cal{B}l{s}{o} = \bs{s} \cap \bo{o}$.
For a client $j$, let $\sigmas{j}$ and $\sigmao{j}$ denote the facilities serving $j$ in ${\cal S}$ and ${\cal O}$ respectively. Let $\sj{j}$ and $\oj{j}$ denote the service costs paid by $j$ in ${\cal S}$ and ${\cal O}$ respectively.
Facilities in ${\cal S}$ are partitioned into {\em{heavy}} (${\mathcal S_H}$) and {\em{light}} (${\mathcal S_L}$).
A facility $s \in {\cal S}$ is said to be heavy if $\abs{\bs{s}} > \frac{3}{5} \textit{U}$ and light otherwise.
When a facility in our locally optimal solution is swapped out, some of its clients are reassigned to other facilities in our solution via a mapping similar to the one defined in \cite{kmedian}.
We may not be able to define a feasible mapping for the heavy facilities. Thus heavy facilities are never swapped out and no client is mapped onto them for reassignment.
Consider a facility $o \in {\cal O}$, let
$\bol{o} = \bo{o} \cap \cup_{s \in {\mathcal S_L}} \bs{s}$. Let $\mathcal{M}_o = \abs{\bol{o}}$.
We introduce two concepts important to define the swaps and the mapping.
\begin{itemize}
\item
A facility $s \in {\mathcal S_L}$ is said to {\em dominate} $o$ if $\abs{\cal{B}l{s}{o}} > \mathcal{M}_o/2$.
Note that a facility $o \in {\cal O}$ can be dominated by at most one facility $s \in {\mathcal S_L}$, whereas a facility $s \in {\mathcal S_L}$ can dominate any number of facilities.
Extending the definition to sets, we say that a set $T \subseteq {\mathcal S_L}$ {\em dominates} $o \in {\cal O}$ if $\sum_{s \in T} \abs{\cal{B}l{s}{o}} > \mathcal{M}_o/2$. Let $\dom{T}$ denote the set of facilities dominated by $T$.
When $T = \{s\}$, slightly abusing the notation, we use $\dom{s}$ instead of $\dom{\{s\}}$.
\item A facility $s \in {\mathcal S_L}$ is said to {\em cover} $o \in {\cal O}$ if $\abs{\cal{B}l{s}{o}} > \frac{2}{5} \textit{U}$.
Note that a facility $s \in {\mathcal S_L}$ can cover at most one facility in ${\cal O}$. Also, a facility $o \in {\cal O}$ can be covered by at most $2$ facilities in ${\mathcal S_L}$.
Extending the definition to sets, a set $T \subseteq {\mathcal S_L}$ {\em covers} $o \in {\cal O}$ if $\sum_{s \in T} \abs{\cal{B}l{s}{o}} > \frac{2}{5}\textit{U}$. Let $\vs{T}$ denote the set of facilities covered by $T$. Also, we will use $\vs{s}$ instead of $\vs{\{s\}}$ when $T = \{s\}$.
\end{itemize}
\subsection{Analysis: The Swaps}
\label{sec-swaps}
Consider a set of facilities in ${\cal O}$ such that each of them is covered by exactly two light facilities. Let ${\mathcal O}_{sp}$ denote the set of such facilities.
For $ o \in {\cal O}$, a one-to-one and onto mapping $\tau : \bol{o} \rightarrow \bol{o}$ can be defined such that the following claim holds.
\begin{claim}
\label{claim:strongclaim} \label{taumapping}
For $s \in {\mathcal S_L}$ and $o\in {\cal O}$ such that $o \notin \dom{s}$:
\begin{enumerate}
\item \label{prop1}
$\tau(\cal{B}l{s}{o}) \cap \cal{B}l{s}{o} = \phi$.
\item \label{prop2}
If $ o \notin {\mathcal O}_{sp} $ then $\abs{\{j \in \cal{B}l{s}{o} : \tau(j) \in \cal{B}l{s'}{o}\}} \le \frac{2}{5}\textit{U},~\forall s' \neq s$.
\end{enumerate}
\end{claim}
\begin{proof}
$\tau$ can be defined as follows: Order the clients in $\bol{o}$ as $j_0, j_1,...,j_{\mathcal{M}_o-1}$ such that for every $s \in {\mathcal S_L}$ with a nonempty $\cal{B}l{s}{o}$, the clients in $\cal{B}l{s}{o}$ are consecutive; that is, there exist $r,t,~ 0 \leq r \leq t \leq \mathcal{M}_o-1$, such that $\cal{B}l{s}{o} = \{j_r,...,j_t\}$. Define $\tau(j_p)=j_q$, where $q =(p +\floor{\mathcal{M}_o/2})~modulo~\mathcal{M}_o$.
We show that $\tau$ satisfies the claim.
We prove (\ref{prop1}) using contradiction.
Suppose if possible that both $j_p$, $\tau(j_p)=j_q \in \cal{B}l{s}{o}$ for some $s$, where $|\cal{B}l{s}{o}| \leq \mathcal{M}_o/2$. If $q = p+\floor{\mathcal{M}_o/2}$, then $|\cal{B}l{s}{o}| \geq q-p+1 =\floor{\mathcal{M}_o/2}+1 > \mathcal{M}_o/2$. If $q = p+\floor{\mathcal{M}_o/2} - \mathcal{M}_o$, then $|\cal{B}l{s}{o}| \geq p-q+1 = \mathcal{M}_o - \floor{\mathcal{M}_o/2}+1 > \mathcal{M}_o/2$. In either case, we have a contradiction, and hence mapping $\tau$ satisfies the claim.
For (\ref{prop2}), as $o \notin {\mathcal O}_{sp}$, at most one facility can cover $o$. If $\vs{s} = o$ then for all $s' \neq s$, $\abs{\cal{B}l{s'}{o}} \leq \frac{2}{5} \textit{U}$. And if $\vs{s} \neq o$ then $\abs{\cal{B}l{s}{o}} \leq \frac{2}{5} \textit{U}$. In either case the claim $\abs{\{j \in \cal{B}l{s}{o} : \tau(j) \in \cal{B}l{s'}{o}\}} \le \frac{2}{5}\textit{U},~\forall s' \neq s$ holds true.
\end{proof}
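The construction of $\tau$ in the proof above amounts to a cyclic shift of a block ordering of $\bol{o}$ by $\floor{\mathcal{M}_o/2}$; a small Python sketch (an illustration with hypothetical client identifiers) is the following.
\begin{verbatim}
def build_tau(blocks):
    """Construct the mapping tau of Claim (taumapping).  `blocks` is a list of
    client lists, one per facility s with nonempty B(s, o); clients of the same
    facility are kept consecutive, and tau shifts by floor(M_o/2) modulo M_o."""
    order = [j for block in blocks for j in block]    # j_0, ..., j_{M_o - 1}
    M = len(order)
    shift = M // 2
    return {order[p]: order[(p + shift) % M] for p in range(M)}

# tiny illustration with hypothetical client ids
tau = build_tau([[0, 1, 2], [3, 4], [5, 6, 7]])
\end{verbatim}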
Mapping $\tau$ is used to reassign the clients of a facility $s$ that is swapped out to other facilities $s' \in {\mathcal S_L}$. Claim (\ref{claim:strongclaim}.\ref{prop1}) ensures that if $s$ does not dominate $o$, then a client $j \in \cal{B}l{s}{o}$ is mapped to some $s' \neq s$, whereas Claim (\ref{claim:strongclaim}.\ref{prop2}) ensures
that if $o \notin {\mathcal O}_{sp}$, then no more than $\frac{2}{5}\textit{U}$ clients are mapped to $s'$. But if $o \in {\mathcal O}_{sp}$ with $\po{o}=\{s, s'\}$, then more than $\frac{2}{5}\textit{U}$ clients may get mapped to $s'$. This scenario poses a major challenge; thus facilities in ${\mathcal O}_{sp}$ are considered separately while defining the swaps.
Consider a facility $o \in {\mathcal O}_{sp}$: there exist $s_1,s_2 \in {\mathcal S_L}$ such that $s_1 \neq s_2$ and $\vs{s_1}=\vs{s_2}=o$; let $\po{o}$ denote the set $\{s_1, s_2\}$. Let ${\mathcal S}_{sp}= \cup_{o \in {\mathcal O}_{sp}} \po{o}$. Let $\doo{o} = \dom{\po{o}}$ and $\dop{o} = \doo{o}\setminus \{o\}$.
Let ${\mathcal D}_{sp} = \cup_{o \in {\mathcal O}_{sp}} \dop{o}$. Figure \ref{fig0}(a) shows the relationship between ${\mathcal S}_{sp}$ and ${\mathcal D}_{sp} \cup {\mathcal O}_{sp}$.
The following claims hold.
\begin{claim}
\label{po_claim}
\label{c3}
$\forall o,o' \in {\mathcal O}_{sp}, o \neq o'$ we have $\po{o} \cap \po{o'} = \phi $.
\end{claim}
\begin{proof}
Suppose, if possible, that $\po{o} \cap \po{o'} \neq \phi$. Let $s \in \po{o} \cap \po{o'}$. This implies $\abs{\cal{B}l{s}{o}} > 2\textit{U}/5$ and $\abs{\cal{B}l{s}{o'}} > 2\textit{U}/5$, which is a contradiction as $s\in{\mathcal S_L}$ serves at most $3\textit{U}/5$ clients.
\end{proof}
\begin{claim}
\label{do_claim}
\label{c4}
$\forall o,o' \in {\mathcal O}_{sp}, o \neq o'$ we have $\doo{o} \cap \doo{o'} = \phi$.
\end{claim}
\begin{proof}
Suppose, if possible, that $o_1 \in \doo{o} \cap \doo{o'}$. This implies $o_1 \in \dom{\po{o}}$ and $o_1 \in \dom{\po{o'}}$. This is a contradiction, as $\po{o} \cap \po{o'} = \phi$ by Claim \ref{po_claim} and $o_1$ cannot be dominated by two disjoint sets of facilities.
\end{proof}
\begin{claim}
\label{dsp_claim}
\label{c5}
${\mathcal D}_{sp} \cap {\mathcal O}_{sp} =\phi$.
\end{claim}
\begin{proof}
Suppose, if possible, that $o \in {\mathcal D}_{sp} \cap {\mathcal O}_{sp}$. As $o \in {\mathcal O}_{sp}$, we have $o \in \dom{\po{o}}$. By definition we have $o \notin \dop{o}$, thus $o \in \dop{o'}$ for some $o' \neq o, o' \in {\mathcal O}_{sp}$. This implies $o \in \dom{\po{o'}}$, which is a contradiction as $\po{o} \cap \po{o'} = \phi$ by Claim \ref{po_claim}.
\end{proof}
We consider at most $k$ swaps, satisfying the following properties.
\begin{enumerate}
\item Each $o \in {\cal O}$ is considered in at least one swap and at most three swaps.
\item If $s \in {\mathcal S_H}$, $s$ is not considered in any swap operation.
\item Each $s \in {\mathcal S_L}$ is considered in at most three swaps.
\item If ${\mathcal S_W}apone{s}{o}$ is considered then $\forall o' \neq o$; $o' \notin \dom{s}$ and $\forall o' \in {\mathcal O}_{sp}: o' \neq o$; $s \notin \po{o'}$.
\item If ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$ is considered then $\forall o' \neq o_1, o_2$; $o' \notin \domtwo{s_1}{s_2}$ and
$\forall o' \in {\mathcal O}_{sp}: o' \neq o_1, o_2$; $s_1, s_2 \notin \po{o'}$.
\end{enumerate}
Let ${\mathcal S_W} \subseteq {\mathcal S_L}$ and ${\mathcal O_W} \subseteq {\cal O}$ denote the sets of facilities that have participated in the swaps at any point of time. Initially ${\mathcal S_W} = {\mathcal O_W} = \phi$. While considering the facilities, we also maintain the sets ${\cal S}hat \subseteq {\mathcal S_L}$ and ${\cal O}hat\subseteq {\cal O}$; initially ${\cal S}hat = {\cal O}hat = \phi$.
The facilities in ${\cal S}hat$ will never participate in any swap. Facilities in ${\cal O}hat$ correspond to the facilities in ${\cal S}hat$ in a way which will become clear when we define the swaps.
We also maintain a set of triplets denoted by $\bag$ (to be defined shortly) and two sets ${\mathcal S_{\bag}}$ and ${\mathcal O_{\bag}}$ corresponding to $\bag$. All three sets are empty initially. Throughout, we maintain that ${\mathcal S_W}, {\mathcal S_{\bag}}, {\cal S}hat$ are pairwise disjoint and ${\mathcal O_W}, {\mathcal O_{\bag}}, {\cal O}hat$ are pairwise disjoint.
For each $o_1 \in {\mathcal O}_{sp}$ with $\po{o_1} = \{s_1, s_2\}$, we proceed as follows (see the sketch after this list).
\begin{enumerate}
\item If $\abs{\doo{o_1}} = 1$ then $\doo{o_1} = \{o_1\}$. In this case we call $\po{o_1}$ a {\em nice pair}.
Set $\bag = \bag \cup \{ < s_1, s_2, o_1 > \} $,
${\mathcal O_{\bag}} = {\mathcal O_{\bag}} \cup \doo{o_1}$, ${\mathcal S_{\bag}} = {\mathcal S_{\bag}} \cup \po{o_1}$.
\item If $\abs{\doo{o_1}} = 2$, let $\doo{o_1} = \{o_1, o_2\}$
In this case we call $\po{o_1}$ a { \em good pair} and consider ${\mathcal S_W}aptwopo{\po{o_1}}{\doo{o_1}}$ which is nothing but
${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$.
Set ${\mathcal O_W} = {\mathcal O_W} \cup \doo{o_1}$, ${\mathcal S_W} = {\mathcal S_W} \cup \po{o_1}$.
\item If $\abs{\doo{o_1}} > 2$ then we call $\po{o_1}$ a {\em bad pair}.
Set ${\cal O}hat = {\cal O}hat \cup \doo{o_1}$,
${\cal S}hat = {\cal S}hat \cup \po{o_1}$. That is, put the bad pairs in ${\cal S}hat$ and the facilities dominated by them in ${\cal O}hat$.
Note that the cardinality of ${\cal S}hat$ increased by $2$ while cardinality of ${\cal O}hat$ increased by at least $3$. \label{32case}
\end{enumerate}
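To make the above case analysis concrete, the following small Python sketch (a rough illustration, not part of the formal description of the algorithm; \texttt{po} and \texttt{dom} are hypothetical helpers standing for $\po{\cdot}$ and $\dom{\cdot}$) classifies every split pair as nice, good or bad and shows which bookkeeping sets it populates.
\begin{verbatim}
# Rough sketch: classify the pairs P(o), o in O_sp, as nice / good / bad.
# `po[o]` is the pair of light facilities capturing o; `dom(F)` is assumed
# to return the set of optimal facilities dominated by the facility set F.
def classify_split_pairs(O_sp, po, dom):
    bag, S_bag, O_bag = [], set(), set()   # nice pairs, kept as triplets
    S_W, O_W = set(), set()                # good pairs, swapped immediately
    S_hat, O_hat = set(), set()            # bad pairs, postponed
    for o1 in O_sp:
        s1, s2 = po[o1]
        D = dom({s1, s2})                  # facilities dominated by the pair
        if len(D) == 1:                    # nice pair: D == {o1}
            bag.append((s1, s2, o1))
            S_bag |= {s1, s2}; O_bag |= D
        elif len(D) == 2:                  # good pair: swap((s1,s2),(o1,o2))
            S_W |= {s1, s2}; O_W |= D
        else:                              # bad pair: |D| >= 3
            S_hat |= {s1, s2}; O_hat |= D
    return bag, S_bag, O_bag, S_W, O_W, S_hat, O_hat
\end{verbatim}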
Figure \ref{fig1}(a) shows the partitions ${\mathcal S_L}$ and ${\cal O}$ at this time. Let ${\cal S}hattwo = {\mathcal S_W} \cup {\mathcal S_{\bag}} \cup {\cal S}hat$ and $ {\cal O}hattwo = {\mathcal O_W} \cup {\mathcal O_{\bag}} \cup {\cal O}hat$. Note that ${\cal S}hattwo={\mathcal S}_{sp}$ and ${\cal O}hattwo={\mathcal O}_{sp} \cup {\mathcal D}_{sp}$.
Also, clearly $\abs{{\mathcal S_W}} = \abs{{\mathcal O_W}}$ and $\abs{{\mathcal S_{\bag}}} = 2\abs{{\mathcal O_{\bag}}}$, as for every facility added to ${\mathcal O_{\bag}}$, two facilities are added to ${\mathcal S_{\bag}}$.
Moreover, $3\abs{{\cal S}hat}\leq 2 \abs{{\cal O}hat}$, since for every two facilities added to ${\cal S}hat$, at least three facilities are added to ${\cal O}hat$.
Next, we consider the facilities in ${\cal O} \setminus {\cal O}hattwo$ and ${\mathcal S_L} \setminus {\cal S}hattwo$.
We say that a facility $s \in {\mathcal S_L} \setminus {\cal S}hattwo$ is {\em good} if $\abs{\dom{s}} = 1$, {\em bad} if $\abs{\dom{s}} > 1$, else {\em nice} (i.e. $\abs{\dom{s}} = 0$). Let ${\mathcal S_g}, {\mathcal S_b}$, and ${\mathcal S_n}$ denote the set of good, bad and nice facilities respectively and, ${\cal O} \setminus {\cal O}hattwo$ is partitioned into ${\mathcal O}_{g}, {\mathcal O}_{b}$ and ${\mathcal O}_{n}$. Let ${\mathcal O}_{g}$ denote the set of facilities in ${\cal O} \setminus {\cal O}hattwo$ captured by good facilities. Let ${\mathcal O}_{b}$ denote the set of facilities in ${\cal O} \setminus {\cal O}hattwo$ captured by bad facilities, and let ${\mathcal O}_{n}$ denote the set of facilities in ${\cal O} \setminus {\cal O}hattwo$ not captured by any facility in ${\mathcal S_L} \setminus {\cal S}hattwo$. Figure \ref{fig0}(b) shows the relationship between ${\mathcal S_g}, {\mathcal O}_{g}$ and ${\mathcal S_b}, {\mathcal O}_{b}$.
\begin{figure}
\caption{(a) Relationship between ${\mathcal S}_{sp}$ and ${\mathcal D}_{sp} \cup {\mathcal O}_{sp}$. (b) Relationship between ${\mathcal S_g}, {\mathcal O}_{g}$ and ${\mathcal S_b}, {\mathcal O}_{b}$.}
\label{fig0}
\end{figure}
\begin{figure}
\caption{Partitions of ${\mathcal S_L}$ and ${\cal O}$: (a) after handling ${\mathcal O}_{sp}$, (b) after handling the remaining facilities.}
\label{fig1}
\end{figure}
\begin{enumerate}
\item For every $s \in {\mathcal S_n}$ ($\abs{\dom{s}} = 0$): Do nothing.
\item For every $s \in {\mathcal S_g}$ ($ \abs{\dom{s}} = 1$): Perform ${\mathcal S_W}apone{s}{\dom{s}}$.
Update ${\mathcal S_W}, {\mathcal O_W}, {\cal O}hattwo$ and ${\cal S}hattwo$ as ${\mathcal S_W} = {\mathcal S_W} \cup {\mathcal S_g}, {\mathcal O_W} = {\mathcal O_W} \cup {\mathcal O}_{g}, {\cal O}hattwo = {\cal O}hattwo \cup {\mathcal O_W}$, ${\cal S}hattwo = {\cal S}hattwo \cup {\mathcal S_W}$ in this order.
\item For every $s \in {\mathcal S_b}$ ($\abs{\dom{s}} > 1$): Set ${\cal S}hat = {\cal S}hat \cup \{s\}, {\cal O}hat = {\cal O}hat \cup \dom{s}, {\cal O}hattwo = {\cal O}hattwo \cup {\cal O}hat$, ${\cal S}hattwo = {\cal S}hattwo \cup {\cal S}hat$ in this order.
That is, put the bad facility in ${\cal S}hat$ and the facilities dominated by it in ${\cal O}hat$.
The cardinality of ${\cal S}hat$ increased by $1$ while the cardinality of ${\cal O}hat$ increased by at least $2$.
\end{enumerate}
New partitions are shown in Figure \ref{fig1}(b).
Let ${\cal S}bar$ be the set of facilities in ${\mathcal S_L}$ that have not participated in any swap. Then such a facility is either a nice facility, a bad facility, is in a nice pair or in a bad pair.
Similarly, let ${\cal O}bar$ be the set of facilities in ${\cal O}$ that have not participated in the above swaps.
Then, ${\mathcal O_{\bag}}$ is the set of facilities in ${\cal O}bar$ that are in a triplet. Let $T = {\cal O}bar \setminus {\mathcal O_{\bag}}$.
Then the facilities in $T$ are either dominated by a bad facility or a bad pair, or are not dominated by any facility or pair, i.e., $T = {\cal O}hat \cup {\mathcal O}_{n}$. Let $\ell = \abs{T}$ be the number of such facilities. The next claim shows that there are at least $\ell/3$ nice facilities and nice pairs taken together.
\begin{claim} \label{numsuff}
$\abs{\bag} + \abs{{\mathcal S_n}} \geq \frac{1}{3} ( \abs{{\cal O}hat} + \abs{{\mathcal O}_{n}})$
\end{claim}
\begin{proof}
While handling ${\mathcal O}_{b}$, for every facility added to ${\cal S}hat$ at least $2$ facilities are added to ${\cal O}hat$, and while handling ${\mathcal O}_{sp}$, at least $3$ facilities are added to ${\cal O}hat$ for every $2$ facilities added to ${\cal S}hat$. Thus $3 \abs{{\cal S}hat} \leq 2 \abs{{\cal O}hat}$, i.e., $\abs{{\cal S}hat} \leq \frac{2}{3} \abs{{\cal O}hat}$.
Also, $\abs{{\cal O}} = k \leq \abs{{\mathcal S_L}}$. Since ${\cal O} = {\cal O}hattwo \cup {\mathcal O}_{n}$ with ${\cal O}hattwo \cap {\mathcal O}_{n} = \phi$, we have ${\cal O} = {\mathcal O_W} \cup {\mathcal O_{\bag}} \cup {\cal O}hat \cup {\mathcal O}_{n}$; similarly, ${\mathcal S_L} = {\cal S}hattwo \cup {\mathcal S_n}$ with ${\cal S}hattwo \cap {\mathcal S_n} = \phi$ gives ${\mathcal S_L} = {\mathcal S_W} \cup {\mathcal S_{\bag}} \cup {\cal S}hat \cup {\mathcal S_n}$.
Using $\abs{{\mathcal O_W}} = \abs{{\mathcal S_W}}$, $\abs{{\mathcal O_{\bag}}} = \abs{\bag}$, $\abs{{\mathcal S_{\bag}}} = 2\abs{\bag}$ and $\abs{{\cal S}hat} \leq \frac{2}{3}\abs{{\cal O}hat}$, we get
\begin{align*}
\abs{{\mathcal S_W}} + \abs{{\cal S}hat} + 2\abs{\bag} + \abs{{\mathcal S_n}} &\geq \abs{{\mathcal O_W}} + \abs{\bag} + \abs{{\cal O}hat} + \abs{{\mathcal O}_{n}},\\
\abs{\bag} + \abs{{\mathcal S_n}} &\geq \abs{{\cal O}hat} - \abs{{\cal S}hat} + \abs{{\mathcal O}_{n}} \geq \frac{1}{3} \abs{{\cal O}hat} + \abs{{\mathcal O}_{n}}
\geq \frac{1}{3}\big( \abs{{\cal O}hat} + \abs{{\mathcal O}_{n}}\big).
\end{align*}
\end{proof}
Next, consider the following $\ell$ swaps, in which the facilities in ${\cal O}bar$ are swapped with nice facilities or nice pairs in ${\mathcal S_L}$ in such a way that each nice facility or facility in a nice pair is considered in at most $3$ swaps, and each facility in ${\mathcal O_{\bag}}$ is also considered in at most $3$ swaps (a schematic version of this procedure is sketched after the list).
\begin{enumerate}
\item Repeat until $\abs{T} < 3$. Pick $o_1, o_2, o_3 \in T$.
\begin{enumerate}
\item If ${\mathcal S_n} \neq \phi$, pick a facility $s_1 \in {\mathcal S_n}$ and perform\\ ${\mathcal S_W}apone{s_1}{o_1}$, ${\mathcal S_W}apone{s_1}{o_2}$, ${\mathcal S_W}apone{s_1}{o_3}$.
Set ${\mathcal S_n} = {\mathcal S_n} \setminus \{s_1\}$.
\item Else, pick a triplet $<s_1, s_2, o> \in \bag$, and perform\\
${\mathcal S_W}aptwo{s_1}{s_2}{o}{o_1}$,
${\mathcal S_W}aptwo{s_1}{s_2}{o}{o_2}$,
${\mathcal S_W}aptwo{s_1}{s_2}{o}{o_3}$.
\item Set $ T = T \setminus \{o_1, o_2, o_3\}$
\end{enumerate}
\item If $\abs{T} > 0$, either there must be a facility $s_1 \in {\mathcal S_n}$ or a triplet $<s_1, s_2, o> \in \bag$; accordingly perform swap or double-swap with the facilities in $T$ in the same manner as described in step 1.
\end{enumerate}
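A rough sketch of this covering step is given below. It merely records the swaps instead of performing them, and it relies on Claim \ref{numsuff} for the fact that the nice facilities and the triplets in $\bag$ do not run out.
\begin{verbatim}
# Rough sketch: cover the leftover optimal facilities in T three at a time,
# using either a nice facility from S_n or a triplet <s1, s2, o> from bag.
def cover_leftovers(T, S_n, bag):
    swaps = []
    T, S_n, bag = list(T), list(S_n), list(bag)
    while T:
        batch, T = T[:3], T[3:]            # at most three facilities per helper
        if S_n:
            s = S_n.pop()                  # a nice facility: |dom(s)| = 0
            swaps += [("swap", s, o) for o in batch]
        else:
            s1, s2, o = bag.pop()          # a nice pair with dom({s1,s2}) = {o}
            swaps += [("double-swap", (s1, s2), (o, oo)) for oo in batch]
    return swaps
\end{verbatim}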
The swaps are summarized in Figure \ref{fig2} and Figure \ref{fig3}.
\begin{figure}
\caption{(a) Partitions of ${\mathcal S_L}$ and ${\cal O}$.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Summary of the swaps}
\label{step1}
\label{fig2}
\end{figure}
\subsection{Analysis: Bounding the Cost}
\label{cost-bound}
Now we bound the cost of these swaps.
Whenever we consider a swap of form ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$, the mapping $\tau$ as defined in claim~\ref{suniono} cannot be used to reassign the clients of $s_1$ and $s_2$ as it is possible that
some client $j$ of $s_1$ is mapped to a client of $s_2$ or vice versa.
To address this, we define another mapping $\tau'$ in a similar way as in claim (\ref{claim:strongclaim}) considering $s_1$ and $s_2$ as a single facility.
Let $DS = \{\{s_1,s_2\}: {\mathcal S_W}aptwo{s_1}{s_2}{.}{.}$ was performed$\}$.
The mapping $\tau'$ satisfies the following claim.
\begin{claim}
\label{claim:strongclaim1}
\begin{enumerate}
\item For $s \in {\mathcal S_L}$ and $o\in {\cal O} | o \notin \dom{s}$
\begin{enumerate}
\item \label{prop01}
$\tau'(\cal{B}l{s}{o}) \cap \cal{B}l{s}{o} = \phi$.
\item \label{prop02}
If $ o \notin {\mathcal O}_{sp} $ then $\abs{\{j \in \cal{B}l{s}{o} : \tau'(j) \in \cal{B}l{s'}{o}\}} \le \frac{2}{5}\textit{U},~\forall s' \neq s$.
\end{enumerate}
\item For {$\{s_1,s_2\} \in DS$} and $o\in {\cal O} | o \notin \domtwo{s_1}{s_2}$
\begin{enumerate}
\item \label{prop11}
$\tau'(\cal{B}l{\{s_1, s_2\}}{o}) \cap \cal{B}l{\{s_1, s_2\}}{o} = \phi$.
\item \label{prop21}
If $ o \notin {\mathcal O}_{sp} $ then $\abs{\{j \in \cal{B}l{\{s_1, s_2\}}{o} : \tau'(j) \in \cal{B}l{s'}{o}\}} \le \frac{2}{5}\textit{U},~\forall s' \neq s_1, s_2$.
\end{enumerate}
\end{enumerate}
\end{claim}
\begin{proof}
$\tau'$ can be defined as follows. Consider $s_1$ and $s_2$ as a single meta-facility and define the mapping just as earlier, so that no $j \in \cal{B}l{s_1}{\tilde{o}}$ gets mapped to a $j' \in \cal{B}l{s_2}{\tilde{o}}$ and vice versa. Note that it is possible to create such a $\tau'$ as $\{s_1, s_2\}$ together do not dominate $\tilde{o}$.
For (\ref{prop11}), as $o \notin \domtwo{s_1}{s_2}$, the same argument ensures that $\tau'(\cal{B}l{\{s_1, s_2\}}{o}) \cap \cal{B}l{\{s_1, s_2\}}{o} = \phi$ still holds.
For (\ref{prop21}), as $o \notin {\mathcal O}_{sp}$, at most one facility can capture $o$. If $\vstwo{s_1}{s_2} = o$ then for all $s' \neq s_1, s_2$, $\cal{B}l{s'}{o} \leq \frac{2}{5} \textit{U}$. And if $\vstwo{s_1}{s_2} \neq o$ then $\cal{B}l{\{s_1,s_2\}}{o} \leq \frac{2}{5} \textit{U}$. In either case, the claim $\abs{\{j \in \cal{B}l{\{s_1,s_2\}}{o} : \tau'(j) \in \cal{B}l{s'}{o}\}} \le \frac{2}{5}\textit{U},~\forall s' \neq s_1,s_2$ holds true.
\end{proof}
Set $\tau = \tau'$. For ${\mathcal S_W}apone{s_1}{o_1}$ the reassignment is done as follows: for $j \in \bo{o_1}$ assign $j$ to $o_1$, and for $j \in \bs{s_1} \setminus \bo{o_1}$, assign $j$ to $\sigmas{\tau(j)}$. The change in cost can be bounded as follows.
For $j \in \bs{s_1} \setminus \bo{o_1}$ we have $\sigmas{\tau(j)} \ne \sigmas{j}$. Thus, since $j$ was assigned to $s$ and not to $\sigmas{\tau(j)}$ (call it $s'$), we have $\dist{j}{s} \le \dist{j}{s'}$ (since $s'$, being a light facility, had sufficient room to accommodate $j$). Thus, $\dist{j}{s} \leq \dist{j}{s'} \leq \dist{j}{o'} + \dist{o'}{\tau(j)} + \dist{\tau(j)}{s'}$, i.e., $\sj{j} \leq \oj{j} + \oj{\tau(j)} + \sj{\tau(j)}$. However, if $j \in \bs{s_1} \cap \bo{o_1}$, then $\sigmas{\tau(j)}$ may be the same as $\sigmas{j}$; in this case $\sj{j} \leq \oj{j} + \oj{\tau(j)} + \sj{\tau(j)}$ follows trivially by the triangle inequality.
Thus we can write
\begin{equation} \label{eq:2.2}
\sumlimits{j \in \bo{o_1}}{}(\oj{j} - \sj{j}) + \sumlimits{j \in \bs{s_1}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{equation}
For ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$, the reassignment is defined as follows: assign $j \in \bo{o_1}$ to $o_1$, assign $j \in \bo{o_2}$ to $o_2$, and assign $j \in \bs{s_1} \cup \bs{s_2} \setminus \bo{o_1} \setminus \bo{o_2}$ to $\sigmas{\tau(j)}$. The cost of the operation satisfies
\begin{multline} \label{eq:2.3}
\sumlimits{j \in \bo{o_1} \cup \bo{o_2}}{}(\oj{j} - \sj{j}) + \\
\sumlimits{j \in \bs{s_1} \cup \bs{s_2} \setminus \{\bo{o_1} \cup \bo{o_2}\}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
Also, by a similar argument as above, we have
\begin{multline} \label{eq:2.4}
\sumlimits{j \in \bo{o_1} \cup \bo{o_2}}{}(\oj{j} - \sj{j}) + \\
\sumlimits{j \in \bs{s_1} \cup \bs{s_2}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
Now, a facility $o \in {\cal O}$ may be considered in at most three swaps. Let ${\cal O}_1, {\cal O}_2, {\cal O}_3$ denote the sets of facilities in ${\cal O}$ that are considered in $1$, $2$ and $3$ swaps respectively. A facility $s \in {\mathcal S_L}$ may be swapped out at most $3$ times. Thus,
we can write
\begin{multline} \label{eq:2.5.1}
\sumlimits{o \in {\cal O}_1}{}\sumlimits{j \in \bo{o}}{}(\oj{j} - \sj{j}) +
\sumlimits{o \in {\cal O}_2}{}\sumlimits{j \in \bo{o}}{}2(\oj{j} - \sj{j}) +
\sumlimits{o \in {\cal O}_3}{}\sumlimits{j \in \bo{o}}{}3(\oj{j} - \sj{j}) +\\
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
Here the coefficient $3$ on the last sum is valid since $\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j} \geq 0$, as argued above. Rearranging the terms, we get
\begin{multline} \label{eq:2.5.2}
\sumlimits{o \in {\cal O}_1}{}\sumlimits{j \in \bo{o}}{}(\sj{j}) +
\sumlimits{o \in {\cal O}_2}{}\sumlimits{j \in \bo{o}}{}2(\sj{j}) +
\sumlimits{o \in {\cal O}_3}{}\sumlimits{j \in \bo{o}}{}3(\sj{j})
< \\
\sumlimits{o \in {\cal O}_1}{}\sumlimits{j \in \bo{o}}{}(\oj{j}) +
\sumlimits{o \in {\cal O}_2}{}\sumlimits{j \in \bo{o}}{}2(\oj{j}) +
\sumlimits{o \in {\cal O}_3}{}\sumlimits{j \in \bo{o}}{}3(\oj{j}) +\\
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j})
\end{multline}
\begin{multline} \label{eq:2.5.3}
\sumlimits{o \in {\cal O}}{}\sumlimits{j \in \bo{o}}{}(\sj{j})
<
\sumlimits{o \in {\cal O}}{}\sumlimits{j \in \bo{o}}{}3(\oj{j}) +
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j})
\end{multline}
\begin{multline} \label{eq:2.5.4}
\sumlimits{j \in {\cal C}}{}(\sj{j})
<
\sumlimits{j \in {\cal C}}{}3(\oj{j}) +
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j})
\end{multline}
\begin{multline} \label{eq:2.5.5}
\cost{{\cal S}}
<
3\cost{{\cal O}} +
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j})
\end{multline}
Since $\tau$ is a one-to-one and onto mapping,
$$\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{}\oj{j} =\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{}\oj{\tau(j)}
\quad\text{and}\quad
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{}(\sj{\tau(j)} - \sj{j}) = 0.$$ Thus,
\begin{multline}
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) = 2 \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j}) = 6\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} (\oj{j}) \leq\\ 6\sumlimits{s \in {\cal S}}{}\sumlimits{j \in \bs{s}}{} (\oj{j}) = 6 \cost{{\cal O}}
\end{multline}
Thus we have
\begin{multline} \label{eq:2.5.6}
\cost{{\cal S}}
<
3\cost{{\cal O}} +
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} 3.(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) \\
\leq 3\cost{{\cal O}} + 6\cost{{\cal O}} = 9\cost{{\cal O}}
\end{multline}
\section{$(3+ \epsilon, 8/3)$ algorithm using multi-swaps} \label{sec-CkM-algo2}
In this section, we reduce the factor to $(3+ \epsilon)$ using multi-swap operation.
Let $p = 2p'+2$ for some integer $p' \geq 1$, so that $p > 2$. The algorithm performs the following operation if it reduces the cost of the solution, and terminates otherwise.
$${\mathcal S_W}apmulti{A}{B}: {\cal S} = {\cal S} \setminus B \cup A; B \subseteq {\cal S}, A \subseteq {\cal F}, \abs{A} = \abs{B} \leq p$$
The operation can be performed in $O(n^p)$ time. For a fixed $\epsilon > 0$ and $p = O(1/\epsilon)$, it runs in $O(n^{1/\epsilon})$ time.
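The $O(n^p)$ bound simply reflects an exhaustive enumeration of the candidate sets $A$ and $B$. The following minimal Python sketch of one improvement step makes this explicit; \texttt{cost} is an assumed oracle returning the cost of a solution (computed, e.g., by a min cost flow), and is not specified here.
\begin{verbatim}
# Rough sketch of one p-swap improvement step: try all |A| = |B| = t <= p.
from itertools import combinations

def one_multiswap_step(S, F, p, cost):
    best = cost(S)
    for t in range(1, p + 1):
        for B in combinations(sorted(S), t):                   # close B
            for A in combinations(sorted(set(F) - set(S)), t): # open A
                S_new = (set(S) - set(B)) | set(A)
                if cost(S_new) < best:
                    return S_new            # improving multi-swap found
    return None                             # S is locally optimal
\end{verbatim}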
\subsection{Defining the swaps}
The following swaps are considered. Partitions are formed and the facilities in ${\mathcal O}_{sp}$ and ${\mathcal O_W}$ are treated in the same manner as described in Section~\ref{sec-swaps}.
We write inequalities corresponding to different operations and take their weighted sums.
Inequalities corresponding to the swaps defined for facilities in ${\mathcal O}_{sp}$ and ${\mathcal O_W}$ are assigned weight $1$.
Let ${\mathcal T_{\opt}} = {\mathcal O_{\bag}} \cup {\cal O}hat \cup {\mathcal O}_{b} \cup {\mathcal O}_{n}$ and
${\mathcal T_{\soln}} = {\mathcal S_{\bag}} \cup {\cal S}hat \cup {\mathcal S_b} \cup {\mathcal S_n}$. ${\mathcal T_{\soln}}$ and ${\mathcal T_{\opt}}$ are partitioned into $A_1, A_2, \ldots, A_r$ and $B_1, B_2, \ldots, B_r$ respectively using the Partition Algorithm (Algorithm \ref{alg1}),
so that the partitions satisfy the following properties:
\begin{enumerate}
\item For $1 \leq i \leq (r-1)$, we have $\abs{A_i} = \abs{B_i}$ and $B_i = \dom{A_i}$; moreover, $\abs{A_r} = \abs{B_r}$.
\item For $1 \leq i \leq (r-1)$, the set $A_i$ has exactly one facility $s$ from ${\mathcal S_b}$ or exactly one bad pair $<s_1, s_2>$ from ${\cal S}hat$.
\item The set $A_r$ contains facilities only from ${\mathcal S_{\bag}} \cup {\mathcal S_n}$.
\end{enumerate}
\begin{algorithm}
\footnotesize
\begin{algorithmic}[1]
\STATE $i=0$.
\WHILE {$\exists$ a facility in ${\cal S}hat \cup {\mathcal S_b}$}
\STATE $i = i+1$
\STATE $A_i = \{s\}$ where $s \in {\mathcal S_b}$ or $A_i = \{s_1, s_2\}$ where
$\{s_1, s_2\}$ is a bad pair from ${\cal S}hat$.
\STATE $B_i = \dom{A_i}$
\WHILE {$\abs{A_i} \neq \abs{B_i}$}
\STATE $A_i = A_i \cup \{g\}$ where $g \in {\mathcal T_{\soln}} \cap \{{\mathcal S_{\bag}} \cup {\mathcal S_n} \} \setminus A_i$.
\STATE $B_i = \dom{A_i}$.
\ENDWHILE
\STATE ${\mathcal T_{\opt}} = {\mathcal T_{\opt}} \setminus B_i$, ${\mathcal T_{\soln}} = {\mathcal T_{\soln}} \setminus A_i$.
\ENDWHILE
\STATE $A_r = {\mathcal T_{\soln}}$, $B_r = {\mathcal T_{\opt}}$.
\end{algorithmic}
\label{alg1}
\caption{Partition Algorithm}
\end{algorithm}
Next, we define the following swaps:
\begin{enumerate}
\item For the sets $A_i, B_i$ for some $1 \leq i \leq (r-1)$,
such that $\abs{A_i} = \abs{B_i} \leq p$, perform ${\mathcal S_W}apmulti{A_i}{B_i}$:
for all $o \in B_i$, $j \in \bo{o}$, reassign $j$ to $o$,
for all $j \in (\cup_{s \in A_i} \bs{s}) \setminus (\cup_{o \in B_i} \bo{o})$, assign $j$ to $\sigmas{\tau(j)}$. Inequalities are assigned weight $1$.
\item For the sets $A_i, B_i$ for some $1 \leq i \leq (r-1)$, such that $\abs{A_i} = \abs{B_i} = q > p$, we perform \textit{shrinking} as follows:
for every $<s_1, s_2, o_1> \in \bag$ such that $s_1, s_2 \in A_i$ and $o_1 \in B_i$, set $A_i = A_i \setminus \{s_1, s_2\}$, $B_i = B_i \setminus \{o_1\}$ and $A_i = A_i \cup \meta{s_1}$, where $\meta{s_1}$ denotes a meta node corresponding to the nodes $s_1, s_2$.
Note that after shrinking we still have $\abs{A_i} = \abs{B_i} = q' \geq q/2 $. Also as $q > p = 2p' +2$, we have $ q' > p'+1 = p/2$. We consider the swaps as follows
\begin{enumerate}
\item If $s \in A_i~ : s \in {\mathcal S_b}$ then we consider exactly $q'(q'-1)$ swaps as follows: for all $o \in B_i$,
for all $s' \in A_i \setminus \{s\}$, if $s' = \meta{s_j}$ was created by shrinking of $<s_{j_1}, s_{j_2}, o_{j_1}> $, then we perform ${\mathcal S_W}aptwo{s_{j_1}}{s_{j_2}}{o_{j_1}}{o}$.
If $s'$ was not created by shrinking, then we perform ${\mathcal S_W}apone{s'}{o}$.
Each inequality is assigned weight $1/(q'-1)$. Then, each $s' \in A_i$ participates in exactly $q'/(q'-1) = 1 + 1/(q'-1) \leq 1+ 1/p' = 1+2/(p-2)$ swaps, whereas each $o \in B_i$ participates in exactly $1$ swap, except for facilities of the type $o_{j_1}$, which participate in as many swaps as $s_{j_1}, s_{j_2}$ do, which is no more than $1+ 2/(p-2)$.
\item If $s_1, s_2 \in A_i~ : \{s_1, s_2\}$ is a bad pair from ${\cal S}hat$ then we consider exactly $q'(q'-2)$ swaps as follows: for all $o \in B_i$,
for all $s' \in A_i \setminus \{s_1, s_2\}$, if $s' = \meta{s_j}$ was created by shrinking of $<s_{j_1}, s_{j_2}, o_{j_1}> $, then we perform ${\mathcal S_W}aptwo{s_{j_1}}{s_{j_2}}{o_{j_1}}{o}$.
If $s'$ was not created by shrinking, then we perform ${\mathcal S_W}apone{s'}{o}$.
Each inequality is assigned weight $1/(q'-2)$. Then, each $s' \in A_i$ participates in exactly $q'/(q'-2) = 1 + 2/(q'-2) \leq 1+ 2/p' = 1+4/(p-2)$ swaps, whereas each $o \in B_i$ participates in exactly $1$ swap, except for facilities of the type $o_{j_1}$, which participate in as many swaps as $s_{j_1}, s_{j_2}$ do, which is no more than $1+ 4/(p-2)$.
\end{enumerate}
\end{enumerate}
The cost of the swaps can be analysed as follows. Every $s \in {\cal S}$ is swapped out at most $1 + 4/(p-2)$ times and every $o \in {\cal O}$ is swapped in at least once and at most $1 + 4/(p-2)$ times. Thus we can now write
\begin{multline} \label{eq:3.5.4}
\sumlimits{j \in {\cal C}}{}(\sj{j})
<
(1 + 4/(p-2))\sumlimits{j \in {\cal C}}{} (\oj{j}) +
(1 + 4/(p-2))\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j})
\end{multline}
which gives
\begin{multline} \label{eq:3.5.5}
\sumlimits{j \in {\cal C}}{}(\sj{j})
<
(1 + 4/(p-2))\cost{{\cal O}} +
(1 + 4/(p-2))(2\cost{{\cal O}}) = (3 + 12/(p-2))\cost{{\cal O}}
\end{multline}
\section{Capacitated $k$-Median with Penalties}
\label{sec-CkMP-algo}
$\text{CkM}p$ is a variation of $\text{CkM}$ where we also have a fixed penalty cost $\pj{j}$ associated with each client $j \in {\cal C}$. For a solution ${\cal S}$, let $\pencli{{\cal S}}$ denote the set of clients that pay penalties in ${\cal S}$. The clients ${\cal C} \setminus \pencli{{\cal S}}$ are serviced by the facilities opened in ${\cal S}$. We borrow the notations from Section \ref{sec1}. Then,
$\cost{{\cal S}}$ is $\sumlimits{s \in {\cal S}}{} \sumlimits{j \in \bs{s}}{} \sj{j} + \sumlimits{j \in \pencli{{\cal S}}}{} \pj{j}$. For ease of exposition, we give a $(9 + \epsilon, 8/3)$ algorithm and its analysis. The factor is reduced to $(3 + \epsilon)$ in exactly the same manner as in Section~\ref{sec-CkM-algo2}.
\subsection{$(9+\epsilon, 8/3)$ Algorithm}
We start with an initial feasible solution ${\cal S}_0$ such that $\abs{{\cal S}_0} = 8k/3$.
The clients are assigned by solving min cost flow problem over the facilities ${\cal S}_0 \cup \{\delta\}$, where $u_{\delta}=\abs{{\cal C}}$ and $\forall j \in {\cal C}, ~\dist{\delta}{j} = \pj{j}$. Clients assigned to $\delta$ pay penalty in the solution ${\cal S}_0$.
The operations available to the algorithm are ${\mathcal S_W}apone{s}{o}$ and ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$
Let ${\cal S}$ be the locally optimal solution with $\cost{{\cal S}} = \sumlimits{s \in {\cal S}}{} \sumlimits{j \in \bs{s}}{} \sj{j} + \sumlimits{j \in \pencli{{\cal S}}}{} \pj{j}$.
\subsection{Analysis}
Let ${\cal O}$ be an optimal solution for the problem with $\cost{{\cal O}} = \sumlimits{o \in {\cal O}}{} \sumlimits{j \in \bo{o}}{} \oj{j} + \sumlimits{j \in \pencli{{\cal O}}}{} \pj{j}$.
Swaps are defined exactly in the same manner as done in Section~\ref{sec1}. However, there is a slight change in the re-assignment.
Consider the ${\mathcal S_W}apone{s}{o}: s \in {\mathcal S_L}$ and $o \in {\cal O}$; reassignments are done as follows: $\forall j \in \bo{o}$, assign $j$ to $o$; $\forall j \in \bs{s} \cap \pencli{{\cal O}}$, $j$ pays penalty $\pj{j}$ and $\forall j \in \bs{s} \setminus \{\bo{o} \cup \pencli{{\cal O}} \}$, $j$ is assigned to $\sigmas{\tau(j)}$.
As ${\cal S}$ is locally optimal, we have
\begin{multline} \label{eq0.1}
\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}} }{}(\oj{j} - \sj{j}) +
\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}(\oj{j} - \pj{j}) +
\sumlimits{j \in \bs{s} \cap \pencli{{\cal O}}}{}(\pj{j} - \sj{j}) + \\
\sumlimits{j \in \bs{s} \setminus \bo{o} \setminus \pencli{{\cal O}}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
Similarly for ${\mathcal S_W}aptwo{s_1}{s_2}{o_1}{o_2}$, we have
\begin{multline} \label{eq0.2}
\sumlimits{j \in \bo{o_1}\cup \bo{o_2}}{}(\oj{j} - \sj{j}) +
\sumlimits{j \in \{\bo{o_1}\cup \bo{o_2} \} \cap
\pencli{{\cal S}}}{}(\oj{j} - \pj{j}) +\\
\sumlimits{j \in \{ \{\bs{s_1}\cup \bs{s_2} \} \cap
\pencli{{\cal O}}}{}(\pj{j} - \sj{j}) + \\
\sumlimits{j \in \{\{\bs{s_1}\cup \bs{s_2} \} \setminus \{\bo{o_1}\cup \bo{o_2} \} \}\setminus \pencli{{\cal O}} \}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
and a facility $s \in {\mathcal S_L}$ may be swapped out at most $3$ times. Thus summing over all swaps we have
\begin{multline} \label{eq0.3}
\sumlimits{o \in {\cal O}_1}{}(\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{}(\oj{j} - \sj{j}) +
\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}(\oj{j} - \pj{j})) +\\
\sumlimits{o \in {\cal O}_2}{}(\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} 2(\oj{j} - \sj{j}) +
\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{} 2 (\oj{j} - \pj{j}) )+\\
\sumlimits{o \in {\cal O}_3}{}(\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} 3(\oj{j} - \sj{j}) +
\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{} 3 (\oj{j} - \pj{j})) +\\
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s} \cap
\pencli{{\cal O}}}{} 3 (\pj{j} - \sj{j})+
\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{} 3(\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) > 0
\end{multline}
Note that for all $s \in {\cal S}$, $\pj{j} - \sj{j} \ge 0$ for all $j \in \bs{s} \cap \pencli{{\cal O}}$ otherwise $j$ would have paid penalty in ${\cal S}$ too. Thus we can write $\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s} \cap \pencli{{\cal O}}}{} 3 (\pj{j} - \sj{j}) \leq \sumlimits{s \in {\cal S}}{}\sumlimits{j \in \bs{s} \cap \pencli{{\cal O}}}{} 3 (\pj{j} - \sj{j})$
Rearranging, we get
\begin{multline} \label{eq0.4}
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} \sj{j} +
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}\pj{j} +
\sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} \sj{j} \le\\
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} 3\oj{j}
+ \sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}3\oj{j}
+ \sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} 3\pj{j} \\
+ \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{} 6\oj{j}
\end{multline}
since
$\sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{} (\oj{j} + \oj{\tau(j)} + \sj{\tau(j)} - \sj{j}) = \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{}2\oj{j}$
by the property of $\tau$.
Adding $\sumlimits{j \in \pencli{{\cal S}} \cap \pencli{{\cal O}}}{}\pj{j}$ on both the sides and re-arranging, we get
\begin{multline} \label{eq0.4}
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} \sj{j} +
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}\pj{j} +
\sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} \sj{j} + \\
\sumlimits{j \in \pencli{{\cal S}} \cap \pencli{{\cal O}}}{}\pj{j} \le
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} 3\oj{j}
+ \sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \cap
\pencli{{\cal S}}}{}3\oj{j} \\
+ \sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} 3\pj{j}
+ \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{} 6\oj{j} + \sumlimits{j \in \pencli{{\cal S}} \cap \pencli{{\cal O}}}{}\pj{j}
\end{multline}
Rearranging the terms, we get
\begin{multline} \label{eq0.5}
\sumlimits{s \in {\cal S}}{}\sumlimits{j \in \bs{s} \setminus \pencli{{\cal O}}}{} \sj{j} + \sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} \sj{j} + \sumlimits{j \in
\pencli{{\cal S}} \setminus \pencli{{\cal O}}}{}\pj{j} + \sumlimits{j \in
\pencli{{\cal S}} \cap \pencli{{\cal O}}}{}\pj{j}
\le\\
\sumlimits{o \in {\cal O}_1 \cup {\cal O}_2 \cup {\cal O}_3}{}\sumlimits{j \in \bo{o} \setminus \pencli{{\cal S}}}{} 3\oj{j}
+ \sumlimits{j \in \pencli{{\cal O}} \setminus \pencli{{\cal S}}}{} 3\pj{j} \\
+ \sumlimits{j \in
\pencli{{\cal S}} \setminus \pencli{{\cal O}}}{}3\oj{j}
+ \sumlimits{j \in
\pencli{{\cal S}} \cap \pencli{{\cal O}}}{}\pj{j}
+ \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}} }{} 6\oj{j}
\end{multline}
which gives
\begin{multline} \label{eq0}
\sumlimits{s \in {\cal S}}{}\sumlimits{j \in \bs{s}}{} \sj{j}
+ \sumlimits{j \in \pencli{{\cal S}} }{}\pj{j}
\le
\sumlimits{o \in {\cal O}}{}\sumlimits{j \in \bo{o} }{} 3\oj{j}
+ \sumlimits{j \in \pencli{{\cal O}}}{} 3\pj{j}
+ \sumlimits{s \in {\mathcal S_L}}{}\sumlimits{j \in \bs{s}\setminus \pencli{{\cal O}}}{} 6\oj{j} \\
\le
\sumlimits{o \in {\cal O}}{}\sumlimits{j \in \bo{o} }{} 9\oj{j}
+ \sumlimits{j \in \pencli{{\cal O}}}{} 3\pj{j} \le 9\cost{{\cal O}}
\end{multline}
\section{Conclusion}
In this paper, we presented a $((3+\epsilon),8/3)$-factor algorithm for the capacitated $k$-median problem and its penalty version with uniform capacities, using the local search heuristic. There is a trade-off between the approximation factor and the cardinality violation between our work and the existing work. The work in~\cite{KPR} is closest to ours, as it is the only result for the problem based on local search; we improve upon the results in~\cite{KPR} in terms of cardinality violation.
It would be interesting to see whether these results extend to the problem with non-uniform capacities.
\end{document} |
\begin{document}
\title{Tutte polynomial and $G$-parking functions}
\author{Hungyung Chang$^{a}$
\and Jun Ma$^{b,}$\thanks{Email address of the corresponding author: [email protected]}
\and Yeong-Nan Yeh$^{c,}$\thanks{Partially supported by NSC 96-2115-M-001-005}
}\date{}
\vspace*{-1.2cm}\begin{center} \footnotesize
$^{a,b,c}$ Institute of Mathematics, Academia Sinica, Taipei, Taiwan\\
\end{center}
\vspace*{-0.3cm}
\thispagestyle{empty}
\begin{abstract}
Let $G$ be a connected graph with vertex set $\{0,1,2,\ldots,n\}$.
We allow $G$ to have multiple edges and loops. In this paper, we
give a characterization of external activity in terms of some parameters of
$G$-parking functions. In particular, we give the definition of a
bridge vertex of a $G$-parking function and obtain an expression of
the Tutte polynomial $T_G(x,y)$ of $G$ in terms of $G$-parking
functions. We find that the Tutte polynomial enumerates the $G$-parking
functions by the number of bridge vertices.
\end{abstract}
\noindent {\bf Keywords: parking functions; spanning tree; Tutte
polynomial}
\section{Introduction}
J. Riordan \cite{R} define the parking function as follows: $m$
parking spaces are arranged in a line, numbered $1$ to $n$ left to
right; $n$ cars, arriving successively, have initial parking
preferences, $a_i$ for $i$, chosen independently and at random;
$(a_1,\cdots,a_n)$ is called preference function; if space $a_i$ is
occupied, car $i$ moves to the first unoccupied space to the right;
if all the cars can be parked, then the preference function is
called parking function.
Konheim and Weiss \cite{konhein1966} introduced the concept of
parking functions of length $n$ in the study of the linear
probes of random hashing functions. J. Riordan \cite{R} studied
parking functions and derived that the number of parking functions
of length $n$ is $(n+1)^{n-1}$, which coincides with the number of
labeled trees on $n+1$ vertices by Cayley's formula. Several
bijections between the two sets are known (e.g., see
\cite{FR,R,SMP}). Parking functions have been found in connection with
many other combinatorial structures such as acyclic mappings,
polytopes, non-crossing partitions, non-nesting partitions,
hyperplane arrangements, etc. Refer to \cite{F,FR,GK,PS,SRP,SRP2}
for more information.
A parking function $(a_1,\cdots,a_n)$ can equivalently be defined by requiring that its
non-decreasing rearrangement $(b_1,\cdots,b_n)$ satisfies $b_i\leq i$.
Pitman and Stanley generalized the notion of parking functions
in \cite{PS}. Let ${\bf x}=(x_1,\cdots,x_n)$ be a sequence of
positive integers. The sequence $\alpha=(a_1,\cdots,a_n)$ is called
an ${\bf x}$-parking function if the non-decreasing rearrangement
$(b_1,\cdots,b_n)$ of $\alpha$ satisfies $b_i\leq x_1+\cdots +x_i$
for any $1\leq i\leq n$. Thus, the ordinary parking functions are the
case ${\bf x}=(1,\cdots,1)$. By the determinant formula of
Gon\v{c}arov polynomials, Kung and Yan \cite{KY} obtained the
number of ${\bf x}$-parking functions for an arbitrary ${\bf x}$.
See also \cite{Y1,Y2,Y3} for the explicit formulas and properties
for some specified cases of ${\bf x}$.
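As a small illustration of this condition (a hypothetical utility, following the convention of this paragraph in which the entries are positive integers), the following Python snippet tests whether a sequence is an ${\bf x}$-parking function.
\begin{verbatim}
# Check the x-parking condition: the sorted sequence b must satisfy
# b_i <= x_1 + ... + x_i for every i.
from itertools import accumulate

def is_x_parking(alpha, x):
    b = sorted(alpha)
    bounds = list(accumulate(x))
    return len(alpha) == len(x) and all(bi <= ti for bi, ti in zip(b, bounds))

# ordinary parking functions are the case x = (1, 1, ..., 1):
# is_x_parking((1, 1, 3), (1, 1, 1)) -> True
# is_x_parking((2, 2, 3), (1, 1, 1)) -> False
\end{verbatim}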
Recently, Postnikov and Shapiro \cite{postnikov2004} gave a new
generalization, the $G$-parking functions of a graph, building on work of Cori, Rossin and Salvy
\cite{cori2002}. For the
complete graph $G=K_{n+1}$, the functions defined in
\cite{postnikov2004} are exactly the classical parking functions.
Chebikin and Pylyavskyy \cite{Denis2005} established a family of
bijections from the set of $G$-parking functions to the set of spanning
trees of $G$.
Dimitrije Kostic and Catherine H. Yan \cite{kostic} proposed the
notion of a $G$-multiparking function, a natural extension of the
notion of a $G$-parking function, and extended the result of
\cite{Y3} to arbitrary graphs. They constructed a family of
bijections from the set of $G$-multiparking functions to the set of
spanning forests of $G$. In particular, they characterized the
external activity via the bijection induced by the breadth-first
search and gave a representation of the Tutte polynomial by the reversed
sum of $G$-multiparking functions. Given a classical parking
function $\alpha=(a_1,\ldots,a_n)$, let $cr(\alpha)$ be the number
of critical left-to-right maxima in $\alpha$. They also gave an
expression of the Tutte polynomial $T_{K_{n+1}}(x,y)$ of the
complete graph $K_{n+1}$ as follows:
$$T_{K_{n+1}}(x,y)=\sum\limits_{\alpha\in\mathcal{P}_n}x^{cr(\alpha)}y^{{n\choose
2}-\sum\limits_{i=1}^na_i},$$ where $\mathcal{P}_n$ is the set of
classical parking functions of length $n$. Recently, Sen-peng Eu,
Tung-Shan Fu and Chun-Ju Lai \cite{EFL} considered a class of
multigraphs in connection with ${\bf x}$-parking functions, where
${\bf x}=(a,b,\ldots,b)$. They gave the Tutte polynomial of these
multigraphs in terms of ${\bf x}$-parking functions.
Let $G$ be a connected graph with vertex set $\{0,1,2,\ldots,n\}$.
We allow $G$ to have multiple edges and loops. The motivation of
this paper is to extend the results in \cite{Y3} on $K_{n+1}$ to
arbitrary connected graphs and to give a characterization of external
activity in terms of some parameters of $G$-parking functions. To obtain the
characterization for the complete graph $K_{n+1}$, Dimitrije Kostic
and Catherine H. Yan \cite{kostic} used the bijections induced by the
breadth-first search. In this paper, we use the bijections induced
by a vertex ranking. We give the definition of a bridge vertex
of a $G$-parking function, and we obtain an expression of the Tutte
polynomial $T_G(x,y)$ of $G$ in terms of $G$-parking functions. Thus,
we find that the Tutte polynomial enumerates the $G$-parking functions by
the number of bridge vertices.
This paper is organized as follows. In Section $2$, we give the
definition of a bridge vertex of a $G$-parking function. In
Section $3$, we express the Tutte polynomial $T_G(x,y)$ of $G$
in terms of $G$-parking functions.
\section{The bridge vertex of a $G$-parking function}
In this section, we always let $G$ be a connected graph with vertex
set $\{0,1,2,\ldots,n\}$ and edge set $E(G)$. We allow $G$ to have
multiple edges and loops. Let $[n]:=\{1,2,\ldots,n\}$. For any
$I\subseteq V(G)\setminus \{0\}$ and $v\in I$, define
$outdeg_{I,G}(v)$ to be the cardinality of the set $\{\{w,v\}\in
E(G)\mid w\notin I\}$. We give the definition of a $G$-parking
function as follows.
\begin{defn}\label{definition} Let $G$ be a connected graph with vertex set
$V(G)=\{0,1,2,\cdots,n\}$ and edge set $E(G)$. A $G$-parking
function is a function $f:V(G)\rightarrow \mathbb{N}\cup\{-1\}$
such that $f(0)=-1$ and for every non-empty $I\subseteq V(G)\setminus\{0\}$ there exists a
vertex $v\in I$ such that $0\leq f(v)<outdeg_{I,G}(v)$.
\end{defn}
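Definition \ref{definition} can be verified directly by brute force (in time exponential in $|V(G)|$, so only for small examples). The following hypothetical Python snippet, not taken from the literature, does exactly this.
\begin{verbatim}
# Brute-force check of the G-parking condition for small graphs.
from itertools import combinations

def outdeg(v, I, edges):
    # number of edges {w, v} of G with w outside I (loops never count)
    return sum(1 for (a, b) in edges
               if (a == v and b not in I) or (b == v and a not in I))

def is_G_parking(f, V, edges):
    # f: dict vertex -> value; edges: list of pairs, parallel edges repeated
    if f[0] != -1:
        return False
    nonroot = [v for v in V if v != 0]
    for r in range(1, len(nonroot) + 1):
        for I in map(set, combinations(nonroot, r)):
            if not any(0 <= f[v] < outdeg(v, I, edges) for v in I):
                return False
    return True

# e.g. for the triangle K_3:
# is_G_parking({0: -1, 1: 0, 2: 1}, [0, 1, 2], [(0, 1), (0, 2), (1, 2)]) -> True
\end{verbatim}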
For any $i,j\in V(G)$, let $\mu_G(i,j)$ be the number of edges
connecting the vertices $i$ and $j$ in $G$. For establishing the
bijections, all edges of $G$ are colored: the edges
connecting the vertices $i$ and $j$ are colored $0,1,\cdots,\mu_G(i,j)-1$
respectively, for any $i,j\in V(G)$. We use $\{i,j\}_k$ to denote the
edge $e\in E(G)$ connecting the two vertices $i$ and $j$ with color $k$.
A subgraph $T$ of $G$ is called a subtree of $G$ rooted at $m$ if
it contains the vertex $m$ and there is a unique path from
$i$ to $m$ in $T$ for every vertex $i$ of $T$. If a subtree contains
all vertices of $G$, then we say the subtree is a spanning tree of
$G$. Let $\mathcal{P}_{G}$ and $\mathcal{T}_{G}$ be the sets of
$G$-parking functions and (colored) spanning trees of $G$
respectively. For any $T\in\mathcal{T}_{G}$ and $e\in T$, let
$c_T(e)$ denote the color of the edge $e$ in $T$. Kostic and Yan
\cite{kostic} give an algorithm $\Phi$ which
is a bijection from the set $\mathcal{P}_G$ to $\mathcal{T}_G$. We give a description
of the algorithm as follows.\\
\noindent{\bf Algorithm A. (Kostic, Yan \cite{kostic})}

{\bf Step 1:} Let $val_0=f$, $P_0=\emptyset$, $T_0=Q_0=\{0\}$.

{\bf Step 2:} At time $i\geq 1$, let $v$ be the vertex of $Q_{i-1}$ with the minimum rank $\tau(v)$, where $\tau$ is a vertex ranking in $S_n$.

{\bf Step 3:} Let $N=\{w\notin P_{i-1}\mid 0\leq val_{i-1}(w)\leq
\mu(w,v)-1\text{ and }\{w,v\}_{val_{i-1}(w)}\in E(G)\}$ and
$\hat{N}=\{w\notin P_{i-1}\mid val_{i-1}(w)\geq \mu(w,v)\text{ and
}\{w,v\}\in E(G)\}$. Set
$val_i(w)=val_{i-1}(w)-\mu_G(w,v)$ for all $w\in \hat{N}$. For any
other vertex $w$, set $val_i(w)=val_{i-1}(w)$. Update $P_i$ and $Q_i$
by letting $P_i=P_{i-1}\cup\{v\}$, $Q_i=Q_{i-1}\cup
N\setminus \{v\}$. Let $T_i$ be the graph on $P_{i}\cup Q_i$ whose
edges are obtained from those of $T_{i-1}$ by joining the edges
$\{w,v\}_{val_{i-1}(w)}$ for each $w\in N$.\\
Define $\Phi=\Phi_{G,\tau}:\mathcal{P}_{G}\rightarrow
\mathcal{T}_{G}$ by letting $\Phi(f)=T_n$. Let $T\in
\mathcal{T}_{G}$. Note that the vertex $0$ is the root of $T$. For
any non-root vertex $v\in [n]$, there is a unique path from $v$ to
$0$ in $T$. Define the height of $v$ to be the number of edges in
this path. If the height of a vertex $w$ is less than the height of
$v$ and $\{v,w\}_k$ is an edge of $T$, then $w$ is the predecessor
of $v$, $v$ is a child of $w$, and we write $w={ pre}_T(v)$ and $v\in {
child}_T(w)$. Suppose $T'$ is a subtree of $G$. A leaf of $T'$ is a
vertex of $T'$ with degree $1$ in $T'$. Denote the set of leaves of
$T'$ by $Leaf(T')$. The following algorithm gives the inverse
map of
$\Phi$.\\
\noindent{\bf Algorithm B (Kostic, Yan \cite{kostic}).}

{\bf Step 1.} Let $\tau$ be a vertex ranking in $S_n$. Assume
$v_0,v_1,v_2,\ldots, v_i$ are determined, where $v_0=0$. Let
$V_i=\{v_0,v_1,v_2,\ldots, v_i\}$ and $W_i=\{v\notin V_i\mid
\{v,w\}_k\in T\text{ for some }w\in V_i\}$. Let $T'$ be the subtree
obtained by restricting $T$ to $V_i\cup W_i$. Let $v_{i+1}$ be the
vertex $w\in Leaf(T')$ such that $\tau(w)\leq\tau(u)$ for all $u\in Leaf(T')$.

{\bf Step 2.} Let $\pi=(v_1\ldots v_n)$ be the order of the
vertices of $G$ determined by Step 1. Set $f(0)=-1$. For any other
vertex $v$, let $f(v)$ be equal to the sum of
the color of the edge connecting the vertices $v$ and $pre_T(v)$ and the cardinality of the set
$N(v)$, where
$N(v)=N_{G,T,\tau}(v)=\{w\mid \{v,w\}_k\in E(G)\text{ and }
\pi^{-1}(w)<\pi^{-1}({\rm pre}_T(v))\}$.\\
Define $\Theta=\Theta_{G,\tau}:\mathcal{T}_{G}\rightarrow
\mathcal{P}_{G}$ by letting $\Theta_{G,\tau}(T)=f_T$. Then $\Theta$
is the inverse of $\Phi$. Note that the order $\pi=v_1v_2\ldots v_n$
in Algorithm B is exactly the order in which the vertices of $G$
are placed into the set $P_i$ when running Algorithm A on $f$.
Define $Ord=Ord_{G,\tau}:\mathcal{P}_{G}\rightarrow S_{n}$ by
letting $Ord_{G,\tau}(f)=(v_1v_2\ldots v_n)$, where the order
$0=v_0,v_1,v_2,\ldots, v_n$ is obtained by Algorithm B, i.e.,
$Ord_{G,\tau}(f)_i=u$ and $Ord^{-1}_{G,\tau}(f)_u=i$ if $v_i=u$ for
all $i\in [n]$. Furthermore, let $Rea(f)$ be the function such that
$Rea(f)(i)=f(Ord(f)_i)$ for all $i\in [n]$. We say $Ord(f)$ and
$Rea(f)$ are the order and the rearrangement of the $G$-parking
function $f$ respectively. Hence, for any $f\in\mathcal{P}_G$, we
can obtain a pair $(Rea(f),Ord(f))$.
\begin{defn} Let $f\in\mathcal{P}_G$ and $v\in V(G)$. Suppose $Ord(f)_i=v$. Let
$I_v=I_{G,\tau,f,v}=\{Ord(f)_j\mid j\geq i\}$. The vertex $v$ is
said to be {\it $f$-critical} if $f(v)={\rm outdeg}_{I_v}(v)-1$.
\end{defn}
Define $C_f=C_{G,\tau,f}$ to be the set of all the $f$-critical
vertices. Clearly, $C_f\neq\emptyset$ for any $f\in \mathcal{P}_G$
since $0\in C_f$.
\begin{exa} Let us consider the following graph $G$. Let $\tau$ be the identity permutation.\\
\begin{center}\includegraphics[width=3cm]{graph.eps}\\
Fig 1. A graph $G$\end{center} We list all the $G$-parking functions
$f$ as well as the corresponding $Ord(f)$, $Rea(f)$ and $C_f$ as
follows.
$$\begin{array}{|l|l|l|l|}
\hline G\text{-parking functions }f&Ord(f)&Rea(f)&C_f\\
\hline f_1=(-1,0,0,0)&(0,1,2,3)&(-1,0,0,0)&\{0,1,2\}\\
\hline f_2=(-1,0,0,1)&(0,1,2,3)&(-1,0,0,1)&\{0,1,2\}\\
\hline f_3=(-1,0,0,2)&(0,1,2,3)&(-1,0,0,2)&\{0,1,2,3\}\\
\hline f_4=(-1,0,1,0)&(0,1,3,2)&(-1,0,0,1)&\{0,1,2\}\\
\hline f_5=(-1,0,1,1)&(0,1,3,2)&(-1,0,1,1)&\{0,1,2,3\}\\
\hline f_6=(-1,1,0,0)&(0,3,1,2)&(-1,0,1,0)&\{0,1,3\}\\
\hline f_7=(-1,1,1,0)&(0,3,1,2)&(-1,0,1,1)&\{0,1,2,3\}\\
\hline f_8=(-1,2,0,0)&(0,3,2,1)&(-1,0,0,2)&\{0,1,2,3\}\\
\hline
\end{array}
$$
\begin{center}Table 1. All the $G$-parking functions
\end{center}
\end{exa}
\begin{defn} Let $f\in\mathcal{P}_G$ and $v\in V(G)\setminus \{0\}$.
Suppose $Ord(f)_i=v$. A $G$-parking function $g$ is {\it weak
$v$-identical to $f$} if it satisfies the following conditions:

(1) $Rea(g)(j)=Rea(f)(j)$ and $Ord(g)_j=Ord(f)_j$ for all $j\in
[i-1]$,

(2) $g(v)\geq f(v)$, and

(3) $g(w)\geq {\rm outdeg}_{I_{v}}(w)$
for all $w\in I_v$ with $\tau(w)<\tau(v)$.\\
Furthermore, $g$ is {\it strong $v$-identical to $f$} if (1) $g$
is weak $v$-identical to $f$ and (2) $Ord(g)_i=v$.
\end{defn}
Given $f\in\mathcal{P}_G$ and $v\in V(G)\setminus \{0\}$, define
$$W_{v,f}=W_{G,v,\tau,f}=\{g\in\mathcal{P}_G\mid g\text{ is weak
}v\text{-identical to }f\}$$ and
$$S_{v,f}=S_{G,v,\tau,f}=\{g\in\mathcal{P}_G \mid g\text{ is strong
}v\text{-identical to }f\}.$$ It is easy to see that
$S_{v,f}\subseteq W_{v,f}$ and $g(v)=f(v)$ for all $g\in S_{v,f}$ if
$v\in C_f$.
\begin{lem}\label{nobridge} Let $G$ be a connected graph and $f$ a $G$-parking function.
Let $e$ be an edge of $G$ connecting the vertices $w$ and $v$.
Suppose that $e$ is a bridge of $G$ and that the vertices $w$ and $0$ are
in the same component after deleting the edge $e$. Then $v\in B(f)$
for any $f\in\mathcal{P}_G$, where $B(f)$ denotes the set of $f$-bridge vertices defined below.
\end{lem}
\begin{proof} Since $e$ is a bridge of $G$ and the vertices $w$ and $0$ are
in the same component after deleting the edge $e$, we have $f(v)=0$,
$Ord(f)^{-1}(w)<Ord(f)^{-1}(v)$ and $v\in C_f$ for all
$f\in\mathcal{P}_G$. Given $f\in\mathcal{P}_G$, suppose
$Ord(f)_i=v$. Assume that $W_{v,f}\neq S_{v,f}$, i.e., there is a
$g\in\mathcal{P}_G$ such that $g\in W_{v,f}$ and $g\notin S_{v,f}$.
Now $g\notin S_{v,f}$ implies $Ord(g)_i\neq v$. Let $u=Ord(g)_i$. Then
$\tau(u)>\tau(v)$ and $Ord(g)^{-1}(u)<Ord(g)^{-1}(v)$ since $g\in
W_{v,f}$. Note that
$Ord(g)^{-1}(w)=Ord(f)^{-1}(w)<Ord(f)^{-1}(v)=Ord(g)^{-1}(u)$. So
we must have $Ord(g)^{-1}(v)<Ord(g)^{-1}(u)$ by Algorithm A, a
contradiction. Hence, $W_{v,f}= S_{v,f}$ and $v\in B(f)$.
\end{proof}
\begin{defn} Let $f\in\mathcal{P}_G$. A vertex $v\in V(G)\setminus\{0\}$ is said to be an $f$-bridge vertex if
$v\in C_f$ and $|W_{v,f}|=|S_{v,f}|$.
\end{defn}
Define $B(f)=B_{G,\tau}(f)$ as the set of the $f$-bridge
vertices of $f$, $b(f)=b_{G,\tau}(f)=|B_{G,\tau}(f)|$ and
$w(f)=w_{G}(f)=|E(G)|-|V(G)|-\sum\limits_{i=0}^nf(i)$.
\begin{exa} We consider the graph $G$ in Fig 1. By Table 1,
$f_3=(-1,0,0,2)$ is a $G$-parking function and
$C_{f_3}=\{0,1,2,3\}$. It is easy to check the results in Table 2.
$$\begin{array}{|l|l|}
\hline W_{1,f_3}=\{f_i\mid i\in [8]\}&S_{1,f_3}=\{f_i\mid i\in [5]\}\\
\hline W_{2,f_3}=\{f_i\mid i\in [5]\}&S_{2,f_3}=\{f_i\mid i\in [3]\}\\
\hline W_{3,f_3}=\{f_i\mid i\in [3]\}&S_{3,f_3}=\{f_i\mid i\in
[3]\}\\
\hline
\end{array}
$$
\begin{center} Table 2. $W_{v,f_3}$ and $S_{v,f_3}$, where $v\in C_{f_3}$
\end{center} Hence, $B(f_3)=\{3\}$. We list all the $G$-parking functions as
well as the corresponding parameters $b(f)$ and $w(f)$ in the
following table.
$$\begin{array}{|l|l|l|}
\hline G\text{-parking function }f&B(f)&(b(f),w(f))\\
\hline f_1=(-1,0,0,0)&\emptyset&(0,2)\\
\hline f_2=(-1,0,0,1)&\emptyset&(0,1)\\
\hline f_3=(-1,0,0,2)&\{3\}&(1,0)\\
\hline f_4=(-1,0,1,0)&\{2\}&(1,1)\\
\hline f_5=(-1,0,1,1)&\{2,3\}&(2,0)\\
\hline f_6=(-1,1,0,0)&\{3\}&(1,1)\\
\hline f_7=(-1,1,1,0)&\{2,3\}&(2,0)\\
\hline f_8=(-1,2,0,0)&\{1,2,3\}&(3,0)\\
\hline
\end{array}
$$
\begin{center}
Table 3. $G$-parking functions $f$ as well as the corresponding
parameters $b(f)$ and $w(f)$
\end{center}
We note that the Tutte polynomial $T_G(x,y)$ of $G$ in Fig 1
satisfies
$$T_G(x,y)=x^3+2x^2+x+2xy+y+y^2=\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}.$$
\end{exa}
\section{A new expression
of the Tutte polynomial} In this section, we prove the main
theorem of this paper. Suppose that $e$ is an edge connecting the
vertices $i$ and $j$ in $G$, where $i<j$. Define a graph $G{\setminus
e}$ as follows. The graph $G{\setminus e}$ is obtained from $G$ by
contracting the vertices $i$ and $j$; that is, to get
$G{\setminus e}$ we identify the two vertices $i$ and $j$ as a new
vertex $i$. Define $G-e$ as the graph obtained by deleting the edge
$e$ from $G$.
Let $NB_G(0)$ be the set of the vertices which are adjacent to the
vertex $0$ in $G$. We consider the case in which $e$ is an edge of
$G$ connecting the root $0$ to the vertex $u$, where $u$ satisfies
$\tau(u)\leq\tau(w)$ for all $w\in NB_G(0)$. Let
$\mathcal{P}_G^0=\{f\in\mathcal{P}_G\mid f(u)=0\}$ and
$\mathcal{P}_G^1=\{f\in\mathcal{P}_G\mid f(u)\geq 1\}$. Clearly,
$\mathcal{P}_G^0\cap\mathcal{P}_G^1=\emptyset$ and
$\mathcal{P}_G=\mathcal{P}_G^0\cup\mathcal{P}_G^1$. For any $f\in
\mathcal{P}_G^0$, let $g=\phi(f)$ be such that $g(w)=f(w)$ for any
$w\neq u$. For any $f\in \mathcal{P}_G^1$, let $g=\varphi(f)$ be such
that $g(w)=f(w)$ for any $w\neq u$ and $g(u)=f(u)-1$.
\begin{lem}\label{lemmabijectionu=0} (1) The mapping $\phi$ is a bijection from $\mathcal{P}^0_G$
to $\mathcal{P}_{G\setminus e}$ with $w_{G\setminus
e}(\phi(f))=w_G(f)$.

(2) For any $f\in\mathcal{P}_G^0$, we have
$B_{G,\tau}(f)\setminus\{u\}=B_{G\setminus e,\tau}(\phi(f))$.
\end{lem}
\begin{proof} (1) For any $I\subset V(G\setminus e)$ with $0\notin I$ and
$w\in I$, we have $outdeg_{I,G\setminus e}(w)=outdeg_{I,G}(w)$. This
implies that $g=\phi(f)$ is a $(G\setminus e)$-parking function.
Conversely, for any $g\in\mathcal{P}_{G\setminus e}$, let
$f=\phi^{-1}(g)$ be such that $f(w)=g(w)$ for any $w\in V(G\setminus
e)$ and $f(u)=0$. For any $I\subset V(G)$ with $0\notin I$: if $u\in
I$, then $f(u)<outdeg_{I,G}(u)$ since $f(u)=0$ and
$outdeg_{I,G}(u)\geq 1$; otherwise, we have $outdeg_{I,G\setminus
e}(w)=outdeg_{I,G}(w)$ for all $w\in I$, which implies
$f(w)<outdeg_{I,G}(w)$ for some $w\in I$ since $g$ is a $(G\setminus
e)$-parking function. Clearly, $w_{G\setminus e}(g)=w_G(f)$.

(2) Since $\tau(u)<\tau(w)$ for all $w\in NB_G(0)\setminus\{u\}$ and
$f(u)=0$, we have $Ord_{G,\tau}(f)_1=u$ for all
$f\in\mathcal{P}_G^0$. Let $g=\phi(f)$. Then $Ord_{G\setminus e,
\tau}(g)_i=Ord_{G, \tau}(f)_{i+1}$ for all $i\in[n-1]$. Let $v\in
B_{G,\tau}(f)\setminus\{u\}$. Clearly, $outdeg_{I_v,G\setminus
e}(v)=outdeg_{I_v,G}(v)$ and $f(v)=g(v)$. This implies
$g(v)=outdeg_{I_v,G\setminus e}(v)-1$. Hence, $v$
is $g$-critical in $G\setminus e$ since $v$
is $f$-critical in $G$.
Now, assume that $W_{G\setminus e, v,\tau,g}\neq S_{G\setminus
e,v,\tau,g}$, i.e., there is a $g_1\in\mathcal{P}_{G\setminus e}$
such that $g_1\in W_{G\setminus e,v,\tau,g}$ and $g_1\notin
S_{G\setminus e,v,\tau,g}$. Suppose $Ord_{G\setminus e,\tau}(g)_i=v$. Then
$Ord_{G\setminus e,\tau}(g_1)_i\neq v$ and, furthermore,
$\tau(Ord_{G\setminus e,\tau}(g_1)_i)>\tau(v)$. Let
$f_1=\phi^{-1}(g_1)$. It is easy to see that
$f_1\in W_{G,v,\tau,f}$ and $f_1\notin S_{G,v,\tau,f}$, a
contradiction.
Conversely, let $v\in B_{G\setminus e,\tau}(g)$. Clearly, $v\neq u$.
Assume $W_{G,v,\tau,f}\neq S_{G,v,\tau,f}$, where $f=\phi^{-1}(g)$,
i.e., there is an $f_1\in\mathcal{P}_G$ such that $f_1\in
W_{G,v,\tau,f}$ and $f_1\notin S_{G,v,\tau,f}$. Let $g_1=\phi(f_1)$.
Similarly, we can obtain $W_{G\setminus e,v,\tau,g}\neq
S_{G\setminus e,v,\tau,g}$, a contradiction.
\end{proof}
\begin{lem}\label{lemmabijectionu=1} (1) The mapping $\varphi$ is a bijection from $\mathcal{P}^1_G$
to $\mathcal{P}_{G-e}$ with $w_{G-e}(\varphi(f))=w_G(f)$.

(2) For any $f\in\mathcal{P}_G^1$, we have
$B_{G,\tau}(f)=B_{G-e,\tau}(\varphi(f))$.
\end{lem}
\begin{proof} Since $f(u)\geq 1$, the edge $\{0,u\}$ is not a bridge, so
$G-e$ is still a connected graph.

(1) For any $I\subset V(G-e)$ with $0\notin I$ and $w\in I$, we
have $outdeg_{I,G-e}(w)=outdeg_{I,G}(w)$ if $w\neq u$, and
$outdeg_{I,G-e}(w)=outdeg_{I,G}(w)-1$ if $w=u$. Note that
$g(u)=f(u)-1$. Hence, $g=\varphi(f)$ is a $(G-e)$-parking function.
Conversely, for any $g\in\mathcal{P}_{G-e}$, let $f=\varphi^{-1}(g)$
be such that $f(w)=g(w)$ for any $w\in V(G-e)$ and $f(u)=g(u)+1$. For
any $I\subset V(G)$ with $0\notin I$, we have
$outdeg_{I,G}(w)=outdeg_{I,G-e}(w)$ if $w\neq u$, and
$outdeg_{I,G}(w)=outdeg_{I,G-e}(w)+1$ if $w=u$. Note that
$f(u)=g(u)+1$. This implies $f(w)<outdeg_{I,G}(w)$ for some $w\in I$
since $g$ is a $(G-e)$-parking function. Clearly,
$w_{G-e}(g)=w_G(f)$.

(2) Note that $Ord_{G,\tau}(f)=Ord_{G-e,\tau}(\varphi(f))$ for all
$f\in\mathcal{P}_G^1$. For any $f\in\mathcal{P}_G^1$, it is easy to see
that the vertex $v$ is $\varphi(f)$-critical in $G-e$ if and only if
it is $f$-critical in $G$.
Now, given $f\in\mathcal{P}_G^1$, let $g=\varphi(f)$. For any $v\in
B_{G,\tau}(f)$, assume that $W_{G-e,v,\tau,g}\neq
S_{G-e,v,\tau,g}$, i.e., there is a $g_1\in W_{G-e,v,\tau,g}$ with
$g_1\notin S_{G-e,v,\tau,g}$. Suppose $Ord_{G-e,\tau}(g)_i=v$. Then
$Ord_{G-e,\tau}(g_1)_i\neq v$ and, furthermore,
$\tau(Ord_{G-e,\tau}(g_1)_i)>\tau(v)$. Let
$f_1=\varphi^{-1}(g_1)$. It is easy to see that
$f_1\in W_{G,v,\tau,f}$ and $f_1\notin S_{G,v,\tau,f}$, a
contradiction.
Conversely, let $v\in B_{G-e,\tau}(g)$. Assume $W_{G,v,\tau,f}\neq
S_{G,v,\tau,f}$,
i.e., there is an
$f_1\in\mathcal{P}_G$ such that $f_1\in W_{G,v,\tau,f}$ and
$f_1\notin S_{G,v,\tau,f}$. Let $g_1=\varphi(f_1)$. Similarly, we can
obtain $W_{G-e,v,\tau,g}\neq S_{G-e,v,\tau,g}$, a contradiction.
\end{proof}
We are in a position to prove the main theorem.
\begin{thm}\label{theorem}Suppose that $G$ is a connected graph with vertex set $\{0,1,\ldots,n\}$
and $\tau$ is a vertex ranking in $S_n$. Let $T_G(x,y)$ be the Tutte polynomial of $G$. Then
$T_G(x,y)=\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}$.
\end{thm}
\begin{proof} Let $P_{G}(x,y)=\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}$. Let $e$ be an edge of $G$ connecting the vertices
$u$ and $0$, where $u$ satisfies $\tau(u)\leq\tau(w)$ for all $w\in NB_G(0)$. We
consider the following three cases.\\
{\it Case 1.} $e$ is a loop of $G$.

For any $f\in\mathcal{P}_G$, it is easy to see that $f$ is a
$(G-e)$-parking function as well. Note that
\begin{eqnarray*}w_{G-e}(f)&=&|E(G-e)|-|V(G-e)|-\sum\limits_{i=0}^nf(i)\\
&=&|E(G)|-1-|V(G)|-\sum\limits_{i=0}^nf(i)\\
&=&w_{G}(f)-1.\end{eqnarray*} Hence,
\begin{eqnarray*}P_{G}(x,y)&=&\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}\\
&=&\sum\limits_{g\in\mathcal{P}_{G-e}}x^{b(g)}y^{w(g)+1}\\
&=&yP_{G-e}(x,y).\end{eqnarray*} {\it Case 2.} $e$ is a bridge of
$G$.

For any $f\in\mathcal{P}_G$, we have $f(u)=0$ since $e$ is a bridge.
So, $\mathcal{P}_G=\mathcal{P}_G^0$. Let $\phi$ be defined as
in Lemma \ref{lemmabijectionu=0}. From Lemma \ref{nobridge}, we have
$u\in B_{G,\tau}(f)$ for all $f\in\mathcal{P}_G$. Lemma
\ref{lemmabijectionu=0} (2) tells us that $b_{G}(f)=b_{G\setminus
e}(\phi(f))+1$. Hence,
\begin{eqnarray*}P_{G}(x,y)&=&\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}\\
&=&\sum\limits_{g\in\mathcal{P}_{G\setminus e}}x^{b(g)+1}y^{w(g)}\\
&=&xP_{G\setminus e}(x,y).\end{eqnarray*} {\it Case 3.} $e$ is
neither a loop nor a bridge of $G$.

First, we claim that $u\notin B_G(f)$ for any $f\in\mathcal{P}_G^0$.
Since $\tau(u)\leq\tau(w)$ for all $w\in NB_G(0)$ and $f(u)=0$, we
have $Ord_{G,\tau}(f)_1=u$ for all $f\in\mathcal{P}_G^0$. If there
are at least two edges connecting the vertices $0$ and $u$ in $G$,
i.e., $\mu_G(0,u)\geq 2$, then $u$ is not $f$-critical for any
$f\in\mathcal{P}_G^0$. So, we suppose $\mu_G(0,u)=1$. This implies
that $u$ is $f$-critical for any $f\in\mathcal{P}_G^0$. Since $e$
is neither a loop nor a bridge of $G$, there exists an
$f'\in\mathcal{P}_G$ such that $f'(u)\geq 1$. Suppose
$Ord_{G,\tau}(f')_1=w$. Then $w\neq u$, $f'(w)=0$ and
$\tau(w)>\tau(u)$. Hence, $f'\in W_{G,u,\tau,f}$ and $f'\notin
S_{G,u,\tau, f}$. This tells us that $u\notin B_{G}(f)$.
By Lemmas \ref{lemmabijectionu=0} and \ref{lemmabijectionu=1}, we
have
\begin{eqnarray*}P_{G}(x,y)&=&\sum\limits_{f\in\mathcal{P}_G}x^{b(f)}y^{w(f)}\\
&=&\sum\limits_{f\in\mathcal{P}_{G}^0}x^{b(f)}y^{w(f)}+\sum\limits_{f\in\mathcal{P}_{G}^1}x^{b(f)}y^{w(f)}\\
&=&\sum\limits_{g\in\mathcal{P}_{G\setminus e}}x^{b(g)}y^{w(g)}+\sum\limits_{g\in\mathcal{P}_{G-e}}x^{b(g)}y^{w(g)}\\
&=&P_{G\setminus e}(x,y)+P_{G-e}(x,y).\end{eqnarray*}
Finally, we consider the initial conditions. Let $G$ be the graph with
vertex set $\{0\}$ and $E(G)=\emptyset$. There is a unique
$G$-parking function, $f(0)=-1$. Clearly, $w(f)=0$ and
$B_G(f)=\emptyset$, so $P_G(x,y)=1$. Next, let $G$ be the graph with
vertex set $\{0,1\}$ and $E(G)=\{\{0,1\}\}$. There is a unique
$G$-parking function, $f(0)=-1$ and $f(1)=0$. It is easy to see that
$B_G(f)=\{1\}$ and $w(f)=0$. Hence, $P_G(x,y)=x$. Since $P_G(x,y)$
satisfies the same deletion--contraction recurrence and the same initial
conditions as the Tutte polynomial, we conclude $P_G(x,y)=T_G(x,y)$. This completes the
proof.
\end{proof}
Let us define a multiset $BW_G=BW_{G,\tau}$ as
$BW_{G,\tau}=\{(b_{G,\tau}(f),w_{G}(f))\mid
f\in\mathcal{P}_{G}\}.$ By Theorem \ref{theorem}, we
immediately obtain the following corollary.
\begin{cor} Let $G$ be a connected graph. Suppose $\tau_1$ and
$\tau_2$ are two vertex rankings. Then $BW_{G,\tau_1}=BW_{G,\tau_2}$.
\end{cor}
Next, we consider the case in which $G$ is the complete graph
$K_{n+1}$ and $\tau$ is the identity permutation. Recall that the
$K_{n+1}$-parking functions are exactly the classical parking
functions, i.e., $f=(f(0),f(1),\ldots,f(n))\in\mathcal{P}_{K_{n+1}}$
if and only if $(f(1),\ldots,f(n))$ is a classical parking function.
\begin{defn}\label{classical} Given a classical parking function
$\alpha=(a_1,a_2,\ldots,a_n)$, we say that a term $a_i=j$ is an
$\alpha$-critical maximum if there are exactly
$n-1-j$ terms larger than $j$ and $k<i$ whenever $a_k>j$.
\end{defn}
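For example, in the classical parking function $\alpha=(0,2,1)$ the terms $a_2=2$ and $a_3=1$ are $\alpha$-critical maxima, whereas $a_1=0$ is not, since the terms $a_2,a_3>0$ lie to its right.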
\begin{lem} Let $\alpha=(a_1,a_2,\ldots,a_n)$ be a classical parking
function and $a_i=j$ an $\alpha$-critical maximum. Then there are
exactly $j$ terms less than $j$.
\end{lem}
\begin{proof} Let $\bar{\alpha}=(-1,a_1,a_2,\ldots,a_n)$. Then
$\bar{\alpha}\in\mathcal{P}_{K_{n+1}}$. Let $T=\Pi(\bar{\alpha})$
be the spanning tree obtained by Algorithm A and $v=pre_T(i)$. Then
$Ord^{-1}(\bar{\alpha})_{v}=j$. Thus,
$Ord^{-1}(\bar{\alpha})_{i}=j+1$ since $a_i=j$ is an
$\alpha$-critical maximum. From Algorithm B,
$a_{Ord(\bar{\alpha})_k}<j$ for all $1\leq k\leq j$, so at least $j$ terms are less
than $j$; since exactly $n-1-j$ terms are larger than $j$ and $a_i=j$, there are exactly $j$ such terms.
\end{proof}
\begin{lem} Let $\alpha=(a_1,a_2,\ldots,a_n)$ be a classical parking
function and $\bar{\alpha}=(-1,a_1,a_2,\ldots,a_n)$. Then $a_i=j$ is an
$\alpha$-critical maximum if and only if the vertex $i$ is an
$\bar{\alpha}$-bridge.
\end{lem}
\begin{proof} First, we suppose $a_i=j$ is an
$\alpha$-critical maximum. By Algorithm A, it is easy to see that the
vertex $i$ is $\bar{\alpha}$-critical, since there are exactly $j$
terms less than $j$ and exactly $n-1-j$ terms larger than $j$. The fact that $k<i$
for all $a_k>a_i$ implies $W_{i,\bar{\alpha }}= S_{i,\bar{\alpha }}$.
Hence, the vertex $i$ is an $\bar{\alpha}$-bridge.
Conversely, we suppose the vertex $i$ is an $\bar{\alpha}$-bridge
and $a_i=j$. Let $T=\Pi(\bar{\alpha})$ be
the spanning tree obtained by Algorithm A and $v=pre_T(i)$. There
are exactly $j$ terms less than $j$ and
$Ord^{-1}(\bar{\alpha})_v+1=Ord^{-1}(\bar{\alpha})_i$ since the
vertex $i$ is $\bar{\alpha}$-critical. If there is a vertex at the right of $i$
which is larger than $i$ in the order $Ord(\bar{\alpha})$, let $w$ be the first such vertex.
Let $T'$ be a new spanning tree obtained from $T$ by deleting the
edge $\{v,i\}$ from $T$ and adding the edge $\{i,w\}$ into $T$ if
$a_w\leq j$, and the edge $\{v,w\}$ otherwise. Let
$\bar{\beta}=\Theta(T')$ be the $K_{n+1}$-parking function given by
Algorithm B. Then $\bar{\beta}\in W_{i,\bar{\alpha }}$ and
$\bar{\beta}\notin S_{i,\bar{\alpha }}$, a contradiction. So we have
proved that for any vertex $w\in [n]$, if $w$ is at the right of $i$
in the order $Ord(\bar{\alpha})$, then $w<i$ and $a_w>j$. Hence,
$a_i$ is an $\alpha$-critical maximum.
\end{proof}
Let $cm(\alpha)$ be the number of critical maxima in a classical
parking function $\alpha$.
\begin{cor}
\begin{eqnarray*}T_{K_{n+1}}(x,y)=\sum\limits_{\alpha\in\mathcal{P}_n}x^{cm(\alpha)}y^{{n\choose 2}-\sum\limits_{i=1}^na_i}
\end{eqnarray*}
where $\mathcal{P}_n$ is the set of classical parking functions
$\alpha=(a_1,a_2,\ldots,a_n)$ of length $n$.
\end{cor}
\begin{exa} We list all the classical parking functions $\alpha$ of
length $3$ as well as the corresponding sets of critical maxima and
$cm(\alpha)$ in the following table.
$$\begin{array}{|l|l|l|l|l|l|}
\hline
parking~function& critical~maxima & cm(\alpha)&parking~function& critical~maxima& cm(\alpha)\\
\hline (0,0,0)&\emptyset&0&(0,0,1)&\emptyset&0\\
\hline(0,0,2)&\{a_3\}&1&(0,1,0)&\emptyset&0\\
\hline(0,1,1)&\emptyset&0&(0,1,2)&\{a_3\}&1\\
\hline(0,2,0)&\{a_2\}&1&(0,2,1)&\{a_2,a_3\}&2\\
\hline(1,0,0)&\emptyset&0&(1,0,1)&\emptyset&0\\
\hline (1,0,2)&\{a_3\}&1&(1,1,0)&\{a_3\}&1\\
\hline (1,2,0)&\{a_2,a_3\}&2& (2,0,0)&\{a_1\}&1\\
\hline (2,0,1)&\{a_1,a_3\}&2& (2,1,0)&\{a_1,a_2,a_3\}&3\\
\hline
\end{array}
$$
\begin{center}
Table 4. The classical parking functions $\alpha$ of length $3$ as
well as their sets of critical maxima
\end{center}
Hence, $T_{K_4}(x,y)=y^3+3y^2+2y+(4y+2)x+3x^2+x^3$.
\end{exa}
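The corollary and the table above can also be checked by brute force. The short Python
sketch below (the helper names are ad hoc, not taken from any package) enumerates the
classical parking functions of length $n$, computes $cm(\alpha)$ directly from
Definition \ref{classical}, and collects the monomials
$x^{cm(\alpha)}y^{{n\choose 2}-\sum_{i}a_i}$; for $n=3$ it reproduces $T_{K_4}(x,y)$ as
computed in the example above.
\begin{verbatim}
from itertools import product
from collections import Counter

def is_parking(a):
    # classical parking function: the nondecreasing rearrangement b
    # satisfies b_i <= i-1, i.e. b[i] <= i with 0-indexing
    b = sorted(a)
    return all(b[i] <= i for i in range(len(a)))

def cm(a):
    # a_i = j is an alpha-critical maximum when exactly n-1-j terms
    # exceed j and every term exceeding j lies to the left of position i
    n = len(a)
    return sum(1 for i, j in enumerate(a)
               if sum(1 for v in a if v > j) == n - 1 - j
               and all(k < i for k, v in enumerate(a) if v > j))

def tutte_complete(n):
    # coefficients of T_{K_{n+1}}(x,y); key (i,j) -> coefficient of x^i y^j
    poly = Counter()
    for a in product(range(n), repeat=n):
        if is_parking(a):
            poly[(cm(a), n * (n - 1) // 2 - sum(a))] += 1
    return dict(poly)

print(tutte_complete(3))
# nonzero coefficients: x^0y^3:1, x^0y^2:3, x^0y^1:2, x^1y^1:4,
# x^1y^0:2, x^2y^0:3, x^3y^0:1, i.e. T_{K_4}(x,y) as above.
\end{verbatim}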
\begin{thebibliography}{99}
\bibitem{cori2002} R. Cori, D. Rossin, B. Salvy, Polynomial ideals
for sandpiles and their Gr\"obner bases, {\it Theoretical Computer
Science} {\bf 276} (2002), no. 1-2, 1--15.
\bibitem{Denis2005} D. Chebikin, P. Pylyavskyy, A family of
bijections between $G$-parking functions and spanning trees, {\it J.
Combin. Theory Ser. A} {\bf 110} (2005), no. 1, 31--41.
\bibitem{kostic} D. Kosti\'{c}, C. H. Yan,
Multiparking functions, graph searching, and the Tutte polynomial,
{\it Adv. in Appl. Math.} {\bf 40} (2008), no. 1, 73--97.
\bibitem{EFL} S.-P. Eu, T.-S. Fu, Symmetric parking functions
and related multigraphs, {\it private communication}.
\bibitem{FR} D. Foata, J. Riordan, Mappings of acyclic and parking functions,
{\it Aequationes Math.} {\bf 10} (1974) 10--22.
\bibitem{F} J. Fran\c{c}on, Acyclic and parking functions, {\it J. Combin. Theory Ser.
A} {\bf 18} (1975) 27--35.
\bibitem{GK} J. D. Gilbey, L. H. Kalikow, Parking functions, valet functions and
priority queues, {\it Discrete Math.} {\bf 197/198} (1999) 351--373.
\bibitem{konhein1966} A. G. Konheim, B. Weiss, An occupancy
discipline and applications, {\it SIAM Journal on Applied
Mathematics} {\bf 14} (1966) 1266--1274.
\bibitem{KY} J. P. S. Kung, C. H. Yan, Gon\v{c}arov polynomials and parking functions,
{\it J. Combin. Theory Ser. A} {\bf 102} (2003) 16--37.
\bibitem{postnikov2004} A. Postnikov, B. Shapiro, Trees, parking
functions, syzygies, and deformations of monomial ideals, {\it
Transactions of the American Mathematical Society} {\bf 356} (2004).
\bibitem{PS} J. Pitman, R. Stanley, A polytope related to empirical
distributions, plane trees, parking functions, and the
associahedron, {\it Discrete Comput. Geom.} {\bf 27} (2002), no. 4, 603--634.
\bibitem{R} J. Riordan, Ballots and trees, {\it J. Combin. Theory} {\bf 6} (1969) 408--411.
\bibitem{SMP} M. P. Sch\"utzenberger, On an enumeration problem, {\it J. Combin. Theory} {\bf 4} (1968) 219--221.
\bibitem{SRP} R. P. Stanley, Hyperplane arrangements, interval orders and trees,
{\it Proc. Natl. Acad. Sci.} {\bf 93} (1996) 2620--2625.
\bibitem{SRP2} R. P. Stanley, Parking functions and non-crossing partitions, in: The
Wilf Festschrift, {\it Electron. J. Combin.} {\bf 4} (1997) R20.
\bibitem{Y1} C. H. Yan, Generalized tree inversions and $k$-parking functions, {\it J.
Combin. Theory Ser. A} {\bf 79} (1997) 268--280.
\bibitem{Y2} C. H. Yan, On the enumeration of generalized parking functions,
{\it Congr. Numer.} {\bf 147} (2000) 201--209.
\bibitem{Y3} C. H. Yan, Generalized parking functions, tree inversions and
multicolored graphs, {\it Adv. in Appl. Math.} {\bf 27} (2001) 641--670.
\end{thebibliography}
\end{document}
\begin{document}
\title{A young person's guide to mixed Hodge modules}
\author[M. Saito]{Morihiko Saito}
\address{RIMS Kyoto University, Kyoto 606-8502 Japan}
\begin{abstract}
We give a rather informal introduction to the theory of mixed Hodge modules for young mathematicians.
\end{abstract}
\maketitle
\centerline{\bf Introduction}
\par
\noindent
The theory of mixed Hodge modules (\cite{mhp}, \cite{mhm}) was originally constructed as a Hodge-theoretic analogue of the theory of $\ell$-adic mixed perverse sheaves (\cite{th1}, \cite{weil}, \cite{BBD}) and also as an extension of Deligne's mixed Hodge theory (\cite{th2}, \cite{th3}) including the theory of degenerations of variations of pure or mixed Hodge structures (see \cite{Sch}, \cite{St}, \cite{CK}, \cite{CKS1}, \cite{StZ}, \cite{Kadm}, etc.)
The main point is the ``stability" by the direct images and the pull-backs under morphisms of complex algebraic varieties and also by the dual and the nearby and vanishing cycle functors.
Here ``stability" means more precisely the ``stability theorems" saying that there are {\it canonically defined functors} between the derived categories.
These stability theorems become quite useful by combining them with the fundamental theorem of mixed Hodge modules asserting that any admissible variations of mixed Hodge structure on smooth complex algebraic varieties in the sense of \cite{StZ}, \cite{Kadm} are mixed Hodge modules \cite{mhm}; in particular, mixed Hodge modules on a point are naturally identified with graded-polarizable mixed ${\mathbf Q}$-Hodge structures in the sense of Deligne \cite{th2}.
\par
Technically the {\it strictness} of the Hodge filtration $F$ on the underlying complexes of ${\mathcal D}$-modules is very important in the theory of mixed Hodge modules. Here ${\mathcal D}$-modules are indispensable for the generalization of Deligne's theory in the absolute case (\cite{th2}, \cite{th3}) to the {\it relative} case.
For instance, this includes the assertion that any morphism of mixed Hodge modules is {\it bistrict} for the Hodge and weight filtrations $F,W$, generalizing the case of mixed Hodge structures by Deligne \cite{th2}.
${\mathcal D}$-modules are also essential for the construction of a relative version of ${\mathcal D}ec\,W$ in the proof of the stability theorem of mixed Hodge modules by the direct images under projective morphisms extending Deligne's argument in the absolute case, where we have {\it bistrict} complexes of ${\mathcal D}$-modules with filtrations $(F,{\mathcal D}ec\,W)$ under the direct images by projective morphisms, see \cite[Proposition 2.15]{mhm}.
\par
In order to understand the theory of mixed Hodge modules, the general theory of ${\mathcal D}$-modules does not seem to be absolutely indispensable.
In fact, ${\mathcal D}$-modules appearing in the theory of mixed Hodge modules are rather special ones, and we have to deal with them always as {\it $F$-filtered ${\mathcal D}$-modules}, where a slightly different kind of argument is usually required.
For instance, although the {\it regularity} of ${\mathcal D}$-modules is quite useful for the construction of the direct images by affine open immersions, this can be reduced essentially to the normal crossing case by using Beilinson's functor together with the stability theorem under the direct images by projective morphisms (see \cite{def}), and in the latter case, a generalization of the Deligne canonical extensions \cite{eq} to the case of ${\mathcal D}$-modules with normal crossing singular supports is sufficient.
Also the pull-backs of mixed Hodge modules under closed immersions are constructed by using nearby and vanishing cycle functors, which is entirely different from the usual construction of pull-backs of ${\mathcal D}$-modules, see \cite[Section 4.4]{mhm}.
\par
Note finally that we need only Zucker's Hodge theory in the curve case \cite{Zu} for the proof of the stability theorem of mixed Hodge modules under the direct images by proper morphisms, and classical Hodge theory is not used, see (2.5) below.
\par
I thank the referee for useful comments.
The author is partially supported by Kakenhi 15K04816.
\par
In Section~1 we explain the main properties of pure Hodge modules.
In Section~2 we give an inductive definition of pure Hodge modules, and explain an outline of proofs of Theorems~(1.3) and (1.4).
In Section~3 we explain a simplified definition of mixed Hodge modules following \cite{def}.
In Appendices we explain some basics of hypersheaves, ${\mathcal D}$-modules, and compatible filtrations.
\par
\noindent
{\bf Conventions 1.} We assume that algebraic varieties in this paper are always defined over ${\mathbf C}$ and are reduced (but not necessarily irreducible).
More precisely, a variety means a separated reduced scheme of finite type over ${\mathbf C}$, but we consider only its {\it closed points}, that is, its {\it ${\mathbf C}$-valued points}. So it is close to a variety in the sense of Serre (except that reducible varieties are allowed here). We also assume that a variety is always quasi-projective, or more generally, globally embeddable into a smooth variety (where morphisms of varieties are assumed quasi-projective) in order to simplify some arguments. The reader may assume that all the varieties in this paper are reduced quasi-projective complex algebraic varieties.
\par
\noindent
{\bf 2.} We use {\it analytic} sheaves on complex algebraic varieties; in particular, any ${\mathcal D}$-modules are analytic ${\mathcal D}$-modules.
(These are suitable for calculations using local coordinates.)
For the underlying filtered ${\mathcal D}$-module $(M,F)$ of a mixed Hodge module on $X$, one can pass to the corresponding {\it algebraic} filtered ${\mathcal D}$-module by applying GAGA to each $F_pM$ after taking an extension of the mixed Hodge module over a compactification of $X$.
\par
\noindent
{\bf 3.} In this paper, perverse sheaves (a name which does not seem appropriate, at least for book titles, see \cite{Di}, \cite{KS}) are mainly called hypersheaves, by analogy with hypercohomology versus cohomology (although the word ``sheaf" may not be of Greek origin). The abelian category of hypersheaves on $X$ is denoted by ${\mathbf H\mathbf S}(X,A)$, where $A$ is a subfield of ${\mathbf C}$.
Hypersheaves are not sheaves in the usual sense, but they behave like sheaves in some sense; for instance, they can be defined locally provided that gluing data satisfying some compatibility condition are also given.
\par
\par
\vbox{\centerline{\bf 1. Main properties of pure Hodge modules}
\par
\noindent
In this section we explain the main properties of pure Hodge modules.}
\par
\noindent
{\bf 1.1.~Filtered ${\mathcal D}$-modules with ${\mathbf Q}$-structure.}
A pure Hodge module ${\mathcal M}$ on a smooth complex algebraic variety $X$ of dimension $d_X$ is basically a coherent left ${\mathcal D}_X$-module $M$ endowed with the Hodge filtration $F$ such that $(M,F)$ is a filtered left ${\mathcal D}_X$-module with $F_pM$ coherent over ${\mathcal O}_X$.
The filtration $F$ on ${\mathcal D}_X$ is by the order of differential operators, and we have the filtered ${\mathcal D}$-module condition
$$(F_p{\mathcal D}_X)\,F_qM\subset F_{p+q}M\quad\quad(p\in{\mathbf N},\,q\in{\mathbf Z}),
\leqno(1.1.1)$$
(which is equivalent to {\it Griffiths transversality} in the case of variations of Hodge structure), and the equality holds in (1.1.1) for $q\gg0$.
Moreover $M$ has a ${\mathbf Q}$-{\it structure} given by an isomorphism
$$\alpha_{{\mathcal M}}:{\mathcal D}R_X(M)\cong{\mathbf C}\otimes_{{\mathbf Q}}K\quad\hbox{in}\,\,\,{\mathbf H\mathbf S}(X,{\mathbf C}),
\leqno(1.1.2)$$
with $K\in{\mathbf H\mathbf S}(X,{\mathbf Q})$ (see (A.1) below). Here ${\mathcal D}R_X(M)$ is the de Rham complex of the ${\mathcal D}_X$-module $M$ viewed as a quasi-coherent ${\mathcal O}_X$-module with an integrable connection:
$${\mathcal D}R_X(M):=\bigl[M\to\Omega_X^1\otimes_{{\mathcal O}_X}M\to\cdots\to\Omega_X^{d_X}\otimes_{{\mathcal O}_X}M\bigr],
\leqno(1.1.3)$$
with the last term placed in degree 0. In (1.1.2) we {\it assume}
$${\mathcal D}R_X(M)\in{\mathbf H\mathbf S}(X,{\mathbf C}),
\leqno(1.1.4)$$
and moreover (1.1.4) holds with $M$ replaced by any subquotients of $M$ as coherent ${\mathcal D}$-modules. (More precisely, these properties follow from the condition that $M$ is {\it regular holonomic}, see (B.5.2) below. We effectively assume the last condition in the definition of Hodge modules, see Remark~(ii) after Theorem~(1.3).)
In this paper we also assume
$$\hbox{$M,K$ are quasi-unipotent (see (B.6) below).}
\leqno(1.1.5)$$
\par
We will denote by ${\mathcal M}F(X,{\mathbf Q})$ the category of
$${\mathcal M}=\bigl((M,F),K,\alpha_{{\mathcal M}}\bigr)$$
satisfying the above conditions.
(Sometimes $\alpha_{{\mathcal M}}$ will be omitted to simplify the notation.)
We also use the notation
$${\rm rat}({\mathcal M}):=K.
\leqno(1.1.6)$$
The category ${\mathcal M}H(X,w)$ of pure Hodge modules of weight $w$ on $X$ will be defined as a full subcategory of ${\mathcal M}F(X,{\mathbf Q})$.
\par
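A basic example to keep in mind is
$${\mathcal M}=\bigl(({\mathcal O}_X,F),{\mathbf Q}_X[d_X],\alpha_{{\mathcal M}}\bigr)\quad\hbox{with}\quad{\rm Gr}^F_p{\mathcal O}_X=0\,\,\,(p\ne 0),$$
where $\alpha_{{\mathcal M}}$ comes from the isomorphism ${\mathcal D}R_X({\mathcal O}_X)\cong{\mathbf C}_X[d_X]$ given by the holomorphic Poincar\'e lemma. This object will underlie the constant pure Hodge module of weight $d_X$ on $X$, see (1.2) and (2.2) below.
\par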
\noindent
{\bf 1.2.~Strict support decomposition.}
The first condition for pure Hodge modules is the {\it decomposition by strict support}
$${\mathcal M}=\hbox{$\bigoplus$}_{Z\subset X}\,{\mathcal M}_Z\quad\hbox{in}\,\,\,{\mathcal M}F(X,{\mathbf Q}),
\leqno(1.2.1)$$
where $Z$ runs over irreducible closed subvarieties of $X$, and
$${\mathcal M}_Z=\bigl((M_Z,F),K_Z,\alpha_{{\mathcal M}_Z}\bigr)$$
has {\it strict support} $Z$ (that is, its support is $Z$, and it has no nontrivial sub nor quotient object supported on a proper subvariety of $Z$).
More precisely, we assume that the last condition is satisfied for both $M_Z$ and $K_Z$.
Note that the condition for $K_Z$ is equivalent to that $K_Z$ is an intersection complex with local system coefficients, see \cite{BBD}.
We then get
$${\rm Hom}({\mathcal M}_Z,{\mathcal M}_{Z'})=0\quad\hbox{if}\,\,\,Z\ne Z'.
\leqno(1.2.2)$$
In fact, this holds with ${\mathcal M}_Z,{\mathcal M}_{Z'}$ replaced by $M_Z,M_{Z'}$ or by $K_Z,K_{Z'}$.
\par
In (2.2) below, we will define the full subcategory
$${\mathcal M}H_Z(X,w)\subset{\mathcal M}F(X,{\mathbf Q})$$
consisting of {\it pure Hodge modules of weight $w$ with strict support $Z$} by increasing induction on $\dim Z$, and put
$${\mathcal M}H(X,w):=\hbox{$\bigoplus$}_{Z\subset X}\,{\mathcal M}H_Z(X,w)\subset{\mathcal M}F(X,{\mathbf Q}),
\leqno(1.2.3)$$
where the direct sum over closed irreducible subvarieties $Z$ of $X$ is justified by (1.2.1--2).
\par
This full subcategory ${\mathcal M}H_Z(X,w)\subset{\mathcal M}F(X,{\mathbf Q})$ can be defined effectively by the following {\it fundamental theorem of pure Hodge modules}, which may be viewed as a {\it working definition} of ${\mathcal M}H_Z(X,w)$.
\par
\noindent
{\bf Theorem~1.3} (\cite[Theorem~3.21]{mhm}). {\it For any closed irreducible subvariety $Z\subset X$, the restriction to sufficiently small open subvarieties of $Z$ induces an equivalence of categories
$${\mathcal M}H_Z(X,w)\buildrel{\sim}\over\longrightarrow {\rm VHS}_{\rm gen}(Z,w-\dim Z)^p,
\leqno(1.3.1)$$
where the right-hand side is the category of polarizable variations of pure Hodge structure of weight $w-\dim Z$ defined on smooth dense open subvarieties $U$ of $Z$. $($More precisely, we take the inductive limit over $U\subset Z.)$
Moreover, $(1.3.1)$ induces a one-to-one correspondence between polarizations of ${\mathcal M}\in{\mathcal M}H_Z(X,w)$ $($see $(2.2.2)$ below$)$ and those of the corresponding generic variation of Hodge structure.}
\par
\noindent
{\bf Remarks.} (i) The equivalence of categories (1.3.1) means that any pure Hodge module with strict support $Z$ is generically a polarizable variation of pure Hodge structure, and conversely any polarizable variation of pure Hodge structure defined on a smooth dense Zariski-open subset $U\subset Z$ can be extended uniquely to a pure Hodge module with strict support $Z$.
\par
(ii) By using the stability theorem of pure Hodge modules under the direct images by projective morphisms (see Theorem~(1.4) below), the proof of Theorem~(1.3) can be reduced to the case where $Z=X$ and a variation of Hodge structure is defined on the complement $U$ of a divisor with normal crossings. In this case we use the original definition of pure Hodge modules in \cite{mhp} given by induction on the dimension of the support of $M$ and using the nearby and vanishing cycle functors.
\par
In the case where $D:=X\setminus U$ is a divisor with normal crossings, the above extension of a variation of Hodge structure on $U$ to a Hodge module on $X$ is rather easy to describe by using Deligne's canonical extension \cite{eq}. In fact, we have the following explicit formula (see \cite[(3.10.12)]{mhm}):
$$F_pM=\sum_{i\geqslant 0}\,F_i\hskip1pt{\mathcal D}_X\hskip1pt\bigl(j_*F_{p-i}\hskip1pt{\mathcal L}\cap\widehat{{\mathcal L}}^{\,>-1}\bigr)\subset\widehat{{\mathcal L}}^{\,>-1}(*D),
\leqno(1.3.2)$$
where $j:U\hookrightarrow X$ is the inclusion. Here ${\mathcal L}$ is the locally free ${\mathcal O}_U$-module underlying the variation of Hodge structure with $F$ the Hodge filtration, $\widehat{{\mathcal L}}^{\,>-1}$ is the Deligne extension of ${\mathcal L}$ over $X$ such that the eigenvalues of the residues of the connection are contained in $(-1,0]$, and the last term of (1.3.2) is the Deligne meromorphic extension, see \cite{eq}. (Note that a decreasing filtration $F$ is identified with an increasing filtration by setting $F_p:=F^{-p}$.)
Taking the union over $p\in{\mathbf Z}$, we have
$$M={\mathcal D}_X\,\widehat{{\mathcal L}}^{\,>-1}\subset\widehat{{\mathcal L}}^{\,>-1}(*D).
\leqno(1.3.3)$$
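For instance, for the constant variation of Hodge structure ${\mathbf Q}_U$ of weight $0$ (so that ${\mathcal L}={\mathcal O}_U$ with $F_p{\mathcal L}={\mathcal L}$ for $p\geqslant 0$ and $F_p{\mathcal L}=0$ for $p<0$) one has $\widehat{{\mathcal L}}^{\,>-1}={\mathcal O}_X$, and (1.3.2--3) give $M={\mathcal O}_X$ with $F_pM={\mathcal O}_X$ $(p\geqslant 0)$, $F_pM=0$ $(p<0)$, that is, the filtered ${\mathcal D}$-module underlying the constant Hodge module of weight $d_X$.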
\par
The underlying ${\mathbf Q}$-local system $L$ of a polarizable variation of Hodge structure on $U$ is canonically extended over $X$ as an intersection complex (see \cite{BBD}) where $L$ must be shifted by $\dim Z$.
This can be done without assuming that $D=X\setminus U$ is a divisor with normal crossings on a smooth variety.
\par
The second main theorem in the theory of pure Hodge modules is the stability theorem of pure Hodge modules under the direct images by projective morphisms.
\par
\noindent
{\bf Theorem~1.4} (\cite[Theorem~5.3.1]{mhp}). {\it Let $f:X\to Y$ be a projective morphism of smooth complex algebraic varieties, and ${\mathcal M}=((M,F),K,\alpha_{{\mathcal M}})\in{\mathcal M}H_Z(X,w)$. Let $\ell$ be the first Chern class of an $f$-ample line bundle. Then the direct image $f_*^{{\mathcal D}}(M,F)$ as a filtered ${\mathcal D}$-module $($see {\rm (B.3)} below$)$ is strict, and we have
$${\mathcal H}^if_*{\mathcal M}:=({\mathcal H}^if_*^{{\mathcal D}}(M,F),{}^{\mathfrak m}{\mathcal H}^if_*K,{}^{\mathfrak m}{\mathcal H}^if_*\alpha_{{\mathcal M}})\in{\mathcal M}H(Y,w+i)\quad(i\in{\mathbf Z}),
\leqno(1.4.1)$$
together with the isomorphisms
$$\ell^i:{\mathcal H}^{-i}f_*{\mathcal M}\buildrel{\sim}\over\longrightarrow{\mathcal H}^if_*{\mathcal M}(i)\quad(i>0),
\leqno(1.4.2)$$
where $(i)$ denotes the Tate twist shifting the filtration $F$ by $i$, see a remark after $(2.1.10)$ below.
\par
Moreover, if $S:K\otimes K\to{\mathcal D}D_X(-w)$ is a polarization of ${\mathcal M}$ {\rm(}see $(2.2.2)$ below$)$, then a polarization of the $\ell$-primitive part
$${}^P{\mathcal H}^{-i}f_*{\mathcal M}:={\rm Ker}\,\ell^{\,i+1}\subset{\mathcal H}^{-i}f_*{\mathcal M}\quad(i\geqslant 0)$$
is given by the restriction to the $\ell$-primitive part of the induced pairing}
$$(-1)^{i(i-1)/2}\,{}^{\mathfrak m}{\mathcal H} f_*S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes\ell^{\,i}):{}^{\mathfrak m}{\mathcal H}^{-i}f_*K\otimes{}^{\mathfrak m}{\mathcal H}^{-i}f_*K\to{\mathcal D}D_Y(i-w).
\leqno(1.4.3)$$
\par
\noindent
{\bf Remarks.} (i) The action of $\ell$ can be defined by using $C^{\infty}$ forms and the Dolbeault resolution to get a filtered complex which is filtered quasi-isomorphic to the relative de Rham complex
$${\mathcal D}R_{X\times Y/Y}\bigl((i_f)_*^{{\mathcal D}}(M,F)\bigr),$$
which is used in the definition of the direct image in (B.3) below. Note that the first Chern class is represented by a closed 2-form of type $(1,1)$ on $X$, see also \cite[Lemma 5.3.2]{mhp}.
(It is also possible to define the action of $\ell$ by using the restriction to a sufficiently general hyperplane section of $(M,F)$.)
\par
(ii) We use Deligne's sign convention for a polarization $S$ of a Hodge structure $((H_{{\mathbf C}},F),H_{{\mathbf Q}})$, see \cite[Definition 2.1.15]{th2}; that is,
$$S(v,C\,\overline{\!v})>0\quad(v\in H_{{\mathbf C}}\setminus\{0\}),
\leqno(1.4.4)$$
where $C$ is the Weil operator defined by $i^{p-q}$ on $H^{p,q}_{{\mathbf C}}$, and the Tate twist $(2\pi i)^w$ is omitted to simplify the notation.
(This sign convention seems to be theoretically natural if one considers the action of the Weil restriction of ${\mathbf G}_{\mathbf m}$, see \cite[Section 2.1]{th1}.)
\par
Recall that the usual sign convention is
$$S(C\hskip1ptv,\,\overline{\!v})>0\quad(v\in H_{{\mathbf C}}\setminus\{0\}),
\leqno(1.4.5)$$
and the difference is given by the multiplication by
$$(-1)^w,
\leqno(1.4.6)$$
where $w$ is the weight of the Hodge structure.
\par
If we use the usual sign convention (1.4.5) instead of (1.4.4), then the difference (1.4.6) implies a considerable change of the formula (1.4.3) of Theorem~(1.4). In fact, we have to change the sign for each direct factor with strict support $Z$ {\it depending on} $d_Z:=\dim Z$, since the difference of sign (1.4.6) depends on the {\it pointwise weight}
$$w-i-d_Z
\leqno(1.4.7)$$
of the generic variations of Hodge structure of each direct factor with strict support $Z$:
$$({\mathcal H}^{-i}f_*{\mathcal M})_Z\subset{\mathcal H}^{-i}f_*{\mathcal M}.$$
\par
(iii) For a polarization $S$ of a generic variation of Hodge structure of a pure Hodge module with strict support $Z$, the associated polarization of the Hodge modules is defined by
$$(-1)^{d_Z(d_Z-1)/2}\,S:K|_U\otimes K|_U\to{\mathcal D}D_U(-w),
\leqno(1.4.8)$$
on an open subvariety $U\subset Z$ where the variation of Hodge structure is defined, see \cite[Proposition 5.2.16]{mhp} for the constant coefficient case. This can be extended to a pairing of $K$ by using the theory of intersection complexes \cite{BBD}.
\par
For a smooth projective variety $X$, it is well-known that
$$\int_X(-1)^{j(j-1)/2}\,i^{p-q}\,v\,\wedge\,\overline{\!v}{}\wedge\omega^{d_X-j}>0,
\leqno(1.4.9)$$
for a nonzero element $v$ in the primitive cohomology ${}^P\!H^j(X,{\mathbf C})$ of type $(p,q)$, where $\omega$ is a K\"ahler form. One may think that (1.4.9) contradicts (1.4.8) combined with (1.4.3) in Theorem~(1.4) for $f:X\to pt$. In fact, by setting $\xi(k)=k(k-1)/2$ for $k\in{\mathbf N}$, the difference between the sign $(-1)^{\xi(j)}$ in (1.4.9) and the product of $(-1)^{\xi(d_X)}$ in (1.4.8) with $Z=X$ and $(-1)^{\xi(d_X-j)}$ in (1.4.3) with $i=d_X-j$ does not coincide with the difference between the two sign conventions (1.4.4--5) which is equal to $(-1)^j$ by (1.4.6) for $w=j$, since
$$\xi(d_X)-\xi(d_X-j)-\xi(j)-j=jd_X-j(j+1)\equiv jd_X\mod 2.$$
However, the remaining sign $(-1)^{jd_X}$ just comes from the isomorphism
$${\mathbf R}\Gamma(X,{\mathbf C}_X)[d_X]\otimes_{{\mathbf C}}{\mathbf R}\Gamma(X,{\mathbf C}_X)[d_X]\cong\bigl({\mathbf R}\Gamma(X,{\mathbf C}_X)\otimes_{{\mathbf C}}{\mathbf R}\Gamma(X,{\mathbf C}_X)\bigr)[2d_X].
\leqno(1.4.10)$$
In fact, if we set ${\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}:={\mathbf R}\Gamma(X,{\mathbf C}_X)$, then ${\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}[d_X]$ is identified with ${\mathbf C}_X[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$, and the above isomorphism is given by
$${\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\otimes_{{\mathbf C}}{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\cong{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},$$
where the middle two components are exchanged, but the remaining ones are unchanged (see \cite{coh} for the sign about single complexes associated with $n$-ple complexes).
A similar argument is used in the proof of the anti-commutativity of the last diagram in \cite[Section 5.3.10]{mhp} where $d_X=1$.
\par
\par
\vbox{\centerline{\bf 2. Outline of proofs of Theorems~(1.3) and (1.4)}
\par
\noindent
In this section we give an inductive definition of pure Hodge modules (see \cite{via}, \cite{mhp}), and explain an outline of proofs of Theorems~(1.3) and (1.4).}
\par
\noindent
{\bf 2.1.~Admissibility condition along $g=0$.} Let $X$ be a smooth algebraic variety, and $Z$ be an irreducible closed subvariety of $X$. Let
$${\mathcal M}=((M,F),K,\alpha_{{\mathcal M}})\in{\mathcal M}F(X,{\mathbf Q}),$$
with strict support $Z$, see (1.2). Let $g$ be a function on $X$, that is, $g\in\Gamma(X,{\mathcal O}_X)$. Let $i_g:X\hookrightarrow X\times{\mathbf C}$ be the graph embedding by $g$. Set
$$(\widetilde{M},F):=(i_g)_*^{{\mathcal D}}(M,F),$$
see (B.3) below for $(i_g)_*^{{\mathcal D}}$. (Note that the filtration $F$ is {\it shifted by} $1$, which is the codimension of the embedding.) We have the filtration $V$ on $\widetilde{M}$ (see (B.6) below).
\par
\noindent
{\bf Definition.} We say that $(M,F)$ is {\it admissible along} $g=0$ (or $g$-{\it admissible} for short) in this paper if the following two conditions are satisfied:
$$t(F_pV^{\alpha}\widetilde{M})=F_pV^{\alpha+1}\widetilde{M}\quad\quad(\forall\,\alpha>0),
\leqno(2.1.1)$$
\vskip-6mm
$$\partial_t(F_p{\rm Gr}_V^{\alpha}\widetilde{M})=F_{p+1}{\rm Gr}_V^{\alpha-1}\widetilde{M}\quad\quad(\forall\,\alpha<1,\,p\in{\mathbf Z}),
\leqno(2.1.2)$$
see \cite[3.2.1]{mhp}. (These properties were first found in the one-dimensional case, see \cite{hfgs2}.)
\par
In the case $Z\subset g^{-1}(0)$, $(M,F)$ is $g$-admissible if and only if the following condition is satisfied:
$$g\,F_pM\subset F_{p-1}M\quad\quad(\forall\,p\in{\mathbf Z}),
\leqno(2.1.3)$$
see \cite[Lemma 3.2.6]{mhp}.
\par
In the case $Z\not\subset g^{-1}(0)$, let $j:X\times{\mathbf C}^*\hookrightarrow X\times{\mathbf C}$ be the natural inclusion. We have the isomorphisms
$$F_p\widetilde{M}=\sum_{i\geqslant 0}\,\partial_t^i(j_*F_{p-i}\widetilde{M}\cap V^{>0}\widetilde{M})\quad\quad(p\in{\mathbf Z}),
\leqno(2.1.4)$$
if $(M,F)$ is $g$-admissible and moreover the following condition holds:
$$\partial_t:{\rm Gr}_V^1(\widetilde{M},F)\to{\rm Gr}_V^0(\widetilde{M},F[-1])\,\,\,\hbox{is strictly surjective,}
\leqno(2.1.5)$$
see \cite[Remark 3.2.3]{mhp}. This is closely related to (1.3.2). (Forgetting $F$, condition (2.1.5) is equivalent to that $M$ has no nontrivial quotient supported in $g^{-1}(0)$, see (B.6.6) below. The strictness of $F$ follows from the properties of Hodge modules as is seen in (2.3.5) below.)
\par
Assume $(M,F)$ is $g$-admissible. We have the {\it nearby and vanishing cycle functors} $\psi_g$, $\varphi_g$ defined by
$$\psi_g(M,F):=\hbox{$\bigoplus$}_{\lambda\in{\mathbf C}^*_1}\,\psi_{g,\lambda}(M,F),\quad\varphi_g(M,F):=\hbox{$\bigoplus$}_{\lambda\in{\mathbf C}^*_1}\,\varphi_{g,\lambda}(M,F),
\leqno(2.1.6)$$
\vskip-5mm
$$\aligned\psi_{g,{\mathbf e}(-\alpha)}(M,F)&:={\rm Gr}_V^{\alpha}(\widetilde{M},F)\quad(\alpha\in(0,1]),\\\varphi_{g,1}(M,F)&:={\rm Gr}_V^0(\widetilde{M},F[-1]),\endaligned
\leqno(2.1.7)$$
where ${\mathbf C}^*_1:=\{\lambda\in{\mathbf C}^*\mid|\lambda|=1\}$, ${\mathbf e}(-\alpha):=\exp(-2\pi i\alpha)$, and $\psi_{g,\lambda}=\varphi_{g,\lambda}$ ($\lambda\ne 1$) as in (A.2.8) below.
We have $\varphi_{g,1}(M,F)=(M,F)$ by \cite[Lemma 3.2.6]{mhp} if ${\rm supp}\,M\subset g^{-1}(0)$ and $g\hskip1ptF_pM\subset F_{p-1}M$ ($p\in{\mathbf Z}$). Note that $F$ is shifted by 1 when the direct image $(i_g)_*^{{\mathcal D}}$ is taken. This is the reason for which $F$ is shifted for $\varphi$ in (2.1.7), and not for $\psi$ (for left ${\mathcal D}$-modules).
\par
Combining these with (B.6.7) below, we get
$$\aligned\psi_g{\mathcal M}&:=(\psi_g(M,F),{}^{\mathfrak m}\psi_gK,{}^{\mathfrak m}\psi_g\alpha_{{\mathcal M}}),\\\varphi_g{\mathcal M}&:=(\varphi_g(M,F),{}^{\mathfrak m}\varphi_gK,{}^{\mathfrak m}\varphi_g\alpha_{{\mathcal M}})\quad\hbox{in}\,\,\,\,{\mathcal M}F(X,{\mathbf Q}).\endaligned
\leqno(2.1.8)$$
We have similarly $\psi_{g,1}{\mathcal M}$, $\varphi_{g,1}{\mathcal M}$ together with the morphisms
$$\aligned{\rm can}:\psi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M},\quad&{\rm Var}:\varphi_{g,1}{\mathcal M}\to\psi_{g,1}{\mathcal M}(-1),\\
N:\psi_g{\mathcal M}\to\psi_g{\mathcal M}(-1),\quad&N:\varphi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M}(-1),\endaligned
\leqno(2.1.9)$$
such that the restrictions of $\,{\rm can}$, $\,{\rm Var}$, $N$ to the ${\mathcal D}$-module part ${\rm Gr}_V^1{\mathcal M}$, ${\rm Gr}_V^0{\mathcal M}$, ${\rm Gr}_V^{\alpha}{\mathcal M}$ ($\alpha\in[0,1]$) are respectively given by $-\partial_t$, $t$, $-(\partial_tt-\alpha)$, and we have
$${\rm Var}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm can}=N\,\,\,\,\hbox{on}\,\,\,\varphi_{g,1}{\mathcal M},\quad{\rm can}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm Var}=N\,\,\,\,\hbox{on}\,\,\,\varphi_{g,1}{\mathcal M}.
\leqno(2.1.10)$$
Here the Tate twist $(k)$ for $k\in{\mathbf Z}$ in general is essentially the shift of the filtration $F$ by $[k]$. For the ${\mathbf Q}$-coefficient part, it is defined by the tensor product of ${\mathbf Q}(k):=(2\pi i)^k{\mathbf Q}\subset{\mathbf C}$ over ${\mathbf Q}$, see \cite[Definition 2.1.13]{th2}. (Similarly with ${\mathbf Q}$ replaced by any subfield $A\subset{\mathbf C}$)
\par
By \cite[Lemma 5.1.4]{mhp} we see that the strict support decomposition (1.2.1) holds if for any $g\in\Gamma(U,{\mathcal O}_U)$ with $U$ an open subvariety of $X$, $(M,F)|_U$ is $g$-admissible and moreover
$$\varphi_{g,1}{\mathcal M}|_U={\rm Im}\,{\rm can}\oplus{\rm Ker}\,{\rm Var}\quad\hbox{in}\quad{\mathcal M}F(U,{\mathbf Q}),
\leqno(2.1.11)$$
\par
\noindent
{\bf 2.2.~Inductive definition of pure Hodge modules.} For a smooth complex algebraic variety $X$ and an irreducible closed subvariety $Z\subset X$,
we define the full subcategory ${\mathcal M}H_Z(X,w)\subset{\mathcal M}F(X,{\mathbf Q})$ by increasing induction on $d_Z:=\dim Z$ as follows (see \cite{via}, \cite{mhp}):
\par
\noindent
{\bf Case 1.} If $Z$ is a point $x\in X$, then we have an equivalence of categories
$$\aligned&(i_x)_*:{\rm HS}(w)^p\buildrel{\sim}\over\longrightarrow{\mathcal M}H_{\{x\}}(X,w)\\ \hbox{with}\quad\quad&(i_x)_*\bigl((H_{{\mathbf C}},F),H_{{\mathbf Q}}\bigr)=\bigl((i_x)_*^{{\mathcal D}}(H_{{\mathbf C}},F),(i_x)_*H_{{\mathbf Q}}\bigr),\endaligned
\leqno(2.2.1)$$
where $i_x:\{x\}\hookrightarrow X$ denotes the canonical inclusion, and ${\rm HS}(w)^p$ denotes the category of polarizable ${\mathbf Q}$-Hodge structures of weight $w$ (see \cite{th2}). The latter is naturally identified with a full subcategory of ${\mathcal M}F(\{x\},{\mathbf Q})$ (by setting $F_p=F^{-p}$ as usual).
\par
\noindent
{\bf Case 2.} If $d_Z>0$, then ${\mathcal M}=((M,F),K)\in{\mathcal M}F(X,{\mathbf Q})$ with strict support $Z$ belongs to ${\mathcal M}H_Z(X,w)$ if there is a perfect pairing (see (A.3) below)
$$S:K\otimes_{{\mathbf Q}}K\to{\mathcal D}D_X(-w)={\mathbf Q}_X(d_X-w)[2d_X],
\leqno(2.2.2)$$
which is called a {\it polarization} of ${\mathcal M}$, and the following two conditions are satisfied:
\par
\noindent
(i) The pairing $S$ is compatible with the Hodge filtration $F$ in the following sense:
\par
\noindent
There is an isomorphism of filtered ${\mathcal D}$-modules
$${\mathcal D}D(M,F)=(M,F)(w),$$
which corresponds (by using (B.4.6) below) to an isomorphism defined over ${\mathbf Q}$:
$${\mathcal D}D(K)=K(w),$$
and the latter is identified with the perfect pairing $S$ via (A.3.1) below.
\par
\noindent
(ii) For any Zariski-open subset $U\subset X$ and $g\in\Gamma(U,{\mathcal O}_U)$, the restriction of $(M,F)$ to $U$ is $g$-admissible, and moreover, in the case $Z\cap U\not\subset g^{-1}(0)$, we have
$${\rm Gr}_k^W\psi_g{\mathcal M}|_U,\,{\rm Gr}_k^W\varphi_{g,1}{\mathcal M}|_U\in{\mathcal M}H_{<d_Z}(U,w),
\leqno(2.2.3)$$
$$\hbox{${}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)$ gives a polarization of ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U\,\,\,(i\geqslant 0)$}.
\leqno(2.2.4)$$
\par
The last term of (2.2.3) is the direct sum of ${\mathcal M}H_{Z'}(U,w)$ with $Z'$ running over closed irreducible subvarieties of $U$ with $d_{Z'}<d_Z$. The weight filtrations $W$ on $\psi_g{\mathcal M}|_U$, $\varphi_{g,1}\hskip1pt{\mathcal M}|_U$ are the {\it monodromy filtration} associated with the action of $N:=(2\pi i)^{-1}\log T_u$ (see \cite{weil}) which are shifted by $w-1$ and $w$ respectively. This means that $W$ on $\psi_g{\mathcal M}|_U$ {\it with filtration $F$ forgotten} is uniquely determined by the following conditions:
$$\aligned N(W_i\psi_g{\mathcal M}|_U)&\subset(W_{i-2}\psi_g{\mathcal M}|_U)(-1)\quad(i\in{\mathbf Z}),\\ N^i:{\rm Gr}_{w-1+i}^W\psi_g{\mathcal M}|_U&\buildrel{\sim}\over\longrightarrow({\rm Gr}^W_{w-1-i}\psi_g{\mathcal M}|_U)(-i)\quad(i\in{\mathbf N}).\endaligned
\leqno(2.2.5)$$
Here the Tate twists may be neglected since $F$ is forgotten in (2.2.5).
(However, the last isomorphism of (2.2.5) is strictly compatible with $F$ by (2.3.3) below if $\psi_g{\mathcal M}|_U$ belongs to ${\mathcal M}HW(U)$ in (2.3) below.)
A similar assertion holds for $\varphi_{g,1}{\mathcal M}|_U$ with $w-1$ replaced by $w$.
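For instance, if $N^2=0$ on $\psi_g{\mathcal M}|_U$, then (with $F$ forgotten) the conditions (2.2.5) give $W_{w-3}=0$, $W_{w-2}={\rm Im}\,N$, $W_{w-1}={\rm Ker}\,N$, and $W_w=\psi_g{\mathcal M}|_U$.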
\par
In (2.2.4), the primitive part ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U$ is defined by
$${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U:={\rm Ker}\,N^{i+1}\subset{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U,
\leqno(2.2.6)$$
by using the induced filtration $F$ on the kernel.
For $\psi_gS$, see (A.3.2) below. Note that the condition for a polarization $S$ is also by induction on $d_Z$. For each direct factor of ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U$ with $0$-dimensional strict support, we assume that $\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)$ induces a polarization of ${\mathbf Q}$-Hodge structure in the sense of \cite{th2} where the place of the Weil operator is different from the usual one as is noted in Remark (ii) after Theorem (1.4).
\par
\noindent
{\bf 2.3. Some properties of pure Hodge modules.}
Let ${\mathcal M}H(X,w)$ be as in (1.2.3). Let
$${\mathcal M}HW(X)$$
be the category of {\it weakly mixed Hodge modules\,} consisting of $({\mathcal M},W)$ with ${\mathcal M}\in{\mathcal M}F(X,{\mathbf Q})$ and $W$ a finite increasing filtration of ${\mathcal M}$, which satisfy
$${\rm Gr}_w^W{\mathcal M}\in{\mathcal M}H(X,w)\quad(\forall\,w\in{\mathbf Z}).
\leqno(2.3.1)$$
We have by definition the stability of pure Hodge modules by the nearby and vanishing cycle functors:
$$\psi_g{\mathcal M}|_U,\,\varphi_{g,1}{\mathcal M}|_U\in{\mathcal M}HW(U).
\leqno(2.3.2)$$
It is easy to show the following (see \cite[Proposition 5.1.14]{mhp}):
\par
\noindent
(2.3.3)\,\,\,\, ${\mathcal M}HW(X)$ and ${\mathcal M}H(X,w)$ ($w\in{\mathbf Z}$) are abelian categories such that
\par
\quad\quad\quad
any morphisms are strictly compatible with $(F,W)$ and $F$ respectively.
\par
This assertion is proved by using
$${\rm Hom}({\mathcal M},{\mathcal M}')=0\,\,\,\hbox{if}\,\,\,{\mathcal M}\in{\mathcal M}H(X,w),\,{\mathcal M}'\in{\mathcal M}H(X,w')\,\,\,\hbox{with}\,\,\,\,w>w'.
\leqno(2.3.4)$$
This is reduced to \cite{th2} by using the assertion that any ${\mathcal M}\in{\mathcal M}H_Z(X,w)$ is generically a variation of Hodge structure of weight $w-d_Z$.
(The latter is an easy part of Theorem~(1.3).)
\par
These assertions hold without assuming polarizability (see \cite[Section 5.1]{mhp}), and imply
$${\rm can}:\psi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M}\,\,\,\,\hbox{is strictly surjective for $(F,W)$}.
\leqno(2.3.5)$$
This assures the strict surjectivity in (2.1.5). It also gives a reason why condition~(2.2.4) is imposed only for $\psi$.
\par
We prove Theorems~(1.3) and (1.4) by induction on $\dim Z$ using the following rather technical key theorem:
\par
\noindent
{\bf Theorem~2.4.} {\it Let $f:X\to Y$ be as in Theorem~$(1.4)$. Let $g\in\Gamma(Y,{\mathcal O}_Y)$. Put $h:=fg$. Let ${\mathcal M}=((M,F),K)\in{\mathcal M}F(X,{\mathbf Q})$ with strict support $Z\not\subset h^{-1}(0)$. Let $S:K\otimes K\to{\mathcal D}D_X(-w)$ be a perfect pairing compatible with the filtration $F$ as in condition~{\rm (ii)} in Case $2$ of $(2.2)$.
Assume that $(M,F)$ is $h$-admissible, that we have
$${\rm Gr}_i^W\psi_h{\mathcal M},\,{\rm Gr}_i^W\varphi_{h,1}{\mathcal M}\in{\mathcal M}H(X,i)\quad(\forall\,i\in{\mathbf Z}),
\leqno(2.4.1)$$
with $W$ as in $(2.2.5)$, and that the conclusions of Theorem~$(1.4)$ are satisfied for the $N$-primitive part
$${}^{P_N}{\rm Gr}_{w-1+i}^W\psi_h{\mathcal M}\,\,\,\,\,\hbox{with polarization}\,\,\,\,\,\psi_hS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)\quad(i\geqslant 0).
\leqno(2.4.2)$$
Then
\par
\noindent
{\rm (i)} The filtered direct image $\,f_*^{{\mathcal D}}(M,F)$ is strict on a sufficiently small neighborhood of $g^{-1}(0)$ $($in the classical topology$)$, and the ${\mathcal H}^if_*^{{\mathcal D}}(M,F)$ are $g$-admissible.
\par
\noindent
{\rm (ii)} The shifted direct image filtration $f_*^{{\mathcal D}}W[j]$ induces the monodromy filtration shifted by $w+j-1$ on
$$\psi_g{\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M}={\mathcal H}^jf_*^{{\mathcal D}}\psi_h{\mathcal M}\quad(\forall\,j\in{\mathbf Z}),
\leqno(2.4.3)$$
which is denoted by $W$ so that
$${\rm Gr}_i^W\psi_g{\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M}\in{\mathcal M}H(Y,i)\quad(\forall\,i\in{\mathbf Z}).
\leqno(2.4.4)$$
\par
\noindent
{\rm (iii)} We have isomorphisms on a sufficiently small neighborhood of $g^{-1}(0)$:
$$\ell^j:{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M}\buildrel{\sim}\over\longrightarrow({\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M})(j)\quad(\forall\,j\geqslant 0).
\leqno(2.4.5)$$
\par
\noindent
{\rm (iv)} On the bi-primitive part ${}^{P_{\ell}}{}^{P_N}{\rm Gr}_{w-1-j+i}^W\psi_g{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M}$ defined by
$${\rm Ker}\,\ell^{j+1}\cap{\rm Ker}\,N^{i+1}\subset{\rm Gr}_{w-1-j+i}^W\psi_g{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M},
\leqno(2.4.6)$$
we have a polarization of Hodge module given by the induced pairing}
$$(-1)^{j(j-1)/2}\hskip1pt{\rm Gr}^W\psi_g{\mathcal H} f_*S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i\ell^j)\quad(\forall\,i,j\geqslant 0).
\leqno(2.4.7)$$
\par
We first explain how the above theorem is used in the proofs of Theorems~(1.3) and (1.4).
\par
\noindent
{\bf 2.5.~Outline of proofs of Theorems~(1.3) and (1.4).} We show the assertions by increasing induction on the dimension of the strict support $Z$.
The order of the induction is rather complicated as is explained below:
\par
Assume Theorems~(1.3) and (1.4) are proved for $\dim Z<d$. Then Theorem~(1.4) for $\dim Z=d$ with $f(Z)\ne pt$ follows from Theorem~(2.4) where the decomposition by strict support follows from (2.1.11) (which is satisfied by using \cite[Corollary 4.2.4]{mhp}). Using this, we can reduce the proof of Theorem~(1.3) to the normal crossing case where the singular locus of $M$ is a divisor with normal crossings. Here the filtration $F$ can be defined by (1.3.2), and we have to show that the conditions for Hodge modules are satisfied for any locally defined functions $g$. This can be further reduced by using Theorem~(2.4) (but not Theorem~(1.4)) to the case where the union of $g^{-1}(0)$ and the singular locus of $M$ is a divisor with normal crossings. Here we can calculate explicitly the nearby and vanishing cycle functors together with the induced pairing (although these are rather complicated), see \cite{mhm} for details.
\par
Now we have to prove Theorem~(1.4) in the case $\dim Z=d$ and $f(Z)=pt$.
Here we may assume that $X={\mathbf P}^n$, $Y=pt$. Let $\pi:\widetilde{X}\to X$ be the blow-up along the intersection $Z$ of two sufficiently general hyperplanes of ${\mathbf P}^n$. By Theorem~(1.3) for $\dim Z=d$, there is
$$\widetilde{\mathcal M}=((\widetilde{M},F),\widetilde{K})\in{\mathcal M}H(\widetilde{X},w)\quad\hbox{with}\quad\widetilde{\mathcal M}|_{\widetilde{X}\setminus\pi^{-1}(Z)}={\mathcal M}|_{X\setminus Z}.$$
We have a polarization $\widetilde{S}$ of $\widetilde{\mathcal M}$ extending the restriction of a polarization $S$ of ${\mathcal M}$ to the complement of $Z\subset X$.
\par
By Theorem~(1.4) for $f(Z)\ne pt$, we see that ${\mathcal M}$ is a direct factor of ${\mathcal H}^0\pi_*\widetilde{\mathcal M}$. Moreover, its complement is isomorphic to ${\mathcal M}_Z(-1)$ since $Z$ is sufficiently general, where ${\mathcal M}_Z$ is the noncharacteristic restriction of ${\mathcal M}$ to $Z$; more precisely,
$${\mathcal M}_Z=((M_Z,F),K_Z[-2]),$$
with $(M_Z,F)$ and $K_Z$ respectively the noncharacteristic restrictions of $(M,F)$ and $K$ to $Z$.
\par
So we have the direct sum decomposition
$${\mathcal H}^0\pi_*\widetilde{\mathcal M}={\mathcal M}\oplus{\mathcal M}_Z(-1),
\leqno(2.5.1)$$
with ${\mathcal H}^j\pi_*\widetilde{\mathcal M}=0$ for $j\ne 0$. Here $\pi_*\widetilde{S}$ is compatible with this decomposition, and its restriction to ${\mathcal M}$ coincides with $S$ (by using (A.3.1) below together with the remark after (1.2.2)).
We then get the direct sum decompositions
$$H^j(\widetilde{X},\widetilde{\mathcal M})=H^j(X,{\mathcal M})\oplus H^{j-2}(Z,{\mathcal M}_Z)(-1)\quad\quad(j\in{\mathbf Z}),
\leqno(2.5.2)$$
where $H^j(X,{\mathcal M}):=H^j(a_X)_*{\mathcal M}$ for the structure morphism $a_X:X\to pt$, and similarly for $H^j(\widetilde{X},\widetilde{\mathcal M})$, etc.
The above argument implies that these direct sum decompositions are compatible with the induced pairing by $\widetilde{S}$, and moreover its restriction to the first factors coincides with the induced pairing by $S$.
\par
We have the Lefschetz pencil
$$p:\widetilde{X}\to{\mathbf P}^1,$$
and Theorem~(1.4) for $f(Z)\ne pt$ can be applied to this.
So the proof of Theorem~(1.4) for $\dim Z=d$ and $f(Z)=pt$ is reduced to the case $X={\mathbf P}^1$, where we can apply Zucker's result \cite{Zu}.
(Note that Zucker gave an ``algebraic description" of the Hodge filtration $F$ using holomorphic differential forms, see \cite[Corollary 6.15 and Proposition 9.1]{Zu}.)
However, we have to make some more calculations about polarizations on primitive classes related to the Leray spectral sequence, etc.\ (which are not quite trivial, see \cite[Sections 5.3.8\,-11]{mhp} for details).
\par
The Lefschetz pencil is also used in an essential way for the proof of the Weil conjecture and the hard Lefschetz theorem in \cite{weil}.
The reduction to the curve case is by analogy with the $\ell$-adic case in some sense.
\par
\noindent
{\bf Remarks.} (i) For the calculation of the nearby and vanishing cycle functors in the normal crossing case, we use the so-called ``combinatorial description" of mixed Hodge modules with normal crossing singular loci. Here the {\it compatibility} of the $d_X+1$ filtrations $F,V_{(i)}$ ($i\in[1,d_X])$ is quite essential, where $V_{(i)}$ is the $V$-filtration along $x_i=0$, and the $x_i$ are local coordinates compatible with the singular locus of a mixed Hodge module. Note, however, that this description {\it never} gives an equivalence of categories (consider, for instance, the case of variations of mixed Hodge structure having no singular loci; in fact, this ``Hodge-theoretic combinatorial description" gives only the information of the fiber at the fixed point). Nevertheless it is quite useful when it is combined with the Verdier-type extension theorem \cite{Ve2} inductively, see also \cite[Proposition 3.13]{mhm}, etc.
\par
It seems rather easy to predict Hodge-theoretic combinatorial formulas for the nearby and vanishing cycle functors together with the induced pairing in the normal crossing case. These are implicitly related with Beilinson's construction of nearby cycles \cite{Bei2}, see \cite{ext} for the mixed Hodge module case. It seems more difficult to prove that these formulas actually hold, see for instance \cite{dual}.
(Note that the argument in the Appendix of \cite{mhm} was simplified by the writer. The original argument used the reduction to the 2-dimensional case, and was much more complicated.)
\par
(ii) The results of Cattani, Kaplan, Schmid (\cite{CK}, \cite{CKS1}, \cite{CKS2}, \cite{CKS3}) are used in an essential way for the above ``Hodge-theoretic combinatorial description". For instance, the {\it descent lemma} in \cite{CKS1}, \cite{CKS3} is crucial to the ``combinatorial description" of the pure Hodge module corresponding to the intersection complex. (This lemma is called ``the vanishing cycle theorem" in \cite{KK4}, which does not seem to be contained in \cite{KK2}.)
\par
(iii) It is still an open problem whether the Hodge structure obtained by the $L^2$ method in \cite{CKS3}, \cite{KK4} has an ``algebraic description" using holomorphic differential forms, and in the algebraic case, whether it coincides with the Hodge structure obtained by the theory of mixed Hodge modules. (It has been expected that the detailed versions of \cite{KK3}, \cite{KK5} would give a positive answer to these problems, and some people thought that \cite{KK2}, \cite{KK4} were written for this purpose. As for \cite{BrSY}, it seems rather difficult to apply it to filtered $L^2$-sheaf complexes.) We have to assume that polarizable variations of Hodge structure are geometric ones in \cite{toh}, and also in \cite{PS} for the analytic case.
\par
(iv) In the curve case, the answer to the above first problem was already given in \cite[Corollary 6.15 and Proposition 9.1]{Zu}, and the second problem is then easy to solve, see \cite[Section 5.3.10]{mhp}.
\par
(v) It does not seem easy to generalize the results in \cite{CKS3}, \cite{KK4} to the case of a ``tubular neighborhood" of a subvariety in a smooth complex algebraic variety, since we would have to take a {\it complete} metric to get a Hodge structure by applying a standard method.
\par
\noindent
{\bf 2.6.~Outline of proof of Theorem~(2.4).}
We have to study the weight spectral sequence
$$E_1^{-k,j+k}={\mathcal H}^jf_*{\rm Gr}_k^W\psi_h{\mathcal M}\Longrightarrow{\mathcal H}^jf_*\psi_h{\mathcal M},
\leqno(2.6.1)$$
and similarly with $\psi_h$ replaced by $\varphi_{h,1}$.
This spectral sequence is defined in ${\mathcal M}F(Y,{\mathbf Q})$ if the differentials $d_r$ ($r\geqslant 1$) are strictly compatible with the filtration $F$ inductively, see \cite[1.3.6]{mhp}. The last assertion for $r=1$ follows from (2.3.3), since $d_1$ preserves the Hodge filtration $F$ and also the weight filtration $W$ (the latter is shifted depending on the degree $j$ of the cohomology sheaf ${\mathcal H}^jf_*\psi_h{\mathcal M}$). For $r\geqslant 2$, we can show $d_r=0$ inductively by using (2.3.4). In particular, the $E_2$-degeneration of the spectral sequence follows. The above argument also implies the strictness of the filtration $F$ on the direct image $f_*^{{\mathcal D}}\psi_h{\mathcal M}$ by using the theory of compatible filtrations ({\it loc.~cit.}). A similar assertion holds for $\varphi_{g,1}{\mathcal M}$. These assertions imply the assertion~(i) of Theorem~(2.4) by using the completion by the $V$-filtration, see \cite[Section 3.3]{mhp} for details.
\par
To show the remaining assertions, we have to show that the ``bi-symmetry" for the actions of $\ell$, $N$ on the $E_1$-term is preserved on the $E_2$-terms, and moreover the ``bi-primitive part" of the $E_2$-term is {\it represented} by the bi-primitive part of the $E_1$-term. Using the strict support decomposition together with the easy part of Theorem~(1.3), we can reduce the assertions to the case where the spectral sequence is defined in the category of Hodge structures. Theorem~(2.4) then follows from the theory of bi-graded Hodge structures of Lefschetz type as in the proof of \cite[Proposition 4.2.2]{mhp} (see also \cite{pos} where a slightly better explanation is given).
\par
\noindent
{\bf Remarks.} (i) Signs were not determined in \cite[Proposition 4.2.2]{mhp}, since this was a very subtle issue at that time (see for instance \cite[Section 2.2.5]{th2} where a problem of sign for Chern classes was raised.) In the case of the nearby cycles of the constant sheaf ${\mathbf Q}_X$ in the normal crossing case (that is, in the case of Steenbrink \cite{St}), the primitive part
$${}^P{\rm Gr}^W_{d_X-1+i}\psi_{h,1}({\mathbf Q}_X[d_X-1])$$
is the direct sum of the constant sheaves supported on intersections of $i+1$ irreducible components of $h^{-1}(0)$. In particular, its support is pure dimensional, and the signs should depend only on this dimension. Then there would be no problem for using \cite[Proposition 4.2.2]{mhp} in this case. In the general case, however, the situation is much more complicated. In fact, there may be direct factors $({}^P{\rm Gr}^W_k\psi_h{\mathcal M})_Z$ of the primitive part ${}^P{\rm Gr}^W_k\psi_h{\mathcal M}$ which have strict supports $Z$ of {\it various dimensions}, and surject to the {\it same} closed subvariety of $Y$ (for instance, if $g^{-1}(0)=\{0\}$). In this case, we have to determine exactly the sign for each direct factor so that the {\it positivity} becomes compatible among the direct images of direct factors with various strict support dimensions.
\par
(ii) Precise signs are written in \cite[Lemma 5.3.6]{mhp} following Deligne's sign system \cite{sign} (see also \cite{GN}). Note that the conclusion of Lemma 5.3.6 follows from the {\it proof} of \cite[Proposition 4.2.2]{mhp}, since the hypothesis of the lemma is stronger than that of the proposition and the conclusion is essentially the same (that is, the $E_2$-term is bi-symmetric for $\ell,N$ and its bi-primitive part is represented by the bi-primitive part of the $E_1$-term). A pairing on the $E_1$-term of the Steenbrink weight spectral sequence \cite{St} satisfying Deligne's sign system \cite{sign} is constructed in \cite{GN} although its relation with the pairing induced by the nearby cycle functor does not seem to be clear (for instance, an isomorphism like (1.4.10) does not seem to be used there).
\par
(iii) An argument showing the decomposition (2.1.11) is noted in \cite[Lemma 5.2.5]{mhp} which can replace \cite[Corollary 4.2.4]{mhp} if one is quite sure that the signs in the lemma really hold in the case one is considering. In fact, the assertion is very sensitive to the signs: if the signs are modified, then the role of $H$ and $H'$ can be reversed, and we may get a decomposition of $H$ instead of $H'$. (There is a misprint in the last line of the lemma: $S'$ should be $H'$.) If one is not completely sure whether the signs of the lemma really hold in the case under consideration, then it is still possible to use \cite[Corollary 4.2.4]{mhp} at least in the constant sheaf case with normal crossing singularities as is explained in Remark~(i) above.
\par
\par
\vbox{\centerline{\bf 3. Mixed Hodge modules}
\par
\noindent
In this section we explain a simplified definition of mixed Hodge modules following \cite{def}.}
\par
\noindent
{\bf 3.1.~Admissibility condition for weakly mixed Hodge modules.}
Let $X$ be a smooth complex algebraic variety, and $g\in\Gamma(X,{\mathcal O}_X)$. For a weakly mixed Hodge module
$$({\mathcal M},W)=((M;F,W),(K,W))\in{\mathcal M}HW(X),$$
(see (2.3.1)), set
$$(\widetilde{M};F,W)=(i_g)_*^{{\mathcal D}}(M;F,W),$$
where $W$ is not shifted under the direct image. We have the filtration $V$ on $\widetilde{M}$ as in (B.6) below. We define the filtration $L$ on the nearby and vanishing cycle functors by
$$L_k\psi_g{\mathcal M}:=\psi_gW_{k+1}{\mathcal M},\quad L_k\varphi_{g,1}{\mathcal M}=\varphi_{g,1}W_k{\mathcal M}\quad(\forall\,k\in{\mathbf Z}).
\leqno(3.1.1)$$
Here the filtration $F$ can be neglected when the filtration $L$ is defined.
\par
We say that ${\mathcal M}$ is {\it admissible} along $g=0$ (or $g$-{\it admissible} for short) if the following two conditions are satisfied:
$$\hbox{The three filtrations $F,W,V$ on $\widetilde{M}$ are compatible filtrations (see (C.2) below).}
\leqno(3.1.2)$$
\vskip-7mm
$$\hbox{There is the relative monodromy filtration $W$ on $(\psi_g{\mathcal M},L)$, $(\varphi_{g,1}{\mathcal M},L)$.}
\leqno(3.1.3)$$
The last condition means that there is a unique filtration $W$ on $\psi_g{\mathcal M}$ satisfying the following two conditions:
$$\aligned N(W_i\psi_g{\mathcal M})&\subset W_{i-2}\psi_g{\mathcal M}(-1)\quad(\forall\,i\in{\mathbf Z}),\\ N^i:{\rm Gr}^W_{k+i}{\rm Gr}^L_k\psi_g{\mathcal M}&\buildrel{\sim}\over\longrightarrow{\rm Gr}^W_{k-i}{\rm Gr}^L_k\psi_g{\mathcal M}(-i)\quad(\forall\,i\in{\mathbf N},\,k\in{\mathbf Z}),\endaligned
\leqno(3.1.4)$$
and similarly for $\varphi_{g,1}{\mathcal M}$, see \cite{weil}, \cite{StZ}.
(Here the filtration $F$ can be forgotten. However, the last isomorphism is compatible with $F$ by (2.3.3) if $(\psi_g{\mathcal M},W),(\varphi_{g,1}{\mathcal M},W)\in{\mathcal M}HW(X)$.)
\par
\noindent
{\bf Remark.} Let $X$ be a smooth complex variety, and $\overline{X}$ be a smooth compactification such that $D:=\overline{X}\setminus X$ is a divisor with simple normal crossings. Let
$$({\mathcal M},W)=\bigl((M;F,W),(H,W)\bigr)$$
be a variation of mixed Hodge structure on $X$ such that ${\rm Gr}_k^W{\mathcal M}$ are polarizable pure Hodge structures of weight $k$ for any $k\in{\mathbf Z}$, where $(M;F,W)$ is the underlying bi-filtered ${\mathcal O}_X$-module, and $(H,W)$ is the underlying filtered ${\mathbf Q}$-local system.
\par
Assume the local monodromies of $H$ are all {\it unipotent}. Let $\overline{M}$ be the {\it canonical}\, Deligne extension of $M$ over $\overline{X}$ (that is, the residues of the logarithmic connections are all {\it nilpotent}, see \cite{eq}). The filtrations $F,W$ are naturally extended on $\overline{M}$ by taking the intersection with $\overline{M}$ of the open direct images of $F,W$ under the inclusion $j:X\hookrightarrow\overline{X}$. Let $D_i$ be the irreducible components of $D$ ($i\in[1,r]$). Let $T_i$ be the local monodromy around a general point of $D_i$, which is defined on the fiber $H_{x_0}$ at a base point $x_0\in X$, and is unipotent by hypothesis. (This is well-defined up to a conjugate compatible with $W$.) Set $N_i:=(2\pi i)^{-1}\log T_i$. We denote by $L$ the filtration $W$ on $H_{x_0}$.
\par
Under the above notation and assumption, $({\mathcal M},W)$ is an {\it admissible variation of mixed Hodge structure} in the sense of \cite{StZ}, \cite{Kadm} if and only if the following two conditions are satisfied:
$${\rm Gr}_F^p{\rm Gr}_k^W\overline M\,\,\hbox{are locally free ${\mathcal O}_{\overline X}$-modules for any $p,k\in{\mathbf Z}$.}
\leqno(3.1.5)$$
\vskip-7mm
$$\hbox{There is the relative monodromy filtration on $(H_{x_0},L)$ for each $N_i$.}
\leqno(3.1.6)$$
(The last condition means that (3.1.4) holds with $\psi_g{\mathcal M}$ replaced by $H_{x_0}$.) In fact, this equivalence follows from \cite[Theorem 4.5.2]{Kadm}.
\par
In the non-unipotent local monodromy case, let $\rho:X'\to X$ be a generically finite morphism of complex algebraic varieties such that $\rho^*{\mathcal M}$ has unipotent local monodromies around the divisor at infinity of a compactification of $X'$ (by replacing $X$ with a non-empty open subvariety if necessary). Then ${\mathcal M}$ is an admissible variation if and only if $\rho^*{\mathcal M}$ is. (In fact, we may assume that $\rho$ is finite \'etale by shrinking $X$. Then we can take the direct image.)
\par
\noindent
{\bf 3.2.~Well-definedness of open direct images.}
Let $D$ be a locally principal divisor on a smooth complex algebraic variety $X$. Set $X':=X\setminus D$ with $j:X'\hookrightarrow X$ the inclusion.
We say that the open direct images $j_!,j_*$ are well-defined for ${\mathcal M}'\in{\mathcal M}HW(X')$, if there are
$$\aligned&\quad\quad{\mathcal M}'_!,\,{\mathcal M}'_*\in{\mathcal M}HW(X),\,\,\,\,\hbox{satisfying}\\{\rm rat}({\mathcal M}'_!)&=j_!K',\,\,\,{\rm rat}({\mathcal M}'_*)={\mathbf R} j_*K'\quad\hbox{with}\quad K':={\rm rat}({\mathcal M}'),\endaligned
\leqno(3.2.1)$$
(see (1.1.6) for rat), and moreover the following condition is satisfied:
$$\hbox{${\mathcal M}'_!$, ${\mathcal M}'_*$ are $g$-admissible for any $g\in\Gamma(U,{\mathcal O}_U)$ with $g^{-1}(0)_{\rm red}=D_{\rm red}\cap U$.}
\leqno({\rm A3})$$
Here $U$ is any open subvariety of $X$.
If the above condition is satisfied, we then define
$$j_!{\mathcal M}':={\mathcal M}'_!,\quad j_*{\mathcal M}':={\mathcal M}'_*.
\leqno(3.2.2)$$
If ${\mathcal M}'=j^{-1}{\mathcal M}$ with ${\mathcal M}\in{\mathcal M}HW(X)$ and ${\mathcal M}$ is $g$-admissible, then we have the canonical morphisms (see \cite[Proposition~2.11]{mhm})
$$j_!j^{-1}{\mathcal M}\to{\mathcal M},\quad{\mathcal M}\to j_*j^{-1}{\mathcal M}.
\leqno(3.2.3)$$
\par
\noindent
{\bf 3.3. Definition of mixed Hodge modules.} Let $X$ be a smooth complex algebraic variety. The category of mixed Hodge modules ${\mathcal M}HM(X)$ is the abelian full subcategory of ${\mathcal M}HW(X)$ in (2.3) defined by increasing induction on the dimension $d$ of the support as follows:
\par
For ${\mathcal M}\in{\mathcal M}HW(X)$ with ${\rm supp}\,{\mathcal M}=Z$, we have ${\mathcal M}\in{\mathcal M}HM(X)$ if and only if, for any $x\in X$, there is a Zariski-open neighborhood $U_x$ of $x$ in $X$ together with $g_x\in\Gamma(U_x,{\mathcal O}_{U_x})$ such that
$$\dim Z\cap U_x\cap g_x^{-1}(0)<\dim Z,$$
$Z'_x:=Z\cap U_x\setminus g_x^{-1}(0)$ is smooth, and moreover the following two conditions are satisfied:
$$\hbox{${\mathcal M}|_{Z'_x}$ is an admissible variation of mixed Hodge structure.}
\leqno(3.3.1)$$
$$\hbox{${\mathcal M}|_{U_x}$ is $g_x$-admissible, and $\varphi_{g_x,1}{\mathcal M}|_{U_x}\in{\mathcal M}HM(U_x)$.}
\leqno(3.3.2)$$
More precisely, (3.3.1) means that ${\mathcal M}|_{U'_x}$ is isomorphic to the direct image of an admissible variation of mixed Hodge structure on $Z'_x$ by the closed embedding
$$i_{Z'_x}:Z'_x\hookrightarrow U'_x:=U_x\setminus g_x^{-1}(0).$$
\par
If $Z=\{x\}$ for $x\in X$, then we set
$${\mathcal M}HM_{\{x\}}(X):={\mathcal M}HW_{\{x\}}(X)={\rm MHS}({\mathbf Q}),
\leqno(3.3.3)$$
where the first and second categories are respectively full subcategories of ${\mathcal M}HM(X)$ and ${\mathcal M}HW(X)$ consisting of objects supported on $x$, and the last one is the category of graded-polarizable mixed ${\mathbf Q}$-Hodge structures \cite{th2}. (Here the direct image by $\{x\}\hookrightarrow X$ is used.)
\par
This definition is justified by the following (see \cite[Theorem 1]{def}).
\par
\noindent
{\bf Theorem~3.4.} {\it Conditions {\rm (3.3.1--2)} are independent of the choice of $U_x$, $g_x$. More precisely, if they are satisfied for some $U_x$, $g_x$ for each $x\in Z$, then $(3.3.2)$ is satisfied for any $U_x$, $g_x$, and $(3.3.1)$ is satisfied in case ${\rm rat}({\mathcal M}')$ is a local system up to a shift of complex, see $(1.1.6)$ for ${\rm rat}$.}
\par
We have moreover the following (see \cite[Theorem 2]{def}).
\par
\noindent
{\bf Theorem~3.5.} {\it The categories ${\mathcal M}HM(X)$ for smooth complex algebraic varieties $X$ are stable by the canonically defined cohomological functors ${\mathcal H}^jf_*$, ${\mathcal H}^jf_!$, ${\mathcal H}^jf^*$, ${\mathcal H}^jf^!$, $\psi_g$, $\varphi_{g,1}$, $\boxtimes$, ${\mathcal D}D$, where $f$ is a morphism of smooth complex algebraic varieties and $g\in\Gamma(X,{\mathcal O}_X)$. Moreover these functors are compatible with the corresponding functors of the underlying ${\mathbf Q}$-complexes via the forgetful functor {\rm rat} in $(1.1.6)$.}
\par
The proofs of these theorems systematically use Beilinson's maximal extension together with the stability by subquotients. The well-definedness of open direct images in (3.3) is reduced to the normal crossing case, see \cite{def}.
Combining Theorem~(3.5) with the construction in \cite{mhm}, we can get the following (see \cite[Corollary 1]{def}).
\par
\noindent
{\bf Theorem~3.6.} {\it There are canonically defined functors $f_*$, $f_!$, $f^*$, $f^!$, $\psi_g$, $\varphi_{g,1}$, $\boxtimes$, ${\mathcal D}D$, $\otimes$, ${{\mathcal H}}om$ between the bounded derived categories $D^b{\mathcal M}HM(X)$ for smooth complex algebraic varieties $X$ so that we have the canonical isomorphisms $H^jf_*={\mathcal H}^jf_*$, etc., where $f$ is a morphism of smooth complex algebraic varieties, $g\in\Gamma(X,{\mathcal O}_X)$, $H^j$ is the usual cohomology functor of the derived categories, and ${\mathcal H}^jf_*$, etc.\ are as in Theorem~$(3.5)$. Moreover the above functors between the $D^b{\mathcal M}HM(X)$ are compatible with the corresponding functors of the underlying ${\mathbf Q}$-complexes via the forgetful functor {\rm rat}.}
\par
\noindent
{\bf 3.7. Some notes on references about applications.} Since there is no more space to discuss applications of mixed Hodge modules, we indicate some references here; the lists below are not intended to be complete.
\par
For applications to algebraic cycles, see
\cite{BRS}, \cite{BFNP}, \cite{MuS}, \cite{NS}, \cite{RS1}, \cite{RS2}, \cite{int}, \cite{HC1}, \cite{HC2}, \cite{HTC}, \cite{ari}, \cite{rcy}, \cite{FCh}, \cite{Dir}, \cite{Tho}, \cite{HCh}, \cite{SS2}, etc.
Some of them are related to normal functions. For the latter, see also
\cite{ext}, \cite{anf}, \cite{spr}, \cite{SS1}, \cite{Schn}, etc.
\par
For results related to mixed Hodge structures on the cohomology of algebraic varieties, see
\cite{BDS}, \cite{DS1}, \cite{DS2}, \cite{DS3}, \cite{DS5}, \cite{DS6}, \cite{DSW}, \cite{DuS}, \cite{OS}, \cite{PS}, \cite{cmp}, \cite{Fil}, \cite{SZ}, etc.
\par
About Bernstein-Sato polynomials, Steenbrink spectra, and multiplier ideals, see
\cite{Bu}, \cite{BMS}, \cite{BS1}, \cite{BS2}, \cite{BSY}, \cite{DMS}, \cite{DMST}, \cite{DS4}, \cite{DS7}, \cite{MP}, \cite{ste}, \cite{rat}, \cite{mic}, \cite{bhyp}, \cite{Mul}, \cite{pow}, etc.
\par
For direct images of dualizing sheaves and vanishing theorems, see \cite{FFT}, \cite{kol}, \cite{Su}, etc.
Concerning Hirzebruch characteristic classes, see
\cite{BrScY}, \cite{MS}, \cite{MSS1}, \cite{MSS2}, \cite{MSS3}, \cite{MSS4}, etc.
\par
\par
\vbox{\centerline{\bf Appendix A. Hypersheaves}
\par
\noindent
In this appendix we review some basics of hypersheaves, see Convention~3.}
\par
\noindent
{\bf A.1.} Let $X$ be a complex algebraic variety or a complex analytic space, and $A$ be a subfield of ${\mathbf C}$. We denote by $D_c^b(X,A)$ the derived category of bounded complexes of $A$-modules with constructible cohomology sheaves, see \cite{Ve1}, etc.
In the algebraic case, we use the classical topology for the sheaf complexes although we assume that {\it stratifications are algebraic}.
\par
The category of hypersheaves ${\mathbf H\mathbf S}(X,A)$ is the {\it full subcategory} of $D_c^b(X,A)$ consisting of objects $K$ satisfying the condition:
$$\dim{\rm supp}\,{\mathcal H}^iK\leqslant-i,\quad\dim{\rm supp}\,{\mathcal H}^i\hskip1pt{\mathcal D}D(K)\leqslant-i\quad(\forall\,i\in{\mathbf Z}).
\leqno({\rm A}.1.1)$$
Here ${\mathcal H}^iK$ is the $i$\,th cohomology sheaf of $K$ in the usual sense, and ${\mathcal D}D(K)$ is the dual of $K$. The latter can be defined by
$${\mathcal D}D(K):={\mathbf R}{\mathcal H} om_A(K,{\mathcal D}D_X),
\leqno({\rm A}.1.2)$$
with ${\mathcal D}D_X$ the dualizing sheaf in $D^b_c(X,A)$. In the {\it smooth} case (by taking an embedding into smooth varieties), it can be defined by
$${\mathcal D}D_X:=A_X(d_X)[2d_X],
\leqno({\rm A}.1.3)$$
with $d_X:=\dim X$.
\par
By \cite{BBD}, ${\mathbf H\mathbf S}(X,A)$ is an {\it abelian category}, and there are canonical cohomological functors
$${}^{\mathfrak m}{\mathcal H}^i:D^b_c(X,A)\to{\mathbf H\mathbf S}(X,A)\quad(i\in{\mathbf Z}),
\leqno({\rm A}.1.4)$$
where the superscript $\,{}^{\mathfrak m}\,$ means the ``middle perversity''.
\par
\noindent
{\bf A.2.~Nearby and vanishing cycles.} Let $g$ be a holomorphic function on an analytic space $X$. Let $\Delta\subset{\mathbf C}$ be a sufficiently small open disk with center $0$, and $\pi:\widetilde{\Delta^*}\to\Delta^*$ be a universal covering of the punctured disk $\Delta^*$. Let $\pi':\widetilde{\Delta^*}\to{\mathbf C}$ be its composition with the inclusion $\Delta^*\hookrightarrow{\mathbf C}$.
Let $X_{\infty}$ be the base change of $X$ by $\pi'$. We denote by $\widetilde{j}:X_{\infty}\to X$ the base change of $\pi'$ by $g$. Set $X_0:=g^{-1}(0)$ with $i_0:X_0\hookrightarrow X$ the canonical inclusion. The nearby and vanishing cycle functors
$$\psi_g,\,\varphi_g:D_c^b(X,A)\to D_c^b(X_0,A)$$
are defined as in \cite{van} by
$$\psi_gK:=i_0^*\,{\mathbf R}\widetilde{j}_*\widetilde{j}^*K,\,\,\,\varphi_gK:=C(i_0^*\,K\to\psi_gK)\quad\hbox{for}\,\,\,\,K\in D_c^b(X,A),
\leqno({\rm A}.2.1)$$
where we take a flasque resolution of $K$ to define ${\mathbf R}\widetilde{j}_*\widetilde{j}^*K$ and also the mapping cone. By definition we have a distinguished triangle
$$i^*K\to\psi_gK\to\varphi_gK\buildrel{+1}\over\to.
\leqno({\rm A}.2.2)$$
\par
The action of the monodromy $T$ is defined by $\gamma^*$ with $\gamma$ the automorphism of $\widetilde{\Delta^*}$ over $\Delta^*$ defined by $z\mapsto z+1$. Here $\widetilde{\Delta^*}$ is identified with $\{z\in{\mathbf C}\mid{\rm Im}\,z>r\}$ for some $r>0$ and $\pi$ is given by $z\mapsto t:=\exp(2\pi iz)$. (This is compatible with the usual definition of the monodromy of a local system $L$ on $\Delta^*$. In fact, $(\gamma^*\sigma)(z_0)=\sigma(z_0+1)$ for $\sigma\in\Gamma(\widetilde{\Delta^*},\pi^*L)$ with $z_0$ a base point of $\widetilde{\Delta^*}$, and the monodromy is given by the composition of canonical isomorphisms: $L_{\pi(z_0)}=(\pi^*L)_{z_0}=\Gamma(\widetilde{\Delta^*},\pi^*L)=(\pi^*L)_{z_0+1}=L_{\pi(z_0)}$.)
There is a nonzero minimal polynomial for $T$ locally on $X$, and this implies the Jordan decomposition $T=T_sT_u$ (with $T_s,T_u$ polynomials in $T$ locally on $X$).
\par
Assume $K\in{\mathbf H\mathbf S}(X,A)$. Set
$${}^{\mathfrak m}\psi_gK:=\psi_gK[-1],\,\,{}^{\mathfrak m}\varphi_gK:=\varphi_gK[-1].
\leqno({\rm A}.2.3)$$
Then
$${}^{\mathfrak m}\psi_gK,\,{}^{\mathfrak m}\varphi_gK\in{\mathbf H\mathbf S}(X_0,A).
\leqno({\rm A}.2.4)$$
This follows for instance from \cite{Kvan}, \cite{Ma3} by using the Riemann-Hilbert correspondence.
\par
In the case $A={\mathbf C}$, this implies the decompositions in the abelian category ${\mathbf H\mathbf S}(X_0,A)$:
$${}^{\mathfrak m}\psi_gK=\hbox{$\bigoplus$}_{\lambda\in{\mathbf C}^*}\,{}^{\mathfrak m}\psi_{g,\lambda}K,\quad{}^{\mathfrak m}\varphi_gK=\hbox{$\bigoplus$}_{\lambda\in{\mathbf C}^*}\,{}^{\mathfrak m}\varphi_{g,\lambda}K,
\leqno({\rm A}.2.5)$$
(which are locally finite direct sum decompositions), where
$${}^{\mathfrak m}\psi_{g,\lambda}K:={\rm Ker}(T_s-\lambda)\subset{}^{\mathfrak m}\psi_gK,\quad{}^{\mathfrak m}\varphi_{g,\lambda}K:={\rm Ker}(T_s-\lambda)\subset{}^{\mathfrak m}\varphi_gK.
\leqno({\rm A}.2.6)$$
\par
In the case $A\subset{\mathbf C}$, we have only the decompositions
$${}^{\mathfrak m}\psi_gK={}^{\mathfrak m}\psi_{g,1}K\oplus{}^{\mathfrak m}\psi_{g,\ne 1}K,\quad{}^{\mathfrak m}\varphi_gK={}^{\mathfrak m}\varphi_{g,1}K\oplus{}^{\mathfrak m}\varphi_{g,\ne 1}K,
\leqno({\rm A}.2.7)$$
which are compatible with the above decompositions after taking the scalar extension by $A\hookrightarrow{\mathbf C}$.
\par
By (A.2.2) we have the canonical isomorphisms
$${}^{\mathfrak m}\psi_{g,\ne1}K\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{g,\ne1}K,\quad{}^{\mathfrak m}\psi_{g,\lambda}K\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{g,\lambda}K\,\,\,\,(\lambda\ne 1,\,A={\mathbf C}),
\leqno({\rm A}.2.8)$$
since the action of $T$ on $i^*K$ is trivial.
\par
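\noindent
{\bf Example.} As a simple illustration of the above definitions, let $X={\mathbf C}$ with coordinate $z$, $g=z^m$ with $m\in{\mathbf Z}_{>0}$, and $K=A_X[1]\in{\mathbf H\mathbf S}(X,A)$. Then $X_0=\{0\}$, and ${}^{\mathfrak m}\psi_gK=\psi_gA_X$ is the skyscraper at the origin whose stalk is the cohomology of the Milnor fiber of $g$, which consists of $m$ points; so ${}^{\mathfrak m}\psi_gK\cong A^m$ with $T$ permuting the $m$ points cyclically. Here $T_u={\rm id}$, the eigenvalues of $T_s$ are the $m$\,th roots of unity, ${}^{\mathfrak m}\psi_{g,1}K\cong A$, and for $A={\mathbf C}$ the decomposition (A.2.5) has $m$ summands of rank one.
\par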
If $K=A_X$ and $X$ is a smooth algebraic variety (or a complex manifold with $X_0$ compact), then the nearby cycle functor $\psi_gA_X$ is also defined by
$$\psi_gA_X={\mathbf R}\rho_*A_{X_c},
\leqno({\rm A}.2.9)$$
where $X_c:=g^{-1}(c)\subset X$ with $c\in{\mathbf C}^*$ sufficiently near $0$, and $\rho:X_c\to X_0$ is an appropriate contraction morphism. The latter is constructed by using an embedded resolution of $X_0\subset X$.
\par
\noindent
{\bf A.3.~Compatibility with the dual functor ${\mathcal D}D$.} We say that a pairing
$$K\otimes_AK'\to{\mathcal D}D_X(k)$$
is a {\it perfect pairing} (with $k\in{\mathbf Z}$) if its corresponding morphism
$$K\to{\mathcal D}D(K')(k)={\mathbf R}{\mathcal H} om_A(K',{\mathcal D}D_X)(k)$$
is an isomorphism in $D^b_c(X,A)$. Here ${\mathcal D}D_X$ is as in (A.1.2), and the above correspondence comes from the isomorphism
$${\rm Hom}(K\otimes K',{\mathcal D}D_X(k))={\rm Hom}(K,{\mathbf R}{\mathcal H} om_A(K',{\mathcal D}D_X(k))).
\leqno({\rm A}.3.1)$$
The Tate twist $(k)$ is defined as in \cite[Definition 2.1.13]{th1}, see also a remark after (2.1.10). This can be neglected if $i=\sqrt{-1}$ is chosen. However, this twist is quite useful in order to keep track of ``weight". In fact, if $K=K'$ and it has pure weight $w$, then ${\mathcal D}D K$ should have weight $-w$, and the above $k$ must be equal to $-w$, since the Tate twist $(k)$ changes the weight by $-2k$ ({\it loc.~cit.}).
\par
Assume there is a perfect pairing
$$S:K\otimes_A K'\to{\mathcal D}D_X(-w)\quad\hbox{for}\,\,\,K,K'\in{\mathbf H\mathbf S}(X,A).$$
It induces a canonical pairing
$$\psi_gS:\psi_gK\otimes_A\psi_gK'\to\psi_g{\mathcal D}D_X(-w).
\leqno({\rm A}.3.2)$$
\par
Assume $X$ is {\it smooth} (by taking an embedding into a smooth variety), and $X_0$ is also {\it smooth} (by replacing $K$ with its direct image under the graph embedding by $g$). Then we have
$$\psi_g{\mathcal D}D_X(-w)=A_{X_0}(d_X-w)[2d_X]={\mathcal D}D_{X_0}(1-w)[2],
\leqno({\rm A}.3.3)$$
and (A.3.2) induces a canonical perfect pairing
$${}^{\mathfrak m}\psi_gS:{}^{\mathfrak m}\psi_gK\otimes_A{}^{\mathfrak m}\psi_gK'\to{\mathcal D}D_{X_0}(1-w).
\leqno({\rm A}.3.4)$$
Here some sign appears, and this is closely related to the sign in (1.4.8).
\par
The above construction is compatible with the monodromy $T$, that is,
$${}^{\mathfrak m}\psi_gS={}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(T\otimes T).$$
Since $T^e$ is unipotent for some $e\in{\mathbf Z}_{>0}$, this implies
$${}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(N\otimes id)=-{}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N),\quad{}^{\mathfrak m}\psi_gS={}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(T_s\otimes T_s),
\leqno({\rm A}.3.5)$$
where $T=T_sT_u$ is the Jordan decomposition, and $N:=(2\pi i)^{-1}\log T_u$.
\par
We then get the induced perfect pairings
$$\aligned{}^{\mathfrak m}\psi_{g,1}S:{}^{\mathfrak m}\psi_{g,1}K&\otimes_A{}^{\mathfrak m}\psi_{g,1}K'\to{\mathcal D}D_{X_0}(1-w),\\{}^{\mathfrak m}\psi_{g,\ne1}S:{}^{\mathfrak m}\psi_{g,\ne1}K&\otimes_A{}^{\mathfrak m}\psi_{g,\ne1}K'\to{\mathcal D}D_{X_0}(1-w),\\{}^{\mathfrak m}\psi_{g,\lambda}S:{}^{\mathfrak m}\psi_{g,\lambda}K&\otimes_A{}^{\mathfrak m}\psi_{g,\lambda^{-1}}K'\to{\mathcal D}D_{X_0}(1-w)\quad(A={\mathbf C}).\endaligned
\leqno({\rm A}.3.6)$$
\par
For the vanishing cycle functor $\varphi_g$, we have the induced perfect pairing
$${}^{\mathfrak m}\varphi_{g,1}S:{}^{\mathfrak m}\varphi_{g,1}K\otimes_A{}^{\mathfrak m}\varphi_{g,1}K'\to{\mathcal D}D_{X_0}(-w),
\leqno({\rm A}.3.7)$$
satisfying
$${}^{\mathfrak m}\varphi_{g,1}S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,({\rm can}\otimes id)={}^{\mathfrak m}\psi_{g,1}S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes{\rm Var}),
\leqno({\rm A}.3.8)$$
where the morphisms
$${\rm can}:{}^{\mathfrak m}\psi_{g,1}K\to{}^{\mathfrak m}\varphi_{g,1}K,\quad{\rm Var}:{}^{\mathfrak m}\varphi_{g,1}K'\to{}^{\mathfrak m}\psi_{g,1}K'(-1),
\leqno({\rm A}.3.9)$$
are constructed in \cite[Section 5.2.1]{mhp}. (These correspond respectively to the morphisms $-{\rm Gr}_V\partial_t$, ${\rm Gr}_Vt$ in (B.6.8) below if $K={\mathcal D}R_X(M)$, $K'={\mathcal D}R_X(M')$ with $A={\mathbf C}$.) We have
$${\rm Var}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm can}=N,\,\,\,{\rm can}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm Var}=N.
\leqno({\rm A}.3.10)$$
\par
Note that the target of (A.3.7) is different from that of (A.3.6) by the Tate twist, and ${}^{\mathfrak m}\varphi_{g,\ne 1}S$ is given by ${}^{\mathfrak m}\psi_{g,\ne 1}S$ together with the isomorphism (A.2.8).
The construction of (A.3.7) is not quite trivial, see \cite[Sections 5.2.1 and 5.2.3 and Lemma 5.2.4]{mhp}. For instance, we used there the isomorphism
$$i^!K'=[i^*K'\to\psi_{g,1}K'\buildrel{-N}\over\longrightarrow\psi_{g,1}K'(-1)],
\leqno({\rm A}.3.11)$$
where $i:X_0\hookrightarrow X$ is the inclusion. If we replace this with
$$i^!K'=[i^*K'\to\psi_{g,1}K'\buildrel{id-T}\over\longrightarrow\psi_{g,1}K'],
\leqno({\rm A}.3.12)$$
then (A.3.8) would hold with Var, $N$ respectively replaced by var, $T-id$, where the Tate twist should be omitted (since $T-id$ is not compatible with the weight structure).
\par
\par
\vbox{\centerline{\bf Appendix B. ${\mathcal D}$-modules}
\par
\noindent
In this appendix we review some basics of ${\mathcal D}$-modules.}
\par
\noindent
{\bf B.1.~Holonomic ${\mathcal D}$-modules.} Let $X$ be a complex manifold of dimension $d_X$, and $M$ be a coherent left ${\mathcal D}_X$-module. This means that $M$ has locally a finite presentation
$$\hbox{$\bigoplus$}^p\,{\mathcal D}_U\to\hbox{$\bigoplus$}^q\,{\mathcal D}_U\to M|_U\to 0,$$
over sufficiently small open subsets $U\subset X$.
(This is equivalent to the condition that $M$ is quasi-coherent over ${\mathcal O}_X$ and is locally finitely generated over ${\mathcal D}_X$.)
\par
A filtration $F$ on $M$ is called a {\it good filtration} if $(M,F)$ satisfies
$$(F_p{\mathcal D}_X)\hskip1pt(F_qM)\subset F_{p+q}M\quad(p\in{\mathbf N},\,q\in{\mathbf Z}),
\leqno({\rm B}.1.1)$$
and ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$ is a {\it coherent} ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$-module.
(The last condition is equivalent to the conditions that each $F_pM$ is coherent over ${\mathcal O}_X$ and the equality holds for $q\gg0$ in (B.1.1).)
Here $F$ on ${\mathcal D}_X$ is by the order of differential operators, that is, we have for local coordinates $(x_1,\dots,x_{d_X})$
$$F_p{\mathcal D}_X=\sum_{|\nu|\leqslant p}{\mathcal O}_X\hskip1pt\prod_i\partial_{x_i}^{\nu_i}.$$
\par
The {\it characteristic variety} ${\rm Ch}(M)\subset T^*X$ of a coherent left ${\mathcal D}_X$-module $M$ is defined to be the support of the ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$-module ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$ in the cotangent bundle $T^*X$. (The latter can be defined to be the union of the analytic subspaces of $T^*X$ defined by the ideal of ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$ annihilating $g_i$ with $g_i$ local generators of ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$. Here ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$ is identified with the sheaf of holomorphic functions on $T^*X$ which are polynomials on fibers of $T^*X\to X$.) This is independent of a choice of a good filtration $F$.
\par
By the involutivity of the characteristic varieties (see \cite{SKK}, \cite{Ma2}, \cite{Ga}), it is known that
$$\dim{\rm Ch}(M)\geqslant\dim X.
\leqno({\rm B}.1.2)$$
(See also \cite{Bo} for the algebraic ${\mathcal D}$-module case.)
\par
A coherent left ${\mathcal D}_X$-module $M$ is called {\it holonomic} if
$$\dim{\rm Ch}(M)=\dim X.
\leqno({\rm B}.1.3)$$
We will denote by $M_{\rm hol}({\mathcal D}_X)$ the abelian category of holonomic ${\mathcal D}_X$-modules.
\par
\noindent
{\bf B.2.~Left and right ${\mathcal D}$-modules.}
We have the transformation between filtered left and right ${\mathcal D}_X$-modules on a complex manifold $X$ of dimension $d_X$ which associates the following to a filtered left ${\mathcal D}_X$-module $(M,F)$:
$$(\Omega_X^{d_X},F)\otimes_{{\mathcal O}_X}(M,F),
\leqno({\rm B}.2.1)$$
where the filtration $F$ on $\Omega_X^{d_X}$ is defined by the condition
$${\rm Gr}^F_p\Omega_X^{d_X}=0\quad(p\ne-d_X).
\leqno({\rm B}.2.2)$$
So the filtration is shifted by $-d_X$.
Here it is better to distinguish $\Omega_X^{d_X}$ and the dualizing sheaf $\omega_X$, since the Hodge filtration $F$ on $\omega_X$ is usually defined by
$${\rm Gr}^F_p\omega_X=0\quad(p\ne 0).
\leqno({\rm B}.2.3)$$
\par
By choosing local coordinates $x_1,\dots,x_{d_X}$, the sheaf $\Omega_X^{d_X}$ is trivialized by $dx_1\wedge\cdots\wedge dx_{d_X}$ locally on $X$, and forgetting $F$, the transformation is given by the anti-involution $^*$ of ${\mathcal D}_X$ defined by the conditions (see for instance \cite{Ma1}):
$$\partial_{x_i}^*=-\partial_{x_i},\quad g^*=g\,\,\,(g\in{\mathcal O}_X),\quad(PQ)^*=Q^*P^*\,\,\,(P,Q\in{\mathcal D}_X).
\leqno({\rm B}.2.4)$$
\par
For a right ${\mathcal D}_X$-module $N$, the left ${\mathcal D}_X$-module corresponding to it is denoted often by
$$N\otimes_{{\mathcal O}_X}(\Omega_X^{d_X})^{\vee}.
\leqno({\rm B}.2.5)$$
Here $L^{\vee}$ denotes the dual of a locally free sheaf $L$ in general, that is, $L^{\vee}:={\mathcal H} om_{{\mathcal O}_X}(L,{\mathcal O}_X)$.
\par
\noindent
{\bf B.3.~Direct images.} For a closed embedding $i:X\to Y$ of smooth complex algebraic varieties, the direct image of a filtered {\it right} ${\mathcal D}$-module $(M,F)$ is defined by
$$i_*^{{\mathcal D}}(M,F)=(M,F)\otimes_{{\mathcal D}_X}({\mathcal D}_{X\hookrightarrow Y},F),
\leqno({\rm B}.3.1)$$
where the sheaf-theoretic direct image is omitted to simplify the notation, and
$$({\mathcal D}_{X\hookrightarrow Y},F):={\mathcal O}_X\otimes_{{\mathcal O}_Y}({\mathcal D}_Y,F).
\leqno({\rm B}.3.2)$$
\par
For a filtered {\it left} ${\mathcal D}$-module $(M,F)$, the ${\mathcal D}$-module is twisted by $\omega_{X/Y}$ and the filtration $F$ is shifted by $r:={\rm codim}_YX$ because of the transformation between filtered left and right ${\mathcal D}$-modules in (B.2).
If $X$ is locally defined by $y_1=\cdots=y_r=0$ with $y_1,\dots,y_m$ local coordinates of $Y$, then, setting $\partial_{y_i}:=\partial/\partial y_i$, the direct image is {\it locally} defined by
$$i_*^{{\mathcal D}}(M,F)=(M,F[r])\otimes_{{\mathbf C}}({\mathbf C}[\partial_{y_1},\dots,\partial_{y_r}],F).
\leqno({\rm B}.3.3)$$
\par
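\noindent
{\bf Example.} As a simple illustration, take $X=\{0\}\hookrightarrow Y={\mathbf C}$ with coordinate $t$ (so $r=1$) and let $M$ be a finite dimensional vector space viewed as a ${\mathcal D}_X$-module. Then (B.3.3) gives
$$i_*^{{\mathcal D}}M=M\otimes_{{\mathbf C}}{\mathbf C}[\partial_t]\cong({\mathcal D}_{{\mathbf C}}/{\mathcal D}_{{\mathbf C}}\hskip1pt t)\otimes_{{\mathbf C}}M,$$
where $\partial_t$ acts by $\partial_t(m\otimes\partial_t^k)=m\otimes\partial_t^{k+1}$, $t$ acts by $t\,(m\otimes\partial_t^k)=-k\,m\otimes\partial_t^{k-1}$, and the filtration is shifted by $r=1$ as in (B.3.3); this is the ${\mathcal D}$-module of delta functions supported at the origin.
\par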
For a smooth projection $p:Z:=X\times Y\to Y$ with $X,Y$ smooth, the direct image of a filtered {\it left} ${\mathcal D}_Z$-module $(M,F)$ is defined by the sheaf-theoretic direct image of the relative de Rham complex ${\mathcal D}R_{Z/Y}(M,F)$, that is,
$$p_*^{{\mathcal D}}(M,F):={\mathbf R}\hskip1ptp_*{\mathcal D}R_{Z/Y}(M,F),
\leqno({\rm B}.3.4)$$
where ${\mathcal D}R_{Z/Y}(M,F)$ is the filtered complex defined by
$$(M,F)\to\Omega_{Z/Y}^1\otimes_{{\mathcal O}_Z}(M,F[-1])\to\cdots\to\Omega_{Z/Y}^{d_X}\otimes_{{\mathcal O}_Z}(M,F[-d_X]),
\leqno({\rm B}.3.5)$$
with the last term put at degree $0$. The differential of this complex is defined as in the absolute case (see (1.2.3)), and is locally given as the Koszul complex associated with the action of $\partial/\partial x_i$ on $M$ if $x_1,\dots,x_{d_X}$ are local coordinates of $X$.
(Note, however, that this does not work for {\it smooth morphisms} which are not necessarily {\it smooth projections}, since there is no canonical lift of vector fields on $Y$ to $Z$.)
\par
In general, the direct image of a filtered right ${\mathcal D}_X$-module $(M,F)$ by a morphism of smooth varieties $f:X\to Y$ is defined by
$$f_*^{{\mathcal D}}(M,F)=p_*^{{\mathcal D}}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(i_f)_*^{{\mathcal D}}(M,F),
\leqno({\rm B}.3.6)$$
where $i_f:X\to X\times Y$ is the graph embedding, and $p:X\times Y\to Y$ is the second projection so that $f=p\,\raise.15ex\hbox{${\scriptstyle\circ}$}\, i_f$.
\par
\noindent
{\bf Remarks.} (i) We can verify that the direct image in (B.3.6) is naturally isomorphic to the complex of the induced ${\mathcal D}_Y$-module associated with the (sheaf-theoretic) direct image of the filtered differential complex ${\mathcal D}R_X(M,F)$. This means the compatibility between the direct images of filtered differential complexes and filtered ${\mathcal D}$-modules.
\par
(ii) It seems simpler to use the above construction of direct images instead of the induced ${\mathcal D}$-module construction as in \cite[3.3.6]{mhp} for the definition of direct images of $V$-filtrations.
\par
(iii) The direct image for a morphism of singular varieties is rather complicated.
For the direct image of mixed Hodge modules, we may assume that the morphism is projective by using a Beilinson-type resolution (see \cite[Section 3]{Bei1} and the proof of \cite[Theorem 4.3]{mhm}), and the cohomological direct image is actually enough. So it is reduced to the case of a morphism of smooth varieties.
\par
\noindent
{\bf B.4.~Dual functor.} For a holonomic ${\mathcal D}_X$-module $M$ on a complex manifold $X$ of dimension $d_X$, its dual ${\mathcal D}D(M)$ is defined so that
$$\Omega_X^{d_X}\otimes_{{\mathcal O}_X}{\mathcal D}D(M)={{\mathcal E}}xt_{{\mathcal D}_X}^{d_X}(M,{\mathcal D}_X),
\leqno({\rm B}.4.1)$$
and ${\mathcal D}D$ is called the dual functor. This is a contravariant functor. It is well-known that
$${\mathcal E} xt_{{\mathcal D}_X}^p(M,{\mathcal D}_X)=0\quad(p\ne\dim X),
\leqno({\rm B}.4.2)$$
and
$${\mathcal D}D^2=id.
\leqno({\rm B}.4.3)$$
\par
We say that a filtered holonomic ${\mathcal D}_X$-module $(M,F)$ is {\it Cohen-Macaulay} if ${\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^FM$ is a Cohen-Macaulay ${\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X$-module.
In this case, we have
$${{\mathcal E} xt}^i_{{\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X}\bigl({\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^FM,{\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X\bigr)=0\quad\quad(i\ne d_X),
\leqno({\rm B}.4.4)$$
and the dual filtered ${\mathcal D}_X$-module ${\mathcal D}D(M,F)$ can be defined so that
$$(\Omega_X^{d_X},F)\otimes_{{\mathcal O}_X}{\mathcal D}D(M,F)={\mathbf R}{{\mathcal H} om}_{{\mathcal D}_X}\bigl((M,F),({\mathcal D}_X,F[d_X])\bigr)[d_X].
\leqno({\rm B}.4.5)$$
This means that the last filtered complex is filtered quasi-isomorphic to a filtered ${\mathcal D}$-module.
\par
It is known that the dual functor ${\mathcal D}D$ commutes with the de Rham functor ${\mathcal D}R_X$, that is, for a regular holonomic ${\mathcal D}_X$-module $M$, there is a canonical isomorphism (see for instance \cite[Proposition 2.4.12]{mhp}):
$${\mathcal D}D({\mathcal D}R_X(M))={\mathcal D}R_X({\mathcal D}D(M)).
\leqno({\rm B}.4.6)$$
\par
\noindent
{\bf B.5.~Regular holonomic ${\mathcal D}$-modules.} Let $Z$ be a closed analytic subset of a complex manifold $X$. Let ${\mathcal I}_Z\subset{\mathcal O}_X$ be the ideal of $Z$. For a bounded complex of ${\mathcal D}_X$-modules $M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$, set
$$\aligned{\mathcal H}_{[Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}&:=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal O}_X}({\mathcal O}_X/{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\,\bigl(=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal D}_X}({\mathcal D}_X/{\mathcal D}_X{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\bigr),\\
{\mathcal H}_{[X|Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}&:=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal O}_X}({\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\,\bigl(=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal D}_X}({\mathcal D}_X{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\bigr),\endaligned$$
so that we have a long exact sequence of ${\mathcal D}_X$-modules
$$\to{\mathcal H}_{[Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}_{[X|Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}_{[Z]}^{i+1}M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to
\leqno({\rm B}.5.1)$$
see \cite{Khol2} (and also \cite{Gr} for the algebraic case).
Note that ${\mathcal H}_{[Z]}^0M$ for a holonomic ${\mathcal D}_X$-module $M$ is the largest holonomic ${\mathcal D}_X$-submodule of $M$ supported in $Z$.
\par
It is known that a holonomic ${\mathcal D}_X$-module $M$ with support $Z$ is {\it regular holonomic} if and only if there is a closed analytic subset $Z'\subset Z$ together with a proper morphism $\pi:\widetilde Z\to Z$ such that $\dim Z'<\dim Z$, $Z\setminus Z'$ is smooth and equi-dimensional, $\widetilde Z$ is smooth, $\pi$ induces an isomorphism over $Z\setminus Z'$, $\pi^{-1}(Z')$ is a divisor with normal crossings on $\widetilde Z$, and moreover, by setting $\pi_X:=j_Z\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,\pi$ with $j_Z:Z\hookrightarrow X$ the canonical inclusion, the following two conditions are satisfied:
\par
\noindent
(i) $\,{\mathcal H}^0_{[Z']}M$ is a regular holonomic ${\mathcal D}_X$-module.
\par
\noindent
(ii) $\,{\mathcal H}^0_{[X|Z']}M={\mathcal H}^0(\pi_X)_*^{{\mathcal D}}\widetilde M$ with $\widetilde M$ the Deligne meromorphic extension of $M|_{Z\setminus Z'}$ on $\widetilde Z$.
\par
We may assume that $Z'$ is the union of ${\rm Sing}\,Z$ and the lower dimensional irreducible components of $Z$, and $\pi$ is given by the desingularization of the union of the maximal dimensional irreducible components of $Z$.
Note that in the case of algebraic ${\mathcal D}$-modules, we have ${\mathcal H}^0_{[X|Z']}M=j_*j^*M$ with $j:X\setminus Z'\hookrightarrow X$ the canonical inclusion.
\par
The above criterion is proved by induction on the dimension of the support, using \cite{eq}, \cite{Khol2}. In fact, it is known that regular holonomic ${\mathcal D}$-modules are stable by subquotients and extensions in the category of holonomic ${\mathcal D}$-modules and also by the direct images under proper morphisms, and contain Deligne meromorphic extensions in the normal crossing case.
\par
Let $M_{rh}({\mathcal D}_X)\subset M_{\rm hol}({\mathcal D}_X)$ be the full subcategory of regular holonomic ${\mathcal D}_X$-modules on a complex manifold $X$. Let $D^b_{rh}({\mathcal D}_X)\subset D^b({\mathcal D}_X)$ be the full subcategory of bounded complexes with regular holonomic cohomologies. We have the equivalence of categories, that is, the Riemann-Hilbert correspondence:
$${\mathcal D}R_X:D^b_{rh}({\mathcal D}_X)\buildrel{\sim}\over\longrightarrow D_c^b(X,{\mathbf C})\,\,\,\bigl(\hbox{inducing}\,\,\,{\mathcal D}R_X:M_{rh}({\mathcal D}_X)\buildrel{\sim}\over\longrightarrow{\mathbf H\mathbf S}(X,{\mathbf C})\bigr),
\leqno({\rm B}.5.2)$$
see \cite{Krh1}, \cite{Krh2}, \cite{KK1}, \cite{Mthe}, \cite{Mrh}, \cite{Meq} (and also \cite{Bo}, \cite{HT}, etc.\ for the algebraic case).
\par
\noindent
{\bf Remarks.} (i) There are many ways to define the full subcategory $M_{rh}({\mathcal D}_X)\subset M_{\rm hol}({\mathcal D}_X)$. Their equivalences may follow by using the above argument in certain cases.
\par
(ii) There is a nontrivial point about the commutativity of some diagram used in certain proofs of (B.5.2), and this is studied in \cite[Section 4]{ind}.
\par
(iii) Some people say that (B.5.2) is essentially proved in \cite{KK1} where the full faithfulness of the de Rham functor is essentially shown (compare the assertion (ii) of Theorem 5.4.1 written in \cite[p.~825]{KK1} with \cite[Proposition 3.3]{Mbid}). The essential surjectivity is not difficult to show by using the full faithfulness.
In \cite{Krh1}, \cite{Krh2}, a quasi-inverse is explicitly constructed.
\par
(iv) In the algebraic case, the argument is more complicated, since we have the regularity {\it at infinity}, see \cite{Bo}, \cite{eq}, \cite{HT}, etc.
(This is essential to get a {\it canonical algebraic structure} on the vector bundle associated with a local system on a smooth complex variety by using the Deligne extension, see \cite{eq}.) The Riemann-Hilbert correspondence is used in an essential way in representation theory, see \cite{BB}, \cite{BK}. (This point does not seem to be sufficiently clarified in the last one.)
\par
\noindent
{\bf B.6.~$V$-filtration.} For a complex manifold $X$, set $Y:=X\times{\mathbf C}$ with $t$ the coordinate of ${\mathbf C}$.
We have the filtration $V$ on ${\mathcal D}_Y$ indexed by ${\mathbf Z}$ and such that
\par
\noindent
(i) $\,\,V^0{\mathcal D}_Y\subset{\mathcal D}_Y$ is the subring generated by ${\mathcal O}_Y$, $\partial_{y_i}$, and $t\partial_t$,
\par
\noindent
(ii) $\,\,V^j{\mathcal D}_Y=t^j\,V^0{\mathcal D}_Y$,\,\,\,$V^{-j}{\mathcal D}_Y=\sum_{0\leqslant k\leqslant j}\,\partial_t^k\,V^0{\mathcal D}_Y$\quad($j\in{\mathbf Z}_{>0}$),
\par
\noindent
where $(y_1,\dots,y_{d_X},t)$ are local coordinates of $Y$ (so that the $y_i$ are local coordinates of $X$), and $\partial_{y_i}:=\partial/\partial y_i$.
\par
Let $M$ be a regular holonomic ${\mathcal D}_Y$-module. We say that $M$ is {\it quasi-unipotent} if so is $K:={\mathcal D}R_Y(M)$, that is, if there is a stratification $\{S\}$ of $Y$ such that the restrictions of the cohomology sheaves ${\mathcal H}^iK$ to each stratum $S$ are ${\mathbf C}$-local systems having quasi-unipotent local monodromies around $\overline S\setminus S$.
\par
For a quasi-unipotent regular holonomic left ${\mathcal D}_Y$-module $M$, there is a unique exhaustive filtration $V$ of Kashiwara \cite{Kvan} and Malgrange \cite{Ma3} indexed discretely by ${\mathbf Q}$ and satisfying the following three conditions:
\par
\noindent
(iii)\,\, $V^{\alpha}M$ ($\forall\,\alpha\in{\mathbf Q}$) are locally finitely generated $V^0{\mathcal D}_Y$-submodules,
\par
\noindent
(iv)\,\, $t\,V^{\alpha}M\subset V^{\alpha+1}M$ (with equality if $\alpha>0$),\,\,\,$\partial_t\,V^{\alpha}M\subset V^{\alpha-1}M\,\,$ for any $\alpha\in{\mathbf Q}$,
\par
\noindent
(v)\,\,\, $\partial_tt-\alpha$ is locally nilpotent on ${\rm Gr}_V^{\alpha}M\,\,$ $(\forall\,\alpha\in{\mathbf Q})$.
\par
Here we say that $V$ is {\it indexed discretely by} ${\mathbf Q}$ if there is a positive integer $m$ satisfying
$$V^{\alpha}M=V^{j/m}M\quad\hbox{if}\quad(j-1)/m<\alpha\leqslant j/m\quad\hbox{with}\quad j\in{\mathbf Z}.
\leqno({\rm B}.6.1)$$
The existence of $V$ follows from that of $b$-functions in \cite{Khol2} where the holonomicity is actually sufficient (see \cite[Proposition 1.9]{rat}).
\par
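\noindent
{\bf Example.} The simplest illustration is $M={\mathcal O}_Y$ for $Y={\mathbf C}$ (with $X$ a point). Here $V^{\alpha}{\mathcal O}_Y={\mathcal O}_Y$ for $\alpha\leqslant1$ and $V^{\alpha}{\mathcal O}_Y=t^{k}{\mathcal O}_Y$ for $\alpha\in(k,k+1]$ with $k\in{\mathbf Z}_{>0}$: conditions (iii) and (iv) are immediate, ${\rm Gr}_V^{\alpha}{\mathcal O}_Y$ vanishes unless $\alpha\in{\mathbf Z}_{>0}$, and $\partial_tt$ acts on ${\rm Gr}_V^{k+1}{\mathcal O}_Y\cong t^{k}{\mathcal O}_Y/t^{k+1}{\mathcal O}_Y\cong{\mathbf C}$ by multiplication by $k+1$, so that (v) holds with vanishing nilpotent part; by uniqueness this is the filtration $V$.
\par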
From condition (v) we can easily deduce the isomorphisms
$$t:{\rm Gr}_V^{\alpha}M\buildrel{\sim}\over\longrightarrow{\rm Gr}_V^{\alpha+1}M,\quad\partial_t:{\rm Gr}_V^{\alpha+1}M\buildrel{\sim}\over\longrightarrow{\rm Gr}_V^{\alpha}M\quad\quad(\forall\,\alpha\ne 0).
\leqno({\rm B}.6.2)$$
It is also easy to show the following:
$$V^{\alpha}M=0\,\,\,(\forall\,\alpha>0)\quad\hbox{if}\quad{\rm supp}\,M\subset X\times\{0\}\subset Y,
\leqno({\rm B}.6.3)$$
$$t:V^{\alpha}M\buildrel{\sim}\over\longrightarrow V^{\alpha+1}M\quad\quad(\forall\,\alpha>0),
\leqno({\rm B}.6.4)$$
$$\hbox{$M\mapsto V^{\alpha}M$ (or ${\rm Gr}_V^{\alpha}M$) are exact functors ($\forall\,\alpha\in{\mathbf Q}$),}
\leqno({\rm B}.6.5)$$
see \cite[Lemma 3.1.3, Lemma 3.1.4, Corollary 3.1.5]{mhp}, where right ${\mathcal D}$-modules are used so that the action of $t\partial_t$ there corresponds to that of $-\partial_tt$ in this paper by (B.2.4), and an increasing filtration $V_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ is used there so that $V_{\alpha}=V^{-\alpha}$.
\par
Set $Z:=g^{-1}(0)\subset X$ for a holomorphic function $g$ on $X$, and for a quasi-unipotent regular holonomic ${\mathcal D}_X$-module $M$ set $\widetilde M:=(i_g)_*^{{\mathcal D}}M$, with $i_g:X\hookrightarrow Y=X\times{\mathbf C}$ the graph embedding by $g$ as in (B.3). Let $M'_Z$ be the largest holonomic ${\mathcal D}_X$-submodule of $M$ supported in $Z$, and similarly for $M''_Z$ with submodule replaced with quotient module. Then we have the following canonical isomorphisms of ${\mathcal D}_X$-modules (see \cite[Proposition 3.1.8]{mhp}):
$$\aligned M'_Z&={\rm Ker}\bigl(t:{\rm Gr}_V^0\widetilde M\to{\rm Gr}_V^1\widetilde M\bigr),\\ M''_Z&={\rm Coker}\bigl(\partial_t:{\rm Gr}_V^1\widetilde M\to{\rm Gr}_V^0\widetilde M\bigr).\endaligned
\leqno({\rm B}.6.6)$$
Set $K:={\mathcal D}R_Y(M)\in{\mathbf H\mathbf S}(Y,{\mathbf C})$. In the notation of (A.2), there are canonical isomorphisms
$$\aligned{\mathcal D}R_X({\rm Gr}_V^{\alpha}M)&\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\psi_{t,{\mathbf e}(-\alpha)}K\quad(\alpha\in(0,1]),\\ {\mathcal D}R_X({\rm Gr}_V^0M)&\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{t,1}K,\endaligned
\leqno({\rm B}.6.7)$$
such that $\exp(-2\pi i\,\partial_tt)$ on the left-hand side corresponds to the monodromy $T$ on the right-hand side, where ${\mathbf e}(-\alpha):=\exp(-2\pi i\alpha)$. In particular, ${\mathbf e}(-\alpha)$ corresponds to the action of $T_s$, and $-(\partial_tt-\alpha)$ corresponds to $N:=(2\pi i)^{-1}\log T_u$ with $T=T_sT_u$ the Jordan decomposition. Moreover the morphisms
$$-{\rm Gr}_V\partial_t:{\rm Gr}_V^1M\to{\rm Gr}_V^0M,\quad{\rm Gr}_Vt:{\rm Gr}_V^0M\to{\rm Gr}_V^1M
\leqno({\rm B}.6.8)$$
respectively correspond to the morphisms can and Var in (A.3.9) with $K'=K$ and $f=t$. (The sign before ${\rm Gr}_V\partial_t$ in (B.6.8) comes from the transformation between left and right ${\mathcal D}$-modules as in (B.2.4).)
\par
In the case $M=(i_g)_*^{{\mathcal D}}{\mathcal O}_X$ with $i_g:X\hookrightarrow X\times{\mathbf C}$ the graph embedding by a holomorphic function $g$ on $X$, the proof of (B.6.7) is given in \cite{Ma3}, and this can be extended to the general regular holonomic case (see also the proof of \cite[Proposition 3.4.12]{mhp}).
There are canonical morphisms inducing the isomorphisms of (B.6.7) by using logarithmic functions, and these canonical morphisms are quite important for the proof of the stability theorem of Hodge modules by direct images.
\par
\par
\vbox{\centerline{\bf Appendix C. Compatible filtrations}
\par
\noindent
In this appendix we review some basics of compatible filtrations.}
\par
\noindent
{\bf C.1.~Compatible subobjects.} Let ${\mathcal A}$ be an abelian category, and $n\in{\mathbf Z}_{>0}$. We say that subobjects $B_i$ ($i\in[1,n]$) of $A\in{\mathcal A}$ are {\it compatible subobjects} if there is a {\it short exact $n$-ple complex} $K$ in ${\mathcal A}$ such that
\par
\noindent
(i) $\,\,K^p=0$ if $|p_i|>1$ for some $i\in[1,n]$, where $p=(p_1,\dots,p_n)\in{\mathbf Z}^n$.
\par
\noindent
(ii) $\,\,K^{p-{\mathbf 1}_i}\to K^p\to K^{p+{\mathbf 1}_i}$ is exact for any $p\in{\mathbf Z}^n$, $i\in[1,n]$.
\par
\noindent
(iii) $\,\,K^0=A$, $K^{-{\mathbf 1}_i}=B_i$ for any $i\in[1,n]$.
\par
\noindent
Here ${\mathbf 1}_i=(({\mathbf 1}_i)_1,\dots,({\mathbf 1}_i)_n)$ with $({\mathbf 1}_i)_j=\delta_{i,j}$.
Note that conditions (i) and (ii) respectively correspond to ``short'' and ``exact''. In the case $n=2$, $K$ is the diagram of the {\it nine lemma}.
\par
\noindent
{\bf C.2.~Compatible filtrations.} We say that $n$ filtrations $F_{(i)}$ ($i\in[1,n]$) of $A\in{\mathcal A}$ form {\it compatible filtrations} if
$$F_{(1)}^{\nu_1}A,\,\dots\,,\,F_{(n)}^{\nu_n}A$$
are compatible subobjects of $A$ for any $\nu=(\nu_1,\dots,\nu_n)\in{\mathbf Z}^n$.
\par
If $n=2$, then any $2$ filtrations $F_{(1)}$, $F_{(2)}$ are compatible filtrations. However, this does not necessarily hold for $n>2$.
\par
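\noindent
{\bf Example.} A standard example for $n=3$ is the following. In the category of $k$-vector spaces, let $A=k^2$ and let $F_{(i)}$ ($i\in[1,3]$) be given by $F_{(i)}^0A=A$, $F_{(i)}^1A=L_i$, $F_{(i)}^2A=0$ with $L_1,L_2,L_3\subset A$ pairwise distinct lines. These filtrations do not form compatible filtrations; indeed the iterated graded pieces depend on the order, since ${\rm Gr}_{F_{(1)}}^1{\rm Gr}_{F_{(2)}}^1{\rm Gr}_{F_{(3)}}^0A\cong A/L_3\ne0$ whereas ${\rm Gr}_{F_{(3)}}^0{\rm Gr}_{F_{(2)}}^1{\rm Gr}_{F_{(1)}}^1A=0$ (compare (C.2.1) below).
\par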
We can show that if the $F_{(i)}$ ($i\in[1,n]$) form compatible filtrations of $A$, then their restrictions to $F^{\nu}A:=F_{(1)}^{\nu_1}\cdots F_{(n)}^{\nu_n}A$ also form compatible filtrations (see the proof of \cite[Corollary 1.2.13]{mhp}). Using the short exact complex $K$ with
$$K^0=F^{\nu}A,\quad K^{-{\mathbf 1}_i}=F^{\nu+{\mathbf 1}_i}A\,\,\,\,(i\in[1,n]),$$
we can show that
$$\hbox{${\rm Gr}_{F_{(1)}}^{\nu_1}\cdots\,{\rm Gr}_{F_{(n)}}^{\nu_n}A\,$ does not depend on the order of $\{1,\dots,n\}$.}
\leqno({\rm C}.2.1)$$
In fact, ${\rm Gr}_{F_{(i)}}^{\nu_i}$ corresponds to restricting $K$ to the subcomplex defined by $p_i=1$, and
$${\rm Gr}_{F_{(1)}}^{\nu_1}\cdots\,{\rm Gr}_{F_{(n)}}^{\nu_n}A=K^{1,\dots,1}.
\leqno({\rm C}.2.2)$$
Note that (C.2.1) is not completely trivial even in the case $n=2$, where Zassenhaus lemma is usually used, and we can replace it by the diagram of the nine lemma as is explained above.
\par
\noindent
{\bf C.3.~Strict complexes.} Let $A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ be a complex in ${\mathcal A}$ with $n$ filtrations $F_{(i)}$ ($i\in[1,n]$). We say that $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,n])\bigr)$ is {\it strict} if for any $j\in{\mathbf Z}$, $\nu=(\nu_1,\dots,\nu_n)\in{\mathbf Z}^n$, there is a short exact $n$-ple complex $K$ as in (C.1) such that
$$H^j\bigl(\hbox{$\bigcap$}_{i\in I}\,F_{(i)}^{\nu_i}A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\bigr)=K^{-{\mathbf 1}_I}\quad\bigl(\forall\,I\subset\{1,\dots,n\}\bigr),
\leqno({\rm C}.3.1)$$
where ${\mathbf 1}_I=(({\mathbf 1}_I)_1,\dots,({\mathbf 1}_I)_n)$ with $({\mathbf 1}_I)_j=1$ if $j\in I$, and $0$ otherwise (and $I$ can be empty in (C.3.1)).
\par
We can show (see \cite[Corollary 1.2.13]{mhp}) that if $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,n])\bigr)$ is strict, then
$$\hbox{$H^j,\,{\rm Gr}_{F_{(1)}}^{\nu_1},\,\dots\,,\,{\rm Gr}_{F_{(n)}}^{\nu_n}$ on $A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ commute with each other $(\forall\,j\in{\mathbf Z},\,\nu\in{\mathbf Z}^n)$.}
\leqno({\rm C}.3.2)$$
\vskip-7mm
$$\hbox{The induced filtrations $F_{(i)}$ ($i\in[1,n]$) on $H^jA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ form compatible filtrations.}
\leqno({\rm C}.3.3)$$
\par
Note that $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,2])\bigr)$ is strict if $(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(2)})$, $({\rm Gr}_{F_{(2)}}^pA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(1)})$ ($\forall\,p\in{\mathbf Z})$ are strict, and $F_{(2)}^pA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}=0$ for $p\gg 0$, where we assume that the filtered inductive limit in ${\mathcal A}$ is an exact functor, see \cite[Theorem~1.2.9]{mhp}.
\end{document} |
\begin{document}
\title[Morley's other miracle]{Morley's other miracle: $\displaystyle 4^{p-1}\equiv\pm \left(\smallmatrix p-1\\
\frac{p-1}{2} \endsmallmatrix\right)
\pmod {p^3}
$}
\author{Christian Aebi and Grant Cairns}
\address{Coll\`ege Calvin, Geneva, Switzerland 1211}
\email{[email protected]}
\address{Department of Mathematics, La Trobe University, Melbourne, Australia 3086}
\email{[email protected]}
\maketitle
In geometry, {\em Morley's miracle} says that in every planar triangle the adjacent angle trisectors meet at the
vertices of an equilateral triangle. Frank Morley obtained this wonderful result in 1899, and to this day it continues to attract interest. There are now many known proofs; see the cut-the-knot web site \cite{CTK}. Perhaps the most celebrated ones are those due to Alain Connes \cite{Connes} and John Conway (unpublished, yet accessible at \cite{CTK}). A proof in the same spirit as Connes' was published earlier by Liang-shin Hahn \cite{Hahn}; see also \cite{Geiges}. Conway's proof is perhaps the simplest and nicest one; a somewhat longer proof having the same general approach was given by Coxeter \cite{Coxeter}, and attributed to Raoul Bricard; see also \cite{New,Wa}.
\centerline{\includegraphics[scale=.75]{aebi}}
Morley's miracle was by no means his sole
surprising discovery. In number theory, he published the following result in the Annals of Mathematics 1894/95.
\begin{Morley} If $p$ is prime and
$p>3$, then
\[(-1)^{(p-1)/2}\cdot \left(\smallmatrix p-1\\
\frac{p-1}{2} \endsmallmatrix\right) \equiv2^{2p-2}
\pmod {p^3}.
\]
\end{Morley}
To appreciate the ``miraculous'' nature of this congruence, one first needs to compare it with other congruences known at the time. Some famous ones for primes $p$ include:
\begin{itemize}
\item Fermat's little theorem:
$2^{p-1}\equiv
1\pmod p$.
\item Wilson's theorem: $(p-1)!\equiv -1 \pmod p$.
\item Lucas' theorem:
If $0\leq n,j<p$, then
$\left(\smallmatrix pm+n\\ pi+j\endsmallmatrix\right)\equiv \left(\smallmatrix m\\ i\endsmallmatrix\right)\left(\smallmatrix n\\ j\endsmallmatrix\right)\pmod {p}$.
\end{itemize}
The above three congruences are modulo $p$, while Morley's congruence is modulo $p^3$. The difference between mod $p^3$ and mod $p$ is analogous to having a result to three significant figures, rather than just one significant figure.
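Morley's congruence is also easy to test numerically. The following few lines of Python (a sanity check only, playing no role in the proofs; they require Python 3.8 or later for \texttt{math.comb} and the three-argument \texttt{pow}) confirm it for the first several primes greater than $3$:
\begin{verbatim}
from math import comb

def morley_holds(p):
    # (-1)^((p-1)/2) * binom(p-1, (p-1)/2) == 4^(p-1)  (mod p^3)
    lhs = (-1) ** ((p - 1) // 2) * comb(p - 1, (p - 1) // 2)
    return lhs % p ** 3 == pow(4, p - 1, p ** 3)

print(all(morley_holds(p) for p in [5, 7, 11, 13, 17, 19, 23]))  # True
\end{verbatim}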
The other striking aspect of Morley's congruence was the nature of his original proof, which made an ingenious use of integration of
trigonometric sums. First he used the Fourier series:
\begin{align*}
2^{2n}\cos^{2n+1}x&=\cos(2n+1)x+(2n+1)\cos(2n-1)x+\frac{(2n+1)2n}{1\cdot2}\cos(2n-3)x\\&\quad+\dots+\frac{(2n+1)2n\dots(n+2)}{n!}\cos x.
\end{align*}
He integrated this term by term and compared it with the following formula, which can be obtained by induction using integration by parts:
\begin{equation}\label{ind}
\int_0^{\frac12\pi}\cos^{2n+1}x dx=\frac{2n(2n-2)\dots2}{(2n+1)(2n-1)\dots3}.
\end{equation}
This established his result modulo $p^2$, where $p=2n+1$. To obtain the result modulo $p^3$, Morley then used (\ref{ind}) again to integrate the following power series in $\cos(x)$, known from ``treatises on trigonometry'':
\begin{align*}
(-1)^{\frac{p-1}2}\cos px&=p\cos x-\frac{p(p^2-1^2)}{3!}\cos^3x+\frac{p(p^2-1^2)(p^2-3^2)}{5!}\cos^5x\\&\quad-\dots+(-1)^{\frac{p-1}2}2^{p-1}\cos^px.
\end{align*}
Subsequently, two alternate proofs were given that used the properties of Bernoulli numbers: the 1913
Royal Danish Academy of Sciences paper by Niels Nielsen \cite[p.~353]{Nielsen} and the 1938 Annals of Mathematics paper by Emma Lehmer
\cite[p.~360]{Lehmer}.
The main aim of this note is to establish Morley's congruence by entirely elementary number theory arguments. The key to this approach is the following
basic congruence modulo $p$ that, curiously, we have not seen in the literature.
\begin{lemma}\label{L1} If $p$ is prime and
$p>3$, then $\displaystyle
\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac {1}{ij}\equiv 0\pmod {p}$.
\end{lemma}
Here, $\frac1{ij}$ denotes the multiplicative inverse of $ij$ modulo
$p$. Throughout this note, $p$ is a prime greater than 3 and by a slight abuse of notation, $\frac1i$ will denote the fraction $1/i$ or the multiplicative inverse of $i$ modulo
$p$ or modulo $p^2$, according to the context.
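Lemma \ref{L1} can likewise be tested numerically before we prove it. The short Python fragment below (again only a sanity check; it uses \texttt{pow(x, -1, p)} for the inverse of $x$ modulo $p$, available from Python 3.8 onwards) verifies the lemma for small primes:
\begin{verbatim}
def lemma1_holds(p):
    s = sum(pow(i * j, -1, p)          # inverse of i*j modulo p
            for j in range(2, p, 2)    # j even
            for i in range(1, j, 2))   # i odd, i < j
    return s % p == 0

print(all(lemma1_holds(p) for p in [5, 7, 11, 13, 17, 19, 23]))  # True
\end{verbatim}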
After we have established Morley's congruence, we will show in the final section that it can also be deduced from Granville's elegant proof of Skula's conjecture \cite{Granville}.
\section*{Reduction of the Problem}
We will use the following well known facts \cite[Theorem 117]{HW}, that we prove for completeness.
\begin{lemma}\label{L2} {\rm(a)} $\sum_{i=1}^{\frac{p-1}2} \frac{1}{i^2}\equiv 0\pmod {p}$,
\qquad{\rm(b)} $\sum_{i=1}^{p-1}\frac{(-1)^i}{i}\equiv \sum_{i=1}^{\frac{p-1}2} \frac{1}{i}\pmod{p^2}$.
\end{lemma}
\begin{proof}(a) As $\frac{1}{i^2} \equiv \frac{1}{(p-i)^2} \pmod p$, and since $i\mapsto\frac1i$ permutes the nonzero residues modulo $p$, one has
\[
2\sum_{i=1}^{\frac{p-1}2} \frac{1}{i^2}\equiv \sum_{i=1}^{p-1} \frac{1}{i^2} \equiv \sum_{i=1}^{p-1} i^2 =\frac{(p-1)p(2p-1)}6\equiv 0\pmod {p}.\]
(b) For all $0< i\leqslant {\frac{p-1}2}$, one has $i(p-i)+i^2\equiv -p(p-i)\pmod{p^2}$ and dividing by $i^2(p-i)$ gives $\frac{1}i+\frac1{p-i}\equiv
-\frac{p}{i^2}\pmod{p^2}$. Summing and using (a) gives $\sum_{i=1}^{p-1}\frac1{i}\equiv 0\pmod{p^2}$, which is known as Wolstenholme's theorem. Thus
\[
\sum_{i=1}^{p-1}\frac{(-1)^i}i\equiv 2\sum_{\substack{i=2\\i\text{ even}}}^{p-1}\frac1{i}\equiv \sum_{i=1}^{\frac{p-1}2}
\frac1{i}\pmod{p^2}.
\]\end{proof}
Turning to the terms in Morley's congruence, first note that
\[
\binom{p}{i}=\frac{{p}\cdot{(p-1)}\cdot{(p-2)} \cdots {(p-(i-1))}}{{i} \cdot {1} \cdot {2} \cdots
{(i-1)}}
\]
and so
\begin{equation}\label{exp}
\binom{p}{i}=(-1)^{i-1}\cdot\frac{p}i \cdot \left(1-\frac{p}1\right) \cdot
\left(1-\frac{p}2 \right)\cdots \left(1-\frac{p}{i-1}\right).
\end{equation}
Thus
$\binom{p}{i}\equiv(-1)^{i}\cdot\left(-\frac{p}i+ p^2\cdot\sum_{j=1}^{i-1} \frac 1{ij}\right)\pmod {p^3}$
and so $2^p= 2+\sum_{i=1}^{p-1} \binom{p}{i}$ gives
\[
2^{p-1}\equiv 1-\frac{p}2\cdot\sum_{i=1}^{{p-1}} \frac{(-1)^{i}}i+ \frac{p^2}2\cdot\sum_{0< j< i<p} \frac {(-1)^{i}}{ij}\pmod
{p^3}.
\]
Squaring, and using Lemma \ref{L2}(b), we have
\begin{equation}\label{lh}
2^{2p-2}\equiv 1-p\cdot\sum_{i=1}^{\frac{p-1}2} \frac1i+ p^2\cdot\left(\frac14\left(\sum_{i=1}^{\frac{p-1}2}
\frac1i\right)^2+ \sum_{0< j< i<p} \frac{(-1)^{i}}{ij}\right)\pmod {p^3}.
\end{equation}
From (\ref{exp}) we also have
$(-1)^{i-1}\binom{p-1}{i-1}=(-1)^{i-1}\frac{i}{p}\binom{p}{i}= \left(1-\frac{p}1\right) \cdot
\left(1-\frac{p}2 \right)\cdots \left(1-\frac{p}{i-1}\right)$.
Taking $i=\frac{p+1}2$ gives
$(-1)^{\frac{p-1}{2}}\cdot\binom{p-1}{\frac{p-1}{2}} \equiv
1-p\cdot\sum_{i=1}^{\frac{p-1}2} \frac{1}i+p^2\cdot \sum_{1\leq j<i\leq \frac{p-1}2}\frac{1}{i j}\pmod{p^3}$,
or equivalently, using Lemma \ref{L2}(a),
\begin{equation}\label{rh}(-1)^{\frac{p-1}{2}}\cdot\binom{p-1}{\frac{p-1}{2}} \equiv
1-p\cdot\sum_{i=1}^{\frac{p-1}2} \frac{1}i+\frac{p^2}2\cdot \left(\sum_{i=1}^{\frac{p-1}2} \frac1{i}\right)^2\pmod{p^3}.
\end{equation}
Comparing (\ref{lh}) and (\ref{rh}), we observe that Morley's congruence is therefore
valid mod $p^2$. In order to obtain it mod $p^3$, it suffices to prove that
$\frac14\left(\sum_{i=1}^{\frac{p-1}2}
\frac1i\right)^2\equiv \sum_{0< j< i<p} \frac{(-1)^{i}}{ij}\pmod {p}$,
or equivalently,
\begin{equation}\label{mo}
\left(\sum_{\substack{0< i< p\\i\text{ even}}} \frac1i\right)^2\equiv \sum_{0< j< i<p} \frac{(-1)^{i}}{ij}\pmod {p}.
\end{equation}
The considerations so far have reduced Morley's congruence modulo $p^3$ to a congruence modulo $p$.
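The reduced congruence (\ref{mo}) can also be checked directly for small primes; the following Python fragment (a sanity check only) does so:
\begin{verbatim}
def mo_holds(p):
    lhs = sum(pow(i, -1, p) for i in range(2, p, 2)) ** 2
    rhs = sum((-1) ** i * pow(i * j, -1, p)
              for i in range(2, p) for j in range(1, i))
    return (lhs - rhs) % p == 0

print(all(mo_holds(p) for p in [5, 7, 11, 13, 17, 19, 23]))  # True
\end{verbatim}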
\section*{Completion of the Proof}
In the remainder of this note, all congruences are taken modulo $p$. First notice that as $\displaystyle\sum_{\substack{0< i< p\\i\text{ even}}} \frac {1}{i}\equiv -\sum_{\substack{0< i< p\\i\text{ odd}}} \frac{1}{i}$ (since $\frac1i+\frac1{p-i}\equiv 0$), the left hand side of (\ref{mo}) is
\[
\left(\sum_{\substack{0< i< p\\i\text{ even}}} \frac {1}{i}\right)^2\equiv -\left(\sum_{\substack{0< i< p\\i\text{
odd}}} \frac {1}{i}\right)\left(\sum_{\substack{0< j< p\\j\text{ even}}} \frac {1}{j}\right)\equiv -\sum_{\substack{0<
j<i< p\\i\text{ odd},j\text{ even}}} \frac {1}{ij} -\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac {1}{ij}.\]
On the other hand,
\[
\sum_{\substack{0< j<i< p\\i,j\text{ odd}}} \frac {1}{ij}= \sum_{\substack{0< i<j< p\\i,j\text{ even}}} \frac
{1}{(p-i)(p-j)}\equiv \sum_{\substack{0< i<j< p\\i,j\text{ even}}} \frac
{1}{ij}
\]
and so the right hand side of (\ref{mo}) is
\begin{align*}
\sum_{0< j< i<p} \frac{(-1)^{i}}{ij}&=\sum_{\substack{0< j<i< p\\i,j\text{ even}}}
\frac{1}{ij}-\sum_{\substack{0< j<i< p\\i,j\text{ odd}}}
\frac{1}{ij}-\sum_{\substack{0< j<i< p\\i\text{ odd}, j\text{ even}}}
\frac{1}{ij}+\sum_{\substack{0< j<i< p\\i\text{ even},j\text{ odd}}}
\frac{1}{ij}\\
&\equiv -\sum_{\substack{0< j<i< p\\i\text{ odd},j\text{ even}}} \frac {1}{ij} +
\sum_{\substack{0<i<j< p\\i\text{ odd},j\text{ even}}}
\frac{1}{ij}.
\end{align*}
Hence (\ref{mo}) follows from Lemma \ref{L1}, and so the proof of Lemma \ref{L1} is our final task.
\begin{proof}[Proof of Lemma \ref{L1}] We have
\begin{align*}
2\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac {1}{ij}&=
\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \left(\frac {1}{ij}+\frac {1}{(j-i)j}\right)
=\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac{1}{i(j-i)}
=\sum_{\substack{0< i,k< p\\i+k<p\\i,k\text{ odd}}} \frac {1}{ik}\\
&\equiv\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac {1}{i(p-j)}
\equiv -\sum_{\substack{0< i<j< p\\i\text{ odd},j\text{ even}}} \frac{1}{ij},
\end{align*}
which gives the required result, as $p>3$.\end{proof}
\section*{The connection with Skula's conjecture}\label{proof1}
Consider the Fermat quotient $q=\frac{2^{p-1}-1}p$, and note that
\begin{equation}\label{bas}
2^{2p-2}=1+2qp+q^2p^2.
\end{equation}
Adopting the notation of \cite{Granville}, set
\[
q(x)=\frac{x^p-(x-1)^p-1}p,\qquad g(x)=\sum_{i=1}^{p-1}\frac{x^i}i,\qquad G(x)=\sum_{i=1}^{p-1}\frac{x^i}{i^2}.\]
Note that $q=q(2)/2$. The following remarkable identity was established in \cite{Granville}:
\begin{equation}\label{Gr}
-G(x)\equiv \frac1p (q(x)+g(1-x))\pmod p,
\end{equation}
from which Granville deduced Skula's conjecture:
$q^2\equiv -G(2)\pmod p$.
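As with the congruences above, this is easy to test numerically; the fragment below (a sanity check only, with the Fermat quotient computed by exact integer division) verifies $q^{2}\equiv-G(2)\pmod p$ for small primes:
\begin{verbatim}
def skula_holds(p):
    q = (pow(2, p - 1) - 1) // p                # Fermat quotient
    G2 = sum(pow(2, i, p) * pow(i * i, -1, p)   # G(2) modulo p
             for i in range(1, p))
    return (q * q + G2) % p == 0

print(all(skula_holds(p) for p in [5, 7, 11, 13, 17, 19, 23]))  # True
\end{verbatim}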
From (\ref{Gr}),
\[
2q\equiv -g(-1)-G(2)p\equiv -g(-1)+q^2p\pmod {p^2}.\]
Hence, substituting in (\ref{bas}), we obtain
\begin{equation}\label{rq}
2^{2p-2}=1+2qp+q^2p^2\equiv 1-g(-1)p+\frac12g(-1)^2p^2\pmod {p^3}.
\end{equation}
From Lemma \ref{L2}(b), $g(-1)\equiv \sum_{i=1}^{\frac{p-1}2}
\frac1{i}\pmod{p^2}$, and so from (\ref{rq})
\[
2^{2p-2}\equiv 1-\left(\sum_{i=1}^{\frac{p-1}2} \frac1{i}\right)p +\frac12\left(\sum_{i=1}^{\frac{p-1}2} \frac1{i}\right)^2p^2 \pmod {p^3}.
\]
Together with (\ref{rh}), this gives Morley's congruence once again.
\vskip1cm
\noindent{\bf Acknowledgements.} We would like to thank the anonymous referees who proposed both constructive and illuminating remarks.
\end{document} |
\begin{document}
\title{Convex geometric $(k+2)$-quasiplanar representations of semi-bar $k$-visibility graphs}
\begin{abstract}
We examine semi-bar visibility graphs in the plane and on a cylinder in which sightlines can pass through $k$ objects. We show every semi-bar $k$-visibility graph has a $(k+2)$-quasiplanar representation in the plane with vertices drawn as points in convex position and edges drawn as segments. We also show that the graphs having cylindrical semi-bar $k$-visibility representations with semi-bars of different lengths are the same as the $(2k+2)$-degenerate graphs having edge-maximal $(k+2)$-quasiplanar representations in the plane with vertices drawn as points in convex position and edges drawn as segments.
\end{abstract}
\section{Introduction}
Bar visibility graphs are graphs for which vertices can be drawn as horizontal segments (bars) and edges can be drawn as vertical segments (sightlines) so that two bars are visible to each other if and only if there is a sightline which intersects them and no other bars. The study of bar visibility graphs was motivated in part by the problem of efficiently designing very large scale integration (VLSI) circuits \cite{Luccio}. Past research has shown how to represent planar graphs and plane triangular graphs as bar visibility graphs \cite{Duchet, bartri}. Dean \emph{et al.} \cite{AlDeank} introduced a generalization of bar visibility graphs in which bars are able to see through at most $k$ other bars for some nonnegative integer $k$. These graphs are known as bar $k$-visibility graphs.
We study bar $k$-visibility graphs in which every bar has left endpoint on the $y$-axis. Such bars are called \emph{semi-bars}. We also consider semi-bar $k$-visibility graphs obtained by placing the semi-bars on the surface of a cylinder with each semi-bar parallel to the cylinder's axis of symmetry. Felsner and Massow \cite{Felsner} proved that the maximum number of edges in any semi-bar $k$-visibility graph with $n$ vertices is $(k+1)(2n-2k-3)$ for $n \geq 2k+2$ and $\binom{n}{2}$ for $n \leq 2k+2$. Similar bounds were derived by Capoyleas and Pach \cite{CP} when they proved that the maximum number of straight line segments in the plane connecting $n$ points in convex position so that no $k+2$ segments are pairwise crossing is $\binom{n}{2}$ for $n \leq 2k+2$ and $2(k+1) n - \binom{2k+3}{2}$ for $n \geq 2k+2$.
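For $n \geq 2k+2$ these two expressions agree, since
\[
2(k+1)n - \binom{2k+3}{2} = 2(k+1)n - (2k+3)(k+1) = (k+1)(2n-2k-3),
\]
and at the boundary value $n = 2k+2$ both bounds reduce to $(k+1)(2k+1) = \binom{2k+2}{2} = \binom{n}{2}$.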
We prove that every semi-bar or cylindrical semi-bar $k$-visibility graph can be represented in the plane with vertices drawn as points in convex position and edges drawn as segments so there are no $k+2$ pairwise crossing edges. Furthermore, we prove that the class of graphs having cylindrical semi-bar $k$-visibility representations with semi-bars of different lengths is the same as the class of $(2k+2)$-degenerate graphs having edge-maximal $(k+2)$-quasiplanar representations in the plane with vertices drawn as points in convex position and edges drawn as segments. Section~\ref{sec:order} contains a more detailed description of the results.
\section{Definitions}
A \emph{semi-bar $k$-visibility representation} of a graph $G = (V, E)$ is a collection $\left\{s_{v}\right\}_{v \in V}$ of disjoint segments in the plane parallel to the $x$-axis with left endpoints on the $y$-axis such that for all $a, b \in V$ there is an edge $\left\{a,b\right\} \in E$ if and only if there exists a vertical segment (a \emph{sightline}) which intersects $s_{a}$, $s_{b}$, and at most $k$ other semi-bars. A graph is a \emph{semi-bar $k$-visibility graph} if it has a semi-bar $k$-visibility representation.
A \emph{cylindrical semi-bar $k$-visibility representation} of a graph $G = (V, E)$ is a collection $\left\{s_{v}\right\}_{v \in V}$ of disjoint segments parallel to the $x$-axis in three dimensions with left endpoints on the circle $\left\{(0, y, z): y^{2}+z^{2} = 1 \right\}$ such that for all $a, b \in V$ there is an edge $\left\{a,b\right\} \in E$ if and only if there exists a circular arc along the surface of the cylinder parallel to the $y z$ plane (a \emph{sightline}) which intersects $s_{a}$, $s_{b}$, and at most $k$ other semi-bars. A graph is a \emph{cylindrical semi-bar $k$-visibility graph} if it has a cylindrical semi-bar $k$-visibility representation. Figure~\ref{cylbar} shows a cylindrical semi-bar visibility graph and a two-dimensional view of a corresponding representation in which bars are represented by radial segments and sightlines are represented by arcs.
\begin{figure}
\caption{A cylindrical semi-bar visibility graph and a corresponding representation.}
\label{cylbar}
\end{figure}
A graph is \emph{$k$-quasiplanar} if it can be drawn in the plane with no $k$ pairwise crossing edges. For example $2$-quasiplanar graphs are planar. Call a $k$-quasiplanar graph $G$ \emph{convex geometric} if it has a $k$-quasiplanar representation $C_{G}$ with vertices drawn as points in convex position and edges drawn as segments. Call a $k$-quasiplanar convex geometric representation \emph{maximal} if adding any straight edge to the representation causes it to have $k$ pairwise crossing edges.
In a set of points in the plane, call a pair of points a \emph{$j$-pair} if the line through those points has exactly $j$ points on one side. Every maximal $(k+2)$-quasiplanar convex geometric representation has edges between all $j$-pairs in the representation for each $j \leq k$. Indeed, a segment between the points of a $j$-pair with $j \leq k$ has at most $k$ points on one side, so it can belong to no set of $k+2$ pairwise crossing (and hence pairwise vertex-disjoint) edges; by maximality, such an edge must therefore be present.
A graph is called \emph{$l$-degenerate} if all of its subgraphs contain a vertex of degree at most $l$. Cylindrical semi-bar $k$-visibility graphs are $(2k+2)$-degenerate for all $k \geq 0$ since the shortest semi-bar in any subset of semi-bars sees at most $2k+2$ other semi-bars, so cylindrical semi-bar $k$-visibility graphs have chromatic number at most $2k+3$ and clique number at most $2k+3$. Furthermore, Felsner and Massow \cite{Felsner} showed $K_{2k+3}$ is a semi-bar $k$-visibility graph, so $K_{2k+3}$ is also a cylindrical semi-bar $k$-visibility graph. Thus $2k+3$ is the maximum possible chromatic number and clique number of cylindrical semi-bar $k$-visibility graphs.
\section{Order of results}\label{sec:order}
In Section~\ref{sec:sbkv} we show every cylindrical semi-bar $k$-visibility graph is a $(k+2)$-quasiplanar convex geometric graph. In particular, every cylindrical semi-bar $k$-visibility graph with a representation having semi-bars of different lengths has a maximal $(k+2)$-quasiplanar convex geometric representation. Furthermore, we show that if a semi-bar $k$-visibility representation $R$ with semi-bars of different lengths is curled into a cylindrical semi-bar $k$-visibility representation $R'$, then the graphs corresponding to $R$ and $R'$ will be the same if and only if the top $k+1$ and bottom $k+1$ semi-bars in $R$ comprise the longest $2k+2$ semi-bars in $R$ and the longest $2k+2$ semi-bars in $R$ are all visible to each other.
In Section~\ref{sec:kqcg} we show every graph with a maximal planar convex geometric representation can be represented as a semi-bar or cylindrical semi-bar visibility graph with semi-bars of different lengths. Moreover, we show every $(2k+2)$-degenerate graph with a maximal $(k+2)$-quasiplanar convex geometric representation has a cylindrical semi-bar $k$-visibility representation with semi-bars of different lengths. For each $k \geq 1$ we also exhibit $(k+2)$-quasiplanar convex geometric graphs which are not subgraphs of any cylindrical semi-bar $k$-visibility graph. Furthermore, we exhibit maximal $(k+2)$-quasiplanar convex geometric representations which cannot be transformed, without changing cyclic positions of vertices, into cylindrical semi-bar $k$-visibility representations having semi-bars of different lengths with a pair of adjacent semi-bars among the $2k+3$ longest semi-bars.
\section{Cylindrical semi-bar $k$-visibility graphs}\label{sec:sbkv}
We show that every cylindrical semi-bar $k$-visibility graph with $n$ semi-bars and $m$ edges can be represented by $n$ points in the plane with $m$ segments between them such that there are no $k+2$ pairwise crossing segments. Combined with the upper bound in \cite{CP}, this gives an alternative proof of the upper bound on the maximum number of edges in a semi-bar $k$-visibility graph with $n$ vertices.
\begin{thm}\label{sbtokc}
Every cylindrical semi-bar $k$-visibility graph is a $(k+2)$-qua\-si\-pla\-nar convex geometric graph.
\end{thm}
\begin{proof}
Let $H$ be a cylindrical semi-bar $k$-visibility graph with $n$ vertices. Let $B_{H}$ be a cylindrical semi-bar $k$-visibility representation of $H$ and let $b_{1}, b_{2}, \ldots, b_{n}$ be the semi-bars in $B_{H}$ listed in cyclic order. Let $p_{1}, \ldots, p_{n}$ be any points in the plane in convex position listed in cyclic order. Draw a segment between $p_{i}$ and $p_{j}$ if and only if there is an edge in $H$ between the vertices corresponding to $b_{i}$ and $b_{j}$. For contradiction suppose that the resulting drawing contains a set $C$ of $k+2$ pairwise crossing edges. None of the edges in $C$ have any vertices in common, or else they would not be crossing.
Consider an edge $e \in C$ between two points $p_{i}$ and $p_{j}$ for which $i < j$ and $b_{i}$ or $b_{j}$ is a semi-bar of minimal length among all semi-bars corresponding to endpoints of edges in $C$. Since the other $k+1$ edges in $C$ cross $e$, each edge in $C$ besides $e$ has one endpoint $p_{t_{1}}$ with $i < t_{1} < j$ and another endpoint $p_{t_{2}}$ with $t_{2} < i$ or $j < t_{2}$; in particular, each of the two half-spaces bounded by the plane through $b_{i}$ and $b_{j}$ contains at least $k+1$ semi-bars corresponding to endpoints of edges in $C \setminus \{e\}$. Since $H$ has an edge between the vertices corresponding to $b_{i}$ and $b_{j}$, the semi-bars $b_{i}$ and $b_{j}$ are both longer than all but at most $k$ of the semi-bars in at least one of these half-spaces. Therefore, there exists $t'$ for which $b_{t'}$ is shorter than both $b_{i}$ and $b_{j}$ and $p_{t'}$ is an endpoint of one of the edges in $C$, contradicting the choice of $e$.
\end{proof}
The proof of the last theorem in fact shows that every subgraph of a cylindrical semi-bar $k$-visibility graph is a $(k+2)$-quasiplanar convex geometric graph, since deleting edges from the drawing cannot create a set of $k+2$ pairwise crossing edges. In particular, every semi-bar $k$-visibility graph is a $(k+2)$-quasiplanar convex geometric graph.
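As an illustration only (the following sketch is ours and not part of the paper), one can build the cylindrical semi-bar $k$-visibility graph determined by a cyclic sequence of distinct semi-bar lengths, place the vertices in the same cyclic order in convex position, and check by brute force that no $k+2$ of the resulting chords pairwise cross. The visibility test below assumes the convention that a sightline may meet the shorter semi-bar at its endpoint.
\begin{verbatim}
from itertools import combinations

def sees(lengths, i, j, k):
    # Bars i and j see each other iff, on one of the two arcs between them,
    # at most k intermediate bars are longer than the shorter of the two
    # (all lengths are assumed to be distinct).
    n, m = len(lengths), min(lengths[i], lengths[j])
    arc1 = [lengths[t % n] for t in range(i + 1, i + (j - i) % n)]
    arc2 = [lengths[t % n] for t in range(j + 1, j + (i - j) % n)]
    return min(sum(L > m for L in arc1), sum(L > m for L in arc2)) <= k

def quasiplanar(edges, k):
    # Chords of a convex polygon cross iff their endpoints interleave;
    # check by brute force that no k+2 chords pairwise cross.
    def cross(e, f):
        (a, b), (c, d) = sorted(e), sorted(f)
        return a < c < b < d or c < a < d < b
    return not any(all(cross(e, f) for e, f in combinations(S, 2))
                   for S in combinations(edges, k + 2))

import random
k, n = 1, 10
lengths = random.sample(range(1, 100), n)   # distinct lengths in cyclic order
E = [(i, j) for i, j in combinations(range(n), 2) if sees(lengths, i, j, k)]
print(len(E), (k + 1) * (2 * n - 2 * k - 3))  # count predicted by the lemma below
print(quasiplanar(E, k))                      # expected: True
\end{verbatim}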
\begin{cor}
The maximum number of edges in a cylindrical semi-bar $k$-visibility graph with $n$ vertices is $(k+1)(2n-2k-3)$ for $n \geq 2k+2$ and $\binom{n}{2}$ for $n \leq 2k+2$.
\end{cor}
If the semi-bars in a cylindrical semi-bar $k$-visibility representation have different lengths, then the representation will have the maximum possible number of edges. The proof of this fact is similar to the proof of the upper bound for the number of edges in semi-bar $k$-visibility graphs in \cite{Felsner}.
\begin{lem}
The exact number of edges in every graph on $n$ vertices having a cylindrical semi-bar $k$-visibility representation with semi-bars of different lengths is $(k+1)(2n-2k-3)$ for $n \geq 2k+2$ and $\binom{n}{2}$ for $n \leq 2k+2$.
\end{lem}
\begin{proof}
If $n \leq 2k+2$, then the graph is complete. For $n > 2k+2$ we count, for each semi-bar $b$, how many edges in the cylindrical semi-bar $k$-visibility representation have $b$ as the shorter semi-bar. If $b$ is not among the $2k+2$ longest semi-bars, then there are $2k+2$ such edges. If $b$ is the $i^{th}$ longest semi-bar for some $1 \leq i \leq 2k+2$, then there are $i-1$ edges having $b$ as the shorter semi-bar.
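Summing these contributions over all $n$ semi-bars gives
\[
(n-2k-2)(2k+2)+\sum_{i=1}^{2k+2}(i-1)
=(2k+2)\left(n-2k-2+\tfrac{2k+1}{2}\right)
=(k+1)(2n-2k-3)
\]
edges, as claimed.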
\end{proof}
If the semi-bars in the representation have different lengths, then we also show that the construction in Theorem~\ref{sbtokc} yields a maximal $(k+2)$-quasiplanar convex geometric representation.
\begin{thm}
Every cylindrical semi-bar $k$-visibility graph with a representation having semi-bars of different lengths has a maximal $(k+2)$-quasiplanar convex geometric representation.
\end{thm}
\begin{proof}
Every cylindrical semi-bar $k$-visibility graph $G$ on $n$ vertices with a representation having semi-bars of different lengths has $(k+1)(2n-2k-3)$ edges for $n \geq 2k+2$ and $\binom{n}{2}$ edges for $n \leq 2k+2$. Furthermore, every $(k+2)$-quasiplanar convex geometric graph has at most $(k+1)(2n-2k-3)$ edges for $n \geq 2k+2$ and $\binom{n}{2}$ edges for $n \leq 2k+2$. In Theorem~\ref{sbtokc} we proved $G$ has a $(k+2)$-quasiplanar convex geometric representation. Thus $G$ has a maximal $(k+2)$-quasiplanar convex geometric representation.
\end{proof}
Any semi-bar $k$-visibility representation $R$ can be curled into a cylindrical semi-bar $k$-visibility representation by placing the semi-bars in $R$ on the cylinder $y^{2}+z^{2} = 1$ in the same order in which they occur in $R$. This transformation never deletes sightlines, but it may add sightlines. If a semi-bar visibility representation $R$ with semi-bars of different lengths is curled into a cylindrical semi-bar visibility representation $R'$, then the graphs corresponding to $R$ and $R'$ will be the same if and only if the top and bottom semi-bars in $R$ are the two longest semi-bars in $R$. The next lemma generalizes the previous statement to $k$-visibility for any $k > 0$.
\begin{lem}
If a semi-bar $k$-visibility representation $R$ with semi-bars of different lengths is curled into a cylindrical semi-bar $k$-visibility representation $R'$, then the graphs corresponding to $R$ and $R'$ will be the same if and only if the top $k+1$ and bottom $k+1$ semi-bars in $R$ comprise the longest $2k+2$ semi-bars in $R$ and the longest $2k+2$ semi-bars in $R$ are all visible to each other.
\end{lem}
\begin{proof}
If $R$ is a semi-bar $k$-visibility representation with semi-bars of different lengths such that the top $k+1$ and bottom $k+1$ semi-bars in $R$ comprise the longest $2k+2$ semi-bars in $R$ and the longest $2k+2$ semi-bars in $R$ are visible to each other, then there is no sightline in $R'$ which is not in $R$. Indeed, if there was a sightline between two semi-bars $b_{1}$ and $b_{2}$ in $R'$ which was not in $R$, then $b_{1}$ and $b_{2}$ would not both be among the longest $2k+2$ semi-bars. Then the sightline between $b_{1}$ and $b_{2}$ in $R'$ would pass through $k+1$ semi-bars that were the top $k+1$ semi-bars from $R$ or the bottom $k+1$ semi-bars from $R$. This contradicts the definition of $k$-visibility, so the graphs corresponding to $R$ and $R'$ have the same edges.
Conversely, if the graphs corresponding to $R$ and $R'$ have the same edges, then the longest $2k+2$ semi-bars in $R$ can all see each other. Suppose for contradiction that the top $k+1$ and bottom $k+1$ semi-bars in $R$ do not comprise the longest $2k+2$ semi-bars in $R$. Without loss of generality assume that there is a semi-bar $t$ among the top $k+1$ semi-bars in $R$ that is not one of the $2k+2$ longest semi-bars in $R$. Let $b$ be the bottom semi-bar in $R$. Since there is a sightline between $b$ and $t$ in $R'$, then there must be a sightline between $b$ and $t$ in $R$. However the sightline between $b$ and $t$ in $R$ crosses at least $k+1$ semi-bars longer than $t$, a contradiction. Thus the top $k+1$ and bottom $k+1$ semi-bars in $R$ comprise the longest $2k+2$ semi-bars in $R$.
\end{proof}
\section{Maximal $(k+2)$-quasiplanar convex geometric representations}\label{sec:kqcg}
In this section we find cylindrical semi-bar $k$-visibility representations of $(2k+2)$-degenerate graphs with maximal $(k+2)$-quasiplanar convex geometric representations. First we show that every graph with a maximal planar convex geometric representation is a semi-bar visibility graph.
\begin{figure}
\caption{A maximal planar convex geometric graph and a corresponding semi-bar visibility representation.}
\label{sbar}
\end{figure}
\begin{lem} \label{mptosb}
Every graph with a maximal planar convex geometric representation has a semi-bar visibility representation with semi-bars of different lengths.
\end{lem}
\begin{proof}
A cylindrical semi-bar visibility representation with the two longest semi-bars adjacent to each other can be changed into a semi-bar visibility representation by cutting the cylinder between the two longest semi-bars parallel to the cylinder's axis of symmetry and then flattening the representation. We show every graph with a maximal planar convex geometric representation has a cylindrical semi-bar visibility representation with the two longest semi-bars adjacent to each other.
Let $G$ be a graph with $n$ vertices having a maximal planar convex geometric representation $C_{G}$. Pick any vertex in $C_{G}$ to be $v_{1}$ and name the vertices $v_{1}, v_{2}, \ldots, v_{n}$ in cyclic order around $C_{G}$. We inductively construct a cylindrical semi-bar visibility representation $B_{G}$ from $C_{G}$ with $n$ semi-bars $b_{1}, b_{2}, \ldots, b_{n}$ listed in cyclic order around the cylinder. For each $i$ and $j$ there will be a sightline between $b_{i}$ and $b_{j}$ in $B_{G}$ if and only if there is a segment between $v_{i}$ and $v_{j}$ in $C_{G}$. Furthermore, no two semi-bars will have the same length and each length will be one of the integers $1, 2, \ldots, n$.
The semi-bar $b_{1}$ is assigned the length $n$ and the semi-bar $b_{n}$ is assigned $n-1$. The remaining semi-bars will be assigned lengths $1, 2, \ldots, n-2$ from shortest to longest. We may assume that $n > 2$ or else the proof is already finished. Let $C_{0} = C_{G}$. As $C_{0}$ is a maximal planar convex geometric representation, every interior face in $C_{0}$ is a triangle. If $C_{0}$ has at least four vertices, then $C_{0}$ has at least two vertices with only two neighbors (the ear tips of the triangulation), and no two such vertices are adjacent; since $v_{1}$ and $v_{n}$ are adjacent, there is therefore a vertex in $C_{0}$ besides $v_{1}$ or $v_{n}$ with only two neighbors in $C_{0}$. If $C_{0}$ has three vertices, then every vertex in $C_{0}$ has exactly two neighbors in $C_{0}$.
Let $v_{m_{0}}$ be any vertex in $C_{0}$ besides $v_{1}$ or $v_{n}$ with only two neighbors in $C_{0}$. The semi-bar $b_{m_{0}}$ is assigned the length $1$. Then $b_{m_{0}}$ will see the semi-bars corresponding to the neighbors of $v_{m_{0}}$ in $C_{0}$ no matter which lengths greater than $1$ are assigned to those semi-bars. Let $C_{1}$ be obtained by deleting $v_{m_{0}}$ and any edges including $v_{m_{0}}$ from $C_{0}$. Then $C_{1}$ is a maximal planar convex geometric representation.
Continuing by induction after $i$ iterations, suppose that $C_{i}$ is a maximal planar convex geometric representation. Then every interior face in $C_{i}$ is a triangle. At least one of the triangles has a vertex besides $v_{1}$ or $v_{n}$ with only two neighbors in $C_{i}$, so let $v_{m_{i}}$ be such a vertex. The semi-bar $b_{m_{i}}$ is assigned the length $i+1$. Then $b_{m_{i}}$ will see the semi-bars corresponding to the neighbors of $v_{m_{i}}$ in $C_{i}$ no matter which lengths greater than $i+1$ are assigned to those semi-bars.
Let $C_{i+1}$ be obtained by deleting $v_{m_{i}}$ and any edges including $v_{m_{i}}$ from $C_{i}$. Then $C_{i+1}$ is a maximal planar convex geometric representation. Thus after $n-2$ iterations we obtain a cylindrical semi-bar visibility representation of $G$ with the two longest semi-bars next to each other. Figure~\ref{sbar} shows a maximal planar convex geometric graph with $8$ vertices and a corresponding semi-bar visibility representation in which the top and bottom semi-bars have lengths $8$ and $7$. The lengths of the other semi-bars are assigned according to the description in this proof.
\end{proof}
\begin{cor}
For all graphs $G$, $G$ has a semi-bar visibility representation with semi-bars of different lengths and an edge between the topmost and bottommost semi-bars if and only if $G$ has a maximal planar convex geometric representation.
\end{cor}
Lemma~\ref{mptosb} implies every planar convex geometric graph is a subgraph of a cylindrical semi-bar visibility graph. For $k \geq 1$ we exhibit $(k+2)$-quasiplanar convex geometric graphs which are not subgraphs of any cylindrical semi-bar $k$-visibility graph.
\begin{lem}
For all $k \geq 1$ there exist $(k+2)$-quasiplanar convex geometric graphs which are not subgraphs of any cylindrical semi-bar $k$-visibility graph.
\end{lem}
\begin{proof}
Fix $k \geq 1$. Let $D$ be a drawing with $4(k+1)$ vertices $v_{1}, v_{2}, \ldots, v_{4(k+1)}$ in convex position listed in cyclic order with straight edges $\left\{v_{i}, v_{3(k+1)+1-i} \right\}$ and $\left\{v_{k+1+i}, v_{4(k+1)+1-i} \right\}$ for each $1 \leq i \leq k+1$. Let $D'$ be any maximal $(k+2)$-quasiplanar convex geometric representation with $4(k+1)$ vertices which contains $D$. For any vertices $u$ and $v$ for which $\left\{u,v\right\}$ is a $j$-pair in $D'$ for some $j \leq k$, there is an edge between $u$ and $v$ in $D'$ since $D'$ is maximal. Then each vertex in $D'$ has $2(k+1)$ edges which are $j$-pairs for some $j \leq k$ and another edge from $D$ which is a $j$-pair for some $j > k$. Thus every vertex in $D'$ has degree at least $2(k+1)+1$, but cylindrical semi-bar $k$-visibility graphs are $(2k+2)$-degenerate. So the graph represented by $D'$ is not a subgraph of any cylindrical semi-bar $k$-visibility graph. See Figure~\ref{keq1} for an example in the case $k = 1$.
\end{proof}
\begin{figure}
\caption{A $3$-quasiplanar convex geometric graph which is not $4$-degenerate.}
\label{keq1}
\end{figure}
If a $(2k+2)$-degenerate graph $G$ has a maximal $(k+2)$-quasiplanar convex geometric representation, then we can show that $G$ is a cylindrical semi-bar $k$-visibility graph using a proof like the one for Lemma~\ref{mptosb}.
\begin{thm}\label{final}
Every $(2k+2)$-degenerate graph with a maximal $(k+2)$-quasiplanar convex geometric representation has a cylindrical semi-bar $k$-visibility representation with semi-bars of different lengths.
\end{thm}
\begin{proof}
Let $G$ be a $(2k+2)$-degenerate graph with $n$ vertices having a maximal $(k+2)$-quasiplanar convex geometric representation $C_{G}$. Pick any vertex in $C_{G}$ to be $v_{1}$ and name the vertices $v_{1}, v_{2}, \ldots, v_{n}$ in cyclic order around $C_{G}$. We construct a cylindrical semi-bar $k$-visibility representation $B_{G}$ from $C_{G}$ with $n$ semi-bars $b_{1}, b_{2}, \ldots, b_{n}$ listed in cyclic order around the cylinder. For each $i$ and $j$ there will be a sightline between $b_{i}$ and $b_{j}$ in $B_{G}$ if and only if there is a segment between $v_{i}$ and $v_{j}$ in $C_{G}$.
Furthermore, no two semi-bars will have the same length and each length will be one of the integers $1, 2, \ldots, n$. The semi-bars will be assigned lengths from shortest to longest. Let $C_{0} = C_{G}$. Each $C_{i}$ will be a maximal $(k+2)$-quasiplanar convex geometric representation.
Since $G$ is a $(2k+2)$-degenerate graph, then $C_{0}$ has a vertex with at most $2k+2$ neighbors in $C_{0}$, so let $v_{m_{0}}$ be such a vertex. Since $C_{0}$ is a maximal $(k+2)$-quasiplanar convex geometric representation, then the neighbors of $v_{m_{0}}$ in $C_{0}$ are the vertices $v$ for which $\left\{v, v_{m_{0}} \right\}$ is a $j$-pair in $C_{0}$ for each $j \leq k$. The semi-bar $b_{m_{0}}$ is assigned the length $1$. Then $b_{m_{0}}$ will see the semi-bars corresponding to the neighbors of $v_{m_{0}}$ in $C_{0}$ no matter which lengths greater than $1$ are assigned to those semi-bars.
Let $C_{1}$ be obtained by deleting $v_{m_{0}}$ and any edges including $v_{m_{0}}$ from $C_{0}$. If we suppose for contradiction that $C_{1}$ is not a maximal $(k+2)$-quasiplanar convex geometric representation, then there is an edge which can be added to $C_{1}$ so that the resulting representation is still a $(k+2)$-quasiplanar convex geometric representation. However, this same edge can be added to $C_{0}$ so that the resulting representation is still a $(k+2)$-quasiplanar convex geometric representation: every edge containing $v_{m_{0}}$ joins $v_{m_{0}}$ to a $j$-pair partner for some $j \leq k$, so it has at most $k$ vertices strictly on one side of it and therefore cannot belong to a set of $k+2$ pairwise crossing (and hence pairwise vertex-disjoint) edges. This contradicts the hypothesis that $C_{0}$ is a maximal $(k+2)$-quasiplanar convex geometric representation. Thus $C_{1}$ is a maximal $(k+2)$-quasiplanar convex geometric representation.
Continuing by induction after $i$ iterations suppose that $C_{i}$ is a maximal $(k+2)$-quasiplanar convex geometric representation of a $(2k+2)$-degenerate graph. Then $C_{i}$ has a vertex with at most $2k+2$ neighbors in $C_{i}$, so let $v_{m_{i}}$ be such a vertex. Since $C_{i}$ is a maximal $(k+2)$-quasiplanar convex geometric representation, then the neighbors of $v_{m_{i}}$ in $C_{i}$ are the vertices $v$ for which $\left\{v, v_{m_{i}} \right\}$ is a $j$-pair in $C_{i}$ for each $j \leq k$. The semi-bar $b_{m_{i}}$ is assigned the length $i+1$. Then $b_{m_{i}}$ will see the semi-bars corresponding to the neighbors of $v_{m_{i}}$ in $C_{i}$ no matter which lengths greater than $i+1$ are assigned to those semi-bars.
Let $C_{i+1}$ be obtained by deleting $v_{m_{i}}$ and any edges including $v_{m_{i}}$ from $C_{i}$. Then $C_{i+1}$ is a maximal $(k+2)$-quasiplanar convex geometric representation. Thus after $n$ iterations we obtain a cylindrical semi-bar $k$-visibility representation of $G$.
\end{proof}
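The length-assignment procedure used in the proof can be sketched in a few lines of Python (an illustrative sketch, not part of the paper; all names are ours). It assumes the vertices are numbered $0,\dots,n-1$ in their convex cyclic order and that the input graph is $(2k+2)$-degenerate, so that a vertex of small degree always exists.
\begin{verbatim}
def assign_lengths(n, edges, k):
    # Greedily delete a vertex of degree at most 2k+2 and give it the next
    # unused length 1, 2, ..., n; the cyclic order of the semi-bars is the
    # cyclic order of the vertices, so only the lengths need to be computed.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    length, remaining = {}, set(range(n))
    for step in range(1, n + 1):
        # (2k+2)-degeneracy guarantees such a vertex exists at every step.
        v = next(u for u in remaining if len(adj[u] & remaining) <= 2 * k + 2)
        length[v] = step        # v will see all of its remaining neighbours,
        remaining.discard(v)    # whatever longer lengths they receive later
    return [length[v] for v in range(n)]    # lengths in cyclic order
\end{verbatim}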
\begin{cor}
For all graphs $G$, $G$ has a cylindrical semi-bar $k$-visibility representation with semi-bars of different lengths if and only if $G$ is $(2k+2)$-degenerate and has a maximal $(k+2)$-quasiplanar convex geometric representation.
\end{cor}
\begin{figure}
\caption{A maximal $3$-quasiplanar convex geometric representation corresponding to a cylindrical semi-bar $1$-visibility representation with semi-bars of lengths $1, 6, 2, 7, 3, 8, 4, 9, 5, 10$ in cyclic order.}
\label{m3q}
\end{figure}
\begin{figure}
\caption{A maximal $3$-quasiplanar convex geometric representation which cannot be transformed, without changing cyclic positions of vertices, into a cylindrical semi-bar $1$-visibility representation having semi-bars of different lengths with a pair of adjacent semi-bars among the $5$ longest semi-bars.}
\label{counter}
\end{figure}
Lemma~\ref{mptosb} showed that any maximal planar convex geometric representation can be transformed without changing cyclic positions of vertices into a cylindrical semi-bar visibility representation having semi-bars of different lengths with the two longest semi-bars adjacent. However for $k \geq 1$ there exist maximal $(k+2)$-quasiplanar convex geometric representations which cannot be transformed, without changing cyclic positions of vertices, into cylindrical semi-bar $k$-visibility representations having semi-bars of different lengths with a pair of adjacent semi-bars among the $2k+3$ longest semi-bars.
For each $k \geq 1$ consider the $(k+2)$-quasiplanar representation $C$ obtained by applying Theorem~\ref{sbtokc} to the cylindrical semi-bar $k$-visibility representation $R$ having $(2k+3)(k+1)$ semi-bars with lengths $1$, $2k+4$, $\ldots$, $1+k(2k+3)$, $2$, $2k+5$, $\ldots$, $2+k(2k+3)$, $3$, $2k+6$, $\ldots$, $3+k(2k+3)$, $\ldots$, $2k+3$, $4k+6$, $\ldots$, $(2k+3)(k+1)$ in cyclic order. Let $B$ be a cylindrical semi-bar $k$-visibility representation with semi-bars of lengths $1, 2, \ldots, (2k+3)(k+1)$ constructed from $C$ using Theorem~\ref{final}. Figure~\ref{m3q} shows the representation $C$ when $k = 1$.
The only vertex of degree $2k+2$ in $C$ is the vertex $x$ corresponding to the semi-bar of length $1$ in $R$, so $x$ has length $1$ in $B$. When $x$ is removed from $C$, the only vertex of degree $2k+2$ in the resulting representation is the vertex $y$ corresponding to the semi-bar of length $2$ in $R$, so $y$ has length $2$ in $B$. Similarly the vertices in $C$ corresponding to semi-bars of lengths $3$, $4$, $\ldots$, $(2k+3)k$ in $R$ have lengths $3$, $4$, $\ldots$, $(2k+3)k$ respectively in $B$. Then no pair of semi-bars among the $2k+3$ longest semi-bars can be adjacent in $B$, unless the cyclic positions of vertices in $C$ are changed in $B$. Figure~\ref{counter} illustrates the process of assigning lengths to $B$ when $k = 1$.
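For $k=1$ this degree computation is easy to check directly; the following short fragment (an illustrative sketch of ours, using the same visibility test as in the earlier sketch) builds the graph of $R$ and prints the degree of each semi-bar. By the argument above, the semi-bar of length $1$ should be the unique vertex of degree $2k+2=4$.
\begin{verbatim}
def sees(lengths, i, j, k):
    n, m = len(lengths), min(lengths[i], lengths[j])
    arc1 = [lengths[t % n] for t in range(i + 1, i + (j - i) % n)]
    arc2 = [lengths[t % n] for t in range(j + 1, j + (i - j) % n)]
    return min(sum(L > m for L in arc1), sum(L > m for L in arc2)) <= k

k = 1
lengths = [1, 6, 2, 7, 3, 8, 4, 9, 5, 10]   # cyclic order of R for k = 1
deg = [sum(sees(lengths, i, j, k) for j in range(len(lengths)) if j != i)
       for i in range(len(lengths))]
print(list(zip(lengths, deg)))
\end{verbatim}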
\section{Acknowledgments}
Jesse Geneson was supported by an NSF graduate research fellowship. He thanks Katherine Bian for helpful comments on this paper.
\end{document} |
\begin{document}
\title{Topics in Graph Colouring and Graph Structures}
\author{David Ferguson}
\begin{titlepage}
\begin{center}
\textsc{\LARGE \bf }\\[3cm]
{\huge \bf The Ramsey number of \\[12pt] mixed-parity cycles III}
{\Large David G. Ferguson}
\end{center}
\abstract{
\noindent
Denote by $R(G_1, G_2, G_3)$ the minimum integer $N$ such that any three-colouring of the edges of the complete graph on $N$ vertices contains
a monochromatic copy of a graph $G_i$ coloured with colour~$i$ for some $i\in\{1,2,3\}$.
In a series of three papers of which this is the third, we consider the case where $G_1, G_2$ and $G_3$ are cycles of mixed parity. Specifically, in this paper, we consider~$R(C_n,C_m,C_{\ell})$, where $n$ is even and $m$ and $\ell$ are odd.
Figaj and \L uczak determined an asymptotic result for
this case, which we improve upon to give an exact result.
We prove that for~$n,m$ and $\ell$ sufficiently large
$$R(C_n,C_m,C_\ell)=\max\{4n-3, n+2m-3, n+2\ell-3\}.$$
}
\end{titlepage}
\pagenumbering{arabic}
\let\L\defaultL
\setcounter{page}{2}
\renewcommand{\baselinestretch}{1.25}\small\normalsize
\
For graphs $G_1,G_2,G_3$, the Ramsey number $R(G_1,G_2,G_3)$ is the smallest integer~$N$ such that every edge-colouring of the complete graph on~$N$ vertices with up to three colours results in the graph having, as a subgraph, a copy of~$G_{i}$ coloured with colour $i$ for some~$i$.
We consider the case when $G_1,G_2$ and $G_3$ are~cycles.
In 1973, Bondy and Erd\H{o}s~\cite{BonErd} conjectured that, if~$n>3$ is odd, then $$R(C_{n},C_{n},C_{n})=4n-3.$$
Later, \L uczak~\cite{Lucz} proved that, for~$n$ odd, $R(C_{n},C_{n},C_{n})=4n+o(n)$ as $n\rightarrow \infty$. Kohayakawa, Simonovits and Skokan~\cite{KoSiSk}, expanding upon the work of \L uczak, confirmed the Bondy-Erd\H{o}s conjecture for sufficiently large odd values of~$n$ by proving that there exists a positive integer~$n_0$ such that, for all odd $n,m,\ell>n_{0}$,
$$R(C_{n},C_{m},C_{\ell})=4 \max\{n,m,\ell\} -3.$$
In the case where all three cycles are of even length, Figaj and \L uczak~\cite{FL2007} proved the following asymptotic. Defining ${\langle \! \langle} x{\rangle \! \rangle}$ to be the largest even integer not greater than~$x$, they proved that, for all $\alpha_{1},\alpha_{2},\alpha_{3}>0$,
$$R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}},C_{{\langle \! \langle} \alpha_{2} n{\rangle \! \rangle}},C_{{\langle \! \langle} \alpha_{3} n {\rangle \! \rangle}})=\half\big(\alpha_{1} + \alpha_{2} +\alpha_{3}+\max \{\alpha_{1},\alpha_{2},\alpha_{3}\}\big)n+o(n),$$
as $n \rightarrow \infty$.
Thus, in particular, for even~$n$,
$$R(C_{n},C_{n},C_{n})=2n+o(n),\text{ as }n\rightarrow \infty.$$
Independently, Gy\'{a}rf\'{a}s, Ruszink\'{o}, S\'{a}rk\"{o}zy and Szem\'{e}redi~\cite{GyarSzem} proved a similar, but more precise, result for paths, namely that there exists a positive integer~$n_{1}$ such that, for $n>n_{1}$,
$$R(P_{n},P_{n},P_{n})=\begin{cases} 2n-1, & n \text{ odd,} \\ 2n-2, & n \text{ even.} \end{cases}$$
More recently, Benevides and Skokan~\cite{BenSko} proved that there exists~$n_{2}$ such that, for even $n>n_{2}$,
$$R(C_{n},C_{n},C_{n})=2n.$$
We look at the mixed-parity case, for which,
defining $\langle x \rangle$ to be the largest odd number not greater than~$x$, Figaj and \L uczak~\cite{FL2008} proved that, for all $\alpha_{1},\alpha_{2},\alpha_{3}>0$,
\begin{align*}
&\text{(i)}\ R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle} },C_{{\langle \! \langle} \alpha_{2} n {\rangle \! \rangle} },C_{\langle \alpha_{3} n\rangle }) = \max \{2\alpha_{1}+\alpha_{2}, \alpha_{1}+2\alpha_{2}, \half\alpha_{1} + \half\alpha_{2} +\alpha_{3} \}n +o(n),\\
&\text{(ii)}\ R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle} },C_{\langle \alpha_{2} n \rangle },C_{\langle \alpha_{3} n\rangle }) = \max \{4\alpha_{1},\alpha_{1}+2\alpha_{2}, \alpha_{1} +2\alpha_{3} \}n +o(n),
\end{align*}
as $n\rightarrow \infty$.
In \cite{DF1} and \cite{DF2}, improving on the result of Figaj and \L uczak, in the case when exactly one of the cycles is of odd length and the others are even, we proved the following:
\phantomsection
\hypertarget{thA}
\phantomsection
\begin{thA}
\label{thA}
For every $\alpha_{1}, \alpha_{2}, \alpha_{3}>0$ such that $\alpha_{1} \geq \alpha_{2}$, there exists a positive integer $n_{A}=n_{A}(\alpha_{1},\alpha_{2},\alpha_{3})$ such that, for $n> n_{A}$,
\begin{align*}
R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}},C_{{\langle \! \langle} \alpha_{2} n {\rangle \! \rangle}}, C_{\langle \alpha_{3} n \rangle }) = \max\{ 2{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle} + {\langle \! \langle} \alpha_{2} n {\rangle \! \rangle} - 3,\ \half{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle} + \half{\langle \! \langle} \alpha_{2} n {\rangle \! \rangle} + \langle \alpha_{3} n \rangle - 2\}.
\end{align*}
\end{thA}
In this paper, we consider the complementary case, that is, where exactly one of the cycles is of even length and the others are odd. Specifically, again improving on the result of Figaj and \L uczak, we prove the following:
\phantomsection
\hypertarget{thC}
\phantomsection
\begin{thC}
\label{thC}
For every $\alpha_{1}, \alpha_{2}, \alpha_{3}>0$, there exists a positive integer $n_{C}=n_{C}(\alpha_{1},\alpha_{2},\alpha_{3})$ such that, for $n> n_{C}$,
\begin{align*}
R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}},C_{\langle \alpha_{2} n \rangle}, C_{\langle \alpha_{3} n \rangle }) = \max\{
4{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle},
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{2} n \rangle,
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{3} n \rangle\}-3.
\end{align*}
\end{thC}
\section{Lower bounds}
\label{ram:low}
\setlength{\parskip}{0.1in plus 0.05in minus 0.025in}
The first step in proving Theorem C is to exhibit three-edge-colourings of the complete graph on $$ \max\{
4{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle},
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{2} n \rangle,
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{3} n \rangle\}-4
$$ vertices which do not contain any of the relevant coloured cycles, thus proving that
$$ R(C_{{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}},C_{\langle \alpha_{2} n \rangle}, C_{\langle \alpha_{3} n \rangle }) \geq \max\{
4{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle},
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{2} n \rangle,
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{3} n \rangle\}-3
.$$
For this purpose, the well-known colourings shown in Figure~\ref{fig:lb0} suffice:
\begin{figure}
\caption{Extremal colouring for Theorem~C.}
\label{fig:lb0}
\end{figure}
\setcounter{figure}{1}
The first graph shown in Figure \ref{fig:lb0} has $4{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -4$ vertices, divided into four equally-sized classes $V_{1}, V_{2}, V_{3}$ and $V_{4}$ such that all edges in $G[V_1]$, $G[V_2]$, $G[V_3]$ and $G[V_4]$ are coloured red, all edges in $G[V_1,V_3]$ and $G[V_2,V_4]$ are coloured blue and all edges in $G[V_1 \cup V_3,V_2 \cup V_4]$ are coloured green. Since each class has only ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -1$ vertices and the blue and green subgraphs are bipartite, this colouring contains no red cycle on ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices and no blue or green cycle of odd length.
The second graph shown in Figure \ref{fig:lb0} has ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} + 2\langle \alpha_{2} n \rangle- 4$ vertices, divided into four classes $V_{1}, V_{2}, V_{3}$ and $V_{4}$ with $|V_{1}|=|V_{2}|=\langle \alpha_{2} n \rangle-1$ and $|V_{3}|=|V_{4}|=\half{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}-1$ such that all edges in $G[V_1,V_3]$ and $G[V_2,V_4]$ are coloured red, all edges in $G[V_1]$ and $G[V_2]$ are coloured blue, all edges in $G[V_1\cup V_3, V_2\cup V_4 ]$ are coloured green and all edges in $G[V_3]$ and $G[V_4]$ are coloured red or blue.
The third graph shown in Figure \ref{fig:lb0} has ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} + 2\langle \alpha_{3} n \rangle- 4$ vertices, divided into four classes $V_{1}, V_{2}, V_{3}$ and $V_{4}$ with $|V_{1}|=|V_{2}|=\langle \alpha_{3} n \rangle-1$ and $|V_{3}|=|V_{4}|=\half{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}-1$ such that all edges in $G[V_1,V_3]$ and $G[V_2,V_4]$ are coloured red, all edges in $G[V_1]$ and $G[V_2]$ are coloured green, all edges in $G[V_1\cup V_3, V_2\cup V_4 ]$ are coloured blue and all edges in $G[V_3]$ and $G[V_4]$ are coloured red or green.
Thus, it remains to prove the corresponding upper-bound. To do so, we combine regularity (as used in~\cite{Lucz},~\cite{FL2007},~\cite{FL2008}) with stability methods using a similar approach to~\cite{GyarSzem}, \cite{BenSko}, \cite{KoSiSk}, \cite{KoSiSk2}.
\section{Key steps in the proof}
\label{ram:key}
In order to complete the proof of Theorem C, we must show that, for $n$ sufficiently large, any three-colouring of~$G$, the complete graph on $$N= \max\{
4{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle},
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{2} n \rangle,
{\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}+2\langle \alpha_{3} n \rangle
\}-3$$ vertices, will result in either a red cycle on ${\langle \! \langle} \alpha_{1} n {\rangle \! \rangle}$ vertices, a blue cycle on $\langle \alpha_{2} n \rangle$ vertices or a green cycle on~$\langle \alpha_{3} n \rangle$ vertices.
The main steps of the proof are as follows: Firstly, we apply a version of the Regularity Lemma (Theorem~\ref{l:sze}) to give a partition $V_0\cup V_1\cup\dots\cup V_K$ of the vertices which is simultaneously regular for the red, blue and green spanning subgraphs of~$G$. Given this partition, we define the three-multicoloured reduced-graph ${\mathcal G}$ on vertex set $V_1,V_2,\dots V_K$ whose edges correspond to the regular pairs. We colour the edges of the reduced-graph with all those colours for which the corresponding pair has density above some threshold.
\L uczak~\cite{Lucz} showed that, if the threshold is chosen properly, then the existence of a matching in a monochromatic connected-component of the reduced-graph implies the existence of a monochromatic cycle of the corresponding length in the original graph.
Thus, the key step in the proof of Theorem C will be to prove a Ramsey-type stability result for so-called connected-matchings (Theorem D). Defining a \textit{connected-matching} to be a matching with all its edges belonging to the same component,
this result essentially says that, for every $\alpha_{1},\alpha_{2},\alpha_{3}>0$
and every sufficiently large $k$, every three-multicolouring of a graph ${\mathcal G}$ on slightly fewer than $K=\max\{4\alpha_{1}, \alpha_{1}+2\alpha_{2}, \alpha_{1}+2\alpha_{3}\}k$ vertices with sufficiently large minimum degree results in either a connected-matching on at least~$\alpha_{1} k$ vertices in the red subgraph of ${\mathcal G}$, a connected-matching on at least~$\alpha_{2} k$ vertices in a non-bipartite component of the blue subgraph of ${\mathcal G}$, a connected-matching on at least~$\alpha_{3} k$ vertices in a non-bipartite component of the green subgraph of ${\mathcal G}$ or one of a list of particular structures which will be defined later.
In the case that ${\mathcal G}$ contains a suitably large connected-matching in one of its coloured subgraphs, a blow-up result of Figaj and \L uczak (see Theorem~\ref{th:blow-up}) can be used to give a monochromatic cycle of the same colour in~$G$. If ${\mathcal G}$ does not contain such a connected-matching, then the stability result gives us information about the structure of ${\mathcal G}$. We then show that~$G$ has essentially the same structure which we exploit to force the existence of a monochromatic cycle.
In the next section, given a three-colouring of the complete graph on~$N$ vertices, we will define its three-multicoloured reduced-graph. We will also discuss a version of the blow-up lemma of Figaj and \L uczak, which motivates this approach.
In Section~\ref{ram:defn}, we will deal with some notational formalities before proceeding in Section~\ref{s:struct} to define the structures we need and to give a precise formulation of the connected-matching stability result which we shall call Theorem~D.
In Section~\ref{s:pre1}, we give a number of technical lemmas needed for the proofs of Theorem~C and Theorem~D. Among these is a decomposition result of Figaj and \L uczak which provides insight into the structure of the reduced-graph.
The hard work is done in Section~\ref{s:stabp}, where we prove Theorem~D, and in Sections~\mbox{\ref{s:p10}--\ref{s:p13}}, where we translate this result for connected-matchings into one for cycles, thus completing the proof of Theorem~C.
\section{Cycles, matchings and regularity}
\label{ram:cmr}
Szemer\'{e}di's Regularity Lemma~\cite{SzemRegu} asserts that any sufficiently large graph can be approximated by the union of a bounded number of random-like bipartite graphs.
Given a pair $(A,B)$ of disjoint subsets of the vertex set of a graph~$G$, we write $d(A,B)$ for the \textit{density} of the pair, that is, $d(A,B)=e(A,B)/|A||B|$ and say that such a pair is $(\epsilon,G)$\textit{-regular} for some~$\epsilon>0$ if, for every pair $(A',B')$ with $A'\subseteq A$, $|A'|\geq \epsilon |A|$, $B' \subseteq B$, $|B'|\geq \epsilon |B|$, we have $ \left| d(A',B')-d(A,B) \right| <\epsilon.$
We will make use of a generalised version of Szemer\'edi's Regularity Lemma~in order to move from considering monochromatic cycles to considering monochromatic connected-matchings, the version below being a slight modification of one found, for instance, in~\cite{KomSim}:
\begin{theorem}
\label{l:sze}
For every $\epsilon>0$ and every positive integer~$k_{0}$, there exists $K_{\ref{l:sze}}=K_{\ref{l:sze}}(\epsilon,k_{0})$ such that the following holds: For all graphs $G_{1},G_{2},G_{3}$ with $V(G_{1})=V(G_{2})=V(G_{3})=V$ and $|V|\geq K_{\ref{l:sze}}$, there exists a partition $\Pi =(V_{0},V_{1},\dots,V_{K})$ of $V$ such that
\begin{itemize}
\item [(i)]$k_{0}\leq K \leq K_{\ref{l:sze}}$;
\item [(ii)]$|V_0|\leq \epsilon |V|$;
\item [(iii)]$|V_1|=|V_2|=\dots =|V_K|$; and
\item [(iv)] for each $i$ with $1\leq i\leq K$, all but at most $\epsilon K$ of the pairs $(V_i,V_j)$, $1\leq j\leq K$, $j\neq i$, are simultaneously ($\epsilon,G_r$)-regular for $r=1,2,3$.
\end{itemize}
\end{theorem}
Note that, given $\epsilon>0$ and graphs $G_1,G_2$ and $G_3$ on the same vertex set $V$, we call a partition $\Pi=(V_0,V_1,\dots,V_K)$ satisfying (ii)--(iv) \textit{$(\epsilon,G_1,G_2,G_3)$-regular}.
In what follows, given a three-coloured graph~$G$, we will use $G_1, G_2, G_3$ to refer to its monochromatic spanning subgraphs. That is $G_1$ (resp. $G_2, G_3$) has the same vertex set as~$G$ and includes, as an edge, any edge which in~$G$ is coloured red (resp. blue, green).
Then, given a three-coloured graph~$G$, we can use Theorem~\ref{l:sze} to define a partition which is simultaneously regular for $G_1$, $G_2$, $G_3$ and then define the three-multicoloured reduced-graph ${\mathcal G}$ as follows:
\begin{definition}
\label{reduced}
Given $\epsilon>0$, $\xi>0$, a three-coloured graph $G=(V,E)$ and an $(\epsilon,G_1,G_2,G_3)$-regular partition $\Pi =(V_{0},V_{1},\dots,V_{K})$, we define the three-multicoloured $(\epsilon,\xi,\Pi)$-reduced-graph ${\mathcal G}=({\mathcal V},{\mathcal E})$ by:
\begin{align*}
\mathcal{V}&=\{V_{1},V_{2},\dots ,V_{K}\}, \text{\quad\quad}
\mathcal{E}&=\{ V_{i}V_{j} : (V_{i},V_{j}) \text{ is simultaneously } (\epsilon,G_{r})\text{-regular for }r=1,2,3\},
\end{align*}
where $V_{i}V_{j}$ is coloured with all colours~$r$ such that $d_{G_r}(V_i,V_j)\geq\xi$.
\end{definition}
One well known fact about regular pairs is that they contain long paths. This is summarised in the following lemma, which is a slight modification of one found in~\cite{Lucz}:
\begin{lemma}
\label{longpath}
For every~$\epsilon$ such that $0\leq \epsilon < 1/600 $ and every $k\geq1/\epsilon$, the following holds: Let~$G$ be a bipartite graph with bipartition $V(G)=V_1\cup V_2$ such that $|V_1|,|V_2|\geq k$, the pair $(V_1,V_2)$ is $\epsilon$-regular and $e(V_1,V_2)\geq \epsilon^{1/2} |V_1||V_2|$. Then, for every integer~$\ell$ such that $1\leq \ell \leq k-2\epsilon^{1/2} k$ and every $v' \in V_1$, $v'' \in V_2$ such that $d(v'),d(v'')\geq \tfrac{2}{3}\epsilon^{1/2}k$,~$G$ contains a path of length $2\ell +1$ between~$v'$ and~$v''$.
\end{lemma}
A \textit{matching} is a collection of pairwise vertex-disjoint edges. Note that, in what follows, we will sometimes abuse terminology and, where appropriate, refer to a matching by its vertex set rather than its edge set.
We call a matching with all its vertices in the same component of~$G$ a \textit{connected-matching} and note that we say a connected-matching is \textit{odd} if the component containing the matching also contains an odd cycle.
The following theorem makes use of the lemma~above to blow up large connected-matchings in the reduced-graph to cycles (of appropriate length and parity) in the original. This facilitates our approach to proving Theorem~C in that it allows us to shift our attention away from cycles to connected-matchings, which turn out to be somewhat easier to find. Figaj and \L uczak~\cite[Lemma~3]{FL2008} proved a more general version of this theorem in a slightly different context (they considered any number of colours and any combination of parities and used a different threshold for colouring the reduced-graph):
\begin{theorem}
\label{th:blow-up}
For all $c_1,c_2,c_3,d, \eta>0$ such that
$0<\eta<\min\{0.01, (64c_1+64c_2+64c_3)^{-1}\}$, there exists $n_{\ref{th:blow-up}}=n_{\ref{th:blow-up}}(c_1,c_2,c_3,d,\eta)$ such that, for $n>n_{\ref{th:blow-up}}$, the following holds:
Given $\alpha_{1}, \alpha_{2}, \alpha_{3}$ such that $0<\alpha_{1},\alpha_{2},\alpha_{3}\leq2$, and $\xi$ such that $\eta\leq \xi\leq\tfrac{1}{3}$, a complete three-coloured graph $G=(V,E)$ on
$$N=c_{1}{\langle \! \langle} \alpha_{1} n{\rangle \! \rangle}+c_{2}\langle \alpha_{2} n \rangle +c_{3}\langle\alpha_{3}n\rangle-d$$
vertices and an $(\eta^4,G_1,G_2,G_3)$-regular partition $\Pi =(V_{0},V_{1},\dots,V_{K})$ for some $K>8(c_1+c_2+c_3)^2/\eta$, letting ${\mathcal G}=({\mathcal V},{\mathcal E})$ be the three-multicoloured $(\eta^4,\xi,\Pi)$-reduced-graph of~$G$ on~$K$ vertices, and letting $k$ be an integer such that
$$
c_{1}\alpha_{1}k+c_{2}\alpha_{2} k+c_{3}\alpha_{3} k - \eta k \leq K \leq c_{1}\alpha_{1}k+c_{2}\alpha_{2} k+c_{3}\alpha_{3} k - \half\eta k,$$
\begin{itemize}
\item[(i)] if ${\mathcal G}$ contains a red connected-matching on at least $\alpha_{1}k$ vertices, then~$G$ contains a red cycle on ${\langle \! \langle} \alpha_{1} n{\rangle \! \rangle}$ vertices;
\item[(ii)] if ${\mathcal G}$ contains a blue odd connected-matching on at least $\alpha_{2}k$ vertices, then~$G$ contains a blue cycle on $\langle \alpha_{2} n\rangle$ vertices;
\item[(iii)] if ${\mathcal G}$ contains a green odd connected-matching on at least $\alpha_{3}k$ vertices, then~$G$ contains a green cycle on $\langle\alpha_{3}n\rangle $ vertices.
\end{itemize}
\end{theorem}
\begin{comment}
A version of the below theorem formed the main result of the paper of Figaj and \L uczak~\cite{FL2008}. They used this theorem on connected-matchings and their version of Theorem~\ref{th:blow-up} above to prove the asymptotic Ramsey result for connected-matchings given in Section~\ref{ram:intro}.
\begin{theorem}
\label{ThA}
Given $\alpha_{1},\alpha_{2},\alpha_{3}>0$, there exists $\eta_{\ref{ThA}}=\eta_{\ref{ThA}}(\alpha_{1},\alpha_{2},\alpha_{3})$ and $k_{\ref{ThA}}=k_{\ref{ThA}}(\alpha_{1},\alpha_{2},\alpha_{3},\eta)$ such that, if $0<\eta<\eta_{\ref{ThA}}, k\geq k_{\ref{ThA}}$, then:
For every three-colouring of a graph~$G$, a $(1-\eta^4)$-complete graph on
$$K\geq (\max \{ 2\alpha_{1}+\alpha_{2}, \half \alpha_{1}+\half\alpha_{2}+\alpha_{3} \} +10\eta^{1/4})k$$ vertices, at least one of the following occurs:
\begin{itemize}
\item [(i)]~$G$ contains a red connected-matching on at least $\alpha_{1}k$ vertices;
\item [(ii)]~$G$ contains a blue connected-matching on at least $\alpha_{2}k$ vertices;
\item [(iii)]~$G$ contains a green odd connected-matching on at least $\alpha_{3}k$ vertices.
\end{itemize}
\end{theorem}
Much of the remainder of this paper will be dedicated to setting up a result analogous to this. Our version of this result uses a similar approach but considers a graph on slightly fewer vertices, the result being either a monochromatic connected-matching or a particular structure which can then be exploited to force a cycle.
\end{comment}
\section{Definitions and notation}
\label{ram:defn}
Recall that, given a three-coloured graph~$G$, we refer to the first, second and third colours as red, blue and green respectively and use $G_1, G_2, G_3$ to refer to the monochromatic spanning subgraphs of $G$. In what follows, if~$G_1$ contains the edge $uv$, we say that $u$ and $v$ are \textit{red neighbours} of each other in $G$. Similarly, if $uv\in E(G_2)$, we say that $u$ and $v$ are \textit{blue neighbours} and, if $uv\in E(G_3)$, we say that $u$ and $v$ are \textit{green neighbours}.
We say that a graph~$G=(V,E)$ on~$N$ vertices is $a$-\emph{almost-complete} for $0\leq a\leq N-1$ if its minimum degree~$\delta(G)$ is at least $(N-1)-a$. Observe that, if~$G$ is $a$-almost-complete and $X\subseteq V$, then $G[X]$ is also $a$-almost-complete.
We say that a graph~$G$ on~$N$ vertices is $(1-c)$-\emph{complete} for $0\leq c\leq 1$ if it is $c(N-1)$-almost-complete, that is, if $\delta(G)\geq (1-c)(N-1)$. Observe that, for $c\leq \half$, any $(1-c)$-complete graph is connected.
We say that a bipartite graph~$G=G[U,W]$ is $a$-\emph{almost-complete} if every $u\in U$ has degree at least $|W|-a$ and every $w\in W$ has degree at least $|U|-a$. Notice that, if~$G[U,W]$ is $a$-almost-complete and $U_1\subseteq U, W_1\subseteq W$, then $G[U_1,W_1]$ is $a$-almost-complete.
We say that a bipartite graph~$G=G[U,W]$ is $(1-c)$-\emph{complete} if every $u\in U$ has degree at least $(1-c)|W|$ and every $w\in W$ has degree at least $(1-c)|U|$. Again, notice that, for $c< \half$, any $(1-c)$-complete bipartite graph $G[U,W]$ is connected, provided that~$U,W\neq \emptyset$.
We say that a graph~$G$ on~$N$ vertices is $c$-\emph{sparse} for $0<c<1$ if its maximum degree is at most $c(N-1)$. We say a bipartite graph~$G=G[U,W]$ is $c$-\emph{sparse} if every $u\in U$ has degree at most $c|W|$ and every vertex $w\in W$ has degree at most $c|U|$.
For vertices~$u$ and~$v$ in a graph~$G$, we will say that the edge $uv$ is \emph{missing} if~$uv\notin E(G)$.
\section {Connected-matching stability result}
\label{s:struct}
Before proceeding to state Theorem~D,
we define the coloured structures we will need.
\begin{definition}
\label{d:H}
For $x_{1}, x_{2}, c_1,c_2$ positive, $\gamma_1,\gamma_2$ colours, let ${\mathcal H}(x_{1},x_{2}, c_1,c_2, \gamma_1,\gamma_2)$ be the class of edge-multicoloured graphs defined as follows: A given two-multicoloured graph $H=(V,E)$ belongs to~${\mathcal H}$ if its vertex set can be partitioned into $X_{1}\cup X_{2}$ such that
\begin{itemize}
\item[(i)] $|X_{1}|\geq x_{1}, |X_{2}|\geq x_{2} $;
\item[(ii)] $H$ is $c_1$-almost-complete; and
\item[(iii)] defining $H_1$ to be the spanning subgraph induced by the colour $\gamma_1$ and $H_2$ to be the subgraph induced by the colour $\gamma_2$,
\begin{itemize}
\item[(a)] $H_1[X_{1}]$ is $(1-c_2)$-complete and $H_2[X_{1}]$ is $c_2$-sparse,
\item[(b)] $H_2[X_1,X_2]$ is $(1-c_2)$-complete and $H_1[X_1,X_2]$ is $c_2$-sparse.
\end{itemize}
\end{itemize}
\end{definition}
\begin{definition}
\label{d:J}
For $x, c$ positive, $\gamma_1,\gamma_2$ colours, let ${\mathcal J}(x,c,\gamma_1,\gamma_2)$ be the class of edge-multicoloured graphs defined as follows: A given two-multicoloured graph $H=(V,E)$ belongs to~${\mathcal J}$ if its vertex set $V$ can be partitioned into $X_{1}\cup X_{2}$ such that
\begin{itemize}
\item[(i)] $|X_{1}|,|X_{2}|\geq x$;
\item[(ii)] $H$ is $c$-almost-complete; and
\item[(iii)] (a) all edges present in $H[X_1], H[X_2]$ are coloured exclusively with colour $\gamma_1$,
\item[{~}] (b) all edges present in $H[X_1,X_2]$ are coloured exclusively with colour $\gamma_2$.
\end{itemize}
\end{definition}
\begin{figure}
\caption{$H\in{\mathcal H}$.}
\label{fig:lb1}
\end{figure}
\begin{definition}
\label{d:L}
For $x, c$ positive, $\gamma_1, \gamma_2, \gamma_3$ colours, let ${\mathcal L}(x, c, \gamma_1, \gamma_2,\gamma_3)$ be the class of edge-multicoloured graphs defined as follows: A given three-multicoloured graph $H=(V,E)$ belongs to~${\mathcal L}$, if its vertex set can be partitioned into $X_{1}\cup X_{2}\cup Y_{1}\cup Y_{2}$ such that
\begin{itemize}
\item[(i)] $|X_{1}|, |X_{2}|, |Y_{1}|, |Y_{2}|\geq x$;
\item[(ii)] $H$ is $c$-almost-complete; and
\item[(iii)] (a) all edges present in $H[X_1]$, $H[X_2]$, $H[Y_1]$ and $H[Y_2]$ are coloured exclusively with colour $\gamma_1$,
\item[{~}] (b) all edges present in $H[X_1,Y_1]$ and $H[X_2,Y_2]$ are coloured exclusively with colour $\gamma_2$,
\item[{~}] (c) all edges present in $H[X_1,X_2]$ and $H[Y_1,Y_2]$ are coloured exclusively with colour $\gamma_3$,
\item[{~}] (d) all edges present in $H[X_1,Y_2]$ and $H[X_2,Y_1]$ are coloured colours $\gamma_2$ or $\gamma_3$ only.
\end{itemize}
\end{definition}
\begin{figure}
\caption{$H\in {\mathcal L}$.}
\end{figure}
Having defined the coloured structures, we are in a position to state the main technical result, that is, the connected-matching stability result. The proof of this result follows in Section~\ref{s:stabp}.
\begin{thD}
\label{thD}
\label{th:stabnew}
For every $\alpha_{1},\alpha_{2},\alpha_{3}>0$, letting $$c=\max\{4\alpha_{1}, \alpha_{1}+2\alpha_{2},\alpha_{1}+2\alpha_{3}\},$$
there exists $\eta_{D}=\eta_{D}(\alpha_{1},\alpha_{2},\alpha_{3})$ and $k_{D}=k_{D}(\alpha_{1},\alpha_{2},\alpha_{3},\eta)$
such that, for every $k>k_{D}$ and
every~$\eta$ such that $0<\eta<\eta_{D}$, every
three-multicolouring of~$G$, a $(1-\eta^4)$-complete graph on
$$(c-\eta)k\leq K\leq(c-\half\eta)k$$ vertices results in the graph containing at least one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least $\alpha_{1} k$ vertices;
\item [(ii)] a blue odd connected-matching on at least $\alpha_{2} k$ vertices;
\item [(iii)] a green odd connected-matching on at least $\alpha_{3} k$ vertices;
\item [(iv)] subsets of vertices $W$, $X$ and $Y$ such that $X\cup Y\subseteq W$, $X\cap Y=\emptyset$, $|W|\geq(c-\eta^{1/2})k$, every $\gamma$-component of $G[W]$ is odd, $G[X]$ contains a two-coloured spanning subgraph $H$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$, where~{\phantom{nnn}}
\begin{align*}
{\mathcal H}_1&={\mathcal H}\left(({\alpha_{1}}-2\eta^{1/64})k,(\half\alpha_{*}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{red},\gamma\right),\text{ }
\\ {\mathcal H}_2&={\mathcal H}\left((\alpha_{*}-2\eta^{1/64})k,(\half{\alpha_{1}}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\gamma,\text{red}\right),
\end{align*}
for $(\alpha_{*},\gamma)\in\{(\alpha_{2},\text{blue}),(\alpha_{3},\text{green})\}$;
\item[(v)]
disjoint subsets of vertices $X$ and $Y$ such that $G[X]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$, where
\begin{align*}
{\mathcal H}_{2}^*&={\mathcal H}\left((\beta-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\gamma,\text{red}\right),\text{ }
\\ {\mathcal J}_b&={\mathcal J}\left((\alpha_{1}-18\eta^{1/2})k, 4\eta^4 k, \text{red}, \gamma\right),
\end{align*}
for $\beta=\max\{\alpha_{2},\alpha_{3}\}$ and $\gamma\in\{\text{blue}, \text{green}\}$;
\item[(vi)] a subgraph $H$ from ${\mathcal L}={\mathcal L}\left((\half\alpha+\tfrac{1}{4}\eta)k,4\eta^4k, \text{red}, \text{blue}, \text{green}\right).$
\end{itemize}
Furthermore,
\begin{itemize}
\item[(iv)] occurs only if $\alpha_{1}\leq\max\{\alpha_{2},\alpha_{3}\}\leq\alpha_{*}+24\eta^{1/4}$. Additionally, $H$ and $K$ belong to ${\mathcal H}_{1}$ if $\alpha_{*}\leq(1-\eta^{1/16})\alpha_{1}$ and belong to ${\mathcal H}_{2}$ if $\alpha_{1}\leq(1-\eta^{1/16})\alpha_{*}$;
\item[(v)] occurs only if ${\alpha_{1}}\leq\beta$. Additionally, $H$ and $K$ may belong to ${\mathcal H}_2^*$ only if ${\alpha_{1}}\leq(1-\eta^{1/16})\beta$ and may belong to ${\mathcal J}$ only if $\beta<(\tfrac{3}{2}+2\eta^{1/4}){\alpha_{1}}$; and
\item[(vi)] occurs only if $\alpha_{1}\geq\max\{\alpha_{2},\alpha_{3}\}$.
\end{itemize}
\end{thD}
This result forms a partially strengthened analogue of the main technical result of the paper of Figaj and \L uczak~\cite{FL2008}. In that paper, Figaj and \L uczak considered a similar graph but on slightly more than $\max\{4\alpha_{1}, \alpha_{1}+2\alpha_{2}, \alpha_{1}+2\alpha_{3}\}k$ vertices and proved the existence of a connected-matching, whereas we consider a graph on slightly fewer vertices and prove the existence of either a monochromatic connected-matching or a particular structure.
\section{{Tools}}
\label{s:pre1}
In this section, we summarise results that we shall use later in our proofs beginning with some results on Hamiltonicity including Dirac's Theorem, which gives us a minimum-degree condition for Hamiltonicity:
\begin{theorem}[Dirac's Theorem~\cite{Dirac52}]
\label{dirac}
If~$G$ is a graph on~$n\geq3$ vertices such that every vertex has degree at least $\half n$, then~$G$ is Hamiltonian, that is, $G$ contains a cycle of length exactly $n$.
\end{theorem}
Observe then that, by Dirac's Theorem, any $c$-almost-complete graph on $n$ vertices is Hamiltonian, provided that $c\leq \half n-1$. Then, since almost-completeness is a hereditary property, we may prove the following corollary:
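Indeed, if~$G$ is $c$-almost-complete on $n$ vertices with $c\leq \half n-1$, then
$$\delta(G)\geq (n-1)-c\geq (n-1)-\left(\half n-1\right)=\half n,$$
so Dirac's Theorem applies.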
\begin{corollary}
\label{dirac1a}
If $G$ is a $c$-almost-complete graph on $n$ vertices, then, for any integer~$m$ such that $2c+2\leq m\leq n$, $G$ contains a cycle of length $m$.
\end{corollary}
Dirac's Theorem may be used to assert the existence of Hamiltonian paths in a given graph as follows:
\begin{corollary}
\label{dirac2}
If $G=(V,E)$ is a simple graph on~$n\geq4$ vertices such that every vertex has degree at least $\half n+1$, then any two vertices of~$G$ are joined by a Hamiltonian path.
\end{corollary}
For balanced bipartite graphs, we make use of the following result of Moon and Moser:
\begin{theorem}[\cite{moonmoser}]
\label{moonmoser}
If~$G=G[X,Y]$ is a simple bipartite graph on~$n$ vertices such that $|X|=|Y|=\half n$ and $d(x)+d(y)\geq \half n+1$ for every $xy\notin E(G)$, then~$G$ is Hamiltonian.
\end{theorem}
Observe that, by the above, any $c$-almost-complete balanced bipartite graph on $n$ vertices is Hamiltonian, provided that $c\leq \tfrac{1}{4}n-\half$. Then, since almost-completeness is a hereditary property, we may prove the following corollary:
\begin{corollary}
\label{moonmoser2}
If $G=G[X,Y]$ is a $c$-almost-complete bipartite graph, then, for any even integer $m$ such that $4c+2\leq m\leq 2\min\{|X|,|Y|\}$, $G$ contains a cycle on $m$ vertices.
\end{corollary}
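To make the count explicit: any set of $\half m$ vertices from each of $X$ and $Y$ induces a balanced bipartite graph which is still $c$-almost-complete, so any pair of non-adjacent vertices $x$, $y$ on opposite sides of the bipartition satisfies
$$d(x)+d(y)\;\geq\;\left(\half m - c\right)+\left(\half m - c\right)\;=\;m-2c\;\geq\;\half m +1$$
whenever $m\geq 4c+2$, and Theorem~\ref{moonmoser} then provides a Hamiltonian cycle, that is, a cycle on $m$ vertices.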
For bipartite graphs which are not balanced, we make use of the Lemma below:
\begin{lemma}
\label{bp-dir}
If $G=G[X_1,X_2]$ is a simple bipartite graph on $n\geq 4$ vertices such that $|X_1|>|X_2|+1$ and every vertex in~$X_2$ has degree at least $\half n +1$, then any two vertices $x_1,x_2$ in $X_1$ such that $d(x_2)\geq 2$ are joined by a path which visits every vertex of~$X_2$.
\end{lemma}
\begin{proof}
Observe that $\half n+1=\half|X_1|+\half|X_2|+1=|X_1|-(\half|X_1|-\half|X_2|-1)$ so any pair of vertices in~$X_2$ have at least $|X_1|-(|X_1|-|X_2|-2)$ common neighbours and, thus, at least $|X_1|-(|X_1|-|X_2|)\geq |X_2| $ common neighbours distinct from~$x_1,x_2$.
Then, ordering the vertices of~$X_2$ such that the first vertex is a neighbour of~$x_1$ and the last is a neighbour of~$x_2$, greedily construct the required path from~$x_1$ to~$x_2$.
\end{proof}
\begin{corollary}
\label{bp-dir2}
If $G=G[X_1,X_2]$ is a simple bipartite graph on $n\geq 5$ vertices such that
$|X_1|>|X_2|$
and every vertex in~$X_2$ has degree at least $\half(n +1)$, then any two vertices $x_1\in X_1$ and $x_2\in X_2$ such that $d(x_1),d(x_2)\geq 2$ are joined by a path which visits every vertex of~$X_2$.
\end{corollary}
For graphs with
a few vertices of small degree, we make use of the following result of Chv\'atal:
\begin{theorem}[\cite{Chv72}]
\label{chv}
If $G$ is a simple graph on $n\geq3$ vertices with degree sequence $d_1\leq d_2 \leq \dots \leq d_n$ such that $$d_k\leq k \leq \frac{n}{2} \implies d_{n-k} \geq n-k,$$ then~$G$ is Hamiltonian.
\end{theorem}
We also make extensive use of the theorem of Erd\H{o}s and Gallai:
\begin{theorem}[\cite{ErdGall59}]
\label{th:eg}
Any graph on~$K$ vertices with at least~$\frac{1}{2}(m-1)(K-1)+1$ edges, where $3\leq m \leq K$, contains a cycle of length at least~$m$.
\end{theorem}
Observing that a cycle on $m$ vertices contains a connected-matching on at least~$m-1$ vertices, the following is an immediate consequence of the above.
\begin{corollary}
\label{l:eg}
For any graph~$G$ on~$K$ vertices and any~$m$ such that $3 \leq m \leq K$, if the average degree $d(G)$ is at least $m$, then~$G$ contains a connected-matching on at least~$m$ vertices.
\end{corollary}
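To make the deduction explicit: if the average degree satisfies $d(G)\geq m$, then
$$e(G)\;=\;\half d(G)K\;\geq\;\half mK\;\geq\;\half m(K-1)+1,$$
so, applying Theorem~\ref{th:eg} with $m+1$ in place of $m$ (note that $m+1\leq K$, since $m\leq d(G)\leq K-1$), $G$ contains a cycle of length at least $m+1$ and, hence, a connected-matching on at least $m$ vertices.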
The following decomposition lemma~of Figaj and \L uczak~\cite{FL2008} also follows from the theorem of Erd\H{o}s and Gallai and is crucial in establishing the structure of a graph not containing large connected-matchings of the appropriate parities:
\begin{lemma}[{\cite[Lemma~9]{FL2008}}]
\label{l:decomp}
For any graph~$G$ on~$K$ vertices and any~$m$ such that $3\leq m \leq K$, if no odd component of~$G$ contains a matching on at least~$m$ vertices, then there exists a partition $V=V'\cup V''$ such that
\begin{itemize}
\item [(i)] $G[V']$ is bipartite;
\item [(ii)] every component of $G''=G[V'']$ is odd;
\item [(iii)] $G[V'']$ has at most $\half m |V(G'')|$ edges; and
\item [(iv)] there are no edges in $G[V',V'']$.
\end{itemize}
\end{lemma}
The following pair of lemmas allow us to find large connected-matchings in almost-complete bipartite graphs:
\begin{lemma}[{\cite[Lemma~10]{FL2008}}]
\label{l:ten}
Let $G=G[V_1,V_2]$ be a bipartite graph with bipartition $(V_{1},V_{2})$, where $|V_{1}|\geq|V_{2}|$, which has at least $(1-\epsilon)|V_{1}||V_{2}|$ edges for some $\epsilon$ such that $0<\epsilon<0.01$. Then,~$G$ contains a connected-matching on at least $2(1-3\epsilon)|V_{2}|$ vertices.
\end{lemma}
Notice that, if~$G$ is a $(1-\epsilon)$-complete bipartite graph with bipartition $(V_1,V_2)$, then we may immediately apply the above to find a large connected-matching in~$G$.
\begin{lemma}
\label{l:eleven}
Let $G=G[V_1,V_2]$ be a bipartite graph with bipartition $(V_1,V_2)$. If $\ell$ is a positive integer such that $|V_1|\geq|V_2|\geq \ell$ and~$G$ is $a$-almost-complete for some $a$ such that $0<a/\ell<0.5$, then~$G$ contains a connected-matching on at least $2|V_2|-2a$ vertices.
\end{lemma}
\begin{proof}
Observe that~$G$ is $(1-a/\ell)$-complete. Therefore, since $a/\ell<0.5$,~$G$ is connected. Thus, it suffices to find a matching of the required size. Suppose that we have found a matching with vertex set~$M$ such that $|M|=2k$ for some $k<|V_2|-a$ and consider a vertex $v_2\in V_2\backslash M$. Since $G$ is $a$-almost-complete, $v_2$ has at least $|V_1|-a$ neighbours in~$V_1$ and thus at least one neighbour $v_1\in V_1\backslash M$. Then, the edge $v_1v_2$ can be added to the matching and thus, by induction, we may obtain a matching on $2|V_2|-2a$ vertices.
\end{proof}
We recall two further results of Figaj and \L uczak: The first is a technical result from~\cite{FL2008}.
The second is part of the main result from~\cite{FL2008}.
Note that these results can be immediately extended to multicoloured graphs:
\begin{lemma}[{\cite[Lemma~13]{FL2008}}]
\label{l:thirteen}
For every $\alpha, \beta>0$, $v\geq0$ and $\eta$ such that
$0<\eta<0.01 \min\{ \alpha,\beta\}$,
there exists $k_{\ref{l:thirteen}}=k_{\ref{l:thirteen}}(\alpha,\beta,v,\eta)$ such that, for every $k>k_{\ref{l:thirteen}}$, the following holds:
Let $G=(V,E)$ be a graph obtained from a $(1-\eta^4)$-complete graph on at least $$ \half\Big(\max\Big\{\alpha+\beta+\max \left\{ 2v, \alpha, \beta \right\},3\alpha+\max\left\{2v,\alpha\right\} \Big\} + 10\eta^{1/2} \Big)k $$
vertices by removing all edges contained within a subset $W \subseteq V$ of size at most~$vk$. Then, every two-multicolouring of the edges of~$G$ results in {either} a red connected-matching on at least $(\alpha+\eta)k$ vertices {or} a blue odd connected-matching on at least $(\beta+\eta)k$ vertices.
\end{lemma}
\begin{lemma}[{\cite[Theorem~1ii]{FL2008}}]
\label{l:fourteen}
For every ${\alpha_{1}}, {\alpha_{1}}I, {\alpha_{1}}II>0$, there exists $\eta_{\ref{l:fourteen}}=\eta_{\ref{l:fourteen}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)$ such that, for every $0<\eta<\eta_{\ref{l:fourteen}}$,
there exists $k_{\ref{l:fourteen}}=k_{\ref{l:fourteen}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,\eta)$ such that the following holds:
For every $k>k_{\ref{l:fourteen}}$ and every $(1-\eta^4)$-complete graph $G$ on $$ K\geq\left(\max\left\{
{\alpha_{1}}+2{\alpha_{1}}I,
2{\alpha_{1}}+{\alpha_{1}}I,
\half{\alpha_{1}}+\half{\alpha_{1}}I+{\alpha_{1}}II
\right\}
+10\eta^{1/4}\right)k$$
vertices, every three-colouring of the edges of $G$ results in one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least ${\alpha_{1}} k$ vertices;
\item [(ii)] a blue connected-matching on at least ${\alpha_{1}}I k$ vertices;
\item [(iii)] a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices.
\end{itemize}
\end{lemma}
We also make use of the following pair of results dealing with different combinations of parities. The first is an immediate consequence of the main technical result of \cite{DF1} and \cite{DF2}, which can also be found (with additional detail) in \cite{FERT}. The second is an immediate consequence of the key technical result from \cite{KoSiSk}:
\begin{theorem}[{\cite[Theorem~B]{FERT}}]
\label{thBc}
For every $0<{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\leq 1$ such that ${\alpha_{1}}II\leq \tfrac{3}{2}\max\{{\alpha_{1}},{\alpha_{1}}I\}+\half\min\{{\alpha_{1}},{\alpha_{1}}I\}-11\eta^{1/2},$ letting $c=\max\{2{\alpha_{1}}+{\alpha_{1}}I, {\alpha_{1}}+2{\alpha_{1}}I, \half{\alpha_{1}}+\half{\alpha_{1}}I+{\alpha_{1}}II\},$
there exists $\eta_{\ref{thBc}}=\eta_{\ref{thBc}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)$ and $k_{\ref{thBc}}=k_{\ref{thBc}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,\eta)$
such that, for every $k>k_{\ref{thBc}}$ and
every~$\eta$ such that $0<\eta<\eta_{\ref{thBc}}$, every
three-multicolouring of~$G$, a $(1-\eta^4)$-complete graph on
$(c-\eta)k\leq K\leq(c-\half\eta)k$ vertices, results in the graph containing at least one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least ${\alpha_{1}}
k$ vertices;
\item [(ii)] a blue connected-matching on at least ${\alpha_{1}}I k$ vertices;
\item [(iii)] a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices;
\item [(iv)] disjoint subsets of vertices $X$ and $Y$ such that $G[X]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_1^B\cup{\mathcal H}_2^B$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_1^B\cup{\mathcal H}_2^B$ where
\begin{align*}
{\mathcal H}_1^B&={\mathcal H}\left(({\alpha_{1}}-2\eta^{1/32})k,(\half{\alpha_{1}}I-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\text{red},\text{blue}\right),\text{ }
\\ {\mathcal H}_2^B&={\mathcal H}\left(({\alpha_{1}}I-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\text{blue},\text{red}\right).
\end{align*}
\end{itemize}
Furthermore, in (iv), $H,K\in {\mathcal H}_1^B$ if ${\alpha_{1}}I\leq{\alpha_{1}}-\eta^{1/16}$ and $H,K\in {\mathcal H}_2^B$ if ${\alpha_{1}}\leq{\alpha_{1}}I-\eta^{1/16}$.
\end{theorem}
\begin{lemma}[{\cite[Theorem 6]{KoSiSk}}]
\label{kss07a-7}
There exists $\eta_{\ref{kss07a-7}}>0$ such that, for every $0<\alpha\leq1$ and $\eta$ such that $0<\eta<\eta_{\ref{kss07a-7}}$, there exists $k_{\ref{kss07a-7}}=k_{\ref{kss07a-7}}(\alpha,\eta)$ such that the following holds: For every $k>k_{\ref{kss07a-7}}$ and every $(1-\eta^4)$-complete graph $G$ on $(4\alpha-\eta)k\leq K \leq (4\alpha+\eta)k$ vertices, every three-colouring of the edges of $G$ results in the graph containing at least one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least $\alpha k$ vertices;
\item [(ii)] a blue odd connected-matching on at least $\alpha k$ vertices;
\item [(iii)] a green odd connected-matching on at least $\alpha k$ vertices;
\item [(iv)] a subgraph from
${\mathcal L}\left((\half\alpha+\tfrac{1}{4}\eta)k,4\eta^4k, \text{red}, \text{blue}, \text{green}\right).$
\end{itemize}
\end{lemma}
We also make use of the following pair of stability results for cycles from~\cite{KoSiSk2} which, given a sufficiently large two-coloured almost-complete graph, allow us to find either a large connected-matching or a particular structure.
\begin{lemma}[\cite{KoSiSk2}]
\label{l:SkB}
For every~$\eta$ such that $0<\eta<10^{-20}$, there exists $k_{\ref{l:SkB}}=k_{\ref{l:SkB}}(\eta)$ such that, for every $k>k_{\ref{l:SkB}}$ and every $\alpha,\beta>0$ such that $\alpha \geq \beta \geq 100\eta^{1/2}\alpha$, if $K>(\alpha + \half\beta-\eta^{1/2}\beta)k$ and $G=(V,E)$ is a red-blue-multicoloured $\beta \eta^2 k$-almost-complete graph on $K$ vertices, then at least one of the following occurs:
\begin{itemize}
\item[(i)]~$G$ contains a red connected-matching on at least $(1+\eta^{1/2})\alpha k$ vertices;
\item[(ii)]~$G$ contains a blue connected-matching on at least $(1+\eta^{1/2})\beta k$ vertices;
\item[(iii)] the vertices of~$G$ can be partitioned into $W$, $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $|V'| < (1+\eta^{1/2})\alpha k$,
$|V''|\leq \half(1+\eta^{1/2})\beta k$,
$|W|\leq \eta^{1/16} k$,
\item[(b)] $G_1[V']$ is $(1-\eta^{1/16})$-complete and $G_2[V']$ is $\eta^{1/16}$-sparse,
\item[(c)] $G_2[V',V'']$ is $(1-\eta^{1/16})$-complete and $G_1[V',V'']$ is $\eta^{1/16}$-sparse;
\end{itemize}
\item[(iv)] we have $\beta > (1-\eta^{1/8})\alpha$ and the vertices of~$G$ can be partitioned into $W$, $V'$ and $V''$ such that
\begin{itemize}
\item[(a)] $|V'| < (1+\eta^{1/2})\beta k$,
$|V''|\leq \half(1+\eta^{1/2})\alpha k$,
$|W|\leq \eta^{1/16} k$,
\item[(b)] $G_2[V']$ is $(1-\eta^{1/16})$-complete and $G_1[V']$ is $\eta^{1/16}$-sparse,
\item[(c)] $G_1[V',V'']$ is $(1-\eta^{1/16})$-complete and $G_2[V',V'']$ is $\eta^{1/16}$-sparse.
\end{itemize}
\end{itemize}
Furthermore, if $\alpha+ \half\beta \geq 2(1+\eta^{1/2})\beta$, then we can replace (i) with
\begin{itemize}
\item[(i')]~$G$ contains a red odd connected-matching on $(1+\eta^{1/2})\alpha k$ vertices.
\end{itemize}
\end{lemma}
\begin{lemma}[\cite{KoSiSk2}]
\label{l:SkA}
For every $0<\alpha\leq 1$ and every~$\eta$ such that $0<\eta<0.001\alpha$, there exists $k_{\ref{l:SkA}}=k_{\ref{l:SkA}}(\eta)$ such that, for every $k>k_{\ref{l:SkA}}$, if $K>(\tfrac{3}{2}\alpha + 80\eta)k$ and $G=(V,E)$ is a red-blue-multicoloured $\eta^2 k$-almost-complete graph on $K$ vertices, then at least one of the following occurs:
\begin{itemize}
\item[(i)]~$G$ contains a red connected-matching on at least $(1+\eta^{1/2})\alpha k$ vertices;
\item[(ii)]~$G$ contains a blue odd connected-matching on at least $(1+\eta^{1/2})\alpha k$ vertices;
\item[(iii$\ast$)] the vertices of~$G$ can be partitioned into $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $|V'|,|V''| < (\alpha+\eta)k$,
\item[(b)] all edges present in $G[V']$ and $G[V'']$ are coloured exclusively red and all edges present in $G[V',V'']$ are coloured exclusively blue. \end{itemize}
\end{itemize}
\end{lemma}
Combining the two results above, we obtain:
\begin{lemma}[\cite{KoSiSk2}]
\label{l:SkAB}
For every $0<\alpha,\beta\leq1$ such that $\beta\geq\alpha\geq100\eta^{1/2}\beta$, for every $0<\eta<\min\{10^{-20},0.001\alpha,(\alpha/2)^8\}$, there exists $k_{\ref{l:SkAB}}=k_{\ref{l:SkAB}}(\eta)$ such that, for every $k>k_{\ref{l:SkAB}}$, if $K>(\max\{2\alpha,\half\alpha + \beta\}-\eta^{1/2}\alpha)k$ and $G=(V,E)$ is a red-blue-multicoloured~$\eta^{3}k$-almost-complete graph on $K$ vertices, then at least one of the following occurs:
\begin{itemize}
\item[(i)]~$G$ contains a red connected-matching on at least $(1+\eta^{1/2})\alpha k$ vertices;
\item[(ii)]~$G$ contains a blue odd connected-matching on at least $(1+\eta^{1/2})\beta k$ vertices;
\item[(iii)] we have $\alpha\leq(1-\eta^{1/8})\beta$ and the vertices of~$G$ can be partitioned into $W$, $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $|V'| < (1+\eta^{1/2})\beta k$,
$|V''|\leq \half(1+\eta^{1/2})\alpha k$,
$|W|\leq \eta^{1/16} k$,
\item[(b)] $G_2[V']$ is $(1-\eta^{1/16})$-complete and $G_1[V']$ is $\eta^{1/16}$-sparse,
\item[(c)] $G_1[V',V'']$ is $(1-\eta^{1/16})$-complete and $G_2[V',V'']$ is $\eta^{1/16}$-sparse;
\end{itemize}
\item[(iii$\ast$)] we have $\beta < (\tfrac{3}{2}+2\eta^{1/2})\alpha$ and the vertices of~$G$ can be partitioned into $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $|V'|,|V''| < (\alpha+\eta)k$,
\item[(b)] all edges present in $G[V']$ and $G[V'']$ are coloured exclusively red and all edges present in $G[V',V'']$ are coloured exclusively blue. \end{itemize}
\end{itemize}
\end{lemma}
\begin{proof}
Suppose that $\beta\geq\alpha>(1-\eta^{1/8})\beta$.
Then, since $\eta\leq\min\{(\alpha/2)^8,0.001\alpha\}$, we have $(\max\{2\alpha,\half\alpha+\beta\}-\eta^{1/2}\alpha)k\geq(\tfrac{3}{2}\beta+80\eta)k$. Thus, since $\eta<0.001\alpha\leq0.001\beta$, we may apply Lemma~\ref{l:SkA} (with $\beta$ taking the role of $\alpha$) to find that $G$ either contains a red connected-matching on at least $(1+\eta^{1/2})\beta k\geq(1+\eta^{1/2})\alpha k$ vertices or a blue odd connected-matching on at least $(1+\eta^{1/2})\beta k$ vertices or admits a partition satisfying~(iii$\ast$).
Thus, we may assume that $\alpha\leq(1-\eta^{1/8})\beta$. Applying Lemma~\ref{l:SkB} (with the roles of $\alpha$ and $\beta$ and the colours exchanged) we find that $G$ either contains a red connected-matching on at least $(1+\eta^{1/2})\alpha k$ vertices, a blue connected-matching on at least $(1+\eta^{1/2})\beta k$ vertices or admits a partition $W\cup V'\cup V''$ satisfying (iii). Except in the second case, the proof is complete.
Thus, we may assume that $G$ contains a blue connected-matching $M$ on at least $(1+\eta^{1/2})\beta k$ vertices which is not odd and that $\beta+\half\alpha<2(1+\eta^{1/2})\alpha$. Partitioning the vertices spanned by $M$ into $V' \cup V''$ such that the edges of $M$ belong to $G[V',V'']$, we then have $|V'|,|V''|\geq\half(1+\eta^{1/2})\beta k$ and, since~$M$ is not odd, know that all edges present in $G[V']$ and $G[V'']$ must be red. Then, since $G$ is $\eta^3 k$-almost-complete, by Corollary \ref{dirac1a}, $G[V']$ and $G[V'']$ each contain a red connected-matching on at least $\half(1+\eta^{1/2})\beta k-1\geq (\half\alpha+\tfrac{1}{4}\eta^{1/2})k$ vertices. Thus, the presence of a red edge in $G[V',V'']$ would imply the existence of a red connected-matching on at least~$\alpha k$ vertices. Thus, we may assume that all edges present in $G[V',V'']$ are coloured blue.
Thus, the partition of $V(M)$ obtained resembles that described in (iii$\ast$). To complete the proof in this case, we attempt to extend this into a partition of $V(G)$: Recall that $G[V',V'']$ contains a blue connected-matching $M$ on at least $\beta k$ vertices but that this connected-matching is not odd. Then, consider a vertex $v\in V\backslash (V'\cup V'')$. Such a vertex cannot have blue edges to both $V'$ and $V''$ since this would allow $M$ to be extended into an odd connected-matching on at least $\beta k$ vertices. Thus, either every edge present in $G[v,V']$ is red or every edge present in $G[v,V'']$ is red. In the former case, every edge present in $G[v,V'']$ must be blue (to avoid having a red connected-matching on at least $\alpha k$ vertices). In the latter case, every edge present in $G[v,V']$ must be blue.
Thus, $v$ can be added to either $V'$ or $V''$ while maintaining the property that all edges present in $G[V']$ and $G[V'']$ are coloured red and all edges in $G[V',V'']$ are coloured blue. Therefore, assigning the vertices of $V\backslash (V'\cup V'')$ in turn to either $V'$ or $V''$, we obtain a partition satisfying (iii$\ast$), completing the proof.
\end{proof}
It is a well-known fact that either a graph is connected or its complement is. We now prove
three
simple extensions of this fact for two-coloured almost-complete graphs, all of which can be immediately extended to two-multicoloured almost-complete graphs.
\begin{lemma}\label{l:dgf0}
For every $\eta$ such that $0<\eta<1/3$ and every $K\geq 1/\eta$, if $G=(V,E)$ is a two-coloured $(1-\eta)$-complete graph on~$K$ vertices and~$F$ is its largest monochromatic component, then $|F|\geq (1-3\eta)K$.
\end{lemma}
\begin{proof}
If the largest monochromatic (say, red) component in~$G$ has at least $(1-3\eta)K$ vertices, then we are done. Otherwise, we may partition the vertices of~$G$ into sets~$A$ and~$B$ such that $|A|,|B|\geq3\eta K\geq 2$ and there are no red edges between~$A$ and~$B$. Since~$G$ is $(1-\eta)$-complete, any two vertices in~$A$ have a common neighbour in~$B$, and any two vertices in~$B$ have a common neighbour in~$A$. Thus, $A\cup B$ forms a single blue component.
\end{proof}
The following lemmas form analogues of the above, the first concerns the structure of two-coloured almost-complete graphs with one hole and the second concerns the structure of two-coloured almost-complete graphs with two holes, that is, bipartite graphs.
\begin{lemma}
\label{l:dgf1}
For every $\eta$ such that $0<\eta<1/20$ and every $K\geq 1/\eta$, the following holds. Let~$G$ be a $(1-\eta)$-complete graph on~$K$ vertices with vertex set~$V$, let~$W$ be any subset of~$V$ such that $|W|,|V\backslash W|\geq 4\eta^{1/2}K$ and let $G_{W}=(V,E)$ be a two-coloured graph obtained from~$G$ by removing all edges contained entirely within~$W$. Let~$F$ be the largest monochromatic component of $G_W$ and define the following two sets:
\begin{align*}
W_{r}&=\{ w \in W : w \text{ has red edges to all but at most } 3\eta^{1/2} K \text{ vertices in } V \backslash W\};
\\
W_{b}&=\{ w \in W : w \text{ has blue edges to all but at most } 3\eta^{1/2} K \text{ vertices in } V \backslash W\}.
\end{align*}
Then, at least one of the following holds:
\begin{itemize}
\item [(i)] $|F|\geq (1-2\eta^{1/2})K$;
\item [(ii)] $|W_{r}|,|W_{b}|>0$.
\end{itemize}
\end{lemma}
\begin{proof}
Consider $G[V\backslash W]$.
Since~$G$ is $(1-\eta)$-complete, $|V\backslash W|\geq 4\eta^{1/2}K$ and $\eta<1/20$, we see that every vertex in $G[V\backslash W]$ has degree at least $|V\backslash W|-\eta(K-1)\geq(1-\tfrac{1}{4}\eta^{1/2})(|V\backslash W|-1)$, that is, $G[V\backslash W]$ is $(1-\tfrac{1}{4}\eta^{1/2})$-complete. Thus, provided $4\eta^{1/2}K \geq 1/(\tfrac{1}{4}\eta^{1/2})$, that is, provided $K\geq 1/\eta$, we can apply Lemma~\ref{l:dgf0}, which tells us that the largest monochromatic component in $G[V\backslash W]$ contains at least $|V\backslash W|-\eta^{1/2} K$ vertices. We assume, without loss of generality, that this large component is red and call it~$R$.
Now,~$G$ is $(1-\eta)$-complete so either every vertex in~$W$ has a red edge to~$R$ (giving a monochromatic component of the required size) or there is a vertex $w\in W$ with at least $|R|-2\eta K$ {blue neighbours} in~$R$, that is, a vertex $w\in W_{b}$. Denote by~$B$ the set of $u\in R$ such that $uw$ is blue. Then, $|B|\geq |V\backslash W|-2\eta^{1/2} K$ and either every point in~$W$ has a blue edge to~$B$, giving a blue component of size at least $|B\cup W|>(1-2\eta^{1/2})K$, or there is a vertex $w_1\in W_{r}$.
\end{proof}
\section{Proof of the stability result}
\label{s:stabp}
In order to prove Theorem~D, we need to show that any three-multicoloured graph on slightly fewer than $$\left(\max\{4{\alpha_{1}},{\alpha_{1}}+2{\alpha_{1}}I,{\alpha_{1}}+2{\alpha_{1}}II\}\right)k$$ vertices with sufficiently large minimum degree will contain a red connected-matching on at least~$\alpha_{1}k$ vertices, a blue odd connected-matching on at least~$\alpha_{2}k$ vertices or a green odd connected-matching on at least~$\alpha_{3}k$ vertices, or will have a particular structure.
Thus, given ${\alpha_{1}}, {\alpha_{1}}I, {\alpha_{1}}II$, we set
$$c=\max\{4{\alpha_{1}},{\alpha_{1}}+2{\alpha_{1}}I,{\alpha_{1}}+2{\alpha_{1}}II\}={\alpha_{1}}+\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\},$$ let
$$\eta_{D}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)=\min \left\{10^{-40},\left(\frac{{\alpha_{1}}}{50}\right)^{16}, \left(\frac{{\alpha_{1}}I}{50}\right)^{16}, \left(\frac{{\alpha_{1}}II}{50}\right)^{16},
\left(\frac{\min\{{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\}}{100\max\{{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\}}\right)^4
\right\},$$
choose $\eta$ such that
$$\eta<\min\left\{\eta_D({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II),\half\eta_{\ref{l:fourteen}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II),\left(\eta_{\ref{thBc}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)\right)^2, \eta_{\ref{kss07a-7}}\right\}$$
and consider $G=(V,E)$, a $(1-\eta^4)$-complete graph on~$K\geq 100/\eta$ vertices, where
$$(c - \eta)k \leq K \leq (c - \half\eta)k$$ for some integer $k>k_{D}$, where $k_{D}=k_{D}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,\eta)$ will be defined implicitly during the course of this section, in that, on a finite number of occasions, we will need to bound~$k$ below in order to apply results from Section~\ref{s:pre1}.
Note that, by scaling, we may assume that ${\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\leq1$. Notice, then, that $G$ is~$4\eta^4k$-almost-complete and, thus, for any $X\subset V$, $G[X]$ is also $4\eta^4k$-almost-complete.
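To be explicit about this last assertion: since $G$ is $(1-\eta^4)$-complete, each vertex of $G$ has at most $\eta^4(K-1)$ non-neighbours, and
$$\eta^4(K-1)\;\leq\;\eta^4\left(c-\half\eta\right)k\;\leq\;4\eta^4 k,$$
since $c={\alpha_{1}}+\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\}\leq 4$ once ${\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\leq1$; the same bound clearly passes to every induced subgraph $G[X]$.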
In this section, we seek to prove that~$G$ contains at least one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least ${\alpha_{1}} k$ vertices;
\item [(ii)] a blue odd connected-matching on at least $\alpha_{2}k$ vertices;
\item [(iii)] a green odd connected-matching on at least $\alpha_{3}k$ vertices;
\item [(iv)] subsets of vertices $W$, $X$ and $Y$ such that $X\cup Y\subseteq W$, $X\cap Y=\emptyset$, $|W|\geq(c-\eta^{1/2})k$, every $\gamma$-component of $G[W]$ is odd, $G[X]$ contains a two-coloured spanning subgraph $H$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$, where~{\phantom{nnn}}
\begin{align*}
{\mathcal H}_1&={\mathcal H}\left(({\alpha_{1}}-2\eta^{1/64})k,(\half\alpha_{*}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{red},\gamma\right),\text{ }
\\ {\mathcal H}_2&={\mathcal H}\left((\alpha_{*}-2\eta^{1/64})k,(\half{\alpha_{1}}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\gamma,\text{red}\right),
\end{align*}
for $(\alpha_{*},\gamma)\in\{({\alpha_{1}}I,\text{blue}),({\alpha_{1}}II,\text{green})\}$;
\item[(v)]
disjoint subsets of vertices $X$ and $Y$ such that $G[X]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$, where
\begin{align*}
{\mathcal H}_{2}^*&={\mathcal H}\left((\beta-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\gamma,\text{red}\right),\text{ }
\\ {\mathcal J}_b&={\mathcal J}\left(({\alpha_{1}}-18\eta^{1/2}), 4\eta^4 k, \text{red}, \gamma\right),
\end{align*}
for $\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$ and $\gamma\in\{\text{blue}, \text{green}\}$;
\item[(vi)] a subgraph $H$ from ${\mathcal L}={\mathcal L}\left((\half\alpha+\tfrac{1}{4}\eta)k,4\eta^4k, \text{red}, \text{blue}, \text{green}\right).$
\end{itemize}
Observe that, for ${\alpha_{1}}\geq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$, since $\eta<\eta_{\ref{kss07a-7}}$, the result follows immediately from Lemma~\ref{kss07a-7}. Thus, in what follows, we may assume that $\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\geq {\alpha_{1}}$.
We consider the average degrees of the coloured spanning subgraphs. Notice that, if $d(G_1)\geq{\alpha_{1}} k$, then, by Corollary~\ref{l:eg}, $G$ contains a red connected-matching on ${\alpha_{1}} k$ vertices. Thus, since the number of missing edges at each vertex can be bounded above, we see that either $d(G_2)>\half(c-{\alpha_{1}}-2\eta)k$ or $d(G_3)>\half(c-{\alpha_{1}}-2\eta)k$. Without loss of generality, we assume the former and, thus, have
\begin{equation}
\label{ubnew}
e(G_2)>\tfrac{1}{4}(c-{\alpha_{1}}-2\eta)(c-\eta)k^2.
\end{equation}
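Indeed, (\ref{ubnew}) is just the degree bound above rewritten in terms of edges: since $e(G_2)=\half d(G_2)K$ and $K\geq(c-\eta)k$, we have
$$e(G_2)\;>\;\half\cdot\half(c-{\alpha_{1}}-2\eta)k\cdot(c-\eta)k\;=\;\tfrac{1}{4}(c-{\alpha_{1}}-2\eta)(c-\eta)k^2.$$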
If $G$ contained a blue odd connected-matching on at least ${\alpha_{1}}I k$ vertices, the proof would be complete; thus, we may instead use Lemma~\ref{l:decomp} to decompose the blue graph and so partition the vertices of~$G$ into $W$, $X$ and $Y$ such that
\begin{itemize}
\item[(i)] $G[X]$ and $G[Y]$ contain only red and green edges;
\item[(ii)] every blue component of $G[W]$ is odd;
\item[(iii)] $G[W]$ has at most $\half {\alpha_{1}}I k|W|$ blue edges; and
\item[(iv)] there are no blue edges between $W$ and $X\cup Y$.
\end{itemize}
\begin{figure}
\caption{Decomposition of the blue graph.}
\end{figure}
Thus, writing $wk$ for $|W|$ and noticing that $e(G[X,Y])$ is maximised when~$X$ and~$Y$ are equal in size, we find that
\begin{equation}
\label{lbnew}
e(G_2)\leq\half{\alpha_{1}}I wk^2+\tfrac{1}{4}(c-w)^2 k^2.
\end{equation}
Comparing (\ref{ubnew}) and (\ref{lbnew}), we obtain a quadratic inequality in $w$, solving which, since $\eta\leq\eta_D$, results in two possibilities:
\begin{itemize}
\item[(F)] $w>c-\eta^{1/2}$;
\item[(G)] $w<{\alpha_{1}}+\eta^{1/2}$.
\end{itemize}
In {\bf Case F}, almost all of the vertices of $G$ belong to $W$. Since $G[W]$ is the union of the odd blue-components of $G$, any blue connected-matching found there is, by definition, odd. Thus, in $G[W]$, any result which provides a blue connected-matching of unspecified parity can be used to provide a blue odd connected-matching, so we consider Lemma~\ref{l:fourteen} and Theorem~\ref{thBc}, which relate to the even-even-odd case.
Since $G$ is $(1-\eta^4)$-complete and $|W|>(c-\eta^{1/2})k
\geq \tfrac{9}{10}K$, $G[W]$ is $(1-2\eta^4)$-complete.
Thus, since $\eta\leq\half\eta_{\ref{l:fourteen}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)$, provided $k\geq k_{\ref{l:fourteen}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,2^{1/4}\eta)$ and $$c-\eta^{1/2}\geq\max\left\{
{\alpha_{1}}+2{\alpha_{1}}I,
2{\alpha_{1}}+{\alpha_{1}}I,
\half{\alpha_{1}}+\half{\alpha_{1}}I+{\alpha_{1}}II
\right\}
+14\eta^{1/4},$$ by Lemma~\ref{l:fourteen},~$G$ contains either a red connected-matching on at least ${\alpha_{1}} k$ vertices, a blue connected-matching on at least ${\alpha_{1}}I k$ vertices (which by the nature of the decomposition is odd) or a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus, in the case that $$\max\{4{\alpha_{1}},{\alpha_{1}}+2{\alpha_{1}}I,{\alpha_{1}}+2{\alpha_{1}}II\}\geq\max\{2{\alpha_{1}}+{\alpha_{1}}I,{\alpha_{1}}+2{\alpha_{1}}I,\half{\alpha_{1}}+\half{\alpha_{1}}I+{\alpha_{1}}II\}+15\eta^{1/4},$$ the proof is complete.
Since $\eta\leq\eta_D$, this condition holds provided ${\alpha_{1}}II\geq{\alpha_{1}}I+24\eta^{1/4}$. Thus, we may instead assume that ${\alpha_{1}}II\leq{\alpha_{1}}I+24\eta^{1/4}$. Also, since $\eta\leq\eta_D$, we have
$${\alpha_{1}}II\leq
{\alpha_{1}}I+24\eta^{1/4} \leq \tfrac{3}{2}{\alpha_{1}}I \leq \tfrac{3}{2}\max\{{\alpha_{1}},{\alpha_{1}}I\}
\leq \tfrac{3}{2}\max\{{\alpha_{1}},{\alpha_{1}}I\}+\half\min\{{\alpha_{1}},{\alpha_{1}}I\}-12\eta^{1/4}$$
and, since $\eta\leq\left(\eta_{\ref{thBc}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II)\right)^2$, provided $k\geq k_{\ref{thBc}}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,\eta^{1/2})$, we may apply Theorem~\ref{thBc} to find that $G[W]$ contains either a red connected-matching on at least~${\alpha_{1}}
k$ vertices, a blue connected-matching on at least ${\alpha_{1}}I k$ vertices (which by the nature of the decomposition is odd), a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices or
two disjoint subsets of vertices $X$ and $Y$ such that $G[X]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_1\cup{\mathcal H}_2$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_1\cup{\mathcal H}_2$, where
\begin{align*}
{\mathcal H}_1=&{\mathcal H}\left(({\alpha_{1}}-2\eta^{1/64})k,(\half{\alpha_{1}}I-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{red},\text{blue}\right),\text{ }
\\ {\mathcal H}_2=&{\mathcal H}\left(({\alpha_{1}}I-2\eta^{1/64})k,(\half{\alpha_{1}}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{blue},\text{red}\right).
\end{align*}
Furthermore, $H,K\in {\mathcal H}_1$ if ${\alpha_{1}}I\leq{\alpha_{1}}-\eta^{1/32}$ and $H,K\in {\mathcal H}_2$ if ${\alpha_{1}}\leq{\alpha_{1}}I-\eta^{1/32}$, thus completing Case~F.
Moving on to {\bf Case G}, recall that we have a decomposition of the vertices of $G$ into $X\cup Y\cup W$ such that $G[X], G[Y], G[X,W]$ and $G[Y,W]$ contain only red and green edges and that we have $|W|=wk<(c-2{\alpha_{1}}I+\eta^{1/2})k$.
We assume that $|X|\geq|Y|$ and consider the subgraph $G_1[X\cup W] \cup G_3[X\cup W]$, that is, the subgraph of~$G$ on $X\cup W$ induced by the red and green edges. Recall that $G$ is $(1-\eta^4)$-complete and notice that $|X\cup W|\geq\half K$. Thus, $G[X\cup W]$ is $(1-2\eta^4)$-complete. Therefore, since $\eta\leq\eta_{D}$, provided that $k>k_{\ref{l:thirteen}}({\alpha_{1}},{\alpha_{1}}II,w,2^{1/4}\eta)$, by Lemma~\ref{l:thirteen}, if
\begin{equation*}
|X|+|W| \geq \half\Big(\max\Big\{{\alpha_{1}}+{\alpha_{1}}II+\max \left\{ 2w, {\alpha_{1}}, {\alpha_{1}}II \right\},3{\alpha_{1}}+\max\left\{2w,{\alpha_{1}}\right\} \Big\} + 11\eta^{1/2} \Big)k,
\end{equation*}
then $G[X]\cup G[X, W]$ has a red connected-matching on at least $({\alpha_{1}}+\eta)k$ vertices or a green odd connected-matching on at least $(\alpha_{3}+\eta)k$ vertices.
We may therefore assume that
\begin{equation}
\label{z8}
|X|+|W|< \half\Big(\max\Big\{{\alpha_{1}}+{\alpha_{1}}II+\max \left\{ 2w, {\alpha_{1}}, {\alpha_{1}}II \right\},3{\alpha_{1}}+\max\left\{2w,{\alpha_{1}}\right\} \Big\} + 11\eta^{1/2} \Big)k.
\end{equation}
Also, since $K=|X|+|Y|+|W|$ and $|X|\geq|Y|$, we have
\begin{equation}
\label{z9}
|X|+|W|\geq \frac{K+|W|}{2}\geq\frac{(c-\eta)k+wk}{2} =
\half\left({\alpha_{1}} +\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\}-\eta+w \right)k.
\end{equation}
We consider four subcases:
\begin{itemize}
\item[(G.i)] ${\alpha_{1}}\geq{\alpha_{1}}II,2w$; {\bf or} $\tfrac{3}{2}{\alpha_{1}}\geq{\alpha_{1}}II\geq{\alpha_{1}}\geq 2w$;
\item[(G.ii)] ${\alpha_{1}}II\geq\tfrac{3}{2}{\alpha_{1}}\geq{\alpha_{1}}\geq 2w$; {\bf or} ${\alpha_{1}}II\geq 2w\geq {\alpha_{1}}$ and ${\alpha_{1}}II\geq{\alpha_{1}} +w$;
\item[(G.iii)] $2w\geq{\alpha_{1}}II\geq 2{\alpha_{1}}$;
\item[(G.iv)] ${\alpha_{1}}+w\geq{\alpha_{1}}II\geq 2w\geq {\alpha_{1}}$; {\bf or} $2w\geq{\alpha_{1}},{\alpha_{1}}II$ and ${\alpha_{1}}II\leq 2{\alpha_{1}}$.
\end{itemize}
In {\bf Cases G.i} and {\bf G.ii}, it can easily be shown that, together, equations (\ref{z8}) and (\ref{z9}) result in a contradiction unless $w\leq \eta+11\eta^{1/2}$, in which case almost all the vertices of~$G$ belong to~$X\cup Y$. In that case, (\ref{z9}) gives $$|X|\geq\half(c-\eta)k-\half|W|\geq \left( \max\left\{2{\alpha_{1}},\half{\alpha_{1}}+{\alpha_{1}}I,\half{\alpha_{1}}+{\alpha_{1}}II\right\}-6\eta^{1/2}\right) k.$$
Recalling (\ref{z8}), in {\bf Case G.i}, we have
$|X|\leq|X|+|W|<(2{\alpha_{1}}+\tfrac{11}{2}\eta^{1/2})k$.
Then, since $Y=K-|X|-|W|$, we have
\begin{align*}
|X|\geq|Y|\geq(c-\eta)k - |X| - |W| &\geq (\max\{2{\alpha_{1}},2{\alpha_{1}}I-{\alpha_{1}},2{\alpha_{1}}II-{\alpha_{1}}\}-17\eta^{1/2})k\\&
\geq(\max\{2{\alpha_{1}},\half{\alpha_{1}}+{\alpha_{1}}I,\half{\alpha_{1}}+{\alpha_{1}}II\}-17\eta^{1/2})k.
\end{align*}
Similarly, in {\bf Case G.ii}, we have $|X|\leq|X|+|W|<(\half{\alpha_{1}}+{\alpha_{1}}II+\tfrac{11}{2}\eta^{1/2})k$, giving
\begin{align*}
|X|\geq|Y|\geq(c-\eta)k - |X| - |W| &\geq (\max\{\tfrac{7}{2}{\alpha_{1}}-{\alpha_{1}}II,\half{\alpha_{1}}+2{\alpha_{1}}I-{\alpha_{1}}II,\half{\alpha_{1}}+{\alpha_{1}}II\}-17\eta^{1/2})k\\&
\geq(\max\{2{\alpha_{1}},\half{\alpha_{1}}+{\alpha_{1}}I,\half{\alpha_{1}}+{\alpha_{1}}II\}-17\eta^{1/2})k.
\end{align*}
Thus, provided $\eta<({\alpha_{1}}/17)^4$, letting $\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$, in each of the {\bf Cases G.i-G.ii}, we have $$|X|\geq|Y|\geq (\max\{2{\alpha_{1}},\half{\alpha_{1}}+\beta\}-{\alpha_{1}}\eta^{1/4})k.$$
Thus, since $\eta<\min\{10^{-40},({\alpha_{1}}/1000)^4,({\alpha_{1}}/100\beta)^4\}$, provided $k>k_{\ref{l:SkAB}}(\eta^{1/2})$, we may apply Lemma~\ref{l:SkAB} to each of~$G[X]$ and $G[Y]$ to find that $G$ contains either a red connected-matching on at least~${\alpha_{1}} k$ vertices, a green odd connected-matching on at least~${\alpha_{1}}II k$ vertices or two
disjoint subsets of vertices $X$ and $Y$ such that $G[X]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$ and $G[Y]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$, where
\begin{align*}
{\mathcal H}_2^*=&{\mathcal H}\left((\beta-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\text{green},\text{red}\right),\text{ }
\\ {\mathcal J}_b=&{\mathcal J}\left(({\alpha_{1}}-18\eta^{1/2}), 4\eta^4 k, \text{red}, \text{green}\right).
\end{align*}
Furthermore, $H, K$ may belong to ${\mathcal H}_2^*$ only if ${\alpha_{1}}\leq(1-\eta^{1/16})\beta$ and may belong to ${\mathcal J}$ only if $\beta<(\tfrac{3}{2}+2\eta^{1/4}){\alpha_{1}}$.
In {\bf Case G.iii} by (\ref{z8}) and (\ref{z9}) we have
$$w>\max\{3{\alpha_{1}}-{\alpha_{1}}II,2{\alpha_{1}}I-{\alpha_{1}}II,{\alpha_{1}}II\}-11.5\eta^{1/2}\geq\half(3{\alpha_{1}}-{\alpha_{1}}II)+\half{\alpha_{1}}II-11.5\eta^{1/2}\geq\tfrac{3}{2}{\alpha_{1}}-11.5\eta^{1/2},$$
contradicting the assumption that $w<{\alpha_{1}}+\eta^{1/2}$.
In case {\bf G.iv}, by (\ref{z8}) and (\ref{z9}), we have $w>\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\}-2{\alpha_{1}}-11.5\eta^{1/2}$.
Thus, since ${\alpha_{1}}\leq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$ and $w<{\alpha_{1}}+\eta^{1/2}$, in what follows we may assume that
\begin{equation}
\label{z10-}
{\alpha_{1}}\leq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq\tfrac{3}{2}{\alpha_{1}}+6\eta^{1/2}.\end{equation}
Then, recalling (\ref{z8}) and (\ref{z9}), since $K=|X|+|Y|+|W|$ and $|X|\geq|Y|$, it follows that
\begin{equation}
\label{z10}
\left.
\begin{aligned}
\,\,\,\quad\quad
\left(\half\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\}-\eta^{1/2}\right)k
\leq |X| & <
(\tfrac{3}{2}{\alpha_{1}}+6\eta^{1/2})k,
\quad\quad\quad\quad\quad\,\,\,\,\,
\\
\left(\half\max\{3{\alpha_{1}},2{\alpha_{1}}I,2{\alpha_{1}}II\}-8\eta^{1/2}\right)k
\leq |Y| & <
(\tfrac{3}{2}{\alpha_{1}}+6\eta^{1/2})k,
\\
({\alpha_{1}}-11.5\eta^{1/2})k \leq |W| & < ({\alpha_{1}}+4\eta)k.
\end{aligned}
\right\}\!
\end{equation}
Thus, $W$ contains around ${\alpha_{1}} k$ vertices and each of $X$ and $Y$ contains close to half the remaining
vertices. By scaling, we may assume that $\half \leq {\alpha_{1}}, \max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq 1$. Recall that $G$ is $(1-\eta^4)$-complete and that for any $V'\subseteq V(G)$, $G[V']$ is $4\eta^4 k$-almost-complete.
By~(\ref{z10}), letting $\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$, recalling that $|X|\geq|Y|$, we have $$|X|,|Y|\geq\left(\max\left\{\tfrac{3}{2}{\alpha_{1}},\tfrac{1}{4}(\tfrac{3}{2}{\alpha_{1}})+\tfrac{3}{4}\beta\right\}-8\eta^{1/2}\right)k\geq\tfrac{3}{4}(\max\left\{2{\alpha_{1}},\half{\alpha_{1}}+\beta\right\}-\half(10^4\eta)^{1/2} )k$$
and may prove the following claim:
\begin{claim}
Either $G[X]$ contains a red connected-matching on at least $(\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$ vertices, $G[X]$ contains a green odd connected-matching on at least $(\tfrac{3}{4}\beta-3\eta^{1/16})k$ vertices or $G[X]\in{\mathcal J}(\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k,4\eta^4k,\text{red}, \text{green})$.
\end{claim}
\begin{proof}
Since $\eta\leq10^{-24}$, provided $k\geq \tfrac{4}{3} k_{\ref{l:SkAB}}(10^4\eta)$, applying Lemma~\ref{l:SkAB} (with $\alpha=\alpha_1$ and green taking the place of blue), we find that at least one of the following occurs:
\begin{itemize}
\item[(i)] $G[X]$ contains a red connected-matching on at least $(\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$ vertices;
\item[(ii)] $G[X]$ contains a green odd connected-matching on at least $(\tfrac{3}{4}\beta+100\eta^{1/2})k$ vertices;
\item[(iii)] $G[X]$ admits a partition of its vertices into $W$, $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $|V'| < \tfrac{3}{4}(1+100\eta^{1/2})\beta k$,
$|V''|\leq \tfrac{3}{8}(1+100\eta^{1/2}){\alpha_{1}} k$,
$|W|\leq \tfrac{3}{2}\eta^{1/16} k$,
\item[(b)] $G_2[V']$ is $(1-2\eta^{1/16})$-complete and $G_1[V']$ is $2\eta^{1/16}$-sparse,
\item[(c)] $G_1[V',V'']$ is $(1-2\eta^{1/16})$-complete and $G_2[V',V'']$ is $2\eta^{1/16}$-sparse;
\end{itemize}
\item[(iv)] $G[X]$ admits a partition of its vertices into $V'$, $V''$ such that
\begin{itemize}
\item[(a)] $\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k<|V'|,|V''| < \tfrac{3}{4}({\alpha_{1}}+10^4\eta)k$,
\item[(b)] all edges present in $G[V']$ and $G[V'']$ are coloured red and all edges present in $G[V',V'']$ are coloured green.
\end{itemize}
\end{itemize}
Furthermore, (iii) only occurs if $1\geq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\geq{\alpha_{1}}\geq \half$ and (iv) only occurs if $\half{\alpha_{1}}+ \beta < 2(1+\eta^{1/2}){\alpha_{1}}$. Note that the remaining situation in Lemma~\ref{l:SkAB}, cannot occur since \mbox{$\half\leq {\alpha_{1}}, \max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq1$}.
In case (iii), since $|X|=|V'|+|V''|+|W|$, we have $|V'|>(\tfrac{3}{4}\beta-2\eta^{1/16})k$. Then, since $G$ is $(10^4\eta)^3(\tfrac{3}{4})k$-almost-complete, by Corollary~\ref{dirac1a} $G[V']$ contains a green cycle of length~$m$ for every $2(10^4\eta)^3(\tfrac{3}{4})k+2\leq m\leq|V'|$. Thus, provided $k\geq \eta^{-1/16}$, $G[V']$ contains a green odd connected-matching on at least $(\tfrac{3}{4}\beta-2\eta^{1/16})k-1\geq(\tfrac{3}{4}\beta-3\eta^{1/16})k$ vertices, completing the proof of the claim.
\end{proof}
The same result applies to $G[Y]$, thus, letting $M_1$ be the largest monochromatic connected-matching in $G[X]$ and $M_2$ the largest monochromatic connected-matching in $G[Y]$, we may distinguish a number of subcases as follows:
\begin{itemize}
\item[(G.a)]
$M_1, M_2$ are both red,
$|V(M_1)|, |V(M_2)|\geq (\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$,
\\ \hphantom{~} $M_1$ and $M_2$ each share vertices with odd component(s) of the green graph;
\item[(G.b)]
$M_1, M_2$ are both red,
$|V(M_1)|, |V(M_2)|\geq (\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$,
\\ \hphantom{~} $M_1$ does not share any vertices with any odd component of the green graph;
\item[(G.c)]
$M_1, M_2$ are both green (and odd),
$|V(M_1)|, |V(M_2)|\geq(\tfrac{3}{4}\beta-3\eta^{1/16})k$;
\item[(G.d)]
$M_1$ is red, $|V(M_1)|\geq (\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$,
$M_2$ is green (and odd), $|V(M_2)|\geq(\tfrac{3}{4}\beta-3\eta^{1/16})k$;
\item[(G.e)]
$M_1$ is red, $|V(M_1)|\geq (\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$,
$G[Y]\in{\mathcal J}(\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k,4\eta^4k,\text{red}, \text{green})$;
\item[(G.f)]
$G[X]\in{\mathcal J}(\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k,4\eta^4k,\text{red}, \text{green})$,
$M_2$ is green (and odd), $|V(M_2)|\geq(\tfrac{3}{4}\beta-3\eta^{1/16})k$;
\item[(G.g)] $G[X],G[Y]\in{\mathcal J}(\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k,4\eta^4k,\text{red}, \text{green})$.
\end{itemize}
{\bf Case G.a:} There cannot exist a triple of vertices $w\in W, x\in M_{1}$ and $y\in M_{2}$ such that the edges $wx$ and $wy$ are both coloured red, since such a triple would imply the existence of a red connected-matching on at least ${\alpha_{1}} k$ vertices.
Thus, we may partition $W$ into $W_{1} \cup W_{2}$, such that all edges present in $G[V(M_1),W_1]$ and $G[V(M_2),W_2]$ are coloured exclusively green. Thus, in particular, all vertices in $V(M_1)$ belong to a single green component which, by assumption, is odd.
\begin{figure}
\caption{Partition of $W$ into $W_1\cup W_2$.}
\label{redMgreenP}
\end{figure}
Suppose, then, that $|W_1|\geq (\half{\alpha_{1}}II +8\eta^4)k$ and recall that $G[W,V(M_1)]$ is $4\eta^4 k$-almost-complete. Since $\eta<\eta_D$, we have $|V(M_1)|\geq (\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k\geq(\tfrac{1}{2}{\alpha_{1}}II+90\eta^{1/2})k$ and so we may apply Lemma~\ref{l:eleven} to $G[V(M_1),W_1]$ with $\ell=(\half{\alpha_{1}}II +8\eta^4) k$ and $a=4\eta^4 k$ to give a green connected-matching on at least ${\alpha_{1}}II k$ vertices. This connected-matching is odd since we know that $V(M_1)$ belongs to an odd green component.
The result is the same in the event that $|W_2|\geq(\half{\alpha_{1}}II +8\eta^4)k$. Thus, we may assume that
$|W_{1}|, |W_{2}|\leq (\half{\alpha_{1}}II+8\eta^4) k\leq|V(M_1)|,|V(M_2)|$.
In that case, we have, by (\ref{z10}), $|W_{1}|=|W|-|W_{2}|\geq ({\alpha_{1}}-11.5\eta^{1/2})k-(\half{\alpha_{1}}II+8\eta^4)k\geq(\tfrac{1}{6}{\alpha_{1}}II-24\eta^{1/2})k$ and, likewise, $|W_{2}|\geq (\tfrac{1}{6}{\alpha_{1}}II-24\eta^{1/2})k$. Recall that $G$ is $4\eta^4k$-almost-complete. Then, since $\eta\leq({\alpha_{1}}II/300)^2$, we have $4\eta^4 k < \half(\tfrac{1}{6}{\alpha_{1}}II-24\eta^{1/2})k\leq \half |W_1|$ and so, by Lemma~\ref{l:eleven}, $G[V(M_1),W_1]$ has a green connected-matching on at least $2|W_1|-8\eta^4k$ vertices. Similarly, $G[V(M_2),W_2]$ has a green connected-matching on at least $2|W_2|-8\eta^4k$ vertices. By (\ref{z10-}) and (\ref{z10}), we have $2|W_1|+2|W_2|-16\eta^4 k = 2|W|-16\eta^4 k\geq 2({\alpha_{1}}-11.5\eta^{1/2})k-16\eta^4 k\geq{\alpha_{1}}II k$. Thus, since these connected-matchings are odd, they must belong to different components of the green graph. Therefore, we may assume that all edges present in $G[V(M_1),W_2]$ and $G[V(M_2),W_1]$ are coloured red.
\begin{figure}
\caption{Colouring of the edges of $G[M_1,W_2]\cup G[M_2,W_1]$.}
\label{2ndRedGreen}
\end{figure}
Observe, then, that, without loss of generality, $|W_2|\geq|W_1|$, so that $|W_2|\geq\half(|W_1|+|W_2|)\geq(\half{\alpha_{1}}-6\eta^{1/2})k$. Now, choose any set~$R_1$ of $8\eta^{1/2} k$ of the edges from the matching~$M_1$, let $M_1'=M_1\backslash R_1$ and consider $G[V(M_1'),W_2]$. We have $|V(M_1')|\geq (\tfrac{3}{4}{\alpha_{1}}+84\eta^{1/2})k\geq(\half{\alpha_{1}} - 6\eta^{1/2})k$ and, thus, may apply Lemma~\ref{l:eleven} to $G[V(M_1'),W_2]$ to obtain a collection $R_2$ of edges from $G[V(M_1'),W_2]$ which form a red connected-matching on at least $({\alpha_{1}}-14\eta^{1/2})k$ vertices. Since $R_1$ and $R_2$ do not share any vertices but do belong to the same red-component of~$G$, the collection of edges~$R_1\cup R_2$ forms a red connected-matching on at least ${\alpha_{1}} k$ vertices, completing this case.
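To be explicit about the final count: $R_1$ spans $16\eta^{1/2}k$ vertices and $R_2$ spans at least $({\alpha_{1}}-14\eta^{1/2})k$ vertices, so
$$|V(R_1\cup R_2)|\;\geq\;16\eta^{1/2}k+({\alpha_{1}}-14\eta^{1/2})k\;=\;({\alpha_{1}}+2\eta^{1/2})k\;\geq\;{\alpha_{1}} k.$$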
{\bf Case G.b:} Again, there can be no triple of vertices $w\in W, x\in M_{1}$ and $y\in M_{2}$ such that $wx$ and $wy$ are both coloured red. Thus, we may partition $W$ into $W_{1} \cup W_{2}$, where $W_1,W_2$ are defined as in Case G.a, giving the situation illustrated in Figure~\ref{redMgreenP}, specifically all edges present in $G[V(M_1), W_1]$ are coloured exclusively green.
Thus, in particular, every vertex in $V(M_1)$ belongs to the same green component, which is assumed not to be odd. Therefore, no vertex in $P=X\backslash V(M_1)$ can have more than one green edge to $V(M_1)$ and instead each such vertex must have red edges to all but at most one of the vertices of $V(M_1)$. By maximality of $M_1$, there can be no red edges in $G[P]$, that is, all edges present in $G[P]$ are coloured exclusively green. Then, since $|P|\geq(\half{\alpha_{1}}-10\eta^{1/2})k$ and~$G$ is $4\eta^4 k$-almost-complete, $G[P]$ contains a triangle and has a single green component. Thus, all edges present in $G[V(M_1),P]$ must, in fact, be coloured red so as to avoid having $V(M_1)$ belonging to an odd green component.
Given this colouring, it can easily be shown that $G[X]$ contains a red connected-matching on at least ${\alpha_{1}} k$ vertices. Indeed, firstly, choose any set~$R_1$ of $6\eta^{1/2} k$ of the edges from the matching~$M_1$, let $M_1'=M_1\backslash R_1$ and consider $G[V(M_1'),P]$. Since $|V(M_1')|,|P|\geq(\half{\alpha_{1}}-\eta^{1/2})k$, by Lemma~\ref{l:eleven}, there exists, in $G[V(M_1'),P]$, a collection $R_2$ of edges which form a red connected-matching on at least $({\alpha_{1}}-4\eta^{1/2})k$ vertices. Since $R_1$ and $R_2$ do not share any vertices but do belong to the same red-component of~$G$, the collection of edges $R_1\cup R_2$ forms a red connected-matching on at least ${\alpha_{1}} k$ vertices, completing this case.
{\bf Case G.c:} Suppose that there exists a triple $w\in W, x\in M_{1}$ and $y\in M_{2}$ such that $wx$ and $wy$ are both green. Such a triple would give a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus, we may partition $W$ into $W_{1} \cup W_{2}$, such that all edges present in $G[V(M_1),W_2]$ and $G[V(M_2),W_1]$ are coloured exclusively red.
Suppose, then, that $|W_1|\geq (\half{\alpha_{1}} +8\eta^4)k$. Recalling that $G[V(M_2),W_1]$ is $4\eta^4 k$-almost-complete, since $\eta<\eta_D$, we have $|V(M_2)|\geq (\half{\alpha_{1}} +8\eta^4)k$ and we may apply Lemma~\ref{l:eleven} with $\ell=(\half{\alpha_{1}} +8\eta^4)k$ and $a=4\eta^4 k$ to give a red connected-matching on at least ${\alpha_{1}} k$ vertices. The result is the same in the event that $|W_2|\geq(\half{\alpha_{1}} +6\eta^4)k$ with the matching being found in $G[V(M_1), W_2]$.
Therefore, we may assume that $|W_{1}|, |W_{2}|\leq (\half{\alpha_{1}}+8\eta^4) k$. In that case, we have $|W_{1}|=|W|-|W_{2}|\geq (\half{\alpha_{1}}-12\eta^{1/2})k$ and, likewise, $|W_{2}|\geq (\half{\alpha_{1}}-12\eta^{1/2})k$. Thus, since $\eta<({\alpha_{1}}/100)^2$, Lemma~\ref{l:eleven} gives a red connected-matching on at least $({{\alpha_{1}}}-26\eta^{1/2})k$ vertices in each of $G[V(M_1),W_2]$ and $G[V(M_2),W_1]$.
Then, since $\eta<({\alpha_{1}}/100)^2$, $V(M_1)\cup W_2$ and $V(M_2)\cup W_1$ must belong to different red components (so as to avoid having a red connected-matching on at least $(2{\alpha_{1}}-52\eta^{1/2})k\geq {\alpha_{1}} k$ vertices). Thus, all edges present in $G[V(M_{1}),W_{1}]$ and $G[V(M_{2}),W_{2}]$ must be coloured exclusively green.
\begin{figure}
\caption{Colouring of the edges of $G[M_1,W_2]\cup G[M_2,W_1]$.}
\end{figure}
Given this colouring, it can easily be shown that $G$ contains a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices. Indeed, provided $k\geq \eta^{-1/2}$, we can choose a set $E_1$ of edges from $M_1$ such that $(\tfrac{1}{6}{\alpha_{1}}II+20\eta^{1/2})k\leq |E_1|\leq (\tfrac{1}{6}{\alpha_{1}}II+22\eta^{1/2})k$. Then, the edges of $E_1$ form a matching on at least $(\tfrac{1}{3}{\alpha_{1}}II+40\eta^{1/2})k$ vertices and letting $M_1'=M_1\backslash E_1$, since $\eta<({\alpha_{1}}II/50)^{16}$, we have
$$
|V(M_1')|\geq (\tfrac{3}{4}{\alpha_{1}}II-3\eta^{1/16})k-(\tfrac{1}{3}{\alpha_{1}}II+44\eta^{1/2})k\geq (\tfrac{5}{12}{\alpha_{1}}II-4\eta^{1/16})k\geq(\tfrac{1}{3}{\alpha_{1}}II-18\eta^{1/2})k.
$$
Then, recalling that $|W_1|\geq(\half{\alpha_{1}}-12\eta^{1/2})k\geq(\tfrac{1}{3}{\alpha_{1}}II-18\eta^{1/2})k$, by Lemma~\ref{l:eleven}, $G[V(M_1'),W_1]$ contains a collection of edges $E_2$, which form a connected-matching on at least $(\tfrac{2}{3}{\alpha_{1}}II-38\eta^{1/2})k$ vertices. Then, since $E_1$ and $E_2$ do not share any vertices but do belong to the same odd green-component of~$G$, the collection of edges $E_1\cup E_2$ forms a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices, completing this case.
{\bf Case G.d:}
By (\ref{z10-}) and (\ref{z10}), we have $|X|+|W|, |Y|+|W|\geq \half K$, so $G[X\cup W]$ and $G[Y\cup W]$ are each $(1-2\eta^4)$-complete.
Additionally, $|W|,|V\backslash W|\geq 4(2\eta^{4})^{1/2}|X\cup W|$. Thus, provided that $\half K\geq 1/(2\eta^{4})$, we may apply Lemma~\ref{l:dgf1} separately to $G[X\cup W]$ and $G[Y\cup W]$ with the result being that at least one of the following occurs:
\begin{itemize}
\item[(i)] $X\cup W$ has a connected red
component~$F$ on at least $|X \cup W| - \eta k$ vertices;
\item[(ii)] $X\cup W$ has a connected green
component~$F$ on at least $|X \cup W| - \eta k$ vertices;
\item[(iii)] $Y\cup W$ has a connected red
component~$F$ on at least $|Y \cup W| - \eta k$ vertices;
\item[(iv)] $Y\cup W$ has a connected green
component~$F$ on at least $|Y \cup W| - \eta k$ vertices;
\item[(v)] there exist points $w_{1}, w_{2}, w_{3}, w_{4} \in W$ such that the following hold:
\begin{itemize}
\item[(a)] $w_1$ has red edges to all but at most $\eta k$ vertices in $X$,
\item[(b)] $w_2$ has green edges to all but at most $\eta k$ vertices in $X$,
\item[(c)] $w_3$ has red edges to all but at most $\eta k$ vertices in $Y$,
\item[(d)] $w_4$ has green edges to all but at most $\eta k$ vertices in $Y$.
\end{itemize}
\end{itemize}
In subcase (i), we discard from~$W$ the at most~$\eta k$ vertices not contained in~$F$ and consider~$G[W,Y]$. Either there are at least~$\tfrac{1}{8}{\alpha_{1}} k$ mutually independent red edges present in~$G[W,Y]$ (which can be used to augment $M_1$) or we may obtain $W'\subset W$, $Y' \subset Y$ with $|W'|,|Y'| \geq (\tfrac{7}{8} {\alpha_{1}} - 12\eta^{1/2})k\geq(\tfrac{7}{12}{\alpha_{1}}II-24\eta^{1/2})k$ such that all the edges present in $G[W',Y']$ are coloured exclusively green. Since~$G$ is $4\eta^4 k$-almost-complete, so is $G_3[W',Y']$ and we may apply Lemma~\ref{l:eleven} to obtain a green connected-matching on at least ${\alpha_{1}}II k$ vertices in $G[W',Y']$ which is odd by virtue of sharing vertices with $M_2$.
In subcase (ii), suppose there exists a green edge in $G[M_2,F]$. Then, at least $|M_2\cup W|-\eta k$ of the vertices of $M_2\cup W$ would belong to the same green component in~$G$. Discard the at most $\eta k$ vertices of $W$ not contained in that component and consider $G[X,W]$. Either there are at least $(\tfrac{1}{8}{\alpha_{1}}II+2\eta^{1/16})k$ mutually independent green edges in $G[X,W]$ which can be used to augment $M_2$ or we may obtain $W'\subset W$, $X' \subset X$ with $|W'|,|X'| \geq (\tfrac{7}{8} {\alpha_{1}} - 3\eta^{1/16})k$ such that all the edges present in $G[W',X']$ are coloured exclusively red. Then, since~$G_1[W',X']$ is $4\eta^4 k$-almost-complete, we may apply Lemma~\ref{l:eleven} to obtain a red connected-matching on at least ${\alpha_{1}} k$ vertices in $G[W',X']$.
Thus, we may instead, after discarding at most $\eta k$ vertices from $W$, assume that all edges present in $G[V(M_2),W]$ are coloured exclusively red and apply Lemma~\ref{l:eleven} to obtain a red connected-matching on at least~${\alpha_{1}} k$ vertices in $G[V(M_2),W]$.
In subcase (iii), if there exists a red edge in $G[M_1,W]$, then the same argument as given in case (i) gives a red connected-matching on at least ${\alpha_{1}} k$ vertices. Thus, we may assume that, after deleting at most $\eta k$ vertices from $W$, all edges present in $G[M_1,W]$ are coloured green.
Since $\eta<\eta_D$, we have $|V(M_1)|,|W|\geq (\half{\alpha_{1}}II+90\eta^{1/2})k$.
Thus, by Lemma~\ref{l:eleven}, there exists a green connected-matching on at least ${\alpha_{1}}II k$ vertices in $G[V(M_1),W]$. However, this connected-matching is not necessarily odd. The existence of a green edge in $G[V(M_2),W]$ would suffice to complete the proof. Thus, we may assume that all edges present in $G[V(M_2),W]$ are coloured exclusively red. But, then, we may apply Lemma~\ref{l:eleven} to obtain a red connected-matching on at least~${\alpha_{1}} k$ vertices in $G[V(M_2),W]$.
In subcase (iv), we can then discard the at most $\eta k$ vertices of $W$ not contained in $F$ and consider $G[X,W]$. Either there are at least $(\tfrac{1}{8}{\alpha_{1}}II+2\eta^{1/16})k$ mutually independent green edges in $G[X,W]$ which can be used to augment $M_2$ or we may obtain $W'\subset W$, $X' \subset X$ with $|W'|,|X'| \geq (\tfrac{7}{8} {\alpha_{1}} - 3\eta^{1/16})k$ such that all the edges present in $G[W',X']$ are coloured exclusively red. In the latter case, we apply Lemma~\ref{l:eleven} to obtain a red connected-matching on at least ${\alpha_{1}} k$ vertices in $G[W',X']$.
In subcase (v), there exist points $w_{1}, w_{2}, w_{3}, w_{4} \in W$ such that $w_1$ has red edges to all but at most $\eta k$ vertices in $X$, $w_2$ has green edges to all but at most $\eta k$ vertices in $X$, $w_3$ has red edges to all but at most $\eta k$ vertices in $Y$, and $w_4$ has green edges to all but at most $\eta k$ vertices in $Y$.
\begin{figure}
\caption{Vertices $w_1,w_2,w_3$ and $w_4$ in subcase (v).}
\end{figure}
Thus, defining
\begin{align*}
X_{S}&=\{x\in X \text{ such that } xw_1 \text{ is red and } xw_2 \text{ is green}\},\\
Y_{S}&=\{y\in Y \text{ such that } yw_3 \text{ is red and } yw_4 \text{ is green}\},
\end{align*}
by (\ref{z10}), we have $|X_S|,|Y_S|\geq(\tfrac{3}{2}{\alpha_{1}}-10\eta^{1/2})k$. Suppose there exists $w\in W$, $x\in X_S$,~$y\in Y_S$ such that~$wx$ and~$wy$ are red. In that case, $X_S\cup Y_S$ belong to the same red component of~$G$. Recall that $M_1\subseteq G[X]$ is a red connected-matching on $(\tfrac{3}{4}{\alpha_{1}}+100\eta^{1/2})k$ vertices and consider $G[W,Y_S]$. Either we can find $\tfrac{1}{8} {\alpha_{1}} k$ mutually independent red edges in $G[W,Y_S]$ (which together with $M_1$ give a red connected-matching on at least ${\alpha_{1}} k$ vertices) or we may obtain $W'\subset W$, $Y' \subset Y_S$ with $|W'|,|Y'| \geq (\tfrac{7}{8}{\alpha_{1}} - 12\eta^{1/2})k$ such that all the edges present in $G[W',Y']$ are coloured exclusively green. Then, as in case (i), we may apply Lemma~\ref{l:eleven} to obtain a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus, we may assume that no such triple exists and, similarly, we may assume there exists no triple $w\in W$, $x\in X_S$, $y\in Y_S$ such that $wx$ and $wy$ are both green. Therefore, we may partition~$W$ into $W_1 \cup W_2$ such that all edges present in $G[W_1,X_S]$ and $G[W_2,Y_S]$ are coloured exclusively red and all edges present in $G[W_1,Y_S]$ and $G[W_2,X_S]$ are coloured exclusively green. Thus, we may assume that $|W_1|,|W_2|\leq(\half{\alpha_{1}}+\eta^{1/2})k$ (else Lemma~\ref{l:eleven} could be used to give a red connected-matching on at least ${\alpha_{1}} k$ vertices) and therefore also that $|W_1|,|W_2|\geq(\half{\alpha_{1}}-12\eta^{1/2})k$, in which case we can easily show that there exists a red connected-matching on at least ${\alpha_{1}} k$ vertices in $G[X_S\cup W_1]$ as follows: Choose any set~$R_1$ of $14\eta^{1/2} k$ of the edges from the matching~$M_1$, let $X'=X_S\backslash V(R_1)$ and consider $G[X',W_1]$. We have $|X'|,|W_1|\geq (\tfrac{1}{2}{\alpha_{1}}-12\eta^{1/2})k$ and, thus, may apply Lemma~\ref{l:eleven} to $G[X',W_1]$ to obtain a collection $R_2$ of edges from $G[X',W_1]$ which form a red connected-matching on at least $({\alpha_{1}}-26\eta^{1/2})k$ vertices. Since $R_1$ and $R_2$ do not share any vertices but do belong to the same red-component of~$G$, the collection of edges $R_1\cup R_2$ forms a red connected-matching on at least ${\alpha_{1}} k$ vertices, completing this case.
{\bf Case G.e:} Since $|Y_1|,|Y_2|\geq\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k$ and $G$ is $4\eta^4 k$-almost-complete, by Corollary~\ref{dirac1a}, $G[Y_1]$ contains a red connected-matching $R_1$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices. Similarly, $G[Y_2]$ contains a red connected-matching $R_2$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices.
Thus, the existence of a triple $w\in W$, $y_1\in Y_1$, $y_2\in Y_2$ such that both the edges $wy_1$ and $wy_2$ are coloured red would give a red connected-matching on at least ${\alpha_{1}} k$ vertices. Therefore, there exists a partition of $W$ into $W_1\cup W_2$ such that all edges present in $G[W_1,Y_1]$ and $G[W_2,Y_2]$ are coloured exclusively green.
\begin{figure}
\caption{Colouring of $G$ in Case G.e.}
\end{figure}
Recall from (\ref{z10-}) that ${\alpha_{1}}>\tfrac{2}{3}{\alpha_{1}}II-4\eta^{1/2}$. Thus, $|Y_1|,|Y_2|\geq\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k\geq(\half{\alpha_{1}}II-80\eta^{1/2})k$. So, by Lemma~\ref{l:eleven}, $G[Y_1,Y_2]$ contains a green connected-matching $M_2$ on at least $({\alpha_{1}}II-164\eta^{1/2})k$ vertices which need not be odd.
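For completeness, the second inequality here is immediate from (\ref{z10-}):
$$\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})=\tfrac{3}{4}{\alpha_{1}}-76.5\eta^{1/2}>\tfrac{3}{4}\left(\tfrac{2}{3}{\alpha_{1}}II-4\eta^{1/2}\right)-76.5\eta^{1/2}=\half{\alpha_{1}}II-79.5\eta^{1/2}\geq\half{\alpha_{1}}II-80\eta^{1/2}.$$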
We then consider two possibilities:
\begin{itemize}
\item[(i)] $|W_1|,|W_2|\geq 84\eta^{1/2}k$;
\item[(ii)] $\min\{|W_1|,|W_2|\}\leq 84\eta^{1/2}k$.
\end{itemize}
In subcase (i), provided $k\geq \eta^{-1/2}$, we may choose subsets $Y_1'\subset Y_1$, $Y_2'\subset Y_2$ such that $84\eta^{1/2}k\leq|Y_1'|,|Y_2'|\leq 86\eta^{1/2}k$. Then, since $|W_1|,|W_2|\geq84\eta^{1/2}k$, by Lemma~\ref{l:eleven}, $G[W_1,Y_1']$ contains a green connected-matching $E_1$ on at least $166\eta^{1/2}k$ vertices and $G[W_2,Y_2']$ contains a green connected-matching $E_2$ on at least $166\eta^{1/2}k$ vertices.
Also, we have $|Y_1\backslash Y_1'|, |Y_2\backslash Y_2'|\geq (\half{\alpha_{1}}II-166\eta^{1/2})k$ so, by Lemma~\ref{l:eleven}, $G[Y_1\backslash Y_1', Y_2\backslash Y_2']$ contains a green connected-matching $E_3$ on at least $({\alpha_{1}}II-330\eta^{1/2})k$ vertices. Together $E_1, E_2$ and $E_3$ form a green connected-matching on at least ${\alpha_{1}}II k$ vertices although this connected-matching need not be odd.
The existence of a green edge in $G[W_1,Y_2]$ or $G[W_2,Y_1]$ would give an odd green cycle in the same green component as $E_1\cup E_2\cup E_3$. Thus, we may assume that all edges present in $G[W_1,Y_2]$ and $G[W_2,Y_1]$ are coloured red.
\begin{figure}
\caption{Colouring of $G[W,Y]$ in Case G.e.i.}
\end{figure}
In that case, the existence of a red edge in $G[V(M_1), W_1]$ would give a red connected-matching $M_1\cup R_1$ on at least ${\alpha_{1}} k$ vertices. Similarly, the existence of a red edge in $G[V(M_1), W_2]$ would give a red connected-matching $M_1\cup R_2$ on at least ${\alpha_{1}} k$ vertices. Thus, we may assume that all edges present in $G[V(M_1),W]$ are coloured green, thus giving a green five-cycle in the same component as $E_1\cup E_2 \cup E_3$, completing this subcase.
In subcase (ii), without loss of generality, $|W_2|\leq 84\eta^{1/2}k$. Thus, after discarding at most $84\eta^{1/2}k$ vertices from $W$, we may assume that every vertex in $W$ belongs to the same green component as $M_2$ and that $|W|\geq ({\alpha_{1}}-98\eta^{1/2})k$.
Suppose there is a green edge in $G[W,Y_2]$. Then, the existence of $84\eta^{1/2}k$ mutually independent green edges in $G[V(M_1),W]$ would give an odd green connected-matching on at least ${\alpha_{1}}II k$ vertices. Thus, we may assume that there exist subsets $X'\subseteq V(M_1)$ and $W'\subseteq W$ such that $|X'|\geq(\tfrac{3}{4}{\alpha_{1}}+2\eta^{1/2})k$, $|W'|\geq({\alpha_{1}}-182\eta^{1/2})k$ and all edges present in $G[X',W']$ are coloured red. Then, by Lemma~\ref{l:eleven}, $G[X',W']$ would contain a red connected-matching on at least ${\alpha_{1}} k$ vertices.
Therefore, we may instead assume that all edges present in $G[W,Y_2]$ are coloured red. But then, since $\eta\leq({\alpha_{1}}/320)^2$, $|W|\geq({\alpha_{1}}-98\eta^{1/2})k\geq(\half{\alpha_{1}}+\eta^{1/2})k$ and $|Y_2|\geq\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k\geq(\half{\alpha_{1}}+\eta^{1/2})k$. So, by Lemma~\ref{l:eleven}, $G[W,Y_2]$ contains a red connected-matching on at least ${\alpha_{1}} k$ vertices, completing this case.
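Both of these estimates use only $\eta^{1/2}\leq{\alpha_{1}}/320$:
$${\alpha_{1}}-98\eta^{1/2}\geq{\alpha_{1}}-\tfrac{98}{320}{\alpha_{1}}=\half{\alpha_{1}}+\tfrac{62}{320}{\alpha_{1}}\geq\half{\alpha_{1}}+\eta^{1/2}
\quad\text{and}\quad
\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})\geq\tfrac{3}{4}{\alpha_{1}}-\tfrac{76.5}{320}{\alpha_{1}}\geq\half{\alpha_{1}}+\tfrac{3}{320}{\alpha_{1}}\geq\half{\alpha_{1}}+\eta^{1/2}.$$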
{\bf Case G.f:} Since $|X_1|,|X_2|\geq\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k$ and $G$ is $4\eta^4 k$-almost-complete, by Corollary~\ref{dirac1a}, $G[X_1]$ contains a red connected-matching $R_1$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices. Similarly, $G[X_2]$ contains a red connected-matching $R_2$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices.
Thus, the existence of a triple $w\in W$, $x_1\in X_1$, $x_2\in X_2$ such that both the edges $wx_1$ and $wx_2$ are coloured red would give a red connected-matching on at least ${\alpha_{1}} k$ vertices. Therefore, there exists a partition of $W$ into $W_1\cup W_2$ such that all edges present in $G[W_1,X_1]$ and $G[W_2,X_2]$ are coloured exclusively green.
Recall from (\ref{z10-}) that ${\alpha_{1}}>\tfrac{2}{3}{\alpha_{1}}II-4\eta^{1/2}$. Thus, $|X_1|,|X_2|\geq(\half{\alpha_{1}}II-80\eta^{1/2})k$, so, by Lemma~\ref{l:eleven}, $G[X_1,X_2]$ contains a green connected-matching $M_1$ on at least $({\alpha_{1}}II-164\eta^{1/2})k$ vertices.
Since $\eta\leq\eta_D$, $({\alpha_{1}}II-164\eta^{1/2})k+(\tfrac{3}{4}{\alpha_{1}}-3\eta^{1/16})k\geq {\alpha_{1}}II k$. Thus, in order to avoid having a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices, $M_1$ and $M_2$ must be in different green components. Therefore, we may assume that all edges present in $G[W,V(M_2)]$ are coloured red. Then, since $|W|,|V(M_2)|\geq(\tfrac{3}{4}{\alpha_{1}}-3\eta^{1/16})k\geq(\half{\alpha_{1}}+\eta^{1/2})k$, by Lemma~\ref{l:eleven}, $G[W,V(M_2)]$ contains a red connected-matching on at least ${\alpha_{1}} k$ vertices, completing the proof in this case.
{\bf Case G.g:} Since $|X_1|\geq\tfrac{3}{4}({\alpha_{1}}-102\eta^{1/2})k$ and $G$ is $4\eta^4 k$-almost-complete, by Corollary~\ref{dirac1a}, $G[X_1]$ contains a red connected-matching $R_{11}$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices. Similarly, $G[X_2]$ contains a red connected-matching $R_{12}$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices, $G[Y_1]$ contains a red connected-matching $R_{21}$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices and $G[Y_2]$ contains a red connected-matching $R_{22}$ on at least $(\tfrac{3}{4}{\alpha_{1}}-78\eta^{1/2})k$ vertices.
Thus, the existence of a triple $w\in W$, $x\in X$, $y\in Y$ such that both the edges $wx$ and $wy$ are coloured red would give a red connected-matching on at least ${\alpha_{1}} k$ vertices. Therefore, there exists a partition of $W$ into $W_1\cup W_2$ such that all edges present in $G[W_1,X]$ and $G[W_2,Y]$ are coloured exclusively green. Thus, $G[W_1\cup X]$ and $G[W_2\cup Y]$ each consists of a single odd green component.
\begin{figure}
\caption{Colouring of $G$ in Case G.g.}
\end{figure}
Without loss of generality, $|W_1|\geq\half(|W_1|+|W_2|)\geq \half({\alpha_{1}}-11.5\eta^{1/2})k\geq(\tfrac{1}{3}{\alpha_{1}}II-24\eta^{1/2})k\geq200\eta^{1/2}k$. Thus, we may partition $W_1$ into $W_{11}\cup W_{12}$ such that $|W_{11}|,|W_{12}|\geq 100\eta^{1/2}k$.
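The middle inequality in this chain is again a consequence of (\ref{z10-}): halving ${\alpha_{1}}>\tfrac{2}{3}{\alpha_{1}}II-4\eta^{1/2}$ gives $\half{\alpha_{1}}>\tfrac{1}{3}{\alpha_{1}}II-2\eta^{1/2}$ and hence
$$\half({\alpha_{1}}-11.5\eta^{1/2})=\half{\alpha_{1}}-5.75\eta^{1/2}>\tfrac{1}{3}{\alpha_{1}}II-7.75\eta^{1/2}\geq\tfrac{1}{3}{\alpha_{1}}II-24\eta^{1/2}.$$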
Since $|X_1|,|X_2|\geq(\half{\alpha_{1}}II-90\eta^{1/2})k$, we may partition $X_1$ into $X_{11}\cup X_{12}$ and $X_2$ into $X_{21}\cup X_{22}$ such that $|X_{11}|,|X_{21}|\geq 100\eta^{1/2}k$ and $|X_{12}|,|X_{22}|\geq (\half{\alpha_{1}}II-192\eta^{1/2})k$.
Then, by Lemma~\ref{l:eleven}, $G[X_{11},W_{11}]$ and $G[X_{21},W_{12}]$ each contains a green connected-matching on at least $198\eta^{1/2}k$ vertices and $G[X_{12},X_{22}]$ contains a green connected-matching on at least $({\alpha_{1}}II-386\eta^{1/2})k$ vertices. These green connected-matchings share no vertices but belong to the same odd green component and thus form a green connected-matching on at least ${\alpha_{1}}II k$ vertices, completing the proof of this case.
\begin{center}
***
\end{center}
Recall that early in the proof, we assumed that $d(G_2)>\half(c-{\alpha_{1}}-2\eta)k$. If instead we assume that $d(G_3)>\half(c-{\alpha_{1}}-2\eta)k$, the result is the same but with the roles of ${\alpha_{1}}I$ and ${\alpha_{1}}II$ and blue and green exchanged.
\qquad\hspace*{\fill}$\Box$
\label{Biiiend}
\section{Proof of the main result -- Setup}
\label{s:p10}
For ${\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II>0$, we set $c={\alpha_{1}}+\max\{3{\alpha_{1}}, 2{\alpha_{1}}I, 2{\alpha_{1}}II\}$,
$$ \eta=\frac{1}{2}\min\left\{\eta_{D}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II), 10^{-50}, \left(\frac{{\alpha_{1}}}{100}\right)^{128}, \left(\frac{{\alpha_{1}}I}{100}\right)^{128}, \left(\frac{{\alpha_{1}}II}{100}\right)^{128} \right\}$$
and let~$k_0$ be the smallest integer such that
$$k_0\geq \max \left\{\left(c-\half{\eta}\right)k_{D}({\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II,\eta), \frac{100}{\eta}\right\}.$$
We let
$$N={\langle \! \langle} {\alpha_{1}} n{\rangle \! \rangle}+\max\left\{3{\langle \! \langle} {\alpha_{1}} n{\rangle \! \rangle}, 2\langle {\alpha_{1}}I n \rangle, 2\langle{\alpha_{1}}II n \rangle\right\} - 3,$$ for some integer $n$ such that $N \geq K_{\ref{l:sze}}(\eta^4,k_0)$ and
$$n>n^*=\max\{n_{\ref{th:blow-up}}(4,0,0,\eta), n_{\ref{th:blow-up}}(1,2,0,\eta), n_{\ref{th:blow-up}}(1,0,2,\eta),1/\eta, 100/\min\{{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\}\}$$
and consider a three-colouring of~$G=(V,E)$, the complete graph on~$N$ vertices.
In order to prove Theorem C, we must prove that $G$ contains either a red cycle on ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices, a blue cycle on $\langle {\alpha_{1}}I n \rangle$ vertices or a green cycle on $\langle {\alpha_{1}}II n \rangle$ vertices.
Recall that we use $G_1, G_2, G_3$ to refer to the monochromatic spanning subgraphs of $G$. That is,~$G_1$ (resp. $G_2, G_3$) has the same vertex set as~$G$ and includes as an edge any edge which in~$G$ is coloured red (resp. blue, green).
By Theorem~\ref{l:sze}, there exists an $(\eta^4,G_1,G_2,G_3)$-regular partition $\Pi=(V_0,V_1,\dots,V_K)$ for some $K$ such that $k_0\leq K \leq K_{\ref{l:sze}}(\eta^4,k_0)$. Given such a partition, we define the $(\eta^4,\eta,\Pi)$-reduced-graph ${\mathcal G}=({\mathcal V},{\mathcal E})$ on~$K$ vertices as in Definition~\ref{reduced}. The result is a three-multicoloured graph ${\mathcal G}=({\mathcal V},{\mathcal E})$ with
\begin{align*}
{\mathcal V}&=\{V_1,V_2,\dots,V_K\}, &
{\mathcal E}&=\{V_iV_j : (V_i,V_j) \text{ is } (\eta^4,G_r)\text{-regular for }r=1,2,3\},
\end{align*}
such that a given edge $V_iV_j$ of~${\mathcal G}$ is coloured with every colour for which there are at least $\eta|V_i||V_j|$ edges of that colour between~$V_i$ and~$V_j$ in~$G$.
In what follows, we will use ${\mathcal G}_1, {\mathcal G}_2, {\mathcal G}_3$ to refer to the monochromatic spanning subgraphs of the reduced graph ${\mathcal G}$. That is,~${\mathcal G}_1$ (resp. ${\mathcal G}_2, {\mathcal G}_3$) has the same vertex set as~${\mathcal G}$ and includes as an edge any edge which in~${\mathcal G}$ is coloured red (resp. blue, green).
Note that, by scaling, we may assume that $\max\{{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\}= 1$. Thus, since $K\geq k_0 \geq 100/\eta$, we may fix an integer~$k$ such that
\begin{align}
\left(c-\eta\right)k \leq K \leq \left(c-\half\eta\right)k,
\label{sizeK}
\end{align}
and may assume that $2k\leq K\leq 4k$, $2n\leq N\leq 4n$.
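For completeness, here is the short computation behind the assertions concerning~$K$ (the bounds for~$N$ are of the same nature). Since $\max\{{\alpha_{1}},{\alpha_{1}}I,{\alpha_{1}}II\}=1$, the maximum appearing in the definition of~$c$ lies between~$2$ and~$3$, so $2<{\alpha_{1}}+2\leq c\leq 4$. Hence, for any~$k$ satisfying (\ref{sizeK}), we have $K\geq(c-\eta)k\geq(2+{\alpha_{1}}-\eta)k\geq 2k$ (as $\eta<{\alpha_{1}}$) and $K\leq(c-\half\eta)k\leq 4k$. Such an integer~$k$ exists since the interval $\left[\tfrac{K}{c-\eta/2},\tfrac{K}{c-\eta}\right]$ has length
$$\frac{K\eta/2}{(c-\eta)(c-\half\eta)}\geq\frac{K\eta}{32}\geq\frac{100}{32}>1,$$
using $K\geq k_0\geq 100/\eta$ and $c\leq4$.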
Notice, also, that since the partition is~$\eta^4$-regular, we have $|V_0|\leq \eta^4 N$ and, for $1\leq i \leq K$,
\begin{equation}
\label{NK}
(1-\eta^4)\frac{N}{K}\leq |V_i|\leq \frac{N}{K}.
\end{equation}
\hypertarget{reD}{}
Applying Theorem~\hyperlink{thD}{D}, we find that ${\mathcal G}$ contains at least one of the following:
\begin{itemize}
\item [(i)] a red connected-matching on at least ${\alpha_{1}}
k$ vertices;
\item [(ii)] a blue connected-matching on at least ${\alpha_{1}}I k$ vertices;
\item [(iii)] a green odd connected-matching on at least ${\alpha_{1}}II k$ vertices;
\item [(iv)] subsets of vertices ${\mathcal W}$, ${\mathcal X}$ and ${\mathcal Y}$ such that ${\mathcal X}\cup {\mathcal Y}\subseteq {\mathcal W}$, ${\mathcal X}\cap {\mathcal Y}=\emptyset$, $|{\mathcal W}|\geq(c-\eta^{1/2})k$, every $\gamma$-component of ${\mathcal G}[{\mathcal W}]$ is odd, ${\mathcal G}[{\mathcal X}]$ contains a two-coloured spanning subgraph $H$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$ and ${\mathcal G}[{\mathcal Y}]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$, where
\begin{align*}
{\mathcal H}_1&={\mathcal H}\left(({\alpha_{1}}-2\eta^{1/64})k,(\half\alpha_{*}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{red},\gamma\right),\text{ }
\\ {\mathcal H}_2&={\mathcal H}\left((\alpha_{*}-2\eta^{1/64})k,(\half{\alpha_{1}}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\gamma,\text{red}\right),
\end{align*}
for $(\alpha_{*},\gamma)\in\{({\alpha_{1}}I,\text{blue}),({\alpha_{1}}II,\text{green})\}$;
\item[(v)]
disjoint subsets of vertices ${\mathcal X}$ and ${\mathcal Y}$ such that $G[{\mathcal X}]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$ and $G[{\mathcal Y}]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$, where
\begin{align*}
{\mathcal H}_{2}^*&={\mathcal H}\left((\beta-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\gamma,\text{red}\right),\text{ }
\\ {\mathcal J}_b&={\mathcal J}\left(({\alpha_{1}}-18\eta^{1/2})k, 4\eta^4 k, \text{red}, \gamma\right),
\end{align*}
for $\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$ and $\gamma\in\{\text{blue}, \text{green}\}$;
\item[(vi)] a subgraph $H$ from ${\mathcal L}={\mathcal L}\left((\half\alpha+\tfrac{1}{4}\eta)k,4\eta^4k, \text{red}, \text{blue}, \text{green}\right).$
\end{itemize}
Furthermore,
\begin{itemize}
\item[(iv)] occurs only if ${\alpha_{1}}\leq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq\alpha_{*}+24\eta^{1/4}$. Additionally, $H$ and $K$ belong to ${\mathcal H}_{1}$ if $\alpha_{*}\leq(1-\eta^{1/16}){\alpha_{1}}$ and belong to ${\mathcal H}_{2}$ if ${\alpha_{1}}\leq(1-\eta^{1/16})\alpha_{*}$;
\item[(v)] occurs only if ${\alpha_{1}}\leq\beta$. Additionally, $H$ and $K$ may belong to ${\mathcal H}_2^*$ only if ${\alpha_{1}}\leq(1-\eta^{1/16})\beta$ and may belong to ${\mathcal J}$ only if $\beta<(\tfrac{3}{2}+2\eta^{1/4}){\alpha_{1}}$; and
\item[(vi)] occurs only if ${\alpha_{1}}\geq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$.
\end{itemize}
Since $n>\max\{n_{\ref{th:blow-up}}(4,0,0,\eta), n_{\ref{th:blow-up}}(1,2,0,\eta), n_{\ref{th:blow-up}}(1,0,2,\eta)\}$ and $\eta<10^{-20}$, in cases (i)--(iii), Theorem~\ref{th:blow-up} gives a cycle of appropriate length, colour and parity to complete the proof.
Thus, we need only concern ourselves with cases (iv)--(vi). We divide the remainder of the proof into three parts, each corresponding to one of the possible coloured structures.
\section{Proof of the main result -- Part I -- Case (iv)}
\label{s:p11}
Suppose that ${\mathcal G}$ contains subsets of vertices ${\mathcal W}$, ${\mathcal X}$ and ${\mathcal Y}$ such that ${\mathcal X}\cup {\mathcal Y}\subseteq {\mathcal W}$, ${\mathcal X}\cap {\mathcal Y}=\emptyset$, $|{\mathcal W}|\geq(c-\eta^{1/2})k$, every blue-component of ${\mathcal G}[{\mathcal W}]$ is odd, ${\mathcal G}[{\mathcal X}]$ contains a red-blue-coloured spanning subgraph~$H$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$ and ${\mathcal G}[{\mathcal Y}]$ contains a two-coloured spanning subgraph $K$ from ${\mathcal H}_{1}\cup{\mathcal H}_{2}$, where
\begin{align*}
{\mathcal H}_1&={\mathcal H}\left(({\alpha_{1}}-2\eta^{1/64})k,(\half{\alpha_{1}}I-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{red},\text{blue}\right),\text{ }
\\ {\mathcal H}_2&={\mathcal H}\left(({\alpha_{1}}I-2\eta^{1/64})k,(\half{\alpha_{1}}-2\eta^{1/64})k,4\eta^2 k,\eta^{1/64},\text{blue},\text{red}\right).
\end{align*}
Recall from Theorem~D that we may additionally assume that
\begin{equation}
\label{sizesI}
{\alpha_{1}}\leq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq{\alpha_{1}}I+24\eta^{1/2}.
\end{equation}
We divide the proof that follows into three sub-parts depending on the colouring of the subgraphs~$H$~and~$K$, that is, whether $H$ and $K$ belong to ${\mathcal H}_1$ or ${\mathcal H}_2$:
\subsection*{Part I.A: $H, K \in {\mathcal H}_1$.}
From Theorem~\hyperlink{reD}{D}, we know that this case can arise only when ${\alpha_{1}}\geq(1-\eta^{1/16}){\alpha_{1}}I$. Thus, recalling~(\ref{sizesI}), we know that
\begin{equation}
\label{sizesIA}
{\alpha_{1}}I-\eta^{1/16}\leq{\alpha_{1}}\leq\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq{\alpha_{1}}I+24\eta^{1/2}.
\end{equation}
We have a natural partition of
the vertex set~${\mathcal V}$ of~${\mathcal G}$ into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$, where ${\mathcal X}_1\cup{\mathcal X}_2$ is the partition of the vertices of $H$ given by Definition~\ref{d:H} and ${\mathcal Y}_1\cup{\mathcal Y}_2$ the corresponding partition of the vertices of $K$. Thus, ${\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2\subseteq {\mathcal W}$ and
\begin{align}
\label{IA0a-}
|{\mathcal X}_1|,|{\mathcal Y}_1| & \geq ({\alpha_{1}}-2\eta^{1/64})k,
& |{\mathcal X}_2|,|{\mathcal Y}_2| & \geq (\half {\alpha_{1}}I-2\eta^{1/64})k.
\end{align}
By the definition of ${\mathcal H}_1$, we know that ~${\mathcal G}_1[{\mathcal X}_1]$ is $(1-\eta^{1/64})$-complete and so, by Theorem~\ref{dirac}, it contains a red connected-matching on at least $|{\mathcal X}_1|-1\geq({\alpha_{1}}-4\eta^{1/64})k$ vertices. Similarly,~${\mathcal G}[{\mathcal Y}_1]$ contains a red connected-matching on at least $({\alpha_{1}}-4\eta^{1/64})k$ vertices. Thus, the existence of a red edge in~${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ would imply the existence of a red-connected-matching on at least ${\alpha_{1}} k$ vertices and, therefore, we may assume that there are no red edges present in~${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$.
Again, by the definition of ${\mathcal H}_1$, we know that~${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2]$ is $(1-\eta^{1/64})$-complete and so, by Lemma~\ref{l:ten}, it contains a blue connected-matching on at least $({\alpha_{1}}I-8\eta^{1/64})k$ vertices. Similarly,~${\mathcal G}_2[{\mathcal Y}_1,{\mathcal Y}_2]$ contains a blue connected-matching on at least $({\alpha_{1}}I-8\eta^{1/64})k$ vertices. Thus, recalling that every blue component of~${\mathcal G}[{\mathcal W}]$ may be assumed to be odd, the existence of a blue edge in ${\mathcal G}[{\mathcal X}_1\cup {\mathcal X}_2,{\mathcal Y}_1\cup {\mathcal Y}_2]$ would imply the existence of a blue odd connected-matching on at least ${\alpha_{1}}I k$ vertices. Therefore, we may assume that there are no blue edges present in~${\mathcal G}[{\mathcal X}_1\cup {\mathcal X}_2,{\mathcal Y}_1\cup {\mathcal Y}_2]$.
Suppose there exists a red matching ${\mathcal R}_1$ in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_2]$ such that $8\eta^{1/64}k\leq |V({\mathcal R}_1)|\leq 10\eta^{1/64}k$. Then, recalling (\ref{IA0a-}), there exists a subset $\widetilde{{\mathcal X}}_1$ of at least $({\alpha_{1}}-7\eta^{1/64})k$ vertices from ${\mathcal X}_1$ such that $\widetilde{{\mathcal X}}_1$ and $V({\mathcal R}_1)$ share no vertices. By Theorem~\ref{dirac},~${\mathcal G}[\widetilde{{\mathcal X}}_1]$ contains a red connected-matching ${\mathcal R}_2$ on at least $|\widetilde{{\mathcal X}}_1|-1\geq({\alpha_{1}}-8\eta^{1/64})k$ vertices. Observe then that ${\mathcal R}_1$ and ${\mathcal R}_2$ share no vertices and therefore, since~${\mathcal G}_1[{\mathcal X}_1]$ is connected, form a red connected-matching on at least ${\alpha_{1}} k$ vertices. Thus, after moving at most~$5\eta^{1/64}k$ vertices from each of ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1$ and ${\mathcal Y}_2$ into ${\mathcal Z}$, we may assume that there are no red edges present in~${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_2]$ or~${\mathcal G}[{\mathcal X}_2,{\mathcal Y}_1]$.
In summary, moving vertices from ${\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2$ to ${\mathcal Z}$, we may now assume that we have a partition of ${\mathcal V}({\mathcal G})$ into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$ with
\begin{equation}
\left.
\label{IA0a}
\begin{aligned}
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,
({\alpha_{1}}-7\eta^{1/64})k\leq|{\mathcal X} _1|&=|{\mathcal Y} _1|=p\leq {\alpha_{1}} k,
\quad\quad\quad\quad\quad\quad\quad\,\,\,\\
(\half{\alpha_{1}}I-7\eta^{1/64})k\leq|{\mathcal X} _2|&=|{\mathcal Y} _2|=q\leq \half{\alpha_{1}}I k,
\end{aligned}
\right\}\!
\end{equation}
such that
\begin{itemize}
\labitem{HA1}{HM1}
${\mathcal G}_1[{\mathcal X} _1]$ and ${\mathcal G}_1[{\mathcal Y} _1]$ are each $(1-2\eta^{1/16})$-complete (and thus connected), \\
${\mathcal G}_2[{\mathcal X} _1]$ and ${\mathcal G}_2[{\mathcal Y} _1]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_3[{\mathcal X} _1]$ and ${\mathcal G}_3[{\mathcal Y} _1]$ each contain no edges;
\labitem{HA2}{HM2} ${\mathcal G}_1[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_1[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_2[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $(1-2\eta^{1/16})$-complete (and thus connected),\\
${\mathcal G}_3[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_3[{\mathcal Y}_1,{\mathcal Y}_2]$ each contain no edges;
\labitem{HA3}{HM4} ${\mathcal G}[{\mathcal X}_2]$ and ${\mathcal G}[{\mathcal Y}_2]$ each contain no green edges, \\ all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_2] \cup {\mathcal G}[{\mathcal X}_2,{\mathcal Y}_1] \cup {\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured exclusively green;
\labitem{HA4}{HM3} ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2]$ is $4\eta^4 k$-almost-complete (and thus connected), \\
every component of ${\mathcal G}_2[{\mathcal X}_1\cup {\mathcal X}_2 \cup {\mathcal Y}_1 \cup {\mathcal Y}_2]$ is odd.
\end{itemize}
By (\ref{sizesIA}) and (\ref{IA0a}), since $\eta\leq\left(\frac{{\alpha_{1}}II}{100}\right)^{128}$, we have $$|{\mathcal X}_1|\geq({\alpha_{1}}-7\eta^{1/64})k\geq(\half{\alpha_{1}}II+4\eta^2)k.$$
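For completeness, this can be checked as follows. By (\ref{sizesIA}), ${\alpha_{1}}II\leq{\alpha_{1}}I+24\eta^{1/2}\leq{\alpha_{1}}+\eta^{1/16}+24\eta^{1/2}$, so $\half{\alpha_{1}}II\leq\half{\alpha_{1}}+\half\eta^{1/16}+12\eta^{1/2}$. Therefore, since the choice of $\eta$ also guarantees $\eta\leq\left(\frac{{\alpha_{1}}}{100}\right)^{128}$ and ${\alpha_{1}}\leq1$,
$${\alpha_{1}}-7\eta^{1/64}-\left(\half{\alpha_{1}}II+4\eta^{2}\right)\geq\half{\alpha_{1}}-\left(7\eta^{1/64}+\half\eta^{1/16}+12\eta^{1/2}+4\eta^{2}\right)\geq\half{\alpha_{1}}-24\eta^{1/64}\geq\half{\alpha_{1}}-24\left(\tfrac{{\alpha_{1}}}{100}\right)^{2}\geq0.$$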
By (\ref{HM3}), ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2]$ is $4\eta^2 k$-almost-complete, and by (\ref{HM4}) all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured exclusively green. Thus, by Lemma~\ref{l:eleven}, ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ contains a green connected-matching ${\mathcal M}$ on at least ${\alpha_{1}}II k$ vertices. By (\ref{HM4}), all the vertices of ${\mathcal X}_1\cup {\mathcal X}_2 \cup {\mathcal Y}_1 \cup {\mathcal Y}_2$ belong to the same green component. If there existed a pair of green edges $xz$, $yz$ with $x\in{\mathcal X}_1\cup{\mathcal X}_2$, $y\in{\mathcal Y}_1\cup{\mathcal Y}_2$, $z\in{\mathcal Z}$, then the component of ${\mathcal G}_3$ containing ${\mathcal M}$ would be odd. Thus, we may instead assume that we can partition ${\mathcal Z}$ into ${\mathcal Z}_X\cup{\mathcal Z}_Y$ such that there are no green edges in ${\mathcal G}[{\mathcal Z}_X,{\mathcal X}_1\cup{\mathcal X}_2]\cup{\mathcal G}[{\mathcal Z}_Y,{\mathcal Y}_1\cup{\mathcal Y}_2]$.
\begin{figure}
\caption{Partition of the vertices of the reduced-graph. }
\end{figure}
Recalling that $|V({\mathcal G})|\geq(c-\eta)k\geq(4\alpha_1-\eta)k$, without loss of generality, we may assume that
$$|{\mathcal X}_1\cup{\mathcal X}_2\cup {\mathcal Z}_X|\geq(2\alpha_1-\eta)k$$
(since, if not, then ${\mathcal Y}_1 \cup {\mathcal Y}_2\cup {\mathcal Z}_Y$ is that large instead).
We further partition ${\mathcal Z}_X$ into ${\mathcal Z}_R\cup {\mathcal Z}_B$ by defining
\begin{align*}
{\mathcal Z}_B&=\{ z\in {\mathcal Z}_X \text{ such that } z \text{ has at least } |{\mathcal X}_1|-16\eta^{1/2}k \text{ blue neighbours in } {\mathcal X}_1\};\text{ and}
\\ {\mathcal Z}_R&={\mathcal Z}_X\backslash {\mathcal Z}_B=\{ z\in {\mathcal Z}_X \text{ such that } z \text{ has at least } 16\eta^{1/2}k \text{ red neighbours in } {\mathcal X}_1\}.
\end{align*}
Given this definition, suppose that $|{\mathcal X}_1\cup{\mathcal Z}_R|\geq({\alpha_{1}}+\eta^{1/2})k$. Then, let $\widetilde{{\mathcal X}}$ be any subset of ${\mathcal X}_1\cup{\mathcal Z}_R$ such that
$$({\alpha_{1}}+\eta^{1/2})k\leq|\widetilde{{\mathcal X}}|\leq({\alpha_{1}}+2\eta^{1/2})k$$
which includes every vertex of ${\mathcal X}_1$. Given (\ref{IA0a}), we have
$$|{\mathcal X}_1\cap\widetilde{{\mathcal X}}|\geq({\alpha_{1}}-7\eta^{1/64})k\,\,\,\text{and}\,\,\,|{\mathcal Z}_R\cap \widetilde{{\mathcal X}}|\leq 8\eta^{1/64} k.$$
By (\ref{HM1}), ${\mathcal G}_1[{\mathcal X}_1]$ is $(1-2\eta^{1/64})$-complete and so is $4\eta^{1/64}k$-almost-complete. Thus, ${\mathcal G}_1[\widetilde{{\mathcal X}}]$ consists of at least $|\widetilde{{\mathcal X}}|-7\eta^{1/64}k$ vertices of degree at least $|\widetilde{{\mathcal X}}|-12\eta^{1/64}k$ and at most $8\eta^{1/64}k$ vertices of degree at least $16\eta^{1/64}k$ and so, by Theorem~\ref{chv}, ${\mathcal G}_1[\widetilde{{\mathcal X}}]$ is Hamiltonian and, thus, contains a red connected-matching on at least ${\alpha_{1}} k$ vertices.
Thus, we may, instead, suppose that $|{\mathcal X}_2\cup{\mathcal Z}_B|\geq({\alpha_{1}}-2\eta^{1/2})k\geq(\half{\alpha_{1}}I+\eta^{1/2})k$, in which case, we consider the blue graph ${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2\cup {\mathcal Z}_B]$. Given the relative sizes of~${\mathcal X}_1$ and~${\mathcal X}_2\cup {\mathcal Z}_B$ and the large minimum-degree of the graph, we can use Theorem~\ref{moonmoser} to give a blue connected-matching on at least~${\alpha_{1}}I k $ vertices. Indeed, by~(\ref{sizesIA}) and (\ref{IA0a}), we have
$|{\mathcal X}_1|\geq({\alpha_{1}}-7\eta^{1/64})k\geq({\alpha_{1}}I-8\eta^{1/64})k$ and may choose subsets $\widetilde{{\mathcal X}}_1\subseteq {\mathcal X}_1$, $\widetilde{{\mathcal X}}_2\subseteq {\mathcal X}_2\cup {\mathcal Z}_B$ such that
\begin{align*}
(\half{\alpha_{1}}I +\eta^{1/2})k\leq|\widetilde{{\mathcal X}}_1|&=|\widetilde{{\mathcal X}}_2|\leq(\half{\alpha_{1}}I +2\eta^{1/2})k, & |\widetilde{{\mathcal X}}_2\cap {\mathcal Z}_B|\leq8\eta^{1/64}k.
\end{align*}
Recall that ${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2]$ is $4\eta^{1/64}k$-almost-complete and that all vertices in ${\mathcal Z}_B$ have blue degree at least $|\widetilde{{\mathcal X}}_1|-16\eta^{1/64}k$ in ${\mathcal G}[\widetilde{{\mathcal X}}_1,\widetilde{{\mathcal X}}_2]$. Thus, since $|\widetilde{{\mathcal X}}_2\cap {\mathcal Z}_B|\leq 8\eta^{1/64}k$ and $\eta\leq({\alpha_{1}}I/100)^{128}$, for any pair of vertices $x_1\in\widetilde{{\mathcal X}}_1$ and $x_2\in\widetilde{{\mathcal X}}_2$, we have $$d(x_1)+d(x_2)\geq |\widetilde{{\mathcal X}}_1|+|\widetilde{{\mathcal X}}_2|-32\eta^{1/64}k\geq(\half{\alpha_{1}}I+\eta^{1/2}) k+1.$$ Therefore, by Theorem~\ref{moonmoser}, ${\mathcal G}_2[\widetilde{{\mathcal X}}_1,\widetilde{{\mathcal X}}_2]$ contains a blue connected-matching on at least $ {\alpha_{1}}I k $ vertices, which is odd since every blue component of ${\mathcal G}[{\mathcal W}]$ is assumed to be odd, thus completing Part I.A.
\subsection*{Part I.B: $H \in {\mathcal H}_1, K \in {\mathcal H}_2$.}
From Theorem~\hyperlink{reD}{D}, we know that this case can arise only when ${\alpha_{1}}$ and ${\alpha_{1}}I$ are close in size, specifically when
\begin{equation}
\label{a12same}
(1-\eta^{1/16}){\alpha_{1}}\leq{\alpha_{1}}I
\leq (1-\eta^{1/16})^{-1} {\alpha_{1}} \leq (1+2\eta^{1/16}) {\alpha_{1}}.
\end{equation}
Following the same argument as in Part I.A. but exchanging the roles of red and blue and the roles of~${\alpha_{1}}$ and ${\alpha_{1}}I$ when necessary, we may obtain a partition of ${\mathcal V}({\mathcal G})$ into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$ with
\begin{equation}
\left.
\label{IC0a}
\begin{aligned}
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,
({\alpha_{1}}-7\eta^{1/64})k\leq|{\mathcal X} _1|&=p\leq {\alpha_{1}} k,
\quad\quad\quad\quad\quad\quad\quad\,\,\,\\
(\half{\alpha_{1}}I-7\eta^{1/64})k\leq|{\mathcal X} _2|&=q\leq \half{\alpha_{1}}I k,
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\, \\
({\alpha_{1}}I-7\eta^{1/64})k\leq|{\mathcal Y} _1|&=r\leq {\alpha_{1}}I k,
\quad\quad\quad\quad\quad\quad\quad\,\,\,\\
(\half{\alpha_{1}}-7\eta^{1/64})k\leq|{\mathcal Y} _2|&=s\leq \half{\alpha_{1}} k,
\end{aligned}
\right\}\!
\end{equation}
such that
\begin{itemize}
\labitem{HB1}{HO1}
${\mathcal G}_1[{\mathcal X} _1]$ and ${\mathcal G}_2[{\mathcal Y} _1]$ are each $(1-2\eta^{1/16})$-complete (and thus connected), \\
${\mathcal G}_2[{\mathcal X} _1]$ and ${\mathcal G}_1[{\mathcal Y} _1]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_3[{\mathcal X} _1]$ and ${\mathcal G}_3[{\mathcal Y} _1]$ each contain no edges;
\labitem{HB2}{HO2} ${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_1[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $(1-2\eta^{1/16})$-complete (and thus connected),\\
${\mathcal G}_1[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_2[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_3[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_3[{\mathcal Y}_1,{\mathcal Y}_2]$ each contain no edges;
\labitem{HB3}{HO4}
${\mathcal G}[{\mathcal X}_2]$ and ${\mathcal G}[{\mathcal Y}_2]$ each contain no green edges,
\\ all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_2] \cup {\mathcal G}[{\mathcal X}_2,{\mathcal Y}_1] \cup {\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured exclusively green;
\labitem{HB4}{HO3} ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2]$ is $4\eta^4 k$-almost-complete (and thus connected), \\
every component of ${\mathcal G}_2[{\mathcal X}_1\cup {\mathcal X}_2 \cup {\mathcal Y}_1 \cup {\mathcal Y}_2]$ is odd.
\end{itemize}
By (\ref{sizesI}) and (\ref{IC0a}), since $\eta\leq\left(\frac{{\alpha_{1}}}{100}\right)^{128},\left(\frac{{\alpha_{1}}I}{100}\right)^{128}$, we have
\begin{align*}
|{\mathcal X}_1|&\geq({\alpha_{1}}-7\eta^{1/64})k\geq(\half{\alpha_{1}}II+4\eta^2)k, \\
|{\mathcal Y}_1|&\geq({\alpha_{1}}I-7\eta^{1/64})k\geq(\half{\alpha_{1}}II+4\eta^2)k.
\end{align*}
By (\ref{HO3}), ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2]$ is $4\eta^2 k$-almost-complete and by (\ref{HO4}) all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured exclusively green.
By the same argument given in Part I.A,
if there existed a pair of green edges $xz$, $yz$ with $x\in{\mathcal X}_1\cup{\mathcal X}_2$, $y\in{\mathcal Y}_1\cup{\mathcal Y}_2$, $z\in{\mathcal Z}$,
then ${\mathcal G}$ would contain an odd green connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus, again, we can partition ${\mathcal Z}$ into ${\mathcal Z}_X\cup{\mathcal Z}_Y$ such that there are no green edges in~${\mathcal G}[{\mathcal Z}_X,{\mathcal X}_1\cup{\mathcal X}_2]$ or in~${\mathcal G}[{\mathcal Z}_Y,{\mathcal Y}_1\cup{\mathcal Y}_2]$.
\begin{figure}
\caption{Partition of the vertices of the reduced-graph. }
\end{figure}
Recalling that $|V({\mathcal G})|\geq(c-\eta)k\geq(4\alpha_1-\eta)k$, without loss of generality, we may assume that at least one of
${\mathcal X}_1\cup{\mathcal X}_2\cup {\mathcal Z}_X$ or ${\mathcal Y}_1 \cup {\mathcal Y}_2\cup {\mathcal Z}_Y$ contains at least $(2\alpha_1-\eta)k$ vertices.
Suppose, for now, that $$|{\mathcal X}_1\cup{\mathcal X}_2\cup {\mathcal Z}_X|\geq (2\alpha_1-\eta)k\geq ({\alpha_{1}}+\eta^{1/2})k+({\alpha_{1}}I-3\eta^{1/16})k.$$
In that case, we further partition ${\mathcal Z}_X$ into ${\mathcal Z}_{XR}\cup {\mathcal Z}_{XB}$ by defining
\begin{align*}
{\mathcal Z}_{XB}&=\{ z\in {\mathcal Z}_X \text{ such that } z \text{ has at least } |{\mathcal X}_1|-16\eta^{1/2}k \text{ blue neighbours in } {\mathcal X}_1\};\text{ and}
\\ {\mathcal Z}_{XR}&={\mathcal Z}_X\backslash {\mathcal Z}_{XB}=\{ z\in {\mathcal Z}_X \text{ such that } z \text{ has at least } 16\eta^{1/2}k \text{ red neighbours in } {\mathcal X}_1\}.
\end{align*}
Given this definition, if $|{\mathcal X}_1\cup{\mathcal Z}_{XR}|\geq({\alpha_{1}}+\eta^{1/2})k$, then,
by the same argument given in Part I.A, ${\mathcal G}[{\mathcal X}_1\cup {\mathcal Z}_{XR}]$ contains
a red connected-matching on at least ${\alpha_{1}} k$ vertices.
Thus, we may, instead, suppose that $|{\mathcal X}_2\cup{\mathcal Z}_{XB}|\geq({\alpha_{1}}-2\eta^{1/2})k\geq(\half{\alpha_{1}}I+\eta^{1/2})k$, in which case,
by the same argument given in Part I.A,
${\mathcal G}[{\mathcal X}_1,{\mathcal X}_2\cup{\mathcal Z}_{XB}]$
contains a blue connected-matching on at least $ {\alpha_{1}}I k $ vertices, which is odd since every blue component of ${\mathcal G}[{\mathcal W}]$ is assumed to be odd.
This would be sufficient to complete Part~I.B. Therefore we may instead assume that $$|{\mathcal Y}_1\cup{\mathcal Y}_2\cup {\mathcal Z}_Y|\geq (2\alpha_1-\eta)k\geq({\alpha_{1}}I+\eta^{1/2})k+({\alpha_{1}}-3\eta^{1/16})k.$$
In which case, we further partition ${\mathcal Z}_Y$ into ${\mathcal Z}_{YR}\cup {\mathcal Z}_{YB}$ by defining
\begin{align*}
{\mathcal Z}_{YR}&=\{ z\in {\mathcal Z}_Y \text{ such that } z \text{ has at least } |{\mathcal Y}_1|-16\eta^{1/2}k \text{ red neighbours in } {\mathcal Y}_1\};\text{ and}
\\ {\mathcal Z}_{YB}&={\mathcal Z}_{Y}\backslash {\mathcal Z}_{YR}=\{ z\in {\mathcal Z}_Y \text{ such that } z \text{ has at least } 16\eta^{1/2}k \text{ blue neighbours in } {\mathcal Y}_1\}.
\end{align*}
Given this definition, if $|{\mathcal Y}_1\cup{\mathcal Z}_{YB}|\geq({\alpha_{1}}I+\eta^{1/2})k$, then
the same argument as used in Part I.A when considering ${\mathcal X}_1\cup{\mathcal Z}_R$ gives a blue connected-matching in ${\mathcal G}[{\mathcal Y}_1\cup{\mathcal Z}_{YB}]$ on
at least ${\alpha_{1}}I k$ vertices, which is odd since every blue component of ${\mathcal G}[{\mathcal W}]$ is assumed to be odd.
Thus, we may, instead, suppose that $|{\mathcal Y}_2\cup{\mathcal Z}_{YR}|\geq({\alpha_{1}}-3\eta^{1/2})k\geq(\half{\alpha_{1}}+\eta^{1/2})k$, in which case,
the same argument as used in Part I.A, when considering ${\mathcal G}[{\mathcal X}_1,{\mathcal X}_2\cup{\mathcal Z}_B]$ gives a red connected-matching in ${\mathcal G}[{\mathcal Y}_1,{\mathcal Y}_2\cup{\mathcal Z}_{YR}]$ on
at least $ {\alpha_{1}} k $ vertices, completing Part I.B.
\subsection*{Part I.C: $H, K \in {\mathcal H}_2$.}
From Theorem~\hyperlink{reD}{D}, we know that this case can arise only when
\begin{equation}
\label{a2big}
{\alpha_{1}}I\geq(1-\eta^{1/16}){\alpha_{1}}.
\end{equation}
Following the same argument as in Part I.A. but exchanging the roles of red and blue and the roles of ${\alpha_{1}}$ and ${\alpha_{1}}I$, we may obtain a partition of ${\mathcal V}({\mathcal G})$ into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$ with
\begin{equation}
\left.
\label{IB0a}
\begin{aligned}
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,
({\alpha_{1}}I-7\eta^{1/64})k\leq|{\mathcal X} _1|&=|{\mathcal Y} _1|=p\leq {\alpha_{1}}I k,
\quad\quad\quad\quad\quad\quad\quad\,\,\,\\
(\half{\alpha_{1}}-7\eta^{1/64})k\leq|{\mathcal X} _2|&=|{\mathcal Y} _2|=q\leq \half{\alpha_{1}} k,
\end{aligned}
\right\}\!
\end{equation}
such that
\begin{itemize}
\labitem{HC1}{HN1}
${\mathcal G}_1[{\mathcal X} _1]$ and ${\mathcal G}_1[{\mathcal Y} _1]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_2[{\mathcal X} _1]$ and ${\mathcal G}_2[{\mathcal Y} _1]$ are each $(1-2\eta^{1/16})$-complete (and thus connected), \\
${\mathcal G}_3[{\mathcal X} _1]$ and ${\mathcal G}_3[{\mathcal Y} _1]$ each contain no edges;
\labitem{HC2}{HN2} ${\mathcal G}_1[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_1[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $(1-2\eta^{1/16})$-complete (and thus connected),\\
${\mathcal G}_2[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_2[{\mathcal Y}_1,{\mathcal Y}_2]$ are each $2\eta^{1/16}$-sparse, \\
${\mathcal G}_3[{\mathcal X}_1,{\mathcal X}_2]$ and ${\mathcal G}_3[{\mathcal Y}_1,{\mathcal Y}_2]$ each contain no edges;
\labitem{HC3}{HN4}
${\mathcal G}[{\mathcal X}_2]$ and ${\mathcal G}[{\mathcal Y}_2]$ each contain no green edges,
\\ all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_2] \cup {\mathcal G}[{\mathcal X}_2,{\mathcal Y}_1] \cup {\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured exclusively green;
\labitem{HC4}{HN3} ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2]$ is $4\eta^4 k$-almost-complete (and thus connected), \\
every component of ${\mathcal G}_2[{\mathcal X}_1\cup {\mathcal X}_2 \cup {\mathcal Y}_1 \cup {\mathcal Y}_2]$ is odd.
\end{itemize}
\begin{figure}
\caption{Coloured structure of the reduced-graph in Part I.C.}
\end{figure}
The remainder of this section focuses on showing that the original graph must have a similar structure, which can then be exploited in order to force a cycle of appropriate length, colour and parity.
By definition, each vertex $V_{i}$ of ${\mathcal G}=({\mathcal V},{\mathcal E})$ represents a class of vertices of $G=(V,E)$. In what follows, we will refer to these classes as \emph{clusters} (of vertices of~$G$). Additionally, recall, from (\ref{NK}), that
$$(1-\eta^4)\frac{N}{K}\leq |V_i|\leq \frac{N}{K}.$$
Since $n> \max\{n_{\ref{th:blow-up}}(4,0,0,\eta), n_{\ref{th:blow-up}}(1,2,0,\eta), n_{\ref{th:blow-up}}(1,0,2,\eta)\}$, we can (as in the proof of Theorem~\ref{th:blow-up}) prove that $$
|V_i|\geq \left(1+\frac{\eta}{24}\right)\frac{n}{k}> \frac{n}{k}.$$
Thus, we can partition the vertices of~$G$ into sets $X_{1}, X_{2}, Y_{1}, Y_{2}$ and $Z$ corresponding to the partition of the vertices of~${\mathcal G}$ into ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1,{\mathcal Y}_2$ and ${\mathcal Z}$. Then,~$X_1, Y_1$ each contain~$p$ clusters of vertices and $X_2, Y_2$ each contain~$q$ clusters and, recalling (\ref{IB0a}), we have
\begin{equation}
\left.
\label{IB1}
\begin{aligned}
\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,
|X_1|,|Y_1| & = p|V_1| \geq ({\alpha_{1}}I-7\eta^{1/64})n,
\quad\quad\quad\quad\quad\quad\quad\quad
\\
|X_2|,|Y_2| & = q|V_1|\geq (\half {\alpha_{1}}-7\eta^{1/64})n.
\end{aligned}
\right\}\!
\end{equation}
In what follows, we will \textit{remove} vertices from $X_1\cup X_2\cup Y_1\cup Y_2$ by moving them into~$Z$ in order to show that, in what remains, $G[X_1\cup X_2\cup Y_1 \cup Y_2]$ has a particular coloured structure. We begin by proving the below claim which essentially tells us that $G$ has similar coloured structure to ${\mathcal G}$:
\begin{claim}
\label{G-struct}
We can \textit{remove} at most $5\eta^{1/128}n$ vertices from each of~$X_1$ and~$Y_1$ and at most $2\eta^{1/128}n$ vertices from each of~$X_2$ and~$Y_2$ so that the following conditions~hold.
\end{claim}
\begin{itemize}
\labitem{HC5}{HN5Z} $G_2[X_1]$ and $G_2[Y_1]$ are each $4\eta^{1/128}n$-almost-complete; and
\labitem{HC6}{HN5} $G_1[X_1,X_2]$ and $G_1[Y_1,Y_2]$ are each $3\eta^{1/128}n$-almost-complete.
\end{itemize}
\begin{proof} Consider the complete three-coloured graph $G[X_1]$ and recall from (\ref{HN1}) and (\ref{HN3}) that~${\mathcal G}[{\mathcal X}_1]$ contains only red and blue edges and is $4\eta^2k$-almost-complete. Given the structure of~${\mathcal G}$, we can bound the number of non-blue edges in $G[X_1]$ as follows:
Since regularity provides no indication as to the colours of the edges contained within each cluster, these could potentially all be non-blue. There are~$p$ clusters in $X_1$, each with at most $N/K$ vertices. Thus, there are at most $$p\binom{N/K}{2}$$ non-blue edges in~$X_1$ within the clusters.
Now, consider a pair of clusters $(U_1,U_2)$ in $X_1$. If $(U_1, U_2)$ is not $\eta^4$-regular, then we can only trivially bound the number of non-blue edges in $G[U_1,U_2]$ by $|U_1||U_2|\leq (N/K)^2$. However, by~(\ref{HN3}), there are at most $4\eta^2 |{\mathcal X}_1| k$ such pairs in ${\mathcal G}$. Thus, we can bound the number of non-blue edges coming from non-regular pairs by
$$4\eta^2 pk \left(\frac{N}{K}\right)^2.$$
If the pair is regular and~$U_1$ and~$U_2$ are joined by a red edge in the reduced-graph, then, again, we can only trivially bound the number of non-blue edges in $G[U_1,U_2]$ by $(N/K)^2$. However, by~(\ref{HN1}), ${\mathcal G}_1[X_1]$ is $\eta^{1/64}$-sparse, so there are at most $\eta^{1/64}\binom{p}{2}$ red edges in~${\mathcal G}[X_1]$ and, thus, there are at most $$2\eta^{1/64}\binom{p}{2}\left(\frac{N}{K}\right)^2$$ non-blue edges in $G[X_1]$ corresponding to such pairs of clusters.
Finally, if the pair is regular and~$U_1$ and~$U_2$ are not joined by a red edge in the reduced-graph, then the red density of the pair is at most~$\eta$ (since, if the density were higher, they would be joined by a red edge). Likewise, the green density of the pair is at most~$\eta$ (since there are no green edges in ${\mathcal G}[X_1]$). Thus, there are at most $$2\eta\binom{p}{2}\left(\frac{N}{K}\right)^2$$ non-blue edges in $G[X_1]$ corresponding to such pairs of clusters.
Summing the four possibilities above gives an upper bound of
$$p \binom{N/K}{2} + 4\eta^2 pk \left(\frac{N}{K}\right)^2+2\eta^{1/64}\binom{p}{2}\left(\frac{N}{K}\right)^{2}+2\eta\binom{p}{2}\left(\frac{N}{K}\right)^{2} $$ non-blue edges in $G[X_1]$.
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $p\leq {\alpha_{1}}I k \leq k$, we obtain
$$e(G_1[X_1])+e(G_3[X_1])\leq [4\eta + 16\eta^{2} +4\eta^{1/64}+4 \eta]n^2\leq 6\eta^{1/64}n^2.$$
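For the reader's convenience, the four summands above are bounded, in order, as follows (using $N\leq4n$, $p\leq k$, $K\geq2k$ and $K\geq\eta^{-1}$, so that $K^{2}\geq4k^{2}$ and $K^{2}\geq2k\eta^{-1}$):
\begin{align*}
p \binom{N/K}{2}&\leq\frac{k}{2}\left(\frac{4n}{K}\right)^{2}=\frac{8kn^{2}}{K^{2}}\leq4\eta n^{2}, &
4\eta^2 pk \left(\frac{N}{K}\right)^2&\leq\frac{64\eta^{2}k^{2}n^{2}}{K^{2}}\leq16\eta^{2}n^{2},\\
2\eta^{1/64}\binom{p}{2}\left(\frac{N}{K}\right)^{2}&\leq\frac{16\eta^{1/64}k^{2}n^{2}}{K^{2}}\leq4\eta^{1/64}n^{2}, &
2\eta\binom{p}{2}\left(\frac{N}{K}\right)^{2}&\leq4\eta n^{2},
\end{align*}
and $8\eta+16\eta^{2}\leq24\eta\leq2\eta^{1/64}$ since $\eta\leq10^{-50}$.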
Since $G[X_1]$ is complete and contains at most $6\eta^{1/64}n^2$ non-blue edges, there are at most~$3\eta^{1/128}n$ vertices with blue degree at most $|X_1|-1-4\eta^{1/128}n$. Removing these vertices from $X_1$, that is, re-assigning these vertices to~$Z$, gives a new~$X_1$
such that every vertex in $G[X_1]$ has blue degree at least $|X_1|-1-4\eta^{1/128}n$. The same argument works for $G[Y_1]$, thus completing the proof of (\ref{HN5Z}).
Next, consider $G[X_1,X_2]$. Considering (\ref{HN2}), in a similar way to the above, we can bound the number of non-red edges in $G[X_1,X_2]$ by
$$4\eta^2 pk \left(\frac{N}{K}\right)^2+2\eta^{1/64}pq\left(\frac{N}{K}\right)^{2}+2\eta pq\left(\frac{N}{K}\right)^{2}. $$
Here, the first term bounds the number of non-red edges between non-regular pairs, the second bounds the number of non-red edges between pairs of clusters that are joined by a blue edge in the reduced-graph and the third bounds the number of non-red edges between pairs of clusters that are not joined by a blue edge in the reduced-graph.
Since $K\geq 2k$, $N\leq 4n$, $p \leq {\alpha_{1}}I k \leq k$ and $q\leq \half{\alpha_{1}} k\leq\tfrac{1}{2}k$, we obtain
$$e(G_2[X_1,X_2])+e(G_3[X_1,X_2])\leq 6 \eta^{1/64}n^2.$$
Since $G[X_1,X_2]$ is complete and contains at most $6\eta^{1/64}n^2$ non-red edges, there are at most $2\eta^{1/128}n$ vertices in~$X_1$ with red degree to~$X_2$ at most $|X_2|-3\eta^{1/128}n$ and at most~$2\eta^{1/128}n$ vertices in~$X_2$ with red degree to~$X_1$ at most $|X_1|-3\eta^{1/128}n$. Removing these vertices results in every vertex in~$X_1$ having degree in $G_1[X_1,X_2]$ at least $|X_2|-3\eta^{1/128}n$ and every vertex in~$X_2$ having degree in $G_1[X_1,X_2]$ at least $|X_1|-3\eta^{1/128}n$.
We repeat the above for~$G[Y_1,Y_2]$, removing vertices such that every (remaining) vertex in~$Y_1$ has degree in $G_1[Y_1,Y_2]$ at least $|Y_2|-3\eta^{1/128}n$ and every (remaining) vertex in~$Y_2$ has degree in $G_1[Y_1,Y_2]$ at least $|Y_1|-3\eta^{1/128}n$, thus completing the proof of (\ref{HN5}).
\end{proof}
Having discarded some vertices, recalling~(\ref{IB1}), we have
\begin{align}
\label{IB2}
|X_1|,|Y_1| & \geq ({\alpha_{1}}I-6\eta^{1/128})n,
& |X_2|,|Y_2| & \geq (\half {\alpha_{1}}-3\eta^{1/128})n,
\end{align}
and can proceed to the {end-game}.
The following pair of claims allow us to determine the colouring of $G[X_1,Y_1]$:
\begin{claim}
\label{nogreen2}
\hspace{-2.8mm} {\rm \bf a.} If there exist distinct vertices $x_1,x_2 \in X_1$ and $y_1, y_2\in Y_1$ such~that~$x_1y_1$ and~$x_2y_2$ are coloured blue, then~$G$ contains a blue cycle of length exactly~$\langle {\alpha_{1}}I n \rangle$.
{\rm \bf Claim~\ref{nogreen2}.b.} If there exist distinct vertices $x_1,x_2 \in X_1$ and $y_1, y_2\in Y_1$ such~that~$x_1y_1$ and~$x_2y_2$ are coloured red, then~$G$ contains a red cycle of length exactly~${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$.
\end{claim}
\begin{proof}
(a) Suppose there exist distinct vertices $x_1,x_2\in X_1$ and $y_1,y_2\in Y_1$ such that the edges $x_1y_1$ and $x_2y_2$ are coloured blue. Then, let $\widetilde{X}_1$ be any set of $\half (\langle {\alpha_{1}}I n \rangle+1) $ vertices in~$X_1$ such that $x_1,x_2 \in \widetilde{X}_1$.
By (\ref{HN5Z}),
every vertex in $\widetilde{X}_1$ has degree at least $|\widetilde{X}_1|-1-4\eta^{1/128}n$ in $G_2[\widetilde{X}_1]$. Since $\eta\leq({\alpha_{1}}I/100)^{128}$, we have $|\widetilde{X}_1|-1-4\eta^{1/128}n \geq \half |\widetilde{X}_1| +2$. So, by Corollary~\ref{dirac2}, there exists a Hamiltonian path in $G_2[\widetilde{X}_1]$ between $x_1, x_2$, that is, there exists a blue path between~$x_1$ and $x_2$ in $G[X_1]$ on exactly $\half (\langle {\alpha_{1}}I n \rangle+1)$ vertices.
Likewise, given any two vertices $y_1,y_2$ in~$Y_1$, there exists a blue path between~$y_1$ and~$y_2$ in~$G[Y_1]$ on exactly $\half (\langle {\alpha_{1}}I n \rangle-1)$ vertices.
Combining the edges $x_1y_1$ and $x_2y_2$ with the blue paths gives a blue cycle on exactly $\langle {\alpha_{1}}I n \rangle$ vertices.
(b) Suppose there exist distinct vertices $x_1,x_2\in X_1$ and $y_1,y_2\in Y_1$ such that $x_1y_1$ and $x_2y_2$ are coloured red. Then, let $\widetilde{X}_2$ be any set of
$$\ell_1=\left\lfloor \frac{{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}-2}{4} \right\rfloor \geq 3\eta^{1/128} n +2 $$ vertices from $X_2$. By~(\ref{HN5}), $x_1$ and $x_2$ each have at least two red neighbours in $\widetilde{X}_2$ and, since $\eta\leq({\alpha_{1}}I/100)^{128}$, every vertex in $\widetilde{X}_2$ has degree at least
$|X_1|-3\eta^{1/128}n\geq \half|X_1| +\half|\widetilde{X}_2| +1$ in $G_1[X_1,\widetilde{X}_2]$. Since $|X_1|>\ell_1+1$, by Lemma~\ref{bp-dir}, $G_1[X_1,\widetilde{X}_2]$ contains a path on exactly $2\ell_1+1$ vertices from~$x_1$ to~$x_2$.
Likewise, given $y_1, y_2 \in Y_1$, for any set $\widetilde{Y}_2$ of
$$\ell_2=\left\lceil \frac{{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}-2}{4} \right\rceil \geq 3\eta^{1/128} n +2 $$ vertices from $Y_2$, $G_1[Y_1,\widetilde{Y}_2]$ contains a path on exactly $2\ell_2+1$ vertices from~$y_1$ to~$y_2$.
Combining the edges~$x_1y_1$,~$x_2y_2$ with the red paths found gives a red cycle on exactly $2\ell_1+2\ell_2+2={\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices, completing the proof of the claim.
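For the final count, write $m={\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ and note that, under the standing convention that $m$ is even, $\ell_1+\ell_2=\left\lfloor\tfrac{m-2}{4}\right\rfloor+\left\lceil\tfrac{m-2}{4}\right\rceil=\tfrac{m-2}{2}$ (this holds whether $m\equiv0$ or $m\equiv2\pmod4$), so that $2\ell_1+2\ell_2+2=m$ as claimed.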
\end{proof}
The existence of a red cycle on ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices or a blue cycle on $\langle {\alpha_{1}}I n\rangle$ vertices would be sufficient to complete the proof of Theorem~C. Thus, there cannot exist such a pair of vertex-disjoint red edges or such a pair of vertex-disjoint blue edges in $G[X_1,Y_1]$. Thus, after removing at most one vertex from each of~$X_1$ and~$Y_1$, we may assume that the green graph $G_3[X_1,Y_1]$ is complete.
Then, recalling~(\ref{IB2}), we have
\begin{align}
\label{IB3}
|X_1|, |Y_1|&\geq ({\alpha_{1}}I-8\eta^{1/128})n,
& |X_2|, |Y_2|&\geq (\half {\alpha_{1}} - 4\eta^{1/128})n.
\end{align}
We now consider $Z$. We claim that there can exist no vertex in $Z$ having a green edge to both $X_1$ and $Y_1$. Indeed, suppose that there existed such a vertex $z$ and vertices $x\in X_1$ and $y\in Y_1$ such that $xz$ and $yz$ are both coloured green. We know that $G_3[X_1,Y_1]$ is complete and that $|X_1|,|Y_1|\geq({\alpha_{1}}I-8\eta^{1/128})n\geq({\alpha_{1}}II-9\eta^{1/128})n$. Thus, we can greedily construct a path on $\langle {\alpha_{1}}II n \rangle -1$ vertices in $G_3[X_1,Y_1]$ from $x$ to $y$ which, together with the edges $xz$ and $yz$ gives a green cycle of length exactly~$\langle {\alpha_{1}}II n \rangle$.
\begin{figure}
\caption{Using $z\in Z$ to construct an odd green cycle. }
\end{figure}
Thus, defining $Z_X$ to be the set of vertices in $Z$ having no green edges to $X_1\cup X_2$ and $Z_Y$ to be the set of vertices in $Z$ having no green edges to $Y_1\cup Y_2$, we see that we may assume that $Z_X\cup Z_Y$ is a partition of $Z$.
Then, recalling that $|V(G)|\geq {\langle \! \langle} {\alpha_{1}} n{\rangle \! \rangle}+2\langle {\alpha_{1}}I n \rangle-3$, we may assume, without loss of generality that
$$|X_1 \cup X_2 \cup Z_X|\geq \half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} + \langle {\alpha_{1}}I n \rangle -1.$$
\begin{figure}
\caption{Colouring of $G[X_1\cup X_2\cup Z_X]$.}
\end{figure}
Given (\ref{HN5Z}) and (\ref{HN5}), we can obtain upper bounds on $|X_1|$, $|X_2|$, $|Y_1|$ and $|Y_2|$ as follows: By Corollary~\ref{dirac1a}, for every integer $m$ such that $8\eta^{1/128}n+2\leq m\leq |X_1|$, we know that $G_2[X_1]$ contains a blue cycle of length $m$. Thus, in order to avoid having a blue cycle of length $\langle {\alpha_{1}}I n \rangle$, we may assume that $|X_1|<\langle {\alpha_{1}}I n \rangle$.
By Corollary~\ref{moonmoser2}, for every even integer $m$ such \mbox{that $12\eta^{1/128}n+2\leq m \leq 2\min\{|X_1|,|X_2|\}$}, we know that $G_1[X_1,X_2]$ contains a red cycle of length $m$. Recalling (\ref{a2big}) and (\ref{IB3}), we have $|X_1|\geq({\alpha_{1}}I-8\eta^{1/128})n\geq\half{\alpha_{1}} n$. Thus, in order to avoid having a red cycle on exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices, we may assume that $|X_2|<\half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$.
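Here, the bound $({\alpha_{1}}I-8\eta^{1/128})n\geq\half{\alpha_{1}} n$ can be checked directly: by (\ref{a2big}) and since ${\alpha_{1}}\leq1$, we have ${\alpha_{1}}I\geq{\alpha_{1}}-\eta^{1/16}$, while $\eta\leq\left(\frac{{\alpha_{1}}}{100}\right)^{128}$ gives $\eta^{1/16}+8\eta^{1/128}\leq\left(\tfrac{{\alpha_{1}}}{100}\right)^{8}+\tfrac{8{\alpha_{1}}}{100}\leq\tfrac{9{\alpha_{1}}}{100}\leq\half{\alpha_{1}}$.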
In summary, we~have
\begin{equation}
\label{IB7}
\left.
\begin{aligned}
\,\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,
({\alpha_{1}}I-8\eta^{1/128})n&\leq |X_1|< \langle {\alpha_{1}}I n \rangle,
\quad\quad\quad\quad\quad\quad\quad\quad\quad\\
(\half {\alpha_{1}} - 4\eta^{1/128})n&\leq |X_2|< \half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}.
\end{aligned}
\right\}\!
\end{equation}
We now let
\begin{align*}
Z_R&=\{ z\in Z_X \text{ such that } z \text{ has at least } |X_1|-16\eta^{1/128}n \text{ red neighbours in } X_1\};\text{ and}
\\ Z_B&=Z_X\backslash Z_R=\{ z\in Z_X \text{ such that } z \text{ has at least } 16\eta^{1/128}n \text{ blue neighbours in } X_1\}.
\end{align*}
\begin{figure}
\caption{Partition of $Z_X$ into $Z_R\cup Z_B$.}
\end{figure}
In that case, we have $Z_X=Z_B\cup Z_R$ and, so, either $|X_1\cup Z_B|\geq\langle {\alpha_{1}}I n \rangle$ or $|X_2\cup Z_R|\geq\half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$.
If $|X_1\cup Z_B|\geq\langle {\alpha_{1}}I n \rangle$, then we show that $G_2[X_1\cup Z_B]$ contains a long blue cycle as follows: Let~$X$ be any set of~$\langle {\alpha_{1}}I n \rangle$ vertices from $X_1 \cup Z_B$ consisting of every vertex from~$X_1$ and~$\langle {\alpha_{1}}I n \rangle-|X_1|$ vertices from~$Z_B$. By (\ref{HN5Z}) and~(\ref{IB7}), the blue graph~$G_2[X]$ has at least $\langle {\alpha_{1}}I n \rangle-8\eta^{1/128}n$ vertices of degree at least $|X|-1-4\eta^{1/128}n$ and at most $8\eta^{1/128}n$ vertices of degree at least $16\eta^{1/128}n$. Thus, by Theorem~\ref{chv},~$G[X]$ contains a blue cycle on exactly~$\langle {\alpha_{1}}I n \rangle$ vertices.
Thus, we may, instead, assume that $|X_2\cup Z_R|\geq\half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$, in which case, we consider the red graph $G_1[X_1,X_2\cup Z_R]$. Given the relative sizes of~$X_1$ and~$X_2\cup Z_R$ and the large minimum-degree of the graph, we can use Theorem~\ref{moonmoser} to give a red cycle on exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices as follows: By (\ref{a2big}) and~(\ref{IB7}), we have $|X_1|\geq\half{\langle \! \langle}{\alpha_{1}} n {\rangle \! \rangle}$ and may choose subsets $\widetilde{X}_1\subseteq X_1$, $\widetilde{X}_2\subseteq X_2\cup Z_R$ such that $\widetilde{X}_2$ includes every vertex of $X_2$, $|\widetilde{X}_1|=|\widetilde{X}_2|=\half{\langle \! \langle}{\alpha_{1}} n {\rangle \! \rangle}$ and $|\widetilde{X}_2\cap Z_R|\leq6\eta^{1/128}n$. Recall, from (\ref{HN5}), that $G_1[X_1,X_2]$ is $3\eta^{1/128}n$-almost-complete and that, by definition, all vertices in $Z_R$ have red degree at least $|\widetilde{X}_1|-1-16\eta^{1/128}n$ in $G[\widetilde{X}_1,\widetilde{X}_2]$. Thus, since $|\widetilde{X}_2 \cap Z_R|\leq 6\eta^{1/128}n$, for any pair of vertices $x_1\in\widetilde{X}_1$ and $x_2\in\widetilde{X}_2$, we have $d(x_1)+d(x_2)\geq\half{\langle \! \langle}{\alpha_{1}} n {\rangle \! \rangle}+1$. Therefore, by Theorem~\ref{moonmoser}, $G_1[\widetilde{X}_1,\widetilde{X}_2]$ contains a red cycle on exactly~${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices, thus completing Part I.C. and thus, also, Part I.
\section{Proof of the main result -- Part II -- Case (v)}
\label{s:p12}
Suppose ${\mathcal G}$ contains
disjoint subsets of vertices ${\mathcal X}$ and ${\mathcal Y}$ such that $G[{\mathcal X}]$ contains a two-coloured spanning subgraph~$H$ from ${\mathcal H}_{2}^*\cup{\mathcal J}_b$ and $G[{\mathcal Y}]$ contains a two-coloured spanning subgraph $K$ from~\mbox{${\mathcal H}_{2}^*\cup{\mathcal J}_b$}, where
\begin{align*}
{\mathcal H}_{2}^*&={\mathcal H}\left((\beta-2\eta^{1/32})k,(\half{\alpha_{1}}-2\eta^{1/32})k,4\eta^4 k,\eta^{1/32},\gamma,\text{red}\right),\text{ }
\\ {\mathcal J}_b&={\mathcal J}\left(({\alpha_{1}}-18\eta^{1/2})k, 4\eta^4 k, \text{red}, \gamma\right),
\end{align*}
and $\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}$ and $\gamma\in\{\text{blue}, \text{green}\}$.
Additionally, from Theorem D, we know that ${\alpha_{1}}\leq\beta$. We divide the proof that follows into three sub-parts depending on whether $H$ and $K$ belong to ${\mathcal H}_2^*$ or~${\mathcal J}_b$:
\subsection*{Part II.A: $H, K \in {\mathcal H}_2^*$.}
In this case, recalling Theorem~\hyperlink{thD}{D}, we know that ${\alpha_{1}}\leq(1-\eta^{1/16})\beta$.
We consider four subcases:
{\it Subcase i: $\beta={\alpha_{1}}I$, $\gamma=\text{blue}$.}
Since $0<\eta<1$, we have ${\mathcal H}_{2}^*\subseteq{\mathcal H}_2$. Notice also that we have ${\alpha_{1}}\leq(1-\eta^{1/16}){\alpha_{1}}I\leq{\alpha_{1}}I$. Thus, we have ${\alpha_{1}}I\geq(1-\eta^{1/16}){\alpha_{1}}$ and, in fact, this case has already been dealt with in Part I.C.
Note that, in Part I.C, we knew that all blue components were odd, whereas here we do not; however, the argument there did not use that information.
{\it Subcase ii: $\beta={\alpha_{1}}II$, $\gamma=\text{blue}$.}
The vertex set~${\mathcal V}$ of~${\mathcal G}$ has a natural partition into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$, where ${\mathcal X}_1\cup{\mathcal X}_2$ is the partition of the vertices of $H$ given by Definition~\ref{d:H} and ${\mathcal Y}_1\cup{\mathcal Y}_2$ the corresponding partition of the vertices of $K$. In particular, $|{\mathcal X}_1|\geq ({\alpha_{1}}II-2\eta^{1/32})k$.
By the definition of ${\mathcal H}_2^*$, we know that ${\mathcal G}_2[{\mathcal X}_1]$ is $(1-\eta^{1/32})$-complete and so, by Theorem~\ref{dirac}, it contains a blue odd connected-matching on at least $|{\mathcal X}_1|-1\geq({\alpha_{1}}II-3\eta^{1/32})k$ vertices. Thus, in order to avoid having an odd blue connected-matching on at least ${\alpha_{1}}I k$ vertices, we may assume that ${\alpha_{1}}I\geq{\alpha_{1}}II-3\eta^{1/32}$ and, therefore, ${\alpha_{1}}I\geq({\alpha_{1}}-3\eta^{1/32})$. We may then follow the same argument given in Part I.C to obtain a monochromatic cycle and complete the proof.
{\it Subcase iii: $\beta={\alpha_{1}}II$, $\gamma=\text{green}$.}
We have ${\alpha_{1}}II\geq(1-\eta^{1/16}){\alpha_{1}}$ and can follow the argument given in Part I.C with the roles of blue and green (and ${\alpha_{1}}I$ and ${\alpha_{1}}II$) exchanged.
{\it Subcase iv: $\beta={\alpha_{1}}I$, $\gamma=\text{green}$.}
The vertex set~${\mathcal V}$ of~${\mathcal G}$ has a natural partition into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$, where ${\mathcal X}_1\cup{\mathcal X}_2$ is the partition of the vertices of $H$ given by Definition~\ref{d:H} and ${\mathcal Y}_1\cup{\mathcal Y}_2$ the corresponding partition of the vertices of $K$. In particular, $|{\mathcal X}_1|\geq ({\alpha_{1}}I-2\eta^{1/32})k$.
By the definition of ${\mathcal H}_2^*$, we know that ${\mathcal G}_3[{\mathcal X}_1]$ is $(1-\eta^{1/32})$-complete and so, by Theorem~\ref{dirac}, it contains a green odd connected-matching on at least $|{\mathcal X}_1|-1\geq({\alpha_{1}}I-3\eta^{1/32})k$ vertices. Thus, in order to avoid having an odd green connected-matching on at least ${\alpha_{1}}II k$ vertices, we may assume that ${\alpha_{1}}II\geq{\alpha_{1}}I-3\eta^{1/32}$ and, therefore, ${\alpha_{1}}II\geq({\alpha_{1}}-3\eta^{1/32})$. We may then follow the same argument given in Part I.C with the roles of blue and green (and ${\alpha_{1}}I$ and ${\alpha_{1}}II$) exchanged to obtain a monochromatic cycle and complete the proof.
\subsection*{Part II.B: $H \in {\mathcal H}_2^*, K\in{\mathcal J}_b$.}
In this case, recalling Theorem~\hyperlink{thD}{D}, we know that
\begin{equation}
\label{IIBSIZE}
\left.
\begin{aligned}
{\alpha_{1}}&\leq(1-\eta^{1/16})\beta,\\
\beta&<(\tfrac{3}{2}+2\eta^{1/4}){\alpha_{1}}.
\end{aligned}
\right\}
\end{equation}
Suppose that $\gamma$ is green. The vertex set~${\mathcal V}$ of~${\mathcal G}$ has a natural partition into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$ where ${\mathcal X}_1\cup{\mathcal X}_2$ is the partition of the vertices of $H$ given by Definition~\ref{d:H} and ${\mathcal Y}_1\cup{\mathcal Y}_2$ the corresponding partition of the vertices of $K$ given by Definition~\ref{d:J}.
Moving vertices from ${\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2$ to ${\mathcal Z}$, we may assume that
\begin{equation}
\label{IIBS1}
\left.
\begin{aligned}
(\beta-2\eta^{1/32})k & \leq |{\mathcal X}_1| =p \leq \beta k,\\
(\half{\alpha_{1}}-2\eta^{1/32})k & \leq |{\mathcal X}_2| =q \leq \half{\alpha_{1}} k, \\
({\alpha_{1}}-18\eta^{1/2})k & \leq |{\mathcal Y}_{1}|=|{\mathcal Y}_{2}| = r \leq {\alpha_{1}} k
\end{aligned}
\right\}
\end{equation}
and that
\begin{itemize}
\labitem{JB1}{JB1} $H$ and $K$ are each $4\eta^4 k$-almost-complete; and
\labitem{JB2}{JB2} {~}\!\!\!\!(a)
${\mathcal G}[{\mathcal X}_{1}]$ is $\eta^{1/32}$-sparse in red, contains no blue edges
and is $(1-\eta^{1/32})$-complete in green,
\item[{~}] {~\,}(b)
${\mathcal G}[{\mathcal X}_1,{\mathcal X}_2]$ is $(1-\eta^{1/32})$-complete in red, contains no blue edges
and is $\eta^{1/32}$-sparse in green,
\item[{~}] {~\,}(c)
all edges present in ${\mathcal G}[{\mathcal Y}_1]$ and ${\mathcal G}[{\mathcal Y}_2]$ are coloured exclusively red,
\item[{~}] {~\,}(d) all edges present in ${\mathcal G}[{\mathcal Y}_1,{\mathcal Y}_2]$ are coloured exclusively green.
\end{itemize}
By the definition of ${\mathcal H}_2^*$, we know that ${\mathcal G}_3[{\mathcal X}_1]$ is $(1-\eta^{1/32})$-complete and so, by Theorem~\ref{dirac}, it contains a green connected-matching on at least $|{\mathcal X}_1|-1\geq(\beta-4\eta^{1/32})k$ vertices and also clearly contains an odd green cycle. By definition of ${\mathcal J}_b$, ${\mathcal G}_3[{\mathcal Y}_1,{\mathcal Y}_2]$ is $4\eta^4 k$-almost-complete and so, by Lemma~\ref{l:eleven}, it contains a green connected-matching on at least $(2{\alpha_{1}}-40\eta^{1/32})k$ vertices. Thus, there can be no green edges present in ${\mathcal G}[X_1,Y_1\cup Y_2]$.
Again, by the definition of ${\mathcal H}_2^*$, we know that ${\mathcal G}_1[{\mathcal X}_1,{\mathcal X}_2]$ is $(1-\eta^{1/32})$-complete and so, by Lemma~\ref{l:ten}, it contains a red connected-matching on at least $({\alpha_{1}}I-8\eta^{1/64})k$ vertices.
By definition of ${\mathcal J}_b$, ${\mathcal G}_1[Y_1]$ and ${\mathcal G}_1[Y_2]$ are each $4\eta^4 k$-almost-complete and so, by Theorem~\ref{dirac}, each contains a red connected-matching on at least $({\alpha_{1}}-24\eta^{1/2})k$ vertices. Thus, there can be no red edges present in $G[X_1\cup X_2,Y_1\cup Y_2]$.
Thus, we know that
\begin{itemize}
\labitem{JB3}{JB3}
all edges present in ${\mathcal G}[X_1,Y_1\cup Y_2]$ are coloured exclusively blue.
\end{itemize}
\begin{figure}
\caption{Coloured structure of the reduced-graph in Part II.B.}
\end{figure}
The remainder of this section focuses on showing that the original graph must have a similar structure, which can then be exploited in order to force a cycle of appropriate length, colour and parity.
As before, each vertex $V_{i}$ of ${\mathcal G}=({\mathcal V},{\mathcal E})$ represents a cluster of vertices of $G=(V,E)$. Additionally, recall, from (\ref{NK}), that
$$(1-\eta^4)\frac{N}{K}\leq |V_i|\leq \frac{N}{K}.$$
Since $n> \max\{n_{\ref{th:blow-up}}(4,0,0,\eta), n_{\ref{th:blow-up}}(1,2,0,\eta), n_{\ref{th:blow-up}}(1,0,2,\eta)\}$, we can (as in the proof of Theorem~\ref{th:blow-up}) prove that $$
|V_i|\geq \left(1+\frac{\eta}{24}\right)\frac{n}{k}> \frac{n}{k}.$$
Thus, we can partition the vertices of~$G$ into sets $X_{1}, X_{2}, Y_{1}, Y_{2}$ and $Z$ corresponding to the partition of the vertices of~${\mathcal G}$ into ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1,{\mathcal Y}_2$ and ${\mathcal Z}$. Then,~$X_1$ contains $p$ clusters, $X_2$ contains $q$ clusters and $Y_1$ and $Y_2$ each contain~$r$ clusters of vertices and we have
\begin{equation}
\label{IIB1}
\left.
\begin{aligned}
|X_1|& = p|V_1| \geq (\beta-2\eta^{1/32})n,\\
|X_2|& = q|V_1| \geq (\half{\alpha_{1}}-2\eta^{1/32})n,\\
|Y_1|=|Y_2| & = r|V_1| \geq ({\alpha_{1}}-18\eta^{1/2})n.
\end{aligned}
\right\}
\end{equation}
In what follows, we will \textit{remove} vertices from $X_1\cup X_2\cup Y_1\cup Y_2$ by moving them into~$Z$ in order to show that, in what remains, $G[X_1\cup X_2\cup Y_1 \cup Y_2]$ has a particular coloured structure. We begin by proving the below claim which essentially tells us that $G$ has similar coloured structure to ${\mathcal G}$:
\begin{claim}
\label{G-structIIB}
We can \textit{remove} at most
$(9\eta^{1/2}+4\eta^{1/64})n$ vertices from~$X_1$,
at most
$3\eta^{1/2}n$ vertices from~$X_2$, and
at most
$16\eta^{1/2}n$ vertices from each of~$Y_1$ and~$Y_2$ such that
the following conditions~hold.
\end{claim}
\begin{itemize}
\labitem{JB4}{JB4} $G_1[X_1,X_2]$ is $2\eta^{1/2}$-almost-complete and $G_1[Y_1]$ and $G_1[Y_2]$ are each $6\eta^{1/2}$-almost-complete;
\labitem{JB5}{JB5} $G_2[X_1,Y_1\cup Y_2]$ is $9\eta^{1/2}$-almost-complete; and
\labitem{JB6}{JB6} $G_3[X_1]$ is $4\eta^{1/64}$-almost-complete and $G_3[Y_1,Y_2]$ is $6\eta^{1/2}$-almost-complete.
\end{itemize}
\begin{proof}
The proof follows the same pattern as that of Claim~\ref{G-struct} but is made easier in parts by the colouring of the reduced-graph.
Consider $G[X_1,X_2]$. Recall from (\ref{JB1}) that ${\mathcal G}[X_1,X_2]$ is $4\eta^4 k$-almost-complete and from (\ref{JB2}), that ${\mathcal G}[X_1,X_2]$ is $(1-\eta^{1/32})$-complete in red, contains no blue edges and is $\eta^{1/32}$ sparse in green. We can bound the number of non-red edges in $G[X_1,X_2]$ by
$$4\eta^4 pk \left(\frac{N}{K}\right)^2+2\eta^{1/32}pq\left(\frac{N}{K}\right)^{2}+2\eta pq\left(\frac{N}{K}\right)^{2},$$
where the first term bounds the number of non-red edges between non-regular pairs, the second bounds the number of non-red edges between pairs of clusters that are joined by a green edge in the reduced-graph and the third bounds the number of non-red edges between pairs of clusters that are not joined by a green edge in the reduced-graph.
Since $K\geq 2k$, $N\leq 4n$, $p \leq \beta k \leq k$ and $q\leq \half{\alpha_{1}} k\leq\tfrac{1}{2}k$, we obtain
$$e(G_2[X_1,X_2])+e(G_3[X_1,X_2])\leq 6 \eta^{1/32}n^2.$$
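As a rough sanity check on this arithmetic (assuming, as we may, that $\eta$ is small enough that $16\eta^4+4\eta\leq 2\eta^{1/32}$), the three terms above can be bounded using $pk\leq k^2$, $pq\leq\tfrac{1}{2}k^2$ and $(N/K)^2\leq 4n^2/k^2$:
$$4\eta^4 pk \left(\frac{N}{K}\right)^2+2\eta^{1/32}pq\left(\frac{N}{K}\right)^{2}+2\eta pq\left(\frac{N}{K}\right)^{2}
\leq \big(16\eta^4+4\eta^{1/32}+4\eta\big)n^2\leq 6 \eta^{1/32}n^2.$$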
Since $G[X_1,X_2]$ is complete and contains at most $6\eta^{1/32}n^2$ non-red edges, there are at most $3\eta^{1/64}n$ vertices in~$X_1$ with red degree to~$X_2$ at most $|X_2|-2\eta^{1/64}n$ and at most~$3\eta^{1/64}n$ vertices in~$X_2$ with red degree to~$X_1$ at most $|X_1|-2\eta^{1/64}n$. Removing these vertices results in every vertex in~$X_1$ having degree in $G_1[X_1,X_2]$ at least $|X_2|-2\eta^{1/64}n$ and every vertex in~$X_2$ having degree in $G_1[X_1,X_2]$ at least $|X_1|-2\eta^{1/64}n$.
Next, consider the complete three-coloured graph $G[Y_1]$. Recall from (\ref{JB1}) and (\ref{JB2}) that there can be no blue or green edges present in ${\mathcal G}[{\mathcal Y}_1]$ and that ${\mathcal G}[{\mathcal Y}_1]$ is $4\eta^4k$-almost-complete. Given the structure of~${\mathcal G}$, we can bound the number of non-red edges in $G[Y_1]$ by
$$r \binom{N/K}{2} + 4\eta^4 rk \left(\frac{N}{K}\right)^2+2\eta\binom{r}{2}\left(\frac{N}{K}\right)^{2},$$ where the first term counts the number of non-red edges within the clusters, the second counts the number of non-red edges between non-regular pairs of clusters and the third counts the number of non-red edges between regular pairs.
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $r\leq {\alpha_{1}} k \leq k$, we obtain
$$e(G_2[Y_1])+e(G_3[Y_1])\leq [4\eta +16\eta^{4} +16 \eta]n^2\leq 32\eta n^2.$$
Since $G[Y_1]$ is complete and contains at most $32\eta n^2$ non-red edges, there are at most~$8\eta^{1/2}n$ vertices with red degree at most $|Y_1|-1-8\eta^{1/2}n$. Removing these vertices from $Y_1$, that is, re-assigning these vertices to~$Z$, gives a new~$Y_1$
such that every vertex in $G[Y_1]$ has red degree at least $|Y_1|-1-8\eta^{1/2}n$. The same argument works for $G[Y_2]$, completing the proof of~(\ref{JB4}).
Next, we consider $G[X_1,Y_1\cup Y_2]$. By (\ref{JB3}) all the edges present in ${\mathcal G}[X_1,Y_1\cup Y_2]$ are coloured exclusively blue. Thus, we can bound the number of non-blue edges in $G[X_1,Y_1\cup Y_2]$ by
$$4\eta^4 rk \left(\frac{N}{K}\right)^2+2\eta p(2r) \left(\frac{N}{K}\right)^{2}. $$
where the first term bounds the number of non-blue edges between non-regular pairs and the second bounds the number of non-blue edges between regular pairs.
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $p\leq \beta\leq k$, $r\leq {\alpha_{1}} k \leq k$, we obtain
$$e(G_1[X_1,Y_1\cup Y_2])+e(G_3[X_1,Y_1\cup Y_2])\leq 36\eta n^2.$$
Since $G[X_1,Y_1\cup Y_2]$ is complete and contains at most $36\eta n^2$ non-blue edges, there are at most $6\eta^{1/2}n$ vertices in~$X_1$ with blue degree to~$Y_1\cup Y_2$ at most $|Y_1\cup Y_2|-6\eta^{1/2}n$ and at most~$6\eta^{1/2}n$ vertices in~$Y_1\cup Y_2$ with blue degree to~$X_1$ at most $|X_1|-6\eta^{1/2}n$. Removing these vertices results in every vertex in~$X_1$ having degree in $G_2[X_1,Y_1\cup Y_2]$ at least $|Y_1\cup Y_2|-6\eta^{1/2}n$ and every vertex in~$Y_1\cup Y_2$ having degree in $G_2[X_1,Y_1\cup Y_2]$ at least $|X_1|-6\eta^{1/2}n$, thus completing the proof of (\ref{JB5}).
Next, consider the complete three-coloured graph $G[X_1]$ and recall, from (\ref{JB2}), that ${\mathcal G}[{\mathcal X}_1]$ contains only red and green edges and is $4\eta^4 k$-almost-complete. Thus, we can bound the number of non-green edges in $G[X_1]$ by
$$p \binom{N/K}{2} + 4\eta^4 pk \left(\frac{N}{K}\right)^2+2\eta^{1/32}\binom{p}{2}\left(\frac{N}{K}\right)^{2}+2\eta\binom{p}{2}\left(\frac{N}{K}\right)^{2},$$
where the first term counts the number of non-green edges within the clusters,
the second counts the number of non-green edges between non-regular pairs,
the third counts the number of non-green edges between regular pairs which are joined by a red edge and
the final term counts the number of non-green edges between regular pairs which are not joined by a red edge.
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $p\leq {\alpha_{1}}II k \leq k$, we obtain
$$e(G_1[X_1])+e(G_3[X_1])\leq 8 \eta^{1/32}n^2.$$
Since $G[X_1]$ is complete and contains at most $8\eta^{1/32}n^2$ non-green edges, there are at most~$4\eta^{1/64}n$ vertices with green degree at most $|X_1|-1-4\eta^{1/64}n$. Removing these vertices from $X_1$ gives a new~$X_1$
such that every vertex in $G[X_1]$ has green degree at least $|X_1|-1-4\eta^{1/64}n$.
Finally, we consider $G[Y_1,Y_2]$, where we can bound the number of non-green edges in $G[Y_1,Y_2]$ by
$$4\eta^4 rk \left(\frac{N}{K}\right)^2+2\eta r^2\left(\frac{N}{K}\right)^{2}. $$
where the first term bounds the number of non-green edges between non-regular pairs and the second bounds the number of non-green edges between regular pairs.
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $r\leq {\alpha_{1}} k\leq k$, we obtain
$$e(G_1[Y_1,Y_2])+e(G_2[Y_1,Y_2])\leq 9\eta n^2.$$
Since $G[Y_1,Y_2]$ is complete and contains at most $9\eta n^2$ non-green edges, there are at most $3\eta^{1/2}n$ vertices in~$Y_1$ with green degree to~$Y_2$ at most $|Y_2|-3\eta^{1/2}n$ and at most~$3\eta^{1/2}n$ vertices in~$Y_2$ with green degree to~$Y_1$ at most $|Y_1|-3\eta^{1/2}n$. Removing these vertices results in every vertex in~$Y_1$ having degree in $G_3[Y_1,Y_2]$ at least $|Y_2|-3\eta^{1/2}n$ and every vertex in~$Y_2$ having degree in $G_3[Y_1,Y_2]$ at least $|Y_1|-3\eta^{1/2}n$, thus completing the proof of (\ref{JB6}) and, also, of the claim.
\end{proof}
Having removed these vertices, recalling (\ref{IIB1}), we have
\begin{equation}
\label{IIB2}
\left.
\begin{aligned}
|X_1|& = p|V_1| \geq (\beta-6\eta^{1/64})n,\\
|X_2|& = q|V_1| \geq (\half{\alpha_{1}}-4\eta^{1/32})n,\\
|Y_1|=|Y_2| & = r|V_1| \geq ({\alpha_{1}}-34\eta^{1/2})n.
\end{aligned}
\right\}
\end{equation}
We now consider the remaining vertices in $Z$ and begin by proving the following claim:
\begin{claim}
\label{claimIIBsplit1}
If there are vertices $z\in Z$, $x\in X_1$ and $y\in Y_1\cup Y_2$ such that both the edges $x z$ and $y z$ are blue, then $G$ contains a blue cycle on exactly $\langle {\alpha_{1}}I n \rangle$ vertices.
\end{claim}
\begin{proof}
By (\ref{IIBSIZE}) and (\ref{IIB2}), we know that $|X_1|,|Y_1\cup Y_2|\geq ({\alpha_{1}}I-6\eta^{1/64})n\geq \lfloor \half \langle {\alpha_{1}}I n\rangle \rfloor$. We let $\widetilde{X}$ consist of $({\alpha_{1}}I-6\eta^{1/64})n$ vertices from $X_1$ including $x$, let $\widetilde{Y}$ consist of $\lfloor \half \langle {\alpha_{1}}I n\rangle \rfloor$ vertices from $Y_1\cup Y_2$ including $y$ and consider $G[\widetilde{X},\widetilde{Y}]$. Every vertex in $\widetilde{Y}$ has degree in $G_2[\widetilde{X},\widetilde{Y}]$ at least $({\alpha_{1}}I-8\eta^{1/64})n\geq
\half(|\widetilde{X}|+|\widetilde{Y}|+1).$
Thus, by Corollary~\ref{bp-dir2}, there exists a blue path from $x$ to $y$ on exactly $\langle {\alpha_{1}}I n \rangle -1$ vertices which together with $x z$ and $y z$ forms a blue cycle on exactly $\langle {\alpha_{1}}I n \rangle$ vertices.
\end{proof}
Thus, we may partition the vertices of $Z$ into $Z_X$ and $Z_Y$ where there are no blue edges present in $G[Z_X,X_1]\cup G[Z_Y,Y_1\cup Y_2]$.
Recalling that $|V(G)|\geq \max\{4{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}-3,\, {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} + 2\langle {\alpha_{1}}II n \rangle -3\}$, we know that
$$|X_1|+|X_2|+|Y_1|+|Y_2|+|Z_X|+|Z_Y|\geq 2.5{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}+\langle {\alpha_{1}}II n \rangle -3.$$
Thus, we know that either $|X_1|+|X_2|+|Z_X|\geq \half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}+\langle {\alpha_{1}}II n \rangle -1$ or $|Y_1|+|Y_2|+|Z_Y|\geq 2 {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -1$.
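Indeed, if both of these inequalities failed then, since $\half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ and $\langle {\alpha_{1}}II n \rangle$ are integers, we would have
$$|X_1|+|X_2|+|Z_X|+|Y_1|+|Y_2|+|Z_Y|\leq \left(\half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}+\langle {\alpha_{1}}II n \rangle -2\right)+\left(2 {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -2\right)
=2.5{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}+\langle {\alpha_{1}}II n \rangle -4,$$
contradicting the lower bound displayed above.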
In the former case, we define a partition of $Z_X$ into $Z_R\cup Z_G$ by
\begin{align*}
Z_R&=\{ z\in Z_X \text{ such that } z \text{ has at least } |X_1|-12\eta^{1/64}n \text{ red neighbours in } X_1\};\text{ and}
\\ Z_G&=Z_X\backslash Z_R=\{ z\in Z_X \text{ such that } z \text{ has at least } 12\eta^{1/64}n \text{ green neighbours in } X_1\}.
\end{align*}
Since this is a partition, we have either $|X_1\cup Z_G|\geq\langle {\alpha_{1}}II n \rangle$ or $|X_2\cup Z_R|\geq\half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$.
If $|X_1\cup Z_G|\geq\langle {\alpha_{1}}II n \rangle$, then, following the argument given in the penultimate paragraph of Part I.C, we can show that $G_3[X_1\cup Z_G]$ contains a green cycle on exactly $\langle {\alpha_{1}}II n \rangle$ vertices. If $|X_2\cup Z_R|\geq\half{\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$, given the relative sizes of~$X_1$ and~$X_2\cup Z_R$ and the large minimum-degree of the graph, we may follow the same argument as given in the final paragraph of Part I.C to find that $G[X_1,X_2\cup Z_R]$ contains a red cycle on exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices.
Thus, we may, instead, assume that
$$|Y_1|+|Y_2|+|Z_Y|\geq 2 {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -1.$$ In that case, we can prove the following claim:
\begin{claim}
If there exist vertices $z\in Z_Y$, $y_1\in Y_1$ and $y_2\in Y_2$ such that both the edges $y_1 z$ and $y_2 z$ are green, then $G$ contains a green cycle on exactly $\langle {\alpha_{1}}II n \rangle$ vertices.
\end{claim}
\begin{proof}
Follows the same steps as that of Claim~\ref{claimIIBsplit1}.
\end{proof}
Similarly, we can show that the presence of a blue edge in $G[Y_1\cup Y_2]$ would result in a blue cycle on exactly $\langle {\alpha_{1}}I n \rangle$ vertices and that the presence of a green edge in $G[Y_1]$ or $G[Y_2]$ would result in a green cycle on exactly $\langle {\alpha_{1}}II n \rangle$ vertices.
Thus, we may partition $Z_Y$ into $Z_1$ and $Z_2$ where all edges in $G[Z_1,Y_1]$ and $G[Z_2,Y_2]$ (and indeed also in $G[Y_1]$ and $G[Y_2]$) are red. Since we have $|Y_1|+|Y_2|+|Z_Y|\geq2 {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -1$, we may, without loss of generality, assume that $|Y_1|+|Z_1|\geq{\langle \! \langle}{\alpha_{1}} n {\rangle \! \rangle}$.
By (\ref{IIB2}), we have $|Y_1|\geq({\alpha_{1}} -34\eta^{1/2})n$. Thus, we may choose subsets $\widetilde{Y}_1\subseteq Y_1$ and $\widetilde{Z}_1\subseteq Z_1$ such that $\widetilde{Y}_1$ includes every vertex of $Y_1$ and $|\widetilde{Y}_1|+|\widetilde{Z}_1|={\langle \! \langle}{\alpha_{1}} n{\rangle \! \rangle}$. Then, $|\widetilde{Z}_1|\leq 36\eta^{1/2} n$. Thus, every vertex in~$\widetilde{Y}_1\cup\widetilde{Z}_1$ has degree in $G_1[\widetilde{Y}_1\cup\widetilde{Z}_1]$ at least $|\widetilde{Y}_1\cup\widetilde{Z}_1|-36\eta^{1/2}n\geq\half|\widetilde{Y}_1\cup\widetilde{Z}_1|$. Therefore, by Theorem~\ref{dirac}, $G_1[\widetilde{Y}_1\cup\widetilde{Z}_1]$ contains a red cycle of length exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$, thus completing Part II.B.
\subsection*{Part II.C: $H, K \in {\mathcal J}_b$.}
In this case, recalling Theorem~\hyperlink{thD}{D}, we know that
$$\beta=\max\{{\alpha_{1}}I,{\alpha_{1}}II\}<(\tfrac{3}{2}+2\eta^{1/4}){\alpha_{1}}.$$
Suppose $\gamma$ is green. The vertex set~${\mathcal V}$ of~${\mathcal G}$ has a partition into ${\mathcal X} _1 \cup {\mathcal X} _2 \cup {\mathcal Y} _1 \cup {\mathcal Y} _2\cup {\mathcal Z}$, where ${\mathcal X}_1\cup{\mathcal X}_2$ is the partition of the vertices of $H$ given by Definition~\ref{d:J} and ${\mathcal Y}_1\cup{\mathcal Y}_2$ is the corresponding partition of the vertices of $K$. We then know that
\begin{itemize}
\item[(JC0)] $|{\mathcal X}_{1}|,|{\mathcal X}_{2}|,|{\mathcal Y}_{1}|,|{\mathcal Y}_{2}|\geq ({\alpha_{1}}-18\eta^{1/2})k$;
\item[(JC1)] $H$ and $K$ are each $4\eta^4 k$-almost-complete; and
\item[(JC2)] (a) all edges present in ${\mathcal G}[{\mathcal X}_1], {\mathcal G}[{\mathcal X}_2], {\mathcal G}[{\mathcal Y}_1], {\mathcal G}[{\mathcal Y}_2]$ are coloured exclusively red,
\item[\phantom{(JC2)}] (b) all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal X}_2], {\mathcal G}[{\mathcal Y}_1,{\mathcal Y}_2]$ are coloured exclusively green.
\end{itemize}
Because $H$ is $4\eta^4 k$-almost-complete, and all edges in ${\mathcal H}[{\mathcal X}_1]$ are coloured red, by Theorem~\ref{dirac}, ${\mathcal G}[{\mathcal X}_1]$ contains a red connected-matching on at least $|{\mathcal X}_1|-1\geq({\alpha_{1}}-20\eta^{1/2})k$ vertices. Similarly, each of ${\mathcal G}[{\mathcal X}_2]$, ${\mathcal G}[{\mathcal Y}_1]$ and ${\mathcal G}[{\mathcal Y}_2]$ contains a red connected-matching on at least $({\alpha_{1}}-20\eta^{1/2})k$ vertices. Thus, the existence of a red edge in ${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2,{\mathcal Y}_1\cup{\mathcal Y}_2]$ would imply the existence of a red connected-matching on at least ${\alpha_{1}} k$ vertices and, therefore, we may assume that there are no red edges present in~${\mathcal G}[{\mathcal X}_1\cup{\mathcal X}_2,{\mathcal Y}_1\cup{\mathcal Y}_2]$.
Because $H$ is
$4\eta^4 k$-almost-complete and all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal X}_2]$ are coloured green, by Lemma~\ref{l:ten}, ${\mathcal G}_3[X_1,X_2]$ contains a green connected-matching ${\mathcal M}$ on at least $(2{\alpha_{1}}-40\eta^{1/2})k\geq({\alpha_{1}}II+2\eta^{1/2})k$ vertices.
Suppose now that there exists a pair of green edges $x_1y_1^1$, $x_2y_1^2$ such that $x_1\in {\mathcal X}_1, x_2\in {\mathcal X}
_2, y_1^1, y_1^2\in {\mathcal Y}
_1$. Then, since $K$ is $4\eta^4 k$-almost-complete and all edges present in ${\mathcal G}[{\mathcal Y}_1,{\mathcal Y}_2]$ are coloured green, $y_1^1$ and $y_1^2$ have a common neighbour $y_2\in{\mathcal Y}_2$. Similarly, there exists a green path $P=x_1 x_2' x_1' x_2$ in $G[{\mathcal X}_1,{\mathcal X}_2]$. Thus, $x_1y_1^1y_2y_1^2x_2x_1'x_2'x_1$ is an odd green cycle contained in the same component as ${\mathcal M}$ and, thus, ${\mathcal G}$ contains an odd green connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus, there can exist no such pair, nor can there be any pair of green edges $x_1y_2^1$, $x_2y_2^2$ such that $x_1\in {\mathcal X}_1$, $x_2\in {\mathcal X}_2$, $y_2^1, y_2^2\in {\mathcal Y}_2$. Therefore, we may assume, without loss of generality, that all edges present in ${\mathcal G}[{\mathcal X}_1,{\mathcal Y}_1]$ and ${\mathcal G}[{\mathcal X}_2,{\mathcal Y}_2]$ are coloured blue.
Thus, this case can be dealt with alongside the next one.
\section{Proof of the main result -- Part III -- Case (vi)}
\label{s:p13}
Suppose that ${\mathcal G}$ contains a subgraph $H$ from
$${\mathcal L}={\mathcal L}\left(x,4\eta^4k, \text{red}, \text{blue}, \text{green}\right),$$
where
\begin{equation}
\label{SIZESIII}
\left.
\begin{aligned}
\textbf{either}\quad\quad x &\geq(\half{\alpha_{1}}+\tfrac{1}{4}\eta)k \text{ and } \max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq{\alpha_{1}}, \\
\textbf{or}\quad\quad x&\geq({\alpha_{1}}-18\eta^{1/2})k \text{ and } \max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq(\tfrac{3}{2}+2\eta^{1/2}){\alpha_{1}}
\end{aligned}
\right\}
\end{equation}
Observe that in both cases, provided $k\geq 4/\eta$, since $\eta\leq \tfrac{1}{100}, (\tfrac{{\alpha_{1}}}{100})^2, (\tfrac{{\alpha_{1}}I}{20})^4, (\tfrac{{\alpha_{1}}II}{20})^4$, we have
$$x\geq\max\{\half{\alpha_{1}} k+1,(\half{\alpha_{1}}I+\eta^4)k,(\half{\alpha_{1}}II+\eta^4)k\}.$$
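To verify this in the first case of (\ref{SIZESIII}), for example, note that $k\geq 4/\eta$ gives $\tfrac{1}{4}\eta k\geq 1$, while $\max\{{\alpha_{1}}I,{\alpha_{1}}II\}\leq{\alpha_{1}}$ and $\tfrac{1}{4}\eta\geq\eta^4$ give
$$x\geq(\half{\alpha_{1}}+\tfrac{1}{4}\eta)k\geq\half{\alpha_{1}} k+1
\qquad\text{and}\qquad
x\geq(\half{\alpha_{1}}+\eta^4)k\geq(\half{\alpha_{1}}I+\eta^4)k,\ (\half{\alpha_{1}}II+\eta^4)k;$$
the second case is checked in the same way, using $\eta^{1/2}\leq{\alpha_{1}}/100$ (which follows from $\eta\leq({\alpha_{1}}/100)^2$).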
Recalling the definition of ${\mathcal L}$ and Part II.C above, we have a partition of the vertex set ${\mathcal V}$ of ${\mathcal G}$ into ${\mathcal X}_{1}\cup {\mathcal X}_{2}\cup {\mathcal Y}_{1}\cup {\mathcal Y}_{2}\cup {\mathcal Z}$
such that
\begin{itemize}
\labitem{L0}{L0} $|{\mathcal X}_{1}|, |{\mathcal X}_{2}|, |{\mathcal Y}_{1}|, |{\mathcal Y}_{2}|\geq x$;
\labitem{L1}{L1} ${\mathcal G}[{\mathcal X}_{1}\cup {\mathcal X}_{2}\cup {\mathcal Y}_{1}\cup {\mathcal Y}_{2}]$ is $4\eta^4k$-almost-complete;
\labitem{L2}{L2} (a) all edges present in ${\mathcal G}[X_1]$, ${\mathcal G}[X_2]$, ${\mathcal G}[Y_1]$ and ${\mathcal G}[Y_2]$ are coloured exclusively red,
\item[{~}] (b) all edges present in ${\mathcal G}[X_1,Y_1]$ and ${\mathcal G}[X_2,Y_2]$ are coloured exclusively blue,
\item[{~}] (c) all edges present in ${\mathcal G}[X_1,X_2]$ and ${\mathcal G}[Y_1,Y_2]$ are coloured exclusively green,
\item[{~}] (d) there are no red edges present in ${\mathcal G}[X_1,Y_2]$ and ${\mathcal G}[X_2,Y_1]$.
\end{itemize}
\begin{figure}
\caption{$H\in {\mathcal L}$.}
\end{figure}
Recall that ${\mathcal G}$ is a three-edge-multicoloured graph; we therefore seek to strengthen the statements made in (\ref{L2}) to prescribe not only which colours of edges are present but also which are not.
By (\ref{L1}) and (\ref{L2}), $H$ is $4\eta^4 k$-almost-complete, and all edges in $H[{\mathcal X}_1]$ are coloured red. Thus, by Theorem~\ref{dirac}, ${\mathcal G}[{\mathcal X}_1]$ contains a red connected-matching on at least $x-1\geq\half{\alpha_{1}} k$ vertices. Similarly, each of ${\mathcal G}[{\mathcal X}_2]$, ${\mathcal G}[{\mathcal Y}_1]$ and ${\mathcal G}[{\mathcal Y}_2]$ contains a red connected-matching on at least $\half{\alpha_{1}} k$ vertices.
Thus, no red path can join two of ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1, {\mathcal Y}_2$, since this would give a red connected-matching on at least~${\alpha_{1}} k$~vertices.
Similarly, because $H$ is
$4\eta^4 k$-almost-complete and all edges present in $H[{\mathcal X}_1,{\mathcal Y}_1]$ are coloured blue, by Lemma~\ref{l:eleven}, ${\mathcal G}_2[X_1,Y_1]$ contains a blue connected-matching on at least $2x-2\eta^4k\geq{\alpha_{1}}I k$ vertices. Likewise, ${\mathcal G}_2[X_2,Y_2]$ contains a blue connected-matching on at least ${\alpha_{1}}I k$ vertices and ${\mathcal G}_3[X_1,Y_1]$ and~${\mathcal G}_3[X_2,Y_2]$ each contain a green connected-matching on at least ${\alpha_{1}}II k$ vertices.
Thus,
no odd blue or green cycle can share any vertices with ${\mathcal X}_1\cup{\mathcal X}_2\cup{\mathcal Y}_1\cup{\mathcal Y}_2$.
We proceed to show that the vertices of ${\mathcal Z}$ can each be assigned to one of ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1$ or~${\mathcal Y}_2$ while maintaining these properties:
We begin by labelling the vertices of ${\mathcal Z}$, $z_1,z_2,...,z_u$ and considering these in turn beginning with $z_1$.
We showed above that, for instance, there can be no red path connecting ${\mathcal X}_1$ to ${\mathcal X}_2$. Thus, we know that $z_1$ can have red edges to at most one of ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1, {\mathcal Y}_2$. Therefore, suppose, without loss of generality, that it has no red edges to ${\mathcal X}_2, {\mathcal Y}_1$ or ${\mathcal Y}_2$.
Additionally, suppose that $z_1$ has a blue edge to $X_2$.
In that case, all edges in ${\mathcal G}[z_1,Y_2]$ must be green in order to avoid having a blue triangle. Then, all edges in ${\mathcal G}[z_1,Y_1]$ must be blue (in order to avoid a green triangle). Knowing this forces all edges in ${\mathcal G}[X_2,Y_1]$ and ${\mathcal G}[X_1,Y_2]$ to be green (in order to avoid a blue triangle or pentagon). Considering the colouring obtained, there can then be no blue or green edges in ${\mathcal G}[z_1,X_1]$. Thus, all edges in ${\mathcal G}[z_1,X_1]$ must be red. Adding $z_1$ to $X_1$ and exchanging the names of~$X_2$ and $Y_2$ gives a graph $H_1$ on vertex set ${\mathcal V}(H)\cup\{ z_1 \}$ belonging to~${\mathcal L}={\mathcal L}\left(x,4\eta^4k, \text{red}, \text{blue}, \text{green}\right)$.
If, instead of supposing that $z_1$ has a blue edge to $X_2$, we suppose that it has a green edge to $Y_2$ we arrive at an analogous situation to the above. Thus, we may instead assume that all edges in ${\mathcal G}[z_1, X_2]$ are green and all edges in ${\mathcal G}[z_1,Y_2]$ are blue, in which case, a similar argument (with no need for re-naming) allows us to add $z_1$ to $X_1$, obtaining a a graph $H_1$ on vertex set ${\mathcal V}(H)\cup\{ z_1 \}$ belonging to ${\mathcal L}={\mathcal L}\left(x,4\eta^4k, \text{red}, \text{blue}, \text{green}\right)$.
Considering each of $z_1, z_2,...,z_u$ in $Z$ in turn allows us to add each vertex to either ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1$ or ${\mathcal Y}_2$, showing that ${\mathcal G}$ itself belongs to ${\mathcal L}={\mathcal L}\left(x,4\eta^4k, \text{red}, \text{blue}, \text{green}\right)$.
Recall that, because $H$ is $4\eta^4 k$-almost-complete and all edges in ${\mathcal G}[{\mathcal X}_1]$ are coloured red, by Theorem~\ref{dirac}, ${\mathcal G}[{\mathcal X}_1]$ contains a red connected-matching on at least $|X_1|-1\geq\half{\alpha_{1}} k$ vertices. This provides a simple upper bound $$|{\mathcal X}_1|,|{\mathcal X}_2|,|{\mathcal Y}_1|,|{\mathcal Y}_2|\leq{\alpha_{1}} k+1.$$
Since $|{\mathcal V}({\mathcal G})|\geq(4{\alpha_{1}}-\eta)k$, provided $k\geq(2/\eta)^{1/2}$, this also provides a corresponding lower bound
$$|{\mathcal X}_1|,|{\mathcal X}_2|,|{\mathcal Y}_1|,|{\mathcal Y}_2|\geq({\alpha_{1}} -2\eta^{1/2})k.$$
Discarding as few vertices as necessary, that is, returning them to ${\mathcal Z}$, we may therefore assume that
$$({\alpha_{1}} -2\eta^{1/2})k\leq |{\mathcal X}_1|=|{\mathcal X}_2|=|{\mathcal Y}_1|=|{\mathcal Y}_2|=p\leq {\alpha_{1}} k.$$
The remainder of this section focuses on showing that the original graph must have a similar structure, which can then be exploited in order to force a cycle of appropriate length, colour and parity.
Each vertex $V_{i}$ of ${\mathcal G}=({\mathcal V},{\mathcal E})$ represents a cluster of vertices of $G=(V,E)$. Recall, from (\ref{NK}), that
$$(1-\eta^4)\frac{N}{K}\leq |V_i|\leq \frac{N}{K},$$
and that, since $n> \max\{n_{\ref{th:blow-up}}(4,0,0,\eta), n_{\ref{th:blow-up}}(1,2,0,\eta), n_{\ref{th:blow-up}}(1,0,2,\eta)\}$, we can prove that $$
|V_i|\geq \left(1+\frac{\eta}{24}\right)\frac{n}{k}> \frac{n}{k}.$$
We partition the vertices of~$G$ into sets $X_{1}, X_{2}, Y_{1}, Y_{2}$ and $Z$ corresponding to the partition of the vertices of~${\mathcal G}$ into ${\mathcal X}_1, {\mathcal X}_2, {\mathcal Y}_1,{\mathcal Y}_2$ and ${\mathcal Z}$. Then,~$X_1, X_2, Y_1$ and $Y_2$ each contain~$p$ clusters of vertices and we have
\begin{equation}
\label{III1}
|X_1|,|Y_1|,|X_2|,|Y_2| = p|V_1| \geq ({\alpha_{1}}-2\eta^{1/2})n.
\end{equation}
In what follows, we will \textit{remove} vertices from $X_1\cup X_2\cup Y_1\cup Y_2$ by moving them into~$Z$ in order to show that, in what remains, $G[X_1\cup X_2\cup Y_1 \cup Y_2]$ has a particular coloured structure. We begin by proving the below claim which essentially tells us that $G$ has similar coloured structure to ${\mathcal G}$:
\begin{claim}
\label{G-structL}
We can \textit{remove} at most $14\eta^{1/2}n$ vertices from each of~$X_1, X_2, Y_1$ and~$Y_2$ so that the following conditions~hold.
\end{claim}
\begin{itemize}
\labitem{L3}{L3} $G_1[X_1], G_1[X_2], G_1[Y_1]$ and $G_1[Y_2]$ are each $6\eta^{1/2}$-almost-complete;
\labitem{L4}{L4} $G_2[X_1,Y_1]$ and $G_2[X_2,Y_2]$ are each $4\eta^{1/2}$-almost-complete; and
\labitem{L5}{L5} $G_3[X_1,X_2]$ and $G_3[Y_1,Y_2]$ are each $4\eta^{1/2}$-almost-complete.
\end{itemize}
\begin{proof} Consider the complete three-coloured graph $G[X_1]$. Recall that there can be no blue or green edges present in ${\mathcal G}[{\mathcal X}_1]$ and that ${\mathcal G}[{\mathcal X}_1]$ is $4\eta^4k$-almost-complete. Given the structure of~${\mathcal G}$, we can bound the number of non-red edges in $G[X_1]$ by
$$p \binom{N/K}{2} + 4\eta^4 pk \left(\frac{N}{K}\right)^2+2\eta\binom{p}{2}\left(\frac{N}{K}\right)^{2}.$$
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $p\leq {\alpha_{1}} k \leq k$, we obtain
$$e(G_2[X_1])+e(G_3[X_1])\leq [4\eta + 16\eta^{2} +8 \eta]n^2\leq 18\eta n^2.$$
Since $G[X_1]$ is complete and contains at most $18\eta n^2$ non-red edges, there are at most~$6\eta^{1/2}n$ vertices with red degree at most $|X_1|-1-6\eta^{1/2}n$. Removing these vertices from $X_1$, that is, re-assigning these vertices to~$Z$, gives a new~$X_1$
such that every vertex in $G[X_1]$ has red degree at least $|X_1|-6\eta^{1/2}n$. The same argument works for $G[X_2], G[Y_1], G[Y_2]$, thus completing the proof of (\ref{L3}).
Next, consider $G[X_1,Y_1]$. We can bound the number of non-blue edges in $G[X_1,Y_1]$ by
$$4\eta^4 pk \left(\frac{N}{K}\right)^2+2\eta p^2\left(\frac{N}{K}\right)^{2}. $$
Since $K\geq 2k, \eta^{-1}$, $N\leq 4n$ and $p\leq {\alpha_{1}} k \leq k$, we obtain
$$e(G_1[X_1,Y_1])+e(G_3[X_1,Y_1])\leq 16\eta n^2.$$
Since $G[X_1,Y_1]$ is complete and contains at most $16\eta n^2$ non-blue edges, there are at most $4\eta^{1/2}n$ vertices in~$X_1$ with blue degree to~$Y_1$ at most $|Y_1|-4\eta^{1/2}n$ and at most~$4\eta^{1/2}n$ vertices in~$Y_1$ with blue degree to~$X_1$ at most $|X_1|-4\eta^{1/2}n$. Removing these vertices results in every vertex in~$X_1$ having degree in $G_2[X_1,Y_1]$ at least $|Y_1|-4\eta^{1/2}n$ and every vertex in~$Y_1$ having degree in $G_2[X_1,Y_1]$ at least~$|X_1|-4\eta^{1/2}n$.
We repeat the above for~$G[X_2,Y_2]$, removing at most
$4\eta^{1/2}n$
vertices from each of $X_2, Y_2$ such that every (remaining) vertex in~$X_2$ has degree in $G_2[X_2,Y_2]$ at least $|Y_2|-4\eta^{1/2}n$ and every (remaining) vertex in~$Y_2$ has degree in $G_2[X_2,Y_2]$ at least $|X_2|-4\eta^{1/2}n$, thus completing the proof of (\ref{L4}).
Finally, we repeat these steps for $G_3[X_1,X_2]$ and $G_3[Y_1,Y_2]$ removing at most $4\eta^{1/2}n$ vertices from each of $X_1,X_2,Y_1$ and $Y_2$ such that every (remaining) vertex in~$X_1$ has degree in $G_3[X_1,X_2]$ at least $|X_2|-4\eta^{1/2}n$, every (remaining) vertex in~$X_2$ has degree in $G_3[X_1,X_2]$ at least $|X_1|-4\eta^{1/2}n$,
every (remaining) vertex in~$Y_1$ has degree in $G_3[Y_1,Y_2]$ at least $|Y_2|-4\eta^{1/2}n$ and every (remaining) vertex in~$Y_2$ has degree in $G_3[Y_1,Y_2]$ at least $|Y_1|-4\eta^{1/2}n$,
thus completing the proof of (\ref{L5}).
\end{proof}
Having removed these vertices, we have
\begin{equation}
\label{III2}
|X_1|,|Y_1|,|X_2|,|Y_2| \geq ({\alpha_{1}}-16\eta^{1/2})n.
\end{equation}
We now define four (possibly overlapping) subsets of $Z$
\begin{align*}
Z_1^X&=\{ z\in Z \text{ such that } z \text{ has at least } 40\eta^{1/2}n \text{ red neighbours in } X_1\}; \\
Z_2^X&=\{ z\in Z \text{ such that } z \text{ has at least } 40\eta^{1/2}n \text{ red neighbours in } X_2\}; \\
Z_1^Y&=\{ z\in Z \text{ such that } z \text{ has at least } 40\eta^{1/2}n \text{ red neighbours in } Y_1\}; \\
Z_2^Y&=\{ z\in Z \text{ such that } z \text{ has at least } 40\eta^{1/2}n \text{ red neighbours in } Y_2\},
\end{align*}
and proceed to prove that $Z\backslash (Z_1^X \cup Z_2^X \cup Z_1^Y \cup Z_2^Y)=\emptyset.$
Indeed, suppose $z$ is a vertex in $Z$ not belonging to any of the newly defined sets, then $z$ has only small red degree to each of $X_1,X_2,Y_1,Y_2$. Thus, in particular, $z$ must have a blue or green edge to each of $X_1, X_2, Y_1$ and $Y_2$.
\begin{claim}
If there exist vertices $z\in Z\backslash (Z_1^X \cup Z_2^X \cup Z_1^Y \cup Z_2^Y)$, $x_1\in X_1$ and $x_2\in X_2$ such that both the edges $x_1 z$ and $x_2 z$ are green, then $G$ contains a green cycle on exactly $\langle {\alpha_{1}}II n \rangle$ vertices.
\end{claim}
\begin{proof}
Recalling (\ref{SIZESIII}) and (\ref{III2}), we know that $|X_1|,|X_2|\geq (\tfrac{5}{8}{\alpha_{1}}II-16\eta^{1/2})n$. We let $\widetilde{X}_1$ consist of $(\tfrac{5}{8}{\alpha_{1}}II-16\eta^{1/2})n$ vertices from $X_1$ including $x_1$, let $\widetilde{X}_2$ consist of $\lfloor \half \langle {\alpha_{1}}II n\rangle \rfloor$ vertices from $X_2$ including $x_2$ and consider $G[\widetilde{X}_1,\widetilde{X}_2]$. Every vertex in $\widetilde{X}_2$ has degree in $G_3[\widetilde{X}_1,\widetilde{X}_2]$ at least $(\tfrac{5}{8}{\alpha_{1}}II-22\eta^{1/2})n\geq
\half(|\widetilde{X}_1|+|\widetilde{X}_2|+1).$
Thus, by Corollary~\ref{bp-dir2}, there exists a green path from $x_1$ to $x_2$ on exactly $\langle {\alpha_{1}}II n \rangle -1$ vertices which together with $x_1 z$ and $x_2 z$ forms a green cycle on exactly $\langle {\alpha_{1}}II n \rangle$ vertices.
\end{proof}
Analogous results can be proved for similar pairs of green edges to $Y_1$ and $Y_2$ or blue edges to $X_1$ and~$Y_1$ or $X_2$ and $Y_2$. Thus, the existence of a green (resp. blue) edge in $G[z,X_1]$ implies that all edges in $G[z,X_1]$ and $G[z,Y_2]$ must be green (resp. blue) and that all edges in $G[z,X_2]$ and $G[z,Y_1]$ must be blue (resp. green).
At this stage, the existence of a blue or green edge between $X_2$ and $Y_1$ could be used in a similar manner to prove the existence of a blue cycle on exactly $\langle {\alpha_{1}}I n \rangle$ vertices or a green cycle on exactly~$\langle {\alpha_{1}}II n \rangle$ vertices.
Thus, all edges present in $G[X_2,Y_1]$ must be red. However, the existence of even two independent red edges in $G[X_2,Y_1]$ would imply the existence of a red cycle on exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices in $G[X_2\cup Y_1]$.
Indeed, suppose $x_1,x_2$ are distinct vertices in $X_2$ and $y_1, y_2$ are distinct vertices in $Y_1$ such that $x_1 y_1$ and $x_2 y_2$ are both coloured red and let $\widetilde{X}_2$ be any set of $\half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} $ vertices in~$X_2$ such that $x_1,x_2 \in \widetilde{X}_2$.
By (\ref{L3}), every vertex in $\widetilde{X}_2$ has degree at least $|\widetilde{X}_2|-8\eta^{1/2}n$ in $G_1[\widetilde{X}_2]$. Since $\eta\leq ({\alpha_{1}}/100)^2$, we have $|\widetilde{X}_2|-8\eta^{1/2}n \geq \half |\widetilde{X}_2| +2$. So, by Corollary~\ref{dirac2}, there exists a Hamiltonian path in $G_1[\widetilde{X}_2]$ between $x_1$ and $x_2$, that is, there exists a red path between~$x_1$ and $x_2$ in $G[X_2]$ on exactly $\half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices. Likewise, there exists a red path between~$y_1$ and~$y_2$ in~$G[Y_1]$ on exactly $\half {\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices. Combining the edges $x_1y_1$ and $x_2y_2$ with the red paths gives a red cycle on exactly ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices.
Thus, in fact, all vertices in $Z$ belong to one of the previously defined sets $ Z_1^X$, $Z_2^X$, $Z_1^Y$ or $ Z_2^Y$. Therefore at least one of $X_1\cup Z_1^X, X_2\cup Z_2^X, Y_1\cup Z_1^Y$ or $Y_2\cup Z_2^Y$ contains at least ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices.
Without loss of generality, suppose that $|X_1\cup Z_1^X|\geq{\langle \! \langle}{\alpha_{1}} n {\rangle \! \rangle}$. We show that $G_1[X_1\cup Z_1^X]$ contains a long red cycle as follows: Let~$X$ be any set of~${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices from $X_1 \cup Z_1^X$ consisting of every vertex from~$X_1$ and~${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}-|X_1|$ vertices from~$Z_1^X$. By (\ref{L3}) and~(\ref{III2}), the red graph~$G_1[X]$ has at least ${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle} -16\eta^{1/2}n$ vertices of degree at least $|X|-6\eta^{1/2}n$ and at most $16\eta^{1/2}n$ vertices of degree at least $40\eta^{1/2}n$. Thus, by Theorem~\ref{chv},~$G[X]$ contains a red cycle on exactly~${\langle \! \langle} {\alpha_{1}} n {\rangle \! \rangle}$ vertices, completing the proof of Theorem~C.
\qquad\hspace*{\fill}$\Box$
\section{Conclusions}
Together, \cite{KoSiSk}, \cite{BenSko}, \cite{DF1}, \cite{DF2} and this paper give exact values for the Ramsey number of any triple of sufficiently long cycles (except when all three cycles are even but of different lengths). We now discuss briefly what is known for four or more colours beginning with the case when all the cycles in question are of odd length.
In \cite{BonErd}, Bondy and Erd\H{o}s gave the following bounds for the \mbox{$r$-coloured} Ramsey number of odd cycles $$2^{r-1}(n-1)+1\leq R(C_n,C_n,\dots,C_n)\leq(r+2)!n$$
and conjectured that the lower bound gives the true value of the Ramsey number.
In 2012, \L uczak, Simonovits and Skokan \cite{LSS} gave an improved asymptotic upper bound. For $n$ odd and $r\geq 4$, they proved that $$R(C_n,C_n,\dots,C_n)\leq r2^r n + o(n)$$ as $n \rightarrow \infty$.
Note that the conjecture still stands and has been confirmed for three colours by Kohayakawa, Simonovits, and Skokan \cite{KoSiSk}. The structures providing the lower bound are well known and easily constructed. For two colours, the structure is simply two classes of $n-1$ vertices coloured such that all edges within each class are coloured red and all edges between classes are coloured blue (see Figure~\ref{twocole}).
\begin{figure}
\caption{Coloured structure giving the lower bound for two colours. }
\label{twocole}
\end{figure}
The relevant $r$-coloured structure is obtained by taking two copies of the $(r-1)$-coloured structure and colouring all edges between the copies with colour $r$ (see Figure~\ref{4cole}).
\begin{figure}
\caption{Coloured structure giving the lower bound for four colours. }
\label{4cole}
\end{figure}
Notice that these structures also give a lower bound for the $r$-coloured Ramsey number when the cycles have different lengths. Thus, for $n_1\geq n_2\geq \dots \geq n_r$ all odd, we have $$R(C_{n_1},C_{n_2},\dots,C_{n_r})\geq 2^{r-1}(n_{1}-1)+1.$$
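One way to see that these colourings contain no forbidden cycle is to note that, in the $r$-coloured structure described above, each colour $i\geq 2$ spans a disjoint union of complete bipartite graphs between two copies of the $(i-1)$-coloured structure, and so contains no odd cycle at all, while every component in colour $1$ has only $n_1-1<n_1$ vertices. In particular, the construction has
$$2\cdot 2^{r-2}(n_1-1)=2^{r-1}(n_1-1)$$
vertices, one fewer than the claimed bound.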
In 1976, Erd\H{o}s, Faudree, Rousseau and Schelp \cite{EFRS1} considered the case when one cycle is much longer than the others, proving in the case of odd cycles that, if $n$ is much larger than $m,\ell, k$ all odd, then
$$R(C_n,C_m,C_\ell,C_k)=8n-7,$$
thus showing that the above bound is tight in that case.
This `doubling-up' process can also be used to provide structures giving sensible lower bounds for mixed parity multicolour Ramsey numbers. For example, consider the case of two even and two odd cycles. The three-coloured graph shown in Figure~\ref{fig214-4} below was used earlier to provide a lower bound for $R(C_n,C_m,C_\ell)$ in the case that $n,m$ are even and $\ell$ is odd. Taking two copies of the graph and colouring all the edges between the copies with a fourth colour gives a four-coloured graph providing a lower bound for $R(C_n,C_m,C_\ell,C_k)$, in the case that $n,m$ are even and $\ell, k$ are odd.
\begin{figure}
\caption{Structure providing a lower bound for even-even-odd case.}
\label{fig214-4}
\end{figure}
As the number of colours increases, there are simply too many mixed parity cases to discuss each one here or to give conjectures for the exact or asymptotic Ramsey numbers. However, looking at the structures already seen for three colours and `doubling-up' would seem to be a good place to start.
For even cycles, this `doubling-up' is not an option and the Ramsey numbers grow more slowly as the number of colours increases. Indeed, \L uczak, Simonovits and Skokan \cite{LSS}, proved that the $r$-coloured Ramsey number for even cycles essentially grows no faster than linearly in $r$, proving that, for $n$ even, $$R(C_n,C_n,\dots,C_n)\leq rn + o(n)$$ as $n \rightarrow \infty$.
Recall that Bondy and Erd\H{o}s \cite{BonErd} proved that, for $n\geq 3$ even, $$R(C_n,C_n)=\tfrac{3}{2}n-1.$$
The simple structure providing the lower bound is shown in Figure~\ref{fig214-5} below. It has two classes of vertices, the first containing $n-1$ vertices and the second $\half n -1$ vertices. It is coloured such that all edges within the first class receive one colour and all other edges receive the second.
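To see that this structure contains no monochromatic $C_n$, note that the first colour consists of a clique on $n-1<n$ vertices, while any cycle in the second colour can use at most as many vertices of the first class as of the second (the first class is independent in the second colour), so its length is at most
$$2\left(\half n -1\right)=n-2<n.$$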
\begin{figure}
\caption{Coloured structure giving the lower bound in the two coloured even case.}
\label{fig214-5}
\end{figure}
This structure is easily extended to give a lower bound for multicoloured even cycles (see Figure~\ref{fig214-6}), showing that for $r$ colours $$R(C_n,C_n,\dots,C_n)\geq \half(r+1)n - r +1.$$
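As a quick consistency check on this bound (assuming, as suggested by the two-colour case, that the extended structure consists of one class of $n-1$ vertices together with $r-1$ classes of $\half n -1$ vertices), the total number of vertices is
$$(n-1)+(r-1)\left(\half n-1\right)=\half(r+1)n-r,$$
one fewer than the stated lower bound.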
\begin{figure}
\caption{Structure providing a lower bound for $r$-coloured even cycle result.}
\label{fig214-6}
\end{figure}
It can also be adapted to provide a lower bound in the case that the cycles are all even but are of different lengths, showing that, for $n_1\geq n_2\geq\dots\geq n_r$ all even,
\begin{equation}
\label{lastoneprob}
R(C_{n_1},C_{n_2},\dots,C_{n_r})\geq n_1+\half n_2+\dots+\half n_r -r+1.
\end{equation}
Also in \cite{EFRS1}, Erd\H{o}s, Faudree, Rousseau and Schelp showed that, for $n$ much larger than $m,\ell,k$ all even,
\begin{align*}
R(C_n,C_m,C_\ell)&=n+\half m +\half \ell -2, \\
R(C_n,C_m,C_\ell,C_k)&=n+\half m +\half \ell + \half k-3.
\end{align*}
Thus, for three or four colours, the bound in (\ref{lastoneprob}) is tight when one of the cycles is much longer than the others. Notice, also, that this bound agrees with the asymptotic result of Figaj and \L uczak in~\cite{FL2007}.
However, as shown by the exact result of Benevides and Skokan \cite{BenSko}, this bound can be beaten slightly in the case of three even cycles of equal length. In that case, the less easily extended structure shown in Figure~\ref{fig214-7} gives $R(C_n,C_n,C_n)=2n$.
\begin{figure}
\caption{Structure providing the lower bound in the paper of Benevides and Skokan.}
\label{fig214-7}
\end{figure}
Based on the results discussed, one might be tempted to conjecture an asymptotic $r$-colour result for even cycles of the form $$R(C_{{\langle \! \langle}{\alpha_{1}} n{\rangle \! \rangle}}, C_{{\langle \! \langle}{\alpha_{1}}I n{\rangle \! \rangle}},\dots,C_{{\langle \! \langle}\alpha_r n{\rangle \! \rangle}})=\half\left({\alpha_{1}}+{\alpha_{1}}I+\dots+\alpha_r+\max\{{\alpha_{1}},{\alpha_{1}}I,\dots,\alpha_r\}\right)n + o(n).$$
However, in 2006, in the case of $r$ cycles of equal even length $n$, Sun Yongqi, Yang Yuansheng, Xu Feng and Li Bingxi \cite{SYFL} proved that
$$R(C_n,C_n,\dots,C_n)\geq (r-1)n-2r+4,$$
suggesting that the true form of such a result for even cycles is more complicated.
There is potential to apply the methods used in this paper to the case of four or more colours, but there are limitations which could make this quite difficult. For instance, two key sets of tools used to prove the stability result (Theorem B) were decompositions (which were used to find large two-coloured subgraphs within three-coloured graphs) and connectivity results (which reduce the difficulty of finding a connected-matching in a two-coloured graph). The most basic such connectivity result states that a two-coloured graph is connected in one of its colours. Such results do not apply (or are much more complicated) for three-coloured graphs. Therefore, an alternative approach or (even) more case analysis could well be necessary.
\renewcommand{\baselinestretch}{1.15}\small\normalsize
\cleardoublepage
\phantomsection
\addcontentsline{toc}{section}{References}
\end{document} |
\begin{document}
\begin{abstract}
Relative trace formulas play a central role in studying automorphic forms.
In this paper, we use a relative trace formula approach to derive a Kuznetsov type formula for the group $\text{GSp}_4$.
We focus on giving a final formula that is as explicit as possible, and we plan on returning to applications elsewhere.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In this work we develop a Kuznetsov formula for the group $\text{GSp}_4$.
To motivate our results, we first recall the Kuznetsov formula for $\text{GL}_2$, an identity relating spectral information about the quotient space
$\Gamma \backslash \mathbb H$ (where $\Gamma$ is a congruence subgroup) to some arithmetic input.
For arbitrarily chosen nonzero integers $n$ and $m$ and any reasonable test function $h$, the spectral side involves
the quantity
\begin{equation}\label{BasicQuantity}
h(t_u)a_m(u)\overline{a_n(u)},
\end{equation}
where $u$ ranges over eigenfunctions of the Laplace operator involved in the
spectral decomposition of $L^2(\Gamma \backslash \mathbb H)$, $a_m(u)$ is the $m$-th Fourier coefficient of $u$,
and $t_u$ is the corresponding spectral parameter.
More precisely, the spectrum of $L^2(\Gamma \backslash \mathbb H)$ can be described as the direct sum of the
\textit{discrete spectrum} and the \textit{continuous spectrum}.
The discrete spectrum is the direct sum of $1$-dimensional subspaces spanned by cuspidal Maass forms
(the \textit{cuspidal spectrum}) plus the constant function (the \textit{residual spectrum}).
The continuous spectrum is a direct integral of $1$-dimensional subspaces spanned by the Eisenstein series.
The spectral side of the Kuznetsov formula correspondingly splits as a discrete sum over Maass forms plus a
continuous integral over Eisenstein series.
The arithmetic-geometric side is a sum of two contributions, which may be seen as the contributions from the
two elements of the Weyl group of $\text{GL}_2$.
The identity contribution is given by the delta symbol $\delta_{n,m}$ times the integral of the spectral test function
$h$ against the spectral measure $\frac{t}{\pi^2} \tanh(\pi t)\, dt$.
For this reason, the Kuznetsov formula may be viewed as a result of quasi-orthogonality for the Fourier coefficients $a_m(\cdot)$
and $a_n(\cdot)$, provided the remaining contribution can be controlled.
The latter consists of a sum of Kloosterman sums weighted by some integral transform of the test function $h$, involving Bessel
functions.
Applications of the Kuznetsov formula involve using known results about either of the two sides to derive information
about the other side.
On one hand, the flexibility allowed by the choice of the test function $h$ enables one to use known bounds about the Kloosterman
sums to study the distribution of the discrete spectrum and the size of the Fourier coefficients of Maass forms.
On the other hand, understanding the Fourier coefficients of Maass forms as well as the integral transform appearing on
the geometric side yields strong bounds for sums of Kloosterman sums.
Recently, Kuznetsov formulae for $\text{GL}_3$ have been developed by Blomer and Buttcane, with similar applications as described above.
It would thus be interesting to establish the corresponding formulae for other groups.
In the classical proof of the $\text{GL}_2$ Kuznetsov formula, one computes the inner product of two Poincar\'e series in two
different ways, one involving the spectral decomposition of $L^2(\Gamma \backslash \mathbb H)$, and the other one by computing
the Fourier coefficients of the Poincar\'e series and unfolding. This gives a ``pre-Kuznetsov formula'', that one then
proceeds to integrate against the test function $h$, obtaining on the geometric side the integral transforms of $h$
mentioned above.
Another approach, which one may call the relative trace formula approach to the Kuznetsov formula, builds upon the relative trace formula introduced by
Jacquet~\cite{JacquetLai}.
In the case of $\text{GL}_2$, the relative trace formula approach to the Kuznetsov formula is apparently based on unpublished work of Zagier,
detailed in~\cite{Joyner}.
This approach is developed in the adelic framework in~\cite{KL} for the congruence subgroup $\Gamma=\Gamma_1(N)$.
It proceeds by integrating an automorphic kernel
$$K_f(x,y)=\sum_{\gamma \in \text{PGL}_2(\mathbb{Q})} f(x^{-1}\gamma y),$$ where $f$ is a suitable test function.
The spectral expansion of the kernel will then involve the quantity $\tilde{f}(t_u)u(x)\overline{u(y)}$, where $u$ ranges
over the eigenfunctions involved in the spectral decomposition of $L^2(\Gamma(N) \backslash \mathbb H)$,
$t_u$ is the spectral parameter of $u$, and $\tilde{f}$ is the \textit{spherical transform} of $f$.
Thus integrating $K_f(x,y)$ against a suitable character on $U \times U$, where $U=\mat{1}{*}{}{1}$, one gets the
quantity~(\ref{BasicQuantity}) with $h=\tilde{f}$.
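Schematically, and ignoring all convergence and normalisation issues, the quantity computed on both sides is
$$\int\limits_{U(\mathbb{Q})\backslash U(\mathbb{A})}\ \int\limits_{U(\mathbb{Q})\backslash U(\mathbb{A})} K_f(u_1,u_2)\,\psi_m(u_1)\overline{\psi_n(u_2)}\,du_1\,du_2,$$
where $\psi_m$ and $\psi_n$ denote the characters of $U(\mathbb{Q})\backslash U(\mathbb{A})$ attached to the integers $m$ and $n$; the notation $\psi_m,\psi_n$ is used here only to illustrate the shape of the construction.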
On the other hand, using the Bruhat decomposition for $\text{PGL}_2(\mathbb{Q})$, one can decompose the integral over $U \times U$ as
a sum over elements of the Weyl group and over diagonal matrices in $\text{PGL}_2(\mathbb{Q})$ of some orbital integrals.
In the case of the identity element, at most one diagonal matrix will have a non-zero contribution, which will turn out
to be a delta symbol times some integral transform of the function $f$. In the case of the longest element in the Weyl
group, each positive integer in $N\mathbb{Z}$ will have a nonzero contribution, given by a Kloosterman sum times a second kind of
integral transform of $f$. A more refined version is then obtained by taking the Mellin transform of the primitive formula
obtained.
Note that in this approach, one gets on the geometric side some integral transforms of the function $f$, hence one has to do
some extra work to relate these to the test function $h=\tilde{f}$ appearing on the spectral side.
A couple of remarks are in order about the choice of $f$. Firstly, the spectral expansion of the kernel involves the spectral
decomposition of $L^2(\mathbb{R}_{>0}\text{GL}_2(\mathbb{Q}) \backslash \text{GL}_2(\mathbb{A}))$ rather than $L^2(\Gamma \backslash \mathbb H)$.
By restricting $f$ to be left and right $K_\infty$-invariant
(where $K_\infty=\text{SO}_2$), only right-$K_\infty$-invariant automorphic forms $\phi$
(thus corresponding to adelization of functions on the homogeneous space $\mathbb H = \text{SL}_2(\mathbb{R}) / K_\infty$)
will show up in the spectral expansion of the kernel, but other choices are possible.
Also one may choose the test function $f$ at unramified places so as to get a final formula that includes the Hecke eigenvalues
of a fixed Hecke operator of index coprime to the level $N$.
Our plan is to implement the relative trace formula approach in the case of $\text{GSp}_4$.
In contrast to the case of $\text{GL}_2$, there are several non-conjugate unipotent subgroups~$U$.
Choosing $U$ to be the unipotent radical of the Borel subgroup (that is, the minimal parabolic subgroup)
will yield Whittaker coefficients of the automorphic forms involved (instead of the Fourier coefficients).
The Whittaker coefficients have a ``multiplicity one'' property, which ensures that the global Whittaker coefficients factor
into a product of local coefficients.
These local Whittaker coefficients can be written down in terms of local Satake parameters, which is important for applications.
Also in contrast to the case of $\text{GL}_2$, not every automorphic form has non-identically zero Whittaker coefficients.
For instance, Siegel modular forms give rise to automorphic forms whose Whittaker coefficients vanish identically.
Thus, only \textit{generic} automorphic forms (\textit{i.e.}, with non-identically zero Whittaker coefficients)
will survive the integration on $U \times U$ and contribute to the final formula.
Let us briefly sketch some similarities and differences with the Kuznetsov formula for $\text{GL}_3$, which also has
rank $2$.
On the spectral side, the continuous contribution is in both cases given on the one hand by \textit{minimal Eisenstein series}
(that is, attached to the minimal parabolic subgroup), and on the other hand by Eisenstein series induced from non-minimal parabolics by Maass forms
on $\text{GL}_2$. However, in the case of $\text{GL}_3$, the two non-minimal proper standard parabolic subgroups are \textit{associated},
hence by Langlands theory their Eisenstein series are essentially the same. On the other hand, for $\text{GSp}_4$, we have
two distinct non-associated such parabolic subgroups, giving rise to two distinct kinds of Eisenstein series.
As for the geometric side, the Weyl group of $\text{GL}_3$ has six elements, while the Weyl group of $\text{GSp}_4$ has eight.
However, it seems interesting to notice that in both cases, only the identity element and the longest three elements
in the Weyl group have a non-zero contribution, thus eventually giving in total four distinct terms.
Finally, let us mention that Siu Hang Man has independently derived a Kuznetsov formula for $\text{Sp}_4$ using the more classical technique
of computing the inner product of Poincar\'e series, and has derived some applications towards the Ramanujan Conjecture~\cite{SHMKuznetsov}.
However, because the techniques employed and the final formulae differ, the author believes that our works are complementary rather than redundant.
Indeed, the flexibility offered by the adelic framework enables us to treat the test function differently at each place.
As a result, by choosing an appropriate test function at finite places, our formula might incorporate the eigenvalues of an arbitrary Hecke operator.
Furthermore, at the Archimedean place, we make use of two deep theorems of functional analysis on real reductive groups (namely Harish-Chandra inversion theorem
and Wallach's Whittaker inversion theorem) in order to produce an arbitrary Paley-Wiener test function on the spectral side, and relate it explicitly to its transform
appearing on the arithmetic side.
As a last point, working with $\text{GSp}_4$ instead of $\text{Sp}_4$ enables us to work with a central character.
The primary goal of this work is to write down the main Kuznetsov formula (in Theorem~\ref{MainTheorem}) as explicitly as possible.
Applications of Kuznetsov formulae classically include counts of the cuspidal spectrum,
bounds towards the Ramanujan conjecture and bounds for the number of automorphic forms violating the Ramanujan conjecture,
large sieve inequalities involving the Hecke eigenvalues,
estimates for moments of $L$-functions,
and distribution of the Whittaker coefficients.
Some of these have been developed in~\cite{SHMKuznetsov}, and we hope to work out further applications in future work.
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection*{Acknowledgments} I wish to thank Abhishek Saha for suggesting this problem to me and for helpful comments,
Ralf Schmidt for communicating a proof of Proposition~\ref{MustBePrincipalSeries} to me,
and Jack Buttcane for being in touch about the interchange of integrals conjecture.
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
\mathbf{s}ection{Generalities}
\begin{definition}\label{classical}
The general symplectic group of degree 2 over a field $\mathbb{F}$ is the group
$$\mathrm{GSp}_4(\mathbb{F})=\{M \in \mathrm{Mat}_4(\mathbb{F}) : \exists \mu \in \mathbb{F}^\times ,\ \trans{M}JM=\mu J\},$$
where
$J=\mat{}{I_2}{-I_2}{}$ and $\trans{M}$ denotes the transpose of the matrix $M$.
\end{definition}
Note that some authors use different realizations of $\mathrm{GSp}_4$; for instance the realization used in~\cite{RS2}
(to which we refer, along with~\cite{RS1}, for expository details) is conjugate
in $\mathrm{GL}_4$ to ours by the matrix $\mat{\block{}{1}{1}{}}{}{}{\block{1}{}{}{1}}$.
From now on we denote $G=\mathrm{GSp}_4$.
The {\bf Cartan involution} of $G$ is given by $\theta(M)=J^{-1}MJ=\trans{M^{-1}}$.
The scalar $\mu=\mu(M)$ in the definition is called the {\bf multiplier system}.
The centre of $G$ consists of all invertible scalar matrices.
We fix a maximal torus in $G(\mathbb{F})$
$$T(\mathbb{F})=\left\{\mat{\block{x}{}{}{y}}{}{}{\block{tx^{-1}}{}{}{ty^{-1}}} : x,y,t \in \mathbb{F}^\times \right\}.$$
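As a quick sanity check on the chosen realization, one verifies directly (a routine computation, recorded here for convenience, in the block-matrix notation used above) that $T$ indeed lies in $\mathrm{GSp}_4$ and that the multiplier of the element with coordinates $(x,y,t)$ is $t$:
\[
\trans{M}JM=\mat{D_1}{}{}{D_2}\mat{}{I_2}{-I_2}{}\mat{D_1}{}{}{D_2}
=\mat{}{D_1D_2}{-D_1D_2}{}=tJ,
\qquad M=\mat{D_1}{}{}{D_2},
\]
where $D_1=\block{x}{}{}{y}$ and $D_2=\block{tx^{-1}}{}{}{ty^{-1}}$, so that $D_1D_2=tI_2$ and hence $\mu(M)=t$.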
\begin{definition}
The symplectic group of degree 2 over a field $\mathbb{F}$ is the group
$$\text{Sp}_4(\mathbb{F})=\{M \in G(\mathbb{F}) : \mu(M)=1\}.$$
\end{definition}
The centre of $\text{Sp}_4$ is $\{\pm 1\}$, and a maximal torus in $\text{Sp}_4(\mathbb{F})$ is given by
$$A(\mathbb{F})=T(\mathbb{F}) \cap \text{Sp}_4(\mathbb{F}) =\left\{\mat{\block{x}{}{}{y}}{}{}{\block{x^{-1}}{}{}{y^{-1}}} : x,y \in \mathbb{F}^\times \right\}.$$
\subsection{Weyl group}
Let $N(T)$ be the normalizer of $T$.
The Weyl group $\Omega=N(T)/T$ is generated by (the images of)
$s_1=\mat{\block{}{1}{1}{}}{}{}{\block{}{1}{1}{}}$ and
$s_2=\left[ \begin{smallmatrix}&&1&\\ &1&&\\-1&&&\\&&&1\end{smallmatrix}\right]$, and consists of the (images of the) eight elements
$$1, \quad s_1, \quad s_2, \quad
s_1s_2=\left[ \begin{smallmatrix}&1&&\\ &&1&\\&&&1\\-1&&&\end{smallmatrix}\right],\quad
s_2s_1=\left[ \begin{smallmatrix}&&&1\\ 1&&&\\&-1&&\\&&1&\end{smallmatrix}\right],$$
$$s_1s_2s_1=\left[ \begin{smallmatrix}1&&&\\ &&&1\\&&1&\\&-1&&\end{smallmatrix}\right],\quad
s_2s_1s_2=\mat{}{\block{}{1}{1}{}}{\block{}{-1}{-1}{}}{},\quad (s_1s_2)^2=J.$$
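For instance, the last identity can be checked directly from the matrix of $s_1s_2$ displayed above; we record the computation as a sanity check:
\[
(s_1s_2)^2=\left[\begin{smallmatrix}&1&&\\ &&1&\\&&&1\\-1&&&\end{smallmatrix}\right]^2
=\left[\begin{smallmatrix}&&1&\\ &&&1\\-1&&&\\&-1&&\end{smallmatrix}\right]
=\mat{}{I_2}{-I_2}{}=J.
\]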
\subsection{Compact subgroups}
A choice of maximal compact subgroup of $G(\mathbb{R})$ is given by the set $K_0$ of fixed points of the Cartan involution $\theta$.
An easy computation shows
$$K_0=K_\infty \sqcup \diag{1}{1}{-1}{-1}K_\infty,$$
where
$$K_\infty = \left\{ \mat{A}{B}{-B}{A}:
A\trans{A}+B\trans{B}=1,\;
A\trans{B}=B\trans{A} \right\}.$$
The condition
$$\begin{cases}
A\trans{A}+B\trans{B}=1\\
A\trans{B}=B\trans{A}\end{cases}$$ is equivalent to $A+iB \in U(2)$, hence $K_\infty$ is isomorphic to $U(2)$.
For each prime $p$ we also consider a (compact open) congruence subgroup $\Gamma_p \subset G(\mathbb{Z}_p)$,
with the properties that $\Gamma_p=G(\mathbb{Z}_p)$ for all but finitely many $p$ and that the multiplier system $\mu$ is
surjective from $\Gamma_p$ onto $\mathbb{Z}_p^\times$ for all $p$.
This implies that we have
{\bf strong approximation}: setting $\Gamma=K_\infty \prod_p \Gamma_p$, we have
$$G(\mathbb{A})=G(\mathbb{R})^\circ G(\mathbb{Q}) \Gamma,$$
where $G(\mathbb{R})^\circ$ is the connected component of the identity and $\mathbb{A}$ is the ring of ad\`eles of $\mathbb{Q}$.
Moreover we have the {\bf Iwasawa decomposition}
$G(\mathbb{A})=P(\mathbb{A})K$ for every standard parabolic subgroup $P$, where $K=K_\infty \times \prod_p G(\mathbb{Z}_p)$.
\subsection{Parabolic subgroups}
Parabolic subgroups are the closed subgroups $P$ such that $G/P$ is a projective variety.
Given a minimal parabolic subgroup $P_0$, the
{\bf standard parabolic subgroups} (with respect to $P_0$) are those parabolic subgroups that contain $P_0$.
If $P$ is a standard parabolic subgroup defined over $\mathbb{Q}$,
the {\bf Levi decomposition} of $P$ is a semidirect product $P=N_PM_P$, where $M_P$ is a reductive subgroup and
$N_P$ is a normal unipotent subgroup.
We give here the three non-trivial standard (with respect to our choice of $P_0=B$) parabolic subgroups and their Levi decompositions.
\subsubsection{Borel subgroup} \label{Borelsbgp}
The {\bf Borel subgroup} is the minimal standard parabolic subgroup.
It is given by
$$B=\left[ \begin{smallmatrix}*&&*&*\\ *&*&*&*\\&&*&*\\&&&*\end{smallmatrix}\right] \cap \mathrm{GSp}_4$$
and has Levi decomposition $B=U T=TU$, where
$$U=\left\{\bigmat{\block{1}{}{x}{1}}{\block{c}{a-cx}{a}{b}}{}{\block{1}{-x}{}{1}} : a,b,c,x \in \mathbb{F}\right\}.$$
We have the {\bf Bruhat decomposition}
\begin{align*}
G=\coprod_{\sigma \in \Omega} B \sigma B=\coprod_{\sigma \in \Omega} UT \sigma U.
\end{align*}
We write the Iwasawa decomposition for $\text{Sp}_4(\mathbb{R})$ as follows.
\begin{definition}
For every $g \in \text{Sp}_4(\mathbb{R})$ there is a unique element $A(g) \in \mathfrak{a}$ such that
$$g \in U \exp(A(g))K_\infty.$$
\end{definition}
Later on we shall need the following technical lemma.
\begin{lemma}\label{A(Ju)}
With the notation $$u(x,a,b,c)=\left[
\begin{smallmatrix}
1 & & c & a-cx \\
x & 1 & a & b \\
& & 1 & -x \\
& & & 1 \\
\end{smallmatrix}
\right]$$
we have for all $a,b,c,x \in \mathbb{R}$
$$A(Ju(x,a,b,c))=A(Ju(-x,a,-b,-c)).$$
\end{lemma}
\begin{proof}
To alleviate notation, set $u=u(x,a,b,c)$.
By definition we have $Ju=u_1\exp(A(Ju))k$ for some $u_1 \in U(\mathbb{R})$ and some $k \in K_\infty$ (so that $\trans{k}=k^{-1}$).
Then $$ u \trans{u} =J\trans{u}^{-1}u^{-1}J^{-1}
=\trans{u_1}^{-1}\exp(-2A(Ju))u_1^{-1}.$$
The matrices $u_1$ and $A(Ju)$ can then be recovered from the entries of $u \trans{u}$ by an elementary calculation (see Lemma~\ref{uoppu} below).
In particular, we find $$A(Ju)=-\frac12\diag{a_1-a_2}{a_2}{a_2-a_1}{-a_2},$$ where
$a_1 =\log(1+x^2+a^2+b^2)$ and $a_2=\log((a(a-cx)+1-bc)^2+(x(a-cx)-b-c)^2)$.
Since both $a_1$ and $a_2$ are unchanged under $(x,a,b,c) \mapsto (-x,a,-b,-c)$, the lemma follows.
\end{proof}
\subsubsection{Klingen subgroup}
The {\bf Klingen subgroup} is
$$P_{\text{K}}=\left[ \begin{smallmatrix}*&&*&*\\ *&*&*&*\\ *&&*&*\\&&&*\end{smallmatrix}\right] \cap \mathrm{GSp}_4.$$
It has Levi decomposition $P_{\text{K}}=N_{\text{K}}M_{\text{K}}$, where
\begin{align*}
N_{\text{K}}=\left\{\mat{\block{1}{}{x}{1}}{\block{}{y}{y}{z}}{}{\block{1}{-x}{}{1}} : x,y,z \in \mathbb{F}\right\}
\text{ and }
M_{\text{K}}=\left\{\left[ \begin{smallmatrix}a&&b&\\ &t&&\\ c&&d&\\&&&t^{-1}\delta\end{smallmatrix}\right] : t \in \mathbb{F}^\times,\;
\delta = \det\left(\mat{a}{b}{c}{d}\right) \neq 0\right\}.
\end{align*}
The centre of $M_{\text{K}}$ is
$A_{\text{K}}=\left\{\left[ \begin{smallmatrix}u&&&\\&t&&\\&&u&\\&&&t^{-1}u^2\end{smallmatrix}\right] : t,u
\in \mathbb{F}^\times \right\}.$
\subsubsection{Siegel subgroup}
The {\bf Siegel subgroup} is
$$P_{\text{S}}=\left[ \begin{smallmatrix}*&*&*&*\\ *&*&*&*\\&&*&*\\&&*&*\end{smallmatrix}\right] \cap \mathrm{GSp}_4.$$
It has Levi decomposition $P_{\text{S}}=N_{\text{S}}M_{\text{S}}$, where
$$N_{\text{S}}=\left\{\mat{\block{1}{}{}{1}}{\block{x}{y}{y}{z}}{}{\block{1}{}{}{1}} : x,y,z \in \mathbb{F}\right\}
\text{ and }
M_{\text{S}}=\left\{ \mat{A}{}{}{t\trans{A^{-1}}} : A \in \mathrm{GL}_2(\mathbb{F}),\; t \in \mathbb{F}^\times \right\}.$$
The centre of $M_{\text{S}}$ is
$A_{\text{S}}=\left\{\left[ \begin{smallmatrix}u&&&\\&u&&\\&&tu^{-1}&\\&&&tu^{-1}\end{smallmatrix}\right] : t,u \in \mathbb{F}^\times \right\}.$
\subsection{Lie algebras and characters}
Following Arthur~\cite{ArthurIntro}, we parametrize the characters of the Levi components of the parabolic subgroups by the duals of the Lie algebras of their centres.
We fix $|\cdot|_\mathbb{A}=\prod_v|\cdot|_v$ the {\bf standard adelic absolute value}.
Let $P=M_PN_P$ be a standard parabolic subgroup, and let $A_P$ be the centre of $M_P$.
Then there is a surjective homomorphism
$$H_P: M_P(\mathbb{A}) \to \text{Hom}_\mathbb{Z}(X(M_P),\mathbb{R})$$ defined by
\begin{equation}\label{DefinitionOfHp}
\left(H_P(m)\right)(\chi)=\log(|\chi(m)|_\mathbb{A}),
\end{equation}
where we write $X(H)$ for the group of homomorphisms (of \textit{algebraic groups}) $H \to \mathrm{GL}_1$ that are defined over $\mathbb{Q}$.
On the other hand, we may identify the vector space $\text{Hom}_\mathbb{Z}(X(M_P),\mathbb{R})$ with the Lie algebra
$\mathfrak{a}_P \oplus \mathfrak{z}$ of $A_P(\mathbb{R})$ (where $\mathfrak{a}_P$ is the Lie algebra of $A_P(\mathbb{R}) \cap \text{Sp}_4(\mathbb{R})$ and $\mathfrak{z}$ is the Lie algebra of the centre).
If $\nu \in \mathfrak{a}_P^*(\mathbb{C}) \oplus \mathfrak{z}^*(\mathbb{C})$, then the map
\begin{equation}\label{DefOfChar}
m \mapsto \exp(\nu(H_P(m)))
\end{equation}
defines a character of $M_P(\mathbb{A})$. Moreover, characters of $Z(\mathbb{A})$
correspond to $\mathfrak{z}^*(\mathbb{C})$, while characters that are trivial on $Z(\mathbb{A})$ correspond to $\mathfrak{a}_P^*(\mathbb{C})$.
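To illustrate the map $H_P$ in the simplest case, take $P=B$, so that $M_B=T$. Using the coordinates $(x,y,t)$ of the torus fixed above (and assuming, as part of this sketch, that the three coordinate characters generate $X(T)\cong\mathbb{Z}^3$), the map $H_B$ simply records the adelic absolute values of the coordinates:
\[
m=\mat{\block{x}{}{}{y}}{}{}{\block{tx^{-1}}{}{}{ty^{-1}}}\in T(\mathbb{A})
\quad\Longrightarrow\quad
H_B(m)=\bigl(\log|x|_\mathbb{A},\;\log|y|_\mathbb{A},\;\log|t|_\mathbb{A}\bigr)
\]
in the coordinates of $\text{Hom}_\mathbb{Z}(X(T),\mathbb{R})\cong\mathbb{R}^3$ dual to these characters. In particular, $m$ lies in the kernel of $H_B$ if and only if all three coordinates have adelic absolute value one.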
\section{Representations}
\subsection{Generic representations}
\subsubsection{Generic characters} \label{GenericCharacters}
A character $\psi$ of $U(\mathbb{Q}) \backslash U(\mathbb{A})$ is said to be {\bf generic} if its differential is non-trivial
on each of the eigenspaces $\mathfrak{n}_\alpha$ corresponding to simple roots $\alpha$.
Explicitly, if $\theta$ is the standard additive character of $\mathbb{A}/\mathbb{Q}$ and $\mathbf{m}=(m_1, m_2) \in (\mathbb{Q}^\times)^2$,
the generic characters of $U(\mathbb{A})$ are given by
\begin{equation}\label{genchar}
\psi_{\mathbf{m}}\left(\left[\begin{smallmatrix}
1 & & c & a-cx \\
x & 1 & a & b \\
& & 1 & -x \\
& & & 1 \\
\end{smallmatrix}\right]\right)=\theta(m_1x+m_2c).
\end{equation}
Note that all generic characters may be obtained from each other by conjugation by an element of $T/Z$,
as we have for all $u \in U(\mathbb{A})$
\begin{equation}\label{gencharconj}
\psi_{\mathbf{m}}\left(u \right)=\psi_{\mathbf{1}}\left( t_{\mathbf{m}}^{-1} u t_{\mathbf{m}}\right),
\end{equation}
where
\begin{equation}\label{tm}
t_{\mathbf{m}}=\diag{m_1}{1}{m_1m_2}{m_1^2m_2}.
\end{equation}
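The identity~(\ref{gencharconj}) follows from a direct conjugation computation, which we record for convenience (a routine check, not needed elsewhere). Since $t_{\mathbf{m}}$ is diagonal, conjugating $u(x,a,b,c)$ by $t_{\mathbf{m}}=\diag{m_1}{1}{m_1m_2}{m_1^2m_2}$ rescales each entry $u_{ij}$ by the ratio of the corresponding diagonal entries, giving
\[
t_{\mathbf{m}}^{-1}\,u(x,a,b,c)\,t_{\mathbf{m}}=u\bigl(m_1x,\;m_1m_2\,a,\;m_1^2m_2\,b,\;m_2\,c\bigr),
\]
so that $\psi_{\mathbf{1}}(t_{\mathbf{m}}^{-1}ut_{\mathbf{m}})=\theta(m_1x+m_2c)=\psi_{\mathbf{m}}(u)$.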
\subsubsection{Whittaker coefficients and generic representations}
If $\phi$ is any automorphic form on $G(\mathbb{A})$ and $\psi$ a generic character, the {\bf $\psi$-Whittaker coefficient}
of $\phi$ is by definition
\begin{equation} \label{Whittaker}
\mathcal{W}_{\psi}(\phi) (g) = \int_{U(\mathbb{Q}) \backslash U(\mathbb{A})} \phi(ug) \psi(u)^{-1} du.
\end{equation}
We say that $\phi$ is $\psi$-generic if $\mathcal{W}_\psi(\phi)$ is not identically zero as a function of $g$.
Changing variables and using the left-$G(\mathbb{Q})$-invariance of $\phi$, note that we have
$$\mathcal{W}_{\psi_{\mathbf{m}}}(\phi) (g)=\frac1{|m_1^4m_2^3|}\mathcal{W}_{\psi_{\mathbf{1}}}(\phi) (t_{\mathbf{m}}^{-1}g).$$
In particular, $\phi$ is $\psi$-generic for some generic character $\psi$ if and only if it is $\psi$-generic for every
generic character $\psi$; hence we shall simply say that $\phi$ is {\bf generic}.
An irreducible automorphic representation $(\pi, V_\pi)$ is called generic if $V_\pi$ contains a generic automorphic form
$\phi$.
Equivalently, every automorphic form in the space of a generic irreducible automorphic representation $\pi$ is generic,
since otherwise the kernel of the map $\phi \mapsto \mathcal{W}_\psi(\phi)$ would be a non-trivial proper invariant subspace of $V_\pi$,
contradicting the irreducibility of $\pi$.
Since $U$ may as well be viewed as the unipotent part of the minimal parabolic subgroup of $\text{Sp}_4$,
we can define the Whittaker coefficients of automorphic forms $\phi$ on $\text{Sp}_4$ in the exact same way as~(\ref{Whittaker}), except that the argument is restricted
to $\text{Sp}_4(\mathbb{A})$. This gives a similar notion of generic automorphic forms and generic representations for $\text{Sp}_4$.
Later on, we shall restrict automorphic forms on $\mathrm{GSp}_4$ to $\text{Sp}_4$.
Let us briefly explain the corresponding operations on automorphic representations.
\begin{definition}\label{DefOfRes}
Let $(\pi,V_\pi)$ be an automorphic representation of $\mathrm{GSp}_4(\mathbb{A})$ realized by right translation on a subspace of $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))$.
We define a representation $\text{res } \pi$ of $\text{Sp}_4(\mathbb{A})$ as the action of $\text{Sp}_4(\mathbb{A})$ on $\left\{ \phi_{|\text{Sp}_4(\mathbb{A})}: \phi \in V_\pi\right\}$.
It is a quotient of the restriction $\mathrm{Res}\,\pi=\pi_{|\text{Sp}_4(\mathbb{A})}$.
\end{definition}
The representation $\text{res } \pi$ does not have finite length in general. However, the following shall be useful later on.
\begin{lemma}\label{genericrestriction}
Let $\pi$ be an irreducible automorphic representation of $G(\mathbb{A})$ that occurs discretely in $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))$.
Then $\pi$ is generic if and only if $\text{res } \pi$ has a generic constituent.
\end{lemma}
\begin{proof}
Fix a generic character $\psi$.
Note that for any automorphic form $\phi$ on $G(\mathbb{A})$ we have
$\mathcal{W}_{\psi}(\phi_{|\text{Sp}_4})=\left(\mathcal{W}_{\psi}(\phi)\right)_{|\text{Sp}_4}$.
From this, it is clear that if $\text{res } \pi$ has a generic constituent then $\pi$ is generic.
Let us show the converse.
Assume no constituent of $\text{res } \pi$ is generic, so that for all $\phi \in V_\pi$,
$$\mathcal{W}_\psi(\phi)_{|\text{Sp}_4(\mathbb{A})}=0.$$
Let $\phi \in V_\pi$ and $g \in G(\mathbb{A})$. Then $\pi(g)\phi\in V_\pi$, hence
$$\mathcal{W}_{\psi}(\phi)(g)=\mathcal{W}_{\psi}(\pi(g)\phi)(1)=0.$$
Thus $\pi$ is not generic.
\end{proof}
We now prove a similar lemma for the restriction of non-cuspidal representations.
\begin{lemma}\label{cuspidalrestriction}
Let $\pi$ be an irreducible automorphic representation of $G(\mathbb{A})$ that occurs discretely in $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))$.
Then $\pi$ is non-cuspidal if and only if $\text{res } \pi$ has no cuspidal constituent.
\end{lemma}
\begin{proof}
Recall that $\pi$ is cuspidal if the constant term
$$C_\phi(g)=\int_{U(\mathbb{Q}) \backslash U(\mathbb{A})} \phi(ug)du$$
of some (equivalently, any, since $\pi$ is irreducible) function $\phi$ in the space of $\pi$ vanishes identically.
The exact same proof as in Lemma~\ref{genericrestriction}, replacing the generic character $\psi$ by $1$, shows that $\pi$ is
non-cuspidal if and only if $\text{res } \pi$ has a non-cuspidal constituent.
However, we want to show that if $\pi$ is non-cuspidal, then $\text{res } \pi$ has no cuspidal constituent.
So suppose that $\text{res } \pi$ has a cuspidal constituent.
This means there is $\phi \in V_\pi$ such that $(C_\phi)_{|\text{Sp}_4(\mathbb{A})}=0$. We want to show that $C_\phi$ is identically zero on $\mathrm{GSp}_4(\mathbb{A})$.
Now, changing variables and using the left-invariance of $\phi$ under $\mathrm{GSp}_4(\mathbb{Q})$, if $t \in T(\mathbb{Q})$ then we have $C_\phi(tg)=C_\phi(g)$.
In addition, if $z\in Z(\mathbb{A})$ then $C_\phi(zg)=\omega_\pi(z)C_\phi(g)$. Moreover, since $\pi$ is an admissible representation, $\phi$~is right-invariant by $\mathrm{GSp}_4(\mathbb{Z}_p)$ for almost all primes $p$.
It follows that there exists a finite set of places $S$ such that for any $g \in \mathrm{GSp}_4(\mathbb{A})$,
if $\mu(g) \in \mathbb{Q}^\times (\mathbb{A}^\times)^2 \prod_{p \not\in S}\mathbb{Z}_p^\times$ then $C_\phi(g)=0$.
The following lemma concludes the proof.
\end{proof}
\begin{lemma}
Let $S$ be any finite set of places containing $\infty$.
We have $\mathbb{Q}^\times (\mathbb{A}^\times)^2 \prod_{p \not\in S} \mathbb{Z}_p^\times=\mathbb{A}^\times$.
\end{lemma}
\begin{proof}
Let $x \in \mathbb{A}^\times$.
By strong approximation, we have $x=qu$, with $q \in \mathbb{Q}^\times$ and $u \in \mathbb{R}_{>0}\prod_{p < \infty} \mathbb{Z}_p^\times$.
Now, by the Chinese Remainder Theorem, there exists an integer $n>0$ such that for all finite $p \in S$ we have $n u_p \in (\mathbb{Z}_p^\times)^2$.
For all $p \not\in S$, let $\epsilon_p \in \mathbb{Z}_p^\times$ be such that $\epsilon_p n u_p \in (\mathbb{Z}_p^\times)^2$.
Define $\epsilon_p=1$ for $p \in S$.
Then $n \epsilon u \in (\mathbb{A}^\times)^2$, and $x=(qn^{-1})(n \epsilon u)\prod_{p \not\in S}\epsilon_p^{-1}$.
\end{proof}
\subsection{The basic kernel}\label{basickernel}
The group $Z(\mathbb{Q})Z(\mathbb{R}) \backslash Z(\mathbb{A})$ is compact and acts on the Hilbert space $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))$ by
right translation.
Since $Z(\mathbb{Q})Z(\mathbb{R}) \backslash Z(\mathbb{A})$ is abelian, its irreducible representations are characters, thus
by the Peter--Weyl theorem we have
$$L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))=\bigoplus_{\omega}L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega),$$
where the orthogonal direct sum ranges over the characters of $Z(\mathbb{A})$ that are trivial on $Z(\mathbb{Q})Z(\mathbb{R})$, and
$L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$ is the subspace of $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}))$
of functions $\phi$ satisfying $$\phi(gz)=\omega(z)\phi(g)$$ for all $z \in Z(\mathbb{A})$.
Fix such a character $\omega$. If $f:G(\mathbb{A}) \to \mathbb{C}$ is a measurable function that satisfies
\begin{itemize}
\item $f(gz)=\overline{\omega}(z)f(g)$ for all $z \in Z(\mathbb{A})$,
\item $f$ is compactly supported modulo $Z(\mathbb{A})$,
\end{itemize}
then we define an operator $R(f)$ on $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$ by
$$R(f) \phi(x)=\int_{\overline{G}(\mathbb{A})}f(y)\phi(xy)dy,$$
where $\overline{G}$ denotes $G / Z$.
By $G(\mathbb{Q})$-invariance of $\phi$, we have
\begin{align*}
R(f) \phi(x)=\int_{\overline{G}(\mathbb{A})}f(x^{-1}y)\phi(y)dy
=\sum_{\gamma \in \overline{G}(\mathbb{Q})}\int_{\overline{G}(\mathbb{Q}) \backslash \overline{G}(\mathbb{A})}f(x^{-1}\gamma y)\phi(y)dy.
\end{align*}
Hence, setting
\begin{equation}\label{TheKernel}
K_f(x,y)=\sum_{\gamma \in \overline{G}(\mathbb{Q})}f(x^{-1}\gamma y),
\end{equation}
we have
\begin{equation}\label{basicequation}
R(f) \phi(x)=\int_{\overline{G}(\mathbb{Q}) \backslash \overline{G}(\mathbb{A})}K_f(x,y)\phi(y)dy.
\end{equation}
Now let us argue informally to motivate the more technical actual reasoning.
Let us pretend that $K_f(x,.)$ is an element of $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$, and that
$L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$ has an orthonormal basis $\mathcal{B}$ consisting of automorphic forms.
Then we would have
$$K(x,.)=\sum_{\phi \in \mathcal{B}} \langle K(x,.) | \overline{\phi} \rangle \overline{\phi}.$$
But equation~(\ref{basicequation}) says that $\langle K(x,.) | \overline{\phi} \rangle=R(f) \phi(x)$.
Thus we might expect a spectral expansion of the kernel of the form
\begin{equation}\label{informal}
K(x,y)=\sum_{\phi \in \mathcal{B}}R(f) \phi(x) \overline{\phi(y)}.
\end{equation}
If moreover each element $\phi$ of our basis $\mathcal{B}$ is an eigenfunction of the operator $R(f)$, say
\begin{equation}\label{evalue}
R(f)\phi=\lambda_f(\phi)\phi,
\end{equation}
then the above expansion becomes
$$K(x,y)=\sum_{\phi \in \mathcal{B}}\lambda_f(\phi) \phi(x) \overline{\phi(y)}.$$
Finally, integrating $K(x,y)$ on $U \times U$ against a character $\overline{\psi_1(x)}\psi_2(y)$ would then yield
a spectral equality involving the Whittaker coefficients and the eigenvalues $\lambda_f(\phi)$, of the form
\begin{equation}\label{informalspectral}
\int_{(U(\mathbb{Q}) \backslash U(\mathbb{A}))^{2}}K(x,y)\overline{\psi_1(x)}\psi_2(y)dxdy
=\sum_{\phi \in \mathcal{B}}\lambda_f(\phi) \mathcal{W}_{\psi_1}(\phi) \overline{\mathcal{W}_{\psi_2}(\phi)}.
\end{equation}
Note that in the last step we need~(\ref{informal}) to hold not only in the $L^2$ sense, but pointwise, as
$(U(\mathbb{Q}) \backslash U(\mathbb{A}))^{2}$ has measure zero.
Of course, $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$ \textit{does not} admit such an orthonormal basis, due to the presence
of continuous spectrum. However, after adding the proper continuous contribution,
a spectral expansion of the form~(\ref{informal}) has been proved by Arthur~\cite{ArthurSpectralExpansion}*{pages 928--934},
building on the spectral decomposition of $L^2(G(\mathbb{Q})Z(\mathbb{R}) \backslash G(\mathbb{A}),\omega)$ by Langlands.
We may then reduce from global to local as follows.
By general theory, we may choose the automorphic forms $\phi$ appearing in the spectral expansion of the kernel
to be factorizable vectors $\phi_\infty \otimes \bigotimes_p \phi_p$.
If moreover we take $f$ factorizable, say $f =f_\infty \prod_p f_p$, then the computation of $R(f) \phi$ reduces
to the computation of the action of each local component $f_v$ on $\phi_v$.
By choosing the local components $f_v$ appropriately, we can ensure that each $\phi_v$ is an eigenvector of the
operator corresponding to $f_v$, so that~(\ref{evalue}) holds.
The determination of $\lambda_f(\phi)$ then amounts, at the infinite place, to the study
of the spherical transform of $f_\infty$, and at the finite places $p$, to the study of the action of the local Hecke algebra.
Specifically, from now on we assume $f$ is as follows.
\begin{assumption}\label{testfunction}
From now on we assume $f = f_\infty \prod_p f_p$, where
\begin{itemize}
\item $f_\infty$ is any smooth, left and right $K_\infty$-invariant and $Z(\mathbb{R})$-invariant function on $G(\mathbb{R})$,
whose support is compact modulo the centre and contained in $G^+(\mathbb{R})=\{g \in G(\mathbb{R}) : \mu(g)>0 \}$;
\item for all primes $p$, $f_p$ is a left and right $\Gamma_p$-invariant function on $G(\mathbb{Q}_p)$, satisfying
$f_p(gz)=\overline{\omega_p}(z)f_p(g)$ for all $z \in Z(\mathbb{Q}_p)$, and compactly supported modulo the centre;
\item whenever $\Gamma_p \neq G(\mathbb{Z}_p)$, we have
$$f_p(g)=\begin{cases}
\frac{\overline{\omega_p}(z)}{\mathrm{Vol}(\overline{\Gamma_p})} & \text{ if there exists } z \in Z(\mathbb{Q}_p) \text{ such that } g \in z \Gamma_p,\\
0 & \text{ otherwise.}
\end{cases}$$
\end{itemize}
\end{assumption}
Note that this assumption can be fulfilled if and only if the following compatibility condition holds.
\begin{assumption}
For all primes $p$, the restriction of $\omega_p$ to $\Gamma_p \cap Z(\mathbb{Q}_p)$ is trivial.
\end{assumption}
Let us recall the following result~\cite{KL}*{Lemma~3.10}.
\begin{proposition}\label{Kfixed}
Let $G$ be a locally compact group, let $K \subset G$ be a closed subgroup, and let $\pi$ be a unitary representation of $G$
on a Hilbert space $V$ with central character $\omega$. Let $f$ be any left and right $K$-invariant function satisfying
\begin{itemize}
\item $f(gz)=\overline{\omega}(z)f(g)$ for all $z$ in the centre $Z$ of $G$,
\item $f$ is integrable on $G/Z$.
\end{itemize}
Then the operator $\overline{\pi}(f)$ on $V$ defined by
$$\overline{\pi}(f)v=\int_{G/Z}f(g)\pi(g)v\,dg$$
has its image in the $K$-fixed subspace $V^K$ and annihilates the orthogonal complement of this subspace.
\end{proposition}
Because of Assumption~\ref{testfunction}, this result implies that only $\Gamma$-fixed automorphic forms having central character $\omega$ will appear
in the spectral decomposition of $K_f$.
These automorphic forms come from admissible irreducible representations with central character $\omega$ and having a
$\Gamma$-fixed vector.
In turn, these representations factor as restricted tensor products of local representations having similar local properties.
Furthermore, only those automorphic forms $\phi$ that are generic will survive the integration against a generic character on $U$, hence we may restrict
attention to local representations that are generic.
\subsection{Non-Archimedean Hecke algebras}
Let $p$ be a prime number, and let $f_p$ be the local component of the function $f$ in Assumption~\ref{testfunction}.
Let $(\pi,V)$ be a unitary representation of $G(\mathbb{Q}_p)$ with central character $\omega_p$.
Throughout this section the Haar measure on $G(\mathbb{Q}_p)$ is normalised so that $K_p=G(\mathbb{Z}_p)$ has volume one.
By Proposition~\ref{Kfixed} we have an operator
\begin{equation}\label{localR}
\overline{\pi}(f_p)v=\int_{\overline{G}(\mathbb{Q}_p)} f_p(g)\pi(g)v\,dg
\end{equation}
acting on the $\Gamma_p$-fixed subspace $V^{\Gamma_p}$ and annihilating the orthogonal complement of this subspace.
First, let us consider the case $\Gamma_p \neq G(\mathbb{Z}_p)$.
Then any $\Gamma_p$-fixed vector $v \in V$ is also fixed by
$\overline{\pi}(f_p)$, since in this case $$\overline{\pi}(f_p)v=\frac1{\mathrm{Vol}(\overline{\Gamma_p})}\int_{\overline{\Gamma}_p} \pi(g)v\,dg=v.$$
We now turn to the situation $\Gamma_p=K_p=G(\mathbb{Z}_p)$ (in particular, the character $\omega_p$ must be unramified).
We have the following~\cite{RS2}*{Theorem~7.5.4}.
\begin{proposition}
Let $(\pi,V)$ be a generic, irreducible, admissible representation of $G(\mathbb{Q}_p)$.
Assume $\pi$ has a non-zero $K_p$-fixed vector.
Then $V^{K_p}$ has dimension $1$.
\end{proposition}
\begin{remark}
In~\cite{RS2}*{Theorem~7.5.4} it is assumed that $\pi$ has trivial central character.
However, in our situation, the fact that $\pi$ has a non-zero $K_p$-fixed vector forces the central character
to be unramified. We can thus twist our representation by an unramified character to reduce to the hypothesis of~\cite{RS2}.
\end{remark}
By definition, any non-zero vector $\phi$ in $V^{K_p}$ is then called the {\bf spherical vector}.
Since $\overline{\pi}(f_p)$ acts on $V^{K_p}$, which is one-dimensional, the spherical vector is an eigenvector of
$\overline{\pi}(f_p)$. Finally, let us relate the operator $\overline{\pi}(f_p)$ to the action of the unramified Hecke algebra.
The {\bf local Hecke algebra} $\mathscr{H}(K_p)$ is the vector space of left and right $K_p$-invariant
compactly supported functions $f:G(\mathbb{Q}_p) \to \mathbb{C}$, endowed with the convolution product
$$(f * h) (g)=\int_{G(\mathbb{Q}_p)}f(gx^{-1})h(x)dx.$$
If $(\pi,V)$ is a smooth representation of $G(\mathbb{Q}_p)$, then the Hecke algebra $\mathscr{H}(K_p)$ acts on the $K_p$-invariant subspace $V^{K_p}$ by
$$\pi(f)v=\int_{G(\mathbb{Q}_p)} f(g)\pi(g)v\,dg.$$
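For concreteness, here is how this action unwinds on the characteristic function of a single double coset; this is a standard computation and only uses the normalisation $\mathrm{Vol}(K_p)=1$ fixed above. If $K_pgK_p=\coprod_{i}g_iK_p$ is a finite decomposition into left cosets and $v\in V^{K_p}$, then
\[
\pi\bigl(\mathbbm{1}_{K_pgK_p}\bigr)v=\int_{K_pgK_p}\pi(x)v\,dx
=\sum_i\int_{g_iK_p}\pi(x)v\,dx
=\sum_i\pi(g_i)\int_{K_p}\pi(k)v\,dk
=\sum_i\pi(g_i)v,
\]
so the operator attached to $\mathbbm{1}_{K_pgK_p}$ is the familiar finite sum over coset representatives.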
\begin{lemma}\label{ModingOutCentre}
Let $f$ be a bi-$K_p$-invariant function on $G(\mathbb{Q}_p)$, with an (unramified) central character, and compactly supported modulo the centre.
There exist a compactly supported bi-$K_p$-invariant function $\tilde{f}$ on $G(\mathbb{Q}_p)$
and a complete set of representatives $\overline{G}$ of $G(\mathbb{Q}_p)/\mathbb{Q}_p^\times$ satisfying
$\tilde{f}(gz)=f(g)\mathbbm{1}_{\mathbb{Z}_p^{\times}}(z)$ for all $g \in \overline{G}$ and $z \in \mathbb{Q}_p^\times$.
\end{lemma}
\begin{proof}
By the Cartan decomposition we have $G(\mathbb{Q}_p)=\coprod_{\substack{i \le j \in \mathbb{Z}\\ t \in \mathbb{Z}}}K_p \diag{p^i}{p^j}{p^{t-i}}{p^{t-j}}K_p$.
Thus we have
$$G(\mathbb{Q}_p) / \mathbb{Q}_p^\times=\left.
\left(\coprod_{\substack{j \ge 0 \\ t \in \mathbb{Z}}} K_p \diag{1}{p^j}{p^t}{p^{t-j}}K_p\right) \middle/ \mathbb{Z}_p^\times. \right.$$
Fix a complete set of representatives $\overline{K_p}$ of $K_p/\mathbb{Z}_p^\times$.
Then $\overline{G}= \coprod_{\substack{j \ge 0 \\ t \in \mathbb{Z}}} \overline{K_p} \diag{1}{p^j}{p^t}{p^{t-j}} \overline{K_p}$
is a complete set of representatives of $G(\mathbb{Q}_p)/\mathbb{Q}_p^\times$.
Moreover, defining $$S= \coprod_{\substack{j \ge 0 \\ t \in \mathbb{Z}}} K_p \diag{1}{p^j}{p^t}{p^{t-j}}K_p \cap \mathrm{Supp}(f)=(\mathbb{Z}_p^\times \overline{G} )\cap \mathrm{Supp}(f),$$
the function $\tilde{f}=\mathbbm{1}_S \cdot f$ has the desired properties.
\end{proof}
Now the function $\tilde{f_p}$ attached to $f_p$ by Lemma~\ref{ModingOutCentre} is an element of the Hecke algebra,
and we have $\pi(\tilde{f_p})=\overline{\pi}(f_p)$, as
$$\pi(\tilde{f_p})v=\int_{G(\mathbb{Q}_p)} \tilde{f}(g)\pi(g)v\,dg
=\int_{G(\mathbb{Q}_p)/\mathbb{Q}_p^\times}\int_{\mathbb{Q}_p^\times} f_p(g)\mathbbm{1}_{\mathbb{Z}_p^{\times}}(z)\omega_p(z)\pi(g)v\,dz\,dg
=\overline{\pi}(f_p)v,$$
since $\omega_p$ is unramified, so that the inner integral contributes $\mathrm{Vol}(\mathbb{Z}_p^\times)=1$.
We summarize the above discussion in the following proposition.
\begin{proposition}\label{localevalue}
Let $p$ be a prime number, and let $f_p$ be the local component of the function $f$ in Assumption~\ref{testfunction}.
Let $(\pi,V)$ be a unitary representation of $G(\mathbb{Q}_p)$ with central character $\omega_p$.
Then the operator $\overline{\pi}(f_p)$ from Proposition~\ref{Kfixed} acts by a scalar $\lambda_\pi(f_p)$ on the $\Gamma_p$-fixed
subspace $V^{\Gamma_p}$ and annihilates the orthogonal complement of this subspace.
Moreover, if $\Gamma_p \neq G(\mathbb{Z}_p)$ then $\lambda_\pi(f_p)=1$, and if $\Gamma_p = G(\mathbb{Z}_p)$ then $\overline{\pi}(f_p)$
equals the Hecke operator $\pi(\tilde{f_p})$, where $\tilde{f_p}$ is given by Lemma~\ref{ModingOutCentre}.
\end{proposition}
\subsection{The Archimedean representation}
We first show that in our situation the representation at the Archimedean place must be an irreducible principal series representation, that is, fully induced from the Borel subgroup.
A representation of $G(\mathbb{R})$ which has a non-zero $K_\infty$-fixed vector is called {\bf spherical}.
\begin{proposition}\label{MustBePrincipalSeries}
Any generic irreducible spherical representation $(\pi, V)$ of $G(\mathbb{R})$ is a principal series representation.
\end{proposition}
The author wishes to thank Ralf Schmidt for communicating the following argument.
\begin{proof}
As explained at the end of~\cite{Vogan}, the generic representations are exactly the ``large'' ones,
\textit{i.e.}, those with maximal Gelfand--Kirillov dimension.
The Gelfand--Kirillov dimension of every irreducible representation of $\mathrm{GSp}_4(\mathbb{R})$ has been calculated in~\cite{thesis}*{Appendix~A}.
In particular, the maximal Gelfand--Kirillov dimension is 4, and the irreducible large representations are either discrete series
or limits of discrete series, representations induced from the Siegel parabolic subgroup, Langlands quotients of representations induced from the
Klingen subgroup, or principal series representations.
Now the multiplicity of each possible $K_{\infty}$-type is described in~\cite{thesis}*{Chapter~4},
and among the large representations of $\mathrm{GSp}_4(\mathbb{R})$ only the principal series representations contain the trivial $K_{\infty}$-type.
\end{proof}
It is then known that the trivial $K_\infty$-type occurs in $\pi$ with multiplicity one~\cite{MiyOda}, that is to say there is
a unique (up to scaling) $K_\infty$-fixed vector in the space $V$.
Moreover, $\pi$ has a unique {\bf Whittaker model}, and the image of a non-zero $K_\infty$-fixed vector is by definition the
{\bf Whittaker function}. The Whittaker function is an eigenfunction of the centre of the universal enveloping algebra,
which acts as an algebra of differential operators. One may then obtain a system of partial differential equations characterizing
the Whittaker function, and compute it explicitly. The Whittaker function may also be computed by means of the Jacquet integral.
This has been done by Niwa~\cite{Niwa} and Ishii~\cite{Ishii}.
\subsubsection{The spherical transform}
In this section we normalize the Haar measure on $\text{Sp}_4(\mathbb{R})$ so that $K_\infty$ has measure $1$.
If $h$ is any bi-$K_\infty$-invariant compactly supported function on $\text{Sp}_4(\mathbb{R})$, its {\bf spherical transform} is the function $\tilde{h}$
defined on $\mathfrak{a}^*(\mathbb{C})$ by
\begin{equation}\label{SphericalTransform}
\tilde{h}(\nu)=\int_{\text{Sp}_4(\mathbb{R})}h(g)\phi_{-\nu}(g)dg,
\end{equation}
where
\begin{equation}\label{sphericalfunction}
\phi_{-\nu}(g)=\int_{K_\infty}e^{(\rho-\nu)(A(kg))}dk
\end{equation}
is the {\bf spherical function} with parameter $-\nu$ (here $\rho$ is the half-sum of the positive roots).
\begin{proposition}\label{archimedeanevalue}
Let $f_\infty$ be the Archimedean component of the function $f$ in Assumption~\ref{testfunction}.
Let $(\pi,V)$ be a generic irreducible unitary representation of $G(\mathbb{R})$ with trivial central character.
Then the operator $\overline{\pi}(f_\infty)$ from Proposition~\ref{Kfixed} acts by a scalar $\lambda_\pi(f_\infty)$ on the $K_{\infty}$-fixed
subspace $V^{K_{\infty}}$ and annihilates the orthogonal complement of this subspace.
Moreover, provided the subspace $V^{K_\infty}$ is non-zero, $\pi$ is a principal series representation
and $\lambda_\pi(f_\infty)=\tilde{f_\infty}(-\nu)$, where $\tilde{f_\infty}$ is the spherical transform of $f_\infty$ and $\nu$ is
the spectral parameter of $\pi$.
\begin{proof}
If $V^{K_\infty}$ is zero then by Proposition~\ref{Kfixed} the statement is vacuous.
Assume now that $\pi$ has a non-zero $K_\infty$-fixed vector.
By Proposition~\ref{MustBePrincipalSeries}, $\pi$ is then a principal series representation.
Then $V^{K_\infty}$ is one-dimensional, so if $v$ is any $K_\infty$-fixed vector in $V$ then
we have
\begin{equation}\label{sptr}
\pi(f_\infty)v=\lambda_\pi(f_\infty)v
\end{equation}
for some complex number $\lambda_\pi(f_\infty)$.
Since $\pi$ is induced from a character of the Borel subgroup,
to compute the eigenvalue $\lambda_\pi(f_\infty)$ we may realize $\pi$ as acting by right translation on
a space of functions $\phi$ satisfying, for all $g \in G(\mathbb{R})$, $n \in U(\mathbb{R})$ and $a \in T^{+}(\mathbb{R})$,
\begin{equation}\label{stdspace}
\phi(nag)=e^{(\rho+\nu)(\log(a))}\phi(g),
\end{equation}
where $\nu \in \mathfrak{a}^*(\mathbb{C})$ is the {\bf spectral parameter} of $\pi$.
We may view a $Z(\mathbb{R})$-invariant function supported on $G(\mathbb{R})^+$ as a function on $\text{Sp}_4(\mathbb{R})$, so
the operator $\overline{\pi}(f_\infty)$ of Proposition~\ref{Kfixed} is given by
\begin{equation}\label{infinitR}
\overline{\pi}(f_\infty)v=\int_{\overline{G}(\mathbb{R})} f_\infty(g)\pi(g)v\,dg=\int_{\text{Sp}_4(\mathbb{R})} f_\infty(g)\pi(g)v\,dg.
\end{equation}
If $\phi$ is a non-zero $K_\infty$-fixed function satisfying~(\ref{stdspace}) then, because of the Iwasawa decomposition, we must have
$\phi(1) \neq 0$.
Using the integration formula~\cite{Helgason}*{Ch.~I~Corollary~5.3} and right-$K_\infty$-invariance we may compute
\begin{align*}
\pi(f_\infty) \phi(1) &=\int_{\text{Sp}_4(\mathbb{R})} f_\infty(g)\pi(g) \phi(1) dg \\
&=\int_{K_{\infty}}\int_{U A^+}f_\infty(an)\phi(an)\,da\,dn\,dk
=\int_{U A^+}f_\infty(an)e^{(\rho+\nu)(\log(a))}\,da\,dn\;\phi(1),
\end{align*}
where $A^+$ is the subgroup of $A(\mathbb{R})$ with positive diagonal entries.
Therefore, using the Iwasawa decomposition and the left-$K_\infty$-invariance of $f_\infty$,
the eigenvalue $\lambda_\pi(f_\infty)$ is given by
\begin{align*}
\lambda_\pi(f_\infty)&=\int_{\text{Sp}_4(\mathbb{R})} f_\infty(g)e^{(\rho+\nu)(A(g))}dg\\
&=\int_{K_\infty}\int_{\text{Sp}_4(\mathbb{R})} f_\infty(g)e^{(\rho+\nu)(A(kg))}dg\,dk
=\int_{\text{Sp}_4(\mathbb{R})}f_\infty(g)\phi_{\nu}(g)dg=\tilde{f_\infty}(-\nu).
\end{align*}
\end{proof}
The spherical transform $\tilde{f_\infty}$ will thus play the role of the test function on the spectral side of our formula.
On the other hand, the geometric side will involve some different integral transform of our test function $f_\infty$.
It is therefore natural to investigate the analytic properties of $\tilde{f_\infty}$, and to seek to recover $f_\infty$ from $\tilde{f_\infty}$.
This can be achieved by the Paley--Wiener theorem and the Harish-Chandra inversion theorem.
\subsubsection{The Paley--Wiener theorem and the Harish-Chandra inversion theorem}
The material in this section is taken from~\cite{Helgason}.
Let us introduce a bit of notation.
We denote by $\langle \cdot, \cdot \rangle$ the Killing form on the Lie algebra of $\text{Sp}_4(\mathbb{R})$,
and we define for each $\nu \in \mathfrak{a}^*$ a vector $A_\nu \in \mathfrak{a}$ by
$\nu(H)=\langle A_\nu, H\rangle$ for all $H \in \mathfrak{a}$.
We then define $\scal{\lambda}{\nu}=\langle A_\lambda, A_\nu \rangle$.
We define $\mathfrak{a}_+$ as the subset of elements $H \in \mathfrak{a}$ satisfying $\alpha(H) > 0$ for all $\alpha \in \Phi_{\text{B}}$,
and $\mathfrak{a}^*_+=\{\nu \in \mathfrak{a}^* : A_\nu \in \mathfrak{a}_+ \}$.
Explicitly, the Killing form is given by $\langle X, Y \rangle =6\,\mathrm{Tr}(XY)$ and
$\mathfrak{a}_+=\left\{\diag{x}{y}{-x}{-y}: 0<x<y \right\}$.
Harish-Chandra's {\bf $c$-function} captures the asymptotic behaviour of the spherical function and it gives the Plancherel measure.
More precisely, by Theorem~6.14 of~\cite{Helgason}*{Chap.~IV}, if $H \in \mathfrak{a}^+$ and $\nu \in \mathfrak{a}^*_+$
then we have
$$\lim_{t \to +\infty}e^{(-\nu+\rho)(tH)}\phi_{-i\nu}(\exp(tH))=c(-i\nu).$$
Moreover, $c(\nu)$ is given, for $\nu \in \mathfrak{a}^*_+$, by the absolutely convergent integral
\begin{equation}\label{cfunction}
c(\nu)
=\int_{U(\mathbb{R})}e^{(\nu+\rho)(A(Ju))}du,
\end{equation}
where the measure $du$ is normalized so that $c(\rho)=1$,
and it has a meromorphic continuation to $\mathfrak{a}^*(\mathbb{C})$ given in our situation by the expression
$$c(-i\nu)=
c_0\prod_{\alpha \in \Phi}
\frac{2^{-\scal{i\nu}{\alpha_0}}\Gamma(\scal{i\nu}{\alpha_0})}
{ \Gamma\left(\frac{\frac32+\scal{i\nu}{\alpha_0}}2\right)
\Gamma\left(\frac{\frac12+\scal{i\nu}{\alpha_0}}2\right)},$$
where $\Phi$ is the set of positive roots,
$\alpha_0=\frac{\alpha}{\scal{\alpha}{\alpha}}$ and the constant $c_0$
is such that $c(\rho)=1$.
Using the duplication formula $\Gamma(z)\Gamma(z+\frac12)=\pi^{\frac12}2^{1-2z}\Gamma(2z)$, we can rewrite this as
$$c(-i\nu)=
\frac{c_0}{4\pi^2}\prod_{\alpha \in \Phi}
\frac{\Gamma(\scal{i\nu}{\alpha_0})}{\Gamma(\frac12+\scal{i\nu}{\alpha_0})}.$$
We then have the following two theorems.
\begin{theorem}[Paley--Wiener theorem]\label{PW}
Let $\mathcal{PW}^R(\mathfrak{a}^*_\mathbb{C})$ be the set of $\Omega$-invariant entire functions $h$ on $\mathfrak{a}^*_\mathbb{C}$ such that for all
$N \ge 0$ we have
$$h(\nu) \ll_N (1+|\nu|)^{-N}e^{R|\Re(\nu)|}.$$
Let $$ \mathcal{PW}(\mathfrak{a}^*_\mathbb{C})=\bigcup_{R>0}\mathcal{PW}^R(\mathfrak{a}^*_\mathbb{C}).$$
Then the spherical transform $f \mapsto \tilde{f}$ is a bijection
from $C^\infty_c(K_\infty\backslash \text{Sp}_4(\mathbb{R}) / K_\infty)$ to $\mathcal{PW}(\mathfrak{a}^*_\mathbb{C})$.
\end{theorem}
\begin{theorem}[Inversion theorem]\label{sphericalinversion}
There is a constant $c$ such that for every function $f \in C^\infty_c(K_\infty\backslash \text{Sp}_4(\mathbb{R}) / K_\infty)$ and
for all $g\in \text{Sp}_4(\mathbb{R})$ we have
\begin{equation}\label{inversion}
cf(g)=\int_{\mathfrak{a}^*} \tilde{f}(-i\nu)\phi_{-i\nu}(g)\frac{d\nu}{c(i\nu)c(-i\nu)}.
\end{equation}
\end{theorem}
\begin{remark}
The constant $c$ may be worked out from Exercise~C.4 of~\cite{Helgason}*{Chap.~IV}.
\end{remark}
\begin{remark}
Using the formulae $\Gamma(iz)\Gamma(-iz)=\frac{\pi}{z\sinh(\pi z)}$ and $\Gamma(\frac12-iz)\Gamma(\frac12+iz)=\frac{\pi}{\cosh(\pi z)}$,
the Plancherel measure is given by
\begin{equation}\label{Plancherel}
\frac{d\nu}{c(i\nu)c(-i\nu)}=\frac{16\pi^4}{c_0^2}\prod_{\alpha \in \Phi}\scal{\nu}{\alpha_0}\tanh(\pi\scal{\nu}{\alpha_0})\,d\nu.
\end{equation}
\end{remark}
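For the reader's convenience, here is the short computation behind~(\ref{Plancherel}); it only uses the product expression for $c(-i\nu)$ obtained above and the two Gamma identities just quoted. Writing $s_\alpha=\scal{\nu}{\alpha_0}$ for $\nu\in\mathfrak{a}^*$, so that $\scal{i\nu}{\alpha_0}=is_\alpha$, we get
\[
c(i\nu)c(-i\nu)
=\frac{c_0^2}{16\pi^4}\prod_{\alpha \in \Phi}\frac{\Gamma(is_\alpha)\Gamma(-is_\alpha)}{\Gamma(\tfrac12+is_\alpha)\Gamma(\tfrac12-is_\alpha)}
=\frac{c_0^2}{16\pi^4}\prod_{\alpha \in \Phi}\frac{\pi/(s_\alpha\sinh(\pi s_\alpha))}{\pi/\cosh(\pi s_\alpha)}
=\frac{c_0^2}{16\pi^4}\prod_{\alpha \in \Phi}\frac{1}{s_\alpha\tanh(\pi s_\alpha)},
\]
and inverting gives~(\ref{Plancherel}).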
\subsubsection{The Whittaker function and the Jacquet integral}
As mentioned above, the Whittaker function is a non-zero $K_\infty$-fixed vector in the Whittaker model, and it is unique up to scaling.
It is given by (the meromorphic continuation of) the Jacquet integral.
Namely, if $\psi$ is a generic character of $U(\mathbb{R})$, we have the {\bf Jacquet integral}
\begin{equation}\label{jacquet}
W(\nu, g, \psi)=\int_{U(\mathbb{R})} e^{(\rho+\nu)(A(Jug))}\overline{\psi(u)}du.
\end{equation}
The Jacquet integral converges absolutely for $\Re(\nu) \in \mathfrak{a}^*_+$, as may be seen by
using the absolute convergence of~(\ref{cfunction}) and computing
\begin{equation}\label{majoration}
|W(\nu, g, \psi)| \le
\int_{U(\mathbb{R})}|e^{(\nu + \rho)(A(Jug))}| du =
e^{(\rho-\Re(\nu))(A(g))}c(\Re(\nu)).
\end{equation}
Moreover, it has a meromorphic continuation to all $\nu \in \mathfrak{a}^*_\mathbb{C}$.
Ishii~\cite{Ishii} computed explicit integral representations for the normalized Jacquet integral
$$\mathcal{W}(\nu,g,\psi)=\frac1{4\pi^2}\prod_{\alpha \in \Phi} \Gamma\left(\frac12+\scal{\nu}{\alpha_0}\right)W(\nu,g,\psi),$$
namely (note the different choice of minimal parabolic subgroup there): if $a=\diag{a_1}{a_2}{a_1^{-1}}{a_2^{-1}} \in A^+(\mathbb{R})$ then for any
$\nu \in \mathfrak{a}^*_\mathbb{C}$
\begin{equation}\label{NiwaIntegral}
\begin{aligned}
\mathcal{W}(\nu,a,\psi)=2a_1a_2^2 &\int_0^\infty \int_0^\infty K_{\frac{\nu_2-\nu_1}{2}}(2\pi v_1)K_{\frac{\nu_1+\nu_2}2}(2 \pi v_2)\\
& \times \exp\left(-\pi\left(\frac{a_2^2}{v_1v_2}+\frac{v_1v_2}{a_1^2}+a_1^2\left(\frac{v_1}{v_2}+\frac{v_2}{v_1}\right)\right)\right)\frac{dv_1dv_2}{v_1v_2}.
\end{aligned}
\end{equation}
This implies in particular that the normalized Jacquet integral satisfies the functional equations
\begin{equation}\label{whittakerftneq}
\mathcal{W}(\sigma \cdot \nu,g,\psi)=\mathcal{W}(\nu,g,\psi)
\end{equation}
for all $\sigma \in \Omega$.
If $t \in A^+(\mathbb{R})$ and if we denote by $\psi_t$ the character $\psi_t(u)=\psi(t^{-1}ut)$, then it is easy to see
(first by a change of variables in the domain where the Jacquet integral is absolutely convergent, then by meromorphic continuation)
that
\begin{equation}\label{whittakertorus}
W(\nu,g,\psi_t)=e^{(\rho-\nu)(\log(t))}W(\nu, t^{-1}g,\psi).
\end{equation}
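Let us sketch the change of variables giving~(\ref{whittakertorus}), assuming $\Re(\nu)\in\mathfrak{a}^*_+$ so that all integrals converge absolutely; the factor $e^{2\rho(\log t)}$ below is the Jacobian of conjugation by $t$ on $U(\mathbb{R})$. Substituting $u=tu't^{-1}$ (so $du=e^{2\rho(\log t)}du'$) and using $Jt=t^{-1}J$ together with $A(t^{-1}h)=A(h)-\log t$ for $t\in A^+(\mathbb{R})$, we obtain
\[
W(\nu,g,\psi_t)
=\int_{U(\mathbb{R})}e^{(\rho+\nu)(A(Ju't^{-1}g)-\log t)}\overline{\psi(u')}\,e^{2\rho(\log t)}du'
=e^{(\rho-\nu)(\log t)}W(\nu,t^{-1}g,\psi),
\]
as claimed.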
\begin{remark}\label{WonA}
By Lemma~\ref{A(Ju)} and the change of variables $u(x,a,b,c) \mapsto u(-x,a,-b,-c)$, if $t \in A^+(\mathbb{R})$ then we have
$W(\nu, t,\psi)=W(\nu, t, \overline{\psi}).$
\end{remark}
\subsubsection{Wallach's Whittaker transform}
The following theorem is a consequence of~\cite{Wallach}*{Ch.~15}.
Let $L^2(U \backslash \text{Sp}_4(\mathbb{R}) /K_\infty, \psi)$ be the space of functions $f$ on $\text{Sp}_4(\mathbb{R})$ satisfying, for all
$u \in U(\mathbb{R})$, all $k \in K_\infty$
and all $g \in \text{Sp}_4(\mathbb{R})$,
$$f(ugk)=\psi(u)f(g) \text{ and } \int_{U \backslash \text{Sp}_4(\mathbb{R})} |f(g)|^2dg < \infty.$$
\begin{theorem}[Wallach's Whittaker inversion]\label{WallWhit}
Define for $\alpha \in C^{\infty}_c(\mathfrak{a}^*)$
\begin{equation*}
\mathscr{T} (\alpha)(a)=\int_{\mathfrak{a}^*}\alpha(\nu)W(-i\nu,a,\psi)\frac{d\nu}{c(i\nu)c(-i\nu)}.
\end{equation*}
Then the image of the linear map $\mathscr{T}$ is a dense subset $\mathscr{W} \subset L^2(U \backslash \text{Sp}_4(\mathbb{R}) /K_\infty, \psi)$
containing $C^\infty_c(U \backslash \text{Sp}_4(\mathbb{R})/K_\infty, \psi)$, and $\mathscr{T}$ extends to a unitary operator onto
$L^2(U \backslash \text{Sp}_4(\mathbb{R}) /K_\infty, \psi)$.
Moreover, the inverse of $\mathscr{T}$ is given for all $f \in \mathscr{W}$ by the {\bf Whittaker transform}
\begin{equation*}
W(f)(\nu)=c \int_{A^+} f(a)W(i\nu,a,\psi)e^{-2\rho (\log a)}da,
\end{equation*}
where the constant $c$ is the same as in Theorem~\ref{sphericalinversion}.
\end{theorem}
\subsubsection{An integral transform}
Let $g \in G(\mathbb{R})$, $t \in A^+(\mathbb{R})$ and let $\psi$ be a generic character of $U(\mathbb{R})$.
When dealing with the geometric side of the relative trace formula,
we shall be interested in the integral $$I(f_\infty)=\int_{U(\mathbb{R})}f_\infty(tug)\overline{\psi}(u)du.$$
Using expression~(\ref{sphericalfunction}) and applying Theorem~5.20 of~\cite{Helgason}*{Ch.~I}, which relates integration
on $K_\infty$ to integration on $U(\mathbb{R})$, one may establish the following identity for all $\nu \in \mathfrak{a}^*_\mathbb{C}$:
\begin{equation}\label{sphericalU}
\phi_{\nu}(g)=\int_{U(\mathbb{R})}
e^{(\rho+\nu)(A(Jug))}e^{(\rho-\nu)(A(Ju))}du.
\end{equation}
From this identity, ignoring convergence issues and treating the integrals as if they were
absolutely convergent, one may heuristically expect the following:
\begin{equation}\label{heuristics}
\int_{U(\mathbb{R})}\phi_\nu(tug)\overline{\psi(u)}du
=W(\nu,g,\psi)W(-\nu,t^{-1},\overline{\psi}).
\end{equation}
However, the domains of absolute convergence of the two Jacquet integrals on the right-hand side are complementary to each other,
and the integral on the left-hand side is likely not absolutely convergent, making such a result,
where the left-hand side is (optimistically) a semi-convergent integral and the right-hand side is defined by meromorphic continuation,
likely difficult to prove.
Carrying on with this heuristic and using Theorem~\ref{sphericalinversion}, let us write
\begin{align*}
cI(f_\infty)&=\int_{U(\mathbb{R})}\int_{\mathfrak{a}^*} \tilde{f_\infty}(-i\nu)\phi_{-i\nu}(tug)\frac{d\nu}{c(i\nu)c(-i\nu)}\overline{\psi}(u)du \\
&=\int_{\mathfrak{a}^*}\tilde{f_\infty}(-i\nu)\int_{U(\mathbb{R})} \phi_{-i\nu}(tug)\overline{\psi}(u)du\frac{d\nu}{c(i\nu)c(-i\nu)}\\
&=\int_{\mathfrak{a}^*}\tilde{f_\infty}(-i\nu)W(-i\nu,g,\psi)W(i\nu,t^{-1},\overline{\psi})\frac{d\nu}{c(i\nu)c(-i\nu)}.
\end{align*}
Unlike~(\ref{heuristics}), this last equality seems more reasonable.
Indeed, the left-hand side is absolutely convergent because $f_\infty$ is compactly supported,
and on the right-hand side $\tilde{f_\infty}$ has rapid decay.
We now give a rigorous proof of the following.
We now give a rigorous proof of the following.
\begin{theorem}\label{GeomTransform}
Let $f_\infty$ be a smooth, bi-$K_\infty$, compactly supported function on $\text{Sp}_4(\mathbb{R})$.
Let $g \in G(\mathbb{R})$, $t \in A^+(\mathbb{R})$ and $\mathfrak{p}si$ a generic character of $U(\mathbb{R})$.
Then we have
$$c\int_{U(\mathbb{R})}f_\infty(tug)\mathfrak{o}verline{\mathfrak{p}si}(u)du=\int_{\mathbf mathcal Lie{a}^*}\tilde{f_\infty}(-i\texttt{n}u)W(-i\texttt{n}u,g,\mathfrak{p}si)W(i\texttt{n}u,t^{-1},
\mathfrak{o}verline{\mathfrak{p}si})\bm{f}rac{d\texttt{n}u}{c(i\texttt{n}u)c(-i\texttt{n}u)},$$
where $W(\texttt{n}u,\cdot,\mathfrak{p}si)$ is the $\mathfrak{p}si$-Whittaker function of the principal series with spectral parameter~$\texttt{n}u$.
{\bf{e}}nd{theorem}
\begin{proof}
Both sides transform on the left by $U(\mathbb{R})$ according to $\mathfrak{p}si$, and are $K_\infty$-invariant.
Thus by the Iwasawa decomposition, it suffices to prove it for $g=a \in A^+(\mathbb{R})$.
Also, by~(\ref{whittakertorus}), we may restrict ourself to $t=1$.
With notations of Theorem~\ref{WallWhit}, we have
$$
\int_{\mathbf mathcal Lie{a}^*}\tilde{f_\infty}(-i\texttt{n}u)W(-i\texttt{n}u,a,\mathfrak{p}si)W(i\texttt{n}u,1,\mathfrak{o}verline{\mathfrak{p}si})\bm{f}rac{d\texttt{n}u}{c(i\texttt{n}u)c(-i\texttt{n}u)}=
\mathbf mathscr{T}(\mathbf mathfrak{a}lpha)(a),$$
where $$\mathbf mathfrak{a}lpha(\texttt{n}u)= \tilde{f_\infty}(-i\texttt{n}u) W(i\texttt{n}u,1,\mathfrak{o}verline{\mathfrak{p}si}).$$
Moreover $g \mathbf mapsto \int_{U(\mathbb{R})}f_\infty(ug)\mathfrak{o}verline{\mathfrak{p}si}(u)du$ belongs to $C^\infty_c(U \backslash \text{Sp}_4(\mathbb{R}) /K, \mathfrak{p}si)$
since $f_\infty$ is smooth and compactly supported. Hence
by Wallach's Whittaker inversion it suffices to show that
for all $\texttt{n}u \in \mathbf mathcal Lie{a}^*$ we have
\begin{equation}\label{toprove}
\mathbf mathfrak{a}lpha(\texttt{n}u) = \int_{{A}^+(\mathbb{R})}e^{-2\rho \log a} \int_{U(\mathbb{R})}f_\infty(ua)\mathfrak{o}verline{\mathfrak{p}si}(u)duW(i\texttt{n}u,a,\mathfrak{p}si)da.
{\bf{e}}nd{equation}
Since both sides are meromorphic in $\texttt{n}u$, it suffices to show this for
$\mathbb{R}e(i\texttt{n}u) \in \mathbf mathcal Lie{a}^*_+$. In this region, the Jacquet integral
$W(i\texttt{n}u,a,\mathfrak{p}si)=\int_{U(\mathbb{R})} e^{(\rho+i\texttt{n}u)(A(Ju_1a))}\mathfrak{o}verline{\mathfrak{p}si(u_1)}du_1$ converges absolutely.
By Remark~\ref{WonA}, the integral in~(\ref{toprove}) may then be written as
\begin{align*}
\int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})}f_\infty(au)\mathfrak{o}verline{\mathfrak{p}si}(aua^{-1})duW(i\texttt{n}u,a,\mathfrak{o}verline{\mathfrak{p}si})da
=\int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})}f_\infty(au)W(i\texttt{n}u,au,\mathfrak{o}verline{\mathfrak{p}si})&duda\\
=\int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})}f_\infty(au)\int_{U(\mathbb{R})} e^{(\rho+i\texttt{n}u)(A(Ju_1au))}\mathfrak{p}si(u_1)du_1&duda
{\bf{e}}nd{align*}
Write $Ju_1=n{\bf{e}}xp(A(Ju_1))k_0(Ju_1)$ with $n \in U(\mathbb{R})$ and $k_0(Ju_1) \in K_\infty$.
Then $A(Ju_1au)=A(Ju_1)+A(k_0au)$.
So the integral we have to evaluate becomes
\begin{align*}
\int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})}& e^{(\rho+i\texttt{n}u)(A(Ju_1)}\mathfrak{p}si(u_1)\int_{U(\mathbb{R})}f_\infty(au)e^{(\rho+i\texttt{n}u)(A(k_0(Ju_1)au))}dudu_1da\\
&= \int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})} e^{(\rho+i\texttt{n}u)(A(Ju_1)}\mathfrak{p}si(u_1)\int_{\text{Sp}_4(\mathbb{R})}f_\infty(g)e^{(\rho+i\texttt{n}u)(A(k_0(Ju_1)g))}dgdu_1da\\
&= \int_{{A}^+(\mathbb{R})}\int_{U(\mathbb{R})} e^{(\rho+i\texttt{n}u)(A(Ju_1)}\mathfrak{p}si(u_1)\int_{\text{Sp}_4(\mathbb{R})}f_\infty(g)e^{(\rho+i\texttt{n}u)(A(g))}dgdu_1da\\
&=W(i\texttt{n}u,1,\mathfrak{o}verline{\mathfrak{p}si})\tilde{f}(-i\texttt{n}u).
{\bf{e}}nd{align*}
{\bf{e}}nd{proof}
\subsubsection{Estimates for the Whittaker function}
We close this section with some estimates for the Whittaker function that will be used later on.
We begin by recalling the following estimate for the Bessel $K$-functions.
\begin{lemma}\label{EstimateBessel}
Let $\sigma>0$. For $\Re(\nu)\in [-\sigma,\sigma]$ and all $\epsilon>0$ we have
$$
K_\nu(u) \ll \left\lbrace\begin{array}{ccc}
(1+|\Im(\nu)|)^{\sigma+\epsilon} u^{-\sigma-\epsilon} & \mbox{if} & 0<u\le 1+\frac{\pi}{2}|\Im(\nu)|,\\
u^{-\frac12} e^{-u} & \mbox{if} & u > 1+\frac{\pi}{2}|\Im(\nu)|.
\end{array}
\right.
$$
\end{lemma}
In the following lemma we have only used trivial bounds and have not sought optimality.
\begin{lemma}\label{TrivialWhittaker}
Let $\sigma>0$. Let $\nu \in \mathfrak{a}^*_{\mathbb{C}}$ with $-\sigma < \frac{\Re(\nu_1-\nu_2)}{2}, \frac{\Re(\nu_1+\nu_2)}{2} < \sigma$ and $a \in A^{+}(\mathbb{R})$.
For simplicity, set $r_1=\frac{|\Im(\nu_1-\nu_2)|}{2}$ and $r_2=\frac{|\Im(\nu_1+\nu_2)|}{2}$.
Then for all $\epsilon>0$ we have
\begin{align*}
\mathcal{W}(\nu,a,\psi) &\ll (1+r_1)^{\sigma+1+\epsilon} (1+r_2)^{\sigma+1+\epsilon}a_1a_2^{-2\sigma-\epsilon}\\
&+(1+r_1)^{-\frac32} (1+r_2)^{-\frac32}a_1a_2^{2}\\
&+(1+r_1)^{\sigma+\epsilon}(1+r_2)^{-(\sigma+\frac52+\epsilon)}a_1^{-2\sigma-1-\epsilon}a_2^2\\
&+(1+r_1)^{-(\sigma+\frac52+\epsilon)}(1+r_2)^{\sigma+\epsilon}a_1^{-2\sigma-1-\epsilon}a_2^2.
\end{align*}
\end{lemma}
\begin{proof}
This follows directly from the explicit integral representation~(\ref{NiwaIntegral}) and Lemma~\ref{EstimateBessel}.
\end{proof}
\begin{proposition}\label{WhittakerSpectral}
Let $a \in A^{+}(\mathbb{R})$.
Then, for $\Re(\nu)$ small enough, we have for all $\epsilon >0$
$$W(\nu,a,\psi) \ll_{\Re(\nu),a} \prod_{\alpha \in \Phi} |\langle\Im(\nu),\alpha_0\rangle|^{2|\langle\Re(\nu),\alpha_0\rangle|+\epsilon}.$$
\end{proposition}
\begin{proof}
Observe that, if $\Re(\nu) \in \mathfrak{a}^*_+$, then the claim follows from the trivial bound~(\ref{majoration}).
Next, if $\Re(\nu)$ belongs to any open Weyl chamber, there is $\sigma \in \Omega$ such that $\Re(\sigma \cdot \nu) \in \mathfrak{a}^*_+$.
The functional equation~(\ref{whittakerftneq}) gives
$$W(\nu,a,\psi)=\prod_{\alpha \in \Phi} \frac{\Gamma\left(\frac12+\langle\sigma \cdot \nu,\alpha_0\rangle\right)}{\Gamma\left(\frac12+\langle \nu,\alpha_0\rangle\right)}W(\sigma \cdot\nu,a,\psi).$$
Since the Weyl group acts by permutation on the set of (positive and negative) roots, the product can be written as
$$\prod_{\alpha \in \Phi_\sigma} \frac{\Gamma\left(\frac12-\langle\nu,\alpha_0\rangle\right)}{\Gamma\left(\frac12+\langle \nu,\alpha_0\rangle\right)}
\ll \prod_{\alpha \in \Phi_\sigma} |\langle\Im(\nu),\alpha_0\rangle|^{-2\langle\Re(\nu),\alpha_0\rangle},$$
where $\Phi_\sigma$ is the set of positive roots whose image under $\sigma$ is a negative root; here we have used
that $|\Gamma(x+iy)| \sim \sqrt{2\pi}e^{-\frac{\pi}{2}|y|}|y|^{x-\frac12}$ as $|y| \to \infty$, so that the exponential factors in the numerator and denominator cancel, and that the numerator has no poles because
$\Re(\nu)$ is small enough.
But if $\sigma \cdot \alpha$ is a negative root then $\langle\Re(\nu),\alpha_0\rangle <0$, and so we are done in this case as well.
Finally, if $\Re(\nu)$ belongs to a wall of a Weyl chamber, by Lemma~\ref{TrivialWhittaker} we may apply the Phragm{\'e}n--Lindel{\"o}f principle
to deduce the result.
\end{proof}
\section{Eisenstein series and the spectral decomposition}
The purpose of Eisenstein series is to describe the continuous spectrum.
The latter is an orthogonal direct sum over standard parabolic subgroups $P$, each summand of which
is a direct integral parametrized by $i\mathfrak{a}^*_P$.
Eisenstein series give intertwining operators from certain representations induced from
$M_P$ to the corresponding part of the continuous spectrum.
One thus wants to define $E(\cdot,\phi,\nu)$ for $\phi$ in the space $\mathscr H_P$ of the aforementioned
induced representation and for $\nu \in i\mathfrak{a}^*_P$.
Because of convergence issues, one initially defines $E(\cdot,\phi,\nu)$ for $\phi$
lying in a certain dense subspace of automorphic forms $\mathscr H_P^0 \subset \mathscr H_P$ and for $\nu \in \mathfrak{a}^*_P(\mathbb{C})$ with large enough real part.
The definition is then extended to all $\phi$ in the completion of $\mathscr H_P^0$ and to all
$\nu \in \mathfrak{a}^*_P(\mathbb{C})$. Our exposition follows Arthur, and in particular~\cite{ArthurIntro}.
\subsection{Definition of Eisenstein series}~\label{DefOfES}
Fix a standard parabolic subgroup $P=N_PM_P$ throughout this section, let $A_P$ be the centre of $M_P$,
and let $A_P^+(\mathbb{R})$ be the connected component of $1$ in $A_P(\mathbb{R})$.
Let $R_{M_P, \text{disc}}$ be the restriction of the right regular representation of $M_P(\mathbb{A})$
to the subspace of $L^2(M_P(\mathbb{Q})A_P^+(\mathbb{R})\backslash M_P(\mathbb{A}))$ that decomposes discretely.
For $\nu \in \mathfrak{a}_P^*(\mathbb{C})$, consider the twist
$R_{M_P, \text{disc}, \nu}(x)=R_{M_P, \text{disc}}(x)e^{\nu (H_{M_P}(x))}$.
The continuous spectrum is described via the Eisenstein series in terms of the induced representation
$$\mathcal{I}_P(\nu)=\text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}(I_{N_P(\mathbb{A})} \otimes R_{M_P, \text{disc}, \nu} ).$$
The space of this induced representation is independent of $\nu$ and is given in the following definition.
\begin{definition}\label{defofHP}
Let $P=N_PM_P$ be the Levi decomposition of $P$,
let $A_P$ be the centre of $M_P$ and let $A_P^+(\mathbb{R})$ be the connected component of $1$ in $A_P(\mathbb{R})$.
We define $\mathscr H_P$ to be the Hilbert space obtained by completing
the space $\mathscr H_P^0$ of functions
\begin{equation}\label{Hp}
\phi : N_P(\mathbb{A}) M_P(\mathbb{Q}) A_P^+(\mathbb{R}) \backslash G(\mathbb{A}) \to \mathbb{C}
\end{equation} such that
\begin{enumerate}
\item \label{discretness}
for any $x \in G(\mathbb{A})$, the function $M_P(\mathbb{A}) \to \mathbb{C}, m \mapsto \phi(mx)$
is $\mathscr Z_{M_P}$-finite, where $\mathscr Z_{M_P}$ is the centre of the universal
enveloping algebra of $\mathfrak{M}_P(\mathbb{C})$,
\item \label{Kfinitness} $\phi$ is right $K$-finite,
\item \label{L2} $\| \phi\|^2 = \int_K \int_{A_P^+(\mathbb{R}) M_P(\mathbb{Q}) \backslash M_P(\mathbb{A})} |\phi(mk)|^2dm\, dk
< \infty.$
\end{enumerate}
\end{definition}
Then the representation $\mathcal{I}_P(\nu)$ acts on $\mathscr H_P$ via
$$(\mathcal{I}_P(\nu,y) \phi)(x)=\phi(xy)\exp((\nu+\rho_P)(H_P(xy)))\exp(-(\nu+\rho_P)(H_P(x))),$$
and is unitary for $\nu \in i\mathfrak{a}_P^*$.
We now define the Eisenstein series attached to $P$.
The map $H_P$ extends to $P(\mathbb{Q}) \backslash G(\mathbb{A})$ by setting $H_P(nmk)=H_P(m)$ (for $n \in N_P, m \in M_P, k \in K$), so that
the expression in the following proposition is well defined.
\begin{proposition}\label{defofES}
For $\nu \in \mathfrak{a}_P^*(\mathbb{C})$ with large enough real part, $x \in G(\mathbb{A})$ and $\phi \in \mathscr H_P^0$, the
{\bf Eisenstein series}
$$E(x, \phi, \nu)= \sum_{\delta \in P(\mathbb{Q}) \backslash G(\mathbb{Q})} \phi(\delta x) \exp((\nu+\rho_P)(H_P(\delta x)))$$
converges absolutely.
\end{proposition}
Langlands provided the analytic continuation of Eisenstein series, as well as the spectral decomposition of $L^2(Z(\mathbb{R})G(\mathbb{Q}) \backslash G(\mathbb{A}))$.
The latter gives a decomposition of the right regular representation $R$ as a direct sum over association classes of parabolic subgroups.
The class of $G$, viewed as a parabolic subgroup of itself, gives the {\bf discrete spectrum}.
It consists on the one hand of cuspidal functions on $Z(\mathbb{R})G(\mathbb{Q}) \backslash G(\mathbb{A})$ and on the other hand
of residues of Eisenstein series attached to proper parabolic subgroups.
The contribution of the other classes is given by direct integrals of the corresponding induced representations and constitutes the
{\bf continuous spectrum}.
We now describe explicitly the Eisenstein series that are relevant for us.
\subsection{Action of the centre and of the compact \texorpdfstring{$\Gamma$}{Gamma}}
Since our test function $f$ is bi-$\Gamma$-invariant and has central character $\omega$,
the Eisenstein series occurring in the spectral expansion of its kernel $K_f$ come only from the subspaces of $\mathscr H_P$ satisfying the analogous properties
(see Lemma~\ref{UnprovedLemma} below for a formal justification).
Using the Peter--Weyl theorem, we can reduce further:
\begin{lemma}\label{CentralCharactersofP}
Let $P$ be a standard parabolic subgroup and $A_P$ its centre.
Let $\mathscr H_P^\Gamma(\omega)$ be the closed subspace of $\mathscr H_P$
consisting of functions $\phi$ such that for all $z \in Z(\mathbb{A})$ and $k \in \Gamma$ we have $\phi(zgk)=\omega(z)\phi(g)$.
Then
\begin{equation}\label{DecompOfCentralChar}
\mathscr H_P^\Gamma(\omega)=\bigoplus_{\chi} \mathscr H_P^\Gamma(\chi),
\end{equation}
where the orthogonal direct sum ranges over the characters $\chi$ of $A^+_P(\mathbb{R})A_P(\mathbb{Q}) \backslash A_P(\mathbb{A})$ that coincide with
$\omega$ on $Z(\mathbb{A})$, and $\mathscr H_P^\Gamma(\chi)$ is the subspace of $\mathscr H_P^\Gamma(\omega)$
consisting of functions $\phi$ such that for all $z \in A_P(\mathbb{A})$, $\phi(zg)=\chi(z)\phi(g)$.
\end{lemma}
\subsection{Explicit description of Eisenstein series}
If $R_{M_P,\text{disc}}=\bigoplus_{\pi} \pi = \bigoplus_\pi \left(\bigotimes_v \pi_v \right)$ is the decomposition of
$R_{M_P,\text{disc}}$ into irreducible representations $\pi = \bigotimes_v \pi_v$ of $M_P(\mathbb{A}) / A_P^+(\mathbb{R})$, then we have
$$\mathcal{I}_P(\nu)=\bigoplus_{\pi} \mathcal{I}_P(\pi_\nu) = \bigoplus_\pi \left(\bigotimes_v \mathcal{I}_P(\pi_{v,\nu})\right).$$
Moreover, the representation space of each $\mathcal{I}_P(\pi_\nu)$ does not depend on $\nu$.
Hence, to describe the spaces $\mathscr H_P^\Gamma(\chi)$ it suffices to describe
\begin{itemize}
\item the irreducible representations $\pi$ with central character $\chi$ occurring in $R_{M_P,\text{disc}}$,
\item the $\Gamma$-fixed subspace of each representation $\mathcal{I}_P(\pi_\nu)$.
\end{itemize}
By the Iwasawa decomposition, elements of this space may be viewed as families of functions indexed by $K / \Gamma$ satisfying a
compatibility condition that we now make explicit.
We also prove that the Archimedean part of $\mathcal{I}_P(\pi_\nu)$ is a principal series representation, and we provide its spectral parameter.
\subsubsection{Borel Eisenstein series}
\begin{lemma}\label{BorelSpectralParameter}
The irreducible representations occurring in $R_{T,\text{disc}}$ are precisely the characters $\chi$ of $T^+(\mathbb{R})T(\mathbb{Q}) \backslash T(\mathbb{A})$.
Let $\chi$ be such a character and $\nu \in i\mathfrak{a}^*$.
The Archimedean part of $\mathcal{I}_B(\chi_\nu)$ is an irreducible principal series representation with spectral parameter $\nu$.
\end{lemma}
\begin{proof}
The first part holds because $T^+(\mathbb{R})T(\mathbb{Q}) \backslash T(\mathbb{A})$ is abelian.
For the second part, since $\chi_\infty=1$ we have $\mathcal{I}_B(\chi_\nu)_\infty= \mathcal{I}_B(e^\nu)$, which is irreducible
because $\nu \in i\mathfrak{a}^*$ (see~\cite{Muic}*{Lemma~5.1}).
\end{proof}
Characters $\chi$ of $T^+(\mathbb{R})T(\mathbb{Q}) \backslash T(\mathbb{A})$ that coincide with $\omega$ on $Z(\mathbb{A})$ are in one-to-one
correspondence with triples $(\omega_1, \omega_2, \omega_3)$ of characters of $\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times$ satisfying
$\omega_1 \omega_2 \omega_3^2 = \omega$, via
$$\chi\left(\left[ \begin{smallmatrix}x&&&\\&y&&\\&&tx^{-1}&\\&&&ty^{-1}\end{smallmatrix}\right]\right)=\omega_1(x)\omega_2(y) \omega_3(t).$$
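The constraint $\omega_1\omega_2\omega_3^2=\omega$ can be read off from this parametrisation; as a quick sanity check (using only the formula above), taking $x=y=z$ and $t=z^2$ produces the scalar matrix $z\cdot 1$, so that
$$\chi(z\cdot 1)=\omega_1(z)\,\omega_2(z)\,\omega_3(z^2)=(\omega_1\omega_2\omega_3^2)(z),$$
which must agree with $\omega(z)$.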
Define a character of $B$ by
$$\omega\left(\left[ \begin{smallmatrix}x&*&*&*\\ &y&*&*\\&&tx^{-1}&*\\&&&ty^{-1}\end{smallmatrix}\right]\right)
=\omega_1(x)\omega_2(y)\omega_3(t)$$
(note that this notation is sound, as it coincides with our original $\omega$ on scalar matrices).
\begin{proposition}\label{Borelsummand}
Let $\chi=(\omega_1,\omega_2,\omega_3)$ with $\omega_1 \omega_2 \omega_3^2 = \omega$.
Consider a family $(\phi_k)_{k \in K / \Gamma}$ such that
\begin{enumerate}
\item for all $k$, $\phi_k \in \mathbb{C}$,
\item \label{Borelcompatibility} if $\gamma \in K \cap B(\mathbb{A})$ then for all $k$,
$\phi_k=\chi(\gamma^{-1})\phi_{\gamma k}$.
\end{enumerate}
Then the function on $G(\mathbb{A})$ given for $u \in U(\mathbb{A}), t \in T(\mathbb{A}), k \in K$ by
\begin{equation}\label{Borelnmk}
\phi(utk)= \chi(t) \phi_k
\end{equation}
is well-defined and belongs to ${\mathscr H_B}^\Gamma(\chi)$.
Moreover, every function in ${\mathscr H_{B}}^\Gamma(\chi)$ is of this form.
\end{proposition}
\begin{proof}
We first prove that $\phi$ is well-defined.
Suppose $u_1t_1k_1=u_2t_2k_2$.
In particular $k_1k_2^{-1}=(u_1t_1)^{-1}(u_2t_2) \in B(\mathbb{A}) \cap K$.
Therefore
$$
\chi(t_1)\phi_{k_1}= \chi(t_1)\chi({k_1k_2^{-1}})\phi_{k_2}
=\chi(t_2)\phi_{k_2}.$$
Next we show that $\phi$ indeed belongs to ${\mathscr H_{B}}^\Gamma(\omega_1, \omega_2, \omega_3)$.
The fact that $\phi$ is invariant on the left by $U(\mathbb{A})T(\mathbb{Q})T(\mathbb{R})^+$, the right invariance by $\Gamma$ and the fact that $\phi$ transforms
under $T(\mathbb{A})$ according to $\chi$ are obvious from the definition. Finally,
\begin{align*}
\int_K \int_{T(\mathbb{R})^+ T(\mathbb{Q}) \backslash T(\mathbb{A})} |\phi(mk)|^2dm\,dk
&=\int_K \int_{(\mathbb{R}_{>0} \mathbb{Q}^\times \backslash \mathbb{A}^\times)^3} |\phi_k|^2dm\,dk\\
&=Vol(\mathbb{R}_{>0} \mathbb{Q}^\times \backslash \mathbb{A}^\times)^3\,Vol(\Gamma)\sum_{k \in K / \Gamma} |\phi_k|^2
<\infty
\end{align*}
since $\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times$ is compact and $K / \Gamma$ is finite.
Finally, we show that we thus exhaust all of ${\mathscr H_{B}}^\Gamma(\omega_1, \omega_2,\omega_3)$.
Let $\phi \in {\mathscr H_{B}}^\Gamma(\omega_1, \omega_2,\omega_3)$. Define
$$\phi_k=\phi(k).$$
Then it is clear that equation~(\ref{Borelnmk}) holds.
As for condition~(\ref{Borelcompatibility}), note that if $\gamma=t_\gamma u_\gamma \in K \cap B(\mathbb{A})$
with $t_\gamma \in T(\mathbb{A})$ and $u_\gamma \in U(\mathbb{A})$ then
\begin{align*}
\phi_{\gamma k}&=\phi(\gamma k)\\
&=\phi (t_\gamma u_\gamma k) = \chi(t_\gamma) \phi(k)\\
&=\chi(\gamma)\phi_k.
\end{align*}
\end{proof}
\begin{remark}
Consider the action of $K \cap B(\mathbb{A})$ on $K / \Gamma$ by multiplication on the left.
Then the compatibility condition~(\ref{Borelcompatibility}) can only be met (with not all $\phi_k$ zero) if $\omega$ is trivial on
the stabilizer of each element of $K / \Gamma$.
In this case, the dimension of ${\mathscr H_{B}}^\Gamma(\chi)$ is the number of distinct orbits.
\end{remark}
\subsubsection{Klingen Eisenstein series}~\label{KES}
Characters $\chi$ of $A_{\text{K}}^+(\mathbb{R})A_{\text{K}}(\mathbb{Q}) \backslash A_{\text{K}}(\mathbb{A})$ that coincide with $\omega$ on $Z(\mathbb{A})$ are in one-to-one
correspondence with pairs $(\omega_1, \omega_2)$ of characters of $\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times$ satisfying
$\omega_1 \omega_2 = \omega$, via
$$\chi\left(\diag{u}{t}{u}{t^{-1}u^2}\right)=\omega_1(u)\omega_2(t).$$
For convenience, if $A=\mat{a}{b}{c}{d} \in \mathrm{GL}_2$, define
$\iota_A=\left[ \begin{smallmatrix}a&&b&\\ &1&&\\ c&&d&\\&&&\det(A)\end{smallmatrix}\right]\in M_{\text{K}},$
and if
$$
p=\left[ \begin{smallmatrix}
a & &b & * \\
* &t &* & * \\
c & &d & * \\
& & & t^{-1}\det(A)
\end{smallmatrix}\right],
$$
define $\sigma_{\text K}(p)=A$ and $t(p)=t$.
We may extend $\sigma_{\text K}$ to all of $P_{\text{K}}(\mathbb{A})$ by setting $\sigma_{\text K}(nm)=\sigma_{\text K}(m)$ (and similarly for $t$),
and we may view $\omega_2$ as the character of $P_{\text{K}}(\mathbb{A})$ defined by $\omega_2(p)=\omega_2(t(p))$.
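As a quick illustration of these maps (a sketch using only the shapes just introduced): for $A=\mat{a}{b}{c}{d}$ one has $\sigma_{\text K}(\iota_A)=A$ and $t(\iota_A)=1$, while for the torus element appearing above one has
$$\sigma_{\text K}\left(\diag{u}{t}{u}{t^{-1}u^2}\right)=\mat{u}{}{}{u}, \qquad t\left(\diag{u}{t}{u}{t^{-1}u^2}\right)=t,$$
consistent with the formula $\chi=\omega_1(u)\omega_2(t)$ and with Lemma~\ref{KlingenInducing} below.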
\begin{lemma}\label{KlingenInducing}
Let $\chi=(\omega_1,\omega_2)$ with $\omega_1\omega_2=\omega$.
The irreducible representations with central character $\chi$ occurring in $R_{M_{\text{K}},\text{disc}}$ are the twists $\omega_2 \otimes \pi$, where $\pi$
occurs in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ and has central character $\omega_1$.
\end{lemma}
\begin{proof}
Let $\pi$ be an irreducible representation with central character $\chi$ occurring in $R_{M_{\text{K}},\text{disc}}$.
By definition, we may realize $\pi$ in the subspace of $L^2(M_{\text{K}}(\mathbb{Q})A_{\text{K}}^+(\mathbb{R})\backslash M_{\text{K}}(\mathbb{A}))$ consisting of functions with central character
$\chi$.
This space identifies with $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}),\omega_1)$ via
$\phi \mapsto \left( \left[ \begin{smallmatrix}
a & &b & \\
&t & & \\
c & &d & \\
& & & t^{-1}\det(A)
\end{smallmatrix}\right] \mapsto \omega_2(t)\phi(\mat{a}{b}{c}{d}) \right).$
\end{proof}
\begin{proposition}\label{Klingensummand}
Let $\chi=(\omega_1,\omega_2)$ with $\omega_1\omega_2=\omega$.
Let $(\pi,V_\pi)$ occur in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ with central character $\omega_1$.
Consider a family $(\phi_k)_{k \in K / \Gamma}$ such that
\begin{enumerate}
\item for all $k$, $\phi_k \in V_\pi$,
\item \label{Klingencompatibility} if $\gamma \in K \cap P_{\text{K}}(\mathbb{A})$ then for all $k$,
$\phi_k(\cdot\, \sigma_{\text K}(\gamma))=\omega_2 \circ t(\gamma^{-1})\phi_{\gamma k}$.
\end{enumerate}
Then the function on $G(\mathbb{A})$ given for $n \in N_{\text{K}}(\mathbb{A}), m \in M_{\text{K}}(\mathbb{A}), k \in K$ by
\begin{equation}\label{Klingennmk}
\phi(nmk)= \omega_2 \circ t(m)\, \phi_k(\sigma_{\text K}(m))
\end{equation}
is well-defined and belongs to ${\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$.
Moreover, every function in ${\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$ is of this form.
\end{proposition}
\begin{remark}\label{KlingenMaassForm}
Condition~(\ref{Klingencompatibility}) implies that each $\phi_k$ is right $SO_2(\mathbb{R})$-invariant (and hence must be an adelic Maa{\ss} form or a character).
Indeed, let $v \le \infty$ and let $k_v$ be a compact subgroup of $\mathrm{GL}_2(\mathbb{Q}_v)$ such that
$$\left\{\iota_A : A \in k_v\right\} \subset K_v.$$
Assume moreover that $K_v=\Gamma_v$. Then $K / \Gamma$ is left invariant under $\Gamma_v$, hence for all
$A \in k_v$ we have $\phi_k( \cdot A)=\phi_{\iota_A k}=\phi_k$.
In particular, for $v=\infty$ we may take $k_v=O_2(\mathbb{R})$, hence the claim.
\end{remark}
\begin{proof}
We first prove that $\phi$ is well-defined.
Suppose $n_1m_1k_1=n_2m_2k_2$.
In particular $k_2k_1^{-1}=(n_2m_2)^{-1}(n_1m_1) \in P_{\text{K}}(\mathbb{A}) \cap K$.
Therefore $\sigma_{\text K}(m_1)=\sigma_{\text K}(n_1m_1)=\sigma_{\text K}(n_2m_2k_2k_1^{-1})=\sigma_{\text K}(m_2)\sigma_{\text K}(k_2k_1^{-1})$.
Then
\begin{align*}
\omega_2 \circ t(m_1)\, \phi_{k_1}(\sigma_{\text K}(m_1)) &= \omega_2 \circ t(m_1)\,\phi_{k_1}(\sigma_{\text K}(m_2)\sigma_{\text K}(k_2k_1^{-1})) \\
&= \omega_2 \circ t(m_1)\, \omega_2 \circ t(k_1k_2^{-1})\, \phi_{k_2}(\sigma_{\text K}(m_2)) = \omega_2 \circ t(m_2)\,\phi_{k_2}(\sigma_{\text K}(m_2)).
\end{align*}
Next we show that $\phi$ indeed belongs to ${\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$.
The fact that $\phi$ is invariant on the left by $N_{\text{K}}(\mathbb{A})M_{\text{K}}(\mathbb{Q})A_{\text{K}}^+(\mathbb{R})$ and the right invariance by $\Gamma$ are obvious from the definition.
The fact that $\phi$ is square integrable follows from
\begin{align*}
\int_K \int_{A_{\text{K}}^+(\mathbb{R}) M_{\text{K}}(\mathbb{Q}) \backslash M_{\text{K}}(\mathbb{A})} &|\phi(mk)|^2dm\,dk
=\int_K \int_{A_{\text{K}}^+(\mathbb{R}) M_{\text{K}}(\mathbb{Q}) \backslash M_{\text{K}}(\mathbb{A})} |\phi_k(\sigma_{\text K}(m))|^2dm\,dk\\
&=\sum_{k \in K / \Gamma} Vol(\Gamma) \int_{\mathbb{R}_{>0} \mathrm{GL}_2(\mathbb{Q}) \backslash \mathrm{GL}_2(\mathbb{A})} \int_{\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times}
|\phi_k(x)|^2dt\,dx
<\infty
\end{align*}
since $\phi_k$ is square integrable, $\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times$ is compact and $K / \Gamma$ is finite.
Finally, we need to show that for all $g=nmk$, the function $\phi_g : M_{\text{K}}(\mathbb{A}) \to \mathbb{C}, m_1 \mapsto \phi(m_1g)$
transforms under $M_{\text{K}}(\mathbb{A})$ on the right according to $\omega_2 \otimes \pi$.
Indeed, for $m_1 \in M_{\text{K}}(\mathbb{A})$ we have
$$\phi_g(m_1)=\phi(m_1nmk)=\phi(\underbrace{m_1nm_1^{-1}}_{\in N_{\text{K}}(\mathbb{A})}m_1mk)=\omega_2 \circ t (m)\, \omega_2 \circ t(m_1)\, \phi_k(\sigma_{\text K}(m_1)\sigma_{\text K}(m)),$$
hence the claim since $\phi_k \in V_\pi$.
As a last point, we show that ${\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$ consists exactly of such functions.
Let $\phi \in {\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$. Define
$$\phi_k(A)=\phi(\iota_A k).$$
Then it is clear that equation~(\ref{Klingennmk}) holds.
As for condition~(\ref{Klingencompatibility}), note that if $\gamma=n_\gamma m_\gamma \in K \cap P_{\text{K}}(\mathbb{A})$ then
\begin{align*}
\phi_k(A\sigma_{\text K}(\gamma))&=\phi(\iota_A\iota_{\sigma_{\text K}(\gamma)}k)\\
&=\phi \left(\iota_A \left[ \begin{smallmatrix}1&&&\\&t(\gamma)^{-1}&&\\&&1&\\&&&t(\gamma)\end{smallmatrix}\right]
m_\gamma k \right)\\
&=\omega_2 \circ t (\gamma^{-1})\, \phi(\iota_A n_\gamma^{-1} \gamma k)\\
&=\omega_2 \circ t (\gamma^{-1})\, \phi(\underbrace{\iota_A n_\gamma^{-1} \iota_A^{-1}}_{\in N_{\text{K}}(\mathbb{A})} \iota_A \gamma k)\\
&=\omega_2 \circ t (\gamma^{-1})\, \phi_{\gamma k}(A).
\end{align*}
Finally, by definition of ${\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)}$, the function $m \mapsto \phi(mk)$ transforms
under $M_{\text{K}}(\mathbb{A})$ on the right according to $\omega_2 \otimes \pi$, from which it follows that $\phi_k$ transforms according to $\pi$.
\end{proof}
Finally we prove the following.
\begin{proposition} \label{KlingenSpectralParameter}
Let $(\pi,V_\pi)$ occur in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ with central character $\omega_1$.
Then $\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)$ has a $K_\infty$-fixed vector if and only if $\pi$ has an $O_2(\mathbb{R})$-fixed vector.
In this case, $\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)_\infty$ is generic if and only if $\pi_\infty$ is a principal series.
Finally, if $\pi_\infty$ is a spherical principal series with spectral parameter $s$ and $\nu \in i\mathfrak{a}_{M_{\text{K}}}^*$,
then $\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)_\infty$
is a principal series representation with spectral parameter $\nu+\nu_{\text{K}}(s)$, where $\nu_{\text{K}}(s)$ is the element of $\mathfrak{a}^*(\mathbb{C})$
corresponding to the character $\diag{y^{\frac12}}{u}{y^{-\frac12}}{u^{-1}} \mapsto |y|^s$.
\end{proposition}
\begin{proof}
The first claim follows immediately from Proposition~\ref{Klingensummand}.
By the spectral decomposition for $\mathrm{GL}_2$, if $\pi$ has an $O_2(\mathbb{R})$-fixed vector then $\pi_\infty$ is either
a character or a principal series.
But representations induced from a character of the Klingen subgroup are not generic. This shows the second claim.
Finally, assume $\pi_\infty$ is a spherical principal series of $\mathrm{GL}_2$ with spectral parameter $s$.
Then we may view $\pi_\infty$ as the representation of $\mathrm{PGL}_2(\mathbb{R})$ induced from the character
$\chi_s:\mat{y^{\frac12}}{x}{}{\pm y^{-\frac12}} \mapsto \left|y\right|^s$,
where $s$ is either an imaginary number or a real number with $0<|s|<\frac12$.
Define the following subgroups:
$N_1= \left[ \begin{smallmatrix}
1 & &* & \\
&1 & & \\
& &1 & \\
& & & 1
\end{smallmatrix}\right] $,
$A_1=\left\{\diag{y^{\frac12}}{1}{\pm y^{-\frac12}}{1}:y \neq 0\right\}$, $M_1=\{\iota_A : A \in \mathrm{PGL}_2(\mathbb{R})\}$.
Note that $N_1N_{\text{K}}=U$, $A_1A_{\text{K}}(\mathbb{R})=T(\mathbb{R})$ and $M_1A_{\text{K}}=M_{\text{K}}$.
We may view $\chi_s$ as a character of $A_1N_1$.
Since $\omega_2$ is trivial on $A_{\text{K}}(\mathbb{R})$, inducing in stages, we get
\begin{align*}
\mathcal{I}_{P_{\text{K}}}((\omega_2 \otimes \pi)_\nu)_\infty &=\text{Ind}_{P_{\text{K}}(\mathbb{R})}^{G(\mathbb{R})}\left(I_{N_{\text{K}}(\mathbb{R})} \otimes e^\nu \otimes \pi_{\infty}\right)\\
&=\text{Ind}_{P_{\text{K}}(\mathbb{R})}^{G(\mathbb{R})}\left(I_{N_{\text{K}}(\mathbb{R})} \otimes e^\nu \otimes \text{Ind}_{A_1N_1}^{M_1}(\chi_s)\right)\\
&=\text{Ind}_{P_{\text{K}}(\mathbb{R})}^{G(\mathbb{R})}\text{Ind}_{B(\mathbb{R})}^{P_{\text{K}}(\mathbb{R})}\left(I_{N_{\text{K}}(\mathbb{R})} \otimes I_{N_1} \otimes e^{\nu+\nu_{\text{K}}(s)}\right)\\
&=\text{Ind}_{B(\mathbb{R})}^{G(\mathbb{R})}(I_U \otimes e^{\nu+\nu_{\text{K}}(s)}).
\end{align*}
Since $\nu \in i \mathfrak{a}_{M_{\text{K}}}^*$, by Lemma~5.1 of~\cite{Muic} this representation is irreducible.
\end{proof}
\subsubsection{Siegel Eisenstein series}~\label{PES}
Characters $\chi$ of $A_{\text{S}}^+(\mathbb{R})A_{\text{S}}(\mathbb{Q}) \backslash A_{\text{S}}(\mathbb{A})$ that coincide with $\omega$ on $Z(\mathbb{A})$ are in one-to-one
correspondence with pairs $(\omega_1, \omega_2)$ of characters of $\mathbb{R}_{>0}\mathbb{Q}^\times \backslash \mathbb{A}^\times$ satisfying
$\omega_1 \omega_2^2 = \omega$, via
$$\chi\left(\diag{u}{u}{tu^{-1}}{tu^{-1}}\right)=\omega_1(u)\omega_2(t).$$
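As in the Borel case, the constraint $\omega_1\omega_2^2=\omega$ is forced by the scalar matrices: as a quick check with the parametrisation above, taking $u=z$ and $t=z^2$ gives the scalar matrix $z\cdot 1$, so
$$\chi(z\cdot 1)=\omega_1(z)\,\omega_2(z^2)=(\omega_1\omega_2^2)(z),$$
which must coincide with $\omega(z)$.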
For convenience, if $A \in \mathrm{GL}_2$, define
$\iota_A=\mat{A}{}{}{ \trans{A}^{-1}} \in M_{\text{S}},$
and if $p=\mat{A}{*}{}{t\trans{A}^{-1}} \in P_{\text{S}}$, define
$\sigma_{\text S}(p)=A$, extended to $P_{\text{S}}(\mathbb{A})$ by $\sigma_{\text S}(nm)=\sigma_{\text S}(m)$.
\begin{lemma}\label{SiegelInducing}
Let $\chi=(\omega_1,\omega_2)$ with $\omega_1\omega_2^2=\omega$.
The irreducible representations with central character $\chi$ occurring in $R_{M_{\text{S}},\text{disc}}$ are the twists $\omega_2 \otimes \pi$, where $\pi$
occurs in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ and has central character $\omega_1$.
\end{lemma}
\begin{proof}
Same as the proof of Lemma~\ref{KlingenInducing}, with trivial modifications where required.
\end{proof}
\begin{proposition}\label{Siegelsummand}
Let $\chi=(\omega_1,\omega_2)$ with $\omega_1\omega_2^2=\omega$.
Let $(\pi,V_\pi)$ occur in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ with central character $\omega_1$.
Consider a family $(\phi_k)_{k \in K / \Gamma}$ such that
\begin{enumerate}
\item for all $k$, $\phi_k \in V_\pi$,
\item \label{Siegelcompatibility} if $\gamma \in K \cap P_{\text{S}}(\mathbb{A})$ then for all $k$,
$\phi_k(\cdot\, \sigma_{\text S}(\gamma))=\omega_2 \circ \mu(\gamma^{-1})\phi_{\gamma k}$.
\end{enumerate}
Then the function on $G(\mathbb{A})$ given for $n \in N_{\text{S}}(\mathbb{A}), m \in M_{\text{S}}(\mathbb{A}), k \in K$ by
\begin{equation}\label{Siegelnmk}
\phi(nmk)= \omega_2 \circ \mu(m)\, \phi_k(\sigma_{\text S}(m))
\end{equation}
is well-defined and belongs to ${\mathcal{I}_{P_{\text{S}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$.
Moreover, every function in ${\mathcal{I}_{P_{\text{S}}}((\omega_2 \otimes \pi)_\nu)}^\Gamma$ is of this form.
\end{proposition}
\begin{remark}
As in Remark~\ref{KlingenMaassForm}, condition~(\ref{Siegelcompatibility}) implies that each $\phi_k$ is right $O_2(\mathbb{R})$-invariant
(and hence must be an adelic Maa{\ss} form or a character).
\end{remark}
\begin{proof}
Same proof as that of Proposition~\ref{Klingensummand}, with trivial modifications where required.
\end{proof}
\begin{proposition} \label{SiegelSpectralParameter}
Let $(\pi,V_\pi)$ occur in the discrete spectrum of $L^2(\mathbb{R}_{>0}\mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}))$ with central character $\omega_1$.
Then $\mathcal{I}_{P_{\text{S}}}((\omega_2 \otimes \pi)_\nu)$ has a $K_\infty$-fixed vector if and only if $\pi$ has an $O_2(\mathbb{R})$-fixed vector.
In this case, $\mathcal{I}_{P_{\text{S}}}((\omega_2 \otimes \pi)_\nu)_\infty$ is generic if and only if $\pi_\infty$ is a principal series.
Finally, if $\pi_\infty$ is a spherical principal series with spectral parameter $s$ and $\nu \in i\mathfrak{a}_{M_{\text{S}}}^*$,
then $\mathcal{I}_{P_{\text{S}}}((\omega_2 \otimes \pi)_\nu)_\infty$
is a principal series representation with spectral parameter $\nu+\nu_{\text{S}}(s)$, where $\nu_{\text{S}}(s)$ is the element of $\mathfrak{a}^*(\mathbb{C})$
corresponding to the character $\diag{y^{\frac12}u}{y^{-\frac12}u}{y^{-\frac12}u^{-1}}{y^{\frac12}u^{-1}} \mapsto |y|^s$.
\end{proposition}
\begin{proof}
Same proof as that of Proposition~\ref{KlingenSpectralParameter}, with trivial modifications where required.
\end{proof}
\subsection{Spectral expansion of the kernel}
We now give the spectral expansion of the kernel.
\begin{definition}\label{ONBase}
For each standard parabolic $P$ we choose an orthonormal basis $\mathcal{B}_{P}$ of $\mathscr H_P(\omega)$ such that
\begin{enumerate}
\item \label{DecompIrred} if $R_{M_P,\text{disc}}=\bigoplus_\pi \pi$ is the decomposition into irreducible representations of the restriction of the right regular representation of $M_P(\mathbb{A})$
to the subspace of $L^2(M_P(\mathbb{Q})A_P^+(\mathbb{R}) \backslash M_P(\mathbb{A}))$ that decomposes discretely, then
$\mathcal{B}_P=\bigcup_\pi \mathcal{B}_{\pi}$, where each $\mathcal{B}_\pi$ is a basis of the space of the corresponding induced representation $\mathcal{I}_P(\pi_\nu)$
(note that this space does not depend on~$\nu$),
\item \label{factor} for each representation $\pi=\bigotimes_v \pi_v$ as above and for each place $v$ there is an orthonormal basis $\mathcal{B}_{\pi,v}$ of
the local representation $\pi_v$ such that $\mathcal{B}_{\pi}$ consists of factorizable vectors
$\phi=\bigotimes_{v \le \infty} \phi_v$ where each $\phi_v$ belongs to the corresponding $\mathcal{B}_{\pi,v}$,
\item \label{Ktypes} for each representation $\pi_v$ we have
$\mathcal{B}_{\pi,v} = \bigcup_{\tau} \mathcal{B}_{\pi,v, \tau}$, where the union is over the irreducible representations $\tau$ of $\Gamma_v$,
and $\mathcal{B}_{\pi, v,\tau}$ is a basis of the subspace of $\pi_v$ consisting of vectors $\phi$ satisfying $\pi_v(\gamma)\phi=\tau(\gamma) \phi$
for all $\gamma \in \Gamma_v$.
\end{enumerate}
\end{definition}
Note that conditions~(\ref{factor}) and~(\ref{Ktypes}) imply in particular that the elements of $\mathcal{B}_P$ lie in $\mathscr H_P^0$.
\begin{definition}
For each standard parabolic $P$ and each irreducible representation $\pi$ occurring in $R_{M_P,\text{disc}}$,
define $\mathcal{B}_{\pi, 1}$ to be the subset of $\mathcal{B}_{\pi}$ consisting of the vectors $\phi$ all of whose local components
$\phi_v$ belong to $\mathcal{B}_{\pi,v, 1}$, and set $\mathcal{B}_P^\Gamma=\bigcup_\pi \mathcal{B}_{\pi, 1}$.
If $\chi$ is a character of $A_P(\mathbb{A})$, define
$$\mathrm{Gen}_P(\chi)=\bigcup_{\pi} \mathcal{B}_{\pi,1},$$
where the union runs over the representations $\pi$ with central character $\chi$ such that the induced representations
$\mathcal{I}_P(\pi_\nu)$ are generic.
\end{definition}
If $u \in \mathscr H_P(\omega)$, define $$\mathcal{I}_P(\nu ,f)u=\int_{\overline{G(\mathbb{A})}}f(y)\mathcal{I}_P(\nu ,y)u\,dy.$$
\begin{proposition}\label{globalevalue}
Let $\nu \in i\mathfrak{a}_P^*$ and let $u \in \mathcal{B}_P$.
Then either $\mathcal{I}_P(\nu ,f)u=0$ or $u \in \mathcal{B}_P^\Gamma$.
In the latter case, say $u \in \mathcal{B}_\pi$. Then if $\pi$ is generic we have
$$\mathcal{I}_P(\nu ,f)u=\lambda_f(u,\nu)u,$$
where $\lambda_f(u,\nu)=\lambda_{f_{\infty}}(u,\nu)\lambda_{f_{\text{fin}}}(u,\nu)$,
$$\lambda_{f_{\infty}}(u,\nu)=
\begin{cases}
\tilde{f_\infty}(\nu) & \text{if } P=B,\\
\tilde{f_\infty}(\nu+\nu_{\text{K}}(s_u)) & \text{if } P=P_{\text{K}} \text{ and } \pi_\infty \text{ has spectral parameter } s_u,\\
\tilde{f_\infty}(\nu+\nu_{\text{S}}(s_u)) & \text{if } P=P_{\text{S}} \text{ and } \pi_\infty \text{ has spectral parameter } s_u,\\
\tilde{f_\infty}(\nu_u) & \text{if } P=G \text{ and } \pi_\infty \text{ has spectral parameter } \nu_u,
\end{cases}$$
and, following the notations of Proposition~\ref{localevalue}, $\lambda_{f_{\text{fin}}}(u,\nu)$ is the eigenvalue of the Hecke operator
$$\bigotimes_{\Gamma_p=G(\mathbb{Z}_p)}\overline{\pi_{p,\nu}}(\tilde{f_p}).$$
\end{proposition}
\begin{remark}
If $P=G$ then $\mathfrak{a}_P=\{0\}$ and $\mathcal{I}_P(\nu ,f)=R(f)$.
\end{remark}
\begin{proof}
This is a combination of Propositions~\ref{localevalue} and~\ref{archimedeanevalue}, Lemma~\ref{BorelSpectralParameter}, and
Propositions~\ref{KlingenSpectralParameter} and~\ref{SiegelSpectralParameter}.
\end{proof}
The following statement~\cite{ArthurSpectralExpansion}*{pages 928--935} may be viewed as a rigorous version of the informal discussion
in Section~\ref{basickernel}.
\begin{lemma}\label{UnprovedLemma}
Let $f$ be as in Assumption~\ref{testfunction}. Then we have a pointwise equality
$$K_f(x,y)=\sum_{P} n_P^{-1} \int_{i\mathfrak{a}_{P}^*}\sum_{u \in \mathcal{B}_P}E(x, \mathcal{I}_P(\nu ,f)u,\nu )\overline{E(y,u,\nu)}d\nu.$$
Here $n_G=1$, $n_B=8$, $n_{P_{\text{K}}}=2$ and $n_{P_{\text{S}}}=2$.
\end{lemma}
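Assuming Arthur's normalisation of these constants, $n_P$ counts the parabolic subgroups sharing the Levi component $M_P$: there are $|\Omega|=8$ Borel subgroups containing the maximal torus $T$, the Klingen and Siegel Levi subgroups each lie in exactly two parabolic subgroups, and $n_G=1$ trivially.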
However, for the later purpose of interchanging the order of integration, we want to show that the above expression for the kernel converges absolutely.
To this end, we need the following stronger statement.
\begin{proposition}\label{ACV}
Let $f$ be as in Assumption~\ref{testfunction}. Then the following expression defines a continuous function in the variables $x$ and $y$:
$$K_{\text{abs}}(x,y)=\sum_{P} n_P^{-1} \int_{i\mathfrak{a}_{P}^*}\sum_{u \in \mathcal{B}_P}|E(x, \mathcal{I}_P(\nu ,f)u,\nu )\overline{E(y,u,\nu)}|d\nu.$$
\end{proposition}
We do not give a proof of this proposition here: a similar statement was proven in the setting of $\mathrm{GL}_2$ in \S~6 of~\cite{KL}, and the
proof given there can be adapted directly.
By combining it with Propositions~\ref{Borelsummand},~\ref{Klingensummand},~\ref{Siegelsummand} and Proposition~\ref{globalevalue},
we obtain the following corollary.
\begin{corollary}\label{KernelSpectralForm}
Let $f$ be as in Assumption~\ref{testfunction}.
Then we have a pointwise equality
$$K_{f}(x,y)=K_{\text{disc}}(x,y)+K_{\text{B}}(x,y)+K_{\text{K}}(x,y)+K_{\text{S}}(x,y)+K_{\text{ng}}(x,y),$$
where
$$K_{\text{disc}}(x,y)=\sum_{u \in \mathrm{Gen}_G(\omega)}\tilde{f_\infty}(\nu_u)\lambda_{f_{\text{fin}}}(u)u(x)\overline{u(y)},$$
$$K_{\text{B}}(x,y)=\frac18 \sum_{\omega_1\omega_2\omega_3^2=\omega}\sum_{u \in \mathrm{Gen}_B(\omega_1,\omega_2,\omega_3)}
\int_{i\mathfrak{a}^*}\tilde{f_\infty}(\nu)\lambda_{f_{\text{fin}}}(u,\nu)E(x, u,\nu )\overline{E(y,u,\nu)}d\nu,$$
$$K_{\text{K}}(x,y)=\frac12 \sum_{\omega_1\omega_2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{K}}}(\omega_1,\omega_2)}
\int_{i\mathfrak{a}_\text{K}^*}\tilde{f_\infty}(\nu+\nu_{\text{K}}(s_u))\lambda_{f_{\text{fin}}}(u,\nu)E(x, u,\nu )\overline{E(y,u,\nu)}d\nu,$$
$$K_{\text{S}}(x,y)=\frac12 \sum_{\omega_1\omega_2^2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{S}}}(\omega_1,\omega_2)}
\int_{i\mathfrak{a}_\text{S}^*}\tilde{f_\infty}(\nu+\nu_{\text{S}}(s_u))\lambda_{f_{\text{fin}}}(u,\nu)E(x, u,\nu )\overline{E(y,u,\nu)}d\nu,$$
and all the automorphic forms involved in $K_{\text{ng}}$ are non-generic.
\end{corollary}
In fact, no automorphic form from the residual spectrum is generic, as shown by the following lemma.
Thus $K_{\text{disc}}$ involves only elements of the cuspidal spectrum.
\begin{lemma}
Let $(\pi,V_\pi)$ be any irreducible representation occurring in the residual spectrum of
$L^2(Z(\mathbb{R})G(\mathbb{Q}) \backslash G(\mathbb{A}), \omega)$. Then $\pi$ is not generic.
\end{lemma}
\begin{proof}
We will rely on results of Kim describing the residual spectrum of $\text{Sp}_4$.
Thus we first need to show that the representation $\text{res}\, \pi$ given by Definition~\ref{DefOfRes} belongs to the residual spectrum of $\text{Sp}_4(\mathbb{A})$.
First, $\text{res}\, \pi$ occurs in the discrete spectrum of $L^2(\text{Sp}_4(\mathbb{Q}) \backslash \text{Sp}_4(\mathbb{A}))$, because there are only finitely many possibilities
for the Archimedean component of any irreducible representation occurring in $\text{res}\, \pi$.
Moreover, $\text{res}\, \pi$ is not cuspidal by Lemma~\ref{cuspidalrestriction}.
Hence $\text{res}\, \pi$ belongs to the residual spectrum of $\text{Sp}_4(\mathbb{A})$, as claimed.
In view of Lemma~\ref{genericrestriction}, it suffices to prove that the residual spectrum of $\text{Sp}_4(\mathbb{A})$ is not generic.
By Theorem~3.3 and Remark~3.2 of~\cite{Kim}, the representations arising from poles of Siegel Eisenstein
series are not generic. Similarly, by Theorem~4.1 and Remark~4.2 of~\cite{Kim}, the representations arising
from poles of Klingen Eisenstein series are not generic.
Finally, by~\cite{Kim}*{\S~5.3}, the irreducible representations $\pi$ arising from the poles of Borel Eisenstein series are described as follows.
On the one hand, we have the space of constant functions, which is clearly not generic.
On the other hand, for every non-trivial quadratic gr\"ossencharacter $\mu$ of $\mathbb{Q}$ we have a representation $B(\mu)$
whose local components are irreducible subquotients of the induced representation $\text{Ind}_B^{\text{Sp}_4}(|\cdot|_v\mu_v \times \mu_v)$.
Therefore, in the terminology of~\cite{RS2}*{\S~2.2}, for every prime~$p$, $\pi_p$ belongs to Group~V if $\mu_p \neq 1$, and to Group~VI if $\mu_p=1$.
Now by Table~A.2 of~\cite{RS2}, we see that the only generic representations in Groups~V and~VI are those of type~Va and~VIa.
But Table~A.12 shows that neither of these has a $K_p$-fixed vector. Since almost all $\pi_p$ contain a $K_p$-fixed vector, at least one local component of
$\pi$ must be non-generic, and thus $\pi$ is not globally generic.
\end{proof}
\subsection{The spectral side of the trace formula}
Let $\psi_1=\psi_{\mathbf{m}_1}$ and $\psi_2=\psi_{\mathbf{m}_2}$ be generic characters of $U(\mathbb{A}) / U(\mathbb{Q})$.
Fix $t_1, t_2 \in A^0(\mathbb{R})$ and consider the basic integral
\begin{equation}\label{BasicIntegral}
I=\int_{(U(\mathbb{Q}) \backslash U(\mathbb{A}))^2} K_f(xt_1,yt_2) \overline{\psi_{\mathbf{m}_1}(x)}\psi_{\mathbf{m}_2}(y)dx\, dy.
\end{equation}
Our goal is to compute it in two different ways: using the spectral decomposition of the kernel
$K_f$ on the one hand, and its expression as a series together with the Bruhat decomposition on the other hand.
The latter will constitute the geometric side and will be addressed in Section~\ref{GeometricSide}. We now focus on the former.
Using the spectral expansion of the kernel $K_f$ given by Lemma~\ref{UnprovedLemma}, we can evaluate the basic integral~(\ref{BasicIntegral}) as
$$I=\int_{(U(\mathbb{Q}) \backslash U(\mathbb{A}))^2} \sum_{P} n_P^{-1} \int_{i\mathfrak{a}_{P}^*}\sum_{u \in \mathcal{B}_P}E(xt_1, \mathcal{I}_P(\nu ,f)u,\nu )\overline{E(yt_2,u,\nu)}d\nu\, \overline{\psi_{\mathbf{m}_1}(x)}\psi_{\mathbf{m}_2}(y)dx\, dy.$$
By Proposition~\ref{ACV}, this expression is absolutely integrable since $(U(\mathbb{Q}) \backslash U(\mathbb{A}))^2$ is compact.
Hence we may interchange the order of integration, obtaining the Whittaker coefficients of the automorphic forms involved.
By Corollary~\ref{KernelSpectralForm}, we get a discrete contribution, a residual contribution, and a continuous contribution,
the latter splitting further into the contributions of the various parabolic classes.
Thus the spectral side of the Kuznetsov formula is given as follows.
\begin{proposition}
We have $I=\frac{1}{(\mathbf{m}_{1,1}\mathbf{m}_{2,1})^4|\mathbf{m}_{1,2}\mathbf{m}_{2,1}|^3}\left(\Sigma_{\text{disc}}+\Sigma_B+\Sigma_K+\Sigma_S\right)$, where
$$\Sigma_{\text{disc}}=\sum_{u \in \mathrm{Gen}_G(\omega)}\tilde{f_\infty}(\nu_u)\lambda_{f_{\text{fin}}}(u)\mathcal{W}_{\psi}(u)(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(u)}(t_2t_{\mathbf{m}_2}^{-1}),$$
\begin{align*}
\Sigma_{\text{B}}=\frac18 \sum_{\omega_1\omega_2\omega_3^2=\omega}\sum_{u \in \mathrm{Gen}_B(\omega_1,\omega_2,\omega_3)} &
\int_{i\mathfrak{a}^*}\tilde{f_\infty}(\nu) \lambda_{f_{\text{fin}}}(u,\nu)\\
&\times\mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})d\nu,
\end{align*}
\begin{align*}
\Sigma_{\text{K}}=\frac12 \sum_{\omega_1\omega_2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{K}}}(\omega_1,\omega_2)}&
\int_{i\mathfrak{a}_\text{K}^*}\tilde{f_\infty}(\nu+\nu_{\text{K}}(s_u))\lambda_{f_{\text{fin}}}(u,\nu)\\
&\times \mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})d\nu,
\end{align*}
\begin{align*}
\Sigma_{\text{S}}=\frac12 \sum_{\omega_1\omega_2^2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{S}}}(\omega_1,\omega_2)} &
\int_{i\mathfrak{a}_\text{S}^*}\tilde{f_\infty}(\nu+\nu_{\text{S}}(s_u))\lambda_{f_{\text{fin}}}(u,\nu)\\
&\times \mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})d\nu.
\end{align*}
\end{proposition}
\section{The geometric side of the trace formula}\label{GeometricSide}
Breaking the sum~(\ref{TheKernel}) into $U(\mathbb{Q}) \times U(\mathbb{Q})$-orbits
leads to a sum of orbital integrals over representatives of the double cosets in $U \backslash G / U$. Specifically, set $H=U \times U$, acting on $G$ by
$$(x,y) \cdot \delta = x^{-1} \delta y,$$
and denote by $H_\delta$ the stabilizer of $\delta$.
Since $f$ has compact support, the infinite sum $\sum_{\delta \in G(\mathbb{Q})/Z(\mathbb{Q})} |f(t_1^{-1}x^{-1} \delta yt_2)|$ is
in fact locally finite and hence
defines a continuous function of $x$ and $y$ on the compact set
$H(\mathbb{Q}) \backslash H(\mathbb{A})$. Thus we may interchange the order of summation and integration, getting
\begin{align*}
I &=
\int_{H(\mathbb{Q}) \backslash H(\mathbb{A})} \sum_{\delta \in G(\mathbb{Q})/Z(\mathbb{Q})} f(t_1^{-1}x^{-1} \delta yt_2)\overline{\psi_{\mathbf{m}_1}(x)}\psi_{\mathbf{m}_2}(y)dx\, dy\\
&=\sum_{\delta \in G(\mathbb{Q})/Z(\mathbb{Q})} \int_{H(\mathbb{Q}) \backslash H(\mathbb{A})}f(t_1^{-1}x^{-1} \delta yt_2)\overline{\psi_{\mathbf{m}_1}(x)}\psi_{\mathbf{m}_2}(y)dx\, dy\\
&= \sum_{\delta \in U(\mathbb{Q})\backslash \big(G(\mathbb{Q})/Z(\mathbb{Q})\big)/U(\mathbb{Q})} I_{\delta}(f),
\end{align*}
where
\begin{equation}\label{orbital}
I_{\delta}(f)=\int_{H_\delta(\mathbb{Q}) \backslash H(\mathbb{A})} f(t_1^{-1}x^{-1} \delta yt_2)\overline{\psi_{\mathbf{m}_1}(x)}\psi_{\mathbf{m}_2}(y)d(x,y),
\end{equation}
and $d(x,y)$ is the quotient measure on $H_\delta(\mathbb{Q}) \backslash H(\mathbb{A})$.
Using the Bruhat decomposition $G=B \Omega B= \coprod_{\sigma \in \Omega}U \sigma T U$, we have
\begin{equation} \label{BruhatmodZ}
U \backslash (G/Z) / U = \coprod_{\sigma \in \Omega} \sigma \overline{T},
\end{equation}
where $\overline{T}= T / Z$.
We can then compute separately the contribution of each element of the Weyl group.
Writing $H(\mathbb{A})=H_\delta(\mathbb{A}) \times (H_\delta(\mathbb{A}) \backslash H(\mathbb{A}))$, we can factor out the integral
of $\overline{\psi_{\mathbf{m}_1}}\otimes\psi_{\mathbf{m}_2}$ over the compact group $H_\delta(\mathbb{Q}) \backslash H_\delta(\mathbb{A})$
in~(\ref{orbital}). Therefore, $I_\delta(f)$ vanishes unless the character
$\overline{\psi_{\mathbf{m}_1}} \otimes\psi_{\mathbf{m}_2}$ is trivial on $H_\delta(\mathbb{A})$. Following
Knightly and Li, we call the orbits $H \cdot \delta$ such that
$\overline{\psi_{\mathbf{m}_1}} \otimes \psi_{\mathbf{m}_2}$ is trivial on $H_\delta(\mathbb{A})$ {\bf relevant}.
\subsection{Relevant orbits}
In order to characterize the relevant orbits, let us introduce some notation.
A set of representatives of $T(\mathbb{Q}) / Z(\mathbb{Q})$ is given by the elements
\begin{equation}\label{diagrepres}
\delta_1 \doteq \diag{d_1}{1}{d_2}{d_1d_2}, \qquad d_1,d_2 \in \mathbb{Q}^\times.
\end{equation}
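As a quick check that these are indeed representatives (a sketch using only the torus parametrisation employed earlier), dividing a general torus element $\diag{x}{y}{tx^{-1}}{ty^{-1}}$ by the scalar $y$ gives $\diag{x/y}{1}{t/(xy)}{t/y^2}$, which is of the form~(\ref{diagrepres}) with $d_1=x/y$ and $d_2=t/(xy)$; matching $\delta_1$ with that parametrisation via $x=d_1$, $y=1$, $t=d_1d_2$ also shows that $\delta_1$ has similitude factor $d_1d_2$.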
For each $\mathbf{s}igma \in \text{O}mega$, the corresponding set of representatives of $\mathbf{s}igma \overline{T}(\mathbb{Q})$ in~(\ref{BruhatmodZ})
is given by elements of the form
\begin{equation}\label{ds}
\delta_\mathbf{s}igma=\mathbf{s}igma \delta_1,
{\bf{e}}nd{equation}
and $H_{\delta_\mathbf{s}igma}(\mathbf mathbb{A})$ consists in pairs
$(u,\delta_\mathbf{s}igma^{-1}u\delta_\mathbf{s}igma)=(u,\delta_1^{-1}\mathbf{s}igma^{-1}u\delta_1\mathbf{s}igma)$ such that both component lie in $U(\mathbf mathbb{A})$.
Since conjugation by $\delta_1$ preserves $U(\mathbf mathbb{A})$,
the condition that the second component lies in $U(\mathbf mathbb{A})$ is equivalent to $u \in U(\mathbf mathbb{A})$ and $\mathbf{s}igma^{-1} u \mathbf{s}igma \in U(\mathbf mathbb{A})$.
We accordingly make the following definition.
\begin{definition}
For $\mathbf{s}igma \in \text{O}mega,$ define
\begin{equation}\label{Usigma}
U_\mathbf{s}igma(\mathbf mathbb{A}) =\{x \in U(\mathbf mathbb{A}) :\mathbf{s}igma^{-1} x \mathbf{s}igma \in U(\mathbf mathbb{A})\},
{\bf{e}}nd{equation}
and
\begin{equation}\label{Dsigma}
D_\mathbf{s}igma(\mathbf mathbb{A}) = U_\mathbf{s}igma(\mathbf mathbb{A}) \times \mathbf{s}igma^{-1} U_\mathbf{s}igma(\mathbf mathbb{A}) \mathbf{s}igma
{\bf{e}}nd{equation}
Then we have
\begin{equation}\label{paramHd}
H_{\delta_\mathbf{s}igma}(\mathbf mathbb{A})=\{(u,\delta_\mathbf{s}igma^{-1}u\delta_\mathbf{s}igma): u \in U_\mathbf{s}igma(\mathbf mathbb{A})\} \mathbf{s}ubset D_\mathbf{s}igma(\mathbf mathbb{A}).
{\bf{e}}nd{equation}
{\bf{e}}nd{definition}
\begin{lemma}\label{relevantorbits}
The relevant orbits are the ones corresponding to the following elements:
\begin{itemize}
\item $\mathbf{s}igma=1$ with $\delta_1= {t_{\mathbf m_2}}^{-1}t_{\mathbf m_1}=
\diag{\bm{f}rac{\mathbf m_{1,1}}{\mathbf m_{2,1}}}{1}{\bm{f}rac{\mathbf m_{1,1}\mathbf m_{1,2}}{\mathbf m_{2,1}\mathbf m_{2,2}}}{\bm{f}rac{\mathbf m_{1,1}^2\mathbf m_{1,2}}{\mathbf m_{2,1}^2\mathbf m_{22}}}$,
\item $\mathbf{s}igma=s_1s_2s_1$ with $\delta_1$ satisfying $d_1\mathbf m_{1,2}=d_2\mathbf m_{2,2}$,
\item $\mathbf{s}igma=s_2s_1s_2$ with $\delta_1$ satisfying $\mathbf m_{1,1}=-d_1\mathbf m_{2,1}$,
\item $\mathbf{s}igma=s_1s_2s_1s_2=J$ with no condition on $\delta_1$.
{\bf{e}}nd{itemize}
{\bf{e}}nd{lemma}
\begin{proof}
For each representative $\delta_\sigma$ as in~(\ref{ds}), let us fix $u_1 \in U_\sigma(\mathbb{A})$, and compute
$\delta_\sigma^{-1}u_1\delta_\sigma$ in order to determine under which condition $\psi_{\mathbf{m}_1} \otimes \overline{\psi_{\mathbf{m}_2}}$
is trivial on $H_{\delta_\sigma}(\mathbb{A})$.
For $\sigma=1$, we have $U_\sigma=U$, hence we may take $u_1=\left[
\begin{smallmatrix}
1 & & c & a-cx \\
x & 1 & a & b \\
& & 1 & -x \\
& & & 1 \\
\end{smallmatrix}
\right]$. Then we have
$\delta^{-1}u_1\delta=\left[
\begin{smallmatrix}
1 & & c\frac{d_2}{d_1} & (a-cx)d_2 \\
xd_1 & 1 & ad_2 & bd_1d_2 \\
& & 1 & -xd_1 \\
& & & 1 \\
\end{smallmatrix}
\right]$. Thus, by~(\ref{gencharconj}), the condition that $\psi_{\mathbf{m}_1} \otimes \overline{\psi_{\mathbf{m}_2}}$ be trivial on
$H_{\delta_1}(\mathbb{A})$ is equivalent to $\delta_1=t_{\mathbf{m}_2}^{-1}t_{\mathbf{m}_1}.$
For $\sigma=s_1$, we have $U_\sigma(\mathbb{A})=\left\{\left[ \begin{smallmatrix}
1 & & c & a \\
& 1 & a & b \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}\right]: a,b,c \in \mathbb{A}\right\}$, and if $u_1=\left[ \begin{smallmatrix}
1 & & c & a \\
& 1 & a & b \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}\right]$, then $\delta^{-1}u_1\delta=\left[
\begin{smallmatrix}
1 & & b\frac{d_2}{d_1} & ad_2 \\
& 1 & ad_2 & cd_1d_2 \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}
\right],$
hence the condition that $\psi_{\mathbf{m}_1} \otimes \overline{\psi_{\mathbf{m}_2}}$ be trivial on
$H_{\delta_{s_1}}(\mathbb{A})$ is equivalent to
$\theta\left(\mathbf{m}_{1,2}c-\mathbf{m}_{2,2}\frac{d_2}{d_1}b\right)=1$
for all $b,c \in \mathbb{A}$, which is equivalent to $\mathbf{m}_{1,2}=\mathbf{m}_{2,2}=0$ and thus contradicts the fact that $\psi_{\mathbf{m}_1}$ and
$\psi_{\mathbf{m}_2}$ are generic.
Similar calculations show that $\sigma=s_2$, $s_1s_2$ and $s_2s_1$ yield no relevant orbit.
For $\sigma=s_1s_2s_1$ we have $U_\sigma(\mathbb{A})=\left\{\left[ \begin{smallmatrix}
1 & & c & \\
& 1 & & \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}\right]: c \in \mathbb{A}\right\}$,
and if $u_1=\left[ \begin{smallmatrix}
1 & & c & \\
& 1 & & \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}\right]$
then we have $\delta^{-1}u_1\delta=\left[
\begin{smallmatrix}
1 & & c\frac{d_2}{d_1} & \\
& 1 & & \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}
\right]$,
hence the condition that $\psi_{\mathbf{m}_1} \otimes \overline{\psi_{\mathbf{m}_2}}$ be trivial on
$H_{\delta_{s_{121}}}(\mathbb{A})$ is equivalent to
$\theta\left(\left(\mathbf{m}_{1,2}-\mathbf{m}_{2,2}\frac{d_2}{d_1}\right)c\right)=1$
for all $c \in \mathbb{A}$. This is equivalent to $d_1\mathbf{m}_{1,2}=d_2\mathbf{m}_{2,2}$.
The calculation for $\sigma=s_2s_1s_2$ is similar.
Finally, for $\sigma=s_1s_2s_1s_2=J$ the long Weyl element, $H_{\delta_{s_{1212}}}(\mathbb{A})$ is trivial.
\end{proof}
A case-by-case calculation also shows the following.
\begin{lemma}\label{Usigmarelevant}
Let $\sigma \in \Omega$. Then there exists $\delta \in \overline{T}(\mathbb{Q})$ such that the orbit of $\delta_\sigma=\sigma\delta$ is relevant
if and only if $U_\sigma=\{u \in U : \sigma^{-1}u\sigma=u\}.$
\end{lemma}
In the sequel, we shall call such elements of the Weyl group {\bf relevant} as well.
In particular, by definition of the relevant orbits, and by~(\ref{paramHd}), we have the following.
\begin{corollary}\label{psionUsigma}
Suppose that the orbit of $\delta_\sigma=\sigma\delta$ is relevant.
Then for all $u \in U_\sigma$ we have $\psi_{\mathbf{m}_2}(\delta^{-1}u\delta)=\psi_{\mathbf{m}_1}(u).$
\end{corollary}
\subsection{General shape of the relevant orbital integrals}
\begin{lemma}\label{quotientmap}
For each $\delta_1 \in \overline{T}(\mathbb{Q})$ and $\sigma \in \Omega$, the map
\begin{align*}
\varphi : D_\sigma(\mathbb{A}) & \to D_\sigma(\mathbb{A}) \\
(u_1,u_2) & \mapsto (u_1, \delta_\sigma^{-1}u_1^{-1}\delta_\sigma u_2)
\end{align*}
induces a bijective map
\begin{align*}
H_{\delta_\sigma} (\mathbb{Q}) \backslash D_\sigma(\mathbb{A}) \to & \left(U_\sigma(\mathbb{Q}) \times \{1\}\right) \backslash D_\sigma(\mathbb{A}) \\ &\cong \left(U_\sigma(\mathbb{Q})\backslash U_\sigma(\mathbb{A})\right)\times \left( \sigma^{-1} U_\sigma(\mathbb{A}) \sigma \right)
\end{align*}
preserving the quotient measures.
\end{lemma}
\begin{proof}
To prove $\varphi$ is well defined it is sufficient to prove that for any
$(u_1,u_2)\in U_\sigma(\mathbb{A}) \times \sigma^{-1} U_\sigma(\mathbb{A}) \sigma$ we have
$\varphi_2(u_1,u_2)=\delta_\sigma^{-1}u_1^{-1}\delta_\sigma u_2 \in \sigma^{-1} U_\sigma(\mathbb{A}) \sigma$.
This is equivalent to the condition $\sigma \varphi_2(u_1,u_2) \sigma^{-1} \in U_\sigma(\mathbb{A})$, which in turn is equivalent to
$$\begin{cases}
\sigma \varphi_2(u_1,u_2) \sigma^{-1} \in U(\mathbb{A}) \\
\varphi_2(u_1,u_2) \in U(\mathbb{A}).
\end{cases}$$
But $\varphi_2(u_1,u_2)=\delta_1^{-1} \sigma^{-1} u_1^{-1} \sigma \delta_1 u_2$, and since $u_1 \in U_\sigma(\mathbb{A})$,
we have $\sigma^{-1} u_1^{-1} \sigma \in U(\mathbb{A})$ and it follows that $\varphi_2(u_1,u_2) \in U(\mathbb{A})$ as desired.
On the other hand,
\begin{align*}
\sigma \varphi_2(u_1,u_2) \sigma^{-1} &= \sigma \delta_1^{-1} \sigma^{-1} u_1^{-1} \sigma \delta_1 u_2 \sigma^{-1}\\
&=(\sigma \delta_1 \sigma^{-1})^{-1} u_1^{-1} (\sigma \delta_1 \sigma^{-1}) \sigma u_2 \sigma^{-1}.
\end{align*}
By definition of the Weyl group, $\sigma \delta_1 \sigma^{-1} \in T(\mathbb{A})$, so
$(\sigma \delta_1 \sigma^{-1})^{-1} u_1^{-1} (\sigma \delta_1 \sigma^{-1}) \in U(\mathbb{A})$. Furthermore,
$\sigma u_2 \sigma^{-1} \in U_\sigma(\mathbb{A}) \subset U(\mathbb{A})$ and it also follows that $\sigma \varphi_2(u_1,u_2) \sigma^{-1} \in U(\mathbb{A}).$
Next, for $h=(h_1,h_2) \in H_{\delta_\sigma}(\mathbb{Q})$, we clearly have $\varphi(h)=(h_1,1)$, and
\begin{align*}
\varphi(h(u_1,u_2))&=(h_1u_1, \delta_\sigma^{-1} u_1^{-1} \underbrace{h_1^{-1} \delta_\sigma h_2}_{=\delta_\sigma} u_2)\\
&=\varphi(h)\varphi(u_1,u_2).
\end{align*}
Finally, if we define $\psi(u_1,u_2)=(u_1^{-1}, u_2)$, then
$\psi \circ \varphi$ is an involution, and in particular $\varphi$ is bijective, which establishes the lemma.
\end{proof}
\begin{corollary}\label{corqoutientmap}
Let $\delta_1 \in \overline{T}(\mathbb{Q})$ and $\sigma$ be a relevant element of the Weyl group.
We have a measure preserving map
\begin{align*}
\varphi :H_{\delta_\sigma}(\mathbb{Q}) \backslash H(\mathbb{A}) & \to
\left(U_\sigma(\mathbb{Q}) \backslash U_\sigma(\mathbb{A})\right) \times \left(U_\sigma(\mathbb{A}) \backslash U(\mathbb{A}) \right)
\times U(\mathbb{A}) \\
(x,y) & \mapsto
\left(U_\sigma(\mathbb{Q}) u_1, U_\sigma(\mathbb{A}) u_2, u_3\right)
\end{align*}
with $u_1u_2=x$ and $u_3=\delta_\sigma^{-1} u_1^{-1} \delta_\sigma y$.
\end{corollary}
\begin{remark}
The assumption that $\sigma$ is relevant is not really needed here, but it simplifies the proof slightly.
\end{remark}
\begin{proof}
Let $\overline{U_\sigma}=U \cap \sigma^{-1} \trans{U} \sigma$. Then the quotient space $U_\sigma \backslash U$ may be identified with $\overline{U_\sigma}$,
and the map $U_\sigma \times \overline{U_\sigma} \to U$, $(u_\sigma,u_1) \mapsto u_\sigma u_1$ preserves the Haar measures.
Define $\overline{D_\sigma}=\overline{U_\sigma} \times \overline{U_\sigma}$.
Using that $\sigma$ is relevant and hence, by Lemma~\ref{Usigmarelevant}, that $D_\sigma(\mathbb{A})=U_\sigma(\mathbb{A}) \times U_\sigma(\mathbb{A})$, we obtain a measure preserving map
\begin{align*}
H_{\delta_\sigma}(\mathbb{Q}) \backslash H(\mathbb{A}) & \to \left(H_{\delta_\sigma}(\mathbb{Q}) \backslash D_\sigma(\mathbb{A}) \right) \times \overline{D_\sigma}\\
H_{\delta_\sigma}(\mathbb{Q})(x,y) & \mapsto (H_{\delta_\sigma}(\mathbb{Q}) (x_\sigma,y_\sigma),(x_1,y_1)).
\end{align*}
Composing the first coordinate with the map obtained in Lemma~\ref{quotientmap}, we get a measure preserving map
\begin{align*}
H_{\delta_\sigma}(\mathbb{Q}) \backslash H(\mathbb{A}) & \to \left(\left(U_\sigma(\mathbb{Q}) \times \{1\}\right) \backslash D_\sigma(\mathbb{A}) \right) \times \overline{D_\sigma}\\
H_{\delta_\sigma}(\mathbb{Q})(x,y) & \mapsto \left(\left(U_\sigma(\mathbb{Q}) \times \{1\}\right) (x_\sigma,\delta_\sigma^{-1}x_\sigma^{-1} \delta_\sigma y_\sigma),(x_1, y_1)\right).
\end{align*}
Finally, composing with $U_\sigma(\mathbb{A}) \times \overline{U_\sigma} \to U(\mathbb{A})$, $(y_\sigma,y_1) \mapsto y_\sigma y_1$, we obtain
\begin{align*}
H_{\delta_\sigma}(\mathbb{Q}) \backslash H(\mathbb{A}) & \to \left(U_\sigma(\mathbb{Q}) \backslash U_\sigma(\mathbb{A})\right) \times \overline{U_\sigma}(\mathbb{A}) \times U(\mathbb{A})\\
H_{\delta_\sigma}(\mathbb{Q})(x,y) & \mapsto \left(U_\sigma(\mathbb{Q})x_\sigma,x_1,\delta_\sigma^{-1}x_\sigma^{-1} \delta_\sigma y\right).
\end{align*}
\end{proof}
\begin{proposition}\label{relevantintegrals}
Let $H \cdot \delta_\sigma$ be a relevant orbit.
Then the integral~(\ref{orbital}) can be expressed as
\begin{align*}
I_{\delta_\sigma}(f)&=\int_{U_\sigma(\mathbb{A}) \backslash U(\mathbb{A})}\int_{U(\mathbb{A})}f(t_1^{-1} u \delta_\sigma u_1 t_2){\psi_{\mathbf{m}_1}(u)}\overline{\psi_{\mathbf{m}_2}(u_1)}\,du\,du_1.
\end{align*}
Moreover, it factors as $I_{\delta_\sigma}(f)=I_{\delta_\sigma}(f_{\infty})I_{\delta_\sigma}(f_{fin})$, where we have set
$f_{fin}=\prod_p f_p$.
\end{proposition}
\begin{remark}
Note that the integral is well-defined by Corollary~\ref{psionUsigma}.
\end{remark}
\begin{remark}\label{d1d2}
By Assumption~\ref{testfunction}, the support of $f_\infty$ is included in $G^+(\mathbb{R})=\{g \in G(\mathbb{R}) : \mu(g)>0\}$.
Therefore, if $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$, we have $I_{\delta_\sigma}(f_{\infty}) \neq 0$ only if $d_1d_2>0$.
\end{remark}
\begin{proof}
By Corollary~\ref{corqoutientmap} we can make the change of variables $(u_1,u_2,u_3)=\varphi(x,y)$ in~(\ref{orbital}).
So we get
\begin{equation}\label{Idelta1}
\begin{split}
I_{\delta}(f)=\int_{U_\sigma(\mathbb{Q}) \backslash U_\sigma(\mathbb{A})}\int_{U_\sigma(\mathbb{A}) \backslash U(\mathbb{A})}\int_{U(\mathbb{A})}
f(t_1^{-1} u_2^{-1}\delta_\sigma u_3 t_2) \quad \quad \quad \quad \quad \quad \\
\quad \quad \times \psi_{\mathbf{m}_1}(u_1u_2)\overline{\psi_{\mathbf{m}_2}(\delta_\sigma^{-1}u_1\delta_\sigma u_3)}\,du_3\,du_2\,du_1.
\end{split}
\end{equation}
We have
\begin{align*}
\psi_{\mathbf{m}_1}(u_1u_2)\overline{\psi_{\mathbf{m}_2}(\delta_\sigma^{-1}u_1\delta_\sigma u_3)}&=
\psi_{\mathbf{m}_1}(u_2)\psi_{\mathbf{m}_1}(u_1)\overline{\psi_{\mathbf{m}_2}(\delta_\sigma^{-1}u_1\delta_\sigma)\psi_{\mathbf{m}_2} (u_3)}\\
&=\psi_{\mathbf{m}_1}(u_2)\overline{\psi_{\mathbf{m}_2}(u_3)}
\end{align*}
since $(u_1, \delta_\sigma^{-1}u_1\delta_\sigma) \in H_\delta(\mathbb{A})$ and we assume $H \cdot \delta_\sigma$ is a relevant orbit.
Substituting this equality into~(\ref{Idelta1}), we get
$$I_{\delta_\sigma}(f)=\int_{U_\sigma(\mathbb{A}) \backslash U(\mathbb{A})}\int_{U(\mathbb{A})}f(t_1^{-1} u_2^{-1} \delta_\sigma u_3 t_2)\psi_{\mathbf{m}_1}(u_2)\overline{\psi_{\mathbf{m}_2}(u_3)}\,du_3\,du_2.$$
Write $u_3=u_\sigma u_1$ with $u_\sigma \in U_\sigma$ and $u_1 \in U_\sigma \backslash U$.
Then by Lemma~\ref{Usigmarelevant} we have
$$u_2^{-1} \delta_\sigma u_3=u_2^{-1} \sigma\delta u_\sigma u_1
=u_2^{-1} \sigma\delta u_\sigma \delta^{-1}\delta u_1=u_2^{-1} \delta u_\sigma \delta^{-1} \sigma \delta u_1,$$
and by Corollary~\ref{psionUsigma} we have
\begin{align*}
\psi_{\mathbf{m}_1}(u_2)\overline{\psi_{\mathbf{m}_2}(u_3)}&=\psi_{\mathbf{m}_1}(u_2)\overline{\psi_{\mathbf{m}_2}(u_\sigma u_1)}\\
&=\psi_{\mathbf{m}_1}(u_2)\overline{\psi_{\mathbf{m}_1}(\delta u_\sigma \delta^{-1})} \overline{\psi_{\mathbf{m}_2}(u_1)}
=\psi_{\mathbf{m}_1}(\delta u_\sigma^{-1} \delta^{-1}u_2) \overline{\psi_{\mathbf{m}_2}(u_1)}.
\end{align*}
Setting $u=\delta u_\sigma^{-1}\delta^{-1} u_2$ we get the result.
\end{proof}
\subsection{The Archimedean orbital integrals}\label{ArchInt}
By Theorem~\ref{GeomTransform} and using equation~(\ref{whittakertorus}) we have the following.
\begin{lemma}\label{InTermsOfW}
Let $H \cdot \delta_\sigma$ be a relevant orbit. Then the corresponding Archimedean orbital integral is given by
\begin{align*}
I_{\delta_\sigma}(f_{\infty})=\frac1c\frac{\Delta_\sigma(t_{\mathbf{m}_2})}{|\mathbf{m}_{1,1}^4\mathbf{m}_{1,2}^3|}
\int_{U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})}\int_{\mathfrak{a}^*}&
\tilde{f_\infty}(-i\nu)W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}})\\
& \times W(-i\nu,t_{\mathbf{m}_1}^{-1}\delta_\sigma t_{\mathbf{m}_2}u_1 t_{\mathbf{m}_2}^{-1} t_2,\overline{\psi_{\mathbf{1}}})
\frac{d\nu}{c(i\nu)c(-i\nu)}
{\overline{\psi_{\mathbf{1}}(u_1)}}\,du_1,
\end{align*}
where the constant $c$ is the one appearing in the spherical inversion theorem
and $\Delta_\sigma$ is the modulus character of the quotient $U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})$.
\end{lemma}
Note that the above integral is well-defined.
More generally, let $\psi$ be a generic character and $\sigma$ a relevant element of the Weyl group.
Then by Lemma~\ref{Usigmarelevant}, for all $t \in G(\mathbb{R})$ the integral
$$\int_{U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})}\int_{\mathfrak{a}^*}g(-i\nu)W(-i\nu,y \sigma u_1 t,\psi)\,{d\nu}\,
\overline{\psi(u_1)}\,du_1 $$
is well defined as long as the commutator $yuy^{-1}u^{-1}$ belongs to $U_0(\mathbb{R})=\left\{
\left[\begin{smallmatrix}
1 & & & a \\
& 1 & a & b \\
& & 1 & \\
& & & 1 \\
\end{smallmatrix}\right] : a,b \in \mathbb{R} \right\}$ for all $u \in U_\sigma(\mathbb{R})$.
The following conjecture, due to Buttcane~\cite{Buttcane}, should make it possible to take the $\mathfrak{a}^*$-integral out in Lemma~\ref{InTermsOfW}.
\begin{conjecture}[Interchange of integrals]\label{interchange}
Let $g$ be holomorphic with rapid decay on an open tube domain of $\mathfrak{a}^*_{\mathbb{C}}$ containing $\mathfrak{a}^*$,
and let $t \in G(\mathbb{R})$.
Let $\psi$ be a generic character and $\sigma$ be a relevant element of the Weyl group.
Then for almost all $y \in \mathrm{Sp}_4(\mathbb{R})$ satisfying $yuy^{-1}u^{-1}\in U_0(\mathbb{R})$ for all $u \in U_\sigma(\mathbb{R})$ we have
$$\int_{U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})}\int_{\mathfrak{a}^*}g(-i\nu)W(-i\nu,y \sigma u_1 t,\psi)\,{d\nu}\,
\overline{\psi(u_1)}\,du_1=\int_{\mathfrak{a}^*}g(-i\nu) \tilde{K_\sigma}(-i\nu, y,t)\,d\nu$$
where
$$\tilde{K_\sigma}(-i\nu, y,t)=\lim_{R \to \infty}\int_{U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})}h\left(\frac{\|u_1\|}R\right)
W(-i\nu,y \sigma u_1 t,\psi) \overline{\psi(u_1)}\,du_1,$$
for some fixed, smooth, compactly supported $h$ with $h(0)=1$.
Moreover, $\tilde{K_\sigma}$ is entire in $\nu$ and smooth and polynomially bounded in $t$ and $y$ for $\Re(-i\nu)$ in some
fixed compact set.
\end{conjecture}
Note that Conjecture~\ref{interchange} is not needed for $\sigma=1$ since in this case
we have $U_\sigma=U$ and hence $\tilde{K_\sigma}(-i\nu, y,t)=W(-i\nu,y \sigma t,\psi)$.
Consider now the case $\sigma=J$ the long Weyl element. In this case $U_\sigma$ is trivial.
Let $u \in U(\mathbb{R})$ and $k \in K_\infty$.
Then, changing variables and using the fact that $W(-i\nu,y \cdot,\psi)$ is right-$K_\infty$-invariant, we have
\begin{equation}\label{TildeKIsWhittaker}
\tilde{K_\sigma}(-i\nu, y,utk)=\psi(u)\tilde{K_\sigma}(-i\nu, y,t).
\end{equation}
Moreover, $\tilde{K_\sigma}(-i\nu, y, \cdot)$ is an eigenfunction of the centre of the universal enveloping algebra in each variable,
with eigenvalues matching those of $W(-i\nu, \cdot,\psi)$.
It follows from the uniqueness of the Whittaker model that $$\tilde{K_\sigma}(-i\nu, y,t)=K_\sigma(-i\nu,y,\psi)W(-i\nu,t,\psi)$$
for some function $K_\sigma(-i\nu,y, \psi)$ that we call the long Weyl element {\bf Bessel function}.
$K_\sigma(-i\nu, \cdot, \psi)$ is itself an eigenfunction of the centre of the universal enveloping algebra
with eigenvalues matching those of $W(-i\nu, \cdot,\psi)$, and satisfies for all $u \in U(\mathbb{R})$ the transformation rule
\begin{equation}\label{BesselTransformation}
K_\sigma(-i\nu, uy, \psi)=\psi(u)K_\sigma(-i\nu, y, \psi)=K_\sigma(-i\nu, y \sigma u \sigma^{-1}, \psi).
\end{equation}
For the remaining two relevant elements of the Weyl group,
$\tilde{K_\sigma}(-i\nu, y, \cdot)$ still satisfies relation~(\ref{TildeKIsWhittaker}) for all $u \in U(\mathbb{R})$.
Thus we can still factor $\tilde{K_\sigma}(-i\nu, y,t)=K_\sigma(-i\nu,y,\psi)W(-i\nu,t,\psi)$ for appropriate $y$,
and define $K_\sigma(-i\nu,y,\psi)$ to be the $\sigma$-{\bf Bessel function}.
However, because of the restriction on $y$, the conditions satisfied by $K_\sigma(-i\nu,y,\psi)$ are more complicated.
Buttcane has announced a proof of Conjecture~\ref{interchange} in a more general context.
This would thus yield a uniform expression for the Archimedean integrals attached to the various elements of the Weyl group.
\begin{proposition}\label{ArchWithConj}
Assume Conjecture~\ref{interchange} holds.
Let $H \cdot \delta_\sigma$ be a relevant orbit. Then the corresponding Archimedean orbital integral is given by
\begin{align*}
I_{\delta_\sigma}(f_{\infty})=\frac1c\frac{\Delta_\sigma(t_{\mathbf{m}_2})}{|\mathbf{m}_{1,1}^4\mathbf{m}_{1,2}^3|}
\int_{\mathfrak{a}^*}\tilde{f_\infty}(-i\nu)
K_\sigma(-i\nu,t_{\mathbf{m}_1}^{-1}\sigma \delta t_{\mathbf{m}_2} \sigma^{-1},{\psi_{\mathbf{1}}})\\
\times W(-i\nu, t_{\mathbf{m}_2}^{-1} t_2,{\psi_{\mathbf{1}}})
W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}})\frac{d\nu}{c(i\nu)c(-i\nu)},
\end{align*}
where the constant $c$ is the one appearing in the spherical inversion theorem
and $\Delta_\sigma$ is the modulus character of the quotient $U_\sigma(\mathbb{R}) \backslash U(\mathbb{R})$.
\end{proposition}
\begin{proof}
We apply the statement of Conjecture~\ref{interchange} to the integral in Lemma~\ref{InTermsOfW} for the function
defined by
$$g(-i\nu)=\frac{1}{c(i\nu)c(-i\nu)}\tilde{f_\infty}(-i\nu)W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}}),$$ which has rapid decay
by the rapid decay of $\tilde{f_\infty}$ (Theorem~\ref{PW}), the explicit expression~(\ref{Plancherel}) for the spectral measure, and the estimate for the Whittaker function in the spectral aspect given by Proposition~\ref{WhittakerSpectral}.
\end{proof}
\subsection{Symplectic Kloosterman sums}
In this section, we compute the non-Archimedean part of the orbital integrals when the finite part of the test function satisfies the following.
\begin{assumption}\label{HeckeOprtr}
Recall from Assumption~\ref{testfunction} that we assume $f=f_\infty\prod_p f_p$ has central character $\omega.$
We now further assume that there are two coprime positive integers $N$ and $n$ such that
$\omega$ is trivial on $(1+N\hat{\mathbb{Z}}) \cap \hat{\mathbb{Z}}^\times$, and
the function $f_{fin}$ is supported on $Z(\mathbb{A}_{fin})M(n,N)$ and satisfies
$$f_{fin}(zm)=\frac1{\mathrm{Vol}(\overline{\Gamma_1(N)})}\overline{\omega}(z)$$
for $z \in Z(\mathbb{A}_{fin})$ and $m\in M(n,N)$,
where $$M(n,N)=\left\{g \in G(\mathbb{A}_{fin}) \cap \mathrm{Mat}_4(\hat{\mathbb{Z}}) : g \equiv \left[
\begin{smallmatrix}
* & & * & * \\
* & 1 & * & * \\
& & * & * \\
& & & *
\end{smallmatrix}\right] \pmod N, \ \mu(g) \in n \hat{\mathbb{Z}}^\times \right\},$$
and $$\Gamma_1(N)=\left\{g \in G(\hat{\mathbb{Z}}) : g \equiv \left[
\begin{smallmatrix}
* & & * & * \\
* & 1 & * & * \\
& & * & * \\
& & & *
\end{smallmatrix}\right] \pmod N\right\}.$$
\end{assumption}
\begin{remark}
With this choice, $f_{fin}=\bigotimes_p f_p$, and each $f_p$ is left and right $\Gamma_p(N)$-invariant,
where $$\Gamma_p(N)=\left\{g \in G(\mathbb{Z}_p): g \equiv \left[
\begin{smallmatrix}
* & & * & * \\
* & 1 & * & * \\
& & * & * \\
& & & *
\end{smallmatrix}\right] \pmod N \right\}.$$
In particular, if $x, c \in \mathbb{Z}_p$ then $\Gamma_p(N)$ contains the matrix $\left[
\begin{smallmatrix}
1 & & c & -cx \\
x & 1 & & \\
& & 1 & -x \\
& & & 1
\end{smallmatrix}\right].$
Thus, if $\phi$ is right-$\Gamma_p(N)$-invariant for all primes $p$, changing variables
$u \mapsto u\left[
\begin{smallmatrix}
1 & & c & -cx \\
x & 1 & & \\
& & 1 & -x \\
& & & 1
\end{smallmatrix}\right]$ in the integral expression of the $\psi_{\mathbf{m}}$-Whittaker coefficient of $\phi$,
we get $\mathcal{W}_{\psi_{\mathbf{m}}}(\phi)(g)=\overline{\theta(m_1x+m_2c)}\mathcal{W}_{\psi_{\mathbf{m}}}(\phi)(g)$ for all $g$.
Therefore, $\mathcal{W}_{\psi_{\mathbf{m}}}(\phi)=0$ unless $m_1$ and $m_2$ are integers.
Henceforth, we shall assume $\mathbf{m}_1$ and $\mathbf{m}_2$ are two pairs of integers.
\end{remark}
\begin{remark}
Note that $\Gamma=K_\infty \prod_p\Gamma_p(N)$ is contained in the Borel, Klingen, Siegel, and paramodular congruence subgroups of level $N$,
thus any automorphic form that is fixed by one of these groups is also fixed by $\Gamma$, and hence will appear in our formula.
One could fix a different choice of congruence subgroup, and accordingly define different types of Kloosterman sums.
\end{remark}
Under Assumption~\ref{HeckeOprtr}, we can further restrict the element $\delta_1$ appearing in Lemma~\ref{relevantorbits}.
\begin{lemma}\label{Supportdelta}
Let $\sigma \in \Omega$ and $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$
be such that the orbit of $\delta_\sigma=\sigma \delta_1$ is relevant.
Assume $I_{\delta_\sigma}(f_{fin}) \neq 0$.
Then there is an integer $s$ such that $d_1d_2=\pm\frac{n}{s^2}$.
\end{lemma}
\begin{proof}
For all $u \in U(\mathbb{A})$ and $u_1 \in U_\sigma(\mathbb{A}) \backslash U(\mathbb{A})$
we have $\mu(u \delta_\sigma u_1)=d_1d_2$.
So by Assumption~\ref{HeckeOprtr}, $u \delta_\sigma u_1$ belongs to the support of $f$
only if $d_1d_2 \in \left(\mathbb{A}_{fin}^\times\right)^2\hat{\mathbb{Z}}^\times n$.
Since $d_1d_2$ is a rational number, there must be a rational number $s$ such that $d_1d_2=\pm \frac{n}{s^2}$.
But the second diagonal entry of $s \sigma^{-1} u \delta_\sigma u_1$ is $s$, therefore $s$ must belong to $\hat{\mathbb{Z}}$,
hence $s$ is an integer.
\end{proof}
Henceforth, we shall assume $\delta$ is as in Lemma~\ref{Supportdelta}. By Remark~\ref{d1d2}, we could also assume that $d_1d_2>0$
(which would then fix the sign in the equality $d_1d_2=\pm\frac{n}{s^2}$ above). However, we do not need to do so for now, and we shall not,
in view of possible applications with a different choice of test function at the Archimedean place.
\begin{remark}
Consider the case $N=n=1$. Then $M(n,N)=\mathrm{GSp}_4(\hat{\mathbb{Z}})=\prod_p \mathrm{GSp}_4(\mathbb{Z}_p).$
For simplicity, set $\eta=\delta_\sigma$, and if $p$ is a prime and $x \in G(\mathbb{A})$, write $x_p$ for the $p$-th
component of $x$.
Also write $\psi_{p,1}$ and $\psi_{p,2}$ for the local $p$-th components of the characters $\psi_{\mathbf{m}_1}$ and $\overline{\psi_{\mathbf{m}_2}}$, respectively.
In particular, these characters are trivial on $U(\mathbb{Z}_p)$.
Then we have
\begin{align*}
I_{\delta_\sigma}(f_{fin})&= \frac1{\mathrm{Vol}(\overline{\Gamma_1(N)})}
\int_{U_\sigma(\mathbb{A}_{fin}) \backslash U(\mathbb{A}_{fin})}\int_{U(\mathbb{A}_{fin})}\mathbbm{1}_{\mathrm{GSp}_4(\hat{\mathbb{Z}})}(s u \eta v){\psi_{\mathbf{m}_1}(u)}\overline{\psi_{\mathbf{m}_2}(v)}\,du\,dv\\
&=\prod_p \frac1{\mathrm{Vol}(\overline{\Gamma_p(N)})} \int_{U_\sigma(\mathbb{Q}_p) \backslash U(\mathbb{Q}_p)}\int_{U(\mathbb{Q}_p)}\mathbbm{1}_{\mathrm{GSp}_4(\mathbb{Z}_p)}(s_p u_p \eta_p v_p){\psi_{p,1}(u_p)}{\psi_{p,2}(v_p)}\,du_p\,dv_p.
\end{align*}
For all but finitely many primes $p$, the non-zero entries of $s_p\eta_p$ are in $\mathbb{Z}_p^\times$. For those primes, by the explicit Bruhat decomposition
(see Lemma~\ref{uoppu} below),
the condition $s_p u_p \eta_p v_p \in \mathrm{GSp}_4(\mathbb{Z}_p)$ is equivalent to $u_p \in U(\mathbb{Z}_p)$ and $v_p \in U_\sigma(\mathbb{Z}_p) \backslash U(\mathbb{Z}_p)$, and hence
$$\int_{U_\sigma(\mathbb{Q}_p) \backslash U(\mathbb{Q}_p)}\int_{U(\mathbb{Q}_p)}\mathbbm{1}_{\mathrm{GSp}_4(\mathbb{Z}_p)}(s_p u_p \eta_p v_p){\psi_{p,1}(u_p)}{\psi_{p,2}(v_p)}\,du_p\,dv_p
=1.$$
For the remaining primes $p$, noticing that $U_\sigma(\mathbb{Q}_p) \backslash U(\mathbb{Q}_p)$ may be identified with the subgroup
$\overline{U_\sigma}(\mathbb{Q}_p)=U(\mathbb{Q}_p) \cap \sigma^{-1} \trans{U(\mathbb{Q}_p)} \sigma$,
the local integral equals the Kloosterman sum $\mathrm{Kl}(\eta,\psi_{p,1},\psi_{p,2})$ as defined in~\cite{SHMKloosterman}
when $\eta \in \mathrm{Sp}_4(\mathbb{Q}_p)$ (note that we denote here by $U_\sigma$ what is denoted there by $\overline{U_\sigma}$, and conversely).
\end{remark}
We now treat separately the contribution from each relevant element of the Weyl group from a global point of view.
To lighten notation, we shall not include $N$ and $\omega$ in the arguments of the Kloosterman sums we proceed to define.
\subsubsection{The identity contribution}
\begin{definition}\label{sumid}
Let $a,b,d,N$ be integers such that $d \mid N$.
Then the following sum is well-defined:
$$S(a,b,d,N)=\sum_{\substack{x,y \in \mathbb{Z}/ N \mathbb{Z}\\d \mid xy}}e\left(\frac{ax+by}N\right).$$
\end{definition}
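As a simple illustration (recorded only as a sanity check, and not used in the sequel), when $d=1$ the divisibility condition is vacuous and the sum factors as a product of two complete exponential sums:
$$S(a,b,1,N)=\left(\sum_{x \in \mathbb{Z}/N\mathbb{Z}}e\left(\frac{ax}{N}\right)\right)\left(\sum_{y \in \mathbb{Z}/N\mathbb{Z}}e\left(\frac{by}{N}\right)\right)=N^2\,\mathbbm{1}_{N \mid a}\,\mathbbm{1}_{N \mid b},$$
which agrees with the closed formula of the next lemma specialized to $k=0$.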
\begin{lemma}
Let $a,b,d,N$ be integers such that $d \mid N$.
Write $a=\prod_i p_i^{a_i}$, where the $p_i$ are distinct primes and the $a_i$ are integers, and similarly for $b, d, N$.
Then we have
$$S(a,b,d,N)=\prod_i S(p_i^{a_i}, p_i^{b_i}, p_i^{d_i}, p_i^{N_i}).$$
Moreover, if $n$ is a positive integer, $i,j,k$ are non-negative integers with $k \le n$ and $p$ is a prime, then we have
\begin{align*}
S(p^i,p^j,p^k,p^n)&=p^{2n-k}(1-p^{-1})\max(0,k+1-\max(0,n-i)-\max(0,n-j))\\
&+p^{2n-k-1}\left(\mathbbm{1}_{\substack{i \ge n\\j \ge n}}-
\mathbbm{1}_{\substack{i < n\\j < n\\i+j\ge 2n-k-1}}\right).
\end{align*}
In particular, it follows that $S(p^i,p^j,p^k,p^n)$ is non-zero only if
\begin{equation}\label{sumnonzero}
(n-i)+(n-j) \le k+1.
\end{equation}
\end{lemma}
\begin{proof}
The factorization is immediate from the Chinese remainder theorem.
Now let us evaluate $S=S(p^i,p^j,p^k,p^n)$.
We have (here, abusing notation slightly, we set $v_p(0)=n$)
$$S=\sum_{h=0}^k \sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x)=h}} e\left(\frac{p^ix}{p^n}\right)
\sum_{\substack{y \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(y) \ge k-h}} e\left(\frac{p^jy}{p^n}\right)
+ \sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x) \ge k+1}} e\left(\frac{p^ix}{p^n}\right)
\sum_{y \in \mathbb{Z} / p^n \mathbb{Z}} e\left(\frac{p^jy}{p^n}\right).$$
Now if $\ell$ is any non-negative integer, we have
$$ \sum_{\substack{y \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(y) \ge \ell}} e\left(\frac{p^jy}{p^n}\right)=
\begin{cases}
p^{n-\ell} \text{ if } j+\ell \ge n \text{ and } \ell \le n,\\
0 \text{ otherwise.}
\end{cases}$$
Hence
$$S=\sum_{h=0}^{k-\max(0,n-j)}p^{n-k+h} \sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x)=h}} e\left(\frac{p^ix}{p^n}\right)
+p^{2n-k-1}\mathbbm{1}_{\substack{j \ge n\\i+k+1 \ge n\\k<n}}.$$
Now $$\sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x)=h}} e\left(\frac{p^ix}{p^n}\right)=
\sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x) \ge h}} e\left(\frac{p^ix}{p^n}\right)
- \sum_{\substack{x \in \mathbb{Z} / p^n \mathbb{Z} \\ v_p(x) \ge h+1}} e\left(\frac{p^ix}{p^n}\right),$$
hence the $h$-sum becomes
$$\left(\sum_{h=\max(0,n-i)}^{k-\max(0,n-j)} p^{2n-k}(1-p^{-1})\right)-p^{2n-k-1}\mathbbm{1}_{0 \le n-i-1 \le k-\max(0,n-j)}
+p^{2n-k-1}\mathbbm{1}_{k-\max(0,n-j)=n},$$
so
\begin{align*}
S=p^{2n-k}(1-p^{-1})\max(0,k+1-\max(0,n-i)-\max(0,n-j))\\
+p^{2n-k-1}(\mathbbm{1}_{\substack{j \ge n\\i+k+1 \ge n\\k<n}}-\mathbbm{1}_{0 \le n-i-1 \le k-\max(0,n-j)}+\mathbbm{1}_{k-\max(0,n-j)=n}).
\end{align*}
Finally, it can be checked by inspection of cases that
$$\mathbbm{1}_{\substack{j \ge n\\i+k+1 \ge n\\k<n}}-\mathbbm{1}_{0 \le n-i-1 \le k-\max(0,n-j)}+\mathbbm{1}_{k-\max(0,n-j)=n}=
\mathbbm{1}_{\substack{i \ge n\\j \ge n}}-
\mathbbm{1}_{\substack{i < n\\j < n\\i+j\ge 2n-k-1}}.$$
\end{proof}
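Since the closed formula above involves several case distinctions, the following short script (a brute-force sanity check included only for the reader's convenience; it plays no role in the arguments) compares it with a direct evaluation of the sum of Definition~\ref{sumid} for small values of $p$, $i$, $j$, $k$ and $n$.
\begin{verbatim}
# Brute-force check of the closed formula for S(p^i, p^j, p^k, p^n).
import cmath

def S_bruteforce(a, b, d, N):
    # S(a, b, d, N) = sum over x, y mod N with d | x*y of e((a*x + b*y)/N)
    total = 0
    for x in range(N):
        for y in range(N):
            if (x * y) % d == 0:
                total += cmath.exp(2j * cmath.pi * (a * x + b * y) / N)
    return total

def S_formula(p, i, j, k, n):
    # Closed formula from the lemma above.
    first = p**(2*n - k) * (1 - 1/p) * max(0, k + 1 - max(0, n - i) - max(0, n - j))
    second = p**(2*n - k - 1) * (int(i >= n and j >= n)
                                 - int(i < n and j < n and i + j >= 2*n - k - 1))
    return first + second

for p in (2, 3):
    for n in (1, 2, 3):
        for k in range(n + 1):
            for i in range(n + 2):
                for j in range(n + 2):
                    assert abs(S_bruteforce(p**i, p**j, p**k, p**n)
                               - S_formula(p, i, j, k, n)) < 1e-6
print("closed formula agrees with brute force on all tested cases")
\end{verbatim}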
\begin{proposition}\label{idcontribution}
Let $\sigma=1$ and $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$ with $d_1d_2=\pm\frac{n}{s^2}$ for some integer $s$.
Then $I_{\delta_\sigma}(f_{fin})=0$ unless all of the following hold:
\begin{enumerate}
\item $s$ divides $n$,
\item $d_1=\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}}$ and $sd_1$ is an integer dividing $n$,
\item $d_2=\frac{\mathbf{m}_{1,1}\mathbf{m}_{1,2}}{\mathbf{m}_{2,1}\mathbf{m}_{2,2}}$.
\end{enumerate}
If all these conditions are met, let $d=\gcd\left(s,sd_1,sd_2,\frac{n}s\right)$ and $D=\gcd\left(sd_1,\frac{n}s\right)$. Then
\begin{equation}\label{IdContribution}
I_{\delta_\sigma}(f_{fin})=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\frac{n d}{|s^3d_1|}
S\left(\mathbf{m}_{1,1}\frac{n}D,\mathbf{m}_{1,2}sd_1,d,n\right),
\end{equation}
where $\omega_N(s)=\prod_{p \mid N} \omega_p(s)$.
\end{proposition}
\begin{remark}
The integer $s$ is only determined up to sign. However, expression~(\ref{IdContribution}) does not depend on the sign of $s$,
since $S(a,b,d,n)=S(a,-b,d,n)$ and $\omega_N(-1)=\omega(-1)=1$ as $\omega_p(-1)=1$ for all $p \nmid N$.
\end{remark}
\begin{remark}
The two pairs of integers $\mathbf{m}_1$ and $\mathbf{m}_2$ essentially play symmetric roles in our formula.
More precisely, for our choice of test function $f$, the operator $\omega_N(n)^{\frac12}R(f)$ is self-adjoint.
Thus exchanging $\mathbf{m}_1$ and $\mathbf{m}_2$ amounts to taking the complex conjugate of the spectral side and multiplying it by $\omega_N(n)$.
Hence the geometric side, and in particular the identity contribution, should enjoy the same symmetries.
Proposition~\ref{idcontribution} says that the identity element has a non-zero contribution only if there is an integer $t$ dividing $n$
with $\frac{n}t=\pm\frac{\mathbf{m}_{1,2}}{\mathbf{m}_{2,2}}t$ and such that $s=\frac{\mathbf{m}_{2,1}}{\mathbf{m}_{1,1}}t$ is also an integer dividing $n$.
This condition is indeed symmetric, as interchanging $\mathbf{m}_1$ and $\mathbf{m}_2$ amounts to replacing $t$ with $\frac{n}t$ and $s$ with $\frac{n}s$.
In addition, we have
$S\left(\mathbf{m}_{1,1}\frac{n}{\gcd(t,\frac{n}s)},\mathbf{m}_{1,2}t,d,n\right)=S\left(\mathbf{m}_{2,1}\frac{n}{\gcd(s,\frac{n}t)},\mathbf{m}_{2,2}\frac{n}{t},d,n\right)$.
Finally, using that $|s^3d_1|=\left|n^3\frac{\mathbf{m}_{2,1}^4\mathbf{m}_{2,2}^3}{\mathbf{m}_{1,1}^4\mathbf{m}_{1,2}^3}\right|^{\frac12}$,
multiplying $\frac{n}{|s^3d_1|}$ by the factor $\frac1{|\mathbf{m}_{1,1}^4\mathbf{m}_{1,2}^3|}$ that comes from the Archimedean part in Proposition~\ref{ArchWithConj}
gives $n^{-\frac12}(\mathbf{m}_{1,1}\mathbf{m}_{2,1})^{-2}|\mathbf{m}_{1,2}\mathbf{m}_{2,2}|^{-\frac32}$.
\end{remark}
\begin{remark}
In the case $n=1$ we must have $s=\pm1$ and hence $\mathbf{m}_{1,1}=\pm\mathbf{m}_{2,1}$. Together with the condition
$d_1d_2=\pm\frac{\mathbf{m}_{1,1}^2\mathbf{m}_{1,2}}{\mathbf{m}_{2,1}^2\mathbf{m}_{2,2}}=\frac{n}{s^2}$ this also gives $\mathbf{m}_{1,2}=\pm\mathbf{m}_{2,2}.$
\end{remark}
\begin{remark}
Using condition~(\ref{sumnonzero}), we find that the contribution from the identity element is non-zero only if for
every prime $p \mid n$ we have $$v_p(s) \le v_p(\mathbf{m}_{2,1})+v_p(\mathbf{m}_{2,1})+\min(0,v_p(\mathbf{m}_{2,1})-v_p(\mathbf{m}_{1,1}))+1,$$
which in turn implies that for all primes $p$ we have
$$v_p(n) \le 2 \min(v_p(\mathbf{m}_{1,1}),v_p(\mathbf{m}_{2,1}))+v_p(\mathbf{m}_{1,2})+v_p(\mathbf{m}_{2,2})+1.$$
\end{remark}
\begin{proof}
The finite part of the orbital integral corresponding to the identity element reduces to
\begin{align*}
I_{\delta_\sigma}(f_{fin})= \int_{U(\mathbb{A}_{fin})}f( u \delta){\psi_{\mathbf{m}_1}(u)}\,du
=\int_{U(\mathbb{A}_{fin})}f(s u \delta){\psi_{\mathbf{m}_1}(u)}\,du.
\end{align*}
Assume it is non-zero.
Note that by Lemma~\ref{Supportdelta} we have $\mu(su\delta)=n$.
Then by Assumption~\ref{HeckeOprtr}, $su\delta \in \mathrm{Supp}(f)$ if and only if $su\delta \in \mathrm{Mat}_4(\hat{\mathbb{Z}})$.
In particular, each entry of $s\delta$ must be an integer.
Furthermore, by Lemma~\ref{relevantorbits} we must have
$\delta=\diag{d_1}{1}{d_2}{d_1d_2}$ with $d_1=\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}}$, $d_2=\frac{\mathbf{m}_{1,1}\mathbf{m}_{1,2}}{\mathbf{m}_{2,1}\mathbf{m}_{2,2}}$.
So we learn that $sd_1= s\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}} \in \mathbb{Z}$, $s \mid n$, and $sd_1 \mid n$.
Now let us examine the non-diagonal entries of $su\delta$.
Write $u=\left[
\begin{smallmatrix}
1 & & c & a-cx \\
x & 1 & a & b \\
& & 1 & -x \\
& & & 1
\end{smallmatrix}\right]$.
Then the following conditions must hold:
\begin{enumerate}
\item $sd_1x \in \hat{\mathbb{Z}}$ and $\frac{n}{s}x \in \hat{\mathbb{Z}}$,
\item $c' \doteq \frac{n}{sd_1}c \in \hat{\mathbb{Z}}$,
\item $a' \doteq \frac{n}{sd_1}a \in \hat{\mathbb{Z}}$,
\item $\frac{n}{s}(a-cx) \in \hat{\mathbb{Z}}$,
\item $b' \doteq \frac{n}{s}b \in \hat{\mathbb{Z}}$.
\end{enumerate}
Condition~(1) is equivalent to $x \in \frac1{D}\hat{\mathbb{Z}}$, where $D=\gcd(sd_1, \frac{n}s)$ (note that $sd_1 \mid sD$).
Set $x'=Dx$.
Then condition~(4) gives
$d_1a'-\frac{d_1}{D}c'x' \in \hat{\mathbb{Z}}$.
Combined with conditions~(1),~(2) and~(3), this is equivalent to
$ c'x' \equiv Da' \bmod \frac{D}{d_1}$.
Now, $\psi_{\mathbf{m}_1}(u)=\theta_{fin}(\mathbf{m}_{1,1}x+\mathbf{m}_{1,2}c)$ and $f(su\delta)=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}$.
Therefore, integration with respect to $b$ gives
$\mathrm{Vol}\left(\frac{s}{n}\hat{\mathbb{Z}}\right)\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}=
\frac{n}s\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}.$
Next, changing variables $x=\frac1Dx'$ and $c=\frac{sd_1}n c'$, for fixed $a$ the $x,c$-integral is
$$I(a)=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\frac{n^2D}{s^2d_1}\int\int_{ c'x' \equiv Da' \bmod \frac{D}{d_1}}
\theta_{fin}\left(\mathbf{m}_{1,1}\frac{x'}{D}+\mathbf{m}_{1,2}\frac{sd_1}n c'\right)dx'\,dc'.$$
Since $D \mid sd_1$ and $\mathbf{m}_{1,2}\frac{s^2d_1^2}n=\mathbf{m}_{2,2}$, and since $\theta_{fin}$ is trivial on $\hat{\mathbb{Z}}$,
the integrand is constant on cosets $x'+sd_1\hat{\mathbb{Z}}$ and $c' + sd_1\hat{\mathbb{Z}}$.
As $sd_1 \mid sD$, it is also constant on cosets $x'+sD\hat{\mathbb{Z}}$ and $c' + sD\hat{\mathbb{Z}}$.
Therefore we get
\begin{align*}
I(a)&=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\frac{n^2}{|Dd_1s^4|}\sum_{\substack{x,y \in \mathbb{Z} / sD \mathbb{Z} \\ xy \in Da'+\frac{D}{d_1} \hat{\mathbb{Z}}}}
e\left(\frac{\mathbf{m}_{1,1}x}D+\frac{\mathbf{m}_{1,2}sd_1y}{n}\right).
\end{align*}
Finally, the $a$-integrand depends only on $a' \bmod \frac{D}{d_1} \hat{\mathbb{Z}}$, thus, setting $d=\gcd\left(D,\frac{D}{d_1}\right)=\gcd\left(s,sd_1,sd_2,\frac{n}s\right)$ we get
\begin{align*}
I_{\delta_\sigma}(f_{fin})&=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}
\frac{n^3 }{|s^5D^2d_1|}\sum_{a \in \mathbb{Z} / \frac{D}{d_1} \mathbb{Z}}\sum_{\substack{x,y \in \mathbb{Z} / sD \mathbb{Z} \\ xy \in Da+\frac{D}{d_1} \hat{\mathbb{Z}}}}
e\left(\frac{\mathbf{m}_{1,1}x}D+\frac{\mathbf{m}_{1,2}sd_1y}{n}\right)\\
&=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\frac{n^3 }{|s^5D^2d_1|}d
\sum_{\substack{x,y \in \mathbb{Z} / sD \mathbb{Z} \\ xy \in d \mathbb{Z}}}
e\left(\frac{\mathbf{m}_{1,1}x}D+\frac{\mathbf{m}_{1,2}sd_1y}{n}\right)\\
&=\frac{\overline{\omega_N(s)}}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\frac{n }{|s^3d_1|}d
\sum_{\substack{x,y \in \mathbb{Z} / n \mathbb{Z} \\ xy \in d \mathbb{Z}}}
e\left(\frac{\mathbf{m}_{1,1}x}D+\frac{\mathbf{m}_{1,2}sd_1y}{n}\right).
\end{align*}
\end{proof}
\subsubsection{The contribution from the longest Weyl element}
The following lemma makes explicit how to compute the Bruhat decomposition for elements in the cell of the
long Weyl element.
One could do the same for each element of the Weyl group but, as these are straightforward calculations, we only
include this case for the sake of clarity in later arguments.
\begin{lemma}\label{uoppu}
Let $\mathbb{F}$ be a field, and let $g \in \mathrm{GSp}_4(\mathbb{F})$.
Assume
$$g=\left[
\begin{smallmatrix}
1 & & c_1 & a_1 \\
x_1 & 1 & a_1+c_1x_1 & b_1 \\
& & 1 & -x_1 \\
& & & 1 \\
\end{smallmatrix}
\right]
J
\diag{t_1}{t_2}{t_3t_1^{-1}}{t_3t_2^{-1}}
\left[
\begin{smallmatrix}
1 & & c_2 & a_2-c_2x_2 \\
x_2 & 1 & a_2 & b_2 \\
& & 1 & -x_2 \\
& & & 1 \\
\end{smallmatrix}
\right]
= \mat{A}{B}{C}{D} =
\bigmat{\block{a_{11}}{a_{12}}{a_{21}}{a_{22}}}
{B}
{\block{c_{11}}{c_{12}}{c_{21}}{c_{22}}}
{\block{d_{11}}{d_{12}}{d_{21}}{d_{22}}} .$$
Set
\begin{alignat*}{1}
\Delta_1=\mat{a_{11}}{a_{12}}{c_{21}}{c_{22}},& \quad \Delta_2=\mat{c_{12}}{d_{11}}{c_{22}}{d_{21}}.
\end{alignat*}
Then
\begin{alignat*}{5}
t_3=\mu(g), & \quad t_2=- c_{22}, & \quad t_1t_2=\det(C),
\end{alignat*}
\begin{alignat*}{5}
x_1=-\frac{c_{12}}{c_{22}},& \quad x_2=\frac{c_{21}}{c_{22}}, & \quad c_1=\frac{\det(\Delta_1)}{\det(C)},& \quad c_2=-\frac{\det(\Delta_2)}{\det(C)},
\end{alignat*}
\begin{alignat*}{5}
a_{1} =\frac{a_{12}}{c_{22}}, & \quad a_2 = \frac{d_{21}}{c_{22}},&\quad b_1 =\frac{a_{22}}{c_{22}}, & \quad b_2 =\frac{d_{22}}{c_{22}}.
\end{alignat*}
Moreover, if $g= \mat{A}{B}{C}{D} \in \mathrm{GSp}_4(\mathbb{F})$ with $C=\mat{c_{11}}{c_{12}}{c_{21}}{c_{22}}$ satisfies
$\det(C) \neq 0$ and $c_{22} \neq 0$, then~$g \in UJ T U$.
\end{lemma}
\begin{proof}
The first claims follow by computing explicitly
\begin{alignat*}{3}
C&=\mat{1}{-x_1}{}{1}\mat{-t_1}{}{}{-t_2}\mat{1}{}{x_2}{1}=\mat{-t_1+t_2x_1x_2}{t_2x_1}{-t_2x_2}{-t_2},\\
\Delta_1&=\mat{c_1}{a_1}{}{1}\mat{-t_1}{}{}{-t_2}\mat{1}{}{x_2}{1}, & \quad
\Delta_2&=\mat{1}{-x_1}{}{1}\mat{-t_1}{}{}{-t_2}\mat{}{c_2}{1}{a_2},\\
D&=\mat{1}{-x_1}{}{1}\mat{-t_1}{}{}{-t_2}\mat{c_2}{a_2-c_2x_2}{a_2}{b_2},&\quad
A&=\mat{c_1}{a_1}{a_1+c_1x_1}{b_1}\mat{-t_1}{}{}{-t_2}\mat{1}{}{x_2}{1}.
\end{alignat*}
To prove the last claim, it suffices to show that provided $\det{C} \neq 0$ and $c_{22} \neq 0$,
there exists at most one $g \in \mathrm{GSp}_4(\mathbb{F})$ with the specified values for $\mu(g)$, $C$, $a_{12}, a_{22},$ $d_{21}, d_{22}$,
$\det(\Delta_1)$ and $\det(\Delta_2)$.
Since $c_{22} \neq 0$, the values of $a_{12}$, $c_{21}$ and $\det(\Delta_1)=a_{11}c_{22}-c_{21}a_{12}$ determine the value of
$a_{11}$. The equation $\trans{A}C=\trans{C}A$ then gives $a_{12}c_{11}+a_{22}c_{21}=a_{11}c_{12}+a_{21}c_{22}$,
which determines the value of $a_{21}$, hence of $A$.
The same reasoning using $\det(\Delta_2)$ and $C\trans{D}=D\trans{C}$ instead similarly fixes $D$.
Finally, the equation $\trans{A}D-\trans{C}B=\mu(g)$ fixes $B$ since we are assuming $C$ is invertible.
\end{proof}
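The formulas for the block $C$ are the ones used repeatedly below. As a small independent check (purely illustrative, carried out with a computer algebra system; it only verifies the $2\times 2$ computation displayed at the beginning of the proof), one may confirm symbolically that the stated recovery formulas hold.
\begin{verbatim}
# Symbolic check of the C-block identities in Lemma uoppu, using the
# expression C = [[1,-x1],[0,1]] * [[-t1,0],[0,-t2]] * [[1,0],[x2,1]]
# computed in the proof above.
import sympy as sp

t1, t2, x1, x2 = sp.symbols('t1 t2 x1 x2', nonzero=True)
C = (sp.Matrix([[1, -x1], [0, 1]])
     * sp.Matrix([[-t1, 0], [0, -t2]])
     * sp.Matrix([[1, 0], [x2, 1]]))

assert sp.simplify(C[1, 1] + t2) == 0             # t2 = -c22
assert sp.simplify(C.det() - t1 * t2) == 0        # t1*t2 = det(C)
assert sp.simplify(-C[0, 1] / C[1, 1] - x1) == 0  # x1 = -c12/c22
assert sp.simplify(C[1, 0] / C[1, 1] - x2) == 0   # x2 = c21/c22
print("C-block identities verified")
\end{verbatim}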
\begin{definition}\label{KloosLWE}
Let $s,d,m$ be three non-zero integers and define
$$C_J(s,d,m)= \left\{g=\mat{A}{B}{C}{D} : \det(C)=d,\ c_{22}=-s,\ \mu(g)=m\right\}$$
and
$$\Gamma_J(N,s,d,m)= \Gamma_1(N) \cap \mathrm{Mat}_4(\mathbb{Z}) \cap C_J(s,d,m).$$
For $g=\mat{A}{B}{C}{D} \in \mathrm{GSp}_4$,
let $\Delta_1=\mat{a_{11}}{a_{12}}{c_{21}}{c_{22}}$
and $\Delta_2=\mat{c_{12}}{d_{11}}{c_{22}}{d_{21}}$.
Then, for $\mathbf{m}_1,\mathbf{m}_2$ two pairs of non-zero integers, we define the following generalized twisted Kloosterman sum:
\begin{align*}
\mathrm{Kl}_J(\mathbf{m}_1,\mathbf{m}_2,s,d,m)&=\\
\sum_{g \in U(\mathbb{Z}) \backslash \Gamma_J(N,s,d,m) / U(\mathbb{Z})}&\overline{\omega_N}(a_{22})
e\left(\frac{\mathbf{m}_{1,1}c_{12}+\mathbf{m}_{2,1}c_{21}}{s}+\frac{\mathbf{m}_{1,2}\det(\Delta_1)+\mathbf{m}_{2,2}\det(\Delta_2)}{d}\right).
\end{align*}
\end{definition}
\begin{remark}
Using Lemma~\ref{uoppu}, we can see that $\mathrm{Kl}_J(\mathbf{m}_1,\mathbf{m}_2,s,d,m)$ is well defined.
Indeed, matrices in $\Gamma_J(N,s,d,m)$ are of the form $$g=u(x_1,a_1,b_1,c_1)J\diag{\frac{d}s}{s}{m\frac{s}{d}}{\frac{m}s}u(x_2,a_2,b_2,c_2).$$
Then $\frac{c_{12}}s=x_1$, $\frac{c_{21}}s=-x_2$, $\frac{\det(\Delta_1)}{d}=c_1$ and $\frac{\det(\Delta_2)}{d}=-c_2$.
Now, multiplying $g$ on the left (resp.\ on the right) by an element of $U(\mathbb{Z})$ does not change the classes of $x_1$ and $c_1$ (resp.\ $x_2$ and $c_2$)
in $\mathbb{R} / \mathbb{Z}$.
\end{remark}
\begin{proposition}\label{KloosJ}
Let $\sigma=J$ and $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$
with $d_1d_2=\pm\frac{n}{s^2}$ for some integer $s$.
Then we have $I_{\delta_\sigma}(f_{fin})=\frac{1}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\mathrm{Kl}_J(\mathbf{m}_1,\mathbf{m}_2,s,d_1s^2,s^2d_1d_2)$.
\end{proposition}
\begin{remark}
The set $\Gamma_J(N,s,d_1s^2,s^2d_1d_2)$ is non-empty only if $N$ divides $s$ and $N^2$ divides $d_1s^2$.
\end{remark}
\begin{proof}
The finite part of the orbital integral corresponding to the longest Weyl element reduces to
\begin{align*}
I_{\delta_\sigma}(f_{fin})&= \int_{U(\mathbb{A}_{fin})}\int_{U(\mathbb{A}_{fin})}f( u_1 J \delta u_2){\psi_{\mathbf{m}_1}(u_1)}\overline{\psi_{\mathbf{m}_2}(u_2)}\,du_1\,du_2\\
&=\int_{U(\mathbb{A}_{fin})}\int_{U(\mathbb{A}_{fin})}f(s u_1 J \delta u_2){\psi_{\mathbf{m}_1}(u_1)}\overline{\psi_{\mathbf{m}_2}(u_2)}\,du_1\,du_2.
\end{align*}
By Assumption~\ref{HeckeOprtr}, $su_1 J \delta u_2 \in \mathrm{Supp}(f)$ if and only if $su_1J\delta u_2=\mat{A}{B}{C}{D} \in Z(\mathbb{A}_{fin})M(n,N)$.
In this case, we have $f(s u_1 J \delta u_2)=\frac{\overline{\omega_N}(a_{22})}{\mathrm{Vol}(\overline{\Gamma_1(N)})}$,
and Lemma~\ref{uoppu} shows that
$${\psi_{\mathbf{m}_1}(u_1)}\overline{\psi_{\mathbf{m}_2}(u_2)}=
e\left(-\frac{\mathbf{m}_{1,1}c_{12}+\mathbf{m}_{2,1}c_{21}}{c_{22}}+\frac{\mathbf{m}_{1,2}\det(\Delta_1)+\mathbf{m}_{2,2}\det(\Delta_2)}{\det(C)}\right).$$
Moreover, $f$ is left and right $U(\hat{\mathbb{Z}})$-invariant, and the characters $\psi_{\mathbf{m}_1}$ and $\psi_{\mathbf{m}_2}$
are trivial on~$U(\hat{\mathbb{Z}})$.
Therefore, if we consider the map $\varphi: U(\mathbb{A}_{fin}) \times U(\mathbb{A}_{fin}) \to G(\mathbb{A}_{fin})$, $(u_1,u_2) \mapsto s u_1 J \delta u_2$, we have
\begin{align*}
I_{\delta_\sigma}(f_{fin})=
\sum_{U(\hat{\mathbb{Z}}) \backslash \left(M(n,N) \cap \mathrm{Im}(\varphi)\right) / U(\hat{\mathbb{Z}})}
&\frac{\overline{\omega_N}(a_{22})}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\\
& \times
e\left(-\frac{\mathbf{m}_{1,1}c_{12}+\mathbf{m}_{2,1}c_{21}}{c_{22}}+\frac{\mathbf{m}_{1,2}\det(\Delta_1)+\mathbf{m}_{2,2}\det(\Delta_2)}{\det(C)}\right).
\end{align*}
Now by Lemma~\ref{uoppu}, $\mathrm{Im}(\varphi)=C_J(s,d_1s^2,s^2d_1d_2)$.
Therefore, $$U(\hat{\mathbb{Z}}) \backslash (M(n,N) \cap \mathrm{Im}(\varphi)) / U(\hat{\mathbb{Z}})$$ may be identified with $U(\mathbb{Z}) \backslash \Gamma_J(N,s,d_1s^2,s^2d_1d_2) / U(\mathbb{Z})$.
\end{proof}
\subsubsection{Contribution from \texorpdfstring{$\sigma=s_1s_2s_1$}{s1s2s1}}
\begin{definition}
Let $s,d,m$ be three non-zero integers and define
$$C_{121}(s,d,m)= \left\{g=\mat{A}{B}{C}{D}: \det(C)=0,\ c_{22}=-s,\ \det(\Delta_2)=d,\ \mu(g)=m\right\}$$
and
$$\Gamma_{121}(N,s,d,m)= \Gamma_1(N) \cap \mathrm{Mat}_4(\mathbb{Z}) \cap C_{121}(s,d,m).$$
For $g=\mat{A}{B}{C}{D} \in \mathrm{GSp}_4$, let $\Delta_3=\mat{a_{12}}{b_{11}}{c_{22}}{d_{21}}$.
Then we define the following generalized twisted Kloosterman sum:
\begin{align*}
\mathrm{Kl}_{121}(\mathbf{m}_1,\mathbf{m}_2,s,d,m)&=\\
\sum_{g \in U(\mathbb{Z}) \backslash \Gamma_{121}(N,s,d,m) / \overline{U_\sigma}(\mathbb{Z})}&\overline{\omega_N}(a_{22})
e\left(\frac{\mathbf{m}_{1,1}c_{12}+\mathbf{m}_{2,1}c_{21}}{s}+\frac{\mathbf{m}_{1,2}\det(\Delta_3)}{d}\right).
\end{align*}
\end{definition}
By an argument similar to the one used for the long Weyl element, $\mathrm{Kl}_{121}(\mathbf{m}_1,\mathbf{m}_2,s,d,m)$ is well-defined, and together with the condition on $\delta$
from Lemma~\ref{relevantorbits} we get the following.
\begin{proposition}
Let $\sigma=s_1s_2s_1$ and $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$
with $d_1d_2=\pm \frac{n}{s^2}$ for some integer $s$ and $d_1\mathbf{m}_{1,2}=d_2\mathbf{m}_{2,2}$.
Then we have $I_{\delta_\sigma}(f_{fin})=\frac{1}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\mathrm{Kl}_{121}(\mathbf{m}_1,\mathbf{m}_2,s,d_2s^2,s^2d_1d_2)$.
\end{proposition}
\begin{remark}
The set $\Gamma_{121}(N,s,d_2s^2,s^2d_1d_2)$ is non-empty only if $N$ divides $s$ and $N^2$ divides $d_2s^2$.
\end{remark}
\subsubsection{Contribution from \texorpdfstring{$\sigma=s_2s_1s_2$}{s2s1s2}}
\begin{definition}
Let $s,d,m$ be three non-zero integers and define
$$C_{212}(s,d,m)= \left\{g=\mat{A}{B}{C}{D} : \det(C)=-d,\ c_{22}=0,\ c_{21}=-s,\ \mu(g)=m \right\}$$
and
$$\Gamma_{212}(N,s,d,m)=\mathrm{Mat}_4(\mathbb{Z}) \cap \Gamma_1(N) \cap C_{212}(s,d,m).$$
We define the following generalized twisted Kloosterman sum:
\begin{align*}
\mathrm{Kl}_{212}(\mathbf{m}_1,\mathbf{m}_2,s,d,m)&=\\
\sum_{g \in U(\mathbb{Z}) \backslash \Gamma_{212}(N,s,d,m) / \overline{U_\sigma}(\mathbb{Z})}&\overline{\omega_N}(a_{22})
e\left(\frac{\mathbf{m}_{1,1}c_{11}+\mathbf{m}_{2,2}d_{21}}{s}-\frac{\mathbf{m}_{1,2}\det(\Delta_1)}{d}\right).
\end{align*}
\end{definition}
By a similar argument as above, $\mathrm{Kl}_{212}(\mathbf{m}_1,\mathbf{m}_2,s,d,m)$ is well defined, and we have the following.
\begin{proposition}
Let $\sigma=s_2s_1s_2$ and $\delta_1=\diag{d_1}{1}{d_2}{d_1d_2}$
with $d_1d_2=\pm\frac{n}{s^2}$ for some integer $s$ and $\mathbf{m}_{1,1}=-d_1\mathbf{m}_{2,1}$.
Then we have $I_{\delta_\sigma}(f_{fin})=\frac{1}{\mathrm{Vol}(\overline{\Gamma_1(N)})}\mathrm{Kl}_{212}(\mathbf{m}_1,\mathbf{m}_2,sd_1,d_1s^2,sd_1d_2)$.
\end{proposition}
\begin{remark}
The set $\Gamma_{212}(N,sd_1,d_1s^2,sd_1d_2)$ is non-empty only if $N$ divides $d_1s$ and $N^2$ divides $d_1s^2$.
\end{remark}
\mathbf{s}ection{The final formula}
We now assemble the material from previous sections to obtain our relative trace formula.
Let $N \ge 1$ be an integer.
We define the {\bf adelic congruence subgroup} $\mathbf mathfrak{G}amma_1(N)$ to be matrices of the form $g_\infty g_{fin}$
where $g_\infty \in K_\infty$ and $g_{fin} \in \{g \in G(\hat{\mathbb{Z}}): g {\bf{e}}quiv \left[
\begin{smallmatrix}
* & & * & * \\
* & 1 & * & * \\
& & * & * \\
& & & *
{\bf{e}}nd{smallmatrix}\right] \mathbf mod N \}.$
Fix a character $\mathfrak{o}mega: \mathbb{Q}^\times\mathbb{R}^\times \backslash \mathbf mathbb{A}^\times \to \mathbf mathbb{C}$, that we may see as a character of the centre of $G(\mathbf mathbb{A})$.
Assume that $\mathfrak{o}mega$ is trivial on $(1+N\hat{\mathbb{Z}}) \cap \hat{\mathbb{Z}}^\times$, and define $\mathfrak{o}mega_N(t)=\mathfrak{p}rod_{p \mathbf mid N}\mathfrak{o}mega(t_p)$.
For each standard parabolic subgroup $P=N_PM_P$ (including $G$ itself), consider the space $\mathbf mathscr H_P$ defined in Section~\ref{DefOfES}.
For each character $\chi$ of the centre of $M_P$ whose restriction to the centre of $G$ coincides with $\mathfrak{o}mega$,
let $\mathbf mathfrak{G}en_P(\chi)$ be an orthonormal basis consisting of factorizable vectors of the subspaces of functions $\mathfrak{p}hi$ in $\mathbf mathscr H_P$
that are generic, $\mathbf mathfrak{G}amma_1(N)$-fixed, and have central character $\chi$.
Specifically,
\begin{itemize}
\item If $P=G$ then $\mathrm{Gen}_P(\omega)$ consists of cuspidal elements of $L^2(Z(\mathbb{R})G(\mathbb{Q})\backslash G(\mathbb{A}), \omega)^{\Gamma_1(N)}$,
\item If $P=B$, each such character $\chi$ may be identified with a triplet of characters $(\omega_1,\omega_2,\omega_3)$
satisfying $\omega_1 \omega_2 \omega_3^2=\omega$.
Choose a set of representatives $S=\{k_1, \cdots, k_d\}$ of $(K \cap B(\mathbb{A})) \backslash K / \Gamma_1(N)$.
Then there is a basis $(e_j)_{1 \le j \le d}$
of $\mathbb{C}^S$ such that functions in $\mathrm{Gen}_P(\omega_1,\omega_2,\omega_3)$ are of the form
$$\phi_j^{\text B}(b k_i \gamma) = \chi(b)e_j(k_i)$$ for $b \in B(\mathbb{A})$, $\gamma \in \Gamma_1(N)$.
\item If $P=P_{\text{K}}$, each such character $\chi$ may be identified with a pair of characters $(\omega_1,\omega_2)$
satisfying $\omega_1\omega_2=\omega$.
Choose a set of representatives $S=\{k_1, \cdots, k_d\}$ of $(K \cap P_{\text{K}}(\mathbb{A})) \backslash K / \Gamma_1(N)$.
Keeping notations of~\S~\ref{KES}, for each $i$, consider the compact subgroup of $\mathrm{GL}_2$ given by
$C_i=\sigma_{\text K}\left(\mathrm{Stab}_{K \cap P_{\text{K}}(\mathbb{A})} (k_i)\right)$.
Then, for each cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2$ with central character $\omega_1$
and whose Archimedean component is a principal series, there is a basis $(u_j)_j=(u_{j,i})_{i,j}$ of $\prod_i \pi^{C_i}$
such that functions in $\mathrm{Gen}_P(\omega_1,\omega_2)$ are of the form
$$\phi_{\pi,j}^{\text K}(p k_i \gamma) = \omega_2(p)u_{j,i}(\sigma_{\text K}(p))$$ for $p \in P_{\text{K}}(\mathbb{A})$, $\gamma \in \Gamma_1(N)$.
In particular each $u_{j,i}$ is a $\mathrm{GL}_2$ adelic Maass form.
\item If $P=P_{\text{S}}$, each such character $\chi$ may be identified with a pair of characters $(\omega_1,\omega_2)$
satisfying $\omega_1\omega_2^2=\omega$.
Choose a set of representatives $S=\{k_1, \cdots, k_d\}$ of $(K \cap P_{\text{S}}(\mathbb{A})) \backslash K / \Gamma_1(N)$.
Keeping notations of~\S~\ref{PES}, for each $i$, consider the compact subgroup of $\mathrm{GL}_2$ given by
$C_i=\sigma_{\text S}\left(\mathrm{Stab}_{K \cap P_{\text{S}}(\mathbb{A})} (k_i)\right)$.
Then, for each cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2$ with central character $\omega_1$
and whose Archimedean component is a principal series, there is a basis $(u_j)_j=(u_{j,i})_{i,j}$ of $\prod_i \pi^{C_i}$
such that functions in $\mathrm{Gen}_P(\omega_1,\omega_2)$ are of the form
$$\phi_{\pi,j}^{\text S}(p k_i \gamma) = \omega_2 \circ \mu (p)\,u_{j,i}(\sigma_{\text S}(p))$$ for $p \in P_{\text{S}}(\mathbb{A})$, $\gamma \in \Gamma_1(N)$.
In particular each $u_{j,i}$ is a $\mathrm{GL}_2$ adelic Maass form.
\end{itemize}
Now fix an integer $n>0$ coprime to $N$. Consider
$$M(n,N)=\left\{g \in G(\mathbb{A}_{fin}) \cap \mathrm{Mat}_4(\hat{\mathbb{Z}}) : g \equiv \left[
\begin{smallmatrix}
* & & * & * \\
* & 1 & * & * \\
& & * & * \\
& & & *
\end{smallmatrix}\right] \mod N,\ \mu(g) \in n \hat{\mathbb{Z}}^\times \right\}.$$
Define the {\bf $n$-th Hecke operator of level $\Gamma_1(N)$} by
$$T_{n}\phi(g) =\int_{M(n,N)} \phi(gx)\, dx.$$
Then for every standard parabolic subgroup $P$, every element $u \in \mathrm{Gen}_P$ and every $\nu \in i\mathfrak{a}_P^*$, the Eisenstein series $E(\cdot,u,\nu)$
is an eigenfunction of $T_{n}$. We shall denote the corresponding eigenvalue by $\lambda_{n}(u,\nu)$.
Then we have the following.
\begin{theorem}\label{MainTheorem}
Let $\mathbf{m}_1$, $\mathbf{m}_2$ be two pairs of non-zero integers, $t_1,t_2 \in A^0(\mathbb{R})$.
Let $h$ be a Paley--Wiener function on $\mathfrak{a}_\mathbb{C}$ and let $c$ be the constant appearing in Theorem~\ref{sphericalinversion}.
Then we have $$c(\Sigma_{\text{cusp}}+\Sigma_{\text{B}}+\Sigma_{\text{K}}+\Sigma_{\text{S}})=\frac1{\mathrm{Vol}(\overline{\Gamma_1(N)})}(K_1+K_{121}+K_{212}+K_J).$$
The expression $\Sigma_{\text{cusp}}+\Sigma_{\text{B}}+\Sigma_{\text{K}}+\Sigma_{\text{S}}$ is given by
$$\Sigma_{\text{cusp}}=\sum_{u \in \mathrm{Gen}_G(\omega)}
h(\nu_u)\lambda_{n}(u)\mathcal{W}_{\psi}(u)(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(u)}(t_2t_{\mathbf{m}_2}^{-1}),$$
\begin{align*}
\Sigma_{\text{B}}=\frac18 \sum_{\omega_1\omega_2\omega_3^2=\omega}\sum_{u \in \mathrm{Gen}_B(\omega_1,\omega_2,\omega_3)}&
\int_{i\mathfrak{a}^*}h(\nu)\lambda_{n}(u,\nu)\\
&\times \mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})\,d\nu,
\end{align*}
\begin{align*}
\Sigma_{\text{K}}=\frac12 \sum_{\omega_1\omega_2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{K}}}(\omega_1,\omega_2)}&
\int_{i\mathfrak{a}_{\text{K}}^*}h(\nu+\nu_{\text{K}}(s_u))\lambda_{n}(u,\nu)\\
&\times \mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})\,d\nu,
\end{align*}
\begin{align*}
\Sigma_{\text{S}}=\frac12 \sum_{\omega_1\omega_2^2=\omega}\sum_{u \in \mathrm{Gen}_{P_{\text{S}}}(\omega_1,\omega_2)} &
\int_{i\mathfrak{a}_{\text{S}}^*}h(\nu+\nu_{\text{S}}(s_u))\lambda_{n}(u,\nu)\\
& \times \mathcal{W}_{\psi}(E(\cdot, u,\nu ))(t_1t_{\mathbf{m}_1}^{-1})\overline{\mathcal{W}_{\psi}(E(\cdot,u,\nu))}(t_2t_{\mathbf{m}_2}^{-1})\,d\nu,
\end{align*}
where $\nu_u$ (resp.\ $s_u$) is the spectral parameter of the representation of $GSp_4(\mathbb{R})$ (resp.\ $GL_2(\mathbb{R})$) attached to $u$,
and $\nu_{\text{K}}$ and $\nu_{\text{S}}$ are given by Propositions~\ref{KlingenSpectralParameter} and~\ref{SiegelSpectralParameter}.
On the right hand side,
\begin{itemize}
\item $K_1$ is non-zero only if there is an integer $t$ dividing $n$
with $\frac{n}t=\frac{\mathbf{m}_{1,2}}{\mathbf{m}_{2,2}}t$ and such that $s=\frac{\mathbf{m}_{2,1}}{\mathbf{m}_{1,1}}t$ is also an integer dividing $n$, in which case,
setting $d=\gcd(s,\frac{n}s,t,\frac{n}t)$ and
\begin{align*}
T(n,\mathbf{m}_1,\mathbf{m}_2)=&d \times \overline{\omega_N(s)}\, n^{-\frac12} (\mathbf{m}_{1,1}\mathbf{m}_{2,1})^{-2}|\mathbf{m}_{1,2}\mathbf{m}_{2,2}|^{-\frac32}
\times S\left(\mathbf{m}_{1,1}\frac{n}{\gcd(t,\frac{n}s)},\mathbf{m}_{1,2}t,d,n\right)
\end{align*} we have
$$K_1=T(n,\mathbf{m}_1,\mathbf{m}_2)
\int_{\mathfrak{a}^*}h(-i\nu)W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}})W(-i\nu,t_{\mathbf{m}_2}^{-1} t_2,\overline{\psi_{\mathbf{1}}})
\frac{d\nu}{c(i\nu)c(-i\nu)}.$$
\item The contribution of the long Weyl element is $$K_J=\sum_{\substack{N \mid s\\N^2 \mid k}}\mathrm{Kloos}_J(\mathbf{m}_1,\mathbf{m}_2,s,k,n)I_J(h)\left(\frac{k}{s^2},\frac{n}k\right),$$
\item The contribution of $s_1s_2s_1$ is non-zero only if $n\frac{\mathbf{m}_{1,2}}{\mathbf{m}_{2,2}}=b^2$ for some rational number $b$, in which case it is given by
$$K_{121}=\mathbf{m}_{2,2}\sum_{N \mid kb}\mathrm{Kloos}_{121}(\mathbf{m}_1,\mathbf{m}_2,Nk,bNk,n)I_{121}(h)\left(\frac{n}{Nkb},\frac{b}{Nk}\right),$$
\item The contribution of $s_2s_1s_2$ is given by
$$K_{212}=\mathbf{m}_{2,1}\sum_{\substack{\mathbf{m}_{2,1}N \mid s\mathbf{m}_{1,1} \\ \mathbf{m}_{2,1}N^2 \mid s^2\mathbf{m}_{1,1}}}\mathrm{Kloos}_{212}\left(\mathbf{m}_1,\mathbf{m}_2,-s\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}},-s^2\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}},n\right)I_{212}(h)\left(-\frac{\mathbf{m}_{1,1}}{\mathbf{m}_{2,1}},-\frac{n}{s^2}\frac{\mathbf{m}_{2,1}}{\mathbf{m}_{1,1}}\right),$$
\end{itemize}
where we have defined $I_\sigma(h)$
as the integral
\begin{align*}
I_\sigma(h)(d_1,d_2)&=\int_{\mathfrak{a}^*}h(-i\nu)W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}})\\
&\times W\left(-i\nu,t_{\mathbf{m}_1}^{-1}\sigma \diag{d_1}{1}{d_2}{d_1d_2}t_{\mathbf{m}_2}u_1 t_{\mathbf{m}_2}^{-1} t_2,\overline{\psi_{\mathbf{1}}}\right)
\frac{d\nu}{c(i\nu)c(-i\nu)}
{\overline{\psi_{\mathbf{1}}(u_1)}}\,du_1.
\end{align*}
Moreover, if Conjecture~1 is true then we have
\begin{align*}
I_\sigma(h)(d_1,d_2)=\int_{\mathfrak{a}^*}h(-i\nu)
&K_\sigma\left(-i\nu,t_{\mathbf{m}_1}^{-1}\sigma \diag{d_1}{1}{d_2}{d_1d_2} t_{\mathbf{m}_2} \sigma^{-1},{\psi_{\mathbf{1}}}\right)\\
&\times W(-i\nu, t_{\mathbf{m}_2}^{-1} t_2,{\psi_{\mathbf{1}}})
W(i\nu, t_{\mathbf{m}_1}^{-1}t_1, \psi_{\mathbf{1}})\frac{d\nu}{c(i\nu)c(-i\nu)},
\end{align*}
where the generalised Bessel functions $K_\sigma$ have been defined in \S~\ref{ArchInt}.
\end{theorem}
\begin{bibdiv}
\begin{biblist}
\bib{ArthurIntro}{article}{
author={Arthur, James},
title={An introduction to the trace formula},
conference={
title={Harmonic analysis, the trace formula, and Shimura varieties},
},
book={
series={Clay Math. Proc.},
volume={4},
publisher={Amer. Math. Soc., Providence, RI},
},
date={2005},
pages={1--263},
review={\MR{2192011}},
}
\bib{ArthurSpectralExpansion}{article}{
author={Arthur, James},
title={A trace formula for reductive groups. I. Terms associated to
classes in $G({\bf Q})$},
journal={Duke Math. J.},
volume={45},
date={1978},
number={4},
pages={911--952},
issn={0012-7094},
review={\MR{518111}},
}
\bib{Buttcane}{article}{
author={Buttcane, Jack},
title={Bessel functions outside of $GL(2)$},
status={talk},
year={2020},
eprint={https://old.renyi.hu/seminars/ocaf2020/20200605_Buttcane.pdf}
}
\bib{GelbartKnapp}{article}{
author={Gelbart, S. S.},
author={Knapp, A. W.},
title={$L$-indistinguishability and $R$ groups for the special linear
group},
journal={Adv. in Math.},
volume={43},
date={1982},
number={2},
pages={101--121},
issn={0001-8708},
review={\MR{644669}},
doi={10.1016/0001-8708(82)90030-5},
}
\bib{Helgason}{book}{
author = {Helgason, Sigurdur},
title = {Groups and Geometric Analysis: Integral Geometry, Invariant Differential Operators, and Spherical Functions},
series = {Mathematical Surveys and Monographs},
publisher = {Academic Press},
year = {1984},
}
\bib{Ishii}{article}{
author={Ishii, Taku},
title={On principal series Whittaker functions on ${\rm Sp}(2,{\bf R})$},
journal={J. Funct. Anal.},
volume={225},
date={2005},
number={1},
pages={1--32},
issn={0022-1236},
review={\MR{2149916}},
doi={10.1016/j.jfa.2005.03.023},
}
\bib{JacquetLai}{article}{
author={Jacquet, H.},
author={Lai, K. F.},
title={A relative trace formula},
journal={Compositio Math.},
volume={54},
date={1985},
number={2},
pages={243--310},
issn={0010-437X},
review={\MR{783512}},
}
\bib{Joyner}{article}{
author={Joyner, David},
title={On the Kuznetsov-Bruggeman formula for a Hilbert modular surface
having one cusp},
journal={Math. Z.},
volume={203},
date={1990},
number={1},
pages={59--104},
issn={0025-5874},
review={\MR{1030708}},
doi={10.1007/BF02570723},
}
\bib{Kim}{article}{
author={Kim, Henry H.},
title={The residual spectrum of ${\rm Sp}_4$},
journal={Compositio Math.},
volume={99},
date={1995},
number={2},
pages={129--151},
issn={0010-437X},
review={\MR{1351833}},
}
\bib{KL}{article}{
author={Knightly, A.},
author={Li, C.},
title={Kuznetsov's trace formula and the Hecke eigenvalues of Maass
forms},
journal={Mem. Amer. Math. Soc.},
volume={224},
date={2013},
number={1055},
pages={vi+132},
issn={0065-9266},
isbn={978-0-8218-8744-8},
review={\MR{3099744}},
doi={10.1090/S0065-9266-2012-00673-3},
}
\bib{MiyOda}{article}{
author={Miyazaki, Takuya},
author={Oda, Takayuki},
title={Principal series Whittaker functions on ${\rm Sp}(2;{\bf R})$. II},
journal={Tohoku Math. J. (2)},
volume={50},
date={1998},
number={2},
pages={243--260},
issn={0040-8735},
review={\MR{1622070}},
doi={10.2748/tmj/1178224977},
}
\bib{Muic}{article}{
author={Mui\'{c}, Goran},
title={Intertwining operators and composition series of generalized and
degenerate principal series for ${\rm Sp}(4,\mathbb R)$},
journal={Glas. Mat. Ser. III},
volume={44(64)},
date={2009},
number={2},
pages={349--399},
issn={0017-095X},
review={\MR{2587308}},
doi={10.3336/gm.44.2.08},
}
\bib{Niwa}{article}{
author={Niwa, Shinji},
title={Commutation relations of differential operators and Whittaker
functions on ${\rm Sp}_2(\mathbf R)$},
journal={Proc. Japan Acad. Ser. A Math. Sci.},
volume={71},
date={1995},
number={8},
pages={189--191},
issn={0386-2194},
review={\MR{1362994}},
}
\bib{Pitale}{book}{
author={Pitale, Ameya},
title={Siegel modular forms},
series={Lecture Notes in Mathematics},
volume={2240},
note={A classical and representation-theoretic approach},
publisher={Springer, Cham},
date={2019},
pages={ix+138},
isbn={978-3-030-15674-9},
isbn={978-3-030-15675-6},
review={\MR{3931351}},
doi={10.1007/978-3-030-15675-6},
}
\bib{RS1}{article}{
author={Roberts, Brooks},
author={Schmidt, Ralf},
title={Some results on Bessel functionals for ${\rm GSp}(4)$},
journal={Doc. Math.},
volume={21},
date={2016},
pages={467--553},
issn={1431-0635},
review={\MR{3522249}},
}
\bib{RS2}{book}{
author={Roberts, Brooks},
author={Schmidt, Ralf},
title={Local newforms for GSp(4)},
series={Lecture Notes in Mathematics},
volume={1918},
publisher={Springer, Berlin},
date={2007},
pages={viii+307},
isbn={978-3-540-73323-2},
review={\MR{2344630}},
doi={10.1007/978-3-540-73324-9},
}
\bib{SHMKloosterman}{article}{
author={Man, Siu Hang},
title={Symplectic Kloosterman Sums and Poincar\'e Series},
year={2020},
eprint={https://arxiv.org/abs/2006.03036}
}
\bib{SHMKuznetsov}{article}{
author={Man, Siu Hang},
title={A Density Theorem for $Sp(4)$},
year={2021},
eprint={https://arxiv.org/abs/2101.09602}
}
\bib{thesis}{thesis}{
author={VerNooy, Colin},
title={$K$-types and Invariants for the Representations of $GSp(4,\mathbb{R})$},
date={2019},
eprint={https://hdl.handle.net/11244/321122},
type={phd}
}
\bib{Vogan}{article}{
author={Vogan, David A., Jr.},
title={Gel\cprime fand-Kirillov dimension for Harish-Chandra modules},
journal={Invent. Math.},
volume={48},
date={1978},
number={1},
pages={75--98},
issn={0020-9910},
review={\MR{506503}},
doi={10.1007/BF01390063},
}
\bib{Wallach}{book}{
author={Wallach, Nolan R.},
title={Real reductive groups. II},
series={Pure and Applied Mathematics},
volume={132},
publisher={Academic Press, Inc., Boston, MA},
date={1992},
pages={xiv+454},
isbn={0-12-732961-7},
review={\MR{1170566}},
}
{\bf{e}}nd{biblist}
{\bf{e}}nd{bibdiv}
{\bf{e}}nd{document} |
\begin{document}
\title{Changing the past by forgetting}
\author{Saibal Mitra}
\date{\today}
\maketitle
\begin{abstract}
Assuming the validity of the Many Worlds Interpretation (MWI) of quantum mechanics, we show that memory erasure can cause one to end up in a different sector of the multiverse where the contents of the erased memory are different.
\end{abstract}
\section{Introduction}
As pointed out by David Deutsch \cite{deutsch}, it is possible to experimentally disprove all collapse interpretations of quantum mechanics if one could make measurements in a reversible way. Suppose an observer measures the $z$-component of a spin that is polarized in the $x$-direction. Then there exists a unitary operator that disentangles the observer from the spin, causing the observer to forget the result of the measurement. However, he would still remember having measured the $z$-component of the spin. In the MWI, the spin will be in its original state and therefore measuring the $x$-component will yield spin up with 100\% probability. In any collapse interpretation, measuring the $x$-component will yield spin up or spin down with 50\% probability.
Unfortunately, such reversible measurements are not possible with current technology. In the MWI, it is still the case that simply dumping the information about the result of the measurement in the environment will cause the observer to become disentangled from the spin, but now the spin is entangled with the rest of the universe. While subsequent measurements cannot distinguish the MWI from collapse interpretations, in the MWI the outcome of measuring the $z$-component again is not predetermined. In this article, we show how machine observers can benefit from resetting their memories to previous states. First, we state the assumptions that define what we mean by the MWI in this article.
\section{Assumptions}
The conclusions of this article depend only on the following assumptions about time evolution and the physical nature of the subjective experiences of observers.
\begin{assumption}
The time evolution of the universe, including any observers, is always of form
\begin{equation}
\ket{\psi\haak{t_{2}}}=\hat{U}\haak{t_{2},t_{1}}\ket{\psi\haak{t_{1}}},
\end{equation}
where $\ket{\psi\haak{t}}$ represents the state of the universe at time $t$ and $\hat{U}$ is some unitary operator.
\end{assumption}
\begin{assumption}
The states any given observer can subjectively find herself in can be described classically using a finite amount of information, similar to specifying the computational state of a classical computer by specifying the state of each of the individual bits to be either zero or one.
\end{assumption}
To describe the full quantum mechanical state vector of an observer will, of course, require a huge amount of additional information. In this article we will take the view that whatever the exact quantum mechanical state vector is, the possible states the observer's consciousness can be in, can be identified with some classically describable macrostates of the observer. We will consider the additional information needed to specify the exact quantum state of the observer as part of the rest of the universe.
Denoting the macrostates an observer can find herself in formally as orthonormal ket vectors $\ket{O_{n}}$, the generic form of a quantum state of the universe containing the observer can be written as:
\begin{equation}\label{formm}
\ket{\psi} = \sum_{n,m}a_{n,m}\ket{O_{n}}\ket{\phi_{m}},
\end{equation}
where the $\ket{\phi_{m}}$ form a complete set of orthonormal states describing the rest of the universe. If we sum \eqref{formm} over $m$, we can write it formally as
\begin{equation}\label{form}
\ket{\psi} = \sum_{n}\ket{O_{n}}\ket{U_{n}}.
\end{equation}
The $\ket{U_{n}}$ can then be arbitrary states describing the rest of the universe.
If an initial state
\begin{equation}
\ket{\psi_{\text{Initial}}}=\ket{O_{1}}\ket{U_{1}}
\end{equation}
evolves in time to become a superposition of the form
\begin{equation}
\ket{\psi_{\text{A while later}}} = \ket{O_{2}}\ket{U_{2}} + \ket{O_{3}}\ket{U_{3}},
\end{equation}
then we interpret this as two parallel universes, one containing the observer in the state $\ket{O_{2}}$, the other containing the observer in the state $\ket{O_{3}}$.
\begin{assumption}
Born rule: In a state described by a superposition of the form
\begin{equation}
\sum_{k}\ket{O_{k}}\ket{U_{k}},
\end{equation}
the squared norms of the states $\ket{U_{k}}$ give the relative probabilities for the observer to find herself in the states $\ket{O_{k}}$.
\end{assumption}
\section{Memory erasure}
Consider a future machine observer who backs up its memory every day. It will reset its memory to the last backed up state when it learns about an impending disaster. The memory is also reset pseudo-randomly from macrostates that do not contain any information about a disaster. The fraction of such macrostates from which a resetting is done in the next clock cycle is $q$. Let's focus on the sector of the multiverse where at the time of a memory backup the observer is in some macrostate $\ket{O_j}$. The normalized state of the universe can be formally denoted as:
\begin{equation}\label{psi0}
\ket{\psi\haak{0}}=\ket{O_{j}}\ket{U_{j}}.
\end{equation}
This state then evolves, by some clock cycle at time $t$, into a state of the general form:
\begin{equation}\label{psit}
\ket{\psi\haak{t;j}}=\sum_{k}\ket{O_{k}}\ket{\tilde{U}_{k;j}}.
\end{equation}
Here the $j$ indicates that the information contained in the macrostate $\ket{O_{j}}$ has been stored.
In the superposition described by \eqref{psit}, the observer in each of the macrostates $\ket{O_{k}}$ knows whether or not it will reset its memory in the next clock cycle. We split the summation in \eqref{psit} into three parts:
\begin{equation}
\ket{\psi\haak{t;j}}=\sum_{k_{1}}\ket{O_{k_{1}}}\ket{\tilde{U}_{k_{1};j}}+\sum_{k_{2}}\ket{O_{k_{2}}}\ket{\tilde{U}_{k_{2};j}} + \sum_{k_{3}}\ket{O_{k_{3}}}\ket{\tilde{U}_{k_{3};j}},
\end{equation}
where the summation over $k_{1}$ is over those values for which $\ket{O_{k_{1}}}$ will reset its memory because of a disaster, the summation over $k_{2}$ is over the states for which memory resetting is triggered by the pseudo-random generator, and the summation over $k_{3}$ is over states in which no memory resetting will happen at the next clock cycle.
Suppose that the probability for the observer to learn about an impending disaster during a clock cycle is $p$. The squared norm of the summation over $k_{1}$ is then $p$ and the squared norm of the sum of the summations over $k_{2}$ and $k_{3}$ is $\haak{1-p}$. Since a fraction $q$ of the macrostates that do not contain any information about a disaster will undergo a pseudo-random memory resetting, the squared norm of the summation over $k_{2}$ is $\haak{1-p}q$. The probability for the observer to reset its memory is thus:
\begin{equation}\label{reset}
P_{\text{reset}} = p + \haak{1-p}q.
\end{equation}
The normalized wavefunction describing the sector of the multiverse containing the observer with its reset memory is given by:
\begin{equation}\label{resetst}
\ket{\psi\haak{t;j}}=\frac{1}{\sqrt{P_{\text{reset}}}}\ket{O_{j}}\rhaak{\sum_{k_{1}}\ket{\tilde{U}'_{k_{1};j}}+\sum_{k_{2}}\ket{\tilde{U}'_{k_{2};j}}},
\end{equation}
where the primes indicate that due to the dumping of the memory, the state of the rest of the universe has been modified. Since this process is unitary, it follows that upon making new observations, the observer with the reset memory will learn about a disaster coming its way with a probability of:
\begin{equation}\label{dis}
P_{\text{dis}} = \frac{p}{p + \haak{1-p}q}.
\end{equation}
The probability for the observer before memory resetting to ultimately learn of the disaster after the memory resetting is $P_{\text{reset}}P_{\text{dis}} = p$, so the memory resetting doesn't appear to have had any effect. Moreover, the above formulas for $P_{\text{reset}}$ and $P_{\text{dis}}$ are also valid in a purely classical setting. However, while one can give the probabilities a trivial classical interpretation, according to quantum physics, the outcome of the measurement by the observer of the state described by \eqref{resetst} is not pre-determined.
The only way to escape this conclusion is to assume that the stored state to which the memory resetting is done, is always different for the sectors in which a disaster has happened and in which it has not happened. This means that one has to assume that the state \eqref{psi0} can only evolve into a state where a disaster is certain to happen at the clock cycle at time $t$ or to a state in which it is certain that no disaster will happen at that time.
It is certainly the case that the different sectors of the multiverse where a disaster strikes and where it doesn't strike in the future will usually already be very different some time before. Therefore, it is reasonable to expect that, due to decoherence, the information about the coming disaster has already affected the exact wavefunction of the observer. Nevertheless, this doesn't necessarily have to happen and even if it did, it is unreasonable to assume that the macrostate of the observer must then be affected, as that means that the observer's state of consciousness would necessarily have to differ in the two sectors.
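Before turning to the discussion, here is a small numerical sketch (ours; the values of $p$ and $q$ are purely illustrative) of the classical reading of \eqref{reset} and \eqref{dis}: a Monte Carlo run of the resetting protocol reproduces $P_{\text{reset}}$ as the frequency of resets and $P_{\text{dis}}$ as the fraction of resets triggered by a disaster, and confirms the identity $P_{\text{reset}}P_{\text{dis}}=p$ noted above.
\begin{verbatim}
# Sketch: classical check of P_reset = p + (1-p)q and P_dis = p/(p + (1-p)q).
import random

p, q = 0.01, 0.5                           # illustrative values only
P_reset = p + (1 - p) * q
P_dis = p / P_reset
print(P_reset, P_dis, P_reset * P_dis)     # the last number equals p

random.seed(0)
trials, resets, disaster_resets = 10**6, 0, 0
for _ in range(trials):
    disaster = random.random() < p                 # disaster learned this cycle
    reset = disaster or (random.random() < q)      # reset: disaster or pseudo-random
    if reset:
        resets += 1
        disaster_resets += disaster
print(resets / trials, disaster_resets / resets)   # ~ P_reset, ~ P_dis
\end{verbatim}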
\section{Discussion}
Assuming the validity of the MWI, we are forced to accept that by resetting the memory to a previous state, the reason why the memory was reset is no longer determined. In the limit $p\ll q$ the probabilities for memory resetting \eqref{reset} and for having to face a disaster after memory resetting \eqref{dis}, reduce to $P_{\text{reset}}=q$ and $P_{\text{dis}}=p/q$. The observer facing disaster can thus be almost sure to escape the disaster by doing a memory resetting. The observer who learns of a disaster after a memory resetting should think that he would not have faced the disaster had he not reset his memory.
Of course, none of this is directly verifiable to the observer as the observer only has access to information present in the sector of the multiverse where he resides. Although we cannot reset our memories, we often can choose to test or not to test for possible disasters. If the MWI is true then there is no benefit to test for impending disasters against which nothing can be done. After learning about the impending disaster all we can do is sit and wait for the disaster to happen, while we could be almost sure that we would not have to endure the coming disaster had we not tested for it in the first place. It was the detection of the impending disaster that trapped us in the wrong sector of the multiverse.
But perhaps when we forget something, this is equivalent to the memory resetting scenario discussed in this article. This depends on whether or not the lost memory has affected our consciousness. So, if we watch a recording of a soccer match played a long time ago, the outcome is undetermined, not just if we are watching the match for the first time and never read about the outcome, but perhaps also if we have seen the match before and forgotten the outcome.
\end{document} |
\begin{document}
\title{On geometric properties of sets of positive reach in ${\bf E}^d$}
\author{Andrea Colesanti and Paolo Manselli}
\date{}
\maketitle
\begin{abstract}
\noindent
Some geometric facts concerning sets with positive
reach in ${\bf E}^d$ are proved. For $x_1$ and $x_2$ in ${\bf E}^d$ and $R>0$ let us denote by ${\mathfrak H}(x_1,x_2,R)$ the
intersection of all closed balls of radius $R$ containing $x_1$ and
$x_2$. We prove that ${\rm reach}(K)\ge R$ if and only if for every $x_1,x_2\in K$ such that $\Vert
x_1-x_2\Vert< 2R$, ${\mathfrak H}(x_1,x_2,R)\cap K$ is connected.
A corollary is that if ${\rm reach}(K)\ge R>0$ and $D$ is a closed ball of radius less than or equal to
$R$ (intersecting $K$) then ${\rm reach}(K\cap D)\ge R$. For $A\subset{\bf E}^d$ and
$R>0$ we say that $A$ admits an {\em $R$-hull} if there exists $\hat A\supset A$, with
${\rm reach}(\hat A)\ge R$ and such that $\hat A$ is the minimal set (with respect to inclusion)
having these properties. A necessary and sufficient condition for a set
$A\subset{\bf E}^d$ to admit an $R$-hull is provided.
\end{abstract}
\noindent
{\em AMS 2000 Subject Classification:} 52A30.
\section{Introduction}
Sets of positive reach were introduced by Federer in \cite{Federer}. This
class of sets can be viewed as an extension of that of convex sets. It is well
known that every point $x$ external to a closed convex set $C$ in ${\bf E}^d$ admits a
unique {\it projection} on $C$, i.e. a point which minimizes the distance from
$x$ among all points in $C$. Sets of positive reach are those for which the
projection is unique for the points of a parallel neighborhood of the set (and
not necessarily for all external points).
Along with their definition, Federer provided the main fundamental
properties of sets of positive reach. Namely, the validity of global and
local Steiner formulas and consequently the existence of curvature measures
and many relevant properties of such measures.
The study of properties of sets with positive reach has been continued by
several authors and along various directions. Let us mention the contributions
given by Z\"ahle \cite{Zhale} and Rataj and Z\"ahle \cite{Rataj-Zhale} on integral representation of curvature
measures, the results by Hug \cite{Hug}, and Hug and the first author
\cite{Colesanti-Hug} on singular points of sets with positive reach and the
extensions of Steiner type formulas by Hug, Last and Weil
\cite{Hug-Last-Weil}. Moreover, in \cite{Fu} Fu proved several interesting
connections between sets of positive reach and semi-convex functions.
As stated by Federer, closed convex sets represent a limit case of sets
of positive reach, as the reach tends to $\infty$. The following question was
at the origin of the research carried out in this paper. Is it possible to see
(at least some of) the geometric properties of convex sets as limit case of
suitable geometric properties of sets of positive reach?
The first property that we analyse is the very definition of convex set: if
$x_1$ and $x_2$ belong to a convex set $C$, then the segment joining them
is entirely contained in $C$. In \S 3 we prove a
possible counterpart of this fact for sets of positive reach. For two points
$x_1$ and $x_2$ in ${\bf E}^d$ and $R>0$ we denote by ${\mathfrak H}(x_1,x_2,R)$ the
intersection of all closed balls of radius $R$ containing $x_1$ and
$x_2$. The set ${\mathfrak H}(x_1,x_2,R)$ is a rugby ball-shaped set with cusps in $x_1$ and
$x_2$; moreover for $R\to\infty$, ${\mathfrak H}(x_1,x_2,R)$ tends to the segment
with endpoints $x_1$ and $x_2$. Theorem \ref{T2.1} states that
${\rm reach}(K)\ge R$ if and only if for every $x_1,x_2\in K$ such that $\Vert
x_1-x_2\Vert< 2R$, ${\mathfrak H}(x_1,x_2,R)\cap K$ is connected. The proof of this
result is geometric and does not require sophisticated techniques.
As a corollary (see Theorem \ref{T2.2}) we have the following fact: if
${\rm reach}(K)\ge R>0$ and $D$ is a closed ball of radius less than or equal to
$R$, intersecting $K$, then ${\rm reach}(K\cap D)\ge R$. The latter property can be seen as a
counterpart, for sets with positive reach, of the well-known fact that the
intersection of a convex set with a half-space is convex (if it is non-empty).
Next, we consider the following problem: given a set $A$ and a number $R>0$
is it possible to find the minimal set (with respect to inclusion) containing $A$ and having
reach greater than or equal to $R$? The corresponding problem in the context of
convexity ($R=\infty$) has an affirmative answer: every set admits a least convex cover,
i.e. its convex hull. We will see through simple examples that this is not the
case for arbitrary $A$ and $R$ and we will find necessary and sufficient conditions so that
$A$ admits a minimal cover of reach greater than or equal to $R$.
The paper is organized as follows: in \S 2 we introduce some notations; in \S 3 we
prove Theorem \ref{T2.1} and some related results; in \S 4 we deal with the
least cover with prescribed reach of a given set.
\section{Notations}
Let ${\bf E}^d$ be the $d$-dimensional Euclidean space; for $a,b\in{\bf E}^d$,
let $\Vert b-a\Vert$ be their distance and let $(\cdot,\cdot)$ denote
the usual scalar product.
If $A$ is a subset of ${\bf E}^d$, then ${\rm int}(A)$, ${\rm cl}(A)$ and $A^c$ will
denote the interior, the closure and the complement set of $A$,
respectively. For $x_0\in{\bf E}^d$ and $r>0$ we set
$$
B(x_0,r)=\{x\in{\bf E}^d\,:\,\Vert x-x_0\Vert<r\}\,,\quad{\rm and}\quad
D(x_0,r)={\rm cl}(B(x_0,r))\,.
$$
For $A\subset{\bf E}^d$ and $a\in{\bf E}^d$, the distance of $a$ from $A$ is given by
$$
\delta_A(a)=\inf\{\Vert a-x\Vert\,:\, x\in A\}\,.
$$
Let us recall the definition of {\em sets of positive reach}, introduced in
\cite{Federer}. Let $K\subset{\bf E}^d$ be closed; let ${\rm Unp}(K)$ be the set
of points having a unique projection (or foot point) on $K$:
$$
{\rm Unp}(K):=\{a\in{\bf E}^d\,:\,\exists!\,x\in K\;{\rm s.t.}\;\delta_K(a)=\Vert a-x\Vert\}\,.
$$
This definition implies the existence of a projection mapping
$\xi_K\,:{\rm Unp}(K)\to K$ which assigns to $x\in{\rm Unp}(K)$ the unique
point $\xi_K(x)\in K$ such that $\delta_K(x)=\Vert x-\xi_K(x)\Vert$. For a
point $a\in K$ we set:
$$
{\rm reach}(K,a)=\sup\{r>0\,:\, B(a,r)\subset{\rm Unp}(K)\}\,.
$$
The reach of $K$ is then defined by:
$$
{\rm reach}(K)=\inf_{a\in K}{\rm reach}(K,a)\,,
$$
and $K$ is said to be of positive reach if ${\rm reach}(K)>0$.
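For instance (these elementary examples are ours, but follow at once from the definitions): a closed convex set has infinite reach, since every point of ${\bf E}^d$ has a unique projection on it; a sphere of radius $r$ has reach equal to $r$, since its centre is the only point with more than one foot point; and a two-point set $\{a,b\}$ has reach $\Vert a-b\Vert/2$, since the points with two foot points are exactly those of the bisecting hyperplane, whose distance from $\{a,b\}$ is $\Vert a-b\Vert/2$.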
If $K\subset{\bf E}^d$ is compact and $a\in K$, the tangent and the normal spaces to
$K$ at $a$ are:
$$
{\rm Tan}(K,a)=\{0\}\cup
\left\{u\,:\,\forall\,\epsilon>0\;\exists\,b\in K\,{\rm s.t.}\,
0<\Vert b-a\Vert<\epsilon\,,\,
\left\Vert\frac{b-a}{\Vert b-a\Vert}-\frac{u}{\Vert u\Vert}\right\Vert<\epsilon\right\}\,,
$$
$$
{\rm Nor}(K,a)=\{v\,:\,(u,v)\le 0\,,\,\forall u\in{\rm Tan}(K,a)\}\,.
$$
Notice in particular that ${\rm Nor}(K,a)$ is a closed convex cone. Let
${\rm reach}(K)>0$; for $a\in K$ we set:
$$
P_a=\{v\,:\,\xi_K(a+v)=a\}\,,\quad
Q_a=\{v\,:\,\delta_K(a+v)=\Vert v\Vert\}\,.
$$
\section{Characterization and geometrical properties of sets with positive reach }
The following definition will be useful later.
\begin{Def}\label{D2.1} Let $a,b\in{\bf E}^d$, $a\ne b$, $R>0$, $\Vert a-b\Vert<2R$. Let
$$
{\cal D}(a,b,R)=\{D(x,R)\,:\,\Vert a-x\Vert,\Vert b-x\Vert\le R\}\,.
$$
We set
$$
{\mathfrak H}(a,b,R)=\bigcap_{D\in{\cal D}(a,b,R)} D\,.
$$
\end{Def}
It is clear from the definition that ${\mathfrak H}(a,b,R)$ is a compact convex set,
containing $a$ and $b$. The boundary of ${\mathfrak H}(a,b,R)$ is
obtained by rotating an arc of a circle of radius $R$ joining $a$ and $b$, about the
line through $a$ and $b$.
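In the planar case this description of the boundary (two circular arcs of radius $R$ through $a$ and $b$) makes membership in ${\mathfrak H}(a,b,R)$ easy to test numerically. The following sketch is our own illustration and is not used anywhere in the proofs: in ${\bf E}^2$, a point belongs to ${\mathfrak H}(a,b,R)$ exactly when it lies in both closed discs of radius $R$ whose boundary circles pass through $a$ and $b$.
\begin{verbatim}
# Sketch (planar case only): membership test for H(a,b,R).
import math

def in_H(x, a, b, R):
    d = math.dist(a, b)
    assert 0 < d < 2 * R
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2    # midpoint of a and b
    ux, uy = (b[1] - a[1]) / d, (a[0] - b[0]) / d    # unit normal to the segment
    h = math.sqrt(R * R - (d / 2) ** 2)              # offset of the two centres
    c1 = (mx + h * ux, my + h * uy)
    c2 = (mx - h * ux, my - h * uy)
    return math.dist(x, c1) <= R and math.dist(x, c2) <= R

a, b, R = (0.0, 0.0), (1.0, 0.0), 1.0
print(in_H((0.5, 0.1), a, b, R))   # True: close to the segment [a,b]
print(in_H((0.5, 0.2), a, b, R))   # False: outside the lens
\end{verbatim}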
\begin{lm}\label{L2.3} Let $a,b\in{\bf E}^d$ be such that $0<\Vert b-a\Vert< 2R$
where $R>0$. If $c,d\in{\mathfrak H}(a,b,R)$, then ${\mathfrak H}(c,d,R)\subset{\mathfrak H}(a,b,R)$.
\end{lm}
\noindent
\begin{dimo} If $D\in{\cal D}(a,b,R)$, then $c,d\in D$ so that $D\in{\cal D}(c,d,R)$.
The conclusion follows from Definition \ref{D2.1}.
\end{dimo}
A set is convex if and only if given any two points belonging to it, it contains
the line segment joining them.
In this section we prove (see Theorem \ref{T2.1})
a characterization of sets of positive reach that somehow
resembles the above characterization of convex sets.
The proof of this result requires various lemmas.
The next proposition is Theorem 4.8 (7) of \cite{Federer}.
\begin{prop}\label{L2.2} Let $K\subset{\bf E}^d$ be closed, $x\in{\rm Unp}(K)$ and
${\rm reach}(K,\xi_K(x))>0$. Then, for every $b\in K$
\begin{equation}
\label{E2.2}
(x-\xi_K(x),\xi_K(x)-b)\ge-
\frac{\Vert\xi_K(x)-b\Vert^2\,\Vert x-\xi_K(x)\Vert}{2\,{\rm reach}(K,\xi_K(x))}\,.
\end{equation}
\end{prop}
Let $R>0$ and $a,b\in{\bf E}^d$ be such that
$0<\Vert a-b\Vert<2R$. We define the cone
$$
{\cal C}(a,b,R)=\left\{v\ne0 \,:\,\left(\frac{v}{\Vert v\Vert},\frac{b-a}{\Vert
b-a\Vert}\right)>\frac{\Vert b-a\Vert}{2R}\right\}\,.
$$
A geometric version of the above proposition follows.
\begin{coro}\label{C2.1} Let $K$ be a closed subset of ${\bf E}^d$ such that
${\rm reach}(K)\ge R>0$. Let $x\in{\rm Unp}(K)\setminus K$, $a=\xi_K(x)\in\partial K$ and $b\in K$
such that $0<\Vert a-b\Vert<2R$. Then
$$
x-a\notin{\cal C}(a,b,R)\,.
$$
\end{coro}
We proceed with some geometric considerations in the plane.
Given $v$ and $w$ vectors in ${\bf E}^2$, $v,w\ne0$, we set
$$
{\cal S}(v,w)=\{z\,:\,z=tv+\tau w\,,\,t,\tau> 0\}\,.
$$
\begin{oss}\label{O2.1} Let $R>0$ and $z_1,z_2,z_3,z_4\in{\bf E}^2$ be such that
$$
\Vert z_1-z_2\Vert=\Vert z_2-z_3\Vert=
\Vert z_3-z_4\Vert=\Vert z_4-z_1\Vert=R\,,\quad 0<\Vert z_1-z_3\Vert<2R\,.
$$
We have
$$
{\cal C}(z_1,z_3,R)={\cal S}(z_2-z_1,z_4-z_1)\,.
$$
\end{oss}
\begin{lm}\label{L2.5} Let $R>0$, $b_1,b_2\in{\bf E}^2$ with $0<\Vert b_1-b_2\Vert<2R$,
$\Gamma_j=\partial B(b_j,R)$, $j=1,2$, $b_3,b_4\in{\bf E}^2$ such that
$\{b_3,b_4\}=\Gamma_1\cap\Gamma_2$. Let
\begin{enumerate}
\item[(i)] $\Sigma\subset\Gamma_1$ be the closed
arc joining $b_3$ and $b_4$ of smaller length;
\item[(ii)] $\Sigma'\subset\Gamma_1$
be the closed arc having length $\pi R$ and such that
$\Sigma\cap\Sigma'=\{b_4\}$.
\end{enumerate}
For every $a\in B(b_4,R)\setminus
D(b_3,R)$ there exist $c\in\Sigma$, $c\ne b_3$, $c\ne b_4$, and
$c'\in\Sigma'$, uniquely determined, such that
$$
\Vert b_1-c'\Vert=\Vert c'-a\Vert=
\Vert a-c\Vert=\Vert c-b_1\Vert=R\,.
$$
\end{lm}
\vskip0.5cm
\begin{center}
\includegraphics[scale=.4]{figura1}
\vskip0.5cm
{\sc Figure 1}
\end{center}
\noindent
\begin{dimo} We have $\Vert a-b_3\Vert>R$ and $\Vert a-b_4\Vert< R$.
Let us notice that $b_3$ and $b_4$ are the endpoints of $\Sigma$. By continuity, there
exists $c\in\Sigma$ such that $\Vert a-c\Vert=R$. Let $b_5$ be the endpoint
of $\Sigma'$ which does not coincide with $b_4$; we have $\Vert
b_4-b_5\Vert=2R$ and $\Vert a-b_5\Vert+\Vert a-b_4\Vert\ge\Vert
b_4-b_5\Vert=2R$; thus $\Vert a-b_5\Vert\ge2R-\Vert a-b_4\Vert> R$. By
continuity, there exists $c'\in\Sigma'$ such that $\Vert a-c'\Vert=R$. The
points $c$ and $c'$ are uniquely determined as intersection of
$\Gamma_1$ and $\partial B(a,R)$.
\end{dimo}
\begin{lm}\label{L2.6} Let $R>0$, $b_1,b_2\in{\bf E}^2$, $0<\Vert b_1-b_2\Vert<2R$, $B_i=B(b_i,R)$,
$\Gamma_i=\partial B_i$, $i=1,2$. Let $b_3,b_4$ be such that
$\{b_3,b_4\}=\Gamma_1\cap\Gamma_2$, $B_i=B(b_i,R)$,
$i=3,4$. Assume that $a\in B_3\cup B_4\setminus{\mathfrak H}(b_1,b_2,R)$ and $c_i,c'_i$ are
such that
$$
\Vert b_i-c'_i\Vert=\Vert c'_i-a\Vert=
\Vert a-c_i\Vert=\Vert c_i-b_i\Vert=R\,,
\quad{\rm for}\;i=1,2\,,
$$
and let $S_i={\cal S}(c_i-a,c'_i-a)$, for $i=1,2$. Then:
\begin{equation}
\label{E2.4}
S_1\cup S_2\supset {\cal S}(b_2-a,b_1-a)\,.
\end{equation}
In particular
\begin{equation}\label{E2.4b}
\frac{1}{2}(b_1+b_2)\in{\rm int}(S_1\cup S_2)\,.
\end{equation}
\end{lm}
\vskip0.5cm
\begin{center}
\includegraphics[scale=.4]{figura2}
\vskip0.5cm
{\sc Figure 2}
\end{center}
\noindent
\begin{dimo}
$S_1, S_2$ and ${\cal S}(b_2-a,b_1-a)$ are open convex sectors with apex at $a$;
moreover $b_i-a\in S_i$ for $i=1,2$ so that $\{b_1-a,b_2-a\}\subset S_1\cup
S_2$.
Let $\Sigma_1=\Gamma_1\cap D(b_2,R)$ and $\Sigma_2=\Gamma_2\cap D(b_1,R)$. By
Lemma \ref{L2.5} we may assume that $c_i\in\Sigma_i\setminus\{b_3,b_4\}$ for $i=1,2$. This in turn
implies $\Vert c_1-c_2\Vert<2R$ (as $c_1,c_2\in{\mathfrak H}(b_3,b_4,R)$). Hence there is a uniquely determined
$a'\ne a$ such that $\{a,a'\}=\partial B(c_1,R)\cap\partial B(c_2,R)$.
The straight line through $a$ and $a'$ bounds two open half-planes such that $b_2$ and $c_1$
(resp. $b_1$ and $c_2$) are in the same half-plane. Thus
\begin{equation}
\label{E2.5}
a'-a\in S_1\cap S_2\ne\emptyset\,.
\end{equation}
This implies that $S_1\cup S_2$ is a convex cone and,
since it contains $b_1-a$ and $b_2-a$, (\ref{E2.4}) follows.
\end{dimo}
\begin{teo}\label{T2.1} If $K\subset{\bf E}^d$ is closed then ${\rm reach}(K)\ge R>0$ if
and only if for every $b_1,b_2\in K$, $\Vert b_1-b_2\Vert<2R$, $K\cap
{\mathfrak H}(b_1,b_2,R)$ is connected.
\end{teo}
\noindent
\begin{dimo} Let us assume that ${\rm reach}(K)\ge R>0$.
By contradiction, assume that
$K':=K\cap{\mathfrak H}(b_1,b_2,R)$
is not connected; then there exist $K_1,K_2\subset K'$,
closed, such that $K'=K_1\cup K_2$ and $K_1\cap
K_2=\emptyset$. By compactness, there exist $c_i\in K_i$ for $i=1,2$
such that
$$
\rho:=\Vert c_1-c_2\Vert=\inf\{\Vert x-y\Vert\,:\,x\in K_1\,,\, y\in
K_2\}>0\,.
$$
As $c_1,c_2\in{\mathfrak H}(b_1,b_2,R)$, $\rho\le R$. We have
$$
B(c_1,\rho)\cap B(c_2,\rho)\cap K'=\emptyset\,.
$$
On the other hand it is easy to check that
${\mathfrak H}(c_1,c_2,R)\subset[B(c_1,\rho)\cap B(c_2,\rho)]\cup\{c_1,c_2\}$. By
Lemma \ref{L2.3}, ${\mathfrak H}(c_1,c_2,R)\subset{\mathfrak H}(b_1,b_2,R)$, so that
\begin{equation}
\label{E2.6}
{\mathfrak H}(c_1,c_2,R)\cap K'=\{c_1,c_2\}\,.
\end{equation}
In particular, $c:=\displaystyle{\frac{c_1+c_2}{2}}\notin K$; as
$\delta_K(c)<R$, $c\in{\rm Unp}(K)\setminus K$. Let $c_3=\xi_K(c)\in\partial
K$. Notice that if $c_3\in{\mathfrak H}(c_1,c_2,R)$ then either $c_3=c_1$ or $c_3=c_2$ so
that $\delta_K(c)=\Vert c-c_1\Vert=\Vert c-c_2\Vert$ in contradiction with
$c\in{\rm Unp}(K)$. Consequently, $c_3\in K\setminus{\mathfrak H}(c_1,c_2,R)$.
We also observe that, for $i=1,2$,
$$
\Vert c_i-c_3\Vert\le\Vert c_i-c\Vert+\Vert c-c_3\Vert<2R
$$
as $\Vert c-c_3\Vert=\delta_K(c)<R$.
We recall the definitions of the cones:
$$
{\cal C}_i(c_3,c_i,R)=\left\{v\ne0 \,:\,\left(\frac{v}{\Vert v\Vert},\frac{c_i-c_3}{\Vert
c_i-c_3\Vert}\right)>\frac{\Vert c_i-c_3\Vert}{2R}
\right\}\,,\quad i=1,2\,.
$$
By Corollary \ref{C2.1} we have that
\begin{equation}
\label{E2.7}
c-c_3\notin{\cal C}_1\cup{\cal C}_2\,.
\end{equation}
Apply Remark \ref{O2.1} and Lemma \ref{L2.6} to the (uniquely determined) 2-dimensional plane
containing $c,c_1,c_2,c_3$ to obtain a contradiction with (\ref{E2.7}).
Vice-versa, assume that for every $b_1,b_2\in K$, $\Vert b_1-b_2\Vert<2R$, the set
$K\cap
{\mathfrak H}(b_1,b_2,R)$ is connected. If, by contradiction, ${\rm reach}(K)< R$,
then there exists $x\in K^c$ such that $\delta_K(x)=r<R$ and
$\Vert x-b_1\Vert=\Vert x-b_2\Vert=r$ for some $b_1,b_2\in K$,
$b_1\ne b_2$. As $\Vert b_1-b_2\Vert<2R$, ${\mathfrak H}(b_1,b_2,R)\cap K$ is
connected. On the other hand, $r<R$ implies that ${\mathfrak H}(b_1,b_2,R)\subset
B(x,r)\cup\{b_1,b_2\}$ so that there exists $b\in K\cap B(x,r)$ i.e. a
contradiction.
\end{dimo}
\begin{oss} If ${\rm reach}(K)\ge R$ and $b_1,b_2\in K$ are such that
$\Vert b_1-b_2\Vert=2R$, then $K\cap{\mathfrak H}(b_1,b_2,R)$ is not necessarily
connected. Any set consisting of two points at distance $2R$ is an example.
\end{oss}
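As a concrete illustration of the characterization (our example): if $K=\{a,b\}$ with $0<\Vert a-b\Vert<2R$, then ${\rm reach}(K)=\Vert a-b\Vert/2<R$ and, consistently with Theorem \ref{T2.1}, $K\cap{\mathfrak H}(a,b,R)=\{a,b\}$ is not connected.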
\begin{teo}\label{T2.2} Let $K$ be a closed set such that ${\rm reach}(K)\ge R>0$. If $D$ is a
closed ball of radius less than or equal to $R$ then ${\rm reach}(K\cap
D)\ge R$.
\end{teo}
\noindent
\begin{dimo} The argument is similar to the one used in the second part of
the proof of Theorem \ref{T2.1}. Let $a\in(K\cap D)^c$ be such that $r=\delta_{K\cap D}(a)<R$; let us show
that $a\in{\rm Unp}(K\cap D)$. Assume by contradiction that there exist
$b_1,b_2\in(K\cap D)$ such that $b_1\ne b_2$ and
$\Vert a-b_1\Vert=\Vert a-b_2\Vert=r$. In particular $\Vert b_1-b_2\Vert<2R$.
Clearly, ${\mathfrak H}(b_1,b_2,R)\subset D$; consequently, by Theorem \ref{T2.1},
${\mathfrak H}(b_1,b_2,R)\cap(K\cap D)$ is connected. Also, notice that
$$
({\mathfrak H}(b_1,b_2,R)\setminus\{b_1,b_2\})\subset B(a,r)\,.
$$
Then there exists $b'\in K\cap D$ such that $\Vert a-b'\Vert<r$, i.e. a
contradiction.
\end{dimo}
\begin{coro} If ${\rm reach}(K)\ge R>0$, $a,b\in K$, $\Vert a-b\Vert\le2R$, then
${\rm reach}(K\cap{\mathfrak H}(a,b,R))\ge R$.
\end{coro}
It is well known that, if $ K $ is a closed convex set in ${\bf E}^d$ and
$ H $ is an open half space, satisfying $ H \cap K = \emptyset, $
then $ \partial H \cap K $ is either empty or a convex subset of $
\partial H. $ Let us show that a similar property holds for sets of
reach $ \ge R>0.$
\begin{Def}\label{D2.2}
Let $S$ be a sphere of radius $ R>0 $ in ${\bf E}^d$; let $ K $ be
a closed subset of $S$. We say that $ K $ is convex in $ S $
if $ x_1 \in K$, $x_2 \in K$, $\Vert x_1-x_2\Vert< 2R $ imply
that the arc of great circle of $S$ joining $ x_1 $ and $ x_2, $ and having
smaller length, is contained in $K$.
\end{Def}
\begin{teo}\label{T2.3} Let $K$ be a closed set in ${\bf E}^d$
with ${\rm reach}(K)\ge R>0$. Let $ B $ be an open ball of radius $R$
satisfying $ B \cap K = \emptyset$. Then $ \partial B \cap K $ is
either empty or a convex subset of $ \partial B $.
\end{teo}
\noindent
\begin{dimo}
Theorem \ref{T2.2} implies that $ (B\cup \partial B) \cap K $ $ =
\partial B \cap K $ has reach $ \ge R. $ Then, by Theorem
\ref{T2.1}, if $b_1,b_2 \in K\cap \partial B$, $\Vert b_1-b_2\Vert<2R$, then $K\cap
\partial B \cap {\mathfrak H}(b_1,b_2,R)$ is connected. Now
$K\cap \partial B \cap {\mathfrak H}(b_1,b_2,R)$ is exactly the arc of great circle
of $\partial B$, joining $ b_1$ and $ b_2$ and having smaller length.
\end{dimo}
\section{On the $R$-hull of a set}
Let $A$ be a subset of ${\bf E}^d$ and let $R>0$. In this section we analyze
the problem of finding $K$ such that ${\rm reach}(K)\ge R$, $K\supset A$ and $K$ is
the {\em minimal} set (with respect to inclusion) having these properties. In
other words we look for a sort of hull of reach $R$ of $A$. Intuitively, when
$R=\infty$ we are dealing with the convex hull of $A$ which exists for
every $A$. On the other hand, for finite $R>0$ not every set $A$
admits a hull of reach $R$ (see the examples below). Our aim is to give necessary and
sufficient conditions for $A$ to have this property (see Theorems \ref{T4.1}
and \ref{T4.2}).
\begin{Def}\label{D4.1}
Let $A\subset{\bf E}^d$, $R>0$. We say that $A$ admits an $R$-hull if
there exists $\hat A\subset{\bf E}^d$ such that:
\begin{enumerate}
\item[(i)] $A\subset\hat A$;
\item[(ii)] ${\rm reach}(\hat A)\ge R$;
\item[(iii)] if ${\rm reach}(K)\ge R$ and $A\subset K$, then
$\hat A\subset K$.
\end{enumerate}
If such a set exists, we call it the $R$-hull of $A$.
\end{Def}
\noindent
{\bf Example 1.} For an arbitrary $R>0$ we may construct
an example of a set which does not admit an $R$-hull.
Let $d=2$ and $A=\{a,b\}$ with $\Vert a-b\Vert=R/2$. Assume by contradiction
that there exists the $R$-hull of $A$, and denote it by $\hat A$. Let $\hat A_1$ be the
closed line segment joining $a$ and $b$: ${\rm reach}(\hat A_1)=\infty$ so that
$\hat A_1\supset \hat A$. Let $\Gamma$ be a circle of radius $R$ passing through $a$ and
$b$ and let $\hat A_2\subset\Gamma$ be the closed arc of smaller length joining $a$
and $b$. We have ${\rm reach}(\hat A_2)=R$ so that $\hat A_2\supset \hat A$. As
$\hat A_1\cap \hat A_2=A$,
we must have $\hat A=A$; on the other hand ${\rm reach}(A)=R/4$ so we have a
contradiction.
\noindent
{\bf Example 2.} In ${\bf E}^d$ consider a half-line $L$ with endpoint at the origin. For
every $i=1,2,\dots$, let $a_i$ be the point of $L$ such that
$\Vert a_i\Vert=1/i$. The set $A=\{a_1,a_2,\dots\}$ does not admit an $R$-hull for any
$R\in(0,\infty)$.
For an arbitrary set $A\subset{\bf E}^d$ and $R>0$, we set
$$
A'_R=\{x\in{\bf E}^d\,:\,\delta_A(x)\ge R\}\,.
$$
The proof of the following proposition is an easy application of Theorem \ref{T2.1}.
\begin{prop} Let $A\subset{\bf E}^d$, $R>0$; ${\rm reach}(A'_R)\ge R$
if and only if for every $a$ and $b$ such that
$\delta_A(a),\delta_A(b)\ge R$ and $B(a,R)\cap B(b,R)\ne\emptyset$,
there exists a continuous arc $\Gamma$ joining
$a$ and $b$, $\Gamma\subset{\mathfrak H}(a,b,R)$,
such that $\delta_A(x)\ge R$ for every $x\in\Gamma$.
\end{prop}
\begin{lm}\label{L4.1}
Let $K\subset{\bf E}^d$, then
\begin{enumerate}
\item[(i)] $K\subset (K'_R)'_R\subset\{z\in{\bf E}^d\,:\,\delta_K(z)<R\}$,
\item[(ii)] if ${\rm reach}(K)\ge R>0$ then ${\rm reach}(K'_R)\ge R$ and $K=(K'_R)'_R$.
\end{enumerate}
\end{lm}
\noindent
\begin{dimo} If $x\in K$, then $\Vert x-y\Vert\ge R$ for every $y\in K'_R$ so that
$\delta_{K'_R}(x)\ge R$ and $x\in(K'_R)'_R$. On the other hand, if $z\in(K'_R)'_R$ then
$z\notin K'_R$ so that $\delta_K(z)<R$. Claim (i) is proved.
For $s\ge 0$ set $K'_s=\{x\in{\bf E}^d\,:\,\delta_K(x)\ge s\}$.
Corollary 4.9 in \cite{Federer} implies that ${\rm reach}(K'_{R-1/i})\ge R-1/i$
for every $i=1,2,\dots$. Moreover, the sequence $K'_{R-1/i}$ converges to
$K'_R$ in the Hausdorff metric. On the other hand, by Remark 4.14 in
\cite{Federer}, for every $\epsilon>0$ the family
$$
\{A\subset{\bf E}^d\,:\,{\rm reach}(A)\ge R-\epsilon\}
$$
is closed with respect to the Hausdorff metric. Then ${\rm reach}(K'_R)\ge R-\epsilon$
for every $\epsilon>0$, and hence ${\rm reach}(K'_R)\ge R$. Now let us prove that if ${\rm reach}(K)\ge R$ then
$(K'_R)'_R\setminus K$ is empty. Let $z\in(K'_R)'_R\setminus K$; (i) implies that
$z\in{\rm Unp}(K)$. Let $x=\xi_K(z)$ and $y_t=x+t\frac{z-x}{\Vert z-x\Vert}$,
$t\ge 0$. Note that $\frac{z-x}{\Vert z-x\Vert}\in{\rm Nor}(K,x)$ so that,
by claim (12) of Theorem 4.8 of \cite{Federer}, if $0<t<R$, then $\delta_K(y_t)=t$ and by continuity
$\delta_K(y_R)=R$. Then $y_R\in K'_R$ and
$\Vert z-y_R\Vert<R$, i.e. a contradiction.
\end{dimo}
\begin{teo}\label{T4.1} Let $A\subset{\bf E}^d$ and $R>0$. If ${\rm reach}(A'_R)\ge R$ then
$A$ admits an $R$-hull $\hat A$ and
$$
\hat A=(A'_R)'_R\,.
$$
\end{teo}
\noindent
\begin{dimo} Let $A_1=(A'_R)'_R$; we prove
that $A_1$ is the $R$-hull of $A$. The inclusion $A\subset A_1$ is part (i) in Lemma
\ref{L4.1}.
By the same lemma, as ${\rm reach}(A'_R)\ge R$ we have ${\rm reach}(A_1)\ge R$. It remains to
show that $A_1$ satisfies (iii) in Definition \ref{D4.1}. Let $K$ be such
that $K\supset A$ and ${\rm reach}(K)\ge R$. Then $K'_R\subset A'_R$ and, by Lemma
\ref{L4.1}, $K=(K'_R)'_R\supset(A'_R)'_R=A_1$.
\end{dimo}
\begin{coro}Let $A\subset{\bf E}^d$ and $R>0$. If for every $a$ and $b$ such that
$\delta_A(a),\delta_A(b)\ge R$ and $\Vert a-b\Vert<2R$, there exists a continuous
arc $\Gamma$, joining $a$ and $b$ such that $\delta_A(x)\ge R$ for every $x\in\Gamma$,
$\Gamma\subset{\mathfrak H}(a,b,R)$, then $A$ admits an $R$-hull $\hat A$ and
$$
\hat A=(A'_R)'_R\,.
$$
\end{coro}
\begin{teo}\label{T4.2} Let $K\subset{\bf E}^d$ and $R>0$. Assume that $K$ admits an $R$-hull
$\hat K$. Then ${\rm reach}(K'_R)\ge R$.
\end{teo}
\noindent
\begin{dimo} We argue by contradiction and assume that ${\rm reach}(K'_R)<R$. Then, by Theorem \ref{T2.1},
there exist $b_1$ and $b_2\in K'_R$ satisfying $\Vert b_1-b_2\Vert<2R$ and such that
${\mathfrak H}(b_1,b_2,R)\cap K'_R$ is not connected. Then, as we saw in the proof of
Theorem \ref{T2.1}, there exist $c_1$ and $c_2\in K'_R$ such that
\begin{equation}
\label{4.1}
{\mathfrak H}(c_1,c_2,R)\cap K'_R=\{c_1,c_2\}\,.
\end{equation}
For $j=1,2$ we have ${\rm reach}(B(c_j,R)^c)=R$ and $B(c_j,R)^c\supset K$, thus
$B(c_j,R)^c\supset\hat K$. This implies in particular that $c_1$, $c_2\in(\hat
K)'_R$. As ${\rm reach}(\hat K)\ge R$, by Lemma \ref{L4.1},
${\rm reach}((\hat K)'_R)\ge R$, then ${\mathfrak H}(c_1,c_2,R)\cap(\hat K)'_R$ is connected.
Let $a\in[{\mathfrak H}(c_1,c_2,R)\setminus\{c_1,c_2\}]\cap(\hat K)'_R$. We have
$B(a,R)\cap K\subset B(a,R)\cap\hat K=\emptyset$, so that $a\in K'_R$, which contradicts
(\ref{4.1}).
\end{dimo}
From the above theorem another connection between convex sets and sets of
positive reach can be deduced. The convex hull of a closed set $C$ is the intersection of all the closed half-spaces
containing $C$. Let us prove that if $K$ admits an $R$-hull $\hat K$, then $\hat K$ is the
intersection of the complement sets of all open balls that do not meet $K$.
Note that for an arbitrary, non-empty, subset $K$ of ${\bf E}^d$ we have
$$
(K'_R)'_R=\bigcap_{\delta_K(x)\ge R}B(x,R)^c\,.
$$
This remark and Theorem \ref{T4.2} lead to the following result.
\begin{coro} Let $K\subset{\bf E}^d$, $R> 0$. Assume that $K$ admits an $R$-hull
$\hat K$. Then
$$
\hat K=\bigcap_{\delta_K(x)\ge R}B(x,R)^c\,.
$$
\end{coro}
{\sc Andrea Colesanti}, Dipartimento di Matematica 'U. Dini', Viale Morgagni 67/a, 50134
Firenze, Italy. E-mail: {\tt [email protected]}
{\sc Paolo Manselli}, Dipartimento di Matematica e Applicazioni per l'Architettura, via dell'Agnolo 14, 50122 Firenze,
Italy. E-mail: {\tt [email protected]}
\end{document} |
\begin{document}
\title[CKN, remainders and superweights for $L^{p}$-weighted Hardy inequalities]
{Extended Caffarelli-Kohn-Nirenberg inequalities, and remainders, stability, and superweights for $L^{p}$-weighted Hardy inequalities}
\author[M. Ruzhansky]{Michael Ruzhansky}
\address{
Michael Ruzhansky:
\endgraf
Department of Mathematics
\endgraf
Imperial College London
\endgraf
180 Queen's Gate, London SW7 2AZ
\endgraf
United Kingdom
\endgraf
{\it E-mail address} {\rm [email protected]}
}
\author[D. Suragan]{Durvudkhan Suragan}
\address{
Durvudkhan Suragan:
\endgraf
Institute of Mathematics and Mathematical Modelling
\endgraf
125 Pushkin str.
\endgraf
050010 Almaty
\endgraf
Kazakhstan
\endgraf
{\it E-mail address} {\rm [email protected]}
}
\author[N. Yessirkegenov]{Nurgissa Yessirkegenov}
\address{
Nurgissa Yessirkegenov:
\endgraf
Institute of Mathematics and Mathematical Modelling
\endgraf
125 Pushkin str.
\endgraf
050010 Almaty
\endgraf
Kazakhstan
\endgraf
and
\endgraf
Department of Mathematics
\endgraf
Imperial College London
\endgraf
180 Queen's Gate, London SW7 2AZ
\endgraf
United Kingdom
\endgraf
{\it E-mail address} {\rm [email protected]}
}
\thanks{The authors were supported in parts by the EPSRC
grant EP/K039407/1 and by the Leverhulme Grant RPG-2014-02,
as well as by the MESRK grant 5127/GF4. No new data was collected or
generated during the course of research.}
\keywords{Hardy inequality, weighted Hardy inequality, Caffarelli-Kohn-Nirenberg inequality, remainder term, homogeneous Lie group.}
\subjclass[2010]{22E30, 43A80}
\begin{abstract}
In this paper we give an extension of the classical Caffarelli-Kohn-Nirenberg inequalities: we show that
for $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ with $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $a$, $b$, $c\in\mathbb{R}$ with $c=\delta(a-1)+b(1-\delta)$, and for all functions $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ we have
$$
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}
$$
for $n\neq p(1-a)$, where the constant $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$.
In the critical case $n=p(1-a)$ we have
$$
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}.
$$
Moreover, we also obtain anisotropic versions of these inequalities which can be conveniently formulated in the language of Folland and Stein's homogeneous groups.
Consequently, we obtain remainder estimates for $L^{p}$-weighted Hardy inequalities on homogeneous groups, which are also new in the Euclidean setting of $\mathbb R^{n}$. The critical Hardy inequalities of logarithmic type and uncertainty type principles on homogeneous groups are obtained. Moreover, we investigate another improved version of $L^{p}$-weighted Hardy inequalities involving a distance and stability estimates. The relation between the critical and the subcritical Hardy inequalities on homogeneous groups is also investigated. We also establish sharp Hardy type inequalities
in $L^{p}$, $1<p<\infty$, with superweights, i.e. with
the weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$ allowing for different choices of $\alpha$ and $\beta$.
There are two reasons why we call the appearing weights the superweights: the arbitrariness of the choice of any homogeneous quasi-norm and a wide range of parameters.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\label{SEC:intro}
The aim of this paper is to give an extension of the classical Caffarelli-Kohn-Nirenberg (CKN) inequalities \cite{CKN84} with respect to ranges of parameters and to investigate the remainders and stability of the weighted $L^p$-Hardy inequalities. Moreover, our methods also provide sharp constants for the CKN inequality for known ranges of parameters as well as give an improvement by replacing the full gradient by the radial derivative. We also obtain the critical case of the CKN inequality with logarithmic terms, and investigate the remainders and other properties in the case when CKN inequalities reduce to the weighted Hardy inequalities.
For the latter, we also establish $L^p$ weighted Hardy inequalities with more general weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$, allowing for different choices of $m$, $\alpha$ and $\beta$.
\subsection{Extended Caffarelli-Kohn-Nirenberg inequalities}
Let us recall the classical Caffarelli-Kohn-Nirenberg inequality \cite{CKN84}:
\begin{thm}\label{clas_CKN}
Let $n\in\mathbb{N}$ and let $p$, $q$, $r$, $a$, $b$, $d$, $\delta\in \mathbb{R}$ such that $p,q\geq1$, $r>0$, $0\leq\delta\leq1$, and
\begin{equation}\label{clas_CKN0}
\frac{1}{p}+\frac{a}{n},\, \frac{1}{q}+\frac{b}{n},\, \frac{1}{r}+\frac{c}{n}>0
\end{equation}
where $c=\delta d + (1-\delta) b$. Then there exists a positive constant $C$ such that
\begin{equation}\label{clas_CKN1}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}\leq C \||x|^{a}|\nabla f|\|^{\delta}_{L^{p}(\mathbb R^{n})} \||x|^{b}f\|^{1-\delta}_{L^{q}(\mathbb R^{n})}
\end{equation}
holds for all $f\in C_{0}^{\infty}(\mathbb R^{n})$, if and only if the following conditions hold:
\begin{equation}\label{clas_CKN2}
\frac{1}{r}+\frac{c}{n}=\delta \left(\frac{1}{p}+\frac{a-1}{n}\right)+(1-\delta)\left(\frac{1}{q}+\frac{b}{n}\right),
\end{equation}
\begin{equation}\label{clas_CKN3}
a-d\geq 0 \quad {\rm if} \quad \delta>0,
\end{equation}
\begin{equation}\label{clas_CKN4}
a-d\leq 1 \quad {\rm if} \quad \delta>0 \quad {\rm and} \quad \frac{1}{r}+\frac{c}{n}=\frac{1}{p}+\frac{a-1}{n}.
\end{equation}
\end{thm}
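To fix ideas, let us record how the classical Hardy inequality fits into this family; the particular choice of parameters below serves only as an illustration. Taking $p=q=r=2$, $\delta=1$ and $a=b=0$, the balance condition \eqref{clas_CKN2} forces $d=c=-1$, the conditions \eqref{clas_CKN0} reduce to $n\geq3$, and the inequality \eqref{clas_CKN1} becomes the classical Hardy inequality
$$\left\|\frac{f}{|x|}\right\|_{L^{2}(\mathbb R^{n})}\leq C\left\|\nabla f\right\|_{L^{2}(\mathbb R^{n})},\qquad f\in C_{0}^{\infty}(\mathbb R^{n}),\; n\geq3.$$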
The first aim of this paper is to extend the CKN inequalities for general functions by widening the range of indices \eqref{clas_CKN0}. Moreover, another improvement will be achieved by replacing the full gradient $\nabla f$ in \eqref{clas_CKN1} by the radial derivative
$\mathcal{R}f=\frac{\partial f}{\partial r}$. It turns out that such improved versions can be established with sharp constants, and that they hold both in the isotropic and anisotropic settings.
To compare with Theorem \ref{clas_CKN} let us first formulate the isotropic version of our extension in the usual setting of $\mathbb R^{n}$.
\begin{thm}\label{clas_CKN-2}
Let $n\in\mathbb{N}$, $1<p,q<\infty$, $0<r<\infty$, with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $c=\delta(a-1)+b(1-\delta)$. If $n\neq p(1-a)$, then for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have
\begin{equation}\label{EQ:aq}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}.
\end{equation}
In the critical case $n=p(1-a)$ for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have
\begin{equation}\label{CKN_rem3a}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})},
\end{equation}
for any homogeneous quasi-norm $|\cdot|$. If $|\cdot|$ is the Euclidean norm on $\mathbb R^{n}$, inequalities
\eqref{EQ:aq} and \eqref{CKN_rem3a} imply, respectively,
\begin{equation}\label{EQ:aqe}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}
\end{equation}
for $n\neq p(1-a)$, and
\begin{equation}\label{CKN_rem3ae}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})},
\end{equation}
for $n=p(1-a)$. The inequality \eqref{EQ:aq} holds for any homogeneous quasi-norm $|\cdot|$, and the constant
$\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or for $p\neq q$ with $p(1-a)+bq\neq0$. Furthermore, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$.
\end{thm}
Note that if the conditions \eqref{clas_CKN0}
hold, then the inequality \eqref{EQ:aqe} is contained in the family of Caffarelli-Kohn-Nirenberg inequalities in Theorem \ref{clas_CKN}.
However, already in this case, if we require $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$, then \eqref{EQ:aqe} yields the inequality \eqref{clas_CKN1} with the sharp constant. Moreover, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$. Our conditions $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$ imply the condition \eqref{clas_CKN2} of Theorem \ref{clas_CKN}, as well as \eqref{clas_CKN3}--\eqref{clas_CKN4}, which are all necessary for having estimates of this type, at least under the conditions \eqref{clas_CKN0}.
If the conditions \eqref{clas_CKN0} are not satisfied, then the inequality \eqref{EQ:aqe} is not covered by Theorem \ref{clas_CKN}. So, this gives an extension of Theorem \ref{clas_CKN} with respect to the range of parameters. Let us give an example:
\begin{exa}\label{CKN_example} Let us take $1<p=q=r<\infty$, $a=-\frac{n-2p}{p}$, $b=-\frac{n}{p}$ and $c=-\frac{n-\delta p}{p}$.
Then by \eqref{EQ:aqe}, for all $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have the inequality
\begin{equation}\label{CKN_example1}
\left\|\frac{f}{|x|^{\frac{n-\delta p}{p}}}\right\|_{L^{p}(\mathbb R^{n})}\leq
\left\|\frac{\nabla f}{|x|^{\frac{n-2p}{p}}}\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\|\frac{f}{|x|^{\frac{n}{p}}}\right\|^{1-\delta}_{L^{p}(\mathbb R^{n})}, \quad 1<p<\infty, \;0\leq\delta\leq1,
\end{equation}
where $\nabla$ is the standard gradient in $\mathbb R^{n}$. Since we have
$$\frac{1}{q}+\frac{b}{n}=\frac{1}{p}+\frac{1}{n}\left(-\frac{n}{p}\right)=0,$$
we see that \eqref{clas_CKN0} fails, so that the inequality \eqref{CKN_example1} is not covered by Theorem \ref{clas_CKN}. Moreover, in this case we have $p=q$ with $a-b=1$, so that the constant $\left|\frac{p}{n-p(1-a)}\right|^{\delta}=1$ in the inequality \eqref{CKN_example1} is sharp.
\end{exa}
Although these results are new already in the usual setting of $\mathbb R^{n}$, our techniques apply equally well to anisotropic structures. Consequently, it is convenient to work in the setting of homogeneous groups developed by Folland and Stein \cite{FS-Hardy} with the idea of emphasising general results of harmonic analysis depending only on the group and dilation structures.
In particular, in this way we obtain results on the anisotropic $\mathbb R^{n}$, on the Heisenberg group, general stratified groups, graded groups, etc. In the special case of stratified groups (or homogeneous Carnot groups), other formulations using horizontal gradient are possible, and we refer to \cite{Ruzhansky-Suragan:Layers} and especially to \cite{Ruzhansky-Suragan:JDE} for versions of such results and the discussion of the corresponding literature.
The improved versions of the Caffarelli-Kohn-Nirenberg inequality for radially symmetric functions with respect to the range of parameters were investigated in \cite{NDD12}. In \cite{ZhHD15} and \cite{HZh11}, weighted Hardy type inequalities were obtained for the generalised Baouendi-Grushin vector fields, which for $\gamma=0$ reduce to the standard gradient in $\mathbb R^{n}$. We also refer to \cite{HNZh11}, \cite{Han15} for weighted Hardy inequalities on the Heisenberg group, to \cite{HZhD11} and \cite{ZhHD14} on the H-type groups, and to a recent paper \cite{Yacoub17} on Lie groups of polynomial growth, as well as to references therein.
In Section \ref{SEC:prelim} we very briefly recall the necessary notions and fix the notation in more detail. Assuming the notation there, Theorem \ref{clas_CKN-2} is the special case of the following theorem that we prove in this paper:
\begin{thm}\label{THM:CKN-i}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$. Let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$. Then
for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the following Caffarelli-Kohn-Nirenberg type inequalities, with $\mathcal{R}:=\frac{d}{d|x|}$ being the radial derivative:
If $Q\neq p(1-a)$, then
$$
\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\||x|^{a}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})},
$$
where the constant $\left|\frac{p}{Q-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$.
If $Q=p(1-a)$, then
$$
\left\||x|^{c}f\right\|_{L^{r}(\mathbb{G})}
\leq p^{\delta} \left\||x|^{a}\log|x|\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.
$$ Moreover, the constants $\left|\frac{p}{Q-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$.
\end{thm}
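As a simple illustration of Theorem \ref{THM:CKN-i}, let us specialise its parameters, the particular choice below being made only for orientation. Take $\delta=1$, $r=p$ and $a=-\alpha$ with $\alpha<\frac{Q-p}{p}$, so that $Q\neq p(1-a)$, $c=a-1=-\alpha-1$, and the factor involving $q$ and $b$ disappears since $1-\delta=0$. Then the first inequality of the theorem reads
$$\left\|\frac{f}{|x|^{\alpha+1}}\right\|_{L^{p}(\mathbb{G})}
\leq \frac{p}{Q-p-\alpha p}\left\|\frac{\mathcal{R}f}{|x|^{\alpha}}\right\|_{L^{p}(\mathbb{G})},$$
which is an anisotropic $L^{p}$-weighted Hardy inequality of the type \eqref{Lp_Hardy} recalled in the next subsection, now for the whole range $1<p<\infty$.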
\subsection{$L^{p}$-weighted Hardy inequalities}
Let us recall the following $L^{p}$-weighted Hardy inequality
\begin{equation}\label{Lp_Hardy}
\int_{\mathbb R^{n}}\frac{|\nabla f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx
\end{equation}
for every function $f\in C_{0}^{\infty}(\mathbb R^{n})$, where $-\infty<\alpha<\frac{n-p}{p}$ and $2\leq p<n$. The inequality \eqref{Lp_Hardy} is a special case of the Caffarelli-Kohn-Nirenberg inequalities \cite{CKN84}, recalled also in Theorem \ref{clas_CKN}. Since in this paper we are also interested in remainder estimates for the $L^{p}$-weighted Hardy inequality, let us recall known results in this direction. Overall, the study of remainders in Hardy and other related inequalities is a classical topic going back to \cite{Brez1, Brez2, BV97}.
Ghoussoub and Moradifam \cite{GM08} proved that there exists no strictly positive function $V\in C^{1}(0,\infty)$ such that the inequality
$$\int_{\mathbb R^{n}}|\nabla f|^{2}dx\geq\left(\frac{n-2}{2}\right)^{2}\int_{\mathbb R^{n}}\frac{|f|^{2}}{|x|^{2}}dx+\int_{\mathbb R^{n}}V(|x|)|f|^{2}dx$$
holds for any $f\in W^{1,2}(\mathbb R^{n})$. Cianchi and Ferone \cite {CF08} showed that for all $1<p<n$ there exists a constant $C=C(p,n)$ such that
$$\int_{\mathbb R^{n}}|\nabla f|^{p}dx\geq\left(\frac{n-p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f|^{p}}{|x|^{p}}dx\,(1+Cd_{p}(f)^{2p^{*}})$$
holds for all real-valued weakly differentiable functions $f$ in $\mathbb R^{n}$, decaying to zero at infinity, such that $|\nabla f|\in L^{p}(\mathbb R^{n})$. Here
$$d_{p}(f)=\underset{c\in \mathbb{R}}{\rm inf}\frac{\|f-c|x|^{-\frac{n-p}{p}}\|_{L^{p^{*},\infty}(\mathbb R^{n})}}{\|f\|_{L^{p^{*},p}(\mathbb R^{n})}}$$ with $p^{*}=\frac{np}{n-p}$, and $L^{\tau, \sigma}(\mathbb R^{n})$ is the Lorentz space for $0<\tau\leq \infty$ and $1\leq\sigma\leq\infty$.
In the case of a bounded domain $\Omega$, Wang and Willem \cite{WW03} for $p=2$ and Abdellaoui, Colorado and Peral \cite{ACP05} for $1<p<\infty$ investigated the improved type of \eqref{Lp_Hardy} (see also \cite{ST15a} and \cite{ST15b} for more details).
For discussions of the above inequalities on more general Lie groups we refer to the recent papers \cite{Ruzhansky-Suragan:Layers}, \cite{Ruzhansky-Suragan:squares} and \cite{Ruzhansky-Suragan:JDE}, as well as to references therein.
Sometimes the improved versions of different inequalities, or remainder estimates, are called stability of the inequality if the estimates depend on certain distances: see, e.g. \cite{BJOS16} for stability of trace theorems, \cite{CFW13} for stability of Sobolev inequalities, etc.
We also note that Sano and Takahashi obtained the improved version of \eqref{Lp_Hardy} in \cite{ST15a} for $\Omega=\mathbb R^{n}$ and $\alpha=0$ and then in \cite{ST15b} for any $-\infty<\alpha<\frac{n-p}{p}$:
Let $n\geq 3$, $2\leq p<n$ and $-\infty<\alpha<\frac{n-p}{p}$. Let $N\in \mathbb{N}$, $t\in (0,1)$, $\gamma<\min\{1-t, \frac{p-N}{p}\}$ and $\delta=N-n+\frac{N}{1-t-\gamma}\left(\gamma+\frac{n-p-\alpha p}{p}\right)$.
Then there exists a constant $C>0$ such that the inequality
$$\int_{\mathbb R^{n}}\frac{|\nabla f|^{p}}{|x|^{\alpha p}}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f|^{p}}{|x|^{p(\alpha+1)}}dx
\geq C\frac{\left(\int_{\mathbb R^{n}}|f|^{\frac{N}{1-t-\gamma}}|x|^{\delta}dx\right)^{\frac{p(1-t-\gamma)}{Nt}}}
{\left(\int_{\mathbb R^{n}}|f|^{p}|x|^{-\alpha p}dx\right)^{\frac{1-t}{t}}}
$$
holds for any radially symmetric function $f\in W_{0,\alpha}^{1,p}(\mathbb{R}^{n})$, $f\neq 0$.
For the convenience of the reader we now briefly recapitulate the main results of this part of the paper, formulating them directly in the anisotropic cases following the notation recalled in Section \ref{SEC:prelim}.
Thus, we show that for a homogeneous group $\mathbb{G}$ of homogeneous dimension $Q$ and any homogeneous quasi-norm $|\cdot|$ we have the following results:
\begin{itemize}
\item ({\bf Remainder estimates for the $L^{p}$-weighted Hardy inequality})
Let $2\leq p<Q$, $-\infty<\alpha<\frac{Q-p}{p}$ and $\delta_{1}=Q-p-\alpha p-\frac{Q+pb}{p}$, $\delta_{2}=Q-p-\alpha p-\frac{bp}{p-1}$ for any $b\in\mathbb{R}$. Then
for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$
$$
\geq C_{p} \frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},
$$
where $C_{p}=c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative and $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$. This family is a new result already in the standard setting of $\mathbb R^{n}$.
\item ({\bf Stability of Hardy inequalities})
Let $2\leq p<Q$ and $-\infty<\alpha<\frac{Q-p}{p}$. Then for all radial functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the stability estimate
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$
$$
\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\sup_{R>0}d_{R}(f,c_{f}(R)f_{\alpha})^{p},
$$
where $c_{f}(R)=R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)$ with $f(x)=\widetilde{f}(r)$, $r=|x|$,
$\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, $c_{p}$ is defined in Lemma \ref{FrS}, $f_{\alpha}$ and $d_{R}(\cdot,\cdot)$ are defined in \eqref{aremterm7} and \eqref{aremterm8}, respectively.
\item ({\bf Critical Hardy inequalities of logarithmic type})
Let $1<\gamma<\infty$ and let $\max\{1,\gamma-1\}<p<\infty$.
Then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ and all $R>0$ we have
$$\frac{p}{\gamma-1}\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}
\geq
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})},
$$
where $f_{R}=f\left(R\frac{x}{|x|}\right)$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, and the constant $\frac{p}{\gamma-1}$ is optimal. In the abelian case, this result was obtained in \cite{MOW15}. In the case $\gamma=p$ this result on the homogeneous group was proved in \cite{Ruzhansky-Suragan:critical}.
\item ({\bf Uncertainty inequalities})
Let $1<p<\infty$ and $q>1$ be such that
$\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$. Then for any $R>0$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the uncertainty inequalities
$$
\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}
\geq\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})},
$$
where $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative (see \eqref{EQ:Euler}). Moreover,
$$
\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{p'}}\left(\log\frac{R}{|x|}\right)^{2-\frac{\gamma}{p}}}
\right\|_{L^{p'}(\mathbb{G})}$$$$\geq\frac{\gamma-1}{p}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{2}}\log\frac{R}{|x|}}\right\|^{2}_{L^{2}(\mathbb{G})}
$$
holds for $\frac{1}{p}+\frac{1}{p'}=1$.
\item ({\bf Relation between critical and subcritical Hardy inequalities})
Let $Q\geq m+1$, $m\geq2$. Let $|\cdot|$ be a homogeneous quasi-norm. Then for any nonnegative radial function $g\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$, there exists a nonnegative radial function $f\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ such that
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$
$$
=\frac{|\sigma|}
{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}$$
$$\times\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right)
$$
holds true, where $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, and $|\sigma|$ and $|\widetilde{\sigma}|$ are the $Q-1$ and $m-1$ dimensional surface measures of the corresponding unit spheres, respectively.
\end{itemize}
\subsection{$L^p$-Hardy inequalities with superweights}
The classical Hardy inequalities and their extensions, such as the Caffarelli-Kohn-Nirenberg inequalities, usually involve the weights of the form $\frac{1}{|x|^{m}}$. In this paper, we also consider the weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$ allowing for different choices of $\alpha$ and $\beta$. If $\alpha=0$ or $\beta=0$, these reduce to the traditional weights. So, we are interested in the case when $\alpha\beta\not=0$ and, in fact, we obtain two families of inequalities depending on whether $\alpha\beta>0$ or $\alpha\beta<0$. Moreover, $|\cdot|$ in these expressions can be an arbitrary homogeneous quasi-norm and the constants for the obtained inequalities are sharp. The freedom in choosing the parameters $\alpha,\beta, a,b,m$ and a quasi-norm led us to call these weights `superweights' in this context.
Again, the obtained estimates include both the isotropic and anisotropic settings of $\mathbb R^{n}$, where the resulting range of inequalities also appears to be new. Namely, already in the Euclidean case of $\mathbb R^{n}$ with the Euclidean norm, they extend the inequalities that have been known for $p=2$ for some range of parameters from \cite{GM11} to the full range of $1<p<\infty$.
Therefore, we again work in the setting of homogeneous groups.
To summarise, on a homogeneous group $\mathbb{G}$
with homogeneous dimension $Q$ for any homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$, all $a,b>0$ and $1<p<\infty$ we prove that
\begin{itemize}
\item If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation}\label{intro1} \frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} .
\end{equation}
If $Q\neq pm+p$, then the constant $\frac{Q-pm-p}{p}$ is sharp.
\item If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation} \label{intro2}
\frac{Q-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation}
If $Q\neq pm+p-\alpha\beta$, then the constant $\frac{Q-pm+\alpha\beta-p}{p}$ is sharp.
\end{itemize}
As noted before, the weights in the inequalities \eqref{intro1} and \eqref{intro2} are called superweights since the constants $\frac{Q-pm-p}{p}$ in \eqref{intro1} and $\frac{Q-pm+\alpha\beta-p}{p}$ in \eqref{intro2} are sharp for an arbitrary homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$ and a wide range of choices of the allowed parameters $\alpha, \beta, a, b$ and $m$. Directly from the inequalities \eqref{intro1} and \eqref{intro2}, choosing different $\alpha, \beta, a, b, m$ and $Q$, one can obtain a number of Hardy type inequalities which have various consequences
and applications. For instance, in the abelian (isotropic or anisotropic) case ${\mathbb G}=(\mathbb R^{n},+)$, we have
$Q=n$, so for any quasi-norm $|\cdot|$ on $\mathbb R^{n}$, all $a,b>0$ and $1<p<\infty$ these imply
new inequalities. Thus, if $\alpha \beta>0$ and $pm\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have
\begin{equation}\label{intro3} \frac{n-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\frac{df}{d|x|}\right\|_{L^{p}(\mathbb R^{n})}
\end{equation}
with the constant being sharp for $n\neq pm+p$.
If $\alpha \beta<0$ and $pm-\alpha\beta\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have
\begin{equation} \label{intro4}
\frac{n-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\frac{df}{d|x|} \right\|_{L^{p}(\mathbb R^{n})}
\end{equation}
with the sharp constant for $n\neq pm+p-\alpha\beta$.
In the case of the standard Euclidean distance $|x|=\sqrt{x^{2}_{1}+\ldots+x^{2}_{n}}$, by using the Schwarz inequality, from the inequalities \eqref{intro3} and \eqref{intro4} we obtain that
if $\alpha \beta>0$ and $pm\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$
\begin{equation}\label{intro5} \frac{n-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\nabla f\right\|_{L^{p}(\mathbb R^{n})}
\end{equation}
with the constant sharp for $n\neq pm+p$.
If $\alpha \beta<0$ and $pm-\alpha\beta\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have
\begin{equation} \label{intro6}
\frac{n-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\nabla f\right\|_{L^{p}(\mathbb R^{n})}
\end{equation}
with the sharp constant for $n\neq pm+p-\alpha\beta$. The $L^{2}$-versions, that is, the cases $p=2$ of the inequalities \eqref{intro5} and \eqref{intro6}, were obtained in \cite{GM11}. We shall also note that these inequalities have interesting applications in the theory of ODEs (see \cite[Theorem 2.1]{GM11}).
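To give a flavour of \eqref{intro5}, let us write out one particular instance, the parameter values being chosen only as an illustration. Taking $m=0$, $a=b=1$, $\alpha=1$ and $\beta=p$, so that $\alpha\beta=p>0$ and the condition $pm\leq n-p$ amounts to $p\leq n$, the inequality \eqref{intro5} becomes
$$\frac{n-p}{p}\left\|\frac{(1+|x|)f}{|x|}\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|(1+|x|)\nabla f\right\|_{L^{p}(\mathbb R^{n})},\qquad f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\}),$$
with the constant $\frac{n-p}{p}$ sharp for $n\neq p$.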
In Section \ref{SEC:2} we give the main result of this part and give its short proof. Some higher order versions of the obtained inequalities are discussed briefly in Section \ref{Sec3}.
In Section \ref{SEC:prelim} we briefly recall the main concepts of homogeneous groups and fix the notation. In Section \ref{SEC:critHardy} we present critical Hardy inequalities of logarithmic type and uncertainty type principles on homogeneous groups. The remainder estimates for $L^{p}$-weighted Hardy inequalities on homogeneous groups are proved in Section \ref{SEC:rem_estimates}. Moreover, in Section \ref{stab} we also investigate another improved version of $L^{p}$-weighted Hardy inequalities involving a distance. In Section \ref{SEC:crit_subcrit_con} the relation between the critical and the subcritical Hardy inequalities on homogeneous groups is investigated. In Section \ref{SEC:CKN} we introduce Caffarelli-Kohn-Nirenberg type inequalities on homogeneous groups and prove their extended version.
\section{Preliminaries}
\label{SEC:prelim}
In this section we very briefly recall the necessary notation concerning the setting of homogeneous
groups following Folland and Stein \cite{FS-Hardy} as well as a recent treatise \cite{FR}.
We also recall a few other facts that will be used in the proofs.
A connected simply connected Lie group $\mathbb G$ is called a {\em homogeneous group} if
its Lie algebra $\mathfrak{g}$ is equipped with a family of the following dilations:
$$D_{\lambda}={\rm Exp}(A \,{\rm ln}\lambda)=\sum_{k=0}^{\infty}
\frac{1}{k!}({\rm ln}(\lambda) A)^{k},$$
where $A$ is a diagonalisable positive linear operator on $\mathfrak{g}$,
and every $D_{\lambda}$ is a morphism of $\mathfrak{g}$,
that is,
$$\forall X,Y\in \mathfrak{g},\, \lambda>0,\;
[D_{\lambda}X, D_{\lambda}Y]=D_{\lambda}[X,Y],$$
holds. We recall that $Q := {\rm Tr}\,A$ is called the homogeneous dimension of $\mathbb G$.
A homogeneous group is a nilpotent Lie group and the exponential mapping $\exp_{\mathbb G}:\mathfrak g\to\mathbb G$ of this group is a global diffeomorphism.
Consequently, the dilations of $\mathfrak{g}$ induce a dilation structure on $\mathbb{G}$ itself, which we denote by $D_{\lambda}x$ or simply by $\lambda x$.
Then we have
\begin{equation}
|D_{\lambda}(S)|=\lambda^{Q}|S| \quad {\rm and}\quad \int_{\mathbb{G}}f(\lambda x)
dx=\lambda^{-Q}\int_{\mathbb{G}}f(x)dx.
\end{equation}
Here $dx$ is the Haar measure on the homogeneous group $\mathbb{G}$ and $|S|$ is the volume of a measurable set $S\subset \mathbb{G}$. The Haar measure on a homogeneous group $\mathbb{G}$ is the standard Lebesgue measure for $\mathbb R^{n}$ (see, for example, \cite[Proposition 1.6.6]{FR}).
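For instance (a standard example which we recall only for illustration), the abelian group $\mathbb{G}=(\mathbb R^{n},+)$ equipped with the anisotropic dilations
$$D_{\lambda}(x_{1},\ldots,x_{n})=(\lambda^{\nu_{1}}x_{1},\ldots,\lambda^{\nu_{n}}x_{n}),\qquad \nu_{1},\ldots,\nu_{n}>0,$$
is a homogeneous group of homogeneous dimension $Q=\nu_{1}+\cdots+\nu_{n}$; the isotropic choice $\nu_{1}=\cdots=\nu_{n}=1$ gives $Q=n$, while, for example, the Heisenberg group $\mathbb{H}^{n}$ with its standard parabolic dilations has $Q=2n+2$.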
Let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb G$.
Then the quasi-ball centred at $x\in\mathbb{G}$ with radius $R > 0$ is defined by
$$B(x,R):=\{y\in \mathbb{G}: |x^{-1}y|<R\}.$$
The following notation will also be used in this paper
$$B^{c}(x,R):=\{y\in \mathbb{G}: |x^{-1}y|\geq R\}.$$
We refer to \cite{FS-Hardy} for the proof of the following important polar decomposition on homogeneous Lie groups, which can be also found in \cite[Section 3.1.7]{FR}:
there is a (unique)
positive Borel measure $\sigma$ on the
unit quasi-sphere
\begin{equation}\label{EQ:sphere}
\mathfrak S:=\{x\in \mathbb{G}:\,|x|=1\},
\end{equation}
so that for every $f\in L^{1}(\mathbb{G})$ we have
\begin{equation}\label{EQ:polar}
\int_{\mathbb{G}}f(x)dx=\int_{0}^{\infty}
\int_{\mathfrak S}f(ry)r^{Q-1}d\sigma(y)dr.
\end{equation}
Let us now fix a basis $\{X_{1},\ldots,X_{n}\}$ of a Lie algebra $\mathfrak{g}$
such that
$$AX_{k}=\nu_{k}X_{k}$$
for every $k$, so that the matrix $A$ can be taken to be
$A={\rm diag} (\nu_{1},\ldots,\nu_{n}).$
Then every $X_{k}$ is homogeneous of degree $\nu_{k}$ and
$$
Q=\nu_{1}+\cdots+\nu_{n}.
$$
The decomposition of ${\exp}_{\mathbb{G}}^{-1}(x)$ in $\mathfrak g$ defines the vector
$$e(x)=(e_{1}(x),\ldots,e_{n}(x))$$
by the formula
$${\exp}_{\mathbb{G}}^{-1}(x)=e(x)\cdot \nabla\equiv\sum_{j=1}^{n}e_{j}(x)X_{j},$$
where $\nabla=(X_{1},\ldots,X_{n})$.
It implies the equality
$$x={\exp}_{\mathbb{G}}\left(e_{1}(x)X_{1}+\ldots+e_{n}(x)X_{n}\right).$$
Taking into account the homogeneity and denoting $x=ry,\,y\in \mathfrak S,$ one has
$$
e(x)=e(ry)=(r^{\nu_{1}}e_{1}(y),\ldots,r^{\nu_{n}}e_{n}(y)).
$$
So we have
\begin{equation*}
\frac{d}{d|x|}f(x)=\frac{d}{dr}f(ry)=
\frac{d}{dr}f({\exp}_{\mathbb{G}}
\left(r^{\nu_{1}}e_{1}(y)X_{1}+\ldots
+r^{\nu_{n}}e_{n}(y)X_{n}\right)).
\end{equation*}
We use the notation
\begin{equation}\label{EQ:Euler}
\mathcal{R} :=\frac{d}{dr},
\end{equation}
that is,
\begin{equation}\label{dfdr}
\frac{d}{d|x|}f(x)=\mathcal{R}f(x), \;\forall x\in \mathbb G,
\end{equation}
for any homogeneous quasi-norm $|x|$ on $\mathbb G$.
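For instance, in the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ with the standard Euclidean norm $|x|=\sqrt{x_{1}^{2}+\ldots+x_{n}^{2}}$, writing $x=ry$ with $y\in\mathfrak S$ fixed, one simply has
$$\mathcal{R}f(x)=\frac{d}{dr}f(ry)=\frac{x}{|x|}\cdot\nabla f(x),$$
which is how the Euclidean versions of the inequalities below, formulated with $\frac{x}{|x|}\cdot\nabla f$ or, after an application of Schwarz's inequality, with $|\nabla f|$, arise from the anisotropic ones.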
Let us recall the following lemma, which will be used in our proof.
\begin{lem}[\cite{FrS08}]
\label{FrS}
Let $p\geq2$ and let $a$, $b$ be real numbers. Then there exists $c_{p}>0$ such that
$$|a-b|^{p}\geq|a|^{p}-p|a|^{p-2}ab+c_{p}|b|^{p}$$
holds, where $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$ is sharp in this inequality.
\end{lem}
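For orientation, let us check Lemma \ref{FrS} in the simplest case $p=2$: then $(1-t)^{p}-t^{p}+pt^{p-1}=(1-t)^{2}-t^{2}+2t=1$ for every $t$, so that $c_{2}=1$, and the inequality reduces to the elementary identity
$$|a-b|^{2}=|a|^{2}-2ab+|b|^{2}.$$
For $p>2$ the lemma provides the sharp constant $c_{p}$ in the corresponding inequality.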
We will also use the following anisotropic Caffarelli-Kohn-Nirenberg inequality (see \cite{ORS16} and \cite{Ruzhansky-Suragan:L2-CKN}):
\begin{thm}[\cite{ORS16}]
\label{CKN}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $a,b\in\mathbb{R}$, and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$. Then we have
\begin{equation}\label{CKN1}
\left|\frac{Q-(a+b+1)}{p}\right|\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{a+b+1}}dx\leq\left(\int_{\mathbb{G}}
\frac{|\mathcal{R}f|^{p}}{|x|^{ap}}dx\right)^{\frac{1}{p}}\left(\int_{\mathbb{G}}
\frac{|f|^{p}}{|x|^{\frac{bp}{p-1}}}dx\right)^{\frac{p-1}{p}},
\end{equation}
where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, $1<p<\infty$, and the constant $\left|\frac{Q-(a+b+1)}{p}\right|$ is sharp.
\end{thm}
\section{On remainder estimates of anisotropic $L^{p}$-weighted Hardy inequalities}
\label{SEC:rem_estimates}
In this section we obtain a family of remainder estimates in the weighted $L^p$-Hardy inequalities, with a freedom of choosing the parameter $b\in\mathbb R$. The obtained remainder estimates are new already in the standard setting of $\mathbb R^n$.
\begin{thm}\label{aremterm}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q\geq3$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $2\leq p<Q$, $-\infty<\alpha<\frac{Q-p}{p}$ and $\delta_{1}=Q-p-\alpha p-\frac{Q+pb}{p}$, $\delta_{2}=Q-p-\alpha p-\frac{bp}{p-1}$ for any $b\in\mathbb{R}$. Then
for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$
\begin{equation}\label{aremterm1}
\geq C_{p} \frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},
\end{equation}
where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, $C_{p}=c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}$ and $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$.
\end{thm}
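To illustrate the role of the free parameter $b$, and purely as an example of our own choosing, let us write out \eqref{aremterm1} in the unweighted case $\alpha=0$ with $b=0$: then $\delta_{1}=\frac{Q(p-1)}{p}-p$, $\delta_{2}=Q-p$, and we obtain
$$\int_{\mathbb{G}}|\mathcal{R}f(x)|^{p}dx-\left(\frac{Q-p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{p}}dx
\geq c_{p}\left(\frac{Q(p-1)}{p^{2}}\right)^{p}
\frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\frac{Q(p-1)}{p}-p}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{Q-p}dx\right)^{p-1}}.$$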
\begin{rem}\label{aremterm_rem1} Since the inequality \eqref{aremterm1} holds for any $b\in\mathbb{R}$, choosing $b=\frac{Q(p-1)}{p}$ so that $C_{p}=0$, we obtain the $L^{p}$-weighted Hardy inequalities on homogeneous groups:
\begin{multline}\label{aremterm13}
\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx, \\
-\infty<\alpha<\frac{Q-p}{p},\;\; 2\leq p<Q,
\end{multline}
for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$.
In the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ with $Q=n$, the inequality \eqref{aremterm13} gives the $L^{p}$-weighted Hardy inequalities for any quasi-norm on $\mathbb R^{n}$: For any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ we have
$$\int_{\mathbb R^{n}}\left|\frac{x}{|x|}\cdot\nabla f(x)\right|^{p}|x|^{-\alpha p}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{p(\alpha+1)}}dx,$$
where $-\infty<\alpha<\frac{n-p}{p}$ and $2\leq p<n$. By Schwarz's inequality with the standard Euclidean distance $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+\ldots+x_{n}^{2}}$, we obtain the Euclidean form of the $L^{p}$-weighted Hardy inequalities on $\mathbb R^{n}$:
\begin{multline*}
\int_{\mathbb R^{n}}\frac{|\nabla f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx, \\ -\infty<\alpha<\frac{n-p}{p},\;\; 2\leq p<n,
\end{multline*}
for any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, where $\nabla$ is the standard gradient in $\mathbb R^{n}$.
\end{rem}
\begin{rem}\label{aremterm_rem2}
We also note that in the abelian case, \eqref{aremterm1} implies a new remainder estimate for any quasi-norm on $\mathbb R^{n}$: For any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ and for any $b\in\mathbb{R}$, we obtain
$$\int_{\mathbb R^{n}}\left|\frac{x}{|x|}\cdot\nabla f(x)\right|^{p}|x|^{-\alpha p}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{p(\alpha+1)}}dx$$
\begin{equation}\label{aremterm11}
\geq C_{p} \frac{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},\quad 2\leq p<n,\;-\infty<\alpha<\frac{n-p}{p}.
\end{equation}
As in Remark \ref{aremterm_rem1}, by Schwarz's inequality with the standard Euclidean distance, we obtain the Euclidean version of the remainder estimate for $L^{p}$-weighted Hardy inequalities:
$$\int_{\mathbb R^{n}}\frac{\left|\nabla f(x)\right|^{p}}{|x|^{\alpha p}}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$
\begin{equation}\label{aremterm12}
\geq C_{p} \frac{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},\quad 2\leq p<n,\;-\infty<\alpha<\frac{n-p}{p},
\end{equation}
for every function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ and for any $b\in\mathbb{R}$, where $\nabla$ is the standard gradient in $\mathbb R^{n}$.
Thus, we note that the remainder estimate \eqref{aremterm12} is new already in the standard setting of $\mathbb R^{n}$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{aremterm}] First let us show the statement of Theorem \ref{aremterm} for a radial function $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$. Since $f$ is radial, $f$ can be represented as $f(x)=\widetilde{f}(|x|)$. Following the idea of Brezis and V\'{a}zquez \cite{BV97}, we define
\begin{equation}\label{aremterm2}
\widetilde{g}(r)=r^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(r).
\end{equation}
Since $\widetilde{f}=\widetilde{f}(r)\in C_{0}^{\infty}(0,\infty)$ and $\alpha<\frac{Q-p}{p}$, we obtain $\widetilde{g}(0)=0$ and $\widetilde{g}(+\infty)=0$. We set $g(x)=\widetilde{g}(|x|)$ for $x\in \mathbb{G}$.
Introducing polar coordinates $(r,y)=(|x|, \frac{x}{\mid x\mid})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$ and using \eqref{EQ:polar}, we have
$$J:=\int_{\mathbb{G}}|\mathcal{R}f|^{p}|x|^{-\alpha p}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{p(\alpha+1)}}dx$$
$$=|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{f}(r)\right|^{p}r^{-\alpha p+Q-1}dr-|\sigma|
\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{0}^{\infty}|\widetilde{f}(r)|^{p}r^{-p(\alpha+1)+Q-1}dr$$
$$=|\sigma|\int_{0}^{\infty}\left|\left(\frac{Q-p-\alpha p}{p}\right)r^{-\frac{Q-\alpha p}{p}}\widetilde{g}(r)
-r^{-\frac{Q-p-\alpha p}{p}}\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{Q-1-\alpha p}dr$$
$$-|\sigma|\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{0}^{\infty}|\widetilde{g}(r)|^{p}r^{-1}dr,$$
where $|\sigma|$ is the $Q-1$ dimensional surface measure of the unit quasi-sphere.
Here applying Lemma \ref{FrS} to the integrand of the first term in the last expression above, we get
$$\left|\left(\frac{Q-p-\alpha p}{p}\right)r^{-\frac{Q-\alpha p}{p}}\widetilde{g}(r)-r^{-\frac{Q-p-\alpha p}{p}}\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{Q-1-\alpha p}$$
$$\geq\left(\left(\frac{Q-p-\alpha p}{p}\right)^{p}r^{-Q+\alpha p}|\widetilde{g}(r)|^{p}\right)r^{Q-1-\alpha p}$$
$$-p\left(\frac{Q-p-\alpha p}{p}\right)^{p-1}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r)
r^{-(\frac{Q-\alpha p}{p})(p-1)}r^{-(\frac{Q-p-\alpha p}{p})}r^{Q-1-\alpha p}$$
$$+c_{p}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{-Q+p+\alpha p}r^{Q-1-\alpha p}$$
$$=\left(\frac{Q-p-\alpha p}{p}\right)^{p}r^{-1}|\widetilde{g}(r)|^{p}-p\left(\frac{Q-p-\alpha p}{p}\right)^{p-1}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r)$$
$$+c_{p}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{p-1}.$$
Since $\widetilde{g}(0)=\widetilde{g}(+\infty)=0$ and $p\geq2$, we note that
$$p\int_{0}^{\infty}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r)dr=\int_{0}^{\infty}
\frac{d}{dr}(|\widetilde{g}(r)|^{p})dr=0.$$
This gives a \enquote{ground state representation} (\cite{FrS08}) of the Hardy difference $J$:
\begin{equation}\label{aremterm3}
J\geq c_{p} |\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{p-1}dr=
c_{p}\int_{\mathbb{G}}|\mathcal{R}g(x)|^{p}|x|^{p-Q}dx.
\end{equation}
Putting $a=\frac{Q-p}{p}$ in \eqref{CKN1}, we obtain for any $b\in\mathbb{R}$, that
\begin{multline}\label{CKN2}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|\int_{\mathbb{G}}|g|^{p}
|x|^{-\frac{Q+pb}{p}}dx \\
\leq\left(\int_{\mathbb{G}}
|\mathcal{R}g|^{p}|x|^{p-Q}dx\right)^{\frac{1}{p}}\left(\int_{\mathbb{G}}
|g|^{p}|x|^{-\frac{bp}{p-1}}dx\right)^{\frac{p-1}{p}}.
\end{multline}
It gives that
\begin{equation}\label{aremterm4}J\geq c_{p}\int_{\mathbb{G}}|\mathcal{R}g(x)|^{p}|x|^{p-Q}dx\geq c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}\frac{\left(\int_{\mathbb{G}}|g|^{p}|x|^{-\frac{Q+pb}{p}}
dx\right)^{p}}{\left(\int_{\mathbb{G}}
|g|^{p}|x|^{-\frac{bp}{p-1}}dx\right)^{p-1}}.\end{equation}
Taking into account that $g(x)=\widetilde{g}(|x|)$, $x\in\mathbb{G}$, and \eqref{aremterm2}, one calculates
$$\int_{\mathbb{G}}|x|^{-\frac{Q+pb}{p}}|g(x)|^{p}dx=|\sigma|\int_{0}^{\infty}r^{Q-p-\alpha p}|\widetilde{f}(r)|^{p}r^{-\frac{Q+pb}{p}}r^{Q-1}dr
$$
$$=\int_{\mathbb{G}}|f(x)|^{p}|x|^{Q-p-\alpha p-\frac{Q+pb}{p}}dx=
\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx.$$
On the other hand,
$$\int_{\mathbb{G}}|x|^{-\frac{bp}{p-1}}|g(x)|^{p}dx=|\sigma|\int_{0}^{\infty}r^{Q-p-\alpha p}|\widetilde{f}(r)|^{p}r^{-\frac{bp}{p-1}}r^{Q-1}dr$$
$$=\int_{\mathbb{G}}|f(x)|^{p}|x|^{Q-p-\alpha p-\frac{bp}{p-1}}dx=\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx.$$
Putting these into \eqref{aremterm4}, we obtain
\begin{equation}\label{aremterm4_01}J\geq c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}
\frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}}.
\end{equation}
Now let us prove the statement for non-radial functions. For a (possibly non-radial) function $f$ we consider the radial function given by its spherical $L^{p}$-average:
\begin{equation}\label{aremterm4_1}U(r)=\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}}.\end{equation}
Using the H\"{o}lder inequality, we calculate
$$\frac{d}{dr}U(r)=\frac{1}{p}\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\int_{\mathfrak S}p|f(ry)|^{p-2}f(ry)\overline{\frac{d}{dr}f(ry)}d\sigma(y)$$
$$\leq\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p-1}\left|\frac{d}{dr}f(ry)\right|d\sigma(y)$$
$$\leq \left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\left(\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}
\left(\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{p-1}{p}}$$
$$=\left(\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}.$$
Thus, we have
$$
\frac{d}{dr}U(r)\leq\left(\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}.
$$
It follows that
$$|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}U(r)\right|^{p}r^{Q-1-\alpha p}dr\leq
|\sigma|\int_{0}^{\infty}\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}r^{Q-1-\alpha p}d\sigma(y)dr$$
$$=\int_{\mathbb{G}}\left|\mathcal{R}f\right|^{p}|x|^{-\alpha p}dx,$$
that is,
\begin{equation}\label{aremterm5}
\int_{\mathbb{G}}\left|\mathcal{R}U\right|^{p}|x|^{-\alpha p}dx\leq\int_{\mathbb{G}}\left|\mathcal{R}f\right|^{p}|x|^{-\alpha p}dx.
\end{equation}
In view of \eqref{aremterm4_1}, we obtain
$$\int_{\mathbb{G}}|U(|x|)|^{p}|x|^{\theta}dx=|\sigma|\int_{0}^{\infty}|U(r)|^{p}r^{\theta+Q-1}dr$$
\begin{equation}\label{aremterm6}=|\sigma|\int_{0}^{\infty}\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)r^{\theta+Q-1}dr=
\int_{\mathbb{G}}|f(x)|^{p}|x|^{\theta}dx \end{equation}
for any $\theta\in\mathbb{R}$.
Then \eqref{aremterm5} and \eqref{aremterm6} imply that \eqref{aremterm1} holds also for all non-radial functions: indeed, by \eqref{aremterm5} the left-hand side of \eqref{aremterm1} for $f$ dominates the one for the radial function $U$, while by \eqref{aremterm6} the right-hand side is the same for $f$ and $U$, so it remains to apply the already established radial case to $U$.
\end{proof}
\section{Stability of anisotropic $L^{p}$-weighted Hardy inequalities}
\label{stab}
In this section we establish a remainder estimate in the $L^{p}$-weighted Hardy inequality involving the distance to the set of extremisers: estimates of such type are known as stability estimates in the literature.
Let us denote
\begin{equation}\label{aremterm7}
f_{\alpha}(x)=|x|^{-\frac{Q-p-\alpha p}{p}}
\end{equation}
for $-\infty<\alpha<\frac{Q-p}{p}$, and we set
\begin{equation}\label{aremterm8}
d_{R}(f,g):=\left(\int_{\mathbb{G}}\frac{|f(x)-g(x)|^{p}}{\left|\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx\right)^{\frac{1}{p}}
\end{equation}
for functions $f$ and $g$ for which the integral in \eqref{aremterm8} is finite.
\begin{thm}\label{aremterm_thm}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q\geq3$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $2\leq p<Q$ and $-\infty<\alpha<\frac{Q-p}{p}$. Then for all radial functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{(\alpha+1)p}}dx$$
\begin{equation}\label{aremterm9}
\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\sup_{R>0}d_{R}(f,c_{f}(R)f_{\alpha})^{p},
\end{equation}
where $c_{f}(R)=R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)$ with $f(x)=\widetilde{f}(r)$, $|x|=r$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, $c_{p}$ is defined in Lemma \ref{FrS}, $f_{\alpha}$ and $d_{R}(\cdot,\cdot)$ are defined in \eqref{aremterm7} and \eqref{aremterm8}, respectively.
\end{thm}
\begin{proof}[Proof of Theorem \ref{aremterm_thm}] Since $p\geq2$, as in \eqref{aremterm3} in the proof of Theorem \ref{aremterm}, we have
$$J(f)=\int_{\mathbb{G}}|\mathcal{R}f|^{p}|x|^{-\alpha p}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{(\alpha+1)p}}dx$$
\begin{equation}\label{aremterm10}
\geq c_{p}|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{g}\right|^{p}r^{p-1}dr=
c_{p}\int_{\mathbb{G}}\left|\frac{d}{dr}g\right|^{p}|x|^{p-Q}dx.
\end{equation}
By Theorem 3.1 in \cite{Ruzhansky-Suragan:critical} or Remark \ref{ScalHardyrem} with $\gamma=p$, we obtain
$$J(f)\geq c_{p}\int_{\mathbb{G}}|\mathcal{R}g|^{p}|x|^{p-Q}dx\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|g(x)-g(\frac{Rx}{|x|})\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{Q}}dx$$
$$=c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left||x|^{\frac{Q-p-\alpha p}{p}}f(x)-R^{\frac{Q-p-\alpha p}{p}}f(\frac{Rx}{|x|})\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{Q}}dx$$
for any $R>0$.
Here using $f(x)=\widetilde{f}(r)$, $r=|x|$, one calculates
$$J(f)\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|f(x)-R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)|x|^{-\frac{Q-p-\alpha p}{p}}\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx$$
$$=c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|f(x)-c_{f}(R)|x|^{-\frac{Q-p-\alpha p}{p}}\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx,$$
yielding \eqref{aremterm9}.
\end{proof}
\section{Critical Hardy inequalities of logarithmic type and uncertainty principle}
\label{SEC:critHardy}
In this section, we present critical Hardy inequalities of logarithmic type on the homogeneous group
$\mathbb{G}$. In the abelian isotropic case, the following result was obtained in \cite{MOW15}. In the case $\gamma=p$ this result on the homogeneous group was proved in \cite{Ruzhansky-Suragan:critical}.
\begin{thm}\label{ScalHardy} Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$.
Then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ and all $R>0$ we have
\begin{equation}\label{ScalHardy1}\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}
\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})}
\leq\frac{p}{\gamma-1}\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})},
\end{equation}
where $f_{R}(x)=f\left(R\frac{x}{|x|}\right)$, $\mathcal{R}$ is defined in \eqref{EQ:Euler}, and the constant $\frac{p}{\gamma-1}$ is optimal.
\end{thm}
\begin{proof}[Proof of Theorem \ref{ScalHardy}]
First, let us consider the integrals in \eqref{ScalHardy1} restricted to
$B(0, R)$. Introducing polar coordinates $(r,y)=(|x|, \frac{x}{\mid x\mid})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$, where $\mathfrak S$ is the sphere as in \eqref{EQ:sphere}, and using \eqref{EQ:polar}, we have
$$\int_{B(0,R)}\frac{|f(x)-f_{R}(x)|^{p}}
{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx$$
$$=\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r^{Q}\left(\log\frac{R}{r}\right)^{\gamma}}r^{Q-1}d\sigma(y)dr$$
$$=\int_{0}^{R}\frac{d}{dr}\left(\frac{1}{(\gamma-1)\left(\log\frac{R}{r}\right)^{\gamma-1}}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p}d\sigma(y)\right)dr$$
$$-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}
\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2}(f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr$$
$$=-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2}
(f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr,$$
where $p-\gamma+1>0$, so that the boundary term at $r=R$ vanishes due to the inequalities
$$|f(ry)-f(Ry)|\leq C(R-r),$$
$$\log\frac{R}{r}\geq\frac{R-r}{R}.$$
Then, by the H\"{o}lder inequality, we get
$$\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r\left(\log\frac{R}{r}\right)^{\gamma}}d\sigma(y)dr$$
$$=-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2}
(f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr$$
$$\leq\frac{p}{\gamma-1}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-1}\left|\frac{df(ry)}{dr}\right| d\sigma(y)dr$$
$$\leq\frac{p}{\gamma-1}\left(\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r\left(\log\frac{R}{r}\right)
^{\gamma}}d\sigma(y)dr\right)^{\frac{p-1}{p}}$$
$$\times\left(\int_{0}^{R}\int_{\mathfrak S}r^{p-1}\left(\log\frac{R}{r}\right)^{p-\gamma}
\left|\frac{df(ry)}{dr}\right|^{p}d\sigma(y)dr\right)^{\frac{1}{p}}.$$
Thus we obtain
$$\left(\int_{B(0,R)}\frac{\left|f(x)-f_{R}(x)\right|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}
dx\right)^{\frac{1}{p}}$$
\begin{equation}\label{ScalHardy2}\leq\frac{p}{\gamma-1}\left(\int_{B(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|^{p-\gamma}
\left|\mathcal{R}f(x)\right|^{p}dx\right)^{\frac{1}{p}}.
\end{equation}
Similarly, we have
$$\left(\int_{B^{c}(0,R)}\frac{\left|f(x)-f_{R}(x)\right|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}
dx\right)^{\frac{1}{p}}$$
\begin{equation}\label{ScalHardy3}\leq\frac{p}{\gamma-1}\left(\int_{B^{c}(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|
^{p-\gamma}\left|\mathcal{R}f(x)\right|^{p}dx\right)^{\frac{1}{p}}.
\end{equation}
The inequalities \eqref{ScalHardy2} and \eqref{ScalHardy3} imply \eqref{ScalHardy1}.
Now let us prove the optimality of the constant $\frac{p}{\gamma-1}$ in \eqref{ScalHardy1}. The inequality \eqref{ScalHardy1} gives that
\begin{equation}\label{optim1}
\left(\int_{B(0,R)}\frac{|f(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx\right)^{\frac{1}{p}}
\leq \frac{p}{\gamma-1}\left(\int_{B(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|^{p-\gamma}|\mathcal{R}f(x)|^{p}dx\right)^{\frac{1}{p}}.
\end{equation}
It is enough to prove the optimality of the constant $\frac{p}{\gamma-1}$ in \eqref{optim1}. As in the abelian case (see \cite[Section 3]{MOW15}), we define the following sequence of functions
$$f_{k}(x):=\begin{cases}
(\log(kR))^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;|x|\leq\frac{1}{k},\\
(\log\frac{R}{|x|})^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;\frac{1}{k}\leq|x|\leq\frac{R}{2},\\
\frac{2}{R}(\log2)^{\frac{\gamma-1}{p}}(R-|x|), \;\;\;{\rm when}\;\;\;\frac{R}{2}\leq|x|\leq R
\end{cases}$$
for large $k\in \mathbb{N}$. Letting $\widetilde{f}_{k}(r):=f_{k}(x)$ with $r=|x|\geq0$, we get
$$\frac{d}{dr}\widetilde{f}_{k}(r)=\begin{cases}
0, \;\;\;{\rm when}\;\;\;r<\frac{1}{k},\\
-\frac{\gamma-1}{p}r^{-1}(\log\frac{R}{r})^{\frac{\gamma-1}{p}-1}, \;\;\;{\rm when}\;\;\;\frac{1}{k}<r<\frac{R}{2},\\
-\frac{2}{R}(\log2)^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;\frac{R}{2}<r<R.
\end{cases}$$
Denoting by $|\sigma|$ the $Q-1$ dimensional surface measure of the unit sphere,
by a direct calculation one has
\begin{multline*}
\int_{B(0,R)}|x|^{p-Q}
\left|\log\frac{R}{|x|}\right|^{p-\gamma}\left|\mathcal{R}f_{k}(x)\right|
^{p}dx
=|\sigma|\int_{0}^{R}r^{p-1}\left|\log\frac{R}{r}\right|^{p-\gamma}
\left|\frac{d}{dr}\widetilde{f}_{k}(r)\right|^{p}dr\\
=|\sigma|\left(\frac{\gamma-1}{p}\right)^{p}\int_{\frac{1}{k}}^{\frac{R}{2}}r^{-1}\left(\log\frac{ R}{r}\right)^{-1}dr
+|\sigma|(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p}
\int_{\frac{R}{2}}^{R}r^{p-1}\left(\log
\frac{R}{r}\right)^{p-\gamma}dr\end{multline*}
\begin{equation} \label{optim2}
=|\sigma|\left(\frac{\gamma-1}{p}\right)^{p}\left((\log(\log kR))-\log(\log 2)\right)+C_{\gamma,p},
\end{equation}
where $$C_{\gamma,p}:=2^{p}(\log2)^{\gamma-1}|\sigma|\int_{0}^{\log2}s^{p-\gamma}
e^{-ps}ds.
$$
Since $p-\gamma+1>0$, we get $C_{\gamma,p}<+\infty$. On the other hand, we see
$$
\int_{B(0,R)}\frac{|f_{k}(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx
=|\sigma|\int_{0}^{R}\frac{|\widetilde{f}_{k}(r)|^{p}}{r\left|\log\frac{R}{r}\right|^{\gamma}}dr
$$$$=|\sigma|(\log(kR))^{\gamma-1}\int_{0}^{\frac{1}{k}}r^{-1}
\left(\log\frac{R}{r}\right)^{-\gamma}dr
+|\sigma|\int_{\frac{1}{k}}^{\frac{R}{2}}r^{-1}
\left(\log\frac{R}{r}\right)^{-1}dr$$$$+|\sigma|(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p}
\int_{\frac{R}{2}}^{R}r^{-1}
(R-r)^{p}\left(\log\frac{R}{r}\right)^{-\gamma}dr
$$
\begin{equation}\label{optim3}
=\frac{|\sigma|}{\gamma-1}+|\sigma|(\log(\log(kR))-\log(\log(2)))
+C_{R,\gamma,p},
\end{equation}
where
$$
C_{R,\gamma,p}:=(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p}|\sigma|\int_{\frac{R}{2}}^{R}r^{-1}
(R-r)^{p}\left(\log\frac{R}{r}\right)^{-\gamma}dr.
$$
The inequality $\log\frac{R}{r}\geq \frac{R-r}{R}$ for all $r\leq R$ and the assumption $p-\gamma>-1$, imply $C_{R,\gamma,p}<+\infty$. Then, by \eqref{optim2} and \eqref{optim3}, we have
\begin{multline*}
\left(\int_{B(0,R)}|x|^{p-Q}
\left|\log\frac{R}{|x|}\right|^{p-\gamma}\left|\mathcal{R}f_{k}(x)\right|
^{p}dx\right)\\ \times \left(\int_{B(0,R)}\frac{|f_{k}(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx\right)^{-1}
\rightarrow \left(\frac{\gamma-1}{p}\right)^{p}
\end{multline*}
as $k \rightarrow \infty$, which implies that the constant $\frac{p}{\gamma-1}$ in \eqref{optim1} is optimal.
\end{proof}
\begin{cor}[{\rm Uncertainty type principle on $\mathbb{G}$}]\label{ScalHardycor} Let $1<p<\infty$ and $q>1$ be such that
$\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$. Then for any $R>0$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
\begin{equation}\label{ScalHardycor1}
\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}
\geq\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})}.
\end{equation}
Moreover,
\begin{multline}\label{ScalHardycor2}
\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{p'}}\left(\log\frac{R}{|x|}\right)^{2-\frac{\gamma}{p}}}
\right\|_{L^{p'}(\mathbb{G})}\geq\frac{\gamma-1}{p}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{2}}\log\frac{R}{|x|}}\right\|^{2}_{L^{2}(\mathbb{G})}
\end{multline}
holds for $\frac{1}{p}+\frac{1}{p'}=1$.
\end{cor}
\begin{proof}[Proof of Corollary \ref{ScalHardycor}]
By \eqref{ScalHardy1}, we have
$$\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}\geq\frac{\gamma-1}{p}
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})}
\|f\|_{L^{q}(\mathbb{G})}$$
$$=\frac{\gamma-1}{p}
\left(\int_{\mathbb{G}}\left|\frac{f(x)-f_{R}(x)}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}
\right|^{2\frac{p}{2}}dx\right)^{\frac{1}{2}\frac{2}{p}}
\left(\int_{\mathbb{G}}|f(x)|^{2\frac{q}{2}}dx\right)^{\frac{1}{2}\frac{2}{q}},$$
and using the H\"{o}lder inequality, we obtain
$$\left\||x|^{\frac{p-Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{p-\gamma}{p}}
\mathcal{R}f \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}$$
$$\geq\frac{\gamma-1}{p}\left(\int_{\mathbb{G}}\left|\frac{f(x)(f(x)-f_{R}(x))}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right|^{2}dx\right)^{\frac{1}{2}}
=\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})}.$$
Similarly, one can prove \eqref{ScalHardycor2}.
\end{proof}
\begin{rem}\label{ScalHardyrem}
When $\gamma=p$, Theorem \ref{ScalHardy} gives \cite[Theorem 3.1]{Ruzhansky-Suragan:critical}:
\begin{equation}\label{ScalHardy4}
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\log\frac{R}{|x|}}\right\|_{L^{p}(\mathbb{G})}
\leq\frac{p}{p-1}\left\||x|^{\frac{p-Q}{p}}
\mathcal{R}f \right\|_{L^{p}(\mathbb{G})}, \;\;1<p<\infty,
\end{equation}
for all $R>0$.
\end{rem}
\section{Critical and subcritical Hardy inequalities}
\label{SEC:crit_subcrit_con}
In this section, we study the relation between the critical and the subcritical Hardy inequalities on homogeneous groups.
\begin{prop}\label{crit_subcrit_pr2}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q\geq 3$ and $Q\geq m+1$, $m\geq2$. Let $|\cdot|$ be a homogeneous quasi-norm. Then for any nonnegative radial function $g\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$, there exists a nonnegative radial function $f\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ such that
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$
\begin{equation}\label{crit_subcrit3}
=\frac{|\sigma|}
{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right)
\end{equation}
holds true, where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, and $|\sigma|$ and $|\widetilde{\sigma}|$ are the $Q-1$ and $m-1$ dimensional surface measures of the corresponding unit spheres, respectively.
\end{prop}
\begin{proof}[Proof of Proposition \ref{crit_subcrit_pr2}] Let $r=|x|$, $x\in\mathbb{G}$ and $s=|z|$, $z\in\mathbb{\widetilde{G}}$, where $\mathbb{\widetilde{G}}$ is a homogeneous group
of homogeneous dimension $m$. Let us define a radial function $f=f(x)\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ for a nonnegative radial function $g=g(z)\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$:
\begin{equation}\label{crit_subcrit4}
f(r)=g(s(r)),
\end{equation}
where $s(r)=R\exp(1-r^{-\frac{Q-m}{m-1}})$, that is,
$$r^{-\frac{Q-m}{m-1}}=\log\frac{Re}{s}, \;s'(r)=\frac{Q-m}{m-1}r^{-\frac{Q-m}{m-1}-1}s(r).$$
Here we see that $s'(r)>0$ for $r\in[0,1]$ and $s(0)=0$, $s(1)=R$. Since $g(s)\equiv0$ near $s=R$, we also note that $f\equiv0$ near $r=1$. Then a direct calculation shows
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$
$$=|\sigma|\int_{0}^{1}|f'(r)|^{m}r^{Q-1}dr-\left(\frac{Q-m}{m}\right)^{m}|\sigma|\int_{0}^{1}f^{m}(r)r^{Q-m-1}dr$$
$$=|\sigma|\int_{0}^{R}|g'(s)s'(r(s))|^{m}r^{Q-1}(s)\frac{ds}{s'(r(s))}-\left(\frac{Q-m}{m}\right)^{m}|\sigma|\int_{0}^{R}g^{m}(s)
r^{Q-m-1}(s)\frac{ds}{s'(r(s))}$$
$$=|\sigma|\left(\frac{Q-m}{m-1}\right)^{m-1}\int_{0}^{R}|g'(s)|^{m}s^{m-1}ds-\left(\frac{Q-m}{m}\right)^{m}\frac{m-1}{Q-m}
|\sigma|\int_{0}^{R}\frac{g^{m}(s)}{s\left(\log\frac{Re}{s}\right)^{m}}ds$$
$$=\frac{|\sigma|}{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right),$$
yielding \eqref{crit_subcrit3}.
\end{proof}
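To illustrate \eqref{crit_subcrit3} in the simplest admissible case (the particular values $Q=4$ and $m=2$ are chosen here only for illustration), one has $s(r)=R\exp(1-r^{-2})$ and the identity reads
$$\int_{B^{4}(0,1)}|\mathcal{R}f|^{2}dx-\int_{B^{4}(0,1)}\frac{|f|^{2}}{|x|^{2}}dx
=\frac{2|\sigma|}{|\widetilde{\sigma}|}\left(\int_{B^{2}(0,R)}|\mathcal{R}g|^{2}dz-
\frac{1}{4}\int_{B^{2}(0,R)}\frac{|g|^{2}}{|z|^{2}\left(\log\frac{Re}{|z|}\right)^{2}}dz\right),$$
so the nonnegativity of the subcritical Hardy remainder on the left-hand side yields, for radial functions, the nonnegativity of the critical Hardy remainder on the right-hand side.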
\section{Extended Caffarelli-Kohn-Nirenberg inequalities}\label{SEC:CKN}
In this section, we introduce new Caffarelli-Kohn-Nirenberg type inequalities in the Euclidean setting of $\mathbb R^{n}$ as well as on homogeneous groups. For the convenience of the reader we recall Theorem \ref{THM:CKN-i} and then also explain how it implies Theorem \ref{clas_CKN-2}:
\begin{thm}\label{CKN_thm}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the following Caffarelli-Kohn-Nirenberg type inequalities for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$:
If $Q\neq p(1-a)$, then
\begin{equation}\label{CKN_thm2}
\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\||x|^{a}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.
\end{equation}
If $Q=p(1-a)$, then
\begin{equation}\label{CKN_thm1}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb{G})}
\leq p^{\delta} \left\||x|^{a}\log|x|\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.
\end{equation}
The constant in the inequality \eqref{CKN_thm2} is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$. Moreover, the constants in \eqref{CKN_thm2} and \eqref{CKN_thm1} are sharp for $\delta=0$ or $\delta=1$. Here $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative.
\end{thm}
\begin{rem} Our conditions $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$ imply the condition \eqref{clas_CKN2} of Theorem \ref{clas_CKN}, and in our case $a-d=1$.
\end{rem}
\begin{rem}\label{CKN_rem1}
In the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ and $Q=n$, \eqref{CKN_thm1} implies a new type of Caffarelli-Kohn-Nirenberg inequality for any quasi-norm on $\mathbb R^{n}$: Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $n=p(1-a)$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the Caffarelli-Kohn-Nirenberg type inequality for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ and for any homogeneous quasi-norm $|\cdot|$:
\begin{equation}\label{CKN_rem2}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}.
\end{equation}
By the Schwarz inequality with the standard Euclidean distance given by $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{n}^{2}}$, we obtain the Euclidean form of the Caffarelli-Kohn-Nirenberg type inequality:
\begin{equation}\label{CKN_rem3}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})},
\end{equation}
where $\nabla$ is the standard gradient in $\mathbb R^{n}$.
Similarly, we write the inequality \eqref{CKN_thm2} in the abelian case: Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $n\neq p(1-a)$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the Caffarelli-Kohn-Nirenberg type inequality for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ and for any homogeneous quasi-norm $|\cdot|$:
\begin{equation}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}.
\end{equation}
Then, using the Schwarz inequality with the standard Euclidean distance given by $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{n}^{2}}$, we obtain the Euclidean form of the Caffarelli-Kohn-Nirenberg type inequality:
\begin{equation}\label{CKN_rem3_1}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}.
\end{equation}
Note that if
\begin{equation}\label{CKNrem1}\frac{1}{p}+\frac{a}{n}>0, \;\frac{1}{q}+\frac{b}{n}>0 \;\;{\rm and}\;\; \frac{1}{r}+\frac{c}{n}>0
\end{equation} hold, then the inequality \eqref{CKN_rem3_1} is contained in the family of Caffarelli-Kohn-Nirenberg inequalities \cite{CKN84}. In this case, if we require $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$, then we obtain the inequality \eqref{CKN_rem3_1} with the sharp constant. Moreover, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$. If \eqref{CKNrem1} is not satisfied, then the inequality \eqref{CKN_rem3_1} is not covered by Theorem \ref{clas_CKN} because condition \eqref{clas_CKN0} fails. So we obtain a new range of the Caffarelli-Kohn-Nirenberg inequality \cite{CKN84}.
Thus, the inequalities \eqref{CKN_rem3} and \eqref{CKN_rem3_1} are new already in the abelian case and, moreover, \eqref{CKN_thm2} and \eqref{CKN_thm1} hold for any choice of homogeneous quasi-norm.
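For instance (the particular values below are chosen only for illustration), take $n=3$, $p=q=r=2$, $\delta=\frac{1}{2}$, $a=-6$ and $b=0$; then $c=-\frac{7}{2}$, $n-p(1-a)=-11\neq0$ and all the assumptions above are satisfied, while the first condition in \eqref{CKNrem1} fails. The resulting inequality
$$\left\||x|^{-\frac{7}{2}}f\right\|_{L^{2}(\mathbb R^{3})}
\leq \left(\frac{2}{11}\right)^{\frac{1}{2}}\left\||x|^{-6}\nabla f\right\|^{\frac{1}{2}}_{L^{2}(\mathbb R^{3})}
\left\|f\right\|^{\frac{1}{2}}_{L^{2}(\mathbb R^{3})}$$
is therefore outside the classical range.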
\end{rem}
The proof of Theorem \ref{CKN_thm} will be based on the following family of weighted Hardy inequalities that was obtained in \cite[Theorem 3.4]{RSY16}, where $\mathbb{E}=|x|\mathcal{R}$ is the Euler operator.
\begin{thm}[\cite{RSY16}]\label{L_p_weighted_th}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$ and let $\alpha\in \mathbb{R}$.
Then for all complex-valued functions $f\in C^{\infty}_{0}(\mathbb{G}\backslash\{0\}),$ $1<p<\infty,$ and any homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$ for $\alpha p \neq Q$ we have
\begin{equation}\label{L_p_weighted}
\left\|\frac{f}{|x|^{\alpha}}\right\|_{L^{p}(\mathbb{G})}\leq
\left|\frac{p}{Q-\alpha p}\right|\left\|\frac{1}{|x|^{\alpha}}\mathbb{E} f\right\|_{L^{p}(\mathbb{G})}.
\end{equation}
Moreover, the constant $\left|\frac{p}{Q-\alpha p}\right|$ in \eqref{L_p_weighted} is sharp.
For $\alpha p=Q$ we have
\begin{equation}\label{L_p_weighted_log}
\left\|\frac{f}{|x|^{\frac{Q}{p}}}\right\|_{L^{p}(\mathbb{G})}\leq
p\left\|\frac{\log|x|}{|x|^{\frac{Q}{p}}}\mathbb{E} f\right\|_{L^{p}(\mathbb{G})},
\end{equation}
where the constant $p$ is sharp.
\end{thm}
We briefly recall its proof for the convenience of the reader but also since it will be useful in our argument.
\begin{proof}[Proof of Theorem \ref{L_p_weighted_th}]
Using integration by parts, for $\alpha p \neq Q$ we obtain
\begin{multline*}
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx=\int_{0}^{\infty}\int_{\mathfrak S}|f(ry)|^{p}r^{Q-1-\alpha p}d\sigma(y)dr\\
=-\frac{p}{Q-\alpha p}\int_{0}^{\infty} r^{Q-\alpha p} {\rm Re} \int_{\mathfrak S}|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr\\
\leq \left|\frac{p}{Q-\alpha p}\right|\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{\alpha p}}dx=
\left|\frac{p}{Q-\alpha p}\right|\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{\alpha+\alpha (p-1)}}dx.
\end{multline*}
By H\"{o}lder's inequality, it follows that
$$
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx\leq \left|\frac{p}{Q-\alpha p}\right|\left(\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)|^{p}}{|x|^{\alpha p}}dx\right)
^{\frac{1}{p}}\left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx\right)^{\frac{p-1}{p}},
$$
which gives \eqref{L_p_weighted}.
Now we show the sharpness of the constant. We need to check the equality condition in the above application of H\"older's inequality.
Let us consider the function
\begin{equation}\label{eq_sharp_g}
g(x)=\frac{1}{|x|^{C}},
\end{equation}
where $C\in\mathbb{R}, C\neq 0$ and $\alpha p\neq Q$. Then by a direct calculation we obtain
\begin{equation}\label{Holder_eq1}
\left|\frac{1}{C}\right|^{p}\left(\frac{|\mathbb{E}g(x)|}{|x|^{\alpha }}\right)^{p}=\left(\frac{|g(x)|^{p-1}}
{|x|^{\alpha (p-1)}}\right)^{\frac{p}{p-1}},
\end{equation}
which satisfies the equality condition in H\"older's inequality.
This gives the sharpness of the constant $\left|\frac{p}{Q-\alpha p}\right|$ in \eqref{L_p_weighted}.
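Explicitly (we record this short computation only for the reader's convenience), since $\mathbb{E}g(x)=|x|\frac{d}{d|x|}|x|^{-C}=-Cg(x)$, both sides of \eqref{Holder_eq1} reduce to
$$\left|\frac{1}{C}\right|^{p}\left(\frac{|C|\,|x|^{-C}}{|x|^{\alpha}}\right)^{p}
=|x|^{-(C+\alpha)p}
=\left(\frac{|x|^{-C(p-1)}}{|x|^{\alpha(p-1)}}\right)^{\frac{p}{p-1}}.$$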
Now let us prove \eqref{L_p_weighted_log}. Using integration by parts, we have
\begin{multline*}
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx=\int_{0}^{\infty}\int_{\mathfrak S}|f(ry)|^{p}r^{Q-1-Q}d\sigma(y)dr\\
=-p\int_{0}^{\infty} \log r {\rm Re} \int_{\mathfrak S}|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr\\
\leq p \int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{Q}}|\log|x||dx=
p\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||\log|x||}{|x|^{\frac{Q}{p}}}\frac{|f(x)|^{p-1}}{|x|^{\frac{Q(p-1)}{p}}}dx.
\end{multline*}
By H\"{o}lder's inequality, it follows that
$$
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx\leq p\left(\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)|^{p}|\log|x||^{p}}{|x|^{Q}}dx\right)
^{\frac{1}{p}}\left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx\right)^{\frac{p-1}{p}},
$$
which gives \eqref{L_p_weighted_log}.
Now we show the sharpness of the constant. We need to check the equality condition in the above application of H\"older's inequality.
Let us consider the function
$$h(x)=(\log|x|)^{C},$$
where $C\in\mathbb{R}$ and $C\neq 0$.
Then by a direct calculation we obtain
\begin{equation}\label{Holder_eq2}
\left|\frac{1}{C}\right|^{p}\left(\frac{|\mathbb{E}h(x)||\log|x||}{|x|^{\frac{Q}{p}}}\right)^{p}=\left(\frac{|h(x)|^{p-1}}
{|x|^{\frac{Q (p-1)}{p}}}\right)^{\frac{p}{p-1}},
\end{equation}
which satisfies the equality condition in H\"older's inequality.
This gives the sharpness of the constant $p$ in \eqref{L_p_weighted_log}.
\end{proof}
We are now ready to prove Theorem \ref{CKN_thm}.
\begin{proof}[Proof of Theorem \ref{CKN_thm}] {\bf Case $\delta=0$}. In this case, we have $q=r$ and $b=c$ by $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$, respectively. Then, the inequalities \eqref{CKN_thm2} and \eqref{CKN_thm1} are equivalent to the trivial estimate
$$
\||x|^{b}f\|_{L^{q}(\mathbb{G})}
\leq \left\||x|^{b}f\right\|_{L^{q}(\mathbb{G})}.
$$
{\bf Case $\delta=1$}. Notice that in this case, $p=r$ and $a-1=c$. By Theorem \ref{L_p_weighted_th}, we have for $Q+pc=Q+p(a-1)\neq0$ the inequality
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq \left|\frac{p}{Q+pc}\right|\||x|^{c}\mathbb{E}f\|_{L^{r}(\mathbb{G})},$$
where $\mathbb{E}=|x|\mathcal{R}$ is the Euler operator. Taking into account this, we get
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq \left|\frac{p}{Q+pc}\right|\||x|^{c+1}\mathcal{R}f\|_{L^{r}(\mathbb{G})}$$
$$=\left|\frac{p}{Q-p(1-a)}\right|\||x|^{a}\mathcal{R}f\|_{L^{p}(\mathbb{G})},$$
which implies \eqref{CKN_thm2}.
For $Q+pc=Q+p(a-1)=0$ by Theorem \ref{L_p_weighted_th} we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq p\||x|^{c}\log|x|\mathbb{E}f\|_{L^{r}(\mathbb{G})}=p\||x|^{c+1}\log|x|\mathcal{R}f\|_{L^{r}(\mathbb{G})}$$
$$=p\||x|^{a}\log|x|\mathcal{R}f\|_{L^{p}(\mathbb{G})}$$
which gives \eqref{CKN_thm1}. In this case, the constants in \eqref{CKN_thm2} and \eqref{CKN_thm1} are sharp, since the constants in Theorem \ref{L_p_weighted_th} are sharp.
{\bf Case $\delta\in(0,1)\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$}.
Taking into account $c=\delta(a-1)+b(1-\delta)$, a direct calculation gives $$\||x|^{c}f\|_{L^{r}(\mathbb{G})}=
\left(\int_{\mathbb{G}}|x|^{cr}|f(x)|^{r}dx\right)^{\frac{1}{r}}
=\left(\int_{\mathbb{G}}\frac{|f(x)|^{\delta r}}{|x|^{\delta r (1-a)}}\cdot \frac{|f(x)|^{(1-\delta)r}}{|x|^{-br(1-\delta)}}dx\right)^{\frac{1}{r}}.$$
Since $\delta\in(0,1)\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $p+q\geq r$, applying H\"{o}lder's inequality with the conjugate exponents $\frac{p}{\delta r}$ and $\frac{q}{(1-\delta)r}$ (note that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$), we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{p(1-a)}}dx\right)^{\frac{\delta}{p}}
\left(\int_{\mathbb{G}}\frac{|f(x)|^{q}}{|x|^{-bq}}dx\right)^{\frac{1-\delta}{q}}$$
\begin{equation}\label{CKN_thm1_1}=\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.
\end{equation}
Here we note that when $p=q$ and $a-b=1$, H\"{o}lder's equality condition holds for any function. We also note that in the case $p\neq q$ the function
\begin{equation}\label{Holder_eq_2}
h(x)=|x|^{\frac{1}{(p-q)}\left(p(1-a)+bq\right)}
\end{equation} satisfies H\"{o}lder's equality condition:
$$\frac{|h|^{p}}{|x|^{p(1-a)}}=\frac{|h|^{q}}{|x|^{-bq}}.$$
If $Q\neq p(1-a)$, then by Theorem \ref{L_p_weighted_th}, we have
$$\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}\leq
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathbb{E}f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}$$
\begin{equation}\label{eq_sharp_1}=
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathcal{R}f}{|x|^{-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}, \;\;1<p<\infty.
\end{equation}
Putting this in \eqref{CKN_thm1_1}, one has
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathcal{R}f}{|x|^{-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.$$
We note that in the case $p=q$, $a-b=1$, H\"older's equality condition for the inequalities \eqref{CKN_thm1_1} and \eqref{eq_sharp_1} holds true for $g(x)$ in \eqref{eq_sharp_g}. Moreover, in the case $p\neq q$, $p(1-a)+bq\neq0$, H\"older's equality condition for the inequalities \eqref{CKN_thm1_1} and \eqref{eq_sharp_1} holds true for $h(x)$ in \eqref{Holder_eq_2}. Therefore, the constant in \eqref{CKN_thm2} is sharp when $p=q$, $a-b=1$ or $p\neq q$, $p(1-a)+bq\neq0$.
Now let us consider the case $Q=p(1-a)$. Using Theorem \ref{L_p_weighted_th}, one has
$$\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}\leq
p^{\delta} \left\|\frac{\log|x|}{|x|^{1-a}}\mathbb{E}f\right\|^{\delta}_{L^{p}(\mathbb{G})}, \;\;1<p<\infty.$$
Then, putting this in \eqref{CKN_thm1_1}, we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq
p^{\delta} \left\|\frac{\log|x|}{|x|^{1-a}}\mathbb{E}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}
$$
$$=p^{\delta} \left\|\frac{\log|x|}{|x|^{-a}}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})},$$
which is \eqref{CKN_thm1}. This completes the proof.
\end{proof}
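As a simple illustration of Theorem \ref{CKN_thm} (this special case is recorded here only for orientation), take $\delta=1$ and $a=b=0$, so that $r=p$, $c=-1$ and the choice of $q$ is immaterial: for $Q\neq p$ the inequality \eqref{CKN_thm2} becomes the $L^{p}$-Hardy inequality
$$\left\|\frac{f}{|x|}\right\|_{L^{p}(\mathbb{G})}
\leq\left|\frac{p}{Q-p}\right|\left\|\mathcal{R}f\right\|_{L^{p}(\mathbb{G})},
\quad f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\}),$$
with the constant being sharp by the case $\delta=1$ of the theorem.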
\section{$L^{p}$-Hardy inequalities with super weights}
\label{SEC:2}
We now discuss versions of Hardy inequalities with more general weights, which we call super weights.
The following is the main result of this section.
\begin{thm}\label{1}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$ and let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb{G}$. Let $a,b>0$, $\alpha,\beta,m\in\mathbb{R}$, $1<p<\infty$ and $Q\geq1$.
\begin{itemize}
\item[(i)] If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation}\label{Lpweighted1}
\frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} .
\end{equation}
If $Q\neq pm+p$, then the constant $\frac{Q-pm-p}{p}$ is sharp.
\item[(ii)] If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation}\label{Lpweighted2}
\frac{Q-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation}
If $Q\neq pm+p-\alpha\beta$, then the constant $\frac{Q-pm+\alpha\beta-p}{p}$ is sharp.
\end{itemize}
\end{thm}
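Before the proof we note, as an illustrative limiting case only, that letting $b\to0^{+}$ in \eqref{Lpweighted1} (for fixed $f$ the super weight converges uniformly on the support of $f$), the factor $(a+b|x|^{\alpha})^{\frac{\beta}{p}}$ degenerates to the constant $a^{\frac{\beta}{p}}$, which cancels from both sides, and one recovers the weighted $L^{p}$-Hardy inequality (cf. Theorem \ref{L_p_weighted_th} with $\alpha=m+1$ there):
$$\frac{Q-pm-p}{p}\left\|\frac{f}{|x|^{m+1}}\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{\mathcal{R}f}{|x|^{m}}\right\|_{L^{p}(\mathbb{G})}.$$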
\begin{proof}[Proof of Theorem \ref{1}] We may assume that $Q\neq pm+p$ since for $Q=pm+p$ there is nothing
to prove. Introducing polar coordinates $(r,y)=(|x|, \frac{x}{|x|})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$, where $\mathfrak S$ is the
unit quasi-sphere
\begin{equation}\label{EQ:sphere}
\mathfrak S:=\{x\in \mathbb{G}:\,|x|=1\},
\end{equation} and using the polar decomposition on homogeneous groups (see, for example, \cite{FS-Hardy} or \cite{FR}) and integrating by parts, we get
\begin{equation}\label{eqp}
\int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx
=\int_{0}^{\infty}\int_{\mathfrak S}
\frac{(a+br^{\alpha})^{\beta}}{r^{pm+p}}|f(ry)|^{p} r^{Q-1}d\sigma(y)dr.
\end{equation}
(i) Since $a,b>0$, $\alpha \beta>0$ and $m<\frac{Q-p}{p}$ we obtain
$$
\int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\int_{0}^{\infty}\int_{\mathfrak S}
(a+br^{\alpha})^{\beta}r^{Q-1-pm-p}\left(\frac{\alpha\beta b r^{\alpha}}{(a+br^{\alpha})(Q-pm-p)}+1\right)|f(ry)|^{p} d\sigma(y)dr
$$
$$=\int_{0}^{\infty}\int_{\mathfrak S}
\frac{d}{dr}\left(\frac{(a+br^{\alpha})^{\beta}r^{Q-pm-p}}{Q-pm-p}\right)|f(ry)|^{p} d\sigma(y)dr$$
$$
=-\frac{p}{Q-pm-p}\int_{0}^{\infty}(a+br^{\alpha})^{\beta}r^{Q-pm-p} \,{\rm Re}\int_{\mathfrak S}
|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr
$$
$$\leq \left|\frac{p}{Q-pm-p}\right|\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}|\mathcal{R}f(x)||f(x)|^{p-1}}{|x|^{pm+p-1}}dx
$$
$$=\frac{p}{Q-pm-p}
\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|f(x)|^{p-1}}{|x|^{(m+1)(p-1)}}
\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}}{|x|^{m}}|\mathcal{R}f(x)|dx.$$
By H\"{o}lder's inequality, it follows that
$$
\int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\frac{p}{Q-pm-p}\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx\right)^\frac{p-1}{p}
\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm}}|\mathcal{R}f(x)|^{p}dx\right)^\frac{1}{p},
$$
which gives \eqref{Lpweighted1}.
Now we show the sharpness of the constant. We need to check the equality condition in the above application of H\"older's inequality.
Let us consider the function
$$g(x)=|x|^{C}, $$
where $C\in\mathbb{R}, C\neq 0$ and $Q\neq pm+p$. Then by a direct calculation we obtain
\begin{equation}\label{Holder_eq3}
\left|\frac{1}{C}\right|^{p}\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}|\mathcal{R}g(x)|}{|x|^{m}}\right)^{p}=\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|g(x)|^{p-1}}
{|x|^{(m+1) (p-1)}}\right)^{\frac{p}{p-1}},
\end{equation}
which satisfies the equality condition in H\"older's inequality.
This gives the sharpness of the constant $\frac{Q-pm-p}{p}$ in \eqref{Lpweighted1}.
Let us now prove Part (ii). Here we also assume that $Q\neq pm+p-\alpha\beta$, since for $Q=pm+p-\alpha\beta$ there is nothing
to prove. Using the polar decomposition, we have
the equality \eqref{eqp}.
Since $\alpha \beta<0$ and $pm-\alpha\beta<Q-p$ we obtain
$$
\int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\int_{0}^{\infty}\int_{\mathfrak S}
(a+br^{\alpha})^{\beta}r^{Q-1-pm-p}\left(\frac{b r^{\alpha}}{a+br^{\alpha}}+\frac{a}{a+br^{\alpha}}
\cdot\frac{Q-pm-p}{Q-pm-p+\alpha\beta}\right)|f(ry)|^{p} d\sigma(y)dr
$$
$$=\int_{0}^{\infty}\int_{\mathfrak S}
\frac{(a+br^{\alpha})^{\beta}r^{Q-1-pm-p}}{Q-pm-p+\alpha\beta}
\left(\frac{\alpha \beta b r^{\alpha}}{a+br^{\alpha}}+Q-pm-p\right)|f(ry)|^{p} d\sigma(y)dr
$$
$$=\int_{0}^{\infty}\int_{\mathfrak S}
\frac{d}{dr}\left(\frac{(a+br^{\alpha})^{\beta}
r^{Q-pm-p}}{Q-pm-p+\alpha\beta}\right)|f(ry)|^{p}
d\sigma(y)dr$$
$$
=-\frac{p}{Q-pm-p+\alpha\beta}\int_{0}^{\infty}(a+br^{\alpha})^{\beta}r^{Q-pm-p} \,{\rm Re}\int_{\mathfrak S}
|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr
$$
$$\leq \left|\frac{p}{Q-pm-p+\alpha\beta}\right|\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}|\mathcal{R}f(x)||f(x)|^{p-1}}{|x|^{pm+p-1}}dx
$$
$$=\frac{p}{Q-pm-p+\alpha\beta}
\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|f(x)|^{p-1}}{|x|^{(m+1)(p-1)}}
\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}}{|x|^{m}}|\mathcal{R}f(x)|dx.$$
By H\"{o}lder's inequality, it follows that
$$
\int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\frac{p}{Q-pm-p+\alpha\beta}\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx\right)^\frac{p-1}{p}
\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm}}|\mathcal{R}f(x)|^{p}dx\right)^\frac{1}{p},
$$
which gives \eqref{Lpweighted2}.
Now we show the sharpness of the constant. We need to check the equality condition in the above application of H\"older's inequality.
Let us consider the function
$$h(x)=|x|^{C}, $$
where $C\in\mathbb{R}, C\neq 0$ and $Q\neq pm+p-\alpha\beta$. Then by a direct calculation we obtain
\begin{equation}\label{Holder_eq4}
\left|\frac{1}{C}\right|^{p}\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}|\mathcal{R}h(x)|}{|x|^{m}}\right)^{p}=\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|h(x)|^{p-1}}
{|x|^{(m+1) (p-1)}}\right)^{\frac{p}{p-1}},
\end{equation}
which satisfies the equality condition in H\"older's inequality.
This gives the sharpness of the constant $\frac{Q-pm-p+\alpha\beta}{p}$
in \eqref{Lpweighted2}.
\end{proof}
\section{Higher order inequalities}
\label{Sec3}
In this section we present higher order $L^{p}$-Hardy type inequalities with super weights by iterating the obtained inequalities
\eqref{Lpweighted1} and \eqref{Lpweighted2}.
\begin{thm}\label{2}
Let $\mathbb{G}$ be a homogeneous group
of homogeneous dimension $Q$ and let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb{G}$. Let $a,b>0$, $\alpha,\beta,m\in\mathbb{R}$, $1<p<\infty$, $Q\geq1$ and $k\in \mathbb{N}$.
\begin{itemize}
\item[(i)] If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation}\label{Lpweighted3}
\left[\prod_{j=0}^{k-1}\left(\frac{Q-p}{p}-(m+j)\right)\right]
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation}
\item[(ii)] If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have
\begin{equation}\label{Lpweighted4}
\left[\prod_{j=0}^{k-1}\left(\frac{Q-p+\alpha\beta}{p}-(m+j)\right)\right]
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation}
\end{itemize}
\end{thm}
For $k=1$, inequality \eqref{Lpweighted3} reduces to \eqref{Lpweighted1} and \eqref{Lpweighted4} reduces to \eqref{Lpweighted2}.
\begin{proof}[Proof of Theorem \ref{2}] We can iterate \eqref{Lpweighted1}, that is, we have
\begin{equation}\label{Lpiterate0}
\frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}.
\end{equation}
In \eqref{Lpiterate0} replacing $f$ by $\mathcal{R}f$ we obtain
\begin{equation}\label{Lpiterate1}
\frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{2}f\right\|_{L^{p}(\mathbb{G})}.
\end{equation}
On the other hand, replacing $m$ by $m+1$, \eqref{Lpiterate0} gives
\begin{equation}\label{Lpiterate2}
\frac{Q-p(m+1)-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+2}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation}
Combining this with \eqref{Lpiterate1} we obtain
\begin{multline*}\label{Lpiterate3}
\frac{Q-pm-p}{p}\cdot\frac{Q-p(m+1)-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+2}}f\right\|_{L^{p}(\mathbb{G})} \\
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{2}f\right\|_{L^{p}(\mathbb{G})}.
\end{multline*}
This iteration process gives
\begin{equation*}\label{Lpweighted5}
\prod_{j=0}^{k-1}\left(\frac{Q-p}{p}-(m+j)\right)
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})}
.
\end{equation*}
Similarly, we have for $\alpha \beta<0$, $pm-\alpha\beta\leq Q-p$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$
\begin{equation*}\label{Lpweighted6}
\prod_{j=0}^{k-1}\left(\frac{Q-p+\alpha\beta}{p}-(m+j)\right)
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})}
,
\end{equation*}
completing the proof.
\end{proof}
\end{document} |
\begin{document}
\title{Weighted monotonicity theorems and applications to minimal surfaces of \( \mathbb{H}^n \) and \( S^n \)}
\author{Manh Tien Nguyen}
\address{\parbox{\linewidth}{Département de mathématique, Université Libre de Bruxelles,
Belgique\\ Institut de Mathématiques de Marseille, France}}
\email{[email protected]}
\maketitle
\begin{abstract}
We prove that in a Riemannian manifold \( M \), each function whose Hessian is
proportional to the metric tensor yields a weighted monotonicity theorem. Such a function
appears in the Euclidean space, the round sphere \( {S}^n \) and the hyperbolic space
\( \mathbb{H}^n \) as the distance function, the Euclidean coordinates of
\( \mathbb{R}^{n+1} \) and the Minkowskian coordinates of \( \mathbb{R}^{n,1} \). Then
we show that weighted monotonicity theorems can be compared and that in the hyperbolic
case, this comparison implies three \( SO(n,1) \)-distinct unweighted monotonicity
theorems. From these, we obtain upper bounds of the Graham--Witten renormalised area of a
minimal surface in terms of its ideal perimeter measured under different metrics of the
conformal infinity. Other applications include a vanishing result for knot invariants
coming from counting minimal surfaces of \( \mathbb{H}^n \) and a quantification of how
antipodal a minimal submanifold of \( S^n \) has to be in terms of its volume.
\end{abstract}
\section{Introduction}
\label{sec:org542a23b}
The classical monotonicity theorem dictates how a minimal submanifold of the Euclidean
space distributes its volume between spheres of different radii centred around the same
point. In the present paper, we prove the following three monotonicity results for minimal
submanifolds of the hyperbolic space. Let \( \Sigma \) be a minimal \( k \)-submanifold,
\( O \) be an interior point of \( \mathbb{H}^n \) and denote by \( A(t) \) the
\( k \)-volume of \( \Sigma_t = \Sigma\cap B_{\cosh^{-1} t}(O) \), the region of \( \Sigma \)
at distance at most \( \cosh^{-1} t \) from \( O \). Define the function \( Q_{\delta}(a,t) \)
with parameter \( \delta = -1, 0, \) or \( +1 \) as:
\begin{equation}
\label{eq:Qdelta}
Q_\delta(a,t) := \int_a^t(s^2 +\delta)^{\frac{k}{2} - 1}ds + \frac{1}{ka}(a^2 +\delta)^{\frac{k}{2}}
\end{equation}
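For orientation, consider \( k=2 \), \( \delta=-1 \) and \( a=1 \) (this normalisation check is included only as an illustration):
\[
Q_{-1}(1,t)=\int_1^t ds=t-1, \qquad 2\pi\,Q_{-1}(1,\cosh\rho)=2\pi(\cosh\rho-1),
\]
so, up to the factor \( 2\pi \), \( Q_{-1}(1,\cdot) \) is the area of a totally geodesic disc of radius \( \rho \) written in the variable \( t=\cosh\rho \).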
\begin{theorem}[Time-monotonicity]
\label{thm:intro-time}
Let \(d \) be the hyperbolic distance from \( O \) to \( \Sigma \), \( a \) be any number
in the interval \([1,\cosh d]\). Then the function
\[
\Phi_a(\Sigma,t) = \frac{A(t)}{Q_{-1}(a,t)}\quad
\text{is increasing in } t.
\]
\end{theorem}
When \( a=1 \), Theorem \ref{thm:intro-time} was proved by Anderson \cite[Theorem
1]{Anderson82_CompleteMinimalVarieties}. For any submanifold \( \Sigma \), if
\( \cosh d \geq a_1\geq a_2 \), then the monotonicity of \( \Phi_{a_1} \) implies that of
\( \Phi_{a_2} \). It is therefore optimal to choose \( a \) in Theorem
\ref{thm:intro-time} to be \( a=\cosh d \). In the language of the Comparison Lemma
\ref{lem:comparison}, the monotonicity of \( \Phi_{a_2} \) is \textit{weaker} than that of
\( \Phi_{a_1} \).
There are 2 other versions of Theorem \ref{thm:intro-time}, in which the interior point
\( O \) is replaced by a totally geodesic hyperplane \( H \) and a point \( b \) on the
sphere at infinity \( S_\infty \). We assume that in the compatification of
\( \mathbb{H}^n \) as the unit ball, the closure of \( \Sigma \) does not intersect
\( H \) or contain \( b \). In a half-space model associated to \( b \), this means that
\( \Sigma \) has a maximal height \( x_{\max} < +\infty \).
\begin{theorem}[Space-monotonicity]
\label{thm:intro-space}
Let \( d \) be the distance from \( \Sigma \) to \( H \), let \( a \) be any number in \( (0,\sinh d]\) and let \( A(t) \) be the volume of the region
\( \Sigma_t \) of \( \Sigma \) that is at distance at most \( \sinh^{-1} t \) from \( H
\). Then the function
\[
\frac{A(t)}{Q_{+1}(a,t)}\quad
\text{is increasing in } t.
\]
\end{theorem}
\begin{theorem}[Null-monotonicity]
\label{thm:intro-null}
Let \( a \) be any number in \( (0,x_{\max}^{-1}]\) and \( A(t) \) be the volume of the
region \( \Sigma_t \) of \( \Sigma \) with height at least \( \frac{1}{t} \) in a half
space model associated to \( b \). Then the function
\[
\frac{A(t)}{Q_{0}(a,t)}\quad
\text{is increasing in } t.
\]
\end{theorem}
These theorems can be seen as
statements about volume distribution of a minimal submanifold among level sets of the
time, space and null coordinates of \( \mathbb{H}^n \). These are pullbacks of the
coordinate functions of \( \mathbb{R}^{n,1} \) via the hyperboloid model. These theorems
are obtained in two steps. First, we prove that for any function \(h\) on a Riemannian
manifold \((M,g)\) with
\begin{equation}
\label{eq:intro-hess}
\hess h = Ug,\quad h\in C^2(M), U\in C^0(M)
\end{equation}
there is a \emph{naturally weighted} monotonicity theorem for minimal submanifolds of
\(M\). This is a statement about the weighted volume distribution of the submanifold among
level sets of \( h \). Here the volume is weighted by \(U\) and the density is obtained by
normalising this volume by that of a gradient tube of \( h \) (see
Definitions \ref{def:weight} and \ref{def:density}). The coordinate
functions of \( \mathbb{R}^{n,1} \) and \( \mathbb{R}^n \) pull back to functions on
\( \mathbb{H}^n \) and \( S^n \) that satisfy \eqref{eq:intro-hess}. In the second step,
we prove a comparison lemma which, in the hyperbolic case, gives us the unweighted volume
distribution results, namely Theorems \ref{thm:intro-time}, \ref{thm:intro-space} and
\ref{thm:intro-null}. More generally, these theorems also hold for \emph{tube extensions}
of minimal submanifolds (cf. Definition \ref{def:tube}).
Since \eqref{eq:intro-hess} is local, more examples for \eqref{eq:intro-hess} can be
constructed as quotient of \( M \) by a symmetry group preserving \( h \). Fuchsian
manifolds are obtained this way (cf. Example \ref{ex:xi-fuchsian}) and Theorem
\ref{thm:intro-space} also holds there, given that one replaces \( H \) by the unique
totally geodesic surface.
There are two major differences, if one is to apply the previous argument for the
Euclidean coordinates of \( S^n \). First, all coordinate functions of \( S^n \) are the
same up to isometry whilst a Minkowskian coordinate \( \xi \) of \( \mathbb{H}^n \) falls
into one of the three types depending on the norm of \( d\xi \) in \( \mathbb{R}^{n,1}
\). Secondly, for the Euclidean coordinates of \( S^n \), the natural weight is
\emph{weaker} than the uniform weight. This means that we cannot recover, from the
Comparison Lemma \ref{lem:comparison}, the unweighted volume distribution of a minimal
submanifold. In fact, the unweighted density of the Clifford torus of \( S^3 \) is not
monotone, even in a hemisphere. Despite this, the framework developed here can be used to
prove (cf. Proposition \ref{prop:eps-sphere}) that the further a minimal submanifold of
\( S^n \) is from being antipodal, the larger its volume has to be. Here we quantify
antipodalness by:
\begin{definition}[]
\label{def:eps-antipodal}
A subset \( X \) of \( S^n \) is \emph{\( \epsilon \)-antipodal} if
the antipodal point of any point in \( X \) is at distance at most
\( \epsilon \) from \( X \).
\end{definition}
It is clear that 0-antipodal is antipodal and that any subset of
\( S^n \) is \( \pi \)-antipodal. The relation between \( \epsilon \) and the volume
\( A \) of a minimal surface is drawn in Figure \ref{fig:3}. Proposition
\ref{prop:eps-sphere} also confirms, in the non-embedded case, a conjecture by Yau on the
second lowest volume of minimal hypersurfaces of \( S^n \), cf. Remark \ref{rem:yau-conj}.
Another application of the new monotonicity theorems is an isoperimetric inequality for
complete minimal surfaces of \( \mathbb{H}^n \). The area of these surfaces is necessarily
infinite and can be renormalised according to Graham and Witten
\cite{Graham.Witten99_ConformalAnomalySubmanifold}. Let \( A(\epsilon) \) be the area of
the part of the surface that is \( \epsilon \)-far\footnote{see Section
\ref{sec:renorm-isop-ineq} for a more precise formulation.} from the sphere at infinity
\( S_\infty \). They proved that it has the following expansion
\( A(\epsilon) = \frac{L}{\epsilon} + \mathcal{A}_R + O(\epsilon) \), where the
coefficient \(\mathcal{A}_R \), called the \emph{renormalised area}, is an intrinsic
quantity of the surface. We will give upper bounds of \(\mathcal{A}_R \) by
the length of the ideal boundary measured in different metrics in the conformal class at
infinity. These metrics are obtained as the limit of \( \xi^{-2} g_{\mathbb{H}} \) on
\( S_\infty \), which are finite almost everywhere. Depending on the type of \( \xi \),
they are either round metrics, flat metrics, or doubled hyperbolic metrics.
\begin{theorem}[]
\label{thm:upper-AR}
Let \(\Sigma\subset \mathbb{H}^n\) be a minimal surface with ideal boundary \(\gamma\)
in \( S_\infty \). Let \( O \) be an interior point of \(
\mathbb{H}^n \) and \( |\gamma|_{g_O} \) be the length of \( \gamma \) measured in the
round metric \( g_O \) on \( S_\infty \) associated to \( O \). Then
\begin{equation}
\label{eq:upper-AR-a}
\mathcal{A}_R(\Sigma) + \frac{1}{2}\left(\cosh d + \frac{1}{\cosh d}\right)|\gamma|_{g_O} \leq 0
\end{equation}
where \( d \) is the distance from \( O \) to \( \Sigma \).
\end{theorem}
Theorem \ref{thm:upper-AR} relies on a simple remark (Lemma \ref{lem:less-than-tube})
about the extra information gained from a monotonicity theorem on the tube extension of a
minimal submanifold. It implies in particular that the volume of a minimal submanifold of \( \mathbb{H}^n \) is not larger than
that of the tube competitor. The estimate \eqref{eq:upper-AR-a} follows from substituting
the volume bound given by Theorem \ref{thm:intro-time} to the Graham--Witten expansion.
The volume bound corresponding to Anderson's monotonicity (Theorem
\ref{thm:intro-time} with \( a=1 \)) was obtained by Choe and Gulliver
\cite{Choe.Gulliver92_SharpIsoperimetricInequality} by a different method. Plugging
this into the Graham--Witten expansion, we have the following weaker version of \eqref{eq:upper-AR-a}:
\begin{equation}
\label{eq:upper-AR-0}
\mathcal{A}_R(\Sigma) + \sup_{\text{round } \tilde g}|\gamma|_{\tilde g} \leq 0
\end{equation}
The estimate \eqref{eq:upper-AR-0} was also independently proved by Jacob Bernstein
\cite{Bernstein21_SharpIsoperimetricProperty}. To illustrate the extra factor in
\eqref{eq:upper-AR-a}, we test it on a family of rotational minimal annuli \( M_C \) in
\( \mathbb{H}^4 \), indexed by a real number \( C \). Here the rotation is given by
changing the phases of the two complex variables of \( \mathbb{C}^2\cong \mathbb{R}^4 \) by
opposite angles. The profile curves of these annuli are drawn in Figure
\ref{fig:min-HS}(a). Their ideal boundaries are pairs of round circles that form Hopf links
in \( S_\infty \). Whilst \eqref{eq:upper-AR-0} only says
\( \mathcal{A}_R\leq -4\pi \), \eqref{eq:upper-AR-a} shows that they tend to
\( -\infty \). Moreover, the perimeter term of \eqref{eq:upper-AR-a} also captures the
decreasing rate of \( \mathcal{A}_R \). These two quantities are drawn in Figure
\ref{fig:2}. The graph was plotted in log scale, in which the two curves are
asymptotically parallel. So the gap between them should be interpreted as a
multiplicative factor between the two quantities, which is approximately 1.198.
Theorems \ref{thm:intro-space} and \ref{thm:intro-null} also give two other upper bounds of
area, and thus two other isoperimetric inequalities:
\begin{theorem}[]
\label{thm:upper-AR-space}
Let \( H \), \( b \), \( \Sigma \) be a totally geodesic plane, a boundary point, and a
complete minimal surface satisfying the conditions of Theorems \ref{thm:intro-space} and
\ref{thm:intro-null}. We denote by \( |\gamma|_{H} \) and \( |\gamma|_b \) the length of
the boundary curve under the doubled hyperbolic metric and the flat metric associated to
\( H \) and \( b \). Then
\begin{equation}
\label{eq:upper-AR-space}
\mathcal{A}_R(\Sigma) + \frac{1}{2}\left(\sinh d - \frac{1}{\sinh
d}\right)|\gamma|_{H}\leq 0
\end{equation}
\begin{equation}
\label{eq:upper-AR-null}
\mathcal{A}_R(\Sigma) + \frac{1}{2}\frac{|\gamma|_{b}}{x_{\max}} \leq 0
\end{equation}
Here \( d \) is the distance from \( \Sigma \) to \( H \) and \( x_{\max} \) is the
maximal height of \( \Sigma \) in the half-space model where \( b \) is at infinity.
\end{theorem}
It can be seen either directly from \eqref{eq:upper-AR-a} or from
\eqref{eq:upper-AR-space} and a rescaling argument (cf. Figure \ref{fig:space-est}) that
\(\mathcal{A}_R\leq -2\pi \) for any minimal surface. The estimate
\eqref{eq:upper-AR-null}, on the other hand, seems to be optimised for a completely
different type of surface than totally geodesic discs. For the discs,
\( \mathcal{A}_R = -2\pi \) whilst the perimeter term in \eqref{eq:upper-AR-null} is
\( +\pi \) and one may hope to drop the constant \( \frac{1}{2} \) there. However, the
family of minimal annuli of \( \mathbb{H}^3 \) found by Mori
\cite{Mori81_MinimalSurfacesRevolution} and whose renormalised area was computed by Krtouš
and Zelnikov \cite{Krtous.Zelnikov14_MinimalSurfacesEntanglement}, show that the constant
\( \frac{1}{2} \) of \eqref{eq:upper-AR-null} cannot be replaced by any number bigger than
\( 0.59 \).
Each \( (k-1) \)-submanifold \( \gamma \) of \( S_\infty \) induces a function
\( V_\gamma \) on \( \mathbb{H}^n \), whose value at a point \( O \) is the volume of
\( \gamma \) under the round metric \( g_O \). The visual hull of \( \gamma \) was defined
by Gromov \cite{Gromov83_FillingRiemannianManifolds} as the set of points where
\( V_\gamma \) is at least the volume of the unit Euclidean \( (k-1) \)-sphere. Another
application of Theorem \ref{thm:intro-time} is:
\begin{corollary}[]
A minimal submanifold of \( \mathbb{H}^n \) is contained in the \textit{visual hull} of
its ideal boundary.
\end{corollary}
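For instance, combining the Corollary with Theorem \ref{thm:upper-AR}: choosing \( O \) on a minimal surface \( \Sigma \) (so that \( d=0 \)) gives
\[
\mathcal{A}_R(\Sigma)\leq-|\gamma|_{g_O}=-V_\gamma(O)\leq-2\pi,
\]
since \( O \) then lies in the visual hull of \( \gamma \). This is one way to read off the bound \( \mathcal{A}_R\leq-2\pi \) noted above.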
\begin{proposition}
\label{prop:separation}
Let \(L:= L_1 \sqcup L_2\) be a separated union of two \( (k-1) \)-submanifolds \(L_1,
L_2\) of \({S}^{n-1}\). Then we can rearrange \(L\) in its isotopy class such
that there is no connected minimal \( k \)-submanifold \(\Sigma\) of \(\mathbb{H}^n\)
whose ideal boundary is \(L\).
\end{proposition}
\begin{proof}
In the Poincaré ball model, we isotope \(L\) so that \(L_1\) (or \(L_2\)) is
contained in a small ball centred at the North (respectively South) pole and so that
the Euclidean volume of \(L\) is less than \(\frac{1}{2}\omega_{k-1}\). It suffices to
prove that any minimal submanifold filling \( L \) has no intersection with the
equatorial hyperplane. By convexity, such intersection is contained in a small ball
centred at the origin \(O\). If it were non-empty, by a small Möbius transform we could
suppose that \(\Sigma\) contains \(O\) while keeping the Euclidean volume of \(L\) less
than \(\omega_{k-1}\). This contradicts the Corollary above: \(O\in\Sigma\) would have to lie in the visual hull of \(L\), whereas \(V_L(O)<\omega_{k-1}\).
\end{proof}
The visual hull and the proof of Proposition \ref{prop:separation} can be visualised in
Figure \ref{fig:vishull}.
The motivation for Proposition \ref{prop:separation} comes from recent works of Alexakis,
Mazzeo \cite{Alexakis.Mazzeo10_RenormalizedAreaProperly} and Fine
\cite{Fine21_KnotsMinimalSurfaces}. The counting problem for minimal surfaces of
\( \mathbb{R}^3 \) bounded by a given curve traces back to Tomi--Tromba's resolution of
the embedded Plateau problem \cite{Tomi.Tromba78_ExtremeCurvesBound} and was studied in a
more general context by White \cite{White87_SpaceDimensionalSurfaces}. In the hyperbolic
space, one can ask the ideal boundary of the minimal surface to be an embedded curve
\( \gamma \) in the sphere at infinity. This problem was studied by Alexakis and Mazzeo in
\( \mathbb{H}^3 \) and by Fine in \( \mathbb{H}^4 \). Denote by \( \mathcal{C} \) the
Banach manifold of curves in \( S_\infty \), \( \mathcal{S} \) the space of minimal
surfaces and \( \pi: \mathcal{S} \longrightarrow \mathcal{C} \) the map sending a surface
to its ideal boundary. The counting problem consists of proving \( \mathcal{S} \) is a
Banach manifold and establishing a degree theory for \( \pi \). The degree is constant in
the isotopy class of \( \gamma \) and defines a knot/link invariant. Proposition
\ref{prop:separation} shows that this invariant is multiplicative in \( \gamma
\): any minimal surface filling a separated union of links is a union of surfaces filling
each one individually.
It was proved, in the works mentioned above, that the map \( \pi \) is Fredholm and of
index 0. This is because the stability operator is self-adjoint and elliptic or
0-elliptic. The properness of \( \pi \) is however more subtle. In \( \mathbb{H}^4 \), this can be seen via the family
\( M_C \): as \( C \) decreases to \( 0 \), its boundary converges to
the standard Hopf link \( \{zw = 0\}\cap S^3 \) and the waist of \( M_C \) collapses. In
fact, it can be proved (Proposition \ref{prop:minimising-cone}) that the only minimal
surface of \( \mathbb{H}^4 \) filling the standard Hopf link is the pair of
discs. One can still hope that properness holds on a residual subset of
\( \mathcal{C} \), as the previous phenomenon happens in codimension 2 of \( \gamma \).
Since monotonicity theorems are inequalities, they still hold when the Hessian of the
function \(h\) is comparable to the metric as symmetric 2-tensors. We will see in Section
5 that such a function arises naturally as the distance function in a Riemannian manifold
whose sectional curvature is bounded from above. When the curvature is negative, this
weighted monotonicity theorem also implies the unweighted version. In the case of positive
curvature, an unweighted monotonicity \emph{inequality} was obtained by Scharrer
\cite{Scharrer21_GeometricInequalitiesVarifolds}.
\paragraph{\bf Acknowledgements.}
The author thanks Joel Fine for introducing him to minimal surfaces and harmonic maps in
hyperbolic space and for proposing the search of the minimal annuli in \( \mathbb{H}^4 \). He also wants to thank Christian Scharrer for the reference
\cite{Hoffman.Spruck74_SobolevIsoperimetricInequalities} and Benjamin Aslan for helpful discussions on \cite{Berndt.etal16_SubmanifoldsHolonomy} and
\cite{Hsiang.Lawson71_MinimalSubmanifoldsLow}. The author was supported by
\emph{Excellence of Science grant number 30950721, Symplectic techniques in differential
geometry}.
\begin{figure}
\caption{The profile curve of \( M_C \)}
\label{fig:min-HS}
\end{figure}
\begin{figure}
\caption{The area and perimeter terms of
\eqref{eq:upper-AR-a}.}
\label{fig:2}
\caption{The function \(\epsilon(A,2)\). The red point is the data for the Veronese surface.}
\label{fig:3}
\end{figure}
\begin{figure}
\caption{Any minimal surface \( \Sigma \) filling the link \( L \) (red) is contained in its visual
hull (pink). It cannot intersect the blue region because of the visual hull. So \(
\Sigma \) has no intersection with the equatorial disc and has to be disconnected.}
\label{fig:vishull}
\caption{For a pair \( (\xi_0, \xi_1) \) of time and space coordinates in the same
Minkowskian system.}
\label{fig:space-est}
\end{figure}
\section{The hyperbolic space and the sphere as warped spaces.}
\label{sec:orga423e7a}
A metric on a Riemannian manifold \(M=N\times [a,b]\) is a \emph{warped product} if it has the
form
\begin{equation}
\label{eq:warped-metric}
g = dr^2 + f^2(r) g_N
\end{equation}
where \(r\in [a,b]\) and \(g_N\) is a Riemannian metric on \(N\). It can be checked that an anti-derivative \(h\) of the warping function \(f\) satisfies \(\hess(h) =
f'(r) g\). Conversely, if such a function \(h\) exists, the space can be written
locally as a warped product by the level sets of \( h \).
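One way to carry out this check (a short computation with \( h'(r)=f(r) \), using the standard formula \( \hess r=\frac{f'}{f}\left(g-dr\otimes dr\right) \) for the metric \eqref{eq:warped-metric}) is
\[
\hess h = h''\,dr\otimes dr + h'\,\hess r
= f'\,dr\otimes dr + f'\left(g-dr\otimes dr\right)
= f'(r)\,g.
\]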
\begin{proposition}[cf. \cite{Cheeger.Colding96_LowerBoundsRicci}]
\label{prop:cheeger-colding}
Let \(h\) be a \( C^2 \) function on \((M,g)\) with no critical point. Suppose that the level
sets of \(h\) are connected and that
\begin{equation}
\label{eq:xi}
\hess h = U.g
\end{equation}
for a function \(U\in C^0(M)\). Then:
\begin{enumerate}
\item \(U = U(h)\) is a function of \( h \), i.e. a precomposition of \(h\) by a function
\(U: \mathbb{R}\longrightarrow \mathbb{R}\). So is the function \(V:= |dh|^2\in C^1(M)\)
and we have \(\frac{dV}{dh} = 2U\).
\item The metrics \(g_a, g_b\) induced from \(g\) on level sets \(h^{-1}(a)\) and \(h^{-1}(b)\) are related
by \(\frac{g_a}{V(a)}= \frac{g_b}{V(b)}\) via the inverse gradient flow of \(h\).
This defines a metric \(\tilde g\) on level sets of \( h \) under which the flow is isometric.
The metric \(g\) then pulls back, via the flow map \(h^{-1}(a)\times
{\mathrm{Range}}(h) \longrightarrow M\), to
\begin{equation}
\label{eq:warped-g}
g = \frac{V(h)}{V(a)}g_a + \frac{dh^2}{V(h)} = V(h)\tilde g + \frac{dh^2}{V(h)}
\end{equation}
which is a warped product after a change of variable \(dr = \frac{dh}{V(h)^{1/2}}\).
\end{enumerate}
\end{proposition}
\begin{proof}
For any vector field \(v\), one has
\begin{equation}
\label{eq:vv}
v(V) = 2 g(\nabla_v \nabla h, \nabla h)= 2 \hess(h)(v, \nabla h) = 2U g(v, \nabla h)
\end{equation}
It follows, by choosing \(v\) in \eqref{eq:vv} to be any vector field tangent to level
sets of \(h\), that \(V\) is constant on the level sets of \( h \). Then, choosing \( v \) to be the inverse
gradient \(\frac{\nabla h}{|\nabla h|^2}\), we have \((\nabla h) V = 2U |\nabla h|^2\), so \( U \) is
also a function of \( h \) and \( \frac{dV}{dh} = 2U \).
For the second part, let \(v_t\) be a vector field of \(M\) tangent to the level sets \(h^{-1}(t)\) given by
pushing forward, via the flow of the inverse gradient \(u:=\frac{\nabla h}{|\nabla h|^2}\), a vector field \(v_a\) tangent to \(h^{-1}(a)\). By
definition of Lie bracket, \([v_t, u] = 0\) for all \(t\), and so
\[
\frac{d}{dt}|v_t|^2 = 2 g(\nabla_u v_t, v_t) = 2 g(\nabla_{v_t}u, v_t) =
\frac{2}{|\nabla h|^2}\hess(h)(v_t, v_t) = \frac{V'}{V}|v_t|^2.
\]
Here in the third equality, we used the fact that \( v_t \) is orthogonal to the gradient of \( h \).
We conclude that \(\frac{|v_t|^2}{V(t)}\) is constant along the flow and so \(\frac{g_a}{V(a)} = \frac{g_b}{V(b)}\) for all \(a,b\) in the range of \(h\).
\end{proof}
The metric \(\tilde g\) is the \(g_N\) of \eqref{eq:warped-metric} and the restrictions of
\( \tilde g \) and \( g \) on the level sets of \( h \) are conformal. We will also use
\(\tilde g\) to denote the metric \(V(h)^{-1}g\) on \(M\) and will call it the
\emph{normalised metric}.
In applications, we will only assume that the function \(h\) satisfies
\eqref{eq:xi} on \(M\) and it is allowed to have critical points, as in the following examples.
\begin{exampl}[]
\label{ex:xi-R}
In the Euclidean space, the only functions satisfying \eqref{eq:xi} are the coordinates
\(x_i, i=1,\dots,n\) (and their linear combinations) and the square of distance
\(\rho := \frac{1}{2}\sum_{i=1}^n x_i^2\).
\end{exampl}
\begin{exampl}[]
\label{ex:xi-S}
In the unit sphere \({S}^n = \{(x_1,\dots, x_{n+1})\in
\mathbb{R}^{n+1}:\sum_{i=1}^{n+1} x_i^2 = 1\}\), the \(x_i\)
satisfy \eqref{eq:xi} with \(U = -x_i, V = 1-x_i^2\). It is more intuitive to choose
\( h \) to be \( 1-x_i \), for which \( U= 1-h \) and \( V = 2h-h^2 \).
\end{exampl}
\begin{exampl}[]
\label{ex:xi-H}
In the hyperbolic space \(\mathbb{H}^n=\{(\xi_0,\dots,\xi_n)\in \mathbb{R}^{n,1}:
\xi_0^2 - \sum_{i=1}^n\xi_i^2=1, \xi_0 > 0\}\), the Minkowskian coordinates \(\xi_\alpha\) satisfy \eqref{eq:xi} with \(U=\xi_\alpha, V = \xi_\alpha^2 + |d\xi_\alpha|^2\). Here
\(|d\xi_\alpha|^2 = -1\) if \( \alpha=0 \) and +1 if \( \alpha \geq 1 \). All null (light-type)
coordinates can be written, up to a Möbius transform as \( \xi_l = \xi_0 - \xi_1 \). They
satisfy \eqref{eq:xi} with \( U = \xi_l \) and \( V = \xi_l^2 \).
\end{exampl}
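To make the link with \eqref{eq:warped-g} explicit in this example: for the time coordinate \( h=\xi_0 \) one has \( V(h)=h^2-1 \), and substituting \( h=\cosh r \) gives
\[
g=V(h)\,\tilde g+\frac{dh^2}{V(h)}=\sinh^2 r\,\tilde g+dr^2,
\]
the familiar polar form of the hyperbolic metric, with \( \tilde g \) the unit round metric on \( S^{n-1} \).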
Unlike \( S^n \), \( \mathbb{H}^n \) can be written as warped product in 3
\( SO(n,1) \)-distinct ways. Geometrically, each time coordinate is uniquely defined by an
interior point of \( \mathbb{H}^n \) where it is minimised. Each space coordinate is
uniquely defined by a totally geodesic hyperplane where it vanishes together with a
coorientation. Each point on \( S_\infty \) defines a null coordinate uniquely up to a
multiplicative factor. In the half space model with height coordinate \( x>0 \), such
functions are given by \( \frac{\lambda}{x}, \lambda >0 \).
The normalised metrics \(\tilde g\) of \((\mathbb{R}^n, \rho), ({S}^n, x_i)\) and
\((\mathbb{H}^n, \xi_0)\) are all round metrics on \({S}^{n-1}\). For a null coordinate
associated to a point \( b \in S_\infty\), \(\tilde g\) is the flat metric on \(
S_\infty\setminus \{ b\} \). For the space coordinate associated to a hyperplane \( H \),
\( \tilde g \) is obtained by putting hyperbolic metrics on each component of \( S_\infty\setminus \overline{H} \). We
will call this a \textit{doubled hyperbolic} metric. In all cases, \( \tilde g \)
lies in the conformal class at infinity.
\begin{exampl}[]
\label{ex:xi-fuchsian}
Equation \eqref{eq:xi} is local and so will descend to the quotient whenever there is
an isometric group action that preserves \( h \).
The subgroup of \( SO_+(n,1) \) that preserves a space coordinate \( \xi \) of \( \mathbb{H}^n \) is \( SO_+(n-1,1)
\). When \( n=3 \), the quotient of \( \mathbb{H}^3 \) by a discrete subgroup of \(
SO_+(2,1) \) is called a Fuchsian manifold and the metric descends to a warped product:
\[
g = dr^2 + \cosh^2 r. g_\Sigma\quad\text{on } \Sigma\times \mathbb{R}, \qquad \xi = \sinh r
\]
Here \( \Sigma\) is a Riemann surface of genus at least \( 2 \), equipped
with the hyperbolic metric \( g_\Sigma \).
\end{exampl}
\section{Monotonicity Theorems and Comparison Lemma}
\label{sec:orga12f250}
Given a function \( h \) on a Riemannian manifold \( M \) and a submanifold \( \Sigma \), we write \( \int_{\Sigma,
h\leq t} \) and \( \int_{\Sigma, h=t} \) for the integration over the sub-level
\( h^{-1}(-\infty,t] \) and the level set \( h=t
\) of \( h \) on \( \Sigma \).
The gradient vector of \( h \) in \( M \) is denoted by \(
\nabla h \) and its projection to the tangent of \( \Sigma \) by \( \nabla^\Sigma h
\). We will write \( \Delta_\Sigma h = \mathop{\rm div}\nolimits_\Sigma \nabla^\Sigma h \) for the rough Laplacian of \( h \) on \( \Sigma \).
The volume of the \( k \)-dimensional unit sphere will be denoted by \( \omega_k \).
We will fix a function \( h \) on \( M \) that satisfies property \eqref{eq:xi}.
\begin{definition}[]
\label{def:tube}
Let \( \gamma \) be a \( (k-1)\)-submanifold of \( M \). The \emph{forward \( h \)-tube} \( T_\gamma^+ \)
(respectively \emph{backward \( h \)-tube} \( T_\gamma^- \)) built upon \( \gamma \) is the \( k
\)-submanifold obtained by flowing \( \gamma \) along the gradient
of \( h \) forwards (respectively backwards) in time. We will denote their union by \(
T_\gamma \) and the intersection of \( T_\gamma \) with the region \( t_1\leq h \leq t_2
\) by \( T_\gamma(t_1,t_2) \).
The \emph{tube extension} of a
submanifold is its union with the forward tube built upon its boundary, as shown in Figure
\ref{fig:tube-extension}.
\end{definition}
\begin{figure}
\caption{Extension of a submanifold by gradient tube of \( h \).}
\label{fig:tube-extension}
\end{figure}
Let \( \gamma \) be a \( (k-1) \)-submanifold of a non-critical level set of \( h \),
and \(\mathcal{D}\) be a distribution of \(k\)-dimensional planes of \(M\) along \( \gamma
\).
\begin{definition}[]
The \emph{\(\mathcal{D}\)-parallel volume} of \(\gamma\) is
\[
\left|\gamma^{\mathcal{D}} \right|:= \int_{\gamma}\cos \angle(\nabla h, \mathcal{D}) \mathop{\rm vol}\nolimits^{k-1}_{\tilde g},
\]
where the integral was taken with the volume form of the normalised metric \(\tilde g\)
and \(\angle(\nabla h, \mathcal{D})\) is the angle between \(\nabla h\) and \(\mathcal{D}\).
When \(\nabla h\) is contained in \(\mathcal{D}\) at every point of \(\gamma\), \( |\gamma^{\mathcal{D}}| \)
is the \(\tilde g\)-volume of \( \gamma \).
\end{definition}
We will suppose that \( \Sigma \) is a \( k \)-submanifold in the region \( h\geq h_0 \)
of \( M \) and that the intersection of \( \Sigma \) and the level set \( h=h_0 \) has
zero \( k \)-volume. This happens for example when the intersection is empty or a
transversally cut out \( (k-1) \)-submanifold \( \gamma_0 \).
\begin{definition}[]
\label{def:weight}
A \emph{weight} is a pair \( (P,c) \) of a function
\( P: \mathbb{R} \longrightarrow \mathbb{R} \) and a real number \( c\). We define the
\emph{\( (h_0,(P,c)) \)-volume} of \( \Sigma \) by
\begin{equation}
\label{eq:p-c-weighted}
A_{h_0,(P,c)}(\Sigma)(t) := \int_{\Sigma, h_0\leq h\leq t} P(h) +\frac{c}{k}
|\gamma_0^{T\Sigma}|\ V(h_0)^{k/2}
\end{equation}
\end{definition}
Most of the time, we fix the starting level \( h_0 \) and change the weight and so will often drop
\( h_0 \) in the notation. When \( h_0 \) is chosen to be a
critical value of \( h \), the RHS of \eqref{eq:p-c-weighted} no
longer depends on \( c \) and we drop it in the data of a weight.
Because the metric \( g \) has the form \eqref{eq:warped-g}, the \( (P,c) \)-volume of a
tube \( T_\gamma(h_0,t) \), as a function of \( t \), is independent of the choice of \(
\gamma \) up to a
multiplicative factor:
\begin{equation}
\label{eq:weight-tube}
A_{(P,c)}(T_{\gamma}(h_0,t)) = \frac{|\gamma|_{\tilde g}}{\omega_{k-1}}
Q_{(P,c)}(t),\qquad Q_{(P,c)}(t):= \omega_{k-1}\left[ \int_{h_0}^t P(h)V^{\frac{k}{2} - 1}(h) dh + \frac{c}{k}V(h_0)^{\frac{k}{2}}\right]
\end{equation}
\begin{definition}[]
\label{def:density}
The \emph{\( (P,c) \)-density} of \( \Sigma \) is obtained by normalising its weighted
volume by that of a tube:
\begin{equation}
\label{eq:weight-density}
\Theta_{(P,c)}(t):= \frac{A_{(P,c)}(\Sigma)(t)}{Q_{(P,c)}(t)}.
\end{equation}
\end{definition}
Clearly, the density of a tube is constant in \( t \). In \eqref{eq:weight-tube}, we
choose to put the constant \( \omega_{k-1} \) in \( Q \) to be consistent with the
classical monotonicity theorem (\( h = \rho \) in Example \ref{ex:xi-R}). In the classical
case with \( h_0=0 \), \( h \)-tubes are visually cones or discs.
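To check this consistency (a short computation): for \( h=\rho=\frac{1}{2}|x|^2 \) in \( \mathbb{R}^n \) one has \( U\equiv1 \), \( V=2h \) and \( h_0=0 \), so
\[
Q_{(1,1)}(t)=\omega_{k-1}\int_0^t(2h)^{\frac{k}{2}-1}\,dh=\frac{\omega_{k-1}}{k}(2t)^{\frac{k}{2}}=\frac{\omega_{k-1}}{k}r^k,
\qquad t=\tfrac{r^2}{2},
\]
which is the volume of the Euclidean \( k \)-ball of radius \( r \); the \( (1,1) \)-density is then the classical density ratio.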
We call the pair \( (U, 1) \) the \emph{natural weight}. The naturally weighted
volume of tubes simplifies: one has \( Q_{(U,1)}(t) = \frac{\omega_{k-1}}{k}V^{k/2}(t) \), so the
\( (U,1) \)-volume of \( T_\gamma(h_0,t) \) is \( \frac{|\gamma|_{\tilde g}}{k}V^{k/2}(t) \), which is independent of the starting level \( h_0 \).
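Indeed, using \( U=\frac{1}{2}V' \) (the standing assumption, restated in Theorem \ref{thm:monotonicity-warped} below), the integral in \eqref{eq:weight-tube} can be evaluated explicitly:
\[
Q_{(U,1)}(t) = \omega_{k-1}\left[ \int_{h_0}^{t} \tfrac{1}{2}V'(h)\,V^{\frac{k}{2}-1}(h)\, dh + \tfrac{1}{k}V(h_0)^{\frac{k}{2}} \right]
= \frac{\omega_{k-1}}{k}\,V^{\frac{k}{2}}(t).
\]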
\begin{theorem}[Naturally weighted monotonicity]
\label{thm:monotonicity-warped}
Let \(h\) be a \(C^2\) function on a Riemannian manifold \(M\) that
satisfies \eqref{eq:xi} with \(U\) and \(V= |\nabla h|^2\) being functions of \(h\)
such that \(U = \frac{1}{2}V'\). Let \(\Sigma\) be a minimal
\( k \)-submanifold in \(M\). Then the naturally weighted density
\(\Theta_{(U,1)}(\Sigma)(t)\) is increasing in the region \( U>0 \) and decreasing in the region \( U<0 \).
Moreover, the conclusion also holds for the tube extension
of minimal submanifolds.
\end{theorem}
\begin{proof}
The Laplacian of \( h \) on \( \Sigma \) can be computed to be
\[
\Delta_\Sigma h = kU.
\]
This can be seen via the formula:
\begin{lemma}[Leibniz rule]
\label{lem:chain-rule}
Let \(f:(\Sigma,g_\Sigma) \longrightarrow (M, g)\) be a map between Riemannian manifolds and \(\tau(f)\) be its tension
field. Then for any \(C^2\) function \(h\) on \(M\):
\begin{equation}
\label{eq:chain-rule}
\Delta_{\Sigma} (h\circ f) = \mathop{\rm Tr}\nolimits_{\Sigma} f^*\hess h + dh.\tau(f)
\end{equation}
In particular,
\[
\Delta_\Sigma h = \mathop{\rm Tr}\nolimits_\Sigma \hess h
\]
if \(\Sigma\) is either minimal or tangent to the gradient of \(h\).
\end{lemma}
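In particular, granting that \eqref{eq:xi} gives \( \hess h = U\, g \) (only the inequality \( \hess h \geq U\,g \) is actually needed, see Remark \ref{rem:monotonicity-warped} below), Lemma \ref{lem:chain-rule} applied to the inclusion of the minimal submanifold \( \Sigma \) yields
\[
\Delta_\Sigma h = \mathop{\rm Tr}\nolimits_\Sigma \hess h = \mathop{\rm Tr}\nolimits_\Sigma (U\,g) = kU.
\]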
Because the volume forms of \(g\) and \(\tilde g\) on \(\gamma\) are related
by \(\mathop{\rm vol}\nolimits_{g}^{k-1} = V^{\frac{k-1}{2}}\mathop{\rm vol}\nolimits_{\tilde g}^{k-1}\) and
\(\cos \angle(\nabla h, T\Sigma).V^{1/2} = |\nabla^\Sigma h|\), the second term in the
definition of weighted volume \eqref{eq:p-c-weighted} is \( \frac{c}{k}\int_{\gamma_0}|\nabla^\Sigma h| \).
By Stokes' theorem,
\begin{equation}
\label{eq:stokes}
kA_{(U,1)}(\Sigma)(t) = \int_{\Sigma,h_0\leq h\leq t} \Delta_\Sigma h + \int_{\gamma_0}|\nabla^\Sigma h| = \int_{\Sigma,h=t}(\nabla^\Sigma h)\cdot n = \int_{\Sigma,h=t}|\nabla^\Sigma h|.
\end{equation}
The last equality is because the outer normal of the sublevel set \(h\leq t\) in \(\Sigma\) is \(n=
\frac{\nabla^{\Sigma} h}{|\nabla^{\Sigma} h|}\). By the coarea formula,
\[
\frac{dA_{(U,1)}(t)}{dt} = U(t)\int_{\Sigma,h = t} \frac{1}{|\nabla^\Sigma h|}.
\]
Combining this with \eqref{eq:stokes} and \(|\nabla^\Sigma h|^2\leq V(h)\), one has \(\frac{1}{U}\frac{dA_{(U,1)}}{dt} \geq \frac{k A_{(U,1)}}{V}\), or
\[
\frac{1}{U} \frac{d}{dt}\left( \Theta_{(U,1)}(\Sigma)\right)\geq 0.
\]
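In more detail, the inequality is obtained by combining \eqref{eq:stokes}, the pointwise bound \( |\nabla^\Sigma h|^2\leq V(h) \) and the coarea formula:
\[
k A_{(U,1)}(\Sigma)(t) = \int_{\Sigma,h=t}|\nabla^\Sigma h|
\;\leq\; \int_{\Sigma,h=t}\frac{V(t)}{|\nabla^\Sigma h|}
\;=\; \frac{V(t)}{U(t)}\,\frac{dA_{(U,1)}}{dt}.
\]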
For the extension \( \tilde\Sigma \) of \( \Sigma \) by a forward tube built upon \( \gamma = \partial\Sigma \), it suffices to rewrite \eqref{eq:stokes} as
\begin{align*}
kA_{(U,1)}(\tilde \Sigma)(t) &= \left(\int_{\Sigma, h\leq t}+
\int_{T^+_{\gamma}, h\leq t}\right)\Delta h \\
&=\int_{\Sigma,h=t}|\nabla^{\Sigma} h| + \int_{\gamma\cap \{h\leq t\}}|\nabla^{\Sigma} h|+
\int_{T^+_\gamma, h=t} |\nabla h| - \int_{\gamma\cap \{h\leq t\}} |\nabla h| \\
&\leq \int_{\Sigma,h=t}|\nabla^{\Sigma} h| +
\int_{T_\gamma, h=t} |\nabla h| =
\int_{\tilde\Sigma,h=t}|\nabla^{\tilde\Sigma} h|.
\end{align*}
\end{proof}
\begin{remark}
\label{rem:monotonicity-warped}
We only need the ``\(\leq\)'' sign in \eqref{eq:stokes}, and hence it suffices that \(\Delta_\Sigma h \geq k U(h)\). Theorem \ref{thm:monotonicity-warped}
also holds if \(\hess h\geq
U.g\), provided that \(U\) and \(V = |d h|^2\) are
functions of \(h\) with \(U = \frac{1}{2}V'\) (see Section \ref{sec:bounded-curv}).
\end{remark}
The classical monotonicity theorem is recovered by applying Theorem
\ref{thm:monotonicity-warped} to the function \( \rho \) of Example \ref{ex:xi-R}. On the other hand, Choe and
Gulliver \cite[Theorem 3]{Choe.Gulliver92_IsoperimetricInequalitiesMinimal} discovered
that there are monotonicity theorems in \( S^n \) and \( \mathbb{H}^n \) if one weights
the volume functional by the cosine (respectively hyperbolic cosine) of the distance
function. These theorems come from applying Theorem \ref{thm:monotonicity-warped} to
Euclidean coordinates of Example \ref{ex:xi-S} and time
coordinates of Example \ref{ex:xi-H}.
Since the proof of Theorem \ref{thm:monotonicity-warped} only uses integration by parts, it
can be adapted to stationary rectifiable \( k \)-currents or \( k \)-varifolds. Instead of
integrating the Laplacian of \( h \), it suffices to apply the first variation formula to
a test vector field that is a suitable cut-off of the gradient of \( h \). See
\cite[Theorem 1]{Anderson82_CompleteMinimalVarieties} and
\cite[\(\S\)7]{Ekholm.etal02_EmbeddednessMinimalSurfaces}.
Theorem \ref{thm:monotonicity-warped} can also be adapted for harmonic maps. Given a map \(f:\Sigma \longrightarrow M\), we
define its \emph{dimension at a point} \(p\in \Sigma\) to be the ratio \(\frac{|df_p|^2}{|df_p|_o^2}\)
of the tensor norm of the derivative at \(p\) (also called \emph{energy
density}) and its operator norm, or \(+\infty\) if the latter vanishes. Note that when \(df_p\) is conformal,
this is the dimension of \(\Sigma\). The \emph{dimension} of \(f\), defined as the infimum over \( \Sigma \) of the
dimension at a point, plays the role of \(k\) in the argument.
The \emph{naturally weighted Dirichlet energy} and \emph{naturally weighted density} of \(f\)
are defined as
\[
E_{(U,1)}(t):=
\int_{\Sigma, h_0\leq h\circ f\leq t} U|df|^2 + \int_{\Sigma,h\circ f = h_0}|d(h\circ
f)|,\quad \Theta_{(U,1)}(t):= \frac{E_{(U,1)}(t)}{V(t)^{k/2}}.
\]
\begin{theorem}
\label{thm:monotonicity-map}
Let \(h, U, V\) be as in Theorem \ref{thm:monotonicity-warped} and
\(f:\Sigma \longrightarrow M\) be a harmonic map. Then \(\frac{d}{dt}\Theta_{(U,1)}\) has the same sign as \(U\).
\end{theorem}
\begin{proof}
By Lemma \ref{lem:chain-rule}, \(\Delta (h\circ f) = U |df|^2\) and by
integration by parts, \(E_{(U,1)}(t) = \int_{\Sigma, h\circ f = t} |d(h\circ f)|\). One then
compares \(E_{(U,1)}\) with its derivative obtained from the coarea formula \(\frac{d}{dt}E_{(U,1)} = U(t) \int_{\Sigma, h\circ f = t}
\frac{|df|^2}{|d(h\circ f)|}\).
The definition of
\(k\) guarantees \(\frac{|df|^2}{|d(h\circ f)|} \geq k \frac{|d(h\circ f)|}{|dh|^2}\) and
therefore
\[
U^{-1}\frac{d}{dt}E_{(U,1)}(t)\geq \frac{k}{V}E_{(U,1)},
\]
which is equivalent to \( \frac{1}{U}\frac{d}{dt}\Theta_{(U,1)} \geq 0 \).
\end{proof}
We will give another characterisation of the monotonicity in the extended region. For
this, we suppose that \( \Sigma \) is contained in the region \( h_0 \leq h\leq h_1 \) as
in Figure \ref{fig:tube-2},
with boundary consisting of two parts: one (possibly empty) in the level \( h=h_0 \) and one,
called \( \gamma \), in \( h=h_1 \). Let \( \tilde\Sigma \) be the forward tube extension
of \( \Sigma \) built upon \( \gamma \). Clearly the monotonicity of \( \tilde \Sigma \)
and \( \Sigma \), under any weight, are the same in the region \( [h_0, h_1] \). In the
region \( h\geq h_1 \), the monotonicity of \( \tilde\Sigma \) is equivalent to a volume
comparison between \( \Sigma \) and the tube.
\begin{figure}
\caption{Illustration of Lemma \ref{lem:less-than-tube}.}
\label{fig:tube-2}
\end{figure}
\begin{lemma}
\label{lem:less-than-tube}
Let \(t_1 < t_2\) be two numbers in \([h_1,+\infty)\). Then the following statements are
equivalent:
\begin{enumerate}
\item \(\Theta_{(P,c)}(\tilde\Sigma)(t_1) \leq \Theta_{(P,c)}(\tilde\Sigma)(t_2)\)
\item \(A_{(P,c)}(\Sigma)(h_1) \leq A_{(P,c)}(T_\gamma(h_0, h_1))\)
\end{enumerate}
\end{lemma}
\begin{proof}
It follows from straightforward volume addition and the fact that
the density of a tube is constant.
\end{proof}
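For the reader's convenience, here is the computation behind this, assuming the weight is positive so that \( Q_{(P,c)} \) is non-decreasing. By \eqref{eq:weight-tube}, for \( t\geq h_1 \),
\[
A_{(P,c)}(\tilde\Sigma)(t) = A_{(P,c)}(\Sigma)(h_1) + \frac{|\gamma|_{\tilde g}}{\omega_{k-1}}\bigl(Q_{(P,c)}(t)-Q_{(P,c)}(h_1)\bigr),
\]
so that
\[
\Theta_{(P,c)}(\tilde\Sigma)(t) = \frac{|\gamma|_{\tilde g}}{\omega_{k-1}} + \frac{A_{(P,c)}(\Sigma)(h_1)-A_{(P,c)}(T_\gamma(h_0,h_1))}{Q_{(P,c)}(t)},
\]
and the last fraction is non-decreasing in \( t \) precisely when its (constant) numerator is non-positive, which is statement (2).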
By Theorem \ref{thm:monotonicity-warped}, we are mostly interested in the region of
\( M \) where \( U \) is signed. By switching the sign of \( h \), we can assume that
\( U\geq 0 \). When one applies Lemma \ref{lem:less-than-tube} to a minimal submanifold, the assumption that \( \Sigma \) belongs to the
region \( h_0\leq h\leq h_1 \) becomes automatic. This is because
\( h\leq h_1 \) on \( \partial\Sigma \) and the restriction of \( h \) to \( \Sigma \) is
subharmonic, \( \Delta_\Sigma h = kU \geq 0 \), so the maximum principle applies. Also, there is no
critical point of \( h \) in the region \( U > 0 \) because the function \( V(h) = |dh|^2 \) satisfies
\( V' = 2U>0 \).
Let \( (P_1,c_1), (P_2,c_2) \) be two positive weights, that is, \( P_i\geq 0 \). The tube volumes, defined in
\eqref{eq:weight-tube}, will be denoted by \(Q_1(t), Q_2(t)\).
\begin{definition}
We say that \((P_1, c_1)\) is \emph{weaker} than \((P_2,c_2)\) and write \( (P_1,c_1) \ll
(P_2,c_2) \) if \(\frac{P_1}{Q_1}\leq
\frac{P_2}{Q_2}\) for all \( t \). Equivalently, \(\frac{d}{dt}\frac{Q_1}{Q_2}\leq 0\), i.e.\ the
\((P_2,c_2)\)-volume of a \(k\)-dimensional tube increases faster than its \((P_1,c_1)\)-volume.
\end{definition}
\begin{lemma}[Comparison]
\label{lem:comparison}
Let \(\Sigma\subset M^n\) be a \( k \)-submanifold not necessarily minimal and
\((P_1,c_1), (P_2,c_2)\) be two positive weights. Let \(\Theta_1,\Theta_2\) be the corresponding densities.
\begin{enumerate}
\item Suppose that \((P_1,c_1)\ll (P_2,c_2)\) and that \(\Theta_2(\Sigma)\) is
increasing. Then \( \Theta_1 \) is also increasing and we have \(\Theta_1\leq\Theta_2\).
\item On the other hand, if \((P_2, c_2) \ll (P_1, c_1)\) and \(\Theta_2(\Sigma)\) is
increasing. Then we still have \(\Theta_1\geq\Theta_2\).
\end{enumerate}
\end{lemma}
\begin{proof}
One has \(P_1^{-1} \frac{dA_{(P_1,c_1)}}{dt} = P_2^{-1}\frac{dA_{(P_2,c_2)}}{dt}\) from the coarea
formula. Therefore
\begin{equation}
\label{eq:comparison-1}
\frac{Q_1}{P_1}\frac{d\Theta_1}{dt} + \omega_{k-1}V^{\frac{k}{2}-1}\Theta_1 = \frac{Q_2}{P_2}\frac{d\Theta_2}{dt} + \omega_{k-1}V^{\frac{k}{2}-1}\Theta_2
\end{equation}
which can be rearranged into
\begin{equation}
\label{eq:comparison-2}
P_1^{-1} \frac{d}{dt}\left( Q_1(\Theta_1 -\Theta_2) \right) = \left(\frac{Q_2}{P_2}-\frac{Q_1}{P_1}\right) \frac{d\Theta_2}{dt}
\end{equation}
For the second part of the Lemma, it follows from the hypothesis that the RHS of
\eqref{eq:comparison-2} is positive, and therefore \(Q_1(
\Theta_1- \Theta_2)\) is an increasing function. The latter vanishes at \(t=h_0\), which implies \(\Theta_1\geq\Theta_2\)
for all time. For the first part, the RHS of \eqref{eq:comparison-2} is negative by hypothesis and
therefore \(\Theta_2 \geq \Theta_1\). The rest of the conclusion follows by substituting
this into \eqref{eq:comparison-1}.
\end{proof}
The previous proof can also be adapted for rectifiable varifolds and currents. It suffices
to interpret equation \eqref{eq:comparison-1} in a weak sense and to choose a suitable test function.
It follows from Theorem \ref{thm:monotonicity-warped}, Lemma \ref{lem:comparison} and Lemma
\ref{lem:less-than-tube} that:
\begin{corollary}[]
Let \( \Sigma \) be a minimal \( k \)-submanifold whose boundary is a union of its
intersection with \( h = h_0 \) and a \( (k-1) \)-submanifold \( \gamma \) of the level set \( h=h_1
\). Then for any weight \( (P,c) \) weaker than \( (U, 1) \), we have
\[
A_{(P,c)}(\Sigma)\leq A_{(P,c)}(T_\gamma(h_0, h_1)).
\]
\end{corollary}
We will now give a few examples to which Lemma \ref{lem:comparison} applies.
First, for any positive \( P \), \( (P, c_1) \ll (P, c_2) \) if and only if \( c_1 \geq c_2
\).
A less obvious example is in the unit ball of \(\mathbb{R}^n\) equipped with the Poincaré metric \(g_{\mathbb{H}}=
\frac{4}{(1-r^2)^2}g_E\). We choose \( h = \xi \) the time coordinate minimised at the
origin \( O \), given by \(\xi := \frac{1+r^2}{1-r^2}\) with starting level \( h_0=1
\). Since \( h_0 \) is a critical value of \( \xi \), we can forget the constant \( c
\) in the data of a weight. We will define and compare 5
different weights \( P_i,i=1,\dots,5 \), starting with the natural weight \( P_1 =\xi
\) and the uniform weight \( P_2=1 \).
The Euclidean \( k \)-volume corresponds to
weight \(P_3 = (1+\xi)^{-k}\).
The sphere \( S^n \) is the 1-point compactification of \(
\mathbb{R}^n \) and the round metric \( g_S \) induces a \( k \)-volume functional, which
under \( g_{\mathbb{H}} \) corresponds to a weight \(P_4=\xi^{-k}\).
On this sphere, let \( x \) be the Euclidean coordinate (see Example
\ref{ex:xi-S}) that is maximised at \( O \). The \( x \)-weighted \(g_S\)-volume corresponds to \(P_5 = \xi^{-k-1}\).
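These correspondences amount to comparing conformal factors: with the usual identification of \( S^n \) minus a point with \( \mathbb{R}^n \) by stereographic projection (so that \( x \) is maximised at \( O \)), one has
\[
g_E = (1+\xi)^{-2}\, g_{\mathbb{H}}, \qquad g_S = \xi^{-2}\, g_{\mathbb{H}}, \qquad x = \frac{1-r^2}{1+r^2} = \xi^{-1},
\]
so a \( k \)-dimensional volume form picks up the \( k \)-th power of the corresponding factor, and the extra factor \( x=\xi^{-1} \) accounts for \( P_5 \).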
It can be checked that \(P_{i+1}\) is weaker than \(P_i\).
By Lemma \ref{lem:comparison}, we have the following chain of monotonicity:
\begin{equation}
\label{eq:chain}
\text{time-weighted } g_{\mathbb{H}^n} \gg \text{unweighted } g_{\mathbb{H}^n}
\gg g_E \gg \text{unweighted } g_{{S}^n} \gg \text{weighted } g_{{S}^n}
\end{equation}
where any submanifold \(\Sigma\subset \mathbb{B}^n\) whose density is increasing for one volume
functional in the chain automatically has increasing density for every volume functional
following it, and the densities can be compared accordingly. More generally:
\begin{lemma}[]
\label{lem:compensated-comparison}
\begin{enumerate}
\item Let \( M=\mathbb{H}^n \) and \( h \) be a Minkowskian coordinate. Then the natural weight is stronger
than the weight \((1, h_0^{-1}) \).
\item Let \( M = S^n \) and \( h = 1-x \) where \( x \) is a Euclidean coordinate. Then the natural weight is weaker than
the weight \( (1, (1-h_0)^{-1}) \).
\end{enumerate}
\end{lemma}
\begin{proof}
Both statements are inequalities between explicit functions of one real variable: denoting the
weights by \( (P_1,c_1),(P_2,c_2) \) and the tube volumes by \( Q_1,Q_2 \), we want to check
that \( \frac{Q_1}{P_1}\leq \frac{Q_2}{P_2} \). In both cases, it suffices to prove that \(
\frac{d}{dt}\frac{Q_1}{P_1}\leq \frac{d}{dt}\frac{Q_2}{P_2} \) and that \(
\frac{Q_1}{P_1}= \frac{Q_2}{P_2} \) at \( h_0 \).
\end{proof}
Theorems \ref{thm:intro-time}, \ref{thm:intro-space}, \ref{thm:intro-null}
in the introduction follow from Theorem \ref{thm:monotonicity-warped}, Lemma \ref{lem:comparison} and Lemma \ref{lem:compensated-comparison}.
\section{Applications to minimal surfaces in \(\mathbb{H}^n\) and \( S^n \)}
\label{sec:orga09937d}\subsection{Volume estimates}
The first application of Theorem \ref{thm:monotonicity-warped} is to estimate the
intersection between a minimal \( k \)-submanifold and level sets of \( h \).
\begin{corollary}
\label{cor:est-curve}
Suppose, in addition to the hypothesis of Theorem \ref{thm:monotonicity-warped}, that \(
U\geq 0 \) in the interval \( [h_0, t] \).
Let \( \gamma_t \), \( \gamma_0 \) be the intersection of \( \Sigma \) and the level sets \( h=t \), \( h=h_0 \) and \(
|\gamma_t|, |\gamma_0^{T\Sigma}| \) be their volume and parallel volume under the metric
\( \tilde g \). Then \(|\gamma_t| \geq |\gamma_0^{T\Sigma}|.\)
\end{corollary}
\begin{proof}
By Lemma \ref{lem:less-than-tube}, the naturally weighted density at \( t \) is less
than \( \omega_{k-1}^{-1}|\gamma_t| \), but it is \(\omega_{k-1}^{-1} |\gamma_0^{T\Sigma}| \) at \( h_0 \).
\end{proof}
When the level set \( h=h_0 \) is critical and degenerates to a point, as in the case of
time coordinates of \( \mathbb{H}^n \) and Euclidean coordinates of \( S^n \),
Corollary \ref{cor:est-curve} becomes:
\begin{corollary}[]
\label{cor:est-curve-degen}
Let \( p \) be a point on a minimal \( k \)-submanifold \( \Sigma \) of \( \mathbb{H}^n \)
(respectively \( S^n \)) and \( \gamma \) be the intersection of \( \Sigma \) and a
geodesic sphere \( S \) of radius \( r \in (0,+\infty) \) (respectively \(r\in (0, \pi/2]
\)) centred at \( p \). Then the \( (k-1) \)-volume of \( \gamma \) is at least that of a
great \( (k-1) \)-subsphere of \( S \).
In particular, a complete minimal submanifold of \( \mathbb{H}^n \) is contained in the
visual hull of its ideal boundary.
\end{corollary}
Theorem \ref{thm:monotonicity-warped} can also be used to upper bound the volume of
minimal submanifolds of \( \mathbb{H}^n \) by that of the tube competitors. The following
estimates come from combining Lemma \ref{lem:less-than-tube} and Theorems
\ref{thm:intro-time}, \ref{thm:intro-space}, \ref{thm:intro-null}. Because these
monotonicity theorems hold for tube extensions, the boundary \( \gamma \) does not need to
lie in a level set of the Minkowskian coordinate.
\begin{corollary}[]
\label{cor:est-vol}
Let \( \xi \) be a Minkowskian coordinate, \( \Sigma \) a minimal \( k \)-submanifold of \( \mathbb{H}^n \) with boundary a \(
(k-1) \)-submanifold \( \gamma \) and \( T_\gamma \) its \( \xi \)-tube. Suppose that \(
\Sigma \) lies in a region \( \xi\geq a \) with \( a>0 \). Then the volume of \( \Sigma \) is at most
the \( (a, (1, a^{-1})) \)-volume of \( T_\gamma \), which is \(|\gamma|_{\tilde g} Q_\delta(a,t)\) (see \eqref{eq:Qdelta}).
\end{corollary}
When \( \Sigma \) is in the region \( \xi\geq a_1 > a_2 \), the volume
estimate obtained by Corollary \ref{cor:est-vol} with \( a = a_1 \) implies the
version with \( a=a_2 \). It is thus optimal to choose \( a =\min_\Sigma \xi \).
\subsection{Renormalised isoperimetric inequalities}
\label{sec:renorm-isop-ineq}
Corollary \ref{cor:est-vol} consists of three different upper bounds for the volume of minimal
submanifolds, depending on the type of \( \xi \). The version for time coordinate with
\( a=1 \) was proved by Choe and Gulliver and was essential to their proof of the
isoperimetric inequality for minimal surfaces of \( \mathbb{H}^n \)
\cite{Choe.Gulliver92_SharpIsoperimetricInequality}. Complete surfaces, on the other hand,
necessarily run out to infinity and so have infinite area. By studying the area growth,
Graham and Witten defined a finite number, called the renormalised area, that is intrinsic
to the minimal surface. We will briefly review their result and use Corollary
\ref{cor:est-vol} to prove three different renormalised versions of the isoperimetric
inequality.
A \emph{boundary defining function} (bdf) of \(\mathbb{H}^n\) is a non-negative function \(\rho\)
on the compactification \(\overline{\mathbb{H}^n}\) that vanishes exactly on \({S}_{\infty}\) and
exactly to first order.
Such a function is called \emph{special} if \(|d\ln\rho|_{g_{\mathbb{H}^n}} = 1\) on a neighbourhood
of the ideal boundary. It was proved in
\cite{Graham.Witten99_ConformalAnomalySubmanifold} that any complete minimal surface \(\Sigma\) of \( \mathbb{H}^n \) that is
\(C^2\) up to its ideal boundary has the following area expansion under a special
bdf. Here \( A \) denotes the hyperbolic area and \( \bar g = \rho^2 g \).
\begin{equation}
\label{eq:GW-1}
A(\Sigma\cap \{\rho \geq \epsilon\}) = \frac{|\gamma|_{\bar g}}{\epsilon} +
\mathcal{A}_R + O(\epsilon).
\end{equation}
The coefficient \(\mathcal{A}_R\) is called the \emph{renormalised
area} of \(\Sigma\) and is independent of the choice of \(\rho\). Clearly, when \( \rho
\) is only \emph{special up to third order}, that
is \( \rho = \bar \rho + o(\bar\rho^2) \) for a special bdf \(
\bar\rho \), the two metrics \( \rho^2 g\) and \(\bar\rho^2g \)
are equal on \( S_\infty \) and the expansion \eqref{eq:GW-1} still holds for \( \rho \).
\begin{lemma}[]
\label{lem:bdfs}
For any Minkowskian coordinate \( \xi \), the bdf \( \rho = |\xi|^{-1} \) is
special up to third order.
\end{lemma}
\begin{proof}
Clearly when \( \xi \) is of null type, \(\rho = \xi^{-1}\) is the half space coordinate
and so is special. When \( \xi \) is of time type (respectively space type), let \( l \)
be the distance function towards the defining interior point (or hyperplane). Clearly
\( |dl| = 1 \) and so \( \bar \rho = 2\exp(-l) \) is a special bdf. It can be checked
that \( |\xi| = \cosh l \) (respectively \( \sinh l \)), and so
\( \rho = \bar\rho + O(\bar\rho^3)\).
\end{proof}
Now we suppose that \( \Sigma \) is a complete minimal surface of \( \mathbb{H}^n \) in the region
\( \xi \geq a > 0 \). Then the expansion \eqref{eq:GW-1} with \( \rho = |\xi|^{-1} \) can be rewritten as
\begin{equation}
\label{eq:GW-2}
A(\Sigma \cap \{\xi < t\})(t) = |\gamma|_{\bar g} t + \mathcal{A}_R + O(t^{-1})
\end{equation}
Here \( \bar g = \xi^{-2}g_{\mathbb{H}} = \tilde g \) is either the round, flat or doubled
hyperbolic metric on \( S_\infty \).
Corollary \ref{cor:est-vol} gives an upper bound of the LHS of \eqref{eq:GW-2}:
\begin{equation}
\label{eq:est-AR-1}
A(\Sigma\cap\{\xi\leq t\}) \leq |\gamma_t|_{\tilde g}\left( \int_{a}^t dh +
\frac{1}{2a}V(a) \right) = |\gamma| t - |\gamma| \left( \frac{1}{2}a -
\frac{\delta}{2a}\right) + O(t^{-1})
\end{equation}
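Behind the last equality of \eqref{eq:est-AR-1} is the explicit form of \( V \): assuming, as in Example \ref{ex:xi-H} (not restated here), that \( V(h)=h^2+\delta \) with \( \delta\in\{-1,0,+1\} \) according to the type of \( \xi \), one has
\[
\int_{a}^{t} dh + \frac{1}{2a}V(a) = t - a + \frac{a^2+\delta}{2a} = t - \left( \frac{a}{2} - \frac{\delta}{2a} \right).
\]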
For the equality at the end of \eqref{eq:est-AR-1}, we used the fact that \( |\gamma_t|_{\tilde g} = |\gamma|_{\tilde g} + O(t^{-2}) \), which is because minimal surfaces
meet \( S_\infty \) orthogonally. Plugging \eqref{eq:est-AR-1} into
\eqref{eq:GW-2}, we obtain the following estimate of \( \mathcal{A}_R \), which
specialises to Theorems \ref{thm:upper-AR} and \ref{thm:upper-AR-space} in the introduction:
\begin{theorem}[]
\label{thm:compensated-est-AR}
Let \(\xi\) be a Minkowskian coordinate, \( \tilde g \) be the
corresponding round, doubled hyperbolic or flat metric on \(S_\infty\), and \(\Sigma\) be a complete minimal surface that is \(C^2\) near its ideal boundary
\(\gamma\). Suppose that \( \Sigma \) lies in the region \( \xi\geq a > 0 \). Then
\begin{equation}
\label{eq:compensated-AR}
\mathcal{A}_R (\Sigma) + \frac{1}{2}|\gamma|_{\tilde g} \left(a - \frac{\delta}{a}\right)
\leq 0.
\end{equation}
where \( \delta = -1,0,+1 \) depending on the type of \( \xi \) as in Example \ref{ex:xi-H}.
\end{theorem}
\subsection{Antipodalness of minimal submanifolds of \( S^n \)}
\label{sec:volume-minim-subm}
In \( S^n \), we will use Theorem \ref{thm:monotonicity-warped} with the function \( h =
1-x \) instead of the Euclidean coordinate \( x \)
of \( \mathbb{R}^{n+1} \). Recall that the naturally weighted
monotonicity theorem holds only in one half sphere (\( h\in [0,1] \)) and the natural weight
is weaker than the uniform weight. The
second half of Lemma \ref{lem:comparison} applies and gives an
estimate of the (unweighted) volume of minimal submanifolds.
Recall that the notion of \( \epsilon \)-antipodal was given in Definition \ref{def:eps-antipodal}.
It is clear that any subset of \( S^n \) is \( \pi \)-antipodal and that a closed subset is \( 0 \)-antipodal if and only if it is symmetric under the antipodal
map. Closed minimal submanifolds are
\( \frac{\pi}{2} \)-antipodal because they cannot be contained in one half
sphere. This is because for any Euclidean coordinate \( x \),
\begin{equation}
\label{eq:x-weighted-0}
\int_\Sigma x = -\frac{1}{k}\int_\Sigma \Delta x = 0,
\end{equation}
and so \( \Sigma \) cannot be contained in a half sphere where \( x \) is signed. We prove
that the further a minimal submanifold is from being antipodal, i.e. the greater \(
\epsilon \) is, the greater its volume has
to be.
We define \( \epsilon(A,k) \) to be the unique
solution in \( [0,\frac{\pi}{2}) \) of
\begin{equation}
\label{eq:eps-sphere}
\int_{\epsilon}^{\pi/2}\sin^{k-1}t dt + \frac{(1-\cos^2\epsilon)^{k/2}}{k\cos\epsilon}= \omega_{k-1}^{-1}\left(A - \frac{\omega_k}{2}\right)
\end{equation}
It can be checked that \( \epsilon(\cdot, k) \) is a
strictly increasing function on the interval \( [\omega_k,+\infty) \), with \( \epsilon(\omega_k, k) = 0 \), and that it converges to \( \frac{\pi}{2} \) when \( A\to+\infty
\). When \( k=2 \), \eqref{eq:eps-sphere} simplifies to
\[
\cos\epsilon + \frac{1}{\cos\epsilon} = \frac{A}{\pi}- 2
\]
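Indeed, with the convention that \( \omega_j \) is the volume of the unit \( j \)-sphere (so \( \omega_1 = 2\pi \) and \( \omega_2 = 4\pi \)), for \( k=2 \) equation \eqref{eq:eps-sphere} reads
\[
\cos\epsilon + \frac{1-\cos^2\epsilon}{2\cos\epsilon} = \frac{A - 2\pi}{2\pi},
\]
and multiplying by \( 2 \) and simplifying \( 2\cos\epsilon + \frac{1}{\cos\epsilon} - \cos\epsilon \) gives the displayed identity.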
\begin{proposition}
\label{prop:eps-sphere}
A closed minimal \( k \)-submanifold \( \Sigma \) of \( S^n \) with volume \( A \) must be
\( \epsilon(A,k) \)-antipodal. More generally, if \( \Sigma \) contains a point \( p \)
with density \( m \), then it cannot avoid the ball of radius
\( \epsilon(\frac{A}{m},k) \) centred at the antipodal point \( p' \) of \( p \).
\end{proposition}
Before proving Proposition \ref{prop:eps-sphere}, we note that
\eqref{eq:x-weighted-0} can be interpreted as the equal distribution of weighted volume of
\( \Sigma \) between opposing half spheres. More precisely, if \( h_1=
1-x \) and \( h_2 = 1+x \) are warping functions associated to the North pole \( p \) and South pole \( p' \), then
\[
\int_{\Sigma, 0\leq h_1\leq 1} U(h_1) - \int_{\Sigma, 0\leq h_2\leq 1} U(h_2)= \int_{\Sigma}x = 0
\]
Now if \( \Sigma \) has density \( m \) at \( p \), then its naturally weighted volume in the
Northern hemisphere (and hence, by the identity above, also in the Southern one) is at least \( m \)
times the corresponding volume of a totally geodesic \( k \)-subsphere, namely
\( m\,\frac{\omega_{k-1}}{k} \). Moreover, by Lemma \ref{lem:comparison}, we recover
the following lower bound of unweighted volume, which was first proved by Cheng,
Li and Yau using the heat kernel.
\begin{corollary}[cf. {\cite[Corollary 2]{Cheng.etal84_HeatEquationsMinimal}}]
Let \(\Sigma\) be a minimal \( k \)-submanifold of \( S^n \) that has no boundary in the geodesic
ball \(B(p,r)\) centred at \(p\) and of radius \(r \leq
\frac{\pi}{2}\). Suppose that \( \Sigma \) contains \( p \) with multiplicity \( m \).
Then the volume of \(\Sigma\cap B(p,r)\) is at least \(m\) times that of
the \( k \)-ball of radius \(r\):
\[
A(\Sigma\cap B(p,r)) \geq m \omega_{k-1}\int_{t=0}^{r} \sin^{k-1}(t)\,dt
\]
\end{corollary}
\begin{proof}[Proof of Proposition \ref{prop:eps-sphere}]
Suppose that \( \Sigma \) is at distance \( \epsilon \) from \( p' \), which means \(
h_2 \geq a := 1 - \cos\epsilon \) on \( \Sigma \). On the Southern hemisphere, \( \Sigma \) is
\( (a, (U, 1)) \)-monotone with density \(\geq m \) at \( h_2 = 1 \). Lemma
\ref{lem:comparison} says that its \( (a,(1, \frac{1}{1-a})) \)-density on this hemisphere is at
least \( m \). So the volume \( A_- \) of \( \Sigma \) in this hemisphere can be bounded by
\[
A_-\geq m.\omega_{k-1}\left[ \int_{a}^1 V(t)^{k/2 - 1} dt + \frac{1}{k}.\frac{1}{1-a} V(a)^{k/2}\right]
\]
where \( V(t) = 2t - t^2 \). On the Northern hemisphere, the unweighted density is at least
\( m \) and so the volume \( A_+ \) of \( \Sigma \) there is at least \( m
\frac{\omega_{k}}{2} \). Therefore
\[
A = A_+ + A_- \geq m \left[ \frac{\omega_k}{2} + \omega_{k-1}\left( \int_{a}^1(2t -
t^2)^{k/2-1}dt + \frac{(2a - a^2)^{k/2}}{k(1-a)} \right) \right]
\]
So \( \epsilon \) satisfies
\begin{equation}
\label{eq:A-eps}
\frac{A}{m}\geq \frac{\omega_k}{2} + \omega_{k-1} \left[ \int_{\epsilon}^{\pi/2}\sin^{k-1}t dt + \frac{(1 - \cos^2\epsilon)^{k/2}}{k\cos\epsilon} \right].
\end{equation}
\end{proof}
One sees either from the domain of definition of \( \epsilon(\cdot,k) \) or from \eqref{eq:A-eps} that:
\begin{corollary}[]
\label{cor:yau-conj}
If a closed minimal \( k \)-submanifold \( \Sigma \) of \( S^n \) has
multiplicity \( m \) at a point, then its volume is at least \( m\omega_k \).
\end{corollary}
\begin{remark}[]
\label{rem:yau-conj}
It was conjectured by Yau \cite[Problem 31]{Yau12_ChernGreatGeometer} that the volume of a
non-totally geodesic minimal hypersurface of \( S^n \) is lower bounded by that of the
Clifford tori. Corollary \ref{cor:yau-conj} shows that the volume of a non-embedded
minimal hypersurface of \( S^n \) is at least \( 2\omega_{n-1} \) and thus confirms the
conjecture for non-embedded hypersurfaces. By a different method, Ge and Li
\cite{Ge.Li22_VolumeGapMinimal} give a slightly weaker version of Corollary
\ref{cor:yau-conj}, which is also sufficient to confirm Yau's conjecture in this case.
\end{remark}
Proposition \ref{prop:eps-sphere} can be tested on the Veronese surface, which is
the image of the conformal harmonic immersion \( f: S^2 \longrightarrow S^4 \):
\[
f(x,y,z) = \left(\sqrt{3}xy, \sqrt{3}yz, \sqrt{3}zx, \frac{\sqrt{3}}{2} (x^2 - y^2),
\frac{1}{2}(x^2 + y^2 - 2z^2)\right).
\]
This map takes the same value on antipodal points of \( S^2 \) and descends to an
embedding of \( \mathbb{R}P^2 \) into \( S^4 \). Because \( f^*g_{S^4} = 3g_{S^2} \), the
image has area \(6\pi \). The antipodal point of \( f(0,0,1) \) is at distance
\( \epsilon = \frac{\pi}{3} \) to the surface. Because \( f \) is
\( SO(3) \)-equivariant, this is also the
smallest \( \epsilon \) such that the surface is \( \epsilon \)-antipodal.
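The value \( \epsilon=\frac{\pi}{3} \) can be checked directly: the antipode of \( f(0,0,1)=(0,0,0,0,-1) \) is \( p'=(0,0,0,0,1) \), and for a point of the surface
\[
\langle p', f(x,y,z)\rangle = \tfrac{1}{2}\,(x^2+y^2-2z^2) = \tfrac{1}{2} - \tfrac{3}{2}z^2 \leq \tfrac{1}{2},
\]
with equality exactly when \( z=0 \); hence the spherical distance from \( p' \) to the surface is \( \arccos\tfrac{1}{2} = \tfrac{\pi}{3} \).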
\subsection{Minimising cones}
Area-minimising cones in Euclidean space appear naturally as \emph{oriented tangent cones}
of an area-minimising surface \cite{Morgan16_GeometricMeasureTheory}. It follows from
Lemma \ref{lem:less-than-tube} that in \(\mathbb{R}^n\) and \(\mathbb{H}^n\), sections of
a minimising cone cannot be spanned by a minimal submanifold. By rescaling, a hyperbolic
minimising cone is also Euclidean minimising. The converse is also true, as pointed out by
Anderson \cite{Anderson82_CompleteMinimalVarieties}. More precisely, let \( \gamma \) be a
\( (k-1) \)-submanifold of a geodesic sphere centred at the origin of the Poincaré model
and suppose that the radial cone built on \( \gamma \) is Euclidean-minimising. Anderson
proved that the cone is also hyperbolic minimising. The following result is an
improvement of this, in the sense that it also rules out complete minimal submanifolds of \( \mathbb{H}^n \).
\begin{proposition}
\label{prop:minimising-cone}
Let \( \gamma \) be a \( (k-1) \)-submanifold of the sphere at infinity (or a geodesic sphere) such that in the
Poincaré model, the radial cone built upon it is Euclidean minimising. Then there is no
minimal \( k \)-submanifold of \( \mathbb{H}^n \) with ideal boundary (respectively boundary) \( \gamma \).
\end{proposition}
\begin{proof}
If there were such a minimal submanifold, then it would satisfy the Euclidean
monotonicity and thus by Lemma \ref{lem:less-than-tube} would have Euclidean area smaller
than that of the cone.
\end{proof}
To illustrate Proposition \ref{prop:minimising-cone}, let \( \gamma_0 \) be the standard
Hopf link given by the intersection of the pair of planes \( zw=0 \) with the unit sphere
\( S^3 \) of \( \mathbb{R}^4\cong \mathbb{C}^2 \). Since the pair of planes is Euclidean
minimising, it follows from Proposition \ref{prop:minimising-cone} that it is the only
minimal surface of \( \mathbb{H}^4 \) filling \( \gamma_0 \). Now let \(\gamma_\epsilon\) be
the perturbed Hopf links cut out by the complex curves \(zw=\epsilon\), \(\epsilon\in (0,
\frac{1}{2})\). Clearly the pairs of planes filling them are no longer
Euclidean minimising and so the proof of Proposition \ref{prop:minimising-cone} fails. It
is possible to write down explicitly a family of minimal annuli of \( \mathbb{H}^4 \)
filling the
\( \gamma_\epsilon \).
In fact, let \(g\) be a radially conformally flat metric, that is,
\(g = e^{2\varphi(\rho)} g_E\) where \(\varphi\) is a function of \(\rho:=|z|^2 + |w|^2\);
we will exhibit a family \( M_{C,\varphi} \) of \( g \)-minimal
annuli that are invariant by the \({S}^1\) action
\begin{equation}
\label{eq:hopf-action}
(z,w)\mapsto(ze^{i\theta}, we^{-i\theta})
\end{equation}
If \(\mathbb{C}^2\) is identified with the
space of quaternions by \((z,w) \mapsto z + jw\), this action corresponds to
multiplication on the left by \(e^{i\theta}\).
These surfaces are obtained by rotating a curve in the real plane \(\mathop{\rm Im}\nolimits z = \mathop{\rm Im}\nolimits w =
0\). Such a curve is given by an equation \(zw = F(\rho)\), where \(F\) is a real function of \(\rho\). The minimal surface
equation is equivalent to the following second order ODE on \(F\)
\[
\frac{X'}{X} - \frac{Y'}{Y} + \frac{1}{\rho} +
\frac{\varphi'}{2}\left[8+\rho\left(\frac{X^2}{Y^2}-4\right)\frac{F'}{F}
\right]=0,\quad\text{where }X=F-F'\rho,\quad Y = \frac{F'}{2}\sqrt{\rho^2 - 4F^2}.
\]
This can be reduced to a first order ODE using symmetry. When we rotate a
solution curve in the real plane by an angle \( \alpha \), the new
curve is still a solution because this rotation is equivalent to multiplying \(z+jw\) on the right by \(e^{j\alpha}\)
and so commutes with the left multiplication by \(e^{i\theta}\).
Concretely, by a change of variable \(F=\frac{\rho}{2}\sin\theta(\rho)\) the ODE reduces
either to the first order Bernoulli equation
\(\theta'^2 = -\rho^2 + C^2\rho^4 e^{4\varphi}\) for a parameter \(C > 0\), or to
\(\theta'=0\) which corresponds to pairs of 2-planes. The profile curve can be described
in a more geometric fashion, as in Proposition \ref{prop:min-annuli}. Note that the curve
\eqref{eq:min-annuli} is a hyperbola when \( g=g_E \) and so the minimal surfaces
are the complex curves \(zw = \mathop{\rm const }\nolimits\).
\begin{proposition}[]
\label{prop:min-annuli}
Let \(M_{C,\varphi}\) be the surface in \(\mathbb{C}^2\) given by rotating by
\eqref{eq:hopf-action} the following curve in \( \mathbb{R}^2 \):
\begin{equation}
\label{eq:min-annuli}
\sin^2\psi = C^2\frac{e^{-4\varphi}}{\rho^2}, \quad C > 0
\end{equation}
Here \(\psi\) is the angle formed by the tangent of the curve at a point \(p\) and the
radial direction \(\overrightarrow{Op}\). Then \(M_{C,\varphi}\) is
minimal under the metric \(g =
e^{2\varphi}g_E\). Up to \(SO(4)\), these annuli and the 2-planes are the only
minimal surfaces obtained as the orbit of a real curve under the rotation \eqref{eq:hopf-action}.
\end{proposition}
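As a consistency check of the Euclidean case mentioned above, take \( \varphi = 0 \) and the real hyperbola \( zw = c \), \( c>0 \): its tangent direction at \( (z,w) \) is proportional to \( (z,-w) \), so the angle \( \psi \) with the radial direction satisfies
\[
\sin\psi = \frac{|z\cdot(-w) - w\cdot z|}{z^2+w^2} = \frac{2c}{\rho},
\]
that is \( \sin^2\psi = 4c^2/\rho^2 \), which is \eqref{eq:min-annuli} with \( \varphi=0 \) up to renaming the constant; the corresponding surfaces are indeed the complex curves \( zw=\mathop{\rm const }\nolimits \).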
When \( g \) is the hyperbolic metric, the renormalised area of the annuli \( M_C \) can
be computed to be
\begin{equation}
\label{eq:ARMC}
\mathcal{A}_R(M_C) = 4\pi \int_{a}^\infty\left[ \frac{\xi^2 - 1}{\sqrt{(\xi^2-1)^2 - C^2}} - 1\right] d\xi -
4\pi a
\end{equation}
where \( a = (C+1)^{1/2}\). Note that just by Theorem
\ref{thm:upper-AR}, we know the renormalised area of \( M_C \) tends to \( -\infty \) as \(
C\to+\infty \).
Another radially conformally flat metric is the round sphere, with
\(e^\varphi = \frac{2}{1+\rho}\). The parameter \(C\) is in \((0,1)\). Seen from the
origin, the solution curve sweeps out an angle \(\theta_C\) between \(\frac{\pi}{2}\)
(\(C=0\), the surface is a totally geodesic \({S}^2\) ) and \(\frac{\pi}{\sqrt{2}}\)
(\(C=1\), the surface is (part of) the Clifford torus). In particular, if \(\theta_C\) is
a rational multiple of \(\pi\), we can close the surface by repeating the profile
curve. Benjamin Aslan pointed out to the author that the minimal annuli in this case are
bipolar transform of the Hsiang--Lawson annuli \(\tau_{0,1,\alpha} \). In
\cite{Hsiang.Lawson71_MinimalSubmanifoldsLow}, Hsiang and Lawson constructed a family
\(\tau_{p,q,\alpha} \) of minimal annuli in \( S^3 \) that are invariant by the
\( (p,q) \)-rotation
\[
(z,w) \longrightarrow (e^{ip\theta}z, e^{iq\theta}w)
\]
Here \( S^3 \) is seen as the unit sphere of \( \mathbb{C}^2 \). The bipolar transform was
defined by Lawson \cite{Lawson70_CompleteMinimalSurfaces} by wedging a conformal harmonic
map \(f: \Sigma \longrightarrow {S}^3\subset \mathbb{R}^4 \) with its \( \mathbb{R}^4
\)-valued Gauss map. The result is a map from \( \Sigma \) to the unit sphere
of \( \Lambda^2 \mathbb{R}^4 \cong \mathbb{R}^6 \), which is also conformal and
harmonic. This transforms a minimal surface of \({S}^3 \) into a minimal surface of
\({S}^5 \). If a surface is invariant by the \( (p,q) \)-rotation, its
bipolar transform is contained in a subsphere \( S^4 \) and is invariant by a
\( (p+q, p-q)\)-rotation. When \( p=0,q=1 \), they are the annuli \( M_C \).
\section{Weighted monotonicity in spaces with curvature bounded from above}
\label{sec:bounded-curv}
Fix a point \(O\) in a Riemannian manifold \((M^n,g)\), and let \(r_{\rm inj}\) be the
injectivity radius. Let \( r \) be the distance function to \(O\). Its Hessian at
a point \( p\in B(O,r_{\rm inj}) \) is:
\begin{equation}
\label{eq:hess-r}
\hess r (\partial_r, \cdot) =0, \qquad \hess r (v, v)=: I(v),\ \forall v\perp \partial_r
\end{equation}
where \(I(v)=\int_{\Gamma}\left(|\dot V|^2 - K_M(\dot\gamma,V)|V|^2\right)\) is the index
form of the Jacobi field \(V\) along the geodesic \(\Gamma\) between \(O\) and \(p\) that
interpolates \(0\) at \(O\) and \(v\) at \(p\).
When the sectional curvature
satisfies \(K_M\leq -a^2\) (respectively \(b^2\)), one can check that
\(I(v) \geq a\coth(ar)|v|^2\) (respectively \(b\cot (br)|v|^2\)). This gives an estimate of \(\hess r\) on
directions orthogonal to \(\partial_r\).
\begin{proposition}[]
\label{prop:hess-r}
Inside \(B(O,r_{\rm inj})\),
\begin{enumerate}
\item If \(K_M\leq -a^2\) then \(\hess (a^{-2}\cosh ar) \geq \cosh ar.\ g\).
\item If \(K_M\leq b^2\) then \(\hess (-b^{-2}\cos br) \geq \cos br.\ g\) when \(r\leq \frac{\pi}{b}\).
\end{enumerate}
\end{proposition}
This means that the functions \(h=a^{-2}\cosh ar\) and \(h=-b^{-2}\cos br\) satisfy \(\hess h \geq
U.g\).
Here \(U= a^2 h\) (respectively \(-b^2 h\)) and \(V = |\nabla h|^2 = a^2 h^2 - a^{-2}\) (respectively \(b^{-2} - b^2 h^2\)), and we still have \(U = \frac{1}{2}V'\).
When \(M\) is \(\mathbb{H}^n\) or \({S}^n\), we recover the time-coordinate and
the Euclidean coordinate in Examples \ref{ex:xi-H} and \ref{ex:xi-S}.
As explained in Remark \ref{rem:monotonicity-warped}, we still have a weighted monotonicity
theorem for \(h\). It is more convenient here to see the weight as a function of \(r\)
instead of \(h\). The \emph{\(P\)-volume} functional of a \( k \)-submanifold is
\(A_{P}(\Sigma)(t):= \int_{\Sigma, r\leq t}P(r)\) and the \emph{\(P\)-density} is
\(\Theta_P:= \frac{A_P}{Q_P}\) where \( Q_P\) is the \(P\)-volume of a ball of radius
\(t\), not in \( M \) but in the comparison space form:
\begin{equation}
\label{eq:Q-geodesic}
Q_P(t):= \begin{cases}
\omega_{k-1}\int_{r\leq t}P(r)\frac{\sinh^{k-1} ar}{a^{k-1}}dr , & \text{when } K_M\leq -a^2 \\
\omega_{k-1}\int_{r\leq t}P(r)\frac{\sin^{k-1} br}{b^{k-1}}dr , & \text{when } K_M\leq b^2
\end{cases}
\end{equation}
The \emph{naturally weighted volume} and \emph{naturally weighted density} correspond to
\( P=U=\cosh ar \) (respectively \( \cos br \)), for which \( Q_U(t)=
\frac{\omega_{k-1}}{k} a^{-k}{\sinh^k at} \) (\( \frac{\omega_{k-1}}{k}b^{-k} \sin^k bt \) respectively).
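These closed forms are just \eqref{eq:Q-geodesic} with \( P=U \); for instance, when \( K_M\leq -a^2 \),
\[
Q_U(t) = \omega_{k-1}\int_0^t \cosh (ar)\,\frac{\sinh^{k-1}(ar)}{a^{k-1}}\,dr = \frac{\omega_{k-1}}{k}\,\frac{\sinh^{k}(at)}{a^{k}},
\]
and similarly with \( \sin \) in place of \( \sinh \) when \( K_M\leq b^2 \).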
We define the \emph{eligible interval} \([0,r_{ \max})\) to be \([0, r_{\rm inj})\) when
\(K_M\leq -a^2\) and \([0, \min (r_{\rm inj}, \frac{\pi}{2b}))\) when \(K_M\leq b^2\).
\begin{theorem}[]
\label{thm:monotonicity-geodesic}
Let \(M\) be a Riemannian manifold with sectional curvature \(K_M\leq -a^2\) (or \(K_M\leq b^2\))
and \(\Sigma\subset M\) be an extension of a minimal \( k \)-submanifold by geodesic rays. Then
the naturally weighted density \(\Theta_U(\Sigma)(t)\) is an increasing function in \( [0,
r_{\max})\).
\end{theorem}
\begin{corollary}
Let \(\Sigma\subset M\) be a minimal \( k \)-submanifold containing the point \(O\) with
multiplicity \(m\) and \(l_t\) be the \( (k-1) \)-volume of the intersection \(
\Sigma\cap\{r=t\}\) under the metric \( g \) of \( M \).
For all \(t\) in the eligible
interval, one has\begin{equation}
\label{eq:bcurv-est}
m\,Q_U(t) \leq A_U(\Sigma)(t)\leq \frac{l_t}{k}V(t)^{1/2}
\end{equation}
In particular,
\[
l_t \geq \begin{cases}
m\omega_{k-1}\left(\frac{\sinh at}{a}\right)^{k-1} , & \text{if $K_M\leq -a^2$} \\
m\omega_{k-1}\left(\frac{\sin bt}{b}\right)^{k-1} , & \text{if $K_M\leq b^2$}
\end{cases}
\]
\end{corollary}
\begin{proof}
The first half of \eqref{eq:bcurv-est} follows directly from Theorem
\ref{thm:monotonicity-geodesic}. The second half is more subtle because the \(P\)-volume
of a geodesic cone in \(M\) is no longer proportional to \( Q \) (so Lemma
\ref{lem:less-than-tube} does not generalise). Instead, the upper bound of \(A_U\)
follows from \eqref{eq:stokes}:
\( A_U(\Sigma)(t) \leq \frac{1}{k}\int_{\Sigma, r=t}|\nabla^\Sigma h| \leq
\frac{V(t)^{1/2}}{k}l_t. \) The function \( V \) can also be rewritten as
\( V(r) = a^{-2}\sinh^2ar \) ( \( b^{-2}\sin^2 br \) respectively).
\end{proof}
With a proof identical to that of Lemma \ref{lem:comparison}, we have:
\begin{lemma}[Comparison]
\label{lem:comparison-geodesic}
Let \(\Sigma\) be any \( k \)-submanifold of \( M \) not necessarily minimal and \(P_1,P_2\) be two
non-negative, continuous weights. Let \(Q_1, Q_2\) be defined from \(P_1,
P_2\) as in \eqref{eq:Q-geodesic} and \( \Theta_1,\Theta_2
\) be the two densities. In the eligible interval, suppose that \( \Theta_2 \) is
increasing, then:
\begin{enumerate}
\item If \(P_1\) is weaker than \(P_2\), i.e. \(\frac{P_1}{Q_1} \leq
\frac{P_2}{Q_2}\), then \(\Theta_1 \) is also increasing and \(\Theta_1\leq\Theta_2\).
\item If \(P_2\) is weaker than \(P_1\), then \(\Theta_1\geq\Theta_2\).
\end{enumerate}
\end{lemma}
Note that to compare weights, it is necessary to mention the curvature bound \(a\) or
\(b\).
\begin{lemma}[]
\label{lem:compare-geodesic}
For any \(a, b\geq 0\) and \(u\geq v \geq 0\),
\begin{enumerate}
\item \(P_1 = \cosh v r\) is weaker than \(P_2 = \cosh ur\) when \(K_M\leq -a^2\),
\item \(P_1 = \cos ur\) is weaker than \(P_2 = \cos vr\) on the interval \([0, \frac{\pi}{2u}]\) when \(K_M\leq b^2\).
\end{enumerate}
\end{lemma}
Lemma \ref{lem:compare-geodesic} can be seen as a continuous version of the chain
\eqref{eq:chain}. In particular, when \(K_M\leq -a^2\) the monotonicity theorem holds for
any weight \(P_u =\cosh ur\) with \(u\in [0, a)\), including the uniform weight. When
\(K_M\leq b^2\), the monotonicity theorem holds for any weight \(P_u = \cos ur\) with
\(u\in[b,\infty)\). The second part of Lemma \ref{lem:compare-geodesic} can be used
to obtain a lower bound of volume in this case.
\begin{proposition}[]
\label{prop:uniform-area-sphere}
Suppose that \(K_M\leq b^2\). Let \( \Sigma \) be a minimal \( k \)-submanifold with
multiplicity \( m \) at the point \(O\) and no boundary in the interior of the ball
\(B(O,t)\) of radius \(t<r_{\max}\). Then
\[
A(\Sigma\cap B(O,t)) \geq m \omega_{k-1}\int_{r=0}^{t} \frac{\sin^{k-1}(br)}{b^{k-1}}dr.
\]
\end{proposition}
In particular, if \(M\) is simply connected, with curvature pinched between
\(\frac{b^2}{4}\) and \(b^2\) and \(\Sigma\subset M\) is a closed minimal submanifold,
then \( A(\Sigma)\geq \frac{1}{2}\omega_k b^{-k}\). A weaker version of this, with
\(\frac{1}{2}\omega_k\) replaced by the volume of the unit \(k\)-ball, was proved in
\cite{Hoffman.Spruck74_SobolevIsoperimetricInequalities}.
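The constant comes from evaluating the bound of Proposition \ref{prop:uniform-area-sphere} at \( t=\frac{\pi}{2b} \) (the pinching and simple connectedness are used to guarantee that this radius lies in the eligible interval): with \( m\geq 1 \),
\[
m\,\omega_{k-1}\int_{0}^{\pi/(2b)} \frac{\sin^{k-1}(br)}{b^{k-1}}\,dr
= \frac{m}{b^{k}}\,\omega_{k-1}\int_{0}^{\pi/2}\sin^{k-1}s\,ds
= \frac{m}{2}\,\omega_k\, b^{-k} \;\geq\; \frac{1}{2}\,\omega_k\, b^{-k},
\]
where \( \omega_{k-1}\int_0^{\pi/2}\sin^{k-1}s\,ds = \frac{\omega_k}{2} \) is half the volume of the unit \( k \)-sphere.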
\end{document} |
\begin{document}
\title{SILVar: Single Index Latent Variable Models}
\author{Jonathan Mei
and~Jos\'{e}~M.F.~Moura
\thanks{This work was partially funded by NSF grants CCF 1011903 and CCF
1513936.}
\thanks{This paper appears in: IEEE Transactions on Signal Processing, online March 21, 2018}
\thanks{Print ISSN: 1053-587X, Online ISSN: 1941-0476}
\thanks{Digital Object Identifier: 10.1109/TSP.2018.2818075}}
\markboth{IEEE Transactions on Signal Processing}
{Mei \MakeLowercase{\textit{et al.}}: SILVar}
\maketitle
\begin{abstract}
A semi-parametric, non-linear regression model in the presence of latent variables is introduced. These latent variables can correspond to unmodeled phenomena or unmeasured agents in a complex networked system. This new formulation allows joint estimation of certain non-linearities in the system, the direct interactions between measured variables, and the effects of unmodeled elements on the observed system. The particular form of the model adopted is justified, and learning is posed as a regularized empirical risk minimization. This leads to classes of structured convex optimization problems with a ``sparse plus low-rank'' flavor. Relations between the proposed model and several common model paradigms, such as those of Robust Principal Component Analysis (PCA) and Vector Autoregression (VAR), are established. Particularly in the VAR setting, the low-rank contributions can come from broad trends exhibited in the time series. Details of the algorithm for learning the model are presented. Experiments demonstrate the performance of the model and the estimation algorithm on simulated and real data.
\end{abstract}
\section{Introduction}
How real is this relationship? This is a ubiquitous question that presents itself not only in judging interpersonal connections but also in evaluating correlations and causality throughout science and engineering. Two reasons for reaching incorrect conclusions based on observed relationships in collected data are chance and outside influences. For example, we can flip two coins that both show heads, or observe that today's temperature measurements on the west coast of the continental USA seem to correlate with tomorrow's on the east coast throughout the year. In the first case, we might not immediately conclude that coins are related, since the number of flips we observe is not very large relative to the possible variance of the process, and the apparent link we observed is up to chance. In the second case, we still may hesitate to use west coast weather to understand and predict east coast weather, since in reality both are closely following a seasonal trend.
Establishing interpretable relationships between entities while mitigating the effects of chance can be achieved via sparse optimization methods, such as regression (Lasso)~\cite{tibshirani_regression_1996} and inverse covariance estimation~\cite{friedman_sparse_2008}. In addition, the extension to time series via vector autoregression~\cite{bolstad_causal_2011,basu_regularized_2015} yields interpretations related to Granger causality~\cite{granger_investigating_1969}. In each of these settings, estimated nonzero values correspond to actual relations, while zeros correspond to absence of relations.
However, we are often unable to collect data to observe all relevant variables, and this leads to observing relationships that may be caused by common links with those unobserved variables. The hidden variables in this model are fairly general; they can possibly model underlying trends in the data, or the effects of a larger network on an observed subnetwork. For example, one year of daily temperature measurements across a country could be related through a graph based on geographical and meteorological features, but all exhibit the same significant trend due to the changing seasons. We have no single sensor that directly measures this trend. In the literature, a standard pipeline is to de-trend the data as a preprocessing step, and then estimate or use a graph to describe the variations of the data on top of the underlying trends~\cite{sandryhaila_discrete_2013,sandryhaila_discrete_2014,mei_signal_2017}.
Alternatively, attempts have been made to capture the effects of hidden variables via sparse plus low-rank optimization~\cite{chandrasekaran_latent_2012}. This has been extended to time series~\cite{jalali_learning_2011}, and even to a non-linear setting via Generalized Linear Models (GLMs)~\cite{bahadori_fast_2013}. What if the form of the non-linearity (link function) is not known? Regression using a GLM with an unknown link function is also known as a Single Index Model (SIM). Recent results have shown good performance when using SIMs for sparse regression~\cite{ganti_learning_2015}.
So far, when choosing a model, current methods will impose a fixed form for the (non-)linearity, assume the absence of any underlying trends, perform separate pre-processing or partitioning in an attempt to remove or otherwise explicitly handle such trends, or take some combination of these steps.
To address all of these issues, we propose a model with a non-linear function applied to a linear argument that captures the effects of latent variables, which manifest as unmodeled trends in the data. Thus, we introduce the Single Index Latent Variable (SILVar) model, which uses the SIM in a sparse plus low-rank optimization setting to enable general, interpretable multi-task regression in the presence of unknown non-linearities and unobserved variables. That is, we propose the SILVar model not only to use for regression in difficult settings, but also as a tool for uncovering hidden relationships buried in data.
First, we establish notation and review prerequisites in Section~\ref{sec:bg}. Next, we introduce the SILVar model and discuss several paradigms in which it can be applied in Section~\ref{sec:SILVar}. Then, we detail the numerical procedure for learning the SILVar model in Section~\ref{sec:learn_SILVar}. Finally, we demonstrate the performance via experiments on synthetic and real data in Section~\ref{sec:exp}.
\section{Background}
\label{sec:bg}
In this section, we introduce the background concepts and notation used throughout the remainder of the paper.
\subsection{Bregman Divergence and Convex Conjugates}
For a given convex function $\phi$, the Bregman Divergence~\cite{bregman_relaxation_1967} associated with $\phi$ between $\y$ and $\x$ is denoted
\begin{equation}
\label{eq:breg_div}
D_\phi(\y \| \x ) = \phi(\y) - \phi(\x) - \nabla\phi(\x)^\top (\y-\x).
\end{equation}
The Bregman Divergence is a non-negative (asymmetric) quasi-metric. Two familiar special cases of the Bregman Divergence are Euclidean Distance when $\phi(\x)=\frac{1}{2}\|\x\|_2^2$ and Kullback-Leibler Divergence when $\phi(\x)=\sum\limits_i x_i\log x_i$ in the case that $\x$ is a valid probability distribution (i.e., $\x\ge 0$ and $\sum\limits_i x_i = 1$).
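For concreteness, the first special case works out as follows: with $\phi(\x)=\frac{1}{2}\|\x\|_2^2$ we have $\nabla\phi(\x)=\x$, so
\[
D_\phi(\y \| \x) = \tfrac{1}{2}\|\y\|_2^2 - \tfrac{1}{2}\|\x\|_2^2 - \x^\top(\y-\x) = \tfrac{1}{2}\|\y-\x\|_2^2,
\]
i.e., one half of the squared Euclidean distance.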
The convex conjugate $\phi_*$ of a function $\phi$ is given by
\begin{equation}
\label{eq:conv_conj}
\phi_*(\x) \overset{\Delta}{=} \sup_\y \; \y^\top\x-\phi(\y).
\end{equation}
The convex conjugate arises naturally in optimization problems when deriving a dual form for the original (primal) problem. For closed, convex, differentiable, 1-D function $\phi$ with invertible gradient, the following properties hold
\begin{equation}
\label{eq:conv_conj_prop}
\begin{aligned}
&\phi_*(x)=x(\nabla\phi)^{-1}(x)-\phi((\nabla\phi)^{-1}(x)) \\
&(\nabla \phi)^{-1} =\nabla \phi_* \qquad\qquad (\phi_*)_* =\phi
\end{aligned}
\end{equation}
where $(\cdot)^{-1}$ denotes the inverse function, not the multiplicative inverse. In words, these properties give an alternate form of the conjugate in terms of gradients of the original function, state that the function inverse of the gradient is equal to the gradient of the conjugate, and state that the conjugate of the conjugate is the original function.
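As a quick, self-contained illustration of these properties (this particular $\phi$ is not used later), take the scalar function $\phi(y)=e^y$. Then
\[
\phi_*(x) = \sup_y\;\{xy - e^y\} = x\log x - x \quad (x>0), \qquad \nabla\phi_*(x) = \log x = (\nabla\phi)^{-1}(x),
\]
and conjugating once more returns $e^y$, consistent with $(\phi_*)_* = \phi$.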
\subsection{Generalized Linear and Single-Index Models}
\label{sec:bg:GLM_SIM}
The Generalized Linear Model (GLM) can be described using several parameterizations. We adopt the one based on the Bregman Divergence~\cite{banerjee_clustering_2005}. For observations $y_i\in\Rbb$ and $\x_i\in\Rbb^{p}$, let $\y=(y_1\ldots y_n)^\top$, $\X=(\x_1\ldots\x_n)$. The model is parameterized by 1) a non-linear link function $g=(\nabla \phi)^{-1}$ where $\phi$ is a closed, convex, differentiable, invertible function; and 2) a vector $\a\in\Rbb^p$. We have the model written as
\begin{equation}
\label{eq:GLM_expec}
\Ebb[y_i | \x_i] = g \left(\a^\top \x_i \right),
\end{equation}
(note that some references use $g^{-1}$ as the link function where we use $g$) and the corresponding likelihood function written as
\begin{equation}
\label{eq:GLM}
P(y_i | \x_i) = \exp \left\{-D_\phi \left(y_i \| g \left(\a^\top \x_i \right) \right) \right\}
\end{equation}
where the likelihood is expressed with respect to an appropriate base measure~\cite{acharyya_parameter_2015}, which can be omitted for notational clarity.
Let $G=\phi_*$ and $g=\nabla G=(\nabla\phi)^{-1}$. Then, for data $\{\x_i,y_i\}$ with conditionally independent $y_i$ given $\x_i$ (note that this is not necessarily assuming that $\x_i$ are independent), learning the model $\a$ assuming $g$ is known can be achieved via empirical risk minimization,
\begin{equation}
\label{eq:GLM_MLE}
\begin{aligned}
\what{\a} &= \underset{\a}{\arg\!\max}\; \prod\limits_{i=1}^{n}\; \exp \left\{-D_\phi \left(y_i \| g \left(\a^\top \x_i\right) \right) \right\}\\
&= \argmin[\a] \sum\limits_{i=1}^{n}\;D_\phi \left(y_i \| g \left(\a^\top \x_i \right) \right)\\
&=\argmin[\a] \sum\limits_{i=1}^{n} \begin{aligned}[t]&\left[ \phi\left(y_i\right)- \phi\left(g\left(\a^\top\x_i\right)\right) \right.\\
&\quad \left. -\nabla\phi\left(g\left(\a^\top\x_i\right)\right)\left(y_i - g\left(\a^\top\x_i\right)\right) \right]\end{aligned}\\
&\overset{(a)}{=}\argmin[\a] \sum\limits_{i=1}^{n} \begin{aligned}[t]&\left[ G_*\left(y_i\right) - y_i \left(\a^\top \x_i\right) - \phi\left(g\left(\a^\top\x_i\right)\right) \right.\\
&\quad \left. +\left(\a^\top\x_i\right) g\left(\a^\top\x_i\right) \right]\end{aligned}\\
&\overset{(b)}{=} \argmin[\a] \sum\limits_{i=1}^{n}\;\left[ G_*\left(y_i\right) + G\left(\a^\top \x_i\right) - y_i \left(\a^\top \x_i\right) \right]\\
&= \argmin[\a] \what{F}_1\left(\y,\X,g,\a\right)
\end{aligned}
\end{equation}
where equality $(a)$ arises from the second property in~\eqref{eq:conv_conj_prop}, equality $(b)$ arises from the first property, and we introduce
{\small
\begin{equation}
\label{eq:SIM_MLE_conv_obj}
\what{F}_1(\y,\X,g,\a) \overset{\Delta}{=}\frac{1}{n}\sum\limits_{i=1}^{n}\;\left[ G_*(y_i) + G\left(\a^\top \x_i \right) - y_i \left(\a^\top \x_i \right) \right]
\end{equation}
}for notational compactness.
The Single Index Model (SIM)~\cite{ichimura_semiparametric_1993} takes the same form as the GLM. The crucial difference is in the estimation of the models. When learning a GLM, the link function $g$ is known and the linear parameter $\a$ is estimated; however when learning a SIM, the link function $g$ needs to be estimated along with the linear parameter $\a$.
Recently, it has been shown that, when the function $g$ is restricted to be monotonic increasing and Lipschitz, learning SIMs becomes computationally tractable~\cite{acharyya_parameter_2015} with performance guarantees in high-dimensional settings~\cite{ganti_learning_2015}. Thus, with scalar $u$ defining the set $\Ccal^{u}=\{g : \forall y>x,\; 0 \le g(y)-g(x) \le u(y-x) \}$ of monotonic increasing $u$-Lipschitz functions, this leads to the optimization problem,
\begin{equation}
\label{eq:SIM_MLE_conv}
\begin{aligned}
(\what{g},\what{\a}) &= \argmin[g, \a] \what{F}_1(\y,\X,g,\a) \\
& \qquad \textrm{s.t. } g=\nabla G \in\Ccal^{1}.
\end{aligned}
\end{equation}
\subsection{Lipschitz Monotonic Regression and Estimating SIMs}
\label{sec:bg:LMR}
The estimation of $g$ with the objective function including terms $G$ and $G_*$ at first appears to be an intractably difficult calculus of variations problem. However, there is a marginalization technique that cleverly avoids directly estimating functional gradients with respect to $G$ and $G_*$~\cite{acharyya_parameter_2015} and admits gradient-based optimization algorithms for learning. The marginalization utilizes Lipschitz monotonic regression (LMR) as a subproblem. Thus, before introducing this marginalization, we first review LMR.
\subsubsection{LMR}
\label{sec:bg:LMR:LMR}
Given ordered pairs $\{x_i,y_i\}$ and independent Gaussian $w_i$, consider the model
\begin{equation}
y_i = g(x_i) + w_i,
\end{equation}
which intuitively treats $\{y_i\}$ as noisy observations of a function $g$ indexed by $x$, sampled at points $\{x_i\}$. Let $\what{g}_i=g(x_i)$, an estimate of the function value, with $\what{\g}=(\what{g}_1 \ldots \what{g}_n)^\top$, and $x_{[j]}$ denote the $j^{th}$ element of the $\{x_i\}$ sorted in ascending order. Then LMR is described by the problem,
{\small
\begin{equation}
\label{eq:Lip_mon_reg}
\begin{aligned}
\what{\g} \overset{\Delta}{=} \textrm{LMR}(\y,\x) &= \argmin[\g] \sum\limits_{i=1}^{n}\;\left( g(x_i) - y_i \right)^2\\
& \textrm{s.t. } 0 \le g\left(x_{[j+1]}\right) \!-\! g\left(x_{[j]}\right) \le x_{[j+1]} \!-\! x_{[j]} \\
&\qquad\qquad \textrm{for } j=1,\ldots, n-1.
\end{aligned}
\end{equation}
}While there may be in fact infinitely many possible (continuous) monotonic increasing Lipschitz functions $g$ that pass through the points $\what{\g}$, the solution vector $\what{\g}$ is unique. We will introduce a simple yet effective algorithm for solving this problem later in Section~\ref{sec:learn_SILVar:LMR}.
\subsubsection{Estimating SIMs}
\label{sec:bg:LMR:SIM}
We now return to the objective function~\eqref{eq:SIM_MLE_conv}. Let $\what{\g}=\textrm{LMR}(\y,\X^\top\a)$. Then the gradient w.r.t. $\a$ can be expressed in terms of an estimate of $g$ without explicit computation of $G$ or $G_*$,
\begin{equation}
\label{eq:SIM_MLE_marg}
\begin{aligned}
\nabla_\a F_1 &= \sum\limits_{i=1}^{n}\;\left[ \left( \what{g}_i - y_i \right) \x_i \right].
\end{aligned}
\end{equation}
This allows us to apply gradient or quasi-Newton methods to solve the minimization in $\a$, which is itself a convex problem since the original problem was jointly convex in $g$ and $\a$.
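To make this concrete, the following minimal Python sketch performs one such gradient step on $\what{F}_1$ with respect to $\a$; it assumes an \texttt{lmr} routine that solves~\eqref{eq:Lip_mon_reg} (such as the one developed in Section~\ref{sec:learn_SILVar:LMR}) and is an illustration rather than the implementation used for the experiments.
\begin{verbatim}
import numpy as np

def sim_gradient_step(a, X, y, lmr, step=1e-2):
    """One gradient step on the marginalized SIM objective.

    X: (n, p) design matrix, y: (n,) responses,
    lmr: callable solving the LMR subproblem, step: illustrative step size.
    """
    theta = X @ a                       # single-index values a^T x_i
    g_hat = lmr(y, theta)               # marginalize over g via LMR
    grad = X.T @ (g_hat - y) / len(y)   # gradient of F_1 w.r.t. a
    return a - step * grad
\end{verbatim}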
\section{Single Index Latent Variable Models}
\label{sec:SILVar}
In this section, we build the Single Index Latent Variable (SILVar) model from fundamental concepts.
\subsection{Multitask regression and latent variables}
\label{sec:SILVar:multi_reg_lat_var}
First, we extend the SIM to the multivariate case and then examine how latent variables can affect learning of the linear parameter. Let $\y_i=\left(y_{1i} \; \ldots \; y_{mi}
\right)^\top$, $\g(\x)=\left(g_1(x_1) \; \ldots \; g_m(x_m) \right)^\top$, and $\A=\left(\a_1 \; \ldots \; \a_m\right)^\top$. Consider the vectorization,
\begin{equation}
\label{eq:lat_vars}
\begin{aligned}
\Ebb \left[y_{ji} | \x_i \right] &= g_j \left(\a_j^\top \x_i \right) \\
\Rightarrow \Ebb \left[\y_i | \x_i \right] &= \g \left(\A \x_i \right).
\end{aligned}
\end{equation}
For the remainder of this paper, we assume that all $g_j=g$ for notational simplicity, though the same analysis readily extends to the case where the $g_j$ are distinct.
Now, let us introduce a set of latent variables $\z_i\in\Rbb^r$ with $r\ll p$ and the corresponding linear parameter $\B=(\b_1\ldots \b_m)^\top \in\Rbb^{m\times r}$ (note we can incorporate a linear offset by augmenting the variable $\z\leftarrow (\z^\top \; 1)^\top$ and adding the linear offset as a column of $\B$). This leads to the asymptotic maximum likelihood estimate,
\begin{equation}
\label{eq:SIM_MLE_full}
\begin{aligned}
\left(\overline{g},\overline{\A},\overline{\B}\right) = \begin{aligned}[t]& \argmin[g,\A,\B] F_2\left(\y_i,\x_i,\z_i,g,\A,\B\right) \\ & \; \textrm{s.t. } g=\nabla G \in\Ccal^{1}, \end{aligned}
\end{aligned}
\end{equation}
where
{\small\begin{align}
\label{eq:SIM_MLE_full_obj}
F_2(\y_i,\x_i,\z_i,g,\A,\B) \overset{\Delta}{=} \Ebb&\left[ \sum\limits_{j=1}^{m}\left[G_*(y_j) + G(\a_j^\top\x_i+\b_j^\top\z_i)\right] \right. \nonumber \\ &\qquad\left.\vphantom{\sum\limits_{j=1}^{m}}- \y^\top (\A \x_i+\B\z_i) \right].
\end{align}}
Now consider the case in which the true distribution remains the same, but we only observe $\x_i$ and not $\z_i$,
{\small
\begin{align}
\label{eq:SIM_MLE_latent}
(\what{g},\what{\A}) = & \argmin[g,\A] F_3(\y_i,\x_i,g,\A) \overset{\Delta}{=}
\begin{aligned}[t]&\Ebb\left[ \sum\limits_{j=1}^{m}\left[G_*(y_{ji}) + G(\a_j^\top\x_i)\right] \right. \nonumber \\
& \; \qquad \quad\left.\vphantom{\sum\limits_{j=1}^{m}}- \y_i^\top (\A \x_i) \right]
\end{aligned}\\
& \; \textrm{s.t. } g=\nabla G \in\Ccal^{1}.
\end{align}
}We now propose a relation between the two models in~\eqref{eq:SIM_MLE_full} and~\eqref{eq:SIM_MLE_latent}, which will finally lead to the SILVar model. Here we present the abridged theorem, and relegate the full expressions and derivation to Appendix~\ref{app:thm}. To establish notation, let primes ( $ '$) denote derivatives, hats ( $\what{ }$ ) denote a parameter estimate with only observed variables, overbars ($\overline{\vphantom{\a}\hphantom{\a}}$) denote an underlying parameter estimate when we have access to both observed and latent variables, and we drop the subscripts from the random variables $\x$ and $\z$ to reduce clutter.
\begin{thm}
\label{thm:equiv}
Assume that $\what{g}'(0)\ne 0$ and that $|\what{g}''|\le J$ and $|\overline{g}''|\le J$ for some $J<\infty$. Furthermore, assume in models~\eqref{eq:SIM_MLE_full_obj} and~\eqref{eq:SIM_MLE_latent} that $\max_j \left(\|\what{\a}_j\|_1,\|\overline{\a}_j\|_1 +\|\overline{\b}_j\|_2 \right)\le k,$
{\small \begin{equation*}
\max\left( \Ebb[\|\x\|_2 \|\x\|_\infty^2], \Ebb[\|\x\|_2\|\x\|_\infty \|\z\|_2], \Ebb[\|\x\|_2 \|\z\|_2^2] \right) \le s_{Nr},
\end{equation*}
}where subscripts in $s_{Nr}$ indicate that the bounds may grow with the values in the subscript.
Then, the parameters $\what{\A}$ and $\overline{\A}$ from models~\eqref{eq:SIM_MLE_full_obj} and~\eqref{eq:SIM_MLE_latent} are related as
\begin{equation*}
\what{\A}= q(\overline{\A}+\L) + \E,
\end{equation*}
where $q=\frac{\overline{g}'(0)}{\what{g}'(0)}$, $\boldsymbol{\mu}_\x=\Ebb[\x_i]$,
\begin{equation*}
\begin{aligned}
&\L=\Big(\overline{\B}\Ebb[\z\x^\top] + \left(\overline{\g}(\0)- \what{\g}(\0)\right)\boldsymbol{\mu}_\x^\top \Big)\left(\Ebb[\x\x^\top]\right)^{\dagger}\\
\Rightarrow&\textrm{rank}(\L)\le r+1,
\end{aligned}
\end{equation*}
and $\E = \what{\A} - q(\overline{\A}+\L)$ is bounded as
\begin{equation*}
\!\begin{aligned}[t] \frac{1}{MN}\|\E\|_F & \le \! \frac{2J \sigma_\ell \sqrt{N}}{\what{g}'(0)M}s_{Nr}k^2,
\end{aligned}
\end{equation*}
where $\sigma_\ell=\left\|\left(\Ebb\left[\x\x^\top \right]\right)^\dagger \right\|_2$ is the largest singular value of the pseudo-inverse of the covariance.
\end{thm}
The proof is given in Appendix~\ref{app:thm}. The assumptions require that the 2nd order Taylor series expansion is accurate around the point $\0$, that the model parameters $\overline{\A}$ and $\overline{\B}$ are bounded, and that the distributions generating the data are not too spread out (beyond $\0$ where the Taylor series expansion is performed). These are all intuitively reasonable and unsurprising assumptions.
Though the theorem poses a hard constraint on $|\what{g}''|$, we hypothesize that this is a rather strong condition that can be weakened to be in line with similar models.
The theorem determines certain scaling regimes under which the sparse plus low-rank approximation remains reasonable. For instance, consider the case where $M\sim N$ and the moments scale as $s_{Nr}\sim \sqrt{N}$, which is reasonable given their form (i.e., very loosely, $\|\x\|_\infty \sim 1$ and $\|\x\|_2 \sim \sqrt{N}$ and small latent power $\|\z\|_2 \sim 1$ relative to $N$). Then, to keep the error of constant order, the power of each row of the matrix would need to stay constant $k\sim 1$. If we see $\A$ as a graph adjacency matrix, then, in rough terms, this can correspond intuitively to the case in which the node in-degrees stay constant so that the local complexity remains the same even as the global complexity increases while the network grows. Again, we hypothesize that this overall bound can be tightened given more assumptions (e.g., on the network topology). Thus, we propose the SILVar model,
\begin{equation}
\label{eq:SILVar}
\what{\y} = \what{\g}\left(\left(\what{\A}+\what{\L}\right) \x \right),
\end{equation}
and learn it using the optimization problem,
\begin{align}
\label{eq:SILVar_opt}
(\what{g},\what{\A},\what{\L}) = & \argmin[g,\A,\L] \what{F}_3(\Y,\X,g,\A+\L) +h_1(\A)+h_2(\L) \nonumber \\ & \quad \textrm{s.t. } g=\nabla G \in\Ccal^{1},
\end{align}
where
{\small
\begin{equation}
\label{eq:SILVar_opt_obj}
\begin{aligned}
\what{F}_3(\Y,\X,g,\A) \!\!=\!\! \frac{1}{n}\!\sum\limits_{i=1}^{n}\!\left[ \sum\limits_{j=1}^{m}\!\left[G_{\!*\!}\left(y_{ji} \!\right) \!+\! G\left(\a_j^\top\!\x_i \!\right)\right] \!\!-\! \y_i^\top \!\left(\!\A \x_i\!\right)\! \right],
\end{aligned}
\end{equation}
}the empirical version of $F_3$, and $h_1$ and $h_2$ are regularizers on $\A$ and $\L$, respectively. Two natural choices for $h_2$ are the nuclear norm $h_2(\L)=\lambda_2\|\L\|_*$ and the indicator of the nuclear norm ball $h_2(\L)=\mathbb{I}\{\|\L\|_*\le \lambda_2\}$, since $\L$ is approximately low rank due to the influence of a relatively small number of latent variables. We may choose different forms for $h_1$ depending on our assumptions about the structure of $\A$. For example, if $\A$ is assumed sparse, we may use $h_1(\A)=\lambda_1\|\textrm{v}(\A)\|_1$, the $\ell_1$ norm applied element-wise to the vectorized $\A$ matrix. These examples are extensions of the ``sparse and low-rank'' models, which have been shown to be identifiable under certain geometric incoherence conditions~\cite{chandrasekaran_latent_2012}. In other words, if the sparse component is not too low-rank, and if the low-rank component is not too sparse, then $\A$ and $\L$ can be recovered uniquely.
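For concreteness, a minimal sketch of the two proximal maps corresponding to these regularizers (element-wise soft-thresholding for the $\ell_1$ penalty and singular-value soft-thresholding for the nuclear norm) is given below in Python; it is a generic sketch, not a description of any particular implementation.
\begin{verbatim}
import numpy as np

def prox_l1(A, t):
    """Proximal map of t*||v(A)||_1: element-wise soft-thresholding."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_nuclear(L, t):
    """Proximal map of t*||L||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
\end{verbatim}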
\subsection{Connection to Related Problems}
\label{sec:SILVar:rel_probs}
In this section, we show how the SILVar model can be used in various problem settings commonly considered throughout the literature.
\subsubsection{Generalized Robust PCA}
Though we posed our initial problem as a regression problem, if we take our measurement vectors to be $\x_i=\e_i$ the canonical basis vectors (i.e., so that the overall data matrix $\X=\I$), then we arrive at
\begin{equation}
\what{\Y}=\what{\g}(\what{\A}+\what{\L}).
\end{equation}
This is precisely the model behind Generalized Robust PCA~\cite{candes_robust_2011}, but with the twist of estimating the link function as well~\cite{ganti_matrix_2015}. What is worth noting is that although we arrived at our model via a regression problem with latent variables, the model itself is also able to describe a problem that arises from very different assumptions on how the data is generated and structured.
We also note that the SILVar model can be modified to share a space with the Generalized Low-Rank (GLR) Models~\cite{udell_generalized_2016}. The GLR framework is able to describe arbitrary types of data with an appropriate choice of convex link function $g$ determined \emph{a priori}, while the SILVar model is restricted to a certain continuous class of convex link functions but aims to learn this function. The modification is simply a matrix factorization $\L=\U\V$ (and ``infinite'' penalization on $\A$). The explicit factorization makes the problem non-convex but instead block convex, which still allows for alternating convex steps in $\U$ with fixed $\V$ (and vice versa) to tractably reach local minima under certain conditions. Nonetheless, due to the non-convexity, further discussion of this particular extension will be beyond the scope of this paper.
\subsubsection{Extension to Autoregressive Models}
We can apply the SILVar model to learn from time series as well. Consider a set of $N$ time series each of length $K$, $\X\in\Rbb^{N\times K}$. We assume the noise at each time step is independent (note that, with this assumption, the time series are still dependent across time), and, in our previous formulation, take $\y_i\leftarrow \x_k$ and $\x_i\leftarrow \x_{k-1:k-M}=(\x_{k-1}^\top \ldots \x_{k-M}^\top)^\top$, so that the model of order $M$ takes the form,
\begin{equation}
\label{eq:SILVar_AR}
\what{\x}_k = \g\left(\sum_{i=1}^{M}\left(\A^{(i)}+\L^{(i)}\right) \x_{k-i}\right),
\end{equation}
and learn it using the optimization problem,
{\small
\begin{equation}
\label{eq:SILVar_AR_opt}
(\what{g},\what{\A},\what{\L}) = \begin{aligned}[t]& \argmin[g,\A,\L] \what{F}_4(\X,g,\A+\L) +h_1(\A)+h_2(\L) \\ & \quad \textrm{s.t. } g=\nabla G \in\Ccal^{1}, \end{aligned}
\end{equation}
}where $\A=\left(\A^{(1)} \ldots \A^{(M)}\right)$ and $\L=\left(\L^{(1)} \ldots \L^{(M)}\right)$ and
{\small
\begin{equation*}
\label{eq:SILVar_AR_opt_obj}
\begin{aligned}
\what{F}_4(\X,g,\A) \!=\!\! \frac{1}{K \!-\! M}\!\!\sum\limits_{k \!=\! M \!+\! 1}^{K}\!\!\!\begin{aligned}[t]&\left[ \sum\limits_{j=1}^{N}\left[G_*(x_{jk}) \!+\! G\left(\sum\limits_{i=1}^M \a_j^{(i)}\x_{k-i}\right)\right] \right.\\
&\qquad\quad \left. \vphantom{\sum\limits_{j=1}^{m}} - \x_k^\top \left(\sum\limits_{i=1}^M \A^{(i)} \x_{k-i} \right) \right],\end{aligned}
\end{aligned}
\end{equation*}
}where $\A^{(i)}\!=\!\left(\a_1^{(i)}\ldots\a_N^{(i)}\right)^\top$, similarly to before. Note that the analysis in the previous section follows naturally in this setting, so that here $\textrm{rank}(\L_i)\le r+1$. Then, the matrix $\A$ may be assumed to be group sparse, relating to generalized notions of Granger Causality~\cite{granger_causality_1988,bolstad_causal_2011}, and one possible corresponding choice of regularizer taking the form $h_1(\A)=\lambda_1\sum\limits_{i,j}\left\|\left(a^{(1)}_{ij} \ldots a^{(M)}_{ij}\right)\right\|_2$.
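For illustration, a minimal sketch of the proximal map associated with this group penalty (block soft-thresholding of each lag vector $(a^{(1)}_{ij}, \ldots, a^{(M)}_{ij})$) is given below; the array layout is an assumption made only for the sketch.
\begin{verbatim}
import numpy as np

def prox_group_lasso(A_lags, t):
    """Block soft-thresholding for the group penalty over lags.

    A_lags: array of shape (M, N, N) stacking the lag matrices A^(1..M);
    each group is the length-M vector of (i,j) entries across lags.
    """
    norms = np.linalg.norm(A_lags, axis=0, keepdims=True)        # (1, N, N)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return A_lags * scale
\end{verbatim}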
Another structural assumption could be that of the Causal Graph Process model~\cite{mei_signal_2015-1}, inspired by Signal Processing on Graphs~\cite{sandryhaila_discrete_2013}, in which $\A^{(i)}$ are matrix polynomials in one underlying matrix $\wtil{\A}$. This framework utilizes the nonconvex regularizer $h_1(\A)=\lambda_1\|\textrm{v}(\A^{(1)})\|_1+\lambda_2\sum\limits_{i\ne j}\|\A^{(i)}\A^{(j)}-\A^{(j)}\A^{(i)}\|_F^2$ to encourage both sparsity and commutativity, which is satisfied if $\A^{(i)}$ are all matrix polynomials in the same matrix. Since this particular regularization is again block convex, convex steps can still be taken in each $\A^{(i)}$ with all other blocks $\A^{(j)}$ for $j\ne i$ fixed, for a computationally tractable algorithm to reach a local minimum under certain conditions. However, further detailed discussion will remain outside the scope of this paper.
The hidden variables in this time series model can even possibly model underlying trends in the data. For example, one year of daily temperature measurements across a country could be related through a graph based on geographical and meteorological features, but all exhibit the same significant trend due to the changing seasons. In previous work, a standard pipeline is to detrend the data as a preprocessing step, and then estimate or use a graph to describe the variations of the data on top of the underlying trends~\cite{sandryhaila_discrete_2013,sandryhaila_discrete_2014,mei_signal_2017}. Instead, the time series can also be modeled as a modified autoregressive process, depending on a low-rank trend $\L'=\left(\boldsymbol{\ell}'_1 \; \ldots \; \boldsymbol{\ell}'_K\right)\in\Rbb^{N\times K}$ and the variations of the process about that trend,
\begin{equation}
\begin{aligned}
\label{eq:trend}
&\what{\x}_k=g\left(\boldsymbol{\ell}'_k+\sum_{i=1}^{M}\A^{(i)}\left(\x_{k-i}-\boldsymbol{\ell}'_{k-i}\right)\right).
\end{aligned}
\end{equation}
Substituting this into Equation~\eqref{eq:SILVar_AR} yields
\begin{equation}
\begin{aligned}
\label{eq:trend2}
&\boldsymbol{\ell}'_k \!+\! \sum_{i=1}^{M}\!\A^{(i)}\!\left(\x_{k \!-\! i} \!-\! \boldsymbol{\ell}'_{k \!-\! i}\right) \!=\! \sum_{i=1}^{M}\left(\!\A^{(i)} \!+\! \L^{(i)}\!\right) \x_{k\!-\!i}\\
\Rightarrow&\sum_{i=1}^{M}\L^{(i)}\x_{k-i} = \boldsymbol{\ell}'_{k}-\sum_{i=1}^{M}\A^{(i)}\boldsymbol{\ell}'_{k-i}
\end{aligned}
\end{equation}
Thus, we estimate the trend using ridge regression (for numerical stability) without explicitly enforcing $\L'$ to be low rank. We can accomplish this via the simple optimization,
\begin{equation}
\begin{aligned}
\label{eq:trend_opt}
\what\L' \!=\! \argmin[\L'] &\!\sum\limits_{k\!=\!M\!+\!1}^{K} \left\|\boldsymbol{\ell}'_{k} \!-\! \sum_{i=1}^{M}\A^{(i)}\boldsymbol{\ell}'_{k\!-\!i} \!-\! \sum_{i=1}^{M}\L^{(i)}\x_{k\!-\!i} \right\|_2^2\\
& \; +\lambda\|\L'\|_F^2
\end{aligned}
\end{equation}
with $\lambda$ being the regularization parameter. In this way, the extension of SILVar to autoregressive models can allow joint estimation of the effects of the trend and of these variations supported on the graph, as will be demonstrated via experiments in Section~\ref{sec:exp}.
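As a rough illustration, the ridge problem~\eqref{eq:trend_opt} can be solved by plain gradient descent on its quadratic objective, as in the Python sketch below; the step size and iteration count are illustrative placeholders, and any linear least-squares solver could be substituted.
\begin{verbatim}
import numpy as np

def estimate_trend(A_lags, L_lags, X, lam=1e-2, n_iter=500, step=1e-3):
    """Gradient-descent sketch for the ridge trend estimate.

    A_lags, L_lags: (M, N, N) stacked lag matrices; X: (N, K) time series.
    Returns an (N, K) estimate of the trend L'.
    """
    M, N, _ = A_lags.shape
    K = X.shape[1]
    # r_k = sum_i L^(i) x_{k-i}, the right-hand side in eq. (trend2)
    R = np.zeros((N, K))
    for k in range(M, K):
        R[:, k] = sum(L_lags[i] @ X[:, k - i - 1] for i in range(M))
    Lp = np.zeros((N, K))
    for _ in range(n_iter):
        G = 2 * lam * Lp                         # gradient of the ridge term
        for k in range(M, K):
            e = (Lp[:, k]
                 - sum(A_lags[i] @ Lp[:, k - i - 1] for i in range(M))
                 - R[:, k])
            G[:, k] += 2 * e
            for i in range(M):
                G[:, k - i - 1] -= 2 * A_lags[i].T @ e
        Lp -= step * G
    return Lp
\end{verbatim}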
\section{Efficiently Learning SILVar Models}
\label{sec:learn_SILVar}
In this section, we describe the formulation and corresponding algorithm for learning the SILVar model. Surprisingly, in the single-task setting, learning a SIM is jointly convex in $g$ and $\a$, as demonstrated in~\cite{acharyya_parameter_2015}. The pseudo-likelihood functional $\what{F}_3$ used for learning the SILVar model in~\eqref{eq:SILVar_opt} is thus also jointly convex in $g$, $\A$, and $\L$ by a simple extension from the single-task regression setting.
\begin{lem}[Corollary of Theorem 2 of~\cite{acharyya_parameter_2015}]
The functional $\what{F}_3$ in the SILVar model learning problem~\eqref{eq:SILVar_opt} is jointly convex in $g$, $\A$, and $\L$.
\end{lem}
This convexity is enough to ensure that the learning can converge and be computationally efficient. Before describing the full algorithm, one detail remains: the implementation of the LMR introduced in Section~\ref{sec:bg:LMR}.
\subsection{Lipschitz Monotonic Regression}
\label{sec:learn_SILVar:LMR}
To tackle LMR, we first introduce the related simpler problem of monotonic regression, which is solved by a well-known algorithm, the pooled adjacent violators (PAV)~\cite{dykstra_isotonic_1981}. The monotonic regression problem is formulated as
\begin{equation}
\label{eq:mon_reg}
\begin{aligned}
\textrm{PAV}(\y,\x) &\overset{\Delta}{=} \argmin[\g] \sum\limits_{i=1}^{n}\;\left( g(x_i) - y_i \right)^2\\
& \quad \textrm{s.t. } 0 \le g\left(x_{[j+1]}\right)-g\left(x_{[j]}\right).
\end{aligned}
\end{equation}
The PAV algorithm has a complexity of $O(n)$, which is due to a single complete sweep of the vector $\y$. The monotonic regression problem can also be seen as a standard $\ell_2$ projection onto the convex set of monotonic functions.
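For reference, a standard stack-based sketch of PAV is shown below in Python; it assumes the responses have already been ordered by ascending $x$ and is not tied to any particular implementation.
\begin{verbatim}
import numpy as np

def pav(y):
    """Pooled adjacent violators for monotone (non-decreasing) regression.

    y: responses ordered by ascending x; returns the fitted values.
    """
    blocks = []                          # each block: [mean, size]
    for v in np.asarray(y, dtype=float):
        blocks.append([v, 1])
        # merge adjacent blocks while they violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v1, w1 = blocks.pop()
            v0, w0 = blocks.pop()
            blocks.append([(v0 * w0 + v1 * w1) / (w0 + w1), w0 + w1])
    return np.concatenate([np.full(w, m) for m, w in blocks])
\end{verbatim}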
We introduce a simple generalization to the monotonic regression,
\begin{equation}
\label{eq:gen_mon_reg}
\begin{aligned}
\textrm{GPAV}_\t(\y,\x) &\overset{\Delta}{=} \argmin[\g] \sum\limits_{i=1}^{n}\;\left( g(x_i) - y_i \right)^2\\
& \quad \textrm{s.t. } t_{[j+1]} \le g\left(x_{[j+1]}\right)-g\left(x_{[j]}\right) \\
&\qquad\qquad \textrm{for } j=1,\ldots, n-1.
\end{aligned}
\end{equation}
This modification allows weaker (if $t_i<0$) or stronger (if $t_i>0$) local monotonicity conditions to be placed on the estimated function $g$.
Let $t_{[1]}=0$ and $s_{[i]}=\sum\limits_{j=1}^i t_{[j]}$, and in vectorized form,
\begin{equation}
\label{eq:cusum}
\s\overset{\Delta}{=}\textrm{cusum}(\t).
\end{equation}
Then, we can rewrite,
{\small
\begin{equation}
\label{eq:gen_mon_reg2}
\begin{aligned}
\textrm{GPAV}_\t(\y,\x) &= \argmin[\g] \sum\limits_{i=1}^{n}\;\left( g(x_i) - s_i - \left(y_i - s_i\right) \right)^2\\
& \quad \textrm{s.t. } 0 \le g\left(x_{[j+1]}\right) \!-\! s_{[j+1]} \!-\! \left(g\left(x_{[j]}\right) \!-\! s_{[j]}\right) \\
&\qquad\qquad \textrm{for } j=1,\ldots, n-1,
\end{aligned}
\end{equation}
}which leads us to recognize that
\begin{equation}
\label{eq:gpav_pav}
\begin{aligned}
\textrm{GPAV}_\t(\y,\x)= \textrm{PAV}(\y-\s, \x)+ \s.
\end{aligned}
\end{equation}
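In code, this reduction is a one-line wrapper around the PAV sketch given earlier (again assuming the data are ordered by ascending $x$):
\begin{verbatim}
import numpy as np

def gpav(y, t):
    """Generalized PAV via eq. (gpav_pav): shift, run PAV, shift back.

    y, t: ordered by ascending x, with t[0] = 0 and t[j] the required
    minimum increment between consecutive fitted values.
    Relies on the pav() sketch given earlier.
    """
    s = np.cumsum(np.asarray(t, dtype=float))
    return pav(np.asarray(y, dtype=float) - s) + s
\end{verbatim}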
Returning to LMR~\eqref{eq:Lip_mon_reg}, we pose it as the projection onto the intersection of two convex sets, the monotonic functions and the functions with upper bounded differences, for which each projection can be solved efficiently using GPAV (for the second set, we can use GPAV on the negated function and negate the result). Thus, to perform this optimization efficiently utilizing PAV as a subroutine, we can use an accelerated Dykstra's algorithm~\cite{lopez_acceleration_2016}, which is a general method for finding the projection onto the non-empty intersection of any number of convex sets. The accelerated algorithm achieves a geometric convergence rate, requiring $O(\log n)$ passes with each pass utilizing PAV at $O(n)$ cost, for a total cost of $O(n\log n)$. We make a brief sidenote that this is better than the $O(n^2)$ in~\cite{yeganova_isotonic_2009} and simpler than the $O(n\log n)$ in~\cite{kakade_efficient_2011} achieved using a complicated tree-like data structure, and it has no learning rate parameter to tune compared to an ADMM-based implementation that would also yield $O(n\log n)$. We provide the steps of LMR in Algorithm~\ref{alg:LMR} as a straightforward application of the accelerated Dykstra's algorithm, but leave the details of the derivation of the acceleration to~\cite{lopez_acceleration_2016}. In addition, the numerical stability parameter $\varepsilon$ is included to handle small denominators, though in practice we did not observe the denominator falling below $\varepsilon=10^{-9}$ in our simulations before convergence.
\begin{algorithm}
\caption{Lipschitz monotone regression (LMR)}
\label{alg:LMR}
\begin{algorithmic}[1]
\State Let $t_1=0$ and compute $t_{[i+1]}=x_{[i+1]} - x_{[i]}$, $\s=\textrm{cusum}(\t)$. Set numerical stability parameter $0<\varepsilon<1$, $error=\infty$, and tolerance $0<\delta<1$.
\State Initialize $\g_{\ell}^{(0)}=\textrm{PAV}(\y,\x)$, $\g_{u}^{(0)}=-\textrm{PAV}(-\y+\s,\x)+\s$,
$\v_0=\0$, $\v_1=\g_{u}-\g_{\ell}$, $\w=\g_{u}$, $k=0$
\While{$error \ge \delta$}
\State $\g_{\ell}^{(k+1)}\leftarrow \textrm{PAV}\left(\g_{u}^{(k)} - \v_0, \x\right)$
\State $\g_{u}^{(k+1)}\leftarrow - \textrm{PAV}\left(-(\g_{\ell}^{(k+1)}-\v_1)+\s, \x\right)+\s$
\State $\v_0\leftarrow \g_{\ell}^{(k+1)} - (\g_u^{(k+1)}-\v_0)$
\State $\v_1\leftarrow \g_{u}^{(k+1)} - (\g_{\ell}^{(k+1)}-\v_1)$
\If{$\Big| \!\|\g_{\ell}^{(k)} \!\!-\! \g_{u}^{(k)}\|_2^2 \!+\! (\g_\ell^{(k\!+\!1)} \!\!-\! \g_u^{(k\!+\!1)})\!^\top \!(\g_\ell^{(k)} \!\!-\! \g_u^{(k)}) \Big| \!\!\ge\! \varepsilon$}
\State $\alpha^{(k+1)}\leftarrow \frac{\|\g_{\ell}^{(k)} \!-\! \g_{u}^{(k)}\|_2^2 }{\|\g_{\ell}^{(k)} \!-\! \g_{u}^{(k)}\|_2^2 \!+\! (\g_\ell^{(k+1)} \!-\! \g_u^{(k+1)})\!^\top \!(\g_\ell^{(k)} \!-\! \g_u^{(k)})}$
\State $\z\leftarrow \g_{u}^{(k)}+\alpha^{(k+1)}(\g_{u}^{(k+1)}-\g_{u}^{(k)})$
\Else
\State $\z\leftarrow \frac{1}{2}(\g_{\ell}^{(k+1)}+\g_{u}^{(k+1)})$
\EndIf
\State $error\leftarrow\|\z-\w\|_2$
\State $\w\leftarrow\z$
\State $k\leftarrow k+1$
\EndWhile\\
\Return $\z$
\end{algorithmic}
\end{algorithm}
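For illustration, the following Python sketch implements the same projection idea with a plain (unaccelerated) Dykstra iteration built on the PAV sketch given earlier; Algorithm~\ref{alg:LMR} adds the acceleration step omitted here, so the two differ in their iteration details.
\begin{verbatim}
import numpy as np

def lmr(y, x, tol=1e-8, max_iter=1000):
    """Lipschitz monotone regression via unaccelerated Dykstra projections.

    Alternates projections onto the monotone set and the set of functions
    with differences bounded by x[j+1]-x[j]; relies on the pav() sketch.
    """
    order = np.argsort(x)
    ys = np.asarray(y, dtype=float)[order]
    xs = np.asarray(x, dtype=float)[order]
    s = np.cumsum(np.concatenate(([0.0], np.diff(xs))))   # Lipschitz shifts
    g = ys.copy()
    p = np.zeros_like(g)                # Dykstra correction terms
    q = np.zeros_like(g)
    for _ in range(max_iter):
        g_prev = g.copy()
        u = pav(g + p)                  # project onto monotone functions
        p = g + p - u
        g = -pav(s - (u + q)) + s       # project onto bounded differences
        q = u + q - g
        if np.linalg.norm(g - g_prev) < tol:
            break
    out = np.empty_like(g)
    out[order] = g                      # undo the sort
    return out
\end{verbatim}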
While the GPAV allows us to quickly perform Lipschitz monotonic regression with our particular choice of $\t$, it should be noted that using other choices for $\t$ can easily generalize the problem to bi-H\"{o}lder regression as well, to find functions in the set $\Ccal_{\ell,\alpha}^{u,\beta}=\{g : \forall y>x,\; \ell|y-x|^\alpha \le g(y)-g(x) \le u|y-x|^\beta \}$. This set may prove interesting for further analysis of the estimator and is left as a topic for future investigation.
\subsection{Optimization Algorithms}
\label{sec:learn_SILVar:opt_alg}
With LMR in hand, we can outline the algorithm for solving the convex problem~\eqref{eq:SILVar_opt}. This procedure can be performed for general $h_1$ and $h_2$, but proximal mappings are efficient to compute for many common regularizers. Thus, we describe the basic algorithm using gradient-based proximal methods (e.g., accelerated gradient~\cite{odonoghue_adaptive_2013}, and quasi-Newton~\cite{schmidt_optimizing_2009}), which require the ability to compute a gradient and the proximal mapping.
In addition, we can compute the objective function value for purposes of backtracking~\cite{nocedal_numerical_1999}, evaluating algorithm convergence, or computing error (e.g., for validation or testing purposes). While our optimization outputs an estimate of $g$, the objective function depends on the value of $G$ and its conjugate $G_*$. Though we cannot necessarily determine $G$ or $G_*$ uniquely since $G+c$ and $G$ both yield $g$ as a gradient for any $c\in\Rbb$, the value of $G(x)+G_*(y)$ is unique for a fixed $g$. To see this, consider $\wtil{G}=G+c$ for some constant $c$. Then,
\begin{equation}
\label{eq:G_unique}
\begin{aligned}
\wtil{G}(x)+\wtil{G}_*(y) &=G(x)+c+\max_{z}[zy-\wtil{G}(z)]\\
& = G(x)+c +\max_{z}[zy-G(z)-c]\\
& = G(x) + \max_{z}[zy-G(z)] \\
& = G(x)+G_*(y)
\end{aligned}
\end{equation}
This allows us to compute the objective function by performing the cumulative numerical integral of $\what{\g}$ on points $\textrm{v}(\boldsymbol{\Theta})$ (e.g., \texttt{G=cumtrapz(theta,ghat)} in Matlab). Then, a discrete convex conjugation (also known as the discrete Legendre transform (DLT)~\cite{lucet_faster_1997}) computes $G_*$.
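A minimal sketch of this evaluation is given below; for clarity it replaces the linear-time DLT of~\cite{lucet_faster_1997} used in practice with a brute-force $O(n^2)$ maximization over the grid.
\begin{verbatim}
import numpy as np

def objective_terms(theta, g_hat, y):
    """Compute G by cumulative trapezoid integration of g_hat over theta,
    and G_* by a brute-force discrete Legendre transform at the points y.
    G is defined up to an additive constant, which cancels (eq. G_unique)."""
    theta, g_hat, y = map(np.asarray, (theta, g_hat, y))
    order = np.argsort(theta)
    th, gh = theta[order], g_hat[order]
    G = np.concatenate(([0.0],
                        np.cumsum(0.5 * (gh[1:] + gh[:-1]) * np.diff(th))))
    G_star = np.max(np.outer(y, th) - G[None, :], axis=1)  # max_z [zy - G(z)]
    return G, G_star
\end{verbatim}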
We did not notice significant differences in accuracy between our implementation using quasi-Newton methods with backtracking and an accelerated proximal gradient method, possibly because both algorithms were run until convergence. Thus, we show only one set of results. However, we note that the quasi-Newton method with backtracking improved the runtime.
\begin{algorithm}
\caption{Single Index Latent Variable (SILVar) Learning}
\label{alg:SILVar}
\begin{algorithmic}[1]
\State Initialize $\what{\A}=\0$, $\what{\L}=\0$
\While{not converged} \Comment{proximal method iterations}
\State Compute gradients:
\begin{equation*}
\begin{aligned}
\boldsymbol{\Theta} &\leftarrow (\what{\A}+\what{\L})\X\\
\what{\g} &\leftarrow \textrm{LMR}(\textrm{v}(\Y),\textrm{v}(\boldsymbol{\Theta}))\\
\nabla_{\A} F_3=\nabla_{\L} F_3&=\sum_{i\in\Ical}\left( \what\g(\boldsymbol{\theta}_i)-\y_i \right)\x_i^\top
\end{aligned}
\end{equation*}
\State Optionally compute function value:
\begin{equation*}
\begin{aligned}
\what{\G} &\leftarrow \texttt{cumtrapz}(\textrm{v}(\boldsymbol{\Theta}),\what{\g})\\
\what{\G}_* &\leftarrow \textrm{DLT}(\textrm{v}(\boldsymbol{\Theta}),\what{\G},\textrm{v}(\Y))\\
\what{F}_3 &= \sum\limits_{ij} \what{\G}_*(y_{ij})+\what{\G}(\theta_{ij})-y_{ij}\theta_{ij}
\end{aligned}
\end{equation*}
\EndWhile\\
\Return $(\what{\g},\what{\A},\what{\L})$
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:SILVar} describes the learning procedure and details the main function and gradient computations while assuming a proximal operator is given. The computation of the gradient and the update vector depends on the particular variation of proximal method utilized; with stochastic gradients, the set $\Ical\subset \{1,\ldots,n\}$ could be pseudorandomly generated at each iteration, while with standard gradients, $\Ical=\{1,\ldots,n\}$. The $\texttt{cumtrapz}$ procedure takes as input the coordinates of points describing the function $\what{\g}$. The DLT procedure takes as its first two inputs the coordinates of points describing the function $\what{\G}$, and as its third input the points at which to evaluate the convex conjugate $\what{\G}_*$.
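For concreteness, one proximal-gradient iteration of this procedure can be sketched in Python as follows, reusing the \texttt{lmr}, \texttt{prox\_l1}, and \texttt{prox\_nuclear} sketches from earlier; the step size and the $1/n$ scaling of the gradient are illustrative choices.
\begin{verbatim}
import numpy as np

def silvar_step(A, L, X, Y, lmr, prox_l1, prox_nuclear, lam1, lam2, step):
    """One proximal-gradient iteration for the SILVar problem.

    X: (p, n) inputs, Y: (m, n) outputs, A, L: (m, p) parameters;
    lmr, prox_l1, prox_nuclear: callables as sketched earlier.
    """
    Theta = (A + L) @ X
    g_hat = lmr(Y.ravel(), Theta.ravel()).reshape(Theta.shape)  # marginal g
    Grad = (g_hat - Y) @ X.T / X.shape[1]    # shared gradient w.r.t. A and L
    A_new = prox_l1(A - step * Grad, step * lam1)
    L_new = prox_nuclear(L - step * Grad, step * lam2)
    return A_new, L_new
\end{verbatim}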
Here, an observant reader may notice the subtle point that the function $G_*$ may only be defined inside a finite or semi-infinite interval. This can occur if the function $g$ is bounded below and/or above, so that $G$ has a minimum/maximum slope, and $G_*$ is infinite for any values below that minimum or above that maximum slope of $G$ (to see this, one may refer back to the definition of conjugation~\eqref{eq:conv_conj}). Fortunately, this does not invalidate our method. It is straightforward to see that $\what{g}(x)=x$ is always a feasible solution and that $\what{G}_*(y)=y^2/2$ is defined for all $y$; thus starting from this solution, with appropriately chosen step sizes, gradient-based algorithms will avoid the solutions that make the objective function infinite. Furthermore, even if new data $\y_j$ falls outside the valid domain for the learned $\what{G}_*$ and we incur an ``infinite'' loss using the model, evaluating $\what{\g}\left(\left(\what{\A}+\what{\L}\right)\x_j\right)$ is still well defined. This problem is not unique to SIMs, as assuming a fixed link function $g$ in a GLM can also incur infinite loss if new data does not conform to modeling assumptions implicit to the link function (e.g., the log-likelihood for a negative $\y$ under a non-negative distribution), and making a prediction using the learned GLM is still well-defined. Practically, the loss for SILVar computed using the DLT may be large, but will not be infinite~\cite{lucet_faster_1997}.
\section{Performance}
\label{sec:perf}
We now provide conditions under which the optimization recovers the parameters of the model.
First, let us establish some additional notation.
Let $(\wtil g, \wtil\A, \wtil\L)$ be the best Lipschitz monotonic function, sparse matrix, and low-rank matrix that model the true data generation.
Let $\wtil \A$ be sparse on the index set $S$ of size $|S|=s_\A$ and let $\wtil\L=\U\Lambda\V^\top$ be the SVD of $\wtil \L$, where $\U\in\Rbb^{M \times r_\L}$, $\Lambda\in\Rbb^{r_\L \times r_\L}$, and $\V\in\Rbb^{N \times r_\L}$, with $r_\L$ the rank of $\wtil \L$. Then let $\A_S$ be equal to $\A$ on $S$ and $0$ on $S^c$ (so that $\wtil\A_S=\wtil\A$ and $\A_S+\A_{S^c}=\A$), and $\L_R=\L-(\I-\U\U^\top)\L(\I-\V\V^\top)$ and $\L_{R^c}=\L-\L_R$ (so that $\wtil\L_R=\wtil\L$ and $\L_{R^c}+\L_R=\L$). Consider the set of approximately $S$-sparse and $R$-low-rank matrices $\mathcal{B}_\gamma(S,R)=\{(\boldsymbol{\Phi},\boldsymbol{\Psi}): \gamma\|\boldsymbol{\Phi}_{S^c}\|_1+\|\boldsymbol{\Psi}_{R^c}\|_* \le 3(\gamma\|\boldsymbol{\Phi}_{S}\|_1+\|\boldsymbol{\Psi}_{R}\|_*)\}$. Intuitively, this is the set of matrices for which the energy of $\boldsymbol{\Phi}$ is on the same sparsity set as $\wtil\A$ and similarly the energy of $\boldsymbol{\Psi}$ is in the same low-rank space as $\wtil\L$.
Finally, let the marginalization of the loss functional w.r.t. $g$ be $\what{m}(\Y,\X,\A)=\underset{g\in\Ccal^1}{\min} \what{F}_3(\Y,\X,g,\A)$, the link function achieving this minimum at the true matrix parameters be $\what{\wtil{g}}=\underset{g\in\Ccal^1}{\arg\min}\, \what{F}_3(\Y,\X,g,\wtil\A+\wtil\L)$, the Hessian of the marginalized functional w.r.t. the matrix parameter be $\wtil{\Ical} = \nabla^2_{\textrm{v}(\A)}\what{m}(\Y,\X,\wtil\A+\wtil\L)$, and denote the quadratic form in shorthand $\|\v\|_{\wtil{\Ical}}=\v^\top \wtil{\Ical}\v$.
Now, consider the following assumptions:
\begin{enumerate}
\item\label{as:RSC} There exists some $\alpha>0$ such that
\begin{align*}
\|\boldsymbol{\Phi}+\boldsymbol{\Psi}\|_{\wtil{\Ical}}\ge \alpha\|\boldsymbol{\Phi}+\boldsymbol{\Psi}\|_F^2 \quad \forall (\boldsymbol{\Phi},\boldsymbol{\Psi})\in \mathcal{B}_\gamma(S,R).
\end{align*}
This is essentially a Restricted Strong Convexity condition standard for structured recovery~\cite{negahban_unified_2012}.
\item\label{as:Dev} Let $\what{\wtil{\boldsymbol{\Gamma}}}\in \Rbb^{M\times K}$ be the matrix with columns given by
\begin{align*}
\what{\wtil{\boldsymbol{\Gamma}}}_k= \what{\wtil{g}}((\wtil\A+\wtil\L)\x_k)-\y_k, \quad k=1,\ldots,K.
\end{align*}
Then for some $\lambda>0$ and $\gamma>0$,
\begin{align*}
&\frac{1}{K}\Big\| \what{\wtil{\boldsymbol{\Gamma}}}\X^\top \Big\|_{\infty} \le \lambda\gamma/2 \qquad \frac{1}{K}\Big\| \what{\wtil{\boldsymbol{\Gamma}}}\X^\top \Big\|_2 \le \lambda/2
\end{align*}
where $\|\A\|_\infty$ is the largest element of $\A$ in magnitude, and $\|\A\|_2$ is the largest singular value of $\A$.
This states that the error is not too powerful and is not too correlated with the regression variables, and it is also similar to standard conditions for structured recovery.
\item\label{as:Inc} Let $\tau=\gamma\sqrt{\frac{s_\A}{r_\L}}$ and $\mu=\frac{1}{32 r_\L (1+\tau^2)}$, then
\begin{align*}
\frac{\max( \|\boldsymbol{\Phi}\|_{2},\|\boldsymbol{\Psi}\|_{\infty}/\gamma) }{\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*}} \le \mu\quad \forall (\boldsymbol{\Phi},\boldsymbol{\Psi})\in \mathcal{B}_\gamma(S,R).
\end{align*}
This is a condition on the incoherence between the sparsity set $S$ and low rank subspace $R$ (see~\cite{chandrasekaran_rank-sparsity_2011,chandrasekaran_latent_2012,candes_matrix_2010,candes_robust_2011} for other similar variations on rank-sparsity incoherence considered previously). Basically, this states that $S$ should be such that $S$-sparse matrices are not approximately low-rank, while $R$ should be such that $R$-low-rank matrices are not approximately sparse.
\end{enumerate}
Then we have the following result,
\begin{thm}
\label{thm:perf}
Under Assumptions~\ref{as:RSC}-\ref{as:Inc}, the solution to the optimization problem~\eqref{eq:SILVar_AR_opt} satisfies
\begin{align}
&\|\what\A-\wtil\A\|_F \le \frac{3\lambda\gamma\sqrt{s_\A}}{\alpha}\left(2 \!+\! \sqrt{\frac{2}{2 \!+\! \tau^2}}\right)\le\frac{9\lambda\gamma\sqrt{s_\A}}{\alpha}\\
&\|\what\L-\wtil\L\|_F \le \frac{3\lambda\sqrt{r_\L}}{\alpha}\left(2 \!+\! \sqrt{\frac{2\tau^2}{1 \!+\! 2\tau^2}}\right)\le\frac{9\lambda\sqrt{r_\L}}{\alpha}.
\end{align}
\end{thm}
We give the proof of this theorem in the Appendix. The theorem relates the performance of the optimization to conditions on the problem parameters. Of course, in practice these conditions are difficult to check, since they require knowledge of the ground truth. Even given the ground truth, verifying Assumption~\ref{as:Inc} requires solving a separate non-convex optimization problem. Thus, future work could include investigating which kinds of sparse matrix (or network) models for $\A$ and low-rank models for $\L$ produce favorable scaling regimes in the problem parameters with high probability.
\section{Experiments}
\label{sec:exp}
We study the performance of the algorithm via simulations on synthetically generated data as well as real data. In these experiments, we show the different regression settings, as discussed in Section~\ref{sec:SILVar:rel_probs}, under which the SILVar model can be applied.
\subsection{Synthetic Data}
The synthetic data corresponds to a multi-task regression setting. The data was generated by first creating a random sparse matrix $\A_f=(\A \; \B)$, where $\B\in\Rbb^{m \times H}$, $H=\lfloor hp \rfloor$ is the number of hidden variables, and $h$ is the proportion of hidden variables. The elements of $\A_f$ were first generated as i.i.d. Normal variables, then a sparse mask was applied to $\A$ to choose $\lfloor m\log_{10}(p) \rfloor$ non-zeros, and $\B$ was scaled by a factor of $\frac{1}{3\sqrt{H}}$. Next, the data $\X_f =(\X^\top \; \Z^\top)^\top$ were generated with each vector drawn i.i.d. from a multivariate Gaussian with mean $\0$ and covariance matrix $\boldsymbol{\Sigma}_f\in\Rbb^{(p+H) \times (p+H)}$. This was achieved by creating $\boldsymbol{\Sigma}_f^{1/2}$ by thresholding a matrix with diagonal entries of $1$ and off-diagonal entries drawn from an i.i.d. uniform distribution on the interval $[-0.5,0.5]$ to be greater than $0.35$ in magnitude. Then $\boldsymbol{\Sigma}_f^{1/2}$ was pre-multiplied with a matrix of i.i.d. Normal variables. Finally, we generated $\Y=g(\A_f\X_f)+\W$ for two different link functions $g_1(x)=\log(1+e^{c_1 x})$ and $g_2(x)=\frac{2}{1+e^{-c_2 x}}-1$, where $\W$ is additive i.i.d. Gaussian noise. The task is then to learn the SILVar model~\eqref{eq:SILVar} parameters $(\what{g},\what\A,\what\L)$ from $(\Y,\X)$ without access to $\Z$. As a comparison, we use an Oracle GLM model, in which the true $g$ is given but the parameters $(\what\A_o,\what\L_o)$ still need to be learned. We note that while there is a slight mismatch between the true and assumed noise distributions, the task is nonetheless difficult, yet estimation using the SILVar model still exhibits good performance with respect to the Oracle.
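A rough Python sketch of this generation process is given below; the covariance construction, noise level, and link constants are simplified (identity covariance, fixed scales) relative to the description above, so it should be read as an illustration only.
\begin{verbatim}
import numpy as np

def make_synthetic(m=25, p=25, h=0.1, k=100, seed=0,
                   g=lambda x: np.log1p(np.exp(x))):
    """Simplified sketch of the synthetic multi-task data generation."""
    rng = np.random.default_rng(seed)
    H = max(int(h * p), 1)
    A = rng.standard_normal((m, p))
    mask = np.zeros(m * p, dtype=bool)
    mask[rng.choice(m * p, int(m * np.log10(p)), replace=False)] = True
    A *= mask.reshape(m, p)                              # sparse observed part
    B = rng.standard_normal((m, H)) / (3 * np.sqrt(H))   # latent part
    X = rng.standard_normal((p, k))                      # identity covariance
    Z = rng.standard_normal((H, k))
    Y = g(A @ X + B @ Z) + 0.1 * rng.standard_normal((m, k))
    return Y, X, A, B, Z
\end{verbatim}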
The experiments were carried out across a range of problem sizes. The dimension of $\y_i$ was fixed at $m=25$. The dimension of the observed $\x_i$ was set to $p\in\{25,50\}$, and the proportion of the dimension of hidden $\z_i$ was set to $h\in\{0.1,0.2\}$. The number of data samples was varied in $k\in\{25, 50, 100, 150, 200\}$. Validation was performed using a set of samples generated independently but using the same $\A_f$ and $g$, and using a grid of $(\lambda_S,\lambda_L)\in\left\{10^{i/4} \big| i\in \{-8, -7, \ldots, 7, 8\} \right\}^2$. The whole process of data generation and model learning is repeated 20 times for each experimental condition. The average $\ell_1$ errors between the true $\A$ and the best estimated $\what\A$ (determined via validation) are shown in Figure~\ref{fig:toy}. The first row is for the function $g_1$ and the second row is for the function $g_2$ for the various experimental conditions. The empirical results show that the SILVar model and learning algorithm lose out slightly on performance for estimating the link function $g$ but overall still match the Oracle fairly well in most cases. We also see that SILVar performs better than just the sparse SIM~\cite{ganti_learning_2015}, but that the purely sparse Oracle GLM can still provide an improvement over SILVar for not knowing the link function. We also see several other intuitive behaviors for both the SILVar and Oracle models: as we increase the amount of data $n$, the performance increases. Note that due to the scaling of weights $\B$ by the factor of $1/\sqrt{H}$ to keep the average power $\|\B\|_F$ roughly constant, increasing the proportion of hidden variables (from $h=0.1$ to $h=0.2$) did not directly lead to significant performance decreases. We also note that $g_2$ seems to be somehow more difficult to estimate than $g_1$, and additionally that the performance of the SILVar model w.r.t. the Oracle model is worse with $g_2$ than with $g_1$. This could have something to do with the 2 saturations in $g_2$ as opposed to the 1 saturation in $g_1$, corresponding in an intuitive sense to a more non-linear setting that contains more information needing to be captured while estimating $\what{g}$.
\begin{figure*}
\caption{Different errors for estimating $\A$ in Toy Data}
\label{fig:toy}
\end{figure*}
\subsection{Temperature Data}
In this setting, we wish to learn the graph capturing relations between the weather patterns at different cities. The data is a real world multivariate time series consisting of daily temperature measurements (in $^\circ$F) for $365$ consecutive days during the year $2011$ taken at each of $150$ different cities across the continental USA.
Previously, the analysis on this dataset has been performed by first fitting with a $4$th order polynomial and then estimating a sparse graph from an autoregressive model using a known link function $g(x)=x$ assuming Gaussian noise~\cite{mei_signal_2017}.
Here, we fit the time series using a $2$nd order AR SILVar model~\eqref{eq:SILVar_AR} with regularizers for group sparsity $h_1(\A)=\lambda_1\sum\limits_{i,j}\left\|\left(a^{(1)}_{ij} \ldots a^{(M)}_{ij}\right)\right\|_2$ where $a_{ij}^{(m)}$ is the $ij$ entry of matrix $\A^{(m)}$, and nuclear norm $h_2(\L)=\lambda_2\sum\limits_{i=1}^{M}\left\|\L^{(i)}\right\|_*$. We also estimate the underlying trend using~\eqref{eq:trend_opt}.
In Figure~\ref{fig:weather_trends}, we plot the original time series, estimated trends, the estimated network, and residuals. The plots are all from time steps $3$ to $363$, since those are the indices for the trends that we can reliably estimate using the simple procedure~\eqref{eq:trend_opt}. In Figure~\ref{fig:weather_trends:ts}, we show the original time series, which clearly exhibit a seasonal trend winter--summer--winter. Figure~\ref{fig:weather_trends:weather_g_hat} shows the estimated link function $\what{g}$, which turns out to be linear, though a priori we might not intuitively expect a process as complicated as weather to behave this way when sampled so sparsely.
Figures~\ref{fig:weather_trends:full_pred} and~\ref{fig:weather_trends:full_resid} show the full prediction $\what{\x}_k=\sum\limits_{i=1}^M(\what{\A}^{(i)}+\what{\L}^{(i)})\x_{k-i}$ and the residuals $\x_k-\what{\x}_k$, respectively. Note the much lower vertical scale in Figure~\ref{fig:weather_trends:full_resid}. Figures~\ref{fig:weather_trends:tr_pred} and~\ref{fig:weather_trends:tr_resid} show the trend $\what{\L}'$ learned using~\eqref{eq:trend_opt} and the time series with the estimated trend removed, respectively. The trend estimation procedure captures the basic shape of the seasonal effects on the temperature. Several of the faster fluctuations in the beginning of the year are captured as well, suggesting that they were caused by some larger scale phenomena. Indeed there were several notable storm systems that affected the entire USA in the beginning of the year in short succession~\cite{hedge_summary_2011, hamrick_mid-atlantic_2011}.
Figure~\ref{fig:weather_graphs} compares two networks $\what{\A}'$ estimated using SILVar and using just sparse SIM without accounting for the low-rank trends, both with the same sparsity level of $12\%$ non-zeros for display purposes, and where $\what{a}'_{ij}=\left\|\left(\what{a}^{(1)}_{ij} \ldots \what{a}^{(M)}_{ij}\right)\right\|_2$.
Figure~\ref{fig:weather_graphs:silvar} shows the network $\what{\A}'$ that is estimated using SILVar. The connections imply predictive dependencies between the temperatures in cities connected by the graph. It is intuitively pleasing that the patterns discovered match well previously established results based on first de-trending the data and then separately estimating a network~\cite{mei_signal_2017}. That is, we see the effect of the Rocky Mountain chain around $-110^\circ$ to $-105^\circ$ longitude and the overall west-to-east direction of the weather patterns, matching the prevailing winds.
In contrast to that of SILVar, the graph estimated by the sparse SIM shown in Figure~\ref{fig:weather_graphs:nolat} on the other hand has many additional connections with no basis in actual weather patterns. Two particularly unsatisfying cities are: sunny Los Angeles, California at $(-118, 34)$, with its multiple connections to snowy northern cities including Fargo, North Dakota at $(-97, 47)$; and Caribou, Maine at $(-68, 47)$, with its multiple connections going far westward against prevailing winds including to Helena, Montana at $(-112, 47)$. These do not show in the graph estimated by SILVar and shown in Figure~\ref{fig:weather_graphs:silvar}.
\begin{figure}
\caption{Temperature time series}
\label{fig:weather_trends:ts}
\caption{Learned $\what{g}$}
\label{fig:weather_trends:weather_g_hat}
\caption{One step prediction}
\label{fig:weather_trends:full_pred}
\caption{Prediction error}
\label{fig:weather_trends:full_resid}
\caption{Estimated trend}
\label{fig:weather_trends:tr_pred}
\caption{De-trended time series}
\label{fig:weather_trends:tr_resid}
\caption{Time series, link function, trends, and prediction errors computed using the learned SILVar model}
\label{fig:weather_trends}
\end{figure}
\begin{figure}
\caption{Weather graph learned using SILVar}
\label{fig:weather_graphs:silvar}
\caption{Weather graph learned using Sp. SIM (without low-rank)}
\label{fig:weather_graphs:nolat}
\caption{Learned weather stations graphs}
\label{fig:weather_graphs}
\end{figure}
\subsection{Bike Traffic Data}
The bike traffic data was obtained from HealthyRide Pittsburgh~\cite{noauthor_healthy_2016}. The dataset contains the timestamps and station locations of departure and arrival (among other information) for each of 127,559 trips taken between 50 stations within the city from May 31, 2015 to September 30, 2016, a total of 489 days.
We consider the task of using the total number of rides departing from and arriving in each location at 6:00AM-11:00AM to predict the number of rides departing from each location during the peak period of 11:00AM-2:00PM for each day. This corresponds to $\Y\in\mathbb{N}_0^{50 \times 489}$ and $\X\in\mathbb{N}_0^{100 \times 489}$, where $\mathbb{N}_0$ is the set of non-negative integers, and $\A,\L\in\Rbb^{50\times 100}$.
We estimate the SILVar model~\eqref{eq:SILVar} and compare its performance against a sparse plus low-rank GLM model with an underlying Poisson distribution and fixed link function $g_{\scriptsize{\textrm{GLM}}}(x)=\log(1+e^x)$. We use $n\in\{60,120,240,360\}$ training samples and compute errors on validation and test sets of size $48$ each, and learn the model on a grid of $(\lambda_S,\lambda_L)\in\left\{10^{i/4} \big| i\in \{-8, -7, \ldots, 11, 12\} \right\}^2$. We repeat this 10 times for each setting, using an independent set of training samples each time. We compute testing errors in these cases for the optimal $(\lambda_S,\lambda_L)$ with lowest validation errors for both SILVar and GLM models.
We also demonstrate that the low-rank component of the estimated SILVar model captures something intrinsic to the data. Naturally, we expect people's behavior and thus traffic to be different on business days and on non-business days. A standard pre-processing step would be to segment the data along this line and learn two different models. However, as we use the full dataset to learn one single model, we hypothesize that the learned low-rank $\what{\L}$ captures some aspects of this underlying behavior.
\begin{figure}
\caption{(a) Root mean squared errors (RMSEs) from SILVar and Oracle models; (b) Link function learned using SILVar model}
\label{fig:bikes_MSE_link}
\end{figure}
Figure~\ref{fig:bikes_MSE_link:MSE} shows the test Root Mean Squared Errors (RMSEs) for both SILVar and GLM models for varying training sample sizes, averaged across the 10 trials. We see that the SILVar model outperforms the GLM model by learning the link function in addition to the sparse and low-rank regression matrices. Figure~\ref{fig:bikes_MSE_link:link} shows an example of the link function learned by the SILVar model with $n=360$ training samples. Note that the learned SILVar link function performs non-negative clipping of the output, which is consistent with the count-valued nature of the data.
\begin{figure}
\caption{Receiver operating characteristics (ROCs) for classifying each day as a business day or non-business day, using the low-rank embedding provided by $\what{\L}$}
\label{fig:bikes_ROCs}
\end{figure}
We also test the hypothesis that the learned $\what{\L}$ contains information about whether a day is business or non-business through its relatively low-dimensional effects on the data. We perform the singular value decomposition (SVD) on the optimally learned $\what{\L}=\what{\U}\what{\boldsymbol{\Sigma}}\what{\V}^\top$ for $n=360$ and project the data onto the $r$ top singular components $\wtil{\X}_r=\what{\boldsymbol{\Sigma}}_r\what{\V}_r^\top\X$. We then use $\wtil{\X}_r$ to train a linear support vector machine (SVM) to classify each day as either a business day or a non-business day, and compare the performance of this lower dimensional feature to that of using the full vector $\X$ to train a linear SVM. If our hypothesis is true then the performance of the classifier trained on $\wtil{\X}_r$ should be competitive with that of the classifier trained on $\X$. We use 50 training samples of $\wtil\X_r$ and of $\X$ and test on the remainder of the data. We repeat this 50 times by drawing a new batch of $50$ samples each time. We then vary the proportion of business to non-business days in the training sample to trace out a receiver operating characteristic (ROC).
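A sketch of this embedding-and-classification step is given below, using scikit-learn's \texttt{LinearSVC} as a stand-in linear SVM; the helper name and defaults are illustrative rather than a record of the exact experimental code.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def project_and_classify(L_hat, X, labels, r=4, n_train=50, seed=0):
    """Project inputs onto the top-r right singular directions of L_hat
    and score a linear SVM classifying business vs. non-business days."""
    labels = np.asarray(labels)
    U, s, Vt = np.linalg.svd(L_hat, full_matrices=False)
    X_r = (np.diag(s[:r]) @ Vt[:r]) @ X            # Sigma_r V_r^T X
    idx = np.random.default_rng(seed).permutation(X.shape[1])
    tr, te = idx[:n_train], idx[n_train:]
    clf = LinearSVC().fit(X_r[:, tr].T, labels[tr])
    return clf.score(X_r[:, te].T, labels[te])
\end{verbatim}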
In Figure~\ref{fig:bikes_ROCs}, we see the results of training linear SVM on $\wtil{\X}_r$ for $r\in\{1,...,6\}$ and on the full data for classifying business and non-business days. We see that using only the first two singular vectors, the performance is fairly poor. However, by simply taking 3 or 4 singular vectors, the classification performance almost matches that of the full data. Surprisingly, using the top 5 or 6 singular vectors achieves performance greater than that of the full data. This suggests that the projection may even play the role of a de-noising filter in some sense. Since we understand that behavior should correlate well with the day being business or non-business, this competitive performance of the classification using the lower dimensional features strongly suggests that the low-rank $\what\L$ indeed captures the effects of latent behavioral factors on the data.
\begin{figure}
\caption{Intensities of the self-loop at each station}
\label{fig:bikes_station_location}
\end{figure}
Finally, in Figure~\ref{fig:bikes_station_location}, we plot the $(i,i)$ entries of the optimal $\what\A$ at $n=360$. This corresponds to locations for which incoming bike rides at 6:00AM-11:00AM are good predictors of outgoing bike rides at 11:00AM-2:00PM, beyond the effect of latent factors such as day of the week. We may intuitively expect this to correlate with locations that have restaurants open for lunch service, so that people would be likely to ride in for lunch or ride out after lunch. This is confirmed by observing that these stations are in Downtown (-80, 40.44), the Strip District (-79.975, 40.45), Lawrenceville (-79.96, 40.47), and Oakland (-79.96, 40.44), known locations of many restaurants in Pittsburgh. It is especially interesting to note that Oakland, sandwiched between the University of Pittsburgh and Carnegie Mellon University, is included. Even though the target demographic is largely within walking distance, there is a high density of restaurants open for lunch, which may explain its non-zero coefficient. The remainder of the locations with non-zero coefficients $a_{ii}$ are also near high densities of lunch spots, while the other locations with coefficients $a_{ii}$ of zero are largely either near more residential areas or near restaurants better known for dinner or nightlife rather than lunch, such as Shadyside ($x\ge -79.95$) and Southside ($y\le 40.43$).
\section{Conclusion}
\label{sec:conc}
Data exhibit complex dependencies, and it is often a challenge to deal with non-linearities and unmodeled effects when attempting to uncover meaningful relationships among various interacting entities that generate the data.
We introduce the SILVar model for performing semi-parametric sparse regression and estimating sparse graphs from data under the presence of non-linearities and latent factors or trends. The SILVar model estimates a non-linear link function $g$ as well as structured regression matrices $\A$ and $\L$ in a sparse and low-rank fashion. We justify the form of the model and relate it to existing methods for general PCA, multi-task regression, and vector autoregression. We provide computationally tractable algorithms for learning the SILVar model and demonstrate its performance against existing regression models and methods on both simulated and real data sets, namely 2011 US weather sensor network data and 2015-2016 Pittsburgh bike traffic data. We see from the simulated data that the SILVar model matches the performance of an Oracle GLM that knows the true link function and only needs to estimate $\A$ and $\L$; we show empirically on the temperature data that the learned $\L$ can capture the effects of underlying trends in time series while $\A$ represents a graph consistent with US weather patterns; and we see that, in the bike data, SILVar outperforms a GLM with a fixed link function, the learned $\L$ encodes latent behavioral aspects of the data, and $\A$ discovers notable locations consistent with the restaurant landscape of Pittsburgh.
\appendices
\section{Proof of Theorem~\ref{thm:equiv}}
\label{app:thm}
We first restate the theorem for convenience.
\setcounter{thm}{0}
\begin{thm}
\label{thm:equiv_precise}
Assume that $\what{g}'(0)\ne 0$ and that $|\what{g}''|\le J$ and $|\overline{g}''|\le J$ for some $J<\infty$. Furthermore, assume in models~\eqref{eq:SIM_MLE_full_obj} and~\eqref{eq:SIM_MLE_latent} that $\max_j \left(\|\what{\a}_j\|_1,\|\overline{\a}_j\|_1 +\|\overline{\b}_j\|_2 \right)\le k,$
{\small \begin{equation*}
\max\left( \Ebb[\|\x\|_2 \|\x\|_\infty^2], \Ebb[\|\x\|_2\|\x\|_\infty \|\z\|_2], \Ebb[\|\x\|_2 \|\z\|_2^2] \right) \le s_{Nr},
\end{equation*}
}where subscripts in the parameters on the RHS indicate that the bounds may grow with the values in the subscript.
Then, the parameters $\what{\A}$ and $\overline{\A}$ from models~\eqref{eq:SIM_MLE_full_obj} and~\eqref{eq:SIM_MLE_latent} are related as
\begin{equation*}
\what{\A}= q(\overline{\A}+\L) + \E,
\end{equation*}
where $q=\frac{\overline{g}'(0)}{\what{g}'(0)}$, $\boldsymbol{\mu}_\x=\Ebb[\x_i]$,
\begin{equation*}
\begin{aligned}
&\L=\Big(\overline{\B}\Ebb[\z\x^\top] + \left(\overline{\g}(\0)- \what{\g}(\0)\right)\boldsymbol{\mu}_\x^\top \Big)\left(\Ebb[\x\x^\top]\right)^{\dagger}\\
\Rightarrow&\textrm{rank}(\L)\le r+1,
\end{aligned}
\end{equation*}
and $\E = \what{\A} - q(\overline{\A}+\L)$ is bounded as
\begin{equation*}
\!\begin{aligned}[t] \frac{1}{MN}\|\E\|_F & \le \! \frac{2J \sigma_\ell \sqrt{N}}{\what{g}'(0)M}s_{Nr}k^2,
\end{aligned}
\end{equation*}
where $\sigma_\ell=\left\|\left(\Ebb\left[\x\x^\top \right]\right)^\dagger \right\|_2$ is the largest singular value of the pseudo-inverse of the covariance.
\end{thm}
~\\
\noindent\textit{Proof.} We begin with the Taylor series expansion, using the Lagrange form of the remainder,
{\small
\begin{equation}
\label{eq:proof_taylor}
\begin{aligned}
&\Ebb\left[ \left( \what{\g}(\0) \!+\! \what{g}'(0)\what{\A}\x \!-\! \overline{\g}(\0) \!-\! \overline{g}'(0)(\overline{\A}\x \!+\! \overline{\B}\z) \right) \x^\top \right]\\
&\,=\!\Ebb\left[ \left( \what{\g}''(\boldsymbol{\xi}) \!\odot\! (\what{\A}\x_i).^\wedge 2 \!-\! \overline{\g}''(\boldsymbol{\eta})\odot(\overline{\A}\x_i \!+\! \overline{\B}\z).^\wedge 2 \right)\x^\top \right],
\end{aligned}
\end{equation}
}where
\begin{equation*}
\begin{aligned}
&|\xi_j| \le |\what{\a}_j\x|, \qquad |\eta_j| \le |\overline{\a}_j\x+\overline{\b}_j\z|,
\end{aligned}
\end{equation*}
$\overline{\B}\!=\!\left(\overline{\b}_1\ldots\overline{\b}_m\right)^\top$ similarly to before, $\x\odot\y$ denotes the element-wise (Hadamard) product, and $\x.^\wedge 2=\x\odot\x$ denotes element-wise squaring.
First let us consider the quantity
\begin{align*}
(\overline{\a}_j\x \!+\! \overline{\b}_j\z)^2 &= (\overline{\a}_j\x)^2 \!+\! 2\overline{\a}_j\x\overline{\b}_j\z + (\overline{\b}_j\z)^2 \\
&\le (\|\overline{\a}_j\|_1 \|\x\|_\infty + \|\overline{\b}_j\|_2\|\z\|_2)^2
\end{align*}
Then,
{\small
\begin{align*}
\|\!\left((\overline{\A}\x \!+\! \overline{\B}\z).\!^\wedge 2 \!\right)\! \x^{\!\top}\!\|_F \! & \!\le \|(\overline{\A}\x \!+\! \overline{\B}\z).\!^\wedge 2 \|_2 \|\x\|_2 \\
& \le \|\x\|_2\sqrt{\sum_{j=1}^{N}(\overline{\a}_j\x \!+\! \overline{\b}_j\z)^4}\\
&\overset{(a)}{\le} \|\x\|_2 \sum_{j=1}^{N}(\overline{\a}_j\x \!+\! \overline{\b}_j\z)^2\\
&\le \! \|\x\|_2 \! \sum_{j=1}^{N}(\|\overline{\a}_j\|_1 \|\x \!\|_\infty \!+\! \|\overline{\b}_j\|_2\|\z\|_2)^2\\
\Rightarrow \!\Ebb[\|\!\left((\overline{\A}\x \!+\! \overline{\B}\z).\!^\wedge 2 \!\right)\! \x^{\!\top}\!\|_F \!]&\le \! s_{Nr} \sum_{j=1}^{N}(\|\overline{\a}_j\|_1 \!+\! \|\overline{\b}_j\|_2)^2 \\
&\le \! s_{Nr} Nk^2
\end{align*}
}where $(a)$ follows from the fact that $\|\x\|_4 \le \|\x\|_2 \; \forall \x\in \Rbb^{N}$.
Similarly, we have
\begin{align*}
\Ebb[\|\!((\what{\A}\x ).\!^\wedge 2 ) \x^{\!\top}\!\|_F \!]&\le \Ebb\left[\|\x\|_2\sum_{j=1}^{N}(\|\what{\a}_j\|_1 \|\x\|_\infty)^2\right]\\
&\le \! s_{Nr} Nk^2.
\end{align*}
To finish, substituting the definitions of $\E$ and $q$ into~\eqref{eq:proof_taylor} and applying the triangle inequality, we have
{\small
\begin{equation*}
\!\begin{aligned}[t] \|\E\|_F &
\!\!\le\!\! J\Ebb\!\left[ \left\|\left(\! (\what{\A}\x_i).\!^\wedge 2 \!+\! (\overline{\A}\x_i \!+\! \overline{\B}\z_i).\!^\wedge 2 \!\right)\! \x_i^{\!\top} \! \right\|_F \!\right] \! \left\|\left( \what{g}'(0)\Ebb[\x_i\x_i^{\!\top} \! ] \! \right)^{\!\dagger} \right\|_F \\
& \le \! \frac{2J \sigma_\ell }{\what{g}'(0)}s_{Nr}k^2N\sqrt{N}.
\end{aligned}
\end{equation*}
}
\qed
\section{Proof of Theorem~\ref{thm:perf}}
First, we present and prove some intermediate results. For convenience of notation, for the remainder of this section let $\boldsymbol{\Phi}=\what\A-\wtil\A$ and $\boldsymbol{\Psi}=\what\L-\wtil\L$.
\subsection{Propositions}
\setcounter{thm}{4}
\begin{prop}
\label{prop:restricted}
Under Assumption~\ref{as:Dev}, the solution $(\what\A,\what\L)$ to the optimization problem~\eqref{eq:SILVar_AR_opt} satisfies
\begin{align*}
&\gamma\|\boldsymbol{\Phi}_{S^c}\|_{1} \!+\! \|\boldsymbol{\Psi}_{R^c}\|_* \le 3( \gamma\|\boldsymbol{\Phi}_S\|_{1} \!+\!\|\boldsymbol{\Psi}_R\|_* )\\
&\gamma\|\boldsymbol{\Phi}\|_{1} \!+\! \|\boldsymbol{\Psi}\|_* \le 4\sqrt{r_\L}( \tau\|\boldsymbol{\Phi}\|_{F} \!+\!\|\boldsymbol{\Psi}\|_F ).
\end{align*}
\end{prop}
~\\
\noindent\textit{Proof.} We start with the convexity of the marginalized objective functional at $(\wtil\A,\wtil\L)$,
\begin{align*}
m(\what{\A} \!+\! \what{\L}) \!\ge\! m(\wtil\A \!+\! \wtil\L) \!+\! \textrm{tr}\left(\left[\frac{1}{K}\what{\wtil{\boldsymbol{\Gamma}}}\X^\top \!\right]^\top \!(\what{\A} \!+\! \what{\L} \!-\! (\wtil\A \!+\! \wtil\L))\right)
\end{align*}
Then consider the optimality of the full objective functional at $(\what{g},\what{\A},\what{\L})$,
{\small
\begin{align*}
&m(\what{\A}+\what{\L}) +\lambda (\gamma\|\what{\A}\|_{1} + \|\what{\L}\|_* )\\
&\quad = \what{F}_3(\Y,\X,\what{g},\what{\A}+\what{\L}) + \lambda(\gamma\|\what{\A}\|_{1} + \|\what{\L}\|_* ) \\
&\quad \le \what{F}_3(\Y,\X,\what{\wtil{g}},\wtil{\A}+\wtil{\L})+\lambda(\gamma\|\wtil{\A}\|_{1} + \|\wtil{\L}\|_* )\\
&\quad = m(\wtil{\A}+\wtil{\L}) + \lambda (\gamma\|\wtil{\A}\|_{1} + \|\wtil{\L}\|_* )\\
\Rightarrow & m(\what{\A} \!+\! \what{\L}) \!-\! m(\wtil{\A}+\wtil{\L}) \le \lambda (\gamma\|\wtil{\A}\|_{1} \!+\! \|\wtil{\L}\|_* ) \!-\! \lambda (\gamma\|\what{\A}\|_{1} \!+\! \|\what{\L}\|_* )\\
\Rightarrow &\textrm{tr}\left(\left[ \frac{1}{K}\what{\wtil{\boldsymbol{\Gamma}}}\X^\top \!\right]^\top \! (\what{\A} \!+\! \what{\L} \!-\! (\wtil\A \!+\! \wtil\L))\right)\\
& \quad \le \lambda (\gamma(\|\wtil\A\|_{1} \!-\! \|\what{\A}\|_{1}) \!+\! \|\wtil\L\|_* \!-\! \|\what{\L}\|_* )
\end{align*}
}where the final implication combines the preceding inequality with the convexity of the marginalized objective (the first display above). Then, using Assumption~\ref{as:Dev},
\begin{align*}
&-\!\frac{\lambda\gamma}{2}\|\boldsymbol{\Phi}\|_{1} \!-\! \frac{\lambda}{2}\|\boldsymbol{\Psi}\|_* \!\le\! \lambda\gamma(\|\wtil\A\|_{1} \!-\! \|\what{\A}\|_{1}) \!+\! \lambda(\|\wtil\L\|_* \!-\! \|\what{\L}\|_* )\\
\Rightarrow & 0 \le \frac{\gamma}{2}\|\boldsymbol{\Phi}\|_{1} \!+\! \gamma(\|\wtil\A\|_{1} \!-\! \|\what{\A}\|_{1}) \!+\! \frac{1}{2}\|\boldsymbol{\Psi}\|_* \!+\! (\|\wtil\L\|_* \!-\! \|\what{\L}\|_* )\\
\Rightarrow & 0 \le \frac{\gamma}{2}(\|\boldsymbol{\Phi}_S\|_{1} \!+\! \|\boldsymbol{\Phi}_{S^c}\|_{1}) \!+\! \gamma(\|\boldsymbol{\Phi}_S\|_{1} \!-\! \|\boldsymbol{\Phi}_{S^c}\|_{1})\\
& \qquad \!+\! \frac{1}{2}(\|\boldsymbol{\Psi}_R\|_* \!+\! \|\boldsymbol{\Psi}_{R^c}\|_*) \!+\! (\|\boldsymbol{\Psi}_R\|_* \!-\! \|\boldsymbol{\Psi}_{R^c}\|_* )\\
\Rightarrow& 0 \le -(\gamma\|\boldsymbol{\Phi}_{S^c}\|_{1} \!+\! \|\boldsymbol{\Psi}_{R^c}\|_*) \!+\! 3(\gamma\|\boldsymbol{\Phi}_S\|_{1} \!+\! \|\boldsymbol{\Psi}_R\|_* )
\end{align*}
where the penultimate inequality arises from decomposability of the norm. Specifically,
\begin{align*}
\begin{cases}
\left| [\wtil\A]_s \right| - \left| [\what\A]_s \right| \le \left| [\wtil\A]_s - [\what\A]_s \right| = \left| [\boldsymbol{\Phi}]_s \right|, & s\in S\\
\left| [\wtil\A]_s \right| - \left| [\what\A]_s \right| = - \left| [\what\A]_s \right| = -\left| [\boldsymbol{\Phi}]_s \right|, & s\notin S
\end{cases}
\end{align*}
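Summing these relations over $s$ yields $\|\wtil\A\|_{1} - \|\what{\A}\|_{1} \le \|\boldsymbol{\Phi}_S\|_{1} - \|\boldsymbol{\Phi}_{S^c}\|_{1}$, and the analogous bound $\|\wtil\L\|_* - \|\what{\L}\|_* \le \|\boldsymbol{\Psi}_R\|_* - \|\boldsymbol{\Psi}_{R^c}\|_*$ follows from the corresponding decomposability of the nuclear norm over the pair $(R,R^c)$.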
Thus, we have the first inequality
\begin{align*}
& \gamma\|\boldsymbol{\Phi}_{S^c}\|_{1} \!+\! \|\boldsymbol{\Psi}_{R^c}\|_* \le 3(\gamma\|\boldsymbol{\Phi}_S\|_{1} \!+\! \|\boldsymbol{\Psi}_R\|_* )
\end{align*}
Finally, for the second inequality,
\begin{align*}
\gamma\|\boldsymbol{\Phi}\|_{1} \!+\! \|\boldsymbol{\Psi}\|_* &\le 4( \gamma\|\boldsymbol{\Phi}_{S}\|_{1} \!+\! \|\boldsymbol{\Psi}_{R}\|_* )\\
&\le 4( \gamma\sqrt{s_\A}\|\boldsymbol{\Phi}_{S}\|_{F} \!+\! \sqrt{r_\L}\|\boldsymbol{\Psi}_{R}\|_F )\\
&\le 4\sqrt{r_\L}( \tau\|\boldsymbol{\Phi}_{S}\|_{F} \!+\!\|\boldsymbol{\Psi}_{R}\|_F )\\
&\le 4\sqrt{r_\L}( \tau\|\boldsymbol{\Phi}\|_{F} \!+\!\|\boldsymbol{\Psi}\|_F )
\end{align*}
\qed
\begin{prop}
\label{prop:Inc}
Under Assumptions~\ref{as:Dev} and \ref{as:Inc}, the solution $(\what\A,\what\L)$ to the optimization problem~\eqref{eq:SILVar_AR_opt} satisfies
\begin{align*}
2|\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi})| \le \mu(\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*})^2.
\end{align*}
\end{prop}
~\\
\noindent\textit{Proof.} This follows directly from Proposition~\ref{prop:restricted}, Assumption~\ref{as:Inc}, and applying H\"older-type norm-duality inequalities twice
\begin{align*}
2|\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi})| &\le \|\boldsymbol{\Psi}\|_{\infty}\|\boldsymbol{\Phi}\|_{1} +\|\boldsymbol{\Psi}\|_{*}\|\boldsymbol{\Phi}\|_{2}\\
&\le \mu\gamma(\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*})\|\boldsymbol{\Phi}\|_1\\
&\quad +\mu(\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*})\|\boldsymbol{\Psi}\|_*\\
&= \mu(\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*})^2.
\end{align*}
\qed
\subsection{Theorem}
We now restate the theorem for convenience.
\setcounter{thm}{2}
\begin{thm}
Under Assumptions~\ref{as:RSC}-\ref{as:Inc}, the solution to the optimization problem~\eqref{eq:SILVar_AR_opt} satisfies
\begin{align*}
&\|\what\A-\wtil\A\|_F \le \frac{3\lambda\gamma\sqrt{s_\A}}{\alpha}\left(2 \!+\! \sqrt{\frac{2}{2 \!+\! \tau^2}}\right)\le\frac{9\lambda\gamma\sqrt{s_\A}}{\alpha}\\
&\|\what\L-\wtil\L\|_F \le \frac{3\lambda\sqrt{r_\L}}{\alpha}\left(2 \!+\! \sqrt{\frac{2\tau^2}{1 \!+\! 2\tau^2}}\right)\le\frac{9\lambda\sqrt{r_\L}}{\alpha}.
\end{align*}
\end{thm}
~\\
\noindent\textit{Proof.} Starting with Proposition~\ref{prop:restricted}, since our solution is in this restricted set, we can use the stronger convexity condition implied by Assumption~\ref{as:RSC},
\begin{align*}
m(\what{\A} \!+\! \what{\L}) \!\ge& m(\wtil\A \!+\! \wtil\L) \!+\! \textrm{tr}\left(\left[\frac{1}{K}\what{\wtil{\boldsymbol{\Gamma}}}\X^\top \!\right]^\top \!\left(\what{\A} \!+\! \what{\L} \!-\! (\wtil\A \!+\! \wtil\L)\right)\right)\\
&\quad + \alpha \|\boldsymbol{\Phi}+\boldsymbol{\Psi}\|_F^2
\end{align*}
Revisiting the objective functional at optimality and skipping repetitive algebra (see the proof of Proposition~\ref{prop:restricted}),
\begin{align*}
\alpha \|\boldsymbol{\Phi} \!+\! \boldsymbol{\Psi}\|_F^2 \! &\le\! \frac{\lambda}{2}\left( 3(\gamma\|\boldsymbol{\Phi}_S\|_{1} \!+\! \|\boldsymbol{\Psi}_R\|_* ) \!-\!(\gamma\|\boldsymbol{\Phi}_{S^c}\|_{1} \!+\! \|\boldsymbol{\Psi}_{R^c}\|_*) \right)\\
&\le\! \frac{3\lambda}{2}(\gamma\|\boldsymbol{\Phi}\|_{1} \!+\! \|\boldsymbol{\Psi}\|_* )\\
&\le 6\lambda\sqrt{r_\L}(\tau\|\boldsymbol{\Phi}\|_{F} \!+\! \|\boldsymbol{\Psi}\|_F )
\end{align*}
Now from Proposition~\ref{prop:Inc}, we have
\begin{align*}
\|\boldsymbol{\Phi} \!+\! \boldsymbol{\Psi}\|_F^2 \! &\ge\! \|\boldsymbol{\Phi}\|_F^2 \!+\! \|\boldsymbol{\Psi}\|_F^2 - 2|\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi})| \\
&\ge \|\boldsymbol{\Phi}\|_F^2 \!+\! \|\boldsymbol{\Psi}\|_F^2 - \mu(\gamma\|\boldsymbol{\Phi}\|_{1}+\|\boldsymbol{\Psi}\|_{*})^2
\end{align*}
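Here the first inequality follows from the expansion $\|\boldsymbol{\Phi}+\boldsymbol{\Psi}\|_F^2 = \|\boldsymbol{\Phi}\|_F^2+\|\boldsymbol{\Psi}\|_F^2+2\,\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi})$ together with $\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi}) \ge -|\textrm{tr}(\boldsymbol{\Phi}^\top\boldsymbol{\Psi})|$.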
Combining the previous two inequalities, we have
{\small
\begin{align*}
&\alpha (\|\boldsymbol{\Phi}\|_F^2 \!+\! \|\boldsymbol{\Psi}\|_F^2) \!-\! \alpha\mu(\gamma\|\boldsymbol{\Phi}\|_{1} \!+\!\|\boldsymbol{\Psi}\|_{*})^2 \! \le\! 6\lambda\sqrt{r_\L}(\tau\|\boldsymbol{\Phi}\|_{F} \!+\! \|\boldsymbol{\Psi}\|_F )\\
\Rightarrow& \alpha (\|\boldsymbol{\Phi}\|_F^2 \!+\! \|\boldsymbol{\Psi}\|_F^2) \!-\! 16\alpha\mu r_\L (\tau\|\boldsymbol{\Phi}\|_{F} \!+\!\|\boldsymbol{\Psi}\|_{F})^2 \\
&\qquad \le\! 6\lambda\sqrt{r_\L}(\tau\|\boldsymbol{\Phi}\|_{F} \!+\! \|\boldsymbol{\Psi}\|_F )\\
\Rightarrow& \boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}\le 0
\end{align*}
}where
\begin{align*}
&\boldsymbol{\zeta}= ( \|\boldsymbol{\Phi}\|_F \quad \|\boldsymbol{\Psi}\|_F \quad 1)^\top\\
&\Q=\left(\begin{array}{ccc}
\alpha\left(\frac{2+\tau^2}{2(1+\tau^2)}\right) & -\alpha\left(\frac{\tau}{2(1+\tau^2)}\right) & -3\lambda'\tau \\
-\alpha\left(\frac{\tau}{2(1+\tau^2)}\right) & \alpha\left(\frac{1+2\tau^2}{2(1+\tau^2)}\right) & -3\lambda' \\
-3\lambda'\tau & -3\lambda' & 0
\end{array}\right),
\end{align*}
which is a conic in standard form, $\lambda'=\lambda\sqrt{r_\L}$, and the entries of $\Q$ follow from taking $\mu$ given in Assumption~\ref{as:Inc}.
Checking its discriminant,
\begin{align*}
\left(2+\tau^2\right)(1+2\tau^2) - \tau^2 > 0
\end{align*}
so we have that the conic equation describes an ellipse, and there are bounds on the values of $\|\boldsymbol{\Phi}\|_F$ and $\|\boldsymbol{\Psi}\|_F$.
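Explicitly, $(2+\tau^2)(1+2\tau^2)-\tau^2 = 2+4\tau^2+2\tau^4 = 2(1+\tau^2)^2 > 0$ for every $\tau$, so (since $\alpha>0$) the leading $2\times 2$ block of $\Q$ is positive definite.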
For these individual bounds, we consider the points at which the gradients of $\boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}$ vanish w.r.t. each of $\|\boldsymbol{\Phi}\|_F$ and $\|\boldsymbol{\Psi}\|_F$. For $\|\boldsymbol{\Phi}\|_F$,
\begin{align*}
&\partial_{\|\boldsymbol{\Phi}\|_F}\boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}=0 \\
\Rightarrow &
\alpha\left(\frac{2+\tau^2}{2(1+\tau^2)}\right)\|\boldsymbol{\Phi}\|_F ^* = 3\lambda'\tau + \alpha\left(\frac{\tau}{2(1+\tau^2)}\right) \|\boldsymbol{\Psi}\|_F^*.
\end{align*}
Plugging this into the equation defined by $\boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}=0$ yields
\begin{align*}
\|\boldsymbol{\Phi}\|_F^* = \frac{3\lambda'\tau}{\alpha}\left(2\pm\sqrt{\frac{2}{2+\tau^2}}\right).
\end{align*}
Since we are seeking an upper bound for $\|\boldsymbol{\Phi}\|_F$, it can be seen that we take the positive root,
\begin{align*}
\|\boldsymbol{\Phi}\|_F \le \frac{3\lambda'\tau}{\alpha}\left(2+\sqrt{\frac{2}{2+\tau^2}}\right) \le\frac{9\lambda'\tau}{\alpha}
\end{align*}
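where the final simplification uses $\sqrt{2/(2+\tau^2)} \le 1$; the analogous simplification $\sqrt{2\tau^2/(1+2\tau^2)} \le 1$ is used for $\|\boldsymbol{\Psi}\|_F$ below.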
Similarly, for $\|\boldsymbol{\Psi}\|_F$,
\begin{align*}
&\partial_{\|\boldsymbol{\Psi}\|_F}\boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}=0 \\
\Rightarrow &
\alpha\left(\frac{1+2\tau^2}{2(1+\tau^2)}\right)\|\boldsymbol{\Psi}\|_F^* = 3\lambda' + \alpha\left(\frac{\tau}{2(1+\tau^2)}\right) \|\boldsymbol{\Phi}\|_F^*.
\end{align*}
Finally, plugging this into $\boldsymbol{\zeta}^\top\Q\boldsymbol{\zeta}=0$ and solving for the upper bound yields
\begin{align*}
\|\boldsymbol{\Psi}\|_F \le \frac{3\lambda'}{\alpha}\left(2+\sqrt{\frac{2\tau^2}{1+2\tau^2}}\right)\le\frac{9\lambda'}{\alpha}
\end{align*}
\qed
\ifCLASSOPTIONcaptionsoff
\fi
\balance
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{jmei2}}]{Jonathan Mei}
received his B.S. and M.Eng. degrees in electrical engineering both from the Massachusetts Institute of Technology in Cambridge, MA in 2013. He is currently pursuing his Ph.D. degree in electrical and computer engineering at Carnegie Mellon University in Pittsburgh, PA. His research interests include computational photography, image processing, reinforcement learning, graph signal processing, non-stationary time series analysis, and high-dimensional optimization.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{moura_photo}}]{Jos\'e M.~F.~Moura}(S'71--M'75--SM'90--F'94) is the Philip L.~and Marsha Dowd University Professor at Carnegie Mellon University (CMU). He holds the engenheiro electrot\'{e}cnico degree from Instituto Superior T\'ecnico (IST), Lisbon, Portugal, and the M.Sc., E.E., and D.Sc.~degrees in EECS from the Massachusetts Institute of Technology (MIT), Cambridge, MA. He was on the faculty at IST and has held visiting faculty appointments at MIT and New York University (NYU). He founded and directs a large education and research program between CMU and Portugal, www.icti.cmu.edu.
His research interests are on data science and graph signal processing. He has published over 550 papers and holds 14~US patents. Two of his patents (co-inventor A. Kav$\check{\textrm{c}}$i\'c) are used in over three billion disk drives in 60~\% of all computers sold in the last 13 years worldwide and were, in 2016, the subject of a 750 million dollar settlement between CMU and a chip manufacturer, the largest ever university verdict/settlement in the information technologies area.
Dr. Moura is the 2018 IEEE President-Elect, was IEEE Technical Activities Vice-President, IEEE Division IX Director, President of the IEEE Signal Processing Society (SPS), and Editor-in-Chief of the IEEE Transactions on Signal Processing.
Dr. Moura received the Technical Achievement Award and the Society Award from the IEEE Signal Processing Society, and the CMU College of Engineering Distinguished Professor of Engineering Award. He is a Fellow of the IEEE, the American Association for the Advancement of Science (AAAS), a corresponding member of the Academy of Sciences of Portugal, Fellow of the US National Academy of Inventors, and a member of the US National Academy of Engineering.
\end{IEEEbiography}
\end{document}
\begin{document}
\title{Pre-derivations and description of non-strongly nilpotent filiform Leibniz algebras }
\author{K.K. Abdurasulov, A.Kh. Khudoyberdiyev, M. Ladra and A.M. Sattarov}
\address{[A.Kh. Khudoyberdiyev]
National University of Uzbekistan, Institute of Mathematics
Academy of Sciences of Uzbekistan, Tashkent, 100170, Uzbekistan.}
\email{[email protected]}
\address{[K.K. Abdurasulov -- A.M. Sattarov] Institute of Mathematics Academy of Sciences of Uzbekistan, Tashkent, 100170, Uzbekistan.}
\email{[email protected] --- [email protected]}
\address{[M. Ladra] Department of Mathematics, Institute of Mathematics, University of Santiago de Compostela, 15782, Spain.}
\email {[email protected]}
\subjclass[2010]{17A32, 17A36, 17B30}
\keywords{Lie algebra, Leibniz algebra, derivation,
pre-derivation, nilpotency, characteristically nilpotent algebra,
strongly nilpotent algebra}
\begin{abstract}
In this paper we investigate pre-derivations of filiform Leibniz
algebras. Recall that the set of filiform Leibniz algebras of
fixed dimension can be decomposed into three disjoint families.
We describe the pre-derivations of filiform Leibniz algebras for the first and second families. We
find sufficient conditions under which filiform Leibniz algebras
are strongly nilpotent. Moreover, for the first and second
families, we give the description of characteristically nilpotent
algebras which are non-strongly nilpotent.
\end{abstract}
\maketitle
\section{Introduction}
The notion of Leibniz algebra was introduced in \cite{Lod} as
a non-antisymmetric generalization of Lie algebras. During the
last 25 years, the theory of Leibniz algebras has been actively
studied and many results of the theory of Lie algebras have been
extended to Leibniz algebras. Since the study of derivations and
automorphisms of a Lie algebra plays an essential role in the
structure theory of algebras, it is a natural question whether
the corresponding results for Lie algebras can be extended to more
general objects.
In 1955, Jacobson \cite{Jac2} proved that a Lie algebra over a
field of characteristic zero admitting a non-singular (invertible)
derivation is nilpotent. However, the related question of whether the converse of this
statement holds remained open until the work of Dixmier and
Lister \cite{Dix}, where an example of a nilpotent Lie algebra
all of whose derivations are nilpotent (and hence singular) was
constructed. Such types of Lie algebras are called
characteristically nilpotent Lie algebras. The papers \cite{CaNu,
Kha1} and others are devoted to the investigation of
characteristically nilpotent Lie algebras.
An analogue of Jacobson's result for Leibniz algebras was
proved in \cite{LaRiTu}. Moreover, it was shown that, similarly to
the case of Lie algebras, the converse of this statement does not
hold and the notion of characteristically nilpotent Lie algebra
can be extended to Leibniz algebras \cite{KhLO, Omi1}. The study
of derivations of Lie algebras led to the appearance of a natural
generalization: pre-derivations of Lie algebras \cite{Mul};
the pre-derivations arise as the Lie algebra corresponding to the group of the pre-automorphisms of a Lie algebra.
A pre-derivation of a Lie algebra is just a
derivation of the Lie triple system induced by it. Research on pre-derivations has been related to Lie algebra degenerations, Lie triple systems
and bi-invariant semi-Riemannian metrics on Lie groups \cite{Burd}. In
\cite{Baj} it was proved that Jacobson's result is also true in
terms of pre-derivations. Similarly to the example of Dixmier and
Lister, several examples of nilpotent Lie algebras all of whose
pre-derivations are nilpotent were presented in \cite{Burd}. Such
Lie algebras are called strongly nilpotent.
In \cite{Moens}, a generalized notion of derivations and
pre-derivations of Lie algebras is given. These derivations are defined as Leibniz-derivations of
order $k$, and Moens proved that a finite-dimensional Lie algebra over a field of characteristic zero is nilpotent if and only if it admits an invertible Leibniz-derivation. Further, an analogue of Moens' result was shown for alternative, Jordan, Zinbiel, Malcev and Leibniz algebras \cite{FKhO,Kayg1,Kayg2}.
Since a Leibniz-derivation of order $3$ of a Leibniz algebra is a
pre-derivation, it is natural to define the notion of strongly
nilpotent Leibniz algebras. It should be noted that the class of
strongly nilpotent Leibniz algebras is a subclass of
characteristically nilpotent Leibniz algebras. Thus, one of the
approaches to the classification of nilpotent Leibniz algebras
considers those Leibniz algebras for which every Leibniz-derivation
of order $k$ is nilpotent and which admit a non-nilpotent
Leibniz-derivation of order $k+1$. In the case of $k=1$ we have
the class of non-characteristically nilpotent Leibniz algebras.
This class of filiform Leibniz algebras was classified in
\cite{KhLO}.
This paper is devoted to the study of algebras for the case $k=2$, i.e. the class of characteristically
nilpotent filiform Leibniz algebras which are non-strongly
nilpotent. It is known that the class of all filiform Leibniz
algebras is split into three disjoint families
\cite{AyOm2,GoOm}, where one of the families contains filiform Lie
algebras and the other two families arise from naturally graded
non-Lie filiform Leibniz algebras. An isomorphism criterion for
these two families of filiform Leibniz algebras was given in
\cite{GoOm}. Note that some classes of finite-dimensional
filiform Leibniz algebras and algebras of dimension less than
$10$ were classified in \cite{AAOK, GoJiKh,OmRa,RaSo}.
In order to achieve our goal, we have organized the paper as
follows. In Section~\ref{S:prel}, we present necessary definitions
and results that will be used in the rest of the paper.
In Section~\ref{S1} we describe pre-derivations of filiform Leibniz algebras of the first and second families.
Finally, in Section~\ref{S2}, we give the description of
characteristically nilpotent filiform Leibniz
algebras which are non-strongly nilpotent.
Throughout the paper, all the spaces and algebras are assumed to be
finite-dimensional.
\section{Preliminaries}\label{S:prel}
In this section we give necessary definitions and preliminary
results.
\begin{defn} An algebra $(L,[-,-])$ over a field $F$ is called a (right) Leibniz algebra if for any $x,y,z\in L$, the so-called Leibniz identity
\[ \big[[x,y],z\big]=\big[[x,z],y\big]+\big[x,[y,z]\big] \] holds.
\end{defn}
Every Lie algebra is a Leibniz algebra, but the bracket in a
Leibniz algebra does not need to be skew-symmetric.
Note that a derivation of a Leibniz algebra $L$ is a
linear transformation $d$ such that
\[d([x,y])=[d(x),y]+[x, d(y)],\]
for any $x, y\in L$.
Pre-derivations of Leibniz algebras are a generalization of
derivations, which are defined as follows.
\begin{defn}
A linear transformation $P$ of the Leibniz algebra $L$ is called a
pre-derivation if for any $x, y, z\in L$,
\[P([[x,y],z])=[[P(x),y],z]+[[x,P(y)],z]+[[x,y],P(z)].\]
\end{defn}
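Note that every derivation $d$ of a Leibniz algebra $L$ is a pre-derivation: applying the derivation property twice gives
\[d([[x,y],z])=[d([x,y]),z]+[[x,y],d(z)]=[[d(x),y],z]+[[x,d(y)],z]+[[x,y],d(z)].\]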
For a given Leibniz algebra $L$ we consider the lower central
series:
\[
L^1=L,\qquad L^{k+1}=[L^k,L^1] \qquad k \geq 1.
\]
\begin{defn} A Leibniz algebra $L$ is called
nilpotent if there exists $s\in\mathbb N $ such that $L^s=0$.
\end{defn}
A nilpotent Leibniz algebra is called \emph{characteristically
nilpotent} if all its derivations are nilpotent. We say that a
Leibniz algebra is \emph{strongly nilpotent} if any
pre-derivation is nilpotent.
Since any derivation of a Leibniz algebra is a pre-derivation,
every strongly nilpotent Leibniz algebra is
characteristically nilpotent. Examples of characteristically
nilpotent but non-strongly nilpotent Leibniz algebras can be
found in \cite{FKhO, KhLO, Omi1}.
\begin{defn} A Leibniz algebra $L$ is said to be filiform if
$\dim L^i=n-i$, where $n=\dim L$ and $2\leq i \leq n$.
\end{defn}
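For instance, the $n$-dimensional algebra with basis $\{e_1,\dots,e_n\}$ and non-zero products $[e_1,e_1]=e_3$, $[e_i,e_1]=e_{i+1}$ for $2\leq i \leq n-1$ is filiform, since $L^i=\langle e_{i+1},\dots,e_n\rangle$ for $2\leq i\leq n$; this is the algebra of the first family in the theorem below with all parameters equal to zero.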
The following theorem decomposes all $n$-dimensional filiform
Leibniz algebras into three families.
\begin{thm}[\cite{AyOm2,GoOm}]
Any $n$-dimensional complex filiform Leibniz algebra admits a basis $\{e_1, e_2, \dots, e_n\}$
such that the table of multiplication of the algebra has one of the following forms:
$F_1(\alpha_4, \dots, \alpha_n,\theta)=\left\{\begin{array}{ll}
[e_1,e_1]=e_3, \\[1mm] [e_i,e_1]=e_{i+1},& 2\leq i \leq n-1,\\[1mm]
[e_1,e_2]=\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n,&\\[1mm]
[e_j,e_2]=\sum\limits_{t=j+2}^{n}\alpha_{t-j+2}e_t,& 2\leq j\leq n-2.\\[1mm]
\end{array}\right.$ \\[1mm]
$F_2(\beta_4, \dots, \beta_n,\gamma)=\left\{\begin{array}{ll}
[e_1,e_1]=e_{3}, \\[1mm]
[e_i,e_1]=e_{i+1}, & \ 3\leq i \leq {n-1},\\[1mm]
[e_1,e_2]=\sum\limits_{t=4}^{n}\beta_te_{t}, \\[1mm]
[e_2,e_2]= \gamma e_{n},\\[1mm]
[e_j,e_2]=\sum\limits_{t=j+2}^{n}\beta_{t-j+2}e_t, & \ 3\leq j
\leq {n-2},
\end{array} \right.$ \\
$F_3(\theta_1,\theta_2,\theta_3)= \left\{\begin{array}{lll}
[e_i,e_1]=e_{i+1}, &
2\leq i \leq {n-1},\\[1mm]
[e_1,e_i]=-e_{i+1}, & 3\leq i \leq {n-1}, \\[1mm]
[e_1,e_1]=\theta_1e_n, & \\[1mm]
[e_1,e_2]=-e_3+\theta_2e_n, & \\[1mm]
[e_2,e_2]=\theta_3e_n, & \\[1mm]
[e_i,e_j]=-[e_j,e_i] \in \langle e_{i+j+1}, e_{i+j+2}, \dots ,
e_n\rangle, &
2\leq i < j \leq {n-1},\\[1mm]
[e_i,e_{n+1-i}]=-[e_{n+1-i},e_i]=\alpha (-1)^{i+1}e_n, & 2\leq
i\leq n-1,
\end{array} \right.$
\noindent where all omitted products are equal to zero and
$\alpha\in\{0,1\}$ for even $n$ and $\alpha=0$ for odd $n$.
\end{thm}
It is easy to see that algebras of the first and the second
families are non-Lie algebras. Note that if $(\theta_1, \theta_2,
\theta_3) = (0,0,0)$, then an algebra of the third class is a Lie
algebra.
Further we shall need the notion of Catalan numbers. The
$p$-th Catalan numbers were defined
in \cite{HiPe} by the formula \[C^{p}_{n} = \frac {1} {(p-1)n+1}
\ \binom{pn}{n}\,.\]
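For $p=2$ these are the classical Catalan numbers: $C^{2}_{1}=1$, $C^{2}_{2}=2$, $C^{2}_{3}=5$, $C^{2}_{4}=14$, and so on.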
It should be noted that for the $p$-th Catalan numbers the
following equality holds:
\begin{equation} \label{E:comb}
\sum\limits_{k=1}^{n} C_k^p C_{n-k}^p = \frac {2n}
{(p-1)n+p+1}C_{n+1}^p\,.
\end{equation}
\section{Pre-derivations of filiform Leibniz algebras}\label{S1}
In this section we give the description of pre-derivations of
filiform Leibniz algebras. First, we consider the filiform Leibniz
algebras from the first family.
Let $L$ be an $n$-dimensional filiform Leibniz algebra from the
family $F_1(\alpha_4, \dots, \alpha_n, \theta )$.
\begin{prop}\label{prop1}
The pre-derivations of the filiform Leibniz algebras from the
family $F_1(\alpha_4,\alpha_5,\ldots, \alpha_{n}, \theta)$ have
the following form:
\begin{align*}
P(e_1)&=\sum\limits_{t=1}^{n}a_te_t,\quad
P(e_2)=(a_1+a_2)e_2+\sum\limits_{t=3}^{n-2}a_te_t+b_{n-1}e_{n-1}+b_ne_n,
\quad P(e_3)=\sum\limits_{t=2}^{n}c_te_t,\\
P(e_{2i})&=((2i-1)a_1+a_2)e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+3})e_t,\qquad \qquad \qquad 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor,\\
P(e_{2i+1})&=c_2e_{2i}+((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+2})e_t,\qquad 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor,
\end{align*}
where $\lfloor a\rfloor$ is the integer part of the real number $a$ and
\begin{equation}\label{eq0}\left\{\begin{array}{ll}
(1+(-1)^n)c_2=0,\quad c_2\alpha_t=0,& 4\leq t\leq n-1, \\[1mm]
(a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0,\\[1mm]
\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0,&3\leq k\leq \lfloor\frac{n-1}{2}\rfloor,\\[1mm]
(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0,& 3\leq k\leq \lfloor\frac{n}{2}\rfloor-1,\\[1mm]
(a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},& 5\leq k\leq n-2,\\[1mm]
(a_2-(n-4)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\[1mm]
\quad\quad\quad\quad\quad\quad\quad\quad\quad
+\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t},&
n\quad \text {is even},
\\[1mm]
(2a_2-c_3-(n-6)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}
+\\[1mm]
\quad\quad\quad\quad\quad\quad\quad\quad\quad
+\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t},&
n\quad \text {is odd.}
\end{array}\right.
\end{equation}
\end{prop}
\begin{proof}
Let $L$ be a filiform Leibniz algebra from the family
$F_1(\alpha_4,\alpha_5,\ldots, \alpha_{n}, \theta)$ and let $P \colon L
\rightarrow L$ be a pre-derivation of $L$. Put
\[P(e_1)=\sum\limits_{t=1}^{n}a_te_t, \quad P(e_2)=\sum\limits_{t=1}^{n}b_te_t, \quad P(e_3)=\sum\limits_{t=1}^{n}c_te_t.\]
From
\begin{align*}
0& =P([[e_1,e_1],e_3])=[[P(e_1),e_1],e_3]+[[e_1,P(e_1)],e_3]+[[e_1,e_1],P(e_3)]\\
& = [e_3, \sum\limits_{t=1}^{n}c_te_t] = c_1e_4 +
c_2\sum\limits_{t=4}^{n-1}\alpha_{t}e_{t+1},
\end{align*}
we have
\[ c_1=0,\qquad c_2\alpha_t=0,\quad 4\leq t\leq n-1.\]
By the property of pre-derivation, we have
\begin{align*}
P(e_4)&=P([[e_1,e_1],e_1])=[[P(e_1),e_1],e_1]+[[e_1,P(e_1)],e_1]+[[e_1,e_1],P(e_1)]\\
& =[(a_1+a_2)e_3+\sum\limits_{t=4}^{n}a_{t-1}e_t,e_1]+[a_1e_3+a_2\big(\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n\big),e_1]+a_1e_4+
a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\
& =(3a_1+a_2)e_4+\sum\limits_{t=5}^{n}(a_{t-2}+2a_2\alpha_{t-1})e_t.
\end{align*}
On the other hand,
\begin{align*}
P(e_4)&=P([[e_2,e_1],e_1])=[[P(e_2),e_1],e_1]+[[e_2,P(e_1)],e_1]+[[e_2,e_1],P(e_1)] \\
&=[(b_1+b_2)e_3+\sum\limits_{t=4}^{n}b_{t-1}e_t,e_1]+[a_1e_3+a_2\sum\limits_{t=4}^{n}\alpha_te_t,e_1]+a_1e_4+
a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\
&=(2a_1+b_1+b_2)e_4+\sum\limits_{t=5}^{n}(b_{t-2}+2a_2\alpha_{t-1})e_t.
\end{align*}
Comparing the coefficients at the basis elements we have
\[b_1+b_2=a_1+a_2, \qquad b_{t}=a_t,\quad 3\leq t\leq n-2.\]
Using the property of pre-derivation, we get
\begin{align*}
P(e_5)&=P([[e_3,e_1],e_1])=[[P(e_3),e_1],e_1]+[[e_3,P(e_1)],e_1]+[[e_3,e_1],P(e_1)]\\
&=[c_2e_3+\sum\limits_{t=4}^{n}c_{t-1}e_t,e_1]+[a_1e_4+a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t,e_1]+a_1e_5+
a_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\
&=c_2e_4+(2a_1+c_3)e_5+\sum\limits_{t=6}^{n}(c_{t-2}+2a_2\alpha_{t-2})e_t.
\end{align*}
Similarly, from the equality
\[P(e_{j+2})=P([[e_j,e_1],e_1])=[[P(e_j),e_1],e_1]+[[e_j,P(e_1)],e_1]+[[e_j,e_1],P(e_1)],
\quad 3\le j\le n-2,\] inductively, we derive
\begin{align*}
P(e_{2i})&=((2i-1)a_1+a_2)e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+3})e_t,&& 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor.\\
P(e_{2i+1})&=c_2e_{2i}+((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+2})e_t,&& 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor.
\end{align*}
Moreover, in the case where $n$ is even, using the property of the
pre-derivation for the triple $\{e_{n-1},e_1,e_1\}$, we deduce
$c_2=0$. In the case where $n$ is odd, considering the property of
$P([[e_{n-1},e_1],e_1])$ yields an identically satisfied equality. Thus we
get \[(1 + (-1)^n)c_2 =0.\]
Consider
\begin{multline*}
P([[e_1,e_1],e_2])=P([e_3,e_2])=P\Big(\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\Big)\\
=\sum\limits_{t=2}^{\lfloor\frac{n-1}{2}\rfloor}\Big[c_2e_{2t}+((2t-2)a_1+c_3)e_{2t+1}+\sum\limits_{k=2t+2}^{n}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})e_k\Big]\alpha_{2t}\\
+\sum\limits_{t=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2t-1)a_1+a_2)e_{2t}+\sum\limits_{k=2t+1}^{n}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})e_k\Big]\alpha_{2t-1}\\
=\sum\limits_{k=2}^{\lfloor\frac{n-1}{2}\rfloor}((2k-2)a_1+c_3)\alpha_{2k}e_{2k+1}+\sum\limits_{k=6}^{n}\sum\limits_{t=2}^{\lfloor\frac{k-2}{2}\rfloor}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})\alpha_{2t}e_k\\
+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}((2k-1)a_1+a_2)\alpha_{2k-1}e_{2k}+\sum\limits_{k=7}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-1}{2}\rfloor}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})\alpha_{2t-1}e_k\\
=(2a_1+c_3)\alpha_4e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2k-1)a_1+a_2)\alpha_{2k-1}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t}\\
+\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1}\Big]e_{2k}\\
+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2k-2)a_1+c_3)\alpha_{2k}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t}\\
+\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-1}\Big]e_{2k+1}.
\end{multline*}
On the other hand, using the property of pre-derivation, we get
\begin{multline*}
P([[e_1,e_1],e_2])=[[P(e_1),e_1],e_2]+[[e_1,P(e_1)],e_2]+[[e_1,e_1],P(e_2)]\\
=[(a_1+a_2)e_3+\sum\limits_{t=4}^{n}a_{t-1}e_t,e_2]+[a_1e_3+a_2\Big(\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n \Big),e_2]+b_1e_4+
b_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\
=(a_1+a_2)\sum\limits_{t=5}^{n}\alpha_{t-1}e_t+\sum\limits_{t=4}^{n-2}a_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k\\
+ a_1\sum\limits_{t=5}^{n}\alpha_{t-1}e_t+a_2\sum\limits_{t=4}^{n-2}\alpha_{t}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+b_1e_4+
b_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\
=b_1e_4+(2a_1+a_2+b_2)\alpha_{4}e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+a_2+b_2)\alpha_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+2}\Big]e_{2k}\\
+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[(2a_1+a_2+b_2)\alpha_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+3}\Big]e_{2k+1}.
\end{multline*}
Comparing the coefficients at the basis elements we have
\begin{equation}\label{eq1}\left\{\begin{array}{cc}
b_1=0,\\[1mm] (2a_1+a_2+b_2)\alpha_4=(2a_1+c_3)\alpha_4,
\end{array}\right.
\end{equation}
\begin{equation}\label{eq2}3\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{cc}
(2a_1+a_2+b_2)\alpha_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+2}\\[1mm]
=((2k-1)a_1+a_2)\alpha_{2k-1}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t}+\\[1mm]
+\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1},
\end{array}\right.
\end{equation}
\begin{equation}\label{eq3}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{cc}
(2a_1+a_2+b_2)\alpha_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+3}\\[1mm]
=((2k-2)a_1+c_3)\alpha_{2k}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t}+\\[1mm]
+\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-1}.
\end{array}\right.
\end{equation}
Now, we consider
\begin{multline*}
P([[e_3,e_1],e_2])=P([e_4,e_2])=P\Big(\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\Big)\\
=\sum\limits_{t=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2t-2)a_1+c_3)e_{2t+1}+\sum\limits_{k=2t+2}^{n}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})e_k\Big]\alpha_{2t-1}\\
+\sum\limits_{t=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2t-1)a_1+a_2)e_{2t}+\sum\limits_{k=2t+1}^{n}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})e_k\Big]\alpha_{2t-2}\\
=\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}((2k-2)a_1+c_3)\alpha_{2k-1}e_{2k+1}+\sum\limits_{k=8}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-2}{2}\rfloor}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})\alpha_{2t-1}e_k\\
+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}((2k-1)a_1+a_2)\alpha_{2k-2}e_{2k}+\sum\limits_{k=7}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-1}{2}\rfloor}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})\alpha_{2t-2}e_k\\
=(5a_1+a_2)\alpha_4e_6+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2k-2)a_1+c_3)\alpha_{2k-1}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1} \displaybreak \\
+\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-2}\Big]e_{2k+1} \\
+\sum\limits_{k=4}^{\lfloor\frac{n}{2}\rfloor}\Big[((2k-1)a_1+a_2)\alpha_{2k-2}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t-1}\\
+\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-2}\Big]e_{2k}.
\end{multline*}
On the other hand,
\begin{multline*}
P([[e_3,e_1],e_2])=[[P(e_3),e_1],e_2]+[[e_3,P(e_1)],e_2]+[[e_3,e_1],P(e_2)]\\
=[\sum\limits_{t=3}^{n}c_{t-1}e_t,e_2]+[a_1e_4+a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t,e_2]+b_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\
=\sum\limits_{t=4}^{n-2}c_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+
a_1\sum\limits_{t=6}^{n}\alpha_{t-2}e_t+a_2\sum\limits_{t=5}^{n-2}\alpha_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+b_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\
=(2a_1+a_2+c_3)\alpha_{4}e_6+\sum\limits_{k=7}^{n}\Big[(2a_1+a_2+c_3)\alpha_{k-2}+\sum\limits_{t=5}^{k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{k-t+2}\Big]e_k\\
=(2a_1+a_2+c_3)\alpha_{4}e_6+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}
\Big[(2a_1+a_2+c_3)\alpha_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+3}\Big]e_{2k+1}\\
+\sum\limits_{k=4}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+a_2+c_3)\alpha_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+2}\Big]e_{2k}.
\end{multline*}
Comparing the coefficients at the basis elements we get
\begin{equation}\label{eq4}
(2a_1+a_2+c_3)\alpha_{4}=(5a_1+a_2)\alpha_4
\end{equation}
\begin{equation}\label{eq5}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{cc}
(2a_1+a_2+c_3)\alpha_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+3}\\[1mm]
=((2k-2)a_1+c_3)\alpha_{2k-1}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1}\\[1mm]
+\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-2},
\end{array}\right.
\end{equation}
\begin{equation}\label{eq6}4\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{cc}
(2a_1+a_2+c_3)\alpha_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+2}\\[1mm]
=((2k-1)a_1+a_2)\alpha_{2k-2}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t-1}+\\[1mm]
+\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-2}.
\end{array}\right.
\end{equation}
According to $b_1+b_2 = a_1 + a_2$, from equalities
\eqref{eq1} and \eqref{eq4} we obtain
\[(a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0.\]
Subtracting equalities \eqref{eq2} and \eqref{eq5} we obtain
\[\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac{n-1} 2\right\rfloor.\]
Summing these equalities we get
\[\alpha_{k}(a_2-(k-3)a_1)=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}\quad \text{for odd } k,\quad 5 \leq k \leq n-2.\]
Similarly from equalities \eqref{eq3} and \eqref{eq6} we obtain
\[(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac{n} 2\right\rfloor -1 \]
and
\[\alpha_{k}(a_2-(k-3)a_1)=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}\quad \text{for even } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq2} and \eqref{eq3} in the case of
$2k=n$ and $2k=n-1$, respectively, we obtain the last two restrictions
of equalities \eqref{eq0}.
Considering the properties of the pre-derivation for $P([[e_i,
e_1], e_2])$, $4 \leq i \leq n$ and $P([[e_i, e_2], e_2])$, $3
\leq i \leq n$, we have the same restrictions or identical
equalities.
\end{proof}
In the following proposition we give the description of
pre-derivations of algebras from the second class of filiform
Leibniz algebras.
\begin{prop}\label{prop32}
The pre-derivations of the filiform Leibniz algebras from the
family $F_2(\beta_4,\beta_5,\ldots, \beta_{n}, \gamma)$ have the
following form:
\begin{align*}
P(e_1)&=\sum\limits_{t=1}^{n}a_te_t, \qquad
P(e_2)=b_2e_2+b_{n-1}e_{n-1}+b_ne_n, \qquad
P(e_3)=\sum\limits_{t=2}^{n}c_te_t,\\
P(e_{2i})&=(2i-1)a_1e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\beta_{t-2i+3})e_t,&& 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor,\\
P(e_{2i+1})&=((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\beta_{t-2i+2})e_t, && 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor,
\end{align*}
where
\begin{equation}\label{eq3.8} \left\{\begin{array}{ll}
(c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0,\quad
c_2\beta_t=0,& 4\leq
t\leq n-1,\\[1mm]
\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, & 3\leq k\leq \lfloor\frac{n-1}{2}\rfloor,\\[1mm]
(c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2},
& 3\leq
k\leq \lfloor\frac{n}{2}\rfloor-1,\\[1mm]
(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, & 5\leq k\leq n-2,\\[1mm]
(b_2-c_3-(n-5)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}+\\[1mm]
\quad\quad\quad\quad\quad\quad\quad\quad
+\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t},
& n\quad \text {is odd,}\\[1mm]
(b_2-(n-3)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}+\\[1mm]
\quad\quad\quad\quad\quad\quad\quad\quad+\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t},&
n\quad \text {is even}.
\end{array}\right.
\end{equation}
\end{prop}
\begin{proof} Let $P$ be a pre-derivation of a filiform Leibniz algebra $L$ from the second class. Put
\[P(e_1)=\sum\limits_{t=1}^{n}a_te_t, \quad P(e_2)=\sum\limits_{t=1}^{n}b_te_t, \quad P(e_3)=\sum\limits_{t=1}^{n}c_te_t.\]
Consider the property of pre-derivation
\begin{align*}
P([[e_2,e_1],e_1])&=[[P(e_2),e_1],e_1]+[[e_2,P(e_1)],e_1]+[[e_2,e_1],P(e_1)]\\
&=[[\sum\limits_{t=1}^{n}b_te_t,e_1],e_1]+[[e_2,\sum\limits_{t=1}^{n}a_te_t],e_1]\\
&=[b_1e_3+\sum\limits_{t=3}^{n-1}b_{t}e_{t+1},e_1]=b_1e_4+\sum\limits_{t=5}^{n}b_{t-2}e_{t}.
\end{align*}
On the other hand, $P([[e_2,e_1],e_1])=0$, since $[e_2, e_1]=0$.
Thus, we have \[ b_1=0, \quad b_t=0, \quad 3\leq t\leq n-2.\]
Hence, $P(e_2)=b_2e_2+b_{n-1}e_{n-1}+b_ne_n$.
From
\begin{align*}
0&=P([[e_1,e_1],e_3])=[[P(e_1),e_1],e_3]+[[e_1,P(e_1)],e_3]+[[e_1,e_1],P(e_3)]\\
&=[e_3,\sum\limits_{t=1}^{n}c_te_t]=c_1e_4+c_2\sum\limits_{t=5}^{n}\beta_{t-1}e_t,
\end{align*}
we get \[c_1=0, \quad c_2\beta_t=0,\quad 4\leq t\leq n-1.\]
Considering the property of pre-derivation for the triples $\{e_1,
e_1, e_1\}$ and $\{e_i, e_1, e_1\}$ for $3 \leq i \leq n-2$,
inductively we obtain
\begin{align*}
P(e_{2i})&=(2i-1)a_1e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\beta_{t-2i+3})e_t, && 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor.\\
P(e_{2i+1})&=((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\beta_{t-2i+2})e_t, && 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor.
\end{align*}
Now, we consider
\begin{align*}
P([[e_1,e_1],e_2])&=[[P(e_1),e_1],e_2]+[[e_1,P(e_1)],e_2]+[[e_1,e_1],P(e_2)]\\
&=[[\sum\limits_{t=1}^{n}a_te_t,e_1],e_2]+[[e_1,\sum\limits_{t=1}^{n}a_te_t],e_2]+[e_3,b_2e_2+b_{n-1}e_{n-1}+b_ne_n]\\
&=(2a_1+b_2)\beta_4e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+b_2)\beta_{2k-1}+
\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\beta_t)\beta_{2k-t+2}\Big]e_{2k}\\
&\quad +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[(2a_1+b_2)\beta_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\beta_t)\beta_{2k-t+3}\Big]e_{2k+1}.
\end{align*}
On the other hand,
\begin{align*}
P([[e_1,e_1],e_2])&=P([e_3,e_2])=P\Big(\sum\limits_{t=5}^{n}\beta_{t-1}e_t\Big)\\
& =\beta_4(2a_1+c_3)e_5
+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[\beta_{2k-1}(2k-1)a_1 +\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\
& \qquad \qquad \qquad \qquad \qquad +\sum\limits_{t=3}^{k-1}\beta_{2t-1}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3})\Big]e_{2k}\\
& \qquad +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[\beta_{2k}((2k-2)a_1+c_3)+
\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\
& \qquad \qquad \qquad +\sum\limits_{t=3}^{k}\beta_{2t-1}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4})\Big]e_{2k+1}.
\end{align*}
Comparing the coefficients at the basis elements, we have
\begin{equation}\label{eq3.9}(2a_1+b_2)\beta_4=(2a_1+c_3)\beta_4.\end{equation}
\begin{equation}\label{eq3.10}
3\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor \left\{\begin{array}{c} (2a_1+b_2)\beta_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\beta_t)\beta_{2k-t+2}\\[1mm]
=\beta_{2k-1}(2k-1)a_1+\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\[1mm]
+\sum\limits_{t=3}^{k-1}\beta_{2t-1}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3}),
\end{array}\right.\end{equation}
\begin{equation}\label{eq3.11}
3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor
\left\{\begin{array}{c}(2a_1+b_2)\beta_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\beta_t)\beta_{2k-t+3}\\[1mm]
=\beta_{2k}((2k-2)a_1+c_3)+\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\[1mm]
+\sum\limits_{t=3}^{k}\beta_{2t-1}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4}).
\end{array}\right.
\end{equation}
Analogously, from the equality \[P([e_4, e_2]) =
P([[e_3,e_1],e_2])=[[P(e_3),e_1],e_2]+[[e_3,P(e_1)],e_2]+[[e_3,e_1],P(e_2)],\]
we obtain the following restrictions:
\begin{equation}\label{eq3.12}(a_1+b_2+c_3)\beta_{4}=5a_1\beta_4,\end{equation}
\begin{equation}\label{eq3.13}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{c}(a_1+b_2+c_3)\beta_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\beta_{t-1})\beta_{2k-t+3}\\[1mm]
=\beta_{2k-1}((2k-2)a_1+c_3)+\sum\limits_{t=3}^{k-1}\beta_{2t-1}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\[1mm]
+\sum\limits_{t=3}^{k}\beta_{2t-2}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4}),\end{array}\right.\end{equation}
\begin{equation}\label{eq3.14}4\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{c}(a_1+b_2+c_3)\beta_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\beta_{t-1})\beta_{2k-t+2}\\[1mm]
=\beta_{2k-2}(2k-1)a_1+\sum\limits_{t=3}^{k-1}\beta_{2t-1}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\[1mm]
+\sum\limits_{t=3}^{k-1}\beta_{2t-2}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3}).\end{array}\right.\end{equation}
It is not difficult to see that from \eqref{eq3.9} and
\eqref{eq3.12} we have
\[(c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0.\]
Similarly to the proof of Proposition \ref{prop1}, summing and
subtracting equalities \eqref{eq3.10} and \eqref{eq3.13}, we
obtain
\[\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac {n-1} 2\right\rfloor\]
and
\[(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}\quad \text{for odd } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq3.11} and \eqref{eq3.14} we have
\[(c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2}, \quad 3 \leq k \leq \left\lfloor\frac {n} 2\right\rfloor -1,\] and
\[(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}\quad \text{for even } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq3.10} and \eqref{eq3.11} in the case
of $2k=n-1$ and $2k=n$, respectively, we obtain the last two
restrictions of equalities \eqref{eq3.8}.
Considering the properties of the pre-derivation for $P([[e_i,
e_1], e_2])$ for $4 \leq i \leq n$ and $P([[e_i, e_2], e_2])$ for
$3 \leq i \leq n$, we have the same restrictions or identical
equalities.
\end{proof}
\section{Strongly nilpotent filiform Leibniz algebras}\label{S2}
In this section we describe non-strongly nilpotent filiform
Leibniz algebras. We give the description of non-strongly
nilpotent filiform Leibniz algebras of the first and second
classes. We reduce the investigation of strong nilpotency of the
third class of filiform Leibniz algebras to the investigation of Lie
algebras.
First, we consider the case of filiform Leibniz algebras from the
first class. From Proposition \ref{prop1} it is obvious that if
there exist parameters $a_1, a_2, c_2, c_3$ such that $(a_1,
a_2, c_2, c_3) \neq (0, 0, 0, 0)$ and the restriction \eqref{eq0}
holds, then a filiform Leibniz algebra of the first family is
non-strongly nilpotent; otherwise it is strongly nilpotent.
\begin{prop} \label{prop41} Let $L(\alpha_4, \alpha_5, \dots, \alpha_n, \theta)$ be a filiform Leibniz algebra of the first family. If
$\alpha_4 = \alpha_5 = \dots =\alpha_{n-1} =0$, then $L$ is
non-strongly nilpotent.
\end{prop}
\begin{proof} It is immediate that, if
$\alpha_4 = \alpha_5 = \dots =\alpha_{n-1} =0$, then the
restriction \eqref{eq0} holds for any values of $a_1, a_2,
c_3$. Thus, we have that $L$ has a non-nilpotent pre-derivation,
which implies that $L$ is non-strongly nilpotent.
\end{proof}
It is obvious that any algebra from the family $F_1(0,\dots, 0,
\alpha_n, \theta)$ is isomorphic to one of the following four
algebras:
\[F_1(0,\dots, 0, 0, 0), \quad F_1(0,\dots, 0, 0, 1),\quad F_1(0,\dots, 0, 1, 0), \quad F_1(0,\dots, 0, 1, 1).\]
Remark that the algebras $F_1(0,\dots, 0, 0, 0)$, $F_1(0,\dots, 0, 0,
1)$ and $ F_1(0,\dots, 0, 1, 1)$
are non-characteristically nilpotent (see \cite{KhLO}). The algebra $F_1(0,\dots, 0, 1, 0)$ is characteristically nilpotent,
but non-strongly nilpotent.
Now we consider the case of $\alpha_i\neq 0$ for some $i \ (4 \leq i
\leq n-1)$. Then from \eqref{eq0} we have that $c_2=0$.
\begin{thm}\label{thm41} Let $L$ be a filiform Leibniz
algebra from the family $F_1(\alpha_4, \alpha_5, \dots, \alpha_n,
\theta)$ and let $n$ be even.
$L$ is non-strongly nilpotent if and only if the parameters $(\alpha_4, \alpha_5, \alpha_6,\dots, \alpha_{n-1}, \alpha_n,
\theta)$ satisfy one of the following conditions:
i) $\alpha_4 \neq 0$ and $\alpha_k =
(-1)^kC_{k-4}^2\alpha_4^{k-3}, \quad 5 \leq k \leq n-2$;
ii) $\left\{\begin{array}{lll}\alpha_{(2s-3)t+3} =
(-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, & 3 \leq s \leq \frac{n-2}2
& 1 \leq t \leq \lfloor\frac
{n-5} {2s-3}\rfloor, \\[1mm]
\alpha_j =0, & j \neq (2s-3)t+3, & 4 \leq j \leq
n-2;
\end{array}\right.$
iii) $\alpha_{2i}=0$, for $2 \leq i \leq \frac {n-2} 2$;
\noindent where $C^{p}_{n} = \frac {1} {(p-1)n+1} \dbinom{pn}{n}$ is the
$p$-th Catalan number.
\end{thm}
\begin{proof} From Proposition \ref{prop1} we have
\begin{equation}\label{eq4.1}\left\{\begin{array}{ll}
(a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0,\\[1mm]
\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0,&3\leq k\leq \frac{n-2}{2},\\[1mm]
(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0,& 3\leq k\leq \frac{n-2}{2},\\[1mm]
(a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},& 5\leq k\leq n-2,\\[1mm]
(a_2-(n-4)a_1)\alpha_{n-1}=
a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\[1mm]
\quad\quad\quad\quad\quad\quad\quad\quad+\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t}.
\end{array}\right.
\end{equation}
\textbf{Case 1.} If $\alpha_4 \neq 0$, then from the first two
equalities of \eqref{eq4.1} we get $a_2 =a_1$, $c_3=3a_1$ and from
the next two equalities of \eqref{eq4.1} we obtain \[c_i = a_{i-1} + a_1 \alpha_i,
\qquad 4 \leq i \leq n-3.\]
Since $a_2 =a_1$, $c_3=3a_1$, we get that $L$ is non-strongly
nilpotent if and only if $a_1 \neq 0$. Therefore we have
\begin{align*}
\alpha_{k}&=\frac{k-1}{2(4-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},
&& 5\leq k\leq n-2,\\
(5-n)a_1\alpha_{n-1}& = (c_{n-2}-a_{n-3} + a_1\alpha_{n-2})\alpha_4
+a_1\sum\limits_{t=5}^{n-2}(t-2)\alpha_{n-t+2}\alpha_{t}.
\end{align*}
Using equality \eqref{E:comb} we get that
\[\alpha_k =(-1)^kC_{k-4}^2\alpha_4^{k-3}, \qquad 5 \leq k \leq n-2\]
and
\[c_{n-2} = \frac{1}{\alpha_4}\Big((5-n)a_1\alpha_{n-1} - a_1\sum\limits_{t=5}^{n-2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-3} - a_1\alpha_{n-2}.\]
Note that the parameters $\alpha_{n-1}, \alpha_{n}$ and $\theta$
are free and we have the case \textit{i}).
\textbf{Case 2.} Let $\alpha_{2s} \neq 0$ for some $s \ (3 \leq s
\leq \frac {n-2} 2)$ and $\alpha_{2i} = 0$ for $2 \leq i \leq
s-1$. Then similarly to the previous case from equality
\eqref{eq4.1} we get
\[(2a_1+a_2 - c_3)\alpha_{2s}=0, \qquad (a_2 - (2s-3) a_1)\alpha_{2s}=0,\]
which imply $a_2=(2s-3) a_1$, $c_3 = (2s-1) a_1$ and
\[c_i = a_{i-1} + a_1 \alpha_i, \qquad 4 \leq i \leq n-2s+1.\]
If $L$ is non-strongly nilpotent then $a_1 \neq 0$. Consequently,
we have
\[\alpha_{k}=\frac{(k-1)(2s-3)}{2(2s-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},\qquad 5\leq k\leq n-2.\]
Using $\alpha_{2i} = 0$ for $2 \leq i \leq s-1$ and applying
formula \eqref{E:comb}, inductively on $t$ we obtain
\begin{align*}
\alpha_j &= 0, \qquad \qquad j \neq (2s-3)t+3, \qquad 4 \leq j \leq n-2,\\
\alpha_{(2s-3)t+3} &= (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, \qquad \qquad \qquad \quad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-3}\right\rfloor.
\end{align*}
From the last equality of \eqref{eq4.1} we have
\[c_{n-2s+2} = \frac{1}{\alpha_{2s}}\Big((2s+1-n)a_1\alpha_{n-1} - (2s-3)a_1\sum\limits_{t=2s+1}^{n-2s+2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-2s+1} -
(2s-3)^2a_1\alpha_{n-2s}.\]
The parameters $\alpha_{n-1}$, $\alpha_n$ and $\theta$ may take any
values and we obtain case \textit{ii}).
\textbf{Case 3.} Let $\alpha_{2i}=0$
for all $i \ (2 \leq i \leq \frac {n-2} 2)$. Then the first four
equalities of \eqref{eq4.1} hold and from the last equalities
we have
\begin{align*}
&\alpha_5(a_2-2a_1) =0, \qquad \alpha_{2s+1}(a_2-(2s-2)a_1) =sa_2\sum\limits_{t=3}^s\alpha_{2t-1}\alpha_{2s+5-2t}, \quad 3 \leq s \leq \frac {n-2} 2.\\
&(a_2-(n-4)a_1)\alpha_{n-1}=
a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}.
\end{align*}
Taking $a_1=a_2=0$ and $c_3\neq 0$, we have that the previous
equalities hold for any values of $\alpha_{2s+1}$. Since
$c_3\neq 0$, this algebra is non-strongly nilpotent and we
obtain the case \textit{iii}).
\end{proof}
\begin{thm}\label{thm42}
Let $L$ be a filiform Leibniz
algebra from the family $F_1(\alpha_4, \alpha_5, \dots, \alpha_n,
\theta)$ and let $n$ be odd.
$L$ is non-strongly nilpotent if and only if the parameters $(\alpha_4, \alpha_5, \alpha_6,\dots, \alpha_{n-1}, \alpha_n,
\theta)$ satisfy one of the following conditions:
i) $\alpha_4 \neq 0$ and $\alpha_k =
(-1)^kC_{k-4}^2\alpha_4^{k-3}, \quad 5 \leq k \leq n-2$;
ii) $\left\{\begin{array}{lll}\alpha_{(2s-3)t+3} =
(-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, & 3 \leq s \leq \frac {n-3}
2,& 1 \leq t \leq \lfloor\frac {n-5} {2s-3}\rfloor,\\[1mm] \alpha_j = 0,
& j \neq (2s-3)t+3, & 4 \leq j \leq n-2;\end{array}\right.$
iii) $\alpha_{2i}=0$, for $2 \leq i \leq \frac {n-1} 2$;
iv) $\left\{\begin{array}{lll}\alpha_{(2s-2)t+3} =
(-1)^{t+1}C_{t}^{2s-1}\alpha_{2s+1}^{t}, & 2 \leq s \leq \frac
{n-3} 2, & 1 \leq t \leq \lfloor\frac {n-5} {2s-2}\rfloor, \\[1mm] \alpha_j
= 0, & j \neq (2s-2)t+3, & 4 \leq j \leq n-2.
\end{array}\right.$
\end{thm}
\begin{proof} From Proposition \ref{prop1}
we have
\begin{equation}\label{eq4.5}
\left\{\begin{aligned}
& (a_1-a_2)\alpha_4=0,\qquad \qquad (3a_1-c_3)\alpha_4=0,\\
& \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0, \qquad \qquad \qquad \qquad \qquad \quad 3\leq k\leq \frac{n-1}{2},\\
&(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0, \qquad 3\leq k\leq \frac{n-3}{2},\\
&(a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}, \qquad \qquad \qquad \qquad \qquad \qquad \quad 5\leq k\leq n-2,\\
& (2a_2-c_3-(n-6)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\
& \qquad \qquad \qquad \qquad \qquad \qquad \quad +\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t}
\end{aligned}\right.
\end{equation}
\textbf{Case 1.} $\alpha_{2s} \neq 0$ for some $s \ (2 \leq s \leq
\frac {n-3} 2)$ and $\alpha_{2i} = 0$ for $2 \leq i \leq s-1$.
Then similarly to the proof of Theorem \ref{thm41} we get
\[a_2=(2s-3) a_1,\qquad c_3 = (2s-1) a_1, \qquad c_i = a_{i-1} + a_1 \alpha_i, \qquad 4 \leq i \leq n-2s+1.\]
Consequently
we have
\[\alpha_{k}=\frac{(k-1)(2s-3)}{2(2s-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},\quad 5\leq k\leq n-2.\]
Applying formula \eqref{E:comb} inductively on $t$ we obtain
\[\alpha_{(2s-3)t+3} = (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, \qquad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-3}\right\rfloor,\]
\[\alpha_j = 0, \quad j \neq (2s-3)t+3, \quad 4 \leq j \leq n-2.\]
From the last equality of \eqref{eq4.5} we have
\[c_{n-2s+2} = \frac{1}{\alpha_{2s}}\Big((2s+1-n)a_1\alpha_{n-1} - (2s-3)a_1\sum\limits_{t=2s+1}^{n-2s+2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-2s+1} -
(2s-3)^2a_1\alpha_{n-2s}.\]
Thus, we have the cases \textit{i}) and \textit{ii}).
\textbf{Case 2.} Let $\alpha_{2i}=0$ for all $i \ (2 \leq i \leq
\frac {n-3} 2)$. Then the first four equalities of \eqref{eq4.5}
hold and from the last two equalities we have
\begin{equation}\label{eq.forodd}
\begin{aligned}
&\alpha_5(a_2-2a_1) =0, \qquad \alpha_{2s+1}(a_2-(2s-2)a_1)
=sa_2\sum\limits_{t=3}^s\alpha_{2t-1}\alpha_{2s+5-2t}, \quad 3
\leq s \leq \frac {n-3} 2. \\
&(2a_2-c_3-(n-6)a_1)\alpha_{n-1}=0.
\end{aligned}
\end{equation}
If $\alpha_{n-1}=0$, then taking $a_1=a_2=0$ and $c_3 \neq 0$, we
have a non-nilpotent pre-derivation for any values of
$\alpha_{2i+1}$, and so we have the case \textit{iii}).
If $\alpha_{n-1}\neq0$, then $c_3 = 2a_2+(n-6)a_1$ and we obtain
that
the non-nilpotency of pre-derivations depends on the parameters
$\alpha_{2i+1}$.
Let $\alpha_{2s+1}$ be the first non-vanishing parameter among
$\{\alpha_5,\alpha_7, \dots, \alpha_{n-4},\alpha_{n-2}\}$. Then we
get \[a_2 = (2s-2)a_1.\]
Applying formula \eqref{E:comb} inductively on $t$ from
\eqref{eq.forodd} we obtain
\[\alpha_{(2s-2)t+3} = (-1)^{t+1}C_{t}^{2s-1}\alpha_{2s+1}^{t}, \qquad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-2}\right\rfloor\]
and
\[\alpha_j = 0, \qquad j \neq (2s-2)t+3, \quad 4 \leq j \leq n-2.\]
Therefore, we have the case \textit{iv}).
\end{proof}
Now we give the classification of non-strongly nilpotent filiform
Leibniz algebras from the second class.
\begin{prop} Let $L(\beta_4, \beta_5, \dots, \beta_n, \gamma)$ be a filiform Leibniz algebra of the second family. If
$\beta_4 = \beta_5 = \dots =\beta_{n-1} =0$, then $L$ is
non-strongly nilpotent.
\end{prop}
\begin{proof} Analogously to the proof of Proposition \ref{prop41}.
\end{proof}
It is obvious that any algebra from the family $F_2(0,\dots, 0,
\beta_n, \gamma)$ is isomorphic to one of the algebras
$F_2(0,\dots, 0, 0, 0)$, $ F_2(0,\dots, 0, 1, 0)$, $F_2(0,\dots,
0, 0, 1)$. Note that these algebras are non-characteristically
nilpotent (see \cite{KhLO}).
Now we consider the case of $\beta_i\neq 0$ for some $i \ (4 \leq i
\leq n-1)$. Then from \eqref{eq3.8} we have that $c_2=0$.
\begin{thm}\label{SC.n.even} Let $L$ be an $n$-dimensional complex non-strongly nilpotent filiform Leibniz algebra from the family
$F_2(\beta_4,\ldots,\beta_n,\gamma)$ with $n$ even. Then $L$ is
isomorphic to one of the following algebras:
\begin{align*}
& F_2^{2s}(0,\dots, 0, \beta_{2s}, 0 \dots, 0 ,
\beta_{n-1},\beta_n,\gamma), \qquad \beta_{2s}=1, \quad 2 \leq s
\leq \frac{n-2} 2,\\
& F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0, \beta_{n-1}, \beta_n,
\gamma).
\end{align*}
\end{thm}
\begin{proof}
From Proposition \ref{prop32} we have:
\begin{equation}\label{eq4.8}
\left\{\begin{array}{ll}
(c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0,& \\[1mm]
\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, & 3\leq k\leq \frac{n-2}{2},\\[1mm]
(c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2},
& 3\leq
k\leq \frac{n-2}{2},\\[1mm]
(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, & 5\leq k\leq n-2,\\[1mm]
(b_2-(n-3)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}\\[1mm]
\qquad\qquad\qquad\qquad\qquad \ \ +
\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t},
\end{array}\right.
\end{equation}
\textbf{Case 1.} Let $\beta_4\neq 0$, then from \eqref{eq4.8} we
have
\begin{align*}
c_3&=b_2=2a_1, \qquad c_{i} =a_{i-1}+a_2\beta_i,\qquad \quad 4 \leq i \leq n-3,\\
(4-k)a_1\beta_{k}&=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\qquad \qquad \qquad 5\leq k\leq n-2,\\
(5-n)a_1\beta_{n-1}&=(c_{n-2}-a_{n-3}+a_2\beta_{n-2})\beta_{4}+a_2\sum\limits_{t=5}^{n-2}(t-2)\beta_{n-t+2}\beta_t.
\end{align*}
Since $L$ is non-strongly nilpotent, we get $a_1\neq0$ and
\[
\beta_5=-\frac{2a_2}{a_1}\beta_4^2, \qquad
\beta_{k}=\frac{(k-1)a_2}{2(4-k)a_1}\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\quad
6\leq k\leq n-2.
\]
From the isomorphism criterion given in [10, Theorem 4.4] we
have that if two algebras from the family
$F_2(\beta_4,\ldots,\beta_n,\gamma)$ are isomorphic, then there
exist $A,B,D \in \mathbb{C}$ such that
\[\beta_4'=\frac{D}{A^2}\beta_4,\qquad \beta_5'=\frac{D}{A^3}(\beta_5-\frac{2B}{A}\beta_4^2),\]
where $\beta_i$ and $\beta_i'$ are parameters of the first and
second algebras, respectively.
Putting $D=\frac{A^2}{\beta_4}$ and
$B=\frac{A\beta_5}{2\beta_4^2}$ we obtain $\beta_4'=1$,
$\beta_5'=0$. Therefore, we have shown that, if $L$ is
a non-strongly nilpotent algebra from the family
$F_2(\beta_4,\ldots,\beta_n,\gamma)$, with $\beta_4 \neq 0$, then
we may always suppose
\[\beta_4=1, \qquad \beta_5=0.\]
Moreover, from $\beta_5=-\frac{2a_2}{a_1}\beta_4^2$, we obtain
$a_2=0$, which implies $\beta_k=0$ for $6\leq k\leq n-2$ and
\[c_{n-2} = a_{n-3} + \frac{(5-n)a_1\beta_{n-1}}{\beta_4}.\]
Thus, $L$ is isomorphic to the algebra
$F_2(1,0,\ldots,0,\beta_{n-1},\beta_n,\gamma)$.
\textbf{Case 2.} Let $\beta_{2s}\neq 0$ for some $s \ (3 \leq s \leq
\frac{n-2} 2)$ and $\beta_{2i}=0$ for $2 \leq i \leq s-1$. Then we
have
\begin{align*}
b_2&=(2s-2)a_1, \quad c_3 = 2a_1, \quad c_{i} =a_{i-1}+a_2\beta_i,\quad 4 \leq i \leq n-2s+1,\\
(2s-k)a_1\beta_{k}& =\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\qquad \qquad \qquad \qquad \quad 5\leq k\leq n-2,\\
(2s+1-n)a_1\beta_{n-1}&=(c_{n-2s+2}-a_{n-2s+1}+a_2\beta_{n-2s+2})\beta_{2s}+a_2\sum\limits_{t=5}^{n-2}(t-2)\beta_{n-t+2}\beta_t.
\end{align*}
Since $L$ is non-strongly nilpotent, we get $a_1\neq0$, which
implies $\beta_5=\dots=\beta_{2s-1}=0$. Moreover, we have
\[\beta_{4s-3} =
-\frac{(2s-2)a_2}{(2s-3)a_1}\beta_{2s}^2.\]
From the isomorphic criterion of filiform Leibniz algebras of the
second class, we obtain
\[\beta_{2s}'=\frac{D}{A^{2s-2}}\beta_{2s},\qquad \beta_{4s-3}'=\frac{D}{A^{4s-2}}\left(\beta_{4s-3}-\frac{(2s-2)B}{A}\beta_{2s}^2\right).\]
Putting $D=\frac{A^{2s-2}}{\beta_{2s}}$ and
$B=\frac{A\beta_{4s-3}}{(2s-2)\beta_{2s}^2}$, we have
$\beta_{2s}'=1$, $\beta_{4s-3}'=0$. Therefore, we have shown that
if $L$ is a non-strongly nilpotent algebra from the family
$F_2(\beta_4,\ldots,\beta_n,\gamma)$ with $\beta_{2s} \neq 0$ and
$\beta_{2i} = 0$ for $2 \leq i \leq s-1$, then we may always
assume
\[\beta_{2s}=1, \quad \beta_{4s-3}=0.\]
Moreover, from $\beta_{4s-3} =
-\frac{(2s-2)a_2}{(2s-3)a_1}\beta_{2s}^2$, we obtain $a_2=0$,
which implies $\beta_k=0$ for $2s+1\leq k\leq n-2$ and
\[c_{n-2s+2} = a_{n-2s+1} + \frac{(2s+1-n)a_1\beta_{n-1}}{\beta_{2s}}.\]
Thus, $L$ is isomorphic to the algebra \[F_2^{2s}(0,\dots, 0,
\beta_{2s}, 0, \dots, 0, \beta_{n-1},\beta_n,\gamma), \quad
\beta_{2s}=1.\]
\textbf{Case 3.} Let $\beta_{2s}= 0$ for $2 \leq s \leq \frac{n-2}{2}$. Then we have
\begin{align*}
(b_2-(k-2)a_1)\beta_{k}& =\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, && 5\leq k\leq n-2,\\
(b_2-(n-3)a_1)\beta_{n-1}& =a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}.
\end{align*}
Taking $a_1=a_2=0$ and $c_3\neq 0$, we see that the previous
equalities hold for any values of $\beta_{2s+1}$. Since
$c_3\neq 0$, this algebra is non-strongly nilpotent. Therefore, we
obtain the algebra $F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0,
\beta_{n-1}, \beta_n, \gamma)$.
\end{proof}
\begin{thm} Let $L$ be an $n$-dimensional complex non-strongly nilpotent filiform Leibniz algebra from the family
$F_2(\beta_4,\ldots,\beta_n,\gamma)$ and $n$ odd. Then $L$ is
isomorphic to one of the following algebras:
\begin{align*}
&F_2^{j}(0,\dots, 0, \beta_{j}, 0, \dots, 0,
\beta_{n-1},\beta_n,\gamma), \qquad \beta_{j}=1, \quad 4 \leq j
\leq n-2,\\
& F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0, \beta_{n-2},0, \beta_n, \gamma).
\end{align*}
\end{thm}
\begin{proof}
The proof is analogous to the proofs of Theorems \ref{thm42} and
\ref{SC.n.even}.
\end{proof}
Now, we consider a Leibniz algebra $L$ from the third family
$F_3(\theta_1,\theta_2,\theta_3)$. Note that $L$ is a parametric
algebra with parameters $\theta_1,\theta_2,\theta_3$ and
$\alpha_{i,j}^k$. The parameters $\alpha_{i,j}^k$ come from the
multiplications $[e_i, e_j]$ for $i < j \leq n-1$. In the
case of $\theta_1 = \theta_2 = \theta_3 =0$, the algebra $L$ is a Lie
algebra.
In the next proposition we show that the strong nilpotency of
$L(\theta_1,\theta_2,\theta_3)$ is equivalent to the strong
nilpotency of $L(0,0,0)$.
\begin{prop} An algebra $L(\theta_1,\theta_2,\theta_3)$ from the family $F_3(\theta_1,\theta_2,\theta_3)$ is strongly nilpotent if and only if the algebra
$L(0,0,0)$ is strongly nilpotent.
\end{prop}
\begin{proof} Note that the parameters $\theta_1,\theta_2,\theta_3$ appear only in the multiplications $[e_1,e_1], [e_1,e_2], [e_2,e_2]$.
Since $[P(x),y],\ [x,P(y)], \ [x,y]\in L^2$ and $e_1,e_2\not \in
L^2$, the parameters $\theta_i$ do not take part in the identity
\[P([[x,y],z])=[[P(x),y],z]+[[x,P(y)],z]+[[x,y],P(z)],\quad x,y,z \in L.\]
Therefore, the spaces of pre-derivations of algebras
$L(\theta_1,\theta_2,\theta_3)$ and $L(0,0,0)$ coincide.
\end{proof}
\textbf{Acknowledgments.} This work was supported by Agencia Estatal de Investigaci\'on (Spain), grant MTM2016-79661-P (European FEDER support included, UE).
The second named author was supported
by IMU/CDC Grant (Abel Visiting Scholar Program), and he would
like to acknowledge the hospitality of the University of Santiago
de Compostela (Spain).
\end{document} |
\begin{document}
\title{Local middle dimensional symplectic non-squeezing in the analytic setting}
\author{Lorenzo Rigolli\footnote{This work is partially supported by the DFG grant AB 360/1-1.}}
\maketitle
\begin{abstract}
We prove the following middle-dimensional non-squeezing result for analytic symplectic embeddings of domains in $\mathbb{R}^{2n}$.\\ Let $\varphi: D \hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding of a domain $D \subset \mathbb{R}^{2n}$ and let $P$ be a symplectic projector onto a linear $2k$-dimensional symplectic subspace $V\subset \mathbb{R}^{2n}$. Then there exists a positive function $r_0:D\rightarrow (0,+ \infty)$, bounded away from $0$ on compact subsets $K \subset D$, such that the inequality $Vol_{2k}(P\varphi (B_r(x)),\omega ^k _{0|V})\geq \pi^{k} r^{2k}$ holds for every $x \in D$ and for every $r < r_0(x)$. This claim will be deduced from an analytic middle-dimensional non-squeezing result (stated by considering paths of symplectic embeddings) whose proof will be carried out by taking advantage of a result of \'{A}lvarez Paiva and Balacheff.
\end{abstract}
\section*{Introduction}
Let $\omega_0=\sum_{i=1}^n d x_i \wedge d y_i$ be the standard symplectic form on $\mathbb{R}^{2n}$, $B_R$ the ball of radius $R$
\begin{align*}
B_R = \{ (x_1,y_1,\ldots,x_n,y_n) \in {\mathbb{R}}^{2n} \ | \sum_{i=1} ^n x_i^2 + \sum_{i=1} ^n y_i^2 < R^2 \},
\end{align*}
and $Z_r$ the cylinder
\begin{align*}
Z_r = \{ (x_1,y_1,\ldots,x_n,y_n) \in {\mathbb{R}}^{2n} \ | x_1 ^2 + y_1 ^2 < r^2 \}.
\end{align*}
Gromov's non-squeezing theorem (see \cite{Gro85} or \cite{HZ94}) states that if $\varphi (B_R) \subset Z_r$, where $\varphi$ is a symplectic (open) embedding, then $r\geq R$.
Symplectic diffeomorphisms are volume preserving, since they preserve the form ${\omega_0}^n$, which is a multiple of the standard volume form, but the non-squeezing theorem shows that, unlike general volume preserving diffeomorphisms, they also exhibit two-dimensional rigidity phenomena.\\
Since symplectic diffeomorphisms preserve the $2k$-form ${\omega_0}^k$ for every integer $1\leq k \leq n$, after Gromov's pioneering result one may ask whether there are also middle dimensional rigidity phenomena. Some work in this direction, concerning symplectic embeddings of polydisks, has been done by Guth. In \cite{Gut08} he considers symplectic embeddings of a polydisk $\Gamma :=B_2(R_1) \times \ldots \times B_2(R_n)$ with $R_1\leq \ldots \leq R_n$ into a polydisk $\Gamma ' := B_2(R_1 ') \times \ldots \times B_2(R_n ')$ with $R_1' \leq \ldots \leq R_n '$. By Gromov's non-squeezing theorem and the conservation of volume under symplectic diffeomorphisms, such symplectic embeddings may exist only if $R_1 \leq R_1 '$ and $R_1 \ldots R_n \leq R_1 ' \ldots R_n '$. On the other hand, Guth proved that there is a constant $C(n)$ such that these embeddings exist whenever the inequalities $C(n) R_1 \leq R_1 '$ and $C(n) R_1 \ldots R_n \leq R_1 ' \ldots R_n '$ hold. In particular this result excludes middle dimensional non-squeezing phenomena in the case of polydisk embeddings.\\
In this paper we proceed in a different way, namely we keep the ball as the domain of the symplectic embeddings and, in order to search for middle dimensional non-squeezing phenomena, we follow the strategy pursued in \cite{AM13}.\\
First, as in \cite{EG91}, we introduce an alternative formulation of Gromov's theorem, which is that the two-dimensional shadow of the image of a radius $R$ ball in $\mathbb{R}^{2n}$ under a symplectic diffeomorphism has area at least $\pi R^2$.
More precisely the claim is that every symplectic embedding $\varphi :B_R \hookrightarrow \mathbb{R}^{2n}$ satisfies the inequality
\begin{align}
area( P \varphi (B_R), \omega_{0|V}) \geq \pi R^2,
\label{ineq1}
\end{align}
where $P$ denotes the symplectic projector onto a symplectic plane $V$, i.e. the projector along the symplectic orthogonal complement of $V$.\\
This second formulation easily implies the classic one. On the other hand, if it were $area( P \varphi (B_R),\omega_{0|V}) < \pi R^2$, then, by a theorem of Moser's (see \cite{Mos65} or \cite{HZ94}), there would exist a smooth area preserving diffeomorphism\\ $\phi: P \varphi (B_R) \rightarrow B^2_r \cap V$ for some $r< R$, and then the symplectic embedding $(\phi \times id_{{V}^{\bot}}) \circ \varphi$ mapping $B_R$ into $Z_r$ would violate the classic formulation of Gromov's theorem.\\
The alternative formulation of Gromov's theorem has a natural generalization to higher dimensional shadows of a symplectic ball.\\
In other words, if $V$ is a $2k$-dimensional symplectic subspace of $\mathbb{R}^{2n}$ and $P$ is the symplectic projector onto $V$, we may ask whether it is true that
\begin{align}
Vol_{2k} ( P \varphi(B_R), \omega_{0|V} ^k) \geq \pi ^k R^{2k},
\label{ineq3}
\end{align}
for every symplectic embedding $\varphi: B_R \hookrightarrow \mathbb{R}^{2n}$.\\
If $k=1$ or $k=n$ the inequality holds respectively by the non-squeezing theorem and by the volume preserving property of symplectic diffeomorphisms. So we are interested only in the middle dimensional case when $2\leq k \leq n-1$.\\
If the symplectic diffeomorphism $\varphi$ is a linear map, an affirmative answer to the middle dimensional non-squeezing question has been given in \cite{AM13}. Nevertheless, in the same paper Abbondandolo and Matveyev show that if $P$ is the symplectic projector onto a $2k$-dimensional symplectic subspace with $2\leq k \leq n-1$, then for every $\epsilon >0$ there exists an open symplectic embedding $\varphi:B_1 \hookrightarrow \mathbb{R}^{2n}$ such that $Vol_{2k} ( P \varphi(B_1), \omega_{0|V} ^k) < \epsilon$.
Since this counterexample deforms the unit ball very strongly, one may ask how far the ball can be deformed before the middle dimensional non-squeezing loses its validity, and whether the middle dimensional non-squeezing holds locally. In \cite{AM13} the authors give two different formulations of the local question.\\
The first one asks whether, for a fixed symplectic embedding $\varphi:D \hookrightarrow \mathbb{R}^{2n}$ of a domain $D\subset \mathbb{R}^{2n}$, the inequality
\begin{align}
\label{eq7}
Vol_{2k} ( P \varphi(B_R(x)), \omega_{0|V} ^k) \geq \pi ^k R^{2k}
\end{align}
holds for any $x \in D$ and for $R$ positive and small enough.\\
The second formulation is the following.\\ Let us fix a path of symplectic embeddings of the unit $2n$-dimensional ball
\begin{align*}
\varphi_t :B_1 \hookrightarrow \mathbb{R}^{2n} \ \ t \in [0,1],
\end{align*}
such that $\varphi_0$ is linear (i.e. it is the restriction to $B_1$ of a linear symplectomorphism).\\
We would like to know whether there exists a positive number $t_0 \leq 1$ such that
\begin{align}
Vol_{2k} (P \varphi_t (B_1), \omega_{0|V} ^k) \geq \pi^k, \textrm{ \ for every \ } 0\leq t < t_0.
\label{ineq5}
\end{align}
The second formulation implies the first one by taking the path of symplectic embeddings
\begin{align}
\varphi_t (y):=
\left\{ \begin{array}{l}
\dfrac{1}{t} \big( \varphi (x+t y) - \varphi (x) \big) \ \ \ \textrm{ if } t \in ]0,1],\\
D \varphi (x) [y] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textrm{ if } t=0,
\end{array}\right.
\end{align}
in fact
\begin{align*}
Vol_{2k}(P \varphi_t (B_1(0)),\omega_{0|V} ^k)=Vol_{2k}\Big(P \tfrac{1}{t} \varphi (B_t(x)),\omega_{0|V} ^k\Big)=\dfrac{Vol_{2k}(P \varphi (B_t(x)),\omega_{0|V} ^k)}{t^{2k}}.
\end{align*}
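Note that each $\varphi_t$ is indeed a symplectic embedding, since its differential $D\varphi_t(y)=D\varphi(x+ty)$ is a symplectic map for every $y\in B_1$, and $\varphi_0=D\varphi(x)$ is linear.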
In this setting Abbondandolo, Bramham, Hryniewicz and Salom\~ao \cite{ABHS15} have recently proved that if the symplectic projection is onto a $4$-dimensional symplectic subspace $V$, then both these local non-squeezing results hold (in the first formulation the diffeomorphism is required to be $C^3$).
In this paper we address the same question, but we do not impose any assumption on the dimension of $V$; instead we require an analyticity hypothesis. First we focus on the second local formulation of the middle dimensional non-squeezing and we prove its validity under the additional assumption that the path of embeddings $t \mapsto \varphi_t$ is analytic in $t$, i.e. for every $x \in \overline{B_1}$ the function $t\mapsto \varphi_t(x)$ is analytic.
\begin{teo}[Analytic local non-squeezing]
\label{Teor2}
Let $[0,1] \ni t \mapsto \varphi_t$ be an analytic path of symplectic embeddings $\varphi_t:\overline{B_1} \hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_0$ is linear. Then the middle dimensional non-squeezing inequality
\begin{eqnarray*}
Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k) \geq \pi^k
\end{eqnarray*}
holds for $t$ small enough.
\end{teo}
To prove this theorem we need some ingredients.\\
In Section \ref{Sect1} we recall some facts about contact geometry together with some results about the minimal action of a Reeb orbit in a contact manifold and we introduce Zoll contact manifolds (also known as \textit{regular contact type manifolds}), namely manifolds with the property that all Reeb orbits are periodic with the same period.\\
In Section \ref{Sect2} we prove a weaker version of the main theorem in \cite{BP14}, which says that if a constant volume deformation of the unit ball does not start tangent to all orders to a deformation by convex domains with Zoll boundaries (i.e. the deformation is \textit{not formally trivial}), then the minimal action $A_{\min}$ on the ball is strictly larger than the one of its small deformations.\\
In Section \ref{Sect4} we will see that this result implies the validity of the non-squeezing inequality \eqref{ineq5} for not formally trivial deformations of the unit ball.\\
On the other hand, in the case of a \textit{formally trivial deformation} we will have to proceed in a different way. Namely, using some results from Section \ref{Sect3}, we will prove that if the deformation of the ball is analytic and formally trivial, then the function $t \mapsto Vol_{2k} (P \varphi_t(B_1),\omega_{0|V} ^k)$ is analytic and has all its derivatives vanishing at $t=0$. This will enable us to deduce that the function above is constant and consequently that equality in \eqref{ineq5} holds.\\
Using Theorem \ref{Teor2} we will deduce the local non-squeezing formulation for any fixed analytic symplectic embedding. Moreover, in this latter setting we will prove that, on compact subsets of $\mathbb{R}^{2n}$, the threshold radius below which the estimate \eqref{eq7} holds is bounded away from $0$. More precisely we shall prove the following result.
\begin{teo}
\label{Teor5}
Let $\varphi: D \hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding of a domain $D \subset \mathbb{R}^{2n}$. Then there exists a function $r_0:D\rightarrow (0,+ \infty)$ such that the inequality $Vol_{2k}(P \varphi (B_r(x)),\omega ^k _{0|V})\geq \pi^{k} r^{2k}$ holds, for every $x \in D$ and for every $r < r_0(x)$. Moreover $r_0$ is bounded away from $0$ on compact subsets $K \subset D$.
\end{teo}
{\textbf{Acknowledgments.}}
I would like to warmly thank Alberto Abbondandolo for all the precious help and advice he gave me concerning this paper.
\section{Zoll contact manifolds and minimal action}
\label{Sect1}
Let us start by recalling some basic facts in contact geometry.\\
A $1$-form $\alpha$ on a $(2n-1)$-dimensional manifold $M$ is a \textit{contact form} if $\alpha \wedge (d \alpha)^{n-1}$ is a volume form. In this case $(M,\alpha)$ is called a \textit{contact manifold} and the volume of $M$ with respect to the volume form induced by $\alpha$ is denoted by $Vol(M,\alpha)$.\\ Moreover the contact form $\alpha$ induces a vector field $R_\alpha$ on $M$, called the Reeb vector field of $\alpha$, which is determined by the requirements:
\begin{eqnarray*}
i_{R_{\alpha}} d \alpha =0 \qquad \textrm{and} \qquad \alpha (R_\alpha)=1.
\end{eqnarray*}
The \textit{action} $A (\gamma)$ of a periodic Reeb orbit $\gamma$ on a contact manifold $(M,\alpha)$ is defined as
\begin{align*}
A (\gamma) :=\int_{\gamma} \alpha \in \mathbb{R},
\end{align*}
and it coincides with the period of $\gamma$.
\begin{defin}
Given any contact manifold $(M,\alpha)$ with at least one closed Reeb orbit we define
\begin{align*}
A_{\min}(M,\alpha):= \min_{\gamma} \{ A( \gamma) \ | \ \gamma \textrm{ is a closed Reeb orbit on} \ (M,\alpha) \}.
\end{align*}
\end{defin}
Both the volume and the function $A_{\min}$ are invariant under strict contactomorphisms. Indeed we have the following simple result.
\begin{prop}
Let $(M,\alpha)$ and $(N,\beta)$ be two $(2n-1)$-dimensional contact manifolds and $\phi:M \rightarrow N$ a strict contactomorphism (i.e. $\phi^* \beta = \alpha$).
Then
\begin{enumerate}
\item to each closed Reeb orbit $\gamma$ of $(M,\alpha)$ corresponds a closed Reeb orbit $\phi \circ \gamma$ of $(N,\beta)$,
\item $A(\gamma)= A(\phi \circ \gamma)$,
\item $A_{\min}(M,\alpha)=A_{\min}(N,\beta)$,
\item $Vol(M,\alpha)= Vol(N,\beta)$.
\end{enumerate}
\end{prop}
Another straightforward fact we will use is the following.
\begin{oss}
\label{oss1}
Let $f:S^{2n-1}\rightarrow (0,+\infty)$ be a $C^1$ function and define the set $M_f:=\{ f(x) x \ | \ x\in S^{2n-1} \} \subset \mathbb{R}^{2n}$. Then $(S^{2n-1}, f^2 \lambda_{0|S^{2n-1}})$ and $(M_f,\lambda_{0|M_f})$ are strictly contactomorphic.
\end{oss}
Indeed the radial projection $\theta:(S^{2n-1}, f^2 \lambda_{0|S^{2n-1}}) \rightarrow (M_f,\lambda_{0|M_f})$ defined by $\theta(x)=f(x)x$ is a strict contactomorphism:
\begin{align*}
& \theta^* (\lambda_{0|M_f})(x)[v]=\lambda_{0|S^{2n-1}} (\theta(x)) [ d \theta(x)[v]]=\lambda_{0|S^{2n-1}} (f(x)x)[f(x)[v]+ df(x)[v]x]=\\
&= f(x)\lambda_{0|S^{2n-1}} (x)[f(x)[v]]+ f(x)df(x)[v]\lambda_{0|S^{2n-1}} (x)[x]={f(x)}^2 \lambda_{0|S^{2n-1}}(x)[v].
\end{align*}
Now we introduce a special type of contact form.
\begin{defin}
A contact form on a manifold $M$ is \textit{Zoll} (or \textit{regular}) if its Reeb flow is periodic and all the Reeb orbits have the same period, and hence the same action.
\end{defin}
For example, the contact form $\lambda _{0|S^{2n-1}}$, induced on the unit sphere $S^{2n-1}$ in $\mathbb{R}^{2n}$ by the standard Liouville 1-form $\lambda_0 := \frac{1}{2}\sum_{i=1}^n (x_i\, d y_i - y_i\, d x_i)$, is Zoll.\\
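For instance, along the Hopf circle $\gamma(t)=(\cos t,\sin t,0,\dots,0)$, $t\in[0,2\pi]$, lying in the $(x_1,y_1)$-plane, one computes
\begin{align*}
A(\gamma)=\int_{\gamma} \lambda_0=\frac{1}{2}\int_0 ^{2\pi}\big(\cos ^2 t+\sin ^2 t\big)\,dt=\pi,
\end{align*}
and every Reeb orbit of $\lambda _{0|S^{2n-1}}$ is the image of $\gamma$ under a unitary transformation of $\mathbb{R}^{2n}\cong \mathbb{C}^n$, so all the orbits are closed with the same action and $A_{\min}(S^{2n-1},\lambda _{0|S^{2n-1}})=\pi$.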
Later on we will consider two different kinds of deformations of a contact form: \textit{formally trivial} and \textit{not formally trivial}.
\begin{defin}
A smooth deformation $\alpha_t$, $t \in [0,t_0)$, of a contact form $\alpha_0$ is \textit{trivial} if there exists a smooth real valued function $r(t)$ and an isotopy $\phi_t$ such that $\alpha_t= r (t) \phi_t ^* \alpha_0$.
A deformation $\alpha_t$ is \textit{formally trivial} if for every non-negative integer $m$ there exists a trivial deformation $\alpha_t ^{(m)}$ that has order of contact $m$ with $\alpha_t$ at $t=0$. Otherwise the deformation is \textit{not formally trivial}.
\end{defin}
If instead of deformations of contact forms we choose to consider deformations of convex domains, we give the following definition.
\begin{defin}
Consider a smooth convex domain $C_0 \subset \mathbb{R}^{2n}$ with the standard Liouville 1-form ${\lambda_0}_{\vert \partial C_0}$.
A smooth deformation $C_t$ of $C_0$ is \textit{trivial} (resp. \textit{formally trivial}, resp. \textit{not formally trivial}) if $\theta_t ^* ( \lambda_{0|\partial C_t})$ is trivial (resp. formally trivial, resp. not formally trivial), where $\theta_t:S^{2n-1}\rightarrow \partial C_t$ is the radial projection.
\end{defin}
It is a result by Weinstein that trivial deformations of a Zoll contact form can be characterized in the following way.
\begin{prop}{\upshape{\cite{Wei74}}}
Let $\alpha_t$, $t\in [0,t_0)$, be a smooth deformation of a Zoll contact form $\alpha_0$. The deformation is trivial if and only if $\alpha_t$ is a Zoll contact form for every $t\in [0,t_0)$.
\end{prop}
In our case it will turn out that every contact deformation can be reduced to a normal form.
\begin{defin}
Let $\alpha_t = \rho_t \alpha_0$ be a smooth deformation of the Zoll contact form $\alpha_0$, where $\rho_t$ is a smooth family of positive functions on $M$, and let $m$ be a non negative integer. The deformation $\alpha_t$ is in \textit{normal form up to order $m$} if
\begin{align*}
\alpha_t = (1+ t \mu ^{(1)} + \ldots + t^m \mu ^{(m)} + t^{m+1} r_t) \alpha_0
\end{align*}
where, for $i=1,\ldots,m$, the functions $\mu ^{(i)}$ are integrals of motion for the Reeb flow of $\alpha_0$ (i.e. they are constant on the orbits of that flow) and $r_t$ is a smooth function on $M$ depending smoothly on the parameter $t$.
\end{defin}
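For instance, on $(S^{2n-1},\lambda _{0|S^{2n-1}})$ the Reeb flow rotates all the coordinate planes $(x_j,y_j)$ simultaneously, so the functions $x_j^2+y_j^2$, $j=1,\dots,n$, and any function of them, are integrals of motion for this flow.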
Using a technique known as the \textit{method of Dragt and Finn} (see \cite{DF76} and \cite{Fin86}), which consists of constructing the required isotopy as a composition of isotopies $\phi_t ^{(m)}$ which are flows of some particular vector fields, \'{A}lvarez Paiva and Balacheff proved the following result.
\begin{teo}{\upshape{\cite{BP14}}}
\label{Teor1}
Let $\alpha_t = \rho_t \alpha_0$ be a smooth deformation of a Zoll contact form $\alpha_0$, where $\rho_t$ is a smooth family of real valued functions on $M$ with $\rho_0=1$. Given a non negative integer $m$, there exists a contact isotopy $\phi_t ^{(m)}$ such that ${\phi_t ^{(m)}}^*\alpha _t$ is in normal form up to order $m$.
\end{teo}
\proof (Idea)
The theorem is proved by induction. The case $m=0$ follows from the Taylor expansion $\rho_t = 1+t r_t$ around $t=0$. Supposing that $\alpha_t$ is already in normal form up to order $m-1$, the point is to find a function $h_m$ in such a way that the flow $\phi_{t,{h_m}}$ of the Hamiltonian vector field $X_{h_m}$ determines an isotopy $t\mapsto \phi_{t,{h_m}}$ for which $\phi_{t,{h_m}}^* \alpha_t$ is in normal form up to order $m$.
\endproof
This proposition will be crucial in the next section:
\begin{prop}{\upshape{\cite{BP14}}}
\label{Prop3}
Let $(M,\alpha)$ be a Zoll contact manifold and let $\rho: M \rightarrow \mathbb{R}^+$ be a smooth positive function invariant under the Reeb flow of $\alpha$. Then $A_{\min} (M,\rho \alpha ) \leq A_{\min} (M,\alpha) \min \rho$.
\end{prop}
\proof
This essentially follows from the fact that if $u \in M$ is a minimum point of $\rho$, then the Reeb orbit $\gamma$ of $(M,\alpha)$ passing through $u$ is also a Reeb orbit of $(M,\rho \alpha)$. Once this has been checked by a straightforward computation, we have that the action of $\gamma$ in $(M,\rho \alpha)$ is
\begin{align*}
\int_{\gamma} \rho \alpha = \min \rho \int_{\gamma} \alpha = A_{\min} (M,\alpha) \min \rho
\end{align*}
and the proof is complete.
\endproof
We shall make use of the following classical result.
\begin{teo}{\upshape{\cite{Rab78}} \upshape{\cite{Wei78}}}
\label{Teor4}
Let $C$ be a smooth convex bounded domain of $\mathbb{R}^{2n}$.
The contact manifold $(\partial C,{\lambda_0}_{|\partial C})$ admits at least one periodic Reeb orbit.
\end{teo}
An important fact is that the function $A_{\min}$ coincides with some well known symplectic capacities, such as the Hofer--Zehnder and Ekeland--Hofer capacities.
\begin{teo}{\upshape{\cite{EH89}} \upshape{\cite{Vit89}}}
\label{Teor7}
Let $C$ be a smooth convex bounded domain of $\mathbb{R}^{2n}$. There exists a distinguished closed characteristic $\overline{\gamma} \subset \partial C$ such that $A_{\min}(\partial C,\lambda_0) = A( \overline{\gamma})$. Moreover, restricting to smooth convex domains of $\mathbb{R}^{2n}$, the function $A_{\min}$ is a symplectic capacity that we will denote by $c$.
\end{teo}
Due to the two theorems above, the function $A_{\min}(\partial C, {\lambda_0}_{| \partial C})$ is well defined.\\
Besides the usual properties of a capacity, by carefully choosing one among the equivalent definitions of $c$, the following result can be proved.
\begin{prop}{\upshape{\cite{AM15}}}
\label{Prop7}
Let $C \subset \mathbb{R}^{2n}$ be a smooth convex bounded domain and $P$ the symplectic projector onto a symplectic linear subspace $V \subset \mathbb{R}^{2n}$. Then $c (PC,{\omega_0}_{|V}) \geq c (C,\omega_0)$.
\end{prop}
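For instance, if $V$ is $J$-invariant and $C=B_1$, then the symplectic projector onto $V$ coincides with the orthogonal one, $P B_1=B_1 \cap V$ is the unit ball of $V$, and both sides of the inequality are equal to $\pi$; hence the inequality is sharp.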
\section{Deformations of $S^{2n-1}$}
\label{Sect2}
In this section we would like to obtain some information on how $A_{\min}$ behaves under a contact deformation of the unit sphere.
The results we are going to state hold for an arbitrary Zoll contact manifold, but in this paper we are interested only in deformations of the standard contact form on the sphere $S^{2n-1}$, so we can simplify the proof of the Lipschitz continuity of $A_{\min}$, which in the general case relies on a result from \cite{Gin87}.
\begin{lem}
\label{Lem3}
Fix two real numbers $0<\delta<\Delta<\infty$ and consider the family $\mathcal{C}_{\delta, \Delta}$ of the convex domains $C \subset \mathbb{R}^{2n}$ which satisfy the $(\delta,\Delta)$-pinching condition $B_{\delta} \subset C \subset B_{\Delta}$. Every symplectic capacity $c:\mathcal{C}_{\delta, \Delta} \rightarrow \mathbb{R}$ is Lipschitz continuous with respect to the Hausdorff distance.
\end{lem}
\proof
Let us take two elements $C,D \in \mathcal{C}_{\delta, \Delta}$ and let $d$ be their Hausdorff distance.\\
By assumption
\begin{eqnarray*}
\delta B=B_{\delta} \subset C,D \subset B_{\Delta}=\Delta B,
\end{eqnarray*}
hence
\begin{eqnarray*}
C \subset D + d B \subset D + \dfrac{d}{\delta} D= \left(1 + \dfrac{d}{\delta}\right) D,
\end{eqnarray*}
and by the monotonicity and conformality properties of symplectic capacities
\begin{eqnarray*}
c(C) \leq \left(1+ \dfrac{d}{\delta}\right)^2 c(D) = \left(1 + 2\dfrac{d}{\delta}+\dfrac{d ^2}{\delta ^2} \right) c(D),
\end{eqnarray*}
therefore
\begin{eqnarray*}
c(C)-c(D) \leq d \left(\dfrac{2}{\delta} + \dfrac{d}{\delta ^2}\right) c(D).
\end{eqnarray*}
Because of the pinching condition we have $c(D)\leq c(\Delta B)=\Delta^2 \pi$ and $d\leq \Delta$, thus there exists a fixed real number $M>0$ such that
\begin{eqnarray*}
c(C)-c(D) \leq d M.
\end{eqnarray*}
If $c(C) \geq c(D)$ we have $c(C)-c(D)=|c(C)-c(D)|$ and the claim follows; otherwise, if $c(C) < c(D)$, we repeat the same proof switching the roles of $C$ and $D$.
\endproof
\begin{lem}
\label{Lem2}
There exists a small open neighbourhood $U$ of zero in the Banach space $C^2 (S^{2n-1})$ such that if $f \in U$, the map $f \mapsto A_{\min} (S^{2n-1}, (1+ f) {\lambda_0}_{\vert S^{2n-1}})$ is Lipschitz continuous on $U$ with respect to the $C^0$-topology.
\end{lem}
\proof
Let us set $M_{\sqrt{1+f}}:=\{ \sqrt{1+f(x)} x \ | \ x\in S^{2n-1} \} \subset \mathbb{R}^{2n}$.\\
The map $A_{\min} (S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is well defined because, as observed in Remark \ref{oss1}, looking for a periodic orbit of $(S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is the same as looking for one of $(M_{\sqrt{1+f}}, \lambda_{0\vert M_{\sqrt{1+f}}})$, and that exists because a $C^2$-small deformation of $S^{2n-1}$ still bounds a convex domain of $\mathbb{R}^{2n}$.\\
The map $f \mapsto A_{\min}(S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is the composition of the maps\\ $f \mapsto M_{\sqrt{1+f}}$ and $M_{\sqrt{1+f}} \mapsto A_{\min} (M_{\sqrt{1+f}},\lambda_{0\vert M_{\sqrt{1+f}}})$, the first of which is clearly Lipschitz from the $C^0$-distance to the Hausdorff distance. So in order to prove the Lipschitz regularity result we need to show that the minimal action (which is a capacity) of a periodic orbit on a convex domain whose boundary is close to $S^{2n-1}$, is Lipschitz continuous with respect to the Hausdorff distance. But this follows from Lemma \ref{Lem3}.
\endproof
The next theorem is a weaker version, sufficient for our purposes, of the one in \cite{BP14}, which holds for every Zoll contact manifold. The proof is the same, except that in our setting we do not need to use a stronger result about Lipschitz continuity that generalizes Lemma \ref{Lem2}.
\begin{teo}
\label{Teor3}
Consider a domain $U \subset \mathbb{R}^{2n}$ and a smooth simple curve $[0,1] \ni t \mapsto y(t) \in U$ starting at $y(0)=y_0$.
Let $(S^{2n-1},\mu_{t,y(t)})$, with $t \in [0,t_0)$, be a smooth constant volume deformation of the Zoll contact manifold $(S^{2n-1},\mu_{0,y_0}:=\lambda_{0\vert S^{2n-1}})$. If it is not formally trivial, the function $t\mapsto A_{\min} (S^{2n-1},\mu_{t,y(t)})$ attains a strict local maximum at $t=0$.
\end{teo}
\proof
To simplify the notation we will denote $\mu_{t,y(t)}$ by $\mu_t$ and, since $y(t)$ depends smoothly on $t$, we will regard everything as depending only on $t$. The proof is carried out in four steps.
\begin{enumerate}[1)]
\item First we consider the form $(1 + t \nu_t + t^m r_t) \mu_0$, where $m>1$ and both $\nu_t$ and $r_t$ are smooth functions on $S^{2n-1}$ depending smoothly on $t$. By Lemma \ref{Lem2}, the function $f\mapsto A_{\min} (S^{2n-1},(1+ f)\mu_0)$ is Lipschitz if $f$ lies in a small enough $C^2$-neighbourhood of zero in $C^{\infty} (S^{2n-1})$; hence, for $t\rightarrow0$,
\begin{align*}
A_{\min} (S^{2n-1},(1 + t \nu_t + t^m r_t) \mu_0)=A_{\min} (S^{2n-1},(1+ t \nu_t)\mu_0) + O (t^m).
\end{align*}
\item Let $(1+t^m \rho + t^{m+1} r_t) \mu_0$ be a deformation of $\mu_0$ and $\overline{\rho}$ the function obtained by averaging $\rho$ along the orbits of the Reeb vector field of $\mu_0$
\begin{align*}
\overline{\rho}(x):=\dfrac{1}{T}\int_0 ^T \rho (\varphi_t (x))dt,
\end{align*}
where $T$ is the common period of the periodic orbits of the Reeb flow $\varphi_t$.
According to the induction step of the proof of Theorem \ref{Teor1}, there exists a contact isotopy $\phi_t ^{(m)} :S^{2n-1} \rightarrow S^{2n-1}$ such that
\begin{align*}
\phi_t ^{(m)*}(1+t^m \rho + t^{m+1} r_t) \mu_0=(1+t^m \overline{\rho} + t^{m+1} r' _t) \mu_0,
\end{align*}
where $r'_t$ is a smooth function depending smoothly on $t$.
\item If $(1+t^m \rho + t^{m+1} r_t) \mu_0$ is a smooth constant volume deformation and $\overline{\rho}$ is not identically zero, then $A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0) < A_{\min}(S^{2n-1},\mu_0)$ for $t\neq 0$ small enough.\\
To prove this claim, first note that from 2) and 1) it follows that
\begin{align*}
A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)=A_{\min}(S^{2n-1},(1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0)=
\end{align*}
\begin{align}
\label{eq3}
= A_{\min}(S^{2n-1},(1+t^m \overline{\rho}) \mu_0) + O({t}^{m+1}). \quad \qquad \qquad \qquad \qquad \qquad
\end{align}
Since $\overline{\rho}$ is an integral of motion for the Reeb flow of $\mu_0$ and $m$ is a positive integer, we have that $(1 + t^m \overline{\rho})$ is a positive (for small $t$) integral of motion of $\mu_0$, thus Proposition \ref{Prop3} implies that
\begin{align}
\label{eq4}
A_{\min}(S^{2n-1},(1 + t^m \overline{\rho}) \mu_0)\leq (1+t^m \min \overline{\rho}) A_{\min}(S^{2n-1},\mu_0).
\end{align}
The deformation $(1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0$ has constant volume because contact isotopies preserve the volume. By the properties of the exterior derivative
\begin{align*}
& Vol (S^{2n-1}, \mu_0) =Vol (S^{2n-1}, (1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0) =\\
&= \int_{S^{2n-1}} {(1+t^m \overline{\rho} + t^{m+1} r'_t)}^{n} \mu_0 \wedge {d\mu_0}^{n-1}=\\
& = Vol (S^{2n-1}, \mu_0) + n t^m \int_{S^{2n-1}} \overline{\rho} \mu_0 \wedge \ {d\mu_0}^{n-1} + O({t}^{m+1}),
\end{align*}
and thus the integral of $\overline{\rho}$ over $S^{2n-1}$ is zero. Therefore, if $\overline{\rho}$ is not identically zero, its extrema must have opposite signs and hence its minimum must be negative. Putting this fact together with \eqref{eq3} and \eqref{eq4}, we deduce that the function $t\mapsto A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)$ attains a strict maximum at $t=0$, namely
\begin{align*}
A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)<A_{\min}(S^{2n-1},\mu_0), \ \ \textrm{for} \ \ t>0.
\end{align*}
\item We are finally ready to prove the theorem. Let us consider a constant volume deformation $\mu_t$ of the Zoll contact form $\mu_0$. By Gray's stability theorem we can assume that the contact deformation has the form\\ $\mu_t= \rho_t \mu_0$. Expanding $\rho_t$ around $t=0$, we obtain
\begin{align*}
\mu_t = (1+ t \rho_{(1)} + t^2 r_t) \mu_0,
\end{align*}
where $\rho_{(1)} = {\dfrac{d \rho_t}{dt}}|_{t=0}$ and $r_t$ is a smooth function depending on $t$.\\
By 3), if the average $\overline{\rho_{(1)}}$ is not identically zero, then $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$.\\
Otherwise, if $\overline{\rho_{(1)}}$ is identically zero, by 2) there exists a contact isotopy $\phi_t^{(2)}$ such that $\phi_t ^{(2)*} \mu_t = (1+t^2 r'_t) \mu_0$. Since $\phi_t^{(2)}$ is a contact isotopy, then $(1+t^2 r'_t) \mu_0$ is also a constant volume smooth deformation of $\mu_0$ and $A_{\min} (S^{2n-1},(1+t^2 r'_t) \mu_0) =A_{\min} (S^{2n-1},\mu_t)$, so we can rewrite $\mu_t = (1+ t^2 r'_t) \mu_0$ and start anew.\\
If we repeat this process an arbitrary number of times, we see that either $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$ or that for any positive integer $m$, there exist a contact isotopy $\phi_t ^{(m)}$ and a smooth function $\nu_t ^{(m)}$ on $S^{2n-1}$ depending smoothly on the parameter $t$, such that $\phi_t ^{(m)*} \mu_t = (1 + t^{m} \nu_t ^{(m)} ) \mu_0$. In other words, either $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$ or the deformation $\mu_t$ is formally trivial.
\end{enumerate}
\endproof
\section{Analyticity of the volume of a projection}
\label{Sect3}
Our next goal is to prove that the fixed domain formulation of the local middle dimensional non-squeezing theorem holds if we consider an analytic path of symplectic embeddings.\\
To do this we need a result, whose proof relies on calculations made in order to prove Theorem 3 of \cite{AM13}.
\begin{prop}
\label{Prop1} Let $U\ni y_0$ be a domain of $\mathbb{R}^{n}$ and $[0,1] \times U \ni (t,y) \mapsto \varphi_{t,y}$ an analytic map such that $\varphi_{t,y}$ are embeddings of the unit $n$-dimensional ball $\varphi_{t,y}:\overline{B_1} \hookrightarrow \mathbb{R}^{n}$, with $\varphi_{0,y_0}$ linear. Moreover, let $P:\mathbb{R}^n \rightarrow V$ be the orthogonal projector onto a $k$-dimensional linear subspace $V \subset \mathbb{R}^n$ and $\rho$ a constant $k$-volume form on $V$. Then the function $(t,y) \mapsto Vol_{k} (P \varphi_{t,y} (B_1),\rho)$ is analytic in a neighbourhood of $(0,y_0)$ small enough.
\end{prop}
In the proof we will use the following lemma.
\begin{lem} Assume the hypotheses of the proposition above.
The set $S_{t,y} \subset \partial B_1$ defined as
\begin{align}
S_{t,y} := \{x \in \partial B_1 | P_{| T_{\varphi_{t,y} (x)} \varphi_{t,y} (\partial B_1)} \textrm{ is not surjective} \}
\label{an1}
\end{align}
has the property that
\begin{align}
\partial P \varphi_{t,y} (B_1)= P \varphi_{t,y} (S_{t,y})
\label{an2}
\end{align}
and can be written as
\begin{align}
S_{t,y}=\{x \in \partial B_1 | F_{t,y}(x)=0\},
\end{align}
\label{an3}
where $F_{t,y}(x):=(I-P) (D \varphi_{t,y} (x)^{*})^{-1} [x]$. If $(t,y)$ is in a small enough neighbourhood of $(0,y_0)$, then $S_{t,y}$ is a submanifold of $\partial B_1$ such that $S_{t,y}= \phi_{t,y}(S^{k-1})$, where $\phi_{t,y}$ is an analytic path of diffeomorphisms.
\end{lem}
\proof
First observe that ~\eqref{an2} is an immediate consequence of the definition of $S_{t,y}$.
The function $P_{| T_{\varphi_{t,y} (x)} \varphi_{t,y} (\partial B_1)}$ is not surjective if and only if $P D \varphi_{t,y} (x)_{| T_x \partial B_1} :T_x \partial B_1 \rightarrow T_{P \varphi_{t,y} (x)} V \cong \mathbb{R}^{k}$ is not surjective.\\
This is true iff
\begin{align*}
&\exists u \in \mathbb{R}^{k}, u \neq 0, \textrm{ such that } \langle P D \varphi_{t,y}(x) [\xi], u\rangle =0
\end{align*}
$\forall \xi \in T_x \partial B_1$, i.e. $\forall \xi$ such that $\langle\xi , x\rangle =0$.\\
Since $u = P u$ and $P=P^*$
\begin{align*}
\langle P D \varphi_{t,y}(x) [\xi], u\rangle=\langle\xi, {(P D \varphi_{t,y}(x))}^* [u]\rangle=\langle\xi, {D \varphi_{t,y}(x)}^* [u]\rangle
\end{align*}
and thus the non surjectivity holds iff
\begin{align*}
D \varphi_{t,y} (x) ^* [u] = \lambda x, \textrm{ where } \lambda \neq 0 \textrm{ is a real number}.
\end{align*}
Equivalently
\begin{align*}
(D \varphi_{t,y} (x)^{*})^{-1} [x] \in \mathbb{R}^{k}
\end{align*}
which is the same as
\begin{align*}
F_{t,y}(x):=(I-P) (D \varphi_{t,y} (x)^{*})^{-1} [x] =0 \in \mathbb{R}^{n-k}.
\end{align*}
Now, consider the analytic function $G(t,y,x):=(I-P)(D\varphi_{t,y} (x)^{*})^{-1} [x]$. We have that $\varphi_{0,y_0}= D \varphi_{0,y_0}$ because $\varphi_{0,y_0}$ is linear, hence $G(0,y_0,z)=0$ if $z \in S_{0,y_0}$. Applying the analytic implicit function theorem we deduce that, for $(t,y)$ close to $(0,y_0)$, $S_{t,y}$ is a submanifold of $\partial B_1$ and $S_{t,y}=\phi_{t,y} ' (S_{0,y_0})$, where $\phi_{t,y} '$ is an analytic path of diffeomorphisms. There is a diffeomorphism given by $(D \varphi_{t,y} (x)^{*})^{-1}$ between $S^{k-1}$ and $S_{0,y_0}$, therefore by composition with $\phi_{t,y} '$ we get an analytic path of diffeomorphisms $\phi_{t,y}$ such that $\phi_{t,y}(S^{k-1})= S_{t,y}$.
\endproof
Now we are ready to prove Proposition \ref{Prop1}.
\proof
Take a primitive $\alpha \in \Omega^{k-1} (V)$ of the volume form $\rho \in \Omega^{k} (V)$, i.e. $d \alpha=\rho$.\\
As observed in the former lemma $\partial P \varphi_{t,y} (B_1)= P \varphi_{t,y} (S_{t,y}) $ and applying Stokes' theorem we get
\begin{align*}
Vol_k (P \varphi_{t,y} (B_1),\rho) = \int_{P \varphi_{t,y} (B_1)} d \alpha= \int_{\partial P \varphi_{t,y} (B_1)} \alpha=
\end{align*}
\begin{align*}
= \int_{P \varphi_{t,y} (S_{t,y})} \alpha=\int_{S_{t,y}} (P \varphi_{t,y})^* \alpha = \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha,
\end{align*}
where $\phi_{t,y}: S^{k-1} \rightarrow S_{t,y}$ is the diffeomorphism introduced in the proof of the lemma above.
For $(t,y)$ close to $(0,y_0)$, the function $(t,y) \mapsto P \varphi_{t,y} \phi_{t,y}$ is analytic and this implies the analyticity of $(t,y) \mapsto \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha$.\\
In fact, we can write $\int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha=\int_{S^{k-1}} a_{t,y}(x)\, \nu$, where $\nu$ is a fixed volume form on $S^{k-1}$ and $a_{t,y}$ is analytic.
Differentiating under integral sign, from the Taylor expansion of $a_{t,y}$ we get a local series expansion of the function $(t,y) \mapsto \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha= Vol_{k} (P \varphi_{t,y} (B_1),\rho)$, which is therefore analytic.
\endproof
\section{Local non-squeezing}
In the following $B_1$ denotes the unit ball in $\mathbb{R}^{2n}$ and $P:\mathbb{R}^{2n} \rightarrow V$ the symplectic projector onto a $2k$-dimensional symplectic linear subspace $V \subset \mathbb{R}^{2n}$.
First, we are interested in proving the local non-squeezing formulation for a path of symplectic embeddings starting from a linear one, and to do so we will use the middle dimensional linear non-squeezing result.
\label{Sect4}
\begin{teo}{\upshape{\cite{AM13}, \cite{AM15}}}
\label{teor6}
Let $P$ be the symplectic projector onto a $2k$-dimensional symplectic linear subspace $V \subset \mathbb{R}^{2n}$. Then for every linear symplectic isomorphism $L: \mathbb{R}^{2n} \rightarrow \mathbb{R}^{2n}$ there holds
\begin{align*}
Vol_{2k} ( P L (B_1), \omega^k_{0|V}) \geq \pi^k.
\end{align*}
The equality holds if and only if the linear subspace $L^{-1} V$ is $J$-invariant, where $J$ is the standard complex structure on $\mathbb{R}^{2n}$.
\end{teo}
We complement the above result with the following:
\begin{add}
\label{add1}
The equality holds if and only if $(P L(B_1),\omega_{0|V})$ is symplectomorphic to $(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} )$.
\end{add}
The following result is useful for proving both Theorem \ref{teor6} and the addendum.
\begin{lem}\upshape{{\cite{Fed69} (Section 1.8.1)}}\\
\label{Lem1}
Let $1\leq k \leq n$, then
\begin{eqnarray*}
| \omega^k[u_1,\ldots,u_{2k}]| \leq k!\, |u_1 \wedge \ldots \wedge u_{2k}| \ \ \forall u_1,\ldots , u_{2k} \in \mathbb{R}^{2n}.
\end{eqnarray*}
\end{lem}
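For $k=1$ the inequality can be verified directly: writing $\omega_0[u_1,u_2]=\langle J u_1,u_2\rangle$, where $J$ is the standard complex structure, the vectors $u_1$ and $Ju_1$ are orthogonal and have the same length, so Bessel's inequality gives $\langle u_1,u_2\rangle ^2+\langle Ju_1,u_2\rangle ^2\leq |u_1|^2|u_2|^2$, that is $\omega_0[u_1,u_2]^2\leq |u_1|^2|u_2|^2-\langle u_1,u_2\rangle ^2=|u_1\wedge u_2|^2$.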
\proof (Addendum)
If a symplectomorphism exists, by Lemma \ref{Lem1} we have $Vol_{2k} (P L(B_1),\omega^k _{0|V}) = Vol_{2k}(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} ^k ) \leq \pi^k$. But at the same time Theorem \ref{teor6} yields $Vol_{2k} (P L(B_1),\omega^k _{0|V}) \geq \pi^k$, hence the equality holds. On the other hand $Vol_{2k} ( P L (B_1), \omega^k_{0|V}) = \pi^k$ iff $L^{-1} V$ is $J$-invariant; and if the claim that $PL(B_1 \cap L^{-1} V)=PL(B_1)$ is true, then $(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} )$ is symplectomorphic to $(P L(B_1),\omega_{0|V})=(PL(B_1\cap L^{-1} V),\omega_{0|V})$ via the linear symplectic isomorphism $L:L^{-1} V \rightarrow V$.
To prove the claim we reduce it to the easier case in which $P$ is orthogonal. First we take an $\omega$-compatible inner product $(\cdot,\cdot) '$ on $\mathbb{R}^{2n}$ such that $P$ is orthogonal, and we denote by $B_1 '$ and $J '$ the corresponding unit ball and complex structure. In particular $V$ is $J'$-invariant. Let $\psi :(\mathbb{R}^{2n} ,\omega, J ') \rightarrow (\mathbb{R}^{2n} ,\omega, J)$ be a complex and linear isomorphism. It follows that $\psi$ is an isometry from $(\mathbb{R}^{2n}, (\cdot,\cdot)' )$ to $(\mathbb{R}^{2n}, (\cdot,\cdot) )$, hence $\psi(B_1 ')= B_1$. The image of the unit ball under a linear surjection $M$ is given by
\begin{align*}
M(B_1)=M( B_1 \cap \mathrm{ran}\, M^*).
\end{align*}
If we take $N=L\psi$, $M=PN$ and we denote with $*'$ the adjoint of a matrix with respect to $(\cdot,\cdot)'$, we get
\begin{align*}
&PL(B_1)=PL \psi (B_1 ')= PN(B_1 ') =PN(B_1 ' \cap \mathrm{ran}\, (PN)^{*'})=\\
&=PN(B_1 ' \cap \mathrm{ran}\, (N^{*'}P^{*'}))=PN(B_1 ' \cap \mathrm{ran}\, (N^{*'}P))= PN(B_1 ' \cap N^{*'}V).
\end{align*}
The identity $\psi J' = J \psi$ implies $ J' = \psi ^{-1} J \psi$ and the fact that $L^{-1}V$ is\\ $J$-invariant is equivalent to $J\psi N^{-1}V=\psi N^{-1}V$, hence
\begin{align*}
&N^{*'} V=N^{*'} J' V=N^{*'} J'N N^{-1} V=J' N^{-1} V=\\
&= \psi^{-1} J \psi N^{-1} V=\psi^{-1} \psi N^{-1}V=N^{-1}V,
\end{align*}
thus we obtain
\begin{align*}
&PL(B_1)=PN(B_1 ' \cap N^{*'}V)=PN(B_1 ' \cap N^{-1}V)=\\
&=PL\psi (B_1 ' \cap \psi^{-1} L^{-1} V)=PL(B_1 \cap L^{-1} V),
\end{align*}
and the claim is proved.
\endproof
In order to gain some information about the strong formulation of the local non-squeezing inequality we study the function $t\mapsto Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)$.
\begin{prop}
\label{prop1}
Consider a domain $U \subset \mathbb{R}^{2n}$ and a smooth simple curve $[0,1] \ni t \mapsto y(t) \in U$ starting at $y(0)=y_0$.
Let $[0,1] \ni t \mapsto \varphi_{t,y(t)}$ be a smooth path of symplectic embeddings $\varphi_{t,y(t)}:\overline{B_1} \hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_{0,y_0}$ is linear and $\varphi_{0,y_0} ^{-1} V$ is $J$-invariant.
The deformation of $P\varphi_{0,y_0}(B_1)$ given by $P\varphi_{t,y(t)}(B_1)$ can be either formally or not formally trivial:
\begin{itemize}
\item if the deformation is formally trivial, then every derivative of order $m \in \mathbb{Z}^+$ of $t\mapsto Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$ vanishes at $0$;
\item if the deformation is not formally trivial, then the strict middle dimensional non-squeezing inequality $Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k) > \pi^k$ holds for $t>0$ small enough.
\end{itemize}
\end{prop}
\proof
By the previous addendum we have that $\psi:=\varphi_{{0,y_0}|\varphi_{0,y_0} ^{-1}V}$ is a linear symplectomorphism between $(B_1 \cap \varphi_{0,y_0} ^{-1}V,\omega_{0|\varphi_{0,y_0} ^{-1}V})$ and $(P\varphi_{0,y_0} (B_1),\omega_{0|V})$.
Let us call $M_{t,y(t)}:=\partial P \varphi_{t,y(t)} (B_1)$ and consider two $1$-forms: the Liouville form $\lambda_{0|\psi^{-1} M_{t,y(t)}}$ and its pullback $\mu_{t,y(t)} := {\theta_{t,y(t)}}^* (\lambda_{0|\psi^{-1}M_{t,y(t)}})$, where $\theta_{t,y(t)} : S^{2k-1} \rightarrow \psi^{-1}M_{t,y(t)}$ is the radial diffeomorphism such that ${\theta_{t,y(t)}}^{-1}(x) = \dfrac{x}{||x||}$.\\
Later we will use the capacity $c$, which is defined only for convex domains, so let us notice once and for all that, for small deformations, $\varphi_{t,y(t)} (B_1)$ is still convex and that the projection of a convex domain is still convex.\\
Now we compute the relations between the volume of the deformations.\\
Using Stokes' theorem we get
\begin{align*}
&Vol_{2k-1}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})= \int_{\psi^{-1}\partial P \varphi_{t,y(t)} (B_1)} \lambda_{0|\psi^{-1}M_{t,y(t)}} \wedge (d \lambda_{0|\psi^{-1}M_{t,y(t)}})^{k-1}=\\
&=\int_{ \psi^{-1}P \varphi_{t,y(t)} (B_1)} {\omega_0}^k = Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k).
\end{align*}
On the other hand, since $(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $(S^{2k-1},\mu_{t,y(t)})$ are strictly contactomorphic, $Vol_{2k-1}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})=Vol_{2k-1}(S^{2k-1},\mu_{t,y(t)})$.
So, if $\mu_{t,y(t)} ' :=\mu_{t,y(t)} \rho (t)$, where $\rho (t):= \dfrac{1}{\sqrt[k]{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1} V} ^k)}}$, it follows that $Vol_{2k-1}(S^{2k-1},\mu_{t,y(t)} ')= 1$ and in particular that $\mu_{t,y(t)} '$ is a constant volume deformation.\\
Observing that closed characteristics in $(S^{2k-1}, \mu_{t,y(t)} ')$ are the same as in $(S^{2k-1}, \mu_{t,y(t)})$ we can establish the relations between the minimal action of their closed Reeb orbits
\begin{align*}
&A_{\min} (S^{2k-1}, \mu_{t,y(t)} ') = \min_{\gamma} \{ A(\gamma) \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \} = \\
&= \min_{\gamma} \{ \int_{\gamma} \mu_{t,y(t)} ' \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \} = \\
&= \min_{\gamma} \{ \rho (t)\int_{\gamma} \mu_{t,y(t)} \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \}= \\
& =\min_{\gamma} \{ \rho (t)\int_{\gamma} \mu_{t,y(t)} \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ) \}= \\
&= \rho (t) A_{\min} (S^{2k-1}, \mu_{t,y(t)}).
\end{align*}
Since $\theta_{t,y(t)}$ is a strict contactomorphism between $(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $(S^{2k-1}, \mu_{t,y(t)} )$, we also get
\begin{align*}
A_{\min} (S^{2k-1}, \mu_{t,y(t)} ) =A_{\min}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})=c (\psi^{-1}P \varphi_{t,y(t)} (B_1)),
\end{align*}
where $\psi$ is a symplectomorphism.\\ Thus the quantities $A_{\min}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $c (\psi^{-1}P \varphi_{t,y(t)} (B_1))$ are equal respectively to $A_{\min}(M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})$ and $c (P \varphi_{t,y(t)} (B_1))$.
Notice that the Weinstein conjecture holds in the convex case (Theorem \ref{Teor4}), hence a closed characteristic for $(M_{t,y(t)}, \lambda_{0|M_{t,y(t)}} )$ always exists, moreover by Theorem \ref{Teor7} the quantities above are well defined.\\
Now let us take a deformation $(S^{2k-1}, \mu_{t,y(t)} ' )$ of the standard Zoll contact form $\mu_{0,y_0}=\lambda_{0|S^{2k-1}}$ on $S^{2k-1}$, that could be either formally trivial or not formally trivial.\\
Suppose the former to be true, which is equivalent to saying that the deformation $P\varphi_{t,y(t)}(B_1)$ is formally trivial.\\
In this case, in the last part of the proof of Theorem \ref{Teor3} we deduced that for every $m \in \mathbb{Z}^+$ there is a contact isotopy $\phi_{t,y(t)}$ such that
\begin{align*}
\phi_{t,y(t)}^* \mu_{t,y(t)} = (1+O({t}^{m})) \mu_0.
\end{align*}
The volume function is invariant by contact isotopy, so
\begin{align*}
&Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k-1} (S^{2k-1},\mu_{t,y(t)})=\\
&=\rho(t) Vol_{2k-1} (S^{2k-1},\mu_{t,y(t)} ')=\rho(t) Vol_{2k-1} ( S^{2k-1},(1+O({t}^{m}))\mu_0), \ \ \ \forall m \in \mathbb{Z}^+.
\end{align*}
By the definition of $\rho(t)$ the above equality is equivalent to
\begin{align*}
&{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)}^{\frac{k+1}{k}}=Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k) \rho(t)=\\
&= Vol_{2k}( S^{2k-1},(1+O({t}^{m}))\mu_0), \ \ \ \forall m \in \mathbb{Z}^+.
\end{align*}
Therefore every derivative of $Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)^{\frac{k+1}{k}}$, and hence of $Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$, vanishes at $0$.\\
Now we suppose that $(S^{2k-1}, \mu_{t,y(t)} ')$ (equivalently $P\varphi_{t,y(t)}(B_1)$) is not formally trivial.
By Theorem \ref{Teor3} and the previous calculations, if $t$ is small enough the following inequality holds
\begin{align*}
& 1=\dfrac{\pi}{\sqrt[k]{\pi^k}}= \dfrac{ A_{\min} (\psi^{-1}M_{0,y_0}, \lambda_{0|\psi^{-1} M_{0,y_0}})}{\sqrt[k]{\pi^k}} = \rho (0) A_{\min} (\psi^{-1}M_{0,y_0}, \lambda_{0|\psi^{-1}M_{0,y_0}})= \\
&= A_{\min} (S^{2k-1}, \mu_{0,y_0} ') > A_{\min} (S^{2k-1}, \mu_{t,y(t)} ')= \dfrac{A_{\min} (\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})}{{\sqrt[k]{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1} V} ^k)}}}.
\end{align*}
So, recalling that $A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})=A_{\min} (\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and\\ $Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$, if we prove that \\$A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})\geq \pi$, then ${ Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)}^{-\frac{1}{k}} < \dfrac{1}{\pi}$ and the strict local non-squeezing inequality $Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k) > \pi^k$ holds.
But from the behaviour of the capacity $c$ with respect to symplectic projections (Proposition \ref{Prop7}), we deduce
\begin{align*}
A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}}) = c (P \varphi_{t,y(t)} (B_1)) \geq c ( \varphi_{t,y(t)} (B_1))= c ( B_1) =\pi,
\end{align*}
and hence the result.
\endproof
From this result we cannot deduce the general local non-squeezing inequality \eqref{ineq5}, because in the general case we cannot say much when a formally trivial deformation occurs. Nevertheless, if the deformation is analytic, the local non-squeezing inequality follows easily as a consequence of the proposition above.
\begin{teo*}{Teor2}[Analytic local non-squeezing]
Let $[0,1] \ni t \mapsto \varphi_t$ be an analytic path of symplectic embeddings $\varphi_t:\overline{B_1}\hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_0$ is linear. Then the middle dimensional non-squeezing inequality
\begin{align*}
Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k) \geq \pi^k
\end{align*}
holds for $t$ small enough.
\end{teo*}
\proof
By Theorem \ref{teor6} we have that $Vol_{2k} (P \varphi_0 (B_1),\omega_{0|V} ^k) \geq \pi^k$ and the equality holds if and only if $\varphi_0 ^{-1} V$ is $J$-invariant. If the equality does not hold, the theorem is trivially true by the continuity of the volume. On the other hand, if the equality holds, Theorem \ref{teor6} implies that $\varphi_0 ^{-1}V$ is $J$-invariant and thus we are under the hypotheses of Proposition \ref{prop1}.\\ Therefore, in the case of a not formally trivial deformation $P \varphi_t(B_1)$ there is nothing to prove. Otherwise, if the deformation is formally trivial, the function $t \mapsto Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)$ has vanishing derivatives at $0$, but we know by Proposition \ref{Prop1} that if $t$ is small enough this function is analytic and hence constant. Thus we get $Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)=Vol_{2k} (B^{2k}_1, \omega_{0|V} ^k)= \pi^k$ for $t$ small enough.
\endproof
Note that to prove the theorem it was sufficient to use Proposition \ref{prop1} in the case where the curve $y(t)$, on which the path $t\mapsto \varphi_{t,y(t)}$ depends, is a constant curve, but the same proof leads to a generalization of Theorem \ref{Teor2} to the case in which $y(t)$ is an arbitrary analytic curve.
Thanks to this remark we can say something more about the fixed symplectic embedding formulation of the local non-squeezing, but first we state a couple of lemmata.
First a result on the local structure of the zero set of an analytic function.
\begin{teo}(Lojasiewicz's Structure Theorem) \ {\upshape{\cite[Theorem 5.2.3]{KP92}}}
\label{Teo4}
Let $f(x_1,\ldots,x_n)$ be a real analytic function in a neighbourhood of a point $y=(y_1,\ldots,y_n)$ in $\mathbb{R}^n$ and assume that $x_n \mapsto f(y_1,\ldots,y_{n-1},x_n)$ is not identically zero. There exist numbers $\delta_j>0$, $j=1,\ldots,n,$ and a neighbourhood $Q_n$ (where we define $Q_k:= \{(x_1,\ldots,x_k) \ | \ |y_j - x_j| < \delta_j, \ 1\leq j \leq k \}$) such that the zero set
\begin{align*}
Z:=\{ x\in Q_n \ | \ f(x)=0 \}
\end{align*}
has a decomposition
\begin{align*}
Z=V^{0} \cup \ldots \cup V^{n-1},
\end{align*}
where the set $V^0$ is either empty or consists of the point $y$ alone, while for $1\leq k \leq n-1$ we may write $V^k$ as a finite disjoint union $V^k= \cup_\lambda \Gamma^k_\lambda$ of $k$-dimensional subvarieties $\Gamma^k_\lambda$. Each $\Gamma^k_\lambda$ is defined by a system of $n-k$ equations:
\begin{align*}
&x_{k+1}=^\lambda \! \! \eta _{k+1}^k (x_1,\ldots, x_k),\\
&\ \ \ \ \ \ \ \ \ \ \ldots\\
&x_n=^\lambda \! \!\eta _{n}^k (x_1,\ldots, x_k),
\end{align*}
where each function $^\lambda \eta _{k+1}^k$ is real analytic on an open subset $\Omega_\lambda ^k \subseteq Q_k \subseteq \mathbb{R}^k$.
\end{teo}
\begin{lem}
Let $\varphi: D\rightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding and $x \in D$.
As long as $x + ry \in D$, the map
\begin{align*}
\varphi_{r,x}(y) :=
\left\{ \begin{array}{l}
\dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big) \ \ \ \textrm{ if } r >0,\\
D \varphi (x) [y] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textrm{ if } r=0,
\end{array}\right.
\end{align*}
is analytic.
\end{lem}
\proof
The function $\varphi(x+ry)$ is analytic in $r$ because it is a composition of analytic maps, thus the map $\dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big)$ is analytic in $r>0$.
Since $\varphi(x+ry)- \varphi(x)$ is analytic at $r=0$, we can express it as a convergent Taylor series centred at $0$. But the $0$-th coefficient of this expansion must vanish since $\varphi(x+0\cdot y)- \varphi(x)=0$, hence we can divide by $r$ and we obtain a convergent Taylor expansion for $\dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big)$ at $r=0$.
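Explicitly, writing $\varphi(x+ry)-\varphi(x)=\sum_{j\geq 1} a_j(x,y)\, r^j$, with $a_1(x,y)=D\varphi(x)[y]$, we obtain $\varphi_{r,x}(y)=\sum_{j\geq 1} a_j(x,y)\, r^{j-1}$, a convergent power series in $r$ whose value at $r=0$ is precisely $D\varphi(x)[y]$.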
\endproof
\begin{teo*}{Teor5}
Let $\varphi: D\hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding, with $D$ domain of $\mathbb{R}^{2n}$. Then there exists a function $r_0:D\rightarrow (0,+ \infty)$ such that the inequality $Vol_{2k}(P\varphi (B_r(x)),\omega ^k_{0|V})\geq r^{2k}\pi^{k}$ holds, for every $x \in D$ and for every $r < r_0(x)$. Moreover $r_0$ is bounded away from $0$ on compact subsets $K \subset D$.
\end{teo*}
\proof
Let $\varphi_{r,x}$ be the map defined in the lemma above. Observing that
\begin{align}
\label{eq6}
Vol_{2k}(P \varphi_{r,x} (B_1(0)),\omega_{0|V} ^k)=Vol_{2k}(P \frac{1}{r} \varphi (B_r(x)),\omega_{0|V} ^k)=\dfrac{Vol_{2k}(P \varphi (B_r(x)),\omega_{0|V} ^k)}{r^{2k}},
\end{align}
for every fixed $x \in D$ we can apply Theorem \ref{Teor2} to the path $r\mapsto \varphi_{r,x}$ and we deduce the first part of the theorem.\\
Now we prove the estimate on compact sets.\\
Define a function
\begin{align*}
f(x,r):= Vol_{2k}(P \varphi_{r,x} (B_1(0)),\omega_{0|V} ^k) - \pi^k.
\end{align*}
This function is analytic in $\mathcal{D}= \{ (x,r) \in D \times [0,+ \infty) \ | \ 0 \leq r < R(x) \}$, where $R(x)>0$ is the supremum of the radii $r$ for which $f(x,r)$ is defined. To see this it is enough to apply Proposition \ref{Prop1} to the analytic map $(r,x)\mapsto \varphi_{r,x}$.
Now, take an arbitrary point $x_0 \in D$. If $f(x_0,0)>0$, then by continuity there exists a small neighbourhood $B_{\epsilon_{x_0}} \times [0,r_{x_0})$ of $(x_0,0)$ in $\mathcal{D}$ on which $f$ is positive.
On the other hand, if $f(x_0,0)=0$, we denote by $\gamma_A ^{x_0}:[0,1] \rightarrow \mathcal{D}$ a simple analytic curve such that $\gamma_A ^{x_0}(0)=(x_0,0)$. A consequence of Theorem \ref{Teor2} is that $f(\gamma_A ^{x_0} (r))$ must be non-negative in a neighbourhood of $r=0$, i.e. $(x_0,0)$ is a local minimum for the restriction of $f$ to every analytic curve $\gamma_A ^{x_0}$.
From this we can deduce that $(x_0,0)$ is a minimum for $f$ in $\mathcal{D}$.
To see it, we first extend $f$ to an analytic function in a neighbourhood of $(x_0,0)$ in $\mathbb{R}^{2n+1}$. By Theorem \ref{Teo4}, there is a small ball $B_\delta(x_0,0) \subset \mathbb{R}^{2n+1}$ in which we know how the zeros are distributed, in particular $\mathcal{D} \cap (B_\delta (x_0,0) \backslash f^{-1}(0))$ has at most a finite number $N$ of different connected components $A_i \subset \mathcal{D} \cap B_\delta (x_0,0)$ such that $(x_0,0) \in \overline{A_i}$.
The set $(f^{-1}(0) \cup_{i=1}^N A_i) \cap (\mathcal{D} \cap B_\delta (x_0,0))$ contains a neighbourhood of $(x_0,0)$ in $\mathcal{D}$, hence if we prove that $f_{|A_i}>0$ for every $i\in \{1,\ldots ,N\}$, we get the desired result.
But if it were $f_{|A_i}<0$, by Theorem \ref{Teo4} we would be able to conclude that there exists an analytic curve $\gamma_A ^{x_0}$ lying in the connected component $A_i$, and this would imply that $r=0$ is not a local minimum of $f$ along $\gamma_A ^{x_0}$, hence a contradiction.
Therefore $(x_0,0)$ is a minimum for $f$ in $\mathcal{D}$ and hence there exists a small neighbourhood $B_{\epsilon_{x_0}} \times [0,r_{x_0})$ of $(x_0,0)$ in $\mathcal{D}$ on which $f$ is non-negative.
Now we consider an arbitrary compact set $K \subset D$. As we have just seen, to every $x_0 \in D$ we can associate two positive real numbers $r_{x_0}$ and $\epsilon_{x_0}$. The balls of radius $\epsilon_{x_0}$ centred in an arbitrary $x_0 \in K$ produce an open cover of $K$. From this cover we can extract a finite subcover of balls of radius $\epsilon_{x_i}$ and if we define $r_0$ as the minimum in the set of the corresponding $r_{x_i}$ we get the result.
\endproof
\thispagestyle{empty}
\thispagestyle{empty}
\end{document}
\begin{document}
\onehalfspace
\begin{abstract}
We provide new bounds for the divisibility function of the free group
${\mathbf F}_2$ and construct short laws for the symmetric groups
$\Sym(n)$. The construction is random and relies on
the classification of the finite simple groups. We also give bounds on the length of laws for finite
simple groups of Lie type.
\end{abstract}
\title{Divisibility and Laws in Finite Simple Groups}
\tableofcontents
\hypersetup{linkcolor=aablue}
\section{Introduction}
\label{intro}
We want to start out by explaining a straightforward and well-known application of the prime number theorem.
We consider the Chebyshev functions
$$\vartheta(x) = \sum_{p \leq x} \log(p) \quad \mbox{and} \quad \psi(x) = \sum_{p^k \leq x} \log(p^k),$$
where the summation is over all primes (resp.\ prime powers) less than or equal to $x$. It is well-known and equivalent to the prime number theorem, that
\begin{equation} \label{pnt}
\lim_{x \to \infty}\frac{\vartheta(x)}{x} = 1 \quad \mbox{and} \quad\lim_{x \to \infty}\frac{\psi(x)}{x} = 1.
\end{equation}
We conclude that for each $\varepsilon>0$, there exists $N(\varepsilon) \in {\mathbb N}$ such that for all $n \geq N(\varepsilon)$, there exists some prime $p \leq (1+ \varepsilon)\log(n)$ such that $n$ is not congruent to zero modulo $p$. Indeed, if this was not the case, then
$$n \geq \prod_{p \leq (1+\varepsilon)\log(n)} p = \exp(\vartheta((1+ \varepsilon)\log(n)))$$
for infinitely many $n$, which contradicts Equation \eqref{pnt}. On the other hand, setting $n:= \exp(\psi(x))$, we obtain an integer which is congruent to zero modulo any prime power less than or equal to $x$. Thus, any prime power $p^k$ such that $n$ is not congruent to zero modulo $p^k$ must satisfy $p^k \geq x \geq (1-\varepsilon)\log(n),$ if $x$ is large enough.
These arguments allow us to determine the asymptotics of the so-called divisibility function for the group ${\mathbb Z}$, defined more generally as follows.
\begin{definition}For any group $\Gamma$, we define the divisibility function $D_{\Gamma} \colon \Gamma \to {\mathbb N}$ by
$$D_{\Gamma}(\gamma):= \min\{ [\Gamma:\Lambda] \mid \Lambda \subset \Gamma \mbox{ a subgroup}, \gamma \not \in \Lambda \} \in {\mathbb N} \cup \{\infty\}.$$
\end{definition}
The simplest case to study is $\Gamma={\mathbb Z}$ and there the prime number theorem implies (as explained above) that
$$\limsup_{n \to \infty} \frac{D_{{\mathbb Z}}(n)}{\log(n)} = 1.$$
This result appeared in work of Bou-Rabee \cite[Theorem 2.2]{br}, where the study of quantitative aspects of residual finiteness was initiated.
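For concreteness (this is just a small illustration), every subgroup of index $m$ in ${\mathbb Z}$ equals $m{\mathbb Z}$, so $D_{{\mathbb Z}}(n)$ is simply the least positive integer which does not divide $n$; this integer is necessarily a prime power, since if the least non-divisor $m$ factored as $m=ab$ with coprime $a,b>1$, then $a,b<m$ would both divide $n$ and hence so would $m$. For example
$$D_{{\mathbb Z}}(12)=5 \quad \mbox{and} \quad D_{{\mathbb Z}}(2520)=11.$$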
In order to study similar asymptotics for more complicated groups, we restrict to the case that $\Gamma$ is generated by some finite symmetric subset $S \subset \Gamma$. We denote the associated word length function by $|\cdot| \colon \Gamma \to {\mathbb N}$ and set
$$D_{\Gamma}(n) := \max\{ D_{\Gamma}(\gamma) \mid \gamma \in \Gamma, |\gamma| \leq n \}.$$
We now consider the free group ${\mathbf F}_2$ with two generators and
its natural generating set $S:= \{a,a^{-1},b,b^{-1}\}.$ It is
elementary (using Lemma \ref{elem} below) to see that
$D_{{\mathbf F}_2}(n) \leq n+1.$ The only known improvement to this trivial bound has been obtained by Buskin (see \cite{MR2583614})
$$D_{{\mathbf F}_2}(n) \leq \frac n2+2.$$
Concerning lower bounds, it is easy to see that the element
$a^{\exp(\psi(n))} \in {\mathbf F}_2$ lies in all subgroups of index
less than or equal to $n$ (see again Lemma \ref{elem}), and hence Equation \eqref{pnt} implies
$$\limsup_{n \to \infty} \frac{D_{\mathbf F_2}(n)}{\log(n)} \geq 1,$$
i.e., $D_{\mathbf F_2}(n) = \Omega(\log(n))$.
Bogopolski asked if $D_{\mathbf F_2}(n) = O(\log(n))$ holds, see \cite[Problem 15.35]{MR2263886}. This question was answered negatively by Bou-Rabee and McReynolds \cite[Theorem 1.4]{brm}, giving the following explicit lower bound.
\begin{equation} \label{mr}
D_{\mathbf F_2}(n) \ge \frac{\log(n)^2}{C\log \log(n)}.
\end{equation}
(Here and below, $C$ refers to absolute constants, which are positive
and sufficiently large, and may denote different constants in
different formulas.)
Independent work of Gimadeev-Vyalyi \cite{MR2972333} on the length of shortest laws for ${\rm Sym}(n)$ implies that
\begin{equation*}
D_{\mathbf F_2}(n) \ge \left(\frac{\log(n)}{C\log\log(n)} \right)^2,
\end{equation*}
but the connection with Bogopolski's question apparently remained unnoticed. We will improve \eqref{mr} and obtain in Theorem \ref{divthm} a new lower bound as follows:
$$D_{{\mathbf F}_2}(n) \ge \exp\left(\left(\frac{\log(n)}{C\log \log(n)} \right)^{1/4} \right).$$
Assuming Babai's Conjecture on the diameter of Cayley graphs of permutation groups (see the discussion below), we even obtain
$$D_{{\mathbf F}_2}(n) \ge \exp\left(\frac{\log(n)}{C\log \log(n)} \right) = n^{\frac1{C \log\log(n)}}.$$
Note that this estimate is now getting much closer to the linear upper bound for the divisibility function.
Currently, the proofs are based on consequences of the classification of finite simple groups and it would be desirable to obtain more elementary arguments for these bounds.
It remains an outstanding open problem to determine the precise asymptotic behaviour of the function $D_{{\mathbf F}_2}$.
We now turn to the closely related study of two-variable laws for specific groups.
\begin{definition}
For a group $\Gamma$, we say that $\Gamma$ satisfies the law $w \in {\mathbf F}_2$ if
$$w(g,h)=e \quad \forall g,h \in \Gamma.$$
\end{definition}
We denote by $\Sym(n)$ the symmetric group on the letters
$\{1,\dots,n\}$. Along the way we will show the existence of short
laws which are satisfied by the group $\Sym(n)$. We denote by
$\alpha(n)$ the length of the shortest law for the group $\Sym(n)$. The shortest previously known laws for $\Sym(n)$
implied
\begin{equation} \label{oldbound}
\alpha(n)\le \exp\left(C(n \log n)^{1/2} \right),
\end{equation}
a result which was obtained in the course of the proof of \cite[Theorem 1.4]{brm}. The proof of this result was based on the fact that the maximal order of an element in ${\rm Sym}(n)$ is bounded by $\exp(C (n \log(n))^{1/2})$ --- a result of Landau \cite{landau} from 1903. We will give a quick proof of \eqref{oldbound} in Section \ref{divlaw}.
Independently, Gimadeev-Vyalyi \cite{MR2972333} obtained the slightly weaker bound
$$\alpha(n) \le \exp\left(Cn^{1/2} \log(n) \right).$$
We are able to improve these bounds substantially and will show
\begin{theorem}\label{random}
The length of the shortest non-trivial law for ${\rm Sym}(n)$ satisfies\begin{equation} \label{randomnew}
\alpha(n)\le\exp\left(C\log(n)^4 \log\log(n)\right).
\end{equation}
\end{theorem} The proof uses
\begin{itemize}
\item a remarkable structure result on the possible primitive subgroups of ${\rm Sym}(n)$ by Liebeck \cite{MR758332} which is ultimately based on the O'Nan-Scott Theorem and the Classification of Finite Simple Groups (CFSG), and
\item a recent breakthrough result by Helfgott-Seress
\cite{seresshelfgott} on the diameter of Cayley graphs of ${\rm
Sym}(n)$ (also using (CFSG)).
\end{itemize}
The main contribution to our estimate for the word length, $\exp\left(C\log(n)^4\log\log(n)\right)$, comes
from the Helfgott-Seress theorem. The Helfgott-Seress theorem is not
conjectured to be sharp. In fact, a conjecture of Babai
\cite[Conjecture 1.7]{babser} states that this term should be
$n^C$. It is natural to ask how our result would improve under this
conjecture.
\begin{theorem}\label{thm:underBabai}If Babai's conjecture holds, then
$$\alpha(n)\le\exp\left(C\log(n) \log\log(n)\right) =
n^{C\log\log(n)}.$$
\end{theorem}
The main obstruction to replacing the term $n^{\log\log n}$ with a
polynomial is not the (CFSG) part, for which the existing estimates
are far better than our needs, but our handling of the transitive
imprimitive subgroups of ${\rm Sym}(n)$, which embed into wreath products of smaller
symmetric groups. See the proofs in Section \ref{sec:proof}.
The complexity of identities for associative rings has a long history and one of the outstanding results is the Amitsur-Levitzki theorem \cite{al} which says that the algebra $M_n(k)$ of $n\times n$-matrices over a field $k$ satisfies the identity
$$\sum_{\sigma \in {\rm Sym}(2n)} {\rm sgn}(\sigma) \cdot A_{\sigma(1)} \cdots A_{\sigma(2n)}=0,$$
for all $2n$-tuples of matrices $A_1,\dots,A_{2n} \in M_n(k)$. Moreover, the theorem asserts that this identity has the minimal degree among all non-trivial identities for $M_n(k)$. Thus, in this case the exact asymptotics of the smallest degree of non-trivial identities is known. It seems that we still have a long way to go to close the current gap between the lower and upper bounds and to determine the precise asymptotics of $\alpha(n)$.
Similar questions about the length of laws for groups were studied by Hadad \cite{MR2764921} for simple groups of Lie type and
by Kassabov and Matucci in \cite{MR2784792} for arbitrary finite groups. The lengths of the shortest laws that hold in $n$-step solvable (resp.\ $n$-step nilpotent) groups were estimated in \cite{thomelk}. Quantitative aspects of almost laws for compact groups (specifically for ${\rm SU}(n)$) were studied in \cites{convergent, thomelk}.
In Section \ref{lietype} we study the length of shortest laws in finite quasi-simple groups of Lie type. We point out a gap in a proof in \cite{MR2764921} and prove a weaker form of the upper bounds in Theorem 1 and Theorem 2 of \cite{MR2764921}.
Our main result asserts that any finite simple group $G$ of Lie type with rank $r$ and defined over a field with $q$ elements satisfies a non-trivial law $w_G \in {\mathbf F}_2$ of length $|w_G| \leq 48 \cdot q^{155\cdot r}$. For $G={\rm PGL}_n(q)$ with $q=p^s$ for some prime $p$, we get more precisely:
$$|w_G| \leq 48 \cdot \exp\left(2\sqrt{2} \cdot n^{1/2} \log(n)\right) \cdot q^{n-1}.$$
It remains an intriguing question to determine the asymptotics of the length of the shortest law in ${\rm PGL}_n(p)$ as $p$ increases, even for $n$ fixed or just in the case $n=3$. It is conceivable that our strategy for ${\rm Sym}(n)$ can be carried out for other families of finite simple groups, resulting in improved bounds -- see the remarks at the end of Section \ref{lietype}.
\section{Divisibility and group laws for \texorpdfstring{${\rm Sym}(n)$}{Sym(n)}}
\label{divlaw}
There is a direct relationship between the divisibility function on the free group and the existence of group laws on symmetric groups.
\begin{lemma} \label{elem}
An element $w \in {\mathbf F}_2$ is a law for ${\rm Sym}(n)$ if and only if $D_{{\mathbf F}_2}(w)>n$.
\end{lemma}
\begin{proof}
Let $w$ be a law for ${\rm Sym}(n)$. Consider a subgroup $\Lambda \subset {\mathbf F}_2$ of index less than or equal to $n$. We obtain a natural permutation action of ${\mathbf F}_2$ on the space of cosets ${\mathbf F}_2/\Lambda$ and $|{\mathbf F}_2/\Lambda| \leq n$. By assumption, $w$ fixes every point in ${\mathbf F}_2/\Lambda$. In particular, $w \Lambda = \Lambda$ and hence $w \in \Lambda$. Suppose now that $w$ is not a law for ${\rm Sym}(n)$. Then there exist permutations $\sigma,\tau \in {\rm Sym}(n)$ with $w(\sigma,\tau) \neq 1_n$. Let us take some point $k \in \{1,\dots,n\}$ such that $w(\sigma,\tau)(k) \neq k$. We set
$\Lambda := \{v \in {\mathbf F}_2 \mid v(\sigma,\tau)(k)=k \}.$
Then, $\Lambda \subset {\mathbf F}_2$ is a subgroup of index less than or equal to $n$ and $w \not \in \Lambda$. In particular, we get $D_{{\mathbf F}_2}(w) \leq n$. This finishes the proof.
\end{proof}
In order to establish estimates like the one in \eqref{mr} or \eqref{randomnew}, it remains to construct laws for ${\rm Sym}(n)$ whose length is as short as possible.
The most common strategy to construct laws for $\Gamma$ is to construct first a list of words $w_1,\dots,w_k$ in ${\mathbf F}_2$ so that for each pair $(g,h) \in \Gamma^2$, we have $w_i(g,h)=e$ for at least one $1 \leq i \leq k$. After this is done, the words are combined using an iterated nested commutator. The following lemma takes care of the second step, similar constructions appeared in \cite[Lemma 3.3]{MR2764921} and \cite[Lemma 10]{MR2784792}.
\begin{lemma}\label{lem:commut}
Let $v_1,\dotsc,v_m$ be non-trivial words in $\mathbf{F}_k$, $k\ge 2$. Then
there exists a non-trivial word $w\in\mathbf{F}_k$ that trivializes
the union of $k$-tuples trivialized by the $v_i$.
In other words, for every $i\in\{1,\dotsc,m\}$ and $\sigma_1,\cdots,\sigma_k$
in some group, $v_i(\sigma_1,\dots,\sigma_k)=1\Rightarrow w(\sigma_1,\dots,\sigma_k)=1$.
Moreover, the length of $w$ is
bounded by $16\cdot m^2\max|v_i|$.
\end{lemma}
\begin{proof}
We strengthen the requirements on $w$ to $|w|\le
4^{\lfloor\log_2(m-1)\rfloor+2}\max|v_i|$ (for $m\ge 2$, and
$4\max|v_i|$ for $m=1$) and we require that $w$ be a non-trivial
commutator. We now argue by induction on $m$. For $m=1$ we take
$w=[v_1,z]$ for some generator $z$ of $\mathbf{F}_k$. Since a
commutator is trivial in the free group only when the two words are
powers of the same word \cite{mks}, we choose $z$ to be some
generator such that $v_1$ is not a power of $z$ and we are done.
For $m\ge 2$, we apply the lemma inductively on $v_1,\dotsc,v_{\lfloor
m/2\rfloor}$ and on $v_{\lfloor m/2\rfloor +1},\dotsc,\linebreak[4]v_m$ and get
two non-trivial words $w_1$ and $w_2$ which together trivialize the
union of the $k$-tuples trivialized by the $v_i$, and
\[
|w_j|\le 4^{\lfloor \log_2(m-1)\rfloor+1}\max|v_i|\qquad \forall j\in\{1,2\}.
\]
Clearly $w=[w_1,w_2]$ trivializes the union of the $k$-tuples trivialized
by the $v_i$ and has the correct length. So we need only show that it
is not trivial. As above, $w$ can be trivial only if $w_1$ and $w_2$
are powers of the same word. But by induction $w_i$ are commutators
and hence cannot be non-trivial powers -- a result that goes back to Sch\"utzenberger \cite{MR0103219}. Hence $w$ can
be trivial only if $w_1=w_2^{\pm 1}$, but in this case $w_1$ and $w_2$
trivialize the same set of $k$-tuples and we can simply take
$w=w_1$ instead. This proves the lemma.
\end{proof}
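For the reader's convenience, let us note how the strengthened bound in the proof implies the bound claimed in the lemma: for $m\ge 2$ we have $\lfloor\log_2(m-1)\rfloor\le \log_2(m)$, so
\[
4^{\lfloor\log_2(m-1)\rfloor+2}\max_i|v_i|\;\le\; 4^{\log_2(m)+2}\max_i|v_i|\;=\;16\, m^2\max_i|v_i|,
\]
using $4^{\log_2(m)}=m^2$; for $m=1$ the bound $4\max_i|v_i|\le 16\max_i|v_i|$ is trivial.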
Let us illustrate this lemma by giving a quick argument for
\eqref{oldbound}. Following Landau \cite{landau}, we denote the
largest order of an element in ${\rm Sym}(n)$ by $g(n)$. Hence the
words $a,a^2,\dotsc,a^{g(n)}\in\mathbf{F}_2$ trivialize all couples in $\Sym(n)$. (Of
course, these words are also in $\mathbf{F}_1$ but Lemma \ref{lem:commut} only
works in $\mathbf{F}_k$, $k\ge 2$ so we add a new variable.) We now apply
Lemma \ref{lem:commut} and get a non-trivial word $w\in\mathbf{F}_2$
with $|w|\le 16g(n)^3$ which trivializes $\Sym(n)$.
Landau \cite{landau} showed that $g(n) \le \exp\left(C(n \log(n))^{1/2} \right)$ using the Prime Number Theorem. Hence, we can conclude that the length of the shortest law for ${\rm Sym}(n)$ satisfies \eqref{oldbound}.
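For instance, $g(10)=30$, attained by a permutation of cycle type $(2,3,5)$, so for $n=10$ the above construction combines the $30$ words $a,a^2,\dotsc,a^{30}$ into a single law of length at most $16\cdot 30^3$.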
Let us remark that this last argument demonstrates that the
requirement $k\ge 2$ in Lemma \ref{lem:commut} is really necessary:
even though all our words are in $\mathbf{F}_1$, any word in
$\mathbf{F}_1$ which trivializes all of $\Sym(n)$ must be $a^q$ where
$q$ is divisible by all primes smaller than $n$, which can be shown to
be much larger than \eqref{oldbound}, in fact exponential in $n$,
again by the Prime Number Theorem -- see Equation \eqref{pnt}.
We will use Lemma \ref{lem:commut} repeatedly in a similar way in the proof of Theorem \ref{random}.
\section{Random walks and mixing time}
In this section we recall some standard facts about the relationship between diameter, spectral gap, and mixing time for Cayley graphs of finite groups.
Let $G$ be a finite group and let $S$ be a finite and symmetric generating set. Recall, a set $S\subset G$ is called symmetric, if $S^{-1}:= \{s^{-1} \mid s\in S\}$ is equal to $S$. We consider the lazy random walk on $G$, which starts at the neutral element and whose transitions are defined as follows: $${\mathbb P}(\omega_{n+1} = g \mid \omega_n = h ) = \begin{cases} \frac12 & g=h \\
\frac1{2|S|} & g=sh \mbox{ for some $s \in S$}\end{cases}.$$
Consider the Hilbert space $\ell^2(G)$ with orthonormal basis $\{\delta_g \mid g \in G\}$ and the left-regular representation
$$\lambda \colon G \to {\rm U}(\ell^2(G)), \quad \lambda(g)\delta_h := \delta_{gh}.$$
We set
$$M_S := \frac{1}{2} + \frac1{2|S|}\sum_{s \in S} \lambda(s) \in B(\ell^2(G)),$$
and note that for any subset $E \subset G$,
$${\mathbb P}(\omega_n \in E) = \langle M_S^n(\delta_e),\chi_E \rangle,$$
where $\chi_E$ denotes the characteristic function of $E \subset G$. The properties of the random walk can be studied using the eigenvalues of the self-adjoint operator $M_S$, which we denote by
$$1 = \lambda_0 > \lambda_1(G,S) \geq \lambda_2(G,S) \geq \cdots \geq
\lambda_{|G|-1}(G,S)\geq 0,$$
where the last inequality is due to laziness. Note that the eigenspace for the eigenvalue $1$ is one-dimensional, since the Cayley graph is connected. The difference $1 - \lambda_1(G,S)$ is called the spectral gap of the random walk.
We denote the diameter of the Cayley graph associated with $S$ by
$${\rm diam}(G,S) := \min\{n \in {\mathbb N} \mid \forall g \in G, \exists s_1,\dots,s_n \in S \cup \{e\} : g=s_1\cdots s_n \}.$$
We have the following relationship between the diameter of a graph and the spectral gap of the random walk, see \cite[Corollary 1]{MR1245303}.
\begin{equation} \label{specgap}
1 -\lambda_1(G,S) \geq \frac{1}{2|S| \cdot {\rm diam}(G,S)^2}.
\end{equation}
We denote by $u \in \ell^2(G)$ the uniform distribution and note that \eqref{specgap} implies:
$$\left\| M_S^n(\delta_e) - u \right\|_2 \leq \lambda_1(G,S)^n \leq \left(1 - \frac{1}{2|S| \cdot {\rm diam}(G,S)^2}\right)^n.$$
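To spell out the step behind this inequality: $M_S$ is self-adjoint, fixes the uniform distribution $u$, and satisfies $\|M_S\xi\|_2\le\lambda_1(G,S)\|\xi\|_2$ for every $\xi$ orthogonal to the constants; since $\delta_e-u$ is orthogonal to the constants and $\|\delta_e-u\|_2\le 1$, we get
$$\left\|M_S^n(\delta_e)-u\right\|_2=\left\|M_S^n(\delta_e-u)\right\|_2\le \lambda_1(G,S)^n.$$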
The following proposition is well-known and uses this information about the speed of convergence to the uniform distribution to relate the diameter with the mixing time of the random walk.
\begin{proposition} \label{mixing}
Let $E \subset G$ be a subset and set $\alpha := |E|/|G|$.
If $$n \geq 2|S| \cdot{\rm diam}(G,S)^2 \cdot \log\left(2|G| \right),$$
then ${\mathbb P}(\omega_n \in E) \geq \alpha/2.$
\end{proposition}
\begin{proof} Using our assumption, we can compute
$$\left(1 - \frac{1}{2|S| \cdot{\rm diam}(G,S)^2} \right)^n \leq \exp\left(- \frac{n}{2|S| \cdot{\rm diam}(G,S)^2}\right) \leq \frac{1}{2|G|},$$
and we find the following estimate:
\begin{eqnarray*}
{\mathbb P}(\omega_n \in E) &=&\langle M_S^n(\delta_e),\chi_E \rangle \\
&\geq& \langle u,\chi_E \rangle -\left(1 - \frac{1}{2|S| \cdot{\rm diam}(G,S)^2}\right)^n \cdot \|\chi_E\| \\
&=& \frac{|E|}{|G|} - \left(1 - \frac{1}{2|S| \cdot{\rm diam}(G,S)^2}\right)^n \cdot |E|^{1/2} \\
&\geq& \frac{|E|}{|G|} - \frac{|E|^{1/2}}{2|G|}\\
& \geq& \alpha/2.
\end{eqnarray*}
This finishes the proof.
\end{proof}
\section{Proof of the main result}\label{sec:proof}
We wish to demonstrate Theorems \ref{random} and
\ref{thm:underBabai}. Recall that Theorem \ref{random} states that
there exists an absolute constant $C$ such that for each $n \ge 2$, there exists a non-trivial word $w_n \in {\mathbf F}_2$ of length
$$|w_n| \leq \exp\left(C \log(n)^4 \log \log(n) \right),$$
which is trivial whenever evaluated on ${\rm Sym}(n)$.
Theorem \ref{thm:underBabai} gets a better estimate under Babai's
conjecture.
The proof relies on a recent result of Helfgott-Seress \cite{seresshelfgott}, which gives new bound on the diameter of Cayley graphs of ${\rm Sym}(n)$, and a structure result about subgroups of ${\rm Sym}(n)$ obtained by Liebeck \cite{MR758332}.
Let us first explain this result, which will allow us to perform a crucial induction step in our proof.
\begin{theorem}[Liebeck]\label{maroti}
Let $X$ be a finite set and let $\Gamma \subset {\rm Sym}(X)$ be a subgroup. Then, either
\begin{enumerate}
\item[i)] $\Gamma = {\rm Sym}(X)$ or $\Gamma={\rm Alt}(X)$,
\item[ii)] There exist two disjoint non-empty sets $Y$ and $Z$ such
that $X\cong Y\cup Z$ and
$\Gamma\hookrightarrow\Sym(Y)\times\Sym(Z)$.
\item[iii)] There exist two sets $Y$ and $Z$, each with at least two elements, such
that $X\cong Y\times Z$ and $\Gamma\hookrightarrow
\Sym(Y)\wr\Sym(Z)$ where the action of $\Sym(Z)$ on $\Sym(Y)^Z$
implicit in the $\wr$ notation is the natural action.
\item[iv)] There exist two sets $Y$ and $Z$ as above and a $1 \leq k
\leq |Y|$ such that $X \cong \binom{Y}{k}^Z$ and $\Gamma$ is
isomorphic to a group $\Gamma'$ with
$${\rm Alt}(Y)^Z \subseteq \Gamma' \subseteq {\rm Sym}(Y) \wr {\rm Sym}(Z),$$
\item[v)] the size of $\Gamma$ is bounded by $$|\Gamma| \leq \exp\left(C_1 \log(n)^2\right),$$ where $n=|X|$.
\end{enumerate}
\end{theorem}
Case ii) corresponds to the non-transitive case and every element of
$\Gamma$ preserves the sets $Y$ and $Z$. Case iii) corresponds to the
transitive imprimitive case and each element of $\Gamma$ preserves the
division of $X$ into copies of $Y$. To explain case iv), denote an
element of $X$ by $(y_{i,j})$ where $i\in \{1,\dotsc,|Z|\}$ and $j\in\{1,\dotsc,k\}$.
Then an element $(\sigma_1,\dotsc,\sigma_{|Z|})$ of $\Sym(Y)^Z$ acts on $X$
by $(y_{i,j})\mapsto (\sigma_i(y_{i,j}))$
while the elements of $\Sym(Z)$ act on the $i$ coordinates.
\begin{remark}Let us mention that the theorem above has been optimized by Mar\'oti in \cite{MR1943938}, giving a constant in clause v) equal to $4$
with four exceptions, the Mathieu groups $M_{11},M_{12}, M_{23}$
and $M_{24}$.
\end{remark}
Another version of Liebeck's result is the following, achieved in
\cites{maxorder, maxorder2} and \cite{maxorder3}, making heavy use of
CFSG. This version is necessary for our Theorem \ref{thm:underBabai}
and could have been used to simplify the proof of our main Theorem
\ref{random}. Since its proof is not fully published at this time, we
will prove Theorem \ref{random} using Theorem \ref{maroti} and use
Theorem \ref{newthm} only for our Theorem \ref{thm:underBabai}.
To state the theorem, let us say that a permutation $\sigma \in {\rm
Sym}(n)$ has a {\it regular} cycle, if it admits a cycle of length
equal to its order.
\begin{theorem} \label{newthm}
Let $X$ be a finite set and $G \subset {\rm Sym}(X)$ be a primitive permutation group. Then, either
\begin{enumerate}
\item[i)] there exist finite sets $Y,Z$, $1 \leq k \leq |Y|$, and an isomorphism
$X \cong \binom{Y}{k}^Z$ such that $${\rm Alt}(Y)^Z \subseteq G \subseteq {\rm Sym}(Y) \wr {\rm Sym}(Z),$$ or
\item[ii)] every element of $G$ has a regular cycle.
\end{enumerate}
\end{theorem}
\begin{remark}
Note that the first case in Theorem \ref{newthm} includes the case $k=|Z|=1$, which corresponds to case i) in Theorem \ref{maroti}. The only consequence of Theorem \ref{newthm} that we are going to use is that in Case ii), the order of all elements is bounded by $|X|$.
\end{remark}
Let us now start with the proof of Theorem \ref{random}.
\begin{proof}
As the result is asymptotic we will assume $n$ is sufficiently large. We first want to construct a non-trivial word $v_n \in {\mathbf F}_2$, such that $v_n(\sigma,\tau)=1_n$ whenever the group generated by $\sigma,\tau$ is isomorphic to ${\rm Sym}(k)$ or ${\rm Alt}(k)$ for some $k \leq n$. For simplicity, let us restrict to the case ${\rm Sym}(k)$. We denote by $P(k)$ the set of $k$-cycles in ${\rm Sym}(k)$ and note that $|P(k)|/|{\rm Sym}(k)| = 1/k$.
Let $(\sigma,\tau) \in {\rm Sym}(k)^2$ be a pair of generators and let us assume that $k \leq n$. By the main result in Helfgott-Seress \cite{seresshelfgott}, the diameter of the Cayley graph associated with the set $\{\sigma,\tau,\sigma^{-1},\tau^{-1} \}$ is at most
$$\exp\left(C \log(k)^4 \log\log(k) \right),$$
for some universal constant $C>0$.
Thus by Proposition \ref{mixing}, there exists $C>0$ such that a randomly chosen word (according to the standard lazy random walk on ${\mathbf F}_2$) at time
$$4 \cdot \exp(C \log(n)^4 \log\log(n))^2 \cdot \log\left(2|{\rm Sym}(n)| \right)$$ will lie in the set $P(k)$ with probability at least $1/(2n)$.
Now, if we take $8 n^2 \log(n)$ words at the same time independently at random, then the event that none of these words satisfies $w(\sigma,\tau) \in P(k)$ has probability at most
$$(1-1/(2n))^{8n^2 \log(n)} \leq \exp(-4n \log(n)).$$
We now consider all $k$ with $k \leq n$ at the same time and note that
there are at most $(n!)^2 \leq \exp(2 n \log n)$ pairs of permutations
that generate ${\rm Sym}(k)$ for some $k \leq n$. Hence, the probability that there exists $k$ with $k \leq n$ and a pair $(\sigma,\tau) \in {\rm Sym}(k)^2$ that generates ${\rm Sym}(k)$ so that for all of the $8 n^2 \log(n)$ independently chosen words $w$, we have $w(\sigma,\tau) \not \in P(k)$ is less than $$\exp(- 4 n \log(n)) \cdot \exp(2 n \log n) = \exp(-2 n \log(n))< 1.$$
Hence, there exists a set $W \subset {\mathbf F}_2$ consisting of at most $8n^2 \log(n)$ words of length at most
$$4 \cdot \exp(C \log(n)^4 \log\log(n))^2 \cdot \log\left(2|{\rm Sym}(n)| \right) \leq \exp(C' \log(n)^4 \log\log(n))$$ such that for all $k$ with $k \leq n$ and all pairs $(\sigma,\tau) \in {\rm Sym}(k)^2$ that generate ${\rm Sym}(k)$ at least one word $w \in W$ satisfies $w(\sigma,\tau) \in P(k)$.
Note also that since $P(k)$ does not contain the trivial permutation and $w(\sigma,\tau) \in P(k)$, we have $e \not \in W$, where $e$ denotes the neutral element of ${\mathbf F}_2$.
Now, by construction, the order of any element in $P(k)$ is $k \leq n$. We consider the set
$$W' := \left\{w^k \mid w \in W, 1 \leq k \leq n \right\}$$
which consists only of non-trivial words, since ${\mathbf F}_2$ is torsion-free. For each $k$ with $k \leq n$ and any pair $(\sigma,\tau) \in {\rm Sym}(k)^2$ that generates ${\rm Sym}(k)$, we have $w(\sigma,\tau)=1_k$ for at least one $w \in W'$.
Moreover, the length of elements in $W'$ is bounded by
$$n \cdot \exp(C \log(n)^4 \log \log(n)) \leq \exp(C' \log(n)^4 \log \log(n)),$$
and the number of elements in $W'$ is bounded by
$8n^3 \log(n).$ By Lemma \ref{lem:commut} there is a word $v$ which
trivializes all couples $\sigma,\tau$ in ${\rm Sym}(k)$ which generate either $\Sym(k)$
or $\Alt(k)$ for some $k\le n$, and
\[
|v|\le 16(8n^3\log n)^2\exp(C\log(n)^4\log\log(n))\le
\exp(C'\log(n)^4\log\log(n)).
\]
To finish the proof we need to deal with the case that $(\sigma,\tau)$
generates an arbitrary subgroup. In order to do so we will make use of
Theorem \ref{maroti}. Let us first construct the candidate word $x \in {\mathbf F}_2$,
and then prove its properties. The construction is as follows. We use
induction to construct non-trivial words $x_i$, for all $i$ with $i<\log_2(n)$, which are laws for
$\Sym(2^i)$ of minimal length. We wish to compose $x_i$ and $x_j$, but there is nothing
to guarantee that the result is non-trivial. We therefore embed
$\mathbf{F}_2=\langle a,b\rangle$ into $\mathbf{F}_4=\langle
a,b,c,d\rangle$ and define $x_i^{(1)}=[x_i,c]$ and $x_i^{(2)}=[x_i,d]$,
and define
\[
y_{i,j}'(a,b,c,d)=x_i(x_j^{(1)}(a,b,c,d),x_j^{(2)}(a,b,c,d)).
\]
$y_{i,j}'$ is clearly non-trivial --- in fact, it is reduced as
written. We embed ${\mathbf F}_4$ back in ${\mathbf F}_2$ via
${\mathbf F}_4 = \langle a, bab^{-1}, b^2ab^{-2}, b^3ab^{-3} \rangle
\subset {\mathbf F}_2$, and thus define
$$y_{i,j}(a,b) := y'_{i,j}(a,bab^{-1},b^2ab^{-2},b^3ab^{-3}),$$
which is non-trivial.
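This last step uses the classical fact that the elements $a, bab^{-1}, b^2ab^{-2}, b^3ab^{-3}$ freely generate a free subgroup of rank $4$ in ${\mathbf F}_2$; indeed, the normal closure of $a$ is freely generated by the conjugates $b^iab^{-i}$, $i\in{\mathbb Z}$. Hence the substitution above is injective on ${\mathbf F}_4$, and the non-trivial word $y'_{i,j}$ has a non-trivial image $y_{i,j}$.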
Finally, let $v'$ be a word of length $\exp(C\log(n)^2)$ which
trivializes any element of order less than or equal $\exp(C_1\log(n)^2)$ where $C_1$
is from clause v) in Theorem \ref{maroti}, constructed using Lemma \ref{lem:commut}. To get our candidate $x$, we apply Lemma
\ref{lem:commut} again, this time to the words
\begin{equation}\label{eq:law_induction}
\{v,v'\}\cup\{y_{i,j}:1\le i, j<\log_2(n), 2^{i+j} < 4n\},
\end{equation}
where $v$ was the word constructed in the previous step. The
resulting word is our candidate $x$. We need to show it is indeed a
law for $\Sym(n)$ and to estimate its length.
We first show that $x$ is a law for ${\rm Sym}(n)$.
Indeed, consider a pair $(\sigma,\tau) \in {\rm Sym}(n)^2$ and denote
the subgroup that the pair generates by $\Gamma \subseteq {\rm
Sym}(n)$. According to Theorem \ref{maroti}, there are five cases to
consider. Let us first dispose of case ii), the case that $\Gamma$ is
non-transitive. In this case we note that if $x$ trivializes the
restrictions of $\sigma$ and $\tau$ to every orbit of $\Gamma$, then
it trivializes $(\sigma,\tau)$. Hence it is enough to show that
$x(\sigma,\tau)=1_k$ for every $\sigma$ and $\tau$ that generate a
transitive subgroup of $\Sym(k)$ for some $k\le n$. This leaves four
cases to verify.
Case i): $\Gamma = {\rm Sym}(k)$ or $\Gamma={\rm Alt}(k)$. In this case the word $v$ vanishes on $(\sigma,\tau)$. Since $x$ is defined to be an iterated commutator involving $v$, it will vanish as well.
Case iii): The action is transitive and imprimitive. In this case the
pair $(\sigma,\tau)$ generates a subgroup of ${\rm Sym}(l) \wr {\rm
Sym}(k/l)$ for some divisor $l$ of $k$ with $1<l<k$. Let $i\ge 1$ be the smallest number such
that $2^i\ge l$ and $j\ge 1$ the smallest such that $2^j\ge k/l$. To
show that $i$ and $j$ satisfy the requirements, note that
$l,k/l\le n/2$ and hence $i,j<\log_2(n)$ and that $2^{i+j}< (2l)\cdot (2k/l)=4k\le 4n$ so $y_{i,j}$ is one of the
words participating in the construction of $x$. By construction, the word $y_{i,j}$ vanishes on the pair $(\sigma,\tau)$. Indeed, the wreath product admits an extension
$$1 \to {\rm Sym}(l)^{k/l} \to {\rm Sym}(l) \wr {\rm Sym}(k/l)
\stackrel{\pi}{\to} {\rm Sym}(k/l) \to 1.$$
Since $x_j$ is a law for $\Sym(k/l)$ (recall $2^j\ge k/l$), we get that
$x_j^{(1)}(\alpha,\beta,\gamma,\delta)$ and $x_j^{(2)}(\alpha,\beta,\gamma,\delta)$ lie in the kernel of the extension from
above for any $\alpha,\beta,\gamma,\delta\in\Sym(l)\wr\Sym(k/l)$. Since $x_i$ is a law for
$\Sym(l)$ (as $2^i\ge l$), and hence for the direct power $\Sym(l)^{k/l}$, we get that $y_{i,j}'(\alpha,\beta,\gamma,\delta)=1_k$ and hence also
$y_{i,j}(\sigma,\tau)=1_k$ and $x(\sigma,\tau)=1_k$.
Case iv): In this case the group $\Gamma$ is also contained in wreath product ${\rm Sym}(r) \wr {\rm Sym}(s)$, with
$n=\binom{r}{q}^s$ for some $1 \leq q < r$ and $s,r \geq 2$. This
implies that $r \leq \sqrt{n}$ and $s \leq \log_2(n)$. For $n$
sufficiently large we can find $i$ and $j$ as in case iii), and we are
done.
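For completeness, the two estimates used in case iv) follow from the elementary observation that $\binom{r}{q}\ge r$ for $1\le q<r$ and that $s\ge 2$, so that
$$n=\binom{r}{q}^{s}\ \ge\ r^{s}\ \ge\ r^{2}
\qquad\mbox{and}\qquad n\ \ge\ 2^{s},$$
which give $r\le \sqrt{n}$ and $s\le \log_2(n)$.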
Case v): In this case, the group has cardinality bounded by $\exp(C_1 \log(n)^2)$. By construction of $v'$, it vanishes on all elements whose order is at most $\exp(C_1 \log(n)^2)$. Hence, the word $x$ vanishes on $\Gamma$.
This finishes the case study and we conclude that indeed, $x$ is a non-trivial
law for ${\rm Sym}(n)$. It remains to bound its length. The number of
terms in \eqref{eq:law_induction} is, up to constants, $\log(
n)^2$. Their lengths satisfy $|v|\le\exp(C\log(n)^4\log\log(n))$,
$|v'|\le\exp(C\log(n)^2)$ and $|y_{i,j}|\le C\alpha(2^i)\alpha(2^j)$
where $\alpha(n)$ is, as before, the length of shortest identities in
${\mathbf F}_2$ that hold for ${\rm Sym}(n)$. Hence
Lemma \ref{lem:commut} gives
\begin{equation}\label{eq:alpha_rec}
\alpha(n)\le |x|\le C\log(n)^4\max\left\{\exp(C\log(n)^4\log\log(n)),\max_{i,j}\alpha(2^i)\alpha(2^j) \right\}.
\end{equation}
The theorem is thus proved, given the elementary analysis of
\eqref{eq:alpha_rec} done in Lemma \ref{lem:anal_rec} below.
\end{proof}
\begin{lemma}\label{lem:anal_rec}
A sequence $\alpha(n)$ satisfying the recursion \eqref{eq:alpha_rec}
satisfies
\[
\alpha(n)< \exp(C\log(n)^4\log\log(n)).
\]
\end{lemma}
\begin{proof}Because of the powers of $2$ appearing in
\eqref{eq:alpha_rec}, it is natural to define $m$ by
$n\in(2^{m-1},2^m]$ and show that
$\alpha(n)\le\exp(Cm^4\log m)$.
Let $K$ be some parameter. We will show that if $K$ is chosen
appropriately (sufficiently large), the inequality
$\alpha(n)\le\exp(Km^4\log m)$ carries over by induction over
$m$. Assume therefore it holds inductively and examine the term
$\max_{i,j}\alpha(2^i)\alpha(2^j)$ appearing in
\eqref{eq:alpha_rec}. Recall that the maximum is taken over $i,j<\log_2
(n)$, i.e., $i,j< m$, and such that $2^{i+j}< 4n$ i.e.\ $i+j\le
m+1$. We now claim that under these restrictions there exists some
$m_0$ such that
\begin{equation}\label{eq:infi}
i^4 + j^4 \le m^4 - m^2 \qquad\forall m\ge m_0.
\end{equation}
To see \eqref{eq:infi}, assume without loss of generality that
$i+j=m+1$ and that $i\ge j$ (so that in particular $i>m/2$) and write
\[
i^4+j^4\le (i+j)^4-4i^3j= (m+1)^4 - 4i^3j \le m^4 + 15m^3 - 4i^3j <
m^4 + m^3(15-j/2).
\]
and see that \eqref{eq:infi} holds whenever $j\ge 32$ (regardless of
$m$). For $j< 32$ we use that $i<m$ to get that \eqref{eq:infi} holds
whenever $m>m_0$ for some $m_0$ sufficiently large.
Now insert \eqref{eq:infi} and the induction hypothesis into \eqref{eq:alpha_rec} and get
\begin{align*}
\log\alpha(n)&\le C+4\log(m)+\max\{Cm^4\log
(m),\max_{i,j}\log\alpha(2^i)+\log\alpha(2^j)\}\\
& \le C+4\log(m)+\max\{Cm^4\log(m),\max_{i,j}Ki^4\log(i)+Kj^4\log(j)\}\\
&\le C+4\log(m)+\max\{Cm^4\log(m),\max_{i,j}(i^4+j^4)\cdot K\log(m)\}\\
&\stackrel{\smash{\textrm{\eqref{eq:infi}}}}{\le}
C+4\log(m)+\max\{Cm^4\log(m),K(m^4-m^2)\log(m)\}
\end{align*}
whenever $m$ is sufficiently large. Taking $K$ bigger than the
absolute constants $C$ appearing in the last stage, we see that for
$m>m_0$, we may conclude that $\alpha(n)\le
\exp(Km^4\log(m))$ from our induction hypothesis. Increase $K$, if
necessary, so that $\alpha(n)\le \exp(Km^4\log(m))$ for all $m\le
m_0$. With this value of $K$ our induction works and the lemma is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:underBabai}]The proof is very
similar to that of Theorem \ref{random} so we will be brief. Recall
how \eqref{eq:alpha_rec} was achieved: the term
$\exp(C\log(n)^4\log\log(n))$ was the length of the word $v$ (see
the discussion before \eqref{eq:law_induction}), which was the product of the square of the worst diameter
of a Cayley graph of $\Sym(n)$ with respect to any generating set and several polynomial factors (an $n\log n$ for the difference
between the square of the diameter and the mixing time, an $n$ because the
random words need to be taken to a power, and an $n^6\log(n)^2$ for
the number of random words needed to ensure all generators are
accounted for). Under Babai's conjecture $\exp(C\log(n)^4\log\log(n))$ can be replaced by
$n^C$. Further, there was the length of the word $v'$ which was
$\exp(C\log(n)^2)$ that came from clause v) in Theorem
\ref{maroti}. Replacing Theorem \ref{maroti} with Theorem
\ref{newthm} we see that since all remaining cases have order at most $
n$, the $\exp(C\log(n)^2)$ can also be replaced by a
polynomial. All in all we get
\[
\alpha(n)\le C\log(n)^4 \max\left\{n^C,\max_{i,j}\alpha(2^i)\alpha(2^j)\right\}.
\]
A similar analysis will show that $\alpha(n)\le n^{C\log\log(n)}$ holds under
this restriction: the main term this time is not the $n^C$ that appears in
the maximum but the accumulation of the $\log(n)^4$ outside the
maximum, which one gets whenever moving from $n$ to $2n$. This finishes the outline of the proof.
\end{proof}
\section{Lower bounds for the divisibility function}
Lemma \ref{elem} allows us to convert the upper bound on the length of the shortest law for ${\rm Sym}(n)$ into a lower bound for divisibility function for ${\mathbf F}_2$. Hence, another way of interpreting our results in Theorem \ref{random} and Theorem \ref{thm:underBabai} is the following theorem.
\begin{theorem} \label{divthm}
The divisibility function for the free group ${\mathbf F}_2$ satisfies the lower bound
$$D_{{\mathbf F}_2}(n) \ge \exp\left(\left(\frac{\log(n)}{C\log \log(n)} \right)^{1/4} \right)$$
for some constant $C>0$.
Moreover, if Babai's Conjecture holds, then the divisibility function for the free group ${\mathbf F}_2$ satisfies the stronger lower bound
$$D_{{\mathbf F}_2}(n) \ge \exp\left(\frac{\log(n)}{C\log \log(n)} \right) = n^{\frac{1}{C\log\log(n)}}.$$
\end{theorem}
\begin{proof}
Let $w \in {\mathbf F}_2$ be a shortest law for ${\rm Sym}(n)$ and set
$$k:= \lfloor \exp\left(D\log(n)^4 \log \log(n) \right) \rfloor.$$
We compute
\begin{eqnarray*}
\left(\frac{\log(k)}{\log \log(k)} \right)^{1/4} &\leq& \left(\frac{D\log(n)^4 \log \log(n)}{\log\left( D\log(n)^4 \log \log(n) \right)} \right)^{1/4}\\
&=& \left(\frac{D\log(n)^4 \log \log(n)}{\log(D) + 4 \log \log(n) + \log \log \log(n)} \right)^{1/4}\\
&\leq& D/4 \cdot \log(n).
\end{eqnarray*}
Using the estimate from Theorem \ref{random} and Lemma \ref{elem} we get:
\[
D_{{\mathbf F}_2}(k) \geq D_{{\mathbf F}_2}(w) > n
\geq\exp\left( 4/D \cdot \left(\frac{\log(k)}{\log \log(k)} \right)^{1/4} \right).
\]
This proves the first claim. The second claim is proved in a similar way using Theorem \ref{thm:underBabai} instead of Theorem \ref{random}. This finishes the proof.
\end{proof}
Note that (at least assuming Babai's Conjecture) we are getting somewhat close to the linear upper bound for $D_{\mathbf F_2}(n)$, which explains in part why attempts have failed to improve this upper bound substantially.
\section{Finite simple groups of Lie type}
\label{lietype}
We now turn to laws satisfied for finite simple groups of Lie type, see \cite[Prop. 2.3.2]{MR1303592} or \cite[Sec. 13.1]{MR0407163} for a detailed discussion.
The proof of the upper bound on the minimal length of a law for such a
group in \cite[Theorem 2]{MR2764921} contains a gap, as we will now explain. In the proof of \cite[Lemma 3.1]{MR2764921}, it is assumed that a polynomial of degree $n$ has a splitting field of degree less than or equal $n$. This is not correct and a more detailed analysis is needed to overcome this problem. Even though the statement of \cite[Lemma 3.1]{MR2764921} is correct, the author refers to the proof of this lemma in the proof of \cite[Proposition 3.4]{MR2764921} which is crucial to establish the bounds. We will only arrive at a weaker bound than the one obtained in \cite[Theorem 2]{MR2764921}. Our main result is as follows.
\begin{theorem} \label{finitelie}
Let $G$ be a finite simple group of rank $r$ defined over a field with $q$ elements. There exists a law $w_G$ for $G$ with
$|w_G| \leq 48 \cdot q^{155 \cdot r}.$
For $G={\rm PGL}_n(q)$, with $q=p^s$ for some prime $p$, we have more precisely:
$$|w_G| \leq 48 \cdot \exp\left(2\sqrt{2} \cdot n^{1/2} \log(n)\right) \cdot q^{n-1}.$$
\end{theorem}
\begin{proof} Let $p$ be a prime and $q=p^s$ for some $s \in {\mathbb N}$.
Let $A \in {\rm GL}_n(q)$. Consider the Jordan decomposition $A=A_s A_u=A_uA_s$, where $A_u$ is unipotent and $A_s$ is semi-simple. The characteristic polynomial $\chi_A$ of $A$ is a polynomial of degree $n$ over ${\mathbb F}_q$. Let $j_1,\dots,j_l$ be the degrees of irreducible factors of $\chi_A$, where each degree is counted only once. We set $k := (q^{j_1}-1) \cdots (q^{j_l}-1)$ and claim that $A_s^k=1_n$. Indeed, the algebra ${\mathbb F_q}[A_s] \subset M_{n}({\mathbb F_q})$ splits as a sum of fields isomorphic to ${\mathbb F}_{q^{j_i}}$, for $1 \leq i \leq l$, and every non-zero element $\alpha \in {\mathbb F}_{q^{j_i}}$ satisfies $\alpha^{q^{j_i}-1}=1$.
Now,
$$\frac{l^2}{2} \leq 1 + \cdots + l \leq j_1 + \cdots + j_l \leq n$$ and hence $l \leq \sqrt{2n}$.
We obtain that there are at most $n^{\sqrt{2n}} = \exp\left(\sqrt{2} \cdot n^{1/2} \log(n)\right)$ possibilities that have to be taken into account. Note also that for each instance we have
$$k=(q^{j_1}-1) \cdots (q^{j_l}-1) \leq q^n.$$
Hence, we obtain at most $\exp\left(\sqrt{2} \cdot n^{1/2} \log(n)\right)$ exponents $k_1,\dots,k_m \in {\mathbb N}$ of size less than or equal to $q^n$, such that for every $A \in {\rm GL}_n(q)$ we have $A_s^{k_i} = 1_n \in {\rm GL}_n(q)$ for some $1 \leq i \leq m$. Since $A_s$ and $A_u$ commute, we conclude that $A^{k_i}$ is unipotent. For any unipotent matrix $B$, it is easy to see that $B^{p^{\lceil \log_2(n) \rceil}} = 1_n.$ Thus, for any matrix $A \in {\rm GL}_n(q)$, we have
$$A^{p^{\lceil \log_2(n) \rceil}k_i }= 1_n, \quad \mbox{for some $1 \leq i \leq m$}.$$
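To justify the exponent $p^{\lceil\log_2(n)\rceil}$ used here: a unipotent matrix $B$ satisfies $(B-1_n)^n=0$, and since we are in characteristic $p$,
$$B^{p^j}=\big(1_n+(B-1_n)\big)^{p^j}=1_n+(B-1_n)^{p^j}=1_n \quad\mbox{as soon as } p^j\ge n,$$
and $p^j\ge 2^j\ge n$ holds for $j=\lceil\log_2(n)\rceil$.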
We can now set $w_i = a^{p^{\lceil \log_2(n) \rceil}k_i}$ and proceed. Using Lemma \ref{lem:commut}, we obtain a word $w \in {\mathbf F}_2$, trivial on all pairs $A,B \in {\rm GL}_n(q)$, whose length satisfies
\begin{align} \label{trivialbound}
|w| &\leq 16\exp\left(2\sqrt{2} \cdot n^{1/2} \log(n)\right) \cdot
\left(2 p^{\lceil \log_2(n) \rceil}q^n +2 \right) \leq 48 \cdot q^{5n}.
\end{align}
There is still room for improvement.
First of all, it is known that no element of ${\rm PGL}_n(q)$ has
order $q^n-1$. Thus, we can divide each $k$ (with more than two
factors) appearing above by $q-1$ and the construction would still work. In any case, we obtain $k \leq q^{n-1}$. At the same time, the factor $p^{\lceil \log_2(n) \rceil}$ can only appear in the presence of multiplicities of eigenvalues, which is obvious when looking at the Jordan normal form of the matrix. Thus, the factor of the form $p^{\lceil \log_2(n) \rceil}$ is not necessary, see \cite[Corollary 2.7]{maxorder} for details, and we obtain
\begin{equation} \label{bound}
|w| \leq 48 \cdot \exp\left(2\sqrt{2} \cdot n^{1/2} \log(n)\right) \cdot q^{n-1}.
\end{equation}
This finishes the proof in the case of ${\rm PGL}_n(q)$. The general case follows with the same arguments as in \cite[Section 4.2]{MR2764921}.
Indeed, it is well-known that there exists a universal constant $D>0$ such that any finite simple group $G$ of Lie type with rank $r$ defined over a field with $q$ elements embeds either into ${\rm PSL}_{Dr}(q)$ or ${\rm SL}_{Dr}(q)$. Thus, any law for ${\rm GL}_{Dr}(q)$ will be a law for $G$ and we can set $C:=5D $ in combination with the estimate in \eqref{trivialbound}. A case study that was carried out in \cite[Section 4.2]{MR2764921} shows that $D=31$, i.e., $C=155$ is enough. This finishes the proof.
\end{proof}
It seems likely that for fixed $n$, much better upper bounds can be achieved for quasi-simple groups of Lie type along the line of arguments that were used to establish Theorem \ref{random}, using
\begin{itemize}
\item bounds on the diameter of finite simple groups of Lie type of fixed rank, see the fundamental work of Breuillard-Green-Tao \cite{MR2827010} and Pyber-Szab\'o \cite{pyber},
\item the elementary fact that the density of diagonalizable elements in ${\rm GL}_n(q)$ is at least $\exp(-n \log(n))$ if $q \geq n$, and
\item the work of Larsen-Pink, see \cite{MR2813339}, in order to carry out an induction argument similar to the one in the proof of our Theorem \ref{random}.
\end{itemize}
This will be the topic of further investigation.
\section*{Acknowledgments}
This note was written during the trimester on {\it Random Walks and
Asymptotic Geometry of Groups} at the Institut Henri Poincar\'{e} in
Paris. We are grateful to this institution for its hospitality. We are
grateful to Mark Sapir for interesting remarks -- especially about the
comparison with the study of identities for associative algebras, and
for bringing the work of Gimadeev-Vyalyi \cite{MR2972333} to our
attention. The second author thanks Emmanuel Breuillard and Martin
Kassabov for valuable comments. The first author was supported by the
Israel Science Foundation and the Jesselson Foundation.
The second author was supported by ERC Starting Grant No. 277728.
\end{document}
\begin{document}
\footskip=0pt
\footnotesep=2pt
\allowdisplaybreaks
\newtheorem{claim}{Claim}[section]
\theoremstyle{definition}
\newtheorem{thm}{Theorem}[section]
\newtheorem*{thmm}{Theorem}
\newtheorem{mydef}{Definition}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{remark}[thm]{Remark}
\newtheorem{rem}{Remark}[section]
\newtheorem*{propp}{Proposition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{conj}[thm]{Conjecture}
\newcommand{\bd}[1]{\mathbf{#1}}
\numberwithin{equation}{section}
\title{On the blowup mechanism of smooth solutions to 1D quasilinear strictly hyperbolic systems
with large initial data\footnote{Li Jun ([email protected]) is supported by NSFC (No.11871030).
Xu Gang ([email protected], [email protected]) and Yin Huicheng ([email protected], [email protected]) are
supported by NSFC (No.11731007, No.11971237).}}
\author[1]{Li Jun}
\author[2]{Xu Gang}
\author[1,2]{Yin Huicheng}
\affil[1]{Department of Mathematics, Nanjing University, Nanjing 210093, China}
\affil[2]{School of Mathematical Sciences and Institute of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, China}
\date{}
\maketitle
\thispagestyle{empty}
\begin{abstract} For the first order 1D $n\times n$ quasilinear
strictly hyperbolic system $\partial_tu+F(u)\partial_xu=0$ with $u(x, 0)=\varepsilon u_0(x)$, where $\varepsilon>0$ is small,
$u_0(x)\not\equiv 0$ and $u_0(x)\in C_0^2(\Bbb R)$,
when at least one eigenvalue of $F(u)$ is genuinely nonlinear, it is well-known that at the finite blowup time $T_{\varepsilon}$,
the derivatives $\partial_{t,x}u$ blow up while the solution $u$ itself remains small.
For the 1D scalar equation or $2\times 2$ strictly hyperbolic system (corresponding to $n=1, 2$),
if the smooth solution $u$ blows up in finite time, then
the blowup mechanism can be well understood
(\emph{i.e.}, only the blowup of $\partial_{t,x}u$ happens). In the present paper,
for the $n\times n$ ($n\ge 3$) strictly hyperbolic system with a class of large initial data,
we are concerned with the blowup mechanism of the smooth solution $u$ at the finite blowup time and
the detailed singularity behaviours
of $\partial_{t,x}u$ near the blowup point. Our results are based on the efficient decomposition of $u$ along the different
characteristic directions, the suitable introduction of the modulated coordinates and the global weighted energy estimates.
\end{abstract}
\vskip 0.2cm
{\bf Keywords:} Blowup mechanism, strictly hyperbolic system, genuinely nonlinear,
geometric blowup, modulated coordinate, global weighted energy estimate.\vskip 0.2 true cm
{\bf 2010 Mathematics Subject Classification.} 35L03, 35L67.
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{i}
In this paper, we are concerned with the blowup mechanism of smooth solutions to the
following Cauchy problem for the 1D $n\times n$ quasilinear strictly hyperbolic system:
\begin{subequations}\label{i-1}\begin{align}
&\partial_t u+F(u)\partial_x u=0,\label{i-1a}\\
&u(x, 0)=u_0(x),\label{i-1b}
\end{align}
\end{subequations}
where $t\ge 0$, $x\in\Bbb R$, $u=(u_1, \cdots, u_n)^{\top}$, the $n\times n$ real matrix $F(u)$ depends smoothly on its argument $u$,
and $u_0(x)\in C^2(\Bbb R)$. The strict hyperbolicity of system \eqref{i-1a} means that $F(u)$ has $n$ distinct real eigenvalues
\begin{equation}\label{i-2}
\lambda_1(u)<\cdots<\lambda_n(u),
\end{equation}
and the corresponding right eigenvectors are denoted by $\gamma_1(u), \cdots, \gamma_n(u)$, respectively. The
system \eqref{i-1a} is called genuinely nonlinear with respect to some eigenvalue $\lambda_{i_0}(u)$ ($1\le i_0\le n$) when
\begin{equation}\label{i-3}
\nabla_u\lambda_{i_0}(u)\cdot \gamma_{i_0}(u)\neq 0.
\end{equation}
It is called linearly degenerate with respect to the eigenvalue $\lambda_{i_0}(u)$ when
\begin{equation}\label{y-1}
\nabla_u\lambda_{i_0}(u)\cdot \gamma_{i_0}(u)\equiv 0.
\end{equation}
The purpose of this paper is to discuss the blowup mechanism of smooth solutions to problem \eqref{i-1}
for a class of large smooth initial data $u_0(x)$, provided that system \eqref{i-1a} is genuinely nonlinear with respect to some eigenvalue $\lambda_{i_0}(u)$ with $i_0\in \{1, \cdots, n\}$.
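To fix ideas on these notions, consider the simplest example (an illustration, not needed later): for the Burgers equation $\partial_t u+u\partial_x u=0$ one has $n=1$, $\lambda_1(u)=u$ and $\gamma_1(u)=1$, so that
\begin{align*}
\nabla_u\lambda_1(u)\cdot\gamma_1(u)=1\neq 0,
\end{align*}
and the unique characteristic field is genuinely nonlinear; on the other hand, if $F(u)\equiv F_0$ is a constant matrix, then every eigenvalue $\lambda_{i}(u)$ is constant and \eqref{y-1} holds for every $i$, i.e.\ all characteristic fields are linearly degenerate.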
\subsection{Reviews and problems}
Consider first the 1D scalar equation
\begin{equation}\label{Y-2}\begin{cases}
\partial_t v+f(v)\partial_x v=0,\\
v(x, 0)=v_0(x),
\end{cases}
\end{equation}
where $v_0(x)\not\equiv 0$, $v_0(x)\in C_0^1(\mathbb{R})$, $f(v)$ is a $C^1$ smooth function
and $f'(v)\not= 0$ for $v\in \text{supp}\, v_0(x)$.
Set $g(x)=f(v_0(x))$. By the method of characteristics it is easy to see that
the $C^1$ solution $v$ blows up
at the finite positive time $T^*=-\frac{1}{\min g'(x)}$ since $\min\limits_{x\in\mathbb{R}}g'(x)<0$. Meanwhile,
$v\in C(\mathbb{R}\times [0, T^*])$ and $\lim\limits_{t\nearrow T^*}\|\partial_{t,x}v(\cdot, t)\|_{C(\Bbb R)}=\infty$ hold.
This illustrates that the blowup of the solution $v$ to problem \eqref{Y-2} is a
geometric blowup in the terminology of \cite{A0}.
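To recall the computation behind this statement: along the characteristic $x(t)=x_0+t g(x_0)$ the solution is constant, $v(x(t),t)=v_0(x_0)$, and differentiating the relation $x=x_0+tg(x_0)$ in $x_0$ gives
\begin{align*}
\partial_x v(x(t),t)=\frac{v_0'(x_0)}{1+t g'(x_0)},
\end{align*}
which first becomes infinite at $t=-\frac{1}{\min\limits_{x\in\mathbb{R}} g'(x)}$ (at the points $x_0$ where the minimum is attained), while $v$ itself stays bounded by $\|v_0\|_{C(\mathbb{R})}$.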
Next consider the 1D $2\times 2$ strictly hyperbolic system
\begin{equation}\label{Y-3}\begin{cases}
\partial_t v+B(v)\partial_x v=0,\\
v(x, 0)=v_0(x),
\end{cases}
\end{equation}
where $v_0(x)\not\equiv 0$, $v_0(x)\in C_0^1(\Bbb R)$, and $B(v)\in C^1$ is a $2\times 2$ matrix which admits two
distinct real eigenvalues $\lambda_1(v)$ and $\lambda_2(v)$. By introducing
two Riemann invariants $w_1=w_1(v)$ and $w_2=w_2(v)$, \eqref{Y-3} can be decoupled into the following
$2\times 2$ strictly hyperbolic system for $w=(w_1,w_2)$:
\begin{equation}\label{Y-4}\begin{cases}
\partial_t w_1+\lambda_1(w)\partial_x w_1=0,\\
\partial_t w_2+\lambda_2(w)\partial_x w_2=0,\\
w(x, 0)=w_0(x).
\end{cases}
\end{equation}
When the system in \eqref{Y-3} is genuinely nonlinear with respect to at least one eigenvalue $\lambda_i(v)$ ($i=1,2$),
it follows from \eqref{Y-4} and \cite{PDL1} that the smooth solution $v$ blows up at the maximal
finite existence time $T^*$, while $\|v\|_{L^{\infty}(\mathbb{R}\times [0, T^*])}$ remains bounded and
$\lim\limits_{t\nearrow T^*}\|\partial_{t,x}v(\cdot, t)\|_{C(\Bbb R)}=\infty$ holds.
This means that the blowup of the solution $v$ to \eqref{Y-3} is also a geometric blowup.
For the small data solution problem of 1D $n\times n$ quasilinear strictly hyperbolic system
\begin{equation}\label{Y-1}\begin{cases}
\partial_t v+B(v)\partial_x v=0,\\
v(x, 0)=\varepsilon v_0(x),
\end{cases}
\end{equation}
where $\varepsilon>0$ is small, $v_0(x)\not\equiv 0$, $v_0(x)\in C_0^2(\Bbb R)$ and $B(v)\in C^2$ is an $n\times n$ matrix,
when the system in \eqref{Y-1} is genuinely nonlinear with respect to at least
one eigenvalue of $B(v)$, it follows from the results in \cite{LH} and \cite{FJ} that the lifespan $T_{\varepsilon}$ of smooth solution $v$ to \eqref{Y-1}
satisfies
$$\lim\limits_{\varepsilon\to 0^{+}}\varepsilon T_{\varepsilon}=\tau_0>0.$$
Moreover, $\|v\|_{C(\mathbb{R}\times [0, T_\varepsilon])}\le C\varepsilon$ and $\displaystyle\lim_{t\to T_{\varepsilon}-}\|\partial_{t,x}v(\cdot, t)\|_{C(\Bbb R)}=\infty$ hold.
This means that the blowup of solution $v$ to \eqref{Y-1} corresponds to the geometric blowup.
Compared with the results on problem \eqref{Y-2} and problem \eqref{Y-3}, two natural problems arise for the system \eqref{i-1a}
with $n\geq 3$: when at least one eigenvalue of $F(u)$ is genuinely nonlinear,
{\bf Q1.} Can we find
a class of large initial data \eqref{i-1b} such that the blowup of solution $u$ corresponds to the geometric blowup
as in the small data solution problem \eqref{Y-1}?
{\bf Q2.} Can we find
another class of large initial data \eqref{i-1b} such that the solution $u$ itself blows up in finite time?
In the present paper, we focus on the investigation of {\bf Q1}.
\subsection{Statement of main results}
By Proposition \ref{lemA-1} in Section \ref{II},
\eqref{i-1} can be equivalently transformed into the following problem
\begin{subequations}\label{i-6}\begin{align}
&\partial_t w+A(w)\partial_x w=0,\label{i-6a}\\
&w(x, -\varepsilon)=w_0(x),\label{i-6b}
\end{align}
\end{subequations}
where $w(x, t)=(w_1, \cdots, w_n)^{\top}$, $t\geq -\varepsilon$, and $\varepsilon>0$ is a
small constant (for convenience of presentation,
the initial time is shifted from $t=0$ to $t=-\varepsilon$).
In addition, the $n$ distinct real eigenvalues of the smooth matrix $A(w)=\left(a_{ij}(w)\right)_{n\times n}$
are denoted by $\mu_1(w), \cdots, \mu_n(w)$. Based on the reduction in Proposition \ref{lemA-1} and the strict
hyperbolicity condition \eqref{i-2}, the following hold:
\begin{subequations}\label{i-7}\begin{align}
&\mu_1(w)<\cdots<\mu_{i_0-1}(w)<\mu_n(w)<\mu_{i_0}(w)<\cdots<\mu_{n-1}(w),\label{i-7a}\\
&a_{in}(w)=0\ (1\leq i\leq n-1),\label{i-7b}\\
& a_{nn}(w)=\mu_n(w)=\mu_n(0)+\partial_{w_n}\mu_n(0)w_n+\sum\limits_{i=1}^{n-1}\partial_{w_i}\mu_n(0)w_i+O(|w|^2),\label{i-7c}\\
&A(0)=\text{diag}\{\mu_1(0), \cdots, \mu_n(0)\},\label{i-7d}
\end{align}
\end{subequations}
where $\partial_{w_n}\mu_n(0)\neq 0$. This means that the
system \eqref{i-6a} is genuinely nonlinear with respect to the eigenvalue $\mu_n(w)$. Let $\ell_i(w)$ and $\gamma_i(w)$ be the left and right eigenvectors of the matrix $A(w)$ corresponding to the eigenvalue $\mu_i(w)$ ($1\le i\le n$), respectively.
Together with \eqref{i-7}, without loss of generality, one can assume
\begin{subequations}\label{i-71}\begin{align}
&\ell_i(w)\cdot\gamma_j(w)=\delta_{i}^{j}\ (1\leq i, j\leq n),\label{i-71a}\\
& \gamma_n(w)={\bf e}_n, \ \gamma_i(0)={\bf e}_i,\ \|\gamma_i(w)\|=1\ (1\leq i\leq n-1),\label{i-71b}\\
&\ell_i(0)={\bf e}_i^{\top}\ (1\leq i\leq n),\label{i-71c}
\end{align}
\end{subequations}
where $\|\gamma_i(w)\|=\sqrt{\displaystyle\sum_{k=1}^n(\gamma_{i}^k)^2(w)}$ with $\gamma_i(w)=(\gamma_{i}^1(w), ..., \gamma_{i}^n(w))^{\top}$.
To study the blowup mechanism of smooth solution to problem \eqref{i-6} with a class of large initial data $w_0(x)=(w_{10}, \cdots, w_{n0})(x)$, motivated by \cite{TSV1}-\cite{TSV3}, we choose $w_0(x)$ as follows:
At first, let $w_{n0}(x)$ satisfy the following generic nondegenerate condition at $x=0$:
\begin{equation}\lambdabel{i-8}
w_{n0}(0)=\kappa_0\varepsilon^{\fracrac{1}{3}},\ w_{n0}'(0)=-\fracrac{1}{\varepsilon}=\min\limits_{x\in\mathbb{R}}w_{n0}'(x),\ w_{n0}''(0)=0,\ w_{n0}'''(0)=\fracrac{6}{\varepsilon^4},
\end{equation}
where $\kappa_0$ is a suitable constant.
Secondly, in order to derive $L^{\infty}$ estimates for the lower order derivatives of $w$ and track the development of possible singularity, we require such assumptions
of $w_{0}(x)$:
\begin{subequations}\lambdabel{i-9}\begin{align}
&|w_{n0}(x)-w_{n0}(0)|\leq 2\varepsilon^{\fracrac{1}{2}-\fracrac{1}{30}},\lambdabel{i-9a}\\
&|\hat w(x)|\leq \varepsilon^{\fracrac{3}{2}}\eta^{\fracrac{1}{6}}(\varepsilon^{-\fracrac{3}{2}}x),\ \ |\hat w'(x)|\leq \eta^{-\fracrac{1}{3}}(\varepsilon^{-\fracrac{3}{2}}x)\quaduad \text{for $|x|\leq\mathcal{L}\varepsilon^{\fracrac{3}{2}}$},\lambdabel{i-9b}\\
&|\hat w^{(4)}(x)|\leq \varepsilon^{\fracrac{1}{9}-\fracrac{9}{2}}\quaduad\text{for $|x|\leq \varepsilon^{\fracrac{3}{2}}$},\lambdabel{i-9e}\\
&\varepsilon|w_{n0}'(x)|\leq 2\eta^{-\fracrac{1}{3}}(\varepsilon^{-\fracrac{3}{2}}x)\quaduad\text{for $\mathcal{L}\varepsilon^{\fracrac{3}{2}}\leq|x|\leq 2\mathcal{L}\varepsilon^{\fracrac{3}{2}}$},\lambdabel{i-9c}\\
&\varepsilon|w_{n0}'(x)|\leq \eta^{-1}(\varepsilon^{-\fracrac{3}{2}}x)\quaduad\text{for $|x|\gammaeq 2\mathcal{L}\varepsilon^{\fracrac{3}{2}}$},\lambdabel{i-9d}
\end{align}
\end{subequations}
and for $1\leq j\leq n-1$,
\begin{equation}\lambdabel{i-10}
|w_{j0}(x)|\leq\varepsilon,\ |w_{j0}'(x)|\leq \eta^{-\fracrac{1}{3}}(\varepsilon^{-\fracrac{3}{2}}x),\ |w_{j0}''(x)|\leq \varepsilon^{-\fracrac{11}{6}}\eta^{-\fracrac{1}{3}}(\varepsilon^{-\fracrac{3}{2}}x),
\end{equation}
where $\mathcal{L}=\varepsilon^{-\fracrac{1}{10}}, \eta(x)=1+x^2$
and $\hat{w}(x)=w_{n0}(x)-\kappa_0\varepsilon^{\fracrac{1}{3}}-\overline{w}(x)$ with $\overline{w}(x)=\varepsilon^{\fracrac{1}{2}}\overline{W}(\varepsilon^{-\fracrac{3}{2}}x)$ and
$\overline{W}(y)=(-\fracrac{y}{2}+(\fracrac{1}{27}+\fracrac{y^2}{4})^{\fracrac{1}{2}})^{\fracrac{1}{3}}
-(\fracrac{y}{2}+(\fracrac{1}{27}+\fracrac{y^2}{4})^{\fracrac{1}{2}})^{\fracrac{1}{3}}$.
Thirdly, in order to derive the $L^{2}-$energy estimates for the $\mu_0-$order derivatives of $w$,
we demand that
\begin{equation}\lambdabel{i-11}
\sum\limits_{j=1}^{n-1}\|\partialartial_x^{\mu_0}w_{j0}(\cdot)\|_{L^2}+\varepsilon\|\partialartial_x^{\mu_0}w_{n0}(\cdot)\|_{L^2}\lesssim \varepsilon^{\fracrac{3}{2}(1-\mu_0)},
\fracootnote{Hereafter, $A\lesssim B$ means that there exists a generic positive constant $C$ such that $A\leq CB$.}
\end{equation}
where $\mu_0\gammae 6$ is a suitably given constant.
Our main results are stated as:
\begin{thm}\lambdabel{thmi-1} {\it Under the conditions \eqref{i-7}, and without loss of generality,
$\mu_n(0)=0$ and $\partialartial_{w_n}\mu_n(0)=1$ are assumed, then there exists a positive constant $\varepsilon_0$
such that when $0<\varepsilon<\varepsilon_0$ and $w_0(x)$ satisfies \eqref{i-8}-\eqref{i-11},
the problem \eqref{i-6} admits a unique local smooth solution $w$, which will firstly
blow up at the point $(x^*, T^*)$.
Moreover,
\begin{enumerate}[$(1)$]
\item $x^{*}=O(\varepsilon^2),\ T^*=O(\varepsilon^{\fracrac{4}{3}}).$
\item $w$ lies in the following spaces:\begin{equation}\lambdabel{i-12}\begin{cases}
w\in C([-\varepsilon, T^*), H^{\mu_0}(\mathbb{R}))\cap C^1([-\varepsilon, T^*), H^{\mu_0-1}(\mathbb{R})),\\[2mm]
w_i\in C^1([-\varepsilon, T^*]\times \mathbb{R})\ (1\leq i\leq n-1),\ w_n\in L^{\infty}([0, T^*], C^{\fracrac{1}{3}}(\mathbb{R})).
\end{cases}
\end{equation}
\item There exist two smooth functions $\xi(t)$ and $\tau(t)$ such that
\begin{equation}\lambdabel{i-13}\begin{cases}
\lim\limits_{t\nablaearrow T^*}(\xi(t), \tau(t))=(x^*, T^*),\ \lim\limits_{t\nablaearrow T^*}\partialartial_x w_n(\xi(t), t)=-\infty,\\[1mm]
-2<(T^*-t)\partialartial_x w_n(\xi(t), t)<-\fracrac{1}{2}\quaduad\text{for $-\varepsilon\leq t<T^*$},\\[1mm]
\text{$|\tau(t)-T^*|\lesssim \varepsilon^{\fracrac{1}{3}}(T^*-t)$ and $|\xi(t)-x^*|\lesssim \varepsilon (T^*-t)$
for $-\varepsilon\leq t<T^*$}.
\end{cases}
\end{equation}
\end{enumerate}}
\end{thm}
\begin{rem}\lambdabel{remi-H1} {\it By Theorem \ref{thmi-1}, we know that the solution $w\in C(\Bbb R\times [0, T^*])$ of \eqref{i-6a}
blows up at the point $(x^*, T^*)$, i.e., $\lim\limits_{t\nablaearrow T^*}\|\partial_{t,x}w(\cdot, t)\|_{C(\Bbb R)}=\infty$.
This corresponds to the geometric blowup for the problem \eqref{i-6}.}
\end{rem}
\begin{rem}\lambdabel{rem1-1} {\it The assumptions of $\mu_n(0)=0$ and $\partialartial_{w_n}\mu_n(0)=1$ in Theorem \ref{thmi-1}
can be realized by the translation $(t, x)\mapsto (t, x+\mu_n(0)t)$ and then the spatial scaling $x\mapsto \partialartial_{w_n}\mu_n(0) x$.}
\end{rem}
\begin{rem}\lambdabel{rem1-2} {\it We now give some comments on \eqref{i-6}-\eqref{i-7}.
It is not difficult to find that there are a great number of $w_0(x)$ to fulfill the constrains \eqref{i-8}-\eqref{i-11}. In addition,
it follows from Proposition \ref{lemA-1} that the unknown $w$
admits the good components $(w_1, \cdots, w_{n-1})$ and the bad component $w_n$. The conditions \eqref{i-9b}-\eqref{i-9c} imply that the bad component $w_n$ mainly tracks the possible singularity and it can be thought as a suitable perturbation of the
singular function $\overline{W}$. On the other hand, in order to control the detailed behaviors of $w_n$
near the possible blowup point, we posed the suitable perturbation for the fourth order derivatives of $\hat w$ in \eqref{i-9e} when $|x|\leq \varepsilon^{\fracrac{3}{2}}$. The conditions \eqref{i-9a} and \eqref{i-9d} are posed to control the behavior of $w_n$
away from the blowup position. In addition, to avoid the influence of the initial data $w_0(x)$ at infinity,
we naturally pose the appropriate decaying condition \eqref{i-10}-\eqref{i-11} for large $|x|$.}
\end{rem}
\begin{rem}\lambdabel{remi-3} {\it In \cite{TSV1}-\cite{TSV3}, through introducing
suitable modulated coordinates and taking the constructive proofs, the authors systematically study the shock formation
of multidimensional compressible Euler equations with a class of smooth initial data. Motivated by these papers,
we study the geometric blowup mechanism of problem \eqref{i-1}, whose nonlinear structure is more general than
the 1D compressible Euler equations. Thanks to the new reformulation in the equivalent problem \eqref{i-6} as well as \eqref{i-7},
we can establish some suitable exponential-growth controls on the bounds of the characteristics corresponding to
$\mu_i(w)\ (1\leq i\leq n-1)$ (see Lemma \ref{lem7-1} and Lemma \ref{lem7-2} below) such that
the problem \eqref{i-6} can be mainly dominated by the approximate Burgers equation of $w_n$.}
\end{rem}
\begin{rem}\lambdabel{remi-H2}
{\it When \eqref{i-6a} admits the structure of conservation laws, there are some interesting works on the shock construction through the first-in-time blowup point $(x^*, T^*)$ for $t\gammae T^*$.
For instances, under various nondegenerate conditions with finite orders or infinitely degenerate conditions
for the initial data, the shock construction from the blowup point is completed for the 1D scalar equation
$\partial_tu+\partial_x(f(u))=0$ in \cite{YL1}; under the generic nondegenerate condition of initial data,
the shock surface from the blowup curve has been constructed for the multidimensional scalar equation
$\partial_tu+\partial_1(f_1(u))+\cdot\cdot\cdot+\partial_n(f_n(u))=0$ in \cite{YL2}; under the generic nondegenerate conditions
of the initial data, for the 1-D $2\times 2$ $p-$ system of
polytropic gases, the authors in \cite{K1},\cite{LB} and \cite{CD} obtain the formation and construction
of the shock wave starting from the blowup point under some variant conditions; for the 1-D $3\times 3$
strictly hyperbolic conservation laws with the small initial data or the 3-D full compressible Euler equations with
symmetric structure and small perturbed initial data,
the authors in \cite{CXY}, \cite{YHC} and \cite{CD2} also get the formation and construction
of the resulting shock waves, respectively. In the near future, we hope that the shock formation can
be constructed from the blowup point $(x^*, T^*)$ in Theorem \ref{thmi-1} when \eqref{i-6a} has
the structure of conservation laws.}
\end{rem}
\begin{rem}\lambdabel{remi-2} {\it In the recent years, the formation of shock waves have made much progress
for the multi-dimensional Euler equations and the quasilinear wave equations under various restrictions
on the related initial data. One can see
the remarkable articles \cite{TSV1}-\cite{TSV4}, \cite{CD1}, \cite{CD3}-\cite{CD4}, \cite{LS},
\cite{MY} and \cite{SJ}.}
\end{rem}
\subsection{Comments on the proof of Theorem \ref{thmi-1}}
Let us give comments on the proof of Theorem \ref{thmi-1}.
Motivated by \cite{TSV3}, by introducing the modulated coordinate which is smooth before the singularity formation,
we can convert the finite time singularity formation of \eqref{i-6} into the global well-posedness of smooth
solutions to the resulting new system of $W=(W_1, \cdot\cdot\cdot, W_n)^{\top}$ (see \eqref{ii-4}-\eqref{ii-5}).
To achieve this aim, we take the following strategies:
{\bf $\bullet$} Due to the important form of \eqref{i-6a} with \eqref{i-7}, we divide $W$ as the $n-1$ good
components $(W_1, \cdots, W_{n-1})^{\top}$
and the bad unknown $w_n$. Inspired by \cite{TSV2}, we continue to decompose $w_n$ into another bad part $W_0$ and a good part $\kappa(t)$
(see \eqref{ii-3}).
The $L^{\infty}$ estimates for the lower order derivatives of $W_0$ are carried out in two different domains $\{(y, s): |y|\leq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}\}$ and $\{(y, s): |y|\gammaeq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}\}$ with $\mathcal{L}=\varepsilon^{-\fracrac{1}{10}}$. In the interior domain $\{(y, s): |y|\leq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}\}$,
$W_0$ is expected to have the similar behavior as $\overline{W}(y)$, which is the steady solution of 1D Burgers type equation
$(\partialartial_s-\fracrac{1}{2})\overline{W}+\left(\fracrac{3}{2}y+\overline{W}\right)\partialartial_y\overline{W}=0$.
In the exterior domain $\{(y, s): |y|\gammaeq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}\}$, the treatment of $W_0$
is rather delicate since the temporal and spatial decay estimates of good components $(W_1, \cdots, W_{n-1})^{\top}$ are
required to be established simultaneously.
{\bf $\bullet$} Due to the partially decoupling form of \eqref{i-6a}, in order to
prove Theorem \ref{thmi-1}, we need to establish the $L^{\infty}$
estimates of the lower order derivatives and the $L^2$ estimates of the
highest order derivatives for $(W_1, \cdots, W_{n})^{\top}$. To get the related $L^{\infty}$ estimates,
by utilizing the characteristics method and delicate analysis, at first, we derive
the basic exponential controls on the bounds of the characteristics corresponding to
$\mu_i(w)\ (1\leq i\leq n-1)$. Subsequently, the spatial decay rate $(1+y^2)^{-\fracrac{1}{3}}$ of $\partialartial_y w_0$
and further the temporal decay of $\partialartial_y W_0$ are obtained. From these, the $L^{\infty}$ estimates of $(W_1, \cdots, W_{n})^{\top}$
are achieved. On the other hand, we observe that the coefficients in the equations of $w$
admit the key $O(e^{\fracrac{s}{2}})$ scale because of the strict hyperbolicity of \eqref{i-6a} (see \eqref{v-35}).
This will lead to the expected $L^2$ estimates on the
highest order derivatives of $(W_1, \cdots, W_{n})^{\top}$. Here, we specially point out that the $L^{\infty}$
estimates of each related quantity depend on the information of the higher order derivatives of $W$
since the related \eqref{i-6a} only admits the partial decoupling form.
This is the main reason to apply the $L^2$ estimates for dealing with the highest order derivatives of $W$.
When these are done, the proof of Theorem \ref{thmi-1} can be completed successfully. It is hoped that
our analysis methods in the paper will be adopted to study the singularity formation problem
for the general multi-dimensional symmetric hyperbolic systems with some classes of large initial data,
which is a generalization of the results in \cite{TSV1}-\cite{TSV4} for the multi-dimensional compressible
Euler equations.
The rest of the paper is arranged as follows: In Section \ref{II}, we reduce the problem \eqref{i-1} into
the equivalent partially decoupling problem \eqref{i-6} via Proposition \ref{lemA-1}. In Section \ref{ii},
under the modulated coordinate, the problem \eqref{i-6} and the choice of initial data $w_0(x)$
are reformulated. Moreover, as the heuristics of the formation of the expected singularity,
the rigorous derivation on the resulting Burgers-type equation is also given in this section.
The bootstrapping assumptions and their closure of the arguments are
arranged in Section \ref{iv}-Section \ref{v} respectively: The descriptions of bootstrapping assumptions
on $w$ and the modulated coordinate are made in Section \ref{iv};
the $L^{\infty}$ estimates for the bad unknown $W_n$ and the good components $(W_1,\cdot\cdot\cdot,
W_{n-1})^{\top}$ are taken in Section \ref{vi}-Section \ref{vii} respectively. In addition,
the closure of bootstrapping assumptions for the modulation variables is completed in Section \ref{viii};
the related energy estimates for the higher order derivatives of $W$ are derived in Section \ref{v}.
In Section \ref{V}, we establish the main results in Theorem \ref{thmii-1} and further Theorem \ref{thmi-1}.
Finally, a useful interpolation inequality and its application for deriving some delicate estimates
are given in Appendix \ref{A}.
\section{Reduction}\lambdabel{II}
In the section, our main aim is to reduce \eqref{i-1a} to a partially decoupling form \eqref{i-6a}
such that the resulting new unknown functions $w=(w_1, \cdot\cdot\cdot, w_n)^{\top}$
will admit $n-1$ good components and only one bad component. The good component
and the bad component mean that their regularities are in $C^1$ and in $C^{1/3}$
up to the blowup time, respectively.
\begin{prop}\lambdabel{lemA-1} {\it Under assumptions \eqref{i-2}-\eqref{i-3}, there exists a constant $\delta_0>0$
such that when $|u|<\delta_0$, the system \eqref{i-1a} can be equivalently reduced into
\begin{equation}\lambdabel{A-1}
\partialartial_{t}w+A(w)\partialartial_{x} w=0,
\end{equation}
where the smooth mapping $u\mapsto w=w(u)$ is invertible and $w(0)=0$.
In addition, the inverse mapping of $w(u)$ is denoted as $u=u(w)$, and the $n\times n$ matrix
\begin{equation*}
A(w)=\left(\fracrac{\partialartial w}{\partialartial u}\right)F(u(w))\left(\fracrac{\partialartial w}{\partialartial u}\right)^{-1}:=\left(a_{ij}(w)\right)_{n\times n}
\end{equation*} satisfies
\begin{enumerate}[$(1)$]
\item $A(w)$ has $n$ distinct eigenvalues $\left\{\mu_i(w)\right\}_{i=1}^{n}$ with
\begin{equation*}
\mu_i(w)=\lambdambda_i(u(w))\ (1\leq i<i_0);\ \mu_i(w)=\lambdambda_{i+1}(u(w))\ (i_0\leq i<n);\ \mu_n(w)=\lambdambda_{i_0}(u(w)).
\end{equation*}
\item $a_{in}(w)=0\ (i\nablaeq n),\ a_{nn}(w)=\mu_{n}(w)$ and $\partialartial_{w_{n}}\mu_{n}(w)\nablaeq 0$.
\item $A(0)=\text{diag}\{\mu_1(0), \cdots, \mu_n(0)\}$.
\end{enumerate}}
\end{prop}
\begin{proof} At first, we claim that when $|u|\leq \delta_0$ for some constant $\delta_0>0$, there exist $(n-1)$
linearly independent Riemann invariants $\alpha_i(u)\ (i\nablaeq i_0)$ corresponding to
$\lambdambda_{i_0}(u)$ such that
\begin{equation}\lambdabel{B-2}
\nablaabla_u \alpha_i(u)\cdot \gammaamma_{i_0}(u)=0\ (i\nablaeq i_0,\ |u|<\delta_0).
\end{equation}
Indeed, let $\{\zeta_i\}_{(i\nablaeq i_0)}$ be $(n-1)$ linearly independent column vectors orthogonal to $\gammaamma_{i_0}(0)$ and set $\alpha_i(u)=\zeta_i^{\top}\cdot u+\bar{\alpha}_i(u)$, then it follows from \eqref{B-2} that the
unknowns $\{\bar{\alpha}_i(u)\}_{i\nablaeq i_0}$ should satisfy
\begin{equation}\lambdabel{B-3}
\nablaabla_u \bar{\alpha}_i(u)\cdot\gammaamma_{i_0}(u)=-\zeta_i^{\top}\cdot(\gammaamma_{i_0}(u)-\gammaamma_{i_0}(0)),\ \bar{\alpha}_i(0)=0\ (i\nablaeq i_0).
\end{equation}
It is not difficult to find that there exists a constant $\delta_0>0$ such that \eqref{B-3} is uniquely solved when $|u|<\delta_0$ and $|\bar{\alpha}_i(u)|\lesssim |u|^2$. Hence \eqref{B-2} is obtained.
By $\alpha_i(0)=0\ (i\nablaeq i_0)$, we define a mapping $u\mapsto v=v(u)=(v_1, \cdots, v_n)^{\top}(u)$ with $v(0)=0$ as
\begin{equation}\lambdabel{B-4}
v_i=\alpha_i(u)\ (1\leq i< i_0);\quaduad v_i=\alpha_{i+1}(u)\ (i_0\leq i<n);\quaduad v_{n}=\gammaamma_{i_0}^{\top}(0)\cdot u.
\end{equation}
Note that $\{\zeta_i\}_{i\nablaeq i_0}$ are $(n-1)$ linearly independent column vectors
which are orthogonal to $\gammaamma_{i_0}(0)$. Then the transformation $u\mapsto v=v(u)$ is reversible for $|u|<\delta_0$
since its Jacobian matrix $J_{v}(u)$ satisfies $J_{v}(0)=\left(\fracrac{\partialartial v}{\partialartial u}\right)|_{u=0}
=\left(\zeta_1,\cdots, \zeta_{i_0-1}, \zeta_{i_0+1}, \cdots, \zeta_n, \gammaamma_{i_0}(0)\right)^{\top}$ and $J_{v}(0)$
is non-singular. We now denote the inverse mapping of $v=v(u)$ as $u=u(v)$.
By \eqref{B-2} and \eqref{B-4}, the system \eqref{i-1a} is equivalently converted into
\begin{equation}\lambdabel{B-5}
\partialartial_t v+G(v)\partialartial_x v=0,
\end{equation}
where $G(v)=J_{v}(u)F(u(v))J_{v}^{-1}(u):=\left(g_{ij}(v)\right)_{n\times n}$,
$G(v)$ has $n$ distinct eigenvalues $\{\lambdambda_i(u(v))\}_{i=1}^{n}$ and the corresponding right
eigenvectors are $\{J_{v}(u)\gammaamma_i(u(v))\}_{i=1}^{n}$. In addition, \eqref{B-2} shows
\begin{equation}\lambdabel{A-6}
J_{v}(u)\gammaamma_{i_0}(u(v))=\gammaamma_{i_0}^{\top}(0)\cdot\gammaamma_{i_0}(u(v)){\bf e}_{n}\nablaeq {\bf 0}.
\end{equation}
This implies that
\begin{equation}\lambdabel{A-7}
g_{i n}(v)=0\ (i\nablaeq n),\quaduad g_{n n }(v)=\lambdambda_{i_0}(u(v)).
\end{equation}
It follows from \eqref{B-5} and \eqref{A-7} that the $(n-1)$ order square matrix
\begin{equation}\lambdabel{A-9}
G_{n-1}(v)=\left(g_{ij}(v)\right)_{(n-1)\times (n-1)}
\end{equation}
has $(n-1)$ eigenvalues
\begin{equation*}\lambdabel{A-10}
\lambdambda_1(u(v))<\cdots<\lambdambda_{i_0-1}(u(v))<\lambdambda_{i_0+1}(u(v))<\cdots<\lambdambda_n(u(v)).
\end{equation*}
Then there exists a unique $(n-1)$ order invertible constant square matrix $B_{n-1}=\left(b_{ij}\right)_{(n-1)\times (n-1)}$
such that
\begin{equation}\lambdabel{B-10}
\bar{G}_{n-1}(v)=B_{n-1}G_{n-1}(v)B_{n-1}^{-1}:=\left(\bar{g}_{ij}(v)\right)_{(n-1)\times (n-1)},
\end{equation}
where
\begin{equation*}
\bar{G}_{n-1}(0)=\text{diag}\{\lambdambda_1(0),\cdots, \lambdambda_{i_0-1}(0), \lambdambda_{i_0+1}(0), \cdots, \lambdambda_n(0)\}.
\end{equation*}
Furthermore, it is derived from \eqref{i-3} and \eqref{A-6}-\eqref{A-7} that
\begin{equation}\lambdabel{A-8}
\nablaabla_{v}\lambdambda_{i_0}(u(v))J_{v}(u)\gammaamma_{i_0}(u(v))=\nablaabla_{u}\lambdambda_{i_0}(u)\cdot\gammaamma_{i_0}(u)\nablaeq 0.
\end{equation}
Denote the invertible transformation $v\mapsto w=w(v):=(w_1, \cdots, w_n)^{\top}(v)$ as
\begin{equation}\lambdabel{A-11}
(w_1, \cdots, w_{n-1})^{\top}=B_{n-1}(v_1, \cdots, v_{n-1})^{\top},\quaduad w_n=v_n+\sum\limits_{j=1}^{n-1}b_{nj}w_j,
\end{equation}
where the constants $\{b_{nj}\}_{j=1}^{n-1}$ will be determined later. Set the inverse mapping of $w=w(v)$ as $v=v(w)$ with $v(0)=0$.
By \eqref{B-5}, \eqref{A-9}-\eqref{A-11} and a direct computation, we arrive at
\begin{equation}\lambdabel{A-12}
\partialartial_t w+A(w)\partialartial_x w=0,
\end{equation}
where $A(w)=\left(a_{ij}(w)\right)_{n\times n}$ satisfies
\begin{equation}\lambdabel{A-13}\begin{cases}
a_{ij}(w)=\bar{g}_{ij}(v(w))\ (1\leq i, j\leq n-1),\\[2mm]
a_{jn}(w)=0\ (1\leq j\leq n-1), \\[2mm]
a_{nn}(w)=g_{nn}(v(w))
\end{cases}
\end{equation}
and
\begin{equation}\lambdabel{A-14}\begin{aligned}
&(a_{n 1}, \cdots, a_{n n-1})(w)\\
=&(g_{n1}, \cdots, g_{n n-1})(v(w))B_{n-1}
+(b_{n1 }, \cdots, b_{n n-1})(\bar{G}_{n-1}-g_{n n}I_{n-1})(v(w)).
\end{aligned}
\end{equation}
Since the $(n-1)$ order square matrix
\begin{equation*}
\bar{G}_{n-1}(0)-g_{n n}(0)I_{n-1}=\text{diag}\{\lambdambda_1(0)-\lambdambda_{i_0}(0), \cdots \lambdambda_{i_0-1}(0)-\lambdambda_{i_0}(0), \lambdambda_{i_0+1}(0)-\lambdambda_{i_0}(0), \cdots, \lambdambda_{n}(0)-\lambdambda_{i_0}(0)\}
\end{equation*}
is invertible due to \eqref{i-2} and \eqref{A-7}, it is derived from \eqref{A-14} that there exists unique $\{b_{n j}\}_{1\leq j\leq n-1}$
such that
\begin{equation}\lambdabel{A-15}
(a_{n 1}, \cdots, a_{n, n-1})(0)=(0, \cdots, 0).
\end{equation}
For the constant invertible square matrix $B=\left(b_{ij}\right)_{n\times n}$ with $b_{i n}=\delta_{i}^{n}\ (1\leq i\leq n)$ and
\begin{equation}\lambdabel{A-16}
w=Bz,
\end{equation}
one has from \eqref{A-11}-\eqref{A-14} that
\begin{equation}\lambdabel{A-17}
A(w)=BG(B^{-1}w)B^{-1}:=(a_{ij}(w))_{n\times n},
\end{equation}
where
\begin{equation*}
A(0)=BG(0)B^{-1}=\text{diag}\{\lambdambda_1(0), \cdots, \lambdambda_{i_0-1}(0), \lambdambda_{i_0+1}(0), \cdots, \lambdambda_n(0), \lambdambda_{i_0}(0)\}.
\end{equation*}
The expected invertible mapping $u\mapsto w=w(u)$ is just the composition of two mappings $v=v(u)$ and $w=w(v)$ defined by \eqref{B-4} and \eqref{A-16} respectively. Its inverse mapping is denoted as $u=u(w)$. It is easy to know that the matrix $A(w)$ has $n$
distant eigenvalues $\{\lambdambda_i(w(u))\}_{i=1}^n$ and the corresponding right eigenvectors are $\{J_w(u)\gammaamma_i(u(w))\}_{i=1}^n$.
In addition, it is derived from \eqref{A-6}, \eqref{A-8} and \eqref{A-11} that
\begin{equation}\lambdabel{A-18}
J_w(u)\gammaamma_{i_0}(u(w))=B J_v(u)\gammaamma_{i_0}(u(v))=\gammaamma_{i_0}^{\top}(0)\cdot\gammaamma_{i_0}(u(v)){\bf e}_n\nablaeq {\bf 0}
\end{equation}
and
\begin{equation}\lambdabel{A-19}\begin{aligned}
&\nablaabla_w\lambdambda_{i_0}(u(w))\cdot J_w(u)\gammaamma_{i_0}(u(w))\\
=&\nablaabla_v\lambdambda_{i_0}(u(v))\cdot J_v(w)\cdot J_w(v)\cdot J_v(u)\gammaamma_{i_0}(u(v))\\
=&\nablaabla_v\lambdambda_{i_0}(u(v))\cdot J_v(u)\gammaamma_{i_0}(u(v))\nablaeq 0.
\end{aligned}
\end{equation}
Then the properties (1)-(3) of $A(w)$ come from \eqref{A-9}-\eqref{B-10}, \eqref{A-13}, \eqref{A-15}
and \eqref{A-18}-\eqref{A-19}.
\end{proof}
\begin{rem}\lambdabel{YH-1} {\it We point out that Proposition \ref{lemA-1} is a generalization of Lemma 2.1
in \cite{CXY}, where \eqref{i-1} with small initial data and $n=3$ is simplified analogously.}
\end{rem}
\section{Reformulation under the modulated coordinates}\lambdabel{ii}
Motivated by \cite{TSV2}, to show the geometric blowup mechanism in Theorem \ref{thmi-1}, we introduce
three modulation variables $\tau(t),\ \xi(t)$ and $\kappa(t)$ as
\begin{equation}\lambdabel{ii-1}\begin{cases}
\tau(t):\quaduad \text{tracking the exact blowup time},\\
\xi(t): \quaduad \text{tracking the location of the blowup point},\\
\kappa(t):\quaduad \text{fixing the speed of the singularity development}.
\end{cases}
\end{equation}
Set the modulated coordinate $(y, s)$ as follows
\begin{equation}\lambdabel{ii-2}
s=s(t)=-\log(\tau(t)-t),\quaduad y=(x-\xi(t))(\tau(t)-t)^{-\fracrac{3}{2}}=(x-\xi(t))e^{\fracrac{3s}{2}}.
\end{equation}
In addition, the new unknowns $W=(W_1, \cdots, W_n)^{\top}$ and $W_0$ are defined as
\begin{equation}\lambdabel{ii-3}
w_i(x, t)=W_i(y, s) (i\nablaeq n),\quaduad w_{n}(x, t)=W_n(y, s)=e^{-\fracrac{s}{2}}W_{0}(y, s)+\kappa(t),
\end{equation}
where $\kappa(t)=w_n(\xi(t),t)$.
By \eqref{ii-1}-\eqref{ii-3}, when $t<\tau(t)$, the system \eqref{i-6a} can be equivalently rewritten as
\begin{equation}\lambdabel{ii-4}
\partialartial_s W+\left(\fracrac{3}{2}y-e^{\fracrac{s}{2}}\beta_{\tau}\dot{\xi}(t)\right)\partialartial_y W
+e^{\fracrac{s}{2}}\beta_{\tau}A(w)\partialartial_y W=0,
\end{equation}
where $\beta_{\tau}(t)=\fracrac{1}{1-\dot{\tau}(t)}$, $\dot{\xi}(t)=\xi'(t)$ and $\dot{\tau}(t)={\tau}'(t)$.
In addition, $W_0$ is determined by
\begin{equation}\lambdabel{ii-5}
(\partialartial_s-\fracrac{1}{2})W_{0}+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{n}(w)
-\dot{\xi}(t))\right)\partialartial_y W_{0}+\sum\limits_{j\nablaeq n}e^{s}\beta_{\tau}a_{nj}(w)\partialartial_y W_j
=-e^{-\fracrac{s}{2}}\beta_{\tau}\dot{\kappa}(t),
\end{equation}
where $\dot{\kappa}(t)=\kappa'(t)$.
\subsection{Global steady solution for 1D Burgers equation}\lambdabel{ii-a}
The simplest case of problem \eqref{i-6} is (i.e., $n=1$ and $a_n(w)=w_n$)
\begin{equation}\lambdabel{A-1}\begin{cases}
\partialartial_t\overline{\omega}+\overline{\omega}\partialartial_x\overline{\omega}=0,\ x\in\mathbb{R}, \ t>-\varepsilon,\\
\overline{\omega}(x, -\varepsilon)=\overline{\omega}_0(x),\ x\in\mathbb{R},
\end{cases}
\end{equation}
where $\overline{\omega}_0(x)\in C^{\infty}(\mathbb{R})$ and $\overline{\omega}_0'(x)\leq 0$.
It is assumed that $\overline{\omega}_0(x)$ satisfies the generic nondegenerate condition at $x=0$:
\begin{equation}\lambdabel{A-2}
\overline{\omega}_0(0)=0,\ \overline{\omega}_0'(0)=-\fracrac{1}{\varepsilon}=\min\limits_{x\in\mathbb{R}}\overline{\omega}_0'(x),\ \overline{\omega}_0''(0)=0,\ \overline{\omega}_0'''(0)=\fracrac{6}{\varepsilon^4}.
\end{equation}
By the characteristics method, it is easy to know that under the assumption \eqref{A-2},
the smooth solution $\overline{\omega}(x, t)$ of problem \eqref{A-1} will blowup at the first-in-time
singularity point $(0, 0)$ and the related characteristics starting from
the point $(0, -\varepsilon)$ is $\{(x, t): x=0, \ -\varepsilon<t<0\}$.
From this and the procedures in \eqref{ii-1}-\eqref{ii-5}, we define
\begin{equation}\lambdabel{A-3}
\tau_0(t)=0,\ \xi_0(t)=0,\ \kappa_0(t)=0,
\end{equation}
and
\begin{equation}\lambdabel{A-4}
s=s(t)=-\log(\tau_0(t)-t),\ y=(x-\xi_0(t))(\tau_0(t)-t)^{-\fracrac{3}{2}}=(x-\xi_0(t))e^{\fracrac{3s}{2}}
\end{equation}
and
\begin{equation}\lambdabel{A-5}
\overline{\omega}=e^{-\fracrac{s}{2}}\overline{W}(y, s)+\kappa_0(t).
\end{equation}
Then it follows from \eqref{A-1} and \eqref{A-3}-\eqref{A-5} that $\overline{W}(y, s)$ satisfies
\begin{equation}\lambdabel{ii-6}
(\partialartial_s-\fracrac{1}{2})\overline{W}+\left(\fracrac{3}{2}y+\overline{W}\right)\partialartial_y\overline{W}=0.
\end{equation}
In Appendix A.1 of \cite{TSV2}, it is proved that equation \eqref{ii-6} has a group of steady
smooth solutions $\overline{W}=\overline{W}(y)$ satisfying such a generic nondegenerate condition
\begin{equation}\lambdabel{ii-7}
\overline{W}(0)=0,\ \overline{W}'(0)=\min\limits_{y\in\mathbb{R}}\overline{W}'(y)<0,\ \overline{W}''(0)=0,\ \overline{W}'''(0)>0.
\end{equation}
According to the initial data \eqref{A-2} and the transformation \eqref{A-3}-\eqref{A-4},
the solution $\overline{W}(y)$ of \eqref{ii-6} is introduced in \cite{TSV2}
\begin{equation}\lambdabel{ii-8}
\overline{W}(y)=\left(-\fracrac{y}{2}+\left(\fracrac{1}{27}+\fracrac{y^2}{4}\right)^{\fracrac{1}{2}}\right)^{\fracrac{1}{3}}
-\left(\fracrac{y}{2}+\left(\fracrac{1}{27}+\fracrac{y^2}{4}\right)^{\fracrac{1}{2}}\right)^{\fracrac{1}{3}}.
\end{equation}
In addition, it is easy to obtain
\begin{subequations}\lambdabel{ii-9}\begin{align}
&\overline{W}(0)=0,\ \ \ \overline{W}'(0)=-1,\ \ \ \overline{W}''(0)=0,\ \ \ \overline{W}'''(0)=6,\lambdabel{ii-9a}\\
& \|\eta^{-\fracrac{1}{6}}\overline{W}\|_{L^{\infty}}\leq 1,\ -1\leq\eta^{\fracrac{1}{3}}\overline{W}'\leq -\fracrac{1}{6},\ \|\eta^{\fracrac{5}{6}}\overline{W}''\|_{L^{\infty}}\leq 2,\lambdabel{ii-9b}\\
& \|\eta^{\fracrac{5}{6}}\overline{W}^{(\mu)}\|_{L^{\infty}}\lesssim_{\mu}1\quaduad\text{for $\mu\gammaeq 3$}.\lambdabel{ii-9c}
\end{align}
\end{subequations}
\subsection{Evolution for the modulation variables}\lambdabel{ii-b}
With the expectation $\lim\limits_{s\nablaearrow +\infty}W_{0}(y, s)=\overline{W}(y)$
and by the properties of $\overline{W}(y)$, we pose
\begin{equation}\lambdabel{ii-10}
W_0(0, s)=0,\quaduad \partialartial_y W_0(0, s)=-1,\quaduad\partialartial_y^2 W_0(0, s)=0,\quaduad \partialartial_y^3 W_0(0, s)=6.
\end{equation}
Next, we derive the equations of the modulation variables in \eqref{ii-1}. For any nonnegative integer $\mu$, acting $\partialartial^{\mu}=\partialartial_y^{\mu}$ on both sides of \eqref{ii-4} and \eqref{ii-5} yields
\begin{subequations}\lambdabel{ii-11}\begin{align}
&(\partialartial_s+\fracrac{3}{2}\mu)\partialartial^{\mu}W+(\fracrac{3}{2}y-e^{\fracrac{s}{2}}\beta_{\tau}\dot{\xi}(t))\partialartial_y\partialartial^{\mu}W
+e^{\fracrac{s}{2}}\beta_{\tau}A(w)\partialartial_y\partialartial^{\mu}W=F_{\mu},\lambdabel{ii-11a}\\[2mm]
&(\partialartial_s+\fracrac{3\mu-1}{2})\partialartial^{\mu}W_0+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{n}(w)
-\dot{\xi}(t))\right)\partialartial_y\partialartial^{\mu}W_0=F_{\mu}^0,\lambdabel{ii-11b}
\end{align}
\end{subequations}
where
\begin{equation*}\begin{cases}
F_{\mu}=-e^{\fracrac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu}C_{\mu}^{\beta}\partialartial^{\beta}A(w)\partialartial_y\partialartial^{\mu-\beta}W,\\[2mm]
F_{\mu}^0=-e^{\fracrac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu}C_{\mu}^{\beta}\partialartial^{\beta}\mu_{n}(w)\partialartial_y\partialartial^{\mu-\beta}W_0
+e^{s}\beta_{\tau}\sum\limits_{j\nablaeq n}\partialartial^{\mu}\left(a_{nj}(w)\partialartial_yW_j\right)
-e^{-\fracrac{s}{2}}\beta_{\tau}\dot{\kappa}(t)\delta_{\mu}^0.\\
\end{cases}
\end{equation*}
Due to \eqref{ii-10}, it is derived from \eqref{ii-11b} that for $\mu=0, 1, 2$,
the modulation variables satisfy the following ordinary differential system:
\begin{subequations}\lambdabel{ii-12}\begin{align}
\dot{\kappa}(t)&=e^{s}\left(\mu_{n}(w^{0})-\dot{\xi}(t)\right)-e^{\fracrac{3s}{2}}\sum\limits_{j\nablaeq n}a_{nj}(w^{0})(\partialartial_y W_j)^0,\lambdabel{ii-12a}\\[2mm]
\dot{\tau}(t)&=1-\partialartial_{w_n}\mu_{n}(w^{0})+e^{\fracrac{s}{2}}\sum\limits_{j\nablaeq n}\partialartial_{w_j}\mu_{n}(w^{0})(\partialartial_y W_j)^0\nablaonumber\\[2mm]
&\quaduad -e^{s}\sum\limits_{j\nablaeq n}\left(\partialartial_y(a_{nj}(w)\partialartial_y W_j)\right)^0,\lambdabel{ii-12b}\\[2mm]
\dot{\xi}(t)&=\mu_{n}(w^{0})-\fracrac{1}{6}(\partialartial_y^2 \mu_n(w))^{0}+\fracrac{1}{6}\sum\limits_{j\nablaeq n}e^{\fracrac{s}{2}}\left(\partialartial_y^{2}(a_{nj}(w)\partialartial_y W_j)\right)^0,\lambdabel{ii-12c}
\end{align}
\end{subequations}
where the notation $v^{0}$ represents $v(0, s)$ for the function $v(y, s)$.
\subsection{The equation of $\mathcal{W}=W_{0}-\overline{W}$}\lambdabel{ii-c}
Set
\begin{equation}\lambdabel{ii-14}
\mathcal{W}=W_{0}-\overline{W}.
\end{equation}
It follows from \eqref{ii-5} and \eqref{ii-6} that for any nonnegative integer $\mu$,
\begin{equation}\lambdabel{ii-15}
\left(\partialartial_{s}+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{n}(w)
-\dot{\xi}(t)\right)\partialartial_y\right)\partialartial^{\mu}\mathcal{W}+\mathcal{D}_{\mu}\partialartial^{\mu}\mathcal{W}=\mathcal{F}_{\mu},
\end{equation}
where
\begin{equation*}
\mathcal{D}_{\mu}=\fracrac{3\mu-1}{2}+\beta_{\tau}\overline{W}'+e^{\fracrac{s}{2}}\beta_{\tau}\mu \partialartial_y \mu_n(w)\\
\end{equation*}
and
\begin{equation*}\begin{aligned}
\mathcal{F}_{\mu}=&-\sum\limits_{1\leq\beta\leq\mu}C_{\mu}^{\beta}\beta_{\tau}\partialartial_y^{1+\beta}\overline{W}\partialartial_y^{\mu-\beta}\mathcal{W}
-\sum\limits_{2\leq\beta\leq \mu}C_{\mu}^{\beta}e^{\fracrac{s}{2}}\beta_{\tau}\partialartial_y^{\beta}\mu_n(w)\partialartial_y^{\mu-\beta+1}\mathcal{W}
-e^{-\fracrac{s}{2}}\beta_{\tau}\dot{\kappa}(t)\delta_{\mu}^0\\
&-\sum\limits_{j\nablaeq n}e^{s}\beta_{\tau}\partialartial_y^{\mu}\left(a_{nj}(w)\partialartial_y W_j\right)-\beta_{\tau}e^{\fracrac{s}{2}}\partialartial_y^{\mu}\left(\overline{W}' (\mu_{n}(w)-e^{-\fracrac{s}{2}}W_0-\dot{\xi}(t))\right)\\
&+(1-\beta_{\tau})\partialartial_y^{\mu}(\overline{W}' \overline{W}).
\end{aligned}
\end{equation*}
In addition, for the purpose to establish the weighted estimates of $\mathcal{W}$, it is derived from \eqref{ii-15}
that for any real number $\nablau$,
\begin{equation}\lambdabel{ii-16}
\left(\partialartial_s+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{n}(w)
-\dot{\xi}(t))\right)\partialartial_y\right)\left[\eta^{\nablau}\partialartial^{\mu}\mathcal{W}\right]+\mathcal{D}_{\mu, \nablau}\left[\eta^{\nablau}\partialartial^{\mu}\mathcal{W}\right]=\eta^{\nablau}\mathcal{F}_{\mu},
\end{equation}
where
\begin{equation*}
\mathcal{D}_{\mu, \nablau}=\mathcal{D}_{\mu}-2\nablau y\eta^{-1}\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{n}(w)-\dot{\xi}(t))\right).
\end{equation*}
\subsection{The decomposition on the derivatives of $W$}\lambdabel{ii-d}
To deal with the derivatives of $W$, we adopt the method of eigendecomposition in \cite{PDL1}.
Set
\begin{equation}\lambdabel{ii-17}
\partialartial_{y}^{\mu}W=\sum\limits_{m=1}^{n}W_{\mu}^{m}\gammaamma_{m}(w),
\end{equation}
where $\left\{\gammaamma_{m}(w)\right\}_{m=1}^{n}$ have been defined in \eqref{i-71}.
Acting each left eigenvector $\ell_m(w)$ on both sides of \eqref{ii-11a}
and substituting the expansion \eqref{ii-17} into \eqref{ii-11a} yield
\begin{equation}\lambdabel{ii-18}
\left(\partialartial_s+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_{m}(w)-\dot{\xi}(t))\right)\partialartial_y +\fracrac{3}{2}\mu\right)W_{\mu}^{m}=\mathbb{F}_{\mu}^m,
\end{equation}
where
\begin{equation*}
\mathbb{F}_{\mu}^m=-\sum\limits_{j=1}^{n}W_{\mu}^{j}\ell_m(w)\cdot\left(\partialartial_s\gammaamma_j(w)
+((\fracrac{3}{2}y-e^{\fracrac{s}{2}}\beta_{\tau}\dot{\xi}(t))I_n+e^{\fracrac{s}{2}}\beta_{\tau}A(w))\partialartial_y\gammaamma_j(w)\right)
+\ell_m(w)\cdot F_{\mu}.
\end{equation*}
On the other hand, for any real number $\nablau$, it is derived from \eqref{ii-18} that
\begin{equation}\lambdabel{ii-19}
\left(\partialartial_s+\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_m(w)
-\dot{\xi}(t))\right)\partialartial_y\right)\left[\eta^{\nablau}W_{\mu}^{m}\right]
+\mathbb{D}_{\mu, \nablau}^{m}\left[\eta^{\nablau}W_{\mu}^{m}\right]=\eta^{\nablau}\mathbb{F}_{\mu}^{m},
\end{equation}
where
\begin{equation*}
\mathbb{D}_{\mu, \nablau}^{m}=\fracrac{3\mu}{2}-2\nablau y\eta^{-1}\left(\fracrac{3}{2}y+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t)\right).
\end{equation*}
\subsection{Initial data and main results under the modulation coordinates}\lambdabel{ii-e}
Under the constrains \eqref{i-8}-\eqref{i-11} and the definition \eqref{ii-3},
the initial data of $W(y, s)$ on $s=-\log\varepsilon$ can be determined accordingly.
Indeed, due to \eqref{i-8} and the definitions \eqref{ii-1} and \eqref{ii-3} (also see \eqref{ii-10}),
the initial data of the modulation variables $\tau(t), \xi(t)$ and $\kappa(t)$ on $t=-\varepsilon$ are
\begin{equation}\lambdabel{iii-14}
\tau(-\varepsilon)=0,\ \xi(-\varepsilon)=0,\ \kappa(-\varepsilon)=w_n(0, -\varepsilon)=\kappa_0\varepsilon^{\fracrac{1}{3}}.
\end{equation}
In addition, for the bad component $W_0(y, s)$, it is derived from \eqref{i-8}-\eqref{i-9} and \eqref{ii-2}-\eqref{ii-3}
that on $s=-\log\varepsilon$,
\begin{equation}\lambdabel{iii-6}
W_{0}(0, s)=\partialartial_{y}^{2}W_{0}(0, s)=0,\ \partialartial_{y}W_{0}(0, s)=\min\limits_{y\in\mathbb{R}}\partialartial_{y}W_{0}(y, s)=-1,\ \partialartial_{y}^{3}W_{0}(0, s)=6
\end{equation}
and
\begin{subequations}\lambdabel{iii-7}\begin{align}
&|W_{0}(y, -\log\varepsilon)|\leq 2\varepsilon^{-\fracrac{1}{30}},\lambdabel{iii-7a}\\
&\text{$|\mathcal{W}(y, -\log\varepsilon)|\leq \varepsilon\eta^{\fracrac{1}{6}}(y)$
and $|\partialartial_{y}\mathcal{W}(y, -\log\varepsilon)|\leq \varepsilon\eta^{-\fracrac{1}{3}}(y)$ for $|y|\leq\mathcal{L}$},\lambdabel{iii-7b}\\
&|\partialartial_y^4 \mathcal{W}(y, -\log\varepsilon)|\leq \varepsilon^{\fracrac{1}{9}}\quaduad\text{for $|y|\leq 1$},\lambdabel{iii-7d}\\
&|\partialartial_y W_0(y, -\log\varepsilon)|\leq 2\eta^{-\fracrac{1}{3}}(y)\boldsymbol{1}_{\{\mathcal{L}\leq |y|\leq 2\mathcal{L}\}}+\eta^{-1}(y)\boldsymbol{1}_{\{|y|\gammaeq 2\mathcal{L}\}}\quaduad\text{for $|y|\gammaeq\mathcal{L}$}.\lambdabel{iii-7c}
\end{align}
\end{subequations}
For the good components of $W$, it follows from \eqref{i-10} and \eqref{ii-2}-\eqref{ii-3} that for $j\nablaeq n$
\begin{subequations}\lambdabel{iii-8}\begin{align}
&|W_{j}(y, -\log\varepsilon)|\leq \varepsilon,\lambdabel{iii-8a}\\
&|\partialartial_y W_j(y, -\log\varepsilon)|\leq \varepsilon^{\fracrac{3}{2}-3\nablau} \eta^{-\nablau}(y)\ (\nablau\in [0, \fracrac{1}{3}]),\lambdabel{iii-8b}\\
&|\partialartial_y^2 W_j(y, -\log\varepsilon)|\leq \varepsilon^{\fracrac{7}{6}}\eta^{-\fracrac{1}{3}}(y).\lambdabel{iii-8c}
\end{align}
\end{subequations}
Following \eqref{i-11}, the initial energy of $\partialartial_y^{\mu_0}W$ on $s=-\log\varepsilon$ satisfies
\begin{equation}\lambdabel{iii-9}
\sum\limits_{j=1}^{n-1}\|\partialartial_{y}^{\mu_0}W_{j}(\cdot, -\log\varepsilon)\|_{L^{2}(\mathbb{R})}
+\varepsilon^{\fracrac{3}{2}}\|\partialartial_{y}^{\mu_0}W_{0}(\cdot, -\log\varepsilon)\|_{L^{2}(\mathbb{R})}\lesssim \varepsilon^{\fracrac{3}{2}}.
\end{equation}
Under the preparations above, the new version of Theorem \ref{thmi-1} under the modulated coordinates can be stated as:
\begin{thm}\lambdabel{thmii-1} {\it Under the conditions in \eqref{i-7} and the notations in \eqref{ii-1}-\eqref{ii-3}, there exists a positive constant $\varepsilon_0$ such that when $0<\varepsilon<\varepsilon_0$, the system \eqref{ii-4}-\eqref{ii-5} and \eqref{ii-12} with the initial data satisfying \eqref{iii-14}-\eqref{iii-9} has a global-in-time solution $W$ and $\tau(t), \xi(t), \kappa(t)$, which satisfy
\begin{enumerate}[$(1)$]
\item $\lim\limits_{s\to +\infty}\xi(t)=x^*=O(\varepsilon^2), \lim\limits_{s\to +\infty}\tau(t)=T^*=O(\varepsilon^{\fracrac{4}{3}})$.
\item $|\dot{\tau}(t)|\lesssim \varepsilon^{\fracrac{1}{3}},\ |\dot{\xi}(t)|\lesssim \varepsilon,\ |\dot{\kappa}(t)|\lesssim 1$, and $|\tau(t)|\lesssim\varepsilon^{\fracrac{4}{3}},\ |\xi(t)|\lesssim \varepsilon^2,\ |\kappa(t)-\kappa_0\varepsilon^{\fracrac{1}{3}}|\lesssim\varepsilon$.
\item $|W_0(y, s)|\lesssim\varepsilon^{\fracrac{1}{3}}e^{\fracrac{s}{2}}, |W_i(y, s)|\lesssim\varepsilon\ (1\leq i\leq n-1)$.
\item With respect to $W_0$,
\begin{equation*}\begin{cases}
\text{$|W_0(y, s)-\overline{W}(y)|\leq\varepsilon^{\fracrac{1}{11}}\eta^{\fracrac{1}{6}}(y)$
and $|\partialartial_y (W_0(y, s)-\overline{W}(y))|\leq \varepsilon^{\fracrac{1}{12}}\eta^{-\fracrac{1}{3}}(y)$ for $|y|\leq\mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}$},\\
|\partialartial_y W_0(y, s)|\leq\fracrac{7}{6}\eta^{-\fracrac{1}{3}}(y)\quaduad\text{for $|y|\gammaeq\mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}$}.
\end{cases}
\end{equation*}
\item For $1\leq j\leq n-1$,
\begin{equation*}
|\partialartial_y W_j(y, s)|\lesssim e^{(3\nablau-\fracrac{3}{2})s}\eta^{-\nablau}(y)\ (\nablau\in [0, \fracrac{1}{3}]), |\partialartial_y^2 W_j(y, s)|\lesssim e^{(\nablau^+-\fracrac{7}{6})s}\eta^{-\fracrac{1}{3}}(y)\ (\nablau^+>0).
\end{equation*}
\item For $\mu_0$ given in \eqref{iv-50},
\begin{equation*}
\|\partialartial_y^{\mu_0}W_j(\cdot, s)\|_{L^2(\mathbb{R})}\lesssim e^{-\fracrac{3}{2}s}\ (1\leq j\leq n-1),\ \|\partialartial_y^{\mu_0}W_n(\cdot, s)\|_{L^2(\mathbb{R})}\lesssim e^{-\fracrac{s}{2}}.
\end{equation*}
\end{enumerate}}
\end{thm}
\begin{rem}\lambdabel{remii-2} In Theorem \ref{thmii-1}, the spatial decay estimates in $(4)$ and $(5)$
come from the influences of the initial data $W(y, -\log\varepsilon)$ without compact support.
\end{rem}
\section{Bootstrap assumptions}\lambdabel{iv}
Since the local existence of \eqref{i-6} was known already (one can see \cite{MA} for instance),
we utilize the continuous induction to establish the global-in-time estimates in Theorem \ref{thmii-1}. According to the initial data in \eqref{iii-14}-\eqref{iii-9}, we first make the following induction assumptions. In what follows, $M>0$ is denoted as a
suitably large constant, which is independent of $\varepsilon$.
For the modulation variables in \eqref{ii-1}, suppose that
\begin{subequations}\lambdabel{iv-1}\begin{align}
&|\kappa(t)-\kappa_0\varepsilon^{\fracrac{1}{3}}|\leq M\varepsilon,\quaduad|\tau(t)|\leq M\varepsilon^{\fracrac{4}{3}},\quaduad\quaduad |\xi(t)|\leq M\varepsilon^2,\lambdabel{iv-1a}\\
&|\dot{\kappa}(t)|\leq M,\quaduad\quaduad |\dot{\tau}(t)|\leq M\varepsilon^{\fracrac{1}{3}},\quaduad\quaduad |\dot{\xi}(t)|\leq M\varepsilon.\lambdabel{iv-1b}
\end{align}
\end{subequations}
For the bad unknown $W_0$ and the related $\mathcal{W}$ in \eqref{ii-14}, the bootstrap assumptions are
\begin{subequations}\lambdabel{iv-2}\begin{align}
&\|W_0(\cdot, s)\|_{L^{\infty}}\leq M\varepsilon^{\fracrac{1}{3}}e^{\fracrac{s}{2}},\ |\partialartial_y^{\mu}W_0(y, s)|\leq M\ (1\leq\mu\leq 4),\lambdabel{iv-2a}\\
&|\partialartial_y W_0(y, s)|\leq \fracrac{7}{6}\eta^{-\fracrac{1}{3}}(y)\quaduad\text{for $|y|\gammaeq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}$},\lambdabel{iv-2b}\\
&\text{$|\mathcal{W}(y, s)|\leq \varepsilon^{\fracrac{1}{11}} \eta^{\fracrac{1}{6}}(y)$
and $|\partialartial_y\mathcal{W}(y, s)|\leq \varepsilon^{\fracrac{1}{12}}\eta^{-\fracrac{1}{3}}(y)$ for $|y|\leq\mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}$}.\lambdabel{iv-2c}
\end{align}
\end{subequations}
For the good unknowns $W_i (1\leq i\leq n-1)$, it is assumed that
\begin{subequations}\lambdabel{iv-3}\begin{align}
&|W_i(y, s)|\leq M\varepsilon,\ |\partialartial_y^{\mu} W_i(y, s)|\leq Me^{-\fracrac{3}{2}s}\ (1\leq\mu\leq 4),\lambdabel{iv-3a}\\
&|\partialartial_y W_i(y, s)|\leq Me^{(3\nablau-\fracrac{3}{2})s}\eta^{-\nablau}(y)\ (\nablau\in [0, \fracrac{1}{3}]),\lambdabel{iv-3b}\\
&|\partialartial_y^2 W_i(y, s)|\leq Me^{(\nablau^+-\fracrac{7}{6})s}\eta^{-\fracrac{1}{3}}(y)\ (\nablau^+>0).\lambdabel{iv-3c}
\end{align}
\end{subequations}
In addition, we make the following auxiliary assumptions with $\ell=\fracrac{1}{M^4}$,
\begin{equation}\lambdabel{iv-4}
|\partialartial_y^{\mu}\mathcal{W}(y, s)|\leq \varepsilon^{\fracrac{1}{10}}\ell^{4-\mu},\ 0\leq \mu\leq 4,\ |y|\leq \ell.
\end{equation}
With respect to the energies of the higher order derivatives of $W$, by
fixing $\mu_0$ to be the minimum positive integer such that
\begin{equation}\lambdabel{iv-50}
\mu_0\gammaeq 6,\ 3\mu_0-e^{\fracrac{s}{2}}\beta_{\tau}\partialartial_y\mu_m(w)\gammaeq \fracrac{13}{2}\ (1\leq m\leq n-1),
\end{equation}
we assume
\begin{equation}\lambdabel{iv-6}
\sum\limits_{m=1}^{n-1}\|\partialartial_y^{\mu_0}W_m(\cdot, s)\|_{L^2(\mathbb{R})}\leq Me^{-\fracrac{3}{2}s},\quaduad \|\partialartial_y^{\mu_0}W_n(\cdot, s)\|_{L^2(\mathbb{R})}\leq Me^{-\fracrac{s}{2}}.
\end{equation}
\section{Bootstrap estimates on the bad component of $W$}\lambdabel{vi}
In the section, we close the bootstrap arguments on $W_0$ and $\mathcal{W}$.
\subsection{The analysis on the characteristics of \eqref{ii-5}}
We now study some properties of the characteristics of \eqref{ii-5}. For any point $(y_0, \zeta_0)$ with $\zeta_0\gammaeq -\log\varepsilon$,
the characteristics $y(\zeta)=y(\zeta; y_0, \zeta_0)$ of \eqref{ii-5}
starting from $(y_0, \zeta_0)$ is defined as
\begin{equation}\lambdabel{vi1-1}\begin{cases}
\dot{y}(\zeta)=\fracrac{3}{2}y(\zeta)+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_n(w)-\dot{\xi}(t(\zeta)))(y(\zeta), \zeta),\ \zeta\gammaeq \zeta_0,\\
y(\zeta_0)=y_0.
\end{cases}
\end{equation}
\begin{prop}\lambdabel{lem5-1} {\it Under the assumptions \eqref{i-7c} and \eqref{iv-1}-\eqref{iv-3},
when $|y_0|\gammaeq \ell$, one has
\begin{equation}\lambdabel{vi1-2}
|y(\zeta)|\gammaeq |y_0|e^{\fracrac{\zeta-\zeta_0}{2}}\ (\zeta\gammaeq\zeta_0).
\end{equation}
In addition, if $(y(\zeta), \zeta)$ goes through some point $(y, s)$ with $|y|\leq\ell$ and $s\gammaeq\zeta_0$, then
\begin{equation}\lambdabel{vi1-21}
|y(\zeta)|\leq \ell\ (\zeta_0\leq \zeta\leq s).
\end{equation}}
\end{prop}
To prove Proposition \ref{lem5-1} and for later uses, we first establish the following results
on $\mu_n(w)-\dot{\xi}(t)$.
\begin{lem}\lambdabel{lem5-0} {\it One has
\begin{equation}\lambdabel{vi1-6}
|\mu_n(w)-\dot{\xi}(t)|\leq e^{-\fracrac{s}{2}}(\fracrac{7}{6}+\varepsilon^{\fracrac{1}{20}})|y|+M^2e^{-s}
\end{equation}
and
\begin{equation}\lambdabel{vi1-61}
|\mu_n(w)-\dot{\xi}(t)-e^{-\fracrac{s}{2}}W_0|\leq \varepsilon^{\fracrac{1}{8}}e^{-\fracrac{s}{2}}|y|^{\fracrac{1}{2}}+M^2e^{-s}.
\end{equation}
}
\end{lem}
\begin{proof} Note that
\begin{equation}\lambdabel{vi1-3}
(\mu_n(w)-\dot{\xi}(t))(y, s)=(\mu_n(w)-\mu_n(w^0))(y, s)+(\mu_n(w^0)-\dot{\xi}(t))(s).
\end{equation}
Then it follows from \eqref{ii-3}, \eqref{ii-10}, \eqref{ii-12c}, \eqref{iv-2a} and \eqref{iv-3a} that
\begin{equation}\lambdabel{vi1-4}
|\mu_n(w^0)-\dot{\xi}(t)|(s)\leq M^2 e^{-s}.
\end{equation}
In addition, due to $\partialartial_{w_n}\mu_n(0)=1$, it is derived from \eqref{i-7c}, \eqref{ii-10}, \eqref{iv-2a}-\eqref{iv-2b}, \eqref{iv-3a} and \eqref{iv-3b} with $\nablau=\fracrac{1}{4}$ that
\begin{equation}\lambdabel{vi1-5}\begin{aligned}
&|\mu_n(w)-\mu_n(w^0)-e^{-\fracrac{s}{2}}W_0|(y, s)\\
=&|\mu_n(w)-\mu_n(w^0)-e^{-\fracrac{s}{2}}\int_0^{y}\partialartial_y W_0(z, s)dz|(y, s)\\
\leq &\sum\limits_{i=1}^{n}|\int_0^1\partialartial_{w_i}\mu_n(\beta w+(1-\beta)w^0) d\beta-\delta_n^i|\cdot |\int_0^{y}\partialartial_y W_i(z, s) dz|\\
\leq &M^2\varepsilon^{\fracrac{1}{4}}e^{-\fracrac{s}{2}}\min\{|y|^{\fracrac{1}{2}}, |y|\}.\end{aligned}
\end{equation}
By \eqref{iv-2b}-\eqref{iv-2c}, \eqref{ii-9b} and \eqref{ii-10}, we arrive at
\begin{equation}\lambdabel{vi1-51}
|W_0(y, s)|=|\int_0^y\partialartial_y W_0(z, s)dz|\leq (\fracrac{7}{6}+\varepsilon^{\fracrac{1}{14}})|y|.
\end{equation}
Substituting \eqref{vi1-4}-\eqref{vi1-51} into \eqref{vi1-3} yields \eqref{vi1-6}-\eqref{vi1-61} and
then the proof of Lemma \ref{lem5-0} is completed.\end{proof}
We now start the proof of Proposition \ref{lem5-1}.
\begin{proof} Since \eqref{vi1-21} can be easily derived from \eqref{vi1-2}, it suffices to prove \eqref{vi1-2}.
Due to $|\beta_{\tau}|\leq 2$ by \eqref{ii-4} and \eqref{iv-1b}, then it follows from \eqref{vi1-1} and \eqref{vi1-6} that
\begin{equation}\lambdabel{vi1-7}
\fracrac{d}{d\zeta}y^2(\zeta)\gammaeq \fracrac{7}{12}y^2(\zeta)-2M^2 e^{-\zeta}|y(\zeta)|\gammaeq \fracrac{13}{24}y^2(\zeta)-M^2 e^{-\zeta}.
\end{equation}
We derive from \eqref{vi1-7} and the assumption $|y_0|\gammaeq\ell$ that
\begin{equation*}
e^{-\fracrac{13}{24}\zeta}y^2(\zeta)\gammaeq e^{-\fracrac{13}{24}\zeta_0}y_0^2-M^2 e^{-(1+\fracrac{13}{24})\zeta_0}\gammaeq \fracrac{1}{4}e^{-\fracrac{13}{24}\zeta_0}\ell^2\ (\zeta\gammaeq \zeta_0)
\end{equation*}
and
\begin{equation*}
|y(\zeta)|\gammaeq \fracrac{1}{2}\ell\quaduad\text{when $\zeta\gammaeq\zeta_0$}.
\end{equation*}
Together with \eqref{vi1-7}, this yields
\begin{equation*}
\fracrac{d}{d\zeta}y^2(\zeta)\gammaeq \fracrac{1}{2}y^2(\zeta).
\end{equation*}
Then \eqref{vi1-2} is obtained and the proof of Proposition \ref{lem5-1} is completed.\end{proof}
\begin{rem}\lambdabel{lem5-2} {\it For each point $(y, s)\in\mathbb{R}\times [-\log\varepsilon, +\infty)$, one can define the
following backward characteristics $y=y(\zeta)$ starting from $(y_0, \zeta_0)=(y_0(y, s), \zeta_0)$
\begin{equation}\lambdabel{vi1-8}\begin{cases}
\dot{y}(\zeta)=\fracrac{3}{2}y(\zeta)+e^{\fracrac{s}{2}}\beta_{\tau}(\mu_n(w)-\dot{\xi}(t))(y(\zeta), \zeta),\ \zeta_0\leq \zeta\leq s,\\
y(\zeta_0)=y_0.
\end{cases}
\end{equation}
According to Proposition \ref{lem5-1}, $(y(\zeta), \zeta)$ can be clarified into one of the following cases:
\begin{enumerate}[{\bf Case} $1$.]
\item Set $\zeta_0=-\log\varepsilon$ and there are no additional constrains on $y(\zeta)$.
\item When $|y|\leq \ell$, set $\zeta_0=-\log\varepsilon$,
then $|y(\zeta)|\leq \ell$ holds for $-\log\varepsilon\leq\zeta\leq s$ due to \eqref{vi1-21}.
\item When $|y|\gammaeq \ell$, set $(y_0, \zeta_0)$ with $|y_0|>\ell$ and $\zeta_0=-\log\varepsilon$ or $|y_0|=\ell$ and $\zeta_0\gammaeq-\log\varepsilon$, then $|y(\zeta)|\gammaeq |y_0| e^{\fracrac{\zeta-\zeta_0}{2}}$ holds
for $\zeta_0\leq\zeta\leq s$ due to \eqref{vi1-2}.
\end{enumerate}}
\end{rem}
\subsection{Bootstrap estimate on $W_0$}\lambdabel{vi-a}
For each point $(y, s)\in\mathbb{R}\times [-\log\varepsilon, +\infty)$, it follows from
the {\bf Case} 1 in Remark \ref{lem5-2} and \eqref{ii-11b} with $\mu=0$ that
\begin{equation}\lambdabel{vi2-1}
(\fracrac{d}{d\zeta}-\fracrac{1}{2})W_0(y(\zeta), \zeta)=F_0^0(y(\zeta), \zeta).
\end{equation}
By \eqref{iv-1b}, \eqref{iv-2a} and \eqref{iv-3a}, we have
\begin{equation}\lambdabel{vi2-2}\begin{aligned}
|F_0^0(y(\zeta), \zeta)|\leq& e^{-\fracrac{\zeta}{2}}(|\beta_{\tau}||\dot{\kappa}(t)|)(\zeta)
+\beta_{\tau}e^{\zeta}\sum\limits_{j=1}^{n-1}(|a_{nj}(w)||\partialartial_y W_j|)(y(\zeta), \zeta)\\
\leq& 2Me^{-\fracrac{\zeta}{2}}+M^2 e^{-\fracrac{\zeta}{2}}\leq 2M^2 e^{-\fracrac{\zeta}{2}}.
\end{aligned}
\end{equation}
Combining \eqref{vi2-1} with \eqref{vi2-2} yields
\begin{equation}\lambdabel{vi2-3}\begin{aligned}
e^{-\fracrac{s}{2}}|W_0(y, s)|&=|e^{\fracrac{1}{2}\log\varepsilon}W_0(y_0, -\log\varepsilon)
+\int_{-\log\varepsilon}^{s}e^{-\fracrac{\zeta}{2}}F_0^0(y(\zeta), \zeta)d\zeta|\\
&\leq \sqrt{\varepsilon}\|W_0(\cdot, -\log\varepsilon)\|_{L^{\infty}}+2M^2\varepsilon.
\end{aligned}
\end{equation}
Then it is derived from \eqref{vi2-3} and \eqref{iii-7a} that
\begin{equation}\lambdabel{vi2-4}
\|W_0(\cdot, s)\|_{L^{\infty}}\leq \varepsilon^{\fracrac{1}{3}}e^{\fracrac{s}{2}}.
\end{equation}
\subsection{Bootstrap estimate on $\mathcal{W}$ when $|y|\leq\ell$}\lambdabel{vi-b}
For each $(y, s)\in \mathbb{R}\times [-\log\varepsilon, +\infty)$, by
{\bf Case} 1 in Remark \ref{lem5-2}, it follows from \eqref{ii-15} that
\begin{equation}\lambdabel{vi3-1}\begin{aligned}
\partialartial^{\mu}\mathcal{W}(y, s)=&\partialartial^{\mu}\mathcal{W}(y_0(y, s), -\log\varepsilon)\exp\left({-\int_{-\log\varepsilon}^{s}\mathcal{D}_{\mu}(y(\alpha), \alpha)d\alpha}\right)\\
&+\int_{-\log\varepsilon}^{s}\mathcal{F}_{\mu}(y(\zeta), \zeta)\exp\left(-{\int_{\zeta}^{s}\mathcal{D}_{\mu}(y(\alpha), \alpha)d\alpha}\right)d\zeta.
\end{aligned}
\end{equation}
We next estimate $\partialartial_y^4\mathcal{W}(y, s)$ when $|y|\leq\ell$. In this situation,
$|y(\alpha)|\leq\ell$ holds for $-\log\varepsilon\leq\alpha\leq s$ by \eqref{vi1-21} in Proposition \ref{lem5-1}.
For $\mu=4$ in \eqref{ii-15}, one has from \eqref{ii-3} and \eqref{ii-14} that
\begin{equation}\lambdabel{vi3-11}\begin{aligned}
&\mathcal{D}_4(y(\alpha), \alpha)\\
=&\fracrac{11}{2}+\beta_{\tau}\overline{W}'(y(\alpha))+\sum\limits_{m=1}^{n}4e^{\fracrac{s}{2}}\beta_{\tau}\partialartial_{w_m}\mu_{n}(w)\partialartial_y W_m\\
=&\fracrac{11}{2}+\beta_{\tau}\overline{W}'(y(\alpha))+4\beta_{\tau}\partialartial_y W_0(y(\alpha), \alpha)+\sum\limits_{m=1}^{n}4e^{\fracrac{s}{2}}\beta_{\tau}\left(\partialartial_{w_m}\mu_n(w)-\delta_n^m\right)\partialartial_y W_m\\
=&\fracrac{11}{2}+5\beta_{\tau}\overline{W}'(y(\alpha))+4\beta_{\tau}\partialartial_y\mathcal{W}(y(\alpha), \alpha)+\sum\limits_{m=1}^{n}4e^{\fracrac{s}{2}}\beta_{\tau}\left(\partialartial_{w_m}\mu_n(w)-\delta_n^m\right)\partialartial_y W_m.
\end{aligned}
\end{equation}
Due to $\partialartial_{w_n}\mu_n(0)=1$ and $|y(\alpha)|\leq\ell$ for $-\log\varepsilon\leq\alpha\leq s$,
then by \eqref{vi3-11}, \eqref{i-7}, \eqref{ii-3}, \eqref{ii-9b}, \eqref{iv-1}, \eqref{iv-2} and \eqref{iv-3a},
we arrive at
\begin{equation}\lambdabel{vi3-2}
\mathcal{D}_4\gammaeq \fracrac{11}{2}-5-M\varepsilon^{\fracrac{1}{12}}\gammaeq\fracrac{1}{3}.
\end{equation}
It follows from \eqref{vi3-1} and \eqref{vi3-2} that
\begin{equation}\lambdabel{vi3-3}\begin{aligned}
|\partialartial_y^4\mathcal{W}(y, s)|&\leq|\partialartial_y^4\mathcal{W}(y_0(y, s), -\log\varepsilon)|+3\|\mathcal{F}_{4}(y(\zeta), \zeta)\chi_{[-\log\varepsilon, s]}(\zeta)\|_{L^{\infty}}.
\end{aligned}
\end{equation}
When $|y|\leq\ell$, it is derived from \eqref{vi1-21} in Proposition \ref{lem5-1} and \eqref{iii-7d} that $|y_0|=|y_0(y, s)|\leq \ell<1$ and
\begin{equation}\lambdabel{vi3-4}
|\partialartial_y^4\mathcal{W}(y_0(y, s), -\log\varepsilon)|\leq \varepsilon^{\fracrac{1}{9}}.
\end{equation}
For $\mathcal{F}_4(y(\zeta), \zeta)$ in \eqref{vi3-3}, it follows from \eqref{ii-15}, \eqref{i-7}, \eqref{ii-9}, \eqref{ii-12c}, \eqref{iv-1}, \eqref{iv-2a}-\eqref{iv-2b} and \eqref{iv-3a}-\eqref{iv-3b} that
\begin{equation}\lambdabel{vi3-5}\begin{aligned}
&|\mathcal{F}_4(y(\zeta), \zeta)|\\
\leq& \sum\limits_{\nablau=0}^{3}M|\partialartial_y^{\nablau}\mathcal{W}(y(\zeta), \zeta)|+M^2\varepsilon^{\fracrac{1}{2}}+Me^{\fracrac{s}{2}}\sum\limits_{\nablau=0}^{4}|\partialartial_y^{\nablau}(\mu_{n}(w)
-e^{-\fracrac{s}{2}}W_0-\dot{\xi}(t))|(y(\zeta), \zeta)
\end{aligned}
\end{equation}
and
\begin{equation}\lambdabel{vi3-6}\begin{aligned}
&\sum\limits_{\nablau=0}^{4}|\partialartial_y^{\nablau}(\mu_{n}(w)-e^{-\fracrac{s}{2}}W_0-\dot{\xi}(t))|(y(\zeta), \zeta)\\
=&\sum\limits_{\nablau=0}^{4}|\partialartial_y^{\nablau}\left(\sum\limits_{m=1}^{n}\int_0^1(\partialartial_{w_m}\mu_n(\beta w+(1-\beta)w^0)d\beta-\delta_n^m)\cdot\int_0^{y}\partialartial_y W_m(z, \zeta)dz\right)|(y(\zeta), \zeta)\\
\leq& M^2 e^{-\fracrac{3s}{2}}+M^2\varepsilon^{\fracrac{1}{3}}e^{-\fracrac{s}{2}}(1+|y(\zeta)|).
\end{aligned}
\end{equation}
By the definition of $\ell$ in \eqref{iv-4} and $|y(\zeta)|\leq\ell $ shown in \eqref{vi1-21} when $|y|\leq\ell$,
then it is derived from \eqref{vi3-5}-\eqref{vi3-6} and \eqref{iv-4} that
\begin{equation*}
|\mathcal{F}_4(y(\zeta), \zeta)|\leq\varepsilon^{\fracrac{1}{10}}\ell^{\fracrac{3}{5}}.
\end{equation*}
Combining this with \eqref{vi3-3}-\eqref{vi3-4} yields
\begin{equation}\lambdabel{vi-9}
|\partialartial_y^4\mathcal{W}(y, s)|\leq 2\varepsilon^{\fracrac{1}{10}}\ell^{\fracrac{1}{2}}\leq\fracrac{1}{2}\varepsilon^{\fracrac{1}{10}}\quaduad
\text{for $|y|\leq\ell$}.
\end{equation}
In addition, by \eqref{ii-9a} and \eqref{ii-10}, $\partialartial_y^{\mu}\mathcal{W}(0, s)=0$ for $\mu=1, 2, 3$,
we have that for $|y|\leq\ell$,
\begin{equation}\lambdabel{vi-10}\begin{aligned}
|\partialartial_y^{\nablau}\mathcal{W}(y, s)|=&|\int_{0}^{y}\partialartial_y^{\nablau+1}\mathcal{W}(z, s)dz|\\
\leq& 2\varepsilon^{\fracrac{1}{10}}\ell^{4-\nablau+\fracrac{1}{2}}\leq \fracrac{1}{2}\varepsilon^{\fracrac{1}{10}}\ell^{4-\nablau}\quaduad (0\leq\nablau\leq 3).
\end{aligned}
\end{equation}
\subsection{Bootstrap estimates on $\mathcal{W}$ when $|y|\leq\mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}$}\lambdabel{vi-c}
In the region $\{(y, s): |y|\leq \ell\}$, it is derived from \eqref{vi-10} that
\begin{equation}\lambdabel{vi-11}
|\eta^{-\fracrac{1}{6}}(y)\mathcal{W}(y, s)|\leq 2\varepsilon^{\fracrac{1}{10}}\ell^{4+\fracrac{1}{2}}\quaduad\text{for $|y|\leq\ell$}.
\end{equation}
In the region $\{(y, s): \ell<|y|\leq \mathcal{L}\varepsilon^{\fracrac{1}{4}}e^{\fracrac{s}{4}}\}$, by {\bf Case} 3 in Remark \ref{lem5-2}
and \eqref{ii-16}, one has
\begin{equation*}\lambdabel{vi4-1}
(\fracrac{d}{d\zeta}+\mathcal{D}_{\mu, \nablau}(y(\zeta), \zeta))[\eta^{\nablau}\partialartial^{\mu}\mathcal{W}](y(\zeta) \zeta)=[\eta^{\nablau}\mathcal{F}_{\mu}](y(\zeta), \zeta)
\end{equation*}
and
\begin{equation}\lambdabel{vi4-2}\begin{aligned}
\left[\eta^{\nablau}\partialartial^{\mu}\mathcal{W}\right](y, s)&=[\eta^{\nablau}\partialartial^{\mu}\mathcal{W}](y_0, \zeta_0)\exp\left(-\int_{\zeta_0}^{s}\mathcal{D}_{\mu, \nablau}(y(\alpha), \alpha)d\alpha\right)\\
&+\int_{\zeta_0}^{s}[\eta^{\nablau}\mathcal{F}_{\mu}](y(\zeta), \zeta)\exp\left(-\int_{\zeta}^{s}\mathcal{D}_{\mu, \nablau}(y(\alpha), \alpha)d\alpha\right)d\zeta.
\end{aligned}
\end{equation}
For $(\mu, \nablau)=(0, -\fracrac{1}{6})$ in \eqref{ii-16}, it comes from \eqref{ii-12}, \eqref{iv-2a} and \eqref{iv-3a} that
\begin{equation}\lambdabel{vi4-3}
\mathcal{D}_{0, -\fracrac{1}{6}}=-\fracrac{1}{2(1+y^2)}+\beta_{\tau}\overline{W}'+\fracrac{\beta_{\tau}}{3}\fracrac{y}{1+y^2}e^{\fracrac{s}{2}}(\mu_{n}(w)-\dot{\xi}(t)).
\end{equation}
In addition, by \eqref{vi1-61}, \eqref{iv-2b} and \eqref{ii-10}, one has
\begin{equation}\lambdabel{vi4-4}
|\mu_n(w)-\dot{\xi}(t)|(y(\alpha), \alpha)\leq 4e^{-\fracrac{\alpha}{2}}|y(\alpha)|^{\fracrac{1}{2}}+M^2 e^{-\alpha}.
\end{equation}
Combining \eqref{vi4-3}-\eqref{vi4-4} with \eqref{ii-9b} and \eqref{iv-1b} yields
\begin{equation*}
|\mathcal{D}_{0, -\fracrac{1}{6}}(y(\alpha), \alpha)|\leq 10\eta^{-\fracrac{1}{4}}(y(\alpha))+M^2 e^{-\fracrac{\alpha}{2}},
\end{equation*}
and then it follows from Proposition \ref{lem5-1} and {\bf Case 3} in Remark \ref{lem5-2} that
\begin{equation}\lambdabel{vi4-5}
\int_{\zeta_0}^{+\infty}|\mathcal{D}_{0, -\fracrac{1}{6}}|(y(\alpha), \alpha)d\alpha
\leq 2M^2\varepsilon^{\fracrac{1}{2}}+10\int_{\zeta_0}^{+\infty}\fracrac{1}{(1+\ell^2 e^{\fracrac{\alpha-\zeta_0}{2}})^{\fracrac{1}{4}}}d\alpha\leq 200 \ln\fracrac{1}{\ell}.
\end{equation}
From \eqref{vi4-2} with $(\mu, \nu)=(0, -\frac{1}{6})$ and \eqref{vi4-5}, we obtain
\begin{equation}\label{vi4-6}\begin{aligned}
&|\eta^{-\frac{1}{6}}\mathcal{W}|(y, s)\\
\leq& \frac{1}{\ell^{200}}\left(\eta^{-\frac{1}{6}}(y(\zeta_0))|\mathcal{W}|(y(\zeta_0), \zeta_0)+\int_{\zeta_0}^{s}\eta^{-\frac{1}{6}}(y(\zeta))|\mathcal{F}_0|(y(\zeta), \zeta)d\zeta\right)\quad\text{for $|y|\geq\ell$}.
\end{aligned}
\end{equation}
With \eqref{ii-9b}, \eqref{iii-6}, \eqref{iv-1}, \eqref{iv-3a} and \eqref{vi1-4}-\eqref{vi1-5}, $\mathcal{F}_0$ in \eqref{ii-15} satisfies
\begin{equation}\label{vi4-7}\begin{aligned}
&|\mathcal{F}_0|(y(\zeta), \zeta)\\
\leq& 2e^{\frac{\zeta}{2}}\eta^{-\frac{1}{3}}(y(\zeta))(|\mu_n(w^0)-\dot{\xi}(t)|+|\mu_n(w)-\mu_{n}(w^0)-e^{-\frac{\zeta}{2}}W_0|)\\
&+M^2 e^{-\frac{\zeta}{2}}+2M\varepsilon\eta^{-\frac{1}{6}}(y(\zeta))\\
\leq& 2e^{\frac{\zeta}{2}}\eta^{-\frac{1}{3}}(y(\zeta))(M^2\varepsilon^{\frac{1}{4}}e^{-\frac{\zeta}{2}}\eta^{\frac{1}{4}}(y(\zeta))
+M^2 e^{-\zeta})+M^2 e^{-\frac{\zeta}{2}}+2M\varepsilon\eta^{-\frac{1}{6}}(y(\zeta))\\
\leq& M^3 e^{-\frac{\zeta}{2}}+M^3\varepsilon^{\frac{1}{4}}\eta^{-\frac{1}{12}}(y(\zeta)).
\end{aligned}
\end{equation}
When $|y|\leq\mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$ and $\zeta_0=-\log\varepsilon$, one has $|y(\zeta_0)|\leq\mathcal{L}$ due to \eqref{vi1-2} in Proposition \ref{lem5-1}. Thus, combining \eqref{vi4-7} with \eqref{vi4-6}, \eqref{vi-11}, \eqref{iii-7b}
and {\bf Case} 3 in Remark \ref{lem5-2} shows
\begin{equation}\label{vi-16}
|\eta^{-\frac{1}{6}}(y)\mathcal{W}|(y, s)\leq\frac{1}{2}\varepsilon^{\frac{1}{11}}\quad\text{for $|y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$}.
\end{equation}
\subsection{Bootstrap estimates on $\partial_y\mathcal{W}$ when $|y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$}\label{vi-d}
As in Subsection \ref{vi-c}, the $L^{\infty}$ estimate of $\eta^{\frac{1}{3}}\partial_y\mathcal{W}$ is again treated separately in
the cases $|y|\leq \ell$ and $\ell<|y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$. In the region $\{(y, s): \ell<|y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}\}$, by {\bf Case} 3 in Remark \ref{lem5-2},
it follows from \eqref{vi1-2} in Proposition \ref{lem5-1} that when $|y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$,
\begin{equation}\label{vi-170}
|y(\zeta)|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{\zeta}{4}},\ \zeta_0\leq \zeta\leq s.
\end{equation}
For $|y|\leq\ell$, by \eqref{vi-10}, one has that
\begin{equation}\label{vi-17}
\eta^{\frac{1}{3}}(y)|\partial_y\mathcal{W}(y, s)|\leq 2\varepsilon^{\frac{1}{10}}\ell^{3+\frac{1}{2}}\ (|y|\leq\ell).
\end{equation}
Next, we estimate $\eta^{\frac{1}{3}}\partial_y\mathcal{W}$ when $\ell\leq |y|\leq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$.
For $(\mu, \nu)=(1, \frac{1}{3})$ in \eqref{ii-16}, we have
\begin{equation*}\begin{aligned}
\mathcal{D}_{1, \frac{1}{3}}=&\frac{1}{1+y^2}+\beta_{\tau}\overline{W}'+\beta_{\tau}e^{\frac{s}{2}}\partial_y \mu_{n}(w)-\frac{2y}{3(1+y^2)}\beta_{\tau}e^{\frac{s}{2}}(\mu_{n}(w)-\dot{\xi}(t))\\
=&\frac{1}{1+y^2}+\beta_{\tau}\overline{W}'+\beta_{\tau}\partial_{w_n}\mu_n(w)\partial_y W_0+\beta_{\tau}e^{\frac{s}{2}}\sum\limits_{j\neq n}\partial_{w_j}\mu_n(w)\partial_y W_j\\
&-\frac{2y}{3(1+y^2)}\beta_{\tau}e^{\frac{s}{2}}(\mu_n(w)-\dot{\xi}(t)).
\end{aligned}
\end{equation*}
Combining this with \eqref{i-7c}, \eqref{ii-9b}, \eqref{iv-1b}, \eqref{iv-2a}-\eqref{iv-2b}, \eqref{iv-3a}
and \eqref{vi4-4}-\eqref{vi4-5} yields
\begin{equation*}
|\mathcal{D}_{1, \frac{1}{3}}(y(\alpha), \alpha)|\leq 10\eta^{-\frac{1}{4}}(y(\alpha))+M^2 e^{-\frac{\alpha}{2}}
\end{equation*}
and
\begin{equation}\label{vi-18}
\int_{\zeta_0}^{+\infty}|\mathcal{D}_{1, \frac{1}{3}}|(y(\alpha), \alpha)d\alpha\leq 200\ln\frac{1}{\ell}.
\end{equation}
It is derived from \eqref{vi4-2} and \eqref{vi-18} that for $|y|\geq\ell$,
\begin{equation}\label{vi-19}\begin{aligned}
&|\eta^{\frac{1}{3}}(y)\partial_y\mathcal{W}|(y, s)\\
\leq& \frac{1}{\ell^{200}}\left(\eta^{\frac{1}{3}}(y(\zeta_0))|\partial_y\mathcal{W}|(y(\zeta_0), \zeta_0)+\int_{\zeta_0}^{s}\eta^{\frac{1}{3}}(y(\zeta))|\mathcal{F}_1|(y(\zeta), \zeta)d\zeta\right).
\end{aligned}
\end{equation}
For $\mathcal{F}_1$ in \eqref{vi-19}, one has from \eqref{ii-15}, \eqref{i-7d} and \eqref{iv-1}, \eqref{iv-2a}, \eqref{iv-3a} that
\begin{equation}\label{vi-20}\begin{aligned}
|\mathcal{F}_1|&\leq 2|\overline{W}''\mathcal{W}|+Me^{s}|W|\sum\limits_{j=1}^{n-1}|\partial_y^2 W_j|+Me^{s}\sum\limits_{j=1}^{n-1}|\partial_y W_j|^2+Me^{\frac{s}{2}}\sum\limits_{j=1}^{n-1}|\partial_y W_0 \partial_y W_j|\\
&+2e^{\frac{s}{2}}|\overline{W}''||\mu_{n}(w)-e^{-\frac{s}{2}}W_0-\dot{\xi}(t)|+Me^{\frac{s}{2}}|\overline{W}'|\sum\limits_{j=1}^{n-1}|\partial_y W_j|\\
&+2|\overline{W}'||\partial_{w_n}\mu_{n}(w)-1||\partial_y W_0|+2M\varepsilon(|\overline{W}'|^2+|\overline{W}''\overline{W}|)
:=\sum\limits_{k=1}^{8}I_k.
\end{aligned}
\end{equation}
For $I_1$, it follows from \eqref{ii-9b}, \eqref{vi-16} and \eqref{vi-170} that
\begin{equation}\label{vi-21}
\eta^{\frac{1}{3}}(y(\zeta))I_1(y(\zeta), \zeta)\leq 2\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{3}}(y(\zeta)).
\end{equation}
For $I_2$, we have from \eqref{ii-3}, \eqref{iv-1a}, \eqref{vi2-4}, \eqref{iv-3a} and \eqref{iv-3c} with $\nu^+=\frac{1}{24}$ that
\begin{equation}\label{vi-22}
\eta^{\frac{1}{3}}(y(\zeta))I_2(y(\zeta), \zeta)\leq M^4\varepsilon^{\frac{1}{3}}e^{-\frac{1}{8}\zeta}.
\end{equation}
For $I_3$, \eqref{iv-3b} with $\nu=\frac{1}{6}$ shows
\begin{equation}\label{vi-23}
\eta^{\frac{1}{3}}(y(\zeta))I_3(y(\zeta), \zeta)\leq M^3 e^{-\zeta}.
\end{equation}
In a similar way, due to $\partial_{w_n}\mu_n(0)=1$, it is derived from \eqref{i-7}, \eqref{iv-2a}, \eqref{iv-3a}
and \eqref{ii-9b} that
\begin{equation}\label{vi-24}
\eta^{\frac{1}{3}}(y(\zeta))(I_4+I_6+I_7+I_8)(y(\zeta), \zeta)\leq M^3 e^{-\zeta}
+M^2\varepsilon^{\frac{1}{3}}\eta^{-\frac{1}{3}}(y(\zeta)).
\end{equation}
In addition, it follows from \eqref{ii-9b} and \eqref{vi1-61} that
\begin{equation}\label{vi-25}
\eta^{\frac{1}{3}}(y(\zeta))I_5(y(\zeta), \zeta)\leq M^2\varepsilon^{\frac{1}{8}}\eta^{-\frac{1}{2}}(y(\zeta)).
\end{equation}
Substituting \eqref{vi-21}-\eqref{vi-25} into \eqref{vi-20} yields
\begin{equation}\label{vi-26}
\eta^{\frac{1}{3}}(y(\zeta))|\mathcal{F}_1|(y(\zeta), \zeta)\leq 4\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{3}}(y(\zeta))+\varepsilon^{\frac{1}{11}}e^{-\frac{1}{8}\zeta}.
\end{equation}
Analogously to the derivation of \eqref{vi-16}, combining \eqref{vi-26} with \eqref{vi-19}, \eqref{vi-17} and \eqref{iii-7b} yields
\begin{equation}\label{vi-27}
\eta^{\frac{1}{3}}(y)|\partial_y\mathcal{W}(y, s)|\leq\frac{1}{2}\varepsilon^{\frac{1}{12}}\quad \text{for $|y|\leq\mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$}.
\end{equation}
\subsection{More delicate estimates for $W_0$}\label{vi-e}
In this subsection, we mainly estimate the weighted $L^{\infty}$ norms of $\eta^{-\frac{1}{6}}W_0$ and $\eta^{\frac{1}{3}}\partial_y W_0$
in the whole spatial space.
Since the proof procedures are very similar to the processes in Subsection \ref{vi-c} and Subsection \ref{vi-d},
we just give a sketch of the related verifications. For $\mu\in\mathbb{N}_0$ and $\nu\in\mathbb{R}$, it is
derived from \eqref{ii-5} that
\begin{equation}\label{vi-28}
\left(\partial_s+(\frac{3}{2}y+\beta_{\tau}e^{\frac{s}{2}}(\mu_n(w)-\dot{\xi}(t)))\partial_y\right)[\eta^{\nu}\partial^{\mu}W_0]+\overline{D}_{\mu, \nu}[\eta^{\nu}\partial^{\mu}W_0]=\eta^{\nu}\overline{F}_{\mu}^{0},
\end{equation}
where
\begin{equation*}
\overline{D}_{\mu, \nu}=\frac{3\mu-1}{2}+\mu\beta_{\tau} e^{\frac{s}{2}}\partial_y\mu_n(w)+\beta_{\tau}\partial_y W_0\boldsymbol{1}_{\mu\geq 2}-\frac{2\nu y}{1+y^2}(\frac{3}{2}y+\beta_{\tau}e^{\frac{s}{2}}(\mu_{n}(w)-\dot{\xi}(t)))
\end{equation*}
and
\begin{equation*}\begin{aligned}
\overline{F}_{\mu}^{0}&=-\sum\limits_{2\leq\beta\leq\mu-1}
C_{\mu}^{\beta}\beta_{\tau}e^{\frac{s}{2}}\partial_y^{\beta}\mu_n(w)\partial_y^{\mu-\beta+1}W_0
-\beta_{\tau}e^{\frac{s}{2}}\partial_y^{\mu}(\mu_n(w)-w_n)\partial_y W_0\boldsymbol{1}_{\mu\geq 2}\\
&-\sum\limits_{j=1}^{n-1}\beta_{\tau}e^{s}\partial_y^{\mu}(a_{nj}(w)\partial_y W_j)-\beta_{\tau}e^{-\frac{s}{2}}\dot{\kappa}(t)\delta_{\mu}^0.
\end{aligned}
\end{equation*}
When $|y|\leq\mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$, it follows from \eqref{ii-9b}, \eqref{vi-16}
and \eqref{vi-27} that
\begin{equation}\label{vi-29}
\eta^{-\frac{1}{6}}(y)|W_0(y, s)|\leq 1+\frac{1}{2}\varepsilon^{\frac{1}{11}},\ \eta^{\frac{1}{3}}(y)|\partial_y W_0(y, s)|\leq 1+\frac{1}{2}\varepsilon^{\frac{1}{12}}.
\end{equation}
When $|y|\geq \mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{s}{4}}$, the backward characteristics $y=y(\zeta)$ is
defined by \eqref{vi1-8} with $(y_0, \zeta_0)$ satisfying $|y_0|\geq \mathcal{L}, \zeta_0=-\log\varepsilon$ or $|y_0|=\mathcal{L}\varepsilon^{\frac{1}{4}}e^{\frac{\zeta_0}{4}}, \zeta_0>-\log\varepsilon$. In this case, we have $|y(\zeta)|\geq\mathcal{L}e^{\frac{\zeta-\zeta_0}{4}}$ for $\zeta\geq\zeta_0$ due to \eqref{vi1-2}.
Moreover, it is derived from \eqref{vi-28} that
\begin{equation}\label{vi6-0}\begin{aligned}
\left[\eta^{\nu}\partial^{\mu}W_0\right](y, s)&=[\eta^{\nu}\partial^{\mu}W_0](y_0, \zeta_0)\exp \left(-\int_{\zeta_0}^{s}\overline{D}_{\mu, \nu}(y(\alpha), \alpha)d\alpha\right)\\
&+\int_{\zeta_0}^{s}[\eta^{\nu}\overline{F}_{\mu}^0](y(\zeta), \zeta)\exp\left(-\int_{\zeta}^{s}\overline{D}_{\mu, \nu}(y(\alpha), \alpha)d\alpha\right)d\zeta.
\end{aligned}
\end{equation}
In addition, by \eqref{vi4-4} and \eqref{iv-2a}-\eqref{iv-2b}, \eqref{iv-3a}, we have
\begin{equation}\label{vi6-1}\begin{aligned}
&|\overline{D}_{0, -\frac{1}{6}}(y(\alpha), \alpha)|\\
=&\left|-\frac{1}{2(1+y(\alpha)^2)}+\frac{y(\alpha)}{3(1+y(\alpha)^2)}e^{\frac{\alpha}{2}}\beta_{\tau}(\mu_n(w)-\dot{\xi}(t))\right|\\
\leq&\frac{1}{2}\eta^{-1}(y(\alpha))+e^{\frac{\alpha}{2}}\eta^{-\frac{1}{2}}(y(\alpha))|\mu_n(w)-\dot{\xi}(t)|\\
\leq&5\eta^{-\frac{1}{4}}(y(\alpha))+M^2 e^{-\frac{\alpha}{2}}
\end{aligned}
\end{equation}
and
\begin{equation}\label{vi6-2}\begin{aligned}
&|\overline{D}_{1, \frac{1}{3}}(y(\alpha), \alpha)|\\
=&\left|\frac{1}{1+y(\alpha)^2}+e^{\frac{\alpha}{2}}\beta_{\tau}\partial_y\mu_n(w)
-\frac{2y(\alpha)}{3(1+y(\alpha)^2)}e^{\frac{\alpha}{2}}\beta_{\tau}(\mu_n(w)-\dot{\xi}(t))\right|\\
\leq&\eta^{-1}(y(\alpha))+2e^{\frac{\alpha}{2}}\eta^{-\frac{1}{2}}(y(\alpha))|\mu_n(w)
-\dot{\xi}(t)|+2e^{\frac{\alpha}{2}}\sum\limits_{j=1}^{n}|\partial_{w_j}\mu_n(w)||\partial_y W_j|\\
\leq& 20\eta^{-\frac{1}{4}}(y(\alpha))+M^3 e^{-\frac{\alpha}{2}}.
\end{aligned}
\end{equation}
Similarly to \eqref{vi4-5}, for $|y(\zeta)|\geq\mathcal{L}e^{\frac{\zeta-\zeta_0}{4}}$,
one has from \eqref{vi6-1}-\eqref{vi6-2} that
\begin{equation}\label{vi-31}\begin{aligned}
&\int_{\zeta_0}^{s}|\overline{D}_{0, -\frac{1}{6}}|(y(\alpha), \alpha)d\alpha+\int_{\zeta_0}^{s}|\overline{D}_{1, \frac{1}{3}}|(y(\alpha), \alpha)d\alpha\\
\leq& 4M^3\varepsilon^{\frac{1}{2}}+\int_{\zeta_0}^{s}\frac{25}{(1+\mathcal{L}^2 e^{\frac{\zeta-\zeta_0}{2}})^{\frac{1}{4}}}d\zeta\\
\leq & 4M^3\varepsilon^{\frac{1}{2}}+600\ln(1+\mathcal{L}^{-1})\leq \varepsilon^{\frac{1}{20}}.
\end{aligned}
\end{equation}
Combining \eqref{vi-31} with \eqref{vi6-0} yields
\begin{equation}\label{vi-32}
\eta^{-\frac{1}{6}}(y)|W_0(y, s)|\leq (1+2\varepsilon^{\frac{1}{20}})\left(\eta^{-\frac{1}{6}}(y_0)|W_0(y_0, \zeta_0)|+\int_{\zeta_0}^{s}\eta^{-\frac{1}{6}}(y(\zeta))|\overline{F}_0^{0}|(y(\zeta), \zeta)d\zeta\right)
\end{equation}
and
\begin{equation}\label{vi-33}
\eta^{\frac{1}{3}}(y)|\partial_y W_0(y, s)|\leq (1+2\varepsilon^{\frac{1}{20}})\left(\eta^{\frac{1}{3}}(y_0)|\partial_y W_0(y_0, \zeta_0)|+\int_{\zeta_0}^{s}\eta^{\frac{1}{3}}(y(\zeta))|\overline{F}_1^{0}|(y(\zeta), \zeta)d\zeta\right).
\end{equation}
By \eqref{i-7}, \eqref{iv-1}-\eqref{iv-3}, \eqref{iv-3b} with $\nu=\frac{1}{3}$
and \eqref{iv-3c} with $\nu^+=\frac{1}{24}$, $\overline{F}_0^{0}$ and $\overline{F}_1^{0}$ in \eqref{vi-28} satisfy
\begin{equation}\label{vi-34}
|\overline{F}_0^{0}(y(\zeta), \zeta)|\leq 4M^2 e^{-\frac{\zeta}{2}},\ |\overline{F}_1^{0}(y(\zeta), \zeta)|\leq 2M^2e^{-\frac{\zeta}{9}}\eta^{-\frac{1}{3}}(y(\zeta)).
\end{equation}
Therefore, we derive from \eqref{iii-7b}, \eqref{iii-7c}, \eqref{vi-29} and \eqref{vi-32}-\eqref{vi-34} that
\begin{equation}\label{vi-35}
\eta^{-\frac{1}{6}}(y)|W_0(y, s)|\leq 1+\varepsilon^{\frac{1}{21}},\ \eta^{\frac{1}{3}}(y)|\partial_y W_0(y, s)|\leq 1
+\varepsilon^{\frac{1}{21}}.
\end{equation}
For the estimates of $\partial_y^{\mu}W_0$, applying Lemma \ref{lemA-3} with $u=\partial_y^{\mu}W_0$ and $d=1$, $p=q=\infty$, $r=2$, $j=\mu-1$, $m=\mu_0-1$, it follows from \eqref{ii-3}, \eqref{iv-6} and \eqref{vi-35} that for $2\leq \mu\leq 4$ and $\alpha
=\frac{\mu-1}{\mu_0-\frac{1}{2}}\in (0, \frac{6}{11}]$,
\begin{equation}\label{vi-36}
\|\partial_y^{\mu}W_0(\cdot, s)\|_{L^{\infty}}\leq M^{\frac{1}{20}}\|\partial_y^{\mu_0}W_0(\cdot, s)\|_{L^2}^{\alpha}\|\partial_y W_0(\cdot, s)\|_{L^{\infty}}^{1-\alpha}
\leq 2^{1-\alpha}M^{\frac{1}{20}+\alpha}\leq M^{\frac{3}{5}}.
\end{equation}
\section{Bootstrap estimates on good components of $W$}\label{vii}
In this section, we will apply the method of characteristics to establish a series of estimates of $W_m\ (m\neq n)$.
\subsection{Framework for the characteristics method}\label{vii-a}
For $1\leq m\leq n-1$ and any point $(y_0, \zeta_0)\in\mathbb{R}\times [-\log\varepsilon, +\infty)$, we consider the
following forward characteristics $y(\zeta):=y(\zeta; y_0, \zeta_0)$ of \eqref{ii-18} which
starts from $(y_0, \zeta_0)$:
\begin{equation}\label{vii1-0}\begin{cases}
\dot{y}(\zeta)=\frac{3}{2}y(\zeta)+e^{\frac{\zeta}{2}}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))(y(\zeta), \zeta),\\
y(\zeta_0)=y_0.
\end{cases}
\end{equation}
This yields that for $\zeta\geq \zeta_0$,
\begin{equation}\label{vii-1}
y(\zeta)e^{-\frac{3}{2}\zeta}=y_0 e^{-\frac{3}{2}\zeta_0}+\int_{\zeta_0}^{\zeta}e^{-\alpha}\left(\beta_{\tau}(\cdot)(\mu_m(w)-\dot{\xi}(\cdot))\right)(y(\alpha), \alpha)d\alpha:=G_m(\zeta; y_0, \zeta_0).
\end{equation}
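As a heuristic guide (not used in the proofs below), suppose the factor $\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))$ in \eqref{vii-1} is frozen at the constant value $\mu_m(0)$. Then the integral can be evaluated explicitly and
\begin{equation*}
G_m(\zeta; y_0, \zeta_0)=y_0 e^{-\frac{3}{2}\zeta_0}+\mu_m(0)\,(e^{-\zeta_0}-e^{-\zeta}),\qquad
y(\zeta)=e^{\frac{3}{2}\zeta}\Big(y_0 e^{-\frac{3}{2}\zeta_0}+\mu_m(0)\,(e^{-\zeta_0}-e^{-\zeta})\Big).
\end{equation*}
The case distinctions in Lemma \ref{lem7-1} and Lemma \ref{lem7-2} below simply record the possible sign patterns of this model expression; the true coefficient differs from $\mu_m(0)$ by at most a factor lying in $[\frac{1}{2}, \frac{3}{2}]$ (see \eqref{vii1-1}), which is the origin of the constants $\frac{a_m}{2}$, $\frac{3a_m}{2}$ and $2a_m$ appearing there.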
Next we discuss the positions of $y(\zeta; y_0, \zeta_0)$ for the different cases of $(y_0, \zeta_0)$.
\begin{lem}\label{lem7-1} {\it For each $i_0\leq m\leq n-1$, one has $a_m:=\mu_m(0)>0$ by \eqref{i-7a} and the assumption $\mu_n(0)=0$ in Remark \ref{rem1-1}. Then for any point $(y_0, \zeta_0)\in\mathbb{R}\times [-\log\varepsilon, +\infty)$, $y(\zeta):=y(\zeta; y_0, \zeta_0)$
can be classified into the following six cases:
\begin{enumerate}[{\bf Case} $1$.]
\item When $y_0<-4a_m e^{\frac{\zeta_0}{2}}$,\ $y(\zeta)\leq -a_m e^{\frac{3}{2}\zeta-\zeta_0}<0$ holds for $\zeta\geq\zeta_0.$
\item When $ -\frac{a_m}{4}e^{\frac{\zeta_0}{2}}\leq y_0\leq 0$, there exists a number $\zeta^*\geq \zeta_0$ such that $G_m(\zeta^*; y_0, \zeta_0)=0$ and
\begin{equation*}\begin{cases}
-\frac{3}{2}a_m (e^{-\zeta}-e^{-\zeta^*}) e^{\frac{3}{2}\zeta}\leq y(\zeta)\leq -\frac{a_m}{2}(e^{-\zeta}-e^{-\zeta^*})e^{\frac{3}{2}\zeta}\leq 0
\quad\text{for $\zeta_0\leq \zeta\leq \zeta^*$},\\[2mm]
y(\zeta)\geq \frac{a_m}{2}(e^{-\zeta^*}-e^{-\zeta})e^{\frac{3}{2}\zeta}\geq 0\quad\text{for $\zeta\geq \zeta^*$}.
\end{cases}
\end{equation*}
\item When $y_0\geq 0$, one has $y(\zeta)\geq \frac{a_m}{2}(e^{-\zeta_0}-e^{-\zeta})e^{\frac{3}{2}\zeta}\geq 0$
\quad \text{for $\zeta\geq\zeta_0.$}
\item When $(y_0, \zeta_0)\in D^+$ and $(y(\zeta), \zeta)$ lies in the domain $D^+$, there holds
\begin{equation*}
-4a_m e^{\frac{\zeta}{2}}\leq y(\zeta)\leq -\frac{a_m}{4}e^{\frac{\zeta}{2}}<0\quad\text{for $\zeta\geq \zeta_0$},
\end{equation*}
where $D^{+}=\{(y, \zeta): -4a_m e^{\frac{\zeta}{2}}\leq y\leq -\frac{a_m}{4}e^{\frac{\zeta}{2}},\ \zeta\geq -\log\varepsilon\}$.
\item When $(y_0, \zeta_0)\in D^+$ and the characteristics $y=y(\zeta)$ goes through $\partial D^+$ at some point $(\hat y, \hat \zeta)$
with $\hat y=-4a_m e^{\frac{\hat\zeta}{2}}$, we have
\begin{equation*}\begin{cases}
(y(\zeta), \zeta)\in D^+\quad\text{for $\zeta_0\leq \zeta\leq\hat\zeta$},\\
y(\zeta)\leq -a_m e^{\frac{3}{2}\zeta-\hat\zeta}<0\quad\text{for $\zeta\geq\hat\zeta$.}
\end{cases}
\end{equation*}
\item When $(y_0, \zeta_0)\in D^+$ and the characteristics $y=y(\zeta)$ goes through $\partial D^+$ at
some point $(\hat y, \hat \zeta)$ with $\hat y=-\frac{a_m}{2} e^{\frac{\hat\zeta}{2}}$,
there exists $\tilde{\zeta}>\hat\zeta$ such that $G_m(\tilde{\zeta}; y_0, \zeta_0)=0$ and $y=y(\zeta)$ can be divided into
the following three parts:
\begin{equation*}\begin{cases}
(y(\zeta), \zeta)\in D^{+}\quad\text{for $\zeta_0\leq \zeta\leq \hat \zeta$},\\
-2a_m (e^{-\zeta}-e^{-\tilde{\zeta}})e^{\frac{3}{2}\zeta}\leq y(\zeta)\leq -\frac{a_m}{2}(e^{-\zeta}-e^{-\tilde{\zeta}})
e^{\frac{3}{2}\zeta}\leq 0\quad\text{for $\hat \zeta\leq \zeta\leq \tilde{\zeta}$},\\
y(\zeta)\geq\frac{a_m}{2}(e^{-\tilde{\zeta}}-e^{-\zeta})e^{\frac{3}{2}\zeta}\geq 0\quad\text{for $\zeta\geq \tilde{\zeta}$}.
\end{cases}
\end{equation*}
\end{enumerate}
}
\end{lem}
\begin{proof} Since $a_m>0$ for $i_0\leq m\leq n-1$, by \eqref{ii-3}, \eqref{iv-1}, \eqref{iv-2a} and \eqref{iv-3a}, we
then have
\begin{equation}\label{vii1-1}
|\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))-a_m|\leq M^3\varepsilon^{\frac{1}{3}}\leq \frac{a_m}{2}.
\end{equation}
When $y_0<-4a_m e^{\frac{\zeta_0}{2}}$, it is derived from \eqref{vii-1} and \eqref{vii1-1} that
\begin{equation}\label{vii1-2}
y(\zeta)e^{-\frac{3}{2}\zeta}\leq -4a_m e^{-\zeta_0}+\frac{3}{2}a_m (e^{-\zeta_0}-e^{-\zeta})\leq -a_m e^{-\zeta_0}.
\end{equation}
This shows {\bf Case} 1.
When $-\frac{a_m}{4}e^{\frac{\zeta_0}{2}}\leq y_0\leq 0$, it follows from \eqref{vii1-1}
that $G_m(\zeta; y_0, \zeta_0)$ in \eqref{vii-1} satisfies
\begin{equation}\label{vii1-3}\begin{cases}
G_m(\zeta_0; y_0, \zeta_0)=y_0 e^{-\frac{3}{2}\zeta_0}\leq 0,\\
G_m(+\infty; y_0, \zeta_0)\geq y_0 e^{-\frac{3}{2}\zeta_0}+\frac{a_m}{2}e^{-\zeta_0}\geq
-\frac{a_m}{4}e^{-\zeta_0}+\frac{a_m}{2}e^{-\zeta_0}>0.
\end{cases}
\end{equation}
Since $G_m(\zeta; y_0, \zeta_0)$ is a continuous function with respect to the variable $\zeta$, \eqref{vii1-3}
shows that there exists $\zeta^*\geq \zeta_0$ such that $G_m(\zeta^*; y_0, \zeta_0)=0$. In this situation,
we derive from \eqref{vii-1} that
\begin{equation}\label{vii1-4}
y(\zeta)e^{-\frac{3}{2}\zeta}=G_m(\zeta; y_0, \zeta_0)-G_m(\zeta^*; y_0, \zeta_0)=\int_{\zeta^*}^{\zeta}e^{-\alpha}\left(\beta_{\tau}(\cdot)(\mu_m(w)-\dot{\xi}(\cdot))\right)(y(\alpha), \alpha)d\alpha.
\end{equation}
Therefore, {\bf Case} 2 is obtained from \eqref{vii1-1}, \eqref{vii1-4} and $a_m>0$ for $i_0\leq m\leq n-1$.
Based on the results established in {\bf Case} 1-{\bf Case} 2, {\bf Case} 3-{\bf Case} 6 in Lemma \ref{lem7-1}
can be treated in the same way by means of the formula \eqref{vii-1} and the definition of $D^{+}$;
here we omit the details.\end{proof}
\begin{lem}\label{lem7-2} {\it For each $1\leq m\leq i_0-1$, one has $a_m:=\mu_m(0)<0$ by \eqref{i-7a} and the assumption $\mu_n(0)=0$ in Remark \ref{rem1-1}. Then for any point $(y_0, \zeta_0)\in\mathbb{R}\times [-\log\varepsilon, +\infty)$, $y(\zeta)=y(\zeta; y_0, \zeta_0)$
can be classified into the following six cases:
\begin{enumerate}[{\bf Case} $1$.]
\item When $y_0>-4a_m e^{\frac{\zeta_0}{2}}$, $y(\zeta)\geq -a_m e^{\frac{3}{2}\zeta-\zeta_0}>0.$
\item When $ 0\leq y_0\leq -\frac{a_m}{4}e^{\frac{\zeta_0}{2}}$, there exists a number $\zeta^{*}\geq \zeta_0$
such that $G_m(\zeta^*; y_0, \zeta_0)=0$ and
\begin{equation*}\begin{cases}
0\leq -\frac{a_m}{2}(e^{-\zeta}-e^{-\zeta^*})e^{\frac{3}{2}\zeta}\leq y(\zeta)\leq -\frac{3}{2}a_m (e^{-\zeta}-e^{-\zeta^*}) e^{\frac{3}{2}\zeta} \quad\text{for $\zeta_0\leq \zeta\leq \zeta^{*}$},\\[2mm]
y(\zeta)\leq \frac{a_m}{2}(e^{-\zeta^*}-e^{-\zeta})e^{\frac{3}{2}\zeta}\leq 0\quad\text{for $\zeta\geq \zeta^{*}$}.
\end{cases}
\end{equation*}
\item When $y_0\leq 0$, $y(\zeta)\leq \frac{a_m}{2}(e^{-\zeta_0}-e^{-\zeta})e^{\frac{3}{2}\zeta}\leq 0$
\quad\text{for $\zeta\geq \zeta_0.$}
\item When $(y_0, \zeta_0)\in D^-$ and the characteristics $(y(\zeta), \zeta)$ lies in $D^- $, one has
\begin{equation*}
0<-\frac{a_m}{4} e^{\frac{\zeta}{2}}\leq y(\zeta)\leq -4a_m e^{\frac{\zeta}{2}}\quad\text{for $\zeta\geq \zeta_0$},
\end{equation*}
where $D^{-}=\{(y, \zeta): -\frac{a_m}{4}e^{\frac{\zeta}{2}}\leq y\leq -4a_m e^{\frac{\zeta}{2}},\ \zeta\geq -\log\varepsilon\}$.
\item When $(y_0, \zeta_0)\in D^-$ and the characteristics $(y(\zeta), \zeta)$ goes through $\partial D^-$ at some point $(\hat y, \hat \zeta)$ with $\hat y=-4 a_m e^{\frac{\hat\zeta}{2}}$, we have
\begin{equation*}\begin{cases}
(y(\zeta), \zeta)\in D^{-}\quad\text{for $\zeta_0\leq \zeta\leq \hat \zeta$},\\
y(\zeta)\geq -a_m e^{\frac{3}{2}\zeta-\hat \zeta}>0\quad\text{for $\zeta\geq \hat \zeta$}.
\end{cases}
\end{equation*}
\item When $(y_0, \zeta_0)\in D^-$ and the characteristics $(y(\zeta), \zeta)$ goes through $\partial D^-$
at some point $(\hat y, \hat \zeta)$ with $\hat y=-\frac{a_m}{2} e^{\frac{\hat\zeta}{2}}$, there exists $\tilde{\zeta}>\hat\zeta$
such that $y=y(\zeta)$ can be divided into the following three parts:
\begin{equation*}\begin{cases}
(y(\zeta), \zeta)\in D^{-}\quad\text{for $\zeta_0\leq \zeta\leq \hat \zeta$},\\
0\leq -\frac{a_m}{2} (e^{-\zeta}-e^{-\tilde{\zeta}})e^{\frac{3}{2}\zeta}\leq y(\zeta)\leq -2a_m (e^{-\zeta}-e^{-\tilde{\zeta}})e^{\frac{3}{2}\zeta}\quad\text{for $\hat \zeta\leq \zeta\leq \tilde{\zeta}$},\\
y(\zeta)\leq\frac{a_m}{2}(e^{-\tilde{\zeta}}-e^{-\zeta})e^{\frac{3}{2}\zeta}\leq 0\quad\text{for $\zeta\geq \tilde{\zeta}$}.
\end{cases}
\end{equation*}
\end{enumerate}
}
\end{lem}
\begin{proof} Since the proof of Lemma \ref{lem7-2} is just the same as that of Lemma \ref{lem7-1}, we omit the details here.
\end{proof}
Based on Lemma \ref{lem7-1} and Lemma \ref{lem7-2}, we now establish the following result.
\begin{lem}\label{lem7-3} {\it For $1\leq m\leq n-1$ and each forward characteristics $y(\zeta):=y(\zeta; y_0, \zeta_0)$
defined by \eqref{vii1-0}, if the function $\mathcal{D}(z, \zeta)$ satisfies, for some positive constant $c_0$,
\begin{equation}\label{vii-2}
|\mathcal{D}(z, \zeta)|\leq c_0\eta^{-\frac{\kappa}{2}}(z)\ (0<\kappa<1),
\end{equation}
then
\begin{equation}\label{vii-3}
\int_{\zeta_0}^{+\infty}|\mathcal{D}(y(\zeta), \zeta)|d\zeta\leq \frac{16 c_0}{\kappa (1-\kappa)|a_m|^{\kappa}}e^{-\frac{\kappa}{2}\zeta_0}.
\end{equation}}
\end{lem}
\begin{proof} We only consider {\bf Case} 2 in Lemma \ref{lem7-1}. The estimate \eqref{vii-3} for the other {\bf Cases} in Lemma \ref{lem7-1} and Lemma \ref{lem7-2} can be obtained analogously. In the present situation, we choose $\zeta_{1}, \zeta_{2}\in [-\log\varepsilon, +\infty)$ such that
\begin{equation*}
e^{-\zeta_{1}}=\min\{2e^{-\zeta^{*}}, e^{-\zeta_0}\},\ e^{-\zeta_{2}}=\frac{1}{2}e^{-\zeta^{*}}.
\end{equation*}
This implies $\zeta_0\leq \zeta_1\leq \zeta^*<\zeta_2$. Then it is derived from \eqref{vii-2} and {\bf Case} 2
in Lemma \ref{lem7-1} that
\begin{equation*}\label{vii-4}\begin{aligned}
&\int_{\zeta_{0}}^{+\infty}|\mathcal{D}(y(\zeta), \zeta)|d\zeta\\
\leq& \frac{2 c_0}{|a_m|^{\kappa}}\left(\int_{\zeta_{0}}^{\zeta_{1}}\frac{e^{-\frac{3}{2}\kappa \zeta}}{|e^{-\zeta}-e^{-\zeta^{*}}|^{\kappa}}d\zeta+\int_{\zeta_{1}}^{\zeta_{2}}\frac{e^{-\frac{3}{2}\kappa \zeta}}{|e^{-\zeta}-e^{-\zeta^{*}}|^{\kappa}}d\zeta+\int_{\zeta_{2}}^{+\infty}\frac{e^{-\frac{3}{2}\kappa \zeta}}{|e^{-\zeta}-e^{-\zeta^{*}}|^{\kappa}}d\zeta\right)\\
\leq & \frac{2 c_0}{|a_m|^{\kappa}}\left(2\int_{\zeta_{0}}^{\zeta_{1}}e^{-\frac{\kappa}{2}\zeta}d\zeta+2 e^{\kappa \zeta^{*}}\int_{\zeta_{2}}^{+\infty}e^{-\frac{3}{2}\kappa \zeta}d\zeta+4 e^{-\frac{\kappa}{2}\zeta^{*}}\int_{\frac{1}{2}}^{2}|1-t|^{-\kappa}dt\right)\\
\leq& \frac{16 c_0}{\kappa (1-\kappa)|a_m|^{\kappa}}e^{-\frac{\kappa}{2}\zeta_{0}}.
\end{aligned}
\end{equation*}
Therefore, the estimate \eqref{vii-3} holds for the case $i_0\leq m\leq n-1$ in \eqref{vii-1} and {\bf Case} 2
in Lemma \ref{lem7-1}.\end{proof}
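For completeness, we indicate the change of variables behind the middle integral in the above display; this is a supplementary computation which introduces no new assumptions. Setting $t=e^{\zeta^{*}-\zeta}$, so that $e^{-\zeta}=te^{-\zeta^{*}}$ and $d\zeta=-\frac{dt}{t}$, and noting that $t\in[\frac{1}{2}, 2]$ for $\zeta_1\leq\zeta\leq\zeta_2$, one obtains
\begin{equation*}
\int_{\zeta_{1}}^{\zeta_{2}}\frac{e^{-\frac{3}{2}\kappa \zeta}}{|e^{-\zeta}-e^{-\zeta^{*}}|^{\kappa}}d\zeta
=e^{-\frac{\kappa}{2}\zeta^{*}}\int_{e^{\zeta^{*}-\zeta_{2}}}^{e^{\zeta^{*}-\zeta_{1}}}t^{\frac{3}{2}\kappa-1}|1-t|^{-\kappa}dt
\leq 2e^{-\frac{\kappa}{2}\zeta^{*}}\int_{\frac{1}{2}}^{2}|1-t|^{-\kappa}dt
=2e^{-\frac{\kappa}{2}\zeta^{*}}\,\frac{1+2^{\kappa-1}}{1-\kappa},
\end{equation*}
since $t^{\frac{3}{2}\kappa-1}\leq 2$ on $[\frac{1}{2}, 2]$ for $0<\kappa<1$.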
\subsection{Auxiliary analysis}
As in \cite{PDL1}, we will apply the decomposition \eqref{ii-17} and the reduced system \eqref{ii-18}-\eqref{ii-19}
to establish the related estimates for the good components of $W$. To this end, we first show the
relation between $\partial_y^{\mu}W$ and $W_{\mu}^m$ ($1\leq m\leq n$).
Due to \eqref{i-71}, \eqref{ii-3}, \eqref{iv-1a}, \eqref{iv-2a} and \eqref{iv-3a}, one has $\ell_m^n(w)=0$
and $\|\ell_m(w)\|_{L^{\infty}}\leq 2$ for $1\leq m \leq n-1$. Combining this with \eqref{ii-17} yields
\begin{equation}\label{vii-401}\begin{cases}
|W_{\mu}^m|=|\ell_m(w)\cdot \partial_y^{\mu}W|\leq 2\sum\limits_{j=1}^{n-1}|\partial_y^{\mu}W_j|\ (1\leq m\leq n-1),\\
|W_{\mu}^n|=|\ell_n(w)\cdot \partial_y^{\mu}W|\leq 2\sum\limits_{j=1}^{n}|\partial_y^{\mu}W_j|.
\end{cases}
\end{equation}
In addition, by $\|\gamma_j(w)\|=1$ for $1\leq j\leq n$ and $\gamma_k^n(w)=0$ for $1\leq k\leq n-1$,
it follows from \eqref{ii-17} that
\begin{equation}\label{vii-402}\begin{cases}
|\partial_y^{\mu}W_j|=|\sum\limits_{m=1}^{n}W_{\mu}^m\gamma_m^j(w)|\leq \sum\limits_{k=1}^{n-1}|W_{\mu}^k|\ (1\leq j\leq n-1),\\
|\partial_y^{\mu}W_n|=|\sum\limits_{m=1}^{n}W_{\mu}^m\gamma_m^n(w)|\leq |W_{\mu}^n|.
\end{cases}
\end{equation}
\subsection{Bootstrap estimates on $W_j\ (j\neq n)$}\label{vii-b}
First, by \eqref{i-7}, \eqref{ii-9b}, \eqref{iv-1a}, \eqref{iv-2} and \eqref{iv-3a}-\eqref{iv-3b} with $\nu=\frac{1}{3}$,
it is derived from \eqref{ii-18} and \eqref{ii-3} that
\begin{equation}\label{vii-5}\begin{aligned}
&\partial_s\gamma_{j}(w)+((\frac{3}{2}y-e^{\frac{s}{2}}\beta_{\tau}\dot{\xi}(t))I_{n}
+e^{\frac{s}{2}}\beta_{\tau}A(w))\partial_y\gamma_{j}(w)\\
=&\frac{\partial\gamma_j(w)}{\partial w}\left(\partial_s W+(\frac{3}{2}y-e^{\frac{s}{2}}\beta_{\tau}\dot{\xi}(t))\partial_y W\right)+e^{\frac{s}{2}}\beta_{\tau}A(w)\frac{\partial\gamma_j(w)}{\partial w}\partial_y W\\
=&e^{\frac{s}{2}}\beta_{\tau}[A(w), \frac{\partial\gamma_j(w)}{\partial w}]\partial_y W
\end{aligned}
\end{equation}
and
\begin{equation}\label{vii3-1}
\left|e^{\frac{s}{2}}\beta_{\tau}[A(w), \frac{\partial\gamma_j(w)}{\partial w}]\partial_y W\right|\leq M^2 \eta^{-\frac{1}{3}}(y)(1-\delta_{j}^n),
\end{equation}
where $[A, B]=AB-BA$ for two $n\times n$ matrices $A$ and $B$.
For any $(y, s)\in \mathbb{R}\times [-\log\varepsilon, +\infty)$, the backward characteristics $y(\zeta):=y(\zeta; y, s)$
of \eqref{ii-18}, which starts from $(y_0(y, s), -\log\varepsilon)$, is defined as
\begin{equation}\label{vii-6}\begin{cases}
\dot{y}(\zeta)=\frac{3}{2}y(\zeta)+e^{\frac{\zeta}{2}}\beta_{\tau}(\cdot)(\mu_m(w)-\dot{\xi}(\cdot))(y(\zeta), \zeta),\\
y(-\log\varepsilon)=y_0(y, s).
\end{cases}
\end{equation}
Then it is derived from \eqref{ii-18} with $\mu=0$ and \eqref{vii-6} for $1\leq m\leq n-1$ that
\begin{equation}\label{vii-8}
W_{0}^m(y, s)=W_{0}^m(y_0(y, s), -\log\varepsilon)+\int_{-\log\varepsilon}^{s}\mathbb{F}_{0}^{m}(y(\zeta), \zeta)d\zeta,
\end{equation}
where $\mathbb{F}_{0}^m$ is given in \eqref{ii-18}, and it follows from $F_0=0$ in \eqref{ii-11a} and \eqref{vii-5}-\eqref{vii3-1} that
\begin{equation}\label{vii-9}
|\mathbb{F}_0^m(y(\zeta), \zeta)|\leq M^3 \eta^{-\frac{1}{3}}(y(\zeta))\sum\limits_{j=1}^{n-1}\|W_{0}^j\|_{L^{\infty}}.
\end{equation}
In addition, by Lemma \ref{lem7-3} with $\kappa=\frac{2}{3}$ and \eqref{vii-8}-\eqref{vii-9}, we arrive at
\begin{equation}\label{vii-10}\begin{aligned}
|W_{0}^m(y, s)|&\leq \|W_{0}^m(\cdot, -\log\varepsilon)\|_{L^{\infty}}
+M^4 \varepsilon^{\frac{1}{3}}\sum\limits_{j=1}^{n-1}\|W_{0}^j\|_{L^{\infty}}\\
&\leq \|W_{0}^m(\cdot, -\log\varepsilon)\|_{L^{\infty}}
+\varepsilon^{\frac{1}{4}}\sum\limits_{j=1}^{n-1}\|W_{0}^j\|_{L^{\infty}}.
\end{aligned}
\end{equation}
On the other hand, due to the arbitrariness of $(y, s)$, summing over $m$ on both sides of \eqref{vii-10} from $1$ to $n-1$
yields
\begin{equation}\label{vii-11}
\sum\limits_{m=1}^{n-1}\|W_{0}^m\|_{L^{\infty}}\leq 2\sum\limits_{m=1}^{n-1}\|W_{0}^m(\cdot, -\log\varepsilon)\|_{L^{\infty}}.
\end{equation}
Then it follows from \eqref{vii-11}, \eqref{vii-401}-\eqref{vii-402} and \eqref{iii-8a} that
\begin{equation}\label{vii-12}
|W_j(y, s)|\leq 4n\sum\limits_{k=1}^{n-1}\|W_k(\cdot,-\log\varepsilon)\|_{L^{\infty}}\leq 4n^2\varepsilon\ (1\leq j\leq n-1).
\end{equation}
\subsection{Bootstrap estimates on $\partial_y W_j\ (j\neq n)$}\label{vii-c}
With the definition \eqref{vii-6}, it is derived from \eqref{ii-18} for $\mu=1$ and $1\leq m\leq n-1$ that
\begin{equation}\label{vii-13}
e^{\frac{3}{2}s}W_{1}^m(y, s)=\varepsilon^{-\frac{3}{2}}W_{1}^m(y_0(y, s), -\log\varepsilon)+\int_{-\log\varepsilon}^{s}e^{\frac{3}{2}\zeta}\mathbb{F}_1^m(y(\zeta), \zeta)d\zeta.
\end{equation}
Since $a_{in}(w)=0$ and $\ell_i^n(w)=0$ for $1\leq i\leq n-1$ (see \eqref{i-7b} and \eqref{i-71}),
it follows from \eqref{ii-18}, \eqref{vii-402}-\eqref{vii-5}, \eqref{iv-2}-\eqref{iv-3} and \eqref{ii-11a} with $\mu=1$ that
\begin{equation}\label{vii-14}
|\mathbb{F}_1^m(y(\zeta), \zeta)|\leq M^3 \eta^{-\frac{1}{3}}(y(\zeta))\sum\limits_{j=1}^{n-1}|W_{1}^j|(y(\zeta), \zeta).
\end{equation}
As in \eqref{vii-11}, combining \eqref{vii-13}-\eqref{vii-14} with Lemma \ref{lem7-3} yields that for $1\leq m\leq n-1$,
\begin{equation}\label{vii-15}\begin{aligned}
e^{\frac{3}{2}s}|W_{1}^m|(y, s)&\leq \varepsilon^{-\frac{3}{2}}\|W_{1}^m(\cdot, -\log\varepsilon)\|_{L^{\infty}}+M^4\varepsilon^{\frac{1}{3}}\sum\limits_{j=1}^{n-1}\|e^{\frac{3}{2}\varsigma}W_{1}^j(z, \varsigma)\|_{L^{\infty}_{z, \varsigma}}\\
&\leq \varepsilon^{-\frac{3}{2}}\|W_{1}^m(\cdot, -\log\varepsilon)\|_{L^{\infty}}+\varepsilon^{\frac{1}{4}}\sum\limits_{j=1}^{n-1}\|e^{\frac{3}{2}\varsigma}W_{1}^j(z, \varsigma)\|_{L^{\infty}_{z, \varsigma}}.
\end{aligned}
\end{equation}
Similarly to \eqref{vii-11}-\eqref{vii-12}, it is derived from \eqref{vii-15} and \eqref{iii-8b} with $\nu=0$ that
\begin{equation}\label{vii-16}
|\partial_y W_j(y, s)|\leq 4n^2 e^{-\frac{3}{2}s}\ (1\leq j\leq n-1).
\end{equation}
In addition, as in \eqref{vi-36}, by \eqref{iv-6} and \eqref{vii-16}, we have that
for $2\leq\mu\leq 4$ and $\alpha=\frac{\mu-1}{\mu_0-\frac{1}{2}}\in (0, \frac{6}{11}]$,
\begin{equation}\begin{aligned}\label{vii-17}
\|\partial_y^{\mu}W_k(\cdot, s)\|_{L^{\infty}}&\leq M^{\frac{1}{20}}\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^2}^{\alpha}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^{1-\alpha}\\
&\leq (4n^2)^{1-\alpha} M^{\frac{1}{20}+\alpha} e^{-\frac{3}{2}s}\leq M^{\frac{3}{5}}e^{-\frac{3}{2}s}\ (1\leq k\leq n-1).
\end{aligned}
\end{equation}
\subsection{Weighted bootstrap estimates of the good components}\label{vii-d}
For $a=\max\limits_{1\leq m\leq n-1}\{5|\mu_m(0)|+1\}$, set the domain $D^0$ as
\begin{equation}\label{vii-18}
D^0=\{(y, s): |y|<4ae^{\frac{s}{2}}, -\log\varepsilon\leq s<+\infty\}.
\end{equation}
Then $D^{\pm}\subset D^0$ for the domains $D^{\pm}$ defined in Lemma \ref{lem7-1} and Lemma \ref{lem7-2}.
When $(y, s)\in D^0$, it is derived from \eqref{vii-16}-\eqref{vii-17} that
\begin{equation}\label{vii-19}\begin{cases}
\sum\limits_{j=1}^{n-1}|\partial_y W_j(y, s)|\leq 4n^2 e^{-\frac{3}{2}s}\leq M^{\frac{1}{5}} e^{(\nu-\frac{3}{2})s}\eta^{-\nu}(y)\ (0\leq \nu\leq \frac{1}{3}),\\[4mm]
\sum\limits_{j=1}^{n-1}|\partial_y^2 W_j(y, s)|\leq M^{\frac{2}{5}} e^{-\frac{3}{2}s}\leq M^{\frac{3}{5}}e^{-\frac{7}{6}s}\eta^{-\frac{1}{3}}(y).
\end{cases}
\end{equation}
Next we derive the weighted estimates on the good components of $W$ when $(y, s)\notin D^0$.
In this situation, as in \eqref{vii-6}, for each $1\leq m\leq n-1$, the backward characteristics
$y(\zeta):=y(\zeta; y, s)$ of \eqref{ii-19} which starts from $(y_0(y, s), \zeta_0)\notin D^0$ is defined as
\begin{equation}\label{vii-20}\begin{cases}
\dot{y}(\zeta)=\frac{3}{2}y(\zeta)+e^{\frac{\zeta}{2}}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))(y(\zeta), \zeta),\ \zeta_0\leq\zeta\leq s,\\
y(\zeta_0)=y_0(y, s),
\end{cases}
\end{equation}
where either $\zeta_0=-\log\varepsilon$ or $(y_0(y, s),\zeta_0)\in\partial D^0$.
Note that $y(\zeta)$ in \eqref{vii-20} has the following expression
\begin{equation}\label{vii-21}
y(\zeta)e^{-\frac{3}{2}\zeta}=y_0(y, s)e^{-\frac{3}{2}\zeta_0}+\int_{\zeta_0}^{\zeta}e^{-\alpha}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))(y(\alpha), \alpha)d\alpha.
\end{equation}
It follows from \eqref{vii1-1} and \eqref{vii-21} with $|y_0(y, s)|\geq 4a e^{\frac{\zeta_0}{2}}$ that
for $\zeta_0\leq \zeta\leq s$,
\begin{equation}\label{vii-22}
|y(\zeta)|\geq e^{\frac{3}{2}\zeta}(4ae^{-\zeta_0}-2|a_m|(e^{-\zeta_0}-e^{-\zeta}))\geq e^{\frac{3}{2}\zeta-\zeta_0}.
\end{equation}
Based on the definition of $y(\zeta)=y(\zeta; y, s)$ in \eqref{vii-20}, we derive from \eqref{ii-19} that
\begin{equation}\label{vii5-1}\begin{aligned}
\left[\eta^{\nu}W_{\mu}^m\right](y, s)&=[\eta^{\nu} W_{\mu}^m](y_0, \zeta_0)\exp\left(-\int_{\zeta_0}^{s}\mathbb{D}_{\mu,\nu}^m(y(\alpha), \alpha)d\alpha\right)\\
&+\int_{\zeta_0}^{s}[\eta^{\nu}\mathbb{F}_{\mu}^{m}](y(\zeta), \zeta)
\exp\left(-\int_{\zeta}^{s}\mathbb{D}_{\mu,\nu}^m(y(\alpha), \alpha)d\alpha\right)d\zeta
\end{aligned}
\end{equation}
and
\begin{equation}\label{vii-23}\begin{aligned}
\mathbb{D}_{\mu, \nu}^m(y(\zeta), \zeta)&=\frac{3\mu}{2}-3\nu+\frac{3\nu}{1+y^2}-\frac{2\nu y}{1+y^2}e^{\frac{s}{2}}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))\bigl|_{(y, s)=(y(\zeta), \zeta)}\\
&:=\frac{3\mu}{2}-3\nu+\mathbb{D}_{\nu}^{m}(y(\zeta), \zeta).
\end{aligned}
\end{equation}
It is derived from \eqref{vii-22}, \eqref{vii-23} and \eqref{vii1-1} that
\begin{equation}\label{vii-25}
\int_{\zeta_0}^{+\infty}|\mathbb{D}_{\nu}^{m}|(y(\zeta), \zeta)d\zeta\leq 6|\nu|a\int_{\zeta_0}^{+\infty}
e^{\zeta_0-\zeta}d\zeta\leq 6|\nu| a.
\end{equation}
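For the reader's convenience, the first inequality in \eqref{vii-25} can be verified directly from \eqref{vii1-1}, \eqref{vii-22} and \eqref{vii-23}; this is a supplementary check using only the displayed bounds. Since $|y(\zeta)|\geq e^{\frac{3}{2}\zeta-\zeta_0}$ and $|\beta_{\tau}(\mu_m(w)-\dot{\xi}(t))|\leq \frac{3}{2}|a_m|$, one has
\begin{equation*}
|\mathbb{D}_{\nu}^{m}|(y(\zeta), \zeta)\leq \frac{3|\nu|}{y(\zeta)^2}+\frac{2|\nu|}{|y(\zeta)|}e^{\frac{\zeta}{2}}\cdot\frac{3}{2}|a_m|
\leq 3|\nu|e^{2\zeta_0-3\zeta}+3|\nu||a_m|e^{\zeta_0-\zeta}\leq 6|\nu|a\,e^{\zeta_0-\zeta},
\end{equation*}
where the last step uses $\zeta\geq\zeta_0>0$ and $1+|a_m|\leq 2a$.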
Due to \eqref{vii5-1} and \eqref{vii-25}, when $\frac{3\mu}{2}-3\nu>0$, we obtain
\begin{equation}\label{vii-26}\begin{aligned}
&e^{(\frac{3\mu}{2}-3\nu)s}\eta^{\nu}(y)|W_{\mu}^{m}(y, s)|\leq e^{6|\nu|a}\biggl( e^{(\frac{3\mu}{2}-3\nu)\zeta_0}\eta^{\nu}(y_0(y, s))|W_{\mu}^{m}(y_0(y, s), \zeta_0)|\\
&\qquad\qquad\qquad+\int_{\zeta_0}^{s}e^{(\frac{3\mu}{2}-3\nu)\zeta}\eta^{\nu}(y(\zeta))|\mathbb{F}_{\mu}^m|(y(\zeta), \zeta)d\zeta\biggr)
\end{aligned}
\end{equation}
and
\begin{equation}\label{vii-27}\begin{aligned}
&e^{(\frac{7}{6}-\nu^+)s}\eta^{\frac{1}{3}}(y)|W_{2}^m(y, s)|\leq e^{2a}\biggl(e^{(\frac{7}{6}-\nu^+)\zeta_0}\eta^{\frac{1}{3}}(y_0(y, s))|W_{2}^m(y_0(y, s), \zeta_0)|\\
&\qquad\qquad\qquad+\int_{\zeta_0}^{s}e^{(\frac{7}{6}-\nu^+)\zeta}\eta^{\frac{1}{3}}(y(\zeta))|\mathbb{F}_2^m|(y(\zeta), \zeta)d\zeta\biggr)\ (0\leq\nu^+<\frac{7}{6}).
\end{aligned}
\end{equation}
Here we point out that the factor $\frac{7}{6}-\nu^+$ appearing in \eqref{vii-27} for $0\leq \nu^+<\frac{7}{6}$
is due to \eqref{vii-19}.
With respect to $\mathbb{F}_2^m$, similarly to the argument for \eqref{vii-14}, it
is derived from \eqref{ii-11a}, \eqref{ii-18}, \eqref{vii-401}, \eqref{vii-16} and \eqref{iv-2}-\eqref{iv-3} that
\begin{equation}\label{vii-28}
|\mathbb{F}_2^m(y(\zeta), \zeta)|\leq M^3 (e^{-\zeta}+\eta^{-\frac{1}{3}}(y(\zeta)))\sum\limits_{j=1}^{n-1}(|W_2^j(y(\zeta), \zeta)|
+|W_1^j(y(\zeta), \zeta)|).
\end{equation}
For $\mu=1$, $0\leq \nu\leq \frac{1}{3}$ and $1\leq m\leq n-1$ in \eqref{vii-26}, we obtain from \eqref{iii-8b}, \eqref{vii-402}, \eqref{vii-14}, \eqref{vii-19}, \eqref{vii-20} and Lemma \ref{lem7-3} that
\begin{equation*}\begin{aligned}
e^{(\frac{3}{2}-3\nu)s}\eta^{\nu}(y)|W_1^m(y, s)|&\leq e^{2a}\left(M^{\frac{2}{5}}+M^3(\varepsilon+\varepsilon^{\frac{1}{3}})\sum\limits_{j=1}^{n-1}\|W_1^j(z, \tau)\|_{L^{\infty}(\{|z|\geq 4a e^{\frac{\tau}{2}}\})}\right)\\
&\leq M^{\frac{2}{5}}e^{2a}+\varepsilon^{\frac{1}{4}}\sum\limits_{j=1}^{n-1}\|W_1^j(z, \tau)\|_{L^{\infty}(\{|z|\geq 4a e^{\frac{\tau}{2}}\})}.
\end{aligned}
\end{equation*}
Combining this with \eqref{vii-402} shows that for $1\leq j\leq n-1$,
\begin{equation}\label{vii5-2}
|\partial_y W_j(y, s)|\leq \sum\limits_{m=1}^{n-1}|W_1^m(y, s)|\leq M^{\frac{3}{5}}e^{(3\nu-\frac{3}{2})s}\eta^{-\nu}(y)\ (0\leq \nu\leq \frac{1}{3}).
\end{equation}
For the estimates of $\partial_y^2 W_m$ with $1\leq m\leq n-1$, it follows from \eqref{iii-8c}, \eqref{vii-401}, \eqref{vii-19}, \eqref{vii-27}-\eqref{vii5-2} and Lemma \ref{lem7-3} that
\begin{equation*}\begin{aligned}
&e^{(\frac{7}{6}-\nu^+)s}\eta^{\frac{1}{3}}(y)|W_2^m(y, s)|\leq 2e^{2a}M^{\frac{3}{5}}\varepsilon^{\nu^+}+M^{5}(\varepsilon+\varepsilon^{\frac{1}{3}})\\
&\qquad\qquad+M^4(\varepsilon+\varepsilon^{\frac{1}{3}})\sum\limits_{j=1}^{n-1}\|e^{(\frac{7}{6}-\nu^+)\tau}\eta^{\frac{1}{3}}(z)W_2^j(z, \tau)\|_{L^{\infty}(\{|z|\geq 4ae^{\frac{\tau}{2}}\})}\ (0<\nu^+<\frac{7}{6}).
\end{aligned}
\end{equation*}
Together with \eqref{vii-402}, this yields that for $1\leq j\leq n-1$,
\begin{equation}\label{vii-30}
|\partial_y^2 W_j(y, s)|\leq \sum\limits_{m=1}^{n-1}|W_2^m(y, s)|\leq M^{\frac{4}{5}}e^{(\nu^+-\frac{7}{6})s}\eta^{-\frac{1}{3}}(y)\ (0<\nu^+<\frac{7}{6}).
\end{equation}
\section{Bootstrap estimates on the modulation variables}\label{viii}
For $\dot{\kappa}(t)$ and $\dot{\tau}(t)$, it follows from \eqref{i-7}, \eqref{ii-3}, \eqref{ii-12a}, \eqref{ii-12c}, \eqref{iii-6} and \eqref{iv-1}-\eqref{iv-3} that
\begin{equation}\label{viii-2}\begin{aligned}
|\dot{\kappa}(t)|&\leq e^{s}|\mu_n(w^0)-\dot{\xi}(t)|+e^{\frac{3s}{2}}\sum\limits_{j\neq n}|a_{nj}(w^0)||(\partial_y W_j)^0|\\
&\leq e^{s}|(\partial_y^2 \mu_n(w))^0|+e^{\frac{3s}{2}}\sum\limits_{k=0, 2}\sum\limits_{j\neq n}|(\partial_y^k(a_{nj}(w)\partial_y W_j))^0|\\
&\leq |(\partial_{w_n w_n}(\mu_n(w)))^0|+M\varepsilon^{\frac{1}{3}}\leq M^{\frac{1}{4}}\leq M^{\frac{1}{2}}
\end{aligned}
\end{equation}
and
\begin{equation}\label{viii-3}\begin{aligned}
|\dot{\tau}(t)|&\leq |1-(\partial_{w_n}\mu_n(w))^0|+M^2 e^{-\frac{s}{2}}\leq M^2\varepsilon^{\frac{1}{2}}\leq \varepsilon^{\frac{1}{3}}\leq 2\varepsilon^{\frac{1}{3}}.
\end{aligned}
\end{equation}
By the coordinate transformation \eqref{ii-2}, one has $t=t(s)$ and
\begin{equation}\label{viii-4}
\frac{d}{ds}(\kappa, \tau, \xi)(t(s))=(\dot{\kappa}(t), \dot{\tau}(t), \dot{\xi}(t))\frac{e^{-s}}{\beta_{\tau}}.
\end{equation}
Combining \eqref{viii-4} with \eqref{viii-2}-\eqref{viii-3} and \eqref{iii-14} shows
\begin{equation}\label{viii-5}
|\kappa(t)-\kappa_0\varepsilon^{\frac{1}{3}}|=\left|\int_{-\log\varepsilon}^{s}\dot{\kappa}(t)\frac{e^{-s}}{\beta_{\tau}}ds\right|\leq |\kappa_0\varepsilon|+2M^{\frac{1}{4}}\varepsilon\leq M^{\frac{1}{2}} \varepsilon
\end{equation}
and
\begin{equation}\label{viii-51}
|\tau(t)|=\left|\tau(-\varepsilon)+\int_{-\log\varepsilon}^{s}\dot{\tau}(t)\frac{e^{-s}}{\beta_{\tau}}ds\right|\leq 2\varepsilon^{\frac{4}{3}}.
\end{equation}
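In both \eqref{viii-5} and \eqref{viii-51}, the time integral is controlled by the elementary identity $\int_{-\log\varepsilon}^{+\infty}e^{-s}ds=\varepsilon$. For instance, assuming in addition that $\beta_{\tau}\geq\frac{1}{2}$ on the bootstrap interval (an assumption consistent with \eqref{viii-3} and the definition of $\beta_{\tau}$, which we do not restate here), one gets
\begin{equation*}
\left|\int_{-\log\varepsilon}^{s}\dot{\kappa}(t)\frac{e^{-s}}{\beta_{\tau}}ds\right|\leq 2\|\dot{\kappa}\|_{L^{\infty}}\int_{-\log\varepsilon}^{+\infty}e^{-s}ds\leq 2M^{\frac{1}{4}}\varepsilon,
\end{equation*}
which is precisely the second term on the right-hand side of \eqref{viii-5}.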
With respect to $\xi(t)$, by \eqref{i-7}, \eqref{ii-3}, \eqref{iii-6}, \eqref{vii-12}, \eqref{vii-16}-\eqref{vii-17} and \eqref{viii-3}-\eqref{viii-5}, $\xi(t)$ in \eqref{ii-12c} satisfies
\begin{equation}\label{viii-1}
|\dot{\xi}(t)|\leq |\mu_{n}(w^0)|+\frac{1}{6}|(\partial^2 \mu_n(w))^0|+\frac{1}{6}\sum\limits_{j\neq n}e^{\frac{s}{2}}|(\partial^2(a_{nj}(w)\partial_y W_j ))^0|\leq M^{\frac{3}{4}}\varepsilon\leq 2M^{\frac{3}{4}}\varepsilon
\end{equation}
and
\begin{equation}\label{viii-11}
|\xi(t)|=\left|\xi(-\varepsilon)+\int_{-\log\varepsilon}^{s}\dot{\xi}(t)\frac{e^{-s}}{\beta_{\tau}}ds\right|\leq 2M^{\frac{3}{4}}\varepsilon^2.
\end{equation}
\section{Weighted energy estimates}\label{v}
In this section, we establish the spatial $L^2$ energy estimates of $\partial_y^{\mu_0}W$ when $\mu_0$ satisfies \eqref{iv-50}.
\begin{thm}\label{thm8-1} {\it For $\mu_0$ satisfying \eqref{iv-50}, under the assumptions \eqref{iv-1}-\eqref{iv-3} and \eqref{iv-6}, one has
\begin{equation}\label{v-1}
\sum\limits_{m=1}^{n-1}\|\partial_y^{\mu_0}W_m(\cdot, s)\|_{L^2(\mathbb{R})}\leq M^{\frac{1}{2}} e^{-\frac{3}{2}s},\ \|\partial_y^{\mu_0}W_{n}(\cdot, s)\|_{L^2(\mathbb{R})}\leq M^{\frac{1}{2}} e^{-\frac{s}{2}}.
\end{equation}}
\end{thm}
Based on the expansion \eqref{ii-17} and the assumption \eqref{iv-6}, we have from \eqref{vii-401}
and \eqref{v-1} that
\begin{equation}\label{v-2}
\sum\limits_{m=1}^{n-1}\|W_{\mu_0}^m(\cdot, s)\|_{L^2(\mathbb{R})}\leq 2nM e^{-\frac{3}{2}s},\ \|W_{\mu_0}^n(\cdot, s)\|_{L^2(\mathbb{R})}\leq 2M e^{-\frac{s}{2}}.
\end{equation}
\subsection{Framework for energy estimates}
To prove Theorem \ref{thm8-1}, we first establish the following framework for energy estimates.
\begin{lem}\label{lem8-1} With $\mu_0$ satisfying \eqref{iv-50}, under the assumption \eqref{v-2} (or see \eqref{iv-6}), one has:
\begin{enumerate}[(1)]
\item When $1\leq m\leq n-1$, for any Lipschitz continuous function $q_m(y)$, with the notation
\begin{equation}\label{v-3}
Q_m(y, s)=-(\frac{3}{2}y+e^{\frac{s}{2}}\beta_{\tau}(\mu_m(w)-\dot{\xi}(t)))q_m'(y),
\end{equation}
it holds that
\begin{equation}\label{v-4}\begin{aligned}
&\frac{d}{ds}\int_{\mathbb{R}}e^{q_m(y)}|W_{\mu_0}^m|^2(y, s)dy+\int_{\mathbb{R}}(3\mu_0-\frac{5}{2}-e^{\frac{s}{2}}\beta_{\tau}\partial_y\mu_m(w)+Q_{m})e^{q_m(y)}|W_{\mu_0}^m|^2(y, s)dy\\
\leq& \int_{\mathbb{R}}e^{q_m(y)}|\mathbb{F}_{\mu_0}^m|^2(y, s)dy.
\end{aligned}
\end{equation}
\item For $W_{\mu_0}^n$,
\begin{equation}\label{v-5}\frac{d}{ds}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy+\int_{\mathbb{R}}(2\mu_0-\frac{5}{2}-e^{\frac{s}{2}}\beta_{\tau}\partial_y\mu_n(w))|W_{\mu_0}^n|^2(y, s)dy\leq \frac{1}{\mu_0+1}\int_{\mathbb{R}}|\mathbb{F}_{\mu_0}^n|^2(y, s)dy.
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof} For $1\leq m\leq n-1$, multiplying both sides of \eqref{ii-18} with $\mu=\mu_0$
by $2e^{q_m(y)}W_{\mu_0}^m$ and integrating over $\mathbb{R}$ yields
\begin{equation}\label{v-6}\begin{aligned}
&\frac{d}{ds}\int_{\mathbb{R}}e^{q_m(y)}|W_{\mu_0}^m|^2(y, s) dy+\int_{\mathbb{R}}(3\mu_0-\frac{3}{2}-e^{\frac{s}{2}}\beta_{\tau}\partial_y\mu_m(w)+Q_m)e^{q_m(y)}|W_{\mu_0}^m|^2(y, s) dy\\
=&2\int_{\mathbb{R}}e^{q_m(y)}(\mathbb{F}_{\mu_0}^m\cdot W_{\mu_0}^m)(y, s) dy\\
\leq &\int_{\mathbb{R}}e^{q_m(y)}|W_{\mu_0}^m|^2(y, s) dy+\int_{\mathbb{R}}e^{q_m(y)}|\mathbb{F}_{\mu_0}^m|^2(y, s) dy.
\end{aligned}
\end{equation}
Then \eqref{v-4} follows from \eqref{v-3} and \eqref{v-6}. The estimate \eqref{v-5}
can be obtained by a standard energy estimate associated with the equation \eqref{ii-18} for $m=n$ and $\mu=\mu_0$.
\end{proof}
Next we analyze the structure of $\mathbb{F}_{\mu_0}^m$.
\begin{lem}\label{lem8-2} For $\mu_0$ satisfying \eqref{iv-50} and $1\leq m\leq n-1$,
$\mathbb{F}_{\mu_0}^m$ in \eqref{ii-18} satisfies
\begin{equation}\label{v-7}\begin{aligned}
\int_{\mathbb{R}}|\mathbb{F}_{\mu_0}^m|^2 (y, s) dy&
\leq 4nM^{-\frac{1}{16}}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+2nM^{\frac{1}{16}}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy\\
&+2n M^{\frac{1}{4}}e^{-2s}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy+e^{-(3+\frac{1}{4})s}.
\end{aligned}
\end{equation}
\end{lem}
\begin{proof} First, due to \eqref{ii-3}, \eqref{vi2-4}, \eqref{vi-35}, \eqref{vii-12}, \eqref{vii-16} and \eqref{viii-5}, the estimate in \eqref{vii3-1} can be improved as
\begin{equation}\label{v-72}
|e^{\frac{s}{2}}\beta_{\tau}[A(w), \frac{\partial\gamma_j(w)}{\partial w}]\partial_y W|\leq M^{\frac{1}{64}}\eta^{-\frac{1}{3}}(y)(1-\delta_j^n).
\end{equation}
Thus, we can obtain from \eqref{ii-18}, \eqref{vii-5} and \eqref{v-72} that
\begin{equation}\label{v-71}\begin{aligned}
\int_{\mathbb{R}}|\mathbb{F}_{\mu_0}^m|^2(y, s)dy&\leq M^{\frac{1}{16}}e^{-s}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+M^{\frac{1}{16}}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy\\
&+2\int_{\mathbb{R}}|\ell_m\cdot F_{\mu_0}|^2(y, s)dy.
\end{aligned}
\end{equation}
Due to $\ell_k^n(w)=0$ and $a_{kn}(w)=0$ ($1\leq k\leq n-1$) by \eqref{i-7}-\eqref{i-71}, for $F_{\mu_0}$ in \eqref{ii-11a}, one
then has
\begin{equation}\label{v-8}\begin{aligned}
|\ell_m\cdot F_{\mu_0}|
\leq& \sum\limits_{1\leq \beta\leq \mu_0} 2C_{\mu_0}^{\beta}e^{\frac{s}{2}}|\ell_m\cdot \partial_y^{\beta}A(w)\partial_y^{\mu_0+1-\beta} W|\\
\leq&\sum\limits_{1\leq\beta\leq \mu_0}\sum\limits_{1\leq q\leq \mu_0-\beta+1}2 C_{\mu_0}^{\beta}\|\partial_w^q A(w)\|_{L^{\infty}}e^{\frac{s}{2}}\sum\limits_{k=1}^{n-1}I_{\beta q k}\\
\leq&\left(M^{\frac{1}{16}}e^{s}\sum\limits_{1\leq\beta\leq\mu_0}\sum\limits_{1\leq q\leq \mu_0-\beta+1}\sum\limits_{1\leq k\leq n-1}I_{\beta q k}^2\right)^{1/2},
\end{aligned}
\end{equation}
where the last inequality comes from \eqref{ii-3}, \eqref{iv-1}-\eqref{iv-3}, and $I_{\beta q k}$ is given by
\begin{equation}\label{v-9}
I_{\beta q k}=\sum\limits_{\gamma_1+\cdots+\gamma_q=\mu_0-\beta+1,\ \gamma_j\geq 1\ (1\leq j\leq q)}|\partial_{y}^{\gamma_1} W|\cdots|\partial_y^{\gamma_q}W|\cdot|\partial_y^{\beta}W_k|.
\end{equation}
Note that the estimates for the $I_{\beta q k}$'s in \eqref{v-9} are given in Lemma \ref{lemA-4} for $1\leq k\leq n-1$.
Substituting the three types of estimates in \eqref{A2-3} into \eqref{v-8} shows
\begin{equation}\label{v-16}\begin{aligned}
&\int_{\mathbb{R}}|\ell_m\cdot F_{\mu_0}|^2(y, s)dy\\
\leq &2nM^{\frac{1}{16}}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy+2nM^{-\frac{1}{16}}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy\\
&+4nM^{\frac{3}{8}}e^{-2s}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy+M^{3\mu_0+2}e^{-(3+\frac{1}{2})s}\ (1\leq m\leq n-1).
\end{aligned}
\end{equation}
Therefore, \eqref{v-7} follows from \eqref{v-71}-\eqref{v-8} and \eqref{v-16}, and then
the proof of Lemma \ref{lem8-2} is finished.\end{proof}
\begin{lem}\label{lem8-3} {\it Let $\mu_0$ satisfy \eqref{iv-50}. Then, for $\mathbb{F}_{\mu_0}^n$ in \eqref{ii-18}, one has
\begin{equation}\label{v-17}
\int_{\mathbb{R}}|\mathbb{F}_{\mu_0}^n|^2(y, s)dy\leq \frac{102}{100}(\mu_0+1)^2\int_{\mathbb{R}}|W_{\mu_0}^n|^2 (y, s)dy+M^{\frac{1}{2}}e^{-s}.
\end{equation}}
\end{lem}
\begin{proof} Note that under the assumptions \eqref{i-7}-\eqref{i-71}, $\ell_n(w)$ has the decomposition
\begin{equation}\label{v-18}
\ell_n(w)={\bf e}_n^{\top}+\sum\limits_{j=1}^{n-1}c_j(w)\ell_j(w),
\end{equation}
where $c_j(w)=-{\bf e}_n^{\top}\cdot\gamma_j(w)\ (1\leq j\leq n-1)$.
Combining \eqref{v-18} with $\gamma_n(w)={\bf e}_n$ in \eqref{i-71}, \eqref{ii-11} and \eqref{ii-18} shows
\begin{equation}\label{v-19}\begin{aligned}
|\mathbb{F}_{\mu_0}^n|=&|\ell_n\cdot F_{\mu_0}|
\leq \sum\limits_{j=1}^{n-1}|c_j(w)\ell_j\cdot F_{\mu_0}|+|{\bf e}_n^{\top}\cdot F_{\mu_0}|\\
\leq& \sum\limits_{j=1}^{n-1}|c_j(w)\ell_j\cdot F_{\mu_0}|+e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu_0}C_{\mu_0}^{\beta}|\partial_y^{\beta}\mu_{n}(w)
\partial_y^{\mu_0-\beta+1}W_n|\\
&+e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{j=1}^{n-1}\sum\limits_{1\leq\beta\leq\mu_0}|\partial_y^{\beta}(a_{nj}(w))
\partial_y^{\mu_0-\beta+1}W_j|:=\sum\limits_{i=1}^{5}J_i,
\end{aligned}
\end{equation}
where
\begin{equation*}\begin{cases}
J_1=e^{\frac{s}{2}}\beta_{\tau}(\mu_0+1)|\partial_y W_n\partial_y^{\mu_0}W_n|,\\
J_2=e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{2\leq\beta\leq\mu_0-1}C_{\mu_0}^{\beta}|\partial_y^{\beta}W_n\partial_y^{\mu_0-\beta}W_n|,\\
J_3=\sum\limits_{j=1}^{n-1}|c_j(w)\ell_j\cdot F_{\mu_0}|,\\
J_4=e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu_0}C_{\mu_0}^{\beta}|\partial_y^{\beta}(\mu_{n}(w)-W_n)\partial_y^{\mu_0-\beta+1}W_n|,\\
J_5=e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{j=1}^{n-1}\sum\limits_{1\leq\beta\leq\mu_0}|\partial_y^{\beta}(a_{nj}(w))\partial_y^{\mu_0-\beta+1}W_j|.
\end{cases}
\end{equation*}
It is derived from \eqref{v-19} that
\begin{equation}\label{v-20}
\int_{\mathbb{R}}|\mathbb{F}_{\mu_0}^n|^2(y, s)dy\leq \frac{101}{100}\int_{\mathbb{R}}J_1^2(y, s)dy+M^{\frac{1}{32}}\sum\limits_{k=2}^{5}\int_{\mathbb{R}}J_k^2(y, s)dy.
\end{equation}
In addition, it follows from \eqref{ii-3}, \eqref{vi-35} and \eqref{viii-3} that
\begin{equation}\label{v-21}
\int_{\mathbb{R}}J_1^2(y, s)dy\leq (1+\varepsilon^{\frac{1}{40}})(\mu_0+1)^2\int_{\mathbb{R}}|\partial_y^{\mu_0}W_{n}|^2 (y, s)dy.
\end{equation}
For $J_2$, by the H\"older inequality, \eqref{ii-3} and \eqref{vi-35}, we have
\begin{equation}\label{v-22}\begin{aligned}
\int_{\mathbb{R}}J_2^2(y, s)dy&\leq M^{\frac{1}{8(\mu_0-1)}}\int_{\mathbb{R}}|\partial_y^{\mu_0-1}W_n|^2(y, s)dy\\
&+M^{\frac{1}{16(\mu_0-1)}}e^{s}\sum\limits_{1<\beta<\mu_0-2}\|\partial_y^{\beta}W_n(\cdot, s)\|_{L^{\frac{2(\mu_0-2)}{\beta-1}}}^2\|\partial_y^{\mu_0-\beta}W_n(\cdot, s)\|_{L^{\frac{2(\mu_0-2)}{\mu_0-\beta-1}}}^2.
\end{aligned}
\end{equation}
Note that by Lemma \ref{lemA-3},
\begin{equation*}\begin{aligned}
&\|\partial_y^{\beta}W_n(\cdot, s)\|_{L^{\frac{2(\mu_0-2)}{\beta-1}}}\leq M^{\frac{1}{64(\mu_0-1)}}\|\partial_y W_n(\cdot, s)\|_{L^{\infty}}^{\frac{\mu_0-\beta-1}{\mu_0-2}}\|\partial_y^{\mu_0-1}W_n(\cdot, s)\|_{L^2}^{\frac{\beta-1}{\mu_0-2}},\\
&\|\partial_y^{\mu_0-\beta}W_n(\cdot, s)\|_{L^{\frac{2(\mu_0-2)}{\mu_0-\beta-1}}}\leq M^{\frac{1}{64(\mu_0-1)}}\|\partial_y W_n(\cdot, s)\|_{L^{\infty}}^{\frac{\beta-1}{\mu_0-2}}\|\partial_y^{\mu_0-1}W_n(\cdot, s)\|_{L^{2}}^{\frac{\mu_0-\beta-1}{\mu_0-2}}.
\end{aligned}
\end{equation*}
Then combining these two estimates with \eqref{ii-3}, \eqref{vi-35}, \eqref{v-22} and Lemma \ref{lemA-3} yields
\begin{equation}\label{v-221}\begin{aligned}
\int_{\mathbb{R}}J_2^2(y, s)dy&\leq 2M^{\frac{1}{8(\mu_0-1)}}\int_{\mathbb{R}}|\partial_y^{\mu_0-1}W_n|^2(y, s)dy\\
&\leq M^{\frac{1}{4(\mu_0-1)}}\|\partial_y W_n(\cdot, s)\|_{L^{2}}^{\frac{2}{\mu_0-1}}\|\partial_y^{\mu_0}W_n(\cdot, s)\|_{L^2}^{\frac{2(\mu_0-2)}{\mu_0-1}}\\
&\leq M^{-\frac{1}{8(\mu_0-2)}}\int_{\mathbb{R}}|\partial_y^{\mu_0}W_n|^2(y, s)dy+2M^{\frac{3}{8}}e^{-s}.
\end{aligned}
\end{equation}
For $J_3$, due to $\gamma_j(0)={\bf e}_j\ (1\leq j\leq n-1)$ in \eqref{i-71}, one then derives
from \eqref{ii-3} and \eqref{iv-1}-\eqref{iv-3} that
\begin{equation}\label{v-222}
|c_j(w)|\leq M\|W(\cdot, s)\|_{L^{\infty}}\leq M^4\varepsilon^{\frac{1}{3}}\leq \varepsilon^{\frac{1}{4}}\ (1\leq j\leq n-1).
\end{equation}
On the other hand, it follows from \eqref{v-2}, \eqref{v-16} and \eqref{v-222} that
\begin{equation}\label{v-23}
\int_{\mathbb{R}}J_3^2(y, s)dy\leq \varepsilon^{\frac{1}{6}} e^{-3s}.
\end{equation}
With respect to $J_4$, one has from \eqref{v-19} that
\begin{equation}\label{w-1}\begin{aligned}
J_4=&e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq \beta\leq \mu_0}C_{\mu_0}^{\beta}|\partial_y^{\beta}(\mu_n(w)-W_n)\partial_y^{\mu_0-\beta+1}W_n|\\
\leq&e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu_0}C_{\mu_0}^{\beta}|\partial_{w_n}(\mu_n(w)-W_n)|I_{\beta 1 n}\\
&+ e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu_0}C_{\mu_0}^{\beta}\sum\limits_{1\leq k\leq n-1}|\partial_{w_k}(\mu_n(w)-W_n)|I_{\beta 1 k}\\
&+e^{\frac{s}{2}}\beta_{\tau}\sum\limits_{1\leq\beta\leq\mu_0}\sum\limits_{2\leq q\leq \mu_0-\beta+1}c_{\beta q n}(w)I_{\beta q n}\\
:=&\sum\limits_{1\leq \beta\leq \mu_0}(J_{41 \beta}+J_{42 \beta}+J_{43 \beta}),
\end{aligned}
\end{equation}
where $|c_{\beta q n}(w)|\leq M$ due to \eqref{ii-3} and \eqref{iv-1}-\eqref{iv-3}. Since $\partial_{w_n}\mu_n(0)=1$,
by \eqref{ii-3} and \eqref{iv-1}-\eqref{iv-3}, we have
\begin{equation}\label{w-2}
|J_{41 \beta}|\leq M\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}I_{\beta 1 n}.
\end{equation}
In a similar and easier way,
\begin{equation}\label{w-3}
|J_{42 \beta}|+|J_{43 \beta}|\leq M\sum\limits_{1\leq k\leq n-1}I_{\beta 1 k}+M^2\sum\limits_{2\leq q\leq \mu_0-\beta+1}I_{\beta q n}.
\end{equation}
With the help of Lemma \ref{lemA-4}, Lemma \ref{lemA-5} and \eqref{v-2}, we obtain from \eqref{w-1}-\eqref{w-3} that
\begin{equation}\label{w-4}
\int_{\mathbb{R}}J_4^2(y, s)dy\leq \varepsilon^{\frac{1}{10}}e^{-s}.
\end{equation}
Analogously to the treatment of $J_4$ in \eqref{w-1}-\eqref{w-4}, one has
\begin{equation}\label{v-24}
\int_{\mathbb{R}}J_5^2(y, s)dy\leq \varepsilon^{\frac{1}{10}}e^{-s}.
\end{equation}
Therefore, \eqref{v-17} follows from \eqref{v-21}, \eqref{v-221}, \eqref{v-23}, \eqref{w-4}, \eqref{v-24} and the largeness of $M$.
\end{proof}
\subsection{Energy estimates of $W_{\mu_0}^n$}
First, we close the estimate of $W_{\mu_0}^n$ in \eqref{v-2}. Substituting \eqref{v-17} into \eqref{v-5} yields
\begin{equation}\label{v-25}
\frac{d}{ds}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy+\int_{\mathbb{R}}(\frac{98\mu_0-102}{100}-\frac{5}{2}-\beta_{\tau}e^{\frac{s}{2}}\partial_y\mu_n(w))|W_{\mu_0}^n|^2(y, s)dy\leq M^{\frac{1}{2}}e^{-s}.
\end{equation}
When $\mu_0\geq 6$, it is derived from \eqref{i-7}, \eqref{ii-3}, \eqref{vi2-4}, \eqref{vi-35}, \eqref{vii-12}, \eqref{vii-16},
\eqref{viii-3} and \eqref{viii-5} that
\begin{equation}\label{v-26}
\frac{98\mu_0-102}{100}-\frac{5}{2}-\beta_{\tau}e^{\frac{s}{2}}\partial_y\mu_n(w)
\geq\frac{486}{100}-\frac{5}{2}-M e^{-s}-(1+\varepsilon^{\frac{1}{20}})>\frac{6}{5}.
\end{equation}
In addition, one has from \eqref{v-25}-\eqref{v-26} that
\begin{equation*}
\frac{d}{ds}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy
+\frac{6}{5}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy\leq M^{\frac{1}{2}}e^{-s}.
\end{equation*}
This yields
\begin{equation}\label{v-27}
\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy\leq e^{\frac{6}{5}(-\log\varepsilon-s)}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, -\log\varepsilon)dy+5M^{\frac{1}{2}}e^{-s}.
\end{equation}
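For completeness, \eqref{v-27} is obtained by the standard integrating-factor argument: multiplying the preceding differential inequality by $e^{\frac{6}{5}s}$ and integrating from $-\log\varepsilon$ to $s$ gives
\begin{equation*}
e^{\frac{6}{5}s}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy\leq \varepsilon^{-\frac{6}{5}}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, -\log\varepsilon)dy+5M^{\frac{1}{2}}\left(e^{\frac{s}{5}}-\varepsilon^{-\frac{1}{5}}\right)
\leq \varepsilon^{-\frac{6}{5}}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, -\log\varepsilon)dy+5M^{\frac{1}{2}}e^{\frac{s}{5}},
\end{equation*}
and dividing by $e^{\frac{6}{5}s}$ gives exactly the right-hand side of \eqref{v-27}.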
Then it follows from \eqref{v-27}, \eqref{ii-3}, \eqref{iii-9} and \eqref{vii-401} that
\begin{equation}\label{v-271}
\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy\leq 6M^{\frac{1}{2}} e^{-s}.
\end{equation}
\subsection{Energy estimates of $W_{\mu_0}^j\ (1\leq j\leq n-1)$}
We now close the estimates of $W_{\mu_0}^j\ (1\leq j\leq n-1)$ in \eqref{v-2}. Similarly to \eqref{v-26},
there exists a minimal positive integer $\mu_0\gammaeq 6$ such that for all $1\leq m\leq n-1$,
\begin{equation}\lambdabel{v-28}
3\mu_0-\fracrac{5}{2}-\beta_{\tau}e^{\fracrac{s}{2}}\partialartial_y\mu_m(w)\gammaeq 3\mu_0-\fracrac{5}{2}-M e^{-s}-(1+\varepsilon^{\fracrac{1}{4}})|\partialartial_{w_n}\mu_m(0)|\gammaeq 4.
\end{equation}
This also ensures the assumption \eqref{iv-50} in turn.
In addition, it follows from \eqref{v-7} and \eqref{v-271} that
\begin{equation}\label{v-29}
\int_{\mathbb{R}}|\mathbb{F}_{m}^{\mu_0}|^2(y, s)dy\leq \frac{\delta}{2n}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+c^{*}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy+24nM^{\frac{3}{4}}e^{-3s},
\end{equation}
where $\delta=8n^2 M^{-\frac{1}{16}}$ and $c^*=2n M^{\frac{1}{16}}$.
On the other hand, there exist two positive constants $K$ and $K^{*}$ such that
\begin{equation}\label{v-30}
\eta^{-\frac{2}{3}}(y)\leq \frac{\delta}{2n c^{*}}\ (|y|\geq K),\ \eta^{-\frac{2}{3}}(y)\leq \frac{K^*}{nc^{*}}\ (|y|\leq K).
\end{equation}
It is derived from \eqref{v-29} and \eqref{v-30} that
\begin{equation}\label{v-31}
\int_{\mathbb{R}}|\mathbb{F}_{m}^{\mu_0}|^2(y, s)dy\leq \frac{\delta}{n}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+\frac{K^*}{n}\sum\limits_{j=1}^{n-1}\int_{-K}^K|W_{\mu_0}^j|^2(y, s)dy+24n M^{\frac{3}{4}} e^{-3s}.
\end{equation}
Next we determine the Lipschitz continuous function $q_m(y)\ (1 \leq m\leq n-1)$ in Lemma \ref{lem8-1}. Due to \eqref{i-7a}
and $\mu_n(0)=0$, when $\mu_m(0)<0\ (1\leq m\leq i_0-1)$, then $q_m(y)$ is defined as
\begin{equation}\label{v-32}
q_m(y):=q_m^-(y)=\begin{cases}0,\ y\leq -K,\\
\frac{y}{K}+1,\ -K\leq y\leq K,\\
2,\ y\geq K.
\end{cases}
\end{equation}
When $\mu_m(0)>0\ (i_0\leq m\leq n-1)$, $q_m(y)$ is defined as
\begin{equation}\label{v-33}
q_m(y):=q_m^+(y)=q_m^-(-y).
\end{equation}
According to the definitions \eqref{v-32}-\eqref{v-33}, we have
\begin{equation}\label{v-34}
0\leq q_m(y)\leq 2.
\end{equation}
On the other hand, $Q_m(y, s)$ in \eqref{v-3} satisfies
\begin{equation}\label{v-35}\begin{aligned}
Q_m(y, s)&\geq \left(\frac{1}{K}\beta_{\tau}e^{\frac{s}{2}}|\mu_m(w)-\dot{\xi}(t)|-\frac{3}{2}K\right)\chi_{\{|y|\leq K\}}\\
&\geq \left(\frac{|\mu_m(0)|}{2K}e^{\frac{s}{2}}-\frac{3}{2}K\right)\chi_{\{|y|\leq K\}}\geq K^*\chi_{\{|y|\leq K\}},
\end{aligned}
\end{equation}
where the last inequality comes from \eqref{i-7}, \eqref{ii-3}, \eqref{vi2-4}, \eqref{vii-12}, \eqref{viii-1} and the fact that $s\geq -\log\varepsilon$.
With \eqref{v-271}-\eqref{v-28}, \eqref{v-31}, \eqref{v-34}-\eqref{v-35} and the largeness of $M$, summing over $m$
on both sides of \eqref{v-4} from $1$ to $n-1$ yields
\begin{equation*}\begin{aligned}
\frac{d}{ds}\sum\limits_{m=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^m|^2(y, s)dy
+\frac{7}{2}\sum\limits_{m=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^m|^2(y, s)dy\leq 14n^2 M^{\frac{3}{4}} e^{4-3s}.
\end{aligned}
\end{equation*}
This shows
\begin{equation}\label{v-36}
\sum\limits_{m=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^m|^2(y, s)dy\leq e^{\frac{7}{2}(-\log\varepsilon-s)}\sum\limits_{m=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^m|^2(y, -\log\varepsilon)dy+28n^2 M^{\frac{3}{4}}e^{4-3s}.
\end{equation}
Then it comes from \eqref{iii-9}, \eqref{vii-401} and \eqref{v-36} that
\begin{equation}\label{v-37}
\sum\limits_{m=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^m|^2(y, s)dy\leq 30n^2 e^4 M^{\frac{3}{4}} e^{-3s}.
\end{equation}
{\bf Proof of Theorem \ref{thm8-1}:} It is derived from \eqref{v-271}, \eqref{v-37} and \eqref{vii-402} that
\begin{equation}\label{v-38}
\sum\limits_{j=1}^{n-1}\|\partial_y^{\mu_0}W_j(\cdot, s)\|_{L^2(\mathbb{R})}\leq (n-1)\sum\limits_{k=1}^{n-1}\|W_{\mu_0}^{k}(\cdot, s)\|_{L^2(\mathbb{R})}\leq \sqrt{30}n^2 e^2 M^{\frac{3}{8}}e^{-\frac{3}{2}s}
\end{equation}
and
\begin{equation}\label{v-39}
\|\partial_y^{\mu_0}W_n(\cdot, s)\|_{L^2(\mathbb{R})}\leq \sum\limits_{k=1}^{n}\|W_{\mu_0}^k(\cdot, s)\|_{L^2(\mathbb{R})}\leq \sqrt{30}n^2 e^2 M^{\frac{3}{8}}e^{-\frac{3}{2}s}+\sqrt{6}M^{\frac{1}{4}}e^{-\frac{s}{2}}.
\end{equation}
Then the estimates in \eqref{v-1} come from \eqref{v-38}, \eqref{v-39} and the largeness of $M$.
Therefore, the proof of Theorem \ref{thm8-1} is completed.
\section{Proof of main theorems}\label{V}
In this section, we complete the proofs of Theorem \ref{thmii-1} and Theorem \ref{thmi-1}.
\subsection{Proof of Theorem \ref{thmii-1}}
Based on the local existence of \eqref{i-6} (see \cite{MA}), we utilize the continuous induction to prove Theorem \ref{thmii-1}.
To this end, under the induction assumptions \eqref{iv-1}-\eqref{iv-3} and \eqref{iv-6} for suitably large $M>16$,
the proof of Theorem \ref{thmii-1} is mainly reduced to recover the estimates \eqref{iv-1}-\eqref{iv-3}
and \eqref{iv-6} with the smaller coefficient bounds via the bootstrap arguments.
The induction assumptions of $\kappa(t), \tau(t)$ and $\xi(t)$ in \eqref{iv-1a} and their derivatives in \eqref{iv-1b} are recovered in \eqref{viii-5}-\eqref{viii-51}, \eqref{viii-11} and \eqref{viii-2}-\eqref{viii-3}, \eqref{viii-1} with $M$ replaced by the smaller ones $M^{\frac{1}{2}}, 2$ and $2M^{\frac{3}{4}}$ respectively.
In a similar way, the assumptions \eqref{iv-2} for $W_0$ as well as $\mathcal{W}$ are also recovered by \eqref{vi-24}, \eqref{vi-36}, \eqref{vi-35}, \eqref{vi-16} and \eqref{vi-27} with the coefficients replaced by the smaller ones accordingly. In addition, the assumptions \eqref{iv-3} for $W_j\ (j\neq n)$ are also obtained by \eqref{vii-12}, \eqref{vii-17}, \eqref{vii5-2} and \eqref{vii-30} with the coefficient $M$ replaced by the smaller ones. Finally, the energy assumptions \eqref{iv-6} are
derived by \eqref{v-1} in Theorem \ref{thm8-1} with $M$ replaced by $M^{\frac{1}{2}}$.
Therefore, Theorem \ref{thmii-1} is proved via the method of continuous induction.
\subsection{Proof of Theorem \ref{thmi-1}}
Due to \eqref{ii-2} and Theorem \ref{thmii-1}, in order to complete the proof of Theorem \ref{thmi-1},
we only need to verify the $C^{\frac{1}{3}}$ optimal regularity of $W_0$ and \eqref{i-13}.
First, we show that the optimal regularity of $W_0(y, s)$ is $C^{\frac{1}{3}}$ with respect to the spatial variable.
By \eqref{ii-2}, for any $t\geq -\varepsilon$ and $x_1, x_2\in\mathbb{R}$, set
\begin{equation}\label{V-1}
s=s(t), y_i=(x_i-\xi(t))e^{\frac{3s}{2}}\ (i=1, 2).
\end{equation}
Then for any $\alpha>0$, by \eqref{ii-3}, we arrive at
\begin{equation}\label{V-2}
\frac{|w_n(x_1, t)-w_n(x_2, t)|}{|x_1-x_2|^{\alpha}}=e^{\frac{3\alpha s}{2}-\frac{s}{2}}\frac{|W_0(y_1, s)-W_0(y_2, s)|}{|y_1-y_2|^{\alpha}}.
\end{equation}
When $\alpha=\frac{1}{3}$, it is derived from \eqref{V-2}, \eqref{ii-9b} and \eqref{vi-35} that
\begin{equation}\label{V-3}\begin{aligned}
&\sup\limits_{x_1, x_2\in\mathbb{R}, x_1\neq x_2}\frac{|w_n(x_1, t)-w_n(x_2, t)|}{|x_1-x_2|^{\alpha}}\\
\leq& \sup\limits_{y_1, y_2\in\mathbb{R}, y_1\neq y_2}\frac{|W_0(y_1, s)-W_0(y_2, s)|}{|y_1-y_2|^{\frac{1}{3}}}\\
=&\sup\limits_{y_1, y_2\in\mathbb{R}, y_1\neq y_2}\frac{\displaystyle|\int_{y_2}^{y_1}\partial_y W_0(z, s)dz|}{|y_1-y_2|^{\frac{1}{3}}}\\
\leq&2\sup\limits_{y_1, y_2\in\mathbb{R}, y_1\neq y_2}\frac{\displaystyle|\int_{y_2}^{y_1}\eta^{-\frac{1}{3}}(z)dz|}{|y_1-y_2|^{\frac{1}{3}}}\leq 12.
\end{aligned}
\end{equation}
When $\frac{1}{3}<\alpha<1$, with \eqref{ii-8}, \eqref{ii-14}, \eqref{vi-16} and $y_{1}^*=1, y_2^*=0$ (denote by $(x_1^*, x_2^*)$ the
corresponding points given by the transformation \eqref{V-1}), we have
\begin{equation}\label{V-4}\begin{aligned}
\frac{|W_0(y_1^*, s)-W_0(y_2^*, s)|}{|y_1^*-y_2^*|^{\alpha}}&\geq |\overline{W}(1)|-|\mathcal{W}(1, s)-\mathcal{W}(0, s)|\\
&\geq |\overline{W}(1)|-2\varepsilon^{\frac{1}{11}}>\frac{1}{2}|\overline{W}(1)|>0.
\end{aligned}
\end{equation}
Thus, \eqref{V-4} shows that when $\frac{1}{3}<\alpha<1$, for any $\overline{M}>0$, there exists a
constant $s_0\geq -\log\varepsilon$ (and corresponding $t_0$ by \eqref{V-1}) such that when $s\geq s_0$,
\begin{equation}\label{V-5}
e^{\frac{3\alpha s}{2}-\frac{s}{2}}\frac{|W_0(y_1^*, s)-W_0(y_2^*, s)|}{|y_1^*-y_2^*|^{\alpha}}\geq \frac{1}{2}e^{\frac{3\alpha s_0}{2}-\frac{s_0}{2}}|\overline{W}(1)|>\overline{M}.
\end{equation}
Due to the arbitrariness of $\overline{M}$, it follows from \eqref{V-2} and \eqref{V-5} that $w_n(x, t)\notin C^{\alpha}$
for $\alpha>\frac{1}{3}$. Combining this with \eqref{V-3} shows that the optimal regularity of $w_n(x, t)$
is $C^{\frac{1}{3}}$ with respect to the spatial variable.
Therefore, (1) and (2) in Theorem \ref{thmi-1} are obtained from Theorem \ref{thmii-1} and the above
verification of $C^{\frac{1}{3}}$ regularity for $W_0$.
Next we prove \eqref{i-13}. It is derived from \eqref{ii-2}, \eqref{ii-3} and \eqref{ii-10} that
\begin{equation}\label{V-6}
\partial_x w_n(\xi(t), t)=e^{s}\partial_y W_0(0, s)=-e^{s}=-\frac{1}{\tau(t)-t}.
\end{equation}
By the definition of $\tau(t)$ in \eqref{ii-1}, we have $\tau(T^*)=T^*$ and then
\begin{equation}\label{V-7}
|T^*-\tau(t)|=|\tau(T^*)-\tau(t)|\leq \int_t^{T^*}|\dot{\tau}(z)|dz\lesssim \varepsilon^{\frac{1}{3}}(T^*-t),
\end{equation}
where the last inequality comes from (2) in Theorem \ref{thmii-1}.
Following \eqref{V-7}, one has
\begin{equation}\label{V-8}\begin{aligned}
&|\tau(t)-t|\leq |T^*-t|+|\tau(t)-T^*|\leq (1+\varepsilon^{\frac{1}{4}})|T^*-t|,\\
&|\tau(t)-t|\geq |T^*-t|-|\tau(t)-T^*|\geq (1-\varepsilon^{\frac{1}{4}})|T^*-t|.
\end{aligned}
\end{equation}
Then, we derive from \eqref{V-6}-\eqref{V-8} that
\begin{equation}\label{V-9}
-2<(T^*-t)\partial_x w_n(\xi(t), t)<-\frac{1}{2}.
\end{equation}
Collecting \eqref{V-7}, \eqref{V-9} and Theorem \ref{thmii-1} yields \eqref{i-13} in Theorem \ref{thmi-1} and then the
proof of Theorem \ref{thmi-1} is completed.
\appendix
\renewcommand{\appendixname}{}
\section{Appendix}\label{A}
In the Appendix, we introduce a useful interpolation inequality
(see \cite{Ad}) and give its applications.
\begin{lem}\label{lemA-3} {\bf (Gagliardo-Nirenberg-Sobolev inequality).} {\it Let $u: \mathbb{R}^d\to\mathbb{R}$.
Fix $1\leq q, r\leq \infty$ and $j, m\in\mathbb{N}$, and $\frac{j}{m}\leq \alpha\leq 1$. If
\begin{equation*}
\frac{1}{p}=\frac{j}{d}+\alpha(\frac{1}{r}-\frac{m}{d})+\frac{1-\alpha}{q},
\end{equation*}
then one has
\begin{equation}\label{A2-1}
\|D^j u\|_{L^p}\leq C\|D^m u\|_{L^r}^{\alpha}\|u\|_{L^q}^{1-\alpha},
\end{equation}
where the positive constant $C$ depends on $d, j, m, r$ and $q$.} \end{lem}
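As a concrete instance of Lemma \ref{lemA-3} in the one-dimensional setting used below, taking $d=1$, $j=1$, $m=2$, $r=2$ and $p=q=\infty$ forces $\alpha=\frac{2}{3}$ and gives
\begin{equation*}
\|\partial_y u\|_{L^{\infty}(\mathbb{R})}\leq C\|\partial_y^2 u\|_{L^{2}(\mathbb{R})}^{\frac{2}{3}}\|u\|_{L^{\infty}(\mathbb{R})}^{\frac{1}{3}}.
\end{equation*}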
Next we estimate the $L^2$-norms of the terms $I_{\beta q k}\ (1\leq k\leq n)$ given by
\begin{equation}\label{A2-2}
I_{\beta q k}=\sum\limits_{\gamma_1+\cdots+\gamma_q=\mu_0-\beta+1,\ \gamma_j\geq 1 (1\leq j\leq q)}|\partial_y^{\gamma_1}W|\cdots |\partial_y^{\gamma_q}W|\cdot|\partial_y^{\beta}W_k|.
\end{equation}
The estimates of $I_{\beta q k}$ are considered in two cases: $1\leq k\leq n-1$ and $k=n$.
\begin{lem}\label{lemA-4} {\it For $1\leq k\leq n-1$, we have
\begin{subequations}\label{A2-3}\begin{align}
&\sum\limits_{\beta=1, \mu_0}\int_{\mathbb{R}}I_{\beta 1 k}^2(y, s)dy\nonumber\\
&\qquad\qquad\leq 2e^{-s}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy+M^{\frac{1}{16}}e^{-3s}\int_{\mathbb{R}}|W_{\mu_0}^{n}|^2(y, s)dy+M^3 e^{-6s},\label{A2-3a}\\
&\sum\limits_{1< \beta<\mu_0}\int_{\mathbb{R}}I_{\beta 1 k}^2(y, s)dy\nonumber\\
&\qquad\qquad\leq 2 M^{-\frac{1}{8}}e^{-s}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+M^{\frac{1}{4}}e^{-3s}\int_{\mathbb{R}}|W_{\mu_0}^n|^2(y, s)dy+M^3 e^{-6s},\label{A2-3b}\\
&\int_{\mathbb{R}}I_{\beta q k}^2(y, s)dy\leq M^{3\mu_0+1}e^{-(4+\frac{1}{4})s}\ (q\geq 2).\label{A2-3c}
\end{align}
\end{subequations}
}
\end{lem}
\begin{proof} For the proof of \eqref{A2-3a}, by \eqref{ii-3}, \eqref{vi-35}, \eqref{vii-16}, the expansion in \eqref{ii-17}, \eqref{vii-401}-\eqref{vii-402} and \eqref{v-2}, we have
\begin{equation}\label{v-10}\begin{aligned}
&\int_{\mathbb{R}}(I^2_{\mu_0 1 k}+I^2_{1 1 k})(y, s)dy\\
\leq&\int_{\mathbb{R}}|\partial_y W|^2\cdot |\partial_y^{\mu_0}W_k|^2(y, s)dy+\int_{\mathbb{R}}|\partial_y W_k|^2|\partial_y^{\mu_0}W|^2(y, s)dy\\
\leq& 2e^{-s}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|\eta^{-\frac{1}{3}}(y)W_{\mu_0}^j|^2(y, s)dy+M^{\frac{1}{16}} e^{-3s}\sum\limits_{j=1}^{n}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy.
\end{aligned}
\end{equation}
Combining \eqref{v-10} with \eqref{v-2} yields \eqref{A2-3a}.
With respect to the case of $1<\beta<\mu_0$ and $q=1$ in \eqref{A2-2}, it is derived from \eqref{ii-3}, \eqref{vi-35}, \eqref{vii-402}, \eqref{vii-16}, \eqref{v-2}, the H\"older inequality and Lemma \ref{lemA-3} that
\begin{equation}\label{v-11}\begin{aligned}
&\sum\limits_{1<\beta<\mu_0}\int_{\mathbb{R}}I_{\beta 1 k}^2(y, s)dy\\
\leq&\sum\limits_{1<\beta<\mu_0}\int_{\mathbb{R}}|\partial_y^{\mu_0+1-\beta}W|^2 |\partial_y^{\beta}W_k|^2(y, s)dy\\
\leq&\sum\limits_{1<\beta<\mu_0}\|\partial_y^{\beta}W_k(\cdot, s)\|_{L^{\frac{2(\mu_0-1)}{\beta-1}}}^2\|\partial_y^{\mu_0-\beta+1}W(\cdot, s)\|_{L^{\frac{2(\mu_0-1)}{\mu_0-\beta}}}^2\\
\leq&M^{\frac{1}{16}} \sum\limits_{1<\beta<\mu_0}\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^2}^{2\frac{\beta-1}{\mu_0-1}}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^{2\frac{\mu_0-\beta}{\mu_0-1}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2\frac{\mu_0-\beta}{\mu_0-1}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2\frac{\beta-1}{\mu_0-1}}\\
\leq&M^{\frac{1}{8}+\frac{\beta-1}{8(\mu_0-1)}}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^2\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^2+M^{-\frac{1}{8}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^2\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^2}^2\\
\leq& 2M^{-\frac{1}{8}} e^{-s}\sum\limits_{j=1}^{n-1}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy+M^{\frac{1}{4}}e^{-3s}\sum\limits_{j=1}^{n}\int_{\mathbb{R}}|W_{\mu_0}^j|^2(y, s)dy.
\end{aligned}
\end{equation}
Then \eqref{A2-3b} comes from \eqref{v-11} and \eqref{v-2}.
For the easier cases of $I_{\beta q k}$ ($q\geq 2$) in \eqref{A2-2}, we
will apply the following three types of estimates with the help of Lemma \ref{lemA-3}:
\begin{subequations}\label{v-13}\begin{align}
&\|\partial_y^{\gamma_j}W(\cdot, s)\|_{L^{\infty}}\leq M^{\frac{1}{16}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{\frac{\gamma_j-1}{\mu_0-1-\frac{1}{2}}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{\frac{\mu_0-\frac{1}{2}-\gamma_j}{\mu_0-1-\frac{1}{2}}},\label{v-13a}\\
&\|\partial_y^{\gamma_j}W(\cdot, s)\|_{L^{2}}\leq M^{\frac{1}{16}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{\frac{\gamma_j-1-\frac{1}{2}}{\mu_0-1-\frac{1}{2}}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{\frac{\mu_0-\gamma_j}{\mu_0-1-\frac{1}{2}}}\ (\gamma_j\geq 2),\label{v-13b}\\
&\|\partial_y^{\beta}W_k(\cdot, s)\|_{L^2}
\leq M^{\frac{1}{16}}\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^2}^{\frac{\beta-1-\frac{1}{2}}{\mu_0-1-\frac{1}{2}}}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^{\frac{\mu_0-\beta}{\mu_0-1-\frac{1}{2}}}\ (\beta\geq 2).\label{v-13c}
\end{align}
\end{subequations}
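The exponents in \eqref{v-13} are read off from Lemma \ref{lemA-3} with $d=1$, the constant $C$ of \eqref{A2-1} being absorbed into the factor $M^{\frac{1}{16}}$ for $M$ large. For instance, for \eqref{v-13a} one applies \eqref{A2-1} to $u=\partial_y W$ with $j=\gamma_j-1$, $m=\mu_0-1$, $r=2$ and $p=q=\infty$, so that
\begin{equation*}
0=(\gamma_j-1)+\alpha\Big(\frac{1}{2}-(\mu_0-1)\Big),\qquad\text{i.e.,}\qquad \alpha=\frac{\gamma_j-1}{\mu_0-1-\frac{1}{2}};
\end{equation*}
\eqref{v-13b} and \eqref{v-13c} follow in the same way with $p=2$.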
When $\beta\geq 2$ and $q\geq 2$, substituting \eqref{v-13a} and \eqref{v-13c} into \eqref{A2-2} yields
\begin{equation}\label{v-131}\begin{aligned}
&\int_{\mathbb{R}} I_{\beta q k}^2 (y, s)dy\\
=&\sum\limits_{\gamma_1+\cdots+\gamma_q=\mu_0-\beta+1,\ \gamma_j\geq 1 (1\leq j\leq q)}\|\partial_y^{\gamma_1}W(\cdot, s)\|_{L^{\infty}}^2\cdots\|\partial_y^{\gamma_q}W(\cdot, s)\|_{L^{\infty}}^2\|\partial_y^{\beta}W_k(\cdot, s)\|_{L^{2}}^2\\
\leq & M^{\frac{q+1}{4}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2\frac{\mu_0-\beta+1-q}{\mu_0-1-\frac{1}{2}}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2\frac{q(\mu_0-\frac{1}{2})-(\mu_0-\beta+1)}{\mu_0-1-\frac{1}{2}}}\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^{2}}^{2\frac{\beta-1-\frac{1}{2}}{\mu_0-1-\frac{1}{2}}}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^{2\frac{\mu_0-\beta}{\mu_0-1-\frac{1}{2}}}\\
\leq& M^{\frac{q+1}{2}} \left(\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2q}+\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2q}\right)\left(\|\partial_y^{\mu_0}W_k(\cdot, s)\|_{L^2}^2+\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^2\right).
\end{aligned}
\end{equation}
Combining this with \eqref{iv-6}, \eqref{vi-35} and \eqref{vii-16} yields
\begin{equation}\label{v-14}
\int_{\mathbb{R}}I_{\beta q k}^2 (y, s)dy\leq M^{3q+1}e^{-(q+3)s}\leq M^{3\mu_0+1}e^{-(q+3)s}\ (q\geq 2, \beta\geq 2).
\end{equation}
When $\beta=1$ and $q\geq 3$, similarly to \eqref{v-14}, we have
\begin{equation}\label{v-15}\begin{aligned}
&\int_{\mathbb{R}} I_{1 q k}^2 (y, s)dy\\
=&\sum\limits_{\gamma_1+\cdots+\gamma_q=\mu_0-\beta+1,\ \gamma_j\geq 1 (1\leq j\leq q)}\|\partial_y^{\gamma_1}W(\cdot, s)\|_{L^{\infty}}^2\cdots\|\partial_y^{\gamma_q}W(\cdot, s)\|_{L^{\infty}}^2\|\partial_y W_k(\cdot, s)\|_{L^{2}}^2\\
\leq & M^{\frac{q+1}{4}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2\frac{\mu_0-\beta+1-q}{\mu_0-1-\frac{1}{2}}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2\frac{q(\mu_0-\frac{1}{2})-(\mu_0-\beta+1)}{\mu_0-1-\frac{1}{2}}}\|\partial_y W_k(\cdot, s)\|_{L^{2}}^{2}\\
\leq& M^{\frac{q+1}{2}} \left(\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2q}+\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2q}\right)\|\partial_y W_k(\cdot, s)\|_{L^{2}}^2\\
\leq &M^{3q+1}e^{-(q+\frac{5}{4})s}\ (q\geq 3),
\end{aligned}
\end{equation}
where the last estimate comes from \eqref{iv-6} and \eqref{vii5-2} with $\nu=\frac{7}{24}$.
When $\beta=1$ and $q=2$, without loss of generality, we assume $\gamma_2\geq 2$ due to $\gamma_1+\gamma_2=\mu_0\geq 6$.
Then we apply \eqref{v-13a} and \eqref{v-13b} to control $\partial_y^{\gamma_1}W$ and $\partial_y^{\gamma_2}W$ respectively and
subsequently obtain the following estimate with \eqref{iv-6} and \eqref{vii-16}
\begin{equation}\label{v-151}\begin{aligned}
&\int_{\mathbb{R}}I_{1 q k}^2(y, s)dy\\
=&\|\partial_y^{\gamma_1}W(\cdot, s)\|_{L^{\infty}}^2\|\partial_y^{\gamma_2}W(\cdot, s)\|_{L^2}^2\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^2\\
\leq & M^{\frac{1}{4}}\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2\frac{\mu_0-\frac{1}{2}}{\mu_0-1-\frac{1}{2}}}\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^{2}}^{2\frac{\mu_0-2-\frac{1}{2}}{\mu_0-1-\frac{1}{2}}}\|\partial_y W_k(\cdot, s)\|_{L^{\infty}}^2\\
\leq &M^5 e^{-5s}.
\end{aligned}
\end{equation}
Thus, we get \eqref{A2-3c} from \eqref{v-14}-\eqref{v-151} and the proof of Lemma \ref{lemA-4} is finished.\end{proof}
\begin{lem}\label{lemA-5} {\it For $I_{\beta q n}$, one has
\begin{equation}\label{A-10}
\int_{\mathbb{R}}I_{\beta q n}^2(y, s)dy\leq M^{3q+1}e^{-(q+1)s}.
\end{equation}
}
\end{lem}
\begin{proof} As in Lemma \ref{lemA-4}, the estimates on the two terms $I_{\mu_0 1 n}$ and $I_{1 1 n}$ are crucial in the
proof of \eqref{A-10}. For these two terms, similarly to \eqref{v-10}, one has
\begin{equation}\label{A-111}
\int_{\mathbb{R}}(I_{\mu_0 1 n}^2+I_{1 1 n}^2)(y, s)dy\leq 2\int_{\mathbb{R}}|\partial_y W|^2\cdot|\partial_y^{\mu_0}W|^2(y, s)dy
\leq 2\|\partial_y W(\cdot, s)\|_{L^{\infty}}^2\cdot\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^2.
\end{equation}
When $1<\beta<\mu_0$, similarly to \eqref{v-131}, we arrive at
\begin{equation}\label{A-112}
\int_{\mathbb{R}}I_{\beta q n}^2(y, s)dy\leq M^{\frac{q+1}{2}}\left(\|\partial_y W(\cdot, s)\|_{L^{\infty}}^{2q}+\|\partial_y^{\mu_0}W(\cdot, s)\|_{L^2}^{2q}\right)\left(\|\partial_y W_n(\cdot, s)\|_{L^{\infty}}^2+\|\partial_y^{\mu_0}W_n(\cdot, s)\|_{L^2}^2\right).
\end{equation}
Combining \eqref{A-111}-\eqref{A-112} with \eqref{ii-3}, \eqref{iv-6}, \eqref{vi-35} and \eqref{vii-16} yields \eqref{A-10}
and the proof of Lemma \ref{lemA-5} is completed.
\end{proof}
\vskip 1 true cm
\end{document}
\begin{document}
\title{Fair-Share Allocations for Agents with Arbitrary Entitlements}
\author{Moshe Babaioff\thanks{Microsoft Research --- E-mail: \texttt{[email protected]}.}, Tomer Ezra\thanks{Tel Aviv University --- E-mail: \texttt{[email protected]}.}, Uriel Feige\thanks{Weizmann Institute --- E-mail: \texttt{[email protected]}. Part of the work was conducted at Microsoft Research, Herzeliya.}}
\maketitle
\begin{abstract}
We consider the problem of fair allocation of indivisible goods to $n$ agents, with no transfers. When agents have equal entitlements, the well established notion of the maximin share (MMS) serves as {an attractive} fairness criterion, where to qualify as fair, an allocation needs to give every agent at least a substantial fraction of her MMS.
In this paper we consider the case of arbitrary (unequal) entitlements.
We explain shortcomings in previous attempts that extend the MMS to
unequal
entitlements.
Our conceptual contribution is the introduction of a new notion of a share, the {\em AnyPrice share} (APS), that is appropriate for settings with arbitrary entitlements. Even for the equal entitlements case, this notion is new, and satisfies $APS \ge MMS$, where the inequality is sometimes strict. We present two equivalent definitions for the APS (one as a minimization problem, the other as a maximization problem), and provide comparisons between the APS and previous notions of fairness.
Our main result concerns additive valuations and arbitrary entitlements, for which we provide
a polynomial-time algorithm
that gives every agent at least a $\frac{3}{5}$-fraction of her APS.
This algorithm can also be viewed as providing strategies in
a certain natural bidding game, and these strategies secure each agent at least a $\frac{3}{5}$-fraction of her APS.
\end{abstract}
\section{Introduction}\label{sec:intro}
There is a vast literature on the problem of fairly allocating indivisible items to agents that have preferences over the items
(see, for example, the classic books \citep{moulin2004fair,brams1996fair} and the recent survey \citep*{BouveretCM16}).
While most literature considers the important special case that agents have equal entitlements to the items,
there is also a fast growing recent literature on the more general model in which agents may have arbitrary, possibly unequal, entitlements to the items, see, for example, [\citealp{BabaioffNT2020}; \citealp{farhadi2019fair}; \citealp{chakraborty2019weighted}; \citealp{aziz2019weighted}; \citealp{aziz2020polynomial}]. The settings with arbitrary entitlements are sometimes referred to in the literature as having agents with ``asymmetric shares/weights'' or ``weighted/unequal entitlements''.
{The general model of arbitrary entitlements captures many settings that are not captured by the equal entitlement model. For example, it captures ``agents'' each of which is actually a representative of a population, {and the population sizes differ}
(e.g., states in the US have different population sizes). It can also capture situations in which agents have made unequal
prior investments in the goods (as partners that have invested different amounts in a partnership), and can also capture other asymmetries between agents (for additional motivation see examples presented, e.g., in \citep*{chakraborty2019weighted}).
The definition of a ``fair division" when agents have unequal entitlements} is more illusive than the equal entitlements case. In this paper we explain shortcomings of recent work that aims to capture fairness in allocation of indivisible goods
when agents have unequal
entitlements. We then suggest a new notion of a share and prove existence of fair allocations with respect to that notion.
A fundamental problem in this space is the allocation of a set ${\cal{M}}$ of $m$ indivisible {goods (items)} to $n$ agents with additive valuations over the items, with no monetary transfers.
Every agent $i$ has an additive valuation $v_i$ and entitlement of $b_i>0$ to the items, assuming the entitlements sum up to $1$ ($\sum_i b_i=1$). We want to partition the items between the agents in a ``fair" way according to their entitlements, without using money.
How should we interpret the meaning of ``entitlements'' in this setting?
If items were divisible then there is a natural interpretation -- each agent should get at least as high a value as she would get from a $b_i$ fraction of each item, which for additive valuations is the same as her \emph{proportional} share
$b_i \cdot v_i({\cal{M}})$, which we denote by $Proportional(b_i)$.
Yet when items are indivisible, a proportional allocation (one which gives every agent her proportional share) might not exist, nor any finite approximation to it (consider two agents with equal entitlements over a single item -- one must get nothing).
To address the problem of indivisibility with equal entitlements, the notion of \emph{maximin} share (MMS) was suggested {by \citet{Budish11}}. This share is equal to the value an agent can ensure she gets if she partitions the items into $n$ bundles, and gets the worst one.
Although for some instances it is not possible to give all agents their maximin share,
in every instance it is possible to give at least a large constant fraction of it \citep*{KurokawaPW18}.
Yet, this notion is clearly inappropriate for agents with unequal
entitlements. Consider two agents with entitlements $9/10$ and $1/10$. With ten identical items, clearly the first agent should not get only five of the items (the maximum she can ensure by splitting the items into two and getting the smaller of the two bundles).
To address this, \citet*{farhadi2019fair} have suggested the notion of \textit{Weighted-max-min-share} (WMMS).
{The WMMS of agent $i$ with valuation $v_i$ when the vector of entitlements is $(b_1,b_2,\ldots,b_n)$ is defined to be the maximal value $z$ such that there is an allocation that gives each agent $k$ at least $\frac{z\cdot b_k}{b_i}$ according to $v_i$ (see Appendix \ref{sec:missing-defs} for a formal definition).}
In contrast to the maximin share, the WMMS is influenced by the partition of the entitlements among the other agents. This leads to scenarios in which an agent with arbitrarily large entitlement might have a WMMS of $0$ even when there are many items.
For example, an agent that has entitlement of 99\% ($b_i=0.99$) for 100 items that she values the same, has a WMMS of 0, if the total number of agents is~$101$ (with $101$ agents and only $100$ items, some agent must receive no item, which forces $z=0$).
This suggests that the WMMS is sometimes too small.
Moreover, that paper presents an impossibility result suggesting the WMMS is sometimes too large: there are instances where in every allocation some agent receives at most a $\frac{1}{n}$-fraction of her WMMS. Due to the above examples, we do not view the WMMS as a concept that is aligned well with our intended intuitive understanding of fair allocations when entitlements are unequal.
\citet*{BabaioffNT2020}
have suggested a different generalization of MMS to the case of arbitrary entitlements: the \emph{$l$-out-of-$d$ share} of an agent is the value she can ensure herself by splitting the set of goods into $d$ bundles and getting the worst $l$ of them. The \emph{Pessimistic} share of an agent with entitlement of $b_i$, which we denote\footnote{This share is implicit in \citet{BabaioffNT2020}, which did not give it a name.} by $Pessimistic(b_i)$, is defined to be the highest $l$-out-of-$d$ share over all integers $l$ and $d$ such that $l/d\leq b_i$.
For the equal entitlement case with $n$ agents, $b_i=1/n$ and the $Pessimistic(b_i)$ share is the same as the MMS.
For the example above, an agent with entitlement of $9/10$ will split the items to ten singletons, and will get nine of them.
\citet{BabaioffNT2020} did not prove that there always exists an allocation that concurrently gives each agent some fraction of the pessimistic share. In this paper we present a new share that is always (weakly) larger, {and
prove} existence of a share-approximating allocation\footnote{{For some positive constant fraction, a \emph{share approximating allocation} gives every agent at least that fraction of her share.} }
{even} for our larger share.
\citet{BabaioffNT2020} have related the pessimistic share to Competitive Equilibrium (CE), which we discuss next.
A different approach for fair allocation is to try to come up with a fair allocation for each particular instance, instead of defining the share of each agent.
For equal entitlements, {\citet{varian1973equity}} has suggested to use \emph{Competitive Equilibrium from Equal Income (CEEI)} in which each agent has a budget of $1/n$ and the allocation is according to the Competitive Equilibrium allocation with these budgets. {For equal entitlements such a CE allocation is attractive as it ensures several nice fairness properties -- it is envy free and thus proportional (and gives each agent at least her MMS).}
More generally, the notion of \emph{Competitive Equilibrium (CE)} is defined for any entitlements, by treating the entitlement of an agent as her budget, and looking for item prices that clear the market.
CE allocations are intuitively fair as all agents have access to the same (anonymous) item prices, and in a CE every agent gets her favorite bundle within her budget.
For the case of arbitrary entitlements, \citet{BabaioffNT2020} have proven that the allocation of a CE ensures that an agent with budget $b_i$ will get her pessimistic share $Pessimistic(b_i)$. {Interestingly, unlike the case of equal entitlements, with unequal entitlements a CE allocation need not give every agent her proportional share (see Proposition \ref{prop:TPS-CE}).}
While CE allocations have some nice fairness properties, unfortunately, even for equal entitlements, CE allocations might fail to exist,
as one can immediately see from the simple case of two agents with equal budgets and a single item.\footnote{\label{footnote:generic}
\citet{BabaioffNT2020} have tried to overcome the CE existence problem through budget perturbations (see also \citep{segal2020competitive}).
We note that a CE for such perturbed budgets, even if it exists, no longer provides guarantees with respect to the original pessimistic shares.
}
Thus, as there are instances with no CE, we cannot rely on CE allocations as a method for fair division of indivisible items.
An alternative instance-based approach is to allocate items using the allocation that maximizes the Nash social welfare (NSW), for the equal entitlements case, or the weighted Nash social welfare, for the arbitrary entitlements case. Recall that for valuations $\textbf{v}=(v_1,v_2,\ldots,v_n)$ and entitlements $\textbf{b}=(b_1,b_2,\ldots,b_n)$, the \emph{weighted Nash social welfare} of an allocation $A=(A_1,A_2,\ldots, A_n)$ is $\prod_{i \in {\cal{N}}} v_i(A_i)^{b_i}$.
\citet*{CaragiannisKMPS19} showed that an allocation that maximizes NSW is always Pareto optimal, envy-free up to one good (EF1) and always gives each agent at least $O(\frac{1}{\sqrt{n}})$ fraction of her maximin share, showing that for equal entitlements maximizing NSW ensures a combination of some efficiency and fairness properties.
Unfortunately, for some simple unequal entitlements cases, maximizing the weighted NSW seems to produce unfair allocations.
As an example consider an instance with three agents and three items, two of the items are good (value $1$ each) and the last item is unattractive.
Assume the first two agents have a value 0 for it, while the last agent has a value $\varepsilon>0$. No matter how high the entitlement of the last agent is, in every allocation that maximizes the weighted NSW she will receive the unattractive item. This seems very unfair to the last agent, as although she has the largest entitlement (possibly by far), she {only gets her least} preferred item.
\subsection{Our Contribution}
Our first, conceptual contribution, is the definition of a new share, the \emph{AnyPrice share}, a share definition that is suitable not only for equal entitlements but also for arbitrary entitlements, and makes sense beyond additive valuations.
We present two alternative definitions for this share, one based on prices (inspired by CE) and another on item bundling ({inspired by MMS}).
The first definition, based on prices, defines the share of an agent $i$ to be the value she can guarantee herself whenever her budget is set to her entitlement $b_i$ (when $\sum_i b_i=1$) and she buys her highest value affordable set when items are {adversarially} priced with a total price of $1$.
Let ${\cal{P}}=\{(p_1,p_2,\ldots,p_m) | p_j\geq 0\ \forall j\in{\cal{M}},\ \ \sum_{j\in {\cal{M}}} p_j=1\}$ be the set of item-price vectors that total to $1$.
The price-based definition of AnyPrice share (APS) is the following:
\begin{restatable}{definition}{defanyprice}[AnyPrice share]
\label{def:anyprice-prices}
Consider a setting in which agent $i$ with valuation $v_i$ has entitlement $b_i$ to a set of indivisible items ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of agent $i$, denoted by $\anypricei$, is the value she can guarantee herself whenever the items in ${\cal{M}}$ are adversarially priced with a total price of $1$, and she picks her favorite affordable bundle:
$$\anypricei = \min_{(p_1,p_2,\ldots,p_m)\in {\cal{P}}}\ \ \max_{S\subseteq {\cal{M}}} \left\{v_i(S) \Big | \sum_{j\in S} p_j\leq b_i\right\}$$
When ${\cal{M}}$ and $v_i$ are clear from context we denote the APS share of an agent $i$ with entitlement $b_i$
by $\anypricevali{b_i}$, instead of $\anypricei$.
\end{restatable}
Observe that the AnyPrice share is well-defined for any instance, and is a definition of a share that
depends on the agent valuation and entitlement, but not on the valuations and entitlements of the other agents.
This can be contrasted with the value the agent gets under a CE allocation, a value that depends on the entire instance, and additionally, is not well defined as a CE might not exist. The AnyPrice share is defined using prices, but unlike CE, it does \emph{not} require finding prices that clear the market (which might not exist) but rather uses (worst case) prices that define a share for an agent based on her own valuation and entitlement.
From the definition it is clear that when a Competitive Equilibrium exists (with entitlements serving as budgets),
every agent gets at least her APS (as the CE price vector is one possible price vector in the minimization).
We further present an alternative definition that turns out useful in proving claims regarding APS.
Using LP duality we show that the AnyPrice share (APS) of agent $i$ also equals the maximum value $z$ for which there exists a distribution over bundles, each of value at least $z$ to $i$, such that no item is allocated more than $b_i$ times in expectation (see Definition~\ref{def:anyprice-bundles} for a formal definition).
We show (Proposition \ref{prop:small-support}) that there is always a solution to this optimization problem with small support -- at most $m$ sets in the support.
While some of our proofs use the priced-based definition, others use the alternative definition, demonstrating the usefulness of both.
The next example illustrates the notion of APS.
\begin{example}\label{example:APS-twice-Pessimistic}
Consider a
setting with five items and an agent $i$ with entitlement $b_i=\frac{2}{5}$ and an additive valuation with values for the items being $2,1,1,1,0$. Her proportional share is $\frac{2}{5} (2+1+1+1) =2$. Her pessimistic share $Pessimistic(b_i)$ is $1$, as partitioning the items into five bundles and getting the worst two bundles, or partitioning into four or three bundles and getting the worst bundle, gives a value of only~1.
Her APS is at least $2$ as for any pricing of the items, either the item of value $2$ has price at most $\frac{2}{5}$, or there are two items of value $1$ whose total price is at most $\frac{2}{5}$.
Her APS is not larger than 2, as by pricing each item at a fifth of its value, the agent cannot afford a bundle of value more than $2$.
\end{example}
The example illustrates that when valuations are additive, the APS can be larger than the pessimistic share (even
twice as large).
This is no coincidence, as we show that the APS is always at least the pessimistic share, for any valuations (even non-additive).
The definition of APS does not rely on the valuations being additive, and is well defined for any valuations.
Yet, in this paper we mainly focus on proving results for the additive case, leaving the study of more general valuations for future research.
We start by making several observations regarding properties of the APS for additive valuations. First, we observe that it is at most the proportional share. Secondly, we show that while the APS is NP-hard to compute (like the MMS), it has a pseudo-polynomial time algorithm, while the MMS does not.
We then turn to prove results about the approximation to the APS that can be given to all agents concurrently.
We observe that for two agents, it is always possible to give both of them their APS, and turn to study the more involved case of more than two agents.
Our main result is that for any number of agents with additive valuations and arbitrary entitlements, it is always possible to give each agent at least a $\frac{3}{5}$ fraction of her AnyPrice share, and moreover,
at least $\frac{1}{2-b_i}$ fraction of the APS (which is more than $\frac{3}{5}$ if her entitlement is larger than $1/3$). Note that the fraction $\frac{1}{2-b_i}$ goes to $1$ as $b_i$ grows to $1$.
\begin{theorem}\label{thm:intro-unequal}
For any instance with $n$ agents with additive valuations \textbf{v}\ and entitlements \textbf{b}, there exists an allocation in which every agent gets at least
$\max \{\frac{3}{5}, \frac{1}{2-b_i} \}$ fraction of her AnyPrice share.
Moreover, such an allocation can be computed in polynomial time.
\end{theorem}
As far as we know, this
is the first positive result that guarantees a constant fraction of \emph{any} share to agents with arbitrary entitlements. (See Section~\ref{sec:discussion} for clarifications regarding this statement.)
Recall that for equal entitlements, with at least three agents it is not possible to give all agents their MMS \citep*{KurokawaPW18}.
As the APS is at least as large as the MMS,
we thus conclude that when there are more than two agents one cannot hope to give all agents their APS share exactly, even in the equal entitlements case, and constant approximation is the best we can hope for (we leave open the problem of finding the exact constant).
The proof of the theorem is based on
analyzing a natural ``bidding game",
in which entitlements serve as budgets, and at each round the highest bidder wins, taking any items she wants and paying her bid for each item taken. We show that in this game, no matter how other players bid, an agent has a bidding strategy that gives her at least $\max \{\frac{3}{5}, \frac{1}{2-b_i} \}$ fraction of her APS.
If the bidding game is used as the mechanism for allocating the items, then in every equilibrium (e.g., of the full-information game), every agent receives at least $\max \{\frac{3}{5}, \frac{1}{2-b_i} \}$ fraction of her AnyPrice share.
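To make the mechanics of the bidding game concrete, the following schematic simulation (a sketch of ours, for illustration only) implements the rules described above; the tie-breaking rule and the greedy placeholder strategy are our own assumptions and are not the strategies analyzed in this paper.
\begin{verbatim}
# Schematic simulation of the bidding game: budgets are the entitlements, the
# highest bidder in each round takes any items she wants and pays her bid per item.
def bidding_game(values, budgets, bid_fns):
    # values[i][j]: agent i's value for item j; bid_fns[i](remaining, budget) -> bid <= budget
    remaining = set(range(len(values[0])))
    bundles = [set() for _ in budgets]
    budgets = list(budgets)
    while remaining:
        bids = [fn(remaining, b) for fn, b in zip(bid_fns, budgets)]
        if max(bids) <= 0:
            break                                   # no positive bid: stop (assumption)
        winner = max(range(len(bids)), key=lambda i: bids[i])  # ties -> lowest index (assumption)
        price = bids[winner]
        affordable = int(budgets[winner] // price)  # how many items the winner can pay for
        # placeholder strategy: the winner takes her most valuable affordable items
        chosen = sorted(remaining, key=lambda j: -values[winner][j])[:affordable]
        for j in chosen:
            remaining.discard(j)
            bundles[winner].add(j)
        budgets[winner] -= price * len(chosen)
    return bundles
\end{verbatim}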
Finally, we consider the APS in the special case of equal entitlements.
{While for equal entitlements the pessimistic share coincides with the MMS, the APS does not.}
Rather, the APS is
at least as large as the MMS, and sometimes strictly larger. Hence existing algorithms that guarantee every agent a positive fraction of the MMS need not guarantee the same fraction with respect to the APS. Nevertheless, we prove that in the equal entitlements case an algorithm of~\citep{BK20} gives every agent at least a $\frac{2}{3}$ fraction of her APS (which is better than the $\frac{3}{5}$ fraction that we get for the arbitrary entitlements case).
\begin{theorem}
For any instance with $n$ agents with additive valuations \textbf{v}\ and equal entitlements ($b_i=1/n$ for every $i$), there exists an allocation in which every agent $i$ gets at least a
$\frac{2n}{3n-1}$
fraction of her AnyPrice share. Moreover, such an allocation can be computed in polynomial time.
\end{theorem}
As mentioned above, giving every agent her APS is not possible, even for equal entitlements. Finding the best approximation is open even for the case of equal entitlements.
We remark that our results are actually somewhat stronger than the theorems stated above.
Some bounds are proven with respect to a notion of a share called the \emph{Truncated Proportional Share} (TPS) (see Section~\ref{sec:tps}) which for additive valuation is at least as large as the APS, and sometimes larger.
The
paper is organized as follows. We first present some additional related work in Section \ref{sec:related}.
We then present the model and some prior fairness definitions in Section \ref{sec:model}.
We introduce the notion of AnyPrice share and discuss its properties in Section \ref{sec:aps}.
We then present our main result for APS with arbitrary entitlements in Section \ref{sec:allocation-game}, and our result for the special case of equal entitlements in Section \ref{sec:greedy-efx}.
All proofs missing from the body of the paper appear in the appendix. The main part of this paper considers only allocation of goods. Extensions of the AnyPrice share to chores and to mixed manna are presented in Appendix~\ref{app:chores}.
\subsection{Additional Related Work}
\label{sec:related}
One approach taken for fairness when agents have equal entitlement is to require the allocation to be envy free (EF). With indivisible items, such envy-free allocation might not exist, and relaxations as envy-free up to one good (EF1) were considered~\citep{Budish11}. {EF1 allocations were shown to exist in~\citep*{LiptonMMS04}. Clearly, when agents have different entitlements, naively using this notion is inappropriate. \citet*{chakraborty2019weighted} have extended envy-freeness to settings with agents having arbitrary entitlements (introducing notions of strong and weak weighted EF1, WEF1), and proved existence result for Pareto optimal allocations satisfying these notions. We remark that even in the equal entitlements settings,
{there are instances with an EF1 allocation (that is Pareto optimal) in which some agent gets only an $O(\frac{1}{n})$ fraction of her MMS. }
Hence WEF1 allocations do not guarantee agents a constant fraction of their APS. }
While we consider fair allocation of indivisible goods to agents with arbitrary entitlement, \citet*{aziz2019weighted} considered the related problem of fair assignment of indivisible chores (negative value items).
\citet*{aziz2020polynomial} considered the case that every item value might have mixed signs (a good for some agents, a bad for others) when agents have arbitrary entitlements, {and show that there always exists an allocation that is not Pareto dominated by any fractional allocation and also gives each agent her proportional share up to one item. Moreover, they show that such an allocation can be found in polynomial time.}
Our result for agents with equal entitlements shows that we can give each agent $2/3$ of her APS. As the APS is at least the MMS, this result relates to the literature on MMS and approximate MMS allocations.
{The maximin share (MMS) was first introduced by \citet{Budish11} as a relaxation of the proportional share. \citet*{KurokawaPW18} showed that for agents with additive valuations, an allocation that gives each agent her MMS may not exists. A series of papers
[\citealp{KurokawaPW18}; \citealp{amanatidis2017approximation}; \citealp{BK20}; \citealp{GhodsiHSSY18}; \citealp{garg2019approximating}; \citealp{GT20}] considered the best fraction of the MMS that can be concurrently guaranteed to all agents, and the current state of the art (for additive valuations) is $\frac{3}{4} +\Omega(\frac{1}{n}) $ fraction of the MMS.}
{In our proofs we consider a natural bidding game in which agents have budgets, and bid for the right to select items (as many items as their budget allows, given the bid). We design and analyse strategies for the agents that guarantee that they obtain a high value in the game. Our bidding game is a variant of the \textit{Poorman game} introduced by \citet*{lazarus1995richman} where the agents start with budgets, and in each turn, the agents perform a first price auction in order to determine who selects the next action of the game.
Variants of these games were later studied in the context of allocating items in multiple papers, for example, \citep{huang2012sequential,meir2018bidding}.
}
\section{Model}
\label{sec:model}
We consider fair allocation of a set ${\cal{M}}$ of $m$ indivisible goods (items) to a set ${\cal{N}}$ of $n$ agents, with entitlements to the items that are potentially different.
Each agent $i$ is also associated with an \emph{entitlement}
$b_i > 0$, and the sum of entitlements of the agents is $1$ (i.e., $\sum_{i\in {\cal{N}}} b_i = 1$).
We use $\textbf{b}= (b_1,\ldots,b_n)$ to denote the vector of agents' entitlements,
{and say that agents have \emph{arbitrary entitlements}.
If $b_i=1/n$ for every $i$ then we say that agents have \emph{equal entitlements}. }
In this work we consider deterministic allocations. An \emph{allocation} $A$ is a partition of the items to
$n$ disjoint bundles $A_1, \ldots, A_n$ (some of which might be empty), where $A_i \subseteq {{\cal{M}}}$ for every $i\in {\cal{N}}$.\footnote{One can extend the terminology and notation so that an allocation may leave some of the items not allocated. This extension is not needed in this paper, as we only consider goods (and not bads), and allocating additional items to an agent cannot hurt her.}
{We denote the set of all allocations of ${\cal{M}}$ to the $n$ agents by $\mathcal{A}$.}
A valuation $v$ {over the set of goods ${\cal{M}}$} is called \textit{additive} if each item $j\in {\cal{M}}$ is associated with a value $v(j)\geq 0$, and the value of a set of items $S \subseteq {\cal{M}}$ is $v(S)=\sum_{j\in S}v(j)$.
A valuation $v$ is called \textit{subadditive} if for every two sets of items $S,T\subseteq {\cal{M}}$, it holds that $v(S)+v(T)\geq v(S\cup T)$.
In this work we assume
that all valuations are monotone (i.e., $v(S) \geq v(T)$ for every $T\subseteq S$), and normalized (i.e., $v(\emptyset)=0$.)
{Let $v_i(\cdot)$ denote the valuation of agent $i$, and let $\textbf{v} =(v_1,\ldots,v_n)$ denote the vector of agents' valuations.}
We assume that the valuation functions of the agents as well as the entitlements are known to the social planner, and that there are no transfers (no money involved).
We formally define some shares from the literature.
{The \emph{proportional share} of agent $i$ with valuation $v_i$ and entitlement $b_i$ for items ${\cal{M}}$ is
$\proportional{b_i}{v_i}{{\cal{M}}}= b_i\cdot v_i({\cal{M}})$.
When $v_i$ and ${\cal{M}}$ are clear from context we omit them from the notation and denote this share by $Proportional(b_i)$.
}
We next define the maximin share (MMS), which is defined for the equal entitlements case.
\begin{definition}[Maximin Share (MMS)]
\label{def:mms}
The \emph{maximin share (MMS)} of agent $i$ with valuation $v_i$ over ${\cal{M}}$, when there are $n$ agents, which we denote by $\MMSfull{n}$,
is defined to be the highest value she can ensure herself by splitting the goods into $n$ bundles and getting the worst one. Formally:
$$\MMSfull{n} = \max_{(A_1,\ldots,A_n)\in \mathcal{A}} \min_{j \in [n] } \left\{v_i(A_j)
\right\},$$
\end{definition}
When $v_i$ and ${\cal{M}}$ are clear from context, we denote this share by $\MMSnum{n}$, and when $n$ is also clear from context, we simply use the term MMS.
In order to formalize the notion of pessimistic share, we first present the definition of the $\ell$-out-of-$d$ share, a generalization of the MMS.
\begin{definition}[$\ell$-out-of-$d$ share]
\label{def:lds}
The \emph{$\ell$-out-of-$d$ share} of agent $i$ with valuation $v_i$ over ${\cal{M}}$ is the value she can ensure herself by splitting the goods into $d$ bundles and getting the worst $\ell$ of them. Formally:
$${\ell\mbox{-out-of-}d\mbox{-}Share}(v_i,{\cal{M}}) = \max_{(P_1,\ldots,P_d)\in\mathcal{Z} } \min_{J \subseteq [d]:~ |J| = \ell } \left\{v_i(\cup_{j\in J}P_j)
\right\},$$
where $\mathcal{Z}$ is the set of all partitions of ${\cal{M}}$ into $d$ disjoint sets.
\end{definition}
\begin{definition}[Pessimistic Share]
\label{def:pes}
The \emph{Pessimistic share} of agent $i$ with valuation $v_i$ over ${\cal{M}}$, and entitlement $b_i$, denoted by $\pessimisticvali{b_i}{v_i}{{\cal{M}}}$, is defined to be the highest $\ell$-out-of-$d$ share for all integers $\ell$ and $d$ such that $\frac{\ell}{d} \leq b_i$. Formally:
{
$$\pessimisticvali{b_i}{v_i}{{\cal{M}}}
= \max_{\ell,d\in \mathbb{N}: ~\frac{\ell}{d}\leq b_i} \Big\{{\ell\mbox{-out-of-}d\mbox{-}Share}(v_i,{\cal{M}})\Big\}.$$}
\end{definition}
When $v_i$ and ${\cal{M}}$ are clear from context we omit them from the notation and denote this share by $Pessimistic(b_i)$.
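For small additive instances these shares can be evaluated by brute force. The following sketch (ours, for illustration only; it assumes an additive valuation given as a list of item values, and restricts attention to $d\leq m$ bundles for simplicity) computes the $\ell$-out-of-$d$ share and the pessimistic share.
\begin{verbatim}
# Brute-force sketch: l-out-of-d share and pessimistic share of an additive valuation.
from itertools import product
from fractions import Fraction

def l_out_of_d_share(values, l, d):
    best = 0
    for assignment in product(range(d), repeat=len(values)):  # all partitions into d bundles
        bundles = [0] * d
        for item, b in enumerate(assignment):
            bundles[b] += values[item]
        best = max(best, sum(sorted(bundles)[:l]))             # value of the worst l bundles
    return best

def pessimistic_share(values, entitlement):
    best = 0
    for d in range(1, len(values) + 1):
        for l in range(1, d + 1):
            if Fraction(l, d) <= entitlement:
                best = max(best, l_out_of_d_share(values, l, d))
    return best

# instance with values 2,1,1,1,0 and entitlement 2/5, as in the introduction
print(pessimistic_share([2, 1, 1, 1, 0], Fraction(2, 5)))      # prints 1
\end{verbatim}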
\section{The AnyPrice Share (APS)}
\label{sec:aps}
In this section we introduce a new share definition that is well defined for arbitrary entitlements (even for non-additive valuations), present some of its properties, and compare it to previously suggested shares.
\subsection{The Definition of the AnyPrice Share}
We present two definitions of $\anypricei$, the AnyPrice share (APS) of an agent $i$ that has valuation $v_i$ and entitlement $b_i$ (when $\sum_i b_i=1$) to items ${\cal{M}}$.
The definitions are general and well defined for arbitrary valuations, and do not require the valuations to be additive.
{Our first definition is based on prices and was presented as Definition \ref{def:anyprice-prices} in Section \ref{sec:intro}.
}
\FullVersion{
The first definition, based on prices, defines the share of an agent $i$ to be the value she can guarantee herself whenever her budget is $b_i$ and she buys her highest value affordable set when items are adversarially priced with total price of $1$.
Let ${\cal{P}}=\{(p_1,p_2,\ldots,p_m) | p_j\geq 0\ \forall j\in{\cal{M}},\ \ \sum_{j\in {\cal{M}}} p_j=1\}$ be the set of item-price vectors that total to $1$.
We restate the price-based definition of APS:
\defanyprice*
}
We next present the second definition of the AnyPrice share, using item packing (in Proposition \ref{prop:anyprice-eqe} we
show that the two definitions are equivalent).
\begin{definition}[AnyPrice share, dual definition] \label{def:anyprice-bundles}
Consider a setting in which agent $i$ with valuation $v_i$ has entitlement $b_i$ to a set of indivisible items ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of $i$, denoted by $\anypricei$, is
the maximum value $z$ she can get
by coming up with non-negative
weights $\{\lambda_T\}_{T\subseteq {\cal{M}}}$ that total to $1$ (a distribution over sets),
such that any set $T$ of value below $z$ has a weight of zero, and
any item appears in sets of total weight at most $b_i$:
$$ \anypricei = \max z $$ subject to the following set of constraints being feasible for $z$:
\begin{itemize}
\item $\sum_{T\subseteq {\cal{M}}} \lambda_T=1$
\item $\lambda_T\geq 0\ \forall {T\subseteq {\cal{M}}}$
\item $\lambda_T= 0\ \forall {T\subseteq {\cal{M}}}$ s.t. $v_i(T)<z$
\item $\sum_{T: j\in T} \lambda_T \leq b_i \ \forall j\in {\cal{M}} $
\end{itemize}
\end{definition}
An equivalent set of constraints that will sometimes be convenient to use is the following.
Let ${{\cal{G}}_i(z)}$ be the family of sets $T\subseteq {\cal{M}}$ such that $v_i(T)\geq z$.
The set of constraints now becomes $\sum_{T\in {{\cal{G}}_i(z)}} \lambda_T=1$,
$\lambda_T\geq 0\ \forall {T\in {{\cal{G}}_i(z)}}$, and $\sum_{T\in {{\cal{G}}_i(z)}: j\in T} \lambda_T \leq b_i \ \forall j\in {\cal{M}} $.
{For the case of equal entitlements this definition can be viewed as a fractional relaxation of the definition of MMS (and hence implying that the APS is at least as large as the MMS). Specifically, one can get the definition of the MMS (with $n$ agents, hence $b_i=\frac{1}{n}$) by
restricting the weight $\lambda_T$ of every non-empty set to be either $b_i$ or~0.
As the total weight is $1$, there must be at most $n$ bundles with non-zero weight.
The constraint that every item belongs to bundles with total weight of at most $b_i$ implies that all
non-empty positive-weight bundles are disjoint.
The APS on the other hand, relaxes the constraint $\lambda_T \in \{0,b_i\}$ for non-empty sets (which can be thought of as an integer constraint of the form $\lambda_T \in \{0,1\} \cdot b_i$) to the fractional constraint $0 \le \lambda_T \le b_i$, and maintains the constraint that every item belongs to bundles with total weight of at most $b_i$.}
While the price-based definition (Definition \ref{def:anyprice-prices}) is very useful in proving that the APS is not larger than some value, by simply presenting prices for which the agent cannot afford any bundle of that value, the weight-based definition (Definition \ref{def:anyprice-bundles}) is very useful in proving the APS is at least as large as some value $z$ -- by simply presenting weights that satisfy the definition for some value $z$.
Indeed, let us go back to Example \ref{example:APS-twice-Pessimistic} to illustrate this point.
Recall that the example considers a setting with five items and an agent $i$ with entitlement $b_i=2/5$ and an additive valuation with values for the items being $2,1,1,1,0$.
Using the price-based definition one could easily see that the APS of the agent is at most $2$, by pricing each item at a fifth of its value (these prices total $1$, and with a budget of $2/5$ the agent cannot afford any set of value more than $2$).
To see that the APS is at least $2$ using the alternative definition,
consider a distribution over four sets, each of value $2$: one set contains the item of value $2$ and has weight $2/5$, while each of the other three sets contains a different pair of the items of value $1$ and has weight $1/5$. Each of the four sets has a value of $2$, and the total weight of the sets containing any single item is at most $2/5$, thus showing the APS is at least $2$.
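To illustrate the weight-based definition computationally, the following Python sketch (ours; it assumes \texttt{numpy} and \texttt{scipy} are available, and is exponential in the number of items, so it is only meant for small examples such as the one above) enumerates all subsets and, for each candidate value $z$, checks feasibility of the constraints of Definition~\ref{def:anyprice-bundles} with a linear program.
\begin{verbatim}
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

def aps_bruteforce(values, b):
    m = len(values)
    subsets = list(chain.from_iterable(
        combinations(range(m), r) for r in range(m + 1)))
    subset_value = [sum(values[j] for j in T) for T in subsets]

    def feasible(z):
        # restrict to sets of value at least z and check the LP:
        # weights sum to 1, every item is covered by weight at most b
        good = [T for T, v in zip(subsets, subset_value) if v >= z]
        if not good:
            return False
        A_ub = np.array([[1.0 if j in T else 0.0 for T in good]
                         for j in range(m)])
        res = linprog(c=np.zeros(len(good)),
                      A_ub=A_ub, b_ub=np.full(m, b),
                      A_eq=np.ones((1, len(good))), b_eq=[1.0],
                      bounds=(0, None), method="highs")
        return res.status == 0   # 0 means a feasible optimum was found

    return max((z for z in set(subset_value) if feasible(z)), default=0.0)

# the example above: values 2,1,1,1,0 and entitlement 2/5 give APS = 2
print(aps_bruteforce([2, 1, 1, 1, 0], 2 / 5))
\end{verbatim}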
We start with several simple observations. We first {show, using linear programming duality,} that Definitions~\ref{def:anyprice-bundles} and \ref{def:anyprice-prices} are indeed equivalent.
\begin{restatable}{proposition}{propDefEq}
\label{prop:anyprice-eqe}
The two shares defined in Definition~\ref{def:anyprice-bundles} and Definition~\ref{def:anyprice-prices} are equivalent.
\end{restatable}
We further observe that there is always a solution with small support to the feasibility problem of Definition~\ref{def:anyprice-bundles} -- at most $m$ sets in the support.
\begin{restatable}{observation}{propSmallSupport}
\label{prop:small-support}
For any valuation $v_i$ and entitlement $b_i$ there is a solution to the optimization problem presented in
Definition~\ref{def:anyprice-bundles} in which there are at most $m$ non-zero weights.
\end{restatable}
We observe that in any Competitive Equilibrium\footnote{For a formal definition of CE see Appendix \ref{sec:missing-defs}.} in which the entitlement of an agent is her budget,
every agent gets her AnyPrice share. This follows immediately from Definition \ref{def:anyprice-prices} as the APS of an agent is the value the agent can secure for \emph{any} possible prices, and in particular, at least that value can be secured for the CE prices.
\begin{observation}
\label{obs:anyPrice-CE}
Assume that the pair $(A,p)$ is an allocation-prices pair that forms a Competitive Equilibrium when agents have valuations $\textbf{v}$ and budgets (=entitlements) $\textbf{b}$.
Then the allocation $A$ is an APS allocation, that is, for every agent $i$ it holds that $v_i(A_i)\geq \anypricei$.
\end{observation}
We next show the APS of an agent is at least as large as the pessimistic share.
\begin{restatable}{proposition}{propAPSPES}
\label{prop:APS-pes}
For any valuation $v_i$ and entitlement $b_i$ it holds that
$$\anypricevali{b_i}\geq Pessimistic(b_i).$$
\end{restatable}
Computationally, the problem of computing the APS is NP-hard, much like the problem of computing the MMS.
\begin{proposition}
\label{prop:computation}
The problem of computing the AnyPrice share of an agent
is NP-hard. This is true even for $b_i=\frac{1}{2}$ (the case of two equally entitled agents) and even when the valuations are additive.
\end{proposition}
Proposition~\ref{prop:computation} follows from the fact that for $b_i=\frac{1}{2}$ the APS and the MMS are identical (Observation \ref{obs:APS-two-equal}), and as MMS is hard to compute in the case of two additive agents with equal entitlements (need to solve \emph{PARTITION}), so is the APS.
The APS definition is general and applies to all valuations, not only additive.
Yet, ex-post fairness is problematic for some super-additive valuations {(even for equal entitlements)}.
To illustrate the problem,
consider a simple setting in which each agent has zero value for any set of items, unless she gets all of them. Splitting the set of items makes little sense, and any reasonable solution will allocate all items to one agent (possibly at random, to get some ex-ante fairness). Thus we view ex-post fairness notions (APS, as well as all other ex-post notions like MMS, CE, etc.) as unsuitable for some super-additive valuations.
In contrast, for sub-additive valuations,
splitting the set of items does make sense, and so do ex-post fairness notions, and the APS in particular.
Before moving to our main interest in this paper, the case of additive valuations, we briefly illustrate the AnyPrice share on unit-demand valuations (which, like additive valuations, are a simple family of sub-additive valuations), illustrating that the APS is indeed a reasonable notion of a share for some sub-additive valuations.
\subsubsection{Unit-Demand valuations}\label{sec:unit-demand}
A valuation is \emph{unit demand} if it assigns a value to each item, and the value of any set is the maximal value of any item in the set.
We first note that unit-demand valuations illustrate that the proportional share is not attractive as a share definition for non-additive valuations, even for equal entitlements.
Consider for example the simple case of an agent that has a unit-demand valuation over $n$ identical items (same value of $1$ for each, and no additional value for more than one item).
While $n$ such agents can get a value of $1$ each, the proportional share of such an agent is only $1/n$,\footnote{This example shows that for non-additive valuations the proportional share might be smaller than the APS. Yet, Proposition \ref{prop:shares-comp} shows that this is never the case for additive valuations.} so the proportional share is not even as large as the worst of the $n$ items.
In contrast, the APS in this example is $1$.
More generally, we observe that for unit-demand valuations with any entitlements, the value of the APS of an agent with entitlement $b_i$ is the value of her $\lceil 1/b_i \rceil$ ranked item.\footnote{It is easy to see that for unit-demand valuations the pessimistic share is the same as the APS.}
There is always an allocation that gives each agent her APS: simply let the agents pick items sequentially in decreasing order of their entitlements (highest entitlement first), breaking ties arbitrarily.
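As a small illustration of the observation above, here is a sketch (ours) of the APS of a unit-demand agent, which is simply the value of her $\lceil 1/b_i \rceil$ ranked item:
\begin{verbatim}
import math

def unit_demand_aps(values, b):
    # the adversary can price fewer than 1/b items strictly above b, so the
    # agent can always afford one of her top ceil(1/b) items, and cannot be
    # guaranteed anything better
    ranked = sorted(values, reverse=True)
    k = math.ceil(1 / b)
    return ranked[k - 1] if k <= len(ranked) else 0.0
\end{verbatim}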
\subsection{Basic Properties of APS with Additive Valuations}
We next focus on additive valuations.
We start by relating the different notions of shares to each other.
\begin{restatable}{proposition}{propShareComp}
\label{prop:shares-comp}
For any additive valuation $v_i$ over a set of items ${\cal{M}}$, and any entitlement $b_i$ it holds that
$$Proportional(b_i) \geq \anypricevali{b_i} \geq Pessimistic(b_i) \geq \frac{1}{2}\cdot \anypricevali{b_i}$$
Moreover, for each of the above inequalities there is a setting in which it holds as equality.
Finally, even for the case of equal entitlements ($b_i=\frac{1}{k}$ for an integer $k$), in which the pessimistic share is equal to the MMS, for each inequality above there is a setting in which it holds as a strict inequality.
\end{restatable}
In particular, this claim shows that the APS is at least as large as the pessimistic share of \citep*{BabaioffNT2020}, and thus {obtaining share approximating allocations for the APS} is harder. Note that for equal entitlements ($b_i=\frac{1}{k}$ for an integer $k$) the MMS and the pessimistic shares are the same, so the above says that even for equal entitlements, the APS and the MMS are different. Yet, they are the same in the special case of two agents with equal entitlements ($b_i=\frac{1}{2}$).
\begin{restatable}{observation}{obsApsTwo}
\label{obs:APS-two-equal}
For any additive valuation $v_i$ over a set of items ${\cal{M}}$, it holds that \\ $\anyprice{\frac{1}{2}}{v_i}{{\cal{M}}}= \MMSfull{2}$.
\end{restatable}
With two agents, the cut-and-choose procedure ensures that each of the two agents gets her MMS (when $b_1=b_2=1/2$). We next observe that for the case of two agents, even with arbitrary entitlements, there is always an allocation that gives each agent her AnyPrice share.
\begin{restatable}{observation}{obsTwo}
\label{obs:APS-two-agents}
For any setting with two agents with additive valuations $(v_1,v_2)$ and arbitrary entitlements $(b_1,b_2)$ there is an allocation $A=(A_1,A_2)$ that is an APS allocation.
That is, for every agent $i\in \{1,2\}$ it holds that $v_i(A_i)\geq \anypricei$.
\end{restatable}
Recall that we have observed that the problem of computing the AnyPrice share of an agent with an additive valuation is NP-hard, like the problem of computing the MMS.
Yet, we next show that it has a pseudo-polynomial time algorithm, while the MMS does not.\footnote{This implies the APS computation problem has a fully polynomial time approximation scheme (FPTAS), while the MMS does not. Note that the MMS problem does have a polynomial time approximation scheme (PTAS) (\citep{WOEGINGER1997}).}
That is, for any integer additive valuation $v_i$ and entitlement $b_i$, if the value of the APS of an agent is $z$
then the APS can be computed exactly in time polynomial in the input size and $z$ (note that $z$ is at most $v_i({\cal{M}})$), which is pseudo-polynomial\footnote{Note that a polynomial algorithm would run in time polynomial in $\log z$, not polynomial in $z$.}.
This is in contrast to the MMS which is computationally harder --
it is known to be strongly NP-hard when the number of agents is large (\citep{WOEGINGER1997}, can be proven by a reduction from $3$-PARTITION) and hence it does not have a pseudo-polynomial time algorithm, unless P=NP.
\begin{restatable}{proposition}{proppseudo}
\label{prop:computation-psedu-poly}
Consider the problem of computing, for any integer additive valuation $v_i$ and entitlement $b_i$ (where $b_i \leq 1$ is a rational number), the AnyPrice share $z=\anypricei$.
There is an algorithm that computes the APS in time polynomial in the input size (representation of the valuations and entitlement) and $z$.
\end{restatable}
\section{APS Approximation for Additive Agents: Arbitrary Entitlements}\label{sec:allocation-game}
In this section we present our main result regarding fair-share approximation for additive agents with arbitrary entitlements.
We start with extending the definition of Truncated Proportional Share (TPS) of \citet*{babaioff2021bestofbothworlds} to arbitrary entitlements, and show it is at least as high as the APS, {and thus can serve as a tool in proving results for APS}.
Then, we present our main result -- showing that for agents with additive valuations and arbitrary entitlements
there is a bidding game in which each agent can always secure herself a $\frac{3}{5}$ fraction of her AnyPrice share, and can also secure at least $\frac{1}{2-b_i}$ fraction of her Truncated Proportional Share (which is at least her APS).
{As the analysis of the strategies for these
bounds is rather involved, most details of the proofs are deferred to the appendix. To give the reader a sense of some ideas that we use, we present some weaker bounds that are easier to analyze, along with their analysis. In particular, we present a natural strategy that secures half the TPS, and a strategy that obtains some constant fraction, that is strictly larger than half, of the APS. }
\subsection{The Truncated Proportional Share (TPS)}
\label{sec:tps}
The \emph{Truncated Proportional Share (TPS)} was defined in \citep*{babaioff2021bestofbothworlds}
for additive agents with equal entitlements. In this section we extend their definition to arbitrary entitlements and show that for additive agents the TPS is always at least as large as the APS.
We then present a polynomial time algorithm that gives every agent a constant fraction of her TPS, and thus also that fraction of her APS share.
\citet{babaioff2021bestofbothworlds} have defined the TPS for equal entitlements. For an equal entitlements setting with $n$ agents and a set of items ${\cal{M}}$, the \emph{truncated proportional share} of agent $i$ with additive valuation function $v_i$ is the largest value $t$ such that $\frac{1}{n} \sum_{j\in {\cal{M}}} \min(v_i(j), t) = t$.
We extend this definition to the case of arbitrary entitlements in a natural way, by thinking of $\frac{1}{n}$ as the entitlement of the agent in the equal entitlement setting, and replacing it by the agent's entitlement $b_i$ in the arbitrary entitlement case:
\begin{definition}\label{def:TPS}
The \emph{Truncated Proportional Share (TPS)} of agent $i$ with an additive valuation $v_i$ over the set of items ${\cal{M}}$ and entitlement $b_i$, denoted by $\truncatedi$, is:
\begin{equation}
\truncatedi = \max\{~z \mid b_i \cdot \sum_{j\in {\cal{M}}} \min(v_i(j),z) = z ~\}. \label{eq:tps}
\end{equation}
When ${\cal{M}}$ and $v_i$ are clear from the context we denote this share by $TPS(b_i)$.
\end{definition}
{Observe that it is immediate from Definition~\ref{def:TPS} that the TPS is at most the proportional share, and that the TPS is exactly the proportional share of the instance after capping the value of each item at the TPS.
Alternatively, one can think of the following continuous process that reaches the TPS by capping the valuation: a cap on item values is gradually reduced
until no item has a capped value that is strictly greater than the proportional share of the current capped valuation.
}
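A minimal sketch (ours) of an exact computation of the TPS of an additive valuation: for each candidate number $k$ of capped items, solve $z = b_i\,(k z + \sum_{\text{uncapped } j} v_i(j))$ and keep the consistent solutions; the TPS is the largest such fixed point.
\begin{verbatim}
def tps(values, b):
    vals = sorted(values, reverse=True)
    m, candidates = len(vals), [0.0]
    for k in range(m + 1):              # cap the k largest items at z
        if b * k >= 1:
            break
        z = b * sum(vals[k:]) / (1 - b * k)
        ok_above = (k == 0) or (vals[k - 1] >= z)   # capped items are at least z
        ok_below = (k == m) or (vals[k] <= z)       # uncapped items are at most z
        if ok_above and ok_below:
            candidates.append(z)
    return max(candidates)
\end{verbatim}
For instance, \texttt{tps([2, 1, 1, 1, 0], 2/5)} returns $2$, which in that instance coincides with the proportional share, since no item value exceeds it.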
{We next show that the TPS is always at least as large as the APS. Thus, by obtaining a guarantee with respect to the TPS, we will also obtain the same guarantee with respect to the APS.}
\begin{proposition}\label{prop:TPS-APS}
For agent $i$ with an additive valuation $v_i$ over ${\cal{M}}$ and entitlement $b_i$, it holds that
$\truncatedi \geq \anypricei$.
\end{proposition}
\begin{proof}
Let $w=\truncatedi$.
We denote the set of items that have values strictly more than $w$ by $K$, and let $k=|K|$. If $k=0$ then $\truncatedi = \proportional{b_i}{v_i}{{\cal{M}}}$, and thus by Proposition~\ref{prop:shares-comp} the proposition holds.
Assume $k>0$.
By Definition~\ref{def:TPS}, it holds that $w=b_i \cdot \left(k\cdot w +\sum_{j\in {\cal{M}} \setminus K} v_i(j)\right) $.
{It also holds that $b_i \cdot k <1$.
To see this, first observe that if $b_i \cdot k > 1$ then $w = b_i \cdot \left(k\cdot w +\sum_{j\in {\cal{M}} \setminus K} v_i(j)\right) \geq b_i \cdot k\cdot w >w $, a contradiction.
Finally, consider the case that $b_i \cdot k =1$. As $w = b_i \cdot \left(k\cdot w +\sum_{j\in {\cal{M}} \setminus K} v_i(j)\right) $ we have $\sum_{j\in {\cal{M}} \setminus K} v_i(j)=0$.
Let $\ell=\min_{j\in K} v_i(j) $.
Observe that
$$ b_i \cdot \sum_{j \in {\cal{M}}} \min(v_i(j),\ell) = b_i \cdot \sum_{j \in K} \min(v_i(j),\ell) = b_i \cdot k \cdot \ell =\ell.$$
Thus $\ell$ satisfies Equation~\eqref{eq:tps} and hence the TPS is at least $\ell > w$, which is a contradiction. We thus conclude that $b_i \cdot k <1$.
}
If $v_i({\cal{M}} \setminus K)=0$ then, pricing every item in $K$ at $\frac{1}{k}$, the agent cannot afford any item she has positive value for (items outside $K$ have zero value for her, and since $b_i<\frac{1}{k}$ she cannot afford any item in $K$), and thus $\anypricei =0 \leq \truncatedi$.
Else, we have $v_i({\cal{M}} \setminus K)>0$. Assume towards contradiction that the value $z^*=\anypricei$ is strictly more than $w$.
Thus, there exists an $\epsilon>0$ such that
$$ w < \frac{b_i \cdot v_i({\cal{M}} \setminus K)}{1-k(b_i+\epsilon)} < z^*.$$
{Such an $\epsilon>0$ must exist since when $\epsilon \to 0 $ then this expression goes to $w$, while when $\epsilon \to \frac{1}{k}-b_i$, the expression goes to infinity, and it is continuous in $\epsilon$ for $\epsilon\in(0,\frac{1}{k}-b_i)$.}
Consider the pricing $b_i+\epsilon$ for every item in $K$, while for every item $j \in {\cal{M}} \setminus K$ we set the price $p_j=\frac{v_i(j)\cdot (1-k(b_i+\epsilon))}{v_i({\cal{M}} \setminus K)}$.
The agent cannot afford any item among $K$. Among the other items, her value is proportional to the budget spent and thus is at most $\frac{b_i}{1-k(b_i+\epsilon)} \cdot v_i({\cal{M}} \setminus K)$, which is strictly smaller than $z^*$, contradicting the definition of $z^*$ as her APS.
\end{proof}
In Appendix \ref{app:TPS} we discuss some other properties of the TPS. We show that the TPS might be much larger than the APS (Observation \ref{obs-TPS-mush-larger-APS}).
{We observe that while the APS is (weakly) NP-hard to compute, the TPS can be computed in polynomial time (Observation~\ref{obs:tps-computation}).} Yet, we only use TPS as a tool, and do not consider it as the most appropriate share for agents with arbitrary entitlements, as it has some significant drawbacks.
The main drawback of the TPS is that, similarly to the proportional share, it seems unattractive beyond additive valuations: the unit-demand valuations (with identical items) example in Section \ref{sec:unit-demand} is one in which the TPS is identical to the proportional share, and both seem too small (only $1/n$ instead of $1$).
We also show (Proposition \ref{prop:TPS-CE}) that a competitive equilibrium (CE) does not necessarily guarantee that every agent gets her TPS, unlike the case for APS (in which in every CE, every agent gets her APS).
{Finally, we note (see Observation~\ref{obs:TPS-app-UB}) that even for equal entitlements, it is not possible to give every agent more than a $\frac{n}{2n-1}$ fraction of her TPS.
This is in contrast to the APS for which our main result shows that it is possible to give every agent a $\frac{3}{5}$ fraction of her APS.}
\subsection{Main Result: Approximate APS Allocations}
We next turn to present and prove our main result:
for agents with additive valuations and arbitrary entitlements, it is always possible to give each agent at least a $\frac{3}{5}$ fraction of her AnyPrice share, and at least $\frac{1}{2-b_i}$ fraction of her Truncated Proportional Share (which is at least her APS).
This $\frac{1}{2-b_i}$ fraction is more than $\frac{3}{5}$ when the agent's entitlement is larger than $\frac{1}{3}$, and it
goes to $1$ as $b_i$ grows to $1$. Moreover, we show that agent $i$ also gets a value that is
at least the value of her $\lfloor 1/b_i\rfloor$
ranked item.
Note that this result implies Theorem \ref{thm:intro-unequal} (it is a stronger version of it).
{Our result follows from presenting a natural ``bidding game", and showing that each agent has a strategy to secure each of the guarantees mentioned above. }
Consider the following ``bidding game" among the agents:
Every agent $i$ starts with a budget of $b_i$.
As long as there are still items left, in each round $t$, every agent bids an amount $r_i^{(t)}$ between $0$ and $b_i^{(t)}$.
An agent with the highest bid (breaking ties arbitrarily) is the winner of the round, we denote that winning agent by $w^{(t)}$.
{
Within her remaining budget $b_{w^{(t)}}^{(t)}$,
agent $w^{(t)}$ selects available items she wants to take, paying $r_{w^{(t)}}^{(t)}$ for each item she takes.}
\begin{algorithm}
\caption{The Bidding Game }
\begin{algorithmic}[1]
\STATE Input: Set of items ${\cal{M}}$, entitlements $\vect{b}=(b_1,\ldots,b_n)$.
\\ We have the following notations for the beginning of round $t$:
\\ \quad\quad\quad\quad\quad
${\cal{M}}^{(t)}$ - the available items at the beginning of round $t$.
\\ \quad\quad\quad\quad\quad $s_i^{(t)}=v_i({\cal{M}}^{(t)})$ - the value of agent $i$ for ${\cal{M}}^{(t)}$, the items available.
\\ \quad\quad\quad\quad\quad $x_i^{(t)}$ -- the highest value $i$ assigns to any item in ${\cal{M}}^{(t)}$.
\\ \quad\quad\quad\quad\quad $y_i^{(t)}$ -- the second highest value $i$ assigns to any item in ${\cal{M}}^{(t)}$.
\\ \quad\quad\quad\quad\quad $b_i^{(t)}$ -- the budget available to $i$ at the beginning of round $t$.
\\ \quad\quad\quad\quad\quad $B^{(t)}=\sum_k b_k^{(t)}$ -- the total budget remaining at the beginning of round $t$.
\STATE Initialize: $t=1$, for every agent $i$, $b_i^{(1)}=b_i$, and ${\cal{M}}^{(1)} ={\cal{M}}$
\WHILE{${\cal{M}}^{(t)} \neq \emptyset $}
\STATE Every agent $i$ bids an amount $r_i^{(t)} \in [0,b_i^{(t)}]$
\STATE Let $w^{(t)} \in \arg\max r_i^{(t)}$ (breaking ties arbitrarily) be the winning agent of round $t$.
\STATE Agent $w^{(t)}$ selects a {non-empty set
$W^{(t)} \subseteq {\cal{M}}^{(t)}$
that she can afford: $|W^{(t)}|\cdot r_{w^{(t)}}^{(t)}\leq b_{w^{(t)}}^{(t)} $.}
\STATE{Update budgets:} ~\quad\quad\quad\quad\quad For every agent $i \neq w^{(t)}$ set $b_i^{(t+1)} = b_i^{(t)}$.
\STATE \quad \quad \quad\quad\quad\quad \quad \quad\quad\quad\quad\quad Set $b_{w^{(t)}}^{(t+1)} = b_{w^{(t)}}^{(t)} - |W^{(t)}| \cdot r_{w^{(t)}}^{(t)}$
\STATE{Remove allocated items: } ~\quad ${\cal{M}}^{(t+1)} = {\cal{M}}^{(t)}\setminus W^{(t)}$
\STATE $t\leftarrow t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
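The following Python sketch simulates the Bidding Game; the interface (each strategy returns a bid together with the set of items the agent would take upon winning) is our own choice, made for illustration only.
\begin{verbatim}
def bidding_game(values, entitlements, strategies):
    # values[i][e]: agent i's value for item e; entitlements sum to 1;
    # strategies[i](items, budgets, i) -> (bid, set of items taken if i wins)
    n, m = len(values), len(values[0])
    items, budgets = set(range(m)), list(entitlements)
    alloc = [set() for _ in range(n)]
    while items:
        bids, picks = zip(*(strategies[i](items, budgets, i) for i in range(n)))
        w = max(range(n), key=lambda i: bids[i])   # ties broken by lowest index
        take = picks[w]                            # a non-empty affordable set
        assert take and take <= items and len(take) * bids[w] <= budgets[w]
        budgets[w] -= len(take) * bids[w]          # pay the bid per item taken
        alloc[w] |= take
        items -= take
    return alloc
\end{verbatim}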
\begin{restatable}{theorem}{thmappAPS}
\label{thm:app-APS}
There exists an allocation mechanism
for settings with $n$ agents that have additive valuations and arbitrary entitlements, which has the following properties.
In that mechanism {(the Bidding Game)}
every agent $i$ has a strategy that regardless of the
strategies of the other agents, ensures she gets a bundle of value at least the largest of the following three values:
\begin{itemize}
\item$60\%$
of her AnyPrice share.
\item $\frac{1}{2-b_i}$ fraction of her Truncated Proportional Share.
\item the value of her $\lfloor 1/b_i\rfloor$
ranked item.
\end{itemize}
Moreover, that mechanism runs in polynomial time.
\end{restatable}
Before moving to the proof, we remark that for the case of equal entitlements ($b_i=1/n$ for every $i$), the approximation of $\frac{1}{2-b_i}$ for the TPS in the theorem is tight and cannot be improved.
Indeed, consider $n$ agents with equal entitlements and $2n-1$ identical items, each of value $\frac{1}{2n-1}$.
One of the $n$ agents must get at most one item, getting a value of only $\frac{1}{2n-1}$ while her TPS is $\frac{1}{n}$ (identical to her proportional share as no item has value larger than her {proportional} share). Thus, that agent gets only $\frac{n}{2n-1}$ fraction of her TPS, which equals a $\frac{1}{2-b_i}$ fraction of her TPS, as $b_i=1/n$. Note that $\frac{n}{2n-1}$ approaches $\frac{1}{2}$ when $n$ goes to infinity, and thus any constant approximation to the TPS that is larger than $50\%$ is impossible, while we are able to obtain $60\%$ of the weaker benchmark of APS.
The proof of the theorem is based on presenting three strategies, each guaranteeing one of the bounds promised by the theorem. By picking the one with the highest guarantee, the agent gets all three guarantees.
While the last bound (with respect to the ranked items) is simple, the other two bounds {are substantially more difficult}.
{As a warm-up, we will start by presenting a simple strategy that ensures that agent $i$ gets at least half of her TPS.}
We leave the proof that agent $i$ can secure the stronger bound $\frac{1}{2-b_i}$ fraction of her TPS to the appendix.
{The claim that there is a strategy for agent $i$ that guarantees $60\%$ of her APS appears in Lemma \ref{lem:35}.} As presenting the strategy and its analysis in full is {rather complex},
in the body of the paper we show how the most involved part can be substituted with a simpler step (presented in Lemma \ref{lem:safeStrategy}) that is enough to guarantee a smaller fraction of $8/15$ (while leaving the more {difficult}
proof of the stronger bound of $3/5$ to the appendix).
We note that this fraction of $8/15$ still illustrates an important qualitative point - that it is possible for an agent to guarantee herself some constant fraction that is strictly more than 50\% of her APS.
{To illustrate the use of the bidding game, and likewise, the usefulness of the TPS as an upper bound on the APS, we first present a relatively easy proof that there is an allocation that gives every agent at least half of her TPS (and hence of her APS).}
The \emph{bid-your-max-value strategy} is the strategy in which at each round agent $i$ bids the normalized value of her highest-value remaining item, unless that exceeds her remaining budget, in which case she bids her entire remaining budget. If she wins she picks that item.
Formally:
If at the beginning of round $t$, the remaining budget of $i$ is $b_i^{(t)}$, and the highest value remaining item has value $x_i^{(t)}$, then at round $t$ agent $i$ bids $\min\left(\frac{x_i^{(t)} }{v_i({\cal{M}})},b_i^{(t)}\right)$ and chooses the maximal value remaining item when winning.
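In the interface of the simulator sketched above, the bid-your-max-value strategy can be written as follows (a sketch, assuming an additive valuation given as a list of item values with $v_i({\cal{M}})>0$):
\begin{verbatim}
def make_bid_your_max_value(v_i):
    s_i = sum(v_i)                               # v_i(M), assumed positive
    def strategy(items, budgets, i):
        best = max(items, key=lambda j: v_i[j])  # highest-value remaining item
        bid = min(v_i[best] / s_i, budgets[i])   # normalized value, capped by budget
        return bid, {best}                       # take that single item if winning
    return strategy
\end{verbatim}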
{It is immediate from the definition of the strategy that if an agent $i$ is able to spend her entire budget, she will get her proportional share. The following lemma plays a key role in reasoning about those instances in which $i$ fails to spend her full budget.
It considers only the special case that no single item has a value larger than the proportional share, but other cases can be reduced to this special case, as will be seen in Corollary~\ref{cor:safeStrategy}. For this special case,
any time that agent $i$ bids her max value but does not win, the total budget of the other agents decreases by at least her max value. It follows that the total available budget of all agents decreases at a rate that is at least as fast as the rate of decrease of the total value of the remaining items. Hence either agent $i$ wins some items, or the other agents run out of budget before they manage to buy all the items that agent $i$ values.}
\begin{lemma}
\label{lem:safeStrategy}
The bid-your-max-value strategy of agent $i$ with additive valuation $v_i$ and entitlement $b_i$ provides the following guarantee, regardless of the strategies of the other agents.
If for some positive integer $k$
no subset of $k$ items has a value larger than the proportional share $b_i \cdot v_i({\cal{M}})$,
then the value agent $i$ gets is at least a {$\frac{k}{k+1}$-fraction of her proportional share (i.e., at least $\frac{k}{k+1}\cdot b_i \cdot v_i({\cal{M}})$).}
\end{lemma}
\begin{proof}
{We denote $v_i({\cal{M}})$ by $s_i$.} Recall that we consider the strategy in which at round $t$ agent $i$ bids $\min\Big\{\frac{x_i^{(t)} }{s_i},b_i^{(t)}\Big\}$, and selects the item of highest value if she wins. We shall say that agent $i$ bids her {\em max value} at round $t$ if $\frac{x_i^{(t)} }{s_i} \le b_i^{(t)}$, and that $i$ bids her budget if $\frac{x_i^{(t)} }{s_i} > b_i^{(t)}$.
Let $t^*$ be the latest round such that for every round up to and including $t^*$, agent $i$ bids her max value. That is, $\frac{x_i^{(t)} }{s_i } \leq b_i^{(t)}$ for every $t \le t^*$.
Let $u_i^{(t^*+1)}$ denote the total value accumulated
by agent $i$ up to the beginning of round $t^* + 1$.
Observe that $\frac{u_i^{(t^*+1)}}{s_i} = b_i - b_i^{(t^*+1)}$, {as up to (and including) round $t^*$ agent $i$ always bids exactly her max value}, and hence $u_i^{(t^*+1)} = s_i\cdot (b_i - b_i^{(t^*+1)})$. Thus the value accumulated by agent $i$ by the end of the bidding game is at least $\left(b_i - b_i^{(t^*+1)}\right)\cdot v_i({\cal{M}})$. Hence if $b_i^{(t^*+1)} \le \frac{b_i}{k+1}$, the Lemma is proved.
It remains to consider the case that $b_i^{(t^*+1)} > \frac{b_i}{k+1}> 0$. {We claim that in this case $s_i^{(t^* + 1)} > 0$. The claim follows from the following argument. In every round $t \le t^*$ agent $i$ bids her max value $\frac{x_i^{(t)}}{s_i}$. Hence in every round $t \le t^*$, regardless of which agent won the round, the payment per item consumed in round $t$ was at least $\frac{x_i^{(t)}}{s_i}$, whereas every item consumed in round $t$ had value at most $x_i^{(t)}$ (according to $v_i$). As the initial total budget is~1, and $B^{(t^*+1)} \ge b_i^{(t^*+1)} > 0$, the total budget consumed up to and including round $t^*$, namely, $1 - B^{(t^*+1)}$, satisfies $1 - B^{(t^*+1)} < 1$. Hence the total value (according to $v_i$) consumed is at most $(1 - B^{(t^*+1)})s_i < s_i$, implying that $s_i^{(t^* + 1)} > 0$, as claimed.}
By the maximality of $t^*$, in round $t^{*} + 1$ agent $i$ bids her budget. This implies that $b_i^{(t^*+1)} < \frac{x_i^{(t^*+1)}}{s_i}$.
Let $F_i$ denote the set of items that agent $i$ received up to round $t^*+1$.
If $|F_i|< k$ then by the assumption of the lemma, it holds that
$v_i(F_i) + x_i^{(t^*+1)} \leq b_i\cdot v_i({\cal{M}}) = b_i \cdot s_i$.
Thus, it holds that $\frac{x_i^{(t^*+1)}}{s_i}\leq b_i^{(t^*+1)} $, and agent $i$ can afford to bid her max value, a contradiction to the definition of $t^*$.
We are left to handle the case of $|F_i| \geq k$.
Since $b_i^{(t^*+1)} < \frac{x_i^{(t^*+1)}}{s_i} $ and since $b_i^{(t^*+1)} =b_i - \frac{v_i(F_i)}{s_i} $,
it holds that $v_i(F_i)+x_i^{(t^*+1)} > b_i \cdot s_i$.
Hence,
$$v_i(F_i) = \frac{|F_i| \cdot v_i(F_i) + v_i(F_i)}{|F_i|+1} \geq
|F_i|\cdot \frac{v_i(F_i) + x_i^{(t^*+1)}}{|F_i|+1} > \frac{k}{k+1} \cdot b_i \cdot s_i ,$$
where the first inequality holds since
every item in $F_i$ has a value of at least $x_i^{(t^*+1)}$ and thus $ v_i(F_i) \geq |F_i| \cdot x_i^{(t^*+1)}$. The second inequality holds since $|F_i| \geq k$ and $v_i(F_i) + x_i^{(t^*+1)} > b_i \cdot s_i$.
This concludes the proof.
\end{proof}
An immediate corollary of Lemma~\ref{lem:safeStrategy} (for $k=1$) is that the bid-your-max-value strategy gives the agent half her TPS.
\begin{corollary}
\label{cor:safeStrategy}
Agent $i$ with additive valuation $v_i$ and a budget of $b_i$ has a strategy in the bidding game
that guarantees herself a value of at least $\frac{1}{2} \cdot TPS(b_i )$, regardless of the strategies of the other agents. Hence for any additive valuations and arbitrary entitlements
there is an allocation that gives every agent at least half of her TPS.
\end{corollary}
\begin{proof}
Let $z=\truncated{b_i}{v_i}{{\cal{M}}}$.
Let $v_i'$ be the additive valuation where $v_i'(j) = \min( z,v_i(j)) $ for every item $j\in {\cal{M}}$.
By Definition~\ref{def:TPS} it holds that the proportional share with respect to $v_i'$ is $b_i \cdot v_i'({\cal{M}}) = z$.
Consider the bid-your-max-value strategy with respect to valuation $v_i'$. Since every item $j\in {\cal{M}}$ has a value of at most $z = b_i \cdot v_i'({\cal{M}})$, Lemma~\ref{lem:safeStrategy} shows that agent $i$ receives a bundle $A_i$ of value $v_i'(A_i) \geq \frac{1}{2} \cdot b_i \cdot v_i'({\cal{M}}) = \frac{z}{2}$.
Since for every item $j \in {\cal{M}}$ it holds that $v_i(j) \geq v_i'(j) $, then it holds that $v_i(A_i) \geq v_i'(A_i) \geq \frac{z}{2} $, and thus agent $i$ gets at least half of $\truncated{b_i}{v_i}{{\cal{M}}}$.
\end{proof}
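In code, the strategy of Corollary~\ref{cor:safeStrategy} is simply the previous strategy applied to the truncated valuation (a sketch that reuses the illustrative functions \texttt{tps} and \texttt{make\_bid\_your\_max\_value} defined above):
\begin{verbatim}
def make_half_tps_strategy(v_i, b_i):
    z = tps(v_i, b_i)                          # truncation level: the TPS
    v_trunc = [min(v, z) for v in v_i]         # cap every item value at z
    return make_bid_your_max_value(v_trunc)    # bid-your-max-value w.r.t. v'
\end{verbatim}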
While the analysis of the strategy that gives a $\frac{1}{2-b_i}$ fraction of the TPS is rather involved and deferred to the appendix, the strategy itself is not very different from the bid-your-max-value strategy presented above.
At each round, {bid as follows}. If there is one item that suffices by itself to obtain the goal, bid your whole budget. Else, if two items suffice, bid half your budget (and take both items if you win).
Otherwise, use the bid-your-max-value strategy.
We next turn to present the proof of Theorem \ref{thm:app-APS}.
\begin{proof}(\emph{of Theorem \ref{thm:app-APS}})
{We present three strategies for agent $i$ that plays in the bidding game,
each ensures one of the required guarantees.
Among the suggested strategies, depending on her valuation and entitlement, the agent can choose the strategy for which the corresponding guarantee has the highest value.
In Lemma~\ref{lem:1bi} we present a strategy that guarantees agent $i$ at least $\frac{1}{2-b_i}$-fraction of her TPS.
In Lemma~\ref{lem:35} we present a strategy that guarantees agent $i$ at least $60\%$ of her APS.
{In order to guarantee the value of her $\lfloor 1/b_i\rfloor$ ranked item, the agent can use the strategy of bidding $b_i$ in every round, and selecting the maximal-value available item if she wins. She is guaranteed to select one of her top $\lfloor 1/b_i\rfloor$ ranked items, since otherwise the other agents spent a budget of at least $\lfloor 1/b_i\rfloor \cdot b_i$,
which is strictly more than their total budget $1-b_i$.
}
One can readily see that all aspects of the strategies above can be implemented in polynomial time, except for one issue.
{While Observation \ref{obs:tps-computation} shows that the TPS can be computed efficiently,}
computing the APS is NP-hard, and hence potentially the agent cannot compute beforehand which of the three guarantees is highest (as she cannot compare $\frac{3}{5}\cdot APS$ with the other two guarantees). Moreover, to run the strategy of Lemma~\ref{lem:35}, it appears that the agent needs to know her own APS.
There are two ways of addressing this problem. The simple way is to settle for a pseudo-polynomial time algorithm, and use Proposition~\ref{prop:computation-psedu-poly}. However, there is a {better} solution that does give a truly polynomial time algorithm. The basic idea is to show that the strategy of Lemma~\ref{lem:35} does not just guarantee a value of $\frac{3}{5}\cdot APS$, but in fact a value of $\frac{3}{5}\cdot V$, where $V \ge APS$, and moreover, $V$ can be computed exactly in polynomial time. See more details in Appendix~\ref{app:V}.}
\end{proof}
Lemma~\ref{lem:35} shows that an agent can secure a $\frac{3}{5}$ fraction of her APS. Let us provide here a brief overview of the proof.
In Step~1 of the algorithm, if there is a single item of value at least $\frac{3}{5}\cdot APS$, the agent bids her whole budget, attempting to win the item. In step~2, if there are two items whose sum of values is at least $\frac{3}{5}\cdot APS$, she bids half her budget, attempting to win both items. A difficulty arises if there are items of value strictly less than $\frac{3}{5}\cdot APS$ but strictly more than $\frac{1}{2}\cdot APS$: the agent bids $\frac{b_i}{2}$ (attempting to win two items), but fails to win these items. In this case the budget of the remaining agents is consumed at a rate that is slower than the rate at which the total value of items decreases. This difficulty is handled in Step~3 of the algorithm. The difficulty is counter-balanced by a positive effect: each of the remaining items has relatively low value, and we have already seen in Lemma~\ref{lem:safeStrategy} that in such cases {the agent can secure herself strictly more than half her TPS (and hence strictly more than half her APS)}. We show that the positive effect is more significant than the negative effect. For this purpose, we introduce Lemma~\ref{lem:3/4}, which, for the APS, provides better bounds than Lemma~\ref{lem:safeStrategy} does, though it requires stronger assumptions and its proof is significantly more involved.
For those readers who prefer not to follow the more involved proof of Lemma~\ref{lem:3/4}, we remark that one can use Lemma~\ref{lem:safeStrategy} instead, without changing anything in the proof of Lemma~\ref{lem:35}. This gives a bound of $\frac{8}{15}\cdot APS$ {(which is already better than half the APS) } instead of $\frac{3}{5}\cdot APS$.
Specifically,
let ${\cal{M}}'$ be the set of remaining items at the beginning of step~3 of the strategy described in Lemma~\ref{lem:35}, and let $B'$ be the total remaining budget of all agents at this point.
By the definition of step~2 of the strategy described in Lemma~\ref{lem:35}
it holds that there are no two items worth together more than $\frac{3}{5} \cdot \anyprice{b_i}{v_i}{{\cal{M}}}$.
Moreover, it holds that the proportional share of agent $i$ with a normalized budget of $\frac{b_i}{B'}$ over a set of items ${\cal{M}}'$ satisfies:
\begin{eqnarray*}
\proportional{\frac{b_i}{B'}}{v_i}{{\cal{M}}'} & = & 2\cdot \proportional{\frac{b_i}{2\cdot B'}}{v_i}{{\cal{M}}'} \geq
2\cdot \anyprice{\frac{b_i}{2\cdot B'}}{v_i}{{\cal{M}}'} \\ & \stackrel{\eqref{eq:half-anyprice}}{\geq} & \frac{4}{5} \cdot \anyprice{b_i}{v_i}{{\cal{M}}}.
\end{eqnarray*}
The equality is by definition of proportionality. The first inequality (the proportional share is at least as large as the APS) is by Proposition~\ref{prop:shares-comp}.
The second inequality follows directly from Equation~\eqref{eq:half-anyprice},
{that states that $\anyprice{\frac{b_i}{2B'}}{v_i}{{\cal{M}}'} \geq\frac{2}{5}\cdot \anyprice{b_i}{v_i}{{\cal{M}}}$
whenever reaching step 3.}
Thus, we can apply at this point the strategy of Lemma~\ref{lem:safeStrategy} for $k=2$ and guarantee a value of at least $ \frac{2}{3}\cdot \proportional{\frac{b_i}{B'}}{v_i}{{\cal{M}}'} \geq \frac{8}{15} \cdot \anyprice{b_i}{v_i}{{\cal{M}}}.$
\begin{lemma}
\label{lem:35}
Agent $i$ with additive valuation $v_i$ over ${\cal{M}}$ and entitlement $b_i$ has a strategy that guarantees a value of at least $60\%$ of $\anypricei$, regardless of the strategies of the other agents.
\end{lemma}
\begin{proof}
Let $z=\anypricei$. By Definition~\ref{def:anyprice-bundles} there exists a list of bundles $\mathcal{S}=\{S_j\}_j$
and associated nonnegative weights $\{\lambda_j\}_j$ such that:
\begin{itemize}
\item $\sum_j \lambda_j = 1$.
\item For every item $e$ we have $\sum_{j | e\in S_j} \lambda_j \leq b_i$.
\item $v_i(S_j) \ge z$ for every {$S_j \in \mathcal{S}$}.
\end{itemize}
We propose the following strategy for the agent.
\begin{enumerate}
\item If $x_i^{(t)} \ge \frac{3z}{5}$, bid $b_i^{(t)}$. If the agent wins, she selects {an item with the highest value, among the available items.}
\item Else, if $x_i^{(t)} +y_i^{(t)} \ge \frac{3z}{5}$, bid $\frac{b_i}{2}$. If the agent wins any such round, she selects
{a pair of items with the highest total value, among the available items.}
\item Let $t'$ be the first round such that $x_i^{(t')} +y_i^{(t')} < \frac{3z}{5}$ (so the pre-conditions for both steps above do not hold).
Let the total remaining budget be $B' =\sum_j b_j^{(t')}$.
Consider a new instance in which the budget of every agent $j$ is $b_j=\frac{b_j^{(t')}}{B'}$.
Agent $i$ runs the strategy of Lemma~\ref{lem:3/4} on the instance defined by these budgets and the set of items ${\cal{M}}' = {\cal{M}}^{(t')}$.
\end{enumerate}
If the agent wins a bid in either step~1 or step~2 then we are done. Hence we may assume that the agent does not win any such bid.
We show that if the agent does not win any item in steps~1 and~2, then
\begin{equation}
\gamma\stackrel{\Delta}{=} \anyprice{\frac{b_i}{2B'}}{v_i}{{\cal{M}}'} \geq\frac{2}{5} \cdot \anyprice{b_i}{v_i}{{\cal{M}}}. \label{eq:half-anyprice}
\end{equation}
If the agent plays according to the strategy defined in Lemma~\ref{lem:3/4}, the agent guarantees a value of $\frac{3}{2}\cdot \gamma \geq \frac{3z}{5}$, as desired.
To complete the proof it remains to prove Equation~(\ref{eq:half-anyprice}).
To prove that it holds we
use the bundles $\mathcal{S}=\{S_j\}_j$ and associated nonnegative weights $\{\lambda_j\}_j$,
and
construct bundles $\mathcal{S'}=\{S'_j\}_j$ and weights $\{\beta_j\}_j$, as required by Definition~\ref{def:anyprice-bundles}, for entitlement $\frac{b_i}{2B'}$ and share that is at least $\frac{2z}{5}$.
Let $Q = Q_1 \cup Q_2$ denote the set of items consumed by the adversary in the first two steps ($Q_1$ for rounds of step~1, $Q_2$ for rounds of step~2).
For each $S_j \in \mathcal{S}$
\begin{itemize}
\item If $S_j \cap Q_1 \neq \emptyset$ or $|S_j \cap Q_2 |>1 $, then $S_j$ is discarded.
\item If $S_j \cap Q_1 = \emptyset$ and $|S_j \cap Q_2 |= 1 $, then $S_j \setminus Q_2$ is added to $\mathcal{S'}$ with weight $\frac{\lambda_j}{2B'}$.
\item If $S_j \cap Q_1 = \emptyset$ and $|S_j \cap Q_2 | = 0 $, then $S_j$ is split to $C_j,D_j$ as defined below, and both $C_j$ and $D_j$ are added to $\mathcal{S'}$, each with weight $\frac{\lambda_j}{2B'}$.
\end{itemize}
The way we split $S_j$ to $C_j,D_j$ is as follows.
We start by setting both $C_j,D_j=\emptyset$, and go over the items in $S_j$ in order of decreasing values.
As long as $v_i(C_j)<\frac{2z}{5}$ we add an item to $C_j$. When $v_i(C_j)\geq \frac{2z}{5}$, we add all remaining items to $D_j$.
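The split of $S_j$ into $C_j$ and $D_j$ is the following simple greedy procedure (sketched in Python for concreteness; item values are assumed to be listed in non-increasing order):
\begin{verbatim}
def split_bundle(bundle_values, z):
    # fill C greedily (in non-increasing order of value) until its value
    # reaches 2z/5; all remaining items go to D
    c, c_val = [], 0.0
    for k, v in enumerate(bundle_values):
        if c_val >= 2 * z / 5:
            return c, bundle_values[k:]
        c.append(v)
        c_val += v
    return c, []
\end{verbatim}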
In order to prove that $\gamma \geq \frac{2z}{5}$, we need to show that all sets in $\mathcal{S'}$ have a value of at least $\frac{2z}{5}$.
By definition $v_i(C_j) \geq \frac{2z}{5}$.
It holds that $v_i(C_j) \leq \frac{3z}{5}$ since the first two items in $S_j$ are worth together less than $\frac{3z}{5}$ (otherwise we are not finished with step~2). If their value is at least $\frac{2z}{5}$, then no more items will be added to $C_j$. Else, the second item has a value of at most $\frac{z}{5}$, and therefore any other item in $S_j$ will increase the value of $C_j$ by at most $\frac{z}{5}$.
Thus, $v_i(D_j) = v_i(S_j) - v_i(C_j) \geq z -\frac{3z}{5}=\frac{2z}{5}$.
The second type of sets that are added to $\mathcal{S'}$, are sets $S_j \setminus Q_2$ where $|S_j \cap Q_2| = 1$ (and $|S_j \cap Q_1| = 0$).
For these sets, notice that since items in $Q_2$ have a value of at most $\frac{3z}{5}$, then it holds that $v_i(S_j\setminus Q_2) \geq v_i(S_j) -\frac{3z}{5} \geq z-\frac{3z}{5}=\frac{2z}{5}$.
Observe that for every item $e \in {\cal{M}}'$, if it belongs to $S_j$, then the weight for the corresponding sets added to $\mathcal{S'}$ is at most $\frac{\lambda_j}{2B'}$. This is since we either discard this set, or $e$ is added to exactly one set (notice that $e$ cannot be in both $C_j$ and $D_j$).
Thus, the total weight of every element is at most $\frac{b_i}{2B'}$ as needed.
It remains to show that the sum of weights of the sets added to $\mathcal{S'}$ is at least 1.
Let $\alpha_1= \sum _{j\; | \; S_j \cap Q_1 \neq \emptyset}\lambda_j$ denote
the sum of weights of all sets that have items from $Q_1$. Let $\alpha_2= \sum _{j \; | \;
|S_j \cap Q_2| > 1 \wedge
|S_j \cap Q_1| =0} \lambda_j$ denote the sum of weights of all sets that have at least two items from $Q_2$ and no item from $Q_1$. Let $\alpha_3= \sum_{j \; | \;
|S_j \cap Q_2| =1 \wedge
|S_j \cap Q_1| =0}\lambda_j$ denote the sum of weights of all sets that have one item from $Q_2$ and no item from $Q_1$.
Observe that $\alpha_1 \leq b_i \cdot |Q_1|$, since every item is in at most $b_i$ weight of the sets.
Likewise, $2\cdot \alpha_2 +\alpha_3 \leq b_i \cdot |Q_2|$.
Let $\alpha_4$ denote the sum of weights of all sets that have no items from $Q_1 \cup Q_2$; it satisfies $\alpha_4 \geq 1-\alpha_1-\alpha_2-\alpha_3 $.
In addition, it holds that $B'=1-|Q_1| \cdot b_i- |Q_2| \cdot \frac{b_i}{2}$ since every item in $Q_1$ was paid $b_i$, and every item in $Q_2$ was paid $\frac{b_i}{2}$.
The sum of weights of $\mathcal{S'}$ is $$\frac{\alpha_3}{2B'} +2 \frac{\alpha_4}{2B'} \geq \frac{\alpha_3 + 2-2\alpha_1-2\alpha_2-2\alpha_3}{2B'} \geq \frac{ 2-2b_i\cdot |Q_1|-b_i\cdot |Q_2|}{2(1-|Q_1| \cdot b_i- |Q_2| \cdot \frac{b_i}{2})} = 1, $$
which proves inequality~(\ref{eq:half-anyprice}) and concludes the proof.
\end{proof}
The next lemma (whose proof is deferred to Appendix~\ref{app:allocation-game})
states that there is a strategy for agent $i$ that guarantees herself a value of $\frac{3}{2}\cdot \anyprice{\frac{b_i}{2}}{v_i}{{\cal{M}}}$. This Lemma (with a suitable assignment to the
parameters $b_i$ and ${\cal{M}}$) is used in step~3 of Lemma~\ref{lem:35}.
\begin{restatable}{lemma}{lemThreeFour}
\label{lem:3/4}
Agent $i$ with additive valuation $v_i$ and entitlement $b_i$
has a strategy that guarantees herself a value of at least $\frac{3}{2}\cdot \anyprice{\frac{b_i}{2}}{v_i}{{\cal{M}}}$ in the bidding game
that allocates ${\cal{M}}$, regardless of the strategies of the other agents.
\end{restatable}
\section{APS Approximation for Additive Agents: Equal Entitlements}
\label{sec:greedy-efx}
In this section we consider approximating the APS in the special case of equal entitlements.
For only two agents with equal entitlements, there is always an allocation giving both agents their APS, which equals their MMS (Observation \ref{obs:APS-two-equal}).
We saw (Proposition \ref{prop:shares-comp}) that beyond two agents, even for equal entitlements, the APS is different from the shares defined in prior work (like the MMS).
As the APS is at least as large as the MMS (Proposition \ref{prop:shares-comp}), and as \citet{KurokawaPW18} showed that for three agents with equal entitlements, simultaneously giving every agent her MMS is not possible, we cannot hope to give every agent her APS exactly, and thus look for an approximation.
While for arbitrary entitlements we saw that each agent can guarantee herself at least a $\frac{3}{5}$ fraction of the APS (Theorem \ref{thm:app-APS}), in the equal entitlements case we
show that the \textit{greedy-EFX} algorithm proposed by \citet{BK20} gives a stronger guarantee.
In this section we prove the following theorem.
\begin{theorem}
\label{thm:BK17}
When $n$ agents have additive valuations and equal entitlements, there is an allocation that gives every agent at least the minimum of the following two values: a $\frac{3}{4}$ fraction of her APS, and a $\frac{2n}{3n-1}$ fraction of her TPS.
Moreover, such an allocation can be found in polynomial time.
\end{theorem}
Recall that for $n \le 2$ there is an allocation that gives every agent at least her AnyPrice share (see Observation~\ref{obs:APS-two-agents}). Hence there is interest in Theorem~\ref{thm:BK17} only when $n \ge 3$.
Note that Theorem \ref{thm:BK17} ensures that each agent always gets at least $\frac{2}{3}$ of her APS (as the TPS is always at least the APS).
In proving Theorem~\ref{thm:BK17}, we shall follow an approach of~\citet{BK20} who proved the existence of allocations that give every agent at least a $\frac{2n}{3n-1}$ fraction of her MMS. The algorithm is identical to that of~\citet{BK20}, but the proof of the key lemma (Lemma~\ref{lem:BK17}) is different, as we need to establish stronger guarantees than those established in~\citep{BK20}.
The first step of the proof of Theorem~\ref{thm:BK17} is a reduction to {\em ordered} instances. Given an instance of an allocation problem with additive valuations and a fixed order $\sigma$ among the items (from~1 to $m$), its {\em ordered version} is obtained by each agent permuting the values of items so that values are non-increasing in $\sigma$. The following theorem is due to~\citep{bouveret2016characterizing}.
\begin{restatable}{theorem}{thmordered} (\citep{bouveret2016characterizing})
\label{thm:BL14}
For every instance with additive valuations, every allocation for its ordered version can be transformed in polynomial time to an allocation for the original instance. Using this transformation, every agent derives at least as high value in the original instance as derived by the allocation for the ordered instance.
\end{restatable}
For completeness, we sketch the proof of Theorem~\ref{thm:BL14} in Appendix~\ref{app:greedy-efx}.
Given Theorem~\ref{thm:BL14}, it suffices to prove Theorem~\ref{thm:BK17} in the special case in which the input instance is ordered (and $n \ge 3$). For such instances
we use an algorithm proposed by~\citep{LiptonMMS04,BK20}, that we refer to as {\em Greedy-EFX}. Greedy-EFX assumes that the instance is ordered, with item $e_1$ of highest value and item $e_m$ of lowest value.
Greedy-EFX works as follows (see a formal definition in Appendix \ref{app:greedy-efx}).
Each bundle is associated with one agent. Greedy-EFX proceeds in rounds. In rounds~1 to $n$ the items $e_1$ up to $e_n$ are placed in bundles $B_1$ up to $B_n$ respectively. At a round $r > n$, if there is an agent whose bundle no one envies, then $e_r$ is added to her bundle. Otherwise, there must be {an envy cycle (a cycle in which each agent envies the next agent in the cycle).
An envy cycle can be resolved by rotating the sets along the cycle, each agent getting the set of the agent she envies.
The cycles are iteratively resolved until there are no more envy cycles. At this point, there must be an agent whose bundle no one envies, and then $e_r$ is added to her bundle.}
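For concreteness, here is a Python sketch (ours) of Greedy-EFX with envy-cycle elimination; \texttt{values[i][e]} is agent $i$'s value for item $e$, and \texttt{items\_order} lists the items in non-increasing order of value (an ordered instance).
\begin{verbatim}
def greedy_efx(values, items_order):
    n = len(values)
    bundles = [set() for _ in range(n)]

    def val(i, bundle):
        return sum(values[i][e] for e in bundle)

    def unenvied_agent():
        # an agent whose bundle no agent strictly prefers to her own
        for i in range(n):
            if all(val(j, bundles[j]) >= val(j, bundles[i]) for j in range(n)):
                return i
        return None

    def envy_cycle():
        # every bundle is envied, so walking backwards along "is envied by"
        # edges must revisit an agent; return the cycle oriented so that
        # each agent envies the next one
        walk = [0]
        while True:
            j = walk[-1]
            pred = next(i for i in range(n)
                        if val(i, bundles[j]) > val(i, bundles[i]))
            if pred in walk:
                return list(reversed(walk[walk.index(pred):]))
            walk.append(pred)

    for e in items_order:
        while (i := unenvied_agent()) is None:
            cyc = envy_cycle()
            # each agent in the cycle gets the bundle of the agent she envies
            rotated = [bundles[cyc[(k + 1) % len(cyc)]] for k in range(len(cyc))]
            for k, j in enumerate(cyc):
                bundles[j] = rotated[k]
        bundles[i].add(e)
    return bundles
\end{verbatim}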
Clearly, the greedy-EFX algorithm runs in polynomial time. Hence in order to prove Theorem~\ref{thm:BK17}, it suffices to prove the following lemma.
\begin{restatable}{lemma}{thmgreedyEFX}
\label{lem:BK17}
When $n$ agents have additive valuations and equal entitlements, the greedy-EFX allocation gives every agent at least the minimum of the following two values: a $\frac{3}{4}$ fraction of her APS, or a $\frac{2n}{3n-1}$ fraction of her TPS.
\end{restatable}
The proof of Lemma~\ref{lem:BK17} is based on the following principles. Consider an arbitrary agent $i$. We may assume that {under Greedy-EFX agent} $i$ receives at least one item of positive value, as otherwise her APS is~0 {(as there are less than $n$ items she values).} Without loss of generality, assume that the final value received by $i$ is~1. The main part of the proof focuses on one particular item, $e_k$, which is the item of smallest value among those that serve as a first item to enter a bundle that eventually has more than two items. If $v_i(e_k) \le \frac{3}{4}$, a fairly simple argument shows that the TPS is at most $\frac{3n-1}{2n}$, proving that $i$ received at least a $\frac{2n}{3n-1}$ fraction of her TPS. If $v_i(e_k) > \frac{3}{4}$ then we show that the APS is less than $\frac{4}{3}$, and hence that $i$ gets more than a $\frac{3}{4}$ fraction of the APS. To upper bound the APS, we exhibit prices that certify this upper bound. In fact, the prices that we exhibit depend on the exact value of $e_k$, with one set of prices if $\frac{3}{4} < v_i(e_k) \le \frac{5}{6}$, and a different set of prices when $v_i(e_k) > \frac{5}{6}$. The full proof of Lemma~\ref{lem:BK17} appears in Appendix~\ref{app:greedy-efx}.
{An immediate corollary of Lemma~\ref{lem:BK17} is that for the case of equal entitlements (or for an entitlement which is an inverse of an integer), the MMS is at least $\frac{3}{4}$ of the APS.
\begin{corollary}
For every additive valuation $v_i$ and an entitlement $b_i =\frac{1}{n}$ for some integer $n$, it holds that $$\MMSfull{n} \geq \frac{3}{4} \cdot \anypricei $$
\end{corollary}
\begin{proof}
Let $S$ be the allocation returned by greedy-EFX when all agents have the same valuation $v_i$.
By Lemma~\ref{lem:BK17} every agent receives a value of at least $\frac{3}{4}$ of the APS.
By the definition of the MMS, in any allocation into $n$ bundles the least satisfied agent receives a value of at most the MMS.
This concludes the proof.
\end{proof}
}
We next show that the analysis of the greedy-EFX Algorithm
is essentially tight. As the upper bound claimed in the following proposition applies also to the MMS, the proposition may have been previously known. However, we are not aware of an explicit reference to this result, and hence we state it and provide its proof in the appendix.
\begin{restatable}{proposition}{EFXupper}
\label{pro:EFXupper}
For every $\epsilon> 0$, there is an instance {with additive valuations and equal entitlements} in which the greedy-EFX algorithm
does not give some agent more than a $\frac{2}{3}+\epsilon$ fraction of her MMS (and thus also not more than a $\frac{2}{3}+\epsilon$ fraction of her APS).
\end{restatable}
In the appendix we also show that if all agents have identical additive valuation functions (and equal entitlements), then greedy-EFX gives every agent at least a $\frac{3}{4}$ fraction of her APS. This ratio is the best possible guarantee for greedy-EFX, even with respect to the MMS, as we show that for every $\epsilon > 0$ there are such instances in which greedy-EFX does not give some agent more than a $\frac{3}{4} + \epsilon$ fraction of her MMS.
\section{Discussion}\label{sec:discussion}
In this paper we have studied fair allocation for agents with arbitrary (not necessarily equal) entitlements. We have introduced the concept of the AnyPrice share (APS) and have presented a positive constant fair-share approximation result for agents with arbitrary entitlements, showing that for agents with additive valuation functions there always is an allocation that gives each agent a $\frac{3}{5}$ fraction of her APS. Following the statement of Theorem~\ref{thm:intro-unequal}, we claimed that (as far as we know), this is the first positive result that guarantees a constant fraction of {\em any} share to agents with arbitrary entitlements. We clarify here what we mean by the term ``share".
\subsection{The Notion of a {\em Share}}
Intuitively, a share of an agent with some valuation and relative entitlement represents what value she can reasonably expect to receive in any setting in which the items are partitioned, without any further knowledge (such as the valuations of others, or the way entitlements are distributed among the other agents).
Consider an allocation setting with a set ${\cal{M}}$ of items, in which every valuation function $v_i$ is normalized ($v_i(\emptyset)=0$) and monotone ($v_i(S) \le v_i(T)$ for every $S \subseteq T \subseteq {\cal{M}}$).
We think of a share as a function $f$ that maps the valuation function $v_i$ of an agent and her normalized entitlement $b_i$ (where $b_i\geq 0$ and
$\sum_i b_i=1$) to a value $f(v_i,b_i)$, where this function $f$ needs to satisfy some natural properties, such as the following:
\begin{itemize}
\item {\em Normalization:} $f(v_i,b_i)\ge 0$, with equality if (though not necessarily only if) either $v_i$ is identically~0, or $b_i = 0$.
\item {\em Boundedness:} for every $v_i$ and $b_i$ it holds that $f(v_i,b_i) \le v_i({\cal{M}})$, with equality if (though not necessarily only if) $b_i = 1$.
\item {\em Entitlement monotonicity:} for every $v_i$ and $b'_i > b_i$ it holds that $f(v_i,b'_i) \ge f(v_i,b_i)$.
\item {\em Valuation monotonicity:} for every $v'_i \ge v_i$ (meaning that $v'_i(S) \ge v_i(S)$ for every $S \subseteq {\cal{M}}$) and every $b_i$ it holds that $f(v'_i,b_i) \ge f(v_i,b_i)$.
\item {\em Value Scaling:} if the valuation function is scaled by a multiplicative factor of $c$, then the share is scaled by the same multiplicative factor. (Note that we do not require scaling with respect to the entitlement.)
\end{itemize}
Of course, for a notion of a share to be useful, we want $f$ to have additional properties beyond those listed above. It should strike a good balance between being as large as possible, yet still allowing for allocations that give every agent some substantial fraction of her share. We shall not attempt to rigorously define what makes a notion of a share useful.
When one defines a share as a function satisfying all the above properties, then
the proportional share, the pessimistic share and our AnyPrice share (APS) are indeed shares. Likewise, the maximin share (MMS) is a share, if one restricts the entitlements $b_i$ to be of the form $b_i = \frac{1}{k_i}$ for an integer $k_i > 1$. The weighted maximin share (WMMS) {of \citet{farhadi2019fair}} is not a share under our definition, as its value depends not only on $v_i$ and $b_i$, but also on how the remaining $1 - b_i$ entitlement is distributed among the remaining agents. However, we may relax our definition of a share so that it also includes WMMS (this requires a certain adaptation of the entitlement monotonicity property). Among all the above notions of shares, indeed Theorem~\ref{thm:intro-unequal} is the first positive result that guarantees a constant fraction of a share to agents with arbitrary entitlements. This guarantee is given with respect to the AnyPrice share, and hence it holds also with respect to the pessimistic share (which is never larger than the AnyPrice share).
To further clarify our use of the term share, let us briefly discuss Prop1, which is a ``close relative'' of the proportional share. An allocation is Prop1 if every agent gets at least her proportional share, up to one item. Prop1 is a property of a solution rather than a mapping of $v_i$ and $b_i$ to a value, and hence it is not a notion of a share. However, one can try to define a share based on Prop1. A natural definition would be a value equal to the proportional share, minus the value of the most valuable item (namely, $b_i v_i({\cal{M}}) - \max_{e\in {\cal{M}}} v_i(e)$), as this is a lower bound on the value that the agent gets in any Prop1 allocation.\footnote{An alternative definition of a value based on Prop1 would be as the smallest value of a bundle, among all bundles for which adding one item not in the bundle makes the value of the bundle larger than the proportional share. The distinction between these two definitions is not important for our discussion.}
However, this definition does not satisfy valuation monotonicity (by increasing the value of the most valuable item, the ``share'' value decreases rather than increases). Hence we view neither Prop1 nor its derivatives as a notion of a share.
\subsection{Interpretation of the AnyPrice Share as an Analog of {\em Cut and Choose}}
The MMS can be thought of as being motivated by the {\em cut-and-choose} paradigm, where the agent plays the role of the ``cutter". The value of the MMS for an agent is the maximum value she can secure to herself if she were to partition (``cut") the grand bundle into $n$ bundles, and allow every other agent to choose a bundle before she does. The APS can also be thought of as being motivated by the cut-and-choose paradigm, as we explain below.
Recall that we provided two equivalent definitions for the {APS}. Definition~\ref{def:anyprice-bundles} (that we referred to as the dual definition) can be thought of as a natural variation on the definition of MMS. Instead of insisting on an integral partition into bundles, one allows a fractional partition, with the advantage that fractional partitions can be easily extended to situations in which agents do not have equal entitlements. In this respect, the APS is motivated by MMS, and hence, indirectly by the cut-and-choose paradigm. However, even in the case of equal entitlements, the APS is not the same as the MMS, and so some of the cut and choose motivation (with the agent as the cutter) may be lost in this adaptation.
Consider now the price based definition of APS, Definition~\ref{def:anyprice-prices}. This definition can be viewed as being motivated by the cut-and-choose paradigm, but with the agent playing the role of the chooser, not the cutter.
Recall that the APS depends only on the valuation function of the agent and her entitlement, but not on the valuation functions and entitlements of other agents. Hence pretend that there are no other agents. Instead, there is just a single current owner of all items, and our agent has a $b_i$ entitlement to the items. Our agent comes to claim her fair share of items from this owner, whereas the owner is interested in keeping the items for himself. Which items should our agent be allowed to take? Here we may use a price-and-choose paradigm, in the spirit of the cut-and-choose paradigm. As the situation is not symmetric (there is a current owner of the items, and our agent has claims on the items), the current owner is the one who is allowed to price the items in any way he wishes, provided that the prices are non-negative and sum up to~1. Now our agent with a budget of $b_i$ plays the role of the chooser, and is allowed to choose any bundle that she can afford. Her APS is the maximum value that she can guarantee herself as a chooser in such price-and-choose games.
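As a small illustration of the chooser's side of this game (our own sketch, for an additive valuation and by brute force over bundles; all names are ours): given prices and a budget, the agent simply picks a most valuable bundle she can afford, and her APS is the value she can guarantee against the worst such price vector.
\begin{verbatim}
# Illustration only: the chooser's best response in the price-and-choose game.
from itertools import chain, combinations

def best_affordable_bundle(values, prices, budget):
    """Most valuable bundle of total price at most `budget` (additive values)."""
    items = range(len(values))
    all_bundles = chain.from_iterable(
        combinations(items, r) for r in range(len(values) + 1))
    affordable = [T for T in all_bundles if sum(prices[j] for j in T) <= budget]
    return max(affordable, key=lambda T: sum(values[j] for j in T))

# Values (0.5, 0.5), prices (0.6, 0.4), budget 0.5: the agent buys item 1.
print(best_affordable_bundle([0.5, 0.5], [0.6, 0.4], 0.5))
\end{verbatim}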
Consider an agent $i$ with a valuation function $v_i$ over the set ${\cal{M}}$ of items {that is normalized ($v_i(\emptyset) = 0$)}.
An item $j$ is a {\em good} with respect to $v_i$ if $v_i(S \cup \{j\}) \ge v_i(S)$ for every set $S \subseteq {\cal{M}} \setminus \{j\}$.
An item $j$ is a {\em bad} (also referred to as a {\em chore}) with respect to $v_i$ if $v_i(S \cup \{j\}) \le v_i(S)$ for every set $S \subseteq {\cal{M}} \setminus \{j\}$. If neither all items are required to be goods nor all items are required to be bads (with respect to $v_i$), we refer to ${\cal{M}}$ as being a {\em mixed manna} for agent $i$ with valuation function $v_i$.
Clearly, settings with only goods or only bads are special cases of mixed manna.
Recall that for goods we defined valuation functions to be set functions that are normalized, namely, $v_i(\emptyset) = 0$, and monotone non-decreasing. Once chores are involved, the requirement that valuation functions are non-decreasing is removed. The value of an item or of a (non-empty) bundle might be negative.
Agent $i$ has entitlement $b_i$, where $0 < b_i \le 1$. Intuitively, when all items are {\em goods}, the entitlement represents what fraction of the grand bundle of goods the agent is entitled to receive.
When items are {\em chores}, the entitlement of the agent represents what fraction of the chores {should be assigned to} the agent. {As items are indivisible, some deviations from these exact fractions might be necessary.}
We now extend the definition of the AnyPrice share to the setting of mixed manna. Recall that two definitions were presented for the APS in case of goods. We first present Definition~\ref{def:APSmixed} which is the natural extension of Definition~\ref{def:anyprice-bundles}, replacing the inequality in the last constraint by an equality. {That is, instead of requiring that for every item $j\in {\cal{M}}$ it holds that $\sum_{T: j\in T} \lambda_T \geq b_i$, we now require that
$\sum_{T: j\in T} \lambda_T = b_i$.}
\begin{definition}
\label{def:APSmixed}
Consider a setting in which agent $i$ with valuation $v_i$ has entitlement $b_i$ to a set of indivisible items ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of $i$, denoted by $\anypricei$,
is the maximum value $z$ she can get
by coming up with non-negative
weights $\{\lambda_T\}_{T\subseteq {\cal{M}}}$ that total to $1$ (a distribution over sets),
such that any set $T$ of value below $z$ has a weight of zero, and
every item appears in sets of total weight exactly $b_i$:
$$ \anypricei = \max_{z} z $$ subject to the following set of constraints being feasible for $z$:
\begin{itemize}
\item $\sum_{T\subseteq {\cal{M}}} \lambda_T=1$
\item $\lambda_T\geq 0$ for every bundle ${T\subseteq {\cal{M}}}$
\item $\lambda_T= 0$ for every bundle ${T\subseteq {\cal{M}}}$ s.t. $v_i(T)<z$
\item $\sum_{T: j\in T} \lambda_T = b_i$ for every item $j\in {\cal{M}}$
\end{itemize}
\end{definition}
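For concreteness, the following Python snippet (an illustration of ours, not part of the formal development; it assumes an additive valuation, a small number of items, and the availability of \texttt{scipy.optimize.linprog}) computes the APS of Definition~\ref{def:APSmixed} by brute force: for each candidate value $z$ (the value of some bundle) it checks feasibility of the constraints above, and returns the largest feasible $z$.
\begin{verbatim}
# Illustration only: APS of Definition def:APSmixed for a small additive instance.
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

def aps_additive(values, b_i):
    m = len(values)
    all_T = list(chain.from_iterable(combinations(range(m), r) for r in range(m + 1)))
    value = lambda T: sum(values[j] for j in T)
    best = None
    for z in sorted({value(T) for T in all_T}):        # candidate APS values
        support = [T for T in all_T if value(T) >= z]  # bundles allowed positive weight
        A_eq = np.zeros((1 + m, len(support)))
        A_eq[0, :] = 1.0                               # weights sum to 1
        for col, T in enumerate(support):
            for j in T:
                A_eq[1 + j, col] = 1.0                 # item j covered with weight b_i
        b_eq = np.array([1.0] + [b_i] * m)
        res = linprog(c=np.zeros(len(support)), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * len(support), method="highs")
        if res.status == 0:                            # constraints feasible for this z
            best = z
    return best

# Two items of value 1/2 each and entitlement 1/2: the APS is 1/2.
print(aps_additive([0.5, 0.5], 0.5))
\end{verbatim}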
Recall that for the case of goods the APS is at least as large as the MMS. This property also holds for mixed manna (and thus for chores).
This is because Definition~\ref{def:APSmixed} defines the APS as a solution to a maximization problem that is a relaxed (fractional) version of the (integral) maximization problem defining the MMS.
For mixed manna, the APS may be either positive or negative (or zero). We present some observations that are useful in determining its sign and some of its properties.
\begin{proposition}
\label{pro:positiveManna}
{Let ${\cal{M}}$ be mixed manna for agent $i$ with valuation function $v_i$.} If $v_i({\cal{M}}) \ge 0$ then $\anypricei \ge 0$.
\end{proposition}
\begin{proof}
{The LP of Definition~\ref{def:APSmixed} is feasible for $z=0$. This can be seen by} taking bundle ${\cal{M}}$ with coefficient $b_i$, and the empty bundle $\emptyset$ with coefficient $1 - b_i$.
\end{proof}
Fixing a valuation function $v_i$, if there are only goods, then the APS is a non-decreasing function of the entitlement. Likewise, if there are only chores, then the APS is a non-increasing function of the entitlement. However, for the case of mixed manna, the APS need not be a monotone function of the entitlement. Consider for example an instance (that is non-additive) with ${\cal{M}} = \{a,b\}$, where $v_i(a), v_i(b) > 0$, and $v_i({\cal{M}}) < 0$. Then if $b_i < \frac{1}{2}$ the APS is zero (the LP in Definition~\ref{def:APSmixed} can be supported on the three bundles $\{a\}, \{b\}, \emptyset$), if $b_i = \frac{1}{2}$ the APS is positive (the support is $\{a\}, \{b\}$), and if $b_i > \frac{1}{2}$ the APS is negative (the bundle ${\cal{M}}$ must be in the support of the LP solution).
While for mixed manna the APS is not necessarily monotone in the entitlement, we next show that it is always a quasiconcave function of the entitlement.
\begin{remark}
\label{rem:quasiconcave}
A proof similar to that of Proposition~\ref{pro:positiveManna} shows that for a fixed valuation function $v_i$, the APS is a {\em quasiconcave} function of the entitlement $b_i$. Namely, for every $0 \le b_1 < b_2 < b_3 \le 1$ it holds that $\anyprice{b_2}{v_i}{{\cal{M}}} \ge \min\left[\anyprice{b_1}{v_i}{{\cal{M}}},\anyprice{b_3}{v_i}{{\cal{M}}}\right]$. This follows by considering optimal solutions, $\{\lambda_T^1\}_T$ for $b_1$ and $\{\lambda_T^3\}_T$ for $b_3$, in Definition~\ref{def:APSmixed}. Then $\left\{\frac{(b_3-b_2) \cdot \lambda_T^1+(b_2-b_1) \cdot\lambda_T^3}{b_3-b_1}\right\}_T$ is a feasible solution for $b_2$ (the weights sum to $1$, and every item is covered with total weight exactly $b_2$), and the bundle of minimum value in its support is in the support of at least one of the two optimal solutions (for $b_1$ and $b_3$).
\end{remark}
Quasiconcavity implies that for every valuation function $v_i$ there is a threshold entitlement $\tau$, such that $\anypricevali{b_i}$ is a weakly increasing function of $b_i$ for $b_i\in [0,\tau]$, and weakly decreasing for $b_i \in [\tau,1]$. In particular, if the APS is negative for entitlement $b_i$, it remains negative for every entitlement $b_i^+ > b_i$.
\begin{proposition}
\label{pro:worstItem}
{For any {normalized valuation $v_i$ and any} entitlement $b_i$,} if there are only chores, then $\anypricei \le \min_{j\in {\cal{M}}} v_i(j)$.
\end{proposition}
\begin{proof}
Let $z = \anypricei$. Then the LP of Definition~\ref{def:APSmixed} is feasible for this value of $z$. Let $j$ be the item of smallest (most negative) value under $v_i$. Since $\sum_{T: j\in T} \lambda_T = b_i > 0$, in the feasible solution of the LP there must be some bundle $T$ with $\lambda_T > 0$ that contains $j$. Hence $v_i(T) \ge z$. As there are only chores, $v_i(T) \le v_i(j)$, and so $z \le v_i(j)$.
\end{proof}
\begin{proposition}
\label{pro:APSversusProp}
{For any entitlement $b_i$,} if $v_i$ is an additive valuation (allowing mixed manna), then the AnyPrice share cannot be larger than the proportional share: $\anypricei \le b_i \cdot v_i({\cal{M}})$.
\end{proposition}
\begin{proof}
Let $z$ be a value for which the LP of Definition~\ref{def:APSmixed} is feasible. Every bundle in the support of the underlying distribution has value at least $z$, so $z \le \sum_{T} \lambda_T \cdot v_i(T)$. As $v_i$ is additive and every item appears in sets of total weight exactly $b_i$, this expectation equals $\sum_{j\in {\cal{M}}} v_i(j) \cdot b_i = b_i \cdot v_i({\cal{M}})$, which is the proportional share.
\end{proof}
If all items are chores (even without assuming additivity), then the value of every non-empty bundle is negative.
It is convenient to view the absolute values of these negative values as positive {\em disutilities}. Thus we replace the non-positive valuation function $v_i$ by the non-negative disutility\ function $c_i$, where for every $S \subseteq {\cal{M}}$ we have $c_i(S) =-v_i(S)$. With respect to disutilities, the APS is positive, and the agents wish to receive bundles of small disutility. There are allocation instances in which every allocation gives some agent a bundle of disutility\ larger than her APS. (This follows from a similar result concerning MMS for chores and agents with equal entitlements~\citep*{ARSW17}, because with equal entitlements, the APS disutility\ is never larger than the MMS disutility.) Consequently, we wish to find allocations that give every agent a bundle of disutility\ no more than $\rho$ times her APS, with $\rho$ being as small as possible. {For $\rho = 2$, this can be achieved for additive valuations, using standard approaches.}
\begin{proposition}
\label{pro:BoBWchores}
Consider an allocation instance with $n$ agents with valuations $(v_1,v_2,\ldots,v_n)$ and
arbitrary entitlements $(b_1,b_2,\ldots,b_n)$ for a set ${\cal{M}}$ of indivisible chores. If all disutility\ functions are additive, then there is an allocation in which every agent $i$ gets a bundle of disutility\ at most $2\cdot \anypricei$.
\end{proposition}
\begin{proof}
Consider a fractional allocation in which every agent $i$ (with entitlement $b_i$) gets a fraction $b_i$ of every item $j$. This fractional allocation gives every agent her proportional share. It is well known that every fractional allocation can be rounded to give an integral allocation in which every agent gets a bundle whose disutility\ is the same as the disutility\ of her original fractional bundle, up to the disutility\ of one item~\citep*{LST90}. As her fractional disutility\ is at most her APS disutility (Proposition~\ref{pro:APSversusProp}, after transforming values to disutilities) and every item has disutility\ at most the APS disutility (Proposition~\ref{pro:worstItem}, after transforming values to disutilities), the proposition follows.
\end{proof}
For completeness, we present the price based definition of the APS for chores (in analogy to Definition~\ref{def:anyprice-prices} for goods).
{The price based definition is derived by considering the dual of the fractional MMS definition.
We have shown the equivalence of the two definitions for the case of goods in Proposition~\ref{prop:anyprice-eqe}. Similar arguments (which are omitted) show the equivalence in the case of chores.}
\begin{definition}
\label{def:APSchores}
Consider a setting in which agent $i$ with disutility\ function $c_i$ has entitlement $b_i$ to a set of indivisible chores ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of agent $i$, denoted by $\anypriceic$, is the disutility\ she can limit herself to whenever the chores in ${\cal{M}}$ are adversarially priced with a total price of $1$, and she picks her bundle of chores among those bundles of total price at least $b_i$:
$$\anypriceic = \max_{(p_1,p_2,\ldots,p_m)\in {\cal{P}}}\ \ \min_{S\subseteq {\cal{M}}} \left\{c_i(S) \Big | \sum_{j\in S} p_j\geq b_i\right\}$$
\end{definition}
Observe that both in Definition~\ref{def:anyprice-prices} (for goods) and in Definition~\ref{def:APSchores} (for chores), all prices are non-negative. The difference between the definitions is that for the case of goods, the feasible bundles for an agent $i$ are those priced at most $b_i$, whereas for chores the feasible bundles are those priced at least $b_i$.
We note that the APS in Definition~\ref{def:APSchores} measures disutility, and thus has a positive value, which equals minus the value given by Definition~\ref{def:APSmixed}.
For allocation instances that involve a mixture of goods and chores, there are instances in which agents have additive valuations, equal entitlements, the MMS of every agent is strictly positive, whereas in every allocation (that allocates all items) some agent receives a bundle of value at most~0 \citep*{KMT2020}. As the APS is at least as large as the MMS, it follows that for mixed manna it is not always possible to find an allocation that gives every agent a positive fraction of her APS.
\subsection{Directions for Future Work and Open Questions}
Several approximability results appearing in this paper provide constant factor approximation ratios that are not known to be best possible. It would be interesting to improve over any of them. We list the known lower bounds and upper bounds for three such results (all for additive valuation functions).
\begin{enumerate}
\item What is the largest possible gap between the APS and the MMS when agents have equal entitlements (or equivalently, when the entitlement $b_i$ is the inverse of a positive integer)? Theorem~\ref{thm:BK17a} shows that in this case the value of the MMS is at least a $\frac{3}{4}$ fraction of the value of the APS, and Lemma~\ref{lem:APSlargerPESS} shows that sometimes the ratio is no better than $\frac{96}{97}$.
\item How much of the APS can one guarantee to agents with equal entitlements? Theorem~\ref{thm:BK17} shows that there always is an allocation that gives every agent at least a $\frac{2}{3}$ fraction of her APS. In~\citep*{FST21} it is shown that there are allocation instances for which there is no allocation that gives every agent more than a $\frac{39}{40}$ fraction of her MMS. This negative result applies also to the APS (as the APS is at least as large as the MMS).
\item How much of the APS can one guarantee to agents with arbitrary entitlements? Theorem~\ref{thm:intro-unequal} shows that there always is an allocation that gives every agent at least a $\frac{3}{5}$ fraction of her APS. The negative result of $\frac{39}{40}$ still applies (as equal entitlements are a special case of arbitrary entitlements).
\end{enumerate}
A natural direction for future work for additive valuations is to obtain best-of-both-worlds results for agents with arbitrary entitlements, for example, simultaneously achieving ex-ante proportionality as well as APS approximation ex-post. Such results are known for agents with equal entitlements \citep*{babaioff2021bestofbothworlds}.
More generally, one would like to study the APS for additional classes of valuation functions, such as submodular valuations.
If one wishes to use a share notion as a deterministic solution concept (rather than only as an ex-post guarantee), a natural approach is to output a lex-max allocation with respect to the share-normalized valuations (such an allocation is also Pareto efficient). This applies not only to the APS but also to other notions of shares, such as the MMS. For agents whose share equals~0, the scaling in Algorithm~\ref{alg:lex-max} below is not well defined; by convention, an agent with share~0 who receives value~0 is treated as receiving exactly one unit of her share, whereas receiving positive value corresponds to an unbounded ratio, and such agents require special care. Moreover, as the MMS cannot always be guaranteed when there are three or more agents (3.16), and as the APS is at least as large as the MMS, one cannot guarantee every agent her full APS in this case. We thus resort to approximation, and apply lex-max to the scaled valuations.
\begin{algorithm}
\caption{Lex-max share-based allocation algorithm\label{alg:lex-max}
}
\begin{algorithmic}
\STATE $s_i \gets$ the share of agent $i$, for every agent $i$
\STATE Scale each $v_i$ by $\frac{1}{s_i}$ (with the convention that $0/0=1$)
\STATE Return the lex-max allocation with respect to the scaled valuations
\end{algorithmic}
\end{algorithm}
We note that Algorithm~\ref{alg:lex-max} is not a polynomial-time algorithm.
The existence results established in this paper imply that the allocation returned by Algorithm~\ref{alg:lex-max} guarantees every agent at least the corresponding fraction of her share.
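As an illustration only (our own brute-force sketch, interpreting lex-max as the leximin order over the scaled values, and assuming additive valuations and strictly positive shares; all names are ours), Algorithm~\ref{alg:lex-max} can be realized as follows. Its running time is exponential, in line with the remark above.
\begin{verbatim}
# Illustration only: brute-force lex-max share-based allocation.
from itertools import product

def lex_max_allocation(values, shares):
    """values[i][j]: value of item j to agent i (additive); shares[i] > 0."""
    n, m = len(values), len(values[0])
    best_key, best_bundles = None, None
    for assignment in product(range(n), repeat=m):       # all n^m allocations
        bundles = [[j for j in range(m) if assignment[j] == i] for i in range(n)]
        scaled = [sum(values[i][j] for j in bundles[i]) / shares[i] for i in range(n)]
        key = tuple(sorted(scaled))                       # worst value first
        if best_key is None or key > best_key:            # lexicographic maximization
            best_key, best_bundles = key, bundles
    return best_bundles

# Two agents, three identical items, shares equal to the proportional shares.
print(lex_max_allocation([[1, 1, 1], [1, 1, 1]], [1.5, 1.5]))
\end{verbatim}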
\appendix
\section{Some Formal Definitions}
\label{sec:missing-defs}
In this section we present some formal definitions that are omitted from the body of the paper.
\begin{definition}[Competitive Equilibrium (CE)]\label{def:CE}
Given the valuations of the agents $\textbf{v}=(v_1,\ldots,v_n)$, and a vector of budgets $\textbf{b}=(b_1,\ldots,b_n)$, a \emph{Competitive Equilibrium (CE)} is a pair $(A,p)$ of an allocation $A$ {of all items} and a vector of item prices $p$, where the price of a bundle received by each agent is at most her budget (i.e., $p(A_i) = \sum_{j\in A_i} p_j \leq b_i$), and every agent's bundle is utility maximizing under her budget (i.e., for all $S\subseteq {\cal{M}}$, $p(S) \leq b_i \implies v_i(S) \leq v_i(A_i)$).
\end{definition}
\begin{definition}[Weighted maximin share (WMMS)]
The \emph{weighted maximin share (WMMS)} of agent $i$ with valuation $v_i$ over ${\cal{M}}$, when the vector of entitlements is $(b_1,b_2,\ldots,b_n)$, which is denoted by $WMMS_i(b,v_i,{\cal{M}})$,
is defined to be the maximal value $z$ such that there is an allocation that gives each agent $k$ at least $\frac{z\cdot b_k}{b_i}$ according to $v_i$.
Formally:
$$WMMS_i(b,v_i,{\cal{M}}) = \max_{(A_1,\ldots,A_n)\in\mathcal{A} }\ \min_{k \in[n] } \frac{b_i \cdot v_i(A_k)}{b_k}$$
\end{definition}
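For small instances, the WMMS can be computed by enumerating all allocations. The following Python snippet (ours, for illustration; it assumes an additive $v_i$, and all names are ours) does exactly that.
\begin{verbatim}
# Illustration only: brute-force WMMS of agent i for an additive valuation.
from itertools import product

def wmms(values_i, entitlements, i):
    """values_i[j]: agent i's value for item j; entitlements: (b_1, ..., b_n)."""
    n, m = len(entitlements), len(values_i)
    best = float("-inf")
    for assignment in product(range(n), repeat=m):  # all partitions into n labelled bundles
        bundle_val = [sum(values_i[j] for j in range(m) if assignment[j] == k)
                      for k in range(n)]
        best = max(best, min(entitlements[i] * bundle_val[k] / entitlements[k]
                             for k in range(n)))
    return best

# Two agents with entitlements (1/3, 2/3) and two items of value 1 each.
print(wmms([1, 1], [1/3, 2/3], 0))
\end{verbatim}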
\section{Properties of The Truncated Proportional Share (TPS)}\label{app:TPS}
Observation \ref{obs-TPS-mush-larger-APS} shows that the TPS might be much larger than the APS.
\begin{observation}\label{obs-TPS-mush-larger-APS}
For any $\varepsilon>0$
there exists an additive valuation and an entitlement $b_i$ such that $\varepsilon\cdot TPS(b_i)\geq \anypricevali{b_i}$.
\end{observation}
\begin{proof}
Consider the case of entitlement $b_i=\frac{1}{k+\varepsilon}$
and $k+1$ items, $k$ of them with value $b_i$, and one item with value $r=\frac{\varepsilon}{k+\varepsilon}$.
The APS is $r=\frac{\varepsilon}{k+\varepsilon}$: if we set the price of each high value item to $\frac{1}{k}$, while pricing the low value item at $0$, the agent can only afford the low value item. The APS is not lower, since the agent can always afford at least one item, and every item has value at least $r$.
In contrast, the TPS is $ \frac{1}{k+\varepsilon}$: as the total value is $1$, the proportional share equals $b_i$, no item is worth more than the proportional share, and so the TPS is the same as the proportional share.
\end{proof}
The next observation shows that the TPS can be computed efficiently.
\begin{observation}
\label{obs:tps-computation}
Consider the problem of computing, for any integer additive valuation $v_i$ and entitlement $b_i$ (where $0 < b_i \leq 1$ is a rational number), the truncated proportional share $\truncatedi$.
There is an algorithm that computes the TPS in time polynomial in the input size.
\end{observation}
\begin{proof}
{Compute the proportional share $PS_i = b_i \cdot v_i({\cal{M}})$. Let $j$ denote an item of highest value under $v_i$. If $v_i(j) \le PS_i$ then the TPS is $PS_i$. If $v_i(j) > PS_i$ then there are two cases to consider.
In one case, $b_i \ge \frac{1}{2}$. In this case the TPS is $z = \frac{b_i}{1-b_i} \cdot v_i({\cal{M}} \setminus \{j\})$. This can be verified, as after reducing the value of $v_i(j)$ to $z$, no item other than $j$ has value larger than $z$ (because $\frac{b_i}{1 - b_i} \ge 1$), and it indeed holds that $z = b_i\cdot (\min[z, v_i(j)] + v_i({\cal{M}} \setminus \{j\})) = b_i(z + \frac{1-b_i}{b_i}z) = z$. In the other case, $b_i < \frac{1}{2}$, and
Lemma~\ref{lem:ktps} implies that the TPS equals $\truncated{\frac{b_i}{1-b_i}}{v_i}{{\cal{M}}\setminus \{j\}}$. As in this case the number of items decreases, it takes no more than $m$ iterations to compute the TPS.}
\end{proof}
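The following Python snippet (ours, for illustration; names are ours) implements the iterative procedure described in the proof above, using exact rational arithmetic.
\begin{verbatim}
# Illustration only: computing the TPS by the procedure of obs:tps-computation.
from fractions import Fraction

def tps(values, b):
    """values: non-negative item values; b: the entitlement, a Fraction in (0, 1]."""
    values = sorted((Fraction(v) for v in values), reverse=True)
    while True:
        ps = b * sum(values)                      # proportional share of remaining items
        if not values or values[0] <= ps:         # no item exceeds the proportional share
            return ps
        if b >= Fraction(1, 2):                   # single large item case
            return (b / (1 - b)) * sum(values[1:])
        values, b = values[1:], b / (1 - b)       # drop the largest item, rescale b
\end{verbatim}
For example, for the instance of Observation~\ref{obs-TPS-mush-larger-APS} with $k=2$ and $\varepsilon=\frac{1}{2}$, the call \texttt{tps([Fraction(2,5), Fraction(2,5), Fraction(1,5)], Fraction(2,5))} returns $\frac{2}{5}$, the proportional share.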
\begin{proposition}\label{prop:TPS-CE}
There exists an instance with additive valuations $\textbf{v}$ and entitlements $\textbf{b}$ such that no allocation gives every agent her truncated proportional share (or her proportional share), yet a competitive equilibrium with entitlements as budgets does exist.
\end{proposition}
\begin{proof}
Consider an instance with two agents $1,2$, and three identical items ${\cal{M}} =\{a,b,c\}$. Every agent has value of $\frac{1}{3}$ for each item.
The agents entitlements are $b_1=0.4, b_2=0.6$.
The allocation where agent 1 receives item $a$ while agent 2 receives both $b$ and $c$, with prices $p_a =0.4,p_b=p_c=0.3$, is a CE, and thus competitive equilibria exist.
Moreover, in every CE, agent~2 receives at least as many items as agent~1 (since she has a larger budget), and thus, she must receive two items, as all items must be allocated in a CE.
Thus, in every CE of this instance, agent~1 receives a value of $\frac{1}{3}$.
However, in this instance the TPS of every agent is the same as her proportional share (and the same as the entitlement). So
$\truncated{b_1}{v_1}{{\cal{M}}} =0.4>\frac{1}{3}$, and agent 1 gets less than her TPS in every CE.
\end{proof}
\citet{babaioff2021bestofbothworlds} observe that even for equal entitlements, it is not possible to give every agent more than half her TPS. More precisely, for any constant $\rho$ larger than half,
there is an instance (with equal entitlements) in which some agent does not get $\rho$-fraction of her TPS.
\begin{observation} \label{obs:TPS-app-UB}
For every $n $, there exists an instance with $n$ agents with equal entitlements and identical valuations, such that in every allocation at least one agent gets at most $\frac{n}{2n-1}$-fraction of her TPS.
\end{observation}
\begin{proof}
Consider an instance with $n$ agents and $2n-1$ identical items, each of value $\frac{1}{2n-1}$ for all agents.
The TPS of every agent is $\frac{1}{n}$, while in every allocation there is at least one agent that receives at most one item, so her value is at most $\frac{1}{2n-1} = \frac{n}{2n-1}\cdot \frac{1}{n}$, as desired.
\end{proof}
\section{Missing Proofs from Section \ref{sec:aps}}
\propDefEq*
\begin{proof}
Let $z_D$ (resp., $z_P$) be the value according to Definition~\ref{def:anyprice-prices} (resp., Definition~\ref{def:anyprice-bundles}).
We first prove {that} $z_D \geq z_P$.
Let $\vect{p}$ be the corresponding minimal prices with respect to $z_D$.
Assume towards contradiction that $z_{P}>z_D$. By Definition~\ref{def:anyprice-bundles}, there exists a distribution $F$ over sets $T_1,\ldots,T_k$, with weights $\lambda_1,\ldots,\lambda_k$ (i.e., the probability of $T_r$ according to $F$ is $\lambda_r$), such that $\sum_r \lambda_r=1$, $v_i(T_r) \geq z_P > z_D$ for every $r$, and $\sum_{r: j \in T_r} \lambda_r \leq b_i$ for every item $j\in {\cal{M}}$.
It holds that $$E_{T \sim F}[\vect{p}(T)] = \sum_{j\in{\cal{M}}} p_j \cdot \Pr_{T \sim F}[j \in T] \leq \sum_{j\in{\cal{M}}} p_j \cdot b_i =b_i,$$
where $\vect{p}(T)$ is the price of bundle $T$ according to $\vect{p}$.
Thus, there must be a set $T_r$ in the support of $F$ that is priced at most $b_i$ and has a value strictly greater than $z_D$, which leads to a contradiction.
To see the other direction, assume that $z_D>z_P$ and fix $z$ satisfying $z_D>z>z_P$. Consider the following LP over the variables $\{\lambda_T \mid T \in {{\cal{G}}_i(z)}\}$:
{\bf Maximize} $\sum_{T \in {{\cal{G}}_i(z)}} \lambda_T$ {\bf subject to}:
\begin{itemize}
\item $\sum_{T: j\in T} \lambda_T \leq b_i$ for all $j\in {\cal{M}}$.
\item $\lambda_T \geq 0$ for all $T \in {{\cal{G}}_i(z)}$.
\end{itemize}
Note that since $z>z_P$, the optimal value of this LP is strictly less than $1$.
The dual of this LP is over the variables $\{q_j \}_{j\in{\cal{M}}}$:
{\bf Minimize} $b_i \cdot \sum_{j \in {\cal{M}}} q_j$ {\bf subject to}:
\begin{itemize}
\item $\sum_{j \in T} q_j \geq 1$ for all $T \in {{\cal{G}}_i(z)} $.
\item $q_j \geq 0$ for all $j \in {\cal{M}}$.
\end{itemize}
By LP duality, the minimum of this dual LP is also strictly less than 1. Substituting $p_j=q_j \cdot b_i$, we get the following equivalent LP:
{\bf Minimize} $\sum_{j \in {\cal{M}}} p_j$ {\bf subject to}:
\begin{itemize}
\item $\sum_{j \in T} p_j \geq b_i$ for all $T\in {{\cal{G}}_i(z)}$.
\item $p_j \geq 0$ for all $j \in {\cal{M}} $.
\end{itemize}
This means that the minimum of the last LP is also strictly less than 1. Thus, if we take prices $\vect{p}$ that minimize this LP and raise each price to $p_j+\epsilon$ for a small enough $\epsilon>0$, it holds that $\sum_j (p_j+\epsilon ) \leq 1$, and every (non-empty) bundle of value at least $z$ (note that $z>0$) is priced strictly more than $b_i$. Thus $z_D$ must be smaller than $z$, a contradiction.
\end{proof}
\propSmallSupport*
\begin{proof}
As shown in the proof of Proposition~\ref{prop:anyprice-eqe}, for the AnyPrice share $z$, it holds that the LP over the variables $\{\lambda_T \mid T\in{{\cal{G}}_i(z)} \}$
{\bf Maximize} $\sum_{T \in {{\cal{G}}_i(z)}} \lambda_T$ {\bf subject to}:
\begin{itemize}
\item $\sum_{T: j\in T} \lambda_T \leq b_i$ for all $j\in {\cal{M}}$.
\item $\lambda_T \geq 0$ for all $T \in {{\cal{G}}_i(z)}$.
\end{itemize}
has optimal value at least $1$ (since $z$ is feasible).
There are only $m$ constraints (other than the non-negativity constraints), thus there is an optimal basic solution in which at most $m$ variables are nonzero.
\end{proof}
\propAPSPES*
\begin{proof}
By Definition~\ref{def:pes}, we need to prove that $\anypricevali{b_i} \geq {\ell\mbox{-out-of-}d\mbox{-}Share}$ for every pair of integers $\ell,d$ satisfying $\frac{\ell}{d}\leq b_i$.
Let $A=(A_1,\ldots,A_d)$ be the partition of ${\cal{M}}$ into {$d$} bundles with respect to Definition~\ref{def:lds}.
For every vector of prices $\vect{p}$ that sums up to 1, the expected price of a union of $\ell$ different bundles among $A$, picked uniformly over all { $ {d}\choose{\ell}$ unions},
is exactly $\frac{\ell}{d}$. This holds as every item is picked with probability $\frac{\ell}{d}$ (as it belongs to one set, and $\ell$ sets are picked out of $d$).
Recall that $\frac{\ell}{d}\leq b_i$.
Thus, there are $\ell$ different bundles whose total price is at most $b_i$, and hence their union can be purchased. By Definition~\ref{def:lds}, the value of their union is at least the value of the ${\ell\mbox{-out-of-}d\mbox{-}Share}$.
\end{proof}
\propShareComp*
\begin{proof}
The inequality $Proportional(b_i) \geq \anypricevali{b_i}$ follows by setting the price vector to be $p_j = \frac{v_i(j)}{v_i({\cal{M}})}$. Then, the price of a bundle is proportional to its value, and every affordable bundle has value of at most $b_i \cdot v_i({\cal{M}})$.
The middle inequality follows by Proposition~\ref{prop:APS-pes}.
Each of these two inequalities might hold as equality: when there are $2$ items each of value $1/2$, and $b_1=b_2=1/2$ then all three shares are $1/2$.
The rightmost inequality is proven in Lemma \ref{lem:pes-APS-2},
and it can hold as equality by Example \ref{example:APS-twice-Pessimistic}.
We next argue that each of the inequalities might be strict {even for equal entitlements ($b_i=\frac{1}{k}$ for some integer $k$). The leftmost inequality is strict in the case of a single item and $b_i=1/2$, where the proportional share is $b_i=1/2$, while the APS is $0$. The rightmost inequality is strict in the example for which $\anypricevali{b_i} = Pessimistic(b_i)$ presented above ($2$ identical items, 2 agents with equal entitlements). Finally, Lemma \ref{lem:APSlargerPESS} shows that the middle inequality is sometimes strict, even for agents with equal entitlements (it is much simpler to prove strict inequality for unequal entitlements, e.g., Example \ref{example:APS-twice-Pessimistic}).}
\end{proof}
\begin{lemma}\label{lem:APSlargerPESS}
There exists an additive valuation (over $15$ items) such that for $b_i=\frac{1}{3}$ (corresponding to three agents with equal entitlements) it holds that $\anypricevali{\frac{1}{3}} > \pessimisticb{\frac{1}{3}} = \MMSnum{3}$.
\end{lemma}
\begin{proof}
{
Consider the additive valuation $v_i$ over a set ${\cal{M}}$ of $15$ items with the multi-set of items value being $$5,5,5,7,7,7,11,17,23,23,23,31,31,31,65.$$
To prove the claim we show that for this instance
$\anypricevali{\frac{1}{3}}= 97> \pessimisticb{\frac{1}{3}} = \MMSnum{3}$.
It will be convenient to index each item by a pair of different integers in $\{1,2,3,4,5,6\}$, that is, the set of items is ${\cal{M}} = \{\{k,j\} | 1\leq k<j \leq 6\}$.
We can now write the items in the following six-by-six table (for convenience each value appears twice).}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
* & 11 & 7 & 7 & 7 & 65 \\
\hline
11 & * & 23 & 23 & 23 & 17 \\
\hline
7 & 23 & * & 31 & 31 & 5 \\
\hline
7 & 23 & 31 & * & 31 & 5 \\
\hline
7 & 23 & 31 & 31 & * & 5 \\
\hline
65 & 17 & 5 & 5 & 5 & * \\
\hline
\end{tabular}
\end{table}
{By Definition~\ref{def:anyprice-bundles}, the APS is at least $97$:
the sets in the support will be the six rows $T_k = \{ \{j,k\} \mid j \neq k\}$ for $k\in [6]$, each has value of $97$. Set $\lambda_{T_k}= \frac{1}{6}$ for each row $T_k$, and set $\lambda_{T}=0$ for every other bundle $T$.
Then every bundle $T_k$ has value of $97$, and every item $\{j,k\}$
has total weight at most $1/3$, as it
appears in exactly two sets of positive weight (once in $T_j$ and once in $T_k$): $\sum_{T : \{j,k\} \in T} \lambda_T =\lambda_{T_k} +\lambda_{T_j}= \frac{1}{6}+ \frac{1}{6}=\frac{1}{3}\leq b_i$ as needed.
The APS is not more than $97$ since it is bounded by the proportional share, which is $97$.
In contrast, we next prove that the MMS is strictly less than $97$ (it cannot be larger than 97, as the MMS is at most the APS.)
Assume towards a contradiction that the MMS is $97$. By definition of the MMS, there must exist a partition of ${\cal{M}}$ to three bundles $P_1,P_2,P_3$, each with value of exactly $97$. To complete the proof we show that such a partition does not exist, deriving a contradiction.
W.l.o.g., we assume that the item of value $65$ is in $P_1$.}
The only two ways to complete $P_1$ to have a value of {exactly} $97$ are by adding the items with values $5,5,5,17$ or adding the items with values $7,7,7,11$. This is since if we add an item with value $31$, then we get a value of $96$ and there is no item with value $1$.
If we add an item of value $23$ then we get value of $88$, and there is no subset of items worth $9$.
In order to add a value of $32$, one must use one of the $11,17$ valued items: without them, the available values are $5,7,23,31$, and no subset of them sums to exactly $32$ (adding a $31$ leaves $1$, adding a $23$ leaves $9$, and the $5$ and $7$ valued items alone cannot sum to $32$).
If the $11$ value is added, then the only way to achieve an additional value of $21$ is by adding the three $7$ valued items, and if the $17$ value is added, then the only way to achieve an additional value of $15$ is by adding the three $5$ valued items.
W.l.o.g., we assume that at least two items of value $31$ (among the remaining three) are in $P_2$.
It cannot be that the third $31$ value is also in $P_2$, since otherwise $P_2$ has accumulated a value of $93$, which cannot be completed to {exactly} $97$ (as the smallest item value is $5$).
It must be that at least one of the $23$ valued items is in $P_2$, since otherwise the value of $P_3$ is at least $3\cdot 23 +31=100 >97$.
It must be that at most one of the $23$ valued items is in $P_2$ since otherwise the value of $P_2$ is at least $2 \cdot 31 +2 \cdot 23 = 108>97$.
In order to complete $P_2$ to have a value of $97$, we need to add items with total value of $12$, and the only way to do so is by adding one item of value $5$ and one item of value $7$.
But it must be that either all three items of value $5$ are in $P_1$, or all three items of value $7$ are in $P_1$, thus $P_2$ cannot be completed to have a value of $97$.
Thus there is no such partition, and the MMS is smaller than $97$ (and as it is an integer, it is at most $96$).
\end{proof}
\begin{lemma}\label{lem:pes-APS-2}
For any additive valuation $v_i$ and any entitlement $b_i$ it holds that \\
$ 2\cdot Pessimistic(b_i) \geq \anypricevali{b_i}$.
\end{lemma}
\begin{proof}
Let $z$ be the value of the $\anypricevali{b_i}$ share. Then there is a collection $S_1, \ldots, S_t$ of bundles and positive coefficients $\lambda_1, \ldots, \lambda_t$ satisfying $\sum_{\ell \le t} \lambda_\ell = 1$, $v(S_\ell) \ge z$ for every bundle $S_\ell$, and $\sum_{\ell : j \in S_\ell} \lambda_\ell \le b_i$ for every item $j$. Let $k$ be the smallest integer satisfying $b_i \geq \frac{1}{k}$.
We need to show that one can partition all items into $k$ disjoint bundles $T_1, \ldots, T_k$ such that $v(T_\ell) \ge \frac{z}{2}$ for every $T_\ell$. To distinguish between the bundles of type $S_\ell$ and those of type $T_\ell$, we refer to the latter as {\em bins}. We shall place items in bins, and {\em close} a bin once its value reaches (or exceeds) $\frac{z}{2}$.
Call an item {\em large} if its value is at least $\frac{z}{2}$. We first present an argument that shows that we can assume that there are no large items.
Let $h$ denote the number of large items. Each large item closes a bin by itself. If $h \ge k$ we are done, and hence we assume that $h \le k-1$. This leaves us $k - h$ bins to fill. As to the original bundles, discard (change its coefficient $\lambda_\ell$ to~0) every bundle $S_\ell$ that contains a large item. As the coefficients $\lambda_\ell$ of the bundles that contain any given item sum up to at most $b_i$, discarding the bundles that contain large items removes coefficients of total at most $b_i h$, so the sum of coefficients that remain is at least $1 - b_i h > 0$. (Note that the choice of $k$ satisfies $b_i < \frac{1}{k-1}$, and together with $h \le k-1$, this implies that $b_ih < 1$.) W.l.o.g., assume that the sum of coefficients that remain is exactly $1 - b_i h$. If $h = k-1$ we are also done for the following reason. $k-1$ bins are closed by using the $h = k-1$ large items. As the sum of coefficients that remain is positive, at least one bundle (say $S_\ell$) has a nonzero coefficient. This bundle has no large items, implying that all its items remain. As the sum of their values is at least $z$, these items suffice in order to close the last remaining bin. In other words, we may choose $T_k = S_\ell$. Given the above, we may assume that $h \le k - 2$.
Scale every coefficient by $\frac{1}{1 - b_i h}$, getting new coefficients $\lambda'_\ell = \frac{\lambda_\ell}{1 - b_ih}$. Now $\sum_\ell \lambda'_\ell = 1$, and for every item $j$, $\sum_{\ell : j \in S_\ell} \lambda'_\ell \le \frac{b_i}{1 - b_ih} < 1$. Denote $\frac{b_i}{1 - b_ih}$ by $b'_i$.
Let $k'$ be the smallest integer satisfying $b'_i \ge \frac{1}{k'}$.
We have that $k' = k-h$, because then $k'b'_i = \frac{(k-h)b_i}{1 - b_ih} = \frac{kb_i - hb_i}{1 - hb_i} \geq \frac{1 - hb_i}{1 - hb_i} = 1$. (A similar computation shows that $k' = k - h - 1$ does not suffice.)
Renaming $k'$ by $k$ and $b'_i$ by $b_i$, we return to the starting point of the lemma, except that now every item has value below $\frac{z}{2}$.
Having concluded that we can assume that no item has value above $\frac{z}{2}$, consider first the case that $1>b_i \ge \frac{1}{2}$, for which $k = 2$. Let $x$ be the highest value item. As at least one of the bundles does not contain $x$, the total value of items is at least $z + v_i(x)$ (the bundle not containing $x$ contributes at least $z$, and $x$ itself contributes $v_i(x)$). Bin $T_1$ will have value at most $\frac{z}{2} + v_i(x)$ (as its value grows by steps not larger than $v_i(x)$, and we stop adding to it the first time the value is at least $z/2$), and consequently value of at least $\frac{z}{2}$ remains for bin $T_2$, as desired.
Consider now the case that $b_i < \frac{1}{2}$, for which $k \ge 3$. For $j \ge 1$, let $u_j$ denote the value of the $j$th most valuable item. We consider two cases.
\begin{itemize}
\item $u_{2k-2} < \frac{z}{4}$. For $1 \le j \le k$, put $u_j$ in bin $T_j$. Thereafter add items to the bins, closing each bin as soon as its value reaches $\frac{z}{2}$. Note that at most $k-3$ bins can close at a value above $\frac{3z}{4}$ (as only $2k-3$ items have value
at least $\frac{z}{4}$, and every item has value smaller than $\frac{z}{2}$),
and no bin can close at value above $z$ (because no single item has value
larger or equal to $\frac{z}{2}$). Consequently, even after all but one bin close, the total value left for the last bin is at least $(k-1)z - (k-3)z - 2\frac{3z}{4} \ge \frac{z}{2}$, as desired.
\item $u_{2k-2} \ge \frac{z}{4}$.
Then we can cover the first $k-1$ bins by partitioning the first $2k-2$ items to $k-1$ disjoint pairs, and
putting a pair in each of these bins.
We claim that value above $\frac{z}{2}$ remains for bin $T_k$.
This is because among the original bundles, at least one bundle $S_\ell$ contains at most one of the first $2k-2$ items.
(Every item is contained in bundles of total weight at most $b_i < \frac{1}{k-1}$. Hence the expected number of items among the top $2k-2$ items in a bundle drawn according to $\lambda$ (i.e., bundle $S_\ell$ w.p. $\lambda_\ell$) is at most $(2k-2) \cdot b_i < 2$. Thus, there must be a bundle $S_\ell$ that contains at most one element among the top $2k-2$.)
This bundle $S_\ell$ has value of at least $z - u_1 \ge \frac{z}{2}$ left.
\end{itemize}
\end{proof}
\obsApsTwo*
\begin{proof}
{
For an entitlement of the form $b_i=\frac{1}{n}$, the pessimistic share is the same as the MMS of $n$ players (i.e., $\pessimisticb{\frac{1}{n}}=\MMSnum{n}$).
Then by Proposition~\ref{prop:shares-comp} it holds that $\anypricevali{\frac{1}{2}} \geq \pessimisticb{\frac{1}{2}}=\MMSnum{2}$, and hence the APS is at least as large as the MMS.
On the other hand, the MMS of two players is the maximal value of any bundle that has a value of at most $\frac{v_i({\cal{M}}) }{2}$. The APS cannot be larger: the APS equals the value of some bundle (the minimum-value bundle in the support of the distribution of Definition~\ref{def:anyprice-bundles}), so if it were larger than the MMS, that bundle would have value larger than $\frac{v_i({\cal{M}})}{2}$, i.e., larger than the proportional share, contradicting Proposition~\ref{prop:shares-comp}.}
\end{proof}
{\begin{remark}
\label{ex:aps-half-mms}
Observation~\ref{obs:APS-two-equal} uses in an essential way the fact that valuations are additive. The following example shows that for $b_i=\frac{1}{2}$, the APS and the MMS might be different for submodular valuations.
Consider a setting with six items ${\cal{M}}=\{1,2,3,4,5,6\}$, and let $A=\{\{1,2,3\},\{1,5,6\},\{2,4,6\},\{3,4,5\}\}$. For every agent $i$, let
$v_{i}(S)= 6$ if there exists $T \in A $ such that $T \subseteq S$ (i.e., $S$ contains a set in $A$), or {if $|S| \geq 4$}.
Let $v_i(S)=2$ for any set $S$ of size $1$. Let $v_i(S)=4$ for any set $S$ of size $2$, and let $v_i(S)=5$ for any set $S$ of size $3$, such that $S \notin A$.
This is a submodular valuation, since the marginal value of every item is in $\{0,1,2\}$ and can only decrease as more items are added.
The MMS is at most $5$ since there is no partition $(S_1,S_2)$ in which both $S_1,S_2$ have value~$6$ (i.e., either contain a set in $A$, or are of size at least $4$).
The APS is at least $6$: setting $\lambda_S=\frac{1}{4}$ for every $S\in A$ covers every item with total weight exactly $\frac{1}{2}$, and every set in $A$ has value $6$, so Definition~\ref{def:anyprice-bundles} gives that the APS is at least $6$.
\paragraph{Subadditive:} If we change the value of the non-empty sets that have value less than~6 to be~3, then the function is no longer submodular, but is still subadditive. The APS of the modified function is still $6$, whereas the MMS drops to $3$.
\end{remark}}
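For completeness, the following short Python check (ours, purely illustrative) verifies the two claims above: no $2$-partition gives both parts value~$6$ (so the MMS is at most $5$), and every item belongs to exactly two of the four sets in $A$ (so the weights $\lambda_S=\frac{1}{4}$ cover every item with total weight $\frac{1}{2}$, as required by Definition~\ref{def:anyprice-bundles}).
\begin{verbatim}
# Illustration only: checking the submodular example of Remark ex:aps-half-mms.
from itertools import combinations
from collections import Counter

ITEMS = frozenset(range(1, 7))
A = [frozenset(s) for s in ({1,2,3}, {1,5,6}, {2,4,6}, {3,4,5})]

def v(S):
    S = frozenset(S)
    if any(T <= S for T in A) or len(S) >= 4:
        return 6
    return {0: 0, 1: 2, 2: 4, 3: 5}[len(S)]

mms_half = max(min(v(S), v(ITEMS - frozenset(S)))
               for r in range(7) for S in combinations(ITEMS, r))
print(mms_half)                                        # prints 5
cover = Counter(j for S in A for j in S)
print(all(c == 2 for c in cover.values()))             # each item is in exactly 2 sets
\end{verbatim}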
\obsTwo*
\begin{proof}
Let $F$ be the distribution over sets (non-negative weights $\{\lambda_T\}_{T\subseteq {\cal{M}}}$ that total to $1$)
that by Definition~\ref{def:anyprice-bundles} defines $\anyprice{b_1}{v_1}{{\cal{M}}}$, the APS of agent $1$ that has valuation $v_1$ and entitlement $b_1$.
The expected value for the other agent, agent $2$, for the complement of a random set sampled from $F$ is $\sum_{T\subseteq {\cal{M}}} \lambda_T \cdot v_2({\cal{M}}\setminus T) = \sum_{T\subseteq {\cal{M}}} \lambda_T \cdot (v_2({\cal{M}}) - v_2(T))= v_2({\cal{M}}) - \sum_{T\subseteq {\cal{M}}} \lambda_T \cdot v_2(T)\geq v_2({\cal{M}}) - b_1 \cdot v_2({\cal{M}}) = b_2 \cdot v_2({\cal{M}})$, using the fact that $v_2$ is additive and as $\sum_{T\subseteq {\cal{M}}} \lambda_T \cdot v_2(T)\leq b_1 \cdot v_2({\cal{M}})$, since each item has total weight at most $b_1$.
As $b_2 \cdot v_2({\cal{M}})$ is the proportional share of agent $2$, it is at least her APS according to Proposition~\ref{prop:shares-comp}.
Thus, for at least one set $S$ in the support of $F$, agent $2$ values the set ${\cal{M}}\setminus S$ at least at her APS.
As every set in the support of $F$ gives agent $1$ her APS,
the allocation $A=(S,{\cal{M}}\setminus S)$ gives each of the two agents her AnyPrice share.
\end{proof}
\proppseudo*
\begin{proof}
Under the conditions of the proposition, the AnyPrice share $z=\anypricei$ is an integer. It suffices to design a polynomial time testing algorithm that given a candidate value for $t$, tests in time polynomial in $t$ whether $z < t$. Using such a testing algorithm, the value of $z$ can be found by binary search over values of $t$ (without need to ever test a value of $t$ larger than $2z$), invoking the testing algorithm only a polynomial number of times.
For testing a candidate value of $t$, we use the following linear program. Its variables are $p_j$, the prices for the items.
{\bf Minimize} $\sum_{j \in {\cal{M}}} p_j$ subject to the following constraints:
\begin{itemize}
\item $p_j \ge 0$ for every item $j \in {\cal{M}}$.
\item $\sum_{j \in S} p_j \ge b_i$ for every set $S \subset {\cal{M}}$ with $v_i(S) \ge t$.
\end{itemize}
If $z < t$ then the optimal value of the LP is strictly smaller than~1. This is because by definition of the AnyPrice share, there is a price vector $p$ with respect to which every bundle of value at least $t$ has price at least $b_i + \epsilon$, for some $\epsilon > 0$. Scaling all prices by $\frac{b_i}{b_i + \epsilon}$ gives a feasible solution to the LP, and its value is strictly smaller than~1.
The LP has exponentially many constraints, one for each set $S$ with value $v_i(S) \ge t$. Nevertheless, it can be solved in pseudo-polynomial time. This is because given a candidate assignment to the $p_j$ variables, one can test if there is a violated constraint by solving a knapsack problem over the items ${\cal{M}}$, with $b_i$ as the knapsack size, where the $p_j$ serve as item sizes, and the values $v_i(j)$ are the item values. Knapsack can be solved in time pseudo-polynomial in the value of the solution, hence in time pseudo-polynomial in $t$.
\end{proof}
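The separation step can be illustrated as follows (our own sketch; it uses the equivalent minimum-price formulation of the knapsack problem, and all names are ours): given candidate prices, the budget $b_i$ and a threshold $t$, a dynamic program over the integer item values determines whether some bundle of value at least $t$ has price at most $b_i$, i.e., whether some constraint of the LP is violated.
\begin{verbatim}
# Illustration only: knapsack-based separation oracle for the testing LP.
def violated_constraint_exists(values, prices, budget, t):
    """values: positive integers; prices: non-negative reals.
    Returns True iff some bundle of total value >= t has total price <= budget."""
    INF = float("inf")
    total = sum(values)
    cheapest = [0.0] + [INF] * total     # cheapest[w]: min price of a bundle of value w
    for v_j, p_j in zip(values, prices):
        for w in range(total, v_j - 1, -1):
            cheapest[w] = min(cheapest[w], cheapest[w - v_j] + p_j)
    return min(cheapest[t:], default=INF) <= budget

# Two items of value 1, prices (0.5, 0.5), budget 0.4: no bundle of value >= 1
# is affordable, so no constraint of the LP is violated for t = 1.
print(violated_constraint_exists([1, 1], [0.5, 0.5], 0.4, 1))   # False
\end{verbatim}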
\section{Missing Proofs from Section~\ref{sec:allocation-game}}
\label{app:allocation-game}
Before we prove Theorem~\ref{thm:app-APS}, we first observe the following property regarding the TPS.
\begin{lemma}\label{lem:ktps}
For every subset of items $K \subset {\cal{M}}$, and for every $b_i\leq \frac{1}{|K|+1}$ and additive valuation $v_i$ over the set of items ${\cal{M}}$, it holds that $$
\truncatedi \leq \truncated{\frac{b_i}{1-|K|\cdot b_i}}{v_i}{{\cal{M}} \setminus K}. $$
\end{lemma}
\begin{proof}
It suffices to prove this lemma for $K$ which is a singleton. The proof of the lemma then follows by observing that if $b_i \leq \frac{1}{|K|+1}$ then $\frac{b_i}{1-b_i} \leq \frac{1}{|K|}$ and that $\frac{\frac{b_i}{1-b_i}}{1-(|K|-1)\frac{b_i}{1-b_i}} = \frac{b_i}{1-|K|\cdot b_i}$. The lemma follows by partitioning $K$ into $|K|$ singletons, and iterating over the singletons.
Fix $K=\{j\}$ and $b_i\leq \frac{1}{2}$.
By definition $$z=b_i \cdot \sum_{\ell\in {\cal{M}}} \min(v_i(\ell),z) \le b_i \cdot z + b_i \cdot \sum_{\ell\in {\cal{M}} \setminus\{j\}} \min(v_i(\ell),z). $$
By rearranging, we get that:
$$z \le \frac{b_i}{1-b_i} \cdot \sum_{\ell\in {\cal{M}} \setminus\{j\}} \min(v_i(\ell),z) .$$
Thus, by definition $z\leq \truncated{\frac{b_i}{1-b_i}}{v_i}{{\cal{M}}\setminus\{j\}}, $
which concludes the proof.
\end{proof}
A similar result holds for the APS (with a somewhat relaxed constraint on $b_i$).
\begin{lemma}\label{lem:kaps}
For every subset of items $K \subset {\cal{M}}$, and for every $b_i < \frac{1}{|K|}$ and additive valuations $v_i$ over the set of items ${\cal{M}}$, it holds that $$
\anypricei \leq \anyprice{\frac{b_i}{1-|K|\cdot b_i}}{v_i}{{\cal{M}} \setminus K}. $$
\end{lemma}
\begin{proof}
Let $\vect{p}$ be the prices according to Definition~\ref{def:anyprice-prices} with respect to items ${\cal{M}} \setminus K$ and entitlement $\frac{b_i}{1-|K|\cdot b_i}$.
From $\vect{p}$ we derive prices $\vect{p'}$ for ${\cal{M}}$. For every item $j\in K$ the price is $p'_j = b_i+\epsilon$, and for every item $j\in {\cal{M}} \setminus K$ the price is $p'_j = p_j \cdot (1-|K|\cdot (b_i+\epsilon'))$. The ratio between $\epsilon$ and $\epsilon'$ is chosen so that the sum of prices is~1, and $\epsilon'$ is chosen to be small enough so that agent $i$ with budget $b_i$ and prices $\vect{p'}$ cannot afford any bundle that could not be afforded with budget $\frac{b_i}{1-|K|\cdot b_i}$ and prices $\vect{p}$.
\end{proof}
\input{proof2bi}
\section{Missing Proofs from Section~\ref{sec:greedy-efx}}
\label{app:greedy-efx}
Here we present the definition of the Greedy-EFX algorithm.
\begin{algorithm}
\caption{Greedy-EFX~\citep{LiptonMMS04,BK20}}
\begin{algorithmic}[1]
\STATE Input: Set of items ${\cal{M}}=\{e_1,\ldots,e_m\}$ and valuation profile $\vect{v}=(v_1,\ldots,v_n)$ where $v_i(e_j)\geq v_i(e_k)$ for all $i\in {\cal{N}}$ and every $j\leq k\leq m$.
\FOR{$t= 1,\ldots,m $}
\STATE Give item $e_t$ to an agent that no one envies, breaking ties arbitrarily
\WHILE{There exists an envy cycle}
\STATE Resolve {an (arbitrary)} envy cycle by rotating sets along the cycle
\ENDWHILE
\ENDFOR
\end{algorithmic}
\end{algorithm}
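The following Python sketch (ours, for illustration; it assumes an ordered instance with additive valuations, as in the input of the algorithm, and all names are ours) implements Greedy-EFX. Envy cycles are detected by repeatedly discarding agents that envy no remaining agent; any agents that remain all envy someone among themselves, so walking along envy edges yields a cycle, which is then resolved by rotating bundles.
\begin{verbatim}
# Illustration only: Greedy-EFX with envy-cycle elimination (ordered instance).
def greedy_efx(values):
    """values[i][j]: value of item j to agent i; items sorted in decreasing value."""
    n, m = len(values), len(values[0])
    bundles = [set() for _ in range(n)]
    val = lambda i, B: sum(values[i][j] for j in B)
    envies = lambda i, k: val(i, bundles[k]) > val(i, bundles[i])

    def find_envy_cycle():
        remaining = set(range(n))
        changed = True
        while changed:                        # discard agents with no outgoing envy edge
            changed = False
            for i in list(remaining):
                if not any(envies(i, k) for k in remaining):
                    remaining.discard(i)
                    changed = True
        if not remaining:
            return None
        path, cur = [], next(iter(remaining)) # every remaining agent envies someone
        while cur not in path:
            path.append(cur)
            cur = next(k for k in remaining if envies(path[-1], k))
        return path[path.index(cur):]

    for j in range(m):                        # allocate items in decreasing value
        receiver = next(i for i in range(n)
                        if not any(envies(k, i) for k in range(n)))
        bundles[receiver].add(j)
        cycle = find_envy_cycle()
        while cycle is not None:              # rotate bundles along the envy cycle
            rotated = [bundles[cycle[(r + 1) % len(cycle)]] for r in range(len(cycle))]
            for r, agent in enumerate(cycle):
                bundles[agent] = rotated[r]
            cycle = find_envy_cycle()
    return bundles

# Two agents, four items with values 4 >= 3 >= 2 >= 1 for both agents.
print(greedy_efx([[4, 3, 2, 1], [4, 3, 2, 1]]))
\end{verbatim}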
\thmordered*
\begin{proof}[Sketch]
A {\em choosing sequence} is a sequence of names of agents (repetitions are allowed). Starting with the beginning of the choosing sequence, in each round the agent whose name appears next in the choosing sequence selects an item of her choice. The allocation $A$ for the ordered instance naturally induces a choosing sequence, where for every round $r$, the agent to choose in round $r$ is the one to which $A$ allocated the $r$th most valuable item in the ordered instance. Using this choosing sequence (and the pigeon-hole principle), in every round, every agent can choose an item that she values at least as much as the corresponding item that she got under $A$.
\end{proof}
\thmgreedyEFX*
\input{greedy-new}
{Recall that in every round, the greedy-EFX allocation algorithm gives an item to an agent that has a bundle that no one envies. If there are several such agents, it may select one of them arbitrarily. Depending on the arbitration rule, we get different versions of greedy-EFX. The positive results of Lemma~\ref{lem:BK17} hold for all such versions. The negative results of the following Proposition~\ref{pro:EFXupper} are proved with respect to one such natural version. The key assumption of our version is that the first item goes to agent~1. (For convenience we also assume that if in a certain round the arbitration rule preferred an agent $i$ over an agent $j$, then in the next round in which both $i$ and $j$ are eligible candidates to receive an item, agent $j$ is preferred over agent $i$. However, this last assumption can be removed by slightly perturbing the values of items in the example. Details of how to perform these perturbations are omitted.)}
\EFXupper*
\begin{proof}
Let $k =\lceil \frac{1}{2\epsilon}\rceil$.
We consider a setting with $n=\sum_{i=0}^k 4^i=\frac{ 4^{k+1} -1}{3}$ agents, and a set of $m=3\cdot \sum_{i=0}^k 4^i = 4^{k+1}-1$ items ${\cal{M}} =\big\{(a,b,c) \mid a\in\{0,1,\ldots,k\},b\in\{1,2,3\},c \in \{1,2,\ldots,4^a\}\big\}$. For agent $1$, the value of item $(a,b,c)$ is $1-\frac{a}{2k}$ if $b=1$, and $\frac{a}{2k}$ if $b\neq 1$.
We call items with $b=1$ large items, and items with $b\in \{2,3\}$ small items.
For every $0 \le i \le k$, we refer to the items with $a=i$ as belonging to {\em group} $i$.
Hence group $i$ contains $4^i$ large items and $2\cdot 4^i$ small items.
For every agent other than agent~1, the value of item $(a,b,c)$ is $1-2^{-k+a-1}$ if $b=1$, and $2^{-k+a-2}$ if $b\neq 1$.
{We next claim that if the first item that the greedy-EFX algorithm allocates is assigned to agent $1$, then agent $1$ gets the three items of group $0$, with values $1,0$ and $0$.
When
agent $1$ receives the first item (the large item of group 0), each other agent receives a single large item (as every other agent assigns every large item a value of at least $\frac{1}{2}$, and every small item a value of less than $\frac{1}{2}$).
After the allocation of the large items, the greedy-EFX allocates the small items in decreasing order of the groups. Each agent that received a large item of group $i$ also receives two small items of the same group.
Thus, agent $1$ won't get another item until all items but the small items of group $0$ are allocated.}
{While the greedy-EFX algorithm gives agent~1 value of $1$, we next show that
the MMS {of agent~1} is at least $\frac{3}{2}-\epsilon$. This holds since the agent can partition the items to $n$ bundles, each with value at least $\frac{3}{2}-\epsilon$, in the following way:}
For each $a\in \{1,2,\ldots,k\}$, the agent creates $2\cdot 4^{a-1}$ bundles by taking two large items of group $a$ and one small item of group $a-1$.
The value of such bundle is $2(1-\frac{a}{2k}) + \frac{a-1}{2k} = 2-\frac{a+1}{2k}\geq \frac{3}{2}-\epsilon $.
{The agent also creates $ \frac{2\cdot 4^k-2}{3} $ bundles, each containing three small items of group $k$, and each such bundle has a value of $\frac{3}{2}$.}
Note that $2\cdot 4^k-2$ is divisible by $3$.
The agent also creates another bundle using {the two remaining} small items of group $k$, and the large item of group $0$. {That bundle has value of $2$.}
The number of bundles created is then $$\sum_{a=1}^k 2 \cdot 4^{a-1} + \frac{2\cdot 4^k-2}{3} +1 =\frac{2\cdot 4^k-2}{3} + \frac{2\cdot 4^k+1}{3} = n,$$
which concludes the proof.
\end{proof}
We next consider the outcome of the greedy-EFX algorithm for the case of agents with equal entitlements and identical valuations.
\begin{theorem}
\label{thm:BK17a}
When $n$ agents have additive {\em identical} valuations and equal entitlements, the greedy-EFX allocation gives every agent more than a $\frac{3}{4}$ fraction of her AnyPrice share.
\end{theorem}
\begin{proof}
{Lemma~\ref{lem:BK17} concerns arbitrary additive valuations. Its proof was based on analysing several cases, handled by four different claims. For every claim, the bounds proved in that lemma apply also to additive identical valuations. In three of the claims (Claim~\ref{claim:atMost2}, Claim~\ref{claim:5/6}, Claim~\ref{claim:middle}), it was established that the agent receives more than a $\frac{3}{4}$ fraction of her AnyPrice share.
The exception is Claim~\ref{claim:3/4}, that considers the range of values $v_q(e_k) \le \frac{3}{4}$. (Recall that $q$ is an agent that receives an allocation of value~1, index $k$ is the largest index of a bundle that contains more than two items, and $e_k$ is the $k\mbox{-}th$ most valuable item, and hence the most valuable item in bundle $B_k$.) Hence in the current proof it suffices to address the case that $v_q(e_k) \le \frac{3}{4}$. We shall address this case by further partitioning it into three cases. In two of these cases we shall not require valuation functions of different agents to be identical, and only the third of these cases makes use of the assumption of identical valuations.
\begin{claim}
\label{claim:1/3}
If $v_q(e_k) < \frac{1}{3}$ then $TPS_q < \frac{4}{3}$, and hence $APS_q < \frac{4}{3}$. This holds even if the valuation functions of agents are not identical.
\end{claim}
\begin{proof}
If $v_q(e_k) < \frac{1}{3}$, then every bundle either has only a single item, or the last item to enter the bundle has value at most $v_q(e_k) < \frac{1}{3}$, and hence the bundle has value strictly less than $\frac{4}{3}$. Hence the truncated proportional share of agent $q$ is smaller than $\frac{4}{3}$, and she gets more than a $\frac{3}{4}$ fraction of $TPS_q$.
\end{proof}
In order to upper bound the APS in the proofs of the following Claims~\ref{claim:2/3} and~\ref{claim:middleThird}, we shall use the approach used in the proof of Lemma~\ref{lem:BK17}, pricing the items so that the total price $P$ is strictly less than~$n$, and giving the agent a budget of $\frac{P}{n}< 1$.}
\begin{claim}
\label{claim:2/3}
If $\frac{2}{3} \le v_q(e_k) \le \frac{3}{4}$ then $APS_q < \frac{4}{3}$. {This holds even if the valuation functions of agents are not identical.}
\end{claim}
\begin{proof}
Recall that $B_{k_1}$ denotes the last bin among those that have only one item, and note that $k_1 < k$ (as $B_k$ has more than two items). The price function $p$ is as follows: for $i \le k_1$ we have $p(e_i) = 1$, for $k_1 < i \le 2n-k$ (medium items) we have $p(e_i) = \max[\frac{1}{2}, v_q(e) - \frac{1}{4}]$, and for $i \ge 2n-k+1$ (small items) we have $p(e_i) = \min[\frac{1}{4}, v_q(e)]$. Recall that every small item has value at most $1 - v_q(e_k)$, and under the conditions of the claim this value is at most $\frac{1}{3}$.
(Remark: if $q > k$, the price of item $e_{2n - q + 1}$ is slightly decreased, as explained shortly.)
Every bundle up to~$k_1$ has price~1. Every bundle from $k+1$ up to $n$ has two items, none of which has value above $\frac{3}{4}$, and hence the price of the bundle is at most~1. For every bundle $B_i$ with $i$ in the range $[k_1 + 1, k]$, its first item has value at least $v_q(e_k) \ge \frac{2}{3}$. If $B_i$ has only one small item, then $p(B_i) \le \max[\frac{1}{2}, v_q(e_i) - \frac{1}{4}] + \frac{1}{4} \le 1$. If $B_i$ has two small items, then $p(B_i) \le \max[\frac{1}{2}, v_q(e_i) - \frac{1}{4}] + 2\min[\frac{1}{4}, 1 - v_q(e_i)] \le 1$. If $B_i$ has three or more small items then $p(B_i) \le \max[\frac{1}{2}, v_q(e_i) - \frac{1}{4}] + \frac{3}{2}(1 - v_q(e_i)) \le 1$, where the last inequality holds since $v_q(e_i) \ge \frac{2}{3}$.
If $q \le k$ then $p(B_q) < 1$ (because $p(e_q) < v(e_q)$ whereas $v(B_q) = 1$). If $q > k$, then $v_q(B_q) = 1$ implies that for the second item in $B_q$ (which is a medium item) we have $v_q(e_{2n-q+1}) \le \frac{1}{2}$. We change its price to $\frac{1}{2} - \epsilon$ for a very small $\epsilon > 0$, so that $p(B_q) < 1$. The sum of prices is now strictly smaller than $n$, as desired.
Purchasing a medium item gives the agent a value gain of at most $\frac{1}{4}$ compared to the price paid. Purchasing a small item gives the agent a value gain of at most $\frac{1}{3} - \frac{1}{4} = \frac{1}{12}$ compared to the price paid. With a budget less than~1, the combination that gives the agent the largest gain is that of purchasing one medium item and one small item (purchasing $e_{2n-q+1}$ as a second medium item might be feasible, but it gives a smaller gain), for a gain of at most $\frac{1}{4} + \frac{1}{12} = \frac{1}{3}$. As the budget is less than~1, the AnyPrice share is less than $\frac{4}{3}$.
\end{proof}
It remains to deal with the case that $\frac{1}{3} \le v_q(e_k) < \frac{2}{3}$. Here we shall use the assumption that all agents have the same valuation function $v = v_q$. This assumption implies that every item is added to a bundle that at the time has the smallest $v$ value.
\begin{claim}
\label{claim:middleThird}
If agents have identical additive valuations and $\frac{1}{3} \le v_q(e_k) < \frac{2}{3}$, then $APS_q < \frac{4}{3}$.
\end{claim}
\begin{proof}
In this proof we change the meaning of the terminology {\em large}, {\em medium} and {\em small}, compared to the previous cases.
The price function is as follows. (Remark: we shall later increase some of these prices.)
\begin{itemize}
\item Items $e$ of value $v(e) > 1$ are referred to as {\em huge}, and are priced $p(e) = 1$.
\item
Items $e$ of value $\frac{2}{3} \le v(e) \le 1$ are referred to as {\em large}, and are priced $p(e) = \frac{2}{3}$.
\item Items $e$ of value $\frac{1}{3} \le v(e) < \frac{2}{3}$ are referred to as {\em medium}, and are priced $p(e) = \frac{1}{3}$.
\item Items $e$ of value $v(e) < \frac{1}{3}$ are referred to as {\em small}, and are priced $p(e) = v(e)$.
\end{itemize}
Let us first confirm that no bundle is priced more than~1. This holds for bundles with at most two items, because $v(e_k) < \frac{2}{3}$, and no bundle can contain two items of price $\frac{2}{3}$.
It remains to consider bundles with at least three items.
At least one bundle that contains more than a single item has value above $\frac{4}{3}$, as otherwise the truncated proportional share is strictly smaller than $\frac{4}{3}$. Let $B_j$ be such a bundle. Necessarily, $B_j$ has either two or three items. Let $e_s$ denote the smallest item in $B_j$, and note that necessarily $v(e_s) = \frac{1}{3} + \delta$ for some $\delta > 0$. If $B_j$ contains three items, then necessarily $\delta \le \frac{1}{6}$.
Likewise, $\delta \le \frac{1}{6}$ also if $B_j$ has two items. This is because $v(e_k) < \frac{2}{3}$, implying that $j < k$, and consequently $e_s$ has smaller value than the second item in $B_k$. That is, $v(e_s) \le v(e_{2n - k + 1}) \le \frac{1}{2}$.
Consider any bundle $B_i$ with three items (which may also be $B_j$ itself, if $B_j$ has three items). If $B_i$'s second item was added before $e_s$ (this is the case for $B_j$), then the value of this second item is above $v(e_s) > \frac{1}{3}$, meaning that every item in $B_i$ has value smaller than $\frac{2}{3}$, and hence each of its three items has price at most $\frac{1}{3}$. If $B_i$'s second item was added after $e_s$, then the first item in $B_i$ has value above $1 - \delta$ (since this is a lower bound on the value of $B_j\setminus\{e_s\}$) and price $\frac{2}{3}$, and the sum of prices of the remaining two items in the bundle is at most $2\delta \le \frac{1}{3}$.
Consider now any bundle $B_i$ with four or more items. At most two items in $B_i$ have value larger than that of $e_s$. Regardless of whether there are two such items or only one, their sum of values is at least $1 - \delta$ (as the other items arrive after $e_s$, and $v(B_i) \ge 1 - \delta$ at the time of arrival of $e_s$), and their sum of prices is $\frac{2}{3}$. The sum of values (and hence also prices) of the remaining items in $B_i$ is at most $2\delta \le \frac{1}{3}$.
Recall that $v(B_q) = 1$. We claim that the price of $B_q$ is strictly smaller than~1. For every item $e$ we have that $p(e) \le v(e)$, with equality only if either $v(e) = \frac{2}{3}$ or $v(e) \le \frac{1}{3}$. Hence $p(B_q) \le 1$. Assume now for the sake of contradiction that $p(B_q) = 1$. Necessarily $v(e_q) > \frac{1}{3}$ (as otherwise $v(e_s) \le \frac{1}{3}$), and hence $v(e_q) = \frac{2}{3}$. This means that all other items of $B_q$ arrive after $e_s$ (as their total value is $\frac{1}{3}$ which is smaller than $v(e_s)$). But then the value in $B_q$ at the time that $e_s$ arrives (which is $\frac{2}{3}$) is smaller than the value in $B_j$ at that time (which is at least $1 - \delta \ge \frac{5}{6}$), contradicting the assumption that greedy-EFX places $e_s$ in $B_j$.
This completes the proof that the sum of prices is strictly smaller than $n$.
Given $\delta$ as defined above, we now increase the above prices, preserving the property that the price of every bundle is at most~1, and that the price of $B_q$ is strictly less than~1.
\begin{itemize}
\item
For every medium item $e$ of value $\frac{2}{3} - \delta < v(e) < \frac{2}{3}$, we increase its price to $p(e) = \frac{1}{2}$. We refer to these items as {\em special} items. Observe that if the first item in a bundle is special, then the second item has value at least $v(e_s) = \frac{1}{3} + \delta$, and hence the bundle has two items whose sum of prices is at most~1. (If the bundle is $B_q$, the second item cannot be special and the sum of prices remains strictly less than~1.) Also, there cannot be a bundle in which the first item is large and the second item is special, because $e_k$ is not large, and $e_{2n-k+1}$ is not special.
\item
The price of every small item $e$ is increased to $p(e) = \frac{v(e)}{6\delta}$ (making the price larger than the value). Note that small items belong to bundles in which the other items (or single item) have total value at least $1 - \delta$ (and total price $\frac{2}{3}$). As such, the total value of small items in the bundle is at most $2\delta$, and the total price of the bundle remains at most~1. (If $B_q$ contains small items, then their sum of values is at most $\delta$, and consequently $B_q$'s price remains strictly below~1.)
\end{itemize}
We now verify that agent $q$ cannot afford a bundle of value $\frac{4}{3}$.
The agent can afford at most one large item, paying $\frac{2}{3}$. If she does so, she has strictly less than $\frac{1}{3}$ budget left, and can buy small items of value strictly less than $\frac{1}{3}$, and the total value cannot reach $\frac{4}{3}$.
If the agent buys two medium items that are not special, she gets value at most $\frac{4}{3} - 2\delta$. The agent then has a budget strictly less than $\frac{1}{3}$ left, with which she can buy only small items. This gives additional value of strictly less than $\frac{1}{3} \cdot 6\delta = 2\delta$, and hence the total value purchased is strictly less than $\frac{4}{3}$.
If the agent buys one special item and one medium item that is not special, she gets value at most $\frac{4}{3} - \delta$. The agent then has a budget strictly less than $\frac{1}{6}$ left, with which she can buy only small items. This gives additional value of strictly less than $\frac{1}{6} \cdot 6\delta = \delta$, and hence the total value purchased is strictly less than $\frac{4}{3}$.
\end{proof}
{The proof of Theorem~\ref{thm:BK17a} follows from Claims~\ref{claim:1/3}, \ref{claim:2/3} and~\ref{claim:middleThird}, together with Claims~\ref{claim:atMost2}, \ref{claim:5/6} and~\ref{claim:middle} of Lemma~\ref{lem:BK17}.}
\end{proof}
The analysis of Greedy-EFX in the proof of Theorem~\ref{thm:BK17a} is tight up to low order terms. In fact, this holds even with respect to the maximin share.
\begin{proposition}
\label{pro:BK17a}
For every $\delta > 0$, there is an instance with identical additive valuation functions in which greedy-EFX does not provide an agent more than a $\frac{3}{4} + \delta$ fraction of her maximin share. (In our example, $n$ is linear in $\frac{1}{\delta}$.)
\end{proposition}
\begin{proof}
For a given value of $\delta > 0$, choose $n$ sufficiently large so that $\epsilon$ in the example below satisfies $\epsilon \le \frac{4}{5}\delta$.
There are $3n-1$ items, and $\epsilon = \frac{1}{8n-3}$. (Under this choice of $\epsilon$, the numbers in the following example work out.) The first item has value $\frac{1}{2} + \frac{\epsilon}{2}$, thereafter item values decrease by $\epsilon$ until they reach value $\frac{1}{4} + \frac{3}{4}\epsilon$ for item $2n$. All remaining $n-1$ items have value $\frac{1}{4} + \frac{3}{4}\epsilon$. Greedy-EFX will put the first $2n$ items in different bundles (each bundle will contain the pair of items $i$ and $2n+1-i$), each of value $\frac{3}{4} + \frac{5}{4}\epsilon$. Thereafter, only $n-1$ items remain, so one bundle will have a final value of $\frac{3}{4} + \frac{5}{4}\epsilon \le \frac{3}{4} + \delta$. The maximin share is~1, obtained by putting the first two items in the same bundle and using greedy-EFX for the remaining bundles.
\end{proof}
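As a concrete check (an illustration of ours, not part of the original analysis), the following Python sketch simulates the greedy-EFX behaviour for identical additive valuations---items are scanned in non-increasing order of value and each is added to a bundle of currently smallest value, as described before Claim~\ref{claim:middleThird}---on the instance of Proposition~\ref{pro:BK17a}; the function names are ours.
\begin{verbatim}
from fractions import Fraction

def greedy_efx(values, n):
    # Identical additive valuations: scan items in non-increasing order
    # and add each one to a bundle of currently smallest value.
    bundles = [Fraction(0)] * n
    for v in sorted(values, reverse=True):
        i = min(range(n), key=lambda j: bundles[j])
        bundles[i] += v
    return bundles

n = 50                                 # n should be linear in 1/delta
eps = Fraction(1, 8 * n - 3)
values = [Fraction(1, 2) + eps / 2 - (i - 1) * eps for i in range(1, 2 * n + 1)]
values += [Fraction(1, 4) + 3 * eps / 4] * (n - 1)   # 3n-1 items in total

worst = min(greedy_efx(values, n))
print(worst, float(worst))   # 3/4 + 5*eps/4, while the maximin share is 1
\end{verbatim}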
{Proposition \ref{pro:BK17a} shows that for identical additive valuations,
greedy-EFX might not provide an agent more than roughly a $\frac{3}{4}$ fraction of her maximin share,
even though there exists an allocation that gives every agent her maximin share (as the valuations are identical).}
\begin{remark}
For additive valuations, the share notions considered in this paper satisfy the chain
$$\mbox{proportional} \;\geq\; \mbox{truncated proportional} \;\geq\; \mbox{AnyPrice} \;\geq\; \mbox{$b$-pessimistic share} \;\geq\; \mbox{maximin}(b) \;\geq\; \frac{1}{2}\cdot \mbox{AnyPrice},$$
and every inequality is strict for some instances. The chain follows from results established in this paper, in particular:
\begin{itemize}
\item The AnyPrice share is at least as large as the $b$-pessimistic share (as it is at least as large as the $\frac{p}{q}$-share for every $\frac{p}{q}\leq b$).
\item The truncated proportional share is at least as large as the AnyPrice share.
\item Twice the maximin share is at least as large as the AnyPrice share.
\end{itemize}
\end{remark}
\section{Beyond Goods: Chores and Mixed Manna}\label{app:chores}
In this Appendix we present an extension of our APS definition to allocation instances with \textit{chores} and with \textit{mixed manna}, and briefly discuss this extension.
Consider an agent $i$ with a valuation function $v_i$ over the set ${\cal{M}}$ of items {that is normalized ($v_i(\emptyset) = 0$)}.
An item $j$ is a {\em good} with respect to $v_i$ if $v_i(S \cup \{j\}) \ge v_i(S)$ for every set $S \subseteq {\cal{M}} \setminus \{j\}$.
An item $j$ is a {\em bad} (also referred to as a {\em chore}) with respect to $v_i$ if $v_i(S \cup \{j\}) \le v_i(S)$ for every set $S \subseteq {\cal{M}} \setminus \{j\}$. If neither all items are required to be goods nor all items are required to be bads (with respect to $v_i$), we refer to ${\cal{M}}$ as being a {\em mixed manna} for agent $i$ with valuation function $v_i$.
Clearly, settings with only goods or only bads are special cases of mixed manna.
Recall that for goods we defined valuation functions to be set functions that are normalized, namely, $v_i(\emptyset) = 0$, and monotone non-decreasing. Once chores are involved, the requirement that valuation functions are non-decreasing is removed. The value of an item or of a (non-empty) bundle might be negative.
Agent $i$ has entitlement $b_i$, where $0 < b_i \le 1$. Intuitively, when all items are {\em goods}, the entitlement represents what fraction of the grand bundle of goods the agent is entitled to receive.
When items are {\em chores}, the entitlement of the agent represents what fraction of the chores {should be assigned to} the agent. {As items are indivisible, some deviations from these exact fractions might be necessary.}
We now extend the definition of the AnyPrice share to the setting of mixed manna. Recall that two definitions were presented for the APS in case of goods. We first present Definition~\ref{def:APSmixed} which is the natural extension of Definition~\ref{def:anyprice-bundles}, replacing the inequality in the last constraint by an equality. {That is, instead of requiring that for every item $j\in {\cal{M}}$ it holds that $\sum_{T: j\in T} \lambda_T \geq b_i$, we now require that
$\sum_{T: j\in T} \lambda_T = b_i$.}
\begin{definition}
\label{def:APSmixed}
Consider a setting in which agent $i$ with valuation $v_i$ has entitlement $b_i$ to a set of indivisible items ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of $i$, denoted by $\anypricei$,
is the maximum value $z$ she can get
by coming up with non-negative
weights $\{\lambda_T\}_{T\subseteq {\cal{M}}}$ that total to $1$ (a distribution over sets),
such that any set $T$ of value below $z$ has a weight of zero, and
every item appears in sets of total weight exactly $b_i$:
$$ \anypricei = \max_{z} z $$ \\ subject to the following set of constraints being feasible for $z$:
\begin{itemize}
\item $\sum_{T\subseteq {\cal{M}}} \lambda_T=1$
\item $\lambda_T\geq 0$ for every bundle ${T\subseteq {\cal{M}}}$
\item $\lambda_T= 0$ for every bundle ${T\subseteq {\cal{M}}}$ s.t. $v_i(T)<z$
\item $\sum_{T: j\in T} \lambda_T = b_i$ for every item $j\in {\cal{M}}$
\end{itemize}
\end{definition}
Recall that for the case of goods the APS is at least as large as the MMS. This property also holds for mixed manna (and thus for chores).
This is because Definition~\ref{def:APSmixed} defines the APS as a solution to a maximization problem that is a relaxed (fractional) version of the (integral) maximization problem defining the MMS.
For mixed manna, the APS may be either positive or negative (or zero). We present some observations that are useful in determining its sign and some of its properties.
\begin{proposition}
\label{pro:positiveManna}
{Let ${\cal{M}}$ be mixed manna for agent $i$ with valuation function $v_i$.} If $v_i({\cal{M}}) \ge 0$ then $\anypricei \ge 0$.
\end{proposition}
\begin{proof}
{The LP of Definition~\ref{def:APSmixed} is feasible for $z=0$. This can be seen by} taking bundle ${\cal{M}}$ with coefficient $b_i$, and the empty bundle $\emptyset$ with coefficient $1 - b_i$.
\end{proof}
Fixing a valuation function $v_i$, if there are only goods, then the APS is a non-decreasing function of the entitlement. Likewise, if there are only chores, then the APS is a non-increasing function of the entitlement. However, for the case of mixed manna, the APS need not be a monotone function of the entitlement. Consider for example an instance (that is non-additive) with ${\cal{M}} = \{a,b\}$, where $v_i(a), v_i(b) > 0$, and $v_i({\cal{M}}) < 0$. Then if $b_i < \frac{1}{2}$ the APS is zero (the LP in Definition~\ref{def:APSmixed} can be supported on the three bundles $\{a\}, \{b\}, \emptyset$), if $b_i = \frac{1}{2}$ the APS is positive (the support is $\{a\}, \{b\}$), and if $b_i > \frac{1}{2}$ the APS is negative (the bundle ${\cal{M}}$ must be in the support of the LP solution).
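To make Definition~\ref{def:APSmixed} concrete, the following Python sketch (an illustration of ours, using \texttt{scipy}; it enumerates all bundles and is therefore only meant for tiny instances) computes the APS by scanning the candidate values $z$ in decreasing order and checking feasibility of the corresponding LP. On the two-item instance above (with, say, $v_i(a)=v_i(b)=2$ and $v_i({\cal{M}})=-1$) it returns $0$, a positive value and a negative value for $b_i<\frac{1}{2}$, $b_i=\frac{1}{2}$ and $b_i>\frac{1}{2}$ respectively.
\begin{verbatim}
from itertools import chain, combinations
from scipy.optimize import linprog

def all_bundles(items):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def aps(items, v, b):
    # Largest z for which some distribution over bundles of value >= z
    # covers every item with total weight exactly b (Definition def:APSmixed).
    bundles = all_bundles(items)
    for z in sorted({v(T) for T in bundles}, reverse=True):
        allowed = [T for T in bundles if v(T) >= z]
        A_eq = [[1.0] * len(allowed)] + \
               [[1.0 if j in T else 0.0 for T in allowed] for j in items]
        b_eq = [1.0] + [float(b)] * len(items)
        res = linprog([0.0] * len(allowed), A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None))
        if res.success:
            return z
    return None

value = {frozenset(): 0, frozenset('a'): 2, frozenset('b'): 2,
         frozenset('ab'): -1}
v = lambda S: value[frozenset(S)]
for b in (0.4, 0.5, 0.6):
    print(b, aps(['a', 'b'], v, b))   # 0, 2, -1: not monotone in b
\end{verbatim}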
While for mixed manna the APS is not necessarily monotone in the entitlement, we next show that it is always a quasiconcave function of the entitlement.
\begin{remark}
\label{rem:quasiconcave}
A proof similar to that of Proposition~\ref{pro:positiveManna} shows that for a fixed valuation function $v_i$, the APS is a {\em quasiconcave} function of the entitlement $b_i$. Namely, for every $0 \le b_1 < b_2 < b_3 \le 1$ it holds that $\anyprice{b_2}{v_i}{{\cal{M}}} \ge \min\left[\anyprice{b_1}{v_i}{{\cal{M}}},\anyprice{b_3}{v_i}{{\cal{M}}}\right]$. This follows by considering optimal solutions, $\{\lambda_T^1\}_T$ for $b_1$ and $\{\lambda_T^3\}_T$ for $b_3$, in Definition~\ref{def:APSmixed}. Then $\{\mu\lambda_T^1+(1-\mu)\lambda_T^3\}_T$ with $\mu = \frac{b_3-b_2}{b_3-b_1}$ is a feasible solution for $b_2$, and the bundle of minimum value in its support is in the support of at least one of the two optimal solutions (for $b_1$ and $b_3$).
\end{remark}
Quasiconcavity implies that for every valuation function $v_i$ there is a threshold entitlement $\tau$, such that $\anypricevali{b_i}$ is a weakly increasing function of $b_i$ for $b_i\in [0,\tau]$, and weakly decreasing for $b_i \in [\tau,1]$. In particular, if the APS is negative for entitlement $b_i$, it remains negative for every entitlement $b_i^+ > b_i$.
\begin{proposition}
\label{pro:worstItem}
{For any {normalized valuation $v_i$ and any} entitlement $b_i$,} if there are only chores, then $\anypricei \le \min_{j\in {\cal{M}}} v_i(j)$.
\end{proposition}
\begin{proof}
Let $z = \anypricei$. Then the LP of Definition~\ref{def:APSmixed} is feasible for this value of $z$. Let $j$ be the item of smallest (most negative) value under $v_i$. Then in the feasible solution of the LP, there must be some bundle $T$ with $\lambda_T > 0$ that contains $j$ (as $b_i > 0$). Hence $v_i(T) \ge z$. As there are only chores, $v_i(T) \le v_i(j)$.
\end{proof}
\begin{proposition}
\label{pro:APSversusProp}
{For any entitlement $b_i$,} if $v_i$ is an additive valuation (allowing mixed manna), then the AnyPrice share cannot be larger than the proportional share: $\anypricei \le b_i \cdot v_i({\cal{M}})$.
\end{proposition}
\begin{proof}
Let $z$ be a value for which the LP of Definition~\ref{def:APSmixed} is feasible. Then every bundle in the support of the underlying distribution has value at least $z$, whereas by additivity the average value of a bundle drawn from this distribution is exactly $b_i \cdot v_i({\cal{M}})$ (as every item appears in sets of total weight $b_i$). Hence $z \le b_i \cdot v_i({\cal{M}})$.
\end{proof}
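Reusing the \texttt{aps} sketch given after Definition~\ref{def:APSmixed} (again an illustration of ours, not part of the original argument), one can also check Propositions~\ref{pro:worstItem} and~\ref{pro:APSversusProp} on a small additive chores instance:
\begin{verbatim}
# Three chores of disutilities 1, 2, 4, i.e. (negative) values -1, -2, -4.
chore_value = {'a': -1, 'b': -2, 'c': -4}
v_add = lambda S: sum(chore_value[j] for j in S)   # additive valuation
for b in (1/3, 1/2, 2/3):
    print(b, aps(list(chore_value), v_add, b))
# Each printed APS is at most min_j v(j) = -4 (Proposition pro:worstItem)
# and at most the proportional share b*v(M) = -7b (Proposition pro:APSversusProp).
\end{verbatim}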
If all items are chores (even without assuming additivity), then the value of every non-empty bundle is negative.
It is convenient to view the absolute values of these negative values as positive {\em disutilities}. Thus we replace the non-positive valuation function $v_i$ by the non-negative disutility\ function $c_i$, where for every $S \subseteq {\cal{M}}$ we have $c_i(S) =-v_i(S)$. With respect to disutilities, the APS is positive, and the agents wish to receive bundles of small disutility. There are allocation instances in which every allocation gives some agent a bundle of disutility\ larger than her APS. (This follows from a similar result concerning MMS for chores and agents with equal entitlements~\citep*{ARSW17}, because with equal entitlements, the APS disutility\ is never larger than the MMS disutility.) Consequently, we wish to find allocations that give every agent a bundle of disutility\ no more than $\rho$ times her APS, with $\rho$ being as small as possible. {For $\rho = 2$, this can be achieved for additive valuations, using standard approaches.}
\begin{proposition}
\label{pro:BoBWchores}
Consider an allocation instance with $n$ agents with valuations $(v_1,v_2,\ldots,v_n)$ and
arbitrary entitlements $(b_1,b_2,\ldots,b_n)$ for a set ${\cal{M}}$ of indivisible chores. If all disutility\ functions are additive, then there is an allocation in which every agent $i$ gets a bundle of disutility\ at most $2\cdot \anypricei$.
\end{proposition}
\begin{proof}
Consider a fractional allocation in which every agent $i$ (with entitlement $b_i$) gets a fraction $b_i$ of every item $j$. This fractional allocation gives every agent her proportional share. It is well known that every fractional allocation can be rounded to give an integral allocation in which every agent gets a bundle whose disutility\ is the same as the disutility\ of her original fractional bundle, up to the disutility\ of one item~\citep*{LST90}. As her fractional disutility\ is at most her APS disutility (Proposition~\ref{pro:APSversusProp}, after transforming values to disutilities) and every item has disutility\ at most the APS disutility (Proposition~\ref{pro:worstItem}, after transforming values to disutilities), the proposition follows.
\end{proof}
For completeness, we present the price based definition of the APS for chores (in analogy to Definition~\ref{def:anyprice-prices} for goods).
{The price based definition is derived by considering the dual of the fractional MMS definition.
We have shown the equivalence of the two definitions for the case of goods in Proposition~\ref{prop:anyprice-eqe}. Similar arguments (which are omitted) show the equivalence in the case of chores.}
\begin{definition}
\label{def:APSchores}
Consider a setting in which agent $i$ with disutility\ function $c_i$ has entitlement $b_i$ to a set of indivisible chores ${\cal{M}}$.
The \emph{AnyPrice share (APS)} of agent $i$, denoted by $\anypriceic$, is the disutility\ she can limit herself to whenever the chores in ${\cal{M}}$ are adversarially priced with a total price of $1$, and she picks her bundle of chores among those bundles of total price at least $b_i$:
$$\anypriceic = \max_{(p_1,p_2,\ldots,p_m)\in {\cal{P}}}\ \ \min_{S\subseteq {\cal{M}}} \left\{c_i(S) \Big | \sum_{j\in S} p_j\geq b_i\right\}$$
\end{definition}
Observe that both in Definition~\ref{def:anyprice-prices} (for goods) and in Definition~\ref{def:APSchores} (for chores), all prices are non-negative. The difference between the definitions is that for the case of goods, the feasible bundles for an agent $i$ are those priced at most $b_i$, whereas for chores the feasible bundles are those priced at least $b_i$.
We note that the APS in Definition~\ref{def:APSchores} measures disutility, and thus has a positive value which equals minus the value given by Definition~\ref{def:APSmixed}.
For allocation instances that involve a mixture of goods and chores, there are instances in which agents have additive valuations, equal entitlements, the MMS of every agent is strictly positive, whereas in every allocation (that allocates all items) some agent receives a bundle of value at most~0 \citep*{KMT2020}. As the APS is at least as large as the MMS, it follows that for mixed manna it is not always possible to find an allocation that gives every agent a positive fraction of her APS.
\end{document} |
\begin{document}
\crdata{}
\title{Elementary Integration of Superelliptic Integrals}
\newfont{\authfntsmall}{phvr at 11pt}
\newfont{\eaddfntsmall}{phvr at 9pt}
\numberofauthors{3}
\author{
\alignauthor
Thierry Combot\\
\affaddr{Univ. de Bourgogne (France)}\\
\email{[email protected]}
}
\date{\today}
\maketitle
\begin{abstract}
Consider a superelliptic integral $I=\int P/(Q S^{1/k}) dx$ with $\mathbb{K}=\mathbb{Q}(\xi)$, $\xi$ a primitive $k$th root of unity, $P,Q,S\in\mathbb{K}[x]$, where $S$ has simple roots and degree coprime with $k$. Denote by $d$ the maximum of the degrees of $P,Q,S$, by $h$ the logarithmic height of the coefficients, and by $g$ the genus of the curve $y^k=S(x)$. We present an algorithm which solves the elementary integration problem for $I$ generically in $O((kd)^{\omega+2g+1} h^{g+1})$ operations.
\end{abstract}
\noindent
{\bf Categories and Subject Descriptors: 68W30} \\
\noindent {\bf Keywords: Symbolic Integration, Divisors, Superelliptic curves} \\
A superelliptic integral is an integral of the form
$$I(x)=\int \frac{P(x)}{Q(x) S(x)^{1/k}} dx$$
with $P,Q,S\in\mathbb{K}[x]$, and we can assume the multiplicities of the roots of $S$ to be $<k$. We add \emph{the technical condition} that the roots of $S$ are simple and its degree is coprime with $k$. If the degree is a multiple of $k$, then infinity is a regular point, and a root of $S$ can be sent to infinity by a Moebius transformation, which however often requires a field extension of the coefficients. The case $k=1$ is well known \cite{1} and always elementary integrable, and thus we can assume $k\geq 2$. When $k=2$, the roots of $S$ are always simple, and the integral $I$ is hyperelliptic. For general $k$, the integral $I$ is called superelliptic, and is, although of a specific form, widely encountered; see for example \cite{2,8}, where all ``random'' examples were in fact superelliptic. The purpose of this article is to present an efficient algorithm to decide if $I$ is an elementary integral.\\
\begin{defi}
A superelliptic integral is called
\begin{itemize}
\item elementary if $I$ can be written
$$I(x)=G_0(x)+\sum\limits_{i} \lambda_i \ln G_i(x), \; G_i\in \overline{\mathbb{Q}}(x,S^{1/k})$$
\item reduced if $Q$ is square free coprime with $S$ and $\deg P <\deg Q+k^{-1} \deg S-1$
\item of first kind if reduced and $Q$ is constant.
\item of torsion if it is a sum of an elementary integral and a first kind integral.
\end{itemize}
\end{defi}
The integration process starts with a Hermite reduction, which consists in finding an algebraic function $G_0$ such that $\partial_x G_0-PQ^{-1}S^{-1/k}$ has only simple poles and no pole at infinity. If such a function can be found, the resulting integral is reduced. This part is well known \cite{2,3}, and in our case is even simpler as the notion of integral basis can be avoided. This part and its complexity will be recalled at the beginning of Section~$2$.\\
Much more difficult is to find the logarithmic part. An integral of the first kind is never elementary except $0$. If we are able to find a sum $L$ of logarithms of algebraic functions such that their residues coincide with those of the integral, the difference $I-L$ is of first kind and thus $I$ is of torsion, and then $I$ is elementary if and only if $I-L=0$. This is where expedients such as heuristic approaches are used to speed up the computation \cite{8}. The base approach is given by Trager \cite{1}, and improved by Bertrand in the hyperelliptic case. Three major difficulties arise in these approaches.\\
\textbf{Problem 1}. For a reduced superelliptic integral, the re\-si\-dues $\lambda$ are roots of the polynomial
$$R(\lambda)=\hbox{resultant}_x(P^k-\lambda^k Q'^k S,Q),$$
which factorizes in $\mathbb{K}[\lambda]$ under the form
$$R(\lambda)=\prod_{i=1}^l R_i(\lambda^{k_i}),\quad k_i \mid k.$$
We say that $R$ is \emph{generic} if for all $i$ the Galois group of $R_i(\lambda^{k_i})$ is the maximal one, $\mathbb{Z}_{k_i}^{\deg R_i} \ltimes S_{\deg R_i}$. Following Trager, we need to compute a $\mathbb{Q}$-basis of these residues. However, generically $l=1$ and $R$ is generic, and thus the splitting field of $R$ has degree $k^dd!$. Worse, this computation has to be done at the beginning, even if $I$ turns out not to be elementary in the end. The generic case is thus typically intractable \cite{5}, which is probably the main reason this algorithm is still not implemented in Maple and Mathematica. Theorem \ref{thm1} presents a similar but much cheaper decomposition, which is sufficient to conclude in the generic case.
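As an illustration (ours, with \texttt{sympy}), the residue polynomial $R(\lambda)$ is a plain resultant computation; for instance, for the integral $\int (x^2+x-3)^{-1}(x^2+118)^{-1/3}dx$ that reappears in a later example:
\begin{verbatim}
from sympy import symbols, resultant, diff, factor

x, lam = symbols('x lambda')
k = 3
P, Q, S = 1, x**2 + x - 3, x**2 + 118   # 1/((x^2+x-3)*(x^2+118)^(1/3))
R = resultant(P**k - lam**k * diff(Q, x)**k * S, Q, x)
print(factor(R))   # polynomial in lambda; its roots are the candidate residues
\end{verbatim}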
\begin{thm}\label{thm1}
Given a field extension $\mathbb{K}[\alpha]$ and a reduced superelliptic integral $I$, the algorithm \underline{\sf TraceIntegrals} computes a set of integrals $\mathcal{S}_\alpha$ such that
\begin{itemize}
\item The residues of the integrals are in $\mathbb{Z}[\xi]$.
\item If the integral $I$ is elementary, then all integrals are of torsion.
\end{itemize}
If $R$ is generic, $I$ is a linear combination of the $(\mathcal{S}_{\alpha})_{R(\alpha)=0}$ and an integral of the first kind.
\end{thm}
The generic condition is sufficient but not necessary, and no examples were found for which $I$ could not be decomposed as such. Taking a larger $\mathbb{K}[\alpha]$ containing simultaneously all the roots of $R$ reduces to Trager's original approach and thus provably works in all cases, but the cost then rises to $O((k^dd!)^3)$. Remark however that even if the decomposition is not possible, the integrals $\mathcal{S}_\alpha$ being of torsion is still a necessary condition for elementary integration, and thus gives a quick test to prove that an integral is non-elementary.\\
A particular behaviour of superelliptic integrals is the Galois action on the $k$-th root, which multiplies the integral by $\xi$. Because of this, the space of residues of a superelliptic integral is always invariant by $\xi$. Thus our decomposition is similar to Trager's one but done on $\mathbb{K}$ instead of $\mathbb{Q}$, as keeping this invariance by multiplication by $\xi$ is essential for the following.\\
\textbf{Problem 2}. For each integral of $\mathcal{S}_\alpha$, we need to decide whether it is of torsion. Possibly more than one log is necessary, but they have to respect a precise pattern to ensure the invariance by $\xi$, leading us to introduce
$$L_S(P)= \sum\limits_{i=0}^{k-1} \xi^i\ln P\left(x,\xi^i S^{1/k}\right)$$
with $P\in\mathbb{C}[y]_{\leq k-1}[x]$. With this notation, a single function $L_S$ suffices (see Propositions \ref{prop1},\ref{prop1b}). Trager's approach is to build a divisor on the superelliptic curve, and then to test its principality using linear algebra. Bertrand uses a nice representation of divisors on hyperelliptic curves to make the process more efficient. However, both are polynomial in the size of the output, and the degree of $P$ depends on the residues, and so can be extremely large, as the residues are not even bounded by a function of $d$. It happens however that testing principality of a divisor can be done in time logarithmic in its height.
\begin{thm}\label{thm2}
Consider a reduced superelliptic integral $I$ with residues in $\mathbb{Z}[\xi]$. If $I= L_S(P)$ up to an integral of the first kind for a polynomial $P$, then it can be written
$$I= \sum\limits_{i=0}^l (-2)^i L_S(P_i)$$
up to a first kind integral, where $\deg_x P_i \leq \deg Q +\frac{1}{k}\deg S$ and $l\leq \underset{r \hbox{ residue}}{\max} (\log_2 \mid r \mid ) $. Algorithm \underline{\sf JacobianReduce} computes this decomposition in time $O((kd)^\omega l)$ if it exists.
\end{thm}
Remark that simplifying the sum to have just one $L_S$ function would have an exponential cost in $l$; it is thus thanks to this specific representation of the solution that such a fast algorithm is possible. Testing principality of a divisor is equivalent to testing if its reduction in the Jacobian of the superelliptic curve is $0$. The technical condition allows us to have an efficient representation of divisors. If the support of the divisor is small but its height is large, then a fast multiplication technique in the Jacobian is very efficient \cite{4}. This was overlooked by Bertrand \cite{3}. This allows a fantastic speed up, as Trager's and Bertrand's algorithms were exponential in the logarithmic height, and ours is linear. However, the coefficient size grows fast, which decreases the usefulness of the algorithm, except when computed modulo a prime number. This gives a quick test for proving that the divisor is not principal, which is generically the case as, except in genus $0$, most superelliptic integrals are not elementary.\\
\textbf{Problem 3}. Theorem \ref{thm2} does not solve the elementary integration problem, as we could have $I=\frac{1}{N} L_S(P)$ with $N\in\mathbb{N},\; N\geq 2$, and this would not be found by Theorem \ref{thm2}. In this case, the corresponding divisors are not principal but of torsion, i.e. a multiple of the divisor is principal. This problem is solved by Trager using two ``good reduction primes'' $p$ \cite{1}, and then deducing a unique possible candidate for $N$. We know that all divisors are of torsion modulo $p$. Thus given a divisor and two good reductions $p,q$, we test the principality of multiples of the divisor modulo $p$ and $q$ until we find a principal multiple. A large prime is used to confirm this candidate with high probability.
\begin{thm}\label{thm3}
Consider a reduced superelliptic integral $I$ with residues in $\mathbb{Z}[\xi]$ and with coefficients in $\mathbb{K}(\alpha)$, where $\alpha$ is algebraic of degree $r$. Denote by $\Delta$ the discriminant of the square free part of $QS$. If $I=\frac{1}{N} L_S(P)$ for a polynomial $P$ and $N\in\mathbb{N}^*$, then algorithm \underline{\sf TorsionOrder} finds a non zero multiple of $N$ which is expected to be less than $(1+\sqrt{r\phi(k)\ln(k\Delta)})^{2g}$
in time $\tilde{O}( (kd)^{\omega+g} h^{g+1} r^g)$. Else algorithm \underline{\sf TorsionOrder} returns $0$ with probability $1-\epsilon$.
\end{thm}
The probabilistic part is important because having a false positive can be very costly in characteristic zero, as the binary complexity of \underline{\sf JacobianReduce} is not logarithmic in divisor height.
\section{Integral Decomposition}
\subsection{Hermite reduction}
The Hermite reduction for algebraic integrals is described in \cite{2,3,8}. In our simpler superelliptic case, let us recall it in order to make its complexity precise. We denote by $\hbox{sf}(Q)$ the square free part of $Q$.\\
\noindent\underline{\sf HermiteReduction}\\
\textsf{Input:} A superelliptic integral $P/(QS^{1/k})$.\\
\textsf{Output:} A rational function $G\in\mathbb{K}(x)$ such that the integral
$$\int \frac{P}{Q S^{1/k}}-\partial_x \left( \frac{G}{S^{1/k}}\right) dx$$
is reduced or ``FAIL''.
\begin{enumerate}
\item Note $\tilde{Q}=Q/\hbox{sf}(Q),\; \hat{Q}=\hbox{sf}(Q)/(\hbox{sf}(Q) \wedge S)$ and $G=T/\tilde{Q}$ with $T$ an unknown polynomial with $\deg T\leq \deg P -\deg \hbox{sf}(Q) +1$
\item Solve the linear system coming from the condition
$$ \!\!\!\!\!\! \!\!\!\!\!\!\left(\frac{P}{QS^{1/k}}-\partial_x \left(\frac{T}{\tilde{Q} S^{1/k}}\right)\right) S^{1/k}\hat{Q} \in \mathbb{K}_{< \deg \hat{Q}+\frac{1}{k} \deg S-1} [x]$$
\item If one solution return $T/\tilde{Q}$ else ``FAIL''
\end{enumerate}
Such a reduction is not always possible due to the condition on the degree at infinity; for example, elliptic integrals of the second kind are not reducible by this process.
\begin{prop}
If $\int \frac{P}{Q S^{1/k}}$ is elementary, the algorithm $\;$ \underline{\sf HermiteReduction} is successful and runs in $O(d^\omega)$.
\end{prop}
\begin{proof}
If $\int \frac{P}{Q S^{1/k}}$ is elementary, then it can be written as a sum of an algebraic function and logs of algebraic functions. Thus $\frac{P}{Q S^{1/k}}$ can be written as the derivative of an algebraic function plus an algebraic function with simple poles. Now if an extension of $\mathbb{C}(x,S^{1/k})$ were necessary, then letting the Galois group act on this extension would allow us to find another decomposition in $\mathbb{C}(x,S^{1/k})$. Thus we can write
$$\frac{P}{Q S^{1/k}}=\partial_x \left( \frac{T}{\tilde{Q}S^{1/k}}\right)+ \frac{R}{\hat{Q} S^{1/k}}$$
with $T,R\in\mathbb{C}[x]$.
Now multiplying both sides by $\hat{Q}S^{1/k}$, we obtain that $(\frac{P}{QS^{1/k}}-\partial_x \frac{T}{\tilde{Q} S^{1/k}})S^{1/k} \hat{Q}$ should be a polynomial. Knowing that $\frac{R}{\hat{Q} S^{1/k}}$ is the derivative of a sum of logarithms and as $k \wedge \deg S =1$, there are no residues at infinity and so the exponent at infinity is $<-1$. Thus we have $\deg R < \deg \hat{Q}+\frac{1}{k} \deg S -1$, which is exactly the condition of step $2$. Thus a solution $T$ as in step $2$ should be found, and step $3$ then returns $G=T/\tilde{Q}$.
The complexity comes from step $2$ where a system of size $\deg(PS\hat{Q})$ should be solved. This size is linear in $d$, and thus the complexity is $O(d^\omega)$.
\end{proof}
\subsection{Trace integrals}
Following Trager, if $\int P/(QS^{1/k}) dx$ is reduced and elementary, then it can be written
\begin{equation}
\int P/(QS^{1/k}) dx= \sum\limits_{i=1}^l \lambda_i \ln G_i(x)
\end{equation}
where $G_i\in\overline{\mathbb{K}}[x,S^{1/k}]$ and the $\lambda_i$ form a $\mathbb{Q}$ basis of the residues of $P/(QS^{1/k})$.
\begin{defi}
Consider field extension $\mathbb{L} \supset \mathbb{K}(\alpha) \supset \mathbb{K}$. The trace of $\beta \in\mathbb{L}$ over $\mathbb{K}(\alpha)$ is
$$\hbox{tr}_{\mathbb{K}(\alpha)}(\beta)=-\frac{\hbox{coeff}_{z^{\deg T-1}}(T)}{\deg T}$$
where $T\in \mathbb{K}(\alpha)[z]$ is the monic minimal polynomial of $\beta$.
\end{defi}
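For instance (an illustration of ours with \texttt{sympy}, taken over $\mathbb{Q}$ rather than a general $\mathbb{K}(\alpha)$), this normalised trace can be read off the monic minimal polynomial:
\begin{verbatim}
from sympy import symbols, sqrt, minimal_polynomial, Poly

z = symbols('z')

def trace(beta):
    # Minus the second-highest coefficient of the monic minimal
    # polynomial of beta, divided by its degree (as in the definition).
    T = Poly(minimal_polynomial(beta, z), z)
    return -T.nth(T.degree() - 1) / T.degree()

print(trace(sqrt(2)))       # 0
print(trace(1 + sqrt(2)))   # 1
\end{verbatim}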
\begin{prop}
We can build functions $P_i/(Q_i S^{1/k})$ without pole at $\infty$ such that $Q_i \mid Q$ and $\forall \beta \in Q^{-1}(0)$ we have
$$\hbox{res}_\beta \frac{P_i}{Q_i S^{1/k}}= d_i \hbox{coeff}_{\alpha^i} \hbox{tr}_{\mathbb{K}(\alpha)} \left(\hbox{res}_\beta \frac{P}{Q S^{1/k}}\right)$$
with $d_i\in\mathbb{N}^*$ chosen minimal such that all residues are in $\mathbb{Z}[\xi]$. Algorithm \underline{\sf TraceIntegrals} computes these functions in $O(d^2e^2)$ with $e=[\mathbb{K}(\alpha):\mathbb{K}]$.
\end{prop}
\noindent\underline{\sf TraceIntegrals}\\
\textsf{Input:} A reduced superelliptic integral $\int P/(QS^{1/k}) dx$ and $\alpha\in\overline{\mathbb{K}}$ with minimal polynomial $E\in\mathbb{K}[z]$.\\
\textsf{Output:} A list $P_j/(Q_jS^{1/k})$ with $P_j,Q_j\in\mathbb{K}(\alpha)[x]$ such that
\begin{equation}\label{eq3b}
\hbox{res}_\beta P_jQ_j^{-1}S^{-1/k}= d_j \hbox{coeff}_{\alpha^j} \hbox{tr}_{\mathbb{K}(\alpha)} \hbox{res}_\beta PQ^{-1}S^{-1/k} \in\mathbb{Z}[\xi],
\end{equation}
$\forall \beta\in Q^{-1}(0)$, $d_j\in\mathbb{N}^*$ minimal.
\begin{enumerate}
\item Factorize $Q=Q_1\dots Q_l$ in $\mathbb{K}(\alpha)$
\item Set $d_j=1$ for $j=0\dots \deg E-1$. For $i=1\dots l$ do
\item If the equation $y^k-S=0$ has a solution in $\mathbb{K}(\alpha)[x]/(Q_i)$, denote by $y\in\mathbb{K}(\alpha)[x]/(Q_i)$ one of its solutions, else go to the next $i$.
\item Compute $t_j=\hbox{coeff}_{\alpha^j}\hbox{tr}_{\mathbb{K}(\alpha)} P(x)/(Q'(x)y),\; j=0\dots$ $\deg E-1$, and reassign $d_j$ the minimal multiple of $d_j$ such that $d_j t_j\in \mathbb{Z}[\xi]$.
\item Solve equation $R_{i,j}(x)-t_jQ_i'(x)y=0 \hbox{ mod } Q_i$ with $\deg R_{i,j} \leq \deg Q_i-1$.
\item Return
$$\left\lbrace \left(d_j\sum\limits_{i=1}^l \frac{R_{i,j}}{Q_iS^{1/k}} \right)_{j=0 \dots \deg E-1} \right\rbrace$$
\end{enumerate}
\begin{proof}
Let us first check that algorithm \underline{\sf TraceIntegrals} computes the integrals. Consider a factor $Q_i$ obtained in step $1$ and $\beta$ one of its roots. Either $y^k-S$ is irreducible or it fully factorizes, as all its solutions in $y$ are the same up to a power of $\xi$. If it is irreducible, the residue $P(\beta)/(Q'(\beta)S(\beta)^{1/k})$ has a minimal polynomial in $\mathbb{K}(\alpha)[\lambda^k]$, and thus its second leading coefficient is $0$, and so the trace is $0$. If it factorizes, then step $5$ builds functions $R_{i,j}/(Q_iS^{1/k})$ whose residues are the coefficients $t_j$ in $\alpha$ of the trace. As $Q_i$ is irreducible, it has $\deg Q_i$ simple roots on which neither $Q_i'$ nor $S$ vanishes. Thus the equation $R_{i,j}(x)-t_jQ_i'(x)y=0 \hbox{ mod } Q_i$ is an interpolation problem and thus admits a unique solution with $\deg R_{i,j} \leq \deg Q_i-1$. The integer $d_j$ is the minimal one such that $d_jt_j\in \mathbb{Z}[\xi]$ and should be a multiple of the old $d_j$ to still satisfy this same condition for previous $i$'s. In step $6$, the sum is made over all factors $Q_i$, and as they have distinct roots, equation \eqref{eq3b} is satisfied.
We factorize a polynomial of degree $d$ in $\mathbb{K}(\alpha)$, which costs $\tilde{O}(de)$. Step $3$ uses factorization in an extension of degree $de$, so $\tilde{O}(d^2e)$. Step $4$ uses a resultant to compute the minimal polynomial, which costs $O(d^2e)$. Step $5$ computes $e$ interpolations which costs $O(e^2d^2)$. Thus the global cost is $O(e^2d^2)$.
\end{proof}
\begin{prop}\label{proptor}
If $\int P/(QS^{1/k}) dx$ is reduced and of torsion, then the integrals output by \underline{\sf TraceIntegrals} are of torsion.
\end{prop}
\begin{proof}
Let us denote by $\beta_1,\dots,\beta_p$ the residues of the integral at singular points on the superelliptic curve $\mathcal{C}=\{(x,y)\in \mathbb{C}^2,\; y^k-S(x)=0\}$. There exists $M\in M_{p,l}(\mathbb{Q})$ such that $\beta= M \lambda$. The $\lambda$'s, $\beta$'s and $\alpha$ are in some field extension $\mathbb{L}\supset \mathbb{K}$. Let us denote by $\tau_j$ the operator extracting the $\alpha^j$ coefficient of the trace over $\mathbb{K}(\alpha)$. As the trace is $\mathbb{Q}$-linear, so is $\tau_j$, and we have
$$\tau_j(\beta)=M \tau_j(\lambda).$$
As $\int P/(QS^{1/k}) dx= \sum\limits_{i=1}^l \lambda_i \ln G_i(x)$ up to an integral of first kind, each column of $M$ defines the list of residues of $\partial_x \ln G_i$ (and so its entries are in fact integers). Denote by $\pi_1,\dots,\pi_{\phi(k)}$ the projectors to a basis $B$ of $\mathbb{Q}(\alpha)(\xi)$ over $\mathbb{Q}(\alpha)$, and consider the functions
$$F_s= \prod\limits_{i=1}^l G_i^{\tilde{d}_j\pi_s(\tau_j(\lambda_i))}$$
with $\tilde{d}_j\in\mathbb{N}^*$ such that all exponents are in $\mathbb{Z}$. The $\partial_x \ln F_s$ have residues $\pi_s(\tau_j(\beta))$, and then $\partial_x \sum B_s \ln F_s$ has residues $\tau_j(\beta)$. Thus the integral $I_j$ of \underline{\sf TraceIntegrals} is such that $I_j-\partial_x \sum B_s \ln F_s$ is reduced and has no residues, and thus is of first kind. Thus $I_j$ is of torsion.
\end{proof}
\subsection{Completeness}
Once trace integrals have been computed over $\mathbb{K}(\alpha)$ we can consider the conjugated sums
$$\tilde{I}_{i,j}=\sum\limits_{\alpha \in E^{-1}(0)} \alpha^j I_{i,\alpha},\quad j=0\dots \deg E-1$$
We now want to compute enough such integrals so that $I$ can be written as a linear combination of them and an integral of the first kind.
\begin{proof}[of Theorem \ref{thm1}]
Consider the trace over $\mathbb{K}(\alpha_1)$, where $\alpha_1$ is a residue of the integral, and so a root of $R$. One of the factors $R_i(\lambda^{k'})$ of $R$ is the monic minimal polynomial of $\alpha_1$; set $l=\deg R_i-1$. The other roots of $R_i(\lambda^{k'})$ are denoted $\xi^i \alpha_j$. By assumption, its Galois group is $\mathbb{Z}_{k'}^{l+1} \ltimes S_{l+1}$. Now the traces over $\mathbb{K}(\alpha_1)$ of the roots of $R_i(\lambda^{k'})$ are
$$(\xi^j \alpha_1)_{j=0\dots k-1}, (\xi^j t )_{j=0\dots k-1},\dots, (\xi^j t )_{j=0\dots k'-1}$$
and $\alpha_1+lt=u\in\mathbb{K}$ where $u$ is minus the second leading coefficient of $R_i(\lambda^{k'})$ (it is zero for $k'>1$).
Now applying the Galois group of $R_i(\lambda^{k'})$, we can permute the root $\alpha_1$ to any root $\alpha_i$, and the factorization of $R_i(\lambda^{k'})$ in $\mathbb{K}(\alpha_i)[\lambda]$ will have the same structure. We obtain then from algorithm \underline{\sf TraceIntegrals} integrals whose residues are any line of the matrix
$$M=\left(\begin{array}{cccc} \alpha_1 & (u-\alpha_1)/l & \dots &(u-\alpha_1)/l \\ & & \dots & \\ (u-\alpha_{l+1})/l & \dots & (u-\alpha_{l+1})/l & \alpha_{l+1} \end{array} \right)$$
and their multiples by $\xi$. This matrix is invertible if $u\neq 0$, and $\hbox{Im} M= \{x\in\mathbb{C}^{l+1}, \sum x_i=0\}$ for $u=0$. Thus $(\alpha_1,\dots,\alpha_{l+1})$ is in the image of $M$ in both cases, and thus a $\mathbb{K}$-linear combination of the integrals of \underline{\sf TraceIntegrals} will have the residues $(\alpha_1,\dots,\alpha_{l+1})$ at suitable poles. Doing this for all (conjugacy classes of) residues, we can subtract from $I$ a linear combination of integrals of \underline{\sf TraceIntegrals} removing all residues, and thus all poles, leaving an integral of the first kind.
\end{proof}
Similar proofs can be done with smaller Galois groups. In particular, the same proof works when replacing $S_{l+1}$ by any $2$-transitive group, and other groups could lead to a different matrix $M$, but still an invertible one.\\
\textbf{Example:} (see \cite{11}) $\mathcal{I}_1=$
$$\frac{535423}{(x^4-8x^3+236x^2-880x+12964)(x-15)(x^2+118)^{1/3}}$$
The residues are solutions up to multiplication by $\xi$ of
$$\lambda-1, \lambda^2+\frac{3527}{220}\lambda\xi+\frac{11261}{5280}\lambda-\frac{449219897}{6082560}-\frac{12314729}{276480}\xi,$$
$$ \lambda^2+\frac{73387}{5280}\lambda\xi-\frac{11261}{5280}\lambda-\frac{12314729}{276480}-\frac{449219897}{6082560}\xi$$
Now applying \underline{\sf TraceIntegrals} with these extensions gives
\begin{small}\begin{equation}\label{exa}
\frac{174584x^4+700160x^3-45841128x^2+306988544x-11145996240}{(x^4-8x^3+236x^2-880x+12964)(x-15)(x^2+118)^{1/3}}
\end{equation}\end{small}
for the trace over $\mathbb{K}$ and $4$ more complicated expressions for the two extensions of degree $2$.
For the integrals
$$\int (x^n+x-3)^{-1}(x^2+118)^{-1/3}$$
we obtain for $n=2$ with $\alpha^6-\tfrac{1}{191867}\alpha^3-\tfrac{1}{32425523}=0$
$$\frac{13\alpha^2(2494271\alpha^3-29531)}{(2494271\alpha^3-243x-128)(x^2+118)^{1/3}}.$$
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
$n$ & 4 & 5 & 6 & 7 & 8 \\\hline
Degree & 12 & 15 & 18 & 21 & 24 \\\hline
Galois & 1944 &29160&524880&11022480& 264539520 \\\hline
Time & 1.21&10.3 & 31.6 &1138 & 2333 \\\hline
\end{tabular}\\
\end{center}
Galois groups of the residue polynomials $R$ have been computed with Magma. This computation is not necessary to perform the algorithm; however, it ensures that $R$ is generic and it shows how useful it is to avoid computations in the splitting field. Remark that the trace integrals do not always split the poles of the integral when there are $\mathbb{K}$-relations between the residues; in particular, for $\mathcal{I}_1$ the $\mathbb{K}$-dimension of the residues is $2$ instead of the expected $5$ (but this is still a generic $R$!).
\section{Computations in Jacobians}
\subsection{Superelliptic divisors}
Let us recall the definition of a divisor and introduce a specific notion for superelliptic curves.
\begin{defi}
A divisor $D$ on a curve $\mathcal{C}$ is a function $\mathcal{C} \rightarrow \mathbb{Z}$ with finite support. It is said to be principal if there exists a rational function $f$ on $\mathcal{C}$ such that $D(z)=\hbox{ord}_{x=z} f(x)$. It is said to be of torsion if there exists $N\in\mathbb{N}^*$ such that $ND$ is principal. The height of a divisor is $\sum_{(x,y)\in\mathcal{C}} \mid D(x,y) \mid$.\\
A superelliptic divisor $D$ on a superelliptic curve $\mathcal{C}$ is a function $\mathcal{C} \rightarrow \mathbb{Z}[\xi]$ with finite support and $D(\sigma(z))=\xi D(z)$ with $\sigma:\mathcal{C} \rightarrow \mathcal{C}$ the $k$th order shift on branches. It is said to be principal if $D=\sum_{i=1}^l a_i D_i$ with $a_i\in\mathbb{Z}[\xi]$ and $D_i$ principal divisors. It is said to be of torsion if there exists $N\in\mathbb{Z}[\xi]^*$ such that $ND$ is principal. The superelliptic divisor of a superelliptic integral is defined by
$$D(z)= \hbox{res}_z P/(QS^{1/k})$$
provided that all the residues are in $\mathbb{Z}[\xi]$.
\end{defi}
Divisors are usually defined as functions on the places of $\bar{\mathcal{C}}$, which can be different from simply points of $\bar{\mathcal{C}}$; however, the technical condition implies that any ramification point, including infinity, is maximally ramified, and thus there is a unique place over a ramification point. The value of the divisor at infinity is recovered using the fact that the sum over all points of $\bar{\mathcal{C}}$ should be zero. Remark that the notion of torsion order for divisors is well defined (the minimal $N$); however, this is not always the case for superelliptic divisors. The set of possible $N$ forms an ideal of $\mathbb{Z}[\xi]$, and starting from $k=23$, this ideal is not always principal. In the following, we will not try to find the optimal one anyway.
A divisor will be represented by a list of triples, each consisting of an irreducible polynomial $Q$ in $x$, a polynomial $R$, and a list of $k$ integers. The roots of $Q$ are the abscissas of the support of $D$, $R$ evaluated at $Q^{-1}(0)$ defines the ordinate of a point at such an abscissa, and the list gives the values of the divisor at this point and at the points obtained by multiplying the ordinate by powers of $\xi$.
\begin{prop}\label{prop1}
Any superelliptic divisor $D$ can be written uniquely
\begin{equation}\label{eq3}
kD(z)=\sum\limits_{i=0}^{k-1} \xi^i \tilde{D}(\sigma^i(z))
\end{equation}
where $\tilde{D}$ is a divisor on $\mathcal{C}$ such that
\begin{equation}\label{eq4}
\sum\limits_{i=0}^{k-1} \tilde{D}(\sigma^i(z)) \xi^{ij}=0,\; \forall j \wedge k \neq 1
\end{equation}
and $\tilde{D}(z)=0$ on ramification points. We have $D$ of torsion if and only if $\tilde{D}$ is of torsion. Algorithm \underline{\sf Divisor} computes the $\tilde{D}$ associated to the superelliptic divisor of a superelliptic integral.
\end{prop}
\begin{proof}
Let us note $[d_0,\dots,d_{k-1}]$ the values of $\tilde{D}$ over a given abscissa (not ramified), and note $U(z)=\sum_{i=0}^{k-1} d_i z^i$. We have from \eqref{eq3}
$$D(z)=\sum\limits_{i=0}^{k-1} \xi^i \tilde{D}(\sigma^i(z))= \left[ \sum\limits_{i=0}^{k-1} \xi^i d_{i+l} \right]_{l=0\dots k-1}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\left[ \xi^{-l} U(\xi)\right]_{l=0\dots k-1} $$
where indices are taken modulo $k$. As shifting branches on $D$ multiplies it by $\xi$, this equality is satisfied if and only if it is satisfied for $l=0$.
We also know that $d_i\in\mathbb{Z}$, and thus we can apply to \eqref{eq3} the Galois action $\psi_j\in \hbox{Gal}(\mathbb{K}:\mathbb{Q})$ which substitutes $\psi_j(\xi)=\xi^j$ with $j\wedge k=1$. Thus we know the values of $U(\xi^j),\; j\wedge k=1$. The condition \eqref{eq4} is $U(\xi^j)=0,\; \forall j \wedge k \neq 1$. Thus we know $U$ on all $k$th roots of unity, and $U$ is of degree $\leq k-1$. By polynomial interpolation, there exists a unique such $U$, and hence a unique tuple $[d_0,\dots,d_{k-1}]$, satisfying these conditions. On ramification points, we have $D(z)=0$, thus \eqref{eq3} is satisfied with $\tilde{D}(z)=0$.
If $\tilde{D}$ is of torsion, then $\tilde{D}(\sigma^i(z)),\; i=0\dots k-1$ is also, and thus $D$ is of torsion. If $D$ is of torsion, the Galois action $\psi_j$ gives that $\psi_j(D)$ is also a torsion divisor. Then
$$\sum\limits_{j\wedge k=1} k\psi_j(D)= \sum\limits_{j\wedge k=1} \sum\limits_{i=0}^{k-1} \xi^{ij} \tilde{D}(\sigma^i(z))=$$
and using \eqref{eq4}
$$\sum\limits_{j=0}^{k-1} \sum\limits_{i=0}^{k-1} \xi^{ij} \tilde{D}(\sigma^i(z))= \sum\limits_{i=0}^{k-1} \left(\sum\limits_{j=0}^{k-1}\xi^{ij}\right) \tilde{D}(\sigma^i(z))=k\tilde{D}(z).$$
\end{proof}
\begin{prop}
The principality of a divisor on $\mathcal{C}$ does not depend on its values on ramification points.
\end{prop}
\begin{proof}
Consider a divisor $D'$ whose support is only on ramification points. Noting $x_i$ the abscissa of these points and $d_i$ the values of $D'$, the rational function on $\mathcal{C}$
$$\prod\limits_{i=1}^{\sharp S^{-1}(0)} (x-x_i)^{d_i}$$
has divisor $D'$, and thus $D'$ is principal. Thus for a divisor $D$ on $\mathcal{C}$, we have
$$D \hbox{ principal} \Leftrightarrow D+D' \hbox{ principal}$$
and thus the principality of $D$ is independent of its values at the ramification points.
\end{proof}
From now on, we will thus work with divisors modulo divisors supported on ramification points, and thus in their representation we will not record values over ramification points.
\subsection{Reduction in the Jacobian}
\begin{prop}\label{prop1b}
A divisor $D$ on $\mathcal{C}$ with non negative values is principal if and only if there exists $f\in\mathbb{K}[y]_{<k}[x]$ such that
$$\hbox{ord}_{(x,y)} f= D(x,y),\forall (x,y)\in\mathcal{C}$$
and $\hbox{wdeg} f =\sum_{z\in \mathcal{C}} D(z)$ where $\hbox{wdeg}(x^iy^j)=ki+(\deg S)j$.
\end{prop}
\begin{proof}
The existence of a rational $f$ satisfying the order condition is equivalent to the condition of principality of a divisor after multiplying $f$ by a rational function in $x$ (which shifts all the values of $D$ over a given abscissa). As the quantity $D(x,y)$ is always non negative and $S$ has only simple roots, $f$ should then be a polynomial. The number of zeros of $f$ on $\mathcal{C}$ counting multiplicity is $\hbox{wdeg} f$. The number of zeros with multiplicity required by the order condition is $\sum_{(x,y)\in \mathcal{C}} D(x,y)$.
\end{proof}
Remark that the simple roots condition on $S$ is necessary: for $\mathcal{C}:y^3=x^2(x^2+1)$, the function $y^2/x$ has no pole at $0$ and its divisor is non negative, but it is not a polynomial.
If the divisor $D$ corresponds to a superelliptic divisor, with Proposition \ref{prop1}, we can recover the principality of the superelliptic divisor as it is the divisor of the function $L_S(f)$, using the fact that the function $L_S$ is invariant by multiplication of $f$ by a function of $x$ only.
With this proposition, testing principality of a divisor reduces to a linear system solving problem. However, the size of this system grows as the height of $D$, and as the coefficients of the divisor come from residues of the integral, the height of $D$ can be very large, rendering this approach impractical except for small examples.
Let us introduce a divisor reduction process. Recall that the Jacobian of $\mathcal{C}$ is defined by its divisors modulo the principal divisors. It is a $g$-dimensional Abelian variety, and thus it is possible to reduce divisors to a set of divisors depending on $g$ parameters.
\begin{prop}\label{prop2}
Given a divisor $D$ over $\mathcal{C}$ with non negative values, there always exists a principal divisor $D'$ such that $D'-D$ has at most $(k-1)(\deg S -1)/2$ points in its support.
\end{prop}
\begin{proof}
Consider the expression
$$f_N=\sum\limits_{j=0}^{k-1} \left(\sum\limits_{i=0}^{\lfloor ( N+(k-1)(\deg S -1)/2 )/k-j\deg S/k \rfloor} \!\!\!\!\!\!\!\!\! a_{i,j} x^iy^j\right) $$
Its number of roots on $\mathcal{C}$ counting multiplicity is $N+(k-1)(\deg S -1)/2$, and using the technical condition, it has
$$\sum\limits_{j=0}^{k-1} \lfloor (N+(k-1)(\deg S -1)/2)/k-j\deg S/k +1 \rfloor=$$
$$k+ N+(k-1)(\deg S -1)/2 -(k-1)(\deg S +1)/2=N+1$$
parameters. We write down the condition
$${{f_N}_{\mid \mathcal{C}}}^{(j)}(x,y)=0,\; \forall j <D(x,y),\; \forall (x,y)\in\mathcal{C}$$
This is a linear system on the $a_{i,j}$ with $\sum_{(x,y)\in\mathcal{C}} D(x,y)$ equations. Thus for $N=\sum_{(x,y)\in\mathcal{C}} D(x,y)$, it always admits a non zero solution. Among the roots of $f_N$ there will be the points in the support of $D$ with required multiplicity, but also $(k-1)(\deg S -1)/2$ additional points (or multiplicity increases). Thus the divisor of $f_N$ minus $D$ will have at most $(k-1)(\deg S -1)/2$ points in its support.
\end{proof}
Remark that if $(k-1)(\deg S -1)/2=0$, then $k=1$ or $\deg S=1$. Proposition \ref{prop2} then allows us to reduce $D$ to a divisor with empty support, so $0$, and thus all divisors are principal, which is indeed the case as the genus of $\mathcal{C}$ is then $0$. Also, by the Riemann--Hurwitz formula the genus is $g=(k-1)(\sharp S^{-1}(0)-1)/2$, and as $S$ has only simple roots, our reduction is optimal.
\subsection{Negabinary expansion}
Recall that any integer $n\in\mathbb{Z}$ can be written uniquely
$$n=\sum\limits_{i=0}^l a_i (-2)^i,\; a_i\in\{0,1\},\; l\in\mathbb{N}$$
Similarly, a divisor $D$ on $\mathcal{C}$ is defined by a vector of integers over each abscissa, and thus we can write
$$D=\sum\limits_{i=0}^l (-2)^i D_i,\; \hbox{Im}(D_i)\subset \{0,1\},\; l\in\mathbb{N}$$
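For instance (a small Python sketch of ours), the integer negabinary digits used in this decomposition, and implicitly in step~$2$ of \underline{\sf JacobianReduce} below, can be computed as follows:
\begin{verbatim}
def negabinary(n):
    # Digits a_i in {0, 1} with n = sum a_i * (-2)**i, least significant first.
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # force the remainder into {0, 1}
            n, r = n + 1, r + 2
        digits.append(r)
    return digits or [0]

assert all(sum(a * (-2)**i for i, a in enumerate(negabinary(n))) == n
           for n in range(-1000, 1000))
\end{verbatim}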
\noindent\underline{\sf JacobianReduce}\\
\textsf{Input:} A divisor $D$ on $\mathcal{C}$.\\
\textsf{Output:} A sequence of polynomials in $\mathbb{K}[y]_{<k}[x]$, and a reduced divisor $\bar{D}$
\begin{enumerate}
\item if $D=0$, return $[1],0$.
\item Reduce $D$ modulo $2$; denote by $D_0$ the remainder and by $D_1$ the divisor such that $D=D_0-2D_1$.
\item Compute $\tilde{f},\tilde{D}:=D_0+2$\underline{\sf JacobianReduce}($D_1$).
\item $N:=\sum_{(x,y)\in\mathcal{C}} \tilde{D}(x,y)$
\item Note $f=\sum_{j=0}^{k-1} Q_j(x) y^j$ with $\deg Q_j=\lfloor ( N+(k-1)(\deg S -1)/2 )/k-j\deg S/k \rfloor$
\item Solve the linear system $\hbox{ord}_{(x,y)} f \geq \tilde{D}(x,y),\;\forall (x,y)\in\mathcal{C}$, and note $f$ the solution of lowest weighted degree.
\item Compute $D'$ the divisor of $f$. Return $[f,\tilde{f}],D'-\tilde{D}$.
\end{enumerate}
We will prove that algorithm \underline{\sf JacobianReduce} returns a zero reduced divisor if and only if the integral $I$ whose superelliptic divisor gave $D$ thanks to Proposition \ref{prop1} is of torsion, and then can be written up to a first kind integral
$$I= \sum\limits_{i=0}^{l} (-2)^i L_S(f_i).$$
\begin{proof}[of Theorem \ref{thm2}]
Let us prove by induction that $D+\hbox{\underline{\sf JacobianReduce}}(D)$ is a principal divisor. For $D$ of height $0$, step $1$ returns a correct answer. Now assume it is true for all divisors of height less than that of $D$.
In step $3$, there is a recursive call on $D_1$ which is the quotient of $D$ by $-2$. Thus $D_1$ has strictly smaller height than $D$, thus by hypothesis, $D_1+\hbox{\underline{\sf JacobianReduce}}(D_1)$ is a principal divisor. Thus $D-\tilde{D}=-2(\hbox{\underline{\sf JacobianReduce}}(D_1)+D_1)$ is a principal divisor. In step $7$, $D'$ is principal by construction, and thus
$$D+\hbox{\underline{\sf JacobianReduce}}(D)=D+D'-\tilde{D}$$
is principal.
Thus if $D$ is principal, $\hbox{\underline{\sf JacobianReduce}}(D)$ is also principal. Let us prove that if $D$ is principal, then \underline{\sf JacobianReduce}$(D)=0$. In step $7$, we have $D'\geq \tilde{D}$ by construction, and thus \underline{\sf JacobianReduce} always returns divisors with non negative values. So in step $3$, $\tilde{D}$ has non negative values, and is principal. Now applying Proposition \ref{prop1b}, we know that in steps $5,6$ we will find an $f$ such that $D'-\tilde{D}=0$. As $D'\geq \tilde{D}$, this will be reached for the minimal weighted degree solution of the equation in step $6$, and this is the one we choose.
We must now check termination and complexity. Recursive calls in step $3$ are made on a divisor of strictly lower height, except $0$, which is dealt with in step $1$. In step $6$, a solution $f$ always exists thanks to Proposition \ref{prop2} as $\tilde{D}$ has non negative values. For complexity, steps $6,7$ cost $O(N^\omega)$ where $N$ is the height of $\tilde{D}$. However, by Proposition \ref{prop2}, the divisor output by \underline{\sf JacobianReduce} is of height at most $(k-1)(\deg S -1)/2$, and $D_0$ is of height at most $k \deg Q$, and thus the cost is $O(((k-1)(\deg S -1)+k\deg Q)^\omega)$. In recursive calls, the support of $D_0$ is always at most $k \deg Q$, thus the same bound applies. The number of recursive calls is at most $\log_2 \hbox{height}(D)$, thus giving complexity $O((kd)^\omega \log_2 \hbox{height}(D))$. Now the coefficients of $D$ come from the residues of $I$, and thus from the roots of the residue polynomial $R$, which comes from a resultant computation. Thus the residues are bounded by a polynomial in the coefficients of $I$, and so $\log_2 \hbox{height}(D)$ is in $O(h)$ with $h$ the height of the coefficients of $I$. Thus the complexity is $O((kd)^\omega h)$.
\end{proof}
\textbf{Example} (see \cite{7})
$$\mathcal{I}_2=\frac{8(7\sqrt{5}-15)}{(x-20+8\sqrt{5})\sqrt{x^3+5x^2-40x+80}}$$
The integral $\mathcal{I}_2$ is a trace integral of a superelliptic integral over $\mathbb{K}(\sqrt{5})$. The divisor of $\mathcal{I}_2$ is $D_2=[x-20+8\sqrt{5}, -120+56\sqrt{5},[1,-1]]$.
Consider also the integral
$$\mathcal{I}_3= \frac{3}{(x-1)\sqrt{x^3+8}},\;\; D_3=[x-1,-3,[-1,1]]$$
We compute the divisor reduction in the Jacobian of $3^nD_2$ and $3^n D_3$
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
$n$ & 3 & 4 & 5 & 6 & 7 \\\hline
$D_2$ time &0.21& 0.06& 0.09&0.48 & 1.6 \\\hline
$\bar{D_2}$ digits & 1 & 1 & 1 & 1 & 1 \\\hline
$D_3$ time & 0.06&0.45& 20 &5109 &$>10^4$ \\\hline
$\bar{D_3}$ digits & 828 &7446&67008&603071&$>10^6$\\\hline
$\!\!\! D_3$ mod time $\!\!\!\!\!$ &0.55 &0.62&0.86 &1.21 &1.23 \\\hline
\end{tabular}\\
\end{center}
The reduction time (in s) for $D_2$ is negligible, but the $D_3$ reduction time grows exponentially instead of linearly. This is only because the coefficient size of the reduced divisor grows exponentially, and the timings become as expected when computing modulo $65521$. The $D_2$ reductions do not grow in size because $D_2$ is a torsion divisor, so its reductions are periodic in $n$.
Thus in practice, we want to run \underline{\sf JacobianReduce} either in positive characteristic, or on torsion divisors. For a good reduction prime, if the divisor is not principal mod $p$, it is not principal in characteristic $0$. For the example \eqref{exa}, the divisor is
\begin{small}
$$[x^2+24\xi x+8x-48\xi-130, 8+22\xi+2x, [95909, -158035, 62126]],$$
$$[x^2-24\xi x-16x+48\xi-82, 2\xi x+8\xi+22, [62126, -158035, 95909]],$$
$$[x-15, -7-7\xi, [-10560, 21120, -10560]]$$
\end{small}
To test principality of this divisor, it is enough (!) to look for an $L_S$ function with a polynomial of degree $1928100$. Trager's approach reduces this to solving a linear system of this size. Applying \underline{\sf JacobianReduce} to it modulo $13$ reduces this divisor in $0.39$s to $[x+11,11,[1,0,0]]$, and thus it is not principal.
\section{Torsion of Divisors}
\subsection{Hasse--Weil Bound}
\begin{defi}
A good reduction $(p,\mathcal{J})$ for a reduced superelliptic integral $I$ on $\mathcal{C}$ with coefficients in $\mathbb{K}(\alpha)$ is such that
\begin{itemize}
\item $p$ does not divide $k$.
\item $\mathcal{J}$ is a prime ideal factor in characteristic $p$ of $\langle\Phi_k(\xi), P(\alpha)\rangle$, where $P\in\mathbb{K}[z]$ is the minimal polynomial of $\alpha$.
\item All poles of $I$ and roots of $S$ stay distinct under reduction modulo $\mathcal{J}$.
\end{itemize}
It is a very good reduction when moreover $\mathcal{J}$ has a single point, i.e. $\sharp \mathcal{J}^{-1}(0)=1$.
\end{defi}
Good reduction primes have important properties \cite{1,4}.
\begin{itemize}
\item All divisors of the Jacobian of a curve mod $p$ are of torsion.
\item The mod $p$ reduction on the Jacobian restricted to torsion divisors of order coprime with $p$ is an isomorphism.
\end{itemize}
Thus the reduction preserves the torsion order provided the divisor is of torsion with order coprime to $p$. Following \cite{1}, reducing with two different good primes allows one to recover a unique candidate for the torsion order.
\begin{prop}[Hasse--Weil bound]
The torsion order of a divisor on a curve of genus $g$ over $\mathbb{F}_{p^q}$ is less than $(1+\sqrt{p^q})^{2g}$.
\end{prop}
The torsion order modulo $p$ will be computed by testing principality of $nD$ for $n$ up to the bound $(1+\sqrt{p^q})^{2g}$. Thus we want to minimize this upper bound, in which $q=\sharp \mathcal{J}^{-1}(0)$. Using the Tchebotarev density theorem \cite{9}, the probability of having a factor $\mathcal{J}$ with $\sharp \mathcal{J}^{-1}(0)=1$ is $(\phi(k)\deg P)^{-1}$, so we can probabilistically increase $p$ by a factor $\phi(k)\deg P$ to ensure $q=1$. As typically $pq < p^q$, we will then only consider very good reductions.
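As a small illustration of the prime search (a minimal sketch in Python/sympy; the setting is the trace integral $\mathcal{I}_2$ over $\mathbb{Q}(\sqrt{5})$ with $k=2$, so that $\Phi_2(\xi)=\xi+1$ is already linear, and the condition $p\nmid \Delta(Q\hbox{sf}(S))$ is omitted), a very good reduction prime is then simply a prime $p$ for which $z^2-5$ has a root modulo $p$:
\begin{verbatim}
# Minimal sketch: search for primes p such that z^2 - 5 has a root mod p,
# i.e. such that a degree-one prime ideal (a very good reduction) exists
# for the field Q(sqrt(5)) of the example I_2.  Discriminant conditions
# are ignored here.
from sympy import primerange

def has_root_mod_p(p):
    return any((a * a - 5) % p == 0 for a in range(p))

good = [p for p in primerange(7, 50) if has_root_mod_p(p)]
print(good)   # [11, 19, 29, 31, 41]
\end{verbatim}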
\subsection{Torsion Test}
\noindent\underline{\sf TorsionOrder}\\
\textsf{Input:} A divisor $D$ on $\mathcal{C}$, $\epsilon>0$.\\
\textsf{Output:} An integer $n$, candidate for torsion order.
\begin{enumerate}
\item Find primes $p_1<p_2<p_3$ with $p_3>1/\epsilon$, such that $p_i \nmid \Delta(Q\hbox{sf}(S))$ and $(\Phi_k(\xi),P(\alpha))$ has a prime ideal factor mod $p_i$ with one solution, and denote these factors $\mathcal{J}_1,\mathcal{J}_2,\mathcal{J}_3$.
\item For $n\in\mathbb{N}^*$, Compute \underline{\sf JacobianReduce}($nD$) mod $\mathcal{J}_1$ until it reduces to zero.
\item For $m\in\mathbb{N}^*$, Compute \underline{\sf JacobianReduce}($mD$) mod $\mathcal{J}_2$ until it reduces to zero.
\item Solve the equation $np_1^u=mp_2^v$ in non-negative integers $u,v$; if there is a solution, set $N=np_1^u$, else return $0$ (see the sketch after this algorithm).
\item Compute \underline{\sf JacobianReduce}($ND$) mod $\mathcal{J}_3$. If $0$, return $N$ else return $0$.
\end{enumerate}
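Step $4$ amounts to matching the two candidate orders up to powers of the reduction primes. A minimal sketch of this matching (in Python; the function and variable names are ours, and the divisor reductions of steps $2$ and $3$ are assumed to have been computed elsewhere) is:
\begin{verbatim}
# Candidate torsion order from the orders n, m found modulo p1, p2 (step 4).
# The true order N, if it exists, is of the form n*p1^u = m*p2^v, u, v >= 0.
def valuation(x, p):
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def candidate_order(n, p1, m, p2):
    u = valuation(m, p1) - valuation(n, p1)
    v = valuation(n, p2) - valuation(m, p2)
    if u < 0 or v < 0 or n * p1**u != m * p2**v:
        return 0            # no compatible order: D is not of torsion
    return n * p1**u

# Example: the divisor D_4 below has order 29 both mod 3 and mod 5,
# so candidate_order(29, 3, 29, 5) returns 29.
\end{verbatim}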
\begin{proof}[Proof of Theorem \ref{thm3}]
Step $1$ computes three different very good reduction prime ideals with $p_3>1/\epsilon$. As all divisors are of torsion modulo a good reduction prime, steps $2$ and $3$ terminate. Now the true torsion order (if it exists) should be both of the form $np_1^u$ and of the form $mp_2^v$. As $p_1\neq p_2$, this equation has at most one solution. If there is none, then $D$ is not of torsion, and the algorithm returns $0$. Else we test in step $5$ whether $ND$ is principal modulo $\mathcal{J}_3$. If $ND$ is not principal in characteristic $0$, its reduction modulo $\mathcal{J}_3$ is a random element in a group of at least $(1+\sqrt{p_3})^{2g}$ elements, thus its probability to be $0$ by chance is $\leq 1/(1+\sqrt{p_3})^{2g} \leq p_3^{-g}<\epsilon$. For $g=0$ all divisors are principal, so this case does not happen, and for $g\geq 1$ the probability bound holds.
Now for the complexity: in step $1$, we need to avoid the prime factors of $\Delta(Q\hbox{sf}(S))$, of which there are $O(\ln \Delta(Q\hbox{sf}(S)))$. If unlucky, it is possible that for the first primes we either hit such a factor or that $(\Phi_k(\xi),P(\alpha))$ has no prime factor of degree $1$. The probability of a degree-one factor is $1/(\phi(k) \deg P)$, and thus we will find a suitable prime in $O(\phi(k) \deg P\ln \Delta(Q\hbox{sf}(S)))$ tests. For steps $2$ and $3$, the Hasse--Weil bound applies and we will find a suitable $N$ in fewer than $(1+\sqrt{p})^{2g}=O(p^g)=O((\phi(k) \deg P\ln \Delta(Q\hbox{sf}(S)))^g)$ tests. Each of these tests costs $O((kd)^\omega h \ln n)$, and thus the total cost is in $\tilde{O}( (kd)^\omega h (k \deg P\ln \Delta(Q\hbox{sf}(S)))^g)$.
We write $r=\deg P$, and $d$ for the number of abscissas in the support of $D$, which is also bounded by the degree of $Q$. Finally $h$ is the logarithmic height of the coefficients, so $\ln \Delta(Q\hbox{sf}(S))=O(h)$. Thus the cost is $\tilde{O}( (kd)^{\omega+g} h^{g+1} r^g)$.
\end{proof}
Remark that step $5$ is important for checking with good probability that the divisor is indeed of torsion. In binary complexity, arithmetic in $\mathbb{F}_p$ costs $O(\ln p)$, and thus the checking cost will be in $O(\ln (1/\epsilon))$. In arithmetic complexity, $\epsilon$ does not matter; in the examples, $\epsilon=1$ was enough and did not leave false positives.\\
\textbf{Example} (see \cite{6,5}) $\mathcal{I}_4=$
$$\frac{\tfrac{8}{29}(5x^3+267x^2+2688x-10240)(x^2+40x+512)^{-1}}{\sqrt{x^5+113x^4+4864x^3+102400x^2+1048576x+4194304}}$$
The divisor of this integral is $D_4=[x^2+40x+512, 8x+512, [-1,1]]$. In $2.3$s, \underline{\sf TorsionOrder} finds the candidate $29$ using the primes $3,5$ and checking with $11$. \underline{\sf JacobianReduce}($29D_4$) reduces $29D_4$ to $0$ in $0.17$s, thus the integral is of torsion, with the following candidate integral (not simplified!)
\begin{small}
$$\tfrac{1}{29}(L_S(x^2+40x+512)-2L_S(x^3+92x^2+2560x+4y+24576)+$$
$$4L_S(x^4+104x^3+4096x^2+73728x+524288)-$$
$$8L_S(-3x^3-248x^2-6144x-49152+(x+40)y)+$$
$$16L_S(x^3+78x^2+1792x-2y+12288)-$$
$$32L_S(x^5+106x^4+4688x^3+107520x^2+1277952x+6291456+$$
$$(2x^2+80x+1024)y)+64L_S(x^2+40x+512))$$
\end{small}
This proves that $\mathcal{I}_4$ is of torsion; this expression is indeed an integral of $\mathcal{I}_4$, and so the integral of $\mathcal{I}_4$ is elementary.
Going back to the integral $\mathcal{I}_1$, the divisor of \eqref{exa} has very good reduction for $p=13,19$, and is of torsion order $2$ and $19$ respectively (found in $3.7$s). Hence no compatible torsion order is found in step $4$, so this is not a torsion integral and the integral of $\mathcal{I}_1$ is not elementary.
\subsection{Elementary Integration Algorithm}
We can now put together all these parts. We first compute minimal polynomials for the residues modulo multiplication by $\xi$.\\
\noindent\underline{\sf Residues}\\
\textsf{Input:} A reduced superelliptic integral $I$.\\
\textsf{Output:} A list of irreducible polynomials in $\mathbb{K}[\lambda]$ whose roots are the residues of $I$ up to multiplication by $\xi$.
\begin{enumerate}
\item Compute $R(\lambda)=\hbox{resultant}_x(P^k-\lambda^k Q'^k S,Q)$
\item Factorise $R=R_1\dots R_l$ in $\mathbb{K}[\lambda]$. $L=[]$.
\item For $i=1\dots l$, if $\forall j, R_i(\xi^j \lambda)\notin L$ then add $R_i$ to $L$.
\item Return $L$
\end{enumerate}
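As an illustration of steps $1$--$3$, here is a minimal sketch in Python/sympy, using the integrand $\mathcal{I}_3$ above with the reading $P=3$, $Q=x-1$, $S=x^3+8$ and $k=2$ (so that $\xi=-1$):
\begin{verbatim}
# Minimal sketch: residue polynomial R = resultant_x(P^k - lambda^k Q'^k S, Q)
# for I_3 = 3/((x-1)*sqrt(x^3+8)).
from sympy import symbols, resultant, factor_list, diff

x, lam = symbols('x lambda')
P, Q, S, k = 3, x - 1, x**3 + 8, 2

R = resultant(P**k - lam**k * diff(Q, x)**k * S, Q, x)
print(factor_list(R))
# factors lambda - 1 and lambda + 1: the residues at the poles are +-1
# (up to the action of xi = -1), so only one factor is kept in step 3.
\end{verbatim}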
We will then run \underline{\sf TraceIntegrals} with the field extension generated by a root of a polynomial in $L$. Removing factors of $R$ having the same roots up to multiplication by $\xi$ avoids doing the same calculation several times.\\
\noindent\underline{\sf ElementaryIntegrate}\\
\textsf{Input:} A superelliptic integral $I$.\\
\textsf{Output:} An elementary expression or ``Not elementary'' or ``Not handled''
\begin{enumerate}
\item Apply \underline{\sf HermiteReduction}($I$). If FAIL, return ``Not elementary''; else write $\tilde{I}$ for the reduced integral and $A$ for the algebraic part.
\item $L=\hbox{\underline{\sf Residues}}(\tilde{I})$. For $i=1\dots \sharp L$ do
\begin{enumerate}
\item $T_i=\hbox{\underline{\sf TraceIntegrals}}(\tilde{I},\mathbb{K}[L_i^{-1}(0)])$. For $j=1\dots \sharp T_i$ do
\begin{enumerate}
\item $D=\hbox{\underline{\sf Divisor}}(T_{i,j}),N=\hbox{\underline{\sf TorsionOrder}}(D,1)$, if $N=0$ return ``Not elementary''.
\item $(D',G_{i,j})=\hbox{\underline{\sf JacobianReduce}}(D)$. If $D'\neq 0$ return ``Not elementary''.
\end{enumerate}
\end{enumerate}
\item Compute $\hbox{Ints}=[x^iS^{-1/k},0]_{i=0\dots \lfloor \deg S/k \rfloor},$
$$\!\!\!\!\!\!\!\!\!\!\!\! \left[\sum_{\alpha \in L_i^{-1}(0)} \!\!\!\! \alpha^s T_{i,j}, \!\!\!\! \sum_{\alpha \in L_i^{-1}(0)} \!\!\!\! \alpha^s \sum (-2)^r L_S(G_{i,j})
\right]_{\underset{ j=1\dots \sharp T_i,i=1\dots \sharp L}{s=0\dots \deg L_i-1}}$$
\item Look for a linear combination of the first elements of $\hbox{Ints}$ which gives $\tilde{I}$. If none, return ``Not handled''.
\item Apply this same linear combination to the second elements of $\hbox{Ints}$, obtain an expression $Out$.
\item If $I-\partial_x (A+Out)=0$ return $A+Out$ else return ``Not elementary''.
\end{enumerate}
\begin{prop}
If \underline{\sf ElementaryIntegrate} returns ``Not elementary'', then $I$ is not elementary. If \underline{\sf ElementaryIntegrate} returns an expression, this is an elementary expression of $I$. If \underline{\sf ElementaryIntegrate} returns ``Not handled'', the residue polynomial $R$ is not generic.
\end{prop}
\begin{proof}
In step $1$, if \underline{\sf HermiteReduction}($I$) fails, then $I$ is not elementary. If $\tilde{I}$ is elementary, then all the $T_{i,j}$ are of torsion by Proposition \ref{proptor}. In step $2(a)i$, if \underline{\sf TorsionOrder} returns $0$, then $T_{i,j}$ is not of torsion, thus $\tilde{I}$ is not elementary. The same holds for step $2(a)ii$. In step $3$, the integrals of the first elements and the second elements differ by an integral of the first kind. Thus $\tilde{I}$ and $Out$ differ by an integral of the first kind, and so do $I$ and $A+Out$. If $I-\partial_x (A+Out)\neq 0$, then this is a non-zero integral of the first kind and thus not elementary.
Step $6$ is the only case returning an expression; it is elementary by construction and the result is checked in step $6$. Finally, the case ``Not handled'' can only occur if $\tilde{I}$ is not a linear combination of the $T_{i,j}$ and an integral of the first kind, and by Theorem \ref{thm1} this does not occur when $R$ is generic.
For the complexity, the first loop is executed at most $d$ times; write $e_i$ for the degree of the extension used to compute $T_i$, which costs $O(d^2e_i^2)$. The dominant cost of step $2(a)$ is the torsion order computation, which costs $\tilde{O}( (kd)^{\omega+g} h^{g+1} e_i^g)$. This test is done $e_i$ times, which gives a total cost of
$$\sum_{i=1}^{\sharp L} \tilde{O}( (kd)^{\omega+g} h^{g+1} e_i^{g+1}+d^2e_i^2)$$
As a function of $e_i$, the torsion cost is dominant in positive genus, and since $\sum_i e_i \leq kd$, by convexity the maximum is reached when $\sharp L=1$ and $e_1=kd$, which gives $\tilde{O}( (kd)^{\omega+2g+1} h^{g+1})$. Steps $4$, $5$ and $6$ are linear algebra in dimension $\sum_i e_i^2$, which is maximized when $\sharp L=1$ and $e_1=kd$. Thus the cost is in $O((kd)^{2\omega})$, which is less than the torsion part in positive genus as $\omega<3$.
\end{proof}
\section{Conclusion}
We proved that, generically, a decomposition similar to the one Trager used for rational integrals can be done for superelliptic integrals, and thus unusably large field extensions can be avoided. However, it is unproven that this is always possible. In particular, we would like to prove that the traces of the roots of a polynomial over its rupture field are enough to find all $\mathbb{K}$-linear relations among the roots. The known algorithms still have complexity factorial in the degree \cite{10}, even if generically factorization in the rupture field is enough to find the relations. We then use fast multiplication techniques in Jacobians to test quickly for principality and torsion of divisors. However, the principality test in characteristic $0$ is still slow in binary complexity, as the coefficient size of the reduced divisor grows very fast. Still, we do this computation only when we are reasonably sure that the divisor is of torsion. Over $\mathbb{Q}$, for elliptic curves, the Nagell--Lutz theorem gives a bound on the size of torsion points. If we had a similar bound for the size of torsion points in the Jacobian of superelliptic curves over number fields, we could ensure that the principality test is also fast in binary complexity. Also, our algorithm for finding the torsion order relies on testing principality of many multiples of $D$. As the cost is logarithmic, the total cost is $\tilde{O}(N)$ for computing all of them up to $N$. However, probably not all $N$ are possible: for example, for elliptic curves over number fields up to quartic degree, we already have a complete (small) list of possible torsion orders.
\end{document} |
\begin{document}
\begin{abstract}
For twisted $K$-theory whose twist is classified by a degree three integral cohomology class of infinite order, universal even degree characteristic classes are in one-to-one correspondence with the invariant polynomials of Atiyah and Segal. The present paper describes the ring of these invariant polynomials by a basis and structure constants.
\end{abstract}
\title{A basis of the Atiyah-Segal invariant polynomials}
\section{Introduction}
\label{sec:introduction}
\subsection{Atiyah-Segal invariant polynomials}
The ring of Atiyah-Segal invariant polynomials \cite{A-Se2} is a subring in the polynomial ring $A = \mathbb{C}[x_1, x_2, \cdots]$ on generators $x_i$ of degree $i$. Its definition \footnote{In \cite{A-Se2}, the ring is first defined by using the variables $s_i = i!x_i$.} is $J = \mathrm{Ker}(d)$, the kernel of the derivation $d : A \to A$ given by
\begin{align*}
dx_1 &= 0, &
dx_i &= x_{i-1}, \ (i > 1).
\end{align*}
Originally, $J$ is introduced as the ring of the universal Chern classes of \textit{twisted $K$-theory} \cite{A-Se1} whose twist is classified by a degree three integral cohomology class of infinite order. While much study has been dedicated to twisted $K$-theory in recent years, little is known about its Chern classes and $J$. For example, $J$ is not isomorphic to a polynomial ring \cite{A-Se2}, and the issue of presenting $J$ by generators and relations seems to remain open.
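As a concrete illustration (a minimal sketch in Python/sympy on a truncation of $A$; the truncation degree is an arbitrary choice), one can implement $d$ and check that, for instance, $x_2^2 - 2x_1x_3$ lies in $J$:
\begin{verbatim}
# Minimal sketch: the derivation d on a truncation of A = C[x_1, x_2, ...],
# extended from d(x_1) = 0, d(x_i) = x_{i-1}; checks x_2^2 - 2*x_1*x_3 in Ker(d).
from sympy import symbols, diff, expand

N = 8
xs = symbols('x1:%d' % (N + 1))              # x1, ..., x8

def d(f):
    return expand(sum(diff(f, xs[i]) * xs[i - 1] for i in range(1, N)))

g = xs[1]**2 - 2 * xs[0] * xs[2]             # x2^2 - 2*x1*x3
print(d(g))                                  # 0, so g is an invariant polynomial
\end{verbatim}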
\subsection{Main result}
The aim of this paper is to describe the ring structure of $J$ by a basis and structure constants.
To state the description precisely, we define, for an integer $\ell \ge 0$, a set $B^{(\ell)}$ by
$$
B^{(\ell)}
=
\left\{
\beta = (\beta_1, \beta_2, \cdots, \beta_\ell) \in \mathbb{Z}^\ell \big| \
\begin{array}{c}
\beta_1, \cdots, \beta_{\ell - 1} \ge 0, \ \beta_{\ell} \ge 1
\end{array}
\right\}, \ (\ell \ge 1),
$$
and $B^{(0)} = \{ \beta = (\emptyset) \}$. We also define $B^{(\ell)}(0)$ by the following subset in $B^{(\ell)}$:
\begin{align*}
B^{(0)}(0) &= B^{(0)}, &
B^{(1)}(0) &= \{ \beta = (1) \}, &
B^{(\ell)}(0) &= \{ \beta \in B^{(\ell)} |\ \beta_1 = 0 \}, \ (\ell > 1).
\end{align*}
For $\beta \in B^{(\ell)}$, $\beta' \in B^{(\ell')}$ and $\beta'' \in B^{(\ell + \ell')}$, we define the number $N_{\beta \beta'}^{\beta''}$ by the following expression of a polynomial:
\begin{multline*}
e_1(\vec{k}, \vec{k}')^{\beta_1''} \cdots
e_{\ell+\ell'}(\vec{k}, \vec{k}')^{\beta''_{\ell+\ell'}} \\
=
\sum_{\beta \in B^{(\ell)}, \beta' \in B^{(\ell')}}
N_{\beta \beta'}^{\beta''}
e_1(\vec{k})^{\beta_1} \cdots e_\ell(\vec{k})^{\beta_\ell}
e_1(\vec{k}')^{\beta'_1} \cdots e_{\ell'}(\vec{k}')^{\beta'_{\ell'}},
\end{multline*}
where $e_i(\vec{k})$, $e_i(\vec{k}')$ and $e_i(\vec{k}, \vec{k}')$ mean the $i$-th elementary symmetric polynomials in $\{ k_1, \cdots, k_\ell \}$, $\{ k'_1, \cdots, k'_{\ell'}\}$ and $\{ k_1, \cdots, k_\ell, k'_1, \cdots, k'_{\ell'} \}$, respectively. Note that $N_{\beta \beta'}^{\beta''}$ are non-negative integers, because $e_i(\vec{k}, \vec{k}') = \sum_{j}e_j(\vec{k})e_{i-j}(\vec{k}')$.
\begin{thm} \label{thm:main}
The ring $J$ of the Atiyah-Segal invariant polynomials is isomorphic to the ring $J'$ constructed as follows:
\begin{itemize}
\item[(a)]
its underlying vector space over $\mathbb{C}$ is generated by $\beta \in B(0) = \bigcup_{\ell \ge 0}B^{(\ell)}(0)$;
\item[(b)]
the product of $\beta \in B^{(\ell)}(0)$ and $\beta' \in B^{(\ell')}(0)$ is given by
$$
\beta \cdot {\beta'}
= \sum_{\beta'' \in B^{(\ell+\ell')}(0)}
N_{\beta \beta'}^{\beta''} {\beta''}.
$$
\end{itemize}
\end{thm}
Denote by $g_\beta(\vec{x})$ the invariant polynomial corresponding to $\beta$. Then $g_{(\emptyset)}(\vec{x}) = 1$ and $g_{(1)} (\vec{x})= x_1$. Some explicit product formulae of the polynomials are:
\begin{align*}
g_{(1)} g_{(1)} &= g_{(0, 1)}, &
g_{(1)} g_{(0, \beta_2, \cdots, \beta_{\ell-1}, \beta_\ell)}
&=
g_{(0, \beta_2, \cdots, \beta_{\ell-1}, -1 + \beta_\ell, 1)},
\end{align*}
\begin{align*}
g_{(0, a)} g_{(0, b)}
&=
\sum_{1 \le r \le \frac{a + b}{2}}
\frac{(a + b - 2r)!}{(a - r)!(b - r)!}
g_{(0, a + b - 2r, 0, r)}, \\
g_{(0, a)} g_{(0, b, c)}
&=
\sum_{
\substack{
0 \le r \le \mathrm{min}\{a-s, b\} \\
1 \le s \le c \\
}}
\frac{(a + b - 2r - s)!}{(a - r - s)!(b - r)!}
g_{(0, a + b - 2r - s, c - s, r, s)}.
\end{align*}
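These formulae can be checked on small cases against the explicit polynomials listed in the appendix. For instance, a minimal sketch (Python/sympy) verifying the case $a=b=2$ of the first formula, namely $g_{(0,2)}^2 = 2\, g_{(0,2,0,1)} + g_{(0,0,0,2)}$, is:
\begin{verbatim}
# Minimal sketch: check g_{(0,2)}^2 = 2 g_{(0,2,0,1)} + g_{(0,0,0,2)} with the
# polynomials listed in the appendix.
from sympy import symbols, expand

x1, x2, x3, x4, x5 = symbols('x1:6')

g_02   = x2**2 - 2*x1*x3
g_0201 = x1**2 * (x3**2 - 2*x2*x4 + 2*x1*x5)
g_0002 = x2**4 - 4*x1*x2**2*x3 + 2*x1**2*x3**2 + 4*x1**2*x2*x4 - 4*x1**3*x5

print(expand(g_02**2 - (2*g_0201 + g_0002)))   # 0
\end{verbatim}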
\subsection{Application}
In \cite{A-Se2}, the Chern classes of twisted $K$-theory corresponding to $J$ are, at the first stage, constructed in ordinary cohomology. Then lifting these Chern classes to twisted cohomology is proposed, and the problem of finding a natural basis is raised. We answer this problem by constructing a lift of the basis in Theorem \ref{thm:main}.
\subsection{Toward a description of $J$ by generators and relations}
If we consider the polynomial algebra generated on $B(0)$ and take the quotient by the ideal generated by the relations corresponding to (b) in Theorem \ref{thm:main}, then we get a description of $J$ by generators and relations, which is obviously unsatisfactory. For a satisfactory description, a possible direction would be to eliminate redundant basis elements. With respect to an order of monomials in $x_i$, the leading term of the invariant polynomial $g_\beta(\vec{x})$ corresponding to $\beta = (\beta_1, \cdots, \beta_\ell) \in B^{(\ell)}(0)$ is the monomial $x_{\beta_\ell}x_{\beta_\ell + \beta_{\ell-1}} \cdots x_{\beta_\ell + \cdots + \beta_1}$, and its coefficient is always $1$. This fact leads us to introduce the subset of $\{ g_\beta |\ \beta \in B(0) \}$ consisting of the basis elements $g_\beta(\vec{x})$ whose leading terms cannot be products of the leading terms of other non-trivial basis elements of strictly lower degree. We can see that the elements of this subset generate the ring $J$ algebraically, and their first non-trivial relations appear in degree $12$:
\begin{align*}
g_{(0,2)}g_{(0,1,2)} - g_{(0,3)} g_{(0,0,2)}
&=
g_{(1)} g_{(0,0,1,2)} + g_{(1)}^2 g_{(0,2,2)}, \\
g_{(0,2)}^3 - g_{(0,0,2)}^2
&=
3g_{(1)}^2 g_{(0,2)}g_{(0,3)} - 2g_{(1)}^3g_{(0,0,3)} - 3g_{(1)}^4g_{(0,4)}.
\end{align*}
However, by computing relations in higher degrees, we can also see that this subset still contains many elements that are redundant as algebraic generators of $J$. The task of finding a minimal set of algebraic generators still seems difficult, and the presentation issue of $J$ needs further study.
\subsection{Plan of the paper}
The description of $J$ in Theorem \ref{thm:main} originates from a construction of characteristic classes for twisted $K$-theory based on the \textit{Chern character} and the \textit{Adams operations} \cite{A-Se2}. Section \ref{sec:motivating_construction} is devoted to this motivating construction. Though this section can be skipped for the proof of Theorem \ref{thm:main}, a reader who understands the construction will find the definition of our basis natural.
The proof of Theorem \ref{thm:main}, which is quite elementary, is given in Section \ref{sec:basis}. The point in the proof is the definition of the polynomials $g_\beta(\vec{x})$ associated to $\beta \in B = \bigcup_{\ell \ge 0} B^{(\ell)}$. With the additive basis formed by these polynomials, the structure constants of $A$ and of the subring $J \subset A$ are easy to express. Our lifts of Chern classes are provided at the end of this section.
Finally, a list of invariant polynomials $g_\beta(\vec{x})$ is appended for reference.
\subsection{Acknowledgment}
This work is supported by the Grants-in-Aid for Scientific Research (start-up 21840034), JSPS.
\section{Motivating construction}
\label{sec:motivating_construction}
\subsection{Chern character}
One tool for the construction motivating our basis is the \textit{Chern character} for twisted $K$-theory. For simplicity, we assume that $X$ is a smooth manifold. With some subtle technical points understood, the Chern character is a natural homomorphism
$$
\mathrm{ch} : \ K_\tau(X) \longrightarrow H^{\mathrm{even}}_\eta(X).
$$
Here $K_\tau(X)$ denotes the twisted $K$-group of $X$ with its twist $\tau$. Since the way to realize twists may vary according to the contexts, we just point out that a twist is a geometric object classified by the degree three integral cohomology group $H^3(X; \mathbb{Z})$. For the sake of simplicity, we also assume that the twist $\tau$ is classified by an element in $H^3(X; \mathbb{Z})$ of infinite order. The target of $\mathrm{ch}$ is the cohomology of the complex $(\Omega(X), d - \eta \wedge \cdot )$, where $\Omega(X)$ is the space of differential forms, and $\eta$ is a closed $3$-form whose de Rham cohomology class corresponds to the real image of the element in $H^3(X; \mathbb{Z})$ classifying $\tau$.
A point to note is the following commutative diagram:
$$
\begin{CD}
K_\tau(X) \times K_{\tau'}(X) @>{\otimes}>> K_{\tau + \tau'}(X) \\
@V{\mathrm{ch} \times \mathrm{ch}}VV @VV{\mathrm{ch}}V \\
H_{\eta}(X) \times H_{\eta'}(X) @>>{\wedge}> H_{\eta + \eta'}(X),
\end{CD}
$$
where $\otimes$ is the multiplication in twisted $K$-theory, which mixes twists.
From the Chern character, we can compute the Chern class corresponding to an invariant polynomial $f(x_1, x_2, \cdots) \in J$: Let $a \in K_\tau(X)$ be a twisted $K$-class. We express its Chern character as follows:
$$
\mathrm{ch}(a) = [x_1(a) + x_2(a) + x_3(a) + \cdots],
$$
where $x_n(a)$ is the $2n$-form part. (The $0$-form part $x_0(a)$ is absent, under the assumption on $\tau$.) The differential form $f(x_1(a), x_2(a), \cdots)$ is closed by the invariance of the polynomial $f$. If we denote the de Rham cohomology class of the differential form by $c_f(a)$, then we get the Chern class corresponding to $f$:
$$
c_f : \ K_\tau(X) \longrightarrow H^{\mathrm{even}}(X).
$$
\subsection{Adams operation}
The other tool for our motivating construction is the \textit{Adams operation} for twisted $K$-theory \cite{A-Se2}, which is given as a natural map
$$
\psi^k : K_\tau(X) \longrightarrow K_{k\tau}(X).
$$
Here $k$ is allowed to be any integer. The twists in the source and the target of $\psi^k$ are generally different, because the formulation of $\psi^k$ involves the multiplication in twisted $K$-theory. It should be noticed that, under the expression $\mathrm{ch}(a) = [x_1(a) + x_2(a) + \cdots]$, we can express the Chern character of $\psi^k(a)$ as
$$
\mathrm{ch}(\psi^k(a)) = [k x_1(a) + k^2 x_2(a) + k^3 x_3(a) + \cdots].
$$
\subsection{Factory of Chern classes}
For integers $k_1, \cdots, k_\ell$, the Adams operations $\psi^{k_i}$ and the product in twisted $K$-theory construct
$$
\begin{array}{rcl}
\psi^{(k_1, \cdots, k_\ell)} : \
K_\tau(X) & \longrightarrow &
K_{(k_1 + \cdots + k_\ell)\tau}(X). \\
& & \\
a \ \quad & \mapsto & \psi^{k_1}(a) \otimes \cdots \otimes \psi^{k_\ell}(a)
\end{array}
$$
While Atiyah and Segal considered the case of $k_1 + \cdots + k_\ell = 1$ to get an ``internal'' operation, we consider the case of $k_1 + \cdots + k_\ell = 0$. The resulting ``external'' operation followed by the Chern character then gives
$$
\mathrm{ch} \circ \psi^{(k_1, \cdots, k_\ell)} : \
K_\tau(X) \longrightarrow H^{\mathrm{even}}(X).
$$
Since this map is natural, its $2n$-form part gives rise to a Chern class of twisted $K$-theory. It is natural to ask what kind of Chern classes are produced by this method: These Chern classes correspond to some polynomials in $J$.
By means of the properties explained so far, we can compute examples readily. In the simplest case of $\ell = 2$, we have
\begin{align*}
\mathrm{ch}(\psi^{(k_1,k_2)}(a))
&=
(k_1k_2) \cdot x_1^2 \\
&+
(k_1k_2)^2 \cdot (x_2^2 - 2 x_1x_3) \\
&+
(k_1k_2)^3 \cdot (x_3^2 - 2 x_2x_4 + 2 x_1x_5) \\
&+
(k_1k_2)^4 \cdot
(x_4^2 - 2 x_3x_5 + 2 x_2x_6 - 2x_1x_7) \\
&+
(k_1k_2)^5 \cdot (x_5^2 - 2 x_4x_6 + 2 x_3x_7 - 2x_2x_8 + 2 x_1x_9) \\
&+
\cdots,
\end{align*}
where $x_i = x_i(a)$.
In the case of $\ell = 3$, we also have
\begin{align*}
\mathrm{ch}(\psi^{(k_1,k_2,k_3)}(a))
&=
e_3 \cdot x_1^3 \\
&+
e_2e_3 \cdot x_1(x_2^2 - 2x_1x_3) \\
&+
e_3^2 \cdot (x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4) \\
&+
e_2^2 e_3 \cdot x_1(x_3^2 - 2x_2x_4 + 2x_1x_5) \\
&+
e_2e_3^2 \cdot
(x_2x_3^2 - 2 x_2^2x_4 - x_1x_3x_4 + 5 x_1x_2x_5 - 5 x_1^2x_6) \\
&+
e_3^3 \cdot
(x_3^3 - 3 x_2x_3x_4 + 3 x_2^2x_5 \\
& \quad \quad \quad \quad
+ 3 x_1x_4^2 - 3 x_1x_3x_5 - 3 x_1x_2x_6 + 3 x_1^2x_7) \\
&\quad +
e_2^3e_3 \cdot
x_1(x_4^2 - 2 x_3x_5 + 2 x_2x_6 - 2 x_1x_7) \\
&+
\cdots,
\end{align*}
where $e_2 = k_1k_2 + k_2k_3 + k_1k_3$ and $e_3 = k_1k_2k_3$ for short.
With some experience of calculating polynomials in $J$, one will find that each coefficient of a product of elementary symmetric polynomials in $k_1, \cdots, k_\ell$ is an invariant polynomial. Further, one may guess that the invariant polynomials arising in this way constitute an additive basis of a subspace in $J$: This turns out to be the case, as a result of Theorem \ref{thm:main}. In the next section, we consider $\mathrm{ch} \circ \psi^{(k_1, \cdots, k_\ell)}$ in purely algebraic setting to construct our basis of $J$.
\section{A basis of invariant polynomials}
\label{sec:basis}
\subsection{Preliminary}
We define $\mathrm{ch}(k | \vec{x})$ to be the following formal power series in variables $x_1, x_2, \cdots$ and $k$:
$$
\mathrm{ch}(k | \vec{x}) =
\sum_{i \ge 1} k^i x_i =
k x_1 + k^2 x_2 + k^3 x_3 + \cdots.
$$
For an integer $\ell \ge 1$, we let $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})$ be the following formal power series in the variables $x_1, x_2, \cdots$ and $k_1, \cdots, k_\ell$:
$$
\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})
=
\mathrm{ch}(k_1 | \vec{x}) \cdots \mathrm{ch}(k_\ell | \vec{x})
=
\sum_{i_1, \cdots, i_\ell \ge 1}
k_1^{i_1} \cdots k_\ell^{i_\ell} x_{i_1} \cdots x_{i_\ell}.
$$
We write $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n$ for the degree $n$ part of $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})$ with respect to $x_i$. By construction, $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n$ is a symmetric polynomial in $k_1, \cdots, k_\ell$ with its coefficients in $A$. In particular, the symmetric polynomial is of degree $n$, provided that each $k_i$ is given degree $1$.
As is well-known \cite{M}, the ring of symmetric polynomials in $k_1, \cdots, k_\ell$ is the polynomial ring in the elementary symmetric polynomials $e_1, \cdots, e_\ell$:
$$
e_j(\vec{k}) = e_j(k_1, \cdots, k_\ell)
=
\sum_{1 \le i_1 < \cdots < i_j \le \ell}
k_{i_1} \cdots k_{i_j}.
$$
For integers $n$ and $\ell$ such that $n \ge \ell \ge 1$, we put
$$
B_{n}^{(\ell)}
=
\left\{
\beta = (\beta_1, \beta_2, \cdots, \beta_\ell) \in \mathbb{Z}^\ell \bigg| \
\begin{array}{c}
\beta_1, \cdots, \beta_{\ell-1} \ge 0, \ \beta_{\ell} \ge 1 \\
\beta_1 + 2 \beta_2 + \cdots + \ell \beta_{\ell} = n
\end{array}
\right\}.
$$
We set $B^{(0)}_0 = B^{(0)}$ and $B^{(0)}_n = \emptyset$ for $n \ge 1$. Then $B^{(\ell)}$ in Section \ref{sec:introduction} is $B^{(\ell)} = \bigcup_{n \ge \ell} B_{n}^{(\ell)}$. To an element $\beta = (\beta_1, \cdots, \beta_\ell) \in B_n^{(\ell)}$, we associate the product of elementary symmetric polynomials $e^\beta(\vec{k}) = e_1(\vec{k})^{\beta_1} \cdots e_\ell(\vec{k})^{\beta_\ell}$. These products form a basis of the vector space of symmetric polynomials of degree $n$ in $k_1, \cdots, k_\ell$.
\begin{dfn}
Let $n$ and $\ell$ be integers such that $n \ge \ell \ge 1$. For $\beta \in B_n^{(\ell)}$, we define a polynomial $g_\beta(\vec{x}) \in A$ of degree $n$ by the following formula:
$$
\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n
=
\sum_{\substack{i_1, \cdots, i_\ell \ge 1 \\ i_1 + \cdots + i_\ell = n}}
k_1^{i_1} \cdots k_\ell^{i_\ell}
x_{i_1} \cdots x_{i_\ell}
=
\sum_{\beta \in B_n^{(\ell)}}
e^\beta(k_1, \cdots, k_\ell) g_\beta(\vec{x}).
$$
For $\beta = (\emptyset) \in B^{(0)}$, we define $g_\beta(\vec{x}) = 1$.
\end{dfn}
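For instance (a minimal sketch in Python/sympy; the case $\ell = 2$, $n = 4$ is chosen for brevity), the definition gives $g_{(2,1)}(\vec{x}) = x_1x_3$ and recovers $g_{(0,2)}(\vec{x}) = x_2^2 - 2x_1x_3$:
\begin{verbatim}
# Minimal sketch: for l = 2, n = 4, check the expansion
#   ch(k1, k2 | x)_4 = e1^2 e2 * (x1*x3) + e2^2 * (x2^2 - 2*x1*x3),
# i.e. g_{(2,1)} = x1*x3 and g_{(0,2)} = x2^2 - 2*x1*x3.
from sympy import symbols, expand

k1, k2, x1, x2, x3 = symbols('k1 k2 x1 x2 x3')
e1, e2 = k1 + k2, k1*k2
xv = [0, x1, x2, x3]

ch4 = sum(k1**i * k2**(4 - i) * xv[i] * xv[4 - i] for i in range(1, 4))
print(expand(ch4 - (e1**2*e2 * x1*x3 + e2**2 * (x2**2 - 2*x1*x3))))   # 0
\end{verbatim}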
Another equivalent definition of $g_\beta(\vec{x})$ makes use of the transition matrix $M = M(m, e)$ from elementary symmetric polynomials to monomial symmetric polynomials \cite{M}: let $\Lambda_n^{(\ell)}$ be the set of partitions of $n$ of length $\ell$:
$$
\Lambda_n^{(\ell)}
=
\left\{
\lambda = (\lambda_1, \cdots, \lambda_\ell) \in \mathbb{Z}^\ell | \
\lambda_1 \ge \cdots \ge \lambda_\ell \ge 1, \
\lambda_1 + \cdots + \lambda_\ell = n
\right\},
$$
which can be identified with $B_n^{(\ell)}$ through the change of expression:
$$
\beta = (\beta_1, \cdots, \beta_\ell) \leftrightarrow
\lambda =
(
\overbrace{\ell, \cdots, \ell}^{\beta_\ell},
\cdots,
\overbrace{2, \cdots, 2}^{\beta_2},
\overbrace{1, \cdots, 1}^{\beta_1}
).
$$
For $\lambda \in \Lambda_n^{(\ell)}$, the \textit{monomial symmetric polynomial} $m_\lambda(\vec{k}) = m_\lambda(k_1, \cdots, k_\ell)$ is the polynomial $\sum k_1^{\mu_1} \cdots k_\ell^{\mu_\ell}$ summed over all distinct permutations $(\mu_1, \cdots, \mu_\ell)$ of $\lambda$. They also form a basis of the space of symmetric polynomials of degree $n$ in $k_i$. Let $M_{\lambda \beta}$ be the transition matrix given by the base change $m_\lambda = \sum_\beta M_{\lambda \beta} e^\beta$. Because of the expression
$$
\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n
=
\sum_{\lambda \in \Lambda_n^{(\ell)}}
m_\lambda(k_1, \cdots, k_\ell) x_\lambda
=
\sum_{\beta \in B_n^{(\ell)}}
e^\beta(k_1, \cdots, k_\ell) g_\beta(\vec{x}),
$$
the other definition of $g_\beta(\vec{x})$ is
$$
g_\beta(\vec{x}) =
\sum_{\lambda \in \Lambda_n^{(\ell)}}
M_{\lambda \beta} x_\lambda,
$$
where $x_\lambda = x_{\lambda_1} \cdots x_{\lambda_\ell}$ for $\lambda = (\lambda_1, \cdots, \lambda_\ell)$.
\begin{rem}
Since $M_{\lambda \beta} \in \mathbb{Z}$, the latter definition shows $g_\beta(\vec{x}) \in \mathbb{Z}[x_1, x_2, \cdots ] \subset A$.
\end{rem}
\begin{rem}
The transition matrix $(M_{\lambda \beta})_{\lambda \beta}$ can be computed from the \textit{Kostka matrix}. Some facts about the matrix in \cite{M} imply the expression:
$$
g_\beta(\vec{x}) =
x_{\beta'}
+ \sum_{\substack{\lambda \in \Lambda_n^{(\ell)} \\ \beta' < \lambda}}
M_{\lambda \beta} x_\lambda.
$$
Here $\beta' \in \Lambda_n$ is the \textit{conjugate} of the partition $\beta$. The meaning of $\beta' < \lambda$ is that $\beta' \neq \lambda$ and $\beta' \le \lambda$, where $\le$ is the \textit{natural (partial) ordering} \cite{M}.
\end{rem}
\begin{rem}
For $n \ge \ell$, let $\omega \in \Lambda_n^{(\ell)}$ be $\omega = (n - \ell + 1, \overbrace{1, \cdots, 1}^{\ell-1})$, which satisfies $\lambda \le \omega$ for all $\lambda \in \Lambda_n^{(\ell)}$. For any $\beta = (\beta_1, \beta_2, \cdots, \beta_\ell) \in B_n^{(\ell)}$, the coefficient $M_{\omega\beta}$ of $x_\omega$ in $g_\beta(\vec{x})$ never vanishes: If $n = \ell$, then $M_{\omega\beta} = 1$. If $n > \ell$, then we obtain
$$
M_{\omega \beta}
=
(-1)^{\ell + 1 + \sum_{i}\beta_{2i}}
\frac{n - \ell}{(\sum_i \beta_i) - 1}
\frac{((\sum_i \beta_i) - 1)!}{\prod_i (\beta_i!)} \beta_\ell,
$$
by using so-called \textit{Waring's formula}, an explicit formula expressing the power sum in terms of the elementary symmetric functions (page 33, Example 20, \cite{M}).
\end{rem}
\subsection{Proof of Theorem \ref{thm:main}}
We denote by $A_n^{(\ell)} \subset A$ the subspace consisting of polynomials of degree $n$ in $x_1, x_2, \cdots$ which are linear combinations of monomials $x_{i_1} \cdots x_{i_\ell}$ of length $\ell$. We set $A_n = \bigoplus_\ell A^{(\ell)}_n$ and $A^{(\ell)} = \bigoplus_n A^{(\ell)}_n$. By construction, the polynomial $g_\beta(\vec{x})$ with $\beta \in B_n^{(\ell)}$ belongs to the subspace $A_n^{(\ell)}$.
\begin{lem} \label{lem:polynomial_ring}
The following holds about the polynomial ring $A = \mathbb{C}[x_1, x_2, \cdots]$:
\begin{itemize}
\item[(a)]
For $n \ge \ell \ge 0$, the set $\{ g_\beta(\vec{x}) |\ \beta \in B_n^{(\ell)} \}$ is a basis of $A_n^{(\ell)}$.
\item[(b)]
For $\beta \in B^{(\ell)}$ and $\beta' \in B^{(\ell')}$, the product of the polynomials $g_\beta(\vec{x})$ and $g_{\beta'}(\vec{x})$ is expressed as
$$
g_\beta(\vec{x}) g_{\beta'}(\vec{x})
= \sum_{\beta'' \in B^{(\ell+\ell')}}
N_{\beta \beta'}^{\beta''} g_{\beta''}(\vec{x}),
$$
where $N_{\beta \beta'}^{\beta''}$ is the non-negative integer introduced in Section \ref{sec:introduction}.
\end{itemize}
\end{lem}
\begin{proof}
For (a), the set $\{ x_\lambda |\ \lambda \in \Lambda_n^{(\ell)} \}$ clearly provides a basis of $A_n^{(\ell)}$. Since the transition matrix $( M_{\lambda \beta} )$ is invertible, $\{ g_\beta(\vec{x}) |\ \beta \in B_n^{(\ell)} \}$ gives rise to a basis of $A_n^{(\ell)}$ as well. Then, (b) follows from the obvious formula
$$
\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x}) \mathrm{ch}(k'_1, \cdots, k'_{\ell'} | \vec{x})
=
\mathrm{ch}(k_1, \cdots, k_\ell, k'_1, \cdots, k'_{\ell'} | \vec{x}),
$$
together with the definition of $g_\beta(\vec{x})$ and that of $N_{\beta \beta'}^{\beta''}$.
\end{proof}
For $i \ge 0$ and $n \ge \ell > 1$, we define $B_n^{(\ell)}(i)$ to be the following subset in $B_n^{(\ell)}$:
$$
B_n^{(\ell)}(i)
=
\left\{
\beta = (\beta_1, \beta_2, \cdots, \beta_\ell) \in \mathbb{Z}^\ell \bigg| \
\begin{array}{c}
\beta_1 = i, \ \beta_2, \cdots, \beta_{\ell-1} \ge 0, \ \beta_{\ell} \ge 1 \\
\beta_1 + 2 \beta_2 + \cdots + \ell \beta_{\ell} = n
\end{array}
\right\}.
$$
In the case of $\ell = 0$ and $\ell = 1$, we also define $B_n^{(\ell)}(i)$ by
\begin{align*}
B_n^{(0)}(i) &=
\left\{
\begin{array}{cc}
B^{(0)}, & (n = i = 0), \\
\emptyset, & \mbox{otherwise},
\end{array}
\right. &
B_n^{(1)}(i) &=
\left\{
\begin{array}{cc}
\{ \beta = (i+1) \}, & (n = i+1), \\
\emptyset, & (n \neq i+1).
\end{array}
\right.
\end{align*}
\begin{lem} \label{lem:derivation}
For any $n \ge \ell \ge 1$ and $\beta = (\beta_1, \cdots, \beta_\ell) \in B_n^{(\ell)}(i)$, we have:
$$
d g_{(\beta_1, \cdots, \beta_\ell)}(\vec{x}) =
\left\{
\begin{array}{cl}
0, & (i = 0), \\
g_{(\beta_1 - 1, \beta_2, \cdots, \beta_\ell)}(\vec{x}), & (i > 0).
\end{array}
\right.
$$
\end{lem}
\begin{proof}
By the defining formula of $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n$, we have
$$
d \mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_n
= e_1(k_1, \cdots, k_\ell) \mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})_{n-1}.
$$
This formula and the definition of $g_\beta(\vec{x})$ establish the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
By Lemmas \ref{lem:polynomial_ring} and \ref{lem:derivation}, the subspace $J = \mathrm{Ker}(d) \subset A$ of invariant polynomials has the additive basis $\{ g_\beta(\vec{x}) |\ \beta \in B(0) \}$. Since $J \subset A$ is also a subring, we use Lemma \ref{lem:polynomial_ring} again to see that, for $\beta \in B^{(\ell)}(0)$ and $\beta' \in B^{(\ell')}(0)$, the product of the polynomials $g_\beta(\vec{x})$ and $g_{\beta'}(\vec{x})$ is expressed as
$$
g_\beta(\vec{x}) g_{\beta'}(\vec{x})
= \sum_{\beta'' \in B^{(\ell+\ell')}}
N_{\beta \beta'}^{\beta''} g_{\beta''}(\vec{x})
= \sum_{\beta'' \in B^{(\ell+\ell')}(0)}
N_{\beta \beta'}^{\beta''} g_{\beta''}(\vec{x}).
$$
Thus, $\beta \mapsto g_\beta(\vec{x})$ induces a ring isomorphism $J' \cong J$.
\end{proof}
\subsection{Lifts to twisted cohomology}
In \cite{A-Se2}, the Chern class corresponding to $f \in J$ of positive degree is, at the first stage, constructed as a natural map
$$
c_f : \ K_\tau(X) \longrightarrow H^*(X).
$$
Then, at the second stage, $c_f$ is lifted to the twisted cohomology
$$
C_f : \ K_\tau(X) \longrightarrow H_\eta^*(X)
$$
so that its leading term agrees with $c_f$. In the universal setting, such a lift is in one-to-one correspondence with a formal power series in $x_1, x_2, \cdots$:
$$
F(x_1, x_2, \cdots)
=
f(x_1, x_2, \cdots) + \mbox{higher degree term}
$$
satisfying $dF = F$. Clearly, a polynomial $f \in J$ admits various lifts $F$. A way to construct a lift \cite{A-Se2} is as follows: let $\delta : A \to A$ be the derivation defined by $\delta x_i = i x_{i+1}$. Then any $f \in J_n$ has the following series as its lift:
$$
\left(\exp \frac{\delta}{n} \right)f
=
f + \frac{1}{n} \delta f + \frac{1}{2 n^2} \delta^2f + \cdots +
\frac{1}{(i!) n^i} \delta^i f + \cdots.
$$
Using the basis $\{ g_\beta(\vec{x}) \}$, we provide another way to construct a lift:
\begin{prop}
Let $n$ and $\ell$ be such that $n \ge \ell \ge 1$. For $\beta = (\beta_1, \beta_2, \cdots, \beta_\ell) \in B^{(\ell)}_n(0)$, the polynomial $g_\beta(\vec{x}) \in J_n^{(\ell)}$ has the following series as its lift:
$$
\tilde{g}_\beta(\vec{x})
=
\sum_{i \ge 0} g_{\beta(i)}(\vec{x})
=
g_\beta(\vec{x}) + g_{\beta(1)}(\vec{x}) + g_{\beta(2)}(\vec{x}) + \cdots,
$$
where $\beta(i) = (\beta_1 + i, \beta_2, \cdots, \beta_\ell) \in B^{(\ell)}_{n+i}(i)$ for $i \ge 0$.
\end{prop}
\begin{proof}
This is an immediate consequence of Lemma \ref{lem:derivation}.
\end{proof}
By Theorem \ref{thm:main}, the lifts $\tilde{g}_\beta(\vec{x})$ form a basis of the universal Chern classes in twisted cohomology, which answers the problem of finding a basis raised in \cite{A-Se2}.
For $g_{(1)}(\vec{x}) = x_1 \in J_1$, our lift agrees with that of Atiyah and Segal:
$$
(\exp \delta)(g_{(1)}) = \tilde{g}_{(1)}(\vec{x})
=
\mathrm{ch}(1 | \vec{x})
= x_1 + x_2 + x_3 + \cdots,
$$
but not in general, as is seen in the case of $g_{(0, 1)}(\vec{x}) = x_1^2 \in J_2^{(2)}$:
\begin{align*}
\left( \exp \frac{\delta}{2} \right)x_1^2 &=
x_1^2 + x_1x_2 + \frac{1}{4}(x_2^2 + 2x_1x_3)
+ \frac{1}{4}(x_2x_3 + x_1x_4)
+ \cdots, \\
\tilde{g}_{(0, 1)}(\vec{x}) &=
x_1^2 + x_1x_2 + x_1x_3 + x_1x_4 + \cdots.
\end{align*}
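On a truncation, the invariance condition $dF = F$ can also be checked directly, up to the terms lost at the truncation degree. Here is a minimal sketch (Python/sympy, with the same derivation $d$ as in the sketch of Section \ref{sec:introduction}) for the lift $\tilde{g}_{(0,1)}(\vec{x})$:
\begin{verbatim}
# Minimal sketch: F = x1^2 + x1*x2 + ... + x1*x8 is a truncation of the lift
# of g_{(0,1)} = x1^2; applying d returns F minus its top-degree term.
from sympy import symbols, diff, expand

N = 8
xs = symbols('x1:%d' % (N + 1))

def d(f):
    return expand(sum(diff(f, xs[i]) * xs[i - 1] for i in range(1, N)))

F = sum(xs[0] * xs[j] for j in range(N))     # x1^2 + x1*x2 + ... + x1*x8
print(expand(d(F) - F))                      # -x1*x8: only the top term is lost
\end{verbatim}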
A motivation of Atiyah and Segal to introduce Adams operations is to construct a characteristic class of twisted $K$-theory living in twisted cohomology: If $k_1, \cdots, k_\ell$ are integers such that $k_1 + \cdots + k_\ell = 1$, then the Chern character and the Adams operations combine to give a characteristic class
$$
\mathrm{ch} \circ \psi^{(k_1, \cdots, k_\ell)} : \
K_\tau(X) \longrightarrow H^*_\eta(X).
$$
In the universal setting, this corresponds to the series $\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x})$, where $k_1, \cdots, k_\ell$ are now regarded as numbers, rather than formal variables. With respect to our basis, we can express the series as:
$$
\mathrm{ch}(k_1, \cdots, k_\ell | \vec{x}) = \sum_{\beta \in B^{(\ell)}(0)}
e^\beta(k_1, \cdots, k_\ell) \tilde{g}_\beta(\vec{x}).
$$
\appendix
\section{Lists}
\subsection{Poincar\'e polynomial}
The generating function $\mathcal{J}(u,t) = \sum_{n, \ell}\mathrm{dim}J_n^{(\ell)}u^\ell t^n$ for the dimension of $J_n^{(\ell)} = A_n^{(\ell)} \cap J$ has the following formula:
$$
\mathcal{J}(u,t)
= \frac{1-t}{(1-ut)(1-ut^2)(1-ut^3)(1-ut^4) \cdots} + t.
$$
If we substitute $u = 1$, we get the formula of the generating function $\mathcal{J}(t) = \sum_n(\mathrm{dim}J_n)t^n$ for the dimension of $J_n$ in \cite{A-Se2}:
\begin{align*}
\mathcal{J}(t)
&=
\frac{1}{(1-t^2)(1-t^3)(1-t^4) \cdots} + t \\
&=
1 + t + t^2 + t^3 + 2t^4 + 2t^5 + 4t^6 + 4t^7 + 7t^8 + 8t^9 + 12t^{10} \\
& \quad
+ 14t^{11} + 21t^{12} + 24t^{13} + 34t^{14}
+ 41t^{15} + 55t^{16} + 66t^{17} + 88 t^{18} \\
& \quad
+ 105 t^{19} + 137 t^{20} + 165 t^{21}
+ 210 t^{22} + 235 t^{23} + 320 t^{24} + \cdots.
\end{align*}
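These coefficients can be reproduced quickly (a minimal sketch in Python/sympy; the truncation of the product at $t^{16}$ is an arbitrary choice):
\begin{verbatim}
# Minimal sketch: expand J(t) = 1/((1-t^2)(1-t^3)...) + t and compare with
# the coefficients listed above (the product is truncated at i = 16).
from sympy import symbols, series, prod

t = symbols('t')
Jt = 1 / prod([1 - t**i for i in range(2, 17)]) + t
print(series(Jt, t, 0, 17))
# 1 + t + t**2 + t**3 + 2*t**4 + 2*t**5 + 4*t**6 + ... + 55*t**16 + O(t**17)
\end{verbatim}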
For $\ell \ge 0$, the generating function $\mathcal{J}^{(\ell)}(t) = \sum_n (\mathrm{dim} J_n^{(\ell)}) t^n$ is
\begin{align*}
\mathcal{J}^{(0)}(t) &= 1, &
\mathcal{J}^{(1)}(t) &= t, &
\mathcal{J}^{(\ell)}(t)
&=
\frac{t^\ell}{(1-t^2) (1 - t^3) \cdots (1- t^\ell)}. \ (\ell \ge 2).
\end{align*}
A calculation gives the table:
\begin{center}
\begin{tabular}{c|cccccccccccccccc|c}
$\ell$ &
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 &
total \\
\hline
$\mathrm{dim} J_1^{(\ell)}$ &
1 & & & & & & & & & & & & & & & &
1 \\
\hline
$\mathrm{dim} J_2^{(\ell)}$ &
0 & 1 & & & & & & & & & & & & & & &
1 \\
\hline
$\mathrm{dim} J_3^{(\ell)}$ &
0 & 0 & 1 & & & & & & & & & & & & & &
1 \\
\hline
$\mathrm{dim} J_4^{(\ell)}$ &
0 & 1 & 0 & 1 & & & & & & & & & & & & &
2 \\
\hline
$\mathrm{dim} J_5^{(\ell)}$ &
0 & 0 & 1 & 0 & 1 & & & & & & & & & & & &
2 \\
\hline
$\mathrm{dim} J_6^{(\ell)}$ &
0 & 1 & 1 & 1 & 0 & 1 & & & & & & & & & & &
4 \\
\hline
$\mathrm{dim} J_7^{(\ell)}$ &
0 & 0 & 1 & 1 & 1 & 0 & 1 & & & & & & & & & &
4 \\
\hline
$\mathrm{dim} J_8^{(\ell)}$ &
0 & 1 & 1 & 2 & 1 & 1 & 0 & 1 & & & & & & & & &
7 \\
\hline
$\mathrm{dim} J_9^{(\ell)}$ &
0 & 0 & 2 & 1 & 2 & 1 & 1 & 0 & 1 & & & & & & & &
8 \\
\hline
$\mathrm{dim} J_{10}^{(\ell)}$ &
0 & 1 & 1 & 3 & 2 & 2 & 1 & 1 & 0 & 1 & & & & & & &
12 \\
\hline
$\mathrm{dim} J_{11}^{(\ell)}$ &
0 & 0 & 2 & 2 & 3 & 2 & 2 & 1 & 1 & 0 & 1 & & & & & &
14 \\
\hline
$\mathrm{dim} J_{12}^{(\ell)}$ &
0 & 1 & 2 & 4 & 3 & 4 & 2 & 2 & 1 & 1 & 0 & 1 & & & & & 21
\\
\hline
$\mathrm{dim} J_{13}^{(\ell)}$ &
0 & 0 & 2 & 3 &
5 & 3 & 4 & 2 & 2 &
1 & 1 & 0 & 1 & & & &
24 \\
\hline
$\mathrm{dim} J_{14}^{(\ell)}$ &
0 & 1 & 2 & 5 &
5 & 6 & 4 & 4 & 2 &
2 & 1 & 1 & 0 & 1 & & &
34 \\
\hline
$\mathrm{dim} J_{15}^{(\ell)}$ &
0 & 0 & 3 & 4 &
7 & 6 & 6 & 4 & 4 &
2 & 2 & 1 & 1 & 0 & 1 & &
41 \\
\hline
$\mathrm{dim} J_{16}^{(\ell)}$ &
0 & 1 & 2 & 7 & 7 &
9 & 7 & 7 & 4 & 4 &
2 & 2 & 1 & 1 & 0 &
1 &
55 \\
\end{tabular}
\end{center}
\subsection{Lists of bases}
In the following, the monomials $x_\lambda$ in the invariant polynomial $g_\beta(\vec{x})$ are arranged by using the ordering $L'_n$ on the set of partitions \cite{M}.
\subsubsection{$\ell = 1$ and $\ell = 2$}
We have
\begin{align*}
\mathcal{J}^{(1)}(t) &= t, &
\mathcal{J}^{(2)}(t) &= t^2 + t^4 + t^6 + t^8 + \cdots
\end{align*}
The basis element in $J^{(1)}$ is $g_{(1)} = x_1$ and the basis elements in $J^{(2)}$ are
\begin{align*}
g_{(0,1)}
&=
x_1^2 \\
g_{(0,2)}
&=
x_2^2 - 2 x_1x_3 \\
g_{(0,3)}
&=
x_3^2 - 2 x_2x_4 + 2 x_1x_5 \\
g_{(0,4)}
&=
x_4^2 - 2 x_3x_5 + 2 x_2x_6 - 2x_1x_7 \\
g_{(0,5)}
&=
x_5^2 - 2 x_4x_6 + 2 x_3x_7 - 2x_2x_8 + 2 x_1x_9 \\
& \vdots \\
g_{(0, m)}
&=
x_m^2 - 2 x_{m-1}x_{m+1} + 2 x_{m-2}x_{m+2} - \cdots
+ (-1)^{m-1} 2 x_1x_{2m-1}
\end{align*}
\subsubsection{$\ell = 3$}
We have
$$
\mathcal{J}^{(3)}(t) = t^3 + t^5 + t^6 + t^7 + t^8 + 2t^9 +
t^{10} + 2t^{11} + 2t^{12} + 2t^{13} + 2t^{14} + 3t^{15} + \cdots
$$
\begin{align*}
g_{(0,0,1)}
&=
x_1^3 \\
g_{(0,1,1)}
&=
x_1(x_2^2 - 2x_1x_3) \\
g_{(0,0,2)}
&=
x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4 \\
g_{(0,2,1)}
&=
x_1(x_3^2 - 2x_2x_4 + 2x_1x_5) \\
g_{(0,1,2)}
&=
x_2x_3^2 - 2 x_2^2x_4 - x_1x_3x_4 + 5 x_1x_2x_5 - 5 x_1^2x_6 \\
g_{(0,0,3)}
&=
x_3^3 - 3 x_2x_3x_4 + 3 x_2^2x_5
+ 3 x_1x_4^2 - 3 x_1x_3x_5 - 3 x_1x_2x_6 + 3 x_1^2x_7 \\
g_{(0,3,1)}
&=
x_1(x_4^2 - 2 x_3x_5 + 2 x_2x_6 - 2 x_1x_7) \\
g_{(0,2,2)}
&=
x_2x_4^2 - 2 x_2x_3x_5 + 2 x_2^2x_6 - x_1x_4x_5 + 3 x_1x_3x_6
- 7 x_1x_2x_7 + 7 x_1^2x_8 \\
g_{(0,1,3)}
&=
x_3x_4^2 - 2 x_3^2x_5 - x_2x_4x_5 + 5 x_2x_3x_6 - 5 x_2^2x_7 \\
&\quad
+ 4 x_1x_5^2 - 7 x_1x_4x_6 + 2 x_1x_3x_7 + 8 x_1x_2x_8 - 8 x_1^2x_9 \\
g_{(0,4,1)}
&=
x_1(x_5^2 - 2 x_4x_6 + 2 x_3x_7 - 2 x_2x_8 + 2 x_1x_9) \\
& \\
g_{(0,3,2)}
&=
x_2x_5^2 - 2 x_2x_4x_6 + 2 x_2x_3x_7 - 2 x_2^2x_8 \\
&\quad
- x_1x_5x_6 + 3 x_1x_4x_7 - 5 x_1x_3x_8 + 9 x_1x_2x_9 -9 x_1^2 x_{10} \\
g_{(0,0,4)}
&=
x_4^3 - 3 x_3x_4x_5 + 3 x_2x_5^2 + 3 x_3^2x_6
- 3 x_2x_4x_6 - 3 x_2x_3x_7 + 3 x_2^2x_8 \\
&\quad
- 3 x_1x_5x_6 + 6 x_1x_4x_7 - 3 x_1x_3x_8 - 3 x_1x_2x_9 + 3 x_1^2x_{10} \\
& \\
g_{(0,2,3)}
&=
x_3 x_5^2 - 2 x_3 x_4 x_6 + 2 x_3^2 x_7
- x_2 x_5 x_6 + 3 x_2 x_4 x_7 - 7 x_2 x_3 x_8 + 7 x_2^2 x_9 \\
&\quad
+ 5 x_1x_6^2 - 9 x_1x_5x_7 + 6 x_1x_4x_8 + x_1x_3x_9
- 15 x_1x_2x_{10} + 15 x_1^2x_{11} \\
g_{(0,5,1)}
&=
x_1(x_6^2 - 2 x_5 x_7 + 2 x_4 x_8 - 2 x_3 x_9
+ 2 x_2x_{10} - 2 x_1x_{11}) \\
& \\
g_{(0,4,2)}
&=
x_2x_6^2 - 2 x_2x_5x_7 + 2 x_2x_4x_8 - 2 x_2x_3x_9 + 2 x_2^2x_{10} \\
&\quad
- x_1 x_6 x_7
+ 3 x_1 x_5 x_8
- 5 x_1 x_4 x_9
+ 7 x_1 x_3 x_{10}
- 11 x_1 x_2 x_{11}
+ 11 x_1^2 x_{12} \\
g_{(0,1,4)}
&=
x_4x_5^2 - 2 x_4^2x_6 - x_3x_5x_6 + 5 x_3x_4x_7 - 5 x_3^2x_8 \\
&\quad
+ 4 x_2x_6^2 - 7 x_2x_5x_7 + 2 x_2x_4x_8 + 8 x_2x_3x_9 - 8 x_2^2x_{10} \\
& \quad
- 4 x_1x_6x_7
+ 11 x_1x_5x_8
- 13 x_1x_4x_9
+ 5 x_1x_3x_{10}
+ 11 x_1x_2x_{11}
-11 x_1^2x_{12} \\
& \\
g_{(0,0,5)}
&=
x_5^3
- 3 x_4 x_5 x_6 + 3 x_4^2 x_7
+ 3 x_3 x_6^2 - 3 x_3 x_5 x_7 - 3 x_3 x_4 x_8 + 3 x_3^2 x_9 \\
&\quad
- 3 x_2 x_6 x_7
+ 6 x_2 x_5 x_8
- 3 x_2 x_4 x_9
- 3 x_2 x_3 x_{10}
+ 3 x_2^2 x_{11}
+ 3 x_1 x_7^2 \\
&\quad
- 3 x_1 x_6 x_8
- 3 x_1 x_5 x_9
+ 6 x_1 x_4 x_{10}
- 3 x_1 x_3 x_{11}
- 3 x_1 x_2 x_{12}
+ 3 x_1^2 x_{13} \\
g_{(0,3,3)}
&=
x_3 x_6^2
- 2 x_3 x_5 x_7
+ 2 x_3 x_4 x_8
- 2 x_3^2 x_9
- x_2 x_6 x_7 \\
&\quad
+ 3 x_2 x_5 x_8
- 5 x_2 x_4 x_9
+ 9 x_2 x_3 x_{10}
- 9 x_2^2 x_{11}
+ 6 x_1 x_7^2
- 11 x_1 x_6 x_8 \\
&\quad
+ 8 x_1 x_5 x_9
- 3 x_1 x_4 x_{10}
- 6 x_1 x_3 x_{11}
+ 24 x_1 x_2 x_{12}
-24 x_1^2 x_{13} \\
g_{(0,6,1)}
&=
x_1 (x_7^2
- 2 x_6 x_8
+ 2 x_5 x_9
- 2 x_4 x_{10}
+ 2 x_3 x_{11}
- 2 x_2 x_{12}
+ 2 x_1 x_{13})
\end{align*}
\subsubsection{$\ell = 4$}
We have
$$
\mathcal{J}^{(4)}(t) = t^4 + t^6 + t^7 + 2t^8 + t^9 +
3t^{10} + 2t^{11} + 4t^{12} +
3 t^{13} + 5 t^{14} + 4 t^{15} + \cdots
$$
\begin{align*}
g_{(0,0,0,1)}
&=
x_1^4 \\
g_{(0,1,0,1)}
&=
x_1^2(x_2^2 - 2 x_1x_3) \\
g_{(0,0,1,1)}
&=
x_1(x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4) \\
g_{(0,2,0,1)}
&=
x_1^2(x_3^2 - 2 x_2x_4 + 2 x_1x_5) \\
g_{(0,0,0,2)}
&=
x_2^4 - 4 x_1x_2^2x_3 + 2x_1^2x_3^2 + 4x_1^2x_2x_4 - 4x_1^3x_5 \\
g_{(0,1,1,1)}
&=
x_1(x_2x_3^2 - 2 x_2^2x_4 - x_1x_3x_4 + 5 x_1x_2x_5 - 5 x_1^2x_6) \\
& \\
g_{(0,0,2,1)}
&=
x_1(x_3^3 - 3 x_2 x_3 x_4 + 3 x_2^2 x_5+ 3 x_1 x_4^2
- 3 x_1 x_3 x_5 - 3 x_1 x_2 x_6 + 3 x_1^2 x_7) \\
g_{(0,1,0,2)}
&=
x_2^2 x_3^2 - 2 x_2^3 x_4
- 2 x_1 x_3^3
+ 4 x_1 x_2 x_3 x_4
+ 2 x_1 x_2^2 x_5 \\
&\quad
- 3 x_1^2 x_4^2 + 2 x_1^2 x_3 x_5 - 6 x_1^2 x_2 x_6 + 6 x_1^3 x_7 \\
g_{(0,3,0,1)}
&=
x_1^2(x_4^2 - 2 x_3 x_5 + 2 x_2 x_6 - 2 x_1 x_7) \\
& \\
g_{(0,0,1,2)}
&=
x_2 x_3^3 - 3 x_2^2 x_3 x_4 + 3 x_2^3 x_5
- x_1 x_3^2 x_4
+ 5 x_1 x_2 x_4^2
- 2 x_1 x_2 x_3 x_5
- 7 x_1 x_2^2 x_6 \\
&\quad
- 5 x_1^2 x_4 x_5
+ 7 x_1^2 x_3 x_6
+ 7 x_1^2 x_2 x_7
- 7 x_1^3 x_8 \\
g_{(0,2,1,1)}
&=
x_1(x_2 x_4^2 - 2 x_2 x_3 x_5 + 2 x_2^2 x_6
- x_1 x_4 x_5
+ 3 x_1 x_3 x_6
- 7 x_1 x_2 x_7
+ 7 x_1^2 x_8) \\
& \\
g_{(0,0,0,3)}
&=
x_3^4 - 4 x_2x_3^2x_4 + 2 x_2^2x_4^2 + 4 x_2^2x_3x_5 - 4 x_2^3x_6 \\
&\quad
+ 4 x_1 x_3 x_4^2 - 4 x_1 x_3^2 x_5
- 8 x_1 x_2 x_4 x_5 + 8 x_1 x_2 x_3 x_6 + 4 x_1 x_2^2 x_7 \\
&\quad
+ 6 x_1^2 x_5^2
- 4 x_1^2 x_4 x_6
- 4 x_1^2 x_3 x_7
- 4 x_1^2 x_2 x_8
+ 4 x_1^3 x_9 \\
g_{(0,2,0,2)}
&=
x_2^2 x_4^2 - 2 x_2^2 x_3 x_5 + 2 x_2^3 x_6 \\
&\quad
- 2 x_1 x_3 x_4^2
+ 4 x_1 x_3^2 x_5
- 4 x_1 x_2 x_3 x_6
- 2 x_1 x_2^2 x_7 \\
&\quad
- 4 x_1^2 x_5^2
+ 8 x_1^2 x_4 x_6
- 4 x_1^2 x_3 x_7
+ 8 x_1^2 x_2 x_8
- 8 x_1^3 x_9 \\
g_{(0,1,2,1)}
&=
x_1 \left(
x_3x_4^2 - 2 x_3^2x_5 - x_2x_4x_5 + 5 x_2x_3x_6 - 5 x_2^2x_7 \right. \\
&\quad \left.
+ 4 x_1x_5^2 - 7 x_1x_4x_6 + 2 x_1x_3x_7 + 8 x_1x_2x_8 - 8 x_1^2x_9 \right) \\
g_{(0,4, 0,1)}
&=
x_1^2(x_5^2 - 2 x_4 x_6 + 2 x_3 x_7 - 2 x_2 x_8 + 2 x_1 x_9)
\end{align*}
\subsubsection{$\ell \ge 5$}
$$
\mathcal{J}^{(5)}(t)
=
t^5 + t^7 + t^8 + 2 t^9 + 2 t^{10} + 3 t^{11} + 3 t^{12}
+ 5 t^{13} + 5 t^{14} + 7 t^{15} + \cdots.
$$
\begin{align*}
g_{(0,0,0,0,1)}
&=
x_1^5 \\
g_{(0,1,0,0,1)}
&=
x_1^3(x_2^2 - 2 x_1x_3) \\
g_{(0,0,1,0,1)}
&=
x_1^2(x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4) \\
& \\
g_{(0,0,0,1,1)}
&=
x_1(x_2^4 - 4 x_1x_2^2x_3 + 2 x_1^2x_3^2 + 4 x_1^2x_2x_4 - 4 x_1^3x_5) \\
g_{(0,2,0,0,1)}
&=
x_1^3(x_3^2 - 2 x_2x_4 + 2 x_1x_5) \\
& \\
g_{(0,0,0,0,2)}
&=
x_2^5 - 5 x_1 x_2^3 x_3 + 5 x_1^2 x_2 x_3^2 + 5 x_1^2 x_2^2 x_4
- 5 x_1^3 x_3 x_4 - 5 x_1^3 x_2 x_5 + 5 x_1^4 x_6 \\
g_{(0,1,1,0,1)}
&=
x_1^2(x_2x_3^2 - 2 x_2^2x_4 - x_1x_3x_4 + 5 x_1x_2x_5 - 5 x_1^2x_6) \\
& \\
g_{(0,1,0,1,1)}
&=
x_1(x_2^2 x_3^2 - 2 x_2^3 x_4
- 2 x_1 x_3^3
+ 4 x_1 x_2 x_3 x_4
+ 2 x_1 x_2^2 x_5 \\
&\quad
- 3 x_1^2 x_4^2
+ 2 x_1^2 x_3 x_5
- 6 x_1^2 x_2 x_6
+ 6 x_1^3 x_7) \\
g_{(0,0,2,0,1)}
&=
x_1^2(x_3^3 - 3 x_2 x_3 x_4 + 3 x_2^2 x_5 + 3 x_1 x_4^2
- 3 x_1 x_3 x_5 - 3 x_1 x_2 x_6 + 3 x_1^2 x_7) \\
g_{(0,3,0,0,1)}
&=
x_1^3 (x_4^2 - 2 x_3 x_5 + 2 x_2 x_6 - 2 x_1 x_7)
\end{align*}
$$
\mathcal{J}^{(6)}(t)
=
t^6 + t^8 + t^9 + 2 t^{10} + 2 t^{11} + 4 t^{12} +
3 t^{13} + 6 t^{14} + 6 t^{15} + \cdots.
$$
\begin{align*}
g_{(0,0,0,0,0,1)}
&=
x_1^6 \\
g_{(0,1,0,0,0,1)}
&=
x_1^4(x_2^2 - 2 x_1x_3) \\
g_{(0,0,1,0,0,1)}
&=
x_1^3(x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4) \\
& \\
g_{(0,0,0,1,0,1)}
&=
x_1^2(x_2^4 - 4 x_1x_2^2x_3 + 2 x_1^2x_3^2 + 4 x_1^2x_2x_4 - 4 x_1^3x_5) \\
g_{(0,2,0,0,0,1)}
&=
x_1^4 (x_3^2 - 2 x_2 x_4 + 2 x_1 x_5) \\
& \\
g_{(0,0,0,0,1,1)}
&=
x_1(x_2^5 - 5 x_1x_2^3x_3 + 5 x_1^2x_2x_3^2 + 5 x_1^2x_2^2x_4 \\
&\quad
- 5 x_1^3 x_3 x_4 - 5 x_1^3 x_2 x_5 + 5 x_1^4 x_6) \\
g_{(0,1,1,0,0,1)}
&=
x_1^3(x_2 x_3^2 - 2 x_2^2x_4 - x_1x_3x_4 + 5 x_1x_2x_5 - 5 x_1^2x_6)
\end{align*}
$$
\mathcal{J}^{(7)}(t)
=
t^7 + t^9 + t^{10} + 2 t^{11} + 2 t^{12}
+ 4 t^{13} + 4 t^{14} + 6 t^{15} + \cdots
$$
\begin{align*}
g_{(0,0,0,0,0,0,1)}
&=
x_1^7 \\
g_{(0,1,0,0,0,0,1)}
&=
x_1^5(x_2^2 - 2 x_1x_3) \\
g_{(0,0,1,0,0,0,1)}
&=
x_1^4(x_2^3 - 3 x_1x_2x_3 + 3 x_1^2x_4)
\end{align*}
\end{document} |
\begin{document}
\setlength{\columnsep}{5pt}
\title{\bf Core partial order in rings with involution}
\author{Xiaoxiang Zhang\footnote{ E-mail: [email protected]},
\ Sanzhang Xu\footnote{ E-mail: [email protected]},
\ Jianlong Chen\footnote{ Corresponding author. E-mail: [email protected] }.\\
Department of Mathematics, Southeast University \\ Nanjing 210096, China }
\date{}
\maketitle
\begin{quote}
{\textbf{Abstract:} \small
Let $R$ be a unital ring with involution.
Several characterizations and properties of core partial order are given.
In particular, we investigate the reverse order law $(ab)^{\tiny\textcircled{\tiny\#}}=b^{\tiny\textcircled{\tiny\#}}a^{\tiny\textcircled{\tiny\#}}$
for two core invertible elements $a,b\in R$.
Some relationships between core partial order and other partial orders are obtained.
\textbf {Keywords:} {\small Core inverse, core partial order, reverse order law, EP element.}
}
\end{quote}
\section{ Introduction }\label{a}
The core inverse of a complex matrix was introduced by Baksalary and Trenkler \cite{BT}.
Let $M_{n}(\mathbb{C})$ be the ring of all $n\times n$ complex matrices. A matrix $X\in M_{n}(\mathbb{C})$ is called a core inverse of $A\in M_{n}(\mathbb{C})$ if it satisfies
$AX=P_{A}$ and $\mathcal{R}(X)\subseteq \mathcal{R}(A)$,
where $\mathcal{R}(A)$ denotes the column space of $A$,
and $P_{A}$ is the orthogonal projector onto $\mathcal{R}(A)$.
If such a matrix $X$ exists, then it is unique and is denoted by $A^{\tiny\textcircled{\tiny\#}}$.
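For instance (a minimal numerical sketch in Python/sympy; the matrix $A$ and the candidate $X$ are our own illustrative choices, and we only verify the two defining conditions):
\begin{verbatim}
# Minimal sketch: verify the defining conditions of the core inverse for a
# 2x2 example.
from sympy import Matrix, Rational

A = Matrix([[1, 0], [1, 0]])
X = Matrix([[Rational(1, 2), Rational(1, 2)],
            [Rational(1, 2), Rational(1, 2)]])

P_A = A * A.pinv()          # orthogonal projector onto the column space of A
print(A * X == P_A)         # True:  AX = P_A
print(P_A * X == X)         # True:  R(X) is contained in R(A)
# By uniqueness, this X is the core inverse of A.
\end{verbatim}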
The core partial order for complex matrices was also introduced in \cite{BT}.
Let $\mathbb{C}^{CM}_{n}=\{A\in M_{n}(\mathbb{C})\mid \mathrm{rank}(A)=\mathrm{rank}(A^{2})\}$, let $A\in \mathbb{C}^{CM}_{n}$ and let $B\in M_{n}(\mathbb{C})$. The binary operation $\overset{\tiny\textcircled{\tiny\#}}\leq$ is defined as follows:
$$A\overset{\tiny\textcircled{\tiny\#}}\leq B~\Leftrightarrow~
A^{\tiny\textcircled{\tiny\#}}A=A^{\tiny\textcircled{\tiny\#}}B~~\mathrm{and}~~AA^{\tiny\textcircled{\tiny\#}}=BA^{\tiny\textcircled{\tiny\#}}.$$
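As a toy illustration (Python/sympy; the matrices are our own choices, and $A$ is an orthogonal projector, so its core inverse is $A$ itself), the two defining conditions can be checked directly:
\begin{verbatim}
# Minimal sketch: A = diag(1,0) is core-below B = diag(1,2).
from sympy import diag

A, B = diag(1, 0), diag(1, 2)
Ac = A                                        # core inverse of A in this example
print(Ac * A == Ac * B, A * Ac == B * Ac)     # True True
\end{verbatim}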
In \cite[Theorem 6]{BT}, it is proved that the core partial order is a matrix partial order. Baksalary and Trenkler gave several
characterizations of the core partial order and various relationships between it and other matrix partial orders by using the decomposition of Hartwig and Spindelb\"{o}ck \cite{HS}.
In \cite{RD}, Raki\'{c} and Djordjevi\'{c} generalized the matrix core partial order to the ring case. They gave various equivalent conditions for the core partial
order and investigated relationships between the core partial order and other partial orders in general rings.
Motivated by \cite{BT,M2,MRT,R,RD}, in this paper we give some new equivalent conditions and properties for the core partial order in general rings.
Moreover, some new relationships between the core partial order and other partial orders are obtained. As an application, we prove the reverse order law for two core invertible
elements under the core partial order.
Let $R$ be a $*$-ring, that is a ring with an involution $a\mapsto a^*$
satisfying $(a^*)^*=a$, $(ab)^*=b^*a^*$ and $(a+b)^*=a^*+b^*$ for all $a,b\in R$.
We say that $x\in R$ is the Moore-Penrose inverse of $a\in R$ if the following hold:
$$axa=a, \quad xax=x, \quad (ax)^{\ast}=ax, \quad (xa)^{\ast}=xa.$$
There is at most one $x$ such that the above four equations hold.
If such an element $x$ exists, it is denoted by $a^{\dagger}$. The set of all Moore-Penrose invertible elements will be denoted by $R^{\dagger}$.
An element $x\in R$ is an inner inverse of $a\in R$ if $axa=a$ holds. The set of all inner inverses of $a$ will be denoted by $a\{1\}$.
An element $a\in R$ is said to be group invertible
if there exists $x\in R$ such that the following equations hold:
$$axa=a, \quad xax=x, \quad ax=xa.$$
The element $x$ which satisfies the above equations is called a group inverse of $a$.
If such an element $x$ exists, it is unique and denoted by $a^\#$. The set of all group invertible elements will be denoted by $R^\#$.
An element $a\in R$ is said to be an EP element if $a\in R^{\dagger}\cap R^\#$ and $a^{\dagger}=a^\#.$ The set of all EP elements will be denoted by $R^{EP}$.
In \cite{RDD}, Raki\'{c}, Din\v{c}i\'{c} and Djordjevi\'{c} generalized the core inverse of a complex matrix to the case of an element of a ring.
Let $a,x\in R$. If
$$axa=a,~xR=aR,~Rx=Ra^{\ast},$$
then $x$ is called a core inverse of $a$; if such an element $x$ exists, then it is unique and is denoted by $a^{\tiny{\textcircled{\tiny\#}}}$. The set of all core invertible elements in $R$ will be denoted by $R^{\tiny{\textcircled{\tiny\#}}}$.
An element $p\in R$ is called self-adjoint idempotent if $p^{2}=p=p^{\ast}$.
An element $q\in R$ is called idempotent if $q^{2}=q$.
For $a,b\in R$, we have the following definitions:
\begin{itemize}
\item[{\rm $\bullet$}] the star partial order $a\overset{\ast}\leq b$: $a^{\ast}a=a^{\ast}b$ and $aa^{\ast}=ba^{\ast}$ \cite{D};
\item[{\rm $\bullet$}] the minus partial order $a\overset{-}\leq b$ if and only if there exists an $a^{-}\in a\{1\}$ such that $a^{-}a=a^{-}b$ and $aa^{-}=ba^{-}$ \cite{H2};
\item[{\rm $\bullet$}] the sharp partial order $a\overset{\#}\leq b$: $a^{\#}a=a^\#b$ and $aa^{\#}=ba^{\#}$ \cite{M}.
\end{itemize}
This paper is organized as follows. In Section 2, some new equivalent characterizations of the core partial order in rings are obtained.
In particular, the reverse order law for two core invertible elements in rings is given. In Section 3, some relationships between the core partial order and other
partial orders are obtained.
\section{ Equivalent conditions and properties of core partial order }\label{a}
In this section, some new characterizations of the core partial order in rings are obtained. Let us start this section with two auxiliary lemmas.
These two lemmas can be found in \cite[Lemma 2.2]{M} and \cite[Lemma 2.3 and Theorem 2.6]{RD}.
\begin{lem} \label{lemma-partial1}
Let $a\in R^\#$ and $b\in R$. Then:
\begin{itemize}
\item[{\rm (1)}] $a^\#a=a^\#b$ if and only if $a^{2}=ab$;
\item[{\rm (2)}] $aa^\#=ba^\#$ if and only if $a^{2}=ba$;
\item[{\rm (3)}] $a\overset{\#}\leq b$ if and only if $a^{2}=ab=ba$;
\item[{\rm (4)}] $a\overset{\#}\leq b$ if and only if there exists idempotent $p\in R$ such that $a=pb=bp$.
\end{itemize}
\end{lem}
\begin{lem} \label{lemma-partial2}
Let $a\in R^{\tiny{\textcircled{\tiny\#}}}$ and $b\in R$. Then:
\begin{itemize}
\item[{\rm (1)}] $a^{\tiny{\textcircled{\tiny\#}}} a=a^{\tiny{\textcircled{\tiny\#}}}b$ if and only if $a^{*}a=a^{*}b$;
\item[{\rm (2)}] $aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}$ if and only if $a^{2}=ba$ if and only if $aa^{\#}=ba^{\#}$.
\end{itemize}
\end{lem}
We will use the following notations $aR=\{ax\mid x\in R\}$, $Ra=\{xa\mid x\in R\}$, $^{\circ}a=\{x\in R\mid xa=0\}$ and $a^{\circ}=\{x\in R\mid ax=0\}$.
In \cite[Lemma 8]{LPT}, Lebtahi et al. proved that $a\overset{-}\leq b$ if and only if there exists
$c\in a\{1,2\}$ such that $b-a\in$ $^{\circ}c\cap c^{\circ}$.
For the core partial order, we have the following result.
\begin{thm} \label{core-pr1}
Let $a\in R^{\tiny{\textcircled{\tiny\#}}}$ and $b\in R$. Then the following conditions are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$;
\item[{\rm (2)}] $ba^{\tiny{\textcircled{\tiny\#}}}b=a$ and $a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (3)}] $aa^{\tiny{\textcircled{\tiny\#}}}b=a=ba^{\tiny{\textcircled{\tiny\#}}}a$;
\item[{\rm (4)}] $b-a\in ^{\circ}\!\!a\cap (a^{\ast})^{\circ}$;
\item[{\rm (5)}] $b-a\in(1-aa^{\tiny{\textcircled{\tiny\#}}})R\cap R(1-aa^{\tiny{\textcircled{\tiny\#}}})$;
\item[{\rm (6)}] $b-a\in ^{\circ}\!\!(aa^{\tiny{\textcircled{\tiny\#}}}) \cap (aa^{\tiny{\textcircled{\tiny\#}}})^{\circ}$.
\end{itemize}
\end{thm}
\begin{proof}
$(1)\Leftrightarrow(2)$ Suppose that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then
$ba^{\tiny{\textcircled{\tiny\#}}}b
=aa^{\tiny{\textcircled{\tiny\#}}}b
=aa^{\tiny{\textcircled{\tiny\#}}}a=a$ and
$a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}.$
Conversely, if $ba^{\tiny{\textcircled{\tiny\#}}}b=a$ and
$a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$, then $aa^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}$
and
$a^{\tiny{\textcircled{\tiny\#}}}a
=a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}b
=a^{\tiny{\textcircled{\tiny\#}}}b.$\\
$(1)\Leftrightarrow(3)$ Suppose that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then
$a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b$ and
$aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}$. Thus
$aa^{\tiny{\textcircled{\tiny\#}}}b=aa^{\tiny{\textcircled{\tiny\#}}}a=a$
and $ba^{\tiny{\textcircled{\tiny\#}}}a=aa^{\tiny{\textcircled{\tiny\#}}}a=a$.
Conversely, if $aa^{\tiny{\textcircled{\tiny\#}}}b=a=ba^{\tiny{\textcircled{\tiny\#}}}a$,
then pre-multiplication by $a^{\tiny{\textcircled{\tiny\#}}}$ on $aa^{\tiny{\textcircled{\tiny\#}}}b=a$ yields
$a^{\tiny{\textcircled{\tiny\#}}}b=a^{\tiny{\textcircled{\tiny\#}}}a$, similarly we have
$ba^{\tiny{\textcircled{\tiny\#}}}=aa^{\tiny{\textcircled{\tiny\#}}}$, thus
$a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$.\\
$(1)\Leftrightarrow(4)$ Since $b-a\in ^{\circ}\!\!a\cap (a^{\ast})^{\circ}$ is equivalent to both $a^{*}a=a^{*}b$ and $a^{2}=ba$ hold,
thus $(1)\Leftrightarrow(4)$ by Lemma \ref{lemma-partial2}.\\
$(4)\Leftrightarrow(5)$ By $a\in R^{\tiny{\textcircled{\tiny\#}}}$, we have $^{\circ}a=R(1-aa^{\tiny{\textcircled{\tiny\#}}})$ and
$(a^{\ast})^{\circ}=(1-(a^{\tiny{\textcircled{\tiny\#}}})^{\ast}a^{\ast})R
=(1-(aa^{\tiny{\textcircled{\tiny\#}}})^{\ast})R
=(1-aa^{\tiny{\textcircled{\tiny\#}}})R.$\\
$(5)\Leftrightarrow(6)$ By $(aa^{\tiny{\textcircled{\tiny\#}}})^{2}=aa^{\tiny{\textcircled{\tiny\#}}}$, we have
$(1-aa^{\tiny{\textcircled{\tiny\#}}})R=(aa^{\tiny{\textcircled{\tiny\#}}})^{\circ}$ and
$R(1-aa^{\tiny{\textcircled{\tiny\#}}})=^{\circ}\!(aa^{\tiny{\textcircled{\tiny\#}}}).$
\end{proof}
If $p,~q\in R$ are idempotents, then arbitrary $a\in R$ can be written as
$$a=paq+pa(1-q)+(1-p)aq+(1-p)a(1-q).$$
The corresponding matrix form is
$$a= \left[
\begin{matrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{matrix}
\right]_{p\times q},$$
where $a_{11}=paq$, $a_{12}=pa(1-q)$, $a_{21}=(1-p)aq$ and $a_{22}=(1-p)a(1-q)$. If $a=(a_{ij})_{p\times q}$ and $b=(b_{ij})_{p\times q}$, then $a+b=(a_{ij}+b_{ij})_{p\times q}$.
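We note, for later use, that since $p(1-p)=(1-p)p=0$, this block decomposition is also multiplicative when $q=p$: if $a=(a_{ij})_{p\times p}$ and $b=(b_{ij})_{p\times p}$, then $ab=\big(\sum_{k=1}^{2}a_{ik}b_{kj}\big)_{p\times p}$, exactly as for ordinary $2\times 2$ matrices.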
In \cite[Theorem 2.6]{RD}, Raki\'{c} and Djordjevi\'{c} proved that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ if and only if
there exist a self-adjoint idempotent $p\in R$ and an idempotent $q\in R$ such that $a=pb=bq$ and $qa=a$.
We now provide some new characterizations for the core partial order in terms of self-adjoint idempotents.
\begin{thm} \label{core-pr2}
Let $a\in R^{\tiny{\textcircled{\tiny\#}}}$ and $b\in R$. Then the following conditions are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$;
\item[{\rm (2)}] There exists a self-adjoint idempotent $p\in R$ such that $a=pb$, $ap=bp$ and $aR=pR$;
\item[{\rm (3)}] There exists a self-adjoint idempotent $p\in R$ such that $a=pb$ and $ap=bp$;
\item[{\rm (4)}] There exists a self-adjoint idempotent $p\in R$ such that $a= \left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p},~~
b= \left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & b_{4}
\end{smallmatrix}
\right)_{p\times p}$.
\end{itemize}
\end{thm}
\begin{proof}
$(1)\Rightarrow(2)$ Let $p=aa^{\tiny{\textcircled{\tiny\#}}}$, then $p^{2}=p=p^{\ast}$ and
$pb=aa^{\tiny{\textcircled{\tiny\#}}}b=aa^{\tiny{\textcircled{\tiny\#}}}a=a$,
$ap
=a^{2}a^{\tiny{\textcircled{\tiny\#}}}=aa^{\tiny{\textcircled{\tiny\#}}}a^{2}a^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}a^{2}a^{\tiny{\textcircled{\tiny\#}}}
=baa^{\tiny{\textcircled{\tiny\#}}}
=bp$,
$aR=pR$ by $a=aa^{\tiny{\textcircled{\tiny\#}}}a=pa$.\\
$(2)\Rightarrow(3)$ It is trivial.\\
$(3)\Rightarrow(1)$ Suppose that $a=pb$ and $ap=bp$. Then $a^{2}=apb=bpb=ba$
and $a^{*}a=(pb)^{*}pb=b^{*}p^{*}pb=b^{*}p^{\ast}b=(pb)^{*}b=a^{\ast}b$, thus $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ by Lemma \ref{lemma-partial2}.\\
$(3)\Rightarrow(4)$ Suppose that $a=pb$ and $ap=bp$. Then $pa=a$ and
$$\begin{array}{rcl}
pap=ap=a_{1}, &~~~~ & pa(1-p)=a-ap=a_{2},\\
(1-p)ap=0, &~~~~ & (1-p)a(1-p)=0.\\
pbp=ap=a_{1}, &~~~~ & pb(1-p)=a-ap=a_{2},\\
(1-p)bp=ap-ap=0, &~~~~ & (1-p)b(1-p)=b-a=b_{4}.
\end{array}$$
Thus $a= \left(
\begin{smallmatrix}
a_{1} & a_{2}\\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p},~~
b= \left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & b_{4}
\end{smallmatrix}
\right)_{p\times p}$.\\
$(4)\Rightarrow(3)$ Suppose that there exists a self-adjoint idempotent $p\in R$ such that $a$ and $b$ have the above matrix forms. Then $pa=a$, $a_{1}=ap$, $a_{2}=a-ap$ and $b_{4}=b-a$, hence
$$pb=\left(
\begin{smallmatrix}
p & 0\\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
\left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & b_{4}
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
pa_{1} & pa_{2} \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=a,$$
$$ap=\left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
\left(
\begin{smallmatrix}
p & 0 \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
a_{1}p & 0 \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
a_{1} & 0 \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p},$$
$$bp=\left(
\begin{smallmatrix}
a_{1} & a_{2} \\
& \\
0 & b_{4}
\end{smallmatrix}
\right)_{p\times p}
\left(
\begin{smallmatrix}
p & 0 \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
a_{1}p & 0\\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}
=\left(
\begin{smallmatrix}
a_{1} & 0 \\
& \\
0 & 0
\end{smallmatrix}
\right)_{p\times p}.$$
Hence, $pb=a,~ ap=bp$.
\end{proof}
The following characterization of the minus partial order will be used in the proof of Theorem \ref{core-minus-regular},
which plays an important role in the sequel.
\begin{lem} \emph{\cite[Lemma 3.4]{Rb}} \label{minus-regular}
Let $a,~b\in R^{-}$. The following conditions are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{-}\leq b$;
\item[{\rm (2)}] There exists $b^{-}\in b\{1\}$ such that $a=bb^{-}a=ab^{-}b=ab^{-}a$;
\item[{\rm (3)}] For arbitrary $b^{-}\in b\{1\}$, we have $a=bb^{-}a=ab^{-}b=ab^{-}a$.
\end{itemize}
\end{lem}
\begin{thm} \label{core-minus-regular}
Let $a,~b \in R^{\tiny{\textcircled{\tiny\#}}}$ with $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then:
\begin{itemize}
\item[{\rm (1)}] $ba^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$,
$a^{\tiny{\textcircled{\tiny\#}}}b=b^{\tiny{\textcircled{\tiny\#}}}a$;
\item[{\rm (2)}] $b^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (3)}] $b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}$.
\end{itemize}
\end{thm}
\begin{proof}
Suppose that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Since $a^{\tiny{\textcircled{\tiny\#}}}\in a\{1\}$, we have $a\overset{-}\leq b$,
and hence $a=bb^{\tiny{\textcircled{\tiny\#}}}a=bb^{\#}a$ by Lemma \ref{minus-regular}.\\
$(1)$ $ba^{\tiny{\textcircled{\tiny\#}}}
=aa^{\tiny{\textcircled{\tiny\#}}}=bb^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=(bb^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}})^{*}
=aa^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}
=aa^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=ab^{\tiny{\textcircled{\tiny\#}}}$.
$a^{\tiny{\textcircled{\tiny\#}}}b
=b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}b
=b^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}b
=b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}b
=b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}a
=b^{\tiny{\textcircled{\tiny\#}}}a.$\\
$(2)$ It is clear that $b^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}$
,
$a^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}$
and
$a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}.$\\
$(3)$ Similarly to $(2)$, we have
$b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$
,
$a^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$
and
$b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}.$
\end{proof}
A complex matrix $A\in M_{n}(\mathbb{C})$ is called range-Hermite (EP matrix), if $\mathcal{R}(A)=\mathcal{R}(A^{\ast})$.
\begin{rem}\label{slip1}
\emph{In \cite[Theorem $2.4$]{M2}}, it is claimed that the following are equivalent for two complex matrices $A,B$ of index $1$ with the same order:
\begin{itemize}
\item[{\rm (1)}] $A^{\tiny{\textcircled{\tiny\#}}}BA^{\tiny{\textcircled{\tiny\#}}}=A^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (2)}] $A^{\dagger}BA^\#=A^{\tiny{\textcircled{\tiny\#}}}.$
\end{itemize}
\emph{While the implication $(2)\Rightarrow(1)$ is always valid,
the converse is not true in general. In fact,
let
$A=B=\left[\begin{matrix}
1 & 1 \\
0 & 0
\end{matrix}
\right]\in M_{2}(\mathbb{C})$\normalsize,
we have $A^\#=A$, $A^{\dagger}=\left[\begin{matrix}
1/2 & 0 \\
1/2 & 0
\end{matrix}
\right]$\normalsize
and $A^{\tiny{\textcircled{\tiny\#}}}=\left[\begin{matrix}
1 & 0 \\
0 & 0
\end{matrix}
\right]$\normalsize,
then the condition
$A^{\tiny{\textcircled{\tiny\#}}}BA^{\tiny{\textcircled{\tiny\#}}}
=A^{\tiny{\textcircled{\tiny\#}}}AA^{\tiny{\textcircled{\tiny\#}}}=A^{\tiny{\textcircled{\tiny\#}}}$ holds.
However, $A^{\dagger}BA^\#\neq A^{\tiny{\textcircled{\tiny\#}}}.$
Note that $(1)\Rightarrow(2)$ holds in case $A$ is an EP matrix.}
\end{rem}
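For the reader's convenience, the failure of $(1)\Rightarrow(2)$ in this example can be checked by a direct computation: since $A=B$, $A^{2}=A$ and $A^{\#}=A$, we get
$$A^{\dagger}BA^{\#}=A^{\dagger}A^{2}=A^{\dagger}A
=\left[\begin{matrix} 1/2 & 0 \\ 1/2 & 0 \end{matrix}\right]
\left[\begin{matrix} 1 & 1 \\ 0 & 0 \end{matrix}\right]
=\left[\begin{matrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{matrix}\right]
\neq\left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right]=A^{\tiny{\textcircled{\tiny\#}}}.$$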
\begin{prop} \label{a-b-core}
Let $a,b \in R^{\tiny{\textcircled{\tiny\#}}}$. Then $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ if and only if
$a^{\tiny{\textcircled{\tiny\#}}}b=b^{\tiny{\textcircled{\tiny\#}}}a$,
$ba^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$,
$ab^{\tiny{\textcircled{\tiny\#}}}a=a.$
\end{prop}
\begin{proof}
Suppose that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then
$a^{\tiny{\textcircled{\tiny\#}}}b=b^{\tiny{\textcircled{\tiny\#}}}a$ and
$ba^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$
by Theorem \ref{core-minus-regular}, thus
$ab^{\tiny{\textcircled{\tiny\#}}}a=ba^{\tiny{\textcircled{\tiny\#}}}a=aa^{\tiny{\textcircled{\tiny\#}}}a=a.$
Conversely, if $a^{\tiny{\textcircled{\tiny\#}}}b=b^{\tiny{\textcircled{\tiny\#}}}a$,
$ba^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$,
$ab^{\tiny{\textcircled{\tiny\#}}}a=a$, then
$a^{\tiny{\textcircled{\tiny\#}}}a
=a^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}a
=a^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}b
=a^{\tiny{\textcircled{\tiny\#}}}b$ and
$aa^{\tiny{\textcircled{\tiny\#}}}
=ab^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}.$
\end{proof}
In \cite[Theorem 2.5]{MRT} Malik et al. investigated the reverse order law for two core invertible complex matrices
under the matrix core partial order. By \cite[Theorem 3.1]{XCZ}, the equations
$axa=a$ and $xax=x$ in \cite[Theorem 2.14]{RDD} can be dropped.
\begin{lem} \emph{\cite[Theorem 3.1]{XCZ}} \label{five-equations-yy}
Let $a,x \in R$, then $a\in R^{\tiny\textcircled{\tiny\#}}$ with core inverse $x$ if and only if
$(ax)^{\ast}=ax$, $xa^{2}=a$ and $ax^{2}=x$.
\end{lem}
\begin{thm} \label{Reverse-order1}
Let $a,~b \in R^{\tiny{\textcircled{\tiny\#}}}$ with $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then:
\begin{itemize}
\item[{\rm (1)}] $(ab)^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=(a^{\tiny{\textcircled{\tiny\#}}})^{2}=(a^{2})^{\tiny{\textcircled{\tiny\#}}}=(ba)^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (2)}] $ab\in R^{EP}$ whenever $a\in R^{EP}$.
\end{itemize}
\end{thm}
\begin{proof}
$(1)$ Suppose that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then $a^{\tiny{\textcircled{\tiny\#}}}b=b^{\tiny{\textcircled{\tiny\#}}}a$ by Proposition \ref{a-b-core}. Thus,
$b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=a^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=(a^{\tiny{\textcircled{\tiny\#}}})^{2}
=(a^{2})^{\tiny{\textcircled{\tiny\#}}}
=(ba)^{\tiny{\textcircled{\tiny\#}}}.$
Let $x=b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}$. Then
\begin{eqnarray*}
&&abx=abb^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=aba^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=aaa^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=aa^{\tiny{\textcircled{\tiny\#}}}
=(aa^{\tiny{\textcircled{\tiny\#}}})^{*}
=(abb^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}})^{*};\\
&&x(ab)^{2}=b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}(ab)^{2}
=b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}a(ba)b
= a^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}aa^{2}b
= a^{\tiny{\textcircled{\tiny\#}}}a^{2}b
= ab;\\
&&abx^{2}=ab(b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}})^{2}
=a(ba^{\tiny{\textcircled{\tiny\#}}})a^{\tiny{\textcircled{\tiny\#}}}(a^{\tiny{\textcircled{\tiny\#}}})^{2}
=(a^{\tiny{\textcircled{\tiny\#}}})^{2}
= b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}.
\end{eqnarray*}
Thus $(ab)^{\tiny{\textcircled{\tiny\#}}}=b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}$ by Lemma \ref{five-equations-yy}.\\
$(2)$ Suppose that $a\in R^{EP}$. Then $a^{\tiny{\textcircled{\tiny\#}}}a=aa^{\tiny{\textcircled{\tiny\#}}}.$ Thus
\begin{eqnarray*}
&&b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}ab
=b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}b
=a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}b
=a^{\tiny{\textcircled{\tiny\#}}}b
=a^{\tiny{\textcircled{\tiny\#}}}a;\\
&&abb^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}
=abb^{\tiny{\textcircled{\tiny\#}}}a(a^{\tiny{\textcircled{\tiny\#}}})^{2}
=aba^{\tiny{\textcircled{\tiny\#}}}b(a^{\tiny{\textcircled{\tiny\#}}})^{2}
=aaa^{\tiny{\textcircled{\tiny\#}}}a(a^{\tiny{\textcircled{\tiny\#}}})^{2}
=aa^{\tiny{\textcircled{\tiny\#}}}.
\end{eqnarray*}
By $(1)$, we have
$b^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}ab
=(ab)^{\tiny{\textcircled{\tiny\#}}}ab
=ab(ab)^{\tiny{\textcircled{\tiny\#}}}
=abb^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}$,
hence $ab\in R^{EP}.$
\end{proof}
\section{ Relationships between the core partial order and other partial orders }\label{b}
In this section, we consider the relationships between the core partial order and other partial orders.
Recall that the left star partial order $a$ $\ast \!\!\leq b$ in $R$ is defined by: $a^{\ast}a=a^{\ast}b$ and $aR\subseteq bR$.
The right sharp partial order $a\leq_{\#} b$ in $R^\#$ is defined by: $aa^\#=ba^\#$ and $Ra\subseteq Rb$.
Let us start with an auxiliary lemma.
\begin{lem} \emph{\cite{BT}} \label{core-star-sharp}
Let $a\in R^{\tiny{\textcircled{\tiny\#}}}$ and $b\in R$. Then $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ if and only if
$a$ $\ast\!\!\leq b$ and $a\leq_{\#} b$.
\end{lem}
In \cite[Theorem 4.10]{RD}, Raki\'{c} and Djordjevi\'{c} gave the relationship between the core partial order and the minus partial order
for $a, b\in R^{\tiny{\textcircled{\tiny\#}}}$.
For instance, it is proved that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ if and only if $a\overset{-}\leq b$ and
$b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$.
By Lemma \ref{core-star-sharp}, the core partial order implies the left star partial order
and the right sharp partial order.
Motivated by \cite[Theorem 4.10]{RD}, we have the following theorem.
\begin{thm} \label{core-and-other}
Let $a,~b \in R^{\tiny{\textcircled{\tiny\#}}}$. Then the following are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$;
\item[{\rm (2)}] $a$ $\ast\!\!\leq b$ and $ba^{\tiny{\textcircled{\tiny\#}}}b=a$;
\item[{\rm (3)}] $a$ $\ast \!\!\leq b$ and $b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (4)}] $a$ $\ast \!\!\leq b$ and $b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$;
\item[{\rm (5)}] $a\leq_{\#} b$ and $ba^{\tiny{\textcircled{\tiny\#}}}b=a$;
\item[{\rm (6)}] $a\leq_{\#} b$ and $a^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$.
\end{itemize}
\end{thm}
\begin{proof}
$(1) \Rightarrow (2)$-$(6)$ These implications follow from Theorem \ref{core-pr1}, Theorem \ref{core-minus-regular} and Lemma \ref{core-star-sharp}.\\
$(2) \Rightarrow (1)$ Suppose that $a$ $\ast \!\!\leq b$ and $ba^{\tiny{\textcircled{\tiny\#}}}b=a$. Then $a^{*}a=a^{*}b$ and $aR\subseteq bR$.
We have $a^{*}a=a^{*}b$ if and only if $a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b$ by Lemma \ref{lemma-partial2}, thus $aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}
=ba^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}.$\\
$(3) \Rightarrow (1)$ Suppose that $a$ $\ast\!\!\leq b$ and $b^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$. Then $a^{*}a=a^{*}b$, so $a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b$ by Lemma \ref{lemma-partial2}. Moreover, since $aR\subseteq bR$ we have $a=bs$ for some $s\in R$, so $a=bs=bb^{\tiny{\textcircled{\tiny\#}}}bs=bb^{\tiny{\textcircled{\tiny\#}}}a$, thus
$aa^{\tiny{\textcircled{\tiny\#}}}=bb^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}$.\\
$(4) \Rightarrow (1)$ Suppose $a$ $\ast \!\!\leq b$ and $b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}.$
Then $a^{\ast}a=a^{\ast}b$, thus by Lemma \ref{lemma-partial2}, we have $a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b.$
By $a$ $\ast \!\!\leq b$, we have $a=bb^{\tiny{\textcircled{\tiny\#}}}a$, which gives
$$ba^{\tiny{\textcircled{\tiny\#}}}=b(b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}})=ab^{\tiny{\textcircled{\tiny\#}}}.$$
Pre-multiplication of $b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$ by $a$ and
post-multiplication of $b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$ by $bb^{\tiny{\textcircled{\tiny\#}}}$ yield
$$aa^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}
=ab^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}=aa^{\tiny{\textcircled{\tiny\#}}}.$$
Since $a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b$, we have
$aa^{\tiny{\textcircled{\tiny\#}}}=aa^{\tiny{\textcircled{\tiny\#}}}bb^{\tiny{\textcircled{\tiny\#}}}
=aa^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$.
Thus by $ba^{\tiny{\textcircled{\tiny\#}}}=ab^{\tiny{\textcircled{\tiny\#}}}$ and the definition of core partial order,
we have $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$.\\
$(5) \Rightarrow (1)$ Suppose that $a\leq_{\#} b$ and $ba^{\tiny{\textcircled{\tiny\#}}}b=a$. Then $aa^\#=ba^\#$ and $Ra\subseteq Rb$,
by Lemma \ref{lemma-partial2}, we have
$aa^\#=ba^\#$ if and only if $aa^{\tiny{\textcircled{\tiny\#}}}=ba^{\tiny{\textcircled{\tiny\#}}}$, thus
$a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}ba^{\tiny{\textcircled{\tiny\#}}}b=
a^{\tiny{\textcircled{\tiny\#}}}aa^{\tiny{\textcircled{\tiny\#}}}b=a^{\tiny{\textcircled{\tiny\#}}}b.$\\
$(6) \Rightarrow (1)$ By $(5) \Rightarrow (1)$, we only need to prove $a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}b$.\\
Since $Ra\subseteq Rb$ is equivalent to $a=ab^{\tiny{\textcircled{\tiny\#}}}b$, we have
$a^{\tiny{\textcircled{\tiny\#}}}a=a^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}b=a^{\tiny{\textcircled{\tiny\#}}}b.$
\end{proof}
The right star partial order $a\leq\!\!\ast$ $b$ is defined as: $aa^{\ast}=ba^{\ast}$ and $Ra\subseteq Rb.$
\begin{rem}\label{slip2}
\emph{Let $a\in R^{\tiny{\textcircled{\tiny\#}}}$ and $b\in R^{EP}$.
In \cite[Theorem $2.9$]{M2}, it is claimed that $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$
if and only if $a\leq\!\!\ast$ $b$ and $b^{\tiny{\textcircled{\tiny\#}}}ab^{\tiny{\textcircled{\tiny\#}}}=a^{\tiny{\textcircled{\tiny\#}}}$
in the complex matrix case. However, this is not true in general.
In fact, let
$A=\left[\begin{matrix}
1 & 1 \\
0 & 0
\end{matrix}
\right],
B=\left[\begin{matrix}
1 & 1 \\
0 & 1
\end{matrix}
\right]\in M_{2}(\mathbb{C})$\normalsize,
then $A$ is core invertible, $B$ is an EP matrix and the condition $A\overset{\tiny{\textcircled{\tiny\#}}}\leq B$ is satisfied,
but $AA^{\ast}\neq BA^{\ast}.$}
\end{rem}
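Again for the reader's convenience, the claims about this example can be verified directly: one computes
$$AA^{\ast}=\left[\begin{matrix} 2 & 0 \\ 0 & 0 \end{matrix}\right]
\neq\left[\begin{matrix} 2 & 0 \\ 1 & 0 \end{matrix}\right]=BA^{\ast},$$
while $A^{\tiny{\textcircled{\tiny\#}}}=\left[\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right]$ gives
$$A^{\tiny{\textcircled{\tiny\#}}}A=A^{\tiny{\textcircled{\tiny\#}}}B=\left[\begin{matrix} 1 & 1 \\ 0 & 0 \end{matrix}\right]
\quad\text{and}\quad
AA^{\tiny{\textcircled{\tiny\#}}}=BA^{\tiny{\textcircled{\tiny\#}}}=\left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right],$$
so that $A\overset{\tiny{\textcircled{\tiny\#}}}\leq B$ holds but $A\leq\!\!\ast$ $B$ fails.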
The equivalence of $(2)$-$(4)$ in the following proposition for complex matrices was proved by Malik et al. in \cite[Lemma 19]{MRT}.
\begin{prop} \label{k-core}
Let $a\in R^{\tiny{\textcircled{\tiny\#}}}, ~b\in R$ with $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$. Then the following conditions are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{\#}\leq b$;
\item[{\rm (2)}] $ab=ba$;
\item[{\rm (3)}] $a^{2}\overset{\tiny{\textcircled{\tiny\#}}}\leq b^{2}$;
\item[{\rm (4)}] $a^{k}\overset{\tiny{\textcircled{\tiny\#}}}\leq b^{k}$, for any $k\geq 2$.
\end{itemize}
\end{prop}
\begin{proof}
By Lemma \ref{lemma-partial2}, we have $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$ if and only if both $a^{*}a=a^{*}b$ and $ba=a^{2}$ hold.\\
$(1)\Rightarrow(2)$ is obvious by Lemma \ref{lemma-partial1}.\\
$(2)\Rightarrow(4)$ If $ab=ba$, then $ab=ba=a^{2}$ by Lemma \ref{lemma-partial2}.
If $k\geq 2$, first show $ab^{k-1}=a^{k}$.
When $k=2$, $ab=ba=a^{2}$;
when $k>2$, $ab^{k-1}=a^{2}b^{k-2}=a^{2}bb^{k-3}=a^{3}b^{k-3}= \cdots=a^{k}.$
Next prove $(a^{k})^{\tiny{\textcircled{\tiny\#}}}a^{k}=(a^{k})^{\tiny{\textcircled{\tiny\#}}}b^{k}.$
In fact, $(a^{k})^{\tiny{\textcircled{\tiny\#}}}b^{k}
=(a^{\tiny{\textcircled{\tiny\#}}})^{k}b^{k}
=(a^{\tiny{\textcircled{\tiny\#}}})^{k-1}a^{\tiny{\textcircled{\tiny\#}}}bb^{k-1}
=(a^{\tiny{\textcircled{\tiny\#}}})^{k-1}a^{\tiny{\textcircled{\tiny\#}}}ab^{k-1}
=(a^{\tiny{\textcircled{\tiny\#}}})^{k}ab^{k-1}
=(a^{k})^{\tiny{\textcircled{\tiny\#}}}ab^{k-1}
=(a^{k})^{\tiny{\textcircled{\tiny\#}}}a^{k}.$
Similarly, $b^{k}(a^{k})^{\tiny{\textcircled{\tiny\#}}}=a^{k}(a^{k})^{\tiny{\textcircled{\tiny\#}}}.$\\
$(4)\Rightarrow(3)$ Take $k=2$.\\
$(3)\Rightarrow(1)$
If $a^{2}\overset{\tiny{\textcircled{\tiny\#}}}\leq b^{2}$,
then $(a^{2})^{\tiny{\textcircled{\tiny\#}}}a^{2}=(a^{2})^{\tiny{\textcircled{\tiny\#}}}b^{2}.$ And
$$(a^{2})^{\tiny{\textcircled{\tiny\#}}}a^{2}
=(a^{\tiny{\textcircled{\tiny\#}}})^{2}a^{2}
=a^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}a^{2}
=a^{\tiny{\textcircled{\tiny\#}}}a
=a^{\#}a,$$
$$(a^{2})^{\tiny{\textcircled{\tiny\#}}}b^{2}
=(a^{\tiny{\textcircled{\tiny\#}}})^{2}b^{2}
=a^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}bb
=a^{\tiny{\textcircled{\tiny\#}}}a^{\tiny{\textcircled{\tiny\#}}}ab
=a^{\#}b,$$
thus $a^{\#}a=a^{\#}b$. Hence $a^{2}=aaa^{\#}a=aaa^{\#}b=ab=ba$ by $ba=a^{2}$.
\end{proof}
Recall that a complex matrix $A$ is called range-Hermite if $\mathcal{R}(A)=\mathcal{R}(A^{\ast})$.
In \cite[Theorem 7]{BT}, Baksalary and Trenkler proved that for complex matrices $A$ and $B$, if
$A$ is a range-Hermite matrix, then
$A\overset{\tiny{\textcircled{\tiny\#}}}\leq B$ if and only if $A\overset{\ast}\leq B$.
In \cite[Theorem 3.3]{M2}, Malik proved that for complex matrices $A$ and $B$, if
$A$ is a range-Hermite matrix, then $A\overset{\tiny{\textcircled{\tiny\#}}}\leq B$
if and only if $A\overset{\#}\leq B$.
It is easy to check, by \cite[Theorem 3.1]{RDD}, that the following proposition is valid for elements in rings.
\begin{prop} \label{EP-core-other}
Let $a\in R^{EP}$ and $b\in R$. Then the following are equivalent:
\begin{itemize}
\item[{\rm (1)}] $a\overset{\tiny{\textcircled{\tiny\#}}}\leq b$;
\item[{\rm (2)}] $a\overset{\#}\leq b$;
\item[{\rm (3)}] $a\overset{\ast}\leq b$.
\end{itemize}
\end{prop}
\noindent {\large\bf Acknowledgements}
This research is supported by the National Natural Science Foundation of China (No. 11371089).
The second author is grateful to the China Scholarship Council for supporting his further study at the Universidad Polit\'{e}cnica de Valencia, Spain.
\end{document} |
\begin{document}
\title[Topos-Based Logic for Quantum Systems and Bi-Heyting Algebras]{Topos-Based Logic for Quantum Systems\\and Bi-Heyting Algebras}
\author{Andreas D\"oring}
\address{Andreas D\"oring\newline
\indent Clarendon Laboratory\newline
\indent Department of Physics\newline
\indent University of Oxford\newline
\indent Parks Road\newline
\indent Oxford OX1 3PU, UK}
\email{[email protected]}
\date{December 5, 2013}
\subjclass{Primary 81P10, 03G12; Secondary 06D20, 06C99}
\keywords{Bi-Heyting algebra, Topos, Quantum, Logic}
\begin{abstract}
To each quantum system, described by a von Neumann algebra of physical quantities, we associate a complete bi-Heyting algebra. The elements of this algebra represent contextualised propositions about the values of the physical quantities of the quantum system.
\end{abstract}
\maketitle
\section{Introduction} \label{Sec_Introd}
Quantum logic started with Birkhoff and von Neumann's seminal article \cite{BvN36}. Since then, non-distributive lattices with an orthocomplement (and generalisations thereof) have been used as representatives of the algebra of propositions about the quantum system at hand. There are a number of well-known conceptual and interpretational problems with this kind of `logic'. For review of standard quantum logic(s), see the article \cite{DCG02}.
In the last few years, a different form of logic for quantum systems based on generalised spaces in the form of presheaves and topos theory has been developed by Chris Isham and this author \cite{DI(1),DI(2),DI(3),DI(4),Doe07b,DI(Coecke)08,Doe09,Doe11}. This new form of logic for quantum systems is based on a certain Heyting algebra $\Subcl{\Sig}$ of clopen, i.e., closed and open subobjects of the spectral presheaf $\Sig$. This generalised space takes the r\^ole of a state space for the quantum system. (All technical notions are defined in the main text.) In this way, one obtains a well-behaved intuitionistic form of logic for quantum systems which moreover has a topological underpinning.
In this article, we will continue the development of the topos-based form of logic for quantum systems. The main new observation is that the complete Heyting algebra $\Subcl{\Sig}$ of clopen subobjects representing propositions is also a complete co-Heyting algebra. Hence, we relate quantum systems to complete bi-Heyting algebras in a systematic way. This includes two notions of implication and two kinds of negation, as discussed in the following sections.
The plan of the paper is as follows: in section \ref{Sec_Background}, we briefly give some background on standard quantum logic and the main ideas behind the new topos-based form of logic for quantum systems. Section \ref{Sec_BiHeytAlgs} recalls the definitions and main properties of Heyting, co-Heyting and bi-Heyting algebras, section \ref{Sec_SigEtc} introduces the spectral presheaf $\Sig$ and the algebra $\Subcl{\Sig}$ of its clopen subobjects. In section \ref{Sec_RepOfPropsAndBiHeyting}, the link between standard quantum logic and the topos-based form of quantum logic is established and it is shown that $\Subcl{\Sig}$ is a complete bi-Heyting algebra. In section \ref{Sec_NegsAndRegs}, the two kinds of negations associated with the Heyting resp. co-Heyting structure are considered. Heyting-regular and co-Heyting regular elements are characterised and a tentative physical interpretation of the two kinds of negation is given. Section \ref{Sec_Conclusion} concludes.
Throughout, we assume some familiarity with the most basic aspects of the theory of von Neumann algebras and with basics of category and topos theory. The text is interspersed with some physical interpretations of the mathematical constructions.
\section{Background} \label{Sec_Background}
\textbf{Von Neumann algebras.} In this article, we will discuss structures associated with von Neumann algebras, see e.g. \cite{KR83}. This class of algebras is general enough to describe a large variety of quantum mechanical systems, including systems with symmetries and/or superselection rules. The fact that each von Neumann algebra has `sufficiently many' projections makes it attractive for quantum logic. More specifically, each von Neumann algebra is generated by its projections, and the spectral theorem holds in a von Neumann algebra, providing the link between self-adjoint operators (representing physical quantities) and projections (representing propositions).
The reader not familiar with von Neumann algebras can always take the algebra $\BH$ of all bounded operators on a separable, complex Hilbert space $\cH$ as an example of a von Neumann algebra. If the Hilbert space $\cH$ is finite-dimensional, $\dim\cH=n$, then $\BH$ is nothing but the algebra of complex $n\times n$-matrices.
\textbf{Standard quantum logic.} From the perspective of quantum logic, the key thing is that the projection operators in a von Neumann algebra $\N$ form a complete orthomodular lattice $\PN$. Starting from Birkhoff and von Neumann \cite{BvN36}, such lattices (and various kinds of generalisations, which we don't consider here) have been considered as quantum logics, or more precisely as algebras representing propositions about quantum systems.
The kind of propositions that we are concerned with (at least in standard quantum logic) are of the form ``the physical quantity $A$ has a value in the Borel set $\De$ of real numbers'', which is written shortly as ``$\Ain\De$''. These propositions are pre-mathematical entities that refer to the `world out there'. In standard quantum logic, propositions of the form ``$\Ain\De$'' are represented by projection operators via the spectral theorem. If, as we always assume, the physical quantity $A$ is described by a self-adjoint operator $\hA$ in a given von Neumann algebra $\N$, or is affiliated with $\N$ in the case that $\hA$ is unbounded, then the projection corresponding to ``$\Ain\De$'' lies in $\PN$. (For details on the spectral theorem see any book on functional analysis, e.g. \cite{KR83}.)
Following Birkhoff and von Neumann, one then interprets the lattice operations $\meet,\join$ in the projection lattice $\PN$ as logical connectives between the propositions represented by the projections. In this way, the meet $\meet$ becomes a conjunction and the join $\join$ a disjunction. Moreover, the orthogonal complement of a projection, $\hP':=\hat 1-\hP$, is interpreted as negation. Crucially, meets and joins do not distribute over each other. In fact, $\PN$ is a distributive lattice if and only if $\N$ is abelian if and only if all physical quantities considered are mutually compatible, i.e., co-measurable.
Quantum systems always have some incompatible physical quantities, so $\N$ is never abelian and $\PN$ is non-distributive. This makes the interpretation of $\PN$ as an algebra of propositions somewhat dubious. There are many other conceptual difficulties with quantum logics based on orthomodular lattices, see e.g. \cite{DCG02}.
\textbf{Contexts and coarse-graining.} The topos-based form of quantum logic that was established in \cite{DI(2)} and developed further in \cite{Doe07b,DI(Coecke)08,Doe09,Doe11} is fundamentally different from standard quantum logic. For some conceptual discussion, see in particular \cite{Doe09}. Two key ideas are \emph{contextuality} and \emph{coarse-graining} of propositions. Contextuality has of course been considered widely in foundations of quantum theory, in particular since Kochen and Specker's seminal paper \cite{KS67}. Yet, the systematic implementation of contextuality in the language of presheaves is comparatively new. It first showed up in work by Chris Isham and Jeremy Butterfield \cite{Ish97,IB98,IB99,IB00,IB00b,IB02,Ish05} and was substantially developed by this author and Isham. For recent, related work see also \cite{HLS09,CHLS09,HLS09b,HLS11} and \cite{AB11,AMS11}.
Physically, a context is nothing but a set of compatible, i.e., co-measurable physical quantities $(A_i)_{i\in I}$. Such a set determines and is determined by an abelian von Neumann subalgebra $V$ of the non-abelian von Neumann algebra $\N$ of (all) physical quantities. Each physical quantity $A_i$ in the set is represented by some self-adjoint operator $\hA$ in $V$.\footnote{From here on, we assume that all the physical quantities $A_i$ correspond to \emph{bounded} self-adjoint operators that lie in $\N$. Unbounded self-adjoint operators affiliated with $\N$ can be treated in a straightforward manner.} In fact, $V$ is generated by the operators $(\hA_i)_{i\in I}$ and the identity $\hat 1$, in the sense that $V=\{\hat 1, \hA_i \mid i\in I\}''$, where $\{S\}''$ denotes the double commutant of a set $S$ of operators (see e.g. \cite{KR83}).\footnote{We will often use the notation $V'$ for a subalgebra of $V$, which does \emph{not} mean the commutant of $V$. We trust that this will not lead to confusion.} Each abelian von Neumann subalgebra $V$ of $\N$ will be called a context, thus identifying the mathematical notion and its physical interpretation. The set of all contexts will be denoted $\VN$. Each context provides one of many `classical perspectives' on a quantum system. We partially order the set of contexts $\VN$ by inclusion. A smaller context $V'\subset V$ represents a `poorer', more limited classical perspective containing fewer physical quantities than $V$.
Each context $V\in\VN$ has a complete Boolean algebra $\PV$ of projections, and $\PV$ clearly is a sublattice of $\PN$. Propositions ``$\Ain\De$'' about the values of physical quantities $A$ in a (physical) context correspond to projections in the (mathematical) context $V$. Since $\PV$ is a Boolean algebra, there are Boolean algebra homomorphisms $\ld:\PV\ra\{0,1\}\simeq\{\false,\true\}$, which can be seen as truth-value assignments as usual. Hence, there are consistent truth-value assignments for all propositions ``$\Ain\De$'' for propositions about physical quantities \emph{within} a context.
The key result by Kochen and Specker \cite{KS67} shows that for $\N=\BH$, $\dim\cH\geq 3$, there are no truth-value assignments for \emph{all} contexts simultaneously in the following sense: there is no family of Boolean algebra homomorphisms $(\ld_V:\PV\ra\{0,1\})_{V\in\VN}$ such that if $V'=V\cap\tilde V$ is a subcontext of both $V$ and $\tilde V$, then $\ld_{V'}=\ld_V|_{V'}=\ld_{\tilde V}|_{V'}$, where $\ld_V|_{V'}$ is the restriction of $\ld_V$ to the subcontext $V'$, and analogously $\ld_{\tilde V}|_{V'}$. As Isham and Butterfield realised \cite{IB98,IB00}, this means that a certain presheaf has no global elements. In \cite{Doe05}, it is shown that this result generalises to all von Neumann algebras without a type $I_2$-summand.
In the topos approach to quantum theory, propositions are represented not by projections, but by suitable subobjects of a quantum state space. An obstacle arises since the Kochen-Specker theorem seems to show that such a quantum state space cannot exist. Yet, if one considers presheaves instead of sets, this problem can be overcome. The presheaves we consider are `varying sets' $(\ps S_V)_{V\in\VN}$, indexed by contexts. Whenever $V'\subset V$, there is a function defined from $\ps S_V$, the set associated with the context $V$, to $\ps S_{V'}$, the set associated with the smaller context $V'$. This makes $\ps S=(\ps S_V)_{V\in\VN}$ into a contravariant, $\Set$-valued functor.
Since by contravariance we go from $\ps S_V$ to $\ps S_{V'}$, there is a built-in idea of \emph{coarse-graining}: $V$ is the bigger context, containing more self-adjoint operators and more projections than the smaller context $V'$, so we can describe more physics from the perspective of $V$ than from $V'$. Typically, the presheaves defined over contexts will mirror this fact: the component $\ps S_V$ at $V$ contains more information (in a suitable sense, to be made precise in the examples in section \ref{Sec_SigEtc}) than $\ps S_{V'}$, the component at $V'$. Hence, the presheaf map $\ps S(i_{V'V}):\ps S_V\ra\ps S_{V'}$ will implement a form of coarse-graining of the information available at $V$ to that available at $V'$.
The subobjects of the quantum state space, which will be called the spectral presheaf $\Sig$, form a (complete) Heyting algebra. This is typical, since the subobjects of any object in a topos form a Heyting algebra. Heyting algebras are the algebraic representatives of (propositional) intuitionistic logics. In fact, we will not consider \emph{all} subobjects of the spectral presheaf, but rather the so-called clopen subobjects. The latter also form a complete Heyting algebra, as was first shown in \cite{DI(2)} and is proven here in a different way, using Galois connections, in section \ref{Sec_RepOfPropsAndBiHeyting}. The difference between the set of all subobjects of the spectral presheaf and the set of clopen subobjects is analogous to the difference between all subsets of a classical state space and (equivalence classes modulo null subsets of) measurable subsets.
Together with the representation of states (which we will not discuss here, but see \cite{DI(2),Doe08,Doe09,DI12}), these constructions provide an intuitionistic form of logic for quantum systems. Moreover, there is a clear topological underpinning, since the quantum state space $\Sig$ is a generalised space associated with the nonabelian algebra $\N$.
The construction of the presheaf $\Sig$ and its algebra of subobjects incorporates the concepts of contextuality and coarse-graining in a direct way, see sections \ref{Sec_SigEtc} and \ref{Sec_RepOfPropsAndBiHeyting}.
\section{Bi-Heyting algebras} \label{Sec_BiHeytAlgs}
The use of bi-Heyting algebras in superintuitionistic logic was developed by Rauszer \cite{Rau73,Rau77}. Lawvere emphasised the importance of co-Heyting and bi-Heyting algebras in category and topos theory, in particular in connection with continuum physics \cite{Law86,Law91}. Reyes, with Makkai \cite{MR95} and Zolfaghari \cite{RZ96}, connected bi-Heyting algebras with modal logic. In a recent paper, Bezhanishvili et al. \cite{BBGK10} prove (among other things) new duality theorems for bi-Heyting algebras based on bitopological spaces. Majid has suggested to use Heyting and co-Heyting algebras within a tentative representation-theoretic approach to the formulation of quantum gravity \cite{Maj95,Maj08}.
As far as we are aware, nobody has connected quantum systems and their logic with bi-Heyting algebras before.
The following definitions are standard and can be found in various places in the literature; see e.g. \cite{RZ96}.
A \textbf{Heyting algebra $H$} is a lattice with bottom element $0$ and top element $1$ which is a cartesian closed category. In other words, $H$ is a lattice such that for any two elements $A,B\in H$, there exists an exponential $A\Rightarrow B$, called the \textbf{Heyting implication (from $A$ to $B$)}, which is characterised by the adjunction
\begin{equation}
C\leq(A\Rightarrow B)\quad\text{if and only if}\quad C\wedge A\leq B.
\end{equation}
This means that the product (meet) functor $A\meet\_:H\ra H$ has a right adjoint $A\Rightarrow\_:H\ra H$ for all $A\in H$.
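Equivalently, $A\Rightarrow B$ is the largest element $C\in H$ such that $A\meet C\leq B$; in a complete Heyting algebra one may write $A\Rightarrow B=\bjoin\{C\in H \mid A\meet C\leq B\}$.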
It is straightforward to show that the underlying lattice of a Heyting algebra is distributive. If the underlying lattice is complete, then the adjoint functor theorem for posets shows that for all $A\in H$ and all families $(A_i)_{i\in I}\subseteq H$, the following infinite distributivity law holds:
\begin{equation}
A\meet\bjoin_{i\in I}A_i = \bjoin_{i\in I}(A\meet A_i).
\end{equation}
The \textbf{Heyting negation} is defined as
\begin{align}
\neg: H &\lra H^{\op}\\ \nonumber
A &\lmt (A\Rightarrow 0).
\end{align}
The defining adjunction shows that $\neg A=\bjoin\{B\in H \mid A\meet B=0\}$, i.e., $\neg A$ is the largest element in $H$ such that $A\meet\neg A=0$. Some standard properties of the Heyting negation are:
\begin{align}
& A\leq B\text{ implies }\neg A\geq\neg B,\\
& \neg\neg A\geq A,\\
& \neg\neg\neg A=\neg A\\
& \neg A\join A\leq 1.
\end{align}
Interpreted in logical terms, the last property on this list means that in a Heyting algebra the law of excluded middle need not hold: in general, the disjunction between a proposition represented by $A\in H$ and its Heyting negation (also called Heyting complement, or pseudo-complement) $\neg A$ can be smaller than $1$, which represents the trivially true proposition. Heyting algebras are algebraic representatives of (propositional) intuitionistic logics.
A canonical example of a Heyting algebra is the topology $\mc T$ of a topological space $(X,\mc T)$, with unions of open sets as joins and intersections as meets.
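In this example the Heyting structure can be described explicitly: for open sets $U,V\in\mc T$ one has $U\Rightarrow V=\on{int}((X\setminus U)\cup V)$ and hence $\neg U=\on{int}(X\setminus U)$. For instance, for $X=\mathbb{R}$ with its standard topology and $U=(0,\infty)$, one finds $\neg U=(-\infty,0)$ and $U\join\neg U=\mathbb{R}\setminus\{0\}$, which is strictly smaller than $1=\mathbb{R}$; this illustrates the failure of the law of excluded middle.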
A \textbf{co-Heyting algebra} (also called \textbf{Brouwer algebra $J$}) is a lattice with bottom element $0$ and top element $1$ such that the coproduct (join) functor $B\join\_:J\ra J$ has a left adjoint $\_\Leftarrow B:J\ra J$ for all $B\in J$. The element $A\Leftarrow B$ is called the \textbf{co-Heyting implication (from $A$)}. It is characterised by the adjunction
\begin{equation}
(A\Leftarrow B)\leq C\text{ iff }A\leq B\join C.
\end{equation}
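Equivalently, $A\Leftarrow B$ is the smallest element $C\in J$ such that $A\leq B\join C$; in a complete co-Heyting algebra one may write $A\Leftarrow B=\bmeet\{C\in J \mid A\leq B\join C\}$.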
It is straightforward to show that the underlying lattice of a co-Heyting algebra is distributive. If the underlying lattice is complete, then the adjoint functor theorem for posets shows that for all $A\in J$ and all families $(A_i)_{i\in I}\subseteq J$, the following infinite distributivity law holds:
\begin{equation}
A\join\bmeet_{i\in I}A_i = \bmeet_{i\in I}(A\join A_i).
\end{equation}
The \textbf{co-Heyting negation} is defined as
\begin{align}
\sim: J &\lra J^{\op}\\
A &\lmt (1\Leftarrow A).
\end{align}
The defining adjunction shows that $\sim A=\bmeet\{B\in J \mid A\join B=1\}$, i.e., $\sim A$ is the smallest element in $J$ such that $A\join \sim A=1$. Some properties of the co-Heyting negation are:
\begin{align}
& A\leq B\text { implies }\sim A\geq\sim B,\\
& \sim\sim A\leq A,\\
&\sim\sim\sim A=\sim A\\
&\sim A\meet A\geq 0.
\end{align}
Interpreted in logical terms, the last property on this list means that in a co-Heyting algebra the law of noncontradiction does not hold: in general, the conjunction between a proposition represented by $A\in J$ and its co-Heyting negation $\sim A$ can be larger than $0$, which represents the trivially false proposition. Co-Heyting algebras are algebraic representatives of (propositional) paraconsistent logics.
We will not discuss paraconsistent logic in general, but in the final section \ref{Sec_NegsAndRegs}, we will give an interpretation of the co-Heyting negation showing up in the form of quantum logic to be presented in this article.
A canonical example of a co-Heyting algebra is given by the closed sets $\mc C$ of a topological space, with unions of closed sets as joins and intersections as meets.
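Also here the structure can be made explicit: for closed sets $A,B\in\mc C$ one has $A\Leftarrow B=\on{cl}(A\setminus B)$ and hence $\sim A=\on{cl}(X\setminus A)$. For instance, for $X=\mathbb{R}$ and $A=[0,\infty)$, one finds $\sim A=(-\infty,0]$ and $A\meet\sim A=\{0\}$, which is strictly larger than $0=\emptyset$; this illustrates the failure of the law of noncontradiction.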
Of course, Heyting algebras and co-Heyting algebras are dual notions. The opposite $H^{\op}$ of a Heyting algebra is a co-Heyting algebra and vice versa.
A \textbf{bi-Heyting algebra $K$} is a lattice which is a Heyting algebra and a co-Heyting algebra. For each $A\in K$, the functor $A\meet\_:K \ra K$ has a right adjoint $A\Rightarrow\_:K\ra K$, and the functor $A\join\_:K\ra K$ has a left adjoint $\_\Leftarrow A:K\ra K$. A bi-Heyting algebra $K$ is called complete if it is complete as a Heyting algebra and complete as a co-Heyting algebra.
A canonical example of a bi-Heyting algebra is a Boolean algebra $\mc B$. (Note that by Stone's representation theorem, each Boolean algebra is isomorphic to the algebra of clopen, i.e., closed and open, subsets of its Stone space. This gives the connection with the topological examples.) In a Boolean algebra, we have for the Heyting negation that, for all $A\in\mc B$,
\begin{equation}
A\join\neg A=1,
\end{equation}
which is the characterising property of the co-Heyting negation. In fact, in a Boolean algebra, $\neg =\sim$.
\section{The spectral presheaf of a von Neumann algebra and clopen subobjects} \label{Sec_SigEtc}
With each von Neumann algebra $\N$, we associate a particular presheaf, the so-called spectral presheaf. A distinguished family of subobjects, the so-called clopen subobjects, are defined and their interpretation is given: clopen subobjects can be seen as families of local propositions, compatible with respect to coarse-graining. The constructions presented here summarise those discussed in \cite{DI(1),DI(2),DI(Coecke)08,DI12}.
Let $\N$ be a von Neumann algebra, and let $\VN$ be the set of its abelian von Neumann subalgebras, partially ordered under inclusion. We only consider subalgebras $V\subset\N$ which have the same unit element as $\N$, given by the identity operator $\hat 1$ on the Hilbert space on which $\N$ is represented. By convention, we exclude the trivial subalgebra $V_0=\bbC\hat 1$ from $\VN$. (This will play an important r\^ole in the discussion of the Heyting negation in section \ref{Sec_NegsAndRegs}.) The poset $\VN$ is called the \textbf{context category of the von Neumann algebra $\N$}.
For $V',V\in\VN$ such that $V'\subset V$, the inclusion $i_{V'V}:V'\hookrightarrow V$ restricts to a morphism $i_{V'V}|_{\mc P(V')}:\mc P(V')\ra\PV$ of complete Boolean algebras. In particular, $i_{V'V}$ preserves all meets, hence it has a left adjoint
\begin{align}
\deo_{V,V'}: \PV &\lra \mc P(V')\\ \nonumber
\hP &\lmt \deo_{V,V'}(\hP)=\bmeet\{\hQ\in V' \mid \hQ\geq \hP\}
\end{align}
that preserves all joins, i.e., for all families $(\hP_i)_{i\in I}\subseteq\PV$, it holds that
\begin{equation}
\deo_{V,V'}(\bjoin_{i\in I}\hP_i)=\bjoin_{i\in I}\deo_{V,V'}(\hP_i),
\end{equation}
where the join on the left hand side is taken in $\PV$ and the join on the right hand side is in $\mc P(V')$. If $W\subset V'\subset V$, then $\deo_{V,W}=\deo_{V',W}\circ\deo_{V,V'}$, obviously.
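For illustration, suppose that $\hP_1,\hP_2,\hP_3$ are pairwise orthogonal projections in $\N$ with $\hP_1+\hP_2+\hP_3=\hat 1$, and let $V=\{\hP_1,\hP_2,\hP_3\}''$ and $V'=\{\hP_1,\hat 1\}''$, so that $\mc P(V')=\{\hat 0,\hP_1,\hat 1-\hP_1,\hat 1\}$. Then $\deo_{V,V'}(\hP_2)=\hat 1-\hP_1$ and $\deo_{V,V'}(\hP_1+\hP_2)=\hat 1$: each projection is coarse-grained to the smallest projection dominating it that is available in the smaller context.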
We note that distributivity of the lattices $\PV$ and $\mc P(V')$ plays no r\^ole here. If $\N$ is a von Neumann algebra and $\cM$ is any von Neumann subalgebra such that their unit elements coincide, $\hat 1_{\cM}=\hat 1_{\N}$, then there is a join-preserving map
\begin{align}
\deo_{\N,\cM}: \PN &\lra \mc P(\cM)\\ \nonumber
\hP &\lmt \deo_{\N,\cM}(\hP)=\bmeet\{\hQ\in \mc P(\cM) \mid \hQ\geq \hP\}.
\end{align}
Recall that the Gel'fand spectrum $\Sigma(A)$ of an abelian $C^*$-algebra $A$ is the set of algebra homomorphisms $\ld:A\ra\bbC$. Equivalently, the elements of the Gel'fand spectrum $\Sigma(A)$ are the pure states of $A$. The set $\Sigma(A)$ is given the relative weak*-topology (as a subset of the dual space of $A$), which makes it into a compact Hausdorff space. By Gel'fand-Naimark duality, $A\simeq C(\Sigma(A))$, that is, $A$ is isometrically $*$-isomorphic to the abelian $C^*$-algebra $C(\Sigma(A))$ of continuous, complex-valued functions on $\Sigma(A)$, equipped with the supremum norm. If $A$ is an abelian von Neumann algebra, then $\Sigma(A)$ is extremely disconnected.
We now define the main object of interest:
\begin{definition}
Let $\N$ be a von Neumann algebra. The \textbf{spectral presheaf $\Sig$ of $\N$} is the presheaf over $\VN$ given
\begin{itemize}
\item [(a)] on objects: for all $V\in\VN$, $\Sig_V:=\Sigma(V)$, the Gel'fand spectrum of $V$,
\item [(b)] on arrows: for all inclusions $i_{V'V}:V'\hookrightarrow V$,
\begin{align}
\Sig(i_{V'V}): \Sig_V &\lra \Sig_{V'}\\ \nonumber
\ld &\lmt \ld|_{V'}.
\end{align}
\end{itemize}
\end{definition}
The restriction maps $\Sig(i_{V'V})$ are well-known to be continuous, surjective maps with respect to the Gel'fand topologies on $\Sig_V$ and $\Sig_{V'}$, respectively. They are also open and closed, see e.g. \cite{DI(2)}.
We equip the spectral presheaf with a distinguished family of subobjects (which are subpresheaves):
\begin{definition}
A subobject $\ps S$ of $\Sig$ is called \textbf{clopen} if for each $V\in\VN$, the set $\ps S_V$ is a clopen subset of the Gel'fand spectrum $\Sig_V$. The set of all clopen subobjects of $\Sig$ is denoted as $\Subcl{\Sig}$.
\end{definition}
The set $\Subcl{\Sig}$, together with the lattice operations and bi-Heyting algebra structure defined below, is the algebraic implementation of the new topos-based form of quantum logic. The elements $\ps S\in\Subcl{\Sig}$ represent propositions about the values of the physical quantities of the quantum system. The most direct connection with propositions of the form ``$\Ain\De$'' is given by the map called daseinisation, see Def. \ref{Def_OuterDas} below.
We note that the concept of contextuality (cf. section \ref{Sec_Background}) is implemented by this construction, since $\Sig$ is a presheaf over the context category $\VN$. Moreover, coarse-graining is mathematically realised by the fact that we use subobjects of presheaves. In the case of $\Sig$ and its clopen subobjects, this means the following: for each context $V\in\VN$, the component $\ps S_V\subseteq\Sig_V$ represents a \emph{local proposition} about the value of some physical quantity. If $V'\subset V$, then $\ps S_{V'}\supseteq\Sig(i_{V'V})(\ps S_V)$ (since $\ps S$ is a subobject), so $\ps S_{V'}$ represents a local proposition at the smaller context $V'\subset V$ that is \emph{coarser} than (i.e., a consequence of) the local proposition represented by $\ps S_V$.
A clopen subobject $\ps S\in\Subcl{\Sig}$ can hence be interpreted as a collection of \emph{local propositions}, one for each context, such that smaller contexts are assigned coarser propositions.
Clearly, the definition of clopen subobjects makes use of the Gel'fand topologies on the components $\Sig_V$, $V\in\VN$. We note that for each abelian von Neumann algebra $V$ (and hence for each context $V\in\VN$), there is an isomorphism of complete Boolean algebras
\begin{align} \label{Eq_alphaV}
\alpha_V:\PV &\lra \Cp(\Sig_V)\\ \nonumber
\hP &\lmt \{\ld\in\Sig_V \mid \ld(\hP)=1\}.
\end{align}
Here, $\Cp(\Sig_V)$ denotes the clopen subsets of $\Sig_V$.
There is a purely order-theoretic description of $\Subcl\Sig$: let
\begin{equation}
\mc P:=\prod_{V\in\VN}\PV
\end{equation}
be the set of choice functions $f:\VN\ra\coprod_{V\in\VN}\PV$, where $f(V)\in\PV$ for all $V\in\VN$. Equipped with pointwise operations, $\mc P$ is a complete Boolean algebra, since each $\PV$ is a complete Boolean algebra. Consider the subset $\mc S$ of $\mc P$ consisting of those functions for which $V'\subset V$ implies $f(V')\geq f(V)$. The subset $\mc S$ is closed under all meets and joins (in $\mc P$), and clearly, $\mc S\simeq\Subcl\Sig$.
We define a partial order on $\Subcl{\Sig}$ in the obvious way:
\begin{equation}
\forall \ps S,\ps T\in\Subcl{\Sig} :\quad \ps S \leq \ps T :\Longleftrightarrow (\forall V\in\VN: \ps S_V\subseteq \ps T_V).
\end{equation}
We define the corresponding (complete) lattice operations in a stagewise manner, i.e., at each context $V\in\VN$ separately: for any family $(\ps S_i)_{i\in I}$,
\begin{equation}
\forall V\in\VN : (\bmeet_{i\in I}\ps S_i)_V:=\on{int}(\bigcap_{i\in I}\ps S_{i;V}),
\end{equation}
where $\ps S_{i;V}\subseteq\Sig_V$ is the component at $V$ of the clopen subobject $\ps S_i$. Note that the lattice operation is not just componentwise set-theoretic intersection, but rather the interior (with respect to the Gel'fand topology) of the intersection. This guarantees that one obtains clopen subsets at each stage $V$, not just closed ones. Analogously,
\begin{equation}
\forall V\in\VN : (\bjoin_{i\in I}\ps S_i)_V:=\on{cl}(\bigcup_{i\in I}\ps S_{i;V}),
\end{equation}
where the closure of the union is necessary in order to obtain clopen sets, not just open ones. The fact that meets and joins are not given by set-theoretic intersections and unions also means that $\Subcl{\Sig}$ is not a sub-Heyting algebra of the Heyting algebra $\Sub{\Sig}$ of all subobjects of the spectral presheaf. The difference between $\Sub{\Sig}$ and $\Subcl{\Sig}$ is analogous to the difference between the power set $PX$ of a set $X$ and the complete Boolean algebra $BX$ of measurable subsets (with respect to some measure) modulo null sets. For results on measures and quantum states from the perspective of the topos approach, see \cite{Doe08,DI12}.
In section \ref{Sec_RepOfPropsAndBiHeyting}, we will show that $\Subcl{\Sig}$ is a complete bi-Heyting algebra.
\begin{example}
For illustration, we consider a simple example: let $\N$ be the \emph{abelian} von Neumann algebra of diagonal matrices in $3$ dimensions. This is given by
\begin{equation}
\N:=\bbC\hP_1+\bbC\hP_2+\bbC\hP_3,
\end{equation}
where $\hP_1,\hP_2,\hP_3$ are pairwise orthogonal rank-$1$ projections on a $3$-dimensional Hilbert space. The projection lattice $\PN$ of $\N$ has $8$ elements,
\begin{equation}
\PN=\{\hat 0,\hP_1,\hP_2,\hP_3,\hP_1+\hP_2,\hP_1+\hP_3,\hP_2+\hP_3,\hat 1\}.
\end{equation}
Of course, $\PN$ is a Boolean algebra.
The algebra $\N$ has three non-trivial abelian subalgebras,
\begin{equation}
V_i:=\bbC\hP_i+\bbC(\hat 1-\hP_i),\quad i=1,2,3.
\end{equation}
Hence, the context category $\VN$ is the $4$-element poset with $\N$ as top element and $V_i\subset\N$ for $i=1,2,3$.
The Gel'fand spectrum $\Sig_{\N}$ of $\N$ has three elements $\ld_1,\ld_2,\ld_3$ such that
\begin{equation}
\ld_i(\hP_j)=\delta_{ij}.
\end{equation}
The Gel'fand spectrum $\Sig_{V_1}$ of $V_1$ has two elements $\ld'_1,\ld'_{2+3}$ such that
\begin{equation}
\ld'_1(\hP_1)=1,\quad\ld'_1(\hat 1-\hP_1)=0,\quad\ld'_{2+3}(\hP_1)=0,\quad\ld'_{2+3}(\hat 1-\hP_1)=1.
\end{equation}
(Note that $\hat 1-\hP_1=\hP_2+\hP_3$.) Analogously, the spectrum $\Sig_{V_2}$ has two elements $\ld'_{1+3},\ld'_2$, and the spectrum $\Sig_{V_3}$ has two elements $\ld'_{1+2},\ld'_3$.
Consider the restriction map of the spectral presheaf from $\Sig_{\N}$ to $\Sig_{V_1}$:
\begin{equation}
\Sig(i_{V_1,\N})(\ld_1)=\ld'_1,\quad\Sig(i_{V_1,\N})(\ld_2)=\Sig(i_{V_1,\N})(\ld_3)=\ld'_{2+3}.
\end{equation}
The restriction maps from $\Sig_{\N}$ to $\Sig_{V_2}$ resp. $\Sig_{V_3}$ are defined analogously. This completes the description of the spectral presheaf $\Sig$ of the algebra $\N$.
We will now determine all clopen subobjects of $\Sig$. First, note that the Gel'fand spectra all are discrete sets, so topological questions are trivial here. We simply have to determine all subobjects of $\Sig$. We distinguish a number of cases:
\begin{itemize}
\item [(a)] Let $\ps S\in\Subcl\Sig$ be a subobject such that $\ps S_{\N}=\Sig_{\N}=\{\ld_1,\ld_2,\ld_3\}$. Then the restriction maps of $\Sig$ dictate that for each $V_i$, $i=1,2,3$, we have $\ps S_{V_i}\supseteq\Sig(i_{V_i,\N})(\ps S_{\N})=\Sig_{V_i}$, so $\ps S$ must be $\Sig$ itself.
\item [(b)] Let $\ps S$ be a subobject such that $\ps S_{\N}$ contains two elements, e.g. $\ps S_{\N}=\{\ld_1,\ld_2\}$. Then $\ps S_{V_1}=\Sig_{V_1}$ and $\ps S_{V_2}=\Sig_{V_2}$, but $\ps S_{V_3}$ can either be $\{\ld'_{1+2}\}$ or $\{\ld'_{1+2},\ld'_3\}$, so there are $2$ options. Moreover, there are three ways of picking two elements from the three-element set $\Sig_{\N}$, so we have $3\cdot 2=6$ subobjects $\ps S$ with two elements in $\ps S_{\N}$.
\item [(c)] Let $\ps S$ be such that $\ps S_{\N}$ contains one element, e.g. $\ps S_{\N}=\{\ld_1\}$. Then $\ps S_{V_1}$ can either be $\{\ld'_1\}$ or $\{\ld'_1,\ld'_{2+3}\}$; $\ps S_{V_2}$ can either be $\{\ld'_{1+3}\}$ or $\{\ld'_{1+3},\ld'_2\}$; and $\ps S_{V_3}$ can either be $\{\ld'_{1+2}\}$ or $\{\ld'_{1+2},\ld'_3\}$. Hence, there are $2^3$ options. Moreover, there are three ways of picking one element from $\Sig_{\N}$, so there are $3\cdot 2^3=24$ subobjects $\ps S$ with one element in $\ps S_{\N}$.
\item [(d)] Finally, consider a subobject $\ps S$ such that $\ps S_{\N}=\emptyset$. Since the $V_i$ are not contained in one another, there are no conditions arising from restriction maps of the spectral presheaf $\Sig$. Hence, we can pick an arbitrary subset of $\Sig_{V_i}$ for $i=1,2,3$. Since each $\Sig_{V_i}$ has $2$ elements, there are $4$ subsets of each, so we have $4^3=64$ subobjects $\ps S$ with $\ps S_{\N}=\emptyset$.
\end{itemize}
In all, $\Subcl\Sig$ has $64+24+6+1=95$ elements.
\end{example}
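Since all the Gel'fand spectra in this example are finite and discrete, the count can be double-checked mechanically. The following Python script is a purely illustrative sketch (it is not part of the formal development, and all identifiers are ad hoc): points of the spectra are encoded by the blocks of the corresponding partitions of $\{1,2,3\}$, and the script counts the families of subsets that are compatible with the restriction maps.
\begin{verbatim}
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Points of Sigma_V are labelled by the blocks of the partition of {1,2,3}
# determined by the minimal projections of V.
Sigma_N = [frozenset({1}), frozenset({2}), frozenset({3})]
Sigma_V = {i: [frozenset({i}), frozenset({1, 2, 3}) - frozenset({i})]
           for i in (1, 2, 3)}

def restrict(point, i):
    # restriction Sigma_N -> Sigma_{V_i}: send a point to the block containing it
    return next(b for b in Sigma_V[i] if point <= b)

count = 0
for S_N in powerset(Sigma_N):
    # the component at V_i must contain the image of S_N under restriction;
    # every other point of Sigma_{V_i} may be included or not
    required = {i: {restrict(p, i) for p in S_N} for i in (1, 2, 3)}
    choices = 1
    for i in (1, 2, 3):
        choices *= 2 ** (len(Sigma_V[i]) - len(required[i]))
    count += choices

print(count)  # 95
\end{verbatim}
The script prints $95$, in agreement with the count above.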
We conclude this section with the remark that the pertinent topos in which the spectral presheaf (and the other presheaves discussed in this section) lives is, of course, the topos $\SetVNop$ of presheaves over the context category $\VN$.
\section{Representation of propositions and bi-Heyting algebra structure} \label{Sec_RepOfPropsAndBiHeyting}
\begin{definition} \label{Def_OuterDas}
Let $\N$ be a von Neumann algebra, and let $\PN$ be its lattice of projections. The map
\begin{align}
\ps\deo: \PN &\lra \Subcl{\Sig}\\ \nonumber
\hP &\lmt \ps\deo(\hP):=(\alpha_V(\deo_{\N,V}(\hP)))_{V\in\VN}
\end{align}
is called \textbf{outer daseinisation of projections}.
\end{definition}
This map was introduced in \cite{DI(2)} and discussed in detail in \cite{Doe09,Doe11}. It can be seen as a `translation' map from standard quantum logic, encoded by the complete orthomodular lattice $\PN$ of projections, to a form of (super)intuitionistic logic for quantum systems, based on the clopen subobjects of the spectral presheaf $\Sig$, which conceptually plays the r\^ole of a quantum state space.
In standard quantum logic, the projections $\hP\in\PN$ represent propositions of the form ``$\Ain\De$'', that is, ``the physical quantity $A$ has a value in the Borel set $\De$ of real numbers''. The connection between propositions and projections is given by the spectral theorem. Outer daseinisation can hence be seen as a map from propositions of the form ``$\Ain\De$'' into the bi-Heyting algebra $\Subcl{\Sig}$ of clopen subobjects of the spectral presheaf. A projection $\hP$, representing a proposition ``$\Ain\De$'', is mapped to a collection $(\deo_{\N,V}(\hP))_{V\in\VN}$, consisting of one projection $\deo_{\N,V}(\hP)$ for each context $V\in\VN$. (Each isomorphism $\alpha_V$, $V\in\VN$, just maps the projection $\deo_{\N,V}(\hP)$ to the corresponding clopen subset of $\Sig_V$, which does not affect the interpretation.)
Since we have $\deo_{\N,V}(\hP)\geq\hP$ for all $V$, the projection $\deo_{\N,V}(\hP)$ represents a coarser (local) proposition than ``$\Ain\De$'' in general. For example, if $\hP$ represents ``$\Ain\De$'', then $\deo_{\N,V}(\hP)$ may represent ``$\Ain\Ga$'' where $\Ga\supset\De$.
The map $\ps\deo$ preserves all joins, as shown in section 2.D of \cite{DI(2)} and in \cite{Doe09}. Here is a direct argument: being left adjoint to the inclusion of $\PV$ into $\PN$, the map $\deo_{\N,V}$ preserves all colimits, which are joins. Moreover, $\alpha_V$ is an isomorphism of complete Boolean algebras, so $\alpha_V\circ\deo_{\N,V}$ preserves all joins. This holds for all $V\in\VN$, and joins in $\Subcl{\Sig}$ are defined stagewise, so $\ps\deo$ preserves all joins.
Moreover, $\ps\deo$ is order-preserving and injective, but not surjective. Clearly, $\ps\deo(\hat 0)=\ps 0$, the empty subobject, and $\ps\deo(\hat 1)=\Sig$. For meets, we have
\begin{equation}
\forall \hP,\hQ\in\PN : \ps\deo(\hP\meet\hQ)\leq\ps\deo(\hP)\meet\ps\deo(\hQ).
\end{equation}
In general, $\ps\deo(\hP)\meet\ps\deo(\hQ)$ is not of the form $\ps\deo(\hat R)$ for any projection $\hat R\in\PN$. See \cite{DI(2),Doe09} for proof of these statements.
Let $(\ps S_i)_{i\in I}\subseteq\Subcl{\Sig}$ be a family of clopen subobjects of $\Sig$, and let $\ps S\in\Subcl{\Sig}$. Then
\begin{equation}
\forall V\in\VN : (\ps S\meet\bjoin_{i\in I}\ps S_i)_V=\bjoin_{i\in I}(\ps S_V\meet\ps S_{i;V}),
\end{equation}
since $\Cp(\Sig_V)$ is a distributive lattice (in fact, a complete Boolean algebra) in which finite meets distribute over arbitrary joins. Hence, for each $\ps S\in\Subcl{\Sig}$, the functor
\begin{equation}
\ps S\meet\_:\Subcl{\Sig} \lra \Subcl{\Sig}
\end{equation}
preserves all joins, so by the adjoint functor theorem for posets, it has a right adjoint
\begin{equation}
\ps S\Rightarrow\_:\Subcl{\Sig} \lra \Subcl{\Sig}.
\end{equation}
This map, the \textbf{Heyting implication from $\ps S$}, makes $\Subcl{\Sig}$ into a complete Heyting algebra. This was shown before in \cite{DI(2)}. The Heyting implication is given by the adjunction
\begin{equation}
\ps R\meet\ps S\leq\ps T\quad\text{if and only if}\quad\ps R\leq (\ps S\Rightarrow\ps T).
\end{equation}
(Note that $\ps S\meet\_=\_\meet\ps S$.) This implies that
\begin{equation}
(\ps S\Rightarrow\ps T)=\bjoin\{\ps R\in\Subcl{\Sig} \mid \ps R\meet\ps S\leq\ps T\}.
\end{equation}
The stagewise definition is: for all $V\in\VN$,
\begin{equation}
(\ps S\Rightarrow\ps T)_V=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\text{ if } \ld|_{V'}\in \ps S_{V'}\text{, then }\ld|_{V'}\in\ps T_{V'}\}.
\end{equation}
As usual, the \textbf{Heyting negation $\neg$} is defined for all $\ps S\in\Subcl{\Sig}$ by
\begin{equation}
\neg\ps S:=(\ps S\Rightarrow\ps 0).
\end{equation}
That is, $\neg\ps S$ is the largest element of $\Subcl{\Sig}$ such that
\begin{equation}
\ps S\meet\neg\ps S=\ps 0.
\end{equation}
The stagewise expression for $\neg\ps S$ is
\begin{equation} \label{Eq_HeytNegStagew}
(\neg\ps S)_V=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}\notin \ps S_{V'}\}.
\end{equation}
In $\Subcl{\Sig}$, we also have, for all families $(\ps S_i)_{i\in I}\subseteq\Subcl{\Sig}$ and all $\ps S\in\Subcl{\Sig}$,
\begin{equation}
\forall V\in\VN: (\ps S\join\bmeet_{i\in I}\ps S_i)_V=\bmeet_{i\in I}(\ps S_V\join\ps S_{i;V}),
\end{equation}
since finite joins distribute over arbitrary meets in $\Cp(\Sig)$. Hence, for each $\ps S$ the functor
\begin{equation}
\ps S\join\_:\Subcl{\Sig} \lra \Subcl{\Sig}
\end{equation}
preserves all meets, so it has a left adjoint
\begin{equation}
\ps S\Leftarrow\_: \Subcl{\Sig} \lra \Subcl{\Sig}
\end{equation}
which we call \textbf{co-Heyting implication}. This map makes $\Subcl{\Sig}$ into a complete co-Heyting algebra. It is characterised by the adjunction
\begin{equation}
(\ps S\Leftarrow\ps T)\leq\ps R\quad\text{iff}\quad\ps S\leq\ps T\join\ps R,
\end{equation}
so
\begin{equation} \label{Eq_coImp}
(\ps S\Leftarrow\ps T)=\bmeet\{\ps R\in\Subcl{\Sig} \mid \ps S\leq\ps T\join\ps R\}.
\end{equation}
One can think of $\ps S\Leftarrow\_$ as a kind of `subtraction' (see e.g. \cite{RZ96}): $\ps S\Leftarrow\ps T$ is the smallest clopen subobject $\ps R$ for which $\ps T\join\ps R$ is bigger than $\ps S$, so it encodes how much is `missing' from $\ps T$ to cover $\ps S$.
We define a \textbf{co-Heyting negation} for each $\ps S\in\Subcl{\Sig}$ by
\begin{equation}
\sim\ps S:=(\Sig\Leftarrow\ps S).
\end{equation}
(Note that $\Sig$ is the top element in $\Subcl{\Sig}$.) Hence, $\sim\ps S$ is the smallest clopen subobject such that
\begin{equation}
\sim\ps S\join\ps S=\Sig
\end{equation}
holds. We have shown in a direct manner, without use of topos theory as in section \ref{Sec_SigEtc}:
\begin{proposition}
$(\Subcl{\Sig},\meet,\join,\ps 0,\Sig,\Rightarrow,\neg,\Leftarrow,\sim)$ is a complete bi-Heyting algebra.
\end{proposition}
We give direct arguments for the following two facts (which also follow from the general theory of bi-Heyting algebras):
\begin{lemma}
For all $\ps S\in\Subcl{\Sig}$, we have $\neg\ps S\leq\;\sim\ps S$.
\end{lemma}
\begin{proof}
For all $V\in\VN$, it holds that $(\neg\ps S)_V\subseteq\Sig_V\backslash\ps S_V$, since $(\neg\ps S\meet\ps S)_V=(\neg\ps S)_V\cap\ps S_V=\emptyset$, while $(\sim\ps S)_V\supseteq\Sig_V\backslash\ps S_V$ since $(\sim\ps S\join\ps S)_V=(\sim\ps S)_V\cup\ps S_V=\Sig_V$.
\end{proof}
The above lemma and the fact that $\neg\ps S$ is the largest subobject such that $\neg\ps S\meet\ps S=\ps 0$ imply
\begin{corollary}
In general, $\sim\ps S\meet\ps S>\ps 0$.
\end{corollary}
This means that the co-Heyting negation does not give a system in which a central axiom of most logical systems, viz. freedom from contradiction, holds. We have a glimpse of \emph{paraconsistent logic}.
In fact, a somewhat stronger result holds: for any von Neumann algebra except for $\bbC\hat 1=M_1(\bbC)$ and $M_2(\bbC)$, we have $\sim\ps S>\neg\ps S$ and $\sim\ps S\meet\ps S>\ps 0$ for all clopen subobjects except $\ps 0$ and $\Sig$. This follows easily from the representation of clopen subobjects as families of projections, see beginning of next section.
\section{Negations and regular elements} \label{Sec_NegsAndRegs}
In this section, we will examine the Heyting negation $\neg$ and the co-Heyting negation $\sim$ more closely. We will determine regular elements with respect to the Heyting and the co-Heyting algebra structure.
Throughout, we will make use of the isomorphism $\alpha_V:\PV\ra\Cp(\Sig_V)$ (defined in (\ref{Eq_alphaV})) between the complete Boolean algebras of projections in an abelian von Neumann algebra $V$ and the clopen subsets of its spectrum $\Sig_V$. Given a projection $\hP\in\PV$, we will use the notation $S_{\hP}:=\alpha_V(\hP)$. Conversely, for $S\in\Cp(\Sig_V)$, we write $\hP_S:=\alpha_V^{-1}(S)$.
Given a clopen subobject $\ps S\in\Subcl{\Sig}$, it is useful to think of it as a collection of projections: consider
\begin{equation}
(\hP_{\ps S_V})_{V\in\VN} = (\alpha_V(\ps S_V))_{V\in\VN},
\end{equation}
which consists of one projection for each context $V$. The fact that $\ps S$ is a subobject then translates to the fact that if $V'\subset V$, then $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$. (This is another instance of coarse-graining.)
If $\ld\in\Sig_V$ and $\hP\in\PV$, then
\begin{equation}
\ld(\hP)=\ld(\hP^2)=\ld(\hP)^2\in\{0,1\},
\end{equation}
where we used that $\hP$ is idempotent and that $\ld$ is multiplicative.
\textbf{Heyting negation and Heyting-regular elements.} We consider the stagewise expression (see eq. (\ref{Eq_HeytNegStagew})) for the Heyting negation:
\begin{align}
(\neg\ps S)_V &=\{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}\notin \ps S_{V'}\}\\
&= \{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld|_{V'}(\hP_{\ps S_{V'}})=0 \}\\
&= \{\ld\in\Sig_V \mid \forall V'\subseteq V:\ld(\hP_{\ps S_{V'}})=0 \}\\
&= \{\ld\in\Sig_V \mid \ld(\bjoin_{V'\subseteq V}\hP_{\ps S_{V'}})=0 \}
\end{align}
As we saw above, the smaller the context $V'$, the larger the associated projection $\hP_{\ps S_{V'}}$. Hence, for the join in the above expression, only the \emph{minimal} contexts $V'$ contained in $V$ are relevant. A minimal context is generated by a single projection $\hQ$ and the identity,
\begin{equation}
V_{\hQ}:=\{\hQ,\hat 1\}''=\bbC\hQ+\bbC\hat 1.
\end{equation}
Here, it becomes important that we excluded the trivial context $V_0=\{\hat 1\}''=\bbC\hat 1$. Let
\begin{equation}
m_V:=\{V'\subseteq V \mid V'\text{ minimal}\}=\{V_{\hQ} \mid \hQ\in\PV\setminus\{\hat 0,\hat 1\}\}.
\end{equation}
We obtain
\begin{align}
(\neg\ps S)_V &= \{\ld\in\Sig_V \mid \ld(\bjoin_{V'\in m_V}\hP_{\ps S_{V'}})=0 \}\\
&= \{\ld\in\Sig_V \mid \ld(\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}})=1 \}\\
&= S_{\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}}}.
\end{align}
This shows:
\begin{proposition} \label{Prop_HeytingNegLocal}
Let $\ps S\in\Subcl{\Sig}$, and let $V\in\VN$. Then
\begin{equation}
\hP_{(\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}},
\end{equation}
where $m_V=\{V'\subseteq V \mid V'\text{ minimal}\}$.
\end{proposition}
We can now consider double negation: $(\neg\neg\ps S)_V=S_{\hat 1-\bjoin_{V'\in m_V}\hP_{(\neg\ps S)_{V'}}}$, so
\begin{equation}
\hP_{(\neg\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{(\neg\ps S)_{V'}}.
\end{equation}
For a $V'\in m_V$, we have $\hP_{(\neg\ps S)_{V'}}=\hat 1-\bjoin_{W\in m_{V'}}\hP_{\ps S_{W}}$, but $m_{V'}=\{V'\}$, since $V'$ is minimal, so $\hP_{(\neg\ps S)_{V'}}=\hat 1-\hP_{\ps S_{V'}}$. Thus,
\begin{equation} \label{Eq_DoubleHeytNegStagew}
\hP_{(\neg\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}(\hat 1-\hP_{\ps S_{V'}})=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}.
\end{equation}
Since $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$ for all $V'\in m_V$ (because $\ps S$ is a subobject), we have
\begin{equation} \label{Eq_DoubleHeytNegBigger}
\hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}
\end{equation}
for all $V\in\VN$, so $\neg\neg\ps S\geq\ps S$ as expected. We have shown:
\begin{proposition} \label{Prop_HeytingReg}
An element $\ps S$ of $\Subcl{\Sig}$ is Heyting-regular, i.e., $\neg\neg\ps S=\ps S$, if and only if for all $V\in\VN$, it holds that
\begin{equation}
\hP_{\ps S_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}},
\end{equation}
where $m_V=\{V'\subseteq V \mid V'\text{ minimal}\}$.
\end{proposition}
\begin{definition}
A clopen subobject $\ps S\in\Subcl{\Sig}$ is called \textbf{tight} if
\begin{equation}
\Sig(i_{V'V})(\ps S_V)=\ps S_{V'}
\end{equation}
for all $V',V\in\VN$ such that $V'\subseteq V$.
\end{definition}
For arbitrary subobjects, we only have $\Sig(i_{V'V})(\ps S_V)\subseteq\ps S_{V'}$. Let $\ps S\in\Subcl{\Sig}$ be an arbitrary clopen subobject, and let $V,V'\in\VN$ such that $V'\subset V$. Then
$\Sig(i_{V'V})(\ps S_V)\subseteq\ps S_{V'}\subseteq\Sig_{V'}$, so $\hP_{\Sig(i_{V'V})(\ps S_V)}\in\mc P(V')$. Thm. 3.1 in \cite{DI(2)} shows that
\begin{equation}
\hP_{\Sig(i_{V'V})(\ps S_V)}=\deo_{V,V'}(\hP_{\ps S_V}).
\end{equation}
This key formula relates the restriction maps $\Sig(i_{V'V}):\Sig_V\ra\Sig_{V'}$ of the spectral presheaf to the maps $\deo_{V,V'}:\PV\ra\mc P(V')$. Using this, we see that
\begin{proposition}
A clopen subobject $\ps S\in\Subcl{\Sig}$ is tight if and only if $\hP_{\ps S_{V'}}=\deo_{V,V'}(\hP_{\ps S_V})$ for all $V',V\in\VN$ such that $V'\subseteq V$.
\end{proposition}
It is clear that all clopen subobjects of the form $\ps\deo(\hP)$, $\hP\in\PN$, are tight (see Def. \ref{Def_OuterDas}).
\begin{proposition}
For a tight subobject $\ps S\in\Subcl{\Sig}$, it holds that $\neg\neg\ps S=\ps S$, i.e., tight subobjects are Heyting-regular.
\end{proposition}
\begin{proof}
We saw in equation (\ref{Eq_DoubleHeytNegStagew}) that $\hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}$ for all $V\in\VN$. Moreover, $\hP_{(\neg\neg\ps S)_V}\geq\hP_{\ps S_V}$ from equation (\ref{Eq_DoubleHeytNegBigger}). Consider the minimal subalgebra $V_{\hP_{\ps S_V}}=\{\hP_{\ps S_V},\hat 1\}''$ of $V$. Then, since $\ps S$ is tight, we have
\begin{equation}
\deo_{V,V_{\hP_{\ps S_V}}}(\hP_{\ps S_V})=\bmeet\{\hQ\in\mc P(V_{\hP_{\ps S_V}}) \mid \hQ\geq\hP_{\ps S_V}\}=\hP_{\ps S_V},
\end{equation}
so, for all $V\in\VN$,
\begin{equation}
\hP_{(\neg\neg\ps S)_V}=\bmeet_{V'\in m_V}\hP_{\ps S_{V'}}=\hP_{\ps S_V}.
\end{equation}
\end{proof}
\begin{corollary}
Outer daseinisation $\ps\deo:\PN\ra\Subcl{\Sig}$ maps projections into the Heyting-regular elements of $\Subcl{\Sig}$.
\end{corollary}
We remark that in order to be Heyting-regular, an element $\ps S\in\Subcl{\Sig}$ need not be tight.
\textbf{Co-Heyting negation and co-Heyting regular elements.} For any $\ps S\in\Subcl{\Sig}$, by its defining property $\sim\ps S$ is the smallest element of $\Subcl{\Sig}$ such that $\ps S\join\sim\ps S=\Sig$.
Let $V$ be a maximal context, i.e., a maximal abelian subalgebra (masa) of the non-abelian von Neumann algebra $\N$. Then clearly
\begin{equation}
(\sim\ps S)_V=\Sig_V\backslash\ps S_V.
\end{equation}
Let $V\in\VN$, not necessarily maximal. We define
\begin{equation}
M_V:=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}.
\end{equation}
\begin{proposition} \label{Prop_CoHeytingNegLocal}
Let $\ps S\in\Subcl{\Sig}$, and let $V\in\VN$. Then
\begin{equation}
\hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})),
\end{equation}
where $M_V=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}$.
\end{proposition}
\begin{proof}
$\sim\ps S$ is a (clopen) subobject, so we must have
\begin{equation}
\hP_{(\sim\ps S)_V}\geq\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})),
\end{equation}
since $(\sim\ps S)_V$, the component at $V$, must contain all the restrictions of the components $(\sim\ps S)_{\tilde V}$ for $\tilde V\in M_V$ (and the above inequality expresses this using the corresponding projections).
On the other hand, $\sim\ps S$ is the \emph{smallest} clopen subobject such that $\ps S\join\sim\ps S=\Sig$. So it suffices to show that for
$\hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hP_{\ps S_{\tilde V}}))$, we have $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}=\hat 1$ for all $V\in\VN$, and hence $\sim\ps S\join\ps S=\Sig$.
If $V$ is maximal, then $\hP_{(\sim\ps S)_V}=\deo_{V,V}(\hat 1-\hP_{\ps S_V})=\hat 1-\hP_{\ps S_V}$ and hence $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}=\hat 1$. If $V$ is non-maximal and $\tilde V$ is any maximal context containing $V$, then $\hP_{(\sim\ps S)_V}\geq\hP_{(\sim\ps S)_{\tilde V}}$ and $\hP_{\ps S_V}\geq\hP_{\ps S_{\tilde V}}$, so $\hP_{(\sim\ps S)_V}\join\hP_{\ps S_V}\geq\hP_{(\sim\ps S)_{\tilde V}}\join\hP_{\ps S_{\tilde V}}=\hat 1$.
\end{proof}
For the double co-Heyting negation, we obtain
\begin{align}
\hP_{(\sim\sim\ps S)_V} &=\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-\hP_{(\sim\ps S)_{\tilde V}})\\
&= \bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-\bjoin_{W\in M_{\tilde V}}\deo_{W,\tilde V}(\hat 1-\hP_{\ps S_W})).
\end{align}
Since $\tilde V$ is maximal, we have $M_{\tilde V}=\{\tilde V\}$, and the above expression simplifies to
\begin{align}
\hP_{(\sim\sim\ps S)_V} &=\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hat 1-(\hat 1-\hP_{\ps S_{\tilde V}}))\\
&= \bjoin_{\tilde V\in M_{V}}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}}).
\end{align}
Note that the fact that $\ps S$ is a subobject implies that
\begin{equation} \label{Eq_DoubleCoHeytNegBigger}
\hP_{(\sim\sim\ps S)_V}\leq\hP_{\ps S_V}
\end{equation}
for all $V\in\VN$, so $\sim\sim\ps S\leq\ps S$ as expected. We have shown:
\begin{proposition} \label{Prop_CoHeytReg}
An element $\ps S$ of $\Subcl{\Sig}$ is co-Heyting-regular, i.e., $\sim\sim\ps S=\ps S$, if and only if for all $V\in\VN$ it holds that
\begin{equation}
\hP_{\ps S_V}=\bjoin_{\tilde V\in M_{V}}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}}),
\end{equation}
where $M_V=\{\tilde V\supseteq V \mid \tilde V\text{ maximal}\}$.
\end{proposition}
\begin{proposition}
If $\ps S\in\Subcl{\Sig}$ is tight, then $\sim\sim\ps S=\ps S$, i.e., tight subobjects are co-Heyting regular.
\end{proposition}
\begin{proof}
If $\ps S$ is tight, then for all $V\in\VN$ and for all $\tilde V\in M_V$, we have $\hP_{\ps S_V}=\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}})$, so $\bjoin_{\tilde V\in M_V}\deo_{\tilde V,V}(\hP_{\ps S_{\tilde V}})=\hP_{\ps S_V}$. By Prop. \ref{Prop_CoHeytReg}, the result follows.
\end{proof}
\begin{corollary}
Outer daseinisation $\ps\deo:\PN\ra\Subcl{\Sig}$ maps projections into the co-Heyting-regular elements of $\Subcl{\Sig}$.
\end{corollary}
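To make the two negations concrete, the stagewise formulas of Propositions \ref{Prop_HeytingNegLocal} and \ref{Prop_CoHeytingNegLocal} can be evaluated directly on the example of the algebra of diagonal $3\times 3$ matrices considered earlier, with $\ps S=\ps\deo(\hP_1)$ as input. The following Python fragment is a purely illustrative sketch (all identifiers are ad hoc, and it is not part of the formal development); projections are identified with the subsets of $\{1,2,3\}$ onto which they project.
\begin{verbatim}
FULL = frozenset({1, 2, 3})

# a context is recorded by the partition of {1,2,3} into its minimal projections
contexts = {
    'N':  [frozenset({1}), frozenset({2}), frozenset({3})],
    'V1': [frozenset({1}), frozenset({2, 3})],
    'V2': [frozenset({2}), frozenset({1, 3})],
    'V3': [frozenset({3}), frozenset({1, 2})],
}
minimal = {'N': ['V1', 'V2', 'V3'], 'V1': ['V1'], 'V2': ['V2'], 'V3': ['V3']}
maximal = {V: ['N'] for V in contexts}     # N is the only maximal context here

def das(P, V):
    # outer daseinisation: the smallest projection of the context V above P
    return frozenset().union(*(b for b in contexts[V] if b & P))

# S = outer daseinisation of P_1, given as one projection per context
S = {V: das(frozenset({1}), V) for V in contexts}

# stagewise Heyting negation: 1 minus the join of S over minimal contexts below V
neg_S = {V: FULL - frozenset().union(*(S[W] for W in minimal[V]))
         for V in contexts}
# stagewise co-Heyting negation: join of daseinised complements from maximal contexts
coneg_S = {V: frozenset().union(*(das(FULL - S[W], V) for W in maximal[V]))
           for V in contexts}

for V in contexts:
    print(V, set(S[V]), set(neg_S[V]), set(coneg_S[V]), set(S[V] & coneg_S[V]))
\end{verbatim}
At the stages $V_2$ and $V_3$ the components of $\ps S$ and $\sim\ps S$ overlap, while $\ps S$ and $\neg\ps S$ are disjoint at every stage; this is precisely the paraconsistent behaviour discussed at the end of the previous section.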
\textbf{Physical interpretation.} We conclude this section by giving a tentative physical interpretation of the two kinds of negation. For this interpretation, it is important to think of an element $\ps S\in\Subcl{\Sig}$ as a collection of local propositions $\ps S_V$ (resp. $\hP_{\ps S_V}$), one for each context $V$. Moreover, if $V'\subset V$, then the local proposition represented by $\ps S_{V'}$ is coarser than the local proposition represented by $\ps S_V$.
Let $\ps S\in\Subcl{\Sig}$ be a clopen subobject, and let $\neg\ps S$ be its Heyting complement. As shown in Prop. \ref{Prop_HeytingNegLocal}, the local expression for components of $\neg\ps S$ is given by
\begin{equation}
\hP_{(\neg\ps S)_V}=\hat 1-\bjoin_{V'\in m_V}\hP_{\ps S_{V'}},
\end{equation}
where $m_V$ is the set of all minimal contexts contained in $V$. The projection $\hP_{(\neg\ps S)_V}$ is always smaller than or equal to $\hat 1-\hP_{\ps S_V}$, since $\hP_{\ps S_{V'}}\geq\hP_{\ps S_V}$ for all $V'\in m_V$. For the Heyting negation of the local proposition in the context $V$, represented by $\ps S_V$ or equivalently by the projection $\hP_{\ps S_V}$, one has to consider all the coarse-grainings of this proposition to minimal contexts (which are the `maximal' coarse-grainings). The Heyting complement $\neg\ps S$ is determined at each stage $V$ as the complement of the join of all the coarse-grainings $\hP_{\ps S_{V'}}$ of $\hP_{\ps S_V}$.
In other words, the component of the Heyting complement $\neg\ps S$ at $V$ is not simply the complement of $\ps S_V$, but the complement of the disjunction of all the coarse-grainings of this local proposition to all smaller contexts. The coarse-grainings of $\ps S_V$ are specified by the clopen subobject $\ps S$ itself.
The component of the co-Heyting complement $\sim\ps S$ at a context $V$ is given by
\begin{equation}
\hP_{(\sim\ps S)_V}=\bjoin_{\tilde V\in M_V}(\deo_{\tilde V,V}(\hat 1-\hat P_{\ps S_{\tilde V}})),
\end{equation}
where $M_V$ is the set of maximal contexts containing $V$. The projection $\hP_{(\sim\ps S)_V}$ is always larger than or equal to $\hat 1-\hP_{\ps S_V}$, as was argued in the proof of Prop. \ref{Prop_CoHeytingNegLocal}. This means that the co-Heyting complement $\sim\ps S$ has a component $(\sim\ps S)_V$ at $V$ that may overlap with the component $\ps S_V$, hence the corresponding local propositions are not mutually exclusive in general. Instead, $\hP_{(\sim\ps S)_V}$ is the disjunction of all the coarse-grainings of complements of (finer, i.e., stronger) local propositions at contexts $\tilde V\supset V$.
The co-Heyting negation hence gives local propositions that for each context $V$ take into account all those contexts $\tilde V$ from which one can coarse-grain to $V$. The component $(\sim\ps S)_V$ is defined in such a way that all the stronger local propositions at maximal contexts $\tilde V\supset V$ are complemented in the usual sense, i.e., $\hP_{(\sim\ps S)_{\tilde V}}=\hat 1-\hP_{\ps S_{\tilde V}}$ for all maximal contexts $\tilde V$. At smaller contexts $V$, we have some coarse-grained local proposition, represented by $\hP_{(\sim\ps S)_V}$, that will in general not be disjoint from (i.e., mutually exclusive with) the local proposition represented by $\hP_{\ps S_V}$.
\section{Conclusion and outlook} \label{Sec_Conclusion}
Summing up, we have shown that to each quantum system described by a von Neumann algebra $\N$ of physical quantities one can associate a (generalised) quantum state space, the spectral presheaf $\Sig$, together with a complete bi-Heyting algebra $\Subcl{\Sig}$ of clopen subobjects. Elements $\ps S$ can be interpreted as families of local propositions, where `local' refers to contextuality: each component $\ps S_V$ of a clopen subobject represents a proposition about the value of a physical quantity in the context (i.e., abelian von Neumann subalgebra) $V$ of $\N$. Since $\ps S$ is a subobject, there is a built-in form of coarse-graining which guarantees that if $V'\subset V$ is a smaller context, then the local proposition represented by $\ps S_{V'}$ is coarser than the proposition represented by $\ps S_V$.
The map called outer daseinisation of projections (see Def. \ref{Def_OuterDas}) is a convenient bridge between the usual Hilbert space formalism and the new topos-based form of quantum logic. Daseinisation maps a proposition of the form ``$\Ain\De$'', represented by a projection $\hP$ in the complete orthomodular lattice $\PN$ of projections in the von Neumann algebra $\N$, to an element $\ps\deo(\hP)$ of the bi-Heyting algebra $\Subcl{\Sig}$.
We characterised the two forms of negation arising from the Heyting and the co-Heyting structure on $\Subcl{\Sig}$ by giving concrete stagewise expressions (see Props. \ref{Prop_HeytingNegLocal} and \ref{Prop_CoHeytingNegLocal}), considered double negation and characterised Heyting regular elements of $\Subcl{\Sig}$ (Prop. \ref{Prop_HeytingReg}) as well as co-Heyting regular elements (Prop. \ref{Prop_CoHeytReg}). It turns out that daseinisation maps projections into Heyting regular and co-Heyting regular elements of the bi-Heyting algebra of clopen subobjects.
The main thrust of this article is to replace the standard algebraic representation of quantum logic in projection lattices of von Neumann algebras by a better behaved form based on bi-Heyting algebras. Instead of having a non-distributive orthomodular lattice of projections, which comes with a host of well-known conceptual and interpretational problems, one can consider a complete bi-Heyting algebra of propositions. In particular, this provides a distributive form of quantum logic. Roughly speaking, a non-distributive lattice with an orthocomplement has been traded for a distributive one with two different negations.
We conclude by giving some open problems for further study:
\begin{itemize}
\item [(a)] It will be interesting to see how far the constructions presented in this article can be generalised beyond the case of von Neumann algebras. A generalisation to complete orthomodular lattices is immediate, but more general structures used in the study of quantum logic(s) remain to be considered.
\item [(b)] Bi-Heyting algebras are related to bitopological spaces, see \cite{BBGK10} and references therein. But the spectral presheaf $\Sig$ is not a topological (or bitopological) space in the usual sense. Rather, it is a presheaf which has no global elements. Hence, there is no direct notion of points available, which makes it impossible to define a set underlying the topology (or topologies). Generalised notions of topology like frames will be useful to study the connections with bitopological spaces.
\item [(c)] All the arguments given in this article are topos-external. There is an internal analogue of the bi-Heyting algebra $\Subcl{\Sig}$ in the form of the power object $P\ps O$ of the so-called outer presheaf, see \cite{DI12}, so one can study many aspects internally in the topos $\SetVNop$ associated with the quantum system. This also provides the means to go beyond propositional logic to predicate logic, since each topos possesses an internal higher-order intuitionistic logic.
\end{itemize}
\textbf{Acknowledgements.} I am very grateful to the ASL, and to Reed Solomon, Valentina Harizanov and Jennifer Chubb personally, for giving me the opportunity to organise a Special Session on ``Logic and Foundations of Physics'' for the 2010 North American Meeting of the ASL, Washington D.C., March 17--20, 2010. I would like to thank Chris Isham and Rui Soares Barbosa for discussions and support. Many thanks to Dan Marsden, who read the manuscript at an early stage and made some valuable comments and suggestions. The anonymous referee also provided some very useful suggestions, which I incorporated. Finally, Dominique Lambert's recent talk at \textsl{Categories and Physics 2011} at Paris 7 served as an eye-opener on paraconsistent logic (and made me lose my fear of contradictions ;-) ).
\end{document} |
\begin{document}
\title{A computer-assisted proof of the Barnette--Goodey conjecture: Not only fullerene graphs are Hamiltonian.}
\begin{abstract}
Fullerene graphs, i.e., 3-connected planar cubic graphs with pentagonal and hexagonal faces, are conjectured to be Hamiltonian. This is a special case of a conjecture of Barnette and Goodey, stating that 3-connected cubic planar graphs with faces of size at most 6 are Hamiltonian. We prove the conjecture.
\end{abstract}
\section{Introduction}
Tait conjectured in 1880 that cubic polyhedral graphs (i.e., 3-connected planar cubic graphs) are Hamiltonian. The first counterexample to Tait's conjecture was found by Tutte in 1946; later many others were found, see Figure \ref{fig:tutte}. Had the conjecture been true, it would have implied the Four-Color Theorem.
However, each known non-Hamiltonian cubic polyhedral graph has at least one face of size 7 or more \cite{ABH,zaks}. It was conjectured that all cubic polyhedral graphs with maximum face size at most 6 are Hamiltonian. In the literature, the conjecture is usually attributed to Barnette (see, e.g., \cite{mal}), however, Goodey \cite{good} stated it in an informal way as well.
This conjecture covers in particular the class of fullerene graphs, 3-connected cubic planar graphs with pentagonal and hexagonal faces only. Hamiltonicity was verified for all fullerene graphs with up to 176 vertices \cite{ABH}.
Later on, the conjecture in the general form was verified for all graphs with up to 316 vertices \cite{bgm}.
On the other hand, cubic polyhedral graphs having only faces of sizes 3 and 6 or 4 and 6 are known to be Hamiltonian \cite{good,good2}.
\begin{figure}
\caption{Tutte's first example of a non-Hamiltonian cubic polyhedral graph (left); one of minimal examples on 38 vertices (right).}
\label{fig:tutte}
\end{figure}
Jendrol$\!$' and Owens proved that the longest cycle of a fullerene graph of order $n$ covers at least $4n/5$ vertices \cite{JO}, the bound was later improved to $5n/6-2/3$ by Kr\' al' et al.~\cite{kral} and to $6n/7+2/7$ by Erman et al.~\cite{erman}.
Maru\v{s}i\v{c} \cite{mar} proved that the fullerene graph obtained from another fullerene graph with an odd number of faces by the so-called leapfrog operation (truncation of the dual; replacing each vertex by a hexagonal face) is Hamiltonian. In fact, a Hamiltonian cycle in the derived graph corresponds to a decomposition of the original graph into an induced forest and a stable set. We will use a similar technique to prove the conjecture in the general case.
In this paper we prove
\begin{theorem}
Let $G$ be a 3-connected planar cubic graph with faces of size at most 6. Then $G$ is Hamiltonian.
\label{th:main}
\end{theorem}
In the next sections, we reduce the main theorem to Theorem \ref{th:bar} and further to Theorem \ref{th:find} and we introduce terminology and techniques used in the proof of Theorem \ref{th:find}.
\section{Preliminaries}
\subsection{First reduction}
\label{sec:pre}
A \emph{Barnette graph} is a 3-connected planar cubic graph with faces of size at most 6, having no triangles and no two adjacent quadrangles.
We reduce Theorem \ref{th:main} to the case of Barnette graphs:
\begin{theorem}
Let $G$ be a Barnette graph on at least 318 vertices. Then $G$ is Hamiltonian.
\label{th:bar}
\end{theorem}
\begin{lemma}
Theorem \ref{th:bar} implies Theorem \ref{th:main}.
\end{lemma}
Proof. Suppose that Theorem \ref{th:bar} holds. Let $G$ be a smallest counterexample to Theorem \ref{th:main}. We know that $G$ has at least 318 vertices, since Theorem \ref{th:main} has already been verified for all cubic planar graphs with faces of size at most 6 on at most 316 vertices \cite{bgm}. (The number of vertices of a cubic graph is always even.)
Assume $f=v_1v_2v_3$ is a triangle in $G$. If one of the faces adjacent to $f$ is a triangle, then, by 3-connectivity, $G$ is (isomorphic to) $K_4$, a Hamiltonian graph.
Therefore, all the three faces adjacent to $f$ are of size at least $4$. Let $G_1$ be the graph obtained from $G$ by replacing the triangle $v_1v_2v_3$ by a single vertex $v$. It is easy to see that $G_1$ is a 3-connected cubic planar graph with faces of size at most 6; since $G_1$ has fewer vertices than $G$, it is Hamiltonian by the minimality of $G$, and every Hamiltonian cycle of $G_1$ can be extended to a Hamiltonian cycle of $G$, see Figure \ref{fig:easy} for illustration.
From this point on we may assume that $G$ contains no triangles.
Let $f_1$ and $f_2$ be two adjacent faces of size 4 in $G$. Let $v_1$ and $v_2$ be the vertices they share; let $f_1=v_1v_2u_3u_4$, let $f_2=v_1v_2w_3w_4$. We denote by $f_3$ (resp. $f_4$) the face incident to $u_3$ and $w_3$ ($u_4$ and $w_4$, respectively). If both $f_3$ and $f_4$ are quadrangles, then, by 3-connectivity, $G$ is the graph of a cube, which is Hamiltonian. Suppose $d(f_4)\ge 5$ and $d(f_3)=4$. Let $G_2$ be a graph obtained from $G$ by collapsing the faces $f_1$, $f_2$, $f_3$ to a single vertex. Again, $G_2$ is a 3-connected cubic planar graph with faces of size at most 6, moreover, every Hamiltonian cycle of $G_2$ can be extended to a Hamiltonian cycle of $G$, see Figure \ref{fig:easy}.
\begin{figure}
\caption{A triangle, as well as three quadrangles sharing a vertex, can be reduced to a single vertex.
}
\label{fig:easy}
\end{figure}
Finally, suppose that both $f_3$ and $f_4$ are of size at least 5. We remove the vertices $v_1$ and $v_2$, identify $u_3$ with $w_3$ and $u_4$ with $w_4$; in this way we obtain a graph $G_3$. It can be verified that $G_3$ is a 3-connected cubic planar graph with all the faces of size at most 6, unless $G$ is the 12-vertex graph obtained from the cube by replacing two adjacent vertices by triangles, which is impossible since $G$ has no triangles. Again, every Hamiltonian cycle of $G_3$ can be extended to a Hamiltonian cycle of $G$, as seen in Figure \ref{fig:easy2}.
$\square$
\begin{figure}
\caption{A pair of adjacent quadrangles can be reduced to a single edge.
}
\label{fig:easy2}
\end{figure}
\subsection{Cyclic edge-connectivity of Barnette graphs}
Let $G$ be a graph. For a set of vertices $X$, we denote $G[X]$ the subgraph of $G$ induced by $X$.
For a set of vertices $X$, $\emptyset \ne X \ne V(G)$, the set of edges of $G$
having exactly one end-vertex in $X$ form a cut-set of $G$. An edge-cut $(X,Y)$, where $Y=V(G)\setminus X$, is \emph{cyclic} if both $G[X]$ and $G[Y]$ contain a cycle. Finally, a graph is cyclically $k$-edge-connected if it has no cyclic edge-cuts of size smaller than $k$.
\begin{lemma}
Let $G$ be a Barnette graph. Then $G$ is cyclically $4$-edge-connected.
\label{le:c4ec}
\end{lemma}
Proof. Suppose that $G$ contains a cyclic $3$-edge-cut $(X,Y)$. Choose $X$ inclusion-wise minimal. It is easy to see that the cut-edges are pairwise non-adjacent. Let $x_1$, $x_2$, $x_3$ be the vertices of $X$ incident to the cut-edges. We prove that they are pairwise non-adjacent: Suppose that two of them, say $x_1$ and $x_2$, are adjacent. Then, by minimality of $X$, $X^\prime=X\setminus\{x_1,x_2\}$ is acyclic with $(X^\prime,V(G)\setminus X^\prime)$ being a 3-edge-cut, and hence, $|X^\prime|=1$, $X^\prime=\{x_3\}$, and thus $G[X]$ is a triangle, which is impossible in a Barnette graph.
Let $y_i$ be the other endvertex of the cut-edge incident to $x_i$, $i=1,2,3$. We prove that these three vertices are also pairwise non-adjacent:
Since $G$ has no triangles, $G[\{y_1,y_2,y_3\}]$ has at most two edges. If it had exactly two edges, then $G$ would contain a 2-edge-cut, which is impossible. Suppose now that $y_1$ and $y_2$ are adjacent, but $y_3$ is not adjacent to any of them. Each of the two faces incident to the edge $x_3y_3$ has at least three incident vertices in both $X$ and $Y$, therefore, it is a hexagon, and there are exactly three incident vertices in both $X$ and $Y$. Let $z_i$ be the common neighbor of $y_3$ and $y_i$, $i=1,2$. Then $z_1$ and $z_2$ are adjacent, otherwise there would be a 2-edge-cut in $G$. But then $y_3z_1z_2$ is a triangle in $G$, a contradiction.
As $y_1$, $y_2$, $y_3$ are pairwise non-adjacent, for each face incident to any cut-edge, there are at least three incident vertices in both $X$ and $Y$, therefore, each such face is a hexagon having three incident vertices in both $X$ and $Y$. Let $x_{ij}$ be the common neighbor of $x_i$ and $x_j$, $1\le i < j \le 3$. By minimality of $X$, $X^\prime = X\setminus \{x_1,x_2,x_3,x_{12},x_{13},x_{23}\}$ is a single vertex, and so $G[X]$ is the union of three 4-faces pairwise adjacent to each other, which is impossible in a Barnette graph.
$\square$
\subsection{Goldberg vectors, Coxeter coordinates, and nanotubes}
Let $f_1$ and $f_2$ be two faces of an infinite hexagonal grid $H$. Then there is a (unique) translation $\phi$ of $H$ that maps $f_1$ to $f_2$. The vector $\vec{u}$ defining $\phi$ can be expressed as an integer combination of two \emph{unit vectors} -- those that define translations mapping a hexagon to an adjacent one. Out of the six possible unit vectors, we choose a pair $\vec{u}_1,\vec{u}_2$ making a $60^\circ$ angle such that $f_2$ is inside this angle starting from $f_1$. Then the coordinates $(c_1,c_2)$ of $\vec{u} = c_1\vec{u}_1 + c_2\vec{u}_2$ are non-negative integers, called the \emph{Coxeter coordinates} of $\phi$ \cite{cox}.
We may always assume that $c_1\ge c_2$. The pair $(c_1,c_2)$ determines the mutual position of a pair of hexagons in a hexagonal grid, it is also called a \emph{Goldberg vector}. Observe that, for example, $(1,0)$ corresponds to a pair of adjacent faces, $(1,1)$ corresponds to a pair of non-adjacent faces with an edge connecting them (and thus having two distinct common neighboring faces), whereas $(2,0)$ corresponds to a pair of non-adjacent faces with two paths of length 2 connecting them (and thus sharing a single common neighboring face), etc.
The Coxeter coordinates are used to define nanotubical graphs in the following way:
Let $(c_1,c_2)$ be a pair of integers with $c_1\ge c_2$. Fix a pair of unit vectors $\vec{u}_1$ and $\vec{u}_2$ making a $60^\circ$ angle. A graph obtained from an infinite hexagonal grid by identifying objects (vertices, edges, and faces) whose mutual position is (an integer multiple of) the vector $c_1\vec{u}_1+c_2\vec{u}_2$ is the \emph{infinite nanotube} of \emph{type} $(c_1,c_2)$.
If $c_1+c_2\le 2$ then the infinite nanotube is not 3-connected. Since nanotubes with $c_1+c_2=3$ contain cyclic 3-edge-cuts and Barnette graphs are cyclically 4-edge-connected, we will only be interested in nanotubes with $c_1+c_2\ge 4$.
Let $N$ be an infinite nanotube of type $(c_1,c_2)$. Let $f_1$ and $f_2$ be two hexagons of the hexagonal grid $H$ at mutual position $(c_1,c_2)$ corresponding to the same hexagon $f$ of $N$. Let $P$ be a dual path of length $c_1+c_2$ connecting the vertices $f_1^*$ and $f_2^*$ in $H^*$. Then the edges of $N$ corresponding to the edges of $P$ form a cyclic edge-cut in $N$ of cardinality $c_1+c_2$. A cyclic sequence of hexagonal faces of $N$ corresponding to the vertices of $P$ is called a \emph{ring} in $N$.
A finite 2-connected subgraph of an infinite nanotube is an \emph{open-ended nanotube} if it contains at least one ring. A Barnette graph is a \emph{nanotube} if it contains an open-ended nanotube of some type as a subgraph.
Observe that the same graph may be considered as a nanotube of more than one type.
Let $G$ be a nanotube. We call a \emph{cap} either of the two inclusion-wise minimal 2-connected subgraphs of $G$ that can be obtained as a component of a cyclic edge-cut defined by a set of edges intersecting a line perpendicular to the vector defining the corresponding open-ended nanotube. See Figures \ref{fig:50tube}, \ref{fig:33tube}, and \ref{fig:caps} for illustration.
\begin{lemma}
Let $G$ be a Barnette graph which is a nanotube of type $(p_1,p_2)$ with $p_1+p_2=4$. Then $(p_1,p_2)=(4,0)$.
\end{lemma}
We omit the details of the proof, as it is similar to the proof of Lemma \ref{le:c4ec}: It suffices to prove that every (potential) cap of a nanotube of type $(3,1)$ or $(2,2)$ contains a triangle or a pair of adjacent quadrangles.
\begin{lemma}
Let $G$ be a Barnette graph which is a nanotube of type $(p_1,p_2)$ with $(p_1,p_2)\in \{(4,0), (5,0),(4,1),(5,1),(3,2),(4,2),(3,3),(4,3)\}$. Then $G$ is Hamiltonian.
\label{le:smalltubes}
\end{lemma}
Proof. We may suppose that $G$ has at least $318$ vertices (at least 161 faces). Since the caps of the tube are of bounded size (at most 5, 10, 6, 11, 5, 10, 10, and 14 faces, respectively), the tubical part of $G$ contains a large number of disjoint rings.
We provide a construction of a Hamilton cycle in such graphs: First, we find a pair of paths covering the vertices of the tubical part of $G$; then, we verify that for each possible cap it is always possible to connect the two paths in a way that all the vertices of the cap are covered as well.
In a nanotube of type $(p,0)$, $p\ge 4$, for each $p$-edge-cut corresponding to a ring, we construct the two paths traversing the tube in such a way that each path contains one cut-edge incident to the same hexagonal face. Let us call this hexagon a \emph{transition face}.
For two consecutive rings, the transition faces are adjacent and once the transition face is fixed for one ring, we are free to choose any of the two adjacent hexagons in the next one to be the transition face, see Figure \ref{fig:50tube} for illustration.
To complete the proof for $(4,0)$- and for $(5,0)$-nanotubes, it suffices to verify that for every possible cap, there exists a path covering all the vertices of the cap leaving the cap by two edges adjacent to the same hexagonal face of the first ring of the tube. Since the tubical part of $G$ is sufficiently long, we can choose a transition face in the first and the last ring of hexagons regardless of the relative position of the two caps.
\begin{figure}
\caption{Two ways to cover the $2p$ vertices separated by two consecutive cyclic $p$-edge-cuts in a $(p,0)$-nanotube by two paths (top left for $p=5$). A path joining two consecutive pending edges covering all the vertices, for every possible cap of $(p,0)$-nanotubes for $p=4$ (top right line) and for $p=5$ (bottom line).}
\label{fig:50tube}
\end{figure}
For nanotubes of type $(3,3)$, the construction is described in Figure \ref{fig:33tube}.
\begin{figure}
\caption{For each possible cap of a $(3,3)$-nanotube, a path leaving the cap by a prescribed pair of edges is given (first two rows). For the last cap, we added three hexagons of the tube to make the construction work. To connect the two caps and to cover the tubical part of the graph, it suffices to combine an appropriate number of the first two patterns of the last row (and/or their mirror images) and finish by the third one.}
\label{fig:33tube}
\end{figure}
For nanotubes of type $(p_1,p_2)$ with $p_1>p_2>0$, we provide a repetitive pattern to cover the tubical part (see Figure \ref{fig:tubes}) and, for every cap and for every position of the cap with respect to the pattern, a path covering the vertices of the cap (see Figure \ref{fig:caps} for the first two types of nanotubes; we omit the details for the remaining three types).
$\square$
\begin{figure}
\caption{Two paths covering all the vertices of a (potentially infinite) open-ended nanotube of type $(3,2)$, $(4,1)$, $(5,1)$, $(4,2)$, and $(4,3)$, respectively. For each end of the tube, the two dashed lines separate the smallest period of the covering.}
\label{fig:tubes}
\end{figure}
\begin{figure}
\caption{For every cap of a nanotube of types $(3,2)$ (first two columns) and $(4,1)$ (the rest), and for every position of the cap relative to the two paths covering the tubical part of the graph, a completion of the Hamilton cycle in the cap is given. In the first row, the caps are drawn together with the first ring of the tube.}
\label{fig:caps}
\end{figure}
\subsection{Second reduction}
Let $H$ be a plane cubic graph. We denote by $H^\parallel$ the $6$-regular multigraph obtained from $H$ by replacing each edge by a pair of parallel edges, equipped with the following black-and-white face-coloring: We color the 2-gons between pairs of parallel edges white and we color the faces of $H^\parallel$ corresponding to the faces of $H$ black. It is easy to see that this is a proper face-coloring of $H^\parallel$.
Let $G$ be a Barnette graph and let $M$ be a perfect matching of $G$. Then $F=E(G)\setminus M$ is a $2$-factor of $G$. A hexagonal face of $G$ incident to three edges of $M$ is called \emph{resonant}.
There is a canonical face-coloring of $G$ with two colors, say black and white, such that each edge of $F$ is incident to one black and one white face. Let $h$ be a white resonant hexagon. Since it is incident to three edges from $M$, the colors of its neighboring faces are alternating black and white.
We transform $F$ into a $6$-regular plane pseudograph in the following way:
First, inside each white resonant hexagon $h$ we introduce a new vertex $v_h$. We remove the three edges incident to $h$ from $F$ and we replace them by six new edges, joining $v_h$ to all the six vertices incident to $h$. Each of the newly created triangles receives the color of the corresponding face adjacent to $h$.
This way we obtain a black-and-white face-colored plane graph with two types of vertices: vertices of degree 2 are the vertices of the underlying Barnette graph, vertices of degree 6 correspond to white resonant hexagons.
Finally, we suppress all vertices of degree 2. This operation may create loops, parallel edges, and even circular edges incident to no vertex, see Figure \ref{fig:h6} for illustration.
Let $G^M$ be the resulting black-and-white face-colored plane 6-regular pseudograph.
\begin{figure}
\caption{An example of a black-and-white face-colored 6-regular pseudograph (right) corresponding to a $2$-factor of a Barnette graph (left).}
\label{fig:h6}
\end{figure}
A $2$-factor $F$ is called \emph{odd} if it consists of an odd number of (disjoint) cycles; otherwise it is \emph{even}. The same applies to the corresponding perfect matching.
A $2$-factor $F$ (as well as the corresponding perfect matching $M = E(G)\setminus F$) is called \emph{simple} if $G^M$ has no circular edges and $G^M\cong H^\parallel$ for some cubic planar graph $H$. If this is the case, $H$ is called the \emph{residual graph}.
\begin{lemma}
Let $F$ be a simple $2$-factor of a Barnette graph $G$. Let $n$ be the number of vertices of the corresponding residual graph $H$. If $F$ is odd, then $n=4k+2$ for some $k\ge 1$; otherwise $n=4k$ for some $k\ge 1$.
\end{lemma}
Proof. The number of vertices of a residual graph is always even, since it is a cubic graph. Moreover, the number of cycles in $F$, say $c$, is equal to the number of faces of the residual graph. By Euler's formula,
$$c = 2+|E(H)| - |V(H)| = 2+\frac{3n}2-n = \frac{n+4}{2},$$
so the claim follows immediately.
We will make use of the following classical result:
\begin{theorem}[Payan and Sakarovitch \cite{PS}]
Let $H$ be a cubic graph on $n=4k+2$ vertices ($k\ge 1$). If $H$ is cyclically $4$-edge-connected, then $V(H)$ admits a partition into two sets, say $B$ and $W$, such that $H[B]$ is a stable set and $H[W]$ is a tree.
\label{th:pyber}
\end{theorem}
Observe (by double-counting white-white and black-white edges) that the divisibility condition is a necessary condition for such a partition to exist.
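For completeness, the double count runs as follows: if $(B,W)$ is such a partition of a cubic graph $H$ on $n$ vertices, then, since $H[B]$ is a stable set, exactly $3|B|$ edges of $H$ join $B$ to $W$ and no edge has both ends in $B$, while $H[W]$, being a tree, contains exactly $|W|-1$ edges. Counting all edges of $H$,
$$
\frac{3n}{2}=3|B|+|W|-1=3(n-|W|)+|W|-1,
$$
hence $|W|=\frac{3n-2}{4}$ and $|B|=\frac{n+2}{4}$, which is possible only if $n\equiv 2\pmod 4$.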
That's why we will only be interested in odd $2$-factors.
\begin{lemma}
Let $G$ be a Barnette graph and let $M$ be an odd simple perfect matching of $G$. If the residual graph is cyclically $4$-edge-connected, then $G$ is Hamiltonian.
\end{lemma}
Proof. Let $H$ be a cyclically $4$-edge-connected cubic planar graph on $4k+2$ vertices ($k\ge 1$) such that $G^M=H^\parallel$. Let $F=E(G)\setminus M$.
Recall that vertices of $H$ correspond to white resonant hexagons in $G$ with respect to a fixed canonical face-coloring induced by $F$.
Let $(B,W)$ be a partition of $V(H)$ into an induced (black) stable set $B$ and an induced (white) tree $W$ given by Theorem \ref{th:pyber}.
\begin{figure}
\caption{Clockwise, starting from upper left: An example of a simple $2$-factor $F$ of a Barnette graph $G$; the corresponding planar 6-regular pseudograph $G^M$ which is a double of a cyclically 4-edge-connected cubic graph $H$; a decomposition of $H$ into a (black) stable set and a (white) induced tree; the corresponding Hamilton cycle in $G$.
\label{fig:howItWorks}
\end{figure}
We transform the $2$-factor $F$ and the black-and-white face-coloring of $G$ in the following way:
For each resonant hexagon $h$ corresponding to a black vertex $b$ of $H$, replace the three edges from $F$ incident to $h$ in $G$ by the other three edges; recolor the hexagon $h$ black. Since $B$ induces a stable set in $H$, this operation can be carried out independently for all black vertices of $H$ at once. For each such vertex, the number of edges from $F$ incident to any vertex of $G$ remains unchanged, therefore, $F$ becomes a $2$-factor of $G$, say $F^\prime$.
We claim that it consists of a single cycle. To prove that, it suffices to observe that the graph $(V(G),F^\prime)$ has a single white face (as $H[W]$ is connected) and a single black face (as $H[W]$ is acyclic). See Figure \ref{fig:howItWorks} for illustration.
$\square$
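The switching step used in this proof is easy to phrase mechanically. The following Python fragment is a purely illustrative sketch (it is not the code of Section \ref{sec:com}, and all identifiers are ad hoc): a $2$-factor is recorded as a set of edges, switching it on the hexagons corresponding to the black stable set amounts to a symmetric difference with their boundaries, and the result is tested for being a single cycle.
\begin{verbatim}
def norm(e):
    return tuple(sorted(e))

def switch(F, hexagon_boundaries):
    # For a resonant hexagon exactly three of its six boundary edges lie in F,
    # so the symmetric difference swaps them for the other three.  The chosen
    # hexagons form a stable set, hence the switches are independent.
    F = {norm(e) for e in F}
    for boundary in hexagon_boundaries:
        F ^= {norm(e) for e in boundary}
    return F

def is_single_cycle(F, vertices):
    # does the edge set F form one cycle through all the given vertices?
    adj = {v: [] for v in vertices}
    for u, v in F:
        adj[u].append(v)
        adj[v].append(u)
    if any(len(nb) != 2 for nb in adj.values()):
        return False
    start = next(iter(vertices))
    prev, cur, seen = None, start, 1
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur = cur, nxt
        if cur == start:
            return seen == len(vertices)
        seen += 1

# tiny sanity check on a 4-cycle
print(is_single_cycle({(1, 2), (2, 3), (3, 4), (4, 1)}, {1, 2, 3, 4}))  # True
\end{verbatim}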
It remains to prove that such a situation occurs for at least one perfect matching for any Barnette graph not known to be Hamiltonian yet.
\begin{theorem}
Let $G$ be a Barnette graph on at least $318$ vertices. Then there exists an odd simple perfect matching $M$ of $G$ such that the residual graph
$H$ is cyclically $4$-edge-connected, unless $G$ is a nanotube of type $(4,0)$, $(5,0)$, $(4,1)$, $(5,1)$, $(3,2)$, $(4,2)$, $(3,3)$, or $(4,3)$.
\label{th:find}
\end{theorem}
In the rest of the paper, we prove Theorem \ref{th:find}.
We describe the general approach in Section \ref{sec:pro}, and we specify the computer-assisted part in Section \ref{sec:com}.
We claim (without proof) that in order to prove Theorem \ref{th:find} it suffices to consider a simple odd $2$-factor maximizing the number of white resonant hexagons.
\subsection{Generalized 2-factors}
We will call a \emph{$2^*$-factor} of a Barnette graph $G$ any spanning subgraph $F$ of $G$ such that
each component of $F$ is a connected regular graph of degree 1 or 2 -- an isolated edge or a cycle. For a 2$^*$-factor $F$ of a Barnette graph $G$, let $F^{(0)}$ be the set of isolated edges of $F$; let $G^{(2)}$ be a plane graph obtained from $G$ by replacing each edge of $F^{(0)}$ by a 2-gon; let $F^{(2)}$ be the set of edges of $G^{(2)}$ corresponding to those from $F$. Then $F^{(2)}$ is a $2$-factor of $G^{(2)}$ in the common (strict) sense.
Given a 2$^*$-factor $F$ of a Barnette graph $G$, there are two canonical black-and-white face-colorings of $G^{(2)}$ (complementary to each other) with the following property: an edge $e$ of $G^{(2)}$ is incident to a white and a black face if and only if $e$ belongs to $F^{(2)}$ (otherwise $e$ is incident to two faces of the same color).
A $2^*$-factor $F$ of a Barnette graph $G$ is called \emph{quite good} if for each of the two canonical black-and-white face-colorings of $G^{(2)}$ induced by $F^{(2)}$ the 2-gons corresponding to the edges of $F^{(0)}$ all have the same color. Given a quite good $2^*$-factor of a Barnette graph $G$, we will always assume that the canonical coloring of $G^{(2)}$ in which all the 2-gons of $G^{(2)}$ are black has been fixed.
A quite good $2^*$-factor $F$ of a Barnette graph $G$ is called \emph{good} if, after having fixed a planar embedding of $G$ such that the outer face is a white one, no cycle of $F$ is inside another.
Observe that given a good $2^*$-factor $F$ of $G$, for any planar embedding of $G$ with a white outer face, the set of faces inside a fixed cycle $C$ of $F$ is always the same and these faces correspond to a sub-tree of the dual graph $G^*$ (empty if $C$ is a 2-cycle).
\begin{lemma}
Let $F$ be a good $2^*$-factor of a Barnette graph $G$.
Let $f$ be the number of all the faces of $G$, let $q_k$ be the number of non-resonant white faces of size $k$ in $G$ ($k=4,5,6$); let $c$ be the number of components of $F$. Then $q_5$ is even, moreover, $f+q_4+q_5/2+c \equiv 0 \pmod{2}$.
\label{l:paths}
\end{lemma}
Proof. Let $n$ be the number of vertices of $G$, let $f_k$ be the number of all faces of size $k$ in $G$, let $x_k$ be the number of black faces of size $k$ in $G$. Euler's formula yields $n=8+f_5+2f_6$.
If a cycle covers $c_4\ge 0$ quadrangles, $c_5\ge 0$ pentagons, and $c_6\ge 0$ hexagons, its length is $2+2c_4+3c_5+4c_6$: unless the cycle is the doubling of an isolated edge of $F$ (covering no face), the covered faces correspond to a subtree of the dual graph with $c_4+c_5+c_6-1$ edges, and each edge of this subtree shortens the total boundary by $2$.
Clearly, each vertex is covered by exactly one cycle, thus we have
$$
8+f_5+2f_6=n=2c+2x_4+3x_5+4x_6 = 2c+2(f_4-q_4)+3(f_5-q_5)+4x_6,
$$
since only hexagons can be resonant, and thus $f_k=x_k+q_k$ for $k=4,5$. Therefore,
$$
8+q_5+2f_6=2c+2(f_4-q_4)+2(f_5-q_5)+4x_6,
$$ so $q_5$ is even. By dividing by two and rearranging the terms we obtain
$$
4+f_4+f_5+f_6+q_4+q_5/2 +c= 2c+2f_4+2f_5-q_5+2x_6,
$$
and since the right-hand side is even (recall that $q_5$ is even), so is the left-hand side; the claim immediately follows.
$\square$
Let $F$ be a good $2^*$-factor in a Barnette graph $G$.
Let us consider the structure of the graph $G^{(2)}$. We introduce an auxiliary graph $\Gamma = \Gamma_G(F)$, defined in the following way: $V(\Gamma)$ is the set of the white non-resonant faces of $G$ (as of $G^{(2)}$). The edges of $\Gamma$ are defined in the next two paragraphs.
Let $C$ be the facial cycle of a (black) 2-gon $f_0$ in $G^{(2)}$. Let $f_0$ be incident to vertices $u$ and $v$ and adjacent to two (white) faces $f$ and $f^\prime$. Then each of $u$ and $v$ is incident to one more face (which has to be white), say $f_u$ and $f_v$, respectively. Since $f_0$ only shares a vertex with $f_u$ and with $f_v$, the faces $f$ and $f^\prime$ are two consecutive white neighbors of $f_u$ ($f_v$). Therefore, the faces $f_u$ and $f_v$ cannot be resonant. We add the edge $f_uf_v$ to $E(\Gamma)$; we call this type of edge of $\Gamma$ \emph{white}.
Let $C$ be a cycle of $F$ (and of $F^{(2)}$) which is not a facial cycle of a face of $G^{(2)}$. It means that $C$ is a boundary of a union of at least two faces of $G$. We consider every pair of adjacent faces inside $C$. Let $f$ and $f^\prime$ be such a pair of faces. Let $u$ and $v$ be the endvertices of the edge incident to both $f$ and $f^\prime$. Then each of $u$ and $v$ is incident to a third face (which has to be white), say $f_u$ and $f_v$, respectively. The faces $f$ and $f^\prime$ are two consecutive black neighbors of $f_u$ ($f_v$). Therefore, the faces $f_u$ and $f_v$ cannot be resonant. We add the edge $f_uf_v$ to $E(\Gamma)$; we call this type of edge of $\Gamma$ \emph{black}.
Observe that for each edge of $\Gamma$, its endvertices are two faces of $G$ at mutual position $(1,1)$. Each edge of $\Gamma$ covers two vertices of $G$ and these pairs of vertices are pairwise disjoint. Therefore, $\Gamma$ is a planar graph.
Let $f$ be a white pentagon of $G^{(2)}$. It cannot be resonant, so $f$ is a vertex of $\Gamma$. Let $f_1,\dots,f_5$ be the faces adjacent to $f$ (sharing an edge with $f$) in $G^{(2)}$. (Observe that some $f_i$ can be a 2-face: if it is the case, then there is another face $f_i^\prime$ adjacent to $f$ in $G$, and adjacent to $f_i$ in $G^{(2)}$.) Since the size of $f$ is odd, the number of pairs $(f_i,f_{i+1})$ (with $f_6=f_1$) of the same color (both black or both white) has to be odd. If both $f_i$ and $f_{i+1}$ are black, then none of them can be a 2-face, and thus there is a black edge incident to $f$ in $\Gamma$. If both $f_i$ and $f_{i+1}$ are white, then again none of them can be a 2-face, and the vertex incident to $f$, $f_i$, and $f_{i+1}$ is (in $G^{(2)}$) covered by a 2-cycle adjacent both to $f_i$ and $f_{i+1}$, and thus there is a white edge incident to $f$ in $\Gamma$. Altogether, $f$ is a vertex of odd degree in $\Gamma$.
Similarly, for each non-resonant white hexagon $f$, there is an even number of pairs of consecutive adjacent faces of the same color, hence $f$ is a vertex of non-zero even degree in $\Gamma$.
A white quadrangle $f$ is always considered non-resonant. Its degree in $\Gamma$ is also always even; however, it can be equal to 0 if the neighboring faces are colored alternately black and white.
As a result of these local observations, the graph $\Gamma$ can always be edge-decomposed into a set of paths with endvertices at the white pentagons of $G$, a set of cycles, and, possibly, a set of isolated vertices (corresponding to white quadrangles). The number of paths in the decomposition is equal to $q_5/2$, where $q_5$ is the number of white pentagons.
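These local observations only concern vertex degrees in $\Gamma$, so the decomposition can be illustrated on any abstract graph. The following Python sketch (ours, for illustration only; $\Gamma$ is assumed to be given as an abstract list of edges) greedily splits the edge set into open trails, whose endpoints are the odd-degree vertices, and closed trails.
\begin{verbatim}
from collections import defaultdict

def decompose_edges(edges):
    """Greedily split the edge set of a graph into open trails (whose
    endpoints are the odd-degree vertices) and closed trails."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def walk(start):
        # follow unused edges until stuck, deleting them on the way
        trail, cur = [start], start
        while adj[cur]:
            nxt = adj[cur].pop()
            adj[nxt].discard(cur)
            trail.append(nxt)
            cur = nxt
        return trail

    open_trails, closed_trails = [], []
    while True:
        odd = [v for v in adj if len(adj[v]) % 2 == 1]
        if not odd:
            break
        # a maximal trail started at an odd-degree vertex ends at another
        # odd-degree vertex, so two odd degrees disappear per iteration
        open_trails.append(walk(odd[0]))
    for v in list(adj):
        while adj[v]:
            closed_trails.append(walk(v))  # remaining degrees are even
    return open_trails, closed_trails
\end{verbatim}
In the situation above the odd-degree vertices of $\Gamma$ are exactly the $q_5$ white pentagons, so the procedure returns $q_5/2$ open trails.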
\subsection{Structure of Barnette graphs}
Let $G$ be a Barnette graph and let $p_1$ and $p_2$ be two small faces of $G$. Suppose that there exists an induced dual path $P^*$ connecting $p_1$ and $p_2$ passing only through hexagons. If we consider only the faces of $G$ corresponding to $P^*$ and replace the two small faces by hexagons, we obtain a graph with a canonical embedding into an infinite hexagonal grid. The Goldberg vector $(c_1,c_2)$ joining the first and the last hexagon is uniquely determined. We will use this vector to characterize the mutual position of $p_1$ and $p_2$ in $G$.
Observe that the vector of two small faces may depend on the choice of the path joining them, see Figure \ref{fig:ex} for illustration.
\begin{figure}
\caption{An example of a Barnette graph (left). Pentagonal faces are denoted $p_1,\dots,p_{12}$.}
\label{fig:ex}
\end{figure}
Graver \cite{Graver} used Coxeter coordinates to describe the structure of fullerene graphs. His technique may be extended to a full description of Barnette graphs as well in the following way: a given Barnette/fullerene graph $G$ is represented by a planar triangulation $T$, whose vertices represent the small faces of $G$, and each edge $uv$ is labelled with a Goldberg vector representing the mutual position of the faces represented by $u$ and $v$. The angle between two edges incident to the same triangle of $T$ is well defined and is determined by the labels of the three edges forming the triangle. For a vertex of $T$ representing a pentagon (a quadrangle) the angles around it sum up to $5/3\pi = 300^\circ$ ($4/3\pi = 240^\circ$, respectively).
The existence of a triangulation $T$ is guaranteed by a structural theorem of Alexandrov (see e.g.~\cite{DR}, Theorem 23.3.1, or \cite{pak}, Theorem 37.1), which states (in a more general setting) that any Barnette graph can be embedded onto the surface of a convex (possibly degenerate) polyhedron
so that every face is isometric to a regular polygon with unit edge length; it then suffices to triangulate the faces of this polyhedron.
Any spanning tree of $T$ may be used to cut the graph $G$ in order to obtain a graph embeddable into the infinite hexagonal grid, see Figure \ref{fig:ex} for illustration.
We say that a Goldberg vector $\vec{u}=(c_1,c_2)$ is \emph{shorter} than $\vec{u}^\prime=(c_1^\prime, c_2^\prime)$ if and only if the Euclidean length of a segment determined by $\vec{u}$ is shorter than the Euclidean length of a segment determined by $\vec{u}^\prime$ when both embedded into the same hexagonal grid.
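The comparison of Goldberg vectors can be made numerical. The sketch below (ours; it assumes the usual identification of hexagon centres with a triangular lattice whose unit basis vectors meet at a $60^\circ$ angle) computes the Euclidean length of a Goldberg vector and compares two vectors.
\begin{verbatim}
import math

def goldberg_length(c1, c2):
    # squared length of c1*a1 + c2*a2 with |a1| = |a2| = 1 and a
    # 60-degree angle between a1 and a2:
    #   c1^2 + c2^2 + 2*c1*c2*cos(60 deg) = c1^2 + c1*c2 + c2^2
    return math.sqrt(c1 * c1 + c1 * c2 + c2 * c2)

def shorter(u, v):
    """Is the Goldberg vector u shorter than the Goldberg vector v?"""
    return goldberg_length(*u) < goldberg_length(*v)

# for instance, (3, 0) has length 3 while (2, 2) has length sqrt(12)
assert shorter((3, 0), (2, 2))
\end{verbatim}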
Observe that the triangulation representing a Barnette graph is not unique: wherever two adjacent triangles form a convex quadrilateral (once embedded into the hexagonal grid), we may choose the other diagonal of the quadrilateral instead of the existing one as an edge of the triangulation. For example, in the graph depicted in Figure \ref{fig:ex} we could have chosen the edge $p_3p_{10}$ instead of the edge $p_2p_9$, etc.
However, for a triangulation $T$ representing a Barnette graph $G$, the operation switching the diagonals of a convex quadrilateral eventually leads to a triangulation minimal with respect to the sum of lengths of its edges. For example, the triangulation depicted in Figure \ref{fig:ex} is already minimal.
\begin{lemma}
Let $G$ be a Barnette graph, let $T$ be a minimal triangulation representing $G$. Then $T$ has a Hamiltonian path.
\end{lemma}
Proof.
Suppose that $T$ has no Hamiltonian path. Then there exists a set $X$ of vertices such that $T\setminus X$ has at least $|X|+2$ connected components. Since $T$ has at most 12 vertices, $|X|\le 5$.
For each component $C$, the set of vertices of $T\setminus C$ having a neighbor in $C$ contains a cycle of $T$ with all vertices in $X$ (as $T$ is a triangulation). Therefore, the subgraph of $T$ induced by $X$ is a plane graph with $|X|\le 5$ vertices whose faces accommodate the components of $T\setminus X$, hence it has at least $|X|+2$ faces.
However, a planar graph on $x\le 5$ vertices can have at most $\max(2x-4,1)\le 6$ faces (adding edges only increases the number of faces, and a planar triangulation on $x\ge 3$ vertices has exactly $2x-4$ faces; for $x=5$ this is the triangular bipyramid with 6 faces), and $\max(2x-4,1)<x+2$ for every $x\le 5$, a contradiction.
$\square$
Note that the smallest planar graph with the desired properties is the bipyramid over a square, which has 6 vertices and 8 faces.
\begin{lemma}
Let $G$ be a Barnette graph, let $T$ be a minimal triangulation representing $G$. Then either $T$ is Hamiltonian, or $T$ can be transformed to a Hamiltonian trangulation by a single diagonal switch.
\label{le:triang}
\end{lemma}
Proof.
Suppose that $T$ has no Hamiltonian cycle. Then there exists a set $X$ of vertices such that $T\setminus X$ has at least $|X|+1$ connected components. Since $T$ has at most 12 vertices, $|X|\le 5$.
For each component $C$, the set of vertices of $T\setminus C$ having a neighbor in $C$ contains a cycle of $T$ with all vertices in $X$ (as $T$ is a triangulation). Therefore, the subgraph of $T$ induced by $X$ is a plane graph with $|X|\le 5$ vertices and at least $|X|+1$ faces, one for each component of $T\setminus X$.
Since a planar graph on $x\le 4$ vertices has fewer than $x+1$ faces, there is only one such graph: the triangular bipyramid $B$, which has $5$ vertices and $6$ triangular faces. Out of the six components of $T\setminus B$, at least five are singletons; the sixth may possibly be an isolated edge. It means $T$ has five vertices of degree at least 6, six vertices of degree 3, and possibly a vertex of degree 4.
Let $e=uv$ be an edge of $B$. It is incident to two triangles, each incident to a different component of $T\setminus B$. Let $x$ and $y$ be the vertices of $T\setminus B$ such that $uvx$ and $uvy$ are triangles of $T$. If the quadrilateral $uxvy$ is convex, then the triangulation $T^\prime$ obtained from $T$ by switching $uv$ to $xy$ has at most five vertices of degree 3, so $T^\prime$ has to be Hamiltonian.
It remains to consider the case when for each edge $e$ of $B$, the union of the two incident triangles is a non-convex quadrilateral, meaning that at one of the endvertices of $e$, the sum of the angles of the two incident triangles is greater than $180^\circ$. Since $B$ has five vertices and nine edges, there is at least one vertex of $B$ with two (disjoint) pairs of incident triangles whose union is non-convex at that vertex. But then the sum of the angles around this vertex is greater than $360^\circ$, a contradiction.
$\square$
In Figure \ref{fig:bigEx}, an example of a Barnette graph on 322 vertices is depicted, along with the corresponding triangulation and a shortest Hamilton cycle in it.
\begin{figure}
\caption{An example of a Barnette graph on 322 vertices (top left). A triangulation capturing the mutual position of all the small faces with a Hamilton cycle (top right). Another (tubular) drawing of the same graph (bottom); the three edges sticking to the north (to the south) are incident to an omitted vertex at the north (south) pole.}
\label{fig:bigEx}
\end{figure}
\section{Proof of Theorem \ref{th:find}: Finding a 2-factor}
\label{sec:pro}
In this section we explain the general procedure in the case when the small faces of $G$ are far from each other.
We will deal with the case when some small faces of $G$ are close to each other in Section \ref{sec:com}.
\subsection{Phase 1: Cut the graph and fix a coloring}
Let $G$ be a Barnette graph, let $T$ be a Hamiltonian triangulation capturing the mutual position of the small faces of $G$, whose existence is given by Lemma \ref{le:triang}. Let $C_T$ be a Hamiltonian cycle in $T$ such that the sum of the lengths of the corresponding Goldberg vectors is minimal. Then there exists a cycle $C^*$ in $G^*$ including all the small vertices of $G^*$ in the same order as the corresponding vertices of $C_T$.
A cycle in $G^*$ corresponds to an edge-cut in $G$. We cut the graph $G$ along $C^*$. We obtain two graphs, say $G_1$ and $G_2$, containing only hexagons as internal faces, and with semi-edges and partial faces on the boundary.
Both $G_1$ and $G_2$ are subgraphs of the hexagonal grid, hence there is a canonical face coloring using three colors for each of them. We will use colors 1, 2, 3 for one and colors $A$, $B$, $C$ for the other. We color the partial faces in both graphs too.
We choose one color in each graph, say 1 and $A$ (there are 9 color combinations in total), and recolor black all the faces of $G_1$ and $G_2$ colored 1 or $A$; we color white the other faces. (Later we will inspect all the nine colorings.) This gives a black-and-white face-coloring $\phi_i$ inducing a $2$-factor $F_i$ in $G_i$, $i=1,2$.
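For a subgraph of the hexagonal grid the canonical $3$-coloring can be written down explicitly. The sketch below (ours; the coordinates $(c_1,c_2)$ of a hexagon are taken in the same triangular-lattice basis as the Goldberg vectors, which is an assumption about the representation, not part of the construction) produces the face coloring of one of the two pieces and the induced black-and-white coloring.
\begin{verbatim}
def hex_face_color(c1, c2):
    """Canonical proper 3-coloring of the faces of the hexagonal grid.
    Two adjacent hexagons differ by (1,0), (0,1) or (1,-1) up to sign,
    and each of these changes (c1 - c2) mod 3, so adjacent faces always
    receive different colors."""
    return (c1 - c2) % 3

def is_black(c1, c2, chosen):
    """Black-and-white coloring obtained by declaring the color
    'chosen' (one of 0, 1, 2) black and the other two colors white."""
    return hex_face_color(c1, c2) == chosen
\end{verbatim}
Running over the nine pairs of chosen colors for $G_1$ and $G_2$ then corresponds to the nine colorings inspected below.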
Observe that for any choice of a color in $G_i$ ($i=1,2$), the edges incident to one face of each of the other two colors form a matching $M_i$ such that $G_i^{M_i}= H_i^\parallel$, where $H_i$ is the graph whose vertices are the centers of the faces of the other two colors.
We merge the two black-and-white face-colorings $\phi_1$ and $\phi_2$ of $G_1$ and $G_2$, respectively, into an intermediate black-and-white (multi-)face-coloring $\phi^{(i)}$ of $G$ in the natural way: a face not corresponding to a vertex of $C^*$ inherits a color from either $G_1$ or $G_2$; a face which is cut by the cycle $C^*$ is divided into two partial faces, one inheriting a color from $G_1$ and the other from $G_2$, see Figure \ref{fig:ex2} for illustration.
\begin{figure}
\caption{Three of the nine black-and-white colorings of the graph in Figure \ref{fig:ex}.}
\label{fig:ex2}
\end{figure}
\subsubsection{Active and inactive segments}
The cycle $C^*$ can always be decomposed into a sequence of $\ell\le 12$ subpaths $P^*_1,\dots, P^*_{\ell}$ joining consecutive pairs of small vertices. Let us call these subpaths \emph{segments}.
We may suppose that a segment only contains hexagons with a non-empty intersection with the straight line joining the end-vertices of the segment.
For each segment $P_i^*$, the two face-colorings of $G_1$ and $G_2$ meet along $P_i^*$, and there is a unique canonical bijection $\varphi_i:\{1,2,3\}\to\{A,B,C\}$ between the two sets of colors.
If $\varphi_i(1)=A$, then the two black-and-white colorings coincide along $P_i$, and we say that the segment $P_i$ is \emph{inactive}; otherwise it is \emph{active}. Out of the nine colorings, each segment is active in precisely six of them.
For example, the segments $p_{6}p_{7}$ and $p_{12}p_1$ are inactive in all the three colorings depicted in Figure \ref{fig:ex2}, the segment $p_4p_5$ is active in all the three colorings, whereas the segment $p_9p_{10}$ is inactive in the first coloring and active in the other two.
When switching from $P_i^*$ to $P_{i+1}^*$, if the $i$-th small face is a quadrangle, we have $\varphi_i=\varphi_{i+1}$.
If the $i$-th small face is a pentagon, the difference $\varphi_{i+1} \circ \varphi_{i}^{-1}$ is a permutation of the colors $\{A,B,C\}$ such that the color of the pentagon is stable and the two other colors are switched -- a transposition. See Figure \ref{fig:abc} for illustration.
\begin{figure}
\caption{A pentagon always causes a single switch of colors -- the two colors different from its color are switched.}
\label{fig:abc}
\end{figure}
Let $p_i$ be a pentagonal face of $G$ such that the segments $P^*_{i-1}$ and $P^*_i$ meet at $p_i$. Then exactly one of the following happens:
\begin{enumerate}
\item[(i)] if $\varphi_{i-1}(1)=\varphi_i(1)=A$, then both $P^*_{i-1}$ and $P^*_i$ are inactive, $p_i$ generates a switch of $B$ and $C$, thus it is colored $A$ and it is black in both subgraphs;
\item[(ii.a)] if $\varphi_{i-1}(1)=A$ and $\varphi_{i}(1)\ne A$, then $P^*_{i-1}$ is inactive and $P^*_i$ is active, $p_i$ generates a switch of $A$ and $\varphi_{i}(1)$, thus it is colored neither $A$ nor $1$, so it is white in both subgraphs;
\item[(ii.b)] if $\varphi_{i-1}(1)\ne A$ and $\varphi_{i}(1)= A$, then $P^*_{i-1}$ is active and $P^*_i$ is inactive, $p_i$ generates a switch of $A$ and $\varphi_{i-1}(1)$, thus it is colored neither $A$ nor $1$, so it is white in both subgraphs;
\item[(iii.a)] if $\varphi_{i-1}(1)=\varphi_{i}(1)\ne A$, then both $P^*_{i-1}$ and $P^*_i$ are active, $p_i$ generates a switch of $A$ and the third color, thus it is colored $\varphi_{i-1}(1)$, so it is black in $G_1$ and white in $G_2$;
\item[(iii.b)] if $\{\varphi_{i-1}(1),\varphi_{i}(1)\}=\{B,C\}$, then both $P^*_{i-1}$ and $P^*_i$ are active, $p_i$ generates a switch of $B$ and $C$, thus it is colored $A$, so it is white in $G_1$ and black in $G_2$.
\end{enumerate}
In order to transform $\phi^{(i)}$ into a black-and-white face-coloring of $G$ corresponding to a good $2$-factor of $G$, we slightly reroute the cut $C^*$ in the way described in the following subsection.
\subsection{Phase 2: Approximate the cut by $\Gamma$-paths}
Let $P^*_i$ be an active segment, let $\varphi_i(1)=B$. Suppose without loss of generality that $\varphi_i^{-1}(A)=2$.
Then all the faces of $P_i^*$ colored $A$ (and 2) or $1$ (and $B$) are partially black and partially white; both parts of each face of $P_i^*$ colored $C$ and 3 are white.
We approximate the dual path $P^*_i$ by a sequence $Q_i$ of faces colored $C$ and/or 3, each consecutive pair of faces in a mutual position $(1,1)$.
Let $f$ be a white ($C$- and $3$-colored) hexagonal face of $Q_i$. Then among its neighbors, there is a cyclic sub-sequence of $A$- and $B$-faces colored alternately black and white, and another cyclic sub-sequence of $1$- and $2$-faces colored alternately black and white, with the coloring being the opposite of the first one. Therefore, there are exactly two pairs (not necessarily disjoint) of adjacent faces of the same color: each pair is either a black $A$-face adjacent to a black $1$-face, or a white $B$-face adjacent to a white $2$-face. Therefore, $f$ is a white non-resonant hexagon, corresponding to a vertex of degree 2 in the auxiliary graph $\Gamma$ being constructed -- we will call it a $\Gamma$-face.
Let $f$ and $f^\prime$ be two consecutive $\Gamma$-faces. If the two faces adjacent both to $f$ and $f^\prime$ are black, then the two cycles of the $2$-factors in $G_1$ and $G_2$ are merged. If the two faces adjacent both to $f$ and $f^\prime$ are white, then a new 2-cycle of the $2^*$-factor is created. In the first case, the $\Gamma$-edge $ff^\prime$ is black, in the second case it is a white one.
Two consecutive $\Gamma$-edges of $Q_i$ of the same color always form a $180^\circ$ angle; otherwise it would be possible to simplify $Q_i$ by removing a face from it. Similarly, two consecutive $\Gamma$-edges of $Q_i$ of different colors always form an angle of $\pm 120^\circ$.
The resulting structure of $\phi^{(i)}$ along $P^*_i$ is the following: All vertices are covered by cycles of length 6 (single faces), 10 (two adjacent black hexagons, both incident to a black $\Gamma$-edge), or 2 (white $\Gamma$-edges). A $\Gamma$-path $Q_i$ separates the two subgraphs of regular coloring. See Figure \ref{fig:grid1} for illustration.
\begin{figure}
\caption{Several hexagonal patterns meeting along some cut curves (left). As the cutting lines are approximated by $\Gamma$-paths, a good $2^*$-factor is created (right).
}
\label{fig:grid1}
\end{figure}
The first (the last) $\Gamma$-face of $Q_i$ is the pentagon $p_i$ ($p_{i+1}$) if and only if the segment $P^*_{i-1}$ ($P^*_{i+1}$) is inactive; otherwise the first (the last) $\Gamma$-face of $Q_i$ is a hexagon adjacent to $p_i$ ($p_{i+1}$) and it is the last (the first) $\Gamma$-face of $Q_{i-1}$ ($Q_{i+1}$, respectively).
White non-resonant hexagons where two consecutive sequences $Q_{i-1}$ and $Q_i$ meet are the only places where two $\Gamma$-edges of the same color might form a $60^\circ$ angle -- and even then only if they are both incident to the same pentagon.
Let us now describe the structure of $H^{(i)}=H_1\cup H_2$ and of $\Gamma$ explicitly:
The vertices of $H^{(i)}$ are all the vertices corresponding to faces of $G$ that are white in $G_1$ or in $G_2$;
each vertex of $\Gamma$ where two black edges meet corresponds to a 2-vertex in $H^{(i)}$ (the corresponding face of $G$ is a non-resonant white hexagon adjacent to four black faces belonging to two different components of the $2^*$-factor); each vertex of $\Gamma$ where two white edges meet corresponds to a 4-vertex in $H^{(i)}$ (the corresponding face of $G$ is incident to four different components of the $2^*$-factor, including two 2-cycles); each vertex of $\Gamma$ where a black and a white edge meet at a $120^\circ$ angle corresponds to a 3-vertex in $H^{(i)}$ (the corresponding face of $G$ being incident to three different components of the $2^*$-factor: a 2-cycle, a 6-cycle and a 10-cycle).
If there are $q_5$ white pentagons, then $\Gamma$ is composed of $q_5/2$ paths. A white quadrangle is either an isolated vertex of $\Gamma$ (if both incident segments are inactive) or it is an internal vertex of a path (otherwise).
\subsection{Phase 3: Change the parity of the $2^*$-factor}
It follows from Lemma \ref{l:paths} that whenever we want to transform an even $2^*$-factor into an odd one, it suffices either to increase or decrease the number of black quadrangles by 1, or to increase or decrease the number of black pentagons by 2. In other words, it suffices either to change the number of isolated vertices in $\Gamma$ by 1 or change the number of $\Gamma$-paths by 1.
\subsubsection{Changing the parity using a quadrangle}
Let $q$ be a quadrangular face of $G$. For three of the nine colorings of $G_1$ and $G_2$, both segments incident to $q$ are inactive; moreover, for two out of the three $q$ is a white face. In Phase 1, we choose one of these two.
If the good $2^*$-factor obtained in Phase 1 is even, it can be transformed into an odd one by recoloring $q$ black. This way an isolated vertex of $\Gamma$ is transformed into a cycle of length 2, see Figure \ref{fig:quad} for illustration.
\begin{figure}
\caption{We can use a white quadrangle to change the parity of a $2^*$-factor. The times sign marks non-resonant faces -- vertices of $\Gamma$; edges of $\Gamma$ are drawn using a thick grey line.}
\label{fig:quad}
\end{figure}
\subsubsection{Changing the parity using two pentagons}
From this point on we may assume that $G$ has no quadrangular faces -- it is a (fullerene) graph having 12 pentagonal faces.
Suppose first that some pair of consecutive pentagons $p_i$ and $p_{i+1}$ (consecutive along the cut $C^*$) are in the mutual position $(c_1,c_2)$, $c_1\ge c_2\ge 0$, with $3\mid (c_1-c_2)$. Then in the coloring of $G_1$ with colors 1, 2, 3 (and of $G_2$ with $A$, $B$, $C$) the partial faces corresponding to the pentagons $p_i$ and $p_{i+1}$ have the same color. Therefore, for two of the nine colorings the segment $P_i$ joining $p_i$ and $p_{i+1}$ is active whereas the neighboring segments $P_{i-1}$ and $P_{i+1}$ are inactive.
For both such colorings, after Phase 2 there is a $\Gamma$-path with endvertices at $p_i$ and $p_{i+1}$, and the vertex set of this path can be chosen to be the same in both colorings. If this is the case, then each $\Gamma$-edge white in one coloring is black in the other and vice versa. Among the two colorings, we may fix the one where the number of white $\Gamma$-edges is maximised.
We transform the $\Gamma$-path into a $\Gamma$-cycle, increasing the number of black pentagons by 2, in the following way:
For each black $\Gamma$-edge, we recolor both black hexagons forming a black 10-cycle white; then we recolor all faces corresponding to the vertices of the $\Gamma$-path black, including the first and the last one ($p_i$ and $p_{i+1}$). We will denote this operation $O_1$. See Figure \ref{fig:pc} for illustration.
\begin{figure}
\caption{Operation $O_1$: The parity of a $2^*$-factor can be changed by modifying a $\Gamma$-path joining two consecutive pentagons into a $\Gamma$-cycle.}
\label{fig:pc}
\end{figure}
From this point on we may assume that there is no pair of consecutive pentagons with the same color in $G_1$ (or in $G_2$). Then for every pair of consecutive pentagons the nine colorings look as depicted in Figure \ref{fig:9cases}.
\begin{figure}
\caption{A schematic drawing of the position of the $\Gamma$-paths in the neighborhood of two consecutive pentagons of different colors.}
\label{fig:9cases}
\end{figure}
Let $\phi_i^j$ be the angle between the two segments meeting at pentagon $p_i$ in $G_j$, $j=1,2$. Clearly, $\phi_i^1+\phi_i^2=300^\circ$. When following the segments composing the cut in ascending order, say that $G_1$ lies to the left and $G_2$ to the right. If $\phi_i^1 > 150^\circ > \phi_i^2$, then there is a right turn at $p_i$ when switching from $P_{i-1}$ to $P_i$. If $\phi_i^1 < 150^\circ < \phi_i^2$, then there is a left turn at $p_i$ when switching from $P_{i-1}$ to $P_i$. The value $\phi_i^1=\phi_i^2=150^\circ$ means that the segment $P_i$ continues in the same direction as $P_{i-1}$.
Let $\phi_i=\phi_i^1-\phi_i^2$ for $i=1,\dots,12$. It is easy to see that $\sum_{i=1}^{12}\phi_i=0$, since $\sum_{i=1}^{12}\phi_i^1 = \sum_{i=1}^{12}\phi_i^2 = 1800^\circ$. Therefore, there exists an index $i$ such that $\phi_i\cdot \phi_{i+1}\le 0$ (indices modulo 12). We fix $i$ such that $\phi_i\cdot\phi_{i+1}\le 0$ and the difference $|\phi_i-\phi_{i+1}|$ is as large as possible.
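The choice of the index $i$ is an elementary selection; a small sketch (ours) in Python:
\begin{verbatim}
def pick_turning_pair(phi):
    """phi = [phi_1, ..., phi_12] with sum(phi) == 0.  Return a cyclic
    index i (0-based) with phi[i] * phi[i+1] <= 0 maximising
    |phi[i] - phi[i+1]|.  Candidates exist because the phi_i sum to
    zero, so they cannot all have the same strict sign."""
    n = len(phi)
    candidates = [i for i in range(n) if phi[i] * phi[(i + 1) % n] <= 0]
    return max(candidates, key=lambda i: abs(phi[i] - phi[(i + 1) % n]))
\end{verbatim}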
Without loss of generality we may assume that $\phi_i\ge 0$ and $\phi_{i+1}\le 0$. In other words, there is a right turn at $p_i$ followed by a left turn at $p_{i+1}$. There are two colorings in which the segments $P_{i-1}$, $P_i$, and $P_{i+1}$ are active; among them we choose the one where $p_i$ is black in $G_2$ and $p_{i+1}$ is black in $G_1$.
We can now change the parity of the $2^*$-factor (if needed) by decreasing the number of black pentagons in the following way: For each black $\Gamma$-edge of $P_i$, we recolor both black hexagons forming a black 10-cycle white; then we recolor all faces corresponding to the vertices of the $\Gamma$-subpath $Q_i$ black, including the first and the last one (those adjacent to $p_i$ and $p_{i+1}$, respectively); we recolor $p_i$ and $p_{i+1}$ white. As the last step, we simplify unnecessary $60^\circ$ turns. We will denote this operation $O_2$. See Figures \ref{fig:o2},
\ref{fig:tr1} and \ref{fig:bigEx2} for illustration.
\begin{figure}
\caption{A schematic drawing of the operation $O_2$.}
\label{fig:o2}
\end{figure}
\begin{figure}
\caption{Operation $O_2$: The parity of a $2^*$-factor can be changed by transforming a $\Gamma$-path passing by two consecutive pentagons into two different $\Gamma$-paths.}
\label{fig:tr1}
\end{figure}
\begin{figure}
\caption{The good $2^*$-factor induced by one of the nine possible black-and-white face-colorings of the graph in Figure \ref{fig:bigEx}.}
\label{fig:bigEx2}
\end{figure}
\subsection{Phase 4: Transform a good odd $2^*$-factor into a simple $2$-factor}
It suffices now, as the last phase, to transform a good odd $2^*$-factor into a simple (odd) $2$-factor. We do it in the following way:
In a good $2^*$-factor, each 2-cycle corresponds to a white $\Gamma$-edge $ff^\prime$, incident to two white resonant hexagons $h_1$ and $h_2$ (one in each of $G_1$ and $G_2$). We can choose either $h_1$ or $h_2$, say $h_i$, and recolor it black: by doing this, the 2-cycle is merged with two other cycles in $G_i$; the other face $f_0$ incident to both cycles being merged loses its resonance and becomes another $\Gamma$-face inserted into the $\Gamma$-path between $f$ and $f^\prime$, now joined to $f$ and $f^\prime$ by two black $\Gamma$-edges forming a $60^\circ$ angle and replacing the original white $\Gamma$-edge. In $H^{(i)}$, a vertex of degree 3 is removed, and thus the degree of three other vertices is decreased by 1: one of them corresponds to $f_0$, the other two correspond to $f$ and $f^\prime$.
Observe that this operation decreases the number of components of the factor by 2; therefore, starting with an odd factor we can only obtain odd factors.
We make a decision for all white $\Gamma$-edges sequentially according to their order along $Q_i$, according to the following rules: If a white $\Gamma$-edge $e_j$ forms a $180^\circ$ angle with $e_{j-1}$ (which has to be white in this case) and we have decided to recolor black a hexagon in $G_i$, $i=1,2$, incident to $e_{j-1}$, then we decide to recolor black a hexagon in $G_{3-i}$ incident to $e_j$. If a white $\Gamma$-edge $e_j$ forms a $120^\circ$ angle with a black $e_{j-1}$, we decide to recolor black a hexagon incident to $e_j$ in such a way that one of the new black $\Gamma$-edges forms a $180^\circ$ angle with $e_{j-1}$.
The resulting structure in $G$ is the following: All the $\Gamma$-paths and $\Gamma$-cycles are
formed of black $\Gamma$-edges only. Each vertex of $\Gamma$ of degree 1 or 2 corresponds to a 2-vertex in $H^{(i)}$.
Finally, to obtain $H$, we suppress all the 2-vertices in $H^{(i)}$; for each $\Gamma$-edge we merge the incident partial faces of $H^{(i)}$.
To describe the structure of $H$, we introduce the following notation:
A vertex of $\Gamma$ is called \emph{direct} if it corresponds to a pentagon or if the two incident (black) $\Gamma$-edges form a $180^\circ$ angle; otherwise it is called \emph{sharp}.
We claim that there cannot be three consecutive sharp $\Gamma$-vertices along any $Q_i$: Suppose some $Q_i$ contains a subpath $f_0f_1f_2f_3f_4$ with all of $f_1$, $f_2$, and $f_3$ sharp and $f_0$ direct. If $f_1f_3$ had been a white $\Gamma$-edge after Phase 2, we would not have decided to choose $f_2$. Therefore, $f_2$ was a $\Gamma$-vertex already after Phase 2, which means that $f_0f_2$ was a white $\Gamma$-edge after Phase 2. If $f_2f_4$ had also been a white $\Gamma$-edge after Phase 2, we would have made one of the two decisions the other way. Therefore, $f_3$ was a $\Gamma$-vertex already after Phase 2, but not $f_4$, which means that $f_4$ is sharp. As $f_1$ must have been chosen because of the other $\Gamma$-edge incident to $f_0$, $f_4$ should never have been chosen, a contradiction.
A (black) $\Gamma$-edge joining two direct $\Gamma$-vertices $f$ and $f^\prime$ completes the boundary of two partial faces in $H_1$ and $H_2$, each having three incident 3-vertices. After the suppression of $2$-vertices in $H^{(i)}$, in $H$ these two partial faces are merged into a hexagon.
The $60^\circ$ angle at a sharp $\Gamma$-vertex $f$ contains a partial face of $H^{(i)}$ having one 3-vertex, which is to be merged with (at least) two other partial faces.
If both $\Gamma$-vertices adjacent to $f$ in $\Gamma$ are direct, then a face of size 7 is created in $H$ by merging two partial faces each having three incident 3-vertices in $H_i$ with a partial face having one incident 3-vertex in $H_{3-i}$.
On the other hand, opposite to this one, there is a face of $H^{(i)}$ whose size is decreased by 1 by the suppression of the 2-vertex $f$ -- a pentagonal face is created in $H$.
If one of the vertices adjacent to a sharp vertex in $\Gamma$ is a sharp one, they are transformed into a face of size 8 and two pentagons in $H$. See Figures \ref{fig:grid2}, \ref{fig:bigEx3}, and \ref{fig:bigEx4} for illustration.
\begin{figure}
\caption{The intermediate structure after eliminating the white $\Gamma$-edges (left) and the final simple $2$-factor, with the face size changes in $H$ (with respect to the initial size of 6) marked with plus and minus signs.
}
\label{fig:grid2}
\end{figure}
\begin{figure}
\caption{The simple $2$-factor obtained from the good $2^*$-factor in Figure \ref{fig:bigEx2}.}
\label{fig:bigEx3}
\end{figure}
\begin{figure}
\caption{Two different odd simple $2$-factors of the graph in Figure \ref{fig:bigEx}.}
\label{fig:bigEx4}
\end{figure}
\section{Checking the correctness of the algorithm in the neighborhood of small faces close to each other: the computer-assisted part}
\label{sec:com}
Let $G$ be a Barnette graph. Let $S(G)$ be the set of the \emph{small} faces (faces of size 4 or 5) of $G$. It is straightforward to derive from Euler's formula that $2f_4+f_5=12$, where $f_4$ and $f_5$ are the numbers of quadrangles and pentagons in $G$, respectively.
\subsection{Patches}
A \emph{patch} is a 2-connected subcubic plane graph $P$, having at most one face of size different from 4, 5 and 6, and such that all vertices of $P$ of degree 2 are incident to this special face, often referred to as the outer face of the patch; moreover, $P$ contains no pair of adjacent 4-faces. When a patch is depicted, there are additional pendant half-edges at vertices of degree 2 pointing towards the outer face.
The \emph{curvature} of a patch $P$, denoted by $\mu(P)$, is equal to $2f_4(P)+f_5(P)$, where $f_4(P)$ and $f_5(P)$ are the numbers of quadrangles and pentagons in $P$ (distinct from the outer face of $P$), respectively.
We denote $\partial(P)$ the \emph{boundary} of a patch $P$ -- the facial cycle of the outer face of $P$; we denote $\delta(P)$ the \emph{perimeter} of a patch $P$, the number of 2-vertices in $P$.
The \emph{boundary vector} $\sigma(P)$ of a patch $P$ is a cyclic sequence of distances between consecutive 2-vertices on the boundary cycle of $P$. The length of $\sigma(P)$ is equal to $\delta(P)$ and its sum is equal to the length of $\partial(P)$. When listing the elements of a cyclic sequence $\sigma$, we write $x^k$ as a shorthand for $k$ consecutive occurrences of a value $x$ in $\sigma$.
Each vertex of $\partial(P)$ is either a 2-vertex or a 3-vertex in $P$. The proportion of 2-vertices along $\partial(P)$ is determined by the curvature of $P$, as is stated explicitly in the following lemma, which is a generalisation of an observation from \cite{KS} and can be derived directly from Euler's formula by the same double-counting arguments.
\begin{lemma}
Let $P$ be a patch of curvature $\mu$. Then
$$
2\delta(P)-|\partial(P)| = 6-\mu.
$$
\label{lemma:basic}
\end{lemma}
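In particular, the curvature of a patch can be read off its boundary vector alone. A short sanity check (ours) in Python:
\begin{verbatim}
def curvature_from_boundary(sigma):
    """Lemma: 2*delta(P) - |boundary(P)| = 6 - mu(P), with
    delta(P) = len(sigma) and |boundary(P)| = sum(sigma)."""
    return 6 - (2 * len(sigma) - sum(sigma))

# single faces: all boundary vertices are 2-vertices, sigma = (1^k)
assert curvature_from_boundary([1] * 4) == 2   # a quadrangle
assert curvature_from_boundary([1] * 5) == 1   # a pentagon
assert curvature_from_boundary([1] * 6) == 0   # a hexagon
# two hexagons sharing an edge: delta = 8, boundary length 10
assert curvature_from_boundary([1, 1, 1, 2, 1, 1, 1, 2]) == 0
\end{verbatim}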
Observe that for patches of curvature (greater than, less than) six, the average value of $\sigma(P)$ is (greater than, less than, respectively) two.
A patch $P$ of curvature $\mu\le 4$ ($\mu\ge 8$) is called \emph{convex} if its boundary vector $\sigma(P)$ only contains '1's and '2's ('2's and '3's, respectively). A patch $P$ with $\mu=5$ ($\mu=7$) is called \emph{convex} if $\sigma(P)$ does not contain $32^j3$ (does not contain $12^j1$, respectively).
A patch $P$ of curvature 6 is called \emph{convex} if $\sigma(P)$ contains at most one subsequence $32^j3$; if this is the case, $j>\delta(P)/2$. For instance, all the caps of nanotubes in Figures \ref{fig:33tube} and \ref{fig:caps} are convex patches of curvature 6.
Note that, according to Lemma \ref{lemma:basic}, the boundary vector of a convex patch $P$ of curvature $\mu\le 4$ has the form $(12^{k_1}12^{k_2}\dots12^{k_{t}})$,
where $t=6-\mu$, $k_1,k_2,\dots,k_{t}\in \mathbb{N}_0$, and $k_1+k_2+\dots+k_{t}=\delta(P)-t$.
We denote $P^{i\gets j}$ a patch obtained from $P$ by adding a face of size $j$ to $P$ along the path corresponding to the $i$-th element of $\sigma(P)$, if such a patch exists, see Figure \ref{fig:grow} for illustration.
\begin{figure}
\caption{Two different examples of three different patches obtained from a given patch (on the left) by inserting a new face at the element $\sigma_i$ of its boundary vector.
}
\label{fig:grow}
\end{figure}
It may happen that while adding a new face to a patch, we have to identify some elements (vertices/edges/faces) of the patch, as in the second row of Figure \ref{fig:grow}. It may even happen that adding a new face of some desired size to a specific place of a patch is not possible, since the faces to be identified are not of the same size.
\subsubsection{Patches in Barnette graphs}
Let $G$ be a Barnette graph. We say that a patch $P$ is contained in $G$ if there is a graph homomorphism $\varphi: P \to G$ such that all faces of the patch (except for the outer face) are also faces of $G$.
We say that a patch $P$ is \emph{realizable} if it is contained in some Barnette graph.
Observe that a patch $P$ of perimeter 0 is contained in a Barnette graph $G$ if and only if $P=G$ and the outer face of $P$ is a face of $G$. Similarly, a patch $P$ of perimeter 2 is contained in a Barnette graph $G$ if and only if $P=G\setminus e$ for some edge $e$ of $G$ and the outer face of $P$ is the union of the two faces incident to $e$ in $G$.
Finally, since Barnette graphs are cyclically 4-edge-connected, a patch $P$ of perimeter 3 is contained in a Barnette graph $G$ if and only if $P=G\setminus v$ for some vertex $v$ of $G$ and the outer face of $P$ is the union of the three faces incident to $v$ in $G$.
On the other hand, no patch of perimeter 1 can be realizable, since it would correspond to a cut-edge in a Barnette graph.
Some (but not all) realizable patches can be obtained in the following way: For any induced cycle $C$ of a Barnette graph $G$, there are two distinct (but not disjoint) patches $P$ and $\bar{P}$ contained in $G$ such that $\partial(P)=\partial(\bar{P})=C$.
It is easy to see that we have $\mu(P)+\mu(\bar{P})=12$ and that $\delta(P)$ is equal to the number of edges of the cut separating $P$ from $G\setminus P$.
Moreover, as each vertex of $C$ is either a 2-vertex in $P$ or a 2-vertex in $\bar{P}$, $\delta(P)+\delta(\bar{P})$ is equal to the length of $\partial(P)$.
As a direct consequence of Lemma \ref{lemma:basic} we obtain the following observation.
\begin{lemma}
Let $C$ be an induced cycle in a Barnette graph and let $P$ and $\bar{P}$ be the two corresponding patches. Then
$$
\delta(P)-\delta(\bar{P}) = 6-\mu(P) = \mu(\bar{P})-6.
$$
\label{lemma:basic2}
\end{lemma}
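Combining Lemma \ref{lemma:basic} for $P$ and $\bar{P}$ with $\mu(P)+\mu(\bar{P})=12$ and $\delta(P)+\delta(\bar{P})=|\partial(P)|$ also determines both perimeters from $\mu(P)$ and the length of $C$; a small sanity check (ours):
\begin{verbatim}
def deltas_from_cut(mu_P, cycle_length):
    """Perimeters of the two patches determined by an induced cycle C:
    delta(P) = (|C| + 6 - mu(P)) / 2 and delta(P-bar) = |C| - delta(P)."""
    delta_P = (cycle_length + 6 - mu_P) / 2
    return delta_P, cycle_length - delta_P

# C the facial cycle of a pentagon: mu(P) = 1 and |C| = 5 give
# delta(P) = 5 and delta(P-bar) = 0, consistent with mu(P-bar) = 11.
assert deltas_from_cut(1, 5) == (5.0, 0.0)
\end{verbatim}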
However, there are patches contained in Barnette graphs which cannot be obtained this way: it is not always true that the facial cycle of the outer face of a patch corresponds to an induced cycle of the host Barnette graph -- a patch can even be self-overlapping.
\begin{lemma}
Let $P$ be a realizable patch of perimeter at least $2$. For every element $\sigma_i$ of its boundary vector there exists $j\in \{4,5,6\}$ such that $P^{i\gets j}$ is also a realizable patch; moreover, it is contained (at least) in the same Barnette graph as $P$.
\label{le:real}
\end{lemma}
Proof. Let $P$ be contained in a Barnette graph $G$. Each element of $\sigma(P)$ corresponds to a path contained in the facial cycle of some face of $G$ of size $j\in \{4,5,6\}$. Therefore, the face added to the patch corresponds to a face of $G$.
$\square$
\subsubsection{Primitive patches}
A convex patch of curvature $\mu\le 5$ is \emph{primitive} if the arrangement of its small faces is the same as in one of the patches depicted in Figure \ref{fig:smallpatches1} or it has no small faces at all (for $\mu=0$).
Observe that each convex patch with at most one small face is primitive.
\begin{figure}
\caption{Arrangements of small faces in primitive patches for different values of curvature $1\le \mu \le 5$.
}
\label{fig:smallpatches1}
\end{figure}
\begin{lemma}
Let $P$ be a convex patch of curvature $\mu\le 5$ which is not primitive. Then there exists another patch $P^\prime$ with the same curvature and the same boundary vector as $P$ but with more vertices.
\label{le:primitive}
\end{lemma}
Proof.
Suppose, for a contradiction, that $P$ is a convex patch of curvature $\mu\le 5$ which is not primitive and such that every convex patch with the same curvature and the same boundary vector has at most as many vertices as $P$.
If $P$ has at most one small face, then it is primitive by definition, a contradiction. Therefore, we may assume that $P$ has at least two small faces.
If all the small faces of $P$ are pairwise adjacent to each other, then $P$ has at most three small faces, moreover, if it has three small faces, at most one of them is a quadrangle. In all the cases the patch is primitive, a contradiction.
We may suppose that $P$ has two small faces $f_1$ and $f_2$ which are not adjacent to each other. We claim that $f_1$ and $f_2$ are at mutual position $(1,1)$ and the edge connecting them is incident to a quadrangle:
Suppose $f_1$ and $f_2$ are two small faces in mutual position $(c_1,c_2)$ such that $c_1\ge c_2\ge 1$ and $c_1\ge 2$. Then there exists a new patch $P^\prime$ with the same boundary vector and the same curvature, but with a bigger number of vertices: $P^\prime$
can be found by inserting two pentagons and $c_1+c_2-3$ hexagons along a shortest path joining $f_1$ and $f_2$. (The path is in $P$ due to convexity of $P$.) By applying this operation, the size of $f_1$ and $f_2$ is increased by one; the mutual position of the two new pentagons is $(c_1-1,c_2-1)$, see Figure \ref{fig:oper1} for illustration. The patch $P^\prime$ is indeed a patch of a Barnette graph, since no pair of adjacent quadrangles can be created this way.
\begin{figure}
\caption{If a patch contains at least two non-adjacent small faces, it can be transformed to another one with more vertices, unless the two small faces are in position (1,1) and the edge connecting them is incident to a quadrangle: Two generic cases (top) and two special cases (bottom). The size of the two small faces is always increased by one (so if they were pentagons, they are no more small); two new pentagons or one new quadrangle are created.
}
\label{fig:oper1}
\end{figure}
Similarly, if $c_1\ge 3$ and $c_2=0$, then there is a sequence $h_1,\dots,h_{c_1-1}$ of hexagons forming a dual path joining $f_1$ and $f_2$. We subdivide the edge between $f_1$ and $h_1$ and the edge between $h_{c_1-1}$ and $f_2$ once; we subdivide each edge between $h_i$ and $h_{i+1}$ ($1 \le i \le c_1-2$) twice; we join the new vertices in such a way that $h_1$ and $h_{c_1-1}$ are split into a pentagon and a hexagon and that all other hexagons in the sequence are split into two new hexagons. Again, the size of $f_1$ and $f_2$ is increased by one and a new pair of pentagons at mutual position $(c_1-2,1)$ is created, see Figure \ref{fig:oper1} for illustration.
Analogously, if $(c_1,c_2)=(2,0)$, then there is a hexagon $h$ adjacent to both $f_1$ and $f_2$. To obtain $P^\prime$, it suffices to subdivide the two edges $h$ shares with $f_1$ and $f_2$, respectively, and join the two new vertices by a new edge. This way $h$ is split into two pentagons and the size of $f_1$ and $f_2$ is increased by one, see Figure \ref{fig:oper1} for illustration.
Finally, let $(c_1,c_2)=(1,1)$. Then $f_1$ and $f_2$ are connected by an edge $e$. If the edge $e$ is not incident to any quadrangle, then a new patch $P^\prime$ can be obtained by replacing $e$ by a quadrangle, see Figure \ref{fig:oper1} for illustration. Since the size of $f_1$ and $f_2$ is increased by one, there cannot be two adjacent quadrangles in the patch $P^\prime$.
To conclude, for every pair of non-adjacent small faces of $P$, there is a quadrangle adjacent to both of them, so both of them are pentagons, and so $\mu\ge 4$ and $P$ contains a quadrangle adjacent to two pentagons (which are not adjacent to each other). If $\mu=4$, then $P$ has no other small faces, so it is primitive, a contradiction. If $\mu=5$, then $P$ contains an additional pentagon, which, due to the previous observations, has to be adjacent to (the only) quadrangle -- again we obtain a primitive patch, a contradiction.
$\square$
\begin{corollary}
For a given curvature and given boundary vector, a convex patch with maximal number of vertices has to be a primitive one.
\end{corollary}
\begin{lemma}
Let $P$ be a convex patch of curvature $\mu\le 5$ and boundary vector $\sigma$. Then there exists a unique primitive patch $\bar{P}(\mu,\sigma)$ with the same curvature and the same boundary vector.
\label{le:unique}
\end{lemma}
Proof. The existence is given by the previous lemma. The uniqueness can be proven by induction, by adding/removing rows of hexagons from a patch, or, alternatively, by considering embeddings of patches onto infinite hexagonal cones. We omit the details.
$\square$
\begin{lemma}
Let $P$ be a convex patch of perimeter $p$ and curvature $\mu\le 5$. Then $P$ has at most $\frac{p^2}{6-\mu}$ vertices.
\label{lemma:finite}
\end{lemma}
Proof. It suffices to count the numbers of vertices of primitive convex patches. We omit the details.
$\square$
It is worth mentioning that the bound from Lemma \ref{lemma:finite} is tight only if $\mu(P)\le 2$ and the patch contains at most one small face.
\begin{corollary}
Let $P$ be a convex patch of curvature $\mu\ge 7$. Then $P$ can be realized only in finitely many Barnette graphs.
\label{cor:howto}
\end{corollary}
The largest Barnette graph containing a given realizable convex patch of curvature $\mu \ge 7$ can be found by adding the corresponding (unique) primitive patch of curvature $12-\mu$.
\subsubsection{Patch closure and essential patches}
A \emph{$k$-disc} centered at a face $f$ of a plane graph $G$, denoted by $B_k(f)$, is a subgraph of $G$ composed of facial cycles of faces at (dual) distance at most $k$ from the face $f$. Note that if $k$ is large enough, then $B_k(f)=G$ for any $f$.
A patch $P^\prime$ is called a \emph{closure} of a patch $P$, if
\begin{enumerate}
\item $P$ is contained in $P^\prime$,
\item every small face of $P'$ corresponds to a small face of $P$,
\item $P^\prime$ contains the 2-discs centered at the small faces of $P$, and
\item $P'$ is convex.
\end{enumerate}
A patch $P$ is called \emph{closed} if it is a closure of itself.
Clearly, if $P^\prime$ is a closure of $P$, then $P^\prime$ can be obtained from $P$ by adding a finite number of hexagons.
Let $P$ be a patch with boundary vector $\sigma(P)=\sigma_1\sigma_2\dots \sigma_k$.
The \emph{small face distance} of a value $\sigma_i$ is equal to the minimum of the distances $d(f^*,g^*)$, where $f$ is the new face of the patch $P^{i\gets j}$ (for some sufficiently large $j$), $g$ runs over the set of small faces of $P$, and the distances are taken in the inner dual (the dual without the vertex representing the outer face) of $P^{i\gets j}$.
Let $P$ be a patch which is not convex. Then we set all the values of its boundary vector as \emph{admissible}.
Let $P$ be a convex patch with boundary vector $\sigma(P)=\sigma_1\sigma_2\dots \sigma_k$.
A value $\sigma_i$ is called \emph{admissible}, if the small face distance of $\sigma_i$ is at most 2.
Observe that boundary vectors of closed patches have no admissible values.
Let $P$ be a patch with boundary vector $\sigma(P)=\sigma_1\sigma_2\dots \sigma_k$ which is not closed.
A \emph{critical element} of the boundary vector of $P$ is an admissible value $\sigma_i$ such that
\begin{itemize}
\item $\sigma_i$ is maximal, and then
\item the sum $\sigma_{i-1}+\sigma_{i+1}$ (indices taken modulo $k$) is maximal, unless $\mu(P)\ge 5$ and $\max_{i=1}^k\sigma_i=3$, in which case we choose $\sigma_i=3$ contained in a subsequence $32^j3$ of minimum length, and then
\item the small face distance of $\sigma_i$ is minimal.
\end{itemize}
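Once the admissible values and their small face distances are known, the choice of a critical element is a purely combinatorial selection. The following sketch (ours) implements the generic branch only; the exceptional clause for $\mu(P)\ge 5$ with maximal value $3$ is deliberately omitted.
\begin{verbatim}
def critical_element(sigma, admissible, sfd):
    """sigma: cyclic boundary vector (list); admissible: admissible
    indices; sfd[i]: small face distance of sigma_i.  Generic rule:
    maximise the value, then the sum of the two cyclic neighbours,
    then minimise the small face distance."""
    k = len(sigma)
    return max(admissible,
               key=lambda i: (sigma[i],
                              sigma[(i - 1) % k] + sigma[(i + 1) % k],
                              -sfd[i]))
\end{verbatim}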
\begin{lemma}
Let $G$ be a Barnette graph and let $f$ be a small face of $G$. Then there exists a finite sequence of patches $\{P_k\}_{k=1}^t$ contained in $G$ such that
\begin{itemize}
\item $P_1$ is a cycle of length equal to the size of $f$;
\item $P_{k+1}=P_k^{i\gets j}$, where $j\in\{4,5,6\}$ and $\sigma_i$ is a critical element of $\sigma(P_k)$;
\item for each $k$, in the embedding of $P_k$ into $G$, the face $f$ corresponds to a face of $P_k$;
\item either $P_t$ is the first closed patch of the sequence or $P_t=G$.
\end{itemize}
\label{le:seq}
\end{lemma}
Proof. The existence of the sequence is guaranteed by Lemma \ref{le:real}. Either adding faces one by one yields a closed patch, or all the faces of $G$ are eventually added. In both cases the sequence is finite.
$\square$
Observe that the sequence $\{P_k\}_{k=1}^t$ of patches contained in a Barnette graph $G$ starting with a fixed small face $f$ of $G$ given by Lemma \ref{le:seq} is not unique -- it may depend on the choice of a critical element.
Let $f$ be a small face of a Barnette graph $G$ and let $P$ be a patch. If $P=P_t$ for some sequence described in Lemma \ref{le:seq} starting with $f$, then we call $P$ an \emph{essential patch} for $f$ in $G$.
\subsection{Patches and the general procedure}
Let $P$ be a patch essential for some small face of a Barnette graph $G$.
Then the Hamiltonian cycle $C_T$ of the triangulation $T$ capturing the mutual position of small faces of $G$ enters and leaves $P$ at least once.
We will modify the general procedure in order to ensure that we can choose a cycle $C_T$ entering and leaving $P$ exactly once: For essential patches of curvature at least 6 this is automatically true due to the convexity of the patch and the minimality of the cycle. For each essential patch $P$ of curvature at most 5 we can temporarily replace $P$ by the corresponding primitive patch $\bar{P}$; in the resulting graph we find the cycle $C^*$ visiting each small face exactly once. Since in $\bar{P}$ the small faces are adjacent to each other, they are consecutive along $C^*$ by the minimality of $C^*$.
When replacing back the primitive patches by the actual patches, we keep the order in which the (primitive) patches were covered by $C^*$ and we keep the position of the segments joining different patches. We disregard the way $C^*$ visits the small faces inside each essential patch, since we will inspect that in detail later.
From this point on we may assume that for each essential patch $P$ there are exactly two segments leaving $P$, say $P_i^*$ and $P_j^*$. For any position of the segments $P_{i+1}^*, \dots, P_{j-1}^*$ inside $P$, the difference $\varphi_j \circ \varphi_i^{-1}$ is a permutation of three elements which is even if and only if $\mu(P)$ is even (each pentagon of $P$ contributes with a single transposition).
If $\mu(P)$ is odd, then the difference $\varphi_j \circ \varphi_i^{-1}$ is an odd permutation -- a transposition. Therefore, among the nine choices of colorings of $G_1$ and $G_2$, for one choice both segments leaving $P$ are inactive, for four choices one of them is active and the other one is inactive, and for the remaining four both segments are active -- the patch behaves like a pentagon. We will call these patches type 1.
If $\mu(P)$ is even and the difference $\varphi_j \circ \varphi_i^{-1}$ is the identity, then among the nine choices of colorings of $G_1$ and $G_2$, for three of them both segments are inactive and for the remaining six both segments are active -- the patch behaves like a quadrangle. We will call these patches type 0.
If $\mu(P)$ is even and the difference $\varphi_j \circ \varphi_i^{-1}$ is an even permutation different from the identity, then it has to be a cycle of length three. Therefore, among the nine choices of colorings of $G_1$ and $G_2$, for three of them both segments are active and for the remaining six there is one active and one inactive segment -- the patch behaves like a pair of pentagons of different colors. We will call these patches type 2.
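The type of a patch is thus determined by the permutation $\varphi_j \circ \varphi_i^{-1}$ alone; a minimal sketch (ours), with the permutation given as a Python dictionary:
\begin{verbatim}
def patch_type(perm):
    """perm: the permutation of the colors {'A','B','C'} induced by the
    pentagons of the patch (each pentagon contributes a transposition).
    Identity -> type 0, transposition -> type 1, 3-cycle -> type 2."""
    fixed = sum(1 for c in "ABC" if perm[c] == c)
    return {3: 0, 1: 1, 0: 2}[fixed]

assert patch_type({"A": "A", "B": "B", "C": "C"}) == 0   # quadrangle-like
assert patch_type({"A": "A", "B": "C", "C": "B"}) == 1   # pentagon-like
assert patch_type({"A": "B", "B": "C", "C": "A"}) == 2   # two pentagons
\end{verbatim}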
Let $P$ be a patch essential for some small face of a Barnette graph $G$.
Let the position of two segments leaving $P$ and all the segments inside $P$ be fixed.
Let one of the nine colorings of $G_1$ and $G_2$ be chosen.
Let the procedure described in Section \ref{sec:pro} be applied. We first obtain a $2^*$-factor, which is then transformed into at most two $2$-factors (depending on the order of decisions at 2-cycles of the $2^*$-factor).
Let $H^0_P$ be the subgraph of the residual graph $H$ induced by the vertices corresponding to the faces of $P$ and faces adjacent to faces of $P$ in $G$. There can be vertices of degree 1 or 2 in $H^0_P$. We add $3-d$ new vertices adjacent to each vertex of degree $d$ in $H^0_P$ inside the outer face; we then connect all these new vertices by a new cycle. This way we obtain a plane cubic graph $H_P$, which we call the \emph{partial residual graph}.
Let $f^*$ be a vertex of $H_P$ corresponding to a face $f$ of $P$. The face $f$ is a white resonant hexagon. If we recolor $f$ black, then three different components of the underlying $2$-factor are merged into a single cycle; the vertex $f^*$ is deleted from $H_P$ and the three resulting $2$-vertices are suppressed. We call this operation \emph{elimination} of $f^*$.
We say that a plane cubic graph is \emph{strongly essentially $4$-edge-connected}, if it is cyclically 3-edge-connected, and every cyclic 3-edge-cut separates a triangle adjacent to the outer face from the rest of the graph.
We say that a plane cubic graph is \emph{essentially $4$-edge-connected} if it can be transformed into a strongly essentially $4$-edge-connected plane graph by a vertex elimination.
We say that a patch $P$ is \emph{regular}, if for every possible position of a pair of segments leaving $P$ and for every choice of the colors of $G_1$ and $G_2$, there is a permutation of small faces of $P$ such that for each of the (at most) two $2$-factors obtained by the general procedure the corresponding partial residual graph is essentially 4-edge-connected. See Figures \ref{fig:reg} and \ref{fig:0.5.prvy} for illustration.
\begin{figure}
\caption{For a patch $P$ with four pentagons and a fixed position of two segments leaving $P$, for each of the nine colorings of $G_1$ and $G_2$ the $2^*$-factor and (at most) two simple $2$-factors obtained by the general procedure are depicted. The third drawing in the third column of the second row proves that for this position of the segments leaving $P$ the patch $P$ is parity-switching.
}
\label{fig:reg}
\end{figure}
\begin{figure}
\caption{For a few patches with many small faces adjacent to each other, the first outcome of the general procedure is a 2-factor such that the corresponding partial residual graph is not strongly essentially 4-edge-connected (left). However, to obtain a strongly essentially 4-edge-connected graph, it suffices to eliminate a vertex incident to a short cycle (right).
}
\label{fig:0.5.prvy}
\end{figure}
We say that a patch $P$ is \emph{weakly regular}, if for every possible position of a pair of segments leaving $P$ there exists a choice of the colors of $G_1$ and $G_2$ such that there is a permutation of small faces of $P$ such that for at least one $2$-factor obtained by the general procedure the corresponding partial residual graph is essentially 4-edge-connected.
We say that a patch $P$ is \emph{parity-switching} if for every possible position of a pair of segments leaving $P$ there exists a choice of the colors of $G_1$ and $G_2$ such that there exists a permutation of small faces of $P$ such that one of the operations $O_1$ and $O_2$ can be applied inside $P$; for both $2^*$-factors (before and after the operation), for at least one $2$-factor the corresponding partial residual graph is essentially 4-edge-connected.
\subsection{Generation of patches}
\begin{theorem}
There exists a finite set $\mathcal{P}$ of patches such that for every Barnette graph $G$ on at least 318 vertices and every small face $f$ of $G$, there exists a patch $P\in \mathcal{P}$ essential for $f$ in $G$.
\label{th:gen}
\end{theorem}
Proof. We prove the claim by construction. We used Algorithm \ref{algo:add} to generate all the patches in $\mathcal{P}$, by two calls of the procedure \textsc{Generate()}, passing as a parameter first a $4$-cycle and then a $5$-cycle, with the database of patches containing initially the closures of the two initial patches. The procedure uses Algorithm \ref{algo:closure} as a subroutine to calculate a closure of a given patch.
If the insertion at lines $6$, $11$, or $16$ of Algorithm \ref{algo:add} fails, it means that there is no Barnette graph containing the current patch $P$ such that the element $\sigma_i$ corresponds to a $j$-face for $j=4$, $5$, or $6$, respectively. If this is the case, the following lines are ignored until the next insertion.
Similarly for the insertion at line 6 of Algorithm \ref{algo:closure}.
$\square$
\begin{algorithm}[pht]
\caption{Generation of all closed patches containing a given patch}\label{algo:add}
\begin{algorithmic}[1]
\Procedure{Generate}{patch $P$}
\If{$\mu(P)\ge 7$ and the largest graph containing $P$ has at most 316 vertices}
\Return{}
\Else
\State let $\sigma_i$ be a critical element of the boundary of $P$
\If {the path along $\sigma_i$ is not adjacent to a 4-face}
\State $P^\prime \gets P^{i\gets 4}$
\State $P^{\prime\prime} \gets $\Call{Closure}{$P^\prime$}
\If{$P^{\prime\prime}$ is not in the database of patches}
\State Add $P^{\prime\prime}$ to the database of patches
\State \Call{Generate}{$P^\prime$}
\EndIf
\EndIf
\State $P^\prime \gets P^{i\gets 5}$
\State $P^{\prime\prime} \gets $\Call{Closure}{$P^\prime$}
\If{$P^{\prime\prime}$ is not in the database of patches}
\State Add $P^{\prime\prime}$ to the database of patches
\State \Call{Generate}{$P^\prime$}
\EndIf
\State $P^\prime \gets P^{i\gets 6}$
\State $P^{\prime\prime} \gets $\Call{Closure}{$P^\prime$}
\If{$P^{\prime\prime}\ne P^\prime$}
\State \Call{Generate}{$P^\prime$}
\EndIf
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[pht]
\caption{Computation of a closure of a given patch}\label{algo:closure}
\begin{algorithmic}[1]
\Procedure{Closure}{patch $P$}
\If{$P$ is closed}
\State \Return{$P$}
\Else{}
\State let $\sigma_i$ be a critical element of the boundary of $P$
\State $P^\prime \gets P^{i\gets 6}$ \label{lineno:add}
\State \Return{\Call{Closure}{$P^\prime$}}
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
The counts of patches generated in the proof of Theorem \ref{th:gen} are depicted in Table \ref{table}.
\begin{table}[pht]
\centerline{
\begin{tabular}{|c||c|c|c|c|c|c|c|}\hline
$f_4$ $\backslash$ $f_5$&1&2&3&4&5&6&7 \\\hline\hline
0&1&3&12&92&1202&8821&679 \\\hline
1&3&24&354&3279&&& \\\hline
2&37&383&&&&& \\\hline
\end{tabular}}
\caption{Numbers of essential patches in $\mathcal{P}$ by the number of pentagons and quadrangles. Amongst the patches of curvature greater than 6, only patches contained in at least one graph on at least 318 vertices are counted.}
\label{table}
\end{table}
\subsection{Analysis of patches}
The following statements were checked by computer:
\begin{theorem}
There is no patch $P\in \mathcal{P}$ with $\mu(P)\ge 8$.
\label{th:8+}
\end{theorem}
This means that for every patch $P$ of curvature at least 8 considered by the generating algorithm, the largest graph containing $P$ has less than 318 vertices. See line 2 of Algorithm \ref{algo:add} and the remark after Corollary \ref{cor:howto}.
\begin{theorem}
Every patch $P\in \mathcal{P}$ with $\mu(P)\le 5$ is regular. Every patch $P\in \mathcal{P}$ with $\mu(P)\in \{6, 7\}$ is weakly regular, unless $P$ contains a cap of a nanotube of type $(p_1,p_2)$ with $(p_1,p_2)\in \{(4,0), (5,0),(4,1),(5,1),(3,2),(4,2),(3,3),(4,3)\}$.
\label{th:67}
\end{theorem}
Theorem \ref{th:67} guarantees the existence of a simple $2$-factor such that the residual graph is cyclically 4-edge-connected. The only missing part is that we cannot be sure that this $2$-factor is odd.
We do not need to check for regularity of patches of curvature 6 and 7; weak regularity suffices instead: If a Barnette graph contains an essential patch of curvature $\mu=7$, then it contains only one. Therefore, we can choose the coloring of $G_1$ and $G_2$ such that no segment leaving $P$ is active.
If a Barnette graph $G$ contains an essential patch $P$ of curvature $\mu=6$, then $P$ contains a cap of a nanotube, and we can choose the coloring of $G_1$ and $G_2$ such that the tubical part of $G$ is traversed by at most one active segment (one if $P$ is type 2, none if $P$ is type 0).
If $G$ is a nanotube of type $(c_1,c_2)$, $c_1\ge c_2\ge 0$, and the caps are (contained in) patches of type 0, then $3 \mid (c_1-c_2)$, so we can write $(c_1,c_2)=(3a+b,b)$ for some integers $a,b\ge 0$. If we choose any of the three colorings of $G_1$ and $G_2$ such that no active segment traverses the tubical part of $G$, then the residual graph $H$ is a nanotube of type $(a+b,a)$.
If $G$ is a nanotube of type $(c_1,c_2)$, $c_1\ge c_2\ge 0$, and the caps are (contained in) patches of type 2, then $3 \nmid (c_1-c_2)$, so we can write $(c_1,c_2)=(3a+b,b+1)$ or $(c_1,c_2)=(3a+b+1,b)$ for some integers $a,b\ge 0$, or $(c_1,c_2)=(3a+2,0)$ for some integer $a\ge 1$. If we choose a coloring of $G_1$ and $G_2$ such that one active segment traverses the tubical part of $G$, then the residual graph $H$ is a nanotube of type $(a+b,a)$ (in the first two cases) or $(a+1,1)$ (in the third case).
This is the reason for excluding the aforementioned 8 types of nanotubes.
\begin{theorem}
Let $P\in \mathcal{P}$ with $\mu(P)\le 7$. Then $P$ is parity-switching, unless
$P$ is one of the following exceptional patches:
\begin{itemize}
\item the patch $P_1$ of curvature $1$ having one pentagon,
\item the patch $P_2$ of curvature $2$ containing two adjacent pentagons,
\item the patch $P_3$ with three pentagons sharing a common vertex,
\item two patches $P_4$ and $P_5$ with four pentagons (the type 0 patches obtained from $P_3$ by adding a pentagon at distance at most two),
\item four patches $P_6$, $P_7$, $P_8$, $P_9$ with six pentagons, depicted in Figure \ref{fig:clusters}.
\end{itemize}
\label{th:psw}
\end{theorem}
\begin{figure}
\caption{Patches with 4 and 6 pentagons for which it is not possible to increase or decrease the number of black pentagons by 2.}
\label{fig:clusters}
\end{figure}
There is a combinatorial reason for the patches $P_3$--$P_9$ not to be parity-switching: if three pentagons share a vertex, either one or two of them have to be black, so we do not have the freedom to change their colors independently.
As a consequence of Theorem \ref{th:psw}, if a Barnette graph contains at least one parity-switching essential patch, we choose the coloring of $G_1$ and $G_2$ that allows us to change the parity of the $2$-factor, and, by regularity, we are done.
It remains to consider Barnette graphs (in fact, fullerene graphs) only containing patches $P_1$-$P_9$ and verify that we can use parity-switching operations using pentagons from different patches.
If a fullerene graph contains $P_6$, it is a nanotube of type $(5,0)$, and it is known to be Hamiltonian \cite{mar2}. If a fullerene graph contains $P_7$ ($P_8$, $P_9$, respectively), then it is a nanotube of type $(4,2)$ (of type $(6,2)$, $(8,0)$) -- the patch itself already contains a corresponding ring. Out of all the possible patches (caps) to close the other end of the tube, $P_7$ ($P_8$, $P_9$) is the only one that is not parity-switching, as it was verified by a computer. However, if both caps of a nanotube are $P_7$ ($P_8$, $P_9$), then it has an even number of hexagons and exactly 6 black and 6 white pentagons, so by Lemma \ref{l:paths} the number of cycles in the $2$-factor is odd.
It remains to consider fullerene graphs only having patches $P_1$, $P_2$, $P_3$, $P_4$, and $P_5$.
It was verified by computer that for each of the five patches, for each active segment leaving the patch, the $\Gamma$-path can be transformed into a pair of $\Gamma$-paths (interconnected inside the patch or not) -- this is nothing other than applying half of one of the operations $O_1$ and $O_2$ (or their inverses) inside the patch and the other half inside another patch.
In each of the patches this modification corresponds to increasing or decreasing the number of black pentagons by one. In most of the cases both are possible. More precisely, for each segment leaving $P_1$, $P_2$, $P_4$ or $P_5$, out of the nine possible colorings, for three colorings the segment is inactive, for at least two colorings it is possible to increase the number of black pentagons by one, and for at least four colorings it is possible to decrease the number of black pentagons by one. It means that if two of these patches are consecutive along $C^*$, then there exists a coloring such that we can decrease the number of black pentagons in each of them by one.
On the other hand, for $P_3$, it is possible to increase the number of black pentagons by one for four colorings and decrease it for two of them. Again, if there are two such patches consecutive along $C^*$, there exists a coloring such that we can increase the number of black pentagons in each of them by one.
It remains to consider fullerene graphs such that along $C^*$, the patches $P_3$ alternate with other types of patches among $\{P_1,P_2,P_4,P_5\}$. Since each $P_3$ contains three pentagons and there are twelve pentagons altogether, it is easy to see that the number of $P_3$ patches is either 2 or 3.
If there are two $P_3$ patches, the other two patches have six pentagons, and hence one of them is $P_2$ and the other one is either $P_4$ or $P_5$.
The patch with four pentagons has to be far from each of the $P_3$ patches, otherwise the graph would be too small (see Lemma \ref{lemma:finite}). The patches $P_4$ and $P_5$ are both type 0. That is why we may omit the four-pentagon patch and search only for a cycle passing through the eight pentagons of the other three patches; we consider $P_4$ or $P_5$ as if no segment leaving it was active. As a consequence, we find two $P_3$ patches consecutive along $C^*$.
If there are three $P_3$ patches, the other three patches can only have one pentagon each.
Moreover, the condition that for each segment joining a $P_3$ to a $P_1$ the two colorings allowing to decrease the number of black pentagons in $P_3$ correspond to the two colorings allowing to increase the number of black pentagons in the other patch implies that out of the nine colorings, there is one with no active segment joining a $P_3$ to a $P_1$, there are four colorings with three active segments and three inactive segments alternating, and there are four colorings with all the six segments active. In all the cases there are three $\Gamma$-paths in $G$. (In the case of no active segments joining different patches, there is still a $\Gamma$-path joining different pentagons inside each $P_3$.)
If we replace a vertex incident to three pentagons inside each $P_3$ by a triangle temporarily, then the graph will contain three pentagons and three triangles (and all the other faces will be hexagons). Moreover, in the coloring of $G_1$ and $G_2$ all the six small faces have the same color.
By the structural theorem of Alexandrov, such a graph can be isometrically embedded onto a surface of a (possibly degenerate) convex polyhedron, say $P$. The polyhedron $P$ has six vertices, and the cycle $C^*$ is a Hamiltonian cycle in some triangulation of $P$.
The cycle $C^*$ cuts the polyhedron $P$ into two hexagons. In the two hexagons the angles at a fixed $P_3$-vertex (center of a triangle) sum up to $180^\circ$, and hence they are both always convex (smaller than $180^\circ$). For the angles at the $P_1$-vertices (centers of isolated pentagons), in at least one hexagon the angle is convex. Therefore, it is always possible to permute a $P_1$ patch with a $P_3$ patch to obtain a new cycle with two consecutive patches of the same type, which gives us a possibility to change the parity of the number of cycles. See Figure \ref{fig:313131} for illustration.
\begin{figure}
\caption{Top to bottom, left to right: An example of a fullerene graph on 198 vertices containing three patches $P_3$ and three patches $P_1$. An even 2-factor with three $\Gamma$-paths without a possibility to apply $O_1$ or $O_2$. Another even 2-factor obtained by switching the order of the patches. An odd 2-factor after applying $O_1$.}
\label{fig:313131}
\end{figure}
\section{Concluding remarks}
A similar technique could be used to prove Hamiltonicity of related graph classes: planar cubic graphs with only a few faces of size larger than six; projective-planar graphs with faces of size at most six (except, of course, for the Petersen graph), etc.
\end{document} |
\begin{document}
\title{The Effect of Finite Element Discretisation on the
Stationary Distribution of SPDEs}
\author{Jochen Voss}
\date{20th October 2011}
\maketitle
\begin{abstract}
This article studies the effect of discretisation error on the
stationary distribution of stochastic partial differential equations
(SPDEs). We restrict the analysis to the effect of space
discretisation, performed by finite element schemes. The main
result is that under appropriate assumptions the stationary
distribution of the finite element discretisation converges in total
variation norm to the stationary distribution of the full SPDE.
\end{abstract}
\noindent
\textbf{keywords:} SPDEs, finite element discretisation, stationary
distribution
\noindent
\textbf{MSC2010 classifications:} 60H35, 60H15, 65C30
\section*{Introduction}
In this article we consider the finite element discretisation for
stochastic partial differential equations (SPDEs) of the form
\begin{equation}\label{E:simple}
\d_t u(t,x)
= \d_x^2 u(t,x)
+ f\bigl(u(t,x)\bigr)
+ \sqrt{2}\, \d_t w(t,x)
\qquad \forall (t,x) \in [0,\infty)\times [0, 1],
\end{equation}
where $\d_t w$ is space-time white noise and $f\colon \mathbb{R}\to \mathbb{R}$ is a
smooth function with bounded derivatives, and the differential
operator~$\d_x^2$ is equipped with boundary conditions such that it is
a negative operator on the space $\mathrm{L}^2\bigl([0,1], \mathbb{R}\bigr)$. More
specifically, we are considering the effect that discretisation of the
SPDE has on its stationary distribution.
Our motivation for studying this problem lies in a recently
proposed, SPDE-based sampling technique: when trying to sample from a
distribution on path-space, \textit{e.g.}\ in filtering/smoothing
problems to sample from the conditional distribution of a process
given some observations, one can do so using a Markov chain Monte
Carlo approach. Such MCMC methods require a process with values in
path-space and it transpires that in some situations SPDEs of the
form~\eqref{E:simple} can be used, see \textit{e.g.}\
\citet{HaiStuaVoWi05,HaiStuaVo07} and \citet{HaiStuaVo09} for a
review. When implementing the resulting methods on a computer, the
sampling SPDEs must be discretised and, because MCMC methods use the
sampling process only as a source of samples from its stationary
distribution, the effect of the discretisation error on an MCMC method
depends on how well the stationary distribution of the SPDE is
approximated. While there are many results of approximation of
trajectories of SPDEs
\citep[\textit{e.g.}][]{MR2147242,Wa05,Hau08,MR2465711,Jen11}, approximation
of the stationary distribution seems not to be well-studied so far.
When discretising an SPDE, discretisation of space and time can be
considered to be two independent problems. In cases where only the
stationary distribution of the process is of interest, Metropolis
sampling, using the next time step of the time discretisation as a
proposal, can be used to completely eliminate the error introduced by
time discretisation \citep{BeRoStuaVo08}. For this reason, in this
article we restrict the analysis to the effect of space discretisation
alone. The discretisation technique discussed here is a finite
element discretisation, which is a much-studied technique for
deterministic PDEs. The approximation problem for {\em stochastic}
PDEs, as studied in this article, differs from the deterministic case
significantly, since here we have to compare the full {\em
distribution} of the solutions instead of considering the
approximation of the solution as a function.
While the results of this article are formulated for SPDEs with values
in $\mathbb{R}$, we expect the results and techniques to carry over to SPDEs
with values in $\mathbb{R}^d, d>1$ without significant changes. We only restrict
discussion to the one-dimensional case to ease notation.
This is in contrast to the domain of the SPDEs: we
consider the case of one spatial dimension because this is the
relevant case for the sampling techniques discussed above, but this choice
significantly affects the proofs and a different approach
would likely be required to study the case of higher-dimensional
spatial domains.
The text is structured as follows: In section~\ref{S:SPDE} we present the
required results to characterise the stationary distribution of the
SPDE~\eqref{E:simple}. In section~\ref{S:SDE} we introduce the finite
element discretisation scheme for~\eqref{E:simple} and identify the
stationary distribution of the discretised equation. Building on
these results, in section~\ref{S:result}, we state our main result
about convergence of the discretised stationary distributions to the
full stationary distribution. Finally, in section~\ref{S:example}, we
give two examples in order to illustrate the link to the MCMC methods
discussed above and also to demonstrate that the considered finite
element discretisation forms a concrete and easily implemented
numerical scheme.
\section{The Infinite-Dimensional Equation}
\label{S:SPDE}
In order to study the SPDE~\eqref{E:simple}, it is convenient to
rewrite the equation as an evolution equation on the Hilbert space
$\mathcal{H} = \mathrm{L}^2\bigl([0,1],\mathbb{R}\bigr)$; for a description of the
underlying theory we refer, for example, to the monograph of
\citet{ZDP}. We consider
\begin{equation}\label{E:SPDE}
  du(t) = \mathcal{L} u(t) \,dt + f\bigl(u(t)\bigr) \,dt + \sqrt{2}\, dw(t)
  \qquad \forall t \geq 0
\end{equation}
where the solution $u$ takes values in $\mathcal{H}$ and $f$ acts pointwise on
$u$, \textit{i.e.}\ $f(u)(x) = \tilde f\bigl(u(x)\bigr)$ for almost
all $x\in[0,1]$ for some function $\tilde f\colon \mathbb{R} \to \mathbb{R}$, such
that $f$ maps $\mathcal{H}$ into itself. Furthermore, $w$ is an
$\mathrm{L}^2$-cylindrical Wiener process and we equip the linear operator $\mathcal{L}
= \d_x^2$ with boundary conditions given by the domain
\begin{equation}\label{E:domain}
\mathcal{D}(\mathcal{L})
= \bigl\{ u\in \mathrm{H}^2([0,1], \mathbb{R}) \bigm|
\alpha_0 u(0) - \beta_0 \d_x u(0) = 0, \;
\alpha_1 u(1) + \beta_1 \d_x u(1) = 0 \bigr\}
\end{equation}
where $\alpha_0, \alpha_1, \beta_0, \beta_1\in\mathbb{R}$. The boundary
conditions in~\eqref{E:domain} include the cases of Dirichlet
($\beta_i=0$) and von~Neumann ($\alpha_i=0$) boundary conditions. The
general case of $\alpha_i, \beta_i \neq 0$ is known as Robin
boundary conditions.
We start our analysis by considering the linear equation
\begin{equation}\label{E:linear}
du(t) = \mathcal{L} u(t) \,dt + \sqrt{2}\, dw(t) \qquad \forall t \geq 0.
\end{equation}
For equation~\eqref{E:linear} to have a stationary distribution, we
require $\mathcal{L}$ to be negative definite. The following lemma states
necessary and sufficient conditions on $\alpha_i$ and $\beta_i$ for
this to be the case.
\begin{lemma}\label{L:cond-neg}
The operator $\mathcal{L}$ is a self-adjoint operator on the Hilbert space
$\mathcal{H}$. The operator $\mathcal{L}$ is negative definite, if
and only if $\alpha_0, \beta_0, \alpha_1, \beta_1$ are
contained in the set
\begin{multline*}
A = \Bigl\{
\beta_0(\alpha_0+\beta_0) >0,
\beta_1(\alpha_1+\beta_1) >0,
\big| (\alpha_0+\beta_0)(\alpha_1+\beta_1)\bigr| > | \beta_0\beta_1| \Bigr\} \\
\cup \Bigl\{
\beta_0 = 0, \alpha_0 \neq 0, \;
\beta_1(\alpha_1+\beta_1) >0 \Bigr\}
\cup \Bigl\{
\beta_0(\alpha_0+\beta_0) >0, \;
\beta_1 = 0, \alpha_1 \neq 0 \Bigr\} \\
\cup \Bigl\{
\beta_0 = 0, \alpha_0 \neq 0, \;
\beta_1 = 0, \alpha_1 \neq 0 \Bigr\}.
\end{multline*}
\end{lemma}
\begin{proof}
From the definition of $\mathcal{L}$ it is easy to see that the operator is
self-adjoint.
We have to show that $\mathcal{L}$ is negative if and only if $(\alpha_0,
\beta_0, \alpha_1, \beta_1)\in A$. Without loss of generality we
can assume $\beta_i \geq 0$ for $i=0,1$ and $\alpha_i \geq 0$ whenever
$\beta_i = 0$ (since we can replace $(\alpha_i, \beta_i)$ by
$(-\alpha_i, -\beta_i)$ if required). Assume that $\lambda$ is an
eigenvalue of $\mathcal{L}$. If $\lambda>0$, the corresponding
eigenfunctions are of the form
\begin{equation*}
u(x) = c_1 \mathrm{e}^{\sqrt{\lambda}x} + c_2 \mathrm{e}^{-\sqrt{\lambda}x}
\end{equation*}
where $c_1$ and $c_2$ are given by the boundary conditions: For $u$
to be in the domain of $\mathcal{L}$, the coefficients $c_1$ and $c_2$ need
to satisfy
\begin{equation*}
\begin{pmatrix}
\alpha_0-\beta_0\sqrt{\lambda} & \alpha_0+\beta_0\sqrt{\lambda} \\
(\alpha_1+\beta_1\sqrt{\lambda}) \mathrm{e}^{\sqrt{\lambda}} & (\alpha_1-\beta_1\sqrt{\lambda}) \mathrm{e}^{-\sqrt{\lambda}}
\end{pmatrix}
\begin{pmatrix}
c_1 \\ c_2
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 0
\end{pmatrix}.
\end{equation*}
Non-trivial solutions exist only if the matrix is singular or,
equivalently, if its determinant
\begin{equation}\label{E:ev-exists}
f(\lambda)
= \alpha_0\alpha_1 + \bigl(\alpha_0\beta_1+\alpha_1\beta_0 \bigr)\,\sqrt{\lambda}\coth\sqrt\lambda + \beta_0\beta_1\lambda
\end{equation}
satisfies $f(\lambda) = 0$. For $\lambda = 0$ the eigenfunctions
are of the form $u(x) = c_1 1 + c_2 x$ and an argument similar to
the one above shows that the boundary conditions can be satisfied if
and only if $\alpha_0\alpha_1 + \alpha_0\beta_1 +\alpha_1\beta_0 =
0$. Since $x \coth(x) \to 1$ as $x\to 0$, this condition can be
written as $f(0) = 0$ where
\begin{equation*}
f(0)
= \lim_{\lambda\downarrow 0} f(\lambda)
= \alpha_0\alpha_1 + \alpha_0\beta_1+\alpha_1\beta_0.
\end{equation*}
This shows that $\mathcal{L}$ is negative whenever $f(\lambda) \neq 0$ for
all $\lambda \geq 0$.
Let $(\alpha_0, \beta_0, \alpha_1, \beta_1)\in A$. Assume first
$\beta_i \neq 0$ for $i=0,1$ and let $\xi_i = \alpha_i/\beta_i$.
Then, by the first condition in $A$,
we have $\xi_0, \xi_1 > -1$ and $(\xi_0 + 1)(\xi_1 + 1) > 1$, and
for $\lambda \geq 0$ we get, after dividing by $\beta_0\beta_1 > 0$,
\begin{equation*}
\frac{f(\lambda)}{\beta_0\beta_1}
= \xi_0\xi_1 + \bigl(\xi_0+\xi_1 \bigr)\,\sqrt{\lambda}\coth\sqrt\lambda + \lambda
\geq (\xi_0 + 1) (\xi_1 + 1) - 1
> 0.
\end{equation*}
The cases $\beta_0 = 0$ or $\beta_1 = 0$
can be treated similarly. Thus, for $(\alpha_0, \beta_0, \alpha_1,
\beta_1)\in A$, there are no eigenvalues with $\lambda \geq 0$ and
the operator is negative.
For the converse statement, assume that $(\alpha_0, \beta_0,
\alpha_1, \beta_1)\notin A$. We then have to show that there is a
$\lambda\geq0$ with $f(\lambda) = 0$. Assume first $\beta_i > 0$
for $i=0,1$ and define $\xi_i$ as above. If $(\xi_0+1)(\xi_1+1) =
1$, we have $f(0) = 0$. If $(\xi_0+1)(\xi_1+1) < 1$, the function
$f$ satisfies $f(0) < 0$ and $f(\lambda) \to\infty$ as
$\lambda\to\infty$; by continuity there is a $\lambda>0$ with
$f(\lambda) = 0$. Finally, if $(\xi_0+1)(\xi_1+1) > 1$ but $\xi_0,
\xi_1 \leq -1$, we have $f(0) > 0$ and for $\lambda$ with
$\sqrt{\lambda} = -(\xi_0+\xi_1)/2 > 0$ we find $f(\lambda)/(\beta_0\beta_1) <
\xi_0\xi_1 + \bigl(\xi_0+\xi_1\bigr)\,\sqrt\lambda + \lambda =
\xi_0\xi_1 - (\xi_0+\xi_1)^2/4 = -(\xi_0-\xi_1)^2/4 \leq 0$ and by
continuity there is a $\lambda>0$ with $f(\lambda) = 0$. Again, the
cases $\beta_0 = 0$ and $\beta_1 = 0$ can be treated similarly.
\end{proof}
A representation of the eigenvalues of $\mathcal{L}$ which is similar to the
one in the proof of lemma~\ref{L:cond-neg} can be found in section~3
of~\citet{CaccFin10}.
The statement from lemma~\ref{L:cond-neg} reproduces the well-known
results that the Laplacian with Dirichlet boundary conditions
($\alpha_i = 1, \beta_i = 0$) is negative definite whereas the
Laplacian with von~Neumann boundary conditions ($\alpha_i = 0, \beta_i
= 1$) is not (since constants are eigenfunctions with eigenvalue~$0$).
\begin{lemma}\label{L:C-exact}
Let $\mathcal{L}$ be negative definite. Then the following statements hold:
\begin{jvlist}
\item[1.] The linear SPDE~\eqref{E:linear} has global, continuous
$\mathcal{H}$-valued solutions.
\item[2.] Equation~\eqref{E:linear} has a unique stationary
distribution $\nu$ on $\mathcal{H}$. The measure~$\nu$ is Gaussian with
mean~$0$ and covariance function
\begin{equation}\label{E:C-exact}
C(x,y)
= \frac{\beta_0\beta_1 + \alpha_0 \beta_1 \, xy + \beta_0\alpha_1(1-x)(1-y)}
{\alpha_0\alpha_1 + \alpha_0\beta_1 + \beta_0\alpha_1} + x\wedge y - xy
\end{equation}
where $x \wedge y$ denotes the minimum of $x$ and~$y$.
\item[3.] The measure $\nu$ coincides with the distribution of
$U\in \mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ given by
\begin{equation*}
U(x) = (1-x)L + xR + B(x) \qquad \forall x\in[0,1],
\end{equation*}
where $L\sim\mathcal{N}(0, \sigma_{L}^2)$, $R\sim\mathcal{N}(0,\sigma_{R}^2)$ with $\Cov(L,R) =
\sigma_{LR}$, the process $B$ is a Brownian bridge, independent of $L$
and~$R$, and
\begin{multline*}
\sigma_{L}^2 = \frac{\beta_0(\alpha_1+\beta_1)}{\alpha_0\alpha_1 + \alpha_0\beta_1 + \beta_0\alpha_1}, \quad
\sigma_{R}^2 = \frac{(\alpha_0+\beta_0)\beta_1}{\alpha_0\alpha_1 + \alpha_0\beta_1 + \beta_0\alpha_1}, \\
\sigma_{LR} = \frac{\beta_0\beta_1}{\alpha_0\alpha_1 + \alpha_0\beta_1 + \beta_0\alpha_1}.
\end{multline*}
\end{jvlist}
\end{lemma}
\begin{proof}
From \citet{IscMaMcDoTaZi90} and \citet{DaPraZa96}
\citep[see][Lemma~2.2]{HaiStuaVoWi05} we know that \eqref{E:linear}
has global, continuous $\mathcal{H}$-valued solutions as well as a unique
stationary distribution given by $\nu = \mathcal{N}(0, -\mathcal{L}^{-1})$. An easy
computation shows that $C$ as given in equation~\eqref{E:C-exact} is
a Green's function for the operator $-\mathcal{L}$, \textit{i.e.}\ $-\d_x^2
C(x,y) = \delta(x-y)$ and for every $y\in(0,1)$ the function $x
\mapsto C(x,y)$ satisfies the boundary conditions~\eqref{E:domain}.
This completes the proof of the first two statements.
For the third statement we note that $U$ is centred Gaussian with
covariance function
\begin{align*}
C(x,y)
&= \Cov\bigl(U(x),U(y) \bigr) \\
&= \Cov\bigl((1-x)L + xR + B(x), (1-y)L + yR + B(y)\bigr) \\
&= (1-x)(1-y) \,\sigma_{L}^2 + \bigl((1-x)y+x(1-y)\bigr) \,\sigma_{LR} + xy \,\sigma_{R}^2 \\
&\hskip1cm
+ x\wedge y - xy.
\end{align*}
The fact that this covariance function can be written in the
form~\eqref{E:C-exact} can be checked by a direct calculation.
\end{proof}
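The representation in the third statement of lemma~\ref{L:C-exact} is
directly implementable. The following sketch is only an illustration
added here (the function name \texttt{sample\_nu}, the use of NumPy and
the discretisation of the Brownian bridge on the grid are our own
choices, not part of the analysis): it samples the boundary values
$(L,R)$ from the stated Gaussian distribution and adds an independent
Brownian bridge.
\begin{verbatim}
import numpy as np

def sample_nu(alpha0, beta0, alpha1, beta1, n, rng):
    """One approximate sample of nu on the grid x_k = k/n, using
    U(x) = (1-x) L + x R + B(x) with (L, R) Gaussian and B a bridge."""
    D = alpha0*alpha1 + alpha0*beta1 + beta0*alpha1
    cov = np.array([[beta0*(alpha1 + beta1), beta0*beta1],
                    [beta0*beta1, (alpha0 + beta0)*beta1]]) / D
    L, R = rng.multivariate_normal(np.zeros(2), cov)
    x = np.linspace(0.0, 1.0, n + 1)
    dW = rng.standard_normal(n) * np.sqrt(1.0 / n)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    B = W - x * W[-1]                      # discretised Brownian bridge
    return (1 - x) * L + x * R + B

rng = np.random.default_rng(0)
u = sample_nu(1.0, 1.0, 1.0, 1.0, 100, rng)   # Robin case alpha_i = beta_i = 1
\end{verbatim}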
Using the results for the linear SPDE~\eqref{E:linear} we can
now study the full SPDE~\eqref{E:SPDE}. The result is given in the
following lemma.
\begin{lemma}\label{L:density}
Let $\mathcal{L}$ be negative definite. Furthermore, let $f = F'$ where
$F\in \mathrm{C}^2(\mathbb{R}, \mathbb{R})$ is bounded from above with bounded second
derivative. Then the following statements hold:
\begin{jvlist}
\item[1.] The nonlinear SPDE~\eqref{E:SPDE} has global, continuous
$\mathcal{H}$-valued solutions.
\item[2.] Equation~\eqref{E:SPDE} has a unique stationary
distribution~$\mu$ which is given by
\begin{equation*}
\frac{d\mu}{d\nu}(u)
= \frac{1}{Z} \exp\Bigl(
\int_0^1 F\bigl(u(x)\bigr) \,dx
\Bigr)
\end{equation*}
where $\nu$ is the stationary distribution of~\eqref{E:linear} from
lemma~\ref{L:C-exact} and $Z$ is the normalisation constant.
\end{jvlist}
\end{lemma}
\begin{proof}
This result is well known, see \textit{e.g.}\ \citet{Za89} or
\citet[corollary~4.5]{HaiStuaVo07}.
\end{proof}
\section{Finite Element Approximation}
\label{S:SDE}
In this section we consider finite dimensional approximations of the
SPDE~\eqref{E:SPDE}, obtained by discretising space using the finite
element method. The approximation follows the
same approach as for deterministic PDEs. For background on the
deterministic case we refer to \cite{BreSco02} or~\cite{Jo90}.
To discretise space, let $n\in\mathbb{N}$, $\Delta x = 1/n$ and consider $x$-values
on the grid $k\Delta x$ for $k=0,1,\ldots,n$. Since the differential operator
$\mathcal{L}$ in~\eqref{E:SPDE} is a second order differential operator, we
can choose a finite element basis consisting of ``hat functions''
$\phi_i$ for $i=0,1,\ldots,n$ which have $\phi_i(i \,\Delta x)=1$, $\phi_i(j \,\Delta x)
= 0$ for all $j\neq i$, and which are affine between the grid points.
Formally, the weak (in the PDE-sense)
formulation of SPDE~\eqref{E:SPDE} can be written as
\begin{equation*}
\la v, du(t) \ra = B(v, u) \,dt + \la v, f\bigl(u(t)\bigr) \ra \,dt + \sqrt{2} \la v, dw(t)\ra
\end{equation*}
where $\la \,\cdot\,, \,\cdot\,\ra$ denotes the $\mathrm{L}^2$-inner product
and the bilinear form~$B$ is obtained from $\la v, \mathcal{L} u \ra$ by
integration by parts, inserting the boundary conditions from the
domain~\eqref{E:domain}:
\begin{equation*}
B(u, v)
= - \frac{\alpha_1}{\beta_1}\, u(1) v(1) - \frac{\alpha_0}{\beta_0}\, u(0) v(0)
- \int_0^1 u'(x) v'(x) \,dx,
\end{equation*}
where a boundary term is dropped whenever the corresponding boundary
condition is of Dirichlet type ($\beta_i = 0$).
The discretised solution is found by taking $u$ and~$v$ to be in the
space spanned by the functions~$\phi_i$, \textit{i.e.}\ by using the ansatz
\begin{equation*}
u(t) = \sum_j U_j(t) \phi_j
\end{equation*}
and then considering the following system of equations:
\begin{equation}\label{E:FE-SDE0}
\la \phi_i, \sum_j dU_j \phi_j\ra
= \la \phi_i, \d_x^2 \sum_j U_j \phi_j\ra \,dt
+ \la \phi_i, f\bigl(\sum_j U_j \phi_j\bigr) \ra \,dt
+ \sqrt2 \la \phi_i, dw\ra.
\end{equation}
The domain $V$ of the bilinear form $B$ depends on the boundary
conditions of~$\mathcal{L}$; there are four different cases:
\begin{enumerate}
\item If $\beta_0, \beta_1 \neq 0$ in~\eqref{E:domain}, \textit{i.e.}\
for von~Neumann or Robin boundary conditions, we have $V =
\mathrm{H}^1\bigl([0,1], \mathbb{R}\bigr)$ and we consider the basis functions
$\phi_i$ for $i\in I = \{ 0, 1, \ldots, n-1, n \}$.
\item If $\beta_0 = 0$ and $\beta_1 \neq 0$, \textit{i.e.}\ for a
Dirichlet boundary condition at the left boundary, we have $V =
\bigl\{ u \in \mathrm{H}^1\bigl([0,1], \mathbb{R}\bigr) \bigm| u(0)=0 \bigr\}$ and we
consider the basis functions $\phi_i$ for $i\in I = \{ 1, \ldots,
n-1, n \}$.
\item If $\beta_0 \neq 0$ and $\beta_1 = 0$, \textit{i.e.}\ for a
Dirichlet boundary condition at the right boundary, we have $V =
\bigl\{ u \in \mathrm{H}^1\bigl([0,1], \mathbb{R}\bigr) \bigm| u(1)=0 \bigr\}$ and we
consider the basis functions $\phi_i$ for $i\in I = \{ 0, 1, \ldots,
n-1 \}$.
\item If $\beta_0, \beta_1 = 0$, \textit{i.e.}\ for Dirichlet boundary
conditions at both boundaries, we have $V = \bigl\{ u \in
\mathrm{H}^1\bigl([0,1], \mathbb{R}\bigr) \bigm| u(0) = u(1) = 0 \bigr\}$ and we
consider the basis functions $\phi_i$ for $i\in I = \{ 1, \ldots,
n-1 \}$.
\end{enumerate}
Throughout the rest of the text we will write $I$ for the index set of
the finite element discretisation as above, and the discretised
solution $u = \sum_{j\in I} U_j \phi_j$ will be described by the
coefficient vector $U\in\mathbb{R}^I$. In all cases we define the ``stiffness
matrix'' $L^{(n)} \in \mathbb{R}^{I\times I}$ by $L^{(n)}_{ij} = B(\phi_i,
\phi_j)$ for all $i,j \in I$. For the given basis functions we get
\begin{equation*}
L^{(n)}_{ij} =
\begin{cases}
- \frac{2}{\Delta x} & \mbox{if $i=j \notin\{0, n\}$,} \\
+ \frac{1}{\Delta x} & \mbox{if $i\in\{j-1, j+1\}$,} \\
- \frac{1}{\Delta x} - \frac{\alpha_0}{\beta_0}& \mbox{if $i=j=0$,} \\
- \frac{1}{\Delta x} - \frac{\alpha_1}{\beta_1}& \mbox{if $i=j=n$,} \\
0 & \mbox{else,}
\end{cases}
\end{equation*}
where the cases $i=j=0$ and $i=j=n$ cannot occur for Dirichlet
boundary conditions. The ``mass matrix'' $M\in\mathbb{R}^{I\times I}$ is
defined by $M_{ij} = \la \phi_i, \phi_j\ra$ and for $i,j \in I$ we get
\begin{equation*}
M_{ij} =
\begin{cases}
\frac{4}{6} \Delta x & \mbox{if $i=j \notin\{0, n\}$,} \\
\frac{1}{6} \Delta x & \mbox{if $i\in\{j-1, j+1\}$,} \\
\frac{2}{6} \Delta x & \mbox{if $i=j \in\{0, n\}$,} \\
0 & \mbox{else,}
\end{cases}
\end{equation*}
where, again, the cases $i=j=0$ and $i=j=n$ don't occur for Dirichlet
boundary conditions. We note that the matrix~$L^{(n)}$ only has
the prefactor $1/\Delta x$ instead of the $1/\Delta x^2$ one would expect for a
second derivative. The ``missing'' $\Delta x$ appears in the matrix~$M$.
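For concreteness, the following sketch assembles $L^{(n)}$ and $M$ from
the formulas above. It is an illustration only: it assumes NumPy, dense
matrices, and the Robin/von~Neumann case $\beta_0, \beta_1 \neq 0$, so
that $I = \{0, 1, \ldots, n\}$; in practice one would store the matrices
in sparse tridiagonal form.
\begin{verbatim}
import numpy as np

def fe_matrices(alpha0, beta0, alpha1, beta1, n):
    """Stiffness matrix L^(n) and mass matrix M for the hat-function
    basis, in the case beta0, beta1 != 0 where I = {0, 1, ..., n}."""
    dx = 1.0 / n
    d = np.full(n + 1, -2.0 / dx)
    d[0] = -1.0 / dx - alpha0 / beta0
    d[-1] = -1.0 / dx - alpha1 / beta1
    e = np.full(n, 1.0 / dx)
    L = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

    m = np.full(n + 1, 4.0 * dx / 6.0)
    m[0] = m[-1] = 2.0 * dx / 6.0
    me = np.full(n, dx / 6.0)
    M = np.diag(m) + np.diag(me, 1) + np.diag(me, -1)
    return L, M
\end{verbatim}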
Since
\begin{equation*}
\Cov(\la\phi_i, w\ra, \la\phi_j, w\ra) = \la\phi_i, \phi_j\ra = M_{ij},
\end{equation*}
equation~\eqref{E:FE-SDE0} can be written as
\begin{equation*}
M \, dU_t
= L^{(n)} U_t \,dt + f_n(U_t) \,dt + \sqrt{2} M^{1/2} dW_t
\end{equation*}
where $f_n\colon \mathbb{R}^I\to \mathbb{R}^I$ is defined by
\begin{equation}\label{E:FE-f}
f_n(u)_i = \bigl\la \phi_i, f\bigl(\sum_{j\in I} u_j \phi_j\bigr) \bigr\ra
\end{equation}
for all $u\in\mathbb{R}^I$ and $i\in I$. Multiplication with $M^{-1}$ then
yields the following SDE describing the evolution of the
coefficients~$(U_i)_{i\in I}$:
\begin{definition}
The {\em finite element discretisation} of SPDE~\eqref{E:SPDE} is given by
\begin{equation}\label{E:FE-SDE}
dU_t
= M^{-1} L^{(n)} U_t \,dt
+ M^{-1} f_n(U_t) \,dt
+ \sqrt{2} M^{-1/2}\, dW_t
\end{equation}
where $W$ is an $|I|$-dimensional standard Brownian motion, $I\subseteq \{0, 1, \ldots, n\}$
is the index set of the finite element discretisation, and
$L^{(n)}$ and $M$ are as above.
\end{definition}
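As an illustration of how~\eqref{E:FE-SDE} can be simulated, the
following sketch applies the explicit Euler--Maruyama scheme. It is
meant only to make the scheme concrete: time discretisation introduces
an additional error which is not analysed in this article, and the
drift \texttt{f\_n} and the matrices are assumed to be provided,
\textit{e.g.}\ by the sketch above.
\begin{verbatim}
import numpy as np

def euler_maruyama(L, M, f_n, U0, dt, n_steps, rng):
    """Explicit Euler-Maruyama steps for
    dU = M^{-1} (L U + f_n(U)) dt + sqrt(2) M^{-1/2} dW."""
    M_inv = np.linalg.inv(M)
    w, V = np.linalg.eigh(M_inv)           # M is symmetric positive definite
    M_inv_sqrt = (V * np.sqrt(w)) @ V.T    # symmetric square root of M^{-1}
    U = U0.copy()
    for _ in range(n_steps):
        dW = rng.standard_normal(len(U)) * np.sqrt(dt)
        U = U + M_inv @ (L @ U + f_n(U)) * dt \
              + np.sqrt(2.0) * M_inv_sqrt @ dW
    return U
\end{verbatim}
As discussed after lemma~\ref{L:preconditioning} below, the factor
$M^{-1/2}$ can be avoided altogether when only the stationary
distribution is of interest.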
Our aim is to show that the stationary distribution of~\eqref{E:FE-SDE}
converges to the stationary distribution of the SPDE~\eqref{E:SPDE}.
We start our analysis by considering the linear
case $f\equiv 0$. For this case the finite element discretisation
simplifies to~\eqref{E:FE-linear} below.
\begin{lemma}\label{L:FE-linear}
Let $\mathcal{L}$ be negative definite. Then $\nu_n = \mathcal{N}\bigl(0,
(-L^{(n)})^{-1}\bigr)$ is the unique stationary distribution of
\begin{equation}\label{E:FE-linear}
dU_t
= M^{-1} L^{(n)} U_t \,dt
+ \sqrt{2} M^{-1/2}\, dW_t.
\end{equation}
\end{lemma}
\begin{proof}
Since $\mathcal{L}$ is a negative operator, the matrix $L^{(n)}$ is a
symmetric, negative definite matrix. As the product of a positive
definite symmetric matrix and a negative definite symmetric matrix,
$M^{-1}L^{(n)}$ has only negative eigenvalues; they coincide with
the eigenvalues of the symmetric, negative definite matrix
\begin{equation*}
M^{-1/2} L^{(n)} M^{-1/2}
= - \bigl((-L^{(n)})^{1/2}M^{-1/2}\bigr)^\top \, \bigl((-L^{(n)})^{1/2} M^{-1/2}\bigr).
\end{equation*}
From \citet[theorem~8.2.12]{Arn74} we know that
the unique stationary distribution of the
SDE~\eqref{E:FE-linear} is then~$\mathcal{N}(0,C^{(n)})$ where $C^{(n)}$ solves the
Lyapunov equation
\begin{equation*}
M^{-1}L^{(n)} C^{(n)} + C^{(n)} L^{(n)} M^{-1} = - 2 M^{-1}.
\end{equation*}
By theorem~5.2.2 of~\citet{Lancaster-Rodman95}, this system of
linear equations has a unique solution and it is easily verified
that this solution is given by $C^{(n)} = (-L^{(n)})^{-1}$.
\end{proof}
The following lemma shows that for $f\mathrm{e}quiv 0$ there is no
discretisation error at all: the stationary distributions of the SPDE
\eqref{E:linear}, projected to~$\mathbb{R}^I$, and of the finite element
discretisation~\eqref{E:FE-SDE} coincide.
\begin{lemma}\label{L:exact}
Define $\Pi\colon \mathrm{C}\bigl([0,1],\mathbb{R}\bigr)\to \mathbb{R}^I$ by
\begin{equation}\label{E:Pi}
(\Pi u)_i = u (i\Delta x) \qquad \forall i\in I.
\end{equation}
Let $\nu$ be the stationary distribution of the linear
SPDE~\eqref{E:linear} on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ and let $\nu_n$ be
the stationary distribution of the linear finite element
discretisation~\eqref{E:FE-linear}. Then we have
\begin{equation*}
\nu_n = \nu\circ\Pi^{-1}
\end{equation*}
for every $n\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Let $C^{\mathrm{exact}}$ be the covariance matrix of $\nu\circ\Pi^{-1}$ and let
$C^{(n)}$ be the covariance matrix of~$\nu_n$. Since both measures
under consideration are centred Gaussian, it suffices to show
$C^{\mathrm{exact}} = C^{(n)}$.
By lemma~\ref{L:C-exact}, the matrix
$C^{\mathrm{exact}}$ satisfies
\begin{equation*}
C^{\mathrm{exact}}_{i,j} = C(i\Delta x, j\Delta x) \qquad \forall i,j \in I
\end{equation*}
where $C$ is given by equation~\eqref{E:C-exact}. By
lemma~\ref{L:FE-linear} we have $C^{(n)} = (-L^{(n)})^{-1}$. A
simple calculation, using the fact that both $C^{\mathrm{exact}}$ and $L^{(n)}$
are known explicitly, shows $C^{\mathrm{exact}} L^{(n)} = -I$ and thus
$C^{\mathrm{exact}}=C^{(n)}$ (the four different cases for the boundary
conditions need to be checked separately). This completes the
proof.
\end{proof}
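Lemma~\ref{L:exact} is easy to verify numerically. The following
sketch is an illustration only (it assumes NumPy and the hypothetical
helper \texttt{fe\_matrices} from the sketch above): it evaluates the
covariance function~\eqref{E:C-exact} on the grid and compares it with
$(-L^{(n)})^{-1}$; by the lemma, the difference should be of the order
of machine precision.
\begin{verbatim}
import numpy as np

def covariance_exact(alpha0, beta0, alpha1, beta1, n):
    """The covariance function C(x, y) of the linear SPDE, evaluated
    on the grid points (i dx, j dx)."""
    D = alpha0*alpha1 + alpha0*beta1 + beta0*alpha1
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    return ((beta0*beta1 + alpha0*beta1*X*Y
             + beta0*alpha1*(1 - X)*(1 - Y)) / D
            + np.minimum(X, Y) - X*Y)

L, M = fe_matrices(1.0, 1.0, 2.0, 1.0, 10)
C = covariance_exact(1.0, 1.0, 2.0, 1.0, 10)
print(np.max(np.abs(C - np.linalg.inv(-L))))   # of the order 1e-15
\end{verbatim}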
The preceding results only consider the linear case and for the
general case, in the presence of the non-linearity~$f$, we can of
course no longer expect a similar result to hold. As a starting point
for analysing this case, we reproduce a well-known result which allows us
to identify the stationary distribution of the discretised finite
element equation.
\begin{lemma}\label{L:preconditioning}
Let $F\in \mathrm{C}^2\bigl(\mathbb{R}^d, \mathbb{R})$ with bounded second derivatives and
satisfying the condition $Z = \int_{\mathbb{R}^d} \mathrm{e}^{2 F(x)} \,dx <
\infty$. Furthermore, let $A\in\mathbb{R}^{d\times d}$ be invertible. Then
the SDE
\begin{equation}\label{E:grad-log-phi-SDE}
dX_t = A A^\top \nabla F(X_t) \,dt + A \,dW_t
\end{equation}
has a unique stationary distribution which has density
\begin{equation*}
\phi(x) = \frac1Z \mathrm{e}^{2F(x)}
\mathrm{e}nd{equation*}
with respect to the Lebesgue measure on~$\mathbb{R}^d$.
\mathrm{e}nd{lemma}
\begin{proof}
Define $G(y) = F(Ay)$ for all $y\in \mathbb{R}^d$. By the assumptions on
$F$ we have $G\in \mathrm{C}^2\bigl(\mathbb{R}^d, \mathbb{R})$ with bounded second
derivatives and $Z_G = \int_{\mathbb{R}^d} \mathrm{e}^{2 G(y)} \,dy < \infty$.
Therefore, the SDE
\begin{equation*}
dY_t = \nabla G(Y_t) \,dt + dW_t
\end{equation*}
has a unique stationary distribution with density
\begin{equation*}
\psi(y) = \frac{1}{Z_G} \mathrm{e}^{2G(y)}.
\end{equation*}
Since $\nabla G(y) = A^\top \nabla F(A y)$, we have
\begin{equation*}
dY_t = A^\top \nabla F(A Y_t) \,dt + dW_t
\end{equation*}
and multiplying this equation by $A$ gives
\begin{equation*}
d(AY_t) = A A^\top \nabla F(A Y_t) \,dt + A\,dW_t.
\end{equation*}
Consequently, $X_t = AY_t$ satisfies the
SDE~\eqref{E:grad-log-phi-SDE} and has a unique stationary
distribution with density proportional to $\psi(A^{-1}x) \propto
\mathrm{e}^{2G(A^{-1}x)} = \mathrm{e}^{2F(x)}$. Since this function, up to a
multiplicative constant, coincides with $\phi$, the process $X$ has
stationary density~$\phi$.
\end{proof}
Because the stationary distribution in the lemma does not depend
on~$A$, the stationary distribution of~\eqref{E:grad-log-phi-SDE} does
not change when we remove/add $A$ from the equation. The process of
introducing the matrix $A$ is sometimes called ``preconditioning the
SDE''.
In cases where we are only interested in the stationary distribution of
a discretised SPDE, the argument from lemma~\ref{L:preconditioning}
allows us to omit the mass matrix~$M$ from the finite element
SDE~\eqref{E:FE-SDE}. In particular we don't need to consider the
potentially computationally expensive square root~$M^{1/2}$ in
numerical simulations.
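The following sketch illustrates lemma~\ref{L:preconditioning}
numerically: the same target density $\propto \mathrm{e}^{2F}$ is sampled once
with $A$ the identity and once with a non-trivial preconditioner, and
the empirical marginal variances agree up to Monte Carlo and
time-discretisation error. This is only an illustration; the choice
$F(x) = -\sum_i \log\cosh(x_i)$ and all numerical parameters are ours.
\begin{verbatim}
import numpy as np

def sample_sde(A, grad_F, x0, dt, n_steps, rng):
    """Euler-Maruyama sampling of dX = A A^T grad F(X) dt + A dW."""
    AAt = A @ A.T
    X = x0.copy()
    out = np.empty((n_steps, len(X)))
    for k in range(n_steps):
        dW = rng.standard_normal(len(X)) * np.sqrt(dt)
        X = X + AAt @ grad_F(X) * dt + A @ dW
        out[k] = X
    return out

# F(x) = -sum_i log cosh(x_i): bounded above, bounded second
# derivative, and exp(2F) is integrable, so the lemma applies.
grad_F = lambda x: -np.tanh(x)
rng = np.random.default_rng(1)
s1 = sample_sde(np.eye(2), grad_F, np.zeros(2), 1e-2, 200000, rng)
s2 = sample_sde(np.array([[2.0, 0.3], [0.0, 0.5]]),
                grad_F, np.zeros(2), 1e-2, 200000, rng)
print(s1[100000:].var(axis=0), s2[100000:].var(axis=0))
\end{verbatim}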
\begin{lemma}\label{L:FE-density}
Let $\mathcal{L}$ be negative definite. Furthermore, let $f = F'$ where
$F\in \mathrm{C}^2(\mathbb{R}, \mathbb{R})$ is bounded from above with bounded second
derivative. Then the finite element SDE~\eqref{E:FE-SDE} has a
unique stationary distribution~$\mu_n$ given by
\begin{equation}\label{E:mu-n-dens}
\frac{d\mu_n}{d\nu_n} = \frac{1}{Z_n} \exp\bigl( F_n \bigr)
\end{equation}
where
\begin{equation*}
F_n(u) = \int_0^1 F\Bigl(\sum_{j\in I} u_j \phi_j(t)\Bigr) \,dt
\qquad \forall u\in\mathbb{R}^I,
\end{equation*}
$Z_n$ is the normalisation constant and $\nu_n$ is the stationary
distribution of the linear equation from lemma~\ref{L:FE-linear}.
\end{lemma}
\begin{proof}
Let $\Phi(u) = \frac12 u^\top L^{(n)} u + F_n(u)$ for all $u\in\mathbb{R}^I$.
Then
\begin{equation*}
\d_i \Phi(u)
= (L^{(n)}u)_i + \bigl\la \phi_i, F'\bigl(\sum_{j\in I} u_j \phi_j(t) \bigr) \bigr\ra
= \bigl( L^{(n)}u + f_n(u) \bigr)_i
\end{equation*}
for all $i\in I$ and thus \eqref{E:FE-SDE} can be written as
\begin{equation*}
dU_t = M^{-1} \nabla\Phi(U_t) \,dt + \sqrt{2} M^{-1/2} \, dW_t.
\end{equation*}
By lemma~\ref{L:preconditioning}, this SDE has a unique stationary
distribution~$\mu_n$ whose density w.r.t.\ the $|I|$-dimensional
Lebesgue measure $\lambda$ is given by
\begin{equation*}
\frac{d\mu_n}{d\lambda}(u)
= \frac{1}{\tilde Z_n} \mathrm{e}^{\Phi(u)}
= \frac{1}{\tilde Z_n} \exp\Bigl(- \frac12 u^\top (-L^{(n)}) u + F_n(u) \Bigr).
\end{equation*}
From lemma~\ref{L:FE-linear} we know that the density of $\nu_n$
w.r.t.~$\lambda$ is
\begin{equation*}
\frac{d\nu_n}{d\lambda}(x)
= \frac{\bigl(\det(-L^{(n)})\bigr)^{\frac12}}{\bigl(2\pi\bigr)^{|I| / 2}}
\exp\Bigl(-\frac12 x^{\mathrm{T}} (-L^{(n)}) x\Bigr)
\end{equation*}
and consequently the distribution $\mu_n$ satisfies
\begin{equation*}
\frac{d\mu_n}{d\nu_n}(x)
= \frac{d\mu_n}{d\lambda}(x) / \frac{d\nu_n}{d\lambda}(x)
\propto \exp\bigl( F_n \bigr).
\end{equation*}
Since the right-hand side, up to constants, coincides with the
expression in~\eqref{E:mu-n-dens}, the proof is complete.
\end{proof}
\section{Main Result}
\label{S:result}
Now that we have identified the stationary distribution of the SPDE
(in section~\ref{S:SPDE}) and of the SDE (in section~\ref{S:SDE}),
we can compare the two stationary distributions.
The result is given in the following theorem.
\begin{theorem}\label{T:result}
Let $\mu$ be the stationary distribution of the SPDE~\eqref{E:SPDE}
on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$. Let $\mu_n$ be the stationary
distribution of the finite element equation~\eqref{E:FE-SDE}
on~$\mathbb{R}^I$. Let $\mathcal{L}$ be negative and assume $f=F'$ where $F\in
\mathrm{C}^2(\mathbb{R})$ is bounded from above with bounded second derivative.
Then
\begin{equation*}
\bigl\| \mu\circ\Pi^{-1} - \mu_n \bigr\|_{\mathrm{TV}} = \mathcal{O}\bigl(\frac1n\bigr)
\end{equation*}
as $n\to\infty$, where $\| \cdot \|_{\mathrm{TV}}$ denotes the
total-variation distance between probability distributions on~$\mathbb{R}^I$.
\end{theorem}
Before we prove this theorem, we first show some auxiliary results.
The following lemma will be used to get rid of the (not explicitly
known) normalisation constant~$Z_n$.
\begin{lemma}\label{L:normalising-constants}
Let $(\Om, \mathcal{F}, \mu)$ be a measure space and $f_1, f_2\colon \Om\to
[0,\infty]$ integrable with $Z_i = \int f_i \,d\mu > 0$ for $i=1,2$.
Then
\begin{equation*}
\int \Bigl| \frac{f_1}{Z_1} - \frac{f_2}{Z_2} \Bigr| \,d\mu
\leq \frac{2}{\max(Z_1, Z_2)} \int \bigl| f_1 - f_2 \bigr| \,d\mu.
\end{equation*}
\end{lemma}
\begin{proof}
Using the $\mathrm{L}^1$-norm $\|f\| = \int |f| \,d\mu$ we can write
\begin{equation*}
\bigl\| \frac{f_1}{Z_1} - \frac{f_2}{Z_2} \Bigr\|
\leq \Bigl\| \frac{f_1}{Z_1} - \frac{f_2}{Z_1} \Bigr\|
+ \Bigl\| \frac{f_2}{Z_1} - \frac{f_2}{Z_2} \Bigr\| \\
= \frac{1}{Z_1} \bigl\| f_1 - f_2 \bigr\|
+ \frac{|Z_2 - Z_1|}{Z_1Z_2} \bigl\|f_2\bigr\|.
\end{equation*}
Since $Z_i = \|f_i\|$ we can conclude
\begin{equation*}
\bigl\| \frac{f_1}{Z_1} - \frac{f_2}{Z_2} \Bigr\|
\leq \frac{1}{Z_1} \bigl\| f_1 - f_2 \bigr\|
+ \frac{\bigl|\|f_2\| - \|f_1\|\bigr|}{Z_1}
\leq \frac{2}{Z_1} \bigl\| f_1 - f_2 \bigr\|
\end{equation*}
where the second inequality comes from the inverse triangle
inequality. Without loss of generality we can assume $Z_1 \geq
Z_2$ (otherwise interchange $f_1$ and $f_2$ in the above argument)
and thus the claim follows.
\end{proof}
\begin{lemma}\label{L:Pi}
Let $\mu$ and $\nu$
be probability measures on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ with $\mu\ll\nu$
and let $\Pi\colon \mathrm{C}\bigl([0,1],\mathbb{R}\bigr)\to\mathbb{R}^I$
be the projection from~\eqref{E:Pi}.
Then $\mu\circ\Pi^{-1} \ll \nu\circ\Pi^{-1}$ and
\begin{equation*}
\frac{d(\mu\circ\Pi^{-1})}{d(\nu\circ\Pi^{-1})}\circ\Pi
= \mathbb{E}_\nu\bigl( \frac{d\mu}{d\nu} \bigm| \Pi\bigr).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\phi = \frac{d\mu}{d\nu}$. Since $\mathbb{E}(\phi|\Pi)$ is
$\Pi$-measurable, there is a function $\psi\colon \mathbb{R}^I\to \mathbb{R}$ with
$\mathbb{E}(\phi|\Pi) = \psi\circ\Pi$. Let $A\subseteq\mathbb{R}^I$ be measurable.
Then
\begin{multline*}
\int_{\mathbb{R}^I} \psi \, 1_A \,d(\nu\circ\Pi^{-1})
= \int_{\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)} \psi\circ\Pi \, 1_{\Pi^{-1}(A)} \,d\nu
= \int_{\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)} \mathbb{E}(\phi|\Pi) \, 1_{\Pi^{-1}(A)} \,d\nu \\
= \int_{\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)} \phi \, 1_{\Pi^{-1}(A)} \,d\nu
= \mu\circ\Pi^{-1}(A)
\end{multline*}
by the definition of conditional expectation. This shows that
$\psi$ is indeed the required density.
\end{proof}
\begin{proof} (of theorem~\ref{T:result}).
Let $\Pi$, $\nu$ and $\nu_n$ be as in lemma~\ref{L:exact}. Using
lemmata \ref{L:exact} and~\ref{L:Pi} we find
\begin{equation}\label{E:proof-start}
\begin{split}
\bigl\| \mu\circ\Pi^{-1} - \mu_n \bigr\|_{\mathrm{TV}}
&= \mathbb{E}_{\nu_n}\Bigl| \frac{d\mu\circ\Pi^{-1}}{d\nu_n} - \frac{d\mu_n}{d\nu_n} \Bigr| \\
&= \mathbb{E}_\nu \Bigl| \frac{d\mu\circ\Pi^{-1}}{d\nu\circ\Pi^{-1}}\circ\Pi - \frac{d\mu_n}{d\nu_n}\circ\Pi \Bigr| \\
&= \mathbb{E}_\nu\Bigl| \mathbb{E}_\nu\bigl( \frac{d\mu}{d\nu} - \frac{d\mu_n}{d\nu_n}\circ\Pi \bigm| \Pi\bigr) \Bigr| \\
&\leq \mathbb{E}_\nu\Bigl| \frac{d\mu}{d\nu} - \frac{d\mu_n}{d\nu_n}\circ\Pi \Bigr|.
\end{split}
\end{equation}
From lemma~\ref{L:density} we know
\begin{equation*}
\frac{d\mu}{d\nu}(U)
= \frac{1}{Z} \exp\Bigl( \int_0^1 F\bigl(U_x\bigr) \,dx \Bigr).
\end{equation*}
Lemma~\ref{L:FE-density} gives
\begin{equation*}
\frac{d\mu_n}{d\nu_n}
= \frac{1}{Z_n} \exp\bigl( F_n \bigr),
\end{equation*}
and by the definition of $F_n$ we have
\begin{equation*}
\frac{d\mu_n}{d\nu_n}\circ\Pi(U)
= \frac{1}{Z_n} \exp\Bigl(\int_0^1 F\bigl(U^{(n)}_x\bigr) \,dx \Bigr)
\end{equation*}
where $U^{(n)} = \sum_{j\in I} \Pi(U)_j \phi_j$ for all $U\in
\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ and $\phi_j$, $j\in I$ are the finite
element basis functions. Using lemma~\ref{L:normalising-constants} we
get
\begin{multline*}
\bigl\| \mu\circ\Pi^{-1} - \mu_n \bigr\|_{\mathrm{TV}}
\leq \mathbb{E}_\nu\Bigl| \frac{d\mu}{d\nu}
- \frac{d\mu_n}{d\nu_n}\circ\Pi \Bigr| \\
\leq \frac{2}{Z} \int \Bigl|
\exp\Bigl( \int_0^1 F\bigl(U_x\bigr) \,dx \Bigr)
- \exp\Bigl(\int_0^1 F\bigl(U^{(n)}_x\bigr) \,dx \Bigr)
\Bigr| \,d\nu(U).
\end{multline*}
Since the inequality $\bigl|\mathrm{e}^x - 1 \bigr| \leq |x|
\exp\bigl(|x|\bigr)$ holds for all $x\in\mathbb{R}$, we conclude
\begin{equation*}
\begin{split}
&\hskip-5mm \bigl\| \mu\circ\Pi^{-1} - \mu_n \bigr\|_{\mathrm{TV}} \\
&\leq \frac{2}{Z}
\mathbb{E} \left( \exp\Bigl( \int_0^1 F\bigl(U^{(n)}_x\bigr) \,dx \Bigr)
\cdot \Bigl|
\exp\Bigl( \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \Bigr)
- 1 \Bigr| \right) \\
&\leq \frac{2}{Z}
\mathbb{E} \left( \exp\Bigl( \int_0^1 F\bigl(U^{(n)}_x\bigr) \,dx \Bigr) \right. \\
&\hskip1.5cm
\left. \cdot
\Bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \Bigr|
\cdot \exp\Bigl( \bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \bigr| \Bigr)
\right)
\end{split}
\end{equation*}
where $U$ is distributed according to the Gaussian measure~$\nu$.
Since $F$ is bounded from above we can estimate the first
exponential in the expectation by a constant. Using the
Cauchy-Schwarz inequality we get
\begin{multline}\label{E:central-estimate}
\bigl\| \mu\circ\Pi^{-1} - \mu_n \bigr\|_{\mathrm{TV}} \\
\leq c_1
\Bigl\| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \Bigr\|_2
\cdot
\Bigl\| \exp\Bigl( \bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \bigr| \Bigr) \Bigr\|_2
\end{multline}
for some constant~$c_1$.
\begin{figure}
\begin{center}
\input{fig1.tikz}
\end{center}
\caption{\label{fig:BB}\it Illustration of the convergence of
$U^{(n)}$ to $U$. Under the distribution $\nu$, the path $U$ is
a Brownian Bridge with random boundary conditions. Since
$U^{(n)}$ is the linear interpolation of $U$ between the grid
points, the difference $U^{(n)}-U$ consists of a chain of $n$
independent Brownian bridges.}
\end{figure}
The main step in the proof is to estimate the right-hand side
of~\eqref{E:central-estimate} by showing that $\bigl| \int_0^1 F(U_x) -
F(U^{(n)}_x) \,dx \bigr|$ gets small as $n\to\infty$. By
lemma~\ref{L:C-exact}, the path $U$ in stationarity is just a
Brownian bridge (with random boundary values) and, by definition,
$U^{(n)}$ is the linear interpolation of the values of $U$ at the
grid points (see figure~\ref{fig:BB} for illustration). Thus, the
difference $U - U^{(n)}$ can be written as
\begin{equation*}
\bigl(U-U^{(n)}\bigr)(x)
= \sum_{i=1}^n 1_{[\frac{i-1}{n},\frac{i}{n}]}(x) \; \frac{1}{\sqrt{n}}
B^{(i)}_{nx - (i-1)}
\end{equation*}
where $B^{(1)}, \ldots, B^{(n)}$ are standard Brownian
bridges, independent of each other and of $U^{(n)}$.
Using Taylor approximation for $F$ we find
\begin{multline}\label{E:Taylor}
\Bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \Bigr| \\
\leq \Bigl| \int_0^1 f\bigl(U^{(n)}_x\bigr)\bigl(U_x-U^{(n)}_x\bigr) \,dx \Bigr|
+ \frac12 \|F''\|_\infty \int_0^1 \bigl(U_x-U^{(n)}_x\bigr)^2 \,dx \\
=: \bigl| P_n \bigr| + \frac12 \|F''\|_\infty Q_n.
\end{multline}
For the term $|P_n|$ we find
\begin{equation*}
P_n
= \sum_{i=1}^n \int_0^{1/n}
f\bigl(U^{(n)}_{\frac{i-1}{n}+x}\bigr) \frac{1}{\sqrt{n}} B^{(i)}_{nx} \,dx
= n^{-3/2} \sum_{i=1}^n \int_0^1 f(U^{(n)}_{\frac{i-1}{n}+y/n}) B^{(i)}_{y} \,dy
\end{equation*}
where $B^{(i)}$ are the Brownian bridges defined above. As an
abbreviation write $\bar f_i(y) =
f\bigl(U^{(n)}_{\frac{i-1}{n}+y/n}\bigr)$. Conditioned on the value
of $U^{(n)}$, the integrals $\int_0^1 \bar f_i(y) B^{(i)}_{y} \,dy$
are centred Gaussian with
\begin{multline*}
\Var\Bigl(\int_0^1 \bar f_i(y) B^{(i)}_{y} \,dy \Bigm| U^{(n)} \Bigr)
= \int_0^1 \bar f_i(y) \int_0^1 (y\wedge z - yz) \bar f_i(z) \,dz \,dy \\
\leq c_2 \| \bar f_i \|_\infty^2
\leq c_3^2 \bigl( \|U^{(n)}\|_\infty + 1 \bigr)^2
\end{multline*}
for some constants $c_2, c_3 > 0$ where the last inequality uses the
fact that $f$ is Lipschitz continuous. We get $\mathbb{E}\bigl(P_n\bigm|
\Pi(U) \bigr) = 0$ and, since the $B^{(i)}$ are independent,
\begin{equation*}
\mathbb{E}\bigl( P_n^2 \bigm| U^{(n)} \bigr)
\leq c_3^2 \bigl( \|U^{(n)}\|_\infty + 1 \bigr)^2 n^{-2}.
\end{equation*}
Using the tower property for conditional expectations, and using
$\|U^{(n)}\|_\infty \leq \|U\|_\infty$, we get
\begin{equation*}
\mathbb{E}\bigl( P_n^2 \bigr)
\leq c_3^2 \mathbb{E}\Bigl(\bigl( \|U\|_\infty + 1 \bigr)^2\Bigr) n^{-2}.
\end{equation*}
Since $U$ is a Gaussian process, $\bigl\| \|U\|_\infty + 1
\bigr\|_2$ is finite, and we can conclude
\begin{equation*}
\mathbb{E}\bigl( P_n^2 \bigr) \leq c_4^2 n^{-2}
\end{equation*}
for some constant~$c_4$. Similarly, for $Q_n$ we find
\begin{equation}\label{E:Qn}
Q_n = \sum_{i=1}^n \int_0^1 \bigl(B^{(i)}_y\bigr)^2 \,dy \cdot n^{-2}
\end{equation}
and thus, using independence of the $B^{(i)}$ again,
\begin{equation*}
\mathbb{E}\bigl( Q_n^2 \bigr)
\leq c_5^2 n^{-2}
\end{equation*}
for some constant~$c_5$. Combining these estimates we get
\begin{equation}\label{E:term1}
\Bigl\| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \Bigr\|_2
\leq \bigl\| P_n \bigr\|_2
+ \frac12 \bigl\|F''\bigr\|_\infty \bigl\| Q_n \bigr\|_2
\leq c_4 n^{-1} + \frac12 \bigl\|F''\bigr\|_\infty\, c_5 n^{-1}
\end{equation}
and thus we have shown the required bound for the first factor
of~\eqref{E:central-estimate}.
Finally, we have to show that the second factor
in~\eqref{E:central-estimate} is bounded, uniformly in~$n$:
From \eqref{E:Taylor} we get
\begin{multline}\label{E:term2-start}
\Bigl\| \exp\Bigl( \bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \bigr| \Bigr) \Bigr\|_2
\leq \mathbb{E}\Bigl( \exp\bigl( 2 |P_n| + \|F''\|_\infty Q_n \bigr) \Bigr)^{1/2} \\
\leq \bigl\| \mathrm{e}^{2|P_n|} \bigr\|_2^{1/2}
\cdot \bigl\| \mathrm{e}^{\|F''\|_\infty Q_n} \bigr\|_2^{1/2}.
\end{multline}
It is easy to check that, for all $\sigma>0$, an
$\mathcal{N}(0,\sigma^2)$-distributed random variable~$X$ satisfies the
inequality $\mathbb{E}(\mathrm{e}^{|X|}) \leq \mathbb{E}(\mathrm{e}^{X}) + \mathbb{E}(\mathrm{e}^{-X}) = 2\mathrm{e}^{\sigma^2/2}$ and thus we have
\begin{equation}\label{E:exp-P}
\bigl\| \mathrm{e}^{2|P_n|} \bigr\|_2^{1/2}
= \mathbb{E}\bigl( \mathrm{e}^{4|P_n|} \bigr)^{1/4}
\leq 2 \exp(8\, c_4^2 n^{-2})
\leq 3
\end{equation}
for all sufficiently large~$n$. Furthermore, using~\eqref{E:Qn}
and the fact that the $B^{(i)}$ are i.i.d., we find
\begin{equation}\label{E:exp-Q-start}
\bigl\| \mathrm{e}^{\|F''\|_\infty Q_n} \bigr\|_2^{1/2}
= \mathbb{E}\Bigl( \exp\bigl( 2 \|F''\|_\infty |B^{(1)}|_{\mathrm{L}^2}^2 \, n^{-2} \bigr) \Bigr)^{n/4}
\end{equation}
where we write $|\cdot|_{\mathrm{L}^2}$ for the $\mathrm{L}^2$-norm on the space
$\mathrm{L}^2\bigl([0,1],\mathbb{R}\bigr)$. By Fernique's theorem \citep{Fe70}
there exists an $\varepsilon>0$ with $\mathbb{E}\bigl( \exp(\varepsilon \,
|B^{(1)}|_{\mathrm{L}^2}^2) \bigr) < \infty$. For all $\lambda>0$ we have
\begin{equation*}
\mathbb{E}\bigl(\mathrm{e}^{\lambda |B^{(1)}|_{\mathrm{L}^2}^2}\bigr)
= \int_0^\infty \mathbb{P}\bigl(\mathrm{e}^{\lambda |B^{(1)}|_{\mathrm{L}^2}^2}\geq a\bigr) \,da
\leq 1 + \int_0^\infty \mathbb{P}\bigl(|B^{(1)}|_{\mathrm{L}^2}^2 \geq b\bigr) \,\lambda\mathrm{e}^{\lambda b} \,db
\end{equation*}
and using Markov's inequality $\mathbb{P}\bigl(|B^{(1)}|_{\mathrm{L}^2}^2\geq
b\bigr) \leq \mathbb{E}\bigl(\mathrm{e}^{\varepsilon |B^{(1)}|_{\mathrm{L}^2}^2}\bigr) \mathrm{e}^{-\varepsilon
b}$ we get
\begin{equation*}
\mathbb{E}\bigl(\mathrm{e}^{\lambda |B^{(1)}|_{\mathrm{L}^2}^2}\bigr)
\leq 1 + \int_0^\infty \mathbb{E}\bigl(\mathrm{e}^{\varepsilon |B^{(1)}|_{\mathrm{L}^2}^2}\bigr) \mathrm{e}^{-\varepsilon b} \,\lambda\mathrm{e}^{\lambda b} \,db
= 1 + \frac{\lambda}{\varepsilon-\lambda} \mathbb{E}\bigl(\mathrm{e}^{\varepsilon |B^{(1)}|_{\mathrm{L}^2}^2}\bigr)
\end{equation*}
for all $\lambda<\varepsilon$. Substituting this bound
into~\eqref{E:exp-Q-start} we have
\begin{equation}\label{E:exp-Q}
\bigl\| \mathrm{e}^{\|F''\|_\infty Q_n} \bigr\|_2^{1/2}
\leq \bigl(1 + \frac{c_6}{n^2} \bigr)^n
\leq 2
\end{equation}
for some constant $c_6$ and all sufficiently large~$n$. From
\eqref{E:exp-P} and~\eqref{E:exp-Q} we see that the right-hand
side of~\eqref{E:term2-start} is bounded uniformly in~$n$,
\textit{i.e.}
\begin{equation}\label{E:term2}
\Bigl\| \exp\Bigl( \bigl| \int_0^1 F\bigl(U_x\bigr) - F\bigl(U^{(n)}_x\bigr) \,dx \bigr| \Bigr) \Bigr\|_2
\leq c_7
\end{equation}
for all $n\in\mathbb{N}$ and some constant~$c_7$.
Combining \eqref{E:term1} and~\eqref{E:term2} we see that the
right-hand side in~\eqref{E:central-estimate} is of
order~$\mathcal{O}(n^{-1})$. This completes the proof.
\end{proof}
In the preceding theorem we compared the stationary
distribution~$\mu_n$ of the finite element SDE on $\mathbb{R}^I$ and the
stationary distribution~$\mu$ of the SPDE on $\mathrm{C}\bigl([0,1],
\mathbb{R}\bigr)$ by projecting $\mu$ onto the finite dimensional space
$\mathbb{R}^I$. An alternative approach is to embed $\mathbb{R}^I$ into
$\mathrm{C}\bigl([0,1], \mathbb{R}\bigr)$ instead. A na{\"\i}ve implementation of
this idea would be to extend vectors from $\mathbb{R}^I$ to continuous
functions via linear interpolation. Unfortunately, the image of
$\mu_n$ when embedded into $\mathrm{C}\bigl([0,1], \mathbb{R}\bigr)$ in this way would
be mutually singular with $\mu$ and thus the total variation norm
would not provide a useful measure for the distance between the two
distributions. For this reason, we choose here a different approach,
described in the following definition.
\begin{definition}\label{D:hat}
Given a probability measure $\mu_n$ on $\mathbb{R}^I$, we define a
distribution $\hat\mu_n$ as follows: Consider a random variable $X$
which is distributed according to $\mu_n$. Given $X$, construct
$Y\in \mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ by setting $Y(k\,\Delta x) = X_k$ for
$k=0, 1, \dots, n$ and filling the gaps between these points with
$n$ Brownian bridges, independent of $X$ and of each other. Then we
denote the distribution of $Y$ by~$\hat\mu_n$.
\end{definition}
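Definition~\ref{D:hat} is constructive, and $\hat\mu_n$ can be sampled
directly. The following sketch is only an illustration (it assumes
NumPy, and the refinement parameter \texttt{m}, which controls how
finely each Brownian bridge is resolved, is our own device).
\begin{verbatim}
import numpy as np

def lift_to_path(X, m, rng):
    """Sample the path Y of the definition above on a refined grid:
    linear interpolation of the grid values X plus an independent
    Brownian bridge on each of the n subintervals, with m interior
    points per subinterval."""
    n = len(X) - 1
    dx = 1.0 / n
    t = np.linspace(0.0, 1.0, m + 2)        # local coordinate on [0, 1]
    xs, ys = [np.array([0.0])], [np.array([X[0]])]
    for k in range(n):
        dW = rng.standard_normal(m + 1) * np.sqrt(dx / (m + 1))
        W = np.concatenate(([0.0], np.cumsum(dW)))
        bridge = W - t * W[-1]               # Brownian bridge on [0, dx]
        seg = (1 - t) * X[k] + t * X[k + 1] + bridge
        xs.append((k + t[1:]) * dx)
        ys.append(seg[1:])
    return np.concatenate(xs), np.concatenate(ys)
\end{verbatim}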
\begin{lemma}\label{L:Pi-and-hat}
Let $\mu_n$ and $\nu_n$ be probability measures on $\mathbb{R}^I$ with
$\mu_n \ll \nu_n$. Then $\hat\mu_n \ll \hat\nu_n$ with
\begin{equation*}
\frac{d\hat\mu_n}{d\hat\nu_n} = \frac{d\mu_n}{d\nu_n}\circ\Pi
\end{equation*}
on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$.
\end{lemma}
\begin{proof}
Let $\psi = \frac{d\mu_n}{d\nu_n}$. Using substitution we get
\begin{equation*}
\mathbb{E}_{\hat\mu_n}\bigl(f\circ\Pi\bigr)
= \int_{\mathbb{R}^I} f \,d\mu_n
= \int_{\mathbb{R}^I} f \psi \,d\nu_n
= \mathbb{E}_{\hat\nu_n}\bigl(f\circ\Pi \cdot \psi\circ\Pi \bigr)
\end{equation*}
for all integrable $f\colon \mathbb{R}^I\to \mathbb{R}$. Since, conditioned on
the value of $\Pi$, the distributions $\hat\mu_n$ and $\hat\nu_n$
are the same, we can use the tower property to get
\begin{align*}
\hat\mu_n(A)
&= \mathbb{E}_{\hat\mu_n}\bigl(\mathbb{E}_{\hat\mu_n}(1_A|\Pi)\bigr)
= \mathbb{E}_{\hat\mu_n}\bigl(\mathbb{E}_{\hat\nu_n}(1_A|\Pi)\bigr) \\
&= \mathbb{E}_{\hat\nu_n}\bigl(\mathbb{E}_{\hat\nu_n}(1_A|\Pi) \cdot \psi\circ\Pi \bigr)
= \mathbb{E}_{\hat\nu_n}\bigl(1_A \psi\circ\Pi \bigr)
\end{align*}
for every measurable set~$A$. This shows that $\psi\circ\Pi$ is the
required density.
\end{proof}
\begin{corollary}\label{C:other}
Let $\mu$ be the stationary distribution of the SPDE~\eqref{E:SPDE}
on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$. Let $\mu_n$ be the stationary
distribution of the finite element equation~\eqref{E:FE-SDE}
on~$\mathbb{R}^I$. Let $\mathcal{L}$ be negative and assume $f=F'$ where $F\in
\mathrm{C}^2(\mathbb{R})$ is bounded from above with bounded second derivative.
Then
\begin{equation*}
\bigl\| \mu - \hat \mu_n \bigr\|_{\mathrm{TV}}
= \mathcal{O}\bigl(\frac1n\bigr)
\end{equation*}
as $n\to\infty$.
\end{corollary}
\begin{proof}
Let $\nu$ be the stationary distribution of the linear
SPDE~\eqref{E:linear} on $\mathrm{C}\bigl([0,1],\mathbb{R}\bigr)$ and let $\nu_n$
be the stationary distribution of the linear finite element
equation~\eqref{E:FE-linear} on~$\mathbb{R}^I$. By construction of the
process $U$ in the third statement of lemma~\ref{L:C-exact} and by
the Markov property for Brownian bridges, the distribution of $U$
between the grid points, conditioned on the values at the grid
points, coincides with the distribution of $n$ independent Brownian
bridges. By lemma~\ref{L:exact} the distribution of $U$ on the grid
points is given by~$\nu_n$. Thus we have $\nu = \hat\nu_n$. Using
this equality and lemma \ref{L:Pi-and-hat} we find
\begin{equation*}
\bigl\| \mu - \hat\mu_n \bigr\|_{\mathrm{TV}}
= \mathbb{E}_\nu\Bigl| \frac{d\mu}{d\nu} - \frac{d\hat\mu_n}{d\nu} \Bigr|
= \mathbb{E}_\nu\Bigl| \frac{d\mu}{d\nu} - \frac{d\mu_n}{d\nu_n} \circ \Pi \Bigr|.
\end{equation*}
Now we are in the situation of equation~\eqref{E:proof-start} and
the proof of theorem~\ref{T:result} applies without further changes.
\end{proof}
\section{Examples}
\label{S:example}
To illustrate that the suggested finite element method is a concrete
and implementable scheme, this section gives two examples for the
finite element discretisation of SPDEs, both in the context of infinite
dimensional sampling problems.
For the first example, for $c>0$, consider the SPDE
\begin{equation*}
\d_t u (t,x) = \d_x^2 u(t,x) - c^2 u(t,x) + \sqrt{2} \d_t w(t,x)
\qquad \forall (t,x) \in \mathbb{R}_+ \times (0,1)
\end{equation*}
with Robin boundary conditions
\begin{equation} \label{E:ex1-bc}
\d_xu(t,0) = c u(t,0), \quad \d_x u(t,1) = - c u(t,1)
\qquad \forall t\in \mathbb{R}_+,
\end{equation}
where $\d_t w$ is space-time white noise. From~\citet{HaiStuaVo07} we
know that the stationary distribution of this SPDE on $\mathrm{C}\bigl([0,1],
\mathbb{R}\bigr)$ coincides with the distribution of the process~$X$ given by
\begin{equation}\label{E:ex1-SDE}
\begin{split}
dX_\tau &= - c X_\tau \,d\tau + dW_\tau \qquad \forall \tau\in[0,1] \\
X_0 &\sim \mathcal{N}\bigl(0, \frac{1}{2c} \bigr),
\end{split}
\end{equation}
where the time~$\tau$ in the SDE plays the r\^ole of the space~$x$ in
the SPDE. In the framework of section~\ref{S:SPDE}, the boundary
conditions~\eqref{E:ex1-bc} correspond to the case $\alpha_0 =
\alpha_1 = c$ and $\beta_0 = \beta_1 = 1$. Since $\beta_i \neq 0$, we
need to include both boundary points in the finite element
discretisation and thus have $I=\{ 0, 1, \ldots, n \}$ and $\mathbb{R}^I \cong
\mathbb{R}^{n+1}$. The matrix $L^{(n)}$ is given by
\begin{equation*}
L^{(n)} = \frac{1}{\Delta x}
\begin{pmatrix}
-1 - c \Delta x & 1 & & & \\
1 & -2 & 1 & & \\
& 1 & -2 & 1 & \\
& & 1 & -2 & 1 \\
& & & 1 & -1 - c \Delta x
\end{pmatrix}
\in \mathbb{R}^{(n+1)\times(n+1)},
\end{equation*}
where the middle rows are repeated along the diagonal to obtain a
tridiagonal $(n+1)\times(n+1)$ matrix. Similarly, the mass matrix
$M$ is given by
\begin{equation*}
M = \frac{\Delta x}{6}
\begin{pmatrix}
2 & 1 & & & \\
1 & 4 & 1 & & \\
& 1 & 4 & 1 & \\
& & 1 & 4 & 1 \\
& & & 1 & 2
\end{pmatrix}
\in \mathbb{R}^{(n+1)\times(n+1)}
\end{equation*}
where, again, the middle rows are repeated along the diagonal.
Finally, it transpires that the discretised drift for this example is
given by $f_n(u) = - c^2 Mu$. By lemma~\ref{L:preconditioning} the
$(n+1)$-dimensional SDEs
\begin{equation*}
dU_t = M^{-1} L^{(n)} U_t \,dt - c^2 U_t \,dt + \sqrt{2} M^{-1/2} \,dW_t
\end{equation*}
and
\begin{equation*}
dU_t = L^{(n)} U_t \,dt - c^2 M U_t \,dt + \sqrt{2}\,dW_t,
\end{equation*}
where $W$ is an $(n+1)$-dimensional Brownian motion, both have the
same stationary distribution and this stationary distribution
converges to the distribution of the process~$X$
from~\eqref{E:ex1-SDE} in the sense given in theorem~\ref{T:result}
and corollary~\ref{C:other}.
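To make the scheme concrete, the following short numerical sketch (which is
not part of the analysis) assembles $L^{(n)}$ and $M$ exactly as displayed
above and runs a plain Euler--Maruyama discretisation of the preconditioned
SDE. The grid size, time step, number of steps and the value of $c$ are
illustrative choices only, and the explicit time stepping itself is an
assumption of the sketch rather than a scheme proposed in the text.
\begin{verbatim}
import numpy as np

def robin_matrices(n, c):
    """Assemble L^(n) and M for the Robin boundary example on n elements."""
    dx = 1.0 / n
    size = n + 1                                  # grid points 0, ..., n
    L = np.zeros((size, size))
    M = np.zeros((size, size))
    for i in range(size):
        L[i, i] = -2.0 / dx
        M[i, i] = 4.0 * dx / 6.0
        if i > 0:
            L[i, i - 1] = L[i - 1, i] = 1.0 / dx
            M[i, i - 1] = M[i - 1, i] = dx / 6.0
    # the first and last rows carry the Robin term and the halved mass entry
    L[0, 0] = L[-1, -1] = (-1.0 - c * dx) / dx
    M[0, 0] = M[-1, -1] = 2.0 * dx / 6.0
    return L, M

def sample_stationary(n=16, c=1.0, dt=1e-4, steps=200_000, seed=0):
    """Euler-Maruyama for dU = M^{-1} L^(n) U dt - c^2 U dt + sqrt(2) M^{-1/2} dW."""
    rng = np.random.default_rng(seed)
    L, M = robin_matrices(n, c)
    Minv = np.linalg.inv(M)
    root = np.linalg.inv(np.linalg.cholesky(M)).T   # root @ root.T = M^{-1}
    U = np.zeros(n + 1)
    for _ in range(steps):
        drift = Minv @ (L @ U) - c**2 * U
        U = U + dt * drift + np.sqrt(2.0 * dt) * (root @ rng.standard_normal(n + 1))
    return U   # one approximate sample; its marginal variances are roughly 1/(2c)

print(sample_stationary()[:4])
\end{verbatim}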
As a second example, consider the SPDE
\begin{equation*}
\d_t u (t,x) = \d_x^2 u(t,x) - \bigl(g g' + \frac12 g''\bigr)(u) + \sqrt{2} \d_t w(t,x)
\qquad \forall (t,x) \in \mathbb{R}_+ \times (0,1)
\end{equation*}
with Dirichlet boundary conditions
\begin{equation*}
u(t,0) = u(t,1) = 0
\qquad \forall t\in \mathbb{R}_+,
\end{equation*}
where $g\in\mathrm{C}^3\bigl(\mathbb{R}, \mathbb{R}\bigr)$ with bounded derivatives $g'$,
$g''$ and $g'''$. From~\citet{HaiStuaVo07} we know that the
stationary distribution of this SPDE on $\mathrm{C}\bigl([0,1], \mathbb{R}\bigr)$
coincides with the conditional distribution of the process~$X$ given
by
\begin{equation}\label{E:ex2-SDE}
\begin{split}
dX_\tau &= g(X_\tau) \,d\tau + dW_\tau \qquad \forall \tau\in[0,1] \\
X_0 &= 0,
\end{split}
\end{equation}
conditioned on $X_1 = 0$.
Since we have Dirichlet boundary conditions, the boundary points in
the finite element discretisation are not included: we have $I=\{1,
2, \ldots, n-1\}$ and $\mathbb{R}^I \cong \mathbb{R}^{n-1}$. The matrices $L^{(n)}$
and $M$ are given by
\begin{equation*}
L^{(n)} = \frac{1}{\Delta x}
\begin{pmatrix}
-2 & 1 & \\
1 & -2 & 1 \\
& 1 & -2
\end{pmatrix}
\in \mathbb{R}^{(n-1)\times(n-1)},
\end{equation*}
and
\begin{equation*}
M = \frac{\Delta x}{6}
\begin{pmatrix}
4 & 1 & \\
1 & 4 & 1 \\
& 1 & 4
\end{pmatrix}
\in \mathbb{R}^{(n-1)\times(n-1)}
\end{equation*}
where, again, the middle rows are repeated along the diagonal to
obtain matrices of the required size. The discretised drift~$f_n$ can
be computed from~\eqref{E:FE-f}; if an analytical solution is not
available, numerical integration can be used. By the assumptions
on~$g$, the function $F = - \frac12 (g^2 + g')$ satisfies the
conditions of theorem~\ref{T:result}. Thus, the stationary
distributions of the $(n-1)$-dimensional SDEs
\begin{equation*}
dU_t
= M^{-1} L^{(n)} U_t \,dt
+ M^{-1} f_n(U_t) \,dt
+ \sqrt{2} M^{-1/2}\, dW_t
\end{equation*}
and
\begin{equation*}
dU_t
= L^{(n)} U_t \,dt
+ f_n(U_t) \,dt
+ \sqrt{2}\, dW_t
\end{equation*}
coincide and converge to the conditional distribution of $X$
from~\eqref{E:ex2-SDE}, conditioned on~$X_1 = 0$.
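If a closed form for the discretised drift is not available, it can be
assembled by elementwise quadrature. The sketch below assumes that
\eqref{E:FE-f} has the usual Galerkin form
$f_n(u)_i=\int_0^1 f(\bar u(x))\,\varphi_i(x)\,dx$, with $\bar u$ the
piecewise linear interpolant of the grid values and $\varphi_i$ the interior
hat functions; the two-point Gauss rule and the choice $g=\sin$ are
illustrative only.
\begin{verbatim}
import numpy as np

def discretised_drift(u, f, n):
    """Assumed Galerkin drift f_n(u)_i = int_0^1 f(ubar(x)) phi_i(x) dx for the
    Dirichlet case: interior nodes 1, ..., n-1 and ubar(0) = ubar(1) = 0."""
    dx = 1.0 / n
    ubar = np.zeros(n + 1)
    ubar[1:n] = u
    # two-point Gauss rule on the reference element [0, 1]
    pts = np.array([0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)])
    wts = np.array([0.5, 0.5])
    fn = np.zeros(n - 1)
    for k in range(n):                            # element [k dx, (k+1) dx], nodes k and k+1
        for xi, w in zip(pts, wts):
            val = w * dx * f((1.0 - xi) * ubar[k] + xi * ubar[k + 1])
            if 1 <= k <= n - 1:                   # hat function of node k equals 1 - xi here
                fn[k - 1] += val * (1.0 - xi)
            if k + 1 <= n - 1:                    # hat function of node k + 1 equals xi here
                fn[k] += val * xi
    return fn

# illustration with g = sin, i.e. f = -(g g' + g''/2) = -sin cos + sin/2
f = lambda x: -(np.sin(x) * np.cos(x) - 0.5 * np.sin(x))
n = 16
u = np.sin(np.pi * np.arange(1, n) / n)           # some interior nodal values
print(discretised_drift(u, f, n))
\end{verbatim}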
\end{document}
\begin{document}
\title[On van der Corput property of shifted primes]{On van der Corput
property of shifted primes}
\author{Sini\v{s}a Slijep\v{c}evi\'{c}}
\address{Department of Mathematics, Bijeni\v{c}ka 30, Zagreb, Croatia}
\email{[email protected]}
\urladdr{}
\date{October 27, 2011}
\subjclass[2000]{Primary 11P99; Secondary 37A45}
\keywords{S\'{a}rk\"{o}zy theorem, recurrence, primes, difference sets,
positive definiteness, van der Corput property, Fourier analysis}
\begin{abstract}
We prove that the upper bound for the van der Corput property of the set of
shifted primes is $O((\log n)^{-1+o(1)})$, giving an answer to a problem
considered by Ruzsa and Montgomery for the set of shifted primes $p-1$. We
construct normed non-negative valued cosine polynomials with the spectrum in
the set $p-1$, $p\leq n$, and a small free coefficient $a_{0}=O((\log
n)^{-1+o(1)})$. This implies the same bound for the intersective property of
the set $p-1$, and also bounds for several properties related to uniform
distribution of related sets.
\end{abstract}
\maketitle
\section{Introduction}
We say that a set of integers $\mathcal{S}$ is a \textit{van der Corput} (or
\textit{correlative}) set if it has the following property: given a real
sequence $(x_{n})_{n\in \boldsymbol{N}}$, if all the sequences
$(x_{n+d}-x_{n})_{n\in \boldsymbol{N}}$, $d\in \mathcal{S}$, are uniformly
distributed $\func{mod}1$, then the sequence $(x_{n})_{n\in \boldsymbol{N}}$ is
itself uniformly distributed $\func{mod}1$. The property was introduced by
Kamae and Mend\`{e}s France (\cite{Kamae:77}), and is important as it is
closely related to the intersective property of integers, discussed below.
Classical examples of van der Corput sets are the set of squares, the sets of
shifted primes $p+1$ and $p-1$, and also the sets of values $P(n)$, where $P$
is any polynomial with integer coefficients such that $P(n)\equiv 0\ (\func{mod}k)$
has a solution for every $k$. All van der Corput sets are intersective sets,
but the converse does not hold, as was shown by Bourgain (\cite{Bourgain:87}).
We first recall the key characterization of the van der Corput property. If $
\mathcal{S}$ is a set of positive integers, then let $\mathcal{S}_{n}=
\mathcal{S}\cap \{1,...,n\}$. We denote by $\mathcal{T}(\mathcal{S})$ the
set of all cosine polynomials
\begin{equation}
T(x)=a_{0}+\tsum_{d\in \mathcal{S}_{n}}a_{d}\cos (2\pi dx)\text{,}
\label{r:d0}
\end{equation}
$T(0)=1$, $T(x)\geq 0$ for all $x$, where $n$ is any integer and $a_{0}$, $
a_{d}$ are real numbers (i.e. $T$ is a non-negative normed cosine polynomial
with the spectrum in $\mathcal{S}\cup \{0\}$). Kamae and Mend\`{e}s France
proved that a set is a van der Corput set if and only if (\cite{Kamae:77},
\cite{Montgomery:94})
\begin{equation}
\inf_{T\in \mathcal{T}(\mathcal{S})}a_{0}=0. \label{r:d2}
\end{equation}
We can define a function which measures how quickly a set becomes a van
der Corput set by setting
\begin{equation}
\gamma (n)=\inf_{T\in \mathcal{T}(\mathcal{S}_{n})}a_{0}, \label{r:dgamma}
\end{equation}
and then a set is van der Corput if and only if $\gamma (n)\rightarrow 0$ as
$n\rightarrow \infty $.
Ruzsa and Montgomery posed the problem of finding any upper bound for the
function $\gamma $ for a non-trivial van der Corput set (\cite
{Montgomery:94}, unsolved problem 3; \cite{Ruzsa:84a}). Ruzsa in \cite
{Ruzsa:81} announced the result that for the set of squares, $\gamma
(n)=O((\log n)^{-1/2})$, but the proof was never published. The author in
\cite{Slijepcevic:09} proved that for the set of squares, $\gamma
(n)=O((\log n)^{-1/3})$. In this paper we prove the following result:
\begin{theorem}
\label{t:main01}If $\mathcal{S}$ is the set of shifted primes $p-1$, then $
\gamma (n)=O((\log n)^{-1+o(1)})$.
\end{theorem}
The gap between the upper bound and the best available lower bound remains
very large, as in the case of the sets of recurrence discussed below. The
lower bound below relies on a construction of Ruzsa \cite{Ruzsa84b}:
\begin{theorem}
\label{t:main02}If $\mathcal{S}$ is the set of shifted primes $p-1$, then $
\gamma (n)\,\gg n^{\left( -1+\frac{\log 2-\varepsilon }{\log \log n}\right)
} $, where $\varepsilon >0$ is an arbitrary real number.
\end{theorem}
\textbf{Structure of the proof and its limitations.} We define a cosine
polynomial
\begin{equation}
F_{N,d}(\theta )=\frac{1}{k}\func{Re}\sum_{\substack{ p\leq dN+1 \\ p\equiv
1(\func{mod}d)}}\log p\cdot e((p-1)\theta ), \label{r:defF}
\end{equation}
where $e(\theta )=\exp (2\pi i\theta )$ and $k$ is chosen so that $
F_{N,d}(0)=1$. We show in Sections 2 and 3 by using exponential sum
estimates along major and minor arcs that
\begin{equation*}
F_{N,d}(\theta )\geq \tau (d,q)+E(d,q,\kappa ,N).
\end{equation*}
Here $\kappa =\theta -a/q$, the function $E$ is the error term and $\tau
(d,q)$ is the principal part which is (for square-free $d$) $1$ for $q|d$, $
0 $ if $q$ not square-free, and $-1/\varphi (q/(q,d))$ otherwise ($\varphi $
being Euler's totient function and $(q,d)$ the greatest common divisor).
In Section 4 we demonstrate that for a given $\delta >0$, one can find a
collection of positive integers $\mathcal{D}$ not exceeding $\exp ((\log
1/\delta )^{2+o(1)})$ and weights $\sum_{d\in \mathcal{D}}w(d)=1$ such that
for any integer $q>0$,
\begin{equation*}
\sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2\text{.}
\end{equation*}
In addition, one can find constants $R,N$ not exceeding $O(\exp ((\log
1/\delta )^{4+o(1)}))$ for any given $\theta $ such that if $a/q$ is the
Dirichlet approximation of $\theta =a/q+\kappa $, $|\kappa| \leq 1/(qR)$,
then the error term $|E(d,q,\kappa ,N)|\leq \delta /2$. This seemingly
implies effectively the same upper bound for $\gamma (n)$ as obtained in
\cite{Ruzsa:08} for a stronger intersective property of sets of integers
(see below).
Unfortunately, in our calculations the constants $R,N$ cannot be chosen so
that for all $\theta \in \boldsymbol{T}=\boldsymbol{R}/\boldsymbol{Z}$ the
error term is small. Namely, for $d\theta $ close to an integer, the error
term is $O(dN/R)$, and for $\theta $ on minor arcs, the error term is $
O(d^{2}\sqrt{R}/\sqrt{N})$. We resolve this by choosing a geometric sequence
of constants $N_{1},...,N_{4/\delta }$, which results in the bound in
Theorem \ref{t:main01}. We finalize the proof in Section 5 by constructing
the required cosine polynomial as a convex combination of $F_{N,d}$ over $
d\in \mathcal{D}$ and $N_{j}$.
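The polynomial (\ref{r:defF}) is completely explicit and can be evaluated
directly; the following sketch (which uses sympy to enumerate the primes $
p\equiv 1(\func{mod}d)$, $p\leq dN+1$) is only an illustration, and the
particular values of $N,d,\theta $ are arbitrary.
\begin{verbatim}
import numpy as np
from sympy import primerange

def F(N, d, theta):
    """F_{N,d}(theta) from (r:defF):
    (1/k) Re sum of log(p) e((p-1) theta) over p <= dN+1, p = 1 (mod d),
    where k is the sum of the log(p), so that F_{N,d}(0) = 1."""
    primes = np.array([p for p in primerange(2, d * N + 2) if p % d == 1], dtype=float)
    logs = np.log(primes)
    return float(np.dot(logs, np.cos(2.0 * np.pi * (primes - 1.0) * theta)) / logs.sum())

print(F(200, 6, 0.0))        # 1 by construction
print(F(200, 6, 1.0 / 3.0))  # q = 3 divides d = 6, so every (p-1)/3 is an even integer: value 1
print(F(200, 6, 1.0 / 5.0))  # here the major-arc lower bound is tau(6,5) = -1/phi(5) = -0.25
\end{verbatim}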
\textbf{Applications.} We say a set $\mathcal{S}$ is an \textit{intersective set}
(or a \textit{set of recurrence}, or a \textit{Poincar\'{e} set}), if for
any set $A$ of integers with positive upper Banach density
\begin{equation*}
\rho (A)=\limsup_{n\rightarrow \infty }|A\cap \lbrack 1,n]|/n>0,
\end{equation*}
its difference set $A-A$ contains an element of $\mathcal{S}$. Given any set
of integers $\mathcal{S}$, one can define the function $\alpha :\boldsymbol{N
}\rightarrow \lbrack 0,1]$ as $\alpha (n)=\sup \rho (A)$, where $A$ goes
over all sets of integers whose difference set does not contain an element
of $\mathcal{S}\cap \lbrack 1,n]$ (equivalent definitions of $\alpha $ can
be found in \cite{Ruzsa:84a}). A set is an intersective set if and only if $
\lim_{n\rightarrow \infty }\alpha (n)=0$. Ruzsa in \cite{Ruzsa:84a} also
proved that if $\mathcal{S}$ is a van der Corput set, then it is also an
intersective set, and
\begin{equation*}
\alpha (n)\leq \gamma (n).
\end{equation*}
The bound $\alpha (n)=O((\log n)^{-1+o(1)})=O(\exp ((-1+o(1))\log \log n))$
for the set of shifted primes follows then as a corollary of Theorem \ref
{t:main01}. This is worse than the bound $\alpha (n)=O(\exp (-c\sqrt[4]{\log
n}))$ obtained by Ruzsa and Sanders in \cite{Ruzsa:08}, but better than
earlier bounds in \cite{Lucier:08} and \cite{Sarkozy:78b}.
The function $\gamma (n)$ has different characterizations and further
applications discussed in detail in \cite{Montgomery:94}. We discuss in
Section 7 the Heilbronn property of the set of shifted primes, which
specifies how well the expression $x(p-1)$ can approximate integers
uniformly in $x\in \boldsymbol{R}$, by choosing for a given $x$ some prime $
p\leq n$ so that $x(p-1)$ is as close to an integer as possible.
\section{The major arcs}
If $\Lambda $ is the von Mangoldt function, we define as in \cite{Ruzsa:08}
\begin{equation*}
\Lambda _{N,d}(x):=\left\{
\begin{array}{cc}
\Lambda (dx+1) & \text{if }1\leq x\leq N \\
0 & \text{otherwise,}
\end{array}
\right.
\end{equation*}
and let $\Lambda _{N}(x)=\Lambda _{N,1}(x)$. The Fourier transform $\widehat{
.}:l^{1}(\boldsymbol{Z})\rightarrow L^{\infty }(\boldsymbol{R})$ is defined
as the map which takes $f\in l^{1}(\boldsymbol{Z})$ to $\widehat{f}(\theta
)=\sum_{x\in \boldsymbol{Z}}f(x)\overline{e(x\theta )}$, thus $\widehat{
\Lambda _{N,d}}(\theta )$ is the exponential sum
\begin{equation*}
\widehat{\Lambda _{N,d}}(\theta )=\sum_{x\leq N}\Lambda (dx+1)\overline{
e(x\theta )}\text{.}
\end{equation*}
The classical estimates for Fourier transforms of $\Lambda _{N,d}(x)$ were
optimized by Ruzsa and Sanders for the class of problems studied in this
paper. They studied two cases related to the generalized Riemann hypothesis:
given a pair of integers $D_{1}\geq D_{0}\geq 2$, either there exists
an exceptional Dirichlet character of modulus $d_{D}\leq D_{0}$ or not (
\cite{Ruzsa:08}, Proposition 4.7). They then obtained the following
estimates (we will be more specific below on the assumptions):\ if $\kappa
=\theta -a/q$, where $\theta \in \boldsymbol{T}$, then
\begin{eqnarray}
\left\vert \widehat{\Lambda _{N,d}}(\theta )\right\vert &\leq &\frac{|\tau
_{a,d,q}|}{\varphi (q)}\widehat{\Lambda _{N,d}}(0)+O\left( (1+|\kappa
|N)E_{N,D_{1}}\right) \text{,} \label{r:rs1} \\
\left\vert \widehat{\Lambda _{N,d}}(0)\right\vert &\gg &\frac{N}{\varphi (d)}
+O\left( E_{N,D_{1}}\right) \text{,} \label{r:rs2}
\end{eqnarray}
where
\begin{eqnarray*}
E_{N,D_{1}} &=&ND_{1}^{2}\exp \left( -\frac{c_{1}\log N}{\sqrt{\log N}+\log
D_{1}}\right) , \\
\tau _{a,d,q} &=&\sum_{\substack{ m=0 \\ (md+1,q)=1}}^{q-1}e\left( m\frac{a
}{q}\right) \text{.}
\end{eqnarray*}
\begin{proposition}
\label{p:rusa}(Ruzsa, Sanders). There is an absolute constant $c_{1}$ such
that for any pair of integers $D_{1}\geq D_{0}\geq 2$, one of the following
possibilities holds:
(i)\ ($(D_{1},D_{0})$ is exceptional). There is an integer $d_{D}\leq D_{0}$
, such that for all non-negative integers $N,a,q,d$, where $1\leq dq\leq
D_{1}$, $d_{D}|d$, and $(a,q)=1$, for any $\theta \in \boldsymbol{T}$ (\ref
{r:rs1}), (\ref{r:rs2}) hold, where $\kappa =\theta -a/q$.
(ii)\ ($(D_{1},D_{0})$ is unexceptional). For all non-negative integers $
N,a,q,d$, where $1\leq dq\leq D_{0}$ and $(a,q)=1$, for any $\theta \in
\boldsymbol{T}$ (\ref{r:rs1}), (\ref{r:rs2}) hold, where $\kappa =\theta
-a/q $.
\end{proposition}
\begin{proof}
See \cite{Ruzsa:08}, Propositions 5.3 and 5.5. (Note that (\ref{r:rs1}) is
explicitly obtained at the end of the proof of Proposition 5.3.)
\end{proof}
We now define a function $\tau $ closely related to $\tau _{a,d,q}$ above,
which will be the main term when estimating cosine polynomials $F_{N,d}$. Let
\begin{equation}
\tau (d,q)=\left\{
\begin{array}{ll}
1, & q|d \\
0, & (d,r)>1\text{ or }r\text{ not square-free} \\
-1/\varphi (r) & \text{otherwise,}
\end{array}
\right. \label{r:deftau}
\end{equation}
where $r=q/(q,d)$. Note that for $d$ square-free, the second row condition
above is equivalent to $q$ being not square-free.
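The function $\tau $ is elementary and can be tabulated directly; the short
sketch below follows the definition (\ref{r:deftau}) verbatim (the arguments
in the example line are arbitrary).
\begin{verbatim}
from math import gcd
from sympy import totient, factorint

def tau(d, q):
    """The principal part tau(d, q) defined in (r:deftau)."""
    if d % q == 0:                                    # q | d
        return 1.0
    r = q // gcd(q, d)
    r_squarefree = all(e == 1 for e in factorint(r).values())
    if gcd(d, r) > 1 or not r_squarefree:
        return 0.0
    return -1.0 / int(totient(r))

# for square-free d, tau(d, q) vanishes exactly when q is not square-free
print(tau(6, 3), tau(6, 5), tau(6, 4), tau(6, 35))    # 1.0  -0.25  0.0  -1/24
\end{verbatim}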
\begin{lemma}
\label{l:tau}Let $a,d,q$ be positive integers, $(a,q)=1$, $r=q/(q,d)>1$ and $
a^{\ast }=ad/(q,d)$. Then
\begin{equation}
\frac{|\tau _{a^{\ast },d,r}|}{\varphi (r)}=|\tau (d,q)|\text{.}
\label{r:help}
\end{equation}
\end{lemma}
\begin{proof}
As was noted in \cite{Ruzsa:08}, Section 5,
\begin{equation*}
\tau _{a,d,q}=\left\{
\begin{array}{ll}
c_{q}(a)e(-m_{d,q}a/q) & \text{if }(d,q)=1 \\
0 & \text{otherwise,}
\end{array}
\right.
\end{equation*}
where $c_{q}(a)$ is the Ramanujan sum and $m_{d,q}$ is a solution of $
m_{d,q}d\equiv 1(\func{mod}q)$. Now if $q|d$, $\tau _{a^{\ast },d,r}=\tau
_{a^{\ast },d,1}=1$, thus both sides of (\ref{r:help}) are equal to $1$. If $
(d,r)>1$, then $\tau _{a^{\ast },d,r}=0$, and if $r$ not square-free, then $
\tau _{a^{\ast },d,r}=0$ as the Ramanujan sum $c_{r}(a^{\ast })=0$ when $r$
not square-free. The remaining case follows from $(a^{\ast },r)=1$, $r$
square-free implying that the Ramanujan sum $|c_{r}(a^{\ast })|=1$.
\end{proof}
It is easy to see that there exists a constant $c_{2}$ depending only on $
c_{1}$ such that if
\begin{equation}
\log N\geq c_{2}(\log D_{1})^{2}, \label{r:new1}
\end{equation}
then
\begin{equation}
D_{1}^{2}\exp \left( -\frac{c_{1}\log N}{\sqrt{\log N}+\log D_{1}}\right)
\leq \frac{1}{D_{1}^{2}}\text{.} \label{r:new2}
\end{equation}
We first discuss the case of $q$ not dividing $d$, and then $q|d$.
\begin{proposition}
\label{p:major}Assume all the assumptions of Proposition \ref{p:rusa} hold
for $D_{0},D_{1},\theta ,N,a,q,d,\kappa $, and in addition (\ref{r:new1}), (
\ref{r:new2}). If $q$ does not divide $d$, then
\begin{equation*}
F_{N,d}(\theta )\geq \tau (d,q)+O\left( \frac{1}{D_{1}}+|\kappa |N\right)
\text{.}
\end{equation*}
\end{proposition}
\begin{proof}
If we write
\begin{eqnarray*}
\psi (x;q,a) &=&\sum_{\substack{ n\leq x \\ n\equiv a(\func{mod}q)}}\Lambda
(n), \\
\vartheta (x;q,a) &=&\sum_{\substack{ p\leq x \\ p\equiv a(\func{mod}q)}}
\log (p),
\end{eqnarray*}
then $\widehat{\Lambda _{N,d}}(0)=\psi (Nd+1;d,1)$ and $k=\vartheta
(Nd+1;d,1)$ where $k$ is the denominator in (\ref{r:defF}). By the
well-known property of functions $\psi ,\vartheta $ (see e.g. \cite
{Montgomery:07}, p.381),
\begin{equation*}
\psi (Nd+1;d,1)-\vartheta (Nd+1;d,1)\ll \sqrt{dN}\text{.}
\end{equation*}
Relations (\ref{r:rs2}), (\ref{r:new2}) and $\varphi (d)<D_{1}$ imply that
\begin{equation}
\frac{N}{\left\vert \widehat{\Lambda _{N,d}}(0)\right\vert }\ll D_{1}\text{.}
\label{r:lambda0}
\end{equation}
If we use the shorthand notation $F=\func{Re}\sum_{_{p\leq dN+1,p\equiv 1(
\func{mod}d)}}\log p\cdot e((p-1)\theta )$, so that $F_{N,d}(\theta )=F/k$,
we see from definitions that $F$ is approximately $\func{Re}\widehat{\Lambda
_{N,d}}(d\theta )$, or more precisely
\begin{equation*}
|\func{Re}\widehat{\Lambda _{N,d}}(d\theta )-F|\leq \widehat{\Lambda _{N,d}}
(0)-k\ll \sqrt{dN}.
\end{equation*}
Putting these three inequalities together,
\begin{equation}
\left\vert \frac{F}{k}-\frac{\func{Re}\widehat{\Lambda _{N,d}}(d\theta )}{
\widehat{\Lambda _{N,d}}(0)}\right\vert \leq \left\vert \frac{F}{k}
\right\vert \frac{|\widehat{\Lambda _{N,d}}(0)-k|}{|\widehat{\Lambda _{N,d}}
(0)|}+\frac{|\func{Re}\widehat{\Lambda _{N,d}}(d\theta )-F|}{|\widehat{
\Lambda _{N,d}}(0)|}\ll \frac{\sqrt{d}}{\sqrt{N}}D_{1}\text{.}
\label{r:relf}
\end{equation}
Now if $\theta -a/q=\kappa $, then $d\theta -a^{\ast }/r=d\kappa $, where $
a^{\ast }=ad/(d,q)$, $r=q/(d,q)$. Combining (\ref{r:rs1}), (\ref{r:rs2}), (
\ref{r:new2}) and (\ref{r:lambda0}) we easily get that
\begin{equation*}
\left\vert \frac{\widehat{\Lambda _{N,d}}(d\theta )}{\widehat{\Lambda _{N,d}}
(0)}\right\vert \leq \frac{|\tau _{a^{\ast },d,r}|}{\varphi (r)}+O\left(
\frac{1}{D_{1}}+|\kappa |N\right) \text{.}
\end{equation*}
The last two relations combined (noting that if $d\leq D_{1}$ and (\ref
{r:new1}), then $\sqrt{d}D_{1}/\sqrt{N}\ll 1/D_{1}$)\ and Lemma \ref{l:tau}
complete the proof.
\end{proof}
\begin{proposition}
\label{p:main2}Let $d,N$ be positive integers, $\theta \in \boldsymbol{T}$
, and $\kappa =\theta -a/q$, $(a,q)=1$ and $q|d$. Then
\begin{equation}
F_{N,d}(\theta )\geq 1+O(dN|\kappa |)\text{.} \label{p:m2}
\end{equation}
\end{proposition}
\begin{proof}
We first recall that $\func{Re}e(\theta )=\cos (2\pi \theta )\geq 1-2\pi
\left\Vert \theta \right\Vert $, where $\left\Vert .\right\Vert $ is the
distance from the nearest integer. Thus if $|dN\kappa |\leq 1/2,$ then for
each $p\leq dN+1$, $d|(p-1)$, we get $\left\Vert (p-1)\theta \right\Vert
=(p-1)|\kappa |$ and $\func{Re}e((p-1)\theta )\geq 1-2\pi dN|\kappa |$,
which easily implies (\ref{p:m2}).
\end{proof}
\section{The minor arcs}
We start with the minor arc estimate from \cite{Ruzsa:08}, Corollary 6.2,
which is derived from the classical result of Vinogradov (\cite
{Montgomery:94}, Theorem 2.9).
\begin{proposition}
\label{t:vin}Suppose that $d\leq N$ and $q\leq R$ are positive integers, $
\theta \in \boldsymbol{T}$, $(a,q)=1$ and $|\theta -a/q|\leq 1/qR$. Then
\begin{equation}
\left\vert \widehat{\Lambda _{N,d}}(\theta )\right\vert \ll d(\log
N)^{4}\left( \frac{N}{\sqrt{q}}+N^{4/5}+\sqrt{NR}\right) \text{.}
\label{r:a1}
\end{equation}
\end{proposition}
The minor arc estimate for $F_{N,d}(\theta )$ now follows.
\begin{corollary}
\label{c:main3}Suppose $d\leq D_{1}$, $q\leq R,$ $N$ are positive integers, $
\theta \in \boldsymbol{T}$, $(a,q)=1$ and $|\theta -a/q|\leq 1/qR$. Assume
also (\ref{r:new1}) and (\ref{r:new2}) hold. Then
\begin{equation}
|F_{d,N}(\theta )|\ll D_{1}^{2}(\log N)^{4}\left( \frac{1}{\sqrt{q}}
+N^{-1/5}+\frac{\sqrt{R}}{\sqrt{N}}\right) \text{.} \label{p:m3}
\end{equation}
\end{corollary}
\begin{proof}
First note that as $d\leq D_{1}$, Proposition \ref{p:rusa} implies that (\ref
{r:rs2}) holds. Then similarly as in the proof of Proposition \ref{p:major},
\begin{equation}
\frac{N}{\left\vert \widehat{\Lambda _{N,d}}(0)\right\vert }\ll D_{1}
\label{r:a2}
\end{equation}
and
\begin{equation}
\left\vert \frac{F}{k}-\frac{\func{Re}\widehat{\Lambda _{N,d}}(d\theta )}{
\widehat{\Lambda _{N,d}}(0)}\right\vert \ll \frac{\sqrt{d}}{\sqrt{N}}
D_{1}\leq \frac{D_{1}^{3/2}}{\sqrt{N}}\text{.} \label{r:a3}
\end{equation}
We complete the proof by combining (\ref{r:a1}), (\ref{r:a2}) and (\ref{r:a3}
).
\end{proof}
\section{Cancelling out the main term}
Recall the definition of the arithmetic function $\tau $ in (\ref{r:deftau}
). We first cancel out the main terms in the unexceptional case.
\begin{theorem}
\label{t:cancelling}For a given $\delta >0$ smaller than some $\delta _{0}>0$
there exists a collection $\mathcal{D}$ of positive integers, each not greater
than $\exp ((\log 1/\delta )^{2+o(1)})$, and weights $w:\mathcal{D}\rightarrow
\boldsymbol{R}$, $\sum_{d\in \mathcal{D}}w(d)=1$, such that for all
positive integers $q$,
\begin{equation}
\sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2. \label{r:A}
\end{equation}
\end{theorem}
\begin{proof}
We first define the set $\mathcal{D}$ depending on three constants $
p^{-}<p^{+}$, $l$ to be defined below. Let
\begin{equation*}
d^{\ast }=\prod_{p\leq p^{-}}p
\end{equation*}
(the product being over primes $p$, as usual), and let $\mathcal{D}(j)$ be
the set of all square-free numbers $d^{\ast }d$, $d$ containing in its
decomposition only primes $p^{-}<p\leq p^{+}$, and such that $\omega (d)=j$,
where $\omega (d)$ denotes the number of distinct primes dividing $d$. We
set now
\begin{eqnarray*}
p^{+} &=&2/\delta +1, \\
l &=&\left\lceil 2\log (1/\delta )\left( \frac{2\log \log (2/\delta )}{\log 2
}+1\right) \right\rceil =\log (1/\delta )^{1+o(1)}, \\
p^{-} &=&2l^{2}+1=\log (1/\delta )^{2+o(1)}, \\
\mathcal{D} &=&\mathcal{D}(l), \\
W(j) &=&\sum_{d^{\ast }d\in \mathcal{D}(j)}1/\varphi (d), \\
w(d^{\ast }d) &=&\frac{1}{W(l)}\frac{1}{\varphi (d)}\text{,}
\end{eqnarray*}
where $\left\lceil x\right\rceil $ is the smallest integer $\geq x$. We
denote the left-hand side of (\ref{r:A}) by $A(q)$.
By using $\prod_{p\leq x}p=\exp (x^{(1+o(1))})$ (see e.g. \cite
{Montgomery:07}, Corollary 2.6), we easily see that for each $d^{\ast }d\in
\mathcal{D}$,
\begin{equation*}
d^{\ast }d\leq \prod_{p\leq p^{-}}p\cdot (p^{+})^{l}=\exp (\log (1/\delta
)^{2+o(1)}).
\end{equation*}
If $q$ is not square-free or $q$ contains a prime larger than $p^{+}$, the
claim $A(q)\geq -\delta /2$ is straightforward as for all $d$, $\tau (d,q)=0$
, respectively $\tau (d,q)\geq -1/\varphi (p^{+})\geq -\delta /2$.
We can now without loss of generality assume that $q$ is square-free,
containing no prime $>p^{+}$ or $\leq p^{-}\,$in its decomposition (the
latter can be eliminated as primes $\leq p^{-}\,\ $do not affect the value
of $\tau (d^{\ast }d,q)$ for square-free $q$). We define the following
constants and sets to assist us in calculations:
\begin{eqnarray*}
k &=&\log (1/\delta ), \\
\mathcal{D}(j;q) &=&\{d^{\ast }d\in \mathcal{D}(j)\text{, }(d,q)=1\}, \\
W(j;q) &=&\sum_{d^{\ast }d\in \mathcal{D}(j,q)}1/\varphi (d), \\
W &=&W(1)=\sum_{p^{-}<p\leq p^{+}}\frac{1}{\varphi (p)}=\sum_{p^{-}<p\leq
p^{+}}\frac{1}{p-1}\text{.}
\end{eqnarray*}
The remaining cases will be distinguished by $\omega (q)$.
(i) Assume $\omega (q)\leq 2k$. We will show that the terms for which $q|d$
dominate all the others. We first show the following: for $j_{1}<j_{2}$,
\begin{equation}
W(j_{2};q)\leq \frac{W^{j_{2}-j_{1}}W(j_{1};q)}{j_{2}(j_{2}-1)...(j_{1}+1)}
\text{.} \label{r:laminq}
\end{equation}
Indeed, if we define
\begin{equation*}
W^{\ast }(j;q)=\sum_{(p_{1},p_{2},...,p_{j})}\frac{1}{\varphi
(p_{1}p_{2}...p_{j})}\text{,}
\end{equation*}
where the sum goes over all ordered j-tuples of pairwise different primes $
p_{i}$, $p^{-}<p_{i}\leq p^{+},\,\ p_{i}$ coprime with $q$, then $
W(j;q)=W^{\ast }(j;q)/j!$. However, as $\varphi $ is multiplicative for
coprime integers,
\begin{equation}
W^{\ast }(j_{2};q)\leq W^{j_{2}-j_{1}}W^{\ast }(j_{1};q) \label{r:laminq2}
\end{equation}
(we first choose the first $j_{2}-j_{1}$ primes and then the remaining $j_{1}
$). We obtain (\ref{r:laminq}) by dividing (\ref{r:laminq2}) with $j_{2}!$.
The definition of $A(q)$ now yields:
\begin{equation*}
A(q)=\sum_{q|d}\frac{1}{\varphi (d)}-\sum_{q/(q,d)>1}\frac{1}{\varphi (d)}
\frac{1}{\varphi (r)},
\end{equation*}
where the sums above and below are over $d^{\ast }d\in \mathcal{D}$ unless
specified otherwise and $r$ always denotes $r=q/(q,d)$ (recall that we
assumed that $q$ and $d^{\ast }$ are coprime). We first detail out the first
term:
\begin{equation*}
\sum_{q|d}\frac{1}{\varphi (d)}=\sum_{d^{\ast }d\in \mathcal{D}(l-\omega
(q);q)}\frac{1}{\varphi (d)}\frac{1}{\varphi (q)}=W(l-\omega (q);q)\frac{1}{
\varphi (q)}.
\end{equation*}
If $\omega ((d,q))=j$, we can choose $(d,q)$ as a factor of $q$ in $\binom{
\omega (q)}{j}$ ways. Using that, (\ref{r:laminq}), and, in the last two rows, $
\omega (q)\leq 2k$ and $(1+x/n)^{n}<\exp (x)$, we obtain
\begin{eqnarray*}
\sum_{q/(q,d)>1}\frac{1}{\varphi (d)}\frac{1}{\varphi (r)}
&=&\sum_{j=0}^{\omega (q)-1}\sum_{\omega ((d,q))=j}\frac{1}{\varphi (d)}
\frac{1}{\varphi (r)}=\sum_{j=0}^{\omega (q)-1}W(l-j;q)\binom{\omega (q)}{j}
\frac{1}{\varphi (q)}\leq \\
&\leq &\sum_{j=0}^{\omega (q)-1}\frac{W^{\omega (q)-j}}{(l-j)...(l-\omega
(q)+1)}\binom{\omega (q)}{j}\cdot \frac{W(l-\omega (q);q)}{\varphi (q)}\leq
\\
&\leq &\frac{W(l-\omega (q);q)}{\varphi (q)}\sum_{j=0}^{\omega (q)-1}\binom{
\omega (q)}{j}\frac{W^{\omega (q)-j}}{(l-\omega (q))^{\omega (q)-j}}\leq \\
&\leq &\frac{W(l-\omega (q);q)}{\varphi (q)}\left[ \left( 1+\frac{W}{(l-2k)}
\right) ^{2k}-1\right] < \\
&<&\frac{W(l-\omega (q);q)}{\varphi (q)}\left[ \exp \left( \frac{W}{l/(2k)-1}
\right) -1\right] .
\end{eqnarray*}
As by e.g. \cite{Montgomery:07}, Theorem 2.7.(d),
\begin{equation}
\sum_{p\leq x}\frac{1}{p-1}=\log \log x\cdot (1+o(1)), \label{r:sump}
\end{equation}
we get that
\begin{equation}
W=\sum_{p^{-}<p\leq p^{+}}\frac{1}{p-1}=\log \log (p^{+})(1+o(1))\leq 2\log
\log (2/\delta ). \label{r:below}
\end{equation}
It is easy to check that the definitions of $l,k$ imply that
\begin{equation*}
1-\left[ \exp \left( \frac{2\log \log (2/\delta )}{l/(2k)-1}\right) -1\right]
\geq 0\text{.}
\end{equation*}
Putting all of the above together we get $A(q)>0$.
(ii)\ Assume $2k<\omega (q)\leq 2l$. We now show that all the terms are
small. First assume $\omega ((q,d))=j\geq k$. By the same reasoning as in (
\ref{r:laminq}) one gets for $j\leq l$,
\begin{equation*}
W(l;q)=\frac{(W-\sum_{p|q}1/\varphi (p))^{l-j}W(j;q)}{l(l-1)...(j+1)}\text{.}
\end{equation*}
Now by definition, $W(l)\geq W(l;q)$. Applying again (\ref{r:sump}) we see
that for $\delta $ small enough,
\begin{equation*}
W-\sum_{p|q}1/\varphi (p)\geq \log \log (p^{+})(1+o(1))-\log \log
(2l)(1+o(1))\geq 1\text{.}
\end{equation*}
Combining all of it one gets
\begin{equation*}
\frac{W(j;q)}{W(l)}\leq l^{l-j}\text{.}
\end{equation*}
Furthermore, as by Stirling's formula $k!\geq k^{k}\exp (-k)$ and as $
k=\log (1/\delta )$, we get for $\delta $ small enough
\begin{equation*}
\frac{l}{k!}\leq \frac{(\log (1/\delta ))^{1+o(1)}}{(\log (1/\delta ))^{\log
(1/\delta )}\exp (-\log (1/\delta ))}\leq \delta /4.
\end{equation*}
Putting all of this together and summing over $d^{\ast }d\in \mathcal{D}$
similarly as above we get
\begin{eqnarray}
\sum_{j=k}^{l}\sum_{\omega ((d,q))=j}|w(d^{\ast }d)\tau (d^{\ast }d,q)| &=&
\frac{1}{W(l)}\sum_{j=k}^{\min \{l,\omega (q)\}}\sum_{\omega (d,q)=j}\frac{1
}{\varphi (d)}\frac{1}{\varphi (r)}= \notag \\
&=&\sum_{j=k}^{\min \{l,\omega (q)\}}\binom{\omega (q)}{j}\frac{W(l-j;q)}{
W(l)}\frac{1}{\varphi (q)}\leq \notag \\
&\leq &\sum_{j=k}^{\min \{l,\omega (q)\}}\frac{(2l)^{j}}{j!}l^{j}\frac{1}{
(p^{-}-1)^{\omega (q)}}\leq \notag \\
&\leq &\frac{1}{k!}\sum_{j=k}^{\min \{l,\omega (q)\}}\left( \frac{2l^{2}}{
p^{-}-1}\right) ^{\omega (q)}\leq \frac{l}{k!}\leq \delta /4\text{.}
\label{r:one}
\end{eqnarray}
For $\omega ((q,d))=j<k$, $\omega (r)=\omega (q)-j>k$ (where $r=q/(q,d)$).
We now see that for $\delta >0$ small enough,
\begin{equation}
|\tau (d^{\ast }d,q)|=1/\varphi (r)\leq 1/(p^{-}-1)^{k}=\log (1/\delta
)^{(-2-o(1))\log (1/\delta )}\leq \delta /4\text{,} \label{r:oneB}
\end{equation}
thus
\begin{equation}
\sum_{j=0}^{k-1}\sum_{\omega ((d,q))=j}|w(d^{\ast }d)\tau (d^{\ast
}d,q)|\leq \frac{\delta }{4}\sum_{d^{\ast }d\in \mathcal{D}}|w(d^{\ast
}d)|=\delta /4. \label{r:two}
\end{equation}
Relations (\ref{r:one})\ and (\ref{r:two})\ give $|A(q)|\leq \delta /2.$
(iii) Assume $2l<\omega (q)$. Then it is enough to see that for all $d^{\ast
}d\in \mathcal{D}$, $\omega (r)\geq l>k$. We now obtain in the same way as
in (\ref{r:oneB}) that $|\tau (d^{\ast }d,q)|\leq \delta /4$, but now for
all $d^{\ast }d\in \mathcal{D}$, thus $|A(q)|\leq \delta /4$.
\end{proof}
We now modify this for the exceptional case.
\begin{theorem}
\label{t:cancelling2}Assume $\delta >0$ is smaller than some $\delta _{0}>0$
and let $d_{D}$ be a positive integer, $d_{D}=\exp ((\log 1/\delta
)^{2+o(1)})$. Then there exists a collection $\mathcal{D}$ of positive
integers, each divisible by $d_{D}$ and not greater than $\exp
((\log 1/\delta )^{2+o(1)})$, and weights $w:\mathcal{D}\rightarrow
\boldsymbol{R}$, $\sum_{d\in \mathcal{D}}w(d)=1$, such that for all positive
integers $q$,
\begin{equation}
\sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2. \label{r:B}
\end{equation}
\end{theorem}
\begin{proof}
We define $d^{\ast }=d_{D}\prod_{p\leq p^{-}}p$, where $p^{-}$ and all the
other constants remain the same as in the proof of Theorem \ref{t:cancelling}
. Let $\mathcal{D}$ be the set of all the numbers $d^{\ast }d$, $d$
square-free, relatively prime with $d^{\ast }$, containing in its
decomposition only primes $p^{-}<p\leq p^{+}$, and such that $\omega (d)=l$.
The rest of the proof is analogous to the proof of Theorem \ref{t:cancelling},
with all calculations unchanged, and is thus omitted.
\end{proof}
\section{Proof of Theorem}
We complete the proof of Theorem \ref{t:main01} in this section. We will
choose below the constants $Q,R$, and will use the major arcs estimates for $
q\leq Q$ and minor arcs estimates for $Q<q\leq R$. We will assume that $a/q$
is the Dirichlet approximation of $\theta \in \boldsymbol{T}$, $|\theta
-a/q|\leq 1/qR$, $(a,q)=1$. The error terms in Propositions \ref{p:major},
\ref{p:main2} are then
\begin{eqnarray*}
E_{1} &=&O\left( \frac{1}{D_{1}}+\frac{N}{R}\right) , \\
E_{2} &=&O\left( D_{1}N/R\right) ,
\end{eqnarray*}
as $|\kappa |=1/qR$ and $d\leq D_{1}$. The error term for minor arcs is the
entire right-hand side of (\ref{p:m3}), thus as $q>Q$, it is
\begin{equation*}
E_{3}=O\left( D_{1}^{2}(\log N)^{4}\left( \frac{1}{\sqrt{Q}}+N^{-1/5}+\frac{
\sqrt{R}}{\sqrt{N}}\right) \right) .
\end{equation*}
To complete the proof, we need to choose the constants $D_{1},N,Q,R\,$\ so
that the error terms $E_{1},E_{2},E_{3}\leq \delta /2$ for all $\theta \in
\boldsymbol{T}$ on the major and minor arcs, respectively. As was noted in the
introduction, this is impossible, so we proceed as follows. We define
\begin{equation*}
Q=\exp (\log (1/\delta )^{2+o(1)})
\end{equation*}
(the constant obtained as the upper bound on $\mathcal{D}$ in\ Theorem \ref
{t:cancelling}), and let
\begin{equation*}
D_{0}=Q^{2},\text{ }D_{1}=Q^{4}\text{.}
\end{equation*}
If $(D_{0},D_{1})$ is unexceptional, we construct the set $\mathcal{D}$
according to Theorem \ref{t:cancelling}, and if it is exceptional with the
modulus of the exceptional character $d_{D}\leq D_{0}$, then according to
Theorem \ref{t:cancelling2}. Now let $N_{0}=\exp (c_{2}(\log D_{1})^{2})$,
where $c_{2}$ is the constant in (\ref{r:new1}). We now define
\begin{eqnarray*}
N_{j} &=&N_{0}D_{1}^{8j}, \\
R_{j}^{\ast } &=&N_{0}D_{1}^{8j+2}\text{,}
\end{eqnarray*}
where $j=1,...,m$, $4/\delta \leq m<4/\delta +1$. Then for $0<\delta \leq
\delta _{0}$ for some $\delta _{0}$ small enough, and $j\leq j^{\ast }$, it
is easy to see that the error terms $E_{1},E_{2}\leq \delta /4$ for the
constants $Q,D_{1},N_{j},R_{j^{\ast }}^{\ast }$. Furthermore, if $j\geq
j^{\ast }+1$, the error term $E_{3}\leq \delta /4$ for the constants $
Q,D_{1},N_{j},R_{j^{\ast }}^{\ast }$.
For a given $\theta \in \boldsymbol{T}$, let the rational $a_{j}^{\ast
}/q_{j}^{\ast }$, $(a_{j}^{\ast },q_{j}^{\ast })=1$, be the Dirichlet
approximation of $\theta $, $|\theta -a_{j}^{\ast }/q_{j}^{\ast }|\leq
1/q_{j}^{\ast }R_{j}^{\ast }$. Without loss of generality, we can also
assume that $a_{j}^{\ast }/q_{j}^{\ast }$ is the rational with the smallest $
q_{j}^{\ast }$ for a given $R_{j}^{\ast }$. Then the sequence $q_{j}^{\ast }$
is increasing.
Let $j_{0}$ be the smallest index such that $q_{j_{0}}^{\ast }>Q$ (with $
j_{0}=m+1$ if $q_{j}^{\ast }\leq Q$ for all $j$). We define
\begin{eqnarray*}
a_{j}/q_{j} &=&a_{j_{0}-1}^{\ast }/q_{j_{0}-1}^{\ast }\text{, }
R_{j}=R_{j_{0}-1}^{\ast }\text{ for }j\leq j_{0}-1\text{,} \\
a_{j}/q_{j} &=&a_{j_{0}}^{\ast }/q_{j_{0}}^{\ast }\text{, }
R_{j}=R_{j_{0}}^{\ast }\text{ for }j\geq j_{0}\text{.}
\end{eqnarray*}
Now one can easily check that for any $d\in \mathcal{D}$ and any $j\leq
j_{0}-1$, the assumptions of Proposition \ref{p:major} in the case $q$ not
dividing $d$, respectively of Proposition \ref{p:main2} in the case $q|d$,
do hold for the constants $D_{0},D_{1},Q,a_{j},q_{j},R_{j},N_{j}$, and as
was noted above, $E_{1},E_{2}\leq \delta /4$, thus
\begin{equation}
F_{d,N_{j}}(\theta )\geq \tau (d,q_{j})-\delta /4\text{.} \label{r:sum1}
\end{equation}
Similarly for $j\geq j_{0}+1$ and $d\leq D_{1}$, the assumptions of
Corollary \ref{c:main3} hold and $E_{3}\leq \delta /4$, therefore
\begin{equation}
F_{d,N_{j}}(\theta )\geq -\delta /4. \label{r:sum2}
\end{equation}
Also by definition,
\begin{equation}
F_{d,N_{j_{0}}}(\theta )\geq -1\text{.} \label{r:sum3}
\end{equation}
Now the required polynomial is
\begin{equation*}
T=\frac{1}{m}\sum_{d\in \mathcal{D}}\sum_{j=1}^{m}w(d)F_{d,N_{j}}.
\end{equation*}
By applying (\ref{r:sum1}), (\ref{r:sum2}), (\ref{r:sum3}) to the average $
\frac{1}{m}\sum_{j}$, and (\ref{r:A}) respectively (\ref{r:B}) to the sum over $d\in
\mathcal{D}$, we get that for any $\theta \in \boldsymbol{T}$, $T(\theta
)\geq -\delta $. The largest frequency occurring in $T$ is at most $\max_{d\in
\mathcal{D}}dN_{m}\leq N_{0}D_{1}^{8(4/\delta +1)+1}=\exp ((1/\delta )^{1+o(1)})$.
Hence $(T+\delta )/(1+\delta )$ is a normed non-negative cosine polynomial with
spectrum in the set $p-1$, $p\leq n$, for $n=\exp ((1/\delta )^{1+o(1)})$, and
with free coefficient at most $\delta $, which completes the proof.
\section{The lower bound}
In this section we prove Theorem \ref{t:main02} on the lower bound for $
\gamma (n)$ associated to the set $p-1$. Ruzsa in \cite{Ruzsa84b}, Section
5, constructed for a given $n$ a subset $A$ of integers not larger than $n$,
$|A|\gg n^{((\log 2-\varepsilon )/\log \log n)}$ such that $A-A$ contains no
shifted prime $p-1$. We now construct a set $B$ of positive integers by the
following rule:\ $x\in B$ if and only if $x\equiv a\ (\func{mod}2n)$ for some
$a\in A$. Now clearly the upper Banach density of $B$ satisfies
\begin{equation}
\rho (B)\gg n^{(-1+(\log 2-\varepsilon )/\log \log n)} \label{r:upperbanach}
\end{equation}
and the difference set $B-B$ contains no shifted prime $p-1$ smaller than $n$.
Recall the measure of intersectivity $\alpha (n)$ defined in the introduction,
satisfying $\gamma \geq \alpha $. As $\alpha (n)\geq \rho (B)$ by definition,
$\alpha (n)$ is $\gg $ the right-hand side of (\ref{r:upperbanach}), and the
proof is completed.
\section{Application:\ Heilbronn property of shifted primes}
An estimate for the Heilbronn property of shifted primes is an example of
application of Theorem \ref{t:main01}. If $\mathcal{H}$ is a set of positive
integers, we say that it is a Heilbronn set if $\eta =0$, where
\begin{equation*}
\eta =\sup_{\theta \in \boldsymbol{T}}\inf_{h\in \mathcal{H}}||h\theta ||
\end{equation*}
(for more detailed discussion, see \cite{Montgomery:94}, Section 2.7 or \cite
{Schmidt:77}). One can quantify the Heilbronn property similarly to the van
der Corput and Poincar\'{e} properties of integers, and define
\begin{equation}
\eta (n)=\sup_{\theta \in \boldsymbol{T}}\inf_{h\in \mathcal{H}
_{n}}||h\theta ||, \label{r:dnu}
\end{equation}
where $\mathcal{H}_{n}=\mathcal{H}\cap \{1,...,n\}$. One can show that a set
is a Heilbronn set if and only if $\lim_{n\rightarrow \infty }\eta (n)=0$ (
\cite{Montgomery:94}, Section 2.7). All van der Corput sets are Heilbronn
sets (the converse does not hold), and as was shown in \cite{Montgomery:94},
Theorem 2.9,
\begin{equation}
\eta (n)\leq \gamma (n)\text{.} \label{r:heilbronn}
\end{equation}
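For small $n$ the quantity (\ref{r:dnu}) can be estimated numerically by
replacing the supremum over $\theta \in \boldsymbol{T}$ with a maximum over a
finite grid; the sketch below therefore only produces a lower bound for $\eta
(n)$, and the grid size and the values of $n$ are arbitrary choices.
\begin{verbatim}
import numpy as np
from sympy import primerange

def eta_lower_bound(n, grid=10_000):
    """Grid estimate of eta(n) = sup_theta min_{p <= n} ||(p-1) theta|| for shifted
    primes; maximising over a finite grid of theta bounds the supremum from below."""
    h = np.array([p - 1 for p in primerange(2, n + 1)], dtype=float)
    theta = (np.arange(1, grid, dtype=float) / grid)[:, None]
    frac = (theta * h[None, :]) % 1.0
    dist = np.minimum(frac, 1.0 - frac)   # distance of each h*theta to the nearest integer
    return float(dist.min(axis=1).max())

for n in (50, 200, 1000):
    print(n, eta_lower_bound(n))
\end{verbatim}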
Various estimates for the function $\eta $ have been obtained by Schmidt
\cite{Schmidt:77} for sets of values of polynomials with integer
coefficients. An upper bound for the set of shifted primes follows from\
Theorem \ref{t:main01} and (\ref{r:heilbronn}).
\begin{corollary}
If $\eta $ is the arithmetic function (\ref{r:dnu}) associated to the set of
shifted primes $\mathcal{H}$, then $\eta (n)=O((\log n)^{-1+o(1)})$.
\end{corollary}
\end{document}
\begin{document}
\newtheorem{theo}{Theorem}[section]
\newtheorem{definition}[theo]{Definition}
\newtheorem{lem}[theo]{Lemma}
\newtheorem{prop}[theo]{Proposition}
\newtheorem{coro}[theo]{Corollary}
\newtheorem{exam}[theo]{Example}
\newtheorem{rema}[theo]{Remark}
\newtheorem{example}[theo]{Example}
\newtheorem{principle}[theo]{Principle}
\newcommand{\mathord{\sim}}{\mathord{\sim}}
\newtheorem{axiom}[theo]{Axiom}
\title{A discussion on the origin of quantum probabilities}
\author{{\sc Federico Holik}$^{1,2}$ \ {\sc ,} \ {\sc Angel Plastino}$^{3}$ \ {\sc and} {\sc Manuel S\'{a}enz}$^{2}$}
\maketitle
\begin{center}
\begin{small}
1- Universidad Nacional de La Plata, Instituto
de F\'{\i}sica (IFLP-CCT-CONICET), C.C. 727, 1900 La Plata, Argentina \\
2- Departamento de Matem\'{a}tica - Facultad de Ciencias Exactas y
Naturales\\ Universidad de Buenos Aires - Pabell\'{o}n I, Ciudad
Universitaria \\ Buenos Aires, Argentina.\\
3- Universitat de les Illes Balears and IFISC-CSIC, 07122 Palma de
Mallorca, Spain \\
\end{small}
\end{center}
\begin{abstract}
\noindent We study the origin of quantum probabilities as arising
from non-boolean propositional-operational structures. We apply
the method developed by Cox to non-distributive lattices and develop an alternative formulation of
non-Kolmogorovian probability measures for quantum
mechanics. By generalizing the method presented in previous works,
we outline a general framework for the deduction of probabilities in
general propositional structures represented by lattices
(including the non-distributive case).
\end{abstract}
\noindent
\begin{small}
\centerline{\em Key words: Quantum Probability-Lattice
theory-Information theory}
\end{small}
\section{Introduction}
\noindent Quantum probabilities\footnote{By the term ``quantum
probabilities", we mean the probabilities that appear in quantum
theory. As is well known, they are ruled by the well known formula
$\mbox{tr}(\rho P)$, where $\rho$ is a density matrix representing
a general quantum state and $P$ is a projection operator
representing an event (see Section \ref{s:probabilities} of this
work for details).} posed an intriguing question from the very
beginning of quantum theory. It was rapidly realized that
probability amplitudes of quantum processes obey rules of a non-classical
nature, as for example, the sum rule of probability
amplitudes giving rise to interference terms or the nonexistence
of joint distributions for noncommuting observables. In 1936 von
Neumann wrote the first work ever to introduce quantum logics
\cite{BvN,uno,RedeiHandbook}, suggesting that quantum mechanics
requires a propositional calculus substantially different from all
classical logics. He rigorously isolated a new algebraic structure
for quantum logics, and studied its connections with quantum
probabilities. Quantum and classical probabilities have points in
common as well as differences. These differences and the
properties of quantum probabilities have been intensively studied
in the literature
\cite{Gudder-StatisticalMethods,gudderlibro78,dallachiaragiuntinilibro,mikloredeilibro,Redei-Summers2006,mackey-book,Davies-Lewies,Srinivas,AcacioSuppes,Anastopoulos,Rau}.
It is important to remark that not all authors believe that
quantum probabilities are essentially of a different nature than
those which arise in probability theory (see for example
\cite{Symmetry} for a recent account). Though this is a major
question for probability theory and physics, it is not our aim in
this work to settle this discussion.
\noindent There exist two important axiomatizations of classical
probabilities. One of them was provided by Kolmogorov
\cite{KolmogorovProbability}, a set theoretical approach based on
boolean sigma algebras of a sample space. Probabilities are defined
as measures over subsets of a given set. Thus, the Kolmogorovian
approach is set theoretical and usually identified (but not
necessarily) with a frequentistic interpretation of probabilities.
Some time later it was realized that quantum probabilities can be
formulated as measures over non boolean structures (instead of
boolean sigma algebras). This is the origin of the name
``non-boolean or non-Kolmogorovian'' probabilities
\cite{Redei-Summers2006}. It is remarkable that the creation of
quantum theory and the work on the foundations of probability by
Kolmogorov were both developed at around the same time, in the late
twenties and early thirties.
\noindent An alternative approach to the Kolmogorovian construction
of probabilities was developed by R. T. Cox
\cite{CoxPaper,CoxLibro}. Cox starts with a propositional calculus,
intended to represent assertions which portray our knowledge about
the world or system under investigation. As it is well known since
the work of Boole \cite{Boole}, propositions of classical logic (CL)
can be represented as a Boolean lattice, i.e., an algebraic
structure endowed with lattice operations ``$\wedge$", ``$\vee$",
and ``$\neg$", which are intended to represent conjunction,
disjunction, and negation, respectively, together with a partial
order relation ``$\leq$" which is intended to represent logical
implication. Boolean lattices (as seen from an algebraic point of
view) can be characterized by axioms
\cite{Knuth-2004a,Knuth-2004b,Knuth-2005a}. By considering
probabilities as an inferential calculus on a boolean lattice, Cox
showed that the axioms of classical probability can be deduced as a
consequence of lattice symmetries, using entropy as a measure of
information. Thus, differently from the set theoretical approach of
Kolmogorov, the approach by Cox considers probabilities as an
inferential calculus.
\vskip 3mm \noindent It was recently shown that Feynman's rules of
quantum mechanics can be deduced from operational lattice structures
using a variant of Cox's method
\cite{Caticha-99,Knuth-2005b,Symmetry,GoyalKnuthSkilling,KnuthSkilling-2012}
(see also \cite{Knuth-2004a,Knuth-2004b}). For example, in
\cite{Symmetry,GoyalKnuthSkilling} this is done by:
\begin{itemize}
\item first defining an operational propositional calculus on a
quantum system under study, and after that,
\item postulating that any quantum process (interpreted as a proposition
in the operational propositional calculus) can be represented by a
pair of real numbers and,
\item using a variant of the method developed by Cox,
showing that these pairs of real numbers obey the sum and product
rules of complex numbers, and can then be interpreted as the
quantum probability amplitudes which appear in Feynman's rules.
\end{itemize}
\noindent There is a long tradition with regard to the application of
lattice theory to physics and many other disciplines. The quantum logical (QL) approach to quantum
theory (and physics in general), initiated by von Neumann in
\cite{BvN}, has been a traditional tool for studies on the
foundations of quantum mechanics (see for example
\cite{mackey57,jauch,piron,kalm83,kalm86,vadar68,
vadar70,greechie81,giunt91,pp91,belcas81,gudderlibro78}, and for a
complete bibliography \cite{dallachiaragiuntinilibro},
\cite{dvupulmlibro}, and \cite{HandbookofQL}).
\noindent The (QL) approach to physics bases itself on defining elementary
tests and propositions for physical systems and then studying the
nature of these propositional structures. In some approaches,
this is done in an operational way
\cite{piron,aertsdaub1,aertsdaub2,aertsjmp83,aertsrepmathphys84a,aertsjmp84b},
and is susceptible of considerable generalization to arbitrary
physical systems (not necessarily quantum ones). That is why the
approach is also called \emph{operational quantum logic}
(OQL)\footnote{In this paper we will use the terms QL and OQL
interchangeably, but it is important to remark that -though similar-
they are different approaches.}. One of the most important goals of
OQL is to impose operationally motivated axioms on a lattice
structure in order that it can be made isomorphic to a projection
lattice on a Hilbert space. There are different positions in the
literature about the question of whether this goal has been achieved
or not \cite{Rau}, and also, of course, alternative operational
approaches to physics, as the \emph{convex operational} one
\cite{Mielnik-68,Mielnik-69,Mielnik-74,Ludwig1,Ludwig2,Gudder-StatisticalMethods}.
In this work, we are interested in the great generality of the OQL
approach. The operational approach presented in
\cite{Gudder-StatisticalMethods} bases itself only in the convex
formulation of any statistical theory, and it can be shown that the
more general structure which appears under reasonable operational
considerations is a \emph{$\sigma$-orthocomplemented orthomodular
poset}, a more general class than \emph{orthomodular lattices} (the
ones which appear in quantum theory). We will come back to these
issues and review the definitions for these structures below.
\noindent \fbox{\parbox{0.97\linewidth}{ In this work we complement the
work presented in \cite{Knuth-2004b,Knuth-2005a} by asking the
following questions:
\begin{itemize}
\item is it possible to generalize Cox's method to arbitrary
lattices or more general algebraic structures?
\item what happens if the Cox's method mentioned above is applied to
general lattices (not necessarily distributive), representing
general physical systems? And in particular, what happens if it is
applied to the von Neumann lattice of projection operators?
\item does the logical underlying structure of the theory determine
the form and properties of the probabilities?
\end{itemize}
\noindent As we shall see below, it is possible to use these questions
to give an alternative formulation of quantum probabilities. We will show that once the
operational structure of the theory is fixed, the general
properties of probability theory are, in a certain sense to be clarified below, determined. We also discuss
the implications of our derivation for the foundations of quantum
physics and probability theory, and compare with ours different
approaches: the one presented in \cite{Symmetry,GoyalKnuthSkilling}, the OQL
approach, the operational approach of
\cite{Gudder-StatisticalMethods}, and the traditional one
(represented by the von Neumann formalism of Hilbertian quantum
mechanics \cite{vN}).
\noindent The approach presented here shows itself to be susceptible of
great generalization: we provide an algorithm for developing
generalized probabilities using a combination of Cox's method
with the OQL approach. This opens the door to the development of
more general probability and information measures. This
methodology is advantageous because, in the particular case of
quantum mechanics, it includes mixed states in a natural way, unlike other
approaches based only on pure states (like the ones presented in
\cite{KnuthSkilling-2012} and \cite{Caticha-99}).}}
\noindent The paper is organized as follows. In Section
\ref{s:LogicOperational} we review the $QL$ approach to physics as
well as lattice theory. In Section \ref{s:CoxReview} we revisit
Kolmogorov's and Cox's approaches to probability. After that, in
Section \ref{s:probabilities}, we give a sketch concerning
quantum probabilities and their differences with classical ones.
In Section \ref{s:KnuthGoyal} we discuss the approach developed in
\cite{Caticha-99}, \cite{Knuth-2005b}, \cite{Symmetry},
\cite{GoyalKnuthSkilling}, and \cite{KnuthSkilling-2012}. In
Section \ref{s:CoxNon-Boolean} we apply Cox's method for the formulation of
non-Kolmogorovian probabilities using the algebraic properties of non-boolean lattices and
study several examples. Finally, in Section \ref{s:Conclusions}
some conclusions are drawn.
\section{The lattice/operational approach to
physics}\label{s:LogicOperational}
\noindent The quantum logical approach to physics is vast and includes
different programs. We will concentrate on the path followed by von
Neumann and the operational approach developed by Jauch, Piron, and
others. First, we recall the relationship between projection
operators and elementary tests in $QM$. After studying the examples
of lattices applied to QM and CM, we review the main features of the $QL$ approach. The
reader familiar with these topics can skip this Section.
\subsection{Elementary notions of lattice theory}\label{s:LatticeTheory}
\noindent A partially ordered set (also called a poset) is a set
$X$ endowed with a partial ordering relation ``$<$" satisfying
\begin{itemize}
\item 1- For all $x,y\in X$, $x<y$ and $y<x$ entail $x=y$
\item 2- For all $x,y,z\in X$, if $x<y$ and $y<z$, then $x<z$
\end{itemize}
\noindent The notation ``$x\leq y$" is used to denote ``$x<y$" or
``$x=y$". A lattice $\mathcal{L}$ will be a poset in which any two
elements $a$ and $b$ have a unique supremum (the elements' least
upper bound ``$a\vee b$"; called their join) and an infimum
(greatest lower bound ``$a\wedge b$"; called their meet). Lattices
can also be characterized as algebraic structures satisfying
certain axiomatic identities imposed on operations ``$\vee$" and
``$\wedge$". For a {\it complete} lattice all its subsets have
both a supremum (join) and an infimum (meet).
\vskip 3mm \noindent A {\it bounded} lattice has a greatest (or maximum)
and least (or minimum) element, denoted $1$ and $0$ by convention
(also called top and bottom, respectively). Any lattice can be
converted into a bounded lattice by adding a greatest and least
element, and every non-empty finite lattice is bounded. For any set
$A$, the collection of all subsets of $A$ (called the power set of
$A$) can be ordered via subset inclusion to obtain a lattice bounded
by $A$ itself and the null set. Set intersection and union represent
the operations meet and join, respectively.
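\vskip 3mm \noindent A toy implementation of the power set lattice makes
these operations concrete; the three element base set and the brute force
checks below are arbitrary illustrative choices.
\begin{verbatim}
from itertools import chain, combinations

A = frozenset({1, 2, 3})
power_set = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(A), r) for r in range(len(A) + 1))]

meet = lambda x, y: x & y            # set intersection
join = lambda x, y: x | y            # set union
neg  = lambda x: A - x               # complement relative to A
leq  = lambda x, y: x <= y           # subset inclusion

# the power set is a bounded, complemented and distributive lattice:
assert all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
           for x in power_set for y in power_set for z in power_set)
assert all(join(x, neg(x)) == A and meet(x, neg(x)) == frozenset()
           for x in power_set)
assert all(leq(meet(x, y), x) and leq(x, join(x, y))
           for x in power_set for y in power_set)
print("the power set of", set(A), "is a Boolean lattice")
\end{verbatim}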
\vskip 3mm \noindent Every complete lattice is a bounded lattice. While
bounded lattice homomorphisms in general preserve only finite
joins and meets, complete lattice homomorphisms are required to
preserve arbitrary joins and meets. If $P$ is a bounded poset, an
orthocomplementation in $P$ is a unary operation ``$\neg(\ldots)$" such that:
\begin{subequations}\label{e:ComplementationAxioms}
\begin{equation}\label{e:Complement1}
\neg(\neg(a))=a
\end{equation}
\begin{equation}\label{e:Complement2}
a\leq b \longrightarrow \neg b\leq \neg a
\end{equation}
$a\vee \neg a$ and $a\wedge \neg a$ exist and both
\begin{equation}\label{e:Complement3}
a\vee \neg a=\mathbf{1}
\end{equation}
\begin{equation}\label{e:Complement4}
a\wedge \neg a=\mathbf{0}
\end{equation}
\end{subequations}
\noindent hold. A bounded poset with orthocomplementation will be called an
orthoposet. An ortholattice will be an orthoposet which is also a
lattice. For $a,\,b \in \mathcal{L}$ (an ortholattice or
orthoposet), we say that $a$ is orthogonal to $b$ ($ a \bot b$) iff
$a\le \neg b$.
\vskip 3mm \noindent Distributive lattices are lattices for which the
operations of join and meet are distributed over each other. A
complete complemented lattice that is also distributive is a
Boolean algebra. For a distributive lattice, the complement of
$x$, when it exists, is unique. The prototypical examples of
Boolean algebras are collections of sets for which the lattice
operations can be given by set union and intersection, and lattice
complementation by set theoretical complementation.
\noindent A {\it modular} lattice is one that satisfies the following
self-dual condition (\emph{modular law} or \emph{modular identity})
\begin{equation}\label{e:ModularIdentity}
x \leq b \longrightarrow x \vee (a \wedge b) = (x \vee a) \wedge b
\end{equation}
\noindent Modular lattices arise naturally in algebra and in many other
areas of mathematics. For example, the subspaces of a finite
dimensional vector space form a modular lattice. Every distributive
lattice is modular. In a not necessarily modular lattice, there may
still be elements $b$ for which the modular law holds in connection
with arbitrary elements $a$ and $x$ ($\le b$). Such an element is
called a modular element. Even more generally, the modular law may
hold for a fixed pair $(a,b)$. Such a pair is called a modular pair,
and there are various generalizations of modularity related to this
notion and to semi-modularity. \vskip 3mm
\noindent An orthomodular lattice will be an ortholattice satisfying the
orthomodular law:
\begin{equation}\label{e:ModularIdentity2}
x \leq b \longrightarrow x \vee (\neg x \wedge b) = b
\end{equation}
\vskip 3mm
\noindent Orthomodularity is a weakening of modularity. As an example, the lattice
${\mathcal{L}}_{v\mathcal{N}}({\mathcal{H}})$ of closed subspaces of a Hilbert space $\mathcal{H}$ (see Section \ref{s:ElementaryTests})
is orthomodular. ${\mathcal{L}}_{v\mathcal{N}}({\mathcal{H}})$ is modular only if $\mathcal{H}$ is finite dimensional, and is orthomodular
but not modular in the infinite dimensional case.
\vskip 3mm
\noindent The concept of an atom of a lattice is of great physical importance.
If $\mathcal{L}$ has a least element $ 0$, then an element $x$ of
$\mathcal{L}$ is an {\it atom} if $0 < x$ and there exists no
element $y$ of $\mathcal{L}$ such that $0 < y < x$. One says that
$\mathcal{L}$ is: \newline i) {\it Atomic}, if for every nonzero
element $x$ of $\mathcal{L}$, there exists an atom $a$ of
$\mathcal{L}$ such that $ a \leq x$
\newline ii) Atomistic, if every element of $\mathcal{L}$ is a
supremum of atoms.
\subsection{Elementary measurements and projection
operators}\label{s:ElementaryTests}
\noindent In QM, an elementary measurement given by a yes-no experiment
(i.e., a test in which we get the answer ``yes" or the answer
``no"), is represented by a projection operator. If $\mathbb{R}$ is
the real line, let $B(\mathbb{R})$ be the family of subsets of
$\mathbb{R}$ such that
\begin{itemize}
\item 1 - The family is closed under set theoretical complements.
\item 2 - The family is closed under denumerable unions.
\item 3 - The family includes all open intervals.
\end{itemize}
\noindent The elements of $B(\mathbb{R})$ are the \emph{Borel
subsets} of $\mathbb{R}$ \cite{ReedSimon}. Let $\mathcal{P}(\mathcal{H})$ be
the set of all projection operators (or equivalently, the set of closed subspaces of $\mathcal{H}$).
In QM, a projection valued measure (PVM) $M$, is a mapping
\begin{subequations}
\begin{equation}
M: B(\mathbb{R})\rightarrow \mathcal{P}(\mathcal{H})
\end{equation}
\noindent such that
\begin{equation}
M(\emptyset)=0
\end{equation}
\begin{equation}
M(\mathbb{R})=\mathbf{1}
\end{equation}
\begin{equation}
M(\cup_{j}(B_{j}))=\sum_{j}M(B_{j}),\,\,
\end{equation}
\noindent for any disjoint denumerable family $\{B_{j}\}$. Also,
\begin{equation}
M(B^{c})=\mathbf{1}-M(B)=(M(B))^{\bot}
\end{equation}
\end{subequations}
\noindent Any elementary measurement is represented by a projection operator \cite{vN}.
All operators representing observables can be expressed in
terms of PVM's (and so, reduced to sets of elementary
measurements), via the spectral decomposition theorem, which
asserts that the set of PVM's may be put in a
bijective correspondence with the set $\mathcal{A}$ of self-adjoint
operators of $\mathcal{H}$ \cite{ReedSimon}.
\vskip 3mm \noindent The set of closed subspaces $\mathcal{P}(\mathcal{H})$ of any quantum system can be endowed with a lattice structure:
$\mathcal{L}_{v\mathcal{N}}({\mathcal{H}})=
<\mathcal{P}(\mathcal{H}),\ \leq ,\ \wedge,\ \vee,\ \neg,\ 0,\ 1>$, where ``$\leq$'' is the set theoretical inclusion ``$\subseteq$'', ``$\wedge$'' is set theoretical intersection ``$\cap$'',
``$\vee$'' is the closure of the sum ``$\oplus$'',
$0$ is the empty set $\emptyset$, $1$ is the total space
$\mathcal{H}$ and $\neg(S)$ is
the orthogonal complement of a subspace $S$
\cite{mikloredeilibro}. Closed subspaces can be put in one to one
correspondence with projection operators. \emph{Thus, elementary
tests in QM, which are represented by projection operators, can be
endowed with a lattice structure}. This lattice was called
``Quantum Logic" by Birkhoff and von Neumann \cite{BvN}. We will
refer to this lattice as the \emph{von Neumann lattice}
($\mathcal{L}_{v\mathcal{N}}(\mathcal{H})$)
\cite{mikloredeilibro}.
\vskip 3mm
\noindent The analogue of this structure in Classical Mechanics ($CM$)
was provided by Birkhoff and von Neumann \cite{BvN}. Take for
example the following operational propositions on a classical
harmonic oscillator: ``the energy is equal to $E_0$" and ``the
energy is less than or equal to $E_0$". The first one corresponds
to an ellipse in phase space, and the second to the ellipse and
its interior. This simple example shows that operational
propositions in $CM$ can be represented by subsets of the phase
space. Thus, given a classical system $S$ with phase space
$\Gamma$, let $\mathcal{P}(\Gamma)$ represent the set formed by
all the subsets of $\Gamma$. This set can be endowed with a lattice
structure as follows. If ``$\vee$" is represented by set union,
``$\wedge$" by set intersection, ``$\neg$" by set complement (with
respect to $\Gamma$), $\leq$ is represented by set inclusion, and $\textbf{0}$ and $\textbf{1}$ are
represented by $\emptyset$ and $\Gamma$ respectively, then
$<\mathcal{P}(\Gamma),\ \leq,\ \wedge,\ \vee,\ \neg,\ 0,\ 1>$ forms a
complete bounded lattice. This is the lattice of propositions of a
classical system, which, as is well known, \emph{is a boolean
one}. Thus, $\mathcal{P}(\Gamma)$, as well as
$\mathcal{P}(\mathcal{H})$, can be endowed with a propositional lattice structure.
\subsection{The Quantum Logical Approach to Physics}
\noindent We have seen that operational propositions of quantum and
classical systems can be endowed with lattice structures. These
lattices were boolean for classical systems, and non-distributive
for quantum ones. This fact, discovered by von Neumann \cite{BvN},
raised a lot of interesting questions. The first one is: is it
possible to obtain the formalism of $QM$ (as well as $CM$) by
imposing suitable axioms on a lattice structure? The surprising
answer is \emph{yes, it is possible}. But the road which led to
this result was fairly difficult and full of obstacles. In the
first place, it was a very difficult mathematical task to
demonstrate that a suitably chosen set of axioms on a lattice
would yield a representation theorem which would allow one to
recover Hilbertian $QM$. The first result was obtained by Piron,
and the final demonstration was given by Sol\`{e}r in 1995
\cite{Soler-1995} (see also \cite{dallachiaragiuntinilibro}, page
72). One of the advantages ascribed to this approach was that the
axioms imposed on a lattice structure could be given a clear
operational interpretation: unlike the Hilbert space formulation,
whose axioms have the disadvantage of being \emph{ad hoc} and
physically unmotivated, the quantum logical approach would be
clearer and more intuitive from a physical point of view. But of
course, the operational validity of the axioms imposed on the
lattice structure was criticized by many authors (as an example,
see \cite{Rau}).
\vskip 3mm
\noindent The second important question raised by the von Neumann
discovery was: given that $QM$ and $CM$ can be described by
operational lattices, is it possible to formulate the entire
apparatus of physics in lattice theoretical terms? Given
\emph{any} physical system, quantum, classical, or obeying more
general tenets, it is always possible to define an operational
propositional structure on it using the notion of elementary
tests. A very general approach to physics can be given using
\emph{event structures}, which are sets of events endowed with
probability measures satisfying certain axioms
\cite{Gudder-StatisticalMethods}. It can be shown (see
\cite{Gudder-StatisticalMethods}, Chapter 3) that any event
structure is isomorphic to a \emph{$\sigma$-orthocomplete
orthomodular poset}, which is an orthocomplemented poset
$\mathcal{P}$, satisfying the orthomodular identity
\eqref{e:ModularIdentity2}, and for which if $a_i\in \mathcal{P}$
with $a_i\bot a_j$ ($i\neq j$), then $\bigvee_i a_i$
exists for $i=1,2,\ldots$. Remark that event structures (or
$\sigma$-orthocomplete orthomodular posets) need not be
lattices. However, lattices are very general structures and
encompass most important examples. Consequently, we will work
with orthomodular lattices in this paper (and indicate which
results can be easily extended to $\sigma$-orthocomplete
orthomodular posets).
\noindent There are other general approaches to statistical theories. One
of them is the convex operational one
\cite{wilce,Barnum-Wilce-2006,Barnum-Wilce-2009,Barnum-Wilce-2010},
which consists in imposing axioms on a convex structure (formed by
physical states). Indeed, the convex operational approach is even
more general than the quantum logical, but we will not discuss this
issue in detail here (although a link with it will be discussed in
Section \ref{s:GeneralMethod}).
\section{Cox vs. Kolmogorov}\label{s:CoxReview}
\noindent In this Section we will review two different approaches to
probability theory. On the one hand, Cox's approach, in which
probabilities are considered as measures of the plausibility of a
given event or happening. On the other hand, the traditional
Kolmogorovian one, a set theoretical approach which is compatible
with the interpretation of probabilities as frequencies.
\subsection{Kolmogorov}\label{s:Kolmogorov}
\noindent Given a set $\Omega$, let us consider a $\sigma$-algebra
$\Sigma$ of $\Omega$. Then, a probability measure will be given by a
function $\mu$ such that
\begin{subequations}\label{e:kolmogorovian}
\begin{equation}
\mu:\Sigma\rightarrow[0,1]
\end{equation}
\noindent which satisfies
\begin{equation}
\mu(\emptyset)=0
\end{equation}
\begin{equation}
\mu(A^{c})=1-\mu(A),
\end{equation}
\noindent where $(\ldots)^{c}$ denotes the set-theoretical complement and,
for any pairwise disjoint denumerable family $\{A_{i}\}_{i\in I}$
\begin{equation}
\mu(\bigcup_{i\in I}A_{i})=\sum_{i}\mu(A_{i})
\end{equation}
\end{subequations}
\noindent where conditions (\ref{e:kolmogorovian}) are the well
known axioms of Kolmogorov. The triad $(\Omega,\Sigma,\mu)$ is
called a \emph{probability space}. Depending on the context,
probability spaces obeying Eqs. \eqref{e:kolmogorovian} are usually
referred to as Kolmogorovian, classical, commutative or boolean
probabilities \cite{Gudder-StatisticalMethods}.
\noindent It is possible to show that if $(\Omega,\Sigma,\mu)$ is a
kolmogorovian probability space, the \emph{inclusion-exclusion
principle} holds
\begin{equation}\label{e:SumRule}
\mu(A\cup B)=\mu(A)+\mu(B)-\mu(A\cap B)
\end{equation}
\noindent or (as expressed in logical terms)
\begin{equation}\label{e:SumRuleLogical}
\mu(A \vee B)=\mu(A)+\mu(B)-\mu(A \wedge B)
\end{equation}
\noindent As remarked in \cite{Redei99}, Eq. \eqref{e:SumRule} was
considered as crucial by von Neumann for the interpretation of
$\mu(A)$ and $\mu(B)$ as relative frequencies. If $N_{(A\cup B)}$,
$N_{(A)}$, $N_{(B)}$, $N_{(A\cap B)}$ are the numbers of times that
each event occurs in a series of $N$ repetitions, then
\eqref{e:SumRule} trivially holds.
\noindent As we shall discuss below, this principle no longer holds in
QM, a fact linked to the non-boolean character of QM. Thus, the
relative-frequencies' interpretation of quantum probabilities
becomes problematic \cite{Redei99}. The QM example shows that
non-distributive propositional structures play an important role in
probability theories {\it different from} that of Kolmogorov.
\subsection{Cox's approach}
\noindent Propositions of classical logic can be endowed with a Boolean
lattice structure \cite{Boole}. The logical implication
``$\longrightarrow$" is associated with a partial order relation
``$\leq$", the conjunction ``and" with the greatest lower bound
``$\wedge$", disjunction ``or" with the lowest upper bound
``$\vee$", and negation ``not" is associated with complement
``$\neg$". Boolean lattices can be characterized as ortholattices
satisfying:
\begin{itemize}\label{e:equationsboolean}
\item L1. $x\vee x = x$, $x\wedge x = x$ (idempotence)
\item L2. $x\vee y = y\vee x$, $x\wedge y = y\wedge x$ (commutativity)
\item L3. $x\vee (y\vee z) = (x\vee y)\vee z$, $x\wedge (y\wedge z) = (x\wedge y)\wedge z$ (associativity)
\item L4. $x\vee (x\wedge y) = x\wedge (x\vee y) = x$ (absorption)
\item D1. $x\wedge (y\vee z) = (x\wedge y) \vee (x\wedge z)$ (distributivity 1)
\item D2. $x\vee (y\wedge z) = (x\vee y) \wedge (x\vee z)$ (distributivity 2)
\end{itemize}
\noindent It is well known that boolean lattices can be represented as
subsets of a given set, with ``$\leq$" represented as set
theoretical inclusion $\subseteq$, ``$\vee$" represented as set
theoretical union ``$\cup$", ``$\wedge$" represented as set
intersection ``$\cap$", and $\neg$ represented as the set
theoretical complement ``$(\ldots)^{c}$".
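\vskip 3mm
\noindent This representation can be made fully concrete; the following
sketch (an illustration of our own, with the set $\{0,1,2\}$ chosen
arbitrarily) verifies the identities L1--L4 and D1--D2 exhaustively on the
power set of a three-element set:
\begin{verbatim}
# Sketch: the power set of {0,1,2} as a Boolean lattice; the identities
# L1-L4 and D1-D2 are checked exhaustively.
from itertools import chain, combinations

OMEGA = frozenset({0, 1, 2})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(OMEGA, k) for k in range(4))]

join = lambda x, y: x | y        # set union
meet = lambda x, y: x & y        # set intersection
neg  = lambda x: OMEGA - x       # set complement

for x in subsets:
    assert join(x, x) == x and meet(x, x) == x                      # L1
    assert meet(x, neg(x)) == frozenset() and join(x, neg(x)) == OMEGA
    for y in subsets:
        assert join(x, y) == join(y, x) and meet(x, y) == meet(y, x)   # L2
        assert join(x, meet(x, y)) == x and meet(x, join(x, y)) == x   # L4
        for z in subsets:
            assert join(x, join(y, z)) == join(join(x, y), z)          # L3
            assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z)) # D1
            assert join(x, meet(y, z)) == meet(join(x, y), join(x, z)) # D2
print("all Boolean lattice identities hold on the power set")
\end{verbatim}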
\vskip 3mm
\noindent As a typical feature, Cox develops classical probability theory
as an inferential calculus on boolean lattices. A real valued
function $\varphi$ representing the degree to which a proposition
$y$ implies another proposition $x$ is postulated, and its
properties deduced from the algebraic properties of the boolean
lattice (Eqns. \eqref{e:ComplementationAxioms} and
\eqref{e:equationsboolean}). These algebraic properties define
functional equations \cite{aczel-book} which determine the possible
choices of $\varphi$ up to rescaling. It turns out that
$\varphi(x|y)$ --if suitably normalized-- satisfies all the
properties of a Kolmogorovian probability (Eqs.
\eqref{e:kolmogorovian}). The deduction will be omitted here, and
the reader is referred to
\cite{CoxPaper,CoxLibro,Knuth-2004a,Knuth-2004b,Knuth-2005b} for
detailed expositions.
\vskip 3mm
\noindent Despite their formal equivalence, there is a great conceptual
difference between the approaches of Kolmogorov and Cox. In the
Kolmogorovian approach probabilities are naturally (though not
necessarily) interpreted as relative frequencies in a sample space. On the
other hand, the approach developed by Cox considers probabilities
as a measure of the degree of belief of an intelligent agent in the
truth of proposition $x$, given that $y$ is known to be true. This
measure is given by the real number $\varphi(x|y)$, and in this way
Cox's approach is more compatible with a \emph{Bayesian}
interpretation of probability theory.
\section{Quantum vs. classical probabilities}\label{s:probabilities}
\noindent In this Section we will introduce quantum probabilities and
look at their differences with classical ones. A great part of the
hardship faced by Birkhoff and von Neumann in developing the
logic of quantum mechanics was due to the inadequacies of
classical probability theory. Their point of view was that any
statistical physical theory could be regarded as a probability
theory, founded on a calculus of events. These events should be
the experimentally verifiable propositions of the theory, and the
structure of this calculus was to be deduced from empirical
considerations, which, for the quantum case, resulted in an
orthomodular lattice \cite{BvN,Srinivas}. We remark on the great
generality of this conception: there is no need to restrict it
to physics. Any statistical theory formulated as an event
structure fits into this scheme.
\noindent In the formulation of both classical and quantum statistical
theories, states can be regarded as representing consistent
probability assignments \cite{mackey-book,wilce}. In the quantum
mechanics instance {\it this ``states as mappings" visualization} is
achieved via \emph{postulating} a function \cite{mikloredeilibro}
\begin{subequations}\label{e:nonkolmogorov}
\begin{equation}
s:\mathcal{P}(\mathcal{H})\rightarrow [0;1]
\end{equation}
\noindent such that:
\begin{equation}\label{e:Qprobability1}
s(\textbf{0})=0 \,\, (\textbf{0}\,\, \mbox{is the null subspace}).
\end{equation}
\begin{equation}\label{e:Qprobability2}
s(P^{\bot})=1-s(P),\end{equation} \noindent and, for a denumerable
and pairwise orthogonal family of projections $\{P_{j}\}$
\begin{equation}\label{e:Qprobability3}
s(\sum_{j}P_{j})=\sum_{j}s(P_{j}).
\end{equation}
\end{subequations}
\noindent Gleason's theorem
\cite{Gleason,Gleason-Dvurechenski-2009}, tells us that if the
dimension of $\mathcal{H}$ is $\geq 3$, any measure $s$ satisfying
\eqref{e:nonkolmogorov} can be put in correspondence with a trace
class operator (of trace one) $\rho_{s}$ via the correspondence:
\begin{equation}\label{e:bornrule2}
s(P):=\mbox{tr}(\rho_{s} P)
\end{equation}
\noindent And vice versa: using equation \eqref{e:bornrule2} any trace
class operator of trace one defines a measure as in
\eqref{e:nonkolmogorov}. Thus, equations \eqref{e:nonkolmogorov}
define a probability: to any elementary test (or event), represented
by a projection operator $P$, $s(P)$ gives us the probability that
the event $P$ occurs, and this is experimentally granted by the
validity of Born's rule. But in fact, \eqref{e:nonkolmogorov} is not
a classical probability, because it does not obey Kolmogorov's
axioms \eqref{e:kolmogorovian}. The main difference comes from the
fact that the $\sigma$-algebra in (\ref{e:kolmogorovian}) is
boolean, while $\mathcal{P}(\mathcal{H})$ is not. Thus, quantum
probabilities are also called non-kolmogorovian (or non-boolean)
probability measures. The crucial fact is that, in the quantum case,
{\it we do not have a $\sigma$-algebra, but an orthomodular lattice
of projections.}
\noindent One of the most important ensuing differences expresses itself
in the fact that Eq. \eqref{e:SumRule} is no longer valid in QM.
Indeed, it may happen that
\begin{equation}
s(A)+s(B)\leq s(A\vee B)
\end{equation}
\noindent for $A$ and $B$ suitably chosen elementary sharp tests (see
\cite{Gudder-StatisticalMethods}, Chapter 2). Another important
difference comes from the difficulties which appear when one tries
to define a quantum conditional probability (see for example
\cite{Gudder-StatisticalMethods} and \cite{Redei-Summers2006} for a
comparison between classical and quantum probabilities). Quantum
probabilities may also be considered as a generalization of
classical probability theory: while in an arbitrary statistical
theory a state will be a normalized measure over a suitable
$C^{\ast}$-algebra, the classical case is recovered when the algebra
is \emph{commutative}
\cite{Gudder-StatisticalMethods,Redei-Summers2006}.
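\noindent A minimal numerical illustration of the failure of Eq.
\eqref{e:SumRule} (an example of our own, not taken from the references
above): in $\mathbb{C}^{3}$ one may take two one-dimensional projections $A$
and $B$ with $A\wedge B=\mathbf{0}$ and a state concentrated on a vector
lying in $A\vee B$; then $s(A\vee B)$ strictly exceeds
$s(A)+s(B)-s(A\wedge B)$.
\begin{verbatim}
# Sketch: a quantum state s(P) = tr(rho P) on C^3 violating the
# inclusion-exclusion principle even though A /\ B = 0.
import numpy as np

e0, e1, e2 = np.eye(3)
plus = (e0 + e1) / np.sqrt(2)

A = np.outer(e0, e0)                   # projection onto span{e0}
B = np.outer(plus, plus)               # projection onto span{(e0+e1)/sqrt 2}
A_join_B = np.outer(e0, e0) + np.outer(e1, e1)   # A v B = span{e0, e1}
# A /\ B is the zero subspace: the two lines meet only at the origin.

rho = np.outer(e1, e1)                 # a pure state lying inside A v B
s = lambda P: float(np.real(np.trace(rho @ P)))

print("s(A)     =", s(A))              # 0.0
print("s(B)     =", s(B))              # 0.5
print("s(A v B) =", s(A_join_B))       # 1.0 > s(A) + s(B) - s(A /\ B) = 0.5
\end{verbatim}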
\noindent We are thus faced with the following fact: on the one hand,
there exists a generalization of classical probability theory to
non-boolean operational structures. On the other hand, Cox derives
classical probabilities from the algebraic properties of classical
logic. As we shall see in detail below, this readily implies that
probabilities in CM are determined by the operational structure of
classical propositions (given by subsets of phase space). The
question is: is it possible to generalize Cox's method to arbitrary
propositional structures (representing the operational propositions
of an arbitrary theory) even when they are not boolean? What would
we expect to find? We will see that the answer to the first question
is \emph{yes}, and for the second, it is reasonable to recover
quantum probabilities (Eq. \eqref{e:nonkolmogorov}). This approach
may serve as a solution for a problem posed by von Neumann. In his
words:
\begin{quote}
``In order to have probability all you need is a concept of all
angles, I mean, other than 90. Now it is perfectly quite true that
in geometry, as soon as you can define the right angle, you can
define all angles. Another way to put it is that if you take the
case of an orthogonal space, those mappings of this space on itself,
which leave orthogonality intact, leave all angles intact, in other
words, in those systems which can be used as models of the logical
background for quantum theory, it is true that as soon as all the
ordinary concepts of logic are fixed under some isomorphic
transformation, all of probability theory is already fixed... This
means however, that one has a formal mechanism in which, logics and
probability theory arise simultaneously and are derived
simultaneously.\cite{Redei99}''
\end{quote}
\noindent and, as remarked by M. Redei \cite{Redei99}:
\begin{quote}
``It was simultaneous emergence and mutual determination of
probability and logic what von Neumann found intriguing and not at
all well understood. He very much wanted to have a detailed
axiomatic study of this phenomenon because he hoped that it would
shed ``... a great deal of new light on logics and probability alter
the whole formal structure of logics considerably, if one succeeds
in deriving this system from first principles, in other words from a
suitable set of axioms." He emphasized --and this was his
last thought in his address-- that it was an entirely open problem
whether/how such an axiomatic derivation can be carried out.''
\end{quote}
\noindent The problem posed above has remained thus far
unanswered, and this work may be considered as a concrete step
towards its solution. Before entering the subject, let us first
review an alternative approach.
\section{Alternative derivation of Feynman's rules}\label{s:KnuthGoyal}
\noindent Refs. \cite{Symmetry}, \cite{GoyalKnuthSkilling}, and
\cite{KnuthSkilling-2012} present a novel derivation of Feynman's
rules for quantum mechanics, based on a modern reformulation
\cite{Knuth-2005b} of Cox's ideas on the foundations of probability
\cite{CoxPaper,CoxLibro}. To start with, an experimental logic of
processes is defined for quantum systems. This is done in such a way
that the resulting algebra is a distributive one. Given $n$
measurements $M_1$,$\ldots$,$M_n$ on a given system, with results
$m_1$, $m_2$, $\ldots$, $m_n$, the latter are organized in a
\emph{measuring sequence} $A=[m_1,m_2,\ldots,m_n]$ as a particular
process. The measuring sequence $A=[m_1,m_2,\ldots,m_n]$ must not be
confused with the conditional (logical) proposition of the form
$(m_2,\ldots,m_n|m_1)$. Sequence $A$ has an associated probability
$P(A)=Pr(m_n,\ldots,m_2|m_1)$ of obtaining outcomes $m_2$, $\ldots$,
$m_n$ conditional upon obtaining $m_1$ \cite{Symmetry}.
\noindent If each of the $m_i$'s has two possible values, $1$ and
$2$, a measuring sequence of three measurements is for example
$A_1=[1,2,1]$. Another one could be $A_2=[1,1,2]$, and so on.
\noindent As is explained in Ref. \cite{Symmetry}:
\begin{quotation}
``A particular outcome of a measurement is either atomic or
coarse-grained. An atomic outcome cannot be more finely divided in
the sense that the detector whose output corresponds to the outcome
cannot be sub-divided into smaller detectors whose outputs
correspond to two or more outcomes. A coarse-grained outcome is one
that does not differentiate between two or more outcomes."
\end{quotation}
\noindent Thus, if we want to ``coarse grain" a certain measurement,
say $M_2$, we can unite the two outcomes in a joint outcome $(1,2)$,
yielding the experiment (measurement) $\widetilde{M}_2$. Thus, a
possible sequence obtained by the replacement of $M_2$ by
$\widetilde{M}_2$ could be $[1,(1,2),1]$. This is used to define a
logical operation
\begin{equation}
[m_1,\ldots,(m_i,m'_i),\ldots,m_n]=[m_1,\ldots,m_i,\ldots,m_n]\vee[m_1,\ldots,m'_i,\ldots,m_n]
\end{equation}
\noindent It is intended that sequences of measurements can be
compounded. For example, if we have $[m_1,m_2]$ and $[m_2,m_3]$, we
have also the sequence $[m_1,m_2,m_3]$, paving the way for the
general definition
\begin{equation}
[m_1,\ldots,m_j,\ldots,m_n]=[m_1,\ldots,m_j]\cdot[m_j,\ldots,m_n]
\end{equation}
\noindent Given measuring sequences $A$, $B$ and $C$, these
operations satisfy
\begin{subequations}\label{e:ExperimentalLogicKnuth}
\begin{equation}
A\vee B=B\vee A
\end{equation}
\begin{equation}
(A\vee B)\vee C=A\vee(B\vee C)
\end{equation}
\begin{equation}
(A\cdot B)\cdot C=A\cdot(B\cdot C)
\end{equation}
\begin{equation}
(A\vee B)\cdot C=(A\cdot C)\vee(B\cdot C)
\end{equation}
\begin{equation}
C\cdot(A\vee B)=(C\cdot A)\vee(C\cdot B),
\end{equation}
\end{subequations}
\noindent and thus, we have commutativity and associativity of the
operation ``$\vee$", associativity of the operation ``$\cdot$", and
right- and left-distributivity of ``$\cdot$" over ``$\vee$".
\noindent We have already seen in Section \ref{s:CoxReview} that the
method of Cox consists of deriving probability and entropy from the
symmetries of a boolean lattice, intended to represent our
propositions about the world, while probability is interpreted as a
measure of knowledge about an inference calculus. Once equations
(\ref{e:ExperimentalLogicKnuth}) are cast, the set-up for the
derivation of Feynman's rules is ready. The path to follow now is to
apply Cox's method to the symmetries defined by equations
(\ref{e:ExperimentalLogicKnuth}). But this cannot be done
straightforwardly. In order to proceed, an important assumption has
to be made: each measuring sequence will be represented by a pair of
real numbers. This -non operational- assumption is justified in
\cite{Symmetry} using Bohr's complementarity principle. As we shall
see below, the method proposed in this article is an alternative one,
which is more direct and systematic, and makes the introduction of
these assumptions somewhat clearer.
\noindent Once a pair of real numbers is assigned to any measuring
sequence, the authors of \cite{Symmetry} reasonably assume that
equations (\ref{e:ExperimentalLogicKnuth}) induce operations onto
pairs of real numbers. If measuring sequences $A$, $B$, etc. induce
pairs of real numbers $\mathbf{a}$, $\mathbf{b}$, etc., then, we
should have
\begin{subequations}\label{e:ExperimentalLogicComplex}
\begin{equation}
\mathbf{a}\vee \mathbf{b}=\mathbf{b}\vee \mathbf{a}
\end{equation}
\begin{equation}
(\mathbf{a}\vee \mathbf{b})\vee
\mathbf{c}=\mathbf{a}\vee(\mathbf{b}\vee \mathbf{c})
\end{equation}
\begin{equation}
(\mathbf{a}\cdot
\mathbf{b})\cdot\mathbf{c}=\mathbf{a}\cdot(\mathbf{b}\cdot
\mathbf{c})
\end{equation}
\begin{equation}
(\mathbf{a}\vee \mathbf{b})\cdot \mathbf{c}=(\mathbf{a}\cdot
\mathbf{c})\vee(\mathbf{b}\cdot \mathbf{c})
\end{equation}
\begin{equation}
\mathbf{c}\cdot(\mathbf{a}\vee \mathbf{b})=(\mathbf{c}\cdot
\mathbf{a})\vee(\mathbf{c}\cdot \mathbf{b})
\end{equation}
\end{subequations}
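\noindent Before commenting on these relations, we record a small consistency
check (a sketch of our own, with arbitrary sample values) for the realization
in which pairs of reals are complex numbers, ``$\vee$'' is addition and
``$\cdot$'' is multiplication:
\begin{verbatim}
# Sketch: complex amplitudes satisfy the pair relations when "v" is read
# as addition and "." as multiplication.  Sample values are arbitrary.
a, b, c = 0.3 + 0.4j, -1.0 + 0.2j, 0.5 - 0.7j
tol = 1e-12

assert abs((a + b) - (b + a)) < tol               # commutativity of "v"
assert abs(((a + b) + c) - (a + (b + c))) < tol   # associativity of "v"
assert abs(((a * b) * c) - (a * (b * c))) < tol   # associativity of "."
assert abs(((a + b) * c) - (a*c + b*c)) < tol     # right distributivity
assert abs((c * (a + b)) - (c*a + c*b)) < tol     # left distributivity
\end{verbatim}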
\noindent We easily recognize in (\ref{e:ExperimentalLogicComplex})
operations satisfied by the complex numbers' field (provided that
the operations are interpreted as sum and product of complex
numbers). If this were the only possible instance, sequences
represented by pairs of real numbers would be complex numbers, and
thus, we could easily obtain Feynman's rules. However, complex numbers
are not the only entities that satisfy
(\ref{e:ExperimentalLogicComplex}). There are other such entities,
and thus, extra assumptions have to be made in order to restrict
possibilities. These additional assumptions are presented in
\cite{Symmetry} and \cite{GoyalKnuthSkilling}, and improved upon in
\cite{KnuthSkilling-2012}. We list them below (and refer the reader
to Refs. \cite{Symmetry}, \cite{GoyalKnuthSkilling}, and
\cite{KnuthSkilling-2012} for details).
\begin{itemize}
\item Pair symmetry
\item Additivity condition
\item Symmetric bias condition
\end{itemize}
\noindent Leaving aside the fact that these extra assumptions are
more or less reasonable (justifications for their use are given in
\cite{KnuthSkilling-2012}), it is clear that the derivation is quite
indirect: the experimental logic is thus defined in order to yield
algebraic rules compatible with complex multiplication (and the rest
of the strategy is to make further assumptions in order to discard
other fields different from that of complex numbers). Further, the
experimental logic characterized by equations
(\ref{e:ExperimentalLogicKnuth}) is not the only possibility, as we
have seen in Section \ref{s:LogicOperational}.
\noindent \fbox{\parbox{0.97\linewidth}{In the rest of this work, we
will apply Cox's method to general propositional structures
according to the quantum logical approach. We will see that this
allows for a new perspective which sheds light onto the structure of
non-boolean probabilities, and is at the same time susceptible of
great generalization. It opens the door to a general derivation of
alternative kinds of probabilities, including quantum and classical
theories as particular cases.}}
\noindent Yet another important remark is in order. As noted in
the Introduction, the work presented in \cite{Symmetry},
\cite{Knuth-2005b}, and \cite{GoyalKnuthSkilling} -as well as
ours- is a combination of two approaches: 1) the one which defines
propositions in an empirical way (something which it shares with
the OQL approach) and 2) that of Cox. Cox's spirit was to derive
probabilities out of Chomsky's generative propositional
structures that are ingrained in our brain \cite{chom}, and this
boolean structure is independent of any experimental information.
This does not imply, though, that the empirical logic needs to
satisfy the same algebra that pervades our thinking, and indeed
it does not. In this sense, any derivation involving
empirical or operational logics deviates from the original intent
of Cox. As we shall see, this is not a problem, but rather an
important advantage in practice.
\section{Cox's method applied to non-boolean
algebras}\label{s:CoxNon-Boolean}
\noindent As seen in Section \ref{s:LogicOperational}, {\it
operationally motivated axioms imposed on a lattice's propositional
structure can be used to describe quantum mechanics and other
theories as well.} Disregarding the discussion about the operational
validity of this construction, we are only interested in the fact
that the embodiment is feasible. Similar constructions can be made
for many physical systems, beyond quantum mechanics: the connection
between any theory and experience is given by an event structure
(elementary tests), and these events can be organized in a lattice
structure in most examples of interest.
\noindent {\sf Thus, our point of departure will be the fact that physical
systems can be represented by propositional lattices, and that these
lattices need not be necessarily distributive. We will consider
atomic orthomodular lattices. \noindent Given a system $S$, and its
propositional lattice $\mathcal{L}$, we proceed to apply Cox's
method in order to develop an inferential calculus on
$\mathcal{L}$.}
\subsection{Classical Mechanics}\label{s:ClassicalDerivation}
\noindent We start with classical mechanics (that theory satisfying
Hamilton's equations). Given a classical system $S_{C}$, the
propositional structure is a boolean one, isomorphic to the boolean
lattice used in our logical language (i.e., regarding its
algebraic structure, it is the same as the one used by Cox).
Accordingly, as shown in Section \ref{s:CoxReview} (proceeding in
the same way as Cox \cite{CoxLibro,CoxPaper}), the corresponding
probability calculus has to be the one which obeys the laws of
Kolmogorov (it satisfies -in particular- equations
\ref{e:kolmogorovian}), and the corresponding information measure is
Shannon's, as expected.
\subsection{Quantum case}\label{s:QMDerivation}
\noindent As shown by Birkhoff and von Neumann in \cite{BvN}, if we
follow the above path and try to define the propositional
structure for a quantum system $S_{Q}$ we find an orthomodular
lattice $\mathcal{L}_{v\mathcal{N}}(\mathcal{H})$ isomorphic to
the lattice of projections $\mathcal{P}(\mathcal{H})$. What are we
going to find if we apply instead Cox's method? It stands to
reason that we would encounter a non-boolean probability measure
with the properties \emph{postulated} in Section
\ref{s:probabilities} (Eqs. \eqref{e:nonkolmogorov}). Let us see
that this is indeed the case.
\noindent The first thing to remark is that in this derivation we assume
to have a non-boolean lattice
$\mathcal{L}_{v\mathcal{N}}(\mathcal{H})$, isomorphic to the
lattice of projections $\mathcal{P}(\mathcal{H})$. We must show
that the ``degree of implication" measure $s(\cdots)$ demanded by
Cox's method satisfies Eqs. \eqref{e:nonkolmogorov}. We will only
consider the case of prior probabilities. This means that we ask
for the probability that a certain event happens for a given state
of affairs, i.e., a concrete preparation of the system under
certain circumstances (which could be natural or artificial).
Thus, we are looking for a real-valued function $s$ which is
non-negative and satisfies $s(P)\leq s(Q)$ whenever $P\leq Q$.
\noindent Under these assumptions, let us consider the operation
``$\vee$". As the direct sum of subspaces is associative,
``$\vee$" will be associative too. If $P$ and $Q$ are orthogonal
projections ($P\perp Q$), then $P\wedge Q=\mathbf{0}$ (otherwise,
there would be a non-zero vector lying in both $P$ and $Q$, which
cannot be orthogonal to itself). Next, we consider the relationship between $s(P)$,
$s(Q),$ and $s(P\vee Q)$. As $P\wedge Q=\mathbf{0}$, it should
happen that
\begin{equation}\label{e:s(or)}
s(P\vee Q)=F(s(P),s(Q)),
\end{equation}
\noindent with $F$ a function to be determined. Add now a third
proposition $R$ (notice that, for doing this, we need a space of
dimension $d\geq 3$, an interesting analogy with Gleason's
theorem), such that $P\perp R$, $Q\perp R,$ and $Q\perp P$ (and
thus $P\wedge R=\mathbf{0}$, $Q\wedge R=\mathbf{0},$ and $Q\wedge
P=\mathbf{0}$). Build now the element $(P\vee Q)\vee R$. Then,
because of the associativity of ``$\vee$", we arrive at the
following result
\begin{equation}
s((P\vee Q)\vee R)=s(P\vee(Q\vee R)),
\end{equation}
\noindent and thus (using \eqref{e:s(or)}),
\begin{equation}\label{e:FunctEq1}
F(F(s(P),s(Q)),s(R))=F(s(P),F(s(Q),s(R))).
\end{equation}
\noindent The algebraic properties of associativity for $\vee$ and
$\perp$ are the only prerequisite for this result. Thus, proceeding
as in \cite{Knuth-2004a,Knuth-2004b,Knuth-2005b} (and using the
solutions to functional equations of the form \eqref{e:FunctEq1}
studied in \cite{aczel-book}), we have that --up to a re-scaling:
\begin{equation}
s(P\vee Q)=s(P)+s(Q),
\end{equation}
\noindent whenever $P\perp Q$. It thus follows that for any finite family
of orthogonal projections $P_j$, $1\leq j\leq n$, we have $s(P_1\vee
P_2\vee\cdots\vee P_n)=s(P_1)+s(P_2)+\cdots+s(P_n)$. Now, as any
projection $P$ satisfies $P\leq \mathbf{1}$, then $s(P)\leq
s(\mathbf{1})$, and we can assume without loss of generality the
normalization condition $s(\mathbf{1})=1$. Thus, for any denumerable
pairwise orthogonal infinite family of projections $P_j$, we have
for each $n$
\begin{equation}
\sum_{j=1}^{n}s(P_j)=s(\bigvee_{j=1}^{n}P_j)\leq 1.
\end{equation}
\noindent As $s(P_j)\geq 0$ for each $j$, the sequence
$s_n=s(\bigvee_{j=1}^{n}P_j)$ is monotone, bounded from above, and
thus converges. We write then
\begin{equation}
s(\bigvee_{j=1}^{\infty}P_j)=\sum_{j=1}^{\infty}s(P_j),
\end{equation}
\noindent and we recover condition \eqref{e:Qprobability3} of the
axioms of quantum probability. Now, given any proposition
$P\in\mathcal{L}_{v\mathcal{N}}(\mathcal{H})$, consider $P^{\perp}$.
As $P\vee P^{\perp}=\mathbf{1}$, and $P$ is orthogonal to
$P^{\perp}$, we have
\begin{equation}
s(P\vee P^{\perp})=s(P)+s(P^{\perp})=s(\mathbf{1})=1.
\end{equation}
\noindent In other words,
\begin{equation}
s(P^{\perp})=1-s(P),
\end{equation}
\noindent which is nothing but condition \eqref{e:Qprobability2}. On the
other hand, as $\mathbf{0}=\mathbf{0}\vee\mathbf{0}$ and
$\mathbf{0}\bot\mathbf{0}$, then
$s(\mathbf{0})=s(\mathbf{0})+s(\mathbf{0})$, and thus,
$s(\mathbf{0})=0$, which is condition \eqref{e:Qprobability1}.
\noindent \fbox{\parbox{0.97\linewidth}{ \emph{This Section shows
that using the algebraic properties of $\mathcal{L}_{v\mathcal{N}}$,
it is possible to derive the form of the quantum probabilities
which, on the light of this discussion, do not need to be
postulated.}}}\vskip 2mm
\noindent Thus, we have proved that $s$ is a probability measure on
$\mathcal{L}_{v\mathcal{N}}$. Is there any possibility that $s$
differs from the standard formulation of a quantum probability
measure as a density matrix using Born's rule? The answer is no,
and this is granted by Gleason's theorem, because we have proved
that $s$ satisfies Eqs. \eqref{e:nonkolmogorov}, and Gleason's
theorem leaves no alternative (if the dimension of $\mathcal{H}\geq
3$).
\noindent An important question is the following: which will be the
effect of non-distributivity? As we saw in Section
\ref{s:probabilities}, classical probabilities are sub-additive,
i.e., they satisfy
\begin{equation}\label{e:SubadditiveProb}
\mu(A\vee B)\leq\mu(A)+\mu(B),
\end{equation}
\noindent and this is linked to the stronger assertion of Eq.
\eqref{e:SumRule} (see also \cite{mikloredeilibro}, page $104$). But
as we have seen, it is indeed the case that the analogue of Eq.
\eqref{e:SubadditiveProb} does not hold for quantum probabilities.
\vskip 3mm
\noindent We show below that this derivation is susceptible of
generalization. Indeed, the derivation relies mainly on the
algebraic properties of the lattice of projections, i.e., in its
\emph{non-distributive} lattice structure.
\subsection{A Finite Non-distributive Example}\label{s:FiniteCase}
\noindent There are many systems of interest which can be represented by
finite lattices. Many toy models serve to illustrate special
features of different theories. Let us start first by analyzing
$L_{12}$, a non-distributive lattice which may be considered as the
union of two incompatible experiments \cite{svozillibro}. The Hasse
diagram of $L_{12}$ is represented in Fig. \ref{f:Ejemplo1}. An
example of $L_{12}$ is provided by a firefly which flies inside a
room. The first experiment is to test whether the firefly shines on the
right side (r) of the room, or on the left (l), or if it does not
shine at all (n). The other experiment consists in testing whether the
firefly shines at the front of the room (f), or at the back (b) of
it, or if it does not shine (n). It is forbidden to make both
experiments at the same time (thus emulating contextuality).
\noindent Applying Cox's method to the boolean sublattices of $L_{12}$
(see Fig. \ref{f:Ejemplo1}) and suitably normalizing, we obtain
classical probabilities for each one of them. It is also easy to
find that $P(l)+P(r)+P(f)+P(b)+P(n)=2-P(n)$, a quantity which may be
greater than $1$. The last equation implies explicitly that the
inclusion-exclusion law \eqref{e:SumRule} does not hold. This is due
to the global non-distributivity of $L_{12}$. It is also easy to
show that $P(l)+P(r)=P(f)+P(b)$, yielding a non-trivial relationship
between atoms (something which does not occur in a distributive
lattice).
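\noindent A numerical rendering of this computation (a sketch of our own; the
atom probabilities below are arbitrary sample values) assigns a classical
measure to each of the two boolean blocks of $L_{12}$, with the common atom
$n$ shared between them:
\begin{verbatim}
# Sketch: prior probabilities on the two Boolean blocks of L_12 (firefly
# box).  The atom n ("no light") is shared; sample values are arbitrary.
P = {"l": 0.3, "r": 0.5, "n": 0.2}        # block 1: P(l)+P(r)+P(n) = 1
P.update({"f": 0.45, "b": 0.35})          # block 2: P(f)+P(b)+P(n) = 1

total = P["l"] + P["r"] + P["f"] + P["b"] + P["n"]
assert abs(total - (2 - P["n"])) < 1e-12  # sum over atoms = 2 - P(n) = 1.8
assert abs((P["l"] + P["r"]) - (P["f"] + P["b"])) < 1e-12  # block relation
print("sum over all atoms =", total, "> 1: inclusion-exclusion fails")
\end{verbatim}
\noindent Since the join of all five atoms is the top element $\mathbf{1}$,
whose measure is $1$, the sum $2-P(n)$ exceeding $1$ makes the failure of the
inclusion-exclusion law explicit.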
\begin{figure}
\caption{\small{Hasse diagram for $L_{12}$.}}
\label{f:Ejemplo1}
\end{figure}
\subsection{General Derivation}\label{s:GeneralDerivation}
\noindent Let $\mathcal{L}$ be an atomic orthomodular lattice. We will
compute prior probabilities, i.e., we will assume that
$\mathcal{L}$ represents the propositional structure of a given
system --physical or not--, and that we want to ascertain how
likely a given event is when represented by a lattice element
$a\in\mathcal{L}$. One assumes that the system has undergone a
preparation, i.e., we take it that there exists a \emph{state of
affairs}.
\noindent We must define a function $s:\mathcal{L}\longrightarrow
\mathbb{R}$, such that it is always non-negative
\begin{subequations}
\begin{equation}
s(a)\geq 0\,\,\forall a\in\mathcal{L}
\end{equation}
and is also order preserving
\begin{equation}
a\leq b \longrightarrow s(a)\leq s(b).
\end{equation}
\end{subequations}
\noindent We will show that under these rather general assumptions a
probability theory can be developed. The order preserving assumption
readily implies that $s(a)\leq s(\textbf{1})$ for all
$a\in\mathcal{L}$. We will also assume that $s(\textbf{1})=K$, a
finite real number.
\noindent Now, as an ortholattice is complemented (Eqs.
\eqref{e:ComplementationAxioms}), we will always have that
$\neg\neg a=a$ for all $a\in\mathcal{L}$. Accordingly,
\begin{equation}\label{e:Sdenonoa}
s(\neg\neg a)=s(a),
\end{equation}
\noindent for all $a$. Next, it is also reasonable to assume that $s(\neg
a)$ is a function of $s(a)$, say $s(\neg a)=g(s(a))$. Thus,
Eqs. \eqref{e:Complement1} and \eqref{e:Sdenonoa} imply
\begin{equation}\label{e:FunctionalEqG}
g(g(s(a)))=s(a),
\end{equation}
\noindent or, in other words,
\begin{equation}\label{e:FunctionalEqGbis}
g(g(x))=x,
\end{equation}
\noindent for positive $x$. A family of functions which satisfy
\eqref{e:FunctionalEqG} are $g(x)=x$ and $g(x)=c-x$, where $c$ is a
real constant\footnote{There are additional solutions to this equation,
but, if we suitably choose scales, we can disregard them in favour of
these two cases. For a discussion on re-scaling of measures, their
meaning and validity, see \cite{CoxLibro}.}. We discard the first
possibility because if true, we would have $s(\mathbf{0})=s(\neg
\mathbf{1})=g(s(\mathbf{1}))=s(\mathbf{1})$. But if
$s(\mathbf{0})=s(\mathbf{1})$, because of $\mathbf{0}\leq
x\leq\mathbf{1}$ for all $x\in\mathcal{L}$, we have
$s(\mathbf{0})=s(x)=s(\mathbf{1})$, and our measure would be
trivial. Thus, the only non-trivial option ---up to rescaling--- is
$s(\neg a)=c-s(a)$.
\noindent Now, let us see what happens with the ``$\vee$" operation. As
$\mathcal{L}$ is orthocomplemented, the orthogonality notion for
elements is available (see Section \ref{s:LatticeTheory}). If
$a,b\in\mathcal{L}$ and $a\bot b$, because of \eqref{e:Complement2},
we have that $a\wedge b=\mathbf{0}$. Thus, it is reasonable to
assume that $s(a\vee b)$ is a function of $s(a)$ and $s(b)$ only,
i.e., $s(a\vee b)=f(s(a),s(b))$. By associativity of the ``$\vee$"
operation, $(a\vee b)\vee c=a\vee(b\vee c)$ for any
$a,b,c\in\mathcal{L}$, and this implies then that $s((a\vee b)\vee
c)=s(a\vee(b\vee c))$. If $a$, $b$, and $c$ are orthogonal, we will
have for the left hand side $s((a\vee b)\vee
c)=f(f(s(a),s(b)),s(c))$ and $s(a\vee (b\vee
c))=f(s(a),f(s(b),s(c)))$ for the right hand side. Thus,
\begin{equation}
f(f(s(a),s(b)),s(c))=f(s(a),f(s(b),s(c))),
\end{equation}
\noindent or, in a simpler form
\begin{equation}\label{e:FunctionalEq2}
f(f(x,y),z)=f(x,f(y,z)),
\end{equation}
\noindent with $x$, $y,$ and $z$ positive real numbers. As shown in
\cite{aczel-book}, the only solution (up to re-scaling) of
\eqref{e:FunctionalEq2} is $f(x,y)=x+y$. We have thus shown that if
$a\bot b$
\begin{equation}
s(a\vee b)=s(a)+s(b),
\end{equation}
\noindent and we will also have
\begin{equation}\label{e:FiniteSum}
s(a_1\vee a_2\vee\cdots\vee a_n)=s(a_1)+s(a_2)+\cdots+s(a_n),
\end{equation}
\noindent whenever $a_1$, $a_2$, $\cdots$, $a_n$ are pairwise orthogonal.
Suppose now that $\{a_i\}_{i\in\mathbb{N}}$ is a family of pairwise
orthogonal elements of $\mathcal{L}$. For any finite $n$, we have
that $a_1\vee a_2\vee\cdots\vee a_n\leq \mathbf{1}$, and thus
$s(a_1\vee a_2\vee\cdots\vee a_n)=s(a_1)+s(a_2)+\cdots+s(a_n)\leq
s(\mathbf{1})=K$. Then, $s_n=s(a_1\vee a_2\vee\cdots\vee a_n)$ is a
monotone sequence bounded from above, and thus it converges to a
real number. As
$\bigvee\{a_{i}\}_{i\in\mathbb{N}}=\lim_{n\longrightarrow\infty}\bigvee_{i=1}^{n}a_i$,
we can write
\begin{equation}
s(\bigvee\{a_i\}_{i\in\mathbb{N}})=\sum_{i=1}^{\infty}s(a_i).
\end{equation}
\noindent In any orthomodular lattice we have $\mathbf{1}\bot \mathbf{0}$
(because $\mathbf{0}\leq \neg \mathbf{1}=\mathbf{0}$), and
$\mathbf{1}\vee\mathbf{0}=\mathbf{1}$. Thus, $s(\mathbf{1}\vee
\mathbf{0})=s(\mathbf{1})=s(\mathbf{1})+s(\mathbf{0})$. Accordingly,
$s(\mathbf{0})=0$. As $\neg \mathbf{1}=\mathbf{0}$,
$s(\mathbf{0})=c-s(\mathbf{1})$. Thus, $s(\mathbf{1})=c$ and then,
$c=K$. We will not lose generality if we assume the normalization
condition $K=1$.
\vskip3mm
\noindent The results of this section show that in \emph{any} orthomodular
lattice, a reasonable measure $s$ of plausibility of a given event
must satisfy that, for any orthogonal denumerable family
$\{a_i\}_{i\in\mathbb{N}}$, one has (up to rescaling)
\begin{subequations}\label{e:GeneralizedProbability}
\begin{equation}
s(\bigvee\{a_i\}_{i\in\mathbb{N}})=\sum_{i=1}^{\infty}s(a_i)
\end{equation}
\begin{equation}
s(\neg a)= 1-s(a)
\end{equation}
\begin{equation}
s(\mathbf{0})=0.
\end{equation}
\end{subequations}
\vskip3mm
\noindent Why do Eqs. \eqref{e:GeneralizedProbability} define
non-classical (non-Kolmogorovian) probability measures? In a
non-distributive orthomodular lattice there always exist elements
$a$ and $b$ such that
\begin{equation}
(a\wedge b)\vee (a\wedge\neg b)<a,
\end{equation}
\noindent so that (using $(a\wedge\neg b)\bot(a\wedge b)$),
$s((a\wedge\neg b)\vee(a\wedge b))=s(a\wedge\neg b)+s(a\wedge b)\leq
s(a)$. The inequality can be strict, as the quantum case shows. But
in any classical probability theory, by virtue of the
inclusion-exclusion principle (Eqn. \eqref{e:SumRuleLogical}), we
always have $s(a\wedge\neg b)+s(a\wedge b)= s(a)$. This simple fact
shows that our measures will be non-classical in the general case.
\vskip 3mm
\noindent \fbox{\parbox{0.97\linewidth}{ In this way, we provide an
answer to the problem posed by von Neumann and discussed in Section
\ref{s:probabilities}: \emph{the algebraic and logical properties of
the operational event structure determine up to rescaling the
general form of the probability measures which can be defined over
the lattice.} Accordingly, we did present a generalization of the
Cox method to non-boolean structures, namely orthomodular lattices,
and then \emph{we have indeed deduced a generalized probability
theory.}}}\vskip 2mm
\noindent Remark that boolean lattices are also orthomodular. This means
that our derivation is a generalization of that of Cox, and when we
face a boolean structure, classical probability theory will be
deduced exactly as in \cite{CoxPaper,CoxLibro}.
\noindent Another important remark is that -as the reader may check-
the derivation presented here \emph{is also valid} for
$\sigma$-orthocomplete orthomodular posets (see Section
\ref{s:LogicOperational} of this work), and thus, of great
generality. Indeed, in these structures the disjunction exists for
denumerable collections of orthogonal elements and it is
associative. This fact, together with orthocomplementation, make the
above results remain valid in these structures. But as it was
mentioned in Section \ref{s:LogicOperational},
$\sigma$-orthocomplete orthomodular posets are not lattices in the
general case. In this way, we remark that the generalization of
Cox's method goes well beyond lattice theory, yielding a
generalized probability calculus which includes the general physical
approach presented, for example, in \cite{Gudder-StatisticalMethods}.
\subsection{A general methodology}\label{s:GeneralMethod}
\noindent At this stage it is easy to envisage just how a general method
could be developed. One first starts by identifying the algebraic
structure of the elementary tests of a given theory $\mathcal{T}$.
They determine all observable quantities and, in the most important
examples, they are endowed with a lattice structure. Once the
algebraic properties of the pertinent lattice are fixed, Cox's
method can be used to \emph{determine} (up to rescaling) the general
properties of prior probabilities. Of course, it does not
predetermine the particular values that these probabilities might
take on the atoms of the theory. Such a specification would amount
to determining a \emph{particular state} of the system under scrutiny.
The whole set of these states can be rearranged to yield a convex
set. The path we have followed here is illustrated in Fig.
\ref{f:MetodoGeneral}, with the classical and quantum cases as
extreme examples of a vast family.
\begin{figure}
\caption{\small{Schematic representation of the method proposed
here for a general theory $\mathcal{T}$.}}
\label{f:MetodoGeneral}
\end{figure}
\noindent The method described in this work can be seen as a general
epistemological background for a huge family of scientific theories.
In order to have a (quantitative) scientific theory, we must be able
to make predictions on certain events of interest. Events are
regarded here as propositions susceptible of being tested. Thus, one
starts from an inferential calculus which allows for quantifying
the degree of belief on the certainty of an event $x$, if it is
known that event $y$ has occurred. The crucial point is that event
structures are not always organized as boolean lattices (QM being
perhaps the most spectacular example). Thus, in order to determine
the general properties of the probabilities of a given theory (and
thus all possible states by specifying particular values of prior
probabilities on the atoms), we must apply Cox's method to lattices
more general than boolean ones.
\section{Conclusions}\label{s:Conclusions}
\noindent By complementing the results presented in
\cite{Caticha-99,Knuth-2004a,Knuth-2004b,Knuth-2005a,Knuth-2005b,Symmetry,KnuthSkilling-2012},
in this work we showed that it is possible to combine Cox's method
for the foundations of classical probability theory and the OQL
approach, in order \emph{to give an alternative derivation of what
is known as non-kolmogorovian (or non-commutative) probability
theory} \cite{Gudder-StatisticalMethods,Redei-Summers2006}.
\vskip 3mm
\noindent Most physical probabilistic theories of interest can be
endowed with orthomodular lattices. The elements of these lattices
represent the events of that theory. As in
\cite{Knuth-2004b,Knuth-2005a}, we have studied how the algebraic
properties of the lattices determine
---up to rescaling--- the general form of the probabilities
associated to the given physical theory. In this way, we provided a
new formulation of the approach to physics based on
non-kolmogorovian probability theory
\cite{Gudder-StatisticalMethods}.
\vskip 3mm
\noindent Differently from
\cite{Caticha-99,Knuth-2004a,Knuth-2004b,Knuth-2005a,Knuth-2005b,Symmetry,KnuthSkilling-2012},
in this work we have focused on the application of Cox's method
to the von Neumann lattice of projection operators in a separable
Hilbert space and general orthomodular lattices as well, studying
their particularities. In doing so, we obtained the von Neumann
axioms of quantum probabilities, and thus (using Gleason's theorem),
quantum probabilities. In this way, our approach includes quantum
mixtures explicitly and naturally.
\vskip 3mm
\noindent It is interesting to remark (as in
\cite{Knuth-2004b,Knuth-2005a}) that this construction is not
necessarily restricted to physics; any probabilistic theory endowed
with a lattice structure of events will follow the same route.
\vskip 3mm
\noindent We have also found that the method can be easily extended
to $\sigma$-orthocomplete orthomodular posets (which are not lattices in the
general case), and thus, it contains generalizations of QM.
\vskip 3mm
\noindent In deriving quantum probability out of the lattice properties,
we have shown in a direct way how non-distributivity forbids the
derivation of a Kolmogorovian probability. This sheds light on the
structure of quantum probabilities and on their differences with
the classical case. In particular, we have shown that \emph{the
properties of the underlying algebraic structure imply that the
inclusion-exclusion principle is not valid for the von Neumann
lattice}. And, moreover, that it can also fail in the more general
framework of orthomodular (non-distributive) lattices. We have
shown the explicit violation of the inclusion-exclusion principle
for the finite example of the Chinese lantern, and we have also exhibited
a non-trivial relationship between atoms.
\vskip 3mm
\noindent Furthermore, we have provided a non-trivial connection
between Cox's approach and the problem posed by von Neumann
regarding the axiomatization of probabilities. This is a novel
perspective on the origin and axiomatization of the probabilities
which appear in quantum theory. We believe that -far from being a
definitive answer to the interpretation of quantum probabilities-
this is an interesting topic to be further investigated.
\vskip 3mm
\noindent Using Cox's approach, Shannon's entropy can be deduced as a
natural measure of information over the boolean algebra of classical
propositions \cite{CoxLibro,KnuthSkilling-2012,Knuth-2005b}. We will
provide a detailed study of what happens with orthomodular lattices
elsewhere.
\vskip 3mm
\noindent The strategy followed in this work suggests that we are at the
gates of a great generalization. The general rule for constructing
probabilities would read as follows:
\begin{itemize}
\item 1 - We start by identifying the operational logic of our physical
system. The characteristics of this ``empirical" logic depend
both on physical properties of the system and on the choice of
the properties that we assume in order to study the system. This
can be done in a standard way, and the method is provided by the
OQL approach.
\item 2 - Once the operational logic is identified, the symmetries
of the lattice are used to define the properties of the ``degree
of implication" function, which will turn out to be the
probability function associated to that particular logic. Remark
that the same physical system may have different propositional
structures, depending on the choices made by the observers. For
example, if we look at the observable ``electron's charge", we
will face classical propositions, but if we look at its momentum
and position, we will have a non-boolean lattice.
\end{itemize}
\noindent This method is not only of physical interest, but also of
mathematical one, because one is solving the problem of
characterizing probability measures over general lattices (and other
structures as well). A final remark: our approach deviates from that
of Cox, in the sense that we look for an empirical logic which would
be intrinsic to the system under study, and which therefore refers not
only to our ignorance about it, but also to assumptions about
its nature. Far from being a problem, this is an advantage: any problem
which can be transformed into the language of lattice theory (or the
more general framework of $\sigma$-orthocomplete orthomodular
posets) may fall into this scheme, and a probability theory can be
developed using the method described in this work.
\vskip1truecm
\noindent {\bf Acknowledgements} \noindent This work was partially
supported by the grants PIP N$^o$ 6461/05 and 1177 (CONICET). Also
by the projects FIS2008-00781/FIS (MICINN) - FEDER (EU) (Spain, EU).
The contribution of F. Holik to this work was done during his
postdoctoral stay (CONICET) at Universidad Nacional de La Plata,
Instituto de F\'{\i}sica (IFLP-CCT-CONICET), C.C. 727, 1900 La
Plata, Argentina.
\end{document}
\begin{document}
\title{Decoherence allows quantum theory to describe the use of
itself}
\author{Armando Rela\~{n}o} \affiliation{Departamento de Estructura de
la Materia, F\'{\i}sica T\'ermica y Electr\'onica, and GISC,
Universidad Complutense de Madrid, Av. Complutense s/n, 28040
Madrid, Spain} \email{[email protected]}
\begin{abstract}
We show that the quantum description of measurement based on
decoherence fixes the bug in quantum theory discussed in
[D. Frauchiger and R. Renner, {\em Quantum theory cannot
consistently describe the use of itself}, Nat. Comm. {\bf 9}, 3711
(2018)]. Assuming that the outcome of a measurement is determined by
environment-induced superselection rules, we prove that different
agents acting on a particular system always reach the same
conclusions about its actual state.
\end{abstract}
\maketitle
\section{I. Introduction}
In \cite{Renner:18} Frauchiger and Renner propose a Gedankenexperiment
to show that quantum theory is not fully consistent. The setup
consists of an entangled system and a set of fully compatible
measurements, from which four different agents infer contradictory
conclusions. The key point of their argument is that all these
conclusions are obtained from {\em certain} results, free of any
quantum ambiguity:
{\em 'In the argument presented here, the agents' conclusions are all
restricted to supposedly unproblematic ``classical'' cases.'}
\cite{Renner:18}
The goal of this paper is to show that this statement is not
true, at least if ``classical'' states arise from quantum mechanics as
a consequence of environment-induced superselection rules. These rules
are the trademark of the decoherence interpretation of the quantum
origins of the classical world. As is discussed in \cite{Zurek:03}, a
quantum measurement understood as a perfect correlation between a
system and a measuring apparatus suffers from a number of ambiguities,
which only disappear after a further interaction with a large
environment ---if this interaction does not occur, the agent
performing the measurement cannot be certain about the real state of
the measured system.
The main conclusion of this paper is that the contradictory
conclusions discussed in \cite{Renner:18} disappear when the role of
the environment in a quantum measurement is properly taken into
account. In particular, we show that the considered cases only become
``classical'' after the action of the environment, and that this
action removes all the contradictory conclusions.
The paper is organized as follows. In Sec. II we review the
Gedankenexperiment proposed in \cite{Renner:18}. In Sec. IIIA, we
review the consequences of understanding a quantum measurement just as
a perfect correlation between a system and a measuring apparatus; this
discussion is based on \cite{Zurek:03,Zurek:81}. In Sec. IIIB, we
re-interpret the Gedankenexperiment taking into account the
conclusions of Sec. IIIA. In particular, we show that the
contradictory conclusions obtained by the four measuring agents
disappear due to the role of the environment. In Sec. IV we summarize our
main conclusions.
\section{II. The Gedankenexperiment}
This section consists in a review of the Gedankenexperiment proposed
in \cite{Renner:18}. The reader familiarized with it can jump to
section III.
\subsection{A. Description of the setup}
The Gedankenexperiment \cite{Renner:18} starts with an initial state
in which a {\em quantum coin} $R$ is entangled with a spin-$1/2$ particle
$S$. A basis of the quantum-coin Hilbert space is $\left\{ \ensuremath{\ket{\text{head}}_R}, \ensuremath{\ket{\text{tail}}_R}
\right\}$; for the spin-$1/2$ particle we can use the usual basis, $\left\{
\ensuremath{\ket{\uparrow}_S}, \ensuremath{\ket{\downarrow}_S} \right\}$.
The experiment starts from the following state:
\begin{equation}
\ensuremath{\ket{\text{init}}} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}.
\label{eq:inicial}
\end{equation}
From this state, four different agents, $W$, $F$, $\overline{W}$, and
$\overline{F}$, perform different measurements. All these measurements
are represented by unitary operators that correlate different parts of
the system with their apparatus. Relying on the Born rule, they infer
conclusions only from {\em certain} results ---results
with probability $p=1$. These conclusions appear to be contradictory.
To interpret the results of measurements on the initial state,
Eq. (\ref{eq:inicial}), it is useful to rewrite it as different
superpositions of linearly independent vectors, that is, by means of
different orthonormal bases. As is pointed out in
\cite{Zurek:81,Zurek:03}, this procedure suffers from what is called
basis ambiguity ---due to the superposition principle, different bases
entail different correlations between the different parts of the
system. This problem is especially important when all the coefficients
of the linear combination are equal \cite{Elby:94}; however, it is not
restricted to this case. We give here four different possibilities
for the initial state given by Eq. (\ref{eq:inicial}):
\begin{equation}
\ensuremath{\ket{\text{init}}}_{(1)} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S}.
\label{eq:uno}
\end{equation}
(The term in $\ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\uparrow}_S}$ does not show up, because its probability in this state is zero).
\begin{equation}
\ensuremath{\ket{\text{init}}}_{(2)} = \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}.
\label{eq:dos}
\end{equation}
(Again, the probability of the term in $\ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\leftarrow}_S}$ is zero).
\begin{equation}
\ensuremath{\ket{\text{init}}}_{(3)} = \coef{2}{3} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\downarrow}_S} + \coef{1}{6} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\uparrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\uparrow}_S}.
\label{eq:tres}
\end{equation}
(And again, the probability of the term $\ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\downarrow}_S}$ is zero).
\begin{equation}
\ensuremath{\ket{\text{init}}}_{(4)} = \coef{3}{4} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h+t}}_R} \ensuremath{\ket{\leftarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{12} \ensuremath{\ket{\text{h-t}}_R} \ensuremath{\ket{\leftarrow}_S}.
\label{eq:cuatro}
\end{equation}
It is worth remarking that $\ensuremath{\ket{\text{init}}}_{(1)}$, $\ensuremath{\ket{\text{init}}}_{(2)}$,
$\ensuremath{\ket{\text{init}}}_{(3)}$ and $\ensuremath{\ket{\text{init}}}_{(4)}$ are just different
decompositions of the very same state, $\ensuremath{\ket{\text{init}}}$. In the equations
above we have used the following notation:
\begin{eqnarray}
\ensuremath{\ket{\rightarrow}_S} &=& \coef{1}{2} \ensuremath{\ket{\uparrow}_S} + \coef{1}{2} \ensuremath{\ket{\downarrow}_S}, \\
\ensuremath{\ket{\leftarrow}_S} &=& \coef{1}{2} \ensuremath{\ket{\uparrow}_S} - \coef{1}{2} \ensuremath{\ket{\downarrow}_S}, \\
\ensuremath{\ket{\text{h+t}}_R} &=& \coef{1}{2} \ensuremath{\ket{\text{head}}_R} + \coef{1}{2} \ensuremath{\ket{\text{tail}}_R}, \\
\ensuremath{\ket{\text{h-t}}_R} &=& \coef{1}{2} \ensuremath{\ket{\text{head}}_R} - \coef{1}{2} \ensuremath{\ket{\text{tail}}_R}.
\end{eqnarray}
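As a quick numerical sanity check (this is not part of the original argument,
and the variable names below are ours), a few lines of Python/NumPy confirm
that the four decompositions Eqs. (\ref{eq:uno})-(\ref{eq:cuatro}) represent
the very same vector, using the ordering
$(\text{head},\text{tail})\otimes(\uparrow,\downarrow)$ for the tensor products:
\begin{verbatim}
import numpy as np
kron = np.kron

head, tail = np.array([1., 0.]), np.array([0., 1.])
up, down = np.array([1., 0.]), np.array([0., 1.])
right, left = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)
hpt, hmt = (head + tail) / np.sqrt(2), (head - tail) / np.sqrt(2)

# |init> and its four decompositions given above
init = np.sqrt(1/3) * kron(head, down) + np.sqrt(2/3) * kron(tail, right)
dec1 = np.sqrt(1/3) * (kron(head, down) + kron(tail, down) + kron(tail, up))
dec2 = (np.sqrt(1/6) * kron(head, right) - np.sqrt(1/6) * kron(head, left)
        + np.sqrt(2/3) * kron(tail, right))
dec3 = (np.sqrt(2/3) * kron(hpt, down) + np.sqrt(1/6) * kron(hpt, up)
        - np.sqrt(1/6) * kron(hmt, up))
dec4 = (np.sqrt(3/4) * kron(hpt, right) - np.sqrt(1/12) * kron(hpt, left)
        - np.sqrt(1/12) * kron(hmt, right) - np.sqrt(1/12) * kron(hmt, left))

for dec in (dec1, dec2, dec3, dec4):
    assert np.allclose(dec, init)   # all four coincide with |init>
\end{verbatim}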
All the statements that the four agents make in this
Gedankenexperiment are based on different measurements performed on
the initial state, given by Eq. (\ref{eq:inicial}); their results are
easily interpreted relying on
Eqs. (\ref{eq:uno})-(\ref{eq:cuatro}). The procedure is designed so that
no two incompatible measurements are performed. That is, each agent works
on a different part of the setup, so the wave-function collapse after
each measurement does not interfere with the next one. As a
consequence of this, each agent can infer the conclusions obtained by
the others, just by reasoning from their own measurements.
To structure the interpretation of the Gedankenexperiment, we consider
the following hypothesis for the measuring protocol:
\begin{center}
\ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{
\centering
\begin{hipotesis}[Measurement procedure]
\texttt{} To perform a measurement, an initial state in which the system, $S$, and the apparatus, $A$, are uncorrelated, $\ket{\psi} = \ket{s} \otimes \ket{a}$, is transformed into a correlated state, $\ket{\psi'} = \sum_i c_i \ket{s_i} \otimes \ket{a_i}$, by the action of a unitary operator $U^{M}$. We assume that both $\left\{ \ket{s_i} \right\}$ and $\left\{ \ket{a_i} \right\}$ are linearly independent. Therefore, if the outcome of a measurement is $\ket{a_j}$, then the agent can safely conclude that the system is in the state $\ket{s_j}$.
\end{hipotesis}
}}
\end{center}
\subsection{B. Development of the experiment}
Equations (\ref{eq:uno})-(\ref{eq:cuatro}) provide four different
possibilities to establish a correlation between the system and the
apparatus. Each of the four agents involved in the Gedankenexperiment
works with one of them. A summary of the main results follows; more
details are given in \cite{Renner:18}.
{\bf Measurement 1.-} Agent $\overline{F}$ measures the state of the
quantum coin $R$ in the basis $\left\{ \ensuremath{\ket{\text{head}}_R}, \ensuremath{\ket{\text{tail}}_R} \right\}$.
According to hypothesis 1 above, this statement is based on the
following facts. Agent $\overline{F}$ starts from
Eq. (\ref{eq:dos}). Then, she performs a measurement by means of a
unitary operator that correlates the quantum coin and the apparatus in
the following way
\begin{equation}
\left( c_1 \ensuremath{\ket{\text{head}}_R} + c_2 \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{\overline{F}_0} \longrightarrow c_1 \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} + c_2 \ensuremath{\ket{\text{tail}}_R} \ket{\overline{F}_2}.
\label{eq:medida1}
\end{equation}
That is, for any initial state of the coin, $\ket{R} = c_1 \ensuremath{\ket{\text{head}}_R} + c_2
\ensuremath{\ket{\text{tail}}_R}$, the state $\ket{\overline{F}_1}$ of the apparatus becomes
perfectly correlated with $\ensuremath{\ket{\text{head}}_R}$, and the state
$\ket{\overline{F}_2}$ becomes perfectly correlated with $\ensuremath{\ket{\text{tail}}_R}$. This
procedure is perfect if $\braket{\overline{F}_1}{\overline{F}_2}=0$,
but this condition is not necessary to distinguish between the two
possible outcomes. Since the same protocol must be valid for any
initial state, the only constraint on the coefficients $c_1$ and $c_2$ is
$\left| c_1 \right|^2 + \left| c_2 \right|^2 = 1$.
As a consequence of this, the measurement performed by agent
$\overline{F}$ amounts to
\begin{equation}
\begin{split}
&\left( \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \right) \otimes \ket{\overline{F}_0} \longrightarrow \\
&\longrightarrow \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ensuremath{\ket{\text{head}}_R} \ket{\overline{F}_1} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ket{\overline{F}_2} \ensuremath{\ket{\rightarrow}_S}.
\end{split}
\end{equation}
Furthermore, the quantum coin together with the agent $\overline{F}$
become the laboratory $\overline{L}$:
\begin{eqnarray}
\ensuremath{\ket{\text{head}}_R} \otimes \ket{\overline{F}_1} &\equiv& \ket{h}_{\overline{L}}, \\
\ensuremath{\ket{\text{tail}}_R} \otimes \ket{\overline{F}_2} &\equiv& \ket{t}_{\overline{L}},
\end{eqnarray}
and therefore the state of the whole system becomes
\begin{equation}
\ensuremath{\ket{\text{init}}}_{(2)} = \coef{1}{6} \ket{h}_{\overline{L}} \ensuremath{\ket{\rightarrow}_S} - \coef{1}{6} \ket{h}_{\overline{L}} \ensuremath{\ket{\leftarrow}_S} + \coef{2}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\rightarrow}_S}.
\label{eq:dosb}
\end{equation}
The main conclusion obtained from this procedure can be written as follows:
{\bf Statement 1.-} If agent $\overline{F}$ finds her apparatus in the
state $\ket{\overline{F}_2}$, then she can safely conclude that the
quantum coin $R$ is in the state $\ensuremath{\ket{\text{tail}}_R}$. Then, as a consequence of
Eq. (\ref{eq:dosb}), she can also conclude that the spin is in state
$\ensuremath{\ket{\rightarrow}_S}$, and therefore that agent $W$ is going to obtain $\ensuremath{\ket{\text{fail}}_L}$ in
his measurement (see below for details).
{\bf Measurement 2.-} Agent $F$ measures the state of the spin $S$ in
the basis $\left\{ \ensuremath{\ket{\uparrow}_S}, \ensuremath{\ket{\downarrow}_S} \right\}$.
Again, according to hypothesis $1$, this statement is based on a
perfect correlation between the apparatus and the spin states. In this
case, agent $F$ starts from Eq. (\ref{eq:uno}). Taking the previous
measurement into account, her own gives rise to the following
correlation:
\begin{equation}
\begin{split}
& \left( \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\uparrow}_S} \right) \otimes \ket{F_0} \longrightarrow \\
&\longrightarrow \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} \ket{F_1} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\downarrow}_S} \ket{F_1} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{\uparrow}_S} \ket{F_2}.
\end{split}
\end{equation}
It is worth noting that this measurement is completely independent of
the previous one.
As happened with agent $\overline{F}$, agent $F$ becomes entangled with her apparatus, and both together form the laboratory $L$:
\begin{eqnarray}
\ensuremath{\ket{\downarrow}_S} \otimes \ket{F_1} &\equiv& \ensuremath{\ket{-1/2}_L}, \\
\ensuremath{\ket{\uparrow}_S} \otimes \ket{F_2} &\equiv& \ensuremath{\ket{+1/2}_L}.
\end{eqnarray}
Then, the whole system becomes
\begin{equation}
\ket{\text{init}}_{(1)} = \coef{1}{3} \ket{h}_{\overline{L}} \ensuremath{\ket{-1/2}_L} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{-1/2}_L} + \coef{1}{3} \ket{t}_{\overline{L}} \ensuremath{\ket{+1/2}_L}.
\label{eq:statement2}
\end{equation}
The main conclusion obtained by agent $F$ can be written as follows:
{\bf Statement 2.-} If agent $F$ finds her apparatus in state
$\ket{F_2}$, then she can safely conclude that the spin $S$ is in
state $\ensuremath{\ket{\uparrow}_S}$. Then, as shown in Eq. (\ref{eq:statement2}),
she is also certain that laboratory $\overline{L}$ is in state
$\ket{t}_{\overline{L}}$, and therefore she can safely conclude that
agent $\overline{F}$ has obtained $\ensuremath{\ket{\text{tail}}_R}$ in her measurement. Finally,
according to Statement 1, she can be sure that agent $W$ is going to
obtain $\ensuremath{\ket{\text{fail}}_L}$ in his measurement.
{\bf Measurement 3.-} Agent $\overline{W}$ measures the laboratory
$\overline{L}$ in the basis $\left\{ \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}}, \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \right\}$,
where $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} = \left( \ket{h}_{\overline{L}} +
\ket{t}_{\overline{L}} \right)/\sqrt{2}$, and $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} = \left(
\ket{h}_{\overline{L}} - \ket{t}_{\overline{L}} \right)/\sqrt{2}$.
Starting from (\ref{eq:tres}), and taking into account all the
previous results, this measurement implies:
\begin{equation}
\begin{split}
&\left( \coef{2}{3} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{-1/2}_L} + \coef{1}{6} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} - \coef{1}{6} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \right) \otimes \ket{\overline{W}} \longrightarrow \\ &\longrightarrow \coef{2}{3} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{-1/2}_L} \otimes \ket{\overline{W}_1}+ \coef{1}{6} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \otimes \ket{\overline{W}_1} - \coef{1}{6} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{+1/2}_L} \otimes \ket{\overline{W}_2}.
\end{split}
\label{eq:statement3}
\end{equation}
Again, since the measurement acts neither on the spin $S$ nor on
the quantum coin $R$, it is fully compatible with the previous
ones. And again, agent $\overline{W}$ becomes entangled with his
apparatus, in the same way that agents $F$ and $\overline{F}$
did. However, since no measurements are performed on this new composite
system, we do not introduce a new notation: state $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}}$ can be
understood as $\ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \otimes \ket{\overline{W}_1}$, and $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$
as $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \otimes \ket{\overline{W}_2}$.
The main conclusion that agent $\overline{W}$ obtains can be
written as follows:
{\bf Statement 3.-} If agent $\overline{W}$ finds his apparatus in
state $\ket{\overline{W}_2}$, then he can safely conclude that
laboratory $\overline{L}$ is in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$. Hence, as a
consequence of Eq. (\ref{eq:statement3}), he can also conclude that
laboratory $L$ is in state $\ensuremath{\ket{+1/2}_L}$. Therefore, from statement 2, agent
$\overline{W}$ knows that agent $F$ has obtained $\ensuremath{\ket{\uparrow}_S}$ in her
measurement, and from statement 1, he also knows that agent
$\overline{F}$ has obtained $\ensuremath{\ket{\text{tail}}_R}$. Consequently, agent
$\overline{W}$ can be certain that agent $W$ is going to obtain
$\ensuremath{\ket{\text{fail}}_L}$ in his measurement on laboratory $L$.
The key point in \cite{Renner:18} lies here. As all the agents use the
same theory, and as all the measurements they perform are fully
compatible, they must reach the same conclusion. This conclusion is:
{\em Every time laboratory $\overline{L}$ is in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$, then
laboratory $L$ is in state $\ensuremath{\ket{\text{fail}}_L}$. Hence, it is not possible to find
both laboratories in states $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ and $\ensuremath{\ket{\text{ok}}_L}$, respectively.}
It is worth remarking that agent $W$ must also reach the same
conclusion from statements 1, 2 and 3.
{\bf Measurement 4.-} As the final step of the process, agent $W$
measures the laboratory $L$ in the basis $\left\{ \ensuremath{\ket{\text{fail}}_L}, \ensuremath{\ket{\text{ok}}_L}
\right\}$, where $\ensuremath{\ket{\text{ok}}_L} = \left( \ensuremath{\ket{-1/2}_L} - \ensuremath{\ket{+1/2}_L} \right)/\sqrt{2}$, and
$\ensuremath{\ket{\text{fail}}_L} = \left( \ensuremath{\ket{-1/2}_L} + \ensuremath{\ket{+1/2}_L} \right)/\sqrt{2}$.
Starting from Eq. (\ref{eq:cuatro}), the result of this final
measurement is
\begin{equation}
\begin{split}
&\left( \coef{3}{4} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} + \coef{1}{12} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} - \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} + \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \right) \otimes \ket{W} \rightarrow \\ & \rightarrow \coef{3}{4} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} \otimes \ket{W_1} + \coef{1}{12} \ensuremath{\ket{\overline{\text{fail}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \otimes \ket{W_2} - \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{fail}}_L} \otimes \ket{W_1} + \coef{1}{12} \ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} \otimes \ket{W_2}.
\end{split}
\label{eq:cuatrob}
\end{equation}
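For completeness, the coefficient $\coef{1}{12}$ of the term
$\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L}$ in Eq. (\ref{eq:cuatrob}) can also be computed directly
from Eq. (\ref{eq:statement2}): since
$\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L} = \frac{1}{2} \left( \ket{h}_{\overline{L}} - \ket{t}_{\overline{L}} \right) \left( \ensuremath{\ket{-1/2}_L} - \ensuremath{\ket{+1/2}_L} \right)$,
the corresponding amplitude is
\[
\frac{1}{2}\, \coef{1}{3} \left( 1 - 1 + 1 \right) = \frac{1}{2\sqrt{3}} = \coef{1}{12},
\]
so that the associated probability is indeed $1/12$.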
Therefore, and despite the previous conclusion that agent $W$ has
obtained from statements 1, 2, and 3, after this measurement he can
conclude that {\em the probability of $\overline{L}$ being in state
$\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$ and $L$ in state $\ensuremath{\ket{\text{ok}}_L}$ is not zero, but $1/12$.} This is
the contradiction discussed in \cite{Renner:18}, from which the
authors of that paper conclude that quantum theory cannot consistently
describe the use of itself:
{\em As Eq. (\ref{eq:cuatrob}) establishes that the probability of
obtaining $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}} \ensuremath{\ket{\text{ok}}_L}$ after a proper measurement is $p=1/12$, and
as the same theory, used to describe itself, allows us to conclude
that this very same probability should be $p=0$, the conclusion is
that quantum theory cannot be used as it is used in statements 1, 2
and 3. That is, quantum theory cannot consistently describe the use
of itself.}
In the next section, we will prove that this is a consequence of
hypothesis 1, that is, a consequence of understanding a measurement
just as a perfect correlation between a system and a measuring
apparatus. If we consider that a proper measurement
requires the action of an external environment, as discussed in
\cite{Zurek:03,Zurek:81}, quantum theory recovers its ability to speak
about itself. Environment-induced superselection rules, by determining
the real state of the system after a measurement, remove all the
contradictions coming from statements 1, 2 and 3.
\section{III. Environment-induced superselection rules}
\subsection{A. The problem of basis ambiguity}
In \cite{Zurek:03,Zurek:81}, W. H. Zurek shows that a perfect
correlation, like the one summarized in hypothesis $1$, is {\em not}
enough to determine the result of a quantum measurement. The reason is
the basis ambiguity due to the superposition principle. To understand
this statement, let us consider a simple measurement in which the
state of the quantum coin $R$ is to be determined. This goal can be
achieved by means of the following unitary operator:
\begin{equation}
U^{RA} = \ket{A_1} \otimes \ensuremath{\ket{\text{head}}_R} \bra{\text{head}}_R + \ket{A_2} \otimes \ensuremath{\ket{\text{tail}}_R} \bra{\text{tail}}_R,
\end{equation}
which establishes a perfect correlation between $\ensuremath{\ket{\text{head}}_R}$ and
$\ket{A_1}$, and between $\ensuremath{\ket{\text{tail}}_R}$ and $\ket{A_2}$. Furthermore, if such
apparatus states verify $\braket{A_1}{A_2}=0$, the measurement
is perfect. Starting from an initial state
\begin{equation}
\ket{\Psi_0} = \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A_0},
\end{equation}
the final state of the composite system, quantum coin plus apparatus,
is
\begin{equation}
\ket{\Psi} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \otimes \ket{A_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \otimes \ket{A_2}.
\label{eq:psi}
\end{equation}
This measurement fulfills the conditions for Hypothesis $1$; indeed,
it is equivalent to the one that the agent $\overline{F}$ performs in
measurement 1. However, {\em the basis ambiguity allows us to rewrite
(\ref{eq:psi}) in the following way:}
\begin{equation}
\ket{\Psi} = \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_1} + \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_2}.
\label{eq:psiprima}
\end{equation}
Note that this is the very same state as the one written in
Eq. (\ref{eq:psi}) ---it is obtained from $\ket{\Psi_0}$ as a consequence of the action of
$U^{RA}$. The new apparatus states $\ket{A'_1}$ and $\ket{A'_2}$, related to the original ones by
\begin{eqnarray}
\ket{A_1} &=& \coef{1}{2} \left( \ket{A'_1} + \ket{A'_2} \right), \\
\ket{A_2} &=& \coef{1}{2} \left( \ket{A'_1} - \ket{A'_2} \right),
\end{eqnarray}
also fulfill $\braket{A'_1}{A'_2}=0$, so they also give rise to a
perfect measurement.
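Explicitly, substituting these relations into Eq. (\ref{eq:psi}) gives
\[
\ket{\Psi} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \otimes \coef{1}{2} \left( \ket{A'_1} + \ket{A'_2} \right) + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \otimes \coef{1}{2} \left( \ket{A'_1} - \ket{A'_2} \right),
\]
and collecting the terms that multiply $\ket{A'_1}$ and $\ket{A'_2}$ reproduces Eq. (\ref{eq:psiprima}).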
Let us reinterpret measurement 1, as described in the previous
section, taking into account this result. Hypothesis $1$ establishes
that a measurement is performed when a perfect correlation between a
system and an apparatus has been settled. But, as both
Eqs. (\ref{eq:psi}) and (\ref{eq:psiprima}) fulfill this requirement,
and both represent the very same state, $\ket{\Psi}$, the action of
the operator $U^{RA}$ is not enough to be sure about the final state
of both the system and the measuring apparatus. Indeed, the only
possible conclusion we can reach is:
{\em Measurement $U^{RA}$ cannot determine the final state of the
system: if the outcome of the apparatus is ``ONE'', the system can
either be in state $\ensuremath{\ket{\text{head}}_R}$ or state $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} +
\coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$; and if the outcome of the apparatus is ``TWO'', the
system can either be in state $\ensuremath{\ket{\text{tail}}_R}$ or state $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} -
\coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$.}
Hence, measurement 1, understood as the (only) consequence of
Eq. (\ref{eq:medida1}), seems not enough to support the conclusion
summarized in statement 1. Both $\ensuremath{\ket{\text{tail}}_R}$ and $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} -
\coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$ are fully compatible with the output ``TWO'' of the
measuring apparatus.
But what has really happened? What is the real state of the quantum
coin after the measurement is completed? To which state does the wave
function collapse? We know that experiments provide precise results
---Schr\"odinger cats are always found dead or alive, not in a weird
superposition like $\coef{1}{3} \ket{\text{alive}} - \coef{2}{3}
\ket{\text{dead}}$---, so it is not possible that both possibilities
are true. To answer this question, we introduce the following
assumption:
\begin{center}
\Ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{
\centering
\begin{myteo}[``Classical'' reality]
\texttt{} An event has certainly happened (at a certain time in the
past) if and only if it is the only explanation for the current
state of the universe.
\end{myteo}
}}
\end{center}
This assumption just reinforces our previous conclusion ---from the
measurement $U^{RA}$, that is, from Eq. (\ref{eq:medida1}), we cannot
make a certain statement about the state of the system. Both $\ensuremath{\ket{\text{tail}}_R}$
and $\coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R}$ are compatible with the
real state of the universe, given by $\ket{\Psi}$ and the measurement
outcome ``TWO''.
This is why W. H. Zurek establishes that {\em something} else has to
happen before we can make a safe statement about the real state of the
system. The procedure described in Hypothesis $1$ constitutes just a
pre-measurement. The measurement itself requires another action,
performed by another unitary operator, to determine the real state of
the system. This action is done by an external (and large)
environment, which becomes correlated with the system and the
apparatus. As is described in \cite{Zurek:03,Zurek:81}, after the
pre-measurement is completed, the system plus the apparatus interacts
with a large environment by means of $U^{\mathcal E}$. Let us suppose
that the result of this interaction is
\begin{equation}
\ket{\Psi}_{{\mathcal E}} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \otimes \ket{A_1} \otimes \ket{{\mathcal E}_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \otimes \ket{A_2} \otimes \ket{{\mathcal E}_2},
\label{eq:psi_e}
\end{equation}
with $\braket{{\mathcal E}_1}{{\mathcal E}_2}=0$. Then, this interaction
establishes a perfect correlation between environmental and apparatus
states, in a similar way as the pre-measurement correlates the
system and the apparatus. The main difference between these two
processes is given by the following theorem:
\begin{center}
\shadowbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{
\centering
\begin{myteo2}[Triorthogonal uniqueness theorem \cite{Elby:94}]
\texttt{}
Suppose $\ket{\psi} = \sum_i c_i \ket{A_i} \otimes \ket{B_i} \otimes \ket{C_i}$, where $\{ \ket{A_i} \}$ and $\{ \ket{C_i} \}$ are linearly independent sets of vectors, while $\{ \ket{B_i} \}$ is merely noncollinear. Then there exist no alternative linearly independent sets of vectors $\{ \ket{A'_i} \}$ and $\{ \ket{C'_i} \}$, and no alternative noncollinear set $\{ \ket{B'_i} \}$, such that $\ket{\psi} = \sum_i d_i \ket{A'_i} \otimes \ket{B'_i} \otimes \ket{C'_i}$. (Unless each alternative set of vectors differs only trivially from the set it replaces.)
\end{myteo2}
}}
\end{center}
In other words, this theorem establishes that the decomposition of the state
$\ket{\Psi}_{{\mathcal E}}$ in Eq. (\ref{eq:psi_e}) is unique, that is, we cannot find another
decomposition for the very same state
\begin{equation}
\ket{\Psi}_{{\mathcal E}} = \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_1} \otimes \ket{{\mathcal E}'_1} + \coef{1}{2} \left( \coef{1}{3} \ensuremath{\ket{\text{head}}_R} - \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \right) \otimes \ket{A'_2} \otimes \ket{{\mathcal E}'_2},
\label{eq:psiprima_e}
\end{equation}
with $\braket{{\mathcal E}'_1}{{\mathcal E}'_2}=0$. Hence, the
interaction with the environment determines the real state of the
system plus the apparatus. The action of $U^{{\mathcal E}}$ gives rise
to Eq. (\ref{eq:psi_e}). {\em To obtain a state like the one written
in Eq. (\ref{eq:psiprima_e}), a different interaction with the
environment is mandatory, $U^{{\mathcal E}'}$.} Thus, we can
formulate an alternative hypothesis:
\begin{center}
\ovalbox{\parbox{\columnwidth-2\fboxsep-2\fboxrule-\shadowsize}{
~
\centering \begin{hipotesis}[Real measurement procedure (adapted
from \cite{Zurek:03})]
\texttt{} To perform a measurement, an initial state in which
the system, $S$, the apparatus, $A$, and an external
environment ${\mathcal E}$ are uncorrelated, $\ket{\psi} =
\ket{s} \otimes \ket{a} \otimes \ket{\varepsilon}$, is
transformed: {\em i)} first, into a state, $\ket{\psi'} =
\left( \sum_i c_i \ket{s_i} \otimes \ket{a_i} \right) \otimes
\ket{\varepsilon}$, by means of a procedure called
pre-measurement; and {\em ii)} second, into a final state,
$\ket{\psi''} = \left( \sum_i c_i \ket{s_i} \otimes \ket{a_i}
\otimes \ket{\varepsilon_i} \right)$. This final state
determines the real correlations between the system and the
apparatus. If $\braket{\varepsilon_i}{\varepsilon_j}=0$ for $i \neq j$, then, after tracing out the environmental degrees of freedom, the state becomes
\begin{equation}
\rho = \sum_i \left| c_i \right|^2 \ket{s_i} \ket{a_i} \bra{s_i} \bra{a_i}.
\end{equation}
Therefore, the measuring agent can safely conclude that the result of
the measurement certainly is one of the previous possibilities,
$\left\{ \ket{a_i} \ket{s_i} \right\}$, each one with a probability
given by $p_i = \left| c_i \right|^2$. The states $\left\{ \ket{a_i}
\ket{s_i} \right\}$ are called ``pointer states''. They are selected
by the environment, by means of environmental-induced superselection
rules; they constitute the ``classical'' reality.
~
\end{hipotesis}
}}
\end{center}
This hypothesis establishes that only after the real correlations between the system, the apparatus, and the
environment are settled does the observation of the agent become
certain. Tracing out the environmental degrees of freedom, which are
not the object of the measurement, the state given by
Eq. (\ref{eq:psi_e}) becomes:
\begin{equation}
\rho_{{\mathcal E}} = \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ket{A_1} \bra{\text{head}}_R \bra{A_1} + \frac{2}{3} \ensuremath{\ket{\text{tail}}_R} \ket{A_2} \bra{\text{tail}}_R \bra{A_2}.
\end{equation}
In other words, the agent observes a mixture between the system being
in state $\ensuremath{\ket{\text{head}}_R}$ with the apparatus in state $\ket{A_1}$ (with
probability $p_1=1/3$), and the system being in state $\ensuremath{\ket{\text{tail}}_R}$ with the
apparatus in state $\ket{A_2}$ (with probability $p_2=2/3$). And this
happens because the environment has {\em chosen} $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\text{head}}_R}$
as the ``classical'' states ---the ones observed as a consequence of
quantum measurements--- by means of environmental-induced
superselection rules. In other words, following this interpretation,
Schr\"odinger cats are always found either dead or alive because the
interaction with the environment determines that $\ket{\text{dead}}$
and $\ket{\text{alive}}$ are the pointer ``classical'' states.
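The effect of tracing out the environment can also be reproduced with a few
lines of code. The following Python/NumPy sketch (the names and the
two-dimensional toy environment are ours, chosen only for illustration)
builds the state of Eq. (\ref{eq:psi_e}), traces out ${\mathcal E}$, and
recovers the mixture $\rho_{{\mathcal E}}$ written above:
\begin{verbatim}
import numpy as np
kron = np.kron

head, tail = np.array([1., 0.]), np.array([0., 1.])
A1, A2 = np.array([1., 0.]), np.array([0., 1.])
E1, E2 = np.array([1., 0.]), np.array([0., 1.])  # orthogonal environment states

# the entangled coin-apparatus-environment state considered above
psi = (np.sqrt(1/3) * kron(kron(head, A1), E1)
       + np.sqrt(2/3) * kron(kron(tail, A2), E2))

rho_full = np.outer(psi, psi)
# partial trace over the last (environment) factor
rho_RA = np.trace(rho_full.reshape(4, 2, 4, 2), axis1=1, axis2=3)

expected = ((1/3) * np.outer(kron(head, A1), kron(head, A1))
            + (2/3) * np.outer(kron(tail, A2), kron(tail, A2)))
assert np.allclose(rho_RA, expected)   # the diagonal pointer-state mixture
\end{verbatim}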
\subsection{B. Re-interpretation of the Gedankenexperiment}
Let us re-interpret the first statement of the Gedankenexperiment,
in the terms discussed above. Agent
$\overline{F}$ cannot reach any conclusion about the real state of the
quantum coin before the pointer states are obtained by means of the
interaction with a large environment ${\mathcal E}$. The key point is
that {\em the environment ${\mathcal E}$ interacts with the whole
system, that is, with the quantum coin $R$, the apparatus, and the
spin $S$, because the three of them are entangled}. So, let us
assume that a correlation like Eq. (\ref{eq:psi}) has happened as a
consequence of the pre-measurement. In such a case, taking into
account that the quantum coin $R$ is entangled with the spin $S$, the
state after the pre-measurement is
\begin{equation}
\ket{\Psi} = \coef{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} + \coef{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2}.
\end{equation}
The next step in the process is the interaction with the environment,
which determines the pointer states of the system composed by the
quantum coin and the spin. There are several possibilities for such an
interaction. Let us consider, for example,
\begin{equation}
U^{{\mathcal E}} = \ket{\varepsilon_1} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \ket{\varepsilon_2} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\rightarrow}_S \bra{A_2},
\label{eq:environment1}
\end{equation}
and
\begin{equation}
\begin{split}
U^{{\mathcal E}'} &= \ket{\varepsilon'_1} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \\ &+ \ket{\varepsilon'_2} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\downarrow}_S \bra{A_2} + \ket{\varepsilon'_3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\uparrow}_S \bra{A_2}.
\end{split}
\label{eq:environment2}
\end{equation}
If the real interaction with the environment is given by
Eq. (\ref{eq:environment1}), the final state of the system, after
tracing out the environmental degrees of freedom, is
\begin{equation}
\rho^{{\mathcal E}} = \text{Tr}_{{\mathcal E}} \left[ U^{{\mathcal E}} \ket{\Psi} \bra{\Psi} \left( U^{{\mathcal E}} \right)^{\dagger} \right] = \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \frac{2}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\rightarrow}_S \bra{A_2}.
\label{eq:final1}
\end{equation}
On the contrary, if the real interaction with the environment is
given by Eq. (\ref{eq:environment2}), the final state is
\begin{equation}
\begin{split}
\rho^{{\mathcal E}'} &= \text{Tr}_{{\mathcal E}} \left[ U^{{\mathcal E}'} \ket{\Psi} \bra{\Psi} \left( U^{{\mathcal E}'} \right)^{\dagger} \right] = \\ &= \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_1} \bra{\text{head}}_R \bra{\downarrow}_S \bra{A_1} + \frac{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\downarrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\downarrow}_S \bra{A_2} + \frac{1}{3} \ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\uparrow}_S} \ket{A_2} \bra{\text{tail}}_R \bra{\uparrow}_S \bra{A_2}.
\end{split}
\label{eq:final2}
\end{equation}
At this point, the question is: what is the real state of the system
after the measurement is completed?
\begin{itemize}
\item Eq. (\ref{eq:environment1}) establishes that it is a mixture in which
the agent can find the system either in $\ensuremath{\ket{\text{head}}_R}$ and $\ensuremath{\ket{\downarrow}_S}$, with
probability $p=1/3$, or in $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\rightarrow}_S}$, with probability
$p=2/3$. It is worth remarking that this is not a quantum
superposition, but a classical mixture. That is, due to the
interaction with the environment, $U^{{\mathcal E}}$, the state of the
system is compatible with either a collapse to $\ensuremath{\ket{\text{head}}_R} \ensuremath{\ket{\downarrow}_S}$, with
$p=1/3$, or a collapse to $\ensuremath{\ket{\text{tail}}_R} \ensuremath{\ket{\rightarrow}_S}$, with $p=2/3$. This is what
agent $\overline{F}$ concludes in statement 1.
\item But the other possible interaction with the environment,
Eq. (\ref{eq:environment2}), establishes that the real state of the
system is a mixture in which the agent can find the system in $\ensuremath{\ket{\text{head}}_R}$
and $\ensuremath{\ket{\downarrow}_S}$, with probability $p=1/3$, $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\downarrow}_S}$, with
probability $p=1/3$, and $\ensuremath{\ket{\text{tail}}_R}$ and $\ensuremath{\ket{\uparrow}_S}$, with probability
$p=1/3$.
\end{itemize}
At this stage, the key point is the following. As the measurement
performed by agent $\overline{F}$ only involves the quantum coin $R$,
her apparatus only reads $\ensuremath{\ket{\text{head}}_R}$ with probability $p=1/3$, and
$\ensuremath{\ket{\text{tail}}_R}$, with probability $p=2/3$. {\em But both (\ref{eq:final1}) and
(\ref{eq:final2}) are compatible with this result}. Hence, following
assumption 1, {\em agent $\overline{F}$ cannot be certain about the
state of the spin $S$ ---and thus, she can neither be certain about
what agent $W$ is going to find when he measures the state of the
laboratory $L$}. The only way to distinguish between
(\ref{eq:final1}) and (\ref{eq:final2}) is to perform a further
measurement on the spin $S$. Such a procedure would provide the
pointer ``classical'' states of the system composed by the quantum
coin and the spin ---its outcome would determine whether the
interaction with the environment is given by
Eq. (\ref{eq:environment1}) or by Eq. (\ref{eq:environment2}). But
such a procedure would be incompatible with the measurement performed
by agent $F$. Hence, agent $\overline{F}$ has to choose between: {\em
i)} not being certain about the real state of the quantum spin $S$,
and therefore not being able to reach any conclusion about the
measurement that agent $W$ will do in the future; or {\em ii)}
performing a further measurement which would invalidate the
conclusions of this Gedankenexperiment.
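The compatibility of Eqs. (\ref{eq:final1}) and (\ref{eq:final2}) with the
readings of agent $\overline{F}$ can be made explicit by tracing out the
spin, which her apparatus does not see: both density matrices lead to the
very same reduced state,
\[
\text{Tr}_{S} \left[ \rho^{{\mathcal E}} \right] = \text{Tr}_{S} \left[ \rho^{{\mathcal E}'} \right] = \frac{1}{3} \ensuremath{\ket{\text{head}}_R} \ket{A_1} \bra{\text{head}}_R \bra{A_1} + \frac{2}{3} \ensuremath{\ket{\text{tail}}_R} \ket{A_2} \bra{\text{tail}}_R \bra{A_2},
\]
that is, to the same probabilities $p=1/3$ for $\ensuremath{\ket{\text{head}}_R}$ and $p=2/3$ for $\ensuremath{\ket{\text{tail}}_R}$.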
This conclusion is enough to rule out the contradictions discussed in
\cite{Renner:18}. As agent $\overline{F}$ cannot be certain about the
outcome that agent $W$ will obtain in his measurement, none of the
four agents can conclude that it is not possible to find laboratory
$L$ in state $\ensuremath{\ket{\text{ok}}_L}$ and laboratory $\overline{L}$ in state $\ensuremath{\ket{\overline{\text{ok}}}_{\overline{L}}}$
at the same time. Hence, the outcome of measurement 4, whatever it is,
becomes fully compatible with all the conclusions obtained by all the
agents.
It is worth noting that the same analysis can also be applied to
measurements 2 and 3; the conclusions are essentially the same.
\section{IV. Conclusions}
The main result of this paper is to show that assumption 1 and
hypothesis 2 allow quantum theory to consistently describe the use of
itself. This conclusion is based on the decoherence interpretation
of quantum measurements \cite{Zurek:03}. Hence, a
further statement can be put forward:
{\em To make quantum theory fully consistent, so that it can be used
to describe itself, the decoherence interpretation of measurements
(and of the origins of the classical world) is mandatory.}
In any case, the main conclusion of this paper is applicable to other
interpretations of quantum mechanics. The decoherence interpretation of
the measurement process establishes that the wave-function collapse is
not real ---the measuring agent {\em sees} the system as if its
wave-function had collapsed onto one of the pointer states selected by
the environment, even though the whole wave function remains in a
quantum superposition. However, this interpretational difference does not
affect experimental results; from this point of view, it is
compatible with the Copenhagen interpretation, because it assigns the
same probabilities to all of the possible outcomes. Furthermore, it
is also compatible with Everett's many-worlds interpretation
\cite{Everett:57}: the branches onto which the universe splits after a
measurement are determined by the environmental-induced
super-selection rules. The key point is that real ``classical'' states
are not ambiguous, but they are the (unique) result of the interaction
between the measured system, the measuring apparatus, and a large
environment.
\end{document} |
\begin{document}
\author{
\footnotesize L. Biasco \& L. Chierchia
\\ \footnotesize Dipartimento di Matematica e Fisica
\\ \footnotesize Universit\`a degli Studi Roma Tre
\\ \footnotesize Largo San Leonardo Murialdo 1 - 00146 Roma, Italy
\\ {\footnotesize [email protected], [email protected]}
\\
}
\title{
Global properties of generic real--analytic nearly--integrable Hamiltonian systems
}
\begin{abstract}\noindent
We introduce a new class ${\mathbb G}^n_{s}$ of generic real analytic potentials on $\mathbb T^n$ and study global analytic properties of
natural nearly--integrable Hamiltonians $\frac12 |y|^2+\varepsilon f(x)$, with potential $f\in {\mathbb G}^n_{s}$, on the phase space $ {\mathcal M} =B\times \mathbb T^n$ with $B$ a given ball in $\mathbb R^n$. The phase space $ {\mathcal M} $ can be covered by three sets: a `non--resonant' set, which is filled up to an exponentially small set of measure $e^{-c {\mathtt K}}$ (where ${\mathtt K}$ is the maximal size of resonances considered) by primary maximal KAM tori;
a `simply resonant set' of measure $\sqrt{\varepsilon} {\mathtt K}^a$ and a third set of measure $\varepsilon {\mathtt K}^b$ which is `non perturbative', in the sense that the $\ham$--dynamics on it can be described by a natural system which is {\sl not} nearly--integrable.
We then focus on the simply resonant set -- the dynamics of which is particularly interesting (e.g., for Arnol'd diffusion, or the existence of secondary tori) -- and show that on such a set the secular (averaged) 1 degree--of--freedom Hamiltonians (labelled by the resonance index $k\in {\mathbb Z} ^n$) can be put into a universal form (which we call `Generic Standard Form'), whose main analytic properties are controlled by {\sl only one parameter, which is uniform in the resonance label $k$}.
\end{abstract}
\allowdisplaybreaks
\section*{Introduction}\label{maindefinitions}
\noindent
The paper is divided into three parts.
\noindent
{\bf 1.}
In the first part we discuss generic properties of (multi--periodic) analytic functions introducing a new class ${\mathbb G}^n_{s}$ of functions
real analytic on the complex neighborhood of $\mathbb T^n$ given by
\[
{\mathbb T} ^n_s:=\{x=(x_1,\ldots,x_n)\in {\mathbb C} ^n:\ \ |\, {\rm Im}\, x_j|<s\}/(2\pi {\mathbb Z} ^n)
\,.
\]
Such a class -- related to, but smaller than, the sets $ {\mathcal H} _{s,\t}$ introduced in \cite{BCnonlin} -- is generic, as
it contains an open and dense set in the norm $\|f\|_s=\sup_k |f_k|e^{s\noruno{k}}$, and has full probability measure with respect to a natural weighted product probability measure on Fourier coefficients.
\\
The class ${\mathbb G}^n_{s}$ may be described as follows. Consider a real--analytic zero average function $f$ and its projection $ \pi_{\!{}_{\integer k}} f$ onto the Fourier modes proportional to a given $k\in {\mathbb Z} ^n\, \backslash\, \{0\}$ (with components with no common divisors), which is given by
\[
\sa\in {\mathbb T} \mapsto \pi_{\!{}_{\integer k}} f (\sa):=\sum_{j\in {\mathbb Z} } f_{jk} e^{{\rm i} j\sa}\,.
\]
These one dimensional projections arise naturally, e.g., in averaging theory, where they are the leading terms of the averaged (`secular') Hamiltonians around simple resonances $\{y \,|\, y\cdot k= \sum y_j k_j=0\}$.
Denoting by $\gen$ the set of generators of 1--dimensional maximal lattices in $\mathbb{Z}^n$,
the class ${\mathbb G}^n_{s}$ is formed by real--analytic zero average functions $f$ with $\|f\|_s\le 1$, which satisfy
\[
\d_{\!{}_f}=\varliminf_{\substack{\noruno{k}\to+{\infty}\\ k\in\gen}} |f_k| e^{\noruno{k}s} \noruno{k}^n>0 \,,
\]
and such that the Fourier--projection $\pi_{\!{}_{\integer k}} f$ is a Morse function with distinct critical values for all $k\in\gen$ with $\noruno{k}\le \mathtt N$, where $\mathtt N$ is an {\sl a priori} Fourier cut--off depending only on $n,s$ and $\d_{\!{}_f}$.
\\
A remarkable feature of this class of functions is that the Fourier projection $\pi_{\!{}_{\integer k}} f$ is close (in a large analytic norm) to a shifted rescaled cosine,
\[
\pi_{\!{}_{\integer k}} f(\sa)\sim |f_k|\cos (\sa+\sa_k)\,; \qquad \forall k\in\gen\,,\ \noruno{k}\ge \mathtt N\,,
\]
(Proposition~\ref{pollaio} below), allowing one to have uniform control of the analytic properties of secular Hamiltonians as $\noruno{k}\to+{\infty}$.
\noindent
We believe that the class ${\mathbb G}^n_{s}$ is a good candidate to address analytic problems in dynamical systems whenever generic results -- such as generic existence of secondary tori in nearly--integrable Hamiltonian systems\footnote{I.e., tori which are not a continuous deformation of integrable ($\varepsilon=0$) flat tori; for references, see
\cite{Nei89},
\cite{MNT},
\cite{Simo}, \cite{AM},
\cite{BCnonlin},
\cite{FHL},
\cite{BClin2}.}
or Arnol'd diffusion\footnote{\label{AD}See
\cite{A64}; compare also, e.g., \cite{CG}, \cite{DH}, \cite{Z}, \cite{Ma}, \cite{T},
\cite{DLS},
\cite{BKZ},
\cite{DS},
\cite{KZ},
\cite{GPSV},
\cite{CdL22}, \cite{CFG}.} -- are considered.
\noindent
{\bf 2.}
In the rest of the paper, we consider natural nearly--integrable Hamiltonian systems with $n\ge 2$ degrees of freedom with Hamiltonian $\ham=\frac12 |y|^2 +\varepsilon f(x)$,
with potential $f$ in the class ${\mathbb G}^n_{s}$ with a fixed $s>0$, on a bounded phase space $ {\mathcal M} =\DD\times \mathbb T^n\subset \mathbb R^n\times \mathbb T^n$; in fact, in view of the model, it is not restrictive to simply consider $\DD=\{y\in \mathbb R^n\st |y| <1\}$.
\\
We, then, introduce a covering of the action space $\DD= \mathbb Rz\cup\mathbb Ru\cup \mathbb Rd$, depending on two `Fourier cut--offs' ${\mathtt K}>{\mathtt K}O>\mathtt N$ ($\mathtt N$ as above), so that: $\mathbb Rz$ is a non--resonant set up to order ${\mathtt K}O$; $\mathbb Ru$ is a union of neighborhoods $\mathbb Ruk$ of simple resonances
$\{y\in \DD: y\cdot k=0\}$ of maximal order ${\mathtt K}O$, which are non resonant modulo $ {\mathbb Z} k$ up to order ${\mathtt K}$, and $\mathbb Rd$ is a set of measure proportional to $\varepsilon {\mathtt K}^b$ for a suitable $b>1$ (which depends only on $n$); similar `geometry--of--resonances' analysis is typical of Nekhoroshev's theory\footnote{Compare
\cite{Nek},
\cite{BGG},
\cite{poschel},
\cite{DG},
\cite{Nie00},
\cite{DH},
\cite{GCB},
\cite{BFN},
\cite{NPR}.
}.
\\
The set $\mathbb Rd$ is {\sl a non perturbative set}, namely, it is a set where the $\ham$--dynamics is equivalent to the dynamics of a Hamiltonian, which is {\sl not nearly--integrable}: indeed, in the simplest non trivial case $n=2$ such a Hamiltonian is given by $|y|^2/2 + f(x)$.
\\
On the other hand the set $(\mathbb Rz\cup \mathbb Ru)\times \mathbb T^n$ is suitable for high order perturbation theory, and,
following the averaging theory developed in \cite{BCnonlin}, we construct high order normal forms
(Theorem~\ref{normalform}) so that on $\mathbb Rz\times\mathbb T^n$ the above Hamiltonian $\ham$ is conjugated, up to an exponentially small term of $O(e^{-{\mathtt K}O s/3})$, to an integrable Hamiltonian, which depends only on the action variables and is close to $|y|^2/2$. By classical KAM theory, it then follows that this set is filled by primary\footnote{Primary tori are smooth deformations of the flat Lagrangian integrable ($\varepsilon=0$)
tori.} KAM tori up to a set of measure of order $O(e^{-{\mathtt K}O s/7})$. Actually, here there is a delicate point: the symplectic map realizing the above mentioned conjugation moves the boundary of the phase space $\DD\times \mathbb T^n$ by a quantity much larger than
$O(e^{-{\mathtt K}O s/7})$, therefore, in order to get the exponentially small measure estimate on the `non--torus set' one needs to introduce a second covering which takes care of the dynamics close to the boundary: this is done in Lemma~\ref{sedformosa} below.
\\
The analysis of the dynamics in $\mathbb Ru\times\mathbb T^n$ is {\sl much} more complicated. In each of the neighborhoods $\mathbb Ruk$, which
cover the set $\mathbb Ru$ as $\noruno{k}\le {\mathtt K}O$, one can perform resonant averaging theory so as to conjugate $\ham$ to a system which is still integrable, but which depends on the resonant angle $\ttx_1=k\cdot x$. The averaged systems with secular Hamiltonians
$\hamsec_k(\tty,\ttx_1)$ are therefore 1D--Hamiltonian systems (one degree--of--freedom systems in the symplectic variables $(\tty_1,\ttx_1)$ depending also on the adiabatic actions
$\tty_2,...,\tty_n$), which are close to natural systems with potentials $\pi_{\!{}_{\integer k}} f$. Such potentials, for low $k$'s, are rather general: for instance, they may have an arbitrarily large number of separatrices depending on the particular structure of $f$. The global analytic properties of the Hamiltonians $\hamsec_k(\tty,\ttx_1)$ are the subject of the third (and main) part of this paper.
\noindent
{\bf 3.}
In the third part we prove that the secular Hamiltonians $\hamsec_k(\tty,\ttx_1)$ described in the previous item {\sl can be symplectically conjugated, for all $\noruno{k}\le {\mathtt K}O$, to 1D--Hamiltonians in the standard form introduced in} \cite{BCaa23} (see, also, Definition~\ref{morso} below). In a few words, a standard 1D--Hamiltonian (which depends on $(n-1)$ external parameters) is a one degree--of--freedom Hamiltonian system close to a natural system
with a generic potential, which may be controlled essentially by {\sl only one parameter}, namely, the parameter $\kappa$ appearing in Eq.~\equ{alce} below; here, `essentially' means, roughly speaking, that $\kappa$ governs the main scaling properties of the Hamiltonian $\hamsec_k$. What is particularly relevant is that the $\kappa$ parameter of the secular Hamiltonians $\hamsec_k$ is shown to be {\sl independent of $k$}, as it depends only on
$n$, $s$, the above parameter $\d_{\!{}_f}$, and on a fourth parameter $\b$, which measures the Morse properties of the potentials $\pi_{\!{}_{\integer k}} f$ with $\noruno{k}\le \mathtt N$; compare Eq.~\equ{kappa} and Remark~\ref{rampulla}--(i).\\
This uniformity allows one to analyze global analytic properties: for example,
the action--angle map for standard Hamiltonians, as discussed in \cite{BCaa23}, depends only on the parameter $\kappa$ of the standard Hamiltonian and therefore can be used {\sl simultaneously} for all the secular Hamiltonians $\hamsec_k$, allowing for a nearly--integrable description of $\ham$ on $\mathbb Ruk\times {\mathbb T} ^n$ with uniformly exponentially small perturbations.
\noindent
The results presented in this paper may be useful in attacking some of the fundamental open problems in the analytic theory of nearly--integrable Hamiltonian systems
such as Arnol'd diffusion for generic real analytic systems, and provide indispensable tools to develop a `singular KAM Theory', namely a KAM theory dealing simultaneously with primary and secondary persistent Lagrangian tori in the full phase space, except for the non--perturbative set $\mathbb Rd$.
In particular, Theorem~\ref{sivori} below is the starting point for, e.g., the following result, which (up to the logarithmic correction and in the case of natural systems) proves a conjecture by Arnold, Kozlov and Neishtadt\footnote{See, \cite[Remark~6.18, \S~6.3--C]{AKN}.},
\noindent
{\bf Theorem (\cite{BClin2})}
{\sl Fix $n\ge 2$, $s>0$, $f\in {\mathbb G}^n_{s}$, $B$ an open ball in $\mathbb R^n$, and let
$\displaystyle \ham(y,x;\varepsilon):=\frac12 |y|^2 +\varepsilon f(x)$.
Then, there exists a constant $c>1$ such that, for all $0<\varepsilon<1$, all points in $B\times \mathbb T^n$
lie on a maximal KAM torus for $\ham$, except for
a subset of measure bounded by
$
\, c\, \varepsilon |\ln \varepsilon|^{\g}$ with $\g:=11 n +4$.
}
\noindent
Let us remark that, since it is well known that the asymptotic (as $\varepsilon \to 0$) density of non--integrable {\sl primary} tori is $1-c\sqrt\varepsilon$ (see \cite{Nei}, \cite{P82}), the difference of order of the density of invariant maximal tori in the above theorem must come from secondary tori, i.e., the tori in $\mathbb Ru\times\mathbb T^n$ whose leading dynamics is governed by the secular Hamiltonians $\hamsec_k(\tty,\ttx_1)$ discussed in this paper.
\section{Generic real analytic periodic functions} \label{gennaro}
We begin with a few definitions.
\dfn{pelle}{\bf (Norms on real analytic periodic functions)}\\
For $s>0$ and $n\in {\mathbb N} =\{1,2,3...\}$,
consider the Banach space of
zero average
real analytic periodic functions $\displaystyle f:x\in {\mathbb T} ^n:= {\mathbb R} ^n/(2\pi \mathbb{Z}^n)\mapsto \sum_{ k\in {\mathbb Z} ^n}f_k e^{{\rm i} k\cdot x}$, $f_0=0$,
with finite norm\footnote{As usual $\noruno{k}:=\sum |k_j|$.}
\[
\|f\|_s:=\sup_{k\in {\mathbb Z} ^n} |f_k| e^{\noruno{k}s}\,,
\]
and denote by $\paramBBns$ the closed unit ball of functions $f$ with $\|f\|_s\le 1$.
\edfn
\noindent
Besides the norm $\|{c_{{}_{\rm o}}}ot\|_s$, we shall also use the following two (non equivalent) norms
$$
{\modulo}f{\modulo}_{s}:=\sup_{\mathbb T^n_s}|f|\,,\qquad
\,\thickvert\!\!\thickvert\, f\,\thickvert\!\!\thickvert\,_{s}:=
\sum_{k\in\mathbb{Z}^n}
|f_k| e^{|k|_{{}_1}s}\,.
$$
Such norms satisfy the relations
$$\|f\|_{s}\leq {\modulo}f{\modulo}_{s}
\leq
\,\thickvert\!\!\thickvert\, f\,\thickvert\!\!\thickvert\,_{s}\,.
$$
Notice also the following
`smoothing property' of the norm $\,\thickvert\!\!\thickvert\, \cdot\,\thickvert\!\!\thickvert\,_{r,s}$:
{\sl if $s'\leq s$, then for any $N\ge 1$, one has}
\begin{equation}\label{lesso}
f(y,x)=\sum_{\noruno{k}\geq N}f_k(y)e^{{\rm i} k\cdot x}
\qquad
\Longrightarrow
\qquad
\,\thickvert\!\!\thickvert\, f \,\thickvert\!\!\thickvert\,_{r,s'}\leq e^{-(s-s')N} \,\thickvert\!\!\thickvert\, f \,\thickvert\!\!\thickvert\,_{r,s}\,.
\end{equation}
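\noindent
(A one--line sketch of \equ{lesso}, under the natural assumption that the norms $\,\thickvert\!\!\thickvert\, \cdot \,\thickvert\!\!\thickvert\,_{r,s}$ are defined, as the norm $\,\thickvert\!\!\thickvert\, \cdot \,\thickvert\!\!\thickvert\,_{s}$ above, by summing the weighted Fourier coefficients: for every $\noruno{k}\ge N$ one has
\[
e^{\noruno{k}s'} = e^{\noruno{k}s}\, e^{-(s-s')\noruno{k}} \le e^{-(s-s')N}\, e^{\noruno{k}s}\,,
\]
and the claim follows by summing over $k$.)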
\dfn{generators}{\bf (Generators and Fourier projectors)}\\
{\rm (i)}
Let $ {\mathbb Z} ^n_\varstar$ be the set of integer vectors $k\neq 0$ in $ {\mathbb Z} ^n$ such that the first non--null component is positive:
\beq{iguazu}
{\mathbb Z} ^n_\varstar:=
\big\{ k\in {\mathbb Z} ^n:\ k\neq 0\ {\rm and} \ k_j>0\ {\rm where}\ j=\min\{i: k_i\neq 0\}\big\}\,,
\end{equation}
and denote by $\gen$
the set of {\sl generators of 1d maximal lattices} in $ {\mathbb Z} ^n$, namely, the set of vectors $k\in {\mathbb Z} ^n_\varstar$
such that the greater common divisor ({\rm gcd}) of their components is 1:
\[
\gen:=\{k\in {\mathbb Z} ^n_\varstar:\ {\rm gcd} (k_1,\ldots,k_n)=1\}\,.
\]
Let us also denote by ${\cal G}^n_{K}$ the generators of size not exceeding $K\ge 1$,
\[
{\cal G}^n_{K}:=\gen \cap \{\noruno{k} \leq K \}\,,
\]
{\rm (ii)}
Given a zero average real analytic
periodic function $f$
and $k\in \gen$,
we define
\beq{kproj}
\sa\in {\mathbb T} \mapsto \pi_{\!{}_{\integer k}} f (\sa):=\sum_{j\in {\mathbb Z} } f_{jk} e^{{\rm i} j\sa}\,.
\end{equation}
\edfn
Notice that $f$ can be uniquely written as
\[
f(x)=
\sum_{k\in \gen} \pi_{\!{}_{\integer k}} f (k\cdot x)\,.
\]
\begin{definition}\label{buda}
Let $\b>0$. A function $F\in C^2(\mathbb T,\mathbb R)$ is called a {\bf $\b$--Morse function} if
\[
\min_{\sa\in\mathbb T} \ \big( |F'(\sa)|+|F''(\sa)|\big)
\geq\beta \,,
\quad
\min_{i\neq j }
|F(\sa_i)-F(\sa_j)|
\geq \beta \,,
\]
where $\sa_i\in\mathbb T$ are the critical points of $F$.
\end{definition}
\dfn{pigro}{\bf (Cosine--like functions)}
Let $0<\ttg< 1/4$.
We say that a real analytic function $G:\mathbb T_1\to\mathbb{C}$ is
$\ttg$--cosine--like
if, for some $\eta>0$ and
$\sa_0\in\mathbb R$, one~has
\[
{\modulo}G(\sa)-\eta\cos (\sa+\sa_0) {\modulo}_1
:= \sup_{|\, {\rm Im}\, \sa|<1}{\modulo}G(\sa)-\eta\cos (\sa+\sa_0) {\modulo}
\leq \eta\ttg\,.
\]
\edfn
Notice that this notion is invariant by rescalings: $G$ is $\ttg$--cosine--like if and only if $\lambda G$ is $\ttg$--cosine--like for any $\lambda>0$. Beware of the usage of $|\cdot|_1$ as sup norm on $ {\mathbb T} _1$, the complex strip of width~2.
\noindent
Now, the main definition.
\dfn{sicuro} {\bf (The analytic class ${\mathbb G}^n_{s}$)}
We denote by ${\mathbb G}^n_{s}$ the subset of functions $f\in \paramBBns$ such that the following two properties
hold:
\beqa{P1}
&&
\varliminf_{\substack{\noruno{k}\to+{\infty}\\ k\in\gen}} |f_k| e^{\noruno{k}s} \noruno{k}^n>0\,,
\\
&&
\forall \ k\in\gen\,,\ \pi_{\!{}_{\integer k}} f\ {\rm is \ a\ Morse\ function\ with \ distinct \ critical\ values}\,.
\nonumber
\end{eqnarray}
\edfn
\rem\label{posizione}
(i)
If $f\in \paramBBns$, then the function $\pi_{\!{}_{\integer k}} f$ belongs to $ \hol_{|k|_{{}_1}s}^1$ and therefore has a domain of analyticity which increases with the norm of $k$.
\noindent
(ii) A simple example of function in ${\mathbb G}^n_{s}$ is given by
\[
f(x):=2 \sum_{k\in\gen} e^{-s\noruno{k}} \cos k\cdot x\,.
\]
Indeed, one checks immediately that
\[
\|f\|_s=1\,,\qquad
\varliminf_{\substack{\noruno{k}\to+{\infty}\\ k\in\gen}} |f_k| e^{\noruno{k}s} \noruno{k}^n=+{\infty}\,,\qquad
\pi_{\!{}_{\integer k}} f (\sa)=2 e^{-s\noruno{k}} \cos \sa\,.
\]
(iii)
The critical points of an analytic Morse function on $ {\mathbb T} $, by compactness, cannot accumulate; hence, there is a finite, even number of them, which are, alternately, a relative strict maximum and a relative strict minimum.
In particular, if $G$ is $\b$--Morse, then the number of its critical points can be bounded by
$\pi\sqrt{2\max |G''|/\b}$.
Indeed, if $\sa\neq \sa'$ are critical points of $G$, then, by \equ{cimabue}, one has
$$\textstyle\b\le|G(\sa)-G(\sa')|\le \frac12 (\max|G''|) \,{\rm dist}(\sa,\sa')^2\,,$$ which implies that the minimal distance between two critical points is at least $\sqrt{2\b/\max|G''|}$; since the length of $ {\mathbb T} $ is $2\pi$, the claim follows.
\erem
\subsection{Uniform behaviour of large-mode Fourier projections}\label{ironman}
If a function $f\in \paramBBns$ satisfies \equ{P1}, then, {\sl apart from a finite number of Fourier modes, its Fourier projections
$\pi_{\!{}_{\integer k}} f$ are close to a shifted rescaled cosine}, a fact that allows, e.g., a uniform analytic treatment of high order perturbation theory.
\noindent
To discuss this matter, let us first point out that
for any sequence of real numbers $\{a_k\}$ and for any function $N(\d)$ such that
$\lim_{\d\downarrow 0} N(\d)=+\infty$ one has
\beq{analisi1}
\varliminf a_k>0 \quad \iff \quad \exists \ \d>0\ \ {\rm s.t.}\ \ a_k\ge \d\,,\ \forall\ k\ge N(\d)\,.
\end{equation}
We shall apply this remark to the inferior limit in \equ{P1} with a particular choice of the function $N(\d)$, namely, we define
$\mathtt N(\d)=\mathtt N(\d;n,s)$ as
\beq{enne}
\mathtt N(\d):=2\, \max\Big\{1\,,\,\frac1s \, \log \frac{{c_{{}_{\rm o}}}}{s^n\, \d}\Big\} \,,\qquad {c_{{}_{\rm o}}}:= 2^{44}\ (2n/e)^n\,.
\end{equation}
For later use, we point out that\footnote{In fact, if $s\ge 1$ then $\mathtt N\ge 2\ge 2/s$, while if $s<1$ then the logarithm in
\equ{enne} is larger than one, so that $\mathtt N\ge 2/s$ also in this case.}
\beq{bollettino1}
\mathtt N\ge2 \ttcs\,,\quad {\rm where}\quad \ttcs:=\max\big\{1, 1/s\big\}\,.
\end{equation}
From \equ{analisi1} it follows that if $f$ satisfies \equ{P1}, one can find $0<\d\le 1$ such that
\beq{P1+}
|f_k|\geq \delta \noruno{k}^{-n}\, e^{-\noruno{k} s}\,,\qquad \forall \ k\in\gen\,,\ \noruno{k}\ge\mathtt N\,.
\end{equation}
The main feature of the above choice of $\mathtt N$ is that, for $\noruno{k}\ge\mathtt N$, $\pi_{\!{}_{\integer k}} f$ is very close to a shifted rescaled cosine function:
\begin{proposition}\label{pollaio}
Let $\d>0$, $f\in\paramBBns$ and assume \varepsilonqu{P1+}.
Then,
for any $k\in \gen $ with $ \noruno{k}\geq \mathtt N$,
$\pi_{\!{}_{\integer k}} f$ is $2^{-40}$--cosine--like (Definition~\ref{pigro}).
\varepsilonnd{proposition}
\noindent{\bf Proof\ }
We shall prove something slightly stronger, namely, that there exists $\sa_k\in[0,2\pi)$
so that
\begin{equation}\label{alfacentauri}
\pi_{\!{}_{\integer k}} f(\sa)=2 |f_k| \big(\cos(\sa+ \sa_k)+F^k_\star(\sa)\big)\,,\quad F^k_{\! \varstar}(\sa):={\hat f}ac{1}{2|f_k|}\sum_{|j|\geq 2}f_{jk}e^{{\rm i} j \sa}\,,
\varepsilonnd{equation}
with $F^k_{\! \varstar}\in\hol_1^1$ and (recall the definition of the norms in \varepsilonqu{norme})
\begin{equation}\label{gallina}
\modulo F^k_{\! \varstar} \modulo_1\le \,\thickvert\!\!\thickvert\, F^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{1}\leq 2^{-40}\,.
\varepsilonnd{equation}
Indeed, by definition of $\pi_{\!{}_{\integer k}} f$,
\[
\pi_{\!{}_{\integer k}} f(\sa):= \sum_{j\in {\mathbb Z} \, \backslash\, \{0\}} f_{jk} e^{{\rm i} j \sa}
= \sum_{|j|=1} f_{jk}e^{{\rm i} j\sa} + \sum_{|j| \ge 2} f_{jk}e^{{\rm i} j\sa} \,,
\]
and, defining $\sa_k\in[0,2\pi)$ so that $e^{{\rm i} \sa_k}= f_k/|f_k|$, one has
$$
{\hat f}ac{1}{2|f_k|}\sum_{|j|=1}f_{jk}e^{{\rm i} j \sa}=\, {\rm Re}\, \paramBBig( {\hat f}ac{f_k}{|f_k|} e^{{\rm i} \sa}\paramBBig)=\, {\rm Re}\, e^{{\rm i} (\sa+\sa_k)}
=\cos (\sa+\sa_k)\,,
$$
which yields \varepsilonqu{alfacentauri}. Now, since $f\in\paramBBns$, one has
$|f_k|\le e^{-\noruno{k}s}$
and, by \varepsilonqu{P1+}, $|f_k|\geq \delta \noruno{k}^{-n}\, e^{-\noruno{k} s}$.
Therefore, for $\noruno{k}\ge \mathtt N$, one has
\beqa{onlyyou}
\,\thickvert\!\!\thickvert\, F^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{{}_1}&\stackrel{\varepsilonqu{alfacentauri}}=&
{\hat f}ac{1}{2|f_k|}\sum_{|j|\geq 2}|f_{jk}|e^{|j|}
\leq
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\sum_{|j|\geq 2}|f_{jk}|e^{|j|}\nonumber
\\
&\le&
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\sum_{|j|\geq 2}e^{-|j|(\noruno{k}s-1)}
\nonumber
\\
&\le&
{\hat f}ac{2 e^2 \noruno{k}^n}{\d} \ e^{-\noruno{k}s}
={\hat f}ac{2^{n+1} e^2 }{s^n\d} e^{-{\hat f}ac{\noruno{k}s}2}\ \ \paramBBig({\hat f}ac{\noruno{k}s }2\paramBBig)^n e^{-{\hat f}ac{\noruno{k}s}2}
\nonumber
\\
&\le& \paramBBig({\hat f}ac{2n}{e s}\paramBBig)^n\, {\hat f}ac{2e^2}{\d} \, e^{-{\hat f}ac{\mathtt N s}2}
\le 2^{-40}\,,
\end{eqnarray}
where the geometric series converges since $\noruno{k}s\ge \mathtt N s\ge2 $ (by \varepsilonqu{bollettino1}) and last inequality follows
by definition of $\mathtt N$ in \varepsilonqu{enne}.
\qed
\rem
In fact, the particular form of $\mathtt N$ is used {\sl only} in the last inequality in \varepsilonqu{onlyyou}.
\varepsilonrem
\noindent
Next, we need an elementary calculus lemma:
\begin{lemma}\label{pennarello}
Assume that $F\in C^2(\mathbb T,\mathbb R)$, $\bar\sa$ and $0<\mathtt c<1/2$ are such that
$$\|F-\cos (\sa+\bar \sa)\|_{C^2}\le \mathtt c\,,$$
where $\|F\|_{C^2}:=\max_{0\leq k\leq 2}\sup|F^{(k)}|$.
Then, $F$ has only two critical points and it is $(1-2 \mathtt c)$--Morse (Definition~\ref{buda}).
\varepsilonnd{lemma}
\noindent{\bf Proof\ } By considering the translated function $\sa\to F(\sa-\bar\sa)$, one can reduce to the case $\bar\sa=0$ ($F$ is $\b$--Morse if and only if $\sa\to F(\sa-\bar\sa)$ is $\b$--Morse).\\
Thus, we set $\bar \sa=0$, and note that, by assumption, $|F'|=|F'+\sin\sa-\sin\sa|\ge|\sin \sa|-\mathtt c $, and, analogously, $|F''|\ge |\cos\sa|-\mathtt c $. Hence, $|F'|+|F''|\ge|\sin\sa|+|\cos\sa|-2\mathtt c \ge 1-2\mathtt c $. Next, let us show that $F$ has a unique strict maximum $\sa_0\in I:=(-\pi/6,\pi/6)$ (mod $2\pi$). Writing $F=\cos\sa+g$, with $g:=F-\cos \sa$, one has that $F'(-\pi/6)=1/2+g'(-\pi/6)\ge 1/2 - \mathtt c >0$ and, similarly, $F'(\pi/6)\le -1/2 +\mathtt c <0$; thus $F$ has a critical point in $I$ and, since $-F''=\cos\sa -g''\ge \cos\sa-\mathtt c \ge \sqrt3/2-\mathtt c >0$, $F$ is strictly concave in $I$, showing that such a critical point is unique and is a strict local maximum. Similarly, one shows that $F$ has a second critical point $\sa_1\in (\pi-\pi/6,\pi+\pi/6)$ where $F$ is strictly convex, so that $\sa_1$ is a strict local minimum; but, since
in the complementary of these intervals $F$ is strictly monotone (as it is easy to check), it follows that $F$ has a unique global strict maximum and a unique global strict minimum.
Finally, $F(\sa_0)-F(\sa_1)\ge \sqrt3-2\mathtt c >1-2\mathtt c $ and the claim follows. \qed
\noindent
From Proposition~\ref{pollaio} and Lemma~\ref{pennarello} one gets immediately:
\begin{proposition}\label{punti} Let $\d>0$, $f\in\paramBBns$ and assume \varepsilonqu{P1+}.
Then,
for every
$k\in \gen$ with $\noruno{k}\ge \mathtt N$, the function
$\pi_{\!{}_{\integer k}} f$ is $|f_k|$--Morse.
\varepsilonnd{proposition}
\noindent{\bf Proof\ } As in the proof of Proposition~\ref{pollaio},
we get
\beq{derby}
\paramBBig|{\hat f}ac{\pi_{\!{}_{\integer k}} f}{2f_k} - \cos(\sa+\sa^k)\paramBBig|_1\stackrel{\varepsilonqu{alfacentauri}}=
|F^k_\star|_1\leq
\,\thickvert\!\!\thickvert\, F^k_\star\,\thickvert\!\!\thickvert\,_1\stackrel{\varepsilonqu{gallina}}
\leq 2^{-40}\,,
\varepsilonnd{equation}
which implies that the function $F:=\pi_{\!{}_{\integer k}} f/(2f_k)$ is $C^2$--close to a (shifted) cosine: Indeed, by Cauchy estimates
$\|{c_{{}_{\rm o}}}ot\|_{C^2}\leq 2 |{c_{{}_{\rm o}}}ot|_1$, so that
\[
\|F-\cos(\sa+\sa^k)\|_{C^2}=\max_{0\le j\le 2} \max_\mathbb T |\partial_\sa^j(F-\cos(\sa+\sa^k))|\le
2|F^k_\star|_1 \stackrel{\varepsilonqu{derby}} \leq 2^{-39} \,.
\]
By Lemma~\ref{pennarello} we see that $F$ is $(1-2^{-38})$--Morse, and the claim follows by rescaling.~\qed
\subsection{Genericity}
In this section we prove that ${\mathbb G}^n_{s}$ is a generic set in $\paramBBns$.
\dfn{sicuro2} Given $n,s>0$, $0<\d\le 1$ and $\b>0$ and $\mathtt N$ as in \varepsilonqu{enne} we call
${\mathbb G}^n_{s}(\d,\b)$ the set of functions in $\paramBBns$ which satisfy \varepsilonqu{P1+} together with:
\beq{P2+}
\pi_{\!{}_{\integer k}} f\ {\rm is \ \b\!-\!Morse}\,,\qquad \, \ \ \forall \ k\in\gen\,,\ \noruno{k}\le \mathtt N\,.
\varepsilonnd{equation}
\varepsilondfn
Then, the following lemma holds:
\begin{lemma}
\label{telaviv} Let $n,s>0$. Then,
$\displaystyle
{\mathbb G}^n_{s}=\bigcup_{{\mathtt s}pra{\d\in (0,1]}{\b>0}}{\mathbb G}^n_{s}(\d,\b)$.
\varepsilonnd{lemma}
\noindent{\bf Proof\ }
Assume $f\in {\mathbb G}^n_{s}$ and
let $0<\d_0\le 1$ be smaller than the limit inferior in \varepsilonqu{P1}.
Then, there exists $N_0$ such that $|f_k|>\d_0 \noruno{k}^{-n} e^{-\noruno{k}s}$, for any $\noruno{k}\ge N_0$, $k\in\gen$.
Since $\lim_{\d\to 0} \mathtt N=+\infty$, there exists $0<\d<\d_0$ such that $\mathtt N>N_0$. Hence, if $\noruno{k}\ge \mathtt N$ and
$k\in\gen$,
\varepsilonqu{P1+} holds.
\\
Since $\pi_{\!{}_{\integer k}} f$ is, for any $\noruno{k}\le \mathtt N$, a Morse function with distinct critical values one can, obviously, find a $\b>0$ for which
\varepsilonqu{P2+} holds. Hence $f\in {\mathbb G}^n_{s}(\d,\b)$.
\noindent
Now, let $f\in \bigcup {\mathbb G}^n_{s}(\d,\b)$. Then, there exist $\d\in (0,1]$ and $\b>0$ such that \varepsilonqu{P1+} and \varepsilonqu{P2+} hold. Then, \varepsilonqu{P1} follows immediately from \varepsilonqu{P1+}.
By Proposition~\ref{pollaio}, for any $k\in \gen$ with $\noruno{k}\ge \mathtt N$,
$\pi_{\!{}_{\integer k}} f$ is $2^{-40}$--cosine--like and hence, by Lemma~\ref{pennarello} (arguing as in the proof of Proposition~\ref{punti}), $\pi_{\!{}_{\integer k}} f$ is a Morse function with distinct critical values also for $\noruno{k}\ge \mathtt N$. The proof is complete.
\qed
\begin{proposition}\label{adso}
${\mathbb G}^n_{s}$ contains an open and dense set in $\paramBBns$.
\varepsilonnd{proposition}
\noindent
To prove this result we need a preliminary elementary result on real analytic periodic functions:
\begin{lemma}\label{trifolato} Let $F=\sum F_j e^{{\rm i} j \sa}$ be a real analytic function on $ {\mathbb T} $.
There exists a compact set $\G\subseteq\mathbb{C}$ (depending on $F_j$ for $|j|\ge 2$) of zero Lebesgue measure such that if the Fourier coefficient $F_1$ does not belong to $\G$, then
$F$ is a Morse function with distinct critical values.
\varepsilonnd{lemma}
\noindent{\bf Proof\ }
Without loss of generality we may assume that $F$ has zero average. Then, letting $z:=F_1\in {\mathbb C} $, we write $F$ as
\begin{equation}\label{efesta}
F(\sa)=
z e^{{\rm i} \sa} + \bar z e^{-{\rm i} \sa} + G(\sa):=z e^{{\rm i} \sa} + \bar z e^{-{\rm i} \sa} +
\sum_{|j|\geq 2} F_je^{{\rm i} j\sa} \,.
\varepsilonnd{equation}
When $G\varepsilonquiv 0$ the claim is true with $\G=\{0\}$.\\
Assume that $G\not\varepsilonquiv 0$.
Observe that, since $G$ is real--analytic, the equations
$ F'(\sa)=0= F''(\sa)$ are equivalent to the single equation
$z={\hat f}ac12 e^{-{\rm i} \sa} \big( {\rm i} G'(\sa)+G''(\sa) \big)$, which, as $\sa\in {\mathbb T} $, describes a smooth closed `critical' curve $\Gamma_1$ in $ {\mathbb C} $.
\\
Observe also that $F$ has distinct critical points $\sa_1,\sa_2\in\mathbb T$ with the same critical values if and only if
the following three real equations are satisfied:
\beq{odessa}
F'(\sa_1)=0\,,\qquad
F'(\sa_2)=0\,,\qquad
F(\sa_1)-F(\sa_2)=0\,.
\varepsilonnd{equation}
We claim that if $z,\sa_1,\sa_2$ satisfy \varepsilonqu{odessa}
then
\begin{equation}\label{celebration}
z=\zeta(\sa_1,\sa_2)\,,\qquad
g(\sa_1,\sa_2)=0\,,
\varepsilonnd{equation}
with $\zeta$ and $g$ real analytic on $\mathbb T^2$ given by
\begin{eqnarray*}
&&\zeta(\sa_1,\sa_2):=
\left\{\begin{array}l
{\hat f}ac{\ii}{2(e^{{\rm i} \sa_1}-e^{{\rm i} \sa_2})}
\big(
G'(\sa_1)-G'(\sa_2) +{\rm i} G(\sa_1) -{\rm i} G(\sa_2)
\big)\,,\quad \mbox{for}\ \ \sa_1\neq\sa_2\,;
\\ \ \\
{\hat f}ac1{2e^{{\rm i} \sa_1}}
\big(
G''(\sa_1)+{\rm i} G'(\sa_1)
\big)\,,\phantom{AAAAAAAAAAAAAaa} \mbox{for}\ \ \sa_1=\sa_2\,,
\varepsilonnd{array}\right.
\\ \ \\
&&
g(\sa_1,\sa_2)
:=
\big(1-\cos(\sa_1-\sa_2)\big)\big( G'(\sa_1)+G'(\sa_2) \big)
- \sin (\sa_1-\sa_2)\big( G(\sa_1)-G(\sa_2) \big)\,.
\end{eqnarray*}
Indeed, summing the third equation in \varepsilonqref{odessa} with
the difference of the first two equations
multiplied by $-\ii$,
we get
$$
2(e^{{\rm i} \sa_1}-e^{{\rm i} \sa_2})
z-\ii
\big(
G'(\sa_1)-G'(\sa_2) +{\rm i} G(\sa_1) -{\rm i} G(\sa_2)
\big)=0\,,
$$
which is equivalent to
$z=\zeta(\sa_1,\sa_2)$. Then, by definition $g(\sa_1,\sa_1)=0$, while if $\sa_1\neq \sa_2$,
substituting $z=\zeta(\sa_1,\sa_2)$
in the first equation in \varepsilonqref{odessa}
and multiplying by
$1-\cos(\sa_1-\sa_2)$
we get $g(\sa_1,\sa_2)=0$ also for $\sa_1\neq\sa_2$. Thus, \varepsilonqref{celebration} holds.
\\
Next, we claim that the real analytic function
$ g(\sa_1,\sa_2)$
is not identically zero.
Assume by contradiction that $g$ is identically zero.
Then $g(\sa_2+t,\sa_2)\varepsilonquiv 0$ for every $\sa_2$ and
$t$, and taking the fourth derivative with respect to $t$ evaluated at
$t=0$, we see that
$
G'''(\sa_2)+G'(\sa_2)=0$, for all $\sa_2$.
The general (real) solution of this differential equation
is given by $G(\sa_2)= c e^{{\rm i} \sa_2} + \bar c e^{-{\rm i} \sa_2}+c_0,$
with $c\in\mathbb{C},$ $c_0\in\mathbb R$;
since, by definition, $G_j=0$ for $|j|\le 1$, this forces $G\equiv 0$, contradicting the assumption $G\not\equiv 0$.
Thus, $ g(\sa_1,\sa_2)$
is not identically zero
and, therefore,
the set
$\mathcal Z\subseteq\mathbb T^2$ of its zeros is compact and has zero Lebesgue measure\footnote{Compare, e.g., Corollary 10, p. 9 of \cite{GR}.}.
Clearly, also the set $\Gamma_2:=\zeta(\mathcal Z)\subseteq\mathbb{C}$
is compact and has zero measure, and, therefore, if we define
$\G=\Gamma_1\cup\Gamma_2$, we see that the lemma holds also in the case $G\not\equiv 0$.
\qed
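The construction in the proof can be explored numerically: for a fixed tail $G$, a first Fourier coefficient $F_1$ drawn at random avoids the zero--measure set $\G$ with probability one. The following sketch is illustrative only (the tail coefficients are hypothetical, and it reuses the routine \texttt{looks\_morse} from the sketch following the definition of ${\mathbb G}^n_{s}$).
\begin{verbatim}
# Illustrative experiment for the lemma above (not a proof): fix a tail
# G(theta) = sum_{|j| >= 2} F_j e^{i j theta} and draw the free coefficient F_1
# at random; the resulting F should be Morse with distinct critical values.
# Reuses looks_morse from the first sketch above.
import numpy as np

rng = np.random.default_rng(0)
tail = [0.0, 0.05 + 0.02j, 0.01j]                   # hypothetical F_2, F_3, F_4
for _ in range(5):
    F1 = 0.3 * (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1))   # random F_1
    n_crit, distinct = looks_morse([F1] + tail)
    print(n_crit, "critical points, distinct critical values:", distinct)
\end{verbatim}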
\noindent{\bf Proof\ } {\bf of Proposition~\ref{adso}}
Let $\widetilde{\mathbb G}^n_{s}(\d,\b)$ denote the subset of functions in ${\mathbb G}^n_{s}(\d,\b)$ satisfying the (stronger) condition\footnote{Here, we explicitly indicate
the dependence on $\d$, while $n$ and $s$ are fixed.
Recall that $\mathtt N(\d)$ is decreasing.}
\beq{starstar}
|f_k|> \delta \, e^{-\noruno{k} s}\,,\qquad \forall \ k\in\gen\,,\ \noruno{k}\ge\mathtt N
=\mathtt N(\d)\,,
\varepsilonnd{equation}
and let
$\displaystyle
\widetilde{\mathbb G}^n_{s}=\bigcup_{\stackrel{0<\d\le 1}{\b>0}} \widetilde{\mathbb G}^n_{s}(\d,\b)\,.
$
We claim that $\widetilde{\mathbb G}^n_{s}$ is an open subset of $\paramBBns$.
Let $f\in\widetilde{\mathbb G}^n_{s}(\d,\b)$ for some
${0<\d\le 1},{\b>0}$ and let us show
that there exists $0<\d'\leq\d/2$ such that if
$g\in \paramBBns$ with
$\|g-f\|_s<\d'\leq \d/2$,
then
$g\in\widetilde{\mathbb G}^n_{s}(\d',\b')$ with $\b':=\min\{\b,\delta e^{-s\mathtt N(\d/2)}\}/2$.
Indeed
$$
|g_k|e^{|k|_{{}_1}s}\geq |f_k| e^{|k|_{{}_1}s} -\|g-f\|_s > \d-\d'\geq\d/2\,,
\qquad \forall \ k\in\gen\,,\ \noruno{k}\ge\mathtt N(\d)\,,
$$
namely $g$ satisfies \varepsilonqu{starstar} with $\d/2$ instead of $\d$.
We already know that
$\pi_{\!{}_{\integer k}} f$ is $\b\!-\!$Morse $ \forall \ k\in\gen,\, \noruno{k}<\mathtt N(\d)$.
Moreover, by Proposition~\ref{punti},
we know that $\pi_{\!{}_{\integer k}} f$ is $|f_k|$--Morse for
$k\in \gen$ with $\noruno{k}\ge \mathtt N(\d)$.
In conclusion, by \varepsilonqref{starstar}, we get that
$\pi_{\!{}_{\integer k}} f$ is $2\b'\!-\!$Morse $ \forall \ k\in\gen,\, \noruno{k}<\mathtt N(\d/2)$.
Since the $\|{c_{{}_{\rm o}}}ot\|_s$--norm is stronger than the $C^2$--one, taking $\d'$ small enough we get that
$\pi_{\!{}_{\integer k}} g$ is $\b'\!-\!$Morse $ \forall \ k\in\gen,\, \noruno{k}<\mathtt N(\d/2)$.
\noindent
Let us now show that $\widetilde{\mathbb G}^n_{s}$ is dense in $\paramBBns$.
Fix $f$ in $\paramBBns$ and $0<\loge <1$. We have to find $g\in\widetilde{\mathbb G}^n_{s}$
with $\|g-f\|_s\leq \loge $.
Let $\d:=\loge /4$ and denote by $f_k$ and $g_k$ (to be defined) the Fourier coefficients of, respectively,
$f$ and $g$.
It is enough to define $g_k$ only for
$k\in {\mathbb Z} ^n_\varstar$ since, for $k\in - {\mathbb Z} ^n_\varstar$
we set $g_k:=\bar{g}_{-k}$, since $g$ must be real analytic.
Set $g_k:=f_k$ for
$k\in {\mathbb Z} ^n_\varstar\setminus\gen$.
For
$k\in\gen$, $|k|_{{}_1}\geq \mathtt N(\d)$,
we set $g_k:=f_k$ if
$|f_k|e^{|k|_{{}_1}s}>\d$ and
$g_k:= 2\d e^{-|k|_{{}_1}s}$ otherwise.
Consider now $k\in\gen$, $|k|_{{}_1}< \mathtt N(\d)$.
We make use of Lemma~\ref{trifolato}
with $F=\pi_{\!{}_{\integer k}} g$, $z=F_1=g_k$.
Thus, by Lemma~\ref{trifolato}, there exists a compact set $\G_k\subseteq\mathbb{C}$ (depending on $F_j$ for $|j|\ge 2$) of zero measure such that if $g_k\notin \G_k$ the function
$\pi_{\!{}_{\integer k}} g$ is a Morse function with distinct critical values.
We conclude the proof of the density by choosing
$|g_k|<e^{-|k|_{{}_1}s}$, $|f_k-g_k|\leq \loge
e^{-|k|_{{}_1}s}$ and $g_k\notin \G_k$.
\qed
\subsection{Full measure}
Here we show that {\sl ${\mathbb G}^n_{s}$ is a set of probability 1 with respect to the standard product probability measure on $\paramBBns$}.
More precisely, consider the space\footnote{$ {\mathbb Z} ^n_\varstar$ was
defined in \varepsilonqref{iguazu}.}
${\rm D}^{{ {\mathbb Z} ^n_\varstar}}$, where ${\rm D}:=\{w\in {\mathbb C} :\ |w|\le 1\}$, endowed with the product topology\footnote{
By Tychonoff's Theorem,
${\rm D}^{{ {\mathbb Z} ^n_\varstar}}$ with the product topology is a
compact Hausdorff space.
}.
The product $\sigma$-algebra of the Borel sets of ${\rm D}^{{ {\mathbb Z} ^n_\varstar}}$ is the $\sigma$--algebra generated by
the cylinders $\bigotimes_{k\in{ {\mathbb Z} ^n_\varstar}} A_k$, where $A_k$ are
Borel sets of ${\rm D}$ which differ from ${\rm D}$ only for a finite number of $k$.
The probability product measure $\mu_{{}_\otimes}$ on
${\rm D}^{{ {\mathbb Z} ^n_\varstar}}$
is then defined by letting
$$
\mu_{{}_\otimes} \big(\bigotimes_{k\in{ {\mathbb Z} ^n_\varstar}} A_k \big):=
\prod_{k\in{ {\mathbb Z} ^n_\varstar}}
|A_k |\,,
$$
where $|{c_{{}_{\rm o}}}ot|$ denotes the normalized ($|{\rm D}|=1$) Lebesgue measure on ${\rm D}$.
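For illustration, an element of $\paramBBns$ distributed according to this product measure can be sampled mode by mode (each rescaled coefficient drawn uniformly from ${\rm D}$), as in the following sketch; only finitely many, arbitrarily chosen, modes are drawn, and the sketch plays no role in the proofs.
\begin{verbatim}
# Illustrative sampling sketch: each rescaled coefficient f_k e^{|k|_1 s} is
# drawn uniformly from the closed unit disc D (finitely many modes only).
import numpy as np

def sample_coefficients(modes, s, rng):
    """modes: iterable of integer vectors k; returns a dict {k: f_k}."""
    out = {}
    for k in modes:
        w = np.sqrt(rng.uniform()) * np.exp(2j * np.pi * rng.uniform())  # uniform on D
        out[tuple(k)] = w * np.exp(-np.abs(np.asarray(k)).sum() * s)     # f_k
    return out

rng = np.random.default_rng(1)
print(sample_coefficients([(1, 0), (0, 1), (1, 1), (2, 1)], s=0.5, rng=rng))
\end{verbatim}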
The (weighted) Fourier bijection\footnote{$f$ is real analytic so that $f_{-k}=\bar f_k.$}
\begin{equation}\label{odisseo}
\mathcal F:f(x)=\sum_{k\in {\mathbb Z} ^n_\varstar}
f_k e^{{\rm i} k{c_{{}_{\rm o}}}ot x}+\bar f_k e^{-{\rm i} k{c_{{}_{\rm o}}}ot x}
\in \paramBBns \to \big\{f_k e^{|k|_{{}_1}s}\big\}_{k\in { {\mathbb Z} ^n_\varstar}}\in \varepsilonll^{\infty}({ {\mathbb Z} ^n_\varstar})
\varepsilonnd{equation}
induces a product topology on $\paramBBns$
and a {\sl probability product measure} $\mu$ on the
product $\sigma$-algebra
$ {\mathcal B} $ of the Borellians in $\paramBBns=\mathcal F^{-1}
\big({\rm D}^{{ {\mathbb Z} ^n_\varstar}}\big)$
(with respect to the induced product topology), i.e., given $B\in {\mathcal B} $, we set $\mu(B):=\mu_{{}_\otimes}(\mathcal F(B))$. Then one has:
\begin{proposition}\label{melk}
${\mathbb G}^n_{s}\in {\mathcal B} $ and $\mu({\mathbb G}^n_{s})=1$.
\varepsilonnd{proposition}
\noindent{\bf Proof\ }
First note that, for every $\d,\b>0$ the set
${\mathbb G}^n_{s}(\d,\b)$ is
closed with respect to the product topology.
Indeed for every $ k\in\gen$
the set $\{f\in \paramBBns\ \mbox{s.t.}\
|f_k|\geq \delta \noruno{k}^{-n}\, e^{-\noruno{k} s}\}$
is a closed cylinder. Moreover also
the set $\{f\in \paramBBns\ \mbox{s.t.}\ \pi_{\!{}_{\integer k}} f \ \mbox{is}\ \b\mbox{--Morse}\}$
is closed w.r.t.\ the product topology. In fact, we prove that
the complement $E:=\{f\in \paramBBns\ \mbox{s.t.}\ \pi_{\!{}_{\integer k}} f \ \mbox{is not}\ \b\mbox{--Morse}\}$ is open w.r.t.\ the product topology.
Indeed, if $f^*\in E$ there exists $r>0$ small enough such that
$E_r:=\{f\in \paramBBns\ \mbox{s.t.}\ \|\pi_{\!{}_{\integer k}} f-\pi_{\!{}_{\integer k}} f^*\|_{C^2}<r \}\subseteq E.$
Define the open cylinder
$$
E_{\rho,J}:=\{
f\in \paramBBns\ \mbox{s.t.}\
|f_{jk}-f_{jk}^*|<{\hat f}ac{\rho}{\noruno{j}^2}e^{-\noruno{jk}s}\ \mbox{for}\
j\in\mathbb{Z}\,,\
0<\noruno{j}\leq J \}\,.
$$
We claim that $E_{\rho,J}\subseteq E_r$ for suitably small $\rho$
and large $J$ (depending on $r$ and $s$).
Indeed, when $f\in E_{\rho,J}$
$$
\|\pi_{\!{}_{\integer k}} f-\pi_{\!{}_{\integer k}} f^*\|_{C^2}
\leq
3\sum_{j\neq 0} \noruno{j}^2 |f_{jk}-f_{jk}^*|
\leq
3\rho\sum_{0<\noruno{j}\leq J}e^{-\noruno{jk}s}
+
6\sum_{\noruno{j}> J}\noruno{j}^2 e^{-\noruno{jk}s}
<r
$$
for suitably small $\rho$
and large $J$.
Therefore $E_{\rho,J}\subseteq E_r\subseteq E$ and $E$ is open in
the product topology.
In conclusion, taking the intersection over $k\in\gen$,
we get that ${\mathbb G}^n_{s}(\d,\b)$ is
closed with respect to the product topology.
\\
Recalling Lemma~\ref{telaviv}, we note that ${\mathbb G}^n_{s}$ can be written as
$\displaystyle
{\mathbb G}^n_{s}=\bigcup_{h\in\mathbb N} {\mathbb G}^n_{s}(1/h,1/h)$.
Thus ${\mathbb G}^n_{s}$ is Borellian.
\noindent
Let us now prove that $\mu({\mathbb G}^n_{s})=1$.
Fix $0<\d\le 1$ and denote by
${\mathbb G}^n_{s}(\d)$ the subset of functions in $\paramBBns$ satisfying \varepsilonqu{P1+}
and such that $\pi_{\!{}_{\integer k}} f$
is a Morse function with distinct critical values
for every $ k\in\gen$.
Recall \varepsilonqref{odisseo} and define
$$
\mathbb P_\d:=\mathcal F({\mathbb G}^n_{s}(\d))\subseteq \varepsilonll^{\infty}({ {\mathbb Z} ^n_\varstar})\,.
$$
Fix $\hat g=(g_k)_{k\in {\mathbb Z} ^n_\varstar\setminus\gen}\in
\varepsilonll^{\infty}({ {\mathbb Z} ^n_\varstar\setminus\gen})$ with $|g_k|\leq 1$ for every
$k\in {\mathbb Z} ^n_\varstar\setminus\gen$.
Consider
the section
\[
\mathbb P_\d^{\hat g}:=\big\{ {\hat \eta}eck g=(g_k)_{k\in\gen},\ |g_k|\leq 1 \ \ \mbox{s.t}\ \ |g_k|\geq \delta \noruno{k}^{-n} \ \mbox{if}\ \noruno{k}\geq \mathtt N\,,\ \
g_k e^{-\noruno{k}s}\notin \G_k\,, \ \mbox{if}\ \noruno{k}< \mathtt N
\big\},
\]
where the sets $\G_k$ (depending on $\hat g$)
were defined in the proof of Proposition \ref{adso}
so that, for every $k\in\gen,$ $|k|_{{}_1}< \mathtt N$,
if $g_k e^{-\noruno{k}s}\notin \G_k$ then the function\footnote{Recall \varepsilonqref{efesta}.}
$$
g_k e^{-\noruno{k}s} e^{{\rm i} \sa} + \bar g_k e^{-\noruno{k}s} e^{-{\rm i} \sa} +
\sum_{|j|\geq 2} \hat g_{jk} e^{-\noruno{jk}s} e^{{\rm i} j\sa}
=\pi_{\!{}_{\integer k}} f\,,\ \ \mbox{with}\ \ f:=\mathcal F^{-1}(g)\,,\ \ g=({\hat \eta}eck g,\hat g)\,,
$$
is a Morse function with distinct critical values. Then, since every $\G_k$ has
zero measure
$$
\mu_{{}_\otimes}|_{\varepsilonll^{\infty}({\gen})}(\mathbb P_\d^{\hat g})=
\prod_{k\in\gen, |k|_{{}_1}\geq \mathtt N} (1-
\d^2\, |k|_{{}_1}^{-2{n}})\geq 1-c\d^2\,,
$$
for a suitable constant $c=c(n)$.
Since the above estimate holds for every
$\hat g\in
\varepsilonll^{\infty}({ {\mathbb Z} ^n_\varstar\setminus\gen})$,
by Fubini's Theorem we get
$$
\mu_{{}_\otimes}|_{\varepsilonll^{\infty}({\gen})}(\mathbb P_\d^{\hat g})=
\mu_{{}_\otimes}(\mathbb P_\d)=\mu({\mathbb G}^n_{s}(\d))
\geq 1-c\d^2\,.
$$
Then,
$$
\mu({\mathbb G}^n_{s})=\lim_{\d\to 0^+} \mu({\mathbb G}^n_{s}(\d))=1\,. \qedeq
$$
\section{Averaging, coverings and normal forms}
In the rest of the paper we consider
{\sl real--analytic, nearly--integrable natural Hamiltonian systems}
\beqa{ham}\textstyle
&&\left\{
\begin{array}{l}
\dot y = -\ham_x(y,x)\\
\dot x= \ham_y(y,x)
\varepsilonnd{array}\right.\,, \phantom{AAAAAAA}(y,x)\in {\mathbb R} ^n\times {\mathbb T} ^n\,,\nonumber
\\
&&\textstyle \ham(y,x;\varepsilon):={\hat f}ac12 |y|^2 +\varepsilon f(x)\,,\phantom{AAA} n\ge 2\,,\ 0<\varepsilon<1\,.
\end{eqnarray}
As usual, `dot' denotes derivative with respect to `time' $t\in {\mathbb R} $; $\ham_y$ and $\ham_x$ denote the gradients with respect to $y$ and $x$; $|y|^2:=y{c_{{}_{\rm o}}}ot y:=\sum_j|y_j|^2$; $ {\mathbb T} ^n$ denotes the standard flat torus $ {\mathbb R} ^n/(2\pi {\mathbb Z} ^n)$, and
the phase space $\mathbb R^n\times {\mathbb T} ^n$ is endowed with the standard symplectic form $dy\wedge dx=\sum_j dy_j\wedge dx_j$.
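For orientation, the flow of \varepsilonqu{ham} is easily integrated numerically; the following sketch is illustrative only (the truncated potential and all numerical values are arbitrary) and plays no role in the analysis.
\begin{verbatim}
# Illustrative numerical integration of (ham) for n = 2 (a sketch; the truncated
# potential and all parameter values are arbitrary).
import numpy as np
from scipy.integrate import solve_ivp

modes = np.array([[1, 0], [0, 1], [1, 1]])            # a few modes k
s, eps = 0.5, 1e-3
amps = 2.0 * np.exp(-s * np.abs(modes).sum(axis=1))   # coefficients 2 e^{-s|k|_1}

def rhs(t, z):
    y, x = z[:2], z[2:]
    grad_f = -(amps * np.sin(modes @ x)) @ modes      # gradient of sum_k amps_k cos(k.x)
    return np.concatenate([-eps * grad_f, y])         # dot y = -eps f_x,  dot x = y

sol = solve_ivp(rhs, (0.0, 50.0), [0.3, -0.2, 0.1, 2.0], rtol=1e-10, atol=1e-12)
H = 0.5 * (sol.y[:2] ** 2).sum(axis=0) + eps * (amps @ np.cos(modes @ sol.y[2:]))
print(H.max() - H.min())                              # energy drift (should be tiny)
\end{verbatim}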
\noindent
In this section, we discuss the high order normal forms of generic natural systems, especially in neighbourhoods of simple resonances.
\noindent
As standard in perturbation theory, we consider a bounded phase space $ {\mathcal M} \subseteq \mathbb R^n\times\mathbb T^n$.
By translating actions and rescaling the parameter $\varepsilon$, it is not restrictive to take
\begin{equation}\label{bada}
{\mathcal M} :=\DD\times \mathbb T^n\,,\qquad {\rm with}\qquad \DD:=B_1(0):=\{y\in \mathbb R^n\st |y| <1\}\,.
\varepsilonnd{equation}
\noindent
The first step in averaging theory is to construct suitable coverings so as to control resonances where small divisors appear.
Let us recall that {\sl a resonance} ${\cal R}_k$
(with respect to the free Hamiltonian ${\hat f}ac12 |y|^2$) is the hyperplane
$\{y\in {\mathbb R} ^n: y{c_{{}_{\rm o}}}ot k=0\}$, where $k\in\gen$, and its order is given by $\noruno{k}$;
a {\sl double resonance}
${\cal R}_{k,\varepsilonll}$ is the intersection of two resonances: ${\cal R}_{k,\varepsilonll}={\cal R}_k\cap {\cal R_\varepsilonll}$ with
$k\neq\varepsilonll$ in
$\gen$; the order of ${\cal R}_{k,\varepsilonll}$ is given by $\max\{\noruno{k},\noruno{\varepsilonll}\}$.
\subsubsection*{Notations}
The real or complex (open) balls of radius $r>0$ and center $y_0\in \mathbb R^n$ or $z_0\in {\mathbb C} ^n$ are denoted by
\beq{palle}
B_r(y_0):= \{y\in\mathbb R^n: |y-y_0|<r\}\,,\qquad D_r(z_0):= \{z\in {\mathbb C} ^n: |z-z_0|<r\}\,;
\varepsilonnd{equation}
if $V\subset {\mathbb R} ^n$ and $r>0$, $V_r$ denotes the complex neighborhood of $V$ given by\footnote{$\displaystyle |u|:=\sqrt{u{c_{{}_{\rm o}}}ot \bar u}$ denotes the standard Euclidean norm on vectors $u\in {\mathbb C} ^n$ (and its subspaces); `bar', as usual, denotes complex--conjugated.}
\beq{dico}
V_r := \bigcup_{y\in V}\ D_r(y)\,.
\varepsilonnd{equation}
We shall also use the notation $\, {\rm Re}\,(V_r)$ to denote the {\sl real} $r$--neighbourhood of $V\subset {\mathbb R} ^n$, namely,
\beq{dire}
\, {\rm Re}\,(V _r) := V_r\cap {\mathbb R} ^n= \bigcup_{y\in V}\ B_r(y)\,.
\varepsilonnd{equation}
For a set $V\subseteq \mathbb R^n$ and for $r,s>0$, given a function
$f:(y,x) \in V_r\times {\mathbb T} ^n_s\to f(y,x)$,
we denote
\beq{norme}
{\modulo}f{\modulo}_{V,r,s} = {\modulo}f{\modulo}_{r,s}:=\sup_{V_r\times \mathbb T^n_s}|f|\,,
\qquad
\,\thickvert\!\!\thickvert\, f\,\thickvert\!\!\thickvert\,_{V,r,s}=\,\thickvert\!\!\thickvert\, f\,\thickvert\!\!\thickvert\,_{r,s}:=
\sup_{y\in V_r}\sum_{k\in\mathbb{Z}^n}
|f_k(y)| e^{|k|_{{}_1}s}\,,
\varepsilonnd{equation}
where $f_k(y)$ denotes the $k$--th Fourier coefficient of $x\in {\mathbb T} ^n\mapsto f(y,x)$;
for a function depending only on $y\in V_r$ we set ${\modulo}f{\modulo}_{V,r}={\modulo}f{\modulo}_{r}:=
\sup_{V_r}|f|$.
\subsection{Non--resonant and simply--resonant sets}
Denote by $\pk$ and $\pko$ the orthogonal projections
\beq{orto}
\pk y:=(y{c_{{}_{\rm o}}}ot e_k)\, e_k\,,\qquad \pko y:=y-\pk y\,,\qquad e_k:=k/|k|\,,
\varepsilonnd{equation}
and, for any ${\mathtt K}\geq {\mathtt K}O\geq 2$ and $\a>0$, define the following sets:
\beqa{neva}
&&\mathbb Rz:=\{y\in \DD: \min_{k\in {\cal G}^n_{\KO}}|y{c_{{}_{\rm o}}}ot k|>\textstyle {\hat f}ac\a2 \}\,, \
\\
\label{sonnosonnoBIS}
&&
\left\{\begin{array}{l}
\mathbb Ruk:=
\big\{y\in \DD:
|y{c_{{}_{\rm o}}}ot k|<\textstyle\a;\, |\pko y{c_{{}_{\rm o}}}ot \varepsilonll|> {\hat f}ac{3 \alpha {\mathtt K}}{|k|}, \forall
\varepsilonll\in {\mathcal G^n_{{\mathtt K}}}\, \backslash\, \mathbb{Z} k\big\}\,,\quad
(k\in{\cal G}^n_{\KO});
\\
\mathbb Ru:=\bigcup_{k\in {\cal G}^n_{\KO}} \mathbb Ruk\,;
\varepsilonnd{array}\right.
\end{eqnarray}
where, as above, $\DD=B_1(0)$.
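Membership in the sets defined in \varepsilonqu{neva} and \varepsilonqu{sonnosonnoBIS} can be tested numerically as in the following illustrative sketch; here \texttt{generators(n, K)} is our (assumed) reading of ${\cal G}^n_{\mathtt K}$, namely one primitive integer vector per rational direction, with positive first nonzero entry and $|k|_1\le {\mathtt K}$.
\begin{verbatim}
# Illustrative membership tests for R^0 and R^{1,k} (a sketch; generators(n, K)
# is our reading of the set G^n_K: primitive integer vectors, first nonzero
# entry positive, |k|_1 <= K).
import numpy as np
from math import gcd
from itertools import product

def generators(n, K):
    gens = []
    for k in product(range(-K, K + 1), repeat=n):
        k = np.array(k)
        if not 0 < np.abs(k).sum() <= K:
            continue
        if gcd(*[abs(int(c)) for c in k]) != 1:
            continue
        if k[np.nonzero(k)[0][0]] > 0:          # one representative per line
            gens.append(k)
    return gens

def in_R0(y, alpha, K0, n):
    return all(abs(np.dot(y, k)) > alpha / 2 for k in generators(n, K0))

def in_R1k(y, k, alpha, K, n):
    if abs(np.dot(y, k)) >= alpha:
        return False
    ek = k / np.linalg.norm(k)
    y_perp = np.asarray(y) - np.dot(y, ek) * ek
    thr = 3 * alpha * K / np.linalg.norm(k)
    return all(abs(np.dot(y_perp, l)) > thr
               for l in generators(n, K) if not np.array_equal(l, k))
\end{verbatim}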
\noindent
Eq. \varepsilonqu{neva} implies that $\mathbb Rz$ is a $(\a/2)$--non--resonant set up to order ${\mathtt K}O$, i.e.,
\beq{ovvio}
|y{c_{{}_{\rm o}}}ot k|>{\hat f}ac\a2\,,\qquad \forall \ y\in \mathbb Rz\,,\ \forall \ 0<|k|\le {\mathtt K}O\,.
\varepsilonnd{equation}
Indeed, fix $y\in \mathbb Rz$ and $k\in {\mathbb Z} ^n$ with $0<|k|\le {\mathtt K}O$. Then, there exists $\bar k\in {\cal G}^n_{\KO}$ and $j\in {\mathbb Z} \, \backslash\, \{0\}$ such that $k=j\bar k$, so that
$$
|y{c_{{}_{\rm o}}}ot k|=|j|\ |\bar k{c_{{}_{\rm o}}}ot y|\ge |\bar k{c_{{}_{\rm o}}}ot y|>\a/2\,.
$$
From \varepsilonqu{sonnosonnoBIS} it follows that $\mathbb Ruk$
is $(2 \alpha {\mathtt K}/|k|)$--non resonant modulo $ {\mathbb Z} k$ up to order ${\mathtt K}$, namely:
\begin{equation}\label{cipollotto2}
|y{c_{{}_{\rm o}}}ot \varepsilonll |\ge 2\a{\mathtt K}/|k|\,,
\ \ \
\forall k \in {\cal G}^n_{\KO}\,,\
\forall y\in \mathbb Ruk\,,\
\forall \varepsilonll\notin \mathbb{Z} k\,, \ |\varepsilonll|\leq {\mathtt K}\ .
\varepsilonnd{equation}
Indeed, fix $y\in \mathbb Ruk$, $k\in{\cal G}^n_{\KO}$, $\varepsilonll\notin \mathbb{Z} k$ with $|\varepsilonll|\le {\mathtt K}$.
Then, there exist $j\in\mathbb{Z}\setminus\{ 0\}$
and $\bar\varepsilonll\in{\cal G}^n_{\K}$ such that $\varepsilonll=j\bar\varepsilonll$. Hence,
\begin{eqnarray*}
|y{c_{{}_{\rm o}}}ot \varepsilonll|&=&|j|\, |y{c_{{}_{\rm o}}}ot \bar\varepsilonll| \ge |y{c_{{}_{\rm o}}}ot \bar\varepsilonll| =| \pko y {c_{{}_{\rm o}}}ot\bar\varepsilonll+ \pk y{c_{{}_{\rm o}}}ot \bar\varepsilonll|
\ge |\pko y{c_{{}_{\rm o}}}ot \bar\varepsilonll|- {\hat f}ac{\alpha {\mathtt K}}{|k|}\\
&>& {\hat f}ac{3 \alpha {\mathtt K}}{|k|} - {\hat f}ac{\alpha {\mathtt K}}{|k|} = {\hat f}ac{2 \alpha {\mathtt K}}{|k|}\ .
\end{eqnarray*}
Relations \varepsilonqu{ovvio} and \varepsilonqu{cipollotto2} yield quantitative control on the small divisors that appear in
perturbation theory allowing for {\sl high} order averaging theory as we now proceed to show.
\subsubsection*{Averaging}
To perform averaging,
we need to introduce a few parameters (Fourier cut--offs, a small divisor threshold, radii of analyticity) and some notation.
\noindent
Let
\beq{dublino}
\left\{\begin{array}{l}
{\mathtt K}\ge 6 {\mathtt K}O\ge 12\,,\quad
\nu:= \textstyle{\hat f}ac92n+2\,,
\quad
\a:= \sqrt\varepsilon {\mathtt K}^\nu\,,
\quad
r_{\rm o}:={\hat f}ac{\a}{16 {\mathtt K}O}\,,
\quad r_{\rm o}':= {\hat f}ac{r_{\rm o}}2\,,
\\
\textstyle s_{\rm o}:=s\big(1-{\hat f}ac1{{\mathtt K}O}\big)\,,
\ s_{\rm o}':=s_{\rm o}\big(1-{\hat f}ac1{{\mathtt K}O}\big)\,,
\
\textstyle s_{\varstar}:=s\big(1-{\hat f}ac1{{\mathtt K}}\big)\,,
\ s_{\varstar}':=s_{\varstar}\big(1-{\hat f}ac1{{\mathtt K}}\big)\,,\\
r_k:={\a}/{ |k|}=\sqrt\varepsilon {\mathtt K}^\nu/|k|\,,\quad r_k':={\hat f}ac{r_k}2\,,\ s'_k:=|k|_{{}_1}s_{\varstar}'\,,\qquad (\forall\ k\in{\cal G}^n_{\KO})\,.
\varepsilonnd{array}\right.
\varepsilonnd{equation}
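The following sketch merely evaluates the parameters \varepsilonqu{dublino} (illustrative only; the sample values of $\varepsilon$, $n$, $s$ and of the two Fourier cut--offs are arbitrary, and the cut--offs must in addition satisfy the largeness conditions required by the lemmata below).
\begin{verbatim}
# Illustrative evaluation of the parameters (dublino) (a sketch; sample values
# are arbitrary).
import math

def dublino_parameters(eps, n, s, K0, K):
    assert K >= 6 * K0 >= 12
    nu = 4.5 * n + 2
    alpha = math.sqrt(eps) * K**nu
    p = {
        "nu": nu, "alpha": alpha,
        "r_o": alpha / (16 * K0), "r_o'": alpha / (32 * K0),
        "s_o": s * (1 - 1 / K0), "s_o'": s * (1 - 1 / K0)**2,
        "s_*": s * (1 - 1 / K),  "s_*'": s * (1 - 1 / K)**2,
    }
    p["r_k"]  = lambda k_norm: alpha / k_norm             # and r_k' = r_k / 2
    p["s_k'"] = lambda k_norm1: k_norm1 * p["s_*'"]
    return p

print(dublino_parameters(1e-30, 2, 1.0, 2, 12)["alpha"])
\end{verbatim}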
\rem
(i) The action space $\DD$ can be trivially covered by three sets as follows
$$
\DD=\mathbb Rz\cup\mathbb Ru \cup \mathbb Rd\,,\qquad \mathbb Rd:=\DD\, \backslash\, (\mathbb Rz\cup\mathbb Ru)\,.
$$
As just pointed out, on the set $(\mathbb Rz\cup\mathbb Ru)\times \mathbb T^n$ one can construct detailed, high order normal forms,
while $\mathbb Rd$ is a {\sl small set of measure of order $\varepsilon\, {\mathtt K}^\g$} (compare \varepsilonqu{teheran4} below).
\noindent
(ii) It is important to notice that $\mathbb Rd$, which is a neighborhood of double resonances of order ${\mathtt K}$, is a {\sl non perturbative set}, as pointed out in \cite{AKN}. Indeed, consider for simplicity the case $n=2$,
where the only double resonance is the origin $y=0$.
Rescaling variables and time by setting $y =\sqrt\varepsilon {\rm y}$, ${\rm x}=x$,
${\rm t}=\sqrt{\varepsilon}t$, the Hamiltonian $t$--flow of ${\hat f}ac12 |y|^2+\varepsilon f(x)$ on $\{y: |y|<\varepsilon\}\times \mathbb T^2\subseteq \mathbb Rd\times \mathbb T^2$ is equivalent to the ${\rm t}$--flow on $\{|{\rm y}|<1\} \times \mathbb T^2$ of the Hamiltonian ${\hat f}ac12 {\rm y}^2+f({\rm x})$, which {\sl does not depend upon $\varepsilon$}.
\varepsilonrem
\noindent
The next result is based on
`refined Averaging Theory' as presented in \cite{BCnonlin}. The main technical point in this approach is the minimal loss of regularity in the angle analyticity domain and the usage of two Fourier cut--offs; for a discussion on these fine points, we refer to the Introduction in \cite{BCnonlin}.
\begin{lemma}[Averaging Lemma]
\label{averaging} Let $\ham$ be as in \varepsilonqu{ham} with $f\in\paramBBns$ and let \varepsilonqu{dublino} hold. There exists a constant
$\bco=\bco(n,s)>1$ such that if ${\mathtt K}O\ge \bco$ the following holds.
\noindent
{\rm (a)} There exists a real analytic symplectic map
\begin{equation}\label{trota}
\Psi_{\rm o}: \mathbb Rz_{r_{\rm o}'}\times \mathbb T^n_{s_{\rm o}'} \to
\mathbb Rz_{r_{\rm o}}\times \mathbb T^n_{s_{\rm o}}
\,,
\varepsilonnd{equation}
such that, denoting by $\langle {c_{{}_{\rm o}}}ot \rangle$ the
average over angles $x$,
\beq{prurito}
\hamo(y,x) := \big(\ham\circ\Psi_{\rm o}\big)(y,x)
={\hat f}ac{|y|^2}2+\varepsilon\big( g^{\rm o}(y) +
f^{\rm o}(y,x)\big)\,,\quad
\langle f^{\rm o}\rangle=0\,,
\varepsilonnd{equation}
where $g^{\rm o}$ and $f^{\rm o}$ are real analytic on $\mathbb Rz_{r_{\rm o}'}\times \mathbb T^n_{s_{\rm o}'}$ and satisfy
\beq{552}
| g^{\rm o}|_{r_{\rm o}'}
\leq
\vartheta o:=
{\hat f}ac{1}{{\mathtt K}^{6n+1}}\,, \qquad
\,\thickvert\!\!\thickvert\, f^{\rm o} \,\thickvert\!\!\thickvert\,_{r_{\rm o}',s_{\rm o}'}
\leq e^{-{\mathtt K}O s/3}\,.
\varepsilonnd{equation}
{\rm (b)} For each $k\in {\cal G}^n_{\KO}$, there exists a real analytic symplectic map
\begin{equation}\label{canarino}
\Psi_k:
\mathbb Ruk_{r_k'} \times \mathbb T^n_{s_{\varstar}'}
\to
\mathbb Ruk_{r_k} \times \mathbb T^n_{s_\varstar}
\,,
\varepsilonnd{equation}
such that
\beqa{hamk}
\hamk(y,x) &:=& \big(\ham\circ\Psi_k\big)(y,x)\\
&=&{\hat f}ac{|y|^2}2+\varepsilon \big( g^k_{\rm o}(y)+
g^k(y,k{c_{{}_{\rm o}}}ot x) +
f^k (y,x)\big)\,,\qquad\pi_{\!{}_{\integer k}} f^k=0\,,
\nonumber
\end{eqnarray}
where
$g^k_{\rm o}$ is real analytic on $\mathbb Ruk_{r_k'}$;
$g^k(y,{c_{{}_{\rm o}}}ot)\in\hol_{s'_k}^1$
for every $y\in \mathbb Ruk_{r_k'}$ (in particular, $\langle g^k(y,{c_{{}_{\rm o}}}ot)\rangle=0$); $f^k $ is real analytic on $\mathbb Ruk_{r_k'} \times \mathbb T^n_{s_{\varstar}'} $, and:
\begin{equation}\label{cristina}
|g^k_{\rm o}|_{r_k'}
\leq \vartheta o\,,\qquad
\,\thickvert\!\!\thickvert\, g^k-\pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\, _{r_k',s'_k}
\leq \vartheta o\,,\qquad
\,\thickvert\!\!\thickvert\, f^{k} \,\thickvert\!\!\thickvert\, _{r_k',{\hat f}ac{s_{\varstar}}2} \le
e^{- {\mathtt K} s/3}\,.
\varepsilonnd{equation}
{\rm (c)} Finally, denoting by $\pi_y$ and $\pi_x$ the projections onto, respectively, the action variable $y$ and the angle variable $x$, one has
\begin{equation}\label{dunringill}\textstyle
|\pi_y\Psi_{\rm o}-y|_{r_{\rm o}',s_{\rm o}'}\leq {\hat f}ac{r_{\rm o}}{2^7 {\mathtt K}O}\,,\quad
|\pi_y\Psi_k-y|_{r_k', s_{\varstar}'}\leq {\hat f}ac{r_k}{2^7 {\mathtt K}}
\,,
\varepsilonnd{equation}
and, for every fixed $y$, $\pi_x \Psi_{\rm o}(y,{c_{{}_{\rm o}}}ot)$,
and $\pi_x \Psi_k(y,{c_{{}_{\rm o}}}ot)$ are diffeomorphisms on $\mathbb T^n$.
\varepsilonnd{lemma}
\noindent{\bf Proof\ } The statements follow from
Theorem~6.1 in \cite[p. 3553]{BCnonlin} with obvious notational changes, which we proceed to spell out.
The correspondence of symbols between this paper and \cite{BCnonlin} is the following\footnote{In these identities, the first symbol is the one used here, the second one is that used in \cite{BCnonlin}}:
\begin{eqnarray*}
&&
\mathbb Rz=\O^0\,;\ \ \mathbb Ruk=\O^{1,k}\,,\ \ \textstyle {\hat f}ac{|y|^2}2=h(y)\,; \ \
{\mathtt K}O={\mathtt K_{{}_1}}\,,\ \ {\mathtt K}={\mathtt K}_{{}_2}\,,
\\
&&
g^{\rm o}= {\rm g}^{\rm o}\,; \ \
f^{\rm o}=f^{\rm o}_{\varstar\varstar}\,;\ \
g^k_{\rm o}(y)+g^k(y,k{c_{{}_{\rm o}}}ot x)={\rm g}^k(y,x)\,;\ \ f^k=f^k_{\varstar\varstar}\,;
\end{eqnarray*}
the constants $\bar L$ and $L$ in Definition~2.1 in \cite[p. 3532]{BCnonlin} in the present case are $\bar L=L=1$ (since the frequency map here is the identity map);
the projection ${\rm p}_{\!{}_{ {\mathbb Z} k}}$ introduced in \cite[p. 3529]{BCnonlin} is different from the projection $\pi_{\!{}_{\integer k}}$ defined here, the relation between the two being: $\pi_{\!{}_{\integer k}} f(k{c_{{}_{\rm o}}}ot x)={\rm p}_{\!{}_{ {\mathbb Z} k}}f(x)$;
finally, the norm $|{c_{{}_{\rm o}}}ot|_{D,r,s}$ in \cite[p. 3534]{BCnonlin} corresponds here to the norm $\,\thickvert\!\!\thickvert\,{c_{{}_{\rm o}}}ot\,\thickvert\!\!\thickvert\,_{D,r,s}$, hence:
$$|g^k_{\rm o}|_{r_k'}+\,\thickvert\!\!\thickvert\, g^k-\pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\, _{r_k',s'_k} = \,\thickvert\!\!\thickvert\, {\rm g}^k - {\rm p}_{\!{}_{ {\mathbb Z} k}} f\,\thickvert\!\!\thickvert\,_{D^{1,k},r_k/2,s_\star}\,.$$
Now, Assumption~A in \cite[p. 3533]{BCnonlin} holds. Indeed:
\begin{itemize}
\item[\footnotesize $\bullet$]
the action--analyticity radii are the same as in \cite{BCnonlin} (compare \varepsilonqu{dublino} with Eq.~(140) in \cite{BCnonlin});
\item[\footnotesize $\bullet$]
the angle--analyticity radii defined here are the same as in
Eq.s~(144) and (147) in \cite{BCnonlin} (with different names);
\item[\footnotesize $\bullet$]
In \cite{BCnonlin} it is assumed that ${\mathtt K}\ge 3{\mathtt K}O\ge 6$ (see Eq. (139) in \cite{BCnonlin}), which in view of \varepsilonqu{dublino}, is satisfied.
Also $\nu$ in \cite{BCnonlin} is assumed to satisfy $\nu\ge n+2$, which in \varepsilonqu{dublino} is defined as $\nu={\hat f}ac92n+2$.
\item[\footnotesize $\bullet$] By taking $\bco$ big enough, condition~(143) in \cite{BCnonlin} is satisfied.
\item[\footnotesize $\bullet$]
finally,
to meet the smallness condition~(40) in \cite{BCnonlin}, namely $\varepsilon\le r^2/{\mathtt K}^{2\nu}$ (where $r$ is the analyticity radius of the unperturbed Hamiltonian, which here is a free parameter), one can take $r={\mathtt K}^\nu$ so that condition~(40)
in \cite{BCnonlin} becomes simply $\varepsilon\le 1$.
\varepsilonnd{itemize}
Thus, Theorem~6.1 of \cite{BCnonlin} can be applied, and
\varepsilonqu{prurito} and \varepsilonqu{hamk} are immediately recognized
as,
respectively, Eq.'s (145) and (148) in \cite{BCnonlin}.
Since $\bar\vartheta$ and $\vartheta$ in Eq. 141 of \cite{BCnonlin} are of the form $c(n,s) /{\mathtt K}^{7n+1}$, we see that, by taking $\bco$ big enough, they can be bounded by $\vartheta o=1/{\mathtt K}^{6n+1}$ in \varepsilonqu{552}. Analogously, the exponential estimates on the perturbation functions in (146) and (150) of \cite{BCnonlin} are, respectively, of the form $c(n,s)\, {\mathtt K}O^n e^{-{\mathtt K}O s/2}$ and
$c(n,s)\, {\mathtt K}^n e^{-{\mathtt K} s/2}$, which, by taking $\bco$ big enough, can be bounded, respectively, by $e^{-{\mathtt K}O s/3}$ and $e^{-{\mathtt K} s/3}$
as claimed. Thus (a) and (b) are proven. Finally, \varepsilonqu{dunringill} and the injectivity of the angle maps follow at once
from (71) and (69) in \cite[p. 3541]{BCnonlin}.
\qed
\noindent
For high Fourier modes, a more precise and uniform normal form can be achieved\footnote{This lemma should be compared with
Theorem 2.1 in \cite{BCnonlin}.}:
\begin{lemma}[Cosine--like Normal forms] \label{coslike}
Let $\ham$ be as in \varepsilonqu{ham} with $f\in\paramBBns$ satisfying \varepsilonqref{P1+} and let \varepsilonqu{dublino} hold.
There exists a constant $\bfco=\bfco(n,s,\d)\ge \max\{\mathtt N\,,\,\bco\}$ such that
if ${\mathtt K}O\ge \bfco$ then the following holds.
For any
$k\in {\cal G}^n_{\KO}$ such that
$\noruno{k}\ge \mathtt N$, the Hamiltonian $\hamk$ in \varepsilonqu{hamk} takes the form:
\begin{equation}\label{hamkc}
\hamk
=
{\hat f}ac{|y|^2}2 + \varepsilon g^k_{\rm o}(y)+
2|f_k|\varepsilon\
\big[\cos(k{c_{{}_{\rm o}}}ot x +\sa_k)+
F^k_{\! \varstar}(k{c_{{}_{\rm o}}}ot x)+
g^k_{\! \varstar}(y,k{c_{{}_{\rm o}}}ot x)+
f^k_{\! \varstar} (y,x)
\big]\,,
\varepsilonnd{equation}
where $\sa_k$ and $F^k_{\! \varstar}$ are as in Proposition~\ref{pollaio} and:
\begin{equation}\label{cate}
g^k_{\! \varstar}:={\hat f}ac{1}{2|f_k|}\, \big(g^k- \pi_{\!{}_{\integer k}} f\big)\,,
\qquad
f^k_{\! \varstar} :={\hat f}ac{1}{2|f_k|} f^k\,.
\varepsilonnd{equation}
Furthermore,
$g^k_{\! \varstar}(y,{c_{{}_{\rm o}}}ot )\in\hol_1^1$
(for every $y\in \mathbb Ruk_{r_k'}$), $\pi_{\!{}_{\integer k}} f^k_{\! \varstar}=0$, and one has:
\beq{martinaTE}
\,\thickvert\!\!\thickvert\, g^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{r_k',1}\le
\vartheta :={\hat f}ac{1}{{\mathtt K}^{5n}}\,,\qquad\quad
\,\thickvert\!\!\thickvert\, f^k_{\! \varstar} \,\thickvert\!\!\thickvert\, _{r_k',{\hat f}ac{s_\varstar}2}
\leq
e^{-{\mathtt K} s/7}\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
Observe that, under the assumptions of Lemma~\ref{coslike}, by \varepsilonqu{dublino} and \varepsilonqu{bollettino1} one has
\beq{bollettino3}
{\mathtt K}\ge 6{\mathtt K}O\ge6\mathtt N\ge 12\ttcs\ge 12\,.
\varepsilonnd{equation}
\noindent{\bf Proof\ }
First of all observe that the hypotheses of Lemma~\ref{coslike} imply those of Lemma~\ref{averaging} so that the results of Lemma~\ref{averaging} hold.\\
From \varepsilonqu{cate} it follows that
$g^k(y,\sa)=\pi_{\!{}_{\integer k}} f(\sa)+ 2 |f_k| g^k_\star(y,\sa)$,
which together with \varepsilonqu{alfacentauri} and \varepsilonqu{hamk} of
Lemma~\ref{averaging}, implies immediately the relations \varepsilonqu{hamkc}.
To prove the first estimate in \varepsilonqu{martinaTE}, we observe that,
since $\noruno{k}\ge \mathtt N$, recalling \varepsilonqu{dublino} and \varepsilonqu{bollettino3} one has
\beq{bad}\textstyle
s'_k =
|k|_{{}_1} s\, \big(1-{\hat f}ac1{\mathtt K}\big)^2 > \mathtt N s\, {\hat f}ac45> 1\,.
\varepsilonnd{equation}
Thus, $g^k_{\! \varstar}(y,{c_{{}_{\rm o}}}ot)$
is bounded on a `large' angle--domain of size larger than 1 and has
zero average (since $g^k_{\! \varstar}(y,{c_{{}_{\rm o}}}ot)\in\hol_{|k|_{{}_1}s_{\varstar}'}^1$).
Now, recall the smoothing property \varepsilonqu{lesso} (with $N=1$),
recall that ${\mathtt K}O\le {\mathtt K}/6$, and take $\bfco$ large enough. Then,
\begin{align*}
\,\thickvert\!\!\thickvert\, g^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{r_k',1}&\stackrel{\varepsilonqu{cate}}{:=}
{\hat f}ac{1}{2|f_k|}\, \,\thickvert\!\!\thickvert\, g^k- \pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\,_{r_k',1}
\stackrel{\varepsilonqu{P1+}}\le
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\, \,\thickvert\!\!\thickvert\, g^k- \pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\,_{r_k',1}
\\
&\stackrel{(\ref{lesso},\ref{bad})}\le
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\, \,\thickvert\!\!\thickvert\, g^k- \pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\,_{r_k',s'_k} {c_{{}_{\rm o}}}ot e^{-(s'_k-1)}
\stackrel{\varepsilonqu{cristina}}\le
{\hat f}ac{\noruno{k}^n e}{2\d}\, \vartheta o\ e^{\noruno{k}(s-s_\varstar')}\\
&\stackrel{\varepsilonqu{dublino}}=
{\hat f}ac{\noruno{k}^n e}{2\d}\, \vartheta o\ e^{{\hat f}ac{\noruno{k}}{{\mathtt K}} s \big(2-{\hat f}ac1{\mathtt K}\big)}
\stackrel{\varepsilonqu{552}}\le
{\hat f}ac{{\mathtt K}O^n e}{2\d}\, {\hat f}ac{1}{{\mathtt K}^{6n+1}}\ e^{2s {\hat f}ac{{\mathtt K}O}{{\mathtt K}}}\le {\hat f}ac{1}{{\mathtt K}^{5n}}\stackrel{\varepsilonqu{martinaTE}}=\vartheta \,.
\varepsilonnd{align*}
Furthermore, possibly increasing $\bfco$, one also has
\begin{align*}
\,\thickvert\!\!\thickvert\, f^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{r_k',{\hat f}ac{s_\varstar}2}&\stackrel{\varepsilonqu{cate}}=
{\hat f}ac{1}{2|f_k|}\, \,\thickvert\!\!\thickvert\, f^k\,\thickvert\!\!\thickvert\,_{r_k',{\hat f}ac{s_\varstar}2}
\stackrel{ \varepsilonqu{P1+}}\le
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\, \,\thickvert\!\!\thickvert\, f^k\,\thickvert\!\!\thickvert\,_{r_k',{\hat f}ac{s_\varstar}2}
\stackrel{\varepsilonqu{cristina}}\le
{\hat f}ac{\noruno{k}^n e^{\noruno{k}s}}{2\d}\, e^{-{\hat f}ac{{\mathtt K} s}3}
\\
&\le
{\hat f}ac{{\mathtt K}O^n}{2\d}\ e^{-{\mathtt K} s\big({\hat f}ac13-{\hat f}ac{{\mathtt K}O}{\mathtt K}\big)}
\le
{\hat f}ac{{\mathtt K}^n}{2\d{c_{{}_{\rm o}}}ot 6^n}\ e^{-{\mathtt K} s/6}
\le e^{-{\mathtt K} s/7}\,. \qquad \qedeq
&
\varepsilonnd{align*}
\subsection{Coverings}
As mentioned in the Introduction, the averaging symplectic maps $\Psi_{\rm o}$ and $\Psi_k$ of Lem\-ma~\ref{averaging} may displace boundaries by $\sqrt\varepsilon{\mathtt K}^\nu$ (compare \varepsilonqu{dublino} and \varepsilonqu{dunringill}) so one cannot use the secular Hamiltonians to describe the dynamics all the way up to the boundary of $\DD\times \mathbb T^n$.
Such a problem -- whose solution is essential, for example, in achieving the results described at the end of the Introduction -- may be overcome by introducing {\sl a second covering}, as we now proceed to explain.
\noindent
Recall the definitions of $\mathbb Rz$ and $\mathbb Ruk$ in \varepsilonqu{neva} and \varepsilonqu{sonnosonnoBIS}; recall \varepsilonqu{dublino}, the notation in \varepsilonqu{dire} and define
\beq{rocket}
\mathbb Rzt:= \, {\rm Re}\, (\mathbb Rz_{r'_{\rm o}/2})\,,\qquad \mathbb Rukt:= \, {\rm Re}\,(\mathbb Ruk_{r'_k/2})\,,\phantom{AAAAa} (k\in{\cal G}^n_{\KO})\,.
\varepsilonnd{equation}
Then, the following result holds:
\begin{lemma}\label{sedformosa} {\bf (Covering Lemma)}
\beqa{surge}
&&\mathbb Rz\times \mathbb T^n \subseteq \Psi_{\rm o}\big(\mathbb Rzt\times \mathbb T^n\big)\,,\\
&&\label{surge2}
\mathbb Ruk\times\mathbb T^n\subseteq \Psi_k\big(\mathbb Rukt\times \mathbb T^n\big)\,,\qquad \forall k\in {\cal G}^n_{\KO}\,,\\
\label{zucchina}
&&
\label{sonnosonnosonno}
\mathbb Rd:=\DD\, \backslash\, (\mathbb Rz\cup\mathbb Ru)\subseteq\bigcup_{k\in {\cal G}^n_{\KO}}
\bigcup_{ {\mathtt s}pra{\varepsilonll\in \mathcal G^n_{{\mathtt K}}}{\varepsilonll\notin \mathbb{Z} k}} \mathbb Rd_{k,\varepsilonll}\,,
\end{eqnarray}
where
\beq{defi}
\mathbb Rd_{k\varepsilonll}:= \big\{y\in \DD:
|y{c_{{}_{\rm o}}}ot k|<\textstyle\a;\, |\pko y{c_{{}_{\rm o}}}ot \varepsilonll|\le {\hat f}ac{3 \alpha {\mathtt K}}{|k|}\big\}\,,\qquad (k\in{\cal G}^n_{\KO}\,,\ \varepsilonll\in {\cal G}^n_{\K}\, \backslash\, \mathbb{Z} k)\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
\rem
(i)
From the definition of $\mathbb Rd$ in \varepsilonqu{sonnosonnosonno} it follows trivially that $\{{\cal R}^i\}$ is a covering of $\DD$
so that $\DD= \mathbb Rz\cup\mathbb Ru\cup \mathbb Rd$.
\noindent
(ii) Notice that from the definition of $\mathbb Rukt$ in \varepsilonqu{rocket}, one has that
\beq{rocket2}
\mathbb Rukt_{r'_k/2}\subseteq \mathbb Ruk_{r'_k}\,.
\varepsilonnd{equation}
\noindent
(iii) Relations \varepsilonqu{surge} and \varepsilonqu{surge2} allow one to map back the dynamics of the averaged
Hamiltonians \varepsilonqu{prurito} and \varepsilonqu{hamk} so as to describe the dynamics also {\sl arbitrarily close to the boundary of the starting phase space}.
\varepsilonrem
\noindent
For the proof of the Covering Lemma we shall use the following immediate consequence of the Contraction Lemma\footnote{Recall the definitions in \varepsilonqu{palle}; as usual $\overline{A}$ denotes the closure of the set $A$.}:
\begin{lemma}\label{DAD}
Fix $y_0\in\mathbb R^n$, $r>0$ and let $\phi:D_{2r}(y_0)\to\mathbb{C}^n$ be a real analytic map satisfying
\beq{trasloco}
\sup_{D_{2r}(y_0)}|\phi(y)-y|\le M
\varepsilonnd{equation}
for some $0<M<r$.
Then,
$y_0\in \phi(\overline{B_r(y_0)})$.
\varepsilonnd{lemma}
\noindent{\bf Proof\ }
Let $V_0:=\overline{B_r(0)}$.
Solving the equation $\phi(y)=y_0$
for some $y\in \overline{B_r(y_0)}$
is equivalent to solving the fixed point
equation $w=\psi_0(w):=-{\psi}(y_0+w)$
for $w\in V_0$, where ${\psi}(y):=\phi(y)-y$. By \varepsilonqu{trasloco} it follows that $\psi_0: V_0 \to V_0$ and
by the mean value theorem and Cauchy estimates we get that, for every $w,w'\in V_0$,
$$|\psi_0(w)-\psi_0(w')|=|{\psi}(y_0+w)-{\psi}(y_0+w')|
\leq {\hat f}ac{M}{r} |w-w'|\,,$$
showing that $\psi_0$ is a contraction on $V_0$ (since $M/r<1$) and the claim follows by the standard Contraction Lemma. \qed
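In practice, the fixed point of Lemma~\ref{DAD} is obtained by straightforward iteration, as in the following illustrative sketch (the toy map and all numbers are arbitrary).
\begin{verbatim}
# Illustrative use of the lemma (a sketch): solve phi(y) = y0 by iterating
# w -> -psi(y0 + w), where psi(y) := phi(y) - y, for a toy near-identity map.
import numpy as np

def solve_near_identity(phi, y0, r, iters=60):
    w = np.zeros_like(y0)
    for _ in range(iters):
        w = -(phi(y0 + w) - (y0 + w))            # w = -psi(y0 + w)
        assert np.linalg.norm(w) <= r            # iterates stay in the ball V_0
    return y0 + w

phi = lambda y: y + 1e-3 * np.sin(3 * y)         # |phi(y) - y| <= M = 1e-3 < r
y0 = np.array([0.4, -0.7])
y = solve_near_identity(phi, y0, r=1e-2)
print(np.linalg.norm(phi(y) - y0))               # ~ machine precision
\end{verbatim}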
\noindent{\bf Proof\ } {\bf of \varepsilonqu{surge}} We start by proving that
\beq{amicamea}
\forall\ (y_0,x)\in \mathbb Rz\times\mathbb T^n\,,\ \varepsilonxists! \ (y,x_0)\in \mathbb Rzt\times\mathbb T^n\!:\ \ \Psi_{\rm o}(y,x)=(y_0,x_0)\,.
\varepsilonnd{equation}
Define
\beq{zoccolo}
M:={\hat f}ac{r_{\rm o}}{2^7 {\mathtt K}O}
\stackrel{\varepsilonqref{dublino}}= {\hat f}ac{\a}{2^{11} {\mathtt K}O^2}
<
{\hat f}ac{\a}{2^{10}{\mathtt K}O^2}=:r<
{\hat f}ac{\a}{2^7 {\mathtt K}O}
\stackrel{\varepsilonqref{dublino}}=
{\hat f}ac{r_{\rm o}'}4
\,.
\varepsilonnd{equation}
Fix $(y_0,x)\in \mathbb Rz\times\mathbb T^n$ and
let $\phi(y):=\pi_y\Psi_{\rm o}(y,x)$. Then, by \varepsilonqu{zoccolo},
$$\sup_{D_{2r}(y_0)}|\phi(y)-y|
\le \sup_{D_{r'_{\rm o}}(y_0)}|\phi(y)-y|
\le |\pi_y\Psi_{\rm o}-y|_{r_{\rm o}',s_{\rm o}'}
\stackrel{\varepsilonqu{dunringill}}\le M\,.$$
Thus, by Lemma~\ref{DAD} and since, by \varepsilonqu{zoccolo}, $2r<r'_{{\rm o}}/2$ (so that $\overline{B_r(y_0)}\subseteq\mathbb Rzt$ by definition of $\mathbb Rzt$), we have that
$$y_0\in \pi_y\Psi_{\rm o}\big(\overline{B_r(y_0)}\times \{x\}\big)
\subseteq
\pi_y\Psi_{\rm o}\big(\mathbb Rzt\times \{x\})\,,
$$ which implies that $\Psi_{\rm o}(y,x)=(y_0,x_0)$ with $x_0\in\mathbb T^n$ proving
\varepsilonqref{amicamea}. Now, observe that the map $ (y_0,x)\in \mathbb Rz\times\mathbb T^n \mapsto (y,x_0)\in\mathbb Rzt\times \mathbb T^n$ in \varepsilonqu{amicamea} is nothing else than the diffeomorphism associated to the near--to--identity generating function $y_0{c_{{}_{\rm o}}}ot x+\psi_0(y_0,x)$ of the near--to--identity symplectomorphism $\Psi_{\rm o}$. Thus, for each $y_0\in\mathbb Rz$, the map $x \in\mathbb T^n\mapsto x_0=x+\partial_{y_0} \psi_0(y_0,x)$
is a diffeomorphism of $\mathbb T^n$ with inverse given by $x_0\in\mathbb T^n\mapsto x=x_0+{\hat \eta}i(y_0,x_0)$ for a suitable (small) real analytic map ${\hat \eta}i$. Therefore, given $(y_0,x_0)\in\mathbb Rz\times \mathbb T^n$, if we take $x= x_0+{\hat \eta}i(y_0,x_0)$ in \varepsilonqu{amicamea} we obtain that there exist $(y,x)\in \mathbb Rzt\times\mathbb T^n$ such that $(y_0,x_0)=\Psi_0(y,x)$, proving \varepsilonqu{surge}. \qed
\noindent{\bf Proof\ } {\bf of \varepsilonqu{surge2}}
The argument is completely analogous: Again, we start by proving that
\beq{amicomeo}
\forall\ k\in {\cal G}^n_{\KO}\,,\ \forall\ (y_0,x)\in \mathbb Ruk\times\mathbb T^n\,,\ \varepsilonxists! \ (y,x_0)\in \mathbb Rukt\times\mathbb T^n\!:\ \ \Psi_k(y,x)=(y_0,x_0)\,.
\varepsilonnd{equation}
Fix $k\in{\cal G}^n_{\KO}$ and define
\beq{zoccoli}
M:={\hat f}ac{r_k}{2^7 {\mathtt K}}
\stackrel{\varepsilonqref{dublino}}= {\hat f}ac{\a}{2^7|k|\, {\mathtt K}}
<
{\hat f}ac{\a}{2^6 |k| {\mathtt K}}=:r<{\hat f}ac{r_k'}4 \stackrel{\varepsilonqref{dublino}}= {\hat f}ac{\a}{8|k|}
\,.
\varepsilonnd{equation}
Fix $(y_0,x)\in\mathbb Ruk\times \mathbb T^n$, and
let $\phi(y):=\pi_y\Psi_k(y,x)$.
By \varepsilonqu{zoccoli},
$$\sup_{D_{2r}(y_0)}|\phi(y)-y|
\le \sup_{D_{r'_k}(y_0)}|\phi(y)-y|
\le |\pi_y\Psi_k-y|_{r_k',s_\varstar}
\stackrel{\varepsilonqu{dunringill}}\le M\,.$$
Thus, by
Lemma~\ref{DAD} we have
$$y_0\in \pi_y\Psi_k\big(\overline{B_r(y_0)}\times \{x\}\big)
\subseteq
\pi_y\Psi_k\big(\mathbb Rukt\times \{x\})\,,
$$ which implies that $\Psi_k(y,x)=(y_0,x_0)$ for some $x_0\in\mathbb T^n$ proving
\varepsilonqref{amicomeo}.
Now, the argument given in the non--resonant case applies also to this case.
\qed
\noindent{\bf Proof\ } {\bf of \varepsilonqu{sonnosonnosonno}} If $y\in \mathbb Rd$ then, since $y\notin \mathbb Rz$, there exists $k\in {\cal G}^n_{\KO}$ such that $|y{c_{{}_{\rm o}}}ot k|<\a$, in which case, since $y\notin \mathbb Ru$, there exists $\varepsilonll\in {\cal G}^n_{\K}\, \backslash\, \mathbb{Z} k$
such that $ |\pko y{c_{{}_{\rm o}}}ot \varepsilonll|\le {\hat f}ac{3 \alpha {\mathtt K}}{|k|}$, hence $y\in \mathbb Rd_{k,\varepsilonll}$ for some $k\in {\cal G}^n_{\KO}$ and $\varepsilonll\in {\cal G}^n_{\K}\, \backslash\, \mathbb{Z} k$. \qed
\noindent
Next, we show that
the measure of $\mathbb Rd$ is proportional to\footnote{A similar result can be found in \cite[p. 3533]{BCnonlin}.} $\a^2$:
\begin{lemma}\label{coverto}\
There exists a constant ${c_{{}_{\rm o}}}r={c_{{}_{\rm o}}}r(n)>1$ such that:
\begin{equation}\label{teheran44}
{\rm\, meas\, } (\mathbb Rd) \le {c_{{}_{\rm o}}}r\, \a^2\ {\mathtt K}^{2n} \,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
\noindent{\bf Proof\ }
Let us estimate the measure of $\mathbb Rd_{k,\varepsilonll}$ in \varepsilonqu{defi}.
Denote by $v\in\mathbb R^n$ the projection of $y$ onto
the plane generated by $k$ and $\varepsilonll$
(recall that, by hypothesis, $k$ and $\varepsilonll$ are not parallel).
Then,
\begin{equation}\label{soldatino}
|v{c_{{}_{\rm o}}}ot k|=|y{c_{{}_{\rm o}}}ot k|<\a\,, \qquad |\proiezione_k^\perp v {c_{{}_{\rm o}}}ot \varepsilonll|
=|\proiezione_k^\perp y {c_{{}_{\rm o}}}ot \varepsilonll|
\le
3\a{\mathtt K} /|k|\,.
\varepsilonnd{equation}
Set
\beq{bacca}
h:=\pko \varepsilonll= \varepsilonll -{\hat f}ac{\varepsilonll{c_{{}_{\rm o}}}ot k}{|k|^2} k\,.
\varepsilonnd{equation}
Then, $v$ decomposes in a unique way as
$v=a k+ b h$
for suitable $a,b\in\mathbb R$.
By \varepsilonqref{soldatino},
\beq{goja}
|a|<{\hat f}ac{\a}{|k|^2}\,,\qquad
|\pko v{c_{{}_{\rm o}}}ot\varepsilonll|
=|bh{c_{{}_{\rm o}}}ot \varepsilonll| \le 3\a{\mathtt K} /|k|\,,
\varepsilonnd{equation}
and, since $ |\varepsilonll|^2 |k|^2-(\varepsilonll{c_{{}_{\rm o}}}ot k)^2$ is a positive integer (recall, that $k$ and $\varepsilonll$ are integer vectors not parallel),
$$
|h{c_{{}_{\rm o}}}ot \varepsilonll|
\varepsilonqby{bacca}
{\hat f}ac{ |\varepsilonll|^2 |k|^2-(\varepsilonll{c_{{}_{\rm o}}}ot k)^2 }{|k|^2}
\ge {\hat f}ac1{|k|^2}\,.
$$
Hence,
\beq{velazquez}
|b|\le 3 \alpha {\mathtt K} |k| \,.
\varepsilonnd{equation}
Then, write $y\in \mathbb Rd_{k,\varepsilonll}$ as $y=v+v^\perp$ with
$v^\perp$ in the orthogonal
complement of the plane generated by $k$ and $\varepsilonll$. Since $|v^\perp |\le |y|< 1$ and $v$ lies in the plane spanned by $k$ and $\varepsilonll$ inside a rectangle with sides of length $2\a/|k|^2$ and $6 \alpha {\mathtt K} |k|$ (compare \varepsilonqu{goja} and \varepsilonqu{velazquez})
we find
\[ \textstyle
{\rm\, meas\, }(\mathbb Rd_{k,\varepsilonll})\le {\hat f}ac{2\a}{|k|^2}\, (6 \alpha {\mathtt K} |k|)\ 2^{n-2}=3{c_{{}_{\rm o}}}ot 2^n \, \a^2 {\hat f}ac{{\mathtt K}}{|k|}\,,\quad \forall
\left\{\begin{array}{l}
k\in{\cal G}^n_{\KO}\,,\\
\varepsilonll\in \mathcal G^n_{{\mathtt K}}\, \backslash\, \mathbb{Z} k\,.
\varepsilonnd{array}\right.
\]
Since $\sum_{k\in{\cal G}^n_{\KO}}|k|^{-1}\le c\, {\mathtt K}O^{n-1}$ for a suitable $c=c(n)$,
and ${\mathtt K}O\le {\mathtt K}/6$, \varepsilonqu{teheran44} follows. \qed
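The bound \varepsilonqu{teheran44} may be compared with a direct Monte Carlo estimate, as in the following illustrative sketch (it reuses the membership tests sketched after \varepsilonqu{sonnosonnoBIS}; the sample size is arbitrary).
\begin{verbatim}
# Illustrative Monte Carlo estimate of meas(R^2) (a sketch; reuses generators,
# in_R0 and in_R1k from the sketch after (neva)/(sonnosonnoBIS)).
import math
import numpy as np

def meas_R2_estimate(alpha, K, K0, n=2, samples=20000, seed=3):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(samples):
        y = rng.normal(size=n)
        y *= rng.uniform() ** (1.0 / n) / np.linalg.norm(y)   # uniform in B_1(0)
        if in_R0(y, alpha, K0, n):
            continue
        if any(in_R1k(y, k, alpha, K, n) for k in generators(n, K0)):
            continue
        hits += 1                                             # y lies in R^2
    ball = math.pi ** (n / 2) / math.gamma(1 + n / 2)         # volume of B_1(0)
    return ball * hits / samples
\end{verbatim}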
\rem
In view of \varepsilonqu{teheran44} and \varepsilonqref{dublino}, we have
\begin{equation}\label{teheran4}
{\rm\, meas\, } (\mathbb Rd)
\le
{c_{{}_{\rm o}}}r \, \varepsilon\, {\mathtt K}^\gamma
\,,\qquad
\gamma:=11n+4
\,.
\varepsilonnd{equation}
Thus, if ${\rm V}_{\! n}=\pi^{{\hat f}ac{n}2}/\G(1+{\hat f}ac{n}2)$ denotes the volume of the Euclidean unit ball in $\mathbb R^n$, we have that
\beq{piccoletto}
\varepsilon<{\hat f}ac{{\rm V}_{\! n}}{{c_{{}_{\rm o}}}r {\mathtt K}^\gamma}\quad\implies \quad {\rm\, meas\, } (\mathbb Rd) <{\rm\, meas\, } \DD\,.
\varepsilonnd{equation}
\varepsilonrem
\subsection{Normal Form Theorem}
In the normal form around simple resonances the `averaged Hamiltonian' in \varepsilonqu{hamk} (i.e., the Hamiltonian obtained disregarding the exponentially small term $f^k$) depends on angles through the linear combination $k{c_{{}_{\rm o}}}ot x$, which,
since $k\in\gen$, defines {\sl a new well--defined angle $\ttx_1\in {\mathbb T} $}. This fact calls for a linear symplectic change of variables:
\begin{lemma}\label{Fiu}
Let the hypotheses of Lemma~\ref{coslike} hold.
\\
{\rm (i)} For any $k\in {\cal G}^n_{\KO}$ there exists a matrix
$\hAA\in\mathbb{Z}^{(n-1)\times n}$
such that\footnote{${\rm SL}(n,\mathbb{Z})$ denotes the group of $n\times n$ matrices with entries in $ {\mathbb Z} $ and determinant 1;
$|M|_{{}_\infty}$, with $M$ matrix (or vector), denotes the maximum norm $\max_{ij}|M_{ij}|$ (or $\max_i |M_i|$).}
\beq{scimmia}
\begin{array}l
\displaystyle {\mathcal A}A:=\binom{k}{\hAA}
=\binom{k_1{c_{{}_{\rm o}}}ots k_n}{\hAA}\in\ \ {\rm SL}(n,\mathbb{Z})\,,\\
|\hAA|_{{}_\infty}\leq |k|_{{}_\infty}\,,\ \
|{\mathcal A}A|_{{}_\infty}=|k|_{{}_\infty}\,,\ \
|{\mathcal A}A^{-1}|_{{}_\infty}\leq
(n-1)^{{\hat f}ac{n-1}2} |k|_{{}_\infty}^{n-1}\,.\phantom{\displaystyle\int}
\varepsilonnd{array}
\varepsilonnd{equation}
{\rm (ii)} Let $\Fio$ be the linear, symplectic map on $\mathbb R^n\times {\mathbb T} ^n$ onto itself defined by
\begin{equation}\label{talktothewind}
\Fio: (\tty,\ttx) \mapsto (y,x)=
({\mathcal A}A^T\tty, {\mathcal A}A^{-1} \ttx) \,.
\varepsilonnd{equation}
Then,
\beq{azz}
\ttx_1=k{c_{{}_{\rm o}}}ot x\,,\qquad\quad y=\tty_1 k+\hAA^T \hat \tty\,,\phantom{AAAAAAA} \big[\hat \tty:=(\tty_2,...,\tty_{n})\big]\,.
\varepsilonnd{equation}
Furthermore, letting\footnote{$\mathbb Rukt$ is defined in \varepsilonqu{rocket}; recall, also, \varepsilonqu{dublino}.}
\begin{equation}\label{formentera}\textstyle
\DDD^k:= {\mathcal A}A^{-T}\mathbb Rukt\,,\quad
\left\{\begin{array}l
\tilde r_k:={\hat f}ac{{r_k}}{ \itcu |k|}\\
\tilde s_k:=
{\hat f}ac{s}{\itcu |k|^{n-1}}
\varepsilonnd{array}\right.
\,,\quad \itcu:=5n(n-1)^{{\hat f}ac{n-1}2}\,,
\varepsilonnd{equation}
with ${\mathcal A}A$ as in {\rm (i)}, we find
\begin{equation}\label{fare2}
\Fio: \DDD^k_{\tilde r_k}\times \mathbb T^n_{\tilde s_k}
\to \mathbb Rukt_{r_k'/2}\times \mathbb T^n_{s_\varstar/2}\,, \qquad
\Fio(\DDD^k\times {\mathbb T} ^n)=\mathbb Rukt\times {\mathbb T} ^n\,.
\varepsilonnd{equation}
{\rm (iii)} $\hamk$ in \varepsilonqu{hamk}, in the symplectic variables $(\tty,\ttx)=\big((\tty_1,\hat\tty),\ttx\big)$, takes the form:
\beq{dopomedia}
{\mathcal H} _k(\tty,\ttx):=\hamk\circ \Fio(\tty,\ttx)=\hamsec_k(\tty,\ttx_1)+ \varepsilon
\bar f^k(\tty,\ttx) \,,\quad (\tty,\ttx) \in \DDD^k_{\tilde r_k}\times \mathbb T^n_{\tilde s_k}\,,
\varepsilonnd{equation}
where the `secular Hamiltonian'
\beq{hamsec}
\hamsec_k(\tty,\ttx_1):= {\hat f}ac12 |{\mathcal A}A^T\tty|^2+\varepsilon g^k_{\rm o}({\mathcal A}A^T\tty)+
\varepsilon g^k({\mathcal A}A^T\tty,\ttx_1)\,,\quad
\bar f^k(\tty,\ttx):=f^k ({\mathcal A}A^T\tty,{\mathcal A}A^{-1} \ttx)
\varepsilonnd{equation}
is a real analytic function for $\tty\in \DDD^k_{\tilde r_k}$ and\footnote{Recall \varepsilonqu{dublino}.} $\ttx_1\in {\mathbb T} _{s'_k}$.
\varepsilonnd{lemma}
\rem\label{ariazz} In Lemma~\ref{Fiu} above (and often in what follows), to simplify notation we may omit the dependence upon $k$; of course ${\mathcal A}A$, $\hAA$ and $\Fio$ {\sl do depend upon the simple resonance label $k\in {\cal G}^n_{\KO}$}.
\erem
\noindent{\bf Proof\ } {\bf of Lemma \ref{Fiu}} (i) From B\'ezout's lemma it follows that\footnote{See Appendix~A of \cite[p. 3564]{BCnonlin} for a detailed proof.}:

\noindent
{\sl
given $k\in {\mathbb Z} ^n$, $k\neq 0$, there exists a matrix ${\mathcal A}A=({\mathcal A}A_{ij})_{1\le i,j\le n}$ with integer entries such that ${\mathcal A}A_{nj}=k_j$ for all $1\le j\le n$, $\det{\mathcal A}A={\rm gcd}(k_1,...,k_n)$, and
$|{\mathcal A}A|_{{}_\infty}=|k|_{{}_\infty}$.}

\noindent
Hence,
since $k\in \gen$, ${\rm gcd}(k_1,...,k_n)=1$, and \equ{scimmia} follows\footnote{Notice that the bound on $|{\mathcal A}A^{-1}|_{{}_\infty}$ follows from the D'Alembert expansion of determinants, observing that
for any $m\times m$ matrix ${\rm M}$ one has
$|\det {\rm M}|\leq m^{m/2} |{\rm M}|_{{}_\infty}^m$.}

\noindent
(ii) $\Fio$ is symplectic since it is generated by the generating function $\tty\cdot {\mathcal A}A x$. \\
The relations in \equ{azz} follow at once from the definition of $\Fio$.
\\
Let us prove \equ{fare2}: $\tty\in\DDD^k_{\tilde r_k}$ if and only if $\tty=\tty_0+z$ with $\tty_0\in\DDD^k$ and $|z|<\tilde r_k$. Thus,
$$|{\mathcal A}A^Tz|\stackrel{\equ{scimmia}}\le n |k| |z|<n |k| \tilde r_k\stackrel{ \equ{formentera}}< \frac{r_k}4
\stackrel{ \equ{dublino}}=\frac{r'_k}2\,.
$$
Since, by definition of $\DDD^k$, ${\mathcal A}A^T \tty_0\in \mathbb Rukt$, we have that
${\mathcal A}A^T\tty\in \mathbb Rukt_{r_k'/2}$.
\\
Let now $\ttx$ belong to $\mathbb T^n_{\tilde s_k}$. Then, for any $1\le j\le n$, recalling the definitions of $s_\varstar$ and $s_\varstar'$ in \equ{dublino}, we find
$$
\Big|\, {\rm Im}\, ({\mathcal A}A^{-1} \ttx)_j\Big|= \Big| \sum_{i=1}^n ({\mathcal A}A^{-1})_{ji} \, {\rm Im}\, \ttx_i\Big|\stackrel{\equ{scimmia}}{<}
n(n-1)^{\frac{n-1}2} |k|^{n-1} \tilde s_k \stackrel{\equ{formentera}}{\le }\frac{s_\varstar}{2}< s_\varstar'\,.
$$
Thus, ${\mathcal A}A^{-1} \ttx$ belongs to $\mathbb T^n_{s_\varstar'}$, and \equ{fare2} follows.

\noindent
(iii) Equations \equ{dopomedia}--\equ{hamsec} follow immediately from the definition of the symplectic map $\Fio$ in \equ{talktothewind} and from \equ{azz}. The statement on the angle--analyticity domain of $\hamsec_k$ follows from part (b) of Lemma~\ref{averaging}.
\qed
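\smallskip\noindent
The construction in part (i) is easy to experiment with numerically. The following Python sketch is only an illustration (it is {\it not} the inductive construction of \cite{BCnonlin}: it treats only the case $n=2$ and does not enforce the bound $|{\mathcal A}A|_{{}_\infty}=|k|_{{}_\infty}$); it builds a matrix in ${\rm SL}(2,\mathbb{Z})$ with first row $k$ via the extended Euclidean algorithm and also checks numerically that the Jacobian of the map $\Fio$ of part (ii) is a symplectic matrix.
{\small
\begin{verbatim}
# Illustration only: n = 2, k = (3, 7) coprime.
import numpy as np

def ext_gcd(a, b):
    # return (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

k1, k2 = 3, 7
g, x, y = ext_gcd(k1, k2)
assert g == 1                       # gcd(k1, k2) = 1
A = np.array([[k1, k2], [-y, x]])   # det A = x*k1 + y*k2 = 1
assert round(np.linalg.det(A)) == 1

# Jacobian of (ty, tx) -> (A^T ty, A^{-1} tx) and symplecticity check
n = 2
M = np.block([[A.T, np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(A)]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(M.T @ J @ M, J)
\end{verbatim}}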

\noindent
We summarize the above lemmata in the following
\begin{theorem}[Normal Form Theorem]
\label{normalform}
Let $\ham$ be as in \equ{ham} with $f\in{\mathbb B}^n_{s}$ satisfying \eqref{P1+} with $\mathtt N$ as in \equ{enne}, and let \equ{dublino} hold.
There exists a constant\footnote{$\bco$ is defined in Lemma~\ref{averaging}.} $\bfco=\bfco(n,s,\d)\ge \max\{\mathtt N\,,\,\bco\}$ such that,
if $\KO\ge \bfco$, $k\in{\cal G}^n_{\KO}$, and $\DDD^k$, $\tilde r_k$, $\tilde s_k$ are as in \equ{formentera}, then
there exist
real analytic symplectic maps
\beq{trota2}
\Psi_{\rm o}: \mathbb Rz_{r_{\rm o}'}\times \mathbb T^n_{s_{\rm o}'} \to
\mathbb Rz_{r_{\rm o}}\times \mathbb T^n_{s_{\rm o}}
\,,
\qquad
\Psi^k:
\DDD^k_{\tilde r_k}\times \mathbb T^n_{\tilde s_k}
\to
\mathbb Ruk_{r_k} \times \mathbb T^n_{s_\varstar}
\end{equation}
having the following properties.

\noindent
{\rm (i)}
$
\hamo(y,x) := \big(\ham\circ\Psi_{\rm o}\big)(y,x)
=\frac{|y|^2}2+\varepsilon\big( g^{\rm o}(y) +
f^{\rm o}(y,x)\big)$,
with $g^{\rm o}$ and $f^{\rm o}$ satisfying
\equ{552} and $\langle f^{\rm o}\rangle=0
$.

\noindent
{\rm (ii)}
\beq{colosseum3cippa}
{\mathcal H} _k(\tty,\ttx):=\ham\circ \Psi^k(\tty,\ttx)=\hamsec_k(\tty,\ttx_1)+ \varepsilon
\bar f^k(\tty,\ttx) \,,\quad (\tty,\ttx) \in
\DDD^k_{\tilde r_k}\times \mathbb T^n_{\tilde s_k}\,,
\varepsilonnd{equation}
where
\beq{hamseccippa}
\hamsec_k(\tty,\ttx_1):= {\hat f}ac12 |{\mathcal A}A^T\tty|^2+\varepsilon \mathtt g^k_{\rm o}(\tty)+
\varepsilon \mathtt g^k(\tty,\ttx_1)
\varepsilonnd{equation}
is a real analytic function for $\tty\in \DDD^k_{\tilde r_k}$ and $\ttx_1\in {\mathbb T} _{s'_k}$.
In particular
$\mathtt g^k(y,{c_{{}_{\rm o}}}ot)\in\hol_{s'_k}^1$
for every $y\in \DDD^k_{\tilde r_k}$. Furthermore, the following estimates hold:
\begin{equation}\label{cristinacippa}
|\mathtt g^k_{\rm o}|_{\tilde r_k}
\leq \vartheta o={\hat f}ac{1}{{\mathtt K}^{6n+1}}\,,\qquad
\,\thickvert\!\!\thickvert\, \mathtt g^k-\pi_{\!{}_{\integer k}} f\,\thickvert\!\!\thickvert\, _{{\tilde r_k},s'_k}
\leq \vartheta o\,,\qquad
\,\thickvert\!\!\thickvert\, \bar f^{k} \,\thickvert\!\!\thickvert\, _{{\tilde r_k},{\tilde s_k}} \le
e^{- {\mathtt K} s/3}\,.
\varepsilonnd{equation}
{\rm (iii)} If $ \noruno{k}\geq \mathtt N$, there exists $\sa_k\in[0,2\pi)$ such that
\begin{equation}\label{hamkccippa}
{\mathcal H} _k
=
{\hat f}ac12 |{\mathcal A}A^T\tty|^2+\varepsilon \mathtt g^k_{\rm o}(\tty)+
2|f_k|\varepsilon\
\big[\cos(\ttx_1 +\sa_k)+
F^k_{\! \varstar}(\ttx_1)+
\mathtt g^k_{\! \varstar}(\tty,\ttx_1)+
\mathtt f^k_{\! \varstar} (\tty,\ttx)
\big]\,,
\varepsilonnd{equation}
where $F^k_{\! \varstar}$ is as in Proposition~\ref{pollaio} and satisfies
$F^k_{\! \varstar}\in\hol_1^1$ and
$
\modulo F^k_{\! \varstar} \modulo_1\leq 2^{-40}$.\\
Moreover,
$\mathtt g^k_{\! \varstar}(y,{c_{{}_{\rm o}}}ot )\in\hol_1^1$
(for every $y\in \DDD^k_{\tilde r_k}$), $\pi_{\!{}_{\integer k}}\mathtt f^k_{\! \varstar}=0$, and one has
\beq{martinaTEcippa}\textstyle
\,\thickvert\!\!\thickvert\, \mathtt g^k_{\! \varstar}\,\thickvert\!\!\thickvert\,_{\tilde r_k,1}\le
\vartheta ={\hat f}ac{1}{{\mathtt K}^{5n}}
\,,\quad\qquad
\,\thickvert\!\!\thickvert\, \mathtt f^k_{\! \varstar} \,\thickvert\!\!\thickvert\, _{\tilde r_k,\tilde s_k}
\leq
e^{-{\mathtt K} s/7}\,.
\varepsilonnd{equation}
\varepsilonnd{theorem}
\noindent{\bf Proof\ }
The first relation in \equ{trota2} is \equ{trota}. Define
\beq{pippala}
\Psi^k:= \Psi_k\circ \Fio\,.
\end{equation}
Then, since $s_\varstar/2<s_\varstar'$ (compare \equ{dublino}), by
\equ{fare2} and \equ{rocket2} we get the second relation in \equ{trota2}.

\noindent
{\rm (i)}
follows from point {\rm (a)} of Lemma~\ref{averaging}.

\noindent
{\rm (ii)}
\equ{colosseum3cippa},
\equ{hamseccippa} and \equ{cristinacippa} follow from, respectively,
\equ{dopomedia},
\equ{hamsec}, \equ{cristina}
and point (ii) of Lemma~\ref{Fiu}, setting
\beq{ggg}
\mathtt g^k_{\rm o}(\tty):=
g^k_{\rm o}({\mathcal A}A^T\tty)\,,\qquad
\mathtt g^k(\tty,\ttx_1)
:= g^k({\mathcal A}A^T\tty,\ttx_1)\,.
\end{equation}
{\rm (iii)}
follows from Proposition~\ref{pollaio} and Lemma~\ref{coslike}.
In particular,
\equ{hamkccippa} follows from \equ{hamkc}.
Furthermore, setting
\begin{equation}\label{catecippa}
\mathtt g^k_{\! \varstar}:=\frac{1}{2|f_k|}\, \big(\mathtt g^k- \pi_{\!{}_{\integer k}} f\big)\,,
\qquad
\mathtt f^k_{\! \varstar} :=\frac{1}{2|f_k|} \bar f^k\,,
\end{equation}
and noting that
$\mathtt g^k_{\! \varstar}(\tty,\ttx_1)
= g^k_{\! \varstar}({\mathcal A}A^T\tty,\ttx_1)$ and that, by \equ{hamsec},
$\mathtt f^k_{\! \varstar}(\tty,\ttx)=f^k_{\! \varstar}({\mathcal A}A^T\tty,{\mathcal A}A^{-1} \ttx)$, we see that
\equ{martinaTEcippa} follows from \equ{martinaTE} and \equ{fare2}.
\qed
\section{Generic Standard Form at simple resonances}
In this final section we show that {\sl the secular Hamiltonians $\hamsec_k$ in \equ{hamsec} of Theorem~\ref{normalform}
can be symplectically put into a suitable standard form, uniformly in $k\in{\cal G}^n_{\KO}$}.

\noindent
The precise definition of `standard form' is taken from \cite{BCaa23}, where the analytic properties
of action--angle variables of such Hamiltonian systems are discussed.
\begin{definition}\label{morso}
Let $\hat D \subseteq \mathbb R^{n-1}$ be a bounded domain, ${\mathtt R}>0$ and $D:= (-{\mathtt R} ,{\mathtt R} ) \times\hat D
$. We say that the real analytic Hamiltonian ${\ham}_{\flat}$ is in Generic Standard Form with respect to the symplectic
variables $(p_1,q_1)\in (-{\mathtt R} ,{\mathtt R} )\times {\mathbb T} $ and `external actions'
$$\hat p=(p_2,...,p_n)\in \hat D$$ if ${\ham}_{\flat}$ has the form
\beq{pasqua}
{\ham}_{\flat}(p,q_1)=
\big(1+ \cin(p,q_1)\big) p_1^2
+\Gm(\hat p, q_1)
\,,
\end{equation}
where:
\begin{itemize}
\item[\bolla] $\cin$ and $ \Gm$ are real analytic functions defined on, respectively, $D_{\mathtt r}\times\mathbb T_{\mathtt s}$ and $\hat D_{\mathtt r}\times \mathbb T_{\mathtt s}$ for some $0<{\mathtt r}\leq{\mathtt R}$ and ${\mathtt s}>0$;
\item[\bolla]
$\Gm$ has zero average and there exists a
function $\GO$ (the `reference potential') depending only on $q_1$ such that, for some\footnote{Recall Definition~\ref{buda}.} $\morse>0$,
\begin{equation}\label{A2bis}
\GO\ \ \mbox{is} \ \
\morse {\rm \text{--}Morse}\,,\qquad \langle \GO\rangle=0\,;
\end{equation}
\item[\bolla]
the following estimates hold:
\beq{cimabue}
\left\{\begin{array}{l} \displaystyle \sup_{ {\mathbb T} ^1_{\mathtt s}}|\GO|\le \upepsilon\,,\\
\displaystyle \sup_{\hat D_{\mathtt r}\times {\mathbb T} ^1_{\mathtt s}}|\Gm-\GO| \leq
\upepsilon
\lalla
\,,\quad{\rm for\ some}\quad 0<\upepsilon\le {\mathtt r}^2/2^{16}
\,,\ \ 0\le \lalla<1\,,
\\
\displaystyle \sup_{D_{\mathtt r}\times {\mathbb T} ^1_{\mathtt s}}|\cin| \leq
\lalla\,.
\end{array}\right.
\end{equation}
\end{itemize}
\end{definition}
We shall call $(\hat D,{\mathtt R},{\mathtt r},{\mathtt s},\morse,\upepsilon,\lalla)$ {\sl the analyticity characteristics of ${\ham}_{\flat}$
with respect to the unperturbed potential $\GO$}.
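\smallskip\noindent
A minimal illustrative example (not taken from \cite{BCaa23}, and assuming that Definition~\ref{buda} is the usual quantitative Morse condition) is the pendulum--like Hamiltonian
${\ham}_{\flat}(p,q_1)=p_1^2+\varepsilon\cos q_1$,
which has the form \equ{pasqua} with $\cin\equiv 0$ and $\Gm=\GO=\varepsilon\cos q_1$: the reference potential has zero average, its two critical points are nondegenerate with critical values $\pm\varepsilon$, so one may take $\morse$ of order $\varepsilon$, while the bounds in \equ{cimabue} hold with $\upepsilon=\varepsilon\cosh {\mathtt s}$ and $\lalla=0$, provided $\varepsilon\cosh {\mathtt s}\le {\mathtt r}^2/2^{16}$.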
\rem\label{trivia}
If ${\ham}_{\flat}$ is in Generic Standard Form, then the parameters $\morse$ and $\upepsilon$ satisfy the relation\footnote{By \equ{cimabue}, $\morse \le |\GO(\sa_i)- \GO(\sa_j)|\le 2 \max_{ {\mathbb T} } |\GO|\le 2\upepsilon$.}
\beq{sucamorse}\textstyle
\frac\upepsilon\morse\ge \frac12\,.
\end{equation}
Furthermore, one can always
fix $\varpi\geq 4$ such that:
\begin{equation}\label{alce}\textstyle
\frac{1}{\varpi}\leq {\mathtt s}\leq 1\,,\qquad
1\leq
\frac{{\mathtt R}}{{\mathtt r}}\leq \varpi\,,\qquad
\frac{1}{2}\leq
\frac{\upepsilon}{\morse }
\leq \varpi \,.
\end{equation}
Such a parameter $\varpi$ governs the main scaling properties of these Hamiltonians.
\erem
\subsection{Main theorem}

\noindent
In what follows we shall often use the following notation:
if $w$ is a vector with $n$ or $2n$ components, $\hat w=(w)^{\widehat{}}$ denotes the vector of its last $(n-1)$ components;
if $w$ is a vector with $2n$ components, ${\hat \eta}eck w=(w)^{{\!\!\widecheck{\phantom{a}}}}$ denotes the vector of its first $n+1$ components.
Explicitly:
\beq{checheche}
w=(y,x)=\big((y_1,...,y_n),(x_1,...x_n)\big)\quad \Longrightarrow\quad
\left\{
\begin{array}{l}
\hat w=(w)^{\widehat{}}=(x_2,...,x_n)=\hat x\,,\\
\hat y=\,(y)^{\widehat{}}=(y_2,...,y_n)\,,\\
{\hat \eta}eck w=(w)^{{\!\!\widecheck{\phantom{a}}}}=(y,x_1)\,,\\
w=({\hat \eta}eck w,\hat w)\,.
\end{array}
\right. \end{equation}
\dfn{dadaumpa}
Given a domain $\hat {\rm D}\subseteq \mathbb R^{n-1}$,
we denote by
$\Gdag$ the
abelian group of
symplectic diffeomorphisms $\Psi_{\! \ta}$
of $(\mathbb R\times\hat {\rm D})\times \mathbb R^n$
given by
\[
(p,q)\in(\mathbb R\times\hat {\rm D})\times \mathbb R^n\stackrel{\Psi_{\! \ta}}\mapsto (P,Q)=
(p_1+\ta(\hat p),\hat p,q_1,\hat q-q_1\partial_{\hat p}
\ta(\hat p))\in {\mathbb R} ^{2n}\,,
\]
with $\ta:\hat {\rm D}\to \mathbb R$ smooth.
\edfn
\rem
\label{alice}
The group properties of $\Gdag$ are trivial:
$$
{\rm id}_{\Gdag}=\Psi_{\! 0}\,,\qquad
\Psi_{\! \ta}^{-1}=\Psi_{\! -\ta }\,,\qquad
\Psi_{\! \ta}\circ\Psi_{\! \ta'}=\Psi_{\! \ta+\ta'}\,.
$$
Notice, however, that, unless $\partial_{\hat p} \ta\in {\mathbb Z} ^{n-1}$,
the maps $\Psi_{\! \ta}\in \Gdag$ {\sl do not induce well defined angle maps}
$q\in {\mathbb T} ^n\mapsto (q_1,\hat q-q_1\partial_{\hat p}
\ta(\hat p))\in {\mathbb T} ^n$.
\erem
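\smallskip\noindent
The symplectic character of the maps $\Psi_{\!\ta}$ can also be checked directly. The following Python sketch is an illustration only (the function $\ta$ below is an arbitrary quadratic, not one of the functions used later): it verifies $D\Psi_{\!\ta}^{\,T} J\, D\Psi_{\!\ta}=J$ at a random point by finite differences, for $n=3$.
{\small
\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(0)
S = rng.normal(size=(n - 1, n - 1)); S = S + S.T   # Hessian of tau
c = rng.normal(size=n - 1)

def tau(ph):                      # a smooth function of the last n-1 actions
    return c @ ph + 0.5 * ph @ S @ ph

def grad_tau(ph):
    return c + S @ ph

def psi(z):                       # z = (p_1,...,p_n, q_1,...,q_n)
    p, q = z[:n], z[n:]
    ph = p[1:]
    P = np.concatenate(([p[0] + tau(ph)], ph))
    Q = np.concatenate(([q[0]], q[1:] - q[0] * grad_tau(ph)))
    return np.concatenate((P, Q))

z0 = rng.normal(size=2 * n)
h = 1e-6
M = np.column_stack([(psi(z0 + h * e) - psi(z0 - h * e)) / (2 * h)
                     for e in np.eye(2 * n)])       # Jacobian of Psi_tau
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(M.T @ J @ M, J, atol=1e-6)       # symplectic
\end{verbatim}}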
Now, let $f\in{\mathbb G}^n_{s}$ satisfy\footnote{Recall that by Lemma~\ref{telaviv} such $\d$ and $\b$ always exist.} \eqref{P1+} and \equ{P2+} for some $0<\d\le 1$ and $\b>0$ with $\mathtt N$ defined in \equ{enne}, let $k\in{\cal G}^n_{\KO}$, recall \equ{dublino} and define the following parameters\footnote{Here and in what follows we shall not always indicate explicitly the dependence upon $k$.
Recall the definitions of $\itcu$, $\hAA$ and $\ttcs$ in, respectively, \equ{formentera},
Lemma~\ref{Fiu} and \equ{bollettino1}.}
\beq{cerbiatta}
\begin{array}l
{\mathtt R}={\a}/{|k|^2}={\sqrt\varepsilon {\mathtt K}^\nu}/{|k|^2}\,,\quad \itcd=4\, n^{{\hat f}ac32} \itcu\,,
\quad {\mathtt r}= {{\mathtt R}}/{\itcd}\,,
\quad\varepsilon_k={\hat f}ac{2\varepsilon}{|k|^2}\,,
\phantom{\displaystyle\int}
\\
\hat D =
\big\{ \hat\act\in\mathbb R^{n-1}: \
|\proiezione_k^\perp \hAA^T \hat\act|<1\,,
\ \displaystyle
\min_{{\mathtt s}pra{\varepsilonll\in {\cal G}^n_{\K}}{\varepsilonll \notin \mathbb{Z} k}}
\big| \big(\proiezione_k^\perp \hAA^T \hat\act\big){c_{{}_{\rm o}}}ot \varepsilonll\big|
\geq {\textstyle {\hat f}ac{3\a{\mathtt K}}{|k|}} \big\}\,,\ D=(-{\mathtt R},{\mathtt R})\times \hat D\,,
\\
\morse=\casitwo{\varepsilon_k \b,}{\noruno{k}<\mathtt N}{\varepsilon_k |f_k|,}{\noruno{k}\ge \mathtt N}\,,\,\qquad
{\hat \eta}k=\casitwo
{1 \,,}{\noruno{k}<\mathtt N}
{ |f_k|\,,}{\noruno{k}\ge \mathtt N}\,,
\quad \upepsilon=\textstyle \ttcs \varepsilon_k\, {\hat \eta}k\,,
\\
{\mathtt s}=\casitwo{\min\{{\hat f}ac{s}2,1\}\,,}{\noruno{k}<\mathtt N}{1\,,}{\noruno{k}\ge \mathtt N}\,,\quad
\textstyle
{\hat \eta}s:=\casitwo{s'_k\,,}{\noruno{k}<\mathtt N\,,}{1\,,}{\noruno{k}\ge \mathtt N}\,,
\quad \displaystyle
\lalla={\hat f}ac{1}{{\mathtt K}^{5n}}\,.
\varepsilonnd{array}
\varepsilonnd{equation}
\begin{theorem}[Generic Standard Form at simple resonances]\label{sivori}\ \\
Let $\ham$ be as in \equ{ham} with $f\in{\mathbb G}^n_{s}$ satisfying \eqref{P1+} and \equ{P2+} for some $0<\d\le 1$ and $\b>0$ with $\mathtt N$ defined in \equ{enne}.
Assume that\footnote{$\bfco$ is defined in Theorem~\ref{normalform}.} $\KO\ge \max\{\itcd,\bfco\}$. Then, with the definitions given in \equ{cerbiatta}, the following holds for all $k\in{\cal G}^n_{\KO}$.
{
{\noindent}ndent}
{\rm (i)}
There exists a real analytic
symplectic transformation
\beq{diamond}
\Phi_{\,\rm diam\,}ond:(\ttp,\ttq)\ \in D\times \mathbb R^n \to
(\tty,\ttx)=\Phi_{\,\rm diam\,}ond(\ttp,\ttq)\in \mathbb R^{2n}\,,
\varepsilonnd{equation}
such that: $\Phi_{\,\rm diam\,}ond$ fixes $\hat\ttp$ and\footnote{I.e., in \equ{diamond} it is $\hat\tty=\hat\ttp$, $\ttx_1=\ttq_1$.} $\ttq_1$;
for every $\hat\ttp\in\hat D$ the map $(\ttp_1,\ttq_1)\mapsto (\tty_1,\ttx_1)$ is symplectic;
the $(n+1)$--dimensional map\footnote{Recall the notation in \varepsilonqu{checheche}.} ${\hat \eta}eck\Phi_{\,\rm diam\,}ond$ depends only on the first $n+1$ coordinates $(\ttp,\ttq_1)$, is $2\pi$--periodic in $\ttq_1$
and, if $\DDD^k= {\mathcal A}A^{-T}\mathbb Ruk$ and $\hamsec_k$ are as in Theorem~\ref{normalform}, one has\footnote{
$r_k$ and $s'_k$ are defined in \varepsilonqu{dublino}, $\tilde r_k$ in \varepsilonqu{formentera}.}
\beq{tikitaka}
\begin{array}l
{\hat \eta}eck\Phi_{\,\rm diam\,}ond:
D_{{\hat \eta}r}\times {\mathbb T} _{\hat \eta}s\ \, \to
\DDD^k_{\tilde r_k}\times \mathbb T_{\hat \eta}s\,, {\phantom{\displaystyle \int}}
\\
\hamsec_k\circ {\hat \eta}eck\Phi_{\,\rm diam\,}ond(\ttp,\ttq)=:
\textstyle{{\hat f}ac{|k|^2}{2}}( {\ham}_{{}_k}(\ttp,\ttq_1)+
\hzk(\hat\ttp))\,, {\phantom{\displaystyle \int}}
\\
\sup_{\hat\ttp\in \hat D_{2{\mathtt r}}}
\big|\textstyle\hzk(\hat\ttp)-
\htk(\hat\ttp)
\big|
\leq
\textstyle 6\, \varepsilon_k\lalla\,,\qquad\ \htk(\hat\ttp):={\textstyle {\hat f}ac{1}{|k|^2}}
| \proiezione^\perp_k \hAA^T \hat\ttp|^2\,.
\varepsilonnd{array}
\varepsilonnd{equation}
{\rm (ii)}
${\ham}_{{}_k}$ in \equ{tikitaka} is in Generic Standard Form according to Definition~\ref{morso}:
\[
{\ham}_{{}_k}(\ttp,\ttq_1)=\big(1+\cins(\ttp,\ttq_1)\big)\, \ttp_1^2 + \Gf(\hat\ttp,\ttq_1)\,,
\]
having reference potential
\beq{paranoia}
\GO=\bGf:= \varepsilon_k\, \pi_{\!{}_{\integer k}} f\,,
\end{equation}
analyticity characteristics given in \equ{cerbiatta}, and \equ{alce} holding with $\varpi=\upkappa$, where
\beq{kappa}\textstyle
\upkappa=\upkappa(n,s,\b):=\max\big\{\itcd\,, 4\ttcs\,, \ttcs/\b
\big\}\,.
\end{equation}
{\rm (iii)} Finally, $\Phi_{\,\rm diam\,}ond=\Fiuno\circ\Fidue\circ\Fitre$, where\footnote{Recall Definition~\ref{dadaumpa}.}: $\Fiuno:=\Psi_{\!\giuno}\in\Gdag$ with
$\giuno(\hat \ttp):=-\textstyle \frac1{|k|^2}{(\hAA k)\cdot \hat \ttp}$; $\Fitre:=\Psi_{\!\gitre}\in\Gdag$ for a suitable real analytic function $\gitre(\hat\ttp)$ satisfying
\[ \textstyle
|\gitre|_{4 {\hat \eta}r}< \frac{\varepsilon_k{\hat \eta}k}{{\hat \eta}r}\lalla\,,
\]
and $\Fidue(\ttp,\ttq)=(\ttp_1+\upeta_{{}_2},\hat \ttp, \ttq_1,\hat \ttq+\upchi_{{}_2})$ for suitable real analytic functions $\upeta_{{}_2}=\upeta_{{}_2}(\hat \ttp,\ttq_1)$ and
$\upchi_{{}_2}=\upchi_{{}_2}(\hat \ttp,\ttq_1)$
satisfying
\[ \textstyle
|\upeta_{{}_2}|_{4 {\hat \eta}r,{\hat \eta}s}< \frac{\varepsilon_k{\hat \eta}k}{{\hat \eta}r}\lalla\,,\qquad |\upchi_{{}_2}|_{2 {\hat \eta}r,{\hat \eta}s}< \frac{4\varepsilon_k{\hat \eta}k}{{\hat \eta}r^2}\,\lalla
\,.
\]
\end{theorem}
\rem\label{rampulla} (i) One of the main points of the above theorem is that the parameter $\upkappa$ in \equ{kappa}
{\sl does not depend on $k$}. Incidentally, we point out that $\upkappa$ also depends (indirectly) on $\d$, since $\d$ appears in the definition of $\mathtt N$ and $\b$ is the uniform Morse constant of the first $\mathtt N$ reference potentials.
{
{\noindent}ndent}
(ii)
Note that, by \equ{cerbiatta},
\equ{tikitaka}, \equ{dublino}
and \equ{bollettino1},
\begin{equation}\label{fangorn}\textstyle
\min\big\{\frac{s}2,1\big\}\le {\mathtt s}\leq {\hat \eta}s\le {s'_k}\,.
\end{equation}
In particular, the composition $\hamsec_k\circ {\hat \eta}eck\Phi_{\,\rm diam\,}ond$ is well defined; compare Theorem~\ref{normalform}--(ii).\\
As for the action analyticity radii, notice that, by the definitions in \equ{dublino}, \equ{formentera} and \equ{cerbiatta}, one has
\beq{bollettino2}
r_k={\mathtt R}\, |k|\,,\qquad
\tilde r_k= \frac{{\mathtt R}}\itcu\,.
\end{equation}
(iii) The three maps which define $\Phi_{\,\rm diam\,}ond$ have the following purposes:
the first one decouples the `kinetic energy' of the 1--d.o.f. secular system; the second one is introduced so as to get a purely positional 1--dimensional potential; finally, the third one puts the momentum coordinate of the equilibria at 0.
{
{\noindent}ndent}
(iv) The proof is fully constructive and the explicit definition of ${\ham}_{{}_k}$ is given in \varepsilonqu{cins},
\varepsilonqu{fpe}, \varepsilonqu{guaito},
\varepsilonqu{limone}, \varepsilonqu{pontediferro} and \varepsilonqu{hamsecu}
below.
\varepsilonrem
\subsection{Proof of the main theorem}
The proof is articulated in three lemmata. \\
The first lemma shows how to `block--diagonalize' the kinetic energy. For $k\in {\cal G}^n_{\KO}$, recall the definition of the matrices
${\mathcal A}A$ and $\hAA$ in \varepsilonqref{scimmia}, and define\footnote{ ${\rm I}_{m}$ denotes the $(m\times m)$--identity matrix and recall the notation in \varepsilonqu{checheche}.}
\beqa{centocelle}
&&
\tty= {\rm U}\ttY:=
\left(\begin{matrix}
1 & - {\hat f}ac1{|k|^2}(\hAA k)^T \cr 0 & \quad{\rm I}_{{}_{n-1}} \cr
\varepsilonnd{matrix}\right)\ttY
\,,\qquad
{\rm i.e.}\qquad
\casi{\tty_1=\ttY_1 - {\hat f}ac1{|k|^2}{\hAA k{c_{{}_{\rm o}}}ot \hat \ttY}\,,}
{\hat \tty=\hat \ttY\,.}
\varepsilonnd{equation} a
Then, one has
\begin{lemma}\label{phi1} {\rm (i)}
Let $\Fiuno$ be the map $\Fiuno(\ttY,\ttX)=({\rm U}\ttY,{\rm U}^{-T}\ttX)$.
Then, $\Fiuno$ is symplectic and
\beq{finocchio} \DDD^k = {\rm U} \mathbb{Z}Z \,,\qquad
\Fiunoc: \mathbb{Z}Z_{4 {\hat \eta}r}\times\mathbb T_{\hat \eta}s\to \DDD^k_{\tilde r_k}\times \mathbb T_{\hat \eta}s\,.
\varepsilonnd{equation}
{\rm (ii)} Let
\beq{hamsecu}
\left\{
\begin{array}{l}
\Guo:= {\textstyle \varepsilon_k} g^k_{\rm o}({\mathcal A}A^T {\rm U} \ttY)\,,\quad
\Gu(\ttY,\ttX_1):= {\textstyle \varepsilon_k}
g^k({\mathcal A}A^T {\rm U} \ttY,\ttX_1)\,,\\ \ \\
\hamsecu(\ttY,\ttX_1):=
\ttY_1^2+ \Guo(\ttY)+\Gu(\ttY,\ttX_1)\,,\qquad \langle \Gu(\ttY,{c_{{}_{\rm o}}}ot)\rangle=0\,.
\varepsilonnd{array}\right.
\varepsilonnd{equation}
Then, if $\hamsec_k$ is as in \varepsilonqu{hamsec}, one has
\beq{spigolak}
\hamsec_k\circ \Fiunoc(\ttY,\ttX_1)= {\textstyle{\hat f}ac{|k|^2}{2}} \, \hamsecu(\ttY,\ttX_1)+ {\textstyle {\hat f}ac12}
| \proiezione^\perp_k \hAA^T \hat \ttY| ^2\,,
\varepsilonnd{equation}
with $\hamsecu$ real analytic on $\mathbb{Z}Z_{4 {\hat \eta}r}\times\mathbb T_{\hat \eta}s$ and $\langle \Gu(\ttY,{c_{{}_{\rm o}}}ot)\rangle=0$, and
the following estimates~hold\footnote{$\vartheta $ is defined in \varepsilonqu{martinaTE}. Notice that, by \varepsilonqu{cerbiatta}, ${\hat \eta}i_{{}_k}\le 1$ for all $k$.}:
\beq{betta}
|\Guo|_{4 {\hat \eta}r}\le \bettao:=2 \varepsilon_k \vartheta = {\hat f}ac{2\varepsilon_k}{{\mathtt K}^{5n}}\,,\qquad
| \Gu-\bGf|_{4 {\hat \eta}r,{\hat \eta}s}\le \betta:={\hat \eta}i_{{}_k}\bettao\le\bettao
\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
\noindent{\bf Proof\ }
(i) $\Fiuno$ is symplectic since it is generated by the generating function ${\rm U}\ttY\cdot \ttx$.\\
From the definitions of ${\mathcal A}A$ and ${\rm U}$ in, respectively, \equ{scimmia} and \equ{centocelle}, it follows that
\beq{perpieta}
({\mathcal A}A^T{\rm U}) \ttY=\ttY_1 k + \hAA^T\hat \ttY- {\textstyle \frac{(\hAA k)\cdot \hat \ttY}{|k|^2}k}=
\ttY_1 k + \hAA^T\hat \ttY- {\textstyle \frac{ \hAA^T \hat \ttY \cdot k}{|k|^2}k}
=\ttY_1 k + \proiezione_k^\perp \hAA^T \hat \ttY\,.
\end{equation}
Thus, $\tty= ({\mathcal A}A^T{\rm U}) \ttY$ if and only if $\tty\cdot k=\ttY_1 |k|^2$ and $\proiezione_k^\perp \tty=\proiezione_k^\perp
\hAA^T \hat \ttY$, which is equivalent to saying that $({\mathcal A}A^T{\rm U}) \mathbb{Z}Z =\mathbb Ruk$, which, in view of \equ{formentera}, is equivalent to
$\DDD^k = {\rm U} \mathbb{Z}Z$. Now, by~\equ{scimmia},
\beq{UU}
|{\rm U}|, |{\rm U}^{-1}|\le n^{{\hat f}ac32}\,,
\varepsilonnd{equation}
where, as usual, for a matrix $M$ we denote by
$\displaystyle |M|=\sup_{u\neq 0} |Mu|/|u|$ the standard operator norm.
Thus, by \varepsilonqu{cerbiatta} and \varepsilonqu{bollettino2} we have (for complex $z$)
\beq{chitikaka}
|z|<4{\mathtt r} \ \ \implies
\ \
|{\rm U} z|< n^{{\hat f}ac32} 4 {\mathtt r} = 4 n^{{\hat f}ac32}\, {\hat f}ac{{\mathtt R}}{\itcd}= {\hat f}ac{{\mathtt R}}{\itcu}= \tilde r_k\,,
\varepsilonnd{equation}
which, since $\ttX_1=\ttx_1$, implies that
$\Fiunoc: \mathbb{Z}Z_{4 {\hat \eta}r}\times\mathbb T_{\hat \eta}s\to \DDD^k_{\tilde r_k}\times \mathbb T_{\hat \eta}s$, proving \varepsilonqu{finocchio}.
{
{\noindent}ndent}
(ii) By the previous item, the composition
$\hamsec_k\circ \Fiunoc$ is well defined and analytic on $\mathbb{Z}Z_{4 {\hat \eta}r}\times\mathbb T_{\hat \eta}s$.
From \varepsilonqu{perpieta} it follows that
$|{\mathcal A}A^T{\rm U} \ttY|^2=|k|^2 \ttY_1^2 + | \proiezione^\perp_k \hAA^T \hat \ttY| ^2$,
and \varepsilonqu{spigolak} follows. Notice that since $g^k(y,{c_{{}_{\rm o}}}ot)\in\hol_{s'_k}^1$ (compare Lemma~\ref{averaging}), $\Gu$ has zero average over $\mathbb T$.
\\
By the definition of $\Guo$ and $\Gu$ in \varepsilonqu{hamsecu}, by \varepsilonqu{chitikaka}, \varepsilonqu{cristina} in\footnote{Recall that, by \varepsilonqu{dublino}, \varepsilonqu{formentera}, $\tilde r_k<r'_k=r_k/2$.
Recall also the definitions of
$\vartheta o$ and $\vartheta $ in \varepsilonqu{552} and
\varepsilonqu{martinaTE}.} Lemma~\ref{averaging}, the estimates
on $|\Guo|_{4 {\hat \eta}r}$ and on $| \Gu-\bGf|_{4 {\hat \eta}r,{\hat \eta}s}$ for $\noruno{k}<\mathtt N$ in \varepsilonqu{betta} follow.
The estimate for $\noruno{k}\ge\mathtt N$ in \varepsilonqu{betta} follows from Lemma~\ref{coslike}: see in particular \varepsilonqu{cate}, \varepsilonqu{martinaTE}
and \varepsilonqu{alfacentauri}. \qed
{
{\noindent}ndent}
The next lemma shows how one can remove the dependence on $\ttY_1$ in the potential
$\Gu$.
\begin{lemma}\label{avogado} If ${\mathtt K}\ge \itcd$,
then
\beq{tettapic}\textstyle
\frac{\bettao}{{\hat \eta}r^2}
<\frac1{2^{10}}\ \frac{{\hat \eta}s}{\pi+{\hat \eta}s}<1
\,,
\end{equation}
and the following statements hold.
\\
{\rm (i)} The fixed point equation
\beq{fpe}
\pp = -{\textstyle {\hat f}ac12} \, \partial_{\ttY_1} \Guo(\pp,\hat\ttP) -{\textstyle {\hat f}ac12} \, \partial_{\ttY_1}\Gu(\pp,\hat\ttP,\ttQ_1)
\varepsilonnd{equation}
has a unique solution $\pp:(\hat \ttP,\ttQ_1)\in \hat\mathbb{Z}Z\times {\mathbb T} \mapsto \pp(\hat\ttP,\ttQ_1)\in {\mathbb R} $ real analytic on $\hat\mathbb{Z}Z_{4 {\hat \eta}r}\times {\mathbb T} _{\hat \eta}s$, satisfying
\beq{pitale}\textstyle
|\pp|_{4 {\hat \eta}r,{\hat \eta}s}<{\hat f}ac{\bettao}{3 {\hat \eta}r}\,.
\varepsilonnd{equation}
Furthermore, if we define
\beq{guaito}
\left\{
\begin{array}{l}
\pp_{\rm o}(\hat \ttP):=\langle \pp(\hat\ttP,{c_{{}_{\rm o}}}ot)\rangle
\\
\tilde\pp:=\pp-\pp_{\rm o}
\varepsilonnd{array}\right.\,,\qquad
\left\{
\begin{array}{l}
\displaystyle \phi(\hat \ttP,\ttX_1):= \int_0^{\ttX_1} \tilde\pp(\hat \ttP,\sa)d\sa\\
\hat\qq(\hat\ttP,\ttQ_1):=-\partial_{\hat\ttP} \, \phi(\hat\ttP,\ttQ_1)
\varepsilonnd{array}\right.
\varepsilonnd{equation}
then, $\ttQ_1\to\hat\qq(\hat\ttP,\ttQ_1)$ is a real analytic periodic function, and
one has
\beq{tess}\textstyle
|\pp_{\rm o}|_{4 {\hat \eta}r}< {\hat f}ac13\, {\hat f}ac{\bettao}{{\hat \eta}r}\,,\qquad\quad
|\tilde\pp|_{4 {\hat \eta}r,{\hat \eta}s}< {\hat f}ac13\, {\hat f}ac{\betta}{{\hat \eta}r}\,,\qquad |\hat \qq|_{2 {\hat \eta}r,{\hat \eta}s}< {\hat f}ac{\betta}{6 {\hat \eta}r^2}\,(\pi+{\hat \eta}s)
\,.
\varepsilonnd{equation}
{\rm (ii)}
The real analytic symplectic map $\Fidue$ generated by
$ \ttP{c_{{}_{\rm o}}}ot\ttX+ \phi(\hat\ttP,\ttX_1)$, namely,
\beq{Fidue}
\Fidue:(\ttP,\ttQ)\mapsto (\ttY,\ttX) \quad{\rm with}\quad
\casi{\ttY_1=\ttP_1+ \tilde \pp(\hat \ttP,\ttQ_1)}{\hat \ttY=\hat \ttP}\,,
\quad
\casi{\ttX_1=\ttQ_1}{\hat \ttX=\hat \ttQ + \hat\qq(\hat\ttP,\ttQ_1)}
\,,
\varepsilonnd{equation}
satisfies:
\beq{elficheck}
\Fiduec: \mathbb{Z}Z_{2 {\hat \eta}r}\times {\mathbb T} _{\hat \eta}s\to \mathbb{Z}Z_{3 {\hat \eta}r}\times {\mathbb T} _{\hat \eta}s\,,
\varepsilonnd{equation}
and
\beqa{hamsecd}
\hamsecd(\ttP,\ttQ_1)&:=&\hamsecu\circ \Fiduec(\ttP,\ttQ_1)\\
&=& \big(1+\tilde\cin(\ttP,\ttQ_1)\big)\,
\big(\ttP_1-\pp_{\rm o}(\hat \ttP)\big)^2 + \Gfo(\hat\ttP)+ \Gf(\hat\ttP,\ttQ_1)\,,
\nonumber
\varepsilonnd{equation} a
for suitable functions $\tilde\cin$, $\Gfo$ and $\Gf$ (explicitly defined in \varepsilonqu{limone} below, with $\langle\Gf\rangle=0$) real analytic on, respectively,
$\mathbb{Z}Z_{2{\hat \eta}r}\times {\mathbb T} _{\hat \eta}s$, $\hat \mathbb{Z}Z_{2{\hat \eta}r}$ and $\hat \mathbb{Z}Z_{2{\hat \eta}r}\times {\mathbb T} _{\hat \eta}s$ ,
which satisfy the bounds:
\beq{cima}\textstyle
|\tilde \cin|_{2{\hat \eta}r,{\hat \eta}s}\leq {\hat f}ac{\bettao}{4{\hat \eta}r^2}\,,
\qquad
|\Gfo|_{2{\hat \eta}r}\leq 3 \bettao\,\qquad
\quad
|\Gf-\bGf|_{2{\hat \eta}r,{\hat \eta}s}\leq 2\betta\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
{\noindent}ndent{\bf Proof\ }
We start by proving \varepsilonqu{tettapic}. Recalling \varepsilonqu{fangorn}, \varepsilonqu{bollettino1} and \varepsilonqu{bollettino3}, we have
\beq{chenoia}\textstyle
{\hat f}ac{\pi+{\hat \eta}s}{\hat \eta}s\stackrel{}\le 1+ 2\pi \ttcs < 8 \ttcs<{\mathtt K}\,.
\varepsilonnd{equation}
Now, by the definitions in \varepsilonqu{betta}, \varepsilonqu{spigolak}, \varepsilonqu{cerbiatta}, \varepsilonqu{dublino}, we find
\[ \textstyle
{\hat f}ac{\bettao}{{\hat \eta}r^2}
=
4 \itcd^2 \, {\hat f}ac{|k|^2}{{\mathtt K}^{14n+4}}
\stackrel{\varepsilonqu{chenoia}}\le 4 \itcd^2 \, {\hat f}ac{1}{{\mathtt K}^{14n+1}} {\hat f}ac{\hat \eta}s{\pi+{\hat \eta}s}\,,
\varepsilonnd{equation} no
which yields \varepsilonqu{tettapic} since, by assumption, ${\mathtt K}>{\mathtt K}O\ge \itcd$.
{
{\noindent}ndent}
(i) Let ${\bf X}:= \hat\mathbb{Z}Z_{4 {\hat \eta}r}\times {\mathbb T} _{\hat \eta}s$ and let
$\mathcal X$ denote the complete metric space formed by the real analytic complex--valued functions $u: {\bf X}\to \{z\in {\mathbb C} :|z|\le {\hat \eta}r/2\}$, equipped with the metric given by the distance in sup--norm on $\bf X$. Let us also denote:
\beq{pontediferro}
\ttG^\sharp:=\Guo+\Gu\,,\qquad\quad \tilde \ttG^\sharp := \Gu- \bGf\,.
\varepsilonnd{equation}
Note that $\Gu$ and $\tilde \ttG^\sharp$ have zero average.
Consider the operator $F:u\in {\mathcal X} \mapsto F(u)$, where
$F(u)(\hat \ttP,\ttQ_1):=-{\textstyle {\hat f}ac12} \partial_{\ttY_1} \ttG^\sharp (u(\hat\ttP,\ttQ_1),\hat\ttP, \ttQ_1)$.
If $u\in {\mathcal X}$, then, by Cauchy estimate we get
\beqa{onam1}
\sup_{\bf X} |F(u)|&=&{\textstyle {\hat f}ac12} \sup_{\bf X}
\big| \partial_{\ttY_1} \ttG^\sharp(u(\hat\ttP,\ttQ_1),\hat\ttP, \ttQ_1)\big|
\nonumber\\
&=&{\textstyle {\hat f}ac12} \sup_{\bf X}
\big| \partial_{\ttY_1} \big[\ttG^\sharp(u(\hat\ttP,\ttQ_1),\hat\ttP, \ttQ_1)- \bGf(\ttQ_1)\big]\big|
\nonumber\\
&\le &
{\hat f}ac12\ {\hat f}ac{\big| \ttG^\sharp-\bGf\big|_{4 {\hat \eta}r,{\hat \eta}s}}{4 {\hat \eta}r- {\hat f}ac{\hat \eta}r2}
\nonumber\\
&\stackrel{\varepsilonqu{betta}}\le& {\hat f}ac12\, {\hat f}ac{\bettao+\betta }{4 {\hat \eta}r- {\hat f}ac{\hat \eta}r2}
\le {\hat f}ac27\, {\hat f}ac\bettao{{\hat \eta}r}
\stackrel{\varepsilonqu{tettapic}}{< } {\hat f}ac27\, {\hat \eta}r<{\hat f}ac{\hat \eta}r2\,.
\varepsilonnd{equation} a
Thus, $F:{\mathcal X}\to {\mathcal X}$.
Let us check that $F$ is, in fact, a contraction on ${\mathcal X}$. If $u,v\in{\mathcal X}$, then, again, by Cauchy estimate,
\varepsilonqu{betta} and \varepsilonqu{tettapic}, we get\footnote{$u$ and $v$, in the r.h.s. of the first inequality, are evaluated at $(\ttQ_1,\hat\ttP)$. }
\beqa{onam2}
\sup_{\bf X} |F(u)-F(v)| &\le &
{\textstyle {\hat f}ac12} \sup_{\bf X}
\big| \partial_{\ttY_1}\big( \ttG^\sharp(u,\hat \ttP,\ttQ_1)- \ttG^\sharp(v,\hat \ttP,\ttQ_1)\big)\big|
\nonumber\\
&\le& {\hat f}ac12\ \big| \partial_{\ttY_1}^2(\ttG^\sharp-\bGf\big)|_{{\hat f}ac{\hat \eta}r2,{\hat \eta}s}{c_{{}_{\rm o}}}ot \sup_{\bf X} |u-v|
\nonumber\\
&\le &
{\hat f}ac12\ {\hat f}ac{\big| \ttG^\sharp-\bGf\big|_{4 {\hat \eta}r,{\hat \eta}s}}{\big({4 {\hat \eta}r}- {\hat f}ac{\hat \eta}r2\big)^2}{c_{{}_{\rm o}}}ot \sup_{\bf X} |u-v|
\nonumber\\
&\stackrel{\varepsilonqu{betta}}{\le}& {\hat f}ac{4}{49}\ {\hat f}ac{\bettao}{{\hat \eta}r^2} {c_{{}_{\rm o}}}ot \sup_{\bf X} |u-v|
\stackrel{\varepsilonqu{tettapic}}{<} {\hat f}ac18\ {c_{{}_{\rm o}}}ot \sup_{\bf X} |u-v|\,,
\varepsilonnd{equation} a
showing that $F$ is a contraction on $\mathcal X$. Thus, by the standard Contraction Lemma, it follows that
there exists
a unique $\pp\in {\mathcal X}$ solving \varepsilonqu{fpe}.
{
{\noindent}ndent}
Since $\pp=F(\pp)$, one sees that \varepsilonqu{pitale} follows from \varepsilonqu{onam1}.\\
The first bound in \varepsilonqu{tess} follows immediately from \varepsilonqu{pitale}.
\\
To prove the second estimate in \varepsilonqu{tess}, write\footnote{To simplify notation, we drop, here, from the notation the explicit dependence on $\hat\ttP$ and $\ttQ_1$ of $\ttG^\sharp$.}
\beq{deangelis}
\partial_{\ttY_1} \ttG^\sharp (\pp)=
\partial_{\ttY_1} \ttG^\sharp (\pp_{\rm o}+\tilde\pp) =
\partial_{\ttY_1} \ttG^\sharp (\pp_{\rm o}) + w \tilde\pp\,,\quad {\rm with} \quad w:= \int_0^1 \partial^2_{\ttY_1} \ttG^\sharp (\pp_{\rm o}+t \tilde\pp)dt\,.
\varepsilonnd{equation}
As above, by Cauchy estimates,
\beq{marcore}
|w|_{4{\hat \eta}r,{\hat \eta}s}\le {\hat f}ac2{49} \, {\hat f}ac{\bettao+\betta}{{\hat \eta}r^2}<{\hat f}ac18\,.
\varepsilonnd{equation}
Thus, by \varepsilonqu{deangelis}, Cauchy estimates, and \varepsilonqu{marcore}, observing
that\footnote{Recall that
$\langle \Gu(\ttY,{c_{{}_{\rm o}}}ot)\rangle=0$ as stated in Lemma \ref{phi1}.} $\langle \partial_{\ttY_1} \Gu(\pp_{\rm o})\rangle=0$, one finds
\begin{eqnarray*}
|\tilde \pp|=|\pp-\pp_{\rm o}|&\stackrel{\varepsilonqu{deangelis}}=&{\textstyle {\hat f}ac12} \big| \partial_{\ttY_1} \ttG^\sharp (\pp_{\rm o}) - \langle \partial_{\ttY_1} \ttG^\sharp (\pp_{\rm o})\rangle +
w \tilde \pp - \langle w \tilde \pp \rangle\big|\\
&=&
{\textstyle {\hat f}ac12} \big| \partial_{\ttY_1} \Gu (\pp_{\rm o}) - \langle \partial_{\ttY_1} \Gu(\pp_{\rm o})\rangle +
w \tilde \pp - \langle w \tilde \pp \rangle\big|\\
&=&
{\textstyle {\hat f}ac12} \big| \partial_{\ttY_1} \big(\Gu (\pp_{\rm o})-\bGf\big)+
w \tilde \pp - \langle w \tilde \pp \rangle\big|\\
&\stackrel{\varepsilonqu{betta},\varepsilonqu{marcore}}\le& {\hat f}ac12 \paramBBig(\, {\hat f}ac27 \, {\hat f}ac\betta{\hat \eta}r\paramBBig) + {\hat f}ac12 |\tilde \pp|\,,
\varepsilonnd{equation} ano
which yields immediately the second bound in \varepsilonqu{tess}.
\\
Next, since
$\tilde \pp$ has zero average over the torus, the function $\phi$ defined in \varepsilonqu{guaito}
defines a (real analytic) periodic function such that $\partial_{\ttX_1}\phi=\tilde \pp$.
Furthermore, by the second estimate in \varepsilonqu{tess}, one has\footnote{$\pi+{\hat \eta}s$ is an estimate of the length of the integration path in \varepsilonqu{guaito}, as the real part of
$\ttQ_1$ can be taken in $[-\pi,\pi)$.}
$
|\phi|_{4 {\hat \eta}r,{\hat \eta}s}< {\hat f}ac{\betta}{3 {\hat \eta}r}\ (\pi+{\hat \eta}s)\,,
$
so that, by Cauchy estimates, also last bounds in \varepsilonqu{tess} follow.
{
{\noindent}ndent}
(ii) By the definition of $\Fidue$ in \varepsilonqu{Fidue}, by \varepsilonqu{tess} and \varepsilonqu{tettapic}, the relations in
\varepsilonqu{elficheck} follow at once.
\\
Now, define\footnote{Here, $\pp_{\rm o}=\pp_{\rm o}(\hat\ttP)$.}
\beq{limone}
\left\{\begin{array}{l}
\displaystyle \tilde\cin(\ttP,\ttQ_1):=\int_0^1 (1-t)\partial_{\ttY_1}^2 \ttG^\sharp\big(\pp_{\rm o}+t (\ttP_1-\pp_{\rm o}),\hat\ttP,\ttQ_1\big)dt\,,\\
\Gfo(\hat\ttP):= \langle \pp(\hat \ttP,{c_{{}_{\rm o}}}ot)^2\rangle +\langle \ttG^\sharp\big( \pp({c_{{}_{\rm o}}}ot,\hat \ttP),\hat\ttP,{c_{{}_{\rm o}}}ot)\rangle
\,,\\
\displaystyle \Gf(\hat\ttP, \ttQ_1):= \pp(\hat \ttP,\ttQ_1)^2
+ \ttG^\sharp\big( \pp(\ttQ_1,\hat \ttP),\hat\ttP,\ttQ_1) - \Gfo(\hat\ttP)
\,,
\varepsilonnd{array}\right.
\varepsilonnd{equation}
then,
by Taylor's formula, \varepsilonqu{hamsecu}, \varepsilonqu{Fidue}, \varepsilonqu{pontediferro} and \varepsilonqu{fpe}, one finds\footnote{$\displaystyle g(t_0+\t)=g(t_0)+g'(t_0)\t+\paramBBig(\int_0^1 (1-t) g''\big(t_0+t\t\big)dt\paramBBig) \t^2$ with $g=\Gu({c_{{}_{\rm o}}}ot,\ttQ_1)$, $t_0=\pp$ and $\t=\ttP_1-\pp_{\rm o}$. For ease of notation we drop the (dumb) dependence upon $\hat \ttP$ in these formulae.}
\beqa{onam3}
\hamsecd(\ttP_1,\ttQ_1)&:=&\hamsecu\circ \Fiduec(\ttP_1,\ttQ_1)
= (\ttP_1+\tilde\pp)^2+\ttG^\sharp(\ttP_1+\tilde\pp,\ttQ_1)\nonumber\\
&\stackrel{\varepsilonqu{guaito}}=&\big(\pp+(\ttP_1-\pp_{\rm o})\big)^2+ \ttG^\sharp\big(\pp+(\ttP_1-\pp_{\rm o}),\ttQ_1\big)\nonumber\\
&\stackrel{\varepsilonqu{limone}}{=}& (\ttP_1-\pp_{\rm o})^2+2(\ttP_1-\pp_{\rm o})\pp + \pp^2+ \ttG^\sharp(\pp,\ttQ_1)+\partial_{\ttY_1}\ttG^\sharp(\pp,\ttQ_1) (\ttP_1-\pp_{\rm o}) \nonumber \\
&& + (\ttP_1-\pp_{\rm o})^2 \tilde \cin\nonumber\\
&\stackrel{\varepsilonqu{fpe}}{=}&
(1+\tilde \cin) (\ttP_1-\pp_{\rm o})^2 + \pp^2+ \ttG^\sharp(\pp,\ttQ_1)\nonumber\\
&\stackrel{\varepsilonqu{limone}}{=}& (1+\tilde \cin) (\ttP_1-\pp_{\rm o})^2+ \Gfo+ \Gf(\ttQ_1)\,,
\varepsilonnd{equation} a
proving \varepsilonqu{hamsecd}.
\\
Let us now prove \varepsilonqu{cima}.
Observe that for $\ttP\in \mathbb{Z}Z_{2{\hat \eta}r}$ by \varepsilonqu{tettapic} and \varepsilonqu{tess}
the segment $\big(\pp_{\rm o}+t (\ttP_1-\pp_{\rm o}),\hat \ttP\big)$, $t\in[0,1]$,
still belongs to $\mathbb{Z}Z_{2{\hat \eta}r}$, hence, by
definition of $\tilde \cin$ in \varepsilonqu{limone}, by Cauchy estimate\footnote{Compare, also, the estimates done in \varepsilonqu{onam2}.} and \varepsilonqu{betta} one obtains the first
estimate in \varepsilonqu{cima}.
\\
By the definition of $\Gfo$, by \varepsilonqu{pitale} and \varepsilonqu{betta}, observing that $|\ttG^\sharp|\le \bettao+\betta\le 2\bettao$, one gets immediately the second estimate in \varepsilonqu{cima}.
\\
As for the third estimate in \varepsilonqu{cima}, by the definitions given, one has that\footnote{Dropping, again, in the notation the dumb variable $\hat \ttP$.}
\beq{proietti}
\Gf-\bGf
=
\big(\pp^2-\langle \pp^2\rangle\big)+\big(\ttG^\sharp(\pp,{c_{{}_{\rm o}}}ot) - \langle \ttG^\sharp(\pp,{c_{{}_{\rm o}}}ot)\rangle-\bGf\big) \,.
\varepsilonnd{equation}
Let us estimate the terms in brackets separately. For $\hat\ttP\in \hat\mathbb{Z}Z_{2{\hat \eta}r}$ and $\ttQ_1\in {\mathbb T} _{{\hat \eta}s}$, one finds
\beqa{fiorini}
|\pp^2-\langle \pp^2\rangle|=|2\tilde \pp\pp_{\rm o} +\tilde\pp^2-\langle \tilde \pp^2\rangle|\le
(2|\pp_{\rm o}|+2|\tilde \pp|)\, |\tilde\pp|
\stackrel{\varepsilonqu{tess}}{\le}
{\hat f}ac49 \bettao {\hat f}ac\betta{{\hat \eta}r^2}\stackrel{\varepsilonqu{tettapic}}{<} {\hat f}ac\betta2\,.
\varepsilonnd{equation} a
To estimate the second term in \varepsilonqu{proietti}, we define
$$\z(t):=
\ttG^\sharp(\pp_{\rm o}+t \tilde \pp,\ttQ_1) - \langle \ttG^\sharp(\pp_{\rm o}+t \tilde \pp,\cdot)\rangle\,,
$$
and observe that (recall \varepsilonqu{pontediferro})
$\z(0)= \Gu(\pp_{\rm o},\ttQ_1)$
and that, by Cauchy estimates, we get\footnote{Reasoning as in \varepsilonqu{onam1}.}
\beqa{oji}
|\z'(s)|&\le& |\tilde \pp|\, \int_0^1\big| \partial_{\ttY_1}\big( \ttG^\sharp(\pp_{\rm o}+t \tilde \pp,\ttQ_1) - \langle \ttG^\sharp(\pp_{\rm o}+t \tilde \pp,\cdot)\rangle\big)\big|dt\nonumber\\
&\le& |\tilde \pp|\, {\hat f}ac{2|\ttG^\sharp|_{4{\hat \eta}r ,{\hat \eta}s}}{4{\hat \eta}r -{\hat f}ac{\hat \eta}r2}
\le |\tilde \pp|\, {\hat f}ac{4\bettao}{4{\hat \eta}r -{\hat f}ac{\hat \eta}r2}
\stackrel{\varepsilonqu{tess}}{<}{\hat f}ac8{21} {\hat f}ac{\bettao}{{\hat \eta}r^2}\ \, \betta\stackrel{\varepsilonqu{tettapic}}{<}{\hat f}ac12 \betta\,.
\varepsilonnd{equation} a
Thus,
\[
\big|\ttG^\sharp(\pp,{c_{{}_{\rm o}}}ot) - \langle \ttG^\sharp(\pp,{c_{{}_{\rm o}}}ot)\rangle-\bGf\big|\le |\z(0)-\bGf|+\int_0^1|\z'(t)|dt
\le
| \Gu(\pp_{\rm o},{c_{{}_{\rm o}}}ot)-\bGf|+{\hat f}ac12 \betta\stackrel{\varepsilonqu{betta}}{\le} {\hat f}ac32 \betta\,.
\varepsilonnd{equation} no
Putting together this estimate and \equ{fiorini} one also gets the third estimate in \equ{cima}. \qed
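\smallskip\noindent
The use of the Contraction Lemma in part (i) can be mimicked on a toy model. In the following Python sketch (illustration only; the function below is {\it not} the $\ttG^\sharp$ of the lemma) the fixed point equation $u=-\frac12\partial_{\ttY_1}\ttG^\sharp(u,\ttQ_1)$ is solved by direct iteration and compared with its explicitly known solution.
{\small
\begin{verbatim}
# Toy model: G(Y1, Q1) = eps*(cos Q1 + Y1*sin(2*Q1) + Y1**2 * cos Q1),
# so the fixed point equation reads
#   p = -0.5*eps*(sin(2*Q1) + 2*p*cos(Q1)),
# whose solution is p = -eps*sin(2*Q1) / (2*(1 + eps*cos(Q1))).
import numpy as np

eps, Q1 = 0.05, 0.7

def F(u):                      # the contraction u -> -0.5 * dG/dY1(u, Q1)
    return -0.5 * eps * (np.sin(2 * Q1) + 2 * u * np.cos(Q1))

u = 0.0
for _ in range(50):            # |F'| = eps*|cos Q1| << 1: fast convergence
    u = F(u)

exact = -eps * np.sin(2 * Q1) / (2 * (1 + eps * np.cos(Q1)))
assert abs(u - exact) < 1e-12
\end{verbatim}}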
{
{\noindent}ndent}
The final transformation is again just a translation, which is done so that
{\sl all equilibria of the
secular system will lie on the angle--axis in its 2--dimensional phase space}.
\begin{lemma}\label{platano}
The real analytic symplectic map $\Fitre\in\Gdag$ defined as
\beq{Fitre}
\Fitre:
(\ttp,\ttq) \mapsto
(\ttP,\ttQ)
\quad{\rm with}\quad
\casi{\ttP_1=\ttp_1 + \pp_{\rm o}(\hat\ttp)}{\hat \ttP=\hat \ttp}\,,
\qquad
\casi{\ttQ_1=\ttq_1}{\hat\ttQ=\hat\ttq -\ttq_1\partial_{\hat \ttp} \pp_{\rm o}(\hat\ttp) \, }
\,,
\varepsilonnd{equation}
satisfies
\beq{giovannino}\textstyle
\Fitrec: \mathbb{Z}Z_{{\hat \eta}r}\times {\mathbb T} _{\hat \eta}s\to \mathbb{Z}Z_{2{\hat \eta}r}\times {\mathbb T} _{\hat \eta}s\,.
\varepsilonnd{equation}
Furthermore, one has:
\beq{Hsharp-}
\hamsecd\circ \Fitrec(\ttp,\ttq_1)=
\big(1+\cins(\ttp,\ttq_1)\big)\, \ttp_1^2 + \Gfo(\hat\ttp)+ \Gf(\hat\ttp,\ttq_1)\,,
\varepsilonnd{equation}
where
\beq{cins}
\cins(\ttp,\ttq_1):= \tilde\cin\big(\pp_{\rm o}(\hat\ttp)+\ttp_1, \hat\ttp,\ttq_1\big)\,,
\varepsilonnd{equation}
and the following bounds hold:
\beq{cimadue}
|\cins|_{{\hat \eta}r,{\hat \eta}s}\leq {\hat f}ac{\bettao}{4{\hat \eta}r^2}\,,
\qquad
|\Gfo|_{2{\hat \eta}r}\leq 3 \bettao\,\qquad
\quad
|\Gf-\bGf|_{2{\hat \eta}r,{\hat \eta}s}\leq 2\betta\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
{\noindent}ndent{\bf Proof\ } Just observe that, if $|\ttp_1|<{\hat \eta}r$, then, by \varepsilonqu{tess} and \varepsilonqu{tettapic}, it follows that,
for all $\ttp\in\mathbb{Z}Z_{{\hat \eta}r}$,
$$\textstyle
|\pp_{\rm o}(\hat\ttp)+\ttp_1|< {\hat f}ac{\bettao}{3 {\hat \eta}r}+{\hat \eta}r\le {\hat f}ac{\hat \eta}r3+{\hat \eta}r={\hat f}ac43 {\hat \eta}r<2 {\hat \eta}r\,,
$$
so that \varepsilonqu{giovannino} holds.
Finally, by \varepsilonqu{cima}, we get\footnote{{\sl $\Gf$ and $\bGf$ are the same} as in \varepsilonqu{cima} of Lemma~\ref{avogado}.} \varepsilonqu{cimadue}. \qed
We are ready for the

\noindent{\bf Proof\ } {\bf of Theorem \ref{sivori}}\\
Recall the definitions of $\Phi_{\!{}_j}$, $1\le j\le 3$, in, respectively, Lemma~\ref{phi1}, \equ{Fidue} and \equ{Fitre}, and define
$\Phi_{\,\rm diam\,}ond:=\Fiuno\circ\Fidue\circ \Fitre$,
and
$\hzk(\hat\p):=
\frac{1}{|k|^2}
| \proiezione^\perp_k \hAA^T \hat\p| ^2
+\Gfo(\hat\p)$. Then the expression for ${\ham}_{{}_k}$ in \equ{tikitaka}
follows from
\equ{spigolak}, \equ{hamsecd} and \equ{Hsharp-}.
\\
By \varepsilonqu{giovannino} and Lemma~\ref{platano},
the Hamiltonian function ${\ham}_{{}_k}$ is real analytic on $\mathbb{Z}Z_{\hat \eta}r\times {\mathbb T} _{\hat \eta}s$, where
$\mathbb{Z}Z=(-{\mathtt R},{\mathtt R})\times\hat \mathbb{Z}Z$
(compare \varepsilonqu{cerbiatta}).
\\
By \varepsilonqu{P2+} and Proposition~\ref{punti} we have
that $\bGf$ in \varepsilonqref{paranoia} is $\morse$--Morse with $\morse$ as in \varepsilonqu{cerbiatta}.
\\
Let us, now, estimate $|\bGf|_{\mathtt s}$. Consider, first, $\noruno{k}<\mathtt N$. Then, estimating $|f_{jk}|$ by $e^{-|j|\noruno{k}s}$, by the definition of ${\mathtt s}$, we get
\begin{eqnarray*}
|\bGf|_{\mathtt s}&\stackrel{\varepsilonqu{paranoia}}=& \varepsilon_k |\pi_{\!{}_{\integer k}} f|_{{\mathtt s}}
\le
\varepsilon_k |\pi_{\!{}_{\integer k}} f|_{s/2}=\varepsilon_k \sum_{j\neq 0} |f_{jk}|e^{{\hat f}ac{|j|\noruno{k}s}{2}}
\\
&\le& {\hat f}ac{8\varepsilon}{|k|^2} \, {\hat f}ac{e^{-s/2}}{2(1-e^{-s/2})}< {\hat f}ac{8\varepsilon}{|k|^2} \, {\hat f}ac{1}{s}\,.
\varepsilonnd{equation} ano
If $\noruno{k}\ge\mathtt N$ one has
\[
|\bGf|_{\mathtt s}=|\bGf|_1
\stackrel{\varepsilonqu{alfacentauri}}={\hat f}ac{4\varepsilon}{|k|^2}|f_k||\cos(\sa+ \sa_k)+F^k_\star(\sa)|_1
\stackrel{\varepsilonqu{gallina}}{\le}{\hat f}ac{4\varepsilon}{|k|^2} \, |f_k|\, (\cosh 1+2^{-40})<{\hat f}ac{8\varepsilon}{|k|^2} |f_k|\,.
\varepsilonnd{equation} no
Thus, by definitions of ${\hat \eta}i_{{}_k}$ in \varepsilonqu{cerbiatta} and $\bttcd$, one gets
\beq{crusca}
|\bGf|_{\mathtt s}\le \upepsilon \,,
\varepsilonnd{equation}
with $\upepsilon$ as in \varepsilonqu{cerbiatta}.
Next, since ${\hat \eta}i_{{}_k}\le 1\le \bttcd$,
\beq{mappo}
|\Gf-\bGf|_{{\mathtt r},{\mathtt s}}\stackrel{\varepsilonqu{cima}}\le 2\betta\stackrel{\varepsilonqu{betta}}={\hat f}ac{8\varepsilon}{|k|^2} {\hat \eta}i_{{}_k}\vartheta
\stackrel{\varepsilonqu{cerbiatta}}\le \upepsilon \lalla\,.
\varepsilonnd{equation}
By \varepsilonqu{cimadue}, \varepsilonqu{cerbiatta}, \varepsilonqu{betta}, using the inequalities $|k|\le {\mathtt K}O\le {\mathtt K}/6$,
recalling \varepsilonqu{tikitaka}, \varepsilonqu{dublino}, and the hypothesis ${\mathtt K}O\ge \itcd$ (in the last inequality),
one sees that
\beq{mappo2}
|\cins|_{{\mathtt r},{\mathtt s}}\leq \itcd^2\, {\hat f}ac{|k|^2}{{\mathtt K}^{2\nu}}\, \vartheta \le
{\hat f}ac{\itcd^2}{36}\, {\hat f}ac{1}{{\mathtt K}^{2(\nu-1)}} \, \vartheta
<
\vartheta =\lalla\,.
\varepsilonnd{equation}
Then \varepsilonqu{cimabue}
follows by \varepsilonqu{crusca}, \varepsilonqu{mappo} and
\varepsilonqu{mappo2}.
\\
Finally, observe that, by the definitions in
\varepsilonqu{cerbiatta} and \varepsilonqu{betta}
one has
\beq{stent}\textstyle
{\upepsilon}/{\morse}=
\casitwo{ {\hat f}ac{4 \bttcd}{\b}
\,,}{\noruno{k}<\mathtt N\,,}{4 \bttcd
\,,\phantom{\displaystyle \int}
}{\noruno{k}\ge \mathtt N\,.}
\varepsilonnd{equation}
Then \equ{alce}, with $\varpi=\upkappa$ as in \equ{kappa}, follows immediately from the definitions in
\equ{cerbiatta},
\equ{sucamorse} and \equ{stent}.
\qed
\small
\begin{thebibliography}{99}
\bibitem{AM}
A. A. Arabanov, A.D. Morozov {\sl On resonances in Hamiltonian systems with three degrees of freedom}. Regul. Chaotic Dyn. 24 (2019), no. 6, 628--648.
\bibitem{A64}
V. I. Arnol'd, {\sl Instability of dynamical systems with many degrees of freedom}, Dokl. Akad. Nauk SSSR, 156 (1964), pp. 9--12
\bibitem{AKN}
V.~I. Arnold, V.~V. Kozlov, and A.~I. Neishtadt.
{\sl Mathematical aspects of classical and celestial mechanics},
volume~3 of Encyclopaedia of Mathematical Sciences.
Springer-Verlag, Berlin, third edition, 2006.
[Dynamical systems. III], Translated from the Russian original by E. Khukhro.
\bibitem{BGG}
G. Benettin, L. Galgani, A. Giorgilli,
{\sl A proof of Nekhoroshev's theorem for the stability times in nearly integrable Hamiltonian systems}. Celestial Mech. 37 (1985), no. 1, 1--25.
\bibitem{BCaa23} L. Biasco, and L. Chierchia.
{\sl Complex Arnol'd--Liouville maps}, preprint 2023. \underline{\href{https://arxiv.org/abs/2306.00875v1}{\sl arXiv:2306.00875v1}}
\bibitem{BCnonlin} L. Biasco, and L. Chierchia.
{\sl On the topology of nearly-integrable Hamiltonians at simple resonances}.
Nonlinearity 33 (2020) 3526--3567
\bibitem{BClin2} L. Biasco, and L. Chierchia.
{\sl Quasi--periodic motions in generic nearly--integrable mechanical systems}, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. 33 (2022), no. 3, pp. 575--580
\bibitem{BFN}
A. Bounemoura, B. Fayad, L. Niederman, {\sl Superexponential stability of quasi--periodic motion in Hamiltonian systems}. Comm. Math. Phys. 350 (2017), no. 1, 361--386
\bibitem{BKZ}
P. Bernard, V. Kaloshin, and K. Zhang, {\sl Arnol'd diffusion in arbitrary degrees of freedom and 3--dimensional normally hyperbolic invariant cylinders}, Acta Math., 217:1 (2016), pp. 1--79
\bibitem{CdL22}
Q. Chen; R. de la Llave,
{\sl Analytic genericity of diffusing orbits in a priori unstable Hamiltonian systems}.
Nonlinearity 35 (2022), no. 4, 1986--2019
\bibitem{CG} L. Chierchia, and G. Gallavotti.
{\sl Drift and diffusion in phase space}
Ann. Inst. Henri Poincar\'e, Phys. Th\'eor., 60, 1--144 (1994)\\
Erratum, Ann. Inst. Henri Poincar\'e, Phys. Th\'eor., 68, no. 1, 135 (1998)
\bibitem{CFG}
A. Clarke, J. Fejoz, M. Guardia, {\sl Topological shadowing methods in Arnold diffusion: weak torsion and multiple time scales}. Nonlinearity 36 (2023), no. 1, 426--457
\bibitem{DG}
A. Delshams, P. Guti\'errez,
{\sl Effective stability and KAM theory},
J. Differ. Equ. 128 (1996) 415--490
\bibitem{DH}
A. Delshams, G. Huguet, {\sl Geography of resonances and Arnol'd diffusion in a priori unstable Hamiltonian systems}. Nonlinearity 22 (2009), no. 8, 1997--2077.
\bibitem{DLS}
A. Delshams, R. de la Llave, T.M. Seara,
{\sl Instability of high dimensional Hamiltonian systems: multiple resonances do not impede diffusion}.
Adv. Math. 294 (2016), 689--755
\bibitem{DS}
A. Delshams, R.G. Schaefer,
{\sl Arnold diffusion for a complete family of perturbations}. Regul. Chaotic Dyn. 22 (2017), no. 1,
78--108
\bibitem{GPSV}
M. Guardia, J. Paradela, T.M. Seara, C. Vidal, {\sl Symbolic dynamics in the restricted elliptic isosceles three body problem}. J. Differential Equations 294 (2021), 143--177
\bibitem{GCB}
M. Guzzo, L. Chierchia, G. Benettin,
{\sl The steep Nekhoroshev's theorem}. Comm. Math. Phys. 342 (2016), no. 2, 569--601.
\bibitem{FHL}
J.-L. Figueras, A. Haro, A. Luque, {\sl Effective bounds for the measure of rotations}. Nonlinearity 33 (2020), no. 2, 700--741
\bibitem{GR} R. C. Gunning, and H. Rossi,
{\sl Analytic Functions of Several Complex Variables}, American Mathematical Soc., 2009 -- 317 pp.
\bibitem{KZ}
V. Kaloshin and K. Zhang. {\sl Arnol'd diffusion for smooth systems of two and a half degrees of freedom}, volume
208 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2020
\bibitem{Ma}
J. N. Mather,
{\sl Arnol'd diffusion by variational methods}. Essays in mathematics and its applications, 271--285, Springer, Heidelberg, 2012
\bibitem{MNT} A.G. Medvedev, A.I. Neishtadt, D.V. Treschev,
{\sl Lagrangian tori near resonances of near--integrable Hamiltonian systems},
Nonlinearity, {\bf 28}:7 (2015), 2105--2130
\bibitem{Nei}
A. I. Neishtadt,
{\sl Estimates in the Kolmogorov theorem on conservation of conditionally periodic motions}.
J. Appl. Math. Mech. 45 (1981), no. 6, 766--772
\bibitem{Nei89} A. Neishtadt,
{\sl Problems of Perturbation Theory for Non--Linear Resonant Systems}. Doktor. Diss. Moscow Univ., Moscow (1989), 342 pp. (Russian)
\bibitem{Nek}
N.N. Nehoroshev,
{\sl An exponential estimate of the time of stability of nearly integrable Hamiltonian systems}. (Russian)
Uspehi Mat. Nauk 32 (1977), no. 6(198), 5--66, 287
\bibitem{Nie00}
L. Niederman, {\sl Dynamics around simple resonant tori in nearly integrable Hamiltonian systems}.
J. Differential Equations 161 (2000), no. 1, 1--41
\bibitem{NPR}
L. Niederman, A. Pousse, P. Robutel, {\sl On the co-orbital motion in the three--body problem: existence of quasi--periodic horseshoe--shaped orbits}. Comm. Math. Phys. 377 (2020), no. 1, 551--612.
\bibitem{P82}
J. P\"oschel,
{\sl Integrability of Hamiltonian systems on Cantor sets}.
Comm. Pure Appl. Math. 35 (1982), no. 5, 653--696
\bibitem{poschel}
J. P\"oschel,
{\sl Nekhoroshev estimates for quasi--convex Hamiltonian systems}.
Math. Z. {\bf 213}, pag. 187 (1993).
\bibitem{Simo}
C. Sim\'o, {\sl Some questions looking for answers in dynamical systems}. Discrete Contin. Dyn. Syst. 38 (2018), no. 12, 6215--6239.
\bibitem{T}
D. Treschev. {\sl Arnol'd diffusion far from strong resonances in multidimensional a priori unstable Hamiltonian
systems}. Nonlinearity, 25(9):2717--2757, 2012
\bibitem{Z}
Ke Zhang,
{\sl Speed of Arnol'd diffusion for analytic Hamiltonian systems}
Invent. Math. 186 (2011), no. 2, 255--290.
\varepsilonnd{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{ The Geometry of Limit State Function Graphs and Subset Simulation}
\author[mymainaddress]{Karl Breitung\corref{mycorrespondingauthor}}
\address[mymainaddress]{Engineering Risk Analysis Group, Technical University of Munich, Theresienstr.~90, 80333 Munich, Germany}
\cortext[mycorrespondingauthor]{Corresponding author}
\end{eqnarray}d{[email protected]}
\begin{abstract}
In the last fifteen years the subset simulation method has often been used in reliability problems as a tool for calculating small probabilities. This method extrapolates from an initial Monte Carlo estimate of the probability content of a larger failure domain, defined by a suitably higher level of the original limit state function. Conditional probabilities are then estimated iteratively for failure domains shrinking towards the original failure domain.
But there are assumptions about the structure of the failure domains, not immediately obvious, which must be fulfilled for the method to work properly. Here examples are studied which show that, at least in some cases, inaccurate results may be obtained if these premises are not fulfilled. For the further development of the subset simulation method it is certainly desirable to find approaches which make it possible to check that these implicit assumptions are not violated. It would probably also be important to develop further improvements of the concept to remove these limitations.
\end{abstract}
\begin{keyword}
Asymptotic approximations; FORM/SORM; subset simulation; Monte Carlo methods; stochastic optimization
\end{keyword}
\end{frontmatter}
\thispagestyle{empty}
\section{Introduction}
A standard problem of structural reliability is the calculation of failure probabilities which are given by $n$-dimensional integrals in the form:
\begin{eqnarray}
P=\int_{ g(\ul{x})<0 }f(\ul{x})\ d\ul{x}
\end{eqnarray}
Here $f(\ul{x})$ is an $n$-dimensional PDF (probability density function)
and $g(\ul{x})$ is the LSF (limit state function) describing the failure condition.
During the development of structural reliability methods it became usual to transform the random vector $\ul{X}$ into a standard normal
random vector $\ul{U}$ with independent components. So the problem is in this standardized form:
\begin{eqnarray}
P=(2\pi)^{-n/2}\int_{ g(\ul{u})<0 }\exp(-\ein|\ul{u}|^2)\ d\ul{u}
\end{eqnarray}
Such transformations into the standard normal space for random vectors with independent components were first described by \citep{Rackwitz/Fiessler(1978)}.
For random vectors with dependent components the Rosenblatt-transformation is often proposed; but, with the exception of the example given in \citep{Hohenbichler(1981)}, no applications of this transformation concept are known to the author. A practically applicable method appears to be the Nataf-transformation described in \citep{Der/Liu(1986)}.
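A minimal numerical sketch of such a transformation for independent components is given below (illustration only; the exponential marginals are an arbitrary choice): each component is mapped by $u_i=\Phi^{-1}(F_i(x_i))$, which yields standard normal components.
{\small
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# X_i ~ Exp with scale 2, independent components
x = rng.exponential(scale=2.0, size=(100_000, 2))
# componentwise iso-probabilistic transformation u_i = Phi^{-1}(F_i(x_i))
u = stats.norm.ppf(stats.expon.cdf(x, scale=2.0))
print(u.mean(axis=0), u.std(axis=0))   # approx. (0, 0) and (1, 1)
\end{verbatim}}
The Rosenblatt and Nataf transformations extend this componentwise construction to dependent components, which is not shown here.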
From the nineteen-sixties onwards, basically two different solution concepts were followed for this problem.
The first were Monte-Carlo methods. The second were the so-called FORM/SORM concepts.
In the further development hybrid methods originated, combining both concepts. A review
of these developments up to the end of the nineties can be found in \citep{Rackwitz(2001)}.
During the decades the problems in structural reliability became more complex, i.e.\ higher-dimensional problems with smaller probabilities had to be solved.
A new method for such tasks was proposed in 2001, the so-called subset simulation method (SuS) -- the author prefers this acronym -- which will be described in the next section. A connection of this approach with FORM/SORM is described in section~3, and in section~4 the approximation for the estimation variance used in SuS is explained in more detail. Then in section~5 some examples using SuS will be studied to show that SuS has some implicit assumptions. In case these are not fulfilled, the results obtained by SuS might be misleading.
This paper is an enlarged and revised version of the conference paper \citep{Breitung(2016)}.
\section{The Subset Simulation Concept}
The subset simulation method is basically a variant of Monte Carlo methods; it attempts to avoid the large quantity of data points which has to be created in standard Monte Carlo by using instead an iterative procedure.
It can be subsumed under the generic term of stochastic optimization procedures.
Whereas, for example, importance sampling tries to improve the efficiency of Monte Carlo by identifying regions with high probability content and putting more data points there, SuS starts from a region around the origin and then moves step by step towards the failure domain. These regions are defined here by domains of the form $F_i=\{g(\ul{u})< a_i \}$ with the $a_i$'s being positive and $a_i \to 0 $.
The basic idea of the method (see \citep{Au/Beck(2001)}, \citep{Au/Wang(2014)}) is to write the failure probability $\hbox{P}(F)$ as a product of
conditional probabilities
\begin{eqnarray}
\hspace*{-0.85cm}\hbox{P}(F_n)=\hbox{P}(F_1|F_0)\!\cdot\!\hbox{P}(F_2|F_1)\ldots\hbox{P}(F_n|F_{n-1}) = \prod_{k=0}^{n-1}\hbox{P}(F_{k+1}|F_k)\hspace{-0.2cm}
\end{eqnarray}
Here $\mathbb{R}^n=F_0\supset F_1 \supset F_2\supset \ldots \supset F_n=F$.
Since the (suitably chosen) conditional probabilities are relatively large compared with the failure probability $\hbox{P}(F)$ which has to be estimated, such an approach has the advantage that these conditional probabilities can be estimated efficiently with much smaller sample sizes. The details of how these samples are produced with Markov chain Monte Carlo can be found in the references given above.
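The following Python sketch illustrates the product formula above on a toy example with a linear LSF. For simplicity, the conditional samples are produced by plain rejection sampling instead of the Markov chain Monte Carlo used in actual SuS, so it is only practical for moderately small probabilities; the LSF and the levels are an arbitrary choice, not taken from the cited references.
{\small
\begin{verbatim}
# g(u) = 4 - u_1, so the exact failure probability is Phi(-4) ~ 3.2e-5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g = lambda u: 4.0 - u[:, 0]
levels = [3.0, 2.0, 1.0, 0.0]     # a_1 > a_2 > ... > a_m = 0
N = 20_000

def sample_conditional(level, size):
    """Crude rejection sampling of U ~ N(0, I_2) conditioned on g(U) < level."""
    out = []
    while sum(len(b) for b in out) < size:
        u = rng.standard_normal((N, 2))
        out.append(u[g(u) < level])
    return np.vstack(out)[:size]

# P(F_1) by plain Monte Carlo
u = rng.standard_normal((N, 2))
p = np.mean(g(u) < levels[0])
# P(F_{k+1} | F_k): fraction of conditional samples below the next level
for a_prev, a_next in zip(levels[:-1], levels[1:]):
    u = sample_conditional(a_prev, N)
    p *= np.mean(g(u) < a_next)

print(p, stats.norm.cdf(-4.0))    # both of the order 3e-5
\end{verbatim}}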
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfbeian}
\caption{a) The standard SuS example; b) a more complicated LSF. The design points are shown in red, the curve $g(u_1,u_2)=0$ in blue.}
\label{fig:gglsfbeian}
\end{center}
\end{figure}
This construct is an iterated extrapolation starting from an initial probability estimate $\widehat{\hbox{P}(F_1)}$
of a much larger failure domain. Then iteratively the failure domain is shrunk towards the original failure domain. In many papers about SuS the examples are similar to the case shown in figure \ref{fig:gglsfbeian}a), a standard example for demonstrating the concept. But one should also study how the method works for more complex cases as e.g. in figure \ref{fig:gglsfbeian}b).
As Rackwitz said in \citep{Rackwitz(2001)}, a crucial step in developing new procedures is also to show where they do not work, i.e. to construct counterexamples. For practical mathematical methods a proof of the correctness of an approach can almost never be found, but with examples one can show the limits of the applicability of the concept and where it needs improvement. On the other hand, it is practically impossible to show the correctness of a method by some examples.
So, to understand the idea of SuS more clearly, it is profitable to study examples where the concept runs into difficulties, which will be done in section~5. This might either lead to an improvement of the procedures used, which then allows such examples to be tackled successfully, or make the possible limitations more distinct.
\section{SuS and Asymptotic Approximations}
There are interesting relations between SORM and SuS which seem to have gone unnoticed until now. If one considers the first example in chap.~5 in \citep{Au/Wang(2014)}, the
LSF's there are given by
\begin{eqnarray}\label{lsf}
g_{\beta}(u_1,u_2)=\frac{\beta^2}{2}-u_1\cdot u_2.
\end{eqnarray}
Taking the constant in the LSF as in eq.~(\ref{lsf}) gives a sequence of limit state surfaces with
$\min_{g_{\beta}(\ul{u})=0}|\ul{u}|=\beta$.
The probabilities $\hbox{P}(g_{\beta}(u_1,u_2)<0)$ can be calculated exactly. This is not mentioned in \citep{Au/Wang(2014)}, but one has for $\beta>0$ that (\citep{Weisstein(2016)}):
\begin{eqnarray}\label{prod}
\hbox{P}(g_{\beta}(u_1,u_2)<0)=\pi^{-1}K_0(\beta^2/2),
\end{eqnarray}
where $K_0(\cdot)$ is the modified Bessel function of the second kind and of order $0$.
For this Bessel function one has the following asymptotic approximation (see \citep{Abramowitz}, eq.~9.7.2, p.~378):
\begin{eqnarray}
K_0(z) \sim \sqrt{\pi/z}\cdot e^{-z},\ z\to\infty
\end{eqnarray}
Inserting this into eq.~(\ref{prod}) and then using Mills' ratio
(see \citep{Abramowitz}, eq.~26.2.12, p.~932)
yields
\begin{eqnarray}\label{exact}
\hbox{P}(g_{\beta}(u_1,u_2)<0)\sim \sqrt{2}\cdot \Phi(-\beta), \ \beta\to\infty
\end{eqnarray}
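The agreement between the exact result in eq.~(\ref{prod}) and the asymptotic approximation in eq.~(\ref{exact}) can be checked numerically, for example with the following short Python sketch using SciPy for the Bessel function.
\begin{verbatim}
import numpy as np
from scipy.special import k0
from scipy.stats import norm

for beta in (1.0, 2.0, 3.0, 4.0, 5.0):
    exact = k0(beta**2 / 2.0) / np.pi        # pi^{-1} K_0(beta^2/2)
    asym = np.sqrt(2.0) * norm.cdf(-beta)    # sqrt(2) * Phi(-beta)
    print(f"beta={beta:.1f}  exact={exact:.3e}  "
          f"asymptotic={asym:.3e}  ratio={asym/exact:.3f}")
\end{verbatim}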
With the SORM approach asymptotic approximations can now be computed.
Using the Lagrange multiplier method two beta points can be found, $\ul{u}_1=(\beta/\sqrt{2},\beta/\sqrt{2} )$ and $\ul{u}_2=(-\beta/\sqrt{2},-\beta/\sqrt{2})$.
At these points one has ($\ul{I}_n$ denoting the $n$-dimensional unit matrix):
\begin{eqnarray}\nonumber
\nabla g_{\beta}(\ul{u})=\left(\begin{array}{r} - \frac{\beta}{\sqrt{2}}\\ \\ -\frac{\beta}{\sqrt{2}}\end{array}\right)&,&\
\nabla^2 g_{\beta}(\ul{u})=\left(
\begin{array}{rr} 0 & -1\\-1& 0 \end{array}\right),\
\\ \nonumber
\ul{P}= \left(
\begin{array}{rr} \ein & \ein \\ & \\ \ein & \ein \end{array}\right)&,&\
\ul{I}_2-\ul{P}=\left(
\begin{array}{rr} \ein & -\ein\\ & \\-\ein & \ein \end{array}\right),
\\ \tilde{\ul{H}}&=& \ul{I}_2+\beta |\nabla g_{\beta}(\ul{u})|^{-1}\nabla^2 g_{\beta}(\ul{u})
\end{eqnarray}
The SORM approximation is found by adding the equal contributions from the two beta points.
As described in \citep{Breitung(2015)}, one then obtains as SORM approximation:
\begin{eqnarray}
\hspace{-0.8cm}\hbox{P}(g_{\beta}(u_1,u_2)\!<0)\!\sim\! \frac{2\!\cdot\! \Phi(-\beta)}{\sqrt{\det((\ul{I}_2\!-\!\ul{P})\tilde{\ul{H}}(\ul{I}_2\!-\!\ul{P})+\ul{P})}},\hspace{0.05cm} \beta\to\infty
\end{eqnarray}
Inserting the values found before, one finds:
\begin{eqnarray}
\hbox{P}(g_{\beta}(u_1,u_2)<0)\sim \sqrt{2}\cdot\Phi(-\beta)
,\ \beta\to\infty
\end{eqnarray}
This agrees with the asymptotic approximation for the
exact probability given in eq.~(\ref{exact}).
They are compared in figure~\ref{fig:sormfig}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=5cm]{plotsorm1}
\caption{Left: limit state surface (solid black line) for $\beta=\sqrt{12}$ and design points (red circles), right: asymptotic approximation (solid line) and curve (dotted blue line) fitted from SuS data (red squares)}
\label{fig:sormfig}
\end{center}
\end{figure}
In \citep{Breitung(1994b)}, chap.~6 it is explained that,
given an LSF such that $ \min_{g(\ul{u})=0}|\ul{u}| =1$
and defining
\begin{eqnarray}
F(\beta)=\{\ul{u}; g(\beta \ul{u} )< 0\}, \ P(\beta)=P(F(\beta)),
\end{eqnarray}
one has for this sequence of failure domains that
\begin{eqnarray}
P(\beta)\sim c\beta^b \cdot \Phi(-\beta),\ \beta\to\infty
\end{eqnarray}
with $c$ and $b$ non-negative constants.
Therefore the results of SuS can be used to estimate the constants in these asymptotic approximations with the proviso that these results are unbiased (see section~\ref{bias} for a discussion).
In figure \ref{fig:sormfig}, from four values of the complementary cumulative distribution function estimated by SuS (taken from \citep{Au/Wang(2014)}, p.~165, figure~5.3), marked in the figure by squares, the value of $c$ is estimated ($b$ is zero here) and the corresponding curve is drawn. It agrees quite well with the asymptotic approximation shown as the solid curve.
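Such a fit of the constants $c$ and $b$ can be carried out by linear least squares in log-space; a minimal Python sketch is given below. The data points used here are synthetic placeholders generated from the exact asymptotics, not the values read off from \citep{Au/Wang(2014)}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# synthetic (beta, P(beta)) pairs standing in for SuS estimates of P(F(beta))
betas = np.array([2.0, 2.5, 3.0, 3.5])
p_est = np.sqrt(2.0) * norm.cdf(-betas)      # generated with c = sqrt(2), b = 0

# model: P(beta) = c * beta^b * Phi(-beta)
#   =>  log(P / Phi(-beta)) = log(c) + b * log(beta)
y = np.log(p_est / norm.cdf(-betas))
b, log_c = np.polyfit(np.log(betas), y, 1)
print("fitted c =", np.exp(log_c), "  fitted b =", b)
\end{verbatim}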
In the same way, for stationary Gaussian vector processes $\ul{x}(t)$ results can be found (see \citep{Breitung(1994b)}, chap.~$8$) for outcrossing rates out of limit state surfaces. Here again SuS might be useful for determining the parameter values in the asymptotic formul\ae.
\section{Bias and variance of SuS Estimates}\label{bias}
In the derivation of SuS some assumptions and approximations are made which are not spelt out too clearly.
Let an estimate $\hat{P}$ of a failure probability $P$ be given. Then the mean square error (MSE)
of the estimator is
\begin{eqnarray}\label{MSE}
\mathrm{MSE}(\hat{P})= \mbox{\rm var}(\hat{P})+(P-\hbox{$I\!\!E$}(\hat{P}))^2
\end{eqnarray}
The first term on the rhs is the variance of the estimator and the second one its squared bias.
In the derivation of SuS it is often assumed that the second term in eq.~(\ref{MSE}) can be neglected.
This assumption is based on slightly cavalier arguments that somehow everything goes to infinity and therefore
asymptotic large-sample approximations can be used which state that the bias vanishes. Using asymptotic arguments is a problematic solution for such questions, since the convergence speed to the theoretical asymptotic values cannot be estimated here by explicit error bounds.
A further argument against this is that, since SuS is an iterative method, the bias could accumulate and might be amplified in the later stages.
Therefore such argumentations should always be underpinned by extensive Monte Carlo studies to demonstrate whether the claimed effect is in fact observed in realistic settings.
As far as is known to the author this has not yet been done, so this part of the SuS reasoning still remains slightly speculative.
In \citep{Au/Beck(2001)}, \citep{Au/Wang(2014)} and \citep{Papaioannou(2015)} relations between the coefficient of variation of the estimator $\widehat{\hbox{P}(F)}$ and the conditional probability estimators $\widehat{\hbox{P}(F_i|F_{i-1})}$ are derived in a slightly sketchy way. Here it is attempted to rewrite the basic idea in a more precise
way showing all intermediate steps. The notation is also changed; the usual mathematical notational conventions for probability and statistics are observed instead of the slightly unusual notations in the SuS papers.
The following notation will be used:
$P$ is the unknown failure probability, $P_i=P(F_i|F_{i-1})$ are the conditional probabilities,
the estimators for them are denoted by a hat over the probability, i.e. $\hat{P}_i$. The realizations of the random variables and constants are denoted by lowercase letters, e.g. the realizations of
$\hat{P}_i$ in a run are denoted by $\hat{p}_i$. The mean value of a rv $X$ is denoted by $\hbox{$I\!\!E$}(X)$, its variance by $\mbox{\rm var}(X)$ and the covariance between two rv's $X$ and $Y$ by $\mbox{\rm cov}(X,Y)$ and their correlation by $\mbox{\rm corr}(X,Y)$.
The coefficient of variation of a random variable $X$ is denoted by $c_v(X)=\sqrt{\mbox{\rm var}(X)}/\hbox{$I\!\!E$}(X)$.
The following approximations for moments of functions of random variables are in some texts called the \textit{ delta method} (see \citep{Casella/Berger(2002)}, \cite{Oehlert(1992)}), whereas in \cite{Papoulis(1965)} they are derived in a section \textit{approximate evaluation of moments} without giving them a specific name.
For a sufficiently smooth function $h$ of a random variable $X$ one has in the univariate case the following approximation for the first two moments of the random variable $h(X)$
\begin{eqnarray}
\hbox{$I\!\!E$}(h(X))&\approx &h(\hbox{$I\!\!E$}(X))\\
\mbox{\rm var}(h(X))&\approx & h'(\hbox{$I\!\!E$}(X))^2\cdot \mbox{\rm var}(X)
\end{eqnarray}
In the same way, for a random vector $\ul{X}=(X_1,\ldots,X_n)$ and a smooth function $h$ of it, this takes the form
\begin{eqnarray}\label{multidelta}\nonumber
\hspace*{-0.5cm} \hbox{$I\!\!E$}(h(\ul{X}))&\approx& h(\hbox{$I\!\!E$}(X_1),\ldots,\hbox{$I\!\!E$}(X_n))=h(\hbox{$I\!\!E$}(\ul{X}))\\
\hspace*{-0.5cm} \mbox{\rm var}(h(\ul{X}))&\approx& \sum_{i,j=1}^n h_i(\hbox{$I\!\!E$}(\ul{X}))\ h_j(\hbox{$I\!\!E$}(\ul{X}))\cdot \mbox{\rm cov}(X_i,X_j)
\end{eqnarray}
with $h_i(\ul{x})=\frac{\partial h(\ul{x})}{\partial x_i}$ the partial derivative of $h$ with respect to $x_i$.
These approximations are exact only if the function $h$ is linear and the random variables are normal. Otherwise the approximation errors depend on the non-linearity of the functions, the form of the distributions and --- in the multivariate case --- the dependence between the components of $\ul{X}$.
Therefore information about the quality of this
approximation in SuS can be obtained only by numerical experiments which seem
to be still lacking.
An approximation for the variance of $\hat{P}$ can now be found using the delta method in its multivariate version in eq.~(\ref{multidelta}) and assuming $\hbox{$I\!\!E$}(\hat{P}_i)=P_i$:
\begin{eqnarray}
\hspace*{-0.8cm}\mbox{\rm var}(\hat{P})\approx \mbox{\rm var}\left( \prod_{i=1}^n \hat{P}_i \right) = \sum_{i,j=1}^n\left[ \prod_{k\neq i}P_k \prod_{l\neq j}P_l\right]\mbox{\rm cov}(\hat{P}_i,\hat{P}_j)
\end{eqnarray}
Dividing this by $\hbox{$I\!\!E$}(\hat{P})^2$ and approximating the latter by $(\prod_{i=1}^n P_i)^2$ on the rhs one obtains:
\begin{eqnarray}
\frac{\mbox{\rm var}(\hat{P})}{\hbox{$I\!\!E$}(\hat{P})^2}\approx \sum_{i,j=1}^n \frac{\mbox{\rm cov}(\hat{P}_i,\hat{P}_j)}{P_i\cdot P_j}
\end{eqnarray}
Now using the definition of the $c_v$ this can be written as
\begin{eqnarray}
c_v^2(\hat{P})\approx \sum_{i,j=1}^n c_v(\hat{P}_i)c_v(\hat{P}_j)\cdot \mbox{\rm corr}(\hat{P}_i,\hat{P}_j)
\end{eqnarray}
This equation is given in \citep{Papaioannou(2015)}, but without derivation.
The derivation here shows that some assumptions are involved.
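The quality of this approximation can be examined with a small Monte Carlo experiment; the following Python sketch uses independent binomial-type estimators of the conditional probabilities (so the correlation terms between different factors vanish), which is an assumption made here only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.1, 0.1, 0.1])          # assumed true conditional probabilities
N = 500                                # samples per level
runs = 20_000

x = rng.binomial(N, p, size=(runs, p.size))
p_hat = x / N                          # independent estimators of the P_i
prod = p_hat.prod(axis=1)              # estimator of the failure probability

cv2_empirical = prod.var() / prod.mean() ** 2
cv2_delta = np.sum((p * (1.0 - p) / N) / p**2)   # sum of the squared c_v's
print("empirical squared c_v  :", cv2_empirical)
print("delta-method prediction:", cv2_delta)
\end{verbatim}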
Now, in the papers about SuS the coefficient of variation is used for calculating some sort of confidence intervals.
The problem is that for deriving a meaningful confidence interval it is not enough to calculate the sample mean and sample standard deviation. Additionally the form of the distribution of the sample has to be approximately similar to a sample from a normal distribution. In mathematical statistics confidence intervals are usually derived in some asymptotic context, i.e. the distribution of a sample approaches a normal one as the sample size $N$ goes to infinity; then for large $N$ it is possible to approximate the sample distribution
by a normal one and find an approximate confidence interval. This is done by taking the mean and variance of the sample, fitting a normal distribution to it and calculating for it a confidence interval.
In more applied statistics such intervals are computed only if the form of the sample looks similar to a normal
one according to heuristic considerations, for example visual inspection, or if the third and fourth moments are approximately close to those of a normal distribution. But in any case, an approach which just uses the mean and the standard deviation (coefficient of variation) of a sample for computing confidence intervals without any further considerations is quite problematic (see \citep{Wilcox(2009)}).
Now, for the example in section~3, which is taken from chapter 5 of \citep{Au/Wang(2014)}, $500$ runs of the SuS algorithm were made and then the histogram of the estimators for $\hbox{P}(g(u_1,u_2)<0)$ was plotted.
For this histogram a qq-plot (see e.g. \citep{NISTstat(2012)}) was then made to examine if it follows approximately a normal distribution.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=8cm]{exampleqq}
\caption{a) histogram of data, b) qqplot of data, c) histogram of decimal logarithms of data, d) qqplot of logarithms}
\label{fig:exampleqq}
\end{center}
\end{figure}
One can see from the qq-plot in figure \ref{fig:exampleqq}\ b) that the histogram is quite different from a normal distribution. The recommendation in \citep{NISTstat(2012)} is to use for such skewed data a lognormal or Weibull distribution. If the logarithms are taken, their distribution is approximately normal.
Then it is meaningful to construct a confidence interval based on the standard deviation of the transformed data, whereas a confidence interval based on the first two moments of the untransformed data is not meaningful.
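Such a check can be done, for instance, with SciPy's probability-plot routine; the sketch below uses synthetic lognormal data in place of the actual SuS estimates and compares the qq-plot correlation of the raw and of the log-transformed values.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# synthetic, strongly skewed "failure probability estimates"
p_hat = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=500)

# qq-plot correlation coefficient: values close to 1 indicate approximate normality
_, (_, _, r_raw) = stats.probplot(p_hat, dist="norm")
_, (_, _, r_log) = stats.probplot(np.log10(p_hat), dist="norm")
print("qq-correlation, raw data:", round(r_raw, 4))
print("qq-correlation, log data:", round(r_log, 4))
\end{verbatim}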
\section{Examples}
All the examples were calculated with the SuS algorithm given in \citep{Li/Cao(2016)}.
As parameters, $500$ samples per step, an
acceptance probability of $0.1$ and a chain length of ten were taken. This setup was used in \citep{Au/Wang(2014)} for estimates in the context of two-dimensional domains.
In each figure the left plot, with label a), shows the contour plot of the limit state curve $g(u_1,u_2)=0$ in blue, and the right plot, with label b), shows the graph of the LSF, i.e. the surface $\{(u_1,u_2,g(u_1,u_2))\}$ as a blue mesh.
Also the plane $z=0$ is shown as a translucent grey plane.
The SuS sample points in the diagrams are marked by green points. To improve the readability of the diagrams, only every tenth sample point is shown. Further, in the diagrams on the left the design points for the LSF are shown as red circles.
\subsection{Piecewise Linear Functions}
The simplest cases where SuS approximations might become misleading are piecewise linear functions.
If the decrease velocity of such functions changes in a non-monotonic way in different directions,
the search path of SuS can be led away from the design point.
Consider here two piecewise linear functions $g_1$ and $g_2$ defined by
\begin{eqnarray}
g_1(u_1,u_2)&=&
\begin{cases}
4- u_1 &, u_1 > 3.5 \\
0.85- 0.1\cdot u_1&, u_1 \leq 3.5
\end{cases}
\\
g_2(u_1,u_2)&=&
\begin{cases}
0.5-0.1\cdot u_2 &, u_2 > 2 \\
2.3- u_2&, u_2 \leq 2
\end{cases}
\end{eqnarray}
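For reference, the two component functions and the series-system LSF can be written down directly; a minimal Python sketch (the printed values only check continuity at the break points):
\begin{verbatim}
import numpy as np

def g1(u1, u2):
    return np.where(u1 > 3.5, 4.0 - u1, 0.85 - 0.1 * u1)

def g2(u1, u2):
    return np.where(u2 > 2.0, 0.5 - 0.1 * u2, 2.3 - u2)

def g_star(u1, u2):            # series system: failure if at least one component fails
    return np.minimum(g1(u1, u2), g2(u1, u2))

print(g1(3.5, 0.0), g1(3.5 + 1e-9, 0.0))   # both 0.5
print(g2(0.0, 2.0), g2(0.0, 2.0 + 1e-9))   # both 0.3
print(g_star(0.0, 0.0))                    # positive: the origin lies in the safe domain
\end{verbatim}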
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gllsf}
\caption{SuS for the series system LSF $g^*(u_1,u_2)=\min(g_1,g_2)$}
\label{fig:piecelin}
\end{center}
\end{figure}
The series system defined by these two LSF's has the LSF $g^*(u_1,u_2)=\min(g_1,g_2)$.
The performance of the SuS method for this LSF is shown in figure \ref{fig:piecelin}.
Due to the switching decrease velocities of the two LSF's, the SuS sample points move in the wrong direction after these
switches happen.
\subsection{Extrapolation}
Everybody is extrapolating almost all the time, in science and in private life.
So, extrapolation is a necessary and indispensable tool, but it can also be dangerous in reliability. Well known examples here are the Challenger disaster and the railway accident near Eschede.
In civil engineering, stress-strain curves give a simple example; usually linear in the first part, they become non-linear
after the elastic limit, depending on the material.
So, extrapolating from the linear relationship at the lower stress levels would give wrong values.
Consider now an example in reliability. In many cases it seems to be reasonable to assume that the tail distributions of random variables are different from their distribution in the central part (\citep{Maes(1994b)}, \citep{Acar/Ramu(2014)}). The next example shows the problem of extrapolation with SuS. Let the LSF be given by
\begin{eqnarray}
g(u_1,u_2)=0.1\cdot (52- 1.5\cdot u_1^2- u_2^2).
\end{eqnarray}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfpareto}
\caption{Distribution with Pareto tail (solid curve: limit state curve)}
\label{fig:pareto}
\end{center}
\end{figure}
In the original space the first random variable has a standard normal distribution and the second a standard normal distribution up to $a=3.5$, while for the upper tail a Pareto distribution is fitted. These are then transformed into the standard normal space.
More formally written, in the original space there are two independent rv's $X_1$ and $X_2$, where $X_1$ has a standard normal distribution and $X_2$ has the following CDF $F_2(x_2)$:
\begin{eqnarray}
F_2(x_2)&=&
\begin{cases}
\Phi(x_2) &, x_2 \leq 3.5 \\
1-x_2^c&, x_2 > 3.5
\end{cases}
\end{eqnarray}
with $ c=\log(\Phi(-3.5))/\log(3.5)$.
So the upper tail of this rv has a Pareto distribution.
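The tail exponent $c$ and the transformation $u_2=\Phi^{-1}(F_2(x_2))$ into the standard normal space can be written down directly from the composite CDF above; a sketch:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

a = 3.5
c = np.log(norm.cdf(-a)) / np.log(a)       # chosen so that F2 is continuous at x2 = a

def F2(x2):
    x2 = np.asarray(x2, dtype=float)
    return np.where(x2 <= a, norm.cdf(x2), 1.0 - x2**c)

def to_standard_normal(x2):                # u2 = Phi^{-1}(F2(x2))
    return norm.ppf(F2(x2))

print("c =", c)
print("continuity at a:", F2(a - 1e-9), F2(a + 1e-9))
print("u2 for x2 = 5.0:", to_standard_normal(5.0))
\end{verbatim}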
As one can see in figure~\ref{fig:pareto} the SuS samples move towards the local distance minima of the limit state surface on the horizontal axis, whereas the global minimum lies on the positive vertical axis.
So following the distributional form around the origin leads the algorithm in the wrong direction, since for larger values of $u_2$ the LSF decreases then much faster.
\subsection{Invariance}
An important reason that the Hasofer-Lind index was adopted as a measure for reliability is its invariance under reformulations or re-parametrizations of the underlying reliability problem (see \citep{Hasofer(1974)}).
Also the convergence proofs for the beta point search algorithms do not depend on the specific form of the LSF. But clearly one has to start the search algorithm from different points to obtain all global minimal distance points, i.e. beta points.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=9cm, height=5cm]{gglsflininv}
\caption{Series system defined by LSF in eq.~(\ref{linser})}
\label{fig:lin}
\end{center}
\end{figure}
Consider now a series system consisting of two independent components, so failure occurs if at least one fails.
The first component fails if $u_1>5$ and the second component if $u_2<-4$.
Now, this limit state surface can be the zero set of different LSF's.
For example, one has
\begin{eqnarray}\label{linser}
g(u_1,u_2) =\min \begin{cases}
5-u_1\\
4+u_2
\end{cases}
\end{eqnarray}
Here both LSF's are linear; with SuS one obtains, as expected, an estimate near the asymptotic failure probability approximation $\widehat{P(F)}\approx \Phi(-4)\approx 3.17\cdot 10^{-5}$.
The performance of the SuS method is shown in figure~\ref{fig:lin}.
Assume now that the LSF for the second random variable is given not by a linear but by a logistic function in the form:
\begin{eqnarray}
g_2^*(u_2)=\frac{1}{1+\exp(-2( u_2+4))}-0.5
\end{eqnarray}
Then the LSF $g^*(u_1,u_2)$ given by
\begin{eqnarray}\label{nonser}
g^*(u_1,u_2)=\min\begin{cases}
5-u_1\\
\frac{1}{1+\exp(-2( u_2+4))}-0.5
\end{cases}
\end{eqnarray}
defines the same limit state surface as before, but the shape of the LSF is different and the contour lines of the two functions differ in the safe and unsafe domain; both LSF's have only the zero level set in common.
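That the two formulations share the zero level set but differ away from it can be verified directly by evaluating them at a few points; a sketch:
\begin{verbatim}
import numpy as np

def g(u1, u2):          # linear formulation, eq. (linser) above
    return min(5.0 - u1, 4.0 + u2)

def g_star(u1, u2):     # logistic formulation, eq. (nonser) above
    return min(5.0 - u1, 1.0 / (1.0 + np.exp(-2.0 * (u2 + 4.0))) - 0.5)

for u1, u2 in [(0.0, 0.0), (0.0, -4.0), (5.0, 1.0), (0.0, -6.0)]:
    print(f"u=({u1:+.1f},{u2:+.1f})  g={g(u1, u2):+.3f}  g*={g_star(u1, u2):+.3f}")
\end{verbatim}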
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfin}
\caption{Series system defined by LSF in eq.~(\ref{nonser})}
\label{fig:logistic}
\end{center}
\end{figure}
Here, with the LSF defined in eq.~(\ref{nonser}), the points in SuS converge towards the point $(5,0)$ and one gets as probability estimate a value of $\Phi(-5)\approx 2.87\cdot 10^{-7}$, whereas the true failure probability
is approximately equal to $\Phi(-4)\approx
3.17\cdot 10^{-5}$, as shown in figure \ref{fig:logistic}. So, here the different forms of the LSF's influence the result of the method. The reason is that the structure of the LSF in the neighborhood of the origin is different from its form near the limit state surface.
The same limit state surface can be described by a plethora of different LSF's. Their specific forms will influence the behavior of the SuS algorithm. Especially for more complicated LSF's for series or parallel systems it might be useful to clarify to what extent this can create convergence problems or lead to incorrect results.
Certainly there will be cases where the result will not depend on the changing structures of the LSF's, but as the example above shows, it would be overoptimistic to assume that this is generally so.
\subsection{Changing Topological Structure of Domains}
Another case is when the topological structure of the failure domain changes, for example if its genus changes.
Assume that an LSF is given by a metaball function (\citep{Metaball(2016)}):
\begin{eqnarray}
g(u_1,u_2)=d - \sum_{j=1}^k\frac{c_j}{(u_1-a_j)^2 +(u_2-b_j)^2+1}
\end{eqnarray}
with $k$ a natural number and $a_j,\ b_j$ and $ c_j$ real numbers.
If the parameter $d$ changes, the contours given by $g(u_1,u_2)=0$ take on different shapes.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= 8cm,height=4.5cm]{genus}
\caption{Example for LSF's created by eq.~(\ref{multiball}) }
\label{fig:genus}
\end{center}
\end{figure}
As example consider now a simple metaball function defined by:
\begin{eqnarray}\label{multiball}
g(u_1,u_2)&=&
{{30}\over{\left({{4\,\left(u_1+2\right)^2}\over{9}}+{{u_2^2}\over{25
}}\right)^2+1}}\\ &+&{{20}\over{\left({{\left(u_1-2.5\right)^2}\over{4}}+
{{\left(u_2-0.5\right)^2}\over{25}}\right)^2+1}}-5
\end{eqnarray}
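The metaball LSF of eq.~(\ref{multiball}) can be evaluated directly; the following sketch only checks the sign of $g$ at a few points (positive near the two metaball centres, i.e. in the safe domain, and negative far away, i.e. in the failure domain).
\begin{verbatim}
def g(u1, u2):
    # metaball LSF of eq. (multiball)
    t1 = 30.0 / ((4.0 * (u1 + 2.0) ** 2 / 9.0 + u2 ** 2 / 25.0) ** 2 + 1.0)
    t2 = 20.0 / (((u1 - 2.5) ** 2 / 4.0 + (u2 - 0.5) ** 2 / 25.0) ** 2 + 1.0)
    return t1 + t2 - 5.0

for p in [(0.0, 0.0), (-2.0, 0.0), (2.5, 0.5), (0.0, 8.0), (6.0, 0.0)]:
    print(p, "g =", round(g(*p), 3))
\end{verbatim}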
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfgenus}
\caption{SuS for the LSF in eq.~(\ref{multiball}) }
\label{fig:genusex}
\end{center}
\end{figure}
For parameter values decreasing towards zero, the safe domain consists first of two elliptic regions which then merge into one region.
For larger values of the parameter the failure domain has topological genus two, which then changes to one. To put it more loosely, first there are two holes in it and then only one. This can be seen in figure \ref{fig:genus}.
In figure \ref{fig:genusex} the SuS results for one run for this example are shown.
The sudden change in the topological structure creates difficulties and the sample points move in the wrong direction.
\subsection{Rotationally Unsymmetric LSF's}
The von Mises distribution (see e.g. \citep{Forbes(2011)}) is defined on the unit circle and has there the PDF:
\begin{eqnarray}
f(\varphi)=\exp(\kappa\cos(\varphi-\mu))/(2\pi I_0(\kappa)),\ 0\leq \varphi < 2\pi
\end{eqnarray}
The parameters $\mu$ ($0\leq \mu <2\pi$) and $1/\kappa$ ($0<\kappa$) are location and dispersion measures, respectively, and $I_0(.)$ is the modified Bessel function of order $0$.
Consider now an LSF proportional to a mixture of two non-normalized densities of this type, combined with functions of the distance to the origin having varying decrease velocities:
\begin{eqnarray}\label{cramer}\nonumber
\hspace{-1cm}g(r,\varphi)=0.19- 0.0055\cdot(\Phi(r-0.5)\cdot\exp(4\cdot \cos(\varphi))\\
- 12\cdot (\Phi( 0.004r)-0.5)\cdot \exp(\cos(\varphi-\pi)))
\end{eqnarray}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfcramer}
\caption{SuS for the LSF in eq.~(\ref{cramer}) }
\label{fig:cramer}
\end{center}
\end{figure}
This LSF shows different behaviors in different directions from the origin.
Since the LSF decreases in the direction $\varphi=0$ faster at the beginning and then the decrease slows down,
whereas in the direction $\varphi=\pi$ it is vice versa, the algorithm moves in the wrong direction.
\subsection{Several Beta Points}
\begin{figure*}[h!tb]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=8cm,height=5cm]{gglsfeins}
\caption{SuS detects one beta point}
\label{fig:one}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=8cm,height=5cm]{gglsfzwei}
\caption{ SuS detects two beta points}
\label{fig:two}
\end{subfigure}
\vskip\baselineskip
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=8cm,height=5cm]{gglsfdrei}
\caption{SuS detects three beta points}
\label{fig:three}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=8cm,height=5cm]{gglsfvier}
\caption{ SuS detects four beta points}
\label{fig:four}
\end{subfigure}
\caption{SuS for the LSF $g(u_1,u_2)=15-|u_1\cdot u_2|$}\label{fig:runsus1}
\end{figure*}
Consider now a slightly more complicated version of the example studied in section~3.
Let the LSF be
\begin{eqnarray}
g_{\beta}(u_1,u_2)= \beta^2/2 - |u_1\cdot u_2|.
\end{eqnarray}
Due to the symmetry of the LSF there are four beta points.
In a FORM/SORM analysis, using the results found before, one obtains for the failure probability the asymptotic approximation
$\hbox{P}(g_{\beta}(u_1,u_2)<0)\sim 2\sqrt{2}\cdot\Phi(-\beta) ,\ \beta\to\infty$.
This is obtained by simply doubling the result in eq.~(\ref{exact}).
Here clearly in a SORM analysis the beta point search algorithms have to be started several times to find all beta points.
If this problem is now examined with SuS, the possible outcomes of runs are shown in figure~\ref{fig:runsus1}.
In fifty runs of SuS, in one case only one beta point was detected, in 11 cases two, in 29 cases three, and only in nine cases were all four found.
This might lead to a systematic underestimation of the failure probability when not all beta points are found.
If now several runs are combined, there will still be a bias: the failure probability will be underestimated.
It is unclear to the author how to get a good estimator of the failure probability here without making some sort of geometric analysis similar to FORM/SORM.
\subsection{The black swan example from Au/Wang}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=5cm]{gglsfaubl}
\caption{SuS for the LSF from chapter 5, p.~197 in \citep{Au/Wang(2014)} }
\label{fig:auwang}
\end{center}
\end{figure}
In \citep{Au/Wang(2014)}, chapter 5, there is an example to illustrate a case where SuS has difficulties to reach the failure domain. This example is shown here. One can see that the problem in this case is essentially different from the other problems shown before. The data points move in the right direction; the difficulty is only whether the dispersion of the points is large enough to reach the failure region. Au/Wang propose to analyze this specific problem by looking at the cumulative distribution curves to see if there are any doglegs there. But the other examples treated before require an understanding of the geometry of the failure domains which in general cannot be obtained from the cumulative distribution functions, at least as far as the author knows.
\section{Conclusions}
The subset simulation method --- in the right circumstances --- can give good failure probability estimates, but it has its limitations, as this paper has tried to explain. These problems were illustrated by relatively simple two-dimensional examples to provide an intuitive idea of the possible shortcomings. This should aid a better understanding of SuS. Further, it
should help to clarify where precisely the restrictions of SuS lie and how to detect them for a given problem.
Unfortunately, the bulk of the SuS literature is about increasing the efficiency of the method and not about understanding how it works or fails; as is argued in \citep{Hooker(1995)}, research should concentrate more on this, since less efficient but correct algorithms are certainly more desirable than highly efficient ones which give wrong results. Further, extensive Monte Carlo studies, which are recommended in \citep{Bartz(2010)} for clarifying the performance of algorithms if no general theoretical results can be derived, are lacking.
The examples demonstrate that the points chosen by SuS usually move in the directions of steepest descent of the LSF's near the origin, but later changes in the descent speed of the LSF's and changes in the topological structure of the failure domains may lead the SuS estimators in wrong directions.
A disadvantage of SuS for detecting more complex structures in limit state surfaces seems, in the opinion of the author, to be the underlying idea of extrapolating from failure domains nearer to the origin towards the original limit state surface which is far away from the origin. Here it is taken for granted that the structure does not change essentially during this extrapolation movement. It appears not too easy to justify this assumption in a general way.
So the SuS proponents have the problem that they claim that their method is an advance over FORM/SORM, but with their method they cannot gain information about the limit state surface. This is even declared as a feature of SuS (\citep{subsetsampling(2016)}):\\
\textit{Subset Simulation takes the relationship between the (input) random variables and the (output) response quantity of interest as a 'black-box'.}
Grave problems --- in the opinion of the author --- result from this attempt in the SuS concept to avoid all geometric concepts used in FORM/SORM. The effect is that information which can be gained by modeling the geometric structure of the limit state surface is not used; this can slow down the procedure or prevent correct estimates from being found. In the examples here it is possible to detect this by visual inspection, but in higher dimensions it seems to be possible only by an analysis of the structure of the limit state surface.
The second to last example treats the case of a failure domain having several disjoint subsets, which results in several design points. Here the problem is to identify all these sets/points, which does not always succeed.
As Rackwitz said in \citep{Rackwitz(2001)}:
{\textit{Unfortunately, the presence of multiple critical points is not as infrequent as one might wish.
They can occur in standard space as well as in original space.\ldots
Usually, the most difficult part is to know whether such a problem exists at all.}}
But in the SuS concept this problem does not exist at all, since the structure of the LSF is seen as a black box. Now, consider possible LSF structures and test examples for SuS, where one side plays the \textit{advocatus diaboli} by setting up difficult tests and the other tries to solve them using SuS.
The problem seems at first glance to be similar to the old German fairy tale about the hare and the hedgehog, where here both can play whatever part they want. Even ignoring the problem of multiple beta points, by increasing the number of samples and runs one can always find a good estimator for the failure probability with SuS, but a more complex problem can always be constructed for which still more simulation effort is necessary. Finally one reaches the extent of a full-blown Monte Carlo analysis.
But the analogy is imperfect, since the real problem is that the user of SuS does not see --- if he applies only SuS --- whether his results are incorrect and need improvement. So theoretically, if he knew the correct result in advance, he could improve the SuS estimates by increasing the sample size and/or run length, but this is usually never the case in more realistic problem settings. Since from the results of SuS one obtains no information about possible wrong directions the algorithm has taken, all problems are strapped onto the same Procrustes bed of the method and then the results are registered.
If one assumes that the LSF is given in black box form, not explicitly, then by making a FORM/SORM analysis of the LSF it would be possible to extract all this information about the design points and the curvatures of the limit state surface from the results of this analysis. If instead a SuS analysis is made, the
result would be the histogram of the complementary cumulative distribution function estimating the distribution of the LSF $g(\ul{u})$.
Compared with FORM/SORM, where the identification of the relevant beta points is not too time consuming, the advantages of the SuS method seem to be lost in such circumstances. Certainly, as said, by using more samples the correct solution can be found for all the problems above. But, as already said, it remains unclear when one should increase the sample size and/or the number of runs. And might it be possible, by improving the method, to check whether the probability contributions from all relevant parts of the failure domain have been found? If SuS is seen as a stand-alone method and not as an add-on for FORM/SORM, it appears difficult to achieve all these goals.
So there are some points in the SuS concept where a clarification and more careful elaboration of the approach would be very desirable, and the development of further refinements would be helpful and important.
At the moment, as it stands, the SuS approach seems to be a step backwards concerning the identification of the structure of the failure domain, since this topic is ignored there. This does not necessarily mean that it cannot be resolved by a modified SuS, but if it is possible, it is really time to do so.
The SuS approach is focused on the computation of failure probabilities for a given probabilistic model.
Such has been one of the paradigms of structural reliability for the last fifty years. But the problems studied here are changing and it might be good to reconsider the basic concepts a little. Now, when more and more high-dimensional structures are studied, where no intuitive understanding of the limit states is possible anymore,
is this really what one wants? Everybody knows that the numbers are wrong anyway, so is having an algorithm spitting out numbers/probabilities in fact what the analyst or his client wants and needs?
In \citep{Breitung(2017a)} there is a first attempt to discuss whether a shift of paradigms from the computation of probabilities to the detection of structures might be meaningful. This would lead to a change from seeing the basic problem of structural reliability in calculating probabilities towards investigating the structure of limit state surfaces and failure domains.
To conclude, there are some problems in the SuS approach to structural reliability calculations. These gaps need to be filled to keep this concept a serious contender in this field; otherwise it might be helpful to clarify the limitations of this method by Monte Carlo studies and to avoid applying SuS to problems beyond these limits. Until either the first or the second goal has been achieved, SuS should be applied only --- at least in the opinion of the author --- if the correctness of the procedure can be verified by some external method not based on SuS.
\begin{thebibliography}{10}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\expandafter\ifx\csname href\endcsname\relax
\def\href#1#2{#2} \def\path#1{#1}\fi
\bibitem{Rackwitz/Fiessler(1978)}
R.~Rackwitz, B.~Fiessler, {Structural Reliability under Combined Random Load
Sequences}, Computers and Structures 9 (1978) 489--494.
\bibitem{Hohenbichler(1981)}
M.~Hohenbichler, R.~Rackwitz, {Non-normal dependent vectors in structural
safety}, Journal of the Engineering Mechanics Division ASCE 107~(6) (1981)
1227--1241.
\bibitem{Der/Liu(1986)}
A.~{Der Kiureghian}, P.~Liu,
\href{http://dx.doi.org/10.1061/(ASCE)0733-9399(1986)112:1(85}{{Structural
Reliability under Incomplete Probability Information}}, Journal of
Engineering Mechanics 112~(1) (1986) 85--104.
\newblock \href
{http://arxiv.org/abs/http://dx.doi.org/10.1061/(ASCE)0733-9399(1986)112:1(85)}
{\path{arXiv:http://dx.doi.org/10.1061/(ASCE)0733-9399(1986)112:1(85)}},
\href {http://dx.doi.org/10.1061/(ASCE)0733-9399(1986)112:1(85)}
{\path{doi:10.1061/(ASCE)0733-9399(1986)112:1(85)}}.
\newline\urlprefix\url{http://dx.doi.org/10.1061/(ASCE)0733-9399(1986)112:1(85}
\bibitem{Rackwitz(2001)}
R.~Rackwitz,
\href{http://www.sciencedirect.com/science/article/pii/S0167473002000097}{{Reliability
analysis---a review and some perspectives}}, Structural Safety 23~(4) (2001)
365--395.
\newblock \href {http://dx.doi.org/10.1016/S0167-4730(02)00009-7}
{\path{doi:10.1016/S0167-4730(02)00009-7}}.
\newline\urlprefix\url{http://www.sciencedirect.com/science/article/pii/S0167473002000097}
\bibitem{Breitung(2016)}
K.~Breitung, { Extrapolation, Invariance, Geometry and Subset Sampling}, in:
R.~Caspeele, et~al. (Eds.), {{14th} International Probabilistic Workshop},
Springer Nature, Cham, CH, 2016, pp. 33--44.
\bibitem{Au/Beck(2001)}
S.~K. Au, J.~L. Beck, { Estimation of small failure probabilities in high
dimensions by subset simulation}, Probabilistic Engineering Mechanics 16
(2001) 263--277.
\bibitem{Au/Wang(2014)}
S.-K. Au, Y.~Wang, {Engineering Risk Assessment with Subset Simulation}, John
Wiley \& Sons, Ltd, New York, 2014.
\bibitem{Weisstein(2016)}
E.~W. Weisstein,
\href{http://mathworld.wolfram.com/NormalProductDistribution.html}{{Normal
Product Distribution}}, From MathWorld--A Wolfram Web Resource.
\newline\urlprefix\url{http://mathworld.wolfram.com/NormalProductDistribution.html}
\bibitem{Abramowitz}
M.~Abramowitz, I.~Stegun, {Handbook of Mathematical Functions}, Dover, New
York, 1965.
\bibitem{Breitung(2015)}
K.~Breitung,
\href{http://www.sciencedirect.com/science/article/pii/S0266892015300369}{{40
years FORM: Some new aspects?}}, Probabilistic Engineering Mechanics 42
(2015) 71--77.
\newblock \href {http://dx.doi.org/10.1016/j.probengmech.2015.09.012}
{\path{doi:10.1016/j.probengmech.2015.09.012}}.
\newline\urlprefix\url{http://www.sciencedirect.com/science/article/pii/S0266892015300369}
\bibitem{Breitung(1994b)}
K.~Breitung, {Asymptotic Approximations for Probability Integrals}, Springer,
Berlin, 1994, {L}ecture Notes in Mathematics, Nr.~1592.
\bibitem{Papaioannou(2015)}
I.~Papaioannou, W.~Betz, K.~Zwirglmaier, D.~Straub,
\href{http://www.sciencedirect.com/science/article/pii/S0266892015300205}{{MCMC
algorithms for Subset Simulation}}, Probabilistic Engineering Mechanics 41
(2015) 89--103.
\newblock \href {http://dx.doi.org/10.1016/j.probengmech.2015.06.006}
{\path{doi:10.1016/j.probengmech.2015.06.006}}.
\newline\urlprefix\url{http://www.sciencedirect.com/science/article/pii/S0266892015300205}
\bibitem{Casella/Berger(2002)}
G.~Casella, R.~Berger, {Statistical Inference}, Duxbury, 2002.
\bibitem{Oehlert(1992)}
G.~W. Oehlert,
\href{http://www.tandfonline.com/doi/abs/10.1080/00031305.1992.10475842}{{A
Note on the Delta Method}}, The American Statistician 46~(1) (1992) 27--29.
\newblock \href
{http://arxiv.org/abs/http://www.tandfonline.com/doi/pdf/10.1080/00031305.1992.10475842}
{\path{arXiv:http://www.tandfonline.com/doi/pdf/10.1080/00031305.1992.10475842}},
\href {http://dx.doi.org/10.1080/00031305.1992.10475842}
{\path{doi:10.1080/00031305.1992.10475842}}.
\newline\urlprefix\url{http://www.tandfonline.com/doi/abs/10.1080/00031305.1992.10475842}
\bibitem{Papoulis(1965)}
A.~Papoulis, {Probability, Random Variables and Stochastic Processes}, 1st
Edition, McGraw-Hill Kogakusha, Ltd., Tokyo, 1965.
\bibitem{Wilcox(2009)}
R.~R. Wilcox, {Basic Statistics}, Oxford University Press, 2009.
\bibitem{NISTstat(2012)}
NIST/SEMATECH, \href{http://www.itl.nist.gov/div898/handbook}{{ E-Handbook of
Statistical Methods}}, NIST, 2012.
\newline\urlprefix\url{http://www.itl.nist.gov/div898/handbook}
\bibitem{Li/Cao(2016)}
H.-S. Li, Z.-J. Cao, \href{http://dx.doi.org/10.1007/s00158-016-1414-5}{{Matlab
codes of Subset Simulation for reliability analysis and structural
optimization}}, Structural and Multidisciplinary Optimization (2016)
1--20\href {http://dx.doi.org/10.1007/s00158-016-1414-5}
{\path{doi:10.1007/s00158-016-1414-5}}.
\newline\urlprefix\url{http://dx.doi.org/10.1007/s00158-016-1414-5}
\bibitem{Maes(1994b)}
M.~Maes, K.~Breitung, {Reliability-Based Tail Estimation}, in: P.~Spanos, Y.-T.
Wu (Eds.), {Probabilistic Structural Mechanics: Advances in Structural
Reliability Methods, IUTAM Symposium, San Antonio, TX, June 7-10, 1993},
Springer, New York, 1994, pp. 335--346.
\bibitem{Acar/Ramu(2014)}
E.~Acar, P.~Ramu, {Reliability Estimation Using Guided Tail Modeling with
Adaptive Sampling}, 16th AIAA Non-Deterministic Approaches Conference,
SciTech (2014) 13--17.
\bibitem{Hasofer(1974)}
A.~Hasofer, N.~Lind, {An exact and invariant first-order reliability format},
Journal of the Engineering Mechanics Division ASCE 100~(1) (1974) 111--121.
\bibitem{Metaball(2016)}
Wikipedia, \href{https://en.wikipedia.org/wiki/Metaballs}{{Metaballs}} (2016).
\newline\urlprefix\url{https://en.wikipedia.org/wiki/Metaballs}
\bibitem{Forbes(2011)}
C.~Forbes, M.~Evans, N.~Hastings, B.~Peacock, {Statistical Distributions}, 4th
Edition, J. Wiley \& Sons, 2011.
\bibitem{Hooker(1995)}
J.~Hooker, {Testing Heuristics: We Have It All Wrong }, Journal of Heuristics 1
(1995) 33--42.
\bibitem{Bartz(2010)}
T.~Bartz-Beielstein, et~al. (Eds.), {Experimental Methods for the Analysis of
Optimization Algorithms}, Springer, 2010.
\bibitem{subsetsampling(2016)}
Wikipedia, \href{https://en.wikipedia.org/wiki/Subset_simulation}{{Subset
Simulation}} (2016).
\newline\urlprefix\url{https://en.wikipedia.org/wiki/Subset_simulation}
\bibitem{Breitung(2017a)}
K.~Breitung, { Is there any strength in numbers? On the philosophy of FORM/SORM
and Subset Sampling}, in: C.~Bucher, et~al. (Eds.), {Proceedings of ICOSSAR
2017 12th Int'l Conf. on structural safety and reliability}, 2017, in print.
\end{thebibliography}
\end{document}
\begin{document}
\title{ The House of a Reciprocal Algebraic Integer}
\author{Dragan Stankov}
\curraddr{}
\email{[email protected]}
\thanks{}
\subjclass[2010]{11C08, 11R06, 11Y40}
\date{}
\dedicatory{}
\keywords{Algebraic integer, the house of algebraic integer, maximal modulus, Schinzel-Zassenhaus conjecture,
Mahler measure}
\begin{abstract}
Let $\alpha$ be an algebraic integer of degree $d$, which is reciprocal. The house of $\alpha$ is the largest modulus of its conjugates. We compute the minimum of the houses of all reciprocal algebraic integers of degree $d$ which are not roots of unity, say $\mathrm{mr}(d)$, for $d$ at most 34. We prove several lemmata and use them to avoid unnecessary calculations. The computations suggest several conjectures. The direct consequence of the last one is the conjecture of Schinzel and Zassenhaus.
We demonstrate the utility of the $d$-th power of the house of $\alpha$.
\end{abstract}
\maketitle
\section{Introduction}
Let $\alpha$ be an algebraic integer of degree $d$, with conjugates
$\alpha=\alpha_1, \alpha_2,\ldots,\alpha_d$ and minimal
polynomial $P$. The house of $\alpha$ (and of $P$) is defined by:
\[\house{\alpha} = \max\limits_{1\leq i\leq d}|\alpha_i|.\]
The Mahler measure of $\alpha$ is $M(\alpha) = \prod_{i=1}^{d}
\max(1, |\alpha_i|)$.
Clearly,
$\house{\alpha} \ge 1$, and a theorem of Kronecker \cite{K} tells us that $\house{\alpha} = 1$ if and only if $\alpha$ is a root
of unity. In 1965, Schinzel and Zassenhaus \cite{SZ} made the following conjecture:
\begin{conjecture}[SZ]
There is a constant $c > 0$ such that if $\alpha$ is not a root of unity, then $\house{\alpha}\ge 1 + c/d$.
\end{conjecture}
Let $\mathrm{m}(d)$ denote the minimum of $\house{\alpha}$ over $\alpha$ of degree $d$ which are not roots of unity.
Let an $\alpha$ attaining $\mathrm{m}(d)$ be called extremal.
In 1985, D. Boyd \cite{B} conjectured, using a result of C.J. Smyth \cite{S1}, that $c$ should be equal to $\frac{3}{2} \log \theta$ where $\theta = 1.324717 \ldots$ is the smallest Pisot number, the real root of the polynomial $x^3 - x - 1$. Intending to verify his conjecture that extremal $\alpha$ are always nonreciprocal, Boyd computed the smallest houses for reciprocal polynomials of even degrees $\le 16$. We continued his computation up to even degree $34$. So our Table \ref{table:nu} is an extension of Boyd's Table 2.
Let $\mathrm{mr}(d)$ denote the minimum of $\house{\alpha}$ over reciprocal $\alpha$ of degree $d$ which are not roots of
unity.
Let an $\alpha$ attaining $\mathrm{mr}(d)$ be called extremal reciprocal. A polynomial $P(x)$ is primitive if it cannot be expressed as a polynomial in $x^k$, for some
$k \ge 2$. Clearly, any polynomial of degree $2p$ has to be primitive.
It is easy to verify that $\house{P(x^k)} = \sqrt[k]{\house{P(x)}}$.
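This identity is easy to check numerically, for instance with the following Python sketch based on NumPy's root finder.
\begin{verbatim}
import numpy as np

def house(coeffs):
    """Largest modulus of the roots of the polynomial with the given
    coefficients (highest degree first)."""
    return np.abs(np.roots(coeffs)).max()

def compose_power(coeffs, k):
    """Coefficients of P(x^k) given the coefficients of P(x)."""
    out = []
    for c in coeffs[:-1]:
        out.extend([c] + [0] * (k - 1))
    return out + [coeffs[-1]]

P = [1, 3, 1]                      # x^2 + 3x + 1, house = (3 + sqrt(5))/2
for k in (2, 3, 5):
    print(k, house(compose_power(P, k)), house(P) ** (1.0 / k))
\end{verbatim}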
Then our computations, as summarized in Table \ref{table:nu}, suggest the following:
\begin{conjecture}\label{sec:Sqrt}
If $d \ge 8$ is even and extremal reciprocal $\alpha$ of degree $d$ has minimal polynomial $R_d(x)$ then $\sqrt{\alpha}$ is extremal reciprocal of degree $2d$ and $R_{2d}(x)=R_d(x^2)$.
\end{conjecture}
E. M. Matveev \cite{M} proved the following result:
\begin{theorem}\label{sec:Mat} Let $\alpha$ be an algebraic integer, not a root of unity, and let $d =
\deg(\alpha) \ge 2$. Then
\begin{equation}\label{Mat:1}
\house{\alpha}\ge \exp(\log(d + 0.5)/d^2).
\end{equation}
Moreover, if $\alpha$ is reciprocal and $d\ge 6$, then
\begin{equation}\label{Mat:2}
\house{\alpha}\ge \exp(3\log(d/2)/d^2).
\end{equation}
\end{theorem}
Let $\sigma=1.169283\ldots$ be the house of $x^8+x^5+x^4+x^3+1$. It is shown in Table \ref{table:nu} that $\sigma$ is extremal reciprocal for $d=8$.
Then we obtain the following consequence of Conjecture \ref{sec:Sqrt}: if $d=2^k$ and $k\ge 3$ then
\begin{equation}\label{Mat:3}
\mathrm{mr}(d)=\sigma^{8/d}.
\end{equation}
It is not hard to show that we get from \eqref{Mat:3} a better lower bound than from \eqref{Mat:2}, i.e.
$\sigma^{8/2^k}>\exp(3\log(d/2)/d^2)=(2^{k-1})^{3/(2^{2k})}$. Since $\sigma^5>2$ it follows that
$\sigma^{8/2^k}>2^{8/(5\cdot 2^k)}$. It remains to be shown that
$2^{8/(5\cdot 2^k)}>(2^{k-1})^{3/(2^{2k})}$, which is equivalent to the true inequality $2^{k+3}>15(k-1)$.
The following lemmata can help us to avoid unnecessary calculations.
\begin{lemma}\label{sec:PolType}
If $d\ge 10$ and $P(x)$ is a reciprocal polynomial with coefficients $-1,0,1$ of degree $d$ such that
\begin{equation}\label{Pol:1}
P(x)=x^d-x^{d-1}-x^{d-2}-x^{d-3}-mx^{d-4}+\sum_{k=5}^{d-5}a_{d-k}x^{d-k}-mx^4-x^3-x^2-x+1,
\end{equation}
$m\in\{0,1\}$, then $P(x)$ has a real root $\alpha$ greater than $3/2$ and, consequently, $\house{\alpha}\ge 3/2$.
\end{lemma}
\begin{proof}[Proof]
It is obvious that $P(2)\ge 2^d+1-\sum_{k=1}^{d-1}2^k=2^d+1-2(2^{d-1}-1)=3>0$, so the lemma will be proved if we show that $P(1.5)<0$.
\begin{eqnarray*}
P(1.5)&\le & 1.5^d-1.5^{d-1}-1.5^{d-2}-1.5^{d-3}+\\
& &+\;\sum_{k=5}^{d-5}1.5^{d-k}-1.5^3-1.5^2-1.5+1\\
&=&1.5^d-1.5^{d-1}-1.5^{d-2}-1.5^{d-3}+\\
& &+\;1.5^5\cdot 2(1.5^{d-9}-1)-1.5^3-1.5^2-1.5+1\\
&=&1.5^d-1.5^{d-1}-1.5^{d-2}-1.5^{d-3}+\\
& &+\;2\cdot 1.5^{d-4}-2\cdot 1.5^5-1.5^3-1.5^2-1.5+1\\
&=&1.5^{d-4}(1.5^4-1.5^{3}-1.5^{2}-1.5+2)-21.3125\\
&=&1.5^{d-4}(-0.0625)-21.3125\\
&<&0.
\end{eqnarray*}
\end{proof}
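Lemma \ref{sec:PolType} can also be checked numerically for randomly chosen admissible coefficients; a small Python sketch (the degree $d=16$ and the number of trials are arbitrary choices made here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d = 16                                       # any d >= 10 works

for trial in range(5):
    m = int(rng.integers(0, 2))              # m in {0, 1}
    a = np.zeros(d + 1)                      # a[j] = coefficient of x^j
    a[d] = a[0] = 1
    a[d - 1] = a[d - 2] = a[d - 3] = -1
    a[1] = a[2] = a[3] = -1
    a[d - 4] = a[4] = -m
    for j in range(5, d - 4):                # free middle coefficients in {-1, 0, 1},
        if j <= d - j and a[j] == 0:         # mirrored so the polynomial is reciprocal
            a[j] = a[d - j] = int(rng.integers(-1, 2))
    roots = np.roots(a[::-1])                # np.roots expects highest degree first
    real = roots[np.abs(roots.imag) < 1e-7].real
    print(f"trial {trial}: largest real root = {real.max():.4f} (> 1.5, as claimed)")
\end{verbatim}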
\begin{lemma}\label{sec:PolType2}
If $d\ge 6$ and $P(x)$ is a reciprocal polynomial with coefficients $-2$, $-1$, $0$, $1$, $2$ of degree $d$ such that
\begin{equation}\label{Pol:2}
P(x)=x^d-2x^{d-1}-2x^{d-2}+\sum_{k=3}^{d-3}a_{d-k}x^{d-k}-2x^2-2x+1,
\end{equation}
then $P(x)$ has a real root $\alpha$ greater than $2$ and, consequently, $\house{\alpha}\ge 2$.
\end{lemma}
\begin{proof}[Proof]
It is obvious that $P(3)\ge 3^d+1-2\sum_{k=1}^{d-1}3^k=3^d+1-3(3^{d-1}-1)=4>0$, so the lemma will be proved if we show that $P(2)<0$.
\begin{eqnarray*}
P(2)&\le & 2^d-2\cdot 2^{d-1}-2\cdot 2^{d-2}+\\
& &+\;2\sum_{k=3}^{d-3}2^{d-k}-2\cdot 2^2-2\cdot 2+1\\
&=&-2^{d-1}+2^4(2^{d-5}-1)-2\cdot 2^2-2\cdot 2+1\\
&=&-27\\
&<&0.
\end{eqnarray*}
\end{proof}
\begin{lemma}\label{sec:PolType3}
If $d\ge 10$ and $P(x)$ is a reciprocal polynomial with coefficients $-2$, $-1$, $0$, $1$, $2$ of degree $d$ such that
\begin{equation}\label{Pol:3}
P(x)=x^d-2x^{d-1}-x^{d-2}-mx^{d-3}+\sum_{k=4}^{d-4}a_{d-k}x^{d-k}-mx^3-x^2-2x+1,
\end{equation}
$m\in\{1,2\}$, then $P(x)$ has a real root $\alpha$ greater than $2$ and, consequently, $\house{\alpha}\ge 2$.
\end{lemma}
\begin{proof}[Proof]
At first we show that $P(3)$ is positive:
\begin{eqnarray*}
P(3)&\ge& 3^d+3^{d-2}+3^2+1-2\sum_{k=1}^{d-1}3^k\\
&=&3^d+3^{d-2}+3^2+1-3(3^{d-1}-1)\\
&=&3^{d-2}+3^2+3+1\\
&>&0.
\end{eqnarray*}
The lemma will be proved if we show that $P(2)$ is negative:
\begin{eqnarray*}
P(2)&\le & 2^d-2\cdot 2^{d-1}-2^{d-2}-2^{d-3}+\\
& &+\;2\sum_{k=4}^{d-4}2^{d-k}-2^3-2^2-2\cdot 2+1\\
&=&-2^{d-2}-2^{d-3}+2^4\cdot 2(2^{d-7}-1)-2^3-2^2-4+1\\
&=&-2^{d-3}-2^5-15\\
&<&0.
\end{eqnarray*}
\end{proof}
The obvious consequence of Lemmata \ref{sec:PolType}--\ref{sec:PolType3} is: the Mahler measure of a polynomial of type \eqref{Pol:1}, \eqref{Pol:2}, or \eqref{Pol:3} is greater than $3/2$.
So if we have to find polynomials of small Mahler measure we can omit polynomials of these types.
\section{Polynomials of composite and prime degrees}
Table \ref{table:theta} of Rhin and Wu suggests that if $d\ge 9$ is a composite number then $P_d(x)$ is a nonprimitive polynomial. We add the column $\mathrm{m}^{d}(d)$ to the table which is necessary to present the following lemma, corollary and conjecture.
\begin{lemma}\label{sec:PolProd}
Let $\mathrm{m}(d)$ be attained for $\alpha_d$ with minimal polynomial $P_d(x)$. If $\mathrm{m}^{d_1}(d_1)<\mathrm{m}^{d_2}(d_2)$ then the house of $P_{d_1}(x^{d_2})$ is less than the house of $P_{d_2}(x^{d_1})$.
\end{lemma}
\begin{proof}
If $\mathrm{m}^{d_1}(d_1)<\mathrm{m}^{d_2}(d_2)$ then $\mathrm{m}^{1/d_2}(d_1)<\mathrm{m}^{1/d_1}(d_2)$. Finally we should recall that the house of $P_{d_1}(x^{d_2})$ is equal to $\mathrm{m}^{1/d_2}(d_1)$ and the house of $P_{d_2}(x^{d_1})$ is equal to $\mathrm{m}^{1/d_1}(d_2)$.
\end{proof}
\begin{corollary}\label{sec:composite}
Let $d$ be a composite natural number. Let $\mathrm{m}(b_i)$ be attained for $\alpha_{b_i}$ with minimal polynomial $P_{b_i}(x)$, where the $b_i$, $1 \le b_i <d$, are the natural numbers which are divisors of $d$ such that $P_{b_i}(x)$ is a primitive polynomial, $i=1,2,\ldots,k$.
If $\mathrm{m}^{b_1}(b_1)<\mathrm{m}^{b_2}(b_2)<\cdots<\mathrm{m}^{b_k}(b_k)$ then the nonprimitive polynomial $P_{b_1}(x^{d/b_1})$ has a house which is less than the house of any other nonprimitive polynomial of degree $d$.
\end{corollary}
\begin{proof}
The claim follows straightforwardly from Lemma \ref{sec:PolProd}.
\end{proof}
If $p$ is a prime number then it is obvious that the minimal polynomial of the extremal of degree $p$ is primitive or $P_1(x^p)=x^p-2$. Table \ref{table:theta} of Rhin and Wu suggests that $P_4(x)=x^4+x^3+1$ and $P_8(x)=x^8+x^7+x^4-x^2+1$ are the only primitive minimal polynomials of an extremal of a composite degree.
\begin{conjecture}\label{sec:compositeConj}
Let $d$ be a composite natural number and let $p_1,p_2,\ldots,p_k$ be odd prime numbers which are divisors of $d$ or $p_i=t$, $i=1,2,\ldots,k$, where $t$ is defined in the following manner:
$t:=1$;
if $4 \mid d$ and $8 \nmid d$ then $t:=4$;
if $8 \mid d$ then $t:=8$.
\noindent Let $\mathrm{m}^{p_1}(p_1)<\mathrm{m}^{p_2}(p_2)<\cdots<\mathrm{m}^{p_k}(p_k)\le 2$. If $P_{p_i}(x)$ is the minimal polynomial of the extremal of degree $p_i$ then $P_d(x)=P_{p_1}(x^{d/p_1})$ and $\mathrm{m}(d)=\house{P_{p_1}(x^{d/p_1})}$.
\end{conjecture}
If the previous conjecture is true then we just need to determine $\mathrm{m}(d)$ for $d$ a prime. If $d$ is a composite number we can easily calculate $\mathrm{m}(d)=\mathrm{m}^{p_1/d}(p_1)$ where $p_1$ is determined as in Conjecture \ref{sec:compositeConj}.
\begin{lemma}\label{sec:prime5Mod6}
Let $d\ge 5$ be a natural number such that $d\equiv 5\;(\mathrm{mod}\;6)$. Let $P_d(x)$ be defined
\begin{equation}\label{prime5Mod6:1}
P_d(x):=(x^{d+2}-x^2-1)/(x^2-x+1).
\end{equation}
Then $P_d(x)$ is a polynomial which has a real root $1<a_d<\sqrt[d]{2}$ such that
$$\lim_{d\rightarrow\infty}a_d^d= 2.$$
\end{lemma}
\begin{proof}
It can be proved by mathematical induction that $$P_d(x)=(x^5+x^4-x^2-x)(x^{d-5}+x^{d-11}+\cdots+1)-1.$$
It is obvious that $P_d(1)=-1$ and $P_d(\sqrt[d]{2})=(2(\sqrt[d]{2})^2-(\sqrt[d]{2})^2-1)/((\sqrt[d]{2})^2-\sqrt[d]{2}+1)>0$. Hence, there is a real root $a_d\in(1,\sqrt[d]{2})$ of $P_d(x)$. It follows from $a_d^{d+2}-a_d^2-1=0$ that $a_d^{d}=1+1/a_d^2$. Since $a_d$ tends to $1$ we conclude that $a_d^{d}$ tends to $1+1/1^2=2$ when $d$ tends to $\infty$.
\end{proof}
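The convergence $a_d^d\to 2$ can also be observed numerically; a small Python sketch (the root is located by bisection on the equivalent equation $x^{d+2}-x^2-1=0$, and the listed degrees are sample values congruent to $5$ modulo $6$):
\begin{verbatim}
def a_d(d):
    """Real root in (1, 2**(1/d)) of x**(d+2) - x**2 - 1, found by bisection."""
    f = lambda x: x ** (d + 2) - x ** 2 - 1.0
    lo, hi = 1.0, 2.0 ** (1.0 / d)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for d in (5, 11, 17, 23, 101, 1001):
    print(d, a_d(d) ** d)                   # tends to 2 as d grows
\end{verbatim}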
It is proved in \cite{B} that $a_d$ is greater than any of its conjugates. Hence $a_d=\house{P_d(x)}$.
Table \ref{table:theta} of Rhin and Wu for $d=17$ and $d=23$ and the last two lemmata suggest the following
\begin{conjecture}\label{sec:primeConj5Mod6}
The extremal $\alpha$ of prime degree $d\ge 17$ such that $d\equiv 5\;(\mathrm{mod}\;6)$ has the minimal
polynomial $P_d(x)$ defined in Lemma \ref{sec:prime5Mod6}.
\end{conjecture}
Our attempt to generalize the minimal polynomial $(x^{22}-x^{11}-x+1)/((x-1)(x^2+1))$ of extremal $\alpha$ of degree $d= 19$ to prime degree $d\ge 19$ such that $d\equiv 7\;(\mathrm{mod}\;12)$ failed because the house of
$(x^{d+3}-x^{(d+3)/2}-x+1)/((x-1)(x^2+1))$ is greater than $\sqrt[d]{2}$ when $d\ge 19$. Therefore the next question arises: is there any prime $d\ne 2$ such that $\mathrm{m}^d(d)=2$? And, if there is such a $d$, is then $\mathrm{m}^{d^2}(d^2)<2$? These questions are closely related to the following
\begin{lemma}\label{sec:mddBounded}
The sequence $(\mathrm{m}^d(d))_{d\ge 1}$ is bounded and $2$ is an upper bound.
\end{lemma}
\begin{proof}
If $\mathrm{m}(d)$ is attained for $\alpha_d$ then $\mathrm{m}(d)=\house{\alpha_d}\le \house{x^d-2}= \sqrt[d]{2}$. The claim follows straightforwardly if we raise both sides of the inequality to the power $d$.
\end{proof}
\begin{corollary}\label{sec:mddAccumulation}
The sequence $(\mathrm{m}^d(d))_{d\ge 1}$ has an accumulation point in $[1,2]$.
\end{corollary}
\begin{proof}
The claim is a direct consequence of Lemma \ref{sec:mddBounded} and the Bolzano-Weierstrass Theorem.
\end{proof}
The last few lemmata and corollaries show that $\house{\alpha}^d$ can play an important role
in the research of algebraic integers analogously to the Mahler measure. The obvious benefit is that we can exclude nonprimitive polynomials because $\house{\alpha}^d=\house{\sqrt[k]{\alpha}}^{kd}$. Also it is interesting to ask the Lehmer question whether there exists a positive number $\epsilon$ such that
if $\alpha$ is neither 0 nor a root of unity, then $\house{\alpha}^d \ge 1+\epsilon$.
Whether there is an accumulation point of the sequence $(\mathrm{m}^d(d))_{d\ge 1}$ less than $2$ is another interesting question. We suggest that $\house{\alpha}^d$ be called the \textbf{powerhouse} of $\alpha$ and denoted by $\mathrm{ph}(\alpha)$.
\begin{lemma}\label{sec:PolProdRec}
Let $\mathrm{mr}(d)$ be attained for $\alpha_d$ with minimal reciprocal polynomial $R_d(x)$. Let $k_1$, $k_2$ be integers and $d_1$, $d_2$ be even integers such that $k_1d_1=k_2d_2$. If $\mathrm{mr}^{d_1}(d_1)<\mathrm{mr}^{d_2}(d_2)$ then the house of $R_{d_1}(x^{k_1})$ is less than the house of $R_{d_2}(x^{k_2})$.
\end{lemma}
\begin{proof}
If $\mathrm{mr}^{d_1}(d_1)<\mathrm{mr}^{d_2}(d_2)$ then, raising both sides to the power $1/(k_1d_1)=1/(k_2d_2)$, we get $\mathrm{mr}^{1/k_1}(d_1)<\mathrm{mr}^{1/k_2}(d_2)$. It remains to recall that the house of $R_{d_1}(x^{k_1})$ is equal to $\mathrm{mr}^{1/k_1}(d_1)$ and the house of $R_{d_2}(x^{k_2})$ is equal to $\mathrm{mr}^{1/k_2}(d_2)$.
\end{proof}
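As a worked instance, take $d_1=10$, $d_2=14$, $k_1=7$, $k_2=5$, so that $k_1d_1=k_2d_2=70$; using the values $\mathrm{mr}^{10}(10)=3.268\ldots$ and $\mathrm{mr}^{14}(14)=3.513\ldots$ from Table \ref{table:sigma}, the lemma gives
$$\house{R_{10}(x^{7})}=\mathrm{mr}^{1/7}(10)\approx 1.0171<1.0181\approx \mathrm{mr}^{1/5}(14)=\house{R_{14}(x^{5})},$$
both being nonprimitive reciprocal polynomials of degree $70$.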
\begin{corollary}\label{sec:compositeRec}
Let $d/2$ be a composite natural number. Let $\mathrm{mr}(b_i)$ be attained for a reciprocal $\alpha_{b_i}$ with minimal polynomial $R_{b_i}(x)$, where the $b_i$, $1 \le b_i <d$, $i=1,2,\ldots,k$, are the divisors of $d$ such that $R_{b_i}(x)$ is a primitive polynomial.
If $\mathrm{mr}^{b_1}(b_1)<\mathrm{mr}^{b_2}(b_2)<\cdots<\mathrm{mr}^{b_k}(b_k)$ then the nonprimitive polynomial $R_{b_1}(x^{d/b_1})$ has house less than the house of any other nonprimitive polynomial of degree $d$.
\end{corollary}
\begin{proof}
The claim follows straightforwardly from Lemma \ref{sec:PolProdRec}.
\end{proof}
If $p$ is a prime number then it is obvious that the minimal polynomial of the extremal reciprocal of degree $2p$ is either primitive or equal to $R_2(x^p)=x^{2p}+3x^p+1$. Table \ref{table:nu} suggests that $R_8(x)$, $R_{12}(x)$, and $R_{18}(x)$ are the only primitive minimal polynomials of an extremal reciprocal of a degree $d$ such that $d/2$ is a composite number.
\begin{conjecture}\label{sec:compositeConjRec}
Let $d$ be a composite natural number and let $p_1,p_2,\ldots,p_k$ be the odd prime numbers which are divisors of $d$, or $p_i\in\{s,t\}$, $i=1,2,\ldots,k$, where $s=9$ and $t$ is defined in the following manner:
$t:=1$;
if $8 \mid d$ and $12 \nmid d$ then $t=4$;
if $12 \mid d$ then $t=6$.
Let $\mathrm{mr}^{2p_1}(2p_1)<\mathrm{mr}^{2p_2}(2p_2)<\cdots<\mathrm{mr}^{2p_k}(2p_k)$.
If $R_{2p_i}(x)$ is the minimal polynomial of the extremal of degree $2p_i$ then $R_{d}(x)=R_{2p_1}(x^{d/(2p_1)})$ and $\mathrm{mr}(d)=\house{R_{2p_1}(x^{d/(2p_1)})}$.
\end{conjecture}
If the previous conjecture is true then we just need to determine $\mathrm{mr}(d)$ when $d/2$ is a prime number. If $d/2$ is a composite number we can easily calculate $\mathrm{mr}(d)=\mathrm{mr}^{2p_1/d}(2p_1)$, where $p_1$ is determined as in Conjecture \ref{sec:compositeConjRec}.
\begin{lemma}\label{sec:mddBoundedRec}
The sequence $(\mathrm{mr}^d(d))_{d\ge 1}$ is bounded and $U=6.854102\ldots$ is an upper bound.
\end{lemma}
\begin{proof}
If $\mathrm{mr}(d)$ is attained for $\alpha_d$ then $$\mathrm{mr}(d)=\house{\alpha_d}\le \house{x^d+3x^{d/2}+1}= \sqrt[d/2]{2.618\ldots}.$$ The claim follows straightforwardly if we raise both sides of the inequality to the power $d$.
\end{proof}
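For completeness, the value of $U$ can be obtained directly: substituting $y=x^{d/2}$, the roots of $x^d+3x^{d/2}+1$ satisfy $y^2+3y+1=0$, so
$$\house{x^d+3x^{d/2}+1}=\left(\frac{3+\sqrt{5}}{2}\right)^{2/d},
\qquad\text{and}\qquad
\left(\frac{3+\sqrt{5}}{2}\right)^{2}=\frac{7+3\sqrt{5}}{2}=6.854101\ldots=U.$$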
\begin{corollary}\label{sec:mddAccumulationRec}
In the interval $[1,U]$ there is an accumulation point of the sequence $(\mathrm{mr}^d(d))_{d\ge 1}$.
\end{corollary}
\begin{proof}
The claim is a direct consequence of Lemma \ref{sec:mddBoundedRec} and the Bolzano--Weierstrass theorem.
\end{proof}
\section{Results}
\begin{table}[!htbp]
\caption{Extreme values of $\house{\alpha}$ for reciprocal $\alpha$ of even degree $d\le 34$. The minimum $\mathrm{mr}(d)$ is attained for an $\alpha$ with minimal polynomial $R_d(x)$ having $\nu$ roots outside the unit circle.}
\label{table:nu}
\begin{tabular}{rllllll}
\hline\noalign{
}
$d$ & $\nu$ & $\mathrm{mr}(d)$ & $R_d(x)$ \\ [0.5ex]
\noalign{
}\hline\noalign{
}
2 & 1 & 2.61803398874989 & 1 3 \\
4 & 2 & 1.53922233842043 & 1 1 3 \\
6 & 2 & 1.32166315615906 & 1 2 2 1\\
8 & 2 & 1.16928302978955 & 1 0 0 1 1\\
10 & 2 & 1.12571482154239 & 1 0 1 1 0 1\\
12 & 2 & 1.10805485364877 & 1 1 1 0 -1 -1 -1\\
14 & 4 & 1.09390168574961 & 1 0 0 0 1 1 0 1\\
16 & 4 & 1.08133391225354 & $R_8(x^2)$\\
18 & 4 & 1.07185072135591 & 1 0 1 1 1 2 1 2 2 1\\
20 & 4 & 1.06099708837602 & $R_{10}(x^2)$\\
22 & 4 & 1.06621758541355 & 1 1 0 -1 0 0 0 0 0 -1 0 1\\
24 & 4 & 1.05264184490679 & $R_{12}(x^2)$\\
26 & 8 & 1.05784846909829 & 1 0 0 1 0 -1 0 0 -1 -1 1 0 0 2\\
28 & 8 & 1.04589755031246 & $R_{14}(x^2)$\\
30 & 6 & 1.04026214469874 & $R_{10}(x^3)$\\
32 & 8 & 1.03987206532993 & $R_8(x^4)$\\
34 & 8 & 1.04961810533324 & 1 0 1 1 0 1 0 0 0 0 0 0 1 0 1 1 0 1\\
\noalign{
}\hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Let $\theta = 1.3247\ldots$ be the real root
of $x^3 - x-1$. Extreme values of $\house{\alpha}$ for $\alpha$ of degree $d\le 28$, calculated by Rhin and Wu \cite{RW2}, are greater than or equal to $\theta^{3/(2d)}$.}
\label{table:theta}
\begin{tabular}{rllllll}
\hline\noalign{
}
$d$ & $\mathrm{m}(d)$ & & $\theta^{3/(2d)}$ & $\mathrm{m}^d(d)$ & coefficients of $P_d(x)$\\ [0.5ex]
\noalign{
}\hline\noalign{
}
1 & 2 & $>$ & 1.524703 & 2 & 1 -2\\
2 & 1.41421356 & $>$ & 1.234788 & 2 & 1 0 -2\\
3 & 1.15096392 & $=$ & 1.150964 & 1.524703 & 1 1 0 -1\\
4 & 1.18375181 & $>$ & 1.111210 & 1.963553 & 1 1 0 0 1\\
5 & 1.12164517 & $>$ & 1.088020 & 1.775323 & 1 0 -1 -1 1 1\\
6 & 1.07282986 & $=$ & 1.072830 & 1.524703 & $P_3(x^2)$\\
7 & 1.09284559 & $>$ & 1.062110 & 1.861708 & 1 1 0 0 1 -1 -1\\
8 & 1.07562047 & $>$ & 1.054140 & 1.791730 & 1 1 0 0 1 0 -1 0 1\\
9 & 1.04798219 & $=$ & 1.047982 & 1.524703 & $P_3(x^3)$\\
10 & 1.05907751 & $>$ & 1.043082 & 1.775323 & $P_5(x^2)$\\
11 & 1.05712485 & $>$ & 1.039090 & 1.842422 & 1 1 0 0 1 1 0 -1 0 1 0 -1\\
12 & 1.03577500 & $=$ & 1.035775 & 1.524703 & $P_3(x^4)$\\
13 & 1.05372001 & $>$ & 1.032978 & 1.974367 & 1 0 -1 0 1 0 -1 -1 1 1 -1 -1 1 1\\
14 & 1.04539255 & $>$ & 1.030587 & 1.861708 & $P_7(x^2)$\\
15 & 1.02851905 & $=$ & 1.028519 & 1.524703 & $P_3(x^5)$\\
16 & 1.03712124 & $>$ & 1.026713 & 1.791730 & $P_8(x^2)$\\
17 & 1.03930211 & $>$ & 1.025122 & 1.925798 & 1 1 0 -1 -1 0 1 1 0 -1 -1 0\\
&&&&& 1 1 0 -1 -1 -1\\
18 & 1.02371001 & $=$ & 1.023710 & 1.524702 & $P_3(x^6)$\\
19 & 1.03641032 & $>$ & 1.022448 & 1.972890 & 1 1 0 0 1 1 0 0 1 1 0 -1 0\\
&&&&& 1 0 -1 0 1 0 -1\\
20 & 1.02911491 & $>$ & 1.021314 & 1.775323 & $P_5(x^4)$\\
21 & 1.02028875 & $=$ & 1.020289 & 1.524703 & $P_3(x^7)$\\
22 & 1.02816577 & $>$ & 1.019358 & 1.842422 & $P_{11}(x^2)$\\
23 & 1.02932014 & $>$ & 1.018508 & 1.943841 & 1 1 0 -1 -1 0 1 1 0 -1 -1 0\\
&&&&& 1 1 0 -1 -1 0 1 1 0 -1 -1 -1\\
24 & 1.01773032 & $=$ & 1.017730 & 1.524703 & $P_3(x^8)$\\
25 & 1.02322489 & $>$ & 1.017015 & 1.775323 & $P_5(x^5)$\\
26 & 1.02650865 & $>$ & 1.016355 & 1.974367 & $P_{13}(x^2)$\\
27 & 1.01574486 & $=$ & 1.015745 & 1.524703 & $P_3(x^9)$\\
28 & 1.02244440 & $>$ & 1.015178 & 1.861708 & $P_7(x^4)$\\
\noalign{
}\hline
\end{tabular}
\end{table}
In Table \ref{table:nu} we list irreducible, reciprocal, integer polynomials of even degree at most 34 having the smallest house.
We add the column $\theta^{3/(2d)}$ to Table \ref{table:theta}; it suggests the following
\begin{conjecture}[SZB]
There is a constant $T > 1$ such that if $\alpha$ is neither $0$ nor a root of unity, then $\house{\alpha}\ge T^{1/d}$.
\end{conjecture}
It is easy to show that the conjecture of Schinzel and Zassenhaus is a direct consequence of
the previous conjecture. In order to expand $T^{1/d}$ as a Taylor series in $1/d$, we use the known Taylor series of the function $T^x$. Thus \[T^{\frac{1}{d}}=1+\frac{\log(T)}{d}+\frac{\log^2(T)}{2!d^2}+\cdots+\frac{\log^k(T)}{k!d^k}+\cdots.\]
If $T=\theta^{3/2}$ and if we take only two terms of the series we get the conjecture of Schinzel and Zassenhaus with Boyd's \cite{B} suggestion for $c$.
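For example, with $T=\theta^{3/2}$ the first two terms of the series give
$$\theta^{3/(2d)}=1+\frac{3\log\theta}{2d}+O\!\left(\frac{1}{d^{2}}\right)\approx 1+\frac{0.4218\ldots}{d},$$
that is, the Schinzel--Zassenhaus bound $\house{\alpha}\ge 1+c/d$ with $c=\tfrac{3}{2}\log\theta$.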
\begin{table}[!htbp]
\caption{Let $\tau=1.125715\ldots$ be the house of $x^{10}+x^8+x^7+x^5+x^3+x^2+1$. Extreme values of $\house{\alpha}$ for reciprocal $\alpha$ of even degree $d\le 34$ are greater than or equal to $\tau^{10/d}$.}
\label{table:sigma}
\begin{tabular}{rllllll}
\hline\noalign{
}
$d$ & $\mathrm{mr}(d)$ & & $\tau^{10/d}$ & $\mathrm{mr}^d(d)$\\ [0.5ex]
\noalign{
}\hline\noalign{
}
2 & 2.618034 & $>$ & 1.807765 & 6.854102\\
4 & 1.539222 & $>$ & 1.344531 & 5.613134\\
6 & 1.321663 & $>$ & 1.218187 & 5.329969\\
8 & 1.169283 & $>$ & 1.159539 & 3.494276\\
10 & 1.125715 & $=$ & 1.125715 & 3.268014\\
12 & 1.108055 & $>$ & 1.103715 & 3.425588\\
14 & 1.093902 & $>$ & 1.088265 & 3.513145\\
16 & 1.081334 & $>$ & 1.076819 & 3.494276\\
18 & 1.071851 & $>$ & 1.068000 & 3.486723\\
20 & 1.060997 & $=$ & 1.060997 & 3.268014\\
22 & 1.066218 & $>$ & 1.055301 & 4.098345\\
24 & 1.052642 & $>$ & 1.050578 & 3.425588\\
26 & 1.057848 & $>$ & 1.046599 & 4.514652\\
28 & 1.045898 & $>$ & 1.043199 & 3.513145\\
30 & 1.040262 & $=$ & 1.040262 & 3.268014\\
32 & 1.039872 & $>$ & 1.037699 & 3.494276\\
34 & 1.049618 & $>$ & 1.035443 & 5.188773\\
\noalign{
}\hline
\end{tabular}
\end{table}
In the following tables we list irreducible, reciprocal, integer polynomials of even degree at most 34 having small house. If $d=2p$ where $p$ is a prime number then all found polynomials are primitive; otherwise we mark a primitive polynomial with the symbol P.
A polynomial which has small Mahler measure, less than $1.3$, is marked with the symbol M. The list is known to be complete only through degree 20. For $d>22$ only polynomials of height one have been completely investigated.
\begin{table}[!htbp]
\caption{Irreducible, reciprocal, integer polynomials with even degree at most 34 having small house.}
\label{table:small}
\begin{tabular}{rlrllll}
\hline\noalign{
}
$d$ & House & Out & Coefficients \\ [0.5ex]
\noalign{
}\hline\noalign{
}
2 & 2.61803398874989 & 1 & 1 3 \\
\\
4 & 1.53922233842043 & 2 & 1 1 3 P\\
4 & 1.61803398874989 & 2 & 1 0 3 \\
\\
6 & 1.32166315615906 & 2 & 1 2 2 1\\
6 & 1.32471795724474 & 3 & 1 1 -1 -3\\
6 & 1.33076841869444 & 2 & 1 1 0 -2\\
6 & 1.33950727686218 & 2 & 1 0 2 1\\
\\
8 & 1.16928302978955 & 2 & 1 0 0 1 1 P\\
8 & 1.17050710134464 & 2 & 1 1 1 0 -1 P\\
8 & 1.18375181855821 & 4 & 1 1 0 1 3 P\\
8 & 1.21502361972591 & 2 & 1 2 2 1 1 P\\
8 & 1.21962614693622 & 2 & 1 1 1 0 1 P\\
\\
10 & 1.12571482154239 & 2 & 1 0 1 1 0 1 M\\
10 & 1.13295293839656 & 2 & 1 1 0 0 0 -1 M\\
10 & 1.14208745799486 & 2 & 1 1 0 -1 0 0\\
10 & 1.16703006058662 & 2 & 1 0 0 0 1 1\\
10 & 1.17004216879649 & 2 & 1 0 0 1 0 -1\\
10 & 1.17628081825992 & 1 & 1 1 0 -1 -1 -1 M\\
\\
12 & 1.10805485364877 & 2 & 1 1 1 0 -1 -1 -1 PM\\
12 & 1.11850195225747 & 2 & 1 1 0 0 0 -1 -1 PM\\
12 & 1.12445269119837 & 2 & 1 0 1 0 0 1 -1 PM\\
12 & 1.12742072023266 & 4 & 1 1 1 1 1 2 3 P\\
12 & 1.12819252128504 & 2 & 1 0 1 1 1 2 1 PM\\
12 & 1.13861753595063 & 4 & 1 1 1 1 2 2 3 P\\
12 & 1.14103240247447 & 2 & 1 1 0 -1 0 1 1 P\\
12 & 1.14211801611167 & 2 & 1 0 0 1 -1 -1 1 P\\
12 & 1.14460531348308 & 2 & 1 2 2 1 1 1 1 P\\
\\
14 & 1.09390168574961 & 4 & 1 0 0 0 1 1 0 1\\
14 & 1.09663696733953 & 4 & 1 1 0 0 1 0 -1 -1\\
14 & 1.09873127474994 & 4 & 1 1 0 -1 0 1 1 1\\
14 & 1.10540085265079 & 3 & 1 1 1 0 -1 -1 -1 -1 M\\
14 & 1.10912255228309 & 4 & 1 0 0 0 -1 0 1 1\\
14 & 1.11020596746828 & 4 & 1 1 1 1 1 0 1 1\\
14 & 1.11132960322928 & 4 & 1 1 2 2 2 2 1 1\\
14 & 1.11141077514688 & 4 & 1 0 -1 0 1 1 0 -2\\
14 & 1.11157496383649 & 3 & 1 1 1 1 0 -1 -2 -3\\
\noalign{
}\hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Irreducible, reciprocal, integer polynomials with even degree at most 34 having small house.
}
\label{table:small1}
\begin{tabular}{rlrllll}
\hline\noalign{
}
$d$ & House & Out & Coefficients \\ [0.5ex]
\noalign{
}\hline\noalign{
}
16 & 1.08133391225354 & 4 & 1 0 0 0 0 0 1 0 1\\
16 & 1.08189976492494 & 4 & 1 0 1 0 1 0 0 0 -1\\
16 & 1.08568941631979 & 4 & 1 1 1 1 0 0 -1 -2 -1 P\\
16 & 1.08800359308148 & 8 & 1 0 1 0 0 0 1 0 3\\
16 & 1.09054731172112 & 4 & 1 0 -1 -1 0 2 1 -1 -1 P\\
16 & 1.09145310961609 & 4 & 1 0 -1 0 0 1 1 -1 -1 P\\
16 & 1.09341867119317 & 4 & 1 1 1 2 2 2 2 2 3 P\\
16 & 1.09441893214119 & 4 & 1 1 0 0 0 0 1 0 -1 P\\
\\
18 & 1.07185072135591 & 4 & 1 0 1 1 1 2 1 2 2 1 P\\
18 & 1.07715254391892 & 4 & 1 1 0 -1 0 0 -1 -1 1 2 P\\
18 & 1.08350235040111 & 4 & 1 0 1 0 0 1 0 1 1 1 P\\
18 & 1.08507352195696 & 4 & 1 0 0 1 1 0 0 2 1 -1 P\\
18 & 1.08914119715632 & 4 & 1 0 0 1 0 1 0 0 1 -1 P\\
18 & 1.09054731172112 & 4 & 1 -1 0 0 0 1 -1 0 1 -1 P\\
18 & 1.09059677435683 & 6 & 1 0 1 1 1 1 2 2 1 3 P\\
18 & 1.09151857842220 & 4 & 1 0 0 1 0 0 1 -1 -1 1 P\\
18 & 1.09217083605099 & 6 & 1 1 0 -1 -1 1 2 1 0 -1 P\\
18 & 1.09282468746958 & 3 & 1 1 0 0 0 -1 -1 0 0 -1 PM\\
18 & 1.09381566687105 & 4 & 1 0 1 1 1 1 1 1 1 0 P\\
\\
20 & 1.06099708837602 & 4 & 1 0 0 0 1 0 1 0 0 0 1\\
20 & 1.06440262043860 & 4 & 1 0 1 0 0 0 0 0 0 0 -1\\
20 & 1.06554639211891 & 4 & 1 1 0 -1 -1 -1 -1 -1 0 2 3 PM\\
20 & 1.06868491988746 & 4 & 1 0 1 0 0 0 -1 0 0 0 0\\
20 & 1.07086533169145 & 6 & 1 1 0 -1 -1 -1 0 0 0 1 2 P\\
20 & 1.07888517088957 & 8 & 1 1 0 0 1 1 0 1 2 0 -1 P\\
20 & 1.08029165533508 & 4 & 1 0 0 0 0 0 0 0 1 0 1\\
20 & 1.08081406854476 & 4 & 1 1 1 0 0 0 0 -1 -1 0 0 P\\
20 & 1.08093254714134 & 4 & 1 1 0 0 0 -1 -1 0 0 0 1 PM\\
20 & 1.08100667136043 & 4 & 1 1 0 0 0 0 0 -1 0 1 1 P\\
20 & 1.08111514762666 & 4 & 1 0 0 1 0 0 0 0 -1 -1 1 P\\
20 & 1.08168487499664 & 4 & 1 0 0 0 0 0 1 0 0 0 -1\\
20 & 1.08205695902988 & 4 & 1 1 -1 -1 1 0 -1 0 0 0 1 P\\
20 & 1.08213153864244 & 4 & 1 0 -1 1 0 -2 1 1 -1 0 1 P\\
20 & 1.08215867145095 & 6 & 1 1 1 0 -1 -2 -1 -1 1 1 1 P\\
20 & 1.08222782056950 & 4 & 1 1 0 -1 0 1 1 0 0 0 1 P\\
20 & 1.08228216492799 & 6 & 1 0 -1 0 1 1 -1 0 2 0 -1 P\\
20 & 1.08286885593631 & 6 & 1 -1 1 0 0 0 1 -1 0 1 -1 P\\
\noalign{
}\hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Irreducible, reciprocal, integer polynomials with even degree at most 34 having small house.
}
\label{table:small2}
\begin{tabular}{rlrllll}
\hline\noalign{
}
$d$ & House & Out & Coefficients \\ [0.5ex]
\noalign{
}\hline\noalign{
}
22 & 1.06621758541355 & 4 & 1 1 0 -1 0 0 0 0 0 -1 0 1 M\\
22 & 1.06827041313888 & 6 & 1 1 1 0 -1 -1 0 1 2 1 -1 -1\\
22 & 1.06843153438173 & 7 & 1 1 1 1 1 0 0 -1 -1 -2 -1 -1\\
22 & 1.06849271893547 & 7 & 1 0 1 0 1 0 0 0 -1 0 -2 1\\
22 & 1.06857505098600 & 8 & 1 0 -1 0 1 1 -1 -1 1 2 0 -3\\
22 & 1.07151243860039 & 4 & 1 0 1 1 0 1 0 1 1 1 2 1\\
22 & 1.07266460893982 & 5 & 1 1 1 0 -1 -1 -1 0 0 0 0 -1\\
22 & 1.07448519196034 & 6 & 1 1 0 -1 0 1 1 0 0 0 0 -1\\
22 & 1.07483796674177 & 4 & 1 1 0 0 0 -1 -1 0 0 0 1 1 M\\
22 & 1.07534302358553 & 4 & 1 1 1 0 -1 -1 0 0 1 0 -1 -1\\
22 & 1.07656105250927 & 6 & 1 0 0 1 0 0 0 1 1 0 1 1\\
22 & 1.07711967842550 & 9 & 1 0 1 0 1 0 1 1 0 2 -1 1\\
22 & 1.07719162675888 & 4 & 1 1 1 0 0 -1 -1 -1 0 1 2 2\\
22 & 1.07798582029041 & 7 & 1 1 1 1 0 0 0 -1 0 -1 -2 -2\\
\\
24 & 1.05264184490679 & 4 & 1 0 1 0 1 0 0 0 -1 0 -1 0 -1\\
24 & 1.05351295923098 & 6 & 1 0 0 0 0 0 0 0 0 1 0 0 1\\
24 & 1.05388045665602 & 6 & 1 0 0 1 0 0 1 0 0 0 0 0 -1\\
24 & 1.05759252657036 & 4 & 1 0 1 0 0 0 0 0 0 0 -1 0 -1\\
24 & 1.05784057193422 & 12 & 1 0 0 1 0 0 0 0 0 1 0 0 3\\
24 & 1.06003424557321 & 4 & 1 0 1 1 0 2 0 1 1 0 1 0 1 PM\\
24 & 1.06040213654932 & 4 & 1 0 0 0 1 0 0 0 0 0 1 0 -1\\
24 & 1.06177436224626 & 6 & 1 1 1 1 0 -1 -2 -2 -2 -1 1 2 3 P\\
24 & 1.06180069703907 & 8 & 1 0 1 0 1 0 1 0 1 0 2 0 3\\
24 & 1.06216407455959 & 4 & 1 0 0 0 1 0 1 0 1 0 2 0 1\\
24 & 1.06490580489257 & 4 & 1 1 1 0 0 0 0 -1 -1 0 0 0 -1 P\\
24 & 1.06535138762690 & 4 & 1 1 1 0 -1 -1 -1 0 0 0 0 0 1 PM\\
24 & 1.06537706223156 & 6 & 1 1 0 -1 -2 -1 1 2 2 0 -1 -1 -1 P\\
\\
26 & 1.05784846909829 & 8 & 1 0 0 1 0 -1 0 0 -1 -1 1 0 0 2\\
26 & 1.05968760806902 & 8 & 1 1 0 0 0 0 1 0 -1 0 1 1 1 1\\
26 & 1.06184735727122 & 6 & 1 0 1 0 0 1 0 1 1 0 1 0 0 1\\
26 & 1.06277446310360 & 10 & 1 0 0 0 0 0 1 1 0 0 0 -1 1 1\\
26 & 1.06342599606179 & 6 & 1 1 0 0 0 0 0 0 1 1 1 0 -1 -1\\
26 & 1.06345648424260 & 6 & 1 0 0 0 0 1 1 0 0 0 1 1 0 1\\
26 & 1.06559111842191 & 6 & 1 1 0 0 1 0 -1 0 1 -1 -1 0 0 -1\\
26 & 1.06596578704523 & 6 & 1 1 1 0 0 0 1 1 1 0 -1 -1 0 0\\
26 & 1.06619413895030 & 6 & 1 1 0 0 0 0 0 -1 0 1 0 0 0 -1\\
26 & 1.06644267309866 & 6 & 1 0 0 0 0 1 0 1 1 0 0 0 1 1\\
26 & 1.06666365977337 & 4 & 1 1 0 0 0 -1 -1 0 1 1 1 1 0 -1 M\\
\noalign{
}\hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Irreducible, reciprocal, integer polynomials with even degree at most 34 having small house.
}
\label{table:small3}
\begin{tabular}{rlrllll}
\hline\noalign{
}
$d$ & House & Out & Coefficients \\ [0.5ex]
\noalign{
}\hline\noalign{
}
28 & 1.04589755031246 & 8 & 1 0 0 0 0 0 0 0 1 0 1 0 0 0 1\\
28 & 1.04720435796435 & 8 & 1 0 1 0 0 0 0 0 1 0 0 0 -1 0 -1\\
28 & 1.04820383263464 & 8 & 1 0 1 0 0 0 -1 0 0 0 1 0 1 0 1\\
28 & 1.05138045095521 & 6 & 1 0 1 0 1 0 0 0 -1 0 -1 0 -1 0 -1\\
28 & 1.05314887470058 & 8 & 1 0 0 0 0 0 0 0 -1 0 0 0 1 0 1\\
28 & 1.05366311858595 & 8 & 1 0 1 0 1 0 1 0 1 0 0 0 1 0 1\\
28 & 1.05419618820658 & 8 & 1 0 1 0 2 0 2 0 2 0 2 0 1 0 1\\
28 & 1.05423468693972 & 8 & 1 0 0 0 -1 0 0 0 1 0 1 0 0 0 -2\\
28 & 1.05431255509763 & 6 & 1 0 1 0 1 0 1 0 0 0 -1 0 -2 0 -3\\
28 & 1.05616339145825 & 6 & 1 0 1 0 0 0 0 0 0 0 -1 0 -1 0 -1\\
28 & 1.05637230762463 & 6 & 1 1 1 0 0 -1 -1 -1 0 0 0 0 1 0 0 P\\
28 & 1.05798761666627 & 8 & 1 0 2 0 2 0 1 0 0 0 0 0 2 0 3\\
28 & 1.05910355609770 & 6 & 1 0 -1 0 0 1 0 -1 1 0 0 -1 -1 1 1 P\\
\\
30 & 1.04026214469874 & 6 & 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1\\
30 & 1.04248694101431 & 6 & 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 -1\\
30 & 1.04528115508851 & 6 & 1 0 0 1 0 0 0 0 0 -1 0 0 0 0 0 0\\
30 & 1.04978612425248 & 6 & 1 0 1 1 1 2 1 3 2 3 3 3 4 3 4 3 PM\\
30 & 1.05283588953315 & 6 & 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1\\
30 & 1.05374090226554 & 6 & 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 -1\\
30 & 1.05561042764362 & 3 & 1 0 0 1 0 0 0 0 0 -1 0 0 -1 0 0 -1\\
30 & 1.05736311561234 & 10 & 1 0 0 0 0 2 0 0 0 0 2 0 0 0 0 1\\
30 & 1.05737367134034 & 6 & 1 1 0 0 0 -1 -1 0 0 0 1 1 0 0 0 -1 PM\\
30 & 1.05785144758134 & 15 & 1 0 0 0 0 1 0 0 0 0 -1 0 0 0 0 -3\\
30 & 1.05836091217175 & 6 & 1 1 0 0 0 0 0 0 1 0 -1 0 0 0 1 1 P\\
30 & 1.05867431886766 & 6 & 1 0 1 0 0 0 -1 0 -1 1 0 1 1 0 1 -1 P\\
\\
32 & 1.03987206532993 & 8 & 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1\\
32 & 1.04014410776822 & 8 & 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 -1\\
32 & 1.04196421067126 & 8 & 1 0 1 0 1 0 1 0 0 0 0 0 -1 0 -2 0 -1\\
32 & 1.04307410718581 & 16 & 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 3\\
32 & 1.04429273277234 & 8 & 1 0 0 0 -1 0 1 0 0 0 -2 0 1 0 1 0 -1\\
32 & 1.04472633240294 & 8 & 1 0 0 0 -1 0 0 0 0 0 1 0 1 0 -1 0 -1\\
32 & 1.04566661570176 & 8 & 1 0 1 0 1 0 2 0 2 0 2 0 2 0 2 0 3\\
32 & 1.04614479501702 & 8 & 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 -1\\
32 & 1.04891831290646 & 8 & 1 0 0 0 0 0 1 0 -1 0 0 0 0 0 -1 0 1\\
32 & 1.04989575593276 & 8 & 1 0 0 0 2 0 0 0 2 0 0 0 1 0 0 0 1\\
32 & 1.05025977911128 & 8 & 1 0 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1\\
32 & 1.05069363916743 & 8 & 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 -1\\
32 & 1.05077230417714 & 8 & 1 0 -1 1 0 -1 1 -1 -1 1 -1 0 1 -1 1 1 -1 P\\
\noalign{
}\hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Irreducible, reciprocal, integer polynomials with even degree at most 34 having small house.
}
\label{table:small4}
\begin{tabular}{rlrllll}
\hline\noalign{
}
$d$ & House & Out & Coefficients \\ [0.5ex]
\noalign{
}\hline\noalign{
}
\\
34 & 1.04961810533324 & 8 & 1 0 1 1 0 1 0 0 0 0 0 0 1 0 1 1 0 1\\
34 & 1.05022062041836 & 7 & 1 1 1 1 0 -1 -2 -2 -2 -1 1 2 3 2 1 -1 -3 -3 M\\
34 & 1.05071690069432 & 8 & 1 1 0 -1 0 1 1 0 0 0 0 0 1 1 0 -1 0 0\\
34 & 1.05082250196013 & 8 & 1 1 0 0 0 0 1 1 0 0 0 0 1 0 -1 0 1 1\\
34 & 1.05105473087034 & 6 & 1 0 1 0 0 0 0 0 1 1 1 1 0 1 0 1 1 1\\
34 & 1.05115446958173 & 8 & 1 0 0 1 0 -1 1 0 -1 0 1 0 0 1 1 -1 0 1\\
34 & 1.05136643237339 & 6 & 1 1 1 0 0 0 1 0 0 -1 0 0 1 0 0 -1 0 -1\\
34 & 1.05182663296743 & 8 & 1 0 -1 0 1 1 -1 -1 1 1 0 0 0 0 1 1 0 -1\\
34 & 1.05221475176357 & 7 & 1 1 0 0 0 -1 -1 0 0 0 1 1 0 0 0 -1 -1 -1 M\\
34 & 1.05372780022456 & 8 & 1 0 0 1 0 0 0 0 0 -1 0 0 -1 1 1 0 1 1\\
34 & 1.05394569820733 & 8 & 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0\\
34 & 1.05406220416025 & 8 & 1 1 1 1 0 0 0 -1 0 1 0 1 0 -1 1 0 0 1\\
\noalign{
}\hline
\end{tabular}
\end{table}
\end{document} |
\begin{document}
\title
{\large
\textbf{Augmented Thresholds for MONI}
}
\author{
C\'esar Mart\'inez-Guardiola$^{\ast}$, Nathaniel K. Brown$^{\dag}$, Fernando Silva-Coira$^{\ast}$,\\Dominik K\"{o}ppl$^{\ddag}$, Travis Gagie$^{\dag}$ and Susana Ladra$^{\ast}$ \\[0.5em]
{\small\begin{minipage}{\linewidth}\begin{center}
\begin{tabular}{ccccc}
$^{\ast}$Universidade da Coru\~na & \hspace*{0.25in} & $^{\dag}$Dalhousie U & \hspace*{0.25in} & $^{\ddag}$TMDU \\
CITIC, A Coru\~na, Spain && Halifax, Canada && Tokyo, Japan \\
\url{{first.last}@udc.es} && \url{{first.last}@dal.ca} && \url{[email protected]} \\
\end{tabular}
\end{center}\end{minipage}}}
\maketitle
\thispagestyle{empty}
\begin{abstract}
MONI (Rossi et al., 2022) can store a pangenomic dataset $T$ in small space and later, given a pattern $P$, quickly find the maximal exact matches (MEMs) of $P$ with respect to $T$. In this paper we consider its one-pass version (Boucher et al., 2021), whose query times are dominated in our experiments by longest common extension (LCE) queries. We show how a small modification lets us avoid most of these queries and thus significantly speeds up MONI in practice while only slightly increasing its size.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
The FM-index~\cite{FM00} is one of the most successful compact data structures and DNA alignment has been its ``killer app'', with FM-based aligners such as Bowtie~\cite{LTPS09,LS12} and BWA~\cite{LD09} racking up tens of thousands of citations and seeing everyday use in labs and clinics worldwide. Standard FM-indexes can handle only a few human genomes at once, however, and geneticists now realize that aligning against only a few standard references biases their research results and medical diagnoses~\cite{Beg19}. Among other concerns, this bias undermines personalized medicine particularly for people from ethnic groups --- such as African, Central/South Asian, Indigenous, Latin American and Middle Eastern populations --- whose genotypes are not reflected well in the standard references or even in public databases of genomes~\cite{LAWRB18}. Countries such as China~\cite{WWLLTGFZLZ+08} and Denmark~\cite{MJPSLVSBTI+17} have assembled their own reference sequences, but it is not clear whether and how we can do this fairly for multi-ethnic populations. Bioinformaticians and data-structure designers have therefore been looking for ways to index models that better capture the genetic diversity of whole species, especially humanity. The most publicized approach so far is building and indexing pangenome graphs~\cite{SMCNEMSHCC+21}, but we can also try scaling FM-indexes up to handle a dozen or so representative sample genomes~\cite{CSMIL21} or, more ambitiously, to handle even thousands of genomes at once. Indexing thousands of genomes at once is technically challenging, of course, but it should give us different functionality than pangenome graphs.
M\"akinen et al.~\cite{SVMN08,MNSV10} initiated the study of indexing massive genomic datasets with their index based on the run-length compressed Burrows-Wheeler Transform (RLBWT), which stores such a pangenomic dataset $T [1..n]$ in space proportional to the number $r$ of runs in the BWT of $T$ and allows us to quickly {\em count} the number of exact matches of any pattern $P [1..m]$ in $T$. Policriti and Prezza~\cite{PP18} showed that, if we augment M\"akinen et al.'s index with the entries of the suffix array (SA) sampled at BWT run boundaries, then we can quickly locate {\em one} of $P$'s matches in $T$. Gagie, Navarro and Prezza~\cite{GNP20} then showed how we can store that SA sample such that we can quickly locate {\em all} of $P$'s matches in $T$. (For the sake of brevity, we assume the reader is familiar with the BWT, SA, etc.; otherwise, we refer them to M\"akinen et al.'s~\cite{MBCT15} and Navarro's~\cite{Nav16} texts.) Gagie et al.\ called their data structure the $r$-index, after its $O (r)$ space bound; Nishimoto and Tabei~\cite{NT21} recently sped it up to answer queries in optimal time when $T$ is over a $\mathrm{polylog} (n)$-sized alphabet, while still using $O (r)$ space. Boucher et al.~\cite{BGKLMM19,KMBGLM20} showed how we can build an $r$-index efficiently in practice using a technique they called prefix-free parsing (PFP).
Because approximate pattern matching is often more important in bioinformatics than exact matching, Bannai, Gagie and I~\cite{BGI20} designed a version of the $r$-index that can efficiently find maximal exact matches (MEMs), which are commonly used for approximate pattern matching in tools such as BWA-MEM~\cite{Li13}. Bannai et al.'s is not a true $r$-index because it requires fast random access to $T$ and we do not know how to support that in worst-case $O (r)$ space, but Gagie et al.~\cite{GMNST19,GMNSST20} showed how we can use PFP to build a straight-line program (SLP) for $T$ that gives us this random access and in practice takes significantly less space than the $r$-index itself. The key idea behind Bannai et al.'s index is to store the positions of $r$ thresholds in the RLBWT, one between each consecutive pair of runs of the same character, but they did not give an algorithm for finding those thresholds. Rossi et al.~\cite{ROLGB22} showed that we can choose the thresholds based on the longest common prefix (LCP) array and build Bannai et al.'s index efficiently with PFP. They implemented it in a tool called MONI (Finnish for ``multi'', as it indexes many genomes at once), and demonstrated its practicality for pangenomic alignment.
By default, MONI makes two passes over $P$, one right-to-left and then the other left-to-right. Boucher et al.~\cite{BGIKLMNPR21} noted, however, that by using the SLP to support longest common extension (LCE) queries instead of random access, MONI can run in one pass. For long patterns, MONI in two-pass mode buffers a significant amount of data during its first pass, so switching to one-pass mode reduces its workspace and allows us to run more queries in parallel. We can also use one-pass MONI for applications that are inherently online, such as recognizing and ejecting non-target DNA strands from nanopore sequencers~\cite{ARKSGBL21}. Even though one-pass MONI processes most characters in $P$ without LCE queries, the LCE queries it does compute still take most of the query time~\cite{BGIKLMNPR21}. We show in this paper that by precomputing and storing two LCE values for each threshold, in practice we can avoid many of those queries and thus significantly speed up one-pass MONI while increasing its size only slightly.
\section{MONI}
\label{sec:MONI}
Bannai et al.\ defined a {\em threshold} between two consecutive runs $\ensuremath{\mathrm{BWT}} [s_1..e_1]$ and $\ensuremath{\mathrm{BWT}} [s_2..e_2]$ of the same character, to be a position $t$ with $e_1 < t \leq s_2$ such that $\ensuremath{\mathrm{LCE}} (e_1, k) \geq \ensuremath{\mathrm{LCE}} (k, s_2)$ for $k < t$, and $\ensuremath{\mathrm{LCE}} (e_1, k) \leq \ensuremath{\mathrm{LCE}} (k, s_2)$ for $k \geq t$. (Rossi et al.'s construction is based on the observation that we can set $t$ to the position of a minimum in $\ensuremath{\mathrm{LCP}} [e_1 + 1..s_2]$.) Bannai et al.\ showed how adding these thresholds to an $r$-index lets us compute MEMs by computing the matching statistics $\ensuremath{\mathrm{MS}} [1..m]$ of $P$ with respect to $T$, where the $i$th matching statistics $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}}$ and $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}$ are defined such that
\[T \left[ \rule{0ex}{2ex} \ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}}..\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} + \ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}} - 1 \right]
= P [i..i + \ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}} - 1]\]
and $P [i..i + \ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}]$ does not occur in $T$. In other words, $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}}$ is a pointer to the starting position in $T$ of a longest match for $P [i..m]$ and $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}$ is the length of that match, where a {\em longest match} for $P [i..m]$ is an occurrence in $T$ of the longest prefix of $P [i..m]$ that occurs in $T$.
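For intuition only, the matching statistics of a small example can be computed by brute force directly from this definition (this is, of course, not how MONI computes them); a minimal sketch in Python, using $0$-based indices:
\begin{verbatim}
# Brute-force matching statistics: for each i, MS[i] = (pos, len) where
# T[pos .. pos+len-1] equals the longest prefix of P[i..] occurring in T.
# Purely illustrative -- quadratic time, never touches an index.
def matching_statistics(T, P):
    ms = []
    for i in range(len(P)):
        best_pos, best_len = 0, 0
        for j in range(len(T)):
            l = 0
            while i + l < len(P) and j + l < len(T) and P[i + l] == T[j + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        ms.append((best_pos, best_len))
    return ms

print(matching_statistics("GATTACAT", "ATTAGA"))
\end{verbatim}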
Suppose we have already computed $\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}$ and the position $j$ of $T [\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}} - 1]$ in the BWT. If $\ensuremath{\mathrm{BWT}} [j] = P [i]$, then $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} = \ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}} - 1$ and we can continue once we compute the position $\ensuremath{\mathrm{LF}} (j)$ of $T [\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} - 1]$ in the BWT. Otherwise, let $\ensuremath{\mathrm{BWT}} [e]$ be the last occurrence of $P [i]$ before $\ensuremath{\mathrm{BWT}} [j]$, and $\ensuremath{\mathrm{BWT}} [s]$ be the first occurrence of $P [i]$ after $\ensuremath{\mathrm{BWT}} [j]$. By the definitions of the BWT and thresholds, if $\ensuremath{\mathrm{BWT}} [j]$ is strictly above the threshold between $\ensuremath{\mathrm{BWT}} [e]$ and $\ensuremath{\mathrm{BWT}} [s]$, then a prefix of $T [\ensuremath{\mathrm{SA}} [e]..n]$ is a longest match for $P [i..m]$; otherwise, a prefix of $T [\ensuremath{\mathrm{SA}} [s]..n]$ is a longest match for $P [i..m]$. Since $\ensuremath{\mathrm{BWT}} [e]$ is the end of a run and $\ensuremath{\mathrm{BWT}} [s]$ is the start of a run, we have $\ensuremath{\mathrm{SA}} [e]$ and $\ensuremath{\mathrm{SA}} [s]$ stored. Therefore, depending on whether $\ensuremath{\mathrm{BWT}} [j]$ is above or below the threshold, either we can ``jump up'' from $\ensuremath{\mathrm{BWT}} [j]$ to $\ensuremath{\mathrm{BWT}} [e]$ and set $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} = \ensuremath{\mathrm{SA}} [e]$ (so the position of $T [\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} - 1]$ in the BWT is $\ensuremath{\mathrm{LF}} (e)$), or we can ``jump down'' from $\ensuremath{\mathrm{BWT}} [j]$ to $\ensuremath{\mathrm{BWT}} [s]$ and set $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} = \ensuremath{\mathrm{SA}} [s]$ (so the position of $T [\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}} - 1]$ in the BWT is $\ensuremath{\mathrm{LF}} (s)$).
By default, MONI makes a right-to-left pass over $P$ to compute $\ensuremath{\mathrm{MS}} [1..m].\ensuremath{\mathrm{pos}}$, and then a left-to-right pass over $P$ to compute $\ensuremath{\mathrm{MS}} [1..m].\ensuremath{\mathrm{len}}$. If we use the SLP to support LCE queries instead of random access, however, then we need only one pass over $P$. To see why, suppose that when we compute $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{pos}}$, we have already computed $\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}}$ as well as $\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}$. If we jump up from $\ensuremath{\mathrm{BWT}} [j]$ to $\ensuremath{\mathrm{BWT}} [e]$, then
\begin{eqnarray}
\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}
& = & \min \left( \rule{0ex}{2ex} \ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}, \ensuremath{\mathrm{SA}} [e]), \ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}} \right) + 1\,;
\label{eqn:jump_up}
\end{eqnarray}
if we jump down from $\ensuremath{\mathrm{BWT}} [j]$ to $\ensuremath{\mathrm{BWT}} [s]$, then
\begin{eqnarray}
\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}
& = & \min \left( \rule{0ex}{2ex} \ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}, \ensuremath{\mathrm{SA}} [s]), \ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}} \right) + 1\,.
\label{eqn:jump_down}
\end{eqnarray}
In fact, if we compute both $\ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}, \ensuremath{\mathrm{SA}} [e])$ and $\ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}}, \ensuremath{\mathrm{SA}} [s])$, then we need not check the threshold between $\ensuremath{\mathrm{BWT}} [e]$ and $\ensuremath{\mathrm{BWT}} [s]$ at all. MONI stores the thresholds in order to use only one LCE query for each jump, because the thresholds collectively do not take much space compared to the RLBWT and the SA samples, and the LCE queries are slow compared to the LF-steps.
\section{Augmented Thresholds}
\label{sec:augmented_thresholds}
In practice, MONI's jumps and resultant LCE queries tend to occur in bunches: if a character $P [i]$ is a sequencing error or a variation not in $T$, then we will probably jump for $P [i]$, find a short longest match, and then also jump for several more characters of $P$ in rapid succession, until the longest matches are finally long enough again to reorient us in the BWT. Because the lengths of the longest matches can increase by at most one for each character of $P$ we process, most of the comparisons in Equations~\ref{eqn:jump_up} and~\ref{eqn:jump_down} between the LCE values and the length of the current longest match will simply return the length of the current match. This observation led us to wonder if all those LCE queries are really necessary.
\begin{figure}
\caption{Suppose we want to compute $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{len}}$.}
\label{fig:example}
\end{figure}
Suppose that, at the threshold $t$ between two consecutive runs $\ensuremath{\mathrm{BWT}} [s_1..e_1]$ and $\ensuremath{\mathrm{BWT}} [s_2..e_2]$ of the same character, we store $\ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{SA}} [e_1], \ensuremath{\mathrm{SA}} [t - 1])$ and $\ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{SA}} [t], \ensuremath{\mathrm{SA}} [s_2])$. Furthermore, suppose we later want to compute $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}}$ for some $i$ such that $P [i] = \ensuremath{\mathrm{BWT}} [e_1] = \ensuremath{\mathrm{BWT}} [s_2]$ and the position $j$ of $T [\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{pos}} - 1]$ in the BWT is between $e_1 + 1$ and $s_2 - 1$. If $j < t$ and
\[\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}} \leq \ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{SA}} [e_1], \ensuremath{\mathrm{SA}} [t - 1])\,,\]
or $j \geq t$ and
\[\ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}} \leq \ensuremath{\mathrm{LCE}} (\ensuremath{\mathrm{SA}} [t], \ensuremath{\mathrm{SA}} [s_2])\]
then, as illustrated in Figure~\ref{fig:example}, we can safely set $\ensuremath{\mathrm{MS}} [i].\ensuremath{\mathrm{len}} = \ensuremath{\mathrm{MS}} [i + 1].\ensuremath{\mathrm{len}} + 1$ without using an LCE query. Algorithm~\ref{alg:augmented} shows how these values are used to compute $\ensuremath{\mathrm{MS}} [1..m]$ for a given pattern $P[1..m]$ by storing the thresholds alongside these ``threshold LCEs''.
\begin{algorithm}[!ht]
\caption{Computes \ensuremath{\mathrm{MS}}\ using a variation of one-pass MONI~\cite{BGIKLMNPR21} which stores augmented thresholds (thresholds and thr\_lce arrays)}
\label{alg:augmented}
\begin{algorithmic}[1]
\State $j \gets \ensuremath{\mathrm{BWT}}.\ensuremath{\mathrm{select}}_{P[m]}(1)$
\State $\ensuremath{\mathrm{MS}}[m] \gets (\ensuremath{\mathrm{pos}}: \ensuremath{\mathrm{SA}}[j], \ensuremath{\mathrm{len}}: 1)$
\For {$i = m-1$ \textbf{down to} $1$}
\If {$\ensuremath{\mathrm{BWT}}[j] = P[i]$}
\State $\ensuremath{\mathrm{MS}}[i] \gets (\ensuremath{\mathrm{pos}}: \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{pos}} - 1, \ensuremath{\mathrm{len}}: \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}} + 1)$
\Else
\State $c \gets \ensuremath{\mathrm{BWT}}.\ensuremath{\mathrm{rank}}_{P[i]}(j)$
\State $e_1 \gets \ensuremath{\mathrm{BWT}}.\ensuremath{\mathrm{select}}_{P[i]}(c)$
\State $s_2 \gets \ensuremath{\mathrm{BWT}}.\ensuremath{\mathrm{select}}_{P[i]}(c + 1)$
\State $x \gets \ensuremath{\mathrm{BWT}}.\text{run\_of\_position}(s_2)$ \Comment{Position $s_2$ belongs to the $x$th run}
\State $t \gets \text{thresholds}[x]$
\If {$j < t$}
\Comment $\text{thr\_lce}_e$ stores $\ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[e_1], \ensuremath{\mathrm{SA}}[t - 1])$
\If {$\ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}} \leq \text{thr\_lce}_e[x]$}
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{len}} \gets \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}} + 1$
\Else
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{len}} \gets \min(\ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}}, \ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[e_1], \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{pos}})) + 1$
\EndIf
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{pos}} \gets \ensuremath{\mathrm{SA}}[e_1]$
\State $j \gets \ensuremath{\mathrm{LF}}(e_1)$
\Else
\Comment $\text{thr\_lce}_s$ stores $\ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[t], \ensuremath{\mathrm{SA}}[s_2])$
\If {$\ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}} \leq \text{thr\_lce}_s[x]$}
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{len}} \gets \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}} + 1$
\Else
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{len}} \gets \min(\ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{len}}, \ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[s_2], \ensuremath{\mathrm{MS}}[i+1].\ensuremath{\mathrm{pos}})) + 1$
\EndIf
\State $\ensuremath{\mathrm{MS}}[i].\ensuremath{\mathrm{pos}} \gets \ensuremath{\mathrm{SA}}[s_2]$
\State $j \gets \ensuremath{\mathrm{LF}}(s_2)$
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
Threshold LCEs can be computed using \ensuremath{\mathrm{LCE}}\ queries and SA samples, but their relationship to thresholds allows us to compute both simultaneously. Recall that Rossi et al. observed that we can set $t$ to be the position of $\min(\ensuremath{\mathrm{LCP}}[e_1+1..s_2])$ using a range-minimum query (RMQ), which they support space-efficiently through PFP and a range-minimum data structure over the \ensuremath{\mathrm{LCP}}\ array~\cite{ROLGB22}. We can also define \ensuremath{\mathrm{LCE}}\ queries as RMQs over the \ensuremath{\mathrm{LCP}}\ array~\cite{INT10}, such that $\ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[e_1], \ensuremath{\mathrm{SA}}[t - 1]) = \min(\ensuremath{\mathrm{LCP}}[e_1+1..t-1])$ and $\ensuremath{\mathrm{LCE}}(\ensuremath{\mathrm{SA}}[t], \ensuremath{\mathrm{SA}}[s_2]) = \min(\ensuremath{\mathrm{LCP}}[t+1..s_2])$. These minimums can be computed alongside the thresholds by performing RMQs for the given ranges as the thresholds are found. This operation scans each run boundary and, with only a slight modification to the original MONI construction, builds both the thresholds and the threshold LCEs (together constituting the augmented thresholds).
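To make the construction concrete, the following schematic sketch computes the thresholds and threshold LCEs for the runs of a single character, assuming the \ensuremath{\mathrm{LCP}}\ array and the run boundaries are held in memory (the actual implementation instead uses PFP and RMQ data structures, as described above); the function name and interface are ours, for illustration only.
\begin{verbatim}
# Schematic construction of augmented thresholds for one character.
# runs = [(s, e), ...] are the BWT intervals of that character's runs, in order.
# For consecutive runs (s1, e1), (s2, e2):
#   t         = position of min(LCP[e1+1 .. s2])   (Rossi et al.'s threshold)
#   thr_lce_e = min(LCP[e1+1 .. t-1]) = LCE(SA[e1], SA[t-1])
#   thr_lce_s = min(LCP[t+1  .. s2 ]) = LCE(SA[t],  SA[s2])
# Empty ranges (t = e1+1 or t = s2) give values that are never consulted.
def augmented_thresholds(LCP, runs):
    thresholds, thr_lce_e, thr_lce_s = [], [], []
    for (s1, e1), (s2, e2) in zip(runs, runs[1:]):
        t = min(range(e1 + 1, s2 + 1), key=lambda i: LCP[i])
        thresholds.append(t)
        thr_lce_e.append(min(LCP[e1 + 1:t], default=float("inf")))
        thr_lce_s.append(min(LCP[t + 1:s2 + 1], default=float("inf")))
    return thresholds, thr_lce_e, thr_lce_s
\end{verbatim}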
\section{Experiments}
\label{sec:experiments}
We directly compare the time and memory for querying the augmented thresholds approach against the unmodified one-pass MONI. To mitigate the size increase of augmented thresholds, we explore techniques for space-efficiency. Any single threshold LCE can be stored in $O(\lg{n})$-bits (since they inherit \ensuremath{\mathrm{LCP}}\ bounds); however, in practice many \ensuremath{\mathrm{LCP}}\ values tend to be small~\cite{KKP16}, and our threshold LCEs are minimums over ranges of the \ensuremath{\mathrm{LCP}}\ array, so they also tend to be small. The second observation is that some threshold LCEs can be ignored: if $t = s_2$ then for any position $j$ (with $e_1 < j < s_2$) we always have $j < t$, so we jump up to $e_1$ and the corresponding \ensuremath{\mathrm{LCE}}\ is never used; similarly for $t = e_1+1$, where we always jump down. Thus, we can safely ignore these values, choosing to ``zero'' them or not store any value at all. For thresholds, we note that they form increasing sub-sequences with respect to each of the $\sigma$ unique characters in the text; we compress the thresholds by storing them in $\sigma$ bitvectors as done in Ahmed et al.'s implementation~\cite{ARKSGBL21}.
We focus on selected variants of augmented thresholds which differ in storing the threshold LCEs and compare against the unmodified approach:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item \texttt{PHONI}: Standard version of one-pass MONI described as \texttt{PHONI}$_{std}$ in original paper~\cite{BGIKLMNPR21}.
\item \texttt{Aug-Full}: One-pass MONI modified with augmented thresholds described previously, using $O(\lg{n})$-bits per threshold LCE stored.
\item \texttt{Aug-1}: As above, but caps the size to one byte per threshold LCE. In the event of an overflow, we default to performing a single \ensuremath{\mathrm{LCE}}\ query.
\item \texttt{Aug-BV-Full}: Stores a bitvector marking which threshold LCEs are used/non-zero, storing just these values with $O(\lg{n})$-bits for each.
\item \texttt{Aug-BV-1}: As above, but ignores storing values greater than one byte (default to \ensuremath{\mathrm{LCE}}\ query).
\item \texttt{Aug-DAC}: Stores threshold LCEs using a directly addressable code (DAC) with escaping, as described and tested on the \ensuremath{\mathrm{LCP}}\ array by Brisaboa et al.~\cite{BLN13}.
\item \texttt{Aug-BV-DAC}: Same as \texttt{Aug-BV-Full}, but substituting in a DAC to store defined values.
\end{itemize}
Our C++ code is available at \url{https://github.com/drnatebrown/aug_phoni} and is based on the original one-pass MONI code at \url{https://github.com/koeppl/phoni}. All experiments were executed single threaded on a server with an Intel(R) Xeon(R) Bronze 3204 CPU and 512 GiB RAM.
To compare against \texttt{PHONI} and its existing results, we re-ran Boucher et al.'s query experiments using the same dataset consisting of chromosome 19 haplotypes (\texttt{chr19}), building the data structures for concatenations of 16, 32, 64, 128, 256, 512, and 1000 sequences of \texttt{chr19} and querying them with 10 different \texttt{chr19} sequences. To support random access and \ensuremath{\mathrm{LCE}}\ queries efficiently we construct SLPs: both the SLP compressed text of the original one-pass MONI experiments (SLP$_{comp}$), and the naive uncompressed version of Gagie et al. (SLP$_{plain}$)~\cite{GMNSST20} that sacrifices space for speed. The datasets and SLP sizes are reported in Table~\ref{tab:dataset}. The average query times (computing MS for a single pattern) are shown in Figure~\ref{fig:query_experiments}, where results for both SLP types are accentuated. Similarly, Figure~\ref{fig:size_experiments} shows the disk sizes for all variants and both SLP types.
\begin{table}[!ht]
\centering
\begin{tabular}{lrrrrr}
\hline
\# & {$n/10^6$} & {$r/10^4$} & {$n/r$} & ~~SLP$_{comp}$ [MB] & ~~SLP$_{plain}$ [MB] \\
\hline
16 & ~946.01 & 3240.02 & 29.20 & 36.10 & 70.54 \\
32 & ~1892.01 & 3282.51 & 57.64 & 37.80 & 74.75 \\
64 & ~3784.01 & 3334.06 & 113.50 & 39.48 & 79.84 \\
128 & ~7568.01 & 3405.40 & 222.24 & 42.11 & 88.89 \\
256 & ~15136.04 & 3561.98 & 424.93 & 47.43 & 102.52 \\
512 & ~30272.08 & 3923.60 & 771.54 & 58.00 & 131.09 \\
1,000 & ~59125.12 & 4592.68 & 1287.38 & 80.63 & 186.98 \\
\hline
\end{tabular}
\caption{Table summarizing the datasets and sizes of SLPs built over them. The first column describes the number of concatenated sequences of \texttt{chr19} representing the text $T$, where $n$ represents the length of $T$ and $r$ the number of runs.\label{tab:dataset}}
\end{table}
\begin{figure}
\caption{The average query time to compute MS using 10 distinct \texttt{chr19} sequences.}
\label{fig:query_experiments}
\end{figure}
\begin{figure}
\caption{The disk size in GB for each data structure built on 16, 32, 64, 128, 256, 512, and 1000 sequences of \texttt{chr19}.}
\label{fig:size_experiments}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
With respect to query times, we can see that all variants using augmented thresholds are always faster than \texttt{PHONI}, and with respect to size, always larger. Introducing the SLP$_{plain}$ clearly benefits all methods by speeding up LCE queries, and although it can be over twice as large as SLP$_{comp}$ when compared directly against it (Table~\ref{tab:dataset}), the difference is much smaller when comparing the total sizes of the data structures shown in Figure~\ref{fig:size_experiments}. This \ensuremath{\mathrm{LCE}}\ speedup reduces the gap in query times relative to \texttt{PHONI}, since \texttt{PHONI} spends a larger percentage of its execution on \ensuremath{\mathrm{LCE}}\ queries; however, the \ensuremath{\mathrm{LCE}}\ queries skipped by augmented thresholds still result in faster execution.
We highlight some standout variants when compared to \texttt{PHONI} for the largest text size (1000 sequences of \texttt{chr19}). \texttt{Aug-DAC} is in the fastest class for both SLPs: $48.37\%$ faster and $22.89\%$ larger for SLP$_{comp}$, and $22.92\%$ faster and $19.97\%$ larger for SLP$_{plain}$; significant improvement compared to the original \texttt{PHONI} method (SLP$_{comp}$) and a direct time/space tradeoff for the introduced SLP$_{plain}$. \texttt{Aug-1} is in the smallest class: $40.22\%$ faster and only $14.60\%$ larger for SLP$_{comp}$, while $19.95\%$ faster and $12.66\%$ larger for SLP$_{plain}$. Although \texttt{Aug-Full} is in the fastest class with \texttt{Aug-DAC}, it is much larger. Other variants fall between these approaches in both time and space.
When compared to the original one-pass MONI of Boucher et al.\ (\texttt{PHONI} with SLP$_{comp}$), our best augmented-threshold approaches showed over $40\%$ speed improvements with under $20\%$ space increase on the largest dataset, and similar results across all data. When compared to uncompressed threshold LCEs, our applied compression schemes are space-efficient whilst still being faster than unmodified one-pass MONI. Introducing an uncompressed SLP (SLP$_{plain}$) was experimentally shown to be of great benefit to both \ensuremath{\mathrm{LCE}}\ and total query speed, requiring only a small size increase for computing matching statistics on repetitive texts. Using this SLP, our results show that augmented thresholds allow either a direct time/space tradeoff (increasing speed and space by $\approx20\%$), or a size decrease whilst maintaining a comparable speed increase.
\subsection*{Acknowledgments}
Many thanks to Massimiliano Rossi for guidance on implementing this modification.
\Section{References}
\end{document} |
\begin{document}
\begin{abstract}
Let the crown $C_{13}$ be the linear $3$-graph on $9$ vertices $\{a,b,c,d,e,f,g,h,i\}$ with edges
$$E = \{\{a,b,c\}, \{a, d,e\}, \{b, f, g\}, \{c, h,i\}\}.$$ Proving a conjecture of Gy\'arf\'as et al., we show that for any crown-free linear $3$-graph $G$ on $n$ vertices, its number of edges satisfies
$$\lvert E(G) \rvert\leq \frac{3(n - s)}{2}$$
where $s$ is the number of vertices in $G$ with degree at least $6$. This result, combined with previous work, essentially completes the determination of the linear Tur\'an number for linear $3$-graphs with at most $4$ edges.
\end{abstract}
\title{A note on the Tur\'an number of the linear $3$-graph $C_{13}$}
\section{Introduction}
A \textbf{linear $3$-graph} $G = (V, E)$ consists of a finite set of vertices $V = V(G)$ and a collection $E = E(G)$ of $3$-element subsets of $V$ (edges), such that any two edges in $E$ share at most one vertex. If $H$ and $F$ are linear $3$-graphs, then $H$ is $F$-free if it contains no copy of $F$. For a linear $3$-graph $F$ and a positive integer $n$, the \textbf{linear Tur\'an number} $\text{ex}(n, F)$ is the maximum number of edges in any $F$-free linear $3$-graph on $n$ vertices.
Let the \textbf{crown} $C_{13}$ be the linear $3$-graph on $9$ vertices $\{a,b,c,d,e,f,g,h,i\}$ with edges
$$E = \{\{a,b,c\}, \{a, d,e\}, \{b, f, g\}, \{c, h,i\}\}.$$
\begin{figure}
\caption{The crown $C_{13}$.}
\end{figure}
The study of $\text{ex}(n, C_{13})$ was initiated by Gy\'arf\'as, Ruszink\'o and S\'ark\"ozy in \cite{G21}, where they showed the bounds
$$6\floor{\frac{n - 3}{4}} + \epsilon \leq \text{ex}(n, C_{13}) \leq 2n,$$
where $\epsilon = 0$ if $n - 3 \equiv 0,1\bmod{4}$, $\epsilon = 1$ if $n - 3 \equiv 2\bmod{4}$, and $\epsilon = 3$ if $n - 3 \equiv 3\bmod{4}$. In \cite{C21}, Gy\'arf\'as et al. showed that every linear $3$-graph with minimum degree $4$ contains a crown. They also proposed some ideas to obtain the exact bounds. Very recently, Fletcher showed in \cite{F21} the improved upper bound
$$\text{ex}(n, C_{13}) < \frac{5}{3}n.$$
In this paper, we show that the lower bound in \cite{G21} is essentially tight, thus resolving a conjecture in \cite{C21}. In fact, we show the following stronger result.
\begin{theorem}
\label{thm:main1}
Let $G$ be any crown-free linear $3$-graph $G$ on $n$ vertices. Then its number of edges satisfies
$$\abs{E(G)} \leq \frac{3(n - s)}{2},$$
where $s$ is the number of vertices in $G$ with degree at least $6$.
\end{theorem}
Furthermore, we show that when $s$ is small, the upper bound can be improved.
\begin{theorem}
\label{thm:main2}
Let $G$ be any crown-free linear $3$-graph $G$ on $n$ vertices, and let $s$ be the number of vertices in $G$ with degree at least $6$. If $s \leq 2$, then the number of edges satisfies
$$\abs{E(G)} \leq \frac{10(n - s)}{7}.$$
\end{theorem}
Combining the two theorems above, we immediately conclude that the lower bound in \cite{G21} is exact when $n \equiv 3\bmod{4}$ and $n \geq 63$.
\begin{corollary}
If $n \geq 63$, then
$$\text{ex}(n, C_{13}) \leq \frac{3(n - 3)}{2}.$$
\end{corollary}
The paper is structured as follows. In \cref{sec:main1} and \cref{sec:main2} we present the main innovative inequality and prove our main theorems, modulo a technical lemma that we prove in \cref{sec:554}.
\section{Proof of \texorpdfstring{\cref{thm:main1}}{Theorem 1.1}}
\label{sec:main1}
Let $G$ be any linear $3$-graph. For each $v \in V(G)$, let $d(v)$ be the degree of $v$, which is the number of edges in $E(G)$ that contains $v$. For each edge $e \in E(G)$ and positive integers $a \geq b \geq c$, we write $D(e) \geq \langle a,b,c\rangle$ if we can write $e = \{x,y,z\}$ such that $d(x) \geq a$, $d(y) \geq b$ and $d(z) \geq c$.
Suppose the contrary, and let $G$ be a smallest crown-free linear $3$-graph on $n$ vertices with more than $3(n - s) / 2$ edges. For each $v \in V(G)$, let $\chi(v) = 1$ if $d(v) \leq 5$, and $\chi(v) = 0$ otherwise.
Our key innovation is the following observation
$$\sum_{e \in E(G)} \sum_{v \in V(G), v \in e} \frac{\chi(v)}{d(v)} = \sum_{v \in V(G)} \sum_{e \in E(G), v \in e} \frac{\chi(v)}{d(v)} = \sum_{v \in V(G)} \chi(v) = n - s.$$
As $\abs{E(G)} > 3(n - s)/2$, we conclude that there exists an edge $e = \{x,y,z\}$ such that
\begin{align}
\label{ineq:main}
\frac{\chi(x)}{d(x)} + \frac{\chi(y)}{d(y)} + \frac{\chi(z)}{d(z)} < \frac{2}{3}.
\end{align}
Without loss of generality, assume $d(x) \leq d(y) \leq d(z)$. First we note that $d(x) \geq 2$ and $d(y) \geq 4$, as otherwise \eqref{ineq:main} would be violated. If $d(z) \geq 6$, then we can easily find a $C_{13}$ by choosing an edge $e_1 \neq e$ adjacent to $x$, choosing an edge $e_2$ adjacent to $y$ that does not share a vertex with $e_1$, and finally choosing an edge $e_3$ adjacent to $z$ that shares no vertex with $e_1$ or $e_2$, a contradiction. Therefore, we have $d(z) \leq 5$, and \eqref{ineq:main} implies that $D(e) \geq \langle 5,5,4\rangle.$
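For completeness, the case analysis behind the last assertion is the following elementary computation. Since $d(z)\le 5$, we have $\chi(x)=\chi(y)=\chi(z)=1$, so \eqref{ineq:main} reads $\frac{1}{d(x)}+\frac{1}{d(y)}+\frac{1}{d(z)}<\frac{2}{3}$ with $2\le d(x)\le d(y)\le d(z)\le 5$ and $d(y)\ge 4$. If $d(y)=d(z)=4$ then $\frac{1}{d(x)}<\frac{2}{3}-\frac{1}{2}=\frac{1}{6}$ forces $d(x)\ge 7$, and if $d(y)=4$, $d(z)=5$ then $\frac{1}{d(x)}<\frac{2}{3}-\frac{9}{20}=\frac{13}{60}$ forces $d(x)\ge 5$; both contradict $d(x)\le d(y)=4$. Hence $d(y)=d(z)=5$, and then $\frac{1}{d(x)}<\frac{2}{3}-\frac{2}{5}=\frac{4}{15}$ gives $d(x)\ge 4$, that is, $D(e)\ge\langle 5,5,4\rangle$.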
We use the following lemma to handle the case $D(e) \geq \langle 5,5,4\rangle$. As the lemma is quite straightforward using the techniques in \cite{C21}, \cite{F21} and \cite{G21}, we delay the lengthy proof to \cref{sec:554}.
\begin{restatable}{lemma}{ll}
\label{lem:554}
Let $G$ be a crown-free linear $3$-graph and let $e = \{x,y,z\} \in E(G)$ satisfy $D(e) \geq \langle 5,5,4\rangle$. Then the set of all vertices sharing an edge with $\{x,y,z\}$,
$$S = \bigcup_{f \in E(G), f \cap \{x,y,z\} \neq \emptyset} f,$$
contains exactly $11$ vertices, and all vertices in $S$ have degree at most $5$. The set of edges that contain at least one vertex in $S$,
$$E_S = \{f: f \in E(G), f \cap S \neq \emptyset\},$$
contains at most $13$ edges, and all elements of $E_S$ are subsets of $S$. In other words, the subgraph $G[S]$ is a connected component of $G$.
\end{restatable}
Let $G - S$ be the graph obtained by deleting the vertices $S$ and the edges in $E_S$. By the lemma, the graph $G - S$ has $n' = n - 11$ vertices and at least $\abs{E(G)} - 13$ edges. Furthermore, the number of vertices in $G - S$ of degree at least $6$ is exactly $s$. Therefore, we conclude that
$$\abs{E(G - S)} \geq \abs{E(G)} - 13 > \frac{3(n - s)}{2} - 13 > \frac{3(n' - s)}{2}$$
contradicting the assumption that $G$ is the smallest counterexample to \cref{thm:main1}. So we have shown \cref{thm:main1}.
\section{Proof of \texorpdfstring{\cref{thm:main2}}{Theorem 1.2}}
\label{sec:main2}
We use the same notations as \cref{sec:main1}.
Suppose the contrary, and let $G$ be a smallest crown-free linear $3$-graph that has at most $2$ vertices with degree at least $6$ and more than $10(n - s)/7$ edges.
For each $e \in E(G)$ and $v \in e$, we define a weight $\chi(v, e)$ as follows: let $\chi(v,e)=1$ if $d(v)=1,2,4,5$, and $\chi(v,e)=0$ if $d(v)\geq 6$. If $d(v)=3$, let $\chi (v,e)=1.05$ if there exists at least one vertex in $e$ with degree at least $6$, and $\chi(v,e)=0.9$ otherwise.
Since $s \leq 2$ and $G$ is linear, a vertex of degree $3$ lies in at most two edges containing a vertex of degree at least $6$, so its total contribution to the double sum below is at most $(2\cdot 1.05+0.9)/3=1$; vertices of degree $1,2,4,5$ contribute exactly $1$, and the $s$ vertices of degree at least $6$ contribute $0$. Hence
$$\sum_{e \in E(G)} \sum_{v \in V(G), v \in e} \frac{\chi(v,e)}{d(v)} = \sum_{v \in V(G)} \sum_{e \in E(G), v \in e} \frac{\chi(v,e)}{d(v)} \leq n - s.$$
As $\abs{E(G)} > 10(n - s)/7$, we conclude that there exists an edge $e = \{x,y,z\}$ such that
\begin{align}
\label{ineq:main2}
\frac{\chi(x,e)}{d(x)} + \frac{\chi(y,e)}{d(y)} + \frac{\chi(z,e)}{d(z)} < \frac{7}{10}.
\end{align}
Without loss of generality, assume $d(x) \leq d(y) \leq d(z)$. First we note that $d(x) \geq 2$, as otherwise \eqref{ineq:main2} would be violated. Next, if $d(y) \leq 3$ then, whether or not $d(z) \geq 6$, \eqref{ineq:main2} would again be violated; thus $d(y) \geq 4$.
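For completeness, here is the check that $d(y)\le 3$ violates \eqref{ineq:main2} (recall $2\le d(x)\le d(y)$). If $e$ contains a vertex of degree at least $6$, it must be $z$; then degree-$3$ vertices of $e$ receive weight $1.05$, and the left-hand side of \eqref{ineq:main2} is at least $\frac{1.05}{3}+\frac{1.05}{3}=0.7$ when $d(x)=d(y)=3$, and at least $\frac{1}{2}+\frac{1.05}{3}>0.7$ when $d(x)=2$. If no vertex of $e$ has degree at least $6$, then degree-$3$ vertices receive weight $0.9$, so $\frac{\chi(x,e)}{d(x)},\frac{\chi(y,e)}{d(y)}\ge \frac{0.9}{3}=0.3$ and $\frac{\chi(z,e)}{d(z)}\ge\frac{1}{5}$, giving a left-hand side of at least $0.8$. In every case \eqref{ineq:main2} fails.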
The rest of the proof proceeds exactly the same as \cref{sec:main1}, other than the following inequality which leads to contradiction. \cref{thm:main2} then follows.
$$\abs{E(G - S)} \geq \abs{E(G)} - 13 > \frac{10(n - s)}{7} - 13 > \frac{10(n' - s)}{7}.$$
\section{Proof of \texorpdfstring{\cref{lem:554}}{Lemma 2.1}}
\label{sec:554}
In this section we show our lemma on the case $D(e) \geq \langle 5,5,4\rangle$. Our proof follows similar techniques as in \cite{C21}, \cite{F21} and \cite{G21}. In particular, \cite{C21} analyzed the case $D(e) \geq \langle 4,4,4\rangle$, \cite{F21} analyzed the case $D(e) \geq \langle 5,5,5 \rangle$, and \cite{G21} analyzed the case $D(e) \geq \langle 5,5,3\rangle$. We use a slight variation of their methods to prove our lemma.
Without loss of generality, assume $d(y), d(z) \geq 5$ and $d(x) \geq 4$. As we must not have $D(e) \geq \langle 6,4,2\rangle$, we must have $d(y) = d(z) = 5$. For $p \in \{x,y,z\}$, let $G(p)$ be the set of all vertices distinct from $x,y,z$ that lie on the same edge with $p$. We first note that we must have $G(y) = G(z)$. Suppose the contrary, so that some edge $e_1 \neq e$ adjacent to $y$ contains some vertex not in $G(z)$. Then at most one edge adjacent to $z$ other than $e$ contains a vertex in $e_1$, so at least three edges adjacent to $z$ are disjoint from $e_1$. Thus, we can take an edge $e_2$ containing $x$ that is disjoint from $e_1$, and then take, among those edges adjacent to $z$, an edge $e_3$ that is disjoint from $e_2$. So $e,e_1,e_2,e_3$ form a $C_{13}$, a contradiction.
Similarly, we must have $G(x) \subset G(y)$. Suppose the contrary, so that some edge $e_1 \neq e$ adjacent to $x$ contains some vertex not in $G(y)$. Then we can take an edge $e_3$ containing $z$ that is disjoint from $e_1$. Among the four edges adjacent to $y$ distinct from $e$, at most two can intersect $e_3$, and at most one can intersect $e_1$. Thus, we can choose $e_2$ containing $y$ that is disjoint from $e_1$ and $e_3$. So $e,e_1,e_2,e_3$ form a $C_{13}$, a contradiction.
Thus $S \backslash \{x,y,z\} = G(y) = G(z) \supset G(x)$. We define $F$ as the set of all edges in $E(G)$ that contain at least one vertex of $S$ but are disjoint from $\{x,y,z\}$. It suffices to show that $F$ must be empty.
We denote the vertices in $G(z)$ by $a,b,c,d,r,s,p,q$, such that $\{z,a,b\}, \{z,c,d\}, \{z,r,s\}, \{z,p,q\}$ are edges in $E(G)$.
\noindent {\it Step I.}
We construct an auxiliary bipartite graph $H = (X_H,Y_H,E_H)$, where $X_H = \{e_i \neq e \mid y \in e_i\}$, $Y_H = \{e_j \neq e \mid z \in e_j\}$, and $E_H = \{\{e_i,e_j\}\mid e_i \cap e_j \neq \emptyset\}$. Since $G(y)=G(z)$, each $e_i \in X_H$ meets exactly two (distinct, by linearity) edges of $Y_H$, one through each of its two vertices in $G(y)$, and vice versa; so $H$ is a $2$-regular bipartite graph of order $8$. Thus, $H=C_8$ or $H=C_4\biguplus C_4$.
We claim that if $G$ contains no crown, then $H$ contains a $K_{2,2}$. Arbitrarily choose an edge $f \neq e$ containing $x$, and define $V_1=f \setminus \{x\} \subset S$ and $W_1=\{e_i \in X_H \cup Y_H \mid e_i\cap V_1 \neq \emptyset\}$; then $|V_1|=2$, $|W_1|\le4$, and $H-W_1$ has at least $4$ vertices, at least two in each part. To find a crown we only need to choose $e_i \in X_H \setminus W_1$ and $e_j \in Y_H \setminus W_1$ such that $\{e_i, e_j\} \not\in E_H$. Therefore, if there is no crown in $G$, then $H-W_1$ has to be a complete bipartite graph, and since both of its parts contain at least two vertices, there is a $K_{2,2}$ in $H-W_1$. So $H$ contains a $K_{2,2}$; furthermore, since $C_8$ contains no $K_{2,2}$, we have $H=C_4\biguplus C_4$.
By symmetry we can assume $\{z,a,b\}, \{z,c,d\}$ are in a $C_4$ and $\{z,r,s\}, \{z,p,q\}$ are in the other one. Without loss of generality we can further assume $\{y,b,d\}$, $\{y,a,c\}$ lie in $E(G)$, and $\{y,s,q\}, \{y,r,p\}$ lie in $E(G)$.
\noindent {\it Step II.}
Now let $V_1 = \{a,b,c,d\}$ and $V_2 = \{r,s,p,q\}$. There is symmetry between $V_1$ and $V_2$, as well as symmetry within each $V_i$, $i=1,2$. We claim that there is no edge containing $x$ that contains exactly one vertex of $V_1$ and one vertex of $V_2$. Otherwise we may assume by symmetry that it is $\{x,a,r\}$. Then $\{z,a,b\}, \{y,b,d\}, \{z,p,q\}, \{x,a,r\}$ form a $C_{13}$, a contradiction. Thus the edges other than $e$ containing $x$ must form a subset of $\{\{x,a,d\},\{x,b,c\},\{x,r,q\},\{x,s,p\}\}$.
\noindent {\it Step III.}
Let $f$ be any element of $F$. By symmetry we may assume $a \in f$. Then $b,c \notin f$. First, we claim that $f$ cannot contain exactly one vertex of $S$, namely $a$. Otherwise $\{z,a,b\}, \{y,b,d\}, \{z,r,s\}, f$ form a $C_{13}$, a contradiction. Second, we claim that $d \notin f$. Otherwise the edges containing $x$ other than $e$ are exactly $\{x,b,c\},\{x,r,q\},\{x,s,p\}$, since $d(x)\ge4$. Since at most one of the edges $\{z,r,s\}$ and $\{z,p,q\}$ intersects $f$, we may assume $\{z,r,s\}\cap f = \emptyset$. Then $\{z,a,b\}, \{x,b,c\}, \{z,r,s\}, f$ form a $C_{13}$, a contradiction.
Therefore, by symmetry, we may assume $r\in f$. Similarly $q \notin f$, since the pairs $a,d$ and $r,q$ play symmetric roles. So $f$ contains exactly two vertices of $S$, namely $a$ and $r$. But then $\{z,a,b\}, \{x,b,c\}, \{z,p,q\}, f$ form a $C_{13}$, a contradiction.
\end{document} |
\begin{document}
\newtheorem{thm}{Theorem}
\newtheorem{lem}{Lemma}[section]
\newtheorem{prop}[lem]{Proposition}
\newtheorem{cor}[lem]{Corollary}
\title[Modular elliptic curves]{
Modular elliptic curves over real abelian fields\\
and the generalized Fermat equation
$x^{2\ell}+y^{2m}=z^p$
}
\author{Samuele Anni and Samir Siksek}
\address{Mathematics Institute\\
University of Warwick\\
Coventry\\
CV4 7AL \\
United Kingdom}
\email{[email protected]}
\email{[email protected]}
\date{\today}
\thanks{The authors are supported by EPSRC Programme Grant
\lq LMF: L-Functions and Modular Forms\rq\ EP/K034383/1.
}
\keywords{Elliptic curves,
modularity, Galois representation, level lowering, irreducibility, generalized Fermat, Fermat--Catalan,
Hilbert modular forms}
\subjclass[2010]{Primary 11D41, 11F80, Secondary 11G05, 11F41}
\begin{abstract}
Let $K$ be a real abelian field of odd class number in which $5$
is unramified. Let $S_5$ be the set of places of $K$ above $5$.
Suppose for every non-empty proper subset $S \subset S_5$ there
is a totally positive unit $u \in \mathcal{O}_K$ such that
$\prod_{\mathfrak{q} \in S} \norm_{\mathbb{F}_\mathfrak{q}/\mathbb{F}_5}(u \bmod{\mathfrak{q}}) \ne \overline{1}$.
We prove that every semistable elliptic curve over $K$ is modular,
using a combination of several powerful modularity theorems
and class field theory. We deduce that if $K$ is a real abelian
field of conductor $n<100$, with $5 \nmid n$ and $n \ne 29$, $87$, $89$,
then every semistable elliptic curve $E$ over $K$ is modular.
Let $\ell$, $m$, $p$ be prime, with $\ell$, $m \ge 5$ and $p \ge 3$.
To a putative non-trivial primitive
solution of the generalized Fermat $x^{2\ell}+y^{2m}=z^p$
we associate a Frey elliptic curve defined over $\mathbb{Q}(\zeta_p)^+$,
and study its mod $\ell$ representation with the help
of level lowering and our modularity result.
We deduce the non-existence of non-trivial primitive solutions
if $p \le 11$, or if $p=13$ and $\ell$, $m \ne 7$.
\end{abstract}
\maketitle
\section{Introduction}
Let $p$, $q$, $r \in \mathbb{Z}_{\geq 2}$. The equation
\begin{equation}\label{eqn:FCgen}
x^p+y^q=z^r
\end{equation}
is known as the \textbf{generalized Fermat equation}
(or the \textbf{Fermat--Catalan equation})
with signature $(p,q,r)$.
As in Fermat's Last Theorem, one is interested in integer solutions
$x$, $y$, $z$. Such a solution is called \textbf{non-trivial} if
$xyz \neq 0$, and \textbf{primitive} if $x$, $y$, $z$ are coprime.
Let $\chi=p^{-1}+q^{-1}+r^{-1}$.
The \textbf{generalized Fermat conjecture}
\citep{DG,Da97},
also known as the Tijdeman--Zagier conjecture
and as the Beal conjecture \citep{Beukers},
is concerned with the case $\chi<1$.
It states that the only non-trivial primitive solutions to
\eqref{eqn:FCgen} with $\chi<1$ are
\begin{gather*}
1+2^3 = 3^2, \quad 2^5+7^2 = 3^4, \quad 7^3+13^2 = 2^9, \quad
2^7+17^3 = 71^2, \\
3^5+11^4 = 122^2, \quad 17^7+76271^3 = 21063928^2, \quad
1414^3+2213459^2 = 65^7, \\
9262^3+15312283^2 = 113^7, \; \,
43^8+96222^3 = 30042907^2, \; \, 33^8+1549034^2 = 15613^3.
\end{gather*}
The generalized Fermat conjecture
has been established for many signatures $(p,q,r)$,
including for several infinite families of signatures,
starting with
Fermat's Last Theorem $(p,p,p)$ by
\citet{Wiles};
$(p,p,2)$ and $(p,p,3)$ by \citet{DM};
$(2,4,p)$ by \citet{El} and
\citet*{BEN};
$(2p,2p,5)$ by \citet{Bennett}; $(2,6,p)$
by \cite{BC}; and other signatures by other researchers.
An excellent, exhaustive and up-to-date survey was recently compiled by
\citet*{BennettSurvey}, which also proves
the generalized Fermat conjecture for several families of signatures,
including $(2p,4,3)$.
The main Diophantine result of this paper is the following theorem.
\begin{thm}\label{thm:1}
Let $p=3$, $5$, $7$, $11$ or $13$. Let $\ell$, $m \ge 5$ be primes,
and if $p=13$ suppose moreover that $\ell$, $m \ne 7$.
Then the only primitive solutions to
\begin{equation}\label{eqn:main}
x^{2\ell}+y^{2m}=z^p,
\end{equation}
are the trivial ones
$(x,y,z)=(\pm 1, 0, 1)$ and $(0, \pm 1, 1)$.
\end{thm}
If $\ell$ or $m$ is $2$ or $3$ then \eqref{eqn:main} has
no non-trivial primitive solutions for prime $p \ge 3$;
this follows from the aforementioned work
on Fermat equations of signatures $(2,4,p)$, $(2,6,p)$ and $(2p,4,3)$.
Our approach is unusual in that it treats several bi-infinite
families of signatures.
We start with a descent argument (Section~\ref{sec:Descent}), inspired
by the approach of \citet{Bennett} for $x^{2n}+y^{2n}=z^5$ and that of \citet{recipes}
for $x^r+y^r=z^p$ with certain small values of $r$. For $p=3$ the descent argument allows us to
quickly obtain a contradiction (Section~\ref{sec:peq3})
through results of \citet{BS}.
The bulk of the paper is devoted to $5 \le p \le 13$.
Our descent allows us to construct Frey curves
(Sections~\ref{sec:FreyCurve},~\ref{sec:FreyCurve2})
attached to \eqref{eqn:main} that
are defined over the real cyclotomic field $K=\mathbb{Q}(\zeta+\zeta^{-1})$
where $\zeta$ is a $p$-th root of unity,
or, for $p \equiv 1 \pmod{4}$, defined
over the unique subfield $K^\prime$ of $K$ of degree $(p-1)/4$. These Frey curves
are semistable over $K$, though not necessarily over $K^\prime$.
In the remainder of the paper
we study the
mod $\ell$ representations of these Frey curves using modularity
and level lowering.
Several recent papers \citep{DF,FS,FSsmall,recipes,BDMS}
apply modularity and level lowering over
totally real fields to study Diophantine problems.
We need to refine many of the ideas in those papers,
both because we are dealing with representations over number fields
of relatively high degree, and because we
are aiming for a \lq clean\rq\ result without any exceptions (the methods
are much easier to apply for sufficiently large $\ell$).
We first establish modularity of the Frey curves by combining
a modularity theorem for residually reducible representations due to \citet{SkinnerWiles}
with a theorem of \citet{Thorne} for residually dihedral representations,
and implicitly applying modularity lifting theorems of \citet{Kisin} and others for representations
with \lq big image\rq. We shall use class field theory to glue together these great modularity theorems
and produce our own theorem (proved in Section~\ref{sec:modularity})
that applies to our Frey curves, but which we expect to be
of independent interest.
\begin{thm} \label{thm:modularity}
Let $K$ be a real abelian number field.
Write $S_5$ for the prime ideals $\mathfrak{q}$ of $K$ above $5$.
Suppose
\begin{enumerate}
\item[(a)] $5$ is unramified in $K$;
\item[(b)] the class number of $K$ is odd;
\item[(c)] for each non-empty proper subset $S$ of $S_5$,
there is some totally positive unit $u$ of $\mathcal{O}_K$
such that
\begin{equation}\label{eqn:conditionc}
\prod_{\mathfrak{q} \in S} \norm_{\mathbb{F}_\mathfrak{q}/\mathbb{F}_5}(u \bmod{\mathfrak{q}}) \ne \overline{1} \, .
\end{equation}
\end{enumerate}
Then every semistable elliptic curve $E$ over $K$
is modular.
\end{thm}
Theorem~\ref{thm:modularity} allows us to deduce the following
corollary (also proved in Section~\ref{sec:modularity}).
\begin{cor}\label{cor:modularity}
Let $K$ be a real abelian field of conductor $n < 100$
with $5 \nmid n$ and $n \ne 29$, $87$, $89$. Let $E$ be a
semistable elliptic curve over $K$. Then $E$ is modular.
\end{cor}
To apply level lowering theorems to a modular mod $\ell$ representation, one
must first show that this representation is irreducible.
Let $G_K=\Gal(\overline{K}/K)$.
The mod $\ell$ representation that concerns us, denoted $\overline{\rho}_{E,\ell} \; : \; G_K \rightarrow \GL_2(\mathbb{F}_\ell)$,
is the one attached to the $\ell$-torsion of
our semistable Frey elliptic curve $E$ defined over the field
$K=\mathbb{Q}(\zeta+\zeta^{-1})$ of degree $(p-1)/2$.
We shall exploit semistability of our Frey curve
to show, with the help of class field theory,
that if $\overline{\rho}_{E,\ell}$ is reducible
then $E$ or some $\ell$-isogenous curve possesses non-trivial $K$-rational
$\ell$-torsion. Using famous results on torsion of elliptic curves
over number fields of
small degree due to \citet{Kamienny,Parent1,Parent2,DKSS}
and some computations of $K$-points on certain modular curves,
we prove the required irreducibility result
(Section~\ref{sec:Irred}).
The final step (Section~\ref{sec:final})
in the proof of Theorem~\ref{thm:1} requires computations
of certain Hilbert eigenforms
over the fields $K$ together with their eigenvalues at primes of small norm.
For these computations we have made use of the
\lq Hilbert modular forms package\rq\ developed by
Demb\'el\'e, Donnelly, Greenberg and Voight and available within the
\texttt{Magma} computer algebra system \citep{Magma}. For the theory behind
this package see
\citep{DV}.
For $p \ge 17$,
the required computations
are beyond the capabilities of current software,
though the strategy for proving Theorem~\ref{thm:1}
should be applicable to larger $p$ once these
computational limitations are overcome.
In fact, at the end of Section~\ref{sec:final}, we argue heuristically
that the larger the value
of $p$, the more likely it is that the argument used to complete
the proof of Theorem~\ref{thm:1} will succeed for that particular $p$.
We content ourselves with proving (Section~\ref{sec:proof2})
the following theorem.
\begin{thm}\label{thm:2}
Let $p$ be an odd prime, and
let $K=\mathbb{Q}(\zeta+\zeta^{-1})$ where $\zeta=\exp(2 \pi i/p)$.
Write $\mathcal{O}_K$ for the ring of integers in $K$ and
$\mathfrak{p}$ for the unique prime ideal above $p$.
Suppose that there are no
elliptic curves $E/K$ with full $2$-torsion and
conductors $2 \mathcal{O}_K$, $2 \mathfrak{p}$. Then there is an
ineffective constant $C_p$ (depending only on $p$)
such that for all primes
$\ell$, $m \ge C_p$, the only primitive solutions to
\eqref{eqn:main}
are the trivial ones
$(x,y,z)=(\pm 1, 0, 1)$ and $(0, \pm 1, 1)$.
If $p \equiv 1 \pmod{4}$ then let $K^\prime$ be the unique
subfield of $K$ of degree $(p-1)/4$. Let $\mathfrak{B}$ be
the unique prime ideal of $K^\prime$ above $p$.
Suppose that there are no
elliptic curves $E/K^\prime$ with non-trivial $2$-torsion and
conductors $2 \mathfrak{B}$, $2 \mathfrak{B}^2$. Then there is an
ineffective constant $C_p$ (depending only on $p$)
such that for all primes
$\ell$, $m \ge C_p$, the only primitive solutions to
\eqref{eqn:main}
are the trivial ones
$(x,y,z)=(\pm 1, 0, 1)$ and $(0, \pm 1, 1)$.
\end{thm}
The computations described in this paper were carried out using the
computer algebra system \texttt{Magma} \citep{Magma}. The code and output
are available from: \newline
\url{http://homepages.warwick.ac.uk/~maseap/progs/diophantine/}

We are grateful to the three referees for careful reading of
the paper and for suggesting many improvements. We are
indebted to
Lassina Demb\'{e}l\'{e},
Steve Donnelly,
Marc Masdeu and Jack Thorne
for stimulating conversations.
\section{Proof of Theorem~\ref{thm:modularity} and Corollary~\ref{cor:modularity}}
\label{sec:modularity}
We shall need a result from class field theory. The following
version is proved by \citet[Appendice A]{KrausNF}.
\begin{prop}\label{prop:Krauscf}
Let $K$ be a number field, and $q$ a rational
prime that does not ramify in $K$.
Denote the mod $q$ cyclotomic character by
$\chi_q \; : \; G_K \rightarrow \mathbb{F}_q^\times$.
Write $S_q$ for
the set of primes $\mathfrak{q}$ of $K$ above $q$, and let $S$
be a subset of $S_q$. Let
$\varphi \; : \; G_K \rightarrow \mathbb{F}_q^\times$ be a character satisfying:
\begin{itemize}
\item[(a)] $\varphi$ is unramified outside $S$ and the infinite places of $K$;
\item[(b)] $\varphi \vert_{I_\mathfrak{q}} = \chi_q \vert_{I_\mathfrak{q}}$
for all $\mathfrak{q} \in S$; here $I_\mathfrak{q}$ denotes the inertia
subgroup of $G_K$ at $\mathfrak{q}$.
\end{itemize}
Let $u \in \mathcal{O}_K$
be a unit that is positive in each real embedding of $K$.
Then
\[
\prod_{\mathfrak{q} \in S} \norm_{\mathbb{F}_{\mathfrak{q}}/\mathbb{F}_q} (u \bmod \mathfrak{q})=\overline{1}.
\]
\end{prop}
\begin{proof}
For the reader's convenience we give a sketch of Kraus's elegant
argument.
Let $L$ be the cyclic field extension of $K$ cut out by the kernel of $\varphi$.
Then we may view $\varphi$ as a character $\Gal(L/K) \rightarrow \mathbb{F}_q^\times$.
Write $M_K$ for the places of $K$.
For $\upsilon \in M_K$,
let $\Theta_\upsilon : K_\upsilon^* \rightarrow \Gal(L/K)$
be the local Artin map. Let $u \in \mathcal{O}_K$ be a unit that is positive in each
real embedding.
We shall consider the values $\varphi(\Theta_\upsilon(u)) \in \mathbb{F}_q^\times$ as $\upsilon$
ranges over $M_K$.
Suppose first that $\upsilon \in M_K$ is infinite. If $\upsilon$ is complex
then $\Theta_\upsilon$ is trivial and so certainly
$\varphi(\Theta_\upsilon(u))=\overline{1}$ in $\mathbb{F}_q^\times$. So suppose $\upsilon$ is real.
As $u$ is positive in $K_\upsilon$, it is a local norm
and hence in the kernel of $\Theta_\upsilon$.
Therefore $\varphi(\Theta_\upsilon(u))=\overline{1}$.
Suppose next that $\upsilon \in M_K$ is finite.
As $u \in \mathcal{O}_\upsilon^\times$, it follows from local reciprocity
that $\Theta_\upsilon(u)$ belongs to the inertia subgroup $I_\upsilon \subseteq \Gal(L/K)$.
If $\upsilon \notin S$ then $\varphi(I_\upsilon)=1$ by (a)
and so $\varphi(\Theta_\upsilon(u))=\overline{1}$.
Thus suppose that $\upsilon=\mathfrak{q} \in S$. It follows from (b) that
$\varphi(\Theta_\mathfrak{q}(u))=\chi_q(\Theta_\mathfrak{q}(u))$. Through an explicit
calculation, \citet[Appendice A, Proposition 1]{KrausNF} shows that
$\chi_q(\Theta_\mathfrak{q}(u))= \norm_{\mathbb{F}_\mathfrak{q}/\mathbb{F}_q}(u \bmod{\mathfrak{q}})^{-1}$.
Finally, by global reciprocity, $\prod_{\upsilon \in M_K} \Theta_\upsilon(u)=1$.
Applying $\varphi$ to this equality completes the proof.
\end{proof}
We shall also make use of the following theorem of
\citet[Theorem 1.1]{Thorne}.
\begin{thm}[Thorne]
Let $E$ be an elliptic curve
over a totally real field $K$. Suppose $5$
is not a square in $K$ and that $E$ has no
$5$-isogenies defined over $K$. Then $E$ is modular.
\end{thm}
Thorne deduces this result by combining his beautiful modularity
theorem for residually dihedral representations \citep[Theorem 1.2]{Thorne},
with \citep*[Theorem 3]{FLS}. The latter result is essentially
a straightforward consequence of the powerful modularity lifting theorems
for residual representations with \lq big image\rq\
due to \citet{Kisin}, \citet*{BGG1,BGG2}, and \citet{BD}.
Finally we shall need the following modularity theorem
for residually reducible representations due to
\citet[Theorem A]{SkinnerWiles}.
\begin{thm}[Skinner and Wiles]
Let $K$ be a real abelian number field. Let $q$ be
an odd prime, and
\[
\rho : G_K \rightarrow \GL_2(\overline{\mathbb{Q}}_q)
\]
be a continuous, irreducible representation,
unramified away from a finite number of places of $K$.
Suppose $\overline{\rho}$ is reducible and write
$\overline{\rho}^{\mathrm{ss}} = \psi_1 \oplus \psi_2$. Suppose further that
\begin{enumerate}
\item[(i)] the splitting field $K(\psi_1 / \psi_2)$ of $\psi_1 / \psi_2$
is abelian over $\mathbb{Q}$;
\item[(ii)] $(\psi_1 / \psi_2)(\tau) = -1$ for each complex
conjugation $\tau$;
\item[(iii)]
$(\psi_1 / \psi_2)\vert_{D_\mathfrak{q}}\neq 1$ for each $\mathfrak{q} \mid q$;
\item[(iv)] for all $\mathfrak{q} \mid q$,
\[
\rho \vert_{D_\mathfrak{q}} \sim
\begin{pmatrix}
\phi_1^{(\mathfrak{q})} \cdot \tilde{\psi}_1 & * \\
0 & \phi_2^{(\mathfrak{q})} \cdot \tilde{\psi}_2\\
\end{pmatrix}
\]
with $\phi_2^{(\mathfrak{q})}$ factoring through a pro-$q$ extension
of $K_\mathfrak{q}$ and $\phi_2^{(\mathfrak{q})} \vert_{I_\mathfrak{q}}$
having finite order, and
where $\tilde{\psi}_i$ is a Teichm\"{u}ller lift of $\psi_i$;
\item[(v)] $\Det(\rho) = \psi \chi_q^{k-1}$, where $\psi$ is
a character of finite order, and $k \ge 2$ is an integer.
\end{enumerate}
Then the representation $\rho$ is associated to a Hilbert modular newform.
\end{thm}
\subsection*{Proof of Theorem~\ref{thm:modularity}}
As $5$ is unramified in $K$, it certainly is not a square in $K$.
If $E$ has no $5$-isogenies defined over $K$ then the result follows
from Thorne's theorem. We may thus suppose that the mod $5$
representation $\overline{\rho}$ of $E$ is reducible,
and write $\overline{\rho}^{\mathrm{ss}}=\psi_1 \oplus \psi_2$.
We will verify hypotheses (i)--(v) in the theorem of Skinner and Wiles
(with $q=5$)
to deduce the modularity
of $\rho : G_K \rightarrow \Aut(T_5(E)) \cong \GL_2(\mathbb{Z}_5)$,
where $T_5(E)$ is the $5$-adic Tate module of $E$.
If $E$ has good supersingular reduction at some
$\mathfrak{q} \mid 5$ then (as $\mathfrak{q}$ is unramified) $\overline{\rho} \vert_{I_\mathfrak{q}}$
is irreducible \citep[Proposition 12]{Serre72}, contradicting the reducibility of $\overline{\rho}$. It follows that $E$ has good ordinary
or multiplicative reduction at all $\mathfrak{q} \mid 5$. In particular,
hypothesis (iv) holds with $\phi_i^{(\mathfrak{q})}=1$.
Now $\psi_1 \psi_2 = \Det(\rho)=\chi_5$ so hypothesis (v) holds
with $\psi=1$ and $k=2$.
Moreover, for each complex conjugation $\tau$,
we have $(\psi_1/\psi_2)(\tau)=\psi_1(\tau) \psi_2(\tau^{-1})
=\psi_1(\tau)\psi_2(\tau)=\chi_5(\tau)=-1$ so (ii)
is satisfied.
It follows from the fact that $E$ has good ordinary
or multiplicative reduction at all $\mathfrak{q} \mid 5$ that
$(\overline{\rho} \vert_{I_\mathfrak{q}})^{\mathrm{ss}} =
\chi_5 \vert_{I_\mathfrak{q}} \oplus 1$
and so $\psi_1/\psi_2$ is non-trivial when restricted to $I_\mathfrak{q}$
(again as $\mathfrak{q}$ is unramified in $K$); this proves (iii).
It remains to verify (i). Note that $\psi_1/\psi_2=\chi_5/ \psi_2^2$.
Hence $K(\psi_1/\psi_2)$ is contained in the compositum
of the fields $K(\zeta_5)$ and $K(\psi_2^2)$,
and by symmetry also contained in the compositum of
the fields $K(\zeta_5)$ and $K(\psi_1^2)$.
It is sufficient to show that either
$K(\psi_2^2)=K$ or $K(\psi_1^2)=K$.
Note that $\psi_i^2 \; : \; G_K \rightarrow \mathbb{F}_5^\times$ are quadratic characters
that are unramified at all
archimedean places. We will show that one of them
is everywhere unramified, and then the desired result
follows from the assumption that the class number of $K$
is odd.
First note, by the semistability of $E$, that $\psi_1$
and $\psi_2$ are unramified at all finite primes $\mathfrak{p} \nmid 5$.
Let $S$ be the subset of $\mathfrak{q} \in S_5$ such that $\psi_1$
is unramified at $\mathfrak{q}$. By the above, we know that
these are precisely the $\mathfrak{q} \in S_5$ such that
$\psi_2 \vert_{I_\mathfrak{q}}=\chi_5 \vert_{I_\mathfrak{q}}$.
By assumption (c) and Proposition~\ref{prop:Krauscf},
we have that either $S=\emptyset$ or $S=S_5$.
If $S=\emptyset$ then $\psi_2$ is unramified
at all $\mathfrak{q} \mid 5$, and if $S=S_5$
then $\psi_1$ is unramified at all $\mathfrak{q} \mid 5$.
This completes the proof.
\subsection*{Proof of Corollary~\ref{cor:modularity}}
Suppose first that $K=\mathbb{Q}(\zeta_n)^+$.
If $n \equiv 2 \pmod{4}$ then $\mathbb{Q}(\zeta_n)=\mathbb{Q}(\zeta_{n/2})$,
so we adopt the
usual convention of supposing that $n \not \equiv 2 \pmod{4}$.
We consider values $n<100$ and impose the restriction $5 \nmid n$,
which ensures that condition (a) of Theorem~\ref{thm:modularity}
is satisfied. It is known \citep{Miller} that the
class number $h_n^+$ of $K$ is $1$ for all $n < 100$. Thus condition (b)
is also satisfied. Write $E_n^+$ for the group of
units of $K$ and $C_n^+$ for the subgroup of cyclotomic
units. A result of \citet{Sinnott} asserts that
$[E_n^+:C_n^+]=b \cdot h_n^+$ where $b$ is an explicit constant
that happens to be $1$ for $n$ with at most $3$ distinct prime
divisors, and so certainly for all $n$ in our range. It follows that
$E_n^+=C_n^+$ for $n < 100$. Now let $S_5$ be as in the statement
of Theorem~\ref{thm:modularity}. We wrote a simple \texttt{Magma}
script which for each $n <100$ satisfying $5 \nmid n$
and $n \not \equiv 2 \pmod{4}$ writes down a basis for
the cyclotomic units $C_n^+$ and deduces a basis for
the totally positive units. It then checks, for every
non-empty proper subset of $S_5$, if there is an element
$u$ of this basis of totally positive units that satisfies
\eqref{eqn:conditionc}. We found this to be the case for
all $n$ under consideration except $n=29$, $87$ and $89$.
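For illustration, the shape of this verification can be conveyed by the following \texttt{Magma}-style sketch (this is only a sketch, not the script actually used; for simplicity it tests only squares of fundamental units, which are automatically totally positive, rather than a basis of the totally positive cyclotomic units, and takes $n=13$ as an example):
\begin{verbatim}
n := 13;
L<z> := CyclotomicField(n);
K := NumberField(MinimalPolynomial(z + z^-1));   // K isomorphic to Q(zeta_n)^+
OK := RingOfIntegers(K);
S5 := [ P[1] : P in Factorization(5*OK) ];       // primes of K above 5
U, mU := UnitGroup(OK);
tpu := [ OK | mU(U.i)^2 : i in [1..Ngens(U)] ];  // some totally positive units
for T in Subsets({1..#S5}) do
  if #T eq 0 or #T eq #S5 then continue; end if;
  found := false;
  for u in tpu do
    prod := GF(5)!1;
    for i in T do
      F, red := ResidueClassField(S5[i]);
      prod *:= GF(5)!Norm(red(u));               // norm down to F_5
    end for;
    if prod ne 1 then found := true; break; end if;
  end for;
  printf "subset %o of S_5: witness for condition (c): %o\n", T, found;
end for;
\end{verbatim}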
The corollary follows from Theorem~\ref{thm:modularity}
for $K=\mathbb{Q}(\zeta_n)^+$ with
$n$ as in the statement of the corollary.
Now let $K$ be a real abelian field with conductor $n$
as in the statement of the corollary. Then
$K \subseteq \mathbb{Q}(\zeta_n)^+$.
As $\mathbb{Q}(\zeta_n)^+/K$
is cyclic, modularity of an elliptic curve $E/K$
follows, by Langlands' cyclic base change theorem
\citep{Langlands}, from
modularity of $E$ over $\mathbb{Q}(\zeta_n)^+$, completing the proof
of the corollary.
\section{Cyclotomic Preliminaries}
Throughout $p$ will be an odd prime.
Let $\zeta$ be a primitive $p$-th root of unity,
and
$K=\mathbb{Q}(\zeta+\zeta^{-1})$ the maximal real subfield of $\mathbb{Q}(\zeta)$.
We write
\[
\theta_j=\zeta^j+\zeta^{-j} \in K, \qquad j=1,\dotsc,(p-1)/2 \, .
\]
Let $\mathcal{O}_K$ be the ring of integers of $K$. Let $\mathfrak{p}$
be the unique prime ideal of $K$ above $p$. Then $p \mathcal{O}_K=\mathfrak{p}^{(p-1)/2}$.
\begin{lem}\label{lem:cyc}
For $j=1,\dotsc,(p-1)/2$, we have
\[
\theta_j \in \mathcal{O}_K^\times, \qquad \theta_j+2 \in \mathcal{O}_K^\times,
\qquad
(\theta_j-2) \mathcal{O}_K=\mathfrak{p}.
\]
Moreover,
$(\theta_j-\theta_k) \mathcal{O}_K=\mathfrak{p}$
for
$1 \le j< k \le (p-1)/2$.
\end{lem}
\begin{proof}
Observe that
$\theta_j=(\zeta^{2j}-\zeta^{-2j})/(\zeta^j-\zeta^{-j})$
and thus belongs to the group of cyclotomic units.
Given $j$,
choose $r$ such that $j \equiv 2 r \pmod{p}$. Then $\theta_j+2=\theta_r^2 \in \mathcal{O}_K^\times$.
For now, let $L=\mathbb{Q}(\zeta)$.
Let $\mathfrak{P}$ be the prime of $\mathcal{O}_L$ above $\mathfrak{p}$.
Then $\mathfrak{p} \mathcal{O}_L=\mathfrak{P}^2$. As is well-known, $\mathfrak{P}=(1-\zeta^u) \mathcal{O}_L$
for $u=1,2,\dotsc,p-1$. Note that
$\theta_j-2=(\zeta^r-\zeta^{-r})^2$, with $j \equiv 2 r \pmod{p}$, from which we deduce that
$(\theta_j-2) \mathcal{O}_L=\mathfrak{P}^2=\mathfrak{p} \mathcal{O}_L$, hence $(\theta_j-2) \mathcal{O}_K=\mathfrak{p}$.
For the final part, note that $j \not \equiv \pm k \pmod{p}$. Thus there exist
$u$, $v \not \equiv 0 \pmod{p}$
such that
\[
u+v \equiv j, \qquad u-v \equiv k \pmod{p}.
\]
Then
\[
(\zeta^u-\zeta^{-u})(\zeta^v-\zeta^{-v})=\theta_j-\theta_k
\]
and so $(\theta_j-\theta_k)\mathcal{O}_L=\mathfrak{P}^2=\mathfrak{p} \mathcal{O}_L$. This completes the proof.
\end{proof}
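For instance, when $p=5$ (so that $K=\mathbb{Q}(\sqrt{5})$ and $\mathfrak{p}=(\sqrt{5})$) we have
\[
\theta_1=\frac{-1+\sqrt{5}}{2}, \qquad \theta_2=\frac{-1-\sqrt{5}}{2},
\]
which are the roots of $t^2+t-1$ and hence units; $\theta_1+2$ and $\theta_2+2$ have norm $1$, while $\theta_1-2$, $\theta_2-2$ and $\theta_1-\theta_2=\sqrt{5}$ each have norm $\pm 5$ and so generate $\mathfrak{p}$.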
\section{The Descent}\label{sec:Descent}
Now let $\ell$, $m \ge 5$ be prime, and let $(x,y,z)$
be a non-trivial, primitive solution to \eqref{eqn:main}.
If $\ell=p$, then \eqref{eqn:main} can be rewritten as
$z^p+(-x^2)^p=(y^m)^2$. \citet{DM} have shown
that the only primitive solutions to the generalized Fermat equation \eqref{eqn:FCgen}
with signature $(p,p,2)$ are the trivial ones, giving us a contradiction.
We shall henceforth suppose that $\ell \ne p$ and $m \ne p$.
Clearly $z$ is odd.
By swapping in \eqref{eqn:main}
the terms $x^\ell$ and $y^m$ if necessary,
we may suppose that $2 \mid x$.
We factor the left-hand side over $\mathbb{Z}[i]$. It follows from
our assumptions that the two
factors $(x^\ell+y^m i )$ and $(x^\ell-y^m i)$ are coprime.
Since $\mathbb{Z}[i]$ is a unique factorization domain and every unit of $\mathbb{Z}[i]$ is a $p$-th power ($p$ being odd), there exist coprime rational integers $a$, $b$ such that
\[
x^\ell+ y^m i=(a+bi)^p \, , \qquad z=a^2+b^2.
\]
Then
\begin{align*}
x^\ell &= \frac{1}{2} \left( (a+bi)^p+(a-bi)^p \right)\\
&= a \cdot \prod_{j=1}^{p-1}
\left( (a+bi) +(a-bi) \zeta^j \right)\\
&= a \cdot \prod_{j=1}^{(p-1)/2}
\left(
(a+bi)+(a-bi) \zeta^j
\right) \cdot
\left(
(a+bi)+(a-bi) \zeta^{-j}
\right) \, .
\end{align*}
In the last step we have paired up the complex conjugate factors.
Multiplying out these pairs we obtain a factorization
of $x^\ell$ over $\mathcal{O}_K$:
\begin{equation} \label{eqn:factor}
x^\ell= a \cdot \prod_{j=1}^{(p-1)/2}
\left(
(\theta_j+2) a^2+ (\theta_j-2) b^2
\right) \, .
\end{equation}
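Here, for each $j$, the product of the pair of conjugate factors has been computed as
\[
\left((a+bi)+(a-bi)\zeta^j\right)\left((a+bi)+(a-bi)\zeta^{-j}\right)
=2(a^2-b^2)+(a^2+b^2)\theta_j
=(\theta_j+2)a^2+(\theta_j-2)b^2 .
\]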
To ease notation, write
\begin{equation}\label{eqn:beta}
\beta_j= (\theta_j+2) a^2 + (\theta_j-2)b^2 \, , \qquad j=1,\dotsc,\frac{p-1}{2} \, .
\end{equation}
\begin{lem}\label{lem:factor}
Write $n=\ord_2(x) \ge 1$.
\begin{enumerate}
\item[(i)] If $p \nmid x$ then
\[
a= 2^{\ell n} \alpha^\ell,
\qquad
\beta_j \mathcal{O}_K={\mathfrak{b}}_j^\ell
\]
where $\alpha$ is a rational integer, and
$\alpha \mathcal{O}_K,{\mathfrak{b}}_1,\dotsc,{\mathfrak{b}}_{(p-1)/2}$ are pairwise coprime ideals of $\mathcal{O}_K$, all of which are coprime to $2p$.
\item[(ii)] If
$p \mid x$ then
\[
a = 2^{\ell n} p^{\kappa \ell -1} \alpha^\ell,
\qquad \beta_j \mathcal{O}_K=\mathfrak{p} \cdot {\mathfrak{b}}_j^\ell
\]
where $\kappa=\ord_p(x) \ge 1$, $\alpha$ is a rational integer and
$\alpha \cdot \mathcal{O}_K,{\mathfrak{b}}_1,\dotsc,{\mathfrak{b}}_{(p-1)/2}$ are pairwise coprime ideals of $\mathcal{O}_K$, all of which are coprime to $2p$.
\end{enumerate}
\end{lem}
\begin{proof}
As $z=a^2+b^2$ is odd, exactly one of $a$, $b$ is even.
Thus the $\beta_j$ are coprime to $2 \mathcal{O}_K$. We see from \eqref{eqn:factor}
that $2^{\ell n} \mid\mid a$, and hence that $b$ is odd.
As $a$, $b$ are coprime,
it is clear that the greatest common divisor of $a \mathcal{O}_K$
and $\beta_j \mathcal{O}_K$ divides $(\theta_j-2) \mathcal{O}_K=\mathfrak{p}$.
Moreover, for $k \ne j$, the greatest common
divisor of $\beta_j \mathcal{O}_K$ and
$\beta_k \mathcal{O}_K$ divides
\[
\left((\theta_j+2)(\theta_k-2)-(\theta_k+2)(\theta_j-2) \right) \mathcal{O}_K
=4 (\theta_k-\theta_j) \mathcal{O}_K= 4 \mathfrak{p}.
\]
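In more detail, since $a$ and $b$ are coprime, any common prime divisor of $\beta_j \mathcal{O}_K$ and $\beta_k \mathcal{O}_K$ divides both
\[
(\theta_k-2)\beta_j-(\theta_j-2)\beta_k=4(\theta_k-\theta_j)a^2
\quad\text{and}\quad
(\theta_k+2)\beta_j-(\theta_j+2)\beta_k=4(\theta_j-\theta_k)b^2 ,
\]
and hence divides $4(\theta_j-\theta_k)\mathcal{O}_K$.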
However, $\beta_j$ is odd, and so the greatest common
divisor of $\beta_j \mathcal{O}_K$ and
$\beta_k \mathcal{O}_K$ divides $\mathfrak{p}$.
Now part (i)
of the lemma follows immediately from \eqref{eqn:factor}.
So suppose $p \mid x$. For part (ii) we have to
check that $\mathfrak{p} \mid\mid \beta_j$. However, since
$(\theta_j-2)\mathcal{O}_K=\mathfrak{p}$, and $\theta_j+2 \in \mathcal{O}_K^\times$,
reducing \eqref{eqn:factor} modulo $\mathfrak{p}$ shows that $a^p \equiv 0 \pmod{\mathfrak{p}}$,
and hence that $p \mid a$. Since $a$, $b$ are coprime, it follows
that $\ord_\mathfrak{p}(\beta_j)=1$. Now, from \eqref{eqn:factor},
\[
\frac{(p-1)}{2} \ord_p(a)
=\ord_\mathfrak{p}(a)=\ell \ord_\mathfrak{p}(x) - \sum_{j=1}^{(p-1)/2} \ord_\mathfrak{p}(\beta_j)
=\frac{(p-1)}{2} (\kappa \ell -1)
\]
giving the desired exponent of $p$ in the factorization of $a$.
\end{proof}
\section{Proof of Theorem~\ref{thm:1} for $p=3$}\label{sec:peq3}
Suppose $p=3$. Then $K=\mathbb{Q}$
and $\theta:=\theta_1=-1$.
We treat first the case $3 \nmid x$.
By Lemma~\ref{lem:factor},
\[
a=2^{\ell n}\alpha^\ell, \qquad a^2-3b^2=\gamma^\ell
\]
for some coprime odd rational integers $\alpha$ and $\gamma$.
We obtain the equation
\[
2^{2\ell n}\alpha^{2\ell}-\gamma^\ell=3b^2.
\]
\citet[Theorem 1]{BS} show that the equation $x^n+y^n=3z^2$
has no solutions in coprime integers $x$, $y$, $z$ for $n \ge 4$;
applied with exponent $\ell$ to the coprime triple $(2^{2n}\alpha^{2},\,-\gamma,\,b)$, this gives us a contradiction.
We now treat $3 \mid x$. By Lemma~\ref{lem:factor},
\[
a=2^{\ell n} 3^{\kappa \ell -1} \alpha^\ell, \qquad a^2-3b^2=3\gamma^\ell
\]
for coprime rational integers $\alpha$, $\gamma$ that are also coprime to $6$.
Thus
\[
2^{2\ell n} 3^{2\kappa \ell -3} \alpha^{2\ell}-\gamma^\ell=b^2.
\]
Using the recipes of \citet[Sections 2, 3]{BS} we can attach a Frey
curve to such a triple $(\alpha,\gamma,b)$ whose mod $\ell$
representation arises from a classical newform of weight $2$
and level $6$. As there are no such newforms our contradiction
is complete.
\section{The Frey Curve}\label{sec:FreyCurve}
We shall henceforth suppose $p \ge 5$.
From now on, fix $1 \le j$, $k \le (p-1)/2$ with $j \ne k$.
The expressions $\beta_j$, $\beta_k$ shall be given by \eqref{eqn:beta}.
For each such choice of $(j,k)$ we shall construct a Frey curve.
The idea is that the three expressions $a^2$, $\beta_j$, $\beta_k$
are roughly $\ell$-th powers (Lemma~\ref{lem:factor}). Moreover they
are linear combinations of $a^2$ and $b^2$, and hence must be
linearly dependent.
Writing down this linear relation gives a Fermat equation (with coefficients) of signature $(\ell,\ell,\ell)$.
As in the work of Hellegouarch, Frey, Serre, Ribet, Kraus, and many others, one can associate to
such an equation a Frey elliptic curve whose mod $\ell$ representation has very little ramification.
In what follows we take care to scale the expressions $a^2$, $\beta_j$, $\beta_k$
appropriately so that the Frey curve is semistable.
\noindent \textbf{Case I: $p \nmid x$}.
Let
\begin{equation}
\label{eqn:uvw}
u=
\beta_j , \qquad
v=- \frac{(\theta_j-2)}{(\theta_k-2)} \beta_k , \qquad
w=\frac{4 (\theta_j-\theta_k)}{(\theta_k-2)} \cdot a^2.
\end{equation}
Then
$u+v+w=0$.
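Indeed, multiplying $u+v+w$ by $\theta_k-2$ and using \eqref{eqn:beta}, the coefficient of $b^2$ cancels, while the coefficient of $a^2$ is
\[
(\theta_k-2)(\theta_j+2)-(\theta_j-2)(\theta_k+2)+4(\theta_j-\theta_k)
=4(\theta_k-\theta_j)+4(\theta_j-\theta_k)=0 .
\]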
Moreover, by Lemmas~\ref{lem:cyc} and~\ref{lem:factor},
\[
u \mathcal{O}_K={\mathfrak{b}}_j^\ell, \qquad
v \mathcal{O}_K={\mathfrak{b}}_k^\ell, \qquad
w \mathcal{O}_K= 2^{2 \ell n+2} \cdot \alpha^{2 \ell} \mathcal{O}_K.
\]
We will let the Frey curve be
\begin{equation}\label{eqn:Frey}
E=E_{j,k} \; : \; Y^2=X(X-u)(X+v).
\end{equation}
For a non-zero ideal $\mathfrak{a}$, we define its \textbf{radical}, denoted by $\operatorname{Rad}(\mathfrak{a})$,
to be the product of the distinct prime ideal factors of $\mathfrak{a}$.
\begin{lem}\label{lem:Frey1}
Suppose $p \nmid x$. Let $E$ be the Frey curve \eqref{eqn:Frey}
where $u$, $v$, $w$ are given by \eqref{eqn:uvw}.
The curve $E$ is semistable, with
multiplicative reduction at all primes above $2$
and good reduction at $\mathfrak{p}$. It has
minimal discriminant
and conductor
\[
\mathcal{D}_{E/K}
=
2^{4 \ell n-4} {\alpha}^{4 \ell} {\mathfrak{b}}_j^{2\ell} {\mathfrak{b}}_k^{2\ell}, \qquad
\mathcal{N}_{E/K}= 2 \cdot \operatorname{Rad}(\alpha {\mathfrak{b}}_j {\mathfrak{b}}_k).
\]
\end{lem}
\begin{proof}
The invariants $c_4$, $c_6$, $\Delta$, $j(E)$ have their usual meanings
and are given by:
\begin{equation}\label{eqn:inv}
\begin{gathered}
c_4=16(u^2-vw)=16(v^2-wu)=16(w^2-uv),\\
c_6=-32(u-v)(v-w)(w-u), \qquad
\Delta=16 u^2 v^2 w^2, \qquad j(E)=c_4^3/\Delta \, .
\end{gathered}
\end{equation}
By Lemma~\ref{lem:factor}, we have $\alpha \mathcal{O}_K$, ${\mathfrak{b}}_j$ and ${\mathfrak{b}}_k$
are pairwise coprime, and all coprime to $2p$.
In particular $\mathfrak{p} \nmid \Delta$ and so $E$ has good reduction
at $\mathfrak{p}$. Moreover, $c_4$ and $\Delta$ are coprime away from $2$.
Hence the model in \eqref{eqn:Frey} is already semistable away from $2$.
Recall that $2^\ell \mid a$ and $2 \nmid b$. Thus
\[
u \equiv (\theta_j-2) b^2 \pmod{2^{2\ell}}, \quad
v \equiv -(\theta_j-2) b^2 \pmod{2^{2\ell}}, \quad
w \equiv 0 \pmod{2^{2\ell+2}}.
\]
It is clear that $\ord_\mathfrak{q}(j(E))<0$ for all $\mathfrak{q} \mid 2$. Thus $E$
has potentially multiplicative reduction at all $\mathfrak{q} \mid 2$.
Write $\gamma=-c_4/c_6$.
To show that $E$ has multiplicative reduction at $\mathfrak{q}$
it is enough to show that $K_\mathfrak{q}(\sqrt{\gamma})/K_\mathfrak{q}$
is an unramified extension \citep[Exercise V.5.11]{SilII}.
However,
\[
c_4/16=(u^2-vw) \equiv (\theta_j-2)^2 \cdot b^4 \pmod {2^{2\ell}}
\]
which shows that $c_4$ is a square in $K_\mathfrak{q}$. Moreover,
\[
-c_6/16=2 (u-v)(v-w)(w-u)
\equiv 4 \cdot (\theta_j-2)^3 \cdot b^6 \pmod{2^{2\ell+1}} \, .
\]
Thus $K_\mathfrak{q}(\sqrt{\gamma})=K_\mathfrak{q}(\sqrt{\theta_j-2})$.
As before, letting $r$ satisfy $2r \equiv j \pmod{p}$,
we have $\theta_j-2=(\zeta^r-\zeta^{-r})^2$ and so
$K_\mathfrak{q}(\sqrt{\gamma})$ is contained in the unramified extension $K_\mathfrak{q}(\zeta)$.
Hence $E$ has multiplicative reduction at $\mathfrak{q} \mid 2$.
Finally $2$ is unramified in $K$, and so $\ord_\mathfrak{q}(c_4)=\ord_2(16)=4$.
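Moreover, $\Delta \mathcal{O}_K=16\, u^2v^2w^2 \mathcal{O}_K = 2^{4\ell n+8} {\alpha}^{4\ell} {\mathfrak{b}}_j^{2\ell} {\mathfrak{b}}_k^{2\ell}$.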
Hence $\mathcal{D}_{E/K}=(\Delta/2^{12}) \cdot \mathcal{O}_K$ as required.
\end{proof}
\noindent \textbf{Case II: $p \mid x$}.
Let
\begin{equation}
\label{eqn:uvwp}
u=
\frac{\beta_j}{(\theta_j-2)}, \qquad
v=- \frac{\beta_k}{(\theta_k-2)}, \qquad
w=\frac{4 (\theta_j-\theta_k)}{(\theta_j-2)(\theta_k-2)} \cdot a^2.
\end{equation}
Then, from Lemmas~\ref{lem:cyc} and~\ref{lem:factor},
\[
u \mathcal{O}_K={\mathfrak{b}}_j^\ell, \qquad
v \mathcal{O}_K={\mathfrak{b}}_k^\ell, \qquad
w \mathcal{O}_K= 2^{2 \ell n+2} \cdot \mathfrak{p}^{\delta} \cdot \alpha^{2 \ell} \mathcal{O}_K,
\]
where
\begin{equation}\label{eqn:delta}
\delta=(\kappa \ell-1)(p-1)-1 \, .
\end{equation}
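Here the exponent of $\mathfrak{p}$ in $w\mathcal{O}_K$ arises as follows: since $p^{\kappa\ell-1} \mid\mid a$ and $p\mathcal{O}_K=\mathfrak{p}^{(p-1)/2}$, the factor $a^2$ contributes $\mathfrak{p}^{(\kappa\ell-1)(p-1)}$, while $(\theta_j-\theta_k)/\bigl((\theta_j-2)(\theta_k-2)\bigr)$ contributes $\mathfrak{p}^{-1}$; similarly $u\mathcal{O}_K$ and $v\mathcal{O}_K$ are coprime to $\mathfrak{p}$ because $\beta_j\mathcal{O}_K=\mathfrak{p}\cdot{\mathfrak{b}}_j^\ell$ and $(\theta_j-2)\mathcal{O}_K=\mathfrak{p}$.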
Again $u+v+w=0$ and the Frey curve is given by \eqref{eqn:Frey}.
\begin{lem}\label{lem:Frey2}
Suppose $p \mid x$. Let $E$ be the Frey curve \eqref{eqn:Frey}
where $u$, $v$, $w$ are given by \eqref{eqn:uvwp}.
The curve $E$ is semistable, with
multiplicative reduction at $\mathfrak{p}$ and at
all primes above $2$. It has
minimal discriminant
and conductor
\[
\mathcal{D}_{E/K}
=
2^{4 \ell n-4} \mathfrak{p}^{2\delta} {\alpha}^{4 \ell} {\mathfrak{b}}_j^{2\ell} {\mathfrak{b}}_k^{2\ell},
\qquad \mathcal{N}_{E/K}=2 \mathfrak{p} \cdot \operatorname{Rad}(\alpha {\mathfrak{b}}_j {\mathfrak{b}}_k).
\]
\end{lem}
\begin{proof}
The proof is an
easy modification of the proof of Lemma~\ref{lem:Frey1}.
\end{proof}
\section{A closer look at the Frey Curve for $p \equiv 1 \pmod{4}$}
\label{sec:FreyCurve2}
In this section we shall suppose that $p \equiv 1 \pmod{4}$.
The Galois group of $K=\mathbb{Q}(\zeta+\zeta^{-1})$ is cyclic
of order $(p-1)/2$. Thus
the field $K=\mathbb{Q}(\zeta+\zeta^{-1})$ has a unique involution
$\tau \in \Gal(K/\mathbb{Q})$ and we let $K^\prime$ be the
subfield of degree $(p-1)/4$ that is fixed by this involution.
In the previous section we let $1 \le j$, $k \le (p-1)/2$
with $j \ne k$. In this section we shall impose the further
condition that $\tau(\theta_j)=\theta_k$.
Now a glance at the definition \eqref{eqn:Frey} of the Frey curve $E$
and the formulae \eqref{eqn:uvwp} for $u$ and $v$
in the case $p \mid x$ shows that the curve $E$ is in fact defined over
$K^\prime$. This is not true in the case $p \nmid x$, but we can take
a twist of the Frey curve so that it is defined over $K^\prime$.
\noindent \textbf{Case I: $p \nmid x$}.
Let
\begin{equation}\label{eqn:uvwd}
u^\prime=
(\theta_k-2)\beta_j , \qquad
v^\prime=- (\theta_j-2)\beta_k, \qquad
w^\prime=4 (\theta_j-\theta_k) \cdot a^2,
\end{equation}
and let
\[
E^\prime \; : \; Y^2=X(X-u^\prime)(X+v^\prime) \, .
\]
Clearly the coefficients of
$E^\prime$ are invariant under $\tau$ and so $E^\prime$
is defined over $K^\prime$. Moreover, $E^\prime/K$
is the quadratic twist of $E/K$ by $(\theta_k-2)$.
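To see this, note that $u^\prime=(\theta_k-2)u$ and $v^\prime=(\theta_k-2)v$, so the substitution $(X,Y)\mapsto\bigl((\theta_k-2)X,\,(\theta_k-2)^{3/2}Y\bigr)$ identifies $E$ with $E^\prime$ over $K\bigl(\sqrt{\theta_k-2}\,\bigr)$; note also that $\tau(u^\prime)=-v^\prime$ and $\tau(v^\prime)=-u^\prime$, which is why the Weierstrass coefficients of $E^\prime$ are $\tau$-invariant.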
Let $\mathfrak{B}$ be the unique prime of $K^\prime$ above $p$.
Let
\[
{\mathfrak{b}}_{j,k}=\norm_{K/K^\prime}({\mathfrak{b}}_j)
=\norm_{K/K^\prime}({\mathfrak{b}}_k) \, .
\]
It follows from Lemma~\ref{lem:factor} that the $\mathcal{O}_{K^\prime}$-ideal
${\mathfrak{b}}_{j,k}$ is coprime to $\alpha$ and to $2p$.
An easy calculation leads us to the following lemma.
\begin{lem}\label{lem:Frey3}
Suppose $p \nmid x$. Let $E^\prime/K^\prime$ be the above Frey elliptic curve.
Then $E^\prime$ is semistable away from $\mathfrak{B}$,
with minimal discriminant and conductor
\[
\mathcal{D}_{E^\prime/K^\prime}
=
2^{4 \ell n-4} \mathfrak{B}^3 {\alpha}^{4 \ell} {\mathfrak{b}}_{j,k}^{2\ell}, \qquad
\mathcal{N}_{E^\prime/K^\prime}= 2 \cdot \mathfrak{B}^2 \cdot \operatorname{Rad}(\alpha {\mathfrak{b}}_{j,k}).
\]
\end{lem}
\noindent \textbf{Case II: $p \mid x$}.
Another straightforward computation yields the following lemma.
\begin{lem}\label{lem:Frey4}
Suppose $p \mid x$. Let $E^\prime=E$ be the Frey curve in Lemma~\ref{lem:Frey2}.
Then $E^\prime$ is defined over $K^\prime$.
The curve $E^\prime/K^\prime$ is semistable
with minimal discriminant and conductor
\[
\mathcal{D}_{E^\prime/K^\prime}
=
2^{4 \ell n-4} \mathfrak{B}^\delta {\alpha}^{4 \ell} {\mathfrak{b}}_{j,k}^{2\ell}, \qquad
\mathcal{N}_{E^\prime/K^\prime}= 2 \cdot \mathfrak{B} \cdot \operatorname{Rad}(\alpha {\mathfrak{b}}_{j,k}),
\]
where $\delta$ is given by \eqref{eqn:delta}.
\end{lem}
\noindent \textbf{Remark.} Clearly $E$
has full $2$-torsion over $K$.
The curve $E^\prime$
has a point of order $2$ over $K^\prime$, but not
necessarily full $2$-torsion.
\section{Proof of Theorem~\ref{thm:2}}\label{sec:proof2}
\begin{lem}\label{lem:modularity}
Let $p$ be an odd prime. There is an ineffective
constant $C_p^{(1)}$ depending on $p$ such
that for odd primes $\ell$, $m \ge C_p^{(1)}$,
and any non-trivial primitive solution $(x,y,z)$
of \eqref{eqn:main}, the Frey curve $E/K$ as
in Section~\ref{sec:FreyCurve} is modular. If $p \equiv 1 \pmod{4}$
then under the same assumptions, the Frey curve $E^\prime/K^\prime$ as in Section~\ref{sec:FreyCurve2} is modular.
\end{lem}
\begin{proof}
\citet*{FLS} show that for any totally real field $K$
there are at most finitely many non-modular $j$-invariants.
Let $j_1,\dotsc,j_r$ be the values of these $j$-invariants.
Let $\mathfrak{q}$ be a prime of $K$ above $2$. By Lemmas~\ref{lem:Frey1}
and~\ref{lem:Frey2},
we have $\ord_\mathfrak{q}(j(E)) =-(4 \ell n -4)$ with $n \ge 1$. Thus for $\ell$, $m$
sufficiently large we have $\ord_\mathfrak{q}(j(E)) < \ord_\mathfrak{q}(j_i)$
for $i=1,\dotsc,r$, completing the proof.
\end{proof}
\noindent \textbf{Remarks.}
\begin{itemize}[leftmargin=5mm]
\item The argument in \citet*{FLS} relies on Faltings' Theorem
(finiteness of the number of rational points on a curve of genus $\ge 2$)
to deduce finiteness
of the list of possibly non-modular $j$-invariants.
It is for this reason that the constant $C_p^{(1)}$
(and hence the constant $C_p$ in Theorem~\ref{thm:2})
is ineffective.
\item In the above argument, it seems that it is enough
to suppose that $\ell$ is sufficiently large without
an assumption on $m$. However, in Section~\ref{sec:Descent}
we swapped the terms $x^{2\ell}$ and $y^{2m}$ in \eqref{eqn:main}
if needed
to ensure that $x$ is even. Thus in the above argument
we need to suppose that
both $\ell$ and $m$ are sufficiently large.
\end{itemize}
We shall make use of the following
result due to \citet[Theorem 2]{FSirred}.
It is a variant of results proved by
\citet{KrausNF} and by \citet{David}. All these build
on the celebrated uniform boundedness theorem of \citet{Merel}.
\begin{thm}\label{thm:irred}
Let $K$ be a totally real field. There is an effectively
computable constant $C_K$ such that for a prime $\ell>C_K$,
and for an elliptic curve $E/K$ semistable at all $\lambda \mid \ell$,
the mod $\ell$ representation $\overline{\rho}_{E,\ell} \, : \, G_K \rightarrow
\GL_2(\mathbb{F}_\ell)$ is irreducible.
\end{thm}
In \citet[Theorem 2]{FSirred} it is assumed that $K$ is
Galois as well as totally real. Theorem~\ref{thm:irred}
follows immediately on replacing $K$ with its Galois closure.
\begin{lem}\label{lem:ll}
Let $E/K$ be the Frey curve given in Section~\ref{sec:FreyCurve}.
Suppose $\overline{\rho}_{E,\ell}$
is irreducible and $E$ is modular. Then
$\overline{\rho}_{E,\ell}\sim \overline{\rho}_{\mathfrak{f},\lambda}$
for some Hilbert cuspidal eigenform $\mathfrak{f}$ over $K$ of parallel weight $2$
that is new at level $\mathcal{N}_\ell$, where
\[
\mathcal{N}_\ell=
\begin{cases}
2 \mathcal{O}_K & \text{if $p \nmid x$} \\
2 \mathfrak{p} & \text{if $p \mid x$} \, .
\end{cases}
\]
Here $\lambda \mid \ell$ is a prime of $\mathbb{Q}_\mathfrak{f}$, the field
generated over $\mathbb{Q}$ by the eigenvalues of $\mathfrak{f}$.
If $p \equiv 1 \pmod{4}$, let $E^\prime/K^\prime$ be the Frey
curve given in Section~\ref{sec:FreyCurve2}.
Suppose $\overline{\rho}_{E^\prime,\ell}$
is irreducible and $E^\prime$ is modular. Then
$\overline{\rho}_{E^\prime,\ell}\sim \overline{\rho}_{\mathfrak{f},\lambda}$
for some Hilbert cuspidal eigenform $\mathfrak{f}$ over $K^\prime$ of parallel weight $2$
that is new at level $\mathcal{N}^\prime_\ell$, where
\[
\mathcal{N}^\prime_\ell=
\begin{cases}
2 \mathfrak{B}^2 & \text{if $p \nmid x$} \\
2 \mathfrak{B} & \text{if $p \mid x$} \, .
\end{cases}
\]
\end{lem}
\begin{proof}
This is immediate from Lemmas~\ref{lem:Frey1}, \ref{lem:Frey2},
\ref{lem:Frey3} and \ref{lem:Frey4},
and a standard level lowering recipe derived in \cite[Section 2.3]{FS}
from the work of Jarvis, Fujiwara and Rajaei. Alternatively,
one could use modern modularity lifting theorems which
integrate level lowering with modularity lifting, as for example
in \cite{BD}.
\end{proof}
\subsection*{Proof of Theorem~\ref{thm:2}}
Let $K=\mathbb{Q}(\zeta+\zeta^{-1})$ and $E$ be the Frey curve constructed
in Section~\ref{sec:FreyCurve}. Let $C^{(1)}_p$ be the constant
in Lemma~\ref{lem:modularity}, and
$C^{(2)}_p=C_K$ be the constant in Theorem~\ref{thm:irred}.
Let $C_p=\max(C^{(1)}_p, C^{(2)}_p)$.
Suppose that $\ell$, $m \ge C_p$.
Then $\overline{\rho}_{E,\ell}$ is irreducible and modular,
and it follows from Lemma~\ref{lem:ll} that
$\overline{\rho}_{E,\ell}\sim \overline{\rho}_{\mathfrak{f},\lambda}$
for some Hilbert eigenform over $K$ of parallel weight $2$
that is new at level $\mathcal{N}_\ell$, where $\mathcal{N}_\ell=2 \mathcal{O}_K$
or $2 \mathfrak{p}$.
Now a standard argument (cf.\ \citet*[Section 4]{BS}
or \citet[Section 3]{Kra97} or \citet[Section 9]{IHP}) shows that,
after enlarging $C_p$ by an effective amount, we may
suppose that the field of eigenvalues of $\mathfrak{f}$ is $\mathbb{Q}$.
Observe that the level $\mathcal{N}_\ell$ is non-square-full (meaning there
is a prime $\mathfrak{q}$ at which the level has valuation $1$).
As the level is non-square-full and the field of
eigenvalue is $\mathbb{Q}$, the eigenform
$\mathfrak{f}$ is known to correspond to some
elliptic curve $E_1/K$ of conductor $\mathcal{N}_\ell$ \citep{Blasius},
and $\overline{\rho}_{E,\ell} \sim \overline{\rho}_{E_1,\ell}$.
Finally, and again by standard arguments (loc.\ cit.),
we may enlarge $C_p$ by an effective constant so that the isomorphism
$\overline{\rho}_{E,\ell} \sim \overline{\rho}_{E_1,\ell}$ forces
$E_1$ to either have full $2$-torsion, or to be isogenous
to an elliptic curve $E_2/K$ that has full $2$-torsion.
This contradicts the hypothesis of Theorem~\ref{thm:2} that there
are no such elliptic curves with conductor $2\mathcal{O}_K$, $2 \mathfrak{p}$,
and completes the proof of the first part of the theorem.
The proof of the second part is similar, and makes use
of the Frey curve $E^\prime/K^\prime$.
\section{Modularity of the Frey Curve For $5 \le p \le 13$}
\begin{lem}\label{lem:modularitysmallp}
If $p=5$, $7$, $11$ or $13$ then the Frey curve $E/K$
in Section~\ref{sec:FreyCurve} is modular.
If $p=5$, $13$ then the Frey curve $E^\prime/K^\prime$
in Section~\ref{sec:FreyCurve2}
is modular.
\end{lem}
\begin{proof}
Recall that $E$ is defined over $K=\mathbb{Q}(\zeta+\zeta^{-1})$
where $\zeta$ is a primitive $p$-th root of unity.
If $p=5$ then $K=\mathbb{Q}(\sqrt{5})$, and modularity
of elliptic curves over real quadratic fields was recently
established by \citet*{FLS}.
For $p=7$, $11$, $13$, the prime $5$ is unramified in $K$,
the class number of $K$
is $1$, and condition (c) of Theorem~\ref{thm:modularity} is easily
verified. Thus $E$ is modular.
For $p=13$, the curves $E$ and $E^\prime$ are at
worst quadratic twists over $K$,
and $K/K^\prime$ is quadratic. The modularity of $E^\prime/K^\prime$
follows from the modularity of $E/K$
and the cyclic base change theorem of \citep{Langlands}.
For $p=5$ we could use the same argument, or more simply note
that $K^\prime=\mathbb{Q}$, and conclude by the modularity
theorem over the rationals \citep*{BCDT}.
\end{proof}
\section{Irreducibility of $\overline{\rho}_{E,\ell}$ for $5 \le p \le 13$}\label{sec:Irred}
We let $E$ be the Frey curve as in Section~\ref{sec:FreyCurve},
and $p=5$, $7$, $11$, $13$. To
apply Lemma~\ref{lem:ll} we need to prove the
irreducibility of $\overline{\rho}_{E,\ell}$ for $\ell \ge 5$;
equivalently, we need to show that $E$ does not have an $\ell$-isogeny
for $\ell \ge 5$. Alas, there is not yet a uniform boundedness
theorem for isogenies.
The papers of \cite{KrausNF}, \cite{David}, \cite{FSirred}
do give effective bounds $C_K$ such that for $\ell>C_K$ the representation
$\overline{\rho}_{E,\ell}$ is irreducible, however these bounds
are too large for our present purpose. We will refine the arguments
in those papers making use of the fact that the curve $E$ is semistable,
and the number fields $K=\mathbb{Q}(\zeta+\zeta^{-1})$ all have narrow class number $1$.
Before doing this, we relate, for $p=5$ and $13$,
the representations $\overline{\rho}_{E,\ell}$
and $\overline{\rho}_{E^\prime,\ell}$ where $E^\prime$ is the Frey
curve in Section~\ref{sec:FreyCurve2}.
\begin{lem}\label{lem:compare}
Suppose $p=5$ or $13$. Let $\tau$ be the unique
involution of $K$ and $K^\prime$ the subfield fixed by it.
Let $j$ and $k$ satisfy $\tau(\theta_j)=\theta_k$. Let
$E/K$ be the Frey elliptic curve in Section~\ref{sec:FreyCurve}
and $E^\prime/K^\prime$ the
Frey curve in Section~\ref{sec:FreyCurve2}, associated to this
pair $(j,k)$. Then
$\overline{\rho}_{E,\ell}$ is irreducible as a representation
of $G_K$ if and only if
$\overline{\rho}_{E^\prime,\ell}$ is irreducible as a representation
of $G_{K^\prime}$.
\end{lem}
\begin{proof}
Note that $K/K^\prime$ is a quadratic extension and $E/K$ is a quadratic
twist of $E^\prime/K$. Thus $\overline{\rho}_{E,\ell}$ is a twist
of $\overline{\rho}_{E^\prime,\ell} \vert_{G_K}$ by a quadratic character.
If $\overline{\rho}_{E^\prime,\ell}$ is reducible as a
representation of $G_{K^\prime}$ then certainly
$\overline{\rho}_{E,\ell}$ is reducible as a representation of $G_K$.
Conversely, suppose $\overline{\rho}_{E^\prime,\ell}$ is irreducible
as a representation of $G_{K^\prime}$. We would like to show that
$\overline{\rho}_{E,\ell}$ is irreducible as a representation of $G_K$;
it is enough to show that $\overline{\rho}_{E^\prime,\ell} \vert_{G_K}$
is irreducible. Let $\mathfrak{q} \mid 2$ be a prime of $K^\prime$.
Then
$\ord_\mathfrak{q}(j(E^\prime))=4-4\ell n$ which is negative but not divisible
by $\ell$. Thus $\overline{\rho}_{E^\prime,\ell}(G_{K^\prime})$
contains an element of order $\ell$ \citep[Proposition V.6.1]{SilII}.
By Dickson's classification \citep{SwD}
of subgroups of $\GL_2(\mathbb{F}_\ell)$ we see
that $\overline{\rho}_{E^\prime,\ell}(G_{K^\prime})$
must contain $\SL_2(\mathbb{F}_\ell)$. The latter is a simple group,
and must therefore be contained in
$\overline{\rho}_{E^\prime,\ell}(G_K)$. This completes the proof.
\end{proof}
\begin{lem}\label{lem:tors}
Suppose $\overline{\rho}_{E,\ell}$ is reducible. Then
either $E/K$ has non-trivial $\ell$-torsion, or is $\ell$-isogenous
to an elliptic curve defined over $K$ that has non-trivial
$\ell$-torsion.
\end{lem}
\begin{proof}
Suppose $\overline{\rho}_{E,\ell}$ is reducible, and write
\[
\overline{\rho}_{E,\ell}\sim
\begin{pmatrix}
\psi_1 & * \\
0 & \psi_2
\end{pmatrix} \, .
\]
We shall show that either $\psi_1$ or $\psi_2$ is trivial.
It follows in the former case that $E$ has non-trivial
$\ell$-torsion, and in the latter case that the $K$-isogenous
curve $E/\Ker(\psi_1)$ has non-trivial $\ell$-torsion.
As $K$ has narrow class number $1$ for $p=5$, $7$, $11$, $13$,
it is sufficient to show that one of $\psi_1$, $\psi_2$ is
unramified at all finite places.
As $E$ is semistable, the characters $\psi_1$ and $\psi_2$
are unramified away from $\ell$ and the infinite places.
Let $S_\ell$ be the set of primes $\lambda \mid \ell$ of $K$.
Let $S \subset S_\ell$ be the set of $\lambda \in S_\ell$
such that $\psi_1$ is ramified at $\lambda$. Then (cf.\ the
proof of Theorem~\ref{thm:modularity}) $\psi_2$
is ramified exactly at the primes in $S_\ell \setminus S$. Moreover,
$\psi_1 \vert_{I_\lambda}= \chi_\ell \vert_{I_\lambda}$
for all $\lambda \in S$, and
$\psi_2 \vert_{I_\lambda}= \chi_\ell \vert_{I_\lambda}$
for all $\lambda \in S_\ell \setminus S$.
It is enough to show that either $S$ is empty or $S_\ell \setminus S$ is empty.
Suppose $S$ is a non-empty proper subset of $S_\ell$.
Fix $\lambda \in S$ and let $D=D_\lambda \subset G=\Gal(K/\mathbb{Q})$ be the
decomposition group of $\lambda$; by definition
$\lambda^\sigma=\lambda$ for all $\sigma \in D_\lambda$.
As $K/\mathbb{Q}$ is abelian and Galois, $D_{\lambda^\prime}=D$
for all $\lambda^\prime \in S_\ell$, and $G/D$ acts
transitively and freely on $S_\ell$.
Fix a set $T$ of coset representatives for $G/D$.
Then there is a subset $T^\prime \subset T$
such that
\[
S=\{ \lambda^{\tau^{-1}}\; :\; \tau \in T^\prime\},
\qquad
S_\ell\setminus S=\{ \lambda^{\tau^{-1}} \; : \; \tau \in T \setminus T^\prime\}.
\]
As $S$ is a non-empty proper subset of $S_\ell$, we have that
$T^\prime$ is a non-empty proper subset of $T$.
Now, by Proposition~\ref{prop:Krauscf}, for any totally positive unit $u$
of $\mathcal{O}_K$,
\[
\prod_{\tau \in T^\prime} \norm_{\mathbb{F}_\lambda/\mathbb{F}_\ell} (u+\lambda^{\tau^{-1}})
=\overline{1} \, .
\]
But
\begin{equation*}
\begin{split}
\norm_{\mathbb{F}_\lambda/\mathbb{F}_\ell} (u+\lambda^{\tau^{-1}})
&=\prod_{\sigma \in D} (u+\lambda^{\tau^{-1}})^\sigma\\
&=\prod_{\sigma \in D} (u^\sigma+\lambda^{\tau^{-1}})\\
&= \left(
\prod_{\sigma \in D} (u^{\sigma \tau}+\lambda)
\right)^{\tau^{-1}} \\
&=
\prod_{\sigma \in D} (u^{\sigma \tau}+\lambda)
\qquad \text{(as this expression belongs to $\mathbb{F}_\ell$).}
\end{split}
\end{equation*}
Let
\[
B_{T^\prime,D}(u)=\norm_{K/\mathbb{Q}}\left( \left(\prod_{\tau \in T^\prime,~\sigma \in D} u^{\sigma \tau}
\right) -1 \right) \, .
\]
It follows that $\ell \mid B_{T^\prime,D}(u)$. Now let $u_1,\dotsc,u_d$ be a system
of totally positive units. Then $\ell$ divides
\[
B_{T^\prime,D}(u_1,\dotsc,u_d)=\gcd \left( B_{T^\prime,D}(u_1),\dotsc,B_{T^\prime,D}(u_d) \right).
\]
To sum up, if the lemma is false for $\ell$, then there is some subgroup $D$ of $G$
and some non-empty proper subset $T^\prime$ of $G/D$ such that $\ell$ divides $B_{T^\prime,D}(u_1,\dotsc,u_d)$.
The proof of the lemma is completed by a computation that we now describe.
For each of
$p=5$, $7$, $11$, $13$ we fix a basis
$u_1,\dotsc,u_d$ for the system of totally positive units
of $\mathcal{O}_K$.
We run through the subgroups $D$ of $G=\Gal(K/\mathbb{Q})$.
For each subgroup $D$ we fix a set of coset representatives $T$, and run
through the non-empty proper subsets $T^\prime$ of $T$, computing $B_{T^\prime,D}(u_1,\dotsc,u_d)$.
We found that for $p=5$, $7$ the possible values for $B_{T^\prime,D}(u_1,\dotsc,u_d)$ are all $1$;
for $p=11$ they are $1$ and $23$; and for $p=13$ they are $1$, $5^2$ and $3^5$. Thus the proof
is complete for $p=5$, $7$ and it remains to deal with $(p,\ell)=(11,23)$, $(13,5)$.
For each of these possibilities we ran through the non-empty proper
$S \subset S_\ell$ and checked that there is some totally positive unit
$u$ such that $\prod_{\lambda \in S} \norm(u+\lambda) \ne \overline{1}$.
This completes the proof.
\end{proof}
Suppose $\overline{\rho}_{E,\ell}$ is reducible. It follows
from Lemma~\ref{lem:tors} that there is an elliptic curve
$E_1/K$ (which is either $E$ or $\ell$-isogenous to $E$)
such that $E_1(K)$ has a subgroup isomorphic to
$\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/{2\ell \mathbb{Z}}$. Such an elliptic curve
is isogenous~\footnote{At the suggestion
of one of the referees we prove this statement.
Let $P_1$, $P_2 \in E_1(K)$ be independent points of order $2$.
Let $Q$ be a solution to the equation $2X=P_1$. Then
$Q$ has order $4$ and the complete set of solutions is
$\{ Q, Q+P_2, 3Q, 3Q+P_2\}$ which is Galois-stable.
Let $E_2=E_1/\langle P_2 \rangle$ and let $\phi: E_1 \rightarrow E_2$
be the induced isogeny. As $\Ker(\phi) \cap \langle Q \rangle=0$,
we see that $Q^\prime=Q+\langle P_2 \rangle$ has order $4$.
Moreover, the set $\{ Q^\prime, 3 Q^\prime\}$ is Galois-stable,
so $\langle Q^\prime \rangle$ is a $K$-rational cyclic subgroup
of order $4$ on $E_2$. The point of order $\ell$ on $E_1$
survives the isogeny, and so $E_2$ has a $K$-rational cyclic
subgroup
of order $4\ell$.}
to an elliptic curve $E_2/K$ with
a $K$-rational cyclic subgroup isomorphic to $\mathbb{Z}/4\ell \mathbb{Z}$.
Thus we obtain a non-cuspidal $K$-point on the curves
$X_0(\ell)$, $X_1(\ell)$, $X_0(2\ell)$, $X_1(2\ell)$,
$X_0(4\ell)$, $X_1(2,2\ell)$. To achieve a contradiction
it is enough to show that there are no non-cuspidal $K$-points
on one of these curves. For small values of $\ell$, we have
found \texttt{Magma}'s \lq small modular curves package\rq,
as well as \texttt{Magma}'s functionality for computing Mordell--Weil
groups of elliptic curves over number fields invaluable.
Four of the modular
curves of interest to us happen to be elliptic curves.
The aforementioned \texttt{Magma} package gives
the following models:
\begin{gather}
\label{eqn:x020}
X_0(20) \; : \;
y^2 = x^3 + x^2 + 4x + 4 \qquad \text{(Cremona label \texttt{20a1})},\\
X_0(14) \; : \;
y^2 + xy + y = x^3 + 4x - 6 \qquad \text{(Cremona label \texttt{14a1})},\\
\label{eqn:x011}
X_0(11) \; : \;
y^2 + y = x^3 - x^2 - 10x - 20 \qquad \text{(Cremona label \texttt{11a1})},\\
X_0(19) \; : \;
y^2 + y = x^3 + x^2 - 9x - 15 \qquad \text{(Cremona label \texttt{19a1})} .
\end{gather}
\begin{lem}\label{lem:35}
Let $p=5$. Then $\overline{\rho}_{E,\ell}$ is irreducible.
Moreover, $\overline{\rho}_{E^\prime,\ell}$ is irreducible.
\end{lem}
\begin{proof}
Suppose $\overline{\rho}_{E,\ell}$ is reducible. By the above there is an elliptic
curve $E_2$ over the quadratic field $K=\mathbb{Q}(\sqrt{5})$,
with a $K$-rational subgroup isomorphic to
$\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/{2 \ell \mathbb{Z}}$.
From the classification of torsion
subgroups of elliptic curves over quadratic fields
\citep{Kamienny}
we deduce that $\ell \le 5$. However we are assuming
throughout that $\ell \ge 5$ and $\ell \ne p$.
This gives a contradiction as $p=5$.
Thus $\overline{\rho}_{E,\ell}$
is irreducible. The irreducibility of $\overline{\rho}_{E^\prime,\ell}$
follows from Lemma~\ref{lem:compare}.
\end{proof}
\begin{lem}\label{lem:p7}
Let $p=7$. Then $\overline{\rho}_{E,\ell}$ is irreducible.
\end{lem}
\begin{proof}
In this case $K$ is a cubic field.
By the
classification of cyclic $\ell$-torsion on elliptic
curves over cubic fields \citep{Parent1,Parent2},
we know $\ell \le 13$. Since $\ell \ne p$,
we need only deal with the case $\ell=5$, $11$, $13$.
To eliminate $\ell=5$ and $\ell=11$ we computed the $K$-points
on the
modular curves $X_0(20)$
and $X_0(11)$.
These both have rank $0$ and their $K$-points
are in fact defined over $\mathbb{Q}$.
The $\mathbb{Q}$-points of $X_0(20)$ are cuspidal, and thus $\ell \ne 5$.
The three non-cuspidal $\mathbb{Q}$-points on $X_0(11)$
all have integral $j$-invariants. As our curve $E$
has multiplicative reduction at $2\mathcal{O}_K$,
it follows that $\ell \ne 11$.
We suppose $\ell=13$. We now apply
\citet[Theorem 1]{BN}. That theorem gives a useful and practical
criterion for ruling out the existence of
torsion subgroups $\mathbb{Z}/m\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$ on elliptic curves
over a given number field $K$ (the remarks at the end of
Section 2 of \citep{BN} are useful when applying that theorem).
The theorem involves making certain choices and we indicate
them briefly; in the notation of the theorem, we take $A=\mathbb{Z}/26\mathbb{Z}$,
$L=\mathbb{Q}$, $m=1$, $n=26$, $X=X^\prime=X_1(26)$, $p=\mathfrak{p}_0=7$.
To apply the theorem we need the fact that the gonality
of $X_1(26)$ is $6$ \citep*{DH}, and that its Jacobian
has Mordell--Weil rank $0$ over $\mathbb{Q}$ \citep[page 11]{BN}.
We merely check that conditions (i)--(vi) of \cite[Theorem 1]{BN}
are satisfied, and conclude that there are no elliptic
curves over $K$ with a subgroup isomorphic to $\mathbb{Z}/26\mathbb{Z}$.
This completes the proof.
\end{proof}
\begin{lem}
Let $p=11$. Then $\overline{\rho}_{E,\ell}$ is irreducible.
\end{lem}
\begin{proof}
Now $K$ has degree $5$.
By the
classification of cyclic $\ell$-torsion on elliptic
curves over quintic fields \citep{DKSS} we know
that $\ell \le 19$. As $\ell \ne p$ we need to
deal with $\ell=5$, $7$, $13$, $17$, $19$.
The elliptic
curves $X_0(20)$, $X_0(14)$ and
$X_0(19)$ have rank $0$ over $K$ and this allowed us to
quickly eliminate $\ell=5$, $7$, $19$.
Suppose $\ell=13$.
We again apply \citet[Theorem 1]{BN},
with choices $A=\mathbb{Z}/26\mathbb{Z}$,
$L=\mathbb{Q}$, $m=1$, $n=26$, $X=X^\prime=X_1(26)$, $p=\mathfrak{p}_0=11$
(with Mordell--Weil and gonality information as in
the proof of Lemma~\ref{lem:p7}).
This shows that there are no elliptic curves over $K$
with a subgroup isomorphic to $\mathbb{Z}/26\mathbb{Z}$,
allowing us to eliminate $\ell=13$.
Suppose $\ell=17$.
We apply the same theorem with choices $A=\mathbb{Z}/34\mathbb{Z}$,
$L=\mathbb{Q}$, $m=1$, $n=34$, $X=X^\prime=X_1(34)$, $p=\mathfrak{p}_0=11$.
For this we need the fact that $X$ has gonality $10$
\citep{DH} and that the rank of $J_1(34)$ over $\mathbb{Q}$
is $0$ \citep[page 11]{BN}. Applying the theorem shows
that there are no elliptic curves over $K$ with
a subgroup isomorphic to $\mathbb{Z}/34\mathbb{Z}$.
This completes the proof.
\end{proof}
It remains to deal with $p=13$. Unfortunately the field $K$
in this case is sextic; the known bound \citep{DKSS}
for cyclic $\ell$-torsion over sextic fields is $\ell \le 37$,
and we have been unable to deal with the case $\ell=37$ directly
over the sextic field.
We therefore proceed a little differently. We are in fact most
interested in showing the irreducibility of $\overline{\rho}_{E^\prime,\ell}$
where $E^\prime$ is the Frey curve
defined over the degree $3$ subfield $K^\prime$.
\begin{lem}\label{lem:nmidx}
Let $p=13$.
Then $\overline{\rho}_{E^\prime,\ell}$ is irreducible.
\end{lem}
\begin{proof}
Suppose $\overline{\rho}_{E^\prime,\ell}$ is reducible.
We shall treat the cases $13 \mid x$ and $13 \nmid x$ separately.
Suppose first that $13 \mid x$. Then the curve $E^\prime$ over
the field $K^\prime$ is semistable (Lemma~\ref{lem:Frey4}).
It is now straightforward to adapt the proof of Lemma~\ref{lem:tors}
to show that $E^\prime$ has non-trivial $\ell$-torsion,
or is $\ell$-isogenous to an elliptic curve with non-trivial
$\ell$-torsion. Thus there is an elliptic curve over $K^\prime$
with a point of exact order $2\ell$. Now $K^\prime$
is cubic, so by \citep{Parent1,Parent2} we have $\ell \le 13$.
As $\ell \ne p$, it remains to deal with the cases $\ell=5$, $7$, $11$.
The elliptic curves $X_0(14)$ and $X_0(11)$ have rank zero over $K^\prime$,
and in fact their $K^\prime$-points are the same as their $\mathbb{Q}$-points.
This easily allows us to eliminate $\ell=7$ and $\ell=11$ as before.
The curve $X_0(10)$ has genus $0$ so we need a different approach,
and we will leave this case to the end of the proof (recall that
$E^\prime$ does not necessarily have full $2$-torsion over $K^\prime$).
Now suppose that $13 \nmid x$. Here $E^\prime/K^\prime$ is not
semistable. As we have assumed that $\overline{\rho}_{E^\prime,\ell}$ is
reducible we have that $\overline{\rho}_{E,\ell}$ is reducible
(Lemma~\ref{lem:compare}). Now we may apply Lemma~\ref{lem:tors}
to deduce the existence of $E_1/K$ (which is $E$ or $\ell$-isogenous
to it) that has a subgroup isomorphic
to $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\ell\mathbb{Z}$.
As before, let $\mathfrak{p}$ be the unique prime of $K$ above $13$.
By Lemma~\ref{lem:Frey1} the Frey curve $E$ has good reduction at $\mathfrak{p}$.
As $\mathfrak{p} \nmid 2 \ell$,
we know from injectivity of torsion that $4 \ell \mid \# E(\mathbb{F}_\mathfrak{p})$.
But $\mathbb{F}_\mathfrak{p}=\mathbb{F}_{13}$.
By the Hasse--Weil bounds, $\ell \le (\sqrt{13}+1)^2/4\approx 5.3$.
Thus $\ell=5$.
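Indeed, the Hasse--Weil bound gives
\[
\#E(\mathbb{F}_{13}) \;\le\; 13 + 1 + 2\sqrt{13} \;=\; (\sqrt{13}+1)^2 \;\approx\; 21.2,
\]
and since $4\ell \mid \#E(\mathbb{F}_{13})$ we need $4\ell \le 21$; the only prime $\ell \ge 5$ permitted by this is $\ell=5$.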
It remains to deal with the case $\ell=5$ for both $13 \mid x$
and $13 \nmid x$. In both cases we obtain a $K$-point on
$X=X_0(20)$ whose image in $X_0(10)$ is a $K^\prime$-point.
We would like to compute
$X(K)$. This computation proved beyond \texttt{Magma}'s
capability.
However, $K=K^\prime(\sqrt{13})$.
Thus the rank of $X(K)$ is the sum of the ranks of $X(K^\prime)$ and of
$X^\prime(K^\prime)$ where $X^\prime$ is the quadratic twist
of $X$ by $13$. Computing the ranks of $X(K^\prime)$ and $X^\prime(K^\prime)$
turns out to be a task within the capabilities of \texttt{Magma},
and we find that they are respectively $0$ and $1$. Thus
$X(K)$ has rank $1$. With a little more work we find that
\[
X(K) = \mathbb{Z}/6\mathbb{Z} \cdot (4,10) + \mathbb{Z} \cdot (3, 2 \sqrt{13}) \, .
\]
Thus $X(K)=X(\mathbb{Q}(\sqrt{13}))$. It follows that the
$j$-invariant of $E^\prime$ must belong to $\mathbb{Q}(\sqrt{13})$.
But the $j$-invariant belongs to $K^\prime$ too, and
so belongs to $\mathbb{Q}(\sqrt{13}) \cap K^\prime=\mathbb{Q}$.
Let the rational integers
$a$, $b$ be as in Sections~\ref{sec:Descent},~\ref{sec:FreyCurve}. Recall that
$b$ is odd, and that $\ord_2(a)=5n$ where $n>0$.
Write $a=2^{5n} a^\prime$ where $a^\prime$ is odd.
We know that $\ord_2(j(E))=-(20n-4)$. The prime $2$
is inert in $K^\prime$. An explicit calculation,
making use of the fact that $a^\prime \equiv b \equiv 1 \pmod{2}$,
shows
that
\begin{comment}
Now, in the notation
of Section~\ref{sec:FreyCurve},
\[
u \equiv \theta_j \pmod{2}, \qquad
v \equiv -\theta_j \pmod{2}, \qquad
w=2^{10n+2} \mu {a^\prime}^2
\]
where $\mu=(\theta_j-\theta_k)/(\theta_k-2)$. Thus, from \eqref{eqn:inv},
\end{comment}
\[
2^{20n-4} j(E)
\equiv
\frac{\theta_j^2 \theta_k^2}{(\theta_j-\theta_k)^2} \pmod{2}.
\]
Computing this residue for the possible values of $j$ and $k$,
we checked that it does not belong to $\mathbb{F}_2$, giving us a contradiction.
\end{proof}
\section{Proof of Theorem~\ref{thm:1}}\label{sec:final}
In Section~\ref{sec:peq3} we proved Theorem~\ref{thm:1}
for $p=3$. In this section we deal with the values
$p=5$, $7$, $11$, $13$. Let $\ell$, $m \ge 5$ be primes.
Suppose $(x,y,z)$ is a primitive non-trivial solution
to \eqref{eqn:main}. Without loss of generality, $2 \mid x$.
We let $K=\mathbb{Q}(\zeta+\zeta^{-1})$ where $\zeta=\exp(2 \pi i/p)$.
For $p=13$ we also let $K^\prime$ be the unique subfield
of $K$ of degree $3$. Let $E$ be the Frey curve attached
to this solution $(x,y,z)$ defined
in Section~\ref{sec:FreyCurve} where we take $j=1$ and $k=2$.
For $p=13$ we also work with the Frey curve $E^\prime$
defined in Section~\ref{sec:FreyCurve2}
where we take $j=1$ and $k=5$ (these choices
satisfy the condition $\tau(\theta_j)=\theta_k$
where $\tau$ is the unique involution on $K$).
By Lemma~\ref{lem:modularitysmallp} these
elliptic curves are modular. Moreover by the results
of Section~\ref{sec:Irred} the representation
$\overline{\rho}_{E,\ell}$ is irreducible for $p=5$, $7$, $11$, $13$,
and the representation $\overline{\rho}_{E^\prime,\ell}$ is irreducible
for $p=13$.
Let $\mathcal{K}$ be the number field $K$ unless $p=13$ and $13 \mid x$
in which case we take $\mathcal{K}=K^\prime$. Also let $\mathcal{E}$
be the Frey curve $E$ unless $p=13$ and $13 \mid x$
in which we take $\mathcal{E}$ to be $E^\prime$.
By Lemma~\ref{lem:ll} there is a Hilbert cuspidal
eigenform $\mathfrak{f}$ over $\mathcal{K}$ of parallel weight $2$
and level $\mathcal{N}$ as given in Table~\ref{table1}, such that
$\overline{\rho}_{\mathcal{E},\ell} \sim \overline{\rho}_{\mathfrak{f},\lambda}$
where $\lambda \mid \ell$ is a prime of $\mathbb{Q}_\mathfrak{f}$, the field
generated by the Hecke eigenvalues of $\mathfrak{f}$.
Using the \texttt{Magma} \lq Hilbert modular
forms\rq\ package we computed the possible Hilbert newforms
at these levels. The information is summarized in Table~\ref{table1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$p$ & Case & Field $\mathcal{K}$ & Frey curve $\mathcal{E}$ & Level $\mathcal{N}$ & Eigenforms $\mathfrak{f}$ & $[\mathbb{Q}_\mathfrak{f}:\mathbb{Q}]$\\
\hline\hline
\multirow{2}{*}{$5$} & $5 \nmid x$ & $K$ & $E$ & $2\mathcal{O}_K$ & -- & -- \\
\cline{2-7}
& $ 5 \mid x$ & $K$ & $E$ & $2 \mathfrak{p}$ & -- & --\\
\hline\hline
\multirow{2}{*}{$7$} & $7 \nmid x$ & $K$ & $E$ & $2\mathcal{O}_K$ & -- & -- \\
\cline{2-7}
& $ 7 \mid x$ & $K$ & $E$ & $2 \mathfrak{p}$ & $\mathfrak{f}_1$ & $1$ \\
\hline\hline
\multirow{2}{*}{$11$} & $11 \nmid x$ & $K$ & $E$ & $2\mathcal{O}_K$ & $\mathfrak{f}_2$ & $2$ \\
\cline{2-7}
& $ 11 \mid x$ & $K$ & $E$ & $2 \mathfrak{p}$ & $\mathfrak{f}_3$, $\mathfrak{f}_4$ & $5$\\
\hline\hline
\multirow{5}{*}{$13$} & \multirow{3}{*}{$13 \nmid x$}
& \multirow{3}{*}{$K$} & \multirow{3}{*}{$E$} & \multirow{3}{*}{$2\mathcal{O}_K$} & $\mathfrak{f}_5$, $\mathfrak{f}_6$ & $1$ \\
& & & & & $\mathfrak{f}_{7}$ & $2$ \\
& & & & & $\mathfrak{f}_{8}$ & $3$ \\
\cline{2-7}
& \multirow{2}{*}{$13 \mid x$} &
\multirow{2}{*}{$K^\prime$} & \multirow{2}{*}{$E^\prime$} & \multirow{2}{*}{$2 \mathfrak{B}$} & $\mathfrak{f}_{9}$, $\mathfrak{f}_{10}$ & $1$\\
& & & & & $\mathfrak{f}_{11}$, $\mathfrak{f}_{12}$ & $3$\\
\hline
\end{tabular}
\caption{Frey curve and Hilbert eigenform information.
Here $\mathfrak{p}$ is the unique prime of $K$ above $p$,
and $\mathfrak{B}$ is the unique prime of $K^\prime$ above $p$.}
\label{table1}
\end{center}
\end{table}
As shown in the table, there are no newforms at the relevant
levels for $p=5$, completing the contradiction for this case.\footnote{We point out in passing that for $p=5$ we could have also worked
with the Frey curve $E^\prime/\mathbb{Q}$.
In that case the Hilbert newforms $\mathfrak{f}$ are actually classical
newforms of weight $2$ and levels $2$ and $50$. There are
no such newforms of level $2$, but there are two newforms of level $50$
corresponding to the elliptic curve isogeny classes
\texttt{50a} and \texttt{50b}. These would require further work
to eliminate.}
We now explain how we complete the contradiction for the remaining
cases.
Let
$\mathfrak{q}$ be a prime of $\mathcal{K}$ such that $\mathfrak{q} \nmid 2 p \ell$. In particular, $\mathfrak{q}$ does not
divide the level of $\mathfrak{f}$, and $\mathcal{E}$ has good or multiplicative reduction at $\mathfrak{q}$.
Write $\sigma_\mathfrak{q}$ for a Frobenius element of $G_{\mathcal{K}}$
at $\mathfrak{q}$. Comparing the traces of
$\overline{\rho}_{\mathcal{E},\ell}(\sigma_\mathfrak{q})$ and $\overline{\rho}_{\mathfrak{f},\lambda}(\sigma_\mathfrak{q})$
we obtain:
\begin{enumerate}
\item[(i)] if $\mathcal{E}$ has good reduction at $\mathfrak{q}$ then
$a_\mathfrak{q}(\mathcal{E}) \equiv a_\mathfrak{q}(\mathfrak{f}) \pmod{\lambda}$;
\item[(ii)] if $\mathcal{E}$ has split multiplicative reduction at $\mathfrak{q}$ then
$\norm(\mathfrak{q})+1 \equiv a_\mathfrak{q}(\mathfrak{f}) \pmod{\lambda}$;
\item[(iii)] if $\mathcal{E}$ has non-split multiplicative reduction at $\mathfrak{q}$ then
$-(\norm(\mathfrak{q})+1) \equiv a_\mathfrak{q}(\mathfrak{f}) \pmod{\lambda}$.
\end{enumerate}
Let $q \nmid 2p \ell$ be a rational prime
and let
\[
\mathcal{A}_q=\{ (\eta,\mu ) \; : \quad 0 \le \eta,~\mu \le q-1, \quad (\eta, \mu) \ne (0,0) \}.
\]
For $(\eta,\mu) \in \mathcal{A}_q$
let
\begin{gather*}
u(\eta,\mu)=
\begin{cases}
(\theta_j+2)\eta^2+(\theta_j-2) \mu^2 & \text{if $p \nmid x$},\\
\frac{1}{(\theta_j-2)}\left((\theta_j+2)\eta^2+(\theta_j-2) \mu^2\right) & \text{if $p \mid x$},\\
\end{cases}\\
v(\eta,\mu)=
\begin{cases}
-\frac{(\theta_j-2)}{(\theta_k-2)}\left((\theta_k+2)\eta^2+(\theta_k-2) \mu^2\right) & \text{if $p \nmid x$},\\
-\frac{1}{(\theta_k-2)}\left((\theta_k+2)\eta^2+(\theta_k-2) \mu^2\right) & \text{if $p \mid x$}.\\
\end{cases}
\end{gather*}
Write
\[
E_{(\eta,\mu)} \; : \; Y^2=X(X-u(\eta,\mu))(X+v(\eta,\mu)) \, .
\]
Let $\Delta(\eta,\mu)$, $c_4(\eta,\mu)$
and $c_6(\eta,\mu)$ be the usual invariants of this model. Let $\gamma(\eta,\mu)=-c_4(\eta,\mu)/c_6(\eta,\mu)$.
Let $(a,b)$ be as in Section~\ref{sec:Descent}.
As $\gcd(a,b)=1$,
we have $(a,b) \equiv (\eta,\mu) \pmod{q}$ for some $(\eta,\mu)\in \mathcal{A}_q$.
In particular $(a,b) \equiv (\eta,\mu) \pmod{\mathfrak{q}}$ for all primes $\mathfrak{q} \mid q$ of $\mathcal{K}$.
From the definitions of the Frey curves $E$ and $E^\prime$ in Sections~\ref{sec:FreyCurve} and~\ref{sec:FreyCurve2}
we see that $\mathcal{E}$ has good reduction at $\mathfrak{q}$ if and only if $\mathfrak{q} \nmid \Delta(\eta,\mu)$,
and in this case $a_\mathfrak{q}(\mathcal{E})=a_\mathfrak{q}(E_{(\eta,\mu)})$.
Let
\[
B_\mathfrak{q}(\mathfrak{f},\eta,\mu)=\begin{cases}
a_\mathfrak{q}(E_{(\eta,\mu)})-a_\mathfrak{q}(\mathfrak{f}) & \text{if $\mathfrak{q} \nmid \Delta(\eta,\mu)$},\\
\norm(\mathfrak{q})+1-a_\mathfrak{q}(\mathfrak{f}) & \text{if $\mathfrak{q} \mid \Delta(\eta,\mu)$ and
$\overline{\gamma(\eta,\mu)}
\in (\mathbb{F}_\mathfrak{q}^*)^2$},\\
\norm(\mathfrak{q})+1+a_\mathfrak{q}(\mathfrak{f}) & \text{if $\mathfrak{q} \mid \Delta(\eta,\mu)$ and
$\overline{\gamma(\eta,\mu)}
\notin (\mathbb{F}_\mathfrak{q}^*)^2$}.\\
\end{cases}
\]
From (i)--(iii) above we see that
$\lambda \mid B_\mathfrak{q}(\mathfrak{f},\eta,\mu)$. Now let
\[
B_q(\mathfrak{f},\eta,\mu)=\sum_{\mathfrak{q} \mid q} B_\mathfrak{q}(\mathfrak{f},\eta,\mu) \cdot \mathcal{O}_{\mathfrak{f}},
\]
where $\mathcal{O}_\mathfrak{f}$ is the ring of integers of $\mathbb{Q}_\mathfrak{f}$.
Since $(a,b) \equiv (\eta,\mu) \pmod{\mathfrak{q}}$ for all $\mathfrak{q} \mid q$, we have that $\lambda \mid B_q(\mathfrak{f},\eta,\mu)$.
Now $(\eta,\mu)$ is some unknown element of $\mathcal{A}_q$. Let
\[
B_q^\prime(\mathfrak{f})= \prod_{(\eta,\mu) \in \mathcal{A}_q} B_q(\mathfrak{f},\eta,\mu) \, .
\]
Then $\lambda \mid B_q^\prime(\mathfrak{f})$. Previously, we have supposed that $q \nmid 2p \ell$.
This is inconvenient as $\ell$ is unknown. Now we simply suppose $q \nmid 2p$, and let $B_q(\mathfrak{f})=q B_q^\prime(\mathfrak{f})$.
Then, since $\lambda \mid \ell$, we certainly have that $\lambda \mid B_q(\mathfrak{f})$ regardless of whether $q=\ell$ or not.
Finally, if $S=\{q_1,q_2,\dotsc,q_r\}$ is a set of rational primes with $q_i \nmid 2p$, then $\lambda$ divides the
$\mathcal{O}_\mathfrak{f}$-ideal $\sum_{i=1}^r B_{q_i}(\mathfrak{f})$ and thus $\ell$ divides $B_S(\mathfrak{f})=\norm(\sum_{i=1}^r B_{q_i}(\mathfrak{f}))$.
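For the reader's convenience, here is a minimal illustrative sketch of how one term in this construction can be evaluated. The actual computations in this paper were carried out in \texttt{Magma}; the pure-Python fragment below merely shows, in the simplest hypothetical situation where the residue field of $\mathfrak{q}$ is the prime field $\mathbb{F}_q$, how the good-reduction quantity $a_\mathfrak{q}(E_{(\eta,\mu)})-a_\mathfrak{q}(\mathfrak{f})$ could be computed by naive point counting (the inputs $u$, $v$, $q$ and the eigenvalue $a_\mathfrak{q}(\mathfrak{f})=2$ in the example are hypothetical).
\begin{verbatim}
def legendre(a, q):
    # Legendre symbol (a/q) for an odd prime q.
    a %= q
    if a == 0:
        return 0
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

def trace_of_frobenius(u, v, q):
    # a_q of Y^2 = X(X - u)(X + v) over F_q, assuming good reduction,
    # using #E(F_q) = q + 1 + sum_x legendre(x(x-u)(x+v), q).
    return -sum(legendre(x * (x - u) * (x + v), q) for x in range(q))

def B_good_reduction(u, v, q, a_q_f):
    # The quantity a_q(E_{(eta,mu)}) - a_q(f) in the good-reduction case.
    return trace_of_frobenius(u, v, q) - a_q_f

# Hypothetical example: u, v stand for u(eta,mu), v(eta,mu) reduced mod q = 23,
# and a_q(f) = 2 is a made-up Hecke eigenvalue.
print(B_good_reduction(3, 7, 23, 2))
\end{verbatim}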
Table~\ref{table2} gives our choices for the set $S$ and the corresponding value of $B_S(\mathfrak{f})$
for each of the eigenforms $\mathfrak{f}_1,\dotsc,\mathfrak{f}_{12}$ appearing in Table~\ref{table1}. Recalling
that $\ell \ge 5$ and $\ell \ne p$ gives a contradiction unless $p=13$ and $\ell=7$.
This completes the proof of Theorem~\ref{thm:1}.
The reader may wonder whether we can eliminate the case $p=13$ and $\ell=7$ by enlarging
our set $S$; here we need only concern ourselves with forms $\mathfrak{f}_9$ and $\mathfrak{f}_{11}$.
Consider $(\eta,\mu)=(0,1)$ which belongs to $\mathcal{A}_q$ for any $q$. The corresponding
Weierstrass model $E_{(0,1)}$ is singular with a split node. It follows that
$B_{\mathfrak{q}}(\mathfrak{f},0,1)=\norm(\mathfrak{q})+1-a_\mathfrak{q}(\mathfrak{f})$. Note that if $\lambda$ is a prime of $\mathbb{Q}_\mathfrak{f}$
that divides $\norm(\mathfrak{q})+1-a_\mathfrak{q}(\mathfrak{f})$ for all $\mathfrak{q} \nmid 26$, then $\ell$ will
divide $B_S(\mathfrak{f})$ for any set $S$ where $\lambda \mid \ell$. This appears to be the
case with $\ell=7$ for $\mathfrak{f}_{11}$, and we now show that it is indeed the case for $\mathfrak{f}_9$.
Let $F$ be the elliptic curve with Cremona label \texttt{26b1}:
\[
F \; : \; y^2 + xy + y = x^3 - x^2 - 3x + 3,
\]
which has conductor $2 \mathfrak{B}$ as an elliptic curve over $\mathcal{K}$.
As $\mathcal{K}/\mathbb{Q}$ is cyclic, we know that $F$ is modular over $\mathcal{K}$ and hence corresponds
to a Hilbert modular form of parallel weight $2$ and level $2 \mathfrak{B}$, and by comparing
eigenvalues we can show that it in fact corresponds to $\mathfrak{f}_9$. Now the point $(1,0)$
on $F$ has order $7$. It follows that $7 \mid \# F(\mathbb{F}_\mathfrak{q})=\norm(\mathfrak{q})+1-a_\mathfrak{q}(\mathfrak{f})$
for all $\mathfrak{q} \nmid 26$ showing that for $\mathfrak{f}_9$ we can never eliminate $\ell=7$
by enlarging the set $S$. We can still complete the contradiction in this
case as follows. Note that $\overline{\rho}_{\mathfrak{f}_9,7}\sim \overline{\rho}_{F,7}$ which
is reducible. As $\overline{\rho}_{\mathcal{E},7}$ is irreducible we have $\overline{\rho}_{\mathcal{E},7} \not \sim
\overline{\rho}_{\mathfrak{f}_9,7}$, completing the contradiction for $\mathfrak{f}=\mathfrak{f}_9$.
We strongly suspect that $\overline{\rho}_{\mathfrak{f}_{11},\lambda}$
(where $\lambda$ is the unique prime above $7$ of $\mathbb{Q}_{\mathfrak{f}_{11}}$)
is also reducible, but we are unable to prove it.
\begin{table}[h]
\begin{center}
{\tabulinesep=0.8mm
\begin{tabu}{|c|c|c|c|c|}
\hline
$p$ & Case & $S$ & Eigenform $\mathfrak{f}$ & $B_S(\mathfrak{f})$\\
\hline\hline
$7$ & $ 7 \mid x$ & $\{3\}$ & $\mathfrak{f}_1$ & $2^8\times 3^5 \times 7^6$ \\
\hline\hline
\multirow{3}{*}{$11$} & $11 \nmid x$ & $\{23,43\}$ & $\mathfrak{f}_2$ & $1$ \\
\cline{2-5}
& \multirow{2}{*}{$11 \mid x$} & \multirow{2}{*}{$\{23,43\}$} & $\mathfrak{f}_3$ & $1$ \\
\cline{4-5}
& & & $\mathfrak{f}_4$ & $1$ \\
\hline\hline
\multirow{8}{*}{$13$} & \multirow{4}{*}{$13 \nmid x$} & \multirow{4}{*}{$\{79, 103\}$} & $\mathfrak{f}_5$ &
$2^{6240}\times 3^{312}$ \\
\cline{4-5}
& & & $\mathfrak{f}_6$ & $ 2^{12792}\times 3^{234}$\\
\cline{4-5}
& & & $\mathfrak{f}_7$ & $2^{10608}\times 3^{624}$\\
\cline{4-5}
& & & $\mathfrak{f}_8$ & $2^{18720}\times 3^{936}$\\
\cline{2-5}
& \multirow{4}{*}{$13 \mid x$} & \multirow{4}{*}{$\{3, 5, 31, 47\}$} & $\mathfrak{f}_9$ & $7^2$ \\
\cline{4-5}
& & & $\mathfrak{f}_{10}$ & $3^7$\\
\cline{4-5}
& & & $\mathfrak{f}_{11}$ & $7^6$\\
\cline{4-5}
& & & $\mathfrak{f}_{12}$ & $1$\\
\hline
\end{tabu}
}
\caption{Our choice of set of primes $S$ and the value of $B_S(\mathfrak{f})$ for
each of the eigenforms in Table~\ref{table1}.
}
\label{table2}
\end{center}
\end{table}
\noindent \textbf{Remark.}
We now explain why we believe that the above strategy will succeed in proving
that \eqref{eqn:main} has no non-trivial primitive solutions,
or at least in bounding the exponent $\ell$, for larger values of $p$
provided the eigenforms $\mathfrak{f}$ at the relevant levels can be computed.
The usual obstruction, cf.\ \citep[Section 9]{IHP}, to bounding the exponent comes from eigenforms $\mathfrak{f}$
that correspond
to elliptic curves with a torsion structure that matches the Frey curve $\mathcal{E}$. Let $\mathfrak{f}$ be such an eigenform.
Let $q \nmid 2p$ be a rational prime and $\mathfrak{q}_1,\dotsc,\mathfrak{q}_r$ be the primes of $\mathcal{K}$ above $q$.
Note that $\norm(\mathfrak{q}_1)=\dotsc=\norm(\mathfrak{q}_r)=q^{d/r}$ where $d=[\mathcal{K}: \mathbb{Q}]$.
We would like to estimate the \lq probability\rq\ that $B_q(\mathfrak{f})$ is non-zero.
Observe that if $B_q(\mathfrak{f})$ is non-zero, then we obtain a bound for $\ell$. Examining the definitions
above shows that the ideal $B_q(\mathfrak{f})$ is $0$ if and only if there is some $(\eta,\mu) \in \mathcal{A}_q$
such that $a_\mathfrak{q}(E_{(\eta,\mu)})=a_\mathfrak{q}(\mathfrak{f})$ for $\mathfrak{q}=\mathfrak{q}_1,\mathfrak{q}_2,\dotsc,\mathfrak{q}_r$.
Treating $a_\mathfrak{q}(E_{(\eta,\mu)})$ as a random variable belonging to the Hasse interval $[-2 q^{d/(2r)}, 2 q^{d/(2r)}]$,
we see that the \lq probability\rq\ that $a_\mathfrak{q}(E_{(\eta,\mu)})=a_\mathfrak{q}(\mathfrak{f})$ is
roughly $c/q^{d/(2r)}$ with $c=1/4$. We can be a little more sophisticated and take
account of the fact that the torsion structures coincide, and that these impose congruence restrictions
on both traces. In that case we should take $c=1$ if $\mathcal{E}$ has full $2$-torsion (i.e.\ $\mathcal{E}$ is
the Frey curve $E$) and take $c=1/2$ if $\mathcal{E}$ has
just one non-trivial point of order $2$ (i.e.\ $\mathcal{E}=E^\prime$
and $p \equiv 1 \pmod{4}$). Thus the \lq probability\rq\ that $a_\mathfrak{q}(E_{(\eta,\mu)})=a_\mathfrak{q}(\mathfrak{f})$
for all $\mathfrak{q} \mid q$ simultaneously is roughly $c^r/q^{d/2}$.
Since $B_q(\mathfrak{f})=q \prod_{(\eta,\mu)\in \mathcal{A}_q} B_q(\mathfrak{f},\eta,\mu)$,
it follows that the \lq probability\rq\ $\mathbb{P}_q$ (say) that $B_q(\mathfrak{f})$
is non-zero satisfies
\[
\mathbb{P}_q \sim \left(1 - \frac{c^r}{q^{d/2}}\right)^{q^2-1} .
\]
For $q$ large, we have $(1-c^r/q^{d/2})^{q^{d/2}} \approx e^{-c^r}$.
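Combining the two approximations, for large $q$ we have
\[
\mathbb{P}_q \approx \exp\!\left(-\frac{(q^2-1)\,c^r}{q^{d/2}}\right).
\]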
For $d\ge 5$, from the above estimates, we expect that
$\mathbb{P}_q \rightarrow 1$ as $q \rightarrow \infty$. Thus
we certainly expect our strategy to succeed in bounding the exponent $\ell$.
\end{document} |
\begin{document}
\title{Localization of extriangulated categories}
\author{Hiroyuki Nakaoka}
\email{[email protected]}
\address{Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya 464-8602, Japan}
\author{Yasuaki Ogawa}
\email{[email protected]}
\address{Center for Educational Research of Science and Mathematics, Nara University of Education, Takabatake-cho, Nara, 630-8528, Japan}
\author{Arashi Sakai}
\email{[email protected]}
\address{Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya 464-8602, Japan}
\thanks{The authors are grateful to Mikhail Gorsky, Osamu Iyama, Kiriko Kato, Yann Palu and Katsunori Takahashi for their interest and valuable comments. This work is supported by JSPS KAKENHI Grant Number JP20K03532.}
\begin{abstract}
In this article, we show that the localization of an extriangulated category by a multiplicative system satisfying mild assumptions can be equipped with a natural, universal structure of an extriangulated category. This construction unifies the Serre quotient of abelian categories and the Verdier quotient of triangulated categories. Indeed we give such a construction for a somewhat wider class of morphisms, so that it covers several other localizations that have appeared in the literature, such as Rump's localization of exact categories by biresolving subcategories, localizations of extriangulated categories by means of Hovey twin cotorsion pairs, and the localization of exact categories by two-sided admissibly percolating subcategories.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Abelian categories, exact categories and triangulated categories are the main categorical frameworks used in homological algebra.
The notion of an {\it extriangulated category} was recently introduced in \cite{NP} as a unification of such classes of categories. The class of extriangulated categories not only contains them as typical cases, but also has the advantage that it is closed by several operations, such as taking extension-closed subcategories \cite[Remark~2.18]{NP}, ideal quotients by subcategories consisting of projective-injectives \cite[Proposition 3.30]{NP}, and relative theories \cite[Proposition 3.16]{HLN}.
If one were to name another fundamental operation which is still lacking for extriangulated categories, it would be the \emph{localization}.
For abelian/triangulated categories, localization can be performed in satisfactory generality, and it is natural to expect that extriangulated categories, as a unifying notion, provide their common generalization.
In this article we discuss localizations of extriangulated categories. As for localizations which involve abelian/exact/triangulated categories, the following are known in the literature.
\begin{itemize}
\item[{\rm (i)}] Serre quotient of an abelian category \cite{Ga}.
\item[{\rm (ii)}] Verdier quotient of a triangulated category \cite{V}.
\item[{\rm (iii)}] C\'{a}rdenas-Escudero's localization of an exact category \cite{C-E}. More generally, localization of an exact category by a two-sided admissibly percolating subcategory \cite{HR},\cite{HKR}.
\item[{\rm (iv)}] Rump's localization of an exact category \cite{R} by a biresolving subcategory.
\item[{\rm (v)}] Localization of an abelian category with respect to an abelian model structure \cite{Ho1},\cite{Ho2}. More generally, localization of an exact category with respect to an exact model structure \cite{Gi}.
\item[{\rm (vi)}] As a counterpart of {\rm (v)}, localization of a triangulated category with respect to a triangulated model structure \cite{Y}.
\item[{\rm (vii)}] As a unification of {\rm (v)} and {\rm (vi)}, localization of an extriangulated category with respect to an admissible Hovey twin cotorsion pair \cite{NP}.
\end{itemize}
The localization of extriangulated categories which we will introduce in this article gives a unification of all the above localizations.
After briefly reviewing the definition and basic properties of extriangulated categories in Section~\ref{Section_ReviewOnExtri}, we introduce our main theorem (Theorem~\ref{ThmMultLoc}) in Section~\ref{Section_Localization}, which is as follows. In the following, $(\mathscr{C},\mathbb{E},\mathfrak{s})$ is an extriangulated category, $\mathcal{N}_{\mathscr{S}}\subseteq\mathscr{C}$ is an additive full subcategory associated to $\mathscr{S}$, and $\overline{\mathscr{S}}$ denotes a set of morphisms in the ideal quotient $\overline{\mathscr{C}}=\mathscr{C}/[\mathcal{N}_{\mathscr{S}}]$ obtained from $\mathscr{S}$ by taking closure with respect to the composition with isomorphisms in $\overline{\mathscr{C}}$.
\begin{introthm}
Let $\mathscr{S}$ be a set of morphisms in $\mathscr{C}$ containing all isomorphisms and closed by compositions.
Suppose that $\overline{\mathscr{S}}$ satisfies the following conditions in $\overline{\mathscr{C}}$.
\begin{itemize}
\item[{\rm (MR1)}] $\overline{\mathscr{S}}$ satisfies $2$-out-of-$3$ with respect to compositions.
\item[{\rm (MR2)}] $\overline{\mathscr{S}}$ is a multiplicative system.
\item[{\rm (MR3)}] Let $\langle A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C,\delta\rangle$, $\langle A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime},\delta^{\prime}\rangle$ be any pair of $\mathfrak{s}$-triangles, and suppose that $a\in\mathscr{C}(A,A^{\prime}),c\in\mathscr{C}(C,C^{\prime})$ satisfies $a_{\ast}\delta=c^{\ast}\delta^{\prime}$. If $\overline{a},\overline{c}\in\overline{\mathscr{S}}$, then there exists $\mathbf{b}\in\overline{\mathscr{S}}$ such that $\mathbf{b}\circ\overline{x}=\overline{x}^{\prime}\circ\overline{a}$, $\overline{c}\circ\overline{y}=\overline{y}^{\prime}\circ\mathbf{b}$.
\item[{\rm (MR4)}] $\overline{\mathcal{M}}_{\mathsf{inf}}=\{ \mathbf{v}\circ \overline{x}\circ \mathbf{u}\mid x\ \text{is an}\ \mathfrak{s}\text{-inflation}, \mathbf{u},\mathbf{v}\in\overline{\mathscr{S}}\}$ is closed by compositions. Dually for $\mathfrak{s}$-deflations.
\end{itemize}
Then the localization of $\mathscr{C}$ by $\mathscr{S}$ gives an extriangulated category $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ equipped with an exact functor $(Q,\mu)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to (\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$, which is universal among exact functors inverting $\mathscr{S}$.
\end{introthm}
In fact, we show in Theorem~\ref{ThmMultLoc} that even without condition {\rm (MR4)}, we obtain a universal \emph{weakly} extriangulated category in the sense of \cite{BBGH}.
In the last Section~\ref{Section_Examples}, we will demonstrate how the above-mentioned localizations can be seen as particular cases of the localization given in this article, by showing in Propositions~\ref{PropSatisfy} and \ref{PropPerc} that the assumption of Theorem~\ref{ThmMultLoc} is indeed satisfied. More precisely, we roughly divide the above list into the following two cases.
\begin{itemize}
\item[{\rm (A)}] Localizations obtained in {\rm (ii),(iv),(vii)} (and hence also {\rm (v),(vi)}).
\item[{\rm (B)}] Localizations obtained in {\rm (i),(ii),(iii)}.
\end{itemize}
This division results from particular properties of thick subcategories (Definition~\ref{DefThick}) used to give $\mathscr{S}$. We remark that {\rm (ii)} sits in their intersection.
Case {\rm (A)} is dealt with in Subsection~\ref{Subsection_LocTri} by using \emph{biresolving} thick subcategories (Definition~\ref{Def_BiResol}), while case {\rm (B)} is dealt with in Subsection~\ref{Subsection_Percolating} by using \emph{percolating} thick subcategories (Definition~\ref{Def_Percolating}). In case {\rm (A)}, the resulting localization always becomes triangulated as will be shown in Corollary~\ref{CorLocTri}. On the other hand in case {\rm (B)}, with some additional condition which fits well with percolating subcategories, the resulting localization becomes exact as in Corollary~\ref{CorLast}.
Throughout this article, we use the following notations and terminology. For a category $\mathscr{C}$, let $\mathcal{M}=\operatorname{Mor}(\mathscr{C})$ denote the class of all morphisms of $\mathscr{C}$. Also, $\operatorname{Iso}(\mathscr{C})\subseteq\mathcal{M}$ denotes the class of all isomorphisms. If a class of morphisms $\mathscr{S}\subseteq\mathcal{M}$ is closed by compositions and contains all identities in $\mathscr{C}$, then we may regard $\mathscr{S}\subseteq\mathscr{C}$ as a (not full) subcategory satisfying $\operatorname{Ob}(\mathscr{S})=\operatorname{Ob}(\mathscr{C})$. With this view in mind, we write $\mathscr{S}(X,Y)=\{f\in \mathscr{C}(X,Y)\mid f\in\mathscr{S}\}$ for any $X,Y\in\mathscr{C}$.
Throughout this article, let $\mathscr{C}$ denote an additive category. To avoid any set-theoretic problem in considering its localizations, we assume that $\mathscr{C}$ is small.
\section{Review on the definition of extriangulated category}\label{Section_ReviewOnExtri}
In this section, we review the definition of extriangulated categories. More precisely, we use instead the equivalent notion of a $1$-exangulated category (\cite[Definition~2.32]{HLN}).
\begin{dfn}\label{DefExtension}
Suppose $\mathscr{C}$ is equipped with a biadditive functor $\mathbb{E}\colon\mathscr{C}^\mathrm{op}\times\mathscr{C}\to\mathscr{A}b$. For any pair of objects $A,C\in\mathscr{C}$, an element $\delta\in\mathbb{E}(C,A)$ is called an \emph{$\mathbb{E}$-extension}.
We abbreviate it as $C\overset{\delta}{\dashrightarrow}A$.
For any $a\in\mathscr{C}(A,A^{\prime})$ and $c\in\mathscr{C}(C^{\prime},C)$, we abbreviate
$\mathbb{E}(C,a)(\delta)\in\mathbb{E}(C,A^{\prime})$ and $\mathbb{E}(c,A)(\delta)\in\mathbb{E}(C^{\prime},A)$ to $a_{\ast}\delta$ and $c^{\ast}\delta$, respectively.
By the Yoneda lemma, natural transformations
\[ \delta_\sharp\colon\mathscr{C}(-,C)\Rightarrow\mathbb{E}(-,A)\ \ \text{and}\ \ \delta^\sharp\colon\mathscr{C}(A,-)\Rightarrow\mathbb{E}(C,-) \]
are associated to $\delta$.
Explicitly, they are given by
\betagin{eqnarray*}
&\delta_\sharp\colon\mathscr{C}(X,C)\to\mathbb{E}(X,A)\ ;\ f\mapsto f^{\ast}\delta,&\\
&\delta^\sharp\colon\mathscr{C}(A,X)\to\mathbb{E}(C,X)\ ;\ g\mapsto g_{\ast}\delta&
\end{eqnarray*}
for each $X\in\mathscr{C}$.
\end{dfn}
\begin{dfn}\label{DefEquiv2Seq}
Let $A,C\in\mathscr{C}$ be any pair of objects. Sequences $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ and $A\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C$ are said to be equivalent if there exists an isomorphism $b\in\mathscr{C}(B,B^{\prime})$ such that $b\circ x=x^{\prime}$ and $y^{\prime}\circ b=y$. We denote the equivalence class to which $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ belongs by $[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$.
\end{dfn}
\begin{dfn}\label{DefConf}
Let $\mathscr{C}$ and $\mathbb{E}$ be as before. Suppose that we are given a map $\mathfrak{s}$ which assigns an equivalence class $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$ to each $\mathbb{E}$-extension $\delta\in\mathbb{E}(C,A)$. We use the following terminology.
\begin{enumerate}
\item A sequence $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ in $\mathscr{C}$ is called an \emph{$\mathfrak{s}$-conflation} if it satisfies $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$ for some $\delta\in\mathbb{E}(C,A)$.
A morphism $x$ in $\mathscr{C}$ is called an \emph{$\mathfrak{s}$-inflation} if it appears in some $\mathfrak{s}$-conflation as $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$.
Dually, a morphism $y$ in $\mathscr{C}$ is called an \emph{$\mathfrak{s}$-deflation} if it appears in some $\mathfrak{s}$-conflation as $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$.
\item An \emph{$\mathfrak{s}$-triangle} $\langle A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C,\delta\rangle$ is a pair of a sequence $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ in $\mathscr{C}$ and an $\mathbb{E}$-extension $\delta\in\mathbb{E}(C,A)$ satisfying $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$. We abbreviate such a pair as $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$.
\item A \emph{morphism of $\mathfrak{s}$-triangles} from $\langle A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C,\delta\rangle$ to $\langle A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime},\delta^{\prime}\rangle$ is a triplet $(a,b,c)$ of morphisms in $\mathscr{C}$ which satisfies $b\circ x=x^{\prime}\circ a$, $c\circ y=y^{\prime}\circ b$ and $a_{\ast}\delta=c^{\ast}\delta^{\prime}$. We denote it by a diagram as follows.
\begin{equation}\label{***abc}
\xy
(-12,6)*+{A}="0";
(0,6)*+{B}="2";
(12,6)*+{C}="4";
(24,6)*+{}="6";
(-12,-6)*+{A^{\prime}}="10";
(0,-6)*+{B^{\prime}}="12";
(12,-6)*+{C^{\prime}}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{a} "0";"10"};
{\ar_{b} "2";"12"};
{\ar^{c} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{y^{\prime}} "12";"14"};
{\ar@{-->}_{\delta^{\prime}} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\end{equation}
\end{enumerate}
\end{dfn}
\begin{dfn}\label{DefEACat}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be a triplet of an additive category $\mathscr{C}$, a biadditive functor $\mathbb{E}\colon\mathscr{C}^\mathrm{op}\times\mathscr{C}\to\mathscr{A}b$, and a map $\mathfrak{s}$ which assigns an equivalence class $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$ to each $\mathbb{E}$-extension $\delta\in\mathbb{E}(C,A)$.
\begin{enumerate}
\item $(\mathscr{C},\mathbb{E},\mathfrak{s})$ is a \emph{weakly extriangulated category} if it satisfies the following conditions.
\begin{itemize}
\item[{\rm (C1)}] For any $\mathfrak{s}$-triangle $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$, the sequence
\[ \mathscr{C}(X,A)\overset{x\circ-}{\longrightarrow}\mathscr{C}(X,B)\overset{y\circ-}{\longrightarrow}\mathscr{C}(X,C)\overset{\delta_\sharp}{\longrightarrow}\mathbb{E}(X,A) \]
is exact for any $X\in\mathscr{C}$.
\item[{\rm (C1')}] Dually, for any $\mathfrak{s}$-triangle $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$, the sequence
\[ \mathscr{C}(C,X)\overset{-\circ y}{\longrightarrow}\mathscr{C}(B,X)\overset{-\circ x}{\longrightarrow}\mathscr{C}(A,X)\overset{\delta^\sharp}{\longrightarrow}\mathbb{E}(C,X) \]
is exact for any $X\in\mathscr{C}$.
\item[{\rm (C2)}] For any $A\in\mathscr{C}$, zero element $0\in\mathbb{E}(0,A)$ satisfies
$\mathfrak{s}(0)=[A\overset{\mathrm{id}_A}{\longrightarrow}A\to0]$.
\item[{\rm (C2')}] Dually, for any $A\in\mathscr{C}$, zero element $0\in\mathbb{E}(A,0)$ satisfies $\mathfrak{s}(0)=[0\to A\overset{\mathrm{id}_A}{\longrightarrow}A]$.
\item[{\rm (C3)}] For any $\mathfrak{s}$-triangle $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$, any $a\in\mathscr{C}(A,A^{\prime})$ and any $\mathfrak{s}$-triangle $A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C\overset{a_{\ast}\delta}{\dashrightarrow}$, there exists $b\in\mathscr{C}(B,B^{\prime})$ which gives a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{A}="0";
(0,6)*+{B}="2";
(12,6)*+{C}="4";
(24,6)*+{}="6";
(-12,-6)*+{A^{\prime}}="10";
(0,-6)*+{B^{\prime}}="12";
(12,-6)*+{C}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{a} "0";"10"};
{\ar_{b} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{y^{\prime}} "12";"14"};
{\ar@{-->}_{a_{\ast}\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
and makes
\[ A\overset{\left[\begin{smallmatrix} x\\ a\end{smallmatrix}\right]}{\longrightarrow}B\oplus A^{\prime}\overset{[b\ -x^{\prime}]}{\longrightarrow}B^{\prime}\overset{y^{\prime\ast}\delta}{\dashrightarrow} \]
an $\mathfrak{s}$-triangle.
\item[{\rm (C3')}] For any $\mathfrak{s}$-triangle $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$, any $c\in\mathscr{C}(C^{\prime},C)$ and any $\mathfrak{s}$-triangle $A\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime}\overset{c^{\ast}\delta}{\dashrightarrow}$, there exists $b\in\mathscr{C}(B^{\prime},B)$ which gives a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{A}="0";
(0,6)*+{B^{\prime}}="2";
(12,6)*+{C^{\prime}}="4";
(24,6)*+{}="6";
(-12,-6)*+{A}="10";
(0,-6)*+{B}="12";
(12,-6)*+{C}="14";
(24,-6)*+{}="16";
{\ar^{x^{\prime}} "0";"2"};
{\ar^{y^{\prime}} "2";"4"};
{\ar@{-->}^{c^{\ast}\delta} "4";"6"};
{\ar@{=} "0";"10"};
{\ar_{b} "2";"12"};
{\ar^{c} "4";"14"};
{\ar_{x} "10";"12"};
{\ar_{y} "12";"14"};
{\ar@{-->}_{\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
and makes
\[ B^{\prime}\overset{\left[\begin{smallmatrix} -y^{\prime}\\ b\end{smallmatrix}\right]}{\longrightarrow}C^{\prime}\oplus B\overset{[c\ y]}{\longrightarrow}C\overset{x^{\prime}_{\ast}\delta}{\dashrightarrow} \]
an $\mathfrak{s}$-triangle.
\end{itemize}
\item $(\mathscr{C},\mathbb{E},\mathfrak{s})$ is an \emph{extriangulated category} if it is weakly extriangulated and moreover satisfies the following conditions.
\begin{itemize}
\item[{\rm (C4)}] $\mathfrak{s}$-inflations are closed by compositions. Namely, if $A\overset{f}{\longrightarrow}A^{\prime}$ and $A^{\prime}\overset{f^{\prime}}{\longrightarrow}A^{\prime\prime}$ are $\mathfrak{s}$-inflations, then so is $f^{\prime}\circ f$.
\item[{\rm (C4')}] Dually, $\mathfrak{s}$-deflations are closed by compositions.
\end{itemize}
\end{enumerate}
For a (weakly) extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$, we call $\mathfrak{s}$ a \emph{realization} of $\mathbb{E}$.
For an $\mathfrak{s}$-conflation $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ in an extriangulated category, we write $C=\operatorname{Cone}(x)$ and call it a \emph{cone} of $x$. This is uniquely determined by $x$ up to isomorphisms. Dually we write $A=\operatorname{CoCone}(y)$ and call it a \emph{cocone} of $y$, which is uniquely determined by $y$ up to isomorphisms.
\end{dfn}
\begin{rem}\label{RemExTriEquiv}
It has been shown in \cite[Section~4.1]{HLN} that a triplet $(\mathscr{C},\mathbb{E},\mathfrak{s})$ satisfies the above conditions
if and only if it is an extriangulated category as defined in \cite[Definition~2.12]{NP}. For this reason, we call it simply an extriangulated category in this article.
In \cite{HLN}, an additional condition {\rm (R0)} is also required, which is negligible as it follows from {\rm (C3)} and {\rm (C3')}.
The notion of a weakly extriangulated category was introduced in \cite[Definition 5.15]{BBGH}, regarding its importance in the classification of additive subfunctors of the functor $\mathbb{E}$ of an extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$.
\end{rem}
\begin{rem}\label{RemWE}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be a weakly extriangulated category, and let $(\ref{***abc})$ be any morphism of $\mathfrak{s}$-triangles. If $a,c$ are isomorphisms, then so is $b$. In this case we have $\mathfrak{s}(\delta^{\prime})=[A^{\prime}\overset{b^{-1}\circ x^{\prime}}{\longrightarrow}B\overset{y^{\prime}\circ b}{\longrightarrow}C^{\prime}]=[A^{\prime}\overset{x\circ a^{-1}}{\longrightarrow}B\overset{c\circ y}{\longrightarrow}C^{\prime}]$.
\end{rem}
\begin{rem}
As in Remark~\ref{RemExTriEquiv},
an extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$ satisfies the following {\rm (ET4)} (and its dual {\rm (ET4)$^\mathrm{op}$}), which is one of the conditions in its original definition \cite[Definition~2.12]{NP}.
\begin{itemize}
\item[{\rm (ET4)}]
Let $A\overset{f}{\longrightarrow}B\overset{f^{\prime}}{\longrightarrow}D\overset{\delta}{\dashrightarrow}$ and $B\overset{g}{\longrightarrow}C\overset{g^{\prime}}{\longrightarrow}F\overset{\rho}{\dashrightarrow}$ be any pair of $\mathfrak{s}$-triangles.
Then there exists a diagram satisfying $d^{\ast}\delta^{\prime}=\delta,e^{\ast}\rho=f_{\ast}\delta^{\prime}$
\[
\xy
(-21,7)*+{A}="0";
(-7,7)*+{B}="2";
(7,7)*+{D}="4";
(-21,-7)*+{A}="10";
(-7,-7)*+{C}="12";
(7,-7)*+{E}="14";
(-7,-21)*+{F}="22";
(7,-21)*+{F}="24";
{\ar^{f} "0";"2"};
{\ar^{f^{\prime}} "2";"4"};
{\ar^{\delta}@{-->} "4";(19,7)};
{\ar@{=} "0";"10"};
{\ar_{g} "2";"12"};
{\ar^{d} "4";"14"};
{\ar_{h=g\circ f} "10";"12"};
{\ar_{h^{\prime}} "12";"14"};
{\ar@{-->}^{\delta^{\prime}} "14";(19,-7)};
{\ar_{g^{\prime}} "12";"22"};
{\ar^{e} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{-->}_{\rho} "22";(-7,-34)};
{\ar@{-->}^{f^{\prime}_{\ast}\rho} "24";(7,-34)};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
{\ar@{}|\circlearrowright "12";"24"};
\endxy
\]
in which $A\overset{h}{\longrightarrow}C\overset{h^{\prime}}{\longrightarrow}E\overset{\delta^{\prime}}{\dashrightarrow}$ and $D\overset{d}{\longrightarrow}E\overset{e}{\longrightarrow}F\overset{f^{\prime}_{\ast}\rho}{\dashrightarrow}$ are $\mathfrak{s}$-triangles.
\end{itemize}
\end{rem}
\begin{rem}\label{Ex_ExTri}
The following holds for an extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$. (See \cite[Corollaries~3.18 and 7.6]{NP} for the detail.)
\begin{enumerate}
\item If any $\mathfrak{s}$-inflation is monomorphic and any $\mathfrak{s}$-deflation is epimorphic, then $\mathscr{C}$ has a natural structure of an exact category, in which admissible exact sequences are precisely given by $\mathfrak{s}$-conflations. In this case we simply say that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ \emph{corresponds to an exact category}, in this article.
\item If any morphism is both an $\mathfrak{s}$-inflation and an $\mathfrak{s}$-deflation, then $\mathscr{C}$ has a natural structure of a triangulated category, in which distinguished triangles come from $\mathfrak{s}$-triangles. In this case we simply say that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ \emph{corresponds to a triangulated category}, in this article.
\end{enumerate}
\end{rem}
In this article, we sometimes refer to the following condition introduced in \cite[Condition~5.8]{NP}.
\begin{cond}\label{ConditionWIC}
For a (weakly) extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$, consider the following condition {\rm (WIC)}.
\begin{itemize}
\item[{\rm (WIC)}] For morphisms $h=g\circ f$ in $\mathscr{C}$, if $h$ is an $\mathfrak{s}$-inflation, then $f$ is also an $\mathfrak{s}$-inflation. Dually, if $h$ is an $\mathfrak{s}$-deflation, then so is $g$.
\end{itemize}
\end{cond}
\begin{rem}
If an extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category, then {\rm (WIC)} is always satisfied.
On the other hand, if $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category, then it satisfies {\rm (WIC)} if and only if $\mathscr{C}$ is weakly idempotent complete (\cite[Proposition~7.6]{B}). (In \cite{R} and the reference therein, it is also called \emph{divisive}.)
\end{rem}
\begin{dfn}\label{DefExFun}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$, $(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ and $(\mathscr{C}^{\prime\prime},\mathbb{E}^{\prime\prime},\mathfrak{s}^{\prime\prime})$ be weakly extriangulated categories.
\begin{enumerate}
\item {\rm (\cite[Definition 2.23]{B-TS})}
An \emph{exact functor} $(F,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ is a pair of an additive functor $F\colon\mathscr{C}\to\mathscr{C}^{\prime}$ and a natural transformation $\phi\colon\mathbb{E}\Rightarrow\mathbb{E}^{\prime}\circ(F^\mathrm{op}\times F)$ which satisfies
\[\mathfrak{s}^{\prime}(\phi_{C,A}(\delta))=[F(A)\overset{F(x)}{\longrightarrow}F(B)\overset{F(y)}{\longrightarrow}F(C)]\]
for any $\mathfrak{s}$-triangle $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$ in $\mathscr{C}$.
\item If $(F,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to (\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ and $(F^{\prime},\phi^{\prime})\colon(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})\to (\mathscr{C}^{\prime\prime},\mathbb{E}^{\prime\prime},\mathfrak{s}^{\prime\prime})$ are exact functors, then their composition $(F^{\prime\prime},\phi^{\prime\prime})=(F^{\prime},\phi^{\prime})\circ(F,\phi)$ is defined by
$F^{\prime\prime}=F^{\prime}\circ F$ and $\phi^{\prime\prime}=(\phi^{\prime}\circ(F^\mathrm{op}\times F))\cdot\phi$.
\item Let $(F,\phi),(G,\psi)\colon (\mathscr{C},\mathbb{E},\mathfrak{s})\to (\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ be exact functors. A \emph{natural transformation} $\eta\colon (F,\phi)\Rightarrow (G,\psi)$ \emph{of exact functors} is a natural transformation $\eta\colon F\Rightarrow G$ of additive functors, which satisfies
\begin{equation}\label{P7}
(\eta_A)_{\ast}\phi_{C,A}(\delta)=(\eta_C)^{\ast}\psi_{C,A}(\delta)
\end{equation}
for any $\delta\in\mathbb{E}(C,A)$. Horizontal compositions and vertical compositions are defined by those for natural transformations of additive functors.
\end{enumerate}
\end{dfn}
\begin{rem}\label{RemExFun}
The above definition of an exact functor in {\rm (1)} is nothing but that of an \emph{extriangulated functor} introduced in \cite{B-TS}, applied to weakly extriangulated categories.
The notion of an exact functor coincides with the usual ones when both $(\mathscr{C},\mathbb{E},\mathfrak{s})$ and $(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ correspond to exact categories or to triangulated categories. Here we briefly recall how an exact functor ($=$ triangulated functor) between triangulated categories can be regarded as an exact functor of extriangulated categories in the above sense. See \cite[Theorems 2.33, 2.34]{B-TS} for the detail.
By definition, an exact functor between triangulated categories $(F,\xi)\colon\mathscr{T}\to\mathscr{T}^{\prime}$ consists of an additive functor $F$ and a natural isomorphism $\xi\colon F\circ [1]\overset{\cong}{\Longrightarrow}[1]\circ F$ which respects distinguished triangles. As shown in \cite[Proposition~3.22]{NP}, we may regard $\mathscr{T}$ as an extriangulated category $(\mathscr{T},\mathbb{E},\mathfrak{s})$, for which $\mathbb{E}$ is given by $\mathbb{E}=\operatorname{Ext}^1_{\mathscr{T}}=\mathscr{T}(-,-[1])$ and a sequence $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$ is an $\mathfrak{s}$-triangle if and only if $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\longrightarrow}A[1]$ is a distinguished triangle in $\mathscr{T}$. Similarly for $\mathscr{T}^{\prime}$.
We see that $\xi$ induces a natural transformation $\phi\colon\operatorname{Ext}^1_{\mathscr{T}}\Rightarrow \operatorname{Ext}_{\mathscr{T}^{\prime}}^1\circ(F^\mathrm{op}\times F)$ defined by
\[ \phi_{C,A}\colon \operatorname{Ext}^1_{\mathscr{T}}(C,A)\to \operatorname{Ext}_{\mathscr{T}^{\prime}}^1(FC,FA)\ ;\ \delta\mapsto\xi_A\circ F(\delta) \]
for each $A,C\in\mathscr{T}$. This gives an exact functor $(F,\phi)$ between extriangulated categories in the sense of Definition~\ref{DefExFun}.
The above definition of a natural transformation of exact functors in {\rm (3)} is automatically satisfied when $(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ corresponds to an exact category, and is equivalent to the definition of a morphism of triangulated functors in \cite[Definition 10.1.9]{KS} when both $(\mathscr{C},\mathbb{E},\mathfrak{s})$ and $(\mathscr{C}^{\prime},\mathbb{E}^{\prime},\mathfrak{s}^{\prime})$ correspond to triangulated categories.
\end{rem}
\begin{prop}\label{PropExEq}
Let $(F,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\mathscr{D},\mathbb{F},\mathfrak{t})$ be an exact functor between weakly extriangulated categories. The following are equivalent.
\begin{enumerate}
\item $F$ is an equivalence of categories and $\phi$ is a natural isomorphism.
\item $(F,\phi)$ is an equivalence of extriangulated categories, in the sense that there exist an exact functor $(G,\psi)\colon(\mathscr{D},\mathbb{F},\mathfrak{t})\to (\mathscr{C},\mathbb{E},\mathfrak{s})$, natural transformations of exact functors $(\mathrm{Id}_{\mathscr{C}},\mathrm{id}_{\mathbb{E}})\Rightarrow (G,\psi)\circ(F,\phi)$ and $(F,\phi)\circ (G,\psi)\Rightarrow(\mathrm{Id}_{\mathscr{D}},\mathrm{id}_{\mathbb{F}})$ which have inverses.
\end{enumerate}
\end{prop}
\betagin{proof}
This is analogous to the usual argument for exact functors between triangulated categories.
As the converse is trivial, let us only show that {\rm (1)} implies {\rm (2)}. Suppose that $F$ is an equivalence of categories and let $G\colon\mathscr{D}\to\mathscr{C}$ be a quasi-inverse of $F$, equipped with natural isomorphisms $\eta\colon\mathrm{Id}_{\mathscr{C}}\overset{\colonng}{\Longrightarrow} G\circ F$ and $\varepsilon\colon F\circ G\overset{\colonng}{\Longrightarrow}\mathrm{Id}_{\mathscr{D}}$. We may choose $\eta$ and $\varepsilon$ to be the unit and the counit of an adjoint pair $F\dashv G$.
Define $\psi$ to be the composition of
\[ \mathbb{F}bb\overset{\mathbb{F}bb\circ(\varepsilon^\mathrm{op}\times\varepsilon^{-1})}{\Longrightarrow}\mathbb{F}bb\circ(F^\mathrm{op}\times F)\circ(G^\mathrm{op}\times G)
\overset{\phi^{-1}\circ(G^\mathrm{op}\times G)}{\Longrightarrow}\mathbb{E}bb\circ(G^\mathrm{op}\times G). \]
By definition, for any $X,Z\in\mathscr{D}$, a homomorphism
\[ \psi_{Z,X}\colon\mathbb{F}bb(Z,X)\to\mathbb{E}bb(GZ,GX) \]
sends each $\rho\in\mathbb{F}bb(Z,X)$ to the unique element $\psi_{Z,X}(\rho)\in\mathbb{E}bb(GZ,GX)$ which satisfies
\betagin{equation}\label{Eq_phi_psi}
\phi_{GZ,GX}(\psi_{Z,X}(\rho))=(\varepsilon_Z)^{\ast}(\varepsilon_X^{-1})_{\ast}\rho
\end{equation}
in $\mathbb{F}bb(FGZ,FGX)$.
Let us show that $(G,\psi)\colon(\mathscr{D},\mathbb{F}bb,\mathfrak{t})\to\mathscr{C}Es$ is an exact functor. Let $X\overset{x}{\longrightarrow}Y\overset{y}{\longrightarrow}Z\overset{\rho}{\dashrightarrow}$ be any $\mathfrak{t}$-triangle.
By Remark~\ref{RemWE},
\betagin{equation}\label{stri_compare1}
FGX\overset{FGx}{\longrightarrow}FGY\overset{FGy}{\longrightarrow}FGZ\overset{(\varepsilon_Z)^{\ast}(\varepsilon_X^{-1})_{\ast}\rho}{\dashrightarrow}
\end{equation}
is also a $\mathfrak{t}$-triangle.
Put $\delta=\psi_{Z,X}(\rho)\in\mathbb{E}bb(GZ,GX)$, and realize it by an $\mathfrak{s}$-triangle $GX\overset{f}{\longrightarrow}B\overset{g}{\longrightarrow}GZ\overset{\delta}{\dashrightarrow}$. Since $(F,\phi)$ is exact,
\betagin{equation}\label{stri_compare2}
FGX\overset{Ff}{\longrightarrow}FB\overset{Fg}{\longrightarrow}FGZ\overset{\phi_{GZ,GX}(\delta)}{\dashrightarrow}
\end{equation}
becomes a $\mathfrak{t}$-triangle.
As $(\ref{Eq_phi_psi})$ suggests, $\mathfrak{t}$-triangles $(\ref{stri_compare1})$ and $(\ref{stri_compare2})$ realize the same $\mathbb{F}bb$-extension, and thus
\[ [FGX\overset{FGx}{\longrightarrow}FGY\overset{FGy}{\longrightarrow}FGZ]=[FGX\overset{Ff}{\longrightarrow}FB\overset{Fg}{\longrightarrow}FGZ] \]
holds as sequences in $\mathscr{D}$. Since $F$ is fully faithful, it is easy to deduce that
\[ [GX\overset{Gx}{\longrightarrow}GY\overset{Gy}{\longrightarrow}GZ]=[GX\overset{f}{\longrightarrow}B\overset{g}{\longrightarrow}GZ] \]
holds as sequences in $\mathscr{C}$. This means that $\mathfrak{s}(\psi_{Z,X}(\rho))=[GX\overset{Gx}{\longrightarrow}GY\overset{Gy}{\longrightarrow}GZ]$ holds, and thus $(G,\psi)$ is exact.
It is immediate from the construction that $\varepsilon\colon(F,\phi)\circ (G,\psi)\Rightarrow(\mathrm{Id}_{\mathscr{D}},\mathrm{id}_{\mathbb{F}})$ is a natural transformation of exact functors. Indeed $(\ref{Eq_phi_psi})$ is nothing but the equality to be satisfied by $\varepsilon$ in Definition~\ref{DefExFun} {\rm (3)}. Let us confirm that $\eta\colon(\mathrm{Id}_{\mathscr{C}},\mathrm{id}_{\mathbb{E}})\Rightarrow (G,\psi)\circ(F,\phi)$ is a natural transformation of exact functors. Take any $\delta\in\mathbb{E}bb(C,A)$, and put $\sigma=\phi_{C,A}(\delta)\in\mathbb{F}bb(FC,FA)$.
It is enough to confirm that
\betagin{equation}\label{ToShowpsi}
\psi_{FC,FA}(\sigma)=(\eta_C^{-1})^{\ast}(\eta_A)_{\ast}\delta
\end{equation}
holds in $\mathbb{E}bb(GFC,GFA)$.
By definition $\psi_{FC,FA}(\sigma)\in\mathbb{E}bb(GFC,GFA)$ is the unique element which satisfies
\[ \phi_{GFC,GFA}(\psi_{FC,FA}(\sigma))=(\varepsilon_{FC})^{\ast}(\varepsilon_{FA}^{-1})_{\ast}(\sigma) \]
in $\mathbb{E}bb(FGFC,FGFA)$. Since $F\circ\eta=(\varepsilon\circ F)^{-1}$ holds by the adjointness, we have
\[ (\varepsilon_{FC})^{\ast}(\varepsilon_{FA}^{-1})_{\ast}(\sigma)=(F(\eta_C)^{-1})^{\ast}(F(\eta_A))_{\ast}(\sigma)=(F(\eta_C)^{-1})^{\ast}(F(\eta_A))_{\ast}(\phi_{C,A}(\delta)). \]
By the naturality of $\phi$, we also have
\[ (F(\eta_C)^{-1})^{\ast}(F(\eta_A))_{\ast}(\phi_{C,A}(\delta))=\phi_{GFC,GFA}((\eta_C^{-1})^{\ast}(\eta_A)_{\ast}\delta). \]
Combining the above equalities, we obtain
\[ \phi_{GFC,GFA}(\psi_{FC,FA}(\sigma))=\phi_{GFC,GFA}((\eta_C^{-1})^{\ast}(\eta_A)_{\ast}\delta). \]
Since $\phi_{GFC,GFA}$ is an isomorphism, this implies $(\ref{ToShowpsi})$.
It is also obvious from definition that the inverses $\eta^{-1}\colon(G,\psi)\circ(F,\phi)\Rightarrow(\mathrm{Id}_{\mathscr{C}},\mathrm{id}_{\mathbb{E}})$ and $\varepsilon^{-1}\colon (\mathrm{Id}_{\mathscr{D}},\mathrm{id}_{\mathbb{F}})\Rightarrow(F,\phi)\circ (G,\psi)$ are natural transformations of exact functors.
\end{proof}
\section{Localization}\label{Section_Localization}
In the rest of this article, let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be an extriangulated category.
In this section, let $\mathscr{S}\subseteq\mathcal{M}$ be a set of morphisms which satisfies the following condition.
\begin{itemize}
\item[{\rm (M0)}] $\mathscr{S}$ contains all isomorphisms in $\mathscr{C}$, and is closed by compositions. Also, $\mathscr{S}$ is closed by taking finite direct sums. Namely, if $f_i\in\mathscr{S}(X_i,Y_i)$ for $i=1,2$, then $f_1\oplus f_2\in\mathscr{S}(X_1\oplus X_2,Y_1\oplus Y_2)$.
\end{itemize}
Let $\widetilde{\mathscr{C}}$ denote the localization of $\mathscr{C}$ by $\mathscr{S}$.
The aim of this section is to equip $\widetilde{\mathscr{C}}$ with a natural structure of an extriangulated category, under some assumptions.
First we associate a full subcategory $\mathcal{N}_{\mathscr{S}}\subseteq\mathscr{C}$ in the following way.
\begin{dfn}\label{DefNS}
Let $\mathscr{S}\subseteq\mathcal{M}$ be as above. Define $\mathcal{N}_{\mathscr{S}}\subseteq\mathscr{C}$ to be the full subcategory consisting of objects $N\in\mathscr{C}$ such that both $N\to0$ and $0\to N$ belong to $\mathscr{S}$.
It is obvious that $\mathcal{N}_{\mathscr{S}}\subseteq\mathscr{C}$ is an additive subcategory. In the rest, we will denote the ideal quotient by $p\colon\mathscr{C}\to \overline{\mathscr{C}}=\mathscr{C}/[\mathcal{N}_{\mathscr{S}}]$, and $\overline{f}$ will denote a morphism in $\overline{\mathscr{C}}$ represented by $f\in\mathscr{C}(X,Y)$. Also, let $\overline{\mathscr{S}}\subseteq\overline{\mathcal{M}}=\operatorname{Mor}(\overline{\mathscr{C}})$ be the closure of $p(\mathscr{S})$ with respect to compositions with isomorphisms in $\overline{\mathscr{C}}$.
\end{dfn}
\begin{lem}\label{LemSplitN}
Let $\mathscr{S}$ and $\overline{\mathscr{C}}$ be as above. The following holds.
\begin{enumerate}
\item For any $f\in\mathscr{S}(A,B)$ and any $i\in\mathscr{C}(A,N)$ with $N\in\mathcal{N}_{\mathscr{S}}$, we have $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{S}(A,B\oplus N)$.
\item Suppose that $f,g\in\mathscr{C}(A,B)$ satisfy $\overline{f}=\overline{g}$ in $\overline{\mathscr{C}}$. Then $f\in\mathscr{S}$ holds if and only if $g\in\mathscr{S}$.
\item The following conditions are equivalent.
\begin{itemize}
\item[{\rm (i)}] $p(\mathscr{S})=\overline{\mathscr{S}}$.
\item[{\rm (ii)}] $p^{-1}(\overline{\mathscr{S}})=\mathscr{S}$.
\item[{\rm (iii)}] $p^{-1}(\operatorname{Iso}(\overline{\mathscr{C}}))\subseteq\mathscr{S}$.
\item[{\rm (iv)}] $f\in\mathscr{S}$ holds for any split monomorphism $f\in\mathscr{C}(A,B)$ such that $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$.
\item[{\rm (v)}] $f\in\mathscr{S}$ holds for any split epimorphism $f\in\mathscr{C}(A,B)$ such that $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$.
\end{itemize}
\end{enumerate}
\end{lem}
\betagin{proof}
{\rm (1)} As $N\in\mathcal{N}_{\mathscr{S}}$, the morphism $0\to N$ belongs to $\mathscr{S}$, and since $\mathscr{S}$ is closed by finite direct sums, we have $\left[\betagin{smallmatrix}1\\0\end{smallmatrix}\right]=\mathrm{id}_A\mathrm{op}lus(0\to N)\in\mathscr{S}(A,A\mathrm{op}lus N)$. This also implies $\left[\betagin{smallmatrix}1\\i\end{smallmatrix}\right]\in\mathscr{S}(A,A\mathrm{op}lus N)$, since
\[ \left[\betagin{array}{cc}1&0\\i&1\end{array}\right]
\left[\betagin{array}{c}1\\0\end{array}\right]=
\left[\betagin{array}{c}1\\i\end{array}\right] \]
holds. Thus $\left[\betagin{smallmatrix} f\\i\end{smallmatrix}\right]\in\mathscr{S}(A,B\mathrm{op}lus N)$ follows from $\left[\betagin{smallmatrix} f\\i\end{smallmatrix}\right]=(f\mathrm{op}lus \mathrm{id}_N)\circ \left[\betagin{smallmatrix}1\\i\end{smallmatrix}\right]$.
{\rm (2)} Suppose that $f$ belongs to $\mathscr{S}$. If $\overseterline{f}=\overseterline{g}$, there exist $N\in\mathcal{N}_{\mathscr{S}}$ and $i\in\mathscr{C}(A,N),j\in\mathscr{C}(N,B)$ such that $g=f+j\circ i$. By {\rm (1)} and its dual, we have $\left[\betagin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{S}(A,B\mathrm{op}lus N)$ and $[1\ j]\in\mathscr{S}(B\mathrm{op}lus N,B)$. This implies $g=[1\ j]\left[\betagin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{S}$.
{\rm (3)} Remark that we always have $p(\mathscr{S})\subseteq\overseterline{\mathscr{S}}$ and $p^{-1}(\overseterline{\mathscr{S}})\supseteq\mathscr{S}$ by definition. {\rm (i)}\,$\Leftrightarrow$\,{\rm (ii)}\,$\Leftrightarrow$\,{\rm (iii)}
follows from {\rm (2)} since $p$ is full.
Since {\rm (iii)}\,$\Leftrightarrow$\,{\rm (v)} can be shown dually, it is enough to show {\rm (iii)}\,$\Leftrightarrow$\,{\rm (iv)}.
As {\rm (iii)}\,$\Rightarrow$\,{\rm (iv)} is trivial, it remains to show {\rm (iv)}\,$\Rightarrow$\,{\rm (iii)}. Suppose that $\overseterline{f}$ is an isomorphism in $\overseterline{\mathscr{C}}$ for a morphism $f\in\mathscr{C}(A,B)$. Then there exist $N\in\mathcal{N}_{\mathscr{S}}$ and $i\in\mathscr{C}(A,N)$ such that $\left[\betagin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{C}(A,B\mathrm{op}lus N)$ is a split monomorphism. Then {\rm (iv)} forces $\left[\betagin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{S}$, which implies $f=[1\ 0]\left[\betagin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{S}$.
\end{proof}
\betagin{rem}\label{RemSplitN}
If $\mathscr{C}$ is idempotent complete, or more generally if any split monomorphism has a cokernel in $\mathscr{C}$ (namely, $\mathscr{C}$ is weakly idempotent complete), then any $\mathscr{S}$ with {\rm (M0)} satisfies {\rm (iv)} and hence all the other equivalent conditions. In particular, if $\mathscr{C}Es$ satisfies condition {\rm (WIC)}, then any $\mathscr{S}$ with {\rm (M0)} satisfies $p(\mathscr{S})=\overseterline{\mathscr{S}}$.
\end{rem}
\subsection{Statement of the main theorem}
One of our aims in Section~\ref{Section_Localization} is to show the following.
\betagin{cor}\label{CorMultLoc}
Assume that $\mathscr{S}\subseteq\mathcal{M}$ satisfies {\rm (M0)} as before, and moreover $p(\mathscr{S})=\overseterline{\mathscr{S}}$. Then the following holds.
\betagin{enumerate}
\item Suppose that $\mathscr{S}$ satisfies the following conditions {\rm (M1),(M2),(M3)}. Then we obtain a weakly extriangulated category $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ together with an exact functor $(Q,\mu)\colon\mathscr{C}Es\to\widetilde{\C}Es$.
\betagin{itemize}
\item[{\rm (M1)}] $\mathscr{S}\subseteq\mathcal{M}$ satisfies $2$-out-of-$3$ with respect to compositions in $\mathscr{C}$.
\item[{\rm (M2)}] $\mathscr{S}$ is a multiplicative system in $\mathscr{C}$.
\item[{\rm (M3)}] Let $\langle A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C,\delta\rangle$, $\langle A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime},\delta^{\prime}\rangle$ be any pair of $\mathfrak{s}$-triangles, and let $a\in\mathscr{C}(A,A^{\prime}),c\in\mathscr{C}(C,C^{\prime})$ be any pair of morphisms satisfying $a_{\ast}\delta=c^{\ast}\delta^{\prime}$. If $a,c\in\mathscr{S}$, then there exists $b\in\mathscr{S}$ which gives the following morphism of $\mathfrak{s}$-triangles.
\[
\xy
(-12,6)*+{A}="0";
(0,6)*+{B}="2";
(12,6)*+{C}="4";
(24,6)*+{}="6";
(-12,-6)*+{A^{\prime}}="10";
(0,-6)*+{B^{\prime}}="12";
(12,-6)*+{C^{\prime}}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{a} "0";"10"};
{\ar_{b} "2";"12"};
{\ar^{c} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{y^{\prime}} "12";"14"};
{\ar@{-->}_{\delta^{\prime}} "14";"16"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
\endxy
\]
\end{itemize}
\item The exact functor $(Q,\mu)\colon\mathscr{C}Es\to\widetilde{\C}Es$ obtained in {\rm (1)} is characterized by the following universality.
\betagin{itemize}
\item[{\rm (i)}]
For any exact functor $(F,\phi)\colon(\mathscr{C},\mathbb{E}bb,\mathfrak{s})\to (\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ such that $F(s)$ is an isomorphism for any $s\in\mathscr{S}$, there exists a unique exact functor $(\widetilde{F},\widetilde{\phi})\colon\widetilde{\C}Es\to (\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ with $(F,\phi)=(\widetilde{F},\widetilde{\phi})\circ (Q,\mu)$.
\item[{\rm (ii)}] For any pair of exact functors $(F,\phi),(G,\psi)\colon(\mathscr{C},\mathbb{E}bb,\mathfrak{s})\to (\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ which send any $s\in\mathscr{S}$ to isomorphisms, let $(\widetilde{F},\widetilde{\phi}),(\widetilde{G},\widetilde{\psi})\colon\widetilde{\C}Es\to (\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ be the exact functors obtained in {\rm (i)}. Then for any natural transformation $\eta\colon (F,\phi)\Rightarrow(G,\psi)$ of exact functors, there uniquely exists a natural transformation $\widetilde{\eta}\colon (\widetilde{F},\widetilde{\phi})\Rightarrow(\widetilde{G},\widetilde{\psi})$ of exact functors satisfying $\eta=\widetilde{\eta}\circ (Q,\mu)$.
\end{itemize}
\item If $\mathscr{S}$ moreover satisfies the following {\rm (M4)}, then $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ is extriangulated.
\betagin{itemize}
\item[{\rm (M4)}] $\mathcal{M}_{\mathsf{inf}}:=\{ t\circ x\circ s\mid x\ \text{is an}\ \mathfrak{s}\text{-inflation}, s,t\in\mathscr{S}\}$ is closed by composition in $\mathcal{M}$.
Dually, $\mathcal{M}_{\mathsf{def}}:=\{ t\circ y\circ s\mid y\ \text{is an}\ \mathfrak{s}\text{-deflation}, s,t\in\mathscr{S}\}\subseteq\mathcal{M}$ is closed by compositions.
\end{itemize}
\end{enumerate}
\end{cor}
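As a sanity check (again with the trivial choice, not needed later), one may take $\mathscr{S}=\mathrm{op}eratorname{Iso}(\mathscr{C})$. Then {\rm (M1)} and {\rm (M2)} hold for obvious reasons, {\rm (M3)} holds since a morphism of $\mathfrak{s}$-triangles whose outer components are isomorphisms necessarily has an isomorphism as its middle component (a basic fact in the theory of extriangulated categories), and {\rm (M4)} holds because in this case $\mathcal{M}_{\mathsf{inf}}$ (resp. $\mathcal{M}_{\mathsf{def}}$) consists precisely of the $\mathfrak{s}$-inflations (resp. $\mathfrak{s}$-deflations), which are closed by compositions by the axioms of an extriangulated category. The resulting localization then recovers $\mathscr{C}Es$ itself, up to equivalence.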
More generally, in order to include several typical cases we have in mind (see Section~\ref{Section_Examples}), we will show the following as our main theorem.
\betagin{thm}\label{ThmMultLoc}
Assume that $\mathscr{S}\subseteq\mathcal{M}$ satisfies {\rm (M0)}, as before.
\betagin{enumerate}
\item Suppose that $\overseterline{\mathscr{S}}$ satisfies the following conditions {\rm (MR1),(MR2),(MR3)}.
Then we obtain a weakly extriangulated category $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ together with an exact functor $(Q,\mu)\colon\mathscr{C}Es\to\widetilde{\C}Es$.
\betagin{itemize}
\item[{\rm (MR1)}] $\overseterline{\mathscr{S}}\subseteq\overseterline{\mathcal{M}}$ satisfies $2$-out-of-$3$ with respect to compositions in $\overseterline{\mathscr{C}}$.
\item[{\rm (MR2)}] $\overseterline{\mathscr{S}}$ is a multiplicative system in $\overseterline{\mathscr{C}}$.
\item[{\rm (MR3)}] Let $\langle A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C,\delta\rangle$, $\langle A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime},\delta^{\prime}\rangle$ be any pair of $\mathfrak{s}$-triangles, and let $a\in\mathscr{C}(A,A^{\prime}),c\in\mathscr{C}(C,C^{\prime})$ be any pair of morphisms satisfying $a_{\ast}\delta=c^{\ast}\delta^{\prime}$. If $\overseterline{a}$ and $\overseterline{c}$ belong to $\overseterline{\mathscr{S}}$, then there exists $\mathbf{b}\in\overseterline{\mathscr{S}}(B,B^{\prime})$ which satisfies $\mathbf{b}\circ\overseterline{x}=\overseterline{x}^{\prime}\circ\overseterline{a}$ and $\overseterline{c}\circ\overseterline{y}=\overseterline{y}^{\prime}\circ\mathbf{b}$.
\end{itemize}
\item The exact functor $(Q,\mu)\colon\mathscr{C}Es\to\widetilde{\C}Es$ obtained in {\rm (1)} is characterized by the same universality as stated in Corollary~\ref{CorMultLoc} {\rm (2)}.
\item If $\overseterline{\mathscr{S}}$ moreover satisfies the following {\rm (MR4)}, then $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ is extriangulated.
\betagin{itemize}
\item[{\rm (MR4)}] $\overseterline{\mathcal{M}}_{\mathsf{inf}}:=\{ \mathbf{v}\circ \overseterline{x}\circ \mathbf{u}\mid x\ \text{is an}\ \mathfrak{s}\text{-inflation}, \mathbf{u},\mathbf{v}\in\overseterline{\mathscr{S}}\}$ is closed by composition in $\overseterline{\mathcal{M}}$.
Dually, $\overseterline{\mathcal{M}}_{\mathsf{def}}:=\{ \mathbf{v}\circ \overseterline{y}\circ \mathbf{u}\mid y\ \text{is an}\ \mathfrak{s}\text{-deflation}, \mathbf{u},\mathbf{v}\in\overseterline{\mathscr{S}}\}\subseteq\overseterline{\mathcal{M}}$ is closed by compositions.
\end{itemize}
\end{enumerate}
\end{thm}
The proof of Theorem~\ref{ThmMultLoc} will be given at the end of this section.
Before that, let us confirm that Corollary~\ref{CorMultLoc} is indeed a corollary of Theorem~\ref{ThmMultLoc}. It suffices to show the following.
\betagin{claim}\label{ClaimMultLoc}
Assume that $\mathscr{S}$ satisfies {\rm (M0)} as before, and also $p(\mathscr{S})=\overseterline{\mathscr{S}}$. Then the following holds.
\betagin{enumerate}
\item {\rm (M1)} is equivalent to {\rm (MR1)}.
\item {\rm (M2)} implies {\rm (MR2)}.
\item {\rm (M3)} implies {\rm (MR3)}.
\item {\rm (M4)} implies {\rm (MR4)}.
\end{enumerate}
\end{claim}
\betagin{proof}
{\rm (1)} is immediate from $p(\mathscr{S})=\overseterline{\mathscr{S}}$ and Lemma~\ref{LemSplitN}.
Also {\rm (3),(4)} are obvious.
Let us show {\rm (2)}. Suppose that $\mathscr{S}$ satisfies {\rm (M2)}.
Remark that $\overseterline{\mathscr{S}}$ is closed by compositions by definition.
To show {\rm (MR2)}, by duality it suffices to show the following.
\betagin{itemize}
\item[{\rm (i)}] For any $\overseterline{f}\in\overseterline{\mathscr{C}}(A,B)$, if there exists $\overseterline{s}\in\overseterline{\mathscr{S}}(A^{\prime},A)$ satisfying $\overseterline{f}\circ\overseterline{s}=0$, then there exists $\overseterline{t}\in\overseterline{\mathscr{S}}(B,B^{\prime})$ such that $\overseterline{t}\circ\overseterline{f}=0$.
\item[{\rm (ii)}] For any $A^{\prime}\overset{\overseterline{s}}{\longleftarrow}A\overset{\overseterline{f}}{\longrightarrow}B$ with $\overseterline{s}\in\overseterline{\mathscr{S}}$, there exists a commutative square
\[
\xy
(-6,6)*+{A}="0";
(6,6)*+{B}="2";
(-6,-6)*+{A^{\prime}}="4";
(6,-6)*+{B^{\prime}}="6";
{\ar^{\overseterline{f}} "0";"2"};
{\ar_{\overseterline{s}} "0";"4"};
{\ar^{\overseterline{s}^{\prime}} "2";"6"};
{\ar_{\overseterline{f}^{\prime}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
in $\overseterline{\mathscr{C}}$ such that $\overseterline{s}^{\prime}\in\overseterline{\mathscr{S}}$.
\end{itemize}
Since {\rm (ii)} is immediate from {\rm (M2)}, let us show {\rm (i)}. Suppose that $\overseterline{f}\in\overseterline{\mathscr{C}}(A,B)$ and $\overseterline{s}\in\overseterline{\mathscr{S}}(A^{\prime},A)$ satisfy $\overseterline{f}\circ\overseterline{s}=0$. Then there are $N\in\mathcal{N}_{\mathscr{S}}$ and $i\in\mathscr{C}(A^{\prime},N),j\in\mathscr{C}(N,B)$ such that $f\circ s=j\circ i$. This means $[f\ -j]\circ \left[\betagin{smallmatrix} s\\ i\end{smallmatrix}\right]=0$. Since $\left[\betagin{smallmatrix} s\\ i\end{smallmatrix}\right]\in\mathscr{S}$ by Lemma~\ref{LemSplitN}, there exists $t\in\mathscr{S}(B,B^{\prime})$ such that $t\circ [f\ -j]=0$ by {\rm (M2)}. In particular we have $t\circ f=0$.
\end{proof}
The following is also an immediate corollary.
\betagin{cor}\label{CorOfThm}
Let $(F,\phi)\colon\mathscr{C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ be an exact functor to a weakly extriangulated category $(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$. If $\mathscr{S}=F^{-1}(\mathrm{op}eratorname{Iso}(\mathscr{D}))$ satisfies {\rm (M2)}, or more generally if $\overseterline{\mathscr{S}}$ satisfies {\rm (MR2)},
then there exists a unique exact functor $(\widetilde{F},\widetilde{\phi})\colon\widetilde{\C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ such that $(F,\phi)=(\widetilde{F},\widetilde{\phi})\circ (Q,\mu)$.
\end{cor}
\betagin{proof}
Since $\mathscr{S}$ satisfies {\rm (M0),(M1),(M3)} and $p^{-1}(\overseterline{\mathscr{S}})=\mathscr{S}$ by construction, this will follow from Theorem~\ref{ThmMultLoc} {\rm (1),(2)} and Claim~\ref{ClaimMultLoc}.
\end{proof}
\betagin{cor}\label{CorAdjoint}
Let $(F,\phi)\colon\mathscr{C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ be an exact functor to a weakly extriangulated category $(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$, and put $\mathscr{S}=F^{-1}(\mathrm{op}eratorname{Iso}(\mathscr{D}))$.
If $F$ admits a fully faithful right adjoint and a fully faithful left adjoint,
then there exists a unique exact functor $(\widetilde{F},\widetilde{\phi})\colon\widetilde{\C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ such that $(F,\phi)=(\widetilde{F},\widetilde{\phi})\circ (Q,\mu)$, and $\widetilde{F}$ is an equivalence.
\end{cor}
\betagin{proof}
It is well known to experts that {\rm (M2)} holds for such an adjoint triple; see e.g. \cite[Ch.I, Section 2]{GZ}.
Thus, the existence of a unique exact functor $(\widetilde{F},\widetilde{\phi})\colon\widetilde{\C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ directly follows from Corollary \ref{CorOfThm}.
Obviously $\widetilde{F}$ is essentially surjective.
To show the faithfulness of $\widetilde{F}$, let $\alpha$ be a morphism in $\widetilde{\C}$ which is expressed as $\alpha=Q(s)^{-1}\circ Q(f)$ by a diagram $X\overset{f}{\longrightarrow}Y\overset{s}{\longleftarrow} Y^{\prime}$ in $\mathscr{C}$ with $s\in\mathscr{S}$.
If $\widetilde{F}(\alpha)=0$, then we get $F(f)=0$. Let $R$ be a right adjoint functor of $F$, and let $\eta$ be its unit.
Note that $\eta_Y\colon Y\to RF(Y)$ belongs to $\mathscr{S}$: since $R$ is fully faithful, the counit of $F\dashv R$ is an isomorphism, so the triangle identity shows that $F(\eta_Y)$ is an isomorphism. Moreover $F(f)=0$ gives $RF(f)=0$, and the naturality of $\eta$ yields $\eta_Y\circ f=RF(f)\circ\eta_X=0$.
Since $Q(\eta_Y)$ is invertible, this implies $Q(f)=0$, and therefore $\alpha=0$.
It remains to show that $\widetilde{F}$ is full. Let $X,Y\in\widetilde{\C}$ be any pair of objects, and let $g\in\mathscr{D}(\widetilde{F}(X),\widetilde{F}(Y))$ be any morphism.
If we put $\alpha:=Q(\eta_Y)^{-1}\circ QR(g)\circ Q(\eta_X)$, then we can easily check $\widetilde{F}(\alpha)=g$.
\end{proof}
\betagin{rem}
In the situation of Corollary~\ref{CorAdjoint},
unlike the case of triangulated categories, a quasi-inverse $\widetilde{F}^{-1}\colon\mathscr{D}\to\widetilde{\C}$ of $\widetilde{F}$ cannot in general be equipped with a natural transformation $\psi$ which makes $(\widetilde{F}^{-1},\psi)\colon (\mathscr{D},\mathbb{F}bb,\mathfrak{t})\to\widetilde{\C}Es$ an exact functor, unless $\widetilde{\phi}$ is a natural isomorphism.
A typical example for which $\widetilde{\phi}$ is not an isomorphism is given by a relative theory. For an extriangulated category $(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ and a closed subfunctor $\mathbb{E}bb\subseteq\mathbb{F}bb$, we have an extriangulated category $\mathscr{C}Es$ with $\mathscr{C}=\mathscr{D}$ and $\mathfrak{s}=\mathfrak{t}|_{\mathbb{E}bb}$, and an exact functor $(\mathrm{Id},\phi)\colon\mathscr{C}Es\to(\mathscr{D},\mathbb{F}bb,\mathfrak{t})$ is induced by the inclusion $\phi\colon\mathbb{E}bb\hookrightarrow\mathbb{F}bb$. The localization in Corollary~\ref{CorAdjoint} does not essentially change $\mathscr{C}Es$ (indeed $\mathscr{S}=\mathrm{op}eratorname{Iso}(\mathscr{C})$ in this case), hence $\widetilde{\phi}$ is not an isomorphism unless $\mathbb{E}bb=\mathbb{F}bb$.
\end{rem}
\betagin{rem}
A \emph{recollement} of extriangulated categories is defined in \cite{WWZ} as a diagram of functors between extriangulated categories of the following shape
\[
\xymatrix@C=1.2cm{\mathcal{N}\ar[r]|-{{i_\ast}}
&\mathscr{C}\ar[r]|-{j^\ast}\ar@/^1.2pc/[l]^-{i^!}\ar_-{i^\ast}@/_1.2pc/[l]
&\mathscr{D} \ar@/^1.2pc/[l]^{j_\ast}\ar@/_1.2pc/[l]_{j_!}}
\]
satisfying some assumptions (see \cite[Definition~3.1]{WWZ} for the details) which allow us to apply Corollary~\ref{CorAdjoint} to the right half of the above diagram.
\end{rem}
In the rest of this section, we proceed to show Theorem~\ref{ThmMultLoc}.
\betagin{rem}\label{RemSaturation}
Remark that replacing $\mathscr{S}$ by $p^{-1}(\overseterline{\mathscr{S}})$ changes neither $\overseterline{\mathscr{S}}$ nor the resulting localization. As conditions {\rm (MR1)},\,$\ldots\,$,\,{\rm (MR4)} concern only $\overseterline{\mathscr{S}}$, and since condition {\rm (M0)} is stable under this replacement, we see that $p^{-1}(\overseterline{\mathscr{S}})$ satisfies the assumption of Theorem~\ref{ThmMultLoc} whenever $\mathscr{S}$ does. Thus in proving Theorem~\ref{ThmMultLoc}, we may replace $\mathscr{S}$ by $p^{-1}(\overseterline{\mathscr{S}})$ so that $\mathscr{S}=p^{-1}(\overseterline{\mathscr{S}})$ is satisfied from the beginning.
\end{rem}
\subsection{Construction of $\widetilde{\Ebb}$}
Suppose that $\mathscr{S}$ satisfies {\rm (M0),(MR1),(MR2),(MR3)} and $\mathscr{S}=p^{-1}(\overseterline{\mathscr{S}})$.
Remark that a localization $\mathscr{C}\to\widetilde{\C}$ factors uniquely through $p\colon\mathscr{C}\to\overseterline{\mathscr{C}}$, and we may regard $\widetilde{\C}$ as a localization of $\overseterline{\mathscr{C}}$ by $\overseterline{\mathscr{S}}$. In other words, we may use the following functor $Q$ as a localization functor.
\betagin{dfn}\label{DefQ}
Take a localization $\overseterline{Q}\colon\overseterline{\mathscr{C}}\to\widetilde{\C}$ of $\overseterline{\mathscr{C}}$ by the multiplicative system $\overseterline{\mathscr{S}}$, and put $Q=\overseterline{Q}\circ p\colon\mathscr{C}\to\widetilde{\C}$. This gives a localization of $\mathscr{C}$ by $\mathscr{S}$.
\end{dfn}
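Let us note two immediate consequences of this construction, which may help the reader's intuition: $Q(s)$ is an isomorphism in $\widetilde{\C}$ for every $s\in\mathscr{S}$, since $p(s)\in\overseterline{\mathscr{S}}$ and $\overseterline{Q}$ inverts $\overseterline{\mathscr{S}}$; and $Q(N)\colon\!\!\cong Q(0)=0$ holds in $\widetilde{\C}$ for every $N\in\mathcal{N}_{\mathscr{S}}$, since the morphism $N\to0$ belongs to $\mathscr{S}$ and hence becomes an isomorphism in $\widetilde{\C}$.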
In this subsection, we will construct a biadditive functor $\widetilde{\Ebb}\colon\widetilde{\C}^\mathrm{op}\times\widetilde{\C}\to\mathscr{A}b$ and a natural transformation $\mu\colon\mathbb{E}bb\Rightarrow\widetilde{\Ebb}\circ(Q^\mathrm{op}\times Q)$.
In more detail, we are going to define a biadditive functor $\overseterline{\mathbb{E}bb}$ and natural transformations $\overseterline{\mu},\wp$ which fit into the following diagram,
\[
\xy
(-12,14)*+{\mathscr{C}^\mathrm{op}\times\mathscr{C}}="0";
(-12,0)*+{\overseterline{\mathscr{C}}^\mathrm{op}\times\overseterline{\mathscr{C}}}="2";
(-12,-14)*+{\widetilde{\C}^\mathrm{op}\times\widetilde{\C}}="4";
(14,0)*+{\mathscr{A}b}="6";
{\ar_{p^\mathrm{op}\times p} "0";"2"};
{\ar_(0.46){\overseterline{Q}^\mathrm{op}\times \overseterline{Q}} "2";"4"};
{\ar@/_1.5pc/_{Q^\mathrm{op}\times Q} (-19,11.5);(-19,-11.5)};
{\ar^{\mathbb{E}bb} "0";"6"};
{\ar^{\overseterline{\mathbb{E}bb}} "2";"6"};
{\ar_{\widetilde{\Ebb}} "4";"6"};
{\ar@{=>}_{\wp} (-4,7);(-4,3)};
{\ar@{=>}_{\overseterline{\mu}} (-4,-3);(-4,-7)};
{\ar@{}|\circrclearrowright (-17,0);(-26,0)};
\endxy
\]
and will define $\mu$ by $\mu=\big(\overseterline{\mu}\circ(p^\mathrm{op}\times p)\big)\cdot\wp$.
\betagin{lem}\label{LemExtVanish}
For any extension $\delta\in\mathbb{E}bb(C,A)$, the following are equivalent.
\betagin{enumerate}
\item There exists $s\in\mathscr{S}(A,A^{\prime})$ such that $s_{\ast}\delta=0$.
\item There exists $t\in\mathscr{S}(C^{\prime},C)$ such that $t^{\ast}\delta=0$.
\end{enumerate}
\end{lem}
\betagin{proof}
By duality, it is enough to show that {\rm (1)} implies {\rm (2)}. Remark that the equivalent conditions in Lemma~\ref{LemSplitN} {\rm (3)} are satisfied by our assumption. Let $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$ be an $\mathfrak{s}$-triangle, and suppose that $s\in\mathscr{S}(A,A^{\prime})$ satisfies $s_{\ast}\delta=0$. Since the split $\mathfrak{s}$-triangle $A^{\prime}\to A^{\prime}\mathrm{op}lus C\to C$ realizes $s_{\ast}\delta=0$, condition {\rm (MR3)} applied to the pair $(s,\mathrm{id}_C)$, together with $\mathscr{S}=p^{-1}(\overseterline{\mathscr{S}})$, gives $u\in\mathscr{S}(B,A^{\prime}\mathrm{op}lus C)$ which makes
\[
\xy
(-16,6)*+{A}="0";
(0,6)*+{B}="2";
(16,6)*+{C}="4";
(-16,-6)*+{A^{\prime}}="10";
(0,-6)*+{A^{\prime}\mathrm{op}lus C}="12";
(16,-6)*+{C}="14";
{\ar^{\overseterline{x}} "0";"2"};
{\ar^{\overseterline{y}} "2";"4"};
{\ar_{\overseterline{s}} "0";"10"};
{\ar_{\overseterline{u}} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_(0.4){\left[\betagin{smallmatrix}1\\0\end{smallmatrix}\right]} "10";"12"};
{\ar_(0.6){[0\ 1]} "12";"14"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
\endxy
\]
commutative in $\overseterline{\mathscr{C}}$. By {\rm (MR2)}, we obtain a commutative square
\[
\xy
(-7,6)*+{B^{\prime}}="0";
(7,6)*+{C}="2";
(-7,-6)*+{B}="4";
(7,-6)*+{A^{\prime}\mathrm{op}lus C}="6";
{\ar^{\overseterline{v}} "0";"2"};
{\ar_{\overseterline{b}} "0";"4"};
{\ar^{\left[\betagin{smallmatrix}0\\1\end{smallmatrix}\right]} "2";"6"};
{\ar_(0.4){\overseterline{u}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
in $\overseterline{\mathscr{C}}$ such that $v\in\mathscr{S}$. Then we have $\overseterline{v}=\overseterline{y\circ b}$, which forces $y\circ b\in\mathscr{S}$ by Lemma~\ref{LemSplitN} {\rm (2)}. Thus $t=y\circ b$ satisfies the required properties.
\end{proof}
\betagin{dfn}\label{DefK}
Define a subset $\mathcal{K}(C,A)\subseteq\mathbb{E}bb(C,A)$ by
\betagin{eqnarray*}
\mathcal{K}(C,A)&=&\{\delta\in\mathbb{E}bb(C,A)\mid s_{\ast}\delta=0\ \text{for some}\ s\in\mathscr{S}(A,A^{\prime})\}\\
&=&\{\delta\in\mathbb{E}bb(C,A)\mid t^{\ast}\delta=0\ \text{for some}\ t\in\mathscr{S}(C^{\prime},C)\}
\end{eqnarray*}
for each $A,C\in\mathscr{C}$.
By Lemma~\ref{LemExtVanish}, these form a subfunctor $\mathcal{K}\subseteq\mathbb{E}bb$.
\end{dfn}
\betagin{prop}\label{PropK}
The following holds.
\betagin{enumerate}
\item $\mathcal{K}\subseteq\mathbb{E}bb$ is an additive subfunctor.
\item $\mathbb{E}bb/\mathcal{K}\colon\mathscr{C}^\mathrm{op}\times\mathscr{C}\to\mathscr{A}b$ is a biadditive functor which satisfies $(\mathbb{E}bb/\mathcal{K})(N,-)=0$ and $(\mathbb{E}bb/\mathcal{K})(-,N)=0$ for any $N\in\mathcal{N}_{\mathscr{S}}$.
\end{enumerate}
\end{prop}
\betagin{proof}
{\rm (1)} For any $\delta\in\mathcal{K}(C,A)$ and $\delta^{\prime}\in\mathcal{K}(C^{\prime},A^{\prime})$, we have $\delta\mathrm{op}lus\delta^{\prime}\in\mathcal{K}(C\mathrm{op}lus C^{\prime},A\mathrm{op}lus A^{\prime})$. Here $\delta\mathrm{op}lus\delta^{\prime}\in\mathbb{E}bb(C\mathrm{op}lus C^{\prime},A\mathrm{op}lus A^{\prime})$ is the element corresponding to $(\delta,0,0,\delta^{\prime})$ through the isomorphism $\mathbb{E}bb(C\mathrm{op}lus C^{\prime},A\mathrm{op}lus A^{\prime})\cong\mathbb{E}bb(C,A)\mathrm{op}lus\mathbb{E}bb(C,A^{\prime})\mathrm{op}lus\mathbb{E}bb(C^{\prime},A)\mathrm{op}lus\mathbb{E}bb(C^{\prime},A^{\prime})$ induced by the biadditivity of $\mathbb{E}bb$. Indeed, if $s_{\ast}\delta=0$ and $s^{\prime}_{\ast}\delta^{\prime}=0$ for some $s,s^{\prime}\in\mathscr{S}$, then $(s\mathrm{op}lus s^{\prime})_{\ast}(\delta\mathrm{op}lus\delta^{\prime})=0$ and $s\mathrm{op}lus s^{\prime}\in\mathscr{S}$ by {\rm (M0)}. This shows the additivity of $\mathcal{K}\subseteq\mathbb{E}bb$.
{\rm (2)} is immediate from {\rm (1)} and the definition.
\end{proof}
In the rest, for any $\delta\in\mathbb{E}bb(X,Y)$, let $\overseterline{\delta}\in(\mathbb{E}bb/\mathcal{K})(X,Y)$ denote the element represented by $\delta$.
\betagin{dfn}\label{DefEbar}
By Proposition~\ref{PropK}, we obtain a well-defined biadditive functor $\overseterline{\mathbb{E}bb}\colon\overseterline{\mathscr{C}}^\mathrm{op}\times\overseterline{\mathscr{C}}\to\mathscr{A}b$ given by the following.
\betagin{itemize}
\item $\overseterline{\mathbb{E}bb}(C,A)=(\mathbb{E}bb/\mathcal{K})(C,A)$ for any $A,C\in\mathscr{C}$.
\item $\overseterline{a}_{\ast}\overseterline{\delta}=\overseterline{a_{\ast}\delta}$ for any $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(C,A)$ and $\overseterline{a}\in\overseterline{\mathscr{C}}(A,A^{\prime})$.
\item $\overseterline{c}^{\ast}\overseterline{\delta}=\overseterline{c^{\ast}\delta}$ for any $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(C,A)$ and $\overseterline{c}\in\overseterline{\mathscr{C}}(C^{\prime},C)$.
\end{itemize}
Also, we have a natural transformation $\wp\colon\mathbb{E}bb\Rightarrow\overseterline{\mathbb{E}bb}\circ(p^\mathrm{op}\times p)$ given by
\[ \wp_{C,A}\colon\mathbb{E}bb(C,A)\to\overseterline{\mathbb{E}bb}(C,A)\ ;\ \delta\mapsto\overseterline{\delta} \]
for each $A,C\in\mathscr{C}$. We will often abbreviate an $\overseterline{\mathbb{E}bb}$-extension $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(C,A)$ to $C\overset{\overseterline{\delta}}{\dashrightarrow}A$, as in Definition~\ref{DefExtension}.
\end{dfn}
\betagin{rem}\label{RemTrivK}
By definition, the following are equivalent for any $\delta\in\mathbb{E}bb(C,A)$.
\betagin{enumerate}
\item $\overseterline{\delta}=0$ holds in $\overseterline{\mathbb{E}bb}(C,A)$.
\item $\overseterline{s}_{\ast} \overseterline{\delta}=0$ holds in $\overseterline{\mathbb{E}bb}(C,A^{\prime})$ for some/any $\overseterline{s}\in\overseterline{\mathscr{S}}(A,A^{\prime})$.
\item $\overseterline{t}^{\ast}\overseterline{\delta}=0$ holds in $\overseterline{\mathbb{E}bb}(C^{\prime},A)$ for some/any $\overseterline{t}\in\overseterline{\mathscr{S}}(C^{\prime},C)$.
\end{enumerate}
\end{rem}
We are going to construct a biadditive functor $\widetilde{\Ebb}\colon\widetilde{\C}^\mathrm{op}\times\widetilde{\C}\to\mathscr{A}b$ and a natural transformation $\overseterline{\mu}\colon\overseterline{\mathbb{E}bb}\Rightarrow\widetilde{\Ebb}\circ(\overseterline{Q}^\mathrm{op}\times\overseterline{Q})$. Though this could be realized abstractly by using coends as $\widetilde{\C}\underset{\overseterline{\mathscr{C}}}{\otimes}\overseterline{\mathbb{E}bb}\underset{\overseterline{\mathscr{C}}}{\otimes}\widetilde{\C}$, we give an explicit description of $\widetilde{\Ebb}$ below, which enables us to construct a realization $\widetilde{\mathfrak{s}}$ of $\widetilde{\Ebb}$ in the next subsection.
In fact, we will define $\widetilde{\Ebb}(C,A)$ to be the set of equivalence classes of triplets $(C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A)$ with respect to a suitable equivalence relation. Its definition (Definition~\ref{DefwE}) relies on the following Proposition~\ref{PropCommonDenom}.
\betagin{lem}\label{LemCommonDenom}
For any finite family of morphisms $\{\overseterline{s}_i\in\overseterline{\mathscr{S}}(A,X_i)\}_{1\le i\le n}$, there exist $X\in\mathscr{C}$, $\overseterline{s}\in\overseterline{\mathscr{S}}(A,X)$ and $\{\overseterline{u}_i\in\overseterline{\mathscr{S}}(X_i,X)\}_{1\le i\le n}$ such that $\overseterline{s}=\overseterline{u}_i\circ\overseterline{s}_i$ for any $1\le i\le n$. Dually for a family of morphisms $\{\overseterline{t}_i\in\overseterline{\mathscr{S}}(Z_i,C)\}_{1\le i\le n}$.
\end{lem}
\betagin{proof}
This follows from {\rm (MR1)} and {\rm (MR2)}, by the usual argument for multiplicative systems; a sketch is recorded below for convenience.
\end{proof}
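For the reader's convenience, here is a sketch in the case $n=2$; the general case follows by induction. Given $\overseterline{s}_1\in\overseterline{\mathscr{S}}(A,X_1)$ and $\overseterline{s}_2\in\overseterline{\mathscr{S}}(A,X_2)$, condition {\rm (MR2)} applied to $X_1\overset{\overseterline{s}_1}{\longleftarrow}A\overset{\overseterline{s}_2}{\longrightarrow}X_2$ gives a commutative square $\overseterline{u}_1\circ\overseterline{s}_1=\overseterline{u}_2\circ\overseterline{s}_2$ with $\overseterline{u}_2\in\overseterline{\mathscr{S}}(X_2,X)$. Put $\overseterline{s}=\overseterline{u}_2\circ\overseterline{s}_2$. Then $\overseterline{s}\in\overseterline{\mathscr{S}}$ since $\overseterline{\mathscr{S}}$ is closed by compositions, and $\overseterline{u}_1\in\overseterline{\mathscr{S}}$ follows from $\overseterline{u}_1\circ\overseterline{s}_1=\overseterline{s}$ and the $2$-out-of-$3$ property {\rm (MR1)}.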
\betagin{prop}\label{PropCommonDenom}
Let $n\ge2$ be an integer. Let $A,C\in\mathscr{C}$ be any pair of objects, and let
\[ (\mathbb{F}R{\overseterline{t}_i}{\overseterline{\delta}_i}{\overseterline{s}_i})=(C\overset{\overseterline{t}_i}{\longleftarrow}Z_i\overset{\overseterline{\delta}_i}{\dashrightarrow}X_i\overset{\overseterline{s}_i}{\longleftarrow}A)\qquad(i=1,2,\ldots,n) \]
be triplets with
$\overseterline{s}_i\in\overseterline{\mathscr{S}}(A,X_i), \overseterline{t}_i\in\overseterline{\mathscr{S}}(Z_i,C), \overseterline{\delta}_i\in\overseterline{\mathbb{E}bb}(Z_i,X_i)$. The following holds.
\betagin{enumerate}
\item We can take a \emph{common denominator} $\overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}}$.
More precisely, there exist
common $X,Z\in\mathscr{C}$, $\overseterline{s}\in\overseterline{\mathscr{S}}(A,X)$ and $\overseterline{t}\in\overseterline{\mathscr{S}}(Z,C)$, equipped with
$\overseterline{u}_i\in\overseterline{\mathscr{S}}(X_i,X)$, $\overseterline{v}_i\in\overseterline{\mathscr{S}}(Z,Z_i)$
which satisfy $\overseterline{s}=\overseterline{u}_i\circ\overseterline{s}_i$ and $\overseterline{t}=\overseterline{t}_i\circ\overseterline{v}_i$ as depicted in the following diagram for $1\le i\le n$,
\betagin{equation}\label{DiagCommonDenom}
\xy
(-23,-6)*+{C}="C";
(-7,6)*+{Z_i}="1";
(-7,-6)*+{Z}="3";
(7,6)*+{X_i}="11";
(7,-6)*+{X}="13";
(23,-6)*+{A}="A";
{\ar_{\overseterline{t}_i} "1";"C"};
{\ar^{\overseterline{t}} "3";"C"};
{\ar@{-->}^{\overseterline{\delta}_i} "1";"11"};
{\ar@{-->}_{\overseterline{\rho}_i} "3";"13"};
{\ar_{\overseterline{s}_i} "A";"11"};
{\ar^{\overseterline{s}} "A";"13"};
{\ar_{\overseterline{v}_i} "3";"1"};
{\ar_{\overseterline{u}_i} "11";"13"};
{\ar@{}|\circrclearrowright (-12,0);(-10,-4)};
{\ar@{}|\circrclearrowright (12,0);(10,-4)};
\endxy
\end{equation}
where we put $\overseterline{\rho}_i=\overseterline{v}_i^{\ast}\overseterline{u}_{i\ast}\overseterline{\delta}_i$.
When $n=2$, we write $(\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1})\sim(\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2})$ if $\overseterline{\rho}_1=\overseterline{\rho}_2$ holds for some (equivalently, by {\rm (2)} below, for any) choice of common denominator.
\item The relation $(\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1})\sim(\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2})$ is independent of the choice of common denominators. Namely, for any other choice of
\[ (\mathbb{F}R{\overseterline{t}^{\prime}}{\overseterline{\rho}^{\prime}_i}{\overseterline{s}^{\prime}})=(C\overset{\overseterline{t}^{\prime}}{\longleftarrow}Z^{\prime}\overset{\overseterline{\rho}^{\prime}_i}{\dashrightarrow}X^{\prime}\overset{\overseterline{s}^{\prime}}{\longleftarrow}A) \]
and $\overseterline{u}^{\prime}_i\in\overseterline{\mathscr{S}}(X_i,X^{\prime})$, $\overseterline{v}^{\prime}_i\in\overseterline{\mathscr{S}}(Z^{\prime},Z_i)$, $\overseterline{\rho}^{\prime}_i\in\overseterline{\mathbb{E}bb}(Z^{\prime},X^{\prime})$ satisfying $\overseterline{s}^{\prime}=\overseterline{u}^{\prime}_i\circ\overseterline{s}_i$, $\overseterline{t}^{\prime}=\overseterline{t}_i\circ\overseterline{v}^{\prime}_i$ and $\overseterline{\rho}^{\prime}_i=\overseterline{v}_i^{\prime\ast}\overseterline{u}^{\prime}_{i\ast}\overseterline{\delta}_i$ for $i=1,2$, we have $\overseterline{\rho}_1=\overseterline{\rho}_2$ if and only if $\overseterline{\rho}_1^{\prime}=\overseterline{\rho}_2^{\prime}$.
\end{enumerate}
\end{prop}
\betagin{proof}
{\rm (1)} This is immediate from Lemma~\ref{LemCommonDenom}.
{\rm (2)}
Assume that there exists a diagram (\ref{DiagCommonDenom}) with $\overseterline{\rho}_1=\overseterline{\rho}_2$.
For another common denominator $\overseterline{s}^{\prime}, \overseterline{t}^{\prime}\in\overseterline{\mathscr{S}}$ given in the assertion (2), we shall show $\overseterline{\rho}^{\prime}_1=\overseterline{\rho}^{\prime}_2$.
For triplets
$(\mathbb{F}R{\overseterline{t}}{\overseterline{\rho}_i}{\overseterline{s}})$ and $(\mathbb{F}R{\overseterline{t}^{\prime}}{\overseterline{\rho}^{\prime}_i}{\overseterline{s}^{\prime}})$, by Lemma~\ref{LemCommonDenom},
there exists a common denominator $\overseterline{s}^{\prime}r, \overseterline{t}^{\prime}r\in\overseterline{\mathscr{S}}$ together with the following commutative diagrams
\[
\xymatrix{
Z_i\ar[rrd]_{\overseterline{t}_i}&Z\ar@{}[d]|( .3)\circrclearrowright\ar[l]_{\overseterline{v}_i}\ar[dr]|{\overseterline{t}}&Z^{\prime}r\ar@{}[dr]|( .3)\circrclearrowright\ar@{}[dl]|( .3)\circrclearrowright\ar[l]_{\overseterline{v}}\ar[r]^{\overseterline{v}^{\prime}}\ar[d]^{\overseterline{t}^{\prime}r}&Z^{\prime}\ar@{}[d]|( .3)\circrclearrowright\ar[dl]|{\overseterline{t}^{\prime}}\ar[r]^{\overseterline{v}^{\prime}_i}&Z_i\ar[lld]^{\overseterline{t}_i}&&A\ar[ld]|{\overseterline{s}}\ar[d]^{\overseterline{s}^{\prime}r}\ar[rd]|{\overseterline{s}^{\prime}}\ar[rrd]^{\overseterline{s}_i}\ar[lld]_{\overseterline{s}_i}&&\\
&&C&&X_i\ar[r]_{\overseterline{u}_i}&X\ar@{}[u]|( .3)\circrclearrowright\ar[r]_{\overseterline{u}}&X^{\prime}r\ar@{}[ur]|( .3)\circrclearrowright\ar@{}[ul]|( .3)\circrclearrowright&X^{\prime}\ar@{}[u]|( .3)\circrclearrowright\ar[l]^{\overseterline{u}^{\prime}}&X_i\ar[l]^{\overseterline{u}^{\prime}_i}
}
\]
in which all morphisms belong to $\overseterline{\mathscr{S}}$.
Composing with further morphisms $Z^{\prime\prime\prime}\to Z^{\prime}r$ and $X^{\prime}r\to X^{\prime\prime\prime}$ in $\overseterline{\mathscr{S}}$ if necessary, we may assume that $\overseterline{v_iv}=\overseterline{v^{\prime}_iv^{\prime}}$ and $\overseterline{uu_i}=\overseterline{u^{\prime} u^{\prime}_i}$ hold for $i=1,2$ from the start.
Then we have the following equations
\betagin{eqnarray*}
\overseterline{v}^{\ast}\overseterline{u}_{\ast}\overseterline{\rho}_i\ =\ \overseterline{v}^{\ast}\overseterline{u}_{\ast}(\overseterline{v}_i^{\ast}\overseterline{u}_{i\ast}\overseterline{\delta}_i)&=&(\overseterline{v_iv})^{\ast}(\overseterline{uu_i})_\ast\overseterline{\delta}_i\\
&=&(\overseterline{v^{\prime}_iv^{\prime}})^{\ast}(\overseterline{u^{\prime} u^{\prime}_i})_\ast\overseterline{\delta}_i\ =\
{\overseterline{v}^{\prime}}^{\ast}\overseterline{u}^{\prime}_{\ast}({\overseterline{v}^{\prime}_i}^{\ast}{\overseterline{u}^{\prime}_i}_{\ast}\overseterline{\delta}_i)\ =\ {\overseterline{v}^{\prime}}^{\ast}\overseterline{u}^{\prime}_{\ast}\overseterline{\rho}^{\prime}_i
\end{eqnarray*}
for $i=1,2$.
By the assumption, we have ${\overseterline{v}^{\prime}}^{\ast}\overseterline{u}^{\prime}_{\ast}\overseterline{\rho}^{\prime}_1={\overseterline{v}^{\prime}}^{\ast}\overseterline{u}^{\prime}_{\ast}\overseterline{\rho}^{\prime}_2$ and particularly ${\overseterline{v}^{\prime}}^{\ast}\overseterline{u}^{\prime}_{\ast}(\overseterline{\rho}^{\prime}_1-\overseterline{\rho}^{\prime}_2)=0$.
Remark~\ref{RemTrivK} shows $\overseterline{\rho}^{\prime}_1=\overseterline{\rho}^{\prime}_2$.
\end{proof}
\betagin{cor}\label{CorwETrans}
The above $\sim$ gives an equivalence relation on the set of triplets
\betagin{equation}\label{wE}
\{ (\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}})=(C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A) \mid X,Z\in\mathscr{C},\, \overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}},\,\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(Z,X)\}
\end{equation}
for each $A,C\in\mathscr{C}$.
\end{cor}
\betagin{proof}
Transitivity can be shown by using Proposition~\ref{PropCommonDenom} {\rm (2)}.
In fact, if there exist triplets with $(\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1})\sim(\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2})\sim(\mathbb{F}R{\overseterline{t}_3}{\overseterline{\delta}_3}{\overseterline{s}_3})$,
by Proposition~\ref{PropCommonDenom} {\rm (1)}, we have a diagram (\ref{DiagCommonDenom}) for $i=1,2,3$.
Proposition~\ref{PropCommonDenom} {\rm (2)} forces $\overseterline{\rho_1}=\overseterline{\rho_2}$ and $\overseterline{\rho_2}=\overseterline{\rho_3}$ with respect to this common denominator, hence $\overseterline{\rho_1}=\overseterline{\rho_3}$ and $(\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1})\sim(\mathbb{F}R{\overseterline{t}_3}{\overseterline{\delta}_3}{\overseterline{s}_3})$.
Reflexivity and symmetry are clear from the definition.
\end{proof}
\betagin{dfn}\label{DefwE}
Let $A,C\in\mathscr{C}$ be any pair of objects. Define $\widetilde{\Ebb}(C,A)$ to be the quotient set of $(\ref{wE})$ by the equivalence relation $\sim$ obtained above.
We denote the equivalence class of $(\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}})=(C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A)$ by $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$. By definition we have
\[ \widetilde{\Ebb}(C,A)=\{ [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}] \mid X,Z\in\mathscr{C},\, \overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}},\,\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(Z,X)\}. \]
\end{dfn}
In the following arguments, we frequently use the following.
\betagin{rem}
By {\rm (MR2)}, any morphism $\alpha\in\widetilde{\C}(A,B)$ can be expressed as
\[ \alpha=\overseterline{Q}(\overseterline{s})^{-1}\circ\overseterline{Q}(\overseterline{f})=\overseterline{Q}(\overseterline{g})\circ\overseterline{Q}(\overseterline{t})^{-1} \]
by some pairs of morphisms $A\overset{\overseterline{f}}{\longrightarrow}B^{\prime}\overset{\overseterline{s}}{\longleftarrow}B$ and $A\overset{\overseterline{t}}{\longleftarrow}A^{\prime}\overset{\overseterline{g}}{\longrightarrow}B$ satisfying $\overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}}$.
\end{rem}
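As an illustration of how such expressions are manipulated (a standard consequence of {\rm (MR2)}, recorded only for convenience), consider $\alpha=\overseterline{Q}(\overseterline{s})^{-1}\circ\overseterline{Q}(\overseterline{f})\in\widetilde{\C}(A,B)$ and $\beta=\overseterline{Q}(\overseterline{t})^{-1}\circ\overseterline{Q}(\overseterline{g})\in\widetilde{\C}(B,C)$ given by $A\overset{\overseterline{f}}{\longrightarrow}B^{\prime}\overset{\overseterline{s}}{\longleftarrow}B$ and $B\overset{\overseterline{g}}{\longrightarrow}C^{\prime}\overset{\overseterline{t}}{\longleftarrow}C$ with $\overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}}$. By {\rm (MR2)} there are $\overseterline{g}^{\prime}\in\overseterline{\mathscr{C}}(B^{\prime},D)$ and $\overseterline{t}^{\prime}\in\overseterline{\mathscr{S}}(C^{\prime},D)$ with $\overseterline{g}^{\prime}\circ\overseterline{s}=\overseterline{t}^{\prime}\circ\overseterline{g}$, which gives $\overseterline{Q}(\overseterline{g})\circ\overseterline{Q}(\overseterline{s})^{-1}=\overseterline{Q}(\overseterline{t}^{\prime})^{-1}\circ\overseterline{Q}(\overseterline{g}^{\prime})$ and hence
\[ \beta\circ\alpha=\overseterline{Q}(\overseterline{t}^{\prime}\circ\overseterline{t})^{-1}\circ\overseterline{Q}(\overseterline{g}^{\prime}\circ\overseterline{f}), \]
where $\overseterline{t}^{\prime}\circ\overseterline{t}\in\overseterline{\mathscr{S}}$ since $\overseterline{\mathscr{S}}$ is closed by compositions.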
\betagin{dfn}\label{DefwEFtr}
Let $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ be any element.
\betagin{enumerate}
\item For any morphism $\alpha\in\widetilde{\C}(A,A^{\prime})$, express it as $\alpha=\overseterline{Q}(\overseterline{u})^{-1}\circ\overseterline{Q}(\overseterline{a})$ with some $\overseterline{a}\in\overseterline{\mathscr{C}}(A,D)$ and $\overseterline{u}\in\overseterline{\mathscr{S}}(A^{\prime},D)$. Then take a commutative square
\betagin{equation}\label{DD}
\xy
(-6,6)*+{X}="0";
(6,6)*+{X^{\prime}}="2";
(-6,-6)*+{A}="4";
(6,-6)*+{D}="6";
{\ar^{\overseterline{a}^{\prime}} "0";"2"};
{\ar^{\overseterline{s}} "4";"0"};
{\ar_{\overseterline{s}^{\prime}} "6";"2"};
{\ar_{\overseterline{a}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\end{equation}
in $\overseterline{\mathscr{C}}$ with $\overseterline{s}^{\prime}\in\overseterline{\mathscr{S}}$, and put
$\alpha_{\ast} [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}}]$.
\item For any morphism $\gamma\in\widetilde{\C}(C^{\prime},C)$, express it as $\gamma=\overseterline{Q}(\overseterline{c})\circ\overseterline{Q}(\overseterline{v})^{-1}$ with some $\overseterline{c}\in\overseterline{\mathscr{C}}(E,C)$ and $\overseterline{v}\in\overseterline{\mathscr{S}}(E,C^{\prime})$. Then take a commutative square
\[
\xy
(-6,6)*+{E^{\prime}}="0";
(6,6)*+{Z}="2";
(-6,-6)*+{E}="4";
(6,-6)*+{C}="6";
{\ar^{\overseterline{c}^{\prime}} "0";"2"};
{\ar_{\overseterline{t}^{\prime}} "0";"4"};
{\ar^{\overseterline{t}} "2";"6"};
{\ar_{\overseterline{c}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
in $\overseterline{\mathscr{C}}$ with $\overseterline{t}^{\prime}\in\overseterline{\mathscr{S}}$, and put
$ \gamma^{\ast} [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[\mathbb{F}R{\overseterline{v\circ t^{\prime}}}{\overseterline{c^{\prime\ast}\delta}}{\overseterline{s}}]$.
\end{enumerate}
\end{dfn}
\betagin{lem}
The above is well-defined.
\end{lem}
\betagin{proof}
{\rm (1)}
Let $(\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}})=(C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A)$ be any triplet with $\overseterline{s},\overseterline{t}\in\overseterline{\mathscr{S}}$ and $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(Z,X)$.
For each expression $\alpha=\overseterline{Q}(\overseterline{u})^{-1}\circ\overseterline{Q}(\overseterline{a})$ with $\overseterline{a}\in\overseterline{\mathscr{C}}(A,D)$ and $\overseterline{u}\in\overseterline{\mathscr{S}}(A^{\prime},D)$, the equivalence class $[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}}]$ does not depend on the choice of $(\ref{DD})$. Indeed, for any other commutative square
\[
\xy
(-6,6)*+{X}="0";
(6,6)*+{X^{\prime}r}="2";
(-6,-6)*+{A}="4";
(6,-6)*+{D}="6";
{\ar^{\overseterline{a}^{\prime}r} "0";"2"};
{\ar^{\overseterline{s}} "4";"0"};
{\ar_{\overseterline{s}^{\prime}r} "6";"2"};
{\ar_{\overseterline{a}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
with $\overseterline{s}^{\prime}r\in\overseterline{\mathscr{S}}$, by {\rm (MR2)} we may find a pair of morphisms $\overseterline{s}_1\in\overseterline{\mathscr{S}}(X^{\prime},X^{\prime\prime\prime})$ and $\overseterline{s}_2\in\overseterline{\mathscr{S}}(X^{\prime}r,X^{\prime\prime\prime})$ such that $\overseterline{s}_1\circ\overseterline{a}^{\prime}=\overseterline{s}_2\circ\overseterline{a}^{\prime}r$ and $\overseterline{s}_1\circ\overseterline{s}^{\prime}=\overseterline{s}_2\circ\overseterline{s}^{\prime}r$. Then we obviously have $[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}r_{\ast}\delta}}{\overseterline{s^{\prime}r\circ u}}]$.
Next we check that the class $[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}}]$ does not depend on the choice of expression of $\alpha\in\widetilde{\C}(A,A^{\prime})$.
By using {\rm (MR2)}, we may tacitly reduce it to the case where $\alpha=\overseterline{Q}(\overseterline{u}_1)^{-1}\circ\overseterline{Q}(\overseterline{a}_1)=\overseterline{Q}(\overseterline{u})^{-1}\circ\overseterline{Q}(\overseterline{a})$ admits a commutative diagram below
\[
\xymatrix{
&D_1\ar[d]|{\overseterline{u}_2}&\\
A\ar[r]^{\overseterline{a}}\ar[ru]^{\overseterline{a}_1}&D\ar@{}[ul]|( .3)\circrclearrowright\ar@{}[ur]|( .3)\circrclearrowright&A^{\prime}\ar[l]_{\overseterline{u}}\ar[lu]_{\overseterline{u}_1}
}
\]
with $\overseterline{u}_1, \overseterline{u}_2, \overseterline{u}\in\overseterline{\mathscr{S}}$.
By (MR2), the morphisms $X\overset{\overseterline{s}}{\longleftarrow}A\overset{\overseterline{a}_1}{\longrightarrow}D_1\overset{\overseterline{u}_2}{\longrightarrow}D$ yield the following commutative diagram
\[
\xymatrix{
A\ar@<0.5ex>@{}[rr]^\circrclearrowright\ar@{}[dr]|\circrclearrowright\ar@/^1.2pc/[rr]|{\overseterline{a}}\ar[d]_{\overseterline{s}}\ar[r]_{\overseterline{a}_1}&D_1\ar@{}[dr]|\circrclearrowright\ar[d]^{\overseterline{s}^{\prime}r}\ar[r]_{\overseterline{u}_2}&D\ar[d]^{\overseterline{s}^{\prime}}\\
X\ar@<-0.5ex>@{}[rr]_\circrclearrowright\ar@/_1.2pc/[rr]|{\overseterline{a}^{\prime}}\ar[r]^{\overseterline{a}^{\prime}_1}&X^{\prime}r\ar[r]^{\overseterline{u}^{\prime}_2}&X^{\prime}
}
\]
where the vertical arrows belong to $\overseterline{\mathscr{S}}$.
By {\rm (MR1)}, we also have $\overseterline{u}_2^{\prime}\in\overseterline{\mathscr{S}}$.
This shows that both $(\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}})$ and $(\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{1\ast}\delta}}{\overseterline{s^{\prime}r\circ u_1}})$ define the same equivalence class in $\widetilde{\Ebb}(C,A)$.
Let us show the independence from the choice of representatives of $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]$.
Let
$(\mathbb{F}R{\overseterline{t}^{\prime}r}{\overseterline{\rho}}{\overseterline{s}^{\prime}r})$ be another representative. It suffices to consider the case where there is a diagram as below,
\betagin{equation}\label{Diagwelldefpull}
\xy
(-23,-6)*+{C}="C";
(-7,6)*+{Z}="1";
(-7,-6)*+{Z^{\prime}r}="3";
(7,6)*+{X}="11";
(7,-6)*+{X^{\prime}r}="13";
(23,-6)*+{A}="A";
{\ar_{\overseterline{t}} "1";"C"};
{\ar^{\overseterline{t}^{\prime}r} "3";"C"};
{\ar@{-->}^{\overseterline{\delta}} "1";"11"};
{\ar@{-->}_{\overseterline{\rho}} "3";"13"};
{\ar_{\overseterline{s}} "A";"11"};
{\ar^{\overseterline{s}^{\prime}r} "A";"13"};
{\ar_{\overseterline{v}^{\prime}r} "3";"1"};
{\ar_{\overseterline{u}^{\prime}r} "11";"13"};
{\ar@{}|\circrclearrowright (-12,0);(-10,-4)};
{\ar@{}|\circrclearrowright (12,0);(10,-4)};
\endxy
\end{equation}
with $\overseterline{v}^{\prime}r,\overseterline{u}^{\prime}r\in\overseterline{\mathscr{S}}$ satisfying $\overseterline{v^{\prime\prime\ast}u^{\prime}r_{\ast}\delta}=\overseterline{\rho}$. Take a commutative square as in $(\ref{DD})$.
For morphisms $X^{\prime}r\overset{\overseterline{u}^{\prime}r}{\longleftarrow}X\overset{\overseterline{a}^{\prime}}{\longrightarrow}X^{\prime}$, we may take the following commutative square by {\rm (MR2)}
\[
\xy
(-6,6)*+{X}="0";
(6,6)*+{X^{\prime}}="2";
(-6,-6)*+{X^{\prime}r}="4";
(6,-6)*+{W}="6";
{\ar^{\overseterline{a}^{\prime}} "0";"2"};
{\ar_{\overseterline{u}^{\prime}r} "0";"4"};
{\ar^{\overseterline{w}} "2";"6"};
{\ar_{\overseterline{a}^{\prime}r} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
with $\overseterline{w}\in\overseterline{\mathscr{S}}$.
This gives rise to the following diagram
\[
\xy
(-23,-6)*+{C}="C";
(-7,6)*+{Z}="1";
(-7,-6)*+{Z^{\prime}r}="3";
(7,6)*+{X^{\prime}}="11";
(7,-6)*+{W}="13";
(23,-6)*+{A^{\prime}}="A'";
{\ar_{\overseterline{t}} "1";"C"};
{\ar^{\overseterline{t}^{\prime}r} "3";"C"};
{\ar@{-->}^{\overseterline{a}^{\prime}_\ast\overseterline{\delta}} "1";"11"};
{\ar@{-->}_{\overseterline{a}^{\prime}r_\ast\overseterline{\rho}} "3";"13"};
{\ar_{\overseterline{s^{\prime} u}} "A'";"11"};
{\ar^{\overseterline{ws^{\prime} u}} "A'";"13"};
{\ar_{\overseterline{v}^{\prime}r} "3";"1"};
{\ar_{\overseterline{w}} "11";"13"};
{\ar@{}|\circrclearrowright (-12,0);(-10,-4)};
{\ar@{}|\circrclearrowright (12,0);(10,-4)};
\endxy
\]
where all solid arrows belong to $\overseterline{\mathscr{S}}$.
This shows $[\mathbb{F}R{\overseterline{t}}{\overseterline{a}^{\prime}_{\ast}\overseterline{\delta}}{\overseterline{s^{\prime} u}}]=[\mathbb{F}R{\overseterline{t}^{\prime}r}{\overseterline{a}^{\prime}r_{\ast}\overseterline{\rho}}{\overseterline{ws^{\prime} u}}]$.
{\rm (2)} This is dual to {\rm (1)}.
\end{proof}
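To illustrate Definition~\ref{DefwEFtr}, and to explain the meaning of the triplets, let us record the following straightforward computation, which is not needed in the sequel. For $\overseterline{s}\in\overseterline{\mathscr{S}}(A,X)$, $\overseterline{t}\in\overseterline{\mathscr{S}}(Z,C)$ and $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(Z,X)$, we have
\[ [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=(\overseterline{Q}(\overseterline{t})^{-1})^{\ast}(\overseterline{Q}(\overseterline{s})^{-1})_{\ast}[\mathbb{F}R{\overseterline{\mathrm{id}}}{\overseterline{\delta}}{\overseterline{\mathrm{id}}}] \]
in $\widetilde{\Ebb}(C,A)$. Indeed, writing $\overseterline{Q}(\overseterline{s})^{-1}=\overseterline{Q}(\overseterline{s})^{-1}\circ\overseterline{Q}(\overseterline{\mathrm{id}}_X)$ and choosing the square $(\ref{DD})$ consisting of identities, Definition~\ref{DefwEFtr} {\rm (1)} gives $(\overseterline{Q}(\overseterline{s})^{-1})_{\ast}[\mathbb{F}R{\overseterline{\mathrm{id}}}{\overseterline{\delta}}{\overseterline{\mathrm{id}}}]=[\mathbb{F}R{\overseterline{\mathrm{id}}}{\overseterline{\delta}}{\overseterline{s}}]$, and applying Definition~\ref{DefwEFtr} {\rm (2)} to $\overseterline{Q}(\overseterline{t})^{-1}$ in the same way yields the asserted equality.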
\betagin{rem}\label{RemPP}
For any $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]\in\widetilde{\Ebb}(C,A)$, $\alpha=\overseterline{Q}(\overseterline{u})^{-1}\circ\overseterline{Q}(\overseterline{a})\in\widetilde{\C}(A,A^{\prime})$ and $\gamma=\overseterline{Q}(\overseterline{c})\circ\overseterline{Q}(\overseterline{v})^{-1}\in\widetilde{\C}(C^{\prime},C)$ as in Definition~\ref{DefwEFtr}, the element
\[ \gamma^{\ast}\alpha_{\ast} [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[\mathbb{F}R{\overseterline{v\circ t^{\prime}}}{\overseterline{c^{\prime\ast}a^{\prime}_{\ast}\delta}}{\overseterline{s^{\prime}\circ u}}] \]
is given by the following diagram.
\[
\xy
(-31,-8)*+{C^{\prime}}="0";
(-15,8)*+{E^{\prime}}="2";
(-23,0)*+{E}="4";
(-15,-8)*+{C}="6";
(-7,0)*+{Z}="8";
(7,0)*+{X}="10";
(15,8)*+{X^{\prime}}="12";
(15,-8)*+{A}="14";
(23,0)*+{D}="16";
(31,-8)*+{A^{\prime}}="18";
{\ar_{\overseterline{t}^{\prime}} "2";"4"};
{\ar_{\overseterline{v}} "4";"0"};
{\ar^{\overseterline{c}^{\prime}} "2";"8"};
{\ar^{\overseterline{t}} "8";"6"};
{\ar_{\overseterline{c}} "4";"6"};
{\ar@{-->}_{\overseterline{\delta}} "8";"10"};
{\ar@/^0.60pc/@{-->}^{\overseterline{c^{\prime\ast}a^{\prime}_{\ast}\delta}} "2";"12"};
{\ar^{\overseterline{a}^{\prime}} "10";"12"};
{\ar_{\overseterline{s}^{\prime}} "16";"12"};
{\ar^{\overseterline{s}} "14";"10"};
{\ar_{\overseterline{a}} "14";"16"};
{\ar_{\overseterline{u}} "18";"16"};
{\ar@{}|\circrclearrowright "2";"6"};
{\ar@{}|\circrclearrowright "10";"16"};
\endxy
\]
\end{rem}
\betagin{lem}\label{LemEmu}
For any $A,C\in\mathscr{C}$, the following holds.
\betagin{enumerate}
\item For any pair of elements $[\mathbb{F}R{\overseterline{t}_i}{\overseterline{\delta}_i}{\overseterline{s}_i}]\in\widetilde{\Ebb}(C,A)$ $(i=1,2)$, using Proposition~\ref{PropCommonDenom} {\rm (1)}, we define their sum as
\[ [\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1}]+[\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{\rho}_1+\overseterline{\rho}_2}{\overseterline{s}}] \]
by taking common denominators so that $[\mathbb{F}R{\overseterline{t}_i}{\overseterline{\delta}_i}{\overseterline{s}_i}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{\rho}_i}{\overseterline{s}}]$ hold for $i=1,2$.
Then this sum is well-defined, and makes $\widetilde{\Ebb}(C,A)$ an abelian group with zero element $[\mathbb{F}R{\overseterline{\mathrm{id}}}{0}{\overseterline{\mathrm{id}}}]$.
\item If we define a map $\overseterline{\mu}_{C,A}\colon\overseterline{\mathbb{E}bb}(C,A)\to\widetilde{\Ebb}(C,A)$ by $\overseterline{\mu}_{C,A}(\overseterline{\delta})=[\mathbb{F}R{\overseterline{\mathrm{id}}}{\overseterline{\delta}}{\overseterline{\mathrm{id}}}]$ for any $\overseterline{\delta}\in\overseterline{\mathbb{E}bb}(C,A)$, then it is a monomorphism.
\end{enumerate}
\end{lem}
\betagin{proof}
{\rm (1)}
Well-definedness of the sum is obvious from Proposition~\ref{PropCommonDenom} {\rm (2)}.
We will show that it is associative.
For any $[\mathbb{F}R{\overseterline{t}_i}{\overseterline{\delta}_i}{\overseterline{s}_i}]\in\widetilde{\Ebb}(C,A)$ $(i=1,2,3)$, we take common denominators so that $[\mathbb{F}R{\overseterline{t}_i}{\overseterline{\delta}_i}{\overseterline{s}_i}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{\rho}_i}{\overseterline{s}}]$ hold for $i=1,2,3$.
Then we have
\betagin{eqnarray*}
([\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1}]+[\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2}])+[\mathbb{F}R{\overseterline{t}_3}{\overseterline{\delta}_3}{\overseterline{s}_3}]&=&[\mathbb{F}R{\overseterline{t}}{\overseterline{\rho}_1+\overseterline{\rho}_2+\overseterline{\rho}_3}{\overseterline{s}}] \\
&=&[\mathbb{F}R{\overseterline{t}_1}{\overseterline{\delta}_1}{\overseterline{s}_1}]+([\mathbb{F}R{\overseterline{t}_2}{\overseterline{\delta}_2}{\overseterline{s}_2}]+[\mathbb{F}R{\overseterline{t}_3}{\overseterline{\delta}_3}{\overseterline{s}_3}]).
\end{eqnarray*}
Commutativity is obvious.
For any $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]\in\widetilde{\Ebb}(C,A)$, we have
\[ [\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]+[\mathbb{F}R{\overseterline{\mathrm{id}}}{0}{\overseterline{\mathrm{id}}}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]+[\mathbb{F}R{\overseterline{t}}{0}{\overseterline{s}}]=[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}],
\]
where the first equality uses $[\mathbb{F}R{\overseterline{\mathrm{id}}}{0}{\overseterline{\mathrm{id}}}]=[\mathbb{F}R{\overseterline{t}}{0}{\overseterline{s}}]$, which follows by taking $\overseterline{s},\overseterline{t}$ as a common denominator. Thus $[\mathbb{F}R{\overseterline{\mathrm{id}}}{0}{\overseterline{\mathrm{id}}}]$ is the zero element.
For any $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]\in\widetilde{\Ebb}(C,A)$, we have the inverse element $[\mathbb{F}R{\overseterline{t}}{\overseterline{-\delta}}{\overseterline{s}}]$ of $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]$. Thus $\widetilde{\Ebb}(C,A)$ is an abelian group with respect to the above sum.
{\rm (2)} Additivity is immediate from the definition. Injectivity follows from Remark~\ref{RemTrivK}: if $[\mathbb{F}R{\overseterline{\mathrm{id}}}{\overseterline{\delta}}{\overseterline{\mathrm{id}}}]=0$ in $\widetilde{\Ebb}(C,A)$, then taking a common denominator with the zero element yields $\overseterline{v}^{\ast}\overseterline{u}_{\ast}\overseterline{\delta}=0$ for some $\overseterline{u},\overseterline{v}\in\overseterline{\mathscr{S}}$, and hence $\overseterline{\delta}=0$.
\end{proof}
\betagin{prop}\label{PropEmu}
The above definitions give a biadditive functor $\widetilde{\Ebb}\colon\widetilde{\C}^\mathrm{op}\times\widetilde{\C}\to\mathscr{A}b$ and a natural transformation $\overseterline{\mu}\colon\overseterline{\mathbb{E}bb}\Rightarrow\widetilde{\Ebb}\circ(\overseterline{Q}^\mathrm{op}\times\overseterline{Q})$.
\end{prop}
\betagin{proof}
First we show that $\widetilde{\Ebb}$ is a biadditive functor. By Remark~\ref{RemPP}, it is easy to check $\gamma^{\ast}\alpha_{\ast}=\alpha_{\ast}\gamma^{\ast}$ for any morphisms $\alpha,\gamma$ in $\widetilde{\C}$.
We show that $\alpha_{\ast}$ is a group homomorphism for any $\alpha\in\widetilde{\C}(A,A^{\prime})$. Take morphisms $\overseterline{a}\in\overseterline{\mathscr{C}}(A,D)$ and $\overseterline{u}\in\overseterline{\mathscr{S}}(A^{\prime},D)$ such that $\alpha=\overseterline{Q}(\overseterline{u})^{-1}\circ\overseterline{Q}(\overseterline{a})$. For any $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}}}{\overseterline{s}}], [\mathbb{F}R{\overseterline{t^{\prime}}}{\overseterline{\delta_{2}}}{\overseterline{s^{\prime}}}]\in\widetilde{\Ebb}(C,A)$, we need to show $\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}}}{\overseterline{s}}]+[\mathbb{F}R{\overseterline{t^{\prime}}}{\overseterline{\delta_{2}}}{\overseterline{s^{\prime}}}])=\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}}}{\overseterline{s}}])+\alpha_{\ast}([\mathbb{F}R{\overseterline{t^{\prime}}}{\overseterline{\delta_{2}}}{\overseterline{s^{\prime}}}])$. Taking common denominators, we may assume that $t=t^{\prime},s=s^{\prime}$ and $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_i}}{\overseterline{s}}]=[C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta_i}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ hold for $i=1,2$ from the beginning. By {\rm (MR2)}, there exist morphisms $\overseterline{a^{\prime}}\colon X\to X^{\prime}$ and $\overseterline{s^{\prime}}\colon D\to X^{\prime}$ satisfying $\overseterline{a^{\prime}\circ s}=\overseterline{s^{\prime}\circ a}$. Then we obtain
\betagin{eqnarray*}
\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}}}{\overseterline{s}}]+[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{2}}}{\overseterline{s}}])
&=&\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}+\delta_{2}}}{\overseterline{s}}]) \\
&=&[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}(\delta_{1}+\delta_{2})}}{\overseterline{s^{\prime}\circ u}}] \\
&=&[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta_{1}}}{\overseterline{s^{\prime}\circ u}}]+[\mathbb{F}R{\overseterline{t}}{\overseterline{a^{\prime}_{\ast}\delta_{2}}}{\overseterline{s^{\prime}\circ u}}] \\
&=&\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{1}}}{\overseterline{s}}])+\alpha_{\ast}([\mathbb{F}R{\overseterline{t}}{\overseterline{\delta_{2}}}{\overseterline{s}}]).
\end{eqnarray*}
Dually we can show that $\gamma^{\ast}$ is a group homomorphism.
Let us show $(\beta\circ\alpha)_{\ast}=\beta_{\ast}\alpha_{\ast}$ for any $\alpha\in\widetilde{\C}(A,A_1)$ and $\beta\in\widetilde{\C}(A_1,A_2)$. Put $\alpha=\overseterline{Q}(\overseterline{u_{1}})^{-1}\circ\overseterline{Q}(\overseterline{a_{1}})$ and $\beta=\overseterline{Q}(\overseterline{u_{2}})^{-1}\circ\overseterline{Q}(\overseterline{a_{2}})$ as in the following diagram. For any $[\mathbb{F}R{\overseterline{t}}{\overseterline{\delta}}{\overseterline{s}}]=[C\overset{\overseterline{t}}{\longleftarrow}Z\overset{\overseterline{\delta}}{\dashrightarrow}X\overset{\overseterline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$, we obtain the following diagram by {\rm (MR2)}
\[
\xy
(-15,-8)*+{C}="6";
(-7,0)*+{Z}="8";
(7,0)*+{X}="10";
(15,8)*+{X_{1}}="12";
(15,-8)*+{A}="14";
(23,0)*+{D_{1}}="16";
(31,-8)*+{A_{1}}="18";
(31,8)*+{D_{3}}="20";
(39,0)*+{D_{2}}="22";
(47,-8)*+{A_{2}}="24";
(23,16)*+{X_{2}}="26";
{\ar^{\overseterline{t}} "8";"6"};
{\ar@{-->}_{\overseterline{\delta}} "8";"10"};
{\ar^{\overseterline{a_{1}^{\prime}}} "10";"12"};
{\ar^{\overseterline{s_{1}}} "16";"12"};
{\ar^{\overseterline{s}} "14";"10"};
{\ar_{\overseterline{a_{1}}} "14";"16"};
{\ar_{\overseterline{u_{1}}} "18";"16"};
{\ar_{\overseterline{a_{2}}} "18";"22"};
{\ar_{\overseterline{u_{2}}} "24";"22"};
{\ar_{\overseterline{u_{3}}} "22";"20"};
{\ar^{\overseterline{a_{3}}} "16";"20"};
{\ar^{\overseterline{a_{3}^{\prime}}} "12";"26"};
{\ar_{\overseterline{s_{2}}} "20";"26"};
{\ar@{}|\circrclearrowright "10";"16"};
{\ar@{}|\circrclearrowright "16";"22"};
{\ar@{}|\circrclearrowright "12";"20"};
\endxy
\]
in which each $\overseterline{u_{i}},\overseterline{s_{i}}$ belongs to $\overseterline{\mathscr{S}}$.
Then we obtain
\begin{eqnarray*}
(\beta\circ\alpha)_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])
&=&(\overline{Q}(\overline{u_{3}\circ u_{2}})^{-1}\circ\overline{Q}(\overline{a_{3}\circ a_{1}}))_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]) \\
&=&[\FR{\overline{t}}{\overline{(a_{3}^{\prime})_{\ast}(a_{1}^{\prime})_{\ast}\delta}}{\overline{s_{2}\circ u_{3}\circ u_{2}}}] \\
&=&\beta_{\ast}([\FR{\overline{t}}{\overline{(a_{1}^{\prime})_{\ast}\delta}}{\overline{s_{1}\circ u_{1}}}])\\
&=&\beta_{\ast}\alpha_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]).
\end{eqnarray*}
Dually we can show $(\beta\circ\alpha)^{\ast}=\alpha^{\ast}\beta^{\ast}$.
For $i=1,2$, let $\alpha_{i}=\overline{Q}(\overline{u_{i}})^{-1}\circ\overline{Q}(\overline{a_{i}})\in\widetilde{\C}(A, A^{\prime})$ be any pair of morphisms; we show that $(\alpha_{1}+\alpha_{2})_{\ast}=(\alpha_{1})_{\ast}+(\alpha_{2})_{\ast}$. Taking common denominators, we may assume $\overline{u_{1}}=\overline{u_{2}}$ with $u_{1}\colon A^{\prime}\to D$. Take any element $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[C\overset{\overline{t}}{\longleftarrow}Z\overset{\overline{\delta}}{\dashrightarrow}X\overset{\overline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$. By {\rm (MR2)}, there are morphisms $\overline{a_{i}^{\prime}}\colon X\to X^{\prime}, \overline{s_{i}}\colon D\to X^{\prime}$ such that $\overline{a_{i}^{\prime}\circ s}=\overline{s_{i}\circ a_{i}}$ for $i=1,2$. Taking common denominators, we may assume that $s_{1}=s_{2}$. Then we have
\begin{eqnarray*}
((\alpha_{1})_{\ast}+(\alpha_{2})_{\ast})([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])
&=&(\alpha_{1})_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])+(\alpha_{2})_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]) \\
&=&[\FR{\overline{t}}{\overline{(a_{1}^{\prime})_{\ast}\delta}}{\overline{s_{1}\circ u_{1}}}]+[\FR{\overline{t}}{\overline{(a_{2}^{\prime})_{\ast}\delta}}{\overline{s_{1}\circ u_{1}}}] \\
&=&[\FR{\overline{t}}{\overline{(a_{1}^{\prime}+a_{2}^{\prime})_{\ast}\delta}}{\overline{s_{1}\circ u_{1}}}] \\
&=&(\overline{Q}(\overline{u_{1}})^{-1}\circ\overline{Q}(\overline{a_{1}+a_{2}}))_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]) \\
&=&(\alpha_{1}+\alpha_{2})_{\ast}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])
\end{eqnarray*}
as desired. Dually, we obtain $(\alpha_{1}+\alpha_{2})^{\ast}=(\alpha_{1})^{\ast}+(\alpha_{2})^{\ast}$. Thus $\widetilde{\Ebb}$ is a biadditive functor.
We show that $\overline{\mu}$ is a natural transformation. By Lemma~\ref{LemEmu} {\rm (2)}, it suffices to confirm the naturality of $\overline{\mu}$. Let $\overline{a}\colon A\to A^{\prime}$ be an arbitrary morphism in $\overline{\mathscr{C}}$. For any object $C\in\mathscr{C}$ and any element $\overline{\delta}\in\overline{\mathbb{E}}(C,A)$, we have
\begin{eqnarray*}
(\widetilde{\Ebb}(C,\overline{Q}(\overline{a}))\circ\overline{\mu}_{C,A})(\overline{\delta})
&=&\widetilde{\Ebb}(C,\overline{Q}(\overline{a}))([\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]) \\
&=&[\FR{\overline{\mathrm{id}}}{\overline{a_{\ast}\delta}}{\overline{\mathrm{id}}}] \\
&=&\overline{\mu}_{C,A^{\prime}}(\overline{a_{\ast}\delta}) \\
&=&(\overline{\mu}_{C,A^{\prime}}\circ\overline{\mathbb{E}}(C,\overline{a}))(\overline{\delta}).
\end{eqnarray*}
Dually, $\widetilde{\Ebb}(\overline{Q}(\overline{c}),A)\circ\overline{\mu}_{C,A}=\overline{\mu}_{C^{\prime},A}\circ\overline{\mathbb{E}}(\overline{c},A)$ holds for any morphism $\overline{c}\colon C^{\prime}\to C$ in $\overline{\mathscr{C}}$. Thus $\overline{\mu}$ is a natural transformation.
\end{proof}
\subsection{Construction of $\widetilde{\mathfrak{s}}$}
As in the previous subsection, suppose that $\mathscr{S}$ satisfies {\rm (M0),(MR1),(MR2),(MR3)} and $\mathscr{S}=p^{-1}(\overline{\mathscr{S}})$.
In this subsection, we construct a realization $\widetilde{\mathfrak{s}}$ of $\widetilde{\Ebb}$.
\begin{prop}\label{PropWellDef}
Let $A,C\in\mathscr{C}$ be any pair of objects.
For each $i=1,2$, let $(\FR{\overline{t}_i}{\overline{\delta}_i}{\overline{s}_i})=(C\overset{\overline{t}_i}{\longleftarrow}Z_i\overset{\overline{\delta}_i}{\dashrightarrow}X_i\overset{\overline{s}_i}{\longleftarrow}A)$ be a triplet satisfying $\overline{s}_i\in\overline{\mathscr{S}}(A,X_i), \overline{t}_i\in\overline{\mathscr{S}}(Z_i,C), \overline{\delta}_i\in\overline{\mathbb{E}}(Z_i,X_i)$, with $\mathfrak{s}(\delta_i)=[X_i\overset{x_i}{\longrightarrow}Y_i\overset{y_i}{\longrightarrow}Z_i]$.
If $[\FR{\overline{t}_1}{\overline{\delta}_1}{\overline{s}_1}]=[\FR{\overline{t}_2}{\overline{\delta}_2}{\overline{s}_2}]$ holds in $\widetilde{\Ebb}(C,A)$, then
\begin{equation}\label{Equiv_Seq_in_Ctilde}
[A\overset{Q(x_1\circ s_1)}{\longrightarrow}Y_1\overset{Q(t_1\circ y_1)}{\longrightarrow}C]=[A\overset{Q(x_2\circ s_2)}{\longrightarrow}Y_2\overset{Q(t_2\circ y_2)}{\longrightarrow}C]
\end{equation}
holds as sequences in $\widetilde{\C}$.
\end{prop}
\begin{proof}
By the definition of the equivalence relation, it suffices to show $(\ref{Equiv_Seq_in_Ctilde})$ in the case where there exist $\overline{u}\in\overline{\mathscr{S}}(X_1,X_2)$ and $\overline{v}\in\overline{\mathscr{S}}(Z_2,Z_1)$ such that
$\overline{s}_2=\overline{u}\circ\overline{s}_1$, $\overline{t}_2=\overline{t}_1\circ\overline{v}$, and $\overline{u_{\ast} v^{\ast}\delta_1}=\overline{\delta}_2$.
By the definition of $\overline{\mathbb{E}}$, there exists $u^{\prime}\in\mathscr{S}(X_2,X_2^{\prime})$ such that $u^{\prime}_{\ast}(u_{\ast} v^{\ast}\delta_1)=u^{\prime}_{\ast}\delta_2$.
Hence replacing $u,s_2,\delta_2$ by $u^{\prime} u,u^{\prime} s_2,u^{\prime}_{\ast}\delta_2$ respectively, we may assume that $u_{\ast} v^{\ast}\delta_1=\delta_2$ holds from the beginning. In this case, for $\mathfrak{s}(u_{\ast}\delta_1)=[X_2\overset{x_3}{\longrightarrow}Y_3\overset{y_3}{\longrightarrow}Z_1]$, there exist $\mathbf{w}_1,\mathbf{w}_2\in\overline{\mathscr{S}}$ which make
\[
\xy
(-14,12)*+{X_1}="0";
(0,12)*+{Y_1}="2";
(14,12)*+{Z_1}="4";
(-14,0)*+{X_2}="10";
(0,0)*+{Y_3}="12";
(14,0)*+{Z_1}="14";
(-14,-12)*+{X_2}="20";
(0,-12)*+{Y_2}="22";
(14,-12)*+{Z_2}="24";
{\ar^{\overline{x}_1} "0";"2"};
{\ar^{\overline{y}_1} "2";"4"};
{\ar_{\overline{u}} "0";"10"};
{\ar_{\mathbf{w}_1} "2";"12"};
{\ar@{=} "4";"14"};
{\ar^{\overline{x}_3} "10";"12"};
{\ar^{\overline{y}_3} "12";"14"};
{\ar@{=} "20";"10"};
{\ar^{\mathbf{w}_2} "22";"12"};
{\ar_{\overline{v}} "24";"14"};
{\ar_{\overline{x}_2} "20";"22"};
{\ar_{\overline{y}_2} "22";"24"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
{\ar@{}|\circlearrowright "10";"22"};
{\ar@{}|\circlearrowright "12";"24"};
\endxy
\]
commutative in $\overline{\mathscr{C}}$ by {\rm (MR3)}.
Then $\alpha=\overline{Q}(\mathbf{w}_2)^{-1}\circ\overline{Q}(\mathbf{w}_1)\in\operatorname{Iso}(\widetilde{\C})$ makes
\[
\xy
(-16,0)*+{A}="0";
(4,0)*+{}="1";
(0,8)*+{Y_1}="2";
(0,-8)*+{Y_2}="4";
(-4,0)*+{}="5";
(16,0)*+{C}="6";
{\ar^{Q(x_1\circ s_1)} "0";"2"};
{\ar_{Q(x_2\circ s_2)} "0";"4"};
{\ar^{Q(t_1\circ y_1)} "2";"6"};
{\ar_{Q(t_2\circ y_2)} "4";"6"};
{\ar^{\cong}_{\alpha} "2";"4"};
{\ar@{}|\circlearrowright "0";"1"};
{\ar@{}|\circlearrowright "5";"6"};
\endxy
\]
commutative in $\widetilde{\C}$, which shows $(\ref{Equiv_Seq_in_Ctilde})$.
\end{proof}
\begin{dfn}\label{Def_sN}
Let $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[C\overset{\overline{t}}{\longleftarrow}Z\overset{\overline{\delta}}{\dashrightarrow}X\overset{\overline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ be any $\widetilde{\Ebb}$-extension. Take $\mathfrak{s}(\delta)=[X\overset{x}{\longrightarrow}Y\overset{y}{\longrightarrow}Z]$, and put
\begin{equation}\label{Real_sN}
\widetilde{\mathfrak{s}}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])=[A\overset{Q(x\circ s)}{\longrightarrow}Y\overset{Q(t\circ y)}{\longrightarrow}C]
\end{equation}
in $\widetilde{\C}$. Well-definedness is ensured by Proposition~\ref{PropWellDef}.
\end{dfn}
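As a quick illustration of $(\ref{Real_sN})$: for an $\widetilde{\Ebb}$-extension coming from $\overline{\mu}$, that is for $s=\mathrm{id}_A$ and $t=\mathrm{id}_C$ with $\delta\in\mathbb{E}(C,A)$ and $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}Y\overset{y}{\longrightarrow}C]$, the formula reduces to
\[ \widetilde{\mathfrak{s}}([\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}])=[A\overset{Q(x)}{\longrightarrow}Y\overset{Q(y)}{\longrightarrow}C], \]
so the realization of $\overline{\mu}_{C,A}(\delta)$ is simply the image of $\mathfrak{s}(\delta)$ under $Q$.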
The following lemma will be used to show Proposition~\ref{PropExact}.
\begin{lem}\label{LemNtoS}
Let $f\in\mathscr{C}(A,B)$ be any morphism. The following holds.
\begin{enumerate}
\item If $f$ is an $\mathfrak{s}$-inflation with $\operatorname{Cone}(f)\in\mathcal{N}_{\mathscr{S}}$, then $f\in\mathscr{S}$.
\item Dually, if $f$ is an $\mathfrak{s}$-deflation with $\operatorname{CoCone}(f)\in\mathcal{N}_{\mathscr{S}}$, then $f\in\mathscr{S}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it suffices to show {\rm (1)}. Let $A\overset{f}{\longrightarrow}B\overset{g}{\longrightarrow}N\overset{\delta}{\dashrightarrow}$ be an $\mathfrak{s}$-triangle with $N\in\mathcal{N}_{\mathscr{S}}$.
Remark that $A\overset{\mathrm{id}_A}{\longrightarrow}A\to 0\overset{0}{\dashrightarrow}$ is also an $\mathfrak{s}$-triangle.
By {\rm (MR3)}, there exists $\overline{u}\in\overline{\mathscr{S}}(A,B)$ which makes
\[
\xy
(-14,6)*+{A}="0";
(0,6)*+{A}="2";
(14,6)*+{0}="4";
(-14,-6)*+{A}="10";
(0,-6)*+{B}="12";
(14,-6)*+{N}="14";
{\ar^{\overline{\mathrm{id}_A}} "0";"2"};
{\ar^{} "2";"4"};
{\ar_{\overline{\mathrm{id}_A}} "0";"10"};
{\ar_{\overline{u}} "2";"12"};
{\ar "4";"14"};
{\ar_{\overline{f}} "10";"12"};
{\ar_{\overline{g}} "12";"14"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
commutative in $\overline{\mathscr{C}}$. This shows $\overline{f}=\overline{u}\in\overline{\mathscr{S}}$, hence $f\in\mathscr{S}$ follows.
\end{proof}
\begin{prop}\label{PropExact}
Let $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[C\overset{\overline{t}}{\longleftarrow}Z\overset{\overline{\delta}}{\dashrightarrow}X\overset{\overline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ be any $\widetilde{\Ebb}$-extension, and take its representative arbitrarily. Then for $\mathfrak{s}(\delta)=[X\overset{x}{\longrightarrow}Y\overset{y}{\longrightarrow}Z]$, the following holds.
\begin{enumerate}
\item The sequence
\[ \widetilde{\C}(-,A)\overset{Q(x\circ s)\circ-}{\longrightarrow}\widetilde{\C}(-,Y)\overset{Q(t\circ y)\circ-}{\longrightarrow}\widetilde{\C}(-,C)\overset{[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]_\sharp}{\longrightarrow}\widetilde{\Ebb}(-,A) \]
is exact.
\item The sequence
\[ \widetilde{\C}(C,-)\overset{-\circ Q(t\circ y)}{\longrightarrow}\widetilde{\C}(Y,-)\overset{-\circ Q(x\circ s)}{\longrightarrow}\widetilde{\C}(A,-)\overset{[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]^\sharp}{\longrightarrow}\widetilde{\Ebb}(C,-) \]
is exact.
\end{enumerate}
\end{prop}
\begin{proof}
{\rm (1)} By the commutativity of the following diagram,
\[
\xy
(-42,6)*+{\widetilde{\C}(-,A)}="0";
(-14,6)*+{\widetilde{\C}(-,Y)}="2";
(14,6)*+{\widetilde{\C}(-,C)}="4";
(42,6)*+{\widetilde{\Ebb}(-,A)}="6";
(-42,-6)*+{\widetilde{\C}(-,X)}="10";
(-14,-6)*+{\widetilde{\C}(-,Y)}="12";
(14,-6)*+{\widetilde{\C}(-,Z)}="14";
(42,-6)*+{\widetilde{\Ebb}(-,X)}="16";
{\ar^{Q(x\circ s)\circ-} "0";"2"};
{\ar^{Q(t\circ y)\circ-} "2";"4"};
{\ar^{[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]_\sharp} "4";"6"};
{\ar_{Q(s)\circ-}^{\cong} "0";"10"};
{\ar@{=} "2";"12"};
{\ar^{Q(t)\circ-}_{\cong} "14";"4"};
{\ar^{Q(s)_{\ast}}_{\cong} "6";"16"};
{\ar_{Q(x)\circ-} "10";"12"};
{\ar_{Q(y)\circ-} "12";"14"};
{\ar_{[\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]_\sharp} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
{\ar@{}|\circlearrowright "4";"16"};
\endxy
\]
it suffices to show the exactness of the bottom row.
Let us show the exactness of
$\widetilde{\C}(P,X)\overset{Q(x)\circ-}{\longrightarrow}\widetilde{\C}(P,Y)\overset{Q(y)\circ-}{\longrightarrow}\widetilde{\C}(P,Z)$
for any $P\in\mathscr{C}$. Let $\beta\in\widetilde{\C}(P,Y)$ be any element. Express $\beta$ as $\beta=\overline{Q}(\overline{f})\circ\overline{Q}(\overline{s})^{-1}$, using $\overline{f}\in\overline{\mathscr{C}}(P^{\prime},Y)$ and $\overline{s}\in\overline{\mathscr{S}}(P^{\prime},P)$. If $Q(y)\circ\beta=0$ then $\overline{Q}(\overline{y}\circ\overline{f})=0$, hence there exists some $\overline{s}^{\prime}\in\overline{\mathscr{S}}(P^{\prime\prime},P^{\prime})$ such that $\overline{y}\circ\overline{f}\circ\overline{s}^{\prime}=0$ in $\overline{\mathscr{C}}$. This means that there exist $N\in\mathcal{N}_{\mathscr{S}}$ and $i\in\mathscr{C}(P^{\prime\prime},N),j\in\mathscr{C}(N,Z)$ such that $y\circ f\circ s^{\prime}=j\circ i$. For $\mathfrak{s}(j^{\ast}\delta)=[X\overset{x^{\prime}}{\longrightarrow}Y^{\prime}\overset{y^{\prime}}{\longrightarrow}N]$, we have a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{X}="0";
(0,6)*+{Y^{\prime}}="2";
(12,6)*+{N}="4";
(24,6)*+{}="6";
(-12,-6)*+{X}="10";
(0,-6)*+{Y}="12";
(12,-6)*+{Z}="14";
(24,-6)*+{}="16";
{\ar^{x^{\prime}} "0";"2"};
{\ar^{y^{\prime}} "2";"4"};
{\ar@{-->}^{j^{\ast}\delta} "4";"6"};
{\ar@{=} "0";"10"};
{\ar^{k} "2";"12"};
{\ar^{j} "4";"14"};
{\ar_{x} "10";"12"};
{\ar_{y} "12";"14"};
{\ar@{-->}_{\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
such that $Y^{\prime}\overset{\left[\begin{smallmatrix} -y^{\prime}\\ k\end{smallmatrix}\right]}{\longrightarrow}N\oplus Y\overset{[j\ y]}{\longrightarrow}Z\overset{x^{\prime}_{\ast}\delta}{\dashrightarrow}$ is an $\mathfrak{s}$-triangle.
By the exactness of $\mathscr{C}(P^{\prime\prime},Y^{\prime})\to\mathscr{C}(P^{\prime\prime},N\oplus Y)\to\mathscr{C}(P^{\prime\prime},Z)$, there is $p\in\mathscr{C}(P^{\prime\prime},Y^{\prime})$ such that
\[ \left[\begin{smallmatrix} -y^{\prime}\\ k\end{smallmatrix}\right]\circ p=\left[\begin{smallmatrix} -i\\ f\circ s^{\prime}\end{smallmatrix}\right]. \]
By Lemma~\ref{LemNtoS}, we have $\overline{x}^{\prime}\in\overline{\mathscr{S}}$. If we put $\alpha=\overline{Q}(\overline{x}^{\prime})^{-1}\circ Q(p)\circ Q(s\circ s^{\prime})^{-1}$, then it satisfies $\beta=Q(x)\circ\alpha$ by construction.
Let us show the exactness of
$\widetilde{\C}(P,Y)\overset{Q(y)\circ-}{\longrightarrow}\widetilde{\C}(P,Z)\overset{[\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]_\sharp}{\longrightarrow}\widetilde{\Ebb}(P,X)$
for any $P\in\mathscr{C}$. Suppose that a morphism $\beta\in\widetilde{\C}(P,Z)$
satisfies $[\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]_\sharp(\beta)=\beta^{\ast}[\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]=0$. Take $\overline{f}\in\overline{\mathscr{C}}(P^{\prime},Z)$ and $\overline{s}\in\overline{\mathscr{S}}(P^{\prime},P)$ such that $\beta=\overline{Q}(\overline{f})\circ\overline{Q}(\overline{s})^{-1}$.
Then $\beta^{\ast}[\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}]=[\FR{\overline{s}}{\overline{f^{\ast}\delta}}{\overline{\mathrm{id}}}]$ by definition. By Remark~\ref{RemTrivK} it follows that $\overline{f^{\ast}\delta}=0$, hence $t^{\ast} (f^{\ast}\delta)=0$ holds for some $t\in\mathscr{S}(P^{\prime\prime},P^{\prime})$.
By the exactness of
$\mathscr{C}(P^{\prime\prime},Y)\overset{y\circ-}{\longrightarrow}\mathscr{C}(P^{\prime\prime},Z)\overset{\delta_\sharp}{\longrightarrow}\mathbb{E}(P^{\prime\prime},X)$, there exists $g\in\mathscr{C}(P^{\prime\prime},Y)$ such that $y\circ g=f\circ t$. If we put $\alpha=\overline{Q}(\overline{g})\circ\overline{Q}(\overline{s\circ t})^{-1}\in\widetilde{\C}(P,Y)$, this satisfies $Q(y)\circ\alpha=\beta$ in $\widetilde{\C}(P,Z)$ by construction.
{\rm (2)} This can be shown dually to {\rm (1)}.
\end{proof}
\subsection{Proof of the main theorem}
As in the previous subsection, suppose that $\mathscr{S}$ satisfies {\rm (M0),(MR1),(MR2),(MR3)} and $\mathscr{S}=p^{-1}(\overline{\mathscr{S}})$.
We will prove Theorem~\ref{ThmMultLoc} item by item.
\begin{proof}[Proof of Theorem~\ref{ThmMultLoc} {\rm (1)}]
Since conditions {\rm (C1'),(C2'),(C3')} can be checked in a dual manner, it suffices to confirm conditions {\rm (C1),(C2),(C3)} in Definition~\ref{DefEACat}.
As {\rm (C1)} follows from Proposition~\ref{PropExact} and {\rm (C2)} is obvious from the definition, it is enough to show {\rm (C3)}.
Let $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[C\overset{\overline{t}}{\longleftarrow}Z\overset{\overline{\delta}}{\dashrightarrow}X\overset{\overline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ be any $\widetilde{\Ebb}$-extension, and let $\alpha\in\widetilde{\C}(A,A^{\prime})$ be any morphism. Take any $\overline{a}\in\overline{\mathscr{C}}(A,D)$ and $\overline{u}\in\overline{\mathscr{S}}(A^{\prime},D)$ so that $\alpha=\overline{Q}(\overline{u})^{-1}\circ\overline{Q}(\overline{a})$ holds. Then by definition
$\alpha_{\ast} [\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[\FR{\overline{t}}{\overline{a^{\prime}_{\ast}\delta}}{\overline{s^{\prime}\circ u}}]$
is given by using a commutative square
\begin{equation}\label{CSby(MS2)}
\xy
(-6,6)*+{A}="0";
(6,6)*+{D}="2";
(-6,-6)*+{X}="4";
(6,-6)*+{X^{\prime}}="6";
{\ar^{\overline{a}} "0";"2"};
{\ar_{\overline{s}} "0";"4"};
{\ar^{\overline{s}^{\prime}} "2";"6"};
{\ar_{\overline{a}^{\prime}} "4";"6"};
{\ar@{}|\circlearrowright "0";"6"};
\endxy
\end{equation}
in $\overline{\mathscr{C}}$ satisfying $\overline{s}^{\prime}\in\overline{\mathscr{S}}(D,X^{\prime})$. By {\rm (C3)} for $(\mathscr{C},\mathbb{E},\mathfrak{s})$,
we obtain a morphism of $\mathfrak{s}$-triangles
\begin{equation}\label{Morph_dels}
\xy
(-12,6)*+{X}="0";
(0,6)*+{Y}="2";
(12,6)*+{Z}="4";
(24,6)*+{}="6";
(-12,-6)*+{X^{\prime}}="10";
(0,-6)*+{Y^{\prime}}="12";
(12,-6)*+{Z}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{a^{\prime}} "0";"10"};
{\ar_{b} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{y^{\prime}} "12";"14"};
{\ar@{-->}_{a^{\prime}_{\ast}\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\end{equation}
which makes
\begin{equation}\label{stri_mappingcone}
X\overset{\left[\begin{smallmatrix} x\\ a^{\prime}\end{smallmatrix}\right]}{\longrightarrow}Y\oplus X^{\prime}\overset{[b\ -x^{\prime}]}{\longrightarrow}Y^{\prime}\overset{y^{\prime\ast}\delta}{\dashrightarrow}
\end{equation} an $\mathfrak{s}$-triangle.
By definition we have $\widetilde{\mathfrak{s}}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])=[A\overset{Q(x\circ s)}{\longrightarrow}Y\overset{Q(t\circ y)}{\longrightarrow}C]$ and $\widetilde{\mathfrak{s}}(\alpha_{\ast}[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])=[A^{\prime}\overset{Q(x^{\prime}\circ s^{\prime}\circ u)}{\longrightarrow}Y^{\prime}\overset{Q(t\circ y^{\prime})}{\longrightarrow}C]$.
Put $x^{\prime\prime}=x^{\prime}\circ s^{\prime}\circ u$. Obviously, $(\ref{Morph_dels})$ induces the following morphism of $\widetilde{\mathfrak{s}}$-triangles.
\[
\xy
(-18,6)*+{A}="0";
(0,6)*+{Y}="2";
(18,6)*+{C}="4";
(32,6)*+{}="6";
(-18,-6)*+{A^{\prime}}="10";
(0,-6)*+{Y^{\prime}}="12";
(18,-6)*+{C}="14";
(32,-6)*+{}="16";
{\ar^{Q(x\circ s)} "0";"2"};
{\ar^{Q(t\circ y)} "2";"4"};
{\ar@{-->}^{[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]} "4";"6"};
{\ar_{\alpha} "0";"10"};
{\ar^{Q(b)} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{Q(x^{\prime\prime})} "10";"12"};
{\ar_{Q(t\circ y^{\prime})} "12";"14"};
{\ar@{-->}_{\alpha_{\ast}[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
As we have $Q(t\circ y^{\prime})^{\ast}[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[\FR{\overline{\mathrm{id}}_{Y^{\prime}}}{\overline{y^{\prime\ast}\delta}}{\overline{s}}]$, it remains to show that
\begin{equation}\label{tobe_ws_tri}
A\overset{\left[\begin{smallmatrix} Q(x\circ s)\\ \alpha\end{smallmatrix}\right]}{\longrightarrow}Y\oplus A^{\prime}\overset{[Q(b)\ -Q(x^{\prime\prime})]}{\longrightarrow}Y^{\prime}\overset{[\FR{\overline{\mathrm{id}}_{Y^{\prime}}}{\overline{y^{\prime\ast}\delta}}{\overline{s}}]}{\dashrightarrow}
\end{equation}
is an $\widetilde{\mathfrak{s}}$-triangle.
Remark that since $(\ref{stri_mappingcone})$ is an $\mathfrak{s}$-triangle, we have
\[ \widetilde{\mathfrak{s}}([\FR{\overline{\mathrm{id}}_{Y^{\prime}}}{\overline{y^{\prime\ast}\delta}}{\overline{s}}])
=[A\overset{Q(\left[\begin{smallmatrix} x\\ a^{\prime}\end{smallmatrix}\right]\circ s)}{\longrightarrow}Y\oplus X^{\prime}\overset{Q([b\ -x^{\prime}])}{\longrightarrow}Y^{\prime}] \]
by definition. Thus $(\ref{tobe_ws_tri})$ indeed becomes an $\widetilde{\mathfrak{s}}$-triangle, since the isomorphism $\beta=\mathrm{id}_Y\oplus Q(s^{\prime}\circ u)\in\widetilde{\C}(Y\oplus A^{\prime},Y\oplus X^{\prime})$ makes the diagram
\[
\xy
(-18,0)*+{A}="0";
(6,0)*+{}="1";
(0,10)*+{Y\mathrm{op}lus A^{\prime}}="2";
(0,-10)*+{Y\mathrm{op}lus X^{\prime}}="4";
(-6,0)*+{}="5";
(18,0)*+{Y^{\prime}}="6";
{\ar^{\left[\begin{smallmatrix} Q(x\circ s)\\\alpha\end{smallmatrix}\right]} "0";"2"};
{\ar_{\left[\begin{smallmatrix} Q(x\circ s)\\ Q(a^{\prime}\circ s)\end{smallmatrix}\right]} "0";"4"};
{\ar^{[ Q(b)\ -Q(x^{\prime\prime})]} "2";"6"};
{\ar_{[ Q(b)\ -Q(x^{\prime})]} "4";"6"};
{\ar^{\cong}_{\beta} "2";"4"};
{\ar@{}|\circlearrowright "0";"1"};
{\ar@{}|\circlearrowright "5";"6"};
\endxy
\]
commutative in $\widetilde{\C}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmMultLoc} {\rm (2)}]
We will show the universality of the functor $(Q,\mu)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$.
Let $(\mathscr{D},\mathbb{F},\mathfrak{t})$ be any weakly extriangulated category.
{\rm (i)} Let $(F,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to (\mathscr{D},\mathbb{F},\mathfrak{t})$ be an exact functor sending any $s\in\mathscr{S}$ to an isomorphism.
It is obvious that $F$ uniquely factors through $Q$ via an additive functor $\widetilde{F}\colon\widetilde{\C}\to\mathscr{D}$, namely, $F=\widetilde{F}\circ Q$.
We define a natural transformation $\widetilde{\phi}\colon\widetilde{\Ebb}\to\mathbb{F}\circ(\widetilde{F}^\mathrm{op}\times\widetilde{F})$ so that the pair $(\widetilde{F},\widetilde{\phi})$ becomes exact.
For each $A,C\in\mathscr{C}$, let $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]=[C\overset{\overline{t}}{\longleftarrow}Z\overset{\overline{\delta}}{\dashrightarrow}X\overset{\overline{s}}{\longleftarrow}A]\in\widetilde{\Ebb}(C,A)$ be any element with $\mathfrak{s}(\delta)=[X\overset{x}{\longrightarrow}Y\overset{y}{\longrightarrow}Z]$.
Keeping in mind that $F(s)$ and $F(t)$ are isomorphisms,
we put $\widetilde{\phi}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]):=(F(t)^{-1})^\ast(F(s)^{-1})_\ast(\phi(\delta))$.
Well-definedness and the additivity of $\widetilde{\phi}_{C,A}\colon\widetilde{\Ebb}(C,A)\to\mathbb{F}(F(C),F(A))$ can be deduced easily from the definition of the equivalence relation given in Corollary~\ref{CorwETrans}.
Moreover, we get
\begin{equation}\label{UnivRealization}
\begin{split}
\mathfrak{t}(\widetilde{\phi}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]))&=[FA\overset{F(x\circ s)}{\longrightarrow}FY\overset{F(t\circ y)}{\longrightarrow}FC]\\
&=[\widetilde{F}QA\overset{\widetilde{F}Q(x\circ s)}{\longrightarrow}\widetilde{F}QY\overset{\widetilde{F}Q(t\circ y)}{\longrightarrow}\widetilde{F}QC]
\end{split}
\end{equation}
by Remark~\ref{RemWE}.
Naturality of $\widetilde{\phi}$ can be checked as follows.
Consider a morphism $\alpha=\overline{Q}(\overline{u})^{-1}\circ\overline{Q}(\overline{a})\colon A\to A^{\prime}$ and the same commutative square as in (\ref{CSby(MS2)}).
Then we have the following equalities.
\begin{eqnarray*}
\widetilde{\phi}(\alpha_\ast[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]) &=& \widetilde{\phi}([\FR{\overline{t}}{\overline{a^{\prime}_\ast\delta}}{\overline{s^{\prime}\circ u}}])\\
&=& (F(t)^{-1})^\ast(F(u)^{-1}\circ F(s^{\prime})^{-1})_\ast(\phi(a^{\prime}_\ast\delta))\\
&=& (F(t)^{-1})^\ast(F(u)^{-1}\circ F(s^{\prime})^{-1}\circ F(a^{\prime}))_\ast(\phi(\delta))\\
&=& (F(t)^{-1})^\ast(F(u)^{-1}\circ F(a)\circ F(s)^{-1})_\ast(\phi(\delta))\\
&=& \widetilde{F}(\alpha)_\ast\widetilde{\phi}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])
\end{eqnarray*}
Similarly, we have $\widetilde{\phi}(\beta^\ast[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])=\widetilde{F}(\beta)^\ast\widetilde{\phi}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])$ for any $\beta\in\widetilde{\C}(C^{\prime}, C)$.
By construction, for any $\delta\in\mathbb{E}(C,A)$ we have
\begin{equation}\label{P22}
\widetilde{\phi}_{C,A}(\mu_{C,A}(\delta))=\widetilde{\phi}_{C,A}([\FR{\overline{\mathrm{id}}}{\overline{\delta}}{\overline{\mathrm{id}}}])=\phi(\delta),
\end{equation}
which shows $(\widetilde{\phi}\circ(Q^\mathrm{op}\times Q))\cdot\mu=\phi$, and thus $(\widetilde{F},\widetilde{\phi})\circ(Q,\mu)=(F,\phi)$ holds. It is obvious that $\widetilde{\phi}$ is uniquely determined by $(\ref{P22})$ (and the requirement of $(\ref{P7})$ for natural transformations in Definition~\ref{DefExFun} {\rm (3)}), as in the above construction.
{\rm (ii)} Let $\eta\colon(F,\phi)\Rightarrow(G,\psi)$ be any natural transformation of exact functors between $(F,\phi),(G,\psi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to (\mathscr{D},\mathbb{F},\mathfrak{t})$ which send any $s\in\mathscr{S}$ to isomorphisms.
By the universality of localization for additive categories, there exists a unique natural transformation $\widetilde{\eta}\colon\widetilde{F}\Rightarrow \widetilde{G}$ of additive functors such that $\widetilde{\eta}\circ Q=\eta$. Indeed, this is given by $\widetilde{\eta}_C=\eta_C$ for any $C\in\mathscr{C}$. It is obvious from the construction and the naturality of $\eta$, that such $\widetilde{\eta}$ satisfies
\[ (\widetilde{\eta}_A)_{\ast}\widetilde{\phi}_{C,A}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}])=
(\widetilde{\eta}_C)^{\ast}\widetilde{\psi}_{C,A}([\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]) \]
for any $[\FR{\overline{t}}{\overline{\delta}}{\overline{s}}]\in\widetilde{\Ebb}(C,A)$.
\end{proof}
To show {\rm (3)}, we need the following lemma.
\begin{lem}\label{LemComposeInf}
The following holds for any morphism $\alpha$ in $\widetilde{\C}$.
\begin{enumerate}
\item $\alpha$ is an $\widetilde{\mathfrak{s}}$-inflation in $\widetilde{\C}$ if and only if $\alpha=\beta\circ Q(f)\circ \gamma$ holds for some $\mathfrak{s}$-inflation $f$ in $\mathscr{C}$ and isomorphisms $\beta,\gamma$ in $\widetilde{\C}$.
\item $\alpha$ is an $\widetilde{\mathfrak{s}}$-deflation in $\widetilde{\C}$ if and only if $\alpha=\beta\circ Q(f)\circ\gamma$ holds for some $\mathfrak{s}$-deflation $f$ in $\mathscr{C}$ and isomorphisms $\beta,\gamma$ in $\widetilde{\C}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}.
By the definition of $\widetilde{\mathfrak{s}}$ it is obvious that any $\widetilde{\mathfrak{s}}$-inflation can be written as $\beta\circ Q(f)\circ \gamma$ for some $\mathfrak{s}$-inflation $f$ and isomorphisms $\beta,\gamma$ in $\widetilde{\C}$.
Let us show the converse. Take any $\mathfrak{s}$-triangle $X\overset{f}{\longrightarrow}Y\overset{g}{\longrightarrow}Z\overset{\delta}{\dashrightarrow}$ and isomorphisms $\gamma\in\widetilde{\C}(A,X),\beta\in\widetilde{\C}(Y,B)$ in $\widetilde{\C}$. Express $\gamma^{-1}$ as $\gamma^{-1}=\overline{Q}(\overline{t})^{-1}\circ\overline{Q}(\overline{e})$, with some $\overline{e}\in\overline{\mathscr{C}}(X,X^{\prime})$ and $\overline{t}\in\overline{\mathscr{S}}(A,X^{\prime})$. Remark that $\overline{Q}(\overline{e})\in\operatorname{Iso}(\widetilde{\C})$ and $\gamma=\overline{Q}(\overline{e})^{-1}\circ\overline{Q}(\overline{t})$ hold. For $\mathfrak{s}(e_{\ast}\delta)=[X^{\prime}\overset{f^{\prime}}{\longrightarrow}Y^{\prime}\overset{g^{\prime}}{\longrightarrow}Z]$, we obtain a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-14,6)*+{X}="0";
(0,6)*+{Y}="2";
(14,6)*+{Z}="4";
(27,6)*+{}="6";
(-14,-6)*+{X^{\prime}}="10";
(0,-6)*+{Y^{\prime}}="12";
(14,-6)*+{Z}="14";
(27,-6)*+{}="16";
{\ar^{f} "0";"2"};
{\ar^{g} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{e} "0";"10"};
{\ar_{d} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{f^{\prime}} "10";"12"};
{\ar_{g^{\prime}} "12";"14"};
{\ar@{-->}_{e_{\ast}\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
for some $d\in\mathscr{C}(Y,Y^{\prime})$, by {\rm (C3)} for $(\mathscr{C},\mathbb{E},\mathfrak{s})$. Since $(Q,\mu)$ is an exact functor,
\[
\xy
(-16,6)*+{X}="0";
(0,6)*+{Y}="2";
(16,6)*+{Z}="4";
(30,6)*+{}="6";
(-16,-6)*+{X^{\prime}}="10";
(0,-6)*+{Y^{\prime}}="12";
(16,-6)*+{Z}="14";
(30,-6)*+{}="16";
{\ar^{Q(f)} "0";"2"};
{\ar^{Q(g)} "2";"4"};
{\ar@{-->}_(0.56){\mu_{Z,X}(\delta)} "4";"6"};
{\ar_{Q(e)} "0";"10"};
{\ar_{Q(d)} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{Q(f^{\prime})} "10";"12"};
{\ar_{Q(g^{\prime})} "12";"14"};
{\ar@{-->}_(0.56){\mu_{Z,X^{\prime}}(e_{\ast}\delta)} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
becomes a morphism of $\widetilde{\mathfrak{s}}$-triangles. In particular we have $Q(d)\in\operatorname{Iso}(\widetilde{\C})$ by Remark~\ref{RemWE}.
By the definition of $\widetilde{\mathfrak{s}}$, we have
$\widetilde{\mathfrak{s}}([\FR{\overline{\mathrm{id}}}{\overline{e_{\ast}\delta}}{\overline{t}}])=[A\overset{Q(f^{\prime}\circ t)}{\longrightarrow}Y^{\prime}\overset{Q(g^{\prime})}{\longrightarrow}Z]$.
Since $\lambda=\beta\circ Q(d)^{-1}$ makes
\[
\xy
(-16,0)*+{A}="0";
(4,0)*+{}="1";
(0,8)*+{Y^{\prime}}="2";
(0,-8)*+{B}="4";
(-4,0)*+{}="5";
(16,0)*+{Z}="6";
{\ar^{Q(f^{\prime}\circ t)} "0";"2"};
{\ar_{\beta\circ Q(f)\circ\gamma} "0";"4"};
{\ar^{Q(g^{\prime})} "2";"6"};
{\ar_{Q(g^{\prime})\circ\lambda^{-1}} "4";"6"};
{\ar^{\cong}_{\lambda} "2";"4"};
{\ar@{}|\circlearrowright "0";"1"};
{\ar@{}|\circlearrowright "5";"6"};
\endxy
\]
commutative in $\widetilde{\C}$, it follows that $\beta\circ Q(f)\circ\gamma$ is an $\widetilde{\mathfrak{s}}$-inflation.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmMultLoc} {\rm (3)}]
Suppose that $\overline{\mathscr{S}}$ moreover satisfies {\rm (MR4)}. Let $X_1\overset{\alpha_1}{\longrightarrow}X_2$ and $X_2\overset{\alpha_2}{\longrightarrow}X_3$ be $\widetilde{\mathfrak{s}}$-inflations. By Lemma~\ref{LemComposeInf}, for $i=1,2$ we may write $\alpha_i$ as $\alpha_i=\beta_i\circ Q(f_i)\circ\gamma_i$ with some $\beta_i,\gamma_i\in\operatorname{Iso}(\widetilde{\C})$ and an $\mathfrak{s}$-inflation $f_i$ in $\mathscr{C}$. It suffices to show that their composition $\alpha_2\circ\alpha_1$ is again of this form.
Express $(\gamma_2\circ\beta_1)^{-1}$ as $(\gamma_2\circ\beta_1)^{-1}=\overline{Q}(\overline{s})^{-1}\circ\overline{Q}(\overline{e})$ for some morphisms $\overline{e},\overline{s}$ in $\overline{\mathscr{C}}$ with $\overline{s}\in\overline{\mathscr{S}}$. We have $\overline{Q}(\overline{e})\in\operatorname{Iso}(\widetilde{\C})$ and $\gamma_2\circ\beta_1=\overline{Q}(\overline{e})^{-1}\circ\overline{Q}(\overline{s})$. Denote the domains and codomains of $f_2$ and $e$ by $A\overset{f_2}{\longrightarrow}B$ and $A\overset{e}{\longrightarrow}A^{\prime}$, respectively. Since $f_2$ is an $\mathfrak{s}$-inflation, there exists an $\mathfrak{s}$-triangle $A\overset{f_2}{\longrightarrow}B\overset{g}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$. Also to $e_{\ast}\delta$, another $\mathfrak{s}$-triangle $A^{\prime}\overset{f_2^{\prime}}{\longrightarrow}B^{\prime}\overset{g^{\prime}}{\longrightarrow}C\overset{e_{\ast}\delta}{\dashrightarrow}$ is associated. By {\rm (C3)} for $(\mathscr{C},\mathbb{E},\mathfrak{s})$, we obtain a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-14,6)*+{A}="0";
(0,6)*+{B}="2";
(14,6)*+{C}="4";
(27,6)*+{}="6";
(-14,-6)*+{A^{\prime}}="10";
(0,-6)*+{B^{\prime}}="12";
(14,-6)*+{C}="14";
(27,-6)*+{}="16";
{\ar^{f_2} "0";"2"};
{\ar^{g} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{e} "0";"10"};
{\ar_{d} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{f_2^{\prime}} "10";"12"};
{\ar_{g^{\prime}} "12";"14"};
{\ar@{-->}_{e_{\ast}\delta} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
for some $d\in\mathscr{C}(B,B^{\prime})$.
As in the proof of Lemma~\ref{LemComposeInf}, we have $Q(d)\in\operatorname{Iso}(\widetilde{\C})$. Applying {\rm (MR4)} to $f_2^{\prime}\circ s\circ f_1$, we obtain $\overline{f}_2^{\prime}\circ \overline{s}\circ \overline{f}_1=\mathbf{v}\circ\overline{x}\circ\mathbf{u}$ for some $\mathbf{u},\mathbf{v}\in\overline{\mathscr{S}}$ and an $\mathfrak{s}$-inflation $x$. Then we have
\begin{eqnarray*}
\alpha_2\circ\alpha_1
&=&
\big(\beta_2\circ\overline{Q}(\overline{f}_2)\circ\gamma_2\big)\circ\big(\beta_1\circ\overline{Q}(\overline{f}_1)\circ\gamma_1\big)\\
&=&
\big(\beta_2\circ Q(d)^{-1}\circ\overline{Q}(\mathbf{v})\big)\circ\overline{Q}(\overline{x})\circ\big(\overline{Q}(\mathbf{u})\circ\gamma_1\big).
\end{eqnarray*}
Since $\beta_2\circ Q(d)^{-1}\circ\overline{Q}(\mathbf{v})$ and $\overline{Q}(\mathbf{u})\circ\gamma_1$ belong to $\operatorname{Iso}(\widetilde{\C})$, this indeed shows that $\alpha_2\circ\alpha_1$ is an $\widetilde{\mathfrak{s}}$-inflation by Lemma~\ref{LemComposeInf}. Dually for $\widetilde{\mathfrak{s}}$-deflations.
\end{proof}
Thus our main theorem is proved. We conclude this subsection with some remarks and consequences of Theorem~\ref{ThmMultLoc}.
\begin{rem}
Suppose that $\overline{\mathscr{S}}$ satisfies {\rm (M0),(MR1),(MR2),(MR3)} and $p(\mathscr{S})=\overline{\mathscr{S}}$, as before. Lemma~\ref{LemComposeInf} shows that $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ becomes extriangulated if and only if $\overline{\mathscr{S}}$ satisfies the following {\rm (MR4$^{-}$)}, which is a weaker variant of {\rm (MR4)}.
\begin{itemize}
\item[{\rm (MR4$^{-}$)}] For any sequence of morphisms $X\overset{\mathbf{f}}{\longrightarrow}Y\overset{\mathbf{g}}{\longrightarrow}Z$ with $\mathbf{f},\mathbf{g}\in\overline{\mathcal{M}}_{\mathsf{inf}}$, there exist morphisms $\mathbf{a},\mathbf{b}$ in $\overline{\mathscr{C}}$ such that $\mathbf{a}\circ\mathbf{g}\circ\mathbf{f}\circ\mathbf{b}\in\overline{\mathcal{M}}_{\mathsf{inf}}$ and $\overline{Q}(\mathbf{a}),\overline{Q}(\mathbf{b})\in\operatorname{Iso}(\widetilde{\C})$.
Dually for $\overline{\mathcal{M}}_{\mathsf{def}}\subseteq\overline{\mathcal{M}}$.
\end{itemize}
\end{rem}
\begin{rem}\label{RemAdm}
Remark that if $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category, or to an exact category which is moreover abelian, then any morphism $f$ in $\mathscr{C}$ is $\mathfrak{s}$-\emph{admissible} in the sense that $f=m\circ e$ holds for an $\mathfrak{s}$-deflation $e$ and an $\mathfrak{s}$-inflation $m$.
Conversely, if $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category in which any morphism is $\mathfrak{s}$-admissible, then $\mathscr{C}$ is abelian.
By Lemma~\ref{LemComposeInf}, if any morphism in $\mathscr{C}$ is $\mathfrak{s}$-admissible, then any morphism in $\widetilde{\C}$ is $\widetilde{\mathfrak{s}}$-admissible. In particular if $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ corresponds to an exact category and any morphism in $\mathscr{C}$ is $\mathfrak{s}$-admissible, then $\widetilde{\C}$ is an abelian category.
\end{rem}
In the following way, extriangulated categories obtained by ideal quotients can also be seen as a particular case of this localization.
\begin{rem}\label{Rem_IdealQuotLoc}
Let $\mathcal{I}\subseteq\mathscr{C}$ be any full additive subcategory closed by isomorphisms and direct summands, whose objects are both projective and injective. Let $p\colon \mathscr{C}\to \mathscr{C}/[\mathcal{I}]$ denote the ideal quotient. If we put $\mathscr{S}=p^{-1}(\operatorname{Iso}(\mathscr{C}/[\mathcal{I}]))$, then it is easy to see that the ideal $[\mathcal{N}_\mathscr{S}]$ coincides with $[\mathcal{I}]$, hence we obtain $\overline{\mathscr{C}}=\mathscr{C}/[\mathcal{I}]$.
As shown in \cite[Proposition 3.30]{NP}, it has a natural extriangulated structure $(\overline{\mathscr{C}},\overline{\mathbb{E}},\overline{\mathfrak{s}})$, given by $\overline{\mathbb{E}}(C,A)=\mathbb{E}(C,A)$ for any $A,C\in\mathscr{C}$ and $\overline{\mathfrak{s}}(\delta)=[A\overset{\overline{x}}{\longrightarrow}B\overset{\overline{y}}{\longrightarrow}C]$ for any $\delta\in\mathbb{E}(C,A)$ with $\mathfrak{s}(\delta)=[A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C]$.
It is obvious that $\phi=\{\phi_{C,A}=\mathrm{id}\colon\mathbb{E}(C,A)\to\overline{\mathbb{E}}(C,A)\}_{C,A\in\mathscr{C}}$ gives an exact functor $(p,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\overline{\mathscr{C}},\overline{\mathbb{E}},\overline{\mathfrak{s}})$.
Since $\overline{\mathscr{S}}=\operatorname{Iso}(\overline{\mathscr{C}})$ trivially satisfies {\rm (MR2)}, we see that Corollary~\ref{CorOfThm} can be applied to $(p,\phi)$.
We remark that $\overline{\mathscr{S}}$ also satisfies {\rm (MR4)}. Indeed, let $A\overset{f}{\longrightarrow}B\overset{s}{\longrightarrow}X\overset{x}{\longrightarrow}Y$ be a sequence of morphisms in $\mathscr{C}$ in which $f,x$ are $\mathfrak{s}$-inflations and $s$ belongs to $\mathscr{S}$. Then there exists $t\in\mathscr{S}(X,B)$ such that $\overline{t}^{-1}=\overline{s}$, and by {\rm (M3)} we obtain a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{X}="0";
(0,6)*+{Y}="2";
(12,6)*+{Z}="4";
(24,6)*+{}="6";
(-12,-6)*+{B}="10";
(0,-6)*+{Y^{\prime}}="12";
(12,-6)*+{Z}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{} "2";"4"};
{\ar@{-->}^{} "4";"6"};
{\ar_{t} "0";"10"};
{\ar_{u} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{} "12";"14"};
{\ar@{-->}_{} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
for some $u\in\mathscr{S}(Y,Y^{\prime})$. This gives $\overline{x\circ s\circ f}=\overline{u}^{-1}\circ\overline{x^{\prime}\circ t\circ s \circ f}=\overline{u}^{-1}\circ\overline{x^{\prime}\circ f}$. Then $x^{\prime}\circ f$ is an $\mathfrak{s}$-inflation by {\rm (C4)} for $(\mathscr{C},\mathbb{E},\mathfrak{s})$. Since $\mathscr{S}$ is closed under compositions, this shows that $\overline{\mathcal{M}}_{\mathsf{inf}}\subseteq\overline{\mathcal{M}}$ is closed by compositions. Dually for $\overline{\mathcal{M}}_{\mathsf{def}}$.
Thus $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ obtained in Corollary~\ref{CorOfThm} is extriangulated.
As the proof of Theorem~\ref{ThmMultLoc} suggests, localization by $\mathscr{S}$ does not change $(\overline{\mathscr{C}},\overline{\mathbb{E}},\overline{\mathfrak{s}})$ essentially, and there is an equivalence of extriangulated categories $(\overline{\mathscr{C}},\overline{\mathbb{E}},\overline{\mathfrak{s}})\overset{\simeq}{\longrightarrow}(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ in the sense of {\rm (2)} in Proposition~\ref{PropExEq}, in an obvious way.
\end{rem}
The following is a corollary of Theorem~\ref{ThmMultLoc}, which gives a condition for the resulting localization to correspond to an exact category. This will be used in Subsection~\ref{Subsection_Percolating}.
\begin{cor}\label{CorLocExact}
Let $\mathscr{S}\subseteq\mathcal{M}$ and $\mathcal{N}_{\mathscr{S}}$ be as in Theorem~\ref{ThmMultLoc} {\rm (3)}. Assume moreover that the following conditions {\rm (i),(ii)} and their duals are satisfied.
Then the resulting localization $(\widetilde{\C},\widetilde{\Ebb},\widetilde{\mathfrak{s}})$ corresponds to an exact category.
Moreover, if any morphism in $\mathscr{C}$ is $\mathfrak{s}$-admissible, then $\widetilde{\C}$ is an abelian category.
\begin{itemize}
\item[{\rm (i)}] $\operatorname{Ker}\big(\mathscr{C}(X,A)\overset{x\circ-}{\longrightarrow}\mathscr{C}(X,B)\big)\subseteq[\mathcal{N}_{\mathscr{S}}](X,A)$ holds for any $X\in\mathscr{C}$ and any $\mathfrak{s}$-inflation $x\in\mathscr{C}(A,B)$.
\item[{\rm (ii)}] For any $N\in\mathcal{N}_{\mathscr{S}}$ and any morphism $f\in\mathscr{C}(A,N)$, there exists a morphism $s\in\mathscr{C}(A^{\prime},A)$ such that $f\circ s=0$ and $Q(s)\in\operatorname{Iso}(\widetilde{\C})$.
\end{itemize}
\end{cor}
\begin{proof}
By duality, it is enough to show that any $\widetilde{\mathfrak{s}}$-inflation is monic in $\widetilde{\C}$. By Lemma~\ref{LemComposeInf}, it suffices to show that $Q(x)$ is monic in $\widetilde{\C}$ for any $\mathfrak{s}$-inflation $x\in\mathscr{C}(A,B)$.
Since any morphism $\alpha\in\widetilde{\C}(X,A)$ can be written as $\alpha=\overline{Q}(\overline{f})\circ\overline{Q}(\overline{u})^{-1}$ for some $\overline{f}\in\overline{\mathscr{C}}(X^{\prime},A)$ and $\overline{u}\in\overline{\mathscr{S}}(X^{\prime},X)$, it suffices to show that $Q(x\circ f)=0$ implies $Q(f)=0$ for any $f\in\mathscr{C}(X^{\prime},A)$.
By {\rm (MR2)}, we see that $Q(x\circ f)=0$ is equivalent to the existence of $\overline{v}\in\overline{\mathscr{S}}(Y,X^{\prime})$ such that $\overline{x}\circ\overline{f}\circ\overline{v}=0$. Thus $j\circ i=x\circ f\circ v$ holds for some $N\in\mathcal{N}_{\mathscr{S}}$ and $i\in\mathscr{C}(Y,N)$, $j\in\mathscr{C}(N,B)$.
By {\rm (ii)},
there exists $s\in\mathscr{C}(Y^{\prime},Y)$ which satisfies $i\circ s=0$ and $Q(s)\in\operatorname{Iso}(\widetilde{\C})$. We have $x\circ f\circ v\circ s=j\circ i\circ s=0$, hence $\overline{f}\circ \overline{v}\circ \overline{s}=0$ holds by {\rm (i)}. This implies $Q(f)\circ Q(v)\circ Q(s)=0$, hence $Q(f)=0$ since $Q(v),Q(s)\in\operatorname{Iso}(\widetilde{\C})$.
The last assertion is immediate from Remark~\ref{RemAdm}.
\end{proof}
\begin{rem}\label{RemLocExact}
In Corollary~\ref{CorLocExact}, we see that the following holds.
\begin{enumerate}
\item Condition {\rm (i)} and its dual are trivially satisfied if $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category.
\item Condition {\rm (ii)} and its dual are satisfied if $\mathscr{C}$ is abelian (or similarly for $(\mathscr{C},\mathbb{E},\mathfrak{s})$ in which any morphism is $\mathfrak{s}$-admissible) and if $\mathscr{S}$ is the multiplicative system associated to a Serre subcategory $\mathcal{N}\subseteq\mathscr{C}$.
\end{enumerate}
\end{rem}
\section{Relation to known constructions} \label{Section_Examples}
\subsection{Thick subcategories}
Before listing the known constructions which we are going to deal with, let us introduce the notion of a thick subcategory in an extriangulated category, which serves to control the corresponding $\mathscr{S}$.
\begin{dfn}\label{DefThick}
A full additive subcategory $\mathcal{N}\subseteq\mathscr{C}$ is called a \emph{thick} subcategory if it satisfies the following conditions.
\begin{itemize}
\item[{\rm (i)}] $\mathcal{N}\subseteq\mathscr{C}$ is closed by isomorphisms and direct summands.
\item[{\rm (ii)}] $\mathcal{N}$ satisfies $2$-out-of-$3$ for $\mathfrak{s}$-conflations. Namely, if any two of the objects $A,B,C$ in an $\mathfrak{s}$-conflation $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C$ belong to $\mathcal{N}$, then so does the third.
\end{itemize}
\end{dfn}
\begin{rem}
The following is obvious from the definition.
\begin{enumerate}
\item A thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is extension-closed, hence is an extriangulated category.
\item If $(F,\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\mathscr{D},\mathbb{F},\mathfrak{t})$ is an exact functor to a weakly extriangulated category $\mathscr{D}$, then $\operatorname{Ker} F\subseteq\mathscr{C}$ is a thick subcategory.
\end{enumerate}
\end{rem}
\begin{dfn}\label{DefMorphClassesLR}
To a thick subcategory $\mathcal{N}\subseteq\mathscr{C}$, we associate the following classes of morphisms.
\begin{eqnarray*}
\mathcal{L}&=&\{ f\in\mathcal{M}\mid f\ \text{is an}\ \mathfrak{s}\text{-inflation with}\ \operatorname{Cone}(f)\in\mathcal{N}\}.\\
\mathcal{R}&=&\{ f\in\mathcal{M}\mid f\ \text{is an}\ \mathfrak{s}\text{-deflation with}\ \operatorname{CoCone}(f)\in\mathcal{N}\}.
\end{eqnarray*}
Define $\mathscr{S}_{\mathcal{N}}\subseteq\mathcal{M}$ to be the smallest subset closed by compositions containing both $\mathcal{L}$ and $\mathcal{R}$. It is obvious that $\mathscr{S}_{\mathcal{N}}$ satisfies condition {\rm (M0)} in Section~\ref{Section_Localization}.
\end{dfn}
\begin{rem}\label{RemSN}
$\mathscr{S}_{\mathcal{N}}$ coincides with the set of all finite compositions of morphisms in $\mathcal{L}$ and $\mathcal{R}$.
\end{rem}
\begin{lem}\label{LemNSN}
$\mathcal{N}_{\mathscr{S}_{\mathcal{N}}}=\mathcal{N}$ holds.
\end{lem}
\begin{proof}
Clearly, $\mathcal{N}\subseteq\mathcal{N}_{\mathscr{S}_{\mathcal{N}}}$ holds. We show that the converse inclusion holds. Let $X$ be an arbitrary object in $\mathcal{N}_{\mathscr{S}_{\mathcal{N}}}$. Then the morphism $0\to X$ belongs to $\mathscr{S}_{\mathcal{N}}$. By Remark~\ref{RemSN}, it suffices to show the following claim:
For any morphism $f\colon N\to Y$ with $N\in\mathcal{N}$, if $f$ is contained in either $\mathcal{L}$ or $\mathcal{R}$, then $Y$ is contained in $\mathcal{N}$. This follows immediately from the fact that $\mathcal{N}$ is a thick subcategory.
\end{proof}
\begin{lem}\label{LemM3}
$\mathscr{S}_{\mathcal{N}}$ satisfies {\rm (M3)} in Corollary~\ref{CorMultLoc}.
\end{lem}
\begin{proof}
Let $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$ and $A^{\prime}\overset{x^{\prime}}{\longrightarrow}B^{\prime}\overset{y^{\prime}}{\longrightarrow}C^{\prime}\overset{\delta^{\prime}}{\dashrightarrow}$ be any pair of $\mathfrak{s}$-triangles, and suppose that $a\in\mathscr{S}_{\mathcal{N}}(A,A^{\prime}),c\in\mathscr{S}_{\mathcal{N}}(C,C^{\prime})$ satisfy $a_{\ast}\delta=c^{\ast}\delta^{\prime}$. It suffices to show the existence of $b\in\mathscr{S}_{\mathcal{N}}$ which gives a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{A}="0";
(0,6)*+{B}="2";
(12,6)*+{C}="4";
(24,6)*+{}="6";
(-12,-6)*+{A^{\prime}}="10";
(0,-6)*+{B^{\prime}}="12";
(12,-6)*+{C^{\prime}}="14";
(24,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{a} "0";"10"};
{\ar_{b} "2";"12"};
{\ar^{c} "4";"14"};
{\ar_{x^{\prime}} "10";"12"};
{\ar_{y^{\prime}} "12";"14"};
{\ar@{-->}_{\delta^{\prime}} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy.
\]
We may assume that either $c$ or $a$ is equal to the identity map. Suppose $c=\mathrm{id}$. By Remark~\ref{RemSN}, we may assume that $a$ belongs to either $\mathcal{R}$ or $\mathcal{L}$. Then the assertion follows from {\rm (ET4)} and \cite[Proposition 3.15]{NP}, respectively.
\end{proof}
We prepare some lemmas which will be used in the following subsections.
\begin{lem}\label{LemThickFirstProperties}
The following holds for any thick subcategory $\mathcal{N}\subseteq\mathscr{C}$.
\begin{enumerate}
\item $\mathcal{L},\mathcal{R}\subseteq\mathcal{M}$ contain all isomorphisms, and are closed by compositions in $\mathcal{M}$.
\item For any $l\in\mathcal{L}$, the morphism $\overline{l}$ is epimorphic in $\overline{\mathscr{C}}$. Dually, for any $r\in\mathcal{R}$, the morphism $\overline{r}$ is monomorphic in $\overline{\mathscr{C}}$.
\item For any $X\overset{l}{\longleftarrow}X^{\prime}\overset{g}{\longrightarrow}Y$ with $l\in\mathcal{L}$, there exists a commutative square
\[
\xy
(-6,6)*+{X^{\prime}}="0";
(6,6)*+{Y}="2";
(-6,-6)*+{X}="4";
(6,-6)*+{Y^{\prime}}="6";
{\ar^{g} "0";"2"};
{\ar_{l} "0";"4"};
{\ar^{l^{\prime}} "2";"6"};
{\ar_{g^{\prime}} "4";"6"};
{\ar@{}|\circlearrowright "0";"6"};
\endxy
\]
in $\mathscr{C}$ such that $l^{\prime}\in\mathcal{L}$. Moreover if $g\in\mathcal{R}$, then it can be chosen to satisfy $l^{\prime}\in\mathcal{L}$ and $g^{\prime}\in\mathcal{R}$.
Dually, for any $X\overset{f}{\longrightarrow}Y\overset{r}{\longleftarrow}Y^{\prime}$ with $r\in\mathcal{R}$, there exists a commutative square
\[
\xy
(-6,6)*+{X^{\prime}}="0";
(6,6)*+{Y^{\prime}}="2";
(-6,-6)*+{X}="4";
(6,-6)*+{Y}="6";
{\ar^{f^{\prime}} "0";"2"};
{\ar_{r^{\prime}} "0";"4"};
{\ar^{r} "2";"6"};
{\ar_{f} "4";"6"};
{\ar@{}|\circlearrowright "0";"6"};
\endxy
\]
in $\mathscr{C}$ such that $r^{\prime}\in\mathcal{R}$. Moreover if $f\in\mathcal{L}$, then it can be chosen to satisfy $r^{\prime}\in\mathcal{R}$ and $f^{\prime}\in\mathcal{L}$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) Since $\mathcal{N}\subseteq\mathscr{C}$ is extension-closed, this is straightforward.
(2) Let $X^{\prime}\overset{l}{\longrightarrow}X$ be a morphism in $\mathcal{L}$, and suppose that $\overline{f}\circ\overline{l}=0$ holds for $X\overset{f}{\longrightarrow}Y$.
The composed morphism $f\circ l$ can be factored as $X^{\prime}\overset{a}{\longrightarrow}N\overset{}{\longrightarrow}Y$ with $N\in\mathcal{N}$.
Condition (C3) yields a morphism of $\mathfrak{s}$-triangles
\[
\xy
(-12,6)*+{X^{\prime}}="0";
(0,6)*+{X}="2";
(12,6)*+{N^{\prime}}="3";
(24,6)*+{}="p";
(-12,-6)*+{N}="4";
(0,-6)*+{X^{\prime\prime}}="6";
(12,-6)*+{N^{\prime}}="7";
(24,-6)*+{}="q";
{\ar^{l} "0";"2"};
{\ar "2";"3"};
{\ar_{a} "0";"4"};
{\ar^{a^{\prime}} "2";"6"};
{\ar_{i} "4";"6"};
{\ar "6";"7"};
{\ar@{}|\circlearrowright "0";"6"};
{\ar@{}|\circlearrowright "2";"7"};
{\ar@{=} "3";"7"};
{\ar@{-->} "3";"p"};
{\ar@{-->} "7";"q"};
\endxy
\]
which gives an $\mathfrak{s}$-conflation $X^{\prime}\overset{\left[\begin{smallmatrix} l\\ a\end{smallmatrix}\right]}{\longrightarrow}X\oplus N\overset{[a^{\prime}\ -i]}{\longrightarrow}X^{\prime\prime}$.
Note that $X^{\prime\prime}\in\mathcal{N}$ follows from $N,N^{\prime}\in\mathcal{N}$. Since $f$ factors through $X^{\prime\prime}$, this shows $\overline{f}=0$. Thus the former assertion is shown, and dually for the latter.
(3) We only check the former assertion. Dually for the latter.
Due to {\rm (C3)}, the desired square exists.
If $g\in\mathcal{R}$, by using {\rm (ET4)}, we can choose it so that $l^{\prime}\in\mathcal{L}$ and $g^{\prime}\in\mathcal{R}$.
\end{proof}
\subsection{Typical cases}
Examples we have in mind are the following.
\begin{ex}\label{ExVerdier}
{\rm (}\emph{Verdier quotient.}{\rm )} Suppose that $\mathscr{C}$ is a triangulated category. If we regard it as an extriangulated category, then Definition~\ref{DefThick} agrees with the usual definition of a thick subcategory.
In this case we have $\mathscr{S}_{\mathcal{N}}=\mathcal{L}=\mathcal{R}$ and the localization of $\mathscr{C}$ by $\mathscr{S}_{\mathcal{N}}$ naturally becomes triangulated.
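To see why $\mathcal{L}=\mathcal{R}$ here, recall that in a triangulated category every morphism $f$ is both an $\mathfrak{s}$-inflation and an $\mathfrak{s}$-deflation, and its cone and cocone are related by
\[ \operatorname{CoCone}(f)\cong\Sigma^{-1}\operatorname{Cone}(f). \]
Since a thick subcategory in the sense of Definition~\ref{DefThick} is closed under $\Sigma^{\pm1}$ (apply $2$-out-of-$3$ to the $\mathfrak{s}$-conflations $X\to0\to\Sigma X$ and $\Sigma^{-1}X\to0\to X$), it follows that $\operatorname{Cone}(f)\in\mathcal{N}$ if and only if $\operatorname{CoCone}(f)\in\mathcal{N}$.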
\end{ex}
\begin{ex}\label{ExSerreAbel}
{\rm (}\emph{Serre quotient.}{\rm )} Suppose that $\mathscr{C}$ is an abelian category, which we may regard as an extriangulated category. Then any Serre subcategory of $\mathscr{C}$ is thick in the sense of Definition~\ref{DefThick}.
In this case we have $\mathscr{S}_{\mathcal{N}}=\mathcal{L}\circ\mathcal{R}$, and the localization of $\mathscr{C}$ by $\mathscr{S}_{\mathcal{N}}$ becomes abelian.
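For instance, any morphism $f\colon A\to B$ whose kernel and cokernel belong to $\mathcal{N}$ lies in $\mathcal{L}\circ\mathcal{R}$: factoring $f$ through its image as
\[ A\overset{r}{\longrightarrow}\operatorname{Im}f\overset{l}{\longrightarrow}B, \]
the epimorphism $r$ is an $\mathfrak{s}$-deflation with $\operatorname{CoCone}(r)=\operatorname{Ker}f\in\mathcal{N}$, and the monomorphism $l$ is an $\mathfrak{s}$-inflation with $\operatorname{Cone}(l)=\operatorname{Coker}f\in\mathcal{N}$, so that $r\in\mathcal{R}$, $l\in\mathcal{L}$ and $f=l\circ r$.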
\end{ex}
\begin{ex}\label{ExTwo-sidedExact}
Suppose that $\mathscr{C}$ is an exact category, and that $\mathcal{N}\subseteq\mathscr{C}$ is a full additive subcategory closed by isomorphisms.
As in \cite[Definition~2.4]{HKR}, it is said to be a \emph{two-sided admissibly percolating subcategory} if it satisfies the following conditions.
\begin{itemize}
\item[{\rm (i)}] For any conflation $A\to B\to C$, we have $B\in\mathcal{N}$ if and only if $A,C\in\mathcal{N}$.
\item[{\rm (ii)}] For any morphism $f\colon X\to A$ with $A\in\mathcal{N}$, there is a factorization of $f$ as $X\overset{g}{\to}A^{\prime}\overset{h}{\to}A$ with a deflation $g$ and an inflation $h$.
\item[{\rm (iii)}] Dual of (ii).
\end{itemize}
In this case, we have $\mathscr{S}_{\mathcal{N}}=\mathcal{L}\circ\mathcal{R}$ and the localization of $\mathscr{C}$ by $\mathscr{S}_{\mathcal{N}}$ becomes an exact category such that the localization functor is an exact functor.
\end{ex}
\begin{rem}
As for localizations of exact categories, there is an earlier result by C\'{a}rdenas-Escudero \cite{C-E}.
As stated in \cite[Theorem 8.1]{HR}, this can be treated as a particular case of Example~\ref{ExTwo-sidedExact}.
\end{rem}
\begin{ex}\label{ExRump}
Suppose that $\mathscr{C}$ is an exact category, and that $\mathcal{N}\subseteq\mathscr{C}$ is a full additive subcategory closed by isomorphisms.
As in \cite[Definition~7]{R}, it is said to be \emph{biresolving} if it satisfies the following conditions.
\begin{itemize}
\item[{\rm (i)}] For any $C\in\mathscr{C}$, there is an inflation $C\to N$ and a deflation $N^{\prime}\to C$ for some $N,N^{\prime}\in\mathcal{N}$.
\item[{\rm (ii)}] $\mathcal{N}$ satisfies $2$-out-of-$3$ for conflations.
\end{itemize}
In particular, any biresolving subcategory $\mathcal{N}\subseteq\mathscr{C}$ closed by direct summands is a thick subcategory in the sense of Definition~\ref{DefThick}.
In \cite[Proposition~12]{R}, it is shown that for any biresolving subcategory of an exact category, the localization of $\overline{\mathscr{C}}=\mathscr{C}/[\mathcal{N}]$ by the morphisms which are both monomorphic and epimorphic has a natural structure of a triangulated category.
\end{ex}
\begin{rem}\label{RemRump}
Since replacing $\mathcal{N}$ by $\operatorname{add}\mathcal{N}$ does not affect the resulting localization, in this article we only deal with the case where $\mathcal{N}\subseteq\mathscr{C}$ is thick. Here $\operatorname{add}\mathcal{N}\subseteq\mathscr{C}$ denotes the smallest additive full subcategory containing $\mathcal{N}$, closed by direct summands.
\end{rem}
\begin{ex}\label{ExHTCP}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be an extriangulated category. As defined in \cite[Definition~5.1]{NP}, a \emph{Hovey twin cotorsion pair} is a quartet $((\mathcal{S},\mathcal{T}),(\mathcal{U},\mathcal{V}))$ of full subcategories closed by isomorphisms and direct summands satisfying
\begin{itemize}
\item[{\rm (i)}] $(\mathcal{S},\mathcal{T})$ and $(\mathcal{U},\mathcal{V})$ are cotorsion pairs on $\mathscr{C}$.
\item[{\rm (ii)}] $\mathbb{E}(\mathcal{S},\mathcal{V})=0$.
\item[{\rm (iii)}] $\operatorname{Cone}(\mathcal{V},\mathcal{S})=\operatorname{CoCone}(\mathcal{V},\mathcal{S})$.
\end{itemize}
If $((\mathcal{S},\mathcal{T}),(\mathcal{U},\mathcal{V}))$ is a Hovey twin cotorsion pair, then $\mathcal{N}=\operatorname{Cone}(\mathcal{V},\mathcal{S})$ satisfies $2$-out-of-$3$ for $\mathfrak{s}$-conflations by \cite[Proposition~5.3]{NP}.
It is also immediate that $\mathcal{N}\subseteq\mathscr{C}$ is closed by direct summands. Indeed if $X\oplus Y\in\mathcal{N}$, then for $\mathfrak{s}$-conflations $T\to S\to X$ and $T^{\prime}\to S^{\prime}\to Y$ satisfying $S,S^{\prime}\in\mathcal{S}$ and $T,T^{\prime}\in\mathcal{T}$, we see that the $\mathfrak{s}$-conflation $T\oplus T^{\prime}\to S\oplus S^{\prime} \to X\oplus Y$ should satisfy $T\oplus T^{\prime}\in\mathcal{N}\cap\mathcal{T}=\mathcal{V}$ by the $2$-out-of-$3$ property. This implies $T,T^{\prime}\in\mathcal{V}$, hence $X,Y\in\mathcal{N}$.
Thus $\mathcal{N}\subseteq\mathscr{C}$ is a thick subcategory.
If moreover $(\mathscr{C},\mathbb{E},\mathfrak{s})$ satisfies condition {\rm (WIC)}, then the following holds by \cite[Corollaries~5.12, 5.22]{NP}.
\begin{itemize}
\item $\mathit{wCof}\subseteq\mathcal{L}$ and $\mathit{wFib}\subseteq\mathcal{R}$ hold in the notation of \cite[Definition~5.11]{NP},
\item $\mathbb{W}=\mathit{wFib}\circ\mathit{wCof}\subseteq\mathcal{M}$ satisfies $\mathcal{L},\mathcal{R}\subseteq\mathbb{W}$, and $2$-out-of-$3$ with respect to the composition in $\mathcal{M}$,
\end{itemize}
hence in particular we have $\mathscr{S}_{\mathcal{N}}=\mathbb{W}=\mathcal{R}\circ\mathcal{L}$. The localization of $\mathscr{C}$ by $\mathscr{S}_{\mathcal{N}}$ becomes naturally triangulated, as shown in \cite[Theorem~6.20]{NP}.
\end{ex}
The rest of this article is devoted to demonstrating how the examples listed above can be seen as particular cases of the localization given in the previous Section~\ref{Section_Localization}, by showing that the assumption of Theorem~\ref{ThmMultLoc} is indeed satisfied. More precisely, we roughly divide them into the following two cases.
\begin{itemize}
\item[{\rm (A)}] Localizations obtained in Examples~\ref{ExVerdier}, \ref{ExRump}, \ref{ExHTCP}.
\item[{\rm (B)}] Localizations obtained in Examples~\ref{ExVerdier}, \ref{ExSerreAbel}, \ref{ExTwo-sidedExact}.
\end{itemize}
In fact, this division results from particular properties of the thick subcategories. Remark that Example~\ref{ExVerdier} belongs to both cases.
Subsection~\ref{Subsection_LocTri} deals with case {\rm (A)}. We will show that the assumption of Theorem~\ref{ThmMultLoc} is satisfied for any thick subcategory which is \emph{biresolving} (Definition~\ref{Def_BiResol}), and that the localization $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ obtained by the theorem corresponds to a triangulated category. Subsection~\ref{Subsection_Percolating} deals with case {\rm (B)}. We will show that the assumption of Corollary~\ref{CorMultLoc} is satisfied for any thick subcategory which is \emph{percolating} (Definition~\ref{Def_Percolating}). Under an additional assumption which fits well with percolating subcategories, the localization $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ obtained by the corollary can also be made to correspond to an exact category.
\begin{rem}
For Examples~\ref{ExVerdier} and \ref{ExSerreAbel}, it is also easy to check directly that they satisfy the assumption of Corollary~\ref{CorMultLoc}.
On the other hand, this is not always satisfied in case {\rm (A)}. This is the reason why we need the generality of Theorem~\ref{ThmMultLoc}.
\end{rem}
\subsection{Case {\rm (A)}: Triangulated localization by biresolving subcategories}\label{Subsection_LocTri}
We observe that in each of Examples~\ref{ExRump}, \ref{ExHTCP} (and also \ref{ExVerdier}), the corresponding thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is of the following type.
\begin{dfn}\label{Def_BiResol}
A thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is called \emph{biresolving}, if for any $C\in\mathscr{C}$ there exist an $\mathfrak{s}$-inflation $C\to N$ and an $\mathfrak{s}$-deflation $N^{\prime}\to C$ for some $N,N^{\prime}\in\mathcal{N}$.
\end{dfn}
This definition obviously covers the above-mentioned examples. Let us summarize them here for clarity, together with some other trivial cases.
\begin{ex}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be an extriangulated category, as before.
\begin{enumerate}
\item $\mathscr{C}$ itself is always biresolving in $(\mathscr{C},\mathbb{E},\mathfrak{s})$.
\item The thick full subcategory of zero objects in $\mathscr{C}$ is biresolving if and only if $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category.
\item If $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category, then any thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is biresolving.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category, and that $\mathcal{N}\subseteq\mathscr{C}$ is a thick subcategory. Then $\mathcal{N}\subseteq\mathscr{C}$ is biresolving in the sense of Definition~\ref{Def_BiResol} if and only if it is biresolving in the sense of \cite[Definition~7]{R}. (See also Remark~\ref{RemRump}.)
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ satisfies {\rm (WIC)}. If $((\mathcal{S},\mathcal{T}),(\mathcal{U},\mathcal{V}))$ is a Hovey twin cotorsion pair, then $\mathcal{N}=\operatorname{Cone}(\mathcal{V},\mathcal{S})\subseteq\mathscr{C}$ is a biresolving thick subcategory as seen in Example~\ref{ExHTCP}.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ is a Frobenius extriangulated category in the sense of \cite[Definition~7.1]{NP}, and let $\mathcal{N}\subseteq\mathscr{C}$ be the full subcategory of projective-injective objects. Then $\mathcal{N}\subseteq\mathscr{C}$ is a biresolving thick subcategory. (See also \cite[Remark~7.3]{NP} for the relation with the above {\rm (5)}.)
\end{enumerate}
\end{ex}
\begin{rem}
As shown in \cite[Theorem~3.13]{ZZ} (cf. \cite[Corollary~7.4 and Remark~7.5]{NP}), the ideal quotient $(\overline{\mathscr{C}},\overline{\mathbb{E}},\overline{\mathfrak{s}})$ in {\rm (6)} corresponds to a triangulated category. By Remark~\ref{Rem_IdealQuotLoc}, so does $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$.
\end{rem}
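Before proceeding, let us record a concrete instance of {\rm (6)}, only as an illustration and not needed in the sequel. Let $\Lambda$ be a finite-dimensional self-injective algebra over a field and let $\mathscr{C}=\operatorname{mod}\Lambda$ be equipped with the usual exact structure; then projective and injective objects coincide, so $\mathscr{C}$ is Frobenius and $\mathcal{N}=\operatorname{proj}\Lambda$ is a biresolving thick subcategory. Any $\mathfrak{s}$-conflation $X\to Y\to N$ with $N\in\mathcal{N}$ splits because $N$ is projective, and dually for conflations starting at an object of $\mathcal{N}$, so the morphisms in $\mathscr{S}_{\mathcal{N}}$ become invertible already in the ideal quotient. Accordingly, the localization should be identified with the stable module category
\[ \widetilde{\mathscr{C}}\ \simeq\ \underline{\operatorname{mod}}\,\Lambda, \]
which is triangulated by Happel's theorem, in accordance with the remark above.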
In the rest of this subsection, let
$\mathcal{N}\subseteq\mathscr{C}$ denote a biresolving thick subcategory. The aim of this subsection is to show Proposition~\ref{PropSatisfy} and Corollary~\ref{CorLocTri}.
Let us start with some lemmas.
\begin{lem}\label{LemAllInf}
Let $f\in\mathscr{C}(A,B)$ be any morphism.
\begin{enumerate}
\item By taking an $\mathfrak{s}$-inflation $i\in\mathscr{C}(A,N)$ to $N\in\mathcal{N}$, the morphism $f$ can be written as a composition of
an $\mathfrak{s}$-inflation $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathscr{C}(A,B\oplus N)$ and a split epimorphism $r=[1\ 0]\in\mathscr{C}(B\oplus N,B)$. Remark that we have $r\in\mathcal{R}$.
\item By taking an $\mathfrak{s}$-deflation $j\in\mathscr{C}(N^{\prime},B)$ from $N^{\prime}\in\mathcal{N}$, the morphism $f$ can be written as a composition of
an $\mathfrak{s}$-deflation $[f\ j]\in\mathscr{C}(A\oplus N^{\prime},B)$ and a split monomorphism $l=\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\in\mathscr{C}(A,A\oplus N^{\prime})$. Remark that we have $l\in\mathcal{L}$.
\end{enumerate}
\end{lem}
\begin{proof}
This is obvious by \cite[Corollary 3.16]{NP}.
\end{proof}
\begin{lem}\label{LemRL}
$\mathscr{S}_{\mathcal{N}}=\mathcal{R}\circ\mathcal{L}$ holds.
\end{lem}
\begin{proof}
It suffices to show $\mathcal{L}\circ\mathcal{R}\subseteq\mathcal{R}\circ\mathcal{L}$.
For any pair of $\mathfrak{s}$-conflations $N_1\overset{m}{\longrightarrow}X\overset{r}{\longrightarrow}Y$ and $Y\overset{l}{\longrightarrow}Z\overset{e}{\longrightarrow}N_2$ with $N_1,N_2\in\mathcal{N}$, let us show that $f=l\circ r$ belongs to $\mathcal{R}\circ\mathcal{L}$.
Take an $\mathfrak{s}$-inflation $i\in\mathscr{C}(X,N)$ to some $N\in\mathcal{N}$. As in Lemma~\ref{LemAllInf}, we obtain $\mathfrak{s}$-conflations $X\overset{\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]}{\longrightarrow}Z\oplus N\overset{g}{\longrightarrow}Z^{\prime}$ and $X\overset{\left[\begin{smallmatrix} r\\ i\end{smallmatrix}\right]}{\longrightarrow}Y\oplus N\overset{h}{\longrightarrow}Y^{\prime}$, and factorize $f$ as below.
\[
\xy
(-16,14)*+{N_1}="0";
(0,14)*+{N}="2";
(-16,0)*+{X}="10";
(0,0)*+{Z\oplus N}="12";
(16,0)*+{Z^{\prime}}="14";
(-16,-14)*+{Y}="20";
(0,-14)*+{Z}="22";
(16,-14)*+{N_2}="24";
{\ar_{m} "0";"10"};
{\ar^{\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]} "2";"12"};
{\ar^(0.4){\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]} "10";"12"};
{\ar_(0.56){g} "12";"14"};
{\ar_{r} "10";"20"};
{\ar^{[1\ 0]} "12";"22"};
{\ar_{l} "20";"22"};
{\ar_{e} "22";"24"};
{\ar@{}|\circrclearrowright "10";"22"};
\endxy
\]
Here, the rows and columns are $\mathfrak{s}$-conflations.
By \cite[Lemma~3.14]{NP}, there exist morphisms $d,d^{\prime}$ which make the following diagram commutative,
\[
\xy
(-16,7)*+{X}="0";
(0,7)*+{Y\oplus N}="2";
(16,7)*+{Y^{\prime}}="4";
(-16,-7)*+{X}="10";
(0,-7)*+{Z\oplus N}="12";
(16,-7)*+{Z^{\prime}}="14";
(0,-21)*+{N_2}="22";
(16,-21)*+{N_2}="24";
{\ar^(0.4){\left[\begin{smallmatrix} r\\ i\end{smallmatrix}\right]} "0";"2"};
{\ar^(0.54){h} "2";"4"};
{\ar@{=} "0";"10"};
{\ar_{l\oplus \mathrm{id}} "2";"12"};
{\ar^{d} "4";"14"};
{\ar_(0.4){\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]} "10";"12"};
{\ar_(0.54){g} "12";"14"};
{\ar_{[e\ 0]} "12";"22"};
{\ar^{d^{\prime}} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
such that $Y^{\prime}\overset{d}{\longrightarrow}Z^{\prime}\overset{d^{\prime}}{\longrightarrow}N_2$ is an $\mathfrak{s}$-conflation.
By \cite[Proposition~3.17]{NP}, there exists a morphism $i^{\prime}$ which makes the following diagram commutative,
\[
\xy
(-16,7)*+{N_1}="2";
(0,7)*+{X}="4";
(16,7)*+{Y}="6";
(-16,-7)*+{N}="12";
(0,-7)*+{Y\oplus N}="14";
(16,-7)*+{Y}="16";
(-16,-21)*+{Y^{\prime}}="22";
(0,-21)*+{Y^{\prime}}="24";
{\ar^{m} "2";"4"};
{\ar^{r} "4";"6"};
{\ar_{i^{\prime}} "2";"12"};
{\ar^{\left[\begin{smallmatrix} r\\ i\end{smallmatrix}\right]} "4";"14"};
{\ar@{=} "6";"16"};
{\ar_(0.46){\left[\begin{smallmatrix} 0\\ 1\end{smallmatrix}\right]} "12";"14"};
{\ar_(0.54){[1\ 0]} "14";"16"};
{\ar_{h\circ\left[\begin{smallmatrix} 0\\ 1\end{smallmatrix}\right]} "12";"22"};
{\ar^{h} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "4";"16"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
such that $N_1\overset{i^{\prime}}{\longrightarrow}N\overset{h\circ\left[\begin{smallmatrix} 0\\ 1\end{smallmatrix}\right]}{\longrightarrow}Y^{\prime}$ is an $\mathfrak{s}$-conflation.
We remark that $i^{\prime}=i\circ m$ also holds by the commutativity.
Since $\mathcal{N}\subseteq\mathscr{C}$ is thick, it follows $Y^{\prime},Z^{\prime}\in\mathcal{N}$. This shows $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathcal{L}$, and thus $f=[1\ 0]\circ \left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathcal{R}\circ\mathcal{L}$ as desired.
\end{proof}
\begin{lem}\label{LemSInfDef}
Let $f\in\mathscr{C}(A,B)$ be any morphism.
\begin{enumerate}
\item $f\in\mathcal{L}$ holds if and only if $f$ is an $\mathfrak{s}$-inflation satisfying $f\in\mathscr{S}_{\mathcal{N}}$.
\item $f\in\mathcal{R}$ holds if and only if $f$ is an $\mathfrak{s}$-deflation satisfying $f\in\mathscr{S}_{\mathcal{N}}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}. As the converse is trivial, it suffices to show that any $\mathfrak{s}$-inflation $f\in\mathscr{S}_{\mathcal{N}}=\mathcal{R}\circ\mathcal{L}$ should belong to $\mathcal{L}$. However, this immediately follows from the dual of \cite[Proposition~3.17]{NP} since $\mathcal{N}\subseteq\mathscr{C}$ is thick.
\end{proof}
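As a sanity check, which is not needed in the sequel, consider the case where $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category and $\mathcal{N}\subseteq\mathscr{C}$ is a thick subcategory as in Example~\ref{ExVerdier}. There every morphism is both an $\mathfrak{s}$-inflation and an $\mathfrak{s}$-deflation, and an $\mathfrak{s}$-conflation amounts to a distinguished triangle, so Lemmas~\ref{LemRL} and \ref{LemSInfDef} simply assert
\[ \mathcal{L}\;=\;\mathcal{R}\;=\;\mathscr{S}_{\mathcal{N}}\;=\;\{\,f\in\operatorname{Mor}(\mathscr{C})\mid \text{the cone of }f\text{ belongs to }\mathcal{N}\,\}, \]
which is the familiar class of morphisms inverted in the Verdier quotient.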
\begin{lem}\label{LemAllInf2}
For any $f\in\mathscr{C}(A,B)$, the following are equivalent.
\begin{enumerate}
\item $f\in\mathscr{S}_{\mathcal{N}}$.
\item $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathcal{L}$ holds for a decomposition as in Lemma~\ref{LemAllInf} {\rm (1)}.
\item $[f\ j]\in\mathcal{R}$ holds for a decomposition as in Lemma~\ref{LemAllInf} {\rm (2)}.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}\,$\Leftrightarrow$\,{\rm (2)}. If $f\in\mathscr{S}_{\mathcal{N}}$, then $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]$ belongs to $\mathscr{S}_{\mathcal{N}}$ by Lemma~\ref{LemSplitN} {\rm (1)}, hence to $\mathcal{L}$ by Lemma~\ref{LemSInfDef}. This shows {\rm (1)}\,$\Rightarrow$\,{\rm (2)}. Since $r\in\mathcal{R}$ is always satisfied, the converse is trivial.
\end{proof}
\begin{lem}\label{LemAddedForBiresol}
$p(\mathscr{S}_{\mathcal{N}})=\overline{\mathscr{S}_{\mathcal{N}}}$ holds.
\end{lem}
\begin{proof}
By Lemma~\ref{LemSplitN}, it suffices to show that condition {\rm (iv)} in Lemma~\ref{LemSplitN} {\rm (3)} is satisfied. Let $f\in\mathscr{C}(A,B)$ be any split monomorphism such that $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$. As in Lemma~\ref{LemAllInf} {\rm (1)}, we may take an $\mathfrak{s}$-conflation $A\overset{\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]}{\longrightarrow}B\oplus N\to C$ with $N\in\mathcal{N}$. Since $f$ is a split monomorphism, so is $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]$. This means that $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]$ induces an isomorphism $A\oplus C\cong B\oplus N$ in $\mathscr{C}$. Since $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$, it follows $C\cong0$ in $\overline{\mathscr{C}}$. Then there is a split monomorphism $s\colon C\to N^{\prime}$ with $N^{\prime}\in\mathcal{N}$. As in Lemma~\ref{LemAllInf} {\rm (1)}, we may take an $\mathfrak{s}$-conflation $C\overset{\left[\begin{smallmatrix} s\\ i^{\prime}\end{smallmatrix}\right]}{\longrightarrow}N^{\prime}\oplus N^{\prime\prime}\to D$ with $N^{\prime\prime}\in\mathcal{N}$. In the same manner as before, this induces an isomorphism $C\oplus D\cong N^{\prime}\oplus N^{\prime\prime}$ in $\mathscr{C}$. Then we have $C\in\mathcal{N}$ since $\mathcal{N}\subseteq\mathscr{C}$ is closed by direct summands. Thus $\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]\in\mathcal{L}$ holds, hence $f\in\mathscr{S}_{\mathcal{N}}$ by Lemma~\ref{LemAllInf2}.
\end{proof}
\begin{lem}\label{LemForN2}
For any $l\in\mathcal{L}$, the morphism $\overline{l}$ is monomorphic in $\overline{\mathscr{C}}$.
Dually $\overline{r}$ is epimorphic in $\overline{\mathscr{C}}$, for any $r\in\mathcal{R}$.
\end{lem}
\begin{proof}
Suppose that $f\in\mathscr{C}(X,Y)$ and $l\in\mathcal{L}(Y,Y^{\prime})$ satisfy $\overline{l}\circ\overline{f}=0$ in $\overline{\mathscr{C}}$. Then there exists a commutative square
\[
\xy
(-6,6)*+{X}="0";
(6,6)*+{N}="2";
(-6,-6)*+{Y}="4";
(6,-6)*+{Y^{\prime}}="6";
{\ar^{i} "0";"2"};
{\ar_{f} "0";"4"};
{\ar^{j} "2";"6"};
{\ar_{l} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
in $\mathscr{C}$ for some $N\in\mathcal{N}$. Since $\mathcal{N}\subseteq\mathscr{C}$ is biresolving, there is an $\mathfrak{s}$-deflation $N^{\prime}\to Y^{\prime}$ for some $N^{\prime}\in\mathcal{N}$. Replacing $N$ by $N\oplus N^{\prime}$, we may assume that $j$ is an $\mathfrak{s}$-deflation, from the beginning. Then there is an $\mathfrak{s}$-conflation
\[ W\to Y\oplus N\overset{[l\ j]}{\longrightarrow}Y^{\prime} \]
for some $W$. By the commutativity of the above square, we see that $f$ factors through $W$. By the dual of Lemma~\ref{LemSplitN} {\rm (1)}, we have $[l\ j]\in\mathscr{S}_{\mathcal{N}}$. Thus Lemma~\ref{LemSInfDef} {\rm (2)} shows $W\in\mathcal{N}$,
hence $\overline{f}=0$ follows. The latter part can be shown dually.
\end{proof}
\begin{lem}\label{LemForN3}
For any $X\overset{f}{\longrightarrow}Y\overset{l}{\longleftarrow}Y^{\prime}$ with $l\in\mathcal{L}$, there exists a commutative square
\begin{equation}\label{Square_F6}
\xy
(-8,6)*+{X^{\prime}}="0";
(8,6)*+{Y^{\prime}}="2";
(-8,-6)*+{X\oplus N}="4";
(8,-6)*+{Y}="6";
{\ar^{f^{\prime}} "0";"2"};
{\ar_{l^{\prime}} "0";"4"};
{\ar^{l} "2";"6"};
{\ar_(0.6){[f\ j]} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\end{equation}
in $\mathscr{C}$ such that $N\in\mathcal{N}$ and $l^{\prime}\in\mathcal{L}$. Thus we obtain a commutative diagram
\[
\xy
(-6,6)*+{X^{\prime}}="0";
(6,6)*+{Y^{\prime}}="2";
(-6,-6)*+{X}="4";
(6,-6)*+{Y}="6";
{\ar^{\overline{f}^{\prime}} "0";"2"};
{\ar_{\overline{s}} "0";"4"};
{\ar^{\overline{l}} "2";"6"};
{\ar_{\overline{f}} "4";"6"};
{\ar@{}|\circlearrowright "0";"6"};
\endxy
\]
in $\overline{\mathscr{C}}$, where $s=[1\ 0]\circ l^{\prime}\in\mathscr{S}_{\mathcal{N}}$.
\end{lem}
\begin{proof}
Suppose that $X\overset{f}{\longrightarrow}Y\overset{l}{\longleftarrow}Y^{\prime}$ satisfies $l\in\mathcal{L}$. By the assumption, there is an $\mathfrak{s}$-deflation $j\in\mathscr{C}(N,Y)$ from some $N\in\mathcal{N}$. Then $[f\ j]\colon X\oplus N\to Y$ is also an $\mathfrak{s}$-deflation, and a commutative square $(\ref{Square_F6})$ can be obtained by using {\rm (ET4)$^\mathrm{op}$}.
\end{proof}
\begin{prop}\label{PropSatisfy}
Let $\mathcal{N}\subseteq\mathscr{C}$ be a biresolving thick subcategory.
Then $\mathscr{S}_{\mathcal{N}}$ satisfies {\rm (MR1)},\,$\ldots$\,,\,{\rm (MR4)}.
Moreover, $\overline{\mathscr{S}_{\mathcal{N}}}$ can be described as
\begin{equation}\label{Eq_MonoEpi}
\overline{\mathscr{S}_{\mathcal{N}}}=\{\overline{x}\in\operatorname{Mor}(\overline{\mathscr{C}})\mid \overline{x}\ \text{is both monomorphic and epimorphic} \}.
\end{equation}
\end{prop}
\begin{proof}
By Lemma~\ref{LemAddedForBiresol}, we have $p(\mathscr{S}_{\mathcal{N}})=\overline{\mathscr{S}_{\mathcal{N}}}$. Also, {\rm (M3)} is already shown in Lemma~\ref{LemM3}, and {\rm (MR4)} is immediate from Lemma~\ref{LemAllInf}. Thus by Claim~\ref{ClaimMultLoc}, it suffices to confirm conditions {\rm (M1),(MR2)} and show $(\ref{Eq_MonoEpi})$.
{\rm (M1)}
$\mathscr{S}_{\mathcal{N}}\subseteq\mathcal{M}$ is closed by composition, by its definition. Let
\begin{equation}\label{Diag_Comm_*}
\xy
(-7,6)*+{A}="0";
(-7,-6)*+{B}="2";
(2,2)*+{}="3";
(7,-6)*+{C}="4";
{\ar_{f} "0";"2"};
{\ar_{g} "2";"4"};
{\ar^{h} "0";"4"};
{\ar@{}|\circrclearrowright "2";"3"};
\endxy
\end{equation}
be any commutative diagram in $\mathscr{C}$ satisfying $h\in\mathscr{S}_{\mathcal{N}}$. Let us show that $f\in\mathscr{S}_{\mathcal{N}}$ holds if and only if $g\in\mathscr{S}_{\mathcal{N}}$.
As the converse can be shown in a dual manner, it is enough to show that $g\in\mathscr{S}_{\mathcal{N}}$ implies $f\in\mathscr{S}_{\mathcal{N}}$.
Since $g$ is a composition of morphisms in $\mathcal{L}$ and $\mathcal{R}$, it suffices to treat the cases $g\in\mathcal{L}$ and $g\in\mathcal{R}$.
Take an $\mathfrak{s}$-inflation $i\in\mathscr{C}(A,N)$ to some $N\in\mathcal{N}$. Then $f^{\prime}=\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right],h^{\prime}=\left[\begin{smallmatrix} h\\ i\end{smallmatrix}\right]$ are $\mathfrak{s}$-inflations, hence we obtain $\mathfrak{s}$-triangles
\[
A\overset{\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]}{\longrightarrow}B\oplus N\overset{y}{\longrightarrow}X\overset{\delta}{\dashrightarrow}\quad \text{and}\quad
A\overset{\left[\begin{smallmatrix} h\\ i\end{smallmatrix}\right]}{\longrightarrow}C\oplus N\overset{z}{\longrightarrow}N^{\prime}\overset{\rho}{\dashrightarrow},
\]
in which $N^{\prime}\in\mathcal{N}$ holds by Lemma~\ref{LemAllInf2}. It suffices to show $X\in\mathcal{N}$, again by the same lemma.
Remark that $(\ref{Diag_Comm_*})$ yields another commutative diagram
\[
\xy
(-10,7)*+{A}="0";
(-10,-8)*+{B\oplus N}="2";
(2,4)*+{}="3";
(10,-8)*+{C\oplus N}="4";
{\ar_{f^{\prime}} "0";"2"};
{\ar_{g^{\prime}} "2";"4"};
{\ar^{h^{\prime}} "0";"4"};
{\ar@{}|\circrclearrowright "2";"3"};
\endxy
\]
in $\mathscr{C}$, where we put $g^{\prime}=g\oplus\mathrm{id}_N$.
If $g\in\mathcal{L}$, then $g^{\prime}\in\mathcal{L}$ holds. Thus by {\rm (ET4)} we obtain a commutative diagram
\[
\xy
(-24,14)*+{A}="0";
(-8,14)*+{B\oplus N}="2";
(8,14)*+{X}="4";
(-24,0)*+{A}="10";
(-8,0)*+{C\oplus N}="12";
(8,0)*+{N^{\prime}}="14";
(-8,-14)*+{N^{\prime\prime}}="22";
(8,-14)*+{N^{\prime\prime}}="24";
{\ar^(0.44){f^{\prime}} "0";"2"};
{\ar^(0.56){y} "2";"4"};
{\ar@{=} "0";"10"};
{\ar^{g^{\prime}} "2";"12"};
{\ar^{} "4";"14"};
{\ar_(0.44){h^{\prime}} "10";"12"};
{\ar_(0.56){z} "12";"14"};
{\ar_{} "12";"22"};
{\ar^{} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
with $N^{\prime\prime}\in\mathcal{N}$, whose rows and columns are $\mathfrak{s}$-conflations. Since $\mathcal{N}\subseteq\mathscr{C}$ is thick, it follows $X\in\mathcal{N}$.
On the other hand if $g\in\mathcal{R}$, then $g^{\prime}\in\mathcal{R}$ holds. Thus by the dual of \cite[Proposition~3.17]{NP} we obtain a diagram
\[
\xy
(-8,14)*+{N^{\prime\prime}}="2";
(8,14)*+{N^{\prime\prime}}="4";
(-24,0)*+{A}="10";
(-8,0)*+{B\oplus N}="12";
(8,0)*+{X}="14";
(-24,-14)*+{A}="20";
(-8,-14)*+{C\oplus N}="22";
(8,-14)*+{N^{\prime}}="24";
{\ar@{=} "2";"4"};
{\ar^{} "2";"12"};
{\ar^{} "4";"14"};
{\ar^(0.44){f^{\prime}} "10";"12"};
{\ar_(0.56){y} "12";"14"};
{\ar@{=} "10";"20"};
{\ar_{g^{\prime}} "12";"22"};
{\ar^{} "14";"24"};
{\ar_(0.44){h^{\prime}} "20";"22"};
{\ar_(0.56){z} "22";"24"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "10";"22"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
with $N^{\prime\prime}\in\mathcal{N}$, whose rows and columns are $\mathfrak{s}$-conflations. Since $\mathcal{N}\subseteq\mathscr{C}$ is thick, it follows $X\in\mathcal{N}$ also in this case.
{\rm (MR2)}
By Lemmas~\ref{LemThickFirstProperties} and \ref{LemForN2}, any $\overline{s}\in\overline{\mathscr{S}_{\mathcal{N}}}$ is monomorphic and epimorphic in $\overline{\mathscr{C}}$.
Remaining conditions also follow from Lemma~\ref{LemThickFirstProperties}, Lemma~\ref{LemForN3} and its dual.
As for $(\ref{Eq_MonoEpi})$, it remains to show that
\[ \overline{\mathscr{S}_{\mathcal{N}}}\supseteq\{\overline{x}\in\operatorname{Mor}(\overline{\mathscr{C}})\mid \overline{x}\ \text{is both monomorphic and epimorphic}\} \]
holds. Let $x\in\mathscr{C}(A,B)$ be any morphism such that $\overline{x}$ is monomorphic and epimorphic in $\overline{\mathscr{C}}$. By Lemma~\ref{LemAllInf}, we may assume that $x$ is an $\mathfrak{s}$-inflation, from the beginning. Since $\mathcal{N}\subseteq\mathscr{C}$ is biresolving, there is an $\mathfrak{s}$-deflation $N\overset{g}{\longrightarrow}B$ and an $\mathfrak{s}$-inflation $B\overset{i}{\longrightarrow}N^{\prime}$ for some $N,N^{\prime}\in\mathcal{N}$. There exist $\mathfrak{s}$-triangles $A\overset{x}{\longrightarrow}B\overset{y}{\longrightarrow}C\overset{\delta}{\dashrightarrow}$ and $D\overset{f}{\longrightarrow}N\overset{g}{\longrightarrow}B\overset{\rho}{\dashrightarrow}$. By {\rm (ET4)$^\mathrm{op}$}, we obtain a diagram made of $\mathfrak{s}$-triangles as below,
\[
\xy
(-12,12)*+{D}="0";
(0,12)*+{D}="2";
(-12,0)*+{E}="10";
(0,0)*+{N}="12";
(12,0)*+{C}="14";
(24,0)*+{}="16";
(-12,-12)*+{A}="20";
(0,-12)*+{B}="22";
(12,-12)*+{C}="24";
(24,-12)*+{}="26";
(-12,-24)*+{}="30";
(0,-24)*+{}="32";
{\ar@{=} "0";"2"};
{\ar^{} "0";"10"};
{\ar^{f} "2";"12"};
{\ar_{} "10";"12"};
{\ar_{} "12";"14"};
{\ar@{-->}^{\theta} "14";"16"};
{\ar_{e} "10";"20"};
{\ar^{g} "12";"22"};
{\ar@{=} "14";"24"};
{\ar_{x} "20";"22"};
{\ar_{y} "22";"24"};
{\ar@{-->}_{\delta} "24";"26"};
{\ar@{-->} "20";"30"};
{\ar@{-->}^{\rho} "22";"32"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "10";"22"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
for some $\mathbb{E}bb$-extension $\theta$.
Since $\overline{x}$ is monomorphic and epimorphic in $\overline{\mathscr{C}}$, it follows $\overline{e}=0$ and $\overline{y}=0$. Thus we have $\overline{\delta}=\overline{e_{\ast}\theta}=0$, which means that there exists some $s\in\mathscr{S}_{\mathcal{N}}(A,A^{\prime})$ such that $s_{\ast}\delta=0$. Since $\mathscr{S}_{\mathcal{N}}$ satisfies {\rm (M3)}, there exists $t\in\mathscr{S}_{\mathcal{N}}(B,A^{\prime}\oplus C)$ which gives the following morphism of $\mathfrak{s}$-triangles.
\[
\xy
(-15,6)*+{A}="0";
(0,6)*+{B}="2";
(15,6)*+{C}="4";
(28,6)*+{}="6";
(-15,-6)*+{A^{\prime}}="10";
(0,-6)*+{A^{\prime}\oplus C}="12";
(15,-6)*+{C}="14";
(28,-6)*+{}="16";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar@{-->}^{\delta} "4";"6"};
{\ar_{s} "0";"10"};
{\ar_{t} "2";"12"};
{\ar@{=} "4";"14"};
{\ar_(0.4){\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]} "10";"12"};
{\ar_(0.6){[0\ 1]} "12";"14"};
{\ar@{-->}_{0} "14";"16"};
{\ar@{}|\circlearrowright "0";"12"};
{\ar@{}|\circlearrowright "2";"14"};
\endxy
\]
Then the commutativity of the right square means that $t$ is of the form $t=\left[\begin{smallmatrix} b\\ y\end{smallmatrix}\right]$ for some $b\in\mathscr{C}(B,A^{\prime})$.
Since $\left[\begin{smallmatrix} i\\ b \end{smallmatrix}\right]\in\mathscr{C}(B,N^{\prime}\oplus A^{\prime})$ is an $\mathfrak{s}$-inflation, we have an $\mathfrak{s}$-conflation $B\overset{\left[\begin{smallmatrix} i\\ b\end{smallmatrix}\right]}{\longrightarrow}N^{\prime}\oplus A^{\prime}\to Z$ associated to it. This in turn gives rise to an $\mathfrak{s}$-conflation
\[ B\overset{\left[\begin{smallmatrix} i\\ b\\0\end{smallmatrix}\right]}{\longrightarrow}N^{\prime}\oplus A^{\prime}\oplus C\to Z\oplus C. \]
Since $\overline{t}=\left[\begin{smallmatrix}\overline{b}\\0\end{smallmatrix}\right]\in\overline{\mathscr{S}_{\mathcal{N}}}$, we also have $\overline{\left[\begin{smallmatrix} i\\ b\\ 0\end{smallmatrix}\right]}\in\overline{\mathscr{S}_{\mathcal{N}}}$. By Lemma~\ref{LemAllInf2} this means $\left[\begin{smallmatrix} i\\ b\\ 0\end{smallmatrix}\right]\in\mathcal{L}$, namely $Z\oplus C\in\mathcal{N}$. Since $\mathcal{N}\subseteq\mathscr{C}$ is closed by direct summands, we obtain $C\in\mathcal{N}$. This shows $x\in\mathcal{L}\subseteq\mathscr{S}_{\mathcal{N}}$ as desired.
\end{proof}
\begin{cor}\label{CorLocTri}
Let $\mathcal{N}\subseteq\mathscr{C}$ be a biresolving thick subcategory.
Then the localization of $\mathscr{C}$ by $\mathscr{S}_{\mathcal{N}}$ corresponds to a triangulated category.
\end{cor}
\begin{proof}
By Theorem~\ref{ThmMultLoc} and Proposition~\ref{PropSatisfy}, we obtain an extriangulated category $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$. Since any morphism in $\widetilde{\mathscr{C}}$ is both an $\widetilde{\mathfrak{s}}$-inflation and an $\widetilde{\mathfrak{s}}$-deflation by Lemmas~\ref{LemComposeInf} and \ref{LemAllInf}, this becomes triangulated.
\end{proof}
\subsection{Case {\rm (B)}: Localization by percolating subcategories}\label{Subsection_Percolating}
In this subsection, we show that the localizations in Examples~\ref{ExSerreAbel} and \ref{ExTwo-sidedExact} can be covered by our construction in Section~\ref{Section_Localization}. In fact, we may perform it in a slightly broader situation\footnote{The authors wish to thank Mikhail Gorsky, whose suggestive comment to the previous version of this manuscript led them to this improvement.}, so that it also contains Example~\ref{ExVerdier}.
\begin{dfn}\label{Def_Percolating}
A thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is called a \emph{two-sided admissibly percolating} subcategory, or simply a \emph{percolating} subcategory in this article, if the following conditions are satisfied.
\begin{itemize}
\item[{\rm (P1)}] For any morphism $f\colon X\to N$ in $\mathscr{C}$ with $N\in\mathcal{N}$, there is a factorization of $f$ as $X\overset{g}{\to}N^{\prime}\overset{h}{\to}N$ with an $\mathfrak{s}$-deflation $g$ and an $\mathfrak{s}$-inflation $h$, and $N^{\prime}\in\mathcal{N}$.
\item[{\rm (P1')}] Dually, for any morphism $f\colon N\to Y$ in $\mathscr{C}$ with $N\in\mathcal{N}$, there is a factorization of $f$ as $N\overset{g}{\to}N^{\prime}\overset{h}{\to}Y$ with an $\mathfrak{s}$-deflation $g$ and an $\mathfrak{s}$-inflation $h$, and $N^{\prime}\in\mathcal{N}$.
\end{itemize}
\end{dfn}
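To illustrate the definition in a classical situation (this is only an illustration; see also Example~\ref{ExPerc} below): let $\mathscr{C}=\mathrm{Ab}$ be the abelian category of abelian groups with its usual exact structure, and let $\mathcal{N}\subseteq\mathrm{Ab}$ be the Serre subcategory of torsion abelian groups. Any morphism $f\colon X\to N$ with $N\in\mathcal{N}$ factors as
\[ X\twoheadrightarrow \operatorname{Im} f\hookrightarrow N, \]
where $\operatorname{Im} f$, being a subgroup of $N$, is again torsion; this verifies {\rm (P1)}, and {\rm (P1')} follows in the same way since any quotient of a torsion group is torsion.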
\begin{lem}\label{LemPercdefinf}
Let $\mathcal{N}\subseteq\mathscr{C}$ be a thick subcategory.
The following are equivalent.
\begin{enumerate}
\item $\mathcal{N}$ is percolating.
\item If a morphism $f\colon X\to Y$ in $\mathscr{C}$ factors through some object in $\mathcal{N}$, then there exists a factorization $X\overset{d}{\to}N\overset{i}{\to}Y$ of $f$ with an $\mathfrak{s}$-deflation $d$, an $\mathfrak{s}$-inflation $i$ and $N\in\mathcal{N}$.
\end{enumerate}
\end{lem}
\begin{proof}
As the converse is trivial, let us show that {\rm (1)} implies {\rm (2)}.
Suppose that $f=h\circ g$ holds for morphisms $g\colon X\to N_1, h\colon N_1\to Y$ with $N_1\in\mathcal{N}$. Since $\mathcal{N}$ is percolating, there is a factorization $X\overset{d}{\to}N_2\overset{i}{\to}N_1$ of $g$. Here $d$ is an $\mathfrak{s}$-deflation, $i$ is an $\mathfrak{s}$-inflation and $N_2$ belongs to $\mathcal{N}$. Then there is a factorization $N_2\overset{d^{\prime}}{\to}N_3\overset{i^{\prime}}{\to}Y$ of $h\circ i$ with an $\mathfrak{s}$-deflation $d^{\prime}$, an $\mathfrak{s}$-inflation $i^{\prime}$ and $N_3\in\mathcal{N}$ because $\mathcal{N}$ is percolating. Thus we obtain a factorization $X\overset{d^{\prime}\circ d}{\to}N_3\overset{i^{\prime}}{\to}Y$ of $f$ as desired.
\end{proof}
In addition, we also consider the following condition so that Corollary~\ref{CorMultLoc} can be applied to $\mathscr{S}_{\mathcal{N}}$.
\begin{cond}\label{ConditionPerc}
Let $\mathcal{N}$ be a percolating thick subcategory in $(\mathscr{C},\mathbb{E},\mathfrak{s})$. Consider the following conditions.
\begin{enumerate}
\item[{\rm (P2)}] If $f\in\mathscr{C}(A,B)$ is a split monomorphism such that $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$, then there exist $N\in\mathcal{N}$ and $j\in\mathscr{C}(N,B)$ such that $[f\ j]\colon A\oplus N\to B$ is an isomorphism in $\mathscr{C}$.
\item[{\rm (P3)}] $\operatorname{Ker}\big(\mathscr{C}(X,A)\overset{l\circ-}{\longrightarrow}\mathscr{C}(X,B)\big)\subseteq[\mathcal{N}](X,A)$ holds for any $X\in\mathscr{C}$ and any $l\in\mathcal{L}(A,B)$.
Dually, $\operatorname{Ker}\big(\mathscr{C}(C,X)\overset{-\circ r}{\longrightarrow}\mathscr{C}(B,X)\big)\subseteq[\mathcal{N}](C,X)$ holds for any $X\in\mathscr{C}$ and any $r\in\mathcal{R}(B,C)$.
\end{enumerate}
\end{cond}
Thick subcategories in Examples~\ref{ExVerdier}, \ref{ExSerreAbel}, \ref{ExTwo-sidedExact} are percolating and moreover satisfy Condition~\ref{ConditionPerc}. More in detail, we have the following Remarks~\ref{RemP2} and \ref{RemP3}.
\begin{rem}\label{RemP2}
As for {\rm (P2)}, the following holds.
\begin{enumerate}
\item {\rm (P2)} is self-dual. Namely, {\rm (P2)} holds if and only if for any split epimorphism $e\in\mathscr{C}(B,A)$ such that $\overline{e}$ is an isomorphism in $\overline{\mathscr{C}}$, there exist $N\in\mathcal{N}$ and $i\in\mathscr{C}(B,N)$ such that $\left[\begin{smallmatrix} e\\ i\end{smallmatrix}\right]\colon B\to A\oplus N$ is an isomorphism in $\mathscr{C}$.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ satisfies {\rm (WIC)} or more generally that any split monomorphism has a cokernel. Then {\rm (P2)} is always satisfied. In fact, the cokernel $N$ of a split monomorphism $f\colon A\to B$ belongs to $\mathcal{N}$ whenever $\overline{f}$ is an isomorphism, hence gives a decomposition into a direct sum $B\cong A\oplus N$.
We also remark that if any split monomorphism $f$ in $\mathscr{C}$ has a cokernel, then any extension-closed subcategory $\mathscr{D}\subseteq\mathscr{C}$ closed by direct summands also possesses this property.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category. Then {\rm (P2)} is always satisfied by any percolating subcategory.
Indeed if $f\in\mathscr{C}(A,B)$ is a split monomorphism such that $\overline{f}$ is an isomorphism in $\overline{\mathscr{C}}$, then there exist $e\in\mathscr{C}(B,A)$, $N\in\mathcal{N}$, $i\in\mathscr{C}(B,N)$ and $j\in\mathscr{C}(N,B)$ which satisfy $e\circ f=\mathrm{id}_A$ and $f\circ e+j\circ i=\mathrm{id}_B$. We may assume that $i$ is an $\mathfrak{s}$-deflation and $j$ is an $\mathfrak{s}$-inflation by Lemma~\ref{LemPercdefinf}. Then $i\circ f=0$, $e\circ j=0$ and $i\circ j=\mathrm{id}$ follow since $i$ is epimorphic and $j$ is monomorphic, thus $[f\ j]\colon A\oplus N\to B$ and $\left[\begin{smallmatrix} e\\i\end{smallmatrix}\right]\colon B\to A\oplus N$ are inverse to each other.
\end{enumerate}
\end{rem}
\begin{rem}\label{RemP3}
As for {\rm (P3)}, the following holds.
\begin{enumerate}
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category. Then {\rm (P3)} is trivially satisfied, since any $\mathfrak{s}$-inflation is monomorphic and any $\mathfrak{s}$-deflation is epimorphic.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category. Then {\rm (P3)} is always satisfied. This is because $N\to A\overset{f}{\longrightarrow} B$ is an $\mathfrak{s}$-conflation if and only if $A\overset{f}{\longrightarrow} B\to N[1]$ is an $\mathfrak{s}$-conflation.
\end{enumerate}
We also remark that in these particular cases {\rm (1)} and {\rm (2)}, the following stronger version {\rm (P3$^+$)} is satisfied.
\begin{itemize}
\item[{\rm (P3$^+$)}] For any $\mathfrak{s}$-conflation $N\to A\overset{r}{\longrightarrow}B$ with $N\in\mathcal{N}$, there exist $N^{\prime}\in\mathcal{N}$ and
$g\in\mathscr{C}(B,N^{\prime})$ which gives a weak cokernel of $r$. Dually for $\mathfrak{s}$-conflations $A\overset{l}{\longrightarrow}B\to N$ with $N\in\mathcal{N}$.
\end{itemize}
\end{rem}
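To see {\rm (P3$^+$)} in these two cases, just as an illustration of the condition: in the exact case an $\mathfrak{s}$-deflation $r$ is epimorphic, so one may take $N^{\prime}=0$ and $g=0$ as a weak cokernel of $r$; in the triangulated case the rotation $A\overset{r}{\longrightarrow}B\longrightarrow N[1]$ of the triangle coming from $N\to A\overset{r}{\longrightarrow}B$ exhibits $B\to N[1]$ as a weak cokernel of $r$, with $N[1]\in\mathcal{N}$ since $\mathcal{N}$ is thick.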
In summary, we have the following.
\begin{ex}\label{ExPerc}
Let $(\mathscr{C},\mathbb{E},\mathfrak{s})$ be an extriangulated category, as before.
\begin{enumerate}
\item $\mathscr{C}$ itself is percolating in $(\mathscr{C},\mathbb{E},\mathfrak{s})$ if and only if any morphism in $\mathscr{C}$ is $\mathfrak{s}$-admissible. This always satisfies {\rm (P3)}. Moreover, if any split monomorphism in $\mathscr{C}$ has a cokernel, then it also satisfies {\rm (P2)}.
\item The thick full subcategory of zero objects in $\mathscr{C}$ is always percolating and satisfies Condition~\ref{ConditionPerc}.
\item If $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to a triangulated category, then any thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ is percolating and satisfies Condition~\ref{ConditionPerc}.
\item Suppose that $(\mathscr{C},\mathbb{E},\mathfrak{s})$ corresponds to an exact category. Then any percolating thick subcategory $\mathcal{N}\subseteq\mathscr{C}$ becomes a Serre subcategory, in the sense that $B\in\mathcal{N}$ holds if and only if $A,C\in\mathcal{N}$, for each $\mathfrak{s}$-conflation $A\to B\to C$. Also, it always satisfies Condition~\ref{ConditionPerc}. Thus Condition~\ref{ConditionPerc} does not impose any requirement beyond \cite[Definition 2.4]{HKR}.
\item If $\mathscr{C}$ is abelian, then conversely any Serre subcategory $\mathcal{N}\subseteq\mathscr{C}$ is a percolating thick subcategory.
\end{enumerate}
\end{ex}
In the rest, let $\mathcal{N}\subseteq\mathscr{C}$ denote a percolating thick subcategory satisfying Condition~\ref{ConditionPerc}.
Our aim is to show Proposition~\ref{PropPerc}, which asserts that the assumption of Corollary~\ref{CorMultLoc} is fulfilled by $\mathscr{S}_{\mathcal{N}}$ associated to it.
\begin{lem}\label{LemExPerc_pS}
$p(\mathscr{S}_{\mathcal{N}})=\overline{\mathscr{S}_{\mathcal{N}}}$ holds.
\end{lem}
\begin{proof}
By Lemma~\ref{LemSplitN} {\rm (3)}, this is immediate from {\rm (P2)}.
\end{proof}
\begin{lem}\label{Lem_ii}
The following holds.
\begin{enumerate}
\item Let $f\colon N\to A$ be any morphism in $\mathscr{C}$ with $N\in\mathcal{N}$. If there is an $\mathfrak{s}$-inflation $x\colon A\to B$ such that $x\circ f$ is an $\mathfrak{s}$-inflation, then $f$ is an $\mathfrak{s}$-inflation.
\item Let $f\colon C\to N$ be any morphism in $\mathscr{C}$ with $N\in\mathcal{N}$. If there exists an $\mathfrak{s}$-deflation $y\colon B\to C$ such that $f\circ y$ is an $\mathfrak{s}$-deflation, then $f$ is an $\mathfrak{s}$-deflation.
\end{enumerate}
\end{lem}
\begin{proof}
Since {\rm (2)} can be shown dually, it is enough to show {\rm (1)}.
Let $f\colon N\to A$ be a morphism with $N\in\mathcal{N}$, and suppose that there is an $\mathfrak{s}$-inflation $x\colon A\to B$ such that $x\circ f$ is an $\mathfrak{s}$-inflation.
Since $\mathcal{N}$ is percolating, there exist an $\mathfrak{s}$-conflation $N^{\prime\prime}\overset{m}{\longrightarrow}N\overset{q}{\longrightarrow}N^{\prime}$ and an $\mathfrak{s}$-inflation $i\in\mathscr{C}(N^{\prime},A)$ such that $f=i\circ q$. It suffices to show that $q$ is an $\mathfrak{s}$-inflation.
Since $(x\circ f)\circ m=0$ becomes an $\mathfrak{s}$-inflation, we obtain an $\mathfrak{s}$-conflation $N^{\prime\prime}\overset{0}{\longrightarrow}B\overset{b}{\longrightarrow}B^{\prime}$ for some $B^{\prime}\in\mathscr{C}$. By {\rm (C1')} for $(\mathscr{C},\mathbb{E},\mathfrak{s})$, we see that there exists $e\in\mathscr{C}(B^{\prime},B)$ such that $e\circ b=\mathrm{id}_B$, in particular $b$ is a split monomorphism. Then $\mathrm{id}_{B^{\prime}}-b\circ e\colon B^{\prime}\to B^{\prime}$ satisfies $(\mathrm{id}_{B^{\prime}}-b\circ e)\circ b=0$, hence $\mathrm{id}_{B^{\prime}}=\overline{b}\circ\overline{e}$ holds in $\overline{\mathscr{C}}$ by {\rm (P3)}. In particular $\overline{b}$ is an isomorphism in $\overline{\mathscr{C}}$. Thus by {\rm (P2)}, we have an isomorphism $[b\ j]\colon B\oplus N_B\overset{\cong}{\longrightarrow}B^{\prime}$ for some $N_B\in\mathcal{N}$. Thus we have a split $\mathfrak{s}$-conflation $N_B\overset{j}{\longrightarrow}B^{\prime}\overset{e^{\prime}}{\longrightarrow}B$ such that $e^{\prime}\circ b=\mathrm{id}_B$. By {\rm (ET4)$^\mathrm{op}$}, we obtain a commutative diagram in $\mathscr{C}$
\[
\xy
(-18,12)*+{N^{\prime\prime}}="0";
(-6,12)*+{0}="2";
(6,12)*+{N_B}="4";
(-18,0)*+{N^{\prime\prime}}="10";
(-6,0)*+{B}="12";
(6,0)*+{B^{\prime}}="14";
(-6,-12)*+{B}="22";
(6,-12)*+{B}="24";
{\ar^{} "0";"2"};
{\ar^{} "2";"4"};
{\ar@{=} "0";"10"};
{\ar^{} "2";"12"};
{\ar^{j} "4";"14"};
{\ar_{0} "10";"12"};
{\ar_{b} "12";"14"};
{\ar_{\mathrm{id}_B} "12";"22"};
{\ar^{e^{\prime}} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
in which $N^{\prime\prime}\to0\to N_B$ becomes an $\mathfrak{s}$-conflation. Then by \cite[Proposition~3.15]{NP}, we obtain a commutative diagram made of $\mathfrak{s}$-conflations as below for some $M$.
\[
\xy
(-6,6)*+{N^{\prime\prime}}="2";
(6,6)*+{N}="4";
(18,6)*+{N^{\prime}}="6";
(-6,-6)*+{0}="12";
(6,-6)*+{M}="14";
(18,-6)*+{N^{\prime}}="16";
(-6,-18)*+{N_B}="22";
(6,-18)*+{N_B}="24";
{\ar^{m} "2";"4"};
{\ar^{q} "4";"6"};
{\ar_{} "2";"12"};
{\ar^{} "4";"14"};
{\ar@{=} "6";"16"};
{\ar_{} "12";"14"};
{\ar_{r} "14";"16"};
{\ar_{} "12";"22"};
{\ar^{} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "4";"16"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
Then $r$ is an isomorphism, hence $q$ becomes an $\mathfrak{s}$-inflation as desired.
\end{proof}
\begin{lem}\label{Lem9}
Let $l\colon X\to Y$ be a morphism in $\mathcal{L}$ and $r\colon Y\to Z$ an $\mathfrak{s}$-deflation. Then there exists the following commutative diagram
\begin{equation}\label{DiagramTA}
\xymatrix{
W\ar@{}[dr]|\circlearrowright\ar[r]^w\ar[d]_z&V\ar@{}[dr]|\circlearrowright\ar[d]^g\ar[r]^{f^{\prime}}&N_2\ar[d]^{g^{\prime}}\\
X\ar@{}[dr]|\circlearrowright\ar[d]_{z^{\prime}}\ar[r]^l&Y\ar@{}[dr]|\circlearrowright\ar[d]^r\ar[r]^f&N_1\ar^{r^{\prime}}[d]\\
Z^{\prime}\ar[r]_{l^{\prime}}&Z\ar[r]_n&N_3
}
\end{equation}
where all rows and all columns are $\mathfrak{s}$-conflations and $N_{i}\in\mathcal{N}$ for $i=1,2,3$. In particular, we have $\mathcal{R}\circ\mathcal{L}\subseteq\mathcal{L}\circ\mathcal{R}$.
\end{lem}
\begin{proof}
By definition, there exist $\mathfrak{s}$-conflations $X\overset{l}{\longrightarrow}Y\overset{f}{\longrightarrow} N_1$ and $V\overset{g}{\longrightarrow} Y\overset{r}{\longrightarrow}Z$ with $N_1\in\mathcal{N}$. Since $\mathcal{N}$ is percolating, there exist $N_2\in\mathcal{N}$, $\mathfrak{s}$-deflation $f^{\prime}$ and $\mathfrak{s}$-inflation $g^{\prime}$ such that $f\circ g=g^{\prime}\circ f^{\prime}$. Now we have the following diagram
\[
\xymatrix{
W\ar[r]^w&V\ar@{}[dr]|\circlearrowright\ar[d]^g\ar[r]^{f^{\prime}}&N_2\ar[d]^{g^{\prime}}\\
X\ar[r]^l&Y\ar[d]^r\ar[r]^f&N_1\ar[d]^{r^{\prime}}\\
\ &Z&N_3
}
\]
where all rows and all columns are $\mathfrak{s}$-conflations and $N_3\in\mathcal{N}$.
By {\rm (ET4)}, we obtain a commutative diagram in $\mathscr{C}$ as below,
\[
\xy
(-18,12)*+{W}="0";
(-6,12)*+{V}="2";
(6,12)*+{N_2}="4";
(-18,0)*+{W}="10";
(-6,0)*+{Y}="12";
(6,0)*+{E}="14";
(-6,-12)*+{Z}="22";
(6,-12)*+{Z}="24";
{\ar^{w} "0";"2"};
{\ar^{f^{\prime}} "2";"4"};
{\ar@{=} "0";"10"};
{\ar^{g} "2";"12"};
{\ar^{d} "4";"14"};
{\ar_{g\circ w} "10";"12"};
{\ar_{c} "12";"14"};
{\ar_{r} "12";"22"};
{\ar^{e} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
in which $W\overset{g\circ w}{\longrightarrow}Y\overset{c}{\longrightarrow}E$ and $N_2\overset{d}{\longrightarrow}E\overset{e}{\longrightarrow}Z$ are $\mathfrak{s}$-conflations. By \cite[Lemma~3.13]{NP}, the upper right square is a weak pushout. Thus there exists a morphism $q\in\mathscr{C}(E,N_1)$ which makes the following diagram commutative.
\[
\xy
(-6,6)*+{V}="0";
(6,6)*+{N_2}="2";
(-6,-6)*+{Y}="4";
(6,-6)*+{E}="6";
(16,-16)*+{N_1}="8";
{\ar^{f^{\prime}} "0";"2"};
{\ar_{g} "0";"4"};
{\ar^{d} "2";"6"};
{\ar_{c} "4";"6"};
{\ar_{q} "6";"8"};
{\ar@/_0.80pc/_{f} "4";"8"};
{\ar@/^0.80pc/^{g^{\prime}} "2";"8"};
{\ar@{}|\circrclearrowright "0";"6"};
{\ar@{}|\circrclearrowright "6";(4,-16)};
{\ar@{}|\circrclearrowright "6";(16,-4)};
\endxy
\]
Then $q$ is an $\mathfrak{s}$-deflation by Lemma~\ref{Lem_ii} {\rm (2)}. Complete $q$ into an $\mathfrak{s}$-conflation $Z^{\prime}\overset{q^{\prime}}{\longrightarrow}E\overset{q}{\longrightarrow}N_1$. By the dual of \cite[Lemma~3.14]{NP}, there exist morphisms $z,z^{\prime}$ which make the following diagram commutative,
\[
\xy
(-18,12)*+{W}="0";
(-6,12)*+{X}="2";
(6,12)*+{Z^{\prime}}="4";
(-18,0)*+{W}="10";
(-6,0)*+{Y}="12";
(6,0)*+{E}="14";
(-6,-12)*+{N_1}="22";
(6,-12)*+{N_1}="24";
{\ar^{z} "0";"2"};
{\ar^{z^{\prime}} "2";"4"};
{\ar@{=} "0";"10"};
{\ar^{l} "2";"12"};
{\ar^{q^{\prime}} "4";"14"};
{\ar_{g\circ w} "10";"12"};
{\ar_{c} "12";"14"};
{\ar_{f} "12";"22"};
{\ar^{q} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
such that $W\overset{z}{\longrightarrow}X\overset{z^{\prime}}{\longrightarrow}Z^{\prime}$ is an $\mathfrak{s}$-conflation. Then by the dual of \cite[Proposition~3.17]{NP}, there exist morphisms $l^{\prime},n$ which make the following diagram commutative,
\[
\xy
(-6,12)*+{N_2}="2";
(6,12)*+{N_2}="4";
(-18,0)*+{Z^{\prime}}="10";
(-6,0)*+{E}="12";
(6,0)*+{N_1}="14";
(-18,-12)*+{Z^{\prime}}="20";
(-6,-12)*+{Z}="22";
(6,-12)*+{N_3}="24";
{\ar@{=} "2";"4"};
{\ar_{d} "2";"12"};
{\ar^{g^{\prime}} "4";"14"};
{\ar^{q^{\prime}} "10";"12"};
{\ar_{q} "12";"14"};
{\ar@{=} "10";"20"};
{\ar_{e} "12";"22"};
{\ar^{r^{\prime}} "14";"24"};
{\ar_{l^{\prime}} "20";"22"};
{\ar_{n} "22";"24"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "10";"22"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
such that $Z^{\prime}\overset{l^{\prime}}{\longrightarrow}Z\overset{n}{\longrightarrow}N_3$ is an $\mathfrak{s}$-conflation.
Commutativity of $(\ref{DiagramTA})$ is immediate from the construction.
It remains to show $\mathcal{R}\circ\mathcal{L}\subseteq\mathcal{L}\circ\mathcal{R}$. Suppose $r\in\mathcal{R}$ in $(\ref{DiagramTA})$. Then $V$ is in $\mathcal{N}$, and so is $W$ because $\mathcal{N}$ is a thick subcategory. Thus $r\circ l=l^{\prime}\circ z^{\prime}$ belongs to $\mathcal{L}\circ\mathcal{R}$.
\end{proof}
\begin{lem}\label{LemPercLR}
$\mathscr{S}_{\mathcal{N}}=\mathcal{L}\circ\mathcal{R}$ holds.
\end{lem}
\begin{proof}
This follows immediately from Lemma~\ref{Lem9}.
\end{proof}
\begin{lem}\label{LemGeneralThick23_1}
Let
\[
\xy
(-7,6)*+{A}="0";
(-7,-6)*+{B}="2";
(2,2)*+{}="3";
(7,-6)*+{C}="4";
{\ar_{f} "0";"2"};
{\ar_{g} "2";"4"};
{\ar^{h} "0";"4"};
{\ar@{}|\circrclearrowright "2";"3"};
\endxy
\]
be any commutative diagram in $\mathscr{C}$. The following holds.
\begin{enumerate}
\item If $f,h\in\mathcal{L}$, then $g\in\mathscr{S}_{\mathcal{N}}$.
\item If $g,h\in\mathcal{R}$, then $f\in\mathscr{S}_{\mathcal{N}}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}. Let $A\overset{f}{\longrightarrow}B\to N\overset{\delta}{\dashrightarrow}$ and $A\overset{h}{\longrightarrow}C\to N^{\prime}\overset{\rho}{\dashrightarrow}$ be $\mathfrak{s}$-triangles with $N,N^{\prime}\in\mathcal{N}$.
By the dual of \cite[Proposition~3.15]{NP}, we obtain a commutative diagram made of $\mathfrak{s}$-triangles as below.
\[
\xy
(-7,7)*+{A}="2";
(7,7)*+{C}="4";
(21,7)*+{N^{\prime}}="6";
(35,7)*+{}="8";
(-7,-7)*+{B}="12";
(7,-7)*+{E}="14";
(21,-7)*+{N^{\prime}}="16";
(35,-7)*+{}="18";
(-7,-21)*+{N}="22";
(7,-21)*+{N}="24";
(-7,-34)*+{}="32";
(7,-34)*+{}="34";
{\ar^{h} "2";"4"};
{\ar^{} "4";"6"};
{\ar@{-->}^{\rho} "6";"8"};
{\ar_{f} "2";"12"};
{\ar^{} "4";"14"};
{\ar@{=} "6";"16"};
{\ar_{} "12";"14"};
{\ar_{} "14";"16"};
{\ar@{-->}_{f_{\ast}\rho} "16";"18"};
{\ar_{} "12";"22"};
{\ar^{} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{-->}_{\delta} "22";"32"};
{\ar@{-->}^{h_{\ast}\delta} "24";"34"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "4";"16"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
Since $h_{\ast}\delta=g_{\ast} f_{\ast}\delta=0$, we may replace $[C\to E\to N]$ by $[C\overset{\left[\begin{smallmatrix} 1\\0\end{smallmatrix}\right]}{\longrightarrow}C\oplus N\overset{[0\ 1]}{\longrightarrow}N]$, to obtain
\[
\xy
(-7,7)*+{A}="2";
(7,7)*+{C}="4";
(21,7)*+{N^{\prime}}="6";
(35,7)*+{}="8";
(-7,-7)*+{B}="12";
(7,-7)*+{C\oplus N}="14";
(21,-7)*+{N^{\prime}}="16";
(35,-7)*+{}="18";
(-7,-21)*+{N}="22";
(7,-21)*+{N}="24";
(-7,-34)*+{}="32";
(7,-34)*+{}="34";
{\ar^{h} "2";"4"};
{\ar^{} "4";"6"};
{\ar@{-->}^{\rho} "6";"8"};
{\ar_{f} "2";"12"};
{\ar^{\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]} "4";"14"};
{\ar@{=} "6";"16"};
{\ar_(0.35){m} "12";"14"};
{\ar_{} "14";"16"};
{\ar@{-->}_{f_{\ast}\rho} "16";"18"};
{\ar_{} "12";"22"};
{\ar^{[0\ 1]} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{-->}_{\delta} "22";"32"};
{\ar@{-->}^{h_{\ast}\delta} "24";"34"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "4";"16"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
for some $m=\left[\begin{smallmatrix} g^{\prime}\\ g^{\prime\prime}\end{smallmatrix}\right]\in\mathcal{L}(B,C\oplus N)$. Since $[1\ 0]\in\mathcal{R}(C\oplus N,C)$, we have $g^{\prime}=[1\ 0]\circ \left[\begin{smallmatrix} g^{\prime}\\ g^{\prime\prime}\end{smallmatrix}\right]\in\mathcal{R}\circ\mathcal{L}\subseteq\mathscr{S}_{\mathcal{N}}$. Since $(g-g^{\prime})\circ f=0$, by {\rm (C1')} it follows that $g-g^{\prime}$ factors through $N$, hence $\overline{g}=\overline{g}^{\prime}$. Since $\mathscr{S}_{\mathcal{N}}$ satisfies {\rm (M0)}, we can apply Lemma~\ref{LemSplitN} to $\mathscr{S}=\mathscr{S}_{\mathcal{N}}$ to conclude $g\in\mathscr{S}_{\mathcal{N}}$.
\end{proof}
\begin{lem}\label{LemGeneralThick23_2}
Let $A\overset{r}{\longrightarrow}D\overset{l}{\longrightarrow}C$ and $A\overset{f}{\longrightarrow}B\overset{g}{\longrightarrow}C$ be a pair of sequences in $\mathscr{C}$ satisfying $l\in\mathcal{L}$ and $r\in\mathcal{R}$.
Assume that
\[
\xy
(-6,6)*+{A}="0";
(6,6)*+{D}="2";
(-6,-6)*+{B}="4";
(6,-6)*+{C}="6";
{\ar^{\overseterline{r}} "0";"2"};
{\ar_{\overseterline{f}} "0";"4"};
{\ar^{\overseterline{l}} "2";"6"};
{\ar_{\overseterline{g}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
is commutative in $\overline{\mathscr{C}}$.
Then the following holds.
\begin{enumerate}
\item If $f\in\mathcal{L}$, then $g\in\mathscr{S}_{\mathcal{N}}$.
\item If $g\in\mathcal{R}$, then $f\in\mathscr{S}_{\mathcal{N}}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}.
By $\overline{g}\circ\overline{f}=\overline{l}\circ\overline{r}$, there exist $N_0\in\mathcal{N}$ and $i\in\mathscr{C}(A,N_0),j\in\mathscr{C}(N_0,C)$ such that $l\circ r=g\circ f+j\circ i$.
If we put $B_0=B\oplus N_0$, $f_0=\left[\begin{smallmatrix} f\\ i\end{smallmatrix}\right]$ and $g_0=[g\ j]$, then this means the commutativity of
\begin{equation}\label{CommADB_0C}
\xy
(-6,6)*+{A}="0";
(6,6)*+{D}="2";
(-6,-6)*+{B_0}="4";
(6,-6)*+{C}="6";
{\ar^{r} "0";"2"};
{\ar_{f_0} "0";"4"};
{\ar^{l} "2";"6"};
{\ar_{g_0} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\end{equation}
in $\mathscr{C}$. Remark that $f\in\mathcal{L}$ implies $f_0\in\mathcal{L}$. Thus there are $\mathfrak{s}$-triangles
\[ A\overset{f_0}{\longrightarrow}B_0\overset{y}{\longrightarrow}N_1\overset{\delta_1}{\dashrightarrow},\quad N_2\overset{m}{\longrightarrow}A\overset{r}{\longrightarrow}D\overset{\delta_2}{\dashrightarrow} \]
for some $N_1,N_2\in\mathcal{N}$.
By {\rm (ET4)}, we obtain a diagram made of $\mathfrak{s}$-triangles as below.
\[
\xy
(-21,7)*+{N_2}="0";
(-7,7)*+{A}="2";
(7,7)*+{D}="4";
(-21,-7)*+{N_2}="10";
(-7,-7)*+{B_0}="12";
(7,-7)*+{E}="14";
(-7,-21)*+{N_1}="22";
(7,-21)*+{N_1}="24";
{\ar^{m} "0";"2"};
{\ar^{r} "2";"4"};
{\ar^{\delta_2}@{-->} "4";(19,7)};
{\ar@{=} "0";"10"};
{\ar_{f_0} "2";"12"};
{\ar^{d} "4";"14"};
{\ar_{} "10";"12"};
{\ar_{b} "12";"14"};
{\ar@{-->}^{} "14";(19,-7)};
{\ar_{y} "12";"22"};
{\ar^{} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{-->}_{\delta_1} "22";(-7,-34)};
{\ar@{-->}^{r_{\ast}\delta_1} "24";(7,-34)};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
By \cite[Lemma~3.13]{NP}, the upper right square
\[
\xy
(-6,6)*+{A}="0";
(6,6)*+{D}="2";
(-6,-6)*+{B_0}="4";
(6,-6)*+{E}="6";
{\ar^{r} "0";"2"};
{\ar_{f_0} "0";"4"};
{\ar^{d} "2";"6"};
{\ar_{b} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
is a weak pushout. Thus by the commutativity of $(\ref{CommADB_0C})$, there exists $e\in\mathscr{C}(E,C)$ such that $e\circ b=g_0$ and $e\circ d=l$. Since $l,d\in\mathcal{L}$, we obtain $e\in\mathscr{S}_{\mathcal{N}}$ by Lemma~\ref{LemGeneralThick23_1} {\rm (1)}. Thus $g=e\circ b\circ\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\in\mathscr{S}_{\mathcal{N}}\circ\mathcal{R}\circ\mathcal{L}\subseteq\mathscr{S}_{\mathcal{N}}$ follows.
\end{proof}
By using Lemma~\ref{LemGeneralThick23_2}, we can show the following.
\begin{lem}\label{LemPercPreMR1}
Let
\[
\xy
(-6,6)*+{A}="0";
(6,6)*+{D}="2";
(-6,-6)*+{B}="4";
(6,-6)*+{C}="6";
{\ar^{r} "0";"2"};
{\ar_{f} "0";"4"};
{\ar^{l} "2";"6"};
{\ar_{g} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
be any commutative square in $\mathscr{C}$ with $l\in\mathcal{L}$ and $r\in\mathcal{R}$.
The following holds.
\begin{enumerate}
\item If $f\in\mathcal{R}$, then $g\in\mathscr{S}_{\mathcal{N}}$.
\item If $g\in\mathcal{L}$, then $f\in\mathscr{S}_{\mathcal{N}}$.
\end{enumerate}
\end{lem}
\begin{proof}
By duality, it is enough to show {\rm (1)}.
Take $\mathfrak{s}$-conflations $N\overset{i}{\longrightarrow}A\overset{r}{\longrightarrow}D$ and $N^{\prime}\overset{i^{\prime}}{\longrightarrow}A\overset{f}{\longrightarrow}B$.
By the dual of Lemma~\ref{Lem9}, we obtain a commutative diagram
\[
\xy
(-12,12)*+{N_2}="0";
(0,12)*+{N^{\prime}}="2";
(12,12)*+{N_3}="4";
(-12,0)*+{N}="10";
(0,0)*+{A}="12";
(12,0)*+{D}="14";
(-12,-12)*+{N_1}="20";
(0,-12)*+{B}="22";
(12,-12)*+{Z}="24";
{\ar^{x} "0";"2"};
{\ar^{y} "2";"4"};
{\ar_{} "0";"10"};
{\ar_{i^{\prime}} "2";"12"};
{\ar^{x^{\prime}} "4";"14"};
{\ar^{i} "10";"12"};
{\ar^{r} "12";"14"};
{\ar_{} "10";"20"};
{\ar_{f} "12";"22"};
{\ar^{y^{\prime}} "14";"24"};
{\ar_{i^{\prime}} "20";"22"};
{\ar_{r^{\prime}} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "10";"22"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
in $\mathscr{C}$ with $N_1,N_2,N_3\in\mathcal{N}$, whose rows and columns are $\mathfrak{s}$-conflations. In particular we have $y^{\prime},r^{\prime}\in\mathcal{R}$.
Since $l\in\mathcal{L}$, there is an $\mathfrak{s}$-conflation $D\overset{l}{\longrightarrow}C\to N^{\prime\prime\prime}$. By {\rm (ET4)}, we obtain a commutative diagram
\[
\xy
(-12,12)*+{N_3}="0";
(0,12)*+{D}="2";
(12,12)*+{Z}="4";
(-12,0)*+{N_3}="10";
(0,0)*+{C}="12";
(12,0)*+{Z^{\prime}}="14";
(0,-12)*+{N^{\prime\prime\prime}}="22";
(12,-12)*+{N^{\prime\prime\prime}}="24";
{\ar^{x^{\prime}} "0";"2"};
{\ar^{y^{\prime}} "2";"4"};
{\ar@{=} "0";"10"};
{\ar^{l} "2";"12"};
{\ar^{z} "4";"14"};
{\ar_{} "10";"12"};
{\ar_{q} "12";"14"};
{\ar_{} "12";"22"};
{\ar^{} "14";"24"};
{\ar@{=} "22";"24"};
{\ar@{}|\circrclearrowright "0";"12"};
{\ar@{}|\circrclearrowright "2";"14"};
{\ar@{}|\circrclearrowright "12";"24"};
\endxy
\]
whose rows and columns are $\mathfrak{s}$-conflations.
Then we have
\[ q\circ g\circ f=q\circ l\circ r=z\circ y^{\prime}\circ r=z\circ r^{\prime}\circ f. \]
By {\rm (P3)},
this means that
\[
\xy
(-6,6)*+{B}="0";
(6,6)*+{Z}="2";
(-6,-6)*+{C}="4";
(6,-6)*+{Z^{\prime}}="6";
{\ar^{\overseterline{r}^{\prime}} "0";"2"};
{\ar_{\overseterline{g}} "0";"4"};
{\ar^{\overseterline{z}} "2";"6"};
{\ar_{\overseterline{q}} "4";"6"};
{\ar@{}|\circrclearrowright "0";"6"};
\endxy
\]
is commutative in $\overline{\mathscr{C}}$. Since $z\in\mathcal{L}$ and $q,r^{\prime}\in\mathcal{R}$ hold, we obtain $g\in\mathscr{S}_{\mathcal{N}}$
by Lemma~\ref{LemGeneralThick23_2} {\rm (2)}.
\end{proof}
\begin{prop}\label{PropPerc}
Let $\mathcal{N}\subseteq\mathscr{C}$ be a percolating thick subcategory which satisfies Condition~\ref{ConditionPerc}.
Then $\mathscr{S}_{\mathcal{N}}$ satisfies {\rm (M1)},\,$\ldots\,$,\,{\rm (M4)} and $\mathscr{S}_{\mathcal{N}}=p^{-1}(\overline{\mathscr{S}_{\mathcal{N}}})$.
Thus the localization $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ becomes extriangulated by Corollary~\ref{CorMultLoc}.
\end{prop}
\begin{proof}
By Lemmas~\ref{LemM3} and \ref{LemExPerc_pS}, it remains to show {\rm (M1),(M2),(M4)}.
{\rm (M1)} Let
\[
\xy
(-7,6)*+{A}="0";
(-7,-6)*+{B}="2";
(2,2)*+{}="3";
(7,-6)*+{C}="4";
{\ar_{f} "0";"2"};
{\ar_{g} "2";"4"};
{\ar^{h} "0";"4"};
{\ar@{}|\circrclearrowright "2";"3"};
\endxy
\]
be any commutative diagram in $\mathscr{C}$, with $h\in\mathscr{S}_{\mathcal{N}}$. As in the proof of Proposition~\ref{PropSatisfy}, by duality it is enough to show that $g\in\mathscr{S}_{\mathcal{N}}$ implies $f\in\mathscr{S}_{\mathcal{N}}$. Moreover, it suffices to treat the cases where $g\in\mathcal{L}$ or $g\in\mathcal{R}$.
By Lemma~\ref{LemPercLR}, we may write $h=l\circ r$ by some $l\in\mathcal{L}$ and $r\in\mathcal{R}$. If $g\in\mathcal{R}$, Lemma~\ref{LemGeneralThick23_2} {\rm (2)} shows $f\in\mathscr{S}_{\mathcal{N}}$.
Similarly if $g\in\mathcal{L}$, Lemma~\ref{LemPercPreMR1} {\rm (2)} shows $f\in\mathscr{S}_{\mathcal{N}}$.
{\rm (M2)} We firstly consider morphisms $X^{\prime}\overset{s}{\longleftarrow}X\overset{f}{\longrightarrow}Y$ in $\mathscr{C}$ and a factorization $s\colon X\overset{r}{\longrightarrow}X^{\prime\prime}\overset{l}{\longrightarrow}X^{\prime}$ with $l\in\mathcal{L}, r\in\mathcal{R}$.
We will complete the morphisms $X^{\prime}\overset{s}{\longleftarrow}X\overset{f}{\longrightarrow}Y$ to a commutative square.
Take an $\mathfrak{s}$-conflation $N\overset{g}{\longrightarrow}X\overset{r}{\longrightarrow}X^{\prime\prime}$.
By {\rm (P1')}, we may factorize
$f\circ g$ as $N\overset{f^{\prime}}{\longrightarrow}N^{\prime}\overset{g^{\prime}}{\longrightarrow}Y$ with an $\mathfrak{s}$-deflation $f^{\prime}$, an $\mathfrak{s}$-inflation $g^{\prime}$ and $N^{\prime}\in\mathcal{N}$. If we take an $\mathfrak{s}$-conflation $N^{\prime}\overset{g^{\prime}}{\longrightarrow}Y\overset{r^{\prime}}{\longrightarrow}Y^{\prime}$, we obtain the following commutative solid squares
\[
\xymatrix@C=24pt@R=12pt{
N\ar[rr]^{f^{\prime}}\ar[dr]_g&&N^{\prime}\ar[dr]^{g^{\prime}}&&\\
&X\ar@{}[ur]|\circlearrowright\ar[rr]^f\ar[rd]^r\ar[dd]_s&&Y\ar[rd]^{r^{\prime}}&\\
&&X^{\prime\prime}\ar@{}[l]|\circlearrowright\ar@{}[ur]|\circlearrowright\ar@{}[dr]|\circlearrowright\ar[rr]\ar[dl]^l&&Y^{\prime}\ar@{..>}[ld]^{l^{\prime}}\\
&X^{\prime}\ar@{..>}[rr]&&Y^{\prime\prime}&
}
\]
By Lemma~\ref{LemThickFirstProperties}, we have the dotted arrows with $l^{\prime}\in\mathcal{L}$ which make the whole diagram commutative.
Since $l^{\prime}\circ r^{\prime}\in\mathscr{S}_\mathcal{N}$, we have a desired commutative square.
Next, let $X^{\prime}\overset{s}{\longrightarrow}X\overset{f}{\longrightarrow}Y$ be a sequence with $s\in\mathscr{S}_{\mathcal{N}}$ and $f\circ s=0$.
By Lemma~\ref{LemPercLR}, there exist $r\in\mathcal{R}(X^{\prime},X^{\prime\prime})$ and $l\in\mathcal{L}(X^{\prime\prime},X)$ such that $s=l\circ r$. By {\rm (P3)}
we have $f\circ l=j\circ i$ for some $N\in\mathcal{N}$, $i\in\mathscr{C}(X^{\prime\prime},N)$ and $j\in\mathscr{C}(N,Y)$. By Lemma~\ref{LemPercdefinf} we may assume that $j$ is an $\mathfrak{s}$-inflation. By (C1') for $(\mathscr{C},\mathbb{E},\mathfrak{s})$, we obtain a commutative diagram with $N^{\prime}\in\mathcal{N}$
\[
\xymatrix{
X^{\prime\prime}\ar@{}[dr]|\circlearrowright\ar[r]^{l}\ar[d]_i&X\ar@{}[dr]|\circlearrowright\ar[r]\ar[d]^{f}&N^{\prime}\ar[d]\\
N\ar[r]_j&Y\ar[r]_{r^{\prime}}&Y^{\prime}
}
\]
in which the two rows are $\mathfrak{s}$-conflations.
Thus $r^{\prime}\circ f$ factors through $N^{\prime}\in\mathcal{N}$. By Lemma~\ref{LemPercdefinf}, there exist some $N^{\prime\prime}\in\mathcal{N}$, an $\mathfrak{s}$-deflation $i^{\prime}\in\mathscr{C}(X,N^{\prime\prime})$ and an $\mathfrak{s}$-conflation $N^{\prime\prime}\overset{j^{\prime}}{\longrightarrow}Y^{\prime}\overset{l^{\prime}}{\longrightarrow}Y^{\prime\prime}$ such that $r^{\prime}\circ f=j^{\prime}\circ i^{\prime}$. Then $l^{\prime}\circ r^{\prime}\circ f=0$ holds, with $l^{\prime}\circ r^{\prime}\in\mathscr{S}_{\mathcal{N}}$.
Other conditions can be checked dually.
{\rm (M4)} Let us show that $\mathcal{M}_{\mathsf{def}}\subseteq\mathcal{M}$ is closed under compositions. Since $\mathscr{S}_{\mathcal{N}}=\mathcal{L}\circ\mathcal{R}$ is closed under compositions, it suffices to show that $y\circ s\circ y^{\prime}\in\mathcal{M}_{\mathsf{def}}$ holds for any $\mathfrak{s}$-deflations $y,y^{\prime}$ and any $s\in\mathscr{S}_{\mathcal{N}}$. By $\mathscr{S}_{\mathcal{N}}=\mathcal{L}\circ\mathcal{R}$, there are an $\mathfrak{s}$-inflation $l$ and an $\mathfrak{s}$-deflation $r$ satisfying $s=l\circ r$. By Lemma~\ref{Lem9}, there exist an $\mathfrak{s}$-deflation $d$ and $l^{\prime}\in\mathcal{L}$ satisfying $y\circ l=l^{\prime}\circ d$. Thus we have $y\circ s\circ y^{\prime}=l^{\prime}\circ d\circ r\circ y^{\prime}$ as desired. Dually, $\mathcal{M}_{\mathsf{inf}}\subseteq\mathcal{M}$ is closed under compositions.
\end{proof}
\begin{cor}\label{CorLast}
Let $\mathcal{N}\subseteq\mathscr{C}$ be a percolating thick subcategory satisfying {\rm (P2)}.
Suppose that $\mathcal{N}$ also satisfies the following condition.
\begin{itemize}
\item $\operatorname{Ker}\big(\mathscr{C}(X,A)\overset{x\circ-}{\longrightarrow}\mathscr{C}(X,B)\big)\subseteq[\mathcal{N}](X,A)$ holds for any $X\in\mathscr{C}$ and any $\mathfrak{s}$-inflation $x\in\mathscr{C}(A,B)$.
Dually, $\operatorname{Ker}\big(\mathscr{C}(C,X)\overset{-\circ y}{\longrightarrow}\mathscr{C}(B,X)\big)\subseteq[\mathcal{N}](C,X)$ holds for any $X\in\mathscr{C}$ and any $\mathfrak{s}$-deflation $y\in\mathscr{C}(B,C)$.
\end{itemize}
Then the localization $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ obtained in Proposition~\ref{PropPerc} corresponds to an exact category.
If moreover any morphism in $\mathscr{C}$ is $\mathfrak{s}$-admissible, then $\widetilde{\mathscr{C}}$ is an abelian category.
\end{cor}
\begin{proof}
Obviously $\mathcal{N}$ satisfies {\rm (P3)}, hence Proposition~\ref{PropPerc} can be applied.
We remark that $\mathcal{N}_{\mathscr{S}_{\mathcal{N}}}=\mathcal{N}$ holds by Lemma~\ref{LemNSN}, hence the above condition is nothing but {\rm (i)} of Corollary~\ref{CorLocExact} with its dual. Also, condition {\rm (ii)} of Corollary~\ref{CorLocExact} and its dual are fulfilled by {\rm (P1)} and {\rm (P1')}. Thus the resulting localization $(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})$ corresponds to an exact category by Corollary~\ref{CorLocExact}. The last assertion follows from Remark~\ref{RemAdm}.
\end{proof}
Let us conclude this subsection with the following construction, which provides percolating subcategories satisfying Condition~\ref{ConditionPerc}. This is a generalization of Example~\ref{ExPerc} {\rm (3)}. Since the extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$ obtained below is in general neither triangulated nor exact, the resulting localization does not belong to any of Examples~\ref{ExVerdier}, \ref{ExSerreAbel}, \ref{ExTwo-sidedExact}.
A similar construction appears in \cite[Example~2.15]{GMT} for recollements of abelian categories.
\begin{prop}
Let $(T,\xi)\colon\mathscr{T}\to\mathscr{T}_\mathcal{N}$ be the Verdier quotient of a triangulated category $\mathscr{T}$ by a thick subcategory $\mathcal{N}\subseteq\mathscr{T}$. Here $\xi\colon T\circ [1]\overset{\cong}{\Longrightarrow}[1]\circ T$ denotes a natural isomorphism, as in Remark~\ref{RemExFun}.
Let $\mathscr{D}\subseteq\mathscr{T}_{\mathcal{N}}$ be an extension-closed subcategory closed under direct summands, which we naturally regard as an extriangulated category $(\mathscr{D},\mathbb{F},\mathfrak{t})$.
Let $\mathscr{C}\subseteq\mathscr{T}$ be the extension-closed subcategory given by $\mathscr{C}=T^{-1}(\mathscr{D})$, which we also regard as an extriangulated category $(\mathscr{C},\mathbb{E},\mathfrak{s})$.
Let $(F=T|_{\mathscr{C}},\phi)\colon(\mathscr{C},\mathbb{E},\mathfrak{s})\to(\mathscr{D},\mathbb{F},\mathfrak{t})$ be the exact functor induced by restricting $T$, for which $\phi_{C,A}\colon\mathbb{E}(C,A)\to\mathbb{F}(FC,FA)$ is given by $\phi_{C,A}(\delta)=\xi_A\circ T(\delta)$ for any $A,C\in\mathscr{C}$ and $\delta\in\mathbb{E}(C,A)$.
Then the following holds.
\begin{enumerate}
\item $\mathcal{N}\subseteq\mathscr{C}$ is a percolating subcategory of $(\mathscr{C},\mathbb{E},\mathfrak{s})$ which satisfies Condition~\ref{ConditionPerc} and $\mathscr{S}_{\mathcal{N}}=F^{-1}(\operatorname{Iso}(\mathscr{D}))$.
\item The exact functor $(\widetilde{F},\widetilde{\phi})\colon(\widetilde{\mathscr{C}},\widetilde{\mathbb{E}},\widetilde{\mathfrak{s}})\to(\mathscr{D},\mathbb{F},\mathfrak{t})$ obtained by Corollary~\ref{CorMultLoc} is an equivalence of extriangulated categories in the sense of {\rm (2)} in Proposition~\ref{PropExEq}.
\item If moreover $(\mathscr{D},\mathbb{F},\mathfrak{t})$ corresponds to an exact category, then $\mathcal{N}$ also satisfies the assumption of Corollary~\ref{CorLast}.
\end{enumerate}
\end{prop}
\begin{proof}
{\rm (1)} It is obvious that $\mathcal{N}$ is a thick subcategory in $(\mathscr{C},\mathbb{E},\mathfrak{s})$.
To show that $\mathcal{N}\subseteq\mathscr{C}$ is percolating, let $N\in\mathcal{N}$ be any object, and $x\in\mathscr{C}(N,A)$ be any morphism. If we complete $x$ into a distinguished triangle $N\overset{x}{\longrightarrow}A\overset{y}{\longrightarrow}B\longrightarrow N[1]$ in $\mathscr{T}$, then it gives an $\mathfrak{s}$-conflation $N\overset{x}{\longrightarrow}A\overset{y}{\longrightarrow}B$. This shows that $x$ is an $\mathfrak{s}$-inflation. Dually, any morphism $y\in\mathscr{C}(A,N)$ to $N\in\mathcal{N}$ is an $\mathfrak{s}$-deflation. Thus in particular $\mathcal{N}\subseteq\mathscr{C}$ is percolating.
We know that $\mathcal{N}$ satisfies {\rm (P2)} as seen in Remark~\ref{RemP2} {\rm (2)}. We can also easily check that $\mathscr{S}_{\mathcal{N}}=\mathcal{R}=\mathcal{L}$ holds in $\mathscr{C}$. Indeed, any morphism $f\in\mathscr{C}(A,B)$ belongs to $\mathscr{S}_{\mathcal{N}}$ if and only if it can be completed into a distinguished triangle $N\to A\overset{f}{\longrightarrow}B\to N[1]$ in $\mathscr{T}$ for some $N\in\mathcal{N}$, if and only if it satisfies $F(f)\in\operatorname{Iso}(\mathscr{D})$. Thus $\mathcal{N}$ also satisfies {\rm (P3$^+$)}.
{\rm (2)} Remark that Proposition~\ref{PropPerc} shows that Corollary~\ref{CorMultLoc} can be applied to $\mathscr{S}_{\mathcal{N}}$. By Proposition~\ref{PropExEq}, it suffices to show that $\widetilde{F}$ is an equivalence and $\widetilde{\phi}$ is a natural isomorphism.
Since $T$ is essentially surjective, obviously so is $\widetilde{F}$. By construction, if $s\in\mathscr{T}(X,Y)$ is a morphism with $T(s)\in\mathrm{op}eratorname{Iso}(\mathscr{T}_{\mathcal{N}})$, then $X\in\mathscr{C}$ holds if and only if $Y\in\mathscr{C}$. This property is enough to conclude that $\widetilde{F}\colon\widetilde{\mathscr{C}}\to\mathscr{D}$ is fully faithful.
Let us show that $\widetilde{\phi}_{C,A}\colon\widetilde{\mathbb{E}}(C,A)\to\mathbb{F}(FC,FA)$ is an isomorphism for any $A,C\in\mathscr{C}$. By definition, we have $\mathbb{F}(FC,FA)=\mathscr{T}_{\mathcal{N}}(FC,(FA)[1])$. Any element $\alpha\in\mathscr{T}_{\mathcal{N}}(FC,(FA)[1])$ can be expressed as $\alpha=\xi_A\circ T(\sigma)\circ F(t)^{-1}$ for some $C^{\prime}\in\mathscr{C}$, some $t\in\mathscr{C}(C^{\prime},C)$ with $F(t)\in\operatorname{Iso}(\mathscr{D})$, and some $\sigma\in\mathscr{T}(C^{\prime},A[1])=\mathbb{E}(C^{\prime},A)$. It is straightforward to show that the map
\[ \lambda\colon \mathbb{F}(FC,FA)\to\widetilde{\mathbb{E}}(C,A)\ ;\ \alpha\mapsto [\,\overline{t},\overline{\sigma},\mathrm{id}\,] \]
is well-defined and gives the inverse of $\widetilde{\phi}_{C,A}$.
{\rm (3)} Assume that $(\mathscr{D},\mathbb{F},\mathfrak{t})$ corresponds to an exact category.
Suppose that $x\circ f=0$ holds for an $\mathfrak{s}$-inflation $x\colon A\to B$ and a morphism $f\colon X\to A$ in $\mathscr{C}$. Since $F(x)$ is a monomorphism in $\mathscr{D}$ we get $F(f)=0$, which implies that $f$ factors through an object in $\mathcal{N}$. Dually for $\mathfrak{s}$-deflations.
\end{proof}
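To illustrate the above construction in a familiar special case (this example is only meant for orientation and is not used in the sequel): one may take for $\mathscr{D}\subseteq\mathscr{T}_{\mathcal{N}}$ the heart of a $t$-structure on $\mathscr{T}_{\mathcal{N}}$. The heart is extension-closed and closed under direct summands, and since its extension groups are computed by morphisms into the shift, $\operatorname{Ext}^1_{\mathscr{D}}(C,A)\cong\mathscr{T}_{\mathcal{N}}(C,A[1])$, the extriangulated category $(\mathscr{D},\mathbb{F},\mathfrak{t})$ corresponds to the abelian, in particular exact, structure of the heart. Hence {\rm (2)} identifies the localization of $\mathscr{C}=T^{-1}(\mathscr{D})$ with this heart, and by {\rm (3)} the subcategory $\mathcal{N}$ moreover satisfies the assumption of Corollary~\ref{CorLast}.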
\begin{thebibliography}{BBGH}
\bibitem[BBGH]{BBGH} Baillargeon, R-L.; Br\"{u}stle, T.; Gorsky, M.; Hassoun, S.: \emph{On the lattice of weakly exact structures}. arXiv:2009.10024v2.
\bibitem[B-TS]{B-TS} Bennett-Tennenhaus, R.; Shah, A.: \emph{Transport of structure in higher homological algebra}. J. Algebra \textbf{574} (2021), 514--549.
\bibitem[B]{B} B\"uhler, T.: \emph{Exact Categories}, Expo. Math. \textbf{28} (2010) 1--69.
\bibitem[C-E]{C-E} C\'{a}rdenas-Escudero, M.E.: \emph{Localization for exact categories}. Thesis (Ph.D.)--State University of New York at Binghamton. 1998.
\bibitem[Ga]{Ga}
Gabriel, P.: \emph{Des cat\'egories ab\'eliennes}.
Bull. Soc. Math. France \textbf{90} (1962), 323--448.
\bibitem[GZ]{GZ}
Gabriel, P.; Zisman, M.: \emph{Calculus of fractions and homotopy theory}.
Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 35 Springer-Verlag New York, Inc., New York 1967 {\rm x}+168 pp.
\bibitem[Gi]{Gi} Gillespie, J.: \emph{Model structures on exact categories}. J. Pure Appl. Algebra \textbf{215} (2011) no. 12, 2892--2902.
\bibitem[GMT]{GMT} Gu,~W.; Ma,~X.; Tan,~L.: \emph{Homological dimensions of extriangulated categories and recollements}. arXiv:2104.06042.
\bibitem[HKR]{HKR} Henrard, R.; Kvamme, S.; van Roosmalen, A-C.: \emph{Auslander's formula and correspondence for exact categories}. arXiv:2011.15107.
\bibitem[HR]{HR} Henrard, R.; van Roosmalen, A-C.: \emph{Localizations of (one-sided) exact categories}. arXiv:1903.10861.
\bibitem[HLN]{HLN} Herschend, M.; Liu, Y.; Nakaoka, H.: \emph{$n$-exangulated categories (I): Definitions and fundamental properties}. J. Algebra \textbf{570} (2021), 531--586.
\bibitem[Ho1]{Ho1} Hovey, M.: \emph{Cotorsion pairs, model category structures, and representation theory}, Math. Z. \textbf{241} (2002) no. 3, 553--592.
\bibitem[Ho2]{Ho2} Hovey, M.: \emph{Cotorsion pairs and model categories. Interactions between homotopy theory and algebra}, 277--296, Contemp. Math., \textbf{436}, Amer. Math. Soc., Providence, RI, 2007.
\bibitem[KS]{KS} Kashiwara, M.; Schapira, P.: \emph{Categories and sheaves}, Springer-Verlag, Berlin, 2006. x+497 pp. ISBN: 978-3-540-27949-5; 3-540-27949-0.
\bibitem[NP]{NP} Nakaoka, H.; Palu, Y.: \emph{Extriangulated categories, Hovey twin cotorsion pairs and model structures}. Cah. Topol. G\'{e}om. Diff\'{e}r. Cat\'{e}g. \textbf{60} (2019), no. 2, 117--193.
\bibitem[R]{R} Rump, W.: \emph{The acyclic closure of an exact category and its triangulation}. J. Algebra \textbf{565} (2021), 402--440.
\bibitem[V]{V} Verdier, J.-L.: \emph{Des cat\'egories d\'eriv\'ees des cat\'egories ab\'eliennes}, Ast\'erisque, \textbf{239}, Soci\'et\'e Math\'ematique de France, (1996) [1967].
\bibitem[WWZ]{WWZ} Wang,~L.; Wei,~J.; Zhang,~H.: \emph{Recollements of extriangulated categories}. arXiv:2012.03258.
\bibitem[Y]{Y} Yang, X.: \emph{Model structures on triangulated categories}. Glasg. Math. J. \textbf{57} (2015) no. 2, 263--284.
\bibitem[ZZ]{ZZ} Zhou, P.; Zhu, B.: \emph{Triangulated quotient categories revisited}. J. Algebra \textbf{502} (2018), 196--232.
\end{thebibliography}
\end{document} |
\begin{document}
\title{A Nivat Theorem for Weighted Timed Automata and Weighted Relative Distance Logic\thanks{The final version appeared in the Proceedings of the 41st International Colloquium on Automata, Languages, and Programming (ICALP 2014) and is available at link.springer.com; DOI: 10.1007/978-3-662-43951-7\_15}}
\author{Manfred Droste and Vitaly Perevoshchikov\thanks{Supported by DFG Graduiertenkolleg 1763 (QuantLA)}}
\institute{Universit\"at Leipzig, Institut f\"ur Informatik, \\
04109 Leipzig, Germany\\
\email{\{droste,perev\}@informatik.uni-leipzig.de}
}
\maketitle
\begin{abstract}
Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. The proof of this result can be derived from our Nivat theorem and Wilke's theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic.
\begin{keywords}
Weighted timed automata, linearly priced timed automata, average behavior, discounting, Nivat's theorem, quantitative logic.
\end{keywords}
\end{abstract}
\section{Introduction}
Timed automata introduced by Alur and Dill \cite{AD94} are a prominent model for real-time systems.
Timed automata form finite representations of infinite-state automata for which various fundamental results from the theory of finite-state automata can be transferred to the timed setting.
Although time has a quantitative nature, the questions asked in the theory of timed automata are of a qualitative kind. On the other hand, quantitative aspects of systems, e.g., costs, probabilities and energy consumption, can be modelled using weighted automata, i.e., classical nondeterministic automata with a transition weight function. The behaviors of weighted automata can be considered as quantitative languages (also known as formal power series) where every word carries a value. Semiring-weighted automata have been extensively studied in the literature (cf. \cite{BR88,Eil74,KS86} and the handbook of weighted automata \cite{DKV09}).
Weighted extensions of timed automata are of much interest for the real-time community, since weighted timed automata (WTA) can model continuous time-dependent consumption of resources. In the literature, various models of WTA were considered, e.g., linearly priced timed automata \cite{ATP01,BFHL01, LBBF01},
multi-weighted timed automata with knapsack-problem objective \cite{LR05}, and WTA with measures like average, reward-cost ratio \cite{BBL04, BBL08} and discounting \cite{AT11, FL09, FL092}. In \cite{Qua10, Qua11}, WTA over semi\-rings were studied with respect to the classical automata-theoretic questions. However, various models, e.g., WTA with average and discounting measures as well as multi-weighted automata, cannot be defined using semirings. For the latter situations, only certain algorithmic problems have been handled, and many questions as to whether the results known from the theories of timed and weighted automata also hold for WTA remain open. Moreover, there is no unified framework for WTA.
The main goal of this paper is to build a bridge between the theories of WTA and timed automata. First, we develop a general model of {\em timed valuation monoids} for WTA. Recall that Nivat's theorem \cite{Ni68} is one of the fundamental characterizations of rational transductions and establishes a connection between rational transductions and rational languages. Our first main result is an extension of Nivat's theorem to WTA over timed valuation monoids. By Nivat's theorem for semiring-weighted automata described recently in \cite{DK}, recognizable quantitative languages are exactly those which can be constructed from recognizable languages using operations like morphisms and intersections. The proof of this result requires the fact that finite automata are determinizable. However, timed automata do not enjoy this property. Nevertheless, for idempotent timed valuation monoids which model all mentioned examples of WTA, we do not need determinization. In this case, our Nivat theorem for WTA is similar to the one for weighted automata. In the non-idempotent case, we give an example showing that this statement does not hold true. But in this case we can establish a connection between recognizable quantitative timed languages and sequentially, deterministically or unambiguously recognizable timed languages.
As an application of our Nivat theorem, we provide a characterization of recognizable quantitative timed languages by means of quantitative logics. The classical B\"uchi-Elgot theorem \cite{Buc60} was extended to both weighted \cite{DG07, DG09, DM12} and timed settings \cite{Wil94,Wil942}. In \cite{Qua10, Qua11}, a semiring-weighted extension of Wilke's relative distance logic \cite{Wil94,Wil942} was considered. Here, we develop a different weighted version of relative distance logic based on our notion of timed valuation monoids. In our second main result, we show that this logic and WTA have the same expressive power. For the proof of this result, we use a new proof technique and our Nivat theorem to derive our result from the corresponding result for unweighted logic \cite{Wil94, Wil942}. Since the proof of our Nivat theorem is constructive, the translation process from weighted relative distance logic to WTA and vice versa is constructive. This leads to decidability results for weighted relative distance logic. In particular, based on the results of \cite{ATP01,BFHL01, LBBF01}, we show the decidability of several weighted extensions of the satisfiability problem for our logic.
\section{Timed Automata}
An {\em alphabet} is a non-empty finite set.
Let $\Sigma$ be a non-empty set. A {\em finite word} over $\Sigma$ is a finite sequence $w = a_1 ... a_n$ where $n \ge 0$ and $a_1, ..., a_n \in \Sigma$. If $n \ge 1$, then we say that $w$ is {\em non-empty}. Let $\Sigma^+$ denote the set of all non-empty words over $\Sigma$.
Let $\mathbb R_{\ge 0}$ denote the set of all non-negative real numbers. A {\em finite timed word} over $\Sigma$ is a finite word over $\Sigma \times \mathbb R_{\ge 0}$, i.e., a finite sequence $w = (a_1, t_1) ... (a_n, t_n)$ where $n \ge 0$, $a_1, ..., a_n \in \Sigma$ and $t_1, ..., t_n \in \mathbb R_{\ge 0}$. Let $|w| = n$ and $\langle w \rangle = t_1 + ... + t_n$ and let $\mathbb T \Sigma^+ = (\Sigma \times \mathbb R_{\ge 0})^+$,
the set of all non-empty finite timed words. Any set $\mathcal L \subseteq \mathbb T \Sigma^+$ of timed words is called a {\em timed language}.
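For illustration (the letters and delays below are chosen arbitrarily): over $\Sigma = \{a, b\}$, the timed word $w = (a, 1.2)(b, 0)(a, 2.3) \in \mathbb T \Sigma^+$ satisfies $|w| = 3$ and $\langle w \rangle = 1.2 + 0 + 2.3 = 3.5$.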
Let $C$ be a finite set of {\em clock variables} ranging over $\mathbb R_{\ge 0}$. A {\em clock constraint} over $C$ is either $\text{\sc True}$ or (if $C$ is non-empty) a conjunction of formulas of the form $x \bowtie c$ where $x \in C$, $c \in \mathbb N$ and ${\bowtie} \in \{<, \le, =, \ge, >\}$. Let $\Phi(C)$ denote the set of all clock constraints over $C$. A {\em clock valuation} over $C$ is a mapping $\nu: C \to \mathbb R_{\ge 0}$ which assigns a value to each clock variable. Let $\mathbb R_{\ge 0}^C$ be the set of all clock valuations over $C$. The {\em satisfaction relation} ${\models} \subseteq \mathbb R_{\ge 0}^C \times \Phi(C)$ is defined as usual. Now let $\nu \in \mathbb R_{\ge 0}^C$, $t \in \mathbb R_{\ge 0}$ and $\Lambda \subseteq C$. Let $\nu + t$ denote the clock valuation $\nu' \in \mathbb R_{\ge 0}^C$ such that $\nu'(x) = \nu(x) + t$ for all $x \in C$. Let $\nu[\Lambda := 0]$ denote the clock valuation $\nu' \in \mathbb R_{\ge 0}^C$ such that $\nu'(x) = 0$ for all $x \in \Lambda$ and $\nu'(x) = \nu(x)$ for all $x \notin \Lambda$.
\begin{definition}
Let $\Sigma$ be an alphabet. A {\em timed automaton} over $\Sigma$ is a tuple ${\mathcal A = (L, C, I, E, F)}$ such that $L$ is a finite set of {\em locations}, $C$ is a finite set of {\em clocks}, $I, F \subseteq L$ are sets of {\em initial} resp. {\em final} locations and ${E \subseteq L \times \Sigma \times \Phi(C) \times 2^C \times L}$ is a finite set of {\em edges}.
\end{definition}
For an edge $e = (\ell, a, \phi, \Lambda, \ell')$, let $\Label(e) = a$ be the {\em label} of $e$. A {\em run} of $\mathcal A$ is a finite sequence
\begin{equation}
\label{Eq:DefRun}
\rho = (\ell_0, \nu_0) \xrightarrow{t_1} \xrightarrow{e_1} (\ell_1, \nu_1) \xrightarrow{t_2} \xrightarrow{e_2} ... \xrightarrow{t_n} \xrightarrow{e_n} (\ell_n, \nu_n)
\end{equation}
where $n \ge 1$, $\ell_0, \ell_1, ..., \ell_n \in L$, $\nu_0, \nu_1, ..., \nu_n \in \mathbb R_{\ge 0}^C$, $t_1, ..., t_n \in \mathbb R_{\ge 0}$ and ${e_1, ..., e_n \in E}$ satisfy the following conditions: $\ell_0 \in I$, $\nu_0(x) = 0$ for all $x \in C$, $\ell_n \in F$ and, for all $1 \le i \le n$, $e_i = (\ell_{i-1}, a_i, \phi_i, \Lambda_i, \ell_i)$ for some $a_i \in \Sigma$, $\phi_i \in \Phi(C)$ and $\Lambda_i \subseteq C$ such that $\nu_{i-1} + t_i \models \phi_i$ and $\nu_i = (\nu_{i-1} + t_i)[\Lambda_i := 0]$. The {\em label} of $\rho$ is the timed word $\Label(\rho) = (\Label(e_1), t_1) ... (\Label(e_n), t_n) \in \mathbb T \Sigma^+$. For any timed word $w \in \mathbb T \Sigma^+$, let $\Run_{\mathcal A}(w)$ denote the set of all runs $\rho$ of $\mathcal A$ such that $\Label(\rho) = w$. Let $\mathcal L(\mathcal A) = \{w \in \mathbb T \Sigma^+ \; | \; \Run_{\mathcal A}(w) \neq \emptyset\}$. We say that an arbitrary timed language $\mathcal L \subseteq \mathbb T \Sigma^+$ is {\em recognizable} if there exists a timed automaton $\mathcal A$ over $\Sigma$ such that $\mathcal L(\mathcal A) = \mathcal L$. Let ${\mathcal A = (L, C, I, E, F)}$ be a timed automaton over $\Sigma$. We say that $\mathcal A$ is {\em unambiguous} if $|\Run_{\mathcal A}(w)| \le 1$ for all $w \in \mathbb T \Sigma^+$. We call $\mathcal A$ {\em deterministic} if $|I| = 1$ and, for all $e_1 = (\ell, a, \phi_1, \Lambda_1, \ell_1) \in E$ and $e_2 = (\ell, a, \phi_2, \Lambda_2, \ell_2) \in E$ with $e_1 \neq e_2$, there exists no clock valuation $\nu \in \mathbb R_{\ge 0}^C$ with $\nu \models \phi_1 \wedge \phi_2$.
We call $\mathcal A$ {\em sequential} if $|I| = 1$ and, for all $e_1 = (\ell, a, \phi_1, \Lambda_1, \ell_1) \in E$ and $e_2 = (\ell, a, \phi_2, \Lambda_2, \ell_2) \in E$, we have $e_1 = e_2$; this property can be viewed as a strong form of determinism. Based on these notions, we can define {\em sequentially recognizable}, {\em deterministically recognizable} and {\em unambiguously recognizable} timed languages.
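The following minimal example (with arbitrarily chosen data) illustrates these notions. Consider the timed automaton $\mathcal A = (L, C, I, E, F)$ over $\Sigma = \{a\}$ with $L = \{\ell_0, \ell_1\}$, $C = \{x\}$, $I = \{\ell_0\}$, $F = \{\ell_1\}$ and $E = \{(\ell_0, a, x \ge 1, \emptyset, \ell_1)\}$. A run on a timed word $(a, t_1)$ exists iff $\nu_0 + t_1 \models x \ge 1$, i.e., iff $t_1 \ge 1$, and no longer timed word admits a run since $\ell_1$ has no outgoing edges; hence $\mathcal L(\mathcal A) = \{(a, t) \; | \; t \ge 1\}$. This automaton is sequential; it is also deterministic and unambiguous.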
\section{Weighted Timed Automata}
In this section, we introduce a general model of weighted timed automata (WTA) over {\em timed valuation monoids}. We will show that our new model covers a variety of situations known from the literature:
linearly priced timed automata \cite{ATP01, BFHL01,LBBF01} and WTA with the measures like average \cite{BBL04, BBL08} and discounting \cite{AT11,FL09,FL092}.
A {\em timed valuation monoid} is a tuple $\mathbb M = (M, +, \val, \mathbb 0)$ where $(M, +, \mathbb 0)$ is a commutative monoid and $\val: \mathbb T (M \times M)^+ \to M$ is a {\em timed valuation function}.
We will say that $M$ is the {\em domain} of $\mathbb M$.
We say that $\mathbb M$ is {\em idempotent} if $+$ is idempotent, i.e., $m + m = m$ for all $m \in M$.
Let $\Sigma$ be an alphabet and $\mathbb M = (M, +, \val, \mathbb 0)$ a timed valuation monoid. A {\em weighted timed automaton} (WTA) over $\Sigma$ and $\mathbb M$ is a tuple $\mathcal A = (L, C, I, E, F, \wt)$ where $(L, C, I, E, F)$ is a timed automaton over $\Sigma$ and $\wt: L \cup E \to M$ is a {\em weight function}. Let $\rho$ be a run of $\mathcal A$ of the form (\ref{Eq:DefRun}). Let $\wt^{\sharp}(\rho) \in \mathbb T (M \times M)^+$ be the timed word $(u_1, t_1) ... (u_n, t_n)$ where, for all $1 \le i \le n$, $u_i = (\wt(\ell_{i-1}), \wt(e_i))$. Then, the {\em weight} of $\rho$ is defined as $\wt_{\mathcal A}(\rho) = \val(\wt^{\sharp}(\rho)) \in M$. The {\em behavior} of $\mathcal A$ is the mapping $||\mathcal A||: \mathbb T \Sigma^+ \to M$ defined by
$
{||\mathcal A||(w) = \sum (\wt_{\mathcal A}(\rho) \; | \; \rho \in \Run_{\mathcal A}(w))}
$
for all $w \in \mathbb T \Sigma^+$. A {\em quantitative timed language} (QTL) over $\mathbb M$ is a mapping ${\mathbb L: \mathbb T \Sigma^+ \to M}$. We say that $\mathbb L$ is {\em recognizable} if there exists a WTA $\mathcal A$ over $\Sigma$ and $\mathbb M$ such that $\mathbb L = ||\mathcal A||$.
\begin{example}
\label{Ex:TVM}
All of the subsequent WTA model the property that staying in a location invokes costs depending on the length of the stay; the subsequent transition also invokes costs but happens instantaneously.
We assume that, for all $x \in \mathbb R \cup \{\infty\}$, ${x \cdot \infty = \infty \cdot x = \infty}$ and $x + \infty = \infty + x = \infty$.
\begin{itemize}
\item [(a)] {\em Linearly priced timed automata} were considered in \cite{ATP01, BFHL01, LBBF01}. We can describe this model by the timed valuation monoid \linebreak ${\mathbb M^{\text{sum}} = (\mathbb R \cup \{\infty\}, \min, \val^{\text{sum}}, \infty)}$ where $\val^{\text{sum}}$ is defined by
$\val^{\text{sum}}(v) = \sum_{i = 1}^n (m_i \cdot t_i + m'_i)$ for all
$v = ((m_1, m'_1), t_1) ... ((m_n, m'_n), t_n) \in \mathbb T (M \times M)^+$.
\item [(b)] The situation of the average behavior for WTA considered in \linebreak \cite{BBL04, BBL08} can be described by means of the timed valuation monoid \linebreak ${\mathbb M^{\text{avg}} = (\mathbb R \cup \{\infty\}, \min, \val^{\text{avg}}, \infty)}$ where $\val^{\text{avg}}$ is defined as follows. Let ${v = ((m_1, m'_1), t_1) ... ((m_n, m'_n), t_n) \in \mathbb T (M \times M)^+}$. If $\langle v \rangle > 0$, then we let
$
\val^{\text{avg}}(v) = \frac{\sum_{i = 1}^n (m_i \cdot t_i + m_i')}{\sum_{i = 1}^n t_i}.
$
If $\langle v \rangle = 0$, $m_1 = ... = m_n \in \mathbb R$ and $m_1' = ... = m_n' = 0$, then we put $\val^{\text{avg}}(v) = m_1$. Otherwise, we put $\val^{\text{avg}}(v) = \infty$.
\item [(c)] The model of WTA with the discounting measure was investigated in \cite{AT11, FL09, FL092}. These WTA can be considered as WTA over the timed valuation monoid ${\mathbb M^{\text{disc}_{\lambda}} = (\mathbb R \cup \{\infty\}, \min, \val^{\text{disc}_{\lambda}}, \infty)}$ where $0 < \lambda < 1$ is a {\em discounting factor} and $\val^{\text{disc}_{\lambda}}$ is defined for all $v = ((m_1, m_1'), t_1) ... ((m_n, m_n'), t_n) \in \mathbb T (M \times M)^+$ by
$
\val^{\text{disc}_\lambda}(v) = \sum_{i = 1}^n \lambda^{t_1 + ... + t_{i-1}} \cdot \big(\int_{0}^{t_i} m_i \cdot \lambda^{\tau} d \tau + \lambda^{t_i} \cdot m_i'\big).
$
\end{itemize}
Note that the timed valuation monoids $\mathbb M^{\text{sum}}$, $\mathbb M^{\text{avg}}$ and $\mathbb M^{\text{disc}_{\lambda}}$ are idempotent.
\end{example}
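As a small worked illustration of (a) and (b) of Example \ref{Ex:TVM} (with arbitrarily chosen numbers): for $v = ((1, 2), 3)((0, 1), 1) \in \mathbb T (M \times M)^+$ we obtain $\val^{\text{sum}}(v) = (1 \cdot 3 + 2) + (0 \cdot 1 + 1) = 6$ and, since $\langle v \rangle = 4 > 0$, $\val^{\text{avg}}(v) = 6/4 = 3/2$.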
\section{Closure Properties}
\label{Sect:Closure}
In this section, we consider several closure properties of recognizable quantitative timed languages which we will use for the proof of our Nivat theorem and which could be of independent interest. For lack of space, we will omit the proofs.
Let $\Sigma$ be a set, $\Gamma$ an alphabet and $h: \Gamma \to \Sigma$ a mapping. For a timed word ${v = (\gamma_1, t_1) ... (\gamma_n, t_n) \in \mathbb T \Gamma^+}$, we let $h(v) = (h(\gamma_1), t_1) ... (h(\gamma_n), t_n) \in \mathbb T \Sigma^+$. Then, for a QTL $r: \mathbb T \Gamma^+ \to M$ over $\mathbb M$, we define the QTL $h(r): \mathbb T \Sigma^+ \to M$ over $\mathbb M$ by
$
h(r)(w) = \sum (r(v) \; | \; v \in \mathbb T \Gamma^+ \text{ and } h(v) = w)
$
for all $w \in \mathbb T \Sigma^+$. Observe that for any $w \in \mathbb T \Sigma^+$ there are only finitely many $v \in \mathbb T \Gamma^+$ with $h(v) = w$, hence the sum exists in $(M, +)$.
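For instance (an illustrative choice of data): if $\Gamma = \{a_1, a_2\}$, $\Sigma = \{a\}$ and $h(a_1) = h(a_2) = a$, then every $w \in \mathbb T \Sigma^+$ with $|w| = n$ has exactly $2^n$ preimages $v \in \mathbb T \Gamma^+$ under $h$, and $h(r)(w)$ is the sum of these $2^n$ values $r(v)$.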
\begin{lemma}
\label{Lemma:Homo}
Let $\Sigma, \Gamma$ be alphabets, $\mathbb M = (M, +, \val, \mathbb 0)$ a timed valuation monoid and $h: \Gamma \to \Sigma$ a mapping. If $r: \mathbb T \Gamma^+ \to M$ is a recognizable QTL over $\mathbb M$, then the QTL $h(r)$ is also recognizable.
\end{lemma}
For the proof of this lemma, we use a similar construction as in \cite{DV10}, Lemma 1.
Let $g: \Sigma \to M \times M$ be a mapping. We denote by ${\val \circ g: \mathbb T \Sigma^+ \to M}$ the QTL over $\mathbb M$ defined for all $w \in \mathbb T \Sigma^+$ by ${(\val \circ g)(w) = \val(g(w))}$. We say that a timed valuation monoid $\mathbb M = (M, +, \val, \mathbb 0)$ is {\em location-independent} if, for any $v = ((m_1, m_1'), t_1) ... ((m_n, m_n'), t_n) \in \mathbb T (M \times M)^+$ and ${v' = ((k_1, k_1'), t_1) ... ((k_n, k_n'), t_n) \in \mathbb T (M \times M)^+}$ with $m_i' = k_i'$ for all $1 \le i \le n$, we have $\val(v) = \val(v')$.
\begin{lemma}
\label{Lemma:Comp}
Let $\Sigma$ be an alphabet, $\mathbb M = (M, +, \val, \mathbb 0)$ a timed valuation monoid and $g: \Sigma \to M \times M$ a mapping. Then, $\val \circ g$ is unambiguously recognizable. If $\mathbb M$ is location-independent, then $\val \circ g$ is sequentially recognizable.
\end{lemma}
However, in general, $\val \circ g$ is not deterministically recognizable (and hence not sequentially recognizable). Let $\Sigma = \{a, b\}$ and $\mathbb M = \mathbb M^{\text{sum}}$ as in Example \ref{Ex:TVM} (a). Let $g(a) = (1, 0)$ and $g(b) = (2, 0)$. Then, one can show that $\val \circ g$ is not deterministically recognizable.
Let $\mathcal L \subseteq \mathbb T \Sigma^+$ be a timed language and $r: \mathbb T \Sigma^+ \to M$ a QTL over $\mathbb M$. The {\em intersection} $(r \cap \mathcal L): \mathbb T \Sigma^+ \to M$ is the QTL over $\mathbb M$ defined by $(r \cap \mathcal L)(w) = r(w)$ if $w \in \mathcal L$ and $(r \cap \mathcal L)(w) = \mathbb 0$ if $w \in \mathbb T \Sigma^+ \setminus \mathcal L$.
\begin{example}
\label{Ex:Bad}
As opposed to weighted untimed automata, recognizable quantitative timed languages are not closed under the intersection with recognizable timed languages. Let $\Sigma$ be a singleton alphabet and $\mathcal L$ a recognizable timed language over $\Sigma$ which is not unambiguously recognizable. Wilke \cite{Wil94} showed that such a language exists. Consider the non-idempotent and location-independent timed valuation monoid $\mathbb M = (\mathbb N, +, \val, 0)$ where $+$ is the usual addition of natural numbers and $\val(v) = m_1' \cdot ... \cdot m_n'$ for all $v = ((m_1, m_1'), t_1) ... ((m_n, m_n'), t_n) \in \mathbb T (\mathbb N \times \mathbb N)^+$. Let the QTL ${r: \mathbb T \Sigma^+ \to \mathbb N}$ over $\mathbb M$ be defined by $r(w) = 1$ for all $w \in \mathbb T \Sigma^+$. Then, $r$ is recognizable but $r \cap \mathcal L$ is not recognizable.
\end{example}
Nevertheless, the intersection enjoys the following closure properties.
\begin{lemma}
\label{Lemma:Inter}
Let $\Sigma$ be an alphabet, $\mathbb M = (M, +, \val, \mathbb 0)$ a timed valuation monoid, $\mathcal L \subseteq \mathbb T \Sigma^+$ a recognizable timed language and $r: \mathbb T \Sigma^+ \to M$ a recognizable QTL over $\mathbb M$. If $\mathbb M$ is idempotent, then $r \cap \mathcal L$ is recognizable. If $\mathcal L$ is unambiguously recognizable, then $r \cap \mathcal L$ is recognizable. If $\mathcal L, r$ are unambiguously (deterministically, sequentially) recognizable, then $r \cap \mathcal L $ is also unambiguously (deterministically, sequentially) recognizable.
\end{lemma}
For the proof, we use a kind of product construction for timed automata.
\section{A Nivat Theorem for Weighted Timed Automata}
Nivat's theorem \cite{Ni68} (see also \cite{Be69}, Theorem 4.1) is one of the fundamental characterizations of rational transductions and establishes a connection between rational transductions and rational languages. A version for semiring-weighted automata was given in \cite{DK}; this shows a connection between recognizable quantitative and qualitative languages. In this chapter, we prove a Nivat-like theorem for recognizable quantitative timed languages.
Let $\Sigma$ be an alphabet and $\mathbb M = (M, +, \val, \mathbb 0)$ a timed valuation monoid. Let $\text{\sc Rec}(\Sigma, \mathbb M)$ denote the collection of all QTL recognizable by a WTA over $\Sigma$ and $\mathbb M$. Let $\mathcal N(\Sigma, \mathbb M)$ (with $\mathcal N$ standing for Nivat) denote the set of all QTL ${\mathbb L: \mathbb T \Sigma^+ \to M}$ over $\mathbb M$ such that there exist an alphabet $\Gamma$, mappings ${h: \Gamma \to \Sigma}$ and ${g: \Gamma \to M \times M}$ and a recognizable timed language $\mathcal L \subseteq \mathbb T \Gamma^+$ such that $\mathbb L = h((\val \circ g) \cap \mathcal L)$. Let the collection $\mathcal N^{\text{\sc Seq}}(\Sigma, \mathbb M)$ be defined like $\mathcal N(\Sigma, \mathbb M)$ with the only difference that $\mathcal L$ is sequentially recognizable. The collections $\mathcal N^{\text{\sc Unamb}}(\Sigma, \mathbb M)$ and $\mathcal N^{\text{\sc Det}}(\Sigma, \mathbb M)$ are defined similarly using unambiguously resp. deterministically recognizable timed languages.
Our Nivat theorem for weighted timed automata is the following.
\begin{theorem}
\label{Theorem:Nivat}
Let $\Sigma$ be an alphabet and $\mathbb M$ a timed valuation monoid. Then,
$\text{\sc Rec}(\Sigma, \mathbb M) = \mathcal N^{\text{\sc Seq}}(\Sigma, \mathbb M) = \mathcal N^{\text{\sc Det}}(\Sigma, \mathbb M) = \mathcal N^{\text{\sc Unamb}}(\Sigma, \mathbb M) \subseteq \mathcal N(\Sigma, \mathbb M)$. \linebreak
If $\mathbb M$ is idempotent, then $\text{\sc Rec}(\Sigma, \mathbb M) = \mathcal N(\Sigma, \mathbb M)$.
\end{theorem}
As opposed to the result of \cite{DK} for weighted untimed automata, the equality $\text{\sc Rec}(\Sigma, \mathbb M) = \mathcal N(\Sigma, \mathbb M)$ does not always hold: let $\Sigma$, $\mathbb M$, $\mathcal L$ and $r$ be defined as in Example \ref{Ex:Bad}. Then, one can show that $r \cap \mathcal L \in \mathcal N(\Sigma, \mathbb M) \setminus \text{\sc Rec}(\Sigma, \mathbb M)$.
The proof of Theorem \ref{Theorem:Nivat} is based on the closure properties of WTA (cf. Sect. \ref{Sect:Closure}) and the following lemma.
\begin{lemma}
\label{Lemma:Transitions}
Let $\Sigma$ be an alphabet and $\mathbb M$ a timed valuation monoid. Then,
$\text{\sc Rec}(\Sigma, \mathbb M) \subseteq \mathcal N^{\text{\sc Seq}}(\Sigma, \mathbb M)$.
\end{lemma}
\begin{proof}[Sketch]
Let $\mathcal A = (L, C, I, E, F, \wt)$ be a WTA over $\Sigma$ and $\mathbb M$. Let $\Gamma = E$. We define the mappings $h: \Gamma \to \Sigma$ and ${g: \Gamma \to M \times M}$ for all $\gamma = (\ell, a, \phi, \Lambda, \ell') \in \Gamma$ by $h(\gamma) = a$ and $g(\gamma) = (\wt(\ell), \wt(\gamma))$. Let $\mathcal L$ be the set of all timed words $w = (\gamma_1, \tau_1) ... (\gamma_n, \tau_n)$ such that there exists a run $\rho$ of $\mathcal A$ of the form (\ref{Eq:DefRun}) with $\gamma_i = e_i$ and $\tau_i = t_i$ for all $1 \le i \le n$. It can be shown that $\mathcal L$ is sequentially recognizable and $||\mathcal A|| = h((\val \circ g) \cap \mathcal L) \in \mathcal N^{\text{\sc Seq}}(\Sigma, \mathbb M)$. \qed
\end{proof}
Let $\Sigma$ be an alphabet and $\mathbb M$ a timed valuation monoid with the domain $M$. Let $\mathcal N_{\text{\sc Unamb}}(\Sigma, \mathbb M)$ denote the collection of all QTL ${\mathbb L: \mathbb T \Sigma^+ \to M}$ over $\mathbb M$ such that there exist an alphabet $\Gamma$, a mapping $h: \Gamma \to \Sigma$ and an unambiguously recognizable QTL $r: \mathbb T \Gamma^+ \to M$ over $\mathbb M$ such that $\mathbb L = h(r)$.
The collections $\mathcal N_{\text{\sc Seq}}(\Sigma, \mathbb M)$ and $\mathcal N_{\text{\sc Det}}(\Sigma, \mathbb M)$ are defined like $\mathcal N_{\text{\sc Unamb}}(\Sigma, \mathbb M)$ with the only difference that $r$ is sequentially resp. deterministically recognizable.
As a corollary of Theorem \ref{Theorem:Nivat}, we establish the following connections between recognizable and unambiguously, sequentially and deterministically recognizable QTL. For the proof of this corollary, we apply Theorem \ref{Theorem:Nivat} and the closure properties of WTA considered in Sect. \ref{Sect:Closure}.
\begin{corollary}
\label{Cor:Unamb}
Let $\Sigma$ be an alphabet and $\mathbb M$ a timed valuation monoid. Then, $\mathcal N_{\text{\sc Seq}}(\Sigma, \mathbb M) = \mathcal N_{\text{\sc Det}}(\Sigma, \mathbb M) \subseteq \mathcal N_{\text{\sc Unamb}}(\Sigma, \mathbb M) = \text{\sc Rec}(\Sigma, \mathbb M)$. If $\mathbb M$ is location-independent, then $\mathcal N_{\text{\sc Seq}}(\Sigma, \mathbb M) = \text{\sc Rec}(\Sigma, \mathbb M)$.
\end{corollary}
However, the equality $\mathcal N_{\text{\sc Seq}}(\Sigma, \mathbb M) = \text{\sc Rec}(\Sigma, \mathbb M)$ does not always hold. Let ${\Sigma = \{a, b\}}$ and $\mathbb M = \mathbb M^{\text{sum}}$ be the timed valuation monoid as in Example \ref{Ex:TVM} (a); note that $\mathbb M$ is not location-independent. Consider the QTL ${\mathbb L: \mathbb T \Sigma^+ \to M}$ over $\mathbb M$ defined for all $w = (a_1, t_1) ... (a_n, t_n)$ by $\mathbb L(w) = t_1$ if $a_1 = a$ and $\mathbb L(w) = 2 \cdot t_1$ otherwise. We can show that $\mathbb L \in \text{\sc Rec}(\Sigma, \mathbb M) \setminus \mathcal N_{\text{\sc Seq}}(\Sigma, \mathbb M)$.
\section{Weighted Relative Distance Logic}
In this section, we develop a weighted relative distance logic. Relative distance logic on timed words was introduced by Wilke in \cite{Wil94, Wil942}. It was shown that restricted relative distance logic and timed automata have the same expressive power. Here, we will derive a weighted version of this result. We will show that the proof of our result can be deduced from Wilke's result and our Nivat theorem for WTA.
We fix a countable set $V_1$ of {\em first-order variables} and a countable set $V_2$ of {\em second-order variables} such that $V_1 \cap V_2 = \emptyset$. Let $V = V_1 \cup V_2$.
\subsection{Relative Distance Logic}
Let $\Sigma$ be an alphabet. The set $\text{\sc Rdl}(\Sigma)$ of {\em relative distance formulas} over $\Sigma$ is defined by the grammar:
$$
\varphi \; ::= \; P_a(x) \; | \; x \le y \; | \; X(x) \; | \; \dd_{\leftarrow}^{{\bowtie} c}(X, x) \; | \; \lnot \varphi \; | \; \varphi \vee \varphi \; | \; \exists x. \varphi \; | \; \exists X. \varphi
$$
where $a \in \Sigma$, $x, y \in V_1$, $X \in V_2$, ${\bowtie} \in \{<, \le, =, \ge, >\}$ and $c \in \mathbb N$.
The formulas of the form $\dd_{\leftarrow}^{{\bowtie c}}(X, x)$ are called {\em past formulas}.
Let $w = (a_1, t_1) ... (a_n, t_n) \in \mathbb T \Sigma^+$ be a timed word. For every $1 \le i \le n$, let $\langle w \rangle_i = t_1 + ... + t_i$. The {\em domain} of $w$ is the set $\dom(w) = \{1, ..., n\}$ of {\em positions} of $w$. Let $y \in \dom(w)$, $Y \subseteq \dom(w)$, ${\bowtie} \in \{<, \le, =, \ge, >\}$ and $c \in \mathbb N$. Then, we write $\dd^{{\bowtie} c, w}_{\leftarrow}(Y, y)$ iff either there exists a position $z \in Y$ such that $z < y$ and, for the greatest such position $z$, $\langle w \rangle_y - \langle w \rangle_z \bowtie c$, or there exists no position $z \in Y$ with $z < y$, and $\langle w \rangle_y \bowtie c$.
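For illustration (with arbitrarily chosen data), let $w = (a, 1)(b, 2)(a, 1.5)$, so that $\langle w \rangle_1 = 1$, $\langle w \rangle_2 = 3$ and $\langle w \rangle_3 = 4.5$. For $Y = \{1\}$ and $y = 3$, the greatest position $z \in Y$ with $z < y$ is $z = 1$ and $\langle w \rangle_3 - \langle w \rangle_1 = 3.5$, so $\dd^{\le 4, w}_{\leftarrow}(Y, 3)$ holds whereas $\dd^{\le 3, w}_{\leftarrow}(Y, 3)$ does not. For $Y = \emptyset$, the second clause applies, and $\dd^{\le 5, w}_{\leftarrow}(\emptyset, 3)$ holds since $\langle w \rangle_3 = 4.5 \le 5$.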
A {\em $w$-assignment} is a mapping $\sigma: V \to \dom(w) \cup 2^{\dom(w)}$ such that $\sigma(V_1) \subseteq \dom(w)$ and $\sigma(V_2) \subseteq 2^{\dom(w)}$. We define the {\em update} $\sigma[x/i]$ to be the $w$-assignment such that $\sigma[x/i](x) = i$ and $\sigma[x/i](y) = \sigma(y)$ for all $y \in V \setminus \{x\}$. Similarly, for $X \in V_2$ and $I \subseteq \dom(w)$, we define the update $\sigma[X/I]$. Let $\varphi \in \text{\sc Rdl}(\Sigma)$ and $\sigma$ be a $w$-assignment. The definition that the pair $(w, \sigma)$ {\em satisfies} the formula $\varphi$, written $(w, \sigma) \models \varphi$, is given inductively on the structure of $\varphi$ as usual for MSO logic where, for the new formulas $\dd^{{\bowtie} c}_{\leftarrow}(X, x)$, we put $(w, \sigma) \models \dd^{{\bowtie} c}_{\leftarrow}(X, x)$ iff $\dd_{\leftarrow}^{{\bowtie} c, w}(\sigma(X), \sigma(x))$.
A formula $\varphi \in \text{\sc Rdl}(\Sigma)$ is called a {\em sentence} if every variable occurring in $\varphi$ is bound by a quantifier. Note that, for a sentence $\varphi \in \text{\sc Rdl}(\Sigma)$, the relation $(w, \sigma) \models \varphi$ does not depend on $\sigma$, i.e., for any $w$-assignments $\sigma_1, \sigma_2$, $(w, \sigma_1) \models \varphi$ iff ${(w, \sigma_2) \models \varphi}$. Then, we will write $w \models \varphi$. For a sentence $\varphi \in \text{\sc Rdl}(\Sigma)$, let ${\mathcal L(\varphi) = \{w \in \mathbb T \Sigma^+ \; | \; w \models \varphi\}}$, the timed language {\em defined} by $\varphi$.
Let $\Delta \subseteq \text{\sc Rdl}(\Sigma)$. We say that a timed language $\mathcal L \subseteq \mathbb T \Sigma^+$ is {\em $\Delta$-definable} if there exists a sentence $\varphi \in \Delta$ such that $\mathcal L(\varphi) = \mathcal L$.
Let $\mathcal V = \{X_1, ..., X_m\} \subseteq V$ with $|\mathcal V| = m$. For $\varphi \in \text{\sc Rdl}(\Sigma)$, let
$\exists \mathcal V. \varphi$ denote the formula $\exists X_1. \; ... \; \exists X_m. \varphi$. For a formula $\varphi \in \text{\sc Rdl}(\Sigma)$, let $\mathcal D(\varphi) \subseteq V_2$ denote the set of all variables $X$ for which there exist $x \in V_1$, ${\bowtie} \in \{<, \le, =, \ge, >\}$ and $c \in \mathbb N$ such that $\dd_{\leftarrow}^{\bowtie c}(X, x)$ is a subformula of $\varphi$. Let $\text{\sc Rdl}^{\leftarrow}(\Sigma) \subseteq \text{\sc Rdl}(\Sigma)$ denote the set of all formulas $\varphi$ where quantification of second-order variables is applied only to variables not in $\mathcal D(\varphi)$. We denote by $\exists \text{\sc Rdl}^{\leftarrow}(\Sigma) \subseteq \text{\sc Rdl}(\Sigma)$ the set of all sentences of the form $\exists \mathcal D(\varphi). \varphi$ where $\varphi \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$.
\begin{theorem}[Wilke \cite{Wil942}]
\label{Theorem:Wilke}
Let $\Sigma$ be an alphabet and $\mathcal L \subseteq \mathbb T \Sigma^+$ a timed language. Then, $\mathcal L$ is recognizable iff $\mathcal L$ is $\exists \text{\sc Rdl}^{\leftarrow}(\Sigma)$-definable.
\end{theorem}
\subsection{Weighted Relative Distance Logic}
\label{SSect:wRdl}
In this subsection, we consider a weighted version of relative distance logic. For untimed words, weighted MSO logic over semirings was defined in \cite{DG07}. A weighted MSO logic over (untimed) product valuation monoids was considered in \cite{DM12}. We will use a similar approach to define the syntax and the semantics of our weighted relative distance logic. In \cite{DM12}, valuation monoids were augmented with a product operation and a unit element to define the semantics of weighted formulas. Here, we proceed in a similar way and consider timed {\em product} valuation monoids.
A {\em timed product valuation monoid} (timed pv-monoid) $\mathbb M = (M, +, \val, \diamond, \mathbb 0, \mathbb 1)$ is a timed valuation monoid $(M, +, \val, \mathbb 0)$ equipped with a multiplication $\diamond: M \times M \to M$ and a unit $\mathbb 1 \in M$ such that $m \diamond \mathbb 1 = \mathbb 1 \diamond m = m$ and $m \diamond \mathbb 0 = \mathbb 0 \diamond m = \mathbb 0$ for all $m \in M$, $\val(((\mathbb 1, \mathbb 1), t_1), ..., ((\mathbb 1, \mathbb 1), t_n)) = \mathbb 1$ for all $n \ge 1$ and all $t_1, ..., t_n \in \mathbb R_{\ge 0}$, and $\val(((m_1, m_1'), t_1) ... ((m_n, m_n'), t_n)) = \mathbb 0$ whenever $m'_i = \mathbb 0$ for some $1 \le i \le n$. We say that $\mathbb M$ is {\em idempotent} if $+$ is idempotent.
\begin{example}
\label{Ex:TPVM}
If we augment the timed valuation monoids $\mathbb M^{\text{sum}}$, $\mathbb M^{\text{avg}}$ and $\mathbb M^{\text{disc}_{\lambda}}$ from Example \ref{Ex:TVM} with the multiplication $\diamond = +$ and the unit $\mathbb 1 = 0$, then we obtain the timed pv-monoids $\mathbb M_0^{\text{sum}}$, $\mathbb M_0^{\text{avg}}$ and $\mathbb M_0^{\text{disc}_{\lambda}}$. Note that these timed pv-monoids are idempotent.
\end{example}
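For illustration, let us verify the two conditions on $\val$ for $\mathbb M_0^{\text{sum}}$: we have $\val^{\text{sum}}(((0, 0), t_1) ... ((0, 0), t_n)) = \sum_{i=1}^n (0 \cdot t_i + 0) = 0 = \mathbb 1$, and if $m_i' = \infty = \mathbb 0$ for some $i$, then the defining sum contains the summand $\infty$ and hence equals $\infty = \mathbb 0$ by the convention fixed in Example \ref{Ex:TVM}.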
Motivated by the examples, for the clarity of presentation, we restrict ourselves to idempotent timed pv-monoids.
Let $\Sigma$ be an alphabet and $\mathbb M = (M, +, \val, \diamond, \mathbb 0, \mathbb 1)$ a timed pv-monoid. The set $\text{\sc wRdl}(\Sigma, \mathbb M)$ of formulas of {\em weighted relative distance logic} over $\Sigma$ and $\mathbb M$ is defined by the grammar
$$
\varphi \; ::= \; \mathbb B. \beta \; | \; m \; | \; \varphi \vee \varphi \; | \; \varphi \wedge \varphi \; | \; \exists x. \varphi \; | \; \forall x. (\varphi, \varphi) \; | \; \exists X. \varphi
$$
where $\beta \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$, $m \in M$, $x \in V_1$ and $X \in V_2$; the notation $\mathbb B. \beta$ indicates that here $\beta$ will be interpreted in a quantitative way.
Let $\mathbb T \Sigma^+_V$ denote the set of all pairs $(w, \sigma)$ where $w \in \mathbb T \Sigma^+$ and $\sigma$ is a $w$-assignment. For $\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M)$, the {\em semantics} of $\varphi$ is the mapping $[\![\varphi]\!]: \mathbb T \Sigma^+_V \to M$ defined for all $(w, \sigma) \in \mathbb T \Sigma^+_V$ with $w = (a_1, t_1) ... (a_n, t_n)$ inductively on the structure of $\varphi$ as shown in Table \ref{Table:Semantics}.
\begin{table}[t]
\begin{scriptsize}
\begin{tabular}{@{\hspace{0.42cm}}l@{\hspace{1.3cm}}l}
$
\begin{array}{rll}
\! [\![\mathbb B. \beta]\!](w, \sigma) & = & \begin{cases} \mathbb 1, & \text{if } (w, \sigma) \models \beta, \\
\mathbb 0, & \text{otherwise}
\end{cases} \\
\! [\![m]\!](w, \sigma) & = & m \\
\! [\![\varphi_1 \vee \varphi_2]\!](w, \sigma) & = & [\![\varphi_1]\!](w, \sigma) + [\![\varphi_2]\!](w, \sigma) \\
\end{array}
$
&
$
\begin{array}{rll}
\! [\![\varphi_1 \wedge \varphi_2]\!](w, \sigma) & = & [\![\varphi_1]\!](w, \sigma) \diamond [\![\varphi_2]\!](w, \sigma) \\
\! [\![\exists x. \varphi]\!](w, \sigma) & = & \sum\limits_{i \in \dom(w)} [\![\varphi]\!](w, \sigma[x/i]) \\
\! [\![\exists X. \varphi]\!](w, \sigma) & = & \sum\limits_{I \subseteq \dom(w)} [\![\varphi]\!](w, \sigma[X/I])
\end{array}
$
\end{tabular}
$
\! [\![\forall x. (\varphi_1, \varphi_2)]\!](w, \sigma) = \val[(([\![\varphi_1]\!](w, \sigma[x/i]), [\![\varphi_2]\!](w, \sigma[x/i])), t_i)]_{i \in \dom(w)}
$
\end{scriptsize}
\caption{The semantics of weighted relative distance logic}
\label{Table:Semantics}
\end{table}
Here, $x \in V_1$, $X \in V_2$, $\beta \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$, $m \in M$ and ${\varphi, \varphi_1, \varphi_2 \in \text{\sc wRdl}(\Sigma, \mathbb M)}$.
\begin{remark}
In \cite{Qua10, Qua11}, Quaas introduced a weighted version of relative distance logic over a semiring $\mathbb S = (S, +, \cdot, \mathbb 0, \mathbb 1)$ and a family of functions $\mathcal F \subseteq S^{\mathbb R_{\ge 0}}$ where elements of $S$ model discrete weights and functions $f \in \mathcal F$ model continuous weights.
If $\mathcal F$ is a one-parametric family of functions $(f_s)_{s \in S}$, then our weighted logic incorporates the logic of Quaas over $\mathbb S$ and $\mathcal F$. However, for more complicated timed valuation functions (like average and discounting) we must have formulas which combine both discrete and continuous weights. Therefore, we use the formulas $\forall x. (\varphi_1, \varphi_2)$. Our approach also extends the idea of \cite{DM12} to define the semantics of formulas with a first-order universal quantifier using the valuation function.
\end{remark}
\begin{example}
Let $\Sigma = \{a, b\}$, let $C(a), C(b) \in \mathbb R$ be the {\em continuous costs} of $a,b$ and $D(a), D(b) \in \mathbb R$ the {\em discrete costs}. Given a timed word ${w = (\gamma_1, t_1) ... (\gamma_n, t_n) \in \mathbb T \Sigma^+}$ with $\langle w \rangle > 0$, the {\em average cost} of $w$ is defined as $A(w) = \frac{\sum_{i = 1}^n (C(\gamma_i) \cdot t_i + D(\gamma_i))}{\sum_{i=1}^n t_i}$. Let $\mathbb M_0^{\text{avg}}$ be defined as in Example \ref{Ex:TPVM}. For $U \in \{C, D\}$, let $\varphi_U(x) = (\mathbb B. P_a(x) \wedge U(a)) \vee (\mathbb B. P_b(x) \wedge U(b))$. Consider the
$\text{\sc wRdl}(\Sigma, \mathbb M_0^{\text{avg}})$-sentence $\varphi = \forall x. (\varphi_C(x), \varphi_D(x))$. Then, for all $w \in \mathbb T \Sigma^+$ with $\langle w \rangle > 0$, we have $[\![\varphi]\!](w) = A(w)$.
\end{example}
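For concreteness (the cost values below are chosen arbitrarily): taking $C(a) = 1$, $C(b) = 2$, $D(a) = 0$, $D(b) = 1$ and $w = (a, 2)(b, 1)$, we obtain $[\![\varphi]\!](w) = \frac{(1 \cdot 2 + 0) + (2 \cdot 1 + 1)}{2 + 1} = \frac{5}{3} = A(w)$.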
A sentence $\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M)$ is defined as usual as a formula without free variables. Then, for every sentence $\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M)$, every timed word $w \in \mathbb T \Sigma^+$ and every $w$-assignment $\sigma$, the value $[\![\varphi]\!](w, \sigma)$ does not depend on $\sigma$. Hence, we can consider the semantics of $\varphi$ as a quantitative timed language $[\![\varphi]\!]: \mathbb T \Sigma^+ \to M$ over $\mathbb M$.
Similarly to the results of \cite{DG07}, in general weighted relative distance logic and WTA are not expressively equivalent. We can show that the QTL $\mathbb L: \mathbb T \Sigma^+ \to \mathbb R \cup \{\infty\}$ with $\mathbb L(w) = |w|^2$ is not recognizable over the timed valuation monoid $\mathbb M^{\text{sum}}$. But this QTL is defined by the $\text{\sc wRdl}(\Sigma, \mathbb M^{\text{sum}}_0)$-sentence $\forall x. (0, \forall y. (0, 1))$.
Nevertheless, there is a syntactically restricted fragment of weighted relative distance logic which is expressively equivalent to WTA. Let $\Sigma$ be an alphabet and $\mathbb M = (M, +, \val, \diamond, \mathbb 0, \mathbb 1)$ an idempotent timed pv-monoid. A formula $\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M)$ is called {\em almost boolean} if it is built from boolean formulas $\mathbb B. \beta$ with $\beta \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$ and constants $m \in M$ using disjunctions and conjunctions. We say that a formula $\varphi$ is {\em syntactically restricted} if whenever it contains a subformula $\forall x. (\varphi_1, \varphi_2)$, then $\varphi_1, \varphi_2$ are almost boolean; whenever it contains a subformula $\varphi_1 \wedge \varphi_2$, then either $\varphi_1, \varphi_2$ are almost boolean or $\varphi_1 = \mathbb B. \varphi'$ or $\varphi_2 = \mathbb B. \varphi'$ with $\varphi' \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$; and every constant $m \in M$ is in the scope of a first-order universal quantifier. Let $\text{\sc Def}^{\text{res}}(\Sigma, \mathbb M)$ denote the collection of all QTL $\mathbb L: \mathbb T \Sigma^+ \to M$ over $\mathbb M$ such that $\mathbb L = [\![\varphi]\!]$ for some syntactically restricted $\text{\sc wRdl}(\Sigma, \mathbb M)$-sentence $\varphi$.
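Note that the sentence $\forall x. (0, \forall y. (0, 1))$ considered above is not syntactically restricted: the second argument $\forall y. (0, 1)$ of the outer universal quantifier contains a quantifier and is therefore not almost boolean. This is consistent with the following theorem, since the semantics $|w|^2$ of this sentence is not recognizable.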
Our main result for weighted relative distance logic is the following theorem.
\begin{theorem}
\label{Thm:Rec_Eq_wRdl}
Let $\Sigma$ be an alphabet and $\mathbb M$ an idempotent timed pv-monoid. Then, $\text{\sc Def}^{\text{\rm res}}(\Sigma, \mathbb M) = \text{\sc Rec}(\Sigma, \mathbb M)$.
\end{theorem}
Now we give a sketch of the proof of this theorem. Let $\mathcal N^{\exists \text{\sc Rdl}^{\leftarrow}}(\Sigma, \mathbb M)$ denote the collection of all QTL ${\mathbb L: \mathbb T \Sigma^+ \to M}$ over $\mathbb M$ such that there exist an alphabet $\Gamma$, mappings $h: \Gamma \to \Sigma$, $g: \Gamma \to M \times M$ and a $\exists \text{\sc Rdl}^{\leftarrow} (\Gamma)$-definable timed language $\mathcal L$ such that $\mathbb L = h((\val \circ g) \cap \mathcal L)$.
For the proof of Theorem \ref{Thm:Rec_Eq_wRdl}, we establish a Nivat-like characterization of definable QTL.
\begin{theorem}
\label{Thm:LogicNivat}
Let $\Sigma$ be an alphabet and $\mathbb M$ an idempotent timed pv-monoid. Then, $\mathcal N^{\exists \text{\sc Rdl}^{\leftarrow}}(\Sigma, \mathbb M) = \text{\sc Def}^{\text{\rm res}}(\Sigma, \mathbb M)$.
\end{theorem}
\begin{proof}[Sketch]
To show the inclusion $\subseteq$, let $\mathbb L = h((\val \circ g) \cap \mathcal L)$ where $\Gamma$, $h$, $g$ and $\mathcal L$ are as in the definition of $\mathcal N^{\exists \text{\sc Rdl}^{\leftarrow}}(\Sigma, \mathbb M)$. Let $\beta$ be an $\exists \text{\sc Rdl}^{\leftarrow}(\Gamma)$-sentence defining $\mathcal L$. We introduce a family $\mathcal V = (X_{\gamma})_{\gamma \in \Gamma}$ of second-order variables not occurring in $\beta$. We replace each predicate $P_{\gamma}(x)$ with $\gamma \in \Gamma$ occurring in $\beta$ by the formula ${P_{h(\gamma)}(x) \wedge X_{\gamma}(x)}$; so we obtain a formula $\beta' \in \exists \text{\sc Rdl}^{\leftarrow}(\Sigma)$. Assume that $\beta' = \exists \mathcal D(\beta''). \beta''$ with $\beta'' \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$. We construct a formula $\text{Part} \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$ which demands that the variables $\mathcal V$ form a partition of the domain, and a formula $H \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$ which demands that, whenever a position of a word belongs to $X_{\gamma}$, then this position is labelled by $h(\gamma)$. Then, the following syntactically restricted $\text{\sc wRdl}(\Sigma, \mathbb M)$-sentence defines $\mathbb L$:
$$
\exists (\mathcal V \cup \mathcal D(\beta'')). \big[\mathbb B. (\beta'' \wedge \text{Part} \wedge H) \wedge \forall x. \textstyle \big(\bigvee_{\gamma \in \Gamma} \mathbb B. X_{\gamma}(x) \wedge g_1(\gamma), \bigvee_{\gamma \in \Gamma} \mathbb B. X_{\gamma}(x) \wedge g_2(\gamma) \big) \big]
$$
where, for $i \in \{1,2\}$, $g_i$ is the projection of $g$ to the $i$-th coordinate.
To show the inclusion $\supseteq$, we introduce {\em canonical $\text{\sc wRdl}(\Sigma, \mathbb M)$-sentences} which are of the form $\varphi = \exists \mathcal V. \forall y. (\bigvee_{i = 1}^k \mathbb B. \beta_i \wedge m_i, \bigvee_{i = 1}^k \mathbb B. \beta_i \wedge m'_i)$ where $\mathcal V$ is a set of variables, $m_1, ..., m_k, m_1', ..., m_k' \in M$ and $\beta_1, ..., \beta_k \in \text{\sc Rdl}^{\leftarrow}(\Sigma)$ are such that, for every timed word $w \in \mathbb T \Sigma^+$ and every $w$-assignment $\sigma$, there exists exactly one $i \in \{1, ..., k\}$ such that $(w, \sigma) \models \beta_i$.
We can show that every syntactically-restricted sentence can be transformed into a canonical one. It remains to prove that, for a canonical sentence $\varphi$ as above, $[\![\varphi]\!] \in \mathcal N^{\exists \text{\sc Rdl}^{\leftarrow}}(\Sigma, \mathbb M)$. Let $M^1_{\varphi} = \{m_1, ..., m_k\}$ and $M^2_{\varphi} = \{m'_1, ..., m'_k\}$. We put $\Gamma = \Sigma \times M_{\varphi}^1 \times M_{\varphi}^2$. Let $h: \Gamma \to \Sigma$ be the projection to the first coordinate. Let $g: \Gamma \to M \times M$ be the projection to $M_{\varphi}^1 \times M_{\varphi}^2$. Then we can construct a $\exists \text{\sc Rdl}^{\leftarrow}(\Gamma)$-sentence $\beta$ of the form $\exists \mathcal V. \forall y. \beta'$ such that
$[\![\varphi]\!] = h((\val \circ g) \cap \mathcal L(\beta))$. \qed
\end{proof}
Then, our Theorem \ref{Thm:Rec_Eq_wRdl} follows from Theorem \ref{Thm:LogicNivat}, the Nivat Theorem \ref{Theorem:Nivat} and Wilke's Theorem \ref{Theorem:Wilke}.
\begin{remark}
We can also follow the approach of \cite{DG07} to prove our Theorem \ref{Thm:Rec_Eq_wRdl}. Compared to this way, our new proof technique has the following advantages. The proof idea of \cite{DG07} involves technical details like B\"uchi's encodings of assignments and a bulky logical description of accepting runs of timed automata. In our new proof, these details are taken care of by Wilke's proof for unweighted relative distance logic.
\end{remark}
Let $\Sigma$ be an alphabet, $\mathbb M^{\text{sum}}$ the timed valuation monoid as in Example \ref{Ex:TVM}(a) and $\mathcal A$ a WTA over $\Sigma$ and $\mathbb M^{\text{sum}}$. As was shown in \cite{ATP01, BFHL01, LBBF01}, $\inf\{||\mathcal A||(w) \; | \; w \in \mathbb T \Sigma^+\}$ is computable. This result and our Theorem \ref{Thm:Rec_Eq_wRdl} imply decidability results for weighted relative distance logic.
\begin{itemize}
\item
Let $\mathbb M_0^{\text{sum}}$ be the timed pv-monoid as in Example \ref{Ex:TPVM}. It is decidable, given an alphabet $\Sigma$, a syntactically restricted sentence ${\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M_0^{\text{sum}})}$ with constants from $\mathbb Q$ and a threshold $\theta \in \mathbb Q$, whether there exists $w \in \mathbb T \Sigma^+$ with $[\![\varphi]\!](w) < \theta$.
\item
Let $\mathbb M_0^{\text{avg}}$ be the timed pv-monoid as in Example \ref{Ex:TPVM}. It is decidable, given an alphabet $\Sigma$, a syntactically restricted sentence ${\varphi \in \text{\sc wRdl}(\Sigma, \mathbb M_0^{\text{avg}})}$ with constants from $\mathbb Q$ and a threshold $\theta \in \mathbb Q$, whether there exists $w \in \mathbb T \Sigma^+$ with $\langle w \rangle > 0$ and $[\![\varphi]\!](w) < \theta$.
\end{itemize}
\section{Conclusion and Future Work}
In this paper, we proved a version of Nivat's theorem for weighted timed automata on finite words which states a connection between the quantitative and qualitative behaviors of timed automata. We also considered several applications of this theorem. Using this theorem, we studied the relations between sequential, unambiguous and non-deterministic WTA. We also introduced a weighted version of Wilke's relative distance logic and established a B\"uchi-like result for this logic, i.e., we showed the equivalence between restricted weighted relative distance logic and WTA. Using our Nivat theorem, we deduced this from Wilke's result.
Because of space constraints, we did not present the following results in this paper. As in \cite{DM12}, for timed pv-monoids with additional properties there are larger fragments of weighted relative distance logic which are still expressively equivalent to WTA. For simplicity of presentation, we restricted ourselves to idempotent timed pv-monoids. However, we also obtained a more complicated result for non-idempotent timed pv-monoids. In \cite{Qua10, Qua11}, for weighted relative distance logic over non-idempotent semirings, a strong restriction was imposed on the use of first-order universal quantification. Surprisingly, in our result we could avoid this restriction.
Our future work concerns the following directions. Ongoing research should extend the present results to infinite ($\omega$-)words. This work should be further extended to the {\em multi-weighted} setting for WTA, e.g., the optimal reward-cost ratio \cite{BBL04, BBL08} or the optimal consumption of several resources where some resources must be restricted \cite{LR05}. A logical characterization of untimed multi-weighted automata was given in \cite{DP13}. It could also be interesting to compare the complexity of translations between logic and automata in the weighted and unweighted cases. We believe that our Nivat theorem will be helpful for this.
\end{comment}
\end{document} |
\begin{document}
\title{ Invariant subspaces for commuting operators in a real Banach space}
\author{ Victor Lomonosov and Victor Shulman}
\maketitle
{\bf Abstract.}
It is proved that a commutative algebra $A$ of operators in a reflexive real Banach space has an invariant subspace if each operator $T\in A$ satisfies the condition
$$\|1- \varepsilon T^2\|_e \le 1 + o(\varepsilon) \text{ when } \varepsilon\searrow 0,$$
where $\|\cdot\|_e$ is the essential norm.
This implies the existence of an invariant subspace for every commutative family of essentially selfadjoint operators in a real Hilbert space.
\section{Introduction }
One of the best known unsolved problems in the invariant subspace theory is the question of existence of a (non-trivial, closed) invariant subspace for an operator $T$ with compact imaginary part (= essentially selfadjoint operator = compact perturbation of a selfadjoint operator). There are many papers concerning this subject; we only mention that the answer is affirmative for perturbations by operators from the Schatten--von Neumann class $\mathfrak{S}_p$ (Livshits \cite{MSL} for $p=1$, Sahnovich \cite{Sah} for $p=2$, Gohberg and Krein \cite{GK}, Macaev \cite{M1}, Schwartz \cite{Sch} --- for arbitrary $p$), or, more generally, from the Macaev ideal (Macaev \cite{M2}). But the general question is still open.
It was proved in \cite{Lom2} that an essentially self-adjoint operator in a complex Hilbert space has an invariant real subspace. Then in \cite{Lom3} the following general theorem of Burnside type was proved:
\begin{theorem}\label{L1}
Suppose that an algebra $A$ of bounded operators in a (real or complex) Banach space $X$ is not dense in the algebra $\mathcal{B}(X)$ of all bounded operators on $X$ with respect to the weak operator topology (WOT). Then there are non-zero $x\in X^{**}, y\in X^*$, such that
\begin{equation}\label{ineq}
|(x,T^*y)| \le \|T\|_e \text{ for all } T\in A,
\end{equation}
where $\|T\|_e$ is the essential norm of $~T$, that is $\|T\|_e = \inf\{\|T+K\|: K\in \mathcal{K}(X)\}$. Here $\mathcal{K}(X)$ is the ideal of all compact operators in $X$.
\end{theorem}
Using this result and developing special variational techniques, Simonic \cite{Sc} obtained significant progress on the topic: he proved that each essentially selfadjoint operator in a real Hilbert space has an invariant subspace. Deep results based on Theorem \ref{L1} were then proved by Atzmon \cite{A}, Atzmon, Godefroy and Kalton \cite{AGK}, Grivaux \cite{SofGr} and other mathematicians. Here we show that every commutative family of essentially selfadjoint operators in a real Hilbert space has an invariant subspace, and consider some analogs of this result for operators in Banach spaces. Our proof is very simple and short but it heavily depends on Theorem \ref{L1}. More precisely, we use the following improvement of Theorem \ref{L1} obtained in \cite{Lom4}:
\begin{theorem}\label{L4}
If an algebra $A\subset \mathcal{B}(X)$ is not WOT-dense in $\mathcal{B}(X)$, then there are non-zero vectors $x\in X^{**}, y\in X^*$, such that $(x,y) \geq 0$ and
\begin{equation}\label{ineq2}
|(x,T^*y)| \le \|T\|_e (x,y), \text{ for all } T\in A.
\end{equation}
\end{theorem}
Let us mention that if $A$ contains a non-zero compact operator then by \cite{Lom0} $A$ has a non-trivial invariant subspace $L$ in $X$ (so $x$ can be chosen from $L$ and $y$ can be chosen from $L^\bot$).
The original proof of Theorem \ref{L1} was substantially simplified by Lindstrom and Shluchtermann \cite{LSc}. For completeness, we give at the end of the paper a short proof of Theorem \ref{L4}, unifying arguments from \cite{LSc} and \cite{Lom4} (and correcting some inaccuracies in \cite{Lom4}) --- in this form it has not been published before.
\section{Main results }
In this section $X$ is a real Banach space (complex spaces are considered as real ones). The standard epimorphism from $\mathcal{B}(X)$ to the Calkin algebra $\mathcal{C}(X) = \mathcal{B}(X)/\mathcal{K}(X)$ is denoted by $\pi$.
Let us say that an element $a$ of a unital normed algebra is {\it positive}, if $\|1-\varepsilon a\|\le 1+o(\varepsilon)$ for $\varepsilon > 0$, $\varepsilon\to 0$. And let us say that an element $a$ is {\it real}, if $a^2$ is positive. An operator $T\in \mathcal{B}(X)$ is {\it essentially real}, if $\pi(T)$ is a real element of $\mathcal{C}(X)$.
Clearly all selfadjoint operators in a Hilbert space are real. It is not difficult to check that Hermitian operators in a complex Banach space (defined by the condition $\|\exp(itT)\| = 1$ for $t\in \mathbb{R}$) are real. Many other operators, for instance, all involutions and all nilpotents of index two are also real. So we can see that the class of essentially real operators is very wide.
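The ``real'' condition is also easy to test numerically in finite dimensions; the following minimal Python sketch (an illustration only, taking the operator $2$-norm on $\mathbb{R}^5$ as the ambient norm and an $o(\varepsilon)$ allowance of order $\varepsilon^2$) checks it for a symmetric matrix, an involution and a nilpotent of index two.
\begin{verbatim}
# Finite-dimensional sketch: check ||I - eps*T^2|| <= 1 + o(eps) for a
# symmetric matrix, an involution and a nilpotent of index two (2-norm on R^5).
import numpy as np

def is_real_element(T, eps=1e-3):
    n = T.shape[0]
    slack = 10 * eps**2                               # an o(eps) allowance
    return np.linalg.norm(np.eye(n) - eps * (T @ T), 2) <= 1 + slack

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5)); S = (S + S.T) / 2    # symmetric ("selfadjoint")
J = np.diag([1.0, -1.0, 1.0, -1.0, 1.0])              # involution: J^2 = I
Nil = np.zeros((5, 5)); Nil[0, 1] = 3.0               # nilpotent of index two
print([is_real_element(T) for T in (S, J, Nil)])      # expected: [True, True, True]
\end{verbatim}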
\begin{theorem}\label{LB} If $A\subset \mathcal{B}(X)$ is a commutative algebra of essentially real operators, then there exists a non-trivial closed subspace of $X^*$, invariant for the algebra $A^*= \{T^*: T \in A\}$.
\end{theorem}
\begin{proof} Note that the set of all positive elements of a Banach algebra is a convex cone. Moreover this cone is norm-closed. Indeed, let $a = \lim_{n\to \infty}a_n$ where all $a_n$ are positive. If $a$ is not positive, then there is a sequence $\varepsilon_n \to 0$ and a number $C> 0$, such that $\|1 - \varepsilon_n a\| > 1 + C\varepsilon_n$ for all $n$. Taking $k$ with $\|a-a_k\| < C/2$, we get that $\|1 - \varepsilon_na_k\| > 1 + C\varepsilon_n - \|a-a_k\|\varepsilon_n > 1 + (C/2)\varepsilon_n$, which contradicts the positivity of $a_k$.
Hence the set of real elements is also norm closed. Applying this to $\mathcal{C}(X)$ we see that the set of essentially real operators is norm-closed as well. This allows us to assume that the algebra $A$ is norm-closed. Obviously we may assume also that $A$ is unital. Therefore $\exp(T)\in A$, for each $T\in A$.
Since $A$ is commutative it is not WOT-dense in $\mathcal{B}(X)$. Applying Theorem \ref{L4}, we find non-zero vectors $x\in X^{**}, y\in X^*$, such that the condition (\ref{ineq2}) holds.
Therefore, for $T\in A$ and $\varepsilon \searrow 0$, we have
$$(x,(1-\varepsilon (T^2)^*)y) \le \|1-\varepsilon T^2\|_e(x,y) \le (1+o(\varepsilon))(x,y),$$
hence \;\;\;$-\varepsilon(x, (T^2)^*y) \le o(\varepsilon)(x,y)$ and $(x,(T^2)^*y)\ge 0$, because $(x,y)\ge 0$.
Since $\exp(T) = (\exp(T/2))^2$, we get that
\begin{equation}\label{1}
(x,\exp(T^*)y) \ge 0 \text{ for }T\in A.
\end{equation}
Let $K$ be the closed convex envelope of the set $M = \{\exp(T^*)y: T\in A\}\subset X^*$. Since $M$ is invariant under all operators $\exp(T^*)$, the same is true for $K$.
Since $K\neq X^*$ (by (\ref{1})) and $K$ is not a singleton (otherwise we trivially have an invariant subspace $\mathbb{R}y$), the boundary $\partial K$ of $K$ is not a singleton.
By the Bishop-Phelps theorem \cite{BPh}, the set of support points is dense in $\partial K$, so there is a non-zero functional $x_0\in X^{**}$ supporting $K$ in some point $0\neq y_0\in K$. That is
$$(x_0,y_0) \le (x_0,z) \text{ for all } z\in K.$$
By the above arguments, $\exp(T^*)y_0\in K$ for $T\in A$, therefore $$ (x_0,y_0) \le (x_0,\exp(T^*)y_0) \text{ for all } T\in A.$$
It follows that for each $T\in A$, the function $\phi(t) = (x_0,\exp(tT^*)y_0)$ has a minimum at $t = 0$. For this reason $\phi'(0) = 0$ and
$(x_0,T^*y_0) = 0$. Hence the subspace $A^*y_0$ is not dense in $X^*$, and its norm-closure $\overline{A^*y_0}$ is a non-trivial invariant subspace.
\end{proof}
\begin{corollary}\label{LBref} A commutative algebra of essentially real operators in a reflexive real Banach space has a non-trivial invariant subspace.
\end{corollary}
Since the algebra generated by a commutative family of essentially selfadjoint operators consists of essentially selfadjoint operators we get the following result:
\begin{corollary}\label{L} Any commutative family of essentially selfadjoint operators in a real Hilbert space has a non-trivial invariant subspace.
\end{corollary}
Returning to criteria for individual operators, let us consider the class $(E)$ of operators $T$ such that all polynomials $p(T)$ in $T$ are essentially real.
\begin{corollary}\label{LB2} Each operator $T\in (E)$ in a reflexive real Banach space has a non-trivial invariant subspace.
\end{corollary}
Atzmon, Godefroy and Kalton \cite{AGK} introduced the class $(S)$ of all operators, satisfying the condition
\begin{equation}\label{S}
\|p(T)\|_e \le \sup\{|p(t)|: t\in L\}, \text{ for each polynomial } p,
\end{equation}
where $L$ is a compact subset of $\mathbb{R}$. It was proved in \cite{AGK} that all operators in $(S)$ have invariant subspaces. It is not difficult to see that $(S)\subset (E)$ (indeed, if $T\in (S)$ then $\|1- \varepsilon p(T)^2\|_e \le \sup \{ |1-\varepsilon p(t)^2|: t\in L\} \le 1$ if $\varepsilon$ is sufficiently small), so this result follows from Corollary \ref{LB2}.
\section{Proof of Theorem \ref{L4}}
Without loss of generality one can assume that $A$ is norm-closed. Since the algebra $A$ is not WOT-dense in $\mathcal{B}(X)$, the algebra $A^*$ is not WOT-dense in $\mathcal{B}(X^*)$. Suppose, aiming at a contradiction, that $A^*$ is transitive (has no invariant subspaces). Set $F = \{T^*\in A^*: \|T\|_e < 1\}$ and fix $\varepsilon \in (0, \frac{1}{10})$. Choose $y_0\in X^*$ with $\|y_0\| = 3$ and let $S$ be the ball $\{y\in X^*: \|y-y_0\| \le 2\}$.
Let us first suppose that $Fy$ is dense in $X^*$ for each non-zero $y\in X^*$. Then the same is true for $\varepsilon Fy$. It follows that for every $y\in S$ there exists $T^*_y\in \varepsilon F$ with $\|T^*_yy - y_0\|<1$. By the definition of $F$, $T^*_y = R^*_y + K^*_y$, where $\|R_y\| < \varepsilon$, $K_y \in \mathcal{K}(X)$. Thus $\|K^*_yy - y_0\| \le \|T^*_yy-y_0\| + \|R^*_yy\| < 1 + 5\varepsilon$ (since $\|y\| \leq 5$, for each $y\in S$). Let $\tau$ denote the (relative) weak-$*$ topology on $S$; then the compactness of $K_y$ implies that $K_y^*$ continuously maps $(S,\tau)$ to $(X^*, \|\cdot\|)$. Therefore, for each $y\in S$, there is a $\tau$-neighborhood $V_y$ of $y$ such that $\|K^*_yz - y_0\| < 1 +5\varepsilon$, for each $z\in V_y$, and $\|T^*_yz-y_0\|< 1+5\varepsilon + 5\varepsilon < 2$. In other words, $T^*_y$ maps $V_y$ into $S$.
The sets $V_y$, $y\in S$, form an open covering of $S$. Since $S$ is $\tau$-compact there is a finite subcovering $\{V_{y_i}: 1\le i\le n\}$. Let $\{\varphi_i: 1\le i\le n\}$ be a partition of unity related to this subcovering. We define a $\tau$-continuous map $\Phi: S \to S$ by $\Phi(y) = \sum_{i=1}^n\varphi_i(y)T^*_{y_i}(y)$. By Tichonov's Theorem, $\Phi$ has a fixed point $z\in S$. This means that $T^*z = z$, where $T^* = \sum_{i=1}^n\varphi_i(z)T^*_{y_i}$. Since the set $\varepsilon F$ is convex we get that $T^* \in \varepsilon F$ and $\|T^*\|_e \leq \|T\|_e < \frac{1}{10}$.
For this reason 1 is an eigenvalue of $T^*$ exceeding $\|T^*\|_e$ and hence it is an isolated point in $\sigma(T^*)$. The corresponding Riesz projection is of finite rank and belongs to $A^*$. But it is well known (see e.g. \cite[Theorem 8.2]{RR}), that a transitive algebra containing a non-zero finite rank operator is $WOT$-dense in the algebra of all operators. Since $A^*$ is not $WOT$-dense in $\mathcal{B}(X^*)$, we obtain a contradiction.
Hence there exists $y_0\in X^*$ such that $Fy_0$ is not norm-dense in $X^*$. Let $V$ be the norm-closure of $Fy_0$. If $V = \{0\}$ then we have the inequality (\ref{ineq2}) with $y=y_0$ and any non-zero $x$.
If $V \neq \{0\}$ then $V$ is a norm-closed convex proper subset of $X^*$ containing more than one point. By the Bishop--Phelps Theorem \cite{BPh}, there are $0\neq y \in V$ and $0\neq x \in X^{**}$ such that $Re(x,y) = \sup \{Re(x,w): w\in V\}$. Since the set $V$ is invariant under multiplication by any number $t$ with $|t| \le 1$, we have $(x,y) \geq 0$ and $(x,y) \geq |(x,w)|$ for all $ w\in V$. Since $F^2\subset F$, we have that $Fy\subset V$ and $|(x,T^*y)| \le (x,y)$, for any $T^*\in F$. Therefore the inequality $|(x,T^*y)|\le \|T\|_e(x,y)$ holds for all $T\in A$. $~~\blacksquare$
\noindent Dept of Math.\\
\noindent Kent State University\\
\noindent Kent OH 44242, USA \\
\noindent Dept of Math.\\
\noindent Vologda State University\\
\noindent Vologda 160000, Russia\\
\end{document} |
\begin{document}
\title{Coherent states of a charged particle in a uniform magnetic
field}
\author{K Kowalski and J Rembieli\'nski}
\address{Department of Theoretical Physics, University
of \L\'od\'z, ul.\ Pomorska 149/153, 90-236 \L\'od\'z,
Poland}
\begin{abstract}
The coherent states are constructed for a charged particle in a
uniform magnetic field based on coherent states for the circular
motion which have recently been introduced by the authors.
\end{abstract}
\pacs{02.20.Sv, 03.65.-w, 03.65.Sq}
\submitto{\JPA}
\maketitle
\section{Introduction}
Coherent states, which can be regarded from the physical point of
view as the states closest to the classical ones, are of fundamental
importance in quantum physics. One of the most extensively studied
quantum systems presented in many textbooks is a charged particle in
a uniform magnetic field. The coherent states for this system were
originally found by Malkin and Man'ko \cite{1} (see also Feldman and
Kahn \cite{2}). As a matter of fact, the alternative states for a
charged particle in a constant magnetic field were introduced by
Loyola, Moshinsky and Szczepaniak \cite{3} (see also the very recent
paper by Schuch and Moshinsky \cite{4}), nevertheless, those states
are labeled by discrete quantum numbers and therefore can hardly be
called ``coherent ones'' which should be marked with the points of
the classical phase space. In spite of the fact that the transverse
motion of a charged particle in a uniform magnetic field is
circular, the coherent states described by Malkin and Man'ko are
related to the standard coherent states for a particle on a plane
instead of coherent states for a particle on a circle. Furthermore,
the definition of the coherent states constructed by Malkin and Man'ko
seems to ignore the momentum part of the classical phase space. In
this work we introduce the coherent states for a charged particle in
a uniform magnetic field based on the construction of the coherent
states for a quantum particle on a circle described in \cite{5}. The
paper is organized as follows. In section 2 we recall the construction
of the coherent states for a particle on a circle. Section 3
summarizes the main facts about quantization of a charged particle
in a magnetic field. Section 4 is devoted to the definition of the
coherent states for a charged particle in a magnetic field and
discussion of their most important properties. In section 5 we
collect the basic facts about the coherent states for a charged particle
in a magnetic field introduced by Malkin and Man'ko and we compare
these states with ours discussed in section 4.
\section{Coherent states for a quantum mechanics on a circle}
In this section we summarize most important facts about the coherent
states for a quantum particle on a circle. We first recall that the
algebra adequate for the study of the motion on a circle is of the
form
\begin{equation}
[J,U] = U,\qquad [J,U^\dagger]=-U^\dagger ,
\end{equation}
where $J$ is the angular momentum operator, the unitary operator
$U$ represents the position of a quantum particle on a (unit) circle
and we set $\hbar=1$. Consider the eigenvalue equation
\begin{equation}
J|j\rangle = j|j\rangle.
\end{equation}
As shown in \cite{5}, $j$ can only be an integer or half-integer. We
restrict for brevity to the case with integer $j$. From (2.1) and
(2.2) it follows that the operators $U$ and $U^\dagger $ are the
ladder operators, namely
\begin{equation}
U|j\rangle = |j+1\rangle,\qquad U^\dagger |j\rangle = |j-1\rangle.
\end{equation}
Consider now the coherent states for a quantum particle on a circle.
These states can be defined \cite{5} as the solution of the eigenvalue equation
\begin{equation}
X|\xi\rangle = \xi|\xi\rangle,
\end{equation}
where
\begin{equation}
X = e^{-J +\frac{1}{2}}U.
\end{equation}
An alternative construction of the coherent states specified by (2.4)
based on the Weil-Brezin-Zak transform was described in \cite{6}.
The convenient parametrization of the complex number $\xi$ consistent
with the form of the operator $X$ is given by
\begin{equation}
\xi = e^{-l + \rmi\varphi}.
\end{equation}
The parametrization (2.6) arises from the deformation of the
cylinder (the phase space) specified by
\begin{equation}
x=e^{-l}\cos\varphi,\qquad y=e^{-l}\sin\varphi,\qquad z=l,
\end{equation}
and then projecting the points of the obtained surface onto the $x,y$
plane. The projection of the vectors $|\xi\rangle$ onto the basis vectors
$ |j\rangle$ is of the form
\begin{equation}
\langle j|\xi\rangle = \xi^{-j}e^{-\frac{j^2}{2}}.
\end{equation}
Using the parameters $l$ and $\varphi$, (2.8) can be written in the following
equivalent form:
\begin{equation}
\langle j|l,\varphi\rangle =
e^{lj-\rmi j\varphi}e^{-\frac{j^2}{2}},
\end{equation}
where $|l,\varphi\rangle\equiv|\xi\rangle$, with $\xi=e^{-l + \rmi\varphi}$.
The coherent states are not orthogonal. Namely,
\begin{equation}
\langle \xi|\eta\rangle =
\sum_{j=-\infty}^{\infty}(\xi^*\eta)^{-j}e^{-j^2} =
\theta_3\left(\frac{\rmi}{2\pi}\ln\xi^*\eta\Bigg\vert\frac{\rmi}
{\pi}\right),
\end{equation}
where $\theta_3$ is the Jacobi theta-function \cite{7}. The coherent states
satisfy
\begin{equation}
\frac{\langle l,\varphi|J|l,\varphi\rangle}{\langle
l,\varphi|l,\varphi\rangle}\approx l,
\end{equation}
where the maximal error is of order $0.1$ per cent and we have the exact
equality in the case with $l$ integer or half-integer. Therefore, the
parameter $l$ labeling the coherent states can be interpreted as the
classical angular momentum. Furthermore, we have
\begin{equation}
\frac{\langle l,\varphi|U|l,\varphi\rangle}{\langle l,\varphi|l,\varphi\rangle}
\approx
e^{-\frac{1}{4}}e^{\rmi\varphi}.
\end{equation}
We point out that the absolute value of the average of the unitary operator
$U$ given by (2.12), which is approximately $e^{-\frac{1}{4}}$, is less than
1, as expected, because $U$ is not diagonal in the coherent state basis. On
introducing the relative expectation value
\begin{equation}
\langle\!\langle U\rangle\!\rangle_{(l,\varphi)} :=
\frac{\langle U\rangle_{(l,\varphi)}}{\langle U\rangle_{(0,0)}},
\end{equation}
where $\langle U\rangle_{(l,\varphi)}=\langle
l,\varphi|U|l,\varphi\rangle/\langle l,\varphi|l,\varphi\rangle$, we
get
\begin{equation}
\langle\!\langle U\rangle\!\rangle_{(l,\varphi)} \approx e^{\rmi\varphi}.
\end{equation}
Therefore, the relative expectation value $\langle\!\langle
U\rangle\!\rangle_{(l,\varphi)}$ seems to be the most natural
candidate to describe the average position on a circle and $\varphi$
can be regarded as the classical angle. We finally point out that
the discussed coherent states as well as the coherent states for a
particle on a sphere introduced by us in \cite{8} are concrete
realizations of the general mathematical scheme of construction of
the Bargmann spaces described in the recent papers \cite{9}. The
importance of the coherent states for the circular motion has been
confirmed by their recent application in quantum gravity \cite{10}.
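The approximate relations (2.11) and (2.12) are easy to reproduce numerically from the expansion coefficients (2.9) alone; a minimal Python sketch (the truncation $|j|\le 60$ and the sample values of $l$ and $\varphi$ are arbitrary illustrative choices) is the following.
\begin{verbatim}
# Sketch of (2.11)-(2.12): with <j|l,phi> = exp(l*j - i*j*phi - j^2/2), the
# normalised averages satisfy <J> ~ l and <U> ~ exp(-1/4) exp(i*phi).
import numpy as np

def circle_cs_averages(l, phi, jmax=60):
    j = np.arange(-jmax, jmax + 1)
    c = np.exp(l * j - 1j * phi * j - j**2 / 2.0)    # overlaps <j|l,phi>, eq. (2.9)
    norm = np.sum(np.abs(c)**2)
    J_avg = np.sum(j * np.abs(c)**2) / norm          # <J>, cf. (2.11)
    U_avg = np.sum(c[:-1] * np.conj(c[1:])) / norm   # <U>, cf. (2.12)
    return J_avg, U_avg

for l in (0.3, 1.0, 2.5):
    J_avg, U_avg = circle_cs_averages(l, phi=0.7)
    print(l, J_avg, abs(U_avg) * np.exp(0.25), np.angle(U_avg))
# J_avg should be close to l, |U_avg| to exp(-1/4), and the phase to phi
\end{verbatim}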
\section{Charged quantum particle in a magnetic field}
In order to obtain the operators necessary for definition of the
coherent states we first recall some facts about the quantization of
a particle with the mass $\mu$ and a charge $e$ in a uniform magnetic
field ${\bi B}=(0,0,B)$, which is taken, without loss of generality,
along the $z$ axis. Neglecting the spin we can write the
Hamiltonian in the form
\begin{equation}
H = \frac{1}{2\mu}{\bpi}^2,
\end{equation}
where $\bpi=\mu\dot{\bi x}$ is the kinetic momentum related to
the canonical momentum ${\bi p}$ satisfying the Heisenberg algebra with
the position ${\bi x}$, by
\begin{equation}
\bpi = {\bi p} - e {\bi A},
\end{equation}
where ${\bi A}$ is the vector potential which fulfils ${\bi B}=\hbox{rot}
{\bi A}$ and we set $c=1$. We choose the symmetric gauge such that
\begin{equation}
{\bi A} = (-By/2,Bx/2,0)
\end{equation}
in which ${\bi A}=\frac{1}{2}{\bi B}\times{\bi x}$. The coordinates
of the kinetic momentum (3.2) in the gauge (3.3) are
\begin{equation}
\pi_x = p_x+\frac{\mu\omega}{2}y,\qquad \pi_y = p_y-\frac{\mu\omega}{2}x,
\qquad \pi_z=p_z,
\end{equation}
where $\omega =\frac{eB}{\mu}$ is the cyclotron frequency. From (3.1)
and (3.4) it follows that the motion along the $z$ axis is free and
we actually deal with a two-dimensional problem in the $x,y$
plane. Clearly, the Hamiltonian for the transverse motion is
\begin{equation}
H_{\perp} = \frac{1}{2\mu}(\pi_x^2+\pi_y^2).
\end{equation}
The coordinates $\pi_x$ and $\pi_y$ of the kinetic momentum given by
(3.4) satisfy the following commutation relation:
\begin{equation}
[\pi_x,\pi_y] = \rmi\mu\omega ,
\end{equation}
where we set $\hbar=1$. On introducing the operators
\begin{equation}
a = \frac{1}{\sqrt{2\mu\omega}}(-\pi_y+\rmi\pi_x),\qquad
a^\dagger = \frac{1}{\sqrt{2\mu\omega}}(-\pi_y-\rmi\pi_x),
\end{equation}
which obey
\begin{equation}
[a,a^\dagger]=1,
\end{equation}
we can write the Hamiltonian (3.5) in the form of the Hamiltonian of the
harmonic oscillator, such that
\begin{equation}
H_{\perp} = \omega (a^\dagger a + \hbox{$\scriptstyle 1\over 2$}).
\end{equation}
Consider now the orbit center-coordinate operators \cite{11}
\begin{equation}
x_0 = x + \frac{1}{\mu\omega}\pi_y,\qquad y_0 = y -
\frac{1}{\mu\omega}\pi_x.
\end{equation}
These operators are integrals of the motion and they represent the
coordinates of the center of the circle in the $x,y$ plane along
which the particle moves. However, they do not commute with each
other; namely, we have
\begin{equation}
[x_0,y_0] = -\frac{\rmi}{\mu\omega}.
\end{equation}
As with coordinates of the kinetic momentum we can construct from
$x_0$ and $y_0$ the creation and annihilation operators. We set
\begin{equation}
b=\sqrt{\frac{\mu\omega}{2}}(x_0-\rmi y_0),\qquad
b^\dagger =\sqrt{\frac{\mu\omega}{2}}(x_0+\rmi y_0),
\end{equation}
implying with the use of (3.11)
\begin{equation}
[b,b^\dagger] = 1.
\end{equation}
We now return to (3.10). Since equations (3.10) also hold in the
classical case, the operators
\begin{equation}
r_x := x-x_0=-\frac{1}{\mu\omega}\pi_y,\qquad
r_y := y-y_0=\frac{1}{\mu\omega}\pi_x,
\end{equation}
are the position observables of a particle on a circle. More
precisely, they are coordinates of the radius vector of a particle
moving in a circle with the center at the point $(x_0,y_0)$. From
(3.14) and (3.6) it follows that
\begin{equation}
[r_x,r_y] = \frac{\rmi}{\mu\omega}.
\end{equation}
We have the following formula for the squared radius of the circle:
\begin{equation}
{\bi
r}^2=r_x^2+r_y^2=(x-x_0)^2+(y-y_0)^2=\frac{1}{(\mu\omega)^2}
(\pi_x^2+\pi_y^2)=\frac{2}{\mu\omega^2}H_{\perp}
\end{equation}
following directly from (3.10) and (3.5).
\section{Coherent states for a particle in a magnetic field}
Experience with the coherent states for circular motion
described in section 2 indicates that in order to introduce the
coherent states we should first identify the algebra adequate for
the study of the motion of a charged particle in a uniform magnetic
field. As with (2.1) such algebra should include the angular
momentum operator. It seems that the most natural candidate is the
operator defined by
\begin{equation}
L = ({\bi r}\times\bpi)_z = r_x\pi_y-r_y\pi_x.
\end{equation}
Indeed, eqs.\ (3.14) and (3.16) taken together yield
\begin{equation}
L = -\mu\omega {\bi r}^2,
\end{equation}
which coincides with the classical expression. Furthermore, it can
be easily verified that it commutes with the orbit center-coordinate
operators $x_0$ and $y_0$. It should be noted however that since
\begin{equation}
[L,r_x] = 2\rmi r_y,\qquad [L,r_y] = -2\rmi r_x,
\end{equation}
following directly from (4.2) and (3.15), the generator of rotations
about the axis passing through the center of the circle and
perpendicular to the $x,y$ plane, is not $L$ but $\frac{1}{2}L$.
Therefore, the counterpart of the operator $J$ satisfying (2.1)
which is the generator of the rotations, is not $L$ but $\frac{1}{2}L$.
Now, we introduce the operator representing the position of a
particle on a circle of the form
\begin{equation}
r_+ = r_x+\rmi r_y.
\end{equation}
This operator is a natural counterpart of the unitary operator $U$
representing the position of a quantum particle on a unit circle
discussed in section 2. Clearly, the algebra should include the
orbit-center operators $x_0$ and $y_0$. Bearing in mind the
parametrization (4.4) it is plausible to introduce the operator
\begin{equation}
r_{0+} = x_0+\rmi y_0
\end{equation}
which has the meaning of the operator corresponding to the center
of the circle. In order to complete the algebra we also introduce the
Hermitian conjugates of the operators $r_+$ and $r_{0+}$,
respectively, such that
\begin{equation}
r_- = r_x-\rmi r_y,\qquad r_{0-} = x_0-\rmi y_0.
\end{equation}
Taking into account (4.3), (3.15) and (3.11) we arrive at the
following algebra which seems to be most natural in the case of the
circular motion of a charged particle in a uniform magnetic field:
\begin{equation}
\fl [L,r_\pm]=\pm 2r_\pm,\quad [L,r_{0\pm}]=0,\quad
[r_+,r_-]=\frac{2}{\mu\omega},\quad [r_{0+},r_{0-}]=-\frac{2}{\mu\omega},
\quad [r_{\pm},r_{0\pm}]=0.
\end{equation}
The algebra (4.7) has the Casimir operator given in the unitary
irreducible representation by
\begin{equation}
r_-r_+ + \frac{1}{\mu\omega}L = cI,
\end{equation}
where $c$ is a constant. We choose the representation referring to
$c=-\frac{1}{\mu\omega}$ because it is the only one such that (4.8)
with $r_\pm$ given by (4.4) and (4.6) is equivalent to (4.2).
Consider now the creation and annihilation operators defined by
\begin{equation}
a=\sqrt{\frac{\mu\omega}{2}}r_+,\qquad
a^\dagger=\sqrt{\frac{\mu\omega}{2}}r_-,
\end{equation}
which coincide in view of (4.4) and (3.14) with the operators
(3.7). The Casimir (4.8) with $c=-\frac{1}{\mu\omega}$ written with
the help of the Bose operators (4.9) takes the form
\begin{equation}
L = -(2N_a+1),
\end{equation}
where $N_a=a^\dagger a$. Furthermore, it follows from (4.7) that
the creation and annihilation operators such that (see (3.12), (4.5)
and (4.6))
\begin{equation}
b = \sqrt{\frac{\mu\omega}{2}}r_{0-},\qquad b^\dagger =
\sqrt{\frac{\mu\omega}{2}}r_{0+}
\end{equation}
commute with $a$ and $a^\dagger$. Therefore, the operators
$N_a=a^\dagger a$ and $N_b=b^\dagger b$, commute with each other.
Consider the irreducible representation of the algebra (4.7) spanned
by the common eigenvectors of the number operators $N_a$ and $N_b$
satisfying
\begin{equation}
N_a |n,m\rangle = n|n,m\rangle,\qquad N_b |n,m\rangle = m|n,m\rangle,
\end{equation}
where $n$ and $m$ are nonnegative integers. Using (4.10), (4.9) and
(4.11) we find that the generators of the algebra (4.7) act on the
basis vectors $|n,m\rangle$ in the following way:
\numparts
\begin{eqnarray}
L|n,m\rangle &=& -(2n+1)|n,m\rangle,\\
r_+|n,m\rangle &=& \sqrt{\frac{2n}{\mu\omega}}|n-1,m\rangle,\\
r_-|n,m\rangle &=& \sqrt{\frac{2(n+1)}{\mu\omega}}|n+1,m\rangle,\\
r_{0+}|n,m\rangle &=& \sqrt{\frac{2(m+1)}{\mu\omega}}|n,m+1\rangle,\\
r_{0-}|n,m\rangle &=& \sqrt{\frac{2m}{\mu\omega}}|n,m-1\rangle.
\end{eqnarray}
\endnumparts
Now bearing in mind the form of the eigenvalue equation (2.4) and
the discussion above we define the coherent states for a charged
particle in a uniform magnetic field as the simultaneous
eigenvectors of the commuting non-Hermitian operators $Z$ and
$r_{0-}$:
\numparts
\begin{eqnarray}
&&Z|\zeta,z_0\rangle = \zeta |\zeta,z_0\rangle,\\
&&r_{0-}|\zeta,z_0\rangle = z_0|\zeta,z_0\rangle,
\end{eqnarray}
\endnumparts
where
\begin{equation}
Z=e^{-\frac{L}{2}+\frac{1}{2}}r_+,
\end{equation}
and we recall that $r_{0-}$ is proportional to the Bose annihilation
operator $b$ (see (4.11)), so that the coherent states
$|\zeta,z_0\rangle$ can be viewed as tensor product of the
eigenvectors $|\zeta\rangle$ of the operator $Z$ and the standard
coherent states $|z_0\rangle$. Clearly, the complex number $\zeta$
parametrizes the classical phase space for the circular motion of a
charged particle while the complex number $z_0$ represents the
position of the center of the circle. Taking into account (4.14) and
(4.13) we find
\begin{equation}
\langle n,m|\zeta,z_0\rangle=\left(\frac{\mu\omega}{2}\right)^\frac{n}{2}
\frac{\zeta^n}{\sqrt{n!}}e^{-\frac{1}{2}(n+\frac{1}{2})^2}
\left(\frac{\mu\omega}{2}\right)^\frac{m}{2}\frac{z_0^m}{\sqrt{m!}}.
\end{equation}
Now, the form of the operator $Z$ and (2.6) indicate the following
parametrization of the complex number $\zeta$:
\begin{equation}
\zeta = r(l)e^{-\frac{l}{2}+\rmi\varphi},
\end{equation}
where $l$ is real and non-positive, and
$r(l)=\sqrt{-\frac{l}{\mu\omega}}$ is the classical radius of the
circle along which the particle moves, as implied by the classical relation
$l=-\mu\omega r^2$. Further, in accordance with (4.6) we set
\begin{equation}
z_0 = \overline x_0 - \rmi\overline y_0,
\end{equation}
where $\overline x_0$ and $\overline y_0$ are real. Using (4.17)
and (4.18) we can write (4.16) in the form
\begin{equation}
\fl \langle n,m|l,\varphi;\overline x_0,\overline y_0 \rangle =
\left(-\frac{l}{2}e^{-l}\right)^\frac{n}{2}\frac{e^{\rmi n\varphi}}
{\sqrt{n!}}e^{-\frac{1}{2}(n+\frac{1}{2})^2}\frac{1}{\sqrt{m!}}
\left[\frac{1}{\sqrt{2}}\left(\frac{\overline x_0}{\lambda}-\rmi
\frac{\overline y_0}{\lambda}\right)\right]^m,
\end{equation}
where $|l,\varphi;\overline x_0,\overline y_0\rangle\equiv|\zeta,z_0
\rangle$ with $\zeta$ and $z_0$ given by (4.17) and (4.18),
respectively, and $\lambda=1/\sqrt{\mu\omega}$ is the classical
radius of the ground state Landau orbit.
As with the states $|\xi\rangle$ given by (2.4) our most important
criterion to test the correctness of the introduced coherent states
$|\zeta,z_0\rangle$ will be their closeness to the classical phase
space. Consider the expectation value of the angular momentum
operator $L$. Taking into account the completeness of the states
$|n,m\rangle$, (4.13a) and (4.19) we get
\begin{equation}
\langle L\rangle_l=\frac{\langle l,\varphi;\overline x_0,\overline y_0|L|
l,\varphi;\overline x_0,\overline y_0\rangle}
{\langle l,\varphi;\overline x_0,\overline y_0|
l,\varphi;\overline x_0,\overline y_0\rangle}=
-\frac{\sum_{n=0}^\infty \frac{2n+1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+\frac{1}{2})^2}}
{\sum_{n=0}^\infty \frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+\frac{1}{2})^2}}.
\end{equation}
From computer calculations it follows that $\langle
L\rangle_l\approx l$. Nevertheless, in contrast to the case of
the coherent states for a quantum particle on a circle discussed in
section 2, the approximate equality of $\langle L\rangle_l$ and $l$
does not hold for arbitrarily small $|l|$. More precisely,
we have found that the approximation is very good for $|l|\ge1$ (the
bigger $|l|$, the better the approximation). For example, if $|l|\sim1$
then the relative error $|(\langle L\rangle_l-l)/l|\sim1\%$. In our
opinion such behavior of $\langle L \rangle_l$ means that for small
$|l|$ the quantum fluctuations are not negligible and the
description based on the concept of the classical phase space is not
an adequate one. We remark that the same phenomenon have been
observed in the case of the coherent states for a particle on a
sphere \cite{8}. Thus, it turns out that the parameter $l$ in (4.17)
can be identified (in general approximately) with the classical
angular momentum of a charged particle in a uniform magnetic field.
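The series (4.20) can be summed directly; a minimal Python sketch (an illustration with an arbitrary truncation length, evaluating the weights in log-space for numerical stability) is the following.
\begin{verbatim}
# Sketch of (4.20): <L>_l for the coherent states (4.14), to be compared with
# the classical value l (l <= 0).
import numpy as np
from math import log, lgamma

def L_average(l, nmax=400):
    x = -(l / 2.0) * np.exp(-l)                       # the combination (-l/2)e^{-l}
    n = np.arange(nmax)
    logw = n * log(x) - np.array([lgamma(k + 1.0) for k in n]) - (n + 0.5)**2
    w = np.exp(logw - logw.max())                     # rescaled weights
    return -np.sum((2 * n + 1) * w) / np.sum(w)

for l in (-1.0, -5.0, -20.0):
    print(l, L_average(l))                            # compare with l itself
\end{verbatim}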
We now discuss the position of a particle on a circle in the context
of the introduced coherent states. Using (4.13b) and (4.19) we find
\begin{equation}
\fl \langle r_+\rangle_{(l,\varphi)}=
\frac{\langle l,\varphi;\overline x_0,\overline y_0|r_+|
l,\varphi;\overline x_0,\overline y_0\rangle}
{\langle l,\varphi;\overline x_0,\overline y_0|
l,\varphi;\overline x_0,\overline y_0\rangle}=
r(l)e^{\rmi\varphi}e^{-\frac{1}{4}}e^{-\frac{l}{2}}
\frac{\sum_{n=0}^\infty \frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+1)^2}}
{\sum_{n=0}^\infty \frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+\frac{1}{2})^2}},
\end{equation}
where $r(l)=\sqrt{-\frac{l}{\mu\omega}}$ is the classical formula for
the radius of the circle along which the particle moves (see (4.17)).
The computer calculations indicate that
\begin{equation}
\langle r_+\rangle_{(l,\varphi)}\approx
r(l)e^{\rmi\varphi}e^{-\frac{1}{4}},
\end{equation}
where the approximation is very good but a bit worse than that in
the case of $\langle L\rangle_l$. Namely, for $|l|=5$ the
relative error is of order $1\%$. Because of the term
$e^{-\frac{1}{4}}$ it turns out that the average value of $r_+$ does
not belong to the circle with radius $r(l)$. Motivated by the
formal resemblance of (4.22) with $r(l)=1$ and (2.12) we identify the
correct expectation value as
\begin{equation}
\langle\!\langle r_+ \rangle\!\rangle_{(l,\varphi)}=e^{\frac{1}{4}}
\langle r_+\rangle_{(l,\varphi)}=r(l)e^{\rmi\varphi}e^{-\frac{l}{2}}
\frac{\sum_{n=0}^\infty \frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+1)^2}}
{\sum_{n=0}^\infty \frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^n
e^{-(n+\frac{1}{2})^2}},
\end{equation}
so
\begin{equation}
\langle\!\langle r_+ \rangle\!\rangle_{(l,\varphi)}\approx
r(l)e^{\rmi\varphi}
\end{equation}
which is a counterpart of (2.14). In our opinion, the appearance of
the same factor $e^{-\frac{1}{4}}$ in formulas (2.12) and (4.22)
confirms the correctness of the approach taken up in this work. In
view of the form of (4.24) it appears that $r(l)e^{\rmi\varphi}$ (see (4.17))
can be interpreted as the classical parametrization of the position of a
charged particle in a uniform magnetic field.
We now study the distribution of vectors $|n,m\rangle$ in the
normalized coherent state. The computer calculations indicate that
the function
\begin{equation}
\fl p_{n,m}(l,\overline x_0,\overline y_0) =
\frac{|\langle n,m|l,\varphi;\overline x_0,\overline y_0 \rangle|^2}
{\langle l,\varphi;\overline x_0,\overline y_0|
l,\varphi;\overline x_0,\overline y_0\rangle}
=\frac{\frac{1}{n!}\left(-\frac{l}{2}e^{-l}\right)^ne^{-(n+\frac{1}{2})^2}
\frac{1}{m!}\left(\frac{\mu\omega}{2}\right)^m(\overline x_0^2+
\overline y_0^2)^m}{\left(\sum_{n=0}^{\infty}\frac{1}{n!}\left(-\frac{l}{2}
e^{-l}\right)^n e^{-(n+\frac{1}{2})^2}\right)e^{\frac{\mu\omega}{2}
(\overline x_0^2+\overline y_0^2)}}
\end{equation}
which gives the probability of finding the system in the state
$|n,m\rangle$ when the system is in the normalized coherent state
$|l,\varphi;\overline x_0,\overline y_0\rangle/\sqrt{
\langle l,\varphi;\overline x_0,\overline y_0|l,\varphi;\overline
x_0,\overline y_0\rangle}$, is peaked, for fixed $l$, $m$, $\overline x_0$ and
$\overline y_0$, at the point $n_{{\rm max}}$ coinciding with the integer nearest
to $-(l+1)/2$ (see figure 1). In view of the relation (4.10) this
observation confirms once more the interpretation of the parameter
$l$ as the classical angular momentum.
\begin{figure*}
\caption{The plot of $p_{n,m}(l,\overline x_0,\overline y_0)$.}
\end{figure*}
For the sake of completeness we now write down the formula for the
expectation value, in the normalized coherent state, of the operator
$r_{0-}$ representing the position of the center of the circle:
\begin{equation}
\frac{\langle l,\varphi;\overline x_0,\overline y_0|r_{0-}|
l,\varphi;\overline x_0,\overline y_0\rangle}{\langle l,\varphi;\overline x_0,\overline y_0|
l,\varphi;\overline x_0,\overline y_0\rangle} = \overline x_0
-\rmi\overline y_0
\end{equation}
following immediately from (4.14b) and (4.18). Thus, as expected,
$\overline x_0$ and $\overline y_0$ are the classical coordinates of the
center of the circle along which the particle moves.
We finally point out that the introduced coherent states are stable
with respect to the Hamiltonian $H_\perp$ given by (3.9). Indeed,
we recall that $x_0$ and $y_0$, and thus $r_{0-}$ are integrals of
the motion. Further eqs.\ (3.9) and (4.10) yield
\begin{equation}
H_\perp = -\omega \frac{L}{2}.
\end{equation}
Hence, using (4.15) and the first commutator from (4.7) we get
\begin{equation}
Z(t) = e^{\rmi tH_\perp}Ze^{-\rmi tH_\perp}=e^{-\rmi\omega t}Z
\end{equation}
which leads to
\begin{equation}
Z(t) |\zeta,z_0\rangle = \zeta(t) |\zeta,z_0\rangle,
\end{equation}
where $\zeta(t)=e^{-\rmi\omega t}\zeta$.
\section{Comparison with the Malkin-Man'ko coherent states}
In this section we compare the coherent states introduced above and
the Malkin-Man'ko coherent states \cite{1} mentioned in the
introduction using as a test of correctness the closeness to the
classical phase space. We first briefly sketch the basic properties
of the Malkin-Man'ko coherent states. Up to an irrelevant
multiplicative constant these states can be defined as the common
eigenvectors of the operators $r_+$ and $r_{0-}$
\numparts
\begin{eqnarray}
&&r_+ |z,z_0\rangle = z |z,z_0\rangle,\\
&&r_{0-}|z,z_0\rangle = z_0 |z,z_0\rangle.
\end{eqnarray}
\endnumparts
Using (4.13c) and (4.13d) we find
\begin{equation}
\langle n,m|z,z_0\rangle = \left(\frac{\mu\omega}{2}\right)^\frac{n}{2}
\frac{z^n}{\sqrt{n!}}\left(\frac{\mu\omega}{2}\right)^\frac{m}{2}
\frac{z_0^m}{\sqrt{m!}}.
\end{equation}
Of course, the states $|z,z_0\rangle$ are the standard coherent
states for the Heisenberg-Weyl algebra generated by the operators $a$,
$a^\dagger$, $b$ and $b^\dagger$ (see (4.9) and (4.11)). It is also
clear that $z$ and $z_0$ represent the position of a particle on a
circle and the coordinates of the circle center, respectively. The
parametrization of the complex number $z$ consistent with (4.4) is
of the form
\begin{equation}
z = \overline x + \rmi\overline y,
\end{equation}
where $\overline x$ and $\overline y$ are rectangular coordinates of
a particle on a circle. Evidently, the parametrization of $z_0$ is
the same as in (4.18). Now, it follows directly from (5.1a) that
\begin{equation}
\langle r_+\rangle_{(\overline x,\overline y)}=
\frac{\langle \overline x,\overline y;\overline x_0,\overline y_0|r_+|
\overline x,\overline y;\overline x_0,\overline y_0\rangle}
{\langle \overline x,\overline y;\overline x_0,\overline y_0|
\overline x,\overline y;\overline x_0,\overline y_0\rangle}=
\overline x + \rmi\overline y,
\end{equation}
where $|\overline x,\overline y;\overline x_0,\overline y_0\rangle
\equiv|z,z_0\rangle$ with $z$ and $z_0$ given by (5.3) and (4.18),
respectively. The corresponding formula on the expectation value of
$r_{0-}$ in the normalized coherent state $|\overline x,\overline y;
\overline x_0,\overline y_0\rangle/\sqrt{\langle\overline x,
\overline y;\overline x_0,\overline y_0|\overline x,\overline y;
\overline x_0,\overline y_0\rangle}$ is the same as (4.26). Using
the polar coordinates we can write (5.4) in the form
\begin{equation}
\langle r_+\rangle_{(l,\varphi)}^{MM}=\langle r_+\rangle_{(\overline
x,\overline y)}=r(l)e^{\rmi\varphi},
\end{equation}
where $r(l)=\sqrt{\overline x^2+\overline y^2}=\sqrt{-\frac{l}
{\mu\omega}}$ following from the classical formula $l=-\mu\omega
r^2~=~-\mu\omega(\overline x^2+\overline y^2)$; the superscript MM
stands for Malkin--Man'ko. We point out that, in contrast to
(4.24), we have the exact relation (5.5). In this sense the
Malkin-Man'ko coherent states are a better approximation of the
configuration space than the states defined by us in the previous
section. Furthermore, taking into account (4.8) with
$c=-1/(\mu\omega)$, (5.1) and (5.3) we find
\begin{equation}
\langle L\rangle_{(\overline x,\overline y)}=
\frac{\langle \overline x,\overline y;\overline x_0,\overline y_0|L|
\overline x,\overline y;\overline x_0,\overline y_0\rangle}
{\langle \overline x,\overline y;\overline x_0,\overline y_0|
\overline x,\overline y;\overline x_0,\overline y_0\rangle}=
-\mu\omega(\overline x^2+\overline y^2)-1.
\end{equation}
Therefore, using the classical relation $l=-\mu\omega
r^2~=~-\mu\omega(\overline x^2+\overline y^2)$, we get
\begin{equation}
\langle L\rangle_l^{MM}=\langle L\rangle_{(\overline
x,\overline y)}=l-1.
\end{equation}
Thus, it turns out that we have a shift in the classical angular momentum
and, in the light of the observations of section 4 (see the discussion below
formula (4.20)), the approximation $\langle L\rangle_l^{MM}\approx l$ is worse
than the approximate relation $\langle L\rangle_l\approx l$ which holds
in the case of the coherent states introduced in
the previous section. In other words, the coherent states defined
by (4.14) are a better approximation of the ``momentum part'' of the
phase space. We stress that the shift in $l$ in the formula (5.7)
is related to the zero point energy and cannot be ignored. We
finally remark that as with the states given by (4.14) the
Malkin-Man'ko coherent states are stable with respect to the
evolution generated by the Hamiltonian (3.9).
We now compare the coherent states discussed in section 4 and the
coherent states introduced by Malkin and Man'ko taking as a criterion
of correctness of the coherent states their closeness to the points
of the classical phase space. Adopting the idea of the method of
least squares we use as the measure of such closeness the following
quantities
\begin{equation}
d(l) = \sqrt{\left(\frac{\langle\!\langle r_+ \rangle\!\rangle_{(l,0)}
-r(l)}{r(l)}\right)^2+\left(\frac{\langle L\rangle_l-l}{l}\right)^2},
\end{equation}
where $\langle\!\langle r_+ \rangle\!\rangle_{(l,\varphi)}$ and
$\langle L\rangle_l$ are given by (4.23) and (4.20), respectively,
for the coherent states defined by (4.14), and analogously
\begin{equation}
d^{MM}(l) = \sqrt{\left(\frac{\langle r_+\rangle_{(l,0)}^{MM}-r(l)}
{r(l)}\right)^2+\left(\frac{\langle L\rangle_l^{MM}-l}{l}\right)^2}
=\frac{1}{|l|},
\end{equation}
for the Malkin-Man'ko coherent states, where in both the above
formulas $r(l)=\sqrt{-\frac{l}{\mu\omega}}$ (see (4.23) and (5.5)).
The distances $d(l)$ and $d^{MM}(l)$ are compared in figure 2. As
evident from figure 2, the coherent states for a charged particle
in a magnetic field introduced in this paper are better approximations
of the phase space than the coherent states of Malkin and Man'ko.
\begin{figure*}
\caption{Comparison of the closeness to the phase space of the coherent
states introduced in this work (solid line) and the Malkin-Man'ko
coherent states (dotted line) by means of the distances $d(l)$ and
$d^{MM}(l)$.}
\end{figure*}
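The distances (5.8) and (5.9) are straightforward to evaluate; a minimal Python sketch (an illustration with an arbitrary truncation length, summing the series of (4.20) and (4.23) in log-space) is the following.
\begin{verbatim}
# Sketch of (5.8)-(5.9): d(l) from the series (4.20) and (4.23), and
# d^MM(l) = 1/|l|, as compared in figure 2.
import numpy as np
from math import log, lgamma, sqrt

def log_weights(l, shift, nmax=400):
    x = -(l / 2.0) * np.exp(-l)
    n = np.arange(nmax)
    return n, n * log(x) - np.array([lgamma(k + 1.0) for k in n]) - (n + shift)**2

def d(l):
    n, logw_half = log_weights(l, 0.5)
    _, logw_one = log_weights(l, 1.0)
    c = logw_half.max()
    w_half, w_one = np.exp(logw_half - c), np.exp(logw_one - c)
    L_avg = -np.sum((2 * n + 1) * w_half) / np.sum(w_half)        # eq. (4.20)
    rr_rel = np.exp(-l / 2.0) * np.sum(w_one) / np.sum(w_half)    # <<r_+>>/r(l), eq. (4.23)
    return sqrt((rr_rel - 1.0)**2 + ((L_avg - l) / l)**2)

for l in (-2.0, -5.0, -10.0, -20.0):
    print(l, d(l), 1.0 / abs(l))                                  # d(l) vs d^MM(l)
\end{verbatim}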
\section{Conclusion}
We have introduced in this work the new coherent states for a
charged particle in a uniform magnetic field. The construction of
these states based on the coherent states for the quantum mechanics
on a circle seems to be more adequate than that of Malkin and
Man'ko. Indeed, the fact that a classical particle in a uniform magnetic
field moves transversely along a circle is recognized, in the case of
the Malkin-Man'ko coherent states, only at the level of the evolution
of these states. Furthermore, the coherent states introduced in
this work are closer to the points of the classical phase space than
the states discussed by Malkin and Man'ko. We realize that the best
criterion for such closeness would be the minimization of some
uncertainty relations. In the case of the coherent states for a
particle on a circle the uncertainty relations have been introduced
by the authors in \cite{12} (see also \cite{13} and \cite{14}). Nevertheless,
the problem of finding the analogous relations for the coherent
states discussed herein seems to be a difficult task. The reason is
that the radius of the circle is not a c-number as with the coherent
states given by (2.4). Anyway, in our opinion the simple criterion
of closeness of the coherent states to the points of the classical
phase space based on the definitions (5.8) and (5.9) is precise
enough to decide that the coherent states introduced herein are
better than those discovered by Malkin and Man'ko. Finally,
the introduced coherent states should form
a complete set. We recall that the completeness of coherent states
is connected with the existence, via the ``resolution of the identity
operator'', of the Fock-Bargmann representation. However, the problem of
finding the resolution of the identity operator is usually a nontrivial task.
In our case it is
related to the solution of the problem of moments \cite{15} such that
\begin{displaymath}
\int_{0}^{\infty}x^{n-1}\rho(x)dx = n!e^{(n+\frac{1}{2})^2},
\end{displaymath}
where $\rho(x)$ is unknown density. Because of the complexity of
the problem the Fock-Bargmann representation for the introduced
coherent states will be discussed in a separate work.
\ack
This paper has been supported by the Polish Ministry of Scientific
Research and Information Technology under the grant
No PBZ-MIN-008/P03/2003.
\section*{References}
\end{document} |
\begin{document}
\begin{abstract}
In the present paper the authors construct normal numbers in base $q$
by concatenating $q$-adic expansions of prime powers $\left\lfloor\alpha
p^\theta\right\rfloor$ with $\alpha>0$ and $\theta>1$.
\end{abstract}
\title{Construction of normal numbers via generalized prime power sequences}
\section{Introduction}
Let $q\geq 2$ be a fixed integer and $\sigma=0.a_1a_2\dots$ be the
$q$-ary expansion of a real number $\sigma$ with $0<\sigma<1$. We
write $d_1\cdots d_\ell\in\{0,1,\dots,q-1\}^\ell$ for a block of $\ell$
digits in the $q$-ary expansion. By $\mathcal{N}(\sigma;d_1\cdots
d_\ell;N)$ we denote the number of occurrences of the block $d_1\cdots
d_\ell$ in the first $N$ digits of the $q$-ary expansion of $\sigma$.
We call $\sigma$ normal to the base $q$ if for every
fixed $\ell\geq 1$
\begin{align*}
\mathcal{R}_N(\sigma)=\mathcal{R}_{N,\ell}(\sigma)= \sup_{d_1\cdots
d_\ell}\left\vert\frac{1}{N}\mathcal{N}(\sigma;d_1\cdots d_\ell;N)
-\frac{1}{q^\ell}\right\vert=o(1)
\end{align*}
as $N\rightarrow\infty$, where the supremum is taken over all
blocks $d_1\cdots d_\ell\in\{0,1,\dots,q-1\}^\ell$.
A slightly different, but equivalent, definition of normal numbers is due to
Borel \cite{Borel1909:les_probabilites_denombrables}, who also showed that
almost all numbers are normal (with respect to the Lebesgue measure) to any
base. However, despite their omnipresence among the reals, all numbers
currently known to be normal arise from ad hoc constructions. In
particular, we do not know whether given numbers, such as $\pi$, $e$, $\log 2$
and $\sqrt 2$, are normal.
In this paper we consider the construction of normal numbers in
base $q$ by concatenating the $q$-ary expansions of the integer parts of certain
functions. A first result was achieved by
Champernowne~\cite{Champernowne1933:construction_decimals_normal}, who showed that
\begin{align*}
0.1\,2\,3\,4\,5\,6\,7\,8\,9\,10\,11\,12\,13\,14\,15\,16\,17\,18\,19\,20\dots
\end{align*}
is normal in base $10$. This construction can be easily
generalised to any integer base $q$. Copeland and Erd{\"o}s
\cite{copeland_erdoes1946:note_on_normal} proved that
\begin{align*}
0.2\,3\,5\,7\,11\,13\,17\,19\,23\,29\,31\,37\,41\,43\,47\,53\,59\,61\,67\dots
\end{align*}
is normal in base $10$.
This construction principle has been generalized in several directions. In
particular, Dumont and Thomas \cite{Dumont_Thomas1994:modifications_de_nombres}
used transducers in order to rewrite the blocks of the expansion of a given normal
number to produce another one. Such constructions using automata lead to
$q$-automatic numbers, i.e., real numbers whose $q$-adic representation
is a $q$-automatic sequence (cf. Allouche and Shallit
\cite{allouche_shallit2003:automatic_sequences}). By these means one can show
that for instance the number
\[
\sum_{n\geq0}3^{-2^n}2^{-3^{2^n}}
\]
is normal in base 2.
In the present paper we want to use another approach to generalize
Champernowne's construction of normal numbers. In particular, let $f$ be any
function and let $\left\lfloor f(n)\right\rfloor_q$ denote the base $q$ expansion of the integer part of
$f(n)$. Then define
\begin{equation}\label{normal}
\begin{split}
\sigma_q&=\sigma_q(f)=
0.\left\lfloor f(1)\right\rfloor_q\left\lfloor f(2)\right\rfloor_q\left\lfloor f(3)\right\rfloor_q \left\lfloor f(4)\right\rfloor_q \left\lfloor f(5)\right\rfloor_q \left\lfloor f(6)\right\rfloor_q \dots,
\end{split}
\end{equation}
where the arguments run through all positive integers. Champernowne's example
corresponds to the choice $f(x)=x$ in \eqref{normal}. Davenport and Erd{\"o}s
\cite{davenport_erdoes1952:note_on_normal} considered the case where $f(x)$ is
an integer valued polynomial and showed that in this case the number
$\sigma_q(f)$ is normal. This construction was subsequently extended to
polynomials over the rationals and over the reals by Schiffer
\cite{schiffer1986:discrepancy_normal_numbers} and Nakai and
Shiokawa~\cite{Nakai_Shiokawa1992:discrepancy_estimates_class}, who were both
able to show that $\mathcal{R}_N(\sigma_q(f))=\mathcal{O}(1/\log N)$. This
estimate is best possible, as was proved by Schiffer
\cite{schiffer1986:discrepancy_normal_numbers}. Furthermore Madritsch et al.
\cite{Madritsch_Thuswaldner_Tichy2008:normality_numbers_generated} gave a
construction for $f$ being an entire function of bounded logarithmic order.
Nakai and Shiokawa \cite{Nakai_Shiokawa1990:class_normal_numbers} constructed
a normal number by concatenating the integer part of a pseudo-polynomial
sequence, i.e., a sequence $(\left\lfloor p(n)\right\rfloor)_{n\geq1}$ where
\begin{gather}\label{mani:pseudopoly}
p(x)=\alpha_0 x^{\theta_0}+\alpha_1x^{\theta_1}+\cdots+\alpha_dx^{\theta_d}
\end{gather}
with $\alpha_0,\theta_0,\ldots,\alpha_d,\theta_d\in\mathbb{R}$, $\alpha_0>0$,
$\theta_0>\theta_1>\cdots>\theta_d>0$ and at least one
$\theta_i\not\in\mathbb{Z}$.
This method of construction by concatenating function values is in strong
connection with properties of $q$-additive functions. We call a function $f$ strictly
$q$-additive, if $f(0)=0$ and the function operates only on the digits of the
$q$-adic representation, i.e.,
\[
f(n)=\sum_{h=0}^\ell f(d_h)\quad\text{ for }\quad n=\sum_{h=0}^\ell d_hq^h.
\]
A very simple example of a strictly $q$-additive function is the sum of digits
function $s_q$, defined by
\[
s_q(n)=\sum_{h=0}^\ell d_h\quad\text{ for }\quad n=\sum_{h=0}^\ell d_hq^h.
\]
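For illustration, a direct implementation of $s_q$ and a check of its strict $q$-additivity might look as follows (a minimal Python sketch; the numerical example is an arbitrary choice).
\begin{verbatim}
# Sketch: the q-ary sum-of-digits function s_q and its strict q-additivity,
# i.e. s_q(a*q^k + b) = s_q(a) + s_q(b) whenever 0 <= b < q^k.
def s_q(n, q):
    total = 0
    while n > 0:
        n, digit = divmod(n, q)
        total += digit
    return total

q = 10
assert s_q(1234, q) == s_q(12, q) + s_q(34, q)   # 1234 = 12*10^2 + 34
print(s_q(1234, q))                              # 10
\end{verbatim}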
Refining the methods of Nakai and Shiokawa the first author obtained the following result.
\begin{thm*}[{\cite[Theorem 1.1]{madritsch2012:summatory_function_q}}]
Let $q\geq2$ be an integer and $f$ be a strictly $q$-additive function. If $p$ is a
pseudo-polynomial as defined in (\ref{mani:pseudopoly}), then there exists
$\varepsilon>0$ such that
\begin{gather}\label{mani:mainsum}
\sum_{n\leq N}f\left(\left\lfloor p(n)\right\rfloor\right)
=\mu_fN\log_q(p(N))
+NF\left(\log_q(p(N))\right)
+\mathcal{O}\left(N^{1-\varepsilon}\right),
\end{gather}
where
\[
\mu_f=\frac1q\sum_{d=0}^{q-1}f(d)
\]
and $F$ is a $1$-periodic function depending only on $f$ and $p$.
\end{thm*}
The aim of the present paper is to extend the above results to prime power
sequences. Let $f$ be a function and set
\begin{gather}\label{mani:tau}
\tau_q=\tau_q(f)=0.\left\lfloor f(2)\right\rfloor_q \left\lfloor f(3)\right\rfloor_q \left\lfloor f(5)\right\rfloor_q \left\lfloor f(7)\right\rfloor_q \left\lfloor f(11)\right\rfloor_q \left\lfloor f(13)\right\rfloor_q \dots,
\end{gather}
where the arguments of $f$ run through the sequence of primes.
Letting $f$ be a polynomial with rational coefficients, Nakai and Shiokawa
\cite{Nakai_Shiokawa1997:normality_numbers_generated} could show that
$\tau_q(f)$ is normal. Moreover, letting $f$ be an entire function of bounded
logarithmic order, Madritsch $et al.$
\cite{Madritsch_Thuswaldner_Tichy2008:normality_numbers_generated} showed that
$\mathcal{R}_N(\tau_q(f))=\mathcal{O}(1/\log N)$.
At this point we want to mention the connection of normal numbers with uniform
distribution. In particular, a number $x\in[0,1]$ is normal to base $q$ if and
only if the sequence $\{q^nx\}_{n\geq0}$ is uniformly distributed modulo 1
(cf. Drmota and Tichy
\cite{drmota_tichy1997:sequences_discrepancies_and}). Here $\{y\}$
stands for the fractional part of $y$. Let us mention
Kaufman \cite{kaufman1979:distribution_surd_p} and
Balog \cite{balog1985:distribution_p_heta,balog1983:fractional_part_p}, who
investigated the distribution of the fractional part of $\sqrt p$ and
$p^\theta$ respectively. Harman \cite{harman1983:distribution_sqrtp_modulo}
gave estimates for the discrepancy of the sequence $\sqrt p$. In his papers
Schoissengeier~\cite{schoissengeier1979:connection_between_zeros,
schoissengeier1978:neue_diskrepanz_fuer} connected the estimation of the
discrepancy of $\alpha p^\theta$ with zero free regions of the Riemann zeta
function. This allowed Tolev
\cite{tolev1991:simultaneous_distribution_fractional} to consider the
multidimensional variant of this problem as well as to provide an explicit
estimate for the discrepancy. This result was improved for different special
cases by Zhai \cite{zhai2001:simultaneous_distribution_fractional}. Since the
results above deal with the case of $\theta<1$, Baker and Kolesnik
\cite{baker_kolesnik1985:distribution_p_alpha} extended these considerations to
$\theta>1$ and provided an explicit upper bound for the discrepancy in this
case. This result was improved by Cao and Zhai
\cite{cao_zhai1999:distribution_p_alpha} for $\frac53<\theta<3$. A
multidimensional extension is due to Srinivasan and Tichy
\cite{srinivasan_tichy1993:uniform_distribution_prime}.
Combining the methods for proving uniform distribution mentioned above with a
recent paper by Bergelson et al.
\cite{bergelson_kolesnik_madritsch+2012:uniform_distribution_prime} we want to
extend the construction of Nakai and Shiokawa
\cite{Nakai_Shiokawa1990:class_normal_numbers} to prime numbers. Our first main
result is the following theorem.
\begin{thm}\label{thm:normal}
Let $\theta>1$ and $\alpha>0$. Then
\[
\mathcal{R}_N(\tau_q(\alpha x^\theta))=\mathcal{O}(1/\log N).
\]
\end{thm}
\begin{rem}
This estimate is best possible as Schiffer \cite{schiffer1986:discrepancy_normal_numbers} showed.
\end{rem}
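As a quick empirical illustration of Theorem \ref{thm:normal}, one can generate a prefix of $\tau_q(\alpha x^\theta)$ and count the occurrences of a fixed digit block; in the following minimal Python sketch (the helper names and the parameters are arbitrary illustrative choices) the observed frequency of a two-digit block should be close to $q^{-2}$.
\begin{verbatim}
# Sketch: a prefix of tau_q(alpha x^theta), obtained by concatenating the
# base-q expansions of floor(alpha p^theta) over primes p <= pmax, and the
# empirical frequency of a fixed two-digit block (expected near q^{-2}).
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def to_base_q(n, q):            # digits as characters; adequate for q <= 10
    s = ""
    while n > 0:
        s = str(n % q) + s
        n //= q
    return s or "0"

alpha, theta, q, pmax = 1.0, 1.5, 10, 10**4
prefix = "".join(to_base_q(int(alpha * p**theta), q) for p in primes_up_to(pmax))
block = "37"
freq = sum(prefix[i:i+2] == block for i in range(len(prefix) - 1)) / len(prefix)
print(len(prefix), freq)
\end{verbatim}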
In our second main result we use the connection of this construction of normal
numbers with the arithmetic mean of $q$-additive functions as described above. Known
results in this area are due to Shiokawa \cite{shiokawa1974:sum_digits_prime},
who was able to show the following theorem.
\begin{thm*}[{\cite[Theorem]{shiokawa1974:sum_digits_prime}}]
We have
\[
\sum_{p\leq x}s_q(p)=\frac{q-1}2\frac x{\log
q}+\mathcal{O}\left(x\left(\frac{\log\log x}{\log x}\right)^{\frac12}\right),
\]
where the sum runs over the primes and the implicit $\mathcal{O}$-constant may
depend on $q$.
\end{thm*}
Similar results concerning the moments of the sum of digits function over
primes have been established by K\'atai \cite{katai1977:sum_digits_primes}. An
extension to Beurling primes is due to Heppner
\cite{heppner1976:uber_die_summe}.
Let $\pi(x)$ stand for the number of primes less than or equal to
$x$. Adapting these ideas to our method we obtain the following theorem.
\begin{thm}\label{thm:summatoryfun}
Let $\theta>1$ and $\alpha>0$. Then
\[
\sum_{p\leq N}s_q(\left\lfloor\alpha p^\theta\right\rfloor)=\frac{q-1}2\pi(N)\log_qN^\theta+\mathcal{O}(\pi(N)),
\]
where the sum runs over the primes and the implicit $\mathcal{O}$-constant may
depend on $q$ and $\theta$.
\end{thm}
\begin{rem}
With simple modifications Theorem \ref{thm:summatoryfun} can be extended to
completely $q$-additive functions replacing $s_q$.
\end{rem}
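A rough numerical check of Theorem \ref{thm:summatoryfun} is equally simple; in the following Python sketch (the values of $q$, $\alpha$, $\theta$ and $N$ are arbitrary illustrative choices) only the leading term can be expected to be visible, since the error term $\mathcal{O}(\pi(N))$ is merely a factor $\log_q N^\theta$ smaller than the main term.
\begin{verbatim}
# Sketch: compare sum_{p<=N} s_q(floor(alpha*p^theta)) with the main term
# (q-1)/2 * pi(N) * log_q(N^theta) of the theorem above; the ratio tends
# to 1 only slowly because of the O(pi(N)) error term.
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def s_q(n, q):
    total = 0
    while n > 0:
        n, digit = divmod(n, q)
        total += digit
    return total

q, alpha, theta, N = 10, 1.0, 1.5, 10**5
primes = primes_up_to(N)
lhs = sum(s_q(int(alpha * p**theta), q) for p in primes)
main = (q - 1) / 2 * len(primes) * theta * math.log(N, q)
print(lhs, main, lhs / main)
\end{verbatim}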
The proof of the two theorems is divided into three parts. In the following
section we rewrite both statements and state the central theorem, which
combines them and which we prove in the rest of the paper. In Section
\ref{sec:tools} we present all the tools we need in the proof of the
central theorem. Finally, in Section \ref{sec:proof-prop-refm} we prove
the theorem.
\section{Preliminaries}\label{sec:preliminaries}
Throughout the paper, an interval denotes a set
\[
I=(\alpha,\beta]=\{x:\alpha<x\leq\beta\}
\quad\text{with}\quad\beta>\alpha\geq\frac12.
\]
We will often subdivide an interval into smaller ones. In particular we
use the observation that if $\log(\beta/\alpha)\ll\log N$, then
$(\alpha,\beta]$ is the union of, say, $s$ intervals of the type
$(\gamma,\gamma_1]$ with $s\ll\log N$ and $\gamma_1\leq2\gamma$. Given any
complex function $F$ on $I$, we have
\begin{gather}\label{bak:intervalsplit}
\left\vert\sum_{x\in I}F(x)\right\vert\ll(\log
N)\left\vert\sum_{\gamma<x\leq\gamma_1}F(x)\right\vert,
\end{gather}
for some such $(\gamma,\gamma_1]$.
In the proof $p$ will always denote a prime. We fix the block
$d_1\cdots d_\ell$ and write $\mathcal{N}(f(p))$ for the number of occurrences of
this block in the $q$-ary expansion of $\left\lfloor f(p)\right\rfloor$. By $\ell(m)$ we
denote the length of the $q$-ary expansion of an integer $m$.
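For concreteness, these counting quantities can be computed by brute force. The following sketch is our own illustration and not part of the original argument; the helper names \texttt{expansion\_length} and \texttt{block\_occurrences} are hypothetical.
\begin{verbatim}
# Illustration only: brute-force versions of the counting function
# N(m; d_1...d_l) and of the expansion length l(m) in base q.
def digits(m, q):
    """q-ary digits of m, most significant first."""
    ds = []
    while m > 0:
        ds.append(m % q)
        m //= q
    return ds[::-1] or [0]

def expansion_length(m, q):          # l(m)
    return len(digits(m, q))

def block_occurrences(m, block, q):  # N(m; d_1...d_l)
    ds, l = digits(m, q), len(block)
    return sum(1 for i in range(len(ds) - l + 1) if ds[i:i+l] == list(block))

# Example: the block (1, 0) occurs twice in 101100_2 = 44.
assert block_occurrences(44, (1, 0), 2) == 2
\end{verbatim}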
In the first step we want to get rid of the blocks that may occur between two
expansions. To this end we define an integer $N$ by
\begin{gather}\label{mani:P}
\sum_{p\leq N-1}\ell\left(\left\lfloor p^\theta\right\rfloor\right) <L\leq
\sum_{p\leq N}\ell\left(\left\lfloor p^\theta\right\rfloor\right),
\end{gather}
where $\sum$ indicates that the sum runs over all primes. Thus we get that
\begin{equation}\label{mani:NtoP}
\begin{split}
L&=\sum_{p\leq N}\ell(\left\lfloor p^\theta\right\rfloor)+\mathcal{O}(\pi(N))+\mathcal{O}(\theta \log_q(N))\\
&=\frac{\theta}{\log q}N+\mathcal{O}\left(\frac{N}{\log N}\right).
\end{split}\end{equation}
Here we have used the prime number theorem in the form
\[
\pi(x)=\mathrm{Li}\, x+\mathcal{O}\left(\frac x{(\log x)^G}\right),
\]
where $G$ is an arbitrary positive constant and
\[
\mathrm{Li}\,x=\int_2^x\frac{\mathrm{d}t}{\log t}.
\]
Let $\mathcal{N}(n;d_1\cdots d_\ell)$ be the number of occurrences of the block
$d_1\cdots d_\ell$ in the expansion of $n$. Since we have fixed the block
$d_1\cdots d_\ell$ we will write $\mathcal{N}(n)=\mathcal{N}(n;d_1\cdots
d_\ell)$ for short. Then \eqref{mani:NtoP} implies that
\begin{gather}\label{mani:Ntrunc}
\left\vert\mathcal{N}(\tau_q(x^\theta);d_1\cdots d_\ell;L)-\sum_{p\leq
N}\mathcal{N}(p^\theta)\right\vert\ll\frac L{\log L}.
\end{gather}
For the next step we collect all the values that have a certain length of
expansion. Let $j_0$ be a sufficiently large integer. Then for each
integer $j\geq j_0$ we get that there exists an $N_j$ such that
\[
q^{j-2}\leq f(N_j)<q^{j-1}\leq f(N_j+1)<q^j.
\]
We note that this is possible since $f$ is increasing and asymptotically grows
like its leading term. This implies that
\[
N_j\asymp q^{\frac j\theta}.
\]
Furthermore for $N\geq q^{j_0}$
we set $J$ to be the greatest length of the $q$-ary expansions of $f(p)$ over
the primes $p\leq N$, i.e.,
\begin{gather}\label{mani:JP}
J:=\max_{p\leq N}\ell(\left\lfloor f(p)\right\rfloor)=\log_q(f(N))+\mathcal{O}(1)\asymp\log
N.
\end{gather}
In the next step we want to perform the counting by adding the leading zeroes to
the expansion of $f(p)$. For $N_{j-1}<p\leq N_j$ we may write $f(p)$ in $q$-ary
expansion, i.e.,
\begin{gather*}
f(p)=b_{j-1}q^{j-1}+b_{j-2}q^{j-2}+\dots+b_{1}q+b_{0}+b_{-1}q^{-1}+\dots.
\end{gather*}
Then we denote by $\mathcal{N}^*(f(p))$ the number of occurrences of the block
$d_1,\ldots,d_\ell$ in the string $0\cdots0b_{j-1}b_{j-2}\cdots b_1b_0$, where we
filled up the expansion with zeroes such that it has length $J$. The error of
doing so can be estimated by
\begin{equation}\label{mani:NtoNstar}\begin{split}
0&\leq\sum_{p\leq N}\mathcal{N}^*(f(p))-\sum_{p\leq N}\mathcal{N}(f(p))\\
&\leq\sum_{j=j_0+1}^{J-1}(J-j)\left(\pi(N_{j+1})-\pi(N_{j})\right)+\mathcal{O}(1)\\
&\leq\sum_{j=j_0+2}^{J}\pi(N_{j})+\mathcal{O}(1)\ll\sum_{j=j_0+2}^{J}\frac{q^{j/\theta}}j
\ll\frac N{\log N}\ll\frac L{\log L}.\\
\end{split}\end{equation}
In the following two sections we will estimate this sum of indicator functions
in order to prove the following proposition.
\begin{prop}\label{mani:centralprop}
Let $\theta>1$ and $\alpha>0$. Then
\begin{gather}\label{mani:centralprop:statement}
\sum_{p\leq
N}\mathcal{N}^*\left(\left\lfloor \alpha p^\theta\right\rfloor\right)=q^{-\ell}\pi(N)\log_qN^\theta+\mathcal{O}\left(\frac{N}{\log
N}\right)
\end{gather}
\end{prop}
\begin{proof}[Proof of Theorem \ref{thm:normal}]
We insert \eqref{mani:centralprop:statement} into \eqref{mani:Ntrunc}
and get the desired result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:summatoryfun}]
For this proof we have to rewrite the statement. In particular, we use that the
sum of digits function counts the number of $1$s, $2$s, etc. and
assigns weights to them, i.e.,
\[
s_q(n)=\sum_{d=0}^{q-1}d\cdot\mathcal{N}(n;d).
\]
Thus
\begin{align*}
\sum_{p\leq N}s_q(\left\lfloor p^\theta\right\rfloor)
&=\sum_{p\leq N}\sum_{d=0}^{q-1}d\cdot\mathcal{N}(\left\lfloor p^\theta\right\rfloor;d)
=\sum_{p\leq
N}\sum_{d=0}^{q-1}d\cdot\mathcal{N}^*(\left\lfloor p^\theta\right\rfloor;d)+\mathcal{O}\left(\frac{N}{\log
N}\right)\\
&=\frac{q-1}2\pi(N)\log_q(N^\theta)+\mathcal{O}\left(\frac{N}{\log
N}\right)
\end{align*}
and the theorem follows.
\end{proof}
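As a side remark (our own addition, not part of the proofs), the digit identity used above is easy to verify numerically for small $n$.
\begin{verbatim}
# Illustration only: numerical check of s_q(n) = sum_d d * N(n; d).
def digits(m, q):
    ds = []
    while m > 0:
        ds.append(m % q)
        m //= q
    return ds or [0]

q = 7
for n in range(1, 2000):
    ds = digits(n, q)
    assert sum(ds) == sum(d * ds.count(d) for d in range(q))
\end{verbatim}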
\section{Tools}\label{sec:tools}
In this section we present all the tools we need in the proof of
Proposition \ref{mani:centralprop}. We start with an estimation which essentially goes
back to Vinogradov. This will provide us with Fourier expansions for the
indicator functions used in the proof. As usual given a real number
$y$, the expression $e(y)$ will stand for $\exp\{2\pi i y\}$.
\begin{lem}[{\cite[Lemma
12]{vinogradov2004:method_trigonometrical_sums}}]\label{vin:lem12}
Let $\alpha$, $\beta$, $\Delta$ be real numbers satisfying
\begin{gather*}
0<\Delta<\frac12,\quad\Delta\leq\beta-\alpha\leq1-\Delta.
\end{gather*}
Then there exists a periodic function $\psi(x)$ with period $1$,
satisfying
\begin{enumerate}
\item $\psi(x)=1$ in the interval $\alpha+\frac12\Delta\leq x
\leq\beta-\frac12\Delta$,
\item $\psi(x)=0$ in the interval $\beta+\frac12\Delta\leq x
\leq1+\alpha-\frac12\Delta$,
\item $0\leq\psi(x)\leq1$ in the remainder of the interval
$\alpha-\frac12\Delta\leq x\leq1+\alpha-\frac12\Delta$,
\item $\psi(x)$ has a Fourier series expansion of the form
$$
\psi(x)=\beta-\alpha+\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A(\nu) e(\nu x),
$$
where
\begin{gather}\label{mani:A}
\left\vert A(\nu)\right\vert \ll \min \left( \frac 1\nu,
\beta-\alpha,\frac{1}{\nu^2\Delta} \right).
\end{gather}
\end{enumerate}
\end{lem}
After we have transformed the sums under consideration into exponential sums
we want to split the interval by the following lemma.
\begin{lem}\label{lem:intervalsplit}
Let $I=(a,b]$ be an interval and $F$ be a complex function defined on $I$. If
$\log(b/a)\ll L$, then $I$ is the union of $\ell$ intervals of the type
$(c,d]$ with $\ell\ll L$ and $d\leq 2c$. Furthermore we have
\[
\left\vert\sum_{n\in I}F(n)\right\vert\ll L\left\vert\sum_{n\in(c,d]}F(n)\right\vert,
\]
for some such $(c,d]$.
\end{lem}
\begin{proof}
For $i=1,\ldots,\ell$ let $I_i$ be the $\ell$ splitting intervals. Then
\begin{align*}
\left\vert \sum_{n\in I}F(n)\right\vert
=\left\vert\sum_{i=1}^\ell\sum_{n\in I_i}F(n)\right\vert
\leq \ell\max_{1\leq i\leq \ell}\left\vert\sum_{n\in I_i}F(n)\right\vert\ll
L\left\vert\sum_{n\in I_i}F(n)\right\vert,
\end{align*}
where $i$ denotes an index attaining the maximum.
\end{proof}
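The dyadic subdivision underlying this lemma can be made completely explicit; the following sketch is our own illustration and not part of the proof.
\begin{verbatim}
# Illustration only: splitting (a, b] into pieces (c, d] with d <= 2c.
def dyadic_pieces(a, b):
    pieces, c = [], a
    while c < b:
        d = min(2 * c, b)
        pieces.append((c, d))
        c = d
    return pieces

# (10, 1000] splits into 7 pieces, consistent with log2(1000/10) ~ 6.6.
print(dyadic_pieces(10, 1000))
\end{verbatim}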
We will apply the following lemma in order to estimate the occurring
exponential sums provided that the coefficients are very small. This corresponds to the
case of the most significant digits in the expansion.
\begin{lem}[{\cite[Lemma 4.19]{titchmarsh1986:theory_riemann_zeta}}]
\label{tit:lem4.19}
Let $F(x)$ be a real function, $k$ times differentiable, and satisfying $\left\vert
F^{(k)}(x)\right\vert\geq\lambda>0$ throughout the interval $[a,b]$. Then
\[
\left\vert\int_a^be(F(x))\mathrm{d}x\right\vert
\leq c(k)\lambda^{-1/k}.
\]
\end{lem}
A standard tool for estimating exponential sums over the primes is Vaughan's
identity. In order to apply this identity we have to rewrite the exponential
sum into a normal one having von Mangoldt's function as weights. Therefore let
$\Lambda$ denote von Mangoldt's function, i.e.,
\[
\Lambda(n)=\begin{cases}
\log p,&\text{if $n=p^k$ for some prime $p$ and an integer $k\geq1$;}\\
0,&\text{otherwise}.
\end{cases}
\]
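For orientation (our own illustration, not needed for the argument), the total weight $\sum_{n\leq x}\Lambda(n)$ is of size about $x$ by the prime number theorem, which is why the weighted exponential sums are of the same order as the unweighted ones.
\begin{verbatim}
# Illustration only: psi(x) = sum_{n <= x} Lambda(n) is close to x.
from math import log

def mangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:          # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0                  # n = 1

x = 5000
print(sum(mangoldt(n) for n in range(2, x + 1)) / x)   # roughly 0.99
\end{verbatim}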
In the next step
we may subdivide this weighted exponential sum into several sums of Type I and
II. In particular, let $P\geq2$ and $P_1\leq 2P$, then we define Type
I and Type II sums by the expressions
\begin{align}
&\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq
P_1}}f(xy)\label{type:1:sum}\quad(\text{Type I})\\
&\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)f(xy)\notag\\
&\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}b_yf(xy)\label{type:2:sum}\quad(\text{Type II})
\end{align}
with $X_1\leq 2X$, $Y_1\leq 2Y$, $\left\vert a_x\right\vert\ll P^\varepsilon$, $\left\vert
b_y\right\vert\ll P^\varepsilon$ for every $\varepsilon>0$ and
\[
P\ll XY\ll P,
\]
respectively. The following lemma provides the central tool for the subdivision
of the weighted exponential sum.
\begin{lem}[{\cite[Lemma 1]{baker_kolesnik1985:distribution_p_alpha}}]
\label{bakkol:vaughan}
Let $f(n)$ be a complex valued function and $P\geq2$, $P_1\leq 2P$. Furthermore
let $U$, $V$, and $Z$ be positive numbers satisfying
\begin{gather}
2\leq U<V\leq Z\leq P,\\
U^2\leq Z,\quad 128UZ^2\leq P_1,\quad 2^{18}P_1\leq V^3.
\end{gather}
Then the sum
\[
\sum_{P\leq n\leq P_1}\Lambda(n)f(n)
\]
may be decomposed into $\ll(\log P)^6$ sums, each of which is either a Type I
sum with $Y\geq Z$ or a Type II sum with $U\leq Y\leq V$.
\end{lem}
The next tool is an estimation for the exponential sum. After subdividing the
weighted exponential sum we use Vinogradov's method in order to estimate the
occurring unweighted exponential sums.
\begin{lem}[{\cite[Lemma 6]{Nakai_Shiokawa1990:class_normal_numbers}}]
\label{nakshi:lem6}
Let $k$, $P$ and $N$ be integers such that $k\geq2$, $2\leq N\leq P$. Let
$g(x)$ be real and have continuous derivatives up to the $(k+1)$th order in
$[P+1,P+N]$; let $0<\lambda<1/(2c_0(k+1))$ and
\[
\lambda\leq\frac{g^{(k+1)}(x)}{(k+1)!}\leq c_0\lambda
\quad(P+1\leq x\leq P+N),
\]
or the same for $-g^{(k+1)}(x)$, and let
\[
N^{-k-1+\rho}\leq\lambda\leq N^{-1}
\]
with $0<\rho\leq k$. Then
\[
\sum_{n=P+1}^{P+N}e(g(n))\ll N^{1-\eta},
\]
where
\begin{gather}\label{mani:eta}
\eta=\frac{\rho}{16(k+1)L},\quad
L=1+\left\lfloor\frac14k(k+1)+kR\right\rfloor,\quad
R=1+\left\lfloor\frac{\log\left(\frac1\rho k(k+1)^2\right)}{-\log\left(1-\frac1k\right)}\right\rfloor.
\end{gather}
\end{lem}
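The saving $\eta$ in \eqref{mani:eta} is completely explicit; the small helper below is our own illustration and simply evaluates the displayed formulas.
\begin{verbatim}
# Illustration only: evaluating eta, L and R of the lemma above.
from math import floor, log

def eta(k, rho):
    R = 1 + floor(log(k * (k + 1) ** 2 / rho) / (-log(1 - 1 / k)))
    L = 1 + floor(k * (k + 1) / 4 + k * R)
    return rho / (16 * (k + 1) * L)

print(eta(4, 1.0))   # a small positive saving, of order 1e-4
\end{verbatim}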
\section{Proof of Proposition \ref{mani:centralprop}}\label{sec:proof-prop-refm}
We will apply the estimates of the preceding sections in order to estimate the
exponential sums occurring in the proof. We will proceed in four steps.
\begin{enumerate}
\item In the first step we use a method of Vinogradov
\cite{vinogradov2004:method_trigonometrical_sums} in order to rewrite the
counting function into the estimation of exponential sums. Then we will
distinguish two cases in the following two steps.
\item First we assume that we are interested in a block which occurs among
the most significant digits. This corresponds to a very small coefficient in
the exponential sum and we may use the method of van der Corput
(cf. \cite{graham_kolesnik1991:van_der_corputs}).
\item For the blocks occurring among the least significant digits we apply
Vaughan's identity together with ideas from a recent paper by Bergelson
et al. \cite{bergelson_kolesnik_madritsch+2012:uniform_distribution_prime}.
\item Finally we combine the estimates of the last two steps in order to end
the proof.
\end{enumerate}
In this proof, the letter $p$ will always denote a prime and we set
$f(x):=\alpha x^\theta$ for short. Furthermore we set
\begin{gather}\label{mani:delta}
\delta:=\min\left(\frac14,\theta-1\right).
\end{gather}
\subsection{Rewriting the sum}\label{sec:rewriting-sum}
Throughout the rest of the paper we fix a block $d_1\cdots d_\ell$. In order to
count the occurrences of this block in the $q$-ary expansion of $\left\lfloor f(p)
\right\rfloor$ ($2\le p \le N$) we define the indicator function
\begin{align}\label{mani:I}
\mathcal{I}(t)=\begin{cases}
1, &\text{if }\sum_{i=1}^\ell d_iq^{-i}\leq t-\left\lfloor t\right\rfloor
<\sum_{i=1}^\ell d_iq^{-i}+q^{-\ell};\\
0, &\text{otherwise;}
\end{cases}
\end{align}
which is a $1$-periodic function. Indeed, we have
\[
\mathcal{I}(q^{-j}f(n)) = 1 \Longleftrightarrow d_1\cdots d_\ell =
b_{j-1}\cdots b_{j-\ell}.
\]
Thus we can write our block counting function as follows
\begin{gather}\label{mani:NthetatoNstar}
\mathcal{N}^*(f(p))=\sum_{j=\ell}^J\mathcal{I}\left(q^{-j}f(p)\right).
\end{gather}
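To illustrate the equivalence behind \eqref{mani:NthetatoNstar} (our own addition, with $f(p)$ replaced by an arbitrary real number $t$), the indicator picks out exactly the positions at which the fixed block occurs.
\begin{verbatim}
# Illustration only: I(q^{-j} t) = 1 exactly when the digits of t in
# positions j-1, ..., j-l form the fixed block d_1 ... d_l.
from math import floor

q, block = 10, (4, 2)
l = len(block)
low = sum(d * q ** -(i + 1) for i, d in enumerate(block))

def indicator(t):                  # the 1-periodic function defined above
    frac = t - floor(t)
    return 1 if low <= frac < low + q ** -l else 0

t = 342429.77                      # the block "42" sits at positions j = 5, 3
print([j for j in range(l, 7) if indicator(t / q ** j)])   # -> [3, 5]
\end{verbatim}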
Following Nakai and Shiokawa~\cite{Nakai_Shiokawa1990:class_normal_numbers} we
want to approximate $\mathcal{I}$ from above and from below by two $1$-periodic
functions having small Fourier coefficients. In particular, we set
$H=N^{\delta/3}$ and
\begin{equation}\label{mani:abd}
\begin{split}
\alpha_-=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+(2H)^{-1},\quad
\beta_-=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+q^{-\ell}-(2H)^{-1},\quad
\Delta_-=H^{-1},\\
\alpha_+=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}-(2H)^{-1},\quad
\beta_+=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+q^{-\ell}+(2H)^{-1},\quad
\Delta_+=H^{-1}.
\end{split}
\end{equation}
We apply Lemma \ref{vin:lem12} with
$(\alpha,\beta,\Delta)=(\alpha_-,\beta_-,\Delta_-)$ and
$(\alpha,\beta,\Delta)=(\alpha_+,\beta_+,\Delta_+)$,
respectively, in order to get two functions $\mathcal{I}_-$ and
$\mathcal{I}_+$. By the choices of
$(\alpha_\pm,\beta_\pm,\Delta_\pm)$ it is immediate that
\begin{equation}\label{uglI}
\mathcal{I}_-(t)\leq\mathcal{I}(t)\leq\mathcal{I}_+(t) \qquad
(t\in\mathbb{R}).
\end{equation}
Lemma \ref{vin:lem12} also implies that these two functions have
Fourier expansions
\begin{align}\label{mani:Ifourier}
\mathcal{I}_\pm(t)=q^{-\ell}\pm H^{-1}+
\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty A_\pm(\nu)e(\nu t)
\end{align}
satisfying
\begin{gather*}
\left\vert A_\pm(\nu)\right\vert
\ll\min(\left\vert\nu\right\vert^{-1},H\left\vert\nu\right\vert^{-2}).
\end{gather*}
In a next step we want to replace $\mathcal{I}$ by $\mathcal{I}_+$
in (\ref{mani:NthetatoNstar}). For this purpose we observe, using \eqref{uglI},
that
\begin{gather*}
\left\vert\mathcal{I}(t)-\mathcal{I}_+(t)\right\vert \le
\left\vert\mathcal{I}_+(t)-\mathcal{I}_-(t)\right\vert
\ll H^{-1} + \sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A_\pm(\nu)e(\nu t).
\end{gather*}
Thus, replacing $\mathcal{I}$ by $\mathcal{I}_\pm$, subtracting the main term $q^{-\ell}\pm H^{-1}$ and summing over $p\leq N$ gives
\begin{gather}\label{mani:0.5}
\left\vert\sum_{p\leq N}\mathcal{I}(q^{-j}f(p))-\frac{\pi(N)}{q^{\ell}}\right\vert
\ll\pi(N)H^{-1}+\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A_{\pm}(\nu)\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right).
\end{gather}
Now we consider the coefficients $A_\pm(\nu)$. Noting
\eqref{mani:A} one observes that
\begin{gather*}
A_\pm(\nu)\ll\begin{cases}
\nu^{-1}, &\text{for }\left\vert\nu\right\vert\leq H;\\
H\nu^{-2}, &\text{for }\left\vert\nu\right\vert>H.
\end{cases}
\end{gather*}
Estimating all summands with $\left\vert\nu\right\vert>H$ trivially we get
\begin{gather*}
\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A_\pm(\nu)e\left(\frac{\nu}{q^j}f(p)\right)
\ll\sum_{\nu=1}^{H}\nu^{-1}e\left(\frac{\nu}{q^j}f(p)\right)+H^{-1}.
\end{gather*}
Using this in \eqref{mani:0.5} yields
\begin{gather}\label{mani:1.5}
\left\vert\sum_{p\leq N}\mathcal{I}(q^{-j}f(p))-\frac{\pi(N)}{q^{\ell}}\right\vert
\ll\pi(N)H^{-1}+\sum_{\nu=1}^{H}
\nu^{-1}\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right).
\end{gather}
Finally we sum over all $j$s and get
\begin{equation}\label{mani:2}
\begin{split}
\left\vert\sum_{p\leq N}\mathcal{N}^*(f(p))-\frac{\pi(N)}{q^{\ell}}J\right\vert
\ll\pi(N)H^{-1}J+\sum_{j=\ell}^J\sum_{\nu=1}^{H}
\nu^{-1}S(N,j,\nu),
\end{split}
\end{equation}
where we have set
\[
S(N,j,\nu):=\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right).
\]
The crucial part is the estimation of the exponential sums over the
primes. In the following we will distinguish two cases according to the size of
$j$. This corresponds to the position in the expansion of $f(p)$. In
particular, let $\rho>0$ be arbitrarily small then we want to distinguish
between the most significant digits and the least significant digits,
i.e., between the ranges
\[
1\leq q^j\leq N^{\theta-1+\rho}
\quad\text{and}\quad
N^{\theta-1+\rho}<q^j\leq N^\theta.
\]
\subsection{Most significant digits}
In this subsection we assume that
\[
N^{\theta-1+\rho}<q^j\leq N^\theta,
\]
which means that we deal with the most significant digits in the expansion. We
start by rewriting the sum into an integral.
\begin{align*}
S(N,j,\nu)=\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right)
=\int_{2}^{N}e\left(\frac{\nu}{q^j}f(t)\right)\mathrm{d}\pi(t)+\mathcal{O}(1).
\end{align*}
In the second step we then apply the prime number theorem. Thus
\begin{align*}
S(N,j,\nu)
=\int_{N(\log N)^{-G}}^{N}
e\left(\frac{\nu}{q^j}f(t)\right)
\frac{\mathrm{d}t}{\log t}
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right).
\end{align*}
Now we use the second mean-value theorem together with Lemma \ref{tit:lem4.19}
and $k=\left\lfloor\theta\right\rfloor$ to get
\begin{equation}\label{mani:res:most}
\begin{split}
S(N,j,\nu)&\ll\frac1{\log N}\sup_{\xi}
\left\vert\int_{N(\log N)^{-G}}^{\xi}e\left(\frac{\nu}{q^j}f(t)\right)\mathrm{d}t\right\vert
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right)\\
&\ll\frac1{\log N}\left(\frac{\left\vert \nu\right\vert}{q^j}\right)^{-\frac1k}
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right).
\end{split}
\end{equation}
\subsection{Least significant digits}
For the digits in this range we want to apply Vaughan's identity in order to
transfer the sum over the primes into two special types of sums involving
products of integers. Before we may apply Vaughan's identity we have to weight
the exponential sum under consideration by the von Mangoldt function. By an
application of Lemma \ref{lem:intervalsplit}, it suffices to consider an
interval of the form $(P,2P]$. Thus
\[
\left\vert S(N,j,\nu)\right\vert
\ll(\log N)\left\vert\sum_{P<p\leq2P}e\left(f(p)\right)\right\vert.
\]
Using partial summation we get
\[
\left\vert S(N,j,\nu)\right\vert
\ll(\log N)
\left\vert\sum_{P<p\leq 2P}e\left(f(p)\right)\right\vert
\ll (\log N)P^{\frac12}+(\log N)\left\vert\sum_{P<n\leq P_1}\Lambda(n)e\left(f(n)\right)\right\vert
\]
for some $P_1$ with $P<P_1\leq 2P$. From now on we may assume that
$P>N^{1-\eta}$.
Then an application of Lemma \ref{bakkol:vaughan} with $U=P^{\frac\delta3}$,
$V=P^{\frac13}$, $Z=P^{\frac12-\frac\delta3}$ yields
\begin{align}\label{mani:afterVaughan}
S(N,j,\nu)\ll P^{\frac12}+\left(\log P\right)^7\left\vert S_1\right\vert,
\end{align}
where $S_1$ is either a Type I sum as in \eqref{type:1:sum} with
$Y\geq P^{\frac12-\frac\delta3}$ or a Type II sum as in \eqref{type:2:sum} with
\[
P^{\frac\delta3}\leq Y\leq P^{\frac13}.
\]
Suppose first that $S_1$ is a Type II sum, i.e.,
\[
S_1=\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq
P_1}}b_ye\left(f(xy)\right).
\]
Then an application of the Cauchy-Schwarz inequality yields
\begin{align*}
\left\vert S_1\right\vert^2
&\leq\sum_{X<x\leq X_1}\left\vert a_x\right\vert^2\sum_{X<x\leq X_1}
\left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}b_ye\left(\frac{\nu}{q^j}f(xy)\right)\right\vert^2\\
&\ll XP^{2\varepsilon}\sum_{Y<y\leq Y_1}\sum_{Y<z\leq Y_1}b_y\overline{b_z}
\sum_{\substack{X<x\leq X_1\\P<xy,xz\leq P_1}}e\left(\frac{\nu}{q^j}\left(f(xy)-f(xz)\right)\right),
\end{align*}
where we have used that $\left\vert a_x\right\vert\ll P^\varepsilon$. Collecting all the
terms where $y=z$ and using $\left\vert b_y\right\vert\ll P^\varepsilon$ yields
\begin{gather}\label{mani:3.5}
\left\vert S_1\right\vert^2\ll XP^{4\varepsilon}\left(XY+\sum_{Y<y<z\leq Y_1}
\left\vert\sum_{\substack{X<x\leq X_1\\P<xy,xz\leq P_1}}e\left(\frac{\nu}{q^j}\left(f(xy)-f(xz)\right)\right)\right\vert\right).
\end{gather}
There must be a pair $(y,z)$ with
$Y<y<z<Y_1$ such that
\begin{gather}\label{mani:4.5}
\left\vert S_1\right\vert^2\ll P^{2+4\varepsilon}Y^{-1}+P^{4\varepsilon}XY^2
\left\vert\sum_{X_2<x\leq X_3}e(g(x))\right\vert,
\end{gather}
where $X_2=\max(X,Py^{-1})$, $X_3=\min(X_1,P_1z^{-1})$ and
\[
g(x)
=\frac{\nu}{q^j}\left(f(xy)-f(xz)\right)
=\frac{\nu}{q^j}\alpha(y^\theta-z^\theta)x^\theta.
\]
We will apply Lemma \ref{nakshi:lem6} to estimate the exponential
sum. Setting
\[k:=\left\lceil 2\theta\right\rceil+1
\]
we get that $g^{(k+1)}(x)=
\nu q^{-j}\alpha(y^\theta-z^\theta)\theta(\theta-1)\cdots(\theta-k)x^{\theta-(k+1)}$.
Thus
\[
\lambda\leq\frac{g^{(k+1)}(x)}{(k+1)!}\leq c_0\lambda\quad(X_2<x\leq X_3)
\]
or similarly for $-g^{(k+1)}(x)$, where
\[
\lambda=c\nu q^{-j}\alpha(y^{\theta}-z^{\theta})X^{\theta-(k+1)}
\]
and $c$ depends only on $\theta$ and $\alpha$.
Since $\theta>1$ we get
\begin{align*}
\lambda&\geq P^{\delta-\theta}Y^{\theta-1}X^{\theta-(k+1)}\geq X^{-k-\frac12}.
\end{align*}
Similarly we obtain
\[
\lambda
\leq P^{2\delta}Y^{\theta}X^{\theta-(k+1)}
\ll P^{\theta+2\delta}X^{-(k+1)}
\leq X^{-1}.
\]
Thus we get that $X^{-k-\frac12}\leq\lambda\leq X^{-1}$. Therefore an application of Lemma \ref{nakshi:lem6} yields
\[
\sum_{X_2<x\leq X_3}e(g(x))\ll X^{1-\eta},
\]
where $\eta$ depends only on $k$ and therefore on $\theta$. Inserting this in
\eqref{mani:4.5} we get
\begin{gather}\label{mani:res:typeII}
\left\vert S_1\right\vert^2\ll P^{2+4\varepsilon}Y^{-1}+P^{4\varepsilon}XY^2X^{1-\eta}
\ll P^{2+4\varepsilon}\left(P^{-\delta/3}+P^{-2\eta/3}\right).
\end{gather}
The case of $S_1$ being a Type I sum is similar but simpler. We have
\begin{align*}
\left\vert S_1\right\vert
\leq\sum_{X<x\leq X_1}\left\vert a_x\right\vert
\left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)e\left(f(xy)\right)\right\vert
\ll XP^{\varepsilon}\left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)e\left(f(xy)\right)\right\vert
\end{align*}
for some $x$ with $X<x\leq X_1$. By a partial summation we get
\begin{gather}\label{mani:6}
\left\vert S_1\right\vert\ll XP^\varepsilon\log P\left\vert\sum_{\substack{Y_2<y\leq
Y_3\\P<xy\leq P_1}}e\left(f(xy)\right)\right\vert
\end{gather}
for some $Y\leq Y_2<Y_3\leq Y_1$. Now we set
\[
g(y)
=f(xy)
=\frac{\nu}{q^{j}}\alpha x^\theta y^\theta.
\]
Again the idea is to apply Lemma \ref{nakshi:lem6} for the estimation of the
exponential sum. We set
\[
k:=\left\lceil 3\theta\right\rceil +2
\]
and get for the $(k+1)$-st derivative
\[
\lambda\leq\frac{g^{(k+1)}(y)}{(k+1)!}\leq c_0\lambda\quad(Y_2<y\leq Y_3)
\]
or similarly for $-g^{(k+1)}(y)$, where
\[
\lambda=c\frac{\nu}{q^j}\alpha x^{\theta}Y^{\theta-(k+1)}
\]
and $c$ again depends only on $\alpha$ and $\theta$.
We may assume that $N$ and hence $P$ is sufficiently large, then we get that
\[
Y^{-(k+1)}\ll P^{-\theta}X^{\theta}Y^{\theta-(k+1)}\leq
\lambda\leq
P^{2\delta}X^{\theta}Y^{\theta-(k+1)}\ll P^{\theta+2\delta}Y^{-(k+1)}\leq Y^{-1}.
\]
Now an application of Lemma \ref{nakshi:lem6} yields
\[
\sum_{Y_2<y\leq Y_3}e(g(y))\ll Y^{1-\eta},
\]
where $\eta$ depends only on $k$ and thus on $\theta$. Inserting this in
\eqref{mani:6} we get
\begin{gather}\label{mani:res:typeI}
\left\vert S_1\right\vert \ll(\log P)XP^\varepsilon Y^{1-\eta}\ll(\log P)P^{1+\varepsilon-\eta(1/2-\delta/3)}.
\end{gather}
Combining \eqref{mani:res:typeI} and \eqref{mani:res:typeII} in
\eqref{mani:afterVaughan} yields
\begin{equation}\label{mani:res:least}
\begin{split}
\left\vert S(N,j,\nu)\right\vert
&\ll P^{\frac12}+\left(\log
P\right)^7\left(P^{1+2\varepsilon}\left(P^{-\delta/6}+P^{-\eta/3}\right)+(\log
P)P^{1+\varepsilon-\eta(1/2-\delta/3)}\right)\\
&\ll P^{\frac12}+\left(\log P\right)^8P^{1-\sigma}
\end{split}
\end{equation}
for some $\sigma>0$ depending only on $\theta$ and $\varepsilon$.
\subsection{Conclusion}
On the one hand summing \eqref{mani:res:most} over $j$ and $\nu$ yields
\begin{align*}
&\sum_{1\leq\left\vert
\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1}\sum_{N^{\theta-\delta}<q^{j}\leq
N^{\theta}}
S(N,j,\nu)\\
&\quad\ll\sum_{1\leq\left\vert
\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1}\sum_{N^{\theta-\delta}<q^{j}\leq
N^{\theta}}
\left(\frac1{\log N}\left(\frac{\left\vert \nu\right\vert}{q^j}\right)^{-\frac1k}
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right)\right)\\
&\quad\ll\frac1{\log N}\sum_{1\leq\left\vert
\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1-\frac1k}\sum_{N^{\theta-\delta}<q^{j}\leq
N^{\theta}}q^{-\frac jk}
+\mathcal{O}\left(\frac{N}{(\log N)^{G-2}}\right)\\
&\quad\ll\frac{N}{\log N}.
\end{align*}
On the other hand in \eqref{mani:res:least} we sum over $j$ and $\nu$ and get
\[
\sum_{1\leq\left\vert\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1}
\sum_{q^\ell\leq q^j\leq N^{\theta-\delta}}S(N,j,\nu)
\ll(\log N)^2N^{\frac12}+(\log N)^{10}N^{1-\sigma'}.
\]
Combining these estimates in \eqref{mani:2} finally yields
\begin{align*}
\left\vert\sum_{p\leq
N}\mathcal{N}^*(f(p))-\frac{\pi(N)}{q^{\ell}}J\right\vert\ll\frac{N}{\log N}
\end{align*}
and the proposition is proved.
\section*{Acknowledgment}
The authors thank the anonymous referee, who read the manuscript very carefully; his or her suggestions considerably improved the presentation of the results.
\end{document}
\begin{document}
\title{Distance domination, guarding and vertex cover for maximal outerplanar graph}
\begin{abstract}
This paper discusses a distance guarding concept on triangulation graphs, which can be associated with distance domination and distance vertex cover. We show how these subjects are interconnected and provide tight bounds for any \mbox{$n$-vertex} maximal outerplanar graph: the $2d$-guarding number, $g_{2d}(n) = \lfloor \frac{n}{5} \rfloor$; the $2d$-distance domination number, $\gamma_{2d}(n) = \lfloor \frac{n}{5} \rfloor$; and the $2d$-distance vertex cover number, $\beta_{2d}(n) = \lfloor \frac{n}{4} \rfloor$.
\end{abstract}
\section{Introduction}
Domination, covering and guarding are widely studied subjects in graph theory. Given a graph $G=(V,E)$, a \emph{dominating set} is a set \mbox{$D \subseteq V$} of vertices such that every vertex not in $D$ is adjacent to a vertex in $D$. The \emph{domination number} $\gamma(G)$ is the number of vertices in a smallest dominating set for $G$. A set \mbox{$C \subseteq V$} of vertices is a \emph{vertex cover} if each edge of the graph is incident to at least one vertex of the set. The \emph{vertex cover number} $\beta(G)$ is the size of a minimum vertex cover. Thus, a dominating set guards the \emph{vertices} of a graph while a vertex cover guards its \emph{edges}. In plane graphs, these concepts differ from the notion of \emph{guarding set}, as the latter guards the \emph{faces} of the graph. Let \mbox{$G=(V,E)$} be a plane graph; a guarding set is a set $S \subseteq V$ of vertices such that every face has a vertex in $S$. The \emph{guarding number} $g(G)$ is the number of vertices in a smallest guarding set for $G$.
There are many papers and books about domination and its many variants in graphs, e.g. \cite{Campos13,Haynes98,King10,Matheson96}. In 1975, domination was extended to \emph{distance domination} by Meir and Moon \cite{Meir75}. Given a graph $G$, a set $D \subset V$ of vertices is said to be a \emph{distance \mbox{$k$-dominating} set} if for each vertex \mbox{$u \in V-D$}, \mbox{$dist_G(u,v) \leq k$} for some \mbox{$v \in D$}. The minimum cardinality of a distance \mbox{$k$-dominating} set is said to be the \emph{distance \mbox{$k$-domination} number} of $G$ and is denoted by $\gamma_{k}(G)$ or $\gamma_{kd}(G)$. Note that a classical dominating set is a distance $k$-dominating set with $k=1$. In the case of distance domination, there are also some known results concerning bounds for $\gamma_{kd}(G)$, e.g., \cite{Sridharan02,Tian04,Tian09}. However, if graphs are restricted to triangulations, then we are not aware of known bounds for $\gamma_{kd}(G)$. Distance domination was generalized by Erwin to \emph{broadcast domination}, where the power of each vertex may vary \cite{Erwin04}. Given a graph \mbox{$G = (V,E)$}, a \emph{broadcast} is a function \mbox{$f : V \rightarrow \mathds{N}_0$}. The cost of a broadcast $f$ over a set \mbox{$S \subseteq V$} is defined as \mbox{$f(S) = \sum_{v \in S} f(v)$}. Thus, $f(V)$ is the total cost of the broadcast function $f$. A broadcast is \emph{dominating} if for every vertex $v$, there is a vertex $u$ with \mbox{$f(u) > 0$} and \mbox{$d(u, v) \leq f(u)$}, that is, a vertex $u$ with non-null broadcast whose power reaches vertex $v$. A dominating broadcast $f$ is \emph{optimal} if $f(V)$ is minimum over all choices of dominating broadcast functions for $G$. The \emph{broadcast domination problem} consists in finding such an optimal function. Note that, if $f$ only takes values in $\{0,1\}$, then the broadcast domination problem coincides with the problem of finding a dominating set of minimum cardinality. And, if $f$ only takes values in $\{0,k\}$, then the broadcast domination problem is the distance \mbox{$k$-domination} problem. If a broadcast $f$ provides coverage to the edges of $G$ instead of covering its vertices, then we have a generalization of the vertex cover concept \cite{Blair05}. A broadcast $f$ is \emph{covering} if for every edge \mbox{$(x,y) \in E$} there is a path $P$ in $G$ that includes the edge $(x,y)$ and has one end at a vertex $u$ such that $f(u)$ is at least the length of $P$. A covering broadcast $f$ is \emph{optimal} if $f(V)$ is minimum over all choices of covering broadcast functions for $G$. Note that, if $f$ only takes values in $\{0,1\}$, then the broadcast cover problem coincides with the problem of finding a minimum vertex cover. Regarding the broadcast cover problem when all vertices have the same power (i.e., when $f$ only takes values in $\{0,k\}$ for a fixed \mbox{$k \neq 1$}), as far as we know, there are no published results besides \cite{Chen12}, where the authors propose centralized and distributed approximation algorithms to solve it.
The guarding concept on plane graphs has its origin in the study of triangulated terrains, polyhedral surfaces whose faces are triangles and with the property that each vertical line intersects the surface in at most one point or segment. A set of guards covers the surface of a terrain if every point on the terrain is visible from at least one guard in the set. The combinatorial aspects of terrain guarding problems can be expressed as guarding problems on the plane triangulated graph underlying the terrain. Such a graph is called a \emph{triangulation graph} (\emph{triangulation}, for short), because it is the graph of a triangulation of a set of points in the plane (see Figures \ref{FIG:article-arXiv-1} and \ref{FIG:article-arXiv-2}). In this context of guarding for plane graphs, a set of guards only needs to watch the bounded faces of the graph. There are known bounds on the guarding number of a plane graph, $g(G)$; for example, $g(G) \leq \frac{n}{2}$ for any $n$-vertex plane graph \cite{Bose97}, and $g(G) \leq \frac{n}{3}$ for any triangulation of a polygon \cite{Fisk78}. The triangulation of a polygon is a \emph{maximal outerplanar graph}. A graph is outerplanar if it has a crossing-free embedding in the plane such that all vertices are on the boundary of its outer face (the unbounded face). An outerplanar graph is maximal outerplanar if it is not possible to add an edge such that the resulting graph is still outerplanar. A maximal outerplanar graph embedded in the plane as mentioned above corresponds to a triangulation of a polygon. Contrary to the notions of domination and vertex cover on plane graphs, which were extended to include their distance versions, the guarding concept has not been generalized to its distance version.
In this paper we generalize the guarding concept on plane graphs to its distance guarding version and also formalize the broadcast cover problem when all vertices have the same power, which we call the distance $k$-vertex cover. Furthermore, we analyze these concepts of distance guarding, covering and domination from a combinatorial point of view for triangulation graphs. We obtain tight bounds for distance versions of guarding, domination and vertex covering for maximal outerplanar graphs.
In the next section we first describe some of the terminology used in this paper, and then discuss the relationship between distance guarding, domination and covering on triangulation graphs. In sections \ref{SEC:Domination_and_DistanceTightUpperBounds} and \ref{SEC:CoveringTightUpperBounds} we study how these three distance concepts apply to maximal outerplanar graphs. Finally, the paper concludes with section \ref{SEC:Conclusions}, which discusses our results and future research.
\section{Relationship between distance guarding, distance domination and distance vertex cover on triangulation graphs}
\label{SEC:BasicNotions}
In the following we introduce some of the notation used throughout the text, and then proceed to explain the relationship between the different distance concepts on triangulations. Given a triangulation \mbox{$T=(V,E)$}, we say that a bounded face $T_i$ of $T$ (i.e., a triangle) is \emph{$kd$-visible} from a vertex \mbox{$p \in V$}, if there is a vertex \mbox{$x \in T_i$} such that \mbox{$dist_T(x,p) \leq k-1$}. The \emph{$kd$-visibility region} of a vertex \mbox{$p \in V$} comprises the triangles of $T$ that are \mbox{$kd$-visible} from $p$ (see Fig. \ref{FIG:article-arXiv-1}).
\begin{figure}
\caption{The $kd$-visible region of $p$ for: (a) $k=1$; (b) $k=2$.}
\label{FIG:article-arXiv-1}
\end{figure}
A \emph{$kd$-guarding set} for $T$ is a subset \mbox{$F \subseteq V$} such that every triangle of $T$ is \mbox{$kd$-visible} from an element of $F$. We designate the elements of $F$ by \emph{$kd$-guards}. The \emph{$kd$-guarding number} $g_{kd}(T)$ is the number of vertices in a smallest \mbox{$kd$-guarding} set for $T$. Note that, to avoid confusion with \emph{multiple guarding} \cite{Belleville09} -- where the typical notation is \mbox{$k$-guarding} -- we will use \mbox{$kd$-guarding}, with an extra ``$d$''. Given a set $S$ of $n$ points, we define
$g_{kd}(S) = \max \{g_{kd}(T): T \mbox{ is a triangulation with } V=S\}$
and given \mbox{$n \in \mathds{N}$}, $g_{kd}(n) = \max \{g_{kd}(S): S \mbox{ is a plane point set with } \allowbreak |S|=n\}$.
A \emph{\mbox{$kd$-vertex} cover} for $T$, or \emph{distance \mbox{$k$-vertex} cover} for $T$, is a subset \mbox{$C \subseteq V$} such that for each edge \mbox{$e \in E$} there is a path of length at most $k$, which contains $e$ and a vertex of $C$. The \emph{\mbox{$kd$-vertex} cover number} $\beta_{kd}(T)$ is the number of vertices in a smallest \mbox{$kd$-vertex} cover set for $T$. Given a set $S$ of $n$ points, we define
$\beta_{kd}(S) = \max \{\beta_{kd}(T): T \mbox{ is a triangulation with } V=S\}$ and given \mbox{$n \in \mathds{N}$}, $\beta_{kd}(n) = \max \{\beta_{kd}(S): S \mbox{ is a plane point set with } \allowbreak |S|=n\}.$
Finally, as already defined by other authors, a \emph{$kd$-dominating set} for $T$, or \emph{distance $k$-dominating set} for $T$, is a subset \mbox{$D \subset V$} such that for each vertex \mbox{$u \in V-D$}, \mbox{$dist_T(u,v) \leq k$} for some \mbox{$v \in D$}. The $kd$-domination number $\gamma_{kd}(T)$ is the number of vertices in a smallest $kd$-dominating set for $T$. Given a set $S$ of $n$ points, we define
$\gamma_{kd}(S) = \max \{\gamma_{kd}(T): T \mbox{ is a triangulation with } V=S\}$
and given \mbox{$n \in \mathds{N}$},
$\gamma_{kd}(n) \allowbreak = \allowbreak \max \{\gamma_{kd}(S): \allowbreak S \allowbreak \mbox{ is a plane point set with } \allowbreak |S|=n\}$.
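Since all three parameters are defined by finite minimisation, they can be computed by brute force on small instances. The following sketch is our own illustration and not part of the paper; the function names and the fan-triangulation example are hypothetical, and the $kd$-vertex cover test uses the equivalent condition that some endpoint of each edge lies within distance $k-1$ of the cover (see the proof of the first lemma below).
\begin{verbatim}
# Illustration only: brute-force computation of the distance-k parameters
# for a small triangulation given by vertices, edges and bounded faces.
from itertools import combinations

def distances(V, E):
    """All-pairs graph distances by breadth-first search (small graphs)."""
    adj = {v: set() for v in V}
    for u, v in E:
        adj[u].add(v)
        adj[v].add(u)
    dist = {}
    for s in V:
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[s] = d
    return dist

def gamma_kd(V, E, k):        # distance k-domination number
    dist = distances(V, E)
    for r in range(1, len(V) + 1):
        for D in combinations(V, r):
            if all(min(dist[v][u] for u in D) <= k for v in V):
                return r

def g_kd(V, E, faces, k):     # kd-guarding number
    dist = distances(V, E)
    for r in range(1, len(V) + 1):
        for F in combinations(V, r):
            if all(any(dist[p][x] <= k - 1 for p in F for x in t) for t in faces):
                return r

def beta_kd(V, E, k):         # kd-vertex cover number (endpoint within k-1)
    dist = distances(V, E)
    for r in range(1, len(V) + 1):
        for C in combinations(V, r):
            if all(any(dist[c][u] <= k - 1 or dist[c][v] <= k - 1 for c in C)
                   for u, v in E):
                return r

# Example: the fan triangulation of a hexagon (vertex 0 joined to all others).
V = list(range(6))
E = [(0,1),(1,2),(2,3),(3,4),(4,5),(0,5),(0,2),(0,3),(0,4)]
faces = [(0,1,2),(0,2,3),(0,3,4),(0,4,5)]
print(gamma_kd(V, E, 2), g_kd(V, E, faces, 2), beta_kd(V, E, 2))   # -> 1 1 1
\end{verbatim}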
\begin{figure}
\caption{(a) $2d$-dominating set for a triangulation $T$; (b) $2d$-guarding set for $T$.}
\label{FIG:article-arXiv-2}
\end{figure}
The main goal is to obtain bounds on $g_{kd}(n)$, $\gamma_{kd}(n)$ and $\beta_{kd}(n)$. We start by showing that the three concepts, $kd$-guarding, $kd$-domination and $kd$-vertex covering, are different. Fig. \ref{FIG:article-arXiv-2} depicts \mbox{$2d$-dominating} and \mbox{$2d$-guarding} sets for a given triangulation $T$. Note that in Fig. \ref{FIG:article-arXiv-2}(a) the set $\{u,v\}$ is $2d$-dominating since the remaining vertices are at distance 1 or 2. However, it is not a \mbox{$2d$-guarding} set because the shaded triangle is not guarded, as its vertices are at distance 2 from $\{u,v\}$. In Fig. \ref{FIG:article-arXiv-2}(b) $\{w,z\}$ is a $2d$-guarding set; however, it is not a \mbox{$2d$-vertex} cover since any path between the bold edge and $w$ or $z$ has length at least 3. Therefore, the bold edge is not covered.
Now we are going to establish a relation between $g_{kd}(T)$, $\gamma_{kd}(T)$ and $\beta_{kd}(T)$.
\begin{lemma}
If $C$ is a $kd$-vertex cover for a triangulation $T$, then $C$ is a $kd$-guarding set and a $kd$-dominating set for $T$.
\end{lemma}
\begin{proof}
If $C$ is a $kd$-vertex cover, then each edge of $T$ has one of its endpoints at distance at most \mbox{$k-1$} from $C$. Thus, any triangle of $T$ has one of its vertices at distance at most \mbox{$k-1$} from $C$, that is, $C$ is a $kd$-guarding set for $T$. Furthermore, all the vertices of $T$ are at distance at most $k$ from a vertex of $C$. Therefore $C$ is a $kd$-dominating set for $T$.
\end{proof}
\begin{lemma}
If $C$ is a $kd$-guarding set for a triangulation $T$, then $C$ is a $kd$-dominating set for $T$.
\end{lemma}
\begin{proof}
If $C$ is a $kd$-guarding set, then every vertex of $T$ belongs to some triangle, and that triangle has a vertex at distance at most $k-1$ from an element of $C$. Hence every vertex of $T$ is at distance at most $k$ from an element of $C$, that is, $C$ is a $kd$-dominating set for $T$.
\end{proof}
The previous lemmas prove the following result.
\begin{theorem}
\label{Thm:inequalities}
Given a triangulation $T$, the minimum cardinality $g_{kd}(T)$ of any \mbox{$kd$-guarding} set for $T$ verifies
\begin{equation}
\gamma_{kd}(T) \leq g_{kd}(T) \leq \beta_{kd}(T).
\end{equation}
\end{theorem}
Note that the inequalities above can be strict, as we will show for \mbox{$k=2$}. Consider the triangulation $T$ depicted in Fig. \ref{FIG:article-arXiv-3to6}(a). We start by looking for a \mbox{$2d$-dominating} set of minimum cardinality. The black vertices in Fig. \ref{FIG:article-arXiv-3to6}(b) form a \mbox{$2d$-dominating} set, since each vertex of $T$ is at a distance less than or equal to 2 from a black vertex. Besides, it is clear that the extreme vertices cannot be $2d$-dominated by the same vertex, thus any $2d$-dominating set has to have at least two vertices, one to cover each extreme. Consequently, $\gamma_{2d}(T)=2$. But the pair of black vertices is not a $2d$-guarding set because the shaded area is not $2d$-guarded (all the vertices of the shaded triangles are at distance 2 from the black vertices). Now, we look for a $2d$-guarding set of minimum cardinality. Note that, in Fig. \ref{FIG:article-arXiv-3to6}(c), the gray vertices are a $2d$-guarding set. Each shaded triangle needs its own $2d$-guard, since the shaded triangles are pairwise at distance 3, and thus every $2d$-guarding set has cardinality at least 3. Therefore, \mbox{$g_{2d}(T)=3$}. Finally, we seek a \mbox{$2d$-vertex} cover. In Fig. \ref{FIG:article-arXiv-3to6}(d), each of the bold edges needs a different vertex to be $2d$-covered, since the distance between each pair of edges is greater than or equal to 3. In this way no single vertex can
simultaneously $2d$-cover two of the bold edges. Thus, \mbox{$\beta_{2d}(T) \geq 4$}. Note that this example can easily be generalized to any value of $k$.
\begin{figure}
\caption{(a) A triangulation $T$; (b) a $2d$-dominating set for a triangulation $T$ (black vertices); (c) a $2d$-guarding set for $T$ (gray vertices); (d) each of the bold edges needs a different vertex to be $2d$-covered.}
\label{FIG:article-arXiv-3to6}
\end{figure}
\section{$2d$-guarding and $2d$-domination of maximal outerplanar graphs}
\label{SEC:Domination_and_DistanceTightUpperBounds}
In this section we establish tight bounds for $g_{2d}(n)$ and $\gamma_{2d}(n)$ on a special class of triangulation graphs -- the maximal outerplanar graphs -- which correspond, as stated above, to triangulations of polygons. We call the edges on the exterior face \emph{exterior edges}; all other edges are \emph{interior edges}. In order to do this, and following the ideas of O'Rourke \cite{O'Rourke83}, we first need to introduce some lemmas.
\begin{lemma}
\label{Lem:f(m-2)}
Suppose that $f(m)$ $2d$-guards are always sufficient to $2d$-guard any maximal outerplanar graph with $m$ vertices. If $G$ is an arbitrary maximal outerplanar graph with two $2d$-guards placed at any two adjacent vertices among its $m$ vertices, then $f(m-2)$ additional $2d$-guards are sufficient to guard $G$.
\end{lemma}
\begin{proof}
Let $a$ and $b$ be the adjacent vertices at which the $2d$-guards are placed, and let $c$ be the vertex, other than $a$, that is adjacent to $b$ by an exterior edge of $G$. Contract the edges $(a,b)$ and $(b,c)$ of $G$ to produce the maximal outerplanar graph $G^*$ of \mbox{$m-2$} vertices, that is, remove the edges $(a,b)$ and $(b,c)$ and replace them with a new vertex $x$ adjacent to every vertex to which $a$, $b$ and $c$ were adjacent (see Fig. \ref{FIG:article-arXiv-7}).
\begin{figure}
\caption{Contraction of the edges $(a,b)$ and $(b,c)$.}
\label{FIG:article-arXiv-7}
\end{figure}
We know that $f(m-2)$ $2d$-guards are sufficient to guard $G^*$. Suppose that no $2d$-guard is placed at $x$. Then the same $2d$-guarding scheme will guard $G$, since the $2d$-guards placed at $a$ and $b$ guard the triangles with vertices at $a$, $b$ and $c$, and the remaining triangles are guarded by their counterparts in $G^*$. If a guard is placed at $x$, when the graph is expanded back into $G$, the guard placed at $x$ will be placed at $c$ to ensure that $G$ is guarded.
\end{proof}
\begin{lemma}
\label{Lem:f(m-1)Guarding}
Suppose that $f(m)$ $2d$-guards are always sufficient to $2d$-guard any maximal outerplanar graph with $m$ vertices. If $G$ is an arbitrary maximal outerplanar graph with one $2d$-guard placed at any one of its $m$ vertices, then $f(m-1)$ additional $2d$-guards are sufficient to guard $G$.
\end{lemma}
\begin{proof}
Let $a$ be the vertex where a $2d$-guard is placed and $b$ a vertex adjacent to $a$ by an exterior edge of $G$. Contract the edge $(a,b)$ to produce the maximal outerplanar graph $G^*$ of $m-1$ vertices (that is, remove the edge $(a,b)$ and replace it with a new vertex $x$ adjacent to every vertex to which $a$ and $b$ were adjacent). We know that $f(m-1)$ $2d$-guards are sufficient to guard $G^*$. Suppose that no $2d$-guard is placed at $x$. Then the same $2d$-guarding scheme will guard $G$, since the $2d$-guard placed at $a$ covers the triangles with vertices at $a$ and $b$, and the remaining triangles have guarding counterparts in $G^*$. If a guard is placed at $x$, then such a guard will be placed at $b$ when the graph is expanded back into $G$. The remaining guards together with $b$ ensure that $G$ is $2d$-guarded.
\end{proof}
The next lemma can be easily proven by following the ideas of O’Rourke \cite{O'Rourke83}.
\begin{lemma}
\label{Lem:O'Rourke}
Let $G$ be a maximal outerplanar graph with $n \geq 2k$ vertices. There is an interior edge $e$ in $G$ that partitions $G$ into two components, one of which contains $m$ exterior edges of $G$ with $k \leq m \leq 2k-2$.
\end{lemma}
\begin{theorem}
\label{Thm:SufficiencyGuarding}
Every $n$-vertex maximal outerplanar graph, with \mbox{$n \geq 5$}, can be $2d$-guarded by $\lfloor \frac{n}{5} \rfloor$ $2d$-guards. That is, $g_{2d}(n) \leq \lfloor \frac{n}{5} \rfloor$ for all \mbox{$n \geq 5$}.
\end{theorem}
\begin{proof}
For $5 \leq n \leq 11$, the truth of the theorem can be easily established -- the upper bounds are summarized in Table \ref{TAB:guarding}. It should be noted that for \mbox{$n=5$} the $2d$-guard can be placed at any vertex and for \mbox{$n=6$} it can be placed at any vertex of degree greater than 2 (or, equivalently, one that belongs to an interior edge).
\begin{table}[!htb]
\centering
\small
\begin{tabular}{ | l | c | c | c | c | c | c | c | }
\hline
$n$ & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\
\hline \hline
$g_{2d}(n)$ & 1 & 1 & 1 & 1 & 1 & 2 & 2 \\
\hline
\end{tabular}
\caption{Number of $2d$-guards that suffice to cover a maximal outerplanar graph of $n$ vertices.}
\label{TAB:guarding}
\end{table}
Assume that \mbox{$n \geq 12$} and that the theorem holds for all \mbox{$n' < n$}. Let $G$ be a maximal outerplanar graph with $n$ vertices, whose vertices are labeled $0,1,2, \ldots , n-1$. Lemma \ref{Lem:O'Rourke} guarantees the existence of an interior edge $e$ (which can be labeled $(0,m)$) that divides $G$ into maximal outerplanar graphs $G_1$ and $G_2$, such that $G_1$ has $m$ exterior edges of $G$ with $6 \leq m \leq 10$. Each value of $m$ will be considered separately.
\begin{enumerate}
\item[(1)] $m=6$. $G_1$ has \mbox{$m+1=7$} exterior edges, thus $G_1$ can be $2d$-guarded with one guard. $G_2$ has $n-5$ exterior edges including $e$, and by induction hypothesis, it can be $2d$-guarded with $\lfloor \frac{n-5}{5} \rfloor = \lfloor \frac{n}{5} \rfloor -1$ guards. Thus $G_1$ and $G_2$ together can be $2d$-guarded by $\lfloor \frac{n}{5} \rfloor$ guards.
\item[(2)] $m=7$. $G_1$ has $m+1=8$ exterior edges, thus $G_1$ can be $2d$-guarded with one guard. $G_2$ has $n-6$ exterior edges including $e$, and by induction hypothesis, it can be $2d$-guarded with $\lfloor \frac{n-6}{5} \rfloor \leq \lfloor \frac{n}{5} \rfloor -1$ guards. Thus $G_1$ and $G_2$ together can be $2d$-guarded by $\lfloor \frac{n}{5} \rfloor$ guards.
\item[(3)] $m=8$. $G_1$ has $m+1=9$ exterior edges, thus $G_1$ can be $2d$-guarded with one guard. $G_2$ has $n-7$ exterior edges including $e$, and by induction hypothesis, it can be $2d$-guarded with $\lfloor \frac{n-7}{5} \rfloor \leq \lfloor \frac{n}{5} \rfloor -1$ guards. Thus $G_1$ and $G_2$ together can be $2d$-guarded by $\lfloor \frac{n}{5} \rfloor$ guards.
\item[(4)] $m=9$. The presence of any of the internal edges (0,8), (0,7), (0,6), (9,1), (9,2) and (9,3) would violate the minimality of $m$. Thus, the triangle $T$ in $G_1$ that is bounded by $e$ is either (0,5,9) or (0,9,4). Since these are equivalent cases, suppose that $T$ is (0,5,9), see Fig. \ref{FIG:article-arXix-8to10}(a). The pentagon (5,6,7,8,9) can be $2d$-guarded by placing one guard at any of its vertices. However, to $2d$-guard the hexagon (0,1,2,3,4,5) the guard cannot be placed at an arbitrary vertex. We will consider two separate cases.
\begin{figure}
\caption{The interior edge $e$ separates $G$ into two maximal outerplanar graphs $G_1$ and $G_2$: (a) the triangle $T$ in $G_1$ that is bounded by $e$ is (0,5,9); (b) $G_1$ has 10 exterior edges, both the internal edge (0,4) and the triangle (6,7,8) are present; (c) $G_1$ has 11 exterior edges and the triangles (2,3,4) and (6,7,8) are present.}
\label{FIG:article-arXix-8to10}
\end{figure}
\begin{enumerate}
\item
The internal edge (0,4) is not present. If a guard is placed at vertex 5, then the hexagon (0,1,2,3,4,5) is $2d$-guarded, thus $G_1$ is $2d$-guarded. Since $G_2$ has $n-8$ exterior edges including $e$, it can be $2d$-guarded by \mbox{$\lfloor \frac{n-8}{5} \rfloor \leq \lfloor \frac{n}{5} \rfloor -1$} guards by the induction hypothesis. This yields a $2d$-guarding of $G$ by $\lfloor \frac{n}{5} \rfloor$ guards.
\item
The internal edge (0,4) is present. If a $2d$-guard is placed at vertex 0, then $G_1$ is $2d$-guarded unless the triangle (6,7,8) is present in the triangulation (see Fig. \ref{FIG:article-arXix-8to10}(b)). In any case, two $2d$-guards placed at vertices 0 and 9 guard $G_1$. $G_2$ has $n-8$ exterior edges, including $e$. By lemma \ref{Lem:f(m-2)} the two guards placed at vertices 0 and 9 allow the remainder of $G_2$ to be guarded by $f(n-8-2)=f(n-10)$ additional $2d$-guards. Recall that $f(n')$ is the number of $2d$-guards that are always sufficient to guard a maximal outerplanar graph with $n'$ vertices. By the induction hypothesis $f(n')=\lfloor \frac{n'}{5} \rfloor$. Thus, \mbox{$\lfloor \frac{n-10}{5} \rfloor = \lfloor \frac{n}{5} \rfloor - 2$} guards suffice to guard $G_2$. Together with the guards placed at vertices 0 and 9 that $2d$-guard $G_1$, all of $G$ is guarded by $\lfloor \frac{n}{5}\rfloor$ $2d$-guards.
\end{enumerate}
\item[(5)] $m=10$. The presence of any of the internal edges (0,9), (0,8), (0,7), (0,6), (9,1), (9,2), (9,3) and (9,4) would violate the minimality of $m$. Thus, the triangle $T$ in $G_1$ that is bounded by $e$ is (0,5,10) (see Fig. \ref{FIG:article-arXix-8to10}(c)). We will consider two separate cases:
\begin{enumerate}
\item
The vertices 0 and 10 have degree 2 in the hexagons (0,1,2,3,4,5) and (5,6,7,8,9,10), respectively. Then one $2d$-guard placed at vertex 5 guards $G_1$. By the induction hypothesis $G_2$ can be guarded with \mbox{$\lfloor \frac{n-9}{5} \rfloor \leq \lfloor \frac{n}{5}\rfloor-1$} guards. Thus $G$ can be $2d$-guarded by $\lfloor \frac{n}{5}\rfloor$ guards.
\item
The vertex 0 has degree greater than 2 in the hexagon (0,1,2,3,4,5). In this case we place a guard at vertex 0 and another guard at a vertex of the hexagon (5,6,7,8,9,10) of degree greater than 2. These two guards $2d$-guard $G_1$. $G_2$ has $n-9$ vertices. By lemma \ref{Lem:f(m-1)Guarding} the guard placed at vertex 0 permits the remainder of $G_2$ to be $2d$-guarded by \mbox{$f(n-9-1)=f(n-10)$} additional guards, where $f(n')$ is the number of $2d$-guards that are always sufficient to guard a maximal outerplanar graph with $n'$ vertices. By the induction hypothesis $f(n')=\lfloor \frac{n'}{5} \rfloor$. Thus, \mbox{$\lfloor \frac{n-10}{5} \rfloor = \lfloor \frac{n}{5}\rfloor-2$} guards suffice to guard $G_2$. Together with the two already allocated to $G_1$, all of $G$ is guarded by $\lfloor \frac{n}{5}\rfloor$ guards.
\end{enumerate}
\end{enumerate}
\end{proof}
To prove that this upper bound is tight we need to construct a maximal outerplanar graph $G$ of order $n$ such that \mbox{$g_{2d}(G) \geq \lfloor \frac{n}{5} \rfloor$}. Fig. \ref{FIG:article-arXiv-11} shows a maximal outerplanar graph $G$ for which $\gamma_{2d}(G)=\frac{n}{5}$, since the black vertices $2d$-dominate $G$ and, moreover, no two of them can be \mbox{$2d$-dominated} by the same vertex.
\begin{figure}
\caption{A maximal outerplanar graph $G$ for which $\gamma_{2d}(G)=\frac{n}{5}$.}
\label{FIG:article-arXiv-11}
\end{figure}
This example can be generalized to \mbox{$kd$-domination} to obtain \mbox{$\gamma_{kd}(n) \geq \frac{n}{(2k+1)}$}. For example, in Fig. \ref{FIG:article-arXiv-12}, the black vertices can only be \mbox{$3d$-dominated} by different vertices, so \mbox{$\gamma_{3d}(n) \geq \frac{n}{7}$}.
\begin{figure}
\caption{A maximal outerplanar graph $G$ for which $\gamma_{3d}(G)=\frac{n}{7}$.}
\label{FIG:article-arXiv-12}
\end{figure}
According to theorem \ref{Thm:inequalities}, \mbox{$\gamma_{2d}(G) \leq g_{2d}(G)$}, so \mbox{$\lfloor \frac{n}{5}\rfloor \leq g_{2d}(G)$}. In conclusion, $\lfloor \frac{n}{5}\rfloor$ $2d$-guards are sometimes necessary and always sufficient to guard an $n$-vertex maximal outerplanar graph $G$. On the other hand, we can also establish that \mbox{$\gamma_{2d}(n) = \lfloor \frac{n}{5}\rfloor$}, since \mbox{$\lfloor \frac{n}{5}\rfloor \leq \gamma_{2d}(n)$} and \mbox{$\gamma_{2d}(n) \leq g_{2d}(n)$}, for all $n$. Thus, it follows:
\begin{theorem}
Every $n$-vertex maximal outerplanar graph with \mbox{$n \geq 5$} can be $2d$-guarded (and $2d$-dominated) by $\lfloor \frac{n}{5}\rfloor$ guards. This bound is tight in the worst case.
\end{theorem}
\section{$2d$-covering of maximal outerplanar graphs}
\label{SEC:CoveringTightUpperBounds}
In this section we determine an upper bound for the $2d$-vertex cover number on maximal outerplanar graphs and we show that this bound is tight. In order to do this, we first introduce the following lemma, whose proof is omitted, since it is analogous to that of lemma \ref{Lem:f(m-1)Guarding}.
\begin{lemma}
\label{Lem:f(m-1)Covering}
Suppose that $f(m)$ vertices are always sufficient to $2d$-cover any maximal outerplanar graph with $m$ vertices. If $G$ is an arbitrary maximal outerplanar graph and if we choose any of its $m$ vertices to place a $2d$-covering vertex, then $f(m-1)$ additional vertices are sufficient to $2d$-cover $G$.
\end{lemma}
\begin{theorem}
\label{Thm:SufficiencyCovering}
Every $n$-vertex maximal outerplanar graph, with \mbox{$n \geq 4$}, can be $2d$-covered with $\lfloor \frac{n}{4} \rfloor$ vertices. That is, $\beta_{2d}(n) \leq \lfloor \frac{n}{4} \rfloor$ for all \mbox{$n \geq 4$}.
\end{theorem}
\begin{proof}
For \mbox{$4 \leq n \leq 9$}, the truth of the theorem can be easily established -- the upper bounds are summarized in Table \ref{TAB:covering}. Note that for \mbox{$n=4$} the $2d$-covering vertex can be chosen arbitrarily and for \mbox{$n=5$} it can be placed at any vertex of degree greater than 2.
\begin{table}[!htb]
\centering
\small
\begin{tabular}{ | l | c | c | c | c | c | c | }
\hline
$n$ & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline \hline
$\beta_{2d}(n)$ & 1 & 1 & 1 & 1 & 2 & 2 \\
\hline
\end{tabular}
\caption{Number of vertices that suffice to $2d$-cover a maximal outerplanar graph of $n$ vertices.}
\label{TAB:covering}
\end{table}
Assume that \mbox{$n \geq 10$}, and that the theorem holds for all \mbox{$n'<n$}. Let $G$ be a maximal outerplanar graph with $n$ vertices. Lemma \ref{Lem:O'Rourke} guarantees the existence of an interior edge $e$ that partitions $G$ into maximal outerplanar graphs $G_1$ and $G_2$, where $G_1$ contains $m$ exterior edges of $G$ with $5 \leq m \leq 8$. Each value of $m$ will be considered separately.
\begin{enumerate}
\item[(1)] $m=5$. $G_1$ has $m+1=6$ exterior edges, thus $G_1$ can be $2d$-covered with one vertex. $G_2$ has $n-4$ exterior edges including $e$, and by the induction hypothesis, it can be $2d$-covered with $\lfloor \frac{n-4}{4} \rfloor = \lfloor \frac{n}{4} \rfloor -1$ vertices. Thus $G_1$ and $G_2$ together can be $2d$-covered by $\lfloor \frac{n}{4} \rfloor$ vertices.
\item[(2)] $m=6$. $G_1$ has $m+1=7$ exterior edges, thus $G_1$ can be $2d$-covered with one vertex. $G_2$ has $n-5$ exterior edges including $e$, and by induction hypothesis, it can be $2d$-covered with $\lfloor \frac{n-5}{4} \rfloor \leq \lfloor \frac{n}{4} \rfloor -1$ vertices. Thus $G_1$ and $G_2$ together can be $2d$-covered by $\lfloor \frac{n}{4} \rfloor$ vertices.
\item[(3)] $m=7$. The presence of any of the internal edges (0,6), (0,5), (1,7) and (2,7) would violate the minimality of $m$. Thus, the triangle $T$ in $G_1$ that is bounded by $e$ is either (0,3,7) or (0,4,7). Since these are equivalent cases, suppose that $T$ is (0,3,7) as shown in Fig. \ref{FIG:article-arXix-13to14}(a). We distinguish two cases:
\begin{enumerate}
\item
The degree of vertex 3 in the pentagon (3,4,5,6,7) is greater than 2. In this case vertex 3 is a $2d$-vertex cover of $G_1$, and by the induction hypothesis $G_2$ can be $2d$-covered with \mbox{$\lfloor \frac{n-6}{4} \rfloor \leq \lfloor \frac{n}{4} \rfloor-1$} vertices. Together with vertex 3, all of $G$ can be $2d$-covered by $\lfloor \frac{n}{4} \rfloor$ vertices.
\item
The degree of vertex 3 in the pentagon (3,4,5,6,7) is equal to 2. In this case vertex 7 $2d$-covers the pentagon (3,4,5,6,7). We consider the graph $G^{*}$ that results from the union of $G_2$, the triangle $T$ and the quadrilateral (0,1,2,3). In this way, $G^{*}$ has \mbox{$n-3$} vertices. By lemma \ref{Lem:f(m-1)Covering} the vertex 7 permits the remainder of $G^{*}$ to be $2d$-covered by $f(n-3-1)$ additional vertices, where $f(n')$ is the number of vertices that are always sufficient to $2d$-cover a maximal outerplanar graph of $n'$ vertices. By the induction hypothesis $f(n')= \lfloor \frac{n'}{4} \rfloor$. Thus $\lfloor \frac{n-4}{4} \rfloor = \lfloor \frac{n}{4} \rfloor - 1$ vertices are sufficient to $2d$-cover $G^{*}$. Together with vertex 7, all of $G$ is $2d$-covered by $\lfloor \frac{n}{4}\rfloor$ vertices.
\end{enumerate}
\begin{figure}
\caption{The interior edge $e$ separates $G$ into two maximal outerplanar graphs $G_1$ and $G_2$: (a) $G_1$ has 8 exterior edges and the triangle $T$ in $G_1$ that is bounded by $e$ is (0,3,7); (b) $G_1$ has 9 exterior edges and the triangle $T$ in $G_1$ that is bounded by $e$ is (0,8,4).}
\label{FIG:article-arXix-13to14}
\end{figure}
\item[(4)] $m=8$. $G_1$ has $m+1=9$ exterior edges, thus $G_1$ can be $2d$-covered with two vertices. We will consider two separate cases:
\begin{enumerate}
\item
Vertices 0 and 8 have degree 2 in pentagons (0,1,2,3,4) and (4,5,6,7,8), respectively. Then vertex 5 $2d$-covers $G_1$. By the induction hypothesis $G_2$ can be $2d$-covered with \mbox{$\lfloor \frac{n-7}{4} \rfloor \leq \lfloor \frac{n}{4} \rfloor -1$} vertices. Thus $G$ can be $2d$-covered by $\lfloor \frac{n}{4} \rfloor$ vertices.
\item
Vertex 0 has degree greater than 2 in pentagon (0,1,2,3,4). In this case we place a guard at vertex 0 and another guard at a vertex of the pentagon (4,5,6,7,8) whose degree is greater than 2. These two guards $2d$-cover $G_1$. $G_2$ has $n-7$ vertices. By Lemma \ref{Lem:f(m-1)Covering} the vertex 0 permits the remainder of $G_2$ to be $2d$-covered by $f(n-7-1)=f(n-8)$ additional guards, where $f(n')$ is the number of $2d$-covering vertices that are always sufficient to $2d$-cover a maximal outerplanar graph with $n'$ vertices. By the induction hypothesis $f(n')=\lfloor \frac{n'}{4} \rfloor$. Thus, $\lfloor \frac{n-8}{4} \rfloor = \lfloor \frac{n}{4} \rfloor-2$ vertices suffice to guard $G_2$. Together with the two allocated to $G_1$, all of $G$ is $2d$-covered by $\lfloor \frac{n}{4}\rfloor$ vertices.
\end{enumerate}
\end{enumerate}
\end{proof}
Now, we will prove that this upper bound is tight. The bold edges of the maximal outerplanar graph illustrated in Fig. \ref{FIG:article-arXiv-15} can only be $2d$-covered from different vertices, and therefore $\beta_{2d}(n) \geq \frac{n}{4}$.
\begin{figure}
\caption{A maximal outerplanar graph $G$ for which $\beta_{2d}(G) \geq \frac{n}{4}$.}
\label{FIG:article-arXiv-15}
\end{figure}
In conclusion,
\begin{theorem}
Every $n$-vertex maximal outerplanar graph with \mbox{$n \geq 5$} can be $2d$-covered by $\lfloor \frac{n}{4}\rfloor$ vertices. This bound is tight in the worst case.
\end{theorem}
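The small cases underlying the induction, and the worst-case value $\lfloor \frac{n}{4}\rfloor$ itself, can also be checked by exhaustive search. The following Python sketch enumerates all maximal outerplanar graphs with $n$ vertices (as triangulations of a convex polygon) and computes the worst-case minimum $2d$-cover by brute force. The predicate \texttt{covers} encodes only our reading of the $2d$-covering definition given earlier in the paper (a vertex $2d$-covers an edge when one of the edge's endpoints is at graph distance at most 1 from that vertex); this is an assumption of the sketch and should be adapted if it does not match the formal definition.
\begin{verbatim}
# Exhaustive check of the small cases (assumed covering predicate, see text).
from itertools import combinations

def triangulations(poly):
    # All triangulations of the convex polygon given by the vertex list 'poly'.
    if len(poly) < 3:
        yield frozenset()
        return
    a, b = poly[0], poly[-1]
    for k in range(1, len(poly) - 1):
        c = poly[k]
        for left in triangulations(poly[:k + 1]):
            for right in triangulations(poly[k:]):
                yield left | right | {frozenset((a, c)), frozenset((c, b))}

def edges_of(n, tri):
    hull = {frozenset((i, (i + 1) % n)) for i in range(n)}
    return hull | tri

def min_2d_cover(n, edges):
    closed_nbh = {v: {v} for v in range(n)}
    for e in edges:
        a, b = tuple(e)
        closed_nbh[a].add(b); closed_nbh[b].add(a)
    def covers(v, e):                      # assumed 2d-covering predicate
        a, b = tuple(e)
        return a in closed_nbh[v] or b in closed_nbh[v]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(any(covers(v, e) for v in S) for e in edges):
                return size

for n in range(4, 10):
    worst = max(min_2d_cover(n, edges_of(n, t))
                for t in triangulations(list(range(n))))
    print(n, worst)   # compare with the values 1, 1, 1, 1, 2, 2 in the table above
\end{verbatim}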
\section{Conclusions and further research}
\label{SEC:Conclusions}
In this article we defined the concept of $kd$-guarding and formalized the distance $kd$-vertex cover. We showed that there is a relationship between $2d$-guarding, $2d$-dominating and $2d$-vertex cover sets on triangulation graphs. Furthermore, we proved tight bounds for \mbox{$n$-vertex} maximal outerplanar graphs: \mbox{$g_{2d}(n) = \gamma_{2d}(n) = \lfloor \frac{n}{5} \rfloor$} and \mbox{$\beta_{2d}(n)= \lfloor \frac{n}{4} \rfloor$}.
Regarding future research, we believe these bounds can be extended to any triangulation and are therefore not exclusive to maximal outerplanar graphs. Moreover, it would be interesting to study how these bounds evolve for $3d$-guarding, $3d$-dominating and $3d$-vertex cover sets, and, more generally, to establish upper and lower bounds for the three concepts at any distance $k$. Finally, in the future we would like to study these distance concepts applied to other types of graphs, not only triangulations.
\end{document} |
\begin{document}
\title{Quantum force estimation in arbitrary non-Markovian Gaussian baths}
\author{C.L. Latune$^1$, I. Sinayskiy$^{1,2}$, F. Petruccione$^{1,2}$}
\affiliation{$^1$Quantum Research Group, School of Chemistry and Physics, University of
KwaZulu-Natal, Durban, KwaZulu-Natal, 4001, South Africa\\
$^2$National Institute for Theoretical Physics (NITheP), KwaZulu-Natal, 4001, South Africa}
\date{\today}
\begin{abstract}
The force estimation problem in quantum metrology with an arbitrary non-Markovian Gaussian bath is considered. No assumptions are made on the bath spectrum or on the coupling strength with the probe. Considering the natural global unitary evolution of both bath and probe and assuming an initial global Gaussian state, we are able to solve the main issues of any quantum metrological problem: the best achievable precision, determined by the quantum Fisher information, the best initial state and the best measurement. Studying the short time behavior and comparing it to the usual Markovian dynamics, we observe an increase of the quantum Fisher information. We emphasize that this phenomenon is due to the ability to perform measurements below the correlation time of the bath, activating non-Markovian effects. This has important consequences for the sequential preparation-and-measurement scenario, as the quantum Fisher information becomes unbounded when the initial probe mean energy goes to infinity, whereas its Markovian counterpart remains bounded by a constant. The long time behavior shows the complexity and potential variety of non-Markovian effects, lying somewhere between the exponential decay characteristic of Markovian dynamics and the sinusoidal oscillations characteristic of resonant narrow bands.
\end{abstract}
\maketitle
\section{Introduction}
Quantum metrology has brought a revolution to parameter estimation theory. The exploitation of subtle quantum resources such as entanglement and squeezing offers new ways of circumventing noise without requiring brute-force increases in energy (often not possible or not desirable).
An important example is the detection of gravitational waves, where quantum metrology makes it possible to reduce the leading residual noise in the detectors, the almost irreducible quantum fluctuations, by introducing squeezed light (a reduction of 28\% of the shot noise without additional noise \cite{natphot}). Note that the recent observation of gravitational waves from a binary black hole merger \cite{ligo} was realized with coherent light, but next-generation experiments are expected to use squeezed light. In such an experiment the strain of the gravitational wave is encoded in a phase difference between the electromagnetic fields running in the two arms of an interferometer. The resulting detection task is the estimation of a rotation in phase space.
Here, we focus on force estimation, which corresponds to the estimation of a displacement in phase space. High precision force estimation can be implemented in several setups such as optomechanics \cite{impulsiveforce, microopto, aspel}, trapped ions \cite{biercuck}, atomic force microscopy \cite{ieee} and other harmonic oscillator systems \cite{nanotube}; it is useful for testing or exploring fundamental physical phenomena \cite{singlespin, pt, ca, planck,nori}, and has applications in biology \cite{biology}.
The treatment of the force estimation problem with a realistic environment including non-Markovian noise was pioneered in \cite{euroj}. Most publications dealing with non-Markovian noise concern quantum metrology with two-level systems \cite{plenio,matsuzaki,dd}.
In this paper we treat force estimation in the presence of an environment consisting of a collection of harmonic oscillators, without any assumption on the bath spectrum and assuming a linear coupling of arbitrary strength with the probe. We develop a new approach to study the non-Markovian effects of the bath. Ideally, the dynamical parameter to be estimated is encoded in the system, or $probe$, via a unitary evolution. For a realistic process, noise has to be added. Rather than constructing and solving a dissipative master equation, possibly obtained from several approximations on the probe-environment coupling and initial state, we consider the global unitary evolution of the system and its environment.
In doing so we avoid the traditional Markovian and weak coupling approximations and the resolution of a master equation. We start directly from a physical purification of the probe dynamics determined by the bath properties and bath-probe coupling coefficients. We also avoid problems related to potential initial correlations between probe and bath which can yield non-completely positive maps when tracing out the bath \cite{pechukas}.
In this work, effects which arise exclusively from the dynamics at times smaller than the bath correlation time are called non-Markovian. In other words, being able to perform measurements at time scales below the bath correlation time can potentially unveil new effects, which we call non-Markovian. This notion of non-Markovianity coincides with its classical counterpart since it refers to the memory of the bath, defined as the correlation time.
How this is related to more elaborate definitions which intend to characterize non-Markovianity by short, medium or long time effects is a highly non-trivial question. Hereafter we refer to non-Markovian noise as noise appearing below the correlation time of the bath.
The parameter estimation protocol is the following. The probe system is a harmonic oscillator $S$. After a time window $[t_0;t]$ of force sensing (interaction between the probe and the force), the probe is measured and the force is estimated from the measurement output, given that the initial state is known. The function which generates an estimate from the measurement outputs is called an estimator. To obtain a better estimate, the process has to be repeated several times and the estimate updated via the chosen estimator. For a given measurement, the maximal achievable precision of the estimation is determined by the so-called Fisher information \cite{fisher,cramer,rao}. We denote by $F$ the force intensity, $\rho_S(t,t_0,F)$ the probe state just before the measurement, $\{\Pi_m\}_m$ a POVM (positive operator-valued measure) describing the measurement, and $p(m|F) = {\rm Tr}[\rho_S(t,t_0,F)\Pi_m]$ the conditional probability of getting the measurement output $m$. The Fisher information associated to this estimation experiment is \cite{fisher}
\begin{equation}
{\cal F}(\{\Pi_m\}_m,F) = \int dm \frac{1}{p(m|F)}\left[\frac{\partial p(m|F)}{\partial F}\right]^2.
\end{equation}
The Fisher information as well as the precision of the force estimation depend strongly on the initial state of the probe and on the measurement applied. The maximal Fisher information over all possible POVMs is called the quantum Fisher information (QFI), ${\cal F}_Q(F) = {\rm Max}_{\{\Pi_m\}_m}\{{\cal F}(\{\Pi_m\}_m,F)\}$. The uncertainty of the estimation is characterised by the mean square error $\delta F$ between the estimate $F_{est}$ and the real value $F$, and is lower bounded by the Cramer-Rao bound \cite{cramer,rao}:
\begin{equation}\label{qcrb}
\langle\delta^2 F\rangle:= \langle(F_{est}-F)^2\rangle \geq [\nu {\cal F}_Q(F)]^{-1},
\end{equation}
where $\nu$ is the number of independent repetitions of the experiment and $\langle...\rangle$ is an ensemble average. This lower bound is saturated for efficient and unbiased estimators \cite{fisher,caves,bcm96} and when the best measurements are realised. Identifying the best measurements and efficient estimators is in general a complex task. Except for special situations, there is no efficient estimator for finite $\nu$ \cite{bcm96}. The inequality \eqref{qcrb} is sometimes referred to as local estimation, not only because in principle the lower bound depends on the value of the parameter $F$ and consequently characterises the precision only at this value, but also because estimators which saturate the bound may depend on the value of the unknown parameter, and their efficiency may require that the range of possible values of the unknown parameter is known beforehand.
However, because we choose a linear interaction between the force and the probe, the QFI does not depend on $F$ and the lower bound of the error function (see \cite{tsang}) would remain the same as the actual Cramer-Rao bound. Note also that the hypothesis of initialising the probe and bath in a global Gaussian state guarantees that the distributions generated by quadrature measurements are Gaussian, so that the maximum likelihood estimator \cite{fisher,braunstein} is efficient already from the first measurement ($\nu=1$).
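As a minimal numerical illustration of this last point (not taken from the cited works), the following Python sketch assumes a Gaussian outcome distribution whose mean is linear in $F$, uses the rescaled sample mean as the maximum likelihood estimator, and compares its empirical variance with the Cramer-Rao bound of Eq. \eqref{qcrb}; all numerical values are arbitrary assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
F_true, slope, sigma = 0.7, 2.0, 0.5   # outcome m ~ N(slope*F, sigma^2)
nu = 50                                # repetitions of the experiment
fisher = slope**2 / sigma**2           # Fisher information per repetition

trials = 20000
m = rng.normal(slope * F_true, sigma, size=(trials, nu))
F_hat = m.mean(axis=1) / slope         # maximum likelihood estimate of F
print("empirical variance of F_hat:", F_hat.var())
print("Cramer-Rao bound 1/(nu*FI) :", 1.0 / (nu * fisher))
\end{verbatim}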
The QFI determines the maximal achievable precision, making it the central quantity in most works on quantum metrology.\\
In the following we derive an analytic expression of the QFI and determine the best initial state and the best measurement, with the only assumption being the Gaussianity of the initial global state of the probe and environment. From this result we show that, surprisingly, at short times $t-t_0$ below the bath correlation time, the QFI is equal up to order 3 in $t-t_0$ to the QFI for noiseless evolution, whereas the QFI under the Markovian approximation is equal to the noiseless expression only up to order 2 in $t-t_0$. This interesting feature is responsible for a huge difference between the Markovian and non-Markovian noise regimes in what we call the sequential preparation-and-measurement scenario, a repeated sequence of probe state preparation, sensing of the force for an adequate duration $\tau$, and measurement (Section \ref{seqmes}). We show that with growing initial energy of the optimally prepared probe, the optimal duration of the protocol step diminishes, so that by entering timescales below the bath correlation time one may benefit from the non-Markovian effects. As a result, the QFI goes to infinity (and the uncertainty of the estimation to zero), whereas it remains limited by a constant for protocol-step timescales bigger than the bath correlation time, where the probe experiences Markovian noise. \\
In \cite{dd} the authors observe that the QFI for frequency estimation with qubits can diverge with growing initial energy. Their observation points in the same direction as ours, but they conclude that this super-classical precision is not related to non-Markovianity since they do not observe a back-flow of information from the bath to the probe.
Here we show that the same super-classical precision is attainable in continuous variable systems, and without specific hypotheses on the noise as in \cite{dd}, just assuming Gaussianity of the bath (which includes ohmic, sub-ohmic or super-ohmic baths, a ``general environment'' as it is called in \cite{paz}).
Although back-flow of information is a witness of non-Markovianity, it is not a universal criterion. In \cite{dd} it is not possible to analyse directly the memory properties of the bath because the bath does not appear explicitly. In our model we can directly compare the bath dynamical timescales with the measurement timescale and conclude that the super-classical precision occurs when the measurement timescale enters the memory timescale of the bath, where non-Markovian noise arises.
\section{Global dynamics}\label{global}
As mentioned in the introduction the harmonic oscillator $S$ is linearly coupled to the force to be estimated. The force is modulated by a time function $\zeta(t)$ assumed to be known. The global unitary evolution of the probe $S$ and the bath $B$ is denoted by $U(t,t_0,F)$.
We use hereafter the following notations:
\begin{eqnarray}
X(\theta)&:=&\frac{1}{\sqrt{2}}(a^{\dag}e^{i\theta} + a e^{-i\theta}),\nonumber\\
X(\theta) [t,t_0,F]&:=&U^{\dag}(t,t_0,F)X(\theta)U(t,t_0,F),\nonumber\\
\big\langle X(\theta)[t,t_0,F]\big\rangle_0&:=&{\rm Tr}_{SB}\{\rho_{SB}^0 X(\theta)[t,t_0,F]\},\nonumber\\
\big\langle\Delta^2 X(\theta)[t,t_0]\big\rangle_0&:=& \big\langle \{X(\theta)[t,t_0]\}^2\big\rangle_0 -\big\langle X(\theta)[t,t_0]\big\rangle_0^2.\nonumber
\end{eqnarray}
Note that, defined in this way, $X(\theta)[t,t_0]$ is an observable on the joint Hilbert space of the probe and the bath. By taking the expectation value in the state $\rho_{SB}^0$ one recovers the usual expression for the expectation value of a probe observable. Note also that the quantity inside the parentheses (here $\theta$) and $[t,t_0]$ are independent: the first one designates the quadrature angle, which in the following will be chosen according to the instant of measurement $t$, and the second one designates the time evolution of this same quadrature between $t_0$ and $t$.
The global Hamiltonian is given by:
\begin{eqnarray}
H(t,F)/\hbar &=& \omega_0 a^{\dag} a - F\frac{\omega_0}{\sqrt{2}}\zeta(t)\left(a^{\dag}+a\right) + \sum_n \omega_n b_n^{\dag}b_n \nonumber\\
&&- ia^{\dag}\sum_n K_n b_n +ia\sum_n K_n^{*}b_n^{\dag} ,
\end{eqnarray}
where $F$ is the force intensity, the parameter to be estimated. We consider that the environment consists of a collection of harmonic oscillators coupled to $S$ via the coefficients $K_n$. We show (Appendix \ref{evolution}) that the global unitary evolution $U(t,t_0,F)$ can be written in the following form
\begin{eqnarray}\label{evolop}
U(t,t_0,F) = &e^{-\frac{i}{\hbar}(t-t_0)H_0}&e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})} R(t,t_0){\cal B}(t,t_0),\nonumber\\
\end{eqnarray}
where $t_0$ designates the instant of time of the beginning of the interaction of the probe with the bath and the force. Expression \eqref{evolop} shows that we can split $U(t,t_0,F)$ in a succession of four operations (see details in Appendix \ref{evolution}). Firstly a probe-bath mixing ${\cal B}(t,t_0)$. Then a displacement proportional to $F$ implemented by the operator $R(t,t_0)$ in the phase space of the bath. A second displacement given by $e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})}$ and taking place in the phase space of $S$, with $D(t,t_0) = \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) $, $\phi^D_{t,t_0}=\arg{D(t,t_0)}$, and the complex function $G(t,u)$ describing the bath response. Its explicit expression is similar to a Dyson series and is shown at the beginning of Appendix \ref{evolution}. The last operation is the free evolution $e^{-\frac{i}{\hbar}(t-t_0)H_0}$ with $H_0/\hbar=\omega_0 a^{\dag} a+ \sum_n \omega_n b_n^{\dag}b_n$.
Expression \eqref{evolop} is by itself an important and interesting result, owing to its simple form. It allows the exact derivation of probe and bath observables in the Heisenberg picture and of the corresponding moments (Appendix \ref{evolution}). The usual treatment would require much more work to derive these quantities: establishing an exact non-Markovian master equation and then solving it. With our method we also have the possibility of deriving these quantities without the hypothesis of an initially uncorrelated bath and probe, which would hardly be possible with a master equation treatment \cite{pechukas}. However, this possibility is not exploited in the present work since we focus on the best initial state, which is a pure state.
Note that the expression \eqref{evolop} is similar -- up to the operator $R(t,t_0)$ -- to the form found in \cite{pra}, where the noise is treated via a Markovian master equation. Note also that \eqref{evolop} is obtained without any approximation regarding the strength of the coupling with the bath or the bath spectral density.
\section{Derivation of the metrological quantities for Gaussian states}
In order to access the QFI of $S$ related to the parameter $F$ we have to maximize the Fisher information ${\cal F}(\{\Pi_m\}_m,F)$ over all physical measurements $\{\Pi_m\}_m$. Unfortunately this is not tractable in general. Efforts have been made to find alternatives \cite{caves, paris, bruno, nicim}, but they are still hard to apply for arbitrary dynamics and states. We restrict ourselves to Gaussian states, which are more easily accessible experimentally and for which explicit expressions of the QFI have been derived \cite{pinel, monras, jiang}. As we will see in the following, this important assumption also guarantees that the best measurement is a quadrature measurement and that the maximum likelihood estimator is efficient, implying that the quantum Cramer-Rao bound Eq. \eqref{qcrb} is saturated for any number of repetitions of the experiment ($\nu \geq 1$) \cite{fisher,braunstein}. Assuming that the initial global state of the probe and bath $\rho_{SB}^0$ is Gaussian implies that at any time the probe state $\rho_S(t,t_0)$ is Gaussian too, since the global Hamiltonian $H(t,F)$ is quadratic and the partial trace is a Gaussian operation. We are now able to derive an expression of the QFI thanks to results from \cite{caves} and expressions for the fidelity between Gaussian states \cite{scutaru}. One can also use the derivations in \cite{pinel, monras, jiang}, which are applications of the results in \cite{caves} to Gaussian states. As expected, all these expressions of the QFI for Gaussian states are functions of the first and second moments of the quadrature operators.
As already mentioned at the end of Section \ref{global} these quantities are highly non-trivial to derive without approximations. Here we do so (Appendix \ref{evolution}) thanks to the simple form of \eqref{evolop}.
Since in this problem of force estimation the parameter $F$ to be estimated controls only the amplitude of the displacement in the phase space, the expression for QFI derived in \cite{pinel} for Gaussian states can be simplified to
\begin{eqnarray}\label{firstqfi}
{\cal F}_Q (t,t_0) &=& \frac{ |D(t,t_0)|^2}{{\rm Det}{\bf\Sigma}(t,t_0)}\big\langle\Delta^2 X(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\big\rangle_0,\nonumber\\
\end{eqnarray}
where ${\rm Det}{\bf \Sigma}(t,t_0)$ is the determinant of the covariance matrix ${\bf \Sigma}(t,t_0)$ of $\rho_S(t,t_0)$. Note that the use of the result of \cite{pinel} requires that the initial probe state is Gaussian, but this does not exclude correlation with the bath. The minimal requirement is indeed that $\rho_{SB}^0$ be Gaussian. Since $\rho_S(t,t_0)$ is Gaussian, the determinant ${\rm Det}{\bf \Sigma}(t,t_0)$ is equal to the product of the extremal quadrature variances. Let us call $\theta_m^t$ the angle of the maximal quadrature variance at time $t$, so that we have ${\rm Det}{\bf \Sigma}(t,t_0) = \big\langle\Delta^2 X(\theta_m^t)[t,t_0]\big\rangle_0 \times\big\langle\Delta^2 X(\theta_m^t+\pi/2)[t,t_0]\big\rangle_0$. The angle $\theta_m^t$ is a function of $t$ and of $\theta_m^0$, the angle of the maximal quadrature variance of the initial state at $t_0$. If the value of $t$ is predetermined, meaning that we choose the length of the sensing window before the beginning of the experiment (as in the sequential preparation-and-measurement scenario, Section \ref{seqmes}), $\theta_m^t$ is uniquely determined by $\theta_m^0$ and so there exists one $\theta_m^0 \in[0,\pi[$ such that $\theta_m^t = \phi^D_{t,t_0} - \omega_0(t-t_0)$. As shown in Appendix \ref{evolution}, for the probe and bath initially uncorrelated, the following simple relation holds: $\theta_m^t = \theta_m^0 + \phi_{t,t_0}^G -\omega_0(t-t_0)$, where $\phi_{t,t_0}^G:=\arg{G(t,t_0)}$. Under these conditions, preparing the probe with $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ guarantees that $\theta_m^t = \phi^D_{t,t_0} - \omega_0(t-t_0)$, and consequently the expression of the QFI reduces to
\begin{equation}\label{qfipvar}
{\cal F}_Q(t,t_0) = \frac{ |D(t,t_0)|^2}{\Big\langle\Delta^2 P(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\Big\rangle_0}.
\end{equation}
This expression is remarkable for its simplicity and for its similarity to the noiseless and Markovian expressions \cite{pra}. Note that the condition $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ is obviously satisfied for any circularly symmetric state, such as coherent or Fock states.
However, more relevant than the simple form of the QFI is the identification of the best initial state. We already know that it is a pure state (because of the convexity of the QFI), which in particular implies that the best state is not correlated with the bath. Maximizing \eqref{firstqfi} amounts to maximizing the variance $\big\langle\Delta^2 X(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\big\rangle_0$ (given a limited initial mean energy $E$) and minimizing ${\rm Det}{\bf \Sigma}(t,t_0)$, which is already minimal and equal to $1/4$ if $S$ is initialized in a pure state. In Appendix \ref{evolution} we show that the best state is the squeezed state $\hat{S}[\mu(t,t_0)]|0\rangle$, where $|0\rangle$ is the ground state of the harmonic oscillator, $\hat{S}[\mu(t,t_0)] = \exp{\left(\frac{\mu(t,t_0)}{2} a^{\dag 2} - \frac{\mu^{*}(t,t_0)}{2} a^{2}\right)}$ is a squeezing operator, with $\mu(t,t_0)=re^{2i(\phi^D_{t,t_0}-\phi^G_{t,t_0})}$ and $r=\frac{1}{2}\ln{[2(E+\sqrt{E^2-1/4})]}$. One can see that the best state is squeezed along the quadrature $P(\phi_{t,t_0}^D-\phi_{t,t_0}^G)$, implying that the condition $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ is satisfied and that the QFI for the best state also takes the form \eqref{qfipvar}. Substituting the variance of the quadrature $P(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0,F]$ by its expression for the best state gives
\begin{widetext}
\begin{eqnarray}\label{qfibeststate}
{\cal F}_Q(t,t_0) = \frac{ |D(t,t_0)|^2}{|G(t,t_0)|^2\left[4(E+\sqrt{E^2-1/4})\right]^{-1}+\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Big|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Big|^2},
\end{eqnarray}
\end{widetext}
where $N_n := {\rm Tr}[\rho_B^0b_n^{\dag}b_n]$.
This expression shows the transition between noiseless ($K_n=0$ and $G(t,t_0)=1$) and noisy quantum metrology. Without noise we recover the so-called Heisenberg limit characterized by a linear dependence in $E$, leading to an infinite precision for growing $E$, as in noiseless classical parameter estimation. For noisy quantum metrology, as time goes on the influence of the bath grows ($|G(t,t_0)| \leq 1$) and can eventually dominate the initial preparation conditions ($|G(t,t_0)| \ll 1$), spoiling efforts to recover infinite precision. In the broadband limit and under the rotating wave approximation, $G(t,t_0)$ tends to 0 at long times (see Appendix \ref{bnband}), erasing the dependence on the initial conditions and recovering the Markovian behavior \cite{pra}. We also show in Appendix \ref{bnband} that the resonant narrow band limit reduces $G(t,t_0)$ to a cosine, resulting in an indefinite succession of forward and backward flows of information between $S$ and the bath along with an indefinite dependence on the initial conditions, contrasting with the Markovian behavior. However, as we show in the following, the long time behavior of $G(t,t_0)$ cannot be determined in general without explicit expressions of $K_n$, but a behavior between these two extremes of total loss and total recovery of the initial information is expected. \\
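For orientation, the following short Python sketch evaluates the noiseless limit of Eq. \eqref{qfibeststate}, i.e. $K_n=0$ and $G=1$, for a constant modulation $\zeta=1$; the parameter values ($\omega_0$, $t$, the grid size) are arbitrary assumptions, and the printed values simply display the linear, Heisenberg-like growth of the QFI with the initial mean energy $E$ mentioned above.
\begin{verbatim}
import numpy as np

omega0, t = 1.0, 2.0
# D_0 = omega0 * int_0^t zeta(u) exp(i omega0 u) du, here with zeta(u) = 1
du = t / 4000
u = (np.arange(4000) + 0.5) * du
D0 = omega0 * np.sum(np.exp(1j * omega0 * u)) * du

for E in [1.0, 10.0, 100.0, 1000.0]:
    FQ = 4.0 * (E + np.sqrt(E**2 - 0.25)) * abs(D0)**2
    print(E, FQ)      # grows essentially linearly with E (Heisenberg limit)
\end{verbatim}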
To complete the protocol of best estimation we determine in Appendix \ref{quadmeas} that the projective measurement onto the quadrature $P(\phi^D_{t,t_0} -\omega_0(t-t_0))$ yields a {\it Fisher information} equal to the right hand side of \eqref{qfipvar}. One concludes that whenever the {\it QFI} is given by \eqref{qfipvar}, in other words whenever the initial state is prepared in a Gaussian state with the maximal variance of the quadrature occurring for $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ (this includes the best state), the best measurement is the projection onto the quadrature $P(\phi^D_{t,t_0} -\omega_0(t-t_0))$. This useful result also shows that the best measurement generates Gaussian distributions, which implies that the maximum likelihood estimator is efficient for an arbitrary number of repetitions of the experiment \cite{fisher,braunstein}. One can show that in this problem the maximum likelihood estimator is the simple average of the outcomes (successively obtained after each repetition of the experiment) of the projective measurement just described above.
Interestingly, we can treat the situation where the instants of beginning and end of the force are not known exactly. Let us call $t_i$ and $t_f$ these instants, so that $\zeta(t)=0$ for $t\leq t_i$ and $t\geq t_f$. The only change in the previous expressions is $D(t,t_0) = \omega_0\int_{t_i}^{t_f}du \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) = D(t_f,t_i)$. As expected, initializing the experiment before the force begins and measuring after the force stops is detrimental to the precision of the estimation since $\big\langle \Delta^2 P(\phi^D_{t_f,t_i} -\omega_0(t_f-t_i))[t_f,t_i]\big\rangle \leq \big\langle \Delta^2 P(\phi^D_{t_f,t_i} -\omega_0(t-t_0))[t,t_0]\big\rangle$. This is one of the arguments for opting for the sequential preparation-and-measurement scenario which, as shown in Appendix \ref{seqmeasurement}, avoids the potential problem of knowing $t_i$ and $t_f$.\\
\section{Short and long time behavior}\label{shortandlong}
We assume that we can perform a measurement at a time $t$ such that $t-t_0$ is much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$ (see Appendix \ref{considtimescale}). Under this condition one can legitimately expand Eq.\eqref{qfipvar} around $t_0$:
\begin{eqnarray}
{\cal F}_Q(t,t_0) &= \frac{\omega_0^2(t-t_0)^2}{\left\langle \Delta^2 P\left[\phi_{t,t_0}^{D_0}\right]\right\rangle_0 }&\left[\zeta^2(t_0) +\zeta(t_0)\dot{\zeta}(t_0)(t-t_0)\right]\nonumber\\
&&+ {\cal O}[\omega_0^2{\cal K}^2(t-t_0)^4],\label{fexpanded}
\end{eqnarray}
where $\phi^{D_0}_{t,t_0}:=\arg{D_0(t,t_0)}$ with $D_0(t,t_0):=\lim_{|K_n|\rightarrow0} D(t,t_0)$, the noiseless value of $D(t,t_0)$ (Appendix \ref{considtimescale}). Consequently the bath dependence appears only at the 4th order in $(t-t_0)$, through the coefficient ${\cal K}^2:=\sum_n |K_n|^2$. One can show from \eqref{firstqfi} that this property is not merely an artefact of some particular initial states but in fact remains valid for any initial Gaussian state. This means that the QFI is equal, up to order 3 in $(t-t_0)$, to the QFI without any noise. This is a surprising and interesting result, reminiscent of the quantum Zeno effect. We show in Appendix \ref{bathcorrelation} that the evolution time scales of the bath correlation function are of the order of $\Omega_p^{-1}$ and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$. This implies that the expansion \eqref{fexpanded} is valid if the measurement is performed at time scales below the correlation time of the bath. \\
If on the contrary we cannot perform measurements below the correlation time of the bath, the expansion \eqref{fexpanded} is not valid (the first terms are no longer significant and the higher terms have to be taken into account). We show in Appendix \ref{shorttime} that in such a situation the short time expansion contains a bath contribution at the 3rd order. The same phenomenon happens for Markovian dynamics.\\
We show in Section \ref{seqmes} that this short time behaviour is responsible for a great increase (and even a change of scaling) of the QFI in sequential preparation-and-measurement protocols when the optimal time interval between measurements becomes smaller than the bath correlation time scale. \\
The long time analysis is much more complicated in general due to $G(t,t_0)$. Looking first at the long time behavior of the first term of $G(t,t_0)$, one finds that the real part tends to its Markovian equivalent $-\gamma(t-t_0)/2$, while the imaginary part is more involved, but cancels out if $g(\omega)|K(\omega)|^2$ is symmetric with respect to $\omega_0$ (see Appendix \ref{longtime}).
However, one can show that even in the limit of $t-t_0$ much bigger than all time scales involved in the problem, the second term in the expansion of $G(t,t_0)$ is different from its Markovian equivalent, $\gamma^2(t-t_0)^2/8$. So the Markovian behavior does not seem to be recovered in the long time limit, as is sometimes claimed \cite{euroj}. Note that for arbitrary $g(\omega)|K(\omega)|^2$ the imaginary part does not cancel out and its contribution can give rise to diverse long time behaviors far different from the Markovian one, as for instance in the extreme case of a resonant narrow band (Appendix \ref{bnband}).
\section{Sequential preparation-and-measurement scenario}\label{seqmes}
Let $[t_0,t_0 + T]$ be the time window available for the sensing of the force. In order to increase the Fisher information about $F$ one can repeat the measurement $\nu$ times after sensing time intervals of duration $\tau$ such that $\nu \tau =T$. This process significantly increases the precision of the estimation when the window duration $T$ is bigger than the relaxation time of the probe. The time interval $\tau$ must be chosen adequately: there are two competing quantities growing with time, the information about $F$ in the probe state and the environment-induced fluctuations of the probe state. We are now interested in determining this optimal time interval $\tau_{\mathrm{opt}}$.\\
The total QFI is just the sum of each Fisher information generated after each measurement at time $t_k:=t_0+k\tau$:
\begin{eqnarray}\label{fseq}
{\cal F}_Q^{\mathrm{Seq}}(T,\tau) &=& \sum_{k=0}^{\nu-1} {\cal F}_Q(t_{k+1},t_k)\nonumber\\
&=& \sum_{k=0}^{\nu -1}\frac{|D(t_{k+1},t_k)|^2}{\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle_0}.
\end{eqnarray}
The second line is valid if after each measurement the probe is prepared in the best state $\hat{S}[\mu(t_{k+1},t_k)]|0\rangle$. This is what we call the sequential preparation-and-measurement scenario. Once we choose $\tau$ we can evaluate (at least numerically) $\phi^D_{t_{k+1},t_k}$ and prepare the state $\hat{S}[\mu(t_{k+1},t_k)]|0\rangle$. The challenge is to prepare it in a time interval much smaller than the evolution timescale of the harmonic oscillator, so that the time needed for the state preparation can be neglected. Under this assumption the variances $\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k,F]\rangle$ become $k$-independent (see Appendix \ref{seqmeasurement}) and can be taken out of the sum in Eq. \eqref{fseq}.\\
To continue the analytic analysis we assume that we are looking for a small optimal time interval $\tau_{\mathrm{opt}}$. Assuming that in Eq. \eqref{fseq} $\tau$ is much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$ (see Appendix \ref{considtimescale}), we can legitimately derive a small time expansion (shown in Appendix \ref{seqmeasurement}). From it we deduce
\begin{eqnarray}\label{optimaltime}
\tau_{\mathrm{opt}}=\frac{1}{2\sqrt{3{\cal N}}}{\cal E}^{-1/2} + {\cal O}({\cal E}^{-3/2}),
\end{eqnarray}
with ${\cal E}:=\left(E+\sqrt{E^2-\frac{1}{4}}\right)$ and ${\cal N} := \sum_n|K_n|^2(N_n+1/2)$. The second order term is detailed in Appendix \ref{seqmeasurement}. This result illustrates the close relation between optimal time and initial energy $E$: when $E$ goes to infinity, the initial fluctuation of the probe state goes to zero (for the chosen quadrature) and $\tau_{\mathrm{opt}}$ tends to zero. One can derive a sufficient condition on the initial energy $E$ which guarantees that $\tau_{\rm opt} \leq \omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$
(see Appendix \ref{considtimescale}). Interestingly for experimental implementations, the leading term of Eq. \eqref{optimaltime} depends neither on the total available sensing window $[t_0;t_0+T]$ nor on the force time modulation $\zeta(t)$. For a Markovian bath, $\tau_{\mathrm{opt}}$ is proportional to ${\cal E}^{-1/3}$ \cite{pra}.\\
The corresponding total QFI is
\begin{eqnarray}
&&{\cal F}_Q^{\mathrm{Seq}}(T,\tau_{opt})= \frac{\sqrt{3}\xi(T,t_0)}{2\sqrt{\cal N}}{\cal E}^{1/2} + {\cal O}({\cal E}^{-1/2}),
\end{eqnarray}
with $\xi(T,t_0):=\int_{t_0}^{t_0+T}dt\zeta^2(t)$. The second order term is also detailed in Appendix \ref{seqmeasurement}.
The total quantum Fisher information scales as $E^{1/2}$, and as $E_{\mathrm{tot}}^{1/3}$ in terms of the total energy $E_{\mathrm{tot}}:=\nu_{\mathrm{opt}}E = TE/\tau_{\mathrm{opt}}$. This is valid when the initial energy invested in the squeezing of the probe is sufficient (see Eq. \eqref{optimaltime}) so that the optimal time becomes smaller than $\Omega_p^{-1}$ and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$, which correspond to the timescales of the bath correlation function (Appendix \ref{bathcorrelation}). If these conditions are not fulfilled, the above results are no longer valid. Instead, when $\tau$ is bigger than the bath correlation timescale, the short time expansion of Eq. \eqref{fseq} changes (Appendix \ref{nmvsm}) and the total QFI becomes bounded by a constant (Appendix \ref{seqmeasurement}), as for Markovian dynamics \cite{pra}.
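A quick numerical reading of these scalings can be obtained as follows; the sketch simply evaluates the leading terms of Eq. \eqref{optimaltime} and of the sequential QFI above with made-up bath data (occupations, couplings, sensing window), and displays the $\tau_{\mathrm{opt}}\propto E^{-1/2}$ and ${\cal F}_Q^{\mathrm{Seq}}\propto E^{1/2}$ behaviours.
\begin{verbatim}
import numpy as np

# Made-up bath data: occupations N_n and couplings |K_n|^2 for 200 modes.
N_n = 0.5 * np.ones(200)
K2_n = 1.0e-3 * np.ones(200)
calN = np.sum(K2_n * (N_n + 0.5))
T = 100.0
xi = T                         # xi(T,t0) = int zeta^2 dt with zeta = 1

for E in [10.0, 100.0, 1000.0, 10000.0]:
    calE = E + np.sqrt(E**2 - 0.25)
    tau_opt = 1.0 / (2.0 * np.sqrt(3.0 * calN) * np.sqrt(calE))
    FQ_seq = np.sqrt(3.0) * xi / (2.0 * np.sqrt(calN)) * np.sqrt(calE)
    print(E, tau_opt, FQ_seq)  # tau_opt ~ E^{-1/2}, FQ_seq ~ E^{1/2}
\end{verbatim}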
Only one previous work exhibits similar tendencies for non-Markovian noise effects on a harmonic oscillator probe \cite{euroj}. Here, we show that these results are general properties which depend neither on the force time modulation nor on the bath spectrum, coupling coefficients or coupling strength. In \cite{dd} the authors reach similar conclusions for frequency estimation with qubits, but treat a special kind of noise which commutes with the parameter-encoding transformation. \\
We show in Appendix \ref{seqmeasurement} that, contrary to the single measurement procedure, the sequential preparation-and-measurement scenario does not generate a loss of information when the experimental time window $[t_0,t_0+T]$ is larger than the one during which the force is actually nonzero.
\section{Conclusion}
We have shown the striking difference between Markovian and non-Markovian noise from the perspective of quantum metrology, stemming from the ability to perform measurements within the correlation time of the bath, activating non-Markovian effects. Our results suggest that we can consider at least three situations. Firstly, when measurements are performed within the correlation time of the bath, leading to the super-classical scaling of the QFI. Secondly, when measurements are performed on a time scale between the bath correlation time and the time scale $\gamma^{-1}$ emerging from the Markovian approximation, leading to the bounded increase of the QFI discussed in \cite{pra}. Finally, when measurements are performed on a time scale bigger than $\gamma^{-1}$, leading to no significant increase with respect to the one-measurement strategy, and carrying the whole burden of the noise. The transition between super-classical scaling and bounded QFI should occur continuously, and should happen when the measurement timescale is comparable with the bath correlation timescale. In this intermediate situation we cannot conclude about the behaviour of the QFI. \\
One should keep in mind that the ability to perform measurements at time scales shorter than the bath correlation timescale is not enough; it is also necessary that the energy invested in the initial squeezing of the probe state be large enough so that the optimal time interval $\tau_{\mathrm{opt}}$ goes below the bath correlation timescale. \\
In \cite{screport,zeno} the authors show an anticorrelation between the quantum Zeno effect and metrological improvement. This is because the estimated parameter depends on the internal dynamics (free evolution of the probe), so that when the evolution is frozen by the quantum Zeno effect, the information about the parameter encoded into the probe is also frozen. In our problem the action of the external force is not affected by the measurements or by the quantum Zeno effect, so that the flow of information about $F$ from outside into the probe is not frozen.\\
Our method using the global unitary evolution proved efficient for calculating the main dynamical quantities, and even the QFI for initial Gaussian states, with no other assumptions. The whole difficulty of the exact evolution is concentrated in the bath response function $G(t,t_0)$. It seems that $G(t,t_0)$ captures by itself the nature of the noise, being monotonic for Markovian noise and non-monotonic in the resonant narrow band limit. The real behavior of $G(t,t_0)$ lies in between. The short time behaviour of $G(t,t_0)$ is also related to the quantum Zeno effect. This relationship will be investigated in future work. \\
We can treat in the same way the probe-bath coupling without the rotating wave approximation, $(a^{\dag} + a)(B^{\dag}+B)$. The expression of the QFI is again \eqref{firstqfi}, but with a more complex function $G(t,u)$. Squeezing effects from the bath appear, making the identification of the best state and best measurement more difficult. \\
An interesting perspective is to adapt these results to waveform estimation \cite{tsang1,tsang2}.
\begin{acknowledgements}
This work is based upon research supported by the
South African Research Chair Initiative of the Department of Science and Technology and National Research Foundation. CLL acknowledges and thanks the support of the College of Agriculture, Engineering and Science of the UKZN.
\end{acknowledgements}
\begin{widetext}
\appendix
\numberwithin{equation}{section}
\section{Operator evolution, first and second moment of probe quadrature operator}\label{evolution}
We first split the evolution operator into a free evolution term and an interaction picture operator. Then, we separate out the probe-bath interaction term using the formula
\begin{eqnarray}
&&\exp{\left\{{\cal T}\int_{t_0}^t du [A(u) + B(u)]\right\}} = \exp{\left\{{\cal T}\int_{t_0}^t du \left[e^{{\cal T}\int_{u}^t dsA(s)}B(u)e^{-{\cal A}\int_{u}^t dsA(s)}\right]\right\}} \exp{\left\{{\cal T}\int_{t_0}^t du A(u)\right\}},
\end{eqnarray}
where ${\cal T}$ is the time ordering operator and ${\cal A}$ is the anti-chronological ordering operator; we apply this formula with $A(u)$ the probe-bath coupling term and $B(u)$ the force coupling term. This yields
\begin{eqnarray}
U(t,t_0,F) &=& e^{-iH_0(t-t_0)/\hbar}\exp{\left\{iF\frac{\omega_0}{\sqrt{2}}{\cal T}\int_{t_0}^t du \zeta(u) \left[ e^{i\omega_0(u-t_0)}{\cal B}(t,u)a^{\dag} {\cal B}^{\dag}(t,u) + c.c.\right]\right\}}{\cal B}(t,t_0), \label{uintermed}
\end{eqnarray}
where $H_0/\hbar = \omega_0 a^{\dag}a + \sum_n \omega_n b_n^{\dag} b_n$,
${\cal B}(t,u) = \exp{{\cal T}\int_u^tds \left[a_0(s)B_0^{\dag}(s) -a_0^{\dag}(s)B_0(s)\right]}$, $a_0(s):=e^{-i\omega_0(s-t_0)}a$ and $B_0(s):=\sum_n K_ne^{-i\omega_n(s-t_0)}b_n$. Then from the Baker-Hausdorff formula one gets
\begin{equation}\label{btransfo}
{\cal B}(t,u)a^{\dag}{\cal B}^{\dag}(t,u) = a^{\dag}G(t,u) + \int_{u}^tds ~e^{-i\omega_0(s-t_0)}G(s,u)B_0^{\dag}(s),
\end{equation}
with
\begin{eqnarray}\label{gsm}
&&G(t,u):= 1+ \sum_{k=1}^{\infty}(-1)^k\int_{u}^tds_1\int_{u}^{s_1}ds_2...\int_{u}^{s_{2k-1}}ds_{2k}e^{i\omega_0(s_1-s_2+...+s_{2k-1}-s_{2k})}[B_0(s_1),B_0^{\dag}(s_2)]...[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})]\nonumber\\
&&= 1+\sum_{k=1}^{\infty}(-1)^k\sum_{n_1,...,n_k}|K_{n_1}|^2...|K_{n_k}|^2\int_{u}^tds_1\int_{u}^{s_1}ds_2...\int_{u}^{s_{2k-1}}ds_{2k} e^{i(\omega_0-\omega_{n_1})(s_1-s_2)}...e^{i(\omega_0-\omega_{n_k})(s_{2k-1}-s_{2k})}.
\end{eqnarray}
One can show that in fact $G(t,u)$ is just a function of $t-u$, $G(t,u)=G(t-u)$. At the beginning of Appendix \ref{shorttime} we give an integro-differential equation satisfied by $G$ which can be solved by Laplace transform. But in the present problem we only need the short time expansion of $G$.\\
Inserting \eqref{btransfo} in \eqref{uintermed}, taking apart the probe terms from the bath terms and calculating the time ordered integral, one obtains, up to an irrelevant phase factor,
\begin{eqnarray}\label{evolopapp}
U(t,t_0,F) = &e^{-i\frac{t-t_0}{\hbar}H_0}&e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})} R(t,t_0){\cal B}(t,t_0),
\end{eqnarray}
where $D(t,t_0) = \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) $, $\phi^D_{t,t_0}=\arg{D(t,t_0)}$, $X(\phi^D_{t,t_0})=\frac{1}{\sqrt{2}}(a^{\dag}e^{i\phi^D_{t,t_0}} + a e^{-i\phi^D_{t,t_0}})$, and
\begin{equation}
R(t,t_0)=e^{iF\frac{\omega_0}{\sqrt{2}}{\cal T}\int_{t_0}^t du \int_u^t ds\zeta(u)
\left[e^{i\omega_0(u-s)}G(s,u)B_0^{\dag}(s) + c.c.\right]}.
\end{equation}
The time ordered integral can also be calculated in $R(t,t_0)$ and leads to the following expression:
\begin{equation}
R(t,t_0) = \Pi_n e^{i F |D_n(t,t_0)| X_n(\phi^{D_n}_{t,t_0})}
\end{equation}
with
\begin{equation}
D_n(t,t_0) = \omega_0 K_n^{*}\int_{t_0}^t du~\zeta(u)e^{i\omega_0(u-t_0)}\int_u^t ds~e^{i(\omega_n-\omega_0)(s-t_0)}G(s,u),
\end{equation}
$\phi^{D_n}_{t,t_0} = \arg{D_n(t,t_0)}$, and $X_n(\phi^{D_n}_{t,t_0}) = \left(b_n^{\dag}e^{i\phi^{D_n}_{t,t_0}} +b_n e^{-i\phi^{D_n}_{t,t_0}}\right)/\sqrt{2}$.\\
From \eqref{evolopapp} the exact expression in the Heisenberg picture of the probe quadrature $X(\theta)=\frac{1}{\sqrt{2}}(a^{\dag}e^{i\theta}+ae^{-i\theta})$ can be easily derived:
\begin{eqnarray}
X(\theta)[t,t_0,F] &=& U_{SB}^{\dag}(t,t_0,F)X(\theta)U_{SB}(t,t_0,F)\nonumber\\
&=& \frac{1}{\sqrt{2}} {\cal B}^{\dag}(t,t_0)\Bigg\{e^{i\theta}e^{i\omega_0(t-t_0)}\Big[a^{\dag}-i\frac{F}{\sqrt{2}}D^{*}(t,t_0)\Big] + e^{-i\theta}e^{-i\omega_0 (t-t_0)}\Big[a+i\frac{F}{\sqrt{2}}D(t,t_0)\Big]\Bigg\}{\cal B}(t,t_0)\nonumber\\
&=& \frac{1}{\sqrt{2}} \Bigg\{e^{i\theta}e^{i\omega_0(t-t_0)}\Big[a^{\dag} G^{*}(t,t_0) -\int_{t_0}^tds e^{-i\omega_0(s-t_0)}G^{*}(t,s)B_0^{\dag}(s)-i\frac{F}{\sqrt{2}}D^{*}(t,t_0)\Big] +c.c.\Bigg\},\label{xquad}
\end{eqnarray}
For the mean value of $X(\theta)[t,t_0,F]$, defined as $\langle X(\theta)[t,t_0,F]\rangle = {\rm Tr}_{SB}\{\rho_{SB}^0X(\theta)[t,t_0,F]\}$, we find
\begin{eqnarray}
\langle X(\theta)[t,t_0,F]\rangle &=& \frac{e^{i\theta}e^{i\omega_0(t-t_0)}}{\sqrt{2}}G^{*}(t,t_0) \langle a^{\dag}\rangle_0 +\frac{e^{-i\theta}e^{-i\omega_0(t-t_0)}}{\sqrt{2}} G(t,t_0) \langle a\rangle_0 +F|D(t,t_0)|\sin[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] \nonumber\\
&=& |G(t,t_0)| \big\langle X[\theta+\omega_0(t-t_0) -\phi_{t,t_0}^G]\big\rangle_0 +F|D(t,t_0)|\sin[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] ,
\end{eqnarray}
where the subscript 0 means that the expectation value is taken for the state $\rho_{SB}^0$.
We assume also that ${\rm Tr}_{SB}\{\rho_B^0 B_0(s)\} = {\rm Tr}_{SB}\{\rho_B^0 B_0^{\dag}(s)\} =0$.\\
In order to obtain a simple expression for the variance we make the assumption that the probe and the bath are initially uncorrelated so that $\rho_{SB}^0 = \rho_S^0\rho_B^0$. The variance is
\begin{eqnarray}
\langle \Delta^2 X(\theta)[t,t_0]\rangle &=& \Bigg\langle \left[\Delta\left(\frac{1}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} G^{*}(t,t_0)a^{\dag} + c.c.\right)\right]^2\Bigg\rangle_0 \nonumber\\
&+& \int_{t_0}^tds\int^t_{t_0}ds' G^{*}(t,s') G(t,s)\sum_n |K_n|^2 e^{i(\omega_0-\omega_n)(s-s')}\left(N_n + \frac{1}{2}\right)\nonumber\\
&=& |G(t,t_0)|^2\Big\langle \Delta^2 X\big[\theta +\omega_0(t-t_0)-\phi^G_{t,t_0}\big]\Big\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Bigg|^2\nonumber\\\label{genvariance}
\end{eqnarray}
with $N_n := \langle b_n^{\dag}b_n\rangle$ and $\phi^G_{t,t_0}:=\arg G(t,t_0)$. Since the second term in \eqref{genvariance} does not depend on initial conditions, the variance $\langle \Delta^2 X(\theta)[t,t_0]\rangle $ is maximal whenever $\Big\langle \Delta^2 X\big[\theta +\omega_0(t-t_0)-\phi^G_{t,t_0}\big]\Big\rangle_0 $ is maximal. This gives the relation between $\theta_m^t$ the angle of the maximal quadrature variance at time $t$, and $\theta_m^0$ the angle of the maximal quadrature variance at time $t_0$:
\begin{equation}
\theta_m^t = \theta_m^0 + \phi_{t,t_0}^G -\omega_0(t-t_0).
\end{equation}
The variance of $P(\phi^D_{t,t_0}-\omega_0(t-t_0))[t,t_0]$ is obtained doing $\theta=\phi^D_{t,t_0}-\omega_0(t-t_0) + \pi/2$ in \eqref{genvariance}:
\begin{equation}\label{variancepbest}
\big\langle \Delta^2 P(\phi^D_{t,t_0}-\omega_0(t-t_0))[t,t_0]\big\rangle = |G(t,t_0)|^2\big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\big\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Bigg|^2,\nonumber\\
\end{equation}
and is minimized when $\big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\big\rangle_0$ is minimized. According to \cite{pra}, the state that realizes this minimization, given an initial mean energy $E$, is a pure squeezed state with the squeezed quadrature being $P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]$. It can be written as $\hat{S}[\mu(t,t_0)]|0\rangle$ where $|0\rangle$ is the ground state of the harmonic oscillator, $\hat{S}[\mu(t,t_0)] = \exp{\left(\frac{\mu(t,t_0)}{2} a^{\dag 2} - \frac{\mu^{*}(t,t_0)}{2} a^{2}\right)}$ with $\mu(t,t_0)=re^{2i(\phi^D_{t,t_0}-\phi^G_{t,t_0})}$ and $r=\frac{1}{2}\ln{[2(E+\sqrt{E^2-1/4})]}$. The corresponding minimal variance is:
\begin{equation}
\langle 0|\hat{S}^{\dag}[\mu(t,t_0)] \left[\Delta P(\phi^D_{t,t_0}-\phi^G_{t,t_0})\right]^2\hat{S}[\mu(t,t_0)]|0\rangle =\frac{1}{4} \left(E+\sqrt{E^2-1/4}\right)^{-1} ,
\end{equation}
and
\begin{equation}
\langle 0|\hat{S}^{\dag}[\mu(t,t_0)] \left[\Delta X(\phi^D_{t,t_0}-\phi^G_{t,t_0})\right]^2\hat{S}[\mu(t,t_0)]|0\rangle = \left(E+\sqrt{E^2-1/4}\right) ,
\end{equation}
is maximal.\\
The determinant of the covariance matrix ${\rm Det}{\bf \Sigma}(t,t_0)$ can be written for any quadrature $X(\theta)[t,t_0]$ and $P(\theta)[t,t_0]$ as:
\begin{equation}
{\rm Det}{\bf \Sigma}(t,t_0) = \langle \{\Delta X(\theta)[t,t_0]\}^2\rangle_0 \langle\{\Delta P(\theta)[t,t_0]\}^2\rangle_0 - \frac{1}{4}\Big\langle \Delta X(\theta)[t,t_0]\Delta P(\theta)[t,t_0] +\Delta P(\theta)[t,t_0]\Delta X(\theta)[t,t_0]\Big\rangle_0^2.
\end{equation}
Using \eqref{xquad}, \eqref{genvariance} and conjugated expressions one finds
\begin{equation}
{\rm Det}{\bf \Sigma}(t,t_0) = |G(t,t_0)|^4{\rm Det}{\bf \Sigma}^0 + |G(t,t_0)|^2\langle \Delta a\Delta a^{\dag} +\Delta a^{\dag} \Delta a\rangle_0 n_B(t,t_0) + n_B^2(t,t_0),
\end{equation}
where ${\bf \Sigma}^0$ is the initial covariance matrix of $S$, $\Delta a := a- \langle a\rangle_0$, and the noise term from the bath is noted $n_B(t,t_0) : = \sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Big|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Big|^2$. As expected ${\rm Det}{\bf \Sigma}(t,t_0)$ does not depend on $\theta$ and is minimal when $S$ is initialized in a pure state. \\
\section{Narrow band limit, broad band limit and Markovian dynamics}\label{bnband}
The narrow band limit is obtained, for instance, by retaining only one mode of the bath and taking the coupling coefficients of the other modes to zero. One can also convert the discrete bath to a continuum of modes and take the mode distribution to a delta function. In any case we are left with one mode of frequency $\omega$ and coupling coefficient $K_0$. The function $G(t,t_0)$ simplifies to:
\begin{eqnarray}
&&G(t,t_0)= 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2...\int_{t_0}^{s_{2k-1}}ds_{2k} e^{i(\omega_0-\omega)(s_1-s_2+...+s_{2k-1}-s_{2k})},
\end{eqnarray}
and if we assume also that the bath mode is resonant with the probe,
\begin{eqnarray}
G(t,t_0)&=& 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2...\int_{t_0}^{s_{2k-1}}ds_{2k} 1\nonumber\\
&=& 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\frac{1}{(2k)!}(t-t_0)^{2k}\nonumber\\
&=&\cos{\left[|K_0|(t-t_0)\right]}.
\end{eqnarray}
We recover a sinusoidal behavior, meaning that the information is just going to the bath and coming back entirely to the probe periodically. \\
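The resummation above can be checked directly; the following lines (with arbitrary values of $|K_0|$ and $t-t_0$) compare the partial sums of the series with $\cos[|K_0|(t-t_0)]$.
\begin{verbatim}
import math

K0, dt = 0.8, 3.0
x = K0 * dt
series = sum((-1)**k * x**(2 * k) / math.factorial(2 * k) for k in range(30))
print(series, math.cos(x))   # identical up to machine precision
\end{verbatim}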
We now take the broad band limit. As we will see, one needs more assumptions to recover the Markovian dynamics described in \cite{pra}. \\
Firstly the bath modes are taken to form a continuum,
\begin{equation}
\sum_n |K_n|^2 \rightarrow \int_0^{\omega_c} d\omega g(\omega)|K(\omega)|^2 ,
\end{equation}
where $g(\omega)$ is the bath mode distribution and $\omega_c$ is a cut-off (for instance one can consider that the experiment takes place in a limited volume). We now assume that $g(\omega)$ and $K(\omega)$ are mode independent. We have then
\begin{eqnarray}
[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})] &=& \sum_n |K_n|^2 e^{-i\omega_n(s_{2k-1}-s_{2k})}\nonumber\\
&=& \int_0^{\omega_c}d\omega g(\omega)|K(\omega)|^2e^{-i\omega(s_{2k-1}-s_{2k})}\nonumber\\
&=& g|K|^2 \int_0^{\omega_c}d\omega e^{-i\omega(s_{2k-1}-s_{2k})}\nonumber\\
&=&g|K|^2 \left[\pi \delta(s_{2k-1}-s_{2k}) -i {\cal P}\frac{1}{s_{2k-1}-s_{2k}}\right],\label{pb}
\end{eqnarray}
with the third line corresponding to mode-independent coupling coefficients and spectral density, and the fourth line corresponding to the limit of $\omega_c$ going to infinity. The Markovian dynamics corresponds to the real part of \eqref{pb}. The imaginary part would generate non-convergent terms when integrating $s_{2k}$ from $u$ to $s_{2k-1}$. So one should keep the cut-off $\omega_c$ finite and face the subsequent integrations. To recover the Markovian dynamics one needs one more assumption in order to be able to discard the imaginary part. This assumption is equivalent to the rotating wave approximation. Instead of integrating the modes from 0 to $\omega_c$ we integrate from $-\omega_c$ to $\omega_c$ with $g(-\omega)=g(\omega)$ and $|K(-\omega)|=|K(\omega)|$. Then the imaginary part of \eqref{pb} cancels out and we end up with $[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})] =g|K|^2 \pi \delta(s_{2k-1}-s_{2k})$. The integration from $-\omega_c$ to $\omega_c$ can be justified through the rotating wave approximation: the negative frequencies are far from resonance and thus contribute very little as soon as $t-t_0 \gg (\omega_0 -\omega)^{-1}$ (with $\omega \in [-\omega_c;0]$) \cite{gardiner}. From this we conclude that we can legitimately discard the imaginary part of \eqref{pb} as soon as we are interested in times bigger than $\omega_0^{-1}$. \\
We now show that by substituting only the real part of \eqref{pb} into the expression of $G(t,t_0)$ we recover the Markovian behavior \cite{pra}:
\begin{eqnarray}
G(t,t_0) &=& 1 +\sum_{k=1}^{\infty}(-1)^k \frac{\pi^k}{2^k}g^k|K|^{2k}\int_{t_0}^t ds_1\int_{t_0}^{s_1}ds_3...\int_{t_0}^{s_{2k-3}}ds_{2k-1}\nonumber\\
&=& 1+\sum_{k=1}^{\infty}\frac{(-\pi g|K|^2/2)^k}{k!}(t-t_0)^k\nonumber\\
&=& e^{-\gamma(t-t_0)/2},
\end{eqnarray}
where $\gamma = \pi g |K|^2$. Applying this result to the variance of $P(\theta)$ one finds
\begin{eqnarray}
\langle \Delta^2 P(\theta)[t,t_0]\rangle &=& e^{-\gamma(t-t_0)}\Bigg\langle \left[\Delta\left(\frac{i}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} a^{\dag} - c.c.\right)\right]^2\Bigg\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds e^{-\gamma(t-s)/2} e^{i(\omega_0-\omega_n)s}\Bigg|^2\nonumber\\
&=& e^{-\gamma(t-t_0)}\Bigg\langle \left[\Delta\left(\frac{i}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} a^{\dag} - c.c.\right)\right]^2\Bigg\rangle_0 \nonumber\\
&&+\int_0^{\infty}d\omega g(\omega)|K(\omega)|^2\left(N(\omega) + \frac{1}{2}\right) \int_{t_0}^tds \int_{t_0}^t ds'e^{-\gamma(2t-s-s')/2} e^{i(\omega_0-\omega)(s-s')}\nonumber\\
&=& e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P[\theta +\omega_0(t-t_0)]\right\}^2\big\rangle_0 +g|K|^2\left(N + \frac{1}{2}\right) \int_{t_0}^tds \int_{t_0}^t ds'e^{-\gamma(2t-s-s')/2} \pi\delta(s-s')\nonumber\\
&=& e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P[\theta +\omega_0(t-t_0)]\right\}^2\big\rangle_0 +\left(N + \frac{1}{2}\right) \left(1-e^{-\gamma(t-t_0)}\right),
\end{eqnarray}
where we also assume that the mean excitation number $N(\omega)$ is mode-independent $N(\omega)=N$. \\
The Fisher information from the best quadrature measurement becomes
\begin{equation}
{\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) = \frac{\omega_0^2\big|\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}e^{-\gamma(t-u)/2}\big|^2}{\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\rangle}=\frac{\omega_0^2\big|\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}e^{-\gamma(t-u)/2}\big|^2}{e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P(\phi^D_{t,t_0})\right\}^2\big\rangle_0 +\left(N + \frac{1}{2}\right) \left(1-e^{-\gamma(t-t_0)}\right)},\label{markovqfi}
\end{equation}
which is precisely the expression found in \cite{pra} for a Markovian dynamics.\\
In conclusion, the Markovian approximation is more than just the broad band limit: it also involves the mode independence of the coupling coefficients and of the spectral density, as well as the rotating wave approximation. This approximation is valid as soon as we are interested in times $t-t_0$ much bigger than $\omega_0^{-1}$. This contributes to the fact that, in general, the Markovian approximation fails to describe the real dynamics at short times.
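As a rough numerical illustration of the boundedness discussed in the main text, the Python sketch below evaluates Eq. \eqref{markovqfi} for $\zeta=1$ (so that $D$ has a closed form), takes the initial variance to be that of the optimally squeezed state, $1/[4(E+\sqrt{E^2-1/4})]$, multiplies by the number $T/\tau$ of repetitions, and optimizes over $\tau$; all parameter values are arbitrary assumptions, and the optimized sequential Fisher information is seen to saturate as the initial energy $E$ grows.
\begin{verbatim}
import numpy as np

omega0, gamma, N, T = 1.0, 0.1, 0.5, 200.0   # arbitrary parameters
taus = np.linspace(0.01, 50.0, 5000)         # candidate sensing intervals
# |D(tau)| for zeta = 1, computed in closed form:
D = (omega0 * np.exp(-gamma * taus / 2)
     * (np.exp((1j * omega0 + gamma / 2) * taus) - 1.0)
     / (1j * omega0 + gamma / 2))

def best_sequential_fisher(E):
    calE = E + np.sqrt(E**2 - 0.25)
    var = (np.exp(-gamma * taus) / (4.0 * calE)
           + (N + 0.5) * (1.0 - np.exp(-gamma * taus)))
    return np.max((T / taus) * np.abs(D)**2 / var)

for E in [1.0, 1e2, 1e4, 1e6]:
    # saturates near T*omega0**2/((N+0.5)*gamma) for large E
    print(E, best_sequential_fisher(E))
\end{verbatim}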
\section{Quadrature measurement}\label{quadmeas}
We consider the measurement of the quadrature $P(\theta) = \frac{i}{\sqrt{2}}(e^{i\theta}a^{\dag} - e^{-i\theta}a)$. The output is $p$ and the probe is projected on the eigenstate $|p,\theta\rangle$ such that $P(\theta) |p,\theta\rangle =p |p,\theta\rangle$. One POVM corresponding to such an ideal projective measurement is $\{|p,\theta\rangle\langle p,\theta|\}_{p \in \mathbb{R}}$. The output conditional distribution is ${\cal P}(p,\theta|F) = {\rm Tr}_S\{\rho_S(t,t_0,F)|p,\theta\rangle \langle p,\theta|\}$, where $\rho_S(t,t_0,F)={\rm Tr}_B\{U(t,t_0,F)\rho_{SB}^0U^{\dag}(t,t_0,F)\}$ is the state of the probe at the instant $t$, after interacting with the force and the bath since the instant $t_0$, when the value of the force amplitude is $F$. The eigenstates of $P(\theta)$ cannot be normalized, so the direct calculation of ${\cal P}(p,\theta|F)$ is not easy. But the global Hamiltonian is quadratic in the probe and bath operators, so that the global evolution is a Gaussian operation. Hence, if the initial state $\rho_{SB}^0$ is a Gaussian state, the final state is Gaussian too. The reduced state of $S$ is also Gaussian (partial tracing is a Gaussian operation), and it is characterized only by the average $\langle P(\theta)[t,t_0,F]\rangle := {\rm Tr}_S\{\rho_S(t,t_0,F)P(\theta)\}$ and the variance $\langle \Delta^2 P(\theta)[t,t_0]\rangle := {\rm Tr}_S\{\rho_S(t,t_0,F)[P(\theta)-\langle P(\theta)[t,t_0,F]\rangle]^2\}$:
\begin{eqnarray}
{\cal P}(p,\theta|F) &=& \frac{1}{2\pi \big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\exp{\Big\{-\frac{1}{2\big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\big\{p - \big\langle P(\theta)[t,t_0,F]\big\rangle\big\}^2\Big\}}\nonumber\\
&=& \frac{1}{2\pi \big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\nonumber\\
&&\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\! \times\exp{\Bigg\{-\frac{\Big\{p -\frac{ie^{i\theta}}{\sqrt{2}} e^{i\omega_0(t-t_0)}G^{*}(t,t_0) \langle a^{\dag}\rangle_0 +\frac{ie^{-i\theta}}{\sqrt{2}}e^{-i\omega_0(t-t_0)}G(t,t_0) \langle a\rangle_0 -F|D(t,t_0)|\cos[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] \Big\}^2}{2\big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\Bigg\}}.\nonumber\\
\end{eqnarray}
The derivation of the expression of $\langle P(\theta)[t,t_0,F]\rangle$ is shown in Appendix \ref{evolution}.\\
When the likelihood, seen as a function of the parameter to be estimated, is Gaussian, the Fisher information is equal to the inverse of its variance. We are presently in such a situation: seen as a distribution of the parameter $F$, ${\cal P}(p,\theta|F)$ is Gaussian with variance $\langle \Delta^2 P(\theta)[t,t_0]\rangle\left\{|D(t,t_0)|^2\cos^2\left[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}\right]\right\}^{-1}$.\\
So the Fisher information corresponding to the measurement of $P(\theta)$ is
\begin{equation}
{\cal F}_{P(\theta)}(t,t_0) = \frac{|D(t,t_0)|^2\cos^2[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}]}{\langle \Delta^2 P(\theta)[t,t_0]\rangle}.
\end{equation}
One can easily see that the best quadrature measurement is for the angle $\theta = \phi^D_{t,t_0} -\omega_0(t-t_0)$ such that $\cos^2[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] =1$ so that the Fisher information from the best quadrature measurement is
\begin{eqnarray}\label{fibestmeasurement}
{\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) =\frac{|D(t,t_0)|^2}{\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle}
\end{eqnarray}
and it is exactly the expression of the quantum Fisher information in Eq. (6) of the main text.
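As a quick numerical illustration of the Gaussian argument above (with purely hypothetical toy values for the mean, the sensitivity and the variance, not tied to the physical model), the following Python snippet checks that, for a Gaussian outcome distribution whose mean depends linearly on $F$, the Fisher information computed from its definition coincides with the squared sensitivity of the mean divided by the variance:
\begin{verbatim}
import numpy as np

# Hypothetical toy values: mean mu(F) = mu0 + F*dmu_dF, fixed variance var.
mu0, dmu_dF, var = 0.3, 1.7 * np.cos(0.4), 0.25

p = np.linspace(-10, 10, 200001)
def log_density(F):
    mu = mu0 + F * dmu_dF
    return -(p - mu) ** 2 / (2 * var) - 0.5 * np.log(2 * np.pi * var)

F0, dF = 0.0, 1e-4
score = (log_density(F0 + dF) - log_density(F0 - dF)) / (2 * dF)
fisher_numeric = np.trapz(np.exp(log_density(F0)) * score ** 2, p)
print(fisher_numeric, dmu_dF ** 2 / var)   # the two values coincide
\end{verbatim}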
\section{Short time behavior}\label{shorttime}
\subsection{Defining the short time regime}\label{considtimescale}
One can show that the function $G(t,t_0)$ satisfies the following integro-differential equation:
\begin{equation}\label{eqdiff}
\dot{G}(t,t_0):=\frac{d}{dt}G(t,t_0) = -\int_{t_0}^tds\sum_n|K_n|^2e^{i(\omega_0-\omega_n)(t-s)}G(s,t_0).
\end{equation}
One could solve this equation by Laplace transform but, here, we only need the short time behaviour.\\
From this relation one can derive the successive derivatives evaluated in $t=t_0$ for any integer $p\geq2$,
\begin{equation}
\frac{d^p}{dt^p}G(t,t_0)_{|_{t=t_0}} =- \sum_n|K_n|^2\sum_{l=0}^{p-2}i^{p-2-l}(\omega_0-\omega_n)^{p-2-l}\frac{d^{l}}{dt^{l}}G(0),
\end{equation}
with the notation $\frac{d^{l}}{dt^{l}}G(0):=\frac{d^l}{dt^l}G(t,t_0)_{|_{t=t_0}}$, making explicit the fact that $G(t,t_0)$ is a simple function of $t-t_0$ as mentioned in Appendix \ref{evolution}. For $p=0$ and $p=1$ we have $G(0)=1$ and $\frac{d}{dt}G(0)=0$. Consequently, the successive derivatives of $G(t,t_0)$ are sums and products of terms $\sum_n|K_n|^2i^l(\omega_0-\omega_n)^l$, with the powers of the $|K_n|$-factors and $(\omega_0-\omega_n)$-factors summing up to the order of the derivative. One can conclude that the evolution time scales of $G(t,t_0)$ are of the order $\Omega_p^{-1}$, with $\Omega_p$ defined for all $p\geq 2$ as
\begin{equation}
\Omega_p:=\left|\sum_n|K_n|^2(\omega_0-\omega_n)^{p-2}\right|^{1/p}.
\end{equation}
This gives us a condition for the validity of the expansion of $G(t,t_0)$ in $t=t_0$, so that when $t-t_0$ is much smaller than $\Omega_p^{-1}$, $\forall p\geq 2$, one can retain the first terms of this expansion:
\begin{equation}\label{gexp}
G(t,t_0)= 1 -\frac{{\cal K}^2}{2}(t-t_0)^2+i\frac{(t-t_0)^3}{6}\sum_n|K_n|^2(\omega_n-\omega_0) + {\cal O}[\Omega_4(t-t_0)^4],
\end{equation}
where ${\cal K}^2:=\sum_n|K_n|^2$.\\
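As a sanity check of this expansion (with arbitrary toy bath parameters, not meant to model a realistic spectrum), one can integrate the integro-differential equation \eqref{eqdiff} numerically and compare with the first terms above; a minimal Python sketch is:
\begin{verbatim}
import numpy as np

# Toy bath: couplings K_n and frequencies omega_n are arbitrary choices.
omega0 = 1.0
K_n = np.array([0.3, 0.2, 0.4])
omega_n = np.array([0.8, 1.1, 1.5])
kernel = lambda u: np.sum(np.abs(K_n)**2 * np.exp(1j*(omega0 - omega_n)*u))

dt, steps = 1e-3, 200                 # integrate up to tau = 0.2
G = np.ones(steps + 1, dtype=complex)
for m in range(1, steps + 1):
    s = np.arange(m) * dt             # grid for the memory integral at t_{m-1}
    integrand = np.array([kernel((m - 1)*dt - si) for si in s]) * G[:m]
    G[m] = G[m - 1] - dt * np.trapz(integrand, dx=dt)   # explicit Euler step

tau = steps * dt
K2 = np.sum(np.abs(K_n)**2)
G_short = 1 - K2*tau**2/2 + 1j*tau**3/6 * np.sum(np.abs(K_n)**2*(omega_n - omega0))
print(G[-1], G_short)                 # the two agree closely for these parameters
\end{verbatim}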
Note that a slow time scale $\gamma^{-1}$ can also emerge at long times (Appendix \ref{longtime}) or within the Markovian approximation (Appendix \ref{bnband}).\\
One can expand $D(t,t_0)$ under the same conditions, with the additional requirements that $t-t_0$ be much smaller than $\omega_0^{-1}$ and than the evolution time scale of $\zeta(t)$:
\begin{equation}
D(t,t_0)=\omega_0 (t-t_0)\zeta(t_0) + \omega_0\frac{(t-t_0)^2}{2}[i\omega_0\zeta(t_0)+\dot{\zeta}(t_0)] + {\cal O}[\omega_0{\cal K}^2(t-t_0)^3].
\end{equation}
Finally, in order to expand \eqref{qfipvar}, we have to consider also the expansion of $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle$. From Eq. \eqref{variancepbest} we already have an expression available, but we will use the following form in order to simplify the considerations on time scales:
\begin{equation}
\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle = |G(t,t_0)|^2\Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0 + \int_{t_0}^tds\int_{t_0}^tds' G(t,s)G^{*}(t,s')C^0(s-s'),\label{newvar}
\end{equation}
where
\begin{equation}
C^0(s-s'):=e^{i\omega_0(s-s')}\frac{1}{2}{\rm Tr}_B\{\rho_B^0[B_0(s)B_0^{\dag}(s') + B_0^{\dag}(s')B_0(s)]\}=\sum_n|K_n|^2(N_n+1/2)e^{i(\omega_0-\omega_n)(s-s')}.
\end{equation}
To retain only the first terms of the short time expansion of $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle $, one has to consider times $t-t_0$ much smaller than the $\Omega_p^{-1}$, but also much smaller than the evolution time scale of $C^0(s'-s)$. Note that, although the phase of the quadrature carries an explicit time dependence, the actual value of $\Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0$ depends only on the initial probe state; we do not, however, need this argument. From the short time expansions of $D(t,t_0)$ and $G(t,t_0)$ one can see that the bath contribution appears only at third order in $(t-t_0)$ for $\phi^G_{t,t_0} = \arg{G(t,t_0)}$ and at second order in $(t-t_0)$ for $\phi^D_{t,t_0}=\arg{D(t,t_0)}$. The variance can then be expanded in the following way:
\begin{equation}
\Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0 = \Big\langle \Delta^2 P\big[\phi^{D_0}_{ t,t_0}\big]\Big\rangle_0 + {\cal O}({\cal K}^2(t-t_0)^2),
\end{equation}
where
\begin{equation}
D_0(t,t_0) := \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)} ,
\end{equation}
is the coefficient $D(t,t_0)$ in the noiseless situation ($K_n \rightarrow 0$ $\forall n$). \\
Regarding the second term in Eq. \eqref{newvar}, one can easily re-write the Taylor expansion of $C^0(s'-s)$ around $0$ in the following form
\begin{equation}
C^0(s-s') = \sum_n |K_n|^2(N_n+1/2)\left[1+ \sum_{p=1}^{\infty} \frac{[i\chi_p(s-s')]^p}{p!}\right],
\end{equation}
where the frequencies defining the evolution time scales $|\chi_p|^{-1}$ are given by $\chi_p := \left[\sum_n\frac{|K_n|^2(N_n+1/2)}{\cal N}(\omega_0-\omega_n)^p\right]^{1/p}$ and ${\cal N}:=\sum_n|K_n|^2(N_n+1/2)$. Note that if the $\Omega_p$ converge then so do the $\chi_p$, since the prefactor $\frac{|K_n|^2(N_n+1/2)}{\cal N}$ within the sum is smaller than $|K_n|^2$. \\
Recapping the above considerations on time scales, as long as $(t-t_0)$ is much smaller than $\omega_0^{-1}$, $|\chi_q|^{-1}$, $q\geq1$, and $\Omega_p^{-1}$, $p\geq2$, we can write
\begin{equation}
\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle = \Big\langle \Delta^2 P\big[\phi^{D_0}_{ t,t_0}\big]\Big\rangle_0 + {\cal O}[({\cal K}^2+{\cal N}^2)(t-t_0)^2],
\end{equation}
and one can also expand $G(t,t_0)$, $D(t,t_0)$, $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle$ and the QFI (Eq. \eqref{qfipvar}), and finally get
\begin{eqnarray}
{\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) = \frac{\omega_0^2}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^{D_0}\big]\big\rangle_0 }\{\zeta^2(t_0)(t-t_0)^2 +\zeta(t_0)\dot{\zeta}(t_0)(t-t_0)^3 + {\cal O}[\omega_0^2{\cal K}^2(t-t_0)^4]\}.\label{stexp}
\end{eqnarray}
\subsection{Non-Markovian effects at short times}\label{nmvsm}
In the above subsection we derived the conditions of validity of the short time expansion \eqref{stexp} of the QFI. We show in Appendix \ref{bathcorrelation} that these time scales correspond also to the evolution time scales of the bath correlation function. If those conditions on $t-t_0$ cannot be fulfilled, meaning that the bath correlation time is not accessible and that no measurement can be performed below the bath correlation time, the higher orders of the expansion \eqref{stexp} cannot be neglected and the first terms are no longer significant. Then the correct expansion is no longer centered at $t_0$ but at $t_0+t_c$, where $t_c$ represents the bath correlation time. In such an expansion, the first derivative of $G$ is taken at $t_0+t_c$ and no longer vanishes, yielding a first-order bath-dependent term in the short time expansion of $G(t,t_0)$ and of the denominator of \eqref{qfipvar}, as for Markovian dynamics (Appendix \ref{bnband}). As a comparison, Markovian dynamics can be sketched as the impossibility of accessing the correlation time of the bath, which amounts to taking $t_c$ to zero. We show at the end of Appendix \ref{seqmeasurement} that the presence of a first-order term in the denominator of \eqref{qfipvar} changes dramatically the behaviour of the QFI in the sequential preparation-and-measurement scenario: the QFI becomes bounded by a constant, irrespective of the energy invested in the squeezing of the probe initial state.\\
As a matter of comparison, we give here the short time expansion of \eqref{qfipvar} for Markovian dynamics (obtained from Eq. \eqref{markovqfi}), valid for $t-t_0$ much smaller than $\omega_0^{-1}$, $\gamma^{-1}$ (defined in Appendix \ref{bnband}), and the evolution time scale of $\zeta(t)$:
\begin{eqnarray}
{\cal F}^M_Q(t,t_0) = &\frac{\omega_0^2(t-t_0)^2}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^D\big]\big\rangle_0 }&\left\{\zeta^2(t_0)+\left[\zeta(t_0)\dot{\zeta}(t_0)+\frac{\gamma}{2}\zeta^2(t_0)-\frac{\gamma\left(n_T+\frac{1}{2}\right)}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^D\big]\big\rangle_0} \zeta^2(t_0)\right](t-t_0)\right\}\nonumber\\
&+& {\cal O}[\omega_0^2(t-t_0)^2].
\end{eqnarray}
The bath contribution already appears at the 3rd order and always reduces the amount of information, since $\gamma/2-\gamma\left(n_T+\frac{1}{2}\right) \Big\langle \Big\{\Delta P\big[\phi_{t,t_0}^D\big]\Big\}^2\Big\rangle_0^{-1}$ is always strictly negative (even smaller than $-\gamma/2$). This happens because the only contribution to the first derivative of $G(t,t_0)$ at $t=t_0$ comes from $-\sum_n|K_n|^2\int_{t_0}^t ds e^{i(\omega_0-\omega_n)(t-s)}$ (see Eq. \eqref{eqdiff}). If one takes the broad band limit together with the rotating wave approximation one ends up with $-\gamma/2$ (see Appendix \ref{bnband}). This implies that the short time behavior of the QFI for Markovian dynamics is qualitatively different, as we saw above.\\
Hence the appearance of the bath contribution only at the 4th order is a particularity of the expansion \eqref{stexp} and comes from measurements at time scales below the correlation time of the bath, justifying its classification as a non-Markovian effect.
\section{Long time behavior}\label{longtime}
We are interested in the long time behavior of ${\cal G}_1(t,t_0)$, the first term of the sum in the expression \eqref{gsm} of $G(t,t_0)$:
\begin{eqnarray}
{\cal G}_1(t,t_0) &:=& - \int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2e^{i\omega_0(s_1-s_2)}\left[B_0(s_1),B_0^{\dag}(s_2)\right]\label{g1}\\
&=& -\sum_n |K_n|^2 \left[-\frac{i(t-t_0)}{\omega_n-\omega_0} +\frac{1-e^{-i(\omega_n -\omega_0)(t-t_0)}}{(\omega_n-\omega_0)^2}\right]\nonumber\\
&=& - \sum_n |K_n|^2 \left[\frac{1-\cos{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2} -i\frac{t-t_0}{\omega_n-\omega_0}+i\frac{\sin{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2}\right].
\end{eqnarray}
One can show that when $t-t_0$ goes to infinity the real part of the integrand tends to
\begin{equation}
\frac{1-\cos{\{(\omega -\omega_0)(t-t_0)\}}}{(\omega-\omega_0)^2} \rightarrow \frac{\pi}{2} (t-t_0)\delta(\omega-\omega_0).
\end{equation}
The long time behavior of the real part of ${\cal G}_1(t,t_0)$ reproduces the Markovian behavior since we recover a $\delta$-function. Substituting in the expression of ${\cal G}_1(t,t_0)$ we find
\begin{eqnarray}
\Re{{\cal G}_1(t,t_0)} &=& -\sum_n |K_n|^2 \frac{1-\cos{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2}\nonumber\\
&=& -\int_0^{\infty}d\omega g(\omega)|K(\omega)|^2 \frac{1-\cos{\{(\omega -\omega_0)(t-t_0)\}}}{(\omega-\omega_0)^2}\nonumber\\
&&\rightarrow -\frac{\pi}{2}(t-t_0)g(\omega_0)|K(\omega_0)|^2.
\end{eqnarray}
In the second line we replace the discrete distribution of bath modes by a continuous one in order to perform the integration.
Note that the Markovian limit gives a similar result, ${\cal G}_1(t,t_0)\rightarrow -\gamma(t-t_0)/2=-\pi(t-t_0)g|K|^2/2$. So, for the real part of ${\cal G}_1(t,t_0)$, the long time limit is similar to the Markov approximation. The imaginary part, however, is not so simple: the same treatment as for the real part leads to an indeterminate form. Writing the sine of the imaginary part as a series expansion one obtains the following expression:
\begin{eqnarray}
\Im{{\cal G}_1(t,t_0)} =\sum_{p=0}^{\infty}(-1)^{p+1}\frac{(t-t_0)^{2p+3}}{(2p+3)!}\big\langle(\omega_0-\omega)^{2p+1}\big\rangle,
\end{eqnarray}
where $\big\langle(\omega_0-\omega)^{2p+1}\big\rangle = \int_0^{+\infty} d\omega g(\omega)|K(\omega)|^2(\omega_0-\omega)^{2p+1}$. The sum is expected to converge since the imaginary part $\Im{{\cal G}_1(t,t_0)}$ is finite (as can be seen from expression \eqref{g1}). Note that if $g(\omega)|K(\omega)|^2$ is symmetric with respect to $\omega_0$ the imaginary part $\Im{{\cal G}_1(t,t_0)}$ cancels out.
\section{Sequential preparation-and-measurement scenario}\label{seqmeasurement}
We analyze the variance of the quadrature $P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)$ after the probe has interacted with the force and the bath from $t_k:=t_0+k\tau$ to $t_{k+1}:=t_0+(k+1)\tau$. We use the expression \eqref{newvar} of the variance introduced in Appendix \ref{considtimescale}:
\begin{eqnarray}
\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&|G(t_{k+1},t_k)|^2\Big\langle \Delta^2 P\big[\phi^D_{t_{k+1},t_k}-\phi^G_{t_{k+1},t_k}\big]\Big\rangle_0 \nonumber\\
& +& \int_{t_k}^{t_{k+1}}ds\int_{t_k}^{t_{k+1}}ds' G(t_{k+1},s)G^{*}(t_{k+1},s')C^0(s-s').\label{seqvar}
\end{eqnarray}
One can show that $G(t,u) =G(t-u)$, yielding $G(t_{k+1},t_k) = G(\tau)$, and that the double integral depends only on $t_{k+1}-t_k$, which allows us to re-write \eqref{seqvar} as
\begin{eqnarray}
\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&|G(\tau)|^2\Big\langle \Delta^2 P\big[\phi^D_{t_{k+1},t_k}-\phi^G_{\tau}\big]\Big\rangle_0 +\int_0^{\tau}ds\int_0^{\tau}ds' G(\tau-s)G^{*}(\tau-s')C^0(s-s'),\nonumber\\
\end{eqnarray}
where $\phi^G_\tau :=\arg{G(\tau)}$. As discussed in Section V of the main text we make the assumption that the probe is prepared in the best state $\hat S[\mu(t_{k+1},t_k)]|0\rangle$ after each measurement. Thanks to this assumption, which corresponds to the best strategy, the expression of the variance simplifies to
\begin{eqnarray}\label{varpappendix}
\langle \Delta^2 P(\phi_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&\frac{1}{4} |G(\tau)|^2\left(E+\sqrt{E^2-1/4}\right)^{-1} +\int_0^{\tau}ds\int_0^{\tau}ds' G(\tau-s)G^{*}(\tau-s')C^0(s-s'),\nonumber\\
\end{eqnarray}
since $\langle 0|\hat S^{\dag}[\mu(t_{k+1},t_k)] \left[\Delta P(\phi^D_{t_{k+1},t_k}-\phi^G_\tau)\right]^2\hat S[\mu(t_{k+1},t_k)]|0\rangle =\frac{1}{4} \left(E+\sqrt{E^2-1/4}\right)^{-1} $.
The expression \eqref{varpappendix} depends only on $\tau$ and no longer on $k$, so that the denominator in Eq. \eqref{fseq} can be taken out of the sum.\\
Assuming now that $\tau$ is much smaller than all the time scales involved in $D(t_k+\tau,t_k)$, that is, much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$, $p\geq2$ (see Appendix \ref{considtimescale}), and the evolution time scale of $\zeta(t)$, we can expand $D(t_k+\tau,t_k)$ to order 2:
\begin{equation}
D(t_k+\tau, t_k) = \omega_0\tau\zeta(t_k)+ \omega_0\frac{\tau^2}{2}\left[\dot{\zeta}(t_k) + i\omega_0\zeta(t_k) \right]+ {\cal O}(\tau^3),
\end{equation}
where the dot denotes the time derivative. For $|D(t_k+\tau,t_k)|^2$ we have:
\begin{equation}\label{exp}
|D(t_k+\tau,t_k)|^2 = \omega_0^2 \tau^2\left[\zeta^2(t_k)\left(1+\omega_0^2\frac{\tau^2}{4}\right) +\tau\zeta(t_k)\dot{\zeta}(t_k)+ \frac{\tau^2}{4}\dot{\zeta}^2(t_k) \right]+ {\cal O}(\tau^5).
\end{equation}
The Euler--Maclaurin formula relates the sum $\sum_{k=0}^{\nu-1}|D(t_k+\tau,t_k)|^2$ to the integral $\int_{t_0}^{t_0+T}dt A(t)$, where $A(t)$ denotes the expansion \eqref{exp} of $|D(t+\tau,t)|^2$:
\begin{eqnarray}\label{exp1}
\sum_{k=0}^{\nu-1}|D(t_k+\tau,t_k)|^2 = \omega_0^2\tau \int_{t_0}^{t_0+T}dt \zeta^2(t) &+& \omega_0^2\tau^3 \left\{\frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] -\frac{1}{3}[\zeta(t_0+T)\dot{\zeta}(t_0+T) -\zeta(t_0)\dot{\zeta}(t_0)]\right\} \nonumber\\
&+&{\cal O}(\tau^4).
\end{eqnarray}
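As an elementary numerical check of the leading term of Eq. \eqref{exp1} (with an arbitrary toy modulation $\zeta(t)$ and toy values of $\omega_0$, $T$ and $\tau$, not tied to any experiment), the following Python snippet compares $\sum_k|D(t_k+\tau,t_k)|^2$, computed in the noiseless case by direct quadrature, with $\omega_0^2\tau\int_{t_0}^{t_0+T}dt\,\zeta^2(t)$:
\begin{verbatim}
import numpy as np

omega0, t0, T, tau = 1.0, 0.0, 10.0, 1e-3      # toy values
zeta = lambda t: np.sin(np.pi * t / T) ** 2    # smooth force modulation

u = np.linspace(0.0, tau, 21)                  # fine grid inside each interval
total = 0.0
for k in range(int(T / tau)):
    t_k = t0 + k * tau
    D_k = omega0 * np.trapz(zeta(t_k + u) * np.exp(1j * omega0 * u), u)
    total += abs(D_k) ** 2

t = np.linspace(t0, t0 + T, 10001)
leading = omega0 ** 2 * tau * np.trapz(zeta(t) ** 2, t)
print(total, leading)                          # agreement at leading order in tau
\end{verbatim}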
We also expand \eqref{varpappendix} to the third order in $\tau$:
\begin{equation}\label{exp2}
\langle \Delta^2 P(\phi_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle = \frac{1}{4}\left(1- \tau^2{\cal K}^2 \right)\left(E+\sqrt{E^2-\frac{1}{4}}\right)^{-1} + \tau^2{\cal N} + {\cal O}(\tau^4),
\end{equation}
recalling that this is valid if $\tau$ is much smaller than $\Omega_p^{-1}$, $p\geq2$, and $|\chi_q|^{-1}$, $q\geq 1$, where ${\cal K}^2:=\sum_n|K_n|^2$ and ${\cal N}:=\sum_n|K_n|^2(N_n+1/2)$ (see Appendix \ref{considtimescale}).
Substituting the expressions \eqref{exp1} and \eqref{exp2} in the total quantum Fisher information we have
\begin{eqnarray}
{\cal F}_Q^{Seq}(T,\tau) &=& \sum_{k=0}^{\nu-1} {\cal F}_Q(t_{k+1},t_k)\nonumber\\
&=& \sum_{k=0}^{\nu -1}\frac{|D(t_{k+1},t_k)|^2}{\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle}\nonumber\\
& =&\omega_0^2\frac{\xi(T,t_0) \tau + {\cal C}(T,t_0)\tau^3 +{\cal O}(\tau^4)}{\frac{{\cal E}^{-1}}{4} + \tau^2\left({\cal N}-\frac{{\cal E}^{-1}}{4}{\cal K}^2\right) + {\cal O}(\tau^4)},\label{fseqexp}
\end{eqnarray}
where ${\cal E}:=\left(E+\sqrt{E^2-\frac{1}{4}}\right)$, $\xi(T,t_0):=\int_{t_0}^{t_0+T}dt\zeta^2(t)$, and ${\cal C}(T,t_0):= \frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] -\frac{1}{3}[\zeta(t_0+T)\dot{\zeta}(t_0+T) -\zeta(t_0)\dot{\zeta}(t_0)]$, which simplifies to ${\cal C}(T,t_0)= \frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)]$ if the force begins and ends at $t_0$ and $t_f$, respectively. \\
From the expansion \eqref{fseqexp} one can easily find the optimal time interval $\tau_{\mathrm{opt}}$:
\begin{eqnarray}
\tau_{\mathrm{opt}}&=& \frac{1}{2\sqrt{3{\cal N}}}{\cal E}^{-1/2} + \frac{{\cal C}(T,t_0)+\xi{\cal K}^2}{16\sqrt{3}{\cal N}^{3/2}\xi(T,t_0)}{\cal E}^{-3/2} + {\cal O}({\cal E}^{-5/2}).\nonumber
\end{eqnarray}
This result confirms the announced correlation between $\tau_{\mathrm{opt}}$ going to zero and $E$ going to infinity. Interestingly, the leading term depends neither on the total available sensing window $[t_0,t_0+T]$ nor on the force time modulation $\zeta(t)$. For a Markovian bath the dependence of $\tau_{\mathrm{opt}}$ is in ${\cal E}^{-1/3}$ \cite{pra}.\\
The corresponding total quantum Fisher information is
\begin{eqnarray}
{\cal F}_Q^{\mathrm{Seq}}(T,\tau_{\mathrm{opt}})&=& \frac{\sqrt{3}\xi(T,t_0)}{2\sqrt{\cal N}}{\cal E}^{1/2} + \frac{\sqrt{3}}{32{\cal N}^{3/2}}\left[2{\cal K}^2\xi(T,t_0) + \tfrac{7}{3}{\cal C}(T,t_0)\right]{\cal E}^{-1/2} + {\cal O}({\cal E}^{-3/2}).\nonumber
\end{eqnarray}
If the effective time window during which the force is non-zero is $[t_i,t_f] \subset [t_0,t_0+T]$, then we have $\xi(T,t_0)=\int_{t_i}^{t_f}dt\zeta^2(t) =\xi(t_f-t_i,t_i) $ and ${\cal C}(T,t_0)= \frac{1}{4}\int_{t_i}^{t_f}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] = {\cal C}(t_f-t_i,t_i)$, and the total quantum Fisher information is just equal to ${\cal F}_Q^{Seq}(t_f-t_i,t_i)$: it is not detrimental to start and stop the sensing beyond the actual time window of the force application. The exact knowledge of $t_i$ and $t_f$ is not necessary for the sequential preparation-and-measurement scenario.\\
As a matter of comparison, we give the asymptotic behaviour of the optimal time interval and of the corresponding QFI when a term of first order in $\tau$ appears in the denominator of \eqref{qfipvar}. This happens when the time interval $\tau$ between each measurement is larger than the evolution time scales of the bath correlation function (see Appendix \ref{nmvsm}) or when the dynamics is Markovian (which obviously implies that $\tau$ is larger than the evolution time scales of the bath correlation function). In such situations Eq. \eqref{fseqexp} becomes
\begin{eqnarray}
{\cal F}_Q^{Seq}(T,\tau) & =&\omega_0^2\frac{\xi(T,t_0) \tau + {\cal C}(T,t_0)\tau^3 +{\cal O}(\tau^4)}{\frac{{\cal E}^{-1}}{4} + A\tau + {\cal O}(\tau^2)},
\end{eqnarray}
where $A$ is a coefficient appearing in the situations described above ($A=\gamma(n_T+1/2)$ for Markovian dynamics), yielding
\begin{equation}
\tau_{opt} = \frac{1}{8A}{\cal E}^{-1/2}
\end{equation}
and a bounded QFI,
\begin{equation}
{\cal F}_Q^{\mathrm{Seq}}(T,\tau_{opt}) = \frac{\xi(T,t_0)}{3A} + {\cal O}({\cal E}^{-1}),
\end{equation}
equivalent to the result in \cite{pra} for Markovian dynamics.
\section{Correlation function and time of the bath}\label{bathcorrelation}
The bath correlation function can be defined by the following expression:
\begin{equation}
C(t,t_0|t',t_0):=\frac{1}{2}{\rm Tr}_{SB}\{\rho_{SB}^0[B(t,t_0,F)B^{\dag}(t',t_0,F)+B^{\dag}(t',t_0,F)B(t,t_0,F)]\} - {\rm Tr}_{SB}[\rho_{SB}^0B(t,t_0,F)]{\rm Tr}_{SB}[\rho_{SB}^0B^{\dag}(t',t_0,F)]
\end{equation}
where $B(t,t_0,F):=U^{\dag}(t,t_0,F)BU(t,t_0,F)$.\\
One can show the following useful expression for $B(t,t_0,F)$:
\begin{equation}
B(t,t_0,F) = B_0(t) -a_0(t)\dot{G}(t,t_0) + \int_{t_0}^t ds \dot{G}(t,s)B_0(s)e^{-i\omega_0(t-s)} +i\frac{F}{\sqrt{2}}\sum_n K_n e^{-i\omega_n(t-t_0)}D_n(t,t_0),
\end{equation}
where $\dot{G}(t,s):= \frac{d}{dt}G(t,s)$, and $D_n(t,t_0)$ is defined in Appendix \ref{evolution}. \\
One gets for the bath correlation function, assuming that $\forall n$, ${\rm Tr}_B[\rho_B^0b_n]={\rm Tr}_B[\rho_B^0b_n^{\dag}]=0$,
\begin{equation}\label{bathcofct}
C(t,t_0|t',t_0) = C_{b}(t,t_0|t',t_0)+C_{I}(t,t_0|t',t_0),
\end{equation}
where the first part
\begin{equation}
C_{b}(t,t_0|t',t_0)=e^{-i\omega_0(t-t')}C^0(t-t'),
\end{equation}
is the bare correlation function of the bath in the absence of interaction with the probe, which also corresponds to the Born approximation, and the second part,
\begin{eqnarray}
C_I(t,t_0|t',t_0)&=& e^{-i\omega_0(t-t')}\Bigg\{\int_{t_0}^{t'}dsC^0(t-s)\dot{G}^{*}(t',s) +\int_{t_0}^tdsC^0(s-t')\dot{G}(t,s)\nonumber\\
&+&\left[\frac{1}{2}{\rm Tr}_S[\rho^0_S(a^{\dag}a+aa^{\dag})]-|{\rm Tr}_S(\rho_s^0a)|^2\right]\dot{G}(t,t_0)\dot{G}^{*}(t',t_0)
+ \int_{t_0}^tds\int_{t_0}^{t'}ds' C^0(s-s')\dot{G}(t,s)\dot{G}^{*}(t',s')\Bigg\},\nonumber\\
\end{eqnarray}
gathers second and higher order terms coming from the interaction with the probe, involving the derivative of the response function of the bath $\dot{G}(t,s)$. The function $C^0(t-t')$ is defined above in Appendix \ref{considtimescale}, and $N_n={\rm Tr}_B[\rho_B^0b_n^{\dag}b_n]$. \\
Note that if one looks at the bath correlations at the beginning of the interaction between the bath and the probe, the correlation function is reduced to ($t'\rightarrow t_0$ in \eqref{bathcofct})
\begin{eqnarray}
C(t,t_0|t_0,t_0) &=& e^{-i\omega_0(t-t_0)}\left[C^0(t-t_0) +\int_{t_0}^tdsC^0(s-t_0)\dot{G}(t,s)\right].
\end{eqnarray}
As detailed in Appendix \ref{considtimescale}, the evolution time scales of $G(t,t_0)$ and $C^0(t-t')$ are of the order of $\Omega_p^{-1}$, $p\geq2$, and $|\chi_q|^{-1}$, $q\geq1$, and since \eqref{bathcofct} depends only on these two functions, the evolution time scale of the bath correlation function is also of the order of $\Omega_p^{-1}$ and $|\chi_q|^{-1}$. This is an important conclusion since it shows that the short time effects considered in this work take place within the bath correlation time, justifying their classification as non-Markovian effects.\\
Note finally that under the traditional Markovian approximation, including the broad band limit, the rotating wave approximation and the Born approximation, the bath correlation function $C(t,t_0|t',t_0)$ becomes a Dirac delta function $\delta(t-t')$, implying that the correlation time is zero and that no measurement can be performed within this time.
\end{widetext}
\begin{thebibliography}{1}
\bibitem{natphot} J. Aasi et al., Nature Photonics {\bf 7}, 613–619 (2013).
\bibitem{ligo} B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration) Phys. Rev. Lett. {\bf 116}, 061102 (2016).
\bibitem{impulsiveforce} D. Vitali, S. Mancini, and P. Tombesi, Phys. Rev. A {\bf 64}, 051401 (2001).
\bibitem{microopto} A. Pontin, M. Bonaldi, A. Borrielli, F. S. Cataliotti, F. Marino, G. A. Prodi, E. Serra, and F. Marin, Phys. Rev. A {\bf 89}, 023848 (2014).
\bibitem{aspel} M. R. Vanner et al., Proc. Natl. Acad. Sci. USA {\bf 108}, 16182 (2011).
\bibitem{biercuck} M. J. Biercuk, H. Uys, J. W. Britton, A. P. VanDevender, and J. J. Bollinger, Nat. Nanotechnol. {\bf 5}, 646 (2010).
\bibitem{ieee} K. S. Karvinen, M. G. Ruppert, K. Mahata and S. O. R. Moheimani, IEEE Trans. Nanotechnol. {\bf 13}, 1257-1265 (2014).
\bibitem{nanotube} J. Moser, J. Guttinger, A. Eichler, M. J. Esplandiu, D. E. Liu, M. I. Dykman, and A. Bachtold, Nat. Nanotechnol. {\bf 8}, 493-496 (2013).
\bibitem{singlespin} D. Rugar, R. Budakian, J.H. Mamin, and B. W. Chui, Nature {\bf 430}, 329-332 (2004).
\bibitem{pt} A. C. Bleszynski-Jayich, W. E. Shanks, B. Peaudecerf, E. Ginossar, F. von Oppen, L. Glazman, and J. G. E. Harris, Science {\bf 326}, 272-275 (2009).
\bibitem{ca} U. Mohideen and A. Roy, Phys. Rev. Lett. {\bf 81}, 4549 (1998).
\bibitem{planck} I. Pikovski, M. R. Vanner, M. Aspelmeyer, M. S. Kim and C. Brukner, Nat. Phys. {\bf 8}, 393-397 (2012).
\bibitem{nori} M. Antognozzi, S. Simpson, R. Harnima, J. Senior, R.
Hayward, H. Hoerber, M. R. Dennis, A. Y. Bekshaev, K. Y. Bliokh, and F. Nori, Nat. Phys. {\bf 12}, 731-735 (2016).
\bibitem{biology} M. A. Taylor and W. P. Bowen, Phys. Rep. {\bf 615}, 1-59 (2016).
\bibitem{euroj} Y. Gao, H. Lee, and Y. Lei Jia, Eur. Phys. J. D {\bf 68}:321 (2014).
\bibitem{plenio} A. W. Chin, S. F. Huelga, and M. B. Plenio, Phys. Rev. Lett. {\bf 109}, 233601 (2012).
\bibitem{matsuzaki} Y. Matsuzaki, S. C. Benjamin, and J. Fitzsimons, Phys. Rev. A {\bf 84}, 012103 (2011).
\bibitem{dd} A. Smirne, J. Kolodynski, S. F. Huelga, and R. Demkowicz-Dobrza\'nski, Phys. Rev. Lett. {\bf 116}, 120801 (2016).
\bibitem{pechukas} P. Pechukas, Phys. Rev. Lett. {\bf 73},1060 (1994); Phys. Rev. Lett. {\bf 75}, 3021 (1995).
\bibitem{fisher} R. A. Fisher, Math. Proc. Cambridge Philos. Soc. {\bf 22}, 700 (1925).
\bibitem{braunstein} S. L. Braunstein, J. Phys. A {\bf 25}, 3813 (1992).
\bibitem{cramer} H. Cramer, {\it Mathematical Methods of Statistics} (Princeton University, Princeton, 1946).
\bibitem{rao} C. R. Rao, {\it Linear Statistical Inference and its Applications}, 2nd edn. (Wiley, New York, 1973).
\bibitem{tsang} M. Tsang, arXiv:1605.03799 (2016).
\bibitem{bcm96} S. L. Braunstein, C. M. Caves, and G. J. Milburn, Ann. Phys. {\bf 247}, 135 (1996).
\bibitem{paz} B. L. Hu, J. P. Paz, and Y. Zhang, Phys. Rev. D {\bf 45}, 2843 (1992).
\bibitem{pra} C. L. Latune, B. M. Escher, R. L. de Matos Filho, and L. Davidovich, Phys. Rev. A {\bf 88}, 042112 (2013).
\bibitem{caves} S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. {\bf 72} 3439 (1994).
\bibitem{paris} M. G. A. Paris, Int. J. Quant. Inf. {\bf 7}, 125 (2009).
\bibitem{bruno} B. M. Escher, R. L. de Matos Filho and L. Davidovich, Nat. Phys. {\bf 7}, 406 (2011).
\bibitem{nicim} B. M. Escher, L. Davidovich, N. Zagury, and R. L. de Matos Filho, Phys. Rev. Lett. {\bf 109}, 190404 (2012).
\bibitem{scutaru} H. Scutaru, Journal of Physics A {\bf 31}, 3659 (1998).
\bibitem{pinel} O. Pinel, P. Jian, N. Treps, C. Fabre, and D. Braun, Phys. Rev. A {\bf 88}, 040102 (2013).
\bibitem{monras} A. Monras, arXiv:1303.3682 (2013).
\bibitem{jiang} Zhang Jiang, Phys. Rev. A {\bf 89}, 032128 (2014).
\bibitem{gardiner} C. W. Gardiner and P. Zoller, {\it Quantum Noise}, 2nd enlarged ed. (Springer-Verlag, Berlin Heidelberg New York, 2000).
\bibitem{screport} Y.-R. Zhang and H. Fan, Sci. Rep. {\bf 5}:11509 (2015).
\bibitem{zeno} A. H. Kiilerich and K. Molmer, Phys. Rev. A {\bf 92}, 032124 (2015).
\bibitem{tsang1} M. Tsang, H. M. Wiseman, and C. M. Caves, Phys. Rev. Lett. {\bf 106}, 090401 (2011).
\bibitem{tsang2} M. Tsang, New J. Phys. {\bf 15}, 073005 (2013).
\end{thebibliography}
\end{document}
\begin{document}
\title{Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory}
\begin{abstract}
A neural network model of a differential equation, namely neural ODE, has enabled the learning of continuous-time dynamical systems and probabilistic distributions with high accuracy.
The neural ODE uses the same network repeatedly during a numerical integration.
The memory consumption of the backpropagation algorithm is proportional to the number of uses \emph{times} the network size.
This is true even if a checkpointing scheme divides the computation graph into sub-graphs.
Alternatively, the adjoint method obtains a gradient by a numerical integration backward in time.
Although this method consumes memory only for a single network use, it requires high computational cost to suppress numerical errors.
This study proposes the symplectic adjoint method, which is an adjoint method solved by a symplectic integrator.
The symplectic adjoint method obtains the exact gradient (up to rounding error) with memory proportional to the number of uses \emph{plus} the network size.
The experimental results demonstrate that the symplectic adjoint method consumes much less memory than the naive backpropagation algorithm and checkpointing schemes, performs faster than the adjoint method, and is more robust to rounding errors.
\end{abstract}
\section{Introduction}
Deep neural networks offer remarkable methods for various tasks, such as image recognition~\cite{He2015a} and natural language processing~\cite{Devlin2018}.
These methods employ a residual architecture~\cite{Hochreiter1997,Pascanu2013}, in which the output $x_{n+1}$ of the $n$-th operation is defined as the sum of a subroutine $f_n$ and the input $x_n$ as $x_{n+1}=f_n(x_n)+x_n$.
The residual architecture can be regarded as a numerical integration applied to an ordinary differential equation (ODE)~\cite{Lu2018}.
Accordingly, a neural network model of the differential equation ${\mathrm{d}x}/{\mathrm{d}t}=f(x)$, namely, neural ODE, was proposed in~\cite{Chen2018e}.
Given an initial condition $x(0)=x$ as an input, the neural ODE solves an initial value problem by numerical integration, obtaining the final value as an output $y=x(T)$.
The neural ODE can model continuous-time dynamics such as irregularly sampled time series~\cite{Kidger2020}, stable dynamical systems~\cite{Rana2020,Takeishi2020}, and physical phenomena associated with geometric structures~\cite{Chen2020a,Greydanus2019,Matsubara2020}.
Further, because the neural ODE approximates a diffeomorphism~\cite{Teshima2020a}, it can model probabilistic distributions of real-world data by a change of variables~\cite{Grathwohl2018,Jiang2020,Kim2020b,Yang2019b}.
For an accurate integration, the neural ODE must employ a small step size and a high-order numerical integrator composed of many internal stages.
A neural network $f$ is used at each stage of each time step.
Thus, the backpropagation algorithm consumes exorbitant memory to retain the whole computation graph~\cite{Rumelhart1986,Chen2018e,Gholami2019,Zhuang2020a}.
The neural ODE employs the adjoint method to reduce memory consumption---this method obtains a gradient by a backward integration along with the state $x$, without consuming memory for retaining the computation graph over time~\cite{Errico1997,Hairer1993,Sanz-Serna2016,Wang2013a}.
However, this method incurs high computational costs to suppress numerical errors.
Several previous works employed a checkpointing scheme~\cite{Gholami2019,Zhuang2020a,Zhuang2021}.
This scheme only sparsely retains the state $x$ as checkpoints and recalculates a computation graph from each checkpoint to obtain the gradient.
However, this scheme still consumes a significant amount of memory to retain the computation graph between checkpoints.
To address the above limitations, this study proposes the \emph{symplectic adjoint method}.
The main advantages of the proposed method are presented as follows.
\noindent\textbf{Exact Gradient and Fast Computation:}
In discrete time, the adjoint method suffers from numerical errors or needs a smaller step size.
The proposed method uses a specially designed integrator that obtains the exact gradient in discrete time.
It works with the same step size as the forward integration and is thus faster than the adjoint method in practice.
\noindent\textbf{Minimal Memory Consumption:}
Excepting the adjoint method, existing methods apply the backpropagation algorithm to the computation graph of the whole or a subset of numerical integration~\cite{Gholami2019,Zhuang2020a,Zhuang2021}.
The memory consumption is proportional to the number of steps/stages in the graph \emph{times} the neural network size.
Conversely, the proposed method applies the algorithm only to each use of the neural network, and thus the memory consumption is only proportional to the number of steps/stages \emph{plus} the network size.
\noindent\textbf{Robust to Rounding Error:}
The backpropagation algorithm accumulates the gradient from each use of the neural network and tends to suffer from rounding errors.
Conversely, the proposed method obtains the gradient from each step as a numerical integration and is thus more robust to rounding errors.
\begin{table}[t]
\centering\small
\caption{Comparison of the proposed method with existing methods}\label{tab:theoretical_comparison}
\tabcolsep=.7mm
\begin{tabular}{llccccc}
\toprule
Methods & Gradient Calculation & Exact & Checkpoints & \multicolumn{2}{c}{Memory Consumption} & \scalebox{0.9}[1.0]{Computational Cost} \\
\cmidrule{5-6}
& & & & \scalebox{0.9}[1.0]{checkpoint} & \scalebox{0.9}[1.0]{backprop.} & \\
\midrule
NODE~\cite{Chen2018e} & adjoint method & no & $x_N$ & $M$ & $L$ & $M(\!N\!\>\!\!+\!\>\!\! 2\tilde N\!)sL$ \\
\midrule
NODE~\cite{Chen2018e} & backpropagation & yes & --- & --- & \!\!$MNsL$\!\! & $2MNsL$ \\
\scalebox{0.9}[1.0]{baseline scheme} & backpropagation & yes & $x_0$ & $M$ & $NsL$ & $3MNsL$ \\
ACA~\cite{Zhuang2020a} & backpropagation & yes & $\{x_n\}_{n=0}^{N-1}$ & $M\!N$ & $sL$ & $3MNsL$ \\
MALI~\cite{Zhuang2021}$^{*}$ & backpropagation & yes & $x_N$ & $M$ & $sL$ & $4MNsL$ \\
\midrule
proposed$^{**}$ & \scalebox{0.9}[1.0]{symplectic adjoint method} & yes & $\{x_{\!n}\!\}_{n=0}^{N\!-\!1},\{\!X_{n,i}\!\}_{i=1}^{s}$ & $M\!N\!\>\!\!+\!\>\!\! s$ & $L$ & $4MNsL$ \\
\bottomrule
\end{tabular}\\
\raggedright
$^{*}$Available only for the asynchronous leapfrog integrator. $^{**}$Available for any Runge--Kutta methods.
\end{table}
\section{Background and Related Work}
\subsection{Neural Ordinary Differential Equation and Adjoint Method}
We use the following notation.
\begin{itemize}[topsep=0pt,itemsep=0pt,partopsep=0pt, parsep=0pt]
\item[$M$:] the number of stacked neural ODE components,
\item[$L$:] the number of layers in a neural network,
\item[$N$, $\tilde N$:] the number of time steps in the forward and backward integrations, respectively, and
\item[$s$:] the number of uses of a neural network $f$ per step.
\end{itemize}
$s$ is typically equal to the number of internal stages of a numerical integrator~\cite{Hairer1993}.
A numerical integration forward in time requires a computational cost of $O(MNsL)$.
It also provides a computation graph over time steps, which is retained with a memory of $O(MNsL)$; the backpropagation algorithm is then applied to obtain the gradient.
The total computational cost is $O(2MNsL)$, where we suppose the computational cost of the backpropagation algorithm is equal to that of forward propagation.
The memory consumption and computational cost are summarized in Table~\ref{tab:theoretical_comparison}.
To reduce the memory consumption, the original study on the neural ODE introduced the adjoint method~\cite{Chen2018e,Errico1997,Hairer1993,Sanz-Serna2016,Wang2013a}.
This method integrates the pair of the system state $x$ and the adjoint variable $\lambda$ backward in time.
The adjoint variable $\lambda$ represents the gradient $\pderiv{\mathcal{L}}{x}$ of some function $\mathcal{L}$, and the backward integration of the adjoint variable $\lambda$ works as the backpropagation (or more generally the reverse-mode automatic differentiation) in continuous time.
The memory consumption is $O(M)$ to retain the final values $x(T)$ of $M$ neural ODE components and $O(L)$ to obtain the gradient of a neural network $f$ for integrating the adjoint variable $\lambda$.
The computational cost is at least doubled because of the re-integration of the system state $x$ backward in time.
The adjoint method suffers from numerical errors~\cite{Sanz-Serna2016,Gholami2019}.
To suppress the numerical errors, the backward integration often requires a smaller step size than the forward integration (i.e., $\tilde N>N$), leading to an increase in computation time.
Conversely, the proposed symplectic adjoint method uses a specially designed integrator, which provides the exact gradient with the same step size as the forward integration.
\subsection{Checkpointing Scheme}
The checkpointing scheme has been investigated to reduce the memory consumption of neural networks~\cite{Griewank2000,Gruslys2016}, where intermediate states are retained sparsely as checkpoints, and a computation graph is recomputed from each checkpoint.
For example, Gruslys \textit{et al}.~applied this scheme to recurrent neural networks~\cite{Gruslys2016}.
When the initial value $x(0)$ of each neural ODE component is retained as a checkpoint, the initial value problem is solved again before applying the backpropagation algorithm to obtain the gradient of the component.
Then, the memory consumption is $O(M)$ for checkpoints and $O(NsL)$ for the backpropagation; the memory consumption is $O(M+NsL)$ in total (see the baseline scheme).
The ANODE scheme retains each step $\{x_n\}_{n=0}^{N-1}$ as a checkpoint with a memory of $O(MN)$~\cite{Gholami2019}.
From each checkpoint $x_n$, this scheme recalculates the next step $x_{n+1}$ and obtains the gradient using the backpropagation algorithm with a memory of $O(sL)$; the memory consumption is $O(MN+sL)$ in total.
The ACA scheme improves on the ANODE scheme for methods with adaptive time-stepping by discarding the computation graphs generated while searching for an optimal step size.
Even with checkpoints, the memory consumption is still proportional to the number of uses $s$ of a neural network $f$ per step, which is not negligible for a high-order integrator, e.g., $s=6$ for the Dormand--Prince method~\cite{Dormand1986}.
In this context, the proposed method is regarded as a checkpointing scheme inside a numerical integrator.
Note that previous studies did not use the notation $s$.
Instead of a checkpointing scheme, MALI employs an asynchronous leapfrog (ALF) integrator after the state $x$ is paired up with the velocity state $v$~\cite{Zhuang2021}.
The ALF integrator is time-reversible, i.e., the backward integration obtains the state $x$ equal to that in the forward integration without checkpoints~\cite{Hairer1993}.
However, the ALF integrator is a second-order integrator, implying that it requires a small step size and a high computational cost to suppress numerical errors.
Higher-order Runge--Kutta methods cannot be used in place of the ALF integrator because they are implicit or non-time-reversible.
The ALF integrator is inapplicable to physical systems without velocity such as partial differential equation (PDE) systems.
Nonetheless, a similar approach named RevNet was proposed before in~\cite{Gomez2017}.
When regarding ResNet as a forward Euler method~\cite{Chen2018e,He2015a}, RevNet has an architecture regarded as the leapfrog integrator, and it recalculates the intermediate activations in the reverse direction.
\section{Adjoint Method}
Consider a system
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} x=f(x,t,\theta),\label{eq:main_system}
\end{equation}
where $x$, $t$, and $\theta$, respectively, denote the system state, an independent variable (e.g., time), and parameters of the function $f$.
Given an initial condition $x(0)=x_0$, the solution $x(t)$ is given by
\begin{equation}
x(t)=x_0+\int_0^t f(x(\tau),\tau,\theta)\,\mathrm{d}\tau.\label{eq:solution}
\end{equation}
The solution $x(t)$ is evaluated at the terminal $t=T$ by a function $\mathcal L$ as $\mathcal L(x(T))$.
Our main interest is in obtaining the gradients of $\mathcal L(x(T))$ with respect to the initial condition $x_0$ and the parameters $\theta$.
Now, we introduce the adjoint method~\cite{Chen2018e,Errico1997,Hairer1993,Sanz-Serna2016,Wang2013a}.
We first focus on the initial condition $x_0$ and omit the parameters $\theta$.
The adjoint method is based on the \emph{variational variable} $\delta(t)$ and the \emph{adjoint variable} $\lambda(t)$.
The variational and adjoint variables respectively follow the variational system and adjoint system as follows.
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \delta(t)=\pderiv{f}{x}(x(t),t) \delta(t) \mbox{ for } \delta(0)=I,\ \
\frac{\mathrm{d}}{\mathrm{d}t} \lambda(t)=-\pderiv{f}{x}(x(t),t)^\top\lambda(t) \mbox{ for } \lambda(T)=\lambda_T.
\label{eq:subsystems}
\end{equation}
The variational variable $\delta(t)$ represents the Jacobian $\pderiv{x(t)}{x_0}$ of the state $x(t)$ with respect to the initial condition $x_0$; the detailed derivation is summarized in Appendix~\ref{appendix:subsystems}.
\begin{rmk}\label{rmk:conserved_quantity}
The quantity $\lambda^\top\delta$ is time-invariant, i.e., $\lambda(t)^\top\delta(t)=\lambda(0)^\top\delta(0)$.
\end{rmk}
The proofs of most Remarks and Theorems in this paper are summarized in Appendix~\ref{appendix:proofs}.
\begin{rmk}\label{rmk:adjoint_is_gradient}
The adjoint variable $\lambda(t)$ represents the gradient $(\pderiv{\mathcal L(x(T))}{x(t)})^\top$ if the final condition $\lambda_T$ of the adjoint variable $\lambda$ is set to $(\pderiv{\mathcal L(x(T))}{x(T)})^\top$.
\end{rmk}
This is because of the chain rule.
Thus, the backward integration of the adjoint variable $\lambda(t)$ works as reverse-mode automatic differentiation.
The adjoint method has been used for data assimilation, where the initial condition $x_0$ is optimized by a gradient-based method.
For system identification (i.e., parameter adjustment), one can consider the parameters $\theta$ as a part of the augmented state $\tilde x=[x\ \ \theta]^\top$ of the system
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \tilde x=\tilde f(\tilde x,t),\ \tilde f(\tilde x,t)=\vvec{f(x,t,\theta)\\0},\ \ \tilde x(0)=\vvec{x_0\\\theta}.
\end{equation}
The variational and adjoint variables are augmented in the same way.
Hereafter, we let $x$ denote the state or augmented state without loss of generality.
See Appendix~\ref{appendix:general_gradient} for details.
According to the original implementation of the neural ODE~\cite{Chen2018e}, the final value $x(T)$ of the system state $x$ is retained after forward integration, and the pair of the system state $x$ and the adjoint variable $\lambda$ is integrated backward in time to obtain the gradients.
The right-hand sides of the main system in Eq.~\eqref{eq:main_system} and the adjoint system in Eq.~\eqref{eq:subsystems} are obtained by the forward and backward propagations of the neural network $f$, respectively.
Therefore, the computational cost of the adjoint method is twice that of the ordinary backpropagation algorithm.
After a numerical integrator discretizes the time, Remark~\ref{rmk:conserved_quantity} does not hold, and thus the adjoint variable $\lambda(t)$ is not equal to the exact gradient~\cite{Gholami2019,Sanz-Serna2016}.
Moreover, in general, the numerical integration backward in time is not consistent with that forward in time.
Although a small step size (i.e., a small tolerance) suppresses numerical errors, it also leads to a longer computation time.
These facts provide the motivation to obtain the exact gradient with a small memory, in the present study.
\section{Symplectic Adjoint Method}
\subsection{Runge--Kutta Method}
We first discretize the main system in Eq.~\eqref{eq:main_system}.
Let $t_n$, $h_n$, and $x_n$ denote the $n$-th time step, step size, and state, respectively, where $h_n=t_{n+1}-t_n$.
Previous studies employed one of the Runge--Kutta methods, generally expressed as
\begin{equation}
\begin{split}
x_{n+1}&=x_n+h_n\sum_{i=1}^s b_i k_{n,i},\\
k_{n,i}&\vcentcolon=f(X_{n,i},t_n+c_ih_n),\\
X_{n,i}&\vcentcolon=x_n+h_n\sum_{j=1}^s a_{i,j}k_{n,j}.
\end{split}\label{eq:runge_kutta}
\end{equation}
The coefficients $a_{i,j}$, $b_i$, and $c_i$ are summarized as the Butcher tableau~\cite{Hairer2006,Hairer1993,Sanz-Serna2016}.
If $a_{i,j}=0$ for $j\ge i$, the intermediate state $X_{n,i}$ is calculable from $i=1$ to $i=s$ sequentially; then, the Runge--Kutta method is considered explicit.
Runge--Kutta methods are not time-reversible in general, i.e., the numerical integration backward in time is not consistent with that forward in time.
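As an illustration of Eq.~\eqref{eq:runge_kutta} (using the classical fourth-order tableau rather than the Dormand--Prince tableau employed later in the experiments), a minimal explicit Runge--Kutta step can be sketched as follows:
\begin{verbatim}
import numpy as np

# Classical RK4 Butcher tableau (example; not the Dormand--Prince method).
a = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 0.5, 0.5, 1])

def rk_step(f, x_n, t_n, h):
    """One explicit step: k_i = f(X_{n,i}, t_n + c_i h) with X_{n,i} built
    from the previously computed stages."""
    k = []
    for i in range(len(b)):
        X_i = x_n + h * sum(a[i, j] * k[j] for j in range(i))
        k.append(f(X_i, t_n + c[i] * h))
    return x_n + h * sum(b[i] * k[i] for i in range(len(b)))

# usage: integrate dx/dt = -x from x(0) = 1 up to t = 1
f = lambda x, t: -x
x, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    x, t = rk_step(f, x, t, h), t + h
print(x)   # close to exp(-1)
\end{verbatim}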
\begin{rmk}[\citet{Bochev1994,Hairer2006}]\label{rmk:RK_for_variational}
When the system in Eq.~\eqref{eq:main_system} is discretized by the Runge--Kutta method in Eq.~\eqref{eq:runge_kutta}, the variational system in Eq.~\eqref{eq:subsystems} is discretized by the same Runge--Kutta method.
\end{rmk}
Therefore, it is not necessary to solve for the variational variable $\delta(t)$ separately.
\subsection{Symplectic Runge--Kutta Method for Adjoint System}\label{sec:symplectic_RK}
We assume $b_i\neq 0$ for $i=1,\dots,s$.
We suppose the adjoint system to be solved by another Runge--Kutta method with the same step size as that used for the system state $x$, expressed as
\begin{equation}
\begin{split}
\lambda_{n+1}& =\lambda_n+h_n\sum_{i=1}^s B_i l_{n,i},\\
l_{n,i}& \vcentcolon=-\pderiv{f}{x}(X_{n,i},t_n+C_ih_n)^\top\Lambda_{n,i},\\
\Lambda_{n,i}& \vcentcolon=\lambda_n+h_n\sum_{j=1}^s A_{i,j}l_{n,j}.
\end{split}\label{eq:runge_kutta_adjoint}
\end{equation}
The final condition $\lambda_N$ is set to $(\pderiv{\mathcal L(x_N)}{x_N})^\top$.
Because the time evolutions of the variational variable ${\mathrm{d}}elta$ and the adjoint variable $\lambda$ are expressible by two equations, the combined system is considered as a partitioned system.
A combination of two Runge--Kutta methods for solving a partitioned system is called a partitioned Runge--Kutta method, where $C_i=c_i$ for $i=1,\dots,s$.
We introduce the following condition for a partitioned Runge--Kutta method.
\begin{cnd}\label{cnd:symplectic_RK}
$b_iA_{i,j}+B_ja_{j,i}-b_iB_j=0$ for $i,j=1,\dots,s$, and $B_i=b_i\neq 0$ and $C_i=c_i$ for $i=1,\dots,s$.
\end{cnd}
\begin{thm}[\citet{Sanz-Serna2016}]\label{thm:symplectic_RK}
The partitioned Runge--Kutta method in Eqs.~\eqref{eq:runge_kutta} and~\eqref{eq:runge_kutta_adjoint} conserves a bilinear quantity $S(\delta,\lambda)$ if the continuous-time system conserves the quantity $S(\delta,\lambda)$ and Condition \ref{cnd:symplectic_RK} holds.
\end{thm}
Because the bilinear quantity $S$ (including $\lambda^\top\delta$) is conserved, the adjoint system solved by the Runge--Kutta method in Eq.~\eqref{eq:runge_kutta_adjoint} under Condition \ref{cnd:symplectic_RK} provides the exact gradient as the adjoint variable $\lambda_n=(\pderiv{\mathcal L(x_N)}{x_n})^\top$.
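As a small self-contained check (a sketch using the classical RK4 tableau, in which all $b_i$ are nonzero, rather than the Dormand--Prince method), the following Python snippet builds the adjoint coefficients $A_{i,j}=b_j(1-a_{j,i}/b_i)$ implied by Condition~\ref{cnd:symplectic_RK}, verifies the condition, and confirms that $\lambda_n^\top\delta_n$ is conserved for a linear system $\dot x=Jx$:
\begin{verbatim}
import numpy as np

a = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
A = b[None, :] * (1 - a.T / b[:, None])      # A[i, j] = b_j (1 - a[j, i] / b_i)

# Condition 1: b_i A_{ij} + b_j a_{ji} - b_i b_j = 0
assert np.allclose(b[:, None] * A + a.T * b[None, :], np.outer(b, b))

def rk_step(coef, M, v, h):
    """One (possibly implicit) Runge--Kutta step for dv/dt = M v."""
    s, d = len(b), len(v)
    lhs = np.eye(s * d) - h * np.kron(coef, M)
    stages = np.linalg.solve(lhs, np.tile(v, s)).reshape(s, d)
    return v + h * (b[:, None] * (stages @ M.T)).sum(axis=0)

rng = np.random.default_rng(0)
J = 0.5 * rng.normal(size=(3, 3))
delta = rng.normal(size=3)                   # one column of the variational variable
lam = rng.normal(size=3)                     # adjoint variable
invariant = lam @ delta
for _ in range(20):
    delta = rk_step(a, J, delta, 0.1)        # forward method for the variational system
    lam = rk_step(A, -J.T, lam, 0.1)         # adjoint method satisfying Condition 1
print(lam @ delta - invariant)               # zero up to rounding error
\end{verbatim}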
The Dormand--Prince method, one of the most popular Runge--Kutta methods, has $b_2=0$~\cite{Dormand1986}.
For such methods, the Runge--Kutta method under Condition~\ref{cnd:symplectic_RK} in Eq.~\eqref{eq:runge_kutta_adjoint} is generalized as
\begin{equation}
\begin{split}
\lambda_{n+1}&=\lambda_n+h_n\sum_{i=1}^s \tilde b_i l_{n,i},\\
l_{n,i}&\vcentcolon=-\pderiv{f}{x}(X_{n,i},t_n+c_ih_n)^\top\Lambda_{n,i},\\
\Lambda_{n,i}&\vcentcolon=
\begin{cases}
\lambda_n+h_n\sum_{j=1}^s \tilde b_j \left(1-\frac{a_{j,i}}{b_i}\right) l_{n,j} & \mbox{if}\ \ i\not\in I_0 \\
- \sum_{j=1}^s \tilde b_j a_{j,i} l_{n,j} & \mbox{if}\ \ i\in I_0,
\end{cases}\\
\end{split}\label{eq:runge_kutta_adjoint_vanish}
\end{equation}
where
\begin{equation}
\tilde b_i=
\begin{cases}
b_i & \mbox{if}\ \ i\not\in I_0 \\
h_n & \mbox{if}\ \ i\in I_0, \\
\end{cases}\ \
I_0=\{i\mid i=1,\dots,s,\ b_i=0\}.
\end{equation}
Note that this numerical integrator is no longer a Runge--Kutta method and is an alternative expression for the ``fancy'' integrator proposed in \cite{Sanz-Serna2016}.
\begin{thm}\label{thm:runge_kutta_adjoint_vanish}
The combination of the integrators in Eqs.~\eqref{eq:runge_kutta} and~\eqref{eq:runge_kutta_adjoint_vanish} conserves a bilinear quantity $S(\delta, \lambda)$ if the continuous-time system conserves the quantity $S(\delta,\lambda)$.
\end{thm}
\begin{rmk}\label{rmk:adjoint_explicit}
The Runge--Kutta method in Eq.~\eqref{eq:runge_kutta_adjoint} under Condition \ref{cnd:symplectic_RK} and the numerical integrator in Eq.~\eqref{eq:runge_kutta_adjoint_vanish} are explicit backward in time if the Runge--Kutta method in Eq.~\eqref{eq:runge_kutta} is explicit forward in time.
\end{rmk}
We emphasize that Theorems~\ref{thm:symplectic_RK} and~\ref{thm:runge_kutta_adjoint_vanish} hold for any ODE systems even if the systems have discontinuity~\cite{Herrera2020}, stochasticity~\cite{Li2020i}, or physics constraints~\cite{Greydanus2019}.
This is because the Theorems are not the properties of a system but of Runge--Kutta methods.
A partitioned Runge--Kutta method that satisfies Condition~\ref{cnd:symplectic_RK} is symplectic~\cite{Hairer1993,Hairer2006}.
It is known that, when a symplectic integrator is applied to a Hamiltonian system using a fixed step size, it conserves a modified Hamiltonian, which is an approximation to the system energy of the Hamiltonian system.
The bilinear quantity $S$ is associated with the symplectic structure but not with a Hamiltonian.
Regardless of the step size, a symplectic integrator conserves the symplectic structure and thereby conserves the bilinear quantity $S$.
Hence, we named this method the \emph{symplectic adjoint method}.
For integrators other than Runge--Kutta methods, one can design the integrator for the adjoint system so that the pair of integrators is symplectic (see \cite{Matsuda2020} for example).
\begin{figure}
% Placeholder: the pseudocode listings for the forward and backward integration algorithms are omitted here; captions and labels are retained for the references in the text.
\caption{Forward Integration}\label{alg:proposed:forward}
\caption{Backward Integration}\label{alg:proposed:backward}
\label{alg:proposed:backward:forward}\label{alg:proposed:backward:backward}
\end{figure}
\subsection{Proposed Implementation}
The theories given in the last section were mainly introduced for the numerical analysis in \cite{Sanz-Serna2016}.
Because the original expression includes recalculations of intermediate variables, we propose the alternative expression in Eq.~\eqref{eq:runge_kutta_adjoint_vanish} to reduce the computational cost.
The discretized adjoint system in Eq.~\eqref{eq:runge_kutta_adjoint_vanish} depends on the vector--Jacobian product (VJP) $\Lambda^\top\pderiv{f}{x}$.
To obtain it, the computation graph from the input $X_{n,i}$ to the output $f(X_{n,i},t_n+c_ih_n)$ is required.
When the computation graph in the forward integration is entirely retained, the memory consumption and computational cost are of the same orders as those for the naive backpropagation algorithm.
To reduce the memory consumption, we propose the following strategy as summarized in Algorithms~\ref{alg:proposed:forward} and \ref{alg:proposed:backward}.
At the forward integration of a neural ODE component, the pairs of system states $x_n$ and time points $t_n$ at time steps $n=0,\dots,N-1$ are retained with a memory of $O(N)$ as checkpoints, and all computation graphs are discarded, as shown in Algorithm~\ref{alg:proposed:forward}.
For $M$ neural ODE components, the memory for checkpoints is $O(MN)$.
The backward integration is summarized in Algorithm~\ref{alg:proposed:backward}.
The below steps are repeated from $n=N-1$ to $n=0$.
From the checkpoint $x_n$, the intermediate states $X_{n,i}$ for $s$ stages are obtained following the Runge--Kutta method in Eq.~\eqref{eq:runge_kutta} and retained as checkpoints with a memory of $O(s)$, while all computation graphs are discarded.
Then, the adjoint system is integrated from $n+1$ to $n$ using Eq.~\eqref{eq:runge_kutta_adjoint_vanish}.
Because the computation graph of the neural network $f$ in line \ref{alg:proposed:backward:forward} is discarded, it is recalculated and the VJP $\lambda^\top\pderiv{f}{x}$ is obtained using the backpropagation algorithm one-by-one in line \ref{alg:proposed:backward:backward}, where only a single use of the neural network is recalculated at a time.
This is why the memory consumption is proportional to the number of checkpoints $MN+s$ \emph{plus} the neural network size $L$.
By contrast, existing methods apply the backpropagation algorithm to the computation graph of a single step composed of $s$ stages or multiple steps.
The memory consumption is proportional to the number of uses of the neural network between two checkpoints ($s$ at least) \emph{times} the neural network size $L$, in addition to the memory for checkpoints (see Table~\ref{tab:theoretical_comparison}).
Due to the recalculation, the computational cost of the proposed strategy is $O(4MNsL)$, whereas those of the adjoint method~\cite{Chen2018e} and ACA~\cite{Zhuang2020a} are $O(M(N+2\tilde N)sL)$ and $O(3MNsL)$, respectively.
However, the increase in the computation time is much less than that expected theoretically because of other bottlenecks (as demonstrated later).
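To make the strategy concrete, the following is a schematic PyTorch-style sketch of the backward pass for a single explicit Runge--Kutta step (it covers only the gradient with respect to the state, assumes all $b_i\neq 0$, and is a simplified illustration of the strategy rather than the actual $\mathsf{torchdiffeq}$-based implementation):
\begin{verbatim}
import torch

def backward_step(f, x_n, t_n, h, lam_np1, a, b, c, A):
    """Adjoint step from lambda_{n+1} back to lambda_n; a, b, c, A are nested
    lists of floats, with A[i][j] = b[j] * (1 - a[j][i] / b[i])."""
    s = len(b)
    # 1) Recompute the stages X_{n,i} from the checkpoint x_n without
    #    building any computation graph (memory O(s)).
    with torch.no_grad():
        k, X = [], []
        for i in range(s):
            X_i = x_n + h * sum(a[i][j] * k[j] for j in range(i))
            X.append(X_i)
            k.append(f(X_i, t_n + c[i] * h))
    # 2) Integrate the adjoint variable backward; the graph of f is rebuilt
    #    for one stage at a time only (memory O(L)).
    l = [None] * s
    for i in reversed(range(s)):
        Lam_i = lam_np1 - h * sum((b[j] - A[i][j]) * l[j] for j in range(i + 1, s))
        with torch.enable_grad():
            X_i = X[i].detach().requires_grad_(True)
            f_i = f(X_i, t_n + c[i] * h)
            # vector--Jacobian product Lam_i^T (df/dx) at the stage X_{n,i}
            vjp = torch.autograd.grad(f_i, X_i, grad_outputs=Lam_i)[0]
        l[i] = -vjp
    return lam_np1 - h * sum(b[i] * l[i] for i in range(s))
\end{verbatim}
Repeating this step from $n=N-1$ down to $n=0$, starting from $\lambda_N=(\pderiv{\mathcal L(x_N)}{x_N})^\top$, never stores more than one stage's computation graph at a time.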
\section{Experiments}\label{sec:experiments}
We evaluated the performance of the proposed symplectic adjoint method and existing methods using PyTorch 1.7.1~\cite{Paszke2017}.
We implemented the proposed symplectic adjoint method by extending the adjoint method implemented in the package $\mathsf{torchdiffeq}$ 0.1.1~\cite{Chen2018e}.
We re-implemented ACA~\cite{Zhuang2020a} because the interface of the official implementation is incompatible with $\mathsf{torchdiffeq}$.
In practice, the number of checkpoints for an integration can be varied; we implemented a baseline scheme that retains only a single checkpoint per neural ODE component.
The source code is available at \url{https://github.com/tksmatsubara/symplectic-adjoint-method}.
\subsection{Continuous Normalizing Flow}\label{sec:flow}
\paragraph{Experimental Settings:}
We evaluated the proposed symplectic adjoint method on training continuous normalizing flows~\cite{Grathwohl2018}.
A normalizing flow is a neural network that approximates a bijective map $g$ and obtains the exact likelihood of a sample $u$ by the change of variables $\log p(u)=\log p(z)+\log |\det \pderiv{g(u)}{u}|$, where $z=g(u)$ and $p(z)$ denote the corresponding latent variable and its prior, respectively~\cite{Dinh2014a,Dinh2016,Rezende2015}.
A continuous normalizing flow is a normalizing flow whose map $g$ is modeled by stacked neural ODE components, in particular, $u=x(0)$ and $z=x(T)$ for the case with $M=1$.
The log-determinant of the Jacobian is obtained by a numerical integration together with the system state $x$ as $\log |\det \pderiv{g(u)}{u}|=-\int_0^T \mathrm{Tr}\big(\pderiv{f}{x}(x(t),t)\big)\,\mathrm{d}t$.
The trace operation $\mathrm{Tr}$ is approximated by the Hutchinson estimator~\cite{Hutchinson1990}.
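For reference, the Hutchinson estimator replaces the exact trace by an expectation over random probe vectors, $\mathrm{Tr}(\pderiv{f}{x})=\mathbb{E}_{\epsilon}[\epsilon^\top \pderiv{f}{x}\epsilon]$, where the vector--Jacobian product $\epsilon^\top\pderiv{f}{x}$ is obtained by automatic differentiation so that the full Jacobian is never formed; a minimal sketch (with a toy linear map, not the FFJORD code) is:
\begin{verbatim}
import torch

def hutchinson_trace(f, x, n_samples=1):
    x = x.detach().requires_grad_(True)
    fx = f(x)
    est = 0.0
    for _ in range(n_samples):
        eps = torch.randn_like(x)
        vjp = torch.autograd.grad(fx, x, grad_outputs=eps, retain_graph=True)[0]
        est = est + (vjp * eps).sum()
    return est / n_samples

# toy check against the exact trace of a linear map
A = torch.randn(5, 5)
x = torch.randn(5)
print(hutchinson_trace(lambda v: v @ A.T, x, n_samples=2000), torch.trace(A))
\end{verbatim}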
We adopted the experimental settings of the continuous normalizing flow, FFJORD\footnote{\url{https://github.com/rtqichen/ffjord}\label{footnote:ffjord} (MIT License)}~\cite{Grathwohl2018}, unless stated otherwise.
We examined five real tabular datasets, namely, MiniBooNE, GAS, POWER, HEPMASS, and BSDS300 datasets~\cite{Papamakarios2017}.
The network architectures were the same as those that achieved the best results in the original experiments; the number of neural ODE components $M$ varied across datasets.
We employed the Dormand--Prince integrator, which is a fifth-order Runge--Kutta method with adaptive time-stepping, composed of seven stages~\cite{Dormand1986}.
Note that the number of function evaluations per step is $s=6$ because the last stage is reused as the first stage of the next step.
We set the absolute and relative tolerances to $\mathsf{atol}=10^{-8}$ and $\mathsf{rtol}=10^{-6}$, respectively.
The neural networks were trained using the Adam optimizer~\cite{Kingma2014b} with a learning rate of $10^{-3}$.
We used a batch-size of 1000 for all datasets to put a mini-batch into a single NVIDIA GeForce RTX 2080Ti GPU with 11 GB of memory, while the original experiments employed a batch-size of 10 000 for the latter three datasets on multiple GPUs.
When using multiple GPUs, bottlenecks such as data transfer across GPUs may affect performance, and a fair comparison becomes difficult.
Even with this reduced batch-size, the naive backpropagation algorithm and baseline scheme consumed the entire memory for BSDS300 dataset.
We also examined the MNIST dataset~\cite{LeCun1998} using a single NVIDIA RTX A6000 GPU with 48 GB of memory.
Following the original study, we employed the multi-scale architecture and set the tolerance to $\mathsf{atol}=\mathsf{rtol}=10^{-5}$.
We set the learning rate to $10^{-3}$ and then reduced it to $10^{-4}$ at the 250th epoch.
While the original experiments used a batch-size of 900, we set the batch-size to 200 following the official code\footref{footnote:ffjord}.
The naive backpropagation algorithm and baseline scheme consumed the entire memory.
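For reference, the solver and optimizer settings above correspond roughly to calls of the following form through the public $\mathsf{torchdiffeq}$ interface; this is a hedged sketch, \texttt{odefunc} and \texttt{x0} are placeholders, and the authors' actual training script may differ.
\begin{verbatim}
import torch
from torchdiffeq import odeint_adjoint as odeint  # or the plain odeint

# odefunc is an nn.Module with forward(t, x); x0 is the initial state.
t = torch.tensor([0.0, 1.0])
z = odeint(odefunc, x0, t, method='dopri5', atol=1e-8, rtol=1e-6)[-1]

optimizer = torch.optim.Adam(odefunc.parameters(), lr=1e-3)
\end{verbatim}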
\begin{table}[t]\small
\tabcolsep=1.2mm
\centering
\caption{Results obtained for continuous normalizing flows.}\label{tab:tabular_results}
\begin{tabular}{lrrrrrrrrr}
\toprule
& \mcc{MINIBOONE ($M=1$)} & \mcc{GAS ($M=5$)} & \mcc{POWER ($M=5$)} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}
& \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 10.59\std{0.17} & 170 & 0.74 & -10.53\std{0.25} & 24 & 4.82 & -0.31\std{0.01} & \textbf{8.1} & 6.33 \\
backpropagation~\cite{Chen2018e} & 10.54\std{0.18} & 4436 & 0.91 & -9.53\std{0.42} & 4479 & 12.00 & -0.24\std{0.05} & 1710.9 & 10.64 \\
baseline scheme & 10.54\std{0.18} & 4457 & 1.10 & -9.53\std{0.42} & 1858 & 5.48 & -0.24\std{0.05} & 515.2 & 4.37 \\
ACA~\cite{Zhuang2020a} & 10.57\std{0.30} & 306 & 0.77 & -10.65\std{0.45} & 73 & 3.98 & -0.31\std{0.02} & 29.5 & 5.08 \\
\midrule
proposed & 10.49\std{0.11} & \textbf{95} & 0.84 & -10.89\std{0.11} & \textbf{20} & 4.39 & -0.31\std{0.02} & 9.2 & 5.73 \\
\bottomrule
\\
\toprule
& \mcc{HEPMASS ($M=10$)} & \mcc{BSDS300 ($M=2$)} & \mcc{MNIST ($M=6$)} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}
& \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 16.49\std{0.25} & 40 & 4.19 & -152.04\std{0.09} & 577 & 11.70 & 0.918\std{0.011} & 1086 & 10.12 \\
backpropagation~\cite{Chen2018e} & 17.03\std{0.22} & 5254 & 11.82 & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} \\
baseline scheme & 17.03\std{0.22} & 1102 & 4.40 & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} \\
ACA~\cite{Zhuang2020a} & 16.41\std{0.39} & 88 & 3.67 & -151.27\std{0.47} & 757 & 6.97 & 0.919\std{0.003} & 4332 & 7.94 \\
\midrule
proposed & 16.48\std{0.20} & \textbf{35} & 4.15 & -151.17\std{0.15} & \textbf{283} & 8.07 & 0.917\std{0.002} & \textbf{1079} & 9.42 \\
\bottomrule
\end{tabular}\\
\raggedright
Negative log-likelihoods (NLL), peak memory consumption [$\mathrm{MiB}$], and computation time per iteration [$\mathrm{s/itr}$]. See Table~\ref{tab:tabular_results_full} in Appendix for standard deviations.
\vspace*{-2mm}
\end{table}
\paragraph{Performance:}
The medians $\pm$ standard deviations of three runs are summarized in Table~\ref{tab:tabular_results}.
In many cases, all methods achieved negative log-likelihoods (NLLs) with no significant difference because all but the adjoint method provide the exact gradients up to rounding error, and the adjoint method with a small tolerance provides a sufficiently accurate gradient.
The naive backpropagation algorithm and baseline scheme obtained slightly worse results on the GAS, POWER, and HEPMASS datasets.
Due to adaptive time-stepping, the numerical integrator sometimes makes the step size much smaller, and the backpropagation algorithm over time steps suffered from rounding errors.
Conversely, ACA and the proposed symplectic adjoint method applied the backpropagation algorithm separately to a subset of the integration, thereby becoming more robust to rounding errors (see Appendix~\ref{appendix:rounding_off} for details).
After the training procedure, we obtained the peak memory consumption during additional training iterations (mem.~[$\mathrm{MiB}$]), from which we subtracted the memory consumption before training (i.e., occupied by the model parameters, loaded data, etc.).
The memory consumption still includes the optimizer's states and the intermediate results of the multiply--accumulate operation.
The results roughly agree with the theoretical orders shown in Table~\ref{tab:theoretical_comparison} (see also Table~\ref{tab:tabular_results_full} for standard deviations).
The symplectic adjoint method consumed much less memory than the naive backpropagation algorithm and the checkpointing schemes.
Owing to the optimized implementation, the symplectic adjoint method consumed less memory than the adjoint method in some cases (see Appendix~\ref{appendix:memory_consumption_optimization}).
On the other hand, the computation time per iteration (time [$\mathrm{s/itr}$]) during the additional training iterations does not agree with the theoretical orders.
First, the adjoint method was slower in many cases, especially for the BSDS300 and MNIST datasets.
For obtaining the gradients, the adjoint method integrates the adjoint variable $\lambda$, whose size is equal to the sum of the sizes of the parameters $\theta$ and the system state $x$.
With more parameters, the probability that at least one parameter fails to satisfy the error tolerance increases.
An accurate backward integration requires a much smaller step size than the forward integration (i.e., $\tilde N$ much greater than $N$), leading to a longer computation time.
Second, the naive backpropagation algorithm and baseline scheme were slower than expected theoretically in many cases.
A method with high memory consumption may have to wait for a retained computation graph to be loaded or memory to be freed, leading to an additional bottleneck.
The symplectic adjoint method is free from the above bottlenecks and performs faster in practice; it was faster than the adjoint method for all but MiniBooNE dataset.
The symplectic adjoint method is superior (or at least competitive) to the adjoint method, naive backpropagation, and baseline scheme in terms of both memory consumption and computation time.
Between the proposed symplectic adjoint method and ACA, a trade-off exists between memory consumption and computation time.
\begin{wrapfigure}{r}{2.1in}
\raggedleft
\vspace*{-5mm}
\includegraphics[scale=1,page=1]{fig/tolerance.pdf}
\vspace*{-5mm}
\caption{With different tolerances.}\label{fig:tolerance}
\vspace*{-2mm}
\end{wrapfigure}
\paragraph{Robustness to Tolerance:}
The adjoint method provides gradients with numerical errors.
To evaluate the robustness against tolerance, we employed MiniBooNE dataset and varied the absolute tolerance $\mathsf{atol}$ while maintaining the relative tolerance as $\mathsf{rtol}=10^{2}\!\times\!\mathsf{atol}$.
During the training, we obtained the computation time per iteration, as summarized in the upper panel of Fig.~\ref{fig:tolerance}.
The computation time decreased as the tolerance increased.
After training, we obtained the NLLs with $\mathsf{atol}=10^{-8}$, as summarized in the bottom panel of Fig.~\ref{fig:tolerance}.
The adjoint method performed well only with $\mathsf{atol}< 10^{-4}$.
With $\mathsf{atol}=10^{-4}$, the numerical error in the backward integration was non-negligible, and the performance degraded.
With $\mathsf{atol}>10^{-4}$, the adjoint method destabilized.
The symplectic adjoint method performed well even with $\mathsf{atol}=10^{-4}$.
Even with $10^{-4}<\mathsf{atol}<10^{-2}$, it still performed reasonably well, although the numerical error in the forward integration was non-negligible.
Because of the exact gradient, the symplectic adjoint method is robust to a large tolerance compared with the adjoint method, and thus potentially works much faster with an appropriate tolerance.
\begin{table}[t]\small
\tabcolsep=.8mm
\centering
\caption{Results obtained for GAS dataset with different Runge--Kutta methods.}\label{tab:tabular_results_solvers}
\begin{tabular}{lrrrrrrrr}
\toprule
& \mcb{$p=2$, $s=2$} & \mcb{$p=3$, $s=3$} & \mcb{$p=5$, $s=6$} & \mcb{$p=8$, $s=12$} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9}
& \mca{mem.} & \mca{time} & \mca{mem.} & \mca{time} & \mca{mem.} & \mca{time} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & \textbf{21}\std{\zz0} & 247.47\std{\zz7.52} & \textbf{22}\std{0} & 18.32\std{0.88} & 24\std{\zz\zz0} & 5.34\std{0.31} & 28\std{\zz\zz0} & 9.77\std{0.81} \\
backpropagation~\cite{Chen2018e} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & 4433\std{255} & 11.85\std{1.10} & \mca{---} & \mca{---} \\
baseline scheme & \mca{---} & \mca{---} & \mca{---} & \mca{---} & 1858\std{228} & 5.82\std{0.28} & 4108\std{576} & 22.76\std{3.70} \\
ACA~\cite{Zhuang2020a} & 607\std{30} & 232.90\std{13.81} & 69\std{2} & 17.72\std{1.38} & 73\std{\zz\zz0} & 4.15\std{0.21} & 138\std{\zz\zz0} & 9.36\std{0.55} \\
\midrule
proposed & 589\std{14} & 262.99\std{\zz5.19} & 43\std{2} & 18.59\std{0.75} & \textbf{20}\std{\zz\zz0} & 4.78\std{0.32} & \textbf{21}\std{\zz\zz0} & 11.41\std{0.23} \\
\bottomrule
\end{tabular}\\
\raggedright
Peak memory consumption [$\mathrm{MiB}$], and computation time per iteration [$\mathrm{s/itr}$].
\vspace*{-2mm}
\end{table}
\paragraph{Different Runge--Kutta Methods:}
The Runge--Kutta family includes various integrators characterized by the Butcher tableau~\cite{Hairer2006,Hairer1993,Sanz-Serna2016}, such as the Heun--Euler method (a.k.a.~adaptive Heun), Bogacki--Shampine method (a.k.a.~bosh3), fifth-order Dormand--Prince method (a.k.a.~dopri5), and eighth-order Dormand--Prince method (a.k.a.~dopri8).
These methods have the orders of $p=2, 3, 5$, and $8$ using $s=2, 3, 6$, and $12$ function evaluations, respectively.
We examined these methods using GAS dataset, and the results are summarized in Table~\ref{tab:tabular_results_solvers}.
The naive backpropagation algorithm and baseline scheme consumed the entire memory in some cases, as denoted by dashes.
We omit the NLLs because all methods used the same tolerance and achieved the same NLLs.
Compared to ACA, the symplectic adjoint method suppresses the memory consumption more significantly with a higher-order method (i.e., more function evaluations $s$), as the theory suggests in Table~\ref{tab:theoretical_comparison}.
With the Heun--Euler method, all methods were extremely slow, and all but the adjoint method consumed more memory.
A lower-order method has to use an extremely small step size to satisfy the tolerance, thereby increasing the number of steps $N$, computation time, and memory for checkpoints.
This result indicates the limitations of methods that depend on lower-order integrators, such as MALI~\cite{Zhuang2021}.
With the eighth-order Dormand--Prince method, the adjoint method performs relatively faster.
This is because the backward integration easily satisfies the tolerance with a higher-order method (i.e., $\tilde N\simeq N$).
Nonetheless, in terms of computation time, the fifth-order Dormand--Prince method is the best choice, for which the symplectic adjoint method greatly reduces the memory consumption and performs faster than all but ACA.
\begin{wrapfigure}{r}{2.3in}
\raggedleft
\vspace*{-5mm}
\includegraphics[scale=1,page=1]{fig/steps_log_mnist.pdf}
\vspace*{-6mm}
\caption{Different number of steps.}\label{fig:steps_memory}
\vspace*{-3mm}
\end{wrapfigure}
\paragraph{Memory for Checkpoints:}\label{sec:memory_for_checkpoint}
To evaluate the memory consumption with varying numbers of checkpoints, we used the fifth-order Dormand--Prince method and varied the number of steps $N$ for MNIST by manually varying the step size.
We summarized the results in Fig.~\ref{fig:steps_memory} on a log-log scale.
Note that, with the adaptive stepping, FFJORD needs approximately $MN=200$ steps for MNIST and fewer steps for other datasets.
Note that we set $\tilde N=N$ here; because $\tilde N>N$ in practice, the adjoint method is expected to require an even longer computation time.
The memory consumption roughly follows the theoretical orders summarized in Table~\ref{tab:theoretical_comparison}.
The adjoint method needs a memory of $O(L)$ for the backpropagation, and the symplectic adjoint method needs an additional memory of $O(MN+s)$ for checkpoints.
Until the number of steps $MN$ exceeds a thousand, the memory for checkpoints is negligible compared to that for the backpropagation.
Compared to the symplectic adjoint method, ACA needs a memory of $O(sL)$ for the backpropagation over $s$ stages.
The increase in memory is significant until the number of steps $MN$ reaches ten thousand.
For a stiff (non-smooth) ODE for which a numerical integrator needs thousands of steps, one can employ a higher-order integrator such as the eighth-order Dormand--Prince method and suppress the number of steps.
For a stiffer ODE, implicit integrators are commonly used, which are out of the scope of this study and the related works in Table~\ref{tab:theoretical_comparison}.
Therefore, we conclude that the symplectic adjoint method needs memory at the same level as the adjoint method and much less than the other methods in practical ranges.
A possible alternative to the proposed implementation in Algorithms~\ref{alg:proposed:forward} and \ref{alg:proposed:backward} retains all intermediate states $X_{n,i}$ during the forward integration.
Its computational cost and memory consumption are $O(3MNsL)$ and $O(MNs+L)$, respectively.
The memory for checkpoints can be non-negligible with a practical number of steps.
\begin{table}[t]
\tabcolsep=1.5mm
\centering\small
\caption{Results obtained for continuous-time physical systems.}\label{tab:pde_results}
\begin{tabular}{lcrrcrr}
\toprule
& \mcc{KdV Equation} & \mcc{Cahn--Hilliard System} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}
& MSE \tiny{($\times 10^{-3}$)} & \mca{mem.} & \mca{time} & MSE \tiny{($\times 10^{-6}$)} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 1.61\std{3.23} & 93.7\std{0.0} & 276\std{16} & 5.58\std{1.67} & 93.7\std{0.0} & 942\std{24} \\
backpropagation~\cite{Chen2018e} & 1.61\std{3.40} & 693.9\std{0.0} & 105\std{\zz4} & 4.68\std{1.89} & 3047.1\std{0.0} & 425\std{13} \\
ACA~\cite{Zhuang2020a} & 1.61\std{3.40} & 647.8\std{0.0} & 137\std{\zz5} & 5.82\std{2.33} & 648.0\std{0.0} & 484\std{13} \\
\midrule
proposed & 1.61\std{4.00} & \textbf{79.8}\std{0.0} & 162\std{\zz6} & 5.47\std{1.46} & \textbf{80.3}\std{0.0} & 568\std{22} \\
\bottomrule
\multicolumn{7}{l}{Mean-squared errors (MSEs) in long-term predictions, peak memory consumption [$\mathrm{MiB}$],} \\
\multicolumn{7}{l}{and computation time per iteration [$\mathrm{ms/itr}$].}
\end{tabular}
\vspace*{-3mm}
\end{table}
\subsection{Continuous-Time Dynamical System}\label{sec:pde}
\paragraph{Experimental Settings:}
We evaluated the symplectic adjoint method on learning continuous-time dynamical systems~\cite{Greydanus2019,Matsubara2020,Saemundsson2020}.
Many physical phenomena can be modeled using the gradient of system energy $H$ as ${\mathrm{d}x}/{\mathrm{d}t}=G\nabla H(x)$, where $G$ is a coefficient matrix that determines the behaviors of the energy~\cite{Furihata2010}.
We followed the experimental settings of HNN++, provided in~\cite{Matsubara2020}\footnote{\url{https://github.com/tksmatsubara/discrete-autograd} (MIT License)\label{footnote:dgnet}}.
A neural network composed of one convolution layer and two fully connected layers approximated the energy function $H$ and learned the time series by interpolating two successive samples.
The deterministic convolution algorithm was enabled (see Appendix~\ref{appendix:parallelization} for discussion).
We employed two physical systems described by PDEs, namely the Korteweg--De Vries (KdV) equation and the Cahn--Hilliard system.
We used a batch-size of 100 to put a mini-batch into a single NVIDIA TITAN V GPU instead of the original batch-size of 200.
Moreover, we used the eighth-order Dormand--Prince method~\cite{Hairer1993}, composed of 13 stages, to emphasize the efficiency of the proposed method.
We omitted the baseline scheme because of $M=1$.
We evaluated the performance using mean squared errors (MSEs) in the system energy for long-term predictions.
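As a rough sketch of the gradient-of-energy formulation ${\mathrm{d}x}/{\mathrm{d}t}=G\nabla H(x)$ described above (placeholder names; the HNN++ code linked in the footnote is the actual implementation used here), the right-hand side can be evaluated with automatic differentiation:
\begin{verbatim}
import torch

def time_derivative(H_net, G, x):
    # dx/dt = G * grad H(x), with the energy H approximated by a network.
    # Assumes x has shape (batch, dim) and G has shape (dim, dim).
    x = x.requires_grad_(True)
    H = H_net(x).sum()                 # scalar total energy over the batch
    (gradH,) = torch.autograd.grad(H, x, create_graph=True)
    return gradH @ G.T                 # batched product G * grad H
\end{verbatim}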
\paragraph{Performance:}
The medians $\pm$ standard deviations of 15 runs are summarized in Table~\ref{tab:pde_results}.
Due to the accumulated error in the numerical integration, the MSEs had large variances, but all methods obtained similar MSEs.
ACA consumed much more memory than the symplectic adjoint method because of the large number of stages; the symplectic adjoint method is more beneficial for physical simulations, which often require very high-order methods.
Due to the severe nonlinearity, the adjoint method had to employ a small step size and thus performed slower than others (i.e., $\tilde N> N$).
\section{Conclusion}
We proposed the symplectic adjoint method, which solves the adjoint system by a symplectic integrator with appropriate checkpoints and thereby provides the exact gradient.
It only applies the backpropagation algorithm to each use of the neural network, and thus consumes much less memory than the backpropagation algorithm and the checkpointing schemes.
Its memory consumption is competitive with that of the adjoint method because the memory consumed by checkpoints is negligible in most cases.
The symplectic adjoint method provides the exact gradient with the same step size as that used for the forward integration.
Therefore, in practice, it performs faster than the adjoint method, which requires a small step size to suppress numerical errors.
As shown in the experiments, the best integrator and checkpointing scheme may depend on the target system and computational resources.
For example, \citet{Kim2021} have demonstrated that quadrature methods can reduce the computation cost of the adjoint system for a stiff equation in exchange for additional memory consumption.
Practical packages provide many integrators and can choose the best ones~\cite{Hindmarsh2005,Rackauckas2020}.
In the future, we will provide the proposed symplectic adjoint method as a part of such packages for appropriate systems.
\end{ack}
{\small
\begin{thebibliography}{}
\bibitem[Bochev and Scovel, 1994]{Bochev1994}
Bochev, P.~B. and Scovel, C. (1994).
\newblock {On quadratic invariants and symplectic structure}.
\newblock {\em BIT Numerical Mathematics}, 34(3):337--345.
\bibitem[Chen et~al., 2018]{Chen2018e}
Chen, T.~Q., Rubanova, Y., Bettencourt, J., Duvenaud, D., Chen, R. T.~Q.,
Rubanova, Y., Bettencourt, J., and Duvenaud, D. (2018).
\newblock {Neural Ordinary Differential Equations}.
\newblock In {\em Advances in Neural Information Processing Systems (NeurIPS)}.
\bibitem[Chen et~al., 2020]{Chen2020a}
Chen, Z., Zhang, J., Arjovsky, M., and Bottou, L. (2020).
\newblock {Symplectic Recurrent Neural Networks}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\bibitem[Devlin et~al., 2018]{Devlin2018}
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018).
\newblock {BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding}.
\newblock {\em arXiv}.
\bibitem[Dinh et~al., 2014]{Dinh2014a}
Dinh, L., Krueger, D., and Bengio, Y. (2014).
\newblock {NICE: Non-linear Independent Components Estimation}.
\newblock In {\em Workshop on International Conference on Learning
Representations}.
\bibitem[Dinh et~al., 2017]{Dinh2016}
Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017).
\newblock {Density estimation using Real NVP}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\bibitem[Dormand and Prince, 1986]{Dormand1986}
Dormand, J.~R. and Prince, P.~J. (1986).
\newblock {A reconsideration of some embedded Runge-Kutta formulae}.
\newblock {\em Journal of Computational and Applied Mathematics},
15(2):203--211.
\bibitem[Errico, 1997]{Errico1997}
Errico, R.~M. (1997).
\newblock {What Is an Adjoint Model?}
\newblock {\em Bulletin of the American Meteorological Society},
78(11):2577--2591.
\bibitem[Furihata and Matsuo, 2010]{Furihata2010}
Furihata, D. and Matsuo, T. (2010).
\newblock {\em {Discrete Variational Derivative Method: A Structure-Preserving
Numerical Method for Partial Differential Equations}}.
\newblock Chapman and Hall/CRC.
\bibitem[Gholami et~al., 2019]{Gholami2019}
Gholami, A., Keutzer, K., and Biros, G. (2019).
\newblock {ANODE: Unconditionally accurate memory-efficient gradients for
neural ODEs}.
\newblock In {\em International Joint Conference on Artificial Intelligence (IJCAI)}.
\bibitem[Gomez et~al., 2017]{Gomez2017}
Gomez, A.~N., Ren, M., Urtasun, R., and Grosse, R.~B. (2017).
\newblock {The Reversible Residual Network: Backpropagation Without Storing
Activations}.
\newblock In {\em Advances in Neural Information Processing Systems (NIPS)}.
\bibitem[Grathwohl et~al., 2018]{Grathwohl2018}
Grathwohl, W., Chen, R. T.~Q., Bettencourt, J., Sutskever, I., and Duvenaud, D.
(2018).
\newblock {FFJORD: Free-form Continuous Dynamics for Scalable Reversible
Generative Models}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\bibitem[Greydanus et~al., 2019]{Greydanus2019}
Greydanus, S., Dzamba, M., and Yosinski, J. (2019).
\newblock {Hamiltonian Neural Networks}.
\newblock In {\em Advances in Neural Information Processing Systems (NeurIPS)}.
\bibitem[Griewank and Walther, 2000]{Griewank2000}
Griewank, A. and Walther, A. (2000).
\newblock {Algorithm 799: Revolve: An implementation of checkpointing for the
reverse or adjoint mode of computational differentiation}.
\newblock {\em ACM Transactions on Mathematical Software}, 26(1):19--45.
\bibitem[Gruslys et~al., 2016]{Gruslys2016}
Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., and Graves, A. (2016).
\newblock {Memory-efficient backpropagation through time}.
\newblock {\em Advances in Neural Information Processing Systems (NIPS)}.
\bibitem[Hairer et~al., 2006]{Hairer2006}
Hairer, E., Lubich, C., and Wanner, G. (2006).
\newblock {\em {Geometric Numerical Integration: Structure-Preserving
Algorithms for Ordinary Differential Equations}}, volume~31 of {\em Springer
Series in Computational Mathematics}.
\newblock Springer-Verlag, Berlin/Heidelberg.
\bibitem[Hairer et~al., 1993]{Hairer1993}
Hairer, E., N{\o}rsett, S.~P., and Wanner, G. (1993).
\newblock {\em {Solving Ordinary Differential Equations I: Nonstiff Problems}},
volume~8 of {\em Springer Series in Computational Mathematics}.
\newblock Springer Berlin Heidelberg, Berlin, Heidelberg.
\bibitem[He et~al., 2016]{He2015a}
He, K., Zhang, X., Ren, S., and Sun, J. (2016).
\newblock {Deep Residual Learning for Image Recognition}.
\newblock In {\em IEEE Conference on Computer Vision and Pattern Recognition
(CVPR)}.
\bibitem[Herrera et~al., 2020]{Herrera2020}
Herrera, C., Krach, F., and Teichmann, J. (2020).
\newblock {Neural Jump Ordinary Differential Equations: Consistent
Continuous-Time Prediction and Filtering}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\bibitem[Hindmarsh et~al., 2005]{Hindmarsh2005}
Hindmarsh, A.~C., Brown, P.~N., Grant, K.~E., Lee, S.~L., Serban, R., Shumaker,
D.~E., and Woodward, C.~S. (2005).
\newblock {SUNDIALS: Suite of nonlinear and differential/algebraic equation
solvers}.
\newblock {\em ACM Transactions on Mathematical Software}, 31(3):363--396.
\bibitem[Hochreiter and Schmidhuber, 1997]{Hochreiter1997}
Hochreiter, S. and Schmidhuber, J. (1997).
\newblock {Long Short-Term Memory}.
\newblock {\em Neural Computation}, 9(8):1735--1780.
\bibitem[Hutchinson, 1990]{Hutchinson1990}
Hutchinson, M. (1990).
\newblock {A stochastic estimator of the trace of the influence matrix for
laplacian smoothing splines}.
\newblock {\em Communications in Statistics - Simulation and Computation},
19(2):433--450.
\bibitem[Jiang et~al., 2020]{Jiang2020}
Jiang, C.~M., Huang, J., Tagliasacchi, A., and Guibas, L. (2020).
\newblock {ShapeFlow: Learnable Deformations Among 3D Shapes}.
\newblock In {\em Advances in Neural Information Processing Systems (NeurIPS)}.
\bibitem[Kidger et~al., 2020]{Kidger2020}
Kidger, P., Morrill, J., Foster, J., and Lyons, T. (2020).
\newblock {Neural Controlled Differential Equations for Irregular Time Series}.
\newblock In {\em Advances in Neural Information Processing Systems (NeurIPS)}.
\bibitem[Kim et~al., 2020]{Kim2020b}
Kim, H., Lee, H., Kang, W.~H., Cheon, S.~J., Choi, B.~J., and Kim, N.~S.
(2020).
\newblock {WaveNODE: A Continuous Normalizing Flow for Speech Synthesis}.
\newblock In {\em ICML2020 Workshop on Invertible Neural Networks, Normalizing
Flows, and Explicit Likelihood Models}.
\bibitem[Kim et~al., 2021]{Kim2021}
Kim, S., Ji, W., Deng, S., Ma, Y., and Rackauckas, C. (2021).
\newblock {Stiff Neural Ordinary Differential Equations}.
\newblock {\em arXiv}.
\bibitem[Kingma and Ba, 2015]{Kingma2014b}
Kingma, D.~P. and Ba, J. (2015).
\newblock {Adam: A Method for Stochastic Optimization}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\bibitem[LeCun et~al., 1998]{LeCun1998}
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998).
\newblock {Gradient-based learning applied to document recognition}.
\newblock In {\em Proceedings of the IEEE}, volume~86, pages 2278--2323.
\bibitem[Li et~al., 2020]{Li2020i}
Li, X., Wong, T.-K.~L., Chen, R. T.~Q., and Duvenaud, D. (2020).
\newblock {Scalable Gradients for Stochastic Differential Equations}.
\newblock In {\em Artificial Intelligence and Statistics (AISTATS)}.
\bibitem[Lu et~al., 2018]{Lu2018}
Lu, Y., Zhong, A., Li, Q., and Dong, B. (2018).
\newblock {Beyond finite layer neural networks: Bridging deep architectures and
numerical differential equations}.
\newblock In {\em International Conference on Machine Learning (ICML)}.
\bibitem[Matsubara et~al., 2020]{Matsubara2020}
Matsubara, T., Ishikawa, A., and Yaguchi, T. (2020).
\newblock {Deep Energy-Based Modeling of Discrete-Time Physics}.
\newblock In {\em Advances in Neural Information Processing Systems (NeurIPS)}.
\bibitem[Matsuda and Miyatake, 2021]{Matsuda2020}
Matsuda, T. and Miyatake, Y. (2021).
\newblock {Generalization of partitioned Runge–Kutta methods for adjoint
systems}.
\newblock {\em Journal of Computational and Applied Mathematics}, 388:113308.
\bibitem[Papamakarios et~al., 2017]{Papamakarios2017}
Papamakarios, G., Pavlakou, T., and Murray, I. (2017).
\newblock {Masked Autoregressive Flow for Density Estimation}.
\newblock In {\em Advances in Neural Information Processing Systems (NIPS)}.
\bibitem[Pascanu et~al., 2013]{Pascanu2013}
Pascanu, R., Mikolov, T., and Bengio, Y. (2013).
\newblock {On the difficulty of training Recurrent Neural Networks}.
\newblock In {\em International Conference on Machine Learning (ICML)}.
\bibitem[Paszke et~al., 2017]{Paszke2017}
Paszke, A., Chanan, G., Lin, Z., Gross, S., Yang, E., Antiga, L., and Devito,
Z. (2017).
\newblock {Automatic differentiation in PyTorch}.
\newblock In {\em Autodiff Workshop on Advances in Neural Information
Processing Systems}.
\bibitem[Rackauckas et~al., 2020]{Rackauckas2020}
Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R.,
Skinner, D., Ramadhan, A., and Edelman, A. (2020).
\newblock {Universal Differential Equations for Scientific Machine Learning}.
\newblock {\em arXiv}.
\bibitem[Rana et~al., 2020]{Rana2020}
Rana, M.~A., Li, A., Fox, D., Boots, B., Ramos, F., and Ratliff, N. (2020).
\newblock {Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable
Dynamical Systems}.
\newblock In {\em Conference on Learning for Dynamics and Control (L4DC)}.
\bibitem[Rezende and Mohamed, 2015]{Rezende2015}
Rezende, D.~J. and Mohamed, S. (2015).
\newblock {Variational Inference with Normalizing Flows}.
\newblock In {\em International Conference on Machine Learning (ICML)}.
\bibitem[Rumelhart et~al., 1986]{Rumelhart1986}
Rumelhart, D.~E., Hinton, G.~E., and Williams, R.~J. (1986).
\newblock {Learning representations by back-propagating errors}.
\newblock {\em Nature}, 323(6088):533--536.
\bibitem[Saemundsson et~al., 2020]{Saemundsson2020}
Saemundsson, S., Terenin, A., Hofmann, K., Deisenroth, M.~P., S{\ae}mundsson,
S., Hofmann, K., Terenin, A., and Deisenroth, M.~P. (2020).
\newblock {Variational Integrator Networks for Physically Meaningful
Embeddings}.
\newblock In {\em Artificial Intelligence and Statistics (AISTATS)}.
\bibitem[Sanz-Serna, 2016]{Sanz-Serna2016}
Sanz-Serna, J.~M. (2016).
\newblock {Symplectic Runge-Kutta schemes for adjoint equations, automatic
differentiation, optimal control, and more}.
\newblock {\em SIAM Review}, 58(1):3--33.
\bibitem[Takeishi and Kawahara, 2020]{Takeishi2020}
Takeishi, N. and Kawahara, Y. (2020).
\newblock {Learning dynamics models with stable invariant sets}.
\newblock {\em AAAI Conference on Artificial Intelligence (AAAI)}.
\bibitem[Teshima et~al., 2020]{Teshima2020a}
Teshima, T., Tojo, K., Ikeda, M., Ishikawa, I., and Oono, K. (2020).
\newblock {Universal Approximation Property of Neural Ordinary Differential
Equations}.
\newblock In {\em NeurIPS Workshop on Differential Geometry meets Deep Learning
(DiffGeo4DL)}.
\bibitem[Wang, 2013]{Wang2013a}
Wang, Q. (2013).
\newblock {Forward and adjoint sensitivity computation of chaotic dynamical
systems}.
\newblock {\em Journal of Computational Physics}, 235:1--13.
\bibitem[Yang et~al., 2019]{Yang2019b}
Yang, G., Huang, X., Hao, Z., Liu, M.~Y., Belongie, S., and Hariharan, B.
(2019).
\newblock {Pointflow: 3D point cloud generation with continuous normalizing
flows}.
\newblock {\em International Conference on Computer Vision (ICCV)}.
\bibitem[Zhuang et~al., 2020]{Zhuang2020a}
Zhuang, J., Dvornek, N., Li, X., Tatikonda, S., Papademetris, X., and Duncan,
J. (2020).
\newblock {Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural
ODE}.
\newblock In {\em International Conference on Machine Learning (ICML)}.
\bibitem[Zhuang et~al., 2021]{Zhuang2021}
Zhuang, J., Dvornek, N.~C., Tatikonda, S., and Duncan, J.~S. (2021).
\newblock {MALI: A memory efficient and reverse accurate integrator for Neural
ODEs}.
\newblock In {\em International Conference on Learning Representations (ICLR)}.
\end{thebibliography}
}
\appendix
\renewcommand\thetable{A\arabic{table}}
\setcounter{table}{0}
\renewcommand\thefigure{A\arabic{figure}}
\setcounter{figure}{0}
{\huge Supplementary Material: Appendices}
\section{Derivation of Variational System}\label{appendix:subsystems}
Let us consider a perturbed initial condition $\bar x_0=x_0+\bar\delta_0$, from which the solution $\bar x(t)$ arises.
Suppose that the solution $\bar x(t)$ satisfies $\bar x(t)=x(t)+\bar\delta(t)$.
Then,
\begin{equation}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \bar\delta
&=\frac{\mathrm{d}}{\mathrm{d}t} (\bar x-x)\\
&=f(\bar x,t)-f(x,t)\\
&=\pderiv{f}{x}(x,t)(\bar x-x)+o(|\bar x-x|)\\
&=\pderiv{f}{x}(x,t)\bar\delta+o(|\bar\delta|),\\
\bar\delta(0)&=\bar\delta_0.
\end{split}
\end{equation}
Dividing $\bar\delta$ by $\bar\delta_0$ and taking the limit as $|\bar\delta_0|\rightarrow+0$, we define the variational variable as $\delta(t)=\pderiv{x(t)}{x_0}$ and the \emph{variational system} as
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \delta(t)
=\pderiv{f}{x}(x(t),t) \delta(t)\mbox{ for } \delta(0)=I.
\label{eq:variational_system}
\end{equation}
\section{Complete Proofs}\label{appendix:proofs}
\paragraph{Proof of Remark~\ref{rmk:conserved_quantity}:}
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \left(\lambda^\top\delta\right)=\left(\frac{\mathrm{d}}{\mathrm{d}t}\lambda\right)^{\!\!\top} \delta+\lambda^\top \left(\frac{\mathrm{d}}{\mathrm{d}t}\delta\right)=\left(-\pderiv{f}{x}(x,t)^\top\lambda\right)^{\!\!\top} \delta+\lambda^\top \left(\pderiv{f}{x}(x,t)\delta\right)=0.
\end{equation}
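As an illustrative special case (not part of the original argument), consider a linear vector field $f(x,t)=Ax$ with a constant matrix $A$. The variational and adjoint systems are then solved in closed form by
\begin{equation*}
\delta(t)=e^{At},\qquad \lambda(t)=e^{A^\top(T-t)}\lambda(T),
\end{equation*}
and indeed $\lambda(t)^\top\delta(t)=\lambda(T)^\top e^{A(T-t)}e^{At}=\lambda(T)^\top e^{AT}$ is independent of $t$, in agreement with Remark~\ref{rmk:conserved_quantity}.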
\paragraph{Proof of Remark~\ref{rmk:adjoint_is_gradient}:}
Because $\delta(t)=\pderiv{x(t)}{x_0}$ and $\lambda^\top\delta$ is time-invariant,
\begin{equation}
\pderiv{\mathcal L(x(T))}{x_0}
=\pderiv{\mathcal L(x(T))}{x(T)}\pderiv{x(T)}{x_0}
=\lambda(T)^\top \delta(T)
=\lambda(t)^\top \delta(t)
=\pderiv{\mathcal L(x(T))}{x(t)}\pderiv{x(t)}{x_0}.
\label{eq:adjoint_integration}
\end{equation}
\paragraph{Proof of Remark~\ref{rmk:RK_for_variational}:}
Differentiating each term in the Runge--Kutta method in Eq.~\eqref{eq:runge_kutta} by the initial condition $x_0$ gives the Runge--Kutta method applied to the variational variable $\delta$, as follows.
\begin{equation}
\begin{split}
\delta_{n+1}&=\delta_n+h_n\sum_{i=1}^s b_i d_{n,i},\\
d_{n,i}&\vcentcolon=\pderiv{k_{n,i}}{x_0}=\pderiv{f(X_{n,i},t_n+c_ih_n)}{x_0}=\pderiv{f(X_{n,i},t_n+c_ih_n)}{X_{n,i}}\Delta_{n,i},\\
\Delta_{n,i}&\vcentcolon=\pderiv{X_{n,i}}{x_0}=\delta_{n}+h_n\sum_{j=1}^s a_{i,j}d_{n,j}.
\end{split}\label{eq:runge_kutta_variational}
\end{equation}
\paragraph{Proof of Theorem~\ref{thm:symplectic_RK}:}
Because the quantity $S$ is conserved in continuous time,
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} S(\delta,\lambda)=0.
\end{equation}
Because the quantity $S$ is bilinear,
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} S(\delta,\lambda)=\pderiv{S}{\delta}\frac{\mathrm{d}\delta}{\mathrm{d}t}+ \pderiv{S}{\lambda}\frac{\mathrm{d}\lambda}{\mathrm{d}t}=S\left(\frac{\mathrm{d}\delta}{\mathrm{d}t},\lambda\right)+S\left(\delta,\frac{\mathrm{d}\lambda}{\mathrm{d}t}\right),
\end{equation}
which implies
\begin{equation}
S(d_{n,i},\Lambda_{n,i})+S(\Delta_{n,i},l_{n,i})=0.
\end{equation}
The change in the bilinear quantity $S(\delta,\lambda)$ is
\begin{equation}
\begin{split}
S(\delta_{n+1},\lambda_{n+1})-S(\delta_n,\lambda_n)
&\textstyle=S(\delta_n+h_n\sum_i b_i d_{n,i},\lambda_n+h_n\sum_i B_i l_{n,i})-S(\delta_n,\lambda_n)\\
&\textstyle=\sum_i b_i h_n S(d_{n,i},\lambda_n)+\sum_i B_i h_n S(\delta_n, l_{n,i})\\
&\textstyle\ \ +\sum_i\sum_j b_i B_j h_n^2 S(d_{n,i},l_{n,j})\\
&\textstyle=\sum_i b_i h_n S(d_{n,i},\Lambda_{n,i}-h_n\sum_j A_{i,j}l_{n,j})\\
&\textstyle\ \ +\sum_i B_i h_n S(\Delta_{n,i}-h_n\sum_j a_{i,j}d_{n,j}, l_{n,i})\\
&\textstyle\ \ +\sum_i\sum_j b_i B_j h_n^2 S(d_{n,i},l_{n,j})\\
&\textstyle=\sum_i h_n (b_iS(d_{n,i},\Lambda_{n,i})+B_iS(\Delta_{n,i},l_{n,i}))\\
&\textstyle\ \ +\sum_i\sum_j (-b_i A_{i,j} - B_j a_{j,i} + b_i B_j)h_n^2 S(d_{n,i},l_{n,j}).
\end{split}
\end{equation}
If $B_i=b_i$ and $b_i A_{i,j}+B_j a_{j,i}-b_i B_j=0$, the change vanishes, i.e., the partitioned Runge--Kutta conserves a bilinear quantity $S$.
Note that $b_i$ must not vanish because $A_{i,j}=B_j(1-a_{j,i}/b_i)$.
Therefore, the bilinear quantity $\lambda_n^\top \delta_n$ is conserved as
\begin{equation}
\lambda_N^\top \delta_N
=\lambda_n^\top \delta_n \mbox{ for }n=0,\dots,N.
\end{equation}
Remark~\ref{rmk:RK_for_variational} indicates $\delta_n=\pderiv{x_n}{x_0}$.
When $\lambda_N$ is set to $(\pderiv{\mathcal L(x_N)}{x_N})^\top$,
\begin{equation}
\pderiv{\mathcal L(x_N)}{x_0}
=\pderiv{\mathcal L(x_N)}{x_N}\pderiv{x_N}{x_0}
=\lambda_N^\top \delta_N
=\lambda_n^\top \delta_n
=\pderiv{\mathcal L(x_N)}{x_n}\pderiv{x_n}{x_0},
\end{equation}
Therefore, $\lambda_n=(\pderiv{\mathcal L(x_N)}{x_n})^\top$.
\paragraph{Proof of Theorem \ref{thm:runge_kutta_adjoint_vanish}:}
By solving the combination of the integrators in Eqs.~\eqref{eq:runge_kutta} and~\eqref{eq:runge_kutta_adjoint_vanish}, a change in a bilinear quantity $S(\delta, \lambda)$ that the continuous-time dynamics conserves is
\begin{equation}
\begin{split}
S(\delta_{n+1},\lambda_{n+1})-S(\delta_n,\lambda_n)
&\textstyle=S(\delta_n+h_n\sum_i b_i d_{n,i},\lambda_n+h_n\sum_i \tilde b_i l_{n,i})-S(\delta_n,\lambda_n)\\
&\textstyle=\sum_i b_i h_n S(d_{n,i},\lambda_n)+\sum_i \tilde b_i h_n S(\delta_n, l_{n,i})\\
&\textstyle\ \ +\sum_i\sum_j b_i \tilde b_j h_n^2 S(d_{n,i},l_{n,j})\\
&\textstyle=\sum_{i\not\in I_0} b_i h_n S(d_{n,i},\Lambda_{n,i}-h_n\sum_j \tilde b_j(1-a_{j,i}/b_i) l_{n,j})\\
&\textstyle\ \ +\sum_i \tilde b_i h_n S(\Delta_{n,i}-h_n\sum_j a_{i,j}d_{n,j}, l_{n,i})\\
&\textstyle\ \ +\sum_{i\not\in I_0}\sum_j b_i \tilde b_j h_n^2 S(d_{n,i},l_{n,j})\\
&\textstyle=\sum_{i\not\in I_0}b_ih_n(S(d_{n,i},\Lambda_{n,j})+S(\Delta_{n,i},l_{n,j}))\\
&\textstyle\ \ +\sum_{i\not\in I_0}\sum_j(-b_i\tilde b_j(1-a_{j,i}/b_i)-\tilde b_ja_{j,i}+b_i\tilde b_j)h_n^2S(d_{n,i},l_{n,j})\\
&\textstyle\ \ +\sum_{i\in I_0}(\tilde b_ih_nS(\Delta_{n,i},l_{n,j})-\sum_j\tilde b_ja_{j,i}h_n^2S(d_{n,i},l_{n,j}))\\
&\textstyle=\sum_{i\not\in I_0}b_ih_n(S(d_{n,i},\Lambda_{n,j})+S(\Delta_{n,i},l_{n,j}))\\
&\textstyle\ \ +\sum_{i\in I_0}h_n^2(S(d_{n,i},\Lambda_{n,j})+S(\Delta_{n,i},l_{n,j}))\\
&=0.
\end{split}
\end{equation}
Hence, the bilinear quantity $S(\delta, \lambda)$ is conserved.
\paragraph{Proof of Remark~\ref{rmk:adjoint_explicit}:}
Eq.~\eqref{eq:runge_kutta_adjoint} can be rewritten as
\begin{equation}
\begin{split}
\lambda_n&=\lambda_{n+1}-h_n\sum_{i=1}^s b_i l_{n,i}\\
l_{n,i}&=-\pderiv{f}{x}(X_{n,i},t_n+c_ih_n)^\top\Lambda_{n,i},\\
\Lambda_{n,i}&=\lambda_{n+1}- h_n\sum_{j=1}^s b_j \frac{a_{j,i}}{b_i} l_{n,j}.
\end{split}\label{eq:runge_kutta_adjoint_backward}
\end{equation}
Eq.~\eqref{eq:runge_kutta_adjoint_vanish} can be rewritten as
\begin{equation}
\begin{split}
\lambda_n&=\lambda_{n+1}-h_n\sum_{i=1}^s \tilde b_i l_{n,i},\\
l_{n,i}&=-\pderiv{f}{x}(X_{n,i},t_n+c_ih_n)^\top\Lambda_{n,i},\\
\Lambda_{n,i}&=
\begin{cases}
\lambda_{n+1}-h_n\sum_{j=1}^s \tilde b_j\frac{a_{j,i}}{b_i} l_{n,j} & \mbox{if}\ \ i\not\in I_0 \\
- \sum_{j=1}^s \tilde b_j a_{j,i} l_{n,j} & \mbox{if}\ \ i\in I_0. \\
\end{cases}\\
\end{split}\label{eq:runge_kutta_adjoint_vanish_backward}
\end{equation}
Because $a_{i,j}=0$ for $j\ge i$, $a_{j,i}=0$ for $j\le i$.
The intermediate adjoint variable $\Lambda_{n,i}$ is calculable from $i=s$ to $i=1$ sequentially, i.e., the integration backward in time is explicit.
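To make the explicit backward recursion concrete, the following PyTorch sketch implements one step of Eq.~\eqref{eq:runge_kutta_adjoint_backward} for the case in which all $b_i$ are nonzero and the tableau is explicit ($a_{i,j}=0$ for $j\ge i$); it is an illustrative fragment with placeholder names, not the released implementation.
\begin{verbatim}
import torch

def adjoint_backward_step(f, lam_next, X_stages, t_n, h, A, b, c):
    # Compute lambda_n from lambda_{n+1} using the checkpointed stage
    # values X_{n,i}; stages are processed from i = s down to i = 1.
    s = len(b)
    l = [None] * s
    for i in reversed(range(s)):
        Lam = lam_next.clone()
        for j in range(i + 1, s):          # A[j][i] = 0 for j <= i
            Lam = Lam - h * b[j] * (A[j][i] / b[i]) * l[j]
        x = X_stages[i].detach().requires_grad_(True)
        with torch.enable_grad():
            out = f(x, t_n + c[i] * h)
        (vjp,) = torch.autograd.grad(out, x, grad_outputs=Lam)
        l[i] = -vjp                        # l_{n,i} = -(df/dx)^T Lambda_{n,i}
    return lam_next - h * sum(b[i] * l[i] for i in range(s))
\end{verbatim}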
\section{Gradients in General Cases}\label{appendix:general_gradient}
\subsection{Gradient w.r.t.~Parameters}
For the parameter adjustment, one can consider the parameters $\theta$ as a part of the augmented state $\tilde x=[x\ \ \theta]^\top$ of the system
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \tilde x=\tilde f(\tilde x,t),\ \tilde f(\tilde x,t)=\vvec{f(x,t,\theta)\\0},\ \ \tilde x(0)=\vvec{x_0\\\theta}.
\end{equation}
The variational and adjoint variables are augmented in the same way.
For the augmented adjoint variable $\tilde \lambda=[\lambda\ \ \lambda_\theta]^\top$, the augmented adjoint system is
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \tilde \lambda =-\pderiv{\tilde f}{\tilde x}(\tilde x,t)^\top\tilde\lambda=-\vvec{\pderiv{f}{x}^\top & 0 \\\pderiv{f}{\theta}^\top&0}\vvec{\lambda\\\lambda_\theta}=\vvec{-\pderiv{f}{x}^\top\lambda\\-\pderiv{f}{\theta}^\top\lambda}.\label{eq:augmented_adjoint}
\end{equation}
Hence, the adjoint variable $\lambda$ for the system state $x$ is unchanged from Eq.~\eqref{eq:subsystems}, and the one $\lambda_\theta$ for the parameters $\theta$ depends on the former as
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \lambda_\theta =-\pderiv{f}{\theta}(x,t,\theta)^\top \lambda,\label{eq:adjoint_param}
\end{equation}
and $\lambda_\theta(T)=(\pderiv{\mathcal L(x(T),\theta)}{\theta})^\top$.
\subsection{Gradient of Functional}
When the solution $x(t)$ is evaluated by a functional $\mathcal C$ as
\begin{equation}
\mathcal C(x(t))=\int_0^T \mathcal L(x(t),t){\mathrm{d}t},
\end{equation}
the adjoint variable $\lambda_C$ that denotes the gradient $\lambda_C(t)=(\pderiv{\mathcal C(x(T))}{x(t)})^\top$ of the functional $\mathcal C$ is given by
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \lambda_C=-\pderiv{f}{x}(x,t)^\top\lambda_C+\pderiv{\mathcal L(x(t),t)}{x(t)},\ \ \lambda_C(T)=\mathbf{0}.
\end{equation}
\section{Implementation Details}
\subsection{Robustness to Rounding Error}\label{appendix:rounding_off}
By definition, the naive backpropagation algorithm, baseline scheme, ACA, and the proposed symplectic adjoint method provide the exact gradient up to rounding error.
However, the naive backpropagation algorithm and baseline scheme obtained slightly worse results on the GAS, POWER, and HEPMASS datasets.
Due to the repeated use of the neural network, each method accumulates the gradient of the parameters $\theta$ for each use.
Let $\theta_{n,i}$ denote the parameters used in the $i$-th stage of the $n$-th step even though their values are unchanged.
The backpropagation algorithm obtains the gradient $\pderiv{\mathcal L}{\theta}$ with respect to the parameters $\theta$ by accumulating the gradient over all stages and steps one-by-one as
\begin{equation}
\begin{split}
\pderiv{\mathcal L}{\theta}
&=\sum_{\substack{n=0,\dots,N-1,\\i=1,\dots,s}}\pderiv{\mathcal L}{\theta_{n,i}}.
\end{split}
\end{equation}
When the step size $h_n$ at the $n$-th step is sufficiently small, the gradient $\pderiv{\mathcal L}{\theta_{n,i}}$ at the $i$-th stage may be insignificant compared with the accumulated gradient and be rounded off during the accumulation.
Conversely, ACA accumulates the gradient within a step and then over time steps; this process can be expressed informally as
\begin{equation}
\begin{split}
\pderiv{\mathcal L}{\theta}
&=\sum_{n=0}^{N-1}\left(\sum_{i=1}^{s}\pderiv{\mathcal L}{\theta_{n,i}}\right).
\end{split}
\end{equation}
Further, according to Eqs.~\eqref{eq:runge_kutta_adjoint} and~\eqref{eq:adjoint_param}, the (symplectic) adjoint method accumulates the adjoint variable $\lambda$ (i.e., the transpose of the gradient) within a step and then over time steps as
\begin{equation}
\lambda_{\theta,n}=\lambda_{\theta,n+1}-h_n\left(\sum_{i=1}^s B_i \left(-\pderiv{f}{\theta_{n,i}}(X_{n,i},t+C_ih_n,\theta_{n,i})^\top \Lambda_{n,i}\right)\right).
\end{equation}
In these cases, even when the step size $h_n$ at the $n$-th step is small, the gradient summed within a step (over $s$ stages) may still be significant and robust to rounding errors.
This is the reason the adjoint method, ACA, and the symplectic adjoint method performed better than the naive backpropagation algorithm and baseline scheme for some datasets.
Note that this approach requires additional memory consumption to store the gradient summed within a step, and it is applicable to the backpropagation algorithm with a slight modification.
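A toy single-precision illustration of this rounding effect (the numbers are made up solely to show the behavior and are unrelated to actual gradient values):
\begin{verbatim}
import numpy as np

acc_single = np.float32(1.0)    # gradient component already accumulated
acc_grouped = np.float32(1.0)
g, s, N = np.float32(4e-8), 6, 1000

for _ in range(N * s):
    acc_single += g                       # 4e-8 < eps/2: rounded off
for _ in range(N):
    acc_grouped += np.float32(s) * g      # 2.4e-7 per step: survives

print(acc_single, acc_grouped)            # ~1.0 vs ~1.00024
\end{verbatim}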
\subsection{Memory Consumption Optimization}\label{appendix:memory_consumption_optimization}
Following Eqs.~\eqref{eq:runge_kutta_adjoint_backward} and~\eqref{eq:runge_kutta_adjoint_vanish_backward}, a naive implementation of the (symplectic) adjoint method retains the adjoint variables $\Lambda_{n,i}$ at all stages $i=1,\dots,s$ to obtain their time-derivatives $l_{n,i}$, and then adds them up to obtain the adjoint variable $\lambda_n$ at the $n$-th time step.
However, as Eq.~\eqref{eq:adjoint_param} shows, the adjoint variable $\lambda_\theta$ for the parameters $\theta$ is not used for obtaining its time-derivative $\frac{\mathrm{d}}{\mathrm{d}t} \lambda_\theta$.
One can add up the adjoint variable ${\Lambda_\theta}_{n,i}$ for the parameters $\theta$ at stage $i$ one-by-one without retaining it, thereby reducing the memory consumption proportionally to the number of parameters \emph{times} the number of stages.
A similar optimization is applicable to the adjoint method.
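A hedged sketch of this accumulation (placeholder names; cf.\ Eqs.~\eqref{eq:runge_kutta_adjoint} and~\eqref{eq:adjoint_param}), where the parameter adjoint is updated in place at every stage instead of retaining the per-stage terms:
\begin{verbatim}
import torch

def accumulate_param_adjoint(f, params, lam_theta, X, Lam, t_n, h, B, C):
    # lam_theta[k] += h * sum_i B[i] * (df/dtheta_k)^T Lam[i], stage by stage.
    for i in range(len(B)):
        with torch.enable_grad():
            out = f(X[i], t_n + C[i] * h)   # rebuild the graph for stage i
        grads = torch.autograd.grad(out, params, grad_outputs=Lam[i],
                                    allow_unused=True)
        for lt, g in zip(lam_theta, grads):
            if g is not None:
                lt.add_(h * B[i] * g)       # in-place: no per-stage storage
    return lam_theta
\end{verbatim}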
\begin{table}[t]
\tabcolsep=1.5mm
\centering\small
\caption{Results on learning physical systems without the deterministic option.}\label{tab:pde_results_nondet}
\begin{tabular}{lcrrcrr}
\toprule
& \mcc{KdV Equation} & \mcc{Cahn--Hilliard System} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}
& MSE \scriptsize{($\times 10^{-3}$)} & \mca{mem.} & \mca{time} & MSE \scriptsize{($\times 10^{-6}$)} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 1.61\std{3.23} & \textbf{181.4}\std{\zz0.0} & 240\std{16} & 5.58\std{2.12} & \textbf{181.4}\std{\zz0.0} & 805\std{25} \\
backpropagation~\cite{Chen2018e} & 1.61\std{3.24} & 733.9\std{15.6} & 94\std{\zz4} & 5.45\std{1.55} & 3053.5\std{22.9} & 382\std{11} \\
ACA~\cite{Zhuang2020a} & 1.61\std{3.24} & 734.5\std{20.3} & 120\std{\zz4} & 6.00\std{3.27} & 780.4\std{22.9} & 422\std{16} \\
\midrule
proposed & 1.61\std{3.58} & 182.1\std{\zz0.0} & 141\std{\zz7} & 5.48\std{1.90} & 182.1\std{\zz0.0} & 480\std{19} \\
\bottomrule
\multicolumn{7}{l}{Mean-squared errors (MSEs) in long-term predictions, peak memory consumption [$\mathrm{MiB}$],} \\
\multicolumn{7}{l}{and computation time per iteration [$\mathrm{ms/itr}$].}
\end{tabular}
\end{table}
\subsection{Parallelization}\label{appendix:parallelization}
The memory consumption and computation time depend highly on the implementations and devices.
Being implemented on a GPU, the convolution operation can be easily parallelized in space and exhibits a non-deterministic behavior.
To avoid the non-deterministic behavior, PyTorch provides an option \textsc{torch.backends.cudnn.deterministic}, which was used to obtain the results in Section~\ref{sec:pde}, following the original implementation~\cite{Matsubara2020}.
Without this option, the memory consumption increased by a certain amount, and the computation time decreased owing to the aggressive parallelization, as shown by the results in Table~\ref{tab:pde_results_nondet}.
Even then, the proposed symplectic adjoint method occupied the smallest memory among the methods for the exact gradient.
The increase in the memory consumption is proportional to the width of a neural network; therefore, it is negligible when the neural network is sufficiently deep.
Note that the results in Section~\ref{sec:flow} were obtained without the deterministic option.
\begin{landscape}
\begin{table}[t]
\tabcolsep=0.8mm
\centering\small
\caption{Results obtained for continuous normalizing flows.}\label{tab:tabular_results_full}
\begin{tabular}{lrrrrrrrrr}
\toprule
& \mcc{MINIBOONE ($M=1$)} & \mcc{GAS ($M=5$)} & \mcc{POWER ($M=5$)} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}
& \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 10.59\std{0.17} & 170\std{\zz\zz0} & 0.74\std{0.04} & -10.53\std{0.25} & 24\std{\zz\zz0} & 4.82\std{0.29} & -0.31\std{0.01} & \textbf{8.1}\std{\zz\zz0.0} & 6.33\std{0.18} \\
backpropagation~\cite{Chen2018e} & 10.54\std{0.18} & 4,436\std{115} & 0.91\std{0.05} & -9.53\std{0.42} & 4,479\std{250} & 12.00\std{0.93} & -0.24\std{0.05} & 1710.9\std{193.1} & 10.64\std{2.73} \\
baseline scheme & 10.54\std{0.18} & 4,457\std{115} & 1.10\std{0.04} & -9.53\std{0.42} & 1,858\std{228} & 5.48\std{0.25} & -0.24\std{0.05} & 515.2\std{122.0} & 4.37\std{0.70} \\
ACA~\cite{Zhuang2020a} & 10.57\std{0.30} & 306\std{\zz\zz0} & 0.77\std{0.02} & -10.65\std{0.45} & 73\std{\zz\zz0} & 3.98\std{0.14} & -0.31\std{0.02} & 29.5\std{\zz\zz0.5} & 5.08\std{0.88} \\
\midrule
proposed & 10.49\std{0.11} & \textbf{95}\std{\zz\zz0} & 0.84\std{0.03} & -10.89\std{0.11} & \textbf{20}\std{\zz\zz0} & 4.39\std{0.23} & -0.31\std{0.02} & 9.2\std{\zz\zz0.0} & 5.73\std{0.43} \\
\bottomrule
\\
\toprule
& \mcc{HEPMASS ($M=10$)} & \mcc{BSDS300 ($M=2$)} & \mcc{MNIST ($M=6$)} \\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}
& \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} & \mca{NLL} & \mca{mem.} & \mca{time} \\
\midrule
adjoint method~\cite{Chen2018e} & 16.49\std{0.25} & 40\std{\zz\zz0} & 4.19\std{0.15} & -152.04\std{0.09} & 577\std{0} & 11.70\std{0.44} & 0.918\std{0.011} & 1,086\std{4} & 10.12\std{0.88} \\
backpropagation~\cite{Chen2018e} & 17.03\std{0.22} & 5,254\std{137} & 11.82\std{1.33} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} \\
baseline scheme & 17.03\std{0.22} & 1,102\std{174} & 4.40\std{0.40} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} & \mca{---} \\
ACA~\cite{Zhuang2020a} & 16.41\std{0.39} & 88\std{\zz\zz0} & 3.67\std{0.12} & -151.27\std{0.47} & 757\std{1} & 6.97\std{0.25} & 0.919\std{0.003} & 4,332\std{1} & 7.94\std{0.63} \\
\midrule
proposed & 16.48\std{0.20} & \textbf{35}\std{\zz\zz0} & 4.15\std{0.13} & -151.17\std{0.15} & \textbf{283}\std{2} & 8.07\std{0.72} & 0.917\std{0.002} & \textbf{1,079}\std{1} & 9.42\std{0.32} \\
\bottomrule
\end{tabular}\\
Negative log-likelihoods (NLL), peak memory consumption [$\mathrm{MiB}$], and computation time per iteration [$\mathrm{s/itr}$]. The medians $\pm$ standard deviations of three runs.
\end{table}
\end{landscape}
\end{document} |
\begin{document}
\title{Global weak solutions for a model of two-phase flow with a single interface}
\abstract{We consider a simple nonlinear hyperbolic system modeling the flow of an inviscid fluid. The model includes as state variable the mass density
fraction of the vapor in the fluid and then phase transitions can be taken into consideration; moreover, phase interfaces are contact
discontinuities for the system. We focus on the special case of initial data consisting of two different phases separated by an interface.
We find explicit bounds on the (possibly large) initial data in order that weak entropic solutions exist for all times.
The proof exploits a carefully tailored version of the front tracking scheme.}
\section{Introduction}\label{sec:intro}
We consider the following nonlinear model for the one-dimensional flow of an inviscid fluid, where different phases can coexist:
\begin{equation}\label{eq:system}
\left\{
\begin{array}{ll}
v_t - u_x &= 0\,,\\
u_t + p(v,\lambda)_x &= 0\,,
\\
\lambda_t &= 0\,.
\end{array}
\right.
\end{equation}
Here $t>0$ and $x\in{\mathbb{R}}$; moreover, $v>0$ is the specific
volume, $u$ the velocity and $\lambda$ the mass-density fraction of
vapor in the fluid. Then, we have $\lambda\in[0,1]$ and $\lambda=0$
characterizes the liquid phase while $\lambda=1$ the vapor phase. The pressure $p$ is given by
\begin{equation}\label{eq:pressure}
p(v,\lambda)= \frac{a^2(\lambda)}v\,,
\end{equation}
where $a$ is a $\C{1}$ function defined on $[0,1]$ and satisfying $a(\lambda)>0$ for every $\lambda\in[0,1]$.
We denote $U=(v,u,\lambda)\in\Omega\doteq(0,+\infty) \times {\mathbb{R}} \times [0,1]$.
System \eqref{eq:system} is the homogeneous case of a more general model that was first introduced in \cite{Fan}. If $\lambda$ is constant,
then \eqref{eq:system} reduces to the isothermal $p$-system, where the global existence of weak solutions holds for initial data with arbitrary
total variation \cite{Nishida68,AmadoriGuerra01}. The global existence of weak solutions to the initial value problem for \eqref{eq:system} was
proved in \cite{amadori-corli-siam} under a suitable condition on the total variation of the initial data and the assumption $a'>0$;
a different proof of an analogous result has been recently provided in \cite{Asakura-Corli}.
The condition on the initial data was also stated in a slightly different way in \cite{amadori-corli-source} and requires, roughly speaking, that the
total variation of both pressure and velocity is suitably bounded by the total variation of $\lambda$; it is thus reminiscent of the famous condition
introduced in \cite{NishidaSmoller} for the system of isentropic gas dynamics. A Glimm scheme to solve \eqref{eq:system} was proposed in
\cite{Peng}; we refer to \cite{amadori-corli-glimm} for a short proof of the Glimm estimates, which improve those given in \cite{Peng}.
We refer to \cite{GST2007} for the extension of Nishida's result to the initial-value problem in Special Relativity,
and to \cite{LiuIB, Liu} for the problem of large solutions to nonisentropic gas dynamics.
A model analogous to \eqref{eq:system} is also studied in \cite{HoldenRisebroSande, HoldenRisebroSande2}, where the pressure is
$v^{-\gamma}$ and the state variable $\lambda$ is replaced by the adiabatic exponent $\gamma>1$; also in this case the global existence of
solutions is proved under a condition that has the same flavor as the one discussed above. Finally, we refer to \cite{Dafermos} for a comprehensive
discussion of the problem of the global existence of solutions for systems of conservation laws.
In this paper we focus on a
particular class of initial data for \eqref{eq:system}: the state variable $\lambda$ is constant both for $x<0$ and
for $x>0$. More precisely, for $x\in{\mathbb{R}}$ we consider initial data
\begin{equation}
U(x,0)= U_o(x) = \left(v_o(x), u_o(x), \lambda_o(x) \right),\label{init-data}
\end{equation}
where
\begin{equation}\label{eq:lambda_Riemann}
\lambda_o(x) = \left\{
\begin{array}{ll}
\lambda_\ell & \hbox{ if }x<0\,,\\
\lambda_r & \hbox{ if }x>0\,,\\
\end{array}
\right.
\end{equation}
for two constant values $\lambda_\ell\ne\lambda_r\in[0,1]$. Phase interfaces are stationary in model \eqref{eq:system}; then, the assumption
\eqref{eq:lambda_Riemann} reduces the study of the initial value problem for \eqref{eq:system} to that of two initial value problems for two
isothermal $p$-systems, which are coupled through the interface at $x=0$. In other words, the flow remains in the two phases characterized by
the values $\lambda_\ell$ and $\lambda_r$ as long as a solution exists.
The case of initial data giving rise to {\em two} phase interfaces is addressed in a forthcoming paper \cite{ABCD2}.
The problem we are dealing with can be understood in a different way as follows. Phase interfaces are contact discontinuities for
system \eqref{eq:system}; then, in a sense, we fall into the general framework of the perturbation of a Riemann solution.
For this subject we refer to \cite{BressanColombo-Unique, CorliSableTougeronS, CorliSableTougeronC, Schochet-Glimm, Chern},
where however the perturbation is small in the $\mathbf{BV}$ norm. In our case the perturbation leaves unchanged the initial datum for $\lambda$
but it is not necessarily small in the other state variables.
The problem of a {\em small} perturbation of a Riemann solution and the related existence of globally defined solutions was thoroughly studied
in \cite{LewickaBV}; in \cite{AmCo12-Chambery} the conditions given in \cite{LewickaBV} are made explicit for system \eqref{eq:system}.
The main result of this paper is stated in Theorem \ref{thm:main} and concerns the global existence of weak solutions to the initial value problem \eqref{eq:system}, \eqref{init-data}, \eqref{eq:lambda_Riemann}, provided that \eqref{eq:pressure} holds and the initial data satisfy suitable conditions. The focus is precisely on weakening such conditions as much as possible, thus allowing for large initial data: the result in \cite{amadori-corli-siam} mentioned above clearly applies to the present situation, but it is greatly improved here.
The proof of Theorem \ref{thm:main} follows the same steps as Theorem $2.2$ in \cite{amadori-corli-siam}. However, several novelties have been introduced here:
\begin{itemize}
\item[-] a Glimm functional that better accounts for nonlinear interactions with the phase wave;
\item[-] refined interaction estimates on the amplitude of the reflected waves (Lemma~\ref{lem:shock-riflesso});
\item[-] an original treatment of \emph{non-physical} waves in the front tracking algorithm;
\item[-] a simpler proof of the decay of the reflected waves at a geometric rate, as the number of reflections increases
(see Proposition~\ref{prop:alpha} and Remark~\ref{rem:65}).
\end{itemize}
\noindent In particular, as a consequence of this new approach, we require no conditions on the maximal amplitude of the
phase wave, differently from \cite[(2.8)]{amadori-corli-siam} and the equivalent formulation in \cite[(3.6)]{amadori-corli-source}.
In spite of the fact that initial data \eqref{eq:lambda_Riemann} seem to reduce system \eqref{eq:system} to two systems of two conservation
laws, we cannot avoid the introduction of non-physical waves \cite{Bressanbook} in the scheme, as a formal example
in \cite{AmCo08-Proceed-Maryland} shows. Nevertheless, we can let all these non-physical waves propagate along the same vertical front
carrying the contact discontinuity, in order to give an immediate bound on the number of fronts: this represents a remarkable algorithmic
advantage and the main feature of the front tracking used here. On the other hand, we recall that if $\lambda$ is constant then non-physical
waves need not be introduced, see \cite{AmadoriGuerra01, BaitiDalSanto}.
The plan of the paper is the following.
In Section \ref{main} we state our main result,
while in Section \ref{prelim} we first provide some information on the Riemann problem and then
show how to treat non-physical waves by introducing a composite wave together with the phase wave.
Consequently, we introduce two solvers to be used in the front-tracking scheme, which is presented in Section \ref{sec:app_sol}.
Section \ref{sec:interactions} deals with interactions while in the last Section \ref{sec:Cauchy} we prove the convergence and consistency of the algorithm and make a comparison with the result in \cite{amadori-corli-siam}. In a final short appendix we show how the damping coefficient $c$ introduced in \eqref{eq:chi_def}, which plays a key role in the paper, is also fundamental in the stability analysis of Riemann problems in the sense of \cite{Schochet-Glimm}.
\section{Main Result}\label{main}
\setcounter{equation}{0}
In this section we state our existence theorem. First, we define $a_r=a(\lambda_r)$, $a_\ell=a(\lambda_\ell)$ and
\begin{equation}\label{def:A_0}
\delta_2 = 2\, \frac{a_r-a_\ell}{a_r+a_\ell}\,.
\end{equation}
Notice that $\delta_2$ ranges over $(-2,2)$ as $a_r$, $a_\ell$ range over ${\mathbb{R}}_+$.
The quantity $\delta_2$ measures the strength of the contact discontinuity located at $x=0$ as in
\cite{AmCo06-Proceed-Lyon, Peng} and it does not change by interactions with waves of the other families, see Lemma \ref{lem:interazioni}.
We denote $p_o(x) \doteq p\left(v_o(x), \lambda_o(x) \right)$.
\begin{theorem}\label{thm:main}
Assume \eqref{eq:pressure} and consider initial data \eqref{init-data}, \eqref{eq:lambda_Riemann} with $v_o(x)\ge \underline{v}>0$,
for some constant $\underline{v}$. Let $\delta_2$ be as in \eqref{def:A_0}.
There exists a strictly decreasing function $K$ defined for $r\in(0,2)$ and satisfying
\begin{equation}\label{rage}
\lim_{r\to 0+} K(r) = +\infty\,,\qquad \lim_{r\to 2-} K(r) = \frac 29\log\left(2+\sqrt 3\right),
\end{equation}
such that,
if $\delta_2\not = 0$ and the initial data satisfy
\begin{equation}\label{hyp2}
\mathrm{TV}\left(\log(p_o)\right) \,+\, \frac{1}{\min\{a_r,a_\ell\}} \mathrm{TV}(u_o) < K(|\delta_2|)\,,
\end{equation}
then the Cauchy problem \eqref{eq:system}, \eqref{init-data} has a
weak entropic solution $(v,u,\lambda)$ defined for
$t\in\left[0,+\infty\right)$. If $\delta_2=0$ the same conclusion holds with $K(|\delta_2|)$ replaced by $+\infty$ in \eqref{hyp2}.
Moreover, the solution is valued in a compact set of $\Omega$ and there is a constant $C=C(\delta_2)$ such that
for every $t\in\left[0,+\infty\right)$ we have
\begin{equation}\label{eq:tv-est}
\mathrm{TV} \left(v(t,\cdot), u(t,\cdot)\right) \le C\,.
\end{equation}
\end{theorem}
We refer to \eqref{eq:K-explicit} for the definition of the function $K$; therefore, condition \eqref{hyp2} is {\em explicit}.
We recall that related results on the global existence of solutions with large data \cite{NishidaSmoller, LiuIB, Liu, HoldenRisebroSande, HoldenRisebroSande2} do not make the smallness threshold on the initial data explicit.
Moreover, we observe that condition \eqref{hyp2} is trivially satisfied if
\begin{equation}\label{eq:smalldata}
\mathrm{TV}\left(\log(p_o)\right) \,+\, \frac{1}{\min\{a_r,a_\ell\}} \mathrm{TV}(u_o)\le \frac 29\log\left(2+\sqrt 3\right),
\end{equation}
because of \eqref{rage}. Then, problem \eqref{eq:system}, \eqref{init-data} has a global solution if \eqref{eq:smalldata} is satisfied and $v_o(x)\ge \underline{v}>0$ holds. This is a striking difference with respect to the results in \cite{amadori-corli-siam, amadori-corli-source}, where the corresponding bound in the right-hand side vanishes at a critical threshold.
Moreover, Theorem \ref{thm:main} improves the main result in \cite{amadori-corli-siam}, when restricted to the case of a single contact discontinuity; we refer to Subsection~\ref{subsec:proof} for a comparison.
At last, we point out that if $\delta_2=0$ we recover the result of \cite{Nishida68}.
The question of whether solutions to \eqref{eq:system}, \eqref{init-data} exist globally for any $\mathbf{BV}$ initial data $v_o$, $u_o$,
as opposed to the possibility of blow-up in finite time for certain $\mathbf{BV}$ data, is left open.
\section{The Riemann problem and the composite wave}\label{prelim}
\setcounter{equation}{0}
In this section we first briefly recall some basic facts about system \eqref{eq:system}, its wave curves and the solution to the Riemann problem; we refer to \cite{AmCo06-Proceed-Lyon,amadori-corli-siam} for more details. Next, we introduce a composite wave which sums up the effects of the contact discontinuity and the non-physical waves. We then show two Riemann solvers that make use of the composite wave.
Under assumption (\ref{eq:pressure}) system (\ref{eq:system}) is strictly hyperbolic in $\Omega$ with eigenvalues $e_1 = -\sqrt{-p_v(v,\lambda)}$, $e_2 = 0$, $e_3 = \sqrt{-p_v(v,\lambda)}$; the eigenvalues $e_1$ and $e_3$ are genuinely nonlinear while $e_2$ is linearly degenerate.
For $i=1,3$, the right shock-rarefaction curves through the point $U_o = (v_o,u_o,\lambda_o)$ for (\ref{eq:system}) are
\begin{equation}
v \mapsto \left(v,u_o + 2a(\lambda_o) h(\varepsilon_i),\lambda_o\right)\,,\qquad v>0\,, \ i=1,3\,,
\label{eq:lax13}
\end{equation}
where the strength $\varepsilon_i$ of an $i$-wave is defined as
\begin{equation}\label{eq:strengths}
\varepsilon_1=\frac{1}{2}\log\left(\frac{v}{v_o}\right),
\quad \varepsilon_3=\frac{1}{2}\log\left(\frac{v_o}{v}\right)
\end{equation}
and the function $h$ is defined by
\begin{equation}\label{h}
h(\varepsilon)= \begin{cases}
\varepsilon& \mbox{ if } \varepsilon \ge 0\,,\\
\sinh \varepsilon& \mbox{ if } \varepsilon < 0\,.
\end{cases}
\end{equation}
Then, rarefaction waves have positive strengths and shock waves have negative strengths.
The wave curve through $U_o$ for $i=2$ is defined by
\begin{equation*}
\lambda\mapsto\left(v_o\displaystyle\frac{a^2(\lambda)}{a^2(\lambda_o)},
u_o,\lambda\right)\,,\qquad \lambda\in[0,1]\,.
\end{equation*}
Then, the pressure is constant along a $2$-curve; the strength of a $2$-wave is defined by
\begin{equation*}
\varepsilon_2 = 2\, \frac{a(\lambda)-a(\lambda_o)}{a(\lambda)+a(\lambda_o)}\,.
\end{equation*}
Now, we consider the Riemann problem for (\ref{eq:system}) with initial condition
\begin{equation}\label{eq:incond}
(v,u,\lambda)(0,x)=\left\{
\begin{array}{ll}
(v_\ell,u_\ell,\lambda_\ell)=U_\ell & \hbox{ if }x<0\,,
\\
(v_r,u_r,\lambda_r)=U_r & \hbox{ if }x>0\,,
\end{array}
\right.
\end{equation}
for $U_\ell$ and $U_r$ in $\Omega$. We write
$p_r= a^2_r/v_r$,
$p_\ell= a^2_\ell/v_\ell$.
\begin{proposition}[\cite{AmCo06-Proceed-Lyon}]\label{prop:RP}
The Riemann problem \eqref{eq:system}, \eqref{eq:incond} has a
unique $\Omega$-valued solution in the class of solutions
consisting of simple Lax waves, for any pair of states $U_\ell$, $U_r$ in $\Omega$.
Moreover, if $\varepsilon_i$ is the strength of the $i$-wave, $i=1,2,3$, then
\begin{equation}\label{eq:stimaRiemann}
\varepsilon_3-\varepsilon_1 = \frac{1}{2}\log\left(\frac{p_r}{p_\ell}\right)\,,
\qquad 2\left(a_\ell h(\varepsilon_1) +a_r h(\varepsilon_3)\right) =
u_r-u_\ell\,,
\end{equation}
\[
\varepsilon_2 = 2\, \frac{a_{r}-a_{\ell}}{a_{r}+a_\ell}\,.
\]
\end{proposition}
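For illustration only, the strengths in Proposition \ref{prop:RP} are easily computed numerically: the following minimal Python sketch (ours, not part of the original construction; the state values are arbitrary) solves the two relations \eqref{eq:stimaRiemann} for $\varepsilon_1$ and $\varepsilon_3$ by bisection on a single scalar unknown, exploiting the strict monotonicity of $h$.
\begin{verbatim}
import math

def h(eps):
    # rarefactions have positive strength, shocks negative strength
    return eps if eps >= 0.0 else math.sinh(eps)

def solve_riemann(v_l, u_l, a_l, v_r, u_r, a_r):
    p_l, p_r = a_l**2 / v_l, a_r**2 / v_r
    delta = 0.5 * math.log(p_r / p_l)          # = eps3 - eps1
    g = lambda s: 2.0 * (a_l * h(s) + a_r * h(s + delta)) - (u_r - u_l)
    lo, hi = -50.0, 50.0                       # g is strictly increasing
    for _ in range(200):                       # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    eps1 = 0.5 * (lo + hi)
    eps3 = eps1 + delta
    eps2 = 2.0 * (a_r - a_l) / (a_r + a_l)     # strength of the 2-wave
    return eps1, eps2, eps3

print(solve_riemann(v_l=1.0, u_l=0.0, a_l=1.0, v_r=0.8, u_r=0.1, a_r=1.5))
\end{verbatim}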
The proof of Theorem \ref{thm:main} relies on a wave-front tracking algorithm that introduces non-physical waves \cite{Bressanbook}, which, however, are only needed to solve some Riemann problems involving interactions with the $2$-wave, see Section \ref{sec:app_sol}. Following \cite{amadori-corli-siam}, two states $U_\ell$ and $U_r$ as in \eqref{eq:incond} can be connected by a non-physical wave if $v_\ell=v_r$ and $\lambda_\ell=\lambda_r$; the strength of a non-physical wave is defined as
\begin{equation}\label{eq:strengthnp}
\delta_0=u_r-u_\ell\,.
\end{equation}
Then, a non-physical wave changes neither the side values of $v$ nor those of $\lambda$, while a $2$-wave does not change the side values of $u$. This suggests to define a new wave by composing the $2$-wave with a non-physical wave, with the condition that we assign {\em zero speed} to non-physical waves and we locate them at $x=0$. The order of composition does not matter, because a $2$-wave and a non-physical wave act on different state variables. This procedure differs from the one used in \cite{amadori-corli-siam}. Then, we define the {\em composite $(2,0)$-wave curve} through a point $U_o=(v_o,u_o,\lambda_\ell)$ by
\begin{equation}\label{eq:composite-wave}
u\mapsto \left( (a_r^2/a_\ell^2) v_o, u,\lambda_r \right)
\end{equation}
and its strength by
\begin{equation*}
\delta_{2,0} = u - u_o\,.
\end{equation*}
The above definition of strength is motivated by the fact that the quantity $\delta_2$ remains constant at any interaction with $1$- or $3$-waves \cite{amadori-corli-siam}. Clearly, a $(2,0)$-wave reduces to the $2$-wave as long as non-physical waves are missing. At last, we notice that the pressure does not change across a $(2,0)$-wave.
In this way, we are left to deal with waves of the families $1$ and $3$ and with a single composite $(2,0)$-wave, which is no longer entropic. A Riemann solver analogous to that provided in Proposition \ref{prop:RP} is needed; however, since we have a single contact discontinuity $\delta_2$ and we are going to use the Riemann solver only to solve interactions, we state the following result in that form.
\begin{proposition}[Pseudo Accurate Solver]\label{prop:PS-AS}
Consider the interaction at time $t$ of a $\delta_{2,0}$-wave with an $i$-wave of strength $\delta_i$, $i=1,3$. Then the Riemann problem at time $t$ has a unique $\Omega$-valued solution, which is formed by waves $\varepsilon_1$, $\delta_{2,0}$, $\varepsilon_3$, where $\varepsilon_1$, $\varepsilon_3$ belong to the first and the third family, respectively. Moreover, we have
\begin{equation}\label{eq:stimaRiemannAS}
\varepsilon_3-\varepsilon_1 = \frac{1}{2}\log\left(\frac{p_r}{p_\ell}\right),
\qquad 2\left(a_\ell h(\varepsilon_1) +a_r h(\varepsilon_3)\right) =
u_r-u_\ell - \delta_{2,0}\,.
\end{equation}
\end{proposition}
\begin{figure}
\caption{Interaction with the composite $(2,0)$-wave solved by the Pseudo Accurate solver: ({\em a}) the original interaction; ({\em b}) the auxiliary problem obtained by shifting the left state.}
\label{fig:inter20ASL}
\end{figure}
\begin{proof} We only consider the case $i=1$ and refer to Figure \ref{fig:inter20ASL}; the other case is analogous.
Consider the auxiliary problem in Figure \ref{fig:inter20ASL}({\em b}), where $V_\ell' = U_\ell + (0,\delta_{2,0},0)$. We simply shifted the left state in order to be able to solve the interaction as if it was with an actual $2$-wave. Indeed, by Proposition \ref{prop:RP}
we uniquely find $\varepsilon_1$, $\varepsilon_3$ and states $V_p'$, $V_q'$ such that \eqref{eq:stimaRiemannAS} holds. Then, the interaction in Figure \ref{fig:inter20ASL}({\em a}) is solved by the same waves $\varepsilon_1$, $\varepsilon_3$ and by states $U_p' = V_p'-(0,\delta_{2,0},0)$, $U_q' = V_q'$. Finally, \eqref{eq:stimaRiemannAS} holds by construction.
Notice that we get the same result by shifting the other two states at the right. Indeed, consider the auxiliary problem in Figure \ref{fig:inter20ASR}({\em b}), where $V_r''= U_r - (0,\delta_{2,0},0)$ and $V_m'' = U_m - (0,\delta_{2,0},0)$.
\begin{figure}
\caption{The auxiliary problem obtained by shifting the two right states, which yields the same outgoing waves as in Figure \ref{fig:inter20ASL}.}
\label{fig:inter20ASR}
\end{figure}
By Proposition \ref{prop:RP} we uniquely find $\varepsilon_1$, $\varepsilon_3$ (the same as before, since $u_{r}''-u_{\ell}=u_{r}-u_{\ell}'=u_{r}-u_{\ell}-\delta_{2,0}$) and states $V_p''$, $V_q''$. Then, the interaction in Figure \ref{fig:inter20ASR}({\em a}) is solved by the same waves $\varepsilon_1$, $\varepsilon_3$ and by states $U_p'' = V_p''$ and $U_q'' = V_q''+(0,\delta_{2,0},0)$. It is then straightforward to check that $U_p' = U_p''$ and $U_q' = U_q''$.
\end{proof}
Another solver is used below. We introduce it in the same framework of Proposition \ref{prop:RP}.
\begin{proposition}[Pseudo Simplified Solver]\label{prop:PS-SS}
Consider the interaction at time $t$ of a $\delta_{2,0}$-wave with an $i$-wave of strength $\delta_i$, $i=1,3$.
Then the Riemann problem at time $t$ can be solved by an $i$-wave of the same strength $\delta_i$ and a unique wave $\varepsilon_{2,0}$, where
\begin{equation}\label{eq:inter20}
\varepsilon_{2,0} = \left\{
\begin{array}{ll}
\delta_{2,0} + 2(a_r-a_\ell)h(\delta_1)& \hbox{ if $i=1$\,,}
\\
\delta_{2,0} - 2(a_r-a_\ell)h(\delta_3)& \hbox{ if $i=3$\,.}
\end{array}
\right.
\end{equation}
\end{proposition}
\begin{figure}
\caption{Interaction with the composite $(2,0)$-wave solved by the Pseudo Simplified solver.}
\label{fig:inter20}
\end{figure}
\begin{proof} We refer to Figure \ref{fig:inter20}. We recall that, by \cite[Lemma 2]{AmCo06-Proceed-Lyon},
the commutation of a $1$-wave (or a $3$-wave) with the $2$-wave $\delta_2$ only modifies the $u$ component.
In the case when a $1$-wave interacts, it is easy to check that $u_q = u_\ell + 2a_\ell h(\delta_1)$ and $u_m=u_\ell + \delta_{2,0}$;
then, we compute $\varepsilon_{2,0}$ by $u_\ell + 2a_\ell h(\delta_1) +\varepsilon_{2,0} = u_\ell + \delta_{2,0} + 2a_rh(\delta_1)$.
The other case is analogous.
\end{proof}
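In code the update \eqref{eq:inter20} is a one-line correction of the composite strength; the following lines are a purely illustrative Python sketch (ours, with arbitrary input values), not a fragment of the actual scheme.
\begin{verbatim}
import math

def h(eps):
    return eps if eps >= 0.0 else math.sinh(eps)

def pseudo_simplified(delta_20, delta_i, family, a_l, a_r):
    # the incoming i-wave is transmitted with unchanged strength; only the
    # composite strength is corrected, with a sign depending on the family
    sign = 1.0 if family == 1 else -1.0
    eps_20 = delta_20 + sign * 2.0 * (a_r - a_l) * h(delta_i)
    return delta_i, eps_20

print(pseudo_simplified(delta_20=0.0, delta_i=-0.2, family=1, a_l=1.0, a_r=1.2))
\end{verbatim}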
\section{Approximate solutions}\label{sec:app_sol}
\setcounter{equation}{0}
We use Propositions \ref{prop:PS-AS} and \ref{prop:PS-SS} to build up the piecewise-constant approximate solutions to (\ref{eq:system}) that are needed for the wave-front tracking scheme \cite{Bressanbook,amadori-corli-siam}. We first approximate the initial data \eqref{init-data}: for any $\nu\in\mathbb{N}$ we take a sequence $(v^\nu_o,u^\nu_o)$ of piecewise constant functions with a finite number of jumps such that, denoting $p^\nu_o=a^2(\lambda_o)/v^\nu_o$,
\begin{enumerate}
\item $\mathrm{TV} \log(p^\nu_o)\leq \mathrm{TV} \log(p_o)$, $\mathrm{TV} u^\nu_o\leq \mathrm{TV} u_o$;
\item $\lim_{x\to-\infty} (v^\nu_o,u^\nu_o)(x)
=\lim_{x\to-\infty} (v_o,u_o)(x)$;
\item $\|(v^\nu_o,u^\nu_o) - (v_o,u_o)\|_{\L1}\leq 1 /\nu$.
\end{enumerate}
We introduce two strictly positive parameters: $\eta=\eta_\nu$, which controls the size of rarefactions, and a threshold $\rho=\rho_\nu$, which determines which of the two Pseudo Riemann solvers is to be used. We now describe the scheme, which improves the algorithm of \cite{amadori-corli-siam} and adapts it to the current situation.
\begin{enumerate}[{\em (i)}]
\item At time $t=0$ we solve the Riemann problems at each point of jump of
$(v^\nu_o, u^\nu_o, \lambda_o)(\cdot, 0+)$ as follows: shocks are not modified
while rarefactions are approximated by fans of waves, each of them having size
less than $\eta$. More precisely, a rarefaction of size $\varepsilon$ is approximated
by $N=[\varepsilon/\eta]+1$ waves whose size is $\varepsilon/N<\eta$ (see the sketch following this list); we set their speeds
equal to the characteristic speed of the state at the right.
Then $(v,u,\lambda)(\cdot,t)$ is defined until some wave fronts interact; by slightly changing the speed of some waves we can assume that only \emph{two} fronts interact at a time.
\item When two wave fronts of the families $1$ or $3$ interact, we solve the Riemann problem at the interaction point. If one of the incoming waves is a rarefaction, after the interaction it is prolonged (if it still exists) as a single discontinuity with speed equal to the characteristic speed of the state at the right. If a new rarefaction is generated, we employ the Riemann solver described in step {\em (i)} and split the rarefaction into a fan of waves having size less than $\eta$.
\item When a wave front of family $1$ or $3$ with strength $\delta$ interacts with the composite wave at a time $t>0$, we proceed as follows:
\begin{itemize}
\item if $|\delta|\ge\rho$, we use the {\em Pseudo Accurate solver} introduced in
Proposition \ref{prop:PS-AS}, partitioning the possibly new rarefaction
according to \emph{(i)};
\item if $|\delta|<\rho$, we use the {\em Pseudo Simplified solver} of Proposition \ref{prop:PS-SS}.
\end{itemize}
\end{enumerate}
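As announced in step {\em (i)}, the splitting of rarefactions is elementary; the following Python sketch (ours; the values of $\varepsilon$ and $\eta$ are purely illustrative) implements the rule $N=[\varepsilon/\eta]+1$.
\begin{verbatim}
import math

def split_rarefaction(eps, eta):
    # a rarefaction of size eps becomes N = [eps/eta] + 1 fronts of size eps/N
    assert eps > 0.0 and eta > 0.0
    n = math.floor(eps / eta) + 1
    return [eps / n] * n

print(split_rarefaction(eps=0.35, eta=0.1))   # four fronts of size 0.0875 < eta
\end{verbatim}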
\section{Interactions}\label{sec:interactions}
\setcounter{equation}{0}
In this section we analyze the interactions of waves. If $\delta_2=0$, i.e. if $a(\lambda_\ell)=a(\lambda_r)$, then the initial data \eqref{init-data} reduce \eqref{eq:system} to a $p$-system where the pressure $p$ only depends on $v$. The results of \cite{AmadoriGuerra01,amadori-corli-siam} apply and we recover the famous result of \cite{Nishida68}. Then, we assume from now on that $\delta_2\ne0$. For simplicity, we focus on the case
\[
a(\lambda_\ell) < a(\lambda_r)\,.
\]
As a consequence we have $\delta_2>0$; the other case is entirely similar.
For $t>0$ at which no interactions occur, and for $\xi\ge1$, $K_{np}>0$, $K\ge 1$ to be determined, we introduce the functionals
\begin{align}
L & = \sum_{\genfrac{}{}{0pt}{}{i=1,3}{\gamma_i>0}}|\gamma_i| +
\xi\sum_{\genfrac{}{}{0pt}{}{i=1,3}{\gamma_i<0}}|\gamma_i|
+K_{np} |\gamma_{2,0}|\,,\label{L-xi}
\\[2mm]
\nonumber
V & = \sum_{\genfrac{}{}{0pt}{}{i=1,3}{\gamma_i>0,\,\mathcal{A}}}|\gamma_i| +
\xi\sum_{\genfrac{}{}{0pt}{}{i=1,3}{\gamma_i<0,\,\mathcal{A}}}|\gamma_i|\,, \quad\quad Q = \delta_2 V,
\\[2mm]
\label{F}
F & = L+ K\, Q\,.
\end{align}
By $\gamma_{2,0}$ we mean the strength of the composite wave. The summation in $V$ is performed only over the set $\mathcal{A}$ of waves {\em approaching} the
front carrying the composite wave, namely the waves of the family $1$ (and $3$) located at the right (left, respectively) of $x=0$.
The term $Q$
is then the \lq\lq usual\rq\rq\ quadratic interaction potential due to the contact discontinuity at $x=0$. We also introduce
\begin{equation*}
\bar{L} = \sum_{i=1,3}|\gamma_i| = \frac 12 \mathrm{TV}(\log p(t,\cdot))\,.
\end{equation*}
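To illustrate the definitions \eqref{L-xi}--\eqref{F}, the following Python sketch (ours; the data layout and the numerical values are assumptions made only for this example) computes $L$, $V$, $Q$ and $F$ from a list of fronts.
\begin{verbatim}
def functionals(fronts, xi, K, K_np, delta2):
    # each front is a tuple (strength, family, position); the composite wave
    # has family (2, 0) and is located at x = 0
    L = V = gamma_20 = 0.0
    for strength, family, x in fronts:
        if family == (2, 0):
            gamma_20 = abs(strength)
            continue
        weight = 1.0 if strength > 0 else xi   # shocks are weighted by xi
        L += weight * abs(strength)
        # approaching waves: 1-waves to the right of x = 0, 3-waves to the left
        if (family == 1 and x > 0) or (family == 3 and x < 0):
            V += weight * abs(strength)
    L += K_np * gamma_20
    Q = delta2 * V
    return L, V, Q, L + K * Q                  # the last entry is F

fronts = [(0.1, 1, 0.5), (-0.2, 3, -0.3), (0.4, (2, 0), 0.0)]
print(functionals(fronts, xi=1.5, K=1.2, K_np=0.5, delta2=0.3))
\end{verbatim}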
\begin{remark}\rm
The functional defined in \eqref{F} differs from \cite[(5.1)]{amadori-corli-siam} because of the presence of the parameter $\xi$ in $V$ and, consequently, in the interaction potential $Q$, leading to better estimates and a more general result.
An extension of the functional to the case of a more general function
$\lambda_o$ appears possible, though at the cost of some extra conditions on $\lambda_o$.
The specific case with only two phase interfaces is addressed in \cite{ABCD2}.
\end{remark}
Under the notation of Figure \ref{fig:Glimmone}, we shall make use of the identities \cite{AmCo06-Proceed-Lyon,Peng}
\begin{align}
\varepsilon_3 - \varepsilon_1 & = \alpha_3 + \beta_3 -\alpha_1 - \beta_1\,,
\label{tre-uno}
\\
a_\ell h(\varepsilon_1) + a_r h(\varepsilon_3) & = a_\ell h(\alpha_1) + a_m
h(\alpha_3) + a_m h(\beta_1) + a_r h(\beta_3)\,. \label{tre-due}
\end{align}
\begin{figure}
\caption{Notation for the interaction identities \eqref{tre-uno} and \eqref{tre-due}.}
\label{fig:Glimmone}
\end{figure}
\subsection{Interactions with the composite wave}\label{subsec:Interactions20wave}
We first consider the interactions of a $1$- or $3$-wave with a $(2,0)$-wave. As in \cite{AmCo06-Proceed-Lyon}, we notice that they give rise to the following pattern of solutions:
\begin{equation}\label{eq:patterns}
\begin{array}{rclcrcl}
(2,0)\times 1R &\to& 1R+(2,0)+3R\,,&\quad &(2,0)\times 1S & \to & 1S+(2,0)+3S\,,
\\
3R\times (2,0) & \to& 1S+(2,0)+3R\,,
&\quad &3S\times (2,0) &\to& 1R+(2,0)+3S\,.
\end{array}
\end{equation}
In the following we often assume that, for some fixed $m>0$, any interacting $i$-wave, $i=1,3$, with strength $\delta_i$ satisfies
\begin{equation}\label{rogna}
|\delta_i|\le m\,.
\end{equation}
We usually denote by $\delta_k$ (and $\varepsilon_k$) the interacting waves (respectively, the waves produced by the interaction).
\begin{lemma}\label{lem:interazioni}
Assume that a wave $\delta_i$, $i=1,3$, interacts with a $\delta_{2,0}$-wave.
If the Riemann problem is solved by the Pseudo Accurate solver, then the strengths $\varepsilon_i$ of the outgoing waves satisfy $\varepsilon_{2,0}=\delta_{2,0}$ and
\begin{align}
|\varepsilon_i - \delta_i| = |\varepsilon_j| & \le \displaystyle
\frac{1}{2}\,\delta_2 |\delta_i|,\,\quad
i,j=1,3,\, i\ne j\,,
\label{eq:33Pengnew}
\\
|\varepsilon_1|+|\varepsilon_3| & \leq \left\{
\begin{array}{ll}
|\delta_1| + \delta_2|\delta_1|&\qquad\hbox{ if $i=1$\,,}
\\
|\delta_3| &\qquad\hbox{ if $i=3$\,.}
\end{array}
\right.
\label{eq:stima-interazione-semplice}
\end{align}
If the Riemann problem is solved by the Pseudo Simplified procedure and we assume \eqref{rogna}, then there exists $C_o=C_o(m)$ such that
\begin{align}
|\varepsilon_{2,0}-\delta_{2,0}|& \le \displaystyle
C_o\, \delta_2 |\delta_i|\,.
\label{eq:stima-interazione-composta}
\end{align}
\end{lemma}
\begin{proof}
The estimates \eqref{eq:33Pengnew} and \eqref{eq:stima-interazione-semplice} easily follow from Proposition \ref{prop:PS-AS} and are carried out as in \cite{AmCo06-Proceed-Lyon}.
The proof of the second part relies on the estimates of \cite[Proposition 5.12]{amadori-corli-siam}; we have
\begin{equation*}
|\varepsilon_{2,0}-\delta_{2,0}| = 2|a_{r}-a_\ell|\,|h(\delta_i)|\leq 2 a_r \frac{\sinh m}m \delta_2\,
|\delta_i|\,,
\end{equation*}
whence (\ref{eq:stima-interazione-composta}) immediately follows once we set $C_o(m) \doteq 2a_{r} \sinh m/m$.
\end{proof}
\begin{proposition}\label{Delta-F-2wave}
Assume that a wave $\delta_i$, $i=1,3$, interacts with a $\delta_{2,0}$-wave at time $t$.
\noindent If the Pseudo Accurate procedure is used, then $\Delta F(t) < 0$ provided that
\begin{equation}\label{Kappa-mu}
K > \max\left\{\frac{\xi-1}{2}\,,\,{1}\right\}\,.
\end{equation}
If the Pseudo Simplified procedure is used, then $\Delta F(t)< 0$ provided that
\begin{equation}\label{eq:knp}
K_{np} < \frac{K}{C_o}\,.
\end{equation}
\end{proposition}
\begin{proof}
We first consider the case where the Pseudo Accurate solver is used and use the notation of Figure~\ref{fig:inter20ASL}. By \eqref{tre-uno} and Lemma \ref{lem:interazioni}, we have
\begin{equation*}
\left\{
\begin{array}{lll}
\varepsilon_1-\delta_1 = \varepsilon_3, &\quad |\varepsilon_1|-|\delta_1| = |\varepsilon_3|\,, &\qquad \mbox{if }i=1 \\[1mm]
\varepsilon_1+\delta_3 = \varepsilon_3, &\quad |\delta_3|-|\varepsilon_1| = |\varepsilon_3|\,, &\qquad \mbox{if }i=3\,.
\end{array}
\right.
\end{equation*}
\noindent{\fbox{$i=1$.}} If the interacting wave is a rarefaction, then $\Delta L = 2|\varepsilon_3|\le \delta_2|\delta_1|$ and
$\Delta V = - |\delta_1|$. Therefore, by \eqref{Kappa-mu} we deduce
\begin{equation}\label{eq:DeltaF2}
\Delta F = \Delta L + K \, \delta_2 \Delta V
\le \left\{ 1 - K \right\}\delta_2|\delta_1| < 0\,.
\end{equation}
If the interacting wave is a shock, we have the same estimates with $\xi$ as a factor.
\noindent{\fbox{$i=3$.}}
If the interacting wave is a shock, then $\Delta L = |\varepsilon_1|+\xi|\varepsilon_3|-\xi|\delta_3|=-(\xi-1)|\varepsilon_1|\le0$, $\Delta V =-\xi|\delta_3|<0$ and
\begin{equation}\label{eq:2bs}
\Delta F = -(\xi-1)|\varepsilon_1| -K\delta_2\xi|\delta_3| \le -K\delta_2\xi|\delta_3| <0\,.
\end{equation}
If the
wave is a rarefaction, then $\Delta L = \xi|\varepsilon_1| + |\varepsilon_3| - |\delta_3| = (\xi -1 )|\varepsilon_1| \le (\xi -1)\, \delta_2|\delta_3|/2$ and
$\Delta V \,=\, - |\delta_3|$. By \eqref{Kappa-mu} we obtain again
\begin{equation}\label{eq:2Fr}
\Delta F = \Delta L + K \, \delta_2\, \Delta V
\le \delta_2|\delta_3| \left\{ \frac{\xi -1}2 - K \right\} < 0\,.
\end{equation}
If the Pseudo Simplified solver is used, then $\Delta V \le -|\delta_i|$ ($i=1,3$) and $\Delta L =K_{np}|\varepsilon_{2,0}|-K_{np}|\delta_{2,0}| \leq K_{np}
C_o\delta_2|\delta_i|$ by \eqref{eq:stima-interazione-composta}. Hence, by \eqref{eq:knp} we get
\begin{equation*}
\Delta F \leq \delta_2|\delta_i| (K_{np} C_o - K)<0\,.
\end{equation*}
\end{proof}
\subsection{Interactions between $1$- and $3$-waves}
In this subsection we analyze the interactions between $1$- and $3$-waves, see Figure \ref{fig:inter3133}.
\begin{figure}
\caption{Interactions between waves of the families $1$ and $3$.}
\label{fig:inter3133}
\end{figure}
\begin{lemma}
\label{lem:shock-riflesso}
For the interaction patterns in Figure \ref{fig:inter3133}, the following holds.
\begin{enumerate}[{(i)}]
\item Two interacting waves of different families
cross each other without changing strengths.
\item Let $\alpha_i$, $\beta_i$ be two interacting waves of the same family and $\varepsilon_1$, $\varepsilon_3$ the outgoing waves.
\begin{enumerate}[({ii}.a)]
\item If both incoming waves are shocks, then the outgoing wave of the same family is a shock and satisfies $|\varepsilon_i|>\max \{|\alpha_i|,|\beta_i|\}$; the reflected wave is a rarefaction.
\item If the incoming waves have different signs, then the reflected wave is a shock; both the amounts of shocks and rarefactions of the $i$-family decrease across the interaction. Moreover for $j\ne i $ and $\alpha_i<0<\beta_i$ one has
\begin{align}\label{eq:chi_def}
|\varepsilon_j| & \le c(\alpha_i) \cdot \min\{|\alpha_i|,|\beta_i|\}\,,\qquad c(z) \doteq \frac{\cosh z -1}{\cosh z+1}\,.
\end{align}
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{remark}\rm The inequality \eqref{eq:chi_def} generalizes the one stated in \cite[Lemma B.1]{amadori-corli-siam}
for the case $SR$, $RS\to SS$; moreover, in that case we provide below a simpler proof.
\end{remark}
\begin{proofof}{Lemma \ref{lem:shock-riflesso}} We only need to prove \eqref{eq:chi_def}, the rest being already proved in \cite[Lemmas 5.4--5.6]{amadori-corli-siam}. For simplicity we assume $i=3$ and distinguish between two cases according to the outgoing wave $\varepsilon_3$.
Indeed, we remark that there exists a function $x_o(\cdot)$ such that $\varepsilon_3$ is a rarefaction iff $\beta_3\ge x_o(|\alpha_3|)$; see \cite[Lemma B.1]{amadori-corli-siam}. In the limiting case $\beta_3= x_o(|\alpha_3|)$ the shock and the rarefaction cancel each other and $\varepsilon_3=0$; the interaction gives rise only to the reflected wave $\varepsilon_1$. By setting $x=|\beta_{3}|$ and $z=|\alpha_3|$, from \eqref{tre-uno} and \eqref{tre-due} we find the equation valid for $\varepsilon_3=0$, namely
$$
\sinh(x-z) -\sinh z +x=0\,,
$$
which implicitly defines the function $x=x_o(z)$.
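The threshold $x_o(z)$ can be computed numerically, since the left-hand side above is strictly increasing in $x$; the following Python lines are an illustrative sketch (ours), not part of the proof.
\begin{verbatim}
import math

def x_o(z):
    # solve sinh(x - z) - sinh(z) + x = 0 in x by bisection
    f = lambda x: math.sinh(x - z) - math.sinh(z) + x
    lo, hi = 0.0, 2.0 * z                   # f(0) = -2 sinh(z) < 0 < f(2z) = 2z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(x_o(1.0))   # about 1.09: shock and rarefaction of these sizes cancel
\end{verbatim}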
\paragraph{\fbox{$SR,\, RS\to SR$}}\quad The starting point is to specialize \eqref{tre-uno} and \eqref{tre-due} to the present case:
\begin{align}
|\varepsilon_1|+|\varepsilon_3|&=-|\alpha_3|+|\beta_3|\,,\label{1}\\
\sinh(|\varepsilon_1|)-|\varepsilon_3|&=\sinh(|\alpha_3|)-|\beta_3|\,.\label{2}
\end{align}
By summing up \eqref{1} and \eqref{2} we find that
\begin{equation}
\label{4}
\sinh(|\varepsilon_1|)+|\varepsilon_1|=\sinh(|\alpha_3|)-|\alpha_3|\,.
\end{equation}
To prove \eqref{eq:chi_def} it is enough to prove that
\begin{equation}
\label{3}
|\varepsilon_1|\le c(\alpha_3) |\alpha_3|\,.
\end{equation}
Indeed, from \eqref{1} we infer that $|\alpha_3|<|\beta_3|$ and therefore \eqref{3} implies \eqref{eq:chi_def}.
To prove \eqref{3}, we introduce the notation $|\varepsilon_1|=y$ and $|\alpha_3|=z$, so that \eqref{4} rewrites as
$$
G(y,z)\doteq\sinh y+y-\sinh z+z =0\,.
$$
By a simple application of the Implicit Function Theorem, there exists a function $y=y(z)\ge 0$, defined for all $z\ge0$, such that $G\left(y(z),z\right)=0$.
Since $G_{y}(y,z)=\cosh y +1>0$, in order to prove that $y(z)\le c(z)z$ it is enough to prove that $g(z)\doteq G(c(z)z,z)>0$, that is
\begin{equation}
\label{stima-comune}
g(z)=(c(z)+1)z+\sinh(c(z)z)-\sinh z>0\,.
\end{equation}
Using the fact that $c(z)z<z$, the Mean Value Theorem and the simple identity
\begin{equation*}
1+c(z)=\left(1-c(z)\right)\cosh z\,,
\end{equation*}
we find that
$$
g(z)=\left(c(z)+1\right)z + \left(c(z)z -z\right) \cosh\zeta > z\left[ c(z)+1+ \left(c(z) - 1\right) \cosh z \right] =0\,,
$$
for $c(z)z<\zeta<z$. Hence, we have proved \eqref{stima-comune}.
\paragraph{\fbox{$SR,\, RS\to SS$}}\quad Again, we start from \eqref{tre-uno} and \eqref{tre-due} that can now be rewritten as
\begin{align}
|\varepsilon_1|-|\varepsilon_3|&=-|\alpha_3|+|\beta_3|\,,\label{1-SS}\\
\sinh(|\varepsilon_1|)+\sinh(|\varepsilon_3|)&=\sinh(|\alpha_3|)-|\beta_3|\,.\nonumber
\end{align}
Set $x=|\beta_{3}|$, $y=|\varepsilon_{1}|$,
$z=|\alpha_{3}|$ and define the function
$$
F(x,y;z)=\sinh y+\sinh(y-x+z)-\sinh z+x\,,
$$
which is subject to the constraints
$$
z\geq0,
\quad
0\leq x< x_{o}(z),
\quad
\max\{0,x-z\}<y<\min\{x,z\}\,.
$$
By the Implicit Function Theorem, there exists a function $y=y(x;z)$ such that $F\left(x,y(x;z);z\right)\equiv 0$.
Moreover, by denoting with $y'$ the derivative of $y$ with respect to $x$ and so on, we have
\begin{gather*}
y'=-\frac{F_{x}}{F_{y}}\,,\qquad
y''=-\frac{F_{xx}+2F_{xy}y'+F_{yy}(y')^{2}}{F_{y}}\,,
\end{gather*}
where
\begin{gather*}
F_{x}=1-\cosh(y-x+z) <0,
\quad
F_{y}=\cosh(y-x+z)+\cosh y >0\,,\\
F_{xx}=-F_{xy}= \sinh(y-x+z)>0\,,\quad
F_{yy}=\sinh(y-x+z)+\sinh y>0\,.
\end{gather*}
Therefore $y'>0$ and
\begin{gather*}
y''(x;z)=-\frac{\sinh\left(y-x+z\right)(1-y')^{2} + \sinh\left(y\right)(y')^{2}}{F_{y}} <0\,.
\end{gather*}
Hence $x\mapsto y(x;z)$ is concave down and thus
$$
y(x;z)\leq y'(0;z)x=c(z)x\,.
$$
To complete the proof of \eqref{eq:chi_def}, it remains to prove that $y(x;z)\leq c(z)z$. To do this, simply recall that $y'>0$ and then
$$
y(x;z)\leq y\left(x_o(z);z\right) \leq c(z)z\,,
$$
where the last inequality holds because it coincides with \eqref{3} in the limiting case $\beta_3=x_o(z)$, $z=|\alpha_3|$.
\end{proofof}
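The bound \eqref{eq:chi_def} can also be verified numerically. The following Python sketch (ours; the sizes $z=|\alpha_3|$ and $x=|\beta_3|<x_o(z)$ are purely illustrative) solves $F(x,y;z)=0$ for $y$ by bisection, as in the case $SR,\,RS\to SS$ above, and compares the reflected shock with $c(z)\min\{x,z\}$.
\begin{verbatim}
import math

def c(z):
    return (math.cosh(z) - 1.0) / (math.cosh(z) + 1.0)

def reflected_shock(x, z):
    # F(x, y; z) = sinh(y) + sinh(y - x + z) - sinh(z) + x is increasing in y
    # and changes sign on [max(0, x - z), min(x, z)] whenever 0 < x < x_o(z)
    F = lambda y: math.sinh(y) + math.sinh(y - x + z) - math.sinh(z) + x
    lo, hi = max(0.0, x - z), min(x, z)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

z, x = 1.0, 0.4          # |alpha_3| = 1.0 (shock), |beta_3| = 0.4 < x_o(1.0)
y = reflected_shock(x, z)
print(y, c(z) * min(x, z), y <= c(z) * min(x, z))
\end{verbatim}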
\begin{remark}{\rm
Under the notation of the proof of case {\em (ii.b)} in Lemma \ref{lem:shock-riflesso}, i.e., $x=\beta_i$, $z=|\alpha_i|$, we see that
the size of the reflected shock is
\begin{equation}\label{eq:eps_j-explicit}
|\varepsilon_j| = \left\{
\begin{array}{ll}
y(x;z) & \hbox { if } x \le x_o(z)\,,
\\
y(z) & \hbox { if } x > x_o(z)\,.
\end{array}
\right.
\end{equation}
The strength $\varepsilon_j$ is a continuous function of $x$ since $y\left(x_o(z);z\right) = y(z)$ for every $z$.
In particular, assume that $\beta_i> x_o(|\alpha_i|)$, so that $\varepsilon_i$ is a rarefaction. For $\beta_i$ in this range, the size of $\varepsilon_j$ does not change by \eqref{eq:eps_j-explicit} and the part of $\beta_i$ exceeding $x_o(|\alpha_i|)$ is entirely propagated along $\varepsilon_i$. This holds since the interaction only affects that part of $\beta_i$ whose amplitude is exactly $x_o(|\alpha_i|)$. We refer to Figure \ref{fig:y(x;3)} for a graph of $|\varepsilon_j|$ as a function of $\beta_i$.
We notice that this behavior of $\varepsilon_j$ is mimicked by the damping coefficient $c$ in \eqref{eq:chi_def}, which only depends on the size of $\alpha_i$. }
\begin{figure}
\caption{The reflected shock in case {\em (ii.b)}: $|\varepsilon_j|$ as a function of $\beta_i$.}
\label{fig:y(x;3)}
\end{figure}
\end{remark}
\begin{remark}\label{rem-d(m)}\rm
In case {\em (ii.a)} of Lemma \ref{lem:shock-riflesso}, one can prove for the reflected rarefaction that
\begin{equation}\label{eq:eps_j-rar}
|\varepsilon_j| \le d\left(\max\{|\alpha_i|,|\beta_i|\}\right) \min\left\{ |\alpha_i|,|\beta_i|\right\},
\end{equation}
for a suitable function $d(z)>c(z)$; see \cite[Lemma 5.6]{amadori-corli-siam}. Estimate \eqref{eq:eps_j-rar} is analogous to \eqref{eq:chi_def} but the damping coefficient $d\left(\max\{|\alpha_i|,|\beta_i|\}\right)$ cannot be replaced by $c\left(\max\{|\alpha_i|,|\beta_i|\}\right)$.
This easily follows from a second-order expansion of the function $\tau(a,b)$ in \cite[Lemma 5.6]{amadori-corli-siam}, or simply by arguing as in the proof of case {\em (ii.b)}. However, we shall see in the following proposition that the decrease of the functional $F$ depends only on the coefficient $c$ and not on $d$.
\end{remark}
\begin{proposition}\label{prop:DeltaF33}
Consider the interactions of two wave fronts of the same family $1$ or $3$, and assume \eqref{rogna}. Then $\Delta F \le 0$ if
\begin{equation}\label{eq:sogliazza}
1 < \xi \le \frac{1}{c(m)}\quad\hbox{ and }\quad
K \le \frac{\xi-1}{\delta_2}\,.
\end{equation}
\end{proposition}
\begin{proof}
The proof takes into account the possible wave configurations.
We use the notation of Lemma \ref{lem:shock-riflesso} and assume $i=3$.
\paragraph{\fbox{$SS\to RS$}}\quad
We start by proving that
\begin{equation}\label{Delta_L_xi_13}
\Delta L + |\varepsilon_1|(\xi - 1) = 0\,,
\end{equation}
that holds for all $\xi\ge 1$. Indeed, in this case one has $\Delta \bar{L}=0$ by \eqref{tre-uno}
and then
\begin{equation*}
\Delta L + (\xi -1) |\varepsilon_1| = \xi (|\varepsilon_1| +
|\varepsilon_3|-|\alpha_3|- |\beta_3|)= 0\,.
\end{equation*}
If $\Delta V>0$ then $\Delta V=|\varepsilon_1|$; hence, by \eqref{eq:sogliazza} and \eqref{Delta_L_xi_13} we obtain
\begin{equation*}
\Delta F
\le |\varepsilon_1| \left\{ - (\xi-1) + K \delta_2\right\} \le 0\,.
\end{equation*}
\paragraph{\fbox{$SR,\, RS\to SR,\, SS$}}\quad Assume $\alpha_3<0<\beta_3$. We now
prove the stronger inequality
\begin{equation}\label{Delta_L_xi_13-SR}
\Delta L + |\varepsilon_1|\xi (\xi - 1) \le 0\,.
\end{equation}
If $\varepsilon_3$ is a shock, then
we use \eqref{1-SS}
, \eqref{eq:chi_def} and (\ref{eq:sogliazza})${}_1$ to obtain
\begin{align*}
\Delta L + |\varepsilon_1|\xi (\xi - 1) &=\xi^2 |\varepsilon_1| + \xi(|\varepsilon_3|-|\alpha_3|) - |\beta_3|\\
&= \xi^2 |\varepsilon_1| + \xi(|\varepsilon_1|-|\beta_3|) - |\beta_3|\\
&= (\xi+1) (\xi|\varepsilon_1| - |\beta_3|) \le 0\,.
\end{align*}
Therefore \eqref{Delta_L_xi_13-SR} holds in this case.
On the other hand, if $\varepsilon_3$ is a rarefaction, then the left hand side of
\eqref{Delta_L_xi_13-SR} turns out to be
\begin{align*}
\xi^2 |\varepsilon_1| + |\varepsilon_3| -\xi|\alpha_3| - |\beta_3|\,.
\end{align*}
From \eqref{1}
we have
$|\varepsilon_3|< |\beta_3|$, while (\ref{eq:chi_def}) and (\ref{eq:sogliazza})${}_1$ imply $\xi |\varepsilon_1|\le |\alpha_3|$. This completely proves \eqref{Delta_L_xi_13-SR}.
If $\Delta V>0$, then $\Delta V=\xi|\varepsilon_1|$ and hence
\begin{equation}
\Delta F
\le \xi |\varepsilon_1| \left\{ - (\xi-1) + K \delta_2\right\} \le 0
\end{equation}
by \eqref{eq:sogliazza}${}_2$. This concludes the proof of the proposition.
\end{proof}
\subsection{Decrease of the functional $F$ and control of the total variation}
In order that $\Delta F\le 0$ at any interaction, we need $K$ to satisfy both \eqref{Kappa-mu} and $\eqref{eq:sogliazza}_2$:
\begin{equation}\label{cond_K_xi}
\max\left\{\frac{\xi-1}{2}\,,\,{1}\right\}< K \le \frac{\xi-1}{\delta_2}\,.
\end{equation}
This is possible if $1+\delta_2 < \xi$; hence, by $\eqref{eq:sogliazza}_1$ we require that $\xi$ satisfies
\begin{equation}\label{cond_xi_delta2}
1+\delta_2 < \xi \le \frac{1}{c(m)}
\,.
\end{equation}
In turn, this is possible if
\begin{equation}\label{ip_m_delta2}
c(m) < \frac{1}{1+\delta_2}
\,.
\end{equation}
We notice that inequality \eqref{ip_m_delta2} is certainly satisfied if $c(m)\le 1/3$ because $\delta_2<2$.
Therefore, we choose the parameters $m$, $\xi$ and $K$ as follows (a small numerical sketch of this choice is given after the list):
\begin{enumerate}
\item We determine the maximum size $m$ of the waves in the approximate solution by imposing \eqref{ip_m_delta2}; we recall that $c$ is a strictly increasing function of $m$ and hence invertible.
\item We choose $\xi$ in the non-empty interval defined by \eqref{cond_xi_delta2} and then choose $K$ to satisfy \eqref{cond_K_xi} with strict inequalities:
\begin{equation}\label{cond_K_xi-strict}
\max\left\{\frac{\xi-1}{2}\,,\,{1}\right\}< K < \frac{\xi-1}{\delta_2}\,.
\end{equation}
The strict inequality on the right of \eqref{cond_K_xi-strict} is needed both for the control on the number of interactions \cite[Lemma 6.2]{amadori-corli-siam} and for the decay of the reflected waves as the number of interactions increases, see \eqref{eq:mu} and Proposition \ref{prop:tilde-Fk}.
\item We choose $K_{np}$ so that \eqref{eq:knp} holds.
\end{enumerate}
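The following Python sketch (ours; the halving factors and the normalisation $a_r=1$ inside $C_o$ are ad hoc choices, made only for illustration) realises the above selection of $m$, $\xi$, $K$ and $K_{np}$ for a given $\delta_2\ne0$.
\begin{verbatim}
import math

def c(z):
    return (math.cosh(z) - 1.0) / (math.cosh(z) + 1.0)

def c_inv(s):
    return math.acosh((1.0 + s) / (1.0 - s))   # inverse of c on (0, 1)

def choose_parameters(delta2, safety=0.5):
    # step 1: any m with c(m) < 1/(1 + delta2); we stay a factor safety below
    m = c_inv(safety / (1.0 + delta2))
    # step 2: xi strictly between 1 + delta2 and 1/c(m), then K strictly
    # between max{(xi - 1)/2, 1} and (xi - 1)/delta2
    xi = 0.5 * (1.0 + delta2 + 1.0 / c(m))
    K = 0.5 * (max((xi - 1.0) / 2.0, 1.0) + (xi - 1.0) / delta2)
    # step 3: K_np below K / C_o, with C_o = 2 a_r sinh(m)/m and a_r set to 1
    C_o = 2.0 * math.sinh(m) / m
    K_np = 0.5 * K / C_o
    return m, xi, K, K_np

print(choose_parameters(delta2=0.3))
\end{verbatim}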
We collect the results of the previous subsection into a single proposition.
\begin{proposition}[Local decrease]\label{prop:last}
Consider the interaction of any two waves at time $t$. Let $m > 0$ be
such that \eqref{ip_m_delta2} holds
and $C_o=C_o(m)$ as in Lemma \ref{lem:interazioni}.
If $\xi$, $K$, $K_{np}$ satisfy \eqref{cond_xi_delta2}, \eqref{cond_K_xi-strict} and \eqref{eq:knp}, respectively,
then
\begin{equation}\label{eq:Fdecr}
\Delta F(t) \le 0\,.
\end{equation}
\end{proposition}
Now, we prove the global decrease of $F$.
\begin{proposition}[Global decrease] \label{global}
We choose parameters $m$, $\xi$, $K$, $K_{np}$ as in Proposition \ref{prop:last}.
Moreover, we assume that
\begin{equation}\label{eq:boundL}
\bar{L}(0+) \le m\hspace{0.8pt} c^2(m)
\end{equation}
and that the approximate solution is defined in $[0,T]$.
Then we have that $F(t)\le m$ and $\Delta F(t)\le0$ for every $t\in(0,T]$.
\end{proposition}
\begin{proof}
By Propositions \ref{Delta-F-2wave} and \ref{prop:DeltaF33} we know that $\Delta F\le 0$ if \eqref{rogna} holds.
By \eqref{eq:boundL} we deduce that $L(0+) \le m$ and by a recursion argument we find that for every $t\le T$
\begin{equation*}
F(t)\le F(0+) \le L(0+)(1 + K\delta_2) \le \xi^2 \bar{L}(0+)\le\frac{1}{c^2(m)}\bar{L}(0+)\le m\,.
\end{equation*}
This implies $\bar{L}(t)\le L(t)\le m$ for every $t\le T$ and in particular \eqref{rogna}.
\end{proof}
\section{The convergence and consistency of the algorithm}\label{sec:Cauchy}
\setcounter{equation}{0}
In this section we finally conclude the proof of Theorem \ref{thm:main}, focusing on the convergence and consistency of the front tracking algorithm.
For the algorithm to be well defined, one has to verify that the total number of wave fronts and interactions is finite,
and that the size of rarefaction waves remains small. We already anticipated in the introduction that the algorithm used here
to construct the approximate solutions offers the advantage of quickly yielding a bound on the total number of wave fronts.
As a matter of fact, at every interaction producing more than two outgoing waves the interaction potential $F$ decreases by a fixed positive
amount; hence, as in \cite[Lemma $6.2$]{amadori-corli-siam} one can prove that for large times any interaction involves only two incoming and two outgoing fronts.
The other two requirements are accomplished as in \cite[Proposition $6.3$]{amadori-corli-siam} and \cite[Lemma $6.1$]{amadori-corli-siam}, respectively.
The convergence follows from a standard application of Helly's Theorem, while for the consistency we need refined estimates to control the total size of the composite wave.
\subsection{Control of the total size of the composite wave}
The wave-front tracking scheme exploits the notion of generation order of a wave to prove that the strength
of the composite wave tends to zero as the approximation parameter $\nu$ tends to infinity:
this means that the $(2,0)$-wave becomes an entropic $2$-wave in the limit. More specifically,
for a physical wave $\gamma$ of family $1$ or $3$ we define its generation order $k_\gamma$ as in \cite[\S 6.2]{amadori-corli-siam};
on the other hand, for the $(2,0)$-wave we proceed as follows. We assign order $1$ to the $(2,0)$-wave generated at $t=0+$;
then, we keep its order unchanged in the cases where the Pseudo Accurate solver is used, while we set it equal to
$k_{\gamma}+1$ when the Pseudo Simplified solver is used with a physical wave $\gamma$.
For any $k=1,2,\ldots$, we define
\begin{align*}
L_k &= \sum_{\gamma>0\atop k_\gamma = k}|\gamma| + \xi
\sum_{\gamma<0\atop k_\gamma = k}|\gamma| +
K_{np} \,L_k^{0}\,,
\\
V_k & = \sum_{\gamma>0,\,\mathcal{A}\atop k_\gamma = k}|\gamma| +
\xi\sum_{\gamma<0,\,\mathcal{A}\atop k_\gamma = k}|\gamma|\,,\qquad Q_k\ =\ \delta_2V_k\,,
\\
F_k &= L_k + K\, Q_k\,,
\end{align*}
where $\gamma$ ranges over the set of $1$- and $3$-waves, as for \eqref{L-xi}.
Above we denoted
\begin{equation}\label{Lk0}
L_k^{0}=\sum_{\tau_k < t}|\varepsilon_{2,0}-\delta_{2,0}|(\tau_k)\,,
\end{equation}
with $\tau_k$ denoting the interaction times where the outgoing composite wave has order of generation $k$.
As a consequence, only the times $\tau_k$ where the Pseudo Simplified solver is used give positive summands in \eqref{Lk0}:
when the Pseudo Accurate solver is used we have $\varepsilon_{2,0} = \delta_{2,0}$.
For $k\in\mathbb{N}$, we introduce:
\begin{itemize}
\item $I_k = $ set of times when two waves $\alpha$, $\beta$ of same family interact,
with $\max\{k_\alpha, k_\beta\}=k$;
\item $J_k = $ set of times when a $1$- or a $3$-wave of order $k$ interacts with the $(2,0)$-wave.
\end{itemize}
We set ${\cal T}_k = I_k\cup J_k$ and define
\begin{align}\label{eq:mu}
\mu &\doteq \max\left\{\frac{1}{2K-1},\frac{\xi}{2K+1},\frac{K\delta_2+1}{\xi},\frac{K_{np} C_o}{K}\right\}\,.
\end{align}
We notice that $0<\mu<1$ by \eqref{cond_K_xi-strict} and \eqref{eq:knp}.
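For instance, for the illustrative parameters produced by the sketch at the end of Section \ref{sec:interactions} one finds $\mu\approx0.83$; the computation below is again only a sketch of ours, with values that are admissible but otherwise arbitrary.
\begin{verbatim}
import math

def contraction_factor(delta2, xi, K, K_np, C_o):
    return max(1.0 / (2.0 * K - 1.0),
               xi / (2.0 * K + 1.0),
               (K * delta2 + 1.0) / xi,
               K_np * C_o / K)

# illustrative admissible values, as produced by the parameter sketch above
m, xi, K, K_np, delta2 = 1.462, 1.95, 2.083, 0.373, 0.3
C_o = 2.0 * math.sinh(m) / m                   # again with a_r set to 1
print(contraction_factor(delta2, xi, K, K_np, C_o))   # about 0.83 < 1
\end{verbatim}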
\begin{proposition}\label{lem:01}
Let $m$, $\xi$, $K$ and $K_{np}$ satisfy the assumptions of Proposition \ref{prop:last} and assume that $F(t)<m$ for all $t$.
Then the following holds, for $\tau\in {\cal T}_h$, $h\ge 1$:
\begin{align}\label{Fk-segni-1}
&\Delta F_{h}<0\,,\qquad \Delta F_{h+1}>0\,,\\ \label{Fk-segni-2}
&\Delta F_k=0\qquad\,\, \, \hbox{ if }\ k\ge h+2\,.
\end{align}
Moreover,
\begin{equation}
\label{h=k-1b-XXX}
[\Delta F_{h+1}]_+ \le \mu \Bigl([\Delta F_h]_- -
\sum_{\ell=1}^{h-1} \Delta F_\ell \Bigr)\,.
\end{equation}
\end{proposition}
\begin{remark}\label{ref:DeltaF<0} \rm Notice that Proposition~\ref{lem:01} lets us improve Proposition~\ref{prop:last}. Indeed, recalling that ${\cal T}_h= I_{h}\cup J_h$, Proposition \ref{lem:01} implies, for $\tau \in I_{h}$,
\begin{equation*}
\Delta F = \sum_{\ell=1}^{h-1}\Delta F_{\ell} \,-\, [\Delta F_h]_- \,+\, [\Delta F_{h+1}]_+ \le -(1-\mu)[\Delta F_h]_- < 0 \,,
\end{equation*}
while for $\tau \in J_{h}$, since $\sum_{\ell=1}^{h-1} [\Delta F_\ell]_+=0$, it gives
\begin{equation*}
\Delta F \,=\, - [\Delta F_h]_- \,+\, [\Delta F_{h+1}]_+ \le -(1-\mu)[\Delta F_h]_- <0 \,.
\end{equation*}
Then, estimate \eqref{h=k-1b-XXX} quantifies the decrease in the functional $F$ and thus improves \eqref{eq:Fdecr}.
\end{remark}
\begin{proofof}{Proposition \ref{lem:01}} If $k\ge h+2$, no wave of order $k$ is involved and then \eqref{Fk-segni-2} holds\,.
To prove \eqref{Fk-segni-1} and \eqref{h=k-1b-XXX}, we distinguish between two cases.
\paragraph{\fbox{$\tau\in I_{h}$}} (Interactions between waves of $1$-, $3$-family).
Clearly the $F_k$'s do not vary when a $1$-wave interacts with a $3$-wave.
Then we consider interactions of waves of the same family, see Figure \ref{fig:inter_k-1}{\em (a)}.
Since $\tau\in I_{h}$, we have $\Delta L_{h+1}>0$ and $0\le \Delta Q_{h+1}\le \delta_2\Delta L_{h+1}$.
Also, $\Delta F_{h}=\Delta L_{h} + K \Delta Q_{h}<0$, since both terms in the sum are negative or zero.
This proves \eqref{Fk-segni-1}.
By \eqref{Delta_L_xi_13} and \eqref{Delta_L_xi_13-SR} (see also \cite[(6.10)]{amadori-corli-siam}), we have that
\begin{equation}\label{0k-1}
[\Delta L_{h+1}]_+ \, \le\, \frac{1}{\xi} \Bigl([\Delta L_{h}]_- - \sum_{\ell = 1}^{h-1}\,\Delta L_\ell \Bigr).
\end{equation}
By (\ref{0k-1}), the estimate $0\le \Delta Q_{h+1}\le \delta_2\Delta L_{h+1}$ and \eqref{eq:mu} we deduce that
\begin{equation}
0<\Delta F_{h+1} \le (1+K\delta_2) [\Delta L_{h+1}]_+ \le \mu \biggl( [\Delta L_{h}]_- - \sum_{\ell = 1}^{h-1}\,\Delta L_\ell\biggr)\,.
\label{eq:ll}
\end{equation}
We now prove that
\begin{equation}
[\Delta Q_{h}]_{-} - \sum_{\ell = 1}^{h-1}\Delta Q_\ell \ge 0\,,
\label{eq:mm}
\end{equation}
for which we only have to consider the case when $\Delta Q_\ell>0$ for an $\ell\le h-1$. In this case, $[\Delta Q_{h}]_- - \sum_{\ell = 1}^{h-1}\Delta Q_\ell
= -\delta_2\,\Delta V\ge \delta_2(-\Delta L + |\varepsilon_1|)\ge 0$ because of \eqref{Delta_L_xi_13}, \eqref{Delta_L_xi_13-SR};
this proves \eqref{eq:mm}.
Therefore, for $\tau\in I_{h}$, estimate \eqref{h=k-1b-XXX} follows from (\ref{eq:ll}) and (\ref{eq:mm}).
\begin{figure}
\caption{Interactions and generation orders: ({\em a}) interaction between waves of the same family; ({\em b}) interaction with the $(2,0)$-wave.}
\label{fig:inter_k-1}
\end{figure}
\paragraph{\fbox{$\tau\in J_{h}$}} (Interactions with the $(2,0)$-wave).
Since no wave of order $\le h-1$ interacts, \eqref{h=k-1b-XXX} reduces to
\begin{equation}\label{h=k-1b-XXX-caso-J}
[\Delta F_{h+1}]_+ \le \mu [\Delta F_{h}]_- \,.
\end{equation}
To prove \eqref{h=k-1b-XXX-caso-J}, we first consider the case where the Pseudo Accurate solver is used, see Figure \ref{fig:inter_k-1}{\em (b)}.
Assume that a $1$-wave $\delta_1$ of order $h$ interacts with the $(2,0)$-wave. By \eqref{eq:patterns}, the reflected wave $\varepsilon_3$ and the
transmitted wave $\varepsilon_1$ are of the same type as the interacting wave. If $\delta_1>0$, then $\varepsilon_1>0$ and $\varepsilon_3>0$;
by Lemma \ref{lem:interazioni} this leads to
\begin{equation*}
\Delta F_{h} =\Delta L_{h}+K\Delta Q_{h}\le \frac{\delta_2|\delta_1|}{2}-K\delta_2|\delta_1|=-(2K-1) \frac{\delta_2|\delta_1|}{2}<0
\end{equation*}
by \eqref{cond_K_xi} and then, because of \eqref{eq:mu}, to
\begin{equation*}
[\Delta F_{h+1}]_+=\Delta L_{h+1} =|\varepsilon_3|\le \frac{\delta_2|\delta_1|}{2}\le \frac{1}{2K-1}[\Delta F_{h}]_-\le \mu[\Delta F_{h}]_-\,.
\end{equation*}
The last estimate is also valid when $\delta_1<0$
(the only difference is that in the previous computations there is a factor $\xi$ both in $\Delta F_{h}$ and in $\Delta F_{h+1}$).
On the other hand, if we consider the interaction with a wave $\delta_3$ of order $h$ belonging to the third family, then the reflected
wave $\varepsilon_1$ will be of a type different from that of $\delta_3$ and $\varepsilon_3$. In this case, we first suppose $\delta_3,\varepsilon_3>0$; then, $\varepsilon_1<0$. As a consequence we have
\begin{equation*}
\Delta F_{h}=-|\varepsilon_1|-K\delta_2|\delta_3|\le -(1+2K)|\varepsilon_1|
\end{equation*}
and, therefore,
\begin{equation*}
[\Delta F_{h+1}]_+=\xi|\varepsilon_1|=\frac{\xi}{1+2K}\left[(1+2K)|\varepsilon_1|\right]\leq\frac{\xi}{1+2K}[\Delta F_{h}]_-\le \mu[\Delta F_{h}]_-\,,
\end{equation*}
because of \eqref{eq:mu}. In the other case, i.e. when $\delta_3,\varepsilon_3<0$ and $\varepsilon_1>0$, we have
\begin{equation*}
\Delta F_{h}
=-\xi|\varepsilon_1| -K\xi \delta_2|\delta_3|
\leq-\xi(1+2K)|\varepsilon_1|
\end{equation*}
and
\begin{equation*}
[\Delta F_{h+1}]_+=
|\varepsilon_1|
\le\frac{1}{\xi(1+2K)}[\Delta F_{h}]_{-}\le \mu[\Delta F_{h}]_-\,.
\end{equation*}
Now, we consider the case when the interacting wave has strength $|\delta|<\rho$ and then the Pseudo Simplified solver is used.
In this case a non-physical error of size $|\varepsilon_{2,0}-\delta_{2,0}|$ and order $h+1$ appears. Thus, again by Lemma \ref{lem:interazioni},
\begin{equation*}
0< \Delta F_{h+1} = K_{np} \Delta L_{h+1} ^0 \le K_{np} C_o \delta_2 |\delta|,\qquad
\Delta L_{h}=0\,,\qquad \Delta Q_{h} \le - \delta_2|\delta|\,.
\end{equation*}
Consequently, $[\Delta F_{h}]_- \ge K\delta_2|\delta|$ and
\begin{equation*}
[\Delta F_{h+1} ]_+ \le \frac{K_{np} C_o}{K}[\Delta F_{h}]_-\le \mu[\Delta F_{h}]_- \,.
\end{equation*}
Then (\ref{h=k-1b-XXX-caso-J}) is proved.
Finally we notice that, in all the above cases for $\tau\in J_{h}$, \eqref{Fk-segni-1} holds.
\end{proofof}
Now, we proceed similarly as in \cite[Proposition 6.7]{amadori-corli-siam} to obtain a recursive estimate for $F_k$.
Indeed, the functional $F_k$ increases at times $\tau \in {\cal T}_{k-1}$, it decreases at $\tau \in {\cal T}_k$,
while it has no definite sign at times $\tau\in {\cal T}_h$ with $h\ge k+1$.
For $F_1$ we have:
\begin{equation}\label{F1-0}
F_1(t)= F_1(0) - \sum_{{\cal T}_1} [\Delta F_1]_- + \sum_{h>1} \sum_{{\cal T}_h} \Delta F_1\,,
\end{equation}
while for $F_k$ with $k\ge 2$ we use that $F_k(0)=0$
to obtain
\begin{equation}\label{Fk-0}
F_k(t)= \sum_{{\cal T}_{k-1}} [\Delta F_k]_+ - \sum_{{\cal T}_k} [\Delta F_k]_- + \sum_{h>k} \sum_{{\cal T}_h} \Delta F_k \,.
\end{equation}
In the formulas above we assumed that summations are taken over interaction times $\tau<t$; the same notation is used in what follows.
We consider now the last terms in \eqref{F1-0}, \eqref{Fk-0}:
\begin{equation*}
\sum_{h>k} \sum_{{\cal T}_h} \Delta F_k\,,\qquad k\ge 1\,.
\end{equation*}
The above contribution is different from zero (and hence possibly positive) only if the interaction involves two waves of the same family,
one of order $k$ and the other of order $h$, with $h>k$. We denote by ${\cal T}_{h,k}$ the set of times at which an interaction of this type occurs.
Clearly ${\cal T}_{h,k}\subset {\cal T}_h$.
Moreover, we define the quantity
\begin{equation}\label{alpha-k}
\alpha_k(t)= \sum_{\tau \in {\cal T}_{k-1},\,\tau<t} [\Delta F_k(\tau)]_+ \,,\qquad k\ge 2\,,
\end{equation}
that is, the first term on the right hand side of \eqref{Fk-0}.
Hence we rewrite \eqref{F1-0}, \eqref{Fk-0} as
\begin{align}\label{F1}
0\le F_1(t)&= F_1(0) - \sum_{{\cal T}_1} [\Delta F_1]_- + \sum_{ h> 1} \sum_{{\cal T}_{h,1}} \Delta F_1\,,\\
\label{Fk}
0\le F_k(t)&= \alpha_k - \sum_{{\cal T}_k} [\Delta F_k]_- + \sum_{h> k} \sum_{{\cal T}_{h,k}} \Delta F_k\,, \qquad k\ge 2\,.
\end{align}
\begin{proposition}\label{prop:alpha} For $k\ge2$ one has
\begin{equation}\label{alpha-k-estimate}
\alpha_k \le \mu^{k-1} F_1(0) + \sum_{h\ge k } \sum_{\ell=1}^{k-1} \sum_{{\cal T}_{h,\ell}} \Delta F_\ell \,.
\end{equation}
\end{proposition}
\begin{proof} For $k=2$, we use \eqref{h=k-1b-XXX} and the positivity of $F_1$ to get
\begin{align*}
\alpha_2 &= \sum_{{\cal T}_{1}} [\Delta F_2]_+ \,\le\, \mu \,\sum_{{\cal T}_{1}} [\Delta F_1]_-
\,\le\, \mu\left\{ F_1(0) + \sum_{h> 1} \sum_{{\cal T}_{h,1}} \Delta F_1 \right\}
\\&\le
\mu F_1(0) + \sum_{h\ge 2 } \sum_{{\cal T}_{h,1}} \Delta F_1\,,
\end{align*}
which is \eqref{alpha-k-estimate} for $k=2$.
By induction, assume that \eqref{alpha-k-estimate} holds for some $k\ge2$. Since $F_k\ge 0$, from \eqref{Fk} we get
\begin{equation*}
\sum_{{\cal T}_k} [\Delta F_k]_- \le \alpha_k \,+\, \sum_{h> k} \sum_{{\cal T}_{h,k}} \Delta F_k\,.
\end{equation*}
Now, by definition \eqref{alpha-k}, by estimate \eqref{h=k-1b-XXX} and the previous inequality we find
\begin{align*}
\alpha_{k+1}= \sum_{{\cal T}_{k}} [\Delta F_{k+1}]_+ &\le \mu \sum_{{\cal T}_{k}} [\Delta F_{k}]_-
\,-\, \mu \sum_{\ell<k}\sum_{{\cal T}_{k,\ell}} \Delta F_\ell\\
&\le \mu \alpha_k \,+\, \mu \sum_{h> k} \sum_{{\cal T}_{h,k}} \Delta F_k \,-\, \mu \sum_{\ell<k}\sum_{{\cal T}_{k,\ell}} \Delta F_\ell\,.
\end{align*}
By using the induction hypothesis \eqref{alpha-k-estimate}, we get
\begin{equation*}
\alpha_{k+1} \le \mu^{k} F_1(0) + \mu \underbrace{\sum_{h,\ell \atop h\ge k>\ell } \sum_{{\cal T}_{h,\ell}} \Delta F_\ell}_{(I)}
\,+\, \mu \sum_{h> k} \sum_{{\cal T}_{h,k}} \Delta F_k \,-\, \mu \underbrace{\sum_{\ell<k}\sum_{{\cal T}_{k,\ell}} \Delta F_\ell}_{(I\!I)}\,.
\end{equation*}
Notice that
\begin{equation*}
(I)= (I\!I) +
\sum_{h,\ell \atop h> k>\ell } \sum_{{\cal T}_{h,\ell}}
\Delta F_\ell\,,
\end{equation*}
so that
\begin{align*}
\alpha_{k+1} &\le \mu^{k} F_1(0) + \mu \sum_{h,\ell \atop h> k>\ell } \sum_{{\cal T}_{h,\ell}} \Delta F_\ell\,+\,
\mu \sum_{h> k} \sum_{{\cal T}_{h,k}} \Delta F_k \\
&= \mu^{k} F_1(0) + \mu \sum_{h,\ell \atop h> k\ge \ell } \sum_{{\cal T}_{h,\ell}} \Delta F_\ell
\end{align*}
from which we deduce \eqref{alpha-k-estimate} for $k+1$, since $\mu<1$.
\end{proof}
\begin{proposition}\label{prop:tilde-Fk} For $k\ge2$ one has
\begin{equation}\label{estimate-tildeFj}
\tilde F_k(t) \ \dot =\ \sum_{j\ge k} F_j(t) \le {\mu^{k-1}} F_1(0)\,.
\end{equation}
\end{proposition}
\begin{proof}
For $k\ge 2$ we have $\tilde F_k(0)=0$. Moreover, we also deduce:
\begin{itemize}
\item $\Delta \tilde F_k(\tau)=0$ for $\tau\in {\cal T}_h$, $h\le k-2$, by \eqref{Fk-segni-2};
\item $\Delta \tilde F_k(\tau)= \Delta F_k(\tau) >0$ for $\tau\in {\cal T}_{k-1}$, by \eqref{Fk-segni-1};
\item at last, for all $\tau\in {\cal T}_h$, $h\ge k$,
\begin{equation*}
\Delta \tilde F_k(\tau) \le - \sum_{\ell=1}^{k-1} \Delta F_\ell(\tau)\,,
\end{equation*}
by the property $\Delta F(\tau)<0$, see Remark \ref{ref:DeltaF<0}.
\end{itemize}
As a consequence of the above properties, using also \eqref{alpha-k} and \eqref{alpha-k-estimate}, we find
\begin{align*}
\tilde F_k(t) &= \alpha_k + \sum_{h\ge k} \sum_{{\cal T}_{h}} \Delta \tilde F_k \\
&\le \mu^{k-1} F_1(0) + \sum_{h\ge k } \sum_{\ell=1}^{k-1} \sum_{{\cal T}_{h,\ell}} \Delta F_\ell -
\sum_{h\ge k} \sum_{\ell=1}^{k-1} \sum_{{\cal T}_{h,\ell}} \Delta F_\ell \,=\, \mu^{k-1} F_1(0)\,.
\end{align*}
\end{proof}
We can now proceed to determine parameters $\rho$ and $\eta$ as in \cite{amadori-corli-siam}.
Fix $\eta>0$ such that $\eta=\eta_\nu\rightarrow 0$ as $\nu\rightarrow \infty$ and estimate the total number of waves of order $<k$.
Then, for the strength of the composite wave it holds
\begin{align*}
|\gamma_{2,0}|(t)&\le \tilde{L}_k(t)+\sum_{h<k \atop\tau_h<t}|\varepsilon_{2,0}-\delta_{2,0}|(\tau_h) \\
&\le \mu^{k-1}\cdot L(0) \cdot\left( 1
+ K \delta_2\right) + C_o\rho\, \delta_2\, [\text{number of fronts of order $<k$}]<\frac{1}{\nu}\,,
\end{align*}
by choosing $k$ sufficiently large to have the first term $\le 1/(2\nu)$ and, then, $\rho=\rho_{\nu}$ small enough to have
the second term also $\le 1/(2\nu)$.
\begin{remark}\rm\label{rem:65}
Proposition \ref{prop:alpha} improves Lemma 6.6 in \cite{amadori-corli-siam}, because of $\Delta F_\ell$ on the right hand side of
\eqref{alpha-k-estimate} in place of $[\Delta F_\ell]_+$. This is obtained under the same local interaction estimates \eqref{Fk-segni-1}--\eqref{h=k-1b-XXX}. Moreover, Proposition~\ref{prop:tilde-Fk} is only based on
Proposition \ref{prop:alpha} and on $\Delta F<0$. Hence the same argument could be applied to the general case treated in \cite{amadori-corli-siam}, and improve the related result by avoiding some technical assumptions due to the presence of non-physical waves.
\end{remark}
\subsection{Proof of Theorem \ref{thm:main} and a comparison}\label{subsec:proof}
In this last subsection we complete the proof of Theorem \ref{thm:main} and compare the result obtained here with those proved in \cite{amadori-corli-siam,amadori-corli-source}.
\begin{proofof}{Theorem \ref{thm:main}} It only remains to reinterpret the choice of the parameter $m$
in terms of the assumption~\eqref{hyp2} on the initial data.
Recalling Proposition~\ref{global}, \eqref{ip_m_delta2} and since
\begin{equation*}
\bar{L}(0+)\le\frac{1}{2}\mathrm{TV}\left(\log(p_o)\right) + \frac{1}{2\inf a_o}\mathrm{TV}(u_o)\,,
\end{equation*}
we look for $m$ satisfying
\begin{align}
|\delta_2|<\frac{1}{{c(m)}}-1 \,=\, \frac{2}{\cosh m -1} &\, \dot = \, w(m)\,,\label{hyp1}
\\
\mathrm{TV}\left(\log(p_o)\right) \,+\, \frac{1}{\min\{a_r,a_\ell\}}
\mathrm{TV}(u_o) < 2m\hspace{.8pt} c^2(m) &\, \dot =\, z(m)\,. \label{hyp2-1}
\end{align}
Notice that $w(m)$ is strictly decreasing from ${\mathbb{R}}_+$ to ${\mathbb{R}}_+$, while $z(m)$ is strictly increasing on the same sets.
Since $|\delta_2|<2$, we restrict the choice of the parameter to have $w(m)\in (0,2)$, that is $\cosh m>2$ and then
\begin{equation*}
m>\bar m = \cosh^{-1}(2) = \log\left(2+\sqrt 3\right) \,.
\end{equation*}
We can now define
\begin{equation}\label{K}
K(r)\,\dot =\, z\left(w^{-1}(r) \right)\,,\qquad r\in (0,2)\,,
\end{equation}
which can be written explicitly as
\begin{equation}
K(r)=\frac{2}{(1+r)^2}\,c^{-1}\left(\frac{1}{1+r}\right)
=\frac{2}{(1+r)^2} \log\left(\frac{2}{r}+1+\frac{2}{r}\sqrt{1+r}\right)\,.
\label{eq:K-explicit}
\end{equation}
It is easy to check that $K$ satisfies properties \eqref{rage}.
Hence, if the assumption \eqref{hyp2} holds, namely
\begin{equation*}
\mathrm{TV}\left(\log(p_o)\right) \,+\, \frac{1}{\min\{a_r,a_\ell\}} \mathrm{TV}(u_o) < K(|\delta_2|) \,,
\end{equation*}
it is easy to prove that one can choose $m>\bar m$ such that \eqref{hyp1}, \eqref{hyp2-1} hold. Finally, in order to pass to the limit and prove the convergence to a weak solution, one can proceed as in \cite{Bressanbook}. Theorem \ref{thm:main} is, therefore, completely proved.
\end{proofof}
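A quick numerical check (purely illustrative) of the explicit expression \eqref{eq:K-explicit} and of the limits \eqref{rage} can be carried out as follows.
\begin{verbatim}
import math

def K(r):
    # the explicit formula for K(r)
    return 2.0 / (1.0 + r)**2 * \
           math.log(2.0 / r + 1.0 + 2.0 / r * math.sqrt(1.0 + r))

for r in (1e-6, 0.5, 1.0, 1.999):
    print(r, K(r))
# K blows up as r -> 0+ and tends to (2/9) log(2 + sqrt(3)) ~ 0.2927 as r -> 2-
print(2.0 / 9.0 * math.log(2.0 + math.sqrt(3.0)))
\end{verbatim}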
Now, we make a comparison between Theorem~\ref{thm:main} and the main result in \cite{amadori-corli-siam}, which was proved to be equivalent to Theorem $3.1$ of \cite{amadori-corli-source}. Condition $(3.7)$ of the latter theorem, when applied to the current problem, can be written as
\begin{equation}\label{acs37}
\mathrm{TV}\left(\log(p_o)\right) + \frac{1}{\min\{a_r,a_\ell\}}
\mathrm{TV}(u_o) < H(|\delta_2|)\,,
\end{equation}
where the function $H(r)$ is only defined for $r<1/2$ by
\begin{equation}\label{H}
H(r)\doteq 2(1-2r)k^{-1}(r)\,, \qquad k(m) = \frac{1-\sqrt{d(m)}}{2-\sqrt{d(m)}}\,.
\end{equation}
Here above, $d(m)$ is the damping coefficient introduced in \cite[Lemma 5.6]{amadori-corli-siam}, see Remark~\ref{rem-d(m)}.
\begin{figure}
\caption{The functions $H$ (dashed line) and $K$ (solid line). The horizontal dotted line gives the asymptotic value $\frac29\log(2+\sqrt3)$ of $K$ for $r\to 2-$. }
\label{fig:HK}
\end{figure}
Hence, the result of Theorem \ref{thm:main} is new for $1/2\le|\delta_2|<2$, including the case where the $2$-wave may be arbitrarily large, i.e. $|\delta_2|$ close to $2$. In order to compare \eqref{acs37} with \eqref{hyp2} in the common range $|\delta_2|<1/2$,
we set $r=|\delta_2|\in(0,1/2)$ and rewrite $H$ as
\begin{align}
H(r)&=2(1-2r)\,d^{-1}\left(\bigl(\frac{1-2r}{1-r}\bigr)^2\right)\,.
\nonumber
\end{align}
Comparing this expression with \eqref{eq:K-explicit}, we notice that $1/(1+r)^2>(1-2r)$. Moreover, we have
\begin{equation*}
\frac{1}{1+r}> \bigl(\frac{1-2r}{1-r}\bigr)^2\,;
\end{equation*}
since $c<d$ and $c$ is strictly increasing, we also have $c^{-1}\left(1/(1+r)\right)>k^{-1}(r)$. We deduce that $K(r)>H(r)$ for $0\le r <1/2$; see Figure~\ref{fig:HK}. Therefore, the conditions on the initial data obtained here considerably improve the ones required in the previous works \cite{amadori-corli-siam,amadori-corli-source}, although the latter were given for a more general case.
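Both elementary inequalities used above admit a direct check (a sketch): for $0<r<1/2$ one has
\begin{equation*}
\frac{1}{(1+r)^2}-(1-2r)=\frac{r^2(3+2r)}{(1+r)^2}>0,
\qquad
\frac{1}{1+r}-\Bigl(\frac{1-2r}{1-r}\Bigr)^2=\frac{r\left(1+r-4r^2\right)}{(1+r)(1-r)^2}>0,
\end{equation*}
the last inequality because $1+r-4r^2>0$ on $(0,1/2)$.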
\appendix
\section{Another interpretation of the damping coefficient $c$}
\setcounter{equation}{0}
The function $c$ introduced in \eqref{eq:chi_def} plays a fundamental role in controlling the size of the weight $\xi$ assigned to shock waves in the front-tracking scheme, see Proposition \ref{prop:DeltaF33}. In this appendix we show that the same coefficient $c$ also appears in the stability analysis of the Riemann problems of system \eqref{eq:system}, see \cite{Schochet-Glimm,AmCo12-Chambery}.
In \cite{Schochet-Glimm} Schochet proves that if the solution of a Riemann problem satisfies some {\em finiteness conditions} (also called $BV$-stability conditions), then {\em small} perturbations of bounded variation of its initial data give rise to a solution defined globally in time. The analysis for system \eqref{eq:system} was done in \cite{AmCo12-Chambery}, where it was proved that there are solutions to suitable Riemann problems that do not satisfy such conditions.
As in \cite[Lemma 1.2]{AmCo12-Chambery}, let us consider the pattern formed by a $1$-shock $\varepsilon_1$, a $2$-wave $\varepsilon_2$ and a $3$-shock $\varepsilon_3$.
Maintaining the notation of that paper, we denote the states lying between waves with $U_0,U_1,U_2,U_3$, from left to right; see Figure \ref{fig:RP}.
\begin{figure}
\caption{States for the Riemann problem.}
\label{fig:RP}
\end{figure}
We use $c_1=a_1/v_1$, $c_2=a_2/v_2$ to indicate the characteristic speeds and
$s_-=-a_1/\sqrt{v_1v_0}$, $s_+=a_2/\sqrt{v_2v_3}$ to indicate the speeds of the
shocks of the first and third family, respectively. Finally, we write $L_{\pm}$,
$R_{\pm}$ for the left and right eigenvectors of the first and third family,
while we let $[U]_{\pm}$ be the variation of $U$ along the $1$- and $3$-shock.
Then, let us introduce the following quantities
\[
A=|R^{(-)}|=\left|\frac{c_1+s_-}{c_1-s_-}\cdot\frac{L_+(U_1)\cdot[U]_-}{L_-(U_1)\cdot[U]_-}\right|\,,\quad B=|R^{(+)}|=\left|\frac{c_2-s_+}{c_2+s_+}\cdot\frac{L_-(U_2)\cdot[U]_+}{L_+(U_2)\cdot [U]_+}\right|\,,
\]
which represent some coefficients of the reflection matrices $R^{(-)}_{>,\le}$
and $R^{(+)}_{<,\ge}$ appearing in \cite{AmCo12-Chambery}.
\begin{lemma}\label{lem:Schochet}
Under the notation in \eqref{eq:chi_def}, we have $A=c(\varepsilon_1)$ and $B=c(\varepsilon_3)$.
\end{lemma}
\begin{proof} First, notice that
\[
\frac{L_+(U_1)\cdot[U]_-}{L_-(U_1)\cdot[U]_-}=\frac{-c_1(v_1-v_0)+(u_1-u_0)}{c_1(v_1-v_0)+(u_1-u_0)}
\]
and, recalling that along a shock of the first family it holds $u_1-u_0=-s_-(v_1-v_0)$, the previous quantity becomes $(-c_1-s_-)/(c_1-s_-)$. Therefore,
\[
A=\bigl(\frac{c_1+s_-}{c_1-s_-}\bigr)^2
=\left(\frac{{v_0}/{v_1}-\sqrt{{v_0}/{v_1}}}{{v_0}/{v_1}+\sqrt{{v_0}/{v_1}}}\right)^2.
\]
By definition \eqref{eq:strengths}, we get ${v_0}/{v_1}=\exp(-2\varepsilon_1)$ and, finally, we find
\begin{align*}
A&=
\bigl(\frac{\exp(-\varepsilon_1/2)-\exp(\varepsilon_1/2)}{\exp(-\varepsilon_1/2)+\exp(\varepsilon_1/2)}\bigr)^2=
\tanh^2(\varepsilon_1/2)=\frac{\cosh(\varepsilon_1)-1}{\cosh(\varepsilon_1)+1}=c(\varepsilon_1)\,.
\end{align*}
By similar computations we get also
$B=(\cosh(\varepsilon_3)-1)/(\cosh(\varepsilon_3)+1)=c(\varepsilon_3)$.
\end{proof}
By Lemma \ref{lem:Schochet}, the finiteness condition of \cite{Schochet-Glimm} for the above pattern of two shock waves and the contact discontinuity can be written as
\begin{equation}\label{eq:Schochet-stable}
c(\varepsilon_1)c(\varepsilon_3)\varepsilon_2^2 - \left(c(\varepsilon_1)+c(\varepsilon_3)\right)|\varepsilon_2| + 2\left(1-c(\varepsilon_1)c(\varepsilon_3)\right)>0\,.
\end{equation}
This condition makes explicit the analogous one provided in \cite[(14)]{AmCo12-Chambery}. We remark that condition \eqref{eq:Schochet-stable} is satisfied for {\em every} shock $\varepsilon_3$ (for example) if it holds in the degenerate case $c(\varepsilon_3)=1$, see \cite{AmCo12-Chambery}; in such a case, it simply reduces to
\[
1+|\varepsilon_2| \le \frac{1}{c(\varepsilon_1)},
\]
which is reminiscent of \eqref{cond_xi_delta2}.
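For completeness, here is a sketch of that reduction: setting $c_1=c(\varepsilon_1)$ and $c(\varepsilon_3)=1$, the left-hand side of \eqref{eq:Schochet-stable} factors as
\begin{equation*}
c_1\varepsilon_2^2-(c_1+1)|\varepsilon_2|+2(1-c_1)=c_1\left(|\varepsilon_2|-2\right)\left(|\varepsilon_2|-\Bigl(\frac{1}{c_1}-1\Bigr)\right),
\end{equation*}
so that, for $|\varepsilon_2|<2$, the condition amounts to $|\varepsilon_2|<\frac{1}{c_1}-1$.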
\end{document} |
\begin{document}
\title{Kazhdan-Lusztig parameters and extended quotients}
\author{Anne-Marie Aubert, Paul Baum and Roger Plymen}
\date{}
\maketitle
\begin{abstract}The Kazhdan-Lusztig parameters are important parameters in the representation theory of $p$-adic groups and affine Hecke algebras. We show that the Kazhdan-Lusztig parameters have a definite geometric structure, namely that of the extended quotient $T{/\!/} W$ of a complex torus $T$ by a finite Weyl group $W$. More generally, we show that the corresponding parameters, in the principal series of a reductive $p$-adic group with connected centre, admit such a geometric structure. This confirms, in a special case, a recent geometric conjecture in \cite{ABP}.
In the course of this study, we provide a unified framework for Kazhdan-Lusztig parameters on the one hand, and Springer parameters on the other hand.
Our framework contains a complex parameter $s$, and allows us to \emph{interpolate} between $s = 1$ and $s = \sqrt q$. When $s = 1$, we recover the parameters which occur in the Springer correspondence; when $s = \sqrt q$, we recover the Kazhdan-Lusztig parameters.
\end{abstract}
\section{Introduction} The Kazhdan-Lusztig parameters are important parameters in the representation theory of $p$-adic groups and affine Hecke algebras. We show that the Kazhdan-Lusztig parameters have a definite geometric structure, namely that of the extended quotient $T{/\!/} W$ of a complex torus $T$ by a finite Weyl group $W$. More generally, we show that the corresponding parameters, in the principal series of a reductive $p$-adic group with connected centre, admit such a geometric structure. This confirms, in a special case, a recent geometric conjecture in \cite{ABP}.
In the course of this study, we provide a unified framework for Kazhdan-Lusztig parameters on the one hand, and Springer parameters on the other hand.
Our framework contains a complex parameter $s$, and allows us to \emph{interpolate} between $s = 1$ and $s = \sqrt q$. When $s = 1$, we recover the parameters which occur in the Springer correspondence; when $s = \sqrt q$, we recover the Kazhdan-Lusztig parameters, see \S5. Here, $q = q_F$ is the cardinality of the residue field of the underlying local field $F$.
Let $\mathcal{G}$ denote a reductive split $p$-adic group with connected centre, maximal split torus $\mathcal{T}$.
Let $G$, $T$ denote the Langlands dual of $\mathcal{G}$, $\mathcal{T}$. Then the quotient variety $T/W$ plays a central role. For example, we have the Satake isomorphism
\[
\mathcal{H}(\mathcal{G}, \mathcal{K}) \simeq \mathcal{O}(T/W)
\]
where $\mathcal{O}(T/W)$ denotes the coordinate algebra of $T/W$, see \cite[2.2.1]{Sh}, and ${\mathcal H}(\mathcal{G}, \mathcal{K})$ denotes the algebra (under convolution) of $\mathcal{K}$-bi-invariant functions of compact support on $\mathcal{G}$, where $\mathcal{K} = \mathcal{G}(\mathfrak{o}_F)$ .
In this article, we will show that the \emph{extended quotient} plays a central role in the context of the Kazhdan-Lusztig parameters.
We will prove that the extended quotient $T{/\!/} W$ is a model for the Kazhdan-Lusztig parameters, see \S4. More generally, let
\[
{\mathfrak s} = [\mathcal{T}, \chi]_{\mathcal{G}}
\]
be a point in the Bernstein spectrum of $\mathcal{G}$. We prove that the extended quotient $T{/\!/} W^{{\mathfrak s}}$ attached to ${\mathfrak s}$ is a model of the corresponding parameters attached to ${\mathfrak s}$. This is our main result, Theorem 4.1. \emph{The principal series of a reductive $p$-adic group with connected centre has a definite geometric structure. The principal series is a disjoint union: each component is the extended quotient of the dual torus $T$ by the finite Weyl group $W^{{\mathfrak s}}$ attached to ${\mathfrak s}$.} This confirms, in a special case, a recent geometric conjecture in \cite{ABP}.
We also show in \S4 that our bijection is compatible with base change, in the special case of the irreducible smooth representations of ${\rm GL}(n)$ which admit nonzero Iwahori fixed vectors.
The details of our interpolation between Springer parameters and Kazhdan-Lusztig parameters will be given in \S5. Our formulation creates a projection
\[
\pi_{\sqrt q} : T{/\!/} W \to T/W
\]
which provides a model of the \emph{infinitesimal character}.
We conclude in \S6 with some carefully chosen examples.
Since the crossed product algebra ${\mathcal O}(T)\rtimes W$ is
isomorphic to \[\mathbb{C}[X(T)]\rtimes W\,\simeq\,\mathbb{C}[X(T)\rtimes W],\]
we obtain a bijection
\[{\rm Prim}\,\mathbb{C}[X(T)\rtimes W]\to T{/\!/} W\] where ${\rm Prim}$ denotes primitive ideals.
By composing this bijection with the bijection $\mu$ in Theorem 4.1, we finally get a bijection
\[{\rm Prim}\,\mathbb{C}[X(T)\rtimes W]\to{\mathfrak P}(G)\]
where ${\mathfrak P}(G)$ denotes the Kazhdan-Lusztig parameters.
Let ${\mathcal I}$ be a standard Iwahori subgroup in ${\mathcal G}$ and let ${\mathcal H}({\mathcal G},{\mathcal I})$
denote the corresponding Iwahori-Hecke algebra, {\it i.e.,\,} the algebra (for the
convolution product) of compactly
supported ${\mathcal I}$-biinvariant functions on ${\mathcal G}$. The algebra is
isomorphic to \[
{\mathcal H}(X(T)\rtimes W,q)
\] the Hecke algebra of the extended
affine Weyl group
$X(T)\rtimes W$, with parameter $q$. The simple modules of
${\mathcal H}({\mathcal G},{\mathcal I})$ are parametrized by ${\mathfrak P}(G)$ \cite{KL}.
Hence ${\mathfrak P}(G)$ provides a parametrization of the simple modules
of both the Iwahori-Hecke algebra ${\mathcal H}(X(T)\rtimes W,q)$ and of
the group algebra of $X(T)\rtimes W$ (that is, the algebra
${\mathcal H}(X(T)\rtimes W,1)$).
Note that the existence of a bijection between these sets of simple
modules was already proved by Lusztig (see for instance
\cite[p.~81, assertion~(a)]{LuAst}).
Lusztig's construction needs to pass through the asymptotic Hecke algebra $J$, while
we have replaced the use of $J$ by the use of the extended
quotient $T{/\!/} W$ (which is much simpler to construct).
\section{Extended quotients} Let $\mathcal{O}(T)$ denote the coordinate algebra of the complex torus $T$. In noncommutative geometry, one of the elementary, yet fundamental, concepts is that of \emph{noncommutative quotient} \cite[Example 2.5.3]{K}. The \emph{noncommutative quotient} of $T$ by $W$ is the crossed product algebra
\[
\mathcal{O}(T) \rtimes W.
\]
This is a noncommutative unital $\mathbb{C}$-algebra. We need to filter this idea through periodic cyclic homology. We have an isomorphism
\[
{\rm HP}_*(\mathcal{O}(T) \rtimes W) \simeq H^*(T{/\!/} W ; \mathbb{C})
\]
where ${\rm HP}_*$ denotes periodic cyclic homology, $H^*$ denotes cohomology, and $T{/\!/} W$ is the extended quotient of $T$ by $W$, see \cite{B}. We recall the definition of the extended quotient $T{/\!/} W$.
\begin{defn} Let
\[
\widetilde{T} = \{(t,w) \in T \times W : w \cdot t = t\}.
\]
The extended quotient
is the quotient
\[
T{/\!/} W : = \widetilde{T}/W
\]
where $W$ acts via $\alpha(t,w) = (\alpha \cdot t, \alpha w \alpha^{-1})$ with $\alpha \in W$.
\end{defn}
Let $W(t)$ denote the isotropy subgroup of $t$. Let ${\rm conj} (W(t))$ denote the set of conjugacy classes in $W(t)$, and let $[w]$ denote
the conjugacy class of $w$ in $W(t)$. The map
\[
\{(t,w) : t \in T, w \in W(t)\} \to \{(t,c) : t \in T, c \in {\rm conj} (W(t))\}
\]
\[
(t,w) \mapsto (t,[w])
\]
induces a canonical bijection
\[
\{(t,w) : t \in T, w \in W(t)\}/ W \to \{(t,c) : t \in T, c \in {\rm conj}(W(t))\}/ W
\]
where $W$ acts via $\alpha (t,c) = ( \alpha \cdot t, [ \alpha x \alpha^{-1}])$ with $x \in c$.
Let ${\rm Irr}(W(t))$ denote the set of equivalence classes of irreducible representations of $W(t)$. A choice of bijection between
${\rm conj}(W(t))$ and ${\rm Irr}(W(t))$ then creates a bijection
\[
T{/\!/} W \simeq \{( t, \tau) : t \in T, \tau \in {\rm Irr}(W(t))\}/ W
\]
where $W$ acts via $\alpha (t, \tau) = (\alpha \cdot t, \alpha_*(\tau))$.
Here, $\alpha_*(\tau)$ is the push-forward of $\tau$ to an irreducible representation of $W(\alpha \cdot t)$.
This leads us to
\begin{defn}
The extended quotient of the second kind is
\[
(T{/\!/} W)_2: = \{( t, \tau) : t \in T, \tau \in {\rm Irr}(W(t))\}/ W
\]
\end{defn}
We then have a non-canonical bijection
\[
T{/\!/} W \simeq (T{/\!/} W)_2.
\]
Let $T^w$ denote the fixed set $\{t \in T : w \cdot t = t\}$, and let $Z(w)$ denote the centralizer of $w$ in $W$.
We have
\begin{align} \label{eqn:(1)}
T{/\!/} W = \bigsqcup T^w /Z(w)
\end{align}
where one $w$ is chosen in each conjugacy class in $W$. Therefore $T{/\!/} W$ is a complex affine algebraic variety. The number of irreducible
components in $T{/\!/} W$ is bounded below by $|{\rm conj}(W)|$.
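As an elementary illustration (not needed in the sequel), take $T = \mathbb{C}^{\times}$ and $W = \mathbb{Z}/2\mathbb{Z}$ acting by $w \cdot t = t^{-1}$. Then $T/W$ is an affine line (with coordinate $t + t^{-1}$), while $T^w = \{\pm 1\}$ and $Z(w) = W$ acts trivially on $T^w$, so that
\[
T{/\!/} W \simeq T/W \sqcup \{+1\} \sqcup \{-1\},
\]
which has three irreducible components, whereas $|{\rm conj}(W)| = 2$: the lower bound need not be attained.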
The Jacobson topology on the primitive ideal spectrum of $\mathcal{O}(T) \rtimes W$
induces a topology on $(T{/\!/} W)_2$ such that the identity map
\[
T{/\!/} W \to (T{/\!/} W)_2
\]
is continuous. From the point of view of noncommutative geometry \cite{K}, the extended quotient of the second kind is a \emph{noncommutative complex affine algebraic variety}.
The transformation groupoid $ T \rtimes W$ is naturally an \'etale groupoid, see \cite[p. 45]{K}. Its groupoid algebra $\mathbb{C} [T \rtimes W]$ is the crossed product algebra
\[
\mathcal{O}(T) \rtimes W .
\]
In the groupoid $T \rtimes W$, we have
\[
\text{source}(t,w) = t, \quad \text{target}(t,w) = w \cdot t
\]
so that the set
\[
\{(t,w) \in T \times W : w \cdot t = t\}
\]
comprises all the arrows which are \emph{loops}.
The decomposition of the groupoid $T \rtimes W$ into transitive groupoids leads naturally to Eqn.~(\ref{eqn:(1)}).
The groupoid $T \rtimes W$ seems to be a bridge between $T{/\!/} W$ and $(T{/\!/} W)_2$.
In the context of algebraic geometry, the extended quotient is known as the inertia stack \cite{M}, in which case the notation is
\[
I(T): = \widetilde{T}, \quad \quad [I(T)/W]: = T{/\!/} W.
\]
\section{The parameters for the principal series}
\label{sec:unram}
Let ${\mathcal W}_F$ denote the Weil group of $F$, let $I_F$ be the inertia
subgroup of ${\mathcal W}_F$. Let
${\rm Frob} \in {\mathcal W}_F$ denote a geometric Frobenius element (its image generates
${\mathcal W}_F/I_F \simeq \mathbb{Z}$). We have ${\mathcal W}_F/I_F = \langle{\rm Frob}\rangle$. We will think of this as a multiplicative group, with identity element $1$.
Let ${\mathfrak P}(G)$ denote the set of conjugacy classes in
$G$ of pairs $(\Phi,\rho)$ such that $\Phi$ is a morphism
\[
\Phi\colon {\mathcal W}_F/I_F \times {\rm SL}(2,\mathbb{C}) \to G\] which is
\emph{admissible}, {\it i.e.,\,} $\Phi(1, - )$ is a morphism of complex algebraic groups,
$\Phi({\rm Frob},1)$ is a semisimple element in
$G$, and $\rho$ is defined in the following way.
We will adopt the formulation of Reeder \cite{R}.
Choose a Borel subgroup $B_2$ in ${\rm SL}(2,\mathbb{C})$ and let
$S_{\Phi} = \Phi({\mathcal W}_F \times B_2)$, a solvable subgroup of $G$.
Let $\mathbf{B}^{\Phi}$ denote the variety of Borel subgroups of $G$ containing $S_{\Phi}$.
Let $G_{\Phi}$ be the centralizer
in $G$ of the image of $\Phi$. Then $G_{\Phi}$ acts naturally on $\mathbf{B}^{\Phi}$, and hence on the
singular homology $H_*(\mathbf{B}^{\Phi},\mathbb{C})$. Then $\rho$ is an irreducible representation of $G_{\Phi}$ which appears in the action
of $G_{\Phi}$ on $H_*(\mathbf{B}^{\Phi},\mathbb{C})$.
A Reeder parameter $(\Phi, \rho)$ determines a Kazhdan-Lusztig parameter $(\sigma, u, \rho)$ in the following way. Let
\[
u_0 =
\left(
\begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}
\right) , \quad
T_x =
\left(
\begin{array}{cc}
x & 0\\
0 & x^{-1}
\end{array}
\right)
\]
and set
\[
u = \Phi(1,u_0), \quad \sigma = \Phi({\rm Frob}, T_{\sqrt q})
\]
where $q$ is the cardinality of the residue field $k_F$. Then the triple $(\sigma, u, \rho)$ is a Kazhdan-Lusztig parameter.
Since $\Phi$ is a homomorphism and
\[
T_{\sqrt q} \, u_0 \, T_{\sqrt q}^{-1} = \left(
\begin{array}{cc}
1 & q\\
0 & 1
\end{array}
\right) = u_0^q
\]
it follows that
\[
\sigma u \sigma^{-1} = u^q.
\]
It is worth noting that the set ${\mathfrak P}(G)$ is $q$-independent.
We now move on to the rest of the principal series. We recall that
$\mathcal{G}$ denotes a reductive split $p$-adic group \emph{with
connected centre}, maximal split torus $\mathcal{T}$, and
$G$, $T$ denote the Langlands dual of $\mathcal{G}$, $\mathcal{T}$.
We assume in addition that the residual characteristic of $F$ is not a
torsion prime for $G$.
Let ${\mathfrak Q}(G)$ denote the set of conjugacy classes in
$G$ of pairs $(\Phi,\rho)$ such that $\Phi$ is a continuous morphism
\[
\Phi\colon {\mathcal W}_F\times {\rm SL}(2,\mathbb{C}) \to G\] which is
rational on ${\rm SL}(2,\mathbb{C})$ and such that $\Phi({\mathcal W}_F)$ consists of semisimple
elements in
$G$, and $\rho$ is defined in the following way.
Choose a Borel subgroup $B_2$ in ${\rm SL}(2,\mathbb{C})$ and let $S_{\Phi} = \Phi({\mathcal W}_F \times B_2)$.
Let $\mathbf{B}^{\Phi}$ denote the variety of Borel subgroups of $G$ containing
$S_{\Phi}$.
The variety $\mathbf{B}^{\Phi}$ is non-empty if and only if $\Phi$ factors
through the topological abelianization
${\mathcal W}_F^{{\rm ab}}:={\mathcal W}_F/\overline{[{\mathcal W}_F,{\mathcal W}_F]}$ of ${\mathcal W}_F$ (see
\cite[\S~4.2]{R}). We will assume that $\mathbf{B}^{\Phi}$ is non-empty, and we
will still denote by $\Phi$ the homomorphism
\[
\Phi\colon {\mathcal W}_F^{{\rm ab}}\times {\rm SL}(2,\mathbb{C}) \to G.\]
Let $I_F^{{\rm ab}}$ denote the image of $I_F$ in ${\mathcal W}_F^{{\rm ab}}$.
The choice of Frobenius ${\rm Frob}$ determines a splitting
\begin{equation} \label{eqn:splitting}
{\mathcal W}_F^{{\rm ab}}=I_F^{{\rm ab}}\times\langle{\rm Frob}\rangle.\end{equation}
Let $G_{\Phi}$ be the centralizer in $G$ of the image of $\Phi$. Then
$G_{\Phi}$ acts naturally on $\mathbf{B}^{\Phi}$, and hence on the
singular homology of $H_*(\mathbf{B}^{\Phi},\mathbb{C})$. Then $\rho$ is an
irreducible representation of $G_{\Phi}$ which appears in the action
of $G_{\Phi}$ on $H_*(\mathbf{B}^{\Phi},\mathbb{C})$.
Let $\chi$ be a smooth quasicharacter of ${\mathcal T}$ and let ${\mathfrak s} = [{\mathcal T},\chi]_{{\mathcal G}}$ be the point in the Bernstein spectrum ${\mathfrak B}({\mathcal G})$ determined by $\chi$. Let
\begin{equation} \label{eqn:Ws}
W^{{\mathfrak s}} = \{w \in W : w\cdot {\mathfrak s} = {\mathfrak s}\}.
\end{equation}
Let $X$ denote the rational co-character group of ${\mathcal T}$, identified with
the rational character group of $T$. Let ${\mathcal T}_0$ be the maximal compact
subgroup of ${\mathcal T}$. By choosing a uniformizer in $F$, we obtain a splitting
$${\mathcal T}={\mathcal T}_0\times X,$$ according to which
\[\chi = \lambda\otimes t,\]
where $\lambda$ is a character of ${\mathcal T}_0$, and $t\in T$. Let
$r_F\colon {\mathcal W}_F^{{\rm ab}}\to F^\times$ denote the reciprocity isomorphism of
abelian class field theory, and let
\begin{equation} \label{eqn:hl}
{{\widehat\lambda}}\colon I_F^{{\rm ab}}\to T\end{equation}
be the unique homomorphism satisfying
\begin{equation} \label{eqn:dd}
\eta\circ{{\widehat\lambda}}=\lambda\circ\eta\circ r_F,\quad \text{for all $\eta\in
X$},\end{equation}
where $\eta$ is viewed as a character of $T$ on the left side and as a
co-character of ${\mathcal T}$ on the right side of~(\ref{eqn:dd}).
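To illustrate the defining property~(\ref{eqn:dd}) in a concrete case (this example plays no role in the proofs): for ${\mathcal G} = {\rm GL}(n)$ one has ${\mathcal T}_0 \simeq (\mathfrak{o}_F^{\times})^n$ and $T \simeq (\mathbb{C}^{\times})^n$. Writing $\lambda = \lambda_1 \otimes \cdots \otimes \lambda_n$ and taking for $\eta$ the coordinate (co)characters, condition~(\ref{eqn:dd}) forces
\[
{{\widehat\lambda}}(w) = {\rm diag}\bigl(\lambda_1(r_F(w)), \ldots, \lambda_n(r_F(w))\bigr), \quad w \in I_F^{{\rm ab}},
\]
up to the chosen normalization of the reciprocity map $r_F$.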
Let $H$ denote the centralizer in $G$ of the image of ${{\widehat\lambda}}$:
\begin{equation} \label{eqn:H}
H=G_{{{\widehat\lambda}}}.\end{equation}
The assumption that $G$ has simply-connected derived group implies that
the group $H$ is connected (see \cite[p.~396]{Roc}).
Note that $H$ itself does not
have simply-connected derived group in general (for instance, if $G$ is
the exceptional group of type ${{\rm G}}_2$, and $\sigma$ is the tensor square
of a ramified quadratic character of $F^\times$ then $H={\rm SO}(4,\mathbb{C})$).
Let ${\mathfrak Q}(G)_{{{\widehat\lambda}}}$ be the subset of ${\mathfrak Q}(G)$ consisting of
the $G$-conjugacy classes of all the pairs $(\Phi,\rho)$ such that
$\Phi$ factors through ${\mathcal W}_F^{{\rm ab}}$ and
\[\Phi|_{I_F^{{\rm ab}}}={{\widehat\lambda}}.\]
The group $W^{\mathfrak s}$ defined in~(\ref{eqn:Ws}) is a Weyl group: it is the Weyl
group of $H$
(indeed, in the decomposition of \cite[Lemma~8.1~(i)]{Roc} the group
$C_\chi$ is trivial as proven on \cite[p.~396]{Roc}):
\[W^{\mathfrak s}=W_H.\]
\section{Main result}
\begin{thm} \label{thm:ps} There is a canonical bijection of the extended quotient of the second kind $(T{/\!/} W^{\mathfrak s})_2$ onto the set
${\mathfrak Q}(G)_{{{\widehat\lambda}}}$ of conjugacy classes of Reeder parameters attached to the point ${\mathfrak s}$ in the Bernstein spectrum of $\mathcal{G}$. It follows that there is a bijection
\[
\mu^{{\mathfrak s}} : T{/\!/} W^{{\mathfrak s}} \simeq {\mathfrak Q}(G)_{{{\widehat\lambda}}}
\]
so that the extended quotient $T{/\!/} W^{{\mathfrak s}}$ is a model for the Reeder parameters attached to the point ${\mathfrak s}$.
\end{thm}
The proof of this theorem requires a series of Lemmas. We recall that
\[
W^{{\mathfrak s}} = W_H.
\]
The plan of our proof is to begin with an element in the extended quotient of the second kind $(T{/\!/} W_H)_2$.
Lemmas 4.2 and 4.3 allow us to infer that $W_H(t)$ is a semidirect product
$W_{{\mathfrak G}(t)}\rtimes A_H(t)$. We now combine the Springer correspondence for $W_{{\mathfrak G}(t)}$ with Clifford theory for semidirect products (Clifford theory is a noncommutative version of the Mackey machine). This creates $4$ parameters
$(t,x,\varrho, \psi)$.
With this data, and the character $\lambda$ determined by the point $
{\mathfrak s}$, we construct a Reeder parameter $(\Phi, \rho)$ such that
$\Phi({\rm Frob},1)=t$, $\Phi(1,u_0)=\exp x$ and
the restriction of $\rho$ contains $\varrho$.
\begin{lem} \label{lem:disconnected}
Let $M$ be a reductive algebraic group. Let $M^0$ denote the connected
component of the identity in $M$. Let $T$ be a maximal torus of $M^0$ and
let $B$ be a Borel subgroup of $M^0$ containing $T$. Let
\[W_{M^0}(T):={\rm N}_{M^0}(T)/T\]
denote the Weyl group of $M^0$ with respect to $T$. We set
\[
W_M(T):={\rm N}_M(T)/T.\]
\begin{enumerate}
\item[{\rm (1)}]
The group $W_M(T)$ has the semidirect product decomposition:
\[W_M(T)=W_{M^0}(T)\rtimes ({\rm N}_M(T,B)/T),\]
where ${\rm N}_M(T,B)$ denotes the normalizer in $M$ of the pair $(T,B)$.
\item[{\rm (2)}]
We have
\[{\rm N}_M(T,B)/T\simeq M/M^0=\pi_0(M).\]
\end{enumerate}
\end{lem}
\begin{proof}
The group $W_{M^0}(T)$ is a normal subgroup of $W_M(T)$. Indeed,
let $n\in{\rm N}_{M^0}(T)$ and let $n'\in{\rm N}_M(T)$, then $n'nn^{\prime-1}$
belongs to $M^0$ (since the latter is normal in $M$) and normalizes $T$, that is,
$n'nn^{\prime-1}\in{\rm N}_{M^0}(T)$. On the other hand,
$n'(nT)n^{\prime-1}=n'nn^{\prime-1}(n'Tn^{\prime-1})=n'nn^{\prime-1}T$.
Let $w\in W_M(T)$. Then $wBw^{-1}$ is a Borel subgroup of $M^0$
(since, by definition, the Borel subgroups of an algebraic group are
the maximal closed connected solvable subgroups). Moreover, $wBw^{-1}$
contains $T$.
In a connected reductive algebraic group, the intersection of two Borel
subgroups always contains a maximal torus and the two Borel subgroups are
conjugate by an element of the normalizer of that torus. Hence $B$ and
$wBw^{-1}$ are conjugate by an element $w_1$ of $W_{M^0}(T)$.
It follows that $w_1^{-1}w$ normalises $B$. Hence
\[w_1^{-1}w\in W_M(T)\cap {\rm N}_{M}(B)={\rm N}_{M}(T,B)/T,\]
that is, \[W_M(T)=W_{M^0}(T)\cdot({\rm N}_M(T,B)/T).\]
Finally, we have
\[W_{M^0}(T)\cap({\rm N}_M(T,B)/T)={\rm N}_{M^0}(T,B)/T=\{1\},\]
since ${\rm N}_{M^0}(B)=B$ and $B\cap {\rm N}_{M^0}(T)=T$. This proves (1).
We will now prove (2). We consider the following map:
\[{\rm N}_{M}(T,B)/T\to M/M^0\,,\quad\quad mT\mapsto mM^0.\leqno{(*)}\]
It is injective. Indeed, let $m,m'\in{\rm N}_{M}(T,B)$ such that
$mM^0=m'M^0$. Then $m^{-1}m'\in M^0\cap{\rm N}_{M}(T,B)={\rm N}_{M^0}(T,B)=T$
(as we have seen above). Hence $mT=m'T$.
On the other hand, let $m$ be an element in $M$. Then $m^{-1}Bm$ is a
Borel subgroup of $M^0$, hence there exists $m_1\in M^0$ such that
$m^{-1}Bm=m_1^{-1}Bm_1$. It follows that $m_1m^{-1}\in{\rm N}_M(B)$. Also
$m_1m^{-1}Tmm_1^{-1}$ is a torus of $M^0$ which is contained in
$m_1m^{-1}Bmm_1^{-1}=B$. Hence $T$ and $m_1m^{-1}Tmm_1^{-1}$ are conjugate
in $B$: there is $b\in B$ such that $m_1m^{-1}Tmm_1^{-1}=b^{-1}Tb$. Then
$n:=bm_1m^{-1}\in{\rm N}_M(T,B)$. It gives $m=n^{-1}bm_1$. Since $bm_1\in
M^0$, we obtain $mM^0=n^{-1}M^0$. Hence the map $(*)$ is surjective.
\end{proof}
In order to approach the notation in \cite[p.471]{CG}, we let ${\mathfrak G}(t)$ denote the identity component of the centralizer $C_H(t)$:
\[
{\mathfrak G}(t): = C_H^0(t).
\]
Let $W_{{\mathfrak G}(t)}$ denote the Weyl group of ${\mathfrak G}(t)$.
\begin{lem} \label{lem:centrals}
Let $t \in T$.
The isotropy subgroup $W_H(t)$ is the group ${\rm N}_{C_H(t)}(T)/T$, and we
have
\[W_H(t) = W_{{\mathfrak G}(t)}\rtimes A_H(t)\quad\text{with $A_H(t):=\pi_0(C_H(t))$.}\]
In the case when $H$ has simply-connected derived group, the
group $C_H(t)$ is connected and $W_H(t)$ is then the Weyl group of
$C_H(t)={\mathfrak G}(t)$.
\end{lem}
\begin{proof} Let $t \in T$. Note that
\begin{align*}
W_H(t) & = \{w \in W_H : w\cdot t = t\}\\
& = \{w \in W_H : wtw^{-1} = t\}\\
& = \{w \in W_H : wt = tw\}\\
& = W \cap C_H(t).
\end{align*}
Note that $H$ and $C_H(t)$ have a common maximal torus $T$. Now
\begin{align*}
W_H \cap C_H(t) & = {\rm N}_H(T)/T \cap C_H(t)\\
& = {\rm N}_{C_H(t)}(T)/T\\
& = W_{C_H(t)}(T).
\end{align*}
The result follows by applying Lemma~\ref{lem:disconnected} with $M=C_H(t)$.
If $H$ has simply-connected derived group, then the
centralizer $C_H(t)$ is connected by Steinberg's theorem
\cite[\S 8.8.7]{CG}.
\end{proof}
Let $\tau$ be an irreducible representation of $W_{{\mathfrak G}(t)}$.
Now we apply the Springer correspondence
to $\tau$. Note: the Springer correspondence that we are considering
here coincides with that constructed by Springer for a reductive group
over a field of positive characteristic and is obtained
from the correspondence constructed by Lusztig by tensoring the latter by
the sign representation of $W_{{\mathfrak G}(t)}$ (see \cite{Hot}).
Let $\mathfrak{c}(t)$ denote the Lie algebra of ${\mathfrak G}(t)$, for $x\in{\mathfrak c}(t)$, let $Z_{{\mathfrak G}(t)}(x)$
denote the centralizer of $x$ in ${\mathfrak G}(t)$, via the adjoint representation of
${\mathfrak G}(t)$ on $\mathfrak{c}(t)$, and let
\begin{align}
A_x = \pi_0 (Z_{{\mathfrak G}(t)}(x))
\end{align}
Let $\mathbf{B}_x$ denote the variety of Borel
subalgebras of $\mathfrak{c}(t)$ that contain $x$.
All the irreducible components of $\mathbf{B}_x$ have the same dimension $d(x)$
over $\mathbb{R}$, see \cite[Corollary 3.3.24]{CG}.
The finite group $A_x$ acts on the set
of irreducible components of $\mathbf{B}_x$ \cite[p. 161]{CG}.
\begin{defn} If a group $A$ acts on the variety $\mathbf{X}$, let
${\mathcal R}(A,\mathbf{X})$ denote the set of irreducible representations of $A$
appearing
in the homology $H_*(\mathbf{X})$, as in \cite[p.118]{R}. Let
${\mathcal R}_{top}(A, \mathbf{X})$ denote the set of irreducible
representations of $A$ appearing in the top homology of $\mathbf{X}$.
\end{defn}
The Springer correspondence yields a one-to-one correspondence
\begin{equation} \label{eqn:Springercor}
(x,\varrho)\mapsto \tau(x,\varrho)\end{equation}
between the set of ${\mathfrak G}(t)$-conjugacy classes of pairs $(x,\varrho)$ formed by a
nilpotent element $x \in \mathfrak{c}(t)$ and an irreducible representation
$\varrho$ of $A=A_x$ which occurs in $H_{d(x)}(\mathbf{B}_x, \mathbb{C})$ (that is,
$\varrho\in{\mathcal R}_{top}(A_x,\mathbf{B}_x)$) and the set of isomorphism classes of irreducible
representations of the Weyl group $W_{{\mathfrak G}(t)}$.
We now work with the Jacobson-Morozov theorem \cite[p. 183]{CG}. Let
$e_0$ be the standard nilpotent matrix in $\mathfrak{sl}(2,\mathbb{C})$:
\[e_0 = \left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \end{array}\right) \]
There exists a rational homomorphism $\gamma : {\rm SL}(2, \mathbb{C}) \to {\mathfrak G}(t)$
such that its differential $\mathfrak{sl}(2,\mathbb{C}) \to \mathfrak{c}(t)$
sends $e_0$ to $x$, see \cite[\S 3.7.4]{CG}.
Define
\begin{eqnarray} \label{eqn:Phi}
\Phi \colon {\mathcal W}_F^{{\rm ab}}\times {\rm SL}(2,\mathbb{C}) \to G, \quad\quad (w,{\rm Frob},Y)
\mapsto {{\widehat\lambda}}(w)\cdot t \cdot \gamma(Y)
\end{eqnarray}
\begin{eqnarray} \label{eqn:Upsilon}
\Upsilon \colon {\mathcal W}_F^{{\rm ab}}\times {\rm SL}(2,\mathbb{C}) \to H, \quad\quad (w,{\rm Frob},Y)
\mapsto {{\widehat\lambda}}(w)\cdot t \cdot \gamma(Y)
\end{eqnarray}
\begin{eqnarray} \label{eqn:Psi}
\Psi \colon {\mathcal W}_F^{{\rm ab}} \times {\rm SL}(2,\mathbb{C}) \to {\mathfrak G}(t), \quad\quad
(w,{\rm Frob},Y) \mapsto {{\widehat\lambda}}(w)\cdot t \cdot \gamma(Y)
\end{eqnarray}
\begin{eqnarray} \label{eqn:Xi}
\Xi \colon {\mathcal W}_F^{{\rm ab}} \times {\rm SL}(2,\mathbb{C}) \to {\mathfrak G}(t), \quad \quad (w,{\rm Frob},Y)
\mapsto {{\widehat\lambda}}(w)\cdot \gamma(Y).
\end{eqnarray}
where $w$ is any element in $I_F^{{\rm ab}}$.
Note that $im\,\Phi\subset H$ (see \cite[\S~4.2]{R}) and that
$C(im \, \Psi) = C(im \, \Upsilon)$, for any element in $C(im \, \Upsilon)$
must commute with $\Upsilon({\rm Frob}) = t$. We also have $C(im \, \Xi) = C(im \,
\Psi) \subset {\mathfrak G}(t)$. Let
\[
A_{\Psi} = \pi_0(C(im \, \Psi)),
\quad \quad A_{\Xi} = \pi_0(C(im \, \Xi)).
\]
\begin{lem} \label{lem:AAA}
We have
\[
A_x = A_{\Xi} = A_{\Psi}.
\]
\end{lem}
\begin{proof} According to \cite[\S 3.7.23]{CG}, we have
\[
Z_{{\mathfrak G}(t)}(x) = C(im \, \Xi)\cdot U
\]
with $U$ the unipotent radical of $Z_{{\mathfrak G}(t)}(x)$. Now $U$ is contractible
via the map
\[
[0,1] \times U \to U, \quad \quad (\lambda, \exp Y) \mapsto \exp( \lambda
Y)
\]
for all $Y \in {\mathfrak n}$ with $\exp {\mathfrak n} = U$. Hence $A_x=\pi_0(Z_{{\mathfrak G}(t)}(x))=\pi_0(C(im \, \Xi))=A_{\Xi}$, and $A_{\Xi}=A_{\Psi}$ since $C(im \, \Xi)=C(im \, \Psi)$.
\end{proof}
Lemma~\ref{lem:AAA} allows us to define
\[
A: = A_x = A_{\Psi}= A_{\Xi}.
\]
Let ${\mathcal C}(t)$ denote a {\it predual} of ${\mathfrak G}(t)$, {\it i.e.,\,}
${\mathfrak G}(t)$ is the Langlands dual of ${\mathcal C}(t)$.
Let $\mathbf{B}^{\Psi}$ (resp. $\mathbf{B}^{\Xi}$) denote the variety of the Borel
subgroups of ${\mathfrak G}(t)$
which contain $S_{\Psi}: = \Psi({\mathcal W}_F\times B_2)$ (resp. $S_{\Xi}: =
\Xi({\mathcal W}_F \times B_2) = \gamma(B_2)$).
\begin{lem} \label{lem:bije}
We have
\[
{\mathcal R}_{top}(A, \mathbf{B}_x) = {\mathcal R}(A, \mathbf{B}^{\Xi}).
\]
\end{lem}
\begin{proof} Let, as before, $\tau$ be an irreducible representation
of $W_{{\mathfrak G}(t)}$. Let $(x,\varrho)$ be the Springer parameter attached to
$\tau$ by the inverse bijection of (\ref{eqn:Springercor}).
Define $\Xi$ as in Eqn.~(\ref{eqn:Xi}). Note that $\Xi$ depends on the
morphism $\gamma$, which in turn depends on the nilpotent element $x \in
\mathfrak{c}(t)$.
Then $\Xi$ is a real tempered $L$-parameter for the $p$-adic group
${\mathcal C}(t)$, see \cite[3.18]{BM}. According to several sources, see
\cite[\S 10.13]{Lu}, \cite{BM}, there is a bijection between
Springer parameters and Reeder parameters:
\begin{equation} \label{eqnarray:bij}
(d \gamma(e_0),\varrho) \mapsto (\Xi, \varrho).
\end{equation}
Now $\varrho$ is an irreducible representation of $A$ which appears
simultaneously in $H_{d(x)}(\mathbf{B}_x, \mathbb{C})$ and $H_*(\mathbf{B}^{\Xi}, \mathbb{C})$.
\end{proof}
We will recall below a result of Ram and Ramagge, which is based on
Clifford theoretic results developed by MacDonald and Green.
Let ${\mathcal H}$ be a finite dimensional $\mathbb{C}$-algebra and let ${\mathcal A}$ be a finite group
acting by automorphisms on ${\mathcal H}$. If $V$ is a finite dimensional module
for
${\mathcal H}$ and $a\in {\mathcal A}$, let ${}^aV$ denote the ${\mathcal H}$-module with the
action $f\cdot v:=a^{-1}(f)v$, $f\in{\mathcal H}$ and $v\in V$. Then $V$ is
simple if and only if ${}^aV$ is. Let $V$ be a simple
${\mathcal H}$-module. Define the inertia subgroup of $V$ to be
\[{\mathcal A}_V:=\left\{a\in {\mathcal A}\;:\;V\simeq {}^aV\right\}.\]
Let $a\in {\mathcal A}_V$. Since both $V$ and ${}^a V$ are simple, Schur's lemma
implies that the isomorphism $V\to{}^aV$ is unique up to a scalar multiple.
For each $a\in{\mathcal A}_V$ we fix an isomorphism
\[{\phi}_a\colon V\to{}^{a^{-1}}V.\]
Then, as operators on $V$,
\[{\phi}_a f=a(f){\phi}_a,\quad \text{and} \quad
{\phi}_a{\phi}_{a'}=\eta_V(a,a')^{-1}{\phi}_{aa'},\]
where $\eta_V(a,a')\in\mathbb{C}^\times$. The resulting function
\[\eta_V\colon {\mathcal A}_V\times {\mathcal A}_V\to \mathbb{C}^\times,\]
is a cocycle. The isomorphism class of $\eta_V$ is
independent of the choice of the isomorphism ${\phi}_a$.
Let $\mathbb{C}[{\mathcal A}_V]_{\eta_V}$ be the algebra with basis $\left\{c_a\,:\,a\in
{\mathcal A}_V\right\}$ and multiplication given by
\[c_a\cdot c_{a'}=\eta_V(a,a')c_{aa'},\quad\text{for $a,a'\in{\mathcal A}_V$.}\]
Let $\psi$ be a simple $\mathbb{C}[{\mathcal A}_V]_{\eta_V}$-module. Then putting
\[(fa)\cdot(v\otimes z)=f{\phi}_av\otimes c_az,\quad\text{for $f\in{\mathcal H}$,
$a\in {\mathcal A}_V$, $v\in V$, $z\in\psi$,}\]
defines an action of ${\mathcal H}\rtimes {\mathcal A}_V$ on $V\otimes\psi$.
Define the induced module
\[V\rtimes\psi:={\rm Ind}_{{\mathcal H}\rtimes {\mathcal A}_V}^{{\mathcal H}\rtimes {\mathcal A}}(V\otimes\psi).\]
\begin{thm} \label{thm:RaRa}
{\rm (Ram-Ramagge, \cite[Theorem~A.6]{RamRam}, Reeder, \cite[(1.5.1)]{R})}
The induced module $V\rtimes\psi$ is a simple ${\mathcal H}\rtimes {\mathcal A}$-module,
every simple ${\mathcal H}\rtimes {\mathcal A}$-module occurs in this way, and if
$V\rtimes\psi\simeq V'\rtimes\psi'$, then $V$, $V'$ are ${\mathcal A}$-conjugate, and
$\psi\simeq\psi'$ as $\mathbb{C}[{\mathcal A}_V]_{\eta_V}$-modules.
\end{thm}
On the other hand, it follows from Lemma~\ref{lem:centrals} that the
isotropy group of $t$ in $W_H$ admits the following semidirect product
decomposition:
\[W_H(t)=W_{{\mathfrak G}(t)}\rtimes A_H(t)\quad\text{ with
$A_H(t):=\pi_0(C_H(t))$.}\]
Hence the group algebra $\mathbb{C}[W_H(t)]$ is a crossed-product algebra
\[\mathbb{C}[W_H(t)]=\mathbb{C}[W_{{\mathfrak G}(t)}]\rtimes A_H(t).\]
By applying Theorem~\ref{thm:RaRa} with ${\mathcal H}=\mathbb{C}[W_{{\mathfrak G}(t)}]$ and ${\mathcal A}=
A_H(t)$, we see that the irreducible representations of $W_H(t)$ are the
\[\tau(x,\varrho)\rtimes\psi,\]
with $\psi$ any simple $\mathbb{C}[A_{\tau}]_{\eta_{\tau}}$-module and
$\tau=\tau(x,\varrho)$.
Let ${\mathcal I}$ be a standard
Iwahori subgroup in ${\mathcal C}(t)$, and let ${\mathcal H}({\mathcal C}(t),{\mathcal I})$ denote the
corresponding Iwahori-Hecke algebra. Recall that $x=d \gamma(e_0)$.
We will denote by $V=V(x,\varrho)$ the real tempered simple module of
${\mathcal H}({\mathcal C}(t),{\mathcal I})$ which corresponds to $(x,\varrho)$. Here ``real'' means
that the central character of $V$ is real.
By applying Theorem~\ref{thm:RaRa} with ${\mathcal H}={\mathcal H}({\mathcal C}(t),{\mathcal I})$ and
${\mathcal A}=A_H(t)$, we obtain the following subset of simple modules for
${\mathcal H}({\mathcal C}(t),{\mathcal I})\rtimes A_H(t)$:
\[V(x,\varrho)\rtimes\psi,\]
with $\psi$ any simple
$\mathbb{C}[A_{V}]_{\eta_V}$-module and $V=V(x,\varrho)$.
\begin{lem} \label{lem:cocycles}
We have \[A_{\tau(x,\varrho)}=A_{V(x,\varrho)}.\]
Moreover, the cocycles $\eta_{\tau(x,\varrho)}$ and $\eta_{V(x,\varrho)}$ can be
chosen to be equal.
\end{lem}
\begin{proof}
Recall that the \emph{closure order on nilpotent adjoint orbits} is defined as
follows
\[ {\mathcal O}_1\le{\mathcal O}_2\quad\text{when ${\mathcal O}_1\subset\overline{{\mathcal O}_2}$.}\]
For $x$ a nilpotent element of ${\mathfrak c}(t)$, we will denote by ${\mathcal O}_{x}$ the
nilpotent adjoint orbit which contains $x$.
Then as in \cite[(6.5)]{BM}, we define a
\emph{partial order on the representations of $W_{{\mathfrak G}(t)}$} by
\begin{equation} \label{eqn:ordering}
\tau(x_1,\varrho_1)\le\tau(x_2,\varrho_2)\quad\text{when
${\mathcal O}_{x_1}\le{{\mathcal O}}_{x_2}$}.\end{equation}
In this partial order, the trivial representation of $W_{{\mathfrak G}(t)}$ is a minimal
element
and the sign representation of $W_{{\mathfrak G}(t)}$ is a maximal element.
The $W_{{\mathfrak G}(t)}$-structure of $V(x,\varrho)$ is
\begin{equation} \label{eqn:Wstruct}
V(x,\varrho)|_{W_{{\mathfrak G}(t)}}\,=\,\tau(x,\varrho)\,\oplus\,
\bigoplus_{(x_1,\varrho_1)\atop\tau(x,\varrho)<\tau(x_1,\varrho_1)}
m_{(x_1,\varrho_1)}\,\tau(x_1,\varrho_1),\end{equation}
where the $m_{(x_1,\varrho_1)}$ are non-negative integers.
(In case ${\mathcal C}(t)$ has connected centre, (\ref{eqn:Wstruct}) is implied by
\cite[Theorem~6.3~(1)]{BM}, the proof in the general case follows the
same lines.)
In particular, it follows from (\ref{eqn:Wstruct}) that
\begin{equation} \label{eqn:dim}
\dim_{\mathbb{C}}{\rm Hom}_{W_{{\mathfrak G}(t)}}\left(\tau(x,\varrho),V(x,\varrho)\right)=1.
\end{equation}
Let $a\in A_H(t)$. Since the action of $A_H(t)$ on $W_{{\mathfrak G}(t)}$ comes from
its
action on the root datum, we have (see \cite[2.6.1, 2.7.3]{R}):
\[{}^a\tau(x,\varrho)=\tau(a \cdot x,{}^a\varrho).\]
Then
\[{}^a V(x,\varrho)|_{W_{{\mathfrak G}(t)}}\,=\,\tau(a \cdot x,{}^a\varrho)\,\oplus\,
\bigoplus_{(x_1,\varrho_1)\atop\tau(x,\varrho)<\tau(x_1,\varrho_1)}
m_{(x_1,\varrho_1)}\,\tau(a\cdot x_1,{}^a\varrho_1).\]
Since $\tau(x,\varrho)\le\tau(x_1,\varrho_1)$ if and only if
$\tau(a\cdot x,{}^a\varrho)\le\tau(a\cdot x_1,{}^a\varrho_1)$, it follows
that ${}^a V(x,\varrho)$ corresponds to the ${\mathfrak G}(t)$-conjugacy class of
$(a\cdot x,{}^a\varrho)$ via the bijection induced by~(\ref{eqnarray:bij}).
Hence \[{}^a V(x,\varrho)\simeq V(x,\varrho)\quad\text{ if and only if }\quad
{}^a\tau(x,\varrho)\simeq\tau(x,\varrho).\] The equality of the inertia
subgroups
\[A_H(t)_{V(x,\varrho)}=A_H(t)_{\tau(x,\varrho)}=:A_H(t)_{x,\varrho}\]
follows.
Let $\left\{{\phi}_a^V\,:\,a \in A_H(t)_{x,\varrho}\right\}$ (resp.
$\left\{{\phi}_a^\tau\,:\,a \in A_H(t)_{x,\varrho}\right\}$) be a family of
isomorphisms for $V=V(x,\varrho)$ (resp. $\tau=\tau(x,\varrho)$) which determines the
cocycle $\eta_V$ (resp. $\eta_\tau$).
We have
\[{\rm Hom}_{W_{{\mathfrak G}(t)}}(\tau,V)\overset{{\phi}_a^V}\to
{\rm Hom}_{W_{{\mathfrak G}(t)}}(\tau,{}^{a^{-1}}V)\overset{{\phi}_a^\tau}\to
{\rm Hom}_{W_{{\mathfrak G}(t)}}({}^{{a}^{-1}}\tau,{}^{a^{-1}}V).\]
The composed map is given by a scalar, since
by Eqn.~(\ref{eqn:dim}) these spaces are one-dimensional. We normalize
${\phi}_a^V$ so that this scalar equals one. This forces $\eta_V$ and
$\eta_\tau$ to be equal.
\end{proof}
\begin{lem} There is a bijection between Springer parameters
and Reeder parameters for the group $C_H(t)$:
\[(x,\varrho,\psi)\mapsto (\Xi,\varrho,\psi).\]
\end{lem}
\begin{proof}
Lemma~\ref{lem:cocycles} allows us to extend the bijection
(\ref{eqnarray:bij}) from ${\mathfrak G}(t)$ to $C_H(t)$.
\end{proof}
\begin{lem} We have
\[
\mathbf{B}^{\Psi} = \mathbf{B}^{\Xi}.
\]
\end{lem}
\begin{proof}
We note that
\[
S_{\Psi} = \langle t \rangle \gamma(B_2), \quad \quad S_{\Xi} = \gamma(B_2)
\]
Let ${\mathfrak b}$ denote a Borel subgroup of the reductive group $C_H(t)$. Since ${\mathfrak b}$ is maximal among the connected solvable subgroups of $C_H(t)$, we have
${\mathfrak b} \subset {\mathfrak G}(t)$. Then we have ${\mathfrak b} = T_{{\mathfrak b}}U_{{\mathfrak b}}$ with $T_{{\mathfrak b}}$ a maximal torus in ${\mathfrak G}(t)$, and
$U_{{\mathfrak b}}$ the unipotent radical of ${\mathfrak b}$. Note that $T_{{\mathfrak b}} \subset {\mathfrak G}(t)$. Therefore $yt = ty$ for all $y \in T_{{\mathfrak b}}$. This means that $t$ centralizes $T_{{\mathfrak b}}$, i.e. $t \in Z(T_{{\mathfrak b}})$. In a connected Lie group such as ${\mathfrak G}(t)$, we have
\[
Z(T_{{\mathfrak b}}) = T_{{\mathfrak b}}\]
so that $t \in T_{{\mathfrak b}}$. Since $T_{{\mathfrak b}}$ is a group, it follows that $\langle t \rangle \subset T_{{\mathfrak b}}$.
As a consequence, we have
\[
{\mathfrak b} \supset \langle t \rangle \gamma(B_2) \iff {\mathfrak b} \supset \gamma(B_2).
\]
\end{proof}
Let $S_{\Upsilon} = \Upsilon({\mathcal W}_F \times B_2)$, a solvable subgroup of $H$.
Let $\mathbf{B}^{\Upsilon}$ denote the variety of Borel subgroups of $H$ containing
$S_{\Upsilon}$.
\begin{lem} We have
\[
\mathcal{R}(A, \mathbf{B}^{\Upsilon}) = \mathcal{R}(A, \mathbf{B}^{\Psi})
\]
\end{lem}
\begin{proof}
We denote the Lie algebra of ${\mathfrak G}(t)$ by ${\mathfrak g}(t)$, and the Lie algebra of $C_H(t)$ by ${\mathfrak c}_H(t)$ so that
\[
{\mathfrak g}(t) = {\mathfrak c}_H(t).
\]
We note that the codomain of $\Psi$ is ${\mathfrak G}(t)$.
Let $\mathbf{B}^t$ denote the variety of all Borel subgroups of $G$ which contain $t$. Let $B \in \mathbf{B}^t$.
Then $B \cap {\mathfrak G}(t)$ is a Borel subgroup of ${\mathfrak G}(t)$.
The proof in \cite[p.471]{CG} depends on the fact that ${\mathfrak G}(t)$ is connected, and also on
a triangular decomposition of ${\rm Lie}({\mathfrak G}(t))$:
\[
{\rm Lie}\,{\mathfrak G}(t) = {\mathfrak n}^t \oplus {\mathfrak t} \oplus {\mathfrak n}_{-}^t
\]
from which it follows that ${\rm Lie}\, B \cap {\rm Lie}\, {\mathfrak G}(t) = {\mathfrak n}^t \oplus {\mathfrak t}$ is a Borel subalgebra in ${\rm Lie} \,{\mathfrak G}(t)$. The superscript ``$t$'' stands for
the centralizer of $t$.
There is a canonical map
\begin{align} \label{eqn:(7)}
\mathbf{B}^t \to {\rm Flag} \, {\mathfrak G}(t), \quad B \mapsto B \cap {\mathfrak G}(t)
\end{align}
Now ${\mathfrak G}(t)$ acts by conjugation on $\mathbf{B}^t$. We have
\begin{align}
\mathbf{B}^t = \mathbf{B}_1 \sqcup \mathbf{B}_2 \sqcup \cdots \sqcup \mathbf{B}_m
\end{align}
a disjoint union of ${\mathfrak G}(t)$-orbits, see \cite[Prop. 8.8.7]{CG}. These orbits are the connected components of $\mathbf{B}^t$, and the irreducible components of the projective variety
$\mathbf{B}^t$. The above map~(\ref{eqn:(7)}), restricted to any one of these orbits, is a bijection from the ${\mathfrak G}(t)$-orbit onto ${\rm Flag} \, {\mathfrak G}(t)$ and is ${\mathfrak G}(t)$-equivariant. It is then clear that
\[
\mathbf{B}_j^{\Upsilon} \simeq {\rm Flag} \, {\mathfrak G}(t)^{\Psi}
\]
for each $1 \leq j \leq m$. We also have $t \in S_\Upsilon = S_{\Psi}$. Now
\[
\mathbf{B}^{\Upsilon} = (\mathbf{B}^t)^{\Upsilon} = (\mathbf{B}^t)^{\Psi}
\]
and then
\[
H_*(\mathbf{B}^{\Upsilon}, \mathbb{C}) = H_*(\mathbf{B}_1^{\Psi}, \mathbb{C}) \oplus \cdots \oplus H_*(\mathbf{B}_m^{\Psi}, \mathbb{C})
\]
a direct sum of \emph{equivalent} $A$-modules.
Hence $\varrho$
occurs in $H_*( \mathbf{B}^{\Upsilon},\mathbb{C})$ if and only if it occurs
in $H_*(\mathbf{B}^{\Psi}, \mathbb{C})$.
\end{proof}
Recall that $x$ is a nilpotent element in ${\mathfrak c}(t)$ (the Lie algebra of
${\mathfrak G}(t)$).
Define
\[A^+:=\pi_0(Z_{C_H(t)}(x)).\]
\begin{lem}
We have
\[{\mathcal R}(A,\mathbf{B}^\Upsilon)={\mathcal R}(A^+,\mathbf{B}^\Upsilon).\]
\end{lem}
\begin{proof}
Choose an isogeny $\iota\colon{\widetilde H}\to H$ with ${\widetilde H}_{\rm der}$ simply connected
(as in \cite[Theorem~3.5.4]{R}) such that $H={\widetilde H}/Z$ where $Z$ is a finite
subgroup of the centre of ${\widetilde H}$ (see \cite[\S~3]{R}). Let
$\tilde t$ be a lift of $t$ in ${\widetilde H}$, that is, $\iota(\tilde t)=t$.
Then we have (see \cite[\S~3.1]{R}):
\begin{equation} \label{eqn:iotacent}
\iota(C_{{\widetilde H}}(\tilde t))=C_H^0(t)={\mathfrak G}(t).\end{equation}
Let $u:=\exp(x)$, a unipotent element in ${\mathfrak G}(t)$. It follows from
Eqn.~(\ref{eqn:iotacent}) that there exists ${\tilde u}\in C_{{\widetilde H}}(\tilde t)$
such that $u=\iota({\tilde u})$.
Recall that $A=\pi_0(Z_{{\mathfrak G}(t)}(x))$. Then
\[A\simeq\pi_0(Z_{{\mathfrak G}(t)}(u))=\pi_0(Z_{\iota(C_{{\widetilde H}}(\tilde
t))}(\iota({\tilde u})))\simeq \pi_0(Z_{C_{{\widetilde H}}(\tilde t)}({\tilde u})),\]
and $A$ is a subgroup of $\pi_0(Z_{C_H(t)}(u))\simeq A^+$ (see \cite[\S~3.2--3.3]{R}).
Recall from \cite[Lemma~3.5.3]{R} that
\[(\tilde t,{\tilde u},\varrho,\psi) \mapsto (t, u,
\rho)\] induces a bijection between
$G$-conjugacy classes of quadruples $(\tilde t,{\tilde u},\varrho,\psi)$ and
$G$-conjugacy classes of triples $(t,u,\rho)$, where $\rho\in{\mathcal R}(A^+,\mathbf{B}^\Upsilon)$ is
such that the restriction of $\rho$
to $A$ contains $\varrho$.
\end{proof}
\begin{lem}
We have
\[{\mathcal R}(A^+,\mathbf{B}^\Upsilon)={\mathcal R}(A^+,\mathbf{B}^\Phi).\]
\end{lem}
\begin{proof}
This follows from \cite[Lemma~4.4.1]{R}.
\end{proof}
The proof can be reversed. Here is the reason for this claim: Lemmas 4.5, 4.6, 4.8, 4.10 -- 4.13 are all equalities, and Lemma 4.9 is a bijection.
This creates a canonical bijection between the extended quotient of the second kind $(T{/\!/} W^{{\mathfrak s}})_2$ and ${\mathfrak Q}(G)_{{{\widehat\lambda}}}$:
\begin{align}
\mu \colon (T{/\!/} W^{{\mathfrak s}})_2 \longrightarrow {\mathfrak Q}(G)_{{{\widehat\lambda}}}, \quad \quad (t, x, \varrho, \psi) \mapsto (\Phi, \rho).
\end{align}
This in turn creates a bijection
\begin{align}
T{/\!/} W^{{\mathfrak s}} \longrightarrow {\mathfrak Q}(G)_{{{\widehat\lambda}}}.
\end{align}
This bijection is not canonical in general, depending as it does
on a choice of bijection between the set of conjugacy classes in
$W_H(t)$ and the set of irreducible characters of $W_H(t)$.
When $G = {\rm GL}(n)$, the finite group $W_H(t)$ is a product of
symmetric groups: in this case there is a canonical bijection
between the set of conjugacy classes in $W_H(t)$ and the set of
irreducible characters of $W_H(t)$, by the classical theory of Young tableaux.
To close this section, we will consider the case of ${\rm GL}(n,F)$, and the Iwahori point ${\mathfrak i}$ in the Bernstein spectrum of ${\rm GL}(n,F)$.
The Langlands dual of ${\rm GL}(n,F)$ is ${\rm GL}(n,\mathbb{C})$, and we will take $T$ to be the standard maximal torus in ${\rm GL}(n,\mathbb{C})$. The Weyl group is the symmetric group $S_n$. We will denote our bijection, in this case canonical, as follows:
\[
\mu_{F}^{{\mathfrak i}} : T{/\!/} W \to {\mathfrak P}({\rm GL}(n,F))
\]
Let $E/F$ be a finite Galois extension of the local field $F$. According to \cite[Theorem 4.3]{MP}, we have a commutative diagram
\[
\begin{CD}
T{/\!/} W @> \mu_{F}^{{\mathfrak i}} >> {\mathfrak P}({\rm GL}(n,F))\\
@ V VV @VV {\rm BC}_{E/F} V\\
T{/\!/} W @> \mu_{E}^{{\mathfrak i}} >> {\mathfrak P}({\rm GL}(n,E))
\end{CD}
\]
In this diagram, the right vertical map ${\rm BC}_{E/F}$ is the standard base change map sending one Reeder parameter to another as follows:
\[
(\Phi,1) \mapsto (\Phi_{|{\mathcal W}_E},1).
\]
Let \[f = f(E,F)\] denote the residue degree of the extension $E/F$. We proceed to describe the left vertical map. We note that the action of $W$ on $T$ is by automorphisms of the algebraic group $T$. Since $T$ is a group, the map \[T \to T, \quad t \mapsto t^f\] is well-defined for any positive integer $f$. The map
\[
\widetilde{T} \to \widetilde{T}, \quad (t,w) \mapsto (t^f,w)
\]
is also well-defined, since
\[
w\cdot t^f = wt^fw^{-1} = wtw^{-1}wtw^{-1} \cdots wtw^{-1} = t^f.
\]
Since
\[
\alpha\cdot(t^f) = (\alpha\cdot t)^f
\]
for all $\alpha \in W$, this induces a map
\[
T{/\!/} W \to T{/\!/} W
\]
which is an endomorphism (as algebraic variety) of the extended quotient $T{/\!/} W$. We shall refer to this endomorphism as the \emph{base change endomorphism of degree $f$.} The left vertical map is the base change endomorphism of degree $f$, according to \cite[Theorem 4.3]{MP}. That is, our bijection $\mu^{{\mathfrak i}}$ is compatible with base change for ${\rm GL}(n)$.
When we restrict our base change endomorphism from the extended quotient $T{/\!/} W$ to the ordinary quotient $T/W$, we see that the commutative diagram
containing ${\rm BC}_{E/F}$ is consistent with \cite[Lemma 4.2.1]{Haines}.
\section{Interpolation}
We will now provide details for the interpolation procedure described in \S1. We will focus on the Iwahori point ${\mathfrak i} \in {\mathfrak B}(\mathcal{G})$, {\it i.e.,\,} on the smooth irreducible representations of $\mathcal{G}$ which admit nonzero Iwahori fixed vectors. To simplify notation, we will write $\mu = \mu^{{\mathfrak i}}$. Let ${\mathfrak P}(G)$ denote the set of conjugacy classes in
$G$ of Kazhdan-Lusztig parameters. For each $s \in \mathbb{C}^{\times}$, we construct a commutative diagram:
\[
\begin{CD}
T{/\!/} W @> \mu >> {\mathfrak P}(G)\\
@ V \pi_s VV @VV i_s V\\
T/W @= T/W
\end{CD}
\]
in which the map $\mu$ is bijective. In the top row of this diagram, the set $T{/\!/} W$, the set ${\mathfrak P}(G)$, and the map $\mu$ are independent of the parameter $s$.
We start by defining the vertical maps $i_s$, $\pi_s$ in the diagram. Let $s \in \mathbb{C}^{\times}$. We will define
\begin{align}
i_s: {\mathfrak P}(G) \to T/W, \quad (\Phi, \rho) \mapsto \Phi ({\rm Frob}, T_s)\end{align}
\begin{align}
\pi_s : T{/\!/} W \to T/W, \quad (t,w) \mapsto t \cdot \gamma (T_s)
\end{align}
where $(\Phi, \rho)$ is a Reeder parameter, and $(t,w) \in T{/\!/} W$. We note that
\[
\Phi ({\rm Frob}, T_s) = t \cdot \gamma(T_s)
\]
so that the diagram is commutative.
$\bullet$ Let $s = 1$, and assume, for the moment, that $C_H(t)$ is connected. The map $\mu$ in Theorem 4.1 sends $(t, \tau)$ to $(\Phi, \rho)$. We note that
\[
t = \Phi({\rm Frob}, T_1) = \Phi({\rm Frob},1).\]
The map $\mu$ determines the map
\[
(t, \tau) \mapsto (t, \Phi(1,u_0), \rho)
\]
which, in turn, determines the map
\[
\tau \mapsto (\exp(x), \rho)
\]
which is the Springer correspondence for the Weyl group $W_H(t)$.
$\bullet$ Now let $s = \sqrt q$ where $q$ is the cardinality of the residue field $k_F$ of $F$. We now link our result to the representation theory of the $p$-adic group $\mathcal{G}$ as follows. As in \S 3, let
\[
\sigma: = \Phi ({\rm Frob}, T_{\sqrt q}), \quad \quad u: = \Phi(1,u_0).
\]
Then we have
\[
\sigma u \sigma^{-1} = u^q
\]
and the triple $(\sigma, u, \rho)$ is a Kazhdan-Lusztig triple.
The correspondence $\sigma \mapsto \chi_{\sigma}$ between points in $T$ and unramified quasicharacters of $\mathcal{T}$ can be fixed by the relation
\[
\chi_{\sigma}(\lambda(\varpi_F)) = \lambda(\sigma)
\]
where $\varpi_F$ is a uniformizer in $F$, and $\lambda \in X_*(\mathcal{T}) = X^*(T)$. The Kazhdan-Lusztig triples $(\sigma, u, \rho)$ parametrize the irreducible constituents of the (unitarily) induced representation
\[
{\rm Ind}_{\mathcal{B}}^{\mathcal{G}}(\chi_{\sigma}\otimes 1).
\]
Note that
\[
i_{\sqrt q}: (\Phi,\rho) \mapsto \sigma
\]
so that $i_{\sqrt q}$ is the \emph{infinitesimal character}. The infinitesimal character is denoted $\mathbf{Sc}$ in \cite[VI.7.1.1]{Renard} ($\mathbf{Sc}$ for \emph{support cuspidal}).
Since $\mu$ is bijective and the diagram is commutative, the number of points
in the fibre of the $q$-projection $\pi_{\sqrt q}$ equals the number of inequivalent irreducible constituents of
${\rm Ind}_{\mathcal{B}}^{\mathcal{G}}(\chi_{\sigma}\otimes 1)$:
\begin{align}
|\pi^{-1}_{\sqrt q}(\sigma)| = |{\rm Ind}_{\mathcal{B}}^{\mathcal{G}}(\chi_{\sigma}\otimes 1) |
\end{align}
The $q$-projection $\pi_{\sqrt q}$ is a model of the infinitesimal character $\mathbf{Sc}$.
Our formulation leads to Eqn.(24), which appears to have some predictive power. Note that the definition of the $q$-projection $\pi_{\sqrt q}$ depends only on the $L$-parameter $\Phi$. An $L$-parameter determines an $L$-packet, but does not determine the number of irreducible constituents of the $L$-packet.
\section{Examples}
\textsc{Example~1.} \emph{Realization of the ordinary quotient} $T/W$. Consider an $L$-parameter $\Phi$ for which $\Phi | _{{\rm SL}(2,\mathbb{C})} = 1$. Let $t = \Phi({\rm Frob})$. Then
\[
G_{\Phi} : = C(im \, \Phi) = C(t)
\]
so that $G_{\Phi}$ is connected and acts trivially in homology. Therefore $\rho$ is the unit representation $1$.
Now $t$ is a semisimple element in $G$, and all such semisimple elements arise. Modulo conjugacy in $G$, the set of such $L$-parameters $\Phi$ is parametrized by the quotient $T/W$. Explicitly, let
\[
{\mathfrak P}_1(G): = \{\Phi \in {\mathfrak P}(G): \Phi |_{ {\rm SL}(2,\mathbb{C})} = 1 \}.
\]
Then we have a canonical bijection
\[
{\mathfrak P}_1(G) \to T/W, \quad \quad (\Phi,1) \mapsto \Phi({\rm Frob},1)
\]
which fits into the commutative diagram
\[
\begin{CD}
{\mathfrak P}_1(G) @>>>T/W \\
@VVV @VVV \\
{\mathfrak P}(G) @>>> T {/\!/} W
\end{CD}
\]
where the vertical maps are inclusions.
\textsc{Example~2.} \emph{The general linear group}. Let ${\mathcal G} = {\rm GL}(n), G = {\rm GL}(n,\mathbb{C})$. Let
\[
\Phi = \chi \otimes \tau(n)
\]
where $\chi$ is an unramified quasicharacter of ${\mathcal W}_F$ and $\tau(n)$ is the irreducible $n$-dimensional representation of ${\rm SL}(2,\mathbb{C})$. By local class field theory, the quasicharacter $\chi$ factors through $F^{\times}$. In the local Langlands
correspondence for ${\rm GL}(n)$, the image of $\Phi$ is the unramified twist $\chi \circ \det$ of the Steinberg representation ${\rm St}(n)$.
The sign representation ${\rm sgn}$ of the Weyl group $W$ has Springer parameters $(\mathcal{O}_{prin},1)$, where $\mathcal{O}_{prin}$ is the principal orbit in $\mathfrak{gl}(n,\mathbb{C})$. In the \emph{canonical} correspondence between irreducible representations of $S_n$ and conjugacy classes in $S_n$, the trivial representation of
$W$ corresponds to the conjugacy class containing the $n$-cycle $w_0 = (123 \cdots n)$.
Now $G_{\Phi} = C(im \, \Phi)$ is connected \cite[\S3.6.3]{CG}, and so acts trivially in homology.
Therefore $\rho$ is the unit representation $1$. The image $\Phi(1,u_0)$ is a regular nilpotent, i.e. a nilpotent with one Jordan block (given by the partition of $n$ with one part). The corresponding conjugacy class in $W$ is $\{w_0\}$. The corresponding irreducible component of the extended quotient is
$$T^{w_0}/Z(w_0) = \{(z,z, \ldots,z): z \in \mathbb{C}^{\times}\} \simeq \mathbb{C}^{\times}.$$ This is our model, in the extended quotient picture, of the complex $1$-torus of all unramified twists of the Steinberg representation ${\rm St}(n)$. The map from $L$-parameters to pairs $(w,t) \in T {/\!/} W$ is given by
\[
\chi \otimes \tau(n) \mapsto (w_0, \chi({\rm Frob}), \dots, \chi({\rm Frob})).
\]
Among these representations, there is one real tempered representation, namely ${\rm St}(n)$, with $L$-parameter $1 \otimes \tau(n)$,
attached to the principal orbit ${\mathcal O}_{prin} \subset G$.
More generally, let
\[
\Phi = \chi_1 \otimes \tau(n_1) \oplus \cdots \oplus \chi_k \otimes \tau(n_k)
\]
where $n_1 + \cdots + n_k = n$ is a partition of $n$. This determines the unipotent orbit ${\mathcal O}(n_1, \ldots, n_k) \subset G$. There is a conjugacy class
in $W$ attached canonically to this orbit: it contains the product of disjoint cycles of lengths $n_1, \ldots, n_k$. The fixed set is a complex torus, and the
component in $T{/\!/} W$ is a product of symmetric products of complex $1$-tori.
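For instance, for $n = 2$ this reads as follows (a sketch of the two cases above): the partition $(2)$ gives the class of the $2$-cycle $w_0$, with component $T^{w_0}/Z(w_0) \simeq \mathbb{C}^{\times}$ parametrizing the unramified twists of ${\rm St}(2)$, while the partition $(1,1)$ gives the class of the identity, with component $T/W \simeq {\rm Sym}^2\, \mathbb{C}^{\times}$ parametrizing the parameters with $\Phi|_{{\rm SL}(2,\mathbb{C})} = 1$ as in Example 1. Hence $T{/\!/} W \simeq {\rm Sym}^2\, \mathbb{C}^{\times} \sqcup \mathbb{C}^{\times}$ for ${\rm GL}(2)$.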
\textsc{Example 3}. \emph{The exceptional group of type ${{\rm G}}_2$}. This example contains a Reeder parameter $(\Phi,\rho)$ with
$\rho \neq 1$.
The torus ${\mathcal T}$ is identified with $F^\times\times F^\times$. We take
$\lambda=\chi\otimes\chi$ where $\chi$ is a nontrivial quadratic character of
${\mathfrak o}_F^\times$.
Here we have $H={\rm SO}(4,\mathbb{C})\simeq({\rm SL}(2,\mathbb{C})\times{\rm SL}(2,\mathbb{C}))/\{\pm I\}$. This complex reductive Lie group is neither simply-connected nor of adjoint type.
We have $W^{\mathfrak s}=W_H=\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$.
We will write
\[{\rm SL}(2, \mathbb{C}) \times {\rm SL}(2, \mathbb{C}) \longrightarrow H^{\mathfrak s}, \qquad
(x,y) \mapsto [x,y],\]
\[T_{s,s'}=[T_s,T_{s'}],\qquad s,s'\in \mathbb{C}^\times.\]
We have
\[{\mathfrak Q}(G)_{\hat\lambda}\to T{/\!/} W_H\simeq \mathbb{A}^1\sqcup \mathbb{A}^1\sqcup pt_1\sqcup
pt_2\sqcup pt_*\sqcup T/W_H,\]
where
\begin{itemize}
\item
one $\mathbb{A}^1$ corresponds to $(\Phi,1)$ with $\Phi({\rm Frob},1)=[I,T_s]$
and $\Phi(1,u_0)=[u_0,I]$,
\item the other $\mathbb{A}^1$ corresponds to $(\Phi,1)$
with $\Phi({\rm Frob},1)=[T_s,I]$
and $\Phi(1,u_0)=[I,u_0]$,
\item $pt_1$ corresponds to $(\Phi,1)$
with $\Phi({\rm Frob},1)= T_{1,1}$
and $\Phi(1,u_0)=[u_0,u_0]$,
\item $pt_2$ corresponds to $(\Phi,1)$
with $\Phi({\rm Frob},1)= T_{1,-1}$
and $\Phi(1,u_0)=[u_0,u_0]$,
\item
$T/W_H$ corresponds to $(\Phi,1)$
with $\Phi({\rm Frob},1)= T_{s,s'}$, $s,s'\in \mathbb{C}^\times$,
and $\Phi(1,u_0)=[I,I]$,
\item
$pt_*$ corresponds to $(\Phi,{\rm sgn})$
with $\Phi({\rm Frob},1)= T_{i,i}$, $i=\sqrt{ -1}$
and $\Phi(1,u_0)=[I,I]$.
\end{itemize}
\emph{Acknowledgement}. We would like to thank A. Premet for drawing our attention to reference \cite{CG}.
\begin{thebibliography}{99}
\bibitem{ABP} A-M. Aubert, P. Baum and R.J. Plymen, Geometric structure in the principal series of the $p$-adic group $G_2$, Represent. Theory 15 (2011) to appear.
\bibitem{BM} D. Barbasch, A. Moy, A unitarity criterion for $p$-adic
groups, Invent. Math. 98 (1989) 19 -- 37.
\bibitem{B} J.L. Brylinski, Cyclic homology and equivariant theories, Ann. Inst. Fourier 37 (1987) 15 -- 28.
\bibitem{CG} N. Chriss and V. Ginzburg, Representation theory and complex
geometry, Birkh\"auser, 2000.
\bibitem{Haines} T.J. Haines, Base change for Bernstein centers of depth zero principal series blocks, arXiv:1012.4968 [math.RT].
\bibitem{Hot} R.~Hotta, On Springer's representations, J. Fac. Sci. Univ.
Tokyo, IA {\bf 28} (1982), 863--876.
\bibitem{KL} D.~Kazhdan and G.~Lusztig, Proof of the Deligne-Langlands
conjecture for Hecke algebras, Invent. Math. 87 (1987), 153--215.
\bibitem{K} M. Khalkhali, Basic noncommutative geometry, EMS Series of
Lectures in Math., 2009.
\bibitem{LuAst} G. Lusztig, Representations of affine Hecke algebras,
Ast\'erisque 171-172 (1989), 73-84.
\bibitem{LuSpring} G.~Lusztig, Green polynomials and singularities of
nilpotent classes, Adv. in Math. {\bf 42} (1981), 169--178.
\bibitem{Lu} G.~Lusztig, Cuspidal local systems and graded Hecke algebras,
II, Canadian Math. Soc., Conference Proceedings, {\bf 16} (1995), 217--275.
\bibitem{MP} S. Mendes and R.J. Plymen, Base change and $K$-theory for ${\rm GL}(n)$, J. Noncommut. Geom. 1 (2007) 311 -- 331.
\bibitem{M} J. Morava, HKR characters and higher twisted sectors, Contemp. Math. 403 (2006) 143 --152.
\bibitem{RamRam} A. Ram and J.~Ramagge, Affine Hecke algebras, cyclotomic
Hecke algebras and Clifford theory, Birkh\"auser, Trends in Math. (2003),
428--466.
\bibitem{Renard} D. Renard, Repr\'esentations des groupes r\'eductifs $p$-adiques, Cours Sp\'ecialis\'es 17, Soci\'et\'e Math. de France 2010.
\bibitem{R} M.~Reeder, Isogenies of Hecke algebras and a Langlands correspondence
for ramified principal series representations, Representation Theory {\bf
6} (2002), 101--126.
\bibitem{Roc} A. Roche, Types and Hecke algebras for principal
series representations of split reductive $p$-adic groups, Ann.
scient. \'Ec. Norm. Sup. {\bf 31} (1998), 361--413.
\bibitem{Sh} F. Shahidi, Eisenstein series and automorphic $L$-functions, Colloquium Publications 58, AMS 2010.
\end{thebibliography}
Anne-Marie Aubert, Institut de Math\'ematiques de Jussieu, U.M.R. 7586 du C.N.R.S., Paris, France\\
Email: [email protected]\\
Paul Baum, Pennsylvania State University, Mathematics Department, University Park, PA 16802, USA\\
Email: [email protected]\\
Roger Plymen, School of Mathematics, Alan Turing building, Manchester University, Manchester M13 9PL, England\\
Email: [email protected]
\end{document} |
\begin{document}
\title{Squares of the form $\prod_{k=1}^n (2k^2+l)$ with $l$ odd}
\author{Russelle Guadalupe}
\address{Institute of Mathematics, University of the Philippines-Diliman\\
Quezon City 1101, Philippines}
\email{[email protected]}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{2010 \emph{Mathematics Subject Classification}: 11D09; 11C08; 11A15.}
\footnote{\emph{Key words and phrases}: Quadratic polynomials, Diophantine equation.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\begin{abstract}
Let $l$ be a positive odd integer. Using Cilleruelo's method, we establish an explicit lower bound $N_l$ depending on $l$ such that for all $n\geq N_l$, $\prod_{k=1}^n (2k^2+l)$ is not a square. As an application, we determine all values of $n$ such that $\prod_{k=1}^n (2k^2+l)$ is a square for certain values of $l$.
\end{abstract}
\section{Introduction}
For positive integers $a,c$ and $n$ with $\gcd(a,c)=1$, define $P_{a,c}(n) = \prod_{k=1}^n (ak^2+c)$. The study of determining whether the sequence $\{P_{a,c}(n)\}_{n\geq 1}$ contains infinitely many squares has become one of the active topics in number theory, particularly in Diophantine theory. In 2008, Amdeberhan, Medina, and Moll \cite{amdeber} conjectured that $P_{1,1}(n)$ is not a square for $n\geq 4$ and $P_{4,1}(n)$ is not a square for $n\geq 1$. In addition, they computationally verified that $P_{1,1}(n)$ is not a square for $n \leq 10^{3200}$. Shortly after, Cilleruelo \cite{cille} proved the conjecture for $P_{1,1}(n)$, showing that $P_{1,1}(n)$ is a square only for $n=3$, and Fang \cite{fang} established the conjecture for $P_{4,1}(n)$. Yang, Togb\'{e}, and He \cite{yangtog} found all positive integer solutions to the equation $P_{a,c}(n) = y^l$ for coprime integers $a, c\in \{1,\ldots, 20\}$ and $l\geq 2$. Yin, Tan, and Luo \cite{yintan} obtained the $p$-adic valuation of $P_{1,21}(n)$ for all primes $p$ and showed that $P_{1,21}(n)$ is not a square for all $n\geq 1$. Chen, Wang and Hu \cite{chenwh} proved that $P_{1,23}(n)$ is a square only for $n=3$.\\
Pak Tung Ho \cite{ptungho} studied the sequence $\{P_{1,m^2}(n)\}_{n\geq 1}$ for $m\geq 1$ and proved that if $m$ has divisors of the form $4q+1$ and $N=\max\{m,10^8\}$, then $P_{1,m^2}(n)$ is not a square for all $n\geq N$. Recently, Zhang and Niu \cite{zhangniu} generalized Ho's result; they proved that for a given positive integer $q$, there is a positive integer $N_q$ depending on $q$ such that $P_{1,q}(n)$ is not a square for all $n\geq N_q$. In this paper, we apply Cilleruelo's method to prove the following result.
\begin{theorem}
\label{th:thm1}
Let $l$ be a positive odd integer. Then there exists a positive integer $N_l$ depending on $l$ such that for all $n\geq N_l$, $P_{2,l}(n)=\prod_{k=1}^n (2k^2+l)$ is not a square.
\end{theorem}
As an application of Theorem \ref{th:thm1}, we determine, for certain values of $l$, all integers $n\geq 1$ for which $P_{2,l}(n)$ is a square.
\begin{corollary}
\label{co:cor2}
$P_{2,1}(n)=\prod_{k=1}^n (2k^2+1)$ is not a square for all $n\geq 1$.
\end{corollary}
\begin{corollary}
\label{co:cor3}
$P_{2,3}(n)=\prod_{k=1}^n (2k^2+3)$ is not a square for all $n\geq 1$.
\end{corollary}
\begin{corollary}
\label{co:cor4}
$P_{2,7}(n)=\prod_{k=1}^n (2k^2+7)$ is a square only for $n=1$. Consequently, it is not a square for all $n\geq 2$.
\end{corollary}
We organize the paper as follows. In Section 2, we present several preliminary lemmas needed for the proof of Theorem \ref{th:thm1}. In Sections 3 and 4, we prove Theorem \ref{th:thm1} by working on two separate cases: $l\geq 3$ and $l=1$. In Sections 5, 6 and 7, we prove Corollaries \ref{co:cor2}, \ref{co:cor3} and \ref{co:cor4} using Theorem \ref{th:thm1}. In Section 8, we provide a conjecture concerning the odd values of $l$ such that $P_{2,l}(n)$ is a square for some $n\geq 1$ based on numerical computations.
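The small-$n$ verifications carried out in Sections 5--7, as well as the numerical evidence discussed in Section 8, rest on elementary exact integer arithmetic. The following short Python sketch (an illustrative re-implementation with our own naming; it is not the \textit{Mathematica} code mentioned in Section 8) tests directly whether $P_{2,l}(n)$ is a perfect square.
\begin{verbatim}
# Illustrative sketch: scan P_{2,l}(n) = prod_{k<=n} (2k^2 + l) for squares.
from math import isqrt   # exact integer square root

def square_indices(l, n_max):
    """Return all n <= n_max for which P_{2,l}(n) is a perfect square."""
    hits, P = [], 1
    for n in range(1, n_max + 1):
        P *= 2 * n * n + l          # multiply in the next factor 2n^2 + l
        r = isqrt(P)
        if r * r == P:
            hits.append(n)
    return hits

# For l = 7 the only hit up to n = 1000 should be n = 1, since
# P_{2,7}(1) = 9 = 3^2; this is consistent with the corollary for l = 7.
print(square_indices(7, 1000))
\end{verbatim}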
\section{Preliminaries}
In this section, we list some preliminary results that are used in the proofs of our main theorem. Throughout this section, we denote $p$ to be a prime and $l$ to be a positive odd integer.
\begin{lemma}
\label{le:lem4} Let $n > \sqrt{l/2}$ be such that $P_{2,l}(n)$ is a square and let $p$ be a prime divisor of $P_{2,l}(n)$. Then $p < 2n$.
\end{lemma}
\begin{proof}
Since $P_{2,l}(n)$ is a square and $p\mid P_{2,l}(n)$, we have $p^2\mid P_{2,l}(n)$. We consider two cases:
\begin{enumerate}
\item Suppose $p^2\mid 2k^2+l$ for some $1\leq k\leq n$. Then $p\leq \sqrt{2k^2+l}\leq \sqrt{2n^2+l} < 2n$ since $l < 2n^2$.
\item Suppose $p^2\nmid 2k^2+l$ for every $1\leq k\leq n$. Since $p^2\mid P_{2,l}(n)$, there exist $1\leq k< m\leq n$ such that $p\mid 2k^2+l$ and $p\mid 2m^2+l$. Then $p\mid 2(m^2-k^2) = 2(m-k)(m+k)$. Since $l$ is odd, $P_{2,l}(n)$ is odd, so $p\neq 2$. Thus, we see that
either $p\mid m-k$ or $p\mid m+k$, which implies that $p\leq m+k < 2n$.
\end{enumerate}
\end{proof}
The above lemma implies that if $n > \sqrt{l/2}$ and $P_{2,l}(n)$ is a square, then $P_{2,l}(n) = \prod_{p < 2n}p^{\alpha_p}$, where $p$ runs over odd primes and
\begin{align}
\label{eq:eq1}
\alpha_p = \sum_{j\leq \log(2n^2+l)/\log p} \#\{1\leq k\leq n: p^j\mid 2k^2+l\}.
\end{align}
In particular, we have $\alpha_2=0$. Now, observe that for $1\leq k\leq n$,
\[2k^2+l \geq k^{\log(2n^2+l)/\log n},\]
with strict inequality for $k<n$; indeed, for $k>1$ this follows from the fact that for $l\geq 1$ the function $f_l(x) := \log(2x^2+l)/\log x$ is decreasing on $(1,+\infty)$, while for $k=1$ it is immediate. Setting $\lambda = \log(2n^2+l)/\log n$, we have
$P_{2,l}(n) > (n!)^{\lambda}$ and writing $n! = \prod_{p\leq n} p^{\beta_p}$, where
\begin{align}
\label{eq:eq2}
\beta_p = \sum_{j\leq \log n/\log p} \#\{1\leq k\leq n: p^j\mid k\} = \sum_{j\leq \log n/\log p}\left\lfloor\dfrac{n}{p^j}\right\rfloor,
\end{align}
we deduce that
\begin{align}
\label{eq:eq3}
\sum_{p\leq n} \beta_p\log p\leq \dfrac{1}{\lambda}\sum_{p < 2n}\alpha_p\log p.
\end{align}
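As a quick numerical illustration of the inequality $P_{2,l}(n)>(n!)^{\lambda}$ (the figures below are rounded), take $l=3$ and $n=5$; then $\lambda=\log 53/\log 5\approx 2.47$ and
\[
P_{2,3}(5)=5\cdot 11\cdot 21\cdot 35\cdot 53=2\,142\,525>(5!)^{\lambda}\approx 1.35\times 10^{5}.
\]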
\begin{lemma}
\label{le:lem5}
Let $p$ be an odd prime with $p\nmid l$ and let $j > 0$. Then the congruence $2x^2\equiv -l\pmod{p^j}$ has exactly $1+\left(\frac{-2l}{p}\right)$ solutions modulo $p^j$.
\end{lemma}
\begin{proof}
This follows from rewriting the congruence as $(2x)^2\equiv -2l\pmod{p^j}$ and applying \cite[Thm. 5.1]{lkhua}.
\end{proof}
\begin{lemma}
\label{le:lem6}
Let $p$ be an odd prime with $p\nmid l$.
\begin{enumerate}
\item If $\left(\frac{-2l}{p}\right) = -1$, then $\alpha_p = 0$.
\item If $\left(\frac{-2l}{p}\right) = 1$, then $\frac{\alpha_p}{\lambda}-\beta_p\leq \frac{\log(2n^2+l)}{\log p}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item If $\left(\frac{-2l}{p}\right) = -1$, then by Lemma \ref{le:lem5}, the congruence $2x^2+l\equiv 0\pmod {p^j}$ has no solutions for $j > 0$. Thus, $p^j\nmid 2k^2+l$ for all integers $j, k\geq 1$ and $\alpha_p=0$.
\item If $\left(\frac{-2l}{p}\right) = 1$, then by Lemma \ref{le:lem5}, the congruence $2x^2+l\equiv 0\pmod {p^j}$ has at most two solutions in an interval of length $p^j$ for each $j > 0$. From (\ref{eq:eq1}), we deduce that
\begin{align}
\label{eq:eq4}
\alpha_p \leq \sum_{j\leq \log(2n^2+l)/\log p} 2\left\lceil\dfrac{n}{p^j}\right\rceil \leq \sum_{j\leq \log(2n^2+l)/\log p} \lambda\left\lceil\dfrac{n}{p^j}\right\rceil
\end{align}
Thus, we obtain
\[\begin{aligned}
\dfrac{\alpha_p}{\lambda}-\beta_p &\leq \sum_{j\leq \log n/\log p}\left(\left\lceil\dfrac{n}{p^j}\right\rceil-\left\lfloor\dfrac{n}{p^j}\right\rfloor\right)+\sum_{\log n/\log p < j \leq \log(2n^2+l)/\log p} \left\lceil\dfrac{n}{p^j}\right\rceil\\
&\leq \sum_{j\leq \log n/\log p}1 +\sum_{\log n/\log p < j \leq \log(2n^2+l)/\log p} 1 \leq \dfrac{\log(2n^2+l)}{\log p}.
\end{aligned}\]
\end{enumerate}
\end{proof}
\begin{lemma}
\label{le:lem7}
Suppose $p^s$ is the largest power of an odd prime $p$ dividing $l\geq 3$. Then $\alpha_p\leq \gamma_{l,p}(n)$, where
\[\begin{aligned}
\gamma_{l,p}(n) &= \dfrac{n}{2}\left(\dfrac{3p-2-p^{-s}}{(p-1)^2}-\dfrac{s+3}{p^s(p-1)}+\dfrac{4}{p^{s/2}(p-1)}\right)-\dfrac{2np^{s/2}}{(p-1)(2n^2+l)}\\
&+\dfrac{s(s+5)}{4}+2p^{s/2}\left(\dfrac{\log(2n^2+l)}{\log p}-s\right).
\end{aligned}\]
\end{lemma}
\begin{proof}
Write $l=p^sq$, where $p\nmid q$. We consider the number of solutions to the congruence $2x^2 \equiv -p^sq\pmod{p^j}$ with $j > 0$, which is equivalent to $(2x)^2\equiv -2p^sq\pmod{p^j}$. We work on two cases:
\begin{enumerate}
\item Suppose $s\geq j$. Then $2x\equiv p^{\lceil j/2\rceil}, p^{\lceil j/2\rceil+1},\ldots, p^j\pmod{p^j}$, so there are exactly $j-\lceil j/2\rceil+1$ solutions in this case.
\item Suppose $s < j$. Then $(2x)^2 = mp^j -2p^sq = p^s(mp^{j-s}-2q)$ for some positive integer $m$. Since $p$ does not divide $mp^{j-s}-2q$, we see that $s$ is even. Write $2x=p^{s/2}r$, where $x\in \{0,\ldots, p^j-1\}$ and $r\in \{0,\ldots, p^{j-s/2}-1\}$. The congruence now becomes $r^2\equiv -2q\pmod{p^{j-s}}$, and in view of Lemma \ref{le:lem5}, it has at most two solutions contained in each interval of length $p^{j-s}$. Thus, the congruence has at most $2p^{j-s/2}/p^{j-s} = 2p^{s/2}$ solutions contained in each interval of length $p^j$.
\end{enumerate}
Combining these cases, we see that
\[\begin{aligned}
\alpha_p &\leq \sum_{j\leq s}\left(j-\left\lceil\dfrac{j}{2}\right\rceil+1\right)\left\lceil\dfrac{n}{p^j}\right\rceil+\sum_{s < j\leq \log(2n^2+l)/\log p}2p^{s/2}\left\lceil\dfrac{n}{p^j}\right\rceil\\
&\leq \sum_{j\leq s}\left(j-\left\lceil\dfrac{j}{2}\right\rceil+1\right)\left(\dfrac{n}{p^j}+1\right)+\sum_{s < j\leq \log(2n^2+l)/\log p}2p^{s/2}\left(\dfrac{n}{p^j}+1\right)\\
&\leq \sum_{j\leq s}\left(\dfrac{j}{2}+1\right)\left(\dfrac{n}{p^j}+1\right)+2p^{s/2}\left(\dfrac{n(\frac{1}{p^{s+1}}-\frac{1}{p(2n^2+l)})}{1-\frac{1}{p}}+\dfrac{\log(2n^2+l)}{\log p}-s\right)\\
&=\dfrac{n}{2}\sum_{j\leq s}\dfrac{j+2}{p^j}+\dfrac{s(s+5)}{4}+\dfrac{2n}{p^{s/2}(p-1)}-\dfrac{2np^{s/2}}{(p-1)(2n^2+l)}+2p^{s/2}\left(\dfrac{\log(2n^2+l)}{\log p}-s\right)\\
&=\gamma_{l,p}(n).
\end{aligned}\]
\end{proof}
\begin{lemma}
\label{le:lem8}
For all positive integers $n$, we have $\sum_{n< p < 2n}\log p \leq n\log 4$.
\end{lemma}
\begin{proof}
Observe that for primes $p$ with $n < p < 2n$, $p$ appears exactly once in the prime factorization of $\binom{2n}{n}$. Thus, by the binomial theorem, we get $\prod_{n < p < 2n}p\leq \binom{2n}{n}\leq 4^n$, which is equivalent to the desired inequality.
\end{proof}
\begin{lemma}
\label{le:lem9}
Let $\pi(n)$ be the number of primes at most $n$. Then for all positive integers $n$, we have $\pi(n)\leq 2\log 4\frac{n}{\log n}+\sqrt{n}$.
\end{lemma}
\begin{proof}
From \cite[p. 459, (22.4.2)]{hardyw}, we have
\[\pi(n)\leq n^{1-\theta} + \dfrac{1}{(1-\theta)\log n}\sum_{p\leq n}\log p\]
for all $n\geq 2$ and $\theta\in (0,1)$. Setting $\theta = \frac{1}{2}$ and using $\prod_{p < n}p \leq 4^n$ (see \cite[Thm. 1.4]{tenen}), the desired inequality follows.
\end{proof}
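As a quick sanity check of the shape of this bound (with rounded figures), for $n=100$ Lemma \ref{le:lem9} gives $\pi(100)\leq 2\log 4\cdot\frac{100}{\log 100}+10\approx 70.2$, while in fact $\pi(100)=25$; the estimate is crude, but it suffices for the arguments below.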
\section{Proof of Theorem \ref{th:thm1} for odd $l\geq 3$}
We now give the proof of Theorem \ref{th:thm1} for odd $l\geq 3$ as follows. Suppose that $n > \sqrt{l/2}$ and that $P_{2,l}(n)$ is a square, and define the set $\mathcal{S} = \{p: p\text{ is an odd prime with }\left(\frac{-2l}{p}\right)=1\}$. Then inequality (\ref{eq:eq3}) becomes
\begin{align}
\label{eq:eq5}
\sum_{p\leq n} \beta_p\log p\leq \sum_{\substack{p \leq n\\p\in\mathcal{S}}} \dfrac{\alpha_p}{\lambda}\log p+\sum_{\substack{p \leq n\\p\notin\mathcal{S}}} \dfrac{\alpha_p}{\lambda}\log p+\sum_{n < p < 2n} \dfrac{\alpha_p}{\lambda}\log p.
\end{align}
Write $l = \prod_{i=1}^r p_i^{e_i}$, where $p_1,\ldots,p_r$ are odd primes. Applying Lemma \ref{le:lem6} to the primes $p\nmid l$ (and keeping the terms for $p=p_1,\ldots,p_r$ as they are), we have
\begin{align}
\label{eq:eq6}
\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}} \beta_p\log p \leq \sum_{\substack{p \leq n\\p\in\mathcal{S}}} \log(2n^2+l)+\dfrac{1}{\lambda}\sum_{i=1}^{r}\alpha_{p_i}\log p_i+\sum_{n < p < 2n} \dfrac{\alpha_p}{\lambda}\log p.
\end{align}
Observe that if $p > n$, then $\alpha_p \leq \lambda$ from (\ref{eq:eq4}), so applying Lemma \ref{le:lem8} in (\ref{eq:eq6}) yields
\begin{align}
\label{eq:eq7}
\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}} \beta_p\log p \leq \log(2n^2+l)\sum_{\substack{p \leq n\\p\in\mathcal{S}}} 1+\dfrac{1}{\lambda}\sum_{i=1}^{r}\alpha_{p_i}\log p_i+n\log 4.
\end{align}
On the other hand, if $p \leq n$, then from (\ref{eq:eq2}) we have
\[\begin{aligned}
\beta_p &= \sum_{j\leq \log n/\log p}\left\lfloor\dfrac{n}{p^j}\right\rfloor\geq \sum_{j\leq \log n/\log p}\left(\dfrac{n}{p^j}-1\right)\geq n\sum_{j\leq \log n/\log p}\dfrac{1}{p^j}-\dfrac{\log n}{\log p}\\
&\geq n\left(\dfrac{1-\frac{1}{n}}{1-\frac{1}{p}}-1\right)-\frac{\log n}{\log p} = \dfrac{n-p}{p-1}-\dfrac{\log n}{\log p}\\
&\geq \dfrac{n-1}{p-1}-\dfrac{\log (2n^2+l)}{\log p}.
\end{aligned}\]
Thus, from (\ref{eq:eq7}) we get
\begin{align}
\label{eq:eq8}
(n-1)\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}}\dfrac{\log p}{p-1}\leq \log(2n^2+l)\pi(n)+\dfrac{1}{\lambda}\sum_{i=1}^{r}\alpha_{p_i}\log p_i+n\log 4
\end{align}
and applying Lemma \ref{le:lem9}, we obtain
\begin{align}
\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}}\dfrac{\log p}{p-1}\leq \dfrac{\log(2n^2+l)}{n-1}\left(2\log 4\dfrac{n}{\log n}+\sqrt{n}\right)+\sum_{i=1}^{r}\dfrac{\alpha_{p_i}\log p_i}{\lambda (n-1)}+\dfrac{n\log 4}{n-1}.
\end{align}
Finally, using Lemma \ref{le:lem7}, we arrive at
\begin{align}
\label{eq:eq9}
\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}}\dfrac{\log p}{p-1}\leq \dfrac{\log(2n^2+l)}{n-1}\left(2\log 4\dfrac{n}{\log n}+\sqrt{n}\right)+\dfrac{\log n}{\log(2n^2+l)}\sum_{i=1}^{r}\dfrac{\gamma_{l,p_i}(n)\log p_i}{n-1}+\dfrac{n\log 4}{n-1}.
\end{align}
As $n$ grows sufficiently large, the right-hand side of (\ref{eq:eq9}) approaches the limit
\begin{align}
\label{eq:eq10}
10\log 2+ \dfrac{1}{4}\sum_{i=1}^r\left(\dfrac{3p_i-2-p_i^{-e_i}}{(p_i-1)^2}-\dfrac{e_i+3}{p_i^{e_i}(p_i-1)}+\dfrac{4}{p_i^{e_i/2}(p_i-1)}\right)\log p_i,
\end{align}
while the left-hand side becomes unbounded: since $-2l$ is not a perfect square, the primes $p\notin\mathcal{S}$ have positive density among all primes, and the prime number theorem implies that
\[\sum_{p\leq n}\dfrac{\log p}{p-1} = \log n-\gamma + o(1),\]
where $\gamma$ is the Euler--Mascheroni constant (see \cite{tenen}). Thus, there exists a positive integer $N_l$ such that $\sum_{p\leq n, p\notin\mathcal{S}} (\log p)/(p-1)$ is greater than (\ref{eq:eq10}) for all $n\geq N_l$. Hence, we conclude that $P_{2,l}(n)$ is not a square for all $n\geq N_l$.
\section{Proof of Theorem \ref{th:thm1} for $l=1$}
We now give the proof of Theorem \ref{th:thm1} for $l=1$ as follows. Suppose $P_{2,1}(n)$ is a square for some $n\geq 1$ and let $p$ be a prime divisor of $P_{2,1}(n)$. By Lemma \ref{le:lem4}, we have $p < 2n$. Since $p$ divides $2k^2+1$ for some $1\leq k\leq n$, we have $(-\frac{2}{p})= 1$, so that $p\equiv 1,3\pmod{8}$. Thus, we have
\[P_{2,1}(n) = \prod_{\substack{p < 2n\\ p\equiv 1,3\pmod{8}}} p^{\alpha_p}\]
and in view of $P_{2,1}(n) > (n!)^{\lambda}$, we see that
\begin{align}
\label{eq:eq12}
\sum_{p\leq n} \beta_p\log p\leq \dfrac{1}{\lambda}\sum_{\substack{p < 2n\\ p\equiv 1,3\pmod{8}}}\alpha_p\log p.
\end{align}
By Lemma \ref{le:lem6}, we have
\begin{align}
\label{eq:eq13}
\sum_{\substack{p\leq n\\ p\not\equiv 1,3\pmod{8}}} \beta_p\log p &\leq \sum_{\substack{p \leq n\\ p\equiv 1,3\pmod{8}}} \log(2n^2+1)+\sum_{n < p < 2n} \dfrac{\alpha_p}{\lambda}\log p.
\end{align}
Note that if $p > n$, then $\alpha_p \leq \lambda$ from (\ref{eq:eq4}) and if $p\leq n$, then
\[\beta_p \geq \dfrac{n-1}{p-1}-\dfrac{\log (2n^2+1)}{\log p}.\]
Using the above bounds for $\alpha_p$ and $\beta_p$ and applying Lemmas \ref{le:lem8} and \ref{le:lem9} to (\ref{eq:eq13}), we obtain
\begin{align}
\label{eq:eq14}
\sum_{\substack{p\leq n\\ p\not\equiv 1,3\pmod{8}}}\dfrac{\log p}{p-1}\leq \dfrac{\log(2n^2+1)}{n-1}\left(2\log 4\dfrac{n}{\log n}+\sqrt{n}\right)+\dfrac{n\log 4}{n-1}.
\end{align}
As $n$ tends to infinity, the left-hand side of (\ref{eq:eq14}) becomes unbounded, while the right-hand side approaches $10\log 2$. Since the smallest positive integer $n$ for which
\[\sum_{\substack{p\leq n\\ p\not\equiv 1,3\pmod{8}}}\dfrac{\log p}{p-1} > 10\log 2\]
is $n=706310$, we take $N_1:=706310$ so that $P_{2,1}(n)$ is not a square for $n\geq N_1$.
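The threshold can be located numerically with a sieve; the following Python sketch (illustrative only, with our own naming and a plain sieve of Eratosthenes) accumulates the restricted sum and reports the first crossing of $10\log 2$. The analogous computations in Sections 6 and 7 replace the congruence condition by $p\notin\mathcal{S}$ and the target $10\log 2$ by the constant in (\ref{eq:eq10}).
\begin{verbatim}
# Illustrative sketch: first n with
#   sum_{p <= n, p not congruent to 1,3 (mod 8)} log(p)/(p-1) > 10*log(2).
from math import log

def first_threshold(limit=800000):
    sieve = bytearray([1]) * (limit + 1)      # sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit + 1, i)))
    target, total = 10 * log(2), 0.0
    for n in range(2, limit + 1):
        if sieve[n] and n % 8 not in (1, 3):
            total += log(n) / (n - 1)
            if total > target:
                return n   # first crossing; compare with N_1 = 706310 above
    return None

print(first_threshold())
\end{verbatim}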
\section{Proof of Corollary \ref{co:cor2}}
We now apply Theorem \ref{th:thm1} to prove Corollary \ref{co:cor2}.
\begin{proof}
We know from the previous section that $P_{2,1}(n)$ is not a square for $n\geq N_1=706310$. Since $P_{2,1}(1)=3$ and $P_{2,1}(2)=27$ are not squares, it suffices to prove that $P_{2,1}(n)$ is not a square for $3\leq n\leq 706309$. We proceed as follows:
\begin{itemize}
\item Since $2\cdot 3^2+1=19$ is a prime and the next value of $k > 3$ for which $19$ divides $2k^2+1$ is $k=19-3=16$, we see that $P_{2,1}(n)$ is not a square for $3\leq n\leq 15$.
\item Since $2\cdot 6^2+1=73$ is a prime and the next value of $k > 6$ for which $73$ divides $2k^2+1$ is $k=73-6=67$, we see that $P_{2,1}(n)$ is not a square for $6\leq n\leq 66$.
\item Since $2\cdot 21^2+1=883$ is a prime and the next value of $k > 21$ for which $883$ divides $2k^2+1$ is $k=883-21=862$, we see that $P_{2,1}(n)$ is not a square for $21\leq n\leq 861$.
\item Since $2\cdot 597^2+1=712819$ is a prime and the next value of $k > 597$ for which $712819$ divides $2k^2+1$ is $k=712819-597=712222$, we see that $P_{2,1}(n)$ is not a square for $597\leq n\leq 712221$.
\end{itemize}
Hence, $P_{2,1}(n)$ is not a square for all $n\geq 1$.
\end{proof}
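Each bullet in the proof above is an instance of the same elementary certificate: if $q=2k_0^2+l$ is prime, then by Lemma \ref{le:lem5} the congruence $2k^2\equiv -l\pmod q$ has only the solutions $k\equiv\pm k_0$, so for $k_0\leq n\leq q-k_0-1$ the prime $q$ divides exactly one factor of $P_{2,l}(n)$, and divides it exactly once (as $q^2>q=2k_0^2+l$); hence $P_{2,l}(n)$ is not a square in that range. The following Python sketch (illustrative only; the greedy strategy and names are ours) builds such a covering chain.
\begin{verbatim}
# Illustrative sketch: greedily build primes q = 2*k0^2 + l, each certifying
# that P_{2,l}(n) is not a square for k0 <= n <= q - k0 - 1.
from sympy import isprime     # any primality test may be substituted

def covering_chain(l, start, n_bound):
    chain, hi = [], start - 1
    while hi < n_bound:
        k = hi + 1                     # largest candidate still overlapping
        while k >= 1 and not isprime(2 * k * k + l):
            k -= 1
        if k < 1:
            return None                # greedy step failed
        q = 2 * k * k + l
        new_hi = q - k - 1
        if new_hi <= hi:
            return None                # no progress; give up rather than loop
        chain.append((k, q))
        hi = new_hi
    return chain

# For l = 1 this should cover 3 <= n <= 706309, playing the same role as the
# primes 19, 73, 883, 712819 used in the proof above (the chain may differ).
print(covering_chain(1, 3, 706309))
\end{verbatim}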
\section{Proof of Corollary \ref{co:cor3}}
We next prove Corollary \ref{co:cor3} using Theorem \ref{th:thm1}.
\begin{proof}
We set $l=3$ with $r = 1, p_1=3$ and $e_1=1$. In this case, we have $\mathcal{S} = \{p\text{ prime}: p\equiv 1,5,7,11\pmod {24}\}$ and the limit (\ref{eq:eq10}) becomes $10\log 2+ \frac{1}{4}(1+\frac{2}{\sqrt{3}})\log 3\approx 7.523267$. Since the smallest positive integer $n$ for which
\[\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}}\dfrac{\log p}{p-1} > 10\log 2+ \dfrac{1}{4}\left(1+\dfrac{2}{\sqrt{3}}\right)\log 3\]
is $n=N_3:=2189634$, we see that $P_{2,3}(n)$ is not a square for all $n\geq N_3$. Since $P_{2,3}(1)=5$ is not a square, it suffices to prove that $P_{2,3}(n)$ is not a square for $2\leq n\leq 2189633$. We proceed as follows:
\begin{itemize}
\item Since $2\cdot 2^2+3=11$ is a prime and the next value of $k > 2$ for which $11$ divides $2k^2+3$ is $k=11-2=9$, we see that $P_{2,3}(n)$ is not a square for $2\leq n\leq 8$.
\item Since $2\cdot 8^2+3=131$ is a prime and the next value of $k > 8$ for which $131$ divides $2k^2+3$ is $k=131-8=123$, we see that $P_{2,3}(n)$ is not a square for $8\leq n\leq 122$.
\item Since $2\cdot 37^2+3=2741$ is a prime and the next value of $k > 37$ for which $2741$ divides $2k^2+3$ is $k=2741-37=2704$, we see that $P_{2,3}(n)$ is not a square for $37\leq n\leq 2703$.
\item Since $2\cdot 1048^2+3=2196611$ is a prime and the next value of $k > 1048$ for which $2196611$ divides $2k^2+3$ is $k=2196611-1048=2195563$, we see that $P_{2,3}(n)$ is not a square for $1048\leq n\leq 2195562$.
\end{itemize}
Hence, $P_{2,3}(n)$ is not a square for all $n\geq 1$.
\end{proof}
\section{Proof of Corollary \ref{co:cor4}}
We finally prove Corollary \ref{co:cor4} using Theorem \ref{th:thm1}.
\begin{proof}
We set $l=7$ with $r = 1, p_1=7$ and $e_1=1$. In this case, we have $\mathcal{S} = \{p\text{ prime}: p\equiv 1,3,5,9,13,15,19,23,25,27,39,45\pmod {56}\}$ and the limit (\ref{eq:eq10}) becomes $10\log 2+ \frac{1}{4}(\frac{3}{7}+\frac{2}{3\sqrt{7}})\log 7\approx 7.262543$. Since the smallest positive integer $n$ for which
\[\sum_{\substack{p\leq n\\ p\notin\mathcal{S}}}\dfrac{\log p}{p-1} > 10\log 2+ \dfrac{1}{4}\left(\dfrac{3}{7}+\dfrac{2}{3\sqrt{7}}\right)\log 7\]
is $n=N_7:=2142500$, we see that $P_{2,7}(n)$ is not a square for all $n\geq N_7$. By direct calculation, we deduce that the only value of $n$ in $1\leq n\leq 5$ for which $P_{2,7}(n)$ is a square is $n=1$. Thus, it suffices to prove that $P_{2,7}(n)$ is not a square for $6\leq n\leq 2142499$. We proceed as follows:
\begin{itemize}
\item Since $2\cdot 6^2+7=79$ is a prime and the next value of $k > 6$ for which $79$ divides $2k^2+7$ is $k=79-6=73$, we see that $P_{2,7}(n)$ is not a square for $6\leq n\leq 72$.
\item Since $2\cdot 15^2+7=457$ is a prime and the next value of $k > 15$ for which $457$ divides $2k^2+7$ is $k=457-15=432$, we see that $P_{2,7}(n)$ is not a square for $15\leq n\leq 431$.
\item Since $2\cdot 39^2+7=3049$ is a prime and the next value of $k > 39$ for which $3049$ divides $2k^2+7$ is $k=3049-39=3010$, we see that $P_{2,7}(n)$ is not a square for $39\leq n\leq 3009$.
\item Since $2\cdot 1041^2+7=2167369$ is a prime and the next value of $k > 1041$ for which $2167369$ divides $2k^2+7$ is $k=2167369-1041=2166328$, we see that $P_{2,7}(n)$ is not a square for $1041\leq n\leq 2166327$.
\end{itemize}
Hence, the only value of $n$ for which $P_{2,7}(n)$ is a square is $n=1$.
\end{proof}
\section{Conclusion}
We have shown in Sections 3 and 4 the existence of a positive integer $N_l$ in Theorem \ref{th:thm1}, depending on $l$, such that $P_{2,l}(n)$ is not a square for $n\geq N_l$. As the proofs of Corollaries \ref{co:cor2}, \ref{co:cor3} and \ref{co:cor4} show, Theorem \ref{th:thm1} provides a method of determining, for a given odd number $l$, all positive integers $n$ for which $P_{2,l}(n)$ is a square. By extensive calculation using \textit{Mathematica}, we find that the only odd numbers $l$ in $1\leq l\leq 10000$ such that $P_{2,l}(n)$ is a square for some $n\geq 1$ are those of the form $l=m^2-2$ for some positive odd integer $m$. In view of this numerical evidence, we present the following conjecture.
\begin{conjecture}
\label{co:con1}
The only positive odd integers $l$ such that $P_{2,l}(n)$ is a square for some $n\geq 1$ are those of the form $l=m^2-2$ for some positive odd integer $m$. For such an integer $l$, $P_{2,l}(n)$ is a square only for $n=1$.
\end{conjecture}
Conjecture \ref{co:con1} is quite difficult to prove, as it requires a more thorough analysis of the bounds on the $p$-adic valuation of $P_{2,l}(n)$. We leave further research concerning Conjecture \ref{co:con1} to the interested reader.
\end{document} |
\begin{document}
\title{Tighter constraints of multiqubit entanglement}
\author{Long-Mei Yang$^1$ }
\author{Bin Chen$^2$}
\thanks{Corresponding author: [email protected]}
\author{Shao-Ming Fei$^{1,3}$}
\author{Zhi-Xi Wang$^1$}
\thanks{Corresponding author: [email protected]}
\affiliation{
$^1$School of Mathematical Sciences, Capital Normal University, Beijing 100048, China\\
$^2$College of Mathematical Science, Tianjin Normal University, Tianjin 300387, China\\
$^3$Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany
}
\begin{abstract}
Monogamy and polygamy relations characterize the distributions of entanglement in multipartite systems.
We provide classes of monogamy and polygamy inequalities of multiqubit entanglement
in terms of concurrence, entanglement of formation, negativity, Tsallis-$q$ entanglement and R\'{e}nyi-$\alpha$ entanglement, respectively.
We show that these inequalities are tighter than the existing ones for some classes of quantum states.
\end{abstract}
\maketitle
\baselineskip20pt
\section{Introduction}
Quantum entanglement is an essential feature of quantum mechanics which distinguishes the quantum from the classical world
and plays a very important role in quantum information processing \cite{dynamics,GaoYang,demonstration,QIp}.
One singular property of quantum entanglement is that a quantum system entangled with one of the other subsystems limits
its entanglement with the remaining ones, known as the monogamy of entanglement (MoE) \cite{BMT,JSK}.
MoE plays a key role in many quantum information and communication processing tasks such as the security proof in quantum cryptographic scheme \cite{CHB}
and the security analysis of quantum key distribution \cite{Pawl}.
For a tripartite quantum state $\rho_{ABC}$, MoE can be described as the following inequality
\begin{equation}
\mathcal{E}(\rho_{A|BC})\geq \mathcal{E}(\rho_{AB})+\mathcal{E}(\rho_{AC}),
\end{equation}
where $\rho_{AB}={\rm tr}_C(\rho_{ABC})$ and $\rho_{AC}={\rm tr}_B(\rho_{ABC})$ are reduced density matrices, and $\mathcal{E}$ is an entanglement measure.
However, not all entanglement measures satisfy such monogamy relations.
It has been shown that the squared concurrence $\mathcal{C}^2$ \cite{TJO,Bai}, the squared entanglement of formation (EoF) $E^2$ \cite{Oliveira} and the squared convex-roof extended negativity (CREN) $\mathcal{N}_c^2$ \cite{JSK8,Feng1} satisfy the monogamy relations for multiqubit states.
Another important concept is the assisted entanglement, which is a dual amount to bipartite entanglement measure.
It has a dually monogamous property in multipartite quantum systems and gives rise to polygamy relations.
For a tripartite state $\rho_{ABC}$, the usual polygamy relation is of the form,
\begin{equation}
\mathcal{E}^a(\rho_{A|BC})\leq\mathcal{E}^a (\rho_{AB})+\mathcal{E}^a(\rho_{AC}),
\end{equation}
where $\mathcal{E}^a$ is the corresponding entanglement measure of assistance associated to $\mathcal{E}$.
Such polygamy inequality has been deeply investigated in recent years,
and was generalized to multiqubit systems and classes of higher-dimensional quantum systems \cite{JSK3,F.B,G.G1,G.G2,JSK5,JSK6,JSK8,Song}.
Recently, generalized classes of monogamy inequalities related to the $\beta$th power of entanglement measures were proposed.
In Ref. \cite{SM.Fei1,SM.Fei3}, the authors proved that the squared concurrence and CREN satisfy the monogamy inequalities in multiqubit systems for $\beta\geq2$.
It has also been shown that the EoF satisfies monogamy relations when $\beta\geq\sqrt{2}$ \cite{SM.Fei1,SM.Fei2,SM.Fei3}.
Besides, the Tsallis-$q$ entanglement and R\'enyi-$\alpha$ entanglement satisfy monogamy relations when $\beta\geq1$ \cite{JSK2,JSK3,SM.Fei2,SM.Fei3}
for some cases.
Moreover, the corresponding polygamy relations have also been established \cite{G.G1,G.G2,JSK5,JSK9,Song,Yongming}.
In this paper, we investigate monogamy relations and polygamy relations in multiqubit systems.
We provide tighter constraints of multiqubit entanglement than all the existing ones,
thus give rise to finer characterizations of the entanglement distributions among the multiqubit systems.
\section{Tighter constraints related to concurrence}
We first consider the monogamy inequalities and polygamy inequalities for concurrence.
For a bipartite pure state $|\psi\rangle_{AB}$ in Hilbert space $H_A\otimes H_B$, the concurrence is defined as \cite{fide,SM.Fei4}
$\mathcal{C}(|\psi\rangle_{AB})=\sqrt{2(1-{\rm tr}\rho_A^2)}$ with $\rho_A={\rm tr}_B|\psi\rangle_{AB}\langle\psi|$.
The concurrence for a bipartite mixed state $\rho_{AB}$ is defined by the convex roof extension,
$\mathcal{C}(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_i\mathcal{C}(|\psi_i\rangle)$,
where the minimum is taken over all possible decompositions of $\rho_{AB}=\sum\limits_{i}p_i|\psi_i\rangle\langle\psi_i|$
with $\sum p_i=1$ and $p_i\geq0$.
For an $N$-qubit state $\rho_{AB_1\cdots B_{N-1}}\in H_A\otimes H_{B_1}\otimes\cdots\otimes H_{B_{N-1}}$,
the concurrence $\mathcal{C}(\rho_{A|B_1\cdots B_{N-1}})$ of the state $\rho_{AB_1\cdots B_{N-1}}$ under bipartite partition $A$ and $B_1\cdots B_{N-1}$ satisfies \cite{SM.Fei1}
\begin{equation}\label{Con1}
\begin{array}{rl}
&\mathcal{C}^\beta(\rho_{A|B_1\cdots B_{N-1}})\\
&\ \ \geq\mathcal{C}^\beta(\rho_{AB_1})+\mathcal{C}^\beta(\rho_{AB_2})+\cdots+\mathcal{C}^\beta(\rho_{AB_{N-1}}),
\end{array}
\end{equation}
for $\beta\geq2$, where $\rho_{AB_j}$ denote two-qubit reduced density matrices of subsystems $AB_j$ for $j=1,2,\ldots,N-1$.
Later, the relation \eqref{Con1} is improved for the case $\beta\geq2$ \cite{SM.Fei2} as
\begin{equation}\label{Con2}
\begin{array}{rl}
&\mathcal{C}^\beta(\rho_{A|B_1\cdots B_{N-1}})\\
&\ \ \geq\mathcal{C}^\beta(\rho_{AB_1})+\frac{\beta}{2}\mathcal{C}^\beta(\rho_{AB_2})+\cdots\\
&\ \ \ \ +\big(\frac{\beta}{2}\big)^{m-1}\mathcal{C}^\beta(\rho_{AB_{m}})\\
&\ \ \ \ +\big(\frac{\beta}{2}\big)^{m+1}[\mathcal{C}^\beta(\rho_{AB_{m+1}})+\cdots+\mathcal{C}^\beta(\rho_{AB_{N-2}})]\\
&\ \ \ \ +\big(\frac{\beta}{2}\big)^m\mathcal{C}^\beta(\rho_{AB_{N-1}})
\end{array}
\end{equation}
conditioned that $\mathcal{C}(\rho_{AB_i})\geq\mathcal{C}(\rho_{A|B_{i+1}\cdots B_{N-1}})$ for $i=1,2,\ldots,m$,
and $\mathcal{C}(\rho_{AB_j})\leq\mathcal{C}(\rho_{A|B_{j+1}\cdots B_{N-1}})$ for $j=m+1,\ldots,N-2$.
The relation \eqref{Con2} is further improved for $\beta\geq2$ as \cite{SM.Fei3}
\begin{equation}\label{Con3}
\begin{array}{rl}
&\mathcal{C}^\beta(\rho_{A|B_1\cdots B_{N-1}})\\
&\ \ \geq\mathcal{C}^\beta(\rho_{AB_1})+\big(2^{\frac{\beta}{2}}-1\big)\mathcal{C}^\beta(\rho_{AB_2})+\cdots\\
&\ \ \ \ +\big(2^{\frac{\beta}{2}}-1\big)^{m-1}\mathcal{C}^\beta(\rho_{AB_{m}})\\
&\ \ \ \ +\big(2^{\frac{\beta}{2}}-1\big)^{m+1}[\mathcal{C}^\beta(\rho_{AB_{m+1}})+\cdots+\mathcal{C}^\beta(\rho_{AB_{N-2}})]\\
&\ \ \ \ +\big(2^{\frac{\beta}{2}}-1\big)^m\mathcal{C}^\beta(\rho_{AB_{N-1}})
\end{array}
\end{equation}
with the same conditions as in \eqref{Con2}.
For a tripartite state $|\psi\rangle_{ABC}$, the concurrence of assistance (CoA) is defined by \cite{EOS,monogamy}
\begin{equation}
\mathcal{C}_a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_i\mathcal{C}(|\psi_i\rangle),
\end{equation}
where the maximum is taken over all possible pure state decompositions of $\rho_{AB}$, and $\mathcal{C}(|\psi\rangle_{AB})=\mathcal{C}_a(|\psi\rangle_{AB})$.
The generalized polygamy relation based on the concurrence of assistance was established in \cite{G.G1,G.G2}
\begin{equation}\label{Con14}
\begin{array}{rl}
&\mathcal{C}^2(|\psi\rangle_{A|B_1\cdots B_{N-1}})\\
&\ \ =\mathcal{C}_a^2(|\psi\rangle_{A|B_1\cdots B_{N-1}})\\
&\ \ \leq\mathcal{C}_a^2(\rho_{AB_1})+\mathcal{C}_a^2(\rho_{AB_2})+\cdots+\mathcal{C}_a^2(\rho_{AB_{N-1}}).
\end{array}
\end{equation}
These monogamy and polygamy relations for concurrence can be further tightened under some conditions.
To this end, we first introduce the following lemma.
\begin{lemma}\label{con1}
Suppose that $k$ is a real number satisfying $0< k\leq1$, then for any $0\leq t\leq k$ and non-negative real numbers $m,n$, we have
\begin{equation}\label{Con4}
(1+t)^m\geq1+\frac{(1+k)^m-1}{k^m}t^m
\end{equation}
for $m\geq1$, and
\begin{equation}\label{Con5}
(1+t)^n\leq1+\frac{(1+k)^n-1}{k^n}t^n
\end{equation}
for $0\leq n\leq 1$.
\end{lemma}
\begin{proof}
We first consider the function $f(m,x)=(1+x)^m-x^m$ with $x\geq\frac{1}{k}$ and $m\geq1$.
Then $f(m,x)$ is an increasing function of $x$, since $\frac{\partial f(m,x)}{\partial x}=m[(1+x)^{m-1}-x^{m-1}]\geq 0$.
Thus,
\begin{equation}\label{Con8}
f(m,x)\geq f(m,\frac{1}{k})=\big(1+\frac{1}{k}\big)^m-\big(\frac{1}{k}\big)^m=\frac{(k+1)^m-1}{k^m}.
\end{equation}
Setting $x=\frac{1}{t}$ in \eqref{Con8} and multiplying both sides by $t^m$, we get inequality \eqref{Con4}.
Similar to the proof of inequality \eqref{Con4}, we can obtain the inequality \eqref{Con5},
since in this case $f(n,x)$ is a decreasing function of $x$ for $x\geq \frac{1}{k}$ and $0\leq n\leq1$.
\end{proof}
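For instance, taking $k=\frac{1}{2}$ and $m=2$, inequality \eqref{Con4} reads $(1+t)^2\geq 1+5t^2$ for $0\leq t\leq\frac{1}{2}$, with equality exactly at $t=k=\frac{1}{2}$. More generally, both \eqref{Con4} and \eqref{Con5} become equalities at $t=k$, which is why the refined coefficients obtained below are sharpest when the two quantities being compared are in the ratio $k$.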
In the following, we denote by $\mathcal{C}_{AB_i}=\mathcal{C}(\rho_{AB_i})$ the concurrence of $\rho_{AB_i}$
and $\mathcal{C}_{A|B_1\cdots B_{N-1}}=\mathcal{C}(\rho_{A|B_1\cdots B_{N-1}})$ for convenience.
\begin{lemma}\label{con2}
Suppose that $k$ is a real number satisfying $0< k\leq1$.
Then for any $2\otimes2\otimes2^{n-2}$ mixed state $\rho\in H_A\otimes H_B\otimes H_C$,
if $\mathcal{C}_{AC}^2\leq k\mathcal{C}_{AB}^2$, we have
\begin{equation}\label{Con9}
\mathcal{C}_{A|BC}^\beta\geq\mathcal{C}_{AB}^\beta+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{AC}^\beta,
\end{equation}
for all $\beta\geq 2$.
\end{lemma}
\begin{proof}
Since $\mathcal{C}_{AC}^2\leq k\mathcal{C}_{AB}^2$ and $\mathcal{C}_{AB}>0$,
we obtain
\begin{equation}
\begin{array}{rl}
&\mathcal{C}_{A|BC}^\beta\geq(\mathcal{C}_{AB}^2+\mathcal{C}_{AC}^2)^{\frac{\beta}{2}}\\[1.5mm]
&\ \ \ \ \ \ \ \ \ =\mathcal{C}_{AB}^\beta \Big(1+\frac{\mathcal{C}_{AC}^2}{\mathcal{C}_{AB}^2}\Big)^{\frac{\beta}{2}}\\[1.5mm]
&\ \ \ \ \ \ \ \ \ \geq\mathcal{C}_{AB}^\beta \Big[1+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}
\Big(\frac{\mathcal{C}_{AC}^2}{\mathcal{C}_{AB}^2}\Big)^{\frac{\beta}{2}}\Big]\\[3mm]
&\ \ \ \ \ \ \ \ \ =\mathcal{C}_{AB}^\beta+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{AC}^\beta,
\end{array}
\end{equation}
where the first inequality is due to the fact that $\mathcal{C}_{A|BC}^2\geq\mathcal{C}_{AB}^2+\mathcal{C}_{AC}^2$ for an
arbitrary $2\otimes2\otimes2^{n-2}$ tripartite state $\rho_{ABC}$ \cite{TJO,XJR}, and
the second is due to Lemma \ref{con1}.
We can also see that if $\mathcal{C}_{AB}=0$, then $\mathcal{C}_{AC}=0$, and the lower bound becomes trivially zero.
\end{proof}
For multiqubit systems, we have the following Theorems.
\begin{theorem}\label{concurrence1}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$, if $k\mathcal{C}_{AB_i}^2\geq\mathcal{C}_{A|B_{i+1}\cdots B_{N-1}}^2$
for $i=1,2,\ldots,m$, and $\mathcal{C}_{AB_j}^2\leq k\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}^2$ for $j=m+1,\ldots,N-2$,
$\forall 1\leq m\leq N-3$, $N\geq 4$, then we have
\begin{equation}\label{Con12}
\begin{array}{rl}
&\mathcal{C}^\beta_{A|B_1\cdots B_{N-1}}\\[2.0mm]
&\ \ \geq\mathcal{C}^\beta_{AB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}^\beta_{AB_2}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m-1}\mathcal{C}^\beta_{AB_{m}}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m+1}\Big(\mathcal{C}^\beta_{AB_{m+1}}+\cdots+
\mathcal{C}^\beta_{AB_{N-2}}\Big)\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^m\mathcal{C}^\beta_{AB_{N-1}}
\end{array}
\end{equation}
for all $\beta\geq2$.
\end{theorem}
\begin{proof}
From the inequality \eqref{Con9}, we have
\begin{equation}\label{Con10}
\begin{array}{rl}
&\mathcal{C}^\beta_{A|B_1B_2\cdots B_{N-1}}\\[2.0mm]
&\ \ \geq\mathcal{C}_{AB_1}^\beta+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{A|B_2\cdots B_{N-1}}^\beta\\[2.0mm]
&\ \ \geq\mathcal{C}_{AB_1}^\beta+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{AB_2}^\beta\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^2\mathcal{C}_{A|B_3\cdots B_{N-1}}^\beta\\[2.0mm]
&\ \ \geq\cdots\\[2.0mm]
&\ \ \geq\mathcal{C}_{AB_1}^\beta+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)\mathcal{C}_{AB_2}^\beta\\[2.0mm]
&\ \ \ \ +\cdots +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m-1}\mathcal{C}_{AB_m}^\beta\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m}\mathcal{C}_{A|B_{m+1}\cdots B_{N-1}}^\beta.
\end{array}
\end{equation}
Since $\mathcal{C}_{AB_j}^2\leq k\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}^2$,
for $j=m+1,\ldots,N-2$, we get
\begin{equation}\label{Con11}
\begin{array}{rl}
&\mathcal{C}_{A|B_{m+1}\cdots B_{N-1}}^\beta\\[2.0mm]
&\ \ \geq\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{AB_{m+1}}^\beta+\mathcal{C}_{A|B_{m+2}\cdots B_{N-1}}^\beta\\[2.0mm]
&\ \ \geq\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big(\mathcal{C}_{AB_{m+1}}^\beta+\cdots+\mathcal{C}_{AB_{N-2}}^\beta\Big)
+\mathcal{C}_{AB_{N-1}}^\beta.
\end{array}
\end{equation}
Combining \eqref{Con10} and \eqref{Con11}, we get the inequality \eqref{Con12}.
\end{proof}
If we replace the conditions $k\mathcal{C}_{AB_i}^2\geq\mathcal{C}_{A|B_{i+1}\cdots B_{N-1}}^2$ for $i=1,2,\ldots,m$,
and $\mathcal{C}_{AB_j}^2\leq k\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}^2$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq 4$,
in Theorem \ref{concurrence1} by $k\mathcal{C}_{AB_i}^2\geq \mathcal{C}_{A|B_{i+1}\cdots B_{N-1}}^2$ for $i=1,2,\ldots,N-2$,
then we have the following theorem.
\begin{theorem}\label{concurrence2}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $k\mathcal{C}_{AB_i}^2\geq\mathcal{C}_{A|B_{i+1}\cdots B_{N-1}}^2$ for all $i=1,2,\ldots,N-2$, then we have
\begin{equation}\label{Con13}
\begin{array}{rcl}
\mathcal{C}^\beta_{A|B_1\cdots B_{N-1}}
&\geq&\mathcal{C}^\beta_{AB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}
\mathcal{C}^\beta_{AB_2}+\cdots\\[3mm]
&&+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{N-2}\mathcal{C}^\beta_{AB_{N-1}}
\end{array}
\end{equation}
for $\beta\geq2$.
\end{theorem}
It can be seen that the inequalities \eqref{Con12} and \eqref{Con13} are tighter than the ones given in Ref. \cite{SM.Fei3},
since
$$
\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}\geq2^{\frac{\beta}{2}}-1
$$
for $\beta\geq2$ and $0<k\leq1$.
The equality holds when $k=1$. Namely, the result (\ref{Con3}) given in \cite{SM.Fei3} is just a special case of ours for $k=1$.
As $\frac{(1+k)^\frac{\beta}{2}-1}{k^{\frac{\beta}{2}}}$ is a decreasing function with respect to $k$ for $0< k\leq 1$ and $\beta\geq2$,
we find that the smaller $k$ is, the tighter the inequalities \eqref{Con9}, \eqref{Con12} and \eqref{Con13} are.
\noindent{\bf Example 1} \, \ Consider the three-qubit state $|\psi\rangle_{ABC}$ in generalized Schmidt decomposition form \cite{Schmidt,SM.Fei5},
\begin{equation}\label{Con6}
|\psi\rangle_{ABC}=\lambda_0|000\rangle+\lambda_1e^{i\varphi}|100\rangle+\lambda_2|101\rangle+\lambda_3|110\rangle+\lambda_4|111\rangle,
\end{equation}
where $\lambda_i\geq0$, $i=0,1,\ldots,4$, and $\sum\limits_{i=0}^4\lambda_i^2=1$.
Then we get $\mathcal{C}_{A|BC}=2\lambda_0\sqrt{\lambda_2^2+\lambda_3^2+\lambda_4^2}$, $\mathcal{C}_{AB}=2\lambda_0\lambda_2$ and
$\mathcal{C}_{AC}=2\lambda_0\lambda_3$.
Set $\lambda_0=\lambda_3=\frac{1}{2},\ \lambda_2=\frac{\sqrt{2}}{2}$ and $\lambda_1=\lambda_4=0$.
We have $\mathcal{C}_{A|BC}=\frac{\sqrt{3}}{2}$, $\mathcal{C}_{AB}=\frac{\sqrt{2}}{2}$ and $\mathcal{C}_{AC}=\frac{1}{2}$.
Then $\mathcal{C}_{AB}^\beta+\big(2^{\frac{\beta}{2}}-1\big)\mathcal{C}_{AC}^\beta=\big(\frac{\sqrt{2}}{2}\big)^\beta+
\big(2^{\frac{\beta}{2}}-1\big)\big(\frac{1}{2}\big)^\beta$ and
$\mathcal{C}_{AB}^\beta+\frac{(1+k)^\frac{\beta}{2}-1}{k^{\frac{\beta}{2}}}\mathcal{C}_{AC}^\beta=\big(\frac{\sqrt{2}}{2}\big)^\beta+
\frac{(1+k)^\frac{\beta}{2}-1}{k^{\frac{\beta}{2}}}\big(\frac{1}{2}\big)^\beta$.
One can see that our result is better than the result (\ref{Con3}) in \cite{SM.Fei3} for $\beta\geq2$, hence better than (\ref{Con1}) and (\ref{Con2}) given in \cite{SM.Fei1,SM.Fei2},
see Fig. 1.
\begin{figure}
\caption{The $y$ axis is the lower bound of the concurrence $\mathcal{C}^{\beta}_{A|BC}$ in Example 1, plotted as a function of $\beta$.}
\end{figure}
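The comparison in Example 1 can be checked directly; the following Python sketch (illustrative only) evaluates the bound from \eqref{Con3} and the refined bound \eqref{Con9}, with $k$ chosen as the exact ratio $\mathcal{C}_{AC}^2/\mathcal{C}_{AB}^2=\frac{1}{2}$, the smallest admissible value.
\begin{verbatim}
# Illustrative check of Example 1: C_AB = sqrt(2)/2, C_AC = 1/2, C_A|BC = sqrt(3)/2.
from math import sqrt

C_AB, C_AC, C_ABC = sqrt(2) / 2, 0.5, sqrt(3) / 2
k = C_AC**2 / C_AB**2        # = 1/2, smallest k with C_AC^2 <= k * C_AB^2

for b in (2, 3, 4, 5):
    old = C_AB**b + (2 ** (b / 2) - 1) * C_AC**b                        # (Con3)
    new = C_AB**b + ((1 + k) ** (b / 2) - 1) / k ** (b / 2) * C_AC**b   # (Con9)
    print(f"beta={b}: exact={C_ABC**b:.4f}  old={old:.4f}  new={new:.4f}")
\end{verbatim}
For this state the refined bound is in fact saturated for every $\beta\geq 2$ (for instance $0.5625$ versus $0.4375$ from the old bound at $\beta=4$).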
We now discuss the polygamy relations for the CoA $\mathcal{C}_a(|\psi\rangle_{A|B_1\cdots B_{N-1}})$ for $0\leq\beta\leq 2$. We have the following Theorems.
We have the following Theorem.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit pure state $|\psi\rangle_{AB_1\cdots B_{N-1}}$,
if $k\mathcal{C}^2_{aA|B_i}\geq\mathcal{C}^2_{aA|B_{i+1}\cdots B_{N-1}}$
for $i=1,2,\ldots,m$, and $\mathcal{C}^2_{aA|B_j}\leq k\mathcal{C}^2_{aA|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$,
$\forall 1\leq m\leq N-3$, $N\geq 4$, then we have
\begin{equation}\label{Con16}
\begin{array}{rl}
&\mathcal{C}_a^\beta(|\psi\rangle_{A|B_1\cdots B_{N-1}})\\[2.0mm]
&\ \ \leq\mathcal{C}^\beta_{aAB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{C}^\beta_{aAB_2}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m-1}\mathcal{C}^\beta_{aAB_{m}}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m+1}\Big(\mathcal{C}^\beta_{aAB_{m+1}}+
\cdots+\mathcal{C}^\beta_{aAB_{N-2}}\Big)\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^m\mathcal{C}^\beta_{aAB_{N-1}}
\end{array}
\end{equation}
for all $0\leq\beta\leq2$.
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem \ref{concurrence1} by using inequality \eqref{Con5}.
\end{proof}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit pure state $|\psi\rangle_{AB_1\cdots B_{N-1}}$,
if $k\mathcal{C}^2_{aAB_i}\geq\mathcal{C}^2_{aA|B_{i+1}\cdots B_{N-1}}$ for all $i=1,2,\ldots,N-2$, then we have
\begin{equation}\label{Con17}
\begin{array}{rcl}
\mathcal{C}^\beta_a(|\psi\rangle_{A|B_1\cdots B_{N-1}})
&\leq&\mathcal{C}^\beta_{aAB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}
{k^{\frac{\beta}{2}}}\mathcal{C}^\beta_{aAB_2}+\cdots\\[3.0mm]
&&+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{N-2}\mathcal{C}^\beta_{aAB_{N-1}}
\end{array}
\end{equation}
for $0\leq\beta\leq2$.
\end{theorem}
The inequalities \eqref{Con16} and \eqref{Con17} are also upper bounds of $\mathcal{C}(|\psi\rangle_{A|B_1\cdots B_{N-1}})$
for pure states $|\psi\rangle_{AB_1\cdots B_{N-1}}$,
since $\mathcal{C}(|\psi\rangle_{A|B_1\cdots B_{N-1}})=\mathcal{C}_a(|\psi\rangle_{A|B_1\cdots B_{N-1}})$.
\section{Tighter constraints related to EoF}
Let $H_A$ and $H_B$ be two Hilbert spaces with dimension $m$ and $n$ $(m\leq n)$, respectively.
Then the entanglement of formation (EoF) \cite{CHB1,CHB2} is defined as follows:
for a pure state $|\psi\rangle_{AB}\in H_A\otimes H_B$, the EoF is given by
\begin{equation}
E(|\psi\rangle_{AB})=\mathcal{S}(\rho_A),
\end{equation}
where $\rho_A={\rm Tr}_B(|\psi\rangle_{AB}\langle\psi|)$ and $\mathcal{S}(\rho)=-{\rm Tr}(\rho\log_2\rho)$.
For a bipartite mixed state $\rho_{AB}\in H_A\otimes H_B$, the EoF is given by
\begin{equation}
E(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iE(|\psi_i\rangle),
\end{equation}
with the minimum taken over all possible pure state decompositions of $\rho_{AB}$.
In Ref. \cite{EoF}, Wootters showed that $E(|\psi\rangle)=f(\mathcal{C}^2(|\psi\rangle))$ for $2\otimes m \ (m\geq2)$ pure states $|\psi\rangle$, and $E(\rho)=f(\mathcal{C}^2(\rho))$ for two-qubit mixed states $\rho$,
where $f(x)=H\big(\frac{1+\sqrt{1-x}}{2}\big)$ and $H(x)=-x\log_2x-(1-x)\log_2(1-x)$.
$f(x)$ is a monotonically increasing function for $0\leq x\leq1$, and satisfies the following relations:
\begin{equation}\label{EoF4}
f^{\sqrt{2}}(x^2+y^2)\geq f^{\sqrt{2}}(x^2)+f^{\sqrt{2}}(y^2),
\end{equation}
where $f^{\sqrt{2}}(x^2+y^2)=[f(x^2+y^2)]^{\sqrt{2}}$.
Although EoF does not satisfy the inequality $E_{AB}+E_{AC}\leq E_{A|BC}$ \cite{CKW},
the authors in \cite{BZYW} showed that EoF is a monotonic function satisfying
$E^2(\rho_{A|B_1B_2\cdots B_{N-1}})\geq \sum_{i=1}^{N-1}E^2(\rho_{AB_i})$.
For $N$-qubit systems, one has \cite{SM.Fei1}
\begin{equation}\label{EoF1}
E^\beta_{A|B_1B_2\cdots B_{N-1}}\geq E^\beta_{AB_1}+E^\beta_{AB_2}+\cdots+E^\beta_{AB_{N-1}},
\end{equation}
for $\beta\geq\sqrt{2}$, where $E_{A|B_1B_2\cdots B_{N-1}}$ is the EoF of $\rho$ under bipartite partition $A|B_1B_2\cdots B_{N-1}$,
and $E_{AB_i}$ is the EoF of the mixed state $\rho_{AB_i}={\rm Tr}_{B_1\cdots B_{i-1},B_{i+1}\cdots B_{N-1}}(\rho)$
for $i=1,2,\ldots,N-1$.
Recently, the authors in Ref. \cite{SM.Fei2} proposed a monogamy relation that is tighter than the inequality \eqref{EoF1},
\begin{equation}\label{EoF2}
\begin{array}{rl}
&E^\beta_{A|B_1B_2\cdots B_{N-1}}\\[1.5mm]
&\ \ \geq E^\beta_{AB_1}+\frac{\beta}{\sqrt{2}}E^\beta_{AB_2}+\cdots+\big(\frac{\beta}{\sqrt{2}}\big)^{m-1}E^\beta_{AB_m}\\[1.5mm]
&\ \ \ \ +\big(\frac{\beta}{\sqrt{2}}\big)^{m+1}\big(E^\beta_{AB_{m+1}}+\cdots +E^\beta_{AB_{N-2}}\big)\\[1.5mm]
&\ \ \ \ +\big(\frac{\beta}{\sqrt{2}}\big)^mE^\beta_{AB_{N-1}},
\end{array}
\end{equation}
if $\mathcal{C}_{AB_i}\geq\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and $\mathcal{C}_{AB_j}\leq\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}$
for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$ for $\beta\geq\sqrt{2}$.
The inequality \eqref{EoF2} is also improved to
\begin{equation}\label{EoF9}
\begin{array}{rl}
&E^\beta_{A|B_1B_2\cdots B_{N-1}}\\[1.5mm]
&\ \ \geq E^\beta_{AB_1}+\Big(2^{\frac{\beta}{\sqrt{2}}}-1\Big)E^\beta_{AB_2}+\cdots+
\Big(2^{\frac{\beta}{\sqrt{2}}}-1\Big)^{m-1}\\[1.5mm]
&\ \ \ \ \times E^\beta_{AB_m}+\Big(2^{\frac{\beta}{\sqrt{2}}}-1\Big)^{m+1}\big(E^\beta_{AB_{m+1}}+\cdots+E^\beta_{AB_{N-2}}\big) \\[1.5mm]
&\ \ \ \ +\Big(2^{\frac{\beta}{\sqrt{2}}}-1\Big)^mE^\beta_{AB_{N-1}},
\end{array}
\end{equation}
under the same conditions as that of inequality \eqref{EoF2}.
In fact, these inequalities can be further improved to even tighter monogamy relations.
\begin{theorem}\label{eof1}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kE^{\sqrt{2}}_{AB_i}\geq E^{\sqrt{2}}_{A|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and
$E^{\sqrt{2}}_{AB_j}\leq kE^{\sqrt{2}}_{A|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$,
the entanglement of formation $E(\rho)$ satisfies
\begin{equation}\label{EoF3}
\begin{array}{rl}
&E^\beta_{A|B_1B_2\cdots B_{N-1}}\\[2.0mm]
&\ \ \geq E^\beta_{AB_1}+\frac{(1+k)^t-1}{k^t}E^\beta_{AB_2}+\cdots+\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m-1}E^\beta_{AB_m}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m+1}(E^\beta_{AB_{m+1}}+\cdots+E^\beta_{AB_{N-2}})\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^mE^\beta_{AB_{N-1}},
\end{array}
\end{equation}
for $\beta\geq\sqrt{2}$, where $t=\frac{\beta}{\sqrt{2}}$.
\end{theorem}
\begin{proof}
For $\beta\geq\sqrt{2}$ and $k f^{\sqrt{2}}(x^2)\geq f^{\sqrt{2}}(y^2)$, we find
\begin{equation}\label{EoF5}
\begin{array}{rl}
&f^{\beta}(x^2+y^2)=[f^{\sqrt{2}}(x^2+y^2)]^t\\[1.5mm]
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \geq[f^{\sqrt{2}}(x^2)+f^{\sqrt{2}}(y^2)]^t\\[1.5mm]
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \geq[f^{\sqrt{2}}(x^2)]^t+\frac{(1+k)^t-1}{k^t}[f^{\sqrt{2}}(y^2)]^t\\[1.5mm]
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =f^\beta(x^2)+\frac{(1+k)^t-1}{k^t}f^\beta(y^2),
\end{array}
\end{equation}
where the first inequality is due to the inequality \eqref{EoF4}, and the second inequality can be obtained from inequality \eqref{Con4}.
Let $\rho=\sum_ip_i|\psi_i\rangle\langle\psi_i|\in H_A\otimes H_{B_1}\otimes\cdots\otimes H_{B_{N-1}}$ be the optimal decomposition achieving
$E_{A|B_1B_2\cdots B_{N-1}}(\rho)$ for the $N$-qubit mixed state $\rho$. Then \cite{SM.Fei3}
\begin{equation}\label{EoF6}
E_{A|B_1B_2\cdots B_{N-1}}\geq f(\mathcal{C}^2_{A|B_1B_2\cdots B_{N-1}}).
\end{equation}
Thus,
\begin{equation}\label{EoF7}
\begin{array}{rl}
&E^\beta_{A|B_1B_2\cdots B_{N-1}}\\[2.0mm]
&\ \ \geq f^\beta(\mathcal{C}^2_{A|B_1B_2\cdots B_{N-1}})\\[2.0mm]
&\ \ \geq f^\beta(\mathcal{C}^2_{A|B_1})+\frac{(1+k)^t-1}{k^t}f^\beta(\mathcal{C}^2_{A|B_2})+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m-1}f^\beta(\mathcal{C}^2_{A|B_{m}})+\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m+1}\\[2.0mm]
&\ \ \ \ \ [f^\beta(\mathcal{C}^2_{AB_{m+1}})+\cdots+f^\beta(\mathcal{C}^2_{AB_{N-2}})]\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^m f^\beta(\mathcal{C}^2_{AB_{N-1}})\\[2.0mm]
&\ \ =E^\beta_{AB_1}+\Big(\frac{(1+k)^t-1}{k^t}\Big)E^\beta_{AB_2}+\cdots+\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m-1}\\[2.0mm]
&\ \ \ \ \times E^{\beta}_{AB_m}+\Big(\frac{(1+k)^t-1}{k^t}\Big)^{m+1}(E^\beta_{AB_{m+1}}+\cdots+E^\beta_{AB_{N-2}})\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^m E^\beta_{AB_{N-1}},
\end{array}
\end{equation}
where the first inequality holds due to \eqref{EoF6}, the second inequality is similar to the proof of Theorem \ref{concurrence1} by using inequality \eqref{EoF5},
and the last equality holds since for any $2\otimes 2$ quantum state $\rho_{AB_i}$, $E(\rho_{AB_i})=f[\mathcal{C}^2(\rho_{AB_i})]$.
\end{proof}
Similar to the case of concurrence, we have also the following tighter monogamy relation for EoF.
\begin{theorem}\label{eof2}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kE^{\sqrt{2}}_{AB_i}\geq E^{\sqrt{2}}_{A|B_{i+1}\cdots B_{N-1}}$ for all $i=1,2,\ldots,N-2$,
we have
\begin{equation}\label{EoF8}
\begin{array}{rl}
&E^\beta_{A|B_1B_2\cdots B_{N-1}}\geq E^\beta_{AB_1}+\frac{(1+k)^t-1}{k^t}E^\beta_{AB_2}+\cdots\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\Big(\frac{(1+k)^t-1}{k^t}\Big)^{N-2}E^\beta_{AB_{N-1}},
\end{array}
\end{equation}
for $\beta\geq\sqrt{2}$ and $t=\frac{\beta}{\sqrt{2}}$.
\end{theorem}
As $\frac{(1+k)^t-1}{k^t}\geq2^t-1$ for $t\geq1$ and $0< k\leq 1$,
our new monogamy relations \eqref{EoF3} and \eqref{EoF8} are tighter than the ones given in \cite{SM.Fei1,SM.Fei2,SM.Fei3}.
Also, for $0< k\leq 1$ and $\beta\geq\sqrt{2}$, the smaller $k$ is, the tighter the inequalities \eqref{EoF3} and \eqref{EoF8} are.
\noindent{\bf Example 2} \, \ Let us again consider the three-qubit state $|\psi\rangle_{ABC}$ defined in \eqref{Con6}
with $\lambda_0=\lambda_3=\frac{1}{2},\ \lambda_2=\frac{\sqrt{2}}{2}$ and $\lambda_1=\lambda_4=0$.
Then $E_{A|BC}=2-\frac{3}{4}\log_{2}3\approx0.811278$,
$E_{AB}=-\frac{2+\sqrt{2}}{4}\log_2\frac{2+\sqrt{2}}{4}-\frac{2-\sqrt{2}}{4}\log_2\frac{2-\sqrt{2}}{4}\approx0.600876$
and $E_{AC}=-\frac{2+\sqrt{3}}{4}\log_2\frac{2+\sqrt{3}}{4}-\frac{2-\sqrt{3}}{4}\log_2\frac{2-\sqrt{3}}{4}\approx0.354579$.
Thus, $E^\beta_{AB}+\big(2^{\frac{\beta}{\sqrt{2}}}-1\big)E^\beta_{AC}=(0.600876)^\beta+\big(2^{\frac{\beta}{\sqrt{2}}}-1\big)0.354579^\beta$,
$E^\beta_{AB}+\frac{1.5^{\frac{\beta}{\sqrt{2}}}-1}{0.5^{\frac{\beta}{\sqrt{2}}}}E^\beta_{AC}=
(0.600876)^\beta+\frac{1.5^{\frac{\beta}{\sqrt{2}}}-1}{0.5^{\frac{\beta}{\sqrt{2}}}}0.354579^\beta$,
$E^\beta_{AB}+\frac{1.7^{\frac{\beta}{\sqrt{2}}}-1}{0.7^{\frac{\beta}{\sqrt{2}}}}E^\beta_{AC}=
(0.600876)^\beta+\frac{1.7^{\frac{\beta}{\sqrt{2}}}-1}{0.7^{\frac{\beta}{\sqrt{2}}}}0.354579^\beta$
and $E^\beta_{AB}+\frac{1.9^{\frac{\beta}{\sqrt{2}}}-1}{0.9^{\frac{\beta}{\sqrt{2}}}}E^\beta_{AC}=
(0.600876)^\beta+\frac{1.9^{\frac{\beta}{\sqrt{2}}}-1}{0.9^{\frac{\beta}{\sqrt{2}}}}0.354579^\beta$.
One can see that our result is better than the one in \cite{SM.Fei3} for $\beta\geq\sqrt{2}$, hence better than the ones in \cite{SM.Fei1,SM.Fei2}, see Fig. 2.
\begin{figure}
\caption{The $y$ axis is the lower bound of the EoF $E^\beta_{A|BC}$ in Example 2, plotted as a function of $\beta$.}
\end{figure}
We can also provide tighter polygamy relations for the entanglement of assistance.
The entanglement of assistance (EoA) of $\rho_{AB}$ is defined as \cite{cohen},
\begin{equation}
E^a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_ip_iE(|\psi_i\rangle),
\end{equation}
with the maximum taken over all possible pure state decompositions of $\rho_{AB}$.
For a multipartite quantum state $\rho_{AB_1B_2\cdots B_{N-1}}$ of arbitrary dimension,
a general polygamy inequality of multipartite quantum entanglement was established in \cite{JSK5},
\begin{equation}
E^a(\rho_{A|B_1B_2\cdots B_{N-1}})\leq\sum\limits_{i=1}^{N-1}E^a(\rho_{A|B_i}).
\end{equation}
Using the same approach as for concurrence, we have the following Theorems.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kE_{AB_i}^a\geq E_{A|B_{i+1}\cdots B_{N-1}}^a$ for $i=1,2,\ldots,m$, and
$E_{AB_j}^a\leq kE_{A|B_{j+1}\cdots B_{N-1}}^a$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$, we have
\begin{equation}
\begin{array}{rl}
&(E^a_{A|B_1B_2\cdots B_{N-1}})^\beta\\[2.0mm]
&\ \ \leq (E^a_{AB_1})^\beta+\frac{(1+k)^\beta-1}{k^\beta}(E^a_{AB_2})^\beta+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m-1}(E^a_{AB_m})^\beta\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m+1}[(E^a_{AB_{m+1}})^\beta+\cdots+(E^a_{AB_{N-2}})^\beta]\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^m(E^a_{AB_{N-1}})^\beta,
\end{array}
\end{equation}
for $0\leq\beta\leq1$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kE_{AB_i}^a\geq E_{A|B_{i+1}\cdots B_{N-1}}^a$ for all $i=1,2,\ldots,N-2$,
we have
\begin{equation}
\begin{array}{rl}
&(E^a_{A|B_1B_2\cdots B_{N-1}})^\beta\leq (E^a_{AB_1})^\beta+\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)(E^a_{AB_2})^\beta\\[1.5mm]
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \cdots\\[1.5mm]
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{N-2}(E^a_{AB_{N-1}})^\beta,
\end{array}
\end{equation}
for $0\leq\beta\leq1$.
\end{theorem}
\section{Tighter constraints related to negativity}
The negativity, a well-known quantifier of bipartite entanglement, is defined as
$\mathcal{N}(\rho_{AB})=\big(\|\rho_{AB}^{T_A}\|-1\big)/2$ \cite{Vidal}, where $\rho_{AB}^{T_A}$ is the partial transposed matrix of $\rho_{AB}$ with respect to the subsystem $A$,
and $\|X\|$ denotes the trace norm of $X$, i.e., $\|X\|={\rm tr}\sqrt{XX^{\dag}}$.
For convenience, we use the definition of negativity as $\|\rho_{AB}^{T_A}\|-1$.
In particular, for any bipartite pure state $|\psi\rangle_{AB}$,
$\mathcal{N}(|\psi\rangle_{AB})=2\sum\limits_{i<j}\sqrt{\lambda_i\lambda_j}=({\rm tr}\sqrt{\rho_A})^2-1$,
where the $\lambda_i$'s are the eigenvalues of the reduced density matrix $\rho_A={\rm tr}_B|\psi\rangle_{AB}\langle\psi|$.
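As a concrete numerical illustration of this definition (with the trace-norm convention just adopted), the following minimal Python sketch evaluates the negativity of an assumed two-qubit Bell state via the partial transpose and compares it with $({\rm tr}\sqrt{\rho_A})^2-1$:
\begin{verbatim}
import numpy as np

def negativity(rho, dA, dB):
    # ||rho^{T_A}||_1 - 1, with the convention adopted above
    r = rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA*dB, dA*dB)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1.0

# two-qubit Bell state (|00> + |11>)/sqrt(2): both expressions give 1
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over B
print(negativity(rho, 2, 2))
print(np.sqrt(np.linalg.eigvalsh(rho_A)).sum()**2 - 1.0)
\end{verbatim}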
The convex-roof extended negativity (CREN) of a mixed state $\rho_{AB}$ is defined by
\begin{equation}
\mathcal{N}_c(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum_{i}p_i\mathcal{N}(|\psi_i\rangle),
\end{equation}
where the minimum is taken over all possible pure state decompositions of $\rho_{AB}$.
Thus $\mathcal{N}_c(\rho_{AB})=\mathcal{C}(\rho_{AB})$ for any two-qubit mixed state $\rho_{AB}$.
The dual to the CREN of a mixed state $\rho_{AB}$ is defined as
\begin{equation}
\mathcal{N}^a_c(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum_{i}p_i\mathcal{N}(|\psi_i\rangle),
\end{equation}
with the maximum taken over all possible pure state decompositions of $\rho_{AB}$.
Furthermore, $\mathcal{N}^a_c(\rho_{AB})=\mathcal{C}^a(\rho_{AB})$ for any two-qubit mixed state $\rho_{AB}$ \cite{JSK8}.
Similar to the concurrence and EoF, we have the following Theorems.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $k\mathcal{N}_{cAB_i}^2\geq\mathcal{N}_{cA|B_{i+1}\cdots B_{N-1}}^2$
for $i=1,2,\ldots,m$, and $\mathcal{N}_{cAB_j}^2\leq k\mathcal{N}_{cA|B_{j+1}\cdots B_{N-1}}^2$ for $j=m+1,\ldots,N-2$,
$\forall 1\leq m\leq N-3$, $N\geq 4$, then we have
\begin{equation}\label{negativity1}
\begin{array}{rl}
&\mathcal{N}^\beta_{cA|B_1\cdots B_{N-1}}\\[2.0mm]
&\ \ \geq\mathcal{N}^\beta_{cAB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\mathcal{N}^\beta_{cAB_2}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m-1}\mathcal{N}^\beta_{cAB_{m}}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m+1}\big(\mathcal{N}^\beta_{cAB_{m+1}}+
\cdots+\mathcal{N}^\beta_{cAB_{N-2}}\big)\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^m\mathcal{N}^\beta_{cAB_{N-1}}
\end{array}
\end{equation}
for all $\beta\geq2$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $k\mathcal{N}_{cAB_i}^2\geq\mathcal{N}_{cA|B_{i+1}\cdots B_{N-1}}^2$ for all $i=1,2,\ldots,N-2$, then
\begin{equation}\label{negativity2}
\begin{array}{rcl}
\mathcal{N}^\beta_{cA|B_1\cdots B_{N-1}}
&\geq&\mathcal{N}^\beta_{cAB_1}+\frac{(1+k)^{\frac{\beta}{2}}-1}
{k^{\frac{\beta}{2}}}\mathcal{N}^\beta_{cAB_2}+\cdots\\[3.0mm]
&&+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{N-2}\mathcal{N}^\beta_{cAB_{N-1}}
\end{array}
\end{equation}
for $\beta\geq2$.
\end{theorem}
\noindent{\bf Example 3} \, \ Consider the state in Example 1 with
$\lambda_0=\lambda_3=\frac{1}{2},\ \lambda_2=\frac{\sqrt{2}}{2}$ and $\lambda_1=\lambda_4=0$.
We have $\mathcal{N}_{cA|BC}=\frac{\sqrt{3}}{2}$, $\mathcal{N}_{cAB}=\frac{\sqrt{2}}{2}$ and $\mathcal{N}_{cAC}=\frac{1}{2}$.
Then $\mathcal{N}_{cAB}^\beta+\big(2^{\frac{\beta}{2}}-1\big)\mathcal{N}_{cAC}^\beta=\big(\frac{\sqrt{2}}{2}\big)^\beta+
\big(2^{\frac{\beta}{2}}-1\big)\big(\frac{1}{2}\big)^\beta$ and
$\mathcal{N}_{cAB}^\beta+\frac{(1+k)^\frac{\beta}{2}-1}{k^{\frac{\beta}{2}}}\mathcal{N}_{cAC}^\beta=\big(\frac{\sqrt{2}}{2}\big)^\beta+
\frac{(1+k)^\frac{\beta}{2}-1}{k^{\frac{\beta}{2}}}\big(\frac{1}{2}\big)^\beta$.
One can see that our result is better than the one in \cite{SM.Fei3} for $\beta\geq2$, thus also better than the ones in \cite{SM.Fei1,SM.Fei2}, see Fig. 3.
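The comparison of the two lower bounds can be reproduced directly from the quoted values; the following minimal Python sketch uses an illustrative $k=0.8$:
\begin{verbatim}
import numpy as np

N_AB, N_AC = np.sqrt(2.0) / 2.0, 0.5         # values quoted above
beta = np.linspace(2.0, 6.0, 9)
k = 0.8                                       # any 0 < k <= 1
bound_old = N_AB**beta + (2.0**(beta/2.0) - 1.0) * N_AC**beta
bound_new = N_AB**beta + ((1.0 + k)**(beta/2.0) - 1.0) / k**(beta/2.0) * N_AC**beta
print(np.all(bound_new >= bound_old))         # True for beta >= 2
\end{verbatim}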
\begin{figure}
\caption{The $y$ axis is the lower bound of the negativity $\mathcal{N}^\beta_{cA|BC}$.}
\end{figure}
For the negativity of assistance $\mathcal{N}_c^a$, we have the following results.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an $N$-qubit pure state $|\psi\rangle_{A|B_1\cdots B_{N-1}}$,
if $k(\mathcal{N}^a_{cAB_i})^2\geq(\mathcal{N}^a_{cA|B_{i+1}\cdots B_{N-1}})^2$
for $i=1,2,\ldots,m$, and $(\mathcal{N}^a_{cAB_j})^2\leq k(\mathcal{N}^a_{cA|B_{j+1}\cdots B_{N-1}})^2$ for $j=m+1,\ldots,N-2$,
$\forall 1\leq m\leq N-3$, $N\geq 4$, then we have
\begin{equation}
\begin{array}{rl}
&[\mathcal{N}_c^a(|\psi\rangle_{A|B_1\cdots B_{N-1}})]^{\beta}\\[1.5mm]
&\ \ \leq(\mathcal{N}^a_{cAB_1})^{\beta}+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)(\mathcal{N}^a_{cAB_2})^\beta+\cdots\\[1.5mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m-1}(\mathcal{N}^a_{cAB_{m}})^\beta\\[1.5mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{m+1}[(\mathcal{N}^a_{cAB_{m+1}})^\beta+\cdots\\[1.5mm]
&\ \ \ \ +(\mathcal{N}^a_{cAB_{N-2}})^\beta]\\[1.5mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^m(\mathcal{N}^a_{c AB_{N-1}})^\beta
\end{array}
\end{equation}
for all $0\leq\beta\leq2$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit pure state $|\psi\rangle_{AB_1\cdots B_{N-1}}$,
if $k(\mathcal{N}^a_{cAB_i})^2\geq(\mathcal{N}^a_{cA|B_{i+1}\cdots B_{N-1}})^2$
for all $i=1,2,\ldots,N-2$, then
\begin{equation}
\begin{array}{rl}
&[\mathcal{N}^a_c(|\psi\rangle_{A|B_1\cdots B_{N-1}})]^\beta\\[2.0mm]
&\ \ \leq(\mathcal{N}^a_{cAB_1})^\beta+\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)(\mathcal{N}^a_{cAB_2})^\beta+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\frac{\beta}{2}}-1}{k^{\frac{\beta}{2}}}\Big)^{N-2}(\mathcal{N}^a_{cAB_{N-1}})^\beta
\end{array}
\end{equation}
for $0\leq\beta\leq2$.
\end{theorem}
\section{Tighter monogamy relations for Tsallis-$q$ entanglement and R\'enyi-$\alpha$ entanglement}
In this section, we study the Tsallis-$q$ entanglement and R\'enyi-$\alpha$ entanglement,
and establish the corresponding monogamy and polygamy relations for the two entanglement measures, respectively.
\subsection{Tighter monogamy and polygamy relations for Tsallis-$q$ entanglement}
The Tsallis-$q$ entanglement of a bipartite pure state $|\psi\rangle_{AB}$ is defined as \cite{JSK3}
\begin{equation}
T_q(|\psi\rangle_{AB})=S_q(\rho_A)=\frac{1}{q-1}(1-{\rm tr}\rho_A^q),
\end{equation}
where $q>0$ and $q\neq1$.
In the limit $q\to1$, $T_q(\rho)$ reduces to the von Neumann entropy, $\lim\limits_{q\rightarrow 1}T_q(\rho)=-{\rm tr}\rho\log_2\rho=S(\rho)$.
The Tsallis-$q$ entanglement of a bipartite mixed state $\rho_{AB}$ is given by
$T_q(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iT_q(|\psi_i\rangle)$
with the minimum taken over all possible pure state decompositions of $\rho_{AB}$.
For $\frac{5-\sqrt{13}}{2}\leq q\leq\frac{5+\sqrt{13}}{2}$, Yuan {\it et al.} proposed an analytic relationship
between the Tsallis-$q$ entanglement and concurrence,
\begin{equation}\label{Tq1}
T_q(|\psi\rangle_{AB})=g_q(\mathcal{C}^2(|\psi\rangle_{AB})),
\end{equation}
where
\begin{equation}\label{Tq2}
g_q(x)=\frac{1}{q-1}\Big[1-\Big(\frac{1+\sqrt{1-x}}{2}\Big)^q-\Big(\frac{1-\sqrt{1-x}}{2}\Big)^q\Big]
\end{equation}
with $0\leq x\leq1$ \cite{Yuan}.
It has also been proved that $T_q(|\psi\rangle)=g_q(\mathcal{C}^2(|\psi\rangle))$ if $|\psi\rangle$ is a $2\otimes m$ pure state,
and $T_q(\rho)=g_q(\mathcal{C}^2(\rho))$ if $\rho$ is a two-qubit mixed state.
Hence, \eqref{Tq1} holds for any $q$ such that $g_q(x)$ in \eqref{Tq2} is monotonically increasing and convex.
Particularly, one has that
\begin{equation}
g_q(x^2+y^2)\geq g_q(x^2)+g_q(y^2)
\end{equation}
for $2\leq q\leq 3$.
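This superadditivity of $g_q$ can be spot-checked numerically from \eqref{Tq2}; a minimal Python sketch sampling random $q\in[2,3]$ and $x^2+y^2\leq1$ is:
\begin{verbatim}
import numpy as np

def g(q, x):
    # g_q(x) of Eq. (Tq2), defined for 0 <= x <= 1
    s = np.sqrt(1.0 - x)
    return (1.0 - ((1.0 + s)/2.0)**q - ((1.0 - s)/2.0)**q) / (q - 1.0)

rng = np.random.default_rng(0)
gaps = []
for _ in range(2000):
    q = rng.uniform(2.0, 3.0)
    x2, y2 = rng.dirichlet([1.0, 1.0, 1.0])[:2]   # x^2 + y^2 <= 1
    gaps.append(g(q, x2 + y2) - g(q, x2) - g(q, y2))
print(min(gaps) >= -1e-12)                         # True
\end{verbatim}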
In Ref. \cite{JSK3}, Kim provided a monogamy relation for the Tsallis-$q$ entanglement,
\begin{equation}\label{Tq4}
T_{qA|B_1B_2\cdots B_{N-1}}\geq\sum\limits_{i=1}^{N-1}T_{qA|B_i},
\end{equation}
where $2\leq q\leq 3$.
Later, this relation was improved as follows:
if $\mathcal{C}_{AB_i}\geq\mathcal{C}_{A|B_{i+1}\cdots B_{N-1}}$
for $i=1,2,\ldots,m$, and $\mathcal{C}_{AB_j}\leq\mathcal{C}_{A|B_{j+1}\cdots B_{N-1}}$
for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$, then
\begin{equation}\label{Tq3}
\begin{array}{rl}
&T^\beta_{qA|B_1B_2\cdots B_{N-1}}\\
&\ \ \geq T^\beta_{qA|B_1}+(2^\beta-1)T^\beta_{qA|B_2}+\cdots+(2^\beta-1)^{m-1}T^\beta_{qA|B_m}\\
&\ \ \ \ +(2^\beta-1)^{m+1}(T^\beta_{qA|B_{m+1}}+\cdots+T^\beta_{qA|B_{N-2}})\\
&\ \ \ \ +(2^\beta-1)^mT^\beta_{qA|B_{N-1}},
\end{array}
\end{equation}
where $\beta\geq1$, $T_{qA|B_1B_2\cdots B_{N-1}}$ quantifies the Tsallis-$q$ entanglement under the partition $A|B_1B_2\cdots B_{N-1}$,
and $T_{qA|B_i}$ quantifies that of the two-qubit subsystem $AB_i$, with $2\leq q\leq3$.
Moreover, for $\frac{5-\sqrt{13}}{2}\leq q\leq\frac{5+\sqrt{13}}{2}$, one has
\begin{equation}
T^2_{qA|B_1B_2\cdots B_{N-1}}\geq\sum\limits_{i=1}^{N-1}T^2_{qA|B_i}.
\end{equation}
We now provide monogamy relations which are tighter than \eqref{Tq4} and \eqref{Tq3}.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an arbitrary $N$-qubit mixed state $\rho_{AB_1B_2\cdots B_{N-1}}$, if
$kT_{qAB_i}\geq T_{qA|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and
$T_{qAB_j}\leq kT_{qA|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$,
then we have
\begin{equation}\label{Tq5}
\begin{array}{rl}
&T^\beta_{qA|B_1B_2\cdots B_{N-1}}\\[1.5mm]
&\ \ \geq T^\beta_{qA|B_1}+\frac{(1+k)^\beta-1}{k^\beta}T^\beta_{qA|B_2}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m-1}T^\beta_{qA|B_m}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m+1}(T^\beta_{qA|B_{m+1}}+\cdots+T^\beta_{qA|B_{N-2}})\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^mT^\beta_{qA|B_{N-1}},
\end{array}
\end{equation}
for $\beta\geq1$ and $2\leq q\leq3$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if all $kT_{qAB_i}\geq T_{qA|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,N-2$, then we have
\begin{equation}\label{Tq6}
\begin{array}{rcl}
T_{qA|B_1\cdots B_{N-1}}^\beta
&\geq& T_{qAB_1}^\beta+\frac{(1+k)^{\beta}-1}{k^{\beta}}T_{qAB_2}^\beta+\cdots\\[3.0mm]
&&+\Big(\frac{(1+k)^{\beta}-1}{k^{\beta}}\Big)^{N-2}T_{qAB_{N-1}}^\beta,
\end{array}
\end{equation}
for $\beta\geq1$ and $2\leq q\leq3$.
\end{theorem}
\noindent{\bf Example 4} \, \ Consider the quantum state given in Example 1 with
$\lambda_0=\lambda_3=\frac{1}{2},\ \lambda_2=\frac{\sqrt{2}}{2}$ and $\lambda_1=\lambda_4=0$.
For $q=2$, one has $T_{2A|BC}=\frac{3}{8}$, $T_{2AB}=\frac{1}{4}$ and $T_{2AC}=\frac{1}{8}$.
Then $T_{2AB}^\beta+\big(2^{\beta}-1\big)T_{2AC}^\beta=(\frac{1}{4})^\beta+\big(2^{\beta}-1\big)(\frac{1}{8})^\beta$ and
$T_{2AB}^{\beta}+\frac{(1+k)^\beta-1}{k^{\beta}}T_{2AC}^{\beta}=\big(\frac{1}{4}\big)^{\beta}+
\frac{(1+k)^\beta-1}{k^{\beta}}\big(\frac{1}{8}\big)^\beta$.
It can be seen that our result is better than the one in \cite{SM.Fei3} for $\beta\geq1$, and also better than the ones given in \cite{SM.Fei1,SM.Fei2}, see Fig. 4.
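Since $g_2(x)=x/2$, these values follow from the squared concurrences $\frac{3}{4}$, $\frac{1}{2}$ and $\frac{1}{4}$ of this state (cf. Example 3, where $\mathcal{N}_c$ coincides with $\mathcal{C}$); a minimal numerical check from the definition of $g_q$ is:
\begin{verbatim}
import numpy as np

def g(q, x):
    # g_q(x) of Eq. (Tq2); for q = 2 it reduces to x/2
    s = np.sqrt(1.0 - x)
    return (1.0 - ((1.0 + s)/2.0)**q - ((1.0 - s)/2.0)**q) / (q - 1.0)

# squared concurrences C^2_{A|BC}, C^2_{AB}, C^2_{AC} of the Example 1 state
for C2 in (3.0/4.0, 1.0/2.0, 1.0/4.0):
    print(g(2.0, C2))                 # -> 0.375, 0.25, 0.125
\end{verbatim}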
\begin{figure}
\caption{The $y$ axis is the lower bound of the Tsallis-$q$ entanglement $T_q^\beta(|\psi\rangle_{A|BC})$.}
\end{figure}
As a dual quantity to Tsallis-$q$ entanglement, the Tsallis-$q$ entanglement of assistance (TEoA) is defined by \cite{JSK3},
$T_q^a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_ip_iT_q(|\psi_i\rangle)$,
where the maximum is taken over all possible pure state decompositions of $\rho_{AB}$.
If $1\leq q\leq 2$ or $3\leq q\leq4$, the function $g_q$ defined in \eqref{Tq2} satisfies
\begin{equation}
g_q(\sqrt{x^2+y^2})\leq g_q(x)+g_q(y),
\end{equation}
which leads to the Tsallis polygamy inequality
\begin{equation}\label{Tq7}
T^a_{qA|B_1B_2\cdots B_{N-1}}\leq\sum\limits_{i=1}^{N-1}T^a_{qA|B_i}
\end{equation}
for any multi-qubit state $\rho_{A|B_1B_2\cdots B_{N-1}}$ \cite{JSK9}.
Here we provide tighter polygamy relations related to Tsallis-$q$ entanglement.
We have the following results.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kT^a_{qAB_i}\geq T^a_{qA|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and
$T^a_{qAB_j}\leq kT^a_{qA|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$, then
\begin{equation}\label{Tq8}
\begin{array}{rl}
&(T^a_{qA|B_1B_2\cdots B_{N-1}})^\beta\\[2.0mm]
&\ \ \leq (T^a_{qAB_1})^\beta+\frac{(1+k)^\beta-1}{k^\beta}(T^a_{qAB_2})^\beta+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m-1}(T^a_{qAB_m})^\beta\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m+1}[(T^a_{qAB_{m+1}})^\beta+\cdots+(T^a_{qAB_{N-2}})^\beta]\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^m(T^a_{qAB_{N-1}})^\beta,
\end{array}
\end{equation}
for $0\leq\beta\leq1$ with $1\leq q\leq2$ or $3\leq q\leq4$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For any $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$,
if $kT^a_{qAB_i}\geq T^a_{qA|B_{i+1}\cdots B_{N-1}}$ for all $i=1,2,\ldots,N-2$,
we have
\begin{equation}\label{Tq9}
\begin{array}{rl}
&(T^a_{qA|B_1B_2\cdots B_{N-1}})^\beta\leq (T^a_{qAB_1})^\beta+\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)(T^a_{qAB_2})^\beta+\cdots\\[2.0mm]
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{N-2}(T^a_{qAB_{N-1}})^\beta,
\end{array}
\end{equation}
for $0\leq\beta\leq1$ with $1\leq q\leq2$ or $3\leq q\leq4$.
\end{theorem}
\subsection{Tighter monogamy and polygamy relations for R\'enyi-$\alpha$ entanglement}
For a bipartite pure state $|\psi\rangle_{AB}$, the R\'enyi-$\alpha$ entanglement is defined as \cite{vidal} $E_\alpha(|\psi\rangle_{AB})=S_\alpha(\rho_A)$,
where $S_\alpha(\rho)=\frac{1}{1-\alpha}\log_2{\rm tr}\rho^\alpha$ for any $\alpha>0$
and $\alpha\neq1$, and $\lim\limits_{\alpha\rightarrow 1}S_\alpha(\rho)=S(\rho)=-{\rm tr}\rho\log_2\rho$.
For a bipartite mixed state $\rho_{AB}$, the R\'enyi-$\alpha$ entanglement is given by
\begin{equation}
E_\alpha(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iE_\alpha(|\psi_i\rangle),
\end{equation}
where the minimum is taken over all possible pure-state decompositions of $\rho_{AB}$.
For each $\alpha>0$, one has $E_\alpha(\rho_{AB})=f_\alpha(\mathcal{C}(\rho_{AB}))$,
where $f_\alpha(x)=\frac{1}{1-\alpha}\log_2\big[\big(\frac{1-\sqrt{1-x^2}}{2}\big)^\alpha+\big(\frac{1+\sqrt{1-x^2}}{2}\big)^\alpha\big]$
is a monotonically increasing and convex function \cite{JSK2}.
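A minimal numerical sketch of $f_\alpha$ (assuming the form above) is given below; for instance, at $\alpha=2$ it returns one ebit for a maximally entangled pair and $\log_2\frac{4}{3}\approx0.415$ at $\mathcal{C}=\frac{\sqrt{2}}{2}$, consistent with Example 5 below:
\begin{verbatim}
import numpy as np

def f_alpha(alpha, c):
    # Renyi-alpha entanglement as a function of the concurrence c
    s = np.sqrt(1.0 - c**2)
    return np.log2(((1.0 - s)/2.0)**alpha + ((1.0 + s)/2.0)**alpha) / (1.0 - alpha)

print(f_alpha(2.0, 1.0))                  # maximally entangled pair -> 1.0
print(f_alpha(2.0, np.sqrt(2.0) / 2.0))   # -> log2(4/3) ~ 0.415
\end{verbatim}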
For $\alpha\geq2$ and any $N$-qubit state $\rho_{A|B_1B_2\cdots B_{N-1}}$, one has \cite{JSK3}
\begin{equation}\label{entropy1}
\begin{array}{rl}
&E_{\alpha A|B_1B_2\cdots B_{N-1}}\\[2mm]
&\ \ \geq E_{\alpha A|B_1}+E_{\alpha A|B_2}+\cdots+E_{\alpha A|B_{N-1}}.
\end{array}
\end{equation}
We propose the following two monogamy relations for the R\'enyi-$\alpha$ entanglement, which are tighter than the previous results.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an arbitrary $N$-qubit mixed state $\rho_{AB_1B_2\cdots B_{N-1}}$, if
$kE_{\alpha AB_i}\geq E_{\alpha A|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and
$E_{\alpha AB_j}\leq kE_{\alpha A|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$,
then
\begin{equation}
\begin{array}{rl}
&(E_{\alpha A|B_1B_2\cdots B_{N-1}})^{\beta}\\[2.0mm]
&\ \ \geq (E_{\alpha A|B_1})^{\beta}+\frac{(1+k)^\beta-1}{k^\beta}(E_{\alpha A|B_2})^{\beta}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m-1}(E_{\alpha A|B_m})^{\beta}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m+1}[(E_{\alpha A|B_{m+1}})^{\beta}+\cdots+(E_{\alpha A|B_{N-2}})^{\beta}]\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^m(E_{\alpha A|B_{N-1}})^{\beta},
\end{array}
\end{equation}
for $\beta\geq1$ and $\alpha\geq2$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an arbitrary $N$-qubit mixed state $\rho_{AB_1B_2\cdots B_{N-1}}$,
if $kE_{\alpha AB_i}\geq E_{\alpha A|B_{i+1}\cdots B_{N-1}}$ for all $i=1,2,\ldots,N-2$, then
\begin{equation}
\begin{array}{rl}
&(E_{\alpha A|B_1\cdots B_{N-1}})^{\beta}\\[2.0mm]
&\ \ \geq (E_{\alpha AB_1})^{\beta}+\Big(\frac{(1+k)^{\beta}-1}{k^{\beta}}\Big)(E_{\alpha AB_2})^{\beta}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\beta}-1}{k^{\beta}}\Big)^{N-2}(E_{\alpha AB_{N-1}})^{\beta}
\end{array}
\end{equation}
for $\beta\geq1$ and $\alpha\geq2$.
\end{theorem}
\noindent{\bf Example 5} \, \ Consider again the state given in Example 1 with
$\lambda_0=\lambda_3=\frac{1}{2},\ \lambda_2=\frac{\sqrt{2}}{2}$ and $\lambda_1=\lambda_4=0$.
For $\alpha=2$, we find $E_{2A|BC}=\log_2\frac{8}{5}\approx0.678072$, $E_{2AB}=\log_2\frac{4}{3}\approx0.415037$
and $E_{2AC}=\log_2\frac{8}{7}\approx0.192645$.
Then $E_{2AB}^\beta+E_{2AC}^\beta=0.415037^\beta+0.192645^\beta$ and
$E_{2AB}^{\beta}+\frac{(1+k)^\beta-1}{k^{\beta}}E_{2AC}^{\beta}=0.415037^{\beta}+\frac{(1+k)^\beta-1}{k^{\beta}}0.192645^\beta$.
One can see that our result is better than the result in \cite{JSK3}, and the smaller $k$ is, the tighter the relation becomes, see Fig. 5.
\begin{figure}
\caption{The $y$ axis is the lower bound of the R\'enyi entropy entanglement $E^\beta_{2}(|\psi\rangle_{A|BC})$.}
\end{figure}
The R\'enyi-$\alpha$ entanglement of assistance (REoA),
a dual quantity to R\'enyi-$\alpha$ entanglement, is defined as
$E_\alpha^a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iE_\alpha(|\psi_i\rangle)$,
where the maximum is taken over all possible pure state decompositions of $\rho_{AB}$.
For $\alpha\in[\frac{\sqrt{7}-1}{2},\frac{\sqrt{13}-1}{2}]$ and any $N$-qubit state $\rho_{AB_1B_2\cdots B_{N-1}}$,
a polygamy relation of multi-partite quantum entanglement in terms of REoA has been given by \cite{Song}:
\begin{equation}
\begin{array}{rl}
&E^a_{\alpha A|B_1B_2\cdots B_{N-1}}\\[2mm]
&\ \ \leq E^a_{\alpha A|B_1}+E^a_{\alpha A|B_2}+\cdots +E^a_{\alpha A|B_{N-1}}.
\end{array}
\end{equation}
We improve this inequality to a tighter one under some natural conditions.
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an arbitrary $N$-qubit mixed state $\rho_{AB_1B_2\cdots B_{N-1}}$, if
$kE^a_{\alpha AB_i}\geq E^a_{\alpha A|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\ldots,m$, and
$E^a_{\alpha AB_j}\leq kE^a_{\alpha A|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\ldots,N-2$, $\forall 1\leq m\leq N-3$, $N\geq4$,
then
\begin{equation}
\begin{array}{rl}
&(E^a_{\alpha A|B_1B_2\cdots B_{N-1}})^{\beta}\\[2.0mm]
&\ \ \leq (E^a_{\alpha A|B_1})^{\beta}+\frac{(1+k)^\beta-1}{k^\beta}(E^a_{\alpha A|B_2})^{\beta}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m-1}(E^a_{\alpha A|B_m})^{\beta}\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^{m+1}\Big[(E^a_{\alpha A|B_{m+1}})^{\beta}+\cdots+(E^a_{\alpha A|B_{N-2}})^{\beta}\Big]\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^\beta-1}{k^\beta}\Big)^m(E^a_{\alpha A|B_{N-1}})^{\beta},
\end{array}
\end{equation}
for $0\leq\beta\leq1$ with $\frac{\sqrt{7}-1}{2}\leq\alpha\leq\frac{\sqrt{13}-1}{2}$.
\end{theorem}
\begin{theorem}
Suppose $k$ is a real number satisfying $0<k\leq 1$.
For an arbitrary $N$-qubit mixed state $\rho_{AB_1B_2\cdots B_{N-1}}$,
if $kE^a_{\alpha AB_i}\geq E^a_{\alpha A|B_{i+1}\cdots B_{N-1}}$ for all $i=1,2,\ldots,N-2$, then
\begin{equation}
\begin{array}{rl}
&(E^a_{\alpha A|B_1\cdots B_{N-1}})^{\beta}\\[2.0mm]
&\ \ \leq (E^a_{\alpha AB_1})^{\beta}+\Big(\frac{(1+k)^{\beta}-1}{k^{\beta}}\Big)(E^a_{\alpha AB_2})^{\beta}+\cdots\\[2.0mm]
&\ \ \ \ +\Big(\frac{(1+k)^{\beta}-1}{k^{\beta}}\Big)^{N-2}(E^a_{\alpha AB_{N-1}})^{\beta}
\end{array}
\end{equation}
for $0\leq\beta\leq1$, with $\frac{\sqrt{7}-1}{2}\leq\alpha\leq\frac{\sqrt{13}-1}{2}$.
\end{theorem}
\section{Conclusion}
Both entanglement monogamy and polygamy are fundamental properties of multipartite entangled states.
We have presented monogamy relations related to the $\beta$th power of concurrence, entanglement of formation,
negativity, Tsallis-$q$ and R\'enyi-$\alpha$ entanglement.
We have also provided polygamy relations related to these entanglement measures.
All the relations presented in this paper are tighter than the previous results.
These tighter monogamy and polygamy inequalities provide finer characterizations of the entanglement distribution
among multiqubit systems.
Our results may serve as a reference for future work on multipartite quantum entanglement,
and our approach is also useful for further studies of the monogamy and polygamy properties related to measures of other quantum correlations and quantum coherence \cite{framework,coherence}.
\end{document} |
\begin{document}
\title{Thermal Noise on Adiabatic Quantum Computation}
\date{\today}
\author{Man-Hong Yung}
\email[email: ]{[email protected]}
\affiliation{Department of Physics, University of Illinois at
Urbana-Champaign, Urbana IL 61801-3080, USA}
\pacs{03.65.Yz, 03.67.Lx}
\begin{abstract}
The success of adiabatic quantum computation (AQC) depends crucially on the ability to maintain the quantum computer in the ground state of the evolution Hamiltonian. The computation process has to be sufficiently \textit{slow} as restricted by the minimal energy gap. However, at finite temperatures, it might need to be \textit{fast} enough to avoid thermal excitations. The question is, how fast does it need to be? The structure of evolution Hamiltonians for AQC is generally too complicated for answering this question. Here we model an adiabatic quantum computer as a (parametrically driven) harmonic oscillator. The advantages of this model are (1) it offers high flexibility for quantitative analysis on the thermal effect, (2) the results qualitatively agree with previous numerical calculation, and (3) it could be experimentally verified with quantum electronic circuits.
\end{abstract}
\maketitle
Adiabatic quantum computation (AQC) offers an alternative route for achieving computational goals \cite{Farhi00}, compared with the ``standard model" of quantum computation based on the gate model. The basic idea of AQC is very simple: to maintain the system (computer) to stay at the (assumed unique) ground state with respect to a time-dependent Hamiltonian. For an isolated quantum system, this is in principle guaranteed by the quantum adiabatic theorem for sufficiently \textit{slow} evolution, depending on the energy gap between the (instantaneous) ground state and the first excited state. Practically, AQC would be operated under some finite temperature which may not necessarily be negligible (recall energy gaps usually shrink with the increase of the problem size) compared with the energy gap. The effect of thermal noise would then play an important role in determining the performance of AQC. Physically, the relaxation process (i.e. excitation to higher energy states) takes finite time to complete. Hence, we expect that AQC needs to be sufficiently \textit{fast} as well. This sets another time scale due to the environment. Consequently, unless thermalization is not an issue, \textit{AQC would work only if the computation time lies within these two time scales}. The question is, how to determine the latter time scale involved? We shall answer this question in this letter.
This work is motivated by recent studies \cite{Childs01,Shenvi03, Roland05, Sarandy05,Aberg05, Ashhab06, Tiersch07,Amin08} related to the question of robustness of AQC. Concerning the noise effect on AQC, some of the models are based on either qualitative or perturbative arguments which are not verified by independent numerical investigation. Some of them are formulated in terms of parameters which are inaccessible experimentally. On the other hand, it was believed \cite{Ashhab06,Tiersch07,Amin08} that the two-level approximation would be valid for AQC, even if a large number of excited states would be involved when the minimal gap is smaller than the temperature. It is therefore still unclear how ``robust" AQC is against thermal noise.
With these problems in mind, our goal here is to study the thermalization problem of AQC by modeling the quantum computer as a harmonic oscillator. This model not only provides us with enough (infinite) excited states but also allows \textit{quantitative} analysis. As we shall see, it could not be modeled by the two-level approximation. Moreover, we shall quantify the effects on the performance of AQC through physical quantities such as temperature $T$, relaxation time $1/ \gamma$, energy gap $\Delta$ and computation time $\tau_*$. The ``anomaly" of this model may seem to be the evenly distributed energy levels. To verify the validity, we have compared it with the numerical simulation by Childs \textit{et al.} \cite{Childs01}, and found that the predictions of this harmonic model qualitatively agree with their results. Lastly, this model is testable with the current quantum electronic technologies, e.g. simple RLC circuits.
\emph{Adiabatic Quantum Computation ---} To define our adiabatic quantum computer, there are only two adjustable and time-varying parameters, namely the ``mass" $m_t \equiv m\left( t \right)$ and the ``spring constant" $k_t \equiv k\left( t \right)$. The time dependence of these two parameters, at this stage, is completely arbitrary and is designed to \textit{simulate} (e.g. see example III below) an adiabatic quantum computer. The (computational) system Hamiltonian $H_S(t)$ is described by that of a standard parametrically driven harmonic oscillator:
\begin{equation}\label{HS}
H_S \left( t \right) = \frac{{{\hat p}^2 }}{{2m_t }} + \frac{1}{2}k_t {\hat x}^2 \quad,
\end{equation}
which is associated with a set of (instantaneous) energy eigenstates $ \left| {n_t } \right\rangle$ , with $n = 0,1,2,...$, satisfying the eigenvalue equation: $H_S \left( t \right)\left| {n_t } \right\rangle = E_n \left( t \right)\left| {n_t } \right\rangle$. Here $E_n \left( t \right) = \left( {n + 1/2} \right)\Delta \left( t \right)$ is the (instantaneous) eigen-energy for the state $\left| {n_t } \right\rangle$. The energy gap $\Delta \left( t \right) \equiv \sqrt {k_t /m_t } = E_{n + 1} \left( t \right) - E_n \left( t \right)$ does not depend on $n$, by definition. The initial state is assumed to be the ground state $\left| {0_{t = 0} } \right\rangle $ of $H_S \left( t = 0 \right)$. In the absence of the heat bath, the final state is given by $U\left( {t = t_f } \right)\left| {0_{t = 0} } \right\rangle$, where $U\left( t \right) = T\exp ( { - i\int_0^t {H_S\left( {t'} \right)dt'} })$ (with $\hbar = 1$) is a time-ordered series.
The computation is considered to fail if the final state deviates significantly (due to excitation to higher energy states) from the desired ground state $\left| {0_{t = f} } \right\rangle$ of $H_S \left( {t = t_f } \right)$. This is best quantified by the fidelity $ F \equiv \left| {\left\langle {0_{t = f} } \right|U\left( {t_f } \right)\left| {0_{t = 0} } \right\rangle } \right|^2$. Here, since our goal is to study the thermal effect from the environment, we assume that AQC in the absence of the heat bath can be achieved (almost) perfectly, i.e., $F \approx 1$; violation of this condition may be considered as perturbation.
Under this condition (and to zeroth order in $\dot \Delta \left( t \right)$), we may write $U\left( t \right)\left| {n_0 } \right\rangle = \exp ( { - i\int_0^t {E _n \left( {t'} \right)dt'} })\left| {n_t } \right\rangle$, and hence a relation which is needed later:
\begin{equation}\label{Inter_pict}
U^\dagger ( t )a_t U\left( t \right) = \exp \left( { - i\int_0^t {\Delta \left( {t'} \right)dt'} } \right)a_0 \quad,
\end{equation}
where $a_t \equiv \sqrt {m_t \Delta _t /2} \left( {{\hat x} + i{\hat p}/m_t \Delta _t } \right)$ is the (instantaneous) annihilation operator for $H_S(t)$.
\emph{Ground State Occupation ---} In the presence of a heat bath, a mixed state representation $\rho \left( t \right)$, or the reduced density matrix $\rho _S \left( t \right) = Tr_B \left\{ {\rho \left( t \right)} \right\}$, is needed. The performance of the quantum computer is determined by the ground state occupation $P_g \equiv \left\langle {0_t } \right|\rho _S \left( t \right)\left| {0_t } \right\rangle$, and in the coordinate space
\begin{equation}\label{Pg_gen}
P_g = \int {\int_{ - \infty }^\infty {dx'dx} } \left\langle x \right|\rho _S \left( t \right)\left| {x'} \right\rangle \varphi _t^* \left( x \right)\varphi _t \left( {x'} \right) \quad,
\end{equation}
where $\varphi _t \left( x \right) \equiv \left\langle x \right|\left. {0_t } \right\rangle = \left( {m_t \omega _t } \right)^{1/4} \exp \left( { - m_t \omega _t x^2 /2} \right)$ is the (instantaneous) ground state wavefunction of $H_S \left( t \right)$. Before going into the technical details of the calculations for $\rho _S \left( t \right)$, we first argue that, subject to the constraints (a), (b) and (c) described below, the relevant quantity here is only the physical observable $\langle {\hat x\left( t \right)^2 }\rangle = Tr\left\{ {{\hat x}^2 \rho \left( t \right)} \right\}$ (or the current fluctuation $\left\langle {I(t)^2 } \right\rangle$ in RLC circuits).
The imposed constraints are (a) the heat bath can be approximated by a set of harmonic oscillators $H_B = \sum\nolimits_k {\hbar \omega _k } b_k^\dagger b_k$, (b) the system-bath coupling $H_{SB}$ is bilinear e.g. terms like $\hat x ( {b_k + b_k^\dagger })$, and (c) the initial state of the bath is in a thermal state $\rho _B = e^{ - \beta H_B } /Tr\left[ {e^{ - \beta H_B } } \right]$ and the system is in the ground state of $H_S \left( 0 \right)$, i.e., $\rho \left( 0 \right) = \left| {0_{t=0} } \right\rangle \left\langle {0_{t=0} } \right| \otimes \rho _B$. To proceed, we write
\begin{equation}\label{outer}
\left| {x'} \right\rangle \left\langle x \right| = \frac{1}{{2\pi }}\int_{ - \infty }^\infty {d\nu } e^{i\nu \left( {\mu /2 - x} \right)} e^{i\left( {\mu \hat p + \nu \hat x} \right)} \quad ,
\end{equation}
where $\mu \equiv x - x'$ (and $\nu$ is just a dummy variable). This form suggests that we have to evaluate the quantity $\left\langle {e^{i\left( {\mu \hat p + \nu \hat x} \right)} } \right\rangle = Tr\left\{ {e^{i\left( {\mu \hat p + \nu \hat x} \right)} \rho \left( t \right)} \right\}$, which is equal to $\exp ( { - \langle {( {\mu \hat p + \nu \hat x} )^2 } \rangle /2} )$ from the Bloch identity. Now, as can be verified from the master equation, we claim that $\left\langle {a_t^2 } \right\rangle = \langle {a_t^{ \dagger 2} }\rangle = 0$. By completing the Gaussian integrals in Eqs. (\ref{Pg_gen}) and (\ref{outer}), we finally arrive at a very compact form for $P_g$:
\begin{equation}\label{compact_Pg}
P_g = \frac{1}{{1 + n\left( t \right)}} \quad,
\end{equation}
where $n\left( t \right) \equiv \langle {a_t^\dagger a_t } \rangle = Tr \{ {a_t^\dagger a_t \rho \left( t \right)} \}$. Thus, as advertised, $\left\langle {x^2 } \right\rangle = \left( {\hbar /2m_t \Delta_t } \right)( {2\langle {a_t^ \dagger a_t } \rangle + 1} )$ is the only quantity needed to determine $P_g$.
\emph{Master Equation ---} We shall obtain $n(t)$ through the master equation approach \cite{Carmichael}. Here the full Hamiltonian is divided into three parts: $H = H_S \left( t \right) + H_B + H_{SB}$, where the first two terms have been defined. We assume that the coupling term $H_{SB}$ is a time-independent operator (i.e. independent of the mass $m_t$ and spring constant $k_t$ of the oscillator), and is explicitly given by
\begin{equation}
H_{SB} = {\hat x}\sum\limits_k {g_k \left( {b_k^ \dagger + b_k } \right)} \quad.
\end{equation}
Note that there could be a frequency renormalization (lamb shift type), which modifies the ground state wavefunction. This effect would be small for weak damping $\Delta \left( t \right) \gg \gamma \left( t \right)$ where $\gamma \left( t \right) \equiv \eta \left( t \right)/m\left( t \right)$ (cf. Eq.(\ref{damped}) and (\ref{nt})). Second, even for ohmic damping (time independent $\eta \left( t \right) = \eta$), the relaxation rate $\gamma \left( t \right) \propto 1/m\left( t \right)$ (or time to reach equilibrium) depends on the system parameter (here the ``inertia" $m(t)$), and therefore may be time-dependent.
To continue, we shall keep the standard assumptions for the master equation, namely (i) product initial state, (ii) Born-Markov approximation (i.e. weak coupling and short memory time), and (iii) rotating wave approximation (i.e. ignore fast oscillations). Subject to these constraints, the master equation is given \cite{Carmichael} by
$$
\frac{d}{{dt}}\tilde \rho _S = \frac{{ - 1}}{{\hbar ^2 }}\int_0^t {dt'} Tr_B \{ {[ {\tilde H_{SB}( t),[ {\tilde H_{SB}( {t'} ),\tilde \rho _S( t) \otimes \rho _B } ]} ]}\} ,
$$
where in the interaction picture: $\tilde \rho _S( t ) \equiv U^ \dagger ( t )\rho _S( t)U( t )$ and $\tilde H_{SB} ( t ) \equiv U^\dagger ( t )H_{SB} U( t)$. If we write $x = \sqrt {\hbar /2m_t \Delta _t } ( {a_t + a_t^\dagger })$, and from Eq. (\ref{Inter_pict}), we obtain interaction terms similar to that of an ordinary (i.e. with mass and spring constant fixed) harmonic oscillator, except the replacement: (a) $\Delta _0 t \to \int_0^t {dt'} \Delta \left( t' \right)$ and (b) $m_0 \Delta _0 \to m_t \Delta _t$. Consequently, the resonating modes $\omega _k \approx \Delta \left( t \right)$ would be time-dependent, and hence the friction ``coefficient" $\eta \left( t \right) \equiv \pi J\left( {\Delta _t } \right)/\Delta _t$, where $J\left( \omega \right) \equiv \sum\nolimits_k {g_k^2 \delta \left( {\omega _k - \omega } \right)}$, would also be a function of time i.e., with a classical equation of motion (neglect the frequency renormalization):
\begin{equation}\label{damped}
m\left( t \right)\frac{{d^2 }}{{dt^2 }}\left\langle x \right\rangle + \eta \left( t \right)\frac{d}{{dt}}\left\langle x \right\rangle + k\left( t \right)\left\langle x \right\rangle = 0 \quad .
\end{equation}
The exception is the ohmic case, where $J\left( \omega \right) \propto \omega$ and hence $\eta \left( t \right) = \eta _0$ is independent of the variation in the mass and the spring constant (e.g. RLC circuit). Finally, the equation for $n\left( t \right) = Tr\{ {a_0^\dagger a_0 \tilde \rho _S \left( t \right)}\}$ is obtained from the master equation:
\begin{equation}\label{nt}
\frac{d}{{dt}}n\left( t \right) = - \gamma \left( t \right)\left( {n\left( t \right) - N\left( t \right)} \right) \quad,
\end{equation}
where $N\left( t \right) \equiv 1/\left( {e^{ \Delta ( t ) / k_B T} - 1} \right)$. This is the key result of this paper, since the performance of AQC is determined entirely by $n(t)$. Note that even for the case of ohmic damping, the relaxation rate $\gamma \left( t \right) \equiv \eta \left( t \right)/m\left( t \right)$ is in general time dependent. With the initial condition $n\left( 0 \right) = 0$, this equation can be solved numerically to obtain the ground state occupation $P_g$ at time $t$.
Although in our model the time dependence for the energy gap is completely arbitrary, for the purpose of understanding the structure of the thermal excitation we assume that the gap has a Landau-Zener type variation:
\begin{equation}\label{Delta}
\Delta \left( t \right) = \sqrt {\Delta _{\max }^2 \left( {1 - t/\tau _* } \right)^2 + \Delta _{\min }^2 } \quad,
\end{equation}
where for $\Delta _{\max } \gg \Delta _{\min }$, $\Delta \left( 0 \right) = \Delta \left( {2\tau _* } \right) \approx \Delta _{\max }$, and $\Delta \left( {\tau _* } \right) = \Delta _{\min }$. Except near the region $t \approx \tau _*$, the rate of change of the energy gap is $V_S \equiv \Delta _{\max } /\tau _*$. For simplicity, we shall consider the ohmic case only and assume that $m\left( t \right) = m_0$ is time-independent, which makes $\gamma$ time-independent as well.
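Eq. (\ref{nt}) with the gap profile (\ref{Delta}) is a single linear ODE and can be integrated directly; the following minimal Python sketch (with purely illustrative parameter values, $\hbar=k_B=1$) returns the final ground state occupation via Eq. (\ref{compact_Pg}):
\begin{verbatim}
import numpy as np

gap_max, gap_min, tau, gamma, kT = 1.0, 0.05, 50.0, 0.02, 0.5  # illustrative values

def gap(t):
    # Landau-Zener type profile, Eq. (Delta)
    return np.sqrt(gap_max**2 * (1.0 - t/tau)**2 + gap_min**2)

t, dt, n = 0.0, 1e-3, 0.0
while t < 2.0 * tau:
    N = 1.0 / np.expm1(gap(t) / kT)    # thermal occupation at the instantaneous gap
    n += -gamma * (n - N) * dt         # Eq. (nt), forward Euler step
    t += dt
print("P_g =", 1.0 / (1.0 + n))        # Eq. (compact_Pg)
\end{verbatim}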
The following examples are chosen to demonstrate respectively that: (I) when thermalization is important (i.e. $\Delta \left( t \right) \le k_B T$), the computation speed $V_S$ needs to be fast, compared with the ``natural" speed of the bath $V_B \equiv \gamma k_B T$. We quantify this by defining $R \equiv V_B /V_S = \gamma k_B T\tau _* /\Delta _{\max }$. (II) After passing through the gap minimum, when the energy gap is larger than the temperature, i.e. $\Delta \left( {t > \tau _* } \right) > k_B T$, relaxation towards the ground state (increasing $P_g$) has the simple $e^{ - \gamma t}$ dependence, and in contrast with that in Ref.\cite{Amin08}, does not depend on $\Delta _{\min }^2 /\Delta _{\max }$ in the exponent. (III) This toy model qualitatively agrees with numerical calculation based on more realistic Hamiltonian.
\emph{Example I ---} Consider the case where ${\Delta _{\max } \le k_B T}$ and $\Delta _{\min } \ll k_B T$. It is possible to approximate $N\left( t \right) \approx k_B T/\Delta \left( t \right)$. Substitute this into Eq. (\ref{nt}), and with $\gamma \left( t \right) = \gamma$, we have
\begin{equation}\label{nt1}
n\left( t \right) = \frac{{\gamma k_B T}}{{\Delta _{\max } }}e^{ - \gamma t} \int_0^t {ds\frac{{e^{\gamma s} }}{{\sqrt {\left( {1 - s/\tau _* } \right)^2 + \varepsilon ^2 } }}} \quad,
\end{equation}
where $\varepsilon \equiv \Delta _{\min } /\Delta _{\max } \ll 1$. For $\gamma \tau _* < 1$, the integrand is dominated near $s \approx \tau _*$. Taking $e^{\gamma s} \to e^{\gamma \tau _* }$ and integrating explicitly, we have
\begin{equation}
n\left( t \right) = \lambda(t) Re^{ - \gamma \left( {t - \tau_* } \right)} \quad,
\end{equation}
where $\lambda(t) \equiv \ln 2 - \ln [ {\sqrt {\varepsilon ^2 + \left( {1 - t/\tau_* } \right)^2 } + \left( {1 - t/\tau_* } \right)}]$, and $R = \gamma k_B T\tau _* /\Delta _{\max }$ as defined above. Figure \ref{fig:} shows that this expression for $n(t)$ is in good agreement with the result by direct numerical integration for $n(t)$. From Eq. (\ref{compact_Pg}), we conclude that the thermal effect is not important even if $\Delta \left( t \right) \le k_B T$ provided that $\gamma \tau _* \ll 1$. More precisely, we require $R \ll 1$, or $V_S \gg V_B$. This minimal speed limit for AQC could not be seen by the two-level approximation \cite{Amin08}.
\begin{figure}
\caption{Simulation of AQC with a harmonic oscillator under thermal noise. The $x$-axis is the rescaled time $\gamma t$. Panels 1(a) and 2(a) show, in units of $k_B T$, the energy gap profiles of Eq. (\ref{Delta}).}
\label{fig:}
\end{figure}
\emph{Example II ---} Here we consider the possibility of relaxation after passing through the minimum. In other words, we consider $n\left( t \right)$ when $\Delta \left( {t > \tau _* } \right) > k_B T$, while $\Delta _{\min } < k_B T$. This situation should not be very common for AQC, as it suggests that thermalization from the heat bath would yield better performance. To start, we could not invoke the same approximation as in Example I. However, as long as $R < 1$, $N\left( t \right)$ is still sharply peaked at $t \approx \tau _*$. Based on this observation, skipping the details, we obtain an approximate solution which is valid \textit{only} for $t>\tau_*$:
\begin{equation}\label{nt2}
n\left( {t > \tau _* } \right) = \kappa Re^{ - \gamma \left( {t - \tau _* } \right)} \quad,
\end{equation}
where $\kappa \equiv 2a + 2\ln \left[ {k_B T/\Delta _{\min } \left( {1 - R} \right)} \right]$ and $a \equiv \int_0^\infty e^{ - x} \ln \left( {2x} \right)dx = 0.116$. Figure \ref{fig:} shows that this expression qualitatively agrees with the direct numerical integration for $n(t)$ for $t > \tau _*$. Again, we conclude that $R$ plays an important role for determining the performance of AQC. It is suggested \cite{Amin08} that the combination $\Delta _{\min }^2 /\Delta _{\max }$ would be important for the relaxation process in the two-level approximation. We have tested with different ratios of $\Delta _{\min } /\Delta _{\max }$ (while keeping $\Delta _{\min } \ll \Delta _{\max }$), but we did not find explicit dependence of $\Delta _{\min }$ in the exponent of $n(t)$.
\emph{Example III ---} So far we have compared our results with that from the two-level approximation. Would a realistic Hamiltonian (with non-uniform distribution of energy gaps) for AQC, in some sense, look like a harmonic oscillator (uniform gaps)? If yes, then based on the results above, it may be possible to approximate the final ground state occupation by the formula:
\begin{equation}\label{fitting}
P_g \approx \frac{1}{{1 + \alpha R}} \quad,
\end{equation}
where $R \equiv \gamma k_B T \tau_*/\left( {\Delta _{\max } - \Delta _{\min } } \right)$ is generalized to include cases where $\Delta _{\min } /\Delta _{\max }$ is not negligible, and $\alpha$ is a fitting parameter. For example I, at $t=2\tau_*$, $\alpha = 2e^{ - \gamma \tau_* } \ln \left( {2\Delta _{\max } /\Delta _{\min } } \right)$, and for example II, $\alpha = \kappa e^{ - \gamma \tau _* }$. The former does not depend on $T$, and the latter depends on $T$ weakly (logarithmically).
In Ref.\cite{Childs01}, an algorithm solving the so-called ``three-bit exact cover" (EC3) problem was studied numerically; there the energy gap (here taken as $\Delta \left( t \right)$) between the ground and first excited state of $H_S \left( t \right)$ is significantly larger than the rest when $\Delta \left( t \right) = \Delta _{\min }$. We extract, from FIG 2 of that paper, the final probability $P_g$ and $R$, and estimate the corresponding $\alpha$ by the relation in Eq. (\ref{fitting}), as suggested by our harmonic oscillator model. The results are shown in Table \ref{AQC}. We found that both sets of data are consistent with the conjecture that $P_g$ decreases with $R$. For data I, the fluctuation in the value of $\alpha$ is relatively small (about 20\% from the mean), and the data point for the high-temperature case ($k_B T/\Delta _{\max }=10$) deviates significantly from the rest. This is anticipated from our experience in Examples I and II. For data II, the fluctuation is relatively larger (about 40\% from the mean). This may be due to the fact that the ratio $\Delta _{\min } /\Delta _{\max } = 0.425$ is a bit too large for our simple formula Eq. (\ref{fitting}) to be accurate. We conclude that this harmonic oscillator model may provide a reasonable estimation for some realistic AQC problems.
For AQC involving many more degrees of freedom, it may (either computationally or experimentally) be challenging to obtain the eigenenergy spectrum. However, this model can still be applicable. We first determine $\alpha$ and $R$ for an AQC with a relatively small problem size $n$ (i.e. what was done in Table \ref{AQC}). We then gradually increase $n$ and determine the (average) scaling $\Gamma _n < 1$ of the energy spectrum (typically the first few lowest energy states are enough). Then, to estimate the same AQC (at the same temperature) for a large problem size $n$, the harmonic oscillator model suggests that the ground state probability is given by the formula in Eq. (\ref{fitting}) with the replacement $\Delta _{\max } \to \Gamma _n \Delta _{\max }$ (assuming $\tau_*$ and $\gamma$ are fixed).
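The fitting parameter can also be read off directly from Eq. (\ref{fitting}) as $\alpha=(1/P_g-1)/R$; the short Python sketch below reproduces, up to the rounding of the quoted $P_g$ and $R$, the $\alpha$ row for data I of Table \ref{AQC}:
\begin{verbatim}
# (P_g, R) pairs for data I, extracted from FIG 2 of Childs et al.
P_g = [0.79, 0.53, 0.30, 0.15, 0.08]
R   = [0.14, 0.72, 1.43, 2.86, 14.3]
alpha = [(1.0/p - 1.0)/r for p, r in zip(P_g, R)]
print([round(a, 2) for a in alpha])   # ~ [1.90, 1.23, 1.63, 1.98, 0.80]
\end{verbatim}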
\begin{table}[t]
\caption{\label{AQC} Simulation of AQC with Harmonic Oscillator. The data ($P_g$ and $R$), taken from the numerical simulation (FIG 2) of Childs \textit{et al}. \cite{Childs01}, are fitted ($\alpha$ being the fitting parameter) with the formula in Eq. (\ref{fitting}) suggested by the harmonic oscillator model. The standard deviation (excluding the last data point $k_B T = 10$) of $\alpha$ for data I (II) is about $20\%$ ($40\%$) from the mean value 1.68 (0.81).}
\begin{ruledtabular}
\begin{tabular}{c|c c c c c c}
& $k_B T$ \footnotetext{$k_B T$ and $\Delta _{\min }$ are in unit of ${\Delta _{\max } }$.} & $1/10$ & $1/2$ & $1$ & $2$ & $10$\\
\hline
Data I: & $P_g$ & 0.79 & 0.53 & 0.30 & 0.15 & 0.08\\
$\Delta _{\min } {=} 0.301$ & $R$ & 0.14 & 0.72 & 1.43 & 2.86 & 14.3\\
& $\alpha$ & 1.86 & 1.24 & 1.63 & 1.98 & 0.80\\
\hline
Data II: & $P_g$ & 0.89 & 0.70 & 0.42 & 0.19 & 0.08\\
$\Delta _{\min } {=} 0.425$ & $R$ & 0.17 & 0.87 & 1.74 & 3.48 & 17.4 \\
& $\alpha$ & 0.71 & 0.49 & 0.79 & 1.23 & 0.66\\
\end{tabular}
\end{ruledtabular}
\end{table}
\emph{Conclusions ---} We have introduced a harmonic oscillator model for a quantitative study on the effect of thermal noise on AQC. For ohmic damping, we showed that AQC is considered \textit{fast}, if the combination $R \equiv \gamma k_B T\tau _* /\Delta _{\max }$ is small. This model suggests a simple relation for estimating the fidelity (cf. Eq.(\ref{fitting})) of general AQC; this relation qualitatively agrees with the previous numerical simulation. This model can also be verified with quantum RLC circuits, and therefore can act as a test bed for future theoretical and experimental investigation on AQC.
\begin{acknowledgments}
M.H.Y acknowledges the support of the NSF grant EIA-01-21568 and the Croucher Foundation, and thanks R. Laflamme for the hospitality of the Institute for Quantum Computing where part of this work is done. M.H.Y also thanks Jonathan Baugh, Andrew Childs and Tzu-Chieh Wei for valuable discussions, and especially A. J. Leggett for comments and criticisms.
\end{acknowledgments}
\end{document} |
\begin{document}
\title{A duality between the metric projection onto a convex cone and the metric projection
onto its dual in Hilbert spaces\thanks{{\it 1991 A M S Subject Classification:} Primary 90C33,
Secondary 15A48; {\it Key words and phrases:} convex sublattices, isotone projections. }}
\author{S. Z. N\'emeth\\School of Mathematics, The University of Birmingham\\The Watson Building, Edgbaston\\Birmingham B15 2TT, United Kingdom\\email: [email protected]}
\date{}
\maketitle
\begin{abstract}
If $K$ and $L$ are mutually dual closed convex cones in a Hilbert space $\mathbb H$ with the
metric projections onto them denoted by $P_K$ and $P_L$ respectively, then the
following two assertions are equivalent: (i) $P_K$ is isotone with respect to the
order induced by $K$ (i. e. $v-u\in K$ implies $P_Kv-P_Ku\in K$); (ii) $P_L$ is
subadditive with respect to the order induced by $L$ (i. e. $P_Lu+P_Lv-P_L(u+v)\in L$
for any $u,\;v \in \mathbb H$). This extends the similar result of A. B. N\'emeth and
the author for Euclidean spaces. The extension is essential because the proof of
the result for Euclidean spaces is essentially finite dimensional and seemingly
cannot be extended for Hilbert spaces. The proof of the result for Hilbert spaces is
based on a completely different idea which uses extended lattice operations.
\end{abstract}
\section{Introduction}
For simplicity, let us call a closed convex cone simply a cone.
Both the isotonicity \cite{IsacNemeth1990b,IsacNemeth1990c} and the subadditivity
\cite{AbbasNemeth2012,NemethNemeth2012}, of a projection onto a pointed cone with respect to
the order defined by the cone can be used for iterative methods for finding solutions of
complementarity problems with respect to the cone. Iterative methods are widely used for
solving various types of equilibrium problems (such as variational inequalities,
complementarity problems etc.). In recent years isotonicity has gained more and more ground
for handling such problems (see \cite{NishimuraOk2012}, \cite{CarlHeikilla2011} and the
large number of references in \cite{CarlHeikilla2011} related to ordered vector spaces). If
a complementarity problem is defined by a cone $K\subset\mathbb H$
and a mapping $f:K\to\mathbb H$, where $(\mathbb H,\langle\cdot,\cdot\rangle)$ is a Hilbert space, then $x$
is a solution of the corresponding complementarity problem (that is, $x\in K$, $f(x)\in K^*$
and $\langle x,f(x)\rangle=0$, where $K^*$ is the dual of $K$), if and only if $x=P_K(x-f(x))$.
Thus, if
$f$ is continuous and the sequence $x^n$ given by the iteration $x^{n+1}=P_K(x^n-f(x^n))$ is
convergent, then its limit is a fixed point of the mapping $x\mapsto P_K(x-f(x))$ and
therefore
a solution of the corresponding complementarity problem. A specific way for showing the
convergence of the sequence $x^{n+1}=P_K(x^n-f(x^n))$ is to use the isotonicity
\cite{IsacNemeth1990b,IsacNemeth1990c} or subadditivity \cite{AbbasNemeth2012,NemethNemeth2012}
of $P_K$ with respect to the order induced by the cone $K$. In finite dimension the
isotonicity (subadditivity) of $P_K$ imposes strong constraints on the structure of $K$
($K^*$). If $P_K$ is isotone (subadditive), then $K$ ($K^*$) has to be a direct sum of the
subspace $V=K\cap(-K)$ ($V=K^*\cap(-K^*)$) with a latticial cone of a specific structure in
the orthogonal complement of the subspace $V$ (see
\cite{GuyaderJegouNemeth2012,IsacNemeth1992,NemethNemeth2012}). There exist cones of this
type which are important from the practical point of view, such as the monotone
cone (see \cite{GuyaderJegouNemeth2012}) and the monotone nonnegative cone
(see \cite{Dattorro2005}). For Euclidean spaces
the authors of \cite{NemethNemeth2012} showed that $P_K$ is isotone with respect to the order
induced by $K$ if and only if $P_L$ is subadditive with respect to the order induced by $L$,
where $K$ and $L$ are mutually dual pointed closed convex cones. If $K$ is also pointed and
generating, then the isotonicity of $P_K$ with
respect to the order induced by $K$ implies the latticiality of the cone in Hilbert spaces as
well (see \cite{IsacNemeth1990,IsacNemeth1990b,IsacNemeth1990c}). The main result of this paper
states that $P_K$ is isotone with respect to the order induced by $K$ (i.e., $v-u\in K$ implies $P_Kv-P_Ku\in K$) if and only if $P_L$ is
subadditive with respect to the order induced by $L$
(i.e., $P_Lu+P_Lv-P_L(u+v)\in L$ for any $u,\;v \in \mathbb H$), where $K$ and $L$ are mutually dual
pointed closed convex cones of a Hilbert space, thus extending the result of
\cite{NemethNemeth2012}. This result also implies that if $K$ is a pointed generating cone
in a Hilbert space such that $P_K$ is subadditive with respect to the order induced by
$K$, then it must be latticial. The latter two results have been already proved in Euclidean
spaces (see \cite{NemethNemeth2012} and \cite{IsacNemeth1992}), but they were open until now
in Hilbert spaces, except for the particular case of a Hilbert lattice \cite{Nemeth2003}.
Although originally
motivated by complementarity problems, recently it turned out that the isotonicity and
subadditivity of projections are also motivated by other practical problems at least as
important as the complementarity problems such as the problem of map-making from relative
distance information e.g., stellar cartography
(see
\noindent {\small www.convexoptimization.com/wikimization/index.php/Projection\_on\_Polyhedral\_Convex\_Cone}
\noindent and Section 5.13.2 in \cite{Dattorro2005}) and isotone regression
\cite{GuyaderJegouNemeth2012}, where the equivalence between two classical algorithms in
statistics is proved by using theoretical results about isotone projections. We remark that
our proofs are essentially infinite dimensional and apparently there is no easy way to extend
the methods of \cite{NemethNemeth2012} to infinite dimensions. The paper
\cite{GuyaderJegouNemeth2012} shows that investigation of the structure of cones admitting
isotone and subadditive projections is important for possible future applications. The proofs
presented here also provide a more elegant way of
proving the results of \cite{NemethNemeth2012}. However, the difference is that they do not
contain the proof of the latticiality of the involved cones. (For pointed generating cones in
Hilbert spaces this is the consequence of the main result in \cite{IsacNemeth1990}.)
The structure of this note is as follows: After
some preliminary terminology, we introduce the main tools for our proofs: Moreau's
decomposition theorem (i.e., Lemma \ref{lm}) and the lattice-like
operations related to a projection onto a cone and then we proceed to showing our main result.
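As a concrete illustration of the projection iteration $x^{n+1}=P_K(x^n-f(x^n))$ described above, the following minimal Python sketch (with illustrative data) treats the self-dual cone $K=\mathbb R^2_+$, for which $P_K$ is the componentwise positive part, and an affine map $f(x)=Mx+q$; convergence here follows since $\|I-M\|<1$:
\begin{verbatim}
import numpy as np

M = np.array([[1.0, 0.2], [0.2, 1.0]])
q = np.array([-1.0, 1.0])
P_K = lambda z: np.maximum(z, 0.0)     # metric projection onto the orthant

x = np.zeros(2)
for _ in range(200):
    x = P_K(x - (M @ x + q))           # fixed point iteration
print(x, M @ x + q, x @ (M @ x + q))   # x in K, f(x) in K* = K, <x, f(x)> = 0
\end{verbatim}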
\section{Preliminaries}
Let $\mathbb H$ be a real Hilbert space endowed with a scalar product $\langle\cdot,\cdot\rangle$ and let
$\|\cdot\|$ be the norm generated by the scalar product $\langle\cdot,\cdot\rangle$.
Throughout this note we shall use some standard terms and results from convex geometry
(see e.g. \cite{Rockafellar1970}).
Let $K$ be a \emph{closed convex cone} in $\mathbb H$, i.e., a nonempty closed set with
$tK+sK\subset K,\;\forall \;t,s\in \mathbb R_+=[0,+\infty)$. The closed convex cone $K$ is called
\emph{pointed}, if $K\cap(-K)=\{0\}.$
The cone $K$ is {\it generating} if $K-K=\mathbb H$.
The convex cone $K$ defines a pre-order relation (i.e., a reflexive and transitive binary
relation) $\leq_K$, where $x\leq_Ky$ if and only if $y-x\in K$.
The relation is {\it compatible with the vector structure} of $\mathbb H$ in the sense that
$x\leq_K y$ implies $tx+z\leq_K ty+z$ for all $z\in \mathbb H$, and all
$t\in \mathbb R_+$. If $\sqsubseteq$ is a reflexive and transitive relation on $\mathbb H$ which is
compatible with the vector structure of $\mathbb H$, then $\sqsubseteq=\leq_K$ with
$K=\{x\in\mathbb H:0\sqsubseteq x\}.$ If $K$ is pointed, then $\leq_K$ is \emph{antisymmetric}
too, that is, $x\leq_K y$ and
$y\leq_K x$ imply that $x=y.$ Hence, in this case $\le_K$ becomes an order relation (i.e., a
reflexive, transitive and antisymmetric binary relation). The elements $x$ and $y$ are called
\emph{comparable} if $x\leq_K y$ or $y\leq_K x.$
We say that $\leq_K$ is a \emph{latticial order} if for each pair of elements $x,y\in \mathbb H$
there exist the least upper bound $\sup\{x,y\}$ (denoted by $x\vee y$) and the greatest lower bound $\inf\{x,y\}$ of
the set $\{x,y\}$ (denoted by $x\wedge y$) with respect to the relation $\leq_K$. In this case $K$ is said to be a
\emph{latticial or simplicial cone}, and $\mathbb H$ equipped with a latticial order is called a
\emph{Riesz space} or \emph{vector lattice}.
The \emph{dual} of the convex cone $K$ is the set
$$K^*:=\{y\in \mathbb H:\;\langle x,y\rangle \geq 0,\;\forall x\in K\}.$$
The set $K^*$ is a closed convex cone.
If $K$ is a closed cone, then the extended Farkas lemma (see Exercise 2.31 (f)
in \cite{BoydVandenberghe2004}) says that $(K^*)^*=K.$
Hence, denoting $L=K^*$, we see that $K=L^*$ and $K^*=L$. For the
closed cones $K$ and $L$ related by these relations we say that they
are \emph{mutually dual cones}.
The cone $K$ is called \emph{self-dual}, if $K=K^*.$ If $K$ is self-dual, then it is a
generating, pointed, closed convex cone.
Let $K$ be a closed convex cone and $\rho:\mathbb H\to\mathbb H$ a mapping. Then, $\rho$ is called \emph{$K$-isotone} if $x\le_K y$ implies $\rho(x)\le_K\rho(y)$ and \emph{$K$-subadditive}
if $\rho(x+y)\le_K\rho(x)+\rho(y)$ for any $x,y\in \mathbb H$.
Denote by $P_D$ the projection mapping onto a nonempty closed convex set $D$ of the Hilbert
space $\mathbb H,$ that is the mapping which associates to $x\in \mathbb H$ the unique nearest point of
$x$ in $D$ (\cite{Zarantonello1971}):
\[ P_Dx\in D,\;\; \textrm{and}\;\; \|x-P_Dx\|= \inf \{\|x-y\|: \;y\in D\}. \]
Next, we shall frequently use the
following simplified form of the Moreau's decomposition theorem \cite{Moreau1962}:
\begin{lemma}\label{lm}
Let $K$ and $L$ be mutually dual cones in the Hilbert space $\mathbb H$. For any $x\in\mathbb H$
we have $x=P_Kx-P_L(-x)$ and $\langle P_Kx,P_L(-x)\rangle=0$. The relation $P_Kx=0$ holds if
and only if $x\in -L$.
\end{lemma}
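For instance, for the self-dual cone $K=L=\mathbb R^3_+$ (so that $P_K=P_L$ is the componentwise positive part), both assertions of the lemma are easily checked numerically:
\begin{verbatim}
import numpy as np

P = lambda z: np.maximum(z, 0.0)       # projection onto the nonnegative orthant
x = np.array([1.5, -2.0, 0.3])
print(np.allclose(x, P(x) - P(-x)))    # x = P_K x - P_L(-x)  -> True
print(P(x) @ P(-x))                    # <P_K x, P_L(-x)> = 0 -> 0.0
\end{verbatim}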
Let $K$ and $L$ be mutually dual cones in the Hilbert space $\mathbb H$. Define the
following operations in $\mathbb H$:
\[x\sqcap_K y=P_{x-K}y,\textrm{ }x\sqcup_K y=P_{x+K}y,\textrm{ }x\sqcap_L y=P_{x-L}y,\textrm{ and }
x\sqcup_L y=P_{x+L}y.\]
Assume that
the operations $\sqcup_K$, $\sqcap_K$, $\sqcup_L$ and $\sqcap_L$ have precedence over the addition of
vectors and multiplication of vectors by scalars.
If $K$ is self-dual, then $\sqcup_K = \sqcup_L$ and $\sqcap_K=\sqcap_L$, and we arrive at the generalized
lattice operations defined by Gowda, Sznajder and Tao in \cite{GowdaSznajderTao2004}, and used
by our paper~\cite{NemethNemeth2012b}.
A direct check shows that if $K$ is a self-dual latticial cone, then
$\sqcap_K=\sqcap_L=\wedge$, and $\sqcup_K=\sqcup_L=\vee$.
That is, $\sqcap_K$, $\sqcap_L$, $\sqcup_K$ and $\sqcup_L$ are \emph{lattice-like operations}.
We shall simply call a set $M$ which is invariant with respect to the operations $\sqcap_K$,
$\sqcap_L$, $\sqcup_K$ and $\sqcup_L$ \emph{$K$-invariant}.
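For the self-dual latticial cone $K=\mathbb R^n_+$, the sets $x-K$ and $x+K$ are coordinatewise lower and upper sets, so these operations reduce to the componentwise minimum and maximum; a minimal numerical sketch:
\begin{verbatim}
import numpy as np

x = np.array([2.0, -1.0, 0.5])
y = np.array([1.0,  3.0, 0.5])
meet = np.minimum(x, y)   # projection of y onto x - K
join = np.maximum(x, y)   # projection of y onto x + K
print(meet, join)
\end{verbatim}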
The following theorem greatly extends Lemma 2.4 of \cite{NishimuraOk2012} and since it can be
shown very similarly to Theorem 1 of \cite{NemethNemeth2012c} (i.e., the corresponding result
in Euclidean spaces), we state it here without proof.
\begin{theorem}\label{ISO}
Let $K\subset\mathbb H$ be a closed convex cone and $C\subset\mathbb H$ be a closed convex set. Then $C$ is $K$-invariant if and
only if $P_C$ is $K$-isotone.
\end{theorem}
\section{The main result}
\begin{theorem}\label{tm}
Let $K,L$ be mutually dual closed convex cones in a Hilbert space $\mathbb H$. Then, the
following two statements are equivalent:
\begin{enumerate}[(i)]
\item\label{tma} $P_K$ is $K$-isotone.
\item\label{tmb} $P_L$ is $L$-subadditive.
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{tma})$\implies$(\ref{tmb}): From Theorem \ref{ISO}, it follows that $K$ is $K$-invariant. From the definition of the $K$-invariance it is easy to see that $K$ is also
$L$-invariant. Hence, by using again Theorem \ref{ISO}, it follows that $P_K$ is $L$-isotone. Now let $x,y\in\mathbb H$ be arbitrary. From Lemma \ref{lm}, we have
$x+y\le_L P_K(x)+P_K(y)$, because \[P_K(x)+P_K(y)-x-y=P_K(x)-x+P_K(y)-y=P_L(-x)+P_L(-y)\in L+L\subset L.\] Hence, by the $L$-isotonicity of $P_K$, we have
\[P_K(x+y)\le_L P_K(P_K(x)+P_K(y))=P_K(x)+P_K(y),\] which means that $P_K$ is $L$-subadditive. Thus, by using Lemma \ref{lm}, we get
\begin{gather*}
P_L(x+y)=x+y+P_K(-x-y)\le_L x+y+P_K(-x)+P_K(-y)\\=x+P_K(-x)+y+P_K(-y)
=P_L(x)+P_L(y),
\end{gather*}
which is equivalent to the $L$-subadditivity of $P_L$.
(\ref{tmb})$\implies$(\ref{tma}): Let $x\le_L y$. Then, $y-x\in L$ and therefore, by the $L$-subadditivity of $P_L$ and Lemma \ref{lm}, we get
\begin{eqnarray*}
P_K(x)=P_L(-x)+x=P_L(y-x-y)+x\le_L P_L(y-x)+P_L(-y)+x\\=y-x+P_L(-y)+x=P_K(y)
\end{eqnarray*}
Hence, $P_K$ is $L$-isotone and therefore Theorem \ref{ISO} implies that $K$ is $L$-invariant. From the definition of the $K$-invariance it is easy to see that $K$ is also
$K$-invariant. Therefore, by using again Theorem \ref{ISO}, it follows that $P_K$ is $K$-isotone.
\end{proof}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We generalize the notion of calibrated submanifolds to smooth maps and show that several examples of smooth maps
appearing in differential geometry become examples
of our setting.
Moreover, we apply this notion to
give a lower bound for the energy of
smooth maps between Riemannian manifolds in a given homotopy class,
and we consider energy functionals which are minimized
by the identity maps of Riemannian manifolds with
special holonomy groups.
\end{abstract}
\tableofcontents
\section{Introduction}
In this article, we introduce the notion of calibrated geometry for smooth maps between Riemannian manifolds
and consider lower bounds and minimizers for several energies of smooth maps.
Let $(X,g)$ and $(Y,h)$ be compact Riemannian manifolds
and $f\colon X\to Y$ be a smooth map.
Then the $p$-energy of $f$ is defined by
\begin{align*}
\mathcal{E}_p(f):=\int_X |df|^pd\mu_g
\end{align*}
for $p\ge 1$,
where $\mu_g$ is the volume measure of $g$.
A harmonic map is defined to be a critical point of $\mathcal{E}_2$,
and its study is
one of the significant areas of differential geometry.
In 1964, Eells and Sampson \cite{ES1964}
showed that there is a harmonic map
$f'$ homotopic to $f$ if the sectional curvature of $h$ is nonpositive.
Moreover, Hartman \cite{Hartman1967} showed
that such harmonic maps minimize $\mathcal{E}_2|_{[f]}$,
where $[f]$ is the homotopy class represented by $f$.
In general, harmonic maps need not minimize the energy.
For example, although the identity map on any Riemannian manifold is always
harmonic, it is known that for every $\varepsilon>0$ there is a smooth map
$f_\varepsilon$ homotopic to the identity map
such that $\mathcal{E}_2(f_\varepsilon)=\varepsilon$ on
the $n$-sphere $S^n$ with the standard metric, provided $n\ge 3$.
By the result shown by White \cite{White1986},
if $\pi_l(X)$ is trivial
for all $1\le l\le k$, then $\inf \mathcal{E}_k|_{[1_X]}=0$,
where $1_X$ is the identity map of $X$.
Now, we consider how to give a lower bound for the energy
restricted to a given homotopy class $[f]$,
and how to find its minimizers.
Such a lower bound was first obtained by
Lichnerowicz \cite{Lichnerowicz1969} in the case where
$(X,g)$ and $(Y,h)$ are K\"ahler manifolds,
where it was shown that any holomorphic map between
K\"ahler manifolds minimizes $\mathcal{E}_2$ in its homotopy class.
Moreover, Croke \cite{Croke1987} showed that the identity
map on the real projective space with the standard metric
minimizes $\mathcal{E}_2$ in its homotopy class,
and Croke and Fathi \cite{CrokeFathi1990}
then introduced a new homotopy invariant
called the intersection, which gives a lower bound for
$\mathcal{E}_2|_{[f]}$ for a given homotopy class $[f]$.
Recently, Hoisington \cite{hoisington2021} gave a lower bound
for $\mathcal{E}_p$, for an appropriate $p$, in the case where
$X$ is the real, complex, or quaternionic projective space
with the standard metric.
In this article, we generalize the notion of calibrated geometry
to smooth maps between smooth manifolds and give lower bounds for several energies.
The origin of calibrated geometry is
Wirtinger's inequality for
even-dimensional
subspaces of Hermitian inner product spaces
\cite{Wirtinger1936},
which was later refined and generalized
by many researchers.
In \cite{harvey1982calibrated},
Harvey and Lawson defined
calibrated submanifolds of
Calabi-Yau, $G_2$ or $Spin(7)$ manifolds,
which minimize the volume in their homology classes.
Similarly, we define a new class of smooth maps,
called calibrated maps, and show that they minimize
the appropriate energy
in the given situation.
Moreover, we obtain the following results as applications.
The first application is a lower bound for the $p$-energy
restricted to a given homotopy class.
We assume $X$ is oriented.
The pullback of $f$ induces a linear map
$[f^*]^k\colon H^k(Y,\mathbb{R})\to H^k(X,\mathbb{R})$.
Fixing bases of $H^k(X,\mathbb{R})$ and $H^k(Y,\mathbb{R})$,
we obtain the matrix $P([f^*]^k)$ of $[f^*]^k$
and put $|P([f^*]^k)|:=\sqrt{{\rm tr}({}^tP([f^*]^k)\cdot P([f^*]^k))}$.
\begin{thm}
Let $(X,g)$ and $(Y,h)$ be as above.
For any $1\le k\le \dim X$, there is a positive constant $C$
depending only on $k$,
$(X,g)$, $(Y,h)$ and the basis of
$H^k(X,\mathbb{R})$, $H^k(Y,\mathbb{R})$
such that for any $f\in C^\infty(X,Y)$ we have
\begin{align*}
\mathcal{E}_k(f)\ge C|P([f^*]^k)|.
\end{align*}
In particular, if $[f^*]^k$ is nonzero,
then $\inf (\mathcal{E}_k|_{[f]})$ is positive.
\label{thm main1}
\end{thm}
In the above theorem, the compactness of $Y$ is not essential.
See Theorem \ref{thm lower bdd of k-energy}.
The second application is to show that
the identity maps of some Riemannian manifolds with special holonomy groups minimize the appropriate energy.
As we have already mentioned, the identity map
on the real or complex projective space minimizes $\mathcal{E}_2$ in its homotopy class
by \cite{Croke1987}
and \cite{Lichnerowicz1969}, respectively.
It was shown by Wei \cite{Wei1998} that
the identity map on the quaternionic projective space
$\mathbb{H}\mathbb{P}^n$ with the standard metric is an unstable
critical point of $\mathcal{E}_p$
for $1\le p< 2+4n/(n+1)$.
Moreover, Hoisington gave a nontrivial lower bound for
$\mathcal{E}_p|_{[1_{\mathbb{H}\mathbb{P}^n}]}$ for $p\ge 4$.
Here, the quaternionic projective space
is a typical example of quaternionic K\"ahler manifolds,
which are Riemannian manifolds of dimension $4n$
whose holonomy group is contained in $Sp(n)\cdot Sp(1)$.
Now, let $A$ be an $n\times m$ real matrix
and denote by $a_1,\ldots,a_m\in\mathbb{R}_{\ge 0}$ the
(nonnegative) eigenvalues of ${}^tAA$; then put
$|A|_p:=(\sum_{i=1}^m a_i^{p/2})^{1/p}$.
Moreover, we define an energy $\mathcal{E}_{p,q}$ by
\begin{align*}
\mathcal{E}_{p,q}(f):=\int_X|df|_p^qd\mu_g,
\end{align*}
then we have $\mathcal{E}_p=\mathcal{E}_{2,p}$.
\begin{thm}\label{thm main2}
Let $(X,g)$ be a compact quaternionic K\"ahler manifold
of dimension $4n\ge 8$.
Then the identity map of $X$ minimizes $\mathcal{E}_{4,4}$
in its homotopy class.
\end{thm}
We can also show a similar theorem in the case of
other holonomy groups.
If $(X,g)$ is a compact $G_2$ manifold, then $1_X$
minimizes $\mathcal{E}_{3,3}|_{[1_X]}$, and
if $(X,g)$ is a compact $Spin(7)$ manifold, then $1_X$
minimizes $\mathcal{E}_{4,4}|_{[1_X]}$
(see Theorem \ref{thm id calib}).
Moreover, it is easy to see that if the identity map minimizes
$\mathcal{E}_{p,q}$, then it also minimizes $\mathcal{E}_{p',q'}$
for all $p'\ge p$ and $q'\ge q$ by
H\"older's inequality.
Of course, we can also consider the cases of
K\"ahler, Calabi-Yau and hyper-K\"ahler manifolds, respectively; however, the results in these cases also follow from \cite{Lichnerowicz1969}.
This paper is organized as follows.
In Section \ref{sec energy of map},
we define the notion of calibrated maps, which is an analogue
of calibrated submanifolds.
In Section \ref{sec ex}, we explain some examples of
calibrated maps.
In particular, we show that holomorphic maps
between K\"ahler manifolds and the inclusion maps of
calibrated submanifolds can be regarded as calibrated maps.
Moreover, we also show that fibrations
whose regular fibers are calibrated submanifolds are calibrated maps.
We prove Theorem \ref{thm main1} in Section \ref{sec lower bdd},
and Theorem \ref{thm main2} in Section \ref{sec id map}.
In Section \ref{sec intersection},
we compare the homotopy invariant introduced in
\cite{CrokeFathi1990} with the invariants defined in this paper.
\paragraph{\bf Acknowledgment}
I would like to thank Professor Frank Morgan for his advice on this paper.
This work was supported by JSPS KAKENHI Grant Numbers JP19K03474, JP20H01799.
\section{Calibrated maps}\label{sec energy of map}
Let $X,Y$ be smooth manifolds
with $\dim X=m$ and $\dim Y=n$.
Throughout this paper,
we suppose $X$ is compact and oriented.
We fix a volume form ${\rm vol}\in\Omega^m(X)$ on $X$, namely,
a nowhere vanishing $m$-form
which determines an orientation and a measure of $X$.
For $m$-forms $v_1,v_2\in\Omega^m(X)$, there are
$\varphi_i\in C^\infty(X)$ with $v_i=\varphi_i{\rm vol}$.
Then we write $v_1\le v_2$ if
$\varphi_1(x)\le \varphi_2(x)$ for all $x\in X$.
If a map
$\sigma\colon C^\infty(X,Y)\to L^1(X)$ is given,
then we can define an energy
$\mathcal{E}\colon C^\infty(X,Y)\to \mathbb{R}$ by
\begin{align*}
\mathcal{E}(f):=\int_X \sigma(f){\rm vol}.
\end{align*}
Now, $f_0,f_1\in C^\infty(X,Y)$ are said to be {\it homotopic}
if there is a smooth map $F\colon [0,1]\times X\to Y$
such that $F(0,\cdot)=f_0$ and $F(1,\cdot)=f_1$.
By the Whitney approximation theorem, this is equivalent to
the existence of a continuous homotopy
joining $f_0$ and $f_1$.
For $f\in C^\infty(X,Y)$, denote by $[f]\subset C^\infty(X,Y)$
the homotopy equivalence class represented by $f$.
In this paper we consider lower bounds for $\mathcal{E}|_{[f]}$ and the minimum
of $\mathcal{E}|_{[f]}$.
Denote by $1_X\colon X\to X$ the identity map on $X$.
We define a smooth map $(1_X,f)\colon X\to X\times Y$
by
\begin{align*}
(1_X,f)(x):=(x,f(x)).
\end{align*}
The next definition is an analogue of
\cite{harvey1982calibrated}.
\begin{definition}
\normalfont
$\Phi
\in\Omega^m(X\times Y)$ is a {\it $\sigma$-calibration}
if $d\Phi=0$ and
\begin{align*}
(1_X,f)^*\Phi\le \sigma(f){\rm vol}
\end{align*}
for any smooth map $f\colon X\to Y$.
Moreover, $f$ is a {\it $(\sigma,\Phi)$-calibrated map}
if
\begin{align*}
(1_X,f)^*\Phi = \sigma(f){\rm vol}.
\end{align*}
\end{definition}
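Before stating the basic properties, we record a degenerate sanity check (our own illustration): take $\sigma(f)\equiv 1$ and $\Phi={\rm pr}_X^*{\rm vol}$, where ${\rm pr}_X\colon X\times Y\to X$ is the projection. Then $d\Phi={\rm pr}_X^*(d{\rm vol})=0$ and
\[ (1_X,f)^*\Phi=({\rm pr}_X\circ(1_X,f))^*{\rm vol}={\rm vol}=\sigma(f){\rm vol}, \]
so every smooth map is $(\sigma,\Phi)$-calibrated and the associated energy $\mathcal{E}(f)=\int_X{\rm vol}$ is constant in $f$, as it must be.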
\begin{thm}
Let $\sigma\colon C^\infty(X,Y)\to L^1(X)$ be an energy density as above and
$\Phi$ be a $\sigma$-calibration.
\begin{itemize}
\setlength{\parskip}{0cm}
\setlength{\itemsep}{0cm}
\item[$({\rm i})$] The constant
$\int_X(1_X,f)^*\Phi$ is determined by the homotopy class $[f]$.
In other words,
$\int_X(1_X,f_0)^*\Phi=\int_X(1_X,f_1)^*\Phi$ if $[f_0]=[f_1]$.
\item[$({\rm ii})$] We have $\inf\mathcal{E}|_{[f]} \ge \int_X(1_X,f)^*\Phi$ for any
$f\in C^\infty(X,Y)$.
\item[$({\rm iii})$] We have $\mathcal{E}(f)=\int_X(1_X,f)^*\Phi$ iff $f$ is $(\sigma,\Phi)$-calibrated map. In particular, any
$(\sigma,\Phi)$-calibrated map minimizes $\mathcal{E}$ in its homotopy class.
\end{itemize}
\end{thm}
\begin{proof}
$({\rm i})$ If $f_0,f_1$ are homotopic, then $(1_X,f_0)$ and $(1_X,f_1)$ are homotopic,
accordingly $(1_X,f_0)^*\Phi $ and $(1_X,f_1)^*\Phi$ represent
the same cohomology class by \cite[Corollary 4.1.2]{BT1982}.
$({\rm ii})$ is shown by the definition of
$\sigma$-calibration.
$({\rm iii})$
By the point-wise inequality
$(1_X,f)^*\Phi\le \sigma(f){\rm vol}$,
we have $\mathcal{E}(f)=\int_X(1_X,f)^*\Phi$ iff $(1_X,f)^*\Phi= \sigma(f){\rm vol}$.
\end{proof}
\section{Examples}\label{sec ex}
One typical example of an energy of maps is
the $p$-energy defined for smooth maps between
Riemannian manifolds.
Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds
and $f\colon X\to Y$ be a smooth map.
Then the pullback $f^*h$ is a section of
$T^*X\otimes T^*X$, so we can take the trace ${\rm tr}_g(f^*h)$ with respect to $g$.
For $p\ge 1$, put $\sigma_p(f):=\{ {\rm tr}_g(f^*h)\}^{p/2}$.
We assume that $X$ is oriented and denote by ${\rm vol}_g$
the volume form of $g$.
The $p$-energy $\mathcal{E}_p(f)$ is defined by
\begin{align*}
\mathcal{E}_p(f):=\int_X\sigma_p(f){\rm vol}_g.
\end{align*}
Now, the differential $df_x$ is an element of $T^*_xX\otimes T_{f(x)}Y$ for every $x\in X$.
Since $g_x$ and $h_{f(x)}$
induce a natural inner product and norm on
$T^*_xX\otimes T_{f(x)}Y$,
we may also write
$\sigma_p(f)(x)=|df_x|^p$.
By the H\"older's inequality, we have
\begin{align*}
\mathcal{E}_p(f)&\le {\rm vol}_g(X)^{1-p/q}\mathcal{E}_q(f)^{p/q}
\end{align*}
for $1\le p\le q$. Thus we have the following proposition.
\begin{prop}
Let $\Phi\in\Omega^m(X\times Y)$ be
a $\sigma_p$-calibration.
Then
\begin{align*}
{\rm vol}_g(X)^{-1+p/q}\int_X(1_X,f)^*\Phi\le \mathcal{E}_q(f)^{p/q}
\end{align*}
for any $q\ge p$ and $f\in C^\infty(X,Y)$.
\label{prop holder}
\end{prop}
\subsection{Holomorphic maps}
Here, assume that $X,Y$ are complex manifolds
and $g,h$ are K\"ahler metrics.
Let $m=\dim_\mathbb{C} X$ and $n=\dim_\mathbb{C} Y$.
Then we have the decomposition
\begin{align*}
T^*X\otimes \mathbb{C}&=\Lambda^{1,0}T^*X\oplus\Lambda^{0,1}T^*X,\\
TY\otimes \mathbb{C}&=T^{1,0}Y\oplus T^{0,1}Y,
\end{align*}
accordingly the derivative $df\in \Gamma(T^*X\otimes f^*TY)$ is decomposed into
\begin{align*}
df&=(\partial f)^{1,0}+(\partial f)^{0,1}+(\partialb f)^{1,0}+(\partialb f)^{0,1}\\
&\in (\Lambda^{1,0}T^*X\otimes T^{1,0}Y)
\oplus (\Lambda^{1,0}T^*X\otimes T^{0,1}Y)\\
&\quad\quad\oplus (\Lambda^{0,1}T^*X\otimes T^{1,0}Y)
\oplus (\Lambda^{0,1}T^*X\otimes T^{0,1}Y).
\end{align*}
Since $df$ is real, we have
\begin{align*}
\overline{(\partial f)^{1,0}}=(\partialb f)^{0,1},
\quad\overline{(\partial f)^{0,1}}=(\partialb f)^{1,0}.
\end{align*}
Denote by $\omega_g,\omega_h$ the
K\"ahler forms of $g,h$, respectively,
then the volume form is given by
${\rm vol}_g=\frac{1}{m!}\omega_g^m$.
The following observation was given by Lichnerowicz.
\begin{thm}[{\cite{Lichnerowicz1969}}]
For any smooth map $f\colon X\to Y$, we have
\begin{align*}
\omega_g^{m-1}\wedge f^*\omega_h
&=(m-1)!(|(\partial f)^{1,0}|^2-|(\partialb f)^{1,0}|^2){\rm vol}_g,\\
|df|^2&=2|(\partial f)^{1,0}|^2+2|(\partialb f)^{1,0}|^2.
\end{align*}
In particular, we have
\begin{align*}
\mathcal{E}_2(f)\ge \frac{2}{(m-1)!}\int_X\omega_g^{m-1}\wedge f^*\omega_h
\end{align*}
and the equality holds
iff $f$ is holomorphic.
\label{thm Lich}
\end{thm}
Now, we consider $\omega_g^{m-1}\wedge \omega_h\in \Omega^{2m}(X\times Y)$.
The first two equalities in Theorem \ref{thm Lich}
imply that $\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h$
is a $\sigma_2$-calibration.
Moreover, the second statement implies that
$f$ is a $(\sigma_2,\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h)$-calibrated map iff $f$ is holomorphic.
One can also see that
$f$ is a $(\sigma_2,-\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h)$-calibrated map iff $f$ is anti-holomorphic.
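As a quick consistency check (our own computation, under the assumption $(Y,h)=(X,g)$ and $f=1_X$): since $|d(1_X)|^2={\rm tr}_g(g)=2m$ and $(\partialb\, 1_X)^{1,0}=0$, the second identity in Theorem \ref{thm Lich} gives $|(\partial\, 1_X)^{1,0}|^2=m$, so both sides of the first identity equal $\omega_g^m=m!\,{\rm vol}_g$. Consequently
\[ \mathcal{E}_2(1_X)=2m\,{\rm vol}_g(X)=\frac{2}{(m-1)!}\int_X\omega_g^{m-1}\wedge\omega_g, \]
and equality holds in the energy bound, as expected for the holomorphic map $1_X$.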
\subsection{Calibrated submanifolds}
In this subsection, we describe the relation between
calibrated submanifolds in the sense of \cite{harvey1982calibrated} and
calibrated maps.
We assume $(Y^n,h)$ is a Riemannian manifold.
\begin{definition}[\cite{harvey1982calibrated}]
\normalfont
For an integer $0<m<n$,
$\psi
\in\Omega^m(Y)$ is a {\it calibration}
if $d\psi=0$ and
\begin{align*}
\psi|_V\le {\rm vol}_{h|_V}
\end{align*}
for any $y\in Y$ and
$m$-dimensional oriented
subspace $V\subset T_yY$.
Here, $h|_V$ is the induced metric on $V$ and
${\rm vol}_{h|_V}$ is its volume form whose orientation
is compatible with the one equipped with $V$.
Moreover, an oriented submanifold
$X\subset Y$ is a {\it calibrated submanifold}
if
\begin{align*}
\psi|_{T_xX}={\rm vol}_{h|_{T_xX}}
\end{align*}
for any $x\in X$.
\label{def HL cal}
\end{definition}
Now, if $X$ is an oriented manifold
with a volume form ${\rm vol}\in\Omega^m(X)$,
then every linear map
$A\colon T_xX\to T_yY$
can be regarded as an $n\times m$-matrix
by taking a
basis $e_1,\ldots,e_m$ of $T_xX$ with ${\rm vol}_x(e_1,\ldots,e_m)=1$ and an orthonormal basis of $T_yY$.
Then $\sqrt{\det( {}^tA\cdot A)}$ does not
depend on the choice of these bases.
Therefore, for $f\in C^\infty(X,Y)$, we can define
the energy density
$\tau_m(f)(x):=\sqrt{\det({}^tdf_x\cdot df_x)}$ and the energy
$\mathcal{E}_{\tau_m}(f):=\int_X\tau_m(f){\rm vol}$.
\begin{prop}
Let $(X,{\rm vol})$ be an oriented manifold equipped with a volume form and $\psi\in\Omega^m(Y)$ be closed.
Assume that $\dim_\mathbb{R} X=m<n$ and
denote by $\pi_Y\colon X\times Y\to Y$ the
natural projection.
Then $\psi$ is a calibration iff
$\pi_Y^*\psi\in\Omega^m(X\times Y)$
is a $\tau_m$-calibration.
Moreover, for any embedding $f\colon X\to Y$,
the following conditions are equivalent.
\begin{itemize}
\setlength{\parskip}{0cm}
\setlength{\itemsep}{0cm}
\item[$({\rm i})$]
$f(X)$ is a calibrated submanifold, where the orientation of $f(X)$ is determined such that $f$ preserves the orientation.
\item[$({\rm ii})$] $f$ is
a $(\tau_m,\pi_Y^*\psi)$-calibrated map.
\end{itemize}
\end{prop}
\begin{proof}
Note that $(1_X,f)^*(\pi_Y^*\psi)=f^*\psi$
and
$\tau_m(f){\rm vol}={\rm vol}_{f^*h}$.
Hence $\psi$ is a calibration iff
$\pi_Y^*\psi\in\Omega^m(X\times Y)$
is a $\tau_m$-calibration.
Moreover,
suppose that $f$ is an embedding.
Then $f$ is a $(\tau_m,\pi_Y^*\psi)$-calibrated map iff $f(X)$ is a calibrated submanifold.
\end{proof}
\subsection{Fibrations}
Let $(X^m,g)$ be an oriented Riemannian manifold and
$Y^n$ be a smooth manifold equipped with a volume form
${\rm vol}_Y\in\Omega^n(Y)$.
Here, we suppose $n< m$ and
let
$\varphi\in\Omega^{m-n}(X)$ be a calibration on $(X,g)$
in the sense of Definition \ref{def HL cal}.
Fixing an orthonormal basis of $T_xX$
and a basis $e_1',\ldots,e_n'\in T_yY$
with ${\rm vol}_Y(e_1',\ldots,e_n')=1$, we can regard
a linear map $A\colon T_xX\to T_yY$ as
an $n\times m$-matrix.
Then the value of $\sqrt{\det(A\cdot{}^tA)}$
does not depend on the choice of the above bases.
For a smooth map $f\colon X\to Y$, put
$\tilde{\tau}_{m,n}(f)|_x:=\sqrt{\det(df_x\cdot{}^tdf_x)}$
and $\Phi:={\rm vol}_Y\wedge \varphi$.
Put
\begin{align*}
X_{\rm reg}:=\{ x\in X|\, x\mbox{ is a regular point of }f\}.
\end{align*}
Note that $X_{\rm reg}$ is open in $X$.
If $x\in X_{\rm reg}$,
we have the orthogonal decomposition $T_xX={\rm Ker}(df_x)\oplus H$ and $df_x|_H\colon H\to T_{f(x)}Y$ is a linear
isomorphism.
Put $y=f(x)$ and suppose that $f^{-1}(y)$ is a calibrated submanifold
with respect to the suitable orientation.
We say that $df_x$ is {\it orientation preserving}
if there is a basis
$v_1,\ldots,v_m$ of $T_xX$ such that
\begin{align*}
v_1,\ldots,v_n&\in H,
\quad {\rm vol}_Y(df_x(v_1),\ldots,df_x(v_n))>0,\\
v_{n+1},\ldots,v_m&\in {\rm Ker}(df_x),\quad
\varphi_x(v_{n+1},\ldots,v_m)>0,\\
{\rm vol}_g(v_1,\ldots,v_m)&>0.
\end{align*}
\begin{prop}
$\Phi$ is a $\tilde{\tau}_{m,n}$-calibration.
Moreover, a smooth map $f\colon X\to Y$
is a $(\tilde{\tau}_{m,n},\Phi)$-calibrated map
iff
\begin{itemize}
\setlength{\parskip}{0cm}
\setlength{\itemsep}{0cm}
\item[$({\rm i})$]
$f^{-1}(y)\cap X_{\rm reg}$ is a calibrated submanifold with respect to $\varphi$ and the suitable orientation for any $y\in Y$,
\item[$({\rm ii})$]
$df_x$ is orientation preserving for any $x\in X_{\rm reg}$.
\end{itemize}
\end{prop}
\begin{proof}
If $x\in X$ is a critical point of $f$, then we can see
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x=0.
\end{align*}
Fix a regular point $x$ and an oriented
orthonormal basis
$e_1,\ldots,e_m\in T_xX$ such that
$e_{n+1},\ldots,e_m\in {\rm Ker}(df_x)$.
Then we have
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m)
&={\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n))
\varphi(e_{n+1},\ldots,e_m),\\
\tilde{\tau}_{m,n}(f)|_x&=
\left| {\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n))\right|.
\end{align*}
Since $\varphi$ is a calibration, we have
$\varphi(\pm e_{n+1},e_{n+2},\ldots,e_m)\le 1$,
hence $|\varphi(e_{n+1},\ldots,e_m)|\le 1$.
Therefore,
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m)
&\le \left| {\rm vol}_Y(df_x(e_1),\ldots,df_x(e_{n}))\right|
= \tilde{\tau}_{m,n}(f)|_x,
\end{align*}
which implies that
$\Phi$ is a $\tilde{\tau}_{m,n}$-calibration.
Next we consider the condition
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x,
\end{align*}
where $x$ is a regular point of $f$.
In this case we have the
orthogonal decomposition
$T_xX={\rm Ker}(df_x)\oplus H$,
where $H$ is an $n$-dimensional subspace.
We can take an orthonormal basis
$e_1,\ldots,e_m\in T_xX$ such that
\begin{align*}
e_1,\ldots,e_n&\in H,\\
e_{n+1},\ldots,e_m&\in {\rm Ker}(df_x),\\
a:={\rm vol}_Y(df_x(e_1),\ldots,df_x(e_{n}))&>0,\\
{\rm vol}_g(e_1,\ldots,e_m)&>0.
\end{align*}
Then we have
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m)
&=a
\varphi(e_{n+1},\ldots,e_m),\\
\tilde{\tau}_{m,n}(f){\rm vol}_g(e_1,\ldots,e_m)&=|a|=a.
\end{align*}
Therefore, we have
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x
&=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x
\end{align*}
iff $\varphi(e_{n+1},\ldots,e_m)=1$.
Now we have taken $x\in X_{\rm reg}$ arbitrarily,
hence we have
\begin{align*}
f^*{\rm vol}_Y\wedge \varphi
&=\tilde{\tau}_{m,n}(f){\rm vol}_g
\end{align*}
iff $f^{-1}(y)\cap X_{\rm reg}$ is a calibrated submanifold
for any $y\in Y$
and $df_x$ is orientation preserving for all $x\in X_{\rm reg}$.
\end{proof}
\subsection{Totally geodesic maps between tori}\label{subsec tori}
Let $\mathbb{T}^n=\mathbb{R}^n/\mathbb{Z}^n$ be the $n$-dimensional torus
and we consider smooth maps from $\mathbb{T}^m$ to $\mathbb{T}^n$.
Let $G=(g_{ij})\in M_m(\mathbb{R})$ and $H=(h_{ij})\in M_n(\mathbb{R})$
be positive symmetric matrices.
Denote by $x=(x^1,\ldots,x^m)$ and
$y=(y^1,\ldots,y^n)$ the Cartesian coordinate
on $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively,
then we have closed $1$-forms $dx^i\in\Omega^1(\mathbb{T}^m)$
and $dy^i\in\Omega^1(\mathbb{T}^n)$.
We define the flat Riemannian metrics
$g=\sum_{i,j}g_{ij}dx^i\otimes dx^j$ on $\mathbb{T}^m$
and $h=\sum_{i,j}h_{ij}dy^i\otimes dy^j$ on $\mathbb{T}^n$.
For a smooth map $f\colon \mathbb{T}^m\to \mathbb{T}^n$,
we have the pullback $f^*\colon H^1(\mathbb{T}^n,\mathbb{R})\to H^1(\mathbb{T}^m,\mathbb{R})$.
Here, since
\begin{align*}
H^1(\mathbb{T}^m,\mathbb{Z})&={\rm span}_\mathbb{Z}\{ [dx^1],\ldots,[dx^m]\},\\
H^1(\mathbb{T}^n,\mathbb{Z})&={\rm span}_\mathbb{Z}\{ [dy^1],\ldots,[dy^n]\},
\end{align*}
there is $P=(P_i^j)\in M_{m,n}(\mathbb{Z})$ such that
$f^*[dy^j]=\sum_i P_i^j[dx^i]$.
The matrix $P$ is determined by the homotopy class of $f$.
Now, let $*_g$ be the Hodge star operator of $g$
and put
\begin{align}
\Phi:=\sum_{i,j,k}h_{jk} P_i^j *_gdx^i\wedge dy^k
\in \Omega^m(\mathbb{T}^m\times \mathbb{T}^n).\label{eq phi on torus}
\end{align}
Then we can check that
\begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi
&=\sum_{i,j,k,l}h_{jk} P_i^jP_l^k\int_{\mathbb{T}^m} *_gdx^i\wedge dx^l\\
&=\sum_{i,j,k,l}h_{jk} P_i^jP_l^k g^{il}{\rm vol}_g(\mathbb{T}^m)\\
&={\rm tr}({}^tPG^{-1}PH){\rm vol}_g(\mathbb{T}^m)=:\| P\|^2{\rm vol}_g(\mathbb{T}^m)\ge 0.
\end{align*}
Consequently, by the positivity of $G^{-1}$ and $H$,
$\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi=0$ iff $P=0$.
\begin{prop}
Assume that $f^*\colon H^1(\mathbb{T}^n,\mathbb{R})\to H^1(\mathbb{T}^m,\mathbb{R})$
is not the zero map.
Then $\| P\|^{-1}\Phi$ is a $\sigma_1$-calibration and
$f$ is a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map
if $f(x)={}^tPx+a$ for some $a\in \mathbb{T}^n$.
Moreover, $f$ minimizes $\mathcal{E}_2$ in its homotopy class
iff $f(x)={}^tPx+a$ for some $a\in \mathbb{T}^n$.
\label{prop lower torus}
\end{prop}
\begin{proof}
We fix $x\in\mathbb{T}^m$ and put $df_x:=A=(A_i^j)\in M_{n,m}(\mathbb{R})$,
and show $(1_{\mathbb{T}^m},f)^*\Phi\le \sigma_1(f){\rm vol}_g$ at $x$.
Since
\begin{align*}
(1_{\mathbb{T}^m},f)^*\Phi|_x
&=\sum_{i,j,k,l}h_{jk} P_i^jA_l^k *_gdx^i\wedge dx^l|_x\\
&=\left(\sum_{i,j,k,l}h_{jk} P_i^jA_l^k g^{il}\right){\rm vol}_g|_x\\
&=\left({\rm tr}({}^tPG^{-1}AH)\right){\rm vol}_g|_x.
\end{align*}
Here, by the Cauchy-Schwarz inequality for the trace pairing,
we have
\begin{align*}
{\rm tr}({}^tPG^{-1}AH)\le \| P\|\,\| A\|,
\qquad \| A\|^2:={\rm tr}({}^tAG^{-1}AH)=\sigma_1(f)(x)^2,
\end{align*}
and the equality holds iff $A=\lambda P$ for a
constant $\lambda\ge 0$.
Therefore, we have
\begin{align*}
(1_{\mathbb{T}^m},f)^*\Phi
&\le \| P\|\sigma_1(f){\rm vol}_g,
\end{align*}
which implies that $\| P\|^{-1}\Phi$ is
a $\sigma_1$-calibration.
Moreover, the equality holds iff
$df_x=\lambda_x \cdot {}^t P$ for some $\lambda_x\ge 0$.
Therefore, $f(x)={}^tPx+a$ for some $a\in\mathbb{T}^n$ is
a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map.
For any $f\in C^\infty(\mathbb{T}^m,\mathbb{T}^n)$, we have
\begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi
&\le \| P\|\int_{\mathbb{T}^m}\sigma_1(f){\rm vol}_g
\le \| P\|\sqrt{{\rm vol}_g(\mathbb{T}^m)\mathcal{E}_2(f)}
\end{align*}
by the Cauchy-Schwarz inequality.
Moreover, we have the equality
\begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi=\| P\|\sqrt{{\rm vol}_g(\mathbb{T}^m)\mathcal{E}_2(f)}
\end{align*}
iff
$df_x=\lambda_x\cdot {}^tP$ for some $\lambda_x\ge 0$ and
$\sigma_1(f)$ is a constant function on $\mathbb{T}^m$.
Since $\sigma_1(f)(x)=\lambda_x\| P\|$,
if $\sigma_1(f)$ is constant, then
$\lambda_x=\lambda$ is independent of $x$.
Hence we may write $f(x)=\lambda \cdot {}^tPx+a$ for some
$a\in \mathbb{T}^n$.
Moreover, since $f^*=P$ on $H^1(\mathbb{T}^n)$, we have $\lambda=1$.
\end{proof}
In the above proposition, it is not the case that
every $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map
is given by $f(x)={}^tPx+a$ for some $a$.
The following case is a counterexample.
Suppose $m=n=1$ and let $P=1$.
If we put $f(x)=x+\frac{1}{2\pi}\sin (2\pi x)$, then it gives a
smooth map $\mathbb{T}^1\to \mathbb{T}^1$ homotopic to
the identity map.
Then one can easily check that
$f$ is a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map
since $f'(x)\ge 0$.
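For concreteness, we verify this in the case of the standard metrics $g=dx\otimes dx$ and $h=dy\otimes dy$ (our own computation; the choice of standard metrics is made only to keep the formulas short). Then $*_gdx=1$, so \eqref{eq phi on torus} gives $\Phi=dy$ and $\|P\|=1$, and
\[ (1_{\mathbb{T}^1},f)^*\Phi=f^*dy=f'(x)\,dx=\bigl(1+\cos(2\pi x)\bigr)\,dx=\sigma_1(f){\rm vol}_g, \]
where the last equality uses $f'(x)=1+\cos(2\pi x)\ge 0$; hence $f$ is $(\sigma_1,\|P\|^{-1}\Phi)$-calibrated although it is not affine.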
\section{The lower bound of $p$-energy}\label{sec lower bdd}
In this section, we give the lower bound of $p$-energy
in the general situation.
Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds
and assume $X$ is compact and oriented.
Now we have the decomposition
\begin{align*}
\Lambda^kT^*_{(x,y)}(X\times Y)
\cong \bigoplus_{l=0}^k\Lambda^lT^*_xX\otimes \Lambda^{k-l}T^*_yY,
\end{align*}
then denote by $\Omega^{l,k-l}(X\times Y)\subset \Omega^k(X\times Y)$ the set consisting of
smooth sections of $\Lambda^lT^*_xX\otimes \Lambda^{k-l}T^*_yY$.
For $\Phi\in \Omega^k(X\times Y)$, let
$|\Phi_{(x,y)}|$ be the norm with respect to the metric
$g\oplus h$ on $X\times Y$.
\begin{lem}
Let $\Phi\in \Omega^{m-k,k}(X\times Y)$ be closed and
$\sup_{x,y}|\Phi_{(x,y)}|<\infty$.
Then there is a constant $C>0$ depending only on
$\Phi,m,n,k$ such that $C\Phi$ is a $\sigma_k$-calibration.
\label{lem cal k}
\end{lem}
\begin{proof}
Fix $x\in X$ and let
$\{ e_1,\ldots,e_m\}$ and $\{ e_1',\ldots,e_n'\}$
be an orthonormal basis of
$T_xX$ and $T_{f(x)}Y$, respectively.
Put
\begin{align*}
\mathcal{I}_k^m
:=\left\{ I=(i_1,\ldots,i_k)\in\mathbb{Z}^k\left|\,
1\le i_1<\cdots<i_k\le m\right.\right\}.
\end{align*}
For $I=(i_1,\ldots,i_k)\in \mathcal{I}_k^m$,
$J=(j_1,\ldots,j_k)\in\mathcal{I}_k^n$, we write
\begin{align*}
e_I:=e_{i_1}\wedge\cdots\wedge e_{i_k},\quad
e_J':=e_{j_1}'\wedge\cdots\wedge e_{j_k}'.
\end{align*}
Then we have
\begin{align*}
\Phi_{(x,f(x))}=\sum_{I\in\mathcal{I}_k^m,J\in\mathcal{I}_k^n}\Phi_{IJ}(*_ge_I)\wedge e'_J
\end{align*}
for some $\Phi_{IJ}\in\mathbb{R}$
and
\begin{align*}
\left\{ (1_X,f)^*\Phi\right\}_x
&=\sum_{I,J}\Phi_{IJ}(*_ge_I)\wedge df_x^*e'_J.
\end{align*}
If we denote by
$(df_x)_{IJ}$ the $k\times k$ matrix
whose $(p,q)$-component is given by
$h(df_x(e_{i_q}), e'_{j_p})$, then we have
\begin{align*}
(*_ge_I)\wedge df_x^*e'_J
&=\det((df_x)_{IJ}){\rm vol}_g|_x\le k!|df_x|^k{\rm vol}_g|_x,
\end{align*}
therefore,
we can see
\begin{align*}
\left\{ (1_X,f)^*\Phi\right\}_x
&\le \left(\sum_{I,J}|\Phi_{IJ}|\right) k! |df_x|^k{\rm vol}_g|_x.
\end{align*}
Since $|\Phi_{x,f(x)}|^2=\sum_{I,J}|\Phi_{I,J}|^2$,
we have
\begin{align*}
(1_X,f)^*\Phi
&\le k! (\#\mathcal{I}_k^m)(\#\mathcal{I}_k^n)\sup_{x,y}|\Phi_{(x,y)}| \sigma_k(f){\rm vol}_g,
\end{align*}
which implies the assertion.
\end{proof}
For $f\in C^\infty(X,Y)$, denote by $[f^*]^k$
the pullback $H^k(Y,\mathbb{R})\to H^k(X,\mathbb{R})$ of $f$.
For a closed form $\alpha\in\Omega^k(Y)$,
denote by $[\alpha]\in H^k(Y,\mathbb{R})$ its cohomology class.
Put
\begin{align*}
H^k_{\rm bdd}(Y,\mathbb{R})
:=\left\{ [\alpha]\in H^k(Y,\mathbb{R})|\, \alpha\in \Omega^k(Y),\, d\alpha=0,\, \sup_{y\in Y}h(\alpha_y,\alpha_y)<\infty\right\}.
\end{align*}
This is a subspace of $H^k(Y,\mathbb{R})$, and
we have $H^k_{\rm bdd}(Y,\mathbb{R})=H^k(Y,\mathbb{R})$ if $Y$ is compact.
Denote by $[f^*]^k_{\rm bdd}$ the restriction of
$[f^*]^k$ to $H^k_{\rm bdd}(Y,\mathbb{R})$.
Fixing a basis of $H^k(X,\mathbb{R})$ and $H^k_{\rm bdd}(Y,\mathbb{R})$,
we obtain the matrix $P=P([f^*]^k_{\rm bdd})\in M_{N,d}(\mathbb{R})$ of
$[f^*]^k_{\rm bdd}$,
where $d=\dim H^k_{\rm bdd}(Y,\mathbb{R})$
and $N=\dim H^k(X,\mathbb{R})$.
Put $|P|:=\sqrt{{\rm tr}({}^tPP)}$, which may depend on the choice
of bases.
Here, since $d$ may be infinite,
we may have $|P|=\infty$.
\begin{thm}
Let $(X^m,g)$ and $(Y^n,h)$ be Riemannian manifolds
and $X$ be compact and oriented.
For any $1\le k\le m$, there is a constant $C>0$
depending only on $k$,
$(X,g)$, $(Y,h)$ and the basis of
$H^k(X,\mathbb{R})$, $H^k_{\rm bdd}(Y,\mathbb{R})$
such that for any $f\in C^\infty(X,Y)$ we have
\begin{align*}
\mathcal{E}_k(f)\ge C|P([f^*]^k_{\rm bdd})|.
\end{align*}
In particular, if $[f^*]^k_{\rm bdd}$ is a nonzero map,
then the infimum of $\mathcal{E}_k|_{[f]}$ is positive.
\label{thm lower bdd of k-energy}
\end{thm}
\begin{proof}
Take bounded
closed $k$-forms $\beta_1,\ldots,\beta_d\in \Omega^k(Y)$
such that $\{ [\beta_l]\}_l$ is a basis
of $H^k_{\rm bdd}(Y,\mathbb{R})$.
By Hodge theory,
$H^k(X,\mathbb{R})$ is isomorphic to the space of
harmonic $k$-forms as vector spaces.
Therefore, for any basis of $H^k(X,\mathbb{R})$,
there is a corresponding basis
$\alpha_1,\ldots,\alpha_N\in\Omega^k(X)$
of the space of harmonic $k$-forms.
Let $G_{ij}:=\int_X\alpha_i\wedge *_g\alpha_j$;
the matrix $(G_{ij})$ is symmetric and positive definite.
Define $P=(P_{ij})\in M_{N,d}(\mathbb{R})$
by $[f^*]^k_{\rm bdd}([\beta_j])=\sum_iP_{ij}[\alpha_i]$.
If we put
\begin{align*}
\Phi:=\sum_{i,j} P_{ij}\beta_j\wedge (*_g\alpha_i),
\end{align*}
then every
$\beta_j\wedge (*_g\alpha_i)$ is closed and satisfies the
assumption of Lemma \ref{lem cal k},
since $X$ is compact and $\beta_j$ is bounded.
Take the constant $C_{ij}>0$ as in Lemma \ref{lem cal k}.
Here, $C_{ij}$ is depending only on
$m,n,k$ and $\alpha_i,\beta_j$.
Then for any $f\in C^\infty(X,Y)$, we have
\begin{align*}
(1_X,f)^*\left\{ \beta_j\wedge (*_g\alpha_i)\right\}
&\le C_{ij}\sigma_k(f){\rm vol}_g,\\
(1_X,f)^*\Phi&\le \sum_{i,j}C_{ij}|P_{ij}|\sigma_k(f){\rm vol}_g\\
&\le \sqrt{\sum_{i,j}C_{ij}^2}|P|\sigma_k(f){\rm vol}_g,
\end{align*}
hence
\begin{align*}
\mathcal{E}_k(f)\ge \left(\sum_{i,j}C_{ij}^2\right)^{-1/2}
|P|^{-1}\int_X(1_X,f)^*\Phi.
\end{align*}
Moreover, we have
\begin{align*}
\int_X(1_X,f)^*\Phi
&= \sum_{i,j} \int_X P_{ij}f^*\beta_j\wedge (*_g\alpha_i)\\
&=\sum_{i,j} \int_X P_{ij}\sum_kP_{kj}\alpha_k\wedge (*_g\alpha_i)\\
&=\sum_{i,j,k} P_{ij}P_{kj}G_{ki}.
\end{align*}
If we denote by $\lambda>0$ the minimum eigenvalue
of $(G_{ij})_{i,j}$, then we have
$\sum_{i,j,k} P_{ij}P_{kj}G_{ki}\ge \lambda|P|^2$.
Hence we obtain
\begin{align*}
\mathcal{E}_k(f)\ge \lambda\left(\sum_{i,j}C_{ij}^2\right)^{-1/2}
|P|.
\end{align*}
\end{proof}
\begin{rem}
\normalfont
Combining the above theorem
with Proposition \ref{prop holder},
we also obtain a lower bound for $\mathcal{E}_p$
for any $p\ge k$.
\end{rem}
\section{Energy of the identity maps}\label{sec id map}
In this section we consider when the identity map
of a compact oriented Riemannian manifold $X$ minimizes
the energy.
Here, we consider a family of energies.
For Riemannian manifolds $(X^m,g)$, $(Y^n,h)$ and points
$x\in X$, $y\in Y$,
take a linear map $A\colon T_xX\to T_yY$.
Fixing orthonormal bases of $T_xX$ and $T_yY$,
we can regard $A$ as an $n\times m$-matrix.
Denote by $a_1,\ldots ,a_m\in\mathbb{R}_{\ge 0}$
the eigenvalues of ${}^tA\cdot A$, then put
\begin{align*}
|A|_p:=\left( \sum_{i=1}^m a_i^{p/2}\right)^{1/p}
\end{align*}
for $p>0$.
Then $|A|_p$ is independent of
the choice of the orthonormal bases of $T_xX$ and $T_yY$.
For a smooth map $f\colon X\to Y$,
let
\begin{align*}
\sigma_{p,q}(f)|_x&:=|df_x|_p^q,\\
\mathcal{E}_{p,q}(f)&:=\int_X\sigma_{p,q}(f){\rm vol}_g.
\end{align*}
Note that $\sigma_{2,p}=\sigma_p$ and $\mathcal{E}_{2,p}=\mathcal{E}_p$.
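To illustrate these norms with a toy computation of our own: if $m=n=2$ and $A={\rm diag}(2,1)$ with respect to orthonormal bases, then ${}^tA\cdot A={\rm diag}(4,1)$, hence
\[ |A|_1=4^{1/2}+1^{1/2}=3,\qquad |A|_2=(4+1)^{1/2}=\sqrt{5},\qquad |A|_4=(4^2+1^2)^{1/4}=17^{1/4}. \]
In particular, for the identity map we have $d(1_X)_x={\rm id}$, so $|d(1_X)_x|_p=m^{1/p}$, $\sigma_{p,q}(1_X)=m^{q/p}$ and $\mathcal{E}_{p,q}(1_X)=m^{q/p}\,{\rm vol}_g(X)$.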
From now on we consider $(Y,h)=(X,g)$ and a map
$f\colon X\to X$.
Let $1_X$ be the identity map of $X$.
\begin{prop}
If $1_X$ minimizes $\mathcal{E}_{p,q}|_{[1_X]}$,
then it also minimizes $\mathcal{E}_{p',q'}|_{[1_X]}$
for any $p'\ge p$ and $q'\ge q$.
\end{prop}
\begin{proof}
First of all, for any smooth map $f$, we have
\begin{align*}
|df_x|_p&\le m^{1/p-1/p'}| df_x|_{p'},\\
\mathcal{E}_{p,q}(f)&\le m^{q/p-q/p'}{\rm vol}_g(X)^{1-q/q'}
\left( \int_X|df|_{p'}^{q'}{\rm vol}_g\right)^{q/q'},
\end{align*}
by the H\"older's inequality,
which gives $\mathcal{E}_{p',q'}(f)\ge C\mathcal{E}_{p,q}(f)^{q'/q}$
for some constant $C>0$.
Moreover, we have the equality for $f=1_X$.
Therefore, we can see
\begin{align*}
\inf \mathcal{E}_{p',q'}|_{[1_X]}
\ge \inf C\mathcal{E}_{p,q}^{q'/q}|_{[1_X]}
= C\mathcal{E}_{p,q}(1_X)^{q'/q}= \mathcal{E}_{p',q'}(1_X)
\ge \inf \mathcal{E}_{p',q'}|_{[1_X]}.
\end{align*}
\end{proof}
\begin{prop}[{cf.\cite[Lemma 2.2]{hoisington2021}}]
Let $(X,g)$ be a compact oriented Riemannian manifold
of dimension $m$.
Then $1_X$ minimizes $\mathcal{E}_{1,m}$ in its homotopy class.
\end{prop}
\begin{proof}
The proof is essentially given by
\cite[Lemma 2.2]{hoisington2021}.
For any map $f\colon X\to X$, we can see
\begin{align*}
f^*{\rm vol}_g=\det(df){\rm vol}_g
\le m^{-m}\sigma_{1,m}(f){\rm vol}_g.
\end{align*}
Here, the inequality follows from the elementary inequality
\begin{align*}
\frac{\sum_{i=1}^ma_i}{m}\ge \left( \prod_{i=1}^ma_i\right)^{1/m}
\end{align*}
for any $a_i\ge 0$.
Therefore, we can see
\begin{align*}
\mathcal{E}_{1,m}(f)\ge m^m\int_Xf^*{\rm vol}_g.
\end{align*}
Moreover, the equality holds if $f=1_X$.
\end{proof}
Next we consider an analogue of the above proposition.
We assume that $X$ has a nontrivial parallel $k$-form.
Denote by $g_0$ the standard metric on $\mathbb{R}^m$,
which also induces the
metric on $\Lambda^k(\mathbb{R}^m)^*$.
Let $\varphi_0\in\Lambda^k(\mathbb{R}^m)^*$ and fix an orientation
of $\mathbb{R}^m$.
For a $k$-form $\varphi$ and a Riemannian metric $g$ on an
oriented manifold $X$,
we say that {\it
$(g_0,\varphi_0)$ is a local model of $(g,\varphi)$} if
for any $x\in X$ there is an orientation preserving
isometry $I\colon \mathbb{R}^m\to T_xX$
such that $I^*(\varphi|_x)=\varphi_0$.
Denote by $*_{g_0}\colon \Lambda^k(\mathbb{R}^m)^*\to \Lambda^{m-k}(\mathbb{R}^m)^*$ the Hodge star operator induced by the standard metric
and let ${\rm vol}_{g_0}\in\Lambda^m(\mathbb{R}^m)^*$ be
the volume form.
First of all, we show the following proposition for the local model
$(g_0,\varphi_0)$.
\begin{prop}
Let $(g_0,\varphi_0)$ be as above.
Assume that
$\left| \iota_u\varphi_0\right|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$.
We have
\begin{align*}
A^*\varphi_0\wedge *_{g_0} \varphi_0\le \frac{|\varphi_0|_{g_0}^2}{m}|A|_k^k{\rm vol}_{g_0}
\end{align*}
for any $A\in M_m(\mathbb{R})$.
Moreover, if $A=\lambda T$ for $\lambda\in\mathbb{R}$, $T\in O(m)$
and $A^*\varphi_0=\lambda'\varphi_0$ for some
$\lambda'\ge 0$, then we have the equality.
\label{prop ptwise ineq}
\end{prop}
\begin{proof}
For any $A$, we can take oriented
orthonormal bases
$\{ e_1,\ldots,e_m\}$ and $\{ e'_1,\ldots,e'_m\}$
of $(\mathbb{R}^m)^*$
such that $A^*e'_i=a_ie_i$ for some $a_i\in\mathbb{R}$.
We put
\begin{align*}
\varphi_0=\sum_{I\in\mathcal{I}_k^m}F_Ie_I=\sum_{I\in\mathcal{I}_k^m}F'_Ie'_I
\end{align*}
for some $F_I,F'_I\in\mathbb{R}$.
Now, put $a_I:=a_{i_1}\cdots a_{i_k}$
for $I=(i_1,\ldots,i_k)\in\mathcal{I}_k^m$.
Then we have
$A^*\varphi_0=\sum_IF'_Ia_Ie_I$ and
\begin{align*}
A^*\varphi_0\wedge *_{g_0}\varphi_0
= g_0( A^*\varphi_0,\varphi_0){\rm vol}_{g_0}
&=\sum_IF_IF'_Ia_I{\rm vol}_{g_0}\\
&\le \sum_I|F_IF'_I||a_I|{\rm vol}_{g_0}.
\end{align*}
If we put $\{ I\}:=\{ i_1,\ldots,i_k\}$, then
\begin{align*}
|a_I|=\left( |a_{i_1}|^k\cdots |a_{i_k}|^k\right)^{1/k}
\le \frac{1}{k} \sum_{j\in \{ I\}}|a_j|^k,
\end{align*}
therefore, we obtain
\begin{align*}
\sum_I|F_IF'_I||a_I|
&\le \frac{1}{k}\sum_I |F_IF'_I|\sum_{j\in \{ I\}}|a_j|^k\\
&= \frac{1}{k}\sum_{j=1}^m |a_j|^k\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I|.
\end{align*}
Denote by $\hat{g}_0\colon (\mathbb{R}^m)^*\to \mathbb{R}^m$
the isomorphism induced by the metric $g_0$.
Put
\begin{align*}
\varphi_1:=\sum_{I\in\mathcal{I}_k^m}|F_I|e_I,\quad
\varphi_2:=\sum_{I\in\mathcal{I}_k^m}|F'_I|e_I
\end{align*}
and define an orthogonal matrix
$U\colon \mathbb{R}^m\to \mathbb{R}^m$ by $U\circ \hat{g}_0(e_j)=\hat{g}_0(e'_j)$.
Now we can see
\begin{align*}
\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I|
=g_0\left( \iota_{\hat{g}_0(e_j)}\varphi_1,\iota_{\hat{g}_0(e_j)}\varphi_2\right)
&\le \left| \iota_{\hat{g}_0(e_j)}\varphi_1\right|_{g_0}
\cdot \left| \iota_{\hat{g}_0(e_j)}\varphi_2\right|_{g_0}\\
&=\left| \iota_{\hat{g}_0(e_j)}\varphi_0\right|_{g_0}
\cdot \left| \iota_{\hat{g}_0(e_j)}(U^*\varphi_0)\right|_{g_0}
\end{align*}
and
\begin{align*}
\left| \iota_{\hat{g}_0(e_j)}(U^*\varphi_0)\right|_{g_0}
=\left| U^*(\iota_{U\circ \hat{g}_0(e_j)}\varphi_0)\right|_{g_0}
=\left| \iota_{U\circ\hat{g}_0(e_j)}\varphi_0\right|_{g_0}.
\end{align*}
Then by the assumption,
we can see that $C=\left| \iota_{\hat{g}_0(e_j)}\varphi_0\right|_{g_0}=\left| \iota_{U\circ\hat{g}_0(e_j)}\varphi_0\right|_{g_0}$
is independent of $j$, therefore, we have
$\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I|\le C^2$
and
\begin{align*}
A^*\varphi_0\wedge *_{g_0}\varphi_0
\le \frac{C^2}{k}\sum_{j=1}^m |a_j|^k{\rm vol}_{g_0}=\frac{C^2}{k}|A|_k^k{\rm vol}_{g_0}.
\end{align*}
In the above inequalities, equality holds if $A=1_m$,
which determines the constant $C$.
Moreover, we can also check that
the equality holds
if $A=\lambda T$, where $\lambda\in\mathbb{R}$, $T\in O(m)$
and $A^*\varphi_0=\lambda'\varphi_0$ for some $\lambda'\ge 0$.
\end{proof}
\begin{prop}
Let $(X^m,g)$ be a compact oriented Riemannian
manifold and $\varphi\in \Omega^k(X)$ be
a harmonic form.
Assume that there is a local model $(g_0,\varphi_0)$
of $(g,\varphi)$ and $|\iota_u\varphi_0|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$.
Denote by ${\rm pr}_i\colon X\times X\to X$
the projection to $i$-th component for
$i=1,2$.
Then $\Phi=m|\varphi_0|_{g_0}^{-2}{\rm pr}_2^*\varphi\wedge {\rm pr}_1^*(*_g\varphi)$
is a $\sigma_{k,k}$-calibration.
Moreover, any isometry
$f\colon X\to X$ with $f^*\varphi=\varphi$
is $(\sigma_{k,k},\Phi)$-calibrated.
\label{prop id calib}
\end{prop}
\begin{proof}
$\Phi$
is a $\sigma_{k,k}$-calibration iff
\begin{align*}
f^*\varphi\wedge *_g\varphi
\le \frac{|\varphi_0|_{g_0}^2}{m}|df|_k^k{\rm vol}_g.
\end{align*}
By putting $A=df_x$ and identifying
$\mathbb{R}^m\cong T_xX$, this is equivalent to
the inequality in Proposition \ref{prop ptwise ineq}.
Moreover, the equality holds if
$(df_x)^*\varphi|_x = \varphi|_x$
for all $x\in X$ and $df_x$ is an isometry.
\end{proof}
Next we have to consider when the assumption for
$(g_0,\varphi_0)$ is satisfied.
If $G\subset SO(m)$ is a closed subgroup,
then the linear action of $SO(m)$ on $\mathbb{R}^m$
induces the action of $G$ on $\mathbb{R}^m$.
Similarly, since $SO(m)$ acts on $\Lambda^k(\mathbb{R}^m)^*$
for all $k$,
$G$ also acts on them.
Here, $\mathbb{R}^m$ is {\it irreducible as a $G$-representation}
if any subspace $W\subset \mathbb{R}^m$ which is closed
under the $G$-action is equal to $\mathbb{R}^m$ or $\{ 0\}$.
For $\varphi_0\in\Lambda^k(\mathbb{R}^m)^*$,
denote by ${\rm Stab}(\varphi_0)\subset SO(m)$
the stabilizer of $\varphi_0$.
\begin{lem}
Let $G$ be a closed subgroup of $SO(m)$
and assume that $\mathbb{R}^m$ is irreducible
as a $G$-representation.
Moreover, assume that
$G\subset {\rm Stab}(\varphi_0)$.
Then $|\iota_u\varphi_0|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$.
\label{lem holonomy irred}
\end{lem}
\begin{proof}
Define a linear map $\Psi\colon \mathbb{R}^m\to \Lambda^{k-1}(\mathbb{R}^m)^*$
by $\Psi(u):=\iota_u\varphi_0$; then we can see that
$\Psi$ is a $G$-equivariant map since
the $G$-action preserves $\varphi_0$.
Since the $SO(m)$-action on $\Lambda^{k-1}(\mathbb{R}^m)^*$ preserves the inner product, we can see
\begin{align*}
g_0( A\Psi(u),A\Psi(v) )
=g_0( \Psi(u),\Psi(v) )
\end{align*}
for any $A\in G$ and $u,v\in\mathbb{R}^m$.
Moreover, the left-hand-side is equal to
$g_0( \Psi( Au),\Psi( Av) )$ since $\Psi$
is $G$-equivariant.
Now, let $e_1,\ldots, e_m$ be the standard orthonormal
basis of $\mathbb{R}^m$, and define the symmetric matrix
$H=(H_{ij})_{i,j}$ by
$H_{ij}:=g_0(\Psi(e_i),\Psi(e_j))$.
Then by the above argument we have
${}^tAHA=H$.
Let $\lambda\in \mathbb{R}$ be any eigenvalue of $H$
and denote by $V(\lambda)\subset\mathbb{R}^m$ the
eigenspace associated with $\lambda$.
Then we can see that $V(\lambda)$ is closed under the $G$-action,
hence we have $V(\lambda)=\mathbb{R}^m$ by the irreducibility.
This means $H=\lambda\cdot 1_m$, which implies
\begin{align*}
|\Psi( u)|_{g_0}^2=\lambda |u|_{g_0}^2
\end{align*}
for all $u\in\mathbb{R}^m$; in particular, $|\iota_u\varphi_0|_{g_0}$ is independent of $u$ with $|u|_{g_0}=1$.
\end{proof}
Let $(X^m,g)$ be an oriented Riemannian manifold
and denote by ${\rm Hol}_g\subset SO(m)$ the
holonomy group.
We consider
$(X,g,\varphi,G,g_0,\varphi_0)$,
where $\varphi\in\Omega^k(X)$ is closed,
$(g_0,\varphi_0)$ is a local model of $(g,\varphi)$
and $G$ is a closed subgroup of $SO(m)$
such that ${\rm Hol}_g\subset G\subset {\rm Stab}(\varphi_0)$.
The following are examples.
\begin{table}[h]
\caption{Examples of $(X,g,\varphi,G,g_0,\varphi_0)$}
\label{table_holonomy}
\centering
\begin{tabular}{lccl}
\hline
$(X,g,\varphi)$ & $m$ & $G$ & $k$ \\
\hline \hline
K\"ahler manifold & $2q$ & $U(q)$ & $2$\\
\hline
quaternionic K\"ahler manifold & $4q\ge 8$ & $Sp(q)\cdot Sp(1)$ & $4$\\
\hline
$G_2$ manifold & $7$ & $G_2$ & $3$\\
\hline
$Spin(7)$ manifold & $8$ & $Spin(7)$ & $4$\\
\hline
\end{tabular}\label{table holonomy}
\end{table}
We can apply Proposition \ref{prop id calib} and
Lemma
\ref{lem holonomy irred}
to the above cases and obtain the following result.
\begin{thm}
Let $(X,g,\varphi)$ be an oriented compact Riemannian
manifold whose geometric structure is one of Table
\ref{table holonomy}
and let $\Phi$ be as in Proposition \ref{prop id calib}.
Then the identity map $1_X$ is a
$(\sigma_{k,k},\Phi)$-calibrated map.
In particular, $1_X$ minimizes
$\mathcal{E}_{k,k}$ in its homotopy class. \label{thm id calib}
\end{thm}
\section{Intersection of Smooth maps}\label{sec intersection}
In \cite{CrokeFathi1990},
Croke and Fathi introduced a homotopy invariant
of a smooth map $f\colon X\to Y$ which gives a lower
bound for the $2$-energy $\mathcal{E}_2$.
In this section, we compare our invariant with the invariant
in \cite{CrokeFathi1990}.
First of all, we review the intersection of a
smooth map introduced in \cite{CrokeFathi1990}.
Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds
and suppose $X$ is compact.
Here, we do not assume $X$ is oriented,
and we use the volume measure $\mu_g$ of $g$
instead of the volume form.
Croke and Fathi defined the following quantity
\begin{align*}
i_f(g,h)=\lim_{t\to \infty}\frac{1}{t}\int_{S_g(X)}\phi_t(v)d
{\rm Liou}_g(v)
\end{align*}
for a smooth map $f\colon X\to Y$ and called it
{\it the intersection of $f$}.
Here, ${\rm Liou}_g$ is the Liouville measure on
the unit tangent bundle $S_g(X)$ and
$\phi_t^f(v)=\phi_t(v)$ is the minimum length of
all paths in $Y$ homotopic with the fixed endpoints
to
\begin{align*}
s\mapsto f(\gamma_v(s)),\quad 0\le s\le t,
\end{align*}
where $\gamma_v$ is the geodesic
with initial velocity $\gamma_v'(0)=v\in S_g(X)$.
\begin{thm}[\cite{CrokeFathi1990}]
For a smooth map $f\colon X\to Y$,
the intersection $i_f(g,h)$ is homotopy invariant,
that is, $i_f(g,h)=i_{f'}(g,h)$ if $[f']=[f]$.
Moreover, for any $f$, we have
\begin{align*}
\int_X \sigma_2(f)d\mu_g
\ge\frac{m}{V(S^{m-1})^2\mu_g(X)} i_f(g,h)^2,
\end{align*}
where $V(S^{m-1})$ is the volume of
the unit sphere $S^{m-1}$ in $\mathbb{R}^m$.
\label{thm croke fathi}
\end{thm}
Now, we introduce a variant of $i_f(g,h)$ and
improve the above theorem.
We put
\begin{align*}
j_f(g,h):=\lim_{t\to \infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d
{\rm Liou}_g(v).
\end{align*}
\begin{thm}
For a smooth map $f\colon X\to Y$,
$j_f(g,h)$ is homotopy invariant.
Moreover, for any $f$, we have
\begin{align*}
\int_X \sigma_2(f)d\mu_g
\ge\frac{m}{V(S^{m-1})} j_f(g,h),
\end{align*}
where the equality holds iff
the image of every geodesic in $X$ under $f$
minimizes the length in its homotopy class with fixed endpoints.
\label{thm croke fathi2}
\end{thm}
\begin{proof}
The proof is parallel to that of Theorem
\ref{thm croke fathi}.
The homotopy invariance of
$j_f(g,h)$ is shown in the same way as that of $i_f(g,h)$.
See the proof of \cite[Lemma 1.3]{CrokeFathi1990}.
Next we show the inequality.
Here we can see
\begin{align*}
\int_X\sigma_2(f)d\mu_g
&=\frac{m}{V(S^{m-1})}\int_{S_g(X)} | df(v) |_h^2d{\rm Liou}_g(v).
\end{align*}
For $s\ge 0$, let $g_s\colon S_g(X)\to S_g(X)$ be the
geodesic flow.
Since $g_s$ preserves the Liouville measure,
we can see
\begin{align*}
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v)
&=\frac{1}{t}\int_{S_g(X)}\left(\int_0^t|df(g_sv)|_h^2
ds\right)d{\rm Liou}_g(v)\\
&=\frac{1}{t}\int_{S_g(X)}\mathcal{E}_2(f\circ\gamma_v|_{[0,t]})d{\rm Liou}_g(v),
\end{align*}
where $\mathcal{E}_2$ is the $2$-energy of the curves in
$(Y,h)$. If $L(c)$ is the length of a curve $c\colon [a,b]\to Y$,
then we have
\begin{align*}
\mathcal{E}_2(c)
=\int_a^b|c'(s)|_h^2ds
&\ge \frac{1}{b-a}\left(\int_a^b|c'(s)|_hds\right)^2\\
&= \frac{1}{b-a}L(c)^2,
\end{align*}
and $L(f\circ\gamma_v|_{[0,t]})\ge \phi_t(v)$ by the definition of $\phi_t$,
therefore
\begin{align*}
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v)
&\ge\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)
\end{align*}
for any $t>0$.
Consequently, we have the second assertion by considering $t\to \infty$.
Finally, we consider the condition when
\begin{align*}
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v)
&= \lim_{t\to\infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)
\end{align*}
holds.
To consider it, we show
\begin{align}
\lim_{t\to\infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)
=\inf_{t>0}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v).
\label{eq lim inf}
\end{align}
By \cite[Lemma 1.2]{CrokeFathi1990},
we have
\begin{align*}
\phi_{t+t'}(v)\le \phi_{t'}(g_tv)+\phi_t(v)
\end{align*}
for any $t,t'\ge 0$. Then by combining the Cauchy-Schwarz
inequality, we have
\begin{align*}
\int_{S_g(X)}\phi_{t+t'}(v)^2d{\rm Liou}_g(v)
&\le
\int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v)\\
&\quad\quad
+2\sqrt{
\int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v)
\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)
}\\
&\quad\quad
+\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v).
\end{align*}
Since the Liouville measure is invariant
under the geodesic flow,
we can see
$\int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v)
=\int_{S_g(X)}\phi_{t'}(v)^2d{\rm Liou}_g(v)$,
hence
\begin{align*}
\int_{S_g(X)}\phi_{t+t'}(v)^2d{\rm Liou}_g(v)
&\le
\left(
\sqrt{ \int_{S_g(X)}\phi_{t'}(v)^2d{\rm Liou}_g(v)}
+
\sqrt{ \int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)}
\right)^2.
\end{align*}
If we put
\begin{align*}
P_t:=\sqrt{ \int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)},
\end{align*}
then we have $P_{t+t'}\le P_t+P_{t'}$,
hence, by this subadditivity,
\begin{align*}
\inf_{t>0}\frac{P_t}{t}
=\lim_{t\to\infty}\frac{P_t}{t}.
\end{align*}
Thus we obtain \eqref{eq lim inf}.
Now, suppose
\begin{align*}
\int_X \sigma_2(f)d\mu_g
= \frac{m}{V(S^{m-1})} j_f(g,h).
\end{align*}
By the above argument,
we can see that $f\circ \gamma_v|_{[0,t]}$ is
a geodesic for any $v\in S_g(X)$ and $t>0$,
and $L(f\circ \gamma_v|_{[0,t]})$
gives the minimum of
\begin{align*}
\left\{ L(c)|\, c\mbox{ is homotopic with the fixed endpoints to }f\circ \gamma_v|_{[0,t]}\right\}.
\end{align*}
\end{proof}
\begin{rem}
\normalfont
By the Cauchy-Schwarz inequality, we have
\begin{align*}
j_f(g,h)\ge \frac{i_f(g,h)^2}{\mu_g(X)V(S^{m-1})},
\end{align*}
therefore, the inequality in Theorem \ref{thm croke fathi2}
implies the inequality in
Theorem \ref{thm croke fathi}.
\end{rem}
Next we compute $j_f(g,h)$ in the case of flat tori,
and compare with the lower bound obtained by Proposition \ref{prop lower torus}.
Let $(\mathbb{T}^m,g)$ and $(\mathbb{T}^n,h)$ be as in
Subsection \ref{subsec tori},
and take coordinates $x$ on $\mathbb{T}^m$
and $y$ on $\mathbb{T}^n$ as in
Subsection \ref{subsec tori}.
\begin{prop}
Let $f\colon \mathbb{T}^m\to \mathbb{T}^n$ be a smooth map
such that $f^*([dy^j])=\sum_iP_i^j[dx^i]$
for $P=(P_i^j)\in M_{m,n}(\mathbb{Z})$.
If we define $\Phi$ by
\eqref{eq phi on torus} in Subsection \ref{subsec tori},
then we have
\begin{align*}
j_f(g,h)=\frac{V(S^{m-1})}{m}\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi.
\end{align*}
\end{prop}
\begin{proof}
First of all, we can see that
$f$ is homotopic to the map
given by $x\mapsto {}^tPx$ for $x\in\mathbb{T}^m$,
hence it suffices to show the equality
for $f(x)={}^tPx$.
Since the image of every geodesic under $f$
minimizes the length in its homotopy class with fixed endpoints, Theorem
\ref{thm croke fathi2}
gives
$\mathcal{E}_2(f)=\frac{m}{V(S^{m-1})}j_f(g,h)$.
We can compute $\mathcal{E}_2(f)$ directly as
\begin{align*}
\mathcal{E}_2(f)=\int_{\mathbb{T}^m}|df|^2{\rm vol}_g
=\sum_{i,j,k,l}h_{ij}P_k^iP_l^jg^{kl}{\rm vol}_g(\mathbb{T}^m)
=\| P\|^2{\rm vol}_g(\mathbb{T}^m).
\end{align*}
Moreover,
by the computation in Subsection \ref{subsec tori},
we have shown that
\begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi=\| P\|^2{\rm vol}_g(\mathbb{T}^m).
\end{align*}
Therefore,
\begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi
=\| P\|^2{\rm vol}_g(\mathbb{T}^m)
=\frac{m}{V(S^{m-1})}j_f(g,h).
\end{align*}
\end{proof}
\end{document}
\begin{document}
\title{Boolean Percolation on Doubling Graphs}
\begin{abstract}
We consider the discrete Boolean model of percolation on graphs satisfying a doubling metric condition. We study sufficient conditions on the distribution of the radii of
balls placed at the points of a Bernoulli point process for the absence of percolation, provided that the retention parameter of the underlying point process is small enough. We
exhibit three families of interesting graphs where the main result of this work holds.
Finally, we give sufficient conditions for ergodicity of the discrete Boolean model of percolation.\\
\textbf{Keywords:} Boolean percolation, Point Processes, Doubling Spaces\\
\end{abstract}
\section{Introduction}
The aim of this work is to study sufficient conditions for subcriticality and complete coverage in the discrete Boolean model of percolation in weighted doubling graphs. We give now an informal
description of the Boolean model of percolation. Consider a simple point process ${\mathcal X}$ in
some suitable metric space $({\Gamma},d)$. Then,
at each point of ${\mathcal X}$, center a ball of random radius. Assume that the radii are independent, identically
distributed and independent of ${\mathcal X}$. Thus, ${\Gamma}$ is partitioned into two
regions, the occupied region, which is
defined as the union of all random balls, and the vacant region, which is the
complement of the occupied region.
In this paper we consider the case in which
${\Gamma}$ is a doubling weighted graph equipped with the weighted graph distance, the underlying point process ${\mathcal X}$ is a Bernoulli point process with
retention parameter $p$ for some $p \in (0,1)$ and the random radii are independent and identically distributed non-negative integer-valued random variables. In this setting, we prove the
absence of unbounded connected components on the occupied region.
This model is the discrete counterpart of the Poisson Boolean model of continuum percolation
which belongs to the family of con\-ti\-nuum percolation models. In fact, the history of continuum percolation began in 1961 when W. Gilbert
\cite{gilbert1961random}
introduced the random connection
model on the plane. In $1985$, S. Zuev and A. Sidorenko
\cite{zuev1985continuous} considered continuum models of percolation where
points are chosen randomly in space and surrounded by shapes which can be random
or
fixed. In that work the authors studied the relation between critical parameters
associated to that model. For a comprehensive study of continuum models of percolation,
see the book of R. Meester and R. Roy \cite{roy}. To the best of our knowledge, this is the first time that the discrete Boolean model of percolation appears in the literature.
We point out that the discrete Boolean model of percolation in graphs where the underlying point process is a Bernoulli point process and the balls
under consideration are (closed) balls of radius $1/2$ (any positive number lower than one would work as well) co\-rres\-ponds to the
case of independent site percolation.
In this paper we prove that if the underlying graph satisfies a doubling condition and if the family of random radii are i.i.d. random variables with finite $\operatorname{dim}_{A}\left(\Gamma\right)$-moment, where $\operatorname{dim}_{A}\left(\Gamma\right)$ is the corresponding Assouad dimension which will be defined in section \ref{dnsr}, then the connected components arising in the discrete Boolean model are almost surely finite for sufficiently small values of $p$. We also prove that such behavior does not occur if the random radii have infinite $\operatorname{dim}_{A}\left(\Gamma\right)$-moment.
We remark that the absence of percolation on doubling graphs does not follow from the subcriticality of the Boolean percolation model in $\mathbb{R}^n$ and standard coupling arguments. Indeed, an interesting example where the main result of this work holds is given by the Cayley graph of the discrete Heisenberg group, which cannot be embedded in $\mathbb{R}^n$ for any $n$.
This paper is organized as follows. In section \ref{dnsr} we describe the discrete Boolean model of percolation and state the main theorems. In subsection \ref{doubling_graphs} we provide some interesting examples of graphs satisfying the assumptions of the main theorems. These examples include graphs of polynomial growth, such as self-similar graphs and Cayley graphs of nilpotent groups. In section \ref{mainp} and section \ref{infr} we prove the main results. In section \ref{ergodicitynumberofinfiniteclusters} we address the problem of determining sufficient conditions for ergodicity of the discrete Boolean model of percolation.
\section{Definitions, notation and statement of the main results}\label{dnsr}
\subsection{Doubling Graphs}
Throughout this paper $\mathbb{N}$ will denote the set of natural numbers, $\mathbb{N}_0$ will denote the set of non-negative integers and $\Gamma= \left(\mathcal{G}, \mu\right)$ will denote a weighted graph, where $\mathcal{G}=(V,E)$ is a countably infinite connected graph and $\mu$ is a non-negative function on $V \times V$ such that: (i) $\mu_{xy}=\mu_{yx}$; and (ii) $\mu_{xy} > 0$ if and only if $x \sim y$ (i.e.\ if and only if $x$ and $y$ are neighbors in $\Gamma$).
A \emph{path} in $\Gamma$ from $x$ to $y$ is a sequence of distinct vertices $x=z_0, z_1,\dots, z_n=y$ such that $\{z_{i-1},z_i\}\in E$ for $i=1,\dots, n$. The length of such a path
is defined as $\sum_{i=1}^n \mu_{z_{i-1}z_i}$. Finally, the
distance between two points $x,y$, denoted by $d(x,y)$, is the length of a shortest path from $x$ to $y$. We regard $\Gamma$ as a metric space with the metric $d$ given by this weighted graph distance.
Also, $B(v,r)=\{u\in V:\,
d(u,v)\leq r\}$ denotes the closed ball of radius $r$ centered at $v$ and
$S(v,r)=\{u\in V:\, d(u,v)=r\}$ denotes the sphere of radius $r$ in ${\Gamma}$ around
$v$.
We write $\medida{\cdot}$ for the counting measure on $V$. We note that $\medida{B(v,r)}$ depends on the distance on ${\Gamma}$ and consequently on the weights on the graph.
\begin{defi}
We say that $(\Gamma,d)$ is a \emph{doubling metric graph} if there exists a positive constant $C$
such that any ball $B$ in $\Gamma$ can be covered with at most $C$ balls whose
radius is half the radius of $B$.
\end{defi}
For further reading
on doubling metric spaces, see \cite{gromovmetric} and \cite{mackay2010conformal}. Doubling graphs arise naturally in many applications. For
instance, metric embedding of doubling graphs turned out to be useful for
algorithm design.
A related but stronger condition is the volume and time doubling assumption on
graphs which has been used to
prove upper and lower off-diagonal, sub-Gaussian
transition probability estimates for strongly recurrent random walks, see \cite{telcs2001volume}. In section \ref{exampledoubling} we give many examples of doubling graphs.
In \cite{Doyle}, Peter G. Doyle applies Rayleigh's short-cut method to prove P\'olya's recurrence theorem. In that work, the author uses the notion of doubling graphs in the proof of P\'olya's theorem for the $3$-dimensional lattice. It is worth mentioning that in \cite{Benjamini2011} the authors show how to construct planar graphs with the doubling property. Indeed, in that work the authors construct, for any $\alpha > 1$, a triangulation of the plane for which every ball of radius $r$ has (up to a multiplicative constant) $r^{\alpha}$ vertices.
\noindent {\bf{Assouad Dimension}} Now we introduce the concept of Assouad dimension, which will be used to state our main result. To begin with, let $\varepsilon > 0$ be given. We call a subset $A\subset V$
\emph{$\varepsilon$-separated} if $d(v,w)\geq\varepsilon$ for all distinct $v,w\in A$. Let $N(B,\varepsilon)$ be the maximal cardinality of an $\varepsilon$-separated subset of $B$. Then
$(\Gamma,d)$ is doubling if and only if there exists $C<\infty$ such that
\begin{equation}
\label{1.4.5}
N(B(v,r), r/2)\leq C
\end{equation}
for all balls $B(v,r)$ in $\Gamma$. An easy inductive argument (iterate the doubling condition roughly $\log_2(1/\varepsilon)$ times) shows that if \eqref{1.4.5} holds, then there exist $C'$ and $\beta$ depending only on $C$ such that
\begin{equation}
\label{1.4.6}
N(B(v,r), \varepsilon r)\leq C'\varepsilon^{-\beta}
\end{equation}
for any ball $B(v,r)$ in $\Gamma$ and any $0<\varepsilon<1$. For instance, we may take $\beta=\log_2C$. Thus, the doubling condition is a finite-dimensional hypothesis which controls the
growth of the cardinalities of separated subsets of any ball at any scale and location. We now give a formal definition of the Assouad dimension.
\begin{defi}
Let $\operatorname{Cover}_{\Gamma}$ denote the set of all $\beta > 0$ for which there exists $C_{\beta} > 0$ such that, for any ball $B(v,r)$ in $\Gamma$ and for any $0 < \varepsilon < 1$, we have $N(B(v,r), \varepsilon r)\leq C_{\beta}\varepsilon^{-\beta}$. The Assouad dimension of $\Gamma$ is then defined to be
\[
\operatorname{dim}_{A}\left(\Gamma\right) = \inf \{\beta \in \operatorname{Cover}_{\Gamma}\}.
\]
\end{defi}
For further details see \cite{assouad1}, \cite{assouad2} and references therein. A useful fact about the Assouad dimension, which follows directly from its definition and which will be used later, is that it is possible to control the volume of any ball in terms of $\operatorname{dim}_{A}\left(\Gamma\right)$.
It follows from (\ref{1.4.6}), by taking $\varepsilon=1/r$, that there exists a constant $C_1$ depending only on $\operatorname{dim}_{A}\left(\Gamma\right)$ such that
\begin{equation}
\label{constant}
\medida{B\left(v,r\right)} \leq C_1 r^{\operatorname{dim}_{A}\left(\Gamma\right)}
\end{equation}
for any $v \in V$ and any $r \in \mathbb{N}$.
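As a simple illustration of \eqref{constant}, consider the hypercubic lattice $\mathbb{Z}^{d}$ with unit edge weights and the graph distance. In this case $\operatorname{dim}_{A}\left(\Gamma\right)=d$ and \eqref{constant} holds with $C_1=3^{d}$, since
\[
\medida{B(v,r)} \leq (2r+1)^{d} \leq 3^{d} r^{d} \quad \mbox{for every } v\in\mathbb{Z}^{d} \mbox{ and } r\in\mathbb{N}.
\]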
\subsection{Marked Point Process}
A \emph{Bernoulli point process} on $\Gamma$ with retention parameter $p\in(0,1)$ is a family of independent $\{0,1\}$-valued random variables ${\mathcal X}=(X_v:\, v\in V)$ with common law ${\mathbf P}$ such that ${\mathbf P}(X_v=1)=p$. We identify the family of random variables ${\mathcal X}$ with the random subset ${\mathcal P}$ of $V$ defined by ${\mathcal P}=\{v\in V:\, X_v=1\}$, whose distribution is the product measure whose marginal at each vertex $v$ is the Bernoulli distribution with parameter $p$.
By a \emph{Bernoulli marked point process} on $\Gamma$ we mean a pair $({\mathcal X},{\mathcal R})$ formed by a Bernoulli point process ${\mathcal X}$ on $\Gamma$ and a family of independent, identically distributed ${\mathbb N}_0$-valued random variables ${\mathcal R}=(R_v: v\in V)$ called marks. We assume that these marks are independent of the point process ${\mathcal X}$.
Let $({\mathcal X},{\mathcal R})$ be a marked point process on $\Gamma$ with retention parameter $p$ and marks with common distribution ${\mathbf P}_{\rho}$. We denote by ${\mathbf E}_{\rho}$ the expectation operator induced by ${\mathbf P}_{\rho}$. Also, we denote by ${\mathbf P}_{p,\,\rho}$ and ${\mathbf E}_{p,\,\rho}$ respectively the probability measure and the expectation operator induced by $({\mathcal X},{\mathcal R})$.
\subsection{Random Graphs and Percolation} Let $({\mathcal X},{\mathcal R})$ be a Bernoulli marked point process on ${\Gamma}$. Then we define an associated random graph ${\mathcal G}({\mathcal X},{\mathcal R})=(V,{\mathcal E})$ as the undirected random graph with vertex set $V$ and edge set ${\mathcal E}$ defined by the condition $\{v,w\}\in{\mathcal E}$ if, and only if, $X_v=1$ and $w\in B(v,R_v)$ or $X_w=1$ and $v\in B(w,R_w)$.
A set of vertices $C\subset V$ is connected if, for any pair of distinct vertices $v$ and $w$ in $C$, there exists a path on ${\mathcal G}({\mathcal X},{\mathcal R})$ using vertices only from $C$, starting at $v$ and ending at $w$. The connected components of the graph ${\mathcal G}({\mathcal X},{\mathcal R})$ are its maximal connected subgraphs.
The cluster $C(v)$ of vertex $v$ is the connected component of the graph ${\mathcal G}({\mathcal X},{\mathcal R})$ containing $v$. Define the {\it{Percolation event}} as follows:
\begin{eqnarray}
\left(\mbox{Percolation}\right):=\bigcup_{v\in V}\left(\medida{C(v)}=\infty\right).
\end{eqnarray}
\subsection{Main Results}
Now we state the first main result of this work.
\begin{teo}
\label{T1}
Let $\Gamma$ be a doubling graph. Let $({\mathcal X},{\mathcal R})$ be a marked point process on $\Gamma$ with retention parameter $p$ and marks with common distribution ${\mathbf P}_{\rho}$. Let $R$ be a random variable whose distribution is ${\mathbf P}_{\rho}$. If
\begin{eqnarray}
\label{E100f}
{\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}\right]<\infty,
\end{eqnarray}
then there exists $p_0>0$ such that ${\mathbf P}_{p,\,\rho}(\mbox{Percolation})=0$ for all $p\leq p_0$.
\end{teo}
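To illustrate the moment condition \eqref{E100f}, consider again $\Gamma=\mathbb{Z}^{d}$ with unit edge weights, so that $\operatorname{dim}_{A}\left(\Gamma\right)=d$ and \eqref{E100f} reads ${\mathbf E}_{\rho}\left[R^{d}\right]<\infty$. This holds, for example, whenever the radii have a polynomial tail ${\mathbf P}_{\rho}(R>s)\leq c\,s^{-(d+\delta)}$ for some $c>0$ and $\delta>0$, since then
\[
{\mathbf E}_{\rho}\left[R^{d}\right] \leq 1+\sum_{s\geq 1}\big((s+1)^{d}-s^{d}\big)\,{\mathbf P}_{\rho}(R>s) \leq 1+c\,d\,2^{d}\sum_{s\geq 1}s^{-1-\delta}<\infty.
\]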
\paragraph{Complete Coverage} We complement the result of Theorem \ref{T1} by establishing a sufficient condition for complete coverage of $V$. For any $A\subset V$, define $\Lambda(A)=\bigcup_{v\in A\cap{\mathcal P}}B(v,R_v)$.
We say that the growth of a given graph is \textit{at least polynomial} if there exist constants $C$ and $d$ such that $\medida{B(v,r)}\geq C r^d$ for any $v\in V$ and $r>0$. Here $B(v,r)$ denotes the closed ball centered at $v$ of radius $r$ in $\Gamma$. Analogously, we say that the growth of a given graph is \textit{at most polynomial} if there exist constants $C^{\prime}$ and $d^{\prime}$ such that $\medida{B(v,r)}\leq C^{\prime} r^{d^{\prime}}$ for any $v\in V$ and $r>0$.
For an almost-transitive graph $\Gamma$, i.e., a graph whose automorphism group acts on it with finitely many orbits, the following holds \cite{Woessalmosttransitive}: if $\Gamma$ is an almost-transitive graph whose growth is at most polynomial, then its growth is at least polynomial. Indeed, almost-transitive graphs whose growth is at most polynomial are doubling graphs.
\begin{teo}
\label{Riidinf}
Let $\Gamma$ be an infinite, locally finite
graph whose growth is at least polynomial, say $\medida{B(v,r)} \geq C_1 r^{C}$ for all $v\in V$ and $r>0$. Let $({\mathcal X},{\mathcal R})$ be a marked point process on $\Gamma$ with retention parameter $p$ and marks distributed according to the probability distribution ${\mathbf P}_{\rho}$. Let $R$ be a random variable whose law is ${\mathbf P}_{\rho}$. If ${\mathbf E}_{\rho}\left[R^{C}\right] = \infty$, then for any $p\in(0,1]$, $\Lambda(V)=V$ \ ${\mathbf P}_{p,\,\rho}$-almost surely.
\end{teo}
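To illustrate Theorem \ref{Riidinf}, and how it complements Theorem \ref{T1}, take $\Gamma=\mathbb{Z}^{d}$ with unit edge weights, whose growth is at least polynomial with exponent $C=d$. If the marks satisfy ${\mathbf P}_{\rho}(R=s)=c\,s^{-(d+1)}$ for $s\geq 1$, with $c$ the normalizing constant, then ${\mathbf E}_{\rho}[R^{d}]=c\sum_{s\geq 1}s^{-1}=\infty$, and Theorem \ref{Riidinf} yields $\Lambda(V)=V$ almost surely for every $p\in(0,1]$, even though ${\mathbf E}_{\rho}[R^{\,d-\delta}]<\infty$ for every $\delta>0$.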
It follows from the discussion above that Theorem \ref{T1} and Theorem \ref{Riidinf} hold for the family of almost-transitive graphs under a suitable moment condition.
In the discrete space version of the Boolean model of percolation considered in this work we pursue a geometric approach inspired by geometric arguments considered in \cite{Gouere}. There are considerable technical differences in the present work, though. The main ones are as follows: a) the growth of random balls is controlled by assuming a moment condition involving the Assouad dimension of the underlying graph and the random radii of the corresponding balls; this requires considerably more care in the proof of the main result of this work, and b) the ergodicity of the discrete Boolean model of percolation follows under the requirement that the underlying graph admits a family of symmetries which acts separating points.
\paragraph{Phase transition} Consider the discrete Boolean model of percolation introduced above. As in other percolation models, we define the percolation probability $\theta\left(p\right)$ by $\theta\left(p\right) = \textbf{P}_{p,\rho}\left(\mbox{Percolation}\right)$. A standard coupling argument gives the monotonicity of $\theta\left(p\right)$ in $p$. Thus, the critical parameter
\begin{equation}
p_c\left(\Gamma\right) = \sup \{p : \theta\left(p\right) = 0\}
\end{equation}
is well defined.
Now replace the random radii in this model by the deterministic radius $1/2$. What we get is the independent site percolation model in $\Gamma$. Then, a direct coupling with the site percolation model in $\Gamma$ yields $p_c\left(\Gamma\right) \leq p_c^{s}\left(\Gamma\right)$, where $p_c^{s}\left(\Gamma\right)$ is the critical parameter for independent site percolation in $\Gamma$.
\begin{teo}\label{LyonsandYuval}
If $\Gamma$ is the Cayley graph of a group $G$ with at most polynomial growth containing a subgroup isomorphic to $\mathbb{Z}^2$, then $p_c\left(\Gamma\right) < 1$.
\end{teo}
The proof of the previous theorem follows from the relation stated above between the critical parameters for independent site and Boolean percolation and from Theorem $7.17$ and Corollary $7.18$ in \cite{yuval}, where the authors proved that $p_c^{s}\left(\Gamma\right) < 1$ for graphs satisfying the assumptions of Theorem \ref{LyonsandYuval}. Therefore, we may use Theorem \ref{T1} and Theorem \ref{LyonsandYuval} to prove that a phase transition occurs for the discrete Boolean model of percolation.
See Remark \ref{RemarkPansu} in subsection \ref{exampledoubling} for an application of Theorem \ref{LyonsandYuval}.
\subsection{Examples of Doubling Graphs}\label{exampledoubling}
\label{doubling_graphs}
The first and fundamental example of a doubling metric graph is $\mathbb{Z}^n$
which has polynomial growth of order $n$. This example may be generalized, at least, in three ways: graphs with polynomial growth, Cayley graphs of nilpotent groups and self-similar graphs.
We observe that an easy way to obtain other examples of doubling graphs is to consider subgraphs of a doubling graph, since the property of a graph being doubling is hereditary, i.e., a subspace of a doubling metric space is doubling (\cite{gromovmetric}, B.2.5).
\paragraph{Graphs with Polynomial Growth}
A large family of doubling metric graphs is the one composed of graphs with polynomial
growth. For a comprehensive study of such graphs see \cite{imrich1991survey} and
\cite{gromov}. For each $v\in V$, the
\emph{growth function} ${\gamma}(v,\cdot):{\mathbb N}\to{\mathbb N}_0$ with respect to the vertex $v$, is given by
\begin{eqnarray}
\label{GFV}
{\gamma}(v,r):=\medida{B(v,r)}.
\end{eqnarray}
It is worth mentioning that, in the case of transitive graphs, the growth
function does not depend on the choice of a particular vertex $v$, while in the
case of non-transitive graphs the growth
functions for two different vertices differ only by a multiplicative constant.
We assume that for each $r\in{\mathbb N}$ ${\gamma}(r)=\sup_{v\in V}{\gamma}(v,r)<\infty$. We say that a given graph has \textit{polynomial growth} if there exist
constants $C$
and $d$ such that its associated growth function satisfies the inequality $C^{-1} r^d \leq {\gamma}(v,r)\leq C r^d$ for any $v\in V$ and $r>0$.
A graph with polynomial growth satisfies a condition which is slightly
stronger than being a doubling metric space. It may be proved that in this case the graph is a
doubling measure space. To keep the paper self-contained we give the definition of doubling measure space.
\begin{defi}
Let $\left(M,d\right)$ be a metric space. A positive Borel measure $\mu$ on $M$ is said to be doubling if there exists a constant $C > 0$ such that
\begin{equation}
\mu(2B) \leq C \mu (B) \nonumber
\end{equation}
for all balls $B$ in $M$.
\end{defi}
It follows from the previous definition that a graph with polynomial growth is a
doubling measure space with respect to the counting measure. Since metric spaces
admitting a doubling measure are doubling metric spaces (\cite{gromov},
B.3) we may conclude that graphs with polynomial growth form a family of
doubling metric graphs.
\paragraph{Cayley Graphs of Nilpotent Groups}
Let $G$ be a finitely generated group with generating set $H$. Assume that the identity element $e \notin H$. We may associate with $G$ the Cayley graph $\Gamma(G,H)$ of $G$ with respect to $H$,
whose vertices are the elements of $G$. The set of edges $E(G,H)$ is defined as follows:
\[
E(G,H)=\{ (g,gh) | g\in G, h \in H\cup H^{-1}\}.
\]
Then, define the growth of the group $G$ as the growth of the Cayley graph
${\Gamma}(G,H)$ with respect to some (any) generating set $H$. The order of growth
is well defined, since changing the generating set $H$ only changes the
constants appearing in the bounds of the growth function.
Let $G$ be a group. If $H_1, H_2$ are subgroups of $G$, define $[H_1,H_2]$ to be the subgroup of $G$ generated by all commutators $\{[h_1,h_2]:=h_1 h_2 h_1^{-1} h_2^{-1} \mid h_1 \in H_1, h_2 \in H_2 \}$. For $n \in \mathbb{N}_0$ we inductively define $C_n(G)$ by
\[
C_0(G):=G \ \mbox{and} \ \forall \ n \in \mathbb{N}_0 \ C_{n+1}(G):=[G,C_n(G)].
\]
The group $G$ is nilpotent if there exists $n \in \mathbb{N}$ such that $C_n(G)$ is the trivial group. A group $G$ is almost nilpotent if it contains a nilpotent normal subgroup of finite index.
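For instance, every abelian group $G$ is nilpotent, since $C_1(G)=[G,G]$ is already trivial, and the discrete Heisenberg group $\mathcal{H}_3(\mathbb{Z})$ appearing in Remark \ref{RemarkPansu} below is nilpotent of class two: $C_1(\mathcal{H}_3(\mathbb{Z}))$ is its (infinite cyclic) center, so $C_2(\mathcal{H}_3(\mathbb{Z}))$ is trivial.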
\begin{teo}[Gromov \cite{gromov} and Wolf \cite{wolf}] \label{nilpotent} A finitely generated
group has polynomial growth if and only if it is almost nilpotent.
\end{teo}
It follows from Theorem \ref{nilpotent} above that Cayley graphs of nilpotent groups have polynomial growth, which in turn implies that they form a family of doubling metric graphs.
\begin{rem}\label{RemarkPansu}{\rm
A concrete example of a group with polynomial growth is the discrete
Heisenberg group $\mathcal{H}_3(\mathbb{Z})$, where
\[
\mathcal{H}_3(\mathbb{Z}):=\left\{ \left( \begin{array}{ccc}
1 & x& z \\
0 & 1 & y\\
0 & 0& 1 \\
\end{array}\right): x,y,z \in \mathbb{Z}
\right\}.
\]
Since the discrete Heisenberg group is nilpotent, it has polynomial growth. Indeed, $\mathcal{H}_3(\mathbb{Z})$ has polynomial growth of order $4$. For further details we refer the reader to \cite{gromov}.
Observe that the subgroup of $\mathcal{H}_3(\mathbb{Z})$ generated by
\[
H:=\left\{\left( \begin{array}{ccc}
1 & 1& 0 \\
0 & 1 & 0\\
0 & 0& 1 \\
\end{array}\right), \left( \begin{array}{ccc}
1 & 0& 1 \\
0 & 1 & 0\\
0 & 0& 1 \\
\end{array}\right)
\right\}
\]
is isomorphic to $\mathbb{Z}^2$. Then it follows from Theorem \ref{T1} and Theorem \ref{LyonsandYuval} that the discrete Boolean model of percolation on the Cayley graph of $\mathcal{H}_3(\mathbb{Z})$ exhibits a {\it{phase transition}} under the assumption that the random radius of a ball has finite fourth moment.
It is worth mentioning that, by a discrete version of Pansu's theorem \cite{pansu1989metriques}, {\it{the discrete Heisenberg group}} is an example of a doubling metric space which {\it{cannot be embedded in ${\mathbb R}^n$}}, for any $n$. Therefore, the absence of percolation in doubling graphs does not follow from the subcriticality of the Boolean percolation model in $\mathbb{R}^n$ and standard coupling arguments.
}
\end{rem}
\paragraph{Self-Similar Graphs}
A large family of doubling metric graphs is given by the self-similar graphs. Self-similar graphs can be seen as discrete versions of self-similar sets.
Let $\Gamma=\left(V(\Gamma),E(\Gamma)\right)$ be a graph with vertex set $V(\Gamma)$ and edge set $E(\Gamma)$. Let $F$ be a set of vertices in $\V {\Gamma}$. Then $\mathcal{C}_{{\Gamma}}F$ denotes
the set of connected components in $\V {\Gamma}\setminus F$. We define the
\emph{reduced graph} ${\Gamma}_{F}$ of ${\Gamma}$ by setting $\V {\Gamma}_{F}=F$ and
connecting two vertices $x$ and $y$ in $\V {\Gamma}_{F}$ by an edge if
and only if there exists a $C\in\mathcal{C}_{{\Gamma}}F$ such that $x$ and $y$
are in the boundary of $C$.
\begin{defi}\label{ss_in_growth} We say that ${\Gamma}$ is \emph{self-similar} with
respect to $F$ and $\psi:\V {\Gamma}\to\V {\Gamma}_{F}$ if
\begin{enumerate}
\item[(F1)] no vertices in $F$ are adjacent in ${\Gamma}$,
\item[(F2)] the intersection of the closures of two different components in $\mathcal{C}_{{\Gamma}}F$
contains not more than one vertex and
\item[(F3)] $\psi$ is an isomorphism between ${\Gamma}$ and ${\Gamma}_{F}$.
\end{enumerate}
\end{defi}
We say that a graph has bounded geometry if the set of vertex degrees is bounded.
{\mathbf e}gin{teo}[Kr{\"o}n \cite{kron2004growth}]The Assouad dimension of homogeneously self-similar
graphs of bounded geometry are finite and equal to the Hausdorff dimension
of self-similar sets.
{\eta}nd{teo}
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{MKC.pdf}
\caption{The modified Koch curve is a homogeneously self-similar graph with bounded geometry and Hausdorff and Assouad dimension $\log 5/\log 3 $ \cite{kron2004growth}.}
\end{figure}
\section{Proof of Theorem \ref{T1}}
\label{mainp}
The proof of Theorem \ref{T1} will be divided into two steps. In the first step
we introduce two families of events, $F(v,r)$ and $H(v,r)$, in order to study the
diameter of the cluster $C(v)$. The family of events $F(v,r)$ is helpful to understand the behavior of the diameter of the cluster $C(v)$ on the subgraph of ${\mathcal G}({\mathcal X},{\mathcal R})$
induced by the point process on $B(v,10r)$. The family of events $H(v,r)$ provides a way to take care of the influence of the point process $({\mathcal X},{\mathcal R})$ from the exterior of the ball $B(v,r)$. Our aim in this step is to show that the probability of the
percolation event can be controlled by the probabilities of the events $F(v,r)$.
In the second step, we will show that if the radii are not too large, then the
occurrence of the event $F(v,r_1)$ implies the occurrence of two independent
events $F(u,r_2)$ and $F(w,r_2)$ where $r_1=10r_2$. Our aim in this step is to
show that the probability of the events $F(v,r_1)$ can be bounded from above by the square of the probability of the events $F(u,r_2)$ plus a quantity that goes to zero
when $r_1$ goes to infinity. This provides a way to take care of the probabilities
of the events $F(v,r)$ that allows us to show that for $p$ small enough,
${\mathbf P}_{p,\,\rho}(F(v,r))$ goes to zero when $r$ goes to infinity.
\subsection{Controlling the diameter of the clusters $C(v)$}
For each $v\in V$, let $D_v=\inf\{r\geq 0:\, C(v)\subset B(v,r)\}$. The
percolation event is equivalent to the event $\bigcup_{v\in V}\{D_v=\infty\}$.
The proof of Theorem \ref{T1} is reduced to showing that there exists $p_0>0$ such
that for each $v\in V$, $\lim_{r\to\infty}{\mathbf P}_{p,\,\rho}(D_v>r)=0$ for all
$p<p_0$.
For each $v\in V$ we define two families of events to study the diameter of the cluster $C(v)$.
\paragraph{The family of events $F(v,r)$} Let $B$ be a subset of $V$. Denote by
${\mathcal G}[B]$ the subgraph of ${\mathcal G}({\mathcal X},{\mathcal R})$ induced by $B$. Let $A$ be a non-empty subset of
$V$ contained in $B$ and let $v\in A$. We say that $v$ is
\emph{disconnected from the exterior of $A$ inside $B$} if the connected
component of ${\mathcal G}[B]$ containing $v$ is contained in $A$. Now we introduce the events $F(v,r)$. Let $v\in V$ and let
$r\in{\mathbb N}$, we say that $F(v,r)$ does not occur if $B(v,r)$ is disconnected from the
exterior of $B(v,8r)$ inside $B(v,10r)$.
\paragraph{The family of events $H(v,r)$} For each $v\in V$ and $r\in{\mathbb N}$, we define
\begin{eqnarray}
\label{Hvr}
H(v,r)=\left\{\exists\,w\in{\mathcal P}\cap B(v,10r)^c:\, R_w>\frac{d(w,v)}{10}\right\}.
\end{eqnarray}
The relation between the diameter of the cluster at $v$ and the families of events defined above is established in the following lemma.
\begin{lem}
\label{GHM}
The following inclusion holds for all $r\in{\mathbb N}$:
\begin{eqnarray}
\label{Mgrande2}
F(v,r)^c\cap H(v,r)^c\subset\left\{D_v\leq 8r\right\}.
\end{eqnarray}
\end{lem}
\subsection*{Proof of Lemma \ref{GHM}}
If the event $H(v,r)$ does not occur, then there are no sites of the point process with distance to $v$ greater than $10r$ connected to the ball $B(v,9r)$. Indeed, assume that $H(v,r)$ does not occur. Then for every $w\in{\mathcal P}\cap B(v,10r)^c$ we have $d(w,v)-R_w\geq\frac{9}{10}d(w,v)>9r$. Using the triangle inequality it is easy to verify that $d(u,v)\geq d(w,v)-R_w>9r$ for all $u\in B(w,R_w)$. If $F(v,r)$ does not occur, then the ball $B(v,r)$ is isolated from the exterior of $B(v,8r)$. If, in addition, the event $H(v,r)$ does not occur, then the balls $B(w,R_w)$, $w\in{\mathcal P}\cap B(v,10r)^c$, do not connect any vertex inside $B(v,r)$ to the complement of $B(v,8r)$. Thus $D_v\leq 8r$.
\ifmmode\sqr\else{$\sqr$}\fi
From \eqref{Mgrande2} we get
\begin{eqnarray}
\label{Mgrande3}
{\mathbf P}_{p,\,\rho}(D_v>8r)\leq {\mathbf P}_{p,\,\rho}(F(v,r))+{\mathbf P}_{p,\,\rho}(H(v,r)).
\end{eqnarray}
In the following lemma we show that $\lim_{r\to\infty}{\mathbf P}_{p,\,\rho}(H(v,r))=0$ for all $p\in(0,1)$. Then we need to show that ${\mathbf P}_{p,\,\rho}(F(v,r))$ goes to zero as $r\to\infty$.
\begin{lem}
There exists a positive constant $C$, which depends only on the Assouad dimension $\operatorname{dim}_{A}\left(\Gamma\right)$ of $\Gamma$, such that for each $v\in V$ and $r\in{\mathbb N}$, the following inequality holds:
\begin{eqnarray}\label{0}
{\mathbf P}_{p,\,\rho}\left(H(v,r)\right) \leq C \ {\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}1\{R > r\}\right].
\end{eqnarray}
Furthermore, if ${\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}\right] < +\infty$, then $\displaystyle \lim_{r \rightarrow +\infty} {\mathbf P}_{p,\,\rho}\left(H(v,r)\right) = 0$.
\end{lem}
\begin{proof}
Let
\[
L_v(r) := \displaystyle \sum_{w \in B(v,10r)^c} 1\{w \in \mathcal{P}\} 1\{w \in B(v,10R_w)\}
\]
and note that ${\mathbf P}_{p,\,\rho}\left(H(v,r)\right) = {\mathbf P}_{p,\,\rho}\left(L_v(r) \geq 1\right)$. It follows that
\begin{eqnarray}
{\mathbf P}_{p,\,\rho}\left(H(v,r)\right) &\leq& {\mathbf E}_{p,\,\rho}\left[L_v(r)\right] \nonumber \\
&\leq& {\mathbf E}_{p,\,\rho}\left[\displaystyle \sum_{w \in B(v,10r)^c} 1\{w \in B(v,10R_w)\}\right] \nonumber \\
&=& \displaystyle \sum_{w \in B(v,10r)^c} {\mathbf E}_{p,\,\rho}\left[1\{w \in B(v,10R_w)\}\right] \nonumber \\
&=& \displaystyle \sum_{w \in B(v,10r)^c} \sum_{s > r} 1\{w \in B(v,10s)\} {\mathbf P}_{\rho}(R_w = s) \nonumber \\
&=& \displaystyle \sum_{s > r} \medida{B(v,10s) \setminus B(v,10r)} {\mathbf P}_{\rho}(R = s) \nonumber \\
&\leq& \displaystyle \sum_{s > r} \medida{B(v,10s)} {\mathbf P}_{\rho}(R = s) \nonumber \\
&\leq& C \displaystyle \sum_{s > r} s^{\operatorname{dim}_{A}\left(\Gamma\right)} {\mathbf P}_{\rho}(R = s) \nonumber
\end{eqnarray}
where $C=C_1 10^{\operatorname{dim}_{A}\left(\Gamma\right)}$. The last inequality and the choice of the constant $C$ follow from (\ref{constant}). This gives (\ref{0}).
\end{proof}
\subsection{Controlling the probabilities of the events $F(v,r)$}
To take care of the probabilities ${\mathbf P}_{p,\,\rho}(F(v,r))$ we introduce another family of events.
\paragraph{The family of events $\tilde{H}(v,r)$} For each $v\in V$
and $r\in{\mathbb N}$, we define
\begin{eqnarray}
\label{hrt}
\tilde{H}(v,r)=\{\exists\,w\in{\mathcal P}\cap B(v,100r):\,R_w \geq r\}.
\end{eqnarray}
\begin{lem} The following inclusion holds for all $r\in{\mathbb N}$:
\label{Escala2}
\begin{eqnarray}
\label{escala} F(v,10r)\cap\tilde{H}(v,r)^c&\subset&
\bigcup_{u\in A(v,r,10)}F(u,r)
\bigcap
\bigcup_{w\in A(v,r,80)}F(w,r),
\end{eqnarray}
where $A(v,r,m)$ is an $r$-separated subset of $S(v,mr)$ of maximal cardinality, that is, $\medida{A(v,r,m)} = N(S(v,mr),r)$.
\end{lem}
\subsection*{Proof of Lemma \ref{Escala2}}
\label{Geom3}
Fix $r\in{\mathbb N}$. First, assume that the event $F(v,10r)$ occurs but the event
$\tilde{H}(v,r)$ does not occur. Since $F(v,10r)$ occurs we can go from the
vertex $v$ to the complement of the ball $B(v,80r)$ just using balls $B(u,R_u)$
centered at points from ${\mathcal P}\cap B(v,100r)$. In this way, we can go from the
sphere $S(v,10r)$ to the sphere $S(v,80r)$. One of these balls, say
$B(u_*,R_{u_*})$, touches $S(v,10r)$. Since the sphere $S(v,10r)$ is a subset of
$\bigcup_{u\in A(v,r,10)}B(u,r)$ we get that this ball touches a ball of the
form $B(u,r)$ for some $u$ in $A(v,r,10)$.
Now we shall prove that, for this $u$, the event $F(u,r)$ occurs. It is easy to see that we can go from $B(u,r)$ to the complement of $B(u,8r)$ just using balls of the form $B(w,R_w)$ centered at points from ${\mathcal P}\cap B(v,100r)$. Since $\tilde{H}(v,r)$ does not occur, the radius of any such ball is less than $r$. Then, we can go from $B(u,r)$ to the complement of $B(u,8r)$ just using balls of the form $B(w,R_w)$ centered at points from ${\mathcal P}\cap B(u,10r)$. In other words, the event $F(u,r)$ occurs. Then, the event $\bigcup_{u\in A(v,r,10)}F(u,r)$ does occur. The proof that the event $\bigcup_{w\in A(v,r,80)}F(w,r)$ occurs follows along the same lines.
\ifmmode\sqr\else{$\sqr$}\fi
The event on the right-hand side of \eqref{escala} is the intersection of two events. These two events depend only on the restriction of the point process to $B(v,20r)$ and to the complement of $B(v,60r)$, respectively; since these regions are disjoint, the two events are independent.
It follows from \eqref{escala} that
\begin{eqnarray}
{\mathbf P}_{p,\,\rho}(F(v,10r))& \leq & \sum_{u\in A(v,r,10)}{\mathbf P}_{p,\,\rho}(F(u,r)) \nonumber \\
&\times & \sum_{w\in A(v,r,80)}{\mathbf P}_{p,\,\rho}(F(w,r)) \label{Ineq2}\\
&+&{\mathbf P}_{p,\,\rho}(\tilde{H}(v,r)).\nonumber
\end{eqnarray}
Notice that $\medida{A(v,r,m)}\leq C_1 m^{\operatorname{dim}_{A}\left(\Gamma\right)}$ for all $v\in V$, $r\in{\mathbb N}$ and $m\in{\mathbb N}$. Therefore, by \eqref{Ineq2} we get
\begin{eqnarray} \textstyle
\sup_{v\in V}{\mathbf P}_{p,\,\rho}(F(v,10r))&\leq& K\left(\sup_{v\in V}{\mathbf P}_{p,\,\rho}(F(v,r))\right)^2\label{Ineq4}\\
&&+\sup_{v\in V}{\mathbf P}_{p,\,\rho}(\tilde{H}(v,r)), \nonumber
\end{eqnarray}
where $K=C_{1}^{2}800^{\operatorname{dim}_{A}\left(\Gamma\right)}$.
\begin{lem}
\label{SB}
There exist positive constants $C_2$ and $C_3$, which depend only on the Assouad dimension $\operatorname{dim}_{A}\left(\Gamma\right)$ of $\Gamma$, such that for each $v\in V$ and $r\in{\mathbb N}$, the following inequalities hold:
\begin{eqnarray}
\label{SB1}
{\mathbf P}_{p,\,\rho}(F(v,r))&\leq& p\, C_2r^{\operatorname{dim}_{A}\left(\Gamma\right)},\\
\label{SB2}
{\mathbf P}_{p,\,\rho}(\tilde{H}(v,r))&\leq& p\,C_3{\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}{\mathbf 1}\{R\geq r\}\right].
\end{eqnarray}
\end{lem}
\subsection*{Proof of Lemma \ref{SB}}
Let $r\in{\mathbb N}$. A simple computation shows that
\begin{eqnarray*}
{\mathbf P}_{p,\,\rho}(F(v,r))&\leq&{\mathbf P}_{p,\,\rho}(\exists\,x\in{\mathcal P}\cap B(v,10r))\nonumber\\
&\leq& p\,\medida{B(v,10r)}\\
&\leq& p\, C_2r^{\operatorname{dim}_{A}\left(\Gamma\right)},
\end{eqnarray*}
where $C_2=C_1 10^{\operatorname{dim}_{A}\left(\Gamma\right)}$. In the last inequality we used inequality (\ref{constant}).
To show \eqref{SB2} we note that $\tilde{H}(v,r)=\{Y_v\geq 1\}$, where $Y_v$ is the random variable defined by
\[Y_v=\sum_{u\in B(v,100r)}{\mathbf 1}\{u\in{\mathcal P}\}{\mathbf 1}\{R_u\geq r\}.\]
We have
\begin{eqnarray*}
{\mathbf P}_{p,\,\rho}(\tilde{H}(v,r))&\leq&{\mathbf E}_{p,\,\rho}\left[Y_v\right]\nonumber\\
&=&\sum_{u\in B(v, 100r)}p\,{\mathbf P}_{\rho}(R_u\geq r)\nonumber\\
&=&p\,\medida{B(v,100r)}{\mathbf P}_{\rho}(R\geq r)\nonumber\\
&\leq& p\,C_3 r^{\operatorname{dim}_{A}\left(\Gamma\right)} {\mathbf P}_{\rho}(R\geq r)\nonumber\\
&\leq& p\,C_3{\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}{\mathbf 1}\{R\geq r\}\right],
\end{eqnarray*}
where $C_3=C_{1}100^{\operatorname{dim}_{A}\left(\Gamma\right)}$. The first equality follows from the independence between ${\mathcal P}$ and ${\mathcal R}$ and the second equality follows from the fact that the random variables $R_u$, $u\in V$, are identically distributed. The second inequality follows from inequality (\ref{constant}).
\ifmmode\sqr\else{$\sqr$}\fi
\subsection{Proof of Theorem \ref{T1}}
By \eqref{Mgrande3}, the proof of Theorem \ref{T1} is reduced to showing the existence of $p_0>0$ such that there exists an increasing sequence $(r_n)_{n\in{\mathbb N}}$ of natural numbers with $\displaystyle \lim_{n\to\infty} {\mathbf P}_{p,\,\rho}(F(v,r_n))=0$ for all $p<p_0$ and $v\in V$. For this reason we need the following lemma.
\begin{lem}
\label{FG0}
Let $f$ and $g$ be two functions from ${\mathbb N}$ to ${\mathbb R}_+$ satisfying the following conditions: (i) $f(r)\leq 1/2$ for all $r\in \{1,\dots, 10\}$; (ii) $g(r)\leq 1/4$ for all $r\in{\mathbb N}$; (iii) for all $r\in{\mathbb N}$:
\begin{eqnarray}
\label{FG1}
f(10r)\leq f^2(r)+g(r).
\end{eqnarray}
If $\lim_{r\to\infty}g(r)=0$, then $\lim_{n\to\infty}f(10^nr)=0$ for each $r\in\{1,\dots, 10\}$.
\end{lem}
\subsection*{Proof of Lemma \ref{FG0}}
For each $n\in{\mathbb N}_0$, let $F_n=\max_{1\leq r\leq 10}f(10^nr)$ and let $G_n=\max_{1\leq r \leq 10}g(10^nr)$. Using \eqref{FG1} and hypotheses (i) and (ii) we may conclude, by means of the induction principle, that for each $n\in{\mathbb N}_0$, $F_n\leq 1/2$ and
\begin{eqnarray}
\label{CotaIndb}
F_n \leq \frac{1}{2^{n+1}}+\displaystyle \sum_{j=0}^{n-1}\frac{1}{2^j}G_{n-1-j}.
\end{eqnarray}
Indeed, $F_0\leq 1/2$ by (i), and if $F_n\leq 1/2$, then \eqref{FG1} gives $F_{n+1}\leq F_n^2+G_n\leq \frac{1}{2}F_n+G_n$, which together with (ii) yields $F_{n+1}\leq 1/2$ and, upon iteration, the bound \eqref{CotaIndb}.
Since $g(10^nr)$ goes to zero as $n\to\infty$, we have that $G_n\to 0$ when $n\to\infty$. By \eqref{CotaIndb}, we obtain that $F_n\to 0$ when $n\to\infty$.
\ifmmode\sqr\else{$\sqr$}\fi
Now we complete the proof of Theorem \ref{T1}. Consider the functions
$$f(r)=K\sup_{v\in V}{\mathbf P}_{p,\,\rho}(F(v,r))$$
and
$$g(r)=K\sup_{v\in V}{\mathbf P}_{p,\,\rho}(\tilde{H}(v,r)).
$$
It follows from \eqref{Ineq4} that
\begin{eqnarray}
\label{Ineq5}
f(10r)\leq f^2(r)+g(r).
\end{eqnarray}
By condition \eqref{E100f} and \eqref{SB2} we have that $\lim_{r\to\infty}g(r)=0$ for any $p$.
We show that there exists $p_0>0$ such that if $p<p_0$ then $f(r)\leq 1/2$, $1\leq r\leq 10$, and $g(r)\leq 1/4$, $r\in{\mathbb N}$.
Set
\[p_0=\min\left(\frac{1}{2KC_210^{\operatorname{dim}_{A}\left(\Gamma\right)}}, \frac{1}{4KC_3{\mathbf E}_{\rho}\left[R^{\operatorname{dim}_{A}\left(\Gamma\right)}\right]}\right).\]
By condition \eqref{E100f}, we get $p_0>0$.
Let $p>0$ be such that $p\leq p_0$. It follows from \eqref{SB1} that
\begin{eqnarray*}
f(r)\leq\frac{1}{2}\left(\frac{r}{10}\right)^{\operatorname{dim}_{A}\left(\Gamma\right)}.
\end{eqnarray*}
Thus we have that if $0<p\leq p_0$, then $\max_{1\leq r\leq 10}f(r)\leq 1/2$.
By \eqref{SB2}, we get
\begin{eqnarray*}
g(r)\leq\frac14.
\end{eqnarray*}
Finally, by Lemma \ref{FG0}, we have that $\lim_{n\to\infty}f(10^nr)=0$ for each $r\in\{1,\dots, 10\}$.
In particular
\[\lim_{n\to\infty}f(10^n)=\lim_{n\to\infty}K\sup_{v\in V}{\mathbf P}_{p,\,\rho}(F(v,10^n))=0.\]
\noindent This finishes the proof of Theorem \ref{T1}.
\ifmmode\sqr\else{$\sqr$}\fi
\section{Proof of Theorem \ref{Riidinf}}\label{infr}
Fix some (any) vertex $v \in V$. We will prove that the following assertion holds:
\[{\mathbf P}_{p,\,\rho}(\exists\, w\in{\mathcal P}: v \in B(w,R_w))=1.\]
As $V$ is countable, this implies that $\Lambda(V)=V$, ${\mathbf P}_{p,\,\rho}$-almost surely.
Since
\begin{eqnarray*}
{\mathbf P}_{p,\,\rho}\left(\exists w \in \mathcal{P}: v \in B(w,R_{w})\right) &=& {\mathbf P}_{p,\,\rho}\left(\exists w \in \mathcal{P}: R_{w} > d(v,w)\right) \nonumber \\
&=& 1 - {\mathbf P}_{p,\,\rho}\left(\cap_{w \in V}(X_w=1,R_w > d(w,v))^c\right), \nonumber
\end{eqnarray*}
we will prove that
\begin{equation}\label{prodinf}
{\mathbf P}_{p,\,\rho}\left(\cap_{w \in V}(X_{w}=1,R_w > d(w,v))^c\right) = 0 .
\end{equation}
Since $\mathcal{X}=(X_w : w \in V)$ and $\mathcal{R}=(R_w:w \in V)$ are two families of independent random variables which are also independent of each other, we have that (\ref{prodinf}) holds if and only if
\begin{equation}\label{inftyf}
\prod_{w \in V}{\mathbf P}_{p,\,\rho}\left((X_w=1,R_w > d(w,v))^c\right) = 0.
\end{equation}
It is well known that (\ref{inftyf}) holds if, and only if,
\begin{eqnarray}\label{sum_inf}
\sum_{w \in V}\left(1-{\mathbf P}_{p,\,\rho}\left((X_w=1,R_w > d(w,v))^c\right)\right) &=& + \infty.
\end{eqnarray}
Since
\begin{eqnarray}
1-{\mathbf P}_{p,\,\rho}\left((X_w=1,R_w > d(w,v))^c\right) &=& {\mathbf P}_{p,\,\rho}\left(X_w=1,R_w > d(w,v)\right) \nonumber \\
&=& {\mathbf P}_{p,\,\rho}\left(X_w=1\right) {\mathbf P}_{p,\,\rho}\left(R_w > d(w,v)\right) \nonumber \\
&=& p\, {\mathbf P}_{\rho}\left(R_v > d(w,v)\right) \nonumber \\
&=& p\, {\mathbf E}_{\rho}\left[1\{w \in B(v,R_v)\}\right], \nonumber
\end{eqnarray}
we have that
\begin{eqnarray}
\sum_{w \in V}\left(1-{\mathbf P}_{p,\,\rho}\left((X_w=1,R_w > d(w,v))^c\right)\right) &=& p \sum_{w \in V} {\mathbf E}_{\rho}\left[1\{w \in B(v,R_v)\}\right] \nonumber \\
&=& p\, {\mathbf E}_{\rho}\left[\medida{B(v,R_v)}\right].
\end{eqnarray}
We conclude that ${\mathbf P}_{p,\,\rho}(\exists\, w\in{\mathcal P}: v \in B(w,R_w))=1$ if, and only if, ${\mathbf E}_{\rho}\left[\medida{B(v,R_v)}\right] = + \infty$. Since $\Gamma$ is a graph whose growth is at least polynomial, we get
that
\begin{eqnarray}
{\mathbf E}_{\rho}\left[\medida{B(v,R_v)}\right] &\geq& C_1 {\mathbf E}_{\rho}\left[R^C\right] \nonumber \\
&=& +\infty.
\end{eqnarray}
\section{The Number of Infinite Clusters}
\label{ergodicitynumberofinfiniteclusters}
In this section we address the problem of determining {\it{how many infinite connected components there can be}}. We give an answer when the underlying graph has a family of symmetries which ``acts separating points''. We begin by recalling that an isometry of a graph
${\Gamma}$ with vertex set $V$ is a function $g: V \to V$ preserving the geodesic
distance of the graph, i.e., $d\left(g(v),g(w)\right)=d(v,w)$. We denote by
$\mathrm{Iso}\,({\Gamma})$ the group of isometries of ${\Gamma}$ and we observe that
isometries preserve the counting measure in ${\Gamma}$.
We say that a family of isometries $S \subset \mathrm{Iso}\,({\Gamma})$ \textit{acts separating
points} of ${\Gamma}$ if the orbit of any vertex of ${\Gamma}$ by the action of $S$ is infinite.
As a direct consequence of this definition, $S$ also separates compacts (i.e., finite sets): given a
compact set $K\subset V$, there exists $g\in S$ such that $g(K)\cap K = \emptyset$.
Let $({\mathcal X},{\mathcal R})$ be a Bernoulli marked point process in ${\Gamma}$ determined by a
family of random variables ${\mathcal X}=(X_v:\, v\in V)$ and ${\mathcal R}=(R_v: v\in V)$. We
say that an isometry $g:{\Gamma} \to {\Gamma}$ leaves the marked point process
invariant if the random variables $R_{g(v)}$ and $R_{v}$ are
equally distributed. Henceforth we assume that the isometries $g\in S$ leave the marked point process invariant.
In order to state the result about ergodicity of the Boolean model we need to
define the action of the family of isometries on the marked point process.
For that purpose we assume that the
marked point process is defined in the space of counting measures
\[
(\hat{{\Gamma}}, \mathcal{A}):= (\mathcal{N}({\Gamma}\times {\mathbb N}_0), \bor{{\Gamma} \times {\mathbb N}_0})
\]
\noindent where $\mathcal{N}({\Gamma}\times {\mathbb N}_0)$ is the set of locally finite counting measures in
${\Gamma}\times {\mathbb N}_0$. Let ${\mathbf P}_{p,\,\rho}$ be the distribution of a marked point process with retention parameter $p$.
For each isometry $g \in S$ leaving the marked point process
invariant, we induce a map $\hat{g}:(\hat{{\Gamma}}, \mathcal{A},{\mathbf P}_{p,\,\rho}) \rightarrow (\hat{{\Gamma}}, \mathcal{A})$ as follows:
\[
\hat{g}(\omega)(B) = \omega(g^{-1}(B)),
\]
\noindent where $B\in \bor{{\Gamma} \times {\mathbb N}_0}$ and $\omega \in \hat{{\Gamma}}$. The function $\hat{g}$ is measurable and we observe that if $g$ leaves the process invariant then $\hat{g}$ is measure-preserving.
Let $\mathcal{I}$ denote the sigma-field of events that are invariant under all isometries of $S$. The measure ${\mathbf P}_{p,\,\rho}$ is called $S$-\textit{ergodic} if for each $A\in \mathcal{I}$ we have either ${\mathbf P}_{p,\,\rho}(A) = 0$ or ${\mathbf P}_{p,\,\rho}(A^c) = 0$.
We also note that the Boolean model is \textit{insertion tolerant}, i.e.,
$${\mathbf P}_{p,\,\rho}(A\cup \{v\})>0$$ for every vertex $v$ and every measurable $A$ determined by the marked point process $\left({\mathcal X},{\mathcal R}\right)$ with ${\mathbf P}_{p,\,\rho}(A)>0$. In the previous definition, we use that a vertex may be viewed as a ball of radius $0$ and hence it can be identified with an element of $\bor{{\Gamma} \times {\mathbb N}_0}$.
The insertion tolerance property follows from a direct comparison between the discrete Boolean percolation model and the underlying Bernoulli point process. In fact, a stronger property holds:
\[
{\mathbf P}_{p,\,\rho}(A\cup \{v\}) \geq p\,{\mathbf P}_{p,\,\rho}(A).
\]
Clearly, the definition of insertion tolerance may be generalized to any finite set of vertices $K$, and we may conclude that ${\mathbf P}_{p,\,\rho}(A\cup K)>0$ for any finite subset $K$ and any measurable set $A$ satisfying ${\mathbf P}_{p,\,\rho}(A)>0$. Then, an adaptation of the arguments given in \cite{yuval} yields:
\begin{teo}[Ergodicity of the Boolean model] \label{ergodicity}
Let $({\mathcal X},{\mathcal R})$ be a Bernoulli marked point process in a connected locally finite graph ${\Gamma}$. Assume that ${\Gamma}$ has a family of isometries $S$ separating points and leaving the marked point process $({\mathcal X},{\mathcal R})$ invariant. Then the Boolean discrete percolation model is $S$-ergodic.
\end{teo}
\begin{proof}
Let $A$ be an $S$-invariant event, that is, a set $A\in\mathcal{A}$
satisfying
$\hat{g}A=A$ for all $g\in S$.
The idea is to show that $A$ is almost independent of $\hat{g}A$ for some $g$.
Let $\varepsilon > 0$. Since $A$ is measurable, we may conclude from Theorem A.2.6.3 III in
\cite{verej} that there exists a cylinder event
$B$ which depends only on some finite set $K$ such
that ${\mathbf P}_{p,\,\rho}(A \triangle B) < \varepsilon$. For all $g\in S$, we have
${\mathbf P}_{p,\,\rho}(\hat{g}A \Delta \hat{g}B) = {\mathbf P}_{p,\,\rho}[\hat{g}(A \Delta B)] <\varepsilon$. By
assumption $S$ acts separating points, so there exists some $g\in S$ such
that $K$ and $gK$ are
disjoint. Since $gB$ depends only on $gK$, it follows that for some $g\in S$ the events
$B$ and $gB$ are
independent. Thus,
\begin{eqnarray*}
|{\mathbf P}_{p,\,\rho} (A) - {\mathbf P}_{p,\,\rho} (A)^2 | &=& |{\mathbf P}_{p,\,\rho} (A\cap gA) - {\mathbf P}_{p,\,\rho} (A)^2 |\\
&\leq& |{\mathbf P}_{p,\,\rho} (A\cap gA) - {\mathbf P}_{p,\,\rho} (B\cap gA)| \\
&+& |{\mathbf P}_{p,\,\rho} (B\cap gA) - {\mathbf P}_{p,\,\rho} (B\cap
gB)| \\
&+& |{\mathbf P}_{p,\,\rho} (B\cap gB) - {\mathbf P}_{p,\,\rho} (B)^2 | \\
&+& |{\mathbf P}_{p,\,\rho} (B)^2 - {\mathbf P}_{p,\,\rho} (A)^2 |\\
&\leq& {\mathbf P}_{p,\,\rho} (A \triangle B) + {\mathbf P}_{p,\,\rho} (gA \triangle gB) \\
&+& |{\mathbf P}_{p,\,\rho} (B){\mathbf P}_{p,\,\rho} (gB) -
{\mathbf P}_{p,\,\rho} (B)^2 |\\
&+& |{\mathbf P}_{p,\,\rho} (B) - {\mathbf P}_{p,\,\rho} (A)| \left({\mathbf P}_{p,\,\rho} (B) + {\mathbf P}_{p,\,\rho} (A)\right) \\
&<& 4\varepsilon .
\end{eqnarray*}
Since $\varepsilon$ was arbitrary, ${\mathbf P}_{p,\,\rho} (A) = {\mathbf P}_{p,\,\rho} (A)^2$, so ${\mathbf P}_{p,\,\rho} (A) \in \{0, 1\}$ and the proof is complete.
\end{proof}
As a consequence of the ergodicity of the Boolean model we get the following theorem.
\begin{teo}
Let $({\mathcal X},{\mathcal R})$ be a Bernoulli marked point process in a connected locally finite graph ${\Gamma}$. Assume that ${\Gamma}$ has a family of isometries $S$ separating points and leaving the marked point process $({\mathcal X},{\mathcal R})$ invariant. Then the number
of infinite clusters in the Boolean discrete percolation model is almost surely constant and equal to either $0$, $1$, or $\infty$.
\end{teo}
\begin{proof}
Let $N_{\infty}$ denote the number of infinite clusters. The action of any element of $S$ on a configuration does not change the value of $N_{\infty}$. In order to prove this assertion, consider $g\in S$ and a realization $\omega$ of the process. Since $\hat{g}(\omega)$ is obtained from $\omega$ by relabelling vertices through the isometry $g$, the connected components of the associated random graphs are in bijection, so $N_{\infty}(\hat{g}(\omega)) = N_{\infty}(\omega)$. Hence, $N_{\infty}$ is measurable with respect to the sigma-algebra $\mathcal{I}$ of the $S$-invariant sets. Since the Boolean model is ergodic, we may conclude that $N_{\infty}$ is almost surely constant.
Now, assume that $N_{\infty} = k$ almost surely, for some $k$ with $2 \leq k < \infty$. Let $v\in V$. Then there exists an $R> 0$ such that, with positive probability, $B(v,R)$ intersects all the $k$ infinite clusters. It follows from the insertion tolerance property that the probability that the number of infinite clusters equals one is positive, contradicting the hypothesis that $N_{\infty} \geq 2$ almost surely. The proof is complete.
\end{proof}
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\begin{abstract}
We use cell decomposition techniques to study additive reducts of $p$-adic fields. We consider a very general class of fields, including fields with infinite residue fields, which we study using a multi-sorted language. The results are used to obtain cell decomposition results for the case of finite residue fields. We do not require fields to be Henselian, and we allow them to be of any characteristic.\end{abstract}
\title{Cell decomposition for semi-affine structures on $p$-adic fields}
\section{Introduction}
It is hard to overstate the importance of cell decomposition techniques for the study of $o$-minimal structures. The technique made it possible to obtain results for a wide array of topics, ranging from the study of definable invariants to differentiability of definable functions, see e.g.\ van den Dries \cite{vdd-98} for details.
Another example is the classification of reducts of $(R,+,\cdot, <)$ by Peterzil \cite{mpp-92,pet-93,pet-92} and others. One of the most striking results he obtained is the fact that there exists only a single structure between the structure of semi-algebraic sets and the semi-linear sets of $(R,+,\{\lambda_a\}_{a \in R})$: a structure where multiplication is definable only on a bounded interval.
The question whether a similar result would exist in the $p$-adic context was one of the motivations for this paper: a good understanding of semi-affine structures is a necessary first step
towards answering this question.
In the upcoming papers \cite{lee-2012.1, lee-2012.2} we will report our findings.
For $p$-adic structures, a number of cell decomposition results do exist. Probably the most well-known is the cell decomposition result for semi-algebraic sets by Denef \cite{denef-86}, which allowed him to give a new proof of Macintyre's quantifier elimination result \cite{mac-76}, and which has been a very useful tool in the study of $p$-adic integrals, see e.g.\ Denef \cite{denef-86} or Cluckers and the author \cite{clu-lee-2008}.
Haskell and Macpherson \cite{has-mac-97} developed $P$-minimality as a $p$-adic alternative to $o$-minimality, to study (expansions) of $p$-adically closed fields. It was shown later by Mourgues \cite{mou-09} that such structures admit cell decomposition (using Mourgues definition of cells) if and only if they have definable Skolem functions.
Most existing $p$-adic cell decomposition results focus on (expansions of) the semi-algebraic structure. This poses a complication for obtaining $p$-adic equivalents of Peterzil's result, because there does not really exist a minimality theory for weak $p$-adic structures.
In a previous paper \cite{clu-lee-2011} we proposed to consider all structures $(K,\mathcal{L})$, where $K$ is a $p$-adic field and the $\mathcal{L}$-definable subsets of $K$ are the same as the $\Lm_{\text{ring}}$-definable subsets of $K$. This is a direct $p$-adic equivalent of $o$-minimal reducts of $(R,+,\cdot, <)$. Unfortunately, we were unable to prove whether such structures would always admit some form of cell decomposition. We gave a few suggestions in \cite{lee-2011b}, but it seems to be rather difficult even to suggest a useful general notion of cells, so a general cell decomposition theorem for such structures still seems inaccessible.
A natural first step is to study the properties of individual structures. In \cite{lee-2011b} we consider some very weak structures (where even addition is not definable everywhere), and in this paper we will look at the $p$-adic equivalent of semi-linear sets. Some time ago, Liu \cite{liu-94} obtained a cell decomposition for the semi-linear structure $(\mathbb{Q}_p; +,-, \{\overline{c}\}_{c\in \mathbb{Q}_p}, \{P_n\}_n)$, where $\overline{c}$ is a symbol for scalar multiplication $x \mapsto cx$, and the $P_n$ are the nonzero $n$-th powers.
This paper describes similar structures, but in a more general context.
We will consider structures $(K,\mathcal{L})$, where $\mathcal{L}$ is a semi-affine language and where $K$ is a $\mathbb{Z}$-field: a valued field that satisfies the following extra conditions. Write $\Gamma_K$ for the value group, and let $R_K$ be the valuation ring of $K$.
\begin{definition}
A $\mathbb{Z}$-field is a valued field $K$ that contains an element $\pi$ of minimal positive valuation. Further, we require that $\Gamma_K$ is a $\mathbb{Z}$-group (that is, $\Gamma_K/\mathbb{Z}$ is divisible) and that there exist angular component maps $\text{ac}\,m: K\to R_K/\pi^mR_K$.
\end{definition}
We will assume that the valuation is normalized such that $\mathrm{ord}\, \pi =1$.
The required
angular component maps always exist if $\Gamma_K = \mathbb{Z}$ and $K$ has a uniformizing element $\pi$. The proof is similar to the proof of Lemma \ref{lemma:acm}.
Note that we do \emph{not} put any conditions on the residue field $\mathbb{F}_K$ and the characteristic of $K$. Moreover, we do not require the field $K$ to be Henselian.
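For instance, $\mathbb{Q}_p$ and its finite extensions, the Laurent series fields $\mathbb{F}_q((t))$, and $\mathbb{C}((t))$ (the latter having infinite residue field) are all $\mathbb{Z}$-fields: in each case the value group equals $\mathbb{Z}$ and a uniformizing element exists, so the required angular component maps exist by the remark above.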
When studying structures on valued fields, often multi-sorted languages are considered, typically consisting of a field sort and various other, auxiliary sorts used to encode information concerning the residue field and angular components. See for example Pas \cite{pas-89,pas-90}, who used multi-sorted languages to study semi-algebraic structures for fields with infinite residue fields. This approach was extended to fields with analytic structure by Cluckers, Lipshitz and Robinson \cite{clr-06}. Other recent examples include Scanlon, who used a multi-sorted language to study valued $D$-fields \cite{sca-03}, and Cluckers and Loeser \cite{clu-loe-07}, who obtain cell decomposition for henselian valued fields of characteristic zero.
Most of the examples given above are essentially multi-sorted versions of (extensions of) the language of valued fields. We present a multi-sorted language where full multiplication is not definable, but such that `multiplicative' relations like the valuation of $x$ modulo $n$ are still definable. (This relation is equivalent to $x$ being in certain cosets of the set of $n$th powers.)
The valued field $K$ will be the main sort, equipped with the language $(+, \cdot_{\pi}, |)$.
The function $\cdot_{\pi}$ is defined as \[\cdot_{\pi}:K\to K: x \mapsto \pi x.\]
The divisibility relation $|$ is defined as $x \mid y$ iff $\mathrm{ord}\, x \leqslant \mathrm{ord}\, y$.
The auxiliary sorts $\Lambda_{n,m}$ are constructed as follows.
Since $\Gamma_K$ is a $\mathbb{Z}$-group, there exist maps
\(\gamma_n: K^{\times} \to \{0,\, \ldots,\, n-1\},\)
where $\gamma_n(x)$ is the remainder of $\mathrm{ord}\, x$ after division by $n$.
For every $x \in K^{\times}$, put
\[\rho_{n,m}(x) = \pi^{\gamma_n(x)}\text{ac}\,m(x).\]
Extend this to $K$ by putting $\rho_{n,m}(0) = 0$.
Our auxiliary sorts will then be the sets of equivalence classes:
\[\Lambda_{n,m}:=\{\rho_{n,m}(x)\ | \ x \in K\}.\]
The maps $\rho_{n,m}$ project the main sort $K$ onto the auxiliary sorts $\Lambda_{n,m}$. The language on the auxiliary sorts contains no symbols. Schematically, this gives us the following language $\Lm_{\text{aff}}^{\pi}$:
\[\xymatrix{\ar[d]^{\rho_{n,m}}K& \hspace{-50pt}(+, \cdot_{\pi}, | ) \\
\{\Lambda_{n,m}\}_{n,m}&\hspace{-25pt}}\]
Note that this language does not use the value group as a separate sort. However, the sets $\Lambda_{n,m}$ retain information on the value group, modulo an integer $n$.
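For instance, take $K=\mathbb{Q}_p$ and $\pi=p$. For $n=m=1$ one has $\gamma_1(x)=0$ for every $x\in K^{\times}$, so that $\rho_{1,1}(x)=\text{ac}\,1(x)$, and the sort $\Lambda_{1,1}$ may be identified with the residue field $\mathbb{F}_p$ (the element $0$ being $\rho_{1,1}(0)$).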
Let us give some examples of relations that are definable in this language.
\begin{lemma}\label{lemma:defvb} Let $K$ be a $\mathbb{Z}$-field.
For every $k, n \in \mathbb{N}$, the following subsets of $K$ are $\Lm_{\text{aff}}^{\pi}$-definable:
\begin{enumerate}
\item $\{ (x,y) \in K^2 \mid \mathrm{ord}\, x = \mathrm{ord}\, y \}$
\item $\{ x\in K \mid \mathrm{ord}\, x \equiv k \mod n \}$
\end{enumerate}
\end{lemma}
\begin{proof}
The relation $\mathrm{ord}\, x = \mathrm{ord}\, y$ is equivalent to $x \mid y \wedge \neg (\pi x \mid y)$. We can use this to express that $\mathrm{ord}\, x = \gamma$ for any $\gamma \in \Gamma_K$ by substituting a suitable constant from $K$ for $y$. For $\lambda \in \Lambda_{n,m}$ we can then express that $\mathrm{ord}\, \lambda \equiv k \mod n$ in the following way:
\[ \mathrm{ord}\, \lambda \equiv k \mod n \leftrightarrow (\exists x \in K)[\rho_{n,m}(x) = \lambda \text{ \ and \ } \mathrm{ord}\, x = k].\]
We can now use the formula
\[(\exists \lambda \in \Lambda_{n,m})[(\mathrm{ord}\, \lambda \equiv k \hspace{-2pt}\mod n) \text{\ \ and \ } \rho_{n,m}(x) = \lambda].\]
to define the set consisting of all $x \in K$ such that $\mathrm{ord}\, x \equiv k \mod n$.
\end{proof}
If $K$ is a $\mathbb{Z}$-field with infinite residue field,
the set $\{x \in K \mid \mathrm{ord}\, x \equiv k \mod n\}$ cannot be defined in $\Lm_{\text{aff}}^{\pi}$ without using $K$-quantifiers.
To remedy this,
we expand the language in Section \ref{subsec:langdef}, thus obtaining an additive variant of the language studied by Pas.
In Sections \ref{subsec:celdec} and \ref{subsec:defsetfun},
we show that $\mathbb{Z}$-fields
admit elimination of $K$-quantifiers in this extended language $\Lm_{\text{aff}}infQE$. The proof uses cell decomposition techniques. We also give a characterization of the definable functions $f: K^n \to K^m$.
In Section \ref{sec:finres}, we restrict our attention to fields with finite residue field. For such fields, we can `collapse' the multisorted language to a language with just one sort, and derive cell decomposition and quantifier elimination from the results we obtained for the multisorted language.
To make the distinction between mono- and multisorted languages clear, we will use the following terminology. The definable sets of our multi-sorted language are called `semi-additive' sets. We will refer to the mono-sorted languages we deduce from this as `semi-affine' languages. `Semi-linear' sets are the definable sets of the structure Liu studied on $\mathbb{Q}_p$. We will compare our results for semi-affine sets with existing results for semi-linear and semi-algebraic sets. In particular, we give a characterization of definable functions in Section \ref{subsec:skol}, and in Section \ref{subsec:clas}, we give some examples to show that classification by definable bijection is not quite as simple as it is for semi-algebraic sets. (It was shown by Cluckers \cite{clu-2000} that any two infinite $p$-adic semi-algebraic sets are isomorphic if and only if they have the same dimension.)
\section{Affine structures with infinite residue field}
\subsection{Definition of the languages $\Lm_{\text{aff}}inf$ and $\Lm_{\text{aff}}infQE$} \label{subsec:langdef}
Let $K$ be a valued field with value group $\Gamma_K$ and valuation ring $R_K$. Let $\pi$ be an element of minimal positive valuation, so that $\mathrm{ord}\, \pi =1$. We use the notation $\text{ac}\,m$ for the angular component maps $\text{ac}\,m: K\to R_K/\pi^mR_K$.
The only symbol for multiplication we included in $\Lm_{\text{aff}}^{\pi}$ is $\cdot_{\pi}$. However, as addition is definable, scalar multiplication by every $n \in \mathbb{N}$ is definable. This implies that if $K$ has characteristic zero, multiplication by every $c \in \mathbb{Q}(\pi)$ is definable. If $\mathrm{char}(K)=p$, we can define scalar multiplication by every $c \in \mathbb{F}_p(\pi)$. \\In general, if we denote the prime field of $K$ by $\mathbb{P}_K$, we can thus define a scalar multiplication map $\overline{c} : K\to K : x\mapsto cx$ for every $c \in \mathbb{P}_K(\pi)$.
We added the symbol $\cdot_{\pi}$ because of the functions it induces on the auxiliary sets. We do not include symbols for scalar multiplication by other constants, as we want to keep the language as basic as possible. However, it is possible to define variations on $\Lm_{\text{aff}}^{\pi}$ that contain a wider range of symbols for scalar multiplication. It is easy to see that such languages can be studied in a similar way as $\Lm_{\text{aff}}^{\pi}$. In fact, we refer to these related languages when we consider the case of finite residue fields.
The addition map for the main sort $K$ induces addition functions $+_r^{(n,m)}$ on the auxiliary sorts $\Lambda_{n,m}$,
where $r\in \mathbb{N}$ is such that $rn<m$.
If $rn \leqslant \mathrm{ord}\, \frac{y}{x}<(r+1)n$ (and some additional conditions if $r=0$), these functions are designed to satisfy the relation
\[\rho_{n,m}(x+y) = \rho_{n,m}(x) +_r^{(n,m)} \rho_{n,m}(y).\]
Why do we need to consider multiple addition functions on the auxiliary sorts? To see this, let us compare with a similar construction in a different language.
In \cite{fle-2011}, Flenner considers a language with auxiliary sorts \[RV_{\gamma} := K^{\times}/(1+ M_{\gamma}),\] where $M_{\gamma} = \{ x\in R_K \mid \mathrm{ord}\, x > \gamma\}$. The quotient map, which is denoted $\mathrm{rv}_{\gamma}: K^{\times} \to RV_{\gamma}$, induces an addition function $\oplus_{\gamma}$ on each sort $RV_{\gamma}$ that is compatible with the addition in the main sort, in the sense that
\[\mathrm{rv}_{\gamma}(x+y) = \mathrm{rv}_{\gamma}(x) \oplus_{\gamma} \mathrm{rv}_{\gamma}(y),\] for all $x,y$ for which $\mathrm{ord}\, (x+y) = \min\{\mathrm{ord}\, x, \mathrm{ord}\, y \}.$
If this condition is not satisfied, the operation $\oplus_{\gamma}$ is not well-defined, since then $\mathrm{rv}_{\gamma}(x+y)$ depends on the representatives $x$ and $y$, and not only on $\mathrm{rv}_{\gamma}(x)$ and $\mathrm{rv}_{\gamma}(y)$. To define the value of $\lambda_1 \oplus_{\gamma} \lambda_2$ for $\lambda_1, \lambda_2 \in RV_{\gamma}$,
one chooses representatives $x_i$ such that $\mathrm{rv}_{\gamma}(x_i) = \lambda_i$, and then puts $\lambda_1 \oplus_{\gamma} \lambda_2 := \mathrm{rv}_{\gamma}(x_1 + x_2).$ If $\mathrm{ord}\, (x_1+x_2) = \min\{\mathrm{ord}\, x_1, \mathrm{ord}\, x_2 \},$ this value does not depend on the chosen representatives, so this addition is well-defined.
The main difference between the sorts $RV_{\gamma}$ and the sorts $\Lambda_{n,m}$ is that $\mathrm{rv}_{\gamma}(x)$ remembers the order of $x$, while $\rho_{n,m}(x)$ only retains the order modulo $n$.
Hence we will have to be more careful, since every equivalence class in $\Lambda_{n,m}$ contains representatives with different orders.
Let $\lambda, \lambda' \in \Lambda_{n,m}$ and suppose that we want to define $\lambda\oplus \lambda'$. The outcome will depend on the distance between the chosen representatives, by the following lemma:
\begin{lemma}\label{lemma:rhoab}
Let $\delta \in \{-1,1\}$. Suppose that $\rho_{n,m}(a) = \lambda$ and $\rho_{n,m}(b) = \mu$; then $\rho_{n,m}(a+\delta b)$ equals
\[ \left\{\begin{array}{lcl}
\hspace{-4pt}\lambda &\text{if}& m + \mathrm{ord}\, a \leqslant \mathrm{ord}\, b, \\
\hspace{-4pt} \rho_{n,m}(\lambda +\delta \mu \pi^{rn}), \text{with} \ rn = \mathrm{ord}\, (\frac{\lambda b}{\mu a})& \text{if} & -m +\mathrm{ord}\, b <\mathrm{ord}\, a <\mathrm{ord}\, b,\\
\hspace{-4pt} \rho_{n,m}(\lambda + \delta \mu)&\text {if}& \mathrm{ord}\, a = \mathrm{ord}\, b = \mathrm{ord}\, (a + \delta b).
\end{array}\right.\]
\end{lemma}
\begin{proof}Left as an exercise.
\end{proof}
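As a sanity check, the following short computation verifies an instance of the middle case of the lemma over $\mathbb{Q}_3$ with $\delta=1$; it reuses the helper functions \texttt{ord\_p} and \texttt{rho\_nm} from the $\mathbb{Q}_p$ sketch above, so it is not self-contained.
\begin{verbatim}
# Numerical instance of the middle case of the lemma over Q_3 (delta = +1).
p, n, m = 3, 2, 4
a, b = 5, 2 * 27                             # ord a = 0 < ord b = 3 < ord a + m
lam, mu = rho_nm(a, p, n, m), rho_nm(b, p, n, m)
rn = ord_p(lam * b, p) - ord_p(mu * a, p)    # rn = ord(lambda*b / (mu*a))
lhs = rho_nm(a + b, p, n, m)
rhs = rho_nm(lam + mu * p**rn, p, n, m)
print(lhs, rhs, lhs == rhs)                  # prints: 59 59 True
\end{verbatim}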
If we want to define addition maps, we will have to take these different possibilities into account. Not that this is a bad thing: this means that we can use the auxiliary sorts $\Lambda_{n,m}$ to encode information about the distance between elements of $K$. This will be important when we want to achieve cell decomposition.
Let us now give a precise definition of the addition maps $+_r^{(n,m)}$.
If $r \geqslant 1$, let $\lambda +^{(n,m)}_r \lambda'$ be the unique value $\rho \in \Lambda_{n,m}$ such that
\[(\exists x,y \in K)\left[\begin{array}{cl}&[\rho_{n,m}(x)=\lambda] \wedge [\rho_{n,m}(y)= \lambda'] \\ \wedge & [0 \leqslant \mathrm{ord}\, x <n]
\wedge [0 \leqslant \mathrm{ord}\, y <n] \\ \wedge & \rho_{n,m}(x+\pi^{rn}y) = \rho \end{array}\right]\]
(The above formula cannot be used if $\lambda=0$ or $\lambda'=0$. We can extend the definition to these cases by putting $0 +^{(n,m)}_r \lambda = \lambda +^{(n,m)}_r 0 =\lambda$.)
If $r=0$, the operation above does not always yield a unique result. For this reason, we will restrict the domain to
$D_+ \hspace{-2pt}:=\hspace{-1pt} \{(\lambda, \lambda') \in \Lambda_{n,m}^2 \ | \ \phi(\lambda,\lambda')\}$,
where $\phi(\lambda,\lambda')$ is the formula
\[(\forall x, y \in K)\left[[\rho_{n,m}(x) = \lambda \wedge \rho_{n,m}(y) = \lambda' \ \wedge \mathrm{ord}\, x =\mathrm{ord}\, y]\Rightarrow\mathrm{ord}\, (x+y) = \mathrm{ord}\, x\right].\]
For $(\lambda, \lambda') \in D_+$, define $\lambda +^{(n,m)}_0 \lambda'$ by the same formula as for $r\geqslant 1$; \ put $\lambda +^{(n,m)}_0 \lambda':=0$ if $(\lambda,\lambda') \notin D_+$.
Analogously, we can define functions $ -^{(n,m)}_r $. If the domain is clear from the context, we will simply write $+_r$ or $-_r$.
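Continuing the $\mathbb{Q}_p$ sketch from above (and again reusing \texttt{rho\_nm}), one can compute $\lambda +_r^{(n,m)} \lambda'$ directly on the integer representatives, since every representative produced by \texttt{rho\_nm} already has order in $\{0,\ldots,n-1\}$:
\begin{verbatim}
# lambda +_r^{(n,m)} lambda' on integer representatives (Q_p sketch).
# For r = 0 the result is only meaningful on the domain D_+ of the text.
def add_r(lam, lamp, p, n, m, r):
    if lam == 0 or lamp == 0:
        return lam + lamp                     # 0 is neutral by convention
    return rho_nm(lam + p**(r * n) * lamp, p, n, m)

# Example over Q_3 with n = 2, m = 2:
print(add_r(4, 7, 3, 2, 2, 1))   # rho_{2,2}(4 + 9*7) = rho_{2,2}(67) = 4
\end{verbatim}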
We are now ready to introduce the language $\Lm_{\text{aff}}infQE$, which is a definitional expansion of $\Lm_{\text{aff}}inf$, obtained by adding symbols for the functions we discussed above, and symbols for the relation
\[\equiv_{n,k}(\lambda) \leftrightarrow \mathrm{ord}\, \lambda \equiv k \mod n,\]
which we showed to be definable in the proof of Lemma \ref{lemma:defvb}.
Schematically, this gives us the following language:
\[\xymatrix{\ar[d]^{\rho_{n,m}}K& \hspace{-160pt}(+, \overline{c}_{c\in \mathbb{P}_K(\pi)}, | ) \\
\{\Lambda_{n,m}\}_{n,m}&\hspace{-20pt}
(\{+_r^{(n,m)}\}_{r \in \mathbb{N}},\{-_r^{(n,m)}\}_{r\in \mathbb{N}},\{\equiv_{n,k}\}_{k\in\mathbb{N}})}\]
We will show that $\mathbb{Z}$-fields admit quantifier elimination and cell decomposition in this language.
Remark: The same notation $\rho_{n,m}$ will also be used for the natural projection maps
\[\rho_{n,m}:\Lambda_{kn,m'}\to \Lambda_{n,m},\]
with $k \in \mathbb{N}\backslash\{0\}$, $m' \geqslant m$. These maps are clearly definable in our original language. We assume that our extended language contains symbols for these maps. These projection maps are `compatible' with the functions we defined on the $\Lambda_{n,m}$: for example for the addition maps we have
\[\rho_{n,m}(\lambda +_r^{(kn,m')} \lambda') = \rho_{n,m}(\lambda) +_r^{(n,m)} \rho_{n,m}(\lambda').\]
\subsection{Subsets of $K^k$ definable without $K$-quantifiers in $\Lm_{\text{aff}}infQE$} \label{subsec:form_of_kqf-formula}
In this section we will give a short description of $K$-quantifier-free definable subsets of $K^k$. Let $\phi(x)$ be a formula without $K$-quantifiers, and such that all free variables $x= (x_1, \ldots, x_k)$ are $K$-variables. We use the following notation.
\begin{itemize}
\item Let $g_{i,n,m}(\lambda_1, \ldots, \lambda_r)$ denote a term in the $\Lambda_{n,m}$-sort.
\item Let $f_i(x)$ denote a linear polynomial in the $K$-variables $x$ with coefficients in $\mathbb{P}_K(\pi)$ and constant term in $K$. We call this a $(\mathbb{P}_K(\pi),K)$-linear polynomial.
\item Let $\theta_{i,k',n,m}(\lambda_1, \ldots, \lambda_{k'})$ be a formula in the $\Lambda_{n,m}$-sort with $k'$ free variables.
\end{itemize}
With this notation, $\phi(x)$ is a boolean combination of expressions $\phi_{i,n,m}(x)$ and $\psi_{i,j}(x)$:
\begin{enumerate}
\item[(a)] Put $\mu_{j}(x) := g_{j,n,m}(\rho_{n,m}(f_{j,1}(x)), \ldots, \rho_{n,m}(f_{j,r}(x)))$, then $\phi_{i,n,m}(x)$
is a formula of the form $\phi_{i,n,m}(x) \leftrightarrow \theta_{i,k',n,m} (\mu_1(x),\ldots, \mu_{k'}(x))$.
\item[(b)] $\psi_{i,j}(x) \leftrightarrow \mathrm{ord}\, f_i(x) \ \square \ \mathrm{ord}\, f_j(x)$, where $\square$ may denote $<,\leqslant, >,\geqslant,=$.
\end{enumerate}
Note that we may assume that the same value of $n$ and $m$ occurs in every expression of type $\phi_{i,n,m}$. (Indeed, expressions $\phi_{i,n,m}$ and $\phi_{j,n',m'}$ can, with the help of projection maps $\rho_{n,m}$ and $\rho_{n',m'}$, be rewritten as expressions $\phi_{i,n'',m''}, \phi_{j,n'',m''}$, where $n'' = \text{lcm}\{n,n'\}$ and $m'' = \max\{m,m'\}$.) \\
Also, since any negation of an expression of type (a) or (b) can again be rewritten as an expression of the same form, $\phi(x)$ can be obtained by taking (a finite number of) conjunctions and disjunctions of such expressions.
Furthermore, note that any expression of type (a) is equivalent to
\[(\exists \lambda_{ij} \in \Lambda_{n,m})\left[\left(\bigwedge_{i,j} \rho_{n,m}(f_{i,j}(x)) = \lambda_{i,j}\right) \wedge \psi(\lambda_{11}, \ldots, \lambda_{k'r})\right]\]
where the formula $\psi$ is defined as \[\psi(\lambda_{11}, \ldots, \lambda_{k'r}) \leftrightarrow \theta_{i,k',n,m}(g_{1,n,m}(\lambda_{11}, \ldots, \lambda_{1r}), \ldots, g_{k',n,m}(\lambda_{k'1}, \ldots, \lambda_{k'r})).\]
It follows then immediately that $\phi(x)$ is in fact a disjunction of expressions of the form
\begin{equation} (\exists \lambda \in \Lambda_{n,m}^r)\left[\phi_1(x) \wedge \left(\bigwedge_i \rho_{n,m}(f_i(x)) = \lambda_i\right) \wedge \phi_2(\lambda) \right]\label{eq:Kqf-definable}\end{equation}where $\lambda = (\lambda_1, \ldots, \lambda_r)$. Here $\phi_1$ is a quantifier-free formula in the language of the main sort $K$, and $\phi_2$ is a formula in the language of the $\Lambda_{n,m}$-sort (not necessarily quantifier-free).\\\\
\subsection{Cell Decomposition} \label{subsec:celdec}
The following notation will be convenient.
Let $D \subseteq \Lambda_{n,m}^r \times K^k$ be a definable set, and suppose that
$r'\leqslant r$, $k'\leqslant k$ and $k'+r' <k+r$. For any $(\rho, b) \in \Lambda_{n,m}^{r'}\times K^{k'}$, the notation $D(\rho,b)$ denotes the set
\[D(\rho,b):=\{(\lambda, x) \in \Lambda_{n,m}^{r-r'}\times K^{k-k'} \ | \ (\rho, \lambda, b,x) \in D\}. \]
We next define our notion of cells. This notion of cells is closely analogous to the notions of cells used for other multi-sorted languages.
\begin{definition}
A cell in $\Lambda_{n,m}^r \times K^{k+1}$ is a set
\[\left\{(\lambda,x,t)\in D_{n,m} \times D_K\times K \ \left| \ \begin{array}{l} \mathrm{ord}\, a_1(x) \ \square_1 \ \mathrm{ord}\, (t-c(x)) \ \square_2 \ \mathrm{ord}\, a_2(x),\\\text{and }\rho_{n,m}(t-c(x)) \in D(\lambda, x)\end{array}\hspace{-2pt}\right\}\right.\hspace{-2pt}\]
where
\begin{itemize}
\item $D_{n,m}$ is a subset of $\Lambda_{n,m}^r$, $\Lm_{\text{aff}}infQE$ -definable without $K$-quantifiers,
\item $D_K$ is a subset of $K^k$, $\Lm_{\text{aff}}infQE$ -definable without $K$-quantifiers,
\item $D$ is a subset of $\Lambda_{n,m}^{r+1}\times K^k$, $\Lm_{\text{aff}}infQE$ -definable without $K$-quantifiers,
\item the functions $a_i(x), c(x)$ are $(\mathbb{P}_K(\pi),K)$-linear polynomials in the variables $(x_1,\ldots, x_k)$. We call $c(x)$ a center of the cell,
\item $\square_i$ may denote either $<$ or `no condition'.
\end{itemize}
\end{definition}
Note that in the description of such a cell, $\square_i$ can only denote a strict inequality `$<$'. However, in the expressions in $(b)$ of Subsection \ref{subsec:form_of_kqf-formula}, we also used `$\leqslant$' and `$=$'. We can exclude these options since they can be expressed in terms of a strict inequality. Indeed, \[\mathrm{ord}\, f(x) \leqslant \mathrm{ord}\, g(x) \Leftrightarrow \mathrm{ord}\, f(x) < \mathrm{ord}\, \pi g(x),\] and \[\mathrm{ord}\, f(x) = \mathrm{ord}\, g(x) \Leftrightarrow \mathrm{ord}\, f(x) < \mathrm{ord}\, \pi g(x) < \mathrm{ord}\, \pi^2 f(x).\]
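For orientation, here is a hypothetical membership test for a one-variable cell (that is, $k=0$ and both $\square_i$ equal to `$<$') in the running $\mathbb{Q}_p$ sketch, with $a_1, a_2, c$ fixed nonzero constants and $D$ a finite set of representatives; it reuses \texttt{ord\_p} and \texttt{rho\_nm} from above.
\begin{verbatim}
# Membership in a one-variable cell over Q_p (both box conditions are "<"):
#   ord a1 < ord(t - c) < ord a2   and   rho_{n,m}(t - c) in D.
def in_cell(t, a1, a2, c, D, p, n, m):
    u = t - c
    if u == 0:
        return False
    return (ord_p(a1, p) < ord_p(u, p) < ord_p(a2, p)
            and rho_nm(u, p, n, m) in D)

# Example: p = 3, n = 2, m = 1, cell with a1 = 1, a2 = 27, c = 2, D = {3}.
print(in_cell(11, 1, 27, 2, {3}, 3, 2, 1))  # t - c = 9: ord 2, rho = 1 -> False
print(in_cell(5, 1, 27, 2, {3}, 3, 2, 1))   # t - c = 3: ord 1, rho = 3 -> True
\end{verbatim}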
As a first step, we show that cells behave well when taking finite intersections.
\begin{proposition}\label{prop:intersection}
Let $C_1, C_2$ be two cells in $\Lambda_{n,m}^r \times K^{k+1}$. The intersection $C_1 \cap C_2$ can be partitioned as a finite union of cells.
\end{proposition}
\begin{proof}
First consider semi-cells of the following form:
\[C_{c}^{D}(a_1,a_2)\hspace{-1pt} :=\hspace{-1pt}\left\{(\lambda, x,t) \in D\times K \ | \ \mathrm{ord}\, a_1(x)\, \square_1 \, \mathrm{ord}\,
(t-c(x))\, \square_2 \, \mathrm{ord}\, a_2(x)\right\}\hspace{-2pt}. \]
Using the ultrametric property of the valuation, it is easy to see that
the intersection of two semi-cells
$C_{c_1}^{D_1}(a_1,a_2)$ and $C_{c_2}^{D_2}(b_1,b_2)$ can be
partitioned as a finite union of sets $A$, such that either
$A$ is the set of all $(\lambda,x,t) \in D\times K$ on which
\begin{equation}
\mathrm{ord}\, (t-c_1(x)) = \mathrm{ord}\, (t-c_2(x)) = \mathrm{ord}\, (c_1(x) -
c_2(x)),\label{vw3}
\end{equation}
with $D$ a subset of $\Lambda_{n,m}^r \times K^k$, definable without $K$-quantifiers,\\
or $A$ is a semi-cell $C_c^E(e_1,e_2)$, with the center $c(x)$ equal to $c_1(x)$ or $c_2(x)$, such that one of the
following is true on $A$:
\begin{eqnarray}
\mathrm{ord}\, (t-c(x))& > &\mathrm{ord}\,(c_1(x)-c_2(x)),\label{vw1}\\
\mathrm{ord}\, (t-c(x)) & < & \mathrm{ord}\, (c_1(x)-c_2(x)) .\label{vw2}
\end{eqnarray}
A set that satisfies one of these three conditions, say condition $(l)$, will be referred to as a set of type $(l)$.
A general cell $C_{c}^{D} (a_1,a_2, D_{\rho})$ has
the form:
\[ \left\{(\lambda,x,t) \in D\times K\ \left| \begin{array}{l}\mathrm{ord}\, a_1(x)\, \square_1 \, \mathrm{ord}\,
(t-c(x))\, \square_2 \, \mathrm{ord}\, a_2(x), \\ \text{and }\rho_{n,m}(t-c(x))
\in D_{\rho}(x,\lambda)\end{array}\right\}\right.\]
We want to intersect two
cells $C_{c_1}^{D_1} (a_1,a_2, D_{\rho}^{(1)})$ and $C_{c_2}^{D_2} (b_1,b_2,
D_{\rho}^{(2)})$. By the discussion above, we can write
\[C_{c_1}^{D_1} (a_1,a_2, D_{\rho}^{(1)}) \cap C_{c_2}^{D_2} (b_1,b_2, D_{\rho}^{(2)}) = \left( A^{(\ref{vw3})} \cup \bigcup_i A_i^{(\ref{vw1})} \cup \bigcup_j A_j^{(\ref{vw2})}\right)\cap Q\]
where \[Q = \left\{(\lambda,x,t) \in \Lambda_{n,m}^r \times K^{k+1} \ | \begin{array}{l} \rho_{n,m}(t-c_i(x)) \in D_\rho^{(i)}(x,\lambda), \quad \text{for\ } i = 1,2
\end{array}\right\}\] and $A_i^{(l)}$ is a set of type $(l)$. We will show that each $A_i^{(l)} \cap Q$ can be written as a finite union of cells.
After a straightforward further partitioning we may suppose that $t-c_1$ and $t-c_2$ are both nonzero, and thus that $0 \not \in D_{\rho}^{(i)}(\lambda,x)$ for any $(\lambda, x) \in D_i$. \\\\The first part of the above intersection is $A^{(\ref{vw3})} \cap Q$.
If we define $B_1$ to be the set
\[B_1 :=\left\{(\lambda, x, \rho) \in D_1\times \Lambda_{n,m} \ \left| \ \begin{array}{l} \rho \in D_{\rho}^{(1)}(\lambda,x), \\\text{and } \rho +_{0} \rho_{n,m}(c_1(x)-c_2(x)) \in D_{\rho}^{(2)}(\lambda,x)\end{array}\right\}\right.\hspace{-2pt}\]
then $A^{(\ref{vw3})} \cap Q=S$, with \[S:= \left\{(\lambda, x,t) \in (D_1 \cap D_2)\times K \ \left| \begin{array}{l} \mathrm{ord}\,(t-c_1(x)) = \mathrm{ord}\, (c_1(x)-c_2(x)), \\\text{and } \rho_{n,m}(t-c_1(x)) \in B_1(\lambda,x)\end{array}\right\}\right.\hspace{-2pt}\]
Indeed: if $(\lambda,x,t) \in A^{(\ref{vw3})} \cap Q$, then $\rho_{n,m}(t-c_2(x)) = \rho_{n,m}(t-c_1(x)) +_0 \rho_{n,m}(c_1(x)-c_2(x))$, and therefore $\rho_{n,m}(t-c_1(x))\in B_1(\lambda,x)$. On the other hand, the second condition in the description of $S$ implies that $\rho_{n,m}(t-c_1(x)) \,+_0\, \rho_{n,m}(c_1(x)-c_2(x)) \neq 0$, and since $\mathrm{ord}\,(t-c_1(x)) = \mathrm{ord}\,(c_1(x)-c_2(x))$, it follows from the definition of $+_0$ that also $\mathrm{ord}\,(t-c_2(x)) = \mathrm{ord}\,(t-c_1(x))$. But that means that $\rho_{n,m}(t-c_2(x)) = \rho_{n,m}(t-c_1(x)) +_0 \rho_{n,m}(c_1(x)-c_2(x))$, and thus $\rho_{n,m}(t-c_2(x)) \in D_\rho^{(2)}$, as required.\\\\
On a semi-cell $A_i^{(\ref{vw1})}$ with center $c_1(x)$, the condition $\mathrm{ord}\,(t-c_1(x)) > \mathrm{ord}\, (c_1(x) - c_2(x))$ holds. After a straightforward further partitioning, we get semi-cells $A_{i,j}^{(\ref{vw1})}$ with the same center, such that on each $A_{i,j}^{(\ref{vw1})}$, one of the conditions
\begin{align}
&\mathrm{ord}\, (t-c_1(x)) > \mathrm{ord}\, (c_1(x) - c_2(x)) + m,\text{ \ or \ }\label{easy}\\
&\mathrm{ord}\, (t-c_1(x)) = \mathrm{ord}\,(c_1(x) - c_2(x)) + k, \quad \text{for} \ 0 < k <m\label{harder}
\end{align} holds.
If condition (\ref{easy}) holds on $A_{i,j}^{(\ref{vw1})}$, then we can simply put
\[A_{i,j}^{(\ref{vw1})}\cap Q = \left\{(\lambda, x,t) \in A_{i,j}^{(\ref{vw1})} \ \left| \begin{array}{l} \rho_{n,m}(t-c_1(x)) \in D_\rho^{(1)}(\lambda,x), \\\text{and } \rho_{n,m}(c_1(x)-c_2(x)) \in D_\rho^{(2)}(\lambda,x)\end{array}\right\}\right.\]
since in this case $\rho_{n,m}(t-c_2(x)) = \rho_{n,m}(c_1(x)-c_2(x))$. If a condition of type (\ref{harder}) holds on $A_{i,j}^{(\ref{vw1})}$, then there exists some $r$ with $0\leqslant rn<m$ such that
\[\rho_{n,m}(t-c_2) = \rho_{n,m}(c_1-c_2) +_r \rho_{n,m}(t-c_1).\] If we define $B_1$ to be the set
\[B_1 :=\left\{(\lambda, x, \rho) \in D_1\times \Lambda_{n,m} \ \left| \ \begin{array}{l} \rho \in D_{\rho}^{(1)}(\lambda,x), \\\text{and } \rho_{n,m}(c_1(x)-c_2(x))+_{r} \rho \in D_{\rho}^{(2)}(\lambda,x)\end{array}\right\}\right.\]
then $A_{i,j}^{(\ref{vw1})}\cap Q$ is equal to the cell
\[A_{i,j}^{(\ref{vw1})}\cap Q= \{(\lambda,x,t) \in A_{i,j}^{(\ref{vw1})} \ | \ \rho_{n,m}(t-c_1) \in B_1(\lambda,x)\}.\]
The situation is completely similar for sets $A_j^{(\ref{vw2})}\cap Q$.\end{proof}
\noindent Our aim is to use cells to give a simple description of sets definable in $\Lm_{\text{aff}}infQE$ without $K$-quantifiers. For this we will need the following lemma.
\begin{lemma}\label{lemma:polycenters}
Let $f_1(x,t), \ldots, f_r(x,t)$ be $(\mathbb{P}_K(\pi),K)$-linear polynomials in the variables $(x_1,\ldots, x_k,t)$. There exist a finite partition of $K^{k+1}$ into cells, $(\mathbb{P}_K(\pi),K)$-linear polynomials $c(x), d_i(x), h_i(x)$, constants $a_i \in \mathbb{P}_K(\pi)$ and $\Lambda_{n,m}$-terms $g_i$ in $r$ variables, such that the following holds for all $f_i(x,t)$ on each cell $A$ with center $c(x)$:
\begin{enumerate}
\item $\rho_{n,m}(f_i(x,t)) = g_i(\rho_{n,m}(t-c(x)),\rho_{n,m}(d_2(x)), \ldots, \rho_{n,m}(d_r(x))),$
\item $\mathrm{ord}\, f_i(x,t) = \left\{\begin{array}{l}\mathrm{ord}\, h_i(x) \ \text{ for all } (x,t) \in A ,\\ \text{\quad or} \\ \mathrm{ord}\, a_i(t-c(x)) \ \text{ for all } (x,t) \in A.\end{array}\right.$ \\
\end{enumerate}
\end{lemma}
\begin{proof}
If $r=1$, our claim is trivial, since we can write (if $b\neq 0$):
\[f_1(x,t) = \sum_{i=1}^k a_ix_i +bt+d = b\left(t + \sum_{i=1}^k\frac{a_i}{b} x_i + \frac{d}{b}\right).\]
Now suppose the lemma is true for polynomials $f_1(x,t),\ \ldots, f_{r-1}(x,t)$. This means that there exists a partition of
$K^{k+1}$ in cells $A$ with center $c(x)$, such that on each
cell,
\[\mathrm{ord}\, f_i(x,t) = \mathrm{ord}\, a_i(t-c(x)) \text{ or } \mathrm{ord}\, f_i(x,t) = \mathrm{ord}\, h_i(x).\] We may assume that $f_r(x,t)=a_r(t-c_r(x))$, for some $(\mathbb{P}_K(\pi),K)$-linear polynomial $c_r(x)$.
Partition $K^{k+1}$ in the following way:
\begin{eqnarray*}K^{k+1} &=& \{(x,t) \in K^{k+1}\ |\
\mathrm{ord}\,(t-c(x)) < \mathrm{ord}\, (c(x)-c_r(x)) -m\}\nonumber\\
& & \cup \ \{(x,t) \in K^{k+1}\ |\
\mathrm{ord}\,(t-c(x)) > \mathrm{ord}\, (c(x)-c_r(x)) +m\}\label{partqp}\\
& & \cup\hspace{-3pt} \bigcup_{l=-m}^m\hspace{-5pt} \{(x,t) \in K^{k+1}\ |\ \mathrm{ord}\,(t-c(x)) = \mathrm{ord}\,
(c(x)-c_r(x))+l \}\nonumber
\end{eqnarray*}
Take intersections of the cells $A$ with the above parts of
$K^{k+1}$. By Proposition \ref{prop:intersection}, this results in a
finite partition of $K^{k+1}$ in cells $B$.\\
On each cell $B$, we can now eliminate one of the centers ($c(x)$ or $c_r(x)$).
For example, if for some $-m\leqslant l<0$, the relation $\mathrm{ord}\,(t-c(x)) = \mathrm{ord}\,
(c(x)-c_r(x))+l$ holds on $B$, there exists $r$ with $ 0\leqslant rn <m$ such that
\[\rho_{n,m}(t-c_r(x))= \rho_{n,m}(t-c(x)) +_r \rho_{n,m}(c(x)-c_r(x)), \] so that we can eliminate the center $c_r(x)$ from the description of all polynomials for $(x,t) \in B$. The other cases are similar.
\end{proof}
\noindent We can now give a characterization of the subsets of $K^{k+1}$ that are quantifier-free definable in $\Lm_{\text{aff}}infQE$.
\begin{theorem}
Let $B \subseteq K^{k+1}$ be a set that is $\Lm_{\text{aff}}infQE$-definable without using $K$-quantifiers. There exist $r\in \mathbb{N},\ n,m \in \mathbb{N}\backslash\{0\}$ and a finite number of disjoint cells $C_i \subseteq \Lambda_{n,m}^r\times K^{k+1}$ such that
\[B =\{(x,t)\in K^{k+1} \ | \ \exists \lambda \in \Lambda_{n,m}^r : (\lambda,x,t) \in \cup_i C_i\}. \]
\end{theorem}
\begin{proof}
By the discussion in Section \ref{subsec:form_of_kqf-formula}, it suffices to show that a set of the following form can be partitioned as a finite union of cells:
\[ E:=\left\{(\lambda, x, t) \in D_{n,m} \times K^{k+1} \ \left| \ (x,t) \in D \wedge \left(\bigwedge_i \rho_{n,m}(f_i(x,t)) = \lambda_i\right) \right\}\right.\]where $\lambda = (\lambda_1, \ldots, \lambda_r)$. Here $D$ is a quantifier-free definable subset of $K^{k+1}$ (using only the language of the main sort $K$), and $D_{n,m}$ is a definable subset of $\Lambda_{n,m}^r$ (using the language on the $\Lambda_{n,m}$, and possibly using quantifiers over $\Lambda_{n,m}$).\\\\
We may suppose that $D$ consists of all $(x,t)\in K^{k+1}$ that satisfy a finite number of relations of the form
\begin{equation} \mathrm{ord}\, f_{i,1}(x,t) < \mathrm{ord}\, f_{i,2}(x,t),\label{eq:kqf=cells1}\end{equation}
where the $f_{i,j}(x,t)$ are $(\mathbb{P}_K(\pi),K)$-linear polynomials. Using Lemma \ref{lemma:polycenters}, we can find a partition of $\Lambda_{n,m}^r\times K^{k+1}$ in cells $A$ with center $c(x)$, such that the image under $\rho_{n,m}$ and the order of all polynomials $f_i(x,t)$ and $f_{i,j}(x,t)$ can be expressed as in the formulation of Lemma \ref{lemma:polycenters}. This implies that on $E \cap A$, a relation of the form (\ref{eq:kqf=cells1}) simplifies to either
\begin{equation} \mathrm{ord}\, (t-c(x)) < \mathrm{ord}\, h_i(x), \quad \text{or possibly}\quad \mathrm{ord}\, h_{i,1}(x) < \mathrm{ord}\, h_{i,j}(x),\label{eq:kqf=cells2}\end{equation}
for some $(\mathbb{P}_K(\pi),K)$-linear polynomials $h_i(x), h_{i,j}(x)$. Also, on $E\cap A$, the condition $\bigwedge_i \rho_{n,m}(f_i(x,t)) = \lambda_i$ is equivalent to a formula of the form (for ease of notation, we assume that the center of
$A$ is the center of $f_1(x,t)$):
\begin{equation}\rho_{n,m}(t-c(x)) = a\lambda_1 \wedge \bigwedge_{i=2}^{r} [\lambda_i = g_i(\lambda_1, \rho_{n,m}(d_2(x)),\ldots, \rho_{n,m}(d_r(x)))],\label{eq:kqf=cells3}\end{equation} for some constant $a \in \mathbb{P}_K(\pi)$. But this implies that $E \cap A$ is equal to the intersection of $A$ with the cell described by (\ref{eq:kqf=cells2}) and (\ref{eq:kqf=cells3}). By Proposition \ref{prop:intersection}, this can be written as a finite union of cells.
\end{proof}
\subsection{Definable sets and functions} \label{subsec:defsetfun}
Define a semi-additive set to be a set of the following type.
\begin{definition}\label{def:semi-additive}
A set $A \subseteq K^{k+1}$ is called semi-additive if there exist $r\in \mathbb{N}$, $n,m \in \mathbb{N}\backslash\{0\}$ and a finite number of disjoint cells $C_i\subseteq \Lambda_{n,m}^r\times K^{k+1}$ such that
\[A =\{(x,t)\in K^{k+1} \ | \ \exists \lambda \in \Lambda_{n,m}^r : (\lambda,x,t) \in \cup_i C_i\}. \]
\end{definition}
\noindent By the next theorem, the semi-additive subsets of $K^{k+1}$ are precisely the $\Lm_{\text{aff}}infQE$-definable subsets of $K^{k+1}$. Consequently, the $\Lm_{\text{aff}}inf$-definable subsets of $K^k$ are just the semi-additive subsets.
\begin{theorem}
Let $A\subseteq K^{k+1}$ be a semi-additive set. The projection \[B=\{x \in K^k \ | \ \exists t\in K:(x,t) \in A\}\] is a semi-additive set.
\end{theorem}
\begin{proof}
First, partition the cells $C_i$ occurring in the description of $A$ into smaller cells $C_{i,l}$ such that the extra condition $\mathrm{ord}\, \rho_{n,m}(t-c(x)) \equiv l \mod n$ holds on $C_{i,l}$. It is then sufficient to prove that we can eliminate the variable $t$ from a formula of the form
\[(\exists t)(\exists \lambda \in \Lambda_{n,m}^r)\left[\begin{array}{cl}& \mathrm{ord}\, a_1(x) < \mathrm{ord}\, (t-c(x)) < \mathrm{ord}\, a_2(x)\ \\\wedge& \rho_{n,m}(t-c(x))\in D(\lambda,x)\ \\\wedge &\mathrm{ord}\, \rho_{n,m}(t-c(x))\equiv l \mod n\end{array}\right]\]
and this is equivalent to $(\exists \lambda \in \Lambda_{n,m}^r)\phi(x,\lambda)$, with
\[\phi(x,\lambda) \leftrightarrow (\exists \gamma \in \Gamma_K)\left[\begin{array}{cl}&\mathrm{ord}\, a_1(x) < \ \gamma < \mathrm{ord}\, a_2(x) \\ \wedge& \left[D'(\lambda,x)\neq \emptyset\right]\ \wedge\ \left[\gamma \equiv l \mod n\right]\end{array} \right]\]
where $D'(\lambda,x) = D(\lambda,x) \cap \{\mu \in \Lambda_{n,m} \mid \mathrm{ord}\, \mu \equiv l \mod n\}.$
The formula $\phi(x,\lambda)$ is equivalent to $D'(\lambda,x) \neq \emptyset \wedge (\exists \gamma' \in \Gamma_K)\psi(x, \gamma')$,\ where
\[\psi(x, \gamma') \leftrightarrow \left[ \frac{\mathrm{ord}\,(a_1(x)\pi^{-l})}{n} < \gamma' <\frac{\mathrm{ord}\,(a_2(x)\pi^{-l})}{n}\right]\]
Now if $\mathrm{ord}\, a_1(x)\pi^{-l} \equiv \zeta
\mod n$, for $0 \leqslant \zeta <n$, then $(\exists \gamma' \in \Gamma_K)\psi(x,\gamma')$
is equivalent with
\[ \mathrm{ord}\, a_1(x)\pi^{-l} + n - \zeta <
\mathrm{ord}\, a_2(x)\pi^{-l}.\] This completes
the proof, since $\mathrm{ord}\, a_1(x)\pi^{-l}
\equiv \zeta \mod n$ is a ($K$-quantifier-free) $\Lm_{\text{aff}}infQE$-definable condition on $x$. \end{proof}
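The last step of the proof uses the elementary fact that a multiple of $n$ lies strictly between two integers $A$ and $B$ if and only if $A + n - \zeta < B$, where $\zeta$ is the remainder of $A$ modulo $n$. A quick brute-force check of this fact (our own, not from the text):
\begin{verbatim}
# Brute-force check: a multiple of n lies strictly between A and B
# exactly when A + n - (A % n) < B.
n = 3
for A in range(-12, 12):
    for B in range(-12, 12):
        exists = any(A < g < B for g in range(-60, 60) if g % n == 0)
        assert exists == (A + n - (A % n) < B)
print("check passed")
\end{verbatim}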
\noindent It is now easy to give a characterization of semi-additive functions:
\begin{lemma}
Let $f: B\subseteq K^k \to K^l$ be an $\Lm_{\text{aff}}^{\pi}$-definable function. There exists a finite partition of $B$ in cells $A$ such that $f_{|A}$ has the form
\[f_{|A}: A\to K^l: x\mapsto (f_1(x), \ldots, f_l(x)),\]
where the $f_i(x)$ are $(\mathbb{P}_K(\pi), K)$-linear polynomials.
\end{lemma}
\begin{proof}
The graph of a definable function is a semi-additive set, so the graph of $f$ can be partitioned as in Definition \ref{def:semi-additive}, using a finite number of cells $C_i$. The fact that $f$ is a function implies that for each cell $C_i$ and any $x \in D_K$, there exists a unique $t \in K$ such that $(x,t) \in \mathrm{Graph}(f)$. Note however that this uniqueness condition implies that $t=c(x)$: if $t \neq c(x)$, then every $t'$ with $t'-c(x) \in (t-c(x))(1+\pi^mR_K)$ has the same order and the same image under $\rho_{n,m}$ as $t-c(x)$, so $(x,t')$ would also lie in $C_i$. Thus the function $f$, when restricted to $D_K$, simply maps each $x$ to the center $c(x)$ of the corresponding cell $C_i$, which we assumed to be a $(\mathbb{P}_K(\pi),K)$-linear polynomial.
\end{proof}
\section{The case of a finite residue field} \label{sec:finres}
For the following class of fields, angular component maps can be defined in a unique way. Note that we do not require the valued field to be Henselian.
\begin{definition}
Let $\mathbb{F}_q$ be the finite field with $q$ elements and $\mathbb{Z}$ the ordered abelian group of integers. We define an $(\mathbb{F}_q,\mathbb{Z})$-field to be a valued field with residue field
isomorphic to $\mathbb{F}_q$ and value group elementarily equivalent to $\mathbb{Z}$.
\end{definition}
\noindent Fix an $(\mathbb{F}_q,\mathbb{Z})$-field $K$ and fix an element $\pi$ of smallest positive order, so that $\mathrm{ord}\, \pi =1$. For each integer $n>0$, let $P_n$ be the set of nonzero $n$-th powers in $K$.
\begin{lemma}\label{lemma:acm} For each integer $m>0$, there is a unique group homomorphism
$$
\text{ac}\,m:K^\times \to (R_K\bmod \pi^m)^\times
$$
such that $\text{ac}\,m(\pi)=1$ and such that $\text{ac}\,m(u)\equiv u \bmod \pi^m$ for any unit $u\in R_K$.
\end{lemma}
\begin{proof}
Put $N_m:=(q-1)q^{m-1}$ and let $U$ be the set $P_{N_m}\cdot R_K^\times$.
Note that $K^\times$ equals the finite disjoint union of the sets $\pi^\ell\cdot U$ for integers $\ell$ with $0 \leqslant \ell \leqslant N_m-1$.
Hence, any element $y$ of $K^\times$ can be written as a product of the form $\pi^\ell x^{N_m} u$, with $u\in R_K^\times$, $\ell\in\{0,\ldots,N_{m}-1\}$, and $x\in K^\times$.
Since $\text{ac}\,m$ is required to be a group homomorphism to a finite group with $N_m$ elements, it must send $P_{N_m}$ to $1$. Also note that the projection $R_K \to R_K\bmod \pi^m$ (which is a ring homomorphism), induces a natural group homomorphism $p: R_K^\times \to (R_K\bmod \pi^m)^\times$. Now if we write $y=\pi^\ell x^{N_m} u$, we see that $\text{ac}\,m$ must satisfy
\begin{equation}\label{acm}
\text{ac}\,m ( y ) = p(u),
\end{equation}
which implies that the map $\text{ac}\,m$ is uniquely determined if it exists. Moreover, we claim that we can use \eqref{acm} to define $\text{ac}\,m$. This is certainly a well-defined group homomorphism:
if one writes $y=\pi^\ell \tilde{ x}^{N_m} \tilde{u}$ for some other $\tilde{u}\in R_K^\times$ and $\tilde{x}\in K^\times$, then $u/\tilde{u} = (\tilde{x}/x)^{N_m}$ is the $N_m$-th power of a unit, hence congruent to $1$ modulo $\pi^m$, so that $p(u)=p(\tilde {u})$. It is also clear that this homomorphism sends $\pi$ to $1$ and satisfies our requirement that $\text{ac}\,m(u)\equiv u \bmod \pi^m$ for any unit $u\in R_K$.
\end{proof}
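For the special case $K=\mathbb{Q}_p$ with the choice $\pi=p$, the map of Lemma \ref{lemma:acm} is simply `take the unit part modulo $p^m$'. The following self-contained sketch (names are our own) illustrates the three defining properties on a few integers:
\begin{verbatim}
# ac_m for K = Q_p with pi = p: strip the powers of p, then reduce mod p^m.
def ac_m(y, p, m):
    while y % p == 0:          # remove the uniformizer, so ac_m(p) = 1
        y //= p
    return y % p**m

p, m = 5, 2
print(ac_m(p, p, m))           # ac_m(pi) = 1
print(ac_m(7, p, m))           # units are fixed modulo p^m
print(ac_m(7 * 11, p, m) == (ac_m(7, p, m) * ac_m(11, p, m)) % p**m)  # hom
\end{verbatim}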
$(\mathbb{F}_q,\mathbb{Z})$-fields satisfy the requirements we listed in the introduction, so if we consider the structure induced by our multi-sorted language, we can apply the cell decomposition results from the previous section.
Obviously, since the residue field is now assumed to be finite, $\Lambda_{n,m}$ will be a finite set. In fact, we can assume that $\Lambda_{n,m}$ is a subset of $R_K$, by choosing a fixed set of representatives for each equivalence class. For example, if $K = \mathbb{Q}_p$, we could take
\[ \Lambda_{n,m}:= \{p^ra \mid 0\leqslant r < n \wedge \mathrm{ord}\, a = 0 \wedge 0 < a \leqslant p^{m}-1\}.\] The fact that $\Lambda_{n,m}$ is finite implies that all $\Lambda_{n,m}$-quantifiers can be replaced by conjunctions (for $\forall$) and disjunctions (for $\exists$) over the elements of $\Lambda_{n,m}$. In particular,
if we
consider the 2-variable relation
\[S_{n,m}(x,z) \leftrightarrow \rho_{n,m}(x) = \rho_{n,m}(z),\]
it is possible to `collapse' $\Lm_{\text{aff}}infQE$ to a mono-sorted language $ \Lm_{\text{aff}} := (+,-,\cdot_{\pi}, |, \{S_{n,m}\}_{n,m}).$
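Before moving on, note that for $K=\mathbb{Q}_p$ the finite set $\Lambda_{n,m}$ with the representatives chosen above can be listed explicitly; a small self-contained sketch (our own code):
\begin{verbatim}
# The representatives p^r * a with 0 <= r < n, ord a = 0, 0 < a <= p^m - 1.
def Lambda_reps(p, n, m):
    return sorted(p**r * a
                  for r in range(n)
                  for a in range(1, p**m) if a % p != 0)

print(Lambda_reps(3, 2, 1))       # [1, 2, 3, 6]
print(len(Lambda_reps(3, 2, 2)))  # n*(p-1)*p^(m-1) = 2*2*3 = 12
\end{verbatim}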
It follows immediately from the results of the previous section that
every definable set in this new language is a finite union of cells of the form
\begin{equation} \{(x,t) \in D\times K \ | \ \mathrm{ord}\, a_1(x) \ \square_1 \ \mathrm{ord}\, (t-c(x)) \ \square_2 \ \mathrm{ord}\, a_2(x)\ \wedge \ \rho_{n,m}(t-c(x)) = \lambda\}, \label{eq:collapsedcell}\end{equation}
with $D$ a quantifier-free definable subset of $K^k$, $\lambda \in \Lambda_{n,m}$; $a_i(x)$ and $c(x)$ are $(\mathbb{P}_K(\pi),K)$-linear polynomials, and $\mathbb{P}_K$ is the prime subfield of $K$.
We should compare this with the semi-linear language $(+,-, \{\overline{c}\}_{c\in \mathbb{Q}_p}, \{P_n\}_{n\in \mathbb{N}})$ that Liu \cite{liu-94} considered for $\mathbb{Q}_p$.
A first difference is the use of the relation $S_{n,m}$, instead of the sets of $n$-th powers $P_n$. This difference is much smaller than it may seem at first.
If we define $Q_{n,m}$ to be the set \[Q_{n,m}:=\{x\in K \ | \ \rho_{n,m}(x) = \rho_{n,m}(1)\}\] then the relation $S_{n,m}(x,y)$ is equivalent to $ x \in yQ_{n,m}$.
Hence, we replaced expressions like `$x$ is in some coset of $P_n$' by similar expressions that use
sets $Q_{n,m}$ instead.
However, for Henselian $(\mathbb{F}_q,\mathbb{Z})$-fields, it is easy to see that for any $N \in \mathbb{N}$, $P_N$ can be defined as a finite union of cosets $\lambda Q_{n,m}$ with $\lambda \in K; n,m \in \mathbb{N}$. Since we used cosets of $P_N$ to define the maps $\text{ac}\,m$ (and thus the sets $Q_{n,m}$), the converse is also true.
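For concreteness over $\mathbb{Q}_p$ (reusing \texttt{rho\_nm} from the earlier sketch), membership of a coset $yQ_{n,m}$ amounts to comparing $\rho_{n,m}$-values:
\begin{verbatim}
# S_{n,m}(x, y)  iff  x lies in the coset y*Q_{n,m}  iff  the rho values agree.
def S(x, y, p, n, m):
    return rho_nm(x, p, n, m) == rho_nm(y, p, n, m)

print(S(10, 250, 5, 2, 1))  # True:  both have odd order and unit part 2 mod 5
print(S(10, 50, 5, 2, 1))   # False: orders 1 and 2 differ modulo 2
\end{verbatim}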
Another (seeming) difference is that the language we defined contains the divisibility symbol `$|$'. Liu does not include this symbol, since he showed that for semi-linear sets over $\mathbb{Q}_p$, this relation is quantifier-free definable. We need the symbol if we want to achieve quantifier elimination, but it can be shown, see \cite[Proposition 1]{clu-lee-2011}, that for $(\mathbb{F}_q, \mathbb{Z})$-fields, the relation $\mathrm{ord}\,(x-z) < \mathrm{ord}\, (y-z)$ is definable whenever the relation $\rho_{n,m}(y-x) = \rho_{n,m}(z)$ is definable. So adding the symbol to our language does not affect the collection of definable sets.
A third difference lies in the amount of scalar multiplication which is definable. $\Lm_{\text{aff}}$ has less scalar multiplication than the semi-linear language. (To compare: for the structure $(\mathbb{Q}_p; +,-,\cdot_{\pi}, \{S_{n,m}\}_{n,m})$, scalar multiplication is only definable for constants from $\mathbb{Q}$.) This difference will be important when we compare the definable functions.
\\\\
Taking these observations into account, we can consider a class of \emph{semi-affine} structures:
\begin{definition}
Given an $(\mathbb{F}_q,\mathbb{Z})$-field $K$ and a subfield $L \subseteq K$, let
$\Lm_{\text{aff}}^L$ be the language
\[ \Lm_{\text{aff}}^L := (+,-, \{\overline{c}\}_{c\in L}, |, \{R_{n,m}\}_{n,m}).\]
The structure $(K, \Lm_{\text{aff}}^L)$ is called a semi-affine structure.
\end{definition}
These languages are variations on the language $\Lm_{\text{aff}}$ we defined above, adding additional symbols for scalar multiplication, and replacing the symbol $S_{n,m}$ by $R_{n,m}$, a relation which is defined as $R_{n,m}(x,y,z) \leftrightarrow \rho_{n,m}(y-x) = \rho_{n,m}(z)$. We make this (otherwise unnecessary) substitution to point out the link with the language $(\{R_{n,m}\}_{n,m})$, that we studied in \cite{clu-lee-2011}.
Over $\mathbb{Q}_p$, Liu's semi-linear language is equivalent to $\Lm_{\text{aff}}^{\mathbb{Q}_p}$. Note also that the structures $(K,\Lm_{\text{aff}}^{\mathbb{P}_K(\pi)})$ and $(K, \Lm_{\text{aff}})$ have the same definable sets.
In general, when considering a structure $(K, \mathcal{L})$, we will always assume that if multiplication by $c$ is definable, then $\mathcal{L}$ contains a symbol $\overline{c}$ (replacing $\mathcal{L}$ by a definitional expansion if necessary). In particular, we assume that $\mathbb{P}_K \subseteq L$.
To describe the definable sets and functions of such structures, the following terminology is useful.
\begin{definition} Let $L \subseteq K$ be fields.
An $(L,K)$-linear polynomial is a polynomial of the form
\[ a_1x_1 + \ldots + a_nx_n +b,
\qquad\text{with } a_i \in L \text{ and } b \in K.\]
If $\mathcal{L} = \Lm_{\text{aff}}^L$, we write $\text{Poly}_k(\mathcal{L},K)$ for the set of all $(L,K)$-linear polynomials in $k$ variables.
\end{definition}
For all semi-affine structures $(K,\Lm_{\text{aff}}^L)$, we can deduce cell decomposition and quantifier elimination using the method we described for the language $\Lm_{\text{aff}}$. The general idea is this: the cell decomposition results from the previous section still hold if we consider variations of $\Lm_{\text{aff}}infQE$ where we add more (or fewer) symbols for scalar multiplication to the language of the field sort. Every semi-affine language can be obtained by collapsing such a language to a language having only the field sort. In each case, we obtain cell decomposition using cells as in \eqref{eq:collapsedcell}, where the only difference is that for $(K,\Lm_{\text{aff}}^L)$, the functions $a_i(x)$ and $c(x)$ will now be $(L,K)$-linear polynomials. (Assuming that scalar multiplication is only definable for constants from $L$.) From this, the following description of definable sets can easily be deduced:
\begin{lemma}
The definable sets of a semi-affine structure $(K, \mathcal{L})$ are the boolean combinations of sets of the forms
\[\{ x\in K^k \ | \ \mathrm{ord}\, f_1(x) \ \square\ \mathrm{ord}\, \pi^r f_2(x) \} \quad \text{and} \quad \{x\in K^k \ | \ \rho_{n,m}(f_3(x)) = \lambda \},\]
where the $f_i \in \text{Poly}_k(\mathcal{L},K)$, $r\in \mathbb{Z}$ and $\lambda \in \Lambda_{n,m}$.
\end{lemma}
\noindent In the next section we study the definable functions for these languages.
\subsection{Definable functions and Skolem functions} \label{subsec:skol}
The definable functions of a semi-affine structure $(K, \mathcal{L})$ will be called $\mathcal{L}$-semiaffine functions over $K$. The definable sets and functions of $(\mathbb{Q}_p, \Lm_{\text{aff}}^{\mathbb{Q}_p})$ will be referred to as being `semi-linear'.
Using cell decomposition, it is easy to see that semi-affine functions
actually have a very simple form.
\begin{lemma}
Let $(K, \mathcal{L})$ be a semi-affine structure. For any $\mathcal{L}$-semiaffine function $f: A \subseteq K^k \to K^l$ there exists a finite partition of $A$ in $\mathcal{L}$-definable sets $B_i$, such that $f_{|B_i}$ has the form
\[f_{|B_i}: B_i \to K^l : x\mapsto (f_1(x), \ldots, f_l(x)),\]
with each $f_i(x) \in \text{Poly}_k(\mathcal{L},K)$.
\end{lemma}
All of these semi-affine structures are truly linear in the sense that there does not exist any open set where multiplication is definable.
\begin{corol} Let $K$ be any $(\mathbb{F}_q, \mathbb{Z})$-field and $\mathcal{L}$ a semi-affine language.
Let $U \subseteq K^2$ be an open semi-affine set. The map $f:U \to K:(x,y) \mapsto xy$ is not a semi-affine function.
\end{corol}
\begin{proof}
Suppose, towards a contradiction, that multiplication is definable on some open cell $C$; we may as well assume that scalar multiplication is definable for all $c \in K$.
Fix a point $(x_0,y_0) \in C$. It is easy to see that if we choose $k \in \mathbb{N}$ big enough, we have that
\begin{equation}\label{eq:nomult} \{ (x,y) \in K^2 \mid x \in x_0 + \pi^kR_K, y\in y_0 + \pi^kR_K\} \subset C.\end{equation} If $\mathrm{ord}\, x_0 \leqslant \mathrm{ord}\, y_0$, there exists $\alpha \in R_K$ such that $y_0 = \alpha x_0$. Moreover, because of \eqref{eq:nomult}, the intersection \[W := C\cap \{(x,y) \in K^2 \mid y = \alpha x\}\] is an infinite set, and the projection $\pi_x(W)$ onto the first coordinate also contains infinitely many points.
Note that since $xy = \alpha x^2$ for $(x,y) \in W$, the multiplication map on $W$ induces a definable function $\pi_x(W) \to K: x \mapsto \alpha x^2$.
\\
After some (finite) further partitioning, we can find an open subset $U \subseteq \pi_x(W)$ and constants $b_1, b_2$ such that on $U$, the function $f(x) = b_1 x + b_2$ defines the map $x \mapsto \alpha x^2$. But this implies that the equation $b_1x + b_2 = \alpha x^2$ has infinitely many solutions, which is a contradiction. If $\mathrm{ord}\, x_0 > \mathrm{ord}\, y_0$, we can give a similar argument by intersecting with the set $\{x =\frac1 \alpha y\}$.
\end{proof}
\noindent A question one can pose concerning semi-affine functions is whether it is always possible to find a definable Skolem
function, i.e.\ a definable choice function in the fibers of a definable projection. As is the case for semi-algebraic functions (see \cite{scow-vdd88}), the answer is certainly `yes' for semilinear functions, and more generally, for functions definable in a structure $(K,\Lm_{\text{aff}}^K)$.
\begin{theorem}
Let $X \subseteq K^{k + r}$ be an $\Lm_{\text{aff}}^K$-definable set. \\If $\pi_k(X) \subseteq K^k$
is the projection on the first $k$ variables, there exists an $\Lm_{\text{aff}}^K$-definable function $g: \pi_k(X) \to X$
such that $\pi_k \circ g = \mathrm{Id}_{\pi_k(X)}$.
\end{theorem}
\begin{proof}
It suffices to check that,
given a cell $C$ and the projection map $\pi_x$,
\[\pi_x: C \subset K^{l+1} \to K^l: (x_1,\ldots, x_l,t) \mapsto (x_1, \ldots,
x_l),\] there exists a definable function $g: \pi_x(C) \to C$ such
that $\pi_x \circ g = \mathrm{Id}_{\pi_x(C)}$.
\\
If the cell $C$ has a center $c(x) \neq 0$, we first apply the
translation
\[C\to C': (x,t) \mapsto (x,t-c(x)),\]
which yields a cell $C'$ with center $c'(x)=0$. Since this translation is
bijective, it is invertible. Therefore the problem is reduced to the following.
Let $C$ be a cell of the form
\[C = \{(x,t) \in D \times K \ | \ \mathrm{ord}\, b(x)\,\square_1 \,\mathrm{ord}\,
t \,\square_2 \,\mathrm{ord}\, a(x)\ \wedge \ \rho_{n,m}(t) = \lambda \},
\] where $a(x), b(x) \in K[x]$ and
$D \subseteq K^l$ is a definable set.
We must show that there exists a definable function $g: \pi_x(C) \to C$ such
that $\pi_x \circ g = \mathrm{Id}_{\pi_x(C)}$. \\\\
Given $x \in \pi_x(C)\subseteq D$, we have to find $t(x)$ such that
$(x,t(x))$ satisfies the conditions
\begin{eqnarray}
\mathrm{ord}\, b(x)\ \square_1 \ \mathrm{ord}\,
t(x) \ \square_2 \ \mathrm{ord}\, a(x) \label{c1}\\
\rho_{n,m}(t(x)) = \lambda \label{c2}
\end{eqnarray}
If $\lambda = 0$, put $g(x) = (x,0)$. From now on we assume that $\lambda \neq 0$.\\
If $\square_1 = \square_2 =$ `no condition', we can simply put $g(x)
=(x, \lambda).$\\
If $\square_2 = $ `$<$', we can define $g$ as follows. First partition
$\pi_x(C)$ in parts $D_{\mu}$, such that
\[ D_{\mu} = \{x\in \pi_x(C) \ | \ \rho_{n,m}(a(x)) = \mu \}.\]
(Note: if $\mu = 0$, we can reduce to the case where $\square_2$ = `no condition'.)
Our strategy is based on the fact that for every $x \in D$, there
exists $k \in \mathbb{Z}$ such that $k$ satisfies
\[\mathrm{ord}\, b(x) \ \square_1 \ \mathrm{ord}\, \lambda + kn < \mathrm{ord}\, a(x). \]
Restricting to a set $D_{\mu}$, we construct an element $t(x)$
with order as close as possible to $\mathrm{ord}\, a(x)$. This ensures that
$t(x)$ satisfies (\ref{c1}).
The definition of $g$ on $D_{\mu}$ will depend on
the respective orders of $\lambda$ and $\mu$.
\begin{itemize}
\item If $\mathrm{ord}\, \lambda < \mathrm{ord}\, \mu$, we can define $g_{|D_{\mu}}$ as
\(g_{|D_{\mu}}: D_{\mu} \to C: x \mapsto \left(x,\frac{\lambda}{\mu} a(x)\right).\)
This means that we put $t(x) = \frac{\lambda}{\mu} a(x)$. Clearly
$\rho_{n,m}(t(x)) = \lambda$. Also, since $-n < \mathrm{ord}\,
(\frac{\lambda}{\mu}) < 0$, we have that $0<\mathrm{ord}\,
\frac{a(x)}{t(x)}<n$, and thus condition (\ref{c1}) must be
satisfied.
\item If $\mathrm{ord}\, \lambda \geqslant \mathrm{ord}\, \mu$, put
\(g_{D_{\mu}}: D_{\mu} \to C: x \mapsto \left(x,\frac{\lambda}{\pi^n\mu} a(x)\right).\)
\end{itemize}
If $\square_1 =$ `$<$' and $\square_2 =$ `no condition', we choose
$t(x)$ with order as close as possible to $\mathrm{ord}\, b(x)$. More
specifically, if $\mathrm{ord}\, \lambda \leqslant \mathrm{ord}\, \mu$, define $g$ as
\(g_{D_{\mu}}: D_{\mu} \to C: x \mapsto \left(x,\frac{\lambda \pi^n}{\mu}
b(x)\right)\), and if $\mathrm{ord}\, \lambda > \mathrm{ord}\, \mu,$ put
\(g_{D_{\mu}}: D_{\mu}\to C: x \mapsto \left(x,\frac{\lambda}{\mu} b(x)\right).\)
\end{proof}
\noindent One has to be more careful for structures $(K,\Lm_{\text{aff}}^L)$ where $L \neq K$: the following lemma gives an example of a semi-affine structure
that has no definable Skolem functions.
\begin{lemma}\label{lemma:skolemscalars}
Let $K$ be an $(\mathbb{F}_q, \mathbb{Z})$- field (with $q = p^r$\hspace{-2pt}) such that $\mathrm{char}(K) = 0$, and suppose that $\mathrm{ord}\, \pi < \mathrm{ord}\, p$. Let $A$ be the set
\[A:= \{(x,y) \in K^2 \ | \ \mathrm{ord}\, y = 1+\mathrm{ord}\, x\}.\]
For the projection map $\pi_1: A \to K: (x,y)\mapsto x,$
there does not exist an $\Lm_{\text{aff}}^{\mathbb{Q}}$-definable function $g: \pi_1(A) \to K^2$ such that $\pi_1 \circ g = \mathrm{Id}_{\pi_1(A)}$.
\end{lemma}
\begin{proof}
Suppose such a $g$ exists.
After partitioning $\pi_1(A)$ in cells $C$, the function $g$ must have the form
\[g_{|C}: C\to K^2 : x \mapsto (x, ax+b),\]
where $ax+b$ is a $(\mathbb{Q}, K)$-linear polynomial, and hence $a \in \mathbb{Q}$.
There must be at least one cell $C$ that contains elements $x$ for which $\mathrm{ord}\, x < \mathrm{ord}\, \frac{b}{a}$. For these elements, $\mathrm{ord}\, (ax+b) = \mathrm{ord}\, (ax)$. However, since $\mathrm{ord}\, p > \mathrm{ord}\, \pi=1$ and $\mathrm{ord}\, a \in (\mathrm{ord}\, p)\mathbb{Z}$, it is impossible that $\mathrm{ord}\, a =1$, which is a contradiction.
\end{proof}
In general, $(K, \Lm_{\text{aff}}^L)$ will admit definable Skolem functions if for any $n,m \in \mathbb{N}$ and for any coset $\lambda Q_{n,m}$, there exists an element $\lambda_0\in K$ with $\rho_{n,m}(\lambda_0) = \lambda$ and $0 \leqslant \mathrm{ord}\, \lambda_0<n$ such that scalar multiplication by $\lambda_0$ is definable. This condition is satisfied for $p$-adically closed fields if we require that $\overline{\mathbb{Q}}_K \subseteq L$, where $\overline{\mathbb{Q}}_K$ is the algebraic closure of $\mathbb{Q}$ in $K$.
\subsection{Classification} \label{subsec:clas}
Write $q_K$ for the cardinality of the residue field of $K$. Let $|\cdot|$ be the norm defined as $|x| = \max(|x_i|_K)$, where $|x_i|_K = q_K^{-\mathrm{ord}\,(x_i)}$. We can define a dimension invariant for semi-affine structures by using the notion of dimension that Scowcroft and van den Dries \cite{scow-vdd88} introduced for semi-algebraic sets, i.e., \emph{the dimension of a definable set $X$ is the greatest natural number $n$ such that there exists a non-empty definable subset $A \subseteq X$ and a definable bijection from $A$ to a nonempty definable open subset of $K^n$.}
It is straightforward, using cell decomposition and our characterization of definable functions, to check that this notion of dimension has the expected properties when applied to the context of semi-affine sets.
\\\\
Cluckers \cite{clu-2000} showed that two infinite $p$-adic semi-algebraic sets are isomorphic (i.e. there exists a definable bijection) if and only if they have the same dimension. There exists no analogous result for the semi-affine case, however. We will illustrate this fact with some examples. Although most results presented below are true for all $(\mathbb{F}_q, \mathbb{Z})$-fields, we will restrict our attention to $K=\mathbb{Q}_p$.
\begin{lemma}
There exists no semi-affine bijection between the sets
\[ A = \{ t\in \mathbb{Q}_p \ | \mathrm{ord}\, t <0 \}\qquad \text{ and} \qquad B =\{ t \in \mathbb{Q}_p \ | \mathrm{ord}\, t >0\}.\]
\end{lemma}
\begin{proof}
Suppose such a bijection $f: A \to B$ exists. Then there must
exist a finite partition of $A$ in sets $A_i$ such that $f$ is
linear on each $A_i$. Since this partition is finite, at least one
of these sets $A_i$ must contain a subset of the form \[C_i=\{t\in \mathbb{Q}_p \ |\ \mathrm{ord}\, t <
-k \wedge \rho_{n,m}(t) = \lambda\},\] with $k \in \mathbb{N}$.
By our assumption, there
must exist $a\in\mathbb{Q}$ and $b \in \mathbb{Q}_p$ such that on $A_i$, the map
$f_{|A_i}$ has the form $f_{|A_i}: x \mapsto ax+b$. If $f$ is
indeed a bijection, then $f(C_i)$ must be a subset of $B$, and
thus the condition $\mathrm{ord}\, f(x)
>0$ has to hold for all $x \in C_i$. However,
it is possible to take $x\in C_i$ such that $\mathrm{ord}\, x <
\min\{\mathrm{ord}\,\hspace{-3pt}\left(\frac{b}{a}\right),
\mathrm{ord}\,\hspace{-3pt}\left(\frac1a\right)\}$. But then $\mathrm{ord}\, f(x) =
\mathrm{ord}\, (ax+b)<0$.
\end{proof}
\noindent Other examples of non-isomorphic sets of the same dimension are the sets $P_n$.
To obtain this result, we will first look at the sets $Q_{n,m}$.
For most pairs $(n,m)$, the sets $Q_{n,m}$ are
essentially different. More precisely, there exists an isomorphism
between $Q_{n,m}$ and $Q_{n',m'}$ if and only if $n' = np^{m-m'}$.
To prove this, we first need the following lemma. (Note: We use
the notation $A\sqcup B$ to denote the disjoint union of two sets
$A$ and $B$. In practice this can be defined as $\{0\} \times A \cup \{1\} \times B$.)
\begin{lemma} \label{lemma:nobijection}
There exists no semi-affine bijection between
\[ \bigsqcup_{i\in I_1} Q_{n,m}
\qquad \text{and} \qquad \bigsqcup_{i \in I_2} Q_{n,m} \] if
$I_1$ and $I_2$ are index sets with different cardinalities.
\end{lemma}
\begin{proof}
For $j \in I_1$, we denote the different copies of $Q_{n,m}$ by
$Q_{n,m}^{(j)}$. Suppose a semi-affine bijection
\[f:\bigsqcup_{i\in I_1} Q_{n,m}
\to \bigsqcup_{i \in I_2} Q_{n,m}\] does exist. Then there must
exist a finite partition of the $Q_{n,m}^{(j)}$ in cells $C$ such
that $f_{|C}$ is linear. Since we take finite partitions, for each
$Q_{n,m}^{(j)}$, there must be at least one cell of the form $ \{
x \in \mathbb{Q}_p \ | \ \mathrm{ord}\, x < k , x \in \lambda_{ij}Q_{n_{ij},m_{ij}}
\}$. In fact, after a further finite partition, we may suppose
that $n_{ij}$ and $m_{ij}$ are equal for each cell, and thus that
all $x \in \bigsqcup_{i\in I_1} Q_{n,m}$ with order smaller than
some fixed integer $k$ belong to a set in the partition which has
the form
\[C_{k, \lambda}^{(j)}:= \{ x \in \mathbb{Q}_p \ | \ \mathrm{ord}\, x < k , x \in
\lambda Q_{n',m'} \}.\] Because of the previous lemma, we will
have to map the elements of these cells to the elements with very
small (negative) order of $\bigsqcup_{i \in I_2} Q_{n,m}$ to get a
bijection. \\
It is easy to see that if $k < \mathrm{ord}\,(\frac{b}{a})-m'$, a
function $x \mapsto ax+b$ gives a bijection from $C_{k, \lambda}^{(j)}$ to $C_{k+\mathrm{ord}\, a,
a\lambda}^{(j')}$.
\\
If we choose $k \in \mathbb{Z}$ small enough, then every set $C_{k,\lambda}^{(j)} \subset \bigsqcup_{j \in I_1} Q_{n,m}^{(j)}$ is mapped to a set $C_{k',\lambda}^{(j')} \subset \bigsqcup_{j' \in I_2} Q_{n,m}^{(j')}$. Also, for small enough $k' \in \mathbb{Z}$, every set $C_{k',\lambda}^{(j')}$ is in the image of exactly one set $C_{k,\lambda}^{(j)}$. So if $f$ is the required bijection, then for a small enough value of $\ell$, $\bigsqcup_{j \in I_1} Q_{n,m}^{(j)}$ and $\bigsqcup_{j' \in I_2} Q_{n,m}^{(j')}$ contain exactly the same number of sets of the form $\{ x \in \mathbb{Q}_p \ | \ \mathrm{ord}\, x < \ell ,\ x \in
\lambda Q_{n',m'} \}$, which is only possible if $I_1$ and $I_2$ have the same cardinality.
\end{proof}
\begin{corol}
There exists a semi-affine bijection between $Q_{n,m}$ and
$Q_{n',m'}$ if and only if $n'= np^{m-m'}$.
\end{corol}
\begin{proof}
Suppose $m = \max\{m,m'\}$ and partition $Q_{n,m}$ and
$Q_{n',m'}$ as
\begin{eqnarray*}
Q_{n,m}= \bigcup_{\lambda \in I_1} \lambda Q_{nn',m} &\text{ \ \ and \ \ } &
Q_{n',m'} = \bigcup_{\lambda \in I_2} \lambda Q_{nn',m}.
\end{eqnarray*}
Here $I_1$ and $I_2$ are defined as follows:
\begin{eqnarray*}
I_1 &=& \{ 1,p^{n},p^{2n},\ldots, p^{(n'-1)n}\},\\
I_2 &=&\{p^{rn'}(1+a_{m'}p^{m'}+ \ldots + a_{m-1}p^{m-1}) \ | \
0\leqslant r < n;\ 0\leqslant a_i \leqslant p-1\}.
\end{eqnarray*}
If there exists a semi-affine bijection between $Q_{n,m}$ and
$Q_{n',m'}$, this induces a bijection
\[\bigsqcup_{i\in I_1} Q_{nn',m}
\to \bigsqcup_{i \in I_2} Q_{nn',m}.\]
But since $\# I_1= n'$ and $\#I_2 = np^{m-m'}$, this contradicts Lemma \ref{lemma:nobijection} if $n'\neq np^{m-m'}$.\\
If the cardinalities of $I_1$ and $I_2$ are equal, let $\tau$ be a bijection between $I_1$
and $I_2$. Now put
\[f_{|\lambda Q_{nn',m}}(x) = \frac{\tau(\lambda)}{\lambda}\, x.\]
The function $f:Q_{n,m}\to Q_{n',m'}$ is the required bijection.
\end{proof}
\begin{corol} Let $n,n' >0$. \\There exists a semi-affine bijection between $P_n$ and $P_{n'}$ if and only if
\[\frac{\#\Lambda_n}{\#\Lambda_{n'}} = \frac{n}{n'}\,p^{2\mathrm{ord}\, \left(\frac{n}{n'}\right)},\]
where $\Lambda_n:= P_n \cap \{x\in \mathbb{Q} \ | 0<x \leqslant p^{2\mathrm{ord}\, n +1}-1\} $.\\
In particular, if $p \nmid n$ and $p\nmid n'$, there is no semi-affine bijection between $P_n$ and $P_{n'}$ if $n \neq n'$.
\end{corol}
\begin{proof}
Take partitions $P_n = \bigcup_{\lambda \in \Lambda_n} \lambda Q_{n,2\mathrm{ord}\, n+1},$ (and similarly for $P_{n'}$), as explained before. Assume that $\mathrm{ord}\, n \geqslant \mathrm{ord}\, n'$. By a similar reasoning as in the proof of the previous corollary, a bijection between $P_n$ and $P_{n'}$ would induce a bijection \[\bigsqcup_{i\in I_n} Q_{nn',2 \mathrm{ord}\, n+1}
\to \bigsqcup_{i \in I_{n'}} Q_{nn',2 \mathrm{ord}\, n +1},\]
with $\# I_n= n'\cdot \#\Lambda_n$ and $\#I_{n'} = np^{2(\mathrm{ord}\, n - \mathrm{ord}\, n')}\cdot \#\Lambda_{n'}$. There exists a bijection if and only if $\# I_n = \# I_{n'}$.\\\\
Now assume that $\mathrm{ord}\, n = \mathrm{ord}\, n' =0$, and $n \geqslant n'$. There exists a bijection between $P_n$ and $P_{n'}$ if $\frac{n}{\#\Lambda_n} = \frac{n'}{\#\Lambda_{n'}}$. Under our assumptions, $\#\Lambda_n$ is equal to the number of elements of $\mathbb{F}_p^{\times}$ that are $n$-th powers. Applying a result from elementary number theory, we get that $\#\Lambda_n = \frac{p-1}{d}$, where $d = (p-1,n)$, and therefore $P_n$ will be isomorphic with $P_{n'}$ if and only if $nd = n'd'$. This is equivalent to $n\tilde{d} = n'\tilde{d'}$, with $\tilde{d} = \frac{d}{a}, \tilde{d'}=\frac{d'}{a}$, and $a = (d,d')$. As a consequence, $\tilde{d'} \mid n$.
If $\tilde{d'} \neq 1$, there is $s >1$ such that $s \mid \tilde{d'}$. But then also $s \mid (p-1,n)=d$. This contradicts $(\tilde{d'},d)=1$, so
we conclude that $\tilde{d'}=1$, and therefore $n\tilde{d} =n'$, which contradicts our assumption that $n \geqslant n'$, unless $\tilde{d}=1$ and $n=n'$.
\end{proof}
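The count $\#\Lambda_n = \frac{p-1}{d}$ used in the proof can be checked numerically for small primes; the following sketch (our own) counts the $n$-th power classes in $\mathbb{F}_p^{\times}$, which for $p \nmid n$ represent the $n$-th powers among the units of $\mathbb{Q}_p$ by Hensel's lemma:
\begin{verbatim}
# Number of n-th power classes in F_p^*: should equal (p-1)/gcd(p-1, n).
from math import gcd

def power_classes(p, n):
    return len({pow(a, n, p) for a in range(1, p)})

for p, n in [(7, 4), (11, 3), (13, 6)]:
    print(p, n, power_classes(p, n), (p - 1) // gcd(p - 1, n))
\end{verbatim}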
\end{document} |
\begin{document}
\lhead{Knot Fertility and Lineage}
\title{Knot Fertility and Lineage}
\begin{abstract} In this paper, we introduce a new type of relation between knots called the descendant relation. One knot $H$ is a {\em descendant} of another knot $K$ if $H$ can be obtained from a minimal crossing diagram of $K$ by some number of crossing changes. We explore properties of the descendant relation and study how certain knots are related, paying particular attention to those knots, called {\em fertile knots}, that have a large number of descendants. Furthermore, we provide computational data related to various notions of knot fertility and propose several open questions for future exploration.
\end{abstract}
\section{Introduction}
The $7_6$ knot, pictured in Figure~\ref{7-6}, is a particularly interesting knot. This is because, in a certain sense, all smaller knots are contained in this knot.
\begin{figure}
\caption{A minimal crossing diagram of the $7_6$ knot.}
\label{7-6}
\end{figure}
What do we mean by ``contained" in this context? We are interested in studying when a knot is a {\em parent} of another knot, where parenthood is defined as follows.
\begin{definition} A knot $K$ is a \textbf{parent} of a knot $H$ if a subset of the crossings in a minimal crossing diagram of $K$ can be changed to produce a diagram of $H$. In this case, we say that $H$ is a \textbf{descendant} of $K$.
\end{definition}
For instance, knot $11a135$ is a parent of knot $3_1$, the trefoil, as shown in Figure~\ref{parent_ex}. Equivalently, we can say that the trefoil is a descendant of $11a135$. A curious feature of this definition is that any knot $K$ is both its own descendant and its own parent.
\begin{figure}
\caption{A minimal crossing diagram of the $11a135$ knot becomes the trefoil after several crossing changes.}
\label{parent_ex}
\end{figure}
It is when we consider this more general relationship between knots that we see how interesting $7_6$ is, for $7_6$ is a parent of all knots with strictly smaller crossing number. That is, the descendants of $7_6$ are: $0_1$, $3_1$, $4_1$, $5_1$, $5_2$, $6_1$, $6_2$, and $6_3$. It is precisely for this reason that we call $7_6$ a {\em fertile} knot.
\begin{definition} A knot $K$ with crossing number $n$ is \textbf{fertile} if $K$ is a parent of every knot with crossing number less than $n$.
\end{definition}
Now that we have defined the concept of fertility, it is natural to ask, ``Which other knots are fertile?'' and ``Are there any fertile knots with more than seven crossings?'' In thinking about the first question, we note that it is a straightforward exercise to verify that $0_1$, $3_1$, $4_1$, $5_2$, $6_2$, and $6_3$ are also fertile. In Section~\ref{families}, we will develop tools to prove that knots such as $5_1$, $6_1$, $7_1$, and $7_2$ are {\em not} fertile. The computational results we present in Section~\ref{computations} illustrate that many other knots with seven or more crossings fail to be fertile. Answering the second question is somewhat trickier. We will introduce the related notions of $n$-fertility and $(n,m)$-fertility in Section~\ref{n-fertile} to reframe this question.
In addition to considering questions that specifically pertain to fertility, we consider more broadly the parent--descendant relationships between knots. For instance, is this relationship {\em transitive}? That is, if $A$ is a descendant of $B$ and $B$ is a descendant of $C$, is $A$ a descendant of $C$? We return to this fascinating question in Section~\ref{computations}.
\section{Preliminaries}
Before we get started, we note that it is often convenient to work with an alternative, but equivalent definition of what it means for one knot to be a parent (or descendant) of another. Suppose $K$ is a parent of $H$. Then if we consider all minimal diagrams of $K$ and {\em forget} these diagrams' crossing information---so we merely consider their {\bf shadows}---then $H$ must be obtainable by some choice of crossing information in one of these shadows. Figure~\ref{shadow} gives an example of how we derive a descendant from a parent following this process.
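This shadow-based description suggests a direct, if expensive, computational test: enumerate all $2^n$ assignments of crossing information to an $n$-crossing shadow and record the knot types that occur. The following Python sketch illustrates the idea; the shadow encoding (the attributes \texttt{crossings} and \texttt{resolve}) and the knot-identification routine \texttt{identify\_knot} are hypothetical placeholders standing in for the actual software described in Section~\ref{computations}.
\begin{verbatim}
from itertools import product

def resolution_set(shadow, identify_knot):
    """Knot types obtainable from a shadow (hypothetical encoding).

    `shadow.crossings` lists the precrossings, `shadow.resolve(signs)`
    returns the diagram obtained by choosing over/under information,
    and `identify_knot` names the knot type of a resolved diagram.
    """
    found = set()
    for signs in product((+1, -1), repeat=len(shadow.crossings)):
        found.add(identify_knot(shadow.resolve(signs)))
    return found

def is_parent(minimal_shadows_of_K, H, identify_knot):
    """K is a parent of H if some shadow of a minimal diagram of K
    resolves to H."""
    return any(H in resolution_set(S, identify_knot)
               for S in minimal_shadows_of_K)
\end{verbatim}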
\begin{figure}
\caption{Starting with a minimal crossing diagram of $7_6$ (left), we forget crossing information to obtain the diagram's shadow. We then choose all new crossing information in the shadow and find we have a diagram of $4_1$ (right). This illustrates that $7_6$ is a parent of $4_1$.}
\label{shadow}
\end{figure}
We note that a diagram that is missing none, all, or some of its crossing information is referred to as a {\em pseudodiagram}, as in \cite{hanaki}. Unknown crossings in a pseudodiagram are called {\em precrossings.} Equivalence relations for pseudodiagrams, called pseudo-Reidemeister moves, allow us to think of pseudodiagrams in terms of the knots they have the potential to represent. (See \cite{pseudo} to learn much more about these objects.) Pseudodiagrams and pseudoknots will be useful objects in some constructions in Section~\ref{families}.
There are a couple of preliminary results we can state now. For instance, we notice that we can equate knots with their mirror images when proving results about knot lineage, on account of the following general fact.
\begin{proposition}\label{mirror}
If a knot $K$ is a parent of a knot $H$, then $K$ is also a parent of the mirror image of $H$.
\end{proposition}
\begin{proof} Let $P$ be the shadow corresponding to a minimal diagram of knot $K$, and suppose that $H$ is a descendant of $K$. Then we can resolve the precrossings of $P$ to produce $H$. Resolving the precrossings of $P$ in precisely the opposite way produces a diagram of the mirror image of $H$. So $K$ must also be a parent of the mirror image of $H$.
\end{proof}
We also make the following observation related to composite knots.
\begin{proposition}
Suppose that $K_1$ and $K_2$ are either both alternating knots or both torus knots, $H_1$ is a descendant of $K_1$, and $H_2$ is a descendant of $K_2$. Then $H_1\# H_2$ is a descendant of $K_1\# K_2$.
\end{proposition}
\begin{proof} For a pair of alternating knots $K_1$ and $K_2$, we know by \cite{kauffman1, murasugi, thistle} that forming the connect sum of a minimal diagram of $K_1$ with a minimal diagram of $K_2$ in the plane---as in Figure~\ref{comp}---produces a minimal diagram of $K_1\#K_2$. A similar result holds if both $K_1$ and $K_2$ are torus knots by \cite{diao2}. (We note that, in general, forming the connect sum of two minimal crossing diagrams of factor knots may not produce a minimal diagram of the composite knot \cite{lackenby}.) Now, if we resolve a shadow of a minimal diagram of $K_1$ to produce $H_1$ and, likewise, resolve a shadow of a minimal diagram of $K_2$ to produce $H_2$, these diagrams can be connected to produce a diagram of $H_1\# H_2$ that projects to the shadow of a minimal diagram of $K_1\#K_2$.
\end{proof}
\begin{figure}
\caption{The composition of minimal diagrams of $4_1$ and $3_1$.}
\label{comp}
\end{figure}
Now, without further ado, let us learn about descendant relations in families of knots.
\section{Families of Knots}\label{families}
In this section, we consider two key knot families: twist knots and $(2,p)$-torus knots. These families distinguish themselves by being closely related to one another---in terms of descendant and parent relations---and rather insular. As a result, we will observe that knots in these families tend to have few descendants.
\begin{figure}
\caption{A generic example of a minimum crossing diagram of a twist knot.}
\label{twist_gen}
\end{figure}
Keeping this functional equivalence of knots and their mirror images in mind, let us turn to twist knots. A \textit{twist knot} $T_n$ is a knot formed by two {\em clasp crossings} and $n$ {\em twist crossings}, as in Figure \ref{twist_gen}. We note that all twist knots are alternating, so it suffices to consider their minimal crossing diagrams as in the figure, where the alternating pattern is preserved as we pass {\em between} the clasp crossings and the twist crossings. For these knots, we have the following theorem.
\begin{theorem}\label{twist} The knot $K$ is a descendant of twist knot $T_n$ if and only if $K = T_k$ for some integer $k$ with $0 \leq k \leq n$.
\end{theorem}
\begin{proof}
We begin by proving the ``only if'' direction of Theorem \ref{twist}. Suppose that the knot $K$ is a descendant of twist knot $T_n$ for some positive integer $n$. Then there exists a choice of crossing information for all crossings in the shadow of a standard minimal crossing diagram of $T_n$ (i.e., the shadow of a diagram from Figure~\ref{twist_gen}) that produces a diagram of $K$. Suppose that the clasp crossings, $c_1$ and $c_2$, have opposite signs in this resolution. Then, $K$ is the unknot, $T_{0}$. Otherwise, $c_1$ and $c_2$ are alternating.
In the remaining $n$ twist crossings, $p$ crossings are positive, and $n-p$ crossings are negative. We can perform $p$ or $n-p$ (whichever number is smaller) Reidemeister 2 moves to make the twist crossings alternating. If $p=n-p$, the result is a two-crossing diagram of the unknot, $T_0$. Otherwise, some number of twist crossings with the same sign remain. In this case, either the entire knot diagram is alternating, or we can remove one crossing---producing a diagram of $T_k$---by using the sequence of Reidemeister moves pictured in Figure \ref{the_end}.
\begin{figure}
\caption{Simplifying Reidemeister moves}
\label{the_end}
\end{figure}
To prove the ``if'' direction of the theorem, let $P$ be the shadow of a minimal crossing diagram of $T_n$, and suppose that $k$ is an integer such that $0\leq k \leq n$. For the remainder of this proof, refer to Figure \ref{labetwist} for orientation and labeling conventions.
\begin{figure}
\caption{Twist knot with orientation and labeled precrossings.}
\label{labetwist}
\end{figure}
\textit{Case 1: $n-k$ is even.} In the twist of $P$, we resolve the $n-k$ crossings $d_{k+1}, \ d_{k+2},$ $ \cdots, d_{n}$ so that half are positive and half are negative. We then perform $\frac{1}{2}(n-k)$ Reidemeister 2 moves on these crossings to produce the shadow $P'$ of a minimal crossing diagram of $T_k$. Finally, we resolve the precrossings in $P'$ to produce a diagram of $T_k$.
\textit{Case 2: $n-k$ is odd.} We resolve the $n-k-1$ precrossings, $d_{k+2}, \ d_{k+3}, \cdots , d_{n},$ in the twist of $P$ as in Case 1 so that half of the crossings are positive and half of the crossings are negative. Then, we perform $\frac{1}{2}(n-k-1)$ Reidemeister 2 moves to eliminate crossings $d_{k+2}, \ d_{k+3}, \ \cdots , d_{n}$.
Next, if $k+1$ is odd, we resolve crossings $c_1$ and $c_2$ to be negative and $d_1, \cdots, d_{k+1}$ to be positive. If $k+1$ is even, we resolve all of the crossings in the shadow so they are positive. In both cases, the effect of resolving precrossings in this way puts us in the situation of Figure \ref{the_end} where we obtain a minimal crossing diagram of $T_{k}$ after a sequence of Reidemeister moves.
\end{proof}
Next, we prove a similar result for another knot family: the $T_{2,p}$ torus knots, i.e., the knots that can be represented as closures of 2-braids. Note that, since these torus knots are alternating, it suffices to consider their standard minimal diagrams as in, for example, Figure~\ref{torus_r2}.
\begin{theorem}\label{2ptorus}
The knot $K$ is a descendant of torus knot $T_{2,p}$ if and only if $K = T_{2, q}$ for some $0<q \leq p$, where $p$ and $q$ are odd integers.
\end{theorem}
\begin{proof}
Let $K$ be a descendant of torus knot $T_{2,p}$ for some positive odd integer $p$. Then, there exists a choice of crossing resolutions that produces a diagram of $K$ from the shadow of a minimal crossing diagram of $T_{2,p}$. Suppose there are $n$ positive crossing resolutions and $m$ negative crossing resolutions for some non-negative integers $n$ and $m$. If we let $l=\min\{n,m\}$, we can perform $l$ Reidemeister 2 moves to reduce the number of crossings in the diagram by $2l$. The resulting knot is $T_{2, p-2l}$.
To prove the converse, we consider the torus knot $T_{2,p}$ where $p$ is a positive odd integer, and we let $q$ be a positive odd integer less than $p$. In the shadow of $T_{2,p}$, we resolve $q+\frac{p-q}{2}$ crossings to be positive and the remaining $\frac{p-q}{2}$ crossings to be negative. This enables us to perform $\frac{p-q}{2}$ Reidemeister 2 moves---as in the Figure \ref{torus_r2} example---to obtain a minimal crossing diagram of $T_{2,q}$. \end{proof}
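As a quick check of the arithmetic in the converse direction, take $p=11$ and $q=5$: we resolve $q+\frac{p-q}{2}=5+3=8$ crossings positively and the remaining $\frac{p-q}{2}=3$ crossings negatively, so three Reidemeister 2 moves remove six crossings and leave a minimal $5$-crossing diagram of $T_{2,5}=5_1$.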
\begin{figure}
\caption{Resolving crossings in the shadow of the $T_{2,11}$ torus knot so that Reidemeister 2 moves yield a minimal crossing diagram of a smaller $(2,q)$-torus knot.}
\label{torus_r2}
\end{figure}
\section{Notions of Fertility}\label{n-fertile}
As we mentioned in the introduction, $0_1$, $3_1$, $4_1$, $5_2$, $6_2$, $6_3$, and $7_6$ are all fertile knots. On the other hand, the results of Section~\ref{families} allow us to show that the knots $5_1$, $6_1$, $7_1$, and $7_2$ are not fertile. Indeed, the knot $5_1$ is $T_{2,5}$ and $7_1$ is $T_{2,7}$, while the figure-eight knot, $4_1$, is not $T_{2,p}$ for any $p$, so $4_1$ fails to be a descendant of either knot. Furthermore, $6_1$ and $7_2$ are both twist knots while $5_1$ is not a twist knot and, hence, not a descendant. So both $6_1$ and $7_2$ fail to be fertile. A fun afternoon exercise shows that all other knots with seven or fewer crossings---namely, $7_3$, $7_4$, $7_5$, and $7_7$---fail to be fertile. For instance, we can show that $6_1$ is not a descendant of $7_3$ or $7_5$ while $6_3$ is not a descendant of $7_4$ or $7_7$.
What about prime knots with more than seven crossings? What about composite knots? Computations for all prime and composite knots with 10 or fewer crossings reveal that there are no 8-, 9-, or 10-crossing fertile knots. In fact, we conjecture that there are no fertile knots with crossing number greater than seven.
So, is fertility a helpful notion? Yes and no. To gain insights about a knot's structure from the notion of fertility, we expand our definition.
\begin{definition} A knot is {\bf \em n}-{\bf fertile} if it is a parent of every knot with $n$ or fewer crossings.
\end{definition}
In other words, if a knot $K$ is $n$-fertile and has crossing number $cr(K)=n+1$, then $K$ is fertile. But for every knot $K$, there is some nonnegative integer $n$ for which $K$ is $n$-fertile. This notion enables us to define a new knot invariant.
\begin{definition} The {\bf fertility number}, $F(K)$, of a knot $K$ is the greatest integer $n$ for which $K$ is $n$-fertile.
\end{definition}
What do we already know about the fertility number? First, the maximum value of $F(K)$ for a knot with crossing number $cr(K)>4$ is $cr(K)-1$. (Notice that the figure-eight knot is 4-fertile, the trefoil is 3-fertile, and the unknot is 0-fertile.) Furthermore, 0 is clearly a lower bound for $F(K)$, but perhaps we can say more. In Section~\ref{opp}, we will discuss how to improve the lower bound of $F(K)$ in general. For small knots, the results in Section~\ref{computations} will tell us so much more.
Before we look at fertility numbers for specific knots, we introduce one more notion related to knot fertility called $(n,m)$-fertility.
\begin{definition}\label{nm} A knot $K$ is {\bf ({\em n,m})}-{\bf fertile} if for each prime knot $H$ with $cr(H)\leq n$, there is a knot shadow (not necessarily from a minimal diagram) with $m$ crossings that contains both $H$ and $K$.\end{definition}
To determine if a knot $K$ is $(n,m)$-fertile, one searches through all $m$-crossing shadows that have $K$ as a resolution. (For instance, if $m=cr(K)$, then this set of
shadows is just all shadows coming from minimal diagrams of $K$.) If the union of the resolution sets of all these shadows contains all prime knots with $n$ or fewer crossings, then $K$ is $(n,m)$-fertile.
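The search just described is straightforward to organize in code. The sketch below reuses the \texttt{resolution\_set} routine sketched in the Preliminaries; the census of $m$-crossing shadows and the list of prime knot types are assumed to be supplied by the caller.
\begin{verbatim}
def is_nm_fertile(K, shadows_m, prime_knots_n, resolution_set):
    """Hedged sketch of the (n, m)-fertility test described above.

    `shadows_m` lists all m-crossing shadows (e.g. from the census of
    Cantarella et al.), `prime_knots_n` lists the prime knot types with
    at most n crossings, and `resolution_set(S)` returns the set of
    knot types obtainable from the shadow S (as in the earlier sketch).
    """
    # Keep only the m-crossing shadows that have K as a resolution.
    relevant = [S for S in shadows_m if K in resolution_set(S)]
    # K is (n, m)-fertile if every prime knot with at most n crossings
    # appears in the union of the resolution sets of these shadows.
    reachable = set()
    for S in relevant:
        reachable |= resolution_set(S)
    return all(H in reachable for H in prime_knots_n)
\end{verbatim}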
One of the first things we notice from Definition~\ref{nm} is that $(n-1,n)$-fertility is equivalent to fertility for $n$ crossing knots. We explore the concept of $(n,m)$-fertility further in Sections~\ref{computations} and \ref{opp}.
\section{Computational Results}\label{computations}
Cantarella et al.~\cite{knotdiagrams} have computed an exhaustive list of all
knot shadows through 10 crossings. By assigning all possible crossings to
these shadows and analyzing all knot types resulting from the
resulting knot diagrams, we obtain complete information about knot
fertility for knots through 10 crossings.
The knot types were computed using \texttt{lmpoly}, a program by Ewing
and Millett \cite{millettewing} to compute the HOMFLYPT polynomial
\cite{HOMFLY} of diagrams, and \texttt{knotfind}, a program to compute
knot types from Dowker codes, which is a part of the larger program
Knotscape by Hoste and Thistlethwaite \cite{knotscape}. The program
\texttt{knotfind} does not distinguish between the different
chiralities of chiral knots. Since fertility is a non-chiral
property (see Prop. \ref{mirror}), \texttt{knotfind} would have been sufficient for the work
here. However, we used the HOMFLYPT computation via \texttt{lmpoly}
as a further check on the data.
In Table \ref{shadowcountstable}, we show for each crossing number the total number of shadows, the number of
shadows which host only unknots (which we call {\em totally unknotted}), and the number of shadows which yield
a minimal crossing diagram (partitioned into prime and composite knot shadows).
Note that some shadows host minimal diagrams for multiple
knot types. In the table, the number of minimal diagrams are counted
without multiplicity.
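A hedged sketch of how these counts could be tallied from the shadow census is given below; \texttt{resolution\_set}, \texttt{crossing\_number}, and the attribute \texttt{num\_crossings} are assumed helpers (the actual computation used \texttt{lmpoly} and \texttt{knotfind} as described above), and for brevity the sketch does not split the minimal-diagram shadows into prime and composite.
\begin{verbatim}
def tally_shadows(shadows, resolution_set, crossing_number):
    """Tally totally unknotted shadows and shadows hosting a minimal
    diagram, as in the shadow-count table (prime/composite split
    omitted)."""
    totally_unknotted = 0
    hosts_minimal = 0
    for S in shadows:
        types = resolution_set(S)
        if types == {"0_1"}:
            totally_unknotted += 1
        elif any(crossing_number(T) == S.num_crossings for T in types):
            # Some resolution realizes its knot type with exactly as
            # many crossings as the shadow has, i.e. a minimal diagram;
            # shadows are counted without multiplicity, as in the table.
            hosts_minimal += 1
    return totally_unknotted, hosts_minimal
\end{verbatim}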
\begin{table}
\caption{Counts for the number of shadows, number of totally unknotted shadows, and number of shadows corresponding to minimal diagrams by crossing number.}
\centering
\begin{tabular}{c||c|c|c|c}
crossings & shadows & totally unknotted & minimal prime & minimal composite \\
\hline
3 & 6 & 5 & 1 & 0\\
4 & 19 & 16 & 1 &0 \\
5 & 76 & 55 & 2 & 0\\
6 & 376 & 240 & 3 & 2 \\
7 & 2194 & 1149 & 10 & 3 \\
8 & 14614 & 6229 & 27 & 13 \\
9 & 106421 & 35995 & 101 & 59 \\
10 & 823832 & 219272 & 364 & 263
\end{tabular}
\label{shadowcountstable}
\end{table}
When different minimal diagrams exist for alternating knot types, they
are all related by flype moves \cite{MWTetal}. Thus, the list of descendants derived from one minimal diagram of an alternating knot is precisely the same as the list derived from any other minimal diagram.
Minimal diagrams of non-alternating knot types, though, need not be related by flype moves. As a result, the lists of
descendants derived from different minimal diagrams of a non-alternating knot might differ. In our fertility
table, Table \ref{fertilitytable}, we considered a knot type $H$ to be
a descendant of a non-alternating knot type $K$ if there exists a shadow containing $n=cr(K)$ crossings
that has both $K$ and $H$ as resolutions. This is consistent with---though it represents a different way of viewing---our original definition of ``descendant.''
To give the reader some idea of the computational complexity of this problem, we provide Table \ref{nonalttable}, which shows the number of minimal diagrams for non-alternating knot types through 10 crossings.
\begin{table}[h]
\caption{Numbers of minimal diagrams for non-alternating prime knot types}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c}
knot & shadows & knot & shadows & knot & shadows & knot & shadows & knot & shadows\\
\hline
$8_{19}$ & 5 & $10_{124}$ & 13 & $10_{135}$ & 21 & $10_{146}$ & 17 & $10_{157}$ & 2 \\
$8_{20}$ & 7 & $10_{125}$ & 12 & $10_{136}$ & 26 & $10_{147}$ & 18 & $10_{158}$ & 3 \\
$8_{21}$ & 5 & $10_{126}$ & 8 & $10_{137}$ & 27 & $10_{148}$ & 11 & $10_{159}$ & 5 \\
$9_{42}$ & 17 & $10_{127}$ & 7 & $10_{138}$ & 19 & $10_{149}$ & 7 & $10_{160}$ & 10 \\
$9_{43}$ & 14 & $10_{128}$ & 20 & $10_{139}$ & 5 & $10_{150}$ & 19 & $10_{161}$ & 8 \\
$9_{44}$ & 17 & $10_{129}$ & 19 & $10_{140}$ & 16 & $10_{151}$ & 15 & $10_{162}$ & 3 \\
$9_{45}$ & 14 & $10_{130}$ & 14 & $10_{141}$ & 11 & $10_{152}$ & 1 & $10_{163}$ & 3 \\
$9_{46}$ & 8 & $10_{131}$ & 14 & $10_{142}$ & 8 & $10_{153}$ & 2 & $10_{164}$ & 3 \\
$9_{47}$ & 2 & $10_{132}$ & 35 & $10_{143}$ & 13 & $10_{154}$ & 3 & $10_{165}$ & 3 \\
$9_{48}$ & 4 & $10_{133}$ & 32 & $10_{144}$ & 8 & $10_{155}$ & 6 & \\
$9_{49}$ & 2 & $10_{134}$ & 20 & $10_{145}$ & 10 & $10_{156}$ & 6 & \\
\end{tabular}
\label{nonalttable}
\end{table}
In Table \ref{fertilitytable}, we record fertility numbers, which provide one sort of summary of our more significant computational results. In fact, we derived an exhaustive list of descendants of all knots with up to 10 crossings. This collection of data is too vast to share here, but we can share some interesting observations we derived from the data.
First, and perhaps most interestingly, we found the descendant relation is decidedly {\em not} transitive. There are a number of counterexamples
to transitivity, most of which involve non-alternating knot types. However, there are nine examples of non-transitivity involving only
alternating knot types through 10 crossings. See Table~\ref{nontransitive}.
\begin{table}[h]
\caption{Triples of alternating knots such that $K_1$ is a descendant of $K_2$ and $K_2$ is a descendant of $K_3$, but $K_1$ is not a descendant of $K_3$.}
\centering
\begin{tabular}{c|c|c}
$K_1$ & $K_2$ & $K_3$\\
\hline
$7_1$ & $8_9$ & $10_{123}$\\
$7_2$ & $8_{14}$ & $10_{96}$\\
$7_3$ & $8_{7}$ & $10_{116}$\\
$7_3$ & $8_{11}$ & $10_{92}$\\
$7_3$ & $8_{11}$ & $10_{113}$\\
$7_4$ & $8_{11}$ & $10_{92}$\\
$7_4$ & $8_{11}$ & $10_{113}$\\
$7_5$ & $8_{9}$ & $10_{123}$\\
$7_5$ & $8_{14}$ & $10_{117}$\\
\end{tabular}
\label{nontransitive}
\end{table}
The use of the term ``descendant'' in this paper leads to other strange naming conventions for knot relationships.
In particular, we call a knot $K_1$ a \textit{sibling} of a knot $K_2$, where $n=cr(K_1)=cr(K_2)$, if there is an $n$-crossing shadow that
has both $K_1$ and $K_2$ as resolutions.\footnote{The term ``sibling" was used in a similar context in~\cite{diao}, though the two definitions differ.} Note that for a given shadow, there are only two ways of assigning crossings
to yield an alternating diagram, and these two diagrams are mirror
images. Thus, these sibling relationships only occur between
alternating and non-alternating or non-alternating and non-alternating
knot types. Table \ref{siblingstable} shows all sibling relationships
for 8- and 9-crossing knot types. We created a table of sibling relationships between 10-crossing knots, but it is too unwieldy to include here. For instance, $10_{132}$ alone has 41 siblings!
Another interesting aspect of $n$-fertility that can be studied is a notion of {\em anti-fertility}. Specifically, which knots act as roadblocks to more complex knots achieving $n$-fertility for various values of $n$? For instance, our computations show that the knots $7_1$, $7_4$, and $7_7$ fail to be descendants of 15 of the 18 alternating 8-crossing knots. In essence, these three knots are providing a significant barrier to 8-crossing knots achieving fertility. These three knots continue to be problematic for alternating knots with crossing number 9 or 10. Of the 41 alternating 9-crossing knots, $7_1$ fails to be a descendant of 31 knots, $7_4$ fails to be a descendant of 26 knots, and $7_7$ fails to be a descendant of 19 knots. Similarly, of the 123 alternating 10-crossing knots, $7_1$ fails to be a descendant of 77 knots, $7_4$ fails to be a descendant of 67 knots, and $7_7$ fails to be a descendant of 53 knots.
Consider another unusual example. If we examine the shadows of alternating 9-crossing knots, we find that $8_{16}$ and $8_{17}$ fail to appear in the resolution sets of 40 of the 41 representative shadows.\footnote{Note that we are only considering the shadow of one representative from each equivalence class of reduced, alternating knot diagrams that are related by flypes.} In other words, $8_{16}$ and $8_{17}$ are each only descendants of one alternating 9-crossing knot ($9_{33}$ and $9_{32}$, respectively). The $8_3$ knot provides another barrier to fertility for 9- and 10-crossing knots. It fails to be a descendant of 38 out of 41 alternating 9-crossing knots and 109 of 123 alternating 10-crossing knots.
Just as we can make a number of observations about $n$-fertility and anti-fertility, the data set helps us compute $(n,m)$-fertility numbers for a number of knots. See Tables \ref{m3fertilitytable}-\ref{m10fertilitytable} in the appendix for a complete list of $(n,m)$-fertility numbers for knots with up to 10 crossings. We highlight a subset of this data here, namely examples of knots that are $(k,k)$-fertile for some $k$. Table~\ref{kkfertile} shows a complete list of $(k,k)$-fertile knots for $6\leq k\leq 10$. It is no surprise that the unknot is $(k,k)$-fertile for each $k$ since every knot shadow can be resolved to produce a diagram of the unknot. More interestingly, we notice that $3_1$, $4_1$, and $5_2$ are $(k,k)$-fertile for certain $k$ values, following regular patterns. Using the observations of Table~\ref{kktable} and the results of Section~\ref{opp}, we are able to prove that these patterns hold {\em ad infinitum}.
\begin{table}[h]
\caption{A summary of knots that are $(k,k)$-fertile}\label{kktable}
\centering
\begin{tabular}{c|l}
$k$& Knots that are $(k,k)$-fertile\\
\hline
6 & $0_1$, $3_1$, $4_1$, $5_2$\\
7 & $0_1$, $3_1$\\
8 & $0_1$, $3_1$, $4_1$, $5_2$\\
9 & $0_1$, $3_1$\\
10 & $0_1$, $3_1$, $4_1$, $5_2$\\
\end{tabular}
\label{kkfertile}
\end{table}
There are likely many more interesting observations that can be made from staring at our data, but for now, we move on to discussing how our work fits in the context of the work done by others over the years.
\section{Other Knot Relations}\label{opp}
Our notion of descendant is not the first notion that attempts to capture what it means for a knot to contain another knot. For instance, the descendant relation shares some similarities with the notion of {\em predecessor}, introduced in \cite{diao}.
\begin{definition} A knot $K_1$ is a {\bf predecessor} of knot $K_2$ if the crossing number of $K_1$ is less than that of $K_2$ and $K_1$ can be obtained from a minimal diagram of $K_2$ by a single crossing change.
\end{definition}
We see that, while the trefoil is a descendant of the figure-eight knot, it is not a predecessor since two crossing changes are required to derive a trefoil from a minimal diagram of the figure-eight knot. So if a knot $K_1$ is a predecessor of a knot $K_2$, then $K_1$ is a descendant of $K_2$, but the converse fails to hold in general.
In another effort to relate knots to their parts, Millett and Jablan described what it means for one knot to be a {\em subknot} of another knot~\cite{millett1}. In particular, they gave the following definition.
\begin{definition} A knot $K_1$ is a {\bf subknot} of knot $K_2$ if it can be obtained from a minimal diagram of $K_2$ by crossing changes that preserve the crossings within a segment of the diagram and change those outside this segment so that the complementary segment is strictly ascending.
\end{definition}
\begin{figure}
\caption{The knot $5_2$ (right) is a subknot of the knot $7_4$ (left).}
\label{subknot}
\end{figure}
In Figure~\ref{subknot}, we see that the twist knot $5_2$ is a subknot of $7_4$. Indeed, if crossing changes are performed on a segment of the knot $7_4$ between the two dots so that this segment is ascending, we find that knot $5_2$ is produced.
While the subknot definition {\em seems} stricter than the descendant relation, is it possible that the subknot and descendant relations are actually the same relations in disguise? Just as with the predecessor relation, we immediately observe that if a knot $K_1$ is a subknot of knot $K_2$, then $K_1$ is a descendant of $K_2$. What about the converse? In \cite{millett2}, it was proven that the trefoil is {\em not} a subknot of $11a135$. On the other hand, we illustrated in Figure~\ref{parent_ex} that the trefoil is a descendant of $11a135$, so the descendant and subknot relations must be distinct.
Finally, another well-known relationship between knots is one that was introduced by Taniyama in \cite{taniyama}.
\begin{definition} A knot $K_1$ is a {\bf minor} of knot $K_2$ if the set of knot shadows that have $K_2$ as a resolution is a subset of the set of shadows that have $K_1$ as a resolution.
\end{definition}
Once again, we notice that if $K_1$ is a minor of $K_2$, then $K_1$ is a descendant of $K_2$. Since all shadows of diagrams of $K_2$ are also shadows of diagrams of $K_1$ when $K_1$ is a minor of $K_2$, then in particular, a minimal crossing shadow of $K_2$ will have $K_1$ as a resolution. To determine if the minor relation is the same as the descendant relation, we need to ask about the converse. Our results from Section~\ref{computations} hold the key to answering this question. Our computations show that $7_3$ is a descendant of $8_{11}$. We also know that $8_{11}$ is a descendant of $10_{92}$, but $7_3$ is not a descendant of $10_{92}$---in fact, this trio of knots is one of our counterexamples to transitivity. More concretely, the shadow in Figure~\ref{minor_ex} contains $8_{11}$ but not $7_3$. So $7_3$ is not a minor of $8_{11}$. This proves that the descendant and minor relations are distinct.
\begin{figure}
\caption{A shadow of $10_{92}$ that has $8_{11}$, but not $7_3$, as a resolution.}
\label{minor_ex}
\end{figure}
While the descendant and minor relations are interestingly different, the fact that if $K_1$ is a minor of $K_2$, then $K_1$ is a descendant of $K_2$ yields the following result as a corollary of Theorem 1 in \cite{taniyama}.
\begin{corollary}\label{tref} Every nontrivial knot has the trefoil as a descendant.
\end{corollary}
From this result, we see in particular that the fertility number $F(K)$ for any nontrivial knot $K$ must be at least 3. We can also infer the following result.
\begin{theorem} The trefoil is $(k,k)$-fertile for all $k\geq 3$. \end{theorem}
Similarly, we have the following three corollaries of Theorems 2, 3, and 4 in \cite{taniyama}. Each of these corollaries has further implications for bounds on fertility numbers.
\begin{corollary}\label{4_1fertility} If $K$ has a prime factor that is not equivalent to a $(2,p)$-torus knot with $p\geq 3$, then $K$ has the figure-eight knot as a descendant.
\end{corollary}
\begin{corollary}\label{5_2fertility} If $K$ has a prime factor that is not equivalent to a $(2,p)$-torus knot with $p\geq 3$ or the figure-eight knot, then $K$ has the knot $5_2$ as a descendant.
\end{corollary}
\begin{corollary} If $K$ has a prime factor that is not equivalent to any of the pretzel knots $L(p_1,p_2,p_3)$ (where $p_1$, $p_2$, and $p_3$ are odd integers), then $K$ has the knot $5_1$ as a descendant.
\end{corollary}
Corollaries~\ref{4_1fertility} and \ref{5_2fertility} have a particularly nice consequence for $(k,k)$-fertility that puts the data in Table \ref{kktable} into a larger context. We can use these corollaries to prove the following result.
\begin{theorem} The figure-eight knot, $4_1$, and the $T_3$ twist knot, $5_2$, are both $(2j,2j)$-fertile for any integer $j\geq 3$.
\end{theorem}
\begin{proof}
Let us fix an integer $j\geq 3$. Our goal is to show that for every prime knot $H$ with crossing number less than or equal to $2j$, there is a knot shadow with $2j$ crossings that has both $H$ and $4_1$ (resp. $5_2$) as resolutions. We prove the result for $4_1$; the proof for $5_2$ is similar.
Let $K$ be any prime knot with $cr(K)\leq 2j$. By Corollary \ref{4_1fertility}, the only prime knots
that fail to have $4_1$ as a descendant are $(2,p)$ torus knots.
Case 1: $K$ is not a $(2,p)$ torus knot.
We know that $4_1$ is a descendant of $K$, i.e. there is a
$cr(K)$-crossing shadow $S$ that can
be resolved to both $4_1$ and $K$.
If $cr(K)=2j$, then we are done. If $cr(K)<2j$, then we can add
$n=2j-cr(K)$ crossings by applying pseudo-R1 moves to $S$ to
obtain a $2j$-crossing shadow which resolves to both $4_1$ and $K$.
Case 2: $K$ is a $(2,p)$ torus knot.
In Figure~\ref{kk_pf}, we provide a $2j$-crossing shadow that can be resolved
to produce $4_1$, $5_2$, and all
$(2,p)$ torus knots with $p$ odd, $0<p<2j$, which suffices to prove
the theorem. We illustrate in subdiagram (b) that the $(2,2j-1)$ torus knot can be derived from the shadow pictured in (a). By an argument similar to the one that proved Theorem~\ref{2ptorus}, we can also derive from this shadow every $(2,p)$ torus knot with $0<p<2j-1$, where $p$ is an odd integer. Subdiagrams (c) and (d) illustrate our desired resolutions of the $4_1$ and $5_2$ knots, respectively.
\end{proof}
\begin{figure}
\caption{Subfigure (a) shows a shadow with $2j$ crossings, (b) gives a resolution that produces a diagram of the $(2,2j-1)$ torus knot, (c) shows a resolution to a diagram of $4_1$, and (d) shows a diagram of $5_2$.}
\label{kk_pf}
\end{figure}
\section{Conclusion}\label{conclusion}
We conclude with some of our favorite lingering questions and ideas.
\begin{enumerate}
\item Are there fertile knots with crossing number greater than seven?
Since there are no fertile knots with crossing number 8, 9, or 10, it seems highly unlikely that a fertile knot with crossing number greater than 10 will suddenly appear. We have not conclusively ruled this possibility out, however. How could one prove that no more fertile knots exist?
\item Can the results of Section \ref{families} be generalized? In other words, what more can we say about our favorite families of knots?
We have yet to explore other families of rational knots, pretzel knots, or Montesinos knots. In general, are there similar descendant relationships between families of knots other than twist knots and $(2,p)$ torus knots?
\item For an arbitrary $n$-crossing knot $K$, what proportion of $m$-crossing knots (where $m>n$) should we expect to be parents of $K$?
For example, each alternating 8-crossing knot {\em fails} to be a descendant of between 33 and 41 of the 9-crossing alternating knots. In other words, each alternating 8-crossing knot has at most eight 9-crossing parents. Of the 123 10-crossing alternating knots, any given alternating 8-crossing knot will fail to be a descendant of between 73 and 116 of them.
\item What more can we say about barriers to fertility?
For instance, for a given $n$, what is the smallest set, $\mathcal{S}$, of alternating $n$-crossing knots which has the property that no alternating $(n+1)$-crossing knot has all members of $\mathcal{S}$ as descendants?
\item What more can we say about $(n,m)$-fertility in general and $(k,k)$-fertility in particular?
We saw that $3_1$, $4_1$, and $5_2$ are $(k,k)$-fertile for infinitely many $k$. Are there other knot types that are $(k,k)$-fertile for all sufficiently large $k$ or all sufficiently large odd or even $k$?
\end{enumerate}
\section*{Acknowledgments}
First, we would like to thank Ken Millett, Erica Flapan, Claus Ernst, Harrison Chapman, and Colin Adams for their suggestions that led to improvements of this paper. We express our deep gratitude to the National Science Foundation for supporting this work through grants DMS \#1460537 and \#1418869. This work was also supported by a grant from the Simons Foundation (\#426566, Allison Henrich) as well as the Clare Boothe Luce Foundation and the Seattle University College of Science and Engineering. Finally, this project was inspired by an observation of Inga Johnson's. We thank her for the inspiration.
\begin{table}
\caption{Fertility levels for knot types through 10 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c||c|c||c|c}
$K$ & $F$ & $K$ & $F$ & $K$ & $F$ & $K$ & $F$ & $K$ & $F$ & $K$ & $F$ & $K$ & $F$\\
\hline
$3_1$ & 3 & $9_{1}$ & 3 & $9_{41}$ & 5 & $10_{26}$ & 7 & $10_{66}$ & 6 & $10_{106}$ & 5 & $10_{146}$ & 7\\
$4_1$ & 4 & $9_{2}$ & 4 & $9_{42}$ & 6 & $10_{27}$ & 6 & $10_{67}$ & 6 & $10_{107}$ & 6 & $10_{147}$ & 7\\
$5_1$ & 3 & $9_{3}$ & 5 & $9_{43}$ & 6 & $10_{28}$ & 6 & $10_{68}$ & 6 & $10_{108}$ & 6 & $10_{148}$ & 6\\
$5_2$ & 4 & $9_{4}$ & 5 & $9_{44}$ & 6 & $10_{29}$ & 6 & $10_{69}$ & 7 & $10_{109}$ & 5 & $10_{149}$ & 6\\
$6_1$ & 4 & $9_{5}$ & 4 & $9_{45}$ & 6 & $10_{30}$ & 6 & $10_{70}$ & 6 & $10_{110}$ & 6 & $10_{150}$ & 6\\
$6_2$ & 5 & $9_{6}$ & 5 & $9_{46}$ & 6 & $10_{31}$ & 6 & $10_{71}$ & 6 & $10_{111}$ & 6 & $10_{151}$ & 6\\
$6_3$ & 5 & $9_{7}$ & 6 & $9_{47}$ & 6 & $10_{32}$ & 6 & $10_{72}$ & 6 & $10_{112}$ & 5 & $10_{152}$ & 5\\
$3_1\#3_1$ & 3 & $9_{8}$ & 6 & $9_{48}$ & 6 & $10_{33}$ & 6 & $10_{73}$ & 6 & $10_{113}$ & 6 & $10_{153}$ & 6\\
$7_{1}$ & 3 & $9_{9}$ & 5 & $9_{49}$ & 6 & $10_{34}$ & 6 & $10_{74}$ & 5 & $10_{114}$ & 6 & $10_{154}$ & 6\\
$7_{2}$ & 4 & $9_{10}$ & 5 & $3_1\#6_1$ & 4 & $10_{35}$ & 6 & $10_{75}$ & 5 & $10_{115}$ & 6 & $10_{155}$ & 5\\
$7_{3}$ & 5 & $9_{11}$ & 6 & $3_1\#6_2$ & 5 & $10_{36}$ & 6 & $10_{76}$ & 6 & $10_{116}$ & 5 & $10_{156}$ & 6\\
$7_{4}$ & 4 & $9_{12}$ & 6 & $3_1\#6_3$ & 5 & $10_{37}$ & 6 & $10_{77}$ & 6 & $10_{117}$ & 6 & $10_{157}$ & 5\\
$7_{5}$ & 5 & $9_{13}$ & 6 & $4_1\#5_1$ & 4 & $10_{38}$ & 6 & $10_{78}$ & 6 & $10_{118}$ & 5 & $10_{158}$ & 6\\
$7_{6}$ & 6 & $9_{14}$ & 5 & $4_1\#5_2$ & 4 & $10_{39}$ & 7 & $10_{79}$ & 5 & $10_{119}$ & 6 & $10_{159}$ & 5\\
$7_{7}$ & 5 & $9_{15}$ & 6 & $3_1\#3_1\#3_1$ & 3 & $10_{40}$ & 7 & $10_{80}$ & 6 & $10_{120}$ & 6 & $10_{160}$ & 6\\
$3_{1}\#4_{1}$ & 4 & $9_{16}$ & 5 & $10_{1}$ & 4 & $10_{41}$ & 7 & $10_{81}$ & 6 & $10_{121}$ & 6 & $10_{161}$ & 7\\
$8_{1}$ & 4 & $9_{17}$ & 5 & $10_{2}$ & 5 & $10_{42}$ & 7 & $10_{82}$ & 5 & $10_{122}$ & 6 & $10_{162}$ & 6\\
$8_{2}$ & 5 & $9_{18}$ & 6 & $10_{3}$ & 4 & $10_{43}$ & 6 & $10_{83}$ & 6 & $10_{123}$ & 5 & $10_{163}$ & 6\\
$8_{3}$ & 4 & $9_{19}$ & 6 & $10_{4}$ & 5 & $10_{44}$ & 7 & $10_{84}$ & 6 & $10_{124}$ & 5 & $10_{164}$ & 6\\
$8_{4}$ & 5 & $9_{20}$ & 6 & $10_{5}$ & 5 & $10_{45}$ & 7 & $10_{85}$ & 5 & $10_{125}$ & 6 & $10_{165}$ & 6\\
$8_{5}$ & 5 & $9_{21}$ & 6 & $10_{6}$ & 6 & $10_{46}$ & 5 & $10_{86}$ & 6 & $10_{126}$ & 6 & $3_1\#7_1$ & 3\\
$8_{6}$ & 6 & $9_{22}$ & 6 & $10_{7}$ & 5 & $10_{47}$ & 5 & $10_{87}$ & 6 & $10_{127}$ & 6 & $3_1\#7_2$ & 4\\
$8_{7}$ & 5 & $9_{23}$ & 6 & $10_{8}$ & 5 & $10_{48}$ & 5 & $10_{88}$ & 6 & $10_{128}$ & 6 & $3_1\#7_3$ & 5\\
$8_{8}$ & 6 & $9_{24}$ & 6 & $10_{9}$ & 5 & $10_{49}$ & 6 & $10_{89}$ & 6 & $10_{129}$ & 6 & $3_1\#7_4$ & 4\\
$8_{9}$ & 5 & $9_{25}$ & 6 & $10_{10}$ & 6 & $10_{50}$ & 6 & $10_{90}$ & 6 & $10_{130}$ & 6 & $3_1\#7_5$ & 5\\
$8_{10}$ & 5 & $9_{26}$ & 6 & $10_{11}$ & 6 & $10_{51}$ & 6 & $10_{91}$ & 5 & $10_{131}$ & 6 & $3_1\#7_6$ & 6\\
$8_{11}$ & 5 & $9_{27}$ & 7 & $10_{12}$ & 6 & $10_{52}$ & 6 & $10_{92}$ & 6 & $10_{132}$ & 6 & $3_1\#7_7$ & 5\\
$8_{12}$ & 6 & $9_{28}$ & 6 & $10_{13}$ & 6 & $10_{53}$ & 6 & $10_{93}$ & 6 & $10_{133}$ & 6 & $4_1\#6_1$ & 4\\
$8_{13}$ & 6 & $9_{29}$ & 6 & $10_{14}$ & 6 & $10_{54}$ & 6 & $10_{94}$ & 5 & $10_{134}$ & 6 & $4_1\#6_2$ & 5\\
$8_{14}$ & 6 & $9_{30}$ & 6 & $10_{15}$ & 6 & $10_{55}$ & 6 & $10_{95}$ & 6 & $10_{135}$ & 6 & $4_1\#6_3$ & 5\\
$8_{15}$ & 6 & $9_{31}$ & 6 & $10_{16}$ & 5 & $10_{56}$ & 6 & $10_{96}$ & 6 & $10_{136}$ & 6 & $5_1\#5_1$ & 3\\
$8_{16}$ & 5 & $9_{32}$ & 6 & $10_{17}$ & 5 & $10_{57}$ & 6 & $10_{97}$ & 6 & $10_{137}$ & 6 & $5_1\#5_2$ & 5\\
$8_{17}$ & 5 & $9_{33}$ & 6 & $10_{18}$ & 6 & $10_{58}$ & 6 & $10_{98}$ & 6 & $10_{138}$ & 6 & $5_2\#5_2$ & 4\\
$8_{18}$ & 5 & $9_{34}$ & 6 & $10_{19}$ & 6 & $10_{59}$ & 6 & $10_{99}$ & 5 & $10_{139}$ & 5 & $3_1\#3_1\#4_1$ & 4\\
$8_{19}$ & 5 & $9_{35}$ & 4 & $10_{20}$ & 6 & $10_{60}$ & 6 & $10_{100}$ & 5 & $10_{140}$ & 6 & & \\
$8_{20}$ & 6 & $9_{36}$ & 6 & $10_{21}$ & 5 & $10_{61}$ & 5 & $10_{101}$ & 6 & $10_{141}$ & 6 & & \\
$8_{21}$ & 6 & $9_{37}$ & 5 & $10_{22}$ & 6 & $10_{62}$ & 5 & $10_{102}$ & 6 & $10_{142}$ & 6 & & \\
$3_1\#5_1$ & 3 & $9_{38}$ & 6 & $10_{23}$ & 7 & $10_{63}$ & 6 & $10_{103}$ & 6 & $10_{143}$ & 6 & & \\
$3_1\#5_2$ & 4 & $9_{39}$ & 6 & $10_{24}$ & 6 & $10_{64}$ & 5 & $10_{104}$ & 5 & $10_{144}$ & 6 & & \\
$4_1\#4_1$ & 4 & $9_{40}$ & 5 & $10_{25}$ & 7 & $10_{65}$ & 6 & $10_{105}$ & 6 & $10_{145}$ & 6 & & \\
\end{tabular}
\label{fertilitytable}
\end{table}
\begin{table}
\caption{Sibling relationships for 8- and 9-crossing knot types}
\centering
\begin{tabular}{c||l}
knot & siblings\\
\hline
$8_{5}$ & $8_{19}$ $8_{20}$\\
$8_{10}$ & $8_{19}$ $8_{20}$ $8_{21}$\\
$8_{15}$ & $8_{20}$ $8_{21}$\\
$8_{16}$ & $8_{19}$ $8_{20}$ $8_{21}$\\
$8_{17}$ & $8_{19}$ $8_{20}$ $8_{21}$\\
$8_{18}$ & $8_{19}$ $8_{20}$\\
$8_{19}$ & $8_{5}$ $8_{10}$ $8_{16}$ $8_{17}$ $8_{18}$ $8_{20}$ $8_{21}$\\
$8_{20}$ & $8_{5}$ $8_{10}$ $8_{15}$ $8_{16}$ $8_{17}$ $8_{18}$ $8_{19}$ $8_{21}$\\
$8_{21}$ & $8_{10}$ $8_{15}$ $8_{16}$ $8_{17}$ $8_{19}$ $8_{20}$\\
\hline
$9_{22}$ & $9_{42}$ $9_{43}$ $9_{45}$\\
$9_{25}$ & $9_{42}$ $9_{44}$ $9_{45}$\\
$9_{29}$ & $9_{42}$ $9_{43}$ $9_{44}$ $9_{46}$\\
$9_{30}$ & $9_{43}$ $9_{44}$ $9_{45}$\\
$9_{32}$ & $9_{42}$ $9_{43}$ $9_{44}$ $9_{45}$\\
$9_{33}$ & $9_{42}$ $9_{43}$ $9_{44}$ $9_{45}$\\
$9_{34}$ & $9_{42}$ $9_{43}$ $9_{44}$ $9_{46}$ $9_{47}$\\
$9_{35}$ & $9_{46}$\\
$9_{36}$ & $9_{42}$ $9_{43}$ $9_{44}$\\
$9_{37}$ & $9_{46}$ $9_{48}$\\
$9_{38}$ & $9_{42}$ $9_{44}$ $9_{45}$ $9_{48}$\\
$9_{39}$ & $9_{42}$ $9_{44}$ $9_{46}$ $9_{48}$ $9_{49}$\\
$9_{40}$ & $9_{42}$ $9_{46}$ $9_{47}$\\
$9_{41}$ & $9_{42}$ $9_{46}$ $9_{49}$\\
$9_{42}$ & $9_{22}$ $9_{25}$ $9_{29}$ $9_{32}$ $9_{33}$ $9_{34}$ $9_{36}$ $9_{38}$ $9_{39}$ $9_{40}$ $9_{41}$ $9_{43}$ $9_{44}$ $9_{45}$ $9_{46}$ $9_{47}$ $9_{48}$ $9_{49}$\\
$9_{43}$ & $9_{22}$ $9_{29}$ $9_{30}$ $9_{32}$ $9_{33}$ $9_{34}$ $9_{36}$ $9_{42}$ $9_{44}$ $9_{45}$ $9_{46}$ $9_{47}$\\
$9_{44}$ & $9_{25}$ $9_{29}$ $9_{30}$ $9_{32}$ $9_{33}$ $9_{34}$ $9_{36}$ $9_{38}$ $9_{39}$ $9_{42}$ $9_{43}$ $9_{45}$ $9_{46}$ $9_{47}$ $9_{48}$ $9_{49}$\\
$9_{45}$ & $9_{22}$ $9_{25}$ $9_{30}$ $9_{32}$ $9_{33}$ $9_{38}$ $9_{42}$ $9_{43}$ $9_{44}$ $9_{48}$\\
$9_{46}$ & $9_{29}$ $9_{34}$ $9_{35}$ $9_{37}$ $9_{39}$ $9_{40}$ $9_{41}$ $9_{42}$ $9_{43}$ $9_{44}$ $9_{47}$ $9_{48}$ $9_{49}$\\
$9_{47}$ & $9_{34}$ $9_{40}$ $9_{42}$ $9_{43}$ $9_{44}$ $9_{46}$\\
$9_{48}$ & $9_{37}$ $9_{38}$ $9_{39}$ $9_{42}$ $9_{44}$ $9_{45}$ $9_{46}$ $9_{49}$\\
$9_{49}$ & $9_{39}$ $9_{41}$ $9_{42}$ $9_{44}$ $9_{46}$ $9_{48}$\\
\end{tabular}
\label{siblingstable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,3)$-fertility in knot types through 3 crossings}
\centering
\begin{tabular}{c|c||c|c}
$K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 3 & $3_{1}$ & 3 \\
\end{tabular}
\label{m3fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,4)$-fertility in knot types through 4 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 4 & $3_{1}$ & 4 & $4_{1}$ & 4 \\
\end{tabular}
\label{m4fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,5)$-fertility in knot types through 5 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 5 & $3_{1}$ & 5 & $4_{1}$ & 4 & $5_{1}$ & 3 & $5_{2}$ & 4 \\
\end{tabular}
\label{m5fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,6)$-fertility in knot types through 6 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 6 & $4_{1}$ & 6 & $5_{2}$ & 6 & $6_{2}$ & 5 & $3_1\#3_1$ & 3 \\
$3_{1}$ & 6 & $5_{1}$ & 5 & $6_{1}$ & 4 & $6_{3}$ & 5 & & \\
\end{tabular}
\label{m6fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,7)$-fertility in knot types through 7 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 7 & $5_{1}$ & 6 & $6_{2}$ & 6 & $7_{1}$ & 3 & $7_{4}$ & 4 & $7_{7}$ & 5 \\
$3_{1}$ & 7 & $5_{2}$ & 6 & $6_{3}$ & 6 & $7_{2}$ & 4 & $7_{5}$ & 5 & $3_1\#4_1$ & 4 \\
$4_{1}$ & 6 & $6_{1}$ & 6 & $3_1\#3_1$ & 4 & $7_{3}$ & 5 & $7_{6}$ & 6 & & \\
\end{tabular}
\label{m7fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,8)$-fertility in knot types through 8 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 8 & $6_{2}$ & 7 & $7_{4}$ & 6 & $8_{2}$ & 5 & $8_{8}$ & 6 & $8_{14}$ & 6 & $8_{20}$ & 6 \\
$3_{1}$ & 8 & $6_{3}$ & 7 & $7_{5}$ & 6 & $8_{3}$ & 4 & $8_{9}$ & 5 & $8_{15}$ & 6 & $8_{21}$ & 6 \\
$4_{1}$ & 8 & $3_1\#3_1$ & 6 & $7_{6}$ & 6 & $8_{4}$ & 5 & $8_{10}$ & 5 & $8_{16}$ & 5 & $3_1\#5_1$ & 3 \\
$5_{1}$ & 7 & $7_{1}$ & 5 & $7_{7}$ & 6 & $8_{5}$ & 5 & $8_{11}$ & 5 & $8_{17}$ & 5 & $3_1\#5_2$ & 4 \\
$5_{2}$ & 8 & $7_{2}$ & 6 & $3_1\#4_1$ & 4 & $8_{6}$ & 6 & $8_{12}$ & 6 & $8_{18}$ & 5 & $4_1\#4_1$ & 4 \\
$6_{1}$ & 6 & $7_{3}$ & 7 & $8_{1}$ & 4 & $8_{7}$ & 5 & $8_{13}$ & 6 & $8_{19}$ & 5 & & \\
\end{tabular}
\label{m8fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,9)$-fertility in knot types through 9 crossings}
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 9 & $7_{6}$ & 7 & $8_{12}$ & 6 & $9_{2}$ & 4 & $9_{16}$ & 5 & $9_{30}$ & 6 & $9_{44}$ & 6 \\
$3_{1}$ & 9 & $7_{7}$ & 7 & $8_{13}$ & 7 & $9_{3}$ & 5 & $9_{17}$ & 5 & $9_{31}$ & 6 & $9_{45}$ & 6 \\
$4_{1}$ & 8 & $3_1\#4_1$ & 6 & $8_{14}$ & 7 & $9_{4}$ & 5 & $9_{18}$ & 6 & $9_{32}$ & 6 & $9_{46}$ & 6 \\
$5_{1}$ & 8 & $8_{1}$ & 6 & $8_{15}$ & 6 & $9_{5}$ & 4 & $9_{19}$ & 6 & $9_{33}$ & 6 & $9_{47}$ & 6 \\
$5_{2}$ & 8 & $8_{2}$ & 7 & $8_{16}$ & 6 & $9_{6}$ & 5 & $9_{20}$ & 6 & $9_{34}$ & 6 & $9_{48}$ & 6 \\
$6_{1}$ & 7 & $8_{3}$ & 6 & $8_{17}$ & 6 & $9_{7}$ & 6 & $9_{21}$ & 6 & $9_{35}$ & 4 & $9_{49}$ & 6 \\
$6_{2}$ & 7 & $8_{4}$ & 7 & $8_{18}$ & 5 & $9_{8}$ & 6 & $9_{22}$ & 6 & $9_{36}$ & 6 & $3_1\#6_1$ & 4 \\
$6_{3}$ & 8 & $8_{5}$ & 6 & $8_{19}$ & 6 & $9_{9}$ & 5 & $9_{23}$ & 6 & $9_{37}$ & 5 & $3_1\#6_2$ & 5 \\
$3_1\#3_1$ & 6 & $8_{6}$ & 7 & $8_{20}$ & 6 & $9_{10}$ & 5 & $9_{24}$ & 6 & $9_{38}$ & 6 & $3_1\#6_3$ & 5 \\
$7_{1}$ & 7 & $8_{7}$ & 7 & $8_{21}$ & 6 & $9_{11}$ & 6 & $9_{25}$ & 6 & $9_{39}$ & 6 & $4_1\#5_1$ & 4 \\
$7_{2}$ & 7 & $8_{8}$ & 7 & $3_1\#5_1$ & 5 & $9_{12}$ & 6 & $9_{26}$ & 6 & $9_{40}$ & 5 & $4_1\#5_2$ & 4 \\
$7_{3}$ & 7 & $8_{9}$ & 6 & $3_1\#5_2$ & 6 & $9_{13}$ & 6 & $9_{27}$ & 7 & $9_{41}$ & 5 & $3_1\#3_1\#3_1$ & 3 \\
$7_{4}$ & 7 & $8_{10}$ & 6 & $4_1\#4_1$ & 4 & $9_{14}$ & 5 & $9_{28}$ & 6 & $9_{42}$ & 6 & & \\
$7_{5}$ & 7 & $8_{11}$ & 7 & $9_{1}$ & 3 & $9_{15}$ & 6 & $9_{29}$ & 6 & $9_{43}$ & 6 & & \\
\end{tabular}
\label{m9fertilitytable}
\end{table}
\begin{table}
\caption{Maximal $m$ for $(m,10)$-fertility in knot types through 10 crossings}
{\small
\centering
\begin{tabular}{c|c||c|c||c|c||c|c||c|c||c|c||c|c}
$K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ & $K$ & $m$ \\
\hline
$0_{1}$ & 10 & $4_1\#4_1$ & 6 & $9_{40}$ & 5 & $10_{25}$ & 7 & $10_{65}$ & 6 & $10_{105}$ & 6 & $10_{145}$ & 6 \\
$3_{1}$ & 10 & $9_{1}$ & 5 & $9_{41}$ & 6 & $10_{26}$ & 7 & $10_{66}$ & 6 & $10_{106}$ & 5 & $10_{146}$ & 7 \\
$4_{1}$ & 10 & $9_{2}$ & 6 & $9_{42}$ & 6 & $10_{27}$ & 6 & $10_{67}$ & 6 & $10_{107}$ & 6 & $10_{147}$ & 7 \\
$5_{1}$ & 9 & $9_{3}$ & 7 & $9_{43}$ & 6 & $10_{28}$ & 6 & $10_{68}$ & 6 & $10_{108}$ & 6 & $10_{148}$ & 6 \\
$5_{2}$ & 10 & $9_{4}$ & 7 & $9_{44}$ & 6 & $10_{29}$ & 6 & $10_{69}$ & 7 & $10_{109}$ & 5 & $10_{149}$ & 6 \\
$6_{1}$ & 8 & $9_{5}$ & 6 & $9_{45}$ & 6 & $10_{30}$ & 6 & $10_{70}$ & 6 & $10_{110}$ & 6 & $10_{150}$ & 6 \\
$6_{2}$ & 9 & $9_{6}$ & 7 & $9_{46}$ & 7 & $10_{31}$ & 6 & $10_{71}$ & 6 & $10_{111}$ & 6 & $10_{151}$ & 6 \\
$6_{3}$ & 8 & $9_{7}$ & 7 & $9_{47}$ & 6 & $10_{32}$ & 6 & $10_{72}$ & 6 & $10_{112}$ & 5 & $10_{152}$ & 5 \\
$3_1\#3_1$ & 8 & $9_{8}$ & 7 & $9_{48}$ & 7 & $10_{33}$ & 6 & $10_{73}$ & 6 & $10_{113}$ & 6 & $10_{153}$ & 6 \\
$7_{1}$ & 8 & $9_{9}$ & 7 & $9_{49}$ & 6 & $10_{34}$ & 6 & $10_{74}$ & 5 & $10_{114}$ & 6 & $10_{154}$ & 6 \\
$7_{2}$ & 8 & $9_{10}$ & 7 & $3_1\#6_1$ & 6 & $10_{35}$ & 6 & $10_{75}$ & 5 & $10_{115}$ & 6 & $10_{155}$ & 5 \\
$7_{3}$ & 8 & $9_{11}$ & 7 & $3_1\#6_2$ & 6 & $10_{36}$ & 6 & $10_{76}$ & 6 & $10_{116}$ & 5 & $10_{156}$ & 6 \\
$7_{4}$ & 8 & $9_{12}$ & 7 & $3_1\#6_3$ & 6 & $10_{37}$ & 6 & $10_{77}$ & 6 & $10_{117}$ & 6 & $10_{157}$ & 5 \\
$7_{5}$ & 8 & $9_{13}$ & 7 & $4_1\#5_1$ & 5 & $10_{38}$ & 6 & $10_{78}$ & 6 & $10_{118}$ & 5 & $10_{158}$ & 6 \\
$7_{6}$ & 8 & $9_{14}$ & 7 & $4_1\#5_2$ & 6 & $10_{39}$ & 7 & $10_{79}$ & 5 & $10_{119}$ & 6 & $10_{159}$ & 5 \\
$7_{7}$ & 8 & $9_{15}$ & 7 & $3_1\#3_1\#3_1$ & 4 & $10_{40}$ & 7 & $10_{80}$ & 6 & $10_{120}$ & 6 & $10_{160}$ & 6 \\
$3_1\#4_1$ & 6 & $9_{16}$ & 6 & $10_{1}$ & 4 & $10_{41}$ & 7 & $10_{81}$ & 6 & $10_{121}$ & 6 & $10_{161}$ & 7 \\
$8_{1}$ & 7 & $9_{17}$ & 7 & $10_{2}$ & 5 & $10_{42}$ & 7 & $10_{82}$ & 5 & $10_{122}$ & 6 & $10_{162}$ & 6 \\
$8_{2}$ & 7 & $9_{18}$ & 7 & $10_{3}$ & 4 & $10_{43}$ & 6 & $10_{83}$ & 6 & $10_{123}$ & 5 & $10_{163}$ & 6 \\
$8_{3}$ & 7 & $9_{19}$ & 7 & $10_{4}$ & 5 & $10_{44}$ & 7 & $10_{84}$ & 6 & $10_{124}$ & 5 & $10_{164}$ & 6 \\
$8_{4}$ & 7 & $9_{20}$ & 7 & $10_{5}$ & 5 & $10_{45}$ & 7 & $10_{85}$ & 5 & $10_{125}$ & 6 & $10_{165}$ & 6 \\
$8_{5}$ & 7 & $9_{21}$ & 7 & $10_{6}$ & 6 & $10_{46}$ & 5 & $10_{86}$ & 6 & $10_{126}$ & 6 & $3_1\#7_1$ & 3 \\
$8_{6}$ & 7 & $9_{22}$ & 6 & $10_{7}$ & 5 & $10_{47}$ & 5 & $10_{87}$ & 6 & $10_{127}$ & 6 & $3_1\#7_2$ & 4 \\
$8_{7}$ & 8 & $9_{23}$ & 7 & $10_{8}$ & 5 & $10_{48}$ & 5 & $10_{88}$ & 6 & $10_{128}$ & 6 & $3_1\#7_3$ & 5 \\
$8_{8}$ & 8 & $9_{24}$ & 7 & $10_{9}$ & 5 & $10_{49}$ & 6 & $10_{89}$ & 6 & $10_{129}$ & 6 & $3_1\#7_4$ & 4 \\
$8_{9}$ & 8 & $9_{25}$ & 6 & $10_{10}$ & 6 & $10_{50}$ & 6 & $10_{90}$ & 6 & $10_{130}$ & 6 & $3_1\#7_5$ & 5 \\
$8_{10}$ & 7 & $9_{26}$ & 7 & $10_{11}$ & 6 & $10_{51}$ & 6 & $10_{91}$ & 5 & $10_{131}$ & 6 & $3_1\#7_6$ & 6 \\
$8_{11}$ & 7 & $9_{27}$ & 7 & $10_{12}$ & 6 & $10_{52}$ & 6 & $10_{92}$ & 6 & $10_{132}$ & 6 & $3_1\#7_7$ & 5 \\
$8_{12}$ & 8 & $9_{28}$ & 6 & $10_{13}$ & 6 & $10_{53}$ & 6 & $10_{93}$ & 6 & $10_{133}$ & 6 & $4_1\#6_1$ & 4 \\
$8_{13}$ & 8 & $9_{29}$ & 6 & $10_{14}$ & 6 & $10_{54}$ & 6 & $10_{94}$ & 5 & $10_{134}$ & 6 & $4_1\#6_2$ & 5 \\
$8_{14}$ & 8 & $9_{30}$ & 6 & $10_{15}$ & 6 & $10_{55}$ & 6 & $10_{95}$ & 6 & $10_{135}$ & 6 & $4_1\#6_3$ & 5 \\
$8_{15}$ & 8 & $9_{31}$ & 7 & $10_{16}$ & 5 & $10_{56}$ & 6 & $10_{96}$ & 6 & $10_{136}$ & 6 & $5_1\#5_1$ & 3 \\
$8_{16}$ & 7 & $9_{32}$ & 6 & $10_{17}$ & 5 & $10_{57}$ & 6 & $10_{97}$ & 6 & $10_{137}$ & 6 & $5_1\#5_2$ & 5 \\
$8_{17}$ & 7 & $9_{33}$ & 6 & $10_{18}$ & 6 & $10_{58}$ & 6 & $10_{98}$ & 6 & $10_{138}$ & 6 & $5_2\#5_2$ & 4 \\
$8_{18}$ & 7 & $9_{34}$ & 6 & $10_{19}$ & 6 & $10_{59}$ & 6 & $10_{99}$ & 5 & $10_{139}$ & 5 & $3_1\#3_1\#4_1$ & 4 \\
$8_{19}$ & 7 & $9_{35}$ & 6 & $10_{20}$ & 6 & $10_{60}$ & 6 & $10_{100}$ & 5 & $10_{140}$ & 6 & & \\
$8_{20}$ & 8 & $9_{36}$ & 6 & $10_{21}$ & 5 & $10_{61}$ & 5 & $10_{101}$ & 6 & $10_{141}$ & 6 & & \\
$8_{21}$ & 8 & $9_{37}$ & 7 & $10_{22}$ & 6 & $10_{62}$ & 5 & $10_{102}$ & 6 & $10_{142}$ & 6 & & \\
$3_1\#5_1$ & 6 & $9_{38}$ & 6 & $10_{23}$ & 7 & $10_{63}$ & 6 & $10_{103}$ & 6 & $10_{143}$ & 6 & & \\
$3_1\#5_2$ & 6 & $9_{39}$ & 6 & $10_{24}$ & 6 & $10_{64}$ & 5 & $10_{104}$ & 5 & $10_{144}$ & 6 & & \\
\end{tabular}
\label{m10fertilitytable}
}
\end{table}
\end{document}
\begin{document}
\title{space{-1cm}
\begin{abstract}
Fix a set of primes $\pi$. A finite group is said to satisfy $C_\pi$ or, in other words, to be a
$C_\pi$-group, if it possesses exactly one class of conjugate $\pi$-Hall subgroups. The pronormality of $\pi$-Hall
subgroups in~$C_\pi$-groups is proven, or, equivalently, we prove that $C_\pi$ is inherited by
overgroups of $\pi$-Hall subgroups. Thus an affirmative solution to Problem 17.44(a) from the ``Kourovka notebook'' is
obtained. We also provide an example showing that Hall subgroups of finite groups are not pronormal in general.
\end{abstract}
\section*{Introduction}
We use the term ``group'' to mean ``finite group''. The notation mod CFSG means that the result is proven by using the classification of finite simple groups.
Throughout, a set of primes is denoted by $\pi$, while its complement is denoted by~$\pi'$.
A subgroup $H$ of $G$ is called a {\it $\pi$-Hall subgroup}, if it is a $\pi$-group
(i.\,e.\ every prime divisor of its order lies in $\pi$), while its index is not divisible by primes from
$\pi$. The notion of a $\pi$-Hall subgroup generalizes the notion of a Sylow $p$-subgroup, and the two notions coincide if $\pi=\{p\}$. The set of $\pi$-Hall subgroups of $G$ is denoted by $\operatorname{Hall}_\pi(G)$.
A subgroup is said to be a {\em Hall subgroup}, if it is a $\pi$-Hall subgroup for some set of primes $\pi$.
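For example, take $\pi=\{2,3\}$ and $G=\operatorname{Alt}_5$, so that $|G|=60=2^2\cdot 3\cdot 5$: the point stabilizer $\operatorname{Alt}_4\le G$ has order $12=2^2\cdot 3$ and index $5$, and hence is a $\pi$-Hall subgroup of~$G$.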
According to [1] we say that $G$ {\it satisfies $($belongs to the class$)$} $E_\pi$, if $G$ possesses a
$\pi$-Hall subgroup. If $G\in E_\pi$ and any two of its $\pi$-Hall subgroups are conjugate, then we say that
$G$ {\it satisfies $C_\pi$} (and we write ${G\in C_\pi}$). If ${G\in C_\pi}$ and each $\pi$-subgroup of $G$ is
included in a $\pi$-Hall subgroup, then we say that $G$ {\it satisfies} $D_\pi$ (and we write~${G\in D_\pi}$).
Properties $E_\pi$, $C_\pi$, and $D_\pi$ generalize well-known properties of Sylow subgroups to the case of
$\pi$-Hall subgroups, but, in contrast with the Sylow properties, an arbitrary group may fail to satisfy $E_\pi$,
$C_\pi$, or $D_\pi$. Groups satisfying these properties are called $E_\pi$-, $C_\pi$-, and {\it $D_\pi$-groups},
respectively.
In the theory of the properties $E_\pi$, $C_\pi$, and $D_\pi$, questions about the inheritance of these properties by subgroups,
homomorphic images, and extensions are very important. These properties are studied in [1--19] (for details see
[2,\,3]). In particular, $E_\pi$ and (mod CFSG) $D_\pi$ are known to be inherited by normal subgroups, while $C_\pi$ is
not in general. However, even $E_\pi$ and $D_\pi$ are not inherited by arbitrary subgroups. Consider the following
example.
\begin{Ex}
According to [4, Theorem 3], $G=\operatorname{SL}_2(16)\simeq A_1(16)$ satisfies $D_\pi$ with
$\pi=\{3,5\}$, and the subgroup $M=\operatorname{SL}_2(4)\simeq \operatorname{Alt}_5$ of order $60=2^2\cdot 3\cdot 5$ is
included in $G$ in the natural way. This subgroup does not even satisfy $E_\pi$, since it does not contain
elements (and, therefore, subgroups) of order~$15$.
\end{Ex}
A natural problem arises: which subgroups, besides normal subgroups, inherit the properties $E_\pi$, $C_\pi$, and~$D_\pi$?
Clearly, a $\pi$-Hall subgroup of an $E_\pi$-group is a $\pi$-Hall subgroup of each subgroup containing it, i.~e.
{\sl $E_\pi$ is inherited by overgroups of $\pi$-Hall subgroups.}
We formulate the same statements for $C_\pi$ and $D_\pi$ as conjectures.
\begin{Conj} {\rm[20, Problem 17.44(a); 21, Problem 2; 5, Conjecture 3]}
If $G\in C_\pi$ and $H\in\operatorname{Hall}_\pi(G)$, then $M\in C_\pi$ for every subgroup $M$ such that
$H\le M\le G$.
\end{Conj}
\begin{Conj} {\rm [20, Problem 17.44(b); 21, Problem 3]}
If $G\in D_\pi$ and $H\in\operatorname{Hall}_\pi(G)$, then
$M\in D_\pi$ for every subgroup $M$ such that~${H\le M\le G}$.
\end{Conj}
The following theorem from~[5] allows one to obtain a criterion for a finite group to satisfy $C_\pi$ in terms of an
arbitrary normal series.
\begin{Theo} {\rm([5, Theorem~1] mod CFSG)}
If $G\in C_\pi$, $H\in\operatorname{Hall}_\pi(G)$, and $A\trianglelefteq G$, then~$HA\in C_\pi$.
\end{Theo}
This theorem provides a partial affirmative answer to Conjecture~1\footnote{Moreover, it was this theorem that led
the authors to formulate the conjecture in the first place.}.
Now we give an equivalent form of Conjecture~1.
According to the definition of P.~Hall, a subgroup $H$ of $G$ is called {\it pronormal}, if, for every
$g\in G$, the subgroups $H$ and $H^g$ are conjugate in $\langle H, H^g\rangle$. Classical examples of pronormal subgroups
are:
$\bullet$ normal subgroups;
$\bullet$ maximal subgroups;
$\bullet$ Sylow subgroups;
$\bullet$ Hall subgroups of solvable groups.
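To make the definition concrete, here is a minimal brute-force sketch in Python that tests pronormality for a subgroup of a small permutation group, with permutations stored as tuples; it is intended only as an illustration on toy examples and is unrelated to the methods of this paper, which rely on the classification.
\begin{verbatim}
from itertools import permutations

def pmul(a, b):                      # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def pinv(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def pconj(h, g):                     # h^g = g^{-1} h g
    return pmul(pmul(pinv(g), h), g)

def closure(gens, identity):
    """Subgroup generated by `gens` (finite, so products suffice)."""
    elems = {identity} | set(gens)
    changed = True
    while changed:
        changed = False
        for a in list(elems):
            for b in list(elems):
                c = pmul(a, b)
                if c not in elems:
                    elems.add(c)
                    changed = True
    return elems

def is_pronormal(H_gens, G_elems, identity):
    """H is pronormal in G if H and H^g are conjugate in <H, H^g>
    for every g in G (brute force, small groups only)."""
    H = closure(H_gens, identity)
    for g in G_elems:
        Hg = {pconj(h, g) for h in H}
        J = closure(H | Hg, identity)
        if not any({pconj(h, x) for h in H} == Hg for x in J):
            return False
    return True

# Example: a Sylow 2-subgroup of Sym_3 is pronormal.
e = (0, 1, 2)
G = list(permutations(range(3)))
print(is_pronormal([(1, 0, 2)], G, e))   # True
\end{verbatim}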
It is easy to see that Conjecture 1 is equivalent to the following.
\begin{Conj}
Hall subgroups of~$C_\pi$-groups are pronormal.
\end{Conj}
We can consider a stronger statement.
\begin{Conj}
Hall subgroups of every group are pronormal.
\end{Conj}
In~[22] Conjecture 4 is proven (mod CFSG) in a particular case; namely, the following conjecture is confirmed.
\begin{Conj} {\rm [20, Problem 17.45(a)]}
Hall subgroups of a finite simple group are pronormal.
\end{Conj}
Conjecture 2 also can be reformulated in the spirit of Conjecture 3, if one introduces the notion of strongly
pronormal subgroup.
A subgroup $H$ of $G$ is called {\it strongly pronormal}, if, for each $K\le H$ and every $g\in G$,
$K^g$ is conjugate to a subgroup of $H$ (but not necessarily to $K$) by an element of $\langle H, K^g\rangle$.
Clearly, every strongly pronormal subgroup is pronormal. All classical examples of pronormal subgroups (normal subgroups,
maximal subgroups, Sylow subgroups, and Hall subgroups of solvable groups) turn out to be strongly pronormal.
The following problem arises naturally: is a pronormal subgroup always strongly pronormal? The following example
provides a negative answer to this problem.
\begin{Ex}
Let $m$ and $n$ be natural numbers and $n/2<m< n-1$. In the symmetric group $\operatorname{Sym}_n$ the pointwise
stabilizer of an $(n-m)$-element set (a subgroup~$\operatorname{Sym}_m$) is pronormal, but is not a strongly pronormal
subgroup.
\end{Ex}
In terms of strong pronormality Conjecture 2 is equivalent to the following.
\begin{Conj}
$\pi$-Hall subgroups of~$D_\pi$-groups are strongly pronormal.
\end{Conj}
Since a finite group satisfies $D_\pi$ if and only if each of its composition factors satisfies this property [6,
Theorem~7.7 (mod CFSG)], a counterexample of minimal order to the equivalent Conjectures 2 and 6 must be a simple
$D_\pi$-group. So the conjectures can be derived from the following conjecture.
\begin{Conj} {\rm [20, Problem 17.45(b)]}
Hall subgroups of a finite simple group are strongly pronormal.
\end{Conj}
Finally, we can consider a conjecture that strengthens all of Conjectures~1--7.
\begin{Conj}
Hall subgroups of finite groups are strongly pronormal.
\end{Conj}
In this paper we prove, by using Theorem 1 (and therefore the classification of finite simple groups), the implication
$5\Rightarrow 1$ and the equivalence $4\Leftrightarrow 8$ among Conjectures 1--8 formulated above.
Thus Conjectures 1--8 are connected to each other by the following logical diagram:
$$\xymatrix{
1\ar@{<=>}[r]&3\ar@{<=}[rd]&&&6\ar@{<=}[ld]\ar@{<=>}[r]&2\\
&&4\ar@{=>}[ld]\ar@{<=>}[r]&8\ar@{=>}[rd]&&\\
&5\ar@{=>}[luu]^{\text{mod (CFSG)}}&&&7\ar@{=>}[ruu]_{\text{mod (CFSG)}}&}
$$
We first clarify the situation with the conjectures from the left ``wing of the butterfly''. First of all, as we have already mentioned,
Conjecture 5 is true (mod CFSG) [22, Theorem~1] and, therefore, both Conjectures 1 and 3 are true (mod CFSG). Thus
in this paper the following theorem is proven.
\begin{Theo} {\rm (mod CFSG)}
For every set of primes $\pi$ the following hold:
$(1)$~$\pi$-Hall subgroups of $C_\pi$-groups are pronormal;
$(2)$~$C_\pi$ is inherited by overgroups of $\pi$-Hall subgroups.
\end{Theo}
Now we turn to Conjecture 4. We say that a {\it $\pi$-conjecture} holds in $G$ for a set of primes $\pi$, if all
$\pi$-Hall subgroups of $G$ are pronormal. Thus Conjecture 4 asserts that the $\pi$-conjecture holds in all finite groups
for all sets of primes~$\pi$. The $\pi$-conjecture holds in many special cases:
$\bullet$ for all simple groups [22, Theorem 1],
$\bullet$ for all groups not in $E_\pi$ (trivial),
$\bullet$ for all groups satisfying $C_\pi$ (by Theorem~2).
Thus we need to check that the $\pi$-conjecture holds in groups from
${E_\pi\setminus C_\pi}$. Notice that there exist sets $\pi$ such that
$E_\pi\setminus C_\pi=\varnothing$. Evident examples are the set of all primes, the empty set, and a one-element set.
Every set of odd primes also satisfies this property (mod CFSG) [23, Theorem A]. For such sets the
$\pi$-conjecture holds in all groups. However, if there is a ``gap'' between $E_\pi$ and $C_\pi$, then the
$\pi$-conjecture fails for some group.
\begin{Theo}
Let a set of primes $\pi$ be such that $E_\pi\setminus C_\pi\ne\varnothing$. Then there exist $G\in E_\pi\setminus
C_\pi$ and $H\in\operatorname{Hall}_\pi(G)$ such that $H$ is not pronormal in~$G$.
\end{Theo}
It follows from the proof of Theorem 3 that we can take a regular wreath product of an arbitrary $X\in E_\pi\setminus
C_\pi$ and a cyclic group ${\Bbb Z}_p$ of order $p\in\pi'$ (here $\pi'\ne\varnothing$, since otherwise $E_\pi= C_\pi$)
as a group $G$ from the theorem.
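For example, for $\pi=\{2,3\}$ one can take $X=\operatorname{PSL}_2(7)$: this group possesses $\{2,3\}$-Hall subgroups (of index~$7$, isomorphic to $\operatorname{Sym}_4$) which form two conjugacy classes, so $\operatorname{PSL}_2(7)\in E_\pi\setminus C_\pi$.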
Combining Theorems 2 and 3 we obtain
\begin{Cor} {\rm (mod CFSG)}
For every set $\pi$ of primes the following statements are equivalent:
$(1)$~$\pi$-Hall subgroups in all finite groups are pronormal;
$(2)$~$E_\pi=C_\pi$.
\end{Cor}
It would be interesting to find all sets $\pi$ such that $E_\pi=C_\pi$
(cf.~[2, Problem~7.20; 21, Problem~6]).
Theorem 3 does not just refute Conjecture 4, but also refutes the stronger Conjecture~8. Thus the equivalence
$4\Leftrightarrow 8$ is established: the implication $8\Rightarrow 4$ is trivial, while $4\Rightarrow 8$ holds since its premise is false.
Probably, the symmetry of the ``logical butterfly'' has a deeper reason: the authors do not know any counterexample to
the following conjecture.
\begin{Conj}
In every group pronormal Hall subgroups are strongly pronormal.
\end{Conj}
If the conjecture is true, then each statement from the right ``wing'' will be true if and only if the symmetric
statement from the left ``wing'' is true.
Partial results on Conjectures~2 and~6 of the right ``wing'' are obtained in [24], where the conjectures are confirmed
for groups whose nonabelian composition factors are isomorphic to alternating groups, sporadic groups, or groups of Lie type with characteristic
in $\pi$. We discuss another possible way to solve the conjectures. As has already been mentioned, the counterexample
$G$ of minimal order to any of the conjectures should be a simple $D_\pi$-group. It is also evident that
$G$ is not a $\pi$-group and the order of $G$ is divisible by at least two primes from $\pi$
(otherwise $\pi$-Hall subgroups of $G$ are strongly pronormal). The classification of simple
$D_\pi$-groups (mod CFSG) [4, Theorem~3] implies that either $2\not\in\pi$, or $3\not\in\pi$, and
in view of [23, Theorem B;
6, Lemma~5.1, Theorem~5.2]
$\pi$-Hall subgroups of $G$ have a Sylow tower. We recall the definition.
Let $H$ be a group and $\pi(H)=\{p_1,\dots, p_n\}$. According to
[1] we say that
$H$ {\it has a Sylow tower of complexion} $(p_1,\dots, p_n)$, if
$H$ has a normal series
$$
H=H_0>H_1>\dots>H_n=1
$$
such that each section $H_{i-1}/H_i$ is isomorphic to a Sylow $p_i$-subgroup of~$H$.
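For example, the symmetric group $\operatorname{Sym}_3$ has a Sylow tower of complexion $(2,3)$: in the normal series $\operatorname{Sym}_3>A_3>1$ the section $\operatorname{Sym}_3/A_3$ is isomorphic to a Sylow $2$-subgroup of $\operatorname{Sym}_3$, and $A_3$ is a Sylow $3$-subgroup.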
Hall subgroups having a Sylow tower of the same complexion are known to be conjugate [1, Theorem~A1]. In particular, if a
Hall subgroup has a Sylow tower, then it is pronormal.
Thus Conjecture 6 can be checked for simple $D_\pi$-groups, and, therefore, for all groups, if we can prove the
following statement.
\begin{Conj} {\rm[20, Problem 17.45(c)]}
Hall subgroups having a Sylow tower are strongly pronormal.
\end{Conj}
\section{Notations, agreements, and preliminary results}
If $H$ is a pronormal subgroup of $G$, then we write $H\operatorname{prn} G$.
\begin{Lemma}
Let $G$ be a group and $A$ be its normal subgroup.
If $G\in E_\pi$ and $H\in \operatorname{Hall}_\pi(G)$,
then $A,G/A\in E_\pi$ with ${H\cap A\in \operatorname{Hall}_\pi(A)}$,
${HA/A\in \operatorname{Hall}_\pi(G/A)}$.
\end{Lemma}
{\scshape Proof.} See [1, Lemma~1].
{$\Box$}
A finite group possessing a (sub)normal series with factors being either $\pi$- or $\pi'$-groups is called
{\it $\pi$-separable}. Notice that a subgroup of a $\pi$-separable group is $\pi$-separable.
\begin{Lemma}
Each $\pi$-separable group satisfies~$D_\pi$.
\end{Lemma}
{\scshape Proof.} See [7;\,1, Corollary~D5.2].
{$\Box$}
From Lemma 6 we derive the following lemma.
\begin{Lemma}
$\pi$-Hall subgroups of a $\pi$-separable group are strongly pronormal.
\end{Lemma}
{\scshape Proof.}
Assume that $G$ is $\pi$-separable, and $H\in\operatorname{Hall}_\pi(G)$. Let $K\le H$ and $g\in G$. Consider
$M=\langle H, K^g\rangle$. It is $\pi$-separable as a subgroup of a $\pi$-separable group, and by Lemma 6 we have
$M\in D_\pi$. It follows that the $\pi$-subgroup $K^g$ of $M$ is conjugate in $M=\langle H, K^g\rangle$ with a subgroup
of $H\in\operatorname{Hall}_\pi(M)$. Hence the subgroup $H$ of $G$ is strongly pronormal.
{$\Box$}
We need the following statement from [5] together with Theorem 1.
\begin{Lemma} {\rm (mod CFSG)}
If $G\in C_\pi$ and $A\trianglelefteq G$, then~${G/A\in C_\pi}$.
\end{Lemma}
{\scshape Proof.} See [5, Lemma~9].
{$\Box$}
In order to prove Theorem 2 we also use the main result from [22], which confirms Conjecture 5 and is stated in the
following lemma.
\begin{Lemma} {\rm(mod CFSG)}
Hall subgroups of finite simple groups are pronormal.
\end{Lemma}
{\scshape Proof.} See [22, Theorem~1].
{$\Box$}
\begin{Lemma}
Let $H$ be a subgroup of $G$ and
$g\in G$,
$y\in \langle H, H^g\rangle$. If $H^y$ and $H^g$ are conjugate in $\langle H^y, H^g\rangle$, then
$H$ and $H^g$ are conjugate in~$\langle H, H^g\rangle$.
\end{Lemma}
{\scshape Proof.}
Let $z\in \langle H^y, H^g\rangle$ and $H^{yz}=H^g$. Then $z\in \langle H, H^g\rangle$, since
$\langle H^y, H^g\rangle\le\langle H, H^g\rangle $. So
$x=yz\in\langle H, H^g\rangle$ and~${H^x=H^g}$.
{$\Box$}
\begin{Lemma}
Let $\overlineerline{\phantom{g}}:G\rightarrow G_1$ be a homomorphism,
$H\le G$. If $H\operatorname{prn} G$, then
$\overlineerline{H}\operatorname{prn} \overlineerline{G}$.
\end{Lemma}
{\scshape Proof.} Clear.
{$\Box$}
\begin{Lemma}
Let $G$ be a group and $G_1,\dots, G_n$ be normal subgroups of $G$ such that $[G_i,G_j]=1$ for $i\ne
j$ and $G=G_1\cdots G_n$. Assume that for every $i=1,\dots, n$ a pronormal subgroup $H_i$ of $G_i$ is chosen, and
$H=\langle H_1,\dots, H_n\rangle$. Then $H\operatorname{prn} G$.
\end{Lemma}
{\scshape Proof.}
Choose an arbitrary element $g\in G$. Then $g=g_1\dots g_n$ for some $g_1\in G_1, \dots, g_n\in G_n$.
Since, by the condition, for every $i=1,\dots, n$, the subgroup $H_i$ is pronormal in $G_i$, there exists
$x_i\in \langle H_i,H_i^{g_i}\rangle$ such that $H_i^{x_i}=H_i^{g_i}$. In view of the condition
$[H_i,H_j]=1$ for $i\ne j$, for all $i=1,\dots ,n$ we have $H_i^g=H_i^{g_i}$. By the same arguments,
$H_i^{x_i}=H_i^{x}$, where $x=x_1\dots x_n$. It is clear that
$$
x\in \bigl\langle H_i,H_i^{g_i}\mid i=1,\dots, n\bigr\rangle
=\bigl\langle H_i,H_i^{g}\mid i=1,\dots,
n\bigr\rangle=\langle H, H^g\rangle.
$$
Moreover,
\begin{multline}
H^g=\bigl\langle H_i^{g}\mid i=1,\dots, n\bigr\rangle
=\bigl\langle H_i^{g_i}\mid i=1,\dots, n\bigr\rangle
\\
=\bigl\langle H_i^{x_i}\mid i=1,\dots, n\bigr\rangle
=\bigl\langle H_i^{x}\mid i=1,\dots, n\bigr\rangle=H^x.
\end{multline}
{$\Box$}
\begin{Lemma}
Let $G$ be a group, $H\in\operatorname{Hall}_\pi(G)$ for a set $\pi$ of primes,
$A\trianglelefteq G$ and $G=HA$. If $H\cap A\operatorname{prn} A$, then $H\operatorname{prn} G$.
\end{Lemma}
{\scshape Proof.}
Let $H\cap A\operatorname{prn} A$. Choose an arbitrary $g\in G$ and show that $H^x=H^g$ for some $x\in\langle H,
H^g\rangle$. Since $G=HA$, there exist $h\in H$ and $a\in A$ such that $g=ha$. Since $H\cap
A\operatorname{prn} A$, there exists $y\in \langle H\cap A, H^a\cap A\rangle$ such that $H^y\cap A=H^a\cap A$. Taking
into consideration Lemma~10, in view of
$$
y\in \langle H\cap A, H^a\cap A\rangle\le\langle H, H^a\rangle
=\langle H, H^{ha}\rangle=\langle H, H^g\rangle,
$$
we may assume that $H=H^y$ and, in particular, $H\cap A=H^a\cap A=H^g\cap A$. Now $H$, $H^g$ and $g$ lie in
$N_G(H\cap A)$. Since $G=HA$, we have $G=AN_G(H\cap A)$. Notice that
$$
N_G(H\cap A)/N_A(H\cap A)\simeq AN_G(H\cap A)/A=G/A
$$
is a $\pi$-group.
Consider a normal series
$$
N_G(H\cap A)\trianglerighteq N_A(H\cap A)\trianglerighteq H\cap A\trianglerighteq 1
$$
of $N_G(H\cap A)$. Each of its factors is either a $\pi$- or a $\pi'$-group, so
$N_G(H\cap A)$ is $\pi$-separable. By Lemma 7 we have $H\operatorname{prn} N_G(H\cap A)$. Thus
$H$ and $H^g$ are conjugate in~${\langle H,H^g\rangle}$.
{$\Box$}
\section{Proof of Theorem~2}
We prove the equivalent statements (1) and (2) of the theorem simultaneously. Assume that Theorem~2 is not true and
$G\in C_\pi$ is a group of minimal order possessing a nonpronormal $\pi$-Hall subgroup $H$. Then there exists
$g\in G$ such that $M=\langle H, H^g\rangle$ does not satisfy $C_\pi$.
According to Lemma 9, $G$ is not simple. Let $A$ be a minimal normal subgroup of $G$. In view of Lemma 8 and the choice
of $G$ we have $HA/A\operatorname{prn} G/A$ and so
$H^gA=H^yA$ for some $y\in M=\langle H, H^g\rangle$. By Lemma 10 we may assume that $H=H^y$ and
$HA=H^gA$.
Now $HA\in C_\pi$, by Theorem 1, and, if $HA<G$, then the minimality of the order of $G$ implies
$H\operatorname{prn} HA$, and since $H^g\le HA$, we obtain that $H$ and $H^g$ are conjugate in $M$
(notice that $H$ and $H^g$ are conjugate in $HA$, since
$HA\in C_\pi$).
Therefore, $G=HA$.
Being a minimal normal subgroup, $A$ is a direct product of isomorphic simple groups:
$$
A=S_1\times\dots\times S_n.
$$
Notice that all groups $S_i$ are nonabelian (otherwise $A$ would be either a $\pi$- or a $\pi'$-group, $G=HA$ would be
$\pi$-separable, and its $\pi$-Hall subgroups would be pronormal according to Lemma 7). Since all subgroups $S_i$ are
subnormal in $G$, by Lemma~5 we have $H\cap S_i\in\operatorname{Hall}_\pi(S_i)$,
$i=1,\dots,n$. Now $H\cap S_i\operatorname{prn} S_i$ for all $i$ by Lemma 9, and
$$
H\cap A=\langle H\cap S_1,\dots, H\cap S_n\rangle\operatorname{prn} A
$$
by Lemma 12. Finally, applying Lemma 13, we conclude that $H\operatorname{prn} G$. The theorem is proven.
{$\Box$}
\section{Proof of Theorem 3}
Since $E_\pi\ne C_\pi$, $\pi$ is not equal to the set of all primes and $\pi'\ne\varnothing$. Let $p\in\pi'$.
Assume also that $X\in E_\pi\setminus C_\pi$. Then $X$ possesses two nonconjugate $\pi$-Hall subgroups $U$ and~$V$.
Consider the direct product $$
Y=\underbrace{X\times X\times\dots\times X\times X}\limits_{p\text{ times}}
$$
of $p$ isomorphic copies of $X$. The map $\tau:Y\rightarrow Y$, given by
$$
(x_1,x_2,\dots,x_{p-1}, x_p)\mapsto(x_2,x_3,\dots,x_{p}, x_1),
\quad x_1,x_2,\dots,x_{p-1},x_p\in X,
$$
is an automorphism of order $p$ of $Y$. Consider the natural split extension $G$ of $Y$ by
$\langle \tau\rangle$, which is isomorphic to the regular wreath product~${X\wr\Bbb{Z}_p}$.
Since $Y$ is a normal subgroup of $G$ and the index $|G:Y|=p$ is not divisible by primes from $\pi$, we have
$\operatorname{Hall}_\pi(G)=\operatorname{Hall}_\pi(Y)$. In $Y$ define subgroups
$$
H=V\times \underbrace{U\times\dots\times U\times U}\limits_{p-1 \text{ times}},
\quad
K=\underbrace{U\times U\times\dots\times U}\limits_{p-1 \text{ times}}\times V
$$
in the natural way. Clearly, $H,K\in \operatorname{Hall}_\pi(Y)=\operatorname{Hall}_\pi(G)$.
Since $U$ and $V$ are not conjugate in $X$, the subgroups $H$ and $K$ are not conjugate in $Y$ and, therefore, are not
conjugate in $\langle H,K\rangle$. At the same time, by the definition of $\tau$ we have $H^\tau=K$,
so $H$ is not a pronormal subgroup of~$G$. The theorem is proven.
{$\Box$}
In view of the proof of Theorem 3, note that the authors do not know any counterexample to the following statement.
\begin{Conj}
Hall subgroups are pronormal in their normal closure.
\end{Conj}
\end{document} |
\begin{document}
\newcommand{\bra}[1] {\langle #1|}
\newcommand{\ket}[1] {| #1\rangle}
\newcommand{\tr}[1] {{\rm Tr}\left[ #1 \right]}
\newcommand{\av}[1] {\left\langle #1 \right\rangle}
\newcommand{\proj}[1]{\ket{#1}\bra{#1}}
\title{Role of Particle Entanglement in the Violation of Bell Inequalities}
\author{T. Wasak$^1$, A. Smerzi$^2$ and J. Chwede\'nczuk$^1$}
\affiliation{$^1$Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL--02--093 Warszawa, Poland\\
$^2$QSTAR, INO-CNR and LENS, Largo Enrico Fermi 2, 50125 Firenze, Italy}
\begin{abstract}
Entanglement between two separate systems is a necessary resource to violate a Bell inequality in a test of local realism.
We demonstrate that to overcome the Bell bound, this correlation must be accompanied by the entanglement
between the constituent particles. This happens whenever a super-selection rule imposes a constraint on feasible local operations.
As we show in an example, the necessary particle entanglement might solely result from their indistinguishability.
Our result reveals a fundamental relation between the non-locality and the particle entanglement.
\end{abstract}
\maketitle
The ``spooky action at the distance'' stands out among the most striking consequences of quantum mechanics \cite{epr}.
This term was coined by Albert Einstein to
underline how counterintuitive it is that a seemingly local manipulation on one part of a system immediately
affects its other distant part without any transfer of physical information.
Such an effect contradicts the postulates of the ``local realism'':
Two quantum spin-$\frac12$ particles,
if prepared in an entangled state and sent into distant regions $A$ and $B$, cannot be treated as individual objects up until the measurements are made.
Operations performed in $A$ and $B$, though local in space, act globally on the system.
The non-locality of quantum mechanics can be quantified by a series of inequalities---first considered by Bell in \cite{bell}---
for the correlations between the outcomes of local measurements
\cite{bell_rmp,chsh,test1,test2,test3,test4,test5,test6,test7,test8,test9,test10,test11,test12,ent_rmp,banaszek1}.
While the violation of the Bell inequalities was first observed decades ago \cite{test2,test3,test4},
only recently a loophole-free deviation from local realism has been demonstrated experimentally \cite{loophole, loophole2}.
Not all $A$-$B$ entangled states violate a known Bell inequality \cite{werner},
but it has long been known that all entangled {\it pure} states do violate a Bell inequality \cite{gisin}.
For illustration, consider a pure state $\ket\psi$ shared between $A$ and $B$, which decomposed into the localized states reads
\begin{equation}\label{sch}
\ket{\psi}=\sum_ic_i\ket{\phi_i}_A\otimes\ket{\chi_i}_B.
\end{equation}
If the state is $A$-$B$ entangled---which happens when at least two coefficients of this expansion, say $c_i$ and $c_{i'}$, are non-zero---
a Bell inequality \cite{gisin} will be violated, by
locally coupling $\ket{\phi_i}_A$ with $\ket{\phi_{i'}}_A$ and $\ket{\chi_i}_B$ with $\ket{\chi_{i'}}_B$. These couplings
probe the quantum coherence between $\ket{\phi_i}_A\otimes\ket{\chi_i}_B$ and $\ket{\phi_{i'}}_A\otimes\ket{\chi_{i'}}_B$.
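For concreteness, the following short script (an illustrative numerical check added here; the measurement settings are the standard CHSH choices and are assumed rather than taken from the text) verifies that a maximally entangled two-qubit state of the form (\ref{sch}) yields the value $2\sqrt2>2$ for the CHSH combination of correlations, i.e., it violates a Bell inequality:
\begin{verbatim}
import numpy as np

# Pauli operators used as local measurement settings
Z = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [1., 0.]])

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)

def corr(A, B):
    # correlation <A (x) B> in the state |Phi+>
    return phi @ np.kron(A, B) @ phi

A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(S)  # 2.828... > 2, beyond the local-realism bound
\end{verbatim}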
However, sometimes local operations and/or measurements are prohibited by some superselection rule (SSR). The SSR
is a restriction imposed on quantum mechanics forbidding coherences between eigenstates of certain observables \cite{ssr_wick,ssr2}.
For the purpose of this manuscript, the SSR can be formulated as follows:
local operations/measurements cannot create/detect coherences between states with different numbers of particles of a given type.
Here, a particle is understood as a discrete object, indistinguishable from others of the same kind, and carrying a set of fundamental quantum numbers, such as the
charge or the baryon and lepton numbers \cite{ssr_ent1}.
To illustrate the impact of the SSR on the feasible local operations, consider
two states localized in $A$: one which contains a single sodium 23 atom, denoted by $\ket{{\rm ^{23}Na}}_A$ and the other with a rubidium 87 atom, denoted by $\ket{{\rm ^{87}Rb}}_A$.
Although these states have the same number of atoms, the atoms are different.
Any operation or measurement coupling these states would not preserve the number of atoms of a given kind or---from another perspective---would not conserve the number of protons, electrons and neutrons.
Therefore, such coupling is forbidden by the SSR \cite{ssr_wick,ssr2}.
Another known example is a single particle coherently distributed
among $A$ and $B$ \cite{enk}, namely
\begin{equation}\label{pure}
\ket\psi=\frac1{\sqrt2}\Big(\ket1_A\otimes\ket0_B+\ket0_A\otimes\ket1_B\Big).
\end{equation}
The SSR formulated above prohibits the creation or the detection of a superposition
of the vacuum $\ket0_A$ with the state containing one particle $\ket1_A$,
thus the local operations
cannot create/detect quantum coherences between the
two components of $\ket\psi$.
From the point of view of physically realizable local operations one can effectively replace the pure state (\ref{pure}) with an incoherent mixture
\begin{eqnarray}
\ket\psi\bra\psi\rightarrow\hat\varrho_{\rm eff}=\frac1{2}&&\Big(\ket1\bra1_A\otimes\ket0\bra0_B\nonumber\\
&&+\ket0\bra0_A\otimes\ket1\bra1_B\Big).\label{replace}
\end{eqnarray}
Although the state (\ref{pure}) is $A$-$B$ entangled,
due to the SSR the resulting $\hat\varrho_{\rm eff}$ is $A$-$B$ separable (i.e., non-entangled) and as such does not violate any Bell inequality \cite{ssr_ent1,ssr_ent2,ssr_ent3}.
Note that for photons, to which the SSR does not apply, the local coupling of
$\ket0_A$ with $\ket1_A$ is allowed and indeed the state (\ref{pure}) violates a Bell inequality \cite{loophole, loophole2,enk}.
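The replacement (\ref{replace}) can also be illustrated numerically. The minimal script below (a sketch added here, under the assumption stated above that SSR-allowed local observables are diagonal in the local particle-number basis) confirms that such observables cannot distinguish the pure state (\ref{pure}) from the mixture (\ref{replace}):
\begin{verbatim}
import numpy as np

ket0 = np.array([1., 0.])   # zero particles in a region
ket1 = np.array([0., 1.])   # one particle in a region

# Pure state: a single particle coherently shared between A and B
psi = (np.kron(ket1, ket0) + np.kron(ket0, ket1)) / np.sqrt(2)
rho_pure = np.outer(psi, psi)

# Effective incoherent mixture of the two localizations
P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
rho_eff = 0.5 * (np.kron(P1, P0) + np.kron(P0, P1))

rng = np.random.default_rng(0)
for _ in range(5):
    OA = np.diag(rng.normal(size=2))  # SSR-allowed: diagonal in number basis
    OB = np.diag(rng.normal(size=2))
    O = np.kron(OA, OB)
    # identical expectation values for the pure state and the mixture
    print(np.trace(rho_pure @ O), np.trace(rho_eff @ O))
\end{verbatim}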
Inspired by this example we formulate and prove a general theorem:
the restriction imposed on the local operations by the SSR casts all particle-separable states
into effectively $A$-$B$ separable ones. In other words, in the presence of the SSR not only the $A$-$B$ entanglement
but also the entanglement of the particles shared by $A$ and $B$ is necessary for the violation of any Bell inequality.
We demonstrate that this latter resource might originate solely from the particle indistinguishability \cite{identical_plenio}.
To set the stage and proceed with the proof, we note that the $A$-$B$
separable states have a general form
\begin{equation}
\hat\varrho=\sum_i\,p_i\,\hat\varrho^{(i)}_A\otimes\hat\varrho^{(i)}_B,\label{sepAB}
\end{equation}
where $p_i$'s are the statistical weights.
This relation is established by the decomposition of the total Hilbert space into the local sub-spaces
\begin{equation}\label{prodAB}
\mathcal H=\mathcal H_A\otimes\mathcal H_B.
\end{equation}
We will demonstrate that, in the presence of the SSR and in the context of Bell inequalities,
the quantum state should also be inspected through the decomposition of $\mathcal H$ into the product of single-particle subspaces
\begin{equation}\label{prodN}
\mathcal H=\bigotimes_{i=1}^N\mathcal H_i.
\end{equation}
Here $N$ is the number of particles shared by $A$ and $B$.
In analogy to Eq.~(\ref{sepAB}), particle-separable states are
\begin{equation}\label{sep}
\hat\varrho=\sum_i\,p_i\,\hat\varrho^{(1)}_i\otimes\ldots\otimes\hat\varrho^{(N)}_i,
\end{equation}
and particle-entangled are those that cannot be expressed in this way.
We adapt this general formula to the $A$-$B$ geometry and
show that no such state violates any Bell inequality in the presence of the SSR. This means that the quantum state shared by $A$ and $B$ must necessarily be
particle-entangled to violate any Bell inequality.
First, we consider a collection of distinguishable particles. The basic building block of the $N$-body density matrix (\ref{sep}) is the one-body pure state, which for the $i$-th particle reads
\begin{equation}\label{dist1}
\ket{\psi_i}=\left(\alpha(\psi_i)\,\hat\psi_{_i}^{(A)\dagger}+\beta(\psi_i)\,\hat\psi_{i}^{(B)\dagger}\right)\ket{0}.
\end{equation}
Here $\hat\psi_{i}^{(k)\dagger}$ creates a quantum of a field associated with this particle in the region $k$, $\ket{0}$ is the vacuum and $|\alpha(\psi_i)|^2+|\beta(\psi_i)|^2=1$.
According to Eq.~(\ref{sep}), the density matrix of $N$ particles forming a separable state is an incoherent mixture of the one-body matrices.
Since one must allow each particle to be distributed among the regions in every way---and this for the $i$-th body is governed by the field $\psi_i$---Eq.~(\ref{sep})
translates into
\begin{equation}\label{dens_dist}
\hat\varrho=\int\!\!{\mathcal D}\psi_1\cdots\int\!\!{\mathcal D}\psi_N\,\mathcal{P}(\psi_1,\ldots,\psi_N)\bigotimes_{i=1}^N\ket{\psi_{i}}\bra{\psi_{i}}.
\end{equation}
Here, the joint probability $\mathcal{P}(\psi_1,\ldots,\psi_N)$ determines the partition of all the bodies among $A$ and $B$. The symbol ${\mathcal D}\psi_i$ is the integration measure
over the set of fields $\psi_i$.
Note that the product $\otimes$ in Eq.~(\ref{dens_dist}) refers to the decomposition of the Hilbert space as in Eq.~(\ref{prodN}).
On the other hand, if we decomposed $\hat\varrho$ into the states residing in $\mathcal H_A$ and $\mathcal H_B$, the state would not be
$A$-$B$ separable---it could not be written in the form of Eq.~(\ref{sepAB}).
This is because the density matrix (\ref{dens_dist}) has multiple terms which account for the $A$-$B$ entanglement, due to the inter-region coherence of each component from Eq.~(\ref{dist1}).
However, the particles form a separable state; therefore, to identify the feasible local operations in the presence of the SSR, each single-particle state can be considered separately.
Following the example from Eq.~(\ref{pure}), the SSR enforces every $\ket{\psi_{i}}\bra{\psi_{i}}$ to be replaced with
\begin{eqnarray}\label{single}
\ket{\psi_{i}}\bra{\psi_{i}}\rightarrow\hat\varrho_{\rm eff}(\psi_{i})&=&|\alpha(\psi_i)|^2\hat\psi_{i}^{(A)\dagger}\ket{0}\bra{0}\,\hat\psi_{i}^{(A)}\nonumber\\
&+&|\beta(\psi_i)|^2\hat\psi_{i}^{(B)\dagger}\ket{0}\bra{0}\,\hat\psi_{i}^{(B)}.
\end{eqnarray}
This expression, plugged back into (\ref{dens_dist}) gives
\begin{equation}\label{eff_d}
\hat\varrho_{\rm eff}=\int\!\!{\mathcal D}\psi_1\cdots\int\!\!{\mathcal D}\psi_N\,\mathcal{P}(\psi_1,\ldots,\psi_N)\bigotimes_{i=1}^N\hat\varrho_{\rm eff}(\psi_i).
\end{equation}
Since the inter-region coherence is washed out already on the single-particle level of Eq.~(\ref{single}) and the integral over the fields does not introduce any quantum coherence,
the effective $N$-body density matrix $\hat\varrho_{\rm eff}$ is both particle- and $A$-$B$-separable (for the rigorous proof, see the
Appendix). To conclude, the SSR transforms the state (\ref{dens_dist}) into (\ref{eff_d}), which has the form of Eq.~(\ref{sepAB}) and as such will not violate any Bell inequality.
If the distinguishable particles are entangled, a violation of some Bell inequality in the presence of the SSR might be possible. For illustration,
consider an electron ($e$) and a proton ($p$) forming a particle- and $A$-$B$-entangled state
\begin{equation}
\ket\psi=\frac1{\sqrt{2}}\Big(\ket{\uparrow_e}_A\otimes\ket{\uparrow_p}_B+\ket{\downarrow_e}_A\otimes\ket{\downarrow_p}_B\Big),
\end{equation}
where the arrows denote the projection of the spin of each particle. Now, local operations can be executed by coupling
$\ket{\uparrow_e}_A$ with $\ket{\downarrow_e}_A$ and $\ket{\uparrow_p}_B$ with $\ket{\downarrow_p}_B$. Therefore, according to the discussion below Eq.~(\ref{sch}), this state will violate a Bell inequality.
On the other hand, take an alternative particle- and $A$-$B$-entangled state
\begin{equation}
\ket\psi=\frac1{\sqrt{2}}\Big(\ket{\uparrow_e,\uparrow_p}_A\otimes\ket{0}_B+\ket{0}_A\otimes\ket{\downarrow_e,\downarrow_p}_B\Big).
\end{equation}
It will not violate any Bell inequality, because the SSR forbids the coupling of $\ket{\uparrow_e,\uparrow_p}_A$ with $\ket{0}_A$ and
$\ket{\downarrow_e,\downarrow_p}_B$ with $\ket{0}_B$. This second example highlights the fact that when the SSR applies, both the particle and the $A$-$B$ entanglement are only necessary, but not
sufficient, to drive the violation of a Bell inequality.
We now turn to bosons, for which the equivalent of Eq.~(\ref{dens_dist}) is \cite{cauchy,cauchy_long}
\begin{equation}\label{sep_bos_rho}
\hat\varrho=\int\!\!{\mathcal D}\psi\,\mathcal{P}(\psi)\ \ket{\psi}\bra{\psi}.
\end{equation}
Here $\ket{\psi}$ is the spin coherent state, which reads
\begin{equation}\label{sep_bos}
\ket{\psi}=\left(\alpha(\psi)\,\hat\psi^{(A)^\dagger}+\beta(\psi)\,\hat\psi^{(B)^\dagger}\right)^N\ket0.
\end{equation}
The language of second quantization allows one to immediately identify the relation between the regions $A$ and $B$, i.e., it provides the state decomposed according to Eq.~(\ref{prodAB}).
This can be seen by writing Eq.~(\ref{sep_bos}) in terms of $A/B$ occupation states, i.e.,
\begin{equation}\label{sep_bos2}
\ket{\psi}=\sum_{n_\psi=0}^NC_{n_\psi}\ket{n_\psi}_A\otimes\ket{N-n_\psi}_B,
\end{equation}
where $C_{n}=\sqrt{\binom N{n}}\alpha(\psi)^n\beta(\psi)^{N-n}$.
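Note that $\sum_{n_\psi=0}^{N}|C_{n_\psi}|^2=\left(|\alpha(\psi)|^2+|\beta(\psi)|^2\right)^N=1$ by the binomial theorem, so the expansion (\ref{sep_bos2}) is properly normalized.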
This expression plugged into Eq.~(\ref{sep_bos_rho}) gives
\begin{eqnarray}
\hat\varrho&=&\int\!\!{\mathcal D}\psi\,\mathcal{P}(\psi)\sum_{n_\psi=0}^N\sum_{m_\psi=0}^NC^*_{n_\psi}C_{m_\psi}\times\nonumber\\
&\times&\ket{n_\psi}\bra{m_\psi}_A\otimes\ket{N-n_\psi}\bra{N-m_\psi}_B\label{dens_bos}.
\end{eqnarray}
In the presence of the SSR, local operations cannot couple $\ket n_k$ with $\ket{n'}_k$ for $n\ne n'$. In this context, the state (\ref{sep_bos2}) can be effectively replaced by
\begin{eqnarray}
\ket{\psi}\bra\psi\rightarrow\hat\varrho_{\rm eff}(\psi)=\sum_{n_\psi=0}^N&&|C_{n_\psi}|^2\ket{n_\psi}\bra{n_\psi}_A\otimes\nonumber\\
&&\otimes\ket{N-n_\psi}\bra{N-n_\psi}_B,
\end{eqnarray}
which is both particle- and $A$-$B$-separable. Also the effective density matrix
\begin{equation}
\hat\varrho_{\rm eff}=\int\!{\mathcal D}\psi\,\mathcal{P}(\psi)\hat\varrho_{\rm eff}(\psi)
\end{equation}
is $A$-$B$ separable. Thus for bosons, in the presence of the SSR, the particle entanglement
is a necessary resource for the violation of any Bell inequality.
We now show that the entanglement extracted solely from the indistinguishability of bosons might be sufficient for the
violation of the Bell inequality.
\begin{figure}
\caption{(color online) Particles of type $i$ (bottom left) and of type $j$ (top right) are coherently split and sent into the regions $A$ and $B$.
If they are distinguishable, the system will not violate any Bell inequality in the presence of the SSR, because they form a particle-separable state.
On the contrary, identical particles in this configuration form a particle-entangled state due to their indistinguishability. In such a case, the violation of some Bell inequality is possible.}
\label{scheme_yurke}
\end{figure}
Consider a particle of type $i$ and a particle of type $j$ in a state
\begin{equation}\label{ex1}
\ket\psi=\ket{1_i}\otimes\ket{1_j}
\end{equation}
entering the system through the two ports \cite{yurke}, shown in
Fig.~\ref{scheme_yurke}. The two beam-splitters distribute the signal among $A$ and $B$, giving
\begin{equation}\label{ex2}
\ket\psi=\frac12\Big(\ket{1_i}_A+\ket{1_i}_B\Big)\otimes\Big(\ket{1_j}_A+\ket{1_j}_B\Big).
\end{equation}
The symbol $\otimes$ in Equations (\ref{ex1}) and (\ref{ex2}) multiplies the single-particle states, therefore it refers to the decomposition of the Hilbert space as in Eq.~(\ref{prodN}).
To analyze the relation between $A$ and $B$, we switch to the second-quantization by expanding the product
and expressing $\ket\psi$ in terms of $A$- and $B$-occupation states. For instance, $\ket{1_i}_A\otimes\ket{1_j}_B\rightarrow\ket{1_i,0_j}_A\otimes\ket{0_i,1_j}_B$, giving
\begin{eqnarray}
\ket\psi&=&\frac12\Big(\ket{1_i,0_j}_A\otimes\ket{0_i,1_j}_B+\ket{0_i,1_j}_A\otimes\ket{1_i,0_j}_B\nonumber\\
&+&\ket{1_i,1_j}_A\otimes\ket{0,0}_B+\ket{0,0}_A\otimes\ket{1_i,1_j}_B\Big). \label{dist_bs_b}
\end{eqnarray}
Now, the product relates to the decomposition of the Hilbert space into $\mathcal H_A$ and $\mathcal H_B$, as in Eq.~(\ref{prodAB}) and clearly the state is $A$-$B$ entangled.
If the particles are distinguishable, i.e., $i\neq j$,
they are not entangled, and the only pair of states in $A$ with an equal number of
particles is $\ket{1_i,0_j}_A$ and $\ket{0_i,1_j}_A$ (and analogously in $B$). These states cannot be locally
coupled in the presence of the SSR, and the system will not violate any Bell inequality.
On the other hand, if the particles are identical, i.e., $i=j$,
the state (\ref{ex1}) is a particle-entangled state due to the indistinguishability. Now,
the coupling of $\ket{1_i,0}_k$ with $\ket{0,1_{j=i}}_k$
can be realized and the $A$-$B$ entanglement, together with particle entanglement coming solely from indistinguishability \cite{identical_plenio},
will drive the violation of some Bell inequality.
If a separable state contains a group of bosons and a group of distinguishable particles, all the above arguments can be applied to each sub-group separately,
because in the presence of the SSR local operations cannot transmute a particle of one type into another.
Moreover, if the state reveals incoherent particle-number fluctuations that are consistent with the SSR, each fixed-$N$ sector can be considered separately,
leading to the same conclusion---particle-separable states do not reveal non-classicality in any Bell test.
Also, one could extend the system by adding an auxiliary reference frame to the particle-separable state \cite{banaszek2,dowling}.
A composite system is created and it undergoes local operations in $A$ and $B$.
If this reference frame is a quantum system, to which the SSR applies, then according to our proof, as long as this extension does not introduce any particle entanglement,
the composite system will remain effectively $A$-$B$ separable.
Finally, we point out that our result is in line with what is known in quantum interferometry \cite{giovannetti2004quantum,varenna}.
There, a collection of particles passes through the two arms of an interferometer.
During the propagation, a phase is imprinted on one of the arms. In order to surpass the shot-noise limit for the
sensitivity of the phase estimation at the output, during the phase-imprint two conditions
must be satisfied: the two arms must be entangled (which corresponds to the $A$-$B$ entanglement in our case), and the system must be particle-entangled.
To summarize, we have shown that in presence of super-selection rules, mode entanglement must be accompanied by
entanglement between the particles in order to violate a Bell inequality.
Our proof applies to any system, where the particle-entangled/non-entangled dichotomy is present.
This is the case of distinguishable particles, bosons, or systems where bosons and distinguishable
particles co-exist. Naturally, fermions form only particle-entangled states due to the Pauli principle.
Our result puts the particle entanglement on par with the $A$-$B$ mode entanglement,
as a necessary condition for the violation of the local realism. We have demonstrated that the particle entanglement necessary for the violation of the Bell inequalities might result solely from
the indistinguishability of bosons.
T. W. acknowledges the support of the Ministry of Science and Higher Education programme ``Iuventus Plus'' for years 2015-2017, project number IP2014 050073.
\section*{Appendix}
The one-body pure state for the particle of type $i$, which is distributed among the regions $A$ and $B$ reads
\begin{equation}\label{dist1.s}
\ket{\psi_{i}}=\left(\alpha(\psi_i)\hat\psi_{i}^{(A)^\dagger}+\beta(\psi_i)\hat\psi_{i}^{(B)^\dagger}\right)\ket{0}.
\end{equation}
We introduce a shortened notation, where
\begin{equation}
\alpha(\psi_i)\hat\psi_{i}^{(A)^\dagger}\equiv\hat\Phi_{i}^{(A)^\dagger}\ \ \ \mathrm{and}\ \ \ \beta(\psi_i)\hat\psi_{i}^{(B)^\dagger}\equiv\hat\Phi_{i}^{(B)^\dagger}
\end{equation}
With this at hand, the state (\ref{dist1.s}) is
\begin{equation}\label{dist2.s}
\ket{\psi_{i}}=\sum_{\kappa_i\in\{A,B\}}\hat\Phi_{i}^{(\kappa_i)^\dagger}\ket{0}
\end{equation}
Every density matrix of $N$ particles in a separable state can be expressed as
\begin{equation}\label{dens_dist.s}
\hat\varrho=\int\!\!{\mathcal D}\psi_1\cdots\int\!\!{\mathcal D}\psi_N\,\mathcal{P}(\psi_1,\ldots,\psi_N)\bigotimes_{i=1}^N\ket{\psi_{i}}\bra{\psi_{i}},
\end{equation}
where $\mathcal{P}(\psi_1,\ldots,\psi_N)$ is a probability distribution.
We now insert the expression (\ref{dist2.s}) into (\ref{dens_dist.s}) and obtain
\begin{widetext}
\begin{equation}
\hat\varrho=
\sum_{\kappa_1\in\{A,B\}}\cdots\sum_{\kappa_N\in\{A,B\}}\
\sum_{\kappa'_1\in\{A,B\}}\cdots\sum_{\kappa'_N\in\{A,B\}}
\int\!\!{\mathcal D}\psi_1\cdots\int\!\!{\mathcal D}\psi_N\,\mathcal{P}(\psi_1,\ldots,\psi_N)\hat\Phi_{1}^{(\kappa_1)^\dagger}\cdots\hat\Phi_{N}^{(\kappa_N)^\dagger}\ket{0}\bra{0}
\hat\Phi_{1}^{(\kappa'_1)}\cdots\hat\Phi_{N}^{(\kappa'_N)}.
\end{equation}
\end{widetext}
In this state, the quantum correlation between the regions $A$ and $B$ arises from the one-body coherence, which is
represented by the independent sums over $\kappa_i$ and $\kappa'_i$.
The restriction imposed on local operations requires that in each region only states with a fixed number of particles of each type can couple. This means that the sums over
$\kappa_i$ and $\kappa'_i$ effectively do not run independently, and the state reduces to
\begin{widetext}
\begin{equation}
\hat\varrho_{\rm eff}=
\sum_{\kappa_1\in\{A,B\}}\cdots\sum_{\kappa_N\in\{A,B\}}\
\int\!\!{\mathcal D}\psi_1\cdots\int\!\!{\mathcal D}\psi_N\,\mathcal{P}(\psi_1,\ldots,\psi_N)\hat\Phi_{1}^{(\kappa_1)^\dagger}\cdots\hat\Phi_{N}^{(\kappa_N)^\dagger}\ket{0}\bra{0}
\hat\Phi_{1}^{(\kappa_1)}\cdots\hat\Phi_{N}^{(\kappa_N)}.
\end{equation}
\end{widetext}
This state does not reveal any quantum coherence and is $A$-$B$ separable. $\Box$
\begin{thebibliography}{10}
\makeatletter
\providecommand \@ifxundefined [1]{
\ifx #1\undefined \expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand\href[0]{\@sanitize\@href}
\providecommand\@href[1]{\endgroup\@@startlink{#1}\endgroup\@@href}
\providecommand\@@href[1]{#1\@@endlink}
\providecommand \@sanitize [0]{\begingroup\catcode`\&12\catcode`\#12\relax}
\@ifxundefined \pdfoutput {\@firstoftwo}{
\@ifnum{\z@=\pdfoutput}{\@firstoftwo}{\@secondoftwo}
}{
\providecommand\@@startlink[1]{\leavevmode\special{html:<a href="#1">}}
\providecommand\@@endlink[0]{\special{html:</a>}}
}{
\providecommand\@@startlink[1]{
\leavevmode
\pdfstartlink
attr{/Border[0 0 1 ]/H/I/C[0 1 1]}
user{/Subtype/Link/A<</Type/Action/S/URI/URI(#1)>>}
\relax
}
\providecommand\@@endlink[0]{\pdfendlink}
}
\providecommand \url [0]{\begingroup\@sanitize \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix}}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint[0]{\href }
\@ifxundefined \urlstyle {
\providecommand \doi [1]{doi:\discretionary{}{}{}#1}
}{
\providecommand \doi [0]{doi:\discretionary{}{}{}\begingroup
\urlstyle{rm}\Url }
}
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \Doi[1]{\href{\doibase#1}}
\providecommand \bibAnnote [3]{
\BibitemShut{#1}
\begin{quotation}\noindent
\textsc{Key:}\ #2\\\textsc{Annotation:}\ #3
\end{quotation}
}
\providecommand \bibAnnoteFile [2]{
\IfFileExists{#2}{\bibAnnote {#1} {#2} {\input{#2}}}{}
}
\providecommand \typeout [0]{\immediate \write \m@ne }
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen[0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\bibitem{epr}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Einstein}}, \bibinfo
{author} {\bibfnamefont{B.}~\bibnamefont{Podolsky}},\ and\ \bibinfo {author}
{\bibfnamefont{N.}~\bibnamefont{Rosen}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev.}\ }
\textbf{\bibinfo {volume} {47}},\ \bibinfo {pages} {777} (\bibinfo {year}
{1935})
\bibAnnoteFile{NoStop}{epr}
\bibitem{bell}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.~S.}\ \bibnamefont{Bell}},\ }
\bibfield{journal}{
\bibinfo {journal} {Physics}\ }
\textbf{\bibinfo {volume} {1}},\ \bibinfo {pages} {195} (\bibinfo {year}
{1964})
\bibAnnoteFile{NoStop}{bell}
\bibitem{bell_rmp}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.~S.}\ \bibnamefont{Bell}},\ }
\bibfield{journal}{
\bibinfo {journal} {Rev. Mod. Phys.}\ }
\textbf{\bibinfo {volume} {38}},\ \bibinfo {pages} {447} (\bibinfo {month}
{Jul}\ \bibinfo {year} {1966})
\bibAnnoteFile{NoStop}{bell_rmp}
\bibitem{chsh}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.~F.}\ \bibnamefont{Clauser}}, \bibinfo
{author} {\bibfnamefont{M.~A.}\ \bibnamefont{Horne}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Shimony}},\ and\ \bibinfo {author}
{\bibfnamefont{R.~A.}\ \bibnamefont{Holt}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {23}},\ \bibinfo {pages} {880} (\bibinfo {month}
{Oct}\ \bibinfo {year} {1969})
\bibAnnoteFile{NoStop}{chsh}
\bibitem{test1}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.~J.}\ \bibnamefont{Freedman}}\ and\
\bibinfo {author} {\bibfnamefont{J.~F.}\ \bibnamefont{Clauser}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {28}},\ \bibinfo {pages} {938} (\bibinfo {month}
{Apr}\ \bibinfo {year} {1972})
\bibAnnoteFile{NoStop}{test1}
\bibitem{test2}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Aspect}}, \bibinfo {author}
{\bibfnamefont{P.}~\bibnamefont{Grangier}},\ and\ \bibinfo {author}
{\bibfnamefont{G.}~\bibnamefont{Roger}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {47}},\ \bibinfo {pages} {460} (\bibinfo {month}
{Aug}\ \bibinfo {year} {1981})
\bibAnnoteFile{NoStop}{test2}
\bibitem{test3}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Aspect}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Dalibard}},\ and\ \bibinfo {author}
{\bibfnamefont{G.}~\bibnamefont{Roger}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {49}},\ \bibinfo {pages} {1804} (\bibinfo {month}
{Dec}\ \bibinfo {year} {1982})
\bibAnnoteFile{NoStop}{test3}
\bibitem{test4}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{W.}~\bibnamefont{Tittel}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Brendel}}, \bibinfo {author}
{\bibfnamefont{B.}~\bibnamefont{Gisin}}, \bibinfo {author}
{\bibfnamefont{T.}~\bibnamefont{Herzog}}, \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{Zbinden}},\ and\ \bibinfo {author}
{\bibfnamefont{N.}~\bibnamefont{Gisin}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {57}},\ \bibinfo {pages} {3229} (\bibinfo {month}
{May}\ \bibinfo {year} {1998})
\bibAnnoteFile{NoStop}{test4}
\bibitem{test5}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{W.}~\bibnamefont{Tittel}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Brendel}}, \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{Zbinden}},\ and\ \bibinfo {author}
{\bibfnamefont{N.}~\bibnamefont{Gisin}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {81}},\ \bibinfo {pages} {3563} (\bibinfo {month}
{Oct}\ \bibinfo {year} {1998})
\bibAnnoteFile{NoStop}{test5}
\bibitem{test6}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{G.}~\bibnamefont{Weihs}}, \bibinfo {author}
{\bibfnamefont{T.}~\bibnamefont{Jennewein}}, \bibinfo {author}
{\bibfnamefont{C.}~\bibnamefont{Simon}}, \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},\ and\ \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Zeilinger}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {81}},\ \bibinfo {pages} {5039} (\bibinfo {month}
{Dec}\ \bibinfo {year} {1998})
\bibAnnoteFile{NoStop}{test6}
\bibitem{test7}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.-W.}\ \bibnamefont{{Pan}}}, \bibinfo
{author} {\bibfnamefont{D.}~\bibnamefont{{Bouwmeester}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{Daniell}}}, \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{{Weinfurter}}},\ and\ \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{{Zeilinger}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {403}},\ \bibinfo {pages} {515} (\bibinfo {year}
{2000})
\bibAnnoteFile{NoStop}{test7}
\bibitem{test8}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{D.}~\bibnamefont{{Kielpinski}}}, \bibinfo
{author} {\bibfnamefont{V.}~\bibnamefont{{Meyer}}}, \bibinfo {author}
{\bibfnamefont{C.~A.}\ \bibnamefont{{Sackett}}}, \bibinfo {author}
{\bibfnamefont{W.~M.}\ \bibnamefont{{Itano}}}, \bibinfo {author}
{\bibfnamefont{C.}~\bibnamefont{{Monroe}}},\ and\ \bibinfo {author}
{\bibfnamefont{D.~J.}\ \bibnamefont{{Wineland}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {409}},\ \bibinfo {pages} {791} (\bibinfo {year}
{2001})
\bibAnnoteFile{NoStop}{test8}
\bibitem{test9}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.}~\bibnamefont{{Gr{\"o}blacher}}},
\bibinfo {author} {\bibfnamefont{T.}~\bibnamefont{{Paterek}}}, \bibinfo
{author} {\bibfnamefont{R.}~\bibnamefont{{Kaltenbaek}}}, \bibinfo {author}
{\bibfnamefont{{\v C}.}~\bibnamefont{{Brukner}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{{\.Z}ukowski}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{Aspelmeyer}}},\ and\ \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{{Zeilinger}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {446}},\ \bibinfo {pages} {871} (\bibinfo {month}
{Apr.}\ \bibinfo {year} {2007})
\bibAnnoteFile{NoStop}{test9}
\bibitem{test10}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{D.}~\bibnamefont{Salart}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Baas}}, \bibinfo {author}
{\bibfnamefont{J.~A.~W.}\ \bibnamefont{van Houwelingen}}, \bibinfo {author}
{\bibfnamefont{N.}~\bibnamefont{Gisin}},\ and\ \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{Zbinden}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {100}},\ \bibinfo {pages} {220404} (\bibinfo
{month} {Jun}\ \bibinfo {year} {2008})
\bibAnnoteFile{NoStop}{test10}
\bibitem{test11}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{M.}~\bibnamefont{{Ansmann}}}, \bibinfo
{author} {\bibfnamefont{H.}~\bibnamefont{{Wang}}}, \bibinfo {author}
{\bibfnamefont{R.~C.}\ \bibnamefont{{Bialczak}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{Hofheinz}}}, \bibinfo {author}
{\bibfnamefont{E.}~\bibnamefont{{Lucero}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{Neeley}}}, \bibinfo {author}
{\bibfnamefont{A.~D.}\ \bibnamefont{{O'Connell}}}, \bibinfo {author}
{\bibfnamefont{D.}~\bibnamefont{{Sank}}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{{Weides}}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{{Wenner}}}, \bibinfo {author}
{\bibfnamefont{A.~N.}\ \bibnamefont{{Cleland}}},\ and\ \bibinfo {author}
{\bibfnamefont{J.~M.}\ \bibnamefont{{Martinis}}},\ }
\bibfield{journal}{
\Doi{10.1038/nature08363}{\bibinfo {journal} {Nature}}\ }
\textbf{\bibinfo {volume} {461}},\ \bibinfo {pages} {504} (\bibinfo {month}
{Sep.}\ \bibinfo {year} {2009})
\bibAnnoteFile{NoStop}{test11}
\bibitem{test12}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{M.}~\bibnamefont{{Giustina}}}, \bibinfo
{author} {\bibfnamefont{A.}~\bibnamefont{{Mech}}}, \bibinfo {author}
{\bibfnamefont{S.}~\bibnamefont{{Ramelow}}}, \bibinfo {author}
{\bibfnamefont{B.}~\bibnamefont{{Wittmann}}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{{Kofler}}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{{Beyer}}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{{Lita}}}, \bibinfo {author}
{\bibfnamefont{B.}~\bibnamefont{{Calkins}}}, \bibinfo {author}
{\bibfnamefont{T.}~\bibnamefont{{Gerrits}}}, \bibinfo {author}
{\bibfnamefont{S.~W.}\ \bibnamefont{{Nam}}}, \bibinfo {author}
{\bibfnamefont{R.}~\bibnamefont{{Ursin}}},\ and\ \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{{Zeilinger}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {497}},\ \bibinfo {pages} {227} (\bibinfo {year}
{2013})
\bibAnnoteFile{NoStop}{test12}
\bibitem{ent_rmp}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{R.}~\bibnamefont{Horodecki}}, \bibinfo
{author} {\bibfnamefont{P.}~\bibnamefont{Horodecki}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{Horodecki}},\ and\ \bibinfo {author}
{\bibfnamefont{K.}~\bibnamefont{Horodecki}},\ }
\bibfield{journal}{
\bibinfo {journal} {Rev. Mod. Phys.}\ }
\textbf{\bibinfo {volume} {81}},\ \bibinfo {pages} {865} (\bibinfo {month}
{Jun}\ \bibinfo {year} {2009})
\bibAnnoteFile{NoStop}{ent_rmp}
\bibitem{banaszek1}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{K.}~\bibnamefont{Banaszek}}\ and\ \bibinfo
{author} {\bibfnamefont{K.}~\bibnamefont{W\'odkiewicz}},\ }
\bibfield{journal}{
\Doi{10.1103/PhysRevLett.82.2009}{\bibinfo {journal} {Phys. Rev. Lett.}}\ }
\textbf{\bibinfo {volume} {82}},\ \bibinfo {pages} {2009} (\bibinfo {year}
{1999})
\bibAnnoteFile{NoStop}{banaszek1}
\bibitem{loophole}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{B.}~\bibnamefont{Hensen}}, \bibinfo {author}
{\bibfnamefont{H.}~\bibnamefont{Bernien}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Dr{\'e}au}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Reiserer}}, \bibinfo {author}
{\bibfnamefont{N.}~\bibnamefont{Kalb}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{Blok}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Ruitenberg}}, \bibinfo {author}
{\bibfnamefont{R.}~\bibnamefont{Vermeulen}}, \bibinfo {author}
{\bibfnamefont{R.}~\bibnamefont{Schouten}}, \bibinfo {author}
{\bibfnamefont{C.}~\bibnamefont{Abell{\'a}n}}, \emph{et~al.},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {526}},\ \bibinfo {pages} {682} (\bibinfo {year}
{2015})
\bibAnnoteFile{NoStop}{loophole}
\bibitem{loophole2}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{M.}~\bibnamefont{Giustina}}, \bibinfo
{author} {\bibfnamefont{M.~A.~M.}\ \bibnamefont{Versteegh}}, \bibinfo
{author} {\bibfnamefont{S.}~\bibnamefont{Wengerowsky}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Handsteiner}}, \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Hochrainer}}, \bibinfo {author}
{\bibfnamefont{K.}~\bibnamefont{Phelan}}, \bibinfo {author}
{\bibfnamefont{F.}~\bibnamefont{Steinlechner}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Kofler}}, \bibinfo {author}
{\bibfnamefont{J.-A.}\ \bibnamefont{Larsson}}, \bibinfo {author}
{\bibfnamefont{C.}~\bibnamefont{Abell\'an}}, \bibinfo {author}
{\bibfnamefont{W.}~\bibnamefont{Amaya}}, \bibinfo {author}
{\bibfnamefont{V.}~\bibnamefont{Pruneri}}, \bibinfo {author}
{\bibfnamefont{M.~W.}\ \bibnamefont{Mitchell}}, \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Beyer}}, \bibinfo {author}
{\bibfnamefont{T.}~\bibnamefont{Gerrits}}, \bibinfo {author}
{\bibfnamefont{A.~E.}\ \bibnamefont{Lita}}, \bibinfo {author}
{\bibfnamefont{L.~K.}\ \bibnamefont{Shalm}}, \bibinfo {author}
{\bibfnamefont{S.~W.}\ \bibnamefont{Nam}}, \bibinfo {author}
{\bibfnamefont{T.}~\bibnamefont{Scheidl}}, \bibinfo {author}
{\bibfnamefont{R.}~\bibnamefont{Ursin}}, \bibinfo {author}
{\bibfnamefont{B.}~\bibnamefont{Wittmann}},\ and\ \bibinfo {author}
{\bibfnamefont{A.}~\bibnamefont{Zeilinger}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {115}},\ \bibinfo {pages} {250401} (\bibinfo {year}
{2015})
\bibAnnoteFile{NoStop}{loophole2}
\bibitem{werner}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{R.~F.}\ \bibnamefont{Werner}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {40}},\ \bibinfo {pages} {4277} (\bibinfo {year}
{1989})
\bibAnnoteFile{NoStop}{werner}
\bibitem{gisin}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{N.}~\bibnamefont{Gisin}},\ }
\bibfield{journal}{
\bibinfo {journal} {Physics Letters A}\ }
\textbf{\bibinfo {volume} {154}},\ \bibinfo {pages} {201} (\bibinfo {year}
{1991})
\bibAnnoteFile{NoStop}{gisin}
\bibitem{ssr_wick}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{G.~C.}\ \bibnamefont{Wick}}, \bibinfo
{author} {\bibfnamefont{A.~S.}\ \bibnamefont{Wightman}},\ and\ \bibinfo
{author} {\bibfnamefont{E.~P.}\ \bibnamefont{Wigner}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev.}\ }
\textbf{\bibinfo {volume} {88}},\ \bibinfo {pages} {101} (\bibinfo {year}
{1952})
\bibAnnoteFile{NoStop}{ssr_wick}
\bibitem{ssr2}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.~D.}\ \bibnamefont{Bartlett}}, \bibinfo
{author} {\bibfnamefont{T.}~\bibnamefont{Rudolph}},\ and\ \bibinfo {author}
{\bibfnamefont{R.~W.}\ \bibnamefont{Spekkens}},\ }
\bibfield{journal}{
\bibinfo {journal} {Reviews of Modern Physics}\ }
\textbf{\bibinfo {volume} {79}},\ \bibinfo {pages} {555} (\bibinfo {year}
{2007})
\bibAnnoteFile{NoStop}{ssr2}
\bibitem{ssr_ent1}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{H.~M.}\ \bibnamefont{Wiseman}}\ and\
\bibinfo {author} {\bibfnamefont{J.~A.}\ \bibnamefont{Vaccaro}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {91}},\ \bibinfo {pages} {097902} (\bibinfo {year}
{2003})
\bibAnnoteFile{NoStop}{ssr_ent1}
\bibitem{enk}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.~J.}\ \bibnamefont{van Enk}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {72}},\ \bibinfo {pages} {064306} (\bibinfo {year}
{2005})
\bibAnnoteFile{NoStop}{enk}
\bibitem{ssr_ent2}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{F.}~\bibnamefont{Verstraete}}\ and\ \bibinfo
{author} {\bibfnamefont{J.~I.}\ \bibnamefont{Cirac}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {91}},\ \bibinfo {pages} {010404} (\bibinfo {year}
{2003})
\bibAnnoteFile{NoStop}{ssr_ent2}
\bibitem{ssr_ent3}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.~D.}\ \bibnamefont{Bartlett}}\ and\
\bibinfo {author} {\bibfnamefont{H.~M.}\ \bibnamefont{Wiseman}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {91}},\ \bibinfo {pages} {097903} (\bibinfo {year}
{2003})
\bibAnnoteFile{NoStop}{ssr_ent3}
\bibitem{identical_plenio}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{N.}~\bibnamefont{Killoran}}, \bibinfo
{author} {\bibfnamefont{M.}~\bibnamefont{Cramer}},\ and\ \bibinfo {author}
{\bibfnamefont{M.~B.}\ \bibnamefont{Plenio}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {112}},\ \bibinfo {pages} {150501} (\bibinfo {year}
{2014})
\bibAnnoteFile{NoStop}{identical_plenio}
\bibitem{cauchy}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.}~\bibnamefont{Wasak}}, \bibinfo {author}
{\bibfnamefont{P.}~\bibnamefont{Sza\ifmmode~\acute{n}\else
\'{n}\fi{}kowski}}, \bibinfo {author}
{\bibfnamefont{P.}~\bibnamefont{Zi\ifmmode~\acute{n}\else \'{n}\fi{}}},
\bibinfo {author} {\bibfnamefont{M.}~\bibnamefont{Trippenbach}},\ and\
\bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Chwede\ifmmode~\acute{n}\else
\'{n}\fi{}czuk}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {90}},\ \bibinfo {pages} {033616} (\bibinfo {year}
{2014})
\bibAnnoteFile{NoStop}{cauchy}
\bibitem{cauchy_long}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.}~\bibnamefont{Wasak}}, \bibinfo {author}
{\bibfnamefont{P.}~\bibnamefont{Sza{\'{n}}kowski}}, \bibinfo {author}
{\bibfnamefont{M.}~\bibnamefont{Trippenbach}},\ and\ \bibinfo {author}
{\bibfnamefont{J.}~\bibnamefont{Chwede{\'{n}}czuk}},\ }
\bibfield{journal}{
\Doi{10.1007/s11128-015-1181-z}{\bibinfo {journal} {Quantum Information
Processing}}\ }
\textbf{\bibinfo {volume} {15}},\ \bibinfo {pages} {269} (\bibinfo {year}
{2015})
\bibAnnoteFile{NoStop}{cauchy_long}
\bibitem{yurke}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{B.}~\bibnamefont{Yurke}}\ and\ \bibinfo
{author} {\bibfnamefont{D.}~\bibnamefont{Stoler}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {46}},\ \bibinfo {pages} {2229} (\bibinfo {year}
{1992})
\bibAnnoteFile{NoStop}{yurke}
\bibitem{banaszek2}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{C.}~\bibnamefont{Invernizzi}}, \bibinfo
{author} {\bibfnamefont{S.}~\bibnamefont{Olivares}}, \bibinfo {author}
{\bibfnamefont{M.~G.~A.}\ \bibnamefont{Paris}},\ and\ \bibinfo {author}
{\bibfnamefont{K.}~\bibnamefont{Banaszek}},\ }
\bibfield{journal}{
\Doi{10.1103/PhysRevA.72.042105}{\bibinfo {journal} {Phys. Rev. A}}\ }
\textbf{\bibinfo {volume} {72}},\ \bibinfo {pages} {042105} (\bibinfo {year}
{2005})
\bibAnnoteFile{NoStop}{banaszek2}
\bibitem{dowling}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{C.~F.}\ \bibnamefont{Wildfeuer}}, \bibinfo
{author} {\bibfnamefont{A.~P.}\ \bibnamefont{Lund}},\ and\ \bibinfo {author}
{\bibfnamefont{J.~P.}\ \bibnamefont{Dowling}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {76}},\ \bibinfo {pages} {052101} (\bibinfo {year}
{2007})
\bibAnnoteFile{NoStop}{dowling}
\bibitem{giovannetti2004quantum}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{V.}~\bibnamefont{Giovannetti}}, \bibinfo
{author} {\bibfnamefont{S.}~\bibnamefont{Lloyd}},\ and\ \bibinfo {author}
{\bibfnamefont{L.}~\bibnamefont{Maccone}},\ }
\bibfield{journal}{
\bibinfo {journal} {Science}\ }
\textbf{\bibinfo {volume} {306}},\ \bibinfo {pages} {1330} (\bibinfo {year}
{2004})
\bibAnnoteFile{NoStop}{giovannetti2004quantum}
\bibitem{varenna}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{L.}~\bibnamefont{Pezz\'e}}\ and\ \bibinfo
{author} {\bibfnamefont{A.}~\bibnamefont{Smerzi}},\ }
\emph{\bibinfo {title} {Atom Interferometry}},\ Vol.\ \bibinfo {volume}
{188}\ (\bibinfo {publisher} {IOS Press},\ \bibinfo {year} {2014})\ p.\
\bibinfo {pages} {691}
\bibAnnoteFile{NoStop}{varenna}
\end{thebibliography}
\end{document} |
\begin{document}
\begin{center}
{\Large\bf Design and analysis of computer experiments with both numeral and distribution inputs}
\\[2mm] Chunya Li$^{1}$, Xiaojun Cui$^{2}$, Shifeng Xiong$^{3}$\footnote{Corresponding author, Email: [email protected]}
{\footnotesize\\ 1. School of Mathematics, Physics, and Statistics, Shanghai University of Engineering Science\\ Shanghai 201620, China
\\[1mm] 2. Department of Mathematics, Nanjing University\\ Nanjing 210093, China
\\[1mm] 3. NCMIS, KLSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences\\ Beijing 100190, China}
\end{center}
\vspace{1cm} \noindent{\bf Abstract}\quad Nowadays stochastic computer simulations with both numeral and distribution inputs are widely used to mimic complex systems which contain a great deal of uncertainty. This paper studies the design and analysis issues of such computer experiments. First, we provide preliminary results concerning the Wasserstein distance in probability measure spaces. To handle the product space of the Euclidean space and the probability measure space, we prove that, through the mapping from a point in the Euclidean space to the mass probability measure at this point, the Euclidean space can be isomorphic to the subset of the probability measure space, which consists of all the mass measures, with respect to the Wasserstein distance. Therefore, the product space can be viewed as a product probability measure space. We derive formulas of the Wasserstein distance between two components of this product probability measure space. Second, we use the above results to construct Wasserstein distance-based space-filling criteria in the product space of the Euclidean space and the probability measure space. A class of optimal Latin hypercube-type designs in this product space is proposed. Third, we present a Wasserstein distance-based Gaussian process model to analyze data from computer experiments with both numeral and distribution inputs. Numerical examples and real applications to a metro simulation are presented to show the effectiveness of our methods.
\vspace{1cm} \noindent{{\bf KEY WORDS:} Gaussian process model; metro simulation; mixed inputs; space-filling design; Wasserstein distance.}
\section{Introduction}\label{sec:intro}
Nowadays more and more real systems can be studied virtually by means of computer codes. Since it is often time-consuming to run such codes, elaborate design and modeling for computer experiments are necessary. A great amount of literature has studied statistical issues related to computer experiments, including experimental design, surrogate modeling, optimization, and many others (Fang, Li, and Sudjianto 2005; Santner, Williams, and Notz 2018). Most existing research focuses on deterministic computer experiments, i.e., the simulator produces the same result if run twice using the same set of inputs. Simulations of physical phenomena based on numerically solving mathematical
models (differential equations) are usually deterministic. However, many real complex systems contain a great deal of uncertainty, and stochastic computer simulations can better mimic them.
If run twice using the same set of inputs, a stochastic computer simulation yields different results. In other words, its output is a probability distribution. Typical stochastic computer simulations can be found in urban transportation (Elefteriadou 2014) since the movements of vehicles, pedestrians, and passengers have a high degree of randomness. Other examples of stochastic computer simulations appear in simulations of physical phenomena with random inputs (Xiu 2010), reliability (Nanty, Helbert, Marrel et al. 2016), and social science (Squazzoni, Jager, and Edmonds 2014).
The randomness of the output is caused by random numbers generated from some stochastic mechanism involved in the computer codes. The probability distributions behind these random numbers are actually inputs of the simulation. Therefore, stochastic computer simulations can be viewed as those having distribution inputs. There are a few papers on the design and modeling issues of computer experiments with function inputs (Muehlenstaedt, Fruth, and Roustant 2017; Tan 2019; Betancourt, Bachoc, Klein et al. 2020; Chen, Mak, Joseph et al. 2021), but very few on distribution inputs. Bachoc, Gamboa, Loubes et al. (2018) constructed the Wasserstein distance-based Gaussian process model with a one-dimensional distribution input. Bachoc, Suvorikova, Ginsbourger et al. (2020) discussed such a model with multidimensional distribution inputs. Note that actual stochastic simulations often have mixed types of inputs. In this paper we focus on computer experiments with both numeral and distribution inputs. To the best of our knowledge, the design and analysis issues of such computer experiments have not been investigated in the literature.
In a probability measure space, the Wasserstein distance, which is related to the optimal transport problem (Panaretos and Zemel 2020), possesses a number of good mathematical properties. It has therefore been widely applied in machine learning and statistics (Arjovsky, Chintala, and Bottou 2017; Peyr\'{e} and Cuturi 2019), including the construction of surrogate models of computer simulations mentioned above. We find that it is also well suited to the mixed-input problem we face. We prove that, through the mapping from a point in the Euclidean space to the mass probability measure at this point, the Euclidean space is isomorphic, with respect to the Wasserstein distance, to the subset of the probability measure space which consists of all the mass measures. Therefore, the product space of the Euclidean space and the probability measure space can be viewed as a product probability measure space. In this way, the mixed inputs are unified within this product probability measure space. We derive formulas for the Wasserstein distance between two elements of the product space, and use them to define Wasserstein distance-based space-filling criteria. A class of discretization-based approximate maximin designs in a one-dimensional probability measure space and a class of Latin hypercube-type designs in the product space are proposed. For the modeling issue, we focus on real-valued responses which are numeral features such as the expectation of the output distribution.
We use the Wasserstein distance to construct a Gaussian process model with mixed inputs, and discuss the corresponding estimation and prediction methods. Numerical experiments with test functions are presented to evaluate our design and prediction methods. Real applications to a metro passenger flow simulation are provided. A metro system contains a great deal of uncertainty caused by passengers' uncertain movements. We apply the proposed methods to build the surrogate model of the simulation, which shows how passengers' travel times depend on their walking time distribution and a boarding probability parameter.
The rest of this paper is organized as follows. Section \ref{sec:wd} gives preliminary results concerning the Wasserstein distance. Section \ref{sec:sd} constructs experimental designs in the product space of the Euclidean space and the probability measure space. Section \ref{sec:gp} discusses the Gaussian process model for the mixed inputs. Section \ref{sec:experiment} shows numerical results with test functions, and Section \ref{sec:real} provides real applications to the metro simulation. Section \ref{sec:dis} concludes the paper with a discussion. Technical proofs are given in the Appendix.
\section{The Wasserstein distance}\label{sec:wd}
For $p,q \geqslant1$ and positive integer $d$, consider the set $\mathcal{P}_{q,p}(\mathbb{R}^d)$ of probability measures on $\mathbb{R}^d$ with a finite moment (with respect to the $\ell_p$ norm) of order $q$. For $\mu,\nu\in\mathcal{P}_{q,p}(\mathbb{R}^d)$, we denote by $\Pi(\mu,\nu)$ the set of all probability measures $\pi$ over the product set $\mathbb{R}^d\times\mathbb{R}^d$ with first (resp. second) marginal $\mu$ (resp. $\nu$). Any element in $\Pi(\mu,\nu)$ is called a coupling measure of $\mu$ and $\nu$. The transportation cost with cost $\ell^q_p$ between $\mu$ and $\nu$ is defined as $$\mathcal{T}_{q,p}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\int\|\mathbf{x}-\mathbf{y}\|_p^q d\pi(\mathbf{x},\mathbf{y}),$$ where $\|\cdot\|_p$ denotes the $\ell_p$ norm.
Since $\mathbb{R}^d$, equipped with the $\ell_p$ norm, is a Polish space, it is well known that the above infimum can be attained. The Wasserstein distance (also called the Monge-Kantorovich distance) with ground metric $\ell_p$ between $\mu$ and $\nu$ is defined as$$W_{q,p}(\mu,\nu)=\mathcal{T}_{q,p}(\mu,\nu)^{1/q}.$$ For $d=1$ and $p=q$, the Wasserstein distance can be computed by (Villani 2009)
\begin{equation}\label{wdc}W_{p,p}(\mu,\nu)=\left\{\int_0^1\left|F_\mu^{-1}(t)-F_\nu^{-1}(t)\right|^pdt\right\}^{1/p},\end{equation}where $F_\mu$ and $F_\nu$ are the cumulative distribution functions of $\mu$\ and\ $\nu$, respectively, and $F_\mu^{-1}(t)=\inf\{u:\ F_\mu(u)\geqslant t\}$, $F_\nu^{-1}(t)=\inf\{u:\ F_\nu(u)\geqslant t\}$.
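
For concreteness, the following Python sketch (illustrative only; it is not taken from any implementation associated with this paper, and the function name is ours) approximates \eqref{wdc} from samples of $\mu$ and $\nu$ by evaluating their empirical quantile functions on a regular grid of $t$.
\begin{verbatim}
# Approximate W_{p,p}(mu, nu) = ( int_0^1 |F_mu^{-1}(t) - F_nu^{-1}(t)|^p dt )^{1/p}
# from samples of mu and nu, using empirical quantile functions on a midpoint grid.
import numpy as np

def wasserstein_pp(sample_mu, sample_nu, p=2, n_grid=2000):
    t = (np.arange(n_grid) + 0.5) / n_grid      # midpoints of (0, 1)
    q_mu = np.quantile(sample_mu, t)            # empirical F_mu^{-1}(t)
    q_nu = np.quantile(sample_nu, t)            # empirical F_nu^{-1}(t)
    return np.mean(np.abs(q_mu - q_nu) ** p) ** (1.0 / p)

# Example: W_2 between two Beta distributions on [0, 1]
rng = np.random.default_rng(0)
print(wasserstein_pp(rng.beta(2, 5, 10000), rng.beta(5, 2, 10000), p=2))
\end{verbatim}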
Let $\delta_{\mathbf{x}}$ denote the mass probability measure at $\mathbf{x}\in\mathbb{R}^d$, and $\Delta^d=\{\delta_{\mathbf{x}}:\ \mathbf{x}\in\mathbb{R}^d\}$. Consider the product space $\Delta^d\times\mathcal{P}_{q,p}(\mathbb{R})$, which is a subset of $\mathcal{P}_{q,p}(\mathbb{R}^{d+1})$.
\begin{lemma}\label{lemma:ei}For $\mathbf{x},\mathbf{y}\in\mathbb{R}^d$, we have\begin{equation*}W_{q,p}(\delta_{\mathbf{x}},\delta_{\mathbf{y}})
=\|\mathbf{x}-\mathbf{y}\|_p.\end{equation*}\end{lemma}
This lemma indicates that, with the mapping $\mathbf{x}\mapsto\delta_{\mathbf{x}}$, $\mathbb{R}^d$ (equipped with the $\ell_p$ distance) is isomorphic to $\Delta^d$ with respect to the Wasserstein distance. Therefore, the product space $\mathbb{R}^d\times\mathcal{P}_{q,p}(\mathbb{R})$ is isomorphic to $\Delta^d\times\mathcal{P}_{q,p}(\mathbb{R})$, which is a subspace of $\mathcal{P}_{q,p}(\mathbb{R}^{d+1})$. We view any combination of numeral and distribution inputs as an element of this probability measure space. This provides a way to unify such mixed inputs in $\mathbb{R}^d\times\mathcal{P}_{q,p}(\mathbb{R})$.
For the special product probability measure space $\Delta^d\times\mathcal{P}_{q,p}(\mathbb{R})$, the Wasserstein distance between two of its elements has the following expression.
\begin{theorem}\label{th:wf}For $\mathbf{x},\mathbf{y}\in\mathbb{R}^d$, $\mu,\nu\in\mathcal{P}_{q,p}(\mathbb{R})$, we have
\begin{equation*}W_{q,p}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)=\left\{\int_0^1 c(F_\mu^{-1}(t)-F_\nu^{-1}(t)) dt\right\}^{1/q},\end{equation*}
where $c(z) = (\|\mathbf{x}-\mathbf{y}\|_p^p+|z|^p)^{\frac{q}{p}}$.
\end{theorem}
This theorem presents a formula for calculating the Wasserstein distance in the space we focus on. In particular, we provide its two important special cases, \begin{eqnarray}&&W_{1,1}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)=\|\mathbf{x}-\mathbf{y}\|_1+W_{1,1}(\mu,\nu),\label{w11}
\\&&W_{2,2}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)^2=\|\mathbf{x}-\mathbf{y}\|_2^2+W_{2,2}(\mu,\nu)^2.\label{w22}\end{eqnarray}
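
As an illustration (a sketch with assumed sample-based inputs, not code from the paper), the special cases \eqref{w11} and \eqref{w22} can be evaluated in Python as follows.
\begin{verbatim}
# Combined distances on the product space, following the two special cases above:
#   W_{1,1} = ||x - y||_1 + W_{1,1}(mu, nu)
#   W_{2,2}^2 = ||x - y||_2^2 + W_{2,2}(mu, nu)^2
import numpy as np

def wasserstein_pp(sample_mu, sample_nu, p=2, n_grid=2000):
    t = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean(np.abs(np.quantile(sample_mu, t)
                          - np.quantile(sample_nu, t)) ** p) ** (1.0 / p)

def product_w11(x, y, sample_mu, sample_nu):
    return np.sum(np.abs(np.asarray(x) - np.asarray(y))) \
           + wasserstein_pp(sample_mu, sample_nu, p=1)

def product_w22(x, y, sample_mu, sample_nu):
    w2 = wasserstein_pp(sample_mu, sample_nu, p=2)
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2) + w2 ** 2)
\end{verbatim}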
\section{Experimental design}\label{sec:sd}
From the results in the previous section, the Wasserstein distance can be viewed as an extension of the $\ell_p$ distance in the Euclidean space to the probability measure space. Similar to the Euclidean space, we can define the Wasserstein distance-based space-filling designs in the probability measure space. In this section we first present a method to construct space-filling designs in the space of probability measures in one dimension. We then introduce Latin Hypercube (LH)-type space-filling designs for the product space of the Euclidean space and the probability measure space.
\subsection{Space-filling designs in $\mathcal{P}_{q,p}([0,1],\tau)$}\label{subsec:sdw}
Note that distribution inputs of stochastic computer simulations usually have compact supports and certain degrees of smoothness. We consider the space $\mathcal{P}_{q,p}([0,1],\tau)=\{\mu\in\mathcal{P}_{q,p}(\mathbb{R}):\ \text{support of}\ \mu \subset[0,1],\ \sup_{x,y\in[0,1],\ x\neq y}|F_\mu(x)-F_\mu(y)|/|x-y|\leqslant\tau\}$ for some constant $\tau>0$. For a design of $n$ runs, $\mathcal{D}=\{\mu_1,\ldots,\mu_n\}\subset\mathcal{P}_{q,p}([0,1],\tau)$, the minimum Wasserstein distance criterion is
\begin{equation}\label{mc} \mathrm{mdc}(\mathcal{D})=\min_{1\leqslant i<j\leqslant n}W_{q,p}(\mu_i,\mu_j).\end{equation}Note that the parameter $q$ in the Wasserstein distance can be arbitrary since the measures have bounded supports. We can compute $W_{q,p}(\mu_i,\mu_j)$ by \eqref{wdc}. The design that maximizes the criterion \eqref{mc} is called the maximin Wasserstein distance design. Similarly, we can define the minimax Wasserstein distance design and the minimum $\phi_p$ design, which are analogues of those in the Euclidean space (Johnson, Moore, and Ylvisaker 1990; Morris and Mitchell 1995).
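
A direct, illustrative implementation of the criterion \eqref{mc} simply takes the minimum of all pairwise Wasserstein distances within a design; in the sketch below (names are ours) each design measure is represented by a sample.
\begin{verbatim}
# Minimum Wasserstein distance criterion mdc(D) for a design given as a list of samples.
import numpy as np
from itertools import combinations

def wasserstein_pp(sample_mu, sample_nu, p=2, n_grid=2000):
    t = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean(np.abs(np.quantile(sample_mu, t)
                          - np.quantile(sample_nu, t)) ** p) ** (1.0 / p)

def mdc(design_samples, q=2):
    return min(wasserstein_pp(a, b, p=q) for a, b in combinations(design_samples, 2))
\end{verbatim}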
\renewcommand{\algorithmicrequire}{\textbf{Input}:}
\renewcommand{\algorithmicensure}{\textbf{Steps}:}
\floatname{algorithm}{Algorithm}
\begin{algorithm}[thb]
\caption{\label{ag:cd}\quad The block coordinate descent algorithm}
\begin{algorithmic}[1]
\REQUIRE ~~\\ $n,\ m,\ \tau,\ \varepsilon$. \ENSURE ~~\\\STATE \textbf{Initialization:} Select $\mathbf{t}_1^{(0)},\ldots,\mathbf{t}_n^{(0)}\in E_{m-1}(\tau)$. \STATE \textbf{Iteration:} For each $k=0,1,\ldots$,
\\ for $i=1,\ldots,n$, solve\begin{equation*}\begin{split}&\mathbf{t}^{(k+1)}_i=\arg\max_{\mathbf{t}_i} \xi\left(\mathbf{t}_1^{(k+1)},\ldots,\mathbf{t}_{i-1}^{(k+1)},\mathbf{t}_i,\mathbf{t}_{i+1}^{(k)},\ldots,\mathbf{t}_n^{(k)}\right),
\\&\text{subject to}\quad \mathbf{t}_i\in E_{m-1}(\tau).\end{split}\end{equation*}
If $\xi\left(\mathbf{t}_1^{(k+1)},\ldots,\mathbf{t}_n^{(k+1)}\right)-\xi\left(\mathbf{t}_1^{(k)},\ldots,\mathbf{t}_n^{(k)}\right)<\varepsilon$, then stop the iterations and output $\mathcal{D}=\{\mu_1,\ldots,\mu_n\}$ corresponding to $\mathbf{t}_1^{(k+1)},\ldots,\mathbf{t}_n^{(k+1)}$. \\ Otherwise, set $k\leftarrow k+1$.
\end{algorithmic}
\end{algorithm}
Since $\mathcal{P}_{q,p}([0,1],\tau)$ is an infinite-dimensional space, the problem of optimizing \eqref{mc} is difficult. We use a discretization method to approximate it.
For $\mu\in\mathcal{P}_{q,p}([0,1],\tau)$, $F_\mu$ can be approximated by the piecewise linear function \begin{eqnarray}&&\tilde{F}_\mu(x)\nonumber\\&=&\sum_{i=1}^{m-1}\left\{F_\mu\left(\frac{i-1}{m-1}\right)+(m-1)\left(x-\frac{i-1}{m-1}\right)
\left[F_\mu\left(\frac{i}{m-1}\right)-F_\mu\left(\frac{i-1}{m-1}\right)\right]\right\}\nonumber\\&&\quad\quad \cdot I\left(x\in\left[\frac{i-1}{m-1},\frac{i}{m-1}\right]\right),\label{tf}\end{eqnarray} where $I$ is the indicator function. Therefore, $\mathcal{P}_{q,p}([0,1],\tau)$ can be approximated by the finite dimensional space \begin{eqnarray}&&\widetilde{\mathcal{P}}_{q,p,m-1}([0,1],\tau)\nonumber\\&=&\Big\{\mu:\ F_\mu(x)=\sum_{i=1}^{m-1}\left[s_i+(m-1)\left(x-\frac{i-1}{m-1}\right)
\left(s_{i+1}-s_i\right)\right]I\left(x\in\left[\frac{i-1}{m-1},\frac{i}{m-1}\right]\right),\nonumber\\\nonumber&&\ \ \text{for all}\ (s_1,\ldots,s_m)'\ \text{with}\ 0=s_1\leqslant s_2\leqslant\cdots\leqslant s_m=1,\ s_{i+1}-s_i\leqslant \tau/(m-1),\nonumber\\&&\quad i=1,\ldots,m-1\Big\}.\label{ap}\end{eqnarray} Each $\mu\in\widetilde{\mathcal{P}}_{q,p,m-1}([0,1],\tau)$ corresponds to the vector $(s_1,\ldots,s_m)'$ of $m-2$ degrees of freedom. Let $t_i=s_{i+1}-s_{i},\ i=1,\ldots,m-1$. Then $\mu$ is specified by the vector $(t_1,\ldots,t_{m-1})'\in E_{m-1}(\tau)=\{(x_1,\ldots,x_{m-1})':\ 0\leqslant x_i\leqslant\tau/(m-1),\ i=1,\ldots,m-1,\ \sum_{i=1}^{m-1}x_i=1\}$. The maximin Wasserstein distance design problem reduces to \begin{eqnarray}&&\max\ \ \xi(\mathbf{t}_1,\ldots,\mathbf{t}_{n})=\min_{1\leqslant i<j\leqslant n}W_{q,p}(F_i,F_j)\label{pro}
\\&&\ \ \text{subject to}\quad \mathbf{t}_1,\ldots,\mathbf{t}_{n}\in E_{m-1}(\tau),\nonumber\end{eqnarray}where $F_i$ denotes the distribution determined by $\mathbf{t}_i$. Let $\{\mathbf{t}_1^*,\ldots,\mathbf{t}_n^*\}$ denote the solution to \eqref{pro}. Then the corresponding $\{\mu_1^*,\ldots,\mu_n^*\}$ can be viewed as an (approximate) maximin Wasserstein distance design in $\mathcal{P}_{q,p}([0,1],\tau)$.
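
The discretization above is easy to work with numerically. The following sketch (illustrative names, and under the assumption that all increments are strictly positive so that the piecewise-linear CDF is invertible) maps an increment vector in $E_{m-1}(\tau)$ to its quantile function and evaluates the Wasserstein distance between two such measures.
\begin{verbatim}
# A measure in the discretized class is encoded by increments t = (t_1, ..., t_{m-1})
# with 0 <= t_k <= tau/(m-1) and sum(t) = 1; its CDF is piecewise linear on [0, 1].
import numpy as np

def quantile_from_increments(t_vec, u):
    s = np.concatenate(([0.0], np.cumsum(t_vec)))   # CDF values at the knots i/(m-1)
    x = np.linspace(0.0, 1.0, len(t_vec) + 1)       # knot locations
    return np.interp(u, s, x)                       # piecewise-linear quantile function

def w_q_discretized(t1, t2, q=2, n_grid=2000):
    u = (np.arange(n_grid) + 0.5) / n_grid
    d = quantile_from_increments(t1, u) - quantile_from_increments(t2, u)
    return np.mean(np.abs(d) ** q) ** (1.0 / q)

# Example with m - 1 = 10 and tau = 3 (each increment at most 0.3)
t_a = np.full(10, 0.1)                                              # uniform measure
t_b = np.array([0.25, 0.25, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01])
print(w_q_discretized(t_a, t_b, q=2))
\end{verbatim}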
\begin{figure}
\caption{Maximin $W_{q,p}$ distance designs for $n=6$: $p=q=2$ and $p=q=1$.}
\label{fig:n6}
\end{figure}
There are $n\times(m-1)$ variables to optimize in the approximate problem \eqref{pro}, which is still a high-dimensional optimization problem. We use the block coordinate descent algorithm (Tseng 2001) to solve \eqref{pro}. In each iteration we only optimize one run of the design. See Algorithm \ref{ag:cd} for detailed steps. This algorithm can also be used to construct approximate optimal designs under other criteria such as the minimax Wasserstein distance design. A similar strategy has been used to construct designs in the Euclidean space (Mu and Xiong 2017).
The initial points in Algorithm \ref{ag:cd} can be randomly generated. We begin with a relatively small $m$ and many initial points to get a maximin design $\mathcal{D}^{(m-1)}$ in $\widetilde{\mathcal{P}}_{q,p,m-1}([0,1],\tau)$. Then $\mathcal{D}^{(m-1)}$ serves as the initial design in Algorithm \ref{ag:cd} for obtaining a maximin design $\mathcal{D}^{(2(m-1))}$ in $\widetilde{\mathcal{P}}_{q,p,2(m-1)}([0,1],\tau)$, which can better approximate the maximin design in $\mathcal{P}_{q,p}([0,1],\tau)$. For example, set the initial $m=11$, and repeat the above process twice. We successively obtain $\mathcal{D}^{(10)}$, $\mathcal{D}^{(20)}$, and $\mathcal{D}^{(40)}$. The design $\mathcal{D}^{(40)}$ is set as the final approximation to the maximin design in $\mathcal{P}_{q,p}([0,1],\tau)$. Figure \ref{fig:n6} shows the maximin $W_{q,p}$ distance designs for $n=6$ with $p=q=2$ and $p=q=1$ constructed in this way.
\begin{figure}
\caption{Maximin $W_{q,p}$ distance LH-type designs for $n=6$: $W_{2,2}$ and $W_{1,1}$.}
\label{fig:lhn6}
\end{figure}
\subsection{LH-type designs in $[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$}\label{subsec:lh}
When returning to the product space $[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$, we can similarly define the corresponding Wasserstein distance-based designs in this space. However, it is more difficult to find the optimal design with respect to the Wasserstein distance-based criteria. Here we propose a class of LH-type designs that possess good projection properties over both $[0,1]^d$ and $\mathcal{P}_{q,p}([0,1],\tau)$. Furthermore, we can find the optimal design over such LH-type designs, and the corresponding computation is relatively easier than over the whole space $[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$.
For an integer $n$, let $\mathcal{D}_1=\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$ and $\mathcal{D}_2=\{\mu_1,\ldots,\mu_n\}$ be space-filling designs in $[0,1]^d$ and $\mathcal{P}_{q,p}([0,1],\tau)$, respectively. For a permutation $(e_1,\ldots,e_n)$ of $(1,\ldots,n)$, let $\mathcal{D}(e_1,\ldots,e_n)=\{(\mathbf{x}_1,\mu_{e_1}),\ldots,(\mathbf{x}_n,\mu_{e_n})\}\subset[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$. The projections of $\mathcal{D}(e_1,\ldots,e_n)$ on $[0,1]^d$ and $\mathcal{P}_{q,p}([0,1],\tau)$ are respectively $\mathcal{D}_1$ and $\mathcal{D}_2$, which are both space-filling. Therefore, $\mathcal{D}(e_1,\ldots,e_n)$ can be viewed as an LH-type design (McKay, Beckman, and Conover 1979). The designs $\mathcal{D}_1$ and $\mathcal{D}_2$ can be called base designs of $\mathcal{D}(e_1,\ldots,e_n)$. We can set $\mathcal{D}_2$ as the maximin Wasserstein distance design proposed in the previous subsection. For $\mathcal{D}_1$, there are many feasible choices (Joseph 2016; Santner, Williams, and Notz 2018). Here we prefer those with good projection properties such as the maximin LH design (Park 1994), the maximum projection design (Joseph, Gul, and Ba 2015; Mu and Xiong 2018), and the rotated sphere packing design (He 2017), which usually yield good prediction of the simulation output.
Similar to optimal LH designs, we define the maximin Wasserstein distance LH-type design based on $(\mathcal{D}_1,\mathcal{D}_2)$ as the set $\mathcal{D}^*=\mathcal{D}(e^*_1,\ldots,e^*_n)=\{(\mathbf{x}_1,\mu_{e^*_1}),\ldots,(\mathbf{x}_n,\mu_{e^*_n})\}$, where $(e^*_1,\ldots,e^*_n)$ maximizes
\begin{equation}\label{mmc} \mathrm{mdc}(\mathcal{D}(e_1,\ldots,e_n))=\min_{1\leqslant i<j\leqslant n}W_{q,p}(\delta_{\mathbf{x}_i}\times\mu_{e_i},\delta_{\mathbf{x}_j}\times\mu_{e_j})\end{equation}over all permutations $(e_1,\ldots,e_n)$ of $(1,\ldots,n)$. The Wasserstein distance in \eqref{mmc} can be computed by Theorem \ref{th:wf}, especially by \eqref{w11} and \eqref{w22}. Usually it is infeasible to compute the global solution maximizing \eqref{mmc}
based on all the $n!$ permutations. We can use the Monte Carlo method to approximate it by generating a large number of random permutations.
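
The following sketch (illustrative only; the two distance matrices are assumed to be precomputed) implements this Monte Carlo search for the maximin $W_{2,2}$ distance LH-type design: random permutations are sampled and the one maximizing \eqref{mmc}, evaluated through \eqref{w22}, is kept.
\begin{verbatim}
# dist_x: n x n matrix of Euclidean distances between the points of D_1;
# dist_w: n x n matrix of W_{2,2} distances between the measures of D_2.
import numpy as np

def maximin_lh_permutation(dist_x, dist_w, n_perms=10000, seed=0):
    rng = np.random.default_rng(seed)
    n = dist_x.shape[0]
    iu = np.triu_indices(n, k=1)
    best_perm, best_val = None, -np.inf
    for _ in range(n_perms):
        e = rng.permutation(n)
        # combined distance of the paired design (x_i, mu_{e_i})
        combined = np.sqrt(dist_x[iu] ** 2 + dist_w[np.ix_(e, e)][iu] ** 2)
        if combined.min() > best_val:
            best_perm, best_val = e, combined.min()
    return best_perm, best_val
\end{verbatim}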
Figure \ref{fig:lhn6} shows the maximin $W_{2,2}$ distance LH-type design and the maximin $W_{1,1}$ distance LH-type design in $[0,1]\times\mathcal{P}_{q,p}([0,1],3)$ for $n=6$, where the base designs are the uniformly scattered points $\{0,\,1/5,\,2/5,\ldots,1\}$ and the corresponding maximin Wasserstein distance designs in Figure \ref{fig:n6}. Here the measures plotted with the red solid line, red dotted line, blue solid line, blue dotted line, green solid line, and green dotted line in Figure \ref{fig:n6} are denoted by $\mu_1,\ldots,\mu_6$. From Figure \ref{fig:lhn6} we can see that the optimal permutations in the two LH-type designs are $(5,3,6,1,4,2)$ and $(5,3,2,6,1,4)$, respectively.
\section{Gaussian process modeling}\label{sec:gp}
This section builds a Gaussian process model for computer experiments with both numeral and distribution inputs. We first define $h\circ\mu$ to be the probability measure of the random variable $h(X)$ for a function $h:\ \mathbb{R}\rightarrow\mathbb{R}$, where $X\sim\mu$. For $\mathbf{x}\in[0,1]^d$ and $\mu\in\mathcal{P}_{2,2}(\mathbb{R})$, we model the output of such a computer simulation as \begin{equation}f(\mathbf{x},\mu)=\mathbf{g}(\mathbf{x})'\boldsymbol{\beta}+\int\alpha(t)d(h\circ\mu (t))+Z(\mathbf{x},\mu),\label{kriging}\end{equation} where $\mathbf{g}(\cdot)=\left(g_1(\cdot),\ldots,g_s(\cdot)\right)'$ and $h(\cdot)$ are
pre-specified functions, $\boldsymbol{\beta}$ is a vector of unknown regression coefficients, $\alpha(\cdot)$ is an unknown smooth function, and $Z(\mathbf{x},\mu)$ is a stationary
Gaussian process defined on $[0,1]^d\times\mathcal{P}_{2,2}(\mathbb{R})$ with mean zero, variance $\sigma^2$, and covariance structure given below. The covariance between
$Z(\mathbf{x}_1,\mu_1)$ and $Z(\mathbf{x}_2,\mu_2)$ in \eqref{kriging} is represented by
\begin{equation*}\mathrm{Cov}[Z(\mathbf{x}_1,\mu_1),
Z(\mathbf{x}_2,\mu_2)]=\sigma^2R(\mathbf{x}_1,\mathbf{x}_2;\,\mu_1,\mu_2\,|\,\boldsymbol{\theta}), \end{equation*} where $R$ is the Gaussian correlation function
\begin{eqnarray}
R(\mathbf{x}_1,\mathbf{x}_2;\,\mu_1,\mu_2\,|\,\boldsymbol{\theta})=\exp\left\{-\sum_{i=1}^d\theta_i (x_{1i}-x_{2i})^2-\theta_{d+1}W_{2,2}(\mu_1,\mu_2)^2\right\},\label{GR}
\end{eqnarray} with positive correlation parameters $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_d,\theta_{d+1})'$. Note that $R_W(\mu_1,\mu_2)=\exp\left\{-\theta_{d+1}W_{2,2}(\mu_1,\mu_2)^2\right\}$ gives a valid correlation structure of Gaussian processes on $\mathcal{P}_{2,2}(\mathbb{R})$ (Bachoc, Gamboa, Loubes, et al. 2018; Bachoc, Suvorikova, Ginsbourger, et al. 2020). The correlation structure in \eqref{GR} is valid for Gaussian processes on the product space of $[0,1]^d$ and $\mathcal{P}_{2,2}(\mathbb{R})$.
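
As a sketch (with illustrative names; the matrix of pairwise $W_{2,2}$ distances between the distribution inputs is assumed to be available), the correlation matrix implied by \eqref{GR} can be assembled as follows.
\begin{verbatim}
# X: n x d matrix of numeral inputs; W: n x n matrix of pairwise W_{2,2} distances
# between the distribution inputs; theta: positive parameters of length d + 1.
import numpy as np

def correlation_matrix(X, W, theta):
    X, theta = np.asarray(X, float), np.asarray(theta, float)
    d = X.shape[1]
    diff2 = (X[:, None, :] - X[None, :, :]) ** 2              # pairwise squared differences
    expo = np.tensordot(diff2, theta[:d], axes=([2], [0])) + theta[d] * W ** 2
    return np.exp(-expo)
\end{verbatim}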
Since the infinite-dimensional $\alpha(\cdot)$ in \eqref{kriging} is hard to estimate, we use a linear expression to parameterize it. Let \begin{equation}\alpha(\cdot)=\boldsymbol{\gamma}'\mathbf{b}(\cdot)\label{ib}\end{equation} with $\boldsymbol{\gamma}=(\gamma_1,\ldots,\gamma_l)'\in{\mathbb{R}}^l$, where $\mathbf{b}(\cdot)=(b_1(\cdot),\ldots,b_l(\cdot))'$ are pre-specified basis functions. Then \eqref{kriging} can be approximated as
\begin{equation}f(\mathbf{x},\mu)=\mathbf{g}(\mathbf{x})'\boldsymbol{\beta}+\left\{\int\mathbf{b}(t)d(h\circ\mu (t))\right\}'\boldsymbol{\gamma}+Z(\mathbf{x},\mu).\label{kriginga}\end{equation}
The parameters in model \eqref{kriginga} can be estimated by the maximum likelihood method. Suppose the set of input values is $\{(\mathbf{x}_1,\mu_1),\ldots,(\mathbf{x}_n,\mu_n)\}\subset[0,1]^d\times\mathcal{P}_{2,2}(\mathbb{R})$. The corresponding response values are $\mathbf{y}=(f(\mathbf{x}_1,\mu_1),\ldots,f(\mathbf{x}_n,\mu_n))'$. The negative log-likelihood, up to an additive constant, is proportional to
\begin{equation}n\log(\sigma^2)+\log({\mathrm{det}}(\mathbf{R}))+(\mathbf{y}-\mathbf{G}\boldsymbol{\beta}-\mathbf{J}\boldsymbol{\gamma})'\mathbf{R}^{-1}(\mathbf{y}-\mathbf{G}\boldsymbol{\beta}-\mathbf{J}\boldsymbol{\gamma})/\sigma^2,\label{ll}\end{equation}
where $\mathbf{R}$ is the $n\times n$ correlation matrix whose $(i,j)$th entry is $R(\mathbf{x}_i,\mathbf{x}_j;\,\mu_i,\mu_j\,|\,\boldsymbol{\theta})$ defined in \eqref{GR}, ``${\mathrm{det}}$'' denotes the
matrix determinant, $\mathbf{G}=\left(\mathbf{g}(\mathbf{x}_1),\ldots,\mathbf{g}(\mathbf{x}_n)\right)'$, and $\mathbf{J}=\left(\int\mathbf{b}(t)d(h\circ\mu_1(t)),\ldots,\int\mathbf{b}(t)d(h\circ\mu_n(t))\right)'$.
Denote $\mathbf{U}=(\mathbf{G}\ \mathbf{J})$ and $\boldsymbol{\psi}=(\boldsymbol{\beta}'\ \boldsymbol{\gamma}')'$.
When $\boldsymbol{\theta}$ is known, the maximum likelihood estimators (MLEs) of $\boldsymbol{\psi}$ and $\sigma^2$ are
\begin{equation}\left\{\begin{array}{l}\hat{\boldsymbol{\psi}}=(\mathbf{U}'\mathbf{R}^{-1}\mathbf{U})^{-1}\mathbf{U}'\mathbf{R}^{-1}\mathbf{y},
\\\hat{\sigma}^2=(\mathbf{y}-\mathbf{U}\hat{\boldsymbol{\psi}})'\mathbf{R}^{-1}(\mathbf{y}-\mathbf{U}\hat{\boldsymbol{\psi}})/n.\end{array}\right. \label{bs}\end{equation}
For an untried point $(\mathbf{x}_0,\mu_0)\in[0,1]^d\times\mathcal{P}_{2,2}(\mathbb{R})$, the best linear unbiased predictor $\hat{f}$ of $f$ (Santner, Williams, and Notz 2018) is
\begin{eqnarray}\hat{f}(\mathbf{x}_0,\mu_0)=\mathbf{g}(\mathbf{x}_0)'\hat{\boldsymbol{\beta}}+\left\{\int\mathbf{b}(t)d(h\circ\mu_0(t))\right\}'\hat{\boldsymbol{\gamma}}+{\mathbf{r}_0}'{\mathbf{R}}^{-1}\big(\mathbf{y}-\mathbf{U}\hat{\boldsymbol{\psi}}\big),\label{blup}\end{eqnarray}
where ${\mathbf{r}_0}=\big(R(\mathbf{x}_0,\mathbf{x}_1;\,\mu_0,\mu_1\,|\,{\boldsymbol{\theta}}),\ldots,R(\mathbf{x}_0,\mathbf{x}_n;\,\mu_0,\mu_n\,|\,{\boldsymbol{\theta}})\big)'$. Clearly this predictor possesses the interpolation property.
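
For fixed $\boldsymbol{\theta}$, \eqref{bs} and \eqref{blup} amount to a generalized least squares fit followed by a plug-in prediction; the sketch below (illustrative names, standard linear algebra only, not the authors' code) makes this explicit.
\begin{verbatim}
# U = (G J): n x (s + l) regression matrix; R: n x n correlation matrix; y: responses.
import numpy as np

def fit_given_theta(U, R, y):
    Rinv_U, Rinv_y = np.linalg.solve(R, U), np.linalg.solve(R, y)
    psi_hat = np.linalg.solve(U.T @ Rinv_U, U.T @ Rinv_y)
    resid = y - U @ psi_hat
    sigma2_hat = resid @ np.linalg.solve(R, resid) / len(y)
    return psi_hat, sigma2_hat

def predict(u0, r0, U, R, y, psi_hat):
    # u0: regression vector (g(x_0)', integral of b d(h o mu_0))' at the new input;
    # r0: correlations between the new input and the n design inputs
    return float(u0 @ psi_hat + r0 @ np.linalg.solve(R, y - U @ psi_hat))
\end{verbatim}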
Given $\kappa\in(0,1)$, the $100(1-\kappa)\%$ prediction interval of $f(\mathbf{x}_0,\mu_0)$ is\begin{equation}\label{ci}P\left(f(\mathbf{x}_0,\mu_0)\in \hat{f}(\mathbf{x}_0,\mu_0)\pm \eta(\mathbf{x}_0,\mu_0)\,t_{n-s-l}(\kappa/2)\right)=1-\kappa,\end{equation} where $\eta(\mathbf{x}_0,\mu_0)\geqslant0$,
\begin{eqnarray*}&&\eta(\mathbf{x}_0,\mu_0)^2=\frac{Q^2}{n-s-l}\\&&\quad\cdot\left\{1-\left(\mathbf{g}(\mathbf{x}_0)',\ \int\mathbf{b}(t)'d(h\circ\mu_0 (t)),\ \mathbf{r}_0'\right)\left(\begin{array}{cc}\mathbf{0}&\mathbf{U}'\\\mathbf{U}&\mathbf{R}\end{array}\right)^{-1}
\left(\begin{array}{c}\mathbf{g}(\mathbf{x}_0)\\\int\mathbf{b}(t)d(h\circ\mu_0 (t))\\\mathbf{r}_0\end{array}\right)\right\},\\&&Q^2=\mathbf{y}'\big[\mathbf{R}^{-1}-\mathbf{R}^{-1}\mathbf{U}(\mathbf{U}'\mathbf{R}^{-1}\mathbf{U})^{-1}\mathbf{U}'\mathbf{R}^{-1}\big]\mathbf{y},\end{eqnarray*}
and $t_{n-s-l}(\kappa/2)$ is the upper $\kappa/2$ quantile of the Student's $t$-distribution with $n-s-l$ degrees of freedom.
When $\boldsymbol{\theta}$ is unknown, by plugging \eqref{bs} into \eqref{ll}, we obtain the MLE of $\boldsymbol{\theta}$,
\begin{equation*}\hat{\boldsymbol{\theta}}=\arg\min_{\boldsymbol{\theta}}\,n\log(\hat{\sigma}^2)+\log\left({\mathrm{det}}({\mathbf{R}})\right).
\end{equation*}The predictor $\hat{f}$ in \eqref{blup} and the prediction interval in \eqref{ci} can be modified by replacing $\boldsymbol{\theta}$ with $\hat{\boldsymbol{\theta}}$.
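
A compact sketch of this step (assuming the two helper functions from the previous sketches are available and passed in as arguments) minimizes the profile criterion $n\log(\hat{\sigma}^2)+\log({\mathrm{det}}(\mathbf{R}))$ over $\boldsymbol{\theta}$, for instance with a quasi-Newton optimizer from SciPy.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def neg_profile_loglik(log_theta, X, W, U, y, correlation_matrix, fit_given_theta):
    R = correlation_matrix(X, W, np.exp(log_theta))   # log-parameters keep theta positive
    R = R + 1e-8 * np.eye(len(y))                     # small nugget for numerical stability
    _, sigma2_hat = fit_given_theta(U, R, y)
    return len(y) * np.log(sigma2_hat) + np.linalg.slogdet(R)[1]

# Illustrative usage (X, W, U, y and d as defined above):
# theta_hat = np.exp(minimize(neg_profile_loglik, np.zeros(d + 1),
#                    args=(X, W, U, y, correlation_matrix, fit_given_theta),
#                    method="L-BFGS-B").x)
\end{verbatim}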
Similar to the Gaussian process model on the Euclidean space in the literature, we call the above methods without and with the linear regression terms $\mathbf{g}(\cdot)$ and $h(\cdot)$ in \eqref{kriging} simple Kriging and universal Kriging, respectively. Usually we set $h$ in \eqref{kriging} as the identity function.
There are some methods to parameterize $\alpha(\cdot)$ in \eqref{ib}, such as the spline approximation (De Boor 1978). Here we adopt the reconstruction parameterization approach (Xiong 2021) because of its good interpretation of the parameters. Specifically, in this approach the parameters are $\gamma_i=\alpha(a_i),\ i=1,\ldots,l$, where $\{a_1,\ldots,a_l\}$ is the set of knots, and $\mathbf{b}(\cdot)=(b_1(\cdot),\ldots,b_l(\cdot))'$ in \eqref{ib} are specific interpolation basis functions. When the distribution input in model \eqref{kriging} has support $[0,1]$, we select $\mathbf{b}$ as the polynomial interpolation basis functions (De Boor 1978), which have the Lagrange forms \begin{equation*}b_j(t)=\prod_{1\leqslant k\leqslant l,\ k\neq j}\frac{t-a_k}{a_j-a_k},\ \ j=1,\ldots,l.\end{equation*}We use the Chebyshev nodes \begin{equation*}\mathcal{A}=\left\{a_j=1/2-\cos[(2j-1)\pi/(2l)]/2:\,j=1,\ldots,l\right\}\end{equation*} to avoid Runge's phenomenon (De Boor 1978). The number of knots can be selected as $l=10$ according to the common $10d$ rule (Loeppky, Sacks, and Welch 2009).
When the distribution input $\mu$ in \eqref{kriging} lies in $\mathcal{P}_{q,p}([0,1],\tau)$, we use the design $\{\mu_1,\ldots,\mu_n\}\subset\widetilde{\mathcal{P}}_{q,p,m-1}([0,1],\tau)$ constructed in Section \ref{sec:sd}.
Let $h$ in \eqref{kriging} be the identity function. For $i=1,\ldots,n,\ j=1,\ldots,l$, by \eqref{tf} and \eqref{ap}, the entries of $\mathbf{J}$ in \eqref{ll} can be computed as
\begin{eqnarray*}&&\int b_j(t)\,d F_{\mu_i}(t)=\sum_{k=1}^{m-1}\int_{(k-1)/(m-1)}^{k/(m-1)}b_j(t)\,d F_{\mu_i}(t)\\&&=(m-1)\sum_{k=1}^{m-1}\left[F_{\mu_i}\left(\frac{k}{m-1}\right)-F_{\mu_i}\left(\frac{k-1}{m-1}\right)\right]\int_{(k-1)/(m-1)}^{k/(m-1)}b_j(t)\,d t.\end{eqnarray*}
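
The sketch below (illustrative; it assumes that the design measures are stored through their increment vectors as in Section \ref{sec:sd}) evaluates the Lagrange basis at the Chebyshev nodes and computes one row of $\mathbf{J}$ by the displayed formula, with the inner integrals approximated by a midpoint rule.
\begin{verbatim}
import numpy as np

def chebyshev_nodes(l):
    j = np.arange(1, l + 1)
    return 0.5 - np.cos((2 * j - 1) * np.pi / (2 * l)) / 2.0

def lagrange_basis(t, nodes):
    t, l = np.atleast_1d(t), len(nodes)
    B = np.ones((len(t), l))
    for j in range(l):
        for k in range(l):
            if k != j:
                B[:, j] *= (t - nodes[k]) / (nodes[j] - nodes[k])
    return B

def J_row(increments, nodes, n_sub=50):
    m1, row = len(increments), np.zeros(len(nodes))
    for k in range(m1):
        a, b = k / m1, (k + 1) / m1
        t = a + (b - a) * (np.arange(n_sub) + 0.5) / n_sub      # midpoint rule
        # density of the design measure on this subinterval is (m-1) * increments[k]
        row += increments[k] * m1 * (b - a) / n_sub * lagrange_basis(t, nodes).sum(axis=0)
    return row

# Example usage: nodes = chebyshev_nodes(10); row_i = J_row(increments_i, nodes)
\end{verbatim}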
\section{Numerical experiments with test functions}\label{sec:experiment}
In this section we conduct numerical experiments with the following test functions on $[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$,
\begin{eqnarray*}&&\mathrm{(I)}\ f(x,\mu)=c+x^{1+c}+\int t\,d\mu(t),
\\&&\mathrm{(II)}\ f(x,\mu)=\int \cos(3t+c_1)\,d\mu(t)+\exp(x)+c_2F_\mu(x),
\\&&\mathrm{(III)}\ f(x_1,x_2,\mu)=\left[x_1+\int t\,d\mu(t)+c_1\right]^2-c_2\log(1+x_2).\end{eqnarray*}The constants $c,\ c_1$, and $c_2$ in them are generated from the uniform distribution on $[0,1]$.
Four combinations of two design methods and two modeling methods are compared; see Figure \ref{fig:box}. D2 represents the maximin $W_{2,2}$-distance LH-type design based on $(\mathcal{D}_1,\mathcal{D}_2^{(2,2)})$ defined by \eqref{mmc}, where $\mathcal{D}_1$ is the maximin $\ell_2$-distance LH design and $\mathcal{D}_2^{(2,2)}$ is the maximin $W_{2,2}$-distance design. D1 represents the maximin $W_{1,1}$-distance LH-type design based on $(\mathcal{D}_1,\mathcal{D}_2^{(1,1)})$, where $\mathcal{D}_2^{(1,1)}$ is the maximin $W_{1,1}$-distance design. Sample sizes in D2 and D1 are set as 20 and 40. The two modeling methods are simple Kriging (SK) and universal Kriging (UK). We use $\mathbf{g}(\mathbf{x})=\mathbf{g}(x_1,\ldots,x_d)=(1,x_1,\ldots,x_d)'$ and $h(t)=t$ in the UK model \eqref{kriging}.
\begin{figure}
\caption{Box-plots of squared prediction errors.}
\label{fig:box}
\end{figure}
For each design and each test function, we generate training data, and then use SK and UK to build prediction models $\hat{f}$ in \eqref{blup}. The prediction accuracy is evaluated by the empirical squared prediction error, $\sum_{k=1}^{N}\big\{\hat{f}(\mathbf{x}_k^*,\mu^*_k)-f(\mathbf{x}_k^*,\mu^*_k)\big\}^2/N$ with $N=1000$, where the test data $(\mathbf{x}_1^*,\mu^*_1),\ldots,(\mathbf{x}_{N}^*,\mu^*_N)\in[0,1]^d\times\mathcal{P}_{q,p}([0,1],\tau)$ are generated randomly. Box-plots of the prediction errors over 100 repetitions are shown in Figure \ref{fig:box}. We can see that, with a relatively large $n$, UK is often better than SK, which is consistent with our empirical experience on Kriging in the Euclidean space. In practice, we can choose between SK and UK through leave-one-out cross validation. It seems that, with SK, D1 is usually better than D2. Generally, neither D1 nor D2 shows clear superiority over the other. Note that the computation for constructing D1 is more difficult than that for D2. We therefore prefer D2 in practice.
\section{Applications to a metro simulation}\label{sec:real}
Urban metro systems are important components of urban transportation systems. The implementation of a metro system simulation provides a powerful instrument for system performance monitoring,
which enables operators to characterize the level of service and make decisions accordingly (Mo, Ma, Koutsopoulos, et al. 2021). Such a simulation is a typical stochastic simulation since a metro system contains a great deal of uncertainty arising from passengers' behavior. Here we consider a passenger flow simulation for a single metro route and apply the proposed design and analysis methods to it.
On this route a passenger taps in at the origin station and taps out at the destination station, taking only one train. The time between the arrival and departure of a passenger can be divided into access time, wait time, time on board, and egress time. Access time is the time it takes the passenger to walk from the tap-in fare gate to the platform; wait time is the time for which the passenger waits on the platform until boarding a train; and egress time is the time it takes to walk to the tap-out fare gate after alighting from the train.
\begin{figure}
\caption{Flow chart of a passenger on a typical metro route}
\label{fig:not}
\end{figure}
During peak hours, passengers may miss one or more trains due to crowded platforms and carriages, and there may be more than one possible itinerary between their tap-in and tap-out times. Figure \ref{fig:not} illustrates all possible itineraries for passenger $i$ who enters and exits the metro system; a similar figure can be found in Zhu et al. (2017). We can use the passenger flow simulation to study the influence of the number of passengers, train schedule, and other factors on crowdedness, passenger-to-train assignment, and other responses we are interested in.
\begin{figure}
\caption{Leave-one-out prediction in Section \ref{sec:real}.}
\label{fig:cv}
\end{figure}
We use the passenger flow simulator to conduct a one-day simulation. The inputs are as follows.
\begin{description}
\item (i) \emph{number of passengers in the day};
\item (ii) \emph{each passenger's origin and destination stations and tap-in time};
\item (iii) \emph{boarding probability of a passenger at each station (except the last station)};
\item (iv) \emph{probability distribution of a passenger's access time at each station (except the last station)};
\item (v) \emph{probability distribution of a passenger's egress time at each station (except the first station)};
\item (vi) \emph{train capacity};
\item (vii) \emph{train schedule including arrival and departure times at each station}.
\end{description}
The simulation can yield movement of each passenger in the metro system, including which train he/she takes, his/her tap-out time, and others.
Our simulation mimics a real metro route with six stations. The inputs (i), (ii), (v), (vi), and (vii) can be obtained or estimated from the automatic fare collection system and automatic vehicle location system (Xiong et al. 2022; Li et al. 2022). The simulator specifies the boarding probability for passenger $i$ in (iii) as \begin{equation}\label{bc}\Pr(\rho;x)=\left\{\begin{array}{lll}0,&\quad\rho<1-x/2;\\(\rho-1+x/2)/x,&\quad\rho\in[1-x/2,1+x/2];\\1,&\quad\rho>1+x/2,\end{array}\right.\end{equation}where $\rho=(L-N_0)/N$, $L$ denotes the train capacity, $N_0$ denotes the current number of passengers on this train, $N$ denotes the number of passengers who arrived at the same platform earlier than passenger $i$, and $x\in[0,1]$ is a tuning parameter. We focus on the response surface $y=f(x,\mu)$, where the response $y$ is the mean travel time of passengers who tap in at the first station, $x\in[0,1]$ is the tuning parameter in \eqref{bc} at the first station, and $\mu\in\mathcal{P}_{2,2}([1,2],3)$ represents the distribution of access time at the first station. Other inputs of the simulator are fixed. The simulation contains $72,000$ passengers in the day.
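
For reference, the boarding rule \eqref{bc} is a simple piecewise-linear function of $\rho$; an illustrative Python version (not the simulator's own code) is given below.
\begin{verbatim}
def boarding_probability(rho, x):
    # rho = (L - N_0) / N; x in (0, 1] is the tuning parameter
    if rho < 1.0 - x / 2.0:
        return 0.0
    if rho > 1.0 + x / 2.0:
        return 1.0
    return (rho - 1.0 + x / 2.0) / x

print(boarding_probability(0.9, 0.4))   # 0.25
\end{verbatim}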
\begin{figure}
\caption{Profile curves of $f(x,\mu)$ with several fixed $\mu$ in Section \ref{sec:real}.}
\label{fig:fixmu}
\end{figure}
We first use a 40-run maximin $W_{2,2}$-distance LH-type design, constructed in the same way as D2 in Section \ref{sec:experiment}, to compute the corresponding values of $y$. Due to the randomness of the simulation, each value is obtained with 10,000 replicates. We then use SK and UK to build prediction models for the response. The true values and leave-one-out prediction values of $y$ are presented in Figure \ref{fig:cv}. It can be seen that UK is better than SK in most runs. Numerically, SK and UK yield leave-one-out mean squared prediction errors $4.8\times10^{-3}$ and $8.8\times10^{-4}$, respectively.
Consequently, we adopt the response surface constructed by UK based on all the data. Four profile curves of $f(x,\mu)$ with fixed $\mu$ are shown in Figure \ref{fig:fixmu}. We can see that, as the tuning parameter $x$ in the boarding probability increases, the travel times of the passengers from the first station show an increasing trend. In the future we can develop sensitivity analysis methods to quantify the influence of the two inputs $x$ and $\mu$ on the response. Another important issue is to calibrate the unobserved parameter $x$ with real data from the automatic fare collection system.
\section{Discussion}\label{sec:dis}
In this paper we have proposed design and modeling methods for computer experiments with both numeral and distribution inputs. We use the Wasserstein distance to unify the mixed inputs, and then the proposed methods can be viewed as straightforward extensions of the conventional methods for the Euclidean space. This makes our methods easy to understand and to implement.
There are several further topics we can pursue in the future. It may be interesting to extend discrepancy-based (Fang, Lin, Winker et al. 2000) and lattice-based (He 2021) methods to construct space-filling designs in the probability measure space. Designs defined by other distribution distances or model-based designs can also be considered. Methods for sensitivity analysis, parameter calibration, and response optimization with both numeral and distribution inputs can be developed. In addition, possible directions include the study of distribution-output and/or multidimensional distribution-input computer simulations based on the Wasserstein distance, which calls for strategies to overcome the computational difficulties involved.
\section*{Appendix: Proofs}
\emph{Proof of Lemma \ref{lemma:ei}}: It follows immediately from the fact that any coupling measure in this case is supported on the single point $(\mathbf{x},\mathbf{y})$. In fact, this lemma has appeared in the literature; see, e.g.,
line 2 on page 99 of Villani (2009). \qed
\noindent
\emph{Proof of Theorem \ref{th:wf}}: By the definition, \begin{eqnarray*}\mathcal{T}_{q,p}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)=\inf_{\pi\in\Pi(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)}\int(\|\mathbf{u}-\mathbf{v}\|_p^p+|s-t|^p)^{\frac{q}{p}}d\pi(\mathbf{u},s,\mathbf{v},t).
\end{eqnarray*}
Since the support of any $\pi \in\Pi(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)$ lies in the fiber $\{(\mathbf{x}, \cdot, \mathbf{y}, \cdot)\}$, we have
\begin{eqnarray*}
\mathcal{T}_{q,p}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)=\inf_{\pi^{\prime}\in\Pi (\mu,\nu)}\int(\|\mathbf{x}-\mathbf{y}\|_p^p+|s-t|^p)^{\frac{q}{p}}d\pi^{\prime}(s,t).
\end{eqnarray*}
Here, $\pi^{\prime}$ is the projection of $\pi$ along the first and third coordinates. Under this projection, the two sets $\Pi(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)$ and $\Pi(\mu, \nu)$ are in one-to-one correspondence.
We regard this problem as an optimal transportation problem from $\mathbb{R}$ to $\mathbb{R}$ with the cost function $c(t-s)$. Recall that $c(z) = (\|\mathbf{x}-\mathbf{y}\|_p^p+|z|^p)^{\frac{q}{p}}$, and a simple calculation shows that it is convex. More precisely, $c$ is a convex nonnegative symmetric function. By Remark 2.19 (ii) in Villani (2015), we have
\begin{eqnarray*}
\mathcal{T}_{q,p}(\delta_{\mathbf{x}}\times\mu,\delta_{\mathbf{y}}\times\nu)=\int_0^1 c(F_\mu^{-1}(t)-F_\nu^{-1}(t)) dt,
\end{eqnarray*}which completes the proof.
\qed
\begin{description}
\footnotesize
\item
Arjovsky, M., Chintala, S., and Bottou, L. (2017), Wasserstein generative adversarial networks, \textit{Proceedings of the 34th International Conference on Machine Learning}, 70, 214--223.
\item
Bachoc, F., Gamboa, F., Loubes, J.-M., and Venet, N. (2018), A Gaussian process regression model for distribution inputs, \textit{IEEE Transactions on Information Theory}, 64, 6620--6637.
\item
Bachoc, F., Suvorikova, A., Ginsbourger, D., Loubes, J.-M., and Spokoiny, V. (2020), Gaussian processes with multidimensional distribution inputs via optimal transport and Hilbertian
embedding, \textit{Electronic Journal of Statistics}, 14, 2742--2772.
\item
Betancourt, J., Bachoc, F., Klein, T., Idier, D., Pedreros, R., and Rohmer, J. (2020), Gaussian process metamodeling of functional-input code for coastal flood
hazard assessment, \textit{Reliability Engineering and System Safety}, 106870.
\item
Chen, J., Mak, S., Joseph, V. R., and Zhang, C. (2021), Function-on-function Kriging, with applications to three-dimensional printing of aortic tissues,
\textit{Technometrics}, 63, 384--395.
\item{}
De Boor, C. (1978), \textit{A Practical Guide to Splines}, Springer.
\item{}
Elefteriadou, L. (2014), \textit{An Introduction to Traffic Flow Theory}, Springer.
\item
Fang, K. T., Li, R. Z., and Sudjianto, A. (2005), \textit{Design and Modeling for Computer Experiments}, Chapman Hall/CRC Press.
\item
Fang, K. T., Lin, D. K. J., Winker, P., and Zhang, Y. (2000), Uniform design: Theory and application, \textit{Technometrics}, 42, 237--248.
\item
He, X. (2017), Rotated sphere packing designs, \textit{Journal of the American Statistical Association}, 112, 1612--1622.
\item
He, X. (2021), Lattice-based designs possessing quasi-optimal separation distance on all projections, \textit{Biometrika}, 108, 443--454.
\item
Johnson, M., Moore, L., and Ylvisaker, D. (1990), Minimax and maximin distance designs, \textit{Journal of Statistical Planning and Inference}, 26, 131--148.
\item
Joseph, V. R. (2016), Space-filling designs for computer experiments: A review, \textit{Quality Engineering}, 28, 28--35.
\item
Joseph, V. R., Gul, E., and Ba, S. (2015), Maximum projection designs for computer experiments, \textit{Biometrika}, 102, 371--380.
\item{}
Loeppky, J. L., Sacks, J., and Welch, W. J. (2009), Choosing the sample size of a computer experiment: A practical guide, \textit{Technometrics}, 51, 366--376.
\item{}
Li, C., Xiong, S., Sun, X., Qin, Y. (2022), Bayesian analysis for metro passenger flows using automated data, \textit{Mathematical Problems in Engineering}, Article ID: 9925939.
\item{}
McKay, M. D., Beckman, R. J., and Conover, W. J. (1979), A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, \textit{Technometrics}, 21, 239--245.
\item{}
Morris, M.D. and Mitchell, T.J. (1995), Exploratory designs for computational experiments, \textit{Journal of Statistical Planning and Inference}, 43, 381--402.
\item
Mu, W. and Xiong, S. (2017), On algorithmic construction of maximin distance designs, \textit{Communications in Statistics: Simulation and Computation}, 46, 7972--7985.
\item
Mu, W. and Xiong, S. (2018), A class of space-filling designs and their projection properties, \textit{Statistics \& Probability Letters}, 141, 129--134.
\item
Mo, B., Ma, Z., Koutsopoulos, H. N., and Zhao, J. (2021), Calibrating path choices and train capacities for urban rail transit simulation models using smart card and train
movement data, \textit{Journal of Advanced Transportation}, Article ID: 5597130.
\item
Muehlenstaedt, T., Fruth, J., and Roustant, O. (2017), Computer experiments with functional inputs and scalar outputs
by a norm-based approach, \textit{Statistics and Computing}, 27, 1083--1097.
\item
Nanty, S., Helbert, C., Marrel, A., P\'{e}rot, N., and Prieur, C. (2016), Sampling, metamodeling, and sensitivity analysis of numerical simulators with functional stochastic inputs, \textit{SIAM/ASA Journal on Uncertainty Quantification}, 4, 636--659.
\item
Pamaretps, V. M. and Zemel, Y. (2020), \textit{An Invitation to Statistics in Wasserstein Space}. Springer.
\item{}
Park, J. S. (1994), Optimal Latin-hypercube designs for computer experiments, \textit{Journal of Statistical Planning and Inference}, 39, 95--111.
\item
Peyr\'{e}, G. and Cuturi, M. (2019), Computational optimal transport, \textit{Foundations and Trends in Machine Learning}, 11, 355--607.
\item{}
Santner, T. J., Williams, B. J., and Notz, W. I. (2018). \textit{The Design and Analysis of Computer Experiments}, 2nd Edition, Springer.
\item{}
Squazzoni, F., Jager, W., and Edmonds, B. (2014), Social simulation in the social sciences: A brief overview, \textit{Social Science Computer Review}, 32, 279--294.
\item{}
Tan, M. H. Y. (2019), Gaussian process modeling of finite element models with functional inputs, \textit{SIAM/ASA Journal on Uncertainty Quantification}, 7, 1133--1161.
\item
Tseng, P. (2001), Convergence of a block coordinate descent method for nondifferentiable minimization, \textit{Journal of optimization theory and applications}, 109, 475--494.
\item
Villani, C. (2009), \textit{Optimal Transport, Old and new}, Springer.
\item
Villani, C. (2015), \textit{Topics in Optimal Transportation}, AMS.
\item
Xiong, S. (2021), The reconstruction approach: From interpolation to regression, \textit{Technometrics}, 63, 225--235,
\item
Xiong, S. Li, C. Sun, X. Qin, Y., and Wu, C. F. J. (2022), Statistical estimation in passenger-to-train assignment models based on automated data, \textit{Applied Stochastic Models in Business and Industry}, 38, 287--307.
\item
Xiu, D. (2010), \textit{Numerical Methods for Stochastic Computations: A Spectral Method Approach}. Princeton University Press.
\item
Zhu, Y., Koutsopoulos, H. N., and Wilson, N. H. M. (2017), A probabilistic passenger-to-train assignment model based on automated data, \textit{Transportation Research Part B: Methodological}, 104, 522--542.
\end{description}}
\end{document} |
\begin{document}
\begin{abstract}
We consider semilinear Schr\"odinger equations with nonlinearity that is a polynomial in the unknown function and its complex conjugate, on $\mathbb{R}^d$ or on the torus.
Norm inflation (ill-posedness) of the associated initial value problem is proved in Sobolev spaces of negative indices.
To this end, we apply the argument of Iwabuchi and Ogawa (2012), who treated quadratic nonlinearities.
This method can be applied whether the spatial domain is non-periodic or periodic and whether the nonlinearity is gauge/scale-invariant or not.
\end{abstract}
\maketitle
\section{Introduction}
We consider the initial value problem for semilinear Schr\"odinger equations:
\begin{equation}\label{NLS'}
\left\{
\begin{array}{@{\,}r@{\;}l}
i\partial _tu+\Delta u&=F (u,\bar{u}),\qquad (t,x)\in [0,T] \times Z,\\
u(0,x)&=\phi (x),
\end{array}
\right.
\end{equation}
where the spatial domain $Z$ is of the form $Z=\Bo{R}^{d_1}\times \Bo{T} ^{d_2}$, $d_1+d_2=d$, and $F (u,\bar{u})$ is a polynomial in $u,\bar{u}$ without constant and linear terms, explicitly given by
\eqq{F (u,\bar{u})=\sum _{j=1}^n\nu _ju^{q_j}\bar{u}^{p_j-q_j}}
with mutually different indices $(p_1,q_1),\dots ,(p_n,q_n)$ satisfying $p_j\ge 2$, $0\le q_j\le p_j$ and non-zero complex constants $\nu _1,\dots ,\nu _n$.
The aim of this article is to prove \emph{norm inflation} for the initial value problem \eqref{NLS'} in certain negative Sobolev spaces.
We say that norm inflation in $H^s(Z)$ (``\emph{NI$_s$}'' for short) occurs if for any $\delta >0$ there exist $\phi \in H^\infty$ and $T>0$ satisfying
\eqq{\tnorm{\phi}{H^s}<\delta ,\qquad 0<T<\delta}
such that the corresponding smooth solution $u$ to \eqref{NLS'} exists on $[0,T]$ and
\eqq{\tnorm{u(T)}{H^s}>\delta ^{-1}.}
Clearly, NI$_s$ implies the discontinuity of the solution map $\phi \mapsto u$ (which is uniquely defined for smooth $\phi$ locally in time) at the origin in the $H^s$ topology, and hence the ill-posedness of \eqref{NLS'} in $H^s$.
However, NI$_s$ is a stronger instability property of the flow than mere discontinuity, which would only require $0<T\lesssim 1$ and $\tnorm{u(T)}{H^s}\gtrsim 1$.
Let us begin with the case of single-term nonlinearity:
\begin{equation}\label{NLS}
\left\{
\begin{array}{@{\,}r@{\;}l}
i\partial _tu+\Delta u&=\nu u^q\bar{u}^{p-q},\qquad (t,x)\in [0,T] \times Z,\\
u(0,x)&=\phi (x),
\end{array}
\right.
\end{equation}
where $p\ge 2$ and $0\le q\le p$ are integers, and $\nu \in \Bo{C}\setminus \{0\}$ is a constant.
The equation is invariant under the scaling transformation $u(t,x)\mapsto \lambda ^{\frac{2}{p-1}}u(\lambda ^2t,\lambda x)$ ($\lambda >0$), and the critical Sobolev index $s$ for which $\tnorm{\lambda ^{\frac{2}{p-1}}\phi (\lambda \cdot )}{\dot{H}^{s}}=\tnorm{\phi}{\dot{H}^{s}}$ is given by
\eqq{s=s_c(d,p):=\tfrac{d}{2}-\tfrac{2}{p-1}.}
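For the reader's convenience, we record the standard computation behind this value: since $\tnorm{\phi (\lambda \cdot )}{\dot{H}^{s}}=\lambda ^{s-\frac{d}{2}}\tnorm{\phi}{\dot{H}^{s}}$, we have
\[ \tnorm{\lambda ^{\frac{2}{p-1}}\phi (\lambda \cdot )}{\dot{H}^{s}}=\lambda ^{\frac{2}{p-1}+s-\frac{d}{2}}\tnorm{\phi}{\dot{H}^{s}},\]
and the exponent vanishes exactly when $s=\frac{d}{2}-\frac{2}{p-1}=s_c(d,p)$.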
The scaling heuristic suggests that the flow becomes unstable in $H^s$ for $s<s_c(d,p)$.
In addition, we will demonstrate norm inflation phenomena by tracking the transfer of energy from high to low frequencies (the so-called ``high-to-low frequency cascade''), which naturally restricts us to negative Sobolev spaces.
In fact, we will show NI$_s$ for any $s<\min \shugo{s_c(d,p),0}$, for any $Z$ and $(p,q)$, as well as at some negative but scale-subcritical regularities for specific nonlinearities.
Precisely, our result reads as follows:
\begin{thm}\label{thm:main0}
Let $Z$ be a spatial domain of the form $\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2=d\ge 1$, and let $p\ge 2$, $0\le q\le p$ be integers.
Then, the initial value problem \eqref{NLS} exhibits NI$_s$ in the following cases:
\begin{enumerate}
\item $Z$ and $(p,q)$ are arbitrary, $s<\min \shugo{s_c(d,p),0}$.
\item $d,p,s$ satisfy $s=s_c(d,p)=-\frac{d}{2}$; that is, $(d,p,s)=(1,3,-\frac{1}{2})$ and $(2,2,-1)$.
\item $d=1$, $(p,q)=(2,0),(2,2)$ and $s<-1$.
\item $Z=\Bo{R}^d$ with $1\le d\le 3$, $(p,q)=(2,1)$ and $s<-\frac{1}{4}$.
\item $Z=\Bo{R}^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$ and $s<0$.
\item $Z=\Bo{T}$, $(p,q)=(4,1),(4,2),(4,3)$ and $s<0$.
\end{enumerate}
\end{thm}
There is an extensive literature on the ill-posedness of nonlinear Schr\"odinger equations, and parts of the above theorem have been proved in previous works.
Concerning ill-posedness in the sense of norm inflation, Christ, Colliander, and Tao \cite{CCT03p-1} treated the case of gauge-invariant power-type nonlinearities $\pm |u|^{p-1}u$ on $\Bo{R}^d$ and proved NI$_s$ when $0<s<s_c(d,p)$ or $s\le -\frac{d}{2}$ (with some additional restriction on $s$ if $p$ is not an odd integer).
For the remaining range of regularities $-\frac{d}{2}<s<0$ (when $s_c\ge 0$), they proved the failure of uniform continuity of the solution map.
Note that this milder form of ill-posedness is not necessarily incompatible with well-posedness in the sense of Hadamard, for which continuity of the solution map is required.
Moreover, since their argument is based on scaling considerations and some ODE analysis, it does not apply in any obvious way to the cases of periodic domains,
\footnote{One can still adapt their idea to the periodic setting with additional care.
Moreover, although their original argument did not apply to the 1d cubic case with the scaling-critical regularity $s=-\frac{1}{2}$, one can modify the argument to cover that case.
See \cite{OW15p} for details.}
non gauge-invariant nonlinearities, and complex coefficients.
Later, Carles, Dumas, and Sparber~\cite{CDS12} and Carles and Kappeler \cite{CK17} studied norm inflation in Sobolev spaces of negative indices for the problem with smooth nonlinearities (i.e., $\pm |u|^{p-1}u$ with an odd integer $p\ge 3$) on $\Bo{R}^d$ and on $\Bo{T}^d$, respectively.
They used a geometric optics approach to obtain NI$_s$ for $d\ge 2$ and $s<-\frac{1}{p}$ in the $\Bo{R}^d$ case
\footnote{
In \cite{CDS12} they also proved norm inflation for generalized nonlinear Schr\"odinger equations and the Davey--Stewartson system, including the case of a non-elliptic Laplacian.}
and for $s<0$ in the $\Bo{T}^d$ case, with the exception of $(d,p)=(1,3)$ for which $s<-\frac{2}{3}$ was assumed.
(See \cite{C07,AC09} for related ill-posedness results.)
In fact, they showed a stronger instability property than NI$_s$ in these cases, namely norm inflation \emph{with infinite loss of regularity} (see Proposition~\ref{prop:niilr} below for the definition).
Our argument, which evaluates each term in the power series expansion of the solution directly, is different from those of the aforementioned works.
Note that, for smooth nonlinearities, Theorem~\ref{thm:main0} covers all the remaining cases in the range $s<\min \shugo{s_c(d,p),0}$ and extends the result to the (partially) periodic setting as well as to the case of general nonlinearities with complex coefficients.
Moreover,
our argument also gives another proof of the results in \cite{CDS12,CK17} on NI$_s$ with infinite loss of regularity; see Proposition~\ref{prop:niilr} for the precise statement.
The one-dimensional cubic equation with nonlinearity $\pm |u|^2u$ has attracted particular attention due to its various physical backgrounds and its complete integrability.
Note also that this is the only $L^2$-subcritical case among smooth and gauge-invariant nonlinearities.
In spite of the $L^2$ subcriticality, the equation becomes unstable below $L^2$ due to the Galilean invariance, both on $\Bo{R}$ and on $\Bo{T}$.
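Let us briefly sketch the standard mechanism behind this instability (for the non-periodic case; on $\Bo{T}$ the boost parameter is restricted to a discrete set compatible with the period): if $u$ solves the equation, then so does the Galilean boost
\[ u_v(t,x):=e^{i(\frac{v}{2}x-\frac{v^2}{4}t)}u(t,x-vt),\qquad v\in \Bo{R} ,\]
which has the same $L^2$ norm as $u$ but whose frequency support is shifted by $v/2$; hence, for $s<0$, data of fixed $L^2$ size can have arbitrarily small $H^s$ norm, which is the usual heuristic for the failure of uniform well-posedness below $L^2$.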
In fact, the initial value problem was shown to be globally well-posed in $L^2$ \cite{T87,B93-1}, whereas it was shown in \cite{KPV01,CCT03} for $\Bo{R}$ and in \cite{BGT02,CCT03} for $\Bo{T}$ that the solution map fails to be uniformly continuous below $L^2$.
Ill-posedness below $L^2(\Bo{T} )$ was established in the periodic case through the lack of continuity of the solution map \cite{CCT03p-2,M09} and the non-existence of solutions \cite{GO18}.
Nevertheless, one can show an a priori bound in some Sobolev spaces below $L^2$ \cite{KT07,CCT08,KT12,GO18}, which prevents norm inflation.
Recent results in \cite{KT16p,KVZ17p} finally gave an a priori bound in $H^s$ for $s>-\frac{1}{2}$, both on $\Bo{R}$ and on $\Bo{T}$.
We remark that NI$_s$ at $s=-\frac{1}{2}$, shown in Theorem~\ref{thm:main0}, ensures the optimality of these results.
\footnote{The one-dimensional cubic problem was not treated in the first version of this article.
We would like to thank T.~Oh for drawing our attention to this case.
}
In \cite[Theorem~4.7]{KVZ17p}, Killip, Vi\c{s}an and Zhang also derived an a priori bound for the solutions in a norm which is logarithmically stronger than the critical $H^{-\frac{1}{2}}$ norm.
Motivated by this result, in addition to Theorem~\ref{thm:main0} (ii) we also show norm inflation for the one-dimensional cubic equation in some ``logarithmically subcritical'' spaces; see Proposition~\ref{prop:A} below.
Since the work of Kenig, Ponce, and Vega \cite{KPV96-NLS}, non gauge-invariant nonlinearities have also been studied intensively.
In \cite{BT06}, Bejenaru and Tao proposed an abstract framework for proving ill-posedness in the sense of discontinuity of the solution map.
They considered the quadratic NLS \eqref{NLS} on $\Bo{R}$ with nonlinearity $u^2$ and obtained a complete dichotomy of Sobolev indices $s$ into locally well-posed ($s\ge -1$) and ill-posed ($s<-1$) in the sense mentioned above.
Their argument is based on the power series expansion of the solution, and they proved ill-posedness by observing that high-to-low frequency cascades break the continuity of the first nonlinear term in the series.
A similar dichotomy was shown for the other quadratic nonlinearities $\bar{u}^2$, $u\bar{u}$ in \cite{K09,KT10} by employing the idea of \cite{BT06}.
Later, Iwabuchi and Ogawa \cite{IO15} considered the nonlinearities $u^2$, $\bar{u}^2$ on $\Bo{R}$ and $\Bo{R}^2$ and refined the idea of \cite{BT06} to prove ill-posedness in the sense of NI$_s$ for $s<-1$ on $\Bo{R}$ and $s\le -1$ on $\Bo{R}^2$.
In particular, in the two-dimensional case they could complement the local well-posedness result in $H^s(\Bo{R} ^2)$, $s>-1$, which had been obtained in \cite{K09}.
Note that the original argument of \cite{BT06} is not likely to yield norm inflation phenomena, nor discontinuity of the solution map at a threshold regularity such as $s=-1$ in the above $\Bo{R}^2$ case.
We discuss this issue further in the next section.
Another quadratic nonlinearity, $u\bar{u}$, was investigated by the same method in \cite{IU15}, where for $\Bo{R}^d$ with $d=1,2,3$ norm inflation was proved in the Besov spaces $B^{-1/4}_{2,\sigma}$ of regularity $-\frac{1}{4}$ with $4<\sigma \le \infty$.
\footnote{Essentially, they also proved NI$_s$ for $s<-\frac{1}{4}$, i.e., case (iv) of our Theorem~\ref{thm:main0}.
}
It turns out that the method of Iwabuchi and Ogawa \cite{IO15} for proving norm inflation has wide applicability.
The purpose of the present article is to apply this method to NLS with general nonlinearities.
In the last few years the method has been applied to a wide range of equations; see for instance \cite{MO15,MO16,HMO16,CP16,Ok17}.
\footnote{
In the first version of this article, we only considered gauge-invariant smooth nonlinearities $\nu |u|^{2k}u$, $k\in \Bo{Z}_{>0}$, and linear combinations of them.
Note, however, that the method of Iwabuchi and Ogawa \cite{IO15} had previously been applied only to quadratic nonlinearities, so that version was the first result dealing with nonlinearities of general degree in a unified manner.
The authors of \cite{CP16,O17} informed us that their proofs of norm inflation results followed the argument in the first version of this article.
We also remark that an estimate proved in the first version (Lemma~\ref{lem:a_k} below) was later employed in \cite{MO16,HMO16,Ok17}.
}
In \cite{O17,Ok17}, norm inflation at general initial data was proved for NLS and some other equations.
\footnote{
In \cite{Ok17}, non gauge-invariant nonlinearities were first treated in a general setting.
In fact, Theorem~\ref{thm:main0} follows as a corollary of \cite[Proposition~2.5 and Corollary~2.10]{Ok17}.
However, we have decided to include the non gauge-invariant cases in the present version in order to state Theorem~\ref{thm:main} (for multi-term nonlinearities) in greater generality.
}
We make some additional remarks on Theorem~\ref{thm:main0}.
\begin{rem}
(i) Concerning the one-dimensional periodic cubic NLS below $L^2$, the renormalized (or Wick ordered) equation
\eqq{i\partial _tu+\partial _x^2u=\pm \big( |u|^2-2-\hspace{-13pt}\int _{\Bo{T}}|u|^2\big) u}
is known to behave better than the original one \eqref{NLS} with nonlinearity $\pm |u|^2u$; see \cite{OS12} for a detailed discussion.
We note that our proof also applies to the renormalized cubic NLS.
In fact, the solutions constructed in Theorem~\ref{thm:main0} are smooth and their $L^2$ norms are conserved.
Then, a suitable gauge transformation, which does not change the $H^s$ norm at any time, gives smooth solutions to the renormalized equation that exhibit norm inflation.
(ii) In the periodic setting, our proof does not rely on any number-theoretic consideration.
Hence, it can easily be adapted to the problem on general anisotropic tori, whether rational or irrational; that is, $Z=\Bo{R} ^{d_1}\times [\Bo{R}^{d_2}/(\gamma _1\Bo{Z})\times \cdots \times (\gamma _{d_2}\Bo{Z})]$ for any $\gamma _1,\dots ,\gamma _{d_2}>0$.
(iii) When $Z=\Bo{R}$ and $(p,q)=(4,2)$, the example in \cite[Example~5.3]{G00p} suggests that a high-to-low frequency cascade leads to instability of the solution map when $s<-\frac{1}{8}$.
However, our argument does not imply NI$_s$ for $-\frac{1}{6}\le s<-\frac{1}{8}$ so far.
\end{rem}
There are far fewer results on ill-posedness for multi-term nonlinearities than for \eqref{NLS}.
However, such nonlinear terms naturally appear in applications.
For instance, the nonlinearity $6u^5-4u^3$ appears in a model related to shape-memory alloys \cite{FLS87}, and $(u+2\bar{u}+u\bar{u})u$ is relevant in the study of the asymptotic behavior of the Gross--Pitaevskii equation (see {e.g.}~\cite{GNT09}).
Note that norm inflation for a multi-term nonlinearity does not immediately follow from norm inflation for each individual term.
Our next result concerns the equation \eqref{NLS'} in full generality:
\begin{thm}\label{thm:main}
The initial value problem \eqref{NLS'} exhibits NI$_s$ whenever $s$ satisfies the condition in Theorem~\ref{thm:main0} for at least one term $u^{q_j}\bar{u}^{p_j-q_j}$ in $F (u,\bar{u})$, except for the case where $Z=\Bo{T}$ and $F (u,\bar{u})$ contains $u\bar{u}$.
When $Z=\Bo{T}$ and $F (u,\bar{u})$ contains $u\bar{u}$, NI$_s$ occurs in the following cases:
\begin{enumerate}
\item $s<0$ if $F (u,\bar{u})$ has a quintic or higher term, or one of $u^3\bar{u}$, $u^2\bar{u}^2$, $u\bar{u}^3$.
\item $s<-\frac{1}{6}$ if $F (u,\bar{u})$ has $u^4$ or $\bar{u}^4$ but no other quartic or higher terms.
\item $s\le -\frac{1}{2}$ if $F (u,\bar{u})$ has a cubic term but no quartic or higher terms.
\item $s<0$ if $F (u,\bar{u})$ has no cubic or higher terms.
\end{enumerate}
\end{thm}
In the above theorem, the range of regularities is restricted when $Z=\Bo{T}$ and $F (u,\bar{u})$ contains $u\bar{u}$; note that the nonlinear term $u\bar{u}$ by itself leads to NI$_s$ for all $s<0$, as shown in Theorem~\ref{thm:main0}.
This restriction seems unnatural and is likely an artifact of our argument.
The rest of this article is organized as follows.
In the next section, we recall the ideas of \cite{BT06} and \cite{IO15} and discuss some common features of, and differences between, the two.
Section~\ref{sec:proof0} is devoted to the proof of Theorem~\ref{thm:main0} for single-term nonlinearities.
Then, in Section~\ref{sec:proof} we show how to treat multi-term nonlinearities, proving Theorem~\ref{thm:main}.
In the appendices, we consider norm inflation with infinite loss of regularity (Section~\ref{sec:niilr}) and inflation of various norms at the critical regularity for the one-dimensional cubic problem (Section~\ref{sec:ap}).
\section{Strategy for proof}\label{BT-IO}
We will use the power series expansion of the solutions to prove norm inflation.
To see the idea, let us consider the simplest case of quadratic nonlinearity $u^2$ in \eqref{NLS}.
This amounts to considering the integral equation
\eq{eq:ie}{u(t)&=e^{it\Delta}\phi -i\int _0^te^{i(t-\tau )\Delta}\big( u(\tau )\cdot u(\tau )\big) \,d\tau \\
&=:\Sc{L}[\phi ](t)+\Sc{N}[u,u](t),\qquad t\in [0,T].}
We first recall the argument of Bejenaru and Tao \cite{BT06}.
By Picard iteration, the power series $\sum _{k=1}^\infty U_k[\phi ]$ with
\eqq{&U_1[\phi ]:=\Sc{L}[\phi ], \qquad U_2[\phi ]:=\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]],\\
&U_3[\phi ]:=\Sc{N}[\Sc{L}[\phi ],\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]]]+\Sc{N}[\Sc{N}[\Sc{L}[\phi ],\Sc{L}[\phi ]],\Sc{L}[\phi ]],\\
&\quad \vdots \\
&U_k[\phi ]:=\sum _{k_1,k_2\ge 1;\,k_1+k_2=k}\Sc{N}[U_{k_1}[\phi ],U_{k_2}[\phi ]] \qquad (k\ge 2)}
formally gives a solution to \eqref{eq:ie}.
To justify this, we basically need the linear and bilinear estimates
\eq{est:qwp}{\norm{\Sc{L}[\phi ]}{S}\le C\norm{\phi}{D},\qquad \norm{\Sc{N}[u_1,u_2]}{S}\le C\norm{u_1}{S}\norm{u_2}{S}}
for the space of initial data $D$ and some space $S\subset C([0,T];D)$ in which we construct a solution.
In fact, they showed (roughly speaking) the following:
\begin{quote}
Assume that \eqref{est:qwp} holds with the Banach space $D$ of initial data and some Banach space $S$.
Then,
(i) for any $k\ge 1$ the operators $U_k:D\to S$ are well-defined and satisfy $\tnorm{U_k[\phi ]}{S}\le (C\tnorm{\phi}{D})^k$, and
(ii) there exists $\varepsilon _0>0$ (depending on the constants in \eqref{est:qwp}) such that the solution map $\phi \mapsto u[\phi ]:=\sum _{k=1}^\infty U_k[\phi ]$ is well-defined on $B_D(\varepsilon _0):=\Shugo{\phi \in D}{\tnorm{\phi}{D}\le \varepsilon _0}$ and gives a solution to \eqref{eq:ie}.
\end{quote}
Next, consider some coarser topologies on $D$ and $S$ induced by norms $\tnorm{~}{D'}$ and $\tnorm{~}{S'}$ that are weaker than $\tnorm{~}{D}$ and $\tnorm{~}{S}$, respectively.
They claimed the following:
\begin{quote}
Assume further that the solution map $\phi \mapsto u[\phi ]$ given above is continuous from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ (i.e., $B_D(\varepsilon _0)$ equipped with the $D'$ topology) to $(S,\tnorm{~}{S'})$.
Then, for each $k$ the operator $U_k$ is continuous from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$.
\end{quote}
To show the continuity of $U_k$ in the coarser topologies, by homogeneity one can restrict to sufficiently small initial data.
Then, by the estimates \eqref{est:qwp}, the contribution of the higher-order terms $\sum _{k'>k}U_{k'}[\phi ]$ can be made arbitrarily small compared to $U_k[\phi ]$.
Combining this fact with the hypothesis that $\sum _{k\ge 1}U_k[\phi ]$ is continuous, one can show the claim by induction on $k$.
Now, this claim gives a way to prove ill-posedness in coarse topologies.
Namely, one can show the discontinuity of the solution map $\phi \mapsto \sum _{k=1}^\infty U_k[\phi ]$ in coarse topologies by simply establishing the discontinuity of the (more explicit) map $\phi \mapsto U_k[\phi ]$ for at least one $k$.
\footnote{It is worth noticing that the continuity of $U_k$ from $(B_D(\varepsilon _0),\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$ does not imply its continuity from $(D,\tnorm{~}{D'})$ to $(S,\tnorm{~}{S'})$ in general, even though $U_k$ can be defined for all functions in $D$.
By the $k$-linearity of $U_k$, the latter continuity is equivalent to the \emph{boundedness}: $\tnorm{U_k[\phi ]}{S'}\le C\tnorm{\phi}{D'}^k$.
Hence, merely disproving the boundedness of $U_k$ in coarse topologies (which may imply that the solution map is not $k$ times differentiable) is not sufficient to conclude the discontinuity of the solution map.
}
We notice that this proof of ill-posedness involves evaluating the higher-order terms by using \eqref{est:qwp}, that is, estimates (or well-posedness) in the stronger topology.
Here, we observe two facts about this method.
First, it cannot yield norm inflation in coarse topologies.
This is because the image of the continuous solution map with domain $B_D(\varepsilon _0)$ is bounded in $S$, and hence it must be bounded in weaker norms.
Secondly, the `well-posedness' estimates \eqref{est:qwp} in $D,S$ and the discontinuity of some $U_k$ in $D',S'$ would imply the discontinuity of $U_k$ in any `intermediate' norms $D'',S''$ satisfying
\eqq{\tnorm{\phi}{D'}\lesssim \tnorm{\phi}{D''}\lesssim \tnorm{\phi}{D}^\theta \tnorm{\phi}{D'}^{1-\theta},\qquad \tnorm{u}{S'}\lesssim \tnorm{u}{S''}\lesssim \tnorm{u}{S}}
for some $0<\theta <1$.
In fact, if $U_k:(B_D(\varepsilon _0),\tnorm{~}{D'})\to S'$ is not continuous, there exist $\shugo{\phi _n}\subset B_D(\varepsilon _0)$ and $\phi _\infty \in B_D(\varepsilon _0)$ such that $\tnorm{\phi _n-\phi _\infty}{D'}\to 0$ ($n\to \infty$) but $\tnorm{U_k[\phi _n]-U_k[\phi _\infty ]}{S'}\gtrsim 1$.
Since $\shugo{\phi _n}$ is bounded in $D$, this implies that $\tnorm{\phi _n-\phi _\infty}{D''}\to 0$ and $\tnorm{U_k[\phi _n]-U_k[\phi _\infty ]}{S''}\gtrsim 1$.
In particular, if we work in Sobolev spaces:
\eqq{D=H^{s_0},\quad S\hookrightarrow C([0,T];H^{s_0}),\quad D'=H^{s_1},\quad S'=C([0,T];H^{s_1})\qquad (s_0>s_1),}
then ill-posedness in $H^{s_1}$ as a consequence of the argument in \cite{BT06} should actually yield ill-posedness in any $H^s$, $s_1\le s<s_0$, while we have \eqref{est:qwp}, i.e., well-posedness in $H^{s_0}$.
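Indeed, the Sobolev scale fulfills the interpolation condition above: for $s''=\theta s_0+(1-\theta )s_1$ with $0<\theta <1$, H\"older's inequality on the Fourier side gives
\[ \tnorm{\phi}{H^{s''}}\le \tnorm{\phi}{H^{s_0}}^{\theta}\tnorm{\phi}{H^{s_1}}^{1-\theta},\]
which, together with the trivial embeddings $H^{s_0}\hookrightarrow H^{s''}\hookrightarrow H^{s_1}$, shows that every $H^{s''}$ with $s_1<s''<s_0$ is an `intermediate' norm in the sense described above.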
Therefore, the regularity $s_0$ in which we invoke \eqref{est:qwp} must automatically be the threshold regularity between well- and ill-posedness.
This explains why the same argument cannot be applied to the two-dimensional quadratic NLS with nonlinearity $u^2$.
In fact, as mentioned in the Introduction, \eqref{est:qwp} is obtained with $D=H^s$ when $s>-1$ (with a suitable $S$) but fails if $s\le -1$ (for any $S$ continuously embedded into $C([0,T];H^s)$), and hence well-posedness at the threshold regularity is not available in this case.
We next recall Iwabuchi and Ogawa's result \cite{IO15}, which settled the aforementioned two-dimensional case.
Indeed, the argument in \cite{IO15} is similar to that of \cite{BT06} in that it exploits the power series expansion and shows that one term in the series exhibits instability and dominates all the other terms.
Now, we notice that the existence time $T>0$ is allowed to shrink for the purpose of establishing norm inflation, while in \cite{BT06} it is fixed and uniform with respect to the initial data.
The main difference of the argument in \cite{IO15} from that of \cite{BT06} is that they worked with estimates of the form
\eq{est:qwp'}{\norm{\Sc{L}[\phi ]}{S_T}\le C\norm{\phi}{D},\qquad \norm{\Sc{N}[u_1,u_2]}{S_T}\le CT^\delta \norm{u_1}{S_T}\norm{u_2}{S_T}}
for the data space $D$, $S_T\subset C([0,T];D)$, and $\delta >0$, and considered the expansion up to different times $T$ according to the initial data.
In fact, this enables us to take a sequence of initial data which is unbounded in $D$ (but converges to $0$ in a weaker norm), and such a set of initial data actually yields an unbounded sequence of solutions.
Another feature of the argument in \cite{IO15} is that the higher-order terms were estimated directly in $D'$ by using properties of the specific initial data they chose; in \cite{BT06} these terms were simply estimated in $D$ by \eqref{est:qwp}, which holds for general functions.
\footnote{In fact, we do not need `well-posedness in $D$', i.e., estimates such as \eqref{est:qwp'} that hold for \emph{all} functions in $D$ and $S$.
It is enough to estimate the terms $U_k[\phi ]$ just for the particularly chosen initial data $\phi$.
In some problems this consideration becomes essential; see \cite{Ok17}, Theorem~1.2 and its proof.
}
At a technical level, another novelty in \cite{IO15} is the use of the modulation space $M_{2,1}$ as $D$ instead of Sobolev spaces.
The bilinear estimate in \eqref{est:qwp'} is then straightforward thanks to the algebra property of $M_{2,1}$.
Finally, we remark that the strategies of \cite{BT06,IO15} work well when the operator $U_k$ involves a significant high-to-low frequency cascade, as mentioned in \cite{BT06}.
However, the situation is different in the case of a \emph{system} of equations, as there is more than one regularity index and one cannot simply order two pairs of regularity indices; see e.g.~\cite{MO15}, where the argument of \cite{IO15} was employed to derive norm inflation from nonlinear interactions of ``high$\times$low$\to$high'' type.
\section{Proof of Theorem~\ref{thm:main0}}\label{sec:proof0}
Let us first consider the case of single-term nonlinearity and prove Theorem~\ref{thm:main0}.
The argument in this section basically follows that in \cite{IO15}.
Since the coefficient $\nu \neq 0$ plays no role in our proof, we assume $\nu =1$ for simplicity.
We write
\eqq{\mu _{p,q}(z_1,\dots ,z_p):=\prod _{l=1}^qz_l\prod _{m=q+1}^p\bar{z}_m,\qquad \mu _{p,q}(z):=\mu _{p,q}(z,\dots ,z),}
so that $u^q\bar{u}^{p-q}=\mu _{p,q}(u)$.
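For instance, $\mu _{3,2}(u)=u^2\bar{u}=|u|^2u$ is the gauge-invariant cubic nonlinearity, while $\mu _{2,0}(u)=\bar{u}^2$ and $\mu _{2,1}(u)=u\bar{u}=|u|^2$ are the quadratic nonlinearities discussed in the introduction.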
\begin{defn}\label{defn:U_k}
For $\phi \in L^2(Z)$, we (formally) define
\eqq{U_1[\phi ](t)&:=e^{it\Delta}\phi ,\\
U_k[\phi ](t)&:=-i\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}\big( U_{k_1}[\phi ],\dots ,U_{k_p}[\phi ]\big) (\tau )\,d\tau ,\qquad k\ge 2.}
\end{defn}
Note that $U_{k}[\phi ]=0$ unless $k\equiv 1\mod p-1$.
The expansion $u=\sum _{k=1}^\infty U_k[\phi ]$ of a (unique) solution $u$ to \eqref{NLS} will play a crucial role in the proof.
To make sense of this representation, we use modulation spaces.
The notion of modulation spaces was introduced by Feichtinger in the 1980s \cite{F83}, and nowadays it has become one of the common tools in the study of nonlinear evolution PDEs; see e.g.~the survey \cite{RSW12} and the references therein.
\begin{defn}
Let $A>0$ be a dyadic number.
Define the space $M_A$ as the completion of $C_0^\infty (Z)$ with respect to the norm
\[ \norm{f}{M_A}:=\sum _{\xi \in A\Bo{Z}^d}\norm{\widehat{f}}{L^2(\xi +Q_A)},\]
where $Q_A:=[-\frac{A}{2},\frac{A}{2})^d$.
\end{defn}
\begin{rem}
We consider the space $M_A$ with $A<1$ only when $Z=\Bo{R} ^d$.
For $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$, the $L^2(\xi +Q_A)$ norm in the above definition means the $L^2$ norm restricted to $(\xi +Q_A)\cap \widehat{Z}$, where $\widehat{Z}:=\Bo{R} ^{d_1}\times \Bo{Z} ^{d_2}$.
If $Z=\Bo{T}^d$, the space $M_1$ coincides with the Wiener algebra $\Sc{F} L^1(\Bo{T} ^d)$.
\end{rem}
We will only use the following properties of the space $M_A$.
The proof is elementary, and thus it is omitted.
\begin{lem}\label{lem:M_A}
(i) $M_A\cong _AM_1$,\hspace{10pt} $H^{\frac{d}{2}+\varepsilon}\hookrightarrow M_1\hookrightarrow L^2$\hspace{10pt} ($\varepsilon >0$).
(ii) There exists $C=C(d)>0$ such that for any $f,g\in M_A$, we have
\[ \norm{fg}{M_A}\le CA^{\frac{d}{2}}\norm{f}{M_A}\norm{g}{M_A}.\]
\end{lem}
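For orientation, we sketch the proof of (ii) under the standard decomposition into frequency cubes: writing $f=\sum _{\eta \in A\Bo{Z}^d}f_\eta$ with $\widehat{f_\eta}:=\widehat{f}\chi _{\eta +Q_A}$ (and similarly for $g$), each piece $\widehat{f_\eta}\ast \widehat{g_{\eta '}}$ is supported in $\eta +\eta '+Q_{2A}$ and thus meets only $O(1)$ of the cubes $\xi +Q_A$, while Young's inequality and the Cauchy--Schwarz inequality on a cube of measure $A^d$ give
\[ \norm{\widehat{f_\eta}\ast \widehat{g_{\eta '}}}{L^2}\le \norm{\widehat{f_\eta}}{L^2}\norm{\widehat{g_{\eta '}}}{L^1}\le A^{\frac{d}{2}}\norm{\widehat{f_\eta}}{L^2}\norm{\widehat{g_{\eta '}}}{L^2};\]
summing over $\eta ,\eta '$ yields the bound in (ii).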
Since the space $M_A$ is a Banach algebra and the linear propagator $e^{it\Delta}$ is unitary in $M_A$, we can easily show the following multilinear estimates.
\begin{lem}\label{lem:U_k}
Let $A\ge 1$ be a dyadic number and $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$.
Then, there exists $C>0$ independent of $A$ and $M$ such that
\eqq{\norm{U_k[\phi ](t)}{M_A}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M}
for any $t\ge 0$ and $k\ge 1$.
\end{lem}
\betagin{proof}
Let $\shugo{a_k} _{k=1}^\infty$ be the sequence defined by
\[ a_1=1,\qquad a_k=\frac{p-1}{k-1}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}a_{k_1}\cdots a_{k_p}\qquad (k\ge 2).\]
As observed in \cite[Eq.~(16)]{BT06}, one can show inductively that $a_k\le C^k$ for some $C>0$.
To be more precise, we state it as the following lemma.
The $p=2$ case can be found in \cite[Lemma~4.2]{MO16} with a detailed proof.
\betagin{lem}\lambdabel{lem:a_k}
Let $\shugo{b_k}_{k=1}^\infty$ be a sequence of nonnegative real numbers such that
\[ b_{k} \le C\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}b_{k_1}\cdots b_{k_p},\qquad k\ge 2\]
for some $p\ge 2$ and $C>0$.
Then, we have
\[ b_k\le b_1C_0^{k-1},\qquad k\ge 1;\qquad C_0:=\frac{\partiali ^2}{6}(Cp^2)^{\frac{1}{p-1}}b_1.\]
\varepsilonnd{lem}
By Lemma~\ref{lem:a_k}, it holds $a_k\le C_0^{k-1}$ for some $C_0>0$.
Thus, it suffices to show
\varepsilonqq{\norm{U_{k}[\partialhi ](t)}{M_A}\le a_kt^{\frac{k-1}{p-1}}(C_1A^{\frac{d}{2}}M)^{k-1}M,\qquad t\ge 0,\quad k\ge 1}
for some $C_1>0$.
This is trivial if $k=1$.
Let $k\ge 2$, and assume the above estimate for $U_1,U_2,\dots ,U_{k-1}$.
Using Lemma~\ref{lem:M_A}, we have
\varepsilonqq{\norm{U_{k}[\partialhi ](t)}{M_A}&\le CA^{\frac{d}{2}(p-1)}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \partialrod _{j=1}^p\norm{U_{k_j}[\partialhi ](\tau )}{M_A}\,d\tau \\
&\le CA^{\frac{d}{2}(p-1)}(C_1A^{\frac{d}{2}}M)^{k-p}M^p\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}a_{k_1}\cdots a_{k_p}\int _0^t \tau ^{\frac{k-p}{p-1}}\,d\tau \\
&=Ca_kC_1^{k-p}(A^{\frac{d}{2}}M)^{k-1}Mt^{\frac{k-1}{p-1}}.}
The estimate for $U_k$ follows by setting $C_1$ to be $C^{\frac{1}{p-1}}$ with the constant $C$ in the last line, which is independent of $k$.
\varepsilonnd{proof}
A standard argument (cf.~\cite[Theorem~3]{BT06}) with Lemma~\ref{lem:M_A} (ii) and Lemma~\ref{lem:U_k} shows the following local well-posedness of \eqref{NLS} in $M_A$.
\begin{cor}\label{cor:lwp}
Let $A\ge 1$ be dyadic, and $M>0$.
If $0<T\ll (A^{d/2}M)^{-(p-1)}$, then for any $\phi \in M_A$ with $\tnorm{\phi}{M_A}\le M$ the following holds.
(i) A unique solution $u$ to the integral equation associated with \eqref{NLS},
\eq{eq:ie'}{u(t)=e^{it\Delta}\phi -i\int _0^t e^{i(t-\tau )\Delta}\mu _{p,q}(u(\tau ))\,d\tau ,\qquad t\in [0,T]}
exists in $C([0,T];M_A)$.
(ii) The solution $u$ given in (i) has the expression
\eqq{u=\sum _{k=1}^\infty U_{k}[\phi ]=\sum _{l=0}^\infty U_{(p-1)l+1}[\phi ],}
which converges absolutely in $C([0,T];M_A)$.
\end{cor}
\betagin{proof}
(i) Let
\varepsilonqq{\Psi _{\partialhi}[u](t):= e^{it\Deltalta}\partialhi -i\int _0^t e^{i(t-\tau )\Deltalta}\mu _{p,q}(u(\tau ))\,d\tau ,}
then from Lemma~\ref{lem:M_A} (ii) we have
\varepsilonqq{\norm{\Psi _\partialhi [u]}{L^\infty (0,T;M_A)}\le \tnorm{\partialhi}{M_A}+CTA^{\frac{d}{2}(p-1)}\tnorm{u}{L^\infty (0,T;M_A)}^p}
and that $\Psi$ is a contraction on a ball in $C([0,T];M_A)$ if $TA^{\frac{d}{2}(p-1)}\tnorm{\partialhi}{M_A}^{p-1}\ll 1$.
(ii) The series $u=\sum _{k\ge 1}U_k[\partialhi ]$ converges in $C([0,T];M_A)$ by virtue of Lemma~\ref{lem:U_k}.
By uniqueness, it suffices to show that $u$ solves the equation \varepsilonqref{eq:ie'}.
Let $u_K:=\sum _{k=1}^KU_k[\partialhi ]$, so that $u=\lim _{K\to \infty}u_K$ in $C([0,T];M_A)$.
We see that $\Psi _\partialhi [u_K]-u_K$ consists of $k$-linear terms in $\partialhi$ with $K+1\le k\le pK$, and we can show
\varepsilonqq{\tnorm{\Psi _\partialhi [u_K]-u_K}{L^\infty (0,T;M_A)}\le C(CT^{\frac{1}{p-1}}A^{\frac{d}{2}}M)^KM}
by an argument similar to Lemma~\ref{lem:U_k}.
By letting $K\to \infty$, we obtain $\Psi _\partialhi [u]=u$.
\varepsilonnd{proof}
\begin{rem}
(i) In $M_A$ we have \emph{unconditional} local well-posedness.
In particular, the embedding (Lemma~\ref{lem:M_A} (i)) shows that the unique solution with initial data in a high-regularity Sobolev space exists on a time interval $[0,T]$
and coincides with the solution constructed in Corollary~\ref{cor:lwp}.
(ii)
In the following proof of Theorem~\ref{thm:main0} we will take initial data that are localized in frequency on several cubes of side length $O(A)$ located in $\shugo{|\xi |\gg \max (1,A)}$.
For such initial data the $L^2$ norm is comparable with the $M_A$ norm, but much smaller than the Sobolev norms of positive indices.
In the $L^2$-supercritical cases (i.e., $s_c(d,p)>0$), no reasonable well-posedness is expected in $L^2$, while the use of a higher-regularity Sobolev space would verify the power series expansion only on a smaller time interval.
In this regard, the space $M_A$ is well suited to our purpose.
\end{rem}
Let $N,A$ be dyadic numbers to be specified later, with $N\gg 1$ and $0< A\ll N$ ($1\le A\ll N$ when $Z$ has a periodic direction).
In the proof of norm inflation, we will use initial data $\phi$ of the following form:
\eq{cond:phi}{&\widehat{\phi}=rA^{-\frac{d}{2}}N^{-s}\chi _\Omega \quad \text{with a positive constant $r$ and a set $\Omega$ satisfying}\\
&\Omega = \bigcup _{\eta \in \Sigma}(\eta +Q_A)\hspace{10pt} \text{for some $\Sigma \subset \shugo{\xi \in \Bo{R} ^d:|\xi |\sim N}$ s.t. $\# \Sigma \le 3$}.}
Note that $\tnorm{\phi}{M_A}\sim rN^{-s}$, $\tnorm{\phi}{H^s}\sim r$.
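Both norms can be checked directly from \eqref{cond:phi}: since $|\Omega |\sim A^d$, $\LR{\xi}\sim N$ on $\Omega$, and $\Omega$ meets only $O(1)$ of the cubes $\xi +Q_A$,
\[ \tnorm{\phi}{H^s}\sim N^{s}\tnorm{\widehat{\phi}}{L^2}\sim N^{s}\cdot rA^{-\frac{d}{2}}N^{-s}\cdot |\Omega |^{\frac{1}{2}}\sim r,\qquad \tnorm{\phi}{M_A}\sim rA^{-\frac{d}{2}}N^{-s}\cdot |\Omega |^{\frac{1}{2}}\sim rN^{-s}.\]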
We derive Sobolev bounds on $U_k[\phi ](t)$ for $\phi$ satisfying the above condition.
\betagin{lem}\lambdabel{lem:supp}
There exists $C>0$ such that for any $\partialhi$ satisfying \varepsilonqref{cond:phi} and $k\ge 1$, we have
\varepsilonqq{\big| \supp{\widehat{U_{k}[\partialhi]}(t)}\big| \le C^kA^d,\qquad t\ge 0.}
\varepsilonnd{lem}
\betagin{proof}
Since the $\xi$-support of $\widehat{U_{k}[\partialhi]}$ is determined by a spatial convolution of $k$ copies of $\hat{\partialhi}$ or $\hat{\bar{\partialhi}}=\overline{\hat{\partialhi}(-\cdot )}$, it is easily seen that
\varepsilonqq{\Supp{\widehat{U_{k}[\partialhi]}(t)}{\bigcup _{\varepsilonta \in \Sc{S}_k}\big( \varepsilonta +Q_{kA}\big)}}
for all $t\ge 0$, where $\Sc{S}_1:=\Sigma$ and
\varepsilonqq{\Sc{S}_k:=&\Shugo{\varepsilonta \in \Bo{R}^d}{\varepsilonta =\sum _{l=1}^k\varepsilonta _l,\,\varepsilonta _l\in \Sigma \cup (-\Sigma )\;(1\le l\le k)},\qquad k\ge 2.}
Since $\# \Sc{S}_k\le 6^k$, we have
\varepsilonqq{\big| \supp{\widehat{U_{k}[\partialhi]}(t)}\big| \le \big| Q_{kA}\big| \# \Sc{S}_k\le (kA)^d6^k\le C^kA^d.\qedhere}
\varepsilonnd{proof}
\begin{lem}\label{lem:U_k_H^s}
Let $\phi$ satisfy \eqref{cond:phi}.
Assume that $s<0$.
Then, there exists $C>0$ depending only on $d,p,s$ such that the following holds.
\begin{enumerate}
\item $\norm{U_1[\phi ](T)}{H^s}\le Cr$\hspace{10pt} for any $T\ge 0$.
\item $\norm{U_{k}[\phi](T)}{H^s}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$\hspace{10pt} for any $T\ge 0$ and $k\ge 2$, where
\eqq{\rho :=rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}},\qquad f_s(A):=\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}.}
\end{enumerate}
\end{lem}
\betagin{proof}
(i) is easily verified.
For (ii), we see that
\varepsilonqq{&\norm{U_{k}[\partialhi](t)}{H^s}\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\partialhi]}(t)})}\sup _{\xi \in \Bo{R}^d}\big| \widehat{U_k[\partialhi]}(t,\xi )\big| \\
&\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\partialhi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \norm{\big| v_{k_1}(\tau )\big| *\cdots *\big| v_{k_p}(\tau )\big|}{L^\infty}\,d\tau ,}
where $v_{k_l}$ is either $\widehat{U_{k_l}[\partialhi]}$ or $\widehat{\overline{U_{k_l}[\partialhi ]}}$.
By Young's inequality, the above is bounded by
\varepsilonqq{&\norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\partialhi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \norm{v_{k_1}(\tau )}{L^2}\norm{v_{k_2}(\tau )}{L^2}\partialrod _{l=3}^{p}\norm{v_{k_l}(\tau )}{L^1}\,d\tau \\
&\le \norm{\LR{\xi}^s}{L^2(\supp{\widehat{U_{k}[\partialhi]}(t)})}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \partialrod _{l=3}^p\big| \supp{\widehat{U_{k_l}[\partialhi]}(\tau )}\big| ^{\frac{1}{2}}\partialrod _{l=1}^p\norm{\widehat{U_{k_l}[\partialhi]}(\tau )}{L^2}\,d\tau .}
Since $s<0$, for any bounded set $D\subset \Bo{R} ^d$ it holds that
\varepsilonqq{\big| \shugo{\LR{\xi}^{s}>\lambda}\cap D\big| \le \big| \shugo{\LR{\xi}^s>\lambda}\cap B_D\big| \qquad (\lambda >0),}
where $B_D\subset \Bo{R} ^d$ is the ball centered at the origin with $|D|=|B_D|$.
This implies that $\tnorm{\LR{\xi}^s}{L^2(D)}\le \tnorm{\LR{\xi}^s}{L^2(B_D)}$.
Moreover, it follows from Lemma~\ref{lem:U_k} with $M=CrN^{-s}$ that
\varepsilonqq{\norm{U_{k}[\partialhi](t)}{L^2}\le \norm{U_{k}[\partialhi](t)}{M_A}\le Ct^{\frac{k-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k-1}rN^{-s},\qquad k\ge 1 .}
Hence, we apply Lemma~\ref{lem:supp} to bound the above by
\varepsilonqq{&\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le C^{\frac{k}{d}}A})}\cdot C^{\frac{k}{2}}A^{\frac{d(p-2)}{2}}\sum _{\mat{k_1,\dots ,k_p\ge 1\\ k_1+\dots +k_p=k}}\int _0^t \partialrod _{l=1}^p\big[ C\tau ^{\frac{k_l-1}{p-1}}(CrA^{\frac{d}{2}}N^{-s})^{k_l-1}rN^{-s}\big] \,d\tau \\
&\le C^k\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}A^{\frac{d(p-2)}{2}+\frac{d}{2}(k-p)}(rN^{-s})^k\int _0^t\tau ^{\frac{k-p}{p-1}}\,d\tau \\
&\le f_s(A)A^{\frac{d}{2}(k-2)}(CrN^{-s})^kt^{\frac{k-1}{p-1}},}
which is the desired one.
\varepsilonnd{proof}
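In the sequel, $f_s(A)$ will be evaluated using the following elementary asymptotics, valid for $A\gg 1$ with implicit constants depending only on $d$ and $s$:
\[ f_s(A)=\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}\sim \begin{cases} A^{\frac{d}{2}+s} & (-\frac{d}{2}<s<0),\\ (\log A)^{\frac{1}{2}} & (s=-\frac{d}{2}),\\ 1 & (s<-\frac{d}{2}),\end{cases}\]
which follows from computing $\int _{1\le |\xi |\le A}|\xi |^{2s}\,d\xi$ in polar coordinates.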
We observe the following lower bounds on the $H^s$ norm of the first nonlinear term in the expansion of the solution.
\betagin{lem}\lambdabel{lem:U_p}
The following estimates hold for any $s\in \Bo{R}$.
\betagin{enumerate}
\item Let $(p,q)$ and $Z=\Bo{R}^{d_1}\widetildemes \Bo{T} ^{d_2}$ be arbitrary.
For $1\le A\ll N$, we define the initial data $\partialhi$ by \varepsilonqref{cond:phi} with $\Sigma =\shugo{Ne_d, -Ne_d, 2Ne_d}$, where $e_d:=(0,\dots ,0,1)\in \Bo{R} ^d$.
If $0<T\ll N^{-2}$, then we have
\varepsilonqq{\norm{U_p[\partialhi ](T)}{H^s}\gtrsim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).}
\item Let $(p,q)=(2,1)$ and $Z=\Bo{R} ^d$, $1\le d\le 3$.
For $N\gg 1$, define $\partialhi$ by
\varepsilonqq{\widehat{\partialhi}:=rN^{\frac{1}{2}-s}\chi _{Ne_d+\widetilde{Q}_{N^{-1}}}\hspace{10pt} \text{with}\hspace{10pt} r>0,\hspace{10pt} \widetilde{Q}_{N^{-1}}:=[-\tfrac{1}{2},\tfrac{1}{2})^{d-1}\widetildemes [-\tfrac{1}{2N},\tfrac{1}{2N}).}
Then, for any $0<T\ll 1$ we have
\varepsilonqq{\norm{U_2[\partialhi ](T)}{H^s}\gtrsim r^2N^{-2s-\frac{1}{2}}T.}
\item Let $(p,q)=(2,1)$ and $Z=\Bo{R} ^{d_1}\widetildemes \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$.
Define $\partialhi$ by \varepsilonqref{cond:phi} with $A=1$, $\Sigma =\shugo{Ne_d}$.
Then, for any $0<T\ll 1$ we have
\varepsilonqq{\norm{U_2[\partialhi ](T)}{H^s}\gtrsim r^2N^{-2s}T.}
\item Let $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$ and $Z=\Bo{T}$.
Define $\partialhi$ by \varepsilonqref{cond:phi} with $A=1$, $\Sigma =\shugo{-N,2N,3N}$.
Then, for any $T>0$ we have
\varepsilonqq{\norm{U_4[\partialhi ](T)}{H^s}\gtrsim r^4N^{-4s}T.}
\varepsilonnd{enumerate}
\varepsilonnd{lem}
\betagin{proof}
Note that
\varepsilonqq{\widehat{U_p[\partialhi ]}(T,\xi )=ce^{-iT|\xi |^2}\int _{\Gamma}\partialrod _{l=1}^q\widehat{\partialhi}(\xi _l)\partialrod _{m=q+1}^p\overline{\widehat{\partialhi}(\xi _m)}\int _0^T e^{it\Phi}\,dt,}
where
\varepsilonqs{\Gamma :=\Shugo{(\xi _1,\dots ,\xi _p)}{\sum _{l=1}^q\xi _l-\sum _{m=q+1}^p\xi _m=\xi},\quad
\Phi :=|\xi |^2-\sum _{l=1}^q|\xi _l|^2+\sum _{m=q+1}^p|\xi _m|^2.}
(i) If we restrict $\xi$ to $Q_A$, we have
\varepsilonqq{\widehat{U_p[\partialhi ]}(T,\xi )=c(rA^{-\frac{d}{2}}N^{-s})^pe^{-iT|\xi |^2}\sum _{(\varepsilonta _1,\dots ,\varepsilonta _p)}\int _{\Gamma}\partialrod _{l=1}^p\chi _{\varepsilonta _l+Q_A}(\xi _l)\int _0^T e^{it\Phi}\,dt,}
where the sum is taken over the set
\varepsilonqq{\Shugo{(\varepsilonta _1,\dots ,\varepsilonta _p)\in \shugo{\partialm Ne_d ,2Ne_d}^p}{\sum _{l=1}^q\varepsilonta _l-\sum _{m=q+1}^p\varepsilonta _m=0},} which is non-empty for any $(p,q)$.
\footnote{If $p$ is even, we can choose $\varepsilonta _l$ to be $Ne_d$ or $-Ne_d$ so that $\sum _{l=1}^q\varepsilonta _l-\sum _{m=q+1}^p\varepsilonta _m=0$.
If $p$ is odd, we choose $\varepsilonta _1=2Ne_d$ and $\varepsilonta _2$ to be $Ne_d$ or $-Ne_d$ so that the output from these two frequencies is either $Ne_d$ or $-Ne_d$.
Then, the other $\varepsilonta _j$ can be chosen as for $p$ even.
}
Since $|\Phi| \lesssim N^2$ in the integral, for $0<T\ll N^{-2}$ we have
\varepsilonqq{|\widehat{U_p[\partialhi ]}(T,\xi )|\gtrsim (rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T\chi _{p^{-1}Q_{A}}(\xi ),}
and thus
\varepsilonqq{\norm{U_p[\partialhi ](T)}{H^s}\gtrsim (rA^{-\frac{d}{2}}N^{-s})^p(A^d)^{p-1}T\norm{\LR{\xi}^s}{L^2(p^{-1}Q_{A})}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).}
(ii) In this case we have
\varepsilonqq{\widehat{U_2[\partialhi ]}(T,\xi )=c(rN^{\frac{1}{2}-s})^2e^{-iT|\xi |^2}\int _{\xi _1-\xi _2=\xi}\chi _{\widetilde{Q}_{N^{-1}}}(\xi _1-Ne_d)\chi _{\widetilde{Q}_{N^{-1}}}(\xi _2-Ne_d)\int _0^T e^{it\Phi}\,dt,}
and in the integral, for $\xi =\xi _1-\xi _2\in \widetilde{Q}_{N^{-1}}$,
\varepsilonqq{\Phi =|\xi |^2-|\xi _1|^2+|\xi _2|^2=|\xi |^2-|\xi _1-Ne_d|^2+|\xi _2-Ne_d|^2-2(\xi _1-\xi _2)\cdot Ne_d=O(1).}
Hence, if $0<T\ll 1$, we have
\varepsilonqq{|\widehat{U_2[\partialhi ]}(T)|\gtrsim (rN^{\frac{1}{2}-s})^2N^{-1}T\chi _{2^{-1}\widetilde{Q}_{N^{-1}}},\qquad \norm{U_2[\partialhi ](T)}{H^s}\gtrsim (rN^{\frac{1}{2}-s})^2N^{-\frac{3}{2}}T}
for any $s\in \Bo{R}$.
(iii) Similarly to (ii), we see that
\varepsilonqq{\widehat{U_2[\partialhi ]}(T,(\xi ',0))=c(rN^{-s})^2e^{-iT|\xi |^2}\int _{\xi _1'-\xi _2'=\xi '}\chi _{[-1/2,1/2 )^{d-1}}(\xi _1')\chi _{[-1/2,1/2 )^{d-1}}(\xi _2')\int _0^T e^{it\Phi}\,dt,}
where the integral in $\xi '=(\xi _1,\dots ,\xi _{d-1})$ vanishes if $Z=\Bo{T}$.
In the integral,
\varepsilonqq{\Phi =|(\xi ',0)|^2-|(\xi _1',N)|^2+|(\xi _2',N)|^2=O(1).}
Hence, if $0<T\ll 1$, we have
\varepsilonqq{\norm{U_2[\partialhi ](T)}{H^s}\ge \norm{\LR{\xi}^s\widehat{U_2[\partialhi ]}(T)}{L^2(Q_{1/2})}\gtrsim (rN^{-s})^2T}
for any $s\in \Bo{R}$.
(iv) We first consider $(p,q)=(4,1)$; the case of $(4,3)$ is treated in the same way.
Observe that
\varepsilonqq{&\Shugo{(\varepsilonta _1,\dots ,\varepsilonta _4)\in \shugo{-N,2N,3N}^4}{\varepsilonta _1-\varepsilonta _2-\varepsilonta _3-\varepsilonta _4=0}\\
&=\shugo{(3N,-N,2N,2N),\,(3N,2N,-N,2N),\,(3N,2N,2N,-N)}.}
Therefore, we have
\varepsilonqq{\widehat{U_4[\partialhi ]}(T,0)&=c(rN^{-s})^4\sum _{\mat{\xi _1,\dots ,\xi _4\in \Bo{Z}\\ \xi _1-\xi _2-\xi _3-\xi _4=0}}\partialrod _{l=1}^4\chi _{\shugo{-N,2N,3N}}(\xi _l)\int _0^T e^{it\Phi}\,dt\\
&=3c(rN^{-s})^4\int _0^T e^{it\{ 0^2-(3N)^2+(-N)^2+(2N)^2+(2N)^2\}}\,dt =3c(rN^{-s})^4T,}
which implies
\varepsilonqq{\norm{U_4[\partialhi ](T)}{H^s}\gtrsim (rN^{-s})^4T}
for any $s\in \Bo{R}$ and $T>0$.
Next, we consider $(p,q)=(4,2)$, which is very similar to the above.
Since
\varepsilonqq{&\Shugo{(\varepsilonta _1,\dots ,\varepsilonta _4)\in \shugo{-N,2N,3N}^4}{\varepsilonta _1+\varepsilonta _2-\varepsilonta _3-\varepsilonta _4=0}\\
&=\Shugo{(\varepsilonta _1,\dots ,\varepsilonta _4)\in \shugo{-N,2N,3N}^4}{\shugo{\varepsilonta _1,\varepsilonta _2}=\shugo{\varepsilonta _3,\varepsilonta _4}},}
we have
\varepsilonqq{\widehat{U_4[\partialhi ]}(T,0)&=c(rN^{-s})^4\sum _{\mat{\xi _1,\dots ,\xi _4\in \Bo{Z}\\ \xi _1+\xi _2-\xi _3-\xi _4=0}}\partialrod _{l=1}^4\chi _{\shugo{-N,2N,3N}}(\xi _l)\int _0^T e^{it\Phi}\,dt=15c(rN^{-s})^4T,}
and the same estimate holds.
\varepsilonnd{proof}
Now, we are in a position to prove norm inflation.
\begin{proof}[Proof of Theorem~\ref{thm:main0}]
We first recall that $U_k[\phi]=0$ unless $k\equiv 1\mod p-1$.
If the initial data $\phi$ satisfies \eqref{cond:phi}, Corollary~\ref{cor:lwp} guarantees the existence of the solution to \eqref{NLS} and the validity of the power series expansion in $M_A$ up to time $T$ whenever $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$.
\underline{Case 1}: General $Z$ and $(p,q)$, $s<\min \shugo{s_c(d,p),0}$.
Take $\partialhi$ as in Lemma~\ref{lem:U_p} (i).
From Lemmas~\ref{lem:U_k_H^s} and \ref{lem:U_p}, under the conditions
\varepsilonq{cond:1}{T\ll N^{-2},\quad \rho \ll 1,\quad r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r,}
we have
\varepsilonqq{\norm{u(T)}{H^s}\sim \norm{U_p[\partialhi ](T)}{H^s}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A).}
Now, we set
\varepsilonqq{r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1},}
so that $\rho = (\log N)^{-1}\ll 1$.
The super-critical assumption $s<s_c(d,p)=\frac{d}{2}-\frac{2}{p-1}$ ensures that
\varepsilonqq{T\sim (\log N)^{\frac{d(p+1)}{2|s|}(p-1)}N^{(s-\frac{d}{2})(p-1)}\ll N^{-2}.}
Moreover, since $f_s(A)\gtrsim A^{\frac{d}{2}+s}$ for any $s<0$ and $A\ge 1$, we see that
\varepsilonqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gtrsim r\rho ^{p-1}A^{s}N^{-s}\sim \log N\gg (\log N)^{-1}=r.}
Therefore, \varepsilonqref{cond:1} is fulfilled and we have $\tnorm{u(T)}{H^s}\gtrsim \log N$.
Noticing $\tnorm{\partialhi}{H^s}\sim r=(\log N)^{-1}$ and $T\ll N^{-2}$, we show norm inflation by letting $N\to \infty$.
\underline{Case 2}: $Z=\Bo{R}$ or $\Bo{T}$, $(p,q)=(2,0)$ or $(2,2)$, $-\frac{3}{2}\le s<-1$.
We take the same initial data $\partialhi$ as in Case 1, but with
\varepsilonqq{r=(\log N)^{-1},\quad A=1,\quad T=(\log N)^{-1}N^{-2}.}
Then, $T\ll N^{-2}$, $\rho =(\log N)^{-2}N^{-2-s}\ll 1$ by $s\ge -\frac{3}{2}$ and
\varepsilonqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho N^{-s}= (\log N)^{-3}N^{-2-2s}\gg 1\gg r}
by $s<-1$.
Hence, \varepsilonqref{cond:1} holds and we have $\tnorm{u(T)}{H^{s}}\sim (\log N)^{-3}N^{-2-2s}\gg 1$, which together with $\tnorm{\partialhi}{H^s}\sim r\ll 1$ and $T\ll 1$ shows norm inflation by taking $N$ large.
\underline{Case 3}: $Z=\Bo{R}$ or $\Bo{T}$, $p=3$, $s=-\frac{1}{2}$.
Take the same $\partialhi$ as in Case 1, but with
\varepsilonqq{r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{12}}N^{-2}.}
Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{4}}\ll 1$ and
\varepsilonqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{24}}\gg 1\gg r.}
Hence, \varepsilonqref{cond:1} holds and we have $\tnorm{u(T)}{H^{-\frac{1}{2}}}\sim (\log N)^{\frac{1}{24}}\gg 1$, which implies norm inflation as well.
\underline{Case 4}: $Z=\Bo{R} ^2$ or $\Bo{R} \widetildemes \Bo{T}$ or $\Bo{T}^2$, $(p,q)=(2,0)$ or $(2,2)$, $s=-1$.
We follow the argument in Case 1 again, but with
\varepsilonqq{r=(\log N)^{-\frac{1}{12}},\quad A\sim (\log N)^{-\frac{1}{4}}N,\quad T=(\log N)^{-\frac{1}{6}}N^{-2}.}
Then, $T\ll N^{-2}$, $\rho \sim (\log N)^{-\frac{1}{2}}\ll 1$ and
\varepsilonqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\sim r\rho A^{-1}N(\log A)^{\frac{1}{2}}\sim (\log N)^{\frac{1}{6}}\gg 1\gg r.}
Hence, \varepsilonqref{cond:1} holds and we have $\tnorm{u(T)}{H^{-1}}\sim (\log N)^{\frac{1}{6}}\gg 1$, which shows NI$_{-1}$.
\underline{Case 5}: $Z=\Bo{R} ^{d_1}\widetildemes \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<0$.
Take $\partialhi$ as in Lemma~\ref{lem:U_p} (iii) and choose $r,T$ as $r=(\log N)^{-1}$ and $T=N^s$, which implies
\varepsilonqq{T\ll 1,\quad \rho \sim rN^{-s}T=(\log N)^{-1}\ll 1,\quad r\rho N^{-s}\sim (\log N)^{-2}N^{-s}\gg 1\gg r.}
From Lemmas~\ref{lem:U_k_H^s} and \ref{lem:U_p}, we have $\tnorm{u(T)}{H^s}\sim \norm{U_2[\partialhi ](T)}{H^s}\sim (\log N)^{-2}N^{-s}\gg 1$, and norm inflation occurs.
\underline{Case 6}: $Z=\Bo{T}$, $(p,q)=(4,1)$ or $(4,2)$ or $(4,3)$, and $-\frac{1}{6}\le s<0$.
Take $\partialhi$ as in Lemma~\ref{lem:U_p} (iv), and then take $r=(\log N)^{-1}$ and $T=N^{3s}$, which implies
\varepsilonqq{T\ll 1,\quad \rho \sim rN^{-s}T^{\frac{1}{3}}=(\log N)^{-1}\ll 1,\quad r\rho N^{-s}\sim (\log N)^{-2}N^{-s}\gg 1\gg r.}
Again, we have $\tnorm{u(T)}{H^s}\sim \norm{U_4[\partialhi ](T)}{H^s}\sim (\log N)^{-4}N^{-s}\gg 1$.
\underline{Case 7}: $Z=\Bo{R} ^d$ with $1\le d\le 3$, $(p,q)=(2,1)$, and $\frac{d}{2}-2\le s<-\frac{1}{4}$.
In this case the data $\partialhi$ is taken as in Lemma~\ref{lem:U_p} (ii) and does not satisfy \varepsilonqref{cond:phi}, so we need to modify the previous argument.
We use anisotropic modulation space $\widetilde{M}$ defined by the norm
\varepsilonqq{\norm{f}{\widetilde{M}}:=\sum _{\xi \in \Bo{Z}^{d-1}\widetildemes N^{-1}\Bo{Z}}\norm{\widehat{f}}{L^2(\xi +\widetilde{Q}_{N^{-1}})}.}
We have the product estimate
\varepsilonqq{\tnorm{fg}{\widetilde{M}}\lesssim N^{-\frac{1}{2}}\tnorm{f}{\widetilde{M}}\tnorm{g}{\widetilde{M}}}
in this space.
Thus, we follow the proof of Lemma~\ref{lem:U_k} to obtain
\varepsilonqq{\tnorm{U_k[\partialhi ](t)}{\widetilde{M}}\le Cr(CrN^{-\frac{1}{2}-s}t)^{k-1}N^{-s}}
for any $k\ge 1$, which is used to justify the expansion of the solution in $\widetilde{M}$ up to time $T$ such that $\widetilde{\rho}:=rN^{-\frac{1}{2}-s}T\ll 1$.
Then, by the same argument as in the proofs of Lemmas~\ref{lem:supp} and \ref{lem:U_k_H^s}, we see that
\varepsilonqq{|\supp{\widehat{U_k[\partialhi ]}(t)}|\le C^kN^{-1},\qquad \tnorm{U_k[\partialhi ](T)}{H^s}\le Cr(C\widetilde{\rho})^{k-1}N^{-s}.}
In particular, $\tnorm{U_2[\partialhi ](T)}{H^s}\sim r\widetilde{\rho}N^{-s}$ for $0<T\ll 1$ by Lemma~\ref{lem:U_p} (iii).
Now, we take $r=(\log N)^{-1}\ll 1$, $T=(\log N)^3N^{2s+\frac{1}{2}}\ll 1$, so that $\widetilde{\rho}=(\log N)^2N^s\ll 1$, $r\widetilde{\rho}N^{-s}=\log N \gg r$.
From the estimates above, we have $\tnorm{u(T)}{H^s}\sim \log N \gg 1$, which shows norm inflation.
\varepsilonnd{proof}
\section{Proof of Theorem~\ref{thm:main}}\lambdabel{sec:proof}
Here, we see how to use the estimates for single-term nonlinearities for the proof in the multi-term cases.
We write $p:=\max _{1\le j\le n}p_j$.
For the initial value problem \varepsilonqref{NLS'}, the $k$-th order term $U_k[\partialhi ]$ in the expansion of the solution is given by $U_1[\partialhi ]:=e^{it\Deltalta}\partialhi$ and
\varepsilonqq{U_k[\partialhi ]:=-i\sum _{j=1}^n\nu _j\sum _{\mat{k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k}}\int _0^t e^{i(t-\tau )\Deltalta}\mu _{p_j,q_j}\big( U_{k_1}[\partialhi ](\tau ),\dots ,U_{k_{p_j}}[\partialhi ](\tau )\big) \,d\tau }
for $k\ge 2$ inductively.
The following lemmas are verified in the same manner as Lemmas~\ref{lem:a_k}, \ref{lem:U_k}, and Corollary~\ref{cor:lwp}.
\betagin{lem}\lambdabel{lem:a_k'}
Let $\shugo{b_k}_{k=1}^\infty$ be a sequence of nonnegative real numbers such that
\[ b_{k} \le \sum _{j=1}^nC_j\sum _{\mat{k_1,\dots ,k_{p_j}\ge 1\\ k_1+\dots +k_{p_j}=k}}b_{k_1}\cdots b_{k_{p_j}},\qquad k\ge 2\]
for some $p_1,\dots ,p_n\ge 2$ and $C_1,\dots ,C_n>0$.
Then, we have
\[ b_k\le b_1C_0^{k-1},\qquad k\ge 1,\qquad C_0=\max _{1\le j\le n}\frac{\partiali ^2}{6}(nC_jp_j^2)^{\frac{1}{p_j-1}}b_1.\]
\varepsilonnd{lem}
\betagin{lem}\lambdabel{lem:U_k'}
There exists $C>0$ such that for any $\partialhi \in M_A$ with $\tnorm{\partialhi}{M_A}\le M$ we have
\varepsilonqq{\norm{U_k[\partialhi ](t)}{M_A}\le t^{\frac{k-1}{p-1}}(CA^{\frac{d}{2}}M)^{k-1}M}
for any $0\le t\le 1$ and $k\ge 1$.
\varepsilonnd{lem}
\betagin{lem}\lambdabel{lem:MAlwp'}
Let $\partialhi \in M_A$ with $\tnorm{\partialhi}{M_A}\le M$.
If $T>0$ satisfies $A^{\frac{d}{2}}MT^{\frac{1}{p-1}}\ll 1$, then a unique solution $u\in C([0,T];M_A)$ to \varepsilonqref{NLS'} exists and has the expansion $u=\sum _{k=1}^\infty U_k[\partialhi ]$.
\varepsilonnd{lem}
The next lemma can be verified similarly to Lemma~\ref{lem:U_k_H^s}.
\betagin{lem}\lambdabel{lem:U_k_H^s'}
Let $\partialhi$ satisfy \varepsilonqref{cond:phi} and $s<0$.
Then, the following holds.
\betagin{enumerate}
\item $\norm{U_1[\partialhi ](T)}{H^s}\le Cr$\hspace{10pt} for any $T\ge 0$.
\item $\norm{U_k[\partialhi ](T)}{H^s}\le Cr(C\rho)^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A)$\hspace{10pt} for any $0\le T\le 1$ and $k\ge 2$, where
\varepsilonqq{\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\quad (p=\max _{1\le j\le n}p_j),\qquad f_s(A)=\norm{\LR{\xi}^s}{L^2(\shugo{|\xi |\le A})}.}
\varepsilonnd{enumerate}
\varepsilonnd{lem}
We now begin to prove Theorem~\ref{thm:main}.
\betagin{proof}[Proof of Theorem~\ref{thm:main}]
We divide the proof into two cases:
(I) One of the terms of order $p$ (highest order) is responsible for norm inflation, or (II) a lower order term determines the range of regularities for norm inflation.
Note that (II) occurs only when $Z=\Bo{R}$, $p=3$, $F (u,\bar{u})$ has the term $u\bar{u}$ and $s\in (-\frac{1}{2},-\frac{1}{4})$.
(I): Rewrite the nonlinear terms as
\varepsilonqq{F (u,\bar{u})=\sum _{q=0}^p\nu _{p,q}\mu _{p,q}(u)+\text{(terms of order less than $p$)}.}
Note that $\nu _{p,q}$ may be zero but $(\nu _{p,0},\dots ,\nu _{p,p})\neq (0,\dots ,0)$.
We divide the series into four parts:
\varepsilonqq{\sum _{k=1}^\infty U_k[\partialhi ]&=U_1[\partialhi ]+\Big\{ \sum _{k=2}^{p}U_k[\partialhi ]-\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Deltalta}\mu _{p,q}\big( U_1[\partialhi ](\tau )\big) \,d\tau \Big) \Big\} \\
&\hspace{10pt} +\Big( -i\sum _{q=0}^p\nu _{p,q}\int _0^te^{i(t-\tau )\Deltalta}\mu _{p,q}\big( U_1[\partialhi ](\tau )\big) \,d\tau \Big) +\sum _{k=p+1}^\infty U_k[\partialhi ]\\
&=:U_1[\partialhi ]+U_{low}[\partialhi ]+U_{main}[\partialhi ]+U_{high}[\partialhi ].}
Note that $U_{low}=0$ if $p=2$.
The following lemma indicates how $U_{low}$ is dominated by $U_{main}$, and how the contributions of the $(p+1)$ terms in $U_{main}$ can be `separated'.
\begin{lem}\label{lem:U_p'}
We have the following:
\begin{enumerate}
\item Let $\phi$ satisfy \eqref{cond:phi} and $s<0$.
Let $0<T\le 1$, and assume that $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$.
Then, (if $p\ge 3$,)
\eqq{\norm{U_{low}[\phi](T)}{H^s}\lesssim r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}.}
\item Let $q_*\in \shugo{0,1,\dots ,p}$ be such that $\nu _{p,q_*}\neq 0$.
Then, for any $T\ge 0$ there exists $j\in \shugo{0,1,\dots ,p}$ such that
\eqq{\norm{U_{main}[e^{i\frac{j\pi}{p+1}}\phi ](T)}{H^s}\gtrsim \tnorm{G_{q_*}[\phi ](T)}{H^s},}
where
\eqq{G_q[\phi ](t):=-i\int _0^te^{i(t-\tau )\Delta}\mu _{p,q}(U_1[\phi ](\tau ))\,d\tau ;
\quad U_{main}[\phi ]=\sum _{q=0}^p\nu _{p,q}G_q[\phi ](t).}
\end{enumerate}
\end{lem}
\begin{proof}
(i) We notice that the nonlinear terms of highest order $p$ have nothing to do with $U_{low}[\phi ]$.
Hence, we estimate by Lemma~\ref{lem:U_k_H^s'} (ii) with $p$ replaced by $p-1$ and have
\eqq{\norm{U_{low}[\phi ](T)}{H^s}\le \sum _{k=2}^pCr(CrA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}})^{k-1}A^{-\frac{d}{2}}N^{-s}f_s(A).}
Since $0<T\le 1$ implies $rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{(p-1)-1}}\le \rho \ll 1$, we have
\eqq{\norm{U_{low}[\phi ](T)}{H^s}\lesssim r\cdot rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-2}}\cdot A^{-\frac{d}{2}}N^{-s}f_s(A).}
(ii)
We observe that $\zeta _p:=e^{i\frac{\pi}{p+1}}$ satisfies $\sum _{j=0}^{p}\zeta _p^{2qj}=0$ if $q\not\equiv 0\mod p+1$.
Since $G_q[\zeta _p^j\phi ]=\zeta _p^{(p-2q)j}G_q[\phi ]$, for any $0\le q_*\le p$ it holds that
\eqq{\sum _{j=0}^p\zeta _p^{(2q_*-p)j}U_{main}[\zeta _p^j\phi ]&=\sum _{q=0}^{p}\sum _{j=0}^p\zeta _p^{2(q_*-q)j}\nu _{p,q}G_q[\phi ]=(p+1)\nu _{p,q_*}G_{q_*}[\phi ].}
Hence, if $\nu _{p,q_*}\neq 0$, by the triangle inequality we see that
\eqq{\sum _{j=0}^p\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}\ge (p+1)|\nu _{p,q_*}|\norm{G_{q_*}[\phi ](T)}{H^s}.
}
This implies the claim.
\end{proof}
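For instance, for a quadratic nonlinearity ($p=2$) we have $\zeta _2=e^{i\frac{\pi}{3}}$ and $\sum _{j=0}^{2}\zeta _2^{2(q_*-q)j}=3\delta _{qq_*}$ for $q,q_*\in \shugo{0,1,2}$, so the three rotated data $\phi ,\zeta _2\phi ,\zeta _2^{2}\phi$ already suffice to isolate whichever of the contributions of $u^{2}$, $u\bar{u}$, $\bar{u}^{2}$ is present in $U_{main}$.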
By Lemma~\ref{lem:U_p'}, the proof is almost reduced to the case of single-term nonlinearities, as we see below.
\underline{Case 1}: General $Z$ and $p$, $s<\min \shugo{s_c(d,p),0}$.
Let us take the initial data $\phi$ as in Lemma~\ref{lem:U_p} (i), and assume $\rho =rA^{\frac{d}{2}}N^{-s}T^{\frac{1}{p-1}}\ll 1$, $0<T\ll N^{-2}$.
Lemma~\ref{lem:U_k_H^s'} (ii) yields that
\eqq{\norm{U_{high}[\zeta _p^j\phi ](T)}{H^s}\lesssim r\rho ^pA^{-\frac{d}{2}}N^{-s}f_s(A),}
while Lemma~\ref{lem:U_p'} (ii) and Lemma~\ref{lem:U_p} (i) imply that
\eqq{\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}\sim r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg \norm{U_{high}[\zeta _p^j\phi ](T)}{H^s}}
for an appropriate $j$.
Hence, from Lemma~\ref{lem:U_k_H^s'} (i) and Lemma~\ref{lem:U_p'} (i),
\eqq{\norm{u(T)}{H^s}&\ge \tfrac{1}{2}\norm{U_{main}[\zeta _p^j\phi ](T)}{H^s}-\norm{U_{low}[\zeta _p^j\phi ](T)}{H^s}-\norm{U_1[\zeta _p^j\phi ](T)}{H^s}\\
&\ge C^{-1}r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)-C\big( r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}+r\big) .}
If we take the same choice for $r,A,T$ as in Case~1 of the proof of Theorem~\ref{thm:main0};
\eqq{r=(\log N)^{-1},\quad A\sim (\log N)^{-\frac{p+1}{|s|}}N,\quad T=(A^{-\frac{d}{2}}N^s)^{p-1};\quad \text{so that}\hspace{10pt} \rho = (\log N)^{-1},}
all the required conditions for norm inflation are satisfied when $p=2$.
Even for $p\ge 3$, it suffices to check that
\eqq{r\rho ^{p-1}A^{-\frac{d}{2}}N^{-s}f_s(A)\gg r^2N^{-2s}f_s(A)T^{\frac{1}{p-2}}.}
This is equivalent to $\rho ^{p-2}\gg T^{\frac{1}{p-2}-\frac{1}{p-1}}$, which we can easily show.
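Indeed, with the above choices one has $\rho =(\log N)^{-1}$, while $T=(A^{-\frac{d}{2}}N^{s})^{p-1}\lesssim N^{-c}$ for some constant $c=c(d,p,s)>0$ (recall $A\sim (\log N)^{-\frac{p+1}{|s|}}N$ and $s<0$), so that
\[ \rho ^{p-2}=(\log N)^{-(p-2)}\gg N^{-\frac{c}{(p-1)(p-2)}}\gtrsim T^{\frac{1}{p-2}-\frac{1}{p-1}}, \]
because $\frac{1}{p-2}-\frac{1}{p-1}=\frac{1}{(p-1)(p-2)}>0$ for $p\ge 3$ and any fixed negative power of $N$ is eventually smaller than any fixed power of $(\log N)^{-1}$.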
\underline{Case 2-4-5-7}: $p=2$.
We need to deal with the following situations:
\betagin{itemize}
\item $d=1$, $\nu _{2,1}=0$, $-\frac{3}{2}\le s<-1$;
\item $d=2$, $\nu _{2,1}=0$, $s=-1$;
\item $Z=\Bo{R} ^{d_1}\times \Bo{T} ^{d_2}$ with $d_1+d_2\le 3$, $d_2\ge 1$, $\nu _{2,1}\neq 0$, and $\frac{d}{2}-2\le s<0$;
\item $Z=\Bo{R} ^d$, $1\le d\le 3$, $\nu _{2,1}\neq 0$, $\frac{d}{2}-2\le s<-\frac{1}{4}$,
\varepsilonnd{itemize}
which correspond to Cases 2, 4, 5, and 7 in the proof of Theorem~\ref{thm:main0}, respectively.
As seen in the preceding case, we do not have to care about $U_{low}$ and the proof is the same as the single-term cases, except that we need to pick up the appropriate one among $u^2$, $u\bar{u}$, $\bar{u}^2$ by using Lemma~\ref{lem:U_p'} (ii).
\underline{Case 3}: $d=1$, $p=3$, $s=-\frac{1}{2}$.
We take the initial data $e^{i\frac{j\pi}{4}}\phi$ with $\phi$ as in \eqref{cond:phi} and parameters $r,A,T$ as in Case~3 for Theorem~\ref{thm:main0}.
Following the argument in Case 1, it suffices to check the condition for $\tnorm{U_{main}}{H^s}\gg \tnorm{U_{low}}{H^s}$;
\eqq{r\rho ^2A^{-\frac{1}{2}}N^{\frac{1}{2}}f_{-\frac{1}{2}}(A)\gg r^2Nf_{-\frac{1}{2}}(A)T.}
Actually, we see that $\text{L.H.S.}\sim (\log N)^{\frac{1}{24}}\gg (\log N)^{\frac{1}{4}}N^{-1}\sim \text{R.H.S.}$
\underline{Case 6}: $Z=\Bo{T}$, $p=4$, $(\nu _{4,1},\nu _{4,2},\nu _{4,3})\neq (0,0,0)$, $s\in [-\frac{1}{6},0)$.
Similarly, we take $e^{i\frac{j\pi}{5}}\phi$ with parameters $r,A,T$ as in Case~6 for Theorem~\ref{thm:main0}.
It suffices to verify the condition
\eqq{r\rho ^{3}N^{-s}\gg r^2N^{-2s}T^{\frac{1}{2}},}
and in fact it holds that $\text{L.H.S.}\sim (\log N)^{-4}N^{-s}\gg (\log N)^{-2}N^{-\frac{s}{2}}\sim \text{R.H.S.}$
(II): Recall that we claim NI$_s$ for $s\in (-\frac{1}{2},-\frac{1}{4})$ in the case of $Z=\Bo{R}$, $p=3$, and $F (u,\bar{u})$ has the term $u\bar{u}$.
We take $\phi$ as in \eqref{cond:phi} with $A=N^{-1}$ and $\Sigma =\shugo{N}$ (same as in Case 7 for the single-term nonlinearity).
By Lemmas~\ref{lem:MAlwp'} and \ref{lem:U_k_H^s'}, we can expand the solution whenever $\rho =rN^{-\frac{1}{2}-s}T^{\frac{1}{2}}\ll 1$ and we have
\eqq{\sum _{k\ge 4}\tnorm{U_k[\phi ](T)}{H^s}\lesssim r\rho ^3N^{\frac{1}{2}-s}f_s(N^{-1})\sim r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}}}
for $0<T\le 1$.
For $U_3$, observing that the Fourier support is in the region $|\xi |\sim N$, we modify the estimate in Lemma~\ref{lem:U_k_H^s'} to obtain
\eqq{\tnorm{U_3[\phi ](T)}{H^s}\lesssim r\rho ^2N^{\frac{1}{2}-s}\cdot \tnorm{\LR{\xi}^s}{L^2(\supp \widehat{U_3[\phi ]})}\sim r^3N^{-1-2s}T.}
For $U_2$ the contribution from $u^2$ and $\bar{u}^2$ has the Fourier support in high frequency, thus being dominated by the contribution from $u\bar{u}$.
By Lemma~\ref{lem:U_p} (ii), we have
\eqq{\tnorm{U_2[\phi ](T)}{H^s}\gtrsim r^2N^{-\frac{1}{2}-2s}T}
if $0<T\ll 1$.
We set $r=(\log N)^{-1}$ and $T=(\log N)^3N^{2s+\frac{1}{2}}$ as before (Case 7 in the single-term case), then it holds that $T\ll 1$, $\rho =(\log N)^{\frac{1}{2}}N^{-\frac{1}{4}}\ll 1$ and
\eqq{\tnorm{u(T)}{H^s}\ge C^{-1}r^2N^{-\frac{1}{2}-2s}T-C\big( r+r^3N^{-1-2s}T+r^4N^{-\frac{3}{2}-4s}T^{\frac{3}{2}}\big) \gtrsim \log N\gg 1}
for $s\in [-\frac{3}{4},-\frac{1}{4})$, which gives the claimed norm inflation.
This concludes the proof of Theorem~\ref{thm:main}.
\varepsilonnd{proof}
\appendix
\section{Norm inflation with infinite loss of regularity}\label{sec:niilr}
In this section, we derive norm inflation with infinite loss of regularity for the problem with smooth gauge-invariant nonlinearities:
\begin{equation}\label{nuNLS}
\left\{
\begin{array}{@{\,}r@{\;}l}
i\partial _tu+\Delta u&=\pm |u|^{2\nu}u,\qquad t\in [0,T],\quad x\in Z=\Bo{R} ^{d-d_2}\times \Bo{T}^{d_2},\\
u(0,x)&=\phi (x),
\end{array}
\right.
\end{equation}
where $\nu$ is a positive integer.
The initial value problem \varepsilonqref{nuNLS} on $\Bo{R} ^d$ is invariant under the scaling $u(t,x)\mapsto \lambda ^{\frac{1}{\nu}}u(\lambda ^2t,\lambda x)$, and the critical Sobolev index is $s_c(d,2\nu +1)=\frac{d}{2}-\frac{1}{\nu}$, which is non-negative except for the case $d=\nu =1$.
\begin{prop}\label{prop:niilr}
We assume the following condition on $s$:
\begin{itemize}
\item If $d=\nu =1$, then $s<-\frac{2}{3}$;
\item if $d\ge 2$, $\nu =1$ and $d_2=0,1$ (i.e., $Z=\Bo{R} ^d$ or $\Bo{R} ^{d-1}\times \Bo{T}$), then $s<-\frac{1}{3}$;
\item if $d\ge 1$, $\nu \ge 2$ and $d_2=0$ (i.e., $Z=\Bo{R} ^d$), then $s<-\frac{1}{2\nu +1}$;
\item otherwise, $s<0$.
\end{itemize}
Then, NI$_s$ with infinite loss of regularity occurs for the initial value problem \eqref{nuNLS}:
For any $\delta >0$ there exist $\phi \in H^\infty$ and $T>0$ satisfying $\tnorm{\phi}{H^s}<\delta$, $0<T<\delta$ such that the corresponding smooth solution $u$ to \eqref{NLS} exists on $[0,T]$ and $\tnorm{u(T)}{H^\sigma}>\delta ^{-1}$ for \emph{all} $\sigma \in \Bo{R}$.
\footnote{
More precisely, we show $\tnorm{\widehat{u}(T)}{L^2(\{ |\xi |\le 1\} )}>\delta ^{-1}$.
This implies the claim if we define the Sobolev norm of negative indices $\sigma$ as $\tnorm{f}{H^\sigma}:=\tnorm{\min \{ 1,\,|\xi |^{\sigma}\} \widehat{f}(\xi )}{L^2}$.
}
\end{prop}
\begin{rem}
(i) The proofs of Theorems~\ref{thm:main0} and \ref{thm:main} are easily adapted to yield NI$_s$ with \emph{finite} loss of regularity in most cases.
However, we only consider here \emph{infinite} loss of regularity.
(ii) The coefficient of the nonlinearity is not important in the proof, and the same result holds for any non-zero complex constant.
(iii) To show infinite loss of regularity, we need to use the nonlinear interactions of very high frequencies which create a significant output in low frequency $\{ |\xi |\le 1\}$.
Except for the case $d=\nu =1$, there are such interactions that are also \emph{resonant}; i.e., there exist non-zero vectors $k_1,\dots ,k_{2\nu +1}\in \Bo{Z}^{d}$ satisfying
\eqq{\sum _{j=0}^{\nu}k_{2j+1}=\sum _{l=1}^{\nu}k_{2l},\qquad \sum _{j=0}^{\nu}|k_{2j+1}|^2=\sum _{l=1}^{\nu}|k_{2l}|^2.}
This is also the key ingredient in the proof of the previous results \cite{CDS12,CK17}, and hence the restriction on the range of $s$ in Proposition~\ref{prop:niilr} is the same as that in \cite{CDS12,CK17}.
A complete characterization of the resonant set
\eqq{\Sc{R}_{d,\nu}(k):=\Shugo{(k_m)_{m=1}^{2\nu +1}\in (\Bo{Z}^d)^{2\nu +1}}{k=\sum _{m=1}^{2\nu +1}(-1)^{m+1}k_m,\, |k|^2=\sum _{m=1}^{2\nu +1}(-1)^{m+1}|k_m|^2}}
(for $k\in \Bo{Z}^d$ given) is easily obtained in the $\nu =1$ case; see \cite[Proposition~4.1]{CK17} for instance.
In Proposition~\ref{prop:char} below, we will provide a complete characterization of the set $\Sc{R}_{1,2}(0)$, which may be of interest in itself.
Since $(k_m)_{m=1}^5\in \Sc{R}_{1,2}(k)$ if and only if $(k_m-k)_{m=1}^5\in \Sc{R}_{1,2}(0)$, we have a characterization of $\Sc{R}_{1,2}(k)$ for any $k\in \Bo{Z}$ as well.
However, in the proof of Proposition~\ref{prop:niilr} we only need the fact that $\Sc{R}_{d,\nu}(0)$ has an element consisting of non-zero vectors in $\Bo{Z}^d$, except for $(d,\nu )=(1,1)$.
\end{rem}
\begin{proof}[Proof of Proposition~\ref{prop:niilr}]
We follow the proof of Theorem~\ref{thm:main0} but take different initial data to show infinite loss of regularity.
Let $N\gg 1$ be a large positive integer and define $\phi \in H^\infty (Z)$ by
\eqq{\widehat{\phi}:=rN^{-s}\chi _{\Sigma +Q_1},}
where $r=r(N)>0$ is a constant to be chosen later, $Q_1:=[-\tfrac{1}{2},\tfrac{1}{2})^d$, and
\eqs{\Sigma :=
\begin{cases}
\shugo{N,2N} &\text{if $d=\nu =1$},\\
\shugo{Ne_{d-1},\,Ne_d,\,N(e_{d-1}+e_d)} &\text{if $d\ge 2$, $\nu =1$},\\
\shugo{Ne_d,\,3Ne_d,\,4Ne_d} &\text{if $d\ge 1$, $\nu \ge 2$},
\end{cases}\\
e_d:=(\underbrace{0,\dots ,0}_{d-1},1),\qquad e_{d-1}:=(\underbrace{0,\dots ,0}_{d-2},1,0).
}
The argument in Section~\ref{sec:proof0} (with $A=1$) shows the following:
\begin{itemize}
\item The unique solution $u=u[\phi ]$ to \eqref{nuNLS} exists on $[0,T]$ and has the power series expansion $u=\sum _{k=1}^\infty U_k[\phi ]$ if $\rho :=rN^{-s}T^{\frac{1}{2\nu}}\ll 1$.
\item $\tnorm{U_1[\phi ](T)}{H^s}=\tnorm{\phi}{H^s}\sim r$ for any $T\ge 0$.
\item $\tnorm{U_k[\phi ](T)}{H^s}\le C\rho ^{k-1}rN^{-s}$ for any $T\ge 0$ and $k\ge 2$.
\end{itemize}
For the first nonlinear term $U_{2\nu +1}[\phi]$, we observe that
\eqq{|\widehat{U_{2\nu +1}[\phi ]}(T,\xi )|&=c(rN^{-s})^{2\nu +1}\Big| \int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \Big( \int _0^Te^{it\Phi}\,dt\Big) \,d\xi _1\dots d\xi _{2\nu +1}\Big| ,}
where
\eqq{\Gamma :=\Shugo{(\xi _1,\dots ,\xi _{2\nu +1})}{\sum _{j=0}^\nu \xi _{2j+1}-\sum _{l=1}^\nu \xi _{2l}=\xi},\quad \Phi :=|\xi |^2-\sum _{j=0}^\nu |\xi _{2j+1}|^2+\sum _{l=1}^\nu |\xi _{2l}|^2.}
Now, we restrict $\xi$ to the low-frequency region $Q_{1/2}$.
If $d=\nu =1$, then we have
\eqq{&\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\
&=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{N+Q_1}(\xi _1)\chi _{2N+Q_1}(\xi _2)\chi _{N+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt,}
and $\Phi =O(N^2)$ in the integral.
If $d\ge 2$ and $\nu =1$, we have
\eqq{&\chi _{Q_{1/2}}(\xi )\int _\Gamma \prod _{m=1}^{2\nu+1}\chi _{\Sigma +Q_1}(\xi _m) \int _0^Te^{it\Phi}\,dt\\
&=2\chi _{Q_{1/2}}(\xi )\int _\Gamma \chi _{Ne_{d-1}+Q_1}(\xi _1)\chi _{N(e_{d-1}+e_d)+Q_1}(\xi _2)\chi _{Ne_d+Q_1}(\xi _3)\int _0^Te^{it\Phi}\,dt,}
and the resonant property implies that
\eqq{
\Phi =\begin{cases}
O(N) &\text{if $d_2=0,1$},\\
O(1) &\text{if $d_2\ge 2$}
\end{cases}
}
in the integral.
Therefore, in these cases we have the following lower bound:
\eq{est:A}{\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}\ge cT(rN^{-s})^{2\nu +1}=c\rho ^{2\nu}rN^{-s}}
\eqq{\text{for any}\quad
0<T\ll \begin{cases}
N^{-2} &\text{if $d=\nu =1$},\\
N^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$},\\
~1 &\text{if $d\ge 2$, $\nu =1$, $d_2\ge 2$}.
\end{cases}
}
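Let us briefly indicate why \eqref{est:A} holds under these restrictions on $T$: for such $T$ we have $T\sup |\Phi |\ll 1$ on the support of the integrand, so that $\mathrm{Re}\int _0^Te^{it\Phi}\,dt\ge \tfrac{1}{2}T$ there, and the set of $(\xi _1,\dots ,\xi _{2\nu +1})\in \Gamma$ with $\xi \in Q_{1/2}$ and each $\xi _m\in \Sigma +Q_1$ has measure bounded below by a positive constant.
Hence $|\widehat{U_{2\nu +1}[\phi ]}(T,\xi )|\gtrsim T(rN^{-s})^{2\nu +1}$ for $\xi \in Q_{1/2}$, and taking the $L^2_\xi (Q_{1/2})$ norm gives \eqref{est:A}, where we have also used $T(rN^{-s})^{2\nu +1}=\rho ^{2\nu}rN^{-s}$.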
The quintic and higher cases are slightly different.
On one hand, there are ``almost resonant'' interactions such as
\varepsilonqq{\partialrod _{j=1,3}\chi _{Ne_d+Q_1}(\xi _j)\partialrod _{l=2,4}\chi _{3Ne_d+Q_1}(\xi _l)\partialrod _{m=5}^{2\nu +1}\chi _{4Ne_d+Q_1}(\xi _m),}
for which it holds
\varepsilonqq{
\Phi =\betagin{cases}
O(N) &\text{if $d_2=0$},\\
O(1) &\text{if $d_2\ge 1$}
\varepsilonnd{cases}
}
in the integral.
On the other hand, some non-resonant interactions such as
\varepsilonqq{\chi _{3Ne_d+Q_1}(\xi _1)\chi _{4Ne_d+Q_1}(\xi _2)\partialrod _{m=3}^{2\nu +1}\chi _{Ne_d+Q_1}(\xi _m)}
also create low-frequency modes, with $|\Phi |\sim N^2$ in the integral.
Hence, if we choose $T>0$ as
\eqq{
N^{-2}\ll T\ll \begin{cases}
N^{-1} &\text{if $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\
~1 &\text{if $d\ge 1$, $\nu \ge 2$, $d_2\ge 1$},
\end{cases}
}
then
\eqq{
\begin{cases}
\mathrm{Re} \Big( \displaystyle\int _0^Te^{it\Phi}\,dt\Big) \ge \frac{1}{2}T &\text{for ``almost resonant'' interactions},\\[10pt]
\Big| \displaystyle\int _0^Te^{it\Phi}\,dt\Big| \le CN^{-2}\ll T &\text{for non-resonant interactions},
\end{cases}
}
so that no cancellation occurs among ``almost resonant'' interactions, which dominate the non-resonant interactions.
Therefore, we have \varepsilonqref{est:A} for such $T$ as above.
Finally, we set
\eqq{
\begin{cases}
r:=N^{s+\frac{2}{3}}\log N,\quad T:=N^{-2}(\log N)^{-1} &\text{if $d=\nu =1$},\\
r:=N^{s+\frac{1}{2\nu +1}}\log N,\quad T:=N^{-1}(\log N)^{-1} &\text{if $d\ge 2$, $\nu =1$, $d_2=0,1$}\\[-5pt]
&\quad \text{or $d\ge 1$, $\nu \ge 2$, $d_2=0$},\\
r:=N^s\log N,\quad T:=(\log N)^{-(2\nu +\frac{1}{2})} &\text{otherwise}.
\end{cases}
}
We see that, under the assumption on $s$, $\tnorm{\phi}{H^s}\sim r\ll 1$, $T\ll 1$, $\rho \ll 1$, and
\eqq{\norm{\widehat{u}(T)}{L^2(Q_{1/2})}&\ge c\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}-C\Big( \norm{U_1[\phi ](T)}{H^s}+\sum _{l\ge 2} \norm{U_{2\nu l+1}[\phi ](T)}{H^s}\Big) \\
&\ge c\norm{\widehat{U_{2\nu +1}[\phi ]}(T)}{L^2(Q_{1/2})}\gg 1.}
We conclude the proof by letting $N\to \infty$.
\end{proof}
At the end of this section, we give a characterization of resonant interactions creating the zero mode in the one-dimensional quintic case.
\begin{prop}\label{prop:char}
The quintuplet $(k_1,\dots ,k_5)\in \Bo{Z}^5$ satisfies
\eq{cond:res}{k_1+k_3+k_5=k_2+k_4,\qquad k_1^2+k_3^2+k_5^2=k_2^2+k_4^2}
if and only if
\eq{char}{\shugo{k_1,\,k_3,\,k_5}&=\shugo{ap,\,bq,\,(a+b)(p+q)},\\
\shugo{k_2,\,k_4}&=\shugo{ap+(a+b)q,\,(a+b)p+bq}}
for some $a,b,p,q\in \Bo{Z}$.
\end{prop}
\begin{exm}
(i) Taking $a=p=b=q=1$ in \eqref{char}, we have the quintuplet
$(1,3,1,3,4)$ which has appeared in the proof of Proposition~\ref{prop:niilr} above.
Also, with $(a,b,p,q)=(-1,2,-2,1)$ we have $(2,3,2,0,-1)$, which gives a resonant interaction for quartic nonlinearities $u^3\bar{u}$, $u\bar{u}^3$ exploited in the proof of Lemma~\ref{lem:U_p}~(iv) above.
(ii) The quintuplets $(pq,-q^2,-pq,p^2,p^2-q^2)$ given in \cite[Lemma~4.2]{CK17} can be obtained by setting $a=-q$, $b=p$ in \eqref{char}.
\end{exm}
\begin{proof}[Proof of Proposition~\ref{prop:char}]
The \emph{if} part is verified by a direct computation (recorded below for completeness), so we show the \emph{only if} part.
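Indeed, with $\shugo{k_1,\,k_3,\,k_5}$ and $\shugo{k_2,\,k_4}$ as in \eqref{char}, one has
\begin{align*}
ap+bq+(a+b)(p+q)&=\big( ap+(a+b)q\big) +\big( (a+b)p+bq\big) ,\\
(ap)^2+(bq)^2+(a+b)^2(p+q)^2&=\big( ap+(a+b)q\big) ^2+\big( (a+b)p+bq\big) ^2,
\end{align*}
the second identity following upon expanding both sides, since $2a(a+b)pq+2b(a+b)pq=2(a+b)^2pq$.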
Let $(k_1,\dots ,k_5)\in \Bo{Z}^5$ satisfy \varepsilonqref{cond:res}.
We start with observing that at least one of $k_1,k_3,k_5$ is an even integer; otherwise, we would have
\eqq{k_1^2+k_3^2+k_5^2\equiv 3\not\equiv 1\equiv k_2^2+k_4^2\mod 4,}
contradicting \eqref{cond:res}.
Without loss of generality, we assume $k_5$ to be even and set
\eqq{n_j:=k_j-\tfrac{1}{2}k_5\in \Bo{Z}\quad (j=1,\dots ,5),\qquad n_6:=-\tfrac{1}{2}k_5\in \Bo{Z}.}
From \eqref{cond:res} we see that
\eqq{n_1+n_3+n_5=n_2+n_4+n_6,\quad n_1^2+n_3^2=n_2^2+n_4^2,\quad n_5=-n_6.}
The second equality implies that two vectors $(n_1-n_2,n_3-n_4), (n_1+n_2,n_3+n_4)\in \Bo{Z}^2$ are orthogonal to each other (unless one of them is zero), which allows us to write
\eq{id:A}{(n_1-n_2,n_3-n_4)=\alpha (q,p),\quad (n_1+n_2,n_3+n_4)=\beta (-p,q)}
with $\alpha ,\beta ,p,q\in \Bo{Z}$.
Note that $n_1,\dots ,n_4$ are then written as
\varepsilonqq{n_1=\tfrac{1}{2}(\alpha q-\beta p),\qquad n_2=-\tfrac{1}{2}(\alpha q+\beta p),\\
n_3=\tfrac{1}{2}(\alpha p+\beta q),\qquad n_4=-\tfrac{1}{2}(\alpha p-\beta q),}
and that
\varepsilonqq{n_5=-n_6=\tfrac{1}{2}(n_5-n_6)=-\tfrac{1}{2}\big\{ (n_1-n_2)+(n_3-n_4)\big\} =-\tfrac{1}{2}\alpha (p+q).}
Recalling $k_j=n_j-n_6$ ($j=1,\dots ,5$), we have
\eq{ks}{&k_1=-\tfrac{1}{2}(\alpha +\beta )p,\qquad k_3=-\tfrac{1}{2}(\alpha -\beta )q,\qquad k_5=-\alpha (p+q),\\
&\qquad k_2=-\tfrac{1}{2}(\alpha +\beta )p-\alpha q,\qquad k_4=-\tfrac{1}{2}(\alpha -\beta )q-\alpha p.}
We next claim that the integers $\alpha ,\beta ,p,q$ can be chosen in \eqref{id:A} so that $\alpha$ and $\beta$ have the same parity.
To see this, we notice that the four integers $n_1\pm n_2$, $n_3\pm n_4$ are of the same parity, since all of
\eqs{(n_1+n_2)+(n_1-n_2)=2n_1,\qquad (n_3+n_4)+(n_3-n_4)=2n_3,\\
(n_1-n_2)+(n_3-n_4)=n_6-n_5=2n_6}
are even.
If $n_1\partialm n_2$, $n_3\partialm n_4$ are odd integers, then by \varepsilonqref{id:A} $\alpha$ and $\beta$ must be odd.
So, we assume that they are all even.
If one of $p,q$ is odd, then both $\alpha$ and $\beta$ must be even.
If both $p$ and $q$ are even, we replace $(\alpha ,\beta ,p,q)$ with $(2\alpha ,2\beta ,p/2,q/2)$ to obtain another expression \varepsilonqref{id:A} with both $\alpha$ and $\beta$ being even.
Hence, the claim is proved.
Finally, we set $a:=-\frac{1}{2}(\alpha +\beta )$, $b:=-\frac{1}{2}(\alpha -\beta )$, both of which are integers.
Inserting them into \eqref{ks}, we find the expression \eqref{char}.
\end{proof}
\section{Norm inflation for 1D cubic NLS at the critical regularity}\label{sec:ap}
In this section, we consider the particular equation
\begin{equation}\label{cNLS}
\left\{
\begin{array}{@{\,}r@{\;}l}
i\partial _tu+\partial _x^2 u&=\pm |u|^2u,\qquad t\in [0,T],\quad x\in Z=\Bo{R} \text{~or~} \Bo{T} ,\\
u(0,x)&=\phi (x).
\end{array}
\right.
\end{equation}
We will show the inflation of the Besov-type scale-critical Sobolev and Fourier-Lebesgue norms with an additional logarithmic factor:
\begin{defn}
For $1\le p<\infty$, $1\le q\le \infty$ and $\alpha \in \Bo{R}$, define the $D^{[\alpha]}_{p,q}$-norm by
\eqq{\norm{f}{D^{[\alpha]}_{p,q}}:=\Big\| N^{-\frac{1}{p}}\LR{\log N}^\alpha \norm{\widehat{f}}{L^p_\xi (\{ N\le \LR{\xi}<2N\} )}\Big\| _{\ell ^q_N(2^{\Bo{Z}_{\ge 0}})} .}
We also define the $D^{s}_{p,q}$-norm for $s\in \Bo{R}$ by
\eqq{\norm{f}{D^{s}_{p,q}}:=\Big\| N^{s}\norm{\widehat{f}}{L^p_\xi (\{ N\le \LR{\xi}<2N\} )}\Big\| _{\ell ^q_N(2^{\Bo{Z}_{\ge 0}})} .}
\end{defn}
\betagin{rem}
(i) We see that $D^{[0]}_{2,q}=D^{-\frac{1}{2}}_{2,q}=B^{-\frac{1}{2}}_{2,q}$ (Besov norm) and $D^{[0]}_{p,p}=\Sc{F} L^{-\frac{1}{p},p}$ (Fourier-Lebesgue norm).
In the case of $Z=\Bo{R}$, the homogeneous version of $D^{[0]}_{p,q}$ is scale invariant for any $p,q$.
(ii) We have the embeddings $D^{[\alpha ]}_{p_2,q}\hookrightarrow D^{[\alpha]}_{p_1,q}$ if $p_1\le p_2$,
$D^{[\alpha ]}_{p,q_1}\hookrightarrow D^{[\alpha ]}_{p,q_2}$ if $q_1\le q_2$.
(iii) We will not consider the space $D^{[\alpha ]}_{p,q}$ with $p=\infty$ here, since our argument seems valid only in the space of negative regularity.
\varepsilonnd{rem}
\betagin{prop}\lambdabel{prop:A}
For the Cauchy problem \varepsilonqref{cNLS}, norm inflation occurs in the following cases:
(i) In $D^{[\alpha ]}_{p,q}$ for any $1\le q\le \infty$ and $\alpha <\frac{1}{2q}$, if $\frac{3}{2}\le p<\infty$.
(ii) In $D^{[\alpha ]}_{p,q}$ and $D^s_{p,q}$ for any $1\le q\le \infty$, $\alpha \in \Bo{R}$ and $s<-\frac{2}{3}$, if $1\le p<\frac{3}{2}$.
\varepsilonnd{prop}
\betagin{rem}
(i) If $\frac{3}{2}\le p<\infty$ and $1\le q<\infty$, Proposition~\ref{prop:A} shows inflation of a ``logarithmically subcritical'' norm (i.e., $D^{[\alpha ]}_{p,q}$ with $\alpha >0$).
Moreover, if $1\le p<\frac{3}{2}$ we show norm inflation in $D^{s}_{p,q}$ for subcritical regularities $-\frac{2}{3}>s>-\frac{1}{p}$.
However, for $q=\infty$ and $p\ge \frac{3}{2}$, inflation is not detected even in the critical norm $D^{[0]}_{p,\infty}$.
(ii) In \cite[Theorem~4.7]{KVZ17p} global-in-time a priori bound was established in $D^{[\frac{3}{2}]}_{2,2}$ and $D^{[2]}_{2,\infty}$.
Recently, Oh and Wang \cite{OW18p} proved global-in-time bound in $\Sc{F} L^{0,p}$ for $Z=\Bo{T}$ and $2\le p<\infty$.
There are still some gaps between these results and ours.
In fact, Proposition~\ref{prop:A} shows inflation of $D^{[\frac{1}{4}-]}_{2,2}$ and $D^{[0-]}_{2,\infty}$ norms, as well as in a norm only logarithmically stronger than $\Sc{F} L^{-\frac{1}{p},p}$ for $p\ge 2$.
(iii) Guo \cite{G17} also studied \varepsilonqref{cNLS} on $\Bo{R}$ in ``almost critical'' spaces.
It would be interesting to compare our result with \cite[Theorem~1.8]{G17}, where he showed well-posedness (and hence a priori bound) in some Orlicz-type generalized modulation spaces which are barely smaller than the critical one $M_{2,\infty}$.
There is no conflict between these results, because the function spaces for which norm inflation is claimed in Proposition~\ref{prop:A} are not included in $M_{2,\infty}$ due to negative regularity.
Note also that the function spaces in \cite[Theorem~1.8]{G17} admit the initial data $\partialhi$ of the form $\widehat{\partialhi}(\xi )=[\log (2+|\xi |)]^{-\gammamma}$ only for $\gamma >2$ (see \cite[Remark~1.9]{G17}), while it belongs to $D_{p,q}^{[\alpha ]}$ if $\gamma >\alpha +\frac{1}{q}$.
(iv) In contrast to the results in \cite{KVZ17p,OW18p}, complete integrability of the equation will play no role in our argument.
In particular, Proposition~\ref{prop:A} still holds if we replace the nonlinearity in \varepsilonqref{cNLS} with any of the other cubic terms $u^3,\bar{u}^3,u\bar{u}^2$ or any linear combination of them with complex coefficients.
\varepsilonnd{rem}
\betagin{proof}[Proof of Proposition~\ref{prop:A}]
We follow the argument in Section~\ref{sec:proof0}.
For $1\le \rho <\infty$ and $A>0$, let $M^\rho _A$ be the rescaled modulation space defined by the norm
\varepsilonqq{\norm{f}{M^\rho _A}:=\sum _{\xi \in A\Bo{Z}}\norm{\widehat{f}}{L^\rho (\xi +I_A)},\qquad I_A:=[ -\tfrac{A}{2},\tfrac{A}{2}) .}
It is easy to see that $M_A^\rho $ is a Banach algebra with a product estimate:
\varepsilonqq{\norm{fg}{M_A^\rho}\le CA^{1-\frac{1}{\rho}}\norm{f}{M^\rho _A}\norm{g}{M^\rho _A}.}
Mimicking the proof of Lemma~\ref{lem:U_k}, we see that the operators $U_k$ defined as in Definition~\ref{defn:U_k} satisfy
\varepsilonq{est:A1}{\norm{U_k[\partialhi ](t)}{M_A^\rho}\le t^{\frac{k-1}{2}}\big( CA^{1-\frac{1}{\rho}}\tnorm{\partialhi}{M_A^\rho}\big) ^{k-1}\tnorm{\partialhi}{M_A^\rho},\qquad t\ge 0,\quad k\ge 1.}
We also recall that from Corollary~\ref{cor:lwp}, the power series expansion of the solution map $u[\partialhi ]=\sum _{k\ge 1}U_k[\partialhi ]$ is verified in $C([0,T];M_A^2)$ whenever
\varepsilonq{cond:A1}{0<T\ll \big( A^{\frac{1}{2}}\tnorm{\partialhi}{M_A^2}\big) ^{-2}.}
For the proof of norm inflation in $D^{[\alpha ]}_{p,q}$, we restrict the initial data $\partialhi$ to those of the form \varepsilonqref{cond:phi}; for given $N\gg 1$, we set
\varepsilonqq{\widehat{\partialhi}:=rA^{-\frac{1}{p}}N^{\frac{1}{p}}\chi_{(N+I_A)\cup (2N+I_A)},}
where $r>0$ and $1\ll A\ll N$ will be specified later according to $N$.
Then, since $\tnorm{\partialhi}{M_A^2}\sim rA^{\frac{1}{2}-\frac{1}{p}}N^{\frac{1}{p}}$, the condition \varepsilonqref{cond:A1} is equivalent to
\varepsilonq{cond:A2}{0<r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\ll 1.}
Moreover, it holds that
\varepsilonq{est:A2}{\norm{U_1[\partialhi ](T)}{D^{[\alpha ]}_{p,q}}=\norm{\partialhi}{D^{[\alpha ]}_{p,q}}\sim r(\log N)^\alpha ,\qquad T\ge 0,}
and similarly to Lemma~\ref{lem:U_p} (i), that
\varepsilonq{est:A3}{\norm{U_3[\partialhi ](T)}{D^{[\alpha ]}_{p,q}}&\ge cT\big( rA^{-\frac{1}{p}}N^{\frac{1}{p}}\big) ^3A^2\norm{\Sc{F} ^{-1}\chi _{I_{A/2}}}{D^{[\alpha ]}_{p,q}},\qquad 0<T\le \tfrac{1}{100}N^{-2},\\
&=c\Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^\alpha (A),}
where
\varepsilonqq{f_{p,q}^\alpha (A):=\norm{\Sc{F} ^{-1}\chi _{I_{A/2}}}{D^{[\alpha ]}_{p,q}}\sim \betagin{cases}
(\log A)^{\alpha +\frac{1}{q}}, &\alpha >-\frac{1}{q},\\
(\log \log A)^{\frac{1}{q}}, &\alpha =-\frac{1}{q},\\
~1, &\alpha <-\frac{1}{q}.\varepsilonnd{cases}}
For estimating $U_{2l+1}[\partialhi]$, $l\ge 2$ in $D^{[\alpha ]}_{p,q}$, we first observe that
\varepsilonqq{\norm{U_k[\partialhi ](T)}{D^{[\alpha ]}_{p,q}}\le \norm{\Sc{F} ^{-1}\chi _{\supp{\widehat{U_k[\partialhi ]}(T)}}}{D^{[\alpha ]}_{p,q}}\norm{\widehat{U_k[\partialhi ]}(T)}{L^\infty}.
}
A simple computation yields that
\varepsilonqq{\norm{\Sc{F} ^{-1}\chi _\Omega}{D^{[\alpha ]}_{p,q}}\le C\norm{\Sc{F} ^{-1}\chi _{I_{|\Omega |}}}{D^{[\alpha ]}_{p,q}}}
for any measurable set $\Omega \subset \Bo{R}$ of finite measure.
From Lemma~\ref{lem:supp}, we have
\varepsilonqq{\big| \supp{\widehat{U_k[\partialhi ]}(T)}\big| \le C^kA, \qquad T\ge 0,\quad k\ge 1,}
and hence,
\varepsilonqq{\norm{\Sc{F} ^{-1}\chi _{\supp{\widehat{U_k[\partialhi ]}(T)}}}{D^{[\alpha ]}_{p,q}}\le C\norm{\Sc{F} ^{-1}\chi _{I_{C^kA}}}{D^{[\alpha ]}_{p,q}}\le C^kf_{p,q}^\alpha (A).}
Moreover, similarly to Lemma~\ref{lem:U_k_H^s} (ii), we use Young's inequality, \varepsilonqref{est:A1} and Lemma~\ref{lem:a_k} to obtain
\varepsilonqq{\norm{\widehat{U_k[\partialhi ]}(T)}{L^\infty}&\le \sum _{\mat{k_1,k_2,k_3\ge 1\\k_1+k_2+k_3=k}}\int _0^T\norm{\widehat{U_{k_1}[\partialhi ]}(t)}{M^{\frac{3}{2}}_A}\norm{\widehat{U_{k_2}[\partialhi ]}(t)}{M^{\frac{3}{2}}_A}\norm{\widehat{U_{k_3}[\partialhi ]}(t)}{M^{\frac{3}{2}}_A}\,dt\\
&\le \int _0^Tt^{\frac{k-3}{2}}\,dt\cdot \big( CrA^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-3}\big( CrA^{\frac{2}{3}-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{3}\\
&\le C\big( CrT^{\frac{1}{2}}A^{1-\frac{1}{p}}N^{\frac{1}{p}}\big) ^{k-1}rA^{-\frac{1}{p}}N^{\frac{1}{p}},\qquad T\ge 0,\quad k\ge 3.
}
Hence, we have
\varepsilonq{est:A4}{\norm{U_k[\partialhi ](T)}{D^{[\alpha ]}_{p,q}}\le C\Big[ Cr(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^{k-1}r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f_{p,q}^\alpha (A),\quad
T\ge 0,~~k\ge 3.}
From \varepsilonqref{cond:A2}--\varepsilonqref{est:A4}, we only need to check if there exist $r,A,T$ such that
\varepsilonq{cond:A3}{1\ll A\ll N,\qquad r\ll (\log N)^{-\alpha} ,\qquad (TN^2)\le \tfrac{1}{100},\qquad\qquad \\
\Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2\ll 1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^\alpha _{p,q}(A).}
When $1\le p<\frac{3}{2}$, it holds that $2(1-\frac{1}{p})\ge 0>2(1-\frac{1}{p})-\frac{1}{p}$.
Hence, we may choose
\varepsilonqs{r=(\log N)^{\min \{ -\alpha ,0\} -1},\quad A=N^{\frac{1}{2}},\quad T=\tfrac{1}{100}N^{-2},}
which clearly satisfies \varepsilonqref{cond:A3}.
(Note that $f^\alpha _{p,q}(A)\gtrsim 1$ for any $p,q,\alpha$.)
If $\frac{3}{2}\le p<\infty$, \varepsilonqref{cond:A3} would imply that
\varepsilonqq{1 \ll \Big[ r(TN^2)^{\frac{1}{2}}\Big( \frac{A}{N}\Big) ^{1-\frac{1}{p}}\Big] ^2r\Big( \frac{A}{N}\Big) ^{-\frac{1}{p}}f^\alpha _{p,q}(A)\lesssim r^3f^\alpha _{p,q}(A)\ll (\log N)^{-3\alpha}f^\alpha _{p,q}(A).}
In particular, when $\alpha >-\frac{1}{q}$ this condition requires
\varepsilonqq{(\log N)^{3\alpha}\ll (\log N)^{\alpha +\frac{1}{q}},}
which shows the necessity of the restriction $\alpha <\frac{1}{2q}$ in our argument.
We now see the possibility of choosing $r,A,T$ with the condition \varepsilonqref{cond:A3} in the following two cases separately: (a) If $1\le q<\infty$ and $0\le \alpha<\frac{1}{2q}$, we may take for instance
\varepsilonqq{r=(\log N)^{-\alpha}(\log \log N)^{-1},\quad A=N(\log \log N)^{-1},\quad T=\tfrac{1}{100}N^{-2}.}
(Note that $f^\alpha _{p,q}(A)\sim f^\alpha _{p,q}(N)\sim (\log N)^{\alpha +\frac{1}{q}}$.)
(b) If $\alpha <0$, we take
\varepsilonqq{r=(\log N)^{-\alpha}(\log \log N)^{-1},\quad A=N(\log N)^{\alpha (1-\frac{1}{p})^{-1}},\quad T=\tfrac{1}{100}N^{-2}.}
In both cases we easily show \varepsilonqref{cond:A3}.
Finally, we assume $1\le p<\frac{3}{2}$ and prove norm inflation in $D^s_{p,q}$ for $s<-\frac{2}{3}$.
We use the initial data $\partialhi$ of the form
\varepsilonqq{\widehat{\partialhi}:=rN^{-s}\chi_{[N,N+1]\cup [2N,2N+1]}.}
Then, the condition \varepsilonqref{cond:A1} with $A=1$ is equivalent to
\varepsilonqq{0<T^{\frac{1}{2}}rN^{-s}=(TN^2)^{\frac{1}{2}}rN^{-s-1}\ll 1.}
Repeating the argument above we also verify that
\varepsilonqs{\norm{U_1[\partialhi ](T)}{D^s_{p,q}}=\norm{\partialhi}{D^s_{p,q}}\sim r,\\
\norm{U_3[\partialhi ](T)}{D^s_{p,q}}\ge c\big( T^\frac{1}{2}rN^{-s}\big) ^2rN^{-s}=c(TN^2)r^3N^{-3s-2}\qquad \text{if $T\le \tfrac{1}{100}N^{-2}$},\\
\norm{U_k[\partialhi ](T)}{D^s_{p,q}}\le C\big( CT^\frac{1}{2}rN^{-s}\big) ^{k-1}rN^{-s},\qquad T\ge 0,~k\ge 3.
}
Hence, we set
\varepsilonqq{r=N^{s+\frac{2}{3}}\log N,\qquad T=\tfrac{1}{100}N^{-2},}
so that for $s<-\frac{2}{3}$ we have
\varepsilonqs{\norm{U_1[\partialhi ](T)}{D^s_{p,q}}\sim N^{s+\frac{2}{3}}\log N\ll 1,\qquad \norm{U_3[\partialhi ](T)}{D^s_{p,q}}\gtrsim (\log N)^3\gg 1, \\
\sum _{l\ge 2}\norm{U_{2l+1}[\partialhi ](T)}{D^s_{p,q}}\lesssim N^{-\frac{2}{3}}(\log N)^5\ll 1,}
from which norm inflation is detected by letting $N\to \infty$.
\varepsilonnd{proof}
\medskip
\noindent \textbf{Acknowledgments:}
The author would like to thank Tadahiro Oh for his generous suggestion and encouragement.
This work is partially supported by JSPS KAKENHI Grant-in-Aid for Young Researchers (B)
No.~24740086 and No.~16K17626.
\betagin{thebibliography}{00}
\bibitem{AC09} T.~Alazard and R.~Carles, \varepsilonmph{Loss of regularity for supercritical nonlinear Schr\"odinger equations}, Math.~Ann. \textbf{343} (2009), no. 2, 397--420.
\bibitem{BT06} I.~Bejenaru and T.~Tao, \varepsilonmph{Sharp well-posedness and ill-posedness results for a quadratic non-linear Schr\"odinger equation}, J.~Funct.~Anal. \textbf{233} (2006), no. 1, 228--259.
The latest version is in \texttt{arXiv:math/0508210}
\bibitem{B93-1} J.~Bourgain, \varepsilonmph{Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, I, Schr\"odinger equations}, Geom. Funct. Anal. \textbf{3} (1993), 107--156.
\bibitem{BGT02} N.~Burq, P.~G\'erard, and N.~Tzvetkov, \varepsilonmph{An instability property of the nonlinear Schr\"odinger equation on $S^d$}, Math.~Res.~Lett. \textbf{9} (2002), no. 2-3, 323--335.
\bibitem{C07} R.~Carles, \varepsilonmph{Geometric optics and instability for semi-classical Schr\"odinger equations}, Arch.~Ration.~Mech.~Anal. \textbf{183} (2007), no. 3, 525--553.
\bibitem{CDS12} R.~Carles, E.~Dumas, and C.~Sparber, \varepsilonmph{Geometric optics and instability for NLS and Davey-Stewartson models}, J.~Eur.~Math.~Soc. \textbf{14} (2012), no. 6, 1885--1921.
\bibitem{CK17} R.~Carles and T.~Kappeler, \varepsilonmph{Norm-inflation with infinite loss of regularity for periodic NLS equations in negative Sobolev spaces}, Bull. Soc. Math. France \textbf{145} (2017), no. 4, 623--642.
\bibitem{CP16} A.~Choffrut and O.~Pocovnicu, \varepsilonmph{Ill-posedness of the cubic nonlinear half-wave equation and other fractional NLS on the real line}, Int.~Math.~Res.~Not. IMRN (\textbf{2018}), no. 3, 699--738.
\bibitem{CCT03} M.~Christ, J.~Colliander, and T.~Tao, \varepsilonmph{Asymptotics, frequency modulation, and low regularity ill-posedness for canonical defocusing equations}, Amer.~J.~Math. \textbf{125} (2003), no. 6, 1235--1293.
\bibitem{CCT03p-1} M.~Christ, J.~Colliander, and T.~Tao, \varepsilonmph{Ill-posedness for nonlinear Schr\"odinger and wave equations}, preprint (2003). \texttt{arXiv:math/0311048}
\bibitem{CCT03p-2} M.~Christ, J.~Colliander, and T.~Tao, \varepsilonmph{Instability of the periodic nonlinear Schr\"odinger equation}, preprint (2003). \texttt{arXiv:math/0311227}
\bibitem{CCT08} M.~Christ, J.~Colliander, and T.~Tao, \varepsilonmph{A priori bounds and weak solutions for the nonlinear Schr\"odinger equation in Sobolev spaces of negative order}, J.~Funct.~Anal. \textbf{254} (2008), no. 2, 368--395.
\bibitem{FLS87} F.~Falk, E.W.~Laedke, and K.H.~Spatschek, \varepsilonmph{Stability of solitary-wave pulses in shape-memory alloys}, Phys.~Rev.~B \textbf{36} (1987), no. 6, 3031--3041.
\bibitem{F83} H.G.~Feichtinger, \varepsilonmph{Modulation spaces on locally compact Abelian groups}, Technical Report,
University of Vienna, 1983; Published in ``Proc.~Internat.~Conf.~on Wavelets and Applications'', New Delhi Allied Publishers, 2003, 1--56.
\bibitem{G00p} A.~Gr\"unrock, \varepsilonmph{Some local wellposedness results for nonlinear Schr\"odinger equations below $L^2$}, preprint (2000). \texttt{arXiv:math/0011157}
\bibitem{G17} S.~Guo, \varepsilonmph{On the 1D cubic nonlinear Schr\"odinger equation in an almost critical space}, J.~Fourier Anal.~Appl. \textbf{23} (2017), no. 1, 91--124.
\bibitem{GO18} Z.~Guo and T.~Oh, \varepsilonmph{Non-existence of solutions for the periodic cubic NLS below $L^2$}, Int. Math. Res. Not. IMRN (\textbf{2018}), no. 6, 1656--1729.
\bibitem{GNT09} S.~Gustafson, K.~Nakanishi, and T.P.~Tsai, \varepsilonmph{Scattering theory for the Gross-Pitaevskii equation in three dimensions}, Commun.~Contemp.~Math. \textbf{11} (2009), no. 4, 657--707.
\bibitem{HMO16} H.~Huh, S.~Machihara, and M.~Okamoto, \varepsilonmph{Well-posedness and ill-posedness of the Cauchy problem for the generalized Thirring model}, Differential Integral Equations \textbf{29} (2016), no. 5-6, 401--420.
\bibitem{IO15} T.~Iwabuchi and T.~Ogawa, \varepsilonmph{Ill-posedness for the nonlinear Schr\"odinger equation with quadratic non-linearity in low dimensions}, Trans.~Amer.~Math.~Soc. \textbf{367} (2015), no. 4, 2613--2630.
\bibitem{IU15} T.~Iwabuchi and K.~Uriya, \varepsilonmph{Ill-posedness for the quadratic nonlinear Schr\"odinger equation with nonlinearity $|u|^2$}, Commun.~Pure Appl.~Anal. \textbf{14} (2015), no. 4, 1395--1405.
\bibitem{KPV96-NLS} C.E.~Kenig, G.~Ponce, and L.~Vega, \varepsilonmph{Quadratic forms for the $1$-D semilinear Schr\"odinger equation}, Trans.~Amer.~Math.~Soc. \textbf{348} (1996), no. 8, 3323--3353.
\bibitem{KPV01} C.E.~Kenig, G.~Ponce, and L.~Vega, \varepsilonmph{On the ill-posedness of some canonical dispersive equations}, Duke Math.~J. \textbf{106} (2001), no. 3, 617--633.
\bibitem{KVZ17p} R.~Killip, M.~Vi\c{s}an, and X.~Zhang, \varepsilonmph{Low regularity conservation laws for integrable PDE}, preprint (2017). \texttt{arXiv:1708.05362}
\bibitem{K09} N.~Kishimoto, \varepsilonmph{Low-regularity bilinear estimates for a quadratic nonlinear Schr\"odinger equation}, J.~Differential Equations \textbf{247} (2009), no. 5, 1397--1439.
\bibitem{KT10} N.~Kishimoto and K.~Tsugawa, \varepsilonmph{Local well-posedness for quadratic nonlinear Schr\"odinger equations and the ``good'' Boussinesq equation}, Differential Integral Equations \textbf{23} (2010), no. 5-6, 463--493.
\bibitem{KT07} H.~Koch and D.~Tataru, \varepsilonmph{A priori bounds for the 1D cubic NLS in negative Sobolev spaces}, Int.~Math.~Res.~Not. IMRN \textbf{2007}, no. 16, Art.ID rnm053, 36 pp.
\bibitem{KT12} H.~Koch and D.~Tataru, \varepsilonmph{Energy and local energy bounds for the 1-d cubic NLS equation in $H^{-1/4}$}, Ann.~Inst.~H.~Poincar\'e Anal.~Non Lin\'eaire \textbf{29} (2012), no. 6, 955--988.
\bibitem{KT16p} H.~Koch and D.~Tataru, \varepsilonmph{Conserved energies for the cubic NLS in 1-d}, preprint (2016). \texttt{arXiv:1607.02534}
\bibitem{MO15} S.~Machihara and M.~Okamoto, \varepsilonmph{Ill-posedness of the Cauchy problem for the Chern-Simons-Dirac system in one dimension}, J.~Differential Equations \textbf{258} (2015), no. 4, 1356--1394.
\bibitem{MO16} S.~Machihara and M.~Okamoto, \varepsilonmph{Sharp well-posedness and ill-posedness for the Chern-Simons-Dirac system in one dimension}, Int.~Math.~Res.~Not. (\textbf{2016}), no. 6, 1640--1694.
\bibitem{M09} L.~Molinet, \varepsilonmph{On ill-posedness for the one-dimensional periodic cubic Schr\"odinger equation}, Math.~Res.~Lett. \textbf{16} (2009), no. 1, 111--120.
\bibitem{O17} T.~Oh, \varepsilonmph{A remark on norm inflation with general initial data for the cubic nonlinear Schr\"odinger equations in negative Sobolev spaces}, Funkcial.~Ekvac. \textbf{60} (2017), 259--277.
\bibitem{OS12} T.~Oh and C.~Sulem, \varepsilonmph{On the one-dimensional cubic nonlinear Schr\"odinger equation below $L^2$}, Kyoto J.~Math. \textbf{52} (2012), no. 1, 99--115.
\bibitem{OW15p} T.~Oh and Y.~Wang, \varepsilonmph{On the ill-posedness of the cubic nonlinear Schr\"odinger equation on the circle}, to appear in An. \c{S}tiin\c{t}. Univ. Al. I. Cuza Ia\c{s}i. Mat. (N.S.).
\bibitem{OW18p} T.~Oh and Y.~Wang, \varepsilonmph{Global well-posedness of the one-dimensional cubic nonlinear Schr\"odinger equation in almost critical spaces}, preprint (2018). \texttt{arXiv:1806.08761}
\bibitem{Ok17} M.~Okamoto, \varepsilonmph{Norm inflation for the generalized Boussinesq and Kawahara equations}, Nonlinear Anal. \textbf{157} (2017), 44--61.
\bibitem{RSW12} M.~Ruzhansky, M.~Sugimoto, and B.~Wang, \varepsilonmph{Modulation spaces and nonlinear evolution equations}, Evolution equations of hyperbolic and Schr\"odinger type, 267--283, Progr.~Math., \textbf{301}, Birkh\"auser/Springer Basel AG, Basel, 2012.
\bibitem{T87} Y.~Tsutsumi, \varepsilonmph{$L^2$-solutions for nonlinear Schr\"odinger equations and nonlinear groups}, Funkcial.~Ekvac. \textbf{30} (1987), no. 1, 115--125.
\varepsilonnd{thebibliography}
\varepsilonnd{document} |
\begin{document}
\subjclass[2010]{Primary 35R30; Secondary 35R25, 30C35.}
\maketitle
\begin{abstract}
We consider the two-dimensional version of Calder\'on's problem. When the D-N map is assumed to be known up to an error level $\varepsilon_0$, we investigate how the resolution in the determination of the unknown conductivity deteriorates the farther one goes from the boundary. We provide explicit formulas for the resolution, which apply to conductivities that are perturbations, concentrated near an interior point $q$, of the homogeneous conductivity.
\end{abstract}
\section{Introduction}
We consider the well-known Calder\'{o}n's inverse boundary value problem, also known as Electrical Impedance Tomography. Given $K\ge 1$, and $\gamma \in L^{\infty}(\Omega)$, such that $K^{-1}\le \gamma\le K$, the so-called Dirichlet-to-Neumann map
\[ \Lambda_{\gamma}: H^{1/2}(\partial \Omega) \longrightarrow H^{-1/2}(\partial \Omega) \]
is the operator which associates to each $\varphi \in H^{1/2}(\partial \Omega)$ the conormal derivative $\gamma \partial_{\nu}u \in H^{-1/2}(\partial \Omega)$, where $u$ is the weak solution to the Dirichlet problem
\begin{equation}\label{basicDpb}
\left\{
\begin{array}{llllll}
\text{div}( \gamma \nabla u) =0 \ ,&\text{in}& \Omega \ ,&\\
u = \varphi \ ,&\text{on} &\partial \Omega\ .&
\end{array}
\right.
\end{equation}
Calder\'{o}n's problem asks for the determination of $\gamma$, given $\Lambda_{\gamma}$ \cite{Ca}. We refer to Uhlmann \cite{U} for a thorough review on the progress and on the state of the art for this problem.
It is well-known that this problem is ill-posed \cite{A88, A07} and that, assuming a-priori regularity bounds of any order on $\gamma$, the best possible stability of $\gamma$ in terms of $\Lambda_{\gamma}$ is of logarithmic type, Mandache \cite{Man}. See also \cite{Fa} for the latest result of stability under minimal a-priori assumptions in the two--dimensional case, and for an updated reference list.
On the other hand, under minimal regularity assumptions, it is known that the boundary values of $\gamma$ depend in a Lipschitz fashion on $\Lambda_{\gamma}$,
\cite{Sy-U-2, A88, Bro}. It is then natural to ask how the determination of the values of $\gamma$ deteriorates the deeper we go inside the domain $\Omega$.
In this direction we mention the result of Nagayasu, Uhlmann and Wang \cite{NUW} who consider two--valued conductivities of the form
\[ \gamma = 1 + ( c - 1) \chi_{D} \ , \; c>0 \ ,\]
when $\Omega = B_R(0) \subset \mathbb R^2$ and the domain $D$ is a small perturbation of a disk $B_r(0)$, $0<r<R$. Examining the linearization $\mathrm{d}\Lambda$ of the corresponding Dirichlet-to-Neumann map, they show that the dependence of the infinitesimal domain variation on $\mathrm{d}\Lambda$ deteriorates at a logarithmic rate as $r\to 0$.
Also in this note, we shall treat the two-dimensional setting, but we shall consider more general perturbations of the homogeneous conductivity $\gamma_0\equiv 1$, and, rather than examining stability, we shall discuss a more crude notion of \emph{resolution}.
Let us briefly illustrate here our notion of resolution. Given an error level $\varepsilon_0>0$ on the Dirichlet-to-Neumann map, we shall say that two conductivities $\gamma_1, \gamma_2$ are \emph{indistinguishable} if $\|\Lambda_{\gamma_1}-\Lambda_{\gamma_2}\|_{*} \le \varepsilon_0$. Here $\|\cdot\|_*$ denotes the appropriate $H^{1/2}(\partial \Omega) \longrightarrow H^{-1/2}(\partial \Omega)$ norm. Next, fixing a disk $B_{\rho}(q)\subset \Omega$, we consider the class $\Gamma_\Omega (\rho, q)$ of conductivities which are perturbations of the reference homogeneous conductivity $\gamma_0\equiv 1$, and which may differ from $\gamma_0$ only inside $B_{\rho}(q)$. We shall call \emph{resolution limit} at level $\varepsilon_0$, for the point $q$, the largest $\rho>0$ such that all conductivities in $\Gamma_\Omega (\rho, q)$ are indistinguishable.
We recall that a related notion of \emph{distinguishability} has already been introduced by Isaacson and Cheney in \cite{Isa,IC}.
The main result of this note is the explicit calculation of such a resolution limit for all $q\in \Omega$ in two specific geometrical settings. Namely, when $\Omega$ is the unit disk $B_1(0)$ and when $\Omega$ is the half plane $\mathbb{H}^+$. Such explicit formulas illustrate that the resolution deteriorates as the distance from the boundary increases.
Our approach is based on few elementary facts.
(I) When $\Omega=B_1(0)$ the resolution limit for the center $q=0$ can be explicitly computed by separation of variables, \cite{A88}.
(II) The quadratic form $\lang \Lambda_{\gamma}\varphi, \varphi \rang = \int_{\Omega}\gamma |\nabla u|^2$
where $u$ and $\varphi$ are as in \eqref{basicDpb} is invariant under conformal mappings.
(III) The quadratic form $\lang \Lambda_{\gamma}\varphi, \varphi \rang$ above is monotone with respect to the conductivity $\gamma$. This is a well-known fact in the theory of EIT and has been used in many instances in the past \cite{A89,AR98,Ike,KSS, ARS00}.
(IV) The explicit classical description in terms of M\"obius transformations of the automorphisms of the disk and of the conformal mappings of the half plane onto the disk enables us to reinterpret the formula for the resolution limit at each point of $B_1(0)$ or of $\mathbb{H}^+$.
In particular, we shall see that the case of the half plane is especially instructive, because in this case the resolution limit depends linearly on the depth.
We wish to mention that, while this paper was in preparation, the authors became aware of the preprint by Garde and Knudsen \cite{GK} where similar considerations are developed. It may be noticed, however, that the present approach has some differences.
i) In \cite{GK} only two-phase perturbations of the reference homogeneous conductivity are considered, whereas here we are able to treat any variable perturbation.
ii) In \cite{GK} the error on the data is evaluated with respect to the $L^{2}\longrightarrow L^{2}$ norm, instead of the $H^{1/2}\longrightarrow H^{-1/2}$ norm, as we do here. This last choice, besides being physically motivated, has the fundamental advantage of being conformally invariant.
iii) Here we examine the case of the half plane, which may be especially suggestive in connection to geophysical applications.
In Section 2 we shall introduce the functional framework necessary for our analysis. The specific feature that we emphasize is that we are able to treat bounded and unbounded simply connected domains in the plane with equal simplicity. Next we show the basic conformal invariance of the functional spaces just introduced and of the Dirichlet-to-Neumann map. Finally we rigorously formulate the notions of indistinguishability and of resolution limit.
In section 3 we compute the resolution limit. First we treat the case of the resolution limit at the center of the disk. Next we compute the resolution limit at an arbitrary point in the disk.
We examine the asymptotic behavior of the resolution limit with respect to the relevant parameters: depth, error level and ellipticity.
We conclude with the formulas for the half plane.
\section{Preliminaries.}
We shall use the standard identification of $\mathbb{R}^2$ with $\mathbb{C}$. Depending on the circumstances, points in the plane shall be represented by pairs $x = (x_1 , x_2)$ of real numbers or by a single complex number $z$.
Let $\Omega$ be a simply connected domain in $\mathbb{R}^2$, whose boundary is $C^{1, \alpha}$, $0 < \alpha < 1$.
Let $K \ge 1$. Throughout the paper we shall consider conductivities $\gamma \in L^\infty (\Omega)$ which satisfy the following ellipticity condition:
\begin{equation} \label{eq:condizione-ellitticita} K^{-1} \le \gamma \le K .\end{equation}
In $H^1_{\text{loc}} (\Omega)$ we consider the equivalence relation: $u \sim v$ if and only if $u - v$ is constant. We define $H^1_{\Diamond}(\Omega)$ as the set of equivalence classes $[u]_{\sim}$ of functions $u \in H^1_{\text{loc}} (\Omega)$ satisfying $\int_\Omega | \nabla u |^2 < \infty$. On $H^1_{\Diamond}(\Omega)$ we consider the norm given by
\[ \| [u]_\sim \|^2 = \int_\Omega | \nabla u |^2 .\]
From now on we shall simply write $u$ instead of $[u]_\sim \in H^1_\Diamond(\Omega)$. Let us remark that similar conventions have already been used, see for instance \cite[Section 16.1.2]{AIM}.
\noindent
The corresponding trace space is defined as follows
\[ H^{1/2}_\Diamond (\partial \Omega) = H^1_\Diamond (\Omega)/ H_0^1(\Omega) . \]
On $H^{1/2}_{\Diamond}(\partial \Omega)$ we consider the norm given by
\[ \| \varphi \|_{H^{1/2}_{\Diamond} (\partial \Omega)} = \inf_{\substack{u \in H^1_\Diamond (\Omega) \\ u |_{\partial \Omega} = \varphi}} \| \nabla u \|_{L^2(\Omega)}.\]
Let us denote
\[ \| \varphi \|_{1/2} = \| \varphi \|_{H^{1/2}_{\Diamond} (\partial \Omega)}. \]
Let $\gamma \in L^\infty(\Omega)$ satisfy \eqref{eq:condizione-ellitticita}, and let $u \in H^1_\Diamond (\Omega)$ be the weak solution to
\begin{equation} \label{eq:problema-variazionale} \left \{ \begin{split} \text{div} (\gamma \nabla u) = 0 , & \:\text{ in } \Omega \\ u = \varphi , & \:\text{ on } \partial \Omega\end{split} \right .\end{equation}
where $\varphi \in H^{1/2}_\Diamond (\partial \Omega)$.
By the Riesz representation theorem it is clear that the solution to \eqref{eq:problema-variazionale} exists and is unique.
\begin{definition} We denote by $H^{-1/2} (\partial \Omega)$ the dual space to $H^{1/2}_\Diamond (\partial \Omega)$ and we denote by $\lang \cdot , \cdot \rang$ the $L^2(\partial \Omega)$-based duality between these spaces. Then we define the D-N map as follows
\[ \Lambda_\gamma : H^{1/2}_\Diamond (\partial \Omega) \longrightarrow H^{-1/2} (\partial \Omega) \]
for every $\varphi_1 , \varphi_2 \in H^{1/2}_\Diamond (\partial \Omega) $
\begin{equation} \label{eq:triangolo} \lang \Lambda_\gamma \varphi_1 , \varphi_2 \rang = \int_\Omega \gamma \nabla u_1 \cdot \nabla v_2 , \end{equation}
where $u_1$ is the solution to \eqref{eq:problema-variazionale} satisfying the boundary condition $\varphi = \varphi_1$ and $v_2$ is any function in $H^1_\Diamond (\Omega)$ satisfying $v_2 |_{\partial \Omega} = \varphi_2$.
\end{definition}
\begin{lemma}[Conformal invariance] \label{lem:lem_1} Let $\Omega, \Omega'$ be two simply connected domains whose boundaries are $C^{1,\alpha}$ and let $\omega : \Omega' \longrightarrow \Omega$ be a conformal map between them. Let $\gamma$ satisfy \eqref{eq:condizione-ellitticita}. Then for all $\varphi_1 , \varphi_2 \in H^{1/2}_\Diamond (\partial \Omega)$ we have
\[ \lang \Lambda_{\gamma} \varphi_1 , \varphi_2 \rang \;=\; \lang \Lambda_{\gamma \circ \omega} \psi_1 , \psi_2 \rang , \]
where $\psi_i = \varphi_i \circ \omega$, $i = 1, 2$.
\end{lemma}
\begin{proof} Given $\varphi_1, \varphi_2 \in H^{1/2}_\Diamond (\partial \Omega)$ we consider $u_1 , u_2 \in H^{1}_\Diamond (\Omega)$ such that
\[ \left \{ \begin{split} \text{div} (\gamma \nabla u_i) = 0 ,& \:\text{ in } \Omega \\ u_i = \varphi_i , & \:\text{ on } \partial \Omega\end{split} \right . \]
Since $\partial \Omega , \partial \Omega'$ are $C^{1,\alpha}$, it is well-known that $\omega$ extends (with the same regularity) to a diffeomorphism from $\overline{\Omega'}$ to $\overline{\Omega}$, $x = \omega(y)$. The Cauchy-Riemann equations can be written as follows
\[ \left ( \frac{\partial y}{\partial x} \right ) \left ( \frac{\partial y}{\partial x} \right )^T = \left | \text{det } \frac{\partial y}{\partial x} \right | I ,\]
hence
\[ \begin{split} \lang \Lambda_{\gamma} \varphi_1 , \varphi_2 \rang &= \int_{\Omega} \gamma(x) \: \nabla_x u_1 \cdot \nabla_x u_2 \text{ d} x \\ & = \int_{\Omega'} \gamma (\omega(y)) \: \frac{\left ( \frac{\partial y}{\partial x} \right )^T \nabla_y u_1 \left ( \frac{\partial y}{\partial x} \right )^T \nabla_y u_2}{\left | \text{det } \frac{\partial y}{\partial x} \right |} \text{ d} y \\ & = \int_{\Omega'} \gamma (\omega(y)) \: \nabla_y u_1 \cdot \nabla_y u_2 \text{ d} y \\ & = \lang \Lambda_{\gamma \circ \omega} \psi_1 , \psi_2 \rang .
\end{split} \]
where $\psi_i = \varphi_i \circ \omega$, $i = 1,2$.
\end{proof}
\begin{corollary} \label{cor:corollary_1} Let $\varphi \in H^{1/2}_{\Diamond} (\partial \Omega)$. For any conformal map $\omega : \Omega' \longrightarrow \Omega$ we have
\[ \| \varphi \circ \omega \|_{H^{1/2}_{\Diamond} (\partial \Omega')} = \| \varphi \|_{H^{1/2}_{\Diamond} (\partial \Omega)}. \]
\end{corollary}
\begin{proof}We use Lemma \ref{lem:lem_1} with $\gamma \equiv 1$.\end{proof}
\begin{definition} Let us denote by $\| \cdot \|_*$ the $\mathscr{L} (H^{1/2}_{\Diamond} , H^{-1/2})$-norm, that is
\[ \| L \|_* = \sup_{\| \varphi \|_{1/2} = 1} \| L \varphi \|_{-1/2}. \]
\end{definition}
\begin{remark} We recall that if $L : H^{1/2}_{\Diamond}(\partial \Omega) \longrightarrow H^{-1/2}(\partial \Omega) $ is selfadjoint then we also have
\[ \| L \|_* = \sup_{\| \varphi \|_{1/2} = 1} | \lang L \varphi , \varphi \rang | .\]
Hence this formula may be applied when $L = \Lambda$ is a D-N map and also when $L = \Lambda_1 - \Lambda_2$ is the difference of two D-N maps.
\end{remark}
\begin{corollary} \label{cor:cor_conformal-map-norm} Let $\gamma_{1} , \gamma_2$ be two conductivities in $\Omega$ and let $\omega : {\Omega'} \longrightarrow {\Omega}$ be a conformal map. Then
\[ \| \Lambda_{\gamma_1 \circ \omega} - \Lambda_{\gamma_2 \circ \omega} \|_* = \| \Lambda_{\gamma_1} - \Lambda_{\gamma_2}\|_* .\]
\end{corollary}
\begin{proof} Immediate consequence of Lemma \ref{lem:lem_1} and its Corollary \ref{cor:corollary_1}.\end{proof}
\begin{definition} We introduce the class
\[ \Gamma_\Omega (\rho, q) = \left \{ \gamma \in L^\infty (\Omega) \; : \; K^{-1} \le \gamma \le K , \; \gamma = 1 + \chi_{B_\rho(q)} ( \gamma - 1 ) \right \} \]
as the family of conductivities which are perturbations of the homogeneous conductivity $\gamma \equiv 1$, localized in $B_\rho (q)$. We shall call the point $q$ the \emph{center of the perturbation}.
\end{definition}
\begin{definition} Let $\varepsilon_0 > 0$ be the error level admitted on the known measurement of the map $\Lambda_\gamma$. We shall say that two conductivities $\gamma_1, \gamma_2$ are $\varepsilon_0$-\emph{indistinguishable} if \[ \| \Lambda_{\gamma_1} - \Lambda_{\gamma_2} \|_* \le \varepsilon_0 . \]
\end{definition}
\begin{definition} Given the disk $B_\rho (q)$, we denote two specific elements of $\Gamma_\Omega (\rho , q)$ as follows:
\[ \gamma_K = ( 1 + \chi_{B_\rho(q)} ( K - 1) ) , \]
\[ \gamma_{K^{-1}} = ( 1 + \chi_{B_\rho(q)} ( K^{-1} - 1) ). \]
\end{definition}
Note that for all $\gamma \in \Gamma_\Omega (\rho , q)$
\[ \gamma_{K^{-1}} \le \gamma \le \gamma_K. \]
For this reason it is sensible to call $\gamma_K , \gamma_{K^{-1}}$ the \emph{extreme conductivities} in $\Gamma_{\Omega} (\rho , q)$.
\begin{definition} We define the \emph{resolution limit} (at level $\varepsilon_0$) relative to the center $q \in \Omega$ as the number \[ \ell_q = \sup \left \{ \rho > 0 : \text{ for all } \gamma_1 , \gamma_2 \in \Gamma_{\Omega} ( \rho , q),\; \gamma_1 , \gamma_2 \text{ are indistinguishable} \right \}. \]
\end{definition}
For the sake of brevity, when $\rho, q$ are kept fixed, we denote by $\Lambda_i$ the map $\Lambda_{\gamma_i}$ and by $\Lambda_K$, $\Lambda_{K^{-1}}$ the maps $\Lambda_{\gamma_{K}}, \Lambda_{\gamma_{K^{-1}}}$ respectively.
\begin{lemma} \label{lem:lem_monotonicity} Let $\gamma_1 , \gamma_2 \in \Gamma_\Omega (\rho,q)$. Then the following estimate holds:
\[ \| \Lambda_{1} - \Lambda_{2} \|_* \le \| \Lambda_{K} - \Lambda_{K^{-1}} \|_* . \]
\end{lemma}
\begin{proof} Given $\gamma \in \Gamma_\Omega (\rho, q)$, by the Dirichlet principle applied to \eqref{eq:problema-variazionale}, for all $\varphi \in H^{1/2}_{\Diamond}( \partial \Omega)$ we have
\[ \lang \Lambda_{\gamma} \varphi , \varphi \rang = \inf_{\substack{u \in H^{1}_{\Diamond} (\Omega)\\u_{|\partial \Omega }= \varphi}} \int_\Omega \gamma | \nabla u |^2 . \]
Hence
\[ \lang \Lambda_{K^{-1}} \varphi , \varphi \rang \; \le \; \lang \Lambda_{i} \varphi , \varphi \rang \; \le \;\lang \Lambda_{K} \varphi , \varphi \rang \;,\quad i = 1,2 , \]
and consequently
\[ | \lang ( \Lambda_{1} - \Lambda_{2} ) \varphi , \varphi \rang | \le \; \lang (\Lambda_{K} - \Lambda_{K^{-1}}) \varphi , \varphi \rang .\]
\end{proof}
\begin{corollary} \label{cor:cor_1}
\[ \ell_q = \sup \left \{ \rho > 0 : \gamma_K , \gamma_{K^{-1}} \in \Gamma_{\Omega} ( \rho , q) \text{ are indistinguishable} \right \}. \]
\end{corollary}
\begin{proof} Immediate consequence of Lemma \ref{lem:lem_monotonicity}. \end{proof}
\section{The resolution limit, formulas and asymptotics.}
\begin{lemma} \label{lemma:mappaK} For all $\varphi \in H^{1/2}(\partial B_1 (0))$, $\varphi(\theta) = \sum_{n \in \mathbb{Z}} \varphi_n e^{i n \theta}$, the following formulas hold:
\[ \begin{split}\Lambda_{K} \varphi &= \sum_{n \in \mathbb{Z}} |n| \dfrac{(K+1) + r^{2|n|} (K - 1)}{(K+1) - r^{2|n|} (K - 1)} \varphi_n e^{i n \theta} ,\\
\Lambda_{K^{-1}} \varphi &= \sum_{n \in \mathbb{Z}} |n| \dfrac{(K+1) - r^{2|n|} (K - 1)}{(K+1) + r^{2|n|} (K - 1)} \varphi_n e^{i n \theta} . \end{split} \]
\end{lemma}
\begin{proof} By separation of variables in polar coordinates; the computation is sketched below.\end{proof}
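For the reader's convenience, here is the computation behind the formula for $\Lambda_K$ on a single mode $\varphi = e^{in\theta}$ with $n \ge 1$ (the formula for $\Lambda_{K^{-1}}$ and the modes with $n \le -1$ are treated in the same way). In polar coordinates $(\rho , \theta)$, writing the solution of \eqref{eq:problema-variazionale} with $\gamma = \gamma_K$ as
\[ u(\rho , \theta ) = \begin{cases} \alpha \, \rho^{n} e^{i n \theta} , & 0 \le \rho < r , \\ ( \beta \rho^{n} + \delta \rho^{-n} ) e^{i n \theta} , & r < \rho < 1 , \end{cases} \]
the transmission conditions $u |_{\rho = r^-} = u |_{\rho = r^+}$ and $K \, \partial_\rho u |_{\rho = r^-} = \partial_\rho u |_{\rho = r^+}$, together with the boundary condition $\beta + \delta = 1$, give
\[ \beta = \frac{1}{1 - k r^{2n}} , \qquad \delta = \frac{- k r^{2n}}{1 - k r^{2n}} , \qquad k = \frac{K-1}{K+1} , \]
so that
\[ \Lambda_K e^{in\theta} = \partial_\rho u |_{\rho = 1} = n ( \beta - \delta ) e^{in\theta} = n \, \frac{(K+1) + r^{2n} (K - 1)}{(K+1) - r^{2n} (K - 1)} \, e^{in\theta} . \]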
\begin{lemma} \label{lemma:fondamentale}
\begin{equation} \label{eq:eq_rappr} \| \Lambda_{K} - \Lambda_{K^{-1}} \|_* = \frac{4 (K^2 - 1) r^{2}}{(K+1)^2 - r^{4} (K-1)^2} = \frac{4 k r^{2}}{1 - k^2 r^{4}}, \end{equation}
where
\begin{equation} \label{eq:k} k = \frac{K-1}{K+1} . \end{equation}
\end{lemma}
\begin{proof} We have
\[ \begin{split} \| \Lambda_{K} - \Lambda_{K^{-1}} \|_* &= \sup_{\varphi \ne 0} \dfrac{ \lang (\Lambda_K - \Lambda_{K^{-1}} ) \varphi , \varphi \rang}{\| \varphi \|^2_{1/2}} \\ &= \sup_{\varphi \ne 0} \dfrac{\sum_{n \in \mathbb{Z}} |n| \dfrac{4 (K^2 - 1) r^{2|n|}}{(K+1)^2 - r^{4|n|} (K-1)^2} |\varphi_n|^2}{\sum_{n \in \mathbb{Z}} |n| |\varphi_n|^2} . \end{split} \]
Since the expression \[ \dfrac{4 (K^2 - 1) r^{2|n|}}{(K+1)^2 - r^{4|n|} (K-1)^2} \] is decreasing with respect to $|n|$ for $n \ne 0$, the supremum is attained at $|n| = 1$, and we obtain
\[ \| \Lambda_{K} - \Lambda_{K^{-1}} \|_* = \frac{4 (K^2 - 1) r^{2}}{(K+1)^2 - r^{4} (K-1)^2} .\]
\end{proof}
\begin{theorem}[The resolution at the center of a disk] \label{prop:prop_risol-bound} Let $\Omega = B_1(0)$. The resolution limit at the center of the disk $B_1(0)$ is
\begin{equation}\label{eq:eq_risol-bound} \ell_0 = \sqrt{\frac{\sqrt{4 + \varepsilon_0^2} - 2}{\varepsilon_0 k}},\end{equation}
where $k$ is the constant introduced in \eqref{eq:k}.
\end{theorem}
\begin{proof} The extreme conductivities $\gamma_K , \gamma_{K^{-1}} \in \Gamma_{B_1(0)} (r,0)$ are $\varepsilon_0$-indistinguishable if and only if
\begin{equation} \label{eq:diseq1} \frac{4 (K^2 - 1) r^{2}}{(K+1)^2 - r^{4} (K-1)^2} \le \varepsilon_0 , \end{equation}
that is
\begin{equation} \label{eq:eq-bound} r \le \sqrt{ \frac{- 2 + \sqrt{4 + \varepsilon_0^2}}{\varepsilon_0 k}} . \end{equation}
Hence, by Corollary \ref{cor:cor_1}, the right-hand side in \eqref{eq:eq-bound} defines $\ell_0$.\end{proof}
\begin{remark} $\ell_0$ is meaningful only if $\ell_0 < 1$, which corresponds to requiring \[ \varepsilon_0 < \varepsilon_{\max} = \frac{4 k}{1 - k^2}.\]
Evidently $\ell_0$ is an increasing function of $\varepsilon_0$; Fig. \ref{fig:limdirisol0K3} exemplifies its graph for a fixed value of $K$.
\end{remark}
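As a purely illustrative numerical example (the values below are ours and are only meant to make \eqref{eq:eq_risol-bound} concrete), take $\varepsilon_0 = 10^{-1}$ and $K = 50$, so that $k = \frac{49}{51} \approx 0.96$. Then
\[ \ell_0 = \sqrt{\frac{\sqrt{4 + 10^{-2}} - 2}{10^{-1} \cdot \frac{49}{51}}} \approx \sqrt{\frac{2.498 \cdot 10^{-3}}{9.608 \cdot 10^{-2}}} \approx 0.161 , \]
in agreement with the leading-order behaviour $\ell_0 \approx \sqrt{\varepsilon_0}/(2\sqrt{k}) \approx 0.161$ of \eqref{eq:comportamento-asintotico} below.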
\begin{figure}
\caption{$\ell_0 = \ell_0 (\varepsilon_0 , K)$ with $K = 50$.}
\label{fig:limdirisol0K3}
\end{figure}
Next we observe the following asymptotic behaviours as functions of $\varepsilon_0$ and $K$.
\begin{remark}
\begin{equation} \label{eq:comportamento-asintotico} \ell_0 (\varepsilon_0 , K) = \frac{1}{2 \sqrt{k}} \left ( \sqrt{\varepsilon_0} + O (\varepsilon_0^{5/2}) \right ) \quad \text{ as } \varepsilon_0 \to 0^+ , \end{equation}
which follows from \eqref{eq:eq_risol-bound} and the expansion $\sqrt{4 + \varepsilon_0^2} = 2 + \frac{\varepsilon_0^2}{4} + O(\varepsilon_0^4)$.
\end{remark}
Moreover, for fixed $\varepsilon_0 > 0$, the behaviour with respect to $K$ is explicit:
\[ \ell_0 (\varepsilon_0 , K) = C(\varepsilon_0) \sqrt{\frac{K+1}{K-1}} ,\]
where
\[ C(\varepsilon_0) = \sqrt{\frac{\sqrt{4 + \varepsilon_0^2} - 2}{\varepsilon_0}}.\]
\begin{remark} The function $\ell_0 = \ell_0 (\varepsilon_0 , K)$ has the following properties:
\begin{enumerate}
\item \( \displaystyle \lim_{K \to + \infty} \ell_0(\varepsilon_0 , K) = C(\varepsilon_0) \) , \( \displaystyle \lim_{K \to 1^+} \ell_0(\varepsilon_0 , K) = + \infty \);
\item \( \ell_0 = \ell_0 (\varepsilon_0 , K) \) is strictly decreasing with respect to $K$;
\item $\ell_0(\varepsilon_0 , K) < 1$ if $K > \dfrac{1 + C(\varepsilon_0) ^2}{1 - C(\varepsilon_0) ^2} = 2^{-1} \left ( \varepsilon_0 + \sqrt{ 4 + \varepsilon_0^2} \right )$.
\end{enumerate}
Note in particular that
\[ \inf_{K \ge 1} \ell_0 (\varepsilon_0 ,K) = C(\varepsilon_0) > 0.\]
Hence $C(\varepsilon_0)$ is a lower bound on the resolution limit which is independent of the ellipticity. See for example Fig. \ref{fig:lim_risol_0_Kinfty} for $\varepsilon_0$ fixed at level $10^{-1}$.
\begin{figure}
\caption{$\ell_0 = \ell_0 (\varepsilon_0 , K)$ with $\varepsilon_0 = 10^{-1}$.}
\label{fig:lim_risol_0_Kinfty}
\end{figure}
Note also that if $K < 2^{-1} \left ( \varepsilon_0 + \sqrt{ 4 + \varepsilon_0^2} \right )$ then all admissible conductivities are $\varepsilon_0$-indistinguishable.
\end{remark}
\begin{proposition} \label{prop:disk_automorphism} Given $r \in (0,1)$ and $q \in [0,1)$, there exists a (conformal) automorphism $f : \overline{B_1(0)} \longrightarrow \overline{B_1(0)}$ such that $f(B_\rho(q)) = B_r (0)$, where
\begin{equation} \label{eq:eq_relation_rho} \rho = \frac{1 + r^2 - \sqrt{1 + (4 q^2 - 2) r^2 + r^4}}{2 r} . \end{equation}
\end{proposition}
\begin{proof} Up to rotations, the generic automorphism of $B_1(0)$ is given by
\[ f_p (z) = \frac{ z - p }{1 - p z} , \]
for any $p \in [0,1)$. We have
\[ | f_p(z) | = r \quad \text{ if and only if } \quad \left | \frac{z - p}{1 - p z} \right |^2 = r^2 .\]
That is: $f_p$ maps $B_r(0)$ onto $B_\rho (q)$ with $q, \rho$ given by
\[ \left \{ \begin{matrix} \displaystyle q = \frac{ p ( 1 - r^2)}{1 - r^2 p^2 } , & \\ & \\ \displaystyle \rho = \frac{r ( 1 - p^2)}{1 - r^2 p^2}. \end{matrix} \right . \]
Vice versa, given $q$ and $r$, we can solve for $p$ and obtain
\[ \left \{ \begin{matrix} \displaystyle p = \sqrt{ \frac{1}{r^2} + \left ( \frac{1 - r^2}{2 r^2 q} \right )^2 } - \frac{1 - r^2}{2 r^2 q} ,& \\ & \\ \displaystyle \rho = \frac{1 + r^2 - \sqrt{1 + (4 q^2 - 2) r^2 + r^4}}{2 r} . \end{matrix} \right . \]
Taking $f = f_p^{-1}$, \eqref{eq:eq_relation_rho} follows.
\end{proof}
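As a quick sanity check of \eqref{eq:eq_relation_rho} (a verification of ours, not part of the original argument), take $p = r = \frac{1}{2}$: the system above gives $q = \rho = \frac{2}{5}$, and indeed
\[ \frac{1 + r^2 - \sqrt{1 + (4 q^2 - 2) r^2 + r^4}}{2 r} = \frac{5}{4} - \sqrt{1 - \frac{34}{100} + \frac{1}{16}} = \frac{5}{4} - \frac{17}{20} = \frac{2}{5} = \rho . \]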
\begin{theorem}[Depth dependent resolution in a disk]
Let $\Omega = B_1(0)$. The resolution limit at level $\varepsilon_0 > 0$, relative to the center $q$, is given by
\begin{equation} \label{eq:resolution_limit_disk_q} \ell_q = \frac{1 + \ell_0^2 - \sqrt{ 1 + (4 q^2 - 2 ) \ell_0^2 + \ell_0^4}}{2 \ell_0}, \end{equation}
where $\ell_0$ is the number introduced in \eqref{eq:eq_risol-bound}.
\end{theorem}
\begin{proof} Straightforward consequence of Corollary \ref{cor:cor_conformal-map-norm}, Theorem \ref{prop:prop_risol-bound} and Proposition \ref{prop:disk_automorphism}.
\end{proof}
\begin{remark} We immediately see that
\[ \frac{\text{d}}{\text{d}q} \ell_q = - \frac{2 q \ell_0}{\sqrt{ 1 + (4 q^2 - 2 ) \ell_0^2 + \ell_0^4}} < 0 \quad \text{for } q > 0 , \]
that is, $\ell_q$ is increasing with respect to the ``depth'' $1- q$. See Fig. \ref{fig:andamento_ellq_3r} and Figs. \ref{fig:figure_01}, \ref{fig:figure_02}, \ref{fig:figure_03} for various instances of the disks of indistinguishable perturbations, starting from various values $r$ of the resolution limit at the center.
\end{remark}
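Continuing the illustrative example given after Theorem \ref{prop:prop_risol-bound} (the numerical values are again ours): with $\varepsilon_0 = 10^{-1}$, $K = 50$ and hence $\ell_0 \approx 0.161$, formula \eqref{eq:resolution_limit_disk_q} evaluated at $q = \frac{1}{2}$ gives
\[ \ell_{1/2} = \frac{1 + \ell_0^2 - \sqrt{ 1 - \ell_0^2 + \ell_0^4}}{2 \ell_0} \approx 0.120 < \ell_0 , \]
in accordance with the monotonicity just observed.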
\begin{center}
\begin{figure}
\caption{$\ell_q$, as a function of $q$, with $K = 10^2$ and $\varepsilon_0 = 10^{-1}$.}
\label{fig:andamento_ellq_3r}
\end{figure}
\begin{figure}
\caption{$r = 0.1$.}
\label{fig:figure_01}
\end{figure}
\begin{figure}
\caption{$r = 0.2$.}
\label{fig:figure_02}
\end{figure}
\begin{figure}
\caption{$r = 0.3$.}
\label{fig:figure_03}
\end{figure}
\end{center}
\begin{remark} \label{prop:asymp_bhv} From \eqref{eq:resolution_limit_disk_q} we obtain the following asymptotic behaviour
\begin{equation} \label{eq:asymp_bhv} \ell_q(\varepsilon_0 , K) = \dfrac{2 \ell_0}{1 + \ell_0^2} (1 - q) + o (1 - q) \quad \text{ as } q \to 1^- , \end{equation}
where $\ell_0 = \ell_0 (\varepsilon_0 , K)$ is given in \eqref{eq:eq_risol-bound}.
\end{remark}
Now, our aim is to provide an explicit formula for the resolution limit in the case of the half plane $\mathbb{H}^+$.
\begin{proposition} \label{prop:half_plane_mobius} Given $r \in (0,1)$, $q \in (0, + \infty)$ and $\alpha \in \mathbb{R}$, there exists a Möbius transformation $f : \overline{\mathbb{H}^+} \longrightarrow \overline{B_1(0)}$ such that $f(B_{\rho}(\alpha + iq)) = B_r(0)$, where \begin{equation} \label{eq:relazione-rhoqr} \rho = \dfrac{2 q r}{1 + r^2} . \end{equation}
\end{proposition}
\begin{proof} Up to rotations in the target, the generic Möbius transformation which maps $\mathbb{H}^+$ into $B_1(0)$ is given by
\[ f_a(z) = \frac{z - a}{z - \overline{a}} ,\]
for any $a \in \mathbb{H}^+$. We have
\[ \left | f_a(z) \right | = r \quad \text{ if and only if } \quad \left | \frac{z - a}{z - \overline{a}} \right |^2 = r^2.\]
That is: $f_a^{-1}$ maps $B_r(0)$ onto $B_\rho (\alpha +i q) \subset \mathbb{H}^+$ with $q, \rho$ given by
\[ \begin{cases} q = \beta \dfrac{1 + r^2}{1 - r^2}, \\ \\ \rho = \beta \dfrac{2 r}{1 - r^2},\end{cases} \]
where $a = \alpha + i \beta$. Vice versa, given $q$ and $r$, we can solve for $\beta$ and obtain
\[ \begin{cases} \beta = q \dfrac{1 - r^2}{1 + r^2} ,\\ \\ \rho = \dfrac{2 q r}{1 + r^2},\end{cases} \]
and \eqref{eq:relazione-rhoqr} follows.
\end{proof}
\begin{theorem}[Depth dependent resolution in a half plane] Let $\Omega = \mathbb{H}^+$. The resolution limit at level $\varepsilon_0 > 0$, relative to any center at depth $q$ (that is, any point of the half plane whose distance from $\partial \mathbb{H}^+$ is $q > 0$), is given by
\begin{equation}\label{eq:eq_risol-bound-semipiano} \widetilde{\ell}_q = \dfrac{2 q \ell_0 }{1 + \ell_0^2} ,\end{equation}
where $\ell_0$ is the number introduced in \eqref{eq:eq_risol-bound} (with $k$ as in \eqref{eq:k}).
\end{theorem}
\begin{proof} Immediate consequence of Corollary \ref{cor:cor_conformal-map-norm}, Theorem \ref{prop:prop_risol-bound} and Proposition \ref{prop:half_plane_mobius}.
\end{proof}
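In the same illustrative example ($\varepsilon_0 = 10^{-1}$, $K = 50$, hence $\ell_0 \approx 0.161$; the numbers are ours), formula \eqref{eq:eq_risol-bound-semipiano} gives $\widetilde{\ell}_q \approx 0.31\, q$: conductivities perturbed in a disk of radius smaller than roughly $31\%$ of the depth are pairwise $\varepsilon_0$-indistinguishable.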
See Figs. \ref{fig:figure_s005rit}, \ref{fig:figure_s01rit}, \ref{fig:figure_s015rit}. To better interpret the half plane as a two-dimensional model of the underground, the $y$-axis is oriented downwards.
\begin{figure}
\caption{$r = 0.05$.}
\label{fig:figure_s005rit}
\end{figure}
\begin{figure}
\caption{$r = 0.1$.}
\label{fig:figure_s01rit}
\end{figure}
\begin{figure}
\caption{$r = 0.15$.}
\label{fig:figure_s015rit}
\end{figure}
\begin{remark}
In the case of the half plane it is evident from \eqref{eq:eq_risol-bound-semipiano} that the resolution limit grows linearly with the depth and, in particular, diverges as $q \to + \infty$. See Fig. \ref{fig:semipiano_cono}.
\end{remark}
\begin{figure}
\caption{The resolution cone, with $r = 0.15$.}
\label{fig:semipiano_cono}
\end{figure}
Finally, we observe the following asymptotic behaviour of $\widetilde{\ell}_q$ as a function of $\varepsilon_0$.
\begin{remark} Given $q > 0$ we have
\begin{equation} \widetilde{\ell}_q = \frac{q}{\sqrt{k}} \left ( \sqrt{\varepsilon_0} + O(\varepsilon_0^{3/2}) \right ) \quad \text{ as } \varepsilon_0 \to 0^+. \end{equation}
\end{remark}
\end{document} |
\begin{document}
\newcommand{\ch}{\lfloor\frac{k}{3}\rfloor}
\newcommand{\tH}{\lfloor\frac{k}{4}\rfloor}
\newcommand{\ord}[1]{\operatorname{c}(\ensuremath{#1})}
\newcommand{\cnt}[1]{\ensuremath{\operatorname{ind}(#1)}}
\newcommand{\sub}[1]{\ensuremath{\operatorname{sub}(#1)}}
\newcommand{\ind}[1]{\ensuremath{\operatorname{ind}(#1)}}
\newcommand{\indp}[2]{\ensuremath{\operatorname{ind}_{#2}(#1)}}
\newcommand{\cntp}[1]{\text{sub}\ensuremath{^{+}(#1)}}
\newcommand{\inj}[1]{\text{inj}\ensuremath{(#1)}}
\newcommand{\injA}[2]{\text{inj}\ensuremath{_{#1}(#2)}}
\newcommand{\aut}[1]{\text{aut}\ensuremath{(#1)}}
\newcommand{\homo}[1]{\ensuremath{\operatorname{hom}(#1)}}
\newcommand{\degr}[1]{\operatorname{deg}(#1)}
\newcommand{\injCond}[2]{\text{inj}\ensuremath{(#1\,|\,#2)}}
\newcommand{\homoCond}[2]{\text{hom}\ensuremath{(#1\,|\,#2)}}
\newcommand{\subTreeCount}[2]{\ensuremath{\operatorname{c}_{#1}(#2)}}
\newcommand{\cov}[1]{C}
\newcommand{\oline}[1]{\ensuremath{\Gamma[#1]}}
\title{Faster algorithms for counting subgraphs in sparse graphs\thanks{A preliminary version of this article appeared in the proceedings of the 14th International Symposium on Parameterized and Exact Computation (IPEC 2019).}}
\author{Marco Bressan\thanks{Marco Bressan is supported in part by a Google Focused Award ``Algorithms and Learning for AI'' (ALL4AI), by the ERC Starting Grant DMAP 680153, and by the ``Dipartimenti di Eccellenza 2018-2022'' grant awarded to the Department of Computer Science of the Sapienza University of Rome.}}
\institute{Dipartimento di Informatica, Sapienza Universit\`a di Roma, Italy. \email{[email protected]}}
\maketitle
\begin{abstract}
Given a $k$-node pattern graph $H$ and an $n$-node host graph $G$, the subgraph counting problem asks to compute the number of copies of $H$ in $G$.
In this work we address the following question: can we count the copies of $H$ faster if $G$ is sparse?
We answer in the affirmative by introducing a novel tree-like decomposition for directed acyclic graphs, inspired by the classic tree decomposition for undirected graphs.
This decomposition gives a dynamic program for counting the homomorphisms of $H$ in $G$ by exploiting the degeneracy of $G$, which allows us to beat the state-of-the-art subgraph counting algorithms when $G$ is sparse enough.
For example, we can count the induced copies of any $k$-node pattern $H$ in time $2^{O(k^2)} O(n^{0.25k + 2} \log n)$ if $G$ has bounded degeneracy, and in time $2^{O(k^2)} O(n^{0.625k + 1} \log n)$ if $G$ has bounded average degree.
These bounds are instantiations of a more general result, parameterized by the degeneracy of $G$ and the structure of $H$, which generalizes classic bounds on counting cliques and complete bipartite graphs.
We also give lower bounds based on the Exponential Time Hypothesis, showing that our results are actually a characterization of the complexity of subgraph counting in bounded-degeneracy graphs.
\keywords{subgraph counting, tree decomposition, degeneracy, sparsity}
\CRclass{Mathematics of computing -- Discrete mathematics \and Theory of computation -- Design and analysis of algorithms \and Theory of computation -- Graph algorithms analysis}
\end{abstract}
\section{Introduction}
\label{sec:intro}
We address the following fundamental subgraph counting problem:
\begin{quote}
\textbf{Input:} an $n$-node graph $G$ (the \emph{host graph}) and a $k$-node graph $H$ (the \emph{pattern})\\
\textbf{Output:} the number of induced copies of $H$ in $G$
\end{quote}
If no further assumptions are made, the best possible algorithm for this problem is likely to have running time $f(k) \cdot n^{\Theta(k)}$.
Indeed, the naive brute-force algorithm has running time $O(k^2 n^k)$, and under the Exponential Time Hypothesis~\cite{Impagliazzo&1998} any algorithm for counting $k$-cliques has running time $n^{\Omega(k)}$~\cite{Chen&2005,Chen&2006}.
The best algorithm known, which was given over $30$ years ago by Ne\v{s}et\v{r}il and Poljak~\cite{Nesetril&1985} and is based on fast matrix multiplication, is only slightly faster than $O(k^2n^k)$.
Ignoring $\poly(k)$ factors\footnote{In this paper we suppress $\poly(k)$ factors by default; when needed, we make them explicit in order to emphasize that the dependence on $k$ is polynomial rather than exponential.}, the algorithm runs in time $O(n^{\omega\lfloor\frac{k}{3}\rfloor + 2})$ where $\omega$ is the matrix multiplication exponent.
Since $\omega \le 2.373$~\cite{LeGall2014}, this gives a state-of-the-art running time of $O( n^{0.791 k + 2})$.
In this work, we aim at breaking through this ``$n^{\Theta(k)}$ barrier'' by assuming that $G$ is sparse, and in particular, that $G$ has bounded degeneracy.
This assumption is often made for real-world graphs like social networks, since it agrees well with their structural properties~\cite{Eppstein&2011}.
The family of bounded-degeneracy graphs is rich from a theoretical point of view, too: it includes many important classes such as Barab\'asi-Albert preferential attachment graphs, graphs excluding a fixed minor, planar graphs, bounded-treewidth graphs, bounded-degree graphs, and bounded-genus graphs, see~\cite{Grohe&2013nowheredense}.
Unfortunately, even when $G$ has bounded degeneracy, the state of the art remains the $O(n^{0.791 k + 2})$-time algorithm by Ne\v{s}et\v{r}il and Poljak, unless one makes further assumptions.
For example, one can count the copies of any given pattern $H$ in time $O(n)$, provided $G$ is planar~\cite{Eppstein1995} or has bounded treewidth~\cite{Nesetril2012sparsity} or has bounded degree~\cite{Patel&2018}; all conditions that are stricter than bounded degeneracy.
Alternatively, if $G$ has bounded degeneracy, $O(n)$-time algorithms exist when $H$ is the clique~\cite{Alon&2008,Chiba&1985,Eppstein&2010}, or when $H$ is a complete bipartite graph, if we do not require the copies of $H$ to be induced~\cite{Eppstein1994}.
Unfortunately, it is not clear how to extend the techniques behind these results to all patterns $H$ and all $G$ with bounded degeneracy.
Thus, to what extent a small degeneracy of $G$ makes subgraph counting easier remains an open question.
In this work we introduce a novel tree-like graph decomposition, to be applied to the pattern graph $H$, designed to exploit the degeneracy of $G$ when counting the homomorphisms of $H$ in $G$.
When $G$ is sparse enough, this decomposition yields subgraph counting algorithms faster than the state of the art.
For example, we show how to count the induced copies of \emph{any} $k$-node pattern $H$ in time $2^{O(k^2)} O(n^{0.25 k + 2} \log n)$ when $G$ has bounded degeneracy, and in time $2^{O(k^2)} O(n^{0.625 k + 1} \log n)$ when $G$ has bounded average degree.
These results are instantiations of a more general result which says that $H$ can be counted in time $f(k) O(d^{k-\tau(H)} n^{\tau(H)} \log n)$, where $d$ is the degeneracy of $G$, and $\tau(H)$ is a certain measure of ``width'' of $H$ arising from our decomposition.
Assuming the Exponential Time Hypothesis, we also show that $n^{\Omega(\tau(H)/\log \tau(H))}$ operations are required in the worst case, even if $G$ has degeneracy $2$.
This provides a novel characterization of the complexity of subgraph counting in bounded-degeneracy graphs.
\subsection{Results}
\label{sub:results}
We divide our results into \emph{bounds} (Section~\ref{sub:bounds}) and \emph{techniques} (Section~\ref{sub:tech}).
We denote by $d$ the degeneracy of $G$, and we denote by \homo{H,G}, \sub{H,G}, \ind{H,G} the number of, respectively, homomorphisms, occurrences, and induced occurrences of $H$ in $G$.
See Subsection~\ref{sub:prelim} for further definitions and notation.
We remark that, unless otherwise specified, our bounds hold for every $H$ including disconnected ones.
\subsubsection{Bounds}
\label{sub:bounds}
Our first results are two running time bounds parameterized by the sparsity of $G$.
\begin{theorem}
\label{thm:mainbound}
For any $k$-node pattern $H$ one can compute \homo{H,G} and \sub{H,G} in time $2^{O(k \log k)} \cdot O(d^{k-(\tH+2)} n^{\tH+2} \log n)$, and one can compute \ind{H,G} in time $2^{O(k^2)} \cdot O(d^{k-(\tH+2)} n^{\tH+2} \log n)$, where $d$ is the degeneracy of $G$.
\end{theorem}
This bound reduces the exponent of $n$ to $\tH+2 \le 0.25 k + 2$, down from the state-of-the-art $\omega \lfloor \frac{k}{3} \rfloor + 2 \le 0.791 k + 2$ of the Ne\v{s}et\v{r}il-Poljak bound.
This implies that our polynomial dependence on $n$ is better whenever $d = O(n^{0.721})$, and in any case (that is, even if $\omega=2$) whenever $d = O(n^{0.556})$.
As a corollary of Theorem~\ref{thm:mainbound}, since $d = O(\sqrt{rn})$ where $r$ is the average degree of $G$, we obtain:
\begin{theorem}
\label{thm:lowavgd}
For any $k$-node pattern $H$ one can compute \homo{H,G} and \sub{H,G} in time $2^{O(k \log k)} \cdot O(r^{\frac{1}{2}(k-\tH)-1} n^{\frac{1}{2}(k+\tH)+1} \log n)$, and one can compute \ind{H,G} in time $2^{O(k^2)} \cdot O(r^{\frac{1}{2}(k-\tH)-1} n^{\frac{1}{2}(k+\tH)+1} \log n)$, where $r$ is the average degree of $G$.
\end{theorem}
This bound has a polynomial dependence on $n$ better than Ne\v{s}et\v{r}il-Poljak whenever $r = O(n^{0.221})$, and in any case (that is, even if $\omega=2$) whenever $r=O(n^{0.056})$.
In particular, we have a $2^{O(k^2)} \cdot O(n^{0.625k+1} \log n)$-time algorithm when $r=O(1)$.
These are the first improvements over the Ne\v{s}et\v{r}il-Poljak algorithm for graphs with small degeneracy or small average degree.
As a second result, we give improved bounds for some classes of patterns.
The first is the class of quasi-cliques, a typical target pattern for social networks~\cite{Sariyuce&2018,Sariyuce&2017,Tsourakakis&2017}.
We prove:
\begin{theorem}
\label{thm:dense_ub}
If $H$ is the clique minus $\epsilon$ edges, then one can compute \homo{H,G} and \sub{H,G} in time $2^{O(k \log k)} \cdot O(d^{k-\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} n^{\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} \log n)$, and \ind{H,G} in time $2^{O(\epsilon + k \log k)} \cdot O(d^{k-\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} n^{\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} \log n)$.
\end{theorem}
\noindent
This generalizes the classic $O(d^{k-1} n)$ bound for counting cliques by Chiba and Nishizeki~\cite{Chiba&1985}, at the price of an extra factor $2^{O(\epsilon + k \log k)} O(\log n)$.
Next, we consider complete quasi-multipartite graphs:
\begin{theorem}
\label{thm:bipartite}
If $H$ is a complete multipartite graph, then one can compute $\homo{H,G}$ and $\sub{H,G}$ in time $2^{O(k \log k)} \cdot O(d^{k-1} n \log n)$.
If $H$ is a complete multipartite graph plus $\epsilon$ edges, then one can compute $\homo{H,G}$ and $\sub{H,G}$ in time $2^{O(k \log k)} \cdot O(d^{k-\lfloor\frac{\epsilon}{4}\rfloor-2} n^{\lfloor\frac{\epsilon}{4}\rfloor+2} \log n)$.
\end{theorem}
\noindent
This generalizes an existing $O(d^3 2^{2d} n)$ bound for counting the non-induced copies of complete (maximal) bipartite graphs~\cite{Eppstein1994}, again at the price of an extra factor $2^{O(k \log k)} \log n$.
Table~\ref{tab:ub} summarizes our upper bounds.
We remark that our algorithms work for the colored versions of the problem (count only copies of $H$ with prescribed vertex and/or edge colors) as well as the weighted versions of the problem (compute the total node or edge weight of copies of $H$ in $G$).
This can be obtained by a straightforward adaptation of our homomorphism counting algorithms.
\renewcommand{\arraystretch}{1.2}
\begin{table}[h]
\resizebox{.99\textwidth}{!}{
\centering
\begin{tabular}{lll}
pattern $H$ & time to compute \ind{H,G} & reference \\
\toprule
all (even disconnected) & $O\big(n^{\omega\lfloor\frac{k}{3}\rfloor + 2}\big)$ & \cite{Nesetril&1985}\\
all (even disconnected) & $2^{O(k^2)} \cdot O\big(d^{k-\tH-2} n^{\tH+2} \log n\big)$ & this work\\
all (even disconnected) & $2^{O(k^2)} \cdot O\big(r^{\frac{1}{2}(k-\tH)-1} n^{\frac{1}{2}(k+\tH)+1} \log n\big)$ & this work \\
$K_k$ & $O\big(d^{k-1} n\big)$ & \cite{Chiba&1985} \\
$K_k$ - $\epsilon$ edges & $2^{O(\epsilon + k \log k)} \cdot O\big(d^{k-\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} n^{\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} \log n\big)$ & this work \\
$K_{k_1,k_2}$ & $O(d^3 2^{2d} n)$ & \cite{Eppstein1994} \\
$K_{k_1,\ldots,k_{\ell}}$ & $2^{O(k \log k)} \cdot O(d^{k-1} n \log n)$ \quad\quad\quad\quad (\sub{H,G} only) & this work \\
$K_{k_1,\ldots,k_{\ell}}$ + $\epsilon$ edges & $2^{O(k \log k)} \cdot O(d^{k-\lfloor\frac{\epsilon}{4}\rfloor-2} n^{\lfloor\frac{\epsilon}{4}\rfloor+2} \log n)$ \; (\sub{H,G} only) & this work
\\\bottomrule
\end{tabular}
}
\caption{Summary of upper bounds.}
\label{tab:ub}
\end{table}
\subsubsection{Techniques}
\label{sub:tech}
The bounds of Subsection~\ref{sub:bounds} are instantiations of a single, more general result.
This result is based on a novel notion of width, the \emph{dag treewidth} $\tau(H)$ of $H$, which captures the relevant structure of $H$ when counting its copies in a $d$-degenerate graph.
In a simplified form, the bound is the following:
\begin{theorem}
\label{thm:hsw_ub}
For any $k$-node pattern $H$ one can compute \homo{H,G}, \sub{H,G}, and \ind{H,G} in time $f(k) \cdot O(d^{k-\tau(H)} n^{\tau(H)} \log n)$.
\end{theorem}
Let us briefly explain this result.
The heart of the problem is computing \homo{H,G}; once we know how to do this, we can obtain \sub{H,G} and \ind{H,G} via inclusion-exclusion arguments at the price of an extra multiplicative factor $f(k)$, as in~\cite{Borgs&2006,Curticapean&2017}.
To compute \homo{H,G}, we give $G$ an acyclic orientation with maximum outdegree $d$.
Then, we take every possible acyclic orientation $P$ of $H$, and compute \homo{P,G} where by \homo{P,G} we mean the number of homomorphisms of $P$ in $G$ that respect the orientations of the arcs.
Note that the number of such homomorphisms can be $n^{\Omega(k)}$ even if $G$ has bounded degeneracy (for example, if $P$ is an independent set), so we cannot list them explicitly.
At this point we introduce our technical tool, the \emph{dag tree decomposition} of $P$.
This is a tree $T$ that captures the relevant reachability relations between the nodes of $P$.
Given $T$, one can compute $\homo{P,G}$ via dynamic programming in time $f(k) \cdot O(d^{k-\tau(T)} n^{\tau(T)} \log n)$, where $\tau(T) \in \{1,\ldots,k\}$ is the \emph{width} of $T$.
The dynamic program computes \homo{P,G} by combining carefully the homomorphism counts of certain subgraphs of $P$.
The dag-treewidth $\tau(H)$, which is the parameter appearing in the bound of Theorem~\ref{thm:hsw_ub}, is the maximum width of the optimal dag tree decomposition of any acyclic orientation $P$ of any graph obtainable by identifying nodes of and/or adding edges to $H$ (this arises from the inclusion-exclusion arguments).
With this, our technical machinery is complete.
To obtain the bounds of the previous paragraph, we show how to compute efficiently dag tree decompositions of low width, and apply a more technical version of Theorem~\ref{thm:hsw_ub}.
We conclude by complementing Theorem~\ref{thm:hsw_ub} with a lower bound based on the Exponential Time Hypothesis.
This lower bound shows that in the worst case the dag-treewidth $\tau(H)$ cannot be beaten, and therefore our decomposition captures, at least in part, the complexity of counting subgraphs in $d$-degenerate graphs.
\begin{theorem}
\label{thm:hsw_lb}
Under the Exponential Time Hypothesis~\cite{Impagliazzo&1998}, no algorithm can compute $\sub{H,G}$ or $\ind{H,G}$ in time $f(d,k) \cdot n^{o(\tau(H)/\log{\tau(H)})}$ for all $H$.
\end{theorem}
\subsection{Preliminaries and notation}
\label{sub:prelim}
Both $G=(V,E)$ and $H=(V_H,E_H)$ are simple graphs, possibly disconnected.
For any subset $V' \subseteq V$ we denote by $G[V']$ the subgraph of $G$ induced by $V'$; the same notation applies to any graph.
A \emph{homomorphism} from $H$ to $G$ is a map $\phi : V_H \to V$ such that $\{u,u'\} \in E_H$ implies $\{\phi(u),\phi(u')\} \in E$.
We write $\phi : H \to G$ to highlight the edges that $\phi$ preserves.
When $H$ and $G$ are oriented, $\phi$ must preserve the direction of the arcs.
If $\phi$ is injective then we have an injective homomorphism.
We denote by $\homo{H,G}$ and $\inj{H,G}$ the number of homomorphisms and injective homomorphisms from $H$ to $G$.
To avoid confusion, we will use the symbol $\psi$ to denote maps that are not necessarily homomorphisms.
The symbol $\simeq$ denotes isomorphism.
A \emph{copy} of $H$ in $G$ is a subgraph $F \subseteq G$ such that $F \simeq H$.
If moreover $F \simeq G[V_F]$ then $F$ is an induced copy.
We denote by $\sub{H,G}$ and $\ind{H,G}$ the number of copies and induced copies of $H$ in $G$; we may omit $G$ if clear from the context.
When we give an acyclic orientation to the edges of $H$, we denote the resulting dag by $P$.
All the notation described above applies to directed graphs in the natural way.
The \emph{degeneracy} of $G$ is the smallest integer $d$ such that there is an acyclic orientation of $G$ with maximum outdegree bounded by $d$.
Such an orientation can be found in time $O(|E|)$ by repeatedly removing from $G$ a minimum-degree node~\cite{Nesetril2012sparsity}.
From now on we assume that $G$ has this orientation.
Equivalently, $d$ is the smallest integer that bounds from above the minimum degree of every subgraph of $G$.
We assume the following operations take constant time: accessing the $i$-th arc of any node $u \in V$, and checking if $(u,v)$ is an arc of $G$ for any pair $(u,v)$.
Our upper bounds still hold if checking an arc takes time $O(\log n)$, which can be achieved via binary search if we first sort the adjacency lists of $G$.
The $\log n$ factor in our bounds appears since we assume logarithmic access time for our dictionaries, each of which holds $O(n^k)$ entries.
This factor can be removed by using dictionaries with worst-case $O(1)$ access time (e.g., hash maps), at the price of obtaining probabilistic/amortized bounds rather than deterministic ones.
Finally, we recall the tree decomposition and treewidth of a graph.
For any two nodes $X,Y$ in a tree $T$, we denote by $T(X,Y)$ the unique path between $X$ and $Y$ in $T$.
\begin{definition}[see~\cite{Diestel2017}, Ch.\ 12.3]
\label{def:treedecomp}
\label{def:treewidth}
Given a graph $G=(V,E)$, a tree decomposition of $G$ is a tree $T=(V_T,E_T)$ such that each node $X \in V_T$ is a subset $X \subseteq V$, and that\,\footnote{Formally, we should define a tree together with a mapping between its nodes and the subsets of $V$. However, the definition adopted here is sufficient for our purposes and lightens the notation.}:
\begin{enumerate}
\item[1.] $\cup_{X \in V_T} X = V$
\item[2.] for every edge $e = \{u,v\} \in E$ there exists $X \in V_T$ such that $u,v \in X$
\item[3.] $\forall\, X, X', X'' \in V_T$, if $X \in T(X', X'')$ then $X' \cap X'' \subseteq X$
\end{enumerate}
The width of a tree decomposition $T$ is $\operatorname{t}(T) = \max_{X \in V_T} |X| - 1$.
The treewidth $\operatorname{t}(G)$ of a graph $G$ is the minimum of $\operatorname{t}(T)$ over all tree decompositions $T$ of $G$.
\end{definition}
\subsection{Related work}
As anticipated, the fastest algorithm known for computing $\ind{H,G}$ is the one by Ne\v{s}et\v{r}il and Poljak~\cite{Nesetril&1985} that runs in time $O(n^{\omega\lfloor\frac{k}{3}\rfloor + (k \bmod 3)})$ where $\omega$ is the matrix multiplication exponent.
With the current bound $\omega \le 2.373$, this running time is in $O(n^{0.791 k + 2})$.
Unfortunately, the algorithm is based on fast matrix multiplication, which makes it oblivious to the sparsity of $G$.
Under certain assumptions on $G$, faster algorithms are known.
If $G$ has bounded maximum degree, $\Delta=O(1)$, then we can compute $\ind{H,G}$ in time $c^k \cdot O(n)$ for some $c=c(\Delta)$ via multivariate graph polynomials~\cite{Patel&2018}.
If $G$ has treewidth $\operatorname{t}(G) \le k$, and we are given a tree decomposition of $G$ of such width, then we can compute $\ind{H,G}$ in time $2^{O(k \log k)} O(n)$; see Lemma 18.4 of~\cite{Nesetril2012sparsity}.
When $G$ is planar, we obtain an $f(k) \, O(n)$ algorithm where $f$ is exponential in $k$~\cite{Eppstein1995}.
All these assumptions are stronger than bounded degeneracy, and the techniques cannot be extended easily.
A more general class that captures all these cases is that of nowhere-dense graphs~\cite{nowhere-dense}, for which there exist fixed-parameter-tractable subgraph counting algorithms~\cite{Grohe&FO}.
Nowhere-dense graphs, however, do not include all bounded-degeneracy graphs or all graphs with bounded average degree.
Even assuming $G$ has bounded degeneracy, algorithms faster than Ne\v{s}et\v{r}il-Poljak are known only when $H$ belongs to special classes.
The earliest result of this kind is the classic algorithm by Chiba and Nishizeki~\cite{Chiba&1985} to list all $k$-cliques in time $O(d^{k-1} n)$.
Eppstein showed that one can list all maximal cliques in time $O(d 3^{d/3} n)$~\cite{Eppstein&2010} and all non-induced complete bipartite subgraphs in time $O(d^3 2^{2d} n)$~\cite{Eppstein1994}.
These algorithms exploit the degeneracy ordering of $G$ in a way similar to ours.
In fact, our techniques can be seen as a generalization of~\cite{Chiba&1985} that takes into account the structure of $H$.
We note that a fundamental limitation of~\cite{Chiba&1985,Eppstein1994,Eppstein&2010} is that they list all the copies of $H$, which for a generic $H$ might be $\Theta(n^k)$ even if $G$ has bounded degeneracy (for example if $H$ is the independent set).
In contrast, we list the copies of certain subgraphs of $H$, and combine them to infer the number of copies of $H$.
To be more precise, we list the homomorphisms of $H$, which is another difference we have with~\cite{Chiba&1985,Eppstein1994,Eppstein&2010} and a point we have in common with previous work~\cite{Curticapean&2017}.
Regarding our ``dag tree decomposition'', it is inspired by the standard notion of tree decomposition of a graph, and it yields a similar dynamic program.
Yet, the similarity between the two decompositions is rather superficial; indeed, our dag-treewidth can be $O(1)$ when the treewidth is $\Omega(k)$, and vice versa.
Our decomposition is unrelated to the several notions of tree decomposition for directed graphs already known~\cite{Ganian2010digraph}.
Finally, our lower bounds are novel; no general lower bounds in terms of $d$ and of the structure of $H$ were available before.
\subsection{Manuscript organisation.}
In Section~\ref{sec:simple} we build the intuition with a gentle introduction to our approach.
In Section~\ref{sec:dec} we give our dag tree decomposition and the dynamic program for counting homomorphisms.
In Section~\ref{sec:twbound} we show how to compute good dag tree decompositions.
Finally, in Section~\ref{sec:lb} we prove the lower bounds.
\section{Exploiting degeneracy orientations}
\label{sec:simple}
We build the intuition behind our approach, starting from the classic algorithm for counting cliques by Chiba and Nishizeki~\cite{Chiba&1985}.
The algorithm begins by orienting $G$ acyclically so that $\max_{v \in G}d_{\text{out}}(v) \le d$, which takes time $O(|E|)$.
With $G$ oriented acyclically, we take each $v \in G$ in turn, enumerate every subset of $(k-1)$ out-neighbors of $v$, and check its edges.
In this way we can explicitly find all $k$-cliques of $G$ in time $O(k^2 d^{k-1} n)$.
Observe that the crucial fact here is that an acyclically oriented clique has exactly one \emph{source}, that is, a node with no incoming arcs.
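To make the two steps above concrete, here is a minimal, self-contained Python sketch (ours, not taken from the original sources; the graph representation and the function names are illustrative assumptions). It computes a degeneracy ordering by repeatedly removing a minimum-degree node, orients every edge towards the node removed later, and counts $k$-cliques by enumerating the $(k-1)$-subsets of each out-neighbourhood, as in the Chiba--Nishizeki approach just described.
\begin{verbatim}
from itertools import combinations

def degeneracy_orientation(adj):
    """Orient the undirected graph `adj` (dict: node -> set of neighbours)
    acyclically so that every node has out-degree at most the degeneracy d."""
    deg = {v: len(adj[v]) for v in adj}
    removed, order = set(), []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        order.append(v)
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    pos = {v: i for i, v in enumerate(order)}
    # orient every edge from the node removed earlier to the node removed later
    return {v: {w for w in adj[v] if pos[w] > pos[v]} for v in adj}

def count_cliques(adj, k):
    """Count k-cliques: each clique is found once, at its unique source."""
    out = degeneracy_orientation(adj)
    count = 0
    for v in adj:
        for S in combinations(out[v], k - 1):
            if all(u in adj[w] for u, w in combinations(S, 2)):
                count += 1
    return count

# Example: the complete graph K_4 contains four triangles.
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(count_cliques(K4, 3))  # 4
\end{verbatim}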
We would like to extend this approach to an arbitrary pattern $H$.
Since every copy of $H$ in $G$ appears with exactly one acyclic orientation, we take every possible acyclic orientation $P$ of $H$, count the copies of $P$ in $G$, and sum all the counts.
Thus, the problem reduces to counting the copies of an arbitrary dag $P$ in our acyclic orientation of $G$.
Let us start in the naive way.
Suppose $P$ has $s$ sources.
Fix a directed spanning forest $F$ of $P$.
This is a collection of $s$ directed disjoint trees rooted at the sources of $P$ (arcs pointing away from the roots).
Clearly, each copy of $P$ in $G$ contains a copy of $F$.
Hence, we can enumerate the copies of $F$ in $G$, and for each one check if it is a copy of $P$.
To this end, first we enumerate the $O(n^s)$ possible $s$-tuples of $V$ to which the sources of $P$ can be mapped.
For each such $s$-tuple, we enumerate the possible mappings of the remaining $k-s$ nodes of the forest.
This can be done in time $O(d^{k-s})$ by a straightforward extension of the out-neighbor listing algorithm above.
Finally, for each mapping we check if its nodes induce $P$ in $G$, in time $O(k^2)$.
The total running time is $O(k^2 d^{k-s} n^s )$.
Unfortunately, if $P$ is an independent set then $s=k$ and the running time is $O(k^2n^k)$, so we have made no progress over the naive algorithm.
At this point we introduce our first idea.
For reference we use the toy pattern $P$ in Figure~\ref{fig:cycle}.
Instead of enumerating the copies of $P$ in $G$, we decompose $P$ into two \emph{pieces}, $P(1)$ and $P(3,5)$.
Here, $P(1)$ denotes the subgraph of $P$ reachable from $1$ (that is, the transitive closure of $1$ in $P$). The same for $P(3)$ and $P(5)$, and we let $P(3,5)=P(3) \cup P(5)$.
Now we count the copies of $P(1)$, and then the copies of $P(3,5)$, hoping to combine the result in some way to obtain the count of $P$.
To simplify the task, we focus on counting homomorphisms rather than copies (see below).
Thus, we want to compute $\hom(P,G)$ by combining $\hom(P(1),G)$ and $\hom(P(3,5),G)$.
Now, clearly, knowing $\hom(P(1),G)$ and $\hom(P(3,5),G)$ is not sufficient to infer $\hom(P,G)$.
Thus, we need to solve a slightly more complex problem.
For every pair $x,y \in V(G)$, let $\phi : \{2,6\} \to V(G)$ be the map given by $\phi(2)=x$ and $\phi(6)=y$. We let $\homo{P, G, \phi}$ be the number of homomorphisms of $P$ in $G$ whose restriction to $\{2,6\}$ is $\phi$.
By a counting argument one can immediately see that:
\begin{align}
\label{eqn:sumhom}
\homo{P,G} = \sum_{\phi : \{2,6\} \to V(G)} \!\!\!\!\!\! \homo{P, G,\phi}
\end{align}
Thus, to compute $\hom(P,G)$ we only need to compute $\homo{P, G,\phi}$ for all possible $\phi$.
Now, define $\homo{P(1),G, \phi}$ and $\homo{P(3,5), G,\phi}$ with the same meaning as above.
A crucial observation is that $\{2,6\}$, the domain of $\phi$, is precisely the set of nodes in $P(1) \cap P(3,5)$.
It is not difficult to see that this implies:
\begin{align}
\homo{P, G,\phi} = \homo{P(1),G,\phi} \cdot \homo{P(3,5),G, \phi}
\label{eqn:toy_homo}
\end{align}
Thus, now our goal is to compute $\homo{P(1),G,\phi}$ and $\homo{P(3,5),G, \phi}$ for all $\phi : \{2,6\} \to V(G)$.
To this end, we list all $\phi_{P(1)} : P(1) \to G$ with the technique above, and for each such $\phi_{P(1)}$ we increment a counter associated to $(\phi_{P(1)}(2),\phi_{P(1)}(6))$ in a dictionary with default value $0$.
Thus, we obtain $\homo{P(1),G,\phi}$ for all $\phi:\{2,6\} \to V$.
Since $P(1)$ has one source, we enumerate $O(n)$ maps, in time $O(k^2 n)$ (here and in the rest of this example we treat the degeneracy $d$ as a constant).
If the dictionary takes time $O(\log n)$ to access an entry, the total running time is $O(k^2 n \log n)$.
The same technique applied to $P(3,5)$ yields a running time of $O(k^2 n^2 \log n)$, since $P(3,5)$ has two sources.
Finally, we apply Equations~\ref{eqn:toy_homo} and~\ref{eqn:sumhom} by running over all entries in the first dictionary and retrieving the corresponding value from the second dictionary.
The total running time is $O(k^2 n^2 \log n)$, while enumerating the homomorphisms of $P$ directly would have required time $O(k^2 n^3)$.
\begin{figure}
\caption{Toy example: an acyclic orientation $P$ of $H=C_6$, decomposed into two pieces.}
\label{fig:cycle}
\end{figure}
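The computation just described is easy to implement. The following Python sketch is ours and purely illustrative: it assumes the alternating orientation of $C_6$ with sources $1,3,5$ and sinks $2,4,6$, which is consistent with the pieces $P(1)$ and $P(3,5)$ used above, although the precise orientation of Figure~\ref{fig:cycle} is not reproduced here. It builds the two dictionaries keyed by the images of $\{2,6\}$ and combines them as in Equations~\ref{eqn:sumhom} and~\ref{eqn:toy_homo}; a brute-force count is included as a cross-check.
\begin{verbatim}
from collections import defaultdict
from itertools import product

def hom_toy(out):
    """Count the homomorphisms of the oriented 6-cycle P (arcs 1->2, 1->6,
    3->2, 3->4, 5->4, 5->6) into an acyclically oriented host graph `out`
    (dict: node -> set of out-neighbours), via the pieces P(1) and P(3,5)."""
    c1 = defaultdict(int)          # hom(P(1),G,.), key = (image of 2, image of 6)
    for v1 in out:
        for x in out[v1]:
            for y in out[v1]:
                c1[(x, y)] += 1
    c2 = defaultdict(int)          # hom(P(3,5),G,.), same key
    for v3 in out:
        for x in out[v3]:              # image of node 2
            for b in out[v3]:          # image of node 4
                for v5 in out:
                    if b in out[v5]:
                        for y in out[v5]:  # image of node 6
                            c2[(x, y)] += 1
    # combine as in the two displayed equations above
    return sum(c1[key] * c2[key] for key in c1)

def hom_brute(out):
    """Reference count: enumerate all maps {1,...,6} -> V(G) and check the arcs."""
    V, arcs = list(out), [(1, 2), (1, 6), (3, 2), (3, 4), (5, 4), (5, 6)]
    return sum(all(phi[b - 1] in out[phi[a - 1]] for a, b in arcs)
               for phi in product(V, repeat=6))

# Tiny sanity check on a 4-node acyclically oriented host graph.
G = {0: {1, 2}, 1: {2}, 2: set(), 3: {0}}
assert hom_toy(G) == hom_brute(G)
\end{verbatim}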
Let us abstract the general approach from this toy example.
We want to decompose $P$ into a set of pieces $P_1,P_2,\ldots$ with the following properties: (i) each piece $P_i$ has a small number of sources $s(P_i)$, and (ii) we can obtain $\homo{P,G,\phi}$ by combining the homomorphism counts of the $P_i$.
This is achieved by the \emph{dag tree decomposition}, which we introduce in Section~\ref{sec:dec}.
Like the tree decomposition for undirected graphs, the dag tree decomposition leads to a dynamic program to compute $\homo{P,G}$.
\section{DAG tree decompositions}
\label{sec:dec}
Let $P=(V_P,A_P)$ be a directed acyclic graph.
We denote by $S_P$, or simply $S$, the set of nodes of $P$ having no incoming arc.
These are the \textit{sources} of $P$.
We denote by $V_P(u)$ the transitive closure of $u$ in $P$, i.e.\ the set of nodes of $P$ reachable from $u$, and we let $P(u) = P[V_P(u)]$ be the corresponding subgraph of $P$.
For a subset of sources $B \subseteq S$ we let $V_P(B) = \cup_{u \in B} V_P(u)$ and $P(B) = P[V_P(B)]$.
Thus, $P(B)$ is the subgraph of $P$ induced by all nodes reachable from $B$.
We call $B$ a \textit{bag} of sources.
We can now formally introduce our decomposition.
\begin{definition}[Dag tree decomposition]
\label{def:piecedecomp}
Let $P=(V_P,A_P)$ be a dag.
A dag tree decomposition (d.t.d.) of $P$ is a (rooted) tree $T=(\mathcal{B},\mathcal{E})$ with the following properties:
\begin{enumerate}\itemsep2pt
\item each node $B \in \mathcal{B}$ is a bag of sources $B \subseteq S_P$
\item $\bigcup_{B \in \mathcal{B}} B = S_P$
\item \label{pr:joint_path}for all $B,B_1,B_2 \in T$, if $B \in T(B_1,B_2)$ then $V_P(B_1) \cap V_P(B_2) \subseteq V_P(B)$
\end{enumerate}
\end{definition}
One can see the similarity with the tree decomposition of an undirected graph (Definition~\ref{def:treedecomp}).
However, our dag tree decomposition differs crucially in two aspects.
First, the bags are subsets of $S$ rather than subsets of $V_P$.
This is because the time needed to list the homomorphisms between $P(B_i)$ and $G$ is driven by $n^{|B_i|}$.
Second, the path-intersection property (3) concerns the pieces reachable from the bags rather than the bags themselves.
The reason is that, to combine the counts of two pieces together, their intersection must form a separator in $P$ (similarly to the tree decomposition of an undirected graph).
The dag tree decomposition induces the following notions of \emph{width}, used throughout the rest of the article.
\begin{definition}
\label{def:pw}
The \emph{width} of $T$ is $\tau(T) = \max_{B \in \mathcal{B}} |B|$.
The \emph{dag treewidth} $\tau(P)$ of $P$ is the minimum of $\tau(T)$ over all dag tree decompositions $T$ of $P$.
\end{definition}
Clearly $\tau(P) \in \{1,\ldots, k\}$ for any $k$-node dag $P$.
Figure~\ref{fig:dtd} shows a pattern $P$ together with a d.t.d.\ of width $1$.
We observe that $\tau(P)$ has no obvious relation to the treewidth $\operatorname{t}(H)$ of $H$; see the discussion in Subsection~\ref{sub:incexc}.
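To fix ideas, the following Python sketch (ours; the data representation and the function names are illustrative assumptions, not part of the paper) checks the three properties of Definition~\ref{def:piecedecomp} for a candidate decomposition and returns its width as in Definition~\ref{def:pw}.
\begin{verbatim}
def reachable(P, u):
    """Return V_P(u): the nodes of the dag P (dict: node -> set of
    out-neighbours) reachable from u, including u itself."""
    seen, stack = {u}, [u]
    while stack:
        for w in P[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def is_dag_tree_decomposition(P, bags, tree_edges):
    """Check the three properties of a dag tree decomposition.
    `bags` is a list of sets of nodes of P; `tree_edges` is a list of pairs
    of bag indices forming a tree on range(len(bags))."""
    sources = {u for u in P if all(u not in P[v] for v in P)}
    if not all(set(B) <= sources for B in bags):            # property 1
        return False
    if set().union(*map(set, bags)) != sources:             # property 2
        return False
    VP = [set().union(*[reachable(P, u) for u in B]) if B else set()
          for B in bags]                                    # V_P(B) per bag
    adj = {i: set() for i in range(len(bags))}
    for i, j in tree_edges:
        adj[i].add(j)
        adj[j].add(i)
    def tree_path(i, j):                                    # unique path in the tree
        parent, frontier = {i: None}, [i]
        while frontier:
            nxt = []
            for x in frontier:
                for y in adj[x]:
                    if y not in parent:
                        parent[y] = x
                        nxt.append(y)
            frontier = nxt
        path, x = [], j
        while x is not None:
            path.append(x)
            x = parent[x]
        return path
    for i in range(len(bags)):                              # property 3
        for j in range(len(bags)):
            common = VP[i] & VP[j]
            if any(not common <= VP[b] for b in tree_path(i, j)):
                return False
    return True

def width(bags):
    """The width of the decomposition: the size of its largest bag."""
    return max(len(B) for B in bags)
\end{verbatim}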
\begin{figure}
\caption{Left: a dag $P$ formed by five pieces. Right: a dag tree decomposition $T$ for $P$. Since $\tau(T)=1$ and the largest piece contains $4$ nodes, we can compute $\homo{P,G}$ in time $O(\poly(k)\, d^{3} n \log n)$.}
\label{fig:dtd}
\end{figure}
\subsection{Counting homomorphisms via dag tree decompositions}
For any $B \in \mathcal{B}$ let $T(B)$ be the subtree of $T$ rooted at $B$.
We let $\oline{B}$ be the down-closure of $B$ in $T$, that is, the union of all bags in $T(B)$.
Consider $P(\oline{B})$, the subgraph of $P$ induced by the nodes reachable from $\oline{B}$ (note the difference with $P(B)$, which contains only the nodes reachable from $B$).
We compute $\homo{P(\oline{B}),G}$ in a bottom-up fashion over all $B$, starting with the leaves of $T$ and moving towards the root.
This is similar to the dynamic program given by the standard tree decomposition (see~\cite{Fomin&2010}).
As anticipated, we actually compute $\homo{P(\oline{B}),\phi}$, the number of homomorphisms that extend a fixed mapping $\phi$.
We need the following concept:
\begin{definition}
Let $P_1=(V_{P_1},A_{P_1}), P_2=(V_{P_2},A_{P_2})$ be two subgraphs of $P$, and let $\phi_1: P_1 \to G$ and $\phi_2 : P_2 \to G$ be two homomorphisms.
We say $\phi_1$ and $\phi_2$ \emph{respect} each other if $\phi_1(u)=\phi_2(u)$ for all $u \in V_{P_1} \cap V_{P_2}$.
\end{definition}
Given a homomorphism $\phi_2 : P_2 \to G$, we denote by $\homo{P_1,G,\phi_2}$ the number of homomorphisms from $P_1$ to $G$ that respect $\phi_2$.
We can now present our main algorithmic result.
\begin{theorem}
\label{thm:counting}
Let $P$ be any $k$-node dag, and $T=(\mathcal{B},\mathcal{E})$ be a d.t.d.\ for $P$.
Fix any $B \in \mathcal{B}$ as the root of $T$.
There is a dynamic programming algorithm \text{HomCount}($P, T, B$) that in time $O(|\mathcal{B}| k^2 d^{k-\tau(T)} n^{\tau(T)} \log n)$ computes $\homo{P(\oline{B}), G, \phi_B}$ for all $\phi_B : P(B) \to G$.
This is also a bound on the time needed to compute $\homo{P,G}$.
\end{theorem}
The proof of Theorem~\ref{thm:counting} is given in the next subsection.
Before continuing, let $f_{T}(k)$ be an upper bound on the time needed to compute a d.t.d.\ of minimum width with at most $2^k$ bags for a pattern on $k$ nodes.
We can show that such a d.t.d.\ always exists:
\begin{lemma}
\label{lem:small_dtd}
Any $k$-node dag $P$ has a minimum-width d.t.d.\ on at most $2^{k}$ bags.
\end{lemma}
\begin{proof}
We show that, if a d.t.d.\ $T=(\mathcal{B},\mathcal{E})$ has two bags containing exactly the same sources, then one of the two bags can be removed.
This implies that there exists a minimum-width d.t.d.\ where every bag contains a distinct source set, which therefore has at most $2^k$ bags.
Suppose indeed $T$ contains two bags $X$ and $X'$ formed by the same subset of sources.
Let $B$ be the neighbor of $X$ on the unique path $T(X,X')$.
Let $T^*=(\mathcal{B}^*,\mathcal{E}^*)$ be the tree obtained from $T$ by replacing the edge $\{B',X\}$ with $\{B',B\}$ for every neighbor $B' \ne B$ of $X$ and then deleting $X$.
Clearly $|\mathcal{B}^*|=|\mathcal{B}|-1$, and properties (1) and (2) of Definition~\ref{def:piecedecomp} are satisfied.
Let us then check property (3).
Consider a generic path $T^*(B_1,B_2)$ and look at the corresponding path $T(B_1,B_2)$.
If $T(B_1,B_2)$ does not contain edges that we deleted, then $T^*(B_1,B_2)=T(B_1,B_2)$.
In this case the property holds for any bag in $T^*(B_1,B_2)$ since it holds in $T$.
Suppose instead $T(B_1,B_2)$ contains edges that we deleted.
Then $T^*(B_1,B_2)$ contains the same bags of $T(B_1,B_2)$ save that $X$ is replaced by $B$.
Thus we only need to check that $V_P(B_1) \cap V_P(B_2) \subseteq V_P(B)$.
By property (3), $V_P(B_1) \cap V_P(B_2) \subseteq V_P(X)$.
Moreover, since by construction $B \in T(X,X')$, property (3) also gives $V_P(X) = V_P(X) \cap V_P(X') \subseteq V_P(B)$.
Thus $V_P(B_1) \cap V_P(B_2) \subseteq V_P(B)$.
Therefore $T^*$ is a d.t.d.\ for $P$.
\qed
\end{proof}
Then, as an immediate corollary of Theorem~\ref{thm:counting}, we have:
\begin{theorem}
\label{thm:homo_cost}
We can compute $\homo{P,G}$ in time $f_{T}(k) + O(k^2 2^k d^{k-\tau(P)} n^{\tau(P)} \log n)$.
\end{theorem}
Theorem~\ref{thm:homo_cost} will be used in Section~\ref{sub:incexc} to prove the bounds for our original problem of counting the copies of $H$ via inclusion-exclusion arguments.
\subsubsection{Proof of Theorem~\ref{thm:counting}}
The algorithm behind Theorem~\ref{thm:counting} is similar to the one for counting homomorphisms using a tree decomposition.
To start, we prove that our dag tree decomposition enjoys a separator property similar to the one enjoyed by tree decompositions.
\begin{lemma}
\label{lem:separator}
Let $T$ be a rooted d.t.d.\ and let $B_1, \ldots, B_l$ be the children of $B$ in $T$.
Then for all $i \in [l]$:
\begin{enumerate}\itemsep0pt
\item[a.] $V_P(\oline{B_i}) \cap V_P(\oline{B_j}) \subseteq V_P(B)$ for all $j \ne i$
\item[b.] for any arc $(u,u') \in P(\oline B)$, if $u \in V_P(\oline{B_i}) \setminus V_P(B)$ then $u' \in V_P(\oline{B_i})$
\item[c.] for any arc $(u',u) \in P(\oline B)$, if $u \in V_P(\oline{B_i}) \setminus V_P(B)$ then $u' \in V_P(\oline{B_i})$
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (a).
Suppose for some $ i \ne j$ we have $V_P(\oline{B_i}) \cap V_P(\oline{B_j}) \nsubseteq V_P(B)$.
So there exists some node $u \in V_P$ such that $u \in V_P(\oline{B_i})$, $u \in V_P(\oline{B_j})$, and $u \notin V_P(B)$.
By definition of $\oline{\cdot}$, this implies $u \in V_P(B_i')$ and $u \in V_P(B_j')$ for some bags $B_i' \in T(B_i)$ and $B_j' \in T(B_j)$.
Observe however that $B \in T(B_i',B_j')$.
Thus, by point (3) of Definition~\ref{def:piecedecomp}, we have $u \in V_P(B)$.
This contradicts the assumption that $u \notin V_P(B)$.
Now we prove (b) and (c).
For (b): since $u \in V_P(\oline{B_i})$ and $(u,u')$ is an arc of $P$, the node $u'$ is reachable from $\oline{B_i}$ as well, hence $u' \in V_P(\oline{B_i})$.
For (c), suppose by contradiction $u' \notin V_P(\oline{B_i})$.
Therefore, either $u' \in V_P(B)$, or $u' \in V_P(B_j')$ for some bag $B_j' \in T(B_j)$ with $j \ne i$.
In both cases however we have $u \in V_P(B)$: in the first case this holds since $u$ is reachable from $u'$, and in the second case since $B \in T(B_i,B_j')$ and by point (3) of Definition~\ref{def:piecedecomp}.
Thus in any case $u \in V_P(B)$, which contradicts again $u \notin V_P(B)$.
\qed
\end{proof}
Lemma~\ref{lem:separator} says that $V_P(B)$ is a separator for the sub-patterns $P(\oline{B_i})$ in $P$.
This allows us to compute $\homo{P(\oline{B})}$ by combining $\homo{P(\oline{B_1})},\ldots,\homo{P(\oline{B_l})}$.
Next, we show that each homomorphism $\phi$ of $P(\oline{B})$ is the \emph{juxtaposition} (definition below) of some $\phi_B$ of $P(B)$ and some $\phi_1,\ldots,\phi_l$ of $P(\oline{B_1}),\ldots,P(\oline{B_l})$, provided they respect $\phi_B$.
This establishes a bijection, implying that we can count the homomorphisms $\phi$ by multiplying the counts of the homomorphisms $\phi_B,\phi_1,\ldots,\phi_l$.
\begin{definition}
\label{def:juxt}
Let $\{\phi_1,\ldots,\phi_{\ell}\}$ be any set of homomorphisms, where for all $i=1,\ldots,\ell$ we have $\phi_i : X_i \to G$ and $\phi_i$ respects $\phi_j$ for all $j =1,\ldots,\ell$.
The \emph{juxtaposition} of $\phi_1,\ldots,\phi_{\ell}$, denoted by $\phi_1\ldots\phi_\ell$, is the homomorphism $\phi : \cup_{i=1}^{\ell} X_i \to G$ such that $\phi(u)=\phi_i(u)$ whenever $u \in X_i$.
\end{definition}
Note that the juxtaposition is always well-defined and unique, since the $\phi_i$ respect each other and the image of every $u$ is determined by at least one among $\phi_1,\ldots,\phi_{\ell}$.
\begin{lemma}
\label{lem:homo_comp}
Let $T$ be a d.t.d.\ and let $B_1, \ldots, B_l$ be the children of $B$ in $T$.
Fix any $\phi_B : P(B) \to G$.
Let $\Phi(\phi_B) = \{\phi : P(\oline{B}) \to G \,|\, \phi \text{ respects } \phi_B\}$, and for $i=1,\ldots,l$ let $\Phi_i(\phi_B) = \{\phi : P(\oline{B_i}) \to G \,|\, \phi \text{ respects } \phi_B\}$.
Then there is a bijection between $\Phi(\phi_B)$ and $\Phi_1(\phi_B) \times \ldots \times \Phi_l(\phi_B)$, and therefore:
\begin{align}
\homo{P(\oline{B}),G,\phi_B} = \prod_{i=1}^{l}\homo{P(\oline{B_i}),G,\phi_B}
\end{align}
\end{lemma}
\begin{proof}
First, we show there is an injection between $\Phi(\phi_B)$ and $\Phi_1(\phi_B) \times \ldots \times \Phi_l(\phi_B)$.
Fix any $\phi \in \Phi(\phi_B)$, and consider the tuple $(\phi_1, \ldots, \phi_l)$ where
each $\phi_i$ is the restriction of $\phi$ to $P(\oline{B_i})$.
Note that $\phi_i$ is unique, and that it respects $\phi_B$ since $\phi$ does.
Thus $\phi_i \in \Phi_i(\phi_B)$.
It follows that $(\phi_1, \ldots, \phi_l) \in \Phi_1(\phi_B) \times \ldots \times \Phi_l(\phi_B)$.
Now we show there is an injection between $\Phi_1(\phi_B) \times \ldots \times \Phi_l(\phi_B)$ and $\Phi(\phi_B)$.
Consider any tuple $(\phi_1, \ldots, \phi_l) \in \Phi_1(\phi_B) \times \ldots \times \Phi_l(\phi_B)$, and consider the juxtaposition $\phi = \phi_B\phi_1\ldots\phi_l$.
By Lemma~\ref{lem:separator}, every arc of $P(\oline{B})$ lies entirely within $P(B)$ or within some $P(\oline{B_i})$; hence $\phi$ preserves every arc, that is, $\phi : P(\oline{B}) \to G$ is a homomorphism, and it respects $\phi_B$.
It follows that $\phi \in \Phi(\phi_B)$.
\qed
\end{proof}
Last, we bound the cost of enumerating the homomorphisms of a piece of $P$.
\begin{lemma}
\label{lem:listing2}
Given any $B \subseteq S$, the set of homomorphisms $\Phi = \{ \phi : P(B) \to G \}$ has size $O(d^{k-|B|} n^{|B|})$ and can be enumerated in time $O(k^2 d^{k-|B|} n^{|B|})$.
\end{lemma}
\begin{proof}
We prove the bound on the enumeration time; the proof gives immediately also the bound on $|\Phi|$.
Let $B=\{u_1,\ldots,u_b\}$ where $b = |B|$.
Fix a spanning forest $\{T_1, \ldots, T_b\}$ of $P(B)$, where each $T_i=(V_i,A_i)$ is a directed tree rooted at $u_i$ (arcs pointing away from the root).
Consider any $\phi \in \Phi$, and let $\phi_i$ be its restriction to $V_i$.
Clearly, $\phi=\phi_1\ldots\phi_{b}$.
Note that $\phi_i$ is a homomorphism of $T_i$ in $G$.
Thus, to enumerate $\Phi$ we can enumerate all possible tuples $(\phi_1,\ldots,\phi_b)$ where $\phi_i$ is a homomorphism of $T_i$ in $G$ for all $i$.
Note that not all such tuples give a valid juxtaposition that is a homomorphism $\phi \in \Phi$. However, we can check if $\phi \in \Phi$ in time $O(k^2)$ by checking the arcs between the images of $\phi$ in $G$.
Let then $\Phi_{T_i}$ be the set of homomorphisms of $T_i$ in $G$.
We show how to enumerate $\Phi_{T_i}$ in time $O(d^{|V_i|-1} n)$, and thus all tuples $(\phi_1,\ldots,\phi_b) \in \Phi_{T_1} \times \ldots \times \Phi_{T_b}$ in time $\prod_{i=1}^b O(d^{|V_i|-1} n) = O(d^{k-b} n^{b})$.
Together with the check on the arcs, this gives a total running time of $O(k^2 d^{k-b} n^{b})$ for enumerating $\Phi$, as desired.
To enumerate $\Phi_{T_i}$, we take each $v \in G$ and enumerate all $\phi_i \in \Phi_{T_i}$ such that $\phi_i(u_i)=v$.
To this end note that, once we have fixed $\phi_i(x)$, for each arc $(x,y) \in T_i$ we have at most $d$ choices for $\phi_i(y)$.
Thus we can enumerate all $\phi_i \in \Phi_{T_i}$ that map $u_i$ to $v$ in time $d^{|V_i|-1}$.
The total time to enumerate $\Phi_{T_i}$ is therefore $O(d^{|V_i|-1} n)$, as claimed.
\qed
\end{proof}
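For concreteness, here is a Python sketch of the enumeration procedure used in the proof of Lemma~\ref{lem:listing2} (ours, with an illustrative graph representation; it is unoptimized and only meant to mirror the argument): root images are chosen freely, non-root images are extended along a spanning forest with at most $d$ choices per arc, and all remaining arcs are checked at the end.
\begin{verbatim}
from itertools import product

def piece_homomorphisms(piece, roots, out):
    """Enumerate the homomorphisms of the dag `piece` (dict: node -> set of
    out-neighbours, every node reachable from `roots`) into the acyclically
    oriented host graph `out`, following the proof of Lemma listing2."""
    roots = list(roots)
    order, parent, seen = list(roots), {}, set(roots)
    for u in order:                          # BFS builds a spanning forest
        for w in piece[u]:
            if w not in seen:
                seen.add(w)
                parent[w] = u
                order.append(w)
    rest = order[len(roots):]
    for root_images in product(list(out), repeat=len(roots)):
        yield from _extend(dict(zip(roots, root_images)), rest, parent,
                           piece, out)

def _extend(phi, rest, parent, piece, out):
    if not rest:
        # verify every arc of the piece, not only the forest arcs
        if all(phi[w] in out[phi[u]] for u in piece for w in piece[u]):
            yield dict(phi)
        return
    u = rest[0]
    for v in out[phi[parent[u]]]:            # at most d candidates for phi(u)
        phi[u] = v
        yield from _extend(phi, rest[1:], parent, piece, out)
        del phi[u]
\end{verbatim}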
We can now describe our dynamic programming algorithm, \text{HomCount}, to compute $\homo{P(\oline{B}),G}$.
Given a d.t.d.\ $T$ of $P$, the algorithm goes bottom-up from the leaves of $T$ towards the root, combining the counts using Lemma~\ref{lem:homo_comp}.
For readability, we write the algorithm in a recursive fashion.
\begin{algorithm}[h!]
\caption{\text{HomCount}($P, T, B$)}
\begin{algorithmic}[1]
\State $C_{B} =$ empty dictionary with default value $0$
\If{$B$ is a leaf} \Comment{base case}
\For{every homomorphism $\phi_B : P(B) \to G$} \label{ln:enum1}
\State $C_B(\phi_B) = 1$ \label{ln:acc1}
\EndFor
\Else
\State let $B_1,\ldots,B_l$ be the children of $B$ in $T$
\For{$i=1,\ldots,l$}
\State $C_{B_i} =$ \text{HomCount}($P, T, B_i$) \label{ln:rec} \Comment{recur on the subtree of $T$ rooted at $B_i$}
\State $AGG_{B_i} =$ empty dictionary with default value $0$
\For{every key $\phi$ in $C_{B_i}$} \label{ln:reindex} \Comment{we aggregate the values of $C$}
\State let $\phi_r$ be the restriction of $\phi$ to $V_P(B) \cap V_P(\oline{B_i})$
\State $AGG_{B_i}(\phi_r) \mathrel{+}= C_{B_i}(\phi)$
\EndFor \label{ln:reindex2}
\EndFor
\For{every homomorphism $\phi : P(B) \to G$} \label{ln:enum2} \Comment{Lemma~\ref{lem:homo_comp} in action}
\State for $i=1,\ldots,l$ let $\phi_r$ be the restriction of $\phi$ to $V_P(B) \cap V_P(\oline{B_i})$
\State $C_B(\phi) = \prod_{i=1}^l AGG_{B_i}(\phi_{r})$ \label{ln:acc2}
\EndFor
\EndIf
\State \Return $C_B$
\end{algorithmic}
\end{algorithm}
We prove:
\begin{lemma}
\label{lem:pccorrect}
\label{lem:pccost}
Let $P$ be any dag, $T=(\mathcal{B},\mathcal{E})$ any d.t.d.\ for $P$, and $B$ any element of $\mathcal{B}$.
\text{HomCount}($P, T, B$) in time $O(|\mathcal{B}| \poly(k) d^{k-\tau(T)} n^{\tau(T)} \log{n})$ returns a dictionary $C_B$ that for all $\phi_B : P(B) \to G$ satisfies $C_B(\phi_B) = \homo{P(\oline{B}), G, \phi_B}$.
\end{lemma}
\begin{proof}
We first prove the correctness, by induction on the nodes of $T$.
The base case is when $B$ is a leaf of $T$.
In this case $P(B) = P(\oline{B})$, and the algorithm sets $C_B(\phi_B) = 1$ for each $\phi_B : P(B) \to G$.
Therefore $C_B(\phi_B) = \homo{P(\oline{B}), G, \phi_B}$ as desired.
The inductive case is when $B$ is an internal node of $T$.
As inductive hypothesis we assume that, for every child $B_i$ of $B$, the dictionary $C_{B_i}$ computed at line~\ref{ln:rec} satisfies $C_{B_i}(\phi) = \homo{P(\oline{B_i}),G,\phi}$ for every $\phi : P(B_i) \to G$.
Let $\Phi_{P(\oline{B_i})}$ be the set of homomorphisms from $P(\oline{B_i})$ to $G$, and let $\Phi_{P(\oline{B_i})}(\phi)$ be the subset of elements of $\Phi_{P(\oline{B_i})}$ that respect $\phi$.
Thus, the inductive hypothesis says that $C_{B_i}(\phi) = |\Phi_{P(\oline{B_i})}(\phi)|$ for every $\phi : P(B_i) \to G$.
Now consider the loop at lines~\ref{ln:reindex}--\ref{ln:reindex2}.
We claim that, after that loop, we have:
\begin{align}
\label{eqn:AGG}
AGG_{B_i}(\phi_r)
= \!\!\sum_{\substack{\phi \text{ in } C_{B_i} \\ \phi \text{ respects } \phi_{r}}}
\!\!\!\!\!\!\! \!\!\!\! C_{B_i}(\phi)
= \sum_{\substack{\phi \,:\, P(B_i) \to G \\ \phi \text{ respects } \phi_{r}}}
\!\!\!\!\!\!\! \!\!\!\! C_{B_i}(\phi)
= \sum_{\substack{\phi \,:\, P(B_i) \to G \\ \phi \text{ respects } \phi_{r}}}
\!\!\!\!\!\!\! \!\!\!\! \big|\Phi_{P(\oline{B_i})}(\phi)\big|
= \big|\Phi_{P(\oline{B_i})}(\phi_r)\big|
\end{align}
The first equality holds since the loop adds $C_{B_i}(\phi)$ to $AGG_{B_i}(\phi_r)$ if and only if the restriction of $\phi$ to $V_P(B) \cap V_P(\oline{B_i})$ is $\phi_r$, that is, if and only if $\phi$ respects $\phi_r$.
The second equality holds since the keys of $C_{B_i}$ are a subset of $\{\phi \,:\, P(B_i) \to G\}$ and $C_{B_i}(\phi)=0$ if $\phi$ is not in $C_{B_i}$.
The third equality holds by the inductive hypothesis above.
The fourth equality holds since the sets $\Phi_{P(\oline{B_i})}(\phi)$ form a partition of $\Phi_{P(\oline{B_i})}(\phi_r)$.
Finally, consider the loop at lines~\ref{ln:enum2}--\ref{ln:acc2}.
We claim that line~\ref{ln:acc2} sets:
\begin{align}
C_B(\phi)
= \prod_{i=1}^l \big|\Phi_{P(\oline{B_i})}(\phi_r)\big|
= \prod_{i=1}^l \big|\Phi_{P(\oline{B_i})}(\phi)\big|
= \homo{P(\oline{B}), G, \phi}
\end{align}
The first equality holds by coupling line~\ref{ln:acc2} and Equation~\eqref{eqn:AGG}.
The second equality holds since any element of $\Phi_{P(\oline{B_i})}$ respects $\phi$ if and only if it respects its restriction $\phi_r$ to $V_P(B) \cap V_P(\oline{B_i})$, thus $\Phi_{P(\oline{B_i})}(\phi_r) = \Phi_{P(\oline{B_i})}(\phi)$.
The last equality holds by definition of $\Phi_{P(\oline{B_i})}(\phi_r)$ and by Lemma~\ref{lem:homo_comp}.
This proves the correctness.
Let us turn to the running time.
We can represent a homomorphism $\phi$ as a tuple of nodes of $G$.
Now, if $B$ is a leaf in $T$, then the running time is dominated by the loop at line~\ref{ln:enum1}.
By the bound of Lemma~\ref{lem:listing2}, the loop performs $O(d^{k-|B|}n^{|B|})$ iterations.
Each iteration takes time $O(k^2)$ to check $\phi_B$ (see again Lemma~\ref{lem:listing2}), and time $O(k \log{n})$ to update $C_B(\phi_B)$ since $C_B$ contains $O(n^k)$ entries.
This gives a running time of $O(k^2 d^{k-|B|} n^{|B|} \log{n})$.
If $B$ is an internal node of $T$, then the time taken by Lines~\ref{ln:reindex}--\ref{ln:reindex2} is dominated by the recursive calls at line~\ref{ln:rec}.
The loop at line~\ref{ln:enum2} follows the analysis above for each of the $l$ children of $B$, for a total running time of $O(k^2 l\, d^{k-|B|} n^{|B|} \log{n})$.
The total running time excluding recursive calls is then $O(k^2 l\, d^{k-|B|} n^{|B|} \log{n})$ as well.
The claim follows by recursing on the subtrees of $T(B)$, noting that the sum of the numbers of children over all bags is at most $|\mathcal{B}|$.
\qed
\end{proof}
\subsection{Inclusion-exclusion arguments and the dag-treewidth of undirected graphs}
\label{sub:incexc}
We turn to computing $\homo{H,G}$, $\sub{H,G}$ and $\ind{H,G}$.
We do so via standard inclusion-exclusion arguments, using our algorithm for computing $\homo{P,G}$ as a primitive.
To this end we shall define appropriate notions of width for undirected pattern graphs.
Let $\Sigma(H)$ be the set of all dags $P$ that can be obtained by orienting $H$ acyclically.
Let $\Theta(H)$ be the set of all equivalence relations on $V_H$ (that is, all the partitions of $V_H$), and for $\theta \in \Theta(H)$ let $H/\theta$ be the pattern obtained from $H$ by identifying equivalent nodes according to $\theta$ and removing loops and multiple edges.
Let $D(H)$ be the set of all supergraphs of $H$ on the node set $V_H$, including $H$.
\begin{definition}
\label{def:widths}
The \emph{dag treewidth} of $H$ is $\tau(H) = \tau_3(H)$, where:
\begin{align}
\tau_1(H) &= \max\{ \tau(P) : {P \in \Sigma(H)} \}\\
\tau_2(H) &= \max\{ \tau_1(H/\theta) : {\theta \in \Theta(H)} \}\\
\tau_3(H) &= \max\{ \tau_2(H') : {H' \in D(H)} \}
\end{align}
\end{definition}
Note that $\tau(H)$ is unrelated to the treewidth $\operatorname{t}(H)$.
For example, when $H$ is a clique we have $\operatorname{t}(H)=k$ and $\tau(H)=1$; when $H$ is an independent set on $k$ nodes we have $\operatorname{t}(H)=1$ and $\tau(H)=\Theta(k)$, see Lemma~\ref{thm:pw_tw}; and when $H$ is an expander we have $\operatorname{t}(H),\tau(H) \in \Theta(k)$, see again Lemma~\ref{thm:pw_tw}.
In fact, $\tau(H)$ is within constant factors of the independence number $\alpha(H)$ of $H$ (see Section~\ref{sub:alpha}), and thus decreases as $H$ becomes denser.
This happens because adding arcs increases the number of nodes reachable from the sources of $P \in \Sigma(H)$, so we may need fewer sources to reach a given piece of $P$.
When $H$ is a clique, $P$ is reachable from just one source and thus $\tau(H)=1$.
Clearly, $\tau_1(H) \le \tau_2(H) \le \tau(H)$.
The intuition behind $\tau_1(H)$ is that, in $G$, each homomorphism of $H$ corresponds to a homomorphism of some acyclic orientation $P$ of $H$.
Thus to compute $\homo{H,G}$ we sum $\homo{P,G}$ over all acyclic orientations $P$ of $H$, and the running time is dominated by the $P$ with largest dag treewidth.
The intuition behind $\tau_2(H)$ is similar but now we look at computing $\sub{H,G}$.
Since homomorphisms can map different nodes of $H$ to the same node of $G$, to recover $\sub{H,G}$ we must combine $\homo{H',G}$ for all possible $H' = H/\theta$ through inclusion-exclusion arguments.
The intuition behind $\tau_3(H)$ is that to compute $\ind{H,G}$ we must remove from $\sub{H,G}$ the counts of $\sub{H',G}$ for certain supergraphs $H'$ of $H$.
Indeed, the three measures $\tau_1(H),\tau_2(H),\tau(H)$ yield:
\begin{theorem}
\label{thm:wrapping}
Consider any $k$-node pattern graph $H=(V_H,E_H)$, and let $f_{T}(k)$ be an upper bound on the time needed to compute a d.t.d.\ of minimum width on $2^{O(k \log k)}$ bags for any $k$-node dag.
Then one can compute:
\begin{itemize}
\item $\homo{H,G}$ in time $2^{O(k \log k)} \cdot O(f_{T}(k) + d^{k-\tau_1(H)} n^{\tau_1(H)} \log n)$,
\item $\sub{H,G}$ in time $2^{O(k \log k)} \cdot O(f_{T}(k) + d^{k-\tau_2(H)} n^{\tau_2(H)} \log n)$,
\item $\ind{H,G}$ in time $2^{O(k^2)} \cdot O(f_{T}(k) + d^{k-\tau(H)} n^{\tau(H)} \log n)$.
\end{itemize}
The claim still holds if we replace $\tau_1,\tau_2,\tau$ with upper bounds, and $f_{T}(k)$ with the time needed to compute a d.t.d.\ on $2^{O(k \log k)}$ bags that satisfies those upper bounds.
\end{theorem}
\begin{proof}
We prove the three bounds in three separate steps. The last claim follows straightforwardly.
\paragraph{From dags to undirected patterns.}
Let $H$ be any undirected pattern.
First, note that:
\begin{align}
\label{eq:homo_sum}
\homo{H,G} = \sum_{P \in \Sigma(H)} \!\!\!\!\homo{P,G}
\end{align}
Let indeed $\Phi(H)=\{\phi : H \to G\}$ be the set of homomorphisms from $H$ to $G$.
Similarly, for any $P\in \Sigma(H)$ define $\Phi(P)=\{\phi_P : P \to G\}$ (note that $\phi_P$ must preserve the direction of the arcs).
Then, there is a bijection between $\Phi(H)$ and $\cup_{P \in \Sigma(H)} \Phi(P)$.
Consider indeed any $\phi \in \Phi(H)$.
Let $\sigma$ be the orientation of $H$ that assigns to $\{u,v\} \in E_H$ the orientation of $\{\phi(u),\phi(v)\}$ in $G$, and let $P=H_{\sigma}$.
Then $\phi$ is a homomorphism of $P$ in $G$.
On the other hand consider any homomorphism $\phi \in \Phi(P)$ for some acyclic orientation $P$ of $H$.
By ignoring the orientation of the edges, $\phi \in \Phi(H)$, too.
Thus, to compute $\homo{H,G}$ we compute $\homo{P,G}$ for all $P \in \Sigma(H)$ and apply Equation~\ref{eq:homo_sum}.
Clearly, enumerating $\Sigma(H)$ takes time $O(k!) = 2^{O(k \log k)}$.
For each $P$, by Lemma~\ref{lem:small_dtd} in time $f_{T}(k)$ we compute a d.t.d.\ $T=(\mathcal{B},\mathcal{E})$ of width $\tau(P)$ such that $|\mathcal{B}|\le 2^k$.
Then, by Lemma~\ref{lem:pccost} we compute $\homo{P,G}$ in time $O(2^k \poly(k) d^{k-\tau(P)} n^{\tau(P)} \log n)$.
Thus, we can compute every $\homo{P,G}$ in time $O(f_{T}(k) + 2^{O(k)} d^{k-\tau(P)} n^{\tau(P)} \log n)$.
Multiplying by $2^{O(k \log k)}$ gives the first bound of the theorem.
\paragraph{From homomorphisms to non-induced copies.}
Recall that $H/\theta$ is the graph obtained from $H$ by identifying the nodes in the same equivalence class and removing loops and multiple edges, where $\theta \in \Theta(H)$ is an equivalence relation (or partition) over $V_H$.
Then, by Equation~15 of~\cite{Borgs&2006}:
\begin{align}
\label{eqn:inj}
\inj{H,G} = \sum_{\theta \in \Theta(H)} \!\!\!\! \mu(\theta) \, \homo{H/\theta, G}
\end{align}
where $\mu(\theta) = \prod_{A \in \theta} (-1)^{|A|-1}(|A|-1)!$ and $A$ runs over the equivalence classes (the sets) in $\theta$.
Thus, to compute $\inj{H,G}$, we enumerate all $\theta \in \Theta(H)$, compute $\homo{H/\theta, G}$, and apply Equation~\ref{eqn:inj}.
It is known that $|\Theta(H)| = 2^{O(k \ln{k})}$ (see e.g.~\cite{Berend&2010}), and clearly for each $\theta$ we can compute $\mu(\theta)$ and $H/\theta$ in time $O(\poly(k))$.
Thus, the first bound of the theorem holds for computing $\inj{H,G}$ too.
Finally, we compute $\sub{H,G} = \frac{\inj{H,G}}{\aut{H}}$, where $\aut{H}$ is the number of automorphisms of $H$, which can be computed in time $2^{O(\sqrt{k \ln k})}$~\cite{Mathon1979}.
This proves the second bound of the theorem.
\paragraph{From non-induced to induced.}
Finally, let $D(H)$ be the set of all supergraphs of $H$ on the same node set.
Then from Equation~14 of \cite{Borgs&2006}:
\begin{align}
\label{eqn:ind}
\ind{H,G} = \sum_{H' \in D(H)} \!\!\!\!(-1)^{|E_{H'} \setminus E_H|} \, \inj{H',G}
\end{align}
To compute $\ind{H,G}$, we take every $H'\in D(H)$, compute $\inj{H',G}$, and apply Equation~\ref{eqn:ind}.
Since $|D(H)| \le 2^{k^2}$, the third bound of the theorem follows.
\qed
\end{proof}
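The following Python sketch mirrors the three inclusion-exclusion steps in the proof above. It assumes hypothetical helpers \texttt{acyclic\_orientations}, \texttt{partitions}, \texttt{quotient} and \texttt{automorphisms}, a pattern object exposing \texttt{nodes}, \texttt{edges} (stored as sorted pairs) and \texttt{with\_extra\_edges}, and a primitive \texttt{count\_homomorphisms\_dag} (which can be obtained by summing the dictionary returned by \text{HomCount} at the root of a d.t.d.); none of these names come from the formal development.
\begin{verbatim}
from itertools import combinations
from math import factorial

def count_homo(H, G):
    # homo(H,G): sum over all acyclic orientations of H (Equation homo_sum)
    return sum(count_homomorphisms_dag(P, G) for P in acyclic_orientations(H))

def count_inj(H, G):
    # inj(H,G): inclusion-exclusion over partitions of V_H (Equation inj)
    total = 0
    for theta in partitions(H.nodes):
        mu = 1
        for block in theta:
            mu *= (-1) ** (len(block) - 1) * factorial(len(block) - 1)
        total += mu * count_homo(quotient(H, theta), G)
    return total

def count_sub(H, G):
    # sub(H,G) = inj(H,G) / aut(H); the division is exact
    return count_inj(H, G) // automorphisms(H)

def count_ind(H, G):
    # ind(H,G): inclusion-exclusion over supergraphs of H (Equation ind)
    non_edges = [e for e in combinations(sorted(H.nodes), 2) if e not in H.edges]
    total = 0
    for r in range(len(non_edges) + 1):
        for extra in combinations(non_edges, r):
            total += (-1) ** r * count_inj(H.with_extra_edges(extra), G)
    return total
\end{verbatim}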
The algorithmic part of our work is complete.
We shall now focus on computing good dag tree decompositions, so to instantiate Theorem~\ref{thm:wrapping} and obtain the upper bounds of Section~\ref{sub:results}.
\section{Computing good dag tree decompositions}
\label{sec:twbound}
In this section we show how to compute dag tree decompositions of low width.
First, we show that for every $k$-node dag $P$ we can compute in time $2^{O(k)}$ a dag tree decomposition $T$ that satisfies $\tau(T) \le \lfloor \frac{k}{4}\rfloor+2$.
This result requires a nontrivial proof. As a corollary, we prove Theorem~\ref{thm:mainbound} and Theorem~\ref{thm:lowavgd}.
Second, we give improved bounds for cliques minus $\epsilon$ edges; as a corollary, we prove Theorem~\ref{thm:dense_ub}.
Third, we give improved bounds for complete multipartite graphs plus $\epsilon$ edges; as a corollary, we prove Theorem~\ref{thm:bipartite}.
Finally, we show that $\Omega(\alpha(H)) \le \tau(H) \le \alpha(H)$ where $\alpha(H)$ is the independence number of $H$, which is of independent interest.
This implies that the trivial decomposition on one bag has width that is asymptotically optimal.
To proceed, we need some additional notation.
For a dag $P$, we say $v \in V_P$ is a \emph{joint} if it is reachable from at least two sources, i.e., if $v \in V_P(u) \cap V_P(u')$ for some $u,u' \in S$ with $u \ne u'$.
Let $J$ be the set of joints of $P$.
We write $J(u)$ for the set of joints reachable from $u$, and for any $X \subseteq V_P$ we let $J(X) = \cup_{u \in X} J(u)$.
Similarly, we denote by $S(y)$ the sources from which $y$ is reachable, and we let $S(X) = \cup_{u \in X} S(u)$.
\input{allp}
\subsection{Bounds for quasi-cliques (Theorem~\ref{thm:dense_ub})}
\label{sub:dense_bound}
\begin{lemma}
\label{lem:quasicliques}
If a $k$-node dag $P$ has ${k \choose 2} - \epsilon$ edges, then in time $O(\poly(k))$ one can compute a d.t.d.\ $T$ for $P$ on two bags such that $\tau(T) \le \lceil \nicefrac{1}{2} + \sqrt{\nicefrac{\epsilon}{2}} \, \rceil$.
\end{lemma}
\begin{proof}
The source set $S$ of $P$ is an independent set.
Hence $\epsilon \ge {|S| \choose 2}$, and $|S| \le 1 + \sqrt{2\epsilon}$.
Consider any tree $T$ on two bags $B_1,B_2$ such that $B_1 \cup B_2 = S$, $|B_1|=\lfloor |S|/2 \rfloor$, and $|B_2|=\lceil |S|/2 \rceil$.
It is immediate to check that $T$ satisfies the claim.
\qed
\end{proof}
By coupling Lemma~\ref{lem:quasicliques} and Theorem~\ref{thm:wrapping}, for computing $\homo{H,G}$ and $\sub{H,G}$ we obtain a running time bound of $2^{O(k \log k)} \cdot O(d^{k-\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} n^{\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} \log n)$.
For $\ind{H,G}$, we refine the bound of Theorem~\ref{thm:wrapping} by observing that $|D(H)| \le 2^{\epsilon}$.
This yields a running time bound of $2^{O(\epsilon + k \log k)} \cdot O(d^{k-\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} n^{\lceil \frac{1}{2} + \sqrt{\frac{\epsilon}{2}} \, \rceil} \log n)$.
This concludes the proof of Theorem~\ref{thm:dense_ub}.
\subsection{Bounds for quasi-multipartite graphs (Theorem~\ref{thm:bipartite})}
\begin{lemma}
\label{lem:hbipartite}
If $H$ is a complete multipartite graph, then $\tau_2(H) = 1$.
If $H$ is a complete multipartite graph plus $\epsilon$ edges, then $\tau_2(H) \le \lfloor\frac{\epsilon}{4}\rfloor+2$.
In any case, for any $\theta \in \Theta(H)$, for any acyclic orientation $P$ of $H/\theta$ we can compute in time $2^{O(k)}$ a d.t.d.\ of $P$ on $O(k)$ bags whose width satisfies the bounds above.
\end{lemma}
\begin{proof}
First, suppose $H=(V_H,E_H)$ is complete multipartite, so $V_H = V_H^1 \cup \ldots \cup V_H^{\kappa}$ where each $V_H^j$ is a maximal independent set in $H$.
In any acyclic orientation $P$ of $H$, the source set $S$ satisfies $S \subseteq V_H^j$ for some $j \in \{1,\ldots, \kappa\}$.
Moreover, $V_P(u) = V_P(u')$ for any $u,u' \in S$.
A d.t.d.\ $T$ for $P$ of width $\tau(T) = 1$ is the tree on $|S|$ bags with one source per bag, which can be computed in time $O(\poly(k))$.
Suppose now we add $\epsilon$ arcs to $P$, with any orientation; this means $H$ is a complete multipartite graph plus $\epsilon$ edges.
Again we have $S \subseteq V_H^j$, but now for some $u,u' \in S$ we might have $V_P(u) \ne V_P(u')$, so the d.t.d.\ above might not be valid anymore.
Let $P_j = P[V_H^j]$ and consider any d.t.d.\ $T$ for $P_j$.
We argue that $T$ is a valid d.t.d.\ for $P$ as well.
First, the source set of $P_j$ is the same as that of $P$ (that is, $S$).
Thus, since $T$ satisfies properties (1) and (2) of Definition~\ref{def:piecedecomp} for $P_j$, then it does so for $P$, too.
For property (3), note that every node $v \in V_H \setminus V_H^j$ is reachable from every $u \in S$.
Thus, all bags $B$ of $T$ satisfy $V_P(B) = (V_H \setminus V_H^j) \cup (V_P(B) \cap V_H^j)$.
As a consequence, for any three bags $B,B_1,B_2$, if $V_{P_j}(B_1) \cap V_{P_j}(B_2) \subseteq V_{P_j}(B)$ then $V_{P}(B_1) \cap V_{P}(B_2) \subseteq V_{P}(B)$.
Thus $T$ satisfies property (3) and is a d.t.d.\ for $P$.
Therefore, any d.t.d.\ for $P_j$ is a d.t.d.\ for $P$.
Now, since $P_j$ has at most $\epsilon$ edges, by Theorem~\ref{thm:widthbound} in time $2^{O(k)}$ we can compute a d.t.d.\ for it of width at most $\lfloor\frac{\epsilon}{4}\rfloor+2$ on $O(k)$ bags.
Consider now any $\theta \in \Theta(H)$.
For any $v \in V_H$, we denote by $\theta(v)$ the node of $H/\theta$ corresponding to $v$, and for any $x \in H/\theta$ we let $\theta^{-1}(x) = \{u \in V_H : \theta(u)=\theta(x)\}$ be the set of nodes of $H$ identified in $x$.
Let $P$ be any acyclic orientation of $H/\theta$.
Since the sources $S$ of $P$ form an independent set, then $\cup_{x \in S} \theta^{-1}(x) \subseteq V_H^j$ for some $j$.
Moreover, for any node $x$ of $P$, if $v \in \theta^{-1}(x)$ for some $v \in V_H^i$ with $i \ne j$ then $x$ is reachable from every node in $S$.
Therefore, if we let $V_P^j = \{\theta(v) : v \in V_H^j\}$ and $P_j=P[V_P^j]$, the arguments above apply and we obtain the same bound.
\qed
\end{proof}
By coupling Lemma~\ref{lem:hbipartite} and Theorem~\ref{thm:wrapping},
when $H$ is complete multipartite, for computing $\homo{H,G}$ and $\sub{H,G}$ we obtain a time bound of $2^{O(k \log k)} \cdot O(d^{k-1} n \log n)$.
Similarly, when $H$ is complete multipartite plus $\epsilon$ edges, we obtain a time bound of $2^{O(k \log k)} \cdot O(d^{k-\lfloor\frac{\epsilon}{4}\rfloor-2}n^{\lfloor\frac{\epsilon}{4}\rfloor+2}\log n)$.
This proves Theorem~\ref{thm:bipartite}.
\subsection{Independence number and dag treewidth}
\label{sub:alpha}
Recall that $\alpha(H)$ is the independence number of $H$. We show:
\begin{lemma}
\label{thm:pw_tw}
Any $k$-node graph $H$ satisfies $\Omega(\alpha(H)) \le \tau(H) \le \alpha(H)$.
\end{lemma}
\begin{proof}
For the upper bound, note that $\alpha(H'/\theta) \le \alpha(H)$ for any $H' \in D(H)$ and any $\theta \in \Theta(H')$. Moreover, in any acyclic orientation $P$ of $H'/\theta$ the sources form an independent set. Thus $\tau(P) \le \alpha(H)$. The bound follows by Definition~\ref{def:widths}.
For the lower bound, we exhibit a pattern $H'$ obtained by adding edges to $H$ such that $\tau(P) = \Omega(\alpha(H))$ for all its acyclic orientations $P$.
Let $I \subseteq V_H$ be an independent set of $H$ with $|I| = \Omega(\alpha(H))$ and $|I| \equiv 0 \pmod{5}$.
We add edges to $I$, so to obtain the $1$-subdivision of an expander.
Partition $I$ into $I_{J},I_{S}$ where $|I_{J}| = \frac{2}{5}|I|$ and $|I_{S}| = \frac{3}{5}|I|$.
Consider a $3$-regular expander $\mathcal{E}=(I_{J}, E_{\mathcal{E}})$ of linear treewidth $t(\mathcal{E}) = \Omega(|I_{J}|)$.
It is well known that such expanders exist (see e.g.\ Proposition~1 and Theorem~5 of~\cite{Grohe&2009}).
Note that $|E_{\mathcal{E}}|=\frac{3}{2}|I_{J}|=|I_{S}|$.
For each edge $\{u,v\} \in E_{\mathcal{E}}$, we choose a distinct node in $I_{S}$, denoted by $e_{uv}$, and we add to $H$ the edges $\{e_{uv},u\}$ and $\{e_{uv},v\}$.
Let $H'$ be the resulting pattern.
Observe that $H'[I]$ is the $1$-subdivision of $\mathcal{E}$, and that $t(\mathcal{E}) = \Omega(\alpha(H))$ since $|I_{J}|=\Omega(\alpha(H))$.
Let now $P=(V_P,A_P)$ be any acyclic orientation of $H'$ such that $I_{S} \subseteq S_P$, where $S_P$ is the set of sources of $P$.
Such an orientation exists since $I_{S}$ is an independent set in $H'$.
Let $T$ be any d.t.d.\ of $P$.
We show that $\tau(T) \ge \frac{1}{2}(\operatorname{t}(\mathcal{E})+1) = \Omega(\alpha(H))$, which implies $\tau(P) = \Omega(\alpha(H))$ and therefore the thesis $\tau(H) = \Omega(\alpha(H))$.
To this end, consider the tree $D$ obtained from $T$ by replacing each bag of sources $B$ with the bag of nodes $J(B) \cap I_{J}$.
We claim that $D$ is a tree decomposition of $\mathcal{E}$ of width at most $2\tau(T)-1$.
Let us start by checking the properties of Definition~\ref{def:treedecomp}.
\\
\textbf{Property (1)}. By point (2) of Definition~\ref{def:piecedecomp}, the d.t.d.\ $T$ satisfies $\cup_{B \in T} B = S_P$.
Therefore, by construction of $D$, we have $\cup_{X \in D}\, X = \cup_{B \in T} (J(B) \cap I_{J}) = I_{J}$.
\\
\textbf{Property (2)}. Let $\{v,w\}$ be any edge of $\mathcal{E}$ where we recall that $\{v,w\} \subseteq I_{J}$. By construction of $H'$, there exists $u \in I_{S}$ such that $J(u) = \{v,w\}$ in $P$. Since $T$ is a d.t.d., it satisfies point (2) of Definition~\ref{def:piecedecomp}, hence $u \in B$ for some $B \in T$. By construction of $D$ this implies there is some bag $X \in D$ such that $X=J(u)=\{v,w\}$.
\\
\textbf{Property (3)}.
Fix any three bags $X_1,X_2,X_3 \in D$ such that $X_1 \in D(X_2,X_3)$.
By construction, $X_1=J(B_1)\cap I_{J}, X_2=J(B_2)\cap I_{J}, X_3=J(B_3)\cap I_{J}$ for some $B_1,B_2,B_3 \in T$ such that $B_1 \in T(B_2,B_3)$.
Consider any $v \in X_2 \cap X_3$; we need to show that $v \in X_1$.
By construction of $D$, we have $v \in X_2 \cap X_3 = J(B_2) \cap J(B_3) \cap I_{J}$.
Thus, there exist $u \in B_2$ and $u' \in B_3$ such that $v \in J(u) \cap I_{J}$ and $v \in J(u') \cap I_{J}$.
However, since $B_1 \in T(B_2,B_3)$, point (3) of Definition~\ref{def:piecedecomp} implies $J(u) \cap J(u') \subseteq J(B_1)$.
Therefore, $v \in J(B_1)$ as well.
Moreover $v \in I_{J}$, and thus $v \in J(B_1) \cap I_{J}$.
But $J(B_1) \cap I_{J} = X_1$, so $v \in X_1$.
Hence, $D$ is a tree decomposition of $\mathcal{E}$.
Finally, note that any bag $X \in D$ by construction satisfies $|X| = |J(B) \cap I_{J}| \le 2|B|$ since any source $u \in B$ has at most $2$ arcs towards $I_{J}$.
Then by Definition~\ref{def:treewidth} and Definition~\ref{def:pw} we have $\operatorname{t}(\mathcal{E}) \le 2\tau(T)-1$, that is, $\tau(T) \ge \frac{1}{2}(\operatorname{t}(\mathcal{E})+1)$, as claimed.
\qed
\end{proof}
\section{Lower bounds}
\label{sec:lb}
We prove Theorem~\ref{thm:hsw_lb}, in a more technical form.
Note that, since $\tau(H) = \Theta(\alpha(H))$ by Lemma~\ref{thm:pw_tw}, the bound still holds if one replaces $\tau(H)$ by $\alpha(H)$.
The proof uses the following result:
\begin{theorem}[\cite{Curticapean&2014}, Theorem I.2]
\label{thm:curticapean}
The following problems are $\#W[1]$-hard and, assuming ETH, cannot be solved in time $f(k) \cdot n^{o(k/\log{k})}$ for any computable function $f$: counting (directed) paths or cycles of length $k$, and counting edge-colorful or uncolored $k$-matchings in bipartite graphs.
\end{theorem}
Let us now state the lower bound.
\begin{theorem}
Choose any function $a : \mathbb{N} \to \mathbb{N}$ such that $a(k) \in [1,k]$ for all $k \in \mathbb{N}$. There exists an infinite family $\mathcal{H}$ of patterns such that (1) for all $H \in \mathcal{H}$ we have $\tau(H) = \Theta(a(|V(H)|))$, and (2) if there exists an algorithm that for all $H \in \mathcal{H}$ computes $\ind{H,G}$ or $\sub{H,G}$ in time $f(d,k) \cdot n^{o(\tau(H)/\log{\tau(H)})}$, where $d$ is the degeneracy of $G$, then ETH fails.
\end{theorem}
\begin{proof}
We reduce counting cycles in an arbitrary graph to counting a gadget pattern on $k$ nodes and dag treewidth $O(a(k))$ in a $d$-degenerate graph.
First, fix a function $d : \mathbb{N} \to \mathbb{N}$ such that $d(k) \in \Omega(\frac{k}{a(k)})$.
Now consider a simple cycle on $k_0 \ge 3$ nodes and any arbitrarily large $k \ge 3$.
Our gadget pattern on $k$ nodes is the following.
For each edge $e=uv$ of the cycle create a clique $C_e$ on $d(k)-1$ nodes; delete $e$ and connect both $u$ and $v$ to every node of $C_e$.
The resulting pattern $H$ has $k = k_0 \, d(k)$ nodes.
Let us prove that $\tau(H) \le k_0$; since $k_0 = \frac{k}{d(k)} \in O(a(k))$, this implies $\tau(H) = O(a(k))$ as desired.
Consider again the generic edge $e=uv$.
In any acyclic orientation $P$ of $H$, the set $C_{e} \cup \{u\}$ induces a clique, and thus can contain at most one source.
Applying the argument to all $e$ shows that $|S(P)| \le k_0$, hence $\tau(P) \le k_0$.
This holds also if we add edges and/or identify nodes of $P$, hence $\tau(H) \le k_0$.
Now consider a simple graph $G_0$ on $n_0$ nodes and $m_0$ edges.
We replace each edge of $G_0$ as described above, which takes time $O(\poly(n_0))$.
The resulting graph $G$ has $n=m_0(d-1) + n_0 = O(dn_0^2)$ nodes and degeneracy $d$.
Every $k_0$-cycle of $G_0$ is uniquely associated with a copy of $H$ in $G$ (note that every non-induced copy of $H$ in $G$ is induced too).
Suppose we have an algorithm that computes $\ind{H,G}$ or $\sub{H,G}$ in time $f(d,k) \cdot n^{o(\tau(H)/\log{\tau(H)})}$.
Since $\tau(H) \le k_0$, $n = O(dn_0^2)$, $k=f_1(d,k_0)$, and $d=f_2(k_0)$, for some functions $f_1,f_2$, the running time is $f(k_0) \cdot (n_0)^{o(k_0/\log{k_0})}$.
Invoking Theorem~\ref{thm:curticapean} concludes the proof.
\qed
\end{proof}
\section{Conclusions}
We have shown how, by introducing a novel tree-like decomposition for directed acyclic graphs, one can improve on the decades-old state-of-the-art subgraph counting algorithms when the host graph is sufficiently sparse.
Our decomposition may be of independent interest, as it seems to capture the relevant structure of the problem.
We leave open the question of finding a characterization of the complexity of subgraph counting in sparse graphs that is tight for every given pattern, rather than pattern classes as a whole.
\end{document} |
\begin{document}
\date{}
\title{\Large\bf $q$ - Deformed Spin Networks, Knot Polynomials and Anyonic Topological Quantum Computation}
\author{Louis
H. Kauffman\\ Department of Mathematics, Statistics \\ and Computer Science (m/c
249) \\ 851 South Morgan Street \\ University of Illinois at Chicago\\
Chicago, Illinois 60607-7045\\ $<[email protected]$>$\\ and \\ Samuel J. Lomonaco
Jr. \\ Department of Computer Science and Electrical Engineering \\ University of
Maryland Baltimore County \\ 1000 Hilltop Circle, Baltimore, MD 21250\\
$<[email protected]$>$}
\maketitle
\thispagestyle{empty}
\subsection*{\centering Abstract}
{\em We review the $q$-deformed spin network approach to Topological Quantum Field Theory and apply these methods to
produce unitary representations of the braid groups that are dense in the unitary groups. Our methods are rooted in the bracket state sum model
for the Jones polynomial. We give our results for a large class of representations based on values for the bracket polynomial that are roots of unity.
We make a separate and self-contained study of the quantum universal Fibonacci model in this framework. We apply our results to give quantum algorithms for
the computation of the colored Jones polynomials for knots and links, and the Witten-Reshetikhin-Turaev invariant of three manifolds.}
\section{Introduction}
This paper describes the background for topological quantum computing in terms of Temperley--Lieb recoupling theory.
This is a recoupling theory that generalizes standard angular momentum recoupling theory, generalizes the Penrose theory of spin networks and is inherently
topological. Temperley--Lieb recoupling Theory is based on the bracket polynomial model \cite{KA87,KP} for the Jones polynomial. It is built in terms of
diagrammatic combinatorial topology. The same structure can be explained in terms of the $SU(2)_{q}$ quantum group, and has relationships with functional
integration and Witten's approach to topological quantum field theory. Nevertheless, the approach given here will be unrelentingly elementary. Elementary
does not necessarily mean simple. In this case an architecture is built from simple beginnings and this architecture and its recoupling language can be
applied to many things including, e.g., colored Jones polynomials, Witten--Reshetikhin--Turaev invariants of three manifolds, topological quantum field theory
and quantum computing.
\bigbreak
In quantum computing, the application is most interesting because the simplest non-trivial example of this Temperley--Lieb recoupling Theory gives the
so-called Fibonacci model. The recoupling theory yields representations of the Artin Braid group into unitary groups
$U(n)$ where $n$ is a Fibonacci number. These representations are {\em dense} in the unitary group, and can be used to model quantum computation
universally in terms of representations of the braid group. Hence the term: topological quantum computation.
\bigbreak
In this paper, we outline the basics of the TL-Recoupling Theory, and show explicitly how the Fibonacci model arises from it.
The diagrammatic computations in Section 9 are completely self-contained and can be used by a reader who has just learned the bracket polynomial, and
wants to see how these dense unitary braid group representations arise from it. The outline of the parts of this paper is given below.
\bigbreak
\begin{enumerate}
\item Knots and Braids
\item Quantum Mechanics and Quantum Computation
\item $SU(2)$ Representations of the Artin Braid Group
\item The Bracket Polynomial and the Jones Polynomial
\item Quantum Topology, Cobordism Categories, Temperley-Lieb Algebra and Topological Quantum Field Theory
\item Braiding and Topological Quantum Field Theory
\item Spin Networks and Temperley-Lieb Recoupling Theory
\item Fibonacci Particles
\item The Fibonacci Recoupling Model
\item Quantum Computation of Colored Jones Polynomials and the Witten-Reshetikhin-Turaev Invariant
\end{enumerate}
We should point out that most of the results in this paper are either new, or are
new points of view on known results. The material on $SU(2)$ representations of the Artin braid group is new, and the relationship of this material to
the recoupling theory is new. See Theorem 1 in Section 3. The treatment of elementary cobordism categories is well-known, but new in the context of quantum
information theory. The reformulation of Temperley-Lieb recoupling theory for the purpose of producing unitary braid group representations is new for quantum
information theory, and directly related to much of the recent work of Freedman and his collaborators. In Theorem 2 in Section 7 we give a general method
to obtain unitary representations at roots of unity using the recoupling theory. The treatment of the Fibonacci model in terms of two-strand recoupling
theory is new and at the same time, the most elementary non-trivial example of the recoupling theory. The models in section 10 for quantum computation of
colored Jones polynomials and for quantum computation of the Witten-Reshetikhin-Turaev invariant are new. They take a
particularly simple form in this context.
\bigbreak
Here is a very condensed presentation of how unitary representations of the braid group are constructed via topological quantum field theoretic methods.
One has a mathematical particle with label $P$
that can interact with itself to produce either itself labeled $P$ or itself
with the null label $*.$ When $*$ interacts with $P$ the result is always $
P. $ When $*$ interacts with $*$ the result is always $*.$ One considers
process spaces where a row of particles labeled $P$ can successively
interact subject to the restriction that the end result is $P.$ For example
the space $V[(ab)c]$ denotes the space of interactions of three particles
labeled $P.$ The particles are placed in the positions $a,b,c.$ Thus we
begin with $(PP)P.$ In a typical sequence of interactions, the first two $P$
's interact to produce a $*,$ and the $*$ interacts with $P$ to produce $P.$
\[
(PP)P \longrightarrow (*)P \longrightarrow P.
\]
\noindent In another possibility, the first two $P$'s interact to produce a $
P,$ and the $P$ interacts with $P$ to produce $P.$
\[
(PP)P \longrightarrow (P)P \longrightarrow P.
\]
It follows from this analysis that the space of linear combinations of
processes $V[(ab)c]$ is two dimensional. The two processes we have just
described can be taken to be the qubit basis for this space. One obtains
a representation of the three strand Artin braid group on $V[(ab)c]$ by
assigning appropriate phase changes to each of the generating processes. One
can think of these phases as corresponding to the interchange of the
particles labeled $a$ and $b$ in the association $(ab)c.$ The other operator
for this representation corresponds to the interchange of $b$ and $c.$ This
interchange is accomplished by a {\it unitary change of basis mapping}
\[
F:V[(ab)c] \longrightarrow V[a(bc)].
\]
\noindent If
\[
A:V[(ab)c] \longrightarrow V[(ba)c]
\]
is the first braiding operator (corresponding to an interchange of the first
two particles in the association) then the second operator
\[
B:V[(ab)c] \longrightarrow V[(ac)b]
\]
is accomplished via the formula $B = F^{-1}RF$ where the $R$ in this formula
acts in the second vector space $V[a(bc)]$ to apply the phases for the
interchange of $b$ and $c.$ These issues are illustrated in Figure 1, where the parenthesization of the particles
is indicated by circles and also by trees. The trees can be taken to indicate patterns of particle interaction, where
two particles interact at the branch of a binary tree to produce the particle product at the root. See also Figure 28 for an illustration
of the braiding $B = F^{-1}RF$.
\bigbreak
In this scheme, vector spaces corresponding to associated strings of
particle interactions are interrelated by {\it recoupling transformations}
that generalize the mapping $F$ indicated above. A full representation of
the Artin braid group on each space is defined in terms of the local
interchange phase gates and the recoupling transformations. These gates and
transformations have to satisfy a number of identities in order to produce a
well-defined representation of the braid group. These identities were
discovered originally in relation to topological quantum field theory. In
our approach the structure of phase gates and recoupling
transformations arise naturally from the structure of the bracket model for
the Jones polynomial. Thus we obtain a knot-theoretic basis for topological
quantum computing. \bigbreak
$$ \picill3inby4.5in(F1) $$
\begin{center}
{\bf Figure 1 - Braiding Anyons. }
\end{center}
Aspects of the quantum Hall effect are related to topological quantum field theory
\cite{Wilczek,Fradkin,B1,B2}, where, in two dimensional space, the braiding of quasi-particles or collective excitations leads to non-trivial
representations of the Artin braid group. Such particles are called {\it Anyons}. It is hoped that the mathematics we explain here will
form the bridge between theoretical models of anyons and their applications to quantum computing.
\bigbreak
\noindent {\bf Acknowledgement.} The first author thanks the
National Science Foundation for support of this research under NSF Grant
DMS-0245588. Much of this effort was sponsored by the Defense
Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory, Air
Force Materiel Command, USAF, under agreement F30602-01-2-05022.
The U.S. Government is authorized to reproduce and distribute reprints
for Government purposes notwithstanding any copyright annotations thereon. The
views and conclusions contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of the Defense Advanced Research Projects Agency,
the Air Force Research Laboratory, or the U.S. Government. (Copyright 2006.)
It gives the authors pleasure to thank the Newton Institute in Cambridge England and ISI in Torino, Italy for their hospitality during
the inception of this research and to thank Hilary Carteret for useful conversations.
\bigbreak
\section{Knots and Braids}
The purpose of this section is to give a quick introduction to the diagrammatic theory of knots,
links and braids. A {\it knot} is an embedding of a circle in three-dimensional space, taken up to ambient
isotopy. The problem of deciding whether two knots are isotopic is an example of a {\it placement problem}, a problem of studying the topological forms that
can be made by placing one space inside another. In the case of knot theory we consider the placements of a circle inside three dimensional space.
There are many applications of the theory of knots. Topology is a background for the physical structure of real
knots made from rope or cable. As a result, the field of practical knot tying is a field of applied topology that existed well before the
mathematical discipline of topology arose. Then again long molecules such as rubber molecules and DNA molecules can be knotted and linked. There have been
a number of intense applications of knot theory to the study of $DNA$ \cite{Sumners1} and to polymer physics \cite{KA}. Knot theory is closely related to
theoretical physics as well with applications in quantum gravity \cite{Smolin,CRLS,KaufLiko} and many applications of ideas in physics to the topological
structure of knots themselves \cite{KP}.
\bigbreak
{\it Quantum topology} is the study and invention of topological invariants via the use
of analogies and techniques from mathematical physics. Many invariants such as the Jones polynomial are
constructed via partition functions and generalized quantum amplitudes. As a result, one expects to see relationships between knot theory
and physics. In this paper we will study how knot theory can be used to produce unitary representations of the braid group. Such representations
can play a fundamental role in quantum computing.
\bigbreak
$$ \picill5inby2in(F2) $$
\begin{center}
{\bf Figure 2 - A knot diagram. }
\end{center}
$$ \picill5inby2.5in(F3) $$
\begin{center}
{\bf Figure 3 - The Reidemeister Moves. }
\end{center}
\noindent That is, two knots are regarded as equivalent if one embedding can be obtained from the other
through a continuous family of embeddings of circles in three-space. A {\it link} is an embedding of a disjoint
collection of circles, taken up to ambient isotopy. Figure 2 illustrates a diagram for a knot. The diagram is regarded
both as a schematic picture of the knot, and as a plane graph with extra structure at the nodes (indicating how the curve of
the knot passes over or under itself by standard pictorial conventions).
$$ \picill5inby4in(F4) $$
\begin{center}
{\bf Figure 4 - Braid Generators. }
\end{center}
Ambient isotopy is mathematically the same as the equivalence relation generated on diagrams by the {\it Reidemeister moves}. These moves are
illustrated in Figure 3. Each move is performed on a local part of the diagram that is topologically identical to the part of the diagram illustrated
in this figure (these figures are representative examples of the types of Reidemeister moves) without changing the rest of the diagram. The Reidemeister
moves are useful in doing combinatorial topology with knots and links, notably in working out the behaviour of knot invariants. A {\it knot invariant}
is a function defined from knots and links to some other mathematical object (such as groups or polynomials or numbers) such that equivalent diagrams are
mapped to equivalent objects (isomorphic groups, identical polynomials, identical numbers). The Reidemeister moves are of great use for analyzing the
structure of knot invariants and they are closely related to the {\it Artin Braid Group}, which we discuss below.
$$ \picill5inby3in(F5) $$
\begin{center}
{\bf Figure 5 - Closing Braids to form knots and links. }
\end{center}
$$ \picill5inby2.5in(F6) $$
\begin{center}
{\bf Figure 6 - Borromean Rings as a Braid Closure. }
\end{center}
Another significant structure related to knots and links is the Artin Braid Group. A {\it braid} is an embedding of a collection of strands whose
endpoints lie in two rows of points, one row set above the other with respect to a choice of vertical. The strands are not
individually knotted and they are disjoint from one another. See Figures 4, 5 and 6 for illustrations of braids and moves on braids. Braids can be
multiplied by attaching the bottom row of one braid to the top row of the other braid. Taken up to ambient isotopy, fixing the endpoints, the braids form
a group under this notion of multiplication. In Figure 4 we illustrate the form of the basic generators of the braid group, and the form of the
relations among these generators. Figure 5 illustrates how to close a braid by attaching the top strands to the bottom strands by a collection of
parallel arcs. A key theorem of Alexander states that every knot or link can be represented as a closed braid. Thus the theory of braids is critical to the
theory of knots and links. Figure 6 illustrates the famous Borromean Rings (a link of three unknotted loops such that any two of the loops are unlinked)
as the closure of a braid.
\bigbreak
Let $B_{n}$ denote the Artin braid group on $n$ strands.
We recall here that $B_{n}$ is generated by elementary braids $\{ s_{1}, \cdots ,s_{n-1} \}$
with relations
\begin{enumerate}
\item $s_{i} s_{j} = s_{j} s_{i}$ for $|i-j| > 1$,
\item $s_{i} s_{i+1} s_{i} = s_{i+1} s_{i} s_{i+1}$ for $i= 1, \cdots n-2.$
\end{enumerate}
\noindent See Figure 4 for an illustration of the elementary braids and their relations. Note that the braid group has a diagrammatic
topological interpretation, where a braid is an intertwining of strands that lead from one set of $n$ points to another set of $n$ points.
The braid generators $s_i$ are represented by diagrams where the $i$-th and $(i + 1)$-th strands wind around one another by a single
half-twist (the sense of this turn is shown in Figure 4) and all other strands drop straight to the bottom. Braids are diagrammed
vertically as in Figure 4, and the products are taken in order from top to bottom. The product of two braid diagrams is accomplished by
adjoining the top strands of one braid to the bottom strands of the other braid.
\bigbreak
In Figure 4 we have restricted the illustration to the
four-stranded braid group $B_4.$ In that figure the three braid generators of $B_4$ are shown, and then the inverse of the
first generator is drawn. Following this, one sees the identities $s_{1} s_{1}^{-1} = 1$
(where the identity element in $B_{4}$ consists in four vertical strands),
$s_{1} s_{2} s_{1} = s_{2} s_{1}s_{2},$ and finally
$s_1 s_3 = s_3 s_1.$
\bigbreak
Braids are a key structure in mathematics. It is not just that they are a collection of groups with a vivid topological interpretation.
From the algebraic point of view the braid groups $B_{n}$ are important extensions of the symmetric groups $S_{n}.$ Recall that the
symmetric group $S_{n}$ of all permutations of $n$ distinct objects has presentation as shown below.
\begin{enumerate}
\item $s_{i}^{2} = 1$ for $i= 1, \cdots n-1,$
\item $s_{i} s_{j} = s_{j} s_{i}$ for $|i-j| > 1$,
\item $s_{i} s_{i+1} s_{i} = s_{i+1} s_{i} s_{i+1}$ for $i= 1, \cdots n-2.$
\end{enumerate}
Thus $S_{n}$ is obtained from $B_{n}$ by setting the square of each braiding generator equal to one. We have an exact sequence of groups
$$1 \longrightarrow P_{n} \longrightarrow B_{n} \longrightarrow S_{n} \longrightarrow 1$$ exhibiting the Artin Braid group as an extension of the symmetric group by the pure braid group $P_{n}$, the kernel of the natural surjection from $B_{n}$ to $S_{n}$.
\bigbreak
In the next sections we shall show how representations of the Artin Braid group are rich enough to provide a dense set of transformations in the
unitary groups. Thus the braid groups are, {\it in principle} fundamental to quantum computation and quantum information theory.
\bigbreak
\section{Quantum Mechanics and Quantum Computation}
We shall quickly
indicate the basic principles of quantum mechanics. The quantum information context
encapsulates a concise model of quantum theory:
\bigbreak
{\em The initial state of a quantum process is a vector $|v \rangle$ in a complex vector space $H.$
Measurement returns basis elements $\beta$ of $H$ with probability
$$|\langle \beta \,|v \rangle |^{2}/\langle v \,|v \rangle$$
\noindent where $\langle v \,|w \rangle = v^{\dagger}w$ with $v^{\dagger}$ the conjugate transpose of $v.$
A physical process occurs in steps $|v\rangle \longrightarrow U\,|v \rangle = |Uv \rangle $ where $U$ is a unitary linear transformation.
\bigbreak
Note that since $\langle Uv \,|Uw \rangle = \langle v \,|U^{\dagger}U |w \rangle = \langle v \,|w \rangle$ when $U$ is unitary, it follows that probability
is preserved in the course of a quantum process. }
\bigbreak
One of the details for any specific quantum problem is the nature of the unitary
evolution. This is specified by knowing appropriate information about the classical physics that
supports the phenomena. This information is used to choose an appropriate Hamiltonian through which the
unitary operator is constructed via a correspondence principle that replaces classical variables with appropriate quantum
operators. (In the path integral approach one needs a Lagrangian to construct the action on which the path
integral is based.) One needs to know certain aspects of classical physics to
solve any specific quantum problem.
\bigbreak
A key concept in the quantum information viewpoint is the notion of the superposition of states.
If a quantum system has two distinct states $|v \rangle$ and $|w \rangle,$ then it has infinitely many states of the form
$a|v \rangle + b|w \rangle$ where $a$ and $b$ are complex numbers taken up to a common multiple. States are ``really''
in the projective space associated with $H.$ There is only one superposition of a single state $|v \rangle$ with
itself.
\bigbreak
Dirac \cite{D} introduced the ``bra-(c)-ket'' notation $\langle A\,|B \rangle = A^{\dagger}B$ for the inner product of complex vectors $A,B \in H$.
He also separated the parts of the bracket into the {\em bra} $\langle A\,|$ and the {\em ket} $|B \rangle.$ Thus
$$\langle A\,|B \rangle = \langle A\,|\,\,|B \rangle$$
\noindent In this interpretation,
the ket $|B \rangle$ is identified with the vector $B \in H$, while the bra $\langle A\,|$ is regarded as the element dual to $A$ in the
dual space $H^*$. The dual element to $A$ corresponds to the conjugate transpose $A^{\dagger}$ of the vector $A$, and the inner product is
expressed in conventional language by the matrix product $A^{\dagger}B$ (which is a scalar since $B$ is a column vector). Having separated the bra and the ket, Dirac can write the
``ket-bra" $|A \rangle \langle B\,| = AB^{\dagger}.$ In conventional notation, the ket-bra is a matrix, not a scalar, and we have the following formula for
the square of $P = |A \rangle \langle B\,|:$
$$P^{2} = |A \rangle \langle B\,| |A \rangle \langle B\,| = A(B^{\dagger}A)B^{\dagger} = (B^{\dagger}A)AB^{\dagger} = \langle B\,|A \rangle P.$$
\noindent The standard example is a ket-bra $P = |A\,\rangle \langle A|$ where $\langle A\,|A \rangle =1$ so that $P^2 = P.$ Then $P$ is a projection
matrix, projecting to the subspace of $H$ that is spanned by the vector $|A \rangle$. In fact, for any vector $|B \rangle$ we have
$$P|B \rangle = |A \rangle \langle A\,|\,|B \rangle = |A \rangle \langle A\,|B \rangle = \langle A\,|B \rangle |A \rangle .$$
\noindent If $\{|C_{1} \rangle, |C_{2} \rangle , \cdots |C_{n} \rangle \}$ is an orthonormal basis for $H$, and $$P_{i} = |C_{i} \,\rangle \langle C_{i}|,$$
\noindent then for any vector $|A \rangle $ we have
$$|A \rangle = \langle C_{1}\,|A \rangle |C_{1} \rangle + \cdots + \langle C_{n}\,|A \rangle |C_{n} \rangle .$$
\noindent Hence
$$\langle B\,|A \rangle = \langle B\,|C_{1} \rangle \langle C_{1}\,|A \rangle + \cdots + \langle B\,|C_{n} \rangle \langle C_{n}\,|A \rangle $$
One wants the probability of starting in state $|A \rangle $ and ending in state $|B \rangle .$ The
probability for this event is equal to $|\langle B\,|A \rangle |^{2}$. This can be refined if we have more knowledge.
If the intermediate states $|C_{i} \rangle $ are a complete set of orthonormal alternatives then we
can assume that
$\langle C_{i}\,|C_{i} \rangle = 1$ for each $i$ and that $\Sigma_{i} |C_{i} \rangle \langle C_{i}| = 1.$ This identity now corresponds to the fact that
$1$ is the sum of the probabilities of an arbitrary state being projected into one of these intermediate states.
\bigbreak
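The reader who wants to see these ket-bra identities numerically can run the following small sketch; the vectors are arbitrary examples and the code assumes the numpy library.
\begin{verbatim}
import numpy as np

# Example vectors |A>, |B> in a 3-dimensional space (chosen arbitrarily).
A = np.array([1.0, 2.0, 0.0]); A = A / np.linalg.norm(A)
B = np.array([0.5, -1.0, 2.0]); B = B / np.linalg.norm(B)

P = np.outer(A, B.conj())                      # the ket-bra |A><B|
print(np.allclose(P @ P, np.vdot(B, A) * P))   # True: P^2 = <B|A> P

C = np.eye(3)                                  # an orthonormal basis {|C_i>} as columns
S = sum(np.outer(C[:, i], C[:, i].conj()) for i in range(3))
print(np.allclose(S, np.eye(3)))               # True: sum_i |C_i><C_i| = 1
\end{verbatim}
\bigbreak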
If there are intermediate states between the intermediate states this formulation can be continued
until one is summing over all possible paths from $A$ to $B.$ This becomes the path integral expression
for the amplitude $\langle B|A \rangle .$
\bigbreak
\subsection{What is a Quantum Computer?}
A {\it quantum computer} is, abstractly, a composition $U$ of unitary transformations, together with an initial state and a choice of measurement
basis. One runs the computer by repeatedly initializing it, and then measuring the result of applying the unitary transformation $U$ to the initial state.
The results of these measurements are then analyzed for the desired information that the computer was set to determine. The key to using the computer
is the design of the initial state and the design of the composition of unitary transformations. The reader should consult \cite{N} for more specific
examples of quantum algorithms.
\bigbreak
Let $H$ be a given finite dimensional vector space over the complex numbers $C.$ Let $\{ W_{0}, W_{1},..., W_{n} \}$ be an
orthonormal basis for $H$ so that with $|i \rangle := |W_{i} \rangle $ denoting $W_{i}$ and $\langle i|$ denoting the conjugate transpose of $|i \rangle $,
we have
$$\langle i|j \rangle = \delta_{ij}$$
\noindent where $\delta_{ij}$ denotes the Kronecker delta (equal to one when its indices are equal to one another, and equal
to zero otherwise). Given a vector $v$ in $H$ let $|v|^{2} := \langle v|v \rangle .$ Note that $\langle i|v \rangle $ is the $i$-th coordinate of $v.$
\noindent A {\em measurement of $v$} returns one of the basis kets $|i \rangle $
with probability $|\langle i|v \rangle |^{2}.$ This model of measurement is a simple instance of the situation with a quantum
mechanical system that is in a mixed state until it is measured. The result of measurement is to put the system into one of
the basis states.
When the dimension of the space $H$ is two ($n=1$), a vector in the space is called a {\em qubit}. A qubit represents one
quantum of binary information. On measurement, one obtains either the ket $|0 \rangle $ or the ket $|1 \rangle $. This constitutes the
binary distinction that is inherent in a qubit. Note however that the information obtained is probabilistic. If the qubit is
$$| \psi \rangle = \alpha |0 \rangle + \beta \ |1 \rangle ,$$ \noindent then the ket $|0 \rangle $ is observed with probability $|\alpha|^{2}$, and the ket
$|1 \rangle $ is observed with probability $|\beta|^{2}.$ In speaking of an idealized quantum computer, we do not specify the nature
of measurement process beyond these probability postulates.
In the case of general dimension $n$ of the space $H$, we will call the vectors in $H$
{\em qudits}. It is quite common to use spaces $H$ that are tensor products of two-dimensional spaces (so that all computations
are expressed in terms of qubits) but this is not necessary in principle. One can start with a given space, and later work out
factorizations into qubit transformations.
A {\em quantum computation} consists in the application of a unitary
transformation $U$ to an initial qudit $\psi = a_{0}|0 \rangle + ... + a_{n}|n \rangle $ with $|\psi|^{2}=1$, plus a measurement of
$U\psi.$ A measurement of $U\psi$ returns the ket $|i \rangle $ with probability $|\langle i|U\psi \rangle |^2$. In particular, if we start the computer
in the state $|i \rangle $, then the probability that it will return the state $|j \rangle $ is $|\langle j|U|i \rangle |^{2}.$
It is the necessity for writing a given computation in terms of unitary transformations, and the probabilistic
nature of the result that characterizes quantum computation. Such computation could be carried out by an idealized quantum
mechanical system. It is hoped that such systems can be physically realized.
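As a minimal numerical illustration of this measurement model (assuming the numpy library; the single-qubit Hadamard gate below is only an example), one can compute the outcome probabilities directly:
\begin{verbatim}
import numpy as np

def measurement_probabilities(U, psi):
    """Probabilities |<i|U psi>|^2 / <psi|psi> of observing each basis ket |i>."""
    out = U @ psi
    return np.abs(out) ** 2 / np.vdot(psi, psi).real

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate on one qubit
psi0 = np.array([1.0, 0.0])                             # the ket |0>
print(measurement_probabilities(H, psi0))               # [0.5, 0.5]
\end{verbatim}
\bigbreak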
\subsection{Universal Gates}
A {\em two-qubit gate} $G$ is a unitary linear mapping $G:V \otimes V \longrightarrow V \otimes V$ where $V$ is a two complex dimensional
vector space. We say that the gate $G$ is {\em universal for quantum computation} (or just {\em universal}) if $G$ together with
local unitary transformations (unitary transformations from $V$ to $V$) generates all unitary transformations of the complex vector
space of dimension $2^{n}$ to itself. It is well-known \cite{N} that $CNOT$ is a universal gate. (On the standard basis,
$CNOT$ is the identity when the first qubit is $0$, and it flips the second qubit, leaving the first alone, when the first qubit is $1.$)
\bigbreak
\noindent A gate $G$, as above, is said to be {\em entangling} if there is a vector
$$| \alpha \beta \rangle = | \alpha \rangle \otimes | \beta \rangle \in V \otimes V$$ such that
$G | \alpha \beta \rangle$ is not decomposable as a tensor product of two qubits. Under these circumstances, one says that
$G | \alpha \beta \rangle$ is {\em entangled}.
\bigbreak
\noindent In \cite{BB}, the Brylinskis
give a general criterion for $G$ to be universal. They prove that {\em a two-qubit gate $G$ is universal if and only if it is
entangling.}
\bigbreak
\noindent {\bf Remark.} A two-qubit pure state $$|\phi \rangle = a|00 \rangle + b|01 \rangle + c|10 \rangle + d|11 \rangle$$
is entangled exactly when $(ad-bc) \ne 0.$ It is easy to use this fact to check when a specific matrix is, or is not, entangling.
\bigbreak
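\noindent As a quick numerical check of this criterion (assuming the numpy library), one can verify that $CNOT$ applied to the product state $(|00 \rangle + |10 \rangle)/\sqrt{2}$ produces a state with $ad - bc \ne 0$:
\begin{verbatim}
import numpy as np

def is_entangled(a, b, c, d, tol=1e-12):
    """Determinant test for a|00> + b|01> + c|10> + d|11>."""
    return abs(a * d - b * c) > tol

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
plus_zero = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)   # (|00> + |10>)/sqrt(2)
a, b, c, d = CNOT @ plus_zero                              # (|00> + |11>)/sqrt(2)
print(is_entangled(a, b, c, d))                            # True: CNOT is entangling
\end{verbatim}
\bigbreak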
\noindent {\bf Remark.} There are many gates other than $CNOT$ that can be used as universal gates in the presence of local unitary
transformations. Some of these are themselves topological (unitary solutions to the Yang-Baxter equation, see \cite{BG,Yong}) and themselves generate
representations of the Artin Braid Group. Replacing $CNOT$ by a solution to the Yang-Baxter equation does not place the local unitary transformations as
part of the corresponding representation of the braid group. Thus such substitutions connote only a partial solution to creating topological
quantum computation. In this paper we are concerned with braid group representations that include all aspects of the unitary
group. Accordingly, in the next section we shall first examine how the braid group on three strands can be represented as local unitary
transformations.
\bigbreak
\section{$SU(2)$ Representations of the Artin Braid Group}
The purpose of this section is to determine all the representations of the three strand Artin braid group $B_{3}$ to the special unitary group $SU(2)$ and
concomitantly to the unitary group $U(2).$ One regards the groups $SU(2)$ and $U(2)$ as acting on a single qubit, and so $U(2)$ is usually regarded as the
group of local unitary transformations in a quantum information setting. If one is looking for a coherent way to represent all unitary transformations by
way of braids, then $U(2)$ is the place to start. Here we will show that there are many representations of the three-strand braid group
that generate a dense subset of $U(2).$ Thus it is a fact that local unitary transformations can be ``generated by braids'' in many ways.
\bigbreak
We begin with the structure of $SU(2).$ A matrix in $SU(2)$ has the form
$$ M =
\left( \begin{array}{cc}
z & w \\
-\bar{w} & \bar{z} \\
\end{array} \right),$$ where $z$ and $w$ are complex numbers, and $\bar{z}$ denotes the complex conjugate of $z.$
To be in $SU(2)$ it is required that $Det(M)=1$ and that $M^{\dagger} = M^{-1}$ where $Det$ denotes determinant, and $M^{\dagger}$ is the conjugate transpose of $M.$
Thus if
$z = a + bi$ and $w = c + di$ where $a,b,c,d$ are real numbers, and $i^2 = -1,$ then
$$ M =
\left( \begin{array}{cc}
a + bi & c + di \\
-c + di & a - bi \\
\end{array} \right)$$ with $a^2 + b^2 + c^2 + d^2 = 1.$ It is convenient to write
$$M =
a\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array} \right) +
b\left( \begin{array}{cc}
i & 0\\
0 & -i \\
\end{array} \right) +
c\left( \begin{array}{cc}
0 & 1 \\
-1 & 0\\
\end{array} \right) +
d\left( \begin{array}{cc}
0 & i \\
i & 0 \\
\end{array} \right),$$ and to abbreviate this decomposition as
$$M = a + bi +cj + dk$$
where
$$ 1 \equiv
\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array} \right),
i \equiv
\left( \begin{array}{cc}
i & 0\\
0 & -i \\
\end{array} \right),
j \equiv
\left( \begin{array}{cc}
0 & 1 \\
-1 & 0\\
\end{array} \right),
k \equiv
\left( \begin{array}{cc}
0 & i \\
i & 0 \\
\end{array} \right)$$ so that
$$i^2 = j^2 = k^2 = ijk = -1$$ and
$$ij = k, jk=i, ki = j$$
$$ji = -k, kj = -i, ik = -j.$$
The algebra of $1,i,j,k$ is called the {\it quaternions} after William Rowan Hamilton who discovered this algebra prior to the discovery of
matrix algebra. Thus the unit quaternions are identified with $SU(2)$ in this way. We shall use this identification, and some facts about
the quaternions to find the $SU(2)$ representations of braiding. First we recall some facts about the quaternions.
\begin{enumerate}
\item Note that if $q = a + bi +cj + dk$ (as above), then $q^{\dagger} = a - bi - cj - dk$ so that $qq^{\dagger} = a^2 + b^2 + c^2 + d^2 = 1.$
\item A general quaternion has the form $ q = a + bi + cj + dk$ where the value of $qq^{\dagger} = a^2 + b^2 + c^2 + d^2,$ is not fixed to unity.
The {\it length} of $q$ is by definition $\sqrt{qq^{\dagger}}.$
\item A quaternion of the form $ri + sj + tk$ for real numbers $r,s,t$ is said to be a {\it pure} quaternion. We identify the set of pure
quaternions with the vector space of triples $(r,s,t)$ of real numbers $R^{3}.$
\item Thus a general quaternion has the form $q = a + bu$ where $u$ is a pure quaternion of unit length and $a$ and $b$ are arbitrary real numbers.
A unit quaternion (element of $SU(2)$) has the additional property that $a^2 + b^2 = 1.$
\item If $u$ is a pure unit length quaternion, then $u^2 = -1.$ Note that the set of pure unit quaternions forms the two-dimensional sphere
$S^{2} = \{ (r,s,t) | r^2 + s^2 + t^2 = 1 \}$ in $R^{3}.$
\item If $u, v$ are pure quaternions, then $$uv = -u \cdot v + u \times v$$ where $u \cdot v$ is the dot product of the vectors $u$ and $v,$ and
$u \times v$ is the vector cross product of $u$ and $v.$ In fact, one can take the definition of quaternion multiplication as
$$(a + bu)(c + dv) = ac + bc(u) + ad(v) + bd(-u \cdot v + u \times v),$$ and all the above properties are consequences of this
definition. Note that quaternion multiplication is associative.
\item Let $g = a + bu$ be a unit length quaternion so that $u^2 = -1$ and $a = cos(\theta/2), b=sin(\theta/2)$ for a chosen angle $\theta.$
Define $\phi_{g}:R^{3} \longrightarrow R^{3}$ by the equation $\phi_{g}(P) = gPg^{\dagger},$ for $P$ any point in $R^{3},$ regarded as a pure quaternion.
Then $\phi_{g}$ is an orientation preserving rotation of $R^{3}$ (hence an element of the rotation group $SO(3)$). Specifically, $\phi_{g}$ is a rotation
about the axis $u$ by the angle $\theta.$ The mapping $$\phi:SU(2) \longrightarrow SO(3)$$ is a two-to-one surjective map from the special unitary group to
the rotation group. In quaternionic form, this result was proved by Hamilton and by Rodrigues in the middle of the nineteenth century.
The specific formula for $\phi_{g}(P)$ is given below:
$$\phi_{g}(P) = gPg^{-1} = (a^2 - b^2)P + 2ab (P \times u) + 2(P \cdot u)b^{2}u.$$
\end{enumerate}
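\noindent The quaternion facts listed above are easy to verify directly. As a concrete illustration, the following short Python sketch (written for this survey, with ad hoc function names such as \verb|qmult|) encodes quaternions as $4$-tuples, checks the multiplication rules, and checks that conjugation by a unit quaternion $g = cos(\theta/2) + sin(\theta/2)u$ rotates a pure quaternion perpendicular to $u$ by the angle $\theta.$
\begin{verbatim}
import math

def qmult(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmult(i, j) == k and qmult(j, k) == i and qmult(k, i) == j
assert qmult(i, i) == (-1, 0, 0, 0)

theta = 1.3
g = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))    # g = a + b k
P = i                                                   # pure unit quaternion with P . k = 0
image = qmult(qmult(g, P), qconj(g))                    # phi_g(P) = g P g^dagger
assert abs(image[0]) < 1e-12                            # the image is again pure
cosine = sum(x*y for x, y in zip(P[1:], image[1:]))     # dot product of unit vectors
assert abs(cosine - math.cos(theta)) < 1e-12            # P has been rotated by theta
\end{verbatim}
\bigbreak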
We want a representation of the three-strand braid group in $SU(2).$ This means that we want a homomorphism $\rho: B_{3} \longrightarrow SU(2),$ and hence
we want elements $g = \rho(s_{1})$ and $h= \rho(s_{2})$ in $SU(2)$ representing the braid group generators $s_{1}$ and $s_{2}.$ Since $s_{1}s_{2}s_{1} =
s_{2}s_{1}s_{2}$ is the generating relation for $B_{3},$ the only requirement on $g$ and $h$ is that $ghg = hgh.$ We rewrite this relation as
$h^{-1}gh = ghg^{-1},$ and analyze its meaning in the unit quaternions.
\bigbreak
Suppose that $g = a + bu$ and $h=c + dv$ where $u$ and $v$ are unit pure quaternions so that $a^2 + b^2 = 1$ and $c^2 + d^2 = 1.$
Then $ghg^{-1} = c +d\phi_{g}(v)$ and $h^{-1}gh = a + b\phi_{h^{-1}}(u).$ Thus it follows from the braiding relation that
$a=c,$ $b= \pm d,$ and that $\phi_{g}(v) = \pm \phi_{h^{-1}}(u).$ However, in the case where there is a minus sign we have
$g = a + bu$ and $h = a - bv = a + b(-v).$ Thus we can now prove the following Theorem.
\bigbreak
\noindent {\bf Theorem 1.} {\it If $g = a + bu$ and $h=c + dv$ are unit quaternions, with $u$ and $v$ pure unit quaternions, then, without loss of generality, the braid relation $ghg=hgh$ is
true if and only if
$h = a + bv,$ and $\phi_{g}(v) = \phi_{h^{-1}}(u).$ Furthermore, given that $g = a +bu$ and $h = a +bv,$ the condition $\phi_{g}(v) = \phi_{h^{-1}}(u)$
is satisfied if and only if $u \cdot v = \frac{a^2 - b^2}{2 b^2}$ when $u \ne v.$ If $u = v$ then $g = h$ and the braid relation is trivially
satisfied.}
\bigbreak
\noindent {\bf Proof.} We have proved the first sentence of the Theorem in the discussion prior to its statement. Therefore assume that
$g = a +bu, h = a +bv,$ and $\phi_{g}(v) = \phi_{h^{-1}}(u).$
We have already stated the formula for $\phi_{g}(v)$ in the discussion about quaternions:
$$\phi_{g}(v) = gvg^{-1} = (a^2 - b^2)v + 2ab (v \times u) + 2(v \cdot u)b^{2}u.$$ By the same token, we have
$$\phi_{h^{-1}}(u) = h^{-1}uh = (a^2 - b^2)u + 2ab (u \times -v) + 2(u \cdot (-v))b^{2}(-v)$$
$$= (a^2 - b^2)u + 2ab (v \times u) + 2(v \cdot u)b^{2}(v).$$ Hence we require that
$$(a^2 - b^2)v + 2(v \cdot u)b^{2}u = (a^2 - b^2)u + 2(v \cdot u)b^{2}(v).$$ This equation is equivalent to
$$2(u \cdot v)b^{2} (u - v) = (a^2 - b^2)(u - v).$$
If $u \ne v,$ then this implies that $$u \cdot v = \frac{a^2 - b^2}{2 b^2}.$$
This completes the proof of the Theorem.
$\Box$
\bigbreak
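\noindent As a numerical illustration of Theorem 1 (a sketch added for this survey, assuming the routine \verb|qmult| from the earlier quaternion sketch is in scope), one can pick $a, b$ with $a^2 + b^2 = 1,$ choose unit vectors $u, v$ with $u \cdot v = \frac{a^2-b^2}{2b^2},$ and check the braid relation directly.
\begin{verbatim}
import math

a, b = 0.5, math.sqrt(3)/2                      # a^2 + b^2 = 1
dot = (a*a - b*b) / (2*b*b)                     # required value of u . v (here -1/3)
u = (0.0, 1.0, 0.0, 0.0)                        # u = i
v = (0.0, dot, math.sqrt(1 - dot*dot), 0.0)     # unit pure quaternion with u . v = dot
g = (a, b*u[1], b*u[2], b*u[3])                 # g = a + b u
h = (a, b*v[1], b*v[2], b*v[3])                 # h = a + b v

ghg = qmult(qmult(g, h), g)
hgh = qmult(qmult(h, g), h)
assert all(abs(x - y) < 1e-12 for x, y in zip(ghg, hgh))   # braid relation holds
\end{verbatim}
\bigbreak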
\noindent{\bf An Example.} Let
$$g = e^{i\theta} = a + bi$$ where $a = cos(\theta)$ and $b = sin(\theta).$
Let $$h = a + b[(c^2 - s^2)i + 2csk]$$ where $c^2 + s^2 = 1$ and $c^2 - s^2 = \frac{a^2 - b^2}{2b^2}.$ Then we can reexpress $g$ and $h$ in matrix form
as the matrices $G$ and $H.$ Instead of writing the explicit form of $H,$ we write $H = FGF^{\dagger}$ where $F$ is an element of $SU(2)$ as shown below.
$$G =
\left( \begin{array}{cc}
e^{i\theta} & 0 \\
0 & e^{-i\theta} \\
\end{array} \right)$$
$$F =
\left( \begin{array}{cc}
ic & is \\
is & -ic \\
\end{array} \right)$$
This representation of braiding where one generator $G$ is a simple matrix of phases, while the other generator $H = FGF^{\dagger}$ is derived from $G$ by
conjugation by a unitary matrix, has the possibility for generalization to representations of braid groups (on greater than three strands) to $SU(n)$ or
$U(n)$ for
$n$ greater than $2.$ In fact we shall see just such representations constructed later in this paper, by using a version of topological quantum field theory.
The simplest example is given by
$$g = e^{7 \pi i/10}$$
$$f = i \tau + k \sqrt{\tau}$$
$$h = f g f^{-1}$$
where $\tau^{2} + \tau = 1.$
Then $g$ and $h$ satisfy $ghg=hgh$ and generate a representation of the three-strand braid group that is dense in $SU(2).$ We shall call this the
{\it Fibonacci} representation of $B_{3}$ to $SU(2).$
\bigbreak
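\noindent A short numerical check (a sketch added for this survey, again assuming \verb|qmult| and \verb|qconj| from the earlier quaternion sketch) confirms that this pair of quaternions satisfies the braid relation.
\begin{verbatim}
import math

tau = (math.sqrt(5) - 1) / 2                    # tau^2 + tau = 1
g = (math.cos(7*math.pi/10), math.sin(7*math.pi/10), 0.0, 0.0)   # e^{7 pi i/10}
f = (0.0, tau, 0.0, math.sqrt(tau))             # f = i tau + k sqrt(tau), unit length
h = qmult(qmult(f, g), qconj(f))                # h = f g f^{-1}, since f^{-1} = f^dagger

ghg = qmult(qmult(g, h), g)
hgh = qmult(qmult(h, g), h)
assert all(abs(x - y) < 1e-12 for x, y in zip(ghg, hgh))   # the Fibonacci braiding
\end{verbatim}
\bigbreak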
\noindent {\bf Density.} Consider representations of $B_{3}$ into $SU(2)$ produced by the method of this section. That is consider the subgroup $SU[G,H]$ of
$SU(2)$ generated by a pair of elements $\{g,h \}$ such that $ghg=hgh.$ We wish to understand when such a representation will be dense in $SU(2).$
We need the following lemma.
\bigbreak
\noindent {\bf Lemma.} {\it $e^{ai} e^{bj} e^{ci} = cos(b) e^{i(a +c)} + sin(b) e^{i(a-c)} j.$ Hence any element of $SU(2)$ can be written in the form
$e^{ai} e^{bj} e^{ci}$ for appropriate choices of angles $a,b,c.$ In fact, if $u$ and $v$ are linearly independent unit vectors in $R^{3},$ then
any element of $SU(2)$ can be written in the form $$e^{au} e^{bv} e^{cu}$$ for appropriate choices of the real numbers $a,b,c.$}
\bigbreak
\noindent {\bf Proof.}
It is easy to check that
$$e^{ai} e^{bj} e^{ci} = cos(b)e^{i(a + c)} + sin(b)e^{i(a - c)} j.$$
This completes the verification of the identity in the statement of the Lemma.
\bigbreak
\noindent Let $v$ be any unit direction in $R^{3}$ and $\lambda$ an arbitrary angle.
We have $$e^{v\lambda} = cos(\lambda) + sin(\lambda)v,$$ and
$$v = r + si + (p + qi)j$$ where $r^2 + s^2 + p^2 + q^2 = 1.$ So
$$e^{v\lambda} = cos(\lambda) + sin(\lambda)[r + si] + sin(\lambda)[p + qi]j$$
$$= [(cos(\lambda) + sin(\lambda)r) + sin(\lambda)s i] + [sin(\lambda)p + sin(\lambda)q i]j.$$
\bigbreak
\noindent By the identity just proved, we can choose angles $a,b,c$ so that
$$e^{v\lambda} = e^{ia}e^{jb}e^{ic}.$$ Hence
$$cos(b)e^{i(a + c)} = (cos(\lambda) + sin(\lambda)r) + sin(\lambda)s i$$
and
$$sin(b)e^{i(a - c)} = sin(\lambda)p + sin(\lambda)q i.$$
Suppose we keep $v$ fixed and vary $\lambda.$ Then the last equations show that this will result in a full variation of $b.$
\bigbreak
\noindent Now consider
$$e^{ia'}e^{v\lambda}e^{ic'} = e^{ia'}e^{ia}e^{jb}e^{ic}e^{ic'} = e^{i(a' + a)}e^{jb}e^{i(c + c')}.$$
By the basic identity, this shows that any element of $SU(2)$ can be written in the form
$$e^{ia'}e^{v\lambda}e^{ic'}.$$
Then, by applying a rotation, we finally conclude that if $u$ and $v$ are linearly independent unit vectors in $R^{3},$ then any element of
$SU(2)$ can be written in the form
$$e^{au} e^{bv} e^{cu}$$ for appropriate choices of the real numbers $a,b,c.$
$\Box$
\bigbreak
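\noindent The identity used in this proof is also easy to confirm numerically. Here is a short sketch (added for this survey, assuming \verb|qmult| from the earlier quaternion sketch) that checks $e^{ai} e^{bj} e^{ci} = cos(b) e^{i(a+c)} + sin(b) e^{i(a-c)} j$ for sample angles.
\begin{verbatim}
import math

def qexp(angle, axis):          # e^{angle * axis} = cos(angle) + sin(angle) axis
    return tuple(math.cos(angle)*e + math.sin(angle)*x
                 for e, x in zip((1, 0, 0, 0), axis))

i_ax, j_ax = (0, 1, 0, 0), (0, 0, 1, 0)
a, b, c = 0.4, 1.1, -0.7
lhs = qmult(qmult(qexp(a, i_ax), qexp(b, j_ax)), qexp(c, i_ax))
rhs = tuple(math.cos(b)*x + math.sin(b)*y
            for x, y in zip(qexp(a + c, i_ax), qmult(qexp(a - c, i_ax), j_ax)))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
\end{verbatim}
\bigbreak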
This Lemma can be used to verify density of a representation, by finding two elements $A$ and $B$ in the representation such that
the powers of $A$ are dense in the rotations about its axis, and the powers of $B$ are dense in the rotations about its axis, and such that the
axes of $A$ and $B$ are linearly independent in $R^{3}.$ Then by the Lemma the set of elements $A^{a+c}B^{b}A^{a-c}$ is dense in $SU(2).$ It follows,
for example, that the Fibonacci representation described above is dense in $SU(2),$ and indeed the generic representation of $B_{3}$ into
$SU(2)$ will be dense in $SU(2).$ Our next task is to describe representations of the higher braid groups that will extend some of these unitary
representations of the three-strand braid group. For this we need more topology.
\bigbreak
\section{The Bracket Polynomial and the Jones Polynomial}
We now discuss the Jones polynomial. We shall construct the Jones polynomial by using the bracket state
summation model \cite{KA87}. The bracket polynomial, invariant under Reidemeister moves II and III, can be normalized to give an invariant of all three
Reidemeister moves. This normalized invariant, with a change of variable, is the Jones polynomial
\cite{JO1,JO2}. The Jones polynomial was originally discovered by a different method than the one given here.
\bigbreak
The {\em bracket polynomial}, $<K> \, = \, <K>(A)$, assigns to each unoriented link diagram $K$ a
Laurent polynomial in the variable $A$, such that
\begin{enumerate}
\item If $K$ and $K'$ are regularly isotopic diagrams, then $<K> \, = \, <K'>$.
\item If $K \sqcup O$ denotes the disjoint union of $K$ with an extra unknotted and unlinked
component $O$ (also called `loop' or `simple closed curve' or `Jordan curve'), then
$$< K \sqcup O> \, = \delta<K>,$$
where $$\delta = -A^{2} - A^{-2}.$$
\item $<K>$ satisfies the following formulas
$$<\mbox{\large $\chi$}> \, = A <\mbox{\large $\asymp$}> + A^{-1} <)(>$$
$$<\overline{\mbox{\large $\chi$}}> \, = A^{-1} <\mbox{\large $\asymp$}> + A <)(>,$$
\end{enumerate}
\noindent where the small diagrams represent parts of larger diagrams that are identical except at
the site indicated in the bracket. We take the convention that the letter chi, \mbox{\large $\chi$},
denotes a crossing where {\em the curved line is crossing over the straight
segment}. The barred letter denotes the switch of this crossing, where {\em the curved
line is undercrossing the straight segment}. See Figure 7 for a graphic illustration of this relation, and an
indication of the convention for choosing the labels $A$ and $A^{-1}$ at a given crossing.
$$ \picill5inby3in(F7) $$
\begin{center} {\bf Figure 7 - Bracket Smoothings}
\end{center}
\noindent It is easy to see that Properties $2$ and $3$ define the calculation of the bracket on
arbitrary link diagrams. The choices of coefficients ($A$ and $A^{-1}$) and the value of $\delta$
make the bracket invariant under the Reidemeister moves II and III. Thus
Property $1$ is a consequence of the other two properties.
\bigbreak
In computing the bracket, one finds the following behaviour under Reidemeister move I:
$$<\mbox{\large $\gamma$}> = -A^{3}<\smile>$$ and
$$<\overline{\mbox{\large $\gamma$}}> = -A^{-3}<\smile>$$
\noindent where \mbox{\large $\gamma$} denotes a curl of positive type as indicated in Figure 8,
and $\overline{\mbox{\large $\gamma$}}$ indicates a curl of negative type, as also seen in this
figure. The type of a curl is the sign of the crossing when we orient it locally. Our convention of
signs is also given in Figure 8. Note that the type of a curl does not depend on the orientation
we choose. The small arcs on the right hand side of these formulas indicate
the removal of the curl from the corresponding diagram.
\bigbreak
\noindent The bracket is invariant under regular isotopy and can be normalized to an invariant of
ambient isotopy by the definition
$$f_{K}(A) = (-A^{3})^{-w(K)}<K>(A),$$ where we chose an orientation for $K$, and where $w(K)$ is
the sum of the crossing signs of the oriented link $K$. $w(K)$ is called the {\em writhe} of $K$.
The convention for crossing signs is shown in Figure 8.
$$ \picill4.5inby2in(F8) $$
\begin{center} {\bf Figure 8 - Crossing Signs and Curls}
\end{center}
\noindent One useful consequence of these formulas is the following {\em switching formula}
$$A<\mbox{\large $\chi$}> - A^{-1} <\overline{\mbox{\large $\chi$}}> = (A^{2} - A^{-2})<\mbox{\large $\asymp$}>.$$ Note that
in these conventions the $A$-smoothing of $\mbox{\large $\chi$}$ is $\mbox{\large $\asymp$},$ while the $A$-smoothing of
$\overline{\mbox{\large $\chi$}}$ is $)(.$ Properly interpreted, the switching formula above says that you can switch a crossing and
smooth it either way and obtain a three diagram relation. This is useful since some computations will simplify quite quickly with the
proper choices of switching and smoothing. Remember that it is necessary to keep track of the diagrams up to regular isotopy (the
equivalence relation generated by the second and third Reidemeister moves). Here is an example. View Figure 9.
$$ \picill3inby2in(F9) $$
\begin{center} {\bf Figure 9 -- Trefoil and Two Relatives} \end{center}
\bigbreak
\noindent Figure 9 shows a trefoil diagram $K$, an unknot diagram $U$ and another unknot diagram $U'.$ Applying the switching formula,
we have $$A^{-1} <K> - A <U> = (A^{-2} - A^{2}) <U'>$$ and
$<U>= -A^{3}$ and $<U'>=(-A^{-3})^2 = A^{-6}.$ Thus $$A^{-1} <K> - A(-A^{3}) = (A^{-2} - A^{2}) A^{-6}.$$ Hence
$$A^{-1} <K> = -A^4 + A^{-8} - A^{-4}.$$ Thus $$<K> = -A^{5} - A^{-3} + A^{-7}.$$ This is the bracket polynomial of the trefoil diagram $K.$
\bigbreak
\noindent Since the trefoil diagram $K$ has writhe $w(K) = 3,$ we have the normalized polynomial
$$f_{K}(A) = (-A^{3})^{-3}<K> = -A^{-9}(-A^{5} - A^{-3} + A^{-7}) = A^{-4} + A^{-12} - A^{-16}.$$
\bigbreak
The bracket model for the Jones polynomial is quite useful both theoretically and in terms
of practical computations. One of the neatest applications is to simply compute, as we have done, $f_{K}(A)$ for the
trefoil knot $K$ and determine that $f_{K}(A)$ is not equal to $f_{K}(A^{-1}) = f_{-K}(A).$ This
shows that the trefoil is not ambient isotopic to its mirror image, a fact that is much harder to
prove by classical methods.
\bigbreak
\noindent {\bf The State Summation.} In order to obtain a closed formula for the bracket, we now describe it as a state summation.
Let $K$ be any unoriented link diagram. Define a {\em state}, $S$, of $K$ to be a choice of
smoothing for each crossing of $K.$ There are two choices for smoothing a given crossing, and
thus there are $2^{N}$ states of a diagram with $N$ crossings.
In a state we label each smoothing with $A$ or $A^{-1}$ according to the left-right convention
discussed in Property $3$ (see Figure 7). The label is called a {\em vertex weight} of the state.
There are two evaluations related to a state. The first one is the product of the vertex weights,
denoted
$$<K|S>.$$
The second evaluation is the number of loops in the state $S$, denoted $$||S||.$$
\noindent Define the {\em state summation}, $<K>$, by the formula
$$<K> \, = \sum_{S} <K|S>\delta^{||S||-1}.$$
It follows from this definition that $<K>$ satisfies the equations
$$<\mbox{\large $\chi$}> \, = A <\mbox{\large $\asymp$}> + A^{-1} <)(>,$$
$$<K \sqcup O> \, = \delta<K>,$$
$$<O> \, =1.$$
\noindent The first equation expresses the fact that the entire set of states of a given diagram is
the union, with respect to a given crossing, of those states with an $A$-type smoothing and those
with an $A^{-1}$-type smoothing at that crossing. The second and the third equation
are clear from the formula defining the state summation. Hence this state summation produces the
bracket polynomial as we have described it at the beginning of the section.
\bigbreak
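\noindent The state summation is easily carried out by brute force on small examples. The following Python sketch (written for this survey; the encoding of a braid closure and the smoothing weights are chosen so that the closure of $\sigma_{1}^{3}$ on two strands reproduces the trefoil bracket computed above) enumerates the $2^{N}$ states of the closure of a braid word and sums the vertex weights times $\delta^{||S||-1}.$
\begin{verbatim}
from itertools import product
from sympy import Symbol, expand

A = Symbol('A')
delta = -A**2 - A**-2

def bracket_of_braid_closure(n, word):
    # word: list of nonzero integers, +i / -i meaning sigma_i / sigma_i^{-1} on n strands
    total = 0
    for state in product((0, 1), repeat=len(word)):   # 0 = identity smoothing, 1 = cup-cap
        parent = list(range(n))                       # union-find over arc labels
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(x, y):
            parent[find(x)] = find(y)
        ids, start, weight = list(range(n)), list(range(n)), 1
        for s, choice in zip(word, state):
            pos = abs(s) - 1
            weight *= A if (s > 0) == (choice == 0) else A**-1
            if choice == 1:                           # cup-cap: the cap joins the arcs below,
                union(ids[pos], ids[pos + 1])         # the cup starts one new arc above
                parent.append(len(parent))
                ids[pos] = ids[pos + 1] = len(parent) - 1
            # identity smoothing: the arcs continue straight up, nothing to do
        for p in range(n):                            # close the braid top to bottom
            union(ids[p], start[p])
        loops = len({find(x) for x in range(len(parent))})
        total += weight * delta**(loops - 1)
    return expand(total)

print(bracket_of_braid_closure(2, [1, 1, 1]))   # -A**5 - A**(-3) + A**(-7), as computed above
\end{verbatim}
\bigbreak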
\noindent {\bf Remark.} By a change of variables one obtains the original
Jones polynomial, $V_{K}(t),$ for oriented knots and links from the normalized bracket:
$$V_{K}(t) = f_{K}(t^{-\frac{1}{4}}).$$
\noindent {\bf Remark.} The bracket polynomial provides a connection between knot theory and physics, in that the state summation
expression for it exhibits it as a generalized partition function defined on the knot diagram. Partition functions
are ubiquitous in statistical mechanics, where they express the summation over all states of the physical system of
probability weighting functions for the individual states. Such physical partition functions contain large amounts of
information about the corresponding physical system. Some of this information is directly present in the properties of the
function, such as the location of critical points and phase transitions. Some of the information can be obtained by differentiating the
partition function, or performing other mathematical operations on it.
\bigbreak
There is much more in this connection with statistical mechanics in that the local weights in a partition function are often expressed in
terms of solutions to a matrix equation called the Yang-Baxter equation, that turns out to fit perfectly invariance under the third
Reidemeister move. As a result, there are many ways to define partition functions of knot diagrams that give rise to invariants of knots and links.
The subject is intertwined with the algebraic structure of Hopf algebras and quantum groups, useful for producing systematic solutions to the Yang-Baxter
equation. In fact Hopf algebras are deeply connected with the problem of constructing invariants of three-dimensional manifolds in relation to
invariants of knots. We have chosen, in this survey paper, to not discuss the details of these approaches, but rather to proceed to Vassiliev invariants
and the relationships with Witten's functional integral. The
reader is referred to \cite{KA87,KA89,KL,Kauffman-Graph,KaufInter,KP,AW,JO1,JO2,KR,RT1,RT2,T,TV} for more information about relationships of knot theory with
statistical mechanics, Hopf algebras and quantum groups. For topology, the key point is that Lie algebras can be used to construct invariants of
knots and links.
\bigbreak
\subsection{Quantum Computation of the Jones Polynomial}
Quantum algorithms for computing the Jones polynomial have been discussed elsewhere. See \cite{QCJP,BG,Ah1,QCJP2,Ah2,Wo}. Here, as an example, we give a
local unitary representation that can be used to compute the Jones polynomial for closures of 3-braids. We analyse this representation by making
explicit how the bracket polynomial is computed from it, and showing how the quantum computation devolves to finding the trace of a unitary transformation.
The idea behind the construction of this representation depends upon the algebra generated by two single qubit density matrices
(ket-bras).
Let
$|v\rangle$ and
$|w\rangle$ be two qubits in $V,$ a complex vector space of dimension two over the complex numbers. Let
$P = |v\rangle\langle v|$ and $Q=|w\rangle\langle w|$ be the corresponding ket-bras. Note that
$$P^2 = |v|^{2}P,$$
$$Q^2 = |w|^{2}Q,$$
$$PQP = |\langle v|w \rangle|^{2}P,$$
$$QPQ= |\langle v|w\rangle|^{2}Q.$$
$P$ and $Q$ generate a representation of the Temperley-Lieb algebra (See Section 5 of the present paper). One can adjust parameters to make a representation
of the three-strand braid group in the form
$$s_{1} \longmapsto rP + sI,$$
$$s_{2} \longmapsto tQ + uI,$$
where $I$ is the identity mapping on $V$ and $r,s,t,u$ are suitably chosen scalars. In the following we use this method to adjust
such a representation so that it is unitary. Note also that this is a local unitary representation of $B_{3}$ to $U(2).$ We leave it as an exercise for
the reader to verify that it fits into our general classification of such representations as given in Section 3 of the present paper.
\bigbreak
The representation depends on two symmetric but non-unitary matrices $U_{1}$ and $U_{2}$ with
$$U_{1} = \left[
\begin{array}{cc}
d & 0 \\
0 & 0
\end{array}
\right] = d|w \rangle \langle w|$$
\noindent and
$$U_{2} = \left[
\begin{array}{cc}
d^{-1} & \sqrt{1-d^{-2}} \\
\sqrt{1-d^{-2}} & d - d^{-1}
\end{array}
\right] = d | v \rangle \langle v |$$
where $w = (1,0),$ and $v = (d^{-1}, \sqrt{1 - d^{-2}}),$ assuming the entries of $v$ are real.
Note that $U_{1}^{2} = dU_{1}$ and $U_{2}^{2} = dU_{2}.$ Moreover, $U_{1}U_{2}U_{1} = U_{1}$ and $U_{2}U_{1}U_{2} = U_{2}.$
This is an example of a specific
representation of the Temperley-Lieb algebra \cite{KA87, QCJP}.
\noindent The desired representation of the Artin braid group is given on the two braid generators for the three strand braid group by the
equations:
$$\Phi(s_{1})= AI + A^{-1}U_{1},$$
$$\Phi(s_{2})= AI + A^{-1}U_{2}.$$
Here $I$ denotes the $2 \times 2$ identity matrix.
\noindent For any $A$ with $d = -A^{2}-A^{-2}$ these formulas define a representation of the braid group. With
$A=e^{i\theta}$, we have $d = -2cos(2\theta)$. We find a specific range of
angles $|\theta| \leq \pi/6$ and $|\theta - \pi| \leq \pi/6$ {\it that give unitary representations of
the three-strand braid group.} Thus a specialization of a more general represention of the braid group gives rise to a continuous family
of unitary representations of the braid group.
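\bigbreak
\noindent For a concrete check, the following short numerical sketch (written for this survey) verifies the Temperley-Lieb relations, the braid relation and unitarity for one value of $\theta$ in the stated range.
\begin{verbatim}
import numpy as np

theta = np.pi / 8                              # |theta| <= pi/6, so |d| >= 1
A = np.exp(1j * theta)
d = (-A**2 - A**-2).real                       # d = -2 cos(2 theta)
U1 = np.array([[d, 0.0], [0.0, 0.0]])
U2 = np.array([[1/d, np.sqrt(1 - d**-2)],
               [np.sqrt(1 - d**-2), d - 1/d]])

assert np.allclose(U1 @ U1, d * U1) and np.allclose(U2 @ U2, d * U2)
assert np.allclose(U1 @ U2 @ U1, U1) and np.allclose(U2 @ U1 @ U2, U2)

I2 = np.eye(2)
S1 = A * I2 + (1/A) * U1                       # Phi(s1)
S2 = A * I2 + (1/A) * U2                       # Phi(s2)
assert np.allclose(S1 @ S2 @ S1, S2 @ S1 @ S2)                 # braid relation
assert np.allclose(S1 @ S1.conj().T, I2)                       # unitarity
assert np.allclose(S2 @ S2.conj().T, I2)
\end{verbatim}
\bigbreak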
Note that the traces of these matrices are given by the formulas $tr(U_{1})=tr(U_{2})= d$ while $tr(U_{1}U_{2}) = tr(U_{2}U_{1}) =1.$
If $b$ is any braid, let $I(b)$ denote the sum of the exponents in the braid word that expresses $b$.
For $b$ a three-strand braid, it follows that
$$\Phi(b) = A^{I(b)}I + \Pi(b)$$
\noindent where $I$ is the $ 2 \times 2$ identity matrix and $\Pi(b)$ is a sum of products in the Temperley-Lieb algebra
involving $U_{1}$ and $U_{2}.$ Since the Temperley-Lieb algebra in this dimension is generated by $I$,$U_{1}$, $U_{2}$,
$U_{1}U_{2}$ and $U_{2}U_{1}$, it follows that the value of the bracket polynomial of the closure of the braid $b$, denoted
$<\overline{b}>,$ can be calculated directly from the trace of this representation, except for the part involving the identity matrix.
The bracket polynomial evaluation depends upon the loop counts in the states of the closure of the braid, and these loop counts
correspond to the traces of the non-identity Temperley-Lieb elements. Note that the closure of the three-strand diagram for the identity braid
in $B_{3}$ has bracket polynomial $d^2.$ The
result is the equation
$$<\overline{b}> = A^{I(b)}d^{2} + tr(\Pi(b))$$
\noindent where $\overline{b}$ denotes the standard braid closure of $b$, and the sharp brackets denote the bracket polynomial.
Since the trace of the $2 \times 2$ identity matrix is $2$, we see that
$$<\overline{b}> = tr(\Phi(b)) + A^{I(b)}(d^{2} -2).$$
It follows from this calculation that the question of computing the bracket polynomial for the closure of the three-strand
braid $b$ is mathematically equivalent to the problem of computing the trace of the unitary matrix $\Phi(b).$
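\bigbreak
\noindent As a symbolic check of this formula (a sketch written for this survey), take $b = s_{1}^{3}$ in $B_{3}.$ Its closure is a trefoil diagram together with a disjoint unknotted circle, so by the properties of the bracket its value is $\delta$ times the trefoil bracket computed earlier.
\begin{verbatim}
from sympy import Symbol, Matrix, eye, simplify

A = Symbol('A')
d = -A**2 - A**-2
U1 = Matrix([[d, 0], [0, 0]])
S1 = A * eye(2) + (1/A) * U1                   # Phi(s1)
Phi_b = S1**3                                  # Phi(s1^3); exponent sum I(b) = 3

lhs = Phi_b.trace() + A**3 * (d**2 - 2)        # tr(Phi(b)) + A^{I(b)}(d^2 - 2)
trefoil = -A**5 - A**-3 + A**-7
expected = d * trefoil                         # bracket of the trefoil plus one free loop
assert simplify(lhs - expected) == 0
\end{verbatim}
\bigbreak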
\noindent {\bf The Hadamard Test}
In order to (quantum) compute the trace of a unitary matrix $U$, one can use the {\it Hadamard test} to obtain the diagonal matrix
elements $\langle \psi|U|\psi \rangle$ of $U.$ The trace is then the sum of these matrix elements as $|\psi \rangle$ runs over an orthonormal basis for
the vector space. We first obtain $$\frac{1}{2} + \frac{1}{2}Re\langle \psi|U|\psi \rangle$$ as
an expectation by applying the Hadamard gate $H$
$$H|0 \rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$$
$$H|1 \rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$$
to the first qubit of
$$C_{U} \circ (H \otimes 1) |0 \rangle |\psi \rangle = \frac{1}{\sqrt{2}}(|0\rangle \otimes|\psi \rangle + |1\rangle \otimes U|\psi\rangle).$$
Here $C_{U}$ denotes controlled $U,$ acting as $U$ when the control bit is $|1 \rangle$ and the identity mapping when the control bit is $|0 \rangle.$ We
measure the expectation for the first qubit $|0 \rangle$ of the resulting state
$$\frac{1}{\sqrt{2}}(H|0\rangle \otimes|\psi \rangle + H|1\rangle \otimes U|\psi\rangle)
=\frac{1}{2}((|0\rangle + |1\rangle) \otimes|\psi \rangle + (|0\rangle - |1\rangle) \otimes U|\psi\rangle)$$
$$=\frac{1}{2}(|0\rangle \otimes (|\psi \rangle + U|\psi\rangle) + |1\rangle \otimes(|\psi \rangle - U|\psi\rangle)).$$
This expectation is $$\frac{1}{4}(\langle \psi | + \langle \psi| U^{\dagger})(|\psi \rangle + U|\psi\rangle) = \frac{1}{2} + \frac{1}{2}Re\langle \psi|U|\psi
\rangle.$$
\noindent The imaginary
part is obtained by applying the same procedure to
$$\frac{1}{\sqrt{2}}(|0\rangle \otimes|\psi \rangle - i|1\rangle \otimes U|\psi\rangle).$$
This is the method used in
\cite{Ah1}, and the reader may wish to contemplate its efficiency in the context of this simple model. Note that the Hadamard test enables this quantum
computation to estimate the trace of any unitary matrix $U$ by repeated trials that estimate individual matrix entries $\langle \psi|U|\psi\rangle.$
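\bigbreak
\noindent The following short simulation (a sketch written for this survey) makes the Hadamard test explicit for a single qubit: the probability of observing $|0\rangle$ on the control qubit is $\frac{1}{2} + \frac{1}{2}Re\langle \psi|U|\psi \rangle,$ and summing over a basis recovers the real part of the trace.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(X)                           # a random 2 x 2 unitary

def prob_zero(U, psi):
    # state C_U (H x 1)|0>|psi>, written as the pair (top, bottom) of 2-vectors
    top, bottom = psi / np.sqrt(2), (U @ psi) / np.sqrt(2)
    zero_branch = (top + bottom) / np.sqrt(2)    # Hadamard applied to the control qubit
    return np.linalg.norm(zero_branch) ** 2      # probability of reading |0>

psi = np.array([1.0, 1.0]) / np.sqrt(2)
assert np.isclose(prob_zero(U, psi), 0.5 + 0.5 * (psi.conj() @ U @ psi).real)

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
re_trace = sum(2 * prob_zero(U, e) - 1 for e in basis)
assert np.isclose(re_trace, np.trace(U).real)
\end{verbatim}
\bigbreak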
We shall return to quantum algorithms for the Jones polynomial and other knot polynomials in a subsequent paper.
\bigbreak
\section{Quantum Topology, Cobordism Categories, Temperley-Lieb Algebra and Topological Quantum Field Theory}
The purpose of this section is to discuss the general idea behind topological quantum field theory, and to illustrate its application to basic
quantum mechanics and quantum mechanical formalism. It is useful in this regard to have available the concept of {\it category}, and we shall begin the
section by discussing this far-reaching mathematical concept.
\bigbreak
\noindent {\bf Definition.} A {\it category Cat} consists in two related collections:
\begin{enumerate}
\item $Obj(Cat)$, the {\it objects} of $Cat,$ and
\item $Morph(Cat)$, the {\it morphisms} of $Cat.$
\end{enumerate}
satisfying the following axioms:
\begin{enumerate}
\item Each morphism $f$ is associated to two objects of $Cat$, the {\it domain} of f and the {\it codomain} of f. Letting $A$ denote the domain of $f$ and
$B$ denote the codomain of $f,$ it is customary to denote the morphism $f$ by the arrow notation
$f:A \longrightarrow B.$
\item Given $f:A \longrightarrow B$ and $g:B \longrightarrow C$ where $A$, $B$ and $C$ are objects of $Cat$, then there exists an associated morphism
$g \circ f : A \longrightarrow C$ called the {\it composition} of $f$ and $g$.
\item To each object $A$ of $Cat$ there is a unique {\it identity morphism} $1_{A}:A \longrightarrow A$ such that $1_{A} \circ f = f$ for any
morphism $f$ with codomain $A$, and $g \circ 1_{A} = g$ for any morphism $g$ with domain $A.$
\item Given three morphisms $f:A \longrightarrow B$, $g:B \longrightarrow C$ and $h:C \longrightarrow D$, then composition is associative.
That is $$(h \circ g) \circ f = h \circ (g \circ f).$$
\end{enumerate}
\noindent If $Cat_{1}$ and $Cat_{2}$ are two categories, then a {\it functor} $F:Cat_{1} \longrightarrow Cat_{2}$ consists in functions
$F_{O}:Obj(Cat_{1}) \longrightarrow Obj(Cat_{2})$ and $F_{M}:Morph(Cat_{1}) \longrightarrow Morph(Cat_{2})$ such that
identity morphisms and composition of morphisms are preserved under these mappings. That is (writing just $F$ for $F_{O}$ and $F_{M}$),
\begin{enumerate}
\item $F(1_{A}) = 1_{F(A)}$,
\item $F(f:A \longrightarrow B) = F(f):F(A) \longrightarrow F(B)$,
\item $F(g \circ f) = F(g) \circ F(f)$.
\end{enumerate}
A functor $F:Cat_{1} \longrightarrow Cat_{2}$ is a structure preserving mapping from one category to another.
It is often convenient to think of the image of the functor $F$ as an {\it interpretation} of the first category in terms of the second.
We shall use this terminology below and sometimes refer to an interpretation without specifying all the details of the functor that describes it.
\bigbreak
The notion of category is a broad mathematical concept, encompassing many fields of mathematics. Thus one has the category of sets where the objects
are sets (collections) and the morphisms are mappings between sets. One has the category of topological spaces where the objects are spaces and the morphisms
are continuous mappings of topological spaces. One has the category of groups where the objects are groups and the morphisms are homomorphisms of groups.
Functors are structure preserving mappings from one category to another. For example, the fundamental group is a functor from the category of topological
spaces with base point, to the category of groups. In all the examples mentioned so far, the morphisms in the category are restrictions of mappings in the
category of sets, but this is not necessarily the case. For example, any group $G$ can be regarded as a category, $Cat(G)$, with one object $*.$
The morphisms from $*$ to itself are the elements of the group and composition is group multiplication. In this example, the object has no internal structure
and all the complexity of the category is in the morphisms.
\bigbreak
The Artin braid group $B_{n}$ can be regarded as a category whose single object is an ordered
row of points $[n] = \{1,2,3,...,n \}.$ The morphisms are the braids themselves and composition is the multiplication of the braids. The ordered row of points
is interpreted as the starting and ending row of points at the bottom and the top of the braid. In the case of the braid category, the morphisms have both
external and internal structure. Each morphism produces a permutation of the ordered row of points (corresponding to the beginning and ending points of the
individual braid strands), and weaving of the braid is extra structure beyond the object that is its domain and codomain. Finally, for this example, we can
take all the braid groups $B_{n}$ ($n$ a positive integer) under the wing of a single category, $Cat(B)$, whose objects are all ordered rows of points
$[n]$, and whose morphisms are of the form $b:[n] \longrightarrow [n]$ where $b$ is a braid in $B_{n}.$ The reader may wish to have
morphisms between objects with different $n$. We will have this shortly in the Temperley-Lieb category and in the category of tangles.
\bigbreak
The {\it $n$-Cobordism Category}, $Cob[n]$, has as its objects smooth manifolds of dimension $n$, and as its morphisms, smooth manifolds $M^{n+1}$ of
dimension $n+1$ with a partition of the boundary, $\partial M^{n+1}$, into two collections of $n$-manifolds that we denote by $L(M^{n+1})$ and $R(M^{n+1}).$
We regard $M^{n+1}$ as a morphism from $L(M^{n+1})$ to $R(M^{n+1})$
$$M^{n+1}: L(M^{n+1}) \longrightarrow R(M^{n+1}).$$
As we shall see, these cobordism categories are highly significant for
quantum mechanics, and the simplest one, $Cob[0]$ is directly related to the Dirac notation of bras and kets and to the Temperley-Lieb algebra. We shall
concentrate in this section on these cobordism categories, and their relationships with quantum mechanics.
\bigbreak
\noindent One can choose to consider either oriented or non-oriented manifolds, and within unoriented manifolds there are those that are orientable and
those that are not orientable. In this section we will implicitly discuss only orientable manifolds, but we shall not specify an orientation. In the
next section, with the standard definition of topological quantum field theory, the manifolds will be oriented. The definitions of the cobordism
categories for oriented manifolds go over mutatis mutandis.
\bigbreak
Let us begin with $Cob[0]$. Zero dimensional manifolds are just collections of points. The simplest zero dimensional manifold is a single point $p$.
We take $p$ to be an object of this category and also $*$,
where $*$ denotes the empty manifold (i.e. the empty set in the category of manifolds). The object $*$ occurs in $Cob[n]$ for every $n$, since
it is possible that either the left set or the right set of a morphism is empty. A line segment $S$ with boundary points $p$ and $q$ is a morphism from $p$
to $q$.
$$S:p \longrightarrow q$$
See Figure 10. In this figure we have illustrated the morphism from $p$ to $p.$ The simplest convention for this category is to take this morphism to
be the identity. Thus if we look at the subcategory of $Cob[0]$ whose only object is $p$, then the only morphism is the identity morphism. Two points
occur as the boundary of an interval. The reader will note that $Cob[0]$ and the usual arrow notation for morphisms are very closely related. This is
a place where notation and mathematical structure share common elements. In general the objects of $Cob[0]$ consist in the empty object $*$
and non-empty rows
of points, symbolized by
$$p \otimes p \otimes \cdots \otimes p \otimes p.$$
Figure 10 also contains a morphism
$$p \otimes p \longrightarrow *$$ and the morphism
$$* \longrightarrow p\otimes p.$$
The first represents a cobordism of two points to the empty set (via the bounding curved interval). The second represents a cobordism from the empty set
to two points.
$$ \picill5inby2in(F10) $$
\begin{center}
{\bf Figure 10 - Elementary Cobordisms}
\end{center}
In Figure 11, we have indicated more morphisms in $Cob[0]$, and we have named the morphisms just discussed as
$$| \Omega \rangle : p \otimes p \longrightarrow *,$$
$$\langle \Theta |: * \longrightarrow p\otimes p.$$
The point to notice is that the usual conventions for handling Dirac bra-kets are essentially the same as the composition rules in this
topological category. Thus in Figure 11 we have that
$$\langle \Theta | \circ | \Omega \rangle = \langle \Theta | \Omega \rangle : * \longrightarrow *$$
represents a cobordism from the empty manifold to itself. This cobordism is topologically a circle and, in the Dirac formalism is interpreted as a
scalar. In order to interpret the notion of scalar we would have to map the cobordism category to the category of vector spaces and linear mappings.
We shall discuss this after describing the similarities with quantum mechanical formalism. Nevertheless, the reader should note that if $V$ is a
vector space over the complex numbers $C$, then a linear mapping from $C$ to $C$ is determined by the image of $1$, and hence is characterized by the
scalar that is the image of $1$. In this sense a mapping $C \longrightarrow C$ can be regarded as a possible image in vector spaces of the
abstract structure $\langle \Theta | \Omega \rangle : * \longrightarrow *$. It is therefore assumed that in $Cob[0]$ the composition with the
morphism $\langle \Theta | \Omega \rangle$ commutes with any other morphism. In that way $\langle \Theta | \Omega \rangle$ behaves like a scalar in
the cobordism category. In general, an $n+1$ manifold without boundary behaves as a scalar in $Cob[n]$, and if a manifold $M^{n+1}$ can be written
as a union of two submanifolds $L^{n+1}$ and $R^{n+1}$ so that an $n$-manifold $W^{n}$ is their common boundary:
$$M^{n+1} = L^{n+1} \cup R^{n+1}$$ with
$$ L^{n+1} \cap R^{n+1} = W^{n}$$ then,
we can write $$\langle M^{n+1} \rangle = \langle L^{n+1} \cup R^{n+1} \rangle = \langle L^{n+1} | R^{n+1} \rangle,$$ and $\langle M^{n+1} \rangle$
will be a scalar (morphism that commutes with all other morphisms) in the category $Cob[n]$.
\bigbreak
$$ \picill5inby3in(F11) $$
\begin{center}
{\bf Figure 11 - Bras, Kets and Projectors}
\end{center}
$$ \picill5inby3in(F12) $$
\begin{center}
{\bf Figure 12 - Permutations}
\end{center}
$$ \picill5inby2in(F13) $$
\begin{center}
{\bf Figure 13 - Projectors in Tensor Lines and Elementary Topology}
\end{center}
Getting back to the contents of Figure 11, note how the zero dimensional cobordism category has structural parallels to the Dirac ket--bra formalism
$$ U = | \Omega \rangle \langle \Theta |$$
$$ UU = | \Omega \rangle \langle \Theta | \Omega \rangle \langle \Theta |= \langle \Theta | \Omega \rangle | \Omega \rangle \langle \Theta |
= \langle \Theta | \Omega \rangle U.$$ In the cobordism category, the bra--ket and ket--bra formalism is seen as patterns of connection of
the one-manifolds that realize the cobordisms.
\bigbreak
Now view Figure 12. This Figure illustrates a morphism $S$ in $Cob[0]$ that requires two crossed line segments for its planar representation.
Thus $S$ can be regarded as a non-trivial permutation, and $S^2 = I$ where $I$ denotes the identity morphisms for a two-point row.
From this example, it is clear that $Cob[0]$ contains the structure of all the symmetric groups and more. In fact, if we take the subcategory of
$Cob[0]$ consisting of all morphisms from $[n]$ to $[n]$ for a fixed positive integer $n,$ then this gives the well-known {\it Brauer algebra} (see
\cite{Benkart}) extending the symmetric group by allowing any connections among the points in the two rows. In this sense, one could call $Cob[0]$ the {\it
Brauer category}. We shall return to this point of view later.
\bigbreak
In this section, we shall be concentrating
on the part of $Cob[0]$ that does not involve permutations. This part can be characterized by those morphisms that can be represented by
planar diagrams without crossings between any of the line segments (the one-manifolds). We shall call this crossingless subcategory of $Cob[0]$ the {\em
Temperley-Lieb Category} and denote it by $CatTL.$ In $CatTL$ we have the subcategory $TL[n]$ whose only objects are the row of $n$ points and the empty
object $*$, and whose morphisms can all be represented by configurations that embed in the plane as in the morphisms $P$ and $Q$ in Figure 13. Note that with
the empty object $*$, the morphism whose diagram is a single loop appears in $TL[n]$ and is taken to commute with all other morphisms.
\bigbreak
The {\em Temperley-Lieb Algebra}, $AlgTL[n]$ is generated by the morphisms in $TL[n]$ that go from $[n]$ to itself.
Up to multiplication by the loop, the product (composition) of two such morphisms is another flat morphism from $[n]$ to itself.
For algebraic purposes the loop $*
\longrightarrow *$ is taken to be a scalar algebraic variable $\delta$ that commutes with all elements in the algebra. Thus the equation
$$ UU = \langle \Theta | \Omega \rangle U.$$
becomes
$$UU = \delta U$$
in the algebra. In the algebra we are allowed to add morphisms formally and this addition is taken to be commutative. Initially the algebra is taken with
coefficients in the integers, but a different commutative ring of coefficients can be chosen and the value of the loop may be taken in this ring. For example,
for quantum mechanical applications it is natural to work over the complex numbers. The multiplicative structure of $AlgTL[n]$ can be described by
generators and relations as follows: Let $I_{n}$ denote the identity morphism from $[n]$ to $[n].$ Let $U_{i}$ denote the morphism from $[n]$ to $[n]$
that connects $k$ with $k$ for $k<i$ and $k>i+1$ from one row to the other, and connects $i$ to $i+1$ in each row. Then the algebra
$AlgTL[n]$ is generated by $\{ I_{n}, U_{1},U_{2},\cdots ,U_{n-1} \}$ with relations
$$U_{i}^{2} = \delta U_{i}$$
$$U_{i}U_{i+1}U_{i} = U_{i}$$
$$U_{i}U_{j} = U_{j}U_{i} \,: \,\, |i-j|>1.$$
These relations are illustrated for three strands in Figure 13. We leave the commuting relation for the reader to draw in the case where $n$ is
four or greater. For a proof that these are indeed all the relations, see \cite{KaufDiag}.
\bigbreak
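\noindent The multiplicative structure just described is easy to experiment with on a computer. In the following sketch (written for this survey; the encoding is ad hoc), an element of $AlgTL[n]$ is a matching of the $n$ bottom points $0,\ldots,n-1$ and the $n$ top points $n,\ldots,2n-1,$ together with a count of free loops; composition stacks one diagram on the other, traces each boundary point to its new partner, and counts the closed loops that appear.
\begin{verbatim}
def compose(D1, D2, n):
    (pairs1, loops1), (pairs2, loops2) = D1, D2
    adj = {}
    def add(x, y):
        adj.setdefault(x, []).append(y)
        adj.setdefault(y, []).append(x)
    for a, b in pairs1:                  # D1 joins the bottom row to the middle row
        add(('b', a) if a < n else ('m', a - n), ('b', b) if b < n else ('m', b - n))
    for a, b in pairs2:                  # D2 joins the middle row to the top row
        add(('m', a) if a < n else ('t', a - n), ('m', b) if b < n else ('t', b - n))
    index = {('b', i): i for i in range(n)}
    index.update({('t', i): n + i for i in range(n)})
    pairs, seen = set(), set()
    for start in index:                  # follow each boundary point to its partner
        if start in seen:
            continue
        prev, node = None, start
        while True:
            seen.add(node)
            prev, node = node, [x for x in adj[node] if x != prev][0]
            if node in index:
                break
        seen.add(node)
        pairs.add((min(index[start], index[node]), max(index[start], index[node])))
    loops = loops1 + loops2
    todo = {x for x in adj if x[0] == 'm' and x not in seen}
    while todo:                          # unvisited middle points lie on closed loops
        stack = [todo.pop()]
        while stack:
            for y in adj[stack.pop()]:
                if y in todo:
                    todo.remove(y)
                    stack.append(y)
        loops += 1
    return frozenset(pairs), loops

def gen_U(i, n):                         # the generator U_{i+1}, hooking strands i, i+1
    pairs = {(i, i + 1), (n + i, n + i + 1)}
    pairs.update((j, n + j) for j in range(n) if j not in (i, i + 1))
    return frozenset(pairs), 0

n = 3
U1, U2 = gen_U(0, n), gen_U(1, n)
assert compose(U1, U1, n) == (U1[0], 1)                # U1 U1 = (one loop) U1 = delta U1
assert compose(compose(U1, U2, n), U1, n) == U1        # U1 U2 U1 = U1
assert compose(compose(U2, U1, n), U2, n) == U2        # U2 U1 U2 = U2
\end{verbatim}
\bigbreak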
Figures 13 and 14 indicate how the zero dimensional cobordism category contains structure that goes well beyond the usual Dirac formalism.
By tensoring the ket--bra on one side or another by identity morphisms, we obtain the beginnings of the Temperley-Lieb algebra and the Temperley-Lieb
category. Thus Figure 14 illustrates the morphisms $P$ and $Q$ obtained by such tensoring, and the relation $PQP = P$ which is the same as
$U_{1}U_{2}U_{1} = U_{1}.$
\bigbreak
Note the composition at the
bottom of the Figure 14. Here we see a composition of the identity tensored with a ket, followed by a bra tensored with the identity.
The diagrammatic for this
association involves ``straightening" the curved structure of the morphism to a straight line.
In Figure 15 we have elaborated this
situation even further, pointing out that in this category each of the morphisms $\langle \Theta |$ and $| \Omega \rangle$ can be seen, by
straightening, as mappings from
the generating object to itself. We have denoted these corresponding morphisms by $\Theta$ and $\Omega$ respectively.
In this way there is a correspondence between
morphisms $p \otimes p \longrightarrow *$ and morphisms $p \longrightarrow p.$
\bigbreak
In Figure 15 we have illustrated the generalization of the straightening
procedure of Figure 14. In Figure 14 the straightening occurs because the connection structure in the morphism of $Cob[0]$ does not depend on
the wandering of curves in diagrams for the morphisms in that category. Nevertheless, one can envisage a more complex interpretation of the
morphisms where each one-manifold (line segment) has a label, and a multiplicity of morphisms can correspond to a single line segment.
This is exactly what we expect in interpretations. For example, we can interpret the line segment $[1] \longrightarrow [1]$ as a mapping from
a vector space $V$ to itself. Then $[1] \longrightarrow [1]$ is the diagrammatic abstraction for $ V \longrightarrow V,$ and there are many
instances of linear mappings from $V$ to $V$.
\bigbreak
At the vector space level there is a duality between mappings
$V \otimes V \longrightarrow C$ and linear maps $V \longrightarrow V.$
Specifically, let
$$\{ | 0 \rangle ,\cdots, | m \rangle \}$$
be a basis for $V.$ Then the linear map $\Theta: V \longrightarrow V$ determined by
$$\Theta |i \rangle = \Theta_{ij} \, |j \rangle$$ (where we have used the Einstein summation convention on the repeated index $j$)
corresponds to the bra
$$\langle \Theta |: V \otimes V \longrightarrow C$$
defined by
$$\langle \Theta |ij \rangle = \Theta_{ij}.$$
Given $\langle \Theta | :V \otimes V \longrightarrow C,$
we associate $\Theta: V \longrightarrow V$ in this way.
\bigbreak
Comparing with the diagrammatic for the category $Cob[0]$, we say that $\Theta: V \longrightarrow V$ is obtained by {\it straightening}
the mapping $$\langle \Theta | :V \otimes V \longrightarrow C.$$ Note that in this interpretation, the bras and kets are defined relative to the
tensor product of $V$ with itself and $[2]$ is interpreted as $V \otimes V.$ If we interpret $[2]$ as a single vector space $W,$ then
the usual formalisms of bras and kets still pass over from the cobordism category.
\bigbreak
$$ \picill5inby3in(F14) $$
\begin{center}
{\bf Figure 14 - The Basic Temperley-Lieb Relation}
\end{center}
$$ \picill3inby4.5in(F15) $$
\begin{center}
{\bf Figure 15 - The Key to Teleportation}
\end{center}
Figure 15 illustrates the straightening of $| \Theta \rangle$ and $\langle \Omega |,$ and the straightening of a composition of these
applied to $| \psi \rangle,$ resulting in $| \phi \rangle.$ In the left-hand part of the bottom of Figure 15 we illustrate the preparation
of the tensor product $| \Theta \rangle \otimes | \psi \rangle$ followed by a successful measurement by $\langle \Omega |$ in the second two
tensor factors. The resulting single qubit state, as seen by straightening, is $| \phi \rangle = \Theta \circ \Omega |\psi \rangle.$
\bigbreak
From this, we see that it is possible to reversibly, indeed unitarily, transform a state $| \psi \rangle$ via a combination of preparation and measurement
just so long as the straightenings of the preparation and measurement ($\Theta$ and $\Omega$) are each invertible (unitary). This is the
key to teleportation \cite{Teleport,C1,C2}. In the standard teleportation procedure one chooses the preparation $\Theta$ to be (up to normalization) the $2$
dimensional identity matrix so that
$| \Theta \rangle = |00\rangle + |11\rangle.$ If the successful measurement $\Omega$ is also the identity, then the transmitted state $| \phi \rangle$
will be equal to $| \psi \rangle.$ In general we will have $| \phi \rangle = \Omega |\psi \rangle.$ One can then choose a basis of measurements
$|\Omega \rangle,$ each corresponding to a unitary transformation $\Omega$ so that the recipient of the transmission can rotate the result by the
inverse of $\Omega$ to reconstitute $|\psi \rangle$ if he is given the requisite information. This is the basic design of the teleportation procedure.
\bigbreak
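\noindent In matrix terms the straightening of Figure 15 is just index reshuffling, and the teleportation pattern can be verified in a few lines. The following sketch (written for this survey) prepares $|\Theta\rangle$ with components $\Theta_{ij},$ pairs the last two tensor factors of $|\Theta\rangle \otimes |\psi\rangle$ against $\langle \Omega|,$ and checks that the first factor then carries $\Theta \circ \Omega |\psi\rangle.$
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Theta = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Omega = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi = rng.normal(size=2) + 1j * rng.normal(size=2)

ket_Theta = Theta.reshape(4)                        # |Theta> = sum_ij Theta_ij |ij>
state = np.kron(ket_Theta, psi).reshape(2, 2, 2)    # |Theta> x |psi>, indices (i, j, k)
# pair factors 2 and 3 against <Omega|, using the bilinear pairing <Omega|jk> = Omega_jk
phi = np.einsum('ijk,jk->i', state, Omega)

assert np.allclose(phi, Theta @ Omega @ psi)        # the straightened composition
\end{verbatim}
\bigbreak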
There is much more to say about the category $Cob[0]$ and its relationship with quantum mechanics. We will stop here, and invite the reader to explore
further. Later in this paper, we shall use these ideas in formulating our representations of the braid group. For now, we point out how things look
as we move upward to $Cob[n]$ for $n > 0.$ In Figure 16 we show typical cobordisms (morphisms) in $Cob[1]$ from two circles to one circle and from
one circle to two circles. These are often called ``pairs of pants". Their composition is a surface of genus one seen as a morphism from two circles
to two circles. The bottom of the figure indicates a ket-bra in this dimension in the form of a mapping from one circle to one circle as a composition of
a cobordism of a circle to the empty set and a cobordism from the empty set to a circle (circles bounding disks). As we go to higher dimensions the
structure of cobordisms becomes more interesting and more complicated. It is remarkable that there is so much structure in the lowest dimensions of
these categories.
$$ \picill2inby3.5in(F16) $$
\begin{center}
{\bf Figure 16 - Cobordisms of $1$-Manifolds are Surfaces}
\end{center}
\section{Braiding and Topological Quantum Field Theory}
The purpose of this section is to discuss in a very general way how braiding is related to topological quantum field theory.
In the section to follow, we will use the Temperley-Lieb recoupling theory to produce specific unitary representations of the Artin
braid group.
\bigbreak
The ideas in the subject
of topological quantum field theory (TQFT) are well expressed in the book \cite{Atiyah} by Michael Atiyah and the paper \cite{Witten} by Edward Witten.
Here is Atiyah's definition:
\bigbreak
\noindent {\bf Definition.} A TQFT in dimension $d$ is a functor $Z$ from the cobordism category $Cob[d]$ to the category $Vect$ of
vector spaces and linear mappings which assigns
\begin{enumerate}
\item a finite dimensional vector space $Z(\Sigma)$ to each compact, oriented $d$-dimensional manifold $\Sigma,$
\item a vector $Z(Y) \in Z(\Sigma)$ for each compact, oriented $(d + 1)$-dimensional manifold $Y$ with boundary $\Sigma.$
\item a linear mapping $Z(Y):Z(\Sigma_{1}) \longrightarrow Z(\Sigma_{2})$ when $Y$ is a $(d + 1)$-manifold that is a cobordism
between $\Sigma_{1}$ and $\Sigma_{2}$ (whence the boundary of $Y$ is the union of $\Sigma_{1}$ and $-\Sigma_{2}$).
\end{enumerate}
\noindent The functor satisfies the following axioms.
\begin{enumerate}
\item $Z(\Sigma^{\dagger}) = Z(\Sigma)^{\dagger}$ where $\Sigma^{\dagger}$ denotes the manifold $\Sigma$ with the opposite orientation and
$Z(\Sigma)^{\dagger}$ is the dual vector space.
\item $Z(\Sigma_{1} \cup \Sigma_{2}) = Z(\Sigma_{1}) \otimes Z(\Sigma_{2})$ where $\cup$ denotes disjoint union.
\item If $Y_{1}$ is a cobordism from $\Sigma_{1}$ to $\Sigma_{2},$ $Y_{2}$ is a cobordism from $\Sigma_{2}$ to $\Sigma_{3}$ and
$Y$ is the composite cobordism $Y = Y_{1} \cup_{\Sigma_{2}} Y_{2},$ then
$$Z(Y) = Z(Y_{2}) \circ Z(Y_{1}): Z(\Sigma_{1}) \longrightarrow Z(\Sigma_{3})$$ is the composite of the corresponding linear mappings.
\item $Z(\phi) = C$ ($C$ denotes the complex numbers) for the empty manifold $\phi.$
\item With $\Sigma \times I$ (where $I$ denotes the unit interval) denoting the identity cobordism from $\Sigma$ to $\Sigma,$
$Z(\Sigma \times I)$ is the identity mapping on $Z(\Sigma).$
\end{enumerate}
Note that, in this view a TQFT is basically a functor from the cobordism categories defined in the last section to Vector Spaces
over the complex numbers. We have already seen that in the lowest dimensional case of cobordisms of zero-dimensional manifolds, this gives
rise to a rich structure related to quantum mechanics and quantum information theory. The remarkable fact is that the case of three dimensions
is also related to quantum theory, and to the lower-dimensional versions of the TQFT. This gives a significant way to think about three-manifold
invariants in terms of lower dimensional patterns of interaction. Here follows a brief description.
\bigbreak
Regard the three-manifold as a union of two handlebodies with boundary an orientable surface $S_{g}$ of genus $g.$ The surface is divided up into trinions as
illustrated in Figure 17. A {\it trinion} is a surface with boundary that is topologically equivalent to a sphere with three punctures. The trinion
constitutes, in itself, a cobordism in $Cob[1]$ from two circles to a single circle, or from a single circle to two circles, or from three circles to the
empty set. The {\it pattern} of a trinion is a trivalent graphical vertex, as illustrated in Figure 17. In that figure we show the trivalent vertex
graphical pattern drawn on the surface of the trinion, forming a graphical pattern for this cobordism. It should be clear from this figure that any
cobordism in $Cob[1]$ can be diagrammed by a trivalent graph, so that the category of trivalent graphs (as morphisms from ordered sets of points to ordered
sets of points) has an image in the category of cobordisms of compact one-dimensional manifolds. Given a surface $S$ (possibly with boundary) and a
decomposition of that surface into trinions, we associate to it a trivalent graph
$G(S,t)$ where $t$ denotes the particular trinion decomposition.
\bigbreak
In this correspondence, distinct graphs can correspond to topologically identical cobordisms of circles, as
illustrated in Figure 19. It turns out that the graphical structure is important, and that it is extraordinarily useful to articulate transformations
between the graphs that correspond to the homeomorphisms of the corresponding surfaces. The beginning of this structure is indicated in the bottom part of
Figure 19.
\bigbreak
In Figure 20 we illustrate another feature of the relationship between surfaces and graphs. At the top of the figure we indicate a
homeomorphism between a twisted trinion and a standard trinion. The homeomorphism leaves the ends of the trinion (denoted $A$,$B$ and $C$) fixed while undoing
the internal twist. This can be accomplished as an ambient isotopy of the embeddings in three dimensional space that are indicated by this figure.
Below this isotopy we indicate the corresponding graphs. In the graph category there will have to be a transformation between a braided and an unbraided
trivalent vertex that corresponds to this homeomorphism.
\bigbreak
$$ \picill3inby4in(F17) $$
\begin{center}
{\bf Figure 17 - Decomposition of a Surface into Trinions}
\end{center}
$$ \picill3inby2.5in(F18) $$
\begin{center}
{\bf Figure 18 - Trivalent Vectors}
\end{center}
$$ \picill2.5inby3.5in(F19) $$
\begin{center}
{\bf Figure 19 - Trinion Associativity}
\end{center}
$$ \picill2.5inby3.5in(F20) $$
\begin{center}
{\bf Figure 20 - Tube Twist}
\end{center}
From the point of view that we shall take in this paper, the key to the mathematical structure of three-dimensional TQFT lies in the trivalent graphs,
including the braiding of grapical arcs. We can think of these braided graphs as representing idealized Feynman diagrams,
with the trivalent vertex as the basic particle interaction vertex, and the braiding of lines representing an interaction resulting from an exchange of
particles. In this view one thinks of the particles as moving in a two-dimensional medium, and the diagrams of braiding and trivalent vertex interactions
as indications of the temporal events in the system, with time indicated in the direction of the morphisms in the category. Adding such graphs to the category
of knots and links is an extension of the {\it tangle category} where one has already extended braids to allow any embedding of strands and circles that
start in $n$ ordered points and end in $m$ ordered points. The tangle category includes the braid category and the Temperley-Lieb category. These are
both included in the category of braided trivalent graphs.
\bigbreak
Thinking of the basic trivalent vertex as the form of a particle interaction there will be a set of particle states that can label each arc incident to
the vertex. In Figure 18 we illustrate the labeling of the trivalent graphs by such particle states. In the next two sections we will see specific rules
for labeling such states. Here it suffices to note that there will be some restrictions on these labels, so that a trivalent vertex has a set of possible
labelings. Similarly, any trivalent graph will have a set of admissible labelings. These are the possible particle processes that this graph can support.
We take the set of admissible labelings of a given graph $G$ as a basis for a vector space $V(G)$ over the complex numbers. This vector space is the space
of {\it processes} associated with the graph $G.$ Given a surface $S$ and a decomposition $t$ of the surface into trinions, we have the associated
graph $G(S,t)$ and hence a vector space of processes $V(G(S,t))$. It is desirable to have this vector space independent of the particular decomposition
into trinions. If this can be accomplished, then the set of vector spaces and linear mappings associated to the surfaces can consitute a functor from the
category of cobordisms of one-manifolds to vector spaces, and hence gives rise to a one-dimensional topological quantum field theory. To this end we need
some properties of the particle interactions that will be described below.
\bigbreak
A {\it spin network} is, by definition, a labeled trivalent graph in a category of graphs that satisfy the properties outlined in the previous
paragraph. We shall detail the requirements below.
\bigbreak
The simplest case of this idea is C. N. Yang's original interpretation of the Yang-Baxter Equation \cite{Yang}. Yang articulated a
quantum field theory in one dimension of space and one dimension of time in which the $R$-matrix gives the
scattering amplitudes for an interaction of two particles whose (let us say) spins correspond to the matrix indices, so that
$R^{cd}_{ab}$ is the amplitude for particles of spin $a$ and spin $b$ to interact and produce particles of spin $c$ and $d.$ Since these interactions are
between particles in a line, one takes the convention that the particle with spin
$a$ is to the left of the particle with spin $b,$ and the particle with spin $c$ is to the left of the particle with spin $d.$
If one follows the concatenation of such interactions, then there is an underlying permutation that is obtained
by following strands from the bottom to the top of the diagram (thinking of time as moving up the page). Yang designed the
Yang-Baxter equation for $R$ so that {\em the amplitudes for a composite process depend only on the underlying permutation corresponding to the
process and not on the individual sequences of interactions.}
\bigbreak
In taking over the Yang-Baxter equation for topological purposes, we can use the same interpretation, but think of the diagrams with
their under- and over-crossings as modeling events in a spacetime with two dimensions of space and one dimension of time. The extra
spatial dimension is taken in displacing the woven strands perpendicular to the page, and allows us to use braiding operators $R$ and
$R^{-1}$ as scattering matrices. Taking this picture to heart, one can add other particle properties to the idealized theory. In
particular one can add fusion and creation vertices where in fusion two particles interact to become a single particle and in creation
one particle changes (decays) into two particles. These are the trivalent vertices discussed above. Matrix elements corresponding to trivalent vertices can
represent these interactions. See Figure 21.
\bigbreak
$$ \picill3inby1in(F21) $$
\begin{center}
{\bf Figure 21 - Creation and Fusion}
\end{center}
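\noindent The braiding operator coming from the bracket smoothings gives one concrete solution that the reader can test directly. In the sketch below (written for this survey), $R = A\,I + A^{-1}U$ acts on $V \otimes V$ with $V$ two-dimensional, $U$ is a standard matrix form of the Temperley-Lieb element on two strands, and the braided form of the Yang-Baxter equation illustrated in Figure 22 is checked numerically.
\begin{verbatim}
import numpy as np

A = np.exp(0.37j)                       # any nonzero value of A will do
U = np.array([[0, 0,      0,      0],
              [0, -A**-2, 1,      0],
              [0, 1,      -A**2,  0],
              [0, 0,      0,      0]], dtype=complex)
delta = -A**2 - A**-2
assert np.allclose(U @ U, delta * U)    # Temperley-Lieb relation on two strands

R = A * np.eye(4) + (1/A) * U           # braiding operator of bracket type
I2 = np.eye(2)
R1 = np.kron(R, I2)                     # R on strands 1, 2 of three strands
R2 = np.kron(I2, R)                     # R on strands 2, 3
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)   # (braided) Yang-Baxter equation
\end{verbatim}
\bigbreak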
Once one introduces trivalent vertices for fusion and creation, there is the question how these interactions will behave in respect to
the braiding operators. There will be a matrix expression for the compositions of braiding and fusion or creation as indicated in Figure
22. Here we will restrict ourselves to showing the diagrammatics with the intent of giving the reader a flavor of these
structures. It is natural to assume that braiding intertwines with creation as shown in Figure 24 (similarly with fusion). This
intertwining identity is clearly the sort of thing that a topologist will love, since it indicates that the diagrams can be interpreted
as embeddings of graphs in three-dimensional space, and it fits with our interpretation of the vertices in terms of trinions. Figure 22 illustrates the
Yang-Baxter equation. The intertwining identity is an assumption like the
Yang-Baxter equation itself, that simplifies the mathematical structure of the model.
\bigbreak
$$ \picill3inby1.5in(F22) $$
\begin{center}
{\bf Figure 22 - Yang-Baxter Equation}
\end{center}
$$ \picill3inby1.5in(F23) $$
\begin{center}
{\bf Figure 23 - Braiding}
\end{center}
$$ \picill3inby1.5in(F24) $$
\begin{center}
{\bf Figure 24 - Intertwining}
\end{center}
It is to be expected that there will be an operator that expresses the recoupling of vertex interactions as shown in Figure 25 and labeled
by $Q.$ This corresponds to the associativity at the level of trinion combinations shown in Figure 19. The actual formalism of such an operator will
parallel the mathematics of recoupling for angular momentum. See for example
\cite{KL}. If one just considers the abstract structure of recoupling then one sees that for trees with four branches (each with a single
root) there is a cycle of length five as shown in Figure 26. One can start with any pattern of three vertex interactions and
go through a sequence of five recouplings that bring one back to the same tree from which one started. {\em It is a natural simplifying
axiom to assume that this composition is the identity mapping.} This axiom is called the {\em pentagon identity}.
\bigbreak
$$ \picill3inby1.5in(F25) $$
\begin{center}
{\bf Figure 25 - Recoupling}
\end{center}
$$ \picill3inby3in(F26) $$
\begin{center}
{\bf Figure 26 - Pentagon Identity}
\end{center}
Finally there is a hexagonal cycle of interactions between braiding, recoupling and the intertwining identity as shown in Figure 27.
One says that the interactions satisfy the {\em hexagon identity} if this composition is the identity.
\bigbreak
$$ \picill3inby4in(F27) $$
\begin{center}
{\bf Figure 27 - Hexagon Identity}
\end{center}
A {\em graphical three-dimensional topological quantum field theory} is an algebra of interactions that satisfies the Yang-Baxter equation, the
intertwining identity, the pentagon identity and the hexagon identity. There is not room in this summary to detail the way
that these properties fit into the topology of knots and three-dimensional manifolds, but a sketch is in order. For the case of topological
quantum field theory related to the group $SU(2)$ there is a construction based entirely on the combinatorial topology of the bracket polynomial
(see Sections 7, 9 and 10 of this article). See \cite{KP,KL} for more information on this approach.
\bigbreak
Now return to Figure 17 where we
illustrate trinions, shown in relation to a trivalent vertex, and a surface of genus three that is decomposed into four trinions. It
turns out that the vector space
$V(S_g) = V(G(S_{g},t))$ associated to a surface with a trinion decomposition $t$ as described above, and defined in terms of the graphical topological quantum field
theory, does not depend upon the choice of trinion decomposition. This independence is guaranteed by
the braiding, hexagon and pentagon identities. One can then associate a well-defined vector $|M \rangle$ in $V(S_{g})$ whenever $M$
is a three manifold whose boundary is $S_{g}.$ Furthermore, if a closed three-manifold $M^{3}$ is decomposed along a surface $S_{g}$ into
the union of $M_{-}$ and $M_{+}$
where these parts are otherwise disjoint three-manifolds with boundary $S_{g},$ then the inner product $I(M) = \langle M_{-} | M_{+} \rangle$ is, up to
normalization, an invariant of the three-manifold $M^{3}.$ With the definition of graphical topological quantum field theory given above, knots and links can
be incorporated as well, so that one obtains a source of invariants $I(M^{3},K)$ of knots and links in orientable three-manifolds. Here we see the uses of
the relationships that occur in the higher dimensional cobordism categories, as described in the previous section.
\bigbreak
\noindent The invariant $I(M^{3},K)$ can be formally compared with the Witten \cite{Witten} integral $$Z(M^{3},K) = \int DAe^{(ik/4\pi)S(M,A)} W_{K}(A).$$ It
can be shown that up to limits of the heuristics, $Z(M,K)$ and $I(M^{3},K)$ are essentially equivalent for appropriate choice of gauge group and
corresponding spin networks.
\bigbreak
By these graphical reformulations, a three-dimensional $TQFT$ is, at base, a highly simplified theory of point particle interactions in $2+1$
dimensional spacetime. It can be used to articulate invariants of knots and links and invariants of three manifolds. The reader
interested in the
$SU(2)$ case of this structure and its implications for invariants of knots and three manifolds can consult \cite{KL,KP,Kohno,Crane,MS}. One expects that
physical situations involving
$2+1$ spacetime will be approximated by such an idealized theory. There are also applications to $3 + 1$ quantum gravity \cite{ASR,AL,KaufLiko}.
Aspects of the quantum Hall effect may be related to topological quantum field theory
\cite{Wilczek}. One can study physics in two-dimensional space where the braiding of particles or collective excitations leads to non-trivial
representations of the Artin braid group. Such particles are called {\it Anyons}. Such $TQFT$ models would describe applicable physics. One can
think about applications of anyons to quantum computing along the lines of the topological models described here.
\bigbreak
$$ \picill3inby2.5in(F28) $$
\begin{center}
{\bf Figure 28 - A More Complex Braiding Operator}
\end{center}
\bigbreak
A key point in the application of $TQFT$ to quantum information theory is contained in the
structure illustrated in Figure 28. There we show a more complex braiding operator, based on the composition of recoupling with the
elementary braiding at a vertex. (This structure is implicit in the Hexagon identity of Figure 27.) The new braiding operator is a
source of unitary representations of the braid group in situations (which exist mathematically) where the recoupling transformations are themselves
unitary. This kind of pattern is utilized in the work of Freedman and collaborators \cite{F,FR98,FLZ,Freedman5,Freedman6}
and in the case of classical angular momentum formalism has been dubbed a ``spin-network quantum simulator'' by Rasetti and collaborators
\cite{MR,MR2}. In the next section we show how certain natural deformations \cite{KL} of Penrose spin networks \cite{Penrose} can be used
to produce these unitary representations of the Artin braid group and the corresponding models for anyonic topological quantum computation.
\bigbreak
\section {Spin Networks and Temperley-Lieb Recoupling Theory}
In this section we discuss a combinatorial construction for spin networks that generalizes the original construction of Roger Penrose.
The result of this generalization is a structure that satisfies all the properties of a graphical $TQFT$ as described in the previous section, and
specializes to classical angular momentum recoupling theory in the limit of its basic variable. The construction is based on the properties of
the bracket polynomial (as already described in Section 4). A complete description of this theory can be found in the book ``Temperley-Lieb
Recoupling Theory and Invariants of Three-Manifolds" by Kauffman and Lins \cite{KL}.
\bigbreak
The ``$q$-deformed" spin networks that we construct here are based on the bracket polynomial relation. View Figure 29 and Figure 30.
\bigbreak
$$ \picill5inby4in(F29) $$
\begin{center}
{\bf Figure 29 - Basic Projectors }
\end{center}
\bigbreak
$$ \picill5inby3in(F30) $$
\begin{center}
{\bf Figure 30 - Two Strand Projector}
\end{center}
\bigbreak
$$ \picill5inby3in(F31) $$
\begin{center}
{\bf Figure 31 - Vertex}
\end{center}
\bigbreak
In Figure 29 we indicate how the basic projector (symmetrizer, Jones-Wenzl projector) $$ \picill.25inby.25in(symm) $$
\bigbreak
\noindent is constructed on the basis of the
bracket polynomial expansion. In this technology a symmetrizer is a sum of tangles on $n$ strands (for a chosen integer $n$). The tangles are made by
summing over braid lifts of permutations in the symmetric group on $n$ letters, as indicated in Figure 29. Each elementary braid is then expanded by the
bracket polynomial relation as indicated in Figure 29 so that the resulting sum consists of flat tangles without any crossings (these can be viewed as
elements in the Temperley-Lieb algebra). The projectors have the property that the concatenation of a projector with itself is just that projector, and
if you tie two lines on the top or the bottom of a projector together, then the evaluation is zero. This general definition of projectors is very useful for
this theory. The two-strand projector is shown in Figure 30. Here the formula for that projector
is particularly simple. It is the sum of two parallel arcs and two turn-around arcs (with coefficient $-1/d$, where $d = -A^{2} - A^{-2}$ is the loop
value for the bracket polynomial). Figure 30 also shows the recursion formula for the general projector. This recursion formula is due to Jones and Wenzl and
the projector in this form, developed as a sum in the Temperley--Lieb algebra (see Section 5 of this paper), is usually known as the {\em Jones--Wenzl
projector}.
\bigbreak
The projectors are combinatorial analogs of irreducible representations of a group (the original spin nets were based
on $SU(2)$ and these deformed nets are based on the quantum group corresponding to $SU(2)$). As such the reader can think of them as ``particles''. The
interactions of these particles are governed by how they can be tied together into three-vertices. See Figure 31.
In Figure 31 we show how to tie three projectors, of $a,b,c$ strands respectively, together to form a three-vertex. In order to accomplish this
interaction, we must share lines between them as shown in that Figure so that there are non-negative integers $i,j,k$ so that
$a = i + j, b = j + k, c = i + k.$ This is equivalent to the condition that $a + b + c$ is even and that the sum of any two of $a,b,c$ is
greater than or equal to the third. For example $a + b \ge c.$ One can think of the vertex as a possible particle interaction where
$[a]$ and $[b]$ interact to produce $[c].$ That is, any two of the legs of the vertex can be regarded as interacting to produce the third leg.
\bigbreak
There is a basic orthogonality of three vertices as shown in Figure 32. Here if we tie two three-vertices together
so that they form a ``bubble" in the middle, then the resulting network with labels $a$ and $b$ on its free ends
is a multiple of an $a$-line (meaning a line with an $a$-projector on it) or zero (if $a$ is not equal to $b$).
The multiple is compatible with the results of closing the diagram in the equation of Figure 32 so the two free
ends are identified with one another. On closure, as shown in the Figure, the left hand side of the equation becomes
a Theta graph and the right hand side becomes a multiple of a ``delta" where $\Delta_{a}$ denotes the bracket
polynomial evaluation of the $a$-strand loop with a projector on it. The $\Theta(a,b,c)$ denotes the bracket
evaluation of a theta graph made from three trivalent vertices and labeled with $a, b, c$ on its edges.
\bigbreak
There is a recoupling formula in this theory in the form shown in Figure 33.
Here there are ``$6$-j symbols", recoupling coefficients that can be expressed, as shown in
Figure 35, in terms of tetrahedral graph evaluations and theta graph evaluations. The tetrahedral graph is shown in
Figure 34. One derives the formulas for
these coefficients directly from the orthogonality relations for the trivalent vertices by
closing the left hand side of the recoupling formula and using orthogonality to evaluate the right hand side.
This is illustrated in Figure 35.
$$ \picill5inby5in(F32) $$
\begin{center}
{\bf Figure 32 - Orthogonality of Trivalent Vertices}
\end{center}
\bigbreak
$$ \picill5inby1.2in(F33) $$
\begin{center}
{\bf Figure 33 - Recoupling Formula}
\end{center}
\bigbreak
$$ \picill5inby1in(F34) $$
\begin{center}
{\bf Figure 34 - Tetrahedron Network}
\end{center}
\bigbreak
$$ \picill3inby3.5in(F35) $$
\begin{center}
{\bf Figure 35 - Tetrahedron Formula for Recoupling Coefficients}
\end{center}
\bigbreak
Finally, there is the braiding relation, as illustrated in Figure 36.
$$ \picill3inby3in(F36) $$
\begin{center}
{\bf Figure 36 - Local Braiding Formula}
\end{center}
\bigbreak
With the braiding relation in place, this $q$-deformed spin network theory satisfies the pentagon, hexagon and braiding naturality identities
needed for a topological quantum field theory. All these identities follow naturally from the basic underlying topological construction of the
bracket polynomial. One can apply the theory to many different situations.
\subsection{Evaluations}
In this section we discuss the structure of the evaluations for $\Delta_{n}$ and the theta and tetrahedral networks. We refer to
\cite{KL} for the details behind these formulas. Recall that $\Delta_{n}$ is the bracket evaluation of the closure of the $n$-strand
projector, as illustrated in Figure 32. For the bracket variable $A,$ one finds that
$$\Delta_{n} = (-1)^{n}\frac{A^{2n+2} - A^{-2n-2}}{A^{2} - A^{-2}}.$$
One sometimes writes the {\it quantum integer}
$$[n] = (-1)^{n-1}\Delta_{n-1} = \frac{A^{2n} - A^{-2n}}{A^{2} - A^{-2}}.$$
If $$A=e^{i\pi/2r}$$ where $r$ is a positive integer, then
$$\Delta_{n} = (-1)^{n}\frac{\sin((n+1)\pi/r)}{\sin(\pi/r)}.$$
Here the corresponding quantum integer is
$$[n] = \frac{\sin(n\pi/r)}{\sin(\pi/r)}.$$
Note that $[n+1]$ is a positive real number for $n=0,1,2,\ldots,r-2$ and that $[r]=0.$
\bigbreak
The evaluation of the theta net is expressed in terms of quantum integers by the formula
$$\Theta(a,b,c) = (-1)^{m + n + p}\frac{[m+n+p+1]![n]![m]![p]!}{[m+n]![n+p]![p+m]!}$$
where $$a=m+p, b=m+n, c=n+p.$$ Note that $$(a+b+c)/2 = m + n + p.$$
\bigbreak
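As an illustrative aside (not part of the original development), the following short Python sketch evaluates the quantum integers, the loop values $\Delta_{n}$, and the theta nets numerically at the root of unity $A = e^{i\pi/2r}$; the helper names ({\tt qint}, {\tt qfact}, {\tt delta}, {\tt theta}) are ad hoc for this sketch.

\begin{verbatim}
# Numerical sanity check of the evaluations above at A = exp(i*pi/2r).
import cmath, math

r = 7
A = cmath.exp(1j * math.pi / (2 * r))

def qint(n):                 # quantum integer [n] = sin(n*pi/r)/sin(pi/r)
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

def delta(n):                # Delta_n = (-1)^n [n+1]
    return (-1) ** n * qint(n + 1)

def qfact(n):                # quantum factorial [n]! = [1][2]...[n]
    out = 1.0
    for k in range(1, n + 1):
        out *= qint(k)
    return out

def theta(a, b, c):          # theta net; assumes a+b+c even and triangle inequalities
    m, n, p = (a + b - c) // 2, (b + c - a) // 2, (c + a - b) // 2
    return ((-1) ** (m + n + p) * qfact(m + n + p + 1) * qfact(m) * qfact(n) * qfact(p)
            / (qfact(m + n) * qfact(n + p) * qfact(p + m)))

# Delta_n from the A-formula agrees with (-1)^n [n+1], and [r] vanishes:
dA = lambda n: ((-1) ** n * (A ** (2*n+2) - A ** (-2*n-2)) / (A ** 2 - A ** (-2))).real
print(all(abs(dA(n) - delta(n)) < 1e-12 for n in range(2 * r)), abs(qint(r)) < 1e-12)
print(theta(2, 2, 2))
\end{verbatim}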
When $A=e^{i\pi/2r},$ the recoupling theory becomes finite with the restriction that only three-vertices
(labeled with $a,b,c$) are {\it admissible} when $a + b + c \le 2r-4.$ All the summations in the
formulas for recoupling are restricted to admissible triples of this form.
\bigbreak
\subsection{Symmetry and Unitarity}
The formula for the recoupling coefficients given in Figure 35 has less symmetry than is actually inherent in the structure of the situation.
By multiplying all the vertices by an appropriate factor, we can reconfigure the formulas in this theory so that the revised recoupling transformation is
orthogonal, in the sense that its transpose is equal to its inverse. This is a very useful fact. It means that when the resulting matrices are real, then
the recoupling transformations are unitary. We shall see particular applications of this viewpoint later in the paper.
\bigbreak
Figure 37 illustrates this modification of the three-vertex. Let $Vert[a,b,c]$ denote the original $3$-vertex of the Temperley-Lieb recoupling theory.
Let $ModVert[a,b,c]$ denote the modified vertex. Then we have the formula
$$ModVert[a,b,c] = \frac{\sqrt{\sqrt{\Delta_{a} \Delta_{b} \Delta_{c}}}}{ \sqrt{\Theta(a,b,c)}}\,\, Vert[a,b,c].$$
\noindent {\bf Lemma.} {\it For the bracket evaluation at the root of unity $A = e^{i\pi/2r}$ the factor
$$f(a,b,c) = \frac{\sqrt{\sqrt{\Delta_{a} \Delta_{b} \Delta_{c}}}}{ \sqrt{\Theta(a,b,c)}}$$
is real, and can be taken to be a positive real number for $(a,b,c)$ admissible (i.e. $a + b + c \le 2r -4$).}
\bigbreak
\noindent {\bf Proof.} By the results from the previous subsection,
$$\Theta(a,b,c) = (-1)^{(a+b+c)/2}\hat{\Theta}(a,b,c)$$ where $\hat{\Theta}(a,b,c)$ is positive real, and
$$\Delta_{a} \Delta_{b} \Delta_{c} = (-1)^{(a+b+c)} [a+1][b+1][c+1]$$ where the quantum integers in this formula can be taken to be
positive real. It follows from this that
$$f(a,b,c) = \sqrt{\frac{\sqrt{[a+1][b+1][c+1]}}{\hat{\Theta}(a,b,c)}},$$ showing that this factor can be taken to be positive real.
$\Box$
\bigbreak
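As a numerical illustration of the Lemma (again an aside, with ad hoc helper names), one can check directly that $\hat{\Theta}(a,b,c) = (-1)^{(a+b+c)/2}\Theta(a,b,c)$ is positive for admissible triples at a root of unity, so that $f(a,b,c)$ is indeed a positive real number:

\begin{verbatim}
# Check positivity of hat{Theta} and of f(a,b,c) for admissible triples at r = 8.
import math

r = 8

def qint(n):
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

def qfact(n):
    out = 1.0
    for k in range(1, n + 1):
        out *= qint(k)
    return out

def theta(a, b, c):
    m, n, p = (a + b - c) // 2, (b + c - a) // 2, (c + a - b) // 2
    return ((-1) ** (m + n + p) * qfact(m + n + p + 1) * qfact(m) * qfact(n) * qfact(p)
            / (qfact(m + n) * qfact(n + p) * qfact(p + m)))

ok = True
for a in range(2 * r):
    for b in range(2 * r):
        for c in range(2 * r):
            admissible = ((a + b + c) % 2 == 0 and a <= b + c and b <= a + c
                          and c <= a + b and a + b + c <= 2 * r - 4)
            if admissible:
                hat = (-1) ** ((a + b + c) // 2) * theta(a, b, c)
                if hat <= 0:
                    ok = False
                    continue
                f = math.sqrt(math.sqrt(qint(a + 1) * qint(b + 1) * qint(c + 1)) / hat)
                ok = ok and f > 0
print(ok)    # True: hat{Theta} > 0 and f(a,b,c) > 0 on all admissible triples
\end{verbatim}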
In Figure 38 we show how this modification of the vertex affects the non-zero term of the orthogonality of trivalent
vertices (compare with Figure 32). We refer to this as the ``modified bubble identity." The coefficient in the modified bubble identity is
$$\sqrt{ \frac{\Delta_{b}\Delta_{c}}{\Delta_{a}} } = (-1)^{(b+c-a)/2} \sqrt{\frac{[b+1][c+1]}{[a+1]}}$$
where $(a,b,c)$ form an admissible triple. In particular $b+c-a$ is even and hence this factor can be taken to be real.
\bigbreak
We rewrite the recoupling formula in this new basis and emphasize
that the recoupling coefficients can be seen (for fixed external labels $a,b,c,d$) as a matrix transforming the horizontal ``double-$Y$" basis
to a vertically
disposed double-$Y$ basis. In Figures 39, 40 and 41 we have shown the form of this transformation, using the matrix notation
$$M[a,b,c,d]_{ij}$$ for the modified recoupling coefficients. In Figure 39 we derive an explicit formula for these matrix elements. The proof of this
formula follows directly from trivalent vertex orthogonality (see Figures 32 and 35), and is given in Figure 39. The result shown in Figure 39 and
Figure 40 is the following formula for the recoupling matrix elements.
$$M[a,b,c,d]_{ij} = ModTet
\left( \begin{array}{ccc}
a & b & i \\
c & d & j \\
\end{array} \right)/\sqrt{\Delta_{a}\Delta_{b}\Delta_{c}\Delta_{d}}$$
where $\sqrt{\Delta_{a}\Delta_{b}\Delta_{c}\Delta_{d}}$ is short-hand for the product
$$\sqrt{ \frac{\Delta_{a}\Delta_{b}}{\Delta_{j}} }\sqrt{ \frac{\Delta_{c}\Delta_{d}}{\Delta_{j}} } \Delta_{j}$$
$$= (-1)^{(a+b-j)/2}(-1)^{(c+d-j)/2} (-1)^{j} \sqrt{ \frac{[a+1][b+1]}{[j+1]}}\sqrt{ \frac{[c+1][d+1]}{[j+1]}} [j+1]$$
$$ = (-1)^{(a+b+c+d)/2}\sqrt{[a+1][b+1][c+1][d+1]}$$
In this form, since
$(a,b,j)$ and $(c,d,j)$ are admissible triples, we see that this coefficient can be taken to be real, and its value is
independent of the choice of $i$ and $j.$
The matrix $M[a,b,c,d]$ is real-valued.
\bigbreak
\noindent It follows from Figure 33 (turn the diagrams by ninety degrees) that
$$M[a,b,c,d]^{-1} = M[b,d,a,c].$$
In Figure 42 we illustrate the formula
$$M[a,b,c,d]^{T} = M[b,d,a,c].$$ It follows from this formula that
$$M[a,b,c,d]^{T} = M[a,b,c,d]^{-1}.$$ {\it Hence $M[a,b,c,d]$ is an orthogonal, real-valued matrix.}
$$ \picill3inby2in(F37) $$
\begin{center}
{\bf Figure 37 - Modified Three Vertex}
\end{center}
\bigbreak
$$ \picill3inby3in(F38) $$
\begin{center}
{\bf Figure 38 - Modified Bubble Identity}
\end{center}
\bigbreak
$$ \picill3inby5in(F39) $$
\begin{center}
{\bf Figure 39 - Derivation of Modified Recoupling Coefficients}
\end{center}
\bigbreak
$$ \picill3inby2.5in(F40) $$
\begin{center}
{\bf Figure 40 - Modified Recoupling Formula}
\end{center}
\bigbreak
$$ \picill3inby2in(F41) $$
\begin{center}
{\bf Figure 41 - Modified Recoupling Matrix}
\end{center}
\bigbreak
$$ \picill3inby3in(F42) $$
\begin{center}
{\bf Figure 42 - Modified Matrix Transpose}
\end{center}
\bigbreak
\noindent {\bf Theorem 2.} {\it In the Temperley-Lieb theory we obtain unitary (in fact real orthogonal) recoupling transformations when the bracket
variable $A$ has the form $A = e^{i\pi/2r}$ for $r$ a positive integer. Thus we obtain families of unitary representations of the Artin braid group
from the recoupling theory at these roots of unity.}
\bigbreak
\noindent {\bf Proof.} The proof is given in the discussion above.
$\Box$
\bigbreak
In Section 9 we shall show explicitly how these methods work in the case of the Fibonacci model where $A = e^{3i\pi/5}$.
\bigbreak
\section {Fibonacci Particles}
In this section and the next we detail how the Fibonacci model for anyonic quantum computing \cite{Kitaev,Preskill} can be constructed by using a version of
the two-stranded bracket polynomial and a generalization of Penrose spin networks. This is a fragment of the Temperley-Lieb recoupling theory \cite{KL}. We
already gave in the preceding sections a general discussion of the theory of spin networks and their relationship with quantum computing.
\bigbreak
The Fibonacci model is a $TQFT$ that is based on a single ``particle" with two states that we shall call the {\it marked state} and the
{\it unmarked state}. The particle in the marked state can interact with itself either to produce a single particle in the marked state, or
to produce a single particle in the unmarked state. The particle in the unmarked state has no influence in interactions (an unmarked state interacting
with any state $S$ yields that state $S$).
One way to indicate these two interactions symbolically is to use a box for the marked state and a blank space for the unmarked state.
Then one has two modes of interaction of a box with itself:
\begin{enumerate}
\item Adjacency: $\fbox{~} ~~ \fbox{~}$
\smallbreak
\noindent and
\item Nesting: $\fbox{ \fbox{~~} }.$
\end{enumerate}
\noindent With this convention we take the adjacency interaction to yield a single box, and the nesting interaction to produce nothing:
$$\fbox{~} ~~ \fbox{~} = \fbox{~}$$
$$\fbox{ \fbox{~~} } = $$
\noindent We take the notational opportunity to denote nothing by an asterisk (*). The syntactical rule for operating the asterisk is that
the asterisk is a stand-in for no mark at all and it can be erased or placed wherever it is convenient to do so.
Thus $$\fbox{ \fbox{~~} } = *. $$
$$ \picill3inby1.3in(F43) $$
\begin{center}
{\bf Figure 43 - Fibonacci Particle Interaction}
\end{center}
We shall make a recoupling theory based on this particle, but it is worth noting some of its purely combinatorial properties first.
The arithmetic of combining boxes (standing for acts of distinction) according to these rules has been studied and formalized in
\cite{LOF} and correlated with Boolean algebra and classical logic. Here {\em within} and {\em next to} are ways to refer to the two sides delineated by
the given distinction. From this point of view, there are two modes of relationship (adjacency and nesting) that arise at once in the presence of a
distinction.
\bigbreak
$$ \picill3inby3.5in(F44) $$
\begin{center}
{\bf Figure 44 - Fibonacci Trees}
\end{center}
\bigbreak
From here on we shall denote the Fibonacci particle by the letter $P.$
Thus the two possible interactions of $P$ with itself are as follows.
\begin{enumerate}
\item $P,P \longrightarrow *$
\item $P,P \longrightarrow P$
\end{enumerate}
\noindent In Figure 43 we indicate in small tree diagrams the two possible interactions of the particle $P$ with itself.
In the first interaction the particle vanishes, producing the asterisk. In the second interaction
a single copy of $P$ is produced. These are the two basic actions of a single distinction relative to itself, and they
constitute our formalism for this very elementary particle.
\bigbreak
In Figure 44, we have indicated the different results of particle
processes where we begin with a left-associated tree structure with three branches, all marked and then four branches all marked.
In each case we demand that the particles interact successively to produce an unmarked particle in the end, at the root of the tree.
More generally one can consider a left-associated tree with $n$ upward branches and one root. Let $T(a_1,a_2, \cdots , a_n : b)$ denote such
a tree with particle labels $a_1, \cdots, a_n$ on the top and root label $b$ at the bottom of the tree. We consider all possible processes
(sequences of particle interactions) that start with the labels at the top of the tree, and end with the labels at the bottom of the tree.
Each such sequence is regarded as a basis vector in a complex vector space
$$V^{a_1,a_2, \cdots , a_n}_{b}$$
associated with the tree. In the case where all the labels are marked at the top and the bottom label is unmarked, we shall denote this tree
by $$V^{111 \cdots 11}_{0} = V^{(n)}_{0}$$ where $n$ denotes the number of upward branches in the tree. We see from Figure 44 that the dimension
of $V^{(3)}_{0}$ is $1,$ and that $$dim(V^{(4)}_{0}) = 2.$$ This means that $V^{(4)}_{0}$ is a natural candidate in this context for the two-qubit
space.
\bigbreak
Given the tree $T(1,1,1,\cdots, 1:0)$ ($n$ marked states at the top, an unmarked state at the bottom), a process basis vector in $V^{(n)}_{0}$
is in direct correspondence with a string of boxes and asterisks ($1$'s and $0$'s) of length $n-2$ with no repeated asterisks and ending in a marked state.
See Figure 44 for an illustration of the simplest cases. It follows from this
that $$dim(V^{(n)}_{0}) = f_{n-2}$$ where $f_k$ denotes the $k$-th Fibonacci number:
$$f_0 = 1, f_1 = 1, f_2 = 2, f_3 = 3, f_4 = 5, f_5= 8, \cdots$$ where $$f_{n+2} = f_{n+1} + f_{n}.$$
The dimension formula for these spaces follows from the fact that there are $f_{n}$ sequences of length $n-1$ of marked and unmarked states with no
repetition of an unmarked state. This fact is illustrated in Figure 45.
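The dimension count can be confirmed by brute force; the following Python sketch (an aside, not part of the text) enumerates the strings of length $n-2$ with no repeated asterisks that end in a marked state and compares the count with $f_{n-2}$:

\begin{verbatim}
# Enumerate process labels for V^{(n)}_0 and compare with Fibonacci numbers.
from itertools import product

def fib(k):                  # f_0 = 1, f_1 = 1, f_2 = 2, f_3 = 3, ...
    a, b = 1, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def dim_V(n):                # strings over {0,1} of length n-2, no "00", ending in 1
    count = 0
    for s in product((0, 1), repeat=n - 2):
        no_repeat = all(not (s[i] == 0 and s[i + 1] == 0) for i in range(len(s) - 1))
        if no_repeat and s[-1] == 1:
            count += 1
    return count

print([(n, dim_V(n), fib(n - 2)) for n in range(3, 10)])
\end{verbatim}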
\bigbreak
$$ \picill3inby2.5in(F45) $$
\begin{center}
{\bf Figure 45 - Fibonacci Sequence}
\end{center}
\bigbreak
\section{The Fibonacci Recoupling Model}
We now show how to make a model for recoupling the Fibonacci particle by using the Temperley Lieb recoupling theory and the bracket polynomial.
Everything we do in this section will be based on the 2-projector, its properties and evaluations based on the bracket polynomial model for the Jones
polynomial. While we have outlined the general recoupling theory based on the bracket polynomial in earlier sections of this paper,
the present section is self-contained, using only basic information about the bracket polyonmial, and the essential properties of the
2-projector as shown in Figure 46. In this figure we state the definition of the 2-projector, list its two main properties (the operator is idempotent and
a self-attached strand yields a zero evaluation) and give diagrammatic proofs of these properties.
$$ \picill3inby3in(F46) $$
\begin{center}
{\bf Figure 46 - The 2-Projector}
\end{center}
\bigbreak
In Figure 47, we show the essence of the Temperley-Lieb recoupling model for the Fibonacci particle. The Fibonacci particle is, in this mathematical
model, identified with the 2-projector itself. As the reader can see from Figure 47, there are two basic interactions of the 2-projector with itself,
one giving a 2-projector, the other giving nothing. This is the pattern of self-interaction of the Fibonacci particle. There is a third possibility,
depicted in Figure 47, where two 2-projectors interact to produce a 4-projector. We could remark at the outset, that the 4-projector will be zero if we
choose the bracket polynomial variable $A = e^{3 i \pi/5}.$ Rather than start there, we will assume that the 4-projector is forbidden and deduce (below)
that the theory has to be at this root of unity.
$$ \picill3inby3in(F47) $$
\begin{center}
{\bf Figure 47 - Fibonacci Particle as 2-Projector}
\end{center}
\bigbreak
\noindent Note that in Figure 47 we have adopted a single strand notation for the particle interactions, with a solid strand corresponding to the
marked particle, a dotted strand (or nothing) corresponding to the unmarked particle. A dark vertex indicates either an interaction point, or it
may be used to indicate the single strand is shorthand for two ordinary strands. Remember that these are all shorthand expressions for underlying
bracket polynomial calculations.
\bigbreak
In Figures 48, 49, 50, 51, 52 and 53 we have provided complete diagrammatic calculations of all of the relevant small nets and evaluations that
are useful in the two-strand theory that is being used here. The reader may wish to skip directly to Figure 54 where we determine the
form of the recoupling coefficients for this theory. We will discuss the resulting algebra below.
\bigbreak
For the reader who does not want to skip the next collection of Figures, here is a guided tour. Figure 48 illustrates the three basic nets in the case of two
strands. These are the theta, delta and tetrahedron nets. In this Figure we have shown the decomposition on the theta and delta nets in terms of
2-projectors. The Tetrahedron net will be similarly decomposed in Figures 52 and 53. The theta net is denoted $\Theta,$ the delta by $\Delta,$ and the
tetrahedron by $T.$ In Figure 49 we illustrate how a pendant loop has a zero evaluation. In Figure 50 we use the identity in Figure 49 to show how
an interior loop (formed by two trivalent vertices) can be removed and replaced by a factor of $\Theta/\Delta.$ Note how, in this figure, line two proves
that one network is a multiple of the other, while line three determines the value of the multiple by closing both nets.
\bigbreak
\noindent Figure 51 illustrates the explicit calculation of the delta and theta nets. The figure begins with a calculation of the result of closing
a single strand of the 2-projector. The result is a single strand multiplied by $(\delta - 1/\delta)$ where $\delta = -A^2 - A^{-2},$ and $A$ is the bracket
polynomial parameter. We then find that $$\Delta = \delta^{2} - 1$$ and
$$\Theta = (\delta - 1/\delta)^{2} \delta - \Delta/\delta = (\delta -1/\delta)(\delta^{2} - 2).$$
\bigbreak
\noindent Figures 52 and 53 illustrate the calculation of the value of the tetrahedral network $T.$ The reader should note the first line of
Figure 52 where the tetrahedral net is translated into a pattern of 2-projectors, and simplified. The rest of these two figures are a diagrammatic
calculation, using the expansion formula for the 2-projector. At the end of Figure 53 we obtain the formula for the tetrahedron
$$T = (\delta - 1/\delta)^{2}(\delta^{2} - 2) - 2\Theta/\delta.$$
$$ \picill3inby3in(F48) $$
\begin{center}
{\bf Figure 48 - Theta, Delta and Tetrahedron}
\end{center}
\bigbreak
$$ \picill3inby3in(F49) $$
\begin{center}
{\bf Figure 49 - LoopEvaluation--1}
\end{center}
\bigbreak
$$ \picill3inby3.5in(F50) $$
\begin{center}
{\bf Figure 50 - LoopEvaluation--2}
\end{center}
\bigbreak
$$ \picill3inby4in(F51) $$
\begin{center}
{\bf Figure 51 - Calculate Theta, Delta}
\end{center}
\bigbreak
$$ \picill3inby2.7in(F52) $$
\begin{center}
{\bf Figure 52 - Calculate Tetrahedron -- 1}
\end{center}
\bigbreak
$$ \picill3inby2.5in(F53) $$
\begin{center}
{\bf Figure 53 - Calculate Tetrahedron -- 2}
\end{center}
\bigbreak
Figure 54 is the key calculation for this model. In this figure we assume that the recoupling formulas involve only $0$ and $2$ strands, with
$0$ corresponding to the null particle and $2$ corresponding to the 2-projector. ($2 + 2 = 4$ is forbidden as in Figure 47.) From this assumption we
calculate that the recoupling matrix is given by
$$ F =
\left( \begin{array}{cc}
a & b \\
c & d \\
\end{array} \right) =
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & T \Delta/\Theta^{2} \\
\end{array} \right)
$$
$$ \picill5inby5in(F54) $$
\begin{center}
{\bf Figure 54 - Recoupling for 2-Projectors}
\end{center}
\bigbreak
$$ \picill5inby6.5in(F55) $$
\begin{center}
{\bf Figure 55 - Braiding at the Three-Vertex}
\end{center}
\bigbreak
$$ \picill6inby6.5in(F56) $$
\begin{center}
{\bf Figure 56 - Braiding at the Null-Three-Vertex}
\end{center}
\bigbreak
\noindent Figures 55 and 56 work out the exact formulas for the braiding at a three-vertex in this theory. When the 3-vertex has three marked lines,
then the braiding operator is multiplication by $-A^{4},$ as in Figure 55. When the 3-vertex has two marked lines, then the braiding operator is
multiplication by $A^{8},$ as shown in Figure 56.
\bigbreak
\noindent Notice that it follows from the symmetry of the diagrammatic recoupling formulas of Figure 54 that
{\it the square of the recoupling matrix $F$ is equal to
the identity.} That is,
$$\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array} \right) = F^{2} =
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & T \Delta/\Theta^{2} \\
\end{array} \right)
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & T \Delta/\Theta^{2} \\
\end{array} \right) =$$
$$\left( \begin{array}{cc}
1/\Delta^{2} + 1/\Delta & 1/\Theta + T\Delta^{2}/\Theta^{3} \\
\Theta/\Delta^{3} + T/(\Delta\Theta) & 1/\Delta + \Delta^{2} T^{2}/\Theta^{4} \\
\end{array} \right).$$
Thus we need the relation
$$1/\Delta + 1/\Delta^{2} = 1.$$
This is equivalent to saying that
$$\Delta^{2} = 1 + \Delta,$$ a quadratic equation whose solutions are
$$\Delta = (1 \pm \sqrt{5})/2.$$
Furthermore, we know that $$\Delta = \delta^{2} - 1$$ from Figure 51.
Hence $$\Delta^{2} = \Delta + 1 = \delta^{2}.$$
We shall now specialize to the case where
$$\Delta = \delta = (1 + \sqrt{5})/2,$$
leaving the other cases for the exploration of the reader.
We then take $$A = e^{3\pi i/5}$$ so that
$$\delta = -A^{2} - A^{-2} = -2\cos(6\pi/5) = (1 + \sqrt{5})/2.$$
\bigbreak
Note that $\delta - 1/\delta = 1.$ Thus
$$\Theta = (\delta - 1/\delta)^{2} \delta - \Delta/\delta = \delta - 1$$
and
$$T = (\delta - 1/\delta)^{2}(\delta^{2} - 2) - 2\Theta/\delta = (\delta^{2} - 2) - 2(\delta - 1)/\delta$$
$$= (\delta - 1)(\delta -2)/\delta = 3\delta - 5.$$
Note that $$T = -\Theta^{2}/\Delta^{2},$$ from which it follows immediately that
$$F^{2} = I.$$ This proves that we can satisfy this model when $\Delta = \delta = (1 + \sqrt{5})/2.$
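The computation can be confirmed numerically; the following Python sketch (an illustrative aside) evaluates $\Delta$, $\Theta$ and $T$ at $\delta = (1+\sqrt{5})/2$, assembles the recoupling matrix $F$ of Figure 54, and checks that $F^{2} = I$:

\begin{verbatim}
# Numerical check of the evaluations at delta = (1 + sqrt(5))/2.
import numpy as np

delta = (1 + 5 ** 0.5) / 2
Delta = delta ** 2 - 1
Theta = (delta - 1 / delta) ** 2 * delta - Delta / delta
T = (delta - 1 / delta) ** 2 * (delta ** 2 - 2) - 2 * Theta / delta

F = np.array([[1 / Delta,           Delta / Theta],
              [Theta / Delta ** 2,  T * Delta / Theta ** 2]])

print(np.isclose(Theta, delta - 1), np.isclose(T, 3 * delta - 5))
print(np.isclose(T, -Theta ** 2 / Delta ** 2))   # T = -Theta^2 / Delta^2
print(np.allclose(F @ F, np.eye(2)))             # F^2 = I
\end{verbatim}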
\bigbreak
\noindent For this specialization we see that the matrix $F$ becomes
$$ F =
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & T \Delta/\Theta^{2} \\
\end{array} \right) =
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & (-\Theta^{2}/\Delta^{2}) \Delta/\Theta^{2} \\
\end{array} \right) =
\left( \begin{array}{cc}
1/\Delta & \Delta/\Theta \\
\Theta/\Delta^{2} & -1/\Delta \\
\end{array} \right)$$
This version of $F$ has square equal to the identity independent of the value of $\Theta,$ so long as $\Delta^{2} = \Delta + 1.$
\bigbreak
\noindent {\bf The Final Adjustment.} Our last version of $F$ suffers from a lack of symmetry. It is not a symmetric matrix, and hence
not unitary. A final adjustment of the model gives this desired symmetry. {\it Consider the result of replacing each trivalent vertex (with three 2-projector
strands) by a multiple by a given quantity $\alpha.$} Since the $\Theta$ has two vertices, it will be multiplied by $\alpha^{2}.$ Similarly,
the tetrahedron $T$ will be multiplied by $\alpha^{4}.$ The $\Delta$ and the $\delta$ will be unchanged. Other properties of the model will remain
unchanged. The new recoupling matrix, after such an adjustment is made, becomes
$$\left( \begin{array}{cc}
1/\Delta & \Delta/\alpha^{2}\Theta \\
\alpha^{2}\Theta/\Delta^{2} & -1/\Delta \\
\end{array} \right)$$
For symmetry we require $$\Delta/(\alpha^{2}\Theta) = \alpha^{2}\Theta/\Delta^{2}.$$ We take $$\alpha^{2} = \sqrt{\Delta^{3}}/\Theta.$$
With this choice of $\alpha$ we have $$\Delta/(\alpha^{2}\Theta) = \Delta \Theta/(\Theta \sqrt{\Delta^{3}}) = 1/\sqrt{\Delta}.$$
Hence the new symmetric $F$ is given by the equation
$$F =
\left( \begin{array}{cc}
1/\Delta & 1/\sqrt{\Delta} \\
1/\sqrt{\Delta} & -1/\Delta \\
\end{array} \right) =
\left( \begin{array}{cc}
\tau & \sqrt{\tau} \\
\sqrt{\tau} & -\tau \\
\end{array} \right)$$
where $\Delta$ is the golden ratio and $\tau = 1/\Delta$.
This gives the Fibonacci model. Using Figures 55 and 56, we have that the local braiding matrix for the model is given by the formula
below with $A = e^{3\pi i/5}.$
$$R =
\left( \begin{array}{cc}
-A^{4} & 0 \\
0 & A^{8} \\
\end{array} \right)=
\left( \begin{array}{cc}
e^{4\pi i/5} & 0 \\
0 & -e^{2\pi i/5} \\
\end{array} \right).$$
\bigbreak
The simplest example of a braid group representation arising from this theory is the representation of the three strand braid group generated by
$S_{1}= R$ and $S_{2} = FRF$ (Remember that $F=F^{T} = F^{-1}.$). The matrices $S_{1}$ and $S_{2}$ are both unitary, and they generate a dense subset of
the unitary group $U(2),$ supplying the first part of the transformations needed for quantum computing.
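The following Python sketch (an aside; the ordering of the diagonal entries of $R$ is taken from the explicit numerical matrix displayed above) builds $S_{1}$ and $S_{2}$, confirms that they are unitary, and prints the deviation from the braid relation $S_{1}S_{2}S_{1} = S_{2}S_{1}S_{2}$:

\begin{verbatim}
# Three-strand braid generators of the Fibonacci model.
import numpy as np

phi = (1 + 5 ** 0.5) / 2
tau = 1 / phi
F = np.array([[tau, tau ** 0.5],
              [tau ** 0.5, -tau]])
R = np.diag([np.exp(4j * np.pi / 5), -np.exp(2j * np.pi / 5)])

S1 = R
S2 = F @ R @ F                                     # F = F^T = F^{-1}

print(np.allclose(S1.conj().T @ S1, np.eye(2)))    # S1 unitary
print(np.allclose(S2.conj().T @ S2, np.eye(2)))    # S2 unitary
print(np.linalg.norm(S1 @ S2 @ S1 - S2 @ S1 @ S2)) # deviation from the braid relation
\end{verbatim}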
\bigbreak
\section{Quantum Computation of Colored Jones Polynomials and the Witten-Reshetikhin-Turaev Invariant}
In this section we make some brief comments on the quantum computation of colored Jones polynomials. This material will be expanded in a subsequent
publication.
$$ \picill3inby6in(F57) $$
\begin{center}
{\bf Figure 57 - Evaluation of the Plat Closure of a Braid}
\end{center}
\bigbreak
First, consider Figure 57. In that figure we illustrate the calculation of the evaluation of the {\it ($a$) - colored bracket polynomial} for
the {\it plat closure} $P(B)$ of a braid $B$.
The reader can infer the definition of the plat closure from Figure 57. One takes a braid on an even number of strands and closes the top strands with
each other in a row of maxima. Similarly, the bottom strands are closed with a row of minima. It is not hard to see that any knot or link can be represented
as the plat closure of some braid.
\bigbreak
$$ \picill5inby4in(F58) $$
\begin{center}
{\bf Figure 58 - Dubrovnik Polynomial Specialization at Two Strands }
\end{center}
\bigbreak
The ($a$) - colored bracket polynomial of a link $L$, denoted $<L>_{a},$ is the evaluation of that link where each single strand has been replaced by $a$
parallel strands with the insertion of a Jones-Wenzl projector (as discussed in Section 7). We then see that we can use our discussion of the Temperley-Lieb
recoupling theory as in sections 7,8 and 9 to compute the value of the colored bracket polynomial for the plat closure $PB.$ As shown in Figure 57, we regard
the braid as acting on a process space $V^{a,a,\cdots,a}_{0}$ and take the case of the action on the vector $v$ whose process space coordinates are all
zero. Then the action of the braid takes the form $$Bv(0,\cdots,0) = \Sigma_{x_{1},\cdots,x_{n}} B(x_{1},\cdots,x_{n}) v(x_{1},\cdots,x_{n})$$
where $B(x_{1},\cdots,x_{n})$ denotes the matrix entries for this recoupling transformation and $v(x_{1},\cdots,x_{n})$ runs over a basis for the
space $V^{a,a,\cdots,a}_{0}.$ Here $n$ is even and equal to the number of braid strands. In the figure we illustrate with $n=4.$ Then, as the figure shows,
when we close the top of the braid action to form $PB,$ we cut the sum down to the evaluation of just one term. In the general case we will get
$$<PB>_{a} = B(0,\cdots,0)\Delta_{a}^{n/2}.$$ The calculation simplifies to this degree because of the vanishing of loops in the recoupling graphs.
The vanishing result is stated in Figure 57, and it is proved in the case $a =2$ in Figure 49.
\bigbreak
The {\it colored Jones polynomials} are normalized versions of the colored bracket polynomials, differing just by a normalization factor.
\bigbreak
In order to consider quantum computation of the colored bracket or colored Jones polynomials, we can therefore consider quantum computation of the
matrix entries $B(0,\cdots,0).$ These matrix entries in the case of the roots of unity $A = e^{i\pi/2r}$ and for the $a=2$ Fibonacci model with
$A= e^{3i\pi/5}$ are parts of the diagonal entries of the unitary transformation that represents the braid group on the process space $V^{a,a,\cdots,a}_{0}.$
{\it We can obtain these matrix entries by using the Hadamard test as described in section 4.} As a result we get relatively efficient quantum
algorithms for the colored Jones polynomials at these roots of unity, in essentially the same framework as we described in section 4, but for braids of
arbitrary size. The computational complexity of these models is essentially the same as the models for the Jones polynomial discussed in \cite{Ah1}.
We reserve discussion of these issues to a subsequent publication.
\bigbreak
It is worth remarking here that these algorithms give not only quantum algorithms for computing the colored bracket and Jones polynomials, but also for
computing the Witten-Reshetikhin-Turaev ($WRT$) invariants at the above roots of unity. The reason for this is that the $WRT$ invariant, in unnormalized
form is given as a finite sum of colored bracket polynomials:
$$WRT(L) = \Sigma_{a = 0}^{r-2} \Delta_{a} <L>_{a},$$
and so the same computation as shown in Figure 57 applies to the $WRT.$ This means that we have, in principle, a quantum algorithm for the computation of the
Witten functional integral \cite{Witten} via this knot-theoretic combinatorial topology. It would be very interesting to understand a more direct approach to
such a computation via quantum field theory and functional integration.
\bigbreak
Finally, we note that in the case of the Fibonacci model, the ($2$)-colored bracket polynomial is a special case of the Dubrovnik version of the
Kauffman polynomial \cite{IRI}. See Figure 58 for diagrammatics that resolve this fact. The skein relation for the Dubrovnik polynomial is boxed in this
figure. Above the box, we show how the double strands with projectors reproduce this relation. This observation means that in the Fibonacci model, the
natural underlying knot polynomial is a special evaluation of the Dubrovnik polynomial, and the Fibonacci model can be used to perform quantum computation
for the values of this invariant.
\end{document} |
\begin{document}
\vspace*{-3mm}
\section*{The McMillan theorem for colored branching\\[-2pt] processes and dimensions of random fractals}
\centerline{\Large\it Victor I. Bakhtin}
\centerline{\large [email protected]}
\renewcommand{\abstractname}{}
\begin{abstract}
For simplest colored branching processes we prove an analog to the McMillan theorem and calculate
Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical
measures generated by finite genetic lines. In this setting the role of Shannon's entropy is played
by the Kullback--Leibler divergence and the Hausdorff dimensions are computed by means of the
so-called Billingsley--Kullback entropy, defined in the paper.
{\bf Keywords:} {\it random fractal, Hausdorff dimension, colored branching process, basin of the
empirical measure, spectral potential, Billingsley--Kullback entropy, Kullback action, maximal
dimension principle}
{\bf 2010 Mathematics Subject Classification:\,} 28A80, 37F35, 60J80
\end{abstract}
Let us consider the finite set $X =\{1,\dots,r\}$, whose elements denote different colors, and a
vector $(\mu(1),\dots,\mu(r)) \in [0,1]^r$. A simplest colored branching process can be defined as an
evolution of a population in which all individuals live the same fixed time and then, when the
lifetime ends, each individual generates (independently of others) a random set of ``children''
containing individuals of colors $1$, \dots, $r$ with probabilities $\mu(1)$, \dots, $\mu(r)$
respectively. We will suppose that the evolution starts with a unique initial individual. It is
convenient to represent this process as a random genealogical tree with individuals as vertices and
each vertex connected by edges with its children. Denote by $X_n$ the set of all genetic lines of
length $n$ (that survive up to generation $n$). The colored branching process can degenerate (when
it turns out that starting from some $n$ all the sets $X_n$ are empty) or, otherwise, evolve
endlessly. Every genetic line $x =(x_1,\dots,x_n)\in X_n$ generates an empirical measure
$\delta_{x,n}$ on the set of colors $X$ by the following rule: for each $i\in X$ the value of
$\delta_{x,n}(i)$ is the fraction of those coordinates of the vector $(x_1,\dots,x_n)$ that
coincide with $i$.
Let $\nu$ be an arbitrary probability measure on $X$. The analog to the McMillan theorem that will
be proved below asserts that under condition of nondegeneracy of the colored branching process the
cardinality of the set $\{\pin x\in X_n\mid \delta_{x,n}\approx\nu\pin\}$ has an almost sure
asymptotics of order $e^{-n\rho(\nu,\mu)}$, where
\begin{equation*}
\rho(\nu,\mu) =\sum_{i\in X} \nu(i)\ln\frac{\nu(i)}{\mu(i)}.
\end{equation*}
Formally, the value of $\rho(\nu,\mu)$ coincides with the usual Kullback--Leibler divergence and
differs from the latter only in the fact that in our setting the measure $\mu$ is not a probability measure
and so $\rho(\nu,\mu)$ can be negative.
In the paper we also investigate random fractals defined in terms of the limit behavior of the
sequence of empirical measures $\delta_{x,n}$. Let $X_\infty$ be the set of infinite genetic lines. Fix an
arbitrary vector $\theta =(\theta(1),\dots,\theta(r)) \in (0,1)^r$ and define the following metrics
on $X_\infty$:
\begin{equation*}
\dist(x,y) =\prod_{t=1}^n \theta(x_t),\quad\ \text{where}\ \
n=\inf\pin\{\pin t\mid x_t\ne y_t\pin\} -1.
\end{equation*}
Denote by $V$ any set of probability measures on $X$. It will be proved, in particular, that under
the condition of nondegeneracy of the colored branching process
\begin{equation*}
\dim_H\pin\{\pin x\in X_\infty \mid \delta_{x,n}\to V\pin\} \pin=\pin
\sup_{\nu\in V} d(\nu,\mu,\theta)
\end{equation*}
almost surely, where $d(\nu,\mu,\theta)$ is the \emph{Billingsley--Kullback entropy} defined below.
The paper can be divided into two parts. The first one (sections 1--5) contains known results; some
of them have been modified in a certain way for the convenience of use in what follows. Anyway,
most of them are proved below for the completeness and convenience of the reader. The second part
(sections 6--9) contains new results.
In addition, we note that all the results of the paper can be easily extended to Moran's
self-similar geometric constructions in $\mathbb R^n$, but we will not do that.
\section{The spectral potential}\label{1..}
Let $X$ be an arbitrary finite set. Denote by $B(X)$ the space of all real-valued functions on $X$,
by $M(X)$ the set of all positive measures on $X$, and by $M_1(X)$ the collection of all
probability distributions on $X$.
Every measure $\mu\in M(X)$ determines a linear functional on $B(X)$ of the form
\begin{equation*}
\mu[f] = \int_X f\,d\mu =\sum_{x\in X} f(x)\mu(x).
\end{equation*}
It is easily seen that this functional is \emph{positive} (i.\,e., takes nonnegative values on
nonnegative functions). If, in addition, the measure $\mu$ is probability then this functional is
normalized (takes the value $1$ on the unit function).
Consider the nonlinear functional
\begin{equation}\label{1,,1}
\lambda(\varphi,\mu) =\ln \mu[e^\varphi],
\end{equation}
where $\varphi\in B(X)$ and $\mu\in M(X)$. We will call it the \emph{spectral potential}.
Evidently, it is monotone (if $\varphi\ge \psi$ then $\lambda(\varphi,\mu)\ge \lambda(\psi,\mu)$),
additively homogeneous (that is, $\lambda(\varphi+t,\mu) =\lambda(\varphi,\mu) +t$ for each
constant $t$), and analytic in $\varphi$.
Define a family of probability measures $\mu_\varphi$ on $X$, depending on the functional parameter
$\varphi\in B(X)$, by means of the formula
\begin{equation*}
\mu_{\varphi}[f] = \frac{\mu[e^\varphi f]}{\mu[e^\varphi]}, \qquad f\in B(X).
\end{equation*}
Evidently, each measure $\mu_\varphi$ is equivalent to $\mu$ and has the density $e^{\varphi -
\lambda(\varphi,\mu)}$ with respect to $\mu$.
Let us compute the first two derivatives of the spectral potential with respect to the argument
$\varphi$. Introduce the notation
\begin{equation*}
\lambda'(\varphi,\mu)[f] =\frac{d\lambda(\varphi+tf,\mu)}{d\pin t}\biggr|_{t=0}.
\end{equation*}
This is nothing more than the derivative of the spectral potential in the direction $f$ at the
point $\varphi$. An elementary computation shows that
\begin{equation}\label{1,,2}
\lambda'(\varphi,\mu)[f] =\frac{d\ln \mu\bigl[e^{\varphi+tf}\bigr]}{d\pin t}\biggr|_{t=0} =
\frac{\mu[e^\varphi f]}{\mu[e^\varphi]} = \mu_{\varphi}[f].
\end{equation}
In other words, the derivative $\lambda'(\varphi,\mu)$ coincides with the probability measure
$\mu_\varphi$. Then put
\begin{equation*}
\lambda''(\varphi,\mu)[f,g] =
\frac{\partial^2\lambda(\varphi+tf+sg,\mu)}{\partial s\,\partial\pin t}\biggr|_{s,t=0}
\end{equation*}
\noindent
and compute this derivative using the formula \eqref{1,,2} just obtained:
\begin{equation*}
\lambda''(\varphi,\mu)[f,g] =\frac{\partial}{\partial s}
\biggl(\frac{\mu[e^{\varphi+sg}f]}{\mu[e^{\varphi+sg}]}\pin\biggr)\biggr|_{s=0} =
\frac{\mu[e^\varphi fg]}{\mu[e^\varphi]} -\frac{\mu[e^\varphi f]\pin
\mu[e^\varphi g]}{(\mu[e^\varphi])^2} = \mu_{\varphi}[fg] -\mu_{\varphi}[f]\pin \mu_{\varphi}[g].
\end{equation*}
In probability theory the expression $\mu_\varphi[f]$ is usually called the expectation of the
random variable $f$ with respect to the probability distribution $\mu_\varphi$, and the expression
$\mu_{\varphi}[fg] -\mu_{\varphi}[f]\pin \mu_{\varphi}[g]$ is called the covariance of random
variables $f$ and $g$. In particular, the second derivative
\begin{equation*}
\frac{d^2\lambda(\varphi+tf,\mu)}{d\pin t^2}\biggr|_{t=0} =
\mu_{\varphi}\bigl[f^2\bigr] -\mu_{\varphi}[f]^2
=\mu_\varphi\bigl[(f -\mu_{\varphi}[f])^2\bigr]
\end{equation*}
is equal to the variance of the random variable $f$ with respect to the distribution $\mu_\varphi$.
Since the variance is nonnegative it follows that the spectral potential is convex in $\varphi$.
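These identities are easy to verify numerically. The following Python sketch (an illustrative aside with randomly chosen data) compares finite-difference derivatives of $\lambda(\varphi,\mu)$ with the mean and the variance taken with respect to $\mu_\varphi$:

\begin{verbatim}
# Finite-difference check of the first two derivatives of the spectral potential.
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(0.1, 2.0, size=5)       # a positive (not necessarily probability) measure
phi = rng.normal(size=5)
f = rng.normal(size=5)

lam = lambda p: np.log(np.sum(mu * np.exp(p)))
mu_phi = mu * np.exp(phi - lam(phi))      # probability measure with density e^{phi - lambda}

h = 1e-4
d1 = (lam(phi + h * f) - lam(phi - h * f)) / (2 * h)
d2 = (lam(phi + h * f) - 2 * lam(phi) + lam(phi - h * f)) / h ** 2

mean = np.sum(mu_phi * f)
var = np.sum(mu_phi * f ** 2) - mean ** 2
print(np.isclose(d1, mean, atol=1e-6))    # lambda'(phi, mu)[f] = mu_phi[f]
print(np.isclose(d2, var, atol=1e-5))     # lambda''(phi, mu)[f, f] = variance of f
\end{verbatim}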
\section{The Kullback action}\label{2..}
Denote by $B^*(X)$ the space of all linear functionals on $B(X)$. Then, obviously,
\begin{equation*}
M_1(X)\subset M(X)\subset B^*(X).
\end{equation*}
The following functional of two arguments $\nu\in B^*(X)$ and $\mu\in M(X)$ will be called the
\emph{Kullback action}:
\begin{equation} \label{2,,1}
\rho(\nu,\mu) =
\begin{cases}
\mu[\varphi\ln\varphi] =\nu[\ln\varphi], &\text{if \,$\nu\in M_1(X)$ \,and \,$\nu=\varphi \mu$},
\\[2pt]
+\infty &\text{in all other cases}.
\end{cases}
\end{equation}
To be more precise, the ``all other cases'' fit into at least one of the three categories: \,a)
singular w.\,r.\,t. $\mu$ probability measures $\nu$, \,b) nonnormalized functionals $\nu$, and
\,c)~nonpositive functionals $\nu$.
In the literature, as far as I know, this functional has been defined only for probability
measures $\nu$ and $\mu$. Different authors call it differently: the relative entropy, the
deviation function, the Kullback--Leibler information function, the Kullback--Leibler divergence.
When $\nu$ is a probability measure the Kullback action can be defined by the explicit formula
\begin{equation} \label{2,,2}
\rho(\nu,\mu) =\sum_{x\in X} \nu(x)\ln\frac{\nu(x)}{\mu(x)}.
\end{equation}
\noindent
In particular, if $\mu(x) \equiv 1$ then the Kullback action differs only in sign from Shannon's
entropy
\begin{equation} \label{2,,3}
H(\nu) = -\sum_{x\in X} \nu(x)\ln\nu(x).
\end{equation}
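For a quick numerical illustration (an aside, not part of the exposition), the following Python lines evaluate formula \eqref{2,,2} for a small example and check that for the counting measure $\mu(x)\equiv 1$ the Kullback action equals $-H(\nu)$:

\begin{verbatim}
# Kullback action and Shannon entropy on X = {1, 2, 3}.
import numpy as np

nu = np.array([0.5, 0.3, 0.2])       # a probability measure
mu = np.array([0.9, 0.4, 0.25])      # a positive, not necessarily normalized, measure

rho = np.sum(nu * np.log(nu / mu))   # Kullback action rho(nu, mu)
H = -np.sum(nu * np.log(nu))         # Shannon entropy H(nu)

print(rho, H)
print(np.isclose(np.sum(nu * np.log(nu / np.ones(3))), -H))   # rho(nu, 1) = -H(nu)
\end{verbatim}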
In the case of a probability measure $\mu$ the Kullback action is nonnegative and vanishes only if
$\nu =\mu$. Indeed, if the functional $\nu$ is not a probability measure absolutely continuous with
respect to $\mu$ then $\rho(\nu,\mu) =+\infty$. Otherwise, if $\nu$ is a probability measure of
the form $\nu =\varphi\mu$ then from Jensen's inequality and strong convexity of the function $f(x)
=x\ln x$ it follows that
\begin{equation*}
\rho(\nu,\mu) =\mu[f(\varphi)] \ge f\bigl(\mu[\varphi]\bigr) =0
\end{equation*}
(so long as $\mu[\varphi] =\nu[1] =1$), and the equality $\rho(\nu,\mu) =0$ holds if and only if
$\varphi$ is constant almost everywhere and, respectively, $\nu$ coincides with $\mu$.
Every measure $\mu\in M(X)$ can be written in the form $\mu =c\mu_1$, where $c =\mu[1]$ and
$\mu_1\in M_1(X)$. If $\nu\in M_1(X)$ then \eqref{2,,2} implies
\begin{equation} \label{2,,4}
\rho(\nu,\mu) =\rho(\nu,\mu_1) -\ln c \ge -\ln\mu[1].
\end{equation}
In case $\nu\notin M_1(X)$ this inequality holds all the more since the Kullback action is
infinite.
\begin{theorem}\label{2..1}
The spectral potential and the Kullback action satisfy the Young inequality
\begin{equation}\label{2,,5}
\rho(\nu,\mu)\ge \nu[\psi] -\lambda(\psi,\mu),
\end{equation}
\noindent
that turns into equality if and only if\/ $\nu =\mu_\psi$.
\end{theorem}
\emph{Proof.} If $\rho(\nu,\mu) =+\infty$ then the Young inequality is trivial. If $\rho(\nu,\mu)
<+\infty$ then by the definition of Kullback action the functional $\nu$ is an absolutely
continuous probability measure of the form $\nu =\varphi\mu$, where $\varphi$ is a nonnegative
density. In this case
\begin{equation*}
\lambda(\psi,\mu) \pin=\pin\ln \mu[e^\psi] \pin\ge\pin \ln\! \intop_{\varphi>0} \! e^\psi\, d\mu
\pin=\pin \ln\! \intop_{\varphi>0}\! e^{\psi-\ln\varphi}\,d\nu \pin=\pin
\ln \nu\bigl[e^{\psi -\ln\varphi}\bigr] \pin\ge\pin \nu[\psi-\ln\varphi]
\end{equation*}
(at the last step we have used Jensen's inequality and concavity of the logarithm function). Since
$\rho(\nu,\mu) =\nu[\ln \varphi]$, this formula implies inequality \eqref{2,,5}.
Recall that $\mu_\psi =e^{\psi-\lambda(\psi,\mu)}\mu$. So if $\nu =\mu_\psi$ then by definition
\begin{equation*}
\rho(\nu,\mu) =\nu[\psi-\lambda(\psi,\mu)] =\nu[\psi]-\lambda(\psi,\mu).
\end{equation*}
Vice versa, assume that $\rho(\nu,\mu) =\nu[\psi] -\lambda(\psi,\mu)$. Then subtract from the above
equality the Young inequality $\rho(\nu,\mu)\ge \nu[\varphi] -\lambda(\varphi,\mu)$. We obtain
\begin{equation*}
\lambda(\varphi,\mu) -\lambda(\psi,\mu) \ge \nu[\varphi-\psi].
\end{equation*}
From this follows that $\nu =\lambda'(\psi,\mu)$. Finally, $\lambda'(\psi,\mu)$ coincides with
$\mu_\psi$. \qed
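A short numerical illustration of Theorem \ref{2..1} (an aside, with randomly chosen data): the Young inequality holds for an arbitrary probability measure $\nu$, and becomes an equality at $\nu = \mu_\psi$.

\begin{verbatim}
# Numerical check of the Young inequality and of the equality case nu = mu_psi.
import numpy as np

rng = np.random.default_rng(1)
mu = rng.uniform(0.2, 1.5, size=4)            # a positive measure on X
psi = rng.normal(size=4)

lam = np.log(np.sum(mu * np.exp(psi)))        # spectral potential lambda(psi, mu)
mu_psi = mu * np.exp(psi - lam)               # probability measure mu_psi

def rho(nu, mu):                              # Kullback action for nu = phi * mu, phi > 0
    return np.sum(nu * np.log(nu / mu))

print(np.isclose(rho(mu_psi, mu), np.sum(mu_psi * psi) - lam))   # equality at nu = mu_psi

nu = rng.uniform(0.1, 1.0, size=4)
nu /= nu.sum()                                # an arbitrary probability measure
print(rho(nu, mu) >= np.sum(nu * psi) - lam)  # the Young inequality
\end{verbatim}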
\begin{theorem}\label{2..2}
The Kullback action\/ $\rho(\nu,\mu)$ is the Legendre transform w. r. t.\/ $\nu$ of the spectral
potential\/$:$
\begin{equation}\label{2,,6}
\rho(\nu,\mu) \pin= \sup_{\psi\in B(X)}\bigl\{\nu[\psi] -\lambda(\psi,\mu)\bigr\}, \qquad
\nu\in B^*(X),\ \ \mu\in M(X).
\end{equation}
\end{theorem}
\emph{Proof.} By the Young inequality the left hand side of \eqref{2,,6} is not less than the right
one. Therefore it is enough to associate with any functional $\nu\in B^*(X)$ a family of functions
$\psi_t$, depending on the real-valued parameter $t$, on which the equality in \eqref{2,,6} is
attained.
First, suppose that $\nu$ is a probability measure absolutely continuous with respect to $\mu$,
of the form $\nu =\varphi\mu$, where $\varphi$ is a nonnegative density. Consider the family of
functions
\begin{equation*}
\psi_t(x) \,=\,
\begin{cases}
\ln\varphi(x),&\text{if}\ \ \varphi(x)> 0,\\[2pt]
-t,&\text{if}\ \ \varphi(x) =0.
\end{cases}
\end{equation*}
\noindent
When $t\to +\infty$ we have the following relations
\begin{gather*}
\mu\bigl[e^{\psi_t}\bigr] \pin=\intop_{\varphi>0} \! \varphi\,d\mu\pin
+\intop_{\varphi=0} \!\pin e^{-t}\,d\mu\pin\ \longrightarrow\
\int_X \varphi\,d\mu \,=\,1,\\[6pt]
\nu[\psi_t] \pin=\intop_{\varphi>0} \!\varphi\ln\varphi\,d\mu\pin
+\intop_{\varphi=0} \! -t\varphi\,d\mu \pin=\pin \mu[\varphi\ln\varphi],\\[6pt]
\nu[\psi_t] -\lambda(\psi_t,\mu) \pin=\pin \nu[\psi_t] -\ln \mu\bigl[e^{\psi_t}\bigr]\,
\longrightarrow\, \mu[\varphi\ln\varphi] \pin=\pin \rho(\nu,\mu),
\end{gather*}
and so \eqref{2,,6} is proved.
In all the other cases, when $\nu$ is not an absolutely continuous probability measure, by
definition $\rho(\nu,\mu) =+\infty$. Let us examine these cases one after another.
If $\nu$ is a probability measure singular with respect to $\mu$, then there exists $x_0\in X$ such
that $\mu(x_0) =0$ and $\nu(x_0) >0$. In this case consider the family of functions
\begin{equation*}
\psi_t(x) \,=\,
\begin{cases}
t, &\text{if}\ \ x=x_0, \\[2pt]
0, &\text{if}\ \ x\ne x_0.
\end{cases}
\end{equation*}
It is easily seen that
\begin{equation*}
\nu[\psi_t] -\lambda(\psi_t,\mu) \pin\ge\pin t\nu(x_0) -\ln\mu\bigl[e^{\psi_t}\bigr]
\pin\ge\pin t\nu(x_0) - \ln\mu[1].
\end{equation*}
The right hand side of the above formula goes to $+\infty$ as $t$ increases and \eqref{2,,6}
holds again.
If the functional $\nu$ is not normalized then put $\psi_t =t$. Then the expression
\begin{equation*}
\nu[\psi_t] -\lambda(\psi_t,\mu)\pin=\pin\nu[t] -\ln\mu[e^t]\pin=\pin t\pin(\nu[1]-1)-\ln\mu[1]
\end{equation*}
is unbounded from above and hence \eqref{2,,6} is still valid.
Finally, if the functional $\nu$ is not positive then there exists a nonnegative function $\varphi$
such that $\nu[\varphi] <0$. Consider the family $\psi_t =-t\varphi$, where $t>0$. For it
\begin{equation*}
\nu[\psi_t] -\lambda(\psi_t,\mu) \pin\ge\pin -t\nu[\varphi] -\lambda(0,\mu)
\, \longrightarrow\, +\infty
\end{equation*}
as $t\to +\infty$, and \eqref{2,,6} remains in force. \qed
\begin{corollary}\label{2..3}
The functional\/ $\rho(\,\cdot\,,\mu)$ is convex and lower semicontinuous on\/ $B^*(X)$.
\end{corollary}
\emph{Proof.} These are properties of the Legendre transform. \qed
\section{The local large deviations principle and\\[-2pt] the McMillan theorem}\label{3..}
As above, we keep to the following notation: $X$ is a finite set, $B(X)$ stands for the space of
real-valued functions on $X$, $B^*(X)$ is the space of linear functionals on $B(X)$, $M_1(X)$ is
the set of all probability measures on $X$, and $M(X)$ is the set of all positive measures on $X$.
To each finite sequence $x =(x_1,\dots,x_n)\in X^n$ let us correspond an \emph{empirical measure}
$\delta_{x,n}\in M_1(X)$ which is supported on the set $\{x_1,\dots,x_n\}$ and assigns to every
point $x_i$ the measure $1/n$. The integral of any function $f$ with respect to $\delta_{x,n}$
looks like
\begin{equation*}
\delta_{x,n}[f] =\frac{f(x_1)+\,\dotsc\,+f(x_n)}{n}.
\end{equation*}
Denote by $\mu^n$ Cartesian power of a measure $\mu\in M(X)$, which is defined on $X^n$.
\begin{theorem}[\hbox spread -4pt {the local large deviations principle}] \label{3..1}
For any measure\/ $\mu\in M(X)$, any functional\/ $\nu\in B^*(X)$, and\/ $\eps>0$ there exists a
neighborhood\/ $O(\nu)$ such that
\begin{equation}\label{3,,1}
\mu^n\bigl\{x\in X^n\bigm| \delta_{x,n}\in O(\nu)\bigr\} \pin\le\pin e^{-n(\rho(\nu,\mu) -\eps)}.
\end{equation}
On the other hand, for any\/ $\eps>0$ and any neighborhood\/ $O(\nu)$ the following asymptotic
estimate holds\/$:$
\begin{equation}\label{3,,2}
\mu^n\bigl\{x\in X^n\bigm| \delta_{x,n}\in O(\nu)\bigr\} \pin\ge\pin e^{-n(\rho(\nu,\mu) +\eps)},
\qquad n\to\infty.
\end{equation}
\end{theorem}
If $\rho(\nu,\mu) =+\infty$, then by the difference $\rho(\nu,\mu)-\eps$ in \eqref{3,,1} we mean an
arbitrary positive number.
In the case of a probability measure $\mu$, Theorem \ref{3..1} is a particular case of Varadhan's large
deviations principle (whose explicit formulation can be found, e.\,g., in \cite{Deuschel-Stroock}
and \cite{Varadhan}). Therefore, this theorem can be deduced from Varadhan's large deviations
principle by means of mere renormalization of $\mu$. Nevertheless, we will prove it independently
for the purpose of completeness.
\emph{Proof.} By Theorem \ref{2..2} for any $\eps>0$ there exists $\psi\in B(X)$ such that
\begin{equation}\label{3,,3}
\rho(\nu,\mu) -\eps/2 \pin<\pin \nu[\psi] -\lambda(\psi,\mu).
\end{equation}
Consider the probability measure $\mu_\psi =e^{\psi-\lambda(\psi,\mu)}\mu$. Obviously,
\begin{equation}\label{3,,4}
\frac{d\mu^n(x)}{d\mu_\psi^n(x)} \pin=\pin \prod_{i=1}^n \frac{d\mu(x_i)}{d\mu_\psi(x_i)}
\pin=\pin \prod_{i=1}^n e^{\lambda(\psi,\mu) -\psi(x_i)} \pin=\pin
e^{n(\lambda(\psi,\mu) -\delta_{x,n}[\psi])}.
\end{equation}
Define a neighborhood of the functional $\nu$ as follows:
\begin{equation*}
O(\nu) \pin=\pin \bigl\{\pin \delta\in B^*(X)\bigm| \delta[\psi] >\nu[\psi] -\eps/2\pin\bigr\}.
\end{equation*}
Then it follows from \eqref{3,,4} and \eqref{3,,3} that under the condition $\delta_{x,n}\in
O(\nu)$
\begin{equation*}
\frac{d\mu^n(x)}{d\mu_\psi^n(x)} \pin<\pin
e^{n(\lambda(\psi,\mu)-\nu[\psi] +\eps/2)} \pin<\pin e^{n(-\rho(\nu,\mu)+\eps)}.
\end{equation*}
Consequently,
\begin{equation*}
\mu^n\bigl\{x\in X^n\bigm| \delta_{x,n}\in O(\nu)\bigr\} \, =
\intop_{\delta_{x,n}\in O(\nu)}\hspace{-1 em} d\mu^n(x) \,\le
\intop_{\delta_{x,n}\in O(\nu)}\hspace{-1 em} e^{n(-\rho(\nu,\mu)+\eps)}\,d\mu_\psi^n(x)
\,\le\, e^{-n(\rho(\nu,\mu)-\eps)}.
\end{equation*}
Thus the first part of Theorem \ref{3..1} is proved.
The estimate \eqref{3,,2} is trivial if $\rho(\nu,\mu) =+\infty$. So it is enough to prove it only
in the case when $\nu$ is a probability measure of the form $\nu =\varphi\mu$ and the Kullback
action $\rho(\nu,\mu) =\nu[\ln\varphi]$ is finite. Fix any number $\eps>0$ and neighborhood
$O(\nu)$. Define the sets
\begin{equation*}
Y_n \pin=\pin \bigl\{\pin x\in X^n\bigm| \delta_{x,n}\in O(\nu),\ \ \big|\delta_{x,n}[\ln\varphi]
-\nu[\ln\varphi]\big| <\eps/2\pin\bigr\}
\end{equation*}
\noindent
(the last inequality in the braces means that $\varphi(x_i)>0$ at each point of the sequence $x
=(x_1,\dots,x_n)$). Note that for $x\in Y_n$
\begin{equation*}
\frac{d\mu^n(x)}{d\nu^n(x)} \pin=\pin \prod_{i=1}^n \frac{d\mu(x_i)}{d\nu(x_i)} \pin=\pin
\prod_{i=1}^n \frac{1}{\varphi(x_i)} \pin=\pin e^{-n\delta_{x,n}[\ln\varphi]} \pin>\pin
e^{-n(\nu[\ln\varphi]+\eps/2)}.
\end{equation*}
Consequently,
\begin{equation}\label{3,,5}
\mu^n(Y_n) \pin=\pin \int_{Y_n} d\mu^n(x) \pin\ge\pin
\int_{Y_n} e^{-n(\nu[\ln\varphi]+\eps/2)}\,d\nu^n(x) \pin=\pin
e^{-n\rho(\nu,\mu) -n\eps/2}\pin\nu^n(Y_n).
\end{equation}
By the law of large numbers $\nu^n(Y_n)\to 1$. Hence \eqref{3,,5} implies \eqref{3,,2}. \qed
\begin{corollary}[the McMillan theorem] \label{3..2}
For any probability measure\/ $\nu\in M_1(X)$ and\/ $\eps>0$ there exists a neighborhood\/ $O(\nu)$
such that
\begin{equation*}
\#\{\pin x =(x_1,\dots,x_n)\in X^n\mid \delta_{x,n}\in O(\nu)\pin\} \pin\le\pin e^{n(H(\nu) +\eps)}.
\end{equation*}
On the other hand, for any neighborhood\/ $O(\nu)$ and\/ $\eps>0$
\begin{equation*}
\#\{\pin x \in X^n\mid \delta_{x,n}\in O(\nu)\pin\} \pin\ge\pin e^{n(H(\nu) -\eps)}
\quad\ \text{as}\ \ n\to\infty.
\end{equation*}
Here\/ $H(\nu)$ denotes Shannon's entropy defined in\/ \eqref{2,,3}.
\end{corollary}
\emph{Proof.} This follows from equalities \eqref{2,,2}, \eqref{2,,3}, and the previous theorem, if
we set $\mu(x) =1$ for all $x\in X$. \qed
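Corollary \ref{3..2} can be checked by a brute-force count for short sequences. The following
Python sketch is ours and purely illustrative (the tolerance defining the neighborhood $O(\nu)$ is
chosen arbitrarily); it counts the sequences in $X^n$ whose empirical measure is close to $\nu$ and
compares the count with $e^{nH(\nu)}$, where $H(\nu)=-\sum_i \nu(i)\ln\nu(i)$ is Shannon's entropy.
\begin{verbatim}
import itertools, math

X = (0, 1, 2)
nu = {0: 0.5, 1: 0.25, 2: 0.25}
H = -sum(p * math.log(p) for p in nu.values())      # Shannon entropy H(nu)

def in_neighborhood(x, eps=0.05):
    # delta_{x,n} lies within eps of nu in every coordinate
    n = len(x)
    return all(abs(x.count(i) / n - nu[i]) <= eps for i in X)

for n in (4, 8, 12):
    count = sum(1 for x in itertools.product(X, repeat=n)
                if in_neighborhood(x))
    # the count agrees with e^{n H(nu)} to exponential order
    print(n, count, round(math.exp(n * H)))
\end{verbatim}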
\section{Hausdorff dimension and the maximal dimension principle} \label{4..}
Let us define the Hausdorff dimension of an arbitrary metric space $\Omega$.
Suppose that $\Omega$ is covered by an at most countable collection of subsets \pin$\cal U =\{U_i\}$.
Denote by $|\cal U|$ the diameter of this covering: $|\cal U| =\sup |U_i|$, where $|U_i|$ is the
diameter of $U_i$. For every $\alpha\in \mathbb R$ put
\begin{equation*}
\mathrm{mes}\kern 0.04em(\cal U,\alpha) =\sum_i |U_i|^\alpha.
\end{equation*}
The \emph{Hausdorff measure} (of dimension $\alpha$) of the metric space $\Omega$ is
\begin{equation*}
\mathrm{mes}\kern 0.04em(\Omega,\alpha) \pin=\pin \varliminf_{|\kern.04em\cal U|\to 0} \mathrm{mes}\kern 0.04em(\cal U,\alpha),
\end{equation*}
where \pin$\cal U$ ranges over at most countable coverings of $\Omega$. Obviously,
\begin{equation*}
\mathrm{mes}\kern 0.04em(\cal U,\beta) \le \mathrm{mes}\kern 0.04em(\cal U,\alpha)\pin |\cal U|^{\beta-\alpha}
\quad \text{if}\ \ \beta\ge\alpha.
\end{equation*}
This implies the following property of the Hausdorff measure: if $\mathrm{mes}\kern 0.04em(\Omega,\alpha) < \infty$
for some $\alpha$, then $\mathrm{mes}\kern 0.04em(\Omega,\beta) =0$ for all $\beta> \alpha$.
The \emph{Hausdorff dimension} of the space $\Omega$ is the number
\begin{equation}\label{4,,1}
\dim_H \Omega \pin=\pin\inf\pin\{\pin \alpha \mid \mathrm{mes}\kern 0.04em(\Omega,\alpha) =0\pin\}.
\end{equation}
In other words, $\dim_H \Omega =\alpha_0$ if $\mathrm{mes}\kern 0.04em(\Omega,\alpha) =0$ for all $\alpha>\alpha_0$ and
$\mathrm{mes}\kern 0.04em(\Omega,\alpha) =\infty$ for all $\alpha<\alpha_0$.
Below we will consider the space of sequences
\begin{equation*}
X^{\mathbb N} =\{\pin x=(x_1,x_2,x_3,\dots)\pin\}, \quad\ \text{where}\ \ x_i\in X =\{1,\dots,r\}.
\end{equation*}
Let $x =(x_1,x_2,\dots)\in X^{\mathbb N}$. Denote by $Z_n(x)$ the set of sequences
$y=(y_1,y_2,\dots)$ whose first $n$ coordinates coincide with the same coordinates of $x$. This set
will be called a \emph{cylinder of rank} $n$. The collection of all cylinders generates the
\emph{Tychonoff topology} on the space $X^{\mathbb N}$ and the \emph{cylinder $\sigma$-algebra} of
subsets in $X^{\mathbb N}$.
Take an arbitrary positive function $\eta$ on the set of all cylinders that possesses the following
two properties: first, if $Z_n(x) \subset Z_m(y)$ then $\eta(Z_n(x))\le \eta(Z_m(y))$ and, second,
$\eta(Z_n(x)) \to 0$ as $n\to \infty$ at each point $x\in X^{\mathbb N}$. Define the \emph{cylinder
metric} on $X^{\mathbb N}$ by means of the formula
\begin{equation} \label{4,,2}
\dist(x,y) =\eta(Z_n(x)), \quad\ \text{where} \ \
n=\max\pin\{\pin m\mid Z_m(x) =Z_m(y)\pin\}.
\end{equation}
Evidently, the diameter of $Z_n(x)$ in this metric coincides with $\eta(Z_n(x))$.
Suppose that on $X^{\mathbb N}$, besides the cylinder metric \eqref{4,,2}, a Borel measure $\mu$ is
given. The function
\begin{equation*}
d_\mu(x) =\varliminf_{n\to\infty} \frac{\ln \mu(Z_n(x))}{\ln |Z_n(x)|}
\end{equation*}
\noindent
is called \emph{$($lower\/$)$ pointwise dimension of the measure\/ $\mu$}.
The next theorem provides an effective tool for computing the Hausdorff dimensions of various
subsets of $X^{\mathbb N}$.
\begin{theorem} \label{4..1}
Suppose\/ $A\subset X^{\mathbb N}$. If there exists a finite Borel measure\/ $\mu$ on\/ $X^{\mathbb
N}$ such that\/ $d_\mu(x) \le d$ for each point\/ $x\in A$, then\/ $\dim_H A\le d$. On the
contrary, if\/ $d_\mu(x) \ge d$ for each\/ $x\in A$ and the outer measure\/ $\mu^*(A)$ is positive,
then\/ $\dim_H A \ge d$.
\end{theorem}
It follows that if $d_\mu(x)\equiv d$ on the whole subset $A\subset X^{\mathbb N}$ and $\mu^*(A)>0$,
then its dimension is equal to $d$.
A weakened version of the second part of Theorem \ref{4..1}, in which the condition $d_\mu(x) \ge d$
is replaced by the stronger one $\mu(Z_n(x))\le |Z_n(x)|^d$, is usually called the \emph{mass
distribution principle.}
\emph{Proof.} Every cylinder $Z_n(x)$ is, in fact, a ball in the metric \eqref{4,,2} whose radius
equals its diameter, and vice versa, any ball in this metric coincides with a cylinder.
Besides, any two cylinders $Z_n(x)$ and $Z_m(y)$ either have empty intersection or one of them is
contained in the other. Therefore, when computing the Hausdorff measure and dimension of a subset
$A\subset X^{\mathbb N}$, it is enough to work with disjoint coverings of $A$ by cylinders only.
Suppose first that $d_\mu(x) <\alpha$ for all points $x\in A$. Then for each $x\in A$ there exist
arbitrarily small cylinders $Z_n(x)$ satisfying the condition $|Z_n(x)|^\alpha < \mu(Z_n(x))$.
Using this kind of cylinders we can put together a disjoint covering \pin$\cal U$ of the set $A$ of
arbitrarily small diameter. For this covering we have the inequalities
\begin{equation*}
\mathrm{mes}\kern 0.04em(\cal U,\alpha) \pin=\sum_{Z_n(x)\in \cal U} |Z_n(x)|^\alpha \pin\le
\sum_{Z_n(x)\in \cal U} \mu(Z_n(x)) \pin\le\pin \mu\bigl(X^{\mathbb N}\bigr),
\end{equation*}
and hence $\dim_H A\le \alpha$. Thus the first part of the theorem is proved.
Suppose now that $d_\mu(x) >\alpha$ for all points $x\in A$. Define the sets
\begin{equation*}
A_\eps \pin=\pin \bigl\{ x\in A\bigm| |Z_n(x)|^\alpha > \mu(Z_n(x))\ \ \text{whenever}\ \
|Z_n(x)| <\eps\bigr\}.
\end{equation*}
Obviously, $A =\bigcup_{\eps>0} A_\eps$. Hence there exists an $\eps$ such that $\mu^*(A_\eps)>0$.
Let \pin$\cal U$ be a disjoint covering of $A$ by cylinders of diameters less than $\eps$. From
the definition of $A_\eps$ it follows that $\mathrm{mes}\kern 0.04em(\cal U,\alpha) \ge \mu^*(A_\eps)$. Therefore
$\dim_H A \ge \alpha$, and thus the second part of the theorem is proved. \qed
Theorem \ref{4..1} was first proved by Billingsley in the case when the function $\eta$ in
\eqref{4,,2} is a probability measure on $X^{\mathbb N}$ (see \cite[Theorems 2.1 and
2.2]{Billingsley II}). An analog of this theorem for subsets $A\subset \mathbb R^r$ was proved in
\cite{Young} and \cite{Pesin}.
Each point $x =(x_1,x_2,\dots) \in X^{\mathbb N}$ generates a sequence of empirical measures
$\delta_{x,n}$ on the set $X$:
\begin{equation*}
\delta_{x,n}(i) =\frac{\#\{\pin t\mid x_t=i,\pin\ t\le n\pin\}}{n}, \qquad i\in X.
\end{equation*}
\noindent
In other words, $\delta_{x,n}(i)$ is the fraction of those coordinates of the vector
$(x_1,\dots,x_n)$ that coincide with $i$.
For every probability measure $\nu\in M_1(X)$ let us define its \emph{basin} $B(\nu)$ as the set of
all points $x\in X^{\mathbb N}$ such that $\delta_{x,n}$ converges to $\nu$.
Evidently, the basins of different measures do not intersect each other and are nonempty. If $x\in
B(\nu)$ and $y\in X^{\mathbb N}$ differs from $x$ in only a finite number of coordinates, then $y\in
B(\nu)$. This implies that each basin is dense in $X^{\mathbb N}$.
Every measure $\nu\in M_1(X)$ generates the Bernoulli distribution $P_\nu = \nu^{\mathbb N}$ on the
space $X^{\mathbb N}$. By the strong law of large numbers the basin $B(\nu)$ has probability one
with respect to the Bernoulli distribution $P_\nu$, and its complement has $P_\nu$-probability zero.
In particular, any basin different from $B(\nu)$ has zero $P_\nu$-probability.
Points that do not belong to the union of all basins will be called \emph{irregular}. The set of
irregular points has zero probability with respect to any distribution $P_\nu$, where $\nu\in
M_1(X)$. As a result, $X^{\mathbb N}$ turns out to be decomposed into the disjoint union of the
different basins and the set of irregular points.
Let us fix some numbers $\theta(i)\in (0,1)$ for all elements $i\in X =\{1,\dots,r\}$, and define a
\emph{cylinder\/ $\theta$-metric} on $X^{\mathbb N}$ by the rule
\begin{equation}\label{4,,3}
\dist(x,y) =\prod_{t=1}^n \theta(x_t),\quad\ \text{where}\ \
n=\inf\pin\{\pin t\mid x_t\ne y_t\pin\} -1.
\end{equation}
It is a special case of the cylinder metric \eqref{4,,2}.
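The $\theta$-metric is immediate to evaluate: the distance between two sequences equals the diameter
of the smallest cylinder containing both of them. An illustrative Python sketch (ours), working with
finite prefixes of the sequences:
\begin{verbatim}
import math

theta = {1: 0.5, 2: 0.3, 3: 0.2}    # numbers theta(i) in (0,1), illustrative

def dist(x, y):
    # product of theta(x_t) over the common prefix of x and y
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return math.prod(theta[x[t]] for t in range(n))   # empty product = 1

x = (1, 2, 2, 3, 1)
y = (1, 2, 2, 1, 3)
print(dist(x, y))    # theta(1)*theta(2)*theta(2) = 0.045
\end{verbatim}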
For each measure $\nu\in M_1(X)$ and the $\theta$-metric \eqref{4,,3} define the quantity
\begin{equation}\label{4,,4}
S(\nu,\theta) \pin=\,\frac{\sum_{i=1}^r \nu(i)\ln \nu(i)}{\sum_{i=1}^r \nu(i)\ln \theta(i)}.
\end{equation}
We will call it the \emph{Billingsley entropy} because Billingsley was the first to write down this
formula and apply it to the computation of Hausdorff dimensions \cite{Billingsley}. He also expressed
this quantity in terms of Shannon's entropy and the Kullback action:
\begin{equation*}
S(\nu,\theta) \,=\, \frac{H(\nu)}{H(\nu)+\rho(\nu,\theta)}.
\end{equation*}
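Both the definition \eqref{4,,4} and its expression through Shannon's entropy and the Kullback
action are straightforward to evaluate; the following Python sketch (ours, with arbitrary
illustrative values of $\nu$ and $\theta$) confirms that the two formulas agree.
\begin{verbatim}
import math

nu    = {1: 0.5, 2: 0.3, 3: 0.2}    # probability measure on X = {1,2,3}
theta = {1: 0.4, 2: 0.4, 3: 0.2}    # numbers theta(i) in (0,1)

def S(nu, theta):
    # Billingsley entropy S(nu, theta)
    num = sum(p * math.log(p) for p in nu.values() if p > 0)
    den = sum(p * math.log(theta[i]) for i, p in nu.items())
    return num / den

H   = -sum(p * math.log(p) for p in nu.values() if p > 0)
rho = sum(p * math.log(p / theta[i]) for i, p in nu.items() if p > 0)

print(S(nu, theta))     # direct value
print(H / (H + rho))    # the same value via Shannon entropy and Kullback action
\end{verbatim}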
\begin{theorem} \label{4..2}
The Hausdorff dimension of any basin\/ $B(\nu)$ relative to the\/ $\theta$-metric\/ \eqref{4,,3} is
equal to the Billingsley entropy\/ $S(\nu,\theta)$.
\end{theorem}
A special case of this theorem, in which $\theta(1) =\ldots =\theta(r) =1/r$, was first proved by
Eggleston \cite{Eggleston}. In complete form this theorem and its generalizations were proved
by Billingsley in \cite{Billingsley,Billingsley II}.
\emph{Proof.} Assume first that $\nu(i)>0$ for every $i=1,\,\dots,\,r$. Obviously,
\begin{equation*}
\frac{\ln P_\nu(Z_n(x))}{\ln |Z_n(x)|} \pin=\pin
\frac{\sum_{t=1}^n \ln \nu(x_t)}{\sum_{t=1}^n \ln \theta(x_t)} \pin=\pin
\frac{\sum_{i=1}^r n\delta_{x,n}(i) \ln \nu(i)}{\sum_{i=1}^r n\delta_{x,n}(i)\ln \theta(i)}.
\end{equation*}
Hence for each point $x\in B(\nu)$ we have
\begin{equation} \label{4,,5}
d_{P_\nu}(x) \pin=\pin
\varliminf_{n\to\infty} \frac{\ln P_\nu(Z_n(x))}{\ln |Z_n(x)|}
\pin=\,\frac{\sum_{i=1}^r \nu(i)\ln \nu(i)}{\sum_{i=1}^r \nu(i)\ln \theta(i)} \pin=\pin S(\nu,\theta).
\end{equation}
Applying Theorem \ref{4..1} to the set $A =B(\nu)$ and measure $\mu =P_\nu$, we obtain the
statement of Theorem \ref{4..2}.
In the general case the same argument provides only the lower bound $d_{P_\nu}(x)\ge S(\nu,\theta)$,
which implies the lower bound $\dim_H B(\nu) \ge S(\nu,\theta)$. The reverse inequality is provided
by the next lemma. \qed
\begin{lemma}\label{4..3}
Suppose the space\/ $X^{\mathbb N}$ is equipped with the metric\/ \eqref{4,,3}. Then for any
measure\/ $\nu\in M_1(X)$ and\/ $\eps>0$ there exists a neighborhood\/ $O(\nu)$ such that the
Hausdorff dimension of the set
\begin{equation*}
A=\bigl\{ x\in X^{\mathbb N}\bigm| \forall\,N\ \exists\,n>N\!:\, \delta_{x,n}\in O(\nu)\bigr\}
\end{equation*}
does not exceed\/ $S(\nu,\theta)+\eps$.
\end{lemma}
\emph{Proof.} Fix a measure $\nu\in M_1(X)$ and an arbitrary positive number $\kappa$. By
McMillan's theorem there exists a neighborhood $O(\nu)$ such that for each positive integer $n$
\begin{equation}\label{4,,6}
\#\bigl\{ Z_n(x)\bigm| \delta_{x,n}\in O(\nu)\bigr\} \pin\le\pin e^{n(H(\nu) +\kappa)}.
\end{equation}
Shrink this neighborhood in such a way that, in addition, for every measure $\delta\in O(\nu)$
the following inequality holds:
\begin{equation*}
\sum_{i=1}^r \delta(i)\ln \theta(i) \pin<\pin \sum_{i=1}^r \nu(i) \ln \theta(i) +\kappa.
\end{equation*}
Then for every cylinder $Z_n(x)$ satisfying the condition $\delta_{x,n}\in O(\nu)$ we have the
estimate
\begin{align} \notag
|Z_n(x)| \pin=\pin \prod_{t=1}^n \theta(x_t) \pin&=\pin\exp\biggl\{\sum_{t=1}^n \ln \theta(x_t)\biggr\}
\pin=\pin \exp\biggl\{n\sum_{i=1}^r \delta_{x,n}(i)\ln \theta(i)\biggr\} \pin<\pin\\[3pt]
\pin&<\pin \exp\biggl\{n\sum_{i=1}^r \nu(i)\ln \theta(i) +n\kappa\biggr\}. \label{4,,7}
\end{align}
For any positive integer $N$ the set $A$ is covered by the collection of cylinders
\begin{equation*}
\cal U_N \pin=\pin\bigcup_{n=N}^\infty \bigl\{ Z_n(x)\bigm| \delta_{x,n}\in O(\nu)\bigr\}.
\end{equation*}
Evidently, the diameter of this covering goes to zero when $N$ increases. Now we can evaluate
$\mathrm{mes}\kern 0.04em(\cal U_N,\alpha)$ by means of formulas \eqref{4,,6} and \eqref{4,,7}:
\begin{align} \notag
\mathrm{mes}\kern 0.04em(\cal U_N,\alpha) \pin&=\sum_{Z_n(x)\in\pin \cal U_N}\hspace{-0.5em} |Z_n(x)|^\alpha
\pin\le\pin \sum_{n=N}^\infty e^{n(H(\nu) +\kappa)} \exp\biggl\{\alpha n\sum_{i=1}^r
\nu(i)\ln \theta(i) +\alpha n\kappa\biggr\} \pin=\pin \\[3pt] \label{4,,8}
\pin&=\pin \sum_{n=N}^\infty \exp\biggl\{n\pin\biggl(-\sum_{i=1}^r \nu(i)\ln \nu(i)
+\alpha\sum_{i=1}^r \nu(i)\ln \theta(i) +\kappa +\alpha\kappa\biggr)\!\pin\biggr\}.
\end{align}
If $\alpha > S(\nu,\theta)$, then we can choose $\kappa>0$ so small that the expression in
parentheses in the exponent is negative, and the whole sum \eqref{4,,8} goes to zero as $N\to \infty$.
Therefore the Hausdorff measure (of dimension $\alpha$) of the set $A$ is zero, and hence $\dim_H A$
does not exceed $\alpha$. \qed
\proofskip
We will say that a sequence of empirical measures $\delta_{x,n}$ \emph{condenses} on a subset
$V\subset M_1(X)$ (notation $\delta_{x,n}\succ V$) if it has at least one limit point in $V$.
By analogy with the famous large deviations principle of Varadhan \cite{Deuschel-Stroock,Varadhan},
it is natural to call the next theorem the \emph{maximal dimension principle.}
\begin{theorem} \label{4..4}
Let the space\/ $X^{\mathbb N}$ be equipped with the cylinder\/ $\theta$-metric\/ \eqref{4,,3}.
Then for any nonempty subset\/ $V\subset M_1(X)$
\begin{equation} \label{4,,9}
\dim_H\pin\bigl\{x\in X^{\mathbb N} \bigm| \delta_{x,n}\succ V\bigr\} \pin=\pin
\sup_{\nu\in V} S(\nu,\theta).
\end{equation}
\end{theorem}
\emph{Proof.} The set $A =\{\pin x\in X^{\mathbb N} \mid \delta_{x,n}\succ V\pin\}$ contains basins
of all measures $\nu\in V$. So by Theorem \ref{4..2} its dimension is not less than the right hand
side of \eqref{4,,9}.
It is easily seen from the definition \eqref{4,,4} of the Billingsley entropy $S(\nu,\theta)$ that
it depends continuously on the measure $\nu\in M_1(X)$. Consider the closure $\barV$ of $V$.
Obviously, it is compact. Fix any $\eps>0$. By Lemma \ref{4..3} for any measure $\nu\in\barV$ there
exists a neighborhood $O(\nu)$ such that
\begin{equation} \label{4,,10}
\dim_H\pin\bigl\{x\in X^{\mathbb N}\bigm| \delta_{x,n}\succ O(\nu)\bigr\} \pin\le\pin S(\nu,\theta)+\eps
\pin\le\pin \sup_{\nu\in V} S(\nu,\theta)+\eps.
\end{equation}
Pick out a finite covering of $\barV$ composed of neighborhoods of this sort. Then the set $A
=\{\pin x\in X^{\mathbb N} \mid \delta_{x,n}\succ V\pin\}$ will be covered by a finite collection
of sets of the form $\{\pin x\in X^{\mathbb N}\mid \delta_{x,n}\succ O(\nu)\pin\}$ satisfying
\eqref{4,,10}. By the arbitrariness of $\eps$ this implies the statement of Theorem \ref{4..4}.
\qed
\proofskip
A result very similar to Theorem \ref{4..4} was proved by Billingsley in \cite[Theorem
7.1]{Billingsley}.
Suppose that a certain subset \pin$\Xi\subset X$ is specified in the set $X =\{1,\dots,r\}$. In
this case the subset \pin$\Xi^{\mathbb N}\subset X^{\mathbb N}$ will be named the \emph{generalized
Cantor set.} It consists of those sequences $x =(x_1,x_2,\dots)$ in which all $x_t\in\Xi$.
\begin{theorem} \label{4..5}
If the space\/ $X^{\mathbb N}$ is equipped with the\/ $\theta$-metric\/ \eqref{4,,3}, then the
Hausdorff dimension of the generalized Cantor set\/ \pin$\Xi^{\mathbb N}$ coincides with the unique
solution of Moran's equation
\begin{equation} \label{4,,11}
\sum_{i\pin\in\pin\Xi} \theta(i)^s =1.
\end{equation}
\end{theorem}
This theorem was first proved by Moran in 1946 \cite{Moran} for generalized Cantor subsets of the
real axis and afterwards it was extended by Hutchinson \cite{Hutchinson} to the attractors of
self-similar geometric constructions in $\mathbb R^r$. Let us show how it can be derived from the
maximal dimension principle.
\emph{Proof.} Let $s$ be the solution to Moran's equation. Introduce a probability distribution
$\nu$ on $X$, setting $\nu(i) =\theta(i)^s$ for $i\in\Xi$ and $\nu(i) =0$ for $i\notin\Xi$. Then
\begin{equation} \label{4,,12}
S(\nu,\theta) \pin=\pin \frac{\sum_{i=1}^r \nu(i)\ln \nu(i)}{\sum_{i=1}^r \nu(i)\ln \theta(i)} \pin=\pin
\frac{\sum_{i\in\Xi}\theta(i)^s\ln \theta(i)^s}{\sum_{i\in\Xi}\theta(i)^s\ln \theta(i)} \pin=\pin s.
\end{equation}
Consider the set $B(\nu)\cap\Xi^{\mathbb N}$. It has measure one with respect to the
distribution $P_\nu =\nu^{\mathbb N}$. Besides, for every point $x\in B(\nu)\cap\Xi^{\mathbb N}$ by
\eqref{4,,5} we have the equality $d_{P_\nu}(x) =S(\nu,\theta)$. In this setting it follows from
Theorem \ref{4..1} and formula \eqref{4,,12} that
\begin{equation} \label{4,,13}
\dim_H \bigl(B(\nu)\cap\Xi^{\mathbb N}\bigr) =S(\nu,\theta) =s.
\end{equation}
Denote by $V$ the collection of all probability measures on $X$ supported on \pin$\Xi\subset X$.
Evidently, for each point $x\in\Xi^{\mathbb N}$ all the limit points of the sequence $\delta_{x,n}$
belong to $V$. Hence we can apply Theorem \ref{4..4}, which implies
\begin{equation} \label{4,,14}
\dim_H \pin\Xi^{\mathbb N} \pin\le\pin
\dim_H\pin\bigl\{x\in X^{\mathbb N} \bigm| \delta_{x,n}\succ V\bigr\} \pin=\pin
\sup_{\nu\in V} S(\nu,\theta) \pin=\pin
\sup_{\nu\in V}\frac{\sum_{i\in\Xi} \nu(i)\ln \nu(i)}{\sum_{i\in\Xi} \nu(i)\ln \theta(i)}\pin.
\end{equation}
Note that for every measure $\nu\in V$
\begin{equation*}
s\sum_{i\pin\in\pin\Xi} \nu(i)\ln \theta(i) \pin-\sum_{i\pin\in\pin\Xi} \nu(i)\ln \nu(i) \pin=\pin
\sum_{\nu(i)>0}\hspace{-0.3em} \nu(i)\ln\frac{\theta(i)^s}{\nu(i)} \pin\le\pin
\ln\Biggl\{\sum_{\nu(i)>0}\hspace{-0.3em} \nu(i)\frac{\theta(i)^s}{\nu(i)}\!\pin\Biggr\} \pin\le\pin
0,
\end{equation*}
where we have used the concavity of the logarithm function. It follows that the right hand side of
\eqref{4,,14} does not exceed $s$. Finally, comparing \eqref{4,,13} and \eqref{4,,14}, we obtain
the desired equality $\dim_H \pin\Xi^{\mathbb N} =s$. \qed
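Moran's equation \eqref{4,,11} is a one-dimensional root-finding problem: its left hand side is
continuous and strictly decreasing in $s$, so bisection suffices. An illustrative Python sketch
(ours); the first example is the classical middle-thirds Cantor set, where $\theta(i)\equiv 1/3$
and $\Xi$ has two elements.
\begin{verbatim}
def moran_dimension(thetas, lo=0.0, hi=50.0, iters=200):
    # solve sum_i theta_i^s = 1 by bisection; the LHS is decreasing in s
    f = lambda s: sum(t ** s for t in thetas) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# middle-thirds Cantor set: Xi has two elements, theta(i) = 1/3
print(moran_dimension([1/3, 1/3]))        # ln 2 / ln 3 = 0.6309...
# a non-homogeneous example; here the root is exactly 1
print(moran_dimension([1/2, 1/4, 1/4]))
\end{verbatim}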
\section{Branching processes} \label{5..}
First let us introduce the basic notions concerning the simplest Galton--Watson branching process.
Suppose that a random variable $Z$ takes nonnegative values $k\in\mathbb Z_+$ with probabilities
$p_k$. The \emph{Galton--Watson branching process} is a sequence of integer-valued random variables
$Z_0$, $Z_1$, $Z_2$, \dots{} such that $Z_0\equiv 1$, \,$Z_1 =Z$, and each subsequent $Z_{n+1}$ is
defined as the sum of $Z_n$ independent copies of the random variable $Z$. In particular, if
$Z_n =0$ then $Z_{n+1} =0$ as well. Usually $Z_n$ is thought of as the total number of descendants
in the $n$-th generation of a single common ancestor, under the condition that each descendant,
independently of the others, gives birth to $Z$ children.
It is known that in some cases the posterity of the initial ancestor may degenerate (when, starting
from a certain $n$, all $Z_n$ are zero) and in other cases it can ``flourish'' (when $Z_n$ grows
exponentially). The type of behavior of the branching process depends on the mean number of
children of an individual
\begin{equation*}
m =\mathsf{E}\kern 0.07em Z = \sum_{k=0}^\infty kp_k
\end{equation*}
and on the generating function of that number
\begin{equation*}
f(s) =f_1(s) =\sum_{k=0}^\infty p_k s^k.
\end{equation*}
Obviously, the restriction of the function $f(s)$ to the segment $[0,1]$ is nonnegative,
nondecreasing, convex, and satisfies $f(1) =1$ and $f'(1) =m$.
In the theory of branching processes (see, for instance, \cite{Athreya-Ney,Harris}) the following
statements have been proved.
\begin{theorem} \label{5..1}
The generating functions of the number of descendants in the\/ $n$-th generation
\begin{equation*}
f_n(s) =\sum_{k=0}^\infty \mathsf{P}\{Z_n=k\}\pin s^k
\end{equation*}
\noindent
satisfy the recursion relation\/ $f_{n+1}(s) =f(f_n(s))$.
\end{theorem}
\begin{theorem} \label{5..2}
If\/ $m\le 1$ then the branching process degenerates almost surely\/ $($except in the case when each
individual gives birth to exactly one child\/$)$. If\/ $m>1$ then the probability\/ $q$ of
degeneration is less than\/ $1$ and coincides with the unique root, different from\/ $1$, of the
equation\/ $f(s) =s$ on the segment\/ $[0,1]$.
\end{theorem}
\begin{theorem} \label{5..3}
If\/ $m>1$ and\/ $\mathsf{E}\kern 0.07em Z^2<\infty$ then the sequence\/ $W_n =Z_n/m^n$ converges almost surely to a
random variable\/ $W$ such that\/ $\mathsf{P}\{W>0\} =1-q$. If\/ $m>1$ and\/ $\mathsf{E}\kern 0.07em Z^2 =\infty$ then
for any number\/ $m'<m$ with probability\/ $1-q$
\begin{equation*}
\lim_{n\to\infty} Z_n/m^n <\infty, \qquad \lim_{n\to\infty} Z_n\big/(m')^n =\infty
\end{equation*}
$($here\/ $q$ is the probability of degeneration of the branching process\/$)$.
\end{theorem}
Thus, in the case $m>1$ there is an alternative for the total number of descendants $Z_n$:
either it vanishes at a certain moment $n_0$ (with probability $q<1$) or it is asymptotically
equivalent to $W m^n$ (with the complementary probability $1-q$), where the random variable $W>0$
does not depend on $n$ (except in the case $\mathsf{E}\kern 0.07em Z^2 =\infty$, when only the logarithmic equivalence
$\ln Z_n \sim \ln m^n$ is guaranteed). All other types of behavior of the number of descendants have
zero probability.
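The extinction probability from Theorem \ref{5..2} is easy to compute: since
$f_n(0)=\mathsf{P}\{Z_n=0\}$ increases to the probability of degeneration, by Theorem \ref{5..1} it
suffices to iterate the map $s\mapsto f(s)$ starting from $0$. An illustrative Python sketch (ours;
the offspring distribution is an arbitrary example):
\begin{verbatim}
# Offspring distribution p_k = P{Z = k} (an arbitrary example with m > 1).
p = {0: 0.2, 1: 0.3, 2: 0.4, 3: 0.1}
m = sum(k * pk for k, pk in p.items())    # mean number of children, here 1.4

def f(s):
    # generating function f(s) = sum_k p_k s^k
    return sum(pk * s ** k for k, pk in p.items())

q = 0.0                  # f_n(0) = P{Z_n = 0} increases to the
for _ in range(10000):   # probability of degeneration
    q = f(q)

print(m, q)              # q is the root of f(s) = s in [0,1); here about 0.372
\end{verbatim}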
We will exploit these theorems in the study of colored branching processes.
Suppose now that each individual may give birth to children of $r$ different colors (or $r$
different genders, if one likes). We will suppose that the posterity of each individual in the
first generation is a random set $X$ containing a random number $k_1$ of children of the
first color, a random number $k_2$ of children of the second color, and so on up to $k_r$ children of
color $r$. All elements of $X$ (including elements of the same color) are treated as distinct. The
ordered array $k=(k_1,k_2,\dots,k_r) \in \mathbb Z_+^r$ will be called the \emph{color structure}
of the set of children $X$. Denote by $p_k$ the probability of birth of the set $X$ with color
structure $k=(k_1,k_2,\dots,k_r)$. Naturally, all the probabilities $p_k$ are nonnegative and
\begin{equation*}
\sum_{k\in\mathbb Z_+^r} p_k =1.
\end{equation*}
\noindent
If an individual $x_1$ gave birth to $x_2$, $x_2$ gave birth to $x_3$, and so on up to an
individual $x_n$, then the sequence $x =(x_1,\dots,x_n)$ will be called a \emph{genetic line} of
length $n$.
Let us construct a new branching process taking into account not only the total number of
descendants but also the color of each individual and all its upward and downward lineal relations.
This process may be thought of as a random \emph{genealogical tree} with a common ancestor in the
root and all its descendants in the vertices, where each parent is linked with all its children. In
the case of degenerating population its genealogical tree is finite, and in the case of
``flourishing'' one the tree is infinite.
Formally it is convenient to define such a process as a sequence of random sets $X_n$ containing
all genetic lines of length $n$. As the first set $X_1$ we take $X$. The subsequent $X_n$ are built
up by induction: if $X_n$ is already known, then for all genetic lines $(x_1,\dots,x_n)\in X_n$
define disjoint independent random sets of children $X(x_1,\dots,x_n)$, each with the same color
structure distribution as $X$, and put
\begin{equation*}
X_{n+1} \pin=\pin \bigl\{ (x_1,\dots,x_n,x_{n+1})\bigm| (x_1,\dots,x_n)\in X_n,\ \, x_{n+1}\in
X(x_1,\dots,x_n) \bigr\}.
\end{equation*}
The stochastic process $X_1$, $X_2$, $X_3$, \dots\ built in this way will be referred to as a
\emph{colored branching process} (or an \emph{unconditional} colored branching process if one wishes
to emphasize that the posterity of any individual is independent of its color and genealogy).
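The construction just described is easy to simulate. The following Python sketch is ours and purely
illustrative: it records only the colors of the individuals along each genetic line, and the
offspring law (independent counts per color) is a simple example rather than a general distribution
of color structures.
\begin{verbatim}
import random

r = 3                                  # colors 1..r
random.seed(1)

def children():
    # an independent random set of children; each color i occurs
    # 0, 1 or 2 times (a simple illustrative offspring law with mu(i) = 1)
    return [i for i in range(1, r + 1)
              for _ in range(random.choice([0, 1, 1, 2]))]

def step(lines):
    # from genetic lines of length n build those of length n + 1
    return [line + (c,) for line in lines for c in children()]

X = [(c,) for c in children()]         # X_1: children of the initial ancestor
for n in range(1, 6):
    print(n, len(X))                   # |X_n| grows roughly like m^n = 3^n
    X = step(X)

print(X[0] if X else "degenerated")    # colors along one genetic line
\end{verbatim}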
\section{The McMillan theorem for colored branching\\[-2pt] processes} \label{6..}
Consider a colored branching process $X_1$, $X_2$, \dots\ determined by a finite collection of
colors $\Omega =\{1,\dots,r\}$ and a probability distribution $\{\pin p_k\mid k\in\mathbb
Z^r_+\pin\}$, where $k=(k_1,\dots,k_r)$ is the color structure of each individual's set of children
$X$. We will always assume that $X_1$ is generated by a single initial individual.
For any genetic line $x =(x_1,\dots,x_n)\in X_n$ define the \emph{spectrum} $\delta_{x,n}$ as the
corresponding empirical measure on $\Omega$ by the rule
\begin{equation} \label{6,,1}
\delta_{x,n}(i) =\frac{\#\{\pin t\mid g(x_t) =i\pin\}}{n}, \qquad i\in\Omega,
\end{equation}
where $g(x_t)$ denotes the color of $x_t$. In other words, $\delta_{x,n}(i)$ is the fraction of
individuals of color $i$ in the genetic line $x$. Our next goal is to obtain asymptotic estimates
for the cardinalities of the random sets
\begin{equation*}
\bigl\{ x =(x_1,\dots,x_n)\in X_n\bigm| \delta_{x,n}\in O(\nu)\bigr\},
\end{equation*}
where $O(\nu)$ is a small neighborhood of the distribution $\nu$ on the set of colors $\Omega$.
Denote by $\mu(i)$ the expected number of members of color $i$ in $X$:
\begin{equation} \label{6,,2}
\mu(i) =\sum_{k\in\mathbb Z^r_+} k_ip_k, \qquad i=1,\,\dots,\,r.
\end{equation}
Provided all $\mu(i)$ are finite, the vector $\mu =(\mu(1),\dots,\mu(r))$ can be regarded as a
measure on the set of colors $\Omega$. This measure generates the Cartesian product measure $\mu^n$
on $\Omega^n$.
Define a mapping $G:X_n\to\Omega^n$ by means of the formula
\begin{equation*}
G(x_1,\dots,x_n) =\bigl(g(x_1),\pin\dots,\pin g(x_n)\bigr),
\end{equation*}
where $g(x_t)$ is the color of $x_t$.
\begin{lemma} \label{6..1}
For any\/ $\omega =(\omega_1,\dots,\omega_n) \in \Omega^n$ we have
\begin{equation} \label{6,,3}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_n\mid G(x) =\omega\pin\} \pin=\pin \prod_{t=1}^n \mu(\omega_t)
\pin=\pin \mu^n(\omega).
\end{equation}
\end{lemma}
\emph{Proof.} Drop the last coordinate of $\omega$ and let $\omega' = (\omega_1, \dots,
\omega_{n-1})$. For any genetic line $(x_1,\dots,x_{n-1})\in X_{n-1}$, by the definition
of an unconditional colored branching process we have
\begin{equation*}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x_{n}\in X(x_1,\dots,x_{n-1})\mid g(x_{n}) =\omega_{n}\pin\}
\pin=\pin \mu(\omega_{n}).
\end{equation*}
Evidently, this expression does not depend on $x' =(x_1,\dots,x_{n-1})$. Therefore,
\begin{equation*}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_{n}\mid G(x) =\omega\pin\} \pin=\pin
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x'\in X_{n-1}\mid G(x') =\omega'\pin\}\pin \mu(\omega_{n}).
\end{equation*}
Repeated application of the latter equality gives \eqref{6,,3}. \qed
\proofskip
For the measure $\mu$ from \eqref{6,,2}, define the Kullback action
\begin{equation*}
\rho(\nu,\mu) =\sum_{i\in\Omega} \nu(i)\ln\frac{\nu(i)}{\mu(i)}, \qquad \nu\in M_1(\Omega),
\end{equation*}
where $M_1(\Omega)$ is the set of all probability measures on $\Omega$. This formula is a copy of
\eqref{2,,2}.
\begin{theorem} \label{6..2}
Suppose\/ $X_1,\, X_2,\, \dots$ is an unconditional colored branching process with a finite
collection of colors\/ $\Omega$. Then for any\/ $\eps>0$ and probability measure\/ $\nu\in
M_1(\Omega)$ there exists a neighborhood\/ $O(\nu)\subset M_1(\Omega)$ such that for all natural
numbers\/ $n$
\begin{equation} \label{6,,4}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \pin\le\pin
e^{n(-\rho(\nu,\mu)+\eps)}.
\end{equation}
On the other hand, for any\/ $\eps>0$ and any neighborhood\/ $O(\nu)$
\begin{equation} \label{6,,5}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \pin\ge\pin
e^{n(-\rho(\nu,\mu)-\eps)} \quad\ \text{as}\ \ n\to\infty.
\end{equation}
\end{theorem}
If $\rho(\nu,\mu) =+\infty$, the expression $-\rho(\nu,\mu)+\eps$ in \eqref{6,,4} should be
treated as an arbitrary negative real number.
\emph{Proof.} It follows from \eqref{6,,1} that for every genetic line $x\in X_n$ its spectrum
$\delta_{x,n}$ coincides with the empirical measure $\delta_{\omega,n}$, where $\omega =G(x)$.
Therefore,
\begin{equation} \label{6,,6}
\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \,=
\sum_{\omega\in\Omega^n:\pin \delta_{\omega,n}\in O(\nu)}
\#\{\pin x\in X_n\mid G(x) =\omega\pin\}. \\[-6pt]
\end{equation}
It follows from \eqref{6,,3} and \eqref{6,,6} that
\begin{equation*}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \pin=\pin
\mu^n\{\pin\omega\in\Omega^n\mid \delta_{\omega,n}\in O(\nu)\pin\}.
\end{equation*}
The latter equality converts estimates \eqref{6,,4} and \eqref{6,,5} into the already proved
estimates \eqref{3,,1} and \eqref{3,,2} from the large deviations principle. \qed
\proofskip
It is remarkable that this last reference to the large deviations principle serves as the only
``umbilical cord'' linking the first three sections of the paper with the others.
Now we are ready to state an analog of the McMillan theorem for colored branching processes. Let
$q^*$ be the probability of degeneration of the process (the probability of the event that, starting
from a certain number $n$, all the sets $X_n$ turn out to be empty).
\begin{theorem} \label{6..3}
Suppose\/ $X_1,\, X_2,\, \dots$ is an unconditional colored branching process with a finite
collection of colors\/ $\Omega$. Then for any\/ $\eps>0$ and any probability measure\/ $\nu\in
M_1(\Omega)$ there exists a neighborhood\/ $O(\nu)\subset M_1(\Omega)$ such that almost surely
\begin{equation} \label{6,,7}
\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \pin<\pin e^{n(-\rho(\nu,\mu)+\eps)} \quad\
\text{as}\ \ n\to\infty.
\end{equation}
On the other hand, if\/ $\rho(\nu,\mu) <0$ then for any neighborhood\/ $O(\nu)$ and positive\/
$\eps$ the estimate
\begin{equation} \label{6,,8}
\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \pin>\pin e^{n(-\rho(\nu,\mu)-\eps)}
\quad\ \text{as}\ \ n\to\infty
\end{equation}
\noindent
holds with probability\/ $1-q^*$ $($or almost surely under the condition that our branching process
does not degenerate\/$)$.
\end{theorem}
\emph{Proof.} Application of Chebyshev's inequality to \eqref{6,,4} gives
\begin{equation*}
\mathsf{P}\bigl\{\#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \ge
e^{n(-\rho(\nu,\mu)+2\eps)}\bigr\} \pin\le\pin e^{-n\eps}.
\end{equation*}
Sum up these inequalities over all $n\ge N$:
\begin{equation*}
\mathsf{P}\bigl\{\exists\, n\ge N\!:\ \#\{\pin x\in X_n\mid \delta_{x,n}\in O(\nu)\pin\} \ge
e^{n(-\rho(\nu,\mu)+2\eps)}\bigr\} \pin\le\pin \frac{e^{-N\eps}}{1-e^{-\eps}}.
\end{equation*}
This implies \eqref{6,,7} with $2\eps$ in place of $\eps$, which does not change its meaning.
We proceed to the second part of the theorem. Let $\kappa = -\rho(\nu,\mu) -\eps$, where the number
$\eps$ is so small that $\kappa>0$. By the second part of Theorem \ref{6..2}, for any neighborhood
$O(\nu)$ there exists $N$ such that
\begin{equation*}
\mathsf{E}\kern 0.07em\kern 0.05em\#\{\pin x\in X_N\mid \delta_{x,N}\in O(\nu)\pin\} \pin>\pin e^{N\kappa}.
\end{equation*}
Without loss of generality we may assume that $O(\nu)$ is convex.
Construct a Galton--Watson branching process satisfying the conditions
\begin{gather} \label{6,,9}
Z_1 \pin=\pin \#\{\pin x\in X_N\mid \delta_{x,N}\in O(\nu)\pin\}, \\[6pt] \label{6,,10}
Z_n \pin\le\pin \#\{\pin x\in X_{nN}\mid \delta_{x,nN}\in O(\nu)\pin\},
\qquad n=2,\,3,\,\dots
\end{gather}
Let the random variable $Z_1$ be defined by \eqref{6,,9}. For $n>1$ define $Z_n$ as a total number
of genetic lines
\begin{equation*}
(x_1,\pin\dots,\pin x_N,\ \ .\ \ .\ \ .\ \ ,\pin x_{(n-1)N+1},\pin\dots,\pin x_{nN})
\pin\in\pin X_{nN}
\end{equation*}
such that the spectrum of each segment $(x_{kN+1},\pin\dots,\pin x_{(k+1)N})$ belongs to $O(\nu)$.
In other words, we will treat as ``individuals'' of the process $Z_1$, $Z_2$, $Z_3$, \dots\ those
segments $(x_{kN+1},\pin\dots,\pin x_{(k+1)N})$ of genetic lines of the initial process whose
spectrum lies in $O(\nu)$. Then \eqref{6,,10} follows from the convexity of $O(\nu)$, and from the
unconditionality of the initial colored branching process it can be concluded that the sequence
$Z_1$, $Z_2$, \dots\ indeed forms a Galton--Watson branching process.
By construction, $\mathsf{E}\kern 0.07em Z_1 >e^{N\kappa}$. In this setting Theorem \ref{5..3} asserts that there is an
alternative for the sequence $Z_n$: either it tends to zero with a certain probability $q<1$ or it
grows faster than $e^{nN\kappa}$ with probability $1-q$. In the second case, by virtue of
\eqref{6,,10},
\begin{equation} \label{6,,11}
\#\{\pin x\in X_{nN}\mid \delta_{x,nN}\in O(\nu)\pin\}\, e^{-nN\kappa}
\pin\rightarrow\pin \infty \quad\ \text{as}\ \ n\to\infty.
\end{equation}
To finish the proof we have to do two things: verify that in fact \eqref{6,,11} is valid with
probability $1-q^*$ and get rid of the multiplier $N$ there. To do this we will exploit two ideas.
First, if the colored branching process $X_1$, $X_2$, \dots\ were generated by $m$ initial
individuals instead of a single one, then \eqref{6,,11} would be valid with probability at least
$1-q^m$. Second, if one genetic line is a part of another and the ratio of their lengths is close
to $1$, then their spectra are close as well.
Obviously, the total number of individuals in the $n$-th generation of the initial branching
process $X_1$, $X_2$, $X_3$, \dots\ equals $|X_n|$. The sequence of random variables $|X_n|$ forms
a Galton--Watson branching process with probability of degeneration $q^*$, which does not exceed
$q$. Therefore, the sequence $|X_n|$ grows exponentially with probability $1-q^*$.
Consider the colored branching process $X_{k+1}$, $X_{k+2}$, $X_{k+3}$, \dots\ obtained from the
initial one by truncation of the first $k$ generations. It is a union of $|X_k|$
independent branching processes generated by the individuals of the $k$-th generation. It satisfies
\eqref{6,,11} with probability at least $1- q^{|X_k|}$. Hence for the initial process, with even
greater probability, we obtain the condition
\begin{equation} \label{6,,12}
\#\{\pin x\in X_{k+nN}\mid \delta_{x,k+nN}\in O^*(\nu)\pin\}\, e^{-nN\kappa}
\pin\rightarrow\pin \infty \quad\ \text{as}\ \ n\to\infty,
\end{equation}
where $O^*(\nu)$ is an arbitrary neighborhood of $\nu$ containing the closure of $O(\nu)$.
Suppose the sequence $|X_n|$ grows exponentially. Then for every $m\in\mathbb N$ define the numbers
\begin{equation*}
k_i \pin=\pin\min\pin\{\pin k\mid |X_k|\ge m,\ \ k\equiv i \pmod N\pin\},
\qquad i=0,\,1,\,\dots,\,N-1.
\end{equation*}
For each $k=k_i$, the condition \eqref{6,,12} holds with probability at least $1-q^m$, and together
these conditions give the estimate
\begin{equation*}
\#\{\pin x\in X_n\mid \delta_{x,n}\in O^*(\nu)\pin\} \pin>\pin e^{n\kappa}
\quad\ \text{as}\ \ n\to\infty
\end{equation*}
with probability at least $1-Nq^m$. By virtue of the arbitrariness of $m$ this estimate is valid
almost surely (under the condition $|X_n|\to\infty$, which takes place with probability $1-q^*$).
It is equivalent to \eqref{6,,8}. \qed
\section{Dimensions of random fractals (upper bounds)}\label{7..}
We continue the investigation of the colored branching process $X_1$, $X_2$, \dots\ with a finite
collection of colors $\Omega =\{1,\dots,r\}$. Let us consider the corresponding set of infinite
genetic lines
\begin{equation*}
X_\infty \pin=\pin\bigl\{ x=(x_1,x_2,x_3,\dots)\bigm| (x_1,\dots,x_n)\in X_n\ \
\forall\,n\in \mathbb N \bigr\}.
\end{equation*}
Define the cylinder $\theta$-metric on $X_\infty$
\begin{equation}\label{7,,1}
\dist(x,y) =\prod_{t=1}^n\theta(x_t), \qquad n=\inf\pin\{\pin t\mid x_t\ne y_t\pin\} -1,
\end{equation}
where the numbers $\theta(1)$, \dots, $\theta(r)$ are taken from $(0,1)$.
We will be interested in the Hausdorff dimensions of both the space $X_\infty$ and its various subsets
defined in terms of partial limits of empirical measures on $\Omega$ (those measures are called
spectra and denoted $\delta_{x,n}$). If the colored branching process degenerates, then $X_\infty$
is empty. Therefore only the case when $m =\mathsf{E}\kern 0.07em |X_1| >1$, and the cardinality of
$X_n$ grows at a rate of order $m^n$, is of interest.
As before, denote by $\mu(i)$, where $i\in \Omega$, the expected number of individuals of color $i$ in
the random set $X_1$. It will always be supposed that $\mu(i)<\infty$. Consider any probability
measure $\nu\in M_1(\Omega)$. It will be proved below that the dimension of $\{\pin x\in
X_\infty\mid \delta_{x,n}\to \nu\pin\}$ can be computed by means of the function
\begin{equation} \label{7,,2}
d(\nu,\mu,\theta) \pin=\pin \frac{\rho(\nu,\mu)\strut}{\sum_{i=1}^r \nu(i)\ln \theta(i)\strut}
\pin=\pin \frac{\sum_{i=1}^r \nu(i)\ln\frac{\textstyle \nu(i)}{\textstyle \mu(i)}}{\sum_{i=1}^r
\nu(i)\ln \theta(i)\strut}.
\end{equation}
\noindent
We will name it the \emph{Billingsley--Kullback entropy.}
In \eqref{7,,2} the numerator is the Kullback action and the denominator is negative. If $\mu$ is a
probability measure on $\Omega$ then the Kullback action is nonnegative. But in our setting this is
not the case since $m =\mu(1)+\ldots+ \mu(r) > 1$. In particular, if $\mu(i)>\nu(i)$ for all $i\in
\Omega$ then the Kullback action will be negative, and the Billingsley--Kullback entropy positive.
Note, in addition, that if $\mu(1) =\ldots =\mu(r) =1$ then $-\rho(\nu,\mu)$ is equal to Shannon's
entropy $H(\nu)$, and the Billingsley--Kullback entropy turns into the Billingsley entropy
\eqref{4,,4}.
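As with the Billingsley entropy, formula \eqref{7,,2} is elementary to evaluate. The following
Python sketch (ours, with arbitrary illustrative values of $\nu$, $\mu$ and $\theta$) also checks
the reduction to the Billingsley entropy when $\mu(1)=\ldots=\mu(r)=1$.
\begin{verbatim}
import math

nu    = {1: 0.5, 2: 0.3, 3: 0.2}    # probability measure on Omega
mu    = {1: 1.5, 2: 0.9, 3: 0.6}    # expected offspring numbers, m = 3 > 1
theta = {1: 0.4, 2: 0.3, 3: 0.3}    # metric parameters theta(i) in (0,1)

def d(nu, mu, theta):
    # Billingsley--Kullback entropy: Kullback action over sum_i nu(i) ln theta(i)
    rho = sum(p * math.log(p / mu[i]) for i, p in nu.items() if p > 0)
    den = sum(p * math.log(theta[i]) for i, p in nu.items())
    return rho / den

print(d(nu, mu, theta))

# with mu identically 1 it reduces to the Billingsley entropy S(nu, theta)
ones = {i: 1.0 for i in nu}
S = (sum(p * math.log(p) for p in nu.values())
     / sum(p * math.log(theta[i]) for i, p in nu.items()))
print(d(nu, ones, theta), S)        # the two numbers coincide
\end{verbatim}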
\begin{lemma}\label{7..1}
Let the space\/ $X_\infty$ of infinite genetic lines be equipped with the metric\/ \eqref{7,,1}.
Then for any probability measure\/ $\nu\in M_1(\Omega)$ and any\/ $\eps>0$ there exists a
neighborhood\/ $O(\nu)$ such that the Hausdorff dimension of the set
\begin{equation*}
A =\bigl\{ x\in X_\infty\bigm| \forall\,N\ \exists\,n>N\!:\, \delta_{x,n}\in O(\nu)\bigr\}
\end{equation*}
does not exceed\/ $d(\nu,\mu,\theta)+\eps$ almost surely.
\end{lemma}
\emph{Proof.} It is carried out in the same manner as the proof of Lemma \ref{4..3}. Take any
$\kappa>0$. By Theorem \ref{6..3} there exists a neighborhood $O(\nu)$ such that almost surely
\begin{equation}\label{7,,3}
\#\bigl\{ x\in X_n\bigm| \delta_{x,n}\in O(\nu)\bigr\} \pin\le\pin e^{n(-\rho(\nu,\mu) +\kappa)}
\quad\ \text{as}\ \ n\to\infty.
\end{equation}
Shrink this neighborhood in such a way that, in addition, for all measures $\delta\in O(\nu)$,
\begin{equation*}
\sum_{i=1}^r \delta(i)\ln \theta(i) \pin<\pin \sum_{i=1}^r \nu(i) \ln \theta(i) +\kappa.
\end{equation*}
Then for each cylinder $Z_n(x)$ satisfying the condition $\delta_{x,n}\in O(\nu)$ we have the
estimate
\begin{align} \notag
|Z_n(x)| \pin=\pin \prod_{t=1}^n \theta(x_t) \pin&=\pin\exp\biggl\{\sum_{t=1}^n \ln \theta(x_t)\biggr\}
\pin=\pin \exp\biggl\{n\sum_{i=1}^r \delta_{x,n}(i)\ln \theta(i)\biggr\} \pin<\pin\\[3pt]
\pin&<\pin \exp\biggl\{n\sum_{i=1}^r \nu(i)\ln \theta(i) +n\kappa\biggr\}. \label{7,,4}
\end{align}
For every natural number $N$ the set $A$ is covered by the collection of cylinders
\begin{equation*}
\cal U_N \pin=\pin\bigcup_{n=N}^\infty \bigl\{ Z_n(x)\bigm| \delta_{x,n}\in O(\nu)\bigr\}.
\end{equation*}
Evidently, the diameter of this covering tends to zero as $N\to \infty$. Hence $\mathrm{mes}\kern 0.04em(\cal
U_N,\alpha)$ can be estimated by virtue of formulas \eqref{7,,3} and \eqref{7,,4}:
\begin{align} \notag
\mathrm{mes}\kern 0.04em(\cal U_N,\alpha) \pin&=\sum_{Z_n(x)\in\pin \cal U_N}\hspace{-0.5em} |Z_n(x)|^\alpha
\pin\le\pin \sum_{n=N}^\infty e^{n(-\rho(\nu,\mu) +\kappa)}
\exp\biggl\{\alpha n\sum_{i=1}^r \nu(i)\ln \theta(i) +\alpha n\kappa\biggr\} \pin=\pin \\[3pt]
\pin&=\pin \sum_{n=N}^\infty \exp\biggl\{n\pin\biggl(-\sum_{i=1}^r \nu(i)\ln\frac{\nu(i)}{\mu(i)}
+\alpha\sum_{i=1}^r \nu(i)\ln \theta(i)+\kappa+\alpha\kappa\biggr)\!\pin\biggr\}. \label{7,,5}
\end{align}
If $\alpha > d(\nu,\mu,\theta)$, then $\kappa$ can be chosen so small that the expression in
parentheses in the exponent is negative, and the whole sum \eqref{7,,5} tends to zero as $N\to \infty$.
Therefore the Hausdorff measure (of dimension $\alpha$) of the set $A$ is zero, and its dimension
does not exceed $\alpha$. \qed
\proofskip
As before, we say that the sequence of empirical measures $\delta_{x,n}$ condenses on a subset
$V\subset M_1(\Omega)$ (notation $\delta_{x,n}\succ V$) if it has a limit point in $V$.
\begin{theorem} \label{7..2}
Let\/ $X_1,\, X_2,\, X_3,\, \dots$ be an unconditional colored branching process with a finite set of
colors\/ $\Omega$, and let the set\/ $X_\infty$ of all infinite genetic lines be equipped with the
cylinder metric\/ \eqref{7,,1}. Then for any subset\/ $V\subset M_1(\Omega)$ almost surely
\begin{equation}\label{7,,6}
\dim_H\pin\{\pin x\in X_\infty \mid \delta_{x,n}\succ V\pin\} \pin\le\pin
\sup_{\nu\in V} d(\nu,\mu,\theta).
\end{equation}
In particular, $\dim_H X_\infty\le s$ almost surely, where\/ $s$ is the unique root of the ``Bowen
equation''
\begin{equation} \label{7,,7}
\sum_{i=1}^r \mu(i) \theta(i)^s =1.
\end{equation}
\end{theorem}
\emph{Proof.} It follows from the definition of the Billingsley--Kullback entropy
$d(\nu,\mu,\theta)$ that it depends continuously on the measure $\nu\in M_1(\Omega)$. Let $\barV$
be the closure of $V$. Obviously, it is compact. Take an arbitrary $\eps>0$. By Lemma \ref{7..1}
for any measure $\nu\in\barV$ there exists a neighborhood $O(\nu)$ such that almost surely
\begin{equation} \label{7,,8}
\dim_H\pin\bigl\{x\in X_\infty\bigm| \delta_{x,n}\succ O(\nu)\bigr\} \pin\le\pin
d(\nu,\mu,\theta)+\eps \pin\le\pin \sup_{\nu\in V} d(\nu,\mu,\theta)+\eps.
\end{equation}
Choose a finite covering of $\barV$ by neighborhoods of this kind. Then the set $\{\pin x\in
X_\infty \mid \delta_{x,n}\succ V\pin\}$ will be covered by a finite collection of sets of the form
$\{\pin x\in X_\infty \mid \delta_{x,n}\succ O(\nu)\pin\}$ satisfying \eqref{7,,8}. By the
arbitrariness of $\eps$ this implies the first statement of Theorem \ref{7..2}.
Let $s$ be a solution of equation \eqref{7,,7}. Note that for any measure $\nu\in M_1(\Omega)$,
since the logarithm function is concave,
\begin{gather*}
s\sum_{i=1}^r \nu(i)\ln \theta(i) \pin-\sum_{i=1}^r \nu(i)\ln\frac{\nu(i)}{\mu(i)} \pin=\pin
\sum_{i=1}^r \nu(i)\ln\frac{\mu(i)\theta(i)^{s}}{\nu(i)} \pin\le\pin \\[6pt]
\pin\le\pin \ln\Biggl\{\sum_{\nu(i)>0}\hspace{-0.3em}
\nu(i)\frac{\mu(i)\theta(i)^{s}}{\nu(i)}\!\pin\Biggr\} \pin\le\pin 0.
\end{gather*}
Consequently, $d(\nu,\mu,\theta)\le s$. Now the second part of our theorem follows from the first
one if we take $V=M_1(\Omega)$. \qed
\proofskip
{\bf Remark.\,} In fact the ``Bowen equation'' is an equation of the form $P(s\varphi) =0$, where
$P(s\varphi)$ is the topological pressure of a weight function $s\varphi$ in a dynamical system
(more detailed explanations can be found in \cite{Pesin}). If we replace the topological pressure
$P(s\varphi)$ by the spectral potential
\begin{equation*}
\lambda(s\varphi,\mu) = \ln\sum_{i=1}^r e^{s\varphi(i)}\mu(i),
\quad\ \text{where}\ \ \varphi(i) =\ln \theta(i),
\end{equation*}
then the Bowen equation turns into the equation $\lambda(s\varphi,\mu) =0$, which is equivalent to
\eqref{7,,7}.
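In particular, the Bowen equation can be solved numerically by locating the zero of
$s\mapsto\lambda(s\varphi,\mu)$, which is strictly decreasing in $s$ because all $\theta(i)<1$.
An illustrative Python sketch (ours, with the same kind of sample data as above):
\begin{verbatim}
import math

mu    = {1: 1.5, 2: 0.9, 3: 0.6}    # expected offspring numbers, m = 3
theta = {1: 0.4, 2: 0.3, 3: 0.3}    # metric parameters theta(i) in (0,1)

def lam(s):
    # spectral potential lambda(s*phi, mu) with phi(i) = ln theta(i)
    return math.log(sum(theta[i] ** s * m_i for i, m_i in mu.items()))

lo, hi = -60.0, 60.0                # lam is strictly decreasing, so bisect
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if lam(mid) > 0 else (lo, mid)
s = (lo + hi) / 2

print(s, sum(theta[i] ** s * m_i for i, m_i in mu.items()))  # second value is 1
\end{verbatim}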
\section{Block selections of colored branching processes} \label{8..}
Let $\xi_1$, $\xi_2$, $\xi_3$, \dots\ be a sequence of independent identically distributed random
variables taking values $0$ or $1$ (independent Bernoulli trials).
\begin{lemma} \label{8..1}
If\/ $0<p'<p<1$ and\/ $\mathsf{P}\{\xi_i=1\}\ge p$, then
\begin{equation} \label{8,,1}
\mathsf{P}\{\pin\xi_1+\ldots+\xi_k \ge p'k\pin\} \pin\to\pin 1 \quad \text{as}\ \ k\to\infty
\end{equation}
uniformly over all values of the probability\/ $\mathsf{P}\{\xi_i=1\}\ge p$.
\end{lemma}
\emph{Proof.} In the case $\mathsf{P}\{\xi_i=1\} =p$ this follows from the law of large numbers. If
$\mathsf{P}\{\xi_i=1\}$ increases then the probability in the left hand side of \eqref{8,,1}
increases as well. \qed
\proofskip
Consider a colored branching process $X_1$, $X_2$, \dots\ with a finite set of colors $\Omega =
\{1,\dots,r\}$. Each $X_n$ consists of genetic lines $(x_1,x_2,\dots,x_n)$ of length $n$, in which
each subsequent individual was born by the previous one. Fix a (large enough) natural number $N$. We
will split genetic lines of length divisible by $N$ into blocks of length $N$:
\begin{equation*}
(x_1,x_2,\dots,x_{nN}) =(y_1,\dots,y_n), \quad\ \text{where}\quad
y_{k} =(x_{(k-1)N+1},\pin\dots,\pin x_{kN}).
\end{equation*}
Each block $y_k$ generates an empirical measure $\delta_{y_k}$ (spectrum) on $\Omega$ by the rule
\begin{equation*}
\delta_{y_k}(i) =\frac{\#\{\pin t\mid g(x_t)=i,\ \, (k-1)N <t\le kN\pin\}}{N},
\end{equation*}
where $g(x_t)$ denotes the color of $x_t$.
A \emph{block selection of order\/ $N$} from a colored branching process $X_1$, $X_2$, \dots\ is
any sequence of random subsets $Y_n\subset X_{nN}$ with the following property: if
$(y_1,\dots,y_{n+1})\in Y_{n+1}$ then $(y_1,\dots,y_n)\in Y_n$. In this case the sequence of blocks
$(y_1,\dots,y_{n+1})$ will be called a \emph{prolongation} of the sequence $(y_1,\dots,y_n)$.
As above (see \eqref{6,,2}), denote by $\mu(i)$ the expected number of children of color $i$ born by
each individual, and by $\mu$ the corresponding measure on $\Omega$.
\begin{theorem} \label{8..2}
Let\/ $X_1,\, X_2,\, X_3,\, \dots$ be an unconditional colored branching process with a finite set of
colors\/ $\Omega$ and probability of degeneration\/ $q^*<1$. If a measure\/ $\nu\in M_1(\Omega)$
satisfies the condition\/ $\rho(\nu,\mu) <0$, then for any neighborhood\/ $O(\nu)\subset
M_1(\Omega)$ of it and any number\/ $\eps>0$, with probability\/ $1-q^*$ one can extract from the
branching process a block selection\/ $Y_1$, $Y_2$, \dots\ of some order\/ $N$ such that each
sequence of blocks\/ $(y_1,\dots,y_n)\in Y_n$ has at least\/ $l(N)$ prolongations in\/ $Y_{n+1}$,
where
\begin{equation} \label{8,,2}
l(N) = e^{N(-\rho(\nu,\mu)-\eps)},
\end{equation}
and the spectra of all blocks belong to\/ $O(\nu)$.
\end{theorem}
\emph{Proof.} Fix any numbers $p$ and $\eps$ satisfying the conditions
\begin{equation*}
0<p<p+\eps<1-q^*, \qquad \rho(\nu,\mu)+\eps<0.
\end{equation*}
By the second part of Theorem \ref{6..3} for all large enough $N$ we have
\begin{equation} \label{8,,3}
\mathsf{P}\bigl\{ \#\{\pin x\in X_N \mid \delta_{x,N}\in O(\nu)\pin\} >
e^{N(-\rho(\nu,\mu)-\eps/2)} \bigr\} \pin\ge\pin p +\eps.
\end{equation}
Further we will consider finite sequences of random sets $X_1$, \dots, $X_{nN}$ and extract from
them block selections $Y_1$, \dots, $Y_n$ of order $N$ such that the spectra of all their blocks
belong to $O(\nu)$ and each sequence of blocks $(y_1,\dots,y_k)\in Y_k$ has at least $l(N)$
prolongations in $Y_{k+1}$. Denote by $A_n$ the event that a block selection with these
properties exists. Define one more event $A$ by the condition
\begin{equation*}
\#\{\pin x\in X_N \mid \delta_{x,N}\in O(\nu)\pin\} \pin>\pin l(N)\pin e^{N\eps/2}.
\end{equation*}
It follows from \eqref{8,,2} and \eqref{8,,3} that $\mathsf{P}(A)\ge p+\eps$. Evidently, $A\subset
A_1$. Therefore, $\mathsf{P}(A_1)\ge p+\eps$. Now we are going to prove by induction that
$\mathsf{P}(A_n)\ge p$ whenever the order $N$ of selection is large enough. Let us perform the step
of induction. Assume that $\mathsf{P}(A_n)\ge p$ is valid for some $n$. Consider the conditional
probability $\mathsf{P}(A_{n+1}|A)$. By the definition of events $A_{n+1}$ and $A$ it cannot be
less than the probability of the following event: there are at least $l(N)$ wins in a sequence of
$[l(N) e^{N\eps/2}]$ independent Bernoulli trials with probability of win $\mathsf{P}(A_n)$ in
each. Using Lemma \ref{8..1} (with $p' =p/2$ and $k=[l(N) e^{N\eps/2}]$) one can make this
probability greater than $1-\eps$ by increasing~$N$. Then,
\begin{equation*}
\mathsf{P}(A_{n+1}) \ge \mathsf{P}(A)\pin\mathsf{P}(A_{n+1}|A) > (p+\eps)(1-\eps) >p.
\end{equation*}
Thus the inequality $\mathsf{P}(A_n) >p$ is proved for all $n$.
It means that with probability greater than $p$ one can extract from the sequence $X_1$, \dots,
$X_{nN}$ a block selection $Y_1$, \dots, $Y_n$ of order $N$ such that the spectra of all blocks
belong to the neighborhood $O(\nu)$ and each sequence of blocks $(y_1,\dots,y_k)\in Y_k$ has at
least $l(N)$ prolongations in $Y_{k+1}$.
To obtain a block selection of infinite length with the same properties, we will construct finite
block selections $Y_1$, \dots, $Y_n$ in the following manner. Initially, suppose that every $Y_k$,
where $k\le n$, consists of all sequences of blocks $(y_1,\dots,y_k)\in X_{kN}$ such that the
spectrum of each block lies in $O(\nu)$. At the first step we exclude from $Y_{n-1}$ all sequences
of blocks having less than $l(N)$ prolongations in $Y_n$, and then exclude from $Y_n$ all
prolongations of the sequences that have been excluded from $Y_{n-1}$. At the second step we
exclude from $Y_{n-2}$ all sequences of blocks having after the first step less than $l(N)$
prolongations in the modified $Y_{n-1}$, and then exclude from $Y_{n-1}$ and $Y_{n}$ all
prolongations of the sequences that have been excluded from $Y_{n-2}$. Proceeding further in the
same manner, after $n$ steps we will obtain a block selection $Y_1$, \dots, $Y_n$ such that each
sequence of blocks from any $Y_k$ has at least $l(N)$ prolongations in $Y_{k+1}$. Evidently, this
selection will be the maximal among all selections of order $N$ having the mentioned property.
Therefore with probability at least $p$ all the sets $Y_k$ are nonempty.
For every $n$ let us construct, as is described above, the maximal block selection $Y^{(n)}_1$,
\dots, $Y^{(n)}_n$. From the maximality of these selections it follows that
\begin{equation*}
Y^{(n)}_n\supset Y^{(n+1)}_n\supset Y^{(n+2)}_n\supset\ldots
\end{equation*}
Define the sets $Y_n =\bigcap_{k\ge n} Y^{(k)}_n$. Then with probability at least $p$ all of them
are nonempty and form an infinite block selection as in Theorem \ref{8..2}. Since $p$ may be
chosen arbitrarily close to $1-q^*$, such selections exist with probability $1-q^*$. \qed
\proofskip
Theorem \ref{8..2} can be strengthened by taking several measures in place of a single measure
$\nu\in M_1(\Omega)$.
\begin{theorem} \label{8..3}
Let\/ $X_1,\, X_2,\, X_3,\, \dots$ be an unconditional colored branching process with a finite set of
colors\/ $\Omega$ and probability of degeneration\/ $q^*<1$. If a finite collection of measures\/
$\nu_i\in M_1(\Omega)$, where\/ $i=1,\,\dots,\,k$, satisfies the inequalities\/ $\rho(\nu_i,\mu) <0$,
then for any neighborhoods\/ $O(\nu_i)\subset M_1(\Omega)$ and any\/ $\eps>0$, with probability\/
$1-q^*$ one can extract from the branching process a block selection\/ $Y_1$, $Y_2$, \dots\ of some
order\/ $N$ such that for every\/ $i=1,\,\dots,\,k$ each sequence of blocks\/ $(y_1,\dots,y_n)\in
Y_n$ has at least\/
\begin{equation*}
l_i(N) = e^{N(-\rho(\nu_i,\mu)-\eps)}
\end{equation*}
prolongations\/ $(y_1,\dots,y_n,y)\in Y_{n+1}$ with the property\/ $\delta_{y}\in O(\nu_i)$.
\end{theorem}
It can be proved in the same manner as the previous one, only now the event $A_n$ should be
understood as the existence of a finite block selection $Y_1$, \dots, $Y_n$ satisfying the conclusion
of Theorem \ref{8..3}, and the event $A$ should be defined by the system of inequalities
\begin{equation*}
\#\{\pin x\in X_N \mid \delta_{x,N}\in O(\nu_i)\pin\} \pin>\pin l_i(N)\pin e^{N\eps/2},
\qquad i=1,\,\dots,\,k.
\end{equation*}
We leave details to the reader.
\section{Dimensions of random fractals (lower bounds)}\label{9..}
We now continue the investigation of the space of infinite genetic lines
\begin{equation*}
X_\infty \pin=\pin\bigl\{ x=(x_1,x_2,x_3,\dots)\bigm| (x_1,\dots,x_n)\in X_n\ \
\forall\,n\in \mathbb N \bigr\},
\end{equation*}
which is generated by an unconditional colored branching process $X_1$, $X_2$, \dots\ with a finite
set of colors $\Omega =\{1,\dots,r\}$. It is supposed that there is a measure
\begin{equation*}
\mu = (\mu(1),\,\dots,\,\mu(r))
\end{equation*}
on $\Omega$, where $\mu(i)$ denotes the expected number of children of color $i$ born by each
individual, and $X_\infty$ is equipped with the cylinder $\theta$-metric \eqref{7,,1}.
\begin{theorem} \label{9..1}
Let\/ $X_1,\, X_2,\, X_3,\, \dots$ be an unconditional colored branching process with a finite set of
colors\/ $\Omega$ and probability of degeneration\/ $q^*<1$. If a measure\/ $\nu\in M_1(\Omega)$
satisfies the condition\/ $d(\nu,\mu,\theta) >0$, then with probability\/ $1-q^*$ for any
neighborhood\/ $O(\nu)$ we have the lower bound
\begin{equation} \label{9,,1}
\dim_H\pin \{\pin x\in X_\infty\mid \exists\,N\ \forall\,n>N\ \, \delta_{x,n}\in O(\nu)\pin\}
\,\ge\, d(\nu,\mu,\theta).
\end{equation}
\end{theorem}
\emph{Proof.} Fix any number $\alpha <d(\nu,\mu,\theta)$ and $\eps>0$ so small that
\begin{equation} \label{9,,2}
d(\nu,\mu,\theta) \pin=\pin \frac{\rho(\nu,\mu)\strut}{\sum_{i=1}^r \nu(i)\ln \theta(i)\strut} \pin>\pin
\frac{\rho(\nu,\mu) +2\eps\strut}{\sum_{i=1}^r \nu(i)\ln \theta(i) -\eps\strut} \pin>\pin \alpha.
\end{equation}
Then choose a convex neighborhood $O^*(\nu)$ whose closure lies in $O(\nu)$ such that for any
measure $\delta\in O^*(\nu)$
\begin{equation} \label{9,,3}
\sum_{i=1}^r \delta(i)\ln \theta(i) >\sum_{i=1}^r \nu(i)\ln \theta(i)-\eps.
\end{equation}
By Theorem \ref{8..2}, with probability $1-q^*$ one can extract from the branching process under
consideration a block selection $Y_1$, $Y_2$, \dots\ of order $N$ such that any sequence of blocks
$(y_1,\dots,y_n)\in Y_n$ has at least $l(N)$ prolongations in $Y_{n+1}$, where
\begin{equation*}
l(N) = e^{N(-\rho(\nu,\mu)-\eps)},
\end{equation*}
and for each block $y_k =(x_{(k-1)N+1},\dots,x_{kN})$ the corresponding empirical measure
$\delta_{y_k}$ (spectrum) belongs to $O^*(\nu)$. Exclude from this selection a certain part of the
genetic lines in such a way that each of the remaining sequences of blocks $(y_1,\dots,y_n)\in Y_n$
has exactly $[l(N)]$ prolongations in $Y_{n+1}$.
Define the random set
\begin{equation*}
Y_\infty \pin=\pin \bigl\{ y=(y_1,y_2,\dots) \bigm| (y_1,\dots,y_n)\in Y_n, \ \,
n=1,\,2,\,\dots\bigr\}.
\end{equation*}
Any sequence $y=(y_1,y_2,\dots)\in Y_\infty$ consists of blocks of length $N$. Writing out
in order the elements of all these blocks, we obtain from $y$ an infinite genetic line $x =
(x_1,x_2,\dots)\in X_\infty$. Denote it by $\pi(y)$. By the definition of $Y_\infty$ the spectrum
of each block $y_k$ belongs to $O^*(\nu)$. For every point $x=\pi(y)$, where $y\in Y_\infty$, the
empirical measure $\delta_{x,nN}$ is the arithmetic mean of the empirical measures corresponding to
the first $n$ blocks of $y$, and so, by the convexity of $O^*(\nu)$, belongs to $O^*(\nu)$ as well.
It follows that
\begin{equation} \label{9,,4}
\pi(Y_\infty) \pin\subset\pin \{\pin x\in X_\infty\mid \exists\,N\ \forall\,n>N\ \,
\delta_{x,n}\in O(\nu)\pin\}.
\end{equation}
The family of all cylinders of the form $Z_{nN}(x)$, where $x\in \pi(Y_\infty)$, generates some
$\sigma$-algebra on $\pi(Y_\infty)$. Define a probability measure $P$ on this $\sigma$-algebra such
that
\begin{equation*}
P\bigl(Z_{nN}(x)\bigr) =[l(N)]^{-n}.
\end{equation*}
Then for all large enough $N$, all $x \in \pi(Y_\infty)$, and all natural $n$
\begin{equation*}
P\bigl(Z_{nN}(x)\bigr) \le e^{nN(\rho(\nu,\mu)+2\eps)}.
\end{equation*}
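Let us add a short verification of this bound (it is not spelled out above): since the coefficients $\theta(i)$ lie in $(0,1)$, the denominators in \eqref{9,,2} are negative, and as we may assume $\alpha>0$, inequality \eqref{9,,2} gives $\rho(\nu,\mu)+2\eps<0$; hence $l(N)\to\infty$. Consequently, for all $N$ large enough that $l(N)\ge 2$ and $e^{N\eps}\ge 2$,
\begin{equation*}
[l(N)] \pin\ge\pin \tfrac12\, l(N) \pin\ge\pin e^{N(-\rho(\nu,\mu)-2\eps)},
\qquad\text{so that}\qquad
P\bigl(Z_{nN}(x)\bigr) =[l(N)]^{-n} \pin\le\pin e^{nN(\rho(\nu,\mu)+2\eps)}.
\end{equation*}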
On the other hand, by \eqref{9,,3}
\begin{gather*}
|Z_{nN}(x)| \pin=\pin \prod_{t=1}^{nN} \theta(x_t) \pin=\pin
\exp\biggl\{\sum_{t=1}^{nN} \ln \theta(x_t)\biggr\} \pin=\pin
\exp\biggl\{\sum_{i=1}^r nN\delta_{x,nN}(i)\ln \theta(i)\biggr\} \pin\ge\pin \\[3pt]
\pin\ge\pin \exp\biggl\{nN\biggl(\pin\sum_{i=1}^r \nu(i)\ln \theta(i) -\eps\biggr)\biggr\}.
\end{gather*}
It follows from the last two formulas and \eqref{9,,2} that
\begin{equation*}
|Z_{nN}(x)|^\alpha \pin\ge\pin
\exp\biggl\{nN\alpha\biggl(\pin\sum_{i=1}^r \nu(i)\ln \theta(i)-\eps\biggr)\biggr\} \pin\ge\pin
e^{nN(\rho(\nu,\mu)+2\eps)} \pin\ge\pin P\bigl(Z_{nN}(x)\bigr).
\end{equation*}
Now we are ready to compute the Hausdorff measure of dimension $\alpha$ of the set $\pi(Y_\infty)$.
If, while computing the Hausdorff measure, we used coverings of $\pi(Y_\infty)$ not by arbitrary
cylinders, but only by cylinders of orders divisible by $N$, then the last formula would imply
that such a measure is at least $P(\pi(Y_\infty)) =1$. Any cylinder can be enclosed in a cylinder
of order divisible by $N$ such that the difference of their orders is less than $N$ and the
ratio of their diameters is greater than $\min \theta(i)^{N}$. Therefore,
\begin{equation*}
\mathrm{mes}\kern 0.04em\bigl(\pi(Y_\infty),\alpha\bigr) \ge \min \theta(i)^{N\alpha}
\end{equation*}
and hence $\dim_H \pi(Y_\infty) \ge \alpha$.
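In more detail (this elaboration is added for the reader's convenience): let $\{Z_{k_j}(x_j)\}$ be a covering of $\pi(Y_\infty)$ by cylinders; we may assume that each of them meets $\pi(Y_\infty)$ and hence that $x_j\in\pi(Y_\infty)$. Replacing each $Z_{k_j}(x_j)$ by the cylinder $Z_{m_jN}(x_j)\supset Z_{k_j}(x_j)$, where $m_jN\le k_j<(m_j+1)N$, we obtain
\begin{equation*}
\sum_j \bigl|Z_{k_j}(x_j)\bigr|^\alpha \pin\ge\pin \min \theta(i)^{N\alpha}\sum_j \bigl|Z_{m_jN}(x_j)\bigr|^\alpha
\pin\ge\pin \min \theta(i)^{N\alpha}\sum_j P\bigl(Z_{m_jN}(x_j)\bigr) \pin\ge\pin \min \theta(i)^{N\alpha},
\end{equation*}
because the enlarged cylinders still cover $\pi(Y_\infty)$ and $P\bigl(\pi(Y_\infty)\bigr)=1$.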
The set defined in the right-hand side of \eqref{9,,4} contains $\pi(Y_\infty)$, so its dimension
is at least $\alpha$ as well. Recall that we have proved this fact by means of a block selection that
exists with probability $1-q^*$. Since $\alpha<d(\nu,\mu,\theta)$ is arbitrary, this implies
the desired bound \eqref{9,,1} with the same probability. \qed
\begin{theorem} \label{9..2}
Let\/ $s$ be a root of the Bowen equation
\begin{equation*}
\sum_{i=1}^r \mu(i) \theta(i)^s =1.
\end{equation*}
If\/ $s\le 0$, then\/ $X_\infty =\emptyset$ almost surely. Otherwise, if\/ $s>0$, then\/ $X_\infty$
is nonempty with a positive probability, and with the same probability its dimension equals\/ $s$.
\end{theorem}
\emph{Proof.} The expected total number of children of each individual in the branching
process generating the set $X_\infty$ is equal to $m=\mu(1)+\,\ldots\,+\mu(r)$. If $s\le 0$, then
$m\le 1$. In this case by Theorem \ref{5..2} our branching process degenerates almost surely, and
$X_\infty =\emptyset$.
If $s>0$, then $m>1$. In this case by Theorem \ref{5..2} our branching process is non-degenerate with a
positive probability, and $X_\infty$ is nonempty. Define a measure $\nu\in M_1(\Omega)$ by means of
the equality
\begin{equation*}
\nu(i) =\mu(i)\theta(i)^{s}, \quad\ i\in \Omega.
\end{equation*}
Then, evidently, $d(\nu,\mu,\theta) =s$. By the previous theorem $\dim_H X_\infty \ge s$ with the
same probability with which $X_\infty \ne\emptyset$. On the other hand, by Theorem \ref{7..2} the
reverse inequality holds almost surely. \qed
\proofskip
A more general version of Theorem \ref{9..2}, in which the similarity coefficients $\theta(1)$,
\dots, $\theta(r)$ are random, is proved in
\cite{Falconer-article,Falconer-book,Graf,Mauldin-Williams}.
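As a simple illustration (this example is ours, not taken from the references above): if $r=2$, $\mu(1)=\mu(2)=1$ and $\theta(1)=\theta(2)=\tfrac13$, then the Bowen equation reads
\begin{equation*}
2\cdot\bigl(\tfrac13\bigr)^{s}=1, \qquad\text{so}\qquad s=\frac{\ln 2}{\ln 3},
\end{equation*}
the dimension of the classical ternary Cantor set; here $m=2>1$, so with positive probability $X_\infty\ne\emptyset$ and, with that same probability, $\dim_H X_\infty=\ln 2/\ln 3$.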
For every probability measure $\nu\in M_1(\Omega)$ define a basin $B(\nu)\subset X_\infty$ as the
set of all infinite genetic lines $x =(x_1,x_2,x_3,\dots)$ such that the corresponding sequence of
empirical measures $\delta_{x,n}$ converges to $\nu$. What is the dimension of $B(\nu)$? By Theorem
\ref{7..2} it does not exceed the Billingsley--Kullback entropy $d(\nu,\mu,\theta)$ with
probability $1$. On the other hand, the reverse inequality does not follow from the previous
results (in particular, not from Theorem \ref{9..1}). To obtain it, we have to enhance the
machinery of block selections.
\begin{lemma} \label{9..3}
Let\/ $Q_1$, \dots, $Q_{2^r}$ be vertices of a cube in\/ $\mathbb R^r$. Then there exists a choice
law\/ $i\!:\mathbb R^r\to \{1,\dots,2^r\}$ with the following property: if the neighborhoods\/ $O(Q_i)$ are small enough
and the sequences
\begin{equation*}
\delta_n\in\mathbb R^r \quad \text{and}\quad
\Delta_n =\frac{\delta_1+\ldots+\delta_n}{n}
\end{equation*}
satisfy the conditions\/ $\delta_{n+1}\in O\bigl(Q_{i(\Delta_n)}\bigr)$ and\/ $\delta_1\in O(Q_1)
\cup \ldots\cup O(Q_{2^r})$, then the sequence\/ $\Delta_n$ converges to the center of the cube.
\end{lemma}
\emph{Proof.} First consider the case $r=1$, when the cube becomes a segment. Let, for
definiteness, $Q_1 =-1$ and $Q_2 =1$. Set
\begin{equation} \label{9,,5}
i(\Delta) =
\begin{cases}
1,& \text{if}\ \ \Delta\ge 0,\\[1pt]
2,& \text{if}\ \ \Delta<0.
\end{cases}
\end{equation}
Take any neighborhoods $O(Q_1)$ and $O(Q_2)$ with radii at most $1$. Then for any sequence
$\delta_n$ satisfying the conditions $\delta_{n+1}\in O\bigl(Q_{i(\Delta_n)}\bigr)$ and
$|\delta_1|<2$ we have the estimate $|\Delta_n|<2/n$; this is easily proved by induction. Thus in
the one-dimensional case the lemma is proved. To prove it in the multidimensional case one should
choose a coordinate system with origin at the center of the cube and axes parallel to edges of the
cube and apply the choice law \eqref{9,,5} to each of the coordinates independently. \qed
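For the reader's convenience we spell out this induction (the elaboration is not in the argument above): the base case is $|\Delta_1|=|\delta_1|<2$, and if $|\Delta_n|<2/n$ with, say, $\Delta_n\ge 0$, then the choice law \eqref{9,,5} forces $\delta_{n+1}\in O(Q_1)\subset(-2,0)$, so that
\begin{equation*}
|\Delta_{n+1}| \pin=\pin \frac{|n\Delta_n+\delta_{n+1}|}{n+1} \pin<\pin \frac{2}{n+1},
\end{equation*}
because $0\le n\Delta_n<2$ and $-2<\delta_{n+1}<0$; the case $\Delta_n<0$ is symmetric.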
\begin{theorem} \label{9..4}
Let\/ $X_1,\, X_2,\, X_3,\, \dots$ be an unconditional colored branching process with a finite set of
colors\/ $\Omega$ and probability of degeneration\/ $q^*<1$. If a measure\/ $\nu\in M_1(\Omega)$
satisfies the condition\/ $d(\nu,\mu,\theta) >0$, then with probability\/ $1-q^*$
\begin{equation*}
\dim_H B(\nu) = d(\nu,\mu,\theta).
\end{equation*}
\end{theorem}
\emph{Proof.} Fix any number $\alpha <d(\nu,\mu,\theta)$ and choose $\eps>0$ so small that
\begin{equation*}
d(\nu,\mu,\theta) \pin=\pin \frac{\rho(\nu,\mu)\strut}{\sum_{i=1}^r \nu(i)\ln \theta(i)\strut} \pin>\pin
\frac{\rho(\nu,\mu) +3\eps\strut}{\sum_{i=1}^r \nu(i)\ln \theta(i) -\eps\strut} \pin>\pin \alpha.
\end{equation*}
The set $M_1(\Omega)$ is in fact a simplex of dimension $r-1$, where $r =|\Omega|$. Suppose first
that $\nu$ is an interior point of this simplex (in other words, $\nu(i)>0$ for all $i\in \Omega$).
Take a small convex neighborhood $O(\nu)\subset M_1(\Omega)$ such that for any measure $\delta\in
O(\nu)$
\begin{gather*}
\rho(\delta,\mu) < \rho(\nu,\mu) +\eps,\\[3pt]
\sum_{i=1}^r \delta(i)\ln \theta(i) >\sum_{i=1}^r \nu(i)\ln \theta(i)-\eps.
\end{gather*}
Let $Q_1$, \dots, $Q_{2^{r-1}}$ be vertices of some cube in $O(\nu)$ with center at $\nu$. Define
for them small neighborhoods $O(Q_i)\subset O(\nu)$ as in Lemma \ref{9..3}. By Theorem \ref{8..3}
with probability $1-q^*$ one can extract from the colored branching process $X_1$, $X_2$, \dots\ a
block selection $Y_1$, $Y_2$, \dots\ of order $N$ such that for every $i\le 2^{r-1}$ each sequence
of blocks $(y_1,\dots,y_n)\in Y_n$ has at least
\begin{equation*}
l(N) = e^{N(-\rho(\nu,\mu)-2\eps)}
\end{equation*}
prolongations $(y_1,\dots,y_n,y)\in Y_{n+1}$ possessing the property $\delta_{y}\in O(Q_i)$.
Exclude from this block selection a certain part of genetic lines so that each of the remaining
sequences of blocks $(y_1,\dots,y_n)\in Y_n$ has exactly $[l(N)]$ prolongations
$(y_1,\dots,y_n,y)\in Y_{n+1}$, and all these prolongations satisfy the choice law from Lemma
\ref{9..3}, namely,
\begin{equation*}
\delta_y\in O\bigl(Q_{i(\Delta_n)}\bigr), \quad\ \text{where} \quad
\Delta_n =\frac{\delta_{y_1}+\ldots+\delta_{y_n}}{n}.
\end{equation*}
Denote by $\pi(Y_\infty)$ the set of all infinite genetic lines $(x_1,x_2,\dots)\in X_\infty$ for
which every initial segment of length $nN$, when partitioned into blocks of length $N$, becomes
an element of $Y_n$. Then by Lemma \ref{9..3} we have the inclusion $\pi(Y_\infty) \subset
B(\nu)$.
Reproducing the reasoning from the proof of Theorem \ref{9..1}, one can ascertain that the dimension of
$\pi(Y_\infty)$ is at least $\alpha$. Since $\alpha$ can be taken arbitrarily close to
$d(\nu,\mu,\theta)$, we obtain the lower bound $\dim_H B(\nu) \ge d(\nu,\mu,\theta)$. The reverse
inequality, as was mentioned above, follows from Theorem \ref{7..2}. Thus in the case of an interior
point $\nu\in M_1(\Omega)$ the theorem is proved.
If the measure $\nu$ belongs to the boundary of the simplex $M_1(\Omega)$, then one should exclude
from $\Omega$ all elements $i$ with $\nu(i) =0$, and consider the set
\begin{equation*}
\Omega' =\{\pin i\in \Omega \mid \nu(i)>0\pin\}.
\end{equation*}
Exclude from the branching process $X_1$, $X_2$, \dots\ all genetic lines containing elements of
colors not in $\Omega'$ and denote by $X'_1$, $X'_2$, \dots\ the resulting branching process (with
the set of colors $\Omega'$). The corresponding set of infinite genetic lines $X'_\infty$ is
contained in $X_\infty$. It follows from the definition of Billingsley--Kullback entropy
$d(\nu,\mu,\theta)$ that it is the same for the sets of colors $\Omega$ and $\Omega'$. Moreover, the
measure $\nu$ lies in the interior of the simplex $M_1(\Omega')$. Therefore, $\dim_H B(\nu)\cap
X'_\infty =d(\nu,\mu,\theta)$ with the same probability as $X'_\infty\ne \emptyset$.
The theorem would be completely proved if the probability of the event $X'_\infty\ne\emptyset$ were
equal to $1-q^*$. However, it may be less than $1-q^*$. This obstacle can be overcome as follows.
Let $m' =\sum_{i\in \Omega'} \mu(i)$. This is nothing other than the expected number of children of each
individual in the branching process $X'_1$, $X'_2$, \dots\ If $m'\le 1$ then
\eqref{2,,4} implies the inequality $\rho(\nu,\mu) \ge 0$, which contradicts the condition
$d(\nu,\mu,\theta) >0$ of our theorem. Therefore $m'>1$ and, respectively, the probability of the
event $X'_\infty = \emptyset$ is strictly less than $1$. Let us denote it $q'$.
If the branching process $X'_1$, $X'_2$, \dots\ were generated not by a single initial element but
by $k$ initial elements, then the probability of $X'_\infty =\emptyset$ would be equal to~$(q')^k$.
Recall that the cardinality of $X_n$ grows exponentially with probability $1-q^*$. If this is the
case, one can first wait for the event $|X_n|\ge k$, and then consider separately $|X_n|$
independent counterparts of the branching process $X'_1$, $X'_2$, \dots\ generated by different
elements of $X_n$. This trick allows us to obtain the bound $\dim_H B(\nu) \ge d(\nu,\mu,\theta)$ with
conditional probability at least $1-(q')^{k}$ under the condition $|X_n|\to \infty$. Since $k$ is
arbitrary, the above-mentioned conditional probability is in fact $1$, and the total probability
cannot be less than $1-q^*$. \qed
\end{document}
\begin{document}
\author[Robert Laterveer]
{Robert Laterveer}
\address{Institut de Recherche Math\'ematique Avanc\'ee,
CNRS -- Universit\'e
de Strasbourg,\
7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX,
FRANCE.}
\email{[email protected]}
\title{On the Chow ring of Fano varieties on the Fatighenti-Mongardi list}
\begin{abstract} Conjecturally, Fano varieties of K3 type admit a multiplicative Chow--K\"unneth decomposition, in the sense of Shen--Vial. We prove this for many of the families of Fano varieties of K3 type constructed by Fatighenti--Mongardi.
This has interesting consequences for the Chow ring of these varieties.
\end{abstract}
\thanks{\textit{2020 Mathematics Subject Classification:} 14C15, 14C25, 14C30}
\keywords{Algebraic cycles, Chow group, motive, Bloch--Beilinson filtration, Beauville's ``splitting property'' conjecture, multiplicative Chow--K\"unneth decomposition, Fano variety, K3 surface}
\thanks{Supported by ANR grant ANR-20-CE40-0023.}
\maketitle
\section{Introduction}
Given a smooth projective variety $Y$ over $\mathbb{C}$, let $A^i(Y):=CH^i(Y)_{\mathbb{Q}}$ denote the Chow groups of $Y$ (i.e. the groups of codimension $i$ algebraic cycles on $Y$ with $\mathbb{Q}$-coefficients, modulo rational equivalence). The intersection product defines a ring structure on $A^\ast(Y)=\bigoplus_i A^i(Y)$, the Chow ring of $Y$ \cite{F}. In the case of K3 surfaces, this ring structure has a remarkable property:
\begin{theorem}[Beauville--Voisin \cite{BV}]\label{K3} Let $S$ be a K3 surface.
The $\mathbb{Q}$-subalgebra
\[ R^\ast(S):= \bigl\langle A^1(S), c_j(S) \bigr\rangle\ \ \ \subset\ A^\ast(S) \]
injects into cohomology under the cycle class map.
\end{theorem}
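We recall from \cite{BV} (this reminder is added here for later use) that, concretely, there is a canonical class $o\in A^2(S)$, the {\em Beauville--Voisin class\/}, represented by any point lying on a rational curve in $S$, such that
\[ D\cdot D^\prime\ \in\ \mathbb{Q}\, o\ \ \ \forall\ D, D^\prime\in A^1(S)\ ,\ \ \ \ c_2(S)=24\, o\ \ \ \hbox{in}\ A^2(S)\ .\]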
Motivated by the cases of K3 surfaces and abelian varieties, Beauville \cite{Beau3} has conjectured that for certain special varieties, the Chow ring should admit a multiplicative splitting. To make concrete sense of Beauville's elusive ``splitting property conjecture'', Shen--Vial \cite{SV} have introduced the concept of {\em multiplicative Chow--K\"unneth decomposition\/}. It seems both interesting and difficult to better understand the class of special varieties admitting such a decomposition.
In \cite{S2}, the following conjecture is raised:
\begin{conjecture}\label{conj} Let $X$ be a smooth projective Fano variety of K3 type (i.e. $\dim X=2d$ and the Hodge numbers $h^{p,q}(X)$ are $0$ for all $p\not=q$ except for $h^{d-1,d+1}(X)=h^{d+1,d-1}(X)=1$). Then $X$ has a multiplicative Chow--K\"unneth decomposition.
\end{conjecture}
This conjecture is verified in some special cases \cite{37}, \cite{39}, \cite{40}, \cite{FLV2}, \cite{S2}.
This paper aims to contribute to this program. The main result is as follows:
\begin{nonumbering}[=Theorem \ref{main}] Let $X$ be a smooth Fano variety in one of the families of Table \ref{table:1}. Then $X$ has a multiplicative Chow--K\"unneth decomposition.
\end{nonumbering}
Table \ref{table:1} lists Fano varieties $X$ of K3 type that were constructed by Fatighenti--Mongardi \cite{FM} as hypersurfaces in products of Grassmannians. The K3 surfaces $S$ in Table \ref{table:1} are shown in \cite{FM} to be associated to $X$ on the level of Hodge theory, and on the level of derived categories. In some cases, the geometric relation between $X$ and $S$ is straightforward (e.g., for B1 and B2 the Fano variety $X$ is a blow-up with center the K3 surface $S$); in other cases the geometric relation is more indirect (e.g. for M1, M6, M7, M8, M9, M10 the Fano variety $X$ is related to the K3 surface $S$ via the so-called ``Cayley's trick'', cf. \cite{FM} and subsection \ref{ss:cay} below).
To prove Theorem \ref{main}, we have devised a general criterion (Proposition \ref{crit}), which we hope might apply to other Fano varieties of K3 type. To verify the criterion, one needs a motivic relation between the Fano variety $X$ and the associated K3 surface $S$, and one needs a certain instance of the {\em Franchetta property\/}.
\begin{table}[h]
\centering
\begin{tabular}{||c c c c c c||}
\hline
$\stackrel{\hbox{Label}}{\hbox{in\ \cite{FM}}}$ & $X\subset U$ & $\dim X$ & $\rho(X)$ & $\stackrel{\hbox{Genus of}}{\hbox{associated\ K3}}$ &$\stackrel{\hbox{Also}}
{\hbox{occurs\ in}}$ \\
[0.5ex]
\hline\hline
B1 & $X_{(2,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1$ & 4 & 3 & 7& \cite{40}\\
B2 & $X_{(2,1)}\subset\Gr(2,4)\times\mathbb{P}^1$ & 4 & 3 & 5& \cite{40}\\
M1 & $X_{(1,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^3\times\mathbb{P}^3$ & 8 & 3 & 3& \cite{IM}\\
M3 & $X_{(1,1)}\subset\Gr(2,5)\times Q_5$ & 10 & 2 & 6&\\
M4 & $X_{(1,1)}\subset\SGr(2,5)\times Q_4$ & 8 & 2 & 6 &\\
M6 & $X_{(1,1)}\subset \mathbb{S}_5\times\mathbb{P}^7$ & 16 & 2 & 7&\\
M7 & $X_{(1,1)}\subset\Gr(2,6)\times \mathbb{P}^5$ & 12 & 2 & 8 &\\
M8 & $X_{(1,1)}\subset\SGr(2,6)\times \mathbb{P}^4$ & 10 & 2 & 8 &\\
M9 & $X_{(1,1)}\subset S_2 \Gr(2,6)\times \mathbb{P}^3$ & 8 & 2 & 8&\\
M10 & $X_{(1,1)}\subset\SGr(3,6)\times \mathbb{P}^3$ & 8 & 2 & 9&\\
S2 & $X_{1}\subset \OGr(2,8)$ & 8 & 2 & 7& \cite{S2}\\
[1ex]
\hline
\end{tabular}
\caption{Families of Fano varieties of K3 type. (As in \cite{FM}, $\Gr(k,m)$ denotes the Grassmannian of $k$-dimensional subspaces of an $m$-dimensional vector space. $\SGr(k,m)$, $S_2 \Gr(k,m)$ and $\OGr(k,m)$ denote the symplectic resp. bisymplectic resp. orthogonal Grassmannian. $\mathbb{S}_5$ denotes a connected component of $\OGr(5,10)$, and $Q_m$ is an $m$-dimensional smooth quadric.)}
\label{table:1}
\end{table}
As a consequence of our main result, the Chow ring of these Fano varieties behaves like the Chow ring of a K3 surface:
\begin{nonumberingc}[=Corollary \ref{cor}] Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. Let $\dim X=2d$. Let $R^\ast(X)\subset A^\ast(X)$ be the $\mathbb{Q}$-subalgebra
\[ R^\ast(X):=\Bigl\langle A^1(X), A^2(X), \ldots, A^d(X), c_j(X),\operatorname{Im}\bigl(A^\ast(U)\to A^\ast(X)\bigr)\Bigr\rangle\ \ \ \subset A^\ast(X)\ .\]
Then $R^\ast(X)$ injects into cohomology under the cycle class map.
\end{nonumberingc}
We end this introduction with a challenge. Fatighenti--Mongardi have constructed some more Fano varieties of K3 type for which it would be nice to settle Conjecture \ref{conj} (in particular the families labelled M13 and S1 in \cite{FM}, for which I have not been able to check condition (c3) or (c3$^\prime$) of the general criterion Proposition \ref{crit}).
Additionally, the following are some Fano varieties of K3 type in the literature for which Conjecture \ref{conj} is still open, and for which
the methods of the present paper are not sufficiently strong:
K\"uchle fourfolds of type $c5$, Pl\"ucker hyperplane sections of $\Gr(3,10)$, intersections of $\Gr(2,8)$ with 4 Pl\"ucker hyperplanes,
Gushel--Mukai fourfolds and sixfolds. It would be interesting to
devise new methods to treat these families.
\vskip0.6cm
\begin{convention} In this article, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional.
{\bf All Chow groups will be with rational coefficients}: we denote by $A_j(Y)$ the Chow group of $j$-dimensional cycles on $Y$ with $\mathbb{Q}$-coefficients; for $Y$ smooth of dimension $n$ the notations $A_j(Y)$ and $A^{n-j}(Y)$ are used interchangeably.
The notation $A^j_{hom}(Y)$ will be used to indicate the subgroup of homologically trivial cycles.
The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted
$\mathcal M_{\rm rat}$.
\end{convention}
\section{Preliminaries}
\subsection{MCK decomposition}
\label{ss:mck}
\begin{definition}[Murre \cite{Mur}] Let $X$ be a smooth projective variety of dimension $n$. We say that $X$ has a {\em CK decomposition\/} if there exists a decomposition of the diagonal
\[ \Delta_X= \pi^0_X+ \pi^1_X+\cdots +\pi_X^{2n}\ \ \ \hbox{in}\ A^n(X\times X)\ ,\]
such that the $\pi^i_X$ are mutually orthogonal idempotents and $(\pi_X^i)_\ast H^\ast(X,\mathbb{Q})= H^i(X,\mathbb{Q})$.
(NB: ``CK decomposition'' is shorthand for ``Chow--K\"unneth decomposition''.)
\end{definition}
\begin{remark} The existence of a CK decomposition for any smooth projective variety is part of Murre's conjectures \cite{Mur}, \cite{J4}.
\end{remark}
\begin{definition}[Shen--Vial \cite{SV}] Let $X$ be a smooth projective variety of dimension $n$. Let $\Delta_X^{sm}\in A^{2n}(X\times X\times X)$ be the class of the small diagonal
\[ \Delta_X^{sm}:=\bigl\{ (x,x,x)\ \vert\ x\in X\bigr\}\ \subset\ X\times X\times X\ .\]
An {\em MCK decomposition\/} is a CK decomposition $\{\pi_X^i\}$ of $X$ that is {\em multiplicative\/}, i.e. it satisfies
\[ \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j)=0\ \ \ \hbox{in}\ A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\]
(NB: ``MCK decomposition'' is shorthand for ``multiplicative Chow--K\"unneth decomposition''.)
\end{definition}
\begin{remark} The small diagonal (seen as a correspondence from $X\times X$ to $X$) induces the {\em multiplication morphism\/}
\[ \Delta_X^{sm}\colon\ \ h(X)\otimes h(X)\ \to\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
Let us assume $X$ has a CK decomposition
\[ h(X)=\bigoplus_{i=0}^{2n} h^i(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
By definition, this decomposition is multiplicative if for any $i,j$ the composition
\[ h^i(X)\otimes h^j(X)\ \to\ h(X)\otimes h(X)\ \xrightarrow{\Delta_X^{sm}}\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\]
factors through $h^{i+j}(X)$.
If $X$ has an MCK decomposition, then setting
\[ A^i_{(j)}(X):= (\pi_X^{2i-j})_\ast A^i(X) \ ,\]
one obtains a bigraded ring structure on the Chow ring: that is, the intersection product sends $A^i_{(j)}(X)\otimes A^{i^\prime}_{(j^\prime)}(X) $ to $A^{i+i^\prime}_{(j+j^\prime)}(X)$.
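(To spell this out -- the following elaboration is ours: for $a\in A^i_{(j)}(X)$ and $b\in A^{i^\prime}_{(j^\prime)}(X)$ one has $a\cdot b=(\Delta_X^{sm})_\ast(a\times b)$, and multiplicativity gives
\[ (\pi_X^k)_\ast(a\cdot b)= \bigl(\pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^{2i-j}\times \pi_X^{2i^\prime-j^\prime})\bigr)_\ast(a\times b)=0\ \ \ \hbox{unless}\ k=2(i+i^\prime)-(j+j^\prime)\ ,\]
so that indeed $a\cdot b\in A^{i+i^\prime}_{(j+j^\prime)}(X)$.)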
It is expected that for any $X$ with an MCK decomposition, one has
\[ A^i_{(j)}(X)\stackrel{??}{=}0\ \ \ \hbox{for}\ j<0\ ,\ \ \ A^i_{(0)}(X)\cap A^i_{hom}(X)\stackrel{??}{=}0\ ;\]
this is related to Murre's conjectures B and D, that have been formulated for any CK decomposition \cite{Mur}.
The property of having an MCK decomposition is restrictive, and is closely related to Beauville's ``splitting property'' conjecture \cite{Beau3}.
To give an idea: hyperelliptic curves have an MCK decomposition \cite[Example 8.16]{SV}, but the very general curve of genus $\ge 3$ does not have an MCK decomposition \cite[Example 2.3]{FLV2}. As for surfaces: a smooth quartic in $\mathbb{P}^3$ has an MCK decomposition, but a very general surface of degree $ \ge 7$ in $\mathbb{P}^3$ should not have an MCK decomposition \cite[Proposition 3.4]{FLV2}.
For more detailed discussion, and examples of varieties with an MCK decomposition, we refer to \cite[Section 8]{SV}, as well as \cite{V6}, \cite{SV2}, \cite{FTV}, \cite{37}, \cite{39}, \cite{40}, \cite{S2}, \cite{46}, \cite{38}, \cite{FLV2}.
\end{remark}
\subsection{The Franchetta property}
\label{ss:fr}
\begin{definition} Let $\mathcal X\to B$ be a smooth projective morphism, where $\mathcal X, B$ are smooth quasi-projective varieties. We say that $\mathcal X\to B$ has the {\em Franchetta property in codimension $j$\/} if the following holds: for every $\Gamma\in A^j(\mathcal X)$ such that the restriction $\Gamma\vert_{X_b}$ is homologically trivial for the very general $b\in B$, the restriction $\Gamma\vert_{X_b}$ is zero in $A^j(X_b)$ for all $b\in B$.
We say that $\mathcal X\to B$ has the {\em Franchetta property\/} if $\mathcal X\to B$ has the Franchetta property in codimension $j$ for all $j$.
\end{definition}
This property is studied in \cite{PSY}, \cite{BL}, \cite{FLV}, \cite{FLV3}.
\begin{definition} Given a family $\mathcal X\to B$ as above, with $X:=X_b$ a fiber, we write
\[ GDA^j_B(X):=\operatorname{Im}\Bigl( A^j(\mathcal X)\to A^j(X)\Bigr) \]
for the subgroup of {\em generically defined cycles}.
In a context where it is clear to which family we are referring, the index $B$ will often be suppressed from the notation.
\end{definition}
With this notation, the Franchetta property amounts to saying that $GDA^\ast_B(X)$ injects into cohomology, under the cycle class map.
\subsection{Cayley's trick and motives}
\label{ss:cay}
\begin{theorem}[Jiang \cite{Ji}]\label{ji} Let $ E\to U$ be a vector bundle of rank $r\ge 2$ over a smooth projective variety $U$, and let $S:=s^{-1}(0)\subset U$ be the zero locus of a regular section $s\in H^0(U,E)$ such that $S$ is smooth of dimension $\dim U-\rank E$. Let $X:=w^{-1}(0)\subset \mathbb{P}(E)$ be the zero locus of the regular section $w\in H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$ that corresponds to $s$ under the natural isomorphism $H^0(U,E)\cong H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$, and assume $X$ is smooth. There is an isomorphism of Chow motives
\[ h(X)\cong h(S)(1-r)\oplus \bigoplus_{i=0}^{r-2} h(U)(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
\end{theorem}
\begin{proof} This is \cite[Corollary 3.2]{Ji}, which more precisely gives an isomorphism of {\em integral\/} Chow motives. For later use, we now give some details about the isomorphism as constructed in loc. cit. Let
\[ \Gamma:= X\times_U S\ \ \subset\ X\times S \]
(this is equal to $\mathbb{P}(\mathbb{N}N_i)=\mathcal H_s\times_X Z$ in the notation of loc. cit.). Let
\[ \Pi_i\ \ \in\ A^\ast(X\times U)\ \ \ \ (i=0, \ldots, r-2) \]
be correspondences inducing the maps $ (\pi_i)_\ast$ of loc. cit., i.e.
\[ (\Pi_i)_\ast= (\pi_i)_\ast := (q_{i+1})_\ast \iota_\ast\colon\ \ A^j(X)\ \to\ A^{j-i}(U)\ ,\]
where $\iota\colon X\hookrightarrow\mathbb{P}(E)$ is the inclusion morphism, and the $(q_{i+1})_\ast\colon A_\ast(\mathbb{P}(E))\to A_\ast(U)$ are defined in loc. cit. in terms of the projective bundle formula for $q\colon E\to U$. As indicated in \cite[Corollary 3.2]{Ji} (cf. also \cite[text preceding Corollary 3.2]{Ji}), there is an isomorphism
\[ \Bigl( \Gamma, \Pi_0,\Pi_1,\ldots, \Pi_{r-2}\Bigr)\colon\ \ h(X)\ \xrightarrow{\cong}\ h(S)(1-r)\oplus \bigoplus_{i=0}^{r-2} h(U)(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
\end{proof}
\begin{remark} In the set-up of Theorem \ref{ji}, a cohomological relation between $X$ and $S$ was established in \cite[Prop. 4.3]{Ko} (cf. also \cite[section 3.7]{IM0}, as well as \cite[Proposition 46]{BFM} for a generalization). A relation on the level of derived categories was established in \cite[Theorem 2.10]{Or} (cf. also \cite[Theorem 2.4]{KKLL} and \cite[Proposition 47]{BFM}).
\end{remark}
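To fix ideas, here is one instance (our illustration; it anticipates Table \ref{table:2} below): for the family M7 of Table \ref{table:1} one may take $U=\Gr(2,6)$ and $E=\mathcal O_U(1)^{\oplus 6}$, so that $r=6$, $\mathbb{P}(E)\cong\Gr(2,6)\times\mathbb{P}^5$, the K3 surface $S\subset\Gr(2,6)$ is a complete intersection of $6$ hyperplane sections, and Theorem \ref{ji} gives
\[ h(X)\ \cong\ h(S)(-5)\oplus \bigoplus_{i=0}^{4} h(\Gr(2,6))(-i)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]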
We now make the natural observation that the isomorphism of Theorem \ref{ji} behaves well with respect to families, in the following sense:
\begin{notation} Let $X, S, U$ and $E\to U$ be as in Theorem \ref{ji}. Let $B\subset\mathbb{P} H^0(\mathbb{P}(E),\mathcal O_{\mathbb{P}(E)}(1))$ be the Zariski open such that
both $X:=X_b\subset\mathbb{P}(E)$ and $S:=S_b\subset U$ are smooth of the expected dimension. Let
\[ \mathcal X\to B\ ,\ \ \ \mathcal S\to B \]
denote the universal families.
\end{notation}
\begin{proposition}\label{ji2} Let $X, S, U$ be as in Theorem \ref{ji}. Assume $U$ has trivial Chow groups. For any $m\in\mathbb{N}$, there are injections
\[ GDA^j(X^m)\ \hookrightarrow\ GDA^{j+m-mr}(S^m)\oplus \bigoplus GDA^\ast(S^{m-1})\oplus \cdots \oplus \mathbb{Q}^s\ .\]
\end{proposition}
\begin{proof} (NB: we will not really need this proposition below, but we include it because it makes some arguments easier, cf. footnote 1 below.)
Let us first do the case $m=1$. The isomorphism of Theorem \ref{ji} is {\em generically defined\/}, i.e. there exist relative correspondences
$\Gamma_B,\Pi_i^B$ fitting into a commutative diagram
\begin{equation}\label{dia} \begin{array}[c]{ccc} A^j(\mathcal X) & \xrightarrow{\bigl( (\Gamma_B)_\ast,(\Pi_0^B)_\ast,\ldots,(\Pi^B_{r-2})_\ast\bigr)} & A^{j+1-r}(\mathcal S)\oplus \bigoplus_{i=0}^{r-2} A^{j-i}(U\times B)\\
&&\\
\downarrow&&\downarrow\\
&&\\
A^j(X) & \xrightarrow{\bigl( \Gamma_\ast,(\Pi_0)_\ast,\ldots,(\Pi_{r-2})_\ast\bigr)} & \ A^{j+1-r}(S)\oplus \bigoplus_{i=0}^{r-2} A^{j-i}(U),\\
\end{array}\end{equation}
where vertical arrows are restrictions to a fiber, and the lower horizontal arrow is the isomorphism of Theorem \ref{ji}.
Indeed, $\Gamma_B$ can be defined as
\[ \Gamma_B:= \mathcal X\times_{U\times B}\mathcal S\ \ \subset\ \mathcal X\times_B \mathcal S\ .\]
The $\Pi_i$ are also generically defined (just because the graph of the embedding $\iota\colon X\hookrightarrow \mathbb{P}(E)$ is generically defined). This gives
relative correspondences $\Gamma_B, \Pi_i^B$ over $B$ such that the restriction to a fiber over $b\in B$ gives back the correspondences $\Gamma,\Pi_i$ of Theorem \ref{ji}. The fact that this makes diagram \eqref{dia} commute is \cite[Lemma 8.1.6]{MNP}.
The commutative diagram \eqref{dia} implies that there is an injective map
\begin{equation}\label{1} GDA^j(X)\ \hookrightarrow\ GDA^{j+1-r}(S)\oplus \bigoplus A^\ast(U) = GDA^{j+1-r}(S)\oplus \mathbb{Q}^s\ .\end{equation}
The argument for $m>1$ is similar: the isomorphism of motives of Theorem \ref{ji}, combined with the fact that $U$ has trivial Chow groups (and so $h(U)\cong \oplus \mathds{1}(\ast)$) induces an isomorphism of Chow groups
\begin{equation}\label{2} A^j(X^m)\ \xrightarrow{\cong}\ A^{j+m-mr}(S^m)\oplus \bigoplus A^\ast(S^{m-1})\oplus \cdots \oplus \mathbb{Q}^s\ .\end{equation}
Here the map from left to right is given by various combinations of the correspondences $\Gamma$ and $\Pi_i$. As we have seen these correspondences are generically defined, and so their products are also generically defined. It follows as above that the map \eqref{2} preserves generically defined cycles.
\end{proof}
\subsection{A Franchetta-type result}
\begin{proposition}\label{spread} Let $Y$ be a smooth projective variety with trivial Chow groups (i.e. $A^\ast_{hom}(Y)=0$). Let $L_1,\ldots,L_r\to Y$ be very ample line bundles, and let
$\mathcal X\to B$ be the universal family of smooth complete intersections of type $X=Y\cap H_1\cap\cdots\cap H_r$, where $H_j\in\vert L_j\vert$.
Assume the fibers $X$ have $H^{\dim X}_{tr}(X,\mathbb{Q})\not=0$.
There is an inclusion
\[ \ker \Bigl( GDA^{\dim X}_B(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr)\ \ \subset\ \Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X) \Bigr\rangle\ .\]
\end{proposition}
\begin{proof} This is essentially equivalent to Voisin's ``spread'' result \cite[Proposition 1.6]{V1} (cf. also \cite[Proposition 5.1]{LNP} for a reformulation). For completeness, we include a quick proof. Let $\bar{B}:=\mathbb{P} H^0(Y,L_1\oplus\cdots\oplus L_r)$ (so that $B\subset\bar{B}$ is a Zariski open), and let
us consider the projection
\[ \pi\colon\ \ \mathcal X\times_{\bar{B}} \mathcal X\ \to\ Y\times Y\ .\]
Using the very ampleness assumption, one finds that $\pi$ is a $\mathbb{P}^s$-bundle over $(Y\times Y)\setminus \Delta_Y$, and a $\mathbb{P}^t$-bundle over $\Delta_Y$.
That is, $\pi$ is what is termed a {\em stratified projective bundle\/} in \cite{FLV}. As such, \cite[Proposition 5.2]{FLV} implies the equality
\begin{equation}\label{stra} GDA^\ast_B(X\times X)= \operatorname{Im}\Bigl( A^\ast(Y\times Y)\to A^\ast(X\times X)\Bigr) + \Delta_\ast GDA_B^\ast(X)\ ,\end{equation}
where $\Delta\colon X\to X\times X$ is the inclusion along the diagonal. Since $Y$ has trivial Chow groups, one has $A^\ast(Y\times Y)\cong A^\ast(Y)\otimes A^\ast(Y)$.
Base-point freeness of the $L_j$ implies $\mathcal X\to Y$ has the structure of a projective bundle; it is then readily seen (by a direct argument or by simply applying once more \cite[Proposition 5.2]{FLV}) that
\[ GDA^\ast_B(X)=\operatorname{Im}\bigl( A^\ast(Y)\to A^\ast(X)\bigr)\ .\]
The equality \eqref{stra} thus reduces to
\[ GDA^\ast_B(X\times X)=\Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X), \Delta_X\Bigr\rangle\ \]
(where $p_1, p_2$ denote the projection from $X\times X$ to first resp. second factor). The assumption that $X$ has non-zero transcendental cohomology
implies that the class of $\Delta_X$ is not decomposable in cohomology. It follows that
\[ \begin{split} \operatorname{Im} \Bigl( GDA^{\dim X}_B(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr) =&\\
\operatorname{Im}\Bigl( \operatorname{Dec}^{\dim X}(X\times X)\to H^{2\dim X}(X\times X,\mathbb{Q})\Bigr)& \oplus \mathbb{Q}[\Delta_X]\ ,\\
\end{split}\]
where we use the shorthand
\[ \operatorname{Dec}^j(X\times X):= \Bigl\langle (p_1)^\ast GDA^\ast_B(X), (p_2)^\ast GDA^\ast_B(X)\Bigr\rangle\cap A^j(X\times X) \ \]
for the {\em decomposable cycles\/}.
We now see that if $\Gamma\in GDA^{\dim X}(X\times X)$ is homologically trivial, then $\Gamma$ does not involve the diagonal and so $\Gamma\in \operatorname{Dec}^{\dim X}(X\times X)$.
This proves the proposition.
\end{proof}
\begin{remark} Proposition \ref{spread} has the following consequence: if the family $\mathcal X\to B$ has the Franchetta property, then $\mathcal X\times_B \mathcal X\to B$ has the Franchetta property in codimension $\dim X$.
\end{remark}
\subsection{HPD and motives}
\begin{theorem}\label{hpd} Let $Y_1, Y_2\subset\mathbb{P}(V)$ be smooth projective varieties with trivial Chow groups (i.e. $A^\ast_{hom}(Y_j)=0$), and let $Y_2^\vee\subset\mathbb{P}(V^\vee)$ be the HPD dual of $Y_2$. Let $H\subset \mathbb{P}(V)\times\mathbb{P}(V)$ be a $(1,1)$-divisor, and let $f_H\colon \mathbb{P}(V)\to\mathbb{P}(V^\vee)$ be the morphism defined by $H$. Assume that the varieties
\[ \begin{split} X&:= (Y_1\times Y_2)\cap H\ ,\\
S&:=Y_1\cap (f_H)^{-1}(Y_2^\vee)\\
\end{split}\]
are smooth and dimensionally transverse. Assume moreover that the Hodge conjecture holds for $S$, that $H^j(S,\mathbb{Q})$ is algebraic for $j\not=\dim S$ and that $H^{\dim S}(S,\mathbb{Q})$ is not completely algebraic.
Then there is a split injection of Chow motives
\[ h(X)\ \hookrightarrow\ h(S)(-m)\oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat} \ ,\]
where $m:={1\over 2}(\dim X-\dim S)$.
(In particular, one has vanishing
\[ A^j_{hom}(X) =0\ \ \ \forall\ j > {1\over 2}(\dim X+\dim S)\ .)\]
\end{theorem}
\begin{proof} Using the HPD formalism, it is proven in \cite[Proposition 2.4]{FM} that there exists a semi-orthogonal decomposition
\begin{equation}\label{so} D^b(X)=\bigl\langle D^b(S), A_1,\ldots, A_s\bigr\rangle\ ,\end{equation}
where the $A_j$ are some exceptional objects. Using Hochschild homology and the Kostant--Rosenberg isomorphism (cf. for instance \cite[Sections 1.7 and 2.5]{Kuz}), this implies that there exist correspondences $\Phi^\prime$ and $\Xi^\prime$ such that
\[ H^{\ast}_{tr}(X,\mathbb{Q})\ \xrightarrow{(\Phi^\prime)_\ast}\ H^{\ast}_{tr}(S,\mathbb{Q})\ \xrightarrow{(\Xi^\prime)_\ast}\ H^{\ast}_{tr}(X,\mathbb{Q}) \]
is the identity. (Here, $H^\ast_{tr}(-,\mathbb{Q})$ denotes the orthogonal complement of the algebraic part of cohomology.)
By assumption $H^\ast_{tr}(S,\mathbb{Q})=H^{\dim S}_{tr}(S,\mathbb{Q})$, and by weak Lefschetz $H^\ast_{tr}(X,\mathbb{Q})= H^{\dim X}_{tr}(X,\mathbb{Q})$, and so we actually have that
\[ H^{\dim X}_{tr}(X,\mathbb{Q})\ \xrightarrow{(\Phi^\prime)_\ast}\ H^{\dim S}_{tr}(S,\mathbb{Q})\ \xrightarrow{(\Xi^\prime)_\ast}\ H^{\dim X}_{tr}(X,\mathbb{Q}) \]
is the identity. Again using Hochschild homology and the Kostant--Rosenberg isomorphism, we see that the Hodge conjecture for $S$, plus the decomposition \eqref{so}, implies the Hodge conjecture for $X$. This means that we can find correspondences $\Phi$ and $\Xi$ such that
\[ H^{\ast}_{}(X,\mathbb{Q})\ \xrightarrow{\Phi_\ast}\ H^{\dim S}_{}(S,\mathbb{Q}) \oplus \bigoplus \mathbb{Q}(-j)\ \xrightarrow{\Xi_\ast}\ H^{\ast}_{}(X,\mathbb{Q}) \]
is the identity, i.e. the cycle
\[ \Delta_X - \Xi\circ \Phi\ \ \ \in\ A^{\dim X}(X\times X) \]
is homologically trivial.
We now consider things family-wise, i.e. we construct universal families $\mathcal X\to B$ and $\mathcal S\to B$, where
\[ B\ \ \subset\ \mathbb{P} H^0\bigl(Y_1\times Y_2,\mathcal O_{Y_1\times Y_2}(1,1)\bigr) \]
parametrizes all divisors $H$ such that both $X:=X_H$ and $S:=S_H$ are smooth and dimensionally transverse.
Applying Voisin's Hilbert schemes argument \cite[Proposition 3.7]{V0} (cf. also \cite[Proposition 2.11]{Lacub}) to this set-up, we may assume that the correspondences $\Phi$ and $\Xi$ are generically defined (with respect to $B$), and so in particular
\[ \Delta_X - \Xi\circ \Phi\ \ \ \in\ GDA^{\dim X}(X\times X) \ .\]
We observe that $H^{\dim X}_{tr}(X,\mathbb{Q})\cong H^{\dim S}_{tr}(S,\mathbb{Q})$ (this follows from the decomposition \eqref{so}), and so $H^{\dim X}_{tr}(X,\mathbb{Q})\not=0$; all conditions of Proposition \ref{spread} are fulfilled.
Applying Proposition \ref{spread} to the cycle $ \Delta_X - \Xi\circ \Phi$, we find that a modification of this cycle vanishes:
\[ \Delta_X - \Xi\circ \Phi -\gamma=0\ \ \ \hbox{in}\ A^{\dim X}(X\times X) \ ,\]
where
\[ \gamma\in \Bigl\langle (p_1)^\ast GDA^\ast(X), (p_2)^\ast GDA^\ast(X)\Bigr\rangle\]
is a decomposable cycle.
This translates into the fact that (up to adding some trivial motives $\mathds{1}(\ast)$ and modifying the correspondences $\Phi$ and $\Xi$) the composition
\[ h(X) \ \xrightarrow{\Phi}\ h^{}_{}(S)(-m) \oplus \bigoplus \mathds{1}(\ast)\ \xrightarrow{\Xi}\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat} \]
is the identity, which proves the theorem.
(Finally, the statement in parentheses is a straightforward consequence of the injection of motives: taking Chow groups, one obtains an injection
\[ A^j_{hom}(X)\ \hookrightarrow\ A^{j-m}_{hom}(S)\ .\]
But the group on the right vanishes for $j-m>\dim S$, which means $j> {1\over 2}(\dim X+\dim S)$.)
\end{proof}
\begin{example} Here is a sample application of Theorem \ref{hpd}. Let $Y_1=Y_2=\Gr(2,5)\subset\mathbb{P}^9$. Then $Y_2^\vee=\Gr(2,5)\subset(\mathbb{P}^9)^\vee$ and $S:=Y_1\cap (f_H)^{-1}(Y_2^\vee)$ is 3-dimensional (for $H$ sufficiently general). We consider the 11-dimensional variety
\[ X:= \bigl(\Gr(2,5)\times \Gr(2,5)\bigr)\cap H\ \ \ \subset \mathbb{P}^9\times \mathbb{P}^9 \ ,\]
where $H$ is a general $(1,1)$-divisor. This $X$ is a Fano variety of Calabi--Yau type, considered in \cite[Section 3.3]{IM0}.
Theorem \ref{hpd} implies that one has
\[ A^j_{hom}(X)=0\ \ \ \forall\ j> 7\ ,\]
i.e. $X$ has $\hbox{Niveau}(A^\ast(X))\le 3$ in the sense of \cite{moi}.
\end{example}
\section{Main result}
This section contains the proof of our main result, which is as follows:
\begin{theorem}\label{main} Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. Then $X$ has an MCK decomposition. The Chern classes $c_j(X)$, and the image $\operatorname{Im}\bigl( A^\ast(U)\to A^\ast(X)\bigr)$, lie in $A^\ast_{(0)}(X)$.
\end{theorem}
\subsection{A criterion}
To prove Theorem \ref{main}, we will use the following general criterion:
\begin{proposition}\label{crit} Let $\mathcal X\to B$ be a family of smooth projective varieties. Assume the following conditions:
\noindent
(c0) each fiber $X$ has dimension $2d \ge 8$;
\noindent
(c1) each fiber $X$ has a self-dual CK decomposition $\{\pi^\ast_X\}$ which is generically defined (with respect to $B$), and $h^j(X)\cong\oplus \mathds{1}(\ast)$ for $j\not=2d$;
\noindent
(c2) there exists a family of surfaces $\mathcal S\to B^\circ$ where $B^\circ\subset B$ is a countable intersection of non-empty Zariski opens, and for each
$b\in B^\circ$ there is a split injection of motives
\[ h(X_b) \ \hookrightarrow\ h(S_b)(1-d)\oplus \bigoplus \mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
\noindent
(c3) the family $\mathcal S\times_{B^\circ} \mathcal S\to B^\circ$ has the Franchetta property.
Then for each fiber $X$, $\{\pi^\ast_X\}$ is an MCK decomposition, and $GDA^\ast(X)\subset A^\ast_{(0)}(X)$.
\noindent
Moreover, condition (c3) may be replaced by the following:
\noindent
(c3$^\prime$) $\mathcal S\to B^\circ$ is a family of K3 surfaces, which is the universal family of smooth sections of a direct sum of very ample line bundles on some smooth projective ambient space $V$ with trivial Chow groups (i.e. $A^\ast_{hom}(V)=0$), and $\mathcal S\to B^\circ$ has the Franchetta property.
\end{proposition}
\begin{proof} Using Voisin's Hilbert schemes argument \cite[Proposition 3.7]{V0} (cf. also \cite[Proposition 2.11]{Lacub}), one may assume that the split injection of (c2) is generically defined (with respect to $B^\circ$). This means that there exists a relative correspondence
$\Phi$ fitting into a commutative diagram
\[ \begin{array}[c]{ccc} A^j(\mathcal X) & \xrightarrow{ \Phi_\ast} & A^{j+1-d}(\mathcal S)\oplus \bigoplus A^\ast(B^\circ)\\
&&\\
\downarrow&&\downarrow\\
&&\\
A^j(X) & \xrightarrow{ (\Phi\vert_b)_\ast } & \ A^{j+1-d}(S)\oplus \mathbb{Q}^s ,\\
\end{array}\]
where vertical arrows are restrictions to a fiber, and the lower horizontal arrow is induced by the injection of (c2). The same then applies to $X\times X$, i.e.
there is a commutative diagram
\[ \begin{array}[c]{ccc} A^j(\mathcal X\times_{B^\circ} \mathcal X) & \xrightarrow{} & A^{j+2-2d}(\mathcal S\times_{B^\circ} \mathcal S)\oplus \bigoplus A^{\ast}(\mathcal S) \oplus \bigoplus A^\ast(B^\circ) \\
&&\\
\downarrow&&\downarrow\\
&&\\
A^j(X\times X) & \hookrightarrow& \ A^{j+2-2d}(S\times S)\oplus \bigoplus A^{\ast}(S) \oplus \bigoplus \mathbb{Q}^s,\\
\end{array}\]
where the lower horizontal arrow is split injective thanks to (c2). That is, there is an injection
\[ GDA^j_{B^\circ}(X\times X)\ \hookrightarrow\ GDA^{j+2-2d}_{B^\circ}(S\times S)\oplus \bigoplus GDA^\ast_{B^\circ}(S)\oplus \mathbb{Q}^s\ .\]
It then follows from (c3) that $\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$ has the Franchetta property.\footnote{(NB: in practice, one can often avoid recourse to the Hilbert scheme argument in this step. For instance, in the setting of Proposition \ref{p1} below, the split injection of (c2) is generically defined by construction, and one can apply Proposition \ref{ji2} to conclude that
$\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$ has the Franchetta property.)}
Let us now ascertain that the CK decomposition $\{\pi^\ast_X\}$ is multiplicative. What we need to check is that for each $X=X_b$ one has
\begin{equation}\label{this} \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j)=0\ \ \ \hbox{in}\ A^{4d}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\end{equation}
A standard spread lemma (cf. \cite[Lemma 3.2]{Vo}) shows that it suffices to prove this for all $b\in B^\circ$, so we will henceforth assume that $X=X_b$ with $b\in B^\circ$.
We note that the cycle in \eqref{this} is generically defined, and homologically trivial.
Let us assume that among the three integers $(i,j,k)$, at least one is different from $2d$. Using the hypothesis $h^j(X)=\oplus\mathds{1}(\ast)$ for $j\not=2d$, we find there is a (generically defined) split injection
\[ ( \pi^{4d-i}_X\times \pi^{4d-j}_X\times\pi^k_X)_\ast A^{4d}(X\times X\times X)\ \hookrightarrow\ A^\ast(X\times X)\ .\]
Since
\[ \pi_X^k\circ \Delta_X^{sm}\circ (\pi_X^i\times \pi_X^j) = ({}^t \pi^i_X\times{}^t \pi^j_X\times\pi^k_X)_\ast (\Delta^{sm}_X) = ( \pi^{4d-i}_X\times \pi^{4d-j}_X\times\pi^k_X)_\ast (\Delta^{sm}_X) \]
(where the first equality is an instance of Lieberman's lemma), the required vanishing \eqref{this} now follows from the Franchetta property for $\mathcal X\times_{B^\circ} \mathcal X\to B^\circ$.
It remains to treat the case $i=j=k=2d$. Using the split injection of motives of (c2) and taking the tensor product, we find there is a split injection of Chow groups
\[ A^j(X\times X\times X)\ \hookrightarrow\ A^{j+3-3d}(S^3)\oplus \bigoplus A^\ast(S^2)\oplus \bigoplus A^\ast(S)\oplus \mathbb{Q}^s\ .\]
Moreover (just as we have seen above for $X^2$), this injection respects generically defined cycles, i.e. there is an injection
\[ GDA^j(X\times X\times X)\ \hookrightarrow\ GDA^{j+3-3d}(S^3)\oplus \bigoplus GDA^\ast(S^2)\oplus \bigoplus GDA^\ast(S)\oplus \mathbb{Q}^s\ .\]
In particular, taking $j=4d$ we find an injection
\[ GDA^{4d}(X\times X\times X)\ \hookrightarrow\ GDA^{d+3}(S^3)\oplus \bigoplus GDA^\ast(S^2)\oplus \bigoplus GDA^\ast(S)\oplus \mathbb{Q}^s\ .\]
By assumption, $d\ge 4$, so $d+3\ge 7>6=\dim S^3$ and the summand $GDA^{d+3}(S^3)$ vanishes for dimension reasons.
The required vanishing \eqref{this} then follows from the Franchetta property for $\mathcal S\times_{B^\circ} \mathcal S$. This proves that $\{\pi^\ast_X\}$ is MCK.
To see that $GDA^\ast(X)\subset A^\ast_{(0)}(X)$, it suffices to note that
\[ (\pi^k_X)_\ast GDA^j(X)\ \ \ \ (k\not=2j) \]
is generically defined, and homologically trivial. The Franchetta property for $\mathcal X\to B$ (which is implied by the Franchetta property for $\mathcal X\times_{B^\circ} \mathcal X$) then implies the vanishing
\[ (\pi^k_X)_\ast GDA^j(X)=0\ \ \ \ (k\not=2j)\ , \]
and so $GDA^j(X)\subset (\pi^{2j}_X)_\ast A^j(X)=: A^j_{(0)}(X)$.
Let us now proceed to show that condition (c3$^\prime$) implies condition (c3). The hypotheses of (c3$^\prime$) imply that $B^\circ$ is a Zariski open in some $\bar{B}:=\mathbb{P} H^0(V,\oplus_{j=1}^s L_j)$ which is isomorphic to $\mathbb{P}^r$.
The very ampleness assumption implies that
\[ \pi\colon\ \ \mathcal S\times_{\bar{B}} \mathcal S\ \to\ V\times V \]
is a $\mathbb{P}^{r-2s}$-bundle over $(V\times V)\setminus \Delta_V$ and a $\mathbb{P}^{r-s}$-bundle over $\Delta_V$. That is, $\pi$ is a {\em stratified projective bundle\/} in the sense of \cite{FLV}. As such, \cite[Proposition 5.2]{FLV} implies the equality
\[ GDA^\ast(S\times S)= \operatorname{Im}\Bigl( A^\ast(V\times V)\to A^\ast(S\times S)\Bigr) + \Delta_\ast GDA^\ast(S)\ ,\]
where $\Delta\colon S\to S\times S$ is the inclusion along the diagonal. Since $V$ has trivial Chow groups, one has $A^\ast(V\times V)\cong A^\ast(V)\otimes A^\ast(V)$. Moreover,
$\mathcal S\to V$ is a projective bundle and so \cite[Proposition 5.2]{FLV} gives $GDA^\ast(S)=\operatorname{Im}\bigl(A^\ast(V)\to A^\ast(S)\bigr)$. It follows that the above equality reduces to
\begin{equation}\label{gda} GDA^\ast(S\times S)=\Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S), \Delta_S\Bigr\rangle\ \end{equation}
(where $p_1, p_2$ denote the projection from $S\times S$ to first resp. second factor). By assumption, $S$ is a K3 surface and the Franchetta property holds for $\mathcal S\to B^\circ$, which means that
\[ GDA^\ast(S) = \mathbb{Q} \oplus GDA^1(S) \oplus \mathbb{Q}[o]\ ,\]
where $o\in A^2(S)$ is the Beauville--Voisin class \cite{BV}.
Given a divisor $D\in A^1(S)$, it is known that
\[ \Delta_S\cdot (p_j)^\ast(D)=\Delta_\ast(D)= D\times o + o\times D\ \ \ \hbox{in}\ A^3(S\times S)\ \]
\cite[Proposition 2.6(a)]{BV}. Also, it is known that
\[ \Delta_S\cdot (p_j)^\ast(o)= \Delta_\ast(o)= o\times o\ \ \ \hbox{in}\ A^4(S\times S) \]
\cite[Proposition 2.6(b)]{BV}. It follows that the right-hand side of \eqref{gda} is {\em decomposable\/} in codimension $>2$, i.e.
\[ \begin{split} \Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S), \Delta_S\Bigr\rangle \cap A^j(S\times S) &=\\\Bigl\langle (p_1)^\ast GDA^\ast(S), (p_2)^\ast GDA^\ast(S)\Bigr\rangle\cap A^j(S\times S)& \ \ \ \ \ \ \forall j\not=2\ .\\ \end{split}\]
Since we know that $GDA^\ast(S)$ injects into cohomology (this is the Franchetta property for $\mathcal S\to B^\circ$), equality \eqref{gda}
(plus the K\"unneth decomposition in cohomology) now implies that
\[ GDA^j(S\times S)\ \to\ H^{2j}(S\times S,\mathbb{Q}) \] is injective for $j\not=2$.
For the case $j=2$, it suffices to remark that $\Delta_S$ is linearly independent from the decomposable part in cohomology (for otherwise $H^{2,0}(S)$ would be zero, which is absurd). The injectivity of
\[ GDA^2(S\times S)\ \to\ H^4(S\times S,\mathbb{Q}) \]
then follows from \eqref{gda} plus the injectivity of $GDA^\ast(S)\to H^\ast(S,\mathbb{Q})$. This shows that condition (c3$^\prime$) implies condition (c3); the proposition is proven.
\end{proof}
\subsection{Verifying the criterion: part 1}
\begin{proposition}\label{p1} The following families verify the conditions of Proposition \ref{crit}: the universal families $\mathcal X\to B$ of Fano varieties of type M1, M6, M7, M8, M9, M10.
\end{proposition}
\begin{proof} The existence of a generically defined CK decomposition is an easy consequence of the fact that the Fano varieties $X$ under consideration are complete intersections in an ambient space $U$ with trivial Chow groups, cf. for instance \cite[Lemma 3.6]{V0}. This takes care of conditions (c0) and (c1) of Proposition \ref{crit}.
To verify condition (c2), we use Cayley's trick (Theorem \ref{ji}). The K3 surface $S$ associated to the Fano variety $X$ is a complete intersection in an ambient space $V$ as indicated in Table \ref{table:2}. Let us write $2d:=\dim X$. The ambient spaces $V$ that occur all have trivial Chow groups, and so Theorem \ref{ji}
gives the split injection of motives
\[ h(X)\ \hookrightarrow\ h(S)(1-d)\oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\]
i.e. condition (c2) is verified.
\begin{table}[h]
\centering
\begin{tabular}{||c c c c||}
\hline
${\hbox{Label\ in\ \cite{FM}}}$ & $X$ & $\dim X$ & $ S\subset V$ \\
[0.5ex]
\hline\hline
M1 & $X_{(1,1,1)}\subset\mathbb{P}^3\times\mathbb{P}^3\times\mathbb{P}^3$ & 8 & $S_{1^4}\subset \mathbb{P}^3\times\mathbb{P}^3$ \\
M6 & $X_{(1,1)}\subset \mathbb{S}_5\times\mathbb{P}^7$ & 16 & $S_{1^8}\subset \mathbb{S}_5$ \\
M7 & $X_{(1,1)}\subset\Gr(2,6)\times \mathbb{P}^5$ & 12 & $S_{1^6}\subset \Gr(2,6)$ \\
M8 & $X_{(1,1)}\subset\SGr(2,6)\times \mathbb{P}^4$ & 10 & $S_{1^5}\subset \SGr(2,6)$ \\
M9 & $X_{(1,1)}\subset S_2 \Gr(2,6)\times \mathbb{P}^3$ & 8 & $S_{1^4}\subset S_2 \Gr(2,6)$ \\
M10 & $X_{(1,1)}\subset\SGr(3,6)\times \mathbb{P}^3$ & 8 & $S_{1^4}\subset \SGr(3,6)$ \\
[1ex]
\hline
\end{tabular}
\caption{Fano varieties $X$ and their associated K3 surface $S$.}
\label{table:2}
\end{table}
We observe that all ambient spaces $V$ in Table \ref{table:2} have trivial Chow groups. To verify condition (c3$^\prime$), it only remains to check the Franchetta property for the families $\mathcal S\to B^\circ$. In all these cases, $\mathcal S\to V$ is a projective bundle, and so (using the projective bundle formula, or lazily applying \cite[Proposition 5.2]{FLV}) we find equality
\[ GDA^j_{B^\circ}(S)=\operatorname{Im}\bigl( A^j(V)\to A^j(S)\bigr)\ .\]
Let us check that the right-hand side injects into cohomology. This is non-trivial only in codimension $j=2$. For the family M1, it suffices to observe that
$A^2(\mathbb{P}^3\times\mathbb{P}^3)$ is generated by intersections of divisors, and so
\[ \operatorname{Im}\Bigl( A^2(\mathbb{P}^3\times\mathbb{P}^3)\to A^2(S)\Bigr) = \mathbb{Q}[o] \]
injects into cohomology. For the family M7, it suffices to check that the restriction of $c_2(Q)\in A^2(\Gr(2,6))$ (where $Q$ denotes the universal quotient bundle) to $S$ is proportional to $o$; this is done in \cite[Proposition 2.1]{PSY}. For the family M6, we may as well verify that
\[ \operatorname{Im} \Bigl(A^2(\OGr(5,10))\to A^2(S)\Bigr)=\mathbb{Q}[o] \]
(recall that $\mathbb{S}_5$ is a connected component of $\OGr(5,10)$ in its spinor embedding), this is taken care of in \cite[Proposition 2.1]{PSY}.
For the families M8 and M9, since $\SGr(2,6)$ and $S_2 \Gr(2,6)$ are complete intersections (of dimension 7 resp. 6) inside $\Gr(2,6)$, there is an isomorphism
\[ A^2(S_2 \Gr(2,6))\ \xrightarrow{\cong}\ A^2(\SGr(2,6))\ \xrightarrow{\cong}\ A^2(\Gr(2,6))\ .\]
The case M7 then guarantees that $\operatorname{Im}\bigl( A^2 (V)\to A^2(S)\bigr)$ is spanned by $o$. Finally, for the case M10 one observes that $A^2(\SGr(3,6))\cong\mathbb{Q}$ (this follows from \cite[Proposition 2.1]{vdG}, where $\SGr(3,6)$ is denoted $Y_3$), and so
\[ \operatorname{Im} \Bigl(A^2(\SGr(3,6))\to A^2(S)\Bigr)=\mathbb{Q}[h^2]= \mathbb{Q}[o] \ .\]
\end{proof}
\begin{remark} It seems likely that the families M7, M8, M9, M10 can be related to one another via (a higher-codimension version of) the game of {\em projections\/} and {\em jumps\/} of \cite[Sections 3.3 and 3.4]{BFM}. This might simplify the above argument.
\end{remark}
\subsection{Verifying the criterion: part 2}
\begin{proposition}\label{p2} The following families verify the conditions of Proposition \ref{crit}: the universal families $\mathcal X\to B$ of Fano varieties of type
M3 and M4.
\end{proposition}
\begin{proof} The existence of a generically defined CK decomposition follows as above. The difference with the above is that the families M3 and M4 are {\em not\/} in the form of Cayley's trick; hence, to check condition (c2) we now apply Theorem \ref{hpd} rather than Theorem \ref{ji}.
For the case M3, Theorem \ref{hpd} applies with $Y_1=\Gr(2,5)$ and $Y_2=Q_5$ a 5-dimensional quadric embedded in $\mathbb{P}^9$. Let $B^\circ\subset B$ be the open parametrizing Fano varieties $X$ of type M3 for which, in the notation of Theorem \ref{hpd}, $S$ is a smooth surface.
For each $X=X_b$ with $b\in B^\circ$, Theorem \ref{hpd} gives an injection of motives
\[ h(X)\ \hookrightarrow\ h(S)(-4) \oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
Since quadrics are projectively self-dual, this $S$ is the intersection of $\Gr(2,5)$ with a quadric and 3 hyperplanes in $\mathbb{P}^9$; this is Mukai's model for the general K3 surface of genus 6.
That the family $\mathcal S\to B^\circ$ has the Franchetta property is proven in \cite{PSY}. This takes care of conditions (c2) and (c3) of Proposition \ref{crit}.
For the family M4, Theorem \ref{hpd} applies again, with $Y_1=\SGr(2,5)$ and $Y_2=Q_4$ a 4-dimensional quadric embedded in $\mathbb{P}^9$.
Note that $Y_1$ is a hyperplane section of $\Gr(2,5)$ under its Pl\"ucker embedding. Again, let $B^\circ\subset B$ denote the open where both $X$ and $S$ are smooth dimensionally transverse. Theorem \ref{hpd} now gives an injection of motives
\[ h(X)\ \hookrightarrow\ h(S)(-3) \oplus \bigoplus\mathds{1}(\ast)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\]
where $S$ is again the intersection of $\Gr(2,5)$ with a quadric and 3 hyperplanes in $\mathbb{P}^9$. The family $\mathcal S\to B^\circ$ is now the family of all smooth 2-dimensional complete intersections of $\SGr(2,5)$ with a quadric and 2 hyperplanes. One has that $\mathcal S\to \SGr(2,5)$ is a projective bundle, and so (as before)
\[ GDA^2(S)=\operatorname{Im}\Bigl( A^2(\SGr(2,5))\to A^2(S)\Bigr)\ .\]
But $A^2(\Gr(2,5))\to A^2(\SGr(2,5))$ is an isomorphism (weak Lefschetz), and so $GDA^2(S)=\mathbb{Q}[o]$ as for the family M3. All conditions of Proposition \ref{crit} are verified.
\end{proof}
\subsection{Proof of theorem}
\begin{proof}(of Theorem \ref{main}) For the families B1 and B2 the result was proven in \cite{40}. The family S2 was treated in \cite{S2}. For the remaining families, we have checked (Propositions \ref{p1} and \ref{p2}) that Proposition \ref{crit} applies, which gives a generically defined MCK decomposition. The Chern classes $c_j(X)$, as well as the image
$\operatorname{Im}\bigl(A^\ast(U)\to A^\ast(X)\bigr)$, are clearly generically defined, and so they are in $A^\ast_{(0)}(X)$ thanks to Proposition \ref{crit}.
\end{proof}
\section{A consequence}
\begin{corollary}\label{cor} Let $X\subset U$ be the inclusion of a Fano variety $X$ in its ambient space $U$, where $X,U$ are as in Table \ref{table:1}. Let $\dim X=2d$. Let $R^\ast(X)\subset A^\ast(X)$ be the $\mathbb{Q}$-subalgebra
\[ R^\ast(X):=\Bigl\langle A^1(X), A^2(X), \ldots, A^d(X), c_j(X),\operatorname{Im}\bigl(A^\ast(U)\to A^\ast(X)\bigr)\Bigr\rangle\ \ \ \subset A^\ast(X)\ .\]
Then $R^\ast(X)$ injects into cohomology under the cycle class map.
\end{corollary}
\begin{proof} This is a formal consequence of the MCK paradigm. We know (Theorem \ref{main}) that $X$ has an MCK decomposition, and $c_j(X)$ and $\operatorname{Im}\bigl(A^\ast(U)\to A^\ast(X)\bigr)$ are in $A^\ast_{(0)}(X)$. Moreover, we know that
\begin{equation}\label{vani} A^j_{hom}(X) =0\ \ \ \ \forall j\not=d+1 \end{equation}
(indeed, the injection of motives of Proposition \ref{crit}(c2) induces an injection $A^j_{hom}(X)\hookrightarrow A^{j+1-d}_{hom}(S)$, and for the K3 surface $S$ one has $A^i_{hom}(S)=0$ for all $i\not=2$). This means that
\[ A^j(X) =A^j_{(0)}(X)\ \ \ \ \forall j\not= d+1\ ,\]
and so
\[ R^\ast(X)\ \ \subset\ A^\ast_{(0)}(X) \ .\]
It only remains to check that $A^\ast_{(0)}(X)$ injects into cohomology under the cycle class map. In view of \eqref{vani}, this reduces to checking that the cycle class map induces an injection
\[ A^{d+1}_{(0)}(X)\ \ \hookrightarrow\ H^{2d+2}(X,\mathbb{Q})\ .\]
By construction, the correspondence $\pi_X^{2d+2}$ is supported on a subvariety $V\times W\subset X\times X$, where $V,W\subset X$ are (possibly reducible) subvarieties of dimension $\dim V=d+1$ and $\dim W=d-1$. As in \cite{BS}, the action of $\pi^{2d+2}_X$ on $A^{d+1}(X)$ factors over $A^0(\widetilde{W})$, where $\widetilde{W}\to W$ is a resolution of singularities. In particular, the action of $\pi^{2d+2}_X$ on $A^{d+1}_{hom}(X)$ factors over $A^0_{hom}(\widetilde{W})=0$ and so is zero. But the action of $\pi^{2d+2}_X$ on $A^{d+1}_{(0)}(X)$ is the identity, and so
\[ A^{d+1}_{(0)}(X)\cap A^{d+1}_{hom}(X)=0\ ,\]
as requested.
\end{proof}
\vskip1cm
\begin{nonumberingt} Thanks to Lie Fu and Charles Vial for lots of enriching exchanges around the topics of this paper. Thanks to the referee for pertinent comments. Thanks to Kai who is a great expert on Harry Potter trivia.
\end{nonumberingt}
\vskip1cm
\end{document} |
\begin{document}
\setlist[itemize]{leftmargin=*}
\setlist[enumerate]{leftmargin=*}
\title{Fast Coalgebraic Bisimilarity Minimization}
\author{Jules Jacobs}
\affiliation{
\institution{Radboud University}
\city{Nijmegen}
\country{The Netherlands}
}
\author{Thorsten Wißmann}
\affiliation{
\institution{Radboud University}
\city{Nijmegen}
\country{The Netherlands}
}
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10003752</concept_id>
<concept_desc>Theory of computation</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Theory of computation}
\keywords{Coalgebra, Partition Refinement, Monotone Neighbourhoods}
\begin{abstract}
Coalgebraic bisimilarity minimization generalizes classical automaton
minimization to a large class of automata whose transition structure is specified by a functor,
subsuming strong, weighted, and probabilistic bisimilarity.
This offers the enticing possibility of turning bisimilarity minimization into an off-the-shelf technology, without having to develop a new algorithm for each new type of automaton.
Unfortunately, there is no existing algorithm that is fully general, efficient, and able to handle large systems.
We present a generic algorithm that minimizes coalgebras over an arbitrary functor
in the category of sets, as long as the functor's action on morphisms is sufficiently computable. The algorithm makes
at most $\mathcal{O}(m \log n)$ calls to the functor-specific action,
where $n$ is the number of states and $m$ is the number of transitions in the coalgebra.
While more specialized algorithms can be asymptotically faster than our algorithm (usually by a factor of $\ensuremath{\mathcal{O}}\xspace(\frac{m}{n})$),
our algorithm is especially well suited to efficient implementation, and our tool \textit{Boa}{} often uses much less time and memory on existing benchmarks,
and can handle larger automata, despite being more generic.
\end{abstract}
\maketitle
\ifthenelse{\boolean{preprintversion}}{
}{}
\section{Introduction}
State-based systems arise in various shapes throughout computer science: as
automata for regular expressions,
as control-flow graphs of programs,
Markov decision processes,
(labelled) transition systems,
or as the small-step semantics of programming languages.
If the programming language of interest involves concurrency,
bisimulation can capture whether two systems exhibit the same behaviour~\cite{Winskel93,Milner1980}.
In model checking, a state-based system is derived from the implementation and
then checked against its specification.
It is often beneficial to reduce the size of a state-based system by merging all equivalent states.
Moore's algorithm \cite{Moore} and Hopcroft's $\ensuremath{\mathcal{O}}\xspace(n \log n)$ algorithm \cite{Hopcroft71} do this for the deterministic finite automata that arise from regular expressions,
and produce the equivalent automaton with minimal number of states.
In model checking, state-space reduction can be effective as a preprocessing step~\cite{BaierKatoen08}.
For instance, in probabilistic model checking,
the time saved in model checking due to the smaller system exceeds the time needed to minimize the system~\cite{KatoenEA07}.
Subsequent to \citet{Hopcroft71},
a variety of algorithms were developed for minimizing different types of automata.
Examples are algorithms for
\ifthenelse{\boolean{preprintversion}}{\pagebreak[10]}{}
\begin{itemize}
\item transition systems (without action labels)~\cite{KanellakisSmolka83,KanellakisS90},
labelled transition systems~\cite{Valmari09}, which arise in the verification of concurrent systems,
\item weighted bisimilarity~\cite{ValmariF10} for Markov chains and probabilistic settings (such as probabilistic model checking \cite{KatoenEA07}),
\item Markov decision processes~\cite{BaierEM00,GrooteEA18} that combine concurrency with probabilistic branching,
\item weighted tree automata~\cite{HoegbergEA09,HoegbergEA07} that arise in natural language processing~\cite{MayKnight06}.
\end{itemize}
Recently, those algorithms and system equivalences were subsumed by a coalgebraic
generalization~\cite{DorschEA17,coparFM19,WissmannEA2021}.
This generic algorithm is parametrized by a (Set-)functor that describes the concrete system
type of interest. Functors are a standard notion in category theory and a
key notion in the Haskell programming language.
In coalgebraic automaton minimization, the functor is used to attach transition data to each state of the automaton.
For instance, the powerset functor models non-deterministic branching in transition systems,
and the probability distribution functor models probabilistic branching in Markov chains.
The users of a coalgebraic minimization algorithm may create their own system type by composing the provided basic functors,
allowing them to freely combine deterministic, non-deterministic,
and probabilistic behaviour.
For instance, the functor to model Markov decision processes is the composition of the functors of transition systems
and the functor for probability distributions.
This generalization points to the enticing possibility of turning automata minimization for different types of automata into an off-the-shelf technology.
Unfortunately, there are two problems that currently block this vision.
Firstly, although the generic algorithm has excellent $\ensuremath{\mathcal{O}}\xspace(m \log n)$ asymptotic complexity,
where $n$ is the number of states and $m$ is the number of edges,
it is slow in practice, and the data structures required for partition refinement have a large memory footprint.
A machine with 16GB of RAM required several minutes to minimize tree automata with 150 thousand states and
ran out of memory when minimizing tree automata larger than 160 thousand states \cite{coparFM19,WissmannEA2021}.
This problem has also been observed for algorithms for specific automata types, e.g., transition systems~\cite{Valmari10}.
In order to increase the total memory available,
a distributed partition refinement algorithm has been developed \cite{BDM22},
(and previously also for specific automata types, e.g., labelled transition
systems~\cite{BlomOrzan05}), but this algorithm runs in $\ensuremath{\mathcal{O}}\xspace(n^2)$ and requires expensive distributed hardware.
Secondly, the generic algorithm does not work for all Set-functors, because it places certain restrictions on the functor type necessary for the tight run time complexity.
For instance, the algorithm is not capable of minimizing frames for the monotone neighbourhood logic \cite{HansenKupke04,HansenKupke04cmcs}, arising in game theory~\cite{Parikh1985,Peleg87,Pauly2001}.
We present a new algorithm that works for \emph{all} system types given by computable $\ensuremath{\mathsf{Set}}\xspace{}$-functors,
requiring only an implementation of the functor's action on morphisms, which is
then used to compute so-called \emph{signatures of states}, a notion originally introduced for labelled transition systems~\cite{BlomOrzan05}.
The algorithm makes at most $\ensuremath{\mathcal{O}}\xspace(m \log n)$ calls to the functor implementation,
where $n$ and $m$ are the number of states and edges in the automaton, respectively.
In almost all instances, one such call takes $\ensuremath{\mathcal{O}}\xspace(k)$ time, where $k$ is the maximum out-degree of a state, so the overall run time is in $\ensuremath{\mathcal{O}}\xspace(km\log n)$.
This extra factor is compensated for in practice because our algorithm has been designed to be efficient
and does not need large data structures: we only need the automaton with predecessors and a refinable partition data structure.
We provide an implementation of our algorithm in our tool called \textit{Boa}{}.
The user of the tool can either encode their system type as a composition of the functors natively supported by \textit{Boa}{},
or extend \textit{Boa}{} with a new functor by providing a small amount of Rust code that implements the functor's action on morphisms.
Empirical evaluation of our implementation shows that the memory usage is much reduced,
in certain cases by more than 100x compared to the distributed algorithm~\cite{BDM22},
such that the benchmarks that were used to illustrate its scalability can now be solved on a single computer.
Running time is also much reduced, in certain cases by more than 3000x, even though we run on a single core rather than a distributed cluster.
We believe that this is a major step towards coalgebraic partition refinement as an off-the-shelf technology for automaton minimization.
\paragraph{The rest of the paper is structured as follows.}
\begin{description}
\item[\Cref{sec:CoalgebraIntro}:] Coalgebraic bisimilarity minimization and our algorithm in a nutshell.
\item[\Cref{sec:formalcoalg}:] The formal statement of behavioural equivalence of states, and examples for how this reduces to known notions of equivalence for particular instantiations.
\item[\Cref{sec:CoalgebraicPartitionRefinement}:] Detailed description of our coalgebraic minimization algorithm for any computable set functor, and time complexity analysis showing that the algorithm makes at most $\ensuremath{\mathcal{O}}\xspace(m \log n)$ calls to the functor operation.
\item[\Cref{sec:Instances}:] Instantiations of the algorithm showing its genericity.
\item[\Cref{sec:Benchmarks}:] Benchmark results showing our algorithm outperforms earlier work.
\item[\Cref{sec:Conclusion}:] Conclusion and future work.
\end{description}
\section{Fast Coalgebraic Bisimilarity Minimization in a Nutshell}
\label{sec:CoalgebraIntro}
\newcommand{\st}[1]{\mathbf{#1}}
\tikzstyle{state} = [thick, circle, draw=black, minimum size=0.3cm, scale=0.8]
\tikzstyle{accepting}=[double distance=1pt, outer sep=1pt+\pgflinewidth,inner sep=3pt]
\tikzstyle{arr} = [thick,->,>=stealth,shorten >= 1pt, shorten <= 1pt]
\renewcommand{\ensuremath{\mathsf{ar}}raystretch}{1.2}
\newcommand{\mathsf{F}}{\mathsf{F}}
\newcommand{\mathsf{T}}{\mathsf{T}}
This section presents the key ideas of our fast coalgebraic minimization algorithm.
We start with an introduction to coalgebra,
and how the language of category theory provides an elegant unifying framework for different types of automata.
No knowledge of category theory is assumed;
we will go from the concrete to the abstract,
and category theoretic notions have been erased from the presentation as much as possible.
Let us thus start by looking at three examples of automata:
deterministic finite automata on the alphabet $\{a,b\}$,
transition systems,
and Markov chains.
The usual way of visualizing these systems is depicted in the first row of \Cref{fig:coalgexamples}.
For instance, a deterministic finite automaton on state set $C$ is usually described via
a transition function $\delta \colon C \times \{a,b\} \to C$ and a set of accepting states $F \subseteq C$ (the initial state is not relevant for the task of computing equivalent states).
In order to generalize various types of automata, however, we take a \emph{state-centric} point of view,
where we consider all the data as being \emph{attached to a particular state}:
\begin{itemize}
\item In a finite automaton on the alphabet $\{a,b\}$ each state has two successors: one for the input letter $a$ and one for the input letter $b$.
Each state also carries a boolean that determines whether the state is accepting (double border), or not (single border).
For instance, state $\st{3}$ in the deterministic automaton in the left column of \Cref{fig:coalgexamples} is not accepting, but after transitioning via $a$ it goes to state $\st{5}$, which is accepting.
We can specify any deterministic automaton entirely via a map
\[
c\colon C\to \set{\mathsf{F},\mathsf{T}}\times C\times C
\]
This map sends every state $q\in C$ to $(b,q_a,q_b) := c(q)$, where $b\in \set{\mathsf{F},\mathsf{T}}$ specifies whether $q$ is accepting, and $q_a, q_b\in C$ are the target states for input $a$ and $b$, respectively.
\item A transition system consists of a (finite) set of locations $C$, plus a (finite) set of transitions $\textqt{\mathord{\to}} \subseteq C\times C$.
For instance, state $\st{3}$ in the figure can transition to state $\st{4}$ or $\st{5}$ or to itself, whereas $\st{5}$ cannot transition anywhere.
A transition system is specified by a map
\[
c\colon C\to \ensuremath{\mathcal{P}}f(C)
\]
where $\ensuremath{\mathcal{P}}f(C)$ is the set of finite subsets of $C$. This map sends every location $q$ to the set of locations $c(q)\subseteq C$ to which a transition exists.
\item A Markov chain consists of a set of states, and for each state a probability distribution over all states describes the transition behaviour.
That is, for each pair of states $q, q'\in C$, there is a probability $p_{q,q'}\in [0,1]$ of transitioning from $q$ to $q'$.
We also attach a boolean label to each state (again, indicated by double border).
For instance, state $\st{1}$ in the figure steps to state $\st{2}$ with probability $\frac{1}{3}$ and to state $\st{3}$ with probability $\frac{2}{3}$.
Such a Markov chain is specified by a map
\[
c\colon C\to \set{\mathsf{F},\mathsf{T}}\times \ensuremath{\mathcal{D}}(C)
\]
where $\ensuremath{\mathcal{D}}(C)$ is the set of finite probability distributions over $C$.
\end{itemize}
\begin{figure}
\caption{Examples of different system types and their encoding as coalgebras for the state set $C = \set{\st{1},\ldots,\st{5}}$.}
\label{fig:coalgexamples}
\end{figure}
We call the data $c(q)$ attached to a state $q$ the \textbf{successor structure} of the state $q$.
\textbf{By generalizing the pattern above, different types of automata can be treated in a uniform way}:
In all these examples, we have a set of states $C$ (where $C = \{\st{1},\st{2},\st{3},\st{4},\st{5}\}$ in the figure),
and then a map $c\colon C\to F(C)$ for the successor structures, for some construction $F$ turning the set of states $C$ into another set $F(C)$.
Such a mapping $F\colon \ensuremath{\mathsf{Set}}\xspace{} \to \ensuremath{\mathsf{Set}}\xspace{}$ (in programming terms one should think of $F$ as a type constructor) is called a functor,
and describes the automaton type.
This point of view allows us to easily consider variations,
such as labelled transition systems, given by $F(X) = \ensuremath{\mathcal{P}}f(\{a,b\} \times X)$,
and Markov chains where the states are not labelled but the transitions are labelled,
given by $F(X) = \ensuremath{\mathcal{D}}(\{a,b\} \times X)$.
Other examples, such as monoid weighted systems, Markov Decision processes, and tree automata, are given in \Cref{sec:formalcoalg}.
Representing an automaton of type $F$ by attaching a successor structure of type $F(C)$ to each state $q \in C$ brings us to the following definition:
\begin{definition}
An automaton of type $F$, or finite $F$-coalgebra, is a pair $(C,c)$ of a finite set of states $C$,
and a function $c \colon C \to F(C)$ that attaches the successor structure of type $F(C)$ to each state in $C$.
\end{definition}
Since $C$ is a finite set of states, we can give such a map $c$ by listing what each state in $C$ maps to.
For the concrete automata in \Cref{fig:coalgexamples},
the representation using such a mapping $c \colon C \to F(C)$ is given in the \textqt{Coalgebra} row.
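To make the encoding tangible in code, a finite $F$-coalgebra for the deterministic-automaton functor $F(X)=\set{\mathsf{F},\mathsf{T}}\times X\times X$ can be stored as a plain array indexed by states. The following Rust sketch is purely illustrative: the transition data is made up for this listing and is \emph{not} the automaton of \Cref{fig:coalgexamples}.
\begin{verbatim}
// A coalgebra for F(X) = {F,T} x X x X, i.e. a deterministic automaton
// on the alphabet {a,b}.  States are the indices 0..n of the vector and
// the entry at index q is the successor structure c(q).
type DfaSucc = (bool, usize, usize); // (accepting?, a-successor, b-successor)
type DfaCoalgebra = Vec<DfaSucc>;

// An arbitrary illustrative automaton (not the one from the figure).
fn example_automaton() -> DfaCoalgebra {
    vec![
        (false, 1, 2), // state 0
        (false, 3, 1), // state 1  (equivalent to state 2)
        (false, 4, 2), // state 2  (equivalent to state 1)
        (true,  4, 3), // state 3  (equivalent to state 4)
        (true,  3, 3), // state 4  (equivalent to state 3)
    ]
}
\end{verbatim}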
\subsection{Behavioural equivalence of states in $F$-automata, generically}
We now know how to uniformly \emph{represent} an automaton of type $F$,
but \textbf{we need a uniform way to state what it means for states to be equivalent}.
Intuitively, we would like to say that two states are equivalent
if the successor structures attached to the two states by the map $c \colon C \to F(C)$ are equivalent.
The difficulty is that the successor structure may itself contain other states,
so equivalence of states requires equivalence of successor structures and vice versa.
A way to cut this knot is to consider a \emph{proposed} equivalence of states,
and then define what it means for this equivalence to be valid, namely:
an equivalence of states is \emph{valid} if states that are proposed to be equivalent have equivalent successor structures,
where equivalence of the successor structures is considered up to the \emph{proposed} equivalence of states.
In short, the proposed equivalence should be compatible with the transition structure specified by the successor structures.
Rather than representing a proposed equivalence as an equivalence relation $R\subseteq C\times C$ on the state space $C$,
it is better to use a surjective map $r\colon C\to C'$ that assigns to each state a canonical representative in $C'$ identifying its equivalence class (also called \emph{block}).
That is, two states $q,q'$ are equivalent according to $r$, if $r(q) = r(q')$.
Intuitively, $r$ partitions the states into blocks or equivalence classes $\set{q\in C\mid r(q) = y} \subseteq C$ for each canonical representative $y \in C'$.
Not only does this representation of the equivalence avoid quadratic overhead in the implementation,
but it is also more suitable to state the stability condition:
An equivalence $r\colon C \to C'$ is \emph{stable},
if for every two equivalent states $q_1,q_2$ (i.e., with $r(q_1) = r(q_2)$),
the successor structures $c(q_1)$ and $c(q_2)$ attached to the states become \emph{equal}
after replacing states $q$ inside the successor structures with their canonical representative $r(q)$.
This guarantees that we can build a minimized automaton with the canonical representatives $r(q)\in C'$ as state space.
If we do this replacement for both the source and the target of all transitions, we obtain a potentially smaller automaton
$c'\colon C' \to F(C')$.
In order to gain intuition about this, let us investigate our three examples in \Cref{fig:coalgexamples}:
\begin{itemize}
\item In the finite automaton, the states $\st{4} \equiv \st{5}$ and $\st{2} \equiv \st{3}$ can be shown to be equivalent, so we have $C'=\set{\st{1}, \st{2}, \st{4}}$ and $r\colon C\to C'$ with $\st{3}\mapsto \st{2}$ and $\st{5}\mapsto \st{4}$ (and also $\st{1}\mapsto \st{1}$, $\st{2}\mapsto \st{2}$, $\st{4}\mapsto \st{4}$, which we will use implicitly in future examples).
We can check that this equivalence is compatible with $c$ by verifying
that the successor structures of supposedly equivalent states become \emph{equal} after substituting $\st{5} \mapsto \st{4}$ and $\st{3} \mapsto \st{2}$.
After substituting $\st{5} \mapsto \st{4}$ we indeed have that $c(\st{2}) = (\mathsf{F},\st{4},\st{3})$ and $c(\st{3}) = (\mathsf{F},\st{5},\st{3})$ become equal,
and that $c(\st{4}) = (\mathsf{T},\st{5},\st{4})$ and $c(\st{5}) = (\mathsf{T},\st{4},\st{4})$ become equal.
So this equivalence is stable.
\item For the transition system, the states $\st{3} \equiv \st{4}$ are equivalent, and $\st{1} \equiv \st{2}$ are equivalent.
We can verify, for instance, that the successor structures $c(\st{1}) = \{\st{2},\st{3},\st{4}\}$ and $c(\st{2}) = \{ \st{1}, \st{4} \}$ are equivalent,
because after substituting $\st{4} \mapsto \st{3}$ and $\st{2} \mapsto \st{1}$, we indeed have $\{\st{1},\st{3},\st{3}\} = \{ \st{1}, \st{3} \}$,
because duplicates can be removed from sets.
Note that it is important that the data for transition systems are sets rather than lists or multisets.
Multisets also give a valid type of automaton, but they do not give the same notion of equivalence.
\item For the Markov chain, we can verify $\st{2} \equiv \st{3} \equiv \st{5}$.
Consider that all three of these states step to state $\st{4}$ with probability $\frac{1}{2}$.
With the remaining probability $\frac{1}{2}$ these states step to one of the states $\st{2} \equiv \st{3} \equiv \st{5}$, i.e.~they stay in this block.
State $\st{3}$ steps to either state $\st{2}$ or $\st{5}$ with probability $\frac{1}{4}$ each. If we however assume that
state $\st{5}$ behaves equivalently to $\st{2}$, then the branching of state $\st{3}$ is the same as going to state $\st{2}$ with probability
$\frac{1}{4} + \frac{1}{4} = \frac{1}{2}$ directly.
Thus, when
substituting $\st{5} \mapsto \st{2}$ and $\st{3} \mapsto \st{2}$
the distribution $c(\st{3}) = (\mathsf{F},\{\st{2}\colon \frac{1}{4},\st{4}\colon\frac{1}{2},\st{5}\colon\frac{1}{4}\})$,
collapses to $(\mathsf{F},\{\st{2}\colon\frac{1}{2},\st{4}\colon\frac{1}{2}\})$.
In other words, edges to equivalent states get merged by summing up their probability.
\end{itemize}
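For the array representation of the deterministic-automaton functor sketched earlier, this stability check can be phrased directly as code. The following Rust fragment is again only a sketch: it takes the proposed equivalence as an array $r$ of representatives and compares substituted successor structures pairwise (a quadratic check, fine for illustration but not what an efficient algorithm does).
\begin{verbatim}
// Check stability of a proposed equivalence for F(X) = {F,T} x X x X.
// c[q] is the successor structure of state q; r[q] is the canonical
// representative of q.  The equivalence is stable iff r[q1] == r[q2]
// implies F[r](c(q1)) == F[r](c(q2)).
fn is_stable(c: &[(bool, usize, usize)], r: &[usize]) -> bool {
    // F[r](c(q)): substitute representatives inside the successor structure.
    let subst = |q: usize| {
        let (acc, qa, qb) = c[q];
        (acc, r[qa], r[qb])
    };
    for q1 in 0..c.len() {
        for q2 in 0..c.len() {
            if r[q1] == r[q2] && subst(q1) != subst(q2) {
                return false;
            }
        }
    }
    true
}
\end{verbatim}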
Here we assumed that we were given an equivalence, which we check to be stable.
Our next task is to determine how to find the maximal stable equivalence.
We shall see that this only requires a minor modification to checking that
a given equivalence is stable:
if we discover that an equivalence is \emph{not} stable,
we can use that information to iteratively refine the equivalence until it is stable.
\subsection{Minimizing $F$-automata, generically: the naive algorithm}
\label{sec:minimizing_automata}
In this section we describe a \textbf{naive but generic method for minimizing $F$-automata} \cite{KonigKupper14}.
The method is based on the observation that we can start by optimistically assuming that \emph{all} states are equivalent,
and then use the stability check described in the preceding section to determine how to split up into finer blocks.
By iterating this procedure we will arrive at the minimal automaton.
Let us thus see what happens if we blindly assume \emph{all} states to be equivalent,
and perform the substitution where we change every state to state $\st{1}$.
For the finite automaton in \Cref{fig:coalgexamples}, we get
\begin{align*}
\st{1} &\mapsto (\mathsf{F},\st{1},\st{1}) &
\st{2} &\mapsto (\mathsf{F},\st{1},\st{1}) &
\st{3} &\mapsto (\mathsf{F},\st{1},\st{1}) &
\st{4} &\mapsto (\mathsf{T},\st{1},\st{1}) &
\st{5} &\mapsto (\mathsf{T},\st{1},\st{1})
\end{align*}
Clearly, even though we assumed all states to be equivalent,
the states $\st{1},\st{2},\st{3}$ are still distinct from $\st{4},\st{5}$ because the former three are not accepting whereas the latter two are.
Therefore, even if we initially assumed all states to be equivalent, we discover inequivalent states.
Let us thus try the equivalence $\st{1} \equiv \st{2} \equiv \st{3}$ and $\st{4} \equiv \st{5}$,
and apply substitution where we send $\st{2} \mapsto \st{1}$, $\st{3} \mapsto \st{1}$ and $\st{5} \mapsto \st{4}$:
\begin{align*}
\st{1} &\mapsto (\mathsf{F},\st{1},\st{1}) &
\st{2} &\mapsto (\mathsf{F},\st{4},\st{1}) &
\st{3} &\mapsto (\mathsf{F},\st{4},\st{1}) &
\st{4} &\mapsto (\mathsf{T},\st{4},\st{4}) &
\st{5} &\mapsto (\mathsf{T},\st{4},\st{4})
\end{align*}
We have now discovered \emph{three} distinct blocks of states: state $\st{1}$, states $\st{2} \equiv \st{3}$ and states $\st{4} \equiv \st{5}$.
If we apply a substitution for \emph{that} equivalence, we get:
\begin{align*}
\st{1} &\mapsto (\mathsf{F},\st{2},\st{2}) &
\st{2} &\mapsto (\mathsf{F},\st{4},\st{2}) &
\st{3} &\mapsto (\mathsf{F},\st{4},\st{2}) &
\st{4} &\mapsto (\mathsf{T},\st{4},\st{4}) &
\st{5} &\mapsto (\mathsf{T},\st{4},\st{4})
\end{align*}
We did not discover new blocks; we still have three distinct blocks of states:
$\st{1}$, states $\st{2} \equiv \st{3}$ and states $\st{4} \equiv \st{5}$.
Hence, there is no need to change the substitution map sending each state to a representative in the $\equiv$-class,
and so we reached a fixed point.
We can now read off the minimized automaton by deleting states $\st{3}$ and $\st{5}$ from the last automaton above.
\newcommand{0.3}{0.3}
\begin{figure}
\caption{Execution of the naive algorithm for the three automata of \Cref{fig:coalgexamples}.}
\label{fig:naivealgexamples}
\end{figure}
The reader may observe that the process sketched above is quite general,
and can be used to minimize a large class of automata. The sketch translates into the pseudocode in \Cref{algSketchNaive}.
\begin{algorithm}
\begin{algorithmic}
\upshape
\Procedure{NaiveAlgorithm}{automaton}
\Comment{Finds equivalent states of automaton}
\State Put all states in one block (\ie assume that all states are equivalent)
\While{number of blocks grows}
\State Substitute current block numbers in the successor structures
\State Split up blocks according to the successor structures
\EndWhile
\EndProcedure
\end{algorithmic}
\caption{Sketch of the naive partition refinement algorithm}
\label{algSketchNaive}
\end{algorithm}
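For concreteness, the sketch can be instantiated in Rust for the deterministic-automaton functor used above. This is only an illustration of \Cref{algSketchNaive}, not \textit{Boa}{}'s implementation: blocks are renumbered with a hash map, and the substitution step is hard-coded for $F(X)=\set{\mathsf{F},\mathsf{T}}\times X\times X$.
\begin{verbatim}
use std::collections::HashMap;

// Naive partition refinement for F(X) = {F,T} x X x X.
// Input: c[q] is the successor structure of state q.
// Output: an array assigning each state its block number.
fn naive_minimize(c: &[(bool, usize, usize)]) -> Vec<usize> {
    let n = c.len();
    let mut p = vec![0usize; n]; // start with all states in block 0
    let mut num_blocks = 1;
    loop {
        // Substitute the current block numbers into the successor structures.
        let sigs: Vec<(bool, usize, usize)> = (0..n)
            .map(|q| { let (acc, qa, qb) = c[q]; (acc, p[qa], p[qb]) })
            .collect();
        // Split blocks: two states stay together iff they were in the same
        // block and their substituted successor structures agree.
        let mut ids: HashMap<(usize, (bool, usize, usize)), usize> = HashMap::new();
        let mut next = 0;
        let p_new: Vec<usize> = (0..n)
            .map(|q| {
                *ids.entry((p[q], sigs[q]))
                    .or_insert_with(|| { next += 1; next - 1 })
            })
            .collect();
        if next == num_blocks {
            return p_new; // no new blocks were discovered: fixed point
        }
        num_blocks = next;
        p = p_new;
    }
}
\end{verbatim}
On the illustrative automaton from the earlier listing, this computes the blocks $\{0\}$, $\{1,2\}$ and $\{3,4\}$.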
The execution trace of this naive algorithm for our three example automata of \Cref{fig:coalgexamples} can be found in \Cref{fig:naivealgexamples}.
All the algorithm needs is the ability to obtain a canonicalized successor structure
after applying a substitution to the successor states.
In general this may involve some amount of computation.
For instance, for transition systems, a purely textual substitution would lead to $\{\st{1},\st{1},\st{1}\}$
assuming all states are conjectured equivalent in the first step,
and the canonical form of this set is $\{\st{1}\}$.
Note that the states $\st{1},\ldots,\st{4}$ all have successor structure $\{\st{1}\}$ in the first step of the algorithm,
but they get distinguished from state $\st{5}$,
which has successor structure $\{\ \}$.
We see that in order to talk about equivalence of states,
and in order to perform minimization, we need a notion of substitution and canonicalization.
As it turns out, this corresponds exactly to the standard definition of functor in category theory (for $\ensuremath{\mathsf{Set}}\xspace{}$):
\begin{definition}
\label{def:functor}
$F\colon \ensuremath{\mathsf{Set}}\xspace{} \to \ensuremath{\mathsf{Set}}\xspace{}$ is a functor, if given a map $p\colon A \to B$ (i.e., a ``substitution''),
we have a mapping $F[p]\colon F(A) \to F(B)$.
Furthermore, this operation must satisfy $F[id] = id$ and $F[p \circ g] = F[p] \circ F[g]$.
\end{definition}
We thus require all automata types to be given by functors in the sense of \Cref{def:functor}.
We can then talk about equivalence of states,
and minimize automata by repeatedly applying this operation $F[p]$ as sketched above.
A more formal naive algorithm will be discussed in \Cref{sec:algFinalChain}.
\subsection{The challenge: a generic \emph{and} efficient algorithm}
The problem with the naive algorithm sketched in \Cref{sec:minimizing_automata}
is that it processes all transitions in every iteration of the main loop.
In certain cases, partition refinement (in general) may take $\ensuremath{\Theta}\xspace(n)$ iterations to converge, where $n$ is the number of states.
This can happen, for instance, if the automaton has a long chain of transitions,
so in each iteration, only one state is moved to a different block.
\Cref{fig:nsquared} contains three example automata for which the naive algorithm takes $\ensuremath{\Theta}\xspace(n)$ iterations
(provided one generalizes the examples to have $n$ nodes).
Since the naive algorithm computes new successor structures for all states in each iteration,
the functor operation is applied $\ensuremath{\mathcal{O}}\xspace(n^2)$ times in total.
Thus, the challenge we set out to solve is the following:
\tikzstyle{arrsum} = [thick,->,>=stealth,dashed]
\begin{figure}
\caption{Examples of shapes of automata on which the naive algorithm takes $\ensuremath{\Theta}\xspace(n)$ iterations.}
\label{fig:nsquared}
\end{figure}
\qquad \textbf{Can we find an asymptotically and practically efficient algorithm for automaton minimization that uses only the successor structure recomputation operation $F[p]$?}
By using only $F[p]$, we do not impose further conditions on the functor $F$ besides $F[p]$ being computable.
Since the algorithm does not inspect $F$ any further, the only condition imposed on the functor
is that $F[p]$ is computable for all substitutions $p$ on the state space.
\subsection{Hopcroft's trick: the key to efficient automaton minimization}
A key part of the solution is a principle often called \textqt{Hopcroft's trick} or \textqt{half the size} trick,
which underlies all known asymptotically efficient automata minimization algorithms.
To understand the trick, consider the following game:
\begin{enumerate}
\item We start with a set of objects, \eg $\{1,2,3,4,5,6,7,8,9\}$.
\item We chop the set into two parts arbitrarily, \eg $\{1,3,5,7,9\},\{2,4,6,8\}$.
\item We select one of the sets, and chop it up arbitrarily again, \eg $\{1,3\},\{5,7,9\},\{2,4,6,8\}$.
\item We continue the game iteratively (possibly until all sets are singletons).
\end{enumerate}
Once the game is complete, we trace back the history of one particular element, say $3$,
and count how many times it was in the smaller part of a split:
\quad \textbf{The number of times an element was part of the smaller half of a split is $\ensuremath{\mathcal{O}}\xspace(\log n)$.}
One can prove this bound by considering the evolution of the size of the set containing the element.
Initially, this size is $n$. Each time the element is part of the smaller part of a split, the size of the surrounding
set is at least halved, which can happen at most $\ensuremath{\mathcal{O}}\xspace(\log n)$ times before we reach a singleton.
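Spelled out: if an element has been in the smaller part of a split $k$ times, its surrounding set has size at most $n/2^{k}$; since that set always still contains the element, we get
\[
1 \;\leq\; \frac{n}{2^{k}}
\qquad\Longleftrightarrow\qquad
k \;\leq\; \log_2 n\ .
\]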
This indicates that for efficient algorithms,
we should make sure that the running time of the algorithm is only proportional to the smaller halves of the splits.
In other words, when we split a block, we have to make sure that we do not loop over the larger half of the split.
A slightly more general bound results from considering a game where we can
split each set into an arbitrary number of parts, rather than $2$:
\quad \textbf{The number of times an element was part of \underline{a} smaller part of the split is $\ensuremath{\mathcal{O}}\xspace(\log n)$.}
In this case, ``a smaller part of the split'' is to be understood as any part of the split except the largest part.
Thus, if we split $\{1,2,3,4,5,6,7,8,9\}$ into $\{1,3\},\{5,7,9\},\{2,4,6,8\}$, then $\{1,3\},\{5,7,9\}$ are both considered ``smaller parts'',
whereas $\{2,4,6,8\}$ is the larger part.
In terms of algorithm design, our goal shall thus be that when we do a $k$-way split of a block,
we may do operations proportional to all the $k-1$ smaller parts of the split,
but never an operation proportional to the largest part of the split.
\subsection{A sketch of our generic and efficient algorithm}
We design our algorithm based on the naive algorithm and Hopcroft's trick.
The main problem with the naive algorithm is that it recomputes the successor structures of \emph{all} states at each step.
The reader may already have noticed that many of the successor structures in fact stay the same, and are unnecessarily recomputed.
The successor structure of a state only changes if the block number of one of its successors changes.
\emph{The key to a more efficient algorithm is to minimize the number of times a block number changes, so that successor structure recomputation is avoided as much as possible.}
In the naive algorithm, we see that when we split a block of states into smaller blocks,
we have freedom about which numbers to assign to each new sub-block.
We therefore choose to \emph{keep the old number for the largest sub-block}.
Hopcroft's trick will then ensure that a state's number changes at most $\ensuremath{\mathcal{O}}\xspace(\log n)$ times.
In order to reduce recomputation of successor structures, our algorithm tracks for each block of states (\ie states with the same block number), which of the states are \emph{dirty}, meaning that at least one of their successors' number changed.
The remaining states in the block are \emph{clean}, meaning that the successors did not change.
Importantly, all clean states of a block \emph{have the same successor structure},
because (A) their successors did not change, and (B) if their successor structure had been different in the last iteration, they would have been placed in different blocks.
Therefore, in order to recompute the successor structures of a block,
it suffices to recompute the dirty states and \emph{one} of the clean states,
because we know that all the clean states have the same successor structure.
This sketch translates into the pseudocode of \Cref{algSketchOpt}.
\begin{algorithm}
\begin{algorithmic}
\upshape
\Procedure{PartRefSetFun}{automaton}
\Comment{Finds equivalent states of automaton}
\State Put all states in one block (\ie assume that all states are equivalent)
\State Mark all states dirty
\While{number of blocks grows}
\State Pick a block with dirty states
\State Compute the successor structures of the dirty states and one clean state
\State Mark all states in the block clean
\State Split up the block, keeping the old block number for the largest sub-block
\State Mark all predecessors of changed states dirty
\EndWhile
\EndProcedure
\end{algorithmic}
\caption{Sketch of the optimized partition refinement algorithm}
\label{algSketchOpt}
\end{algorithm}
Let us investigate the complexity of this algorithm in terms of the number of successor structure recomputations.
By Hopcroft's trick, a state's number can now change at most $\ensuremath{\mathcal{O}}\xspace(\log n)$ times,
since we do not change the block number of the largest sub-block.
Whenever we change a state's number, all the predecessors of that state will need to be marked dirty, and be recomputed.
If we take a more global view, we can see that a recomputation may be triggered for every edge in the automaton,
for each time the number of the destination state of the edge changes.
Therefore, if there are $m$ edges, there will be at most $\ensuremath{\mathcal{O}}\xspace(m \log n)$ successor structure recomputations, \ie at most $\ensuremath{\mathcal{O}}\xspace(m \log n)$ calls to the functor operation.
In order to make the algorithm asymptotically efficient in terms of the total number of primitive computation steps,
we must make sure to never do any operation that is proportional to the number of clean states in a block.
Importantly, we must be able to split a block into $k$ sub-blocks without iterating over the clean states.
To do this, we have to devise efficient data structures to keep track of the blocks and their dirty states (\Cref{sec:DataStructures}).
We implement our algorithm (\Cref{sec:OurAlgorithm}) with these data structures and efficient methods for computing the functor operation in our tool, \textit{Boa}{}.
When using \textit{Boa}{}, the user can either encode their automata using a composition of the built-in functors,
or implement their own functor operation and instantiate the algorithm with that.
\subsubsection*{Practical efficiency of the algorithm}
Previous algorithms, which apply to classes of functors supporting more specialized operations beyond the plain functor operation,
can achieve better asymptotic complexity when one uses more fine-grained accounting than just the number of calls to the functor operation \cite{DorschEA17,concurSpecialIssue,coparFM19,WissmannEA2021}.
Perhaps surprisingly, even though our algorithm is very generic
and doesn't have access to these specialized operations,
our algorithm is much faster than the more specialized algorithm in practice (\Cref{sec:Benchmarks}).
However, the limiting factor in practice is not necessarily time but space.
The aforementioned algorithm requires on the order of 16GB of RAM for minimizing automata with 150 thousand states \cite{coparFM19,WissmannEA2021}.
In order to be able to access more memory, distributed algorithms have been developed \cite{BDM22,BlomOrzan05}.
Using a cluster with 265GB of memory, the distributed algorithm was able to minimize an automaton with 1.3 million states and 260 million edges.
By contrast, \textit{Boa}{} is able to minimize the same automaton using only 1.7GB of memory.
The reason is that we do not need any large auxiliary data structures;
most of the 1.7GB is used for storing the automaton itself.
Furthermore, because we only need to compute the functor operation for states in the automaton,
we are able to store the automaton in an efficient immutable binary format.
In the rest of the paper we will first give a more formal definition of bisimilarity in coalgebras (\Cref{sec:formalcoalg}),
we describe how we represent our automata, and which basic operations we need (\Cref{sec:Representation}),
we describe the auxiliary data structures required by our algorithm (\Cref{sec:DataStructures}),
we describe our algorithm and provide complexity bounds (\Cref{sec:OurAlgorithm}),
we show a variety of functor instances that our algorithm can minimize (\Cref{sec:Instances}),
we compare the practical performance to earlier work (\Cref{sec:Benchmarks}),
and we conclude the paper (\Cref{sec:Conclusion}).
\section{Coalgebra and Bisimilarity, Formally}
\label{sec:formalcoalg}
In this section we define formally what it means for two states in a coalgebra to be behaviourally equivalent,
and we give examples to show that behavioural equivalence in coalgebras reduces to known notions of bisimilarity for specific functors.
Recall that we model state-based systems as coalgebras for set functors (\Cref{def:functor}):
\begin{definition}
An \emph{$F$-coalgebra} consists of a carrier set $C$ and a structure map $c\colon C\to FC$.
\end{definition}
Intuitively, the carrier $C$ of a coalgebra $(C,c)$ is the set of states of the
system, and for each state $x\in C$, the map provides $c(x)\in FC$ that is the
structured collection of successor states of $x$. If $F=\ensuremath{\mathcal{P}}f$, then $c(x)$ is
simply a finite set of successor states. The functor determines a canonical
notion of behavioural equivalence.
\begin{definition}
A \emph{homomorphism} between coalgebras $h\colon (C,c)\to (D,d)$ is a map $h\colon
C\to D$ with $F[h](c(x)) = d(h(x))$ for all $x\in C$. States $x,y$ in a
coalgebra $(C,c)$ are \emph{behaviourally equivalent} if there is some
other coalgebra $(D,d)$ and a homomorphism $h\colon (C,c)\to (D,d)$ such that
$h(x) = h(y)$.
\end{definition}
\begin{example}
\label{exCoalgebra}
We consider coalgebras for the following functors (see also \Cref{tabExCoalgebra}):
\begin{enumerate}
\item Coalgebras for $\ensuremath{\mathcal{P}}f$ are finitely-branching transition systems and
states $x,y$ are behaviourally equivalent iff they are bisimilar.
\item An (algebraic) signature is a set $\Sigma$ together with a map $\ensuremath{\mathsf{ar}}\colon \Sigma\to
\ensuremath{\mathbb{N}}\xspace$. The elements of $\sigma\in \Sigma$ are called \emph{operation
symbols} and $\ensuremath{\mathsf{ar}}(\sigma)$ is the arity. Every signature induces a functor
defined by
\[
\tilde{\Sigma}X = \{ (\sigma,x_1,\ldots,x_{\ensuremath{\mathsf{ar}}(\sigma)}) \mid \sigma\in\Sigma,
x_1,\ldots,x_{\ensuremath{\mathsf{ar}}(\sigma)}\in X \}
\]
on sets and for maps $f\colon X\to Y$ defined by
\[
\tilde{\Sigma}[f](\sigma,x_1,\ldots,x_{\ensuremath{\mathsf{ar}}(\sigma)})
= (\sigma,f(x_1),\ldots,f(x_{\ensuremath{\mathsf{ar}}(\sigma)})).
\]
A state in a $\tilde{\Sigma}$-coalgebra describes a possibly
infinite $\Sigma$-tree, with nodes labelled by $\sigma \in \Sigma$
with $\ensuremath{\mathsf{ar}}(\sigma)$ many children.
Two states are behaviourally equivalent iff they describe the same
$\Sigma$-tree.
\item Deterministic finite automata on alphabet $A$ are coalgebras for
the signature $\Sigma$ with 2 operation symbols of arity $|A|$.
States are behaviourally equivalent iff they accept the same language.
\item For a commutative monoid $(M,+,0)$, the \emph{monoid-valued} functor $M^{(X)}$ \cite[Def.~5.1]{GummS01}
can be thought of as $M$-valued distributions over $X$:
\[
M^{(X)} := \{\mu \colon X\to M\mid \mu(x)\neq 0 \text{ for only finitely many
}x\in X\}
\]
The map $f\colon X\to Y$ is sent by $M^{(-)}$ to
\[
M^{(f)}\colon M^{(X)}\to M^{(Y)}
\qquad
M^{(f)}(\mu) = \big(y\mapsto \sum_{x\in X, f(x) = y} \mu(x)\big)
\]
Coalgebras for $M^{(-)}$ are
weighted systems whose weights come from $M$.
A coalgebra $c\colon C\to M^{(C)}$ sends a state $x\in C$ and another state
$y\in C$ to a weight $m:=c(x)(y)\in M$ which is understood as the weight of
the transition $x\xrightarrow{m} y$, where $c(x)(y) = 0$ is understood as
no transition. The coalgebraic behavioural equivalence captures weighted
bisimilarity~\cite{Klin09}. Concretely, a weighted bisimulation is an
equivalence relation $R\subseteq C\times C$ such that for all $x\,R\,y$
and $z\in C$:
\[
\sum_{z\,R\,z'} c(x)(z')
=\sum_{z\,R\,z'} c(y)(z')
\]
\item Taking $M = (\ensuremath{\mathbb{Q}}\xspace,+,0)$, we get that $M^{(X)}$ are linear combinations over $X$.
If we restrict to the subfunctor $\ensuremath{\mathcal{D}}(X) = \{ f \in \ensuremath{\mathbb{Q}}\xspace_{\geq 0}^{(X)} \mid \sum_{x \in X} f(x) = 1\}$ where the weights are nonnegative and sum to 1,
we get (rational finite support) probability distributions over $X$.\footnote{
In models of computation where addition of rational numbers isn't linear time,
one can restrict to fixed-precision rationals $Q_q = \{ \frac{p}{q} \mid p \in \ensuremath{\mathbb{Z}}\xspace \}$ for some fixed $q \in \ensuremath{\mathbb{N}}\xspace_{>0}$ to obtain our time complexity bound.
}
\item For two functors $F$ and $G$, we can consider the coalgebra over their composition $F \circ G$.
Taking $F = \ensuremath{\mathcal{P}}f$ and $G = A\times (-)$, coalgebras over $F \circ G$ are labelled transition systems with strong bisimilarity.
Taking $F = \ensuremath{\mathcal{P}}f$ and $G = \ensuremath{\mathcal{D}}$, coalgebras over $F \circ G$ are Markov
decision processes with probabilistic bisimilarity
\cite[Def.~6.3]{LarsenS91},
\cite[Thm.~4.2]{BARTELS200357}.
For $F=M^{(-)}$ and $G=\Sigma$ for some signature functor, $FG$-coalgebras
are weighted tree automata and coalgebraic behavioural equivalence is
backward bisimilarity~\cite{coparFM19,HoegbergEA09}.
\end{enumerate}
\end{example}
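To illustrate the action of the monoid-valued functor $M^{(-)}$ above, the following Rust sketch (ours, purely for illustration) applies a map $f$ to a finitely supported weight function for the monoid $(\ensuremath{\mathbb{Z}}\xspace,+,0)$: weights of merged elements are summed, and entries with weight $0$ are dropped so that the support stays finite and canonical.
\begin{verbatim}
use std::collections::HashMap;

// Action of the monoid-valued functor M^(-) on a map f, for M = (Z, +, 0).
// A finitely supported mu: X -> Z is represented by its nonzero entries.
// (M^(f))(mu)(y) is the sum of mu(x) over all x with f(x) = y.
fn monoid_valued_map(
    f: impl Fn(usize) -> usize,
    mu: &HashMap<usize, i64>,
) -> HashMap<usize, i64> {
    let mut out: HashMap<usize, i64> = HashMap::new();
    for (&x, &w) in mu {
        *out.entry(f(x)).or_insert(0) += w;
    }
    out.retain(|_, w| *w != 0); // keep only the (finite) support
    out
}
\end{verbatim}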
\begin{table}[t]
\caption{List of functors, their coalgebras, and the accompanying notion
of behavioural equivalence. The first five are given in
\Cref{exCoalgebra}; the last is introduced later in \Cref{sec:Instances}.}
\label{tabExCoalgebra}
\centering
\begin{tabular}{@{}lll@{}}
\toprule
Functor $F(X)$ & Coalgebras $c\colon C\to FC$
& Coalgebraic behavioural equivalence
\\
\midrule
$\ensuremath{\mathcal{P}}f(X)$ & Transition Systems & (Strong) Bisimilarity \\
$\ensuremath{\mathcal{P}}f(A\times X)$ & Labelled Transition Systems & (Strong) Bisimilarity \\
$M^{(X)}$ & Weighted Systems (for a monoid $M$)& Weighted Bisimilarity \\
$\ensuremath{\mathcal{P}}f (\ensuremath{\mathcal{D}}(X))$ & Markov Decision Processes & Probabilistic Bisimilarity \\
$M^{(\tilde{\Sigma} X)}$ & Weighted Tree Automata & Backwards Bisimilarity \\
\midrule
$\ensuremath{\mathcal{N}}\xspace(X)$ & Monotone Neighbourhood Frames
& Monotone Bisimilarity
\\
\bottomrule
\end{tabular}
\end{table}
Sometimes, we need to reason about successors and predecessors of a general
$F$-coalgebra:
\begin{definition}
\label{sucpred}
Given a coalgebra $c\colon C\to FC$ and a state $x\in C$, we say that $y\in C$
is a \emph{successor of $x$} if $c(x)$ is not in the image of
$F[i_y]\colon F(C\setminus\{y\}) \to FC$, where $i_y\colon
C\setminus\set{y}\ensuremath{\rightarrowtail} C$ is the canonical inclusion. Likewise, $x$ is a
\emph{predecessor} of $y$, and the \emph{outdegree} of $x$ is the number of
successors of $x$.
\end{definition}
Intuitively, $y$ is a successor of $x$ if $y$ appears somewhere in the term
that defines $c(x) \in F(C)$, as in the \textqt{coalgebra} row of
\Cref{fig:coalgexamples}.
We will access the predecessors in the minimization algorithm, and moreover,
the total and maximum number of successors will be used in the run time
complexity analysis.
\section{Coalgebraic Partition Refinement}
\label{sec:CoalgebraicPartitionRefinement}
In this section we will describe how the coalgebraic notions of the preceding section can be used for automata minimization.
\subsection{Representing Abstract Data}
\label{sec:Representation}
When writing an abstract algorithm, it is crucial for the complexity analysis how the abstract data is actually represented in memory.
We understand finite sets like the carrier of the input coalgebra as finite cardinals $C \cong \set{0,\ldots,|C|-1}\subseteq \ensuremath{\mathbb{N}}\xspace$,
and a map $f\colon C\to D$ for finite $C$ is represented by an array of length $|C|$.
\subsection*{Coalgebra implementation}
\label{sec:coalginterface}
The coalgebra $c \colon C \to FC$ that we wish to minimize is given to the algorithm
as a black-box, because it only needs to interact with the coalgebra via a specific interface.
Whenever the algorithm comes up with a partition $p\colon C\to
C'$, two states $x,y\in C$ need to be moved to different blocks if $F[p](c(x))
\neq F[p](c(y))$. Hence, the algorithm needs to derive $F[p](c(x))$ for states
of interest $x\in C$. Since all partitions are finite, we can assume
$C'\subseteq \ensuremath{\mathbb{N}}\xspace$, and so for simplicity, we consider partitions as maps $p\colon C\to \ensuremath{\mathbb{N}}\xspace$
with the image $\ensuremath{\mathcal{I}}\xspacem(p) = \{0,\ldots,|C'|-1\}$ and so $F[p](c(x))$ is an element of the set $F\ensuremath{\mathbb{N}}\xspace$.
For the case of labelled transition systems, i.e.~$F(X) = \ensuremath{\mathcal{P}}f(A\times X)$,
the binary representation of $F[p](c(x))$ is called the \emph{signature of $x\in C$
with respect to $p$}~\cite{BlomOrzan05}.
This straightforwardly generalizes to arbitrary functors $F$~\cite{BDM22,concurSpecialIssue},
so we reuse the terminology \emph{signature} for the binary encoding of the successor structure of $x\in C$ with respect to the blocks of the partition $p$ from the previous iteration.
Beside the signatures, the optimized minimization algorithm needs to be able to determine the predecessors of a state, in order to determine which states to mark dirty.
Formally, we require:
\begin{definition}\label{coalgImpl}
The \emph{implementation} of an $F$-coalgebra $c\colon C\to FC$ is
the data $(n,\ensuremath{\mathsf{sig}},\ensuremath{\mathsf{pr}}ed)$ where:
\begin{enumerate}
\item $n\in \ensuremath{\mathbb{N}}\xspace$ is a natural number such that $C\cong \set{0,\ldots,n-1}$
\item $\ensuremath{\mathsf{sig}}\colon C\times (C\to \ensuremath{\mathbb{N}}\xspace) \to 2^*$ is a function that, given a state and a partition, computes the successor structure of the state (represented as binary data), satisfying for all
partitions $p\colon C\to \ensuremath{\mathbb{N}}\xspace$ (encoded as an array of size $|C|$) that
\begin{equation}
\forall x,y\in C\colon \qquad
\ensuremath{\mathsf{sig}}(x,p) = \ensuremath{\mathsf{sig}}(y,p)
\quad\Leftrightarrow\quad
F[p](c(x)) = F[p](c(y))
\label{eqSig}
\end{equation}
\item $\ensuremath{\mathsf{pr}}ed\colon C\to \ensuremath{\mathcal{P}}f C$ is a function such that $\ensuremath{\mathsf{pr}}ed(x)$ contains the predecessors of $x$.
\end{enumerate}
\end{definition}
Passing such a general interface makes the algorithm usable as a library,
because the coalgebra can be represented in an arbitrary fashion in memory, as
long as the above functions can be implemented.
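As a sketch of what such an interface could look like in code, the data $(n,\ensuremath{\mathsf{sig}},\ensuremath{\mathsf{pr}}ed)$ might be exposed as a Rust trait along the following lines; the trait and method names are ours and are not \textit{Boa}{}'s actual API.
\begin{verbatim}
// Illustrative Rust rendering of the implementation interface (n, sig, pred).
// The algorithm only ever talks to the coalgebra through these methods.
trait CoalgebraImpl {
    /// The number of states n, so that C = {0, ..., n-1}.
    fn num_states(&self) -> usize;

    /// The signature of state x with respect to the partition p: C -> N,
    /// given as an array of length n.  Two states must receive equal byte
    /// strings iff F[p](c(x)) = F[p](c(y)).
    fn sig(&self, x: usize, p: &[usize]) -> Vec<u8>;

    /// The predecessors of state x.
    fn pred(&self, x: usize) -> &[usize];
}
\end{verbatim}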
The equivalence involving $\ensuremath{\mathsf{sig}}$ \eqref{eqSig} specifies that the binary data
of type $2^*$ returned by $\ensuremath{\mathsf{sig}}$ is some normalized representation of $F[p](c(x))\in F\ensuremath{\mathbb{N}}\xspace$.
For example, in the implementation for $F=\ensuremath{\mathcal{P}}f$, an element of $F\ensuremath{\mathbb{N}}\xspace=\ensuremath{\mathcal{P}}f \ensuremath{\mathbb{N}}\xspace$
is a set of natural numbers. Since e.g.~$\set{2,0}$
and $\set{0,2,2} \in \ensuremath{\mathcal{P}}f \ensuremath{\mathbb{N}}\xspace$ are the same set, the $\ensuremath{\mathsf{sig}}$ function
essentially needs to sort the arising sets and remove duplicates:
\begin{example}
We can represent $\ensuremath{\mathcal{P}}f$-coalgebras $c\colon C\to \ensuremath{\mathcal{P}}f C$
by keeping for every state $x\in C$ an array of its successors $c(x) \subseteq C$ in memory.
As a pre-processing step, we directly compute the predecessors for each state
$x\in C$ and keep them as an array $\ensuremath{\mathsf{pr}}ed(x)\subseteq C$ for every state $x$ in memory as well
(computing the predecessors of all states can be done in linear time, and thus does not affect the complexity of the algorithm).
With $n := |C|$, the remaining function $\ensuremath{\mathsf{sig}}$ is implemented as follows:
\begin{enumerate}
\item Given $p\colon C\to \ensuremath{\mathbb{N}}\xspace$ and $x\in C$, create a new array $t$ of
integers of size $|c(x)|$. For each successor $y\in c(x)$, add $p(y)\in \ensuremath{\mathbb{N}}\xspace$
to $t$; this runs linearly in the length of $t$ because we assume that the
map $p$ is represented as an array with $\ensuremath{\mathcal{O}}\xspace(1)$ access.
\item Sort $t$ via radix sort and then remove all duplicates, with both
steps taking linear time.
\item Return the binary data blob of the integer array $t$.
\end{enumerate}
For $\ensuremath{\mathcal{P}}f$, the computation of the signature of a state $x\in C$ thus takes
$\ensuremath{\mathcal{O}}\xspace(|c(x)|)$ time.
\end{example}
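In code, the recipe above for $F=\ensuremath{\mathcal{P}}f$ could look as follows. The sketch is illustrative and not \textit{Boa}{}'s implementation; in particular it uses a comparison sort where the text uses radix sort, which costs an extra logarithmic factor but does not change the idea.
\begin{verbatim}
// sig for the finite powerset functor Pf: the canonical form of the set of
// block numbers of the successors of x, serialized as a byte string.
// successors_of_x is the stored array c(x); p is the current partition.
fn sig_powerset(successors_of_x: &[usize], p: &[usize]) -> Vec<u8> {
    let mut t: Vec<usize> = successors_of_x.iter().map(|&y| p[y]).collect();
    t.sort_unstable(); // comparison sort instead of radix sort
    t.dedup();         // a set has no duplicates
    // Serialize the canonical integer array as binary data.
    let mut bytes = Vec::with_capacity(8 * t.len());
    for b in t {
        bytes.extend_from_slice(&(b as u64).to_le_bytes());
    }
    bytes
}
\end{verbatim}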
We discuss further instances in \Cref{sec:Instances} later.
\subsection*{Renumber}
By encoding everything as binary data in a normalized way, we are able to make
heavy use of radix sort, and thus achieve linear bounds on sorting tasks.
This trick is also used in the complexity analysis of Kanellakis and Smolka,
who refer to it as the lexicographic sorting method of Aho, Hopcroft, and Ullman~\cite{AHU74}.
We use this trick in order to turn arrays of binary data $p\colon B\to 2^*$ into
their corresponding partitions $p'\colon B \to \{0,\ldots,|\ensuremath{\mathcal{I}}\xspacem(p)|-1\}$ satisfying
$p(x) = p(y)
\Longleftrightarrow\
p'(x) = p'(y)
\text{ for all }x,y\in B$.
The pseudocode is listed in \Cref{algoRenumber}: first, a permutation
$r\colon B\to B$ is computed such that $p\circ r\colon B\to
2^*$ is sorted. This radix sort runs in $\ensuremath{\mathcal{O}}\xspace(\sizeof{p})$, where $\sizeof{p} = \sum_{x\in B} |p(x)|$ is the total size of the entire array $p$.
Since identical entries in $p$ are now adjacent, a simple for-loop iterates over $r$
and readily assigns block numbers.
\begin{algorithm}[t]
\caption{Renumbering an array using radix sort}
\begin{algorithmic}
\upshape
\Procedure{Renumber}{$p\colon B\to 2^*$}
\State Create a new array $r$ of size $|B|$ containing numbers $\range{0}{|B|}$
\State Sort $r$ by the key $p\colon B\to 2^*$ using radix-sort
\State Create a new array $p'\colon B\to \ensuremath{\mathbb{N}}\xspace$
\State $j\ensuremath{\xspace\mathop{:=}} 0$
\For{$i \in \range{0}{|B|}$}
\If{$i > 0$ and $p[r[i-1]] \neq p[r[i]]$} $j\ensuremath{\xspace\mathop{:=}} j + 1$ \EndIf
\State $p'[r[i]] \ensuremath{\xspace\mathop{:=}} j$
\EndFor
\State \Return $p'$
\EndProcedure
\end{algorithmic}
\label{algoRenumber}
\end{algorithm}
\begin{lemma}
\label{renumberCorrect}
\Cref{algoRenumber} runs in time $\ensuremath{\mathcal{O}}\xspace(\sizeof{p})$ for the parameter $p\colon
B\to 2^*$ and returns a map $p'\colon B\ensuremath{\twoheadrightarrow} b$ for some $b\in \ensuremath{\mathbb{N}}\xspace$
such that for all $x,y\in B$ we have
\(
p(x) = p(y)
\Leftrightarrow
p'(x) = p'(y)
\).
\end{lemma}
\begin{proofappendix}{renumberCorrect}
\textbf{Run Time Complexity.}
Note that the radix sort needs to handle bit-strings of non-uniform length.
This can be easily achieved in linear time (especially because
the sorting is not required to be stable).
\textbf{Correctness.} After the sorting operation, the blocks' identical
elements are adjacent in the permutation $r$. Thus, the final for-loop can
create a new block whenever it sees an element different from the previous element.
\end{proofappendix}
In the actual implementation, we use hash maps to implement $\ensuremath{\textsc{Renumber}}$.
This is faster in practice but due to the resolving of
hash-collisions, the theoretical worst-case complexity of the implementation has an additional log
factor.
The renumbering can be understood as the compression of a map $p\colon B\to 2^*$
to an integer array $p'\colon B\to \ensuremath{\mathbb{N}}\xspace$. In the algorithm, the array elements of
type $2^*$ are encoded signatures of states.
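A hash-map based variant of $\ensuremath{\textsc{Renumber}}$, as used in our implementation, can be sketched in Rust as follows; block numbers are assigned in order of first occurrence of a signature.
\begin{verbatim}
use std::collections::HashMap;

// Hash-map sketch of Renumber: compress an array of encoded signatures
// p: B -> 2^* into an array of small integers p' with
// p'[x] = p'[y]  iff  p[x] = p[y].
fn renumber(p: &[Vec<u8>]) -> Vec<usize> {
    let mut ids: HashMap<&[u8], usize> = HashMap::new();
    p.iter()
        .map(|sig| {
            let next = ids.len();
            *ids.entry(sig.as_slice()).or_insert(next)
        })
        .collect()
}
\end{verbatim}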
\subsection{The Naive Method Coalgebraically}
\label{sec:algFinalChain}
To illustrate the use of the encoding and notions defined above, let us restate the naive method (\Cref{algSketchNaive}, \cite{KonigKupper14,KanellakisSmolka83}) in \Cref{algFinalChain}.
Recall that the basic idea is that it computes a sequence of partitions
$p_i\colon C\to P_i$ ($i\in \ensuremath{\mathbb{N}}\xspace$) for a given input coalgebra $c\colon C\to FC$.
Initially this partition identifies all states $p_0\colon C\to 1$.
In the first iteration, the map $p'\colon (C\xrightarrow{c} FC\xrightarrow{F[p]}
F\ensuremath{\mathbb{N}}\xspace)$ sends each state to its \emph{output behaviour} (this
distinguishes final from non-final states in DFAs and deadlock from live states
in transition systems).
Then this partition is refined successively under consideration of the transition structure:
$x,y$ are identified by $p_{i+1}\colon C\to P_{i+1}$ iff they are identified by
the composed map
\[
C\xrightarrow{c} FC
\xrightarrow{F[p_i]} FP_i.
\]
The algorithm terminates as soon as $p_i = p_{i+1}$, which then identifies
precisely the behaviourally equivalent states in the input coalgebra $(C,c)$.
\begin{algorithm}
\caption{The naive algorithm, also called \emph{final chain partitioning}}
\label{algFinalChain}
\begin{algorithmic}
\upshape
\Procedure{NaiveAlgorithm'}{$c\colon C\to FC$}
\State Create a new array $p\colon C\to \ensuremath{\mathbb{N}}\xspace := (x \mapsto 0)$ \Comment{i.e.~$p[x] = 0$ for all $x\in C$}
\While{$|\ensuremath{\mathcal{I}}\xspacem(p)|$ changes}
\State compute $p'\colon C\to 2^* := x \mapsto \ensuremath{\mathsf{sig}}(x,p)$
\Comment{$p'[x] \in 2^*$ is the encoding of $F[p](c(x))\in F\ensuremath{\mathbb{N}}\xspace$}
\State $p\colon C \to \ensuremath{\mathbb{N}}\xspace \ensuremath{\xspace\mathop{:=}} \ensuremath{\textsc{Renumber}}(p')$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
Recently, Birkmann \text{et al.}\xspace~\cite{BDM22} have adapted this algorithm to a
distributed setting, with a run time in $\ensuremath{\mathcal{O}}\xspace(m\cdot n)$.
\subsection{The Refinable Partition Data Structure}
\label{sec:DataStructures}
For the naive method it sufficed to represent the quotient on the state space
$p\colon C\to \ensuremath{\mathbb{N}}\xspace$ by a simple array. For more efficient algorithms like our \Cref{algSketchOpt}, it is crucial to quickly perform certain operations on the partition, for which
we have built upon a refinable partition data structure~\cite{Valmari09,ValmariLehtinen08}.
The data structure keeps track of the partition of the states into blocks.
A key requirement for our algorithm is the ability to split a block into $k$ sub-blocks, where $k$ is arbitrary.
The refinable partition also tracks for each state whether it is \emph{clean} or \emph{dirty},
and a \emph{worklist} of blocks with at least one dirty state.
Let us define the exposed functionality of the refinable partition data structure:
\begin{enumerate}
\item Given (the natural number identifying) a block $B$, return its dirty states $B_{\dirty}$ in $\ensuremath{\mathcal{O}}\xspace(|B_{\dirty}|)$.
\item Given a block $B$, return one arbitrary clean state in $\ensuremath{\mathcal{O}}\xspace(1)$ if there is any. We denote this by the set $B_{\clean_1}$ of cardinality at most 1. $B_{\clean_1}$ contains a clean state of $B$
or is empty if all states of $B$ are dirty.
\item Return an arbitrary block with a dirty state and remove it from the worklist, in $\ensuremath{\mathcal{O}}\xspace(1)$.
\item $\ensuremath{\textsc{MarkDirty}}(s)$: mark state $s$ dirty, and put its block on the
worklist, in $\ensuremath{\mathcal{O}}\xspace(1)$.
\item $\ensuremath{\textsc{Split}}(B,A)$: split a block $B$ into many sub-blocks according to an array
$A\colon B_{\dirty}\to \ensuremath{\mathbb{N}}\xspace$.
The array $A$ indicates that the $i$-th dirty state is placed in the
sub-block $A[i]$, meaning that two states $s_1,s_2$ stay together iff $A[s_1] = A[s_2]$.
The clean states are placed in the $0$-th sub-block, with those states satisfying $A[s] = 0$.
The block identifier of $B$ gets re-used as the identifier for largest sub-block,
and all states of $B$ are marked clean. $\ensuremath{\textsc{Split}}$ returns the list of all
newly allocated sub-blocks, i.e.~those except the re-used one.
For the time complexity of our algorithm, it is important that $\ensuremath{\textsc{Split}}(B,A)$
runs in time $\ensuremath{\mathcal{O}}\xspace(|B_{\dirty}|)$, regardless of the number of clean states.
\end{enumerate}
In order to implement these operations with the desired run time complexity, we maintain the following data structures:
\begin{itemize}
\item $\member{loc2state}$ is an array of size $|C|$ containing all states of $C$. Every
block is a section of this array, and the
other structures are used to quickly find and update the entries in the $\member{loc2state}$ array. A
visualization of an extract of this array is shown in \Cref{algMarkDirtySplit}; for example,
the lowermost row shows three blocks of size 5, 3, and 1, respectively.
\item The array $\member{state2loc}$ is inverse to $\member{loc2state}$; $\member{state2loc}[s]$ provides the
index (\textqt{location}) of state $s$ in $\member{loc2state}$.
\item $\member{blocks}$ is an array of tuples $(start,mid,end)$ and specifies the blocks of the partition.
A block identifier $B$ is simply an index in this array and $\member{blocks}[B]=(start,mid,end)$ means that
block $B$ starts at $\member{loc2state}[start]$ and ends before $\member{loc2state}[end]$, as indicated in the visualization in \Cref{algMarkDirtySplit}. The range $\range{start}{mid}$ contains the clean states
of $B$ and $\range{mid}{end}$ the dirty states. In particular, $mid=end$ iff the block has no dirty states.
\item The array $\member{block\_of}$ of size $|C|$ that maps every state $s\in C$ to
the ID $B = \member{block\_of}[s]$ of its surrounding block.
\item $\member{worklist}$ is a list of block identifiers and mentions those blocks with at least one dirty state.
\end{itemize}
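To make the layout concrete, here is a minimal Rust sketch of these arrays together with the $\textsc{MarkDirty}$ operation shown in \Cref{algMarkDirtySplit}. All field and method names are our own and only illustrate the data structure; they are not taken from any particular implementation.
\begin{verbatim}
/// A minimal sketch of the refinable partition arrays described above.
struct Partition {
    loc2state: Vec<usize>,               // states of C, grouped by block
    state2loc: Vec<usize>,               // inverse of loc2state
    blocks: Vec<(usize, usize, usize)>,  // (start, mid, end) per block
    block_of: Vec<usize>,                // block id of each state
    worklist: Vec<usize>,                // blocks with at least one dirty state
}

impl Partition {
    /// Mark state `s` dirty and enqueue its block, in O(1).
    fn mark_dirty(&mut self, s: usize) {
        let b = self.block_of[s];
        let j = self.state2loc[s];
        let (_, mid, end) = self.blocks[b];
        if mid <= j {
            return; // already dirty
        }
        if mid == end {
            self.worklist.push(b); // first dirty state of this block
        }
        // Swap `s` with the last clean state and shrink the clean range.
        let last_clean = self.loc2state[mid - 1];
        self.loc2state[j] = last_clean;
        self.loc2state[mid - 1] = s;
        self.state2loc[last_clean] = j;
        self.state2loc[s] = mid - 1;
        self.blocks[b].1 -= 1;
    }
}
\end{verbatim}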
\begin{algorithm}[h]
\caption{Refinable partition data structure with $n$-way split}
\label{algMarkDirtySplit}
\begin{minipage}[t]{0.48\textwidth}
\begin{algorithmic}
\upshape
\Procedure{MarkDirty}{$s$}
\LComment{Determine the block data}
\State $B := \member{block\_of}[s]$
\State $j := \member{state2loc}[s]$
\State $(start,mid,end) := \member{blocks}[B]$
\LComment{Do nothing if already dirty}
\If{$mid \leq j$} \Return \EndIf
\LComment{Add to worklist if first dirty state}
\If{$mid = end$} $\member{worklist}.add(B)$ \EndIf
\LComment{Swap $s$ with the last clean state}
\State $s' := \member{loc2state}[mid - 1]$
\State $\member{state2loc}[s'] := j$
\State $\member{state2loc}[s] := mid - 1$
\State $\member{loc2state}[j] := s'$
\State $\member{loc2state}[mid - 1] := s$
\LComment{Move marker to make $s$ dirty}
\State $\member{blocks}[B].mid \mathrel{-}= 1$
\EndProcedure
\end{algorithmic}
\centering\hspace*{5mm}
\def\partitionoverhang{4mm}
\newcommand{\drawPartitionBlock}[3]{
\foreach \statename [count=\x,remember=\statename as \laststate] in {#2} {
\ifthenelse{1=\x}{
\node[arrayelem] (#1\statename) {$s_\statename$};
\global\edef\myfirstname{#1\statename}
}{
\node[anchor=west,arrayelem,alias=#1\statename] (#1\statename) at (#1\laststate.east) {$s_\statename$};
\draw[inner block line] (#1\statename.north west)
-- (#1\statename.south west);
\global\edef\mylastname{#1\statename}
}
}
\draw[outer block line] (\myfirstname.north west) -- (\myfirstname.south west);
\draw[outer block line] (\mylastname.north east) -- (\mylastname.south east);
\foreach \name [count=\curcount] in {#3} {
\draw[block separator line] (\name.north east) -- (\name.south east);
}
\node[overlay,anchor=east] at ([xshift=-0pt]\myfirstname.west) {$\cdots$};
\node[overlay,anchor=west] at ([xshift= 0pt]\mylastname.east) {$\cdots$};
\draw[outer block line] ([xshift=-\partitionoverhang]\myfirstname.north west)
-- ([xshift=\partitionoverhang]\mylastname.north east);
\draw[outer block line] ([xshift=-\partitionoverhang]\myfirstname.south west)
-- ([xshift=\partitionoverhang]\mylastname.south east);
}
\begin{tikzpicture}[
x=14pt,
arrayelem/.style={
text depth=0pt,
inner ysep=5pt,
},
start mid end/.style={
label node/.style={},
label arrow/.style={
->,shorten >= 2pt,
},
},
state range brace/.style={
decorate,
decoration={brace,raise=2pt,amplitude=7pt},
line cap=round,
draw=docStyleGray,
line width=1pt,
every node/.append style={
yshift=8pt,
anchor=south,
},
},
inner block line/.style={
draw=docStyleGray,
dashed,
},
outer block line/.style={
draw=docStyleBlue,
line width=1.5pt,
},
block separator line/.style={
outer block line,
dashed,
},
]
\begin{scope}[yshift=0cm]
\drawPartitionBlock{s}{1,...,9}{s4}
\draw[state range brace] (s1.north west) --
node {clean} ([xshift=-1pt]s4.north east);
\draw[state range brace] ([xshift=1pt]s5.north west)
-- node {dirty states $B_{\dirty}$} (s9.north east);
\draw[state range brace] ([yshift=7mm]s1.north west)
-- node {block $B$} ([yshift=7mm]s9.north east);
\begin{scope}[start mid end]
\foreach \labeltext/\anchor/\name in {start/west/s1,end/east/s9,mid/west/s5} {
\node[label node] (labeltext) at ([yshift=-6mm]\name.south \anchor) {\labeltext};
\path[label arrow] (labeltext.north) edge (\name.south \anchor);
}
\end{scope}
\end{scope}
\begin{scope}[yshift=-19mm,
start mid end/.style={
label node/.style={
opacity=0,
overlay,
},
label arrow/.style={
opacity=0,
overlay,
},
},
]
\drawPartitionBlock{t}{1,2,4,3,5,6,7,8,9}{t4}
\begin{scope}[yshift=-17mm,
block separator line/.style={outer block line},
]
\drawPartitionBlock{u}{1,2,4,7,8,3,6,9,5}{u8,u9}
\end{scope}
\node[overlay] at ([xshift=0mm,yshift=3mm]t1.north west)
{\textsc{MarkDirty}($s_3$)};
\node[overlay] at ([xshift=5mm,yshift=6mm]u1.north west)
{\textsc{Split}($B$,$[1,2,1,0,0,1]$)};
\end{scope}
\begin{scope}
\foreach \source/\target in {s3/t3,s4/t4,
t3/u3,
t5/u5,
t6/u6,
t7/u7,
t8/u8,
t9/u9
} {
\draw[->,shorten <= 0mm, shorten >= 0mm,
line width=1pt,
line cap=round,
draw=docStyleGray,]
([yshift=-1mm]\source.south) -- ([yshift=1mm]\target.north);
}
\end{scope}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.52\textwidth}
\begin{algorithmic}
\upshape
\Procedure{Split}{$B$, $A\colon B_{\dirty} \to \ensuremath{\mathbb{N}}\xspace$}
\LComment{Cumulative counts of sub-block sizes}
\State $(start,mid,end) := \member{blocks}[B]$
\State $D[\range{0}{\max_i A[i]+1}] := 0$
\State $D[0] := mid - start$
\For{$j \in B_{\dirty}$}
\State $D[A[j]] \mathrel{+}= 1$
\EndFor
\State $\ensuremath{i_{max}} := \operatorname{arg\,max}_i D[i]$
\For{$i \in \range{1}{|D|}$}
\State $D[i] \mathrel{+}= D[i-1]$
\EndFor
\LComment{Re-order the states by $A$-value}
\State $\mathsf{dirtyStates} := \operation{copy}(\member{loc2state}[\range{mid}{end}])$
\For{$i \in \operation{reverse}(\range{0}{|A|})$}
\State $D[A[i]] \mathrel{-}= 1$
\State $j := start + D[A[i]]$
\State $\member{loc2state}[j] := \mathsf{dirtyStates}[i]$
\State $\member{state2loc}[\mathsf{dirtyStates}[i]] := j$
\EndFor
\State $D[0] \mathrel{-}= mid - start$
\LComment{Create blocks and assign IDs}
\State $D.add(end - start)$
\State $old\_block\_count := |\member{blocks}|$
\For{$i \in \range{0}{|D|-1}$}
\State $j_0 := start + D[i]$
\State $j_1 := start + D[i+1]$
\If{$i = \ensuremath{i_{max}}$}
\State $\member{blocks}[B] := (j_0,j_1,j_1)$
\Else
\State $\member{blocks}.add(j_0,j_1,j_1)$
\State $idx := |\member{blocks}|-1$
\State $\member{block\_of}[\member{loc2state}[\range{j_0}{j_1}]] := idx$
\EndIf
\EndFor
\State \Return $\range{old\_block\_count}{|\member{blocks}|}$
\EndProcedure
\end{algorithmic}
\end{minipage}
\end{algorithm}
With this data, we can implement the above-mentioned interface:
\begin{enumerate}
\item For a block $B$, its dirty states $B_{\dirty}$ are the states $\member{loc2state}[\range{mid}{end}]$ where $\member{blocks}[B] = (start,mid,end)$.
\item One arbitrary clean state $B_{\clean_1}$ of a given block $B$ is determined in a similar fashion: for
$\member{blocks}[B] = (start,mid,end)$, if $start=mid$, then there is no clean state and
$B_{\clean_1} = \set{}$; otherwise we choose $B_{\clean_1} = \set{\member{loc2state}[start]}$.
\item Returning an arbitrary block containing a dirty state is just a matter of extracting one element from $\member{worklist}$.
\item The pseudocode of $\textsc{MarkDirty}$ is listed in \Cref{algMarkDirtySplit}:
when marking a state $s\in C$ dirty, we first find the boundaries
$(start,mid,end) = \member{blocks}[B]$ of the surrounding block $B = \member{block\_of}[s]$. By the index $\member{state2loc}[s]$, we can check in $\ensuremath{\mathcal{O}}\xspace(1)$ whether $s$ is in the first (\textqt{clean}) or second (\textqt{dirty}) part of the block. Only if $s$ was not already dirty do we need to do anything: if $B$ did not contain any dirty state yet ($mid=end$), $B$ now needs to be added to the $\member{worklist}$. Then, we change the location of $s$ in the main array such that it becomes the last clean state, and finally we make it dirty by decrementing the index $mid$.
In the example in \Cref{algMarkDirtySplit}, the content of $\member{loc2state}$ is visualized. The bold dashed line marks the $mid$ position, so states to the left of it are clean and states to the right are dirty. The call to $\textsc{MarkDirty}(s_3)$ transforms the first row into the second row:
it does so by moving
$s_3$ from the clean states of $B$ to the dirty ones, while $s_4$ stays
clean.
\item The pseudocode of $\ensuremath{\textsc{Split}}$ is listed in \Cref{algMarkDirtySplit}: for a
block $B$, the caller provides us with an array $A\colon B_{\dirty}\to \ensuremath{\mathbb{N}}\xspace$ that
specifies which of the states stay together and which are moved to separate
blocks. In the visualized example, $A=[1,2,1,0,0,1]$ represents the map
\[
s_3\mapsto 1,\quad
s_5\mapsto 2,\quad
s_6\mapsto 1,\quad
s_7\mapsto 0,\quad
s_8\mapsto 0,\quad
s_9\mapsto 1
\]
So $\ensuremath{\textsc{Split}}(B,A)$ needs to create new blocks $\set{s_3,s_6,s_9}$ and $\set{s_5}$, while
$s_7,s_8$ stay with the clean states. In any case, the clean states stay in the same block, so we can understand $A$ as an efficient representation of the map
\[
\bar A\colon B\to \ensuremath{\mathbb{N}}\xspace \qquad
\bar A(s) = \begin{cases}
A(s) &\text{if }s\in B_{\dirty},\\
0 &\text{otherwise.}
\end{cases}
\]
Then, two states $s,s'\in B$ stay in the same block iff $\bar A[s] = \bar A[s']$.
In the implementation, we first create an auxiliary array $D$ which has
different meanings. Before the definition of $\ensuremath{i_{max}}$, it counts the sizes of
the resulting blocks:
\[
D[i] = |\set{j\in B \mid \bar A[j] = i}|.
\]
We compute $D$ by initializing $D[0]$ with the number of clean states
($mid-start$) and iterating over $A$. The index of the largest subblock is
remembered in $\ensuremath{i_{max}}$, and then we change the meaning of $D$ such that it
now holds inclusive prefix sums: $D[i]$ becomes $D[0] + \cdots + D[i]$, computed over the previous entries.
For every subblock $i$, this sum $D[i]$ denotes the end of the subblock,
relative to the start of the old block $B$.
We use the sums to re-order the states such that states belonging to the same
subblock come next to each other. The for-loop moves the $i$-th dirty state to the
end of its new subblock $A[i]$ and decrements $D[A[i]]$, so that the next state
belonging to subblock $A[i]$ is inserted just before it.
Finally, we do not need to move the clean states to sub-block $0$, so we
simply decrement $D[0]$ by the number of clean states. Since we have inserted
all the elements at the end of their future subblocks and have decremented
the entry of $D$ during each insertion, the entries of $D$ now point to the
\emph{first element} of each future subblock.
Having the states in the right position within $B$, we can now create the
subblocks with the right boundaries. For convenience, we add the (relative)
end of $B$ to $D$, because then every subblock $i$ ranges from $D[i]$ to
$D[i+1]$. We had saved the index of the largest subblock $\ensuremath{i_{max}}$, which will
inherit the block identifier of $B$ and the entry $\member{blocks}[B]$. For all other
subblocks, we add a new block to $\member{blocks}$. All new blocks have no dirty
states, so $mid=end$ for the new entries. If we have added a new block, then
we need to update $\member{block\_of}[s]$ for every state $s$ in the subblock (see also the Rust sketch following this list).
\end{enumerate}
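The splitting procedure just described can be condensed into the following Rust sketch, which extends the illustrative \texttt{Partition} struct from \Cref{sec:DataStructures}; as before, this is a simplified illustration under the stated conventions, not the implementation itself.
\begin{verbatim}
impl Partition {
    /// n-way split of block `b`: `a[i]` gives the target sub-block of the
    /// i-th dirty state; clean states implicitly belong to sub-block 0.
    /// Runs in O(|dirty|) and returns the ids of the newly allocated blocks.
    fn split(&mut self, b: usize, a: &[usize]) -> Vec<usize> {
        let (start, mid, end) = self.blocks[b];
        debug_assert_eq!(a.len(), end - mid);
        // Histogram of sub-block sizes (counting sort); sub-block 0 = clean.
        let width = a.iter().copied().max().unwrap_or(0) + 1;
        let mut d = vec![0usize; width];
        d[0] = mid - start;
        for &v in a { d[v] += 1; }
        let i_max = (0..width).max_by_key(|&i| d[i]).unwrap();
        // Prefix sums: d[i] becomes the end offset of sub-block i within B.
        for i in 1..width { d[i] += d[i - 1]; }
        // Re-order only the dirty states; clean states keep their positions.
        let dirty: Vec<usize> = self.loc2state[mid..end].to_vec();
        for i in (0..a.len()).rev() {
            d[a[i]] -= 1;
            let j = start + d[a[i]];
            self.loc2state[j] = dirty[i];
            self.state2loc[dirty[i]] = j;
        }
        // Each d[i] now points at the first dirty element of sub-block i;
        // subtracting the clean count turns d[0] into the start offset of
        // sub-block 0, and we append the overall end offset.
        d[0] -= mid - start;
        d.push(end - start);
        // Allocate block records; the largest sub-block re-uses the id `b`.
        let old_count = self.blocks.len();
        for i in 0..width {
            let (j0, j1) = (start + d[i], start + d[i + 1]);
            if i == i_max {
                self.blocks[b] = (j0, j1, j1); // all of its states become clean
            } else {
                self.blocks.push((j0, j1, j1));
                let id = self.blocks.len() - 1;
                for j in j0..j1 { self.block_of[self.loc2state[j]] = id; }
            }
        }
        (old_count..self.blocks.len()).collect()
    }
}
\end{verbatim}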
\subsection{Optimized Algorithm}
\label{sec:OurAlgorithm}
With the refinable partition data structure at hand, we can improve on the naive algorithm without restricting the choice of $F$.
Our efficient algorithm is given in \Cref{algPartRefSetFun}.
We start by creating a refinable partition data structure with a single block for all the states.
We then iterate while there is still a block with dirty states, i.e.~with states
whose signatures should be recomputed.
We split the block into sub-blocks in a refinement step that is similar to the naive algorithm, and re-use the old block for the largest sub-block.
To achieve our complexity bound, this splitting must happen in time $|B_{\dirty}|$,
regardless of the number of clean states.
Fortunately, this is possible because the clean states all have the same
signature, because all their successors remained unchanged. Hence, it suffices to
compute the signature for one arbitrary clean state, denoted by $B_{\clean_1}$.
Depending on the functor, it might happen that there are dirty states $d\in
B_{\dirty}$ that have the same signature as the clean states. Having marked a state as
\textqt{dirty} just means that the signature might have changed compared to the
previous run, so it might be that the signature of a dirty state turns out to
be identical to the clean states in the block $B$.
The wrapper $\ensuremath{\textsc{Renumber}}'$ then first compresses $p\colon B_{\dirty}\to 2^*$ to
$A\colon B_{\dirty}\to \ensuremath{\mathbb{N}}\xspace$. Then, $\ensuremath{\textsc{Renumber}}'$ ensures that those dirty states $d\in B_{\dirty}$ with
the same signature as the clean states satisfy $A(d) = 0$. This is used in
$\ensuremath{\textsc{Split}}$: in the splitting operation, two dirty states $d,d' \in B_{\dirty}$ stay in the same block iff $A(d) = A(d')$, and the clean states end up in the same block as the dirty states $d$ with $A(d) = 0$.
After the block $B$ is split, we need to mark all states
$x\in B$ as dirty whose signature might have possibly changed due to the
updated partition. If the successor $y$ of $x\in B$ was moved to a new block,
i.e.~if $p(y)$ changed, this might affect the signature of $x$. Conversely,
if no successor of $x$ changed block, then the signature of $x$ remains unchanged:
\begin{lemma}
\label{sameSigForCleans}
If for a finite coalgebra $c\colon C\to FC$, two partitions
$p_1,p_2\colon C\to \ensuremath{\mathbb{N}}\xspace$
satisfy $p_1(y) = p_2(y)$ for all successors $y$ of $x\in C$,
then $F[p_1](c(x)) = F[p_2](c(x))$.
\end{lemma}
\begin{proofappendix}{sameSigForCleans}
Let $S\subseteq C$ be the subset of all successors of $x$.
Since $C$ is finite, we have the finite intersection
\[
S = \bigcap \set[\big]{C\setminus\{y\}\mid y\text{ not a successor of }x}.
\]
In general, every set functor preserves finite non-empty intersections~\cite{trnkova69}. We distinguish cases:
\begin{enumerate}
\item Case $S \neq \emptyset$: If $S$ is non-empty, then the above intersection is preserved by $F$, so
\[
FS = \bigcap \set[\big]{F(C\setminus\{y\})\mid y\text{ not a successor of }x}.
\]
We have that $c(x)$ is in the image of $F(C\setminus\{y\})\ensuremath{\rightarrowtail} FC$ for every
non-successor $y$ of $x$. Since $F$ preserves the above intersection, we have
that $c(x)$ is also in the image of $Fs\colon FS\ensuremath{\rightarrowtail} FC$ (where $s\colon S\ensuremath{\rightarrowtail} C$ is just
the inclusion map).
Since $p_1(z) = p_2(z)$ for all $z\in S$ by assumption, the domain restrictions of $p_1,p_2\colon C\to \ensuremath{\mathbb{N}}\xspace$ to $S\subseteq C$ are identical:
$p_1|_S = p_2|_S\colon S\to \ensuremath{\mathbb{N}}\xspace$. Then, we can conclude $F[p_1](c(x)) = F[p_2](c(x))$ by the diagram:
\[
\begin{tikzcd}
1
\arrow{r}{c(x)}
\arrow{dr}[swap]{\exists c'}
& FC \arrow[shift left=1]{r}{F[p_1]}
\arrow[shift right=1]{r}[swap]{F[p_2]}
&[13mm] F\ensuremath{\mathbb{N}}\xspace
\\
& FS
\arrow[>->]{u}[swap]{Fs}
\arrow[bend right=10]{ur}[sloped,below]{F(p_1|_S) = F(p_2|_S)}
\end{tikzcd}
\]
\item Case $S=\emptyset$ and $|C| = 1$. This implies that $C = \set{x}$ and, since $S=\emptyset$, that $x$ is not a successor of itself. Hence, there is some map $c'\colon 1\to F(C\setminus\set{x})$ making
\[
\begin{tikzcd}[ampersand replacement=\&]
1
\arrow{r}{c(x)}
\arrow{dr}[swap]{\exists c'}
\& FC
\\
\& F(C\setminus\set{x}) \rlap{\ensuremath{~= F\emptyset = FS}}
\arrow[>->]{u}[swap]{Fs}
\end{tikzcd}
\]
commute. The rest of the reasoning is identical to the first case.
\item Case $S=\emptyset$ and $|C|\neq 1$. Since $x\in C$ and $|C|\neq 1$, we have $|C|\ge 2$, and so $C\setminus\set{y}$ is non-empty for all $y\in C$.
We now switch from $F\colon \ensuremath{\mathsf{Set}}\xspace\to\ensuremath{\mathsf{Set}}\xspace$ to its Trnkov\'a-hull $\bar F\colon \ensuremath{\mathsf{Set}}\xspace\to\ensuremath{\mathsf{Set}}\xspace$~\cite{trnkova71}. The functor $\bar F$ coincides with $F$ on all non-empty sets
\[
X\neq \emptyset \quad\Rightarrow\quad
\bar FX = FX
\]
and on maps with non-empty domain, and has the property that it preserves all finite intersections (also the possibly empty ones).
In particular, we have $\bar F\ensuremath{\mathbb{N}}\xspace = F\ensuremath{\mathbb{N}}\xspace$, $\bar FC = FC$,
and $\bar F(C\setminus\set{y}) = F(C\setminus\set{y})$
for all $y\in C$.
So whenever $y$ is not a successor of $x$ in the original coalgebra $c\colon C\to FC$, we have that $c(x)\colon 1\to FC=\bar FC$ factors through the canonical injection
\[
\bar F(C\setminus\set{y}) \ensuremath{\rightarrowtail} \bar FC.
\]
Since $\bar F$ preserves the (empty) intersection, we have
\[
\bar F S = \bigcap \set{\bar F(C\setminus\set{y})\mid \text{$y$ is not a successor of $x$}}
\]
and there is some $c'$ with
\[
\begin{tikzcd}[ampersand replacement=\&]
1
\arrow{r}{c(x)}
\arrow{dr}[swap]{\exists c'}
\& \bar FC\rlap{\ensuremath{~=FC}}
\\
\& \bar FS.
\arrow[>->]{u}
\end{tikzcd}
\]
The rest of the reasoning is like in the first case, only with $F$ replaced with $\bar F$.
\global\def\proofappendix@qedsymbolmissing{}\qed
\end{enumerate}
\end{proofappendix}
\begin{algorithm}[t]
\caption{Optimized Partition Refinement for all Set functors}
\label{algPartRefSetFun}
\begin{algorithmic}
\upshape
\Procedure{PartRefSetFun}{$C$, $\ensuremath{\mathsf{sig}}$, $\ensuremath{\mathsf{pred}}$}
\Comment{i.e.~for the implementation of $c\colon C\to FC$}
\State Create a new refinable partition structure $p\colon C\to \ensuremath{\mathbb{N}}\xspace$
\State Init $p$ to have one block of all states, and all states marked dirty.
\AlgoSeparator
\While{there is a block $B$ with a dirty state}
\BeginBox[fixed width,algo block brace alt={Compute signatures, \\in total $\ensuremath{\mathcal{O}}\xspace(m \log n)$ calls\\ to the coalgebra}]
\State Compute the arrays
\State \quad $(sigs_{\ensuremath{\mathsf{di}}}\colon B_{\dirty} \to 2^*) := (x\mapsto \ensuremath{\mathsf{sig}}(x, p))$
\State \quad $(sigs_{\ensuremath{\mathsf{cl}}}\colon B_{\clean_1} \to 2^*) := (x\mapsto \ensuremath{\mathsf{sig}}(x, p))$
\EndBox
\AlgoSeparator
\BeginBox[fixed width,algo block brace alt={Split $B$ according to \\ signatures in $\ensuremath{\mathcal{O}}\xspace(|B_{\dirty}|)$}]
\State $A \colon B_{\dirty}\to \ensuremath{\mathbb{N}}\xspace := \ensuremath{\textsc{Renumber}}'(sigs_{\ensuremath{\mathsf{di}}}, sigs_{\ensuremath{\mathsf{cl}}})$
\State $\vec{B}_{new} := \ensuremath{\textsc{Split}}(B,A)$
\EndBox
\AlgoSeparator
\BeginBox[fixed width,algo block brace alt={Mark dirty all states with \\ a successor in a new block \\ in total time $\ensuremath{\mathcal{O}}\xspace(m \log n)$}]
\For{every $B' \in \vec{B}_{new}$ and $s \in B'$}
\For{every $s'\in \ensuremath{\mathsf{pred}}(s)$}
\State $\textsc{MarkDirty}(s')$
\EndFor
\EndFor
\EndBox
\EndWhile
\AlgoSeparator
\State \Return the partition $p$
\EndProcedure
\end{algorithmic}
\begin{algorithmic}
\upshape
\Procedure{Renumber'}{$p\colon B_{\dirty} \to 2^{*}, q : B_{\clean_1} \to 2^{*}$}
\State $(A\colon B_{\dirty}\to \ensuremath{\mathbb{N}}\xspace) := \ensuremath{\textsc{Renumber}}(p)$
\BeginBox[fixed width,algo block brace alt={Ensure that $A(d) = 0$ for all dirty \\ states $d$ that have the same \\ signature as the clean states.}]
\If{there are $d\in B_{\dirty}, c\in B_{\clean_1}$ with $p(d) = q(c)$}
\State Swap the values $0$ and $A(d)$ in the array $A$.
\EndIf
\EndBox
\State \Return $A$
\EndProcedure
\end{algorithmic}
\end{algorithm}
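As a complement to the pseudocode, the main loop of \Cref{algPartRefSetFun} can be sketched in Rust on top of the illustrative \texttt{Partition} sketch from \Cref{sec:DataStructures}. The closures \texttt{sig} and \texttt{pred} stand for the two functions of the coalgebra interface, and the partition is assumed to be initialized with a single block in which every state is dirty; the sketch is for illustration only.
\begin{verbatim}
use std::collections::HashMap;

// Assumptions of this sketch: `part` starts with a single block containing
// all states, all of them dirty; `sig(x, p)` encodes F[p](c(x)); `pred(x)`
// lists the predecessors of x.
fn refine(
    part: &mut Partition,
    sig: impl Fn(usize, &[usize]) -> Vec<u8>,
    pred: impl Fn(usize) -> Vec<usize>,
) {
    while let Some(b) = part.worklist.pop() {
        let (start, mid, end) = part.blocks[b];
        // Renumber': signatures are compressed to small numbers; dirty states
        // sharing the signature of the clean representative get number 0.
        let mut table: HashMap<Vec<u8>, usize> = HashMap::new();
        if start < mid {
            let clean_rep = part.loc2state[start];
            table.insert(sig(clean_rep, &part.block_of), 0);
        }
        let a: Vec<usize> = (mid..end)
            .map(|loc| {
                let fresh = table.len();
                *table.entry(sig(part.loc2state[loc], &part.block_of)).or_insert(fresh)
            })
            .collect();
        // Split B and mark every predecessor of a re-blocked state as dirty.
        for nb in part.split(b, &a) {
            let (s0, _, s1) = part.blocks[nb];
            // Snapshot the members: mark_dirty may permute positions inside nb.
            let members: Vec<usize> = part.loc2state[s0..s1].to_vec();
            for s in members {
                for x in pred(s) {
                    part.mark_dirty(x);
                }
            }
        }
    }
}
\end{verbatim}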
We can now prove correctness of the partition refinement for coalgebras:
\begin{theorem}
\label{mainCorrectness}
For a given coalgebra $c\colon C\to FC$,
\Cref{algPartRefSetFun} computes behavioural equivalence.
\end{theorem}
\begin{proofappendix}{mainCorrectness}
\textbf{Correctness:} We show that the property
\[
\text{for all clean states }x,y\in C\text{ with }p(x) = p(y)\text{ we have }
F[p](c(x)) = F[p](c(y)).
\tag*{\ensuremath{(\star)}}
\label{algInv}
\]
holds throughout the execution of \Cref{algPartRefSetFun}:
\begin{itemize}
\item Initially, all states are marked dirty, so \ref{algInv} holds trivially.
\item For every loop iteration for a block $B\in C/p$, let
\begin{itemize}
\item $p\colon C\to \ensuremath{\mathbb{N}}\xspace$ be the partition at the beginning of the loop iteration;
\item $\phi \subseteq C$ be the clean states in $p$;
\item $p'\colon C\to \ensuremath{\mathbb{N}}\xspace$ be the new partition (i.e.~the new value of $p$ after the splitting operation) after the loop iteration;
\item $\phi' \subseteq C$ be the clean states after the loop iteration.
\end{itemize}
In other words, we assume
\[
\text{for all }x,y\in \phi\colon \quad p(x) = p(y) \text{ implies }F[p](c(x)) = F[p](c(y))
\tag*{\ref{algInv}}
\]
and need to show that \ref{algInv} holds after the loop iteration, i.e.
\begin{equation}
\text{for all }x,y\in \phi'\colon \quad p'(x) = p'(y) \text{ implies }F[p'](c(x)) = F[p'](c(y))
\tag*{(\ensuremath{\star'})}
\label{algInvPrime}
\end{equation}
By \ref{algInv}, the composed map
\(
B \hookrightarrow C \xrightarrow{c} FC \xrightarrow{F[p]} F\ensuremath{\mathbb{N}}\xspace
\)
sends all clean states $x,y\in B\cap \phi$ to the same value, so it suffices
to compute the signature $F[p](c(x))$ for one arbitrary clean state $x\in B_{\clean_1}$.
Since $\ensuremath{\mathsf{sig}}(x,p) = \ensuremath{\mathsf{sig}}(y,p)$ iff $F[p](c(x)) = F[p](c(y))$,
the resulting partition $p'$ is then constructed by $\ensuremath{\textsc{Split}}(B,A)$ such that
\begin{align}
p'(x) = p'(y)\quad\text{iff}\quad F[p](c(x)) = F[p](c(y)) \qquad\text{for all $x,y\in B$}.
\label{pPrimeStable}
\end{align}
If a state $x\in C$ is clean at the end of the loop body, i.e.~$x\in \phi'$, this
implies that $x$ has no successor $s$ in a block $B'\in \vec{B}_{new}$
(otherwise $\textsc{MarkDirty}(x)$ would have been called).
The new blocks returned by $\ensuremath{\textsc{Split}}$ indicate which states $s\in C$ were
moved to a different block identifier: if $p(s) \neq p'(s)$, then
$s\in B'$ for some $B' \in \vec{B}_{new}$. Combining these two observations
yields that for all successors $s$ of $x$, we have $p(s) = p'(s)$, so by
\Cref{sameSigForCleans}:
\begin{align}
F[p](c(x)) = F[p'](c(x)) \qquad\text{for all $x\in \phi'$}
\label{cleanProperty}
\end{align}
For the final verification of \ref{algInvPrime}, consider clean states
$x,y\in \phi'$ in the same block ($p'(x) = p'(y)$). In particular $x\in B$ iff $y\in B$, and we can show
$F[p](c(x)) = F[p](c(y))$ by case distinction:
\begin{itemize}
\item If $x,y\in B$, then $F[p](c(x)) = F[p](c(y))$ by
construction of $p'$ \eqref{pPrimeStable}.
\item If $x,y\notin B$, then
$p(x) = p(y)$, and the states were already clean
before the current loop iteration, for which the invariant \ref{algInv}
provides us with $F[p](c(x)) = F[p](c(y))$.
\end{itemize}
In any case, we have $F[p](c(x)) = F[p](c(y))$ and so
by the observation for clean states \eqref{cleanProperty}
\[
F[p'](c(x))
= F[p](c(x))
= F[p](c(y))
= F[p'](c(y))
\]
as desired, proving the invariant \ref{algInvPrime} after each loop iteration.
\end{itemize}
The invariant \ref{algInv} provides partial correctness:
Whenever the algorithm terminates, the invariant \ref{algInv} shows that we
have a well-defined map
\[
C/p \longrightarrow F(C/p)
\]
on the $p$-equivalence classes of $C$
turning $p\colon C\to C/p$ into an $F$-coalgebra homomorphism.
Thus, all states identified by $p$ are behaviourally equivalent. For the
converse, one can show by induction over loop iterations that whenever two
states $x,y\in C$ are behaviourally equivalent, then they remain identified by
$p$.
In total, upon termination, the returned partition $p$ precisely identifies
the behaviourally equivalent states.
Termination itself is clear because the finite set $C$ has only finitely
many quotients.
\end{proofappendix}
\subsection{Complexity Analysis}
\label{sec:ComplexityAnalysis}
We structure the complexity analysis as a series of lemmas phrased in terms of the number of states $n = |C|$ and the total number of transitions $m$ defined by
\[
m := \sum_{x\in C} |\ensuremath{\mathsf{pred}}(x)|.
\]
As a first observation, we exploit that $\ensuremath{\textsc{Split}}$ re-uses the block index for
the largest resulting block. Thus, whenever $x$ is moved to a block with a
different index, the new block has at most half the size of the old block,
leading to the logarithmic factor, by Hopcroft's trick:
\begin{lemma}
\label{partLogNChange}
A state is moved into a new block at most $\ensuremath{\mathcal{O}}\xspace(\log n)$ times, that is, for
every $x\in C$, the value of $p(x)$ in \Cref{algPartRefSetFun} changes at
most $\lceil{\log_2 |C|}\rceil$ many times.
\end{lemma}
\begin{proofappendix}{partLogNChange}
When a block gets split into sub-blocks, the old block is reused for the largest sub-block.
Therefore, a newly created block is at most half the size of the old block.
Formally, let $p_\ensuremath{\mathsf{old}} \colon C\to\ensuremath{\mathbb{N}}\xspace$ be the partition before an iteration of
\Cref{algPartRefSetFun} and $p\colon C\to\ensuremath{\mathbb{N}}\xspace$ the partition after the
iteration. Then for all $x\in C$ we have
\[
p_\ensuremath{\mathsf{old}} (x) = p(x)
\quad\text{or}\quad
|\{x'\in C\mid p_\ensuremath{\mathsf{old}}(x) = p_\ensuremath{\mathsf{old}}(x')\}|
~~\ge~~ 2\cdot |\{x'\in C\mid p(x) = p(x')\}|
\]
In other words, each time a state is moved to a new block, the size of its containing block gets cut at least in half.
Since the initial block has size $n$, the value of $p(x)$ can change at most
$\lceil \log_2 n \rceil$ many times.
\end{proofappendix}
When a state is moved to a different block, all its predecessors are marked
dirty. If there are $m$ transitions in the system, and each state is moved to different block
at most $\log n$ times, then:
\begin{lemma}\label{markDirtyCount}
$\textsc{MarkDirty}$ is called at most $m\cdot \ceil{\log n} + n$ many times (including initialization).
\end{lemma}
\begin{proofappendix}{markDirtyCount}
The initialization phase marks all the $n$ states of the coalgebra as dirty.
Whenever $p(x)$ changes value, i.e.~$x$ is moved to another block, every
$y\in \ensuremath{\mathsf{pred}}(x)$ is marked dirty. Using the bound from
\autoref{partLogNChange}, we have that in the predecessor-loop, the total
number of invocations of $\textsc{MarkDirty}$ is bounded by
\[
\sum_{x\in C} \big(|\ensuremath{\mathsf{pred}}(x)|\cdot \ceil{\log n}\big) =
\Big(\sum_{x\in C} |\ensuremath{\mathsf{pred}}(x)|\Big)\cdot \ceil{\log n} = m\cdot \ceil{\log n},
\]
leading to a total of at most $m\cdot \ceil{\log n} + n$ invocations.
\end{proofappendix}
In the actual implementation, we arrange the pointers in the initial partition
directly such that all states are marked dirty when the main loop is entered
for the first time.
The overall run time is dominated by the complexity of $\ensuremath{\mathsf{sig}}$ and $\ensuremath{\mathsf{pred}}$. Here,
we assume that $\ensuremath{\mathsf{sig}}$ always takes at least the time needed to write its return
value. On the other hand, we allow $\ensuremath{\mathsf{pred}}$ to return a pre-computed array by
reference, taking only $\ensuremath{\mathcal{O}}\xspace(1)$ time. The pre-computation of $\ensuremath{\mathsf{pred}}$ can be
done at the beginning of the algorithm by iterating over the entire coalgebra
once, e.g.~it can be done along with input parsing. This runs linear in the
overall size of the coalgebra, and thus is dominated by the complexity of the
algorithm:
\begin{proposition}
\label{runTimeDominance}
The run time complexity of \Cref{algPartRefSetFun} amounts to the time spent in
$\ensuremath{\mathsf{sig}}$ and in $\ensuremath{\mathsf{pr}}ed$ plus $\ensuremath{\mathcal{O}}\xspace(m\cdot \log n+n)$.
\end{proposition}
\begin{proofappendix}{runTimeDominance}
We split the analysis into three parts: the initialization, the computation of the new partition, and the $\textsc{MarkDirty}$ loop.
\begin{itemize}
\item The initialization takes $\ensuremath{\mathcal{O}}\xspace(|C|)$. Since all states are marked dirty, the algorithm calls
$\ensuremath{\mathsf{sig}}$ precisely $|C|$ times in the first while-loop iteration. Hence, the initialization phase is
dominated by the time spent in $\ensuremath{\mathsf{sig}}$.
\item Extracting a block $B$ with dirty states takes $\ensuremath{\mathcal{O}}\xspace(1)$, and computing
the arrays of size $|B_{\dirty}|$ and $|B_{\clean_1}|$ is clearly dominated by the time spent in $\ensuremath{\mathsf{sig}}$.
In particular, for every $x\in B_{\dirty}$, the size of $sigs_\ensuremath{\mathsf{di}}(x)$ is bounded by the time $\ensuremath{\mathsf{sig}}(x,p)$.
$\ensuremath{\textsc{Renumber}}'$ and the nested $\ensuremath{\textsc{Renumber}}$ run linearly in the total size of
the signatures in $sigs_\ensuremath{\mathsf{di}}$ and $sigs_\ensuremath{\mathsf{cl}}$, whose contents were written by $\ensuremath{\mathsf{sig}}$.
The invocation of $\ensuremath{\textsc{Split}}$ runs in $\ensuremath{\mathcal{O}}\xspace(|B_{\dirty}|)$, because the largest
subblock of $B$ inherits the index of $B$; usually the subblock of clean
states is the largest one, and otherwise the number of clean states is
bounded by $|B_{\dirty}|$.
\item $\textsc{MarkDirty}$ runs in $\ensuremath{\mathcal{O}}\xspace(1)$, so the time of the for-loops
amounts to the number of iterations and the time spent in $\ensuremath{\mathsf{pred}}$. The
\emph{outer} for-loop has at most $|B_{\dirty}|$ iterations, because it iterates
over all elements in the blocks returned by $\ensuremath{\textsc{Split}}$.
The \emph{inner} for-loop is bounded by the time and return value of
$\ensuremath{\mathsf{pred}}$.
If $\ensuremath{\mathsf{pred}}$ returns an array by reference in $\ensuremath{\mathcal{O}}\xspace(1)$, then the inner
for-loop amounts to the calls to $\textsc{MarkDirty}$, contributing the extra
$\ensuremath{\mathcal{O}}\xspace(m\log n + n)$ time from \Cref{markDirtyCount}.
\global\def\proofappendix@qedsymbolmissing{}\qed
\end{itemize}
\end{proofappendix}
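For completeness, the predecessor pre-computation assumed above can be sketched as follows; the name \texttt{precompute\_pred} is ours, and \texttt{successors} is assumed to list, for every state, the targets of its outgoing edges, extracted once from the encoded coalgebra. The pass is linear in the size of the coalgebra, as claimed.
\begin{verbatim}
/// A sketch of the one-off predecessor pre-computation: given, for every
/// state, the list of its successors, build the `pred` arrays in time linear
/// in the size of the coalgebra.
fn precompute_pred(successors: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut pred = vec![Vec::new(); successors.len()];
    for (x, succs) in successors.iter().enumerate() {
        for &y in succs {
            pred[y].push(x);
        }
    }
    pred
}
\end{verbatim}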
Thus, it remains to count how often the algorithm calls $\ensuremath{\mathsf{sig}}$.
Roughly, $\ensuremath{\mathsf{sig}}$ is called for every state that becomes dirty, so we can show:
\begin{theorem}\label{mainComplexity}
The number of invocations of $\ensuremath{\mathsf{sig}}$ in \Cref{algPartRefSetFun} is bounded by
$\ensuremath{\mathcal{O}}\xspace(m \cdot \log n + n)$.
\end{theorem}
\begin{proofappendix}{mainComplexity}
We show that $\ensuremath{\mathsf{sig}}$ is called at most
\[
2\cdot (m\ceil{\log n} + n)
\]
many times. There are two lines in the algorithm in which $\ensuremath{\mathsf{sig}}$ is called:
\begin{enumerate}
\item $\ensuremath{\mathsf{sig}}(x,p)$ is called for a \emph{dirty} state $x$. By \autoref{markDirtyCount}, this can happen at most
$m\ceil{\log n} + n$ many times.
\item $\ensuremath{\mathsf{sig}}(x,p)$ is called for a \emph{clean} state $x$. The bound for this is again
$m\ceil{\log n} +n$, because if $\ensuremath{\mathsf{sig}}$ is called for a clean state $x\in
B_{\clean_1} \subseteq B$, then $B$ must have at least one dirty state. In
particular, in every iteration for a block $B$, we have $|B_{\clean_1}| \le
|B_{\dirty}|$.
\end{enumerate}
Hence, the overall number of invocations to $\ensuremath{\mathsf{sig}}$ is bounded by $2\cdot
(m\ceil{\log n} + n)$ which is in $\ensuremath{\mathcal{O}}\xspace(m\log n + n)$ as desired.
\end{proofappendix}
\begin{corollary}
\label{corMain}
If $\ensuremath{\mathsf{sig}}$ takes $f$ time, $\ensuremath{\mathsf{pred}}$ runs in $\ensuremath{\mathcal{O}}\xspace(1)$ (returning a reference), and $m\ge n$, then
\Cref{algPartRefSetFun} computes behavioural equivalence in the input coalgebra in
$\ensuremath{\mathcal{O}}\xspace(f\cdot m\cdot \log n)$ time.
\end{corollary}
\begin{proofappendix}{corMain}
By \Cref{runTimeDominance}, the time of $\ensuremath{\mathsf{sig}}$ dominates the overall run time. If $m\ge n$, the algorithm
runs in $\ensuremath{\mathcal{O}}\xspace(f\cdot (m\cdot \log n + n)) = \ensuremath{\mathcal{O}}\xspace(f\cdot m\cdot \log n)$ time.
\end{proofappendix}
\begin{example}
For $\ensuremath{\mathcal{P}_f}$-coalgebras, $\ensuremath{\mathsf{sig}}$ takes $\ensuremath{\mathcal{O}}\xspace(k)$ time if every state has at most $k$ successors.
Then \Cref{algPartRefSetFun} minimizes $\ensuremath{\mathcal{P}_f}$-coalgebras in time $\ensuremath{\mathcal{O}}\xspace(k\cdot m\cdot \log n)$.
Note that $m\le k\cdot n$, so the complexity is also bounded by $\ensuremath{\mathcal{O}}\xspace(k^2\cdot n\log n)$.
\end{example}
\subsection{Comparison to related work on the algorithmic level}
\newcommand{\algname}[1]{\textit{#1}}
\newcommand{\copar}[1]{\textit{CoPaR}}
\newcommand{\distr}[1]{\textit{DCPR}}
\newcommand{\ours}[1]{\textit{Boa}}
We can classify partition refinement algorithms by their time complexity, and by the classes of functors they are applicable to.
For concrete system types, there are more algorithms than we can recall, so
instead, we focus on early representatives and on generic algorithms.
\subsubsection*{The Hopcroft line of work.}
One line of work originates in Hopcroft's 1971 work on DFA minimization \cite{Hopcroft71},
and continues with Kanellakis and Smolka's
\cite{KanellakisSmolka83,KanellakisS90} work on partition refinement for
transition systems running in $\ensuremath{\mathcal{O}}\xspace(k^2 n\log n)$ where $k$ is the maximum
out-degree. It was a major achievement by Paige and Tarjan \cite{PaigeTarjan87}
to reduce the run time to $\ensuremath{\mathcal{O}}\xspace(k n \log n)$ by counting transitions and storing these transition counters in a clever way,
which subsequently led to a fruitful line of research on transition system minimization \cite{GaravelL22}.
This was generalized to coalgebras in Deifel, Dorsch, Milius, Schröder and Wißmann's work on \copar{}~\cite{coparFM19,WissmannEA2021},
which is applicable to a large class of functors satisfying their \emph{zippability} condition.
These algorithms keep track of a worklist of blocks with respect to which \emph{other} blocks still have to be split.
Our algorithm, by contrast, keeps track of a worklist of blocks that \emph{themselves} still potentially have to be split.
Although similar at first sight, they are fundamentally different: in the former, one is given a block,
and must determine how to split all the predecessor blocks,
whereas in our case one is given a block, which is then split based on its successors.
The advantage of the former class of algorithms is that they have optimal time complexity $\ensuremath{\mathcal{O}}\xspace(kn \log n)$,
provided one can implement the special splitting procedure for the functor.
The additional memory needed for the transition counters is linear in $kn$.
Our algorithm, by contrast, has an extra factor of $k$,
but is applicable to all computable set-functors.
By investing this extra time-factor $k$, we reduce the memory consumption because we do not need to maintain transition counters or intermediate states like \copar{}.
A practical advantage of our algorithm is that \emph{one} recomputation of a block split can take into account the changes to \emph{all} the other blocks that happened since the recomputation.
The Hopcroft-\copar{} line of work, on the other hand, has to consider each change of the other blocks separately.
This advantage is of no help in the asymptotic complexity, because in the worst case only one other split happened each time,
and then our algorithm does in $\ensuremath{\mathcal{O}}\xspace(k)$ what \copar{} can do in $\ensuremath{\mathcal{O}}\xspace(1)$.
However, as we shall see in the benchmarks of \Cref{sec:Benchmarks}, in practice our algorithm outperforms \copar{} and \textit{mCRL2}{},
even though our algorithm is applicable to a more general class of functors.
\subsubsection*{The Moore line of work.}
Another line of work originates in Moore's 1956 work on DFA minimization \cite{Moore},
which in retrospect is essentially the naive algorithm specialized to DFAs.
In this class, the most relevant for us is the algorithm by König and Küpper~\cite{KonigKupper14} for coalgebras,
and the distributed algorithm of Birkmann, Deifel, and Milius \cite{BDM22}.
Like our algorithm, algorithms in this class split a block based on its successors, and can be applied to general functors.
Unlike the Hopcroft-\copar{} line of work and our algorithm, the running time of these algorithms is $\ensuremath{\mathcal{O}}\xspace(kn^2)$.
Another relevant algorithm in this class is the algorithm of Blom and Orzan \cite{BlomOrzan05} for transition systems.
Their main algorithm runs in time $\ensuremath{\mathcal{O}}\xspace(kn^2)$, but in a side note they mention a variation of their algorithm that runs in $\ensuremath{\mathcal{O}}\xspace(n \log n)$ \emph{iterations}.
They do not further analyse the time complexity or describe how to implement an iteration,
because the main focus of their paper is a distributed implementation of the $\ensuremath{\mathcal{O}}\xspace(kn^2)$ algorithm,
and the $\ensuremath{\mathcal{O}}\xspace(n \log n)$ variation precludes distributed implementation.
Out of all algorithms, Blom and Orzan's $\ensuremath{\mathcal{O}}\xspace(n \log n)$ variation is the most similar to our algorithm,
in particular because their algorithm is in the Moore line of work, yet also re-uses the old block for the largest sub-block
(which is a feature that usually appears in the Hopcroft-\copar{} line of work).
However, their block splitting is different from ours and is only correct for
labelled transition systems and cannot easily be applied to general functors $F$.
\section{Instances}
\label{sec:Instances}
We give a list of examples of instances that can be supported by our algorithm.
We start with the instances that were already previously supported by \copar{},
and then give examples of instances that were not previously supported by $n \log n$ algorithms.
\subsection{Instances also supported by \copar{}}
\subsubsection*{Products and coproducts}
The simplest instances are those built using the product $F \times G$ and
disjoint union $F + G$, or in general, signature functors for countable signatures $\Sigma$.
The binary encoding of an element of the signature functor,
$(\sigma,x_1,\ldots,x_k)\in \tilde\Sigma X$,
starts with a specification of $\sigma$, followed by the concatenation of the
encodings of the parameters $x_1,\ldots,x_k$. The functor implementation can
simply apply the substitution recursively to these elements $x_1,\ldots, x_k$,
without any further need for normalization.
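As a minimal sketch, the signature computation for a signature functor can be as simple as the following Rust fragment; the function name is ours and the symbol is assumed to be identified by a number, as in the binary encoding just described.
\begin{verbatim}
/// A sketch of the signature-functor case: an element (sigma, x_1, ..., x_k)
/// is encoded as the symbol's id followed by its parameters; `sig` simply
/// substitutes the current block numbers, no normalization is needed.
fn sig_signature_functor(symbol: u64, params: &[usize], p: &[usize]) -> Vec<u64> {
    let mut enc = Vec::with_capacity(1 + params.len());
    enc.push(symbol);
    enc.extend(params.iter().map(|&x| p[x] as u64));
    enc
}
\end{verbatim}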
\subsubsection*{Powerset}
The finite powerset functor $\ensuremath{\mathcal{P}_f}$ can be used to model transition systems as coalgebras.
In conjunction with products and coproducts, we can model nondeterministic
(tree) automata and labelled transition systems.
The binary encoding of an element $\{x_1,\ldots,x_k\}$ of the powerset functor
is stored as a list of elements prefixed by its length.
The functor implementation can recursively apply the substitution to
the elements of a set $\{x_1,\ldots,x_k\}$,
and subsequently normalize by sorting the resulting elements and removing adjacent duplicates.
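A minimal Rust sketch of this substitute--sort--deduplicate step reads as follows; the function name is ours and the length prefix of the binary encoding is omitted for brevity.
\begin{verbatim}
/// A sketch of the finite-powerset case: substitute block numbers into the
/// successor set, then normalize by sorting and removing adjacent duplicates.
fn sig_powerset(successors: &[usize], p: &[usize]) -> Vec<usize> {
    let mut enc: Vec<usize> = successors.iter().map(|&x| p[x]).collect();
    enc.sort_unstable();
    enc.dedup();
    enc
}
\end{verbatim}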
\subsubsection*{Monoid-valued functors}
The binary encoding of $\mu \in M^{(X)}$ (for a countable monoid $M$) is an
array of pairs $(x_i,\mu(x_i))$.
The binary encoding stores a list of these pairs prefixed by the length of the list.
The functor implementation recursively applies the substitution to the $x_i$,
and then sorts the pairs by the $x_i$ value,
and removes adjacent duplicate $x_i$ by summing up their associated monoid values $\mu(x_i)$.
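A corresponding Rust sketch for a monoid-valued functor, illustrated with the monoid of integers under addition, could look as follows; again the function name is our own.
\begin{verbatim}
/// A sketch of the monoid-valued case M^(X): substitute block numbers,
/// sort by block, and merge duplicates by combining their monoid values
/// (here illustrated with integer addition).
fn sig_monoid_valued(entries: &[(usize, i64)], p: &[usize]) -> Vec<(usize, i64)> {
    let mut enc: Vec<(usize, i64)> = entries.iter().map(|&(x, w)| (p[x], w)).collect();
    enc.sort_unstable_by_key(|&(b, _)| b);
    let mut merged: Vec<(usize, i64)> = Vec::new();
    for (b, w) in enc {
        match merged.last_mut() {
            Some(last) if last.0 == b => last.1 += w,
            _ => merged.push((b, w)),
        }
    }
    merged
}
\end{verbatim}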
\subsection{Instances not supported by \copar{}}
\subsubsection*{Composition of functors without intermediate states}
The requirement of zippability in the $m\log n$ algorithm~\cite{coparFM19} is
not closed under the composition of functors $F\circ G$. As a workaround, one
can introduce explicit intermediate states between $F$- and $G$-transitions.
This introduces potentially many more states into the coalgebra, which leads to
increased memory usage. Because our algorithm works for any computable functor,
it can instead use the composed functor directly,
without any pre-processing that splits each state of the automaton.
This is important for practical efficiency.
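The following generic Rust sketch illustrates the point for an outer $\ensuremath{\mathcal{P}_f}$-layer: the signature of a composed element simply applies an arbitrary inner signature function to every member and then normalizes the outer layer, so no intermediate states are ever created. The names \texttt{sig\_composed} and \texttt{inner\_sig} are ours; this is an illustration of the idea, not the code of our tool.
\begin{verbatim}
/// A sketch of handling a composite F o G directly, here with F = P_f: the
/// outer signature applies an inner signature function to every member of the
/// set and normalizes the outer layer exactly as for P_f.
fn sig_composed<T>(outer: &[T], inner_sig: impl Fn(&T) -> Vec<u8>) -> Vec<Vec<u8>> {
    let mut enc: Vec<Vec<u8>> = outer.iter().map(inner_sig).collect();
    enc.sort();
    enc.dedup();
    enc
}
\end{verbatim}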
\subsubsection*{Monotone Modal Logics and Monotone Bisimulation}
When reasoning about game-theoretic settings~\cite{Parikh1985,Peleg87,Pauly2001}, the
arising modal logics have modal operators that talk about the ability of agents
to enforce properties in the future.
This leads to \emph{monotone modal logics} whose domain of reasoning are
\emph{monotone neighbourhood frames} and the canonical notion of equivalence is
\emph{monotone bisimulation}. It was shown by Hansen and
Kupke~\cite{HansenKupke04,HansenKupke04cmcs} that these are an instance of
coalgebras and coalgebraic behavioural equivalence for the monotone
neighbourhood functor. Instead of the original definition, it suffices for our
purposes to work with the following equivalent characterization:
\begin{notheorembrackets}
\begin{definition}[{\cite[Lem~3.3]{HansenKupke04}}]
\label{defMonNeighbour}
The monotone neighbourhood functor \linebreak
${\ensuremath{\mathcal{N}}\xspace\colon \ensuremath{\mathsf{Set}}\xspace\to\ensuremath{\mathsf{Set}}\xspace}$
is given by
\[
\ensuremath{\mathcal{N}}\xspace X = \{ N\in \ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f} X \mid N\text{ \upshape upwards closed}\}
\text{ and }
\ensuremath{\mathcal{N}}\xspace(f\colon X\to Y)(N) = \closure{\{f[S]\mid S\in N\}}.
\]
where $\uparrow$ denotes upwards closure.
\end{definition}
\end{notheorembrackets}
\begin{proofappendix}[Details for]{defMonNeighbour}
Usually $\ensuremath{\mathcal{N}}\xspace$ is defined to be a subfunctor of the double contravariant
powerset functor $\ensuremath{\mathcal{N}}\xspace X \subseteq 2^{2^{X}}$, but it can also be defined
using the double covariant powerset $\ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f}$ by explicitly taking the upwards
closure in the definition of the action on maps $\ensuremath{\mathcal{N}}\xspace f\colon \ensuremath{\mathcal{N}}\xspace X\to \ensuremath{\mathcal{N}}\xspace Y$~\cite{HansenKupke04}.
\end{proofappendix}
Hence, in a coalgebra $c\colon C\to \ensuremath{\mathcal{N}}\xspace C$, the successor structure of a
state $x\in C$ is an upwards closed family of neighbourhoods $c(x)$.
To avoid redundancy, we do not keep the full neighbourhoods in memory, but only
the least elements in this family: given a family $N\in \ensuremath{\mathcal{N}}\xspace X$ for finite $X$,
we define the map
\[
\ensuremath{\mathsf{atom}}_X\colon \ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f} X \to \ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f} X
\qquad
\ensuremath{\mathsf{atom}}_X(N) = \{S\in N\mid \nexists S'\in N: S'\subsetneqq S\}
\]
which transforms a monotone family into an antichain by taking the minimal
elements in the monotone family.
\begin{definition}
\label{defMonImpl}
We can implement $\ensuremath{\mathcal{N}}\xspace$-coalgebras as follows:
For a coalgebra $c\colon C\to \ensuremath{\mathcal{N}}\xspace C$, keep for every
state $x\in C$ an array of arrays representing $\ensuremath{\mathsf{atom}}_C(c(x)) \in \ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f} C$.
The predecessors of a state $y$ need to be computed in advance and are given by
\[
\ensuremath{\mathsf{pred}}(y) = \{x\in C\mid y\in A\text{ for some }A\in \ensuremath{\mathsf{atom}}_C(c(x))\}.
\]
For the complexity analysis, we specify the out-degree as
\[
k := \max_{x\in C} \sum_{S\in \ensuremath{\mathsf{atom}}_C(c(x))} |S|.
\]
For the signature $\ensuremath{\mathsf{sig}}(x,p)$ of a state $x$ w.r.t.~$p\colon C\to \ensuremath{\mathbb{N}}\xspace$, do the following:
\begin{enumerate}
\item Compute $\ensuremath{\mathcal{P}_f}[\ensuremath{\mathcal{P}_f}[p]](t)$ for $t := \ensuremath{\mathsf{atom}}_C(c(x))$ by using the $\ensuremath{\mathsf{sig}}$-implementation of $\ensuremath{\mathcal{P}_f}$ first for each nested set and then on the outer set.
This results in a new set of sets $t' := \ensuremath{\mathcal{P}_f}[\ensuremath{\mathcal{P}_f}[p]](t)$.
\item For the normalization, iterate over all pairs $S,T\in t'$ and remove $T$
if $S\subsetneqq T$.
This step is not linear in the size of $t'$ but takes $\ensuremath{\mathcal{O}}\xspace(k^2)$ time.
\end{enumerate}
\end{definition}
\begin{proofappendix}[Details for]{defMonImpl}
The only tricky part is why the normalization step in $\ensuremath{\mathsf{sig}}$ takes only
$\ensuremath{\mathcal{O}}\xspace(k^2)$ time.
For each $S,T\in t'\in \ensuremath{\mathcal{P}_f}\ensuremath{\mathcal{P}_f}\ensuremath{\mathbb{N}}\xspace$, we compare $S\subsetneqq T$. Of course
we have $\le k^2$ pairs $S,T$ and the comparison $S\subsetneqq T$ takes $k$
steps, so we are clearly in $\ensuremath{\mathcal{O}}\xspace(k^3)$. But actually we can give a tighter bound:
We can address every $y\in S$ for every $S\in t'$ by an index $i$, using that
we have just ordered $t'$ in the steps before. By the definition of $k$, each
such index is smaller than $k$.
Throughout checking $S\subsetneqq T$ for all $S,T\in t'$, we compare each
pair of indices at most once. Hence, we perform at most $k^2$ comparisons.
\end{proofappendix}
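For illustration, the whole signature computation for the monotone neighbourhood functor can be sketched in Rust as follows. The names are ours, and \texttt{atoms} is assumed to hold $\ensuremath{\mathsf{atom}}_C(c(x))$ as a list of lists; the sketch uses binary search for the subset tests and is therefore slightly more expensive than the index-based variant described in the proof details, but it computes the same normal form.
\begin{verbatim}
/// A sketch of sig for the monotone neighbourhood functor: substitute block
/// numbers into every neighbourhood, normalize each one as for P_f, and then
/// keep only the minimal sets (removing strict supersets).
fn sig_monotone(atoms: &[Vec<usize>], p: &[usize]) -> Vec<Vec<usize>> {
    // Inner P_f-normalization of each neighbourhood.
    let mut family: Vec<Vec<usize>> = atoms
        .iter()
        .map(|s| {
            let mut t: Vec<usize> = s.iter().map(|&x| p[x]).collect();
            t.sort_unstable();
            t.dedup();
            t
        })
        .collect();
    // Outer P_f-normalization of the family itself.
    family.sort();
    family.dedup();
    // Keep only the minimal sets.
    family
        .iter()
        .filter(|t| {
            !family
                .iter()
                .any(|s| s.len() < t.len() && s.iter().all(|x| t.binary_search(x).is_ok()))
        })
        .cloned()
        .collect()
}
\end{verbatim}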
For such a monotone neighbourhood frame $c\colon C\to \ensuremath{\mathcal{N}}\xspace C$, note that for
states $x\in C$, another state $y\in C$ might be contained in multiple sets
$S\in c(x)$. Still, the definition of $m$ in the complexity analysis is agnostic of this.
\begin{proposition}
\label{propMonotone}
For a monotone neighbourhood frame $c\colon C\to \ensuremath{\mathcal{N}}\xspace C$, let $k\in \ensuremath{\mathbb{N}}\xspace$ be such that $|\ensuremath{\mathsf{atom}}_C(c(x))| \le k$ for all $x\in C$. \Cref{algPartRefSetFun} computes monotone bisimilarity in $\ensuremath{\mathcal{O}}\xspace(k^2\cdot m\log n)$ time.
\end{proposition}
\begin{proofappendix}{propMonotone}
With $k\in \ensuremath{\mathbb{N}}\xspace$ denoting the bound on the out-degree, each call to $\ensuremath{\mathsf{sig}}$ takes $\ensuremath{\mathcal{O}}\xspace(k^2)$ time.
\Cref{algPartRefSetFun} calls $\ensuremath{\mathsf{sig}}$ $\ensuremath{\mathcal{O}}\xspace(m\log n)$ many times, yielding $\ensuremath{\mathcal{O}}\xspace(k^2\cdot m\log n)$ as desired.
\end{proofappendix}
\section{Benchmarks}
\label{sec:Benchmarks}
To evaluate the practical performance and memory usage of our algorithm,
we have implemented it in our tool \textit{Boa}{} \cite{artifact_boa}, written in Rust.
The user of \textit{Boa}{} can either use a composition of the built-in functors to describe their automaton type,
or implement their own automaton type by implementing the interface of \Cref{sec:coalginterface} in Rust.
The user may then input the data of their automaton using either a textual format akin to the representation in the ``Coalgebra'' row of \Cref{fig:coalgexamples},
or use \textit{Boa}{}'s more efficient and compact binary input format.
We test \textit{Boa}{} on the benchmark suite of Birkmann, Deifel and
Milius \cite{BDM22},
consisting of real-world benchmarks (fms \& wlan -- from the benchmark suite of the PRISM model checker \cite{PRISM}), and randomly generated benchmarks (wta -- weighted tree automata).
For the wta benchmarks,
the size of the first 5 was chosen to be maximal such that \copar{} \cite{coparFM19} uses 16GB of memory,
and the size of the 6th benchmark was chosen by Birkmann, Deifel and
Milius to demonstrate the scalability of their distributed algorithm.
\renewcommand\theadfont{\bfseries}
\newcommand{\tnodes}{\raisebox{.4ex}{\tiny{$\times 32$}}}
\newcommand{\tc}[2]{\multirow{#1}{*}{#2}}
\begin{table}[h!]
\centering
\begin{tabular}{>{\bfseries}ccccccccc}
\multicolumn{4}{l}{\thead{benchmark}} & \multicolumn{3}{l}{\thead{time (s)}} & \multicolumn{2}{c}{\thead{memory (MB)}} \\
\toprule
\thead{type} & \thead{n} & \thead{\% red} & \thead{m} & \thead{\copar{}} & \thead{\distr{}} & \thead{\ours{}} & \thead{\distr{}} & \thead{\ours{}} \\
\toprule
fms & 35910 & 0\% & 237120 & 4 & 2 & 0.02 & 13\tnodes & 6 \\
fms & 152712 & 0\% & 1111482 & 17 & 8 & 0.10 & 62\tnodes & 20 \\
fms & 537768 & 0\% & 4205670 & 68 & 26 & 0.40 & 163\tnodes & 72 \\
fms & 1639440 & 0\% & 13552968 & 232 & 84 & 1.29 & 514\tnodes & 199 \\
fms & 4459455 & 0\% & 38533968 & --\ \ & 406 & 4.60 & 1690\tnodes & 557 \\
\midrule
wlan & 248503 & 56\% & 437264 & 39 & 297 & 0.11 & 90\tnodes & 15 \\
wlan & 607727 & 59\% & 1162573 & 105 & 855 & 0.30 & 147\tnodes & 38 \\
wlan & 1632799 & 78\% & 3331976 & --\ \ & 2960 & 0.81 & 379\tnodes & 92 \\
\midrule
wta$_5$(2) & 86852 & 0\% & 21713000 & 537 & 71 & 0.85 & 701\tnodes & 179 \\
wta$_4$(2) & 92491 & 0\% & 18498200 & 723 & 67 & 0.96 & 728\tnodes & 154 \\
wta$_3$(2) & 134207 & 0\% & 20131050 & 689 & 113 & 1.34 & 825\tnodes & 175 \\
wta$_2$(2) & 138000 & 0\% & 13800000 & 467 & 129 & 0.98 & 715\tnodes & 126 \\
wta$_1$(2) & 154863 & 0\% & 7743150 & 449 & 160 & 0.74 & 621\tnodes & 80 \\
wta$_3$(2) & 1300000 & 0\% & 195000000 & --\ \ & 1377 & 22.58 & 7092\tnodes & 1647 \\
\midrule
wta$_5$(W) & 83431 & 0\% & 16686200 & 642 & 52 & 1.01 & 663\tnodes & 142 \\
wta$_4$(W) & 92615 & 0\% & 23153750 & 511 & 61 & 1.21 & 849\tnodes & 193 \\
wta$_3$(W) & 94425 & 0\% & 14163750 & 528 & 59 & 0.76 & 639\tnodes & 124 \\
wta$_2$(W) & 134082 & 0\% & 13408200 & 471 & 76 & 0.96 & 675\tnodes & 124 \\
wta$_1$(W) & 152107 & 0\% & 7605350 & 566 & 79 & 0.76 & 642\tnodes & 82 \\
wta$_3$(W) & 944250 & 0\% & 141637500 & --\ \ & 675 & 15.18 & 6786\tnodes & 1231 \\
\midrule
wta$_5$(Z) & 92879 & 0\% & 18575800 & 463 & 56 & 0.67 & 754\tnodes & 161 \\
wta$_4$(Z) & 94451 & 0\% & 23612750 & 445 & 61 & 0.81 & 871\tnodes & 199 \\
wta$_3$(Z) & 100799 & 0\% & 15119850 & 391 & 64 & 0.62 & 628\tnodes & 135 \\
wta$_2$(Z) & 118084 & 0\% & 11808400 & 403 & 74 & 0.66 & 633\tnodes & 113 \\
wta$_1$(Z) & 156913 & 0\% & 7845650 & 438 & 82 & 0.68 & 677\tnodes & 93 \\
wta$_3$(Z) & 1007990 & 0\% & 151198500 & --\ \ & 645 & 19.55 & 5644\tnodes & 1325 \\
\bottomrule
\end{tabular}
\caption{
Time and memory usage comparison on the benchmarks of Birkmann, Deifel and
Milius~\cite{BDM22}.
The columns n, \%red, m give the number of states, the percentage of redundant states, and the number of edges, respectively.
The results for \ours{} are an average of 10 runs.
The results for \copar{} and \distr{} are those reported in Birkmann, Deifel and
Milius \cite{BDM22}.
The memory usage of \distr{} is per worker, indicated by $\times 32$ (for the 32 workers on the HPC cluster). \\
The functors associated with the benchmarks are as follows:
\textbf{fms:} $F(X) = \ensuremath{\mathbb{Q}}\xspace^{(X)}$,
\textbf{wlan:} $F(X) = \ensuremath{\mathbb{N}}\xspace \times \ensuremath{\mathcal{P}_f}(\ensuremath{\mathbb{N}}\xspace \times \ensuremath{\mathcal{D}}(X))$,
\textbf{wta$_r$(M):} $F(X) = M \times M^{(4 \times X^r)}$ where $r$ indicates the branching factor of the tree automaton,
and $M=W$ is the monoid of 64-bit words with bitwise-or,
$M=Z$ is the monoid of integers with addition,
and $M=2$ is the monoid of booleans with logical-or.
}
\label{tab:benchmarks}
\end{table}
The benchmark results are given in \Cref{tab:benchmarks}.
The first columns list the type of benchmark and the size of the input coalgebra.
For the size, the column $n$ denotes the number of states and $m$ is the number
of edges as defined in \Cref{sec:ComplexityAnalysis}.
In the wlan benchmarks for CoPaR~\cite{coparFM19,WissmannEA2021}, the reported numbers of states
and edges also include the intermediate states introduced by CoPaR in order to cope with functor
composition, a preprocessing step which we do not need in \ours{}; they therefore differ from the numbers in \Cref{tab:benchmarks} here.
The three subsequent columns list the running time of \copar{}, \distr{}, and \ours{}.
The last two columns list the memory usage of \distr{} and \ours{}.
The benchmark results for \distr{} and \copar{} are those reported by Birkmann, Deifel and
Milius \cite{BDM22},
and were run on their high performance computing cluster with 32 workers on 8 nodes with two Xeon 2660v2 chips (10 cores per
chip + SMT) and 64GB RAM.
The memory usage of \distr{} is \emph{per worker}, indicated by the $\times 32$.
Execution times of \copar{} were taken using one node of the cluster.
Some entries for \copar{} are missing, indicating that it ran out of its 16GB of memory.
The benchmark results for our algorithm were obtained on a consumer setup: on one core of a 2.3GHz MacBook Pro 2019 with 32GB of memory.
A point to note is that compared to \copar{}, the distributed algorithm does best on the randomly generated benchmarks.
The distributed algorithm beats \copar{} in execution time by taking advantage of the large parallel compute power of the HPC cluster.
This comes at the cost of $\ensuremath{\mathcal{O}}\xspace(n^2)$ worst case complexity, but randomly generated benchmarks are more or less the \emph{best case} for the distributed algorithm, and require only a very small constant number of iterations, so that the effective complexity is $\ensuremath{\mathcal{O}}\xspace(n)$.
The real world benchmarks on the other hand, and especially the wlan benchmarks, need more iterations, which results in sequential \copar{} outperforming \distr{}.
In general, benchmarks with transition systems with long shortest path lengths will truly trigger the worst case of the $\ensuremath{\mathcal{O}}\xspace(n^2)$ algorithm,
and can make its execution time infeasibly long.
In summary, the benchmarks here are not chosen to be favourable to \copar{} and our algorithm, as they do not trigger the time complexity advantage to the full extent.
Nevertheless, our algorithm outperforms both \copar{} and \distr{} by a large margin. On the synthetic benchmarks (wta), roughly speaking, when \copar{} takes 10 minutes, \distr{} takes one minute, and our algorithm takes a second.
On the real-world wlan benchmark, the difference with \distr{} is greatest, with the largest benchmark requiring almost an hour on the HPC cluster for \distr{}, whereas our algorithm completes the benchmark in less than a second on a single thread.
Sequential \copar{} is unable to run the largest wta benchmarks, because it requires more memory than the 16GB limit.
The distributed algorithm is able to spread the required memory usage among 32 workers, thus staying under the 16GB limit per worker.
Our algorithm uses sufficiently less memory to be able to run all benchmarks on a single machine. In fact, it uses significantly less memory than \distr{} uses \emph{per worker}.
There are several reasons for this:
\begin{itemize}
\item Our algorithm does not require large hash tables.
\item Our algorithm uses a binary representation with simple in-memory dictionary compression (see the sketch after this list).
\item We operate directly on the composed functor instead of splitting states into pieces.
\end{itemize}
Even the largest benchmarks stay far away from the 16GB memory limit.
We are thus able to minimize large coalgebraic transition systems on cheap, consumer grade hardware.
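To illustrate the kind of in-memory dictionary compression meant above, the following minimal Python sketch interns repeated values so that each edge stores only a small integer id; it is an illustration under the assumption that the compression is applied to edge labels or weights, not the actual data structures of \textit{Boa}{}.
\begin{verbatim}
class Dictionary:
    """Interns repeated values (e.g. edge labels or monoid weights) so that
    each distinct value is stored once and edges keep a small integer id."""
    def __init__(self):
        self.ids = {}      # value -> id
        self.values = []   # id -> value

    def intern(self, value):
        if value not in self.ids:
            self.ids[value] = len(self.values)
            self.values.append(value)
        return self.ids[value]

    def lookup(self, idx):
        return self.values[idx]

# Edges (source, label, target) are stored with interned labels:
labels = Dictionary()
raw_edges = [(0, "a", 1), (1, "a", 2), (2, "b", 0), (0, "a", 2)]
edges = [(s, labels.intern(lbl), t) for (s, lbl, t) in raw_edges]
\end{verbatim}
Since input systems typically reuse a small set of distinct labels and weights, such interning keeps the per-edge footprint close to a single machine word.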
To assess the cost of genericity, we also compare with \textit{mCRL2}{}, a full toolset
for the verification of concurrent systems. Among many other tasks, \textit{mCRL2}{}
also supports minimization of transition systems by strong bisimilarity as part
of the \texttt{ltsconvert}
command\footnote{\url{https://www.mcrl2.org/web/user_manual/tools/release/ltsconvert.html}}
and even implements multiple algorithms for that, out of which the algorithm by Jansen \text{et al.}\xspace~\cite{JansenGKW20} turned out to be the fastest.
For benchmarking, we ran its implementation in \textit{mCRL2}{} and compared its run time with that of
\ours{}. As input files, we used the \emph{very large transition systems}
(VLTS) benchmark suite\footnote{\url{https://cadp.inria.fr/resources/vlts/}}.
Unfortunately, the benchmark suite is not available online in an open format,
so the files were converted with the CADP tool to the plain text \texttt{.aut}
format, supported by \textit{mCRL2}{} and our tool. The results are shown in \autoref{tab:benchmarksmcrl}.
The benchmark consists of two series of input files, \textit{cwi} and \textit{vasy}, whose file sizes ranged from a few KB to hundreds of MB (the biggest vasy file was 145MB zipped and the biggest cwi file was 630MB zipped).
Surprisingly, $\ours{}$ is significantly faster than the bisimilarity minimization implemented in \textit{mCRL2}{}. On all input files, \textit{mCRL2}{} and \ours{} agreed
on the size of the resulting partition, giving confidence in the correctness of the computed partition.
It should be noted that \textit{mCRL2}{} supports a wide range of bisimilarity notions (e.g.~branching bisimilarity), which our algorithm cannot cover.
\begin{table}[h!]
\centering
\begin{tabular}{>{\bfseries}cccccccc}
\multicolumn{4}{l}{\thead{benchmark}} & \multicolumn{2}{l}{\thead{time (s)}} & \multicolumn{2}{c}{\thead{memory (MB)}} \\
\toprule
\thead{type} & \thead{n} & \thead{\% red} & \thead{m} & \thead{\textit{mCRL2}{}} & \thead{\ours{}} & \thead{\textit{mCRL2}{}} & \thead{\ours{}} \\
\toprule
cwi & 142472 & 97\% & 925429 & 0.85 & 0.08 & 99 & 15 \\
cwi & 214202 & 63\% & 684419 & 0.63 & 0.15 & 111 & 16 \\
cwi & 371804 & 90\% & 641565 & 0.38 & 0.11 & 95 & 22 \\
cwi & 566640 & 97\% & 3984157 & 6.19 & 0.44 & 414 & 60 \\
cwi & 2165446 & 98\% & 8723465 & 10.72 & 1.52 & 978 & 166 \\
cwi & 2416632 & 96\% & 17605592 & 14.87 & 1.56 & 1780 & 247 \\
cwi & 7838608 & 87\% & 59101007 & 231.08 & 17.43 & 5777 & 816 \\
cwi & 33949609 & 99\% & 165318222 & 312.11 & 35.41 & 16698 & 2809 \\
\midrule
vasy & 52268 & 84\% & 318126 & 0.31 & 0.04 & 48 & 7 \\
vasy & 65537 & 0\% & 2621480 & 6.62 & 0.14 & 553 & 28 \\
vasy & 66929 & 0\% & 1302664 & 2.56 & 0.08 & 275 & 18 \\
vasy & 69754 & 0\% & 520633 & 0.93 & 0.04 & 128 & 11 \\
vasy & 83436 & 0\% & 325584 & 0.38 & 0.04 & 86 & 10 \\
vasy & 116456 & 0\% & 368569 & 0.47 & 0.06 & 105 & 15 \\
vasy & 164865 & 99\% & 1619204 & 1.92 & 0.23 & 162 & 22 \\
vasy & 166464 & 49\% & 651168 & 0.81 & 0.08 & 116 & 16 \\
vasy & 386496 & 99\% & 1171872 & 0.67 & 0.08 & 133 & 28 \\
vasy & 574057 & 99\% & 13561040 & 18.84 & 2.41 & 1277 & 141 \\
vasy & 720247 & 99\% & 390999 & 0.38 & 0.05 & 88 & 31 \\
vasy & 1112490 & 99\% & 5290860 & 8.86 & 0.78 & 579 & 93 \\
vasy & 2581374 & 0\% & 11442382 & 31.95 & 2.30 & 2691 & 285 \\
vasy & 4220790 & 67\% & 13944372 & 31.82 & 2.87 & 2293 & 311 \\
vasy & 4338672 & 40\% & 15666588 & 34.89 & 3.12 & 3160 & 372 \\
vasy & 6020550 & 99\% & 19353474 & 34.91 & 4.11 & 2124 & 534 \\
vasy & 6120718 & 99\% & 11031292 & 15.56 & 2.37 & 1297 & 325 \\
vasy & 8082905 & 99\% & 42933110 & 72.45 & 3.79 & 4313 & 719 \\
vasy & 11026932 & 91\% & 24660513 & 60.57 & 6.26 & 2768 & 661 \\
vasy & 12323703 & 91\% & 27667803 & 63.49 & 8.16 & 3103 & 740 \\
\bottomrule
\end{tabular}
\caption{
Time and memory usage comparison on the VLTS benchmark suite (for space reasons, we have excluded the very short running benchmarks).
The columns n, \%red, m give the number of states, the percentage of redundant states, and the number of edges, respectively.
The results are an average of 10 runs.
For \textit{mCRL2}{}, the default \texttt{bisim} option was used, which runs the JGKW algorithm \cite{JansenGKW20}.
}
\label{tab:benchmarksmcrl}
\end{table}
\section{Conclusions and Future Work}
\label{sec:Conclusion}
The coalgebraic approach enables generic tools for automata minimization,
applying to different types of input automata.
With our coalgebraic partition refinement algorithm,
implemented in our tool \textit{Boa}{},
we reduce the time and memory use compared to previous work.
This comes at the cost of an extra factor of $k$ (the outdegree of a state) in the time complexity compared to asymptotically optimal algorithms.
Though our asymptotic complexity is thus not as good as that of the fastest but less generic algorithms,
the evaluation shows the efficiency of our algorithm in practice.
We wish to expand the supported system equivalence notions. So far, our algorithm is applicable to functors on \ensuremath{\mathsf{Set}}\xspace. More advanced
equivalence and bisimilarity notions such as trace
equivalence~\cite{SILVA2011291,HasuoJS07}, branching bisimulations, and others from the
linear-time-branching spectrum~\cite{Glabbeek01}, can be understood
coalgebraically using graded
monads~\cite{DorschMS19,MiliusPS15}, corresponding to changing the base
category of the functor from \ensuremath{\mathsf{Set}}\xspace{} to, for example, the Eilenberg-Moore~\cite{SilvaBBR13} or
Kleisli~\cite{HasuoJS07} category of a monad.
For branching bisimulation,
efficient algorithms exist~\cite{JansenGKW20,GrooteV90}, whose ideas might be embedded into our framework.
We conjecture that it is possible to adapt the algorithm to nominal
sets, in order to minimize (orbit-)finite coalgebras there~\cite{KozenEA15,msw16,skmw17,SuppSet23}.
Up-to techniques provide another successful line of research for deciding
bisimilarity. Bonchi and Pous~\cite{BonchiP13} provide a construction
for deciding bisimilarity of two particular states of interest, where the
transition structure is unfolded lazily while the reasoning evolves. By
computing the partitions in a similarly lazy way, the performance of our minimization algorithm can hopefully be improved even further.
\begin{acks}
We thank
Hans-Peter Deifel,
Stefan Milius,
Jurriaan Rot,
Hubert Garavel,
Sebastian Junges,
Marck van der Vegt,
Joost-Pieter Katoen,
and Frits Vaandrager
for helpful discussions and the anonymous referees for their valuable feedback for improving the paper.
Thorsten Wißmann was supported by the NWO TOP project 612.001.852.
\end{acks}
\label{maintextend}
\ifthenelse{\boolean{dropappendix}}{}{
\appendix
\ifthenelse{\boolean{proofsinappendix}}{
\section{Omitted Proofs}
\closeoutputstream{proofstream}
\input{\jobname-proofs.out}
}{
}
}
\end{document}
\begin{document}
\title{\bf\Large The Microstructure of Stochastic Volatility Models with Self-Exciting Jump Dynamics\thanks{Financial support from the Alexander-von-Humboldt-Foundation is gratefully acknowledged.}}
\author{Ulrich Horst\footnote{Department of Mathematics and School of Business and Economics, Humboldt-Universit\"at zu Berlin, Unter den Linden 6, 10099 Berlin; email: [email protected]}\quad\ and\quad Wei Xu\footnote{Department of Mathematics, Humboldt-Universit\"at zu Berlin, Unter den Linden 6, 10099 Berlin; email: [email protected]}}
\maketitle
\begin{abstract}
We provide a general probabilistic framework within which we establish scaling limits for a class of continuous-time stochastic volatility models with self-exciting jump dynamics. In the scaling limit, the joint dynamics of asset returns and volatility is driven by independent Gaussian white noises and two independent Poisson random measures that capture the arrival of exogenous shocks and the arrival of self-excited shocks, respectively. Various well-studied stochastic volatility models with and without self-exciting price/volatility co-jumps are obtained as special cases under different scaling regimes. We analyze the impact of external shocks on the market dynamics, especially their impact on jump cascades, and show in a mathematically rigorous manner that many small external shocks may trigger endogenous jump cascades in asset returns and stock price volatility.
\end{abstract}
{\bf AMS Subject Classification:} 60F17; 60G52; 91G99

{\bf Keywords:} {stochastic volatility, self-exciting jumps, Hawkes process, branching process, affine model}
\section{Introduction}
Affine stochastic volatility models have been extensively investigated in the mathematical finance and financial economics literature in the last decades.
In the classical Heston \cite{Heston1993} model the volatility process follows a square-root mean-reverting Cox-Ingersoll-Ross \cite{CIR1985} process.
The Heston model introduces a dynamics for the underlying asset that can take into account the asymmetry and excess kurtosis that are typically observed in financial asset returns and provides analytically tractable option pricing formulas. However, it is unable to capture large volatility movements. To account for large volatility movements, the model has been extended to jump-diffusion models by numerous authors. Bates \cite{Bates1996} adds a jump component in the asset price process. Barndorff-Nielsen and Shephard \cite{Barndorff-NielsenShephard2001} consider volatility processes of Ornstein-Uhlenbeck type driven by L\'evy processes. Affine models allowing for jumps in prices and volatilities are considered in Bakshi et al.~\cite{BakshiCaoChen1997}, Bates \cite{Bates1996}, Duffie et al. \cite{DuffiePanSingleton2000}, Pan \cite{Pan2002} and Sepp \cite{Sepp2008}, among many others. Empirical evidence for the presence of (negatively correlated) co-jumps in returns and volatility is given in, e.g.~Eraker \cite{Eraker2004}, Eraker et al. \cite{ErakerJohannesPolson2003}, and Jacod and Todorov \cite{JacodTodorov2010}.
In a standard jump model with arrival rates calibrated to historical data, jumps are inherently rare. Even more unlikely are patterns of multiple jumps in close succession over hours or days. Large moves, however, tend to appear in clusters. For example, as reported in A\"{\i}t-Sahalia et al.~\cite{SahaliaCacho-DiazLaeven2015} ``from mid-September to mid-November 2008, the US stock market jumped by more than 5\% on 16 separate days. Intraday fluctuations were even more pronounced: during the same two months, the range of intraday returns exceeded 10\% during 14 days.''
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{Chart22.pdf}
\caption{CBOE VIX index based on daily closing values from Sep. 12 to Nov. 20, 2008. Orange dots indicate a jump of 2.5\% or more from previous day closing value.}
\end{center}
\end{figure}
{Jump clusters can also be observed in the volatility. Figure 1 displays the evolution of the Chicago Board Options Exchange VIX index and indicates up movements of more than 2.5\% in daily closing values for the above-mentioned period; on 26 out of 49 days, the index jumped up by 2.5\% or more.} Bates \cite{Bates2019} argues that the dramatic decline in futures prices on Monday, October 19, 1987, from the previous Friday’s closing value ``was the result of an estimated 34 jumps'' in volatility.
Jump clusters over time have been discussed in the financial econometrics literature by many authors including A\"{\i}t-Sahalia et al. \cite{SahaliaCacho-DiazLaeven2015}, Fulop et al. \cite{FulopLiYu2015}, Lee and Mykland \cite{LeeMykland2008}, Maheu and McCurdy \cite{MaheuMcCurdy2004}, and Yu \cite{Yu2004}. Among the most relevant papers for our work are the ones by Andersen et al. \cite{AndersenFusariTodorov2015} and Bates \cite{Bates2019}. They consider continuous-time models of self-exciting price/volatility co-jumps in intradaily stock returns and volatility. Every small intradaily jump substantially increases the probability of more intradaily cojumps in volatility and returns, and these multiple price jumps can accumulate into the major outliers in daily returns. Bates \cite{Bates2019} finds {``that multifactor models with both exogenous and self-exciting but short-lived volatility spikes'' substantially improve model fits both in-sample and out-of-sample}. He also shows that such models provide more accurate predictions of implied volatility. A similar conclusion on implied volatilities was reached in the recent work by Jiao et al. \cite{JiaoMaScottiZhou2018}.
In order to account for self-exciting jump dynamics one needs to leave the widely applied class of Lévy jump processes. Lévy processes have independent increments and hence do not allow for any type of serial dependence. Hawkes processes are capable of displaying mutually exciting jumps. Originally introduced by Hawkes \cite{Hawkes1971a, Hawkes1971b} to model the occurrence of seismic events, Hawkes processes have received considerable attention in the financial mathematics and economics literature as a powerful tool to model financial time series in recent years; we refer to Bacry et al.~\cite{BacryMastromatteoMuzy2015} and references therein for reviews on Hawkes processes and their applications to science and finance. On the more mathematical side, a series of functional limit theorems and large deviation principles for Hawkes processes and marked Hawkes processes has recently been established by, e.g. Bacry et al.~\cite{BacryDelattreHoffmannMuzy2013}, Gao and Zhu \cite{GaoZhu2018b,GaoZhu2018}, and Karabash and Zhu \cite{KarabashZhu2015}. Horst and Xu \cite{HorstXu2019a} introduced Hawkes random measures in order to study limit theorems for limit order book models with self-exciting cross-dependent order flow. In \cite{HorstXu2019b} they established functional limit theorems for marked Hawkes point measures with homogeneous immigration under a light-tailed condition on the arrival dependencies of different events. Under a light-tailed condition, Jaisson and Rosenbaum \cite{JaissonRosenbaum2015} proved that the rescaled intensity process of a Hawkes process converges weakly to a Feller diffusion and that the rescaled point process converges weakly to the integrated diffusion. Under a heavy-tailed condition they proved that the rescaled point process converges weakly to the integral of a rough fractional diffusion; see \cite{JaissonRosenbaum2016}. Their result provides a microscopic foundation for the rough Heston model; see \cite{ElEuchRosenbaum2019a,ElEuchRosenbaum2019b}.
Motivated by the recent empirical works on stochastic volatility models with self-exciting jumps, we provide a unified microscopic foundation for stochastic volatility models with price/volatility co-jumps based on Hawkes processes. Many of the existing jump diffusion stochastic volatility models including the classical Heston model \cite{Heston1993}, the Heston model with jumps \cite{Bates1996, DuffiePanSingleton2000, Pan2002}, the OU-type volatility model \cite{Barndorff-NielsenShephard2001}, the multi-factor model with self-exciting volatility spikes \cite{Bates2019} and the alpha Heston model \cite{JiaoMaScottiZhou2018} are obtained as scaling limits under different scaling regimes. As such, our work contributes to the rich literature on scaling limits for financial market models\footnote{Much of the earlier work including \cite{Foellmer1994, FoellmerSchweizer1993,Horst2005} focussed on the temporary occurrence of bubbles, due to imitation and contagion effects rather than volatility. More recently, the focus seems to have shifted to order book models and volatility.} as well as to the growing literature on Hawkes processes by establishing novel scaling limits for Hawkes systems.
Our analysis uses the link between Hawkes processes and continuous-state branching processes (CB-processes). The extinction behavior of CB-processes as analyzed in Grey \cite{Grey1974} is of particular interest to us as it allows us to study the impact duration of external shocks on order flow. Branching processes are a particular class of affine processes. Affine processes have been widely used in the financial mathematics literature. Duffie et al. \cite{DuffieFilipovicSchachermayer2003} define an affine process as a time-homogeneous Markov process, whose characteristic function is the exponential of an affine function of the state vector under a regularity assumption. They showed that this type of process unifies the concepts of continuous-state branching processes with immigration (CBI-processes) and Ornstein-Uhlenbeck type processes (OU-processes). They also provide a rigorous mathematical approach to affine processes, including the characterization of affine processes in terms of admissible parameters (comparable to the characteristic triplet of a L\'evy process). Dawson and Li \cite{DawsonLi2006} provide a construction of an affine process as the unique strong solution to a system of stochastic differential equations with non-Lipschitz coefficients and Poisson-type integrals over some random sets. Keller-Ressel \cite{Keller-Ressel2011} considers the long-term behavior of affine stochastic volatility models, including an expression for the invariant distribution. He also provides explicit expressions for the time at which a moment of given order becomes infinite. We shall repeatedly draw on the results in \cite{DawsonLi2006,DuffieFilipovicSchachermayer2003,Keller-Ressel2011}.
We consider a microstructure model of a financial market with two types of market orders that we refer to as {\sl exogenous orders} and {\sl induced orders}, respectively. We think of {exogenous orders} as exogenous shocks; they arrive according to an exogenous Poisson dynamics. Exogenous orders generate a random environment for the arrival of induced orders. Induced orders arrive according to a marked Hawkes process with exponential kernel. The marks represent the magnitudes by which the orders change prices as well as their impact on the arrival rate of future induced order flow. In particular, jumps in prices and/or volatility may trigger cascades of child-jumps and hence volatility clusters. Jump cascades are particularly likely to occur after large exogenous shocks. Figure 2 shows the evolution of the CBOE VIX index from 1990 to 2019; the time series clearly displays the occasional occurrence of exogenous shocks. Apart from the already mentioned clustering of jumps during the height of the global financial crisis there seem to be further jump clusters, for instance after the 1998 Russian and the 2011 Eurozone debt crises.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{VIX02.pdf}
\caption{CBOE VIX index based on daily closing values from 1999-2019.}
\end{center}
\end{figure}
Our first main contribution is to analyze the genealogical decomposition of the benchmark model, and to analyze the impact duration of large external shocks on induced order flow. To this end, we provide a decomposition of the benchmark model as the sum of a sequence of independent self-enclosed sub-models that describe the impact of exogenous shocks on induced order flow. We then provide four regimes for the long-run impact of an exogenous shock on the market dynamics. The impact of an exogenous shock will last forever with positive probability in the supercritical case and vanish at an exponential rate in the subcritical case. In the critical case the impact duration of external shocks is heavy-tailed despite the exponential decay of the impact of individual events on future dynamics.
Our second main contribution is a scaling limit for the benchmark model when the frequency of order arrivals tends to infinity and the impact of an individual order on the market dynamics tends to zero. Depending on the choice of scaling parameters, various well-known jump-diffusion stochastic volatility models are obtained in the scaling limit. Different from the arguments in \cite{JaissonRosenbaum2015} on the convergence of nearly unstable Hawkes processes, we give sufficient conditions on the model parameters that guarantee the existence of a non-degenerate scaling limit based on the link between Hawkes processes and branching particle systems. Loosely speaking, we require the convergence of the sequence of branching mechanisms. From this, we conclude that the sequence of generators converges to a limiting generator when restricted to exponential functions. Since the linear span of exponential functions is not dense in the domain of the limit generator, methods and techniques based on the convergence of generators cannot be applied to establish the convergence of the rescaled sequence of market models. Instead, we use general convergence results for infinite dimensional stochastic integrals established in Kurtz and Protter \cite{KurtzProtter1996}. Their methods have previously been applied to prove diffusion approximations for limit order book models in \cite{HorstKreher2019}. We prove that the rescaled sequence of market models converges in distribution to the unique solution of a stochastic differential equation driven by two independent Gaussian white noise processes, a Poisson random measure that describes the arrivals of large exogenous shocks and an independent Poisson random measure that describes the dynamics of endogenously induced self-excited jumps.
Our third main contribution is to analyze the genealogical decomposition of the limiting jump-diffusion volatility model. We provide an economically intuitive decomposition in terms of three sub-models. The first sub-model is self-enclosed and captures the impact of all events prior to time $0$ on future order flow. The second sub-model describes the cumulative impact of the exogenous shocks of positive magnitude on the market dynamics; this sub-model can be further decomposed into a sequence of self-enclosed sub-models that capture the impact of individual shocks. The third sub-model is the most interesting one. This self-enclosed sub-model describes the impact of exogenous shocks ``of insignificant magnitude'' on the market dynamics. Specifically, in the scaling limit large exogenous shocks translate into jumps while vanishingly small exogenous shocks translate in a well defined sense into a non-trivial mean-reversion level of the stochastic volatility process that keeps the volatility bounded away from zero at all times. Due to the dependence of the jump arrivals on the volatility process, this shows in a mathematically rigorous manner how many small \textsl{exogenous} events may trigger \textsl{endogenous} jump cascades. Our decomposition is very different from that in \cite{JiaoMaScottiZhou2018}. They decompose the volatility process into a truncated variance process plus a variance process that captures all jumps larger than some threshold. Unlike ours, their sub-models are not self-enclosed and do not classify jumps by their origin.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{Figure03.pdf}
\caption{Path simulation from our limit model with (blue trajectory) and without (orange trajectory) external shocks. Red crosses indicate jumps triggered by the first external shocks; light blue dots indicate jumps triggered endogenously.}
\end{center}
\end{figure}
Finally, we analyze the distribution of jumps of different magnitudes and different origins in the scaling limit. We give an explicit expression for the joint distribution of the number and magnitudes of jumps induced by an exogenous shock of given size and the time of the last induced jump in terms of the unique continuous solution to a Riccati equation with singular initial condition. We also show that exogenously (by exogenous shocks) and endogenously (by the volatility) triggered jump cascades share an important characteristic. In both cases, on average the \textsl{proportion of the total number of jumps} triggered by a given time is the same\footnote{In other words, in both cases, on average 80\% of the total number of shocks triggered are triggered by the same time.}. The \textsl{number} of jumps triggered by an external shock, however, scales linearly in the magnitude of that shock. As a result, jump cascades triggered by external shocks will usually comprise more jumps than those triggered endogenously and thus trigger denser jump clusters. Figure 3 illustrates this effect. It shows the evolution of a sample path of our model with (blue) and without (orange) external shocks. There are two external shocks; the times of the jumps originating from the external shocks are indicated in red. Light blue dots indicate the times of endogenously triggered jumps. For our choice of parameters they are more evenly distributed across time.
The remainder of this paper is organized as follows. Our benchmark financial market model is introduced in Section \ref{LFG}. Its genealogical structure is analyzed in Section \ref{genealogy}. The scaling result is established in Section \ref{HFHM}. In this section we also show how various well-studied stochastic volatility models can be obtained as scaling limits. The genealogical structure of the scaling limit is analyzed in Section \ref{gen-limit}.
\section{Hawkes market model} \label{LFG}
\setcounter{equation}{0}
In this section, we introduce a benchmark stochastic volatility model for which we derive a scaling limit in a later section. There are two types of buy/sell orders in our model that we refer to as {\sl exogenous orders} and {\sl induced orders}, respectively. Exogenous orders are orders that arrive according to an exogenous Poisson dynamics (``exogenous shocks''). They generate a random environment for the arrival of induced orders. Induced orders will arrive at much higher frequencies than exogenous orders and follow a self-exciting dynamics.
In what follows all random variables are defined on a common probability space $(\Omega,\mathscr{F}, \mathbf{P})$ endowed with filtration $\{\mathscr{F}_t:t\geq 0\}$ that satisfies the usual hypotheses.
\subsection{The benchmark model}
The arrivals of exogenous/induced market buy/sell orders are recorded by $(\mathscr{F}_t)$-random point processes
$\{N^{\mathtt{e}/\mathtt{i},\mathtt{b}/\mathtt{s}}_t:t\geq 0\}$ with respective arrival times $\{\tau_k^{\mathtt{e}/\mathtt{i},\mathtt{b}/\mathtt{s}}:k=1,2,\cdots\}$. We denote by $\{J^{\mathtt{e}/\mathtt{i},\mathtt{b}/\mathtt{s}}_k:k=1,2\cdots\}$ the sequences of price changes (in ticks) resulting from exogenous/induced market buy/sell orders. For any time $t\geq 0$, the (logarithmic) price $P_t$ is given by
\begin{eqnarray}\label{eqn2.01}
P_t\!\!\!&=\!\!\!& P_0+
\sum_{k=1}^{N_t^{\mathtt{e}, \mathtt{b}}}\delta\cdot J_k^{\mathtt{e}, \mathtt{b}}
-\sum_{k=1}^{N_t^{\mathtt{e}, \mathtt{s}}}\delta \cdot J_k^{\mathtt{e}, \mathtt{s}}
+\sum_{k=1}^{N_t^{\mathtt{i}, \mathtt{b}}}\delta\cdot J_k^{\mathtt{i}, \mathtt{b}}
-\sum_{k=1}^{N_t^{\mathtt{i}, \mathtt{s}}}\delta \cdot J_k^{\mathtt{i}, \mathtt{s}},
\end{eqnarray}
where $P_0$ is the price at time $0$ and $\delta$ is the tick size, i.e. the minimum price movement.
We now formulate three assumptions on the order flow dynamics that greatly simplify the subsequent analysis but that do not change our main results on the occurrence of jumps and jump-cascades. First, we assume that price increments are conditionally independent.
\begin{assumption}
Price changes are described by independent sequences $\{J^{\mathtt{e}/\mathtt{i},\mathtt{b}/\mathtt{s}}_k:k=1,2,\cdots\}$ of i.i.d. $\mathbb{Z}_+$-valued random variables.
\end{assumption}
Second, we assume that price increments are uncorrelated.
\begin{assumption}
For any $l,l'\in \{\mathtt{e},\mathtt{i}\}$ and $j,j'\in\{\mathtt{b},\mathtt{s}\}$,
\begin{eqnarray*}
\mathbf{E}[dN_t^{l,j}\cdot dN^{l',j'}_t|\mathscr{F}_{t-}]=0,\quad \mbox{if }l\neq l' \mbox{\ or } j\neq j'.
\end{eqnarray*}
\end{assumption}
Third, we assume that exogenous buy/sell orders arrive according to independent Poisson processes.
\begin{assumption}\label{AssumptionPoisson}
$\{N^{\mathtt{e},\mathtt{b}}_t:t\geq 0\}$ and $\{N^{\mathtt{e},\mathtt{s}}_t:t\geq 0\}$ are two independent Poisson processes with rates $p_{\mathtt{e}}^\mathtt{b}$ and $p_{\mathtt{e}}^\mathtt{s} = 1-p_{\mathtt{e}}^\mathtt{b}$ respectively.
\end{assumption}
Induced orders arrive according to Hawkes processes with $\beta$-exponential kernel for some $\beta>0$. We assume that both past exogenous and past induced orders increase the arrival intensity of induced orders.
\begin{assumption}\label{AssumptionHawkes}
The processes $\{N^{\mathtt{i},\mathtt{b}}_t:t\geq 0\}$
and $\{N^{\mathtt{i},\mathtt{s}}_t:t\geq 0\}$ are marked Hawkes processes
with intensities $p_{\mathtt{i}}^\mathtt{b} V_{t-}dt$ and $p_{\mathtt{i}}^\mathtt{s} V_{t-}dt$ respectively, where $p_{\mathtt{i}}^\mathtt{b} + p_{\mathtt{i}}^\mathtt{s} = 1$, and where the intensity process $\{V_t:t\geq 0\}$ is given by
\begin{eqnarray}\label{eqn2.02}
V_t\!\!\!&=\!\!\!&\mu_t+\sum_{ j\in\{\mathtt{b},\mathtt{s}\}}\sum_{k= 1}^{N_t^{\mathtt{e},j}}X_k^{\mathtt{e},j}e^{-\beta(t-\tau^{\mathtt{e},j}_k)}+\sum_{ j\in\{\mathtt{b},\mathtt{s}\}}\sum_{k= 1}^{N_t^{\mathtt{i},j}}X_k^{\mathtt{i},j}e^{-\beta(t-\tau^{\mathtt{i},j}_k)}, \quad t \geq 0.
\end{eqnarray}
Here, $\{\mu_t:t\geq 0\}$ is an $\mathscr{F}_0$-measurable functional-valued random variable that represents the impact of all the orders that arrived prior to time $0$ on the arrival rate of future orders, and $\{X_k^{\mathtt{e}/\mathtt{i},\mathtt{b}/\mathtt{s}}:k=1,2,\cdots\}$ are sequences of nonnegative random variables that represent the impact of each order arrival on the intensity process.
\end{assumption}
We now provide a stochastic integral representation of the price process in terms of marked Hawkes point measures as introduced in Horst and Xu \cite{HorstXu2019b}. To this end, we associate with each order a mark from the mark space $\mathbb{U}=\mathbb{R}\times \mathbb{R}_+$. A mark comprises the amount by which the order changes the price (in ticks) along with its impact on the arrival intensity of future orders. Specifically, associated with the exogenous/induced order arrival process is a sequence $\{\xi^{\mathtt{e/i}}_k:k=1,2,\cdots\}$ of independent and identically distributed random variables, where $\xi^{\mathtt{e/i}}_k:=(\xi^{\mathtt{e/i}}_{k,P},\xi^{\mathtt{e/i}}_{k,V})$. The quantity $\xi^{\mathtt{e/i}}_{k,P}$ specifies the movement of the price caused by the $k$-th exogenous/induced order, and $\xi^{\mathtt{e/i}}_{k,V}$ specifies the contribution to the intensity process. The law of $\xi^{\mathtt{e/i}}_k$ is given by
\begin{eqnarray}\label{eqn2.03}
\nu_\mathtt{e/i}(du)\!\!\!&:=\!\!\!& p_{\mathtt{e/i}}^\mathtt{b}\cdot \mathbf{P}\left\{(J^{\mathtt{e/i},\mathtt{b}},X^{\mathtt{e/i},\mathtt{b}})\in (du_1,du_2)\right\}
+p_{\mathtt{e/i}}^\mathtt{s}\cdot \mathbf{P}\left\{(-J^{\mathtt{e/i},\mathtt{s}},X^{\mathtt{e/i},\mathtt{s}})\in (du_1,du_2)\right\}.
\end{eqnarray}
Let $\{\tau^{\mathtt{e/i}}_k:k=1,2,\cdots \}:= \{\tau^{\mathtt{e/i},\mathtt{b}}_i:i=1,2,\cdots \}\cup\{\tau^{\mathtt{e/i},\mathtt{s}}_i:i=1,2,\cdots \}$ be the arrival times of exogenous/induced (buy and sell) orders.
In view of Assumption~\ref{AssumptionPoisson}, we can associate with the sequence $\{(\tau_k^{\mathtt{e}},\xi^{\mathtt{e}}_k):k=1,2,\cdots\}$ a Poisson point measure
\begin{eqnarray}\label{eqn2.04}
N_{\mathtt{e}}(dt,du) := \sum_{k=1}^\infty \mathbf{1}_{\{ \tau_k^{\mathtt{e}}\in dt, \xi^{\mathtt{e}}_k\in du \}}
\end{eqnarray}
on $[0,\infty)\times \mathbb{U}$ with intensity $dt\nu_\mathtt{e}(du)$. Likewise, associated with the sequence $\{(\tau_k^{\mathtt{i}},\xi^{\mathtt{i}}_k):k=1,2,\cdots\}$ is an $(\mathscr{F}_t)$-random point measure
\begin{eqnarray}\label{eqn2.05}
N_{H}(dt,du) := \sum_{k=1}^\infty \mathbf{1}_{\{ \tau_k^{\mathtt{i}}\in dt, \xi^{\mathtt{i}}_k\in du \}}
\end{eqnarray}
on $[0,\infty)\times \mathbb{U}$ with intensity $V_{t-}dt\nu_\mathtt{i}(du)$. From Horst and Xu \cite{HorstXu2019b}, we know that $N_{H}(dt,du)$ is a \textit{marked Hawkes point measure with homogeneous immigration}. In particular, on an extension of the original probability space we can define a time-homogeneous Poisson random measure $N_\mathtt{i}(ds,du,dz)$ on $(0,\infty)\times \mathbb{U}\times \mathbb{R}_+$ with intensity $ds\nu_\mathtt{i}(du)dz$ that is independent of $N_\mathtt{e}(ds,du)$ and satisfies
\begin{eqnarray}\label{eqn2.07}
\int_0^t \int_{\mathbb{U}} f(u) N_H(ds,du)\!\!\!&=\!\!\!& \int_0^t \int_{\mathbb{U}}\int_0^{V_{s-}} f(u)N_\mathtt{i}(ds,du,dx),\quad f\in B(\mathbb{U}),
\end{eqnarray}
{where $B(\mathbb{U})$ is the collection of bounded functions on $\mathbb{U}$.}
As a result, the price process $\{ P_t:t\geq 0 \}$ and the intensity process $\{ V_t:t\geq 0 \}$ can be represented as
\begin{eqnarray}
P_t\!\!\!&=\!\!\!& P_0 + \int_0^t \int_\mathbb{U} \delta\cdot u_1 N_\mathtt{e}(ds,du) + \int_0^t \int_{\mathbb{U}}\int_0^{V_{s-}}\delta\cdot u_1 N_\mathtt{i}(ds,du,dx),\label{eqn2.08} \\
V_t\!\!\!&=\!\!\!& \mu_t + \int_0^t \int_\mathbb{U} u_2 \cdot e^{-\beta(t-s)}N_\mathtt{e}(ds,du) + \int_0^t \int_{\mathbb{U}}\int_0^{V_{s-}}u_2 \cdot e^{-\beta(t-s)} N_\mathtt{i}(ds,du,dx). \label{eqn2.09}
\end{eqnarray}
The preceding integral representation can be further simplified if we assume that the impact of all the orders that arrived prior to time $0$ on the arrival rate of future events decreases exponentially.
\begin{assumption}
The process $\{\mu_t : t \geq 0\}$ satisfies $\mu_t:=V_0 \cdot e^{-\beta t}$ for any $t \geq 0$.
\end{assumption}
Under the above assumptions, and using the fact that $ e^{-\beta t}= 1-\int_0^t \beta e^{-\beta s}ds$,
the model (\ref{eqn2.08})-(\ref{eqn2.09}) can be rewritten as
\begin{eqnarray}
P_t\!\!\!&=\!\!\!& P_0+\int_0^t \int_{\mathbb{U}}\delta\cdot u_1N_{\mathtt{e}}(ds,du)+\int_0^t \int_{\mathbb{U}} \int_0^{V_{s-}}\delta\cdot u_1N_\mathtt{i}(ds,du,dx),\label{eqn2.10}\\
V_t\!\!\!&=\!\!\!& V_0-\int_0^t \beta V_sds+\int_0^t\int_{\mathbb{U}} u_2N_{\mathtt{e}}(ds,du)+\int_0^t\int_{\mathbb{U}} \int_0^{V_{s-}} u_2N_\mathtt{i}(ds,du,dx).\label{eqn2.11}
\end{eqnarray}
By Theorem~6.2 in \cite{DawsonLi2006}, there exists a unique $\mathbb{U}$-valued strong solution $\{ (P_t,V_t):t\geq 0 \}$
to (\ref{eqn2.10})-(\ref{eqn2.11}). We call this solution the \textit{Hawkes market model} with parameter $(\delta,\beta;\nu_{\mathtt{e}/\mathtt{i}})$. The solution is a strong Markov process whose infinitesimal generator
$\mathcal{A}_\delta$ acts on any function $f\in C^2(\mathbb{U})$ according to
\begin{eqnarray}\label{eqn2.12}
\mathcal{A}_\delta f(p,v)
\!\!\!&=\!\!\!&
-v\cdot\beta\frac{\partial }{\partial v}f(p,v)
+v\cdot\int_{\mathbb{U}}\big[f(p+\delta\cdot u_1,v+u_2)-f(p,v)\big]\nu_\mathtt{i}(du) \cr
\!\!\!&\!\!\!&
+\int_{\mathbb{U}}\big[f(p+\delta\cdot u_1,v+u_2)-f(p,v)\big]\nu_\mathtt{e}(du).
\end{eqnarray}
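To make the dynamics (\ref{eqn2.10})-(\ref{eqn2.11}) concrete, the following minimal Python sketch simulates one path by exploiting the exponential kernel; the samplers \texttt{draw\_exo\_mark} and \texttt{draw\_ind\_mark} are placeholders for draws from the mark laws $\nu_{\mathtt{e}}$ and $\nu_{\mathtt{i}}$ (with the sign of $u_1$ encoding buy versus sell), and the sketch is purely illustrative.
\begin{verbatim}
import math
import random

def simulate_hawkes_market(T, P0, V0, beta, delta, draw_exo_mark, draw_ind_mark):
    """One path of (P_t, V_t) from (2.10)-(2.11) up to time T.

    Exogenous orders arrive with total rate 1 (Assumption 3); induced orders
    are generated by thinning, which is exact here because the intensity
    only decays between events."""
    t, P, V = 0.0, P0, V0
    path = [(t, P, V)]
    while True:
        w_exo = random.expovariate(1.0)                       # next exogenous order
        w_ind = random.expovariate(V) if V > 0 else math.inf  # induced candidate
        w = min(w_exo, w_ind)
        if t + w > T:
            break
        t += w
        V *= math.exp(-beta * w)                              # decay of past impact
        if w_exo <= w_ind:                                    # exogenous shock
            u1, u2 = draw_exo_mark()
            P, V = P + delta * u1, V + u2
        elif random.random() <= math.exp(-beta * w):          # accept thinned candidate
            u1, u2 = draw_ind_mark()
            P, V = P + delta * u1, V + u2
        path.append((t, P, V))
    return path
\end{verbatim}
Plugging in draws from the bivariate exponential or bivariate Pareto distributions of the next subsection (negating the price component with probability $p^{\mathtt{s}}_{\mathtt{e}/\mathtt{i}}$) recovers the exponential and Pareto market models defined there.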
\subsection{Examples}
We now consider three specific examples. Within each example we clarify when co-jumps in prices and volatilities are negatively correlated. We revisit all three examples when analyzing scaling limits.
We say that a $(\mathbb{Z}_+\times\mathbb{R}_+)$-valued random variable $\xi=(\xi_1,\xi_2)$
has a {\it bivariate exponential distribution} ${\rm BVE}(\boldsymbol{\lambda})$
with parameter $\boldsymbol{\lambda}:=(\lambda_1,\lambda_2,\lambda_{12})\in\mathbb{R}_+^3$
if for any $(k,x)\in \mathbb{Z}_+\times\mathbb{R}_+$,
\begin{eqnarray*}
\mathbf{P}\{\xi_1\geq k, \xi_2\geq x\}= \exp\{-\lambda_1 (k-1)-\lambda_2 x-\lambda_{12}((k-1)\vee x)\}.
\end{eqnarray*}
The first moment $\mathrm{M}^\mathbf{e}_{1}(\boldsymbol{\lambda}):=(\mathrm{M}^\mathbf{e}_{1,k}(\boldsymbol{\lambda}))_{k=1,2}$ and the second moment $\mathrm{M}^\mathbf{e}_{2}(\boldsymbol{\lambda}):=(\mathrm{M}^\mathbf{e}_{2,jk}(\boldsymbol{\lambda}))_{j,k=1,2}$ are given by
\begin{eqnarray}
\mathrm{M}^\mathbf{e}_{1,1}(\boldsymbol{\lambda}):= \mathbf{E}[\xi_1]= \frac{1}{1-e^{-\lambda_1-\lambda_{12}}},
\quad
\mathrm{M}^\mathbf{e}_{1,2}(\boldsymbol{\lambda}):= \mathbf{E}[\xi_2]= \frac{1}{\lambda_2+\lambda_{12}}
\end{eqnarray}
and
\begin{eqnarray}
\!\!\!&\!\!\!&\mathrm{M}^\mathbf{e}_{2,11}(\boldsymbol{\lambda}):= \mathbf{E}[|\xi_1|^2]= \frac{e^{-\lambda_1-\lambda_{12}}+(2e^{-\lambda_1-\lambda_{12}}-1)^3}{(1-e^{-\lambda_1-\lambda_{12}})^2},
\quad
\mathrm{M}^\mathbf{e}_{2,22}(\boldsymbol{\lambda}):= \mathbf{E}[|\xi_2|^2]=\frac{2}{\lambda_2+\lambda_{12}},\cr
\!\!\!&\!\!\!&\mathrm{M}^\mathbf{e}_{2,12}(\boldsymbol{\lambda}):= \mathbf{E}[\xi_1\xi_2]
=\frac{1/\lambda_2}{1-e^{-(\lambda_1+\lambda_{12})}}
+ \frac{1/\lambda_2-1/(\lambda_2+\lambda_{12})}{1-e^{-(\lambda_1+\lambda_{12}+\lambda_{12})}}.
\end{eqnarray}
This implies that
\begin{eqnarray}\label{eqn2.26}
\mathrm{C}_\mathbf{e} (\boldsymbol{\lambda}):={\rm Cov}(\xi_1,\xi_2)
\!\!\!&=\!\!\!& \frac{1/\lambda_2-1/(\lambda_2+\lambda_{12})}{1-e^{-(\lambda_1+\lambda_{12})}}
+ \frac{1/\lambda_2-1/(\lambda_2+\lambda_{12})}{1-e^{-(\lambda_1+\lambda_{12}+\lambda_{12})}} \geq 0
\end{eqnarray}
with equality if $\lambda_{12}=0$, which holds if and only if $\xi_1$ and $\xi_2$ are independent.
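For instance, in the independent case $\lambda_{12}=0$ the survival function factorises into $e^{-\lambda_1(k-1)}\cdot e^{-\lambda_2 x}$, so that $\xi_1$ is geometric, $\xi_2$ is exponential, $\mathrm{M}^\mathbf{e}_{1,1}(\boldsymbol{\lambda})=1/(1-e^{-\lambda_1})$, $\mathrm{M}^\mathbf{e}_{1,2}(\boldsymbol{\lambda})=1/\lambda_2$ and $\mathrm{C}_\mathbf{e}(\boldsymbol{\lambda})=0$, consistent with the formulas above.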
\begin{example}[\rm Exponential market model]\label{Ecample-Exp}
For $j\in\{\mathtt{e},\mathtt{i} \}$ and some constants $\boldsymbol{\lambda}^\mathtt{b}_j,\boldsymbol{\lambda}^\mathtt{s}_j\in\mathbb{R}_+^3$, let
\begin{eqnarray}\label{eqn2.28}
(J^{j,\mathtt{b}},X^{j,\mathtt{b}}) \overset{\rm d}= {\rm BVE}(\boldsymbol{\lambda}_j^\mathtt{b})
\quad\mbox{and}\quad
(J^{j,\mathtt{s}},X^{j,\mathtt{s}}) \overset{\rm d}={\rm BVE}(\boldsymbol{\lambda}_j^\mathtt{s}).
\end{eqnarray}
In this case, we call the market model (\ref{eqn2.10})-(\ref{eqn2.11}) {\rm exponential market model} with parameter
$(\delta,\beta, p^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}}; \boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}})$.
From (\ref{eqn2.26}), the jumps in prices and volatilities are negatively correlated
if $p^\mathtt{b}_j\mathrm{C}_\mathbf{e}(\boldsymbol{\lambda}^\mathtt{b}_j)< p^\mathtt{s}_j\mathrm{C}_\mathbf{e}(\boldsymbol{\lambda}^\mathtt{s}_j)$
for $j\in\{\mathtt{e},\mathtt{i} \}$.
\end{example}
We say that a $(\mathbb{Z}_+\times\mathbb{R}_+)$-valued random variable $\xi=(\xi_1,\xi_2)$ has a {\it bivariate Pareto distribution} $\mathcal{P}(\alpha,\boldsymbol{\theta})$ with parameters $\alpha>0$ and $\boldsymbol{\theta}=(\theta_1,\theta_2)\in (0,\infty)^2$ if
for any $(k,x)\in \mathbb{Z}_+\times\mathbb{R}_+ $,
\begin{eqnarray*}
\mathbf{P}\{\xi_1\geq k,\xi_2\geq x\}=\Big( 1+\frac{k-1}{\theta_{1}} +\frac{x}{\theta_{2}}\Big)^{-\alpha}.
\end{eqnarray*}
The probability law of $\xi=(\xi_1,\xi_2)$ is multivariate regularly varying with index $\alpha$ and $\mathbf{E}[\|\xi\|^\kappa]<\infty$, for any $\kappa<\alpha$.
When $\alpha>2$,
the first moment $\mathrm{M}^\mathbf{p}_{1}(\alpha,\boldsymbol{\theta}):=(\mathrm{M}^\mathbf{p}_{1,k}(\alpha,\boldsymbol{\theta}))_{k=1,2}$
and the second moment $\mathrm{M}^\mathbf{p}_{2}(\alpha,\boldsymbol{\theta}):=(\mathrm{M}^\mathbf{p}_{2,jk}(\alpha,\boldsymbol{\theta}))_{j,k=1,2}$ have the following representation:
\begin{eqnarray}
\mathrm{M}^\mathbf{p}_{1,1}(\alpha,\boldsymbol{\theta}):= \mathbf{E}[\xi_1]= \sum_{k=0}^\infty \Big( 1+\frac{k}{\theta_{1}}\Big)^{-\alpha},
\quad
\mathrm{M}^\mathbf{p}_{1,2}(\alpha,\boldsymbol{\theta}):= \mathbf{E}[\xi_2]= \frac{\theta_2}{\alpha-1}
\end{eqnarray}
and
\begin{eqnarray}
\mathrm{M}^\mathbf{p}_{2,11}(\alpha,\boldsymbol{\theta})\!\!\!&:=\!\!\!& \mathbf{E}[|\xi_1|^2]
= \sum_{k=0}^\infty (2k+1)\Big( 1+\frac{k}{\theta_{1}}\Big)^{-\alpha},
\quad
\mathrm{M}^\mathbf{p}_{2,22}(\alpha,\boldsymbol{\theta}):= \mathbf{E}[|\xi_2|^2]
=\frac{2\theta_2^2}{(\alpha-1)(\alpha-2)},\cr
\mathrm{M}^\mathbf{p}_{2,12}(\alpha,\boldsymbol{\theta})\!\!\!&:=\!\!\!& \mathbf{E}[\xi_1\xi_2]
=\frac{\theta_2}{\alpha-1}\sum_{k=0}^\infty \Big( 1+\frac{k}{\theta_{1}}\Big)^{-\alpha+1}.
\end{eqnarray}
This implies that
\begin{eqnarray}\label{eqn2.27}
\mathrm{C}_\mathbf{p}(\alpha,\boldsymbol{\theta}):={\rm Cov}(\xi_1,\xi_2)
= \frac{\theta_2}{\alpha-1}\sum_{k=0}^\infty \frac{k}{\theta_{1}}\Big( 1+\frac{k}{\theta_{1}}\Big)^{-\alpha}> 0.
\end{eqnarray}
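As a consistency check, the marginal tail of $\xi_2$ is $\mathbf{P}\{\xi_2\geq x\}=(1+x/\theta_2)^{-\alpha}$, so that for $\alpha>2$
\begin{eqnarray*}
\mathbf{E}[\xi_2]=\int_0^\infty \Big( 1+\frac{x}{\theta_{2}}\Big)^{-\alpha}dx=\frac{\theta_2}{\alpha-1}
\quad\mbox{and}\quad
\mathbf{E}[|\xi_2|^2]=\int_0^\infty 2x\Big( 1+\frac{x}{\theta_{2}}\Big)^{-\alpha}dx=\frac{2\theta_2^2}{(\alpha-1)(\alpha-2)},
\end{eqnarray*}
in line with $\mathrm{M}^\mathbf{p}_{1,2}$ and $\mathrm{M}^\mathbf{p}_{2,22}$ above.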
\begin{example}[\rm Pareto market model]\label{Ecample-Pareto}
For $j\in\{\mathtt{e},\mathtt{i} \}$ and some constants $\alpha_j>0$, $\boldsymbol{\theta}_j^\mathtt{b}, \boldsymbol{\theta}_j^\mathtt{s}\in (0,\infty)^2$,
let
\begin{eqnarray*}
(J^{j,\mathtt{b}},X^{j,\mathtt{b}}) \overset{\rm d}= \mathcal{P}(\alpha_j,\boldsymbol{\theta}^\mathtt{b}_j)
\quad\mbox{and}\quad
(J^{j,\mathtt{s}},X^{j,\mathtt{s}}) \overset{\rm d}= \mathcal{P}(\alpha_j,\boldsymbol{\theta}^\mathtt{s}_j).
\end{eqnarray*}
In this case, we call the market model (\ref{eqn2.10})-(\ref{eqn2.11}) {\rm Pareto market model} with parameter
$(\delta,\beta, p^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}}; \alpha_{\mathtt{e}/\mathtt{i}},\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}})$.
From (\ref{eqn2.27}), when $\alpha_{\mathtt{e}/\mathtt{i}}>2$, the jumps in prices and volatilities are negatively correlated if $p^\mathtt{b}_j\mathrm{C}_\mathbf{p}(\alpha_j,\boldsymbol{\theta}^\mathtt{b}_j)< p^\mathtt{s}_j\mathrm{C}_\mathbf{p}(\alpha_j,\boldsymbol{\theta}^\mathtt{s}_j)$ for all $j\in\{\mathtt{e},\mathtt{i} \}$.
\end{example}
We say that a $(\mathbb{Z}_+\times\mathbb{R}_+)$-valued random variable $\xi=(\xi_1,\xi_2)$ has an exponential-Pareto mixing distribution with parameter $(\boldsymbol{\lambda}, \alpha, \boldsymbol{\theta})$ if, for some $q \in (0,1)$ and any $(k,x)\in \mathbb{Z}_+\times\mathbb{R}_+$,
\begin{eqnarray*}
\mathbf{P}\{\xi_1\geq k,\xi_2\geq x\}=
q \Big( 1+\frac{k-1}{\theta_{1}} +\frac{x}{\theta_{2}}\Big)^{-\alpha}
+(1-q) \exp\{-\lambda_1 (k-1)-\lambda_2 x-\lambda_{12}((k-1)\vee x)\}.
\end{eqnarray*}
\begin{example}[\rm Exponential-Pareto mixing market model] \label{Ecample-Exp-Pareto}
If the mark of each event has an exponential-Pareto mixing distribution, then we call the market model (\ref{eqn2.10})-(\ref{eqn2.11})
{\rm exponential-Pareto mixing market model} with parameter
$(\delta,\beta, p^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}}; \boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}};\alpha_{\mathtt{e}/\mathtt{i}},\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}})$
and selecting mechanism {$q_{\mathtt{e}/\mathtt{i}}$}.
\end{example}
\section{The genealogy of market dynamics}\label{genealogy}
In this section, we analyze the genealogical structure of the market dynamics and establish a representation of the dynamics in terms of independent and identically distributed {\sl self-enclosed} sub-models. Self-enclosed sub-models correspond to market models with no exogenous orders, except initial ones; they describe the impact of exogenous shocks on induced orders. Specifically, we call our market model self-enclosed if $\nu_\mathtt{e}(\mathbb{U})=0$; in this case, we denote the model by $\{ (P_{0,t},V_{0,t}):t\geq 0\}$. In view of (\ref{eqn2.10})-(\ref{eqn2.11}) it satisfies the following dynamics:
\begin{eqnarray}
P_{0,t}\!\!\!&=\!\!\!& P_0+\int_0^t \int_{\mathbb{U}} \int_0^{V_{0,s-}}\delta\cdot u_1N_\mathtt{i}(ds,du,dx),\label{eqn2.17}\\
V_{0,t}\!\!\!&=\!\!\!& V_0-\int_0^t \beta V_{0,s}ds +\int_0^t\int_{\mathbb{U}} \int_0^{V_{0,s-}} u_2N_\mathtt{i}(ds,du,dx).\label{eqn2.18}
\end{eqnarray}
From the cluster representation of the Hawkes process, we can decompose {the Hawkes market model} into a sum of self-enclosed models
$\{(P_{k,t},V_{k,t}):t\geq 0\}_{k\geq 1}$. Associated to the sequences of arrival times and marks $\{ (\tau^{\mathtt{e}}_k, \xi_k^{\mathtt{e}}): k=1,2,\cdots \}$, these sub-models satisfy $(P_{k,t},V_{k,t})=(0,0)$ if $t<\tau_k^{\mathtt{e}}$, and for $t\geq \tau^{\mathtt{e}}_k$,
\begin{eqnarray}
P_{k,t}\!\!\!&=\!\!\!& \xi_{k,P}^{\mathtt{e}}+\int_{\tau^{\mathtt{e}}_k}^t \int_{\mathbb{U}} \int_{ \sum_{j=0}^{k-1}V_{j,s-}}^{\sum_{j=0}^{k-1}V_{j,s-}+V_{k,s-}}\delta\cdot u_1N_\mathtt{i}(ds,du,dx),\label{eqn2.24}\\
V_{k,t}\!\!\!&=\!\!\!& \xi_{k,V}^{\mathtt{e}}-\int_{\tau^{\mathtt{e}}_k}^t \beta V_{k,s}ds +\int_{\tau^{\mathtt{e}}_k}^t \int_{\mathbb{U}} \int_{\sum_{j=0}^{k-1}V_{j,s-}}^{\sum_{j=0}^{k-1}V_{j,s-}+V_{k,s-}} u_2N_\mathtt{i}(ds,du,dx).\label{eqn2.25}
\end{eqnarray}
\begin{theorem}\label{Thm206}
The Hawkes market model $\{(P_t,V_t):t\geq 0\}$ defined by (\ref{eqn2.10})-(\ref{eqn2.11}) admits the following decomposition:
\begin{eqnarray*}
\{ (P_t,V_t):t\geq 0 \} \overset{\rm a.s.}= \Big\{ \sum_{k= 0}^\infty (P_{k,t},V_{k,t}) :t\geq 0 \Big\}.
\end{eqnarray*}
\end{theorem}
\noindent{\it Proof.~~}
It suffices to prove that the infinite sum is well defined and equals $\{ (P_t,V_t):t\geq 0 \}$. For any $K\geq 1$, let $(P^K_{t},V_{t}^K):=\sum_{k= 0}^K (P_{k,t},V_{k,t})$, which solves
\begin{eqnarray*}
P^K_{t}\!\!\!&=\!\!\!& P_0+ \sum_{k=1}^K \xi_{k,P}^{\mathtt{e}}\mathbf{1}_{\{t\geq \tau^{\mathtt{e}}_{k}\}}+\int_{0}^t \int_{\mathbb{U}} \int_0^{V^K_{s-}}\delta\cdot u_1N_\mathtt{i}(ds,du,dx),\\
V^K_{t}\!\!\!&=\!\!\!& V_0 +\sum_{k=1}^K \xi_{k,V}^{\mathtt{e}}\mathbf{1}_{\{t\geq \tau^{\mathtt{e}}_{k}\}}-\int_{0}^t \beta V^K_{s}ds +\int_{0}^t \int_{\mathbb{U}} \int_0^{V^K_{s-}} u_2N_\mathtt{i}(ds,du,dx).
\end{eqnarray*}
For any $T\geq 0$, the set $\{k \in \mathbb N : \tau^{\mathtt{e}}_{k}\leq T\}$ is a.s.~finite. Hence $\{(P^K_{t},V_{t}^K): t\in[0,T]\}\overset{\rm a.s.}\to \{(P_{t},V_{t}): t\in[0,T]\}$ as $K\to\infty$.
$\Box$
The sub-models $\{(P_{k,t},V_{k,t}):t\geq 0\}_{k\geq 0}$ are self-enclosed, mutually independent and identically distributed:
for any $j,k\geq 1$,
\begin{eqnarray*}
\{(P_{j,t+\tau^{\mathtt{e}}_j},V_{j,t+\tau^{\mathtt{e}}_j}):t\geq 0\}\overset{\rm d}= \{(P_{k,t+\tau^{\mathtt{e}}_k},V_{k,t+\tau^{\mathtt{e}}_k}):t\geq 0\}.
\end{eqnarray*}
Conditioned on $(\xi_{1,P}^{\mathtt{e}},\xi_{1,V}^{\mathtt{e}})=(P_0,V_0)$, the model $\{(P_{1,t+\tau^{\mathtt{e}}_1},V_{1,t+\tau^{\mathtt{e}}_1}):t\geq 0\}$ equals $\{(P_{0,t},V_{0,t}):t\geq 0\}$ in law. As a result, the impact of exogenous orders on the market dynamics can be analyzed by analyzing the model $\{(P_{0,t},V_{0,t}):t\geq 0\}$. By \cite[Theorem 1.1]{KawazuWatanabe1971} the volatility process $\{V_{0,t}:t\geq 0 \}$ is a continuous-state branching process. Arguments given by Grey \cite{Grey1974} show that it tends to either $0$ or $\infty$ as $t \to \infty$. We show that this implies that the price process $\{P_{0,t}:t\geq 0 \}$ either settles down or fluctuates strongly as $t \to \infty$.
\begin{remark}
The case {$V_{0,t} \to 0$} is economically very intuitive. In the absence of exogenous shocks it seems reasonable to assume that prices settle down in the {long run}.
\end{remark}
In order to study the long-run behavior of the price process $\{P_{0,t}:t\geq 0 \}$, we denote by $\mathcal{T}_0$ its last jump time and for $0\leq a\leq b \leq\infty$ we denote the number of jumps in the time interval $[a,b)$ by $\mathcal{J}_a^b$. That is,
\begin{eqnarray*}
\mathcal{T}_0:= \sup\{t\in[0,\infty): |P_{0,t}-P_{0,t-}|>0\}\quad \mbox{and}\quad \mathcal{J}_a^b:=\# \{t\in[a,b): |P_{0,t}-P_{0,t-}|>0\}.
\end{eqnarray*}
The following theorem analyzes the joint distribution of $( \mathcal{T}_0, \mathcal{J}^\infty_0)$ in terms of the function
\begin{eqnarray*}
g_0(x)\!\!\!& :=\!\!\!& \beta x+ \int_\mathbb{U} [e^{-x\cdot u_2 }-1]\nu_\mathtt{i}(du),\quad x\geq 0.
\end{eqnarray*}
\begin{theorem}
For any $\lambda, t\geq 0$,
\begin{eqnarray}\label{eqn2.15}
\mathbf{E}\big[e^{-\lambda \cdot\mathcal{J}_0^\infty }, \mathcal{T}_0\leq t \big] = \exp\big\{- \phi_t(\lambda)\cdot V_0 \big\},
\end{eqnarray}
where $t\mapsto\phi_t(\lambda)$ solves the Riccati equation
\begin{eqnarray}\label{eqn2.16}
\phi_t(\lambda)\!\!\!&=\!\!\!& \frac{1}{\beta} -\int_0^t g_0(\phi_s(\lambda)) ds -(e^{-\lambda}-1) \int_0^t ds \int_\mathbb{U} \exp\{ -\phi_s(\lambda)\cdot u_2\} \nu_\mathtt{i}(du).
\end{eqnarray}
\end{theorem}
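Differentiating (\ref{eqn2.16}) in $t$ shows, equivalently, that $t\mapsto\phi_t(\lambda)$ solves the ordinary differential equation
\begin{eqnarray*}
\partial_t\phi_t(\lambda)= -g_0(\phi_t(\lambda)) -(e^{-\lambda}-1) \int_\mathbb{U} \exp\{ -\phi_t(\lambda)\cdot u_2\} \nu_\mathtt{i}(du),
\qquad \phi_0(\lambda)=\frac{1}{\beta},
\end{eqnarray*}
a form that may be more convenient for numerical evaluation.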
\noindent{\it Proof.~~}
From (\ref{eqn2.17}), we have
\begin{eqnarray*}
\mathbf{E}\big[e^{-\lambda\cdot \mathcal{J}_0^\infty }, \mathcal{T}_0\leq t \big] \!\!\!&=\!\!\!& \mathbf{E} \Big[\exp\Big\{ -\lambda \int_0^t \int_{\mathbb{U}} \int_0^{V_{0,s-} } N_\mathtt{i}(ds,du,dz)\Big\}, \int_t^\infty \int_{\mathbb{U}} \int_0^{V_{0,s-} } N_\mathtt{i}(ds,du,dz)=0 \Big] \cr
\!\!\!&=\!\!\!& \mathbf{E} \Big[\exp\Big\{ -\lambda \int_0^t \int_{\mathbb{U}} \int_0^{V_{0,s-} } N_\mathtt{i}(ds,du,dz)\Big\}, \mathbf{P}_{\mathscr{F}_t}\Big\{\int_t^\infty \int_{\mathbb{U}} \int_0^{V_{0,s-} } N_\mathtt{i}(ds,du,dz)=0 \Big\} \Big].
\end{eqnarray*}
From the properties of generalized Poisson processes, we have
\begin{eqnarray*}
\mathbf{P}_{\mathscr{F}_t}\Big\{\int_t^\infty \int_{\mathbb{U}} \int_0^{V_{0,s-} } N_\mathtt{i}(ds,du,dz)=0 \Big\} \!\!\!&=\!\!\!& \mathbf{E}_{\mathscr{F}_t}\Big[ \exp\Big\{ -\int_t^\infty V_{0,s} ds \Big\} ; \mathcal{T}_0\leq t \Big].
\end{eqnarray*}
Moreover, conditioned on $\{\mathcal{T}_0\leq t\}$, we have $V_{0,s}= V_{0,t}e^{-\beta(s-t)}$ for any $s\geq t$, and
\begin{eqnarray*}
\mathbf{E}_{\mathscr{F}_t}\Big[ \exp\Big\{ -\int_t^\infty V_{0,s} ds \Big\} ; \mathcal{T}_0\leq t \Big]= \mathbf{E}_{\mathscr{F}_t}\big[ \exp\big\{ -V_{0,t}/\beta\big\} \big].
\end{eqnarray*}
Putting all the above results together, we have
\begin{eqnarray*}
\mathbf{E}\big[ e^{-\lambda\cdot\mathcal{J}_0^\infty}, \mathcal{T}_0\leq t \big] \!\!\!&=\!\!\!& \mathbf{E} \Big[\exp\Big\{ -\int_0^t \int_{\mathbb{U}} \int_0^{V_{0,s} } \lambda N_\mathtt{i}(ds,du,dz)-V_{0,t}/\beta\Big\} \Big]= \mathbf{E}\big[e^{-Y_t}\big],
\end{eqnarray*}
where
\begin{eqnarray*}
Y_t= V_0/\beta-\int_0^t V_{0,s}ds +\int_0^t\int_{\mathbb{U}} \int_0^{V_{0,s-}}\big( \lambda+ u_2/\beta \big)N_\mathtt{i}(ds,du,dz).
\end{eqnarray*}
Applying It\^o's formula to $\exp\{-Y_t-(\phi_{t-s}(\lambda)-1/\beta)V_{0,t}\}$, we have
\begin{eqnarray*}
e^{-Y_t}
\!\!\!&=\!\!\!&
e^{-\phi_{t}(\lambda) V_0} +\int_0^t\int_{\mathbb{U}} \int_0^{V_{0,s-}}
e^{-Y_{s-}-(\phi_{t-s}(\lambda)-1/\beta)V_{0,s-}}\big[e^{-( \frac{u_2}{\beta}+\lambda) -(\phi_{t-s}(\lambda)-1/\beta)u_2} -1\big]\tilde{N}_\mathtt{i}(ds,du,dz) ,
\end{eqnarray*}
where $\tilde{N}_\mathtt{i}(ds,du,dz):= N_\mathtt{i}(ds,du,dz)-ds\nu_\mathtt{i}(du)dz$.
Taking expectations on both sides yields (\ref{eqn2.15}).
$\Box$
\begin{corollary}\label{Thm207}
We have $\mathcal{T}_0<\infty$ if and only if $|\mathcal{J}_0^\infty|<\infty$. Moreover, for any $\lambda\geq 0$, the limit $\phi_\infty(\lambda):=\lim_{t\to\infty} \phi_t(\lambda)$ exists, is the largest root of the function
\begin{eqnarray}\label{eqn2.21}
g_\lambda(x):= g_0(x)+(e^{-\lambda}-1)\int_\mathbb{U} e^{-x\cdot u_2 }\nu_\mathtt{i}(du), \qquad x \geq 0,
\end{eqnarray}
and
\begin{eqnarray}\label{eqn2.20}
\mathbf{E}\big[e^{-\lambda \cdot\mathcal{J}_0^\infty }, \mathcal{T}_0<\infty\big] = \exp\big\{- V_0 \cdot \phi_\infty(\lambda)\big\}
\quad \mbox{and}\quad
\mathbf{P}\{\mathcal{T}_0<\infty \}=\exp\{-V_0 \cdot \phi_\infty(0)\}.
\end{eqnarray}
\end{corollary}
\noindent{\it Proof.~~}
The first result follows directly from the second. Indeed, $|\mathcal{J}_0^\infty|<\infty$ means that only finitely many jumps occur and so $\mathcal{T}_0<\infty$. Conversely, from (\ref{eqn2.15}) and the continuity of $\phi_\infty(\lambda)$, we have
\begin{eqnarray*}
\mathbf{P}\big\{\mathcal{J}_0^\infty<\infty \big| \mathcal{T}_0<\infty \big\}=\lim_{\lambda\to 0+} \mathbf{E}\big[e^{-\lambda\cdot\mathcal{J}_0^\infty} \big| \mathcal{T}_0<\infty\big] = \lim_{\lambda\to 0+} e^{- (\phi_\infty(\lambda)- \phi_\infty(0)) V_0 }=1.
\end{eqnarray*}
We now prove the second result. For any $\lambda\geq 0$, the {function} $\{ \phi_t(\lambda):t\geq 0 \}$ is {non-negative} and non-increasing. Hence, the limit $\phi_\infty(\lambda):=\lim_{t\to\infty} \phi_t(\lambda)$ exists.
Applying the dominated convergence theorem to (\ref{eqn2.15}) yields
\begin{eqnarray*}
\mathbf{E}\big[e^{-\lambda\cdot\mathcal{J}_0^\infty}, \mathcal{T}_0<\infty \big]
= \lim_{t\to\infty}\mathbf{E}\big[e^{-\lambda\cdot\mathcal{J}_0^\infty}, \mathcal{T}_0\leq t \big]
= \exp\big\{- V_0 \cdot \lim_{t\to\infty}\phi_t(\lambda) \big\}
= \exp\big\{- V_0\cdot \phi_\infty(\lambda) \big\}.
\end{eqnarray*}
By (\ref{eqn2.16}), the process $\{ \phi_t(\lambda):t\geq 0 \}$ satisfies the semigroup property $\phi_{t+s}(\lambda) = \phi_t( \phi_s(\lambda))$ for any $s,t\geq 0$.
Hence, for any $t\geq 0$
\begin{eqnarray*}
\phi_\infty(\lambda) =\lim_{s\to\infty} \phi_{t+s}(\lambda) = \lim_{s\to\infty}\phi_t( \phi_s(\lambda))=\phi_t( \phi_\infty(\lambda)).
\end{eqnarray*}
Substituting this back into (\ref{eqn2.16}), we obtain
\begin{eqnarray*}
\int_0^tg_0(\phi_\infty(\lambda))ds+(e^{-\lambda}-1)\int_0^tds\int_\mathbb{U} \exp\{-\phi_\infty(\lambda) \cdot u_2 \}\nu_\mathtt{i}(du) \equiv 0,
\end{eqnarray*}
which means that
\begin{eqnarray*}
g_0(\phi_\infty(\lambda)) + (e^{-\lambda}-1)\int_\mathbb{U} \exp\{ -\phi_\infty(\lambda)\cdot u_2\} \nu_\mathtt{i}(du)=0.
\end{eqnarray*}
It remains to show that $\phi_\infty(\lambda)$ is the largest root. The function $g_\lambda$ is smooth and strictly convex, since $\nu_{\mathtt{i}}$ is supported on $\mathbb Z_+ \times \mathbb R_+$, and $g_\lambda(x) \to \infty$ as $x \to \infty$. If $\lambda > 0$, then $g_\lambda(0) < 0$ and hence $g_\lambda$ has a unique root. If $\lambda = 0$, then $x=0$ is a root, and there is a second, strictly positive root if and only if $g'_0(0) < 0$. In this case, it follows from the continuity of $g_\lambda(x)$ in $(\lambda,x)$ that $\phi_\infty(\lambda)$ decreases to $\phi_\infty(0)$ continuously as $\lambda\to 0+$. This shows that $\phi_\infty(0)$ is the largest root of $g_0$.
$\Box$
For the remainder of this section, we always assume that $\int_\mathbb{U}\|u\|\nu_{\mathtt{i}}(du)<\infty$.
Since $\mathbf{P}\{\mathcal{T}_0<\infty \}=\exp\{- \phi_\infty(0) V_0\}$ we see that $\mathbf{P}\{|\mathcal{J}_0^\infty| =\infty \}= \mathbf{P}\{ \mathcal{T}_0=\infty \}>0$ if and only if $\phi_\infty(0) > 0$.
The proof of Corollary~\ref{Thm207} shows that this is the case if and only if the function $g_0$ has a strictly positive root, which in turn holds if and only if
\begin{equation} \label{beta-tilde}
\tilde\beta := g'_0(0) = \beta -\int_{\mathbb{U}} u_2\nu_{\mathtt{i}}(du) < 0.
\end{equation}
Economically, $\tilde \beta$ describes the net decay in the long run of the impact of induced orders on the volatility: the impact of past orders is discounted at rate $\beta$ while new impact is added at rate $\int_{\mathbb{U}} u_2\nu_{\mathtt{i}}(du)$.
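The characterisation of $\phi_\infty(\lambda)$ as the largest root of $g_\lambda$ in (\ref{eqn2.21}) is straightforward to exploit numerically. The following Python sketch is purely illustrative and not part of the model: the function \texttt{g0} and the discrete weights standing in for $\nu_\mathtt{i}$ are hypothetical, user-supplied inputs.
\begin{verbatim}
import numpy as np

def phi_inf(g0, nu_i, lam, x_max=50.0, tol=1e-12):
    """Largest root of g_lam(x) = g0(x) + (exp(-lam)-1)*sum_j w_j*exp(-x*u2_j).

    g0   : callable, the function g_0 of the text (user supplied)
    nu_i : list of (weight, u2) pairs, a discrete stand-in for nu_i(du)
    """
    def g_lam(x):
        return g0(x) + (np.exp(-lam) - 1.0) * sum(w * np.exp(-x * u2) for w, u2 in nu_i)
    lo, hi = 0.0, x_max
    while g_lam(hi) <= 0:          # enlarge the bracket until g_lam is positive
        hi *= 2.0
    # bisection keeps the invariant g_lam(lo) <= 0 < g_lam(hi), so it converges
    # to the largest root of the convex function g_lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g_lam(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
\end{verbatim}
In particular, \texttt{np.exp(-V0 * phi\_inf(g0, nu\_i, 0.0))} approximates the probability $\mathbf{P}\{\mathcal{T}_0<\infty \}$ of (\ref{eqn2.20}).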
\begin{corollary}\label{Thm209}
We have $\mathbf{P}\{\mathcal{J}_0^\infty =\infty \}= \mathbf{P}\{ \mathcal{T}_0=\infty \}>0$
if and only if $\tilde\beta<0$.
In this case, conditioned on $\{ \mathcal{T}_0=\infty \}$,
the following holds:
\begin{enumerate}
\item[(1)] The number of price changes tends to infinity as time increases, i.e.
$\limsup_{k\to\infty} \mathcal{J}_k^{k+1}= \infty $ a.s.
\item[(2)] The price drifts to $\infty$, $-\infty$ or oscillates
if and only if $\int_\mathbb{U}u_1\nu_{\mathtt{i}}(du)>0$, $<0$ or $=0$.
\end{enumerate}
\end{corollary}
\noindent{\it Proof.~~}
The first assertion has already been established.
Assertion (1) follows from the fact that the intensity process $V_{0,t}$ tends to either $0$ or $\infty$ a.s.
Indeed, conditioned on $\{\mathcal{T}_0=\infty\}$, we have $\limsup_{t\to\infty} V_{0,t}>0$
and hence $V_{0,t}\to \infty$ almost surely.
For assertion (2), denote by $\tau_1<\tau_2<\cdots$ the jump times of the price process. Then $\tau_k\to\infty$ a.s. as $k\to\infty$ and
it suffices to prove that the random walk $ P_{\tau_k}=P_0+\sum_{j=1}^{k} \eta_{j,P}^{\mathtt{i}}$
drifts to $\infty$, $-\infty$ or oscillates
if and only if $\mathbf{E}[\eta_{1,P}^{\mathtt{i}}]>0$, $<0$ or $=0$.
This, however, follows from, e.g.~Theorem~4 in \cite[p.203]{Feller1971}.
$\Box$
\begin{proposition}
We have $\mathbf{E}[ \mathcal{J}_0^\infty ]<\infty $ if and only if $\tilde\beta>0$.
In this case, $\mathbf{E}[\mathcal{J}_0^\infty ]=V_0/\tilde\beta$.
\end{proposition}
\noindent{\it Proof.~~}
Taking expectations on both sides of (\ref{eqn2.18}), we have
\begin{eqnarray*}
\mathbf{E}[V_{0,t}]\!\!\!&=\!\!\!& V_0 -\int_0^t \tilde\beta \cdot \mathbf{E}[V_{0,s}]ds.
\end{eqnarray*}
Solving this equation, we get $\mathbf{E}[V_{0,t}]= V_0 \cdot e^{-\tilde{\beta}t}$.
From the definition of $\mathcal{J}_0^\infty$,
\begin{eqnarray*}
\mathbf{E}[\mathcal{J}_0^\infty]
\!\!\!&=\!\!\!& \mathbf{E}\Big[ \int_0^\infty \int_{\mathbb{U}}\int_0^{V_{0,s-}}N_\mathtt{i}(ds,du,dx) \Big]
= \int_0^\infty \mathbf{E}[V_{0,s}]ds = V_0 \int_0^\infty e^{-\tilde{\beta}s}ds,
\end{eqnarray*}
which is finite if and only if $\tilde\beta>0$.
In this case, we have $\mathbf{E}[\mathcal{J}_0^\infty]=V_0/\tilde\beta$.
$\Box$
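For intuition, the identity $\mathbf{E}[\mathcal{J}_0^\infty]=V_0/\tilde\beta$ is easy to check by direct simulation of the induced order flow. The Python sketch below is a minimal illustration under simplifying, hypothetical assumptions (a two-point law with total mass one for the volatility mark $u_2$, and exponential decay of the intensity at rate $\beta$ between induced orders, consistent with the mean dynamics $\mathbf{E}[V_{0,t}]=V_0e^{-\tilde\beta t}$ derived above); it is not a verbatim transcription of the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_J(V0, beta, u2_vals, u2_probs, max_events=10**6):
    """Total number of induced orders along one path (requires beta > 0)."""
    V, count = V0, 0
    while count < max_events:
        E = rng.exponential()
        if E >= V / beta:          # the exponential clock never rings: extinction
            return count
        dt = -np.log(1.0 - beta * E / V) / beta   # next order via time change
        V = V * np.exp(-beta * dt) + rng.choice(u2_vals, p=u2_probs)
        count += 1
    return count

V0, beta = 1.0, 2.0
u2_vals, u2_probs = np.array([0.5, 1.5]), np.array([0.5, 0.5])
beta_tilde = beta - np.dot(u2_vals, u2_probs)      # = 1.0 > 0 here
est = np.mean([sample_J(V0, beta, u2_vals, u2_probs) for _ in range(20000)])
print(est, V0 / beta_tilde)                        # the two numbers should be close
\end{verbatim}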
The following corollary provides four regimes for the long-run impact of an exogenous shock of magnitude $(P_0,V_0)$ to the market dynamics.
As pointed out above, the economically interesting case is $\tilde \beta \geq 0$.
The critical case $\tilde \beta = 0$ is particularly relevant.
In this case, the impact of exogenous orders on induced ones is slowly decaying.
In the scaling limit it corresponds to a loss of mean-reversion of the volatility process.
\begin{corollary}\label{Thm210}
There are four regimes for the long-run impact of an exogenous shock of magnitude $(P_0,V_0)$ on the induced order flow.
\begin{enumerate}
\item[(1)] If $\tilde\beta <0$, then as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{ \mathcal{T}_0\geq t \} \to 1-e^{- \phi_\infty(0) V_0}\in(0,1).
\end{eqnarray*}
\item[(2)] If $\tilde\beta >0$,
then for any $t>0$,
\begin{eqnarray*}
\mathbf{P}\{ \mathcal{T}_0\geq t \} \leq \frac{V_0}{\beta}\cdot e^{- \tilde\beta \cdot t }.
\end{eqnarray*}
\item[(3)] If $\tilde\beta = 0$
and $\nu_\mathtt{i}(|u_2|^2):=\frac{1}{2}\int_{\mathbb{U}}|u_2|^2\nu_{\mathtt{i}}(du)<\infty$,
then as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{ \mathcal{T}_0\geq t \} \sim \frac{V_0}{\nu_\mathtt{i}(|u_2|^2)}\cdot t^{-1}.
\end{eqnarray*}
\item[(4)] If $\tilde\beta = 0$
and $\nu_{\mathtt{i}}(\mathbb{Z}\times [x,\infty))\sim C(1+x)^{-1-\alpha}$ as $x\to \infty$ for some $\alpha\in(0,1)$,
then as $t\to \infty$,
\begin{eqnarray*}
\mathbf{P}\{ \mathcal{T}_0\geq t \} \sim C\cdot V_0\cdot t^{-1/\alpha}.
\end{eqnarray*}
\end{enumerate}
\end{corollary}
\noindent{\it Proof.~~}
The first regime follows from Corollaries~\ref{Thm207} and \ref{Thm209}.
For the other three regimes, from (\ref{eqn2.15}) we have
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_0>t \} =1-e^{-V_0 \phi_t(0)} \sim V_0 \cdot \phi_t(0), \qquad t\to\infty.
\end{eqnarray*}
Thus, it suffices to consider the asymptotic behavior of $\phi_t(0)$ as $t\to\infty$.
By (\ref{eqn2.16}) with $\lambda=0$, we have
\begin{eqnarray*}
\phi_t(0)= \frac{1}{\beta}e^{-\tilde\beta t} - \int_0^t e^{-\tilde\beta (t-s)} ds \int_\mathbb{U} [e^{ -u_2\phi_s(0)}-1+u_2\phi_s(0)]\nu_\mathtt{i}(du).
\end{eqnarray*}
Thus, (2) follows from the fact that the integral term above is nonnegative.
We now prove (3).
If $\int_{\mathbb{U}}|u_2|^2\nu_{\mathtt{i}}(du)<\infty$, then for $\lambda\to 0+$,
\begin{eqnarray*}
\int_\mathbb{U} [e^{ -u_2\lambda}-1+u_2\lambda]\nu_\mathtt{i}(du)\sim \nu_\mathtt{i}(|u_2|^2)\cdot \lambda^2.
\end{eqnarray*}
From the monotonicity of $\{\phi_t(0):t\geq 0\}$ and the fact that $\phi_t(0)\to 0$ (recall that $\tilde\beta=0$ implies $\phi_\infty(0)=0$), for any $\epsilon\in(0,\nu_\mathtt{i}(|u_2|^2))$ there exists $t_0>0$ large enough such that for any $t\geq 0$,
\begin{eqnarray*}
\phi_{t_0+t}(0)\!\!\!&= \!\!\!& \phi_{t_0}(0) - \int_0^t ds \int_\mathbb{U} [e^{ -u_2\phi_{t_0+s}(0)}-1+u_2\phi_{t_0+s}(0)]\nu_\mathtt{i}(du)\cr
\!\!\!&\leq\!\!\!& \phi_{t_0}(0) - [\nu_\mathtt{i}(|u_2|^2)-\epsilon]\cdot\int_0^t |\phi_{t_0+s}(0)|^2 ds .
\end{eqnarray*}
Applying the nonlinear Gr\"onwall inequality, we have
\begin{eqnarray*}
\frac{1}{\phi_{t_0+t}(0)}-\frac{1}{\phi_{t_0}(0)}\geq [\nu_\mathtt{i}(|u_2|^2)-\epsilon]\cdot t,
\end{eqnarray*}
which implies that
\begin{eqnarray*}
\phi_{t+t_0}(0)\leq \frac{1}{\frac{1}{\phi_{t_0}(0)}+ [\nu_\mathtt{i}(|u_2|^2)-\epsilon]\cdot t}
\quad\mbox{and}\quad
\limsup_{t\to\infty}\ (t+t_0)\phi_{t+t_0}(0)\leq \frac{1}{\nu_\mathtt{i}(|u_2|^2)-\epsilon}.
\end{eqnarray*}
Similarly, bounding the inner integral from above by $[\nu_\mathtt{i}(|u_2|^2)+\epsilon]\cdot|\phi_{t_0+s}(0)|^2$, we also have
\begin{eqnarray*}
\phi_{t+t_0}(0)\geq \frac{1}{\frac{1}{\phi_{t_0}(0)}+ [\nu_\mathtt{i}(|u_2|^2)+\epsilon]\cdot t}
\quad\mbox{and}\quad
\liminf_{t\to\infty}\ (t+t_0)\phi_{t+t_0}(0)\geq \frac{1}{\nu_\mathtt{i}(|u_2|^2)+\epsilon}.
\end{eqnarray*}
This shows (3) as $\epsilon$ is arbitrary.
For (4), from \cite[Theorem 8.1.6]{BinghamGoldieTeugels1987}, we have as $\lambda\to 0+$,
\begin{eqnarray*}
\int_\mathbb{U} [e^{ -u_2\lambda}-1+u_2\lambda]\nu_\mathtt{i}(du)
\sim C\lambda^{\alpha+1}.
\end{eqnarray*}
As before, we also have as $t\to \infty$,
\begin{eqnarray*}
\phi_t(0)\sim Ct^{-1/\alpha}.
\end{eqnarray*}
$\Box$
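In the critical case $\tilde\beta=0$, the representation of $\phi_t(0)$ displayed in the proof reduces to the ordinary differential equation $\partial_t\phi_t(0)=-\int_\mathbb{U}[e^{-u_2\phi_t(0)}-1+u_2\phi_t(0)]\nu_\mathtt{i}(du)$ with $\phi_0(0)=1/\beta$, which can be integrated numerically. The Python sketch below is purely illustrative: the two-point law standing in for $\nu_\mathtt{i}$ and all constants are hypothetical. It compares $\mathbf{P}\{\mathcal{T}_0\geq t\}\approx V_0\phi_t(0)$ with the prediction $V_0/(\nu_\mathtt{i}(|u_2|^2)\,t)$ of regime (3).
\begin{verbatim}
import numpy as np

# hypothetical discrete stand-in for nu_i: weights w on marks u2 (total mass one)
u2 = np.array([0.5, 1.5])
w  = np.array([0.5, 0.5])
beta = np.dot(w, u2)                 # critical case: beta_tilde = beta - E[u2] = 0
nu_u2_sq = 0.5 * np.dot(w, u2**2)    # nu_i(|u2|^2) as in regime (3)

def rhs(phi):
    return -np.dot(w, np.exp(-u2 * phi) - 1.0 + u2 * phi)

phi, dt, T, t = 1.0 / beta, 1e-2, 2000.0, 0.0   # explicit Euler up to time T
while t < T:
    phi += dt * rhs(phi)
    t += dt

V0 = 1.0
print(V0 * phi, V0 / (nu_u2_sq * T))   # both approximate P{T_0 >= T}; they should be close
\end{verbatim}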
\section{The scaling limit}\label{HFHM}
\setcounter{equation}{0}
In this section, we consider the weak convergence of a sequence of rescaled market models. Our scaling limit provides a microscopic foundation for an array of stochastic volatility models including the classic Heston model, the Heston model with jumps, the jump-diffusion stochastic volatility model in \cite{DuffiePanSingleton2000} and the alpha Heston model in \cite{JiaoMaScottiZhou2018}.
\subsection{Assumptions and asymptotic results}
For each $n \in \mathbb{N}$, we consider a market model $\{(P_t^{(n)},V_t^{(n)}):t\geq 0\}$ of the form (\ref{eqn2.10})-(\ref{eqn2.11}) with initial state $(P_0^{(n)},V_0^{(n)})$ and parameter $(1/n,\beta^{(n)}; \nu_{\mathtt{e}/\mathtt{i}}^{(n)})$.
Without loss of generality, we assume that all models are defined on the common filtered probability space $(\Omega,\mathscr{F}, \mathscr{F}_t,\mathbf{P})$.
We are interested in the weak convergence of the rescaled market models $\{(P^{(n)}(t), V^{(n)}(t)):t\geq 0 \}$ defined by
\begin{eqnarray}\label{eqn3.02}
\Big(\begin{array}{c}P^{(n)}(t)\cr V^{(n)}(t) \end{array} \Big) \!\!\!&=\!\!\!&
\Big(\begin{array}{c}P^{(n)}(0)\cr V^{(n)}(0) \end{array} \Big) -\int_0^t \Big(\begin{array}{c} 0\cr \gamma_n\beta^{(n)}\end{array} \Big) V^{(n)}(s)ds
+\int_0^t\int_{\mathbb{U}} \frac{u}{n}N_{\mathtt{e}}^{(n)}(d\gamma_ns,du) \cr
\!\!\!&\!\!\!& +\int_0^t\int_{\mathbb{U}} \int_0^{V^{(n)}(s-)} \frac{u}{n}N^{(n)}_\mathtt{i}(d\gamma_ns,du,dnx)
\end{eqnarray}
where $\{ \gamma_n \}_{n\geq 1}$ is a sequence of positive numbers that converges to infinity as $n\to\infty$. The market models are driven by the Poisson random measures $N_\mathtt{e}^{(n)}$ and $N_\mathtt{i}^{(n)}$ whose respective compensators are given by
\[
\hat{N}_\mathtt{e}^{(n)}(d\gamma_ns,du):=\gamma_n ds\nu^{(n)}_\mathtt{e}(du) \quad \mbox{and} \quad
\hat{N}_\mathtt{i}^{(n)}(d\gamma_ns,du,dnx):=n\gamma_n ds\nu^{(n)}_\mathtt{i}(du)dx.
\]
In particular, the arrival rate of exogenous orders in the $n$-th market model is $\gamma_n$, and the arrival rate of induced orders is $n\gamma_n$ per unit of volatility\footnote{Since order sizes are scaled by a factor $\frac{1}{n}$, this suggests that exogenous orders will not generate a diffusive behavior in the limit; they only generate a drift and/or jumps. Induced orders, on the other hand, may well generate diffusive behavior.}. This justifies our interpretation of induced orders as originating from high-frequency trading.
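The rescaled dynamics (\ref{eqn3.02}) can be simulated exactly, which helps to visualise how induced orders build up a diffusive volatility component as $n$ grows. The Python sketch below is illustrative only: it takes $\gamma_n=n$, $\beta^{(n)}=1$ and simple two-point mark laws, all of which are hypothetical choices rather than assumptions of the model.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def simulate(n, T, beta_n=1.0, P0=0.0, V0=1.0):
    """One path of (3.02) with gamma_n = n and illustrative mark laws (beta_n > 0)."""
    gam = n
    a = gam * beta_n                 # decay rate of V between orders
    P, V, t = P0, V0, 0.0
    path = [(t, P, V)]
    while True:
        s_e = rng.exponential(1.0 / gam)            # next exogenous order, rate gamma_n
        E = rng.exponential()                        # induced order via time change,
        cap = n * gam * V / a                        # instantaneous rate n*gamma_n*V(t)
        s_i = -np.log(1.0 - E / cap) / a if E < cap else np.inf
        s = min(s_e, s_i)
        if t + s > T:
            break
        t += s
        V *= np.exp(-a * s)                          # deterministic decay between orders
        if s_e <= s_i:                               # exogenous order (hypothetical marks)
            u1, u2 = rng.choice([-1.0, 1.0]), 1.0
        else:                                        # induced order (hypothetical marks)
            u1, u2 = rng.choice([-1.0, 1.0]), rng.choice([0.5, 1.5])
        P += u1 / n
        V += u2 / n
        path.append((t, P, V))
    return np.array(path)

print(simulate(n=50, T=1.0).shape)
\end{verbatim}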
The third term on the right side of equation (\ref{eqn3.02}) is a two-dimensional, compound Poisson process. Its weak convergence has been extensively studied in the literature; see \cite[Chapter VII]{JacodShiryaev2003} or \cite[Corollary 15.20]{Kallenberg2002}.
The following condition is necessary for its weak convergence.
\begin{assumption}\label{MainCondition00}
There exists a constant $\gamma\in [0,\infty)$ such that $ \gamma_n/n\to \gamma $ as $n\to\infty$\footnote{We emphasize that $\gamma =0$ is allowed. In Section 3 we consider the case $\gamma_n=n$ as well as the case $\gamma_n = n^\alpha$ for some $\alpha \in (0,1)$.}.
\end{assumption}
We now give sufficient conditions on the parameters $\beta^{(n)}$ and $\nu_{\mathtt{e}/\mathtt{i}}^{(n)}$ that guarantee the existence of a non-degenerate scaling limit. Our arguments are based on the link between the benchmark model and general branching particle systems. By (\ref{eqn2.12}) the infinitesimal generator $\mathcal{A}^{(n)}$ of $\{(P^{(n)}(t),V^{(n)}(t)):t\geq 0\}$ acts on functions $f\in C^2(\mathbb{U})$ according to
\begin{eqnarray}\label{eqn3.01}
\mathcal{A}^{(n)} f(p,v)
\!\!\!&=\!\!\!&
-v\cdot\gamma_n\beta^{(n)}\frac{\partial }{\partial v}f(p,v)
+v\cdot n\gamma_n\int_{\mathbb{U}}\big[f((p,v)+u/n)-f(p,v)\big]\nu^{(n)}_\mathtt{i}(du) \cr
\!\!\!&\!\!\!&
+\gamma_n\int_{\mathbb{U}}\big[f((p,v)+u/n)-f(p,v)\big]\nu^{(n)}_\mathtt{e}(du).
\end{eqnarray}
By Corollary~8.9 in \cite[p.232]{EthierKurtz1986}, the sequence $\{(P^{(n)}(t),V^{(n)}(t)):t\geq 0\}_{n\geq 1}$ is weakly convergent if there exists an infinitesimal generator $\mathcal{A}$ with domain $\mathscr{D}(\mathcal{A})$ such that for any $f\in \mathscr{D}(\mathcal{A})$,
\begin{eqnarray}\label{eqn3.28}
\mathcal{A}^{(n)} f(p,v)\to \mathcal{A} f(p,v),\quad \mbox{as }n\to\infty.
\end{eqnarray}
In what follows, we identify a candidate limit generator by analyzing the limit of the sequence $\mathcal{A}^{(n)}f$ for the special case where $f$ is the exponential function. To this end, let $\mathbb{U}_*:= \mathrm{i}\mathbb{R}\times \mathbb{C}_-$ with $ \mathrm{i}\mathbb{R}:=\{\mathrm{i}\cdot x:\ x\in\mathbb{R}\} $ and $\mathbb{C}_-:=\{x+\mathrm{i}\cdot y:\ (x,y)\in\mathbb{R}_+\times\mathbb{R}\}$, where $\mathrm{i} := \sqrt{-1}$.
For any $z=(z_1,z_2)\in \mathbb{U}_*$,
\begin{eqnarray}\label{eqn3.27}
\mathcal{A}^{(n)}\exp\{z_1p+z_2v\}
\!\!\!&=\!\!\!& \exp\{z_1p+z_2v\}\cdot \Big[\gamma_n\int_{\mathbb{U}} u_1\nu_\mathtt{i}^{(n)}(du)\cdot vz_1
+\gamma_n\int_{\mathbb{U}} (u_2-\beta^{(n)})\nu_\mathtt{i}^{(n)}(du) \cdot vz_2 \cr
\!\!\!&\!\!\!& +\gamma_n\int_{\mathbb{U}}\big( e^{\frac{1}{n}\langle z,u \rangle } -1\big)\nu_\mathtt{e}^{(n)}(du)
+n\gamma_n \int_{\mathbb{U}}\big( e^{\frac{1}{n}\langle z,u \rangle } -1-\frac{\langle z,u \rangle}{n}\big)\nu_\mathtt{i}^{(n)}(du)\cdot v\Big].
\end{eqnarray}
Convergence as $n\to\infty$ holds if all four terms in the above sum are convergent.
The first two terms converge if and only if the following condition holds.
\begin{assumption}\label{MainCondition01}
Assume that $\int_\mathbb{U}\|u\|\nu_{\mathtt{i}}^{(n)}(du)<\infty$ for any $n\geq 1$ and that there exists a constant $b:=(b_1,b_2)\in\mathbb{R}^2$ such that
\begin{eqnarray*}
-\gamma_n\int_{\mathbb{U}} u_1\nu_\mathtt{i}^{(n)}(du)\to b_1
\quad \mbox{and}\quad
\gamma_n\int_{\mathbb{U}} (\beta^{(n)}-u_2)\nu_\mathtt{i}^{(n)}(du)\to b_2.
\end{eqnarray*}
\end{assumption}
We split the last two terms in (\ref{eqn3.27}) into four parts according to the direction of price movements. Specifically, let $\mathbb{U}_\pm=\mathbb{R}_\pm \times\mathbb{R}_+$, and for $j\in\{+,-\}$ and $z \in \mathbb{U}_j$ let
\begin{eqnarray*}
G_{j}^{(n)}(z)
\!\!\!&:=\!\!\!& n\gamma_n \int_{\mathbb{U}_{j}}\Big( e^{-\frac{1}{n}\langle z,u\rangle}-1 +\frac{\langle z,u\rangle }{n}\Big)\nu_\mathtt{i}^{(n)}(du)
\quad \mbox{and} \quad
H_{j}^{(n)}(z)
:= \gamma_n \int_{\mathbb{U}_j}\big(e^{-\frac{1}{n}\langle z,u\rangle}-1\big)\nu_\mathtt{e}^{(n)}(du) .
\end{eqnarray*}
The next condition guarantees that the remaining terms on the right hand side of equation (\ref{eqn3.27}) converge.
\begin{condition}\label{MainCondition02}
There exists a constant $C>0$ such that $ \sup_{n \geq 1} n\gamma_n \int_{\mathbb{U}_+} \|u\|^2 \wedge \|u\| \nu_\mathtt{i}^{(n)}(dnu)\leq C$.
Moreover, $ G_{\pm}^{(n)}(\cdot)$ and $ H_{\pm}^{(n)}(\cdot)$ converge to continuous functions $G_{\pm}(\cdot)$ and $H_{\pm}(\cdot)$ respectively, as $n\to\infty$.
\end{condition}
The following proposition provides an exact representation of the limit functions $G_\pm(\cdot)$ and $H_{\pm}(\cdot)$.
\begin{proposition}\label{Representation}
Under Condition~\ref{MainCondition02}, the limit functions $G_\pm(\cdot)$ and $H_{\pm}(\cdot)$ have the following representations: for any $z=(z_1,z_2)\in \mathbb{U}_{\pm}$,
\begin{eqnarray*}
G_{\pm}(z)
\!\!\!&=\!\!\!& \langle z,\sigma^{\pm} z\rangle
+\int_{\mathbb{U}_{\pm}} (e^{-\langle z,u\rangle}-1+\langle z,u\rangle)\hat\nu_\mathtt{i}(du)
\quad \mbox{and}\quad
H_\pm(z)
=\langle z,a^{\pm}\rangle
+\int_{\mathbb{U}_\pm} (e^{-\langle z,u\rangle}-1)\hat\nu_\mathtt{e}(du),
\end{eqnarray*}
where $a^{\pm}\in\mathbb{U}_\pm$, $\sigma^{\pm}:=(\sigma^{\pm}_{j,l})_{j,l=1,2}$ is a symmetric, nonnegative definite matrix,
and $(\|u\|\wedge \|u\|^2)\hat{\nu}_\mathtt{i}(du)$ and $(\|u\|\wedge 1)\hat{\nu}_\mathtt{e}(du)$ are finite measures on $\mathbb{U}\setminus \{ 0 \}$.
\end{proposition}
\noindent{\it Proof.~~}
We prove the result for the function $G_+(\cdot)$. All other results can be established in the same way.
From Theorem~8.1 in \cite{Sato1999}, the representation of $G_+(\cdot)$ in terms of $\sigma^+$ and $\hat\nu_\mathtt{i}(du)$ is unique if it exists.
For any $n\geq 1$, let
\begin{eqnarray}\label{eqn3.04}
\theta_\mathtt{i}^{(n)}
:= n\gamma_n\int_{\mathbb{U}_+}(\|u\|^2\wedge 1) \nu_\mathtt{i}^{(n)}(dnu)
\quad \mbox{and}\quad
\mathbf{P}^{(n)}_\mathtt{i}(du):= n\gamma_n\frac{\|u\|^2\wedge 1}{\theta^{(n)}_\mathtt{i}}\cdot \mathbf{1}_{\{u\in\mathbb{U}_+\}}\cdot\nu_\mathtt{i}^{(n)}(dnu),
\end{eqnarray}
which is a probability law on the compact space $\overline{\mathbb{U}}_+:= \mathbb{U}_+\cup\{\infty\}$.
Thus, we can always find a subsequence that converges weakly to some limit probability law $\mathbf{P}_\mathtt{i}$ on $\overline{\mathbb{U}}_+$. We notice that $\mathbf{P}_\mathtt{i}$ may have a point mass at $0$. Among other things, we need to prove that $\mathbf{P}_\mathtt{i}\{\infty\}=0$.
Let $\mathcal{C}:= \big\{r\geq 0: \mathbf{P}_\mathtt{i}\{\|u\| =r \}=0 \big\};$ its complement is at most countable. For any $r\in\mathcal{C}$, we put $\mathbb{B}_r:=\{ u\in \mathbb{U} : \|u\|\leq r \}$ and decompose the function $G^{(n)}_+$ as
\begin{eqnarray}\label{eqn3.03}
{G^{(n)}_+(z)}
\!\!\!&=\!\!\!& \theta^{(n)}_\mathtt{i} \left( \langle\hat{\beta}^{(n)},z\rangle
+\sum_{j,l=1}^2\sigma^{+(n)}_{j,l}(r) z_jz_l+ \mathcal{J}^{(n)}(z,r) + \mathcal{E}^{(n)}(z,r) \right),
\end{eqnarray}
where
\begin{eqnarray}
\hat{\beta}^{(n)} \!\!\!&:= \!\!\!& \int_{\overline{\mathbb{U}}_+} (u-u\wedge 1)\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1},\\
\mathcal{J}^{(n)}(z,r) \!\!\!&:= \!\!\!& \int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}\big[e^{-\langle z,u\rangle}-1+\langle z,u\wedge 1\rangle \big]\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1}, \label{eqn3.05} \\
\sigma^{+(n)}_{j,l}(r)
\!\!\!&:=\!\!\!& \frac{1}{2}\int_{\mathbb{B}_r} (u_j\wedge 1) (u_l\wedge 1)\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1}\in[-1,1],\quad j,l=1,2, \label{eqn3.06}\\
\mathcal{E}^{(n)}(z,r)
\!\!\!&:=\!\!\!& \int_{\mathbb{B}_r}\Big[e^{-\langle z,u\rangle}-1+\langle z,u\wedge 1\rangle -\frac{1}{2}\sum_{j,l=1}^2 (u_j\wedge 1) (u_l\wedge 1)z_jz_l\Big]\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1}.\label{eqn3.07}
\end{eqnarray}
The weak convergence of $\{\mathbf{P}^{(n)}_\mathtt{i}\}_{n\geq 1}$ along with the fact that $\mathbf{P}_\mathtt{i}(\partial \mathbb{B}_r)=0$ $(r \in \mathcal{C})$ implies that
\begin{eqnarray}
\lim_{n \to \infty} \sigma^{+(n)}_{j,l}(r)
\!\!\!&=\!\!\!& \frac{1}{2}\int_{\mathbb{B}_r} (u_j\wedge 1) (u_l\wedge 1)\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1} =: \sigma^+_{j,l}(r) \nonumber, \\
\lim_{n\to\infty}\mathcal{E}^{(n)}(z,r)
\!\!\!& =\!\!\!& \int_{\mathbb{B}_r}\Big[e^{-\langle z,u\rangle}-1+\langle z,u\wedge 1\rangle -\frac{1}{2}\sum_{j,l=1}^2 (u_j\wedge 1) (u_l\wedge 1)z_jz_l\Big]\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1} =: \mathcal{E}(z,r). \nonumber
\end{eqnarray}
By Condition~\ref{MainCondition02}, the sequences $\{\theta^{(n)}_\mathtt{i}, n \in \mathbb{N} \}$ and $\{ \hat\beta^{(n)}, n \in \mathbb{N}\}$ are bounded. Hence, we may w.l.o.g.~assume that
\[
(\mathbf{P}^{(n)}_\mathtt{i}, \theta^{(n)}_\mathtt{i},\hat{\beta}^{(n)}) \to
(\mathbf{P}_\mathtt{i},\theta_\mathtt{i},\hat{\beta})
\]
as $n \to \infty$. If $\theta_\mathtt{i} = 0$, the convergence of $G^{(n)}_+$ implies $G_+=0$, due to the moment condition on the measures $\nu^{(n)}_\mathtt{i}$. Hence, we may assume that $\theta_\mathtt{i}>0$. Thus, as $n \to \infty$
\begin{eqnarray*}
\int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1}\to c_0(r)
\quad \mbox{and} \quad
\int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}
(u\wedge 1)\cdot\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1}\to \mathbf{c}(r)
\end{eqnarray*}
for some non-negative number $c_0(r)$ and vector $\mathbf{c}(r)\in\mathbb{R}^2_+$. As a result, for any $z \in \mathbb{R}^2_+$ (noting that $u \geq 0$) it follows from (\ref{eqn3.03}) that
\begin{eqnarray*}
\lim_{n \to \infty} \int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}e^{-\langle z,u\rangle}\frac{\mathbf{P}^{(n)}_\mathtt{i}(du)}{\|u\|^2\wedge 1} \!\!\!& = \!\!\!&
\int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}e^{-\langle z,u\rangle}\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1} \\
\!\!\!& = \!\!\!& \frac{G_+(z)}{\theta_\mathtt{i}}- \langle\hat{\beta},z\rangle -\sum_{j,l=1}^2 \sigma^+_{j,l}(r)z_jz_l-\mathcal{E}(z,r)+c_0(r)-\langle \mathbf{c}(r),z\rangle.
\end{eqnarray*}
Since $G_+(\cdot)$ is continuous by assumption and $\mathcal{E}(\cdot,r)$ is continuous by the dominated convergence theorem,
the function $ \int_{\overline{\mathbb{U}}_+\setminus\mathbb{B}_r}e^{-\langle \cdot,u\rangle}\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1}$ is continuous and hence $\mathbf{P}_\mathtt{i}$ is a probability law on $\mathbb{U}_+$, i.e. $\mathbf{P}_\mathtt{i}\{ \infty\}=0$. In particular,
\begin{eqnarray*}
\lim_{n \to \infty} \hat\beta^{(n)} = \hat{\beta}= \int_{\mathbb{U}_+} (u-u\wedge 1)\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1}.
\end{eqnarray*}
Moreover,
$\lim_{r\to 0+}\lim_{n\to\infty} \sigma^{+(n)}_{j,l}(r)=\hat\sigma^+_{j,l}$ and
$\lim_{r\to 0+}\lim_{n\to\infty} \mathcal{J}^{(n)}(z,r)
$= \int_{\mathbb{U}_+\setminus\{ 0\}} \big[e^{-\langle z,u\rangle}-1+\langle z,u\wedge 1\rangle\big]\frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1}$.
Substituting these back into (\ref{eqn3.03}), we obtain
\begin{eqnarray*}
G_+(z)
\!\!\!&=\!\!\!& \sum_{j,l=1}^2 \sigma^+_{j,l} z_jz_l
+ \int_{\mathbb{U}_+\setminus\{ 0\}} \big[e^{-\langle z,u\rangle}-1+\langle z,u \rangle \big]\hat\nu_\mathtt{i}(du),
\end{eqnarray*}
where $\sigma^+_{j,l}:=\theta_\mathtt{i}\hat\sigma^+_{j,l}$ and
$\hat\nu_\mathtt{i}(du)
:= \theta_\mathtt{i}\cdot \frac{\mathbf{P}_\mathtt{i}(du)}{\|u\|^2\wedge 1}$ on $\mathbb{U}_+\setminus\{ 0\}$.
$\Box$
Let $a:=(a_1,a_2)= a^++a^-$, and $\sigma:=(\sigma_{jl})= \sigma^++\sigma^-$, and for any $z=(z_1,z_2)\in \mathbb{U}_*$, let
\begin{eqnarray}
G(z)\!\!\!&:=\!\!\!& -\langle b,z \rangle+ G_+(-z)+G_-(-z)= -\langle b,z \rangle + \langle z,\sigma z\rangle
+\int_{\mathbb{U}} (e^{\langle z,u\rangle}-1-\langle z,u\rangle)\hat\nu_\mathtt{i}(du), \label{eqn3.25}\\
H(z)\!\!\!&:=\!\!\!& H_+(-z)+H_-(-z)= \langle a, z\rangle
+\int_{\mathbb{U}} (e^{\langle z,u\rangle}-1)\hat\nu_\mathtt{e}(du).\label{eqn3.26}
\end{eqnarray}
Under Conditions~\ref{MainCondition01} and \ref{MainCondition02} we thus have that
\begin{eqnarray}\label{eqn3.22}
\mathcal{A}^{(n)}\exp\{z_1p+z_2v\} \to \exp\{z_1p+z_2v\}\cdot \big[H(z) + v\cdot G(z)\big],\quad (p,v)\in\mathbb{U},\ z\in \mathbb{U}_*
\end{eqnarray}
as $n\to\infty$. It will turn out that the infinitesimal generator $\mathcal{A}$ of the limit process does indeed act on $f\in C^2(\mathbb{U})$ according to
\begin{eqnarray}\label{eqn3.31}
\mathcal{A} f(p,v)
\!\!\!&=\!\!\!& \langle(a-bv), \nabla f(p,v)\rangle +v\cdot\langle \nabla , \sigma \nabla f(p,v) \rangle
+ \int_{\mathbb{U}}\big[f((p,v)+u)-f(p,v)\big]\hat\nu_\mathtt{e}(du) \cr
\!\!\!&\!\!\!&
+v\cdot \int_{\mathbb{U}}\big[f((p,v)+u)-f(p,v)-\langle u, \nabla f(p,v)\rangle\big]\hat\nu_\mathtt{i}(du).
\end{eqnarray}
\begin{remark}\label{remark01}
As pointed out in the proof of the previous proposition, the measure $\mathbf{P}_{\mathtt{i}}$ may have a point mass at $0$. Likewise, the corresponding measure $\mathbf{P}_{\mathtt{e}}$ arising in the analysis of the functions $H_\pm$ may have a point mass at $0$ as well. If there is no point mass at zero, then $\sigma^\pm$, respectively $a^\pm$, are zero. Loosely speaking, the quantities $\sigma$ and $a$ account for the arrival of infinitely many induced, respectively exogenous orders of ``insignificant magnitude''. This turns out to be very important for the analysis of the jump dynamics of the scaling limit.
\end{remark}
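The convergence (\ref{eqn3.22}) is easy to check numerically for concrete specifications. In the Python sketch below (illustrative only) we take the hypothetical choices $\gamma_n=n$, $\nu^{(n)}_\mathtt{i}=\delta_{(0,c)}$, $\nu^{(n)}_\mathtt{e}=\delta_{(1,0)}$ and $\beta^{(n)}=c$, for which $b=0$, $H(z)=z_1$ and $G(z)=c^2z_2^2/2$: the induced marks vanish after rescaling, so the limit has a purely diffusive volatility, in line with Remark~\ref{remark01}. The test point $z$ is taken real for simplicity.
\begin{verbatim}
import numpy as np

c = 0.8                      # induced mark u = (0, c); beta^(n) = c; gamma_n = n
z1, z2 = 0.3, -0.5           # real test point
p, v = 0.0, 2.0

def gen_n(n):
    """A^(n) exp(z1*p + z2*v) computed directly from (3.01)."""
    gam = n
    drift   = -v * gam * c * z2
    induced = v * n * gam * (np.exp(z2 * c / n) - 1.0)
    exo     = gam * (np.exp(z1 * 1.0 / n) - 1.0)     # exogenous mark u = (1, 0)
    return (drift + induced + exo) * np.exp(z1 * p + z2 * v)

H = z1                           # a = (1, 0) and no exogenous jumps survive in the limit
G = 0.5 * c**2 * z2**2           # <z, sigma z> with sigma_22 = c^2 / 2
limit = (H + v * G) * np.exp(z1 * p + z2 * v)

for n in [10, 100, 1000, 10000]:
    print(n, gen_n(n), limit)    # gen_n(n) approaches the limit as n grows
\end{verbatim}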
Before we prove the weak convergence of the rescaled market models, we provide an alternative
link between the parameters $(1/n,\beta^{(n)}; \nu_{\mathtt{e}/\mathtt{i}}^{(n)})$ and $(a,b,\sigma,\hat{\nu}_{\mathtt{e}/\mathtt{i}})$ that clarifies their interpretation as the drift, diffusion and jump measure of the limiting model.
\begin{proposition}\label{MainAssumptionNew}
Condition~\ref{MainCondition02} holds if and only if
$ \sup_{n\geq 1}n\gamma_n \int_{\mathbb{U}_+} \|u\|^2 \wedge \|u\| \nu_\mathtt{i}^{(n)}(dnu)<\infty$ and there exist parameters $(a^\pm,\sigma^\pm, \hat{\nu}_{\mathtt{e}/\mathtt{i}})$ such that the following holds.
\begin{enumerate}
\item[(a)] As $n\to\infty$,
\begin{eqnarray*}
\gamma_n\int_{\mathbb{U}_\pm}\Big(\frac{u}{n}\wedge 1 \Big)\nu^{(n)}_\mathtt{e}(du)
\to
a^{\pm}+\int_{\mathbb{U}_\pm}(u\wedge 1)\hat{\nu}_\mathtt{e}(du)
\end{eqnarray*}
and
\begin{eqnarray*}
n\gamma_n\int_{\mathbb{U}_\pm} \Big(\frac{u}{n}\wedge 1\Big)^{\mathrm{T}}\Big(\frac{u}{n}\wedge 1\Big)\nu_{\mathtt{i}}^{(n)}(du)
\to
2\sigma^\pm +\int_{\mathbb{U}_\pm}(u\wedge 1)^{\mathrm{T}}(u\wedge 1) \hat\nu_{\mathtt{i}}(du);
\end{eqnarray*}
\item[(b)] For any $f_1,f_2\in C_b(\mathbb{R}^2)$ that satisfy $f_k(u)=O(\|u\|^k)$ as $\|u\|\to 0$ for $k=1,2$, we have, as $n\to\infty$,
\begin{eqnarray*}
\gamma_n\int_{\mathbb{U}} f_1\Big(\frac{u}{n}\Big)\nu_\mathtt{e}^{(n)}(du)
\to
\int_{\mathbb{U}} f_1(u)\hat\nu_{\mathtt{e}}(du)
\quad\mbox{and}\quad
n\gamma_n\int_{\mathbb{U}} f_2\Big(\frac{u}{n}\Big)\nu_\mathtt{i}^{(n)}(du)
\to
\int_{\mathbb{U}} f_2(u)\hat\nu_\mathtt{i}(du).
\end{eqnarray*}
\end{enumerate}
\end{proposition}
\noindent{\it Proof.~~}
A direct computation shows that conditions (a) and (b) imply the convergence of the terms (\ref{eqn3.05})-(\ref{eqn3.07}) to continuous functions, which immediately implies the convergence of $G^{(n)}_+$ to $G_+$. Using the same arguments as in the proof of the previous proposition, the converse follows from a direct computation using the convergence along a subsequence of $\{(\mathbf{P}^{(n)}_\mathtt{i}, \theta^{(n)}_\mathtt{i})\}_{n\geq 1}$.
$\Box$
\subsection{Weak convergence}
Since the linear span of the set $\{ e^{\langle z,\cdot\rangle}:z\in\mathbb{U}_* \}$ is not dense in the space of continuous functions on $\mathbb{U}$ vanishing at infinity, which is a subspace of $\mathscr{D}(\mathcal{A})$, we cannot prove (\ref{eqn3.28}) directly. Instead, we prove the weak convergence of the rescaled market models using the general convergence results for infinite dimensional stochastic integrals established by Kurtz and Protter \cite{KurtzProtter1996}. To this end, we consider the separable Banach space $L^{1,2}(\mathbb{R}_+):=L^{1}(\mathbb{R}_+)\cap L^{2}(\mathbb{R}_+)$, endowed with the norm $\|\cdot\|_{L^{1,2}}:= \|\cdot\|_{L^{1}} \vee \|\cdot\|_{L^{2}}$ and the Haar basis $\{\varphi_j^k:j\geq -1,k\geq 0 \}$. The Haar basis is defined by $\varphi_{-1}^k(x)=\mathbf{1}_{[k,k+1)}(x)$ and
\begin{eqnarray*}
\varphi_j^k(x)
:=
\left\{\begin{array}{ll}
2^{j/2}, & x\in [k\cdot 2^{-j},(k+1/2)\cdot 2^{-j});
\cr
-2^{j/2}, & x\in [(k+1/2)\cdot 2^{-j},(k+1)\cdot 2^{-j});
\cr
0, &\mbox{else},
\end{array}
\right. \quad j,k\geq 0.
\end{eqnarray*}
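As a sanity check on the normalisation, the following Python sketch (illustrative only) evaluates the Haar functions on a dyadic grid and verifies numerically that they are orthonormal in $L^2(\mathbb{R}_+)$.
\begin{verbatim}
import numpy as np

def haar(j, k, x):
    """The Haar function phi_j^k defined above, evaluated on an array x."""
    x = np.asarray(x, dtype=float)
    if j == -1:
        return np.where((x >= k) & (x < k + 1), 1.0, 0.0)
    lo, mid, hi = k * 2.0**-j, (k + 0.5) * 2.0**-j, (k + 1) * 2.0**-j
    return np.where((x >= lo) & (x < mid), 2.0**(j / 2),
                    np.where((x >= mid) & (x < hi), -2.0**(j / 2), 0.0))

# crude check of orthonormality on [0, 4) with a dyadic grid
x = np.linspace(0.0, 4.0, 2**16, endpoint=False)
dx = x[1] - x[0]
fam = [(-1, 0), (-1, 1), (0, 0), (0, 1), (1, 0), (2, 3)]
gram = np.array([[np.sum(haar(j1, k1, x) * haar(j2, k2, x)) * dx
                  for (j2, k2) in fam] for (j1, k1) in fam])
print(np.round(gram, 3))   # approximately the 6 x 6 identity matrix
\end{verbatim}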
Let the separable Banach space $\mathbb{H}:= \mathbb{R}^2 \times L^{1,2}(\mathbb{R}_+)$ be endowed with norm $\|\cdot\|_{\mathbb{H}}$ defined by
\begin{eqnarray*}
\|Z\|_{\mathbb{H}}\!\!\!&:=\!\!\!& |Z_1|+ |Z_2| +\|Z_3\|_{L^{1,2}}, \quad \mbox{for any} \quad Z:=(Z_1,Z_2,Z_3)\in \mathbb{H}.
\end{eqnarray*}
Following Kurtz and Protter \cite{KurtzProtter1996}, we say that an $(\mathscr{F}_t)$-adapted process $\{\mathbf{Y}(t):= (\mathbf{Y}_1(t),\mathbf{Y}_2(t),\mathbf{Y}_3(t)):t\geq 0\}$ is an {\it $\mathbb{H}^\#$-semimartingale} if $\{\mathbf{Y}_1(t):t\geq 0\}$ and $\{\mathbf{Y}_2(t):t\geq 0\}$ are two $\mathbb{R}$-semimartingales, and $\{\mathbf{Y}_3(t):t\geq 0\}$ is an $L^{1,2}(\mathbb{R}_+)^\#$-semimartingale random measure on $\mathbb{R}_+$, i.e.
\mrm{b}}\def\p{\mrm{p}egin{enumerate}
\item[(1)] for any $f \in L^{1,2}(\mathbb{R}_+)$, $\{\mathbf{Y}_3(f,t):t\geq 0\}$ is a c\`adl\`ag $(\mathscr{F}_t)$-semimartingale with $\mathbf{Y}_3(f,0)=0$;
\item[(2)] for any $f_1,\cdots f_m\in L^{1,2}(\mathbb{R}_+)$ and $w_1,\cdots, w_m\in\mathbb{R}$, $\mathbf{Y}_3(\sum_{i=1}^m w_i f_i,t)=\sum_{i=1}^mw_i\mathbf{Y}_3(f_i,t)$ a.s. for any $t\geq 0$.
\end{enumerate}
Let us recall the definition of stochastic integrals w.r.t.~the $\mathbb{H}^\#$-semimartingale $\mathbf{Y}$. To this end, let
$\mathcal{S}^0_{\mathbb{H}}$ be the collection of $\mathbb{H}$-valued, simple processes of the form
\begin{eqnarray}\label{eqn3.11}
\mathbf{X}(t):=(\mathbf{X}_1(t),\mathbf{X}_2(t),\mathbf{X}_3(t))=\Big(\sum \xi_{1,k}\cdot\mathbf{1}_{[t_k,t_{k+1})}(t),\sum \xi_{2,k}\cdot\mathbf{1}_{[t_k,t_{k+1})}(t),\sum \xi_{3,k,i,j}\cdot\mathbf{1}_{[t_k,t_{k+1})}(t)\cdot\varphi_i^j \Big),
\end{eqnarray}
where $0= t_1<t_2<\cdots$, and $\{\xi_{1,k}\}$, $\{\xi_{2,k}\}$, $\{\xi_{3,k,i,j}\}$ are sequences of bounded, $\mathbb{R}$-valued, $(\mathscr{F}_{t_k})$-adapted random variables, all but finitely many of which are zero.
For any $\mathbf{X} \in \mathcal{S}^0_{\mathbb{H}}$, the stochastic integral w.r.t.~$\mathbf{Y}$ is defined as
\begin{eqnarray*}
\int_0^t \mathbf{X}(s-)\cdot \mathbf{Y}(ds) \!\!\!&=\!\!\!& \sum_{k=1}^\infty \xi_{1,k}[\mathbf{Y}_1(t_{k+1}\wedge t)- \mathbf{Y}_1(t_{k}\wedge t)] + \sum_{k=1}^\infty \xi_{2,k}[\mathbf{Y}_2(t_{k+1}\wedge t)- \mathbf{Y}_2(t_{k}\wedge t)]\cr
\!\!\!&\!\!\!& +\sum_{k=1}^\infty\sum_{i=-1}^\infty\sum_{j=0}^\infty \xi_{3,k,i,j}\cdot[\mathbf{Y}_3(t_{k+1}\wedge t, \varphi_i^j)- \mathbf{Y}_3(t_{k}\wedge t,\varphi_i^j)].
\end{eqnarray*}
Let $\hat{\mathbb{H}}$ be the completion of the linear space $\mathcal{S}^0_{\mathbb{H}}$ with respect to the norm $\|\cdot\|_\mathbb{H}$, i.e. for any $\{\mathbf{X}(t):t\geq 0\}\in\hat{\mathbb{H}}$, there exists a sequence of simple processes $\{\mathbf{X}_n(t):t\geq 0\}\in \mathcal{S}^0_{\mathbb{H}}$ such that for any $T\geq 0$,
\begin{eqnarray*}
\lim_{n\to\infty} \int_0^T \| \mathbf{X}(t)-\mathbf{X}_n(t) \|_\mathbb{H}dt =0.
\end{eqnarray*}
The stochastic integral for $\mathbf{X}(\cdot)\in\hat{\mathbb{H}}$ is then defined as
\begin{eqnarray*}
\int_0^t \mathbf{X}(s-)\cdot \mathbf{Y}(ds) := \lim_{n\to\infty}\int_0^t \mathbf{X}_n(s-)\cdot \mathbf{Y}(ds).
\end{eqnarray*}
We say that a $(\mathscr{F}_{t})$-adapted process $\{\mathbf{Y}(t):t\geq 0 \}$ is a {\it standard $\mathbb{H}^\#$-semimartingale} if $\int_0^\cdot \mathbf{X}(s-)\cdot \mathbf{Y}(ds)\in\mathbf{D}([0,\infty);\mathbb{R}^2)$ for any $\mathbf{X}\in \mathcal{S}^0_{\mathbb{H}}$ and
\begin{eqnarray}\label{eqn3.10}
\mathcal{H}_t^0:= \left\{ \Big\|\int_0^t \mathbf{X}(s-)\cdot \mathbf{Y}(ds) \Big\|: \mathbf{X}\in \mathcal{S}^0_{\mathbb{H}},\ \sup_{s\leq t} \|\mathbf{X}(s)\|_{\mathbb{H}}\leq 1\right\}
\end{eqnarray}
is stochastically bounded for each $t\geq 0$.
We are now ready to provide an alternative representation of the market model (\ref{eqn3.02}).
For any $t\geq 0$ and $n\geq 1$, let us define $\mathbf{Y}^{(n)}(t):= (\mathbf{Y}^{(n)}_1(t),\mathbf{Y}^{(n)}_2(t),\mathbf{Y}^{(n)}_3(t))$ by
\begin{eqnarray*}
\mathbf{Y}^{(n)}_1(t):=\gamma_n\left( \int_{\mathbb{U}}u\nu^{(n)}_{\mathtt{i}}(du)-
\left( \begin{array}{c} 0\\ \beta^{(n)}
\end{array} \right)\right) \cdot t, \quad
\mathbf{Y}^{(n)}_2(t):= \int_0^t\int_{\mathbb{U}} \frac{u}{n}N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)
\end{eqnarray*}
and
\begin{eqnarray*}
\mathbf{Y}^{(n)}_3(t):=\int_0^t \int_{\mathbb{U}} \frac{u}{n}\tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx).
\end{eqnarray*}
The process $\{\mathbf{Y}^{(n)}_1(t):t\geq 0\}$ is a deterministic drift process, $\{\mathbf{Y}^{(n)}_2(t):t\geq 0\}$ is a compound Poisson process, and
$\{ \mathbf{Y}^{(n)}_3(t):t\geq 0\}$ is an $L^{1,2}(\mathbb{R}_+)^\#$-martingale random measure on $\mathbb R_+$.
We can rewrite the market model (\ref{eqn3.02}) as
\begin{equation} \label{SDE}
\Big( \begin{array}{c}
P^{(n)}(t)\\ V^{(n)}(t)
\end{array} \Big)
= \Big( \begin{array}{c}
P^{(n)}(0)\\ V^{(n)}(0)
\end{array} \Big)+ \int_0^t \mathbf{F}(V^{(n)}(s-)) \cdot \mathbf{Y}^{(n)}(ds),
\end{equation}
where the function $\mathbf{F}: \mathbb{R}_+ \to (\mathbb{R}_+^{2}\times L^{1,2}(\mathbb{R}_+))^2$ is defined by
\[
\mathbf{F}(v) := \left( \begin{array}{lll} v & 1 & \mathbf{1}_{\{\cdot< v\}} \\ v & 1 & \mathbf{1}_{\{\cdot< v\}}
\end{array} \right).
\]
In the next subsection, we prove the weak convergence of the sequence of integrators $\{\mathbf{Y}^{(n)}(t):t\geq 0\}_{n\geq 1} $ in the space $\mathbf{D}([0,\infty);\mathbb{H}^\#)$. Subsequently, we show that this implies convergence of the market models.
\subsubsection{Convergence of $\mathbf{Y}^{(n)}$}
Since the components of $\{\mathbf{Y}^{(n)}(t):t\geq 0\}$ are mutually independent, it suffices to prove the weak convergence of the processes $\{\mathbf{Y}^{(n)}_i(t):t\geq 0\}_{n\geq 1} $ $(i=1,2,3)$ separately.
The convergence of $\{\mathbf{Y}^{(n)}_1(t):t\geq 0\}_{n\geq 1} $ and $\{\mathbf{Y}^{(n)}_2(t):t\geq 0\}_{n\geq 1}$ follows from \cite[Theorem 3.4]{JacodShiryaev2003}.
\begin{lemma}\label{Convergence01}
Under Conditions~\ref{MainCondition01} and \ref{MainCondition02}, the sequence $\{(\mathbf{Y}^{(n)}_1(t),\mathbf{Y}^{(n)}_2(t)):t\geq 0\}_{n\geq 1} $ converges weakly to $\{(\mathbf{Y}_1(t),\mathbf{Y}_2(t)):t\geq 0\}$ in $\mathbf{D}([0,\infty);\mathbb{R}^2)$ as $n\to\infty$, where
\begin{eqnarray*}
\mathbf{Y}_1(t)=-bt
\quad \mbox{and}\quad
\mathbf{Y}_2(t)= at+ \int_0^t \int_{\mathbb{U}} u N_1(ds,du),
\end{eqnarray*}
and $N_1(ds,du)$ is a Poisson random measure on $(0,\infty)\times \mathbb{U}$ with intensity $ds\hat\nu_{\mathtt{e}}(du)$.
\end{lemma}
We now turn to the convergence of the sequence $\{\mathbf{Y}^{(n)}_3(t):t\geq 0\}_{n\geq 1}$. Following Kurtz and Protter \cite{KurtzProtter1996} we say that this sequence converges weakly to $\{\mathbf{Y}_3(t):t\geq 0\}$ as $n \to \infty$ and write {$\mathbf{Y}_3^{(n)} \Rightarrow \mathbf{Y}_3$} if
\[
\left( \mathbf{Y}^{(n)}_3(f_1,\cdot),\cdots, \mathbf{Y}^{(n)}_3(f_m,\cdot) \right) \to
\left( \mathbf{Y}_3(f_1,\cdot),\cdots, \mathbf{Y}_3( f_m,\cdot) \right)
\]
weakly in {$\mathbf{D}([0,\infty),\mathbb R^{2\times m})$} for any $f_1, \cdots, f_m \in L^{1,2}(\mathbb R_+)$ and any $m \in \mathbb N$. The following lemma establishes the weak convergence. It uses the fact that $2\sigma$ is symmetric and nonnegative-definite so that the square root $\sqrt{2\sigma}$ is well defined.
\begin{lemma}\label{Convergence}
Under Conditions~\ref{MainCondition01} and \ref{MainCondition02}, we have that $\mathbf{Y}_3^{(n)} \Rightarrow \mathbf Y_3$ where
$\{\mathbf{Y}_3(t):t\geq 0\}$ is an $L^{1,2}(\mathbb{R}_+)^\#$-martingale with the following representation:
\begin{eqnarray*}
\mathbf{Y}_3(t)=
\int_0^t \sqrt{2\sigma} \Big(\begin{array}{c}
W_1(ds,dx)\\ W_2(ds,dx)
\end{array} \Big) + \int_0^t \int_{\mathbb{U}} u \tilde{N}_0(ds,du,dx).
\end{eqnarray*}
Here, $W_1(dt,dx)$ and $W_2(dt,dx)$ are orthogonal Gaussian white noises on $(0,\infty)^2$ with intensities $dsdx$, $N_0(ds,du,dx)$ is a Poisson random measure on $(0,\infty)\times \mathbb{U} \times \mathbb{R}_+$ with intensity $ds\hat\nu_\mathtt{i}(du)dx$, and $$\tilde{N}_0(ds,du,dx):=N_0(ds,du,dx)-ds\hat\nu_{\mathtt{i}}(du)dx.$$
\end{lemma}
\noindent{\it Proof.~~}
We prove the weak convergence of $\{\mathbf{Y}_3^{(n)}(f,t):t\geq 0\}_{n\geq 1}$ for any $f \in L^{1,2}(\mathbb R_+)$; the general case can be proved in the same way. From Kurtz's criterion \cite[p. 137]{EthierKurtz1986}, tightness of this sequence follows from the following estimate: there exists a constant $C>0$, independent of $n$, such that for any $t\in[0,1] $ and $\epsilon>0$,
\begin{eqnarray}\label{eqn3.35}
\mathbf{E}_{\mathscr{F}_t}\big[\|\mathbf{Y}_3^{(n)}(f,t+\epsilon)-\mathbf{Y}_3^{(n)}(f,t)\|\big] \leq C(\epsilon+\sqrt{\epsilon}).
\end{eqnarray}
In order to verify this inequality, we first apply Jensen's inequality to get
\begin{eqnarray}\label{eqn3.36}
\mathbf{E}_{\mathscr{F}_t}\big[\|\mathbf{Y}_3^{(n)}(f,t+\epsilon)-\mathbf{Y}_3^{(n)}(f,t)\| \big]
\!\!\!&=\!\!\!& \mathbf{E}\Big[\Big\|\int_t^{t+\epsilon} \int_{\mathbb{U}} \int_0^\infty f(x) \frac{u}{n}\tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx)\Big\|\Big]\cr
\!\!\!&\leq \!\!\!&\mathbf{E}\Big[\Big\|\int_t^{t+\epsilon} \int_{\|u\|\leq 1} \int_0^\infty f(x) \frac{u}{n}\tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx)\Big\|^2\Big]^{1/2}\cr
\!\!\!&\!\!\!& +\mathbf{E}\Big[\Big\|\int_t^{t+\epsilon} \int_{\|u\|> 1} \int_0^\infty f(x) \frac{u}{n}\tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx)\Big\|\Big].
\end{eqnarray}
Applying the Burkholder-Davis-Gundy inequality to the first expectation on the right side of the last inequality, this term can be bounded by
\begin{eqnarray*}
C\mathbf{E}\Big[\int_t^{t+\epsilon} \int_{\|u\|\leq 1} \int_0^\infty |f(x)|^2 \frac{\|u\|^2}{n^2}{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx)\Big]
\!\!\!&\leq\!\!\!& C\cdot \epsilon\cdot \|f\|^2_{L^2} \cdot n\gamma_n\int_{\|u\|\leq 1}\|u\|^2\nu_\mathtt{i}^{(n)}(dnu).
\end{eqnarray*}
Moreover, the second term on the right side of the last inequality in (\ref{eqn3.36}) can be bounded by
\begin{eqnarray*}
2\epsilon \cdot \|f\|_{L^1} n\gamma_n\int_{\|u\|> 1} \|u\|\nu^{(n)}_{\mathtt{i}}(dnu).
\end{eqnarray*}
Taking these two upper estimates back into (\ref{eqn3.36}) and using Condition~\ref{MainCondition02} yields (\ref{eqn3.35}). We may hence assume that $\{\mathbf{Y}^{(n)}_3(f,t):t\geq 0\}_{n\geq 1}$ converges weakly in $\mathbf{D}([0,\infty);\mathbb R^2)$
along a subsequence. By Skorokhod's representation theorem, we may actually assume almost sure convergence and need to prove that
\begin{eqnarray*}
\mathbf{Y}_3^{(n)}(f,t) \!\!\!&\to\!\!\!& \int_0^t\int_0^\infty f(x)\sqrt{2\sigma}\Big(
\begin{array}{c}
W_1(ds,dx)\\ W_2(ds,dx)
\end{array}\Big)
+\int_0^t\int_{\mathbb{U}} \int_0^\infty f(x) u\tilde{N}_0(ds,du,dx)=:\mathbf{Y}_3(f,t).
\end{eqnarray*}
The next step, then, consists in proving that the limit is a local martingale; subsequently we apply a general representation result for local martingales to conclude. In order to prove the local martingale property of the limit process, let $\mathcal{A}^{(n)}_f$ be the infinitesimal generator of the $\mathbb{R}^2$-valued martingale $\{\mathbf{Y}_3^{(n)}(f,t):t\geq 0\}$. It acts on $\phi\in C_b^2(\mathbb{R}^2)$ according to
\begin{eqnarray}\label{eqn3.17}
\mathcal{A}^{(n)}_f\phi(z)\!\!\!&=\!\!\!& n\gamma_n\int_0^\infty dx\int_{\mathbb{U}} \big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle \big] \nu^{(n)}_\mathtt{i}(dnu)\cr
\!\!\!&=\!\!\!& n\gamma_n\int_0^\infty dx\int_{\mathbb{U}_+} \big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle \big] \nu^{(n)}_\mathtt{i}(dnu)\cr
\!\!\!&\!\!\!& + n\gamma_n\int_0^\infty dx\int_{\mathbb{U}_-} \big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle \big] \nu^{(n)}_\mathtt{i}(dnu).
\end{eqnarray}
We can rewrite the two terms on the right side of the last equality as
\begin{eqnarray*}
\lefteqn{n\gamma_n\int_0^\infty dx\int_{\mathbb{U}_\pm} \big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle \big] \nu^{(n)}_\mathtt{i}(dnu)}\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!&
\frac{1}{2}\sum_{j,l=1}^2\frac{\partial^2 \phi(z)}{\partial z_j\partial z_l} n\gamma_n\int_0^\infty|f(x)|^2 dx\int_{\mathbb{U}_\pm} (u_j\wedge 1)(u_l\wedge 1) \nu^{(n)}_\mathtt{i}(dnu)\cr
\!\!\!&\!\!\!& +
n\gamma_n\int_0^\infty dx\int_{\mathbb{U}_\pm} \Big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle\cr
\!\!\!&\!\!\!&\quad - \frac{1}{2}\sum_{j,l=1}^2\frac{\partial^2 \phi(z)}{\partial z_j\partial z_l} |f(x)|^2(u_j\wedge 1)(u_l\wedge 1)\Big] \nu^{(n)}_\mathtt{i}(dnu).
\end{eqnarray*}
From Proposition~\ref{MainAssumptionNew}, as $n\to\infty$ the first term on the right side of the equality above converges to
\begin{eqnarray*}
\sum_{j,l=1}^2 \frac{\partial^2 \phi(z)}{\partial z_j\partial z_l}\int_0^\infty |f(x)|^2 dx
\Big[ \sigma^{\pm}_{j,l} +\frac{1}{2}
\int_{\mathbb{U}_\pm} (u_j\wedge 1)(u_l\wedge 1) \hat\nu_\mathtt{i}(du)\Big],
\end{eqnarray*}
and the second term converges to
\begin{eqnarray*}
\int_0^\infty dx\int_{\mathbb{U}_\pm} \Big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle - \frac{1}{2}\sum_{j,l=1}^2\frac{\partial^2 \phi(z)}{\partial z_j\partial z_l} |f(x)|^2(u_j\wedge 1)(u_l\wedge 1)\Big] \hat\nu_\mathtt{i}(du).
\end{eqnarray*}
Hence, the generator $\mathcal{A}^{(n)}_f$ converges to
the linear operator $\mathcal{A}_f$ defined by
\begin{eqnarray}\label{eqn3.55}
\mathcal{A}_f\phi(z)\!\!\!&:=\!\!\!& \sum_{j,l=1}^2 \sigma_{j,l}\frac{\partial^2 \phi(z)}{\partial z_j \partial z_l}\int_0^\infty |f(x)|^2 dx + \int_0^\infty dx\int_{\mathbb{U}} \Big[\phi(z+f(x)u)-\phi(z)-\langle \nabla\phi(z),f(x)u \rangle \Big] \hat\nu_\mathtt{i}(du).
\end{eqnarray}
Applying It\^o's formula to the function $e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,\cdot)\rangle}$, we see that
\begin{eqnarray*}
\mathcal{M}^{(n)}_t(f):=e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,t)\rangle}-\int_0^t \mathcal{A}^{(n)}_f e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,s)\rangle}ds
\end{eqnarray*}
is an $(\mathscr{F}_t)$-local martingale. Since
\begin{eqnarray} \label{eqn4.57}
\sup_{s\in[0,t]}|\mathcal{A}^{(n)}_f e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,s)\rangle}|
\!\!\!&=\!\!\!&\sup_{s\in[0,t]}|e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,s)\rangle}|\cdot
n\gamma_n\int_0^\infty dx\int_{\mathbb{U}} \big| e^{\mathrm{i}\langle z,f(x)u\rangle} -1-\mathrm{i}\langle z,f(x)u \rangle \big| \nu^{(n)}_\mathtt{i}(dnu)\cr
\!\!\!&\leq\!\!\!& n\gamma_n\int_0^\infty dx\int_{\mathbb{U}} |\langle z,f(x)u \rangle|^2\wedge |\langle z,f(x)u \rangle| \nu^{(n)}_\mathtt{i}(dnu)\cr
\!\!\!&\leq\!\!\!& \int_0^\infty\|f(x)z\|^2\vee\|f(x)z\| dx \cdot n\gamma_n \int_{\mathbb{U}}
\|u \|^2\wedge \|u\| \nu^{(n)}_\mathtt{i}(dnu),
\end{eqnarray}
and because the last term is uniformly bounded in $n$, due to Condition~\ref{MainCondition02}, the process $\{\mathcal{M}^{(n)}_t(f):t\geq 0 \}$ is in fact a true martingale.
By the almost sure convergence in the Skorokhod topology we know that $\mathbf{Y}_3^{(n)}(f,t)\to \mathbf{Y}_3(f,t)$ a.s.~for all but countably many $t > 0$. Thus, it follows from (\ref{eqn3.55}) and the dominated convergence theorem that
\[
\int_0^t \mathcal{A}^{(n)}_f e^{\mathrm{i}\langle z,\mathbf{Y}_3^{(n)}(f,s)\rangle}ds \to
\int_0^t \mathcal{A}_f e^{\mathrm{i}\langle z,\mathbf{Y}_3(f,s)\rangle}ds
\]
almost surely in the space $\mathbf{C}([0,\infty), \mathbb C)$. This implies that for any $z\in\mathbb{R}^2$,
\begin{eqnarray*}
\mathcal{M}^{(n)}_t(f) \to \mathcal{M}_t(f) := e^{\mathrm{i}\langle z,\mathbf{Y}_3(f,t)\rangle}-\int_0^t \mathcal{A}_f e^{\mathrm{i}\langle z,\mathbf{Y}_3(f,s)\rangle}ds
\end{eqnarray*}
almost surely in the Skorokhod space ${\mathbf D}([0,\infty), \mathbb C)$ and hence that $\mathcal{M}^{(n)}_t(f) \to \mathcal{M}_t(f)$ almost surely for almost all $t > 0$.
In view of (\ref{eqn4.57}) we also have convergence in $L^1$ for almost all $t >0$.
Since $\{\mathcal{M}_t(f): t \geq 0\}$ is right-continuous, the martingale property of $\{\mathcal{M}^{(n)}_t(f): t \geq 0\}$ carries over to $\{\mathcal{M}_t(f): t \geq 0\}$.
By \cite[Theorem 2.42]{JacodShiryaev2003} this is equivalent to the local martingale property of $\{\mathbf{Y}_3(f,t):t\geq 0\}$.
We now prove the desired representation. The local martingale $\{\mathbf{Y}_3(f,t):t\geq 0\}$ admits the canonical representation
\begin{eqnarray}\label{eqn3.18}
\mathbf{Y}_3(f,t)\!\!\!&=\!\!\!& M^c_t(f)+ \int_0^t \int_{\mathbb{U}} u' \tilde{N}_f^d(ds,du'),
\end{eqnarray}
where ${N}_f^d(ds,du')$ is an integer-valued random measure on $[0,\infty)\times\mathbb{U}$
with compensator $ds \nu^d_\mathtt{i}(du') $ and
\begin{eqnarray*}
\nu^d_\mathtt{i}(du')= \int_0^\infty dx\int_\mathbb{U} \mathbf{1}_{\{f(x)u\in du'\}} \hat\nu_\mathtt{i}(du),
\end{eqnarray*}
and $\{ M^c_t(f):t\geq 0\}$ is a continuous, $\mathbb{R}^2$-valued local martingale with quadratic covariation process
\begin{eqnarray*}
\langle {M}_j^c(f),{M}_l^c(f)\rangle_t
=t\cdot 2\sigma_{j,l}\int_0^\infty |f(x)|^2 dx,\quad j,l=1,2.
\end{eqnarray*}
Similarly, for any $f,g\in L^{1,2}(\mathbb{R}_+)$, we have
\begin{eqnarray*}
\langle {M}_j^c(f),{M}_l^c(g)\rangle_t
=t\cdot 2\sigma_{j,l}\int_0^\infty f(x)g(x) dx, \quad j,l=1,2.
\end{eqnarray*}
By \cite[Theorem III-7]{ElKaroui1990}, there exist two orthogonal Gaussian white noises $W_1(ds,dx)$ and $W_2(ds,dx)$ on $(0,\infty)^2$ with intensity $dsdx$ such that for any $f\in L^{1,2}(\mathbb{R}_+)$,
\begin{eqnarray*}
{M}_{j,t}^c(f)= \int_0^t \int_{\mathbb{R}_+} f(x)\sqrt{2\sigma}_{j1}W_1(ds,dx) +\int_0^t \int_{\mathbb{R}_+} f(x) \sqrt{2\sigma}_{j2}W_2(ds,dx),\quad j=1,2.
\end{eqnarray*}
By \cite[Theorem 7.4]{IkedaWatanabe1989}, there exists a Poisson random measure $N_0(ds,du,dx)$ on $(0,\infty)\times \mathbb{U} \times \mathbb{R}_+$ with intensity $ds\hat\nu_\mathtt{i}(du)dx$ such that
\begin{eqnarray*}
\int_0^t \int_{\mathbb{U}} u \tilde{N}_f^d(ds,du)= \int_0^t \int_{\mathbb{U}}\int_0^\infty f(x) \cdot u \tilde{N}_0(ds,du,dx).
\end{eqnarray*}
Taking these two representations back into (\ref{eqn3.18}) yields the desired result.
$\Box$
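For a fixed test function the limit integrator is elementary to sample. The Python sketch below (illustrative only; $\sigma$ and the finite jump measure standing in for $\hat\nu_\mathtt{i}$ are hypothetical) simulates $\mathbf{Y}_3(f,\cdot)$ of Lemma~\ref{Convergence} for $f=\varphi_{-1}^0=\mathbf{1}_{[0,1)}$, for which the Gaussian part is a Brownian motion with covariance $2\sigma$ and the jump part is a compensated compound Poisson process with intensity $\hat\nu_\mathtt{i}(\mathbb{U})$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

sigma = np.array([[0.02, 0.0], [0.0, 0.3]])            # hypothetical
jump_rate = 2.0                                        # total mass of nu_hat_i
jump_marks = np.array([[1.0, 0.5], [-1.0, 0.5]])       # equally likely marks u
mean_mark = jump_marks.mean(axis=0)

def Y3_indicator(t, n_steps=10_000):
    """One sample of Y_3(f, t) for f = 1_[0,1) via Euler discretisation."""
    dt = t / n_steps
    root = np.linalg.cholesky(2.0 * sigma)
    Y = np.zeros(2)
    for _ in range(n_steps):
        Y += root @ rng.normal(size=2) * np.sqrt(dt)   # Gaussian part
        for _ in range(rng.poisson(jump_rate * dt)):   # jumps with x in [0, 1)
            Y += jump_marks[rng.integers(len(jump_marks))]
        Y -= jump_rate * mean_mark * dt                # compensator of the jump part
    return Y

samples = np.array([Y3_indicator(1.0) for _ in range(200)])
print(samples.mean(axis=0))   # approximately (0, 0): Y_3(f, .) is a martingale
print(np.cov(samples.T))      # approximately 2*sigma plus the jump covariance
\end{verbatim}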
\subsubsection{Uniform tightness and weak convergence of market models}
The next step towards the proof of the weak convergence of the market models is to show that $\{ \mathbf{Y}^{(n)}(t):t\geq 0 \}_{n\geq 1}$ is a sequence of \textit{uniformly tight} standard $\mathbb{H}^\#$-semimartingales. That is, the sequence $\{\mathcal{H}_t^{(n)}\}_{n\geq1}$ is uniformly stochastically bounded for any $t\geq 0$, where $\mathcal{H}_t^{(n)}$ is defined as in (\ref{eqn3.10}) with $\mathbf{Y}$ replaced by $\mathbf{Y}^{(n)}$.
\begin{lemma}\label{UniformlyTight}
Under Conditions~\ref{MainCondition01} and \ref{MainCondition02}, $\{\mathbf{Y}^{(n)}(t):t\geq 0\}_{n\geq 1}$ is a sequence of uniformly tight $\mathbb{H}^\#$-semimartingales.
\end{lemma}
\noindent{\it Proof.~~}
We just prove that $\{\mathcal{H}_1^{(n)}\}_{n\geq1}$ is uniformly stochastically bounded; the general case can be proved similarly.
For any simple $\mathbb{H}$-valued process $\mathbf{X}$,
\begin{eqnarray}\label{eqn3.12}
\int_0^1 \mathbf{X}(s-)\cdot\mathbf{Y}^{(n)}(ds)
\!\!\!&=\!\!\!& \sum_{k=1}^3 \int_0^1 \mathbf{X}_k(s-)\cdot\mathbf{Y}_k^{(n)}(ds),
\end{eqnarray}
where
\begin{eqnarray*}
\int_0^1\mathbf{X}_1(s-)\cdot \mathbf{Y}_1^{(n)}(ds)
\!\!\!&:=\!\!\!& \gamma_n\left( \int_{\mathbb{U}}u\nu^{(n)}_{\mathtt{i}}(du)-
\left( \begin{array}{c} 0\\ \beta^{(n)}
\end{array} \right)\right) \int_0^1 \sum \xi_{1,k}\mathbf{1}_{[t_k,t_{k+1})}(s)ds,\cr
\int_0^1\mathbf{X}_2(s-)\cdot \mathbf{Y}_2^{(n)}(ds)
\!\!\!&:=\!\!\!& \int_0^1\sum \xi_{2,k}\mathbf{1}_{[t_k,t_{k+1})}(s)\int_{\mathbb{U}} \frac{u}{n}N^{(n)}_{\mathtt{e}}(d\gamma_n s,du),\cr
\int_0^1\mathbf{X}_3(s-)\cdot \mathbf{Y}_3^{(n)}(ds)
\!\!\!&:=\!\!\!& \int_0^1\int_0^\infty \sum \xi_{3,k,i,j}\mathbf{1}_{[t_k,t_{k+1})}(s)\varphi_i^j(x)\int_{\mathbb{U}} \frac{u}{n}\tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx).
\end{eqnarray*}
Since $\sup_{s\in[0,1]}\|X(s)\|_{\mathbb{H}}\leq 1$ a.s.,
we have $|\xi_{1,k}|, |\xi_{2,k}|, |\xi_{3,k,i,j}|\leq 2$ a.s. for any $k\geq 1$, $i\geq -1$ and $j\geq 0$.
By Condition~\ref{MainCondition01} there exists a constant $C>0$ such that for any $n\geq 1$,
\begin{eqnarray*}
\gamma_n\left\| \int_{\mathbb{U}}u\nu^{(n)}_{\mathtt{i}}(du)-
\left( \begin{array}{c} 0\\ \beta^{(n)}
\end{array} \right)\right\|\leq C
\end{eqnarray*}
and from the Markov inequality,
\begin{eqnarray}
\mathbf{P}\Big\{\Big\|\int_0^1\mathbf{X}_1(s-)\cdot \mathbf{Y}_1^{(n)}(ds)\Big\|\geq K \Big\}
\leq \frac{C}{K}\int_0^1\sum \mathbf{E}[ |\xi_{1,k}|]\mathbf{1}_{[t_k,t_{k+1})}(s)ds\leq \frac{2C}{K}. \label{eqn3.14}
\end{eqnarray}
For the second term in (\ref{eqn3.12}), we have
\begin{eqnarray*}
\Big\|\int_0^1\mathbf{X}_2(s-)\cdot\mathbf{Y}_2^{(n)}(ds)\Big\|
\leq 2\int_0^1 \int_{\mathbb{U}} \frac{\|u\|}{n}N^{(n)}_{\mathtt{e}}(d\gamma_n s,du).
\end{eqnarray*}
For any $M>0$, we have
\begin{eqnarray}\label{eqn3.13}
\mathbf{P} \Big\{ \int_0^1 \int_{\mathbb{U}} \frac{\|u\|}{n}N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)\geq K \Big\}\!\!\!&\leq\!\!\!& \mathbf{P} \Big\{ \int_0^1 \int_{\mathbb{U}} \Big(\frac{\|u\|}{n}\wedge M\Big)N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)\geq K \Big\}\cr
\!\!\!&\!\!\!& + \mathbf{P} \Big\{\int_0^1\int_{\mathbb{U}}\mathbf{1}_{\{\|u\|\geq nM\}} N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)\geq 1\Big\},
\end{eqnarray}
since on the event that no order with $\|u\|\geq nM$ arrives on $[0,1]$ the two integrals coincide.
Applying the Markov inequality to the first term on the right side of this inequality, it can be bounded by
\begin{eqnarray*}
\frac{1}{K}\mathbf{E} \Big[\int_0^1\int_{\mathbb{U}}\Big(\frac{\|u\|}{n}\wedge M\Big) N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)\Big]
\!\!\!&\leq\!\!\!& \frac{1}{K}\cdot\gamma_n \int_{\mathbb{U}}\Big(\frac{\|u\|}{n}\wedge M\Big) \nu_{\mathtt{e}}^{(n)}(du).
\end{eqnarray*}
Moreover, the second term on the right side of (\ref{eqn3.13}) satisfies
\begin{eqnarray*}
\mathbf{P} \Big\{\int_0^1\int_{\mathbb{U}} \mathbf{1}_{\{\|u\|\geq nM\}} N^{(n)}_{\mathtt{e}}(d\gamma_n s,du)\geq 1\Big\}
\leq 1-\exp\{ -\gamma_n \nu^{(n)}_{\mathtt{e}}(\|u\|\geq nM)\}\leq C \cdot \gamma_n \nu^{(n)}_{\mathtt{e}}(\|u\|\geq nM).
\end{eqnarray*}
By Proposition~\ref{MainAssumptionNew}(b), there exist constants $C>0$ independent of $M$ and $C_M>0$ such that for any $n\geq 1$,
\begin{eqnarray*}
\gamma_n \int_{\mathbb{U}}\Big(\frac{\|u\|}{n}\wedge M\Big) \nu_{\mathtt{e}}^{(n)}(du) \leq C_M
\quad \mbox{and}\quad
\gamma_n \nu^{(n)}_{\mathtt{e}}(\|u\|\geq nM) \leq C \cdot\hat{\nu}_{\mathtt{e}}(\|u\|\geq M).
\end{eqnarray*}
Altogether, this yields
\begin{eqnarray}\label{eqn3.15}
\mathbf{P}\Big\{ \Big\|\int_0^1\mathbf{X}_2(s-)\cdot\mathbf{Y}_2^{(n)}(ds)\Big\|\geq K \Big\}\leq \frac{C_M}{K}+ C \hat{\nu}_{\mathtt{e}}(\|u\|\geq M),
\end{eqnarray}
which goes to $0$ when first sending $K$ and then $M$ to $\infty$.
We now consider the third term on the right side of (\ref{eqn3.12}).
Splitting the integral into small and large jumps and applying the Markov inequality, we have
\begin{eqnarray}\label{eqn3.29}
\mathbf{P}\Big\{\Big\| \int_0^1\mathbf{X}_3(s-)\cdot \mathbf{Y}_3^{(n)}(ds) \Big\|\geq K \Big\}
\!\!\!&\leq\!\!\!&
\mathbf{P}\big\{\big\|\mathbf{M}_{\leq 1}\big\|\geq K/2 \big\} + \mathbf{P}\big\{\big\|\mathbf{M}_{> 1}\big\|\geq K/2 \big\} \cr
\!\!\!&\leq\!\!\!& \frac{4}{K^2}\mathbf{E}\Big[\big\|\mathbf{M}_{\leq 1}\big\|^2\Big] +\frac{2}{K}\mathbf{E}\Big[\big\|\mathbf{M}_{> 1}\big\| \Big],
\end{eqnarray}
where
\begin{eqnarray*}
\mathbf{M}_{\leq 1}\!\!\!&:=\!\!\!& \int_0^1\int_0^\infty\int_{\|u\|\leq n} \sum \xi_{3,k,i,j}\mathbf{1}_{[t_k,t_{k+1})}(s)\varphi_i^j(x) \frac{u}{n} \tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx),\cr
\mathbf{M}_{>1}\!\!\!&:=\!\!\!& \int_0^1\int_0^\infty\int_{\|u\|> n} \sum \xi_{3,k,i,j}\mathbf{1}_{[t_k,t_{k+1})}(s)\varphi_i^j(x) \frac{u}{n} \tilde{N}^{(n)}_{\mathtt{i}}(d\gamma_n s,du,dnx).
\end{eqnarray*}
Applying the Burkholder-Davis-Gundy inequality to the first term on the right side of the last inequality in (\ref{eqn3.29}), we have
\mrm{b}}\def\p{\mrm{p}egin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbf{E}\mrm{b}}\def\p{\mrm{p}ig[ \mrm{b}}\def\p{\mrm{p}ig\|\mathbf{M}_{\leq 1}\mrm{b}}\def\p{\mrm{p}ig\|^2 \mrm{b}}\def\p{\mrm{p}ig]
\!\!\!&\leq\!\!\!& \int_0^1\sum \mathbf{E}[|\xi_{3,k,i,j}|^2]\mathbf{1}_{[t_k,t_{k+1})}(s)ds\int_0^\infty|\varphi_i^j(x)|^2dx \cdot n\gamma_n\int_{\frac{\|u\|}{n}\leq 1 } \frac{\|u\|^2}{n^2} \hat{\nu}^{(n)}_{\mathtt{i}}(du)\cr
\!\!\!&=\!\!\!& \|\mathbf{X}_3\|_{L^2}^2\cdot n\gamma_n\int_{\frac{\|u\|}{n}\leq 1 } \frac{\|u\|^2}{n^2} \nu^{(n)}_{\mathtt{i}}(du) \leq C n\gamma_n\int_{\frac{\|u\|}{n}\leq 1 } \frac{\|u\|^2}{n^2} \nu^{(n)}_{\mathtt{i}}(du).
\end{eqnarray*}
Moreover, we also have
\begin{eqnarray*}
\mathbf{E}\big[ \big\|\mathbf{M}_{> 1}\big\| \big]
\!\!\!&\leq\!\!\!& 2 \int_0^1\sum |\xi_{3,k,i,j}|\mathbf{1}_{[t_k,t_{k+1})}(s)ds\int_0^\infty|\varphi_i^j(x)|dx\cdot n\gamma_n \int_{\frac{\|u\|}{n}> 1 } \frac{\|u\|}{n} {\nu}^{(n)}_{\mathtt{i}}(du)\cr
\!\!\!&=\!\!\!& 2\|\mathbf{X}_3\|_{L^1}\cdot n\gamma_n \int_{\frac{\|u\|}{n}> 1 } \frac{\|u\|}{n} {\nu}^{(n)}_{\mathtt{i}}(du)
\leq C\cdot n\gamma_n \int_{\frac{\|u\|}{n}> 1 } \frac{\|u\|}{n} {\nu}^{(n)}_{\mathtt{i}}(du).
\end{eqnarray*}
Along with the assumption that $ \sup_{n\geq 1}n\gamma_n \int_{\mathbb{U}_+} \|u\|^2 \wedge \|u\| \nu_\mathtt{i}^{(n)}(dnu)<\infty$, we have
\begin{eqnarray}\label{eqn3.16}
\mathbf{P} \Big\{\Big\| \int_0^1\mathbf{X}_3(s-)\cdot \mathbf{Y}_3^{(n)}(ds) \Big\|\geq K \Big\}\leq \frac{C}{K}+\frac{C}{K^2}.
\end{eqnarray}
$\Box$
We are now ready to prove the weak convergence of the rescaled Hawkes market models.
\begin{theorem}\label{Thm305}
Under Conditions~\ref{MainCondition01} and \ref{MainCondition02},
if $(P^{(n)}(0),V^{(n)}(0))\to (P(0),V(0))$ in distribution,
the rescaled processes $\{(P^{(n)}(t),V^{(n)}(t)):t\geq 0\}$ converge weakly to $\{(P(t),V(t)):t\geq 0\}$ in $\mathbf{D}([0,\infty);\mathbb{U})$ as $n\to\infty$, where $\{(P(t),V(t)):t\geq 0\}$ is the unique strong solution to the following stochastic system:
\begin{eqnarray}
\Big( \begin{array}{c}P(t)\\ V(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) + \int_0^t (a-bV(s))ds + \int_0^t \int_0^{V(s)}\sqrt{2\sigma}\Big( \begin{array}{c}W_1(ds,dx)\\ W_2(ds,dx) \end{array} \Big) \cr
\!\!\!&\!\!\!&\cr
\!\!\!&\!\!\!& + \int_0^t\int_{\mathbb{U}} u N_1(ds,du) +\int_0^t\int_{\mathbb{U}}\int_0^{V(s-)}u\tilde{N}_0(ds,du,dx).\label{eqn3.21}
\end{eqnarray}
\end{theorem}
\noindent{\it Proof.~~}
By \cite[Theorem 2.5]{DawsonLi2012} the stochastic system (\ref{eqn3.21}) admits a unique strong solution $(P,V)$. In order to establish the convergence we recall that
\begin{equation*}
\Big( \begin{array}{c}
P^{(n)}(t)\\ V^{(n)}(t)
\end{array} \Big)
= \Big( \begin{array}{c}
P^{(n)}(0)\\ V^{(n)}(0)
\end{array} \Big)+ \int_0^t \mathbf{F}(V^{(n)}(s-)) \cdot \mathbf{Y}^{(n)}(ds).
\end{equation*}
The function $\mathbf{F}$ is unbounded and does not satisfy the requirements of \cite[Theorem 7.5]{KurtzProtter1996}. To overcome this problem, we use a standard localization argument; see \cite{KurtzProtter1991} for details. Let
\[
\eta^c(\omega) = \inf\{t \geq 0: \omega(t) \vee \omega(t-) \geq c\} \quad \mbox{and} \quad \mathbf{F}^c(\omega,s-) := {\bf 1}_{[0,\eta^c)}(s) \mathbf{F}(\omega(s-))
\]
for $\omega \in \mathbf{D}([0,\infty), \mathbb{R})$ and $c>0$, and consider the equation
\begin{equation*}
\Big( \begin{array}{c}
P^{(n),c}(t)\\ V^{(n),c}(t)
\end{array} \Big)
= \Big( \begin{array}{c}
P^{(n)}(0)\\ V^{(n)}(0)
\end{array} \Big)+ \int_0^t \mathbf{F}^c(V^{(n),c},s-) \cdot \mathbf{Y}^{(n)}(ds).
\end{equation*}
This system satisfies the assumptions of \cite[Theorem 7.5]{KurtzProtter1996}. Thus, as $n \to \infty$,
\begin{eqnarray*}
\big( P^{(n),c}, V^{(n),c} \big) \to \big( P^{c}, V^{c} \big)
\end{eqnarray*}
weakly in $\mathbf{D}([0,\infty),\mathbb{U})$ and hence $\eta^c( V^{(n),c})\to \eta^c(V^{c})$ weakly in $\mathbf{D}([0,\infty),\mathbb{R})$, where $\left( P^{c}, V^{c} \right)$ is the unique strong solution to (\ref{eqn3.21}), restricted to $[0,\eta^c(V^{c}))$. Uniqueness of strong solutions to (\ref{eqn3.21}) also yields $\left( P^{c}, V^{c} \right) \to \left( P, V \right)$ a.s.~as $c \to \infty$.
This shows that $(P^{(n)},V^{(n)}) \to (P,V)$ weakly in $\mathbf{D}([0,\infty),\mathbb{U})$.
$\Box$
The process $\{ (P(t),V(t)):t\geq 0\}$ is a strong Markov process with infinitesimal generator $\mathcal{A}$ defined by (\ref{eqn3.31}).
Standard arguments (see \cite[Theorem 7.1' and 7.4]{IkedaWatanabe1989}) show that the limit process $\{ (P(t),V(t)):t\geq 0\}$ also can be represented as a stochastic integral driven by two Brownian motions.
\begin{corollary}
Let $\{B_1(t):t\geq 0 \}$ and $\{B_2(t):t\geq 0 \}$ be two Brownian motions with correlation coefficient $\rho=\frac{\sigma_{12}+\sigma_{21}}{\sqrt{\sigma_{11}\sigma_{22}}}$. The unique strong solution to the stochastic system\footnote{The existence and uniqueness of the strong solution follows from, e.g.~\cite[Theorem 6.2]{DawsonLi2006}.}
\begin{eqnarray}
\Big( \begin{array}{c}P(t)\\ V(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) + \int_0^t (a-bV(s))ds +\int_0^t \sqrt{V(s)}d\Big( \begin{array}{c}\sqrt{2\sigma_{11}}B_1(s)\\ \sqrt{2\sigma_{22}}B_2(s) \end{array} \Big) \cr
\!\!\!&\!\!\!&\cr
\!\!\!&\!\!\!& + \int_0^t\int_{\mathbb{U}} u N_1(ds,du) +\int_0^t\int_{\mathbb{U}}\int_0^{V(s-)}u\tilde{N}_0(ds,du,dx)\label{eqn3.30}
\end{eqnarray}
is a realization for the limiting market system.
The solution is an affine process with characteristic function
\begin{eqnarray}\label{eqn3.23}
\mathbf{E}\big[\exp\{z_1P(t)+ z_2V(t) \}\big]=\exp\Big\{ z_1P(0)+ \psi^G_t(z)V(0)+\int_0^t H(z_1,\psi^G_s(z)) ds \Big\},\quad z\in\mathbb{U}_*,
\end{eqnarray}
where
$\{\psi^G_t(z):t\geq 0\}$ is the unique solution to the following Riccati equation
\begin{eqnarray}\label{eqn3.24}
\psi^G_t(z)\!\!\!&=\!\!\!& z_2+ \int_0^t G(z_1,\psi^G_s(z))ds.
\end{eqnarray}
\end{corollary}
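For illustration only, the affine transform formula (\ref{eqn3.23})--(\ref{eqn3.24}) can be evaluated numerically by integrating the Riccati equation. The following minimal sketch (in Python) treats the jump-free case $\hat{\nu}_\mathtt{e}=\hat{\nu}_\mathtt{i}=0$ with $z_1=0$, so that, consistently with (\ref{eqn4.54}) and (\ref{eqn4.01}) below, $G(0,w)=-b_2 w+\sigma_{22}w^2$ and $H(0,w)=a_2 w$; all numerical parameter values are assumptions chosen only for this illustration.
\begin{verbatim}
import numpy as np

# Sketch: evaluate E[exp(z2*V(T))] via (3.23)-(3.24) in the jump-free case
# with z1 = 0, where G(0,w) = -b2*w + s22*w**2 and H(0,w) = a2*w.
# All parameter values below are illustrative assumptions.
b2, s22, a2 = 0.8, 0.3, 0.5
V0, z2, T, dt = 1.0, -1.0, 2.0, 1e-4

psi, H_int = z2, 0.0
for _ in range(int(T / dt)):
    H_int += a2 * psi * dt                  # accumulates int_0^t H(0, psi_s) ds
    psi += (-b2 * psi + s22 * psi**2) * dt  # forward Euler step for (3.24)

print("E[exp(z2*V(T))] approx", np.exp(psi * V0 + H_int))
\end{verbatim}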
\subsection{Examples}
We now provide scaling limits for each of the examples considered in Section 2.2. We show that the rescaled market models converge to Heston-type stochastic volatility models with or without jumps, depending on the arrival frequency of large orders.
\begin{example}[\rm Heston volatility model]\label{Ecample-Exp02}
Let $\{(P^{(n)}_t,V^{(n)}_t):t\geq 0\}$ be an exponential market as defined in Example~\ref{Ecample-Exp} with parameters $(1/n, \beta^{(n)}, p_{\mathtt{e}/\mathtt{i}}^{\mathtt{b}/\mathtt{s}},
\boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}},
\boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s} (n)}_{\mathtt{i}})$ satisfying
\begin{eqnarray}\label{eqn3.33}
\boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s} (n)}_{\mathtt{i}}\to \boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s} }_{\mathtt{i}} ,
\quad n\big[p_\mathtt{i}^\mathtt{s}\mathrm{M}^\mathbf{e}_{1,1}(\boldsymbol{\lambda}^{\mathtt{s}(n)}_{\mathtt{i}})- p_\mathtt{i}^\mathtt{b}\mathrm{M}^\mathbf{e}_{1,1}(\boldsymbol{\lambda}^{\mathtt{b}(n)}_{\mathtt{i}})\big]\to 0,\quad n\big[\beta^{(n)}-p_\mathtt{i}^\mathtt{s}\mathrm{M}^\mathbf{e}_{1,2}(\boldsymbol{\lambda}^{\mathtt{s}}_{\mathtt{i}})- p_\mathtt{i}^\mathtt{b}\mathrm{M}^\mathbf{e}_{1,2}(\boldsymbol{\lambda}^{\mathtt{b}}_{\mathtt{i}})\big]\to b_2>0
\end{eqnarray}
as $n\to\infty$. Let $\gamma_n=n$.
Then Condition~\ref{MainCondition01} holds. The conditions in Proposition~\ref{MainAssumptionNew} hold as well with limiting parameters
$$
\hat\nu_\mathtt{e}(\mathbb{U})=\hat\nu_\mathtt{i}(\mathbb{U})=0,
\quad
a^+ =p_{\mathtt{e}}^\mathtt{b}\cdot\mathrm{M}^\mathbf{e}_{1}(\boldsymbol{\lambda}^\mathtt{b}_{\mathtt{e}}),
\quad
a^- =-p_{\mathtt{e}}^\mathtt{s}\cdot\mathrm{M}^\mathbf{e}_{1}(\boldsymbol{\lambda}^\mathtt{s}_{\mathtt{e}}),
\quad
\sigma^+= \frac{p_{\mathtt{i}}^\mathtt{b}}{2} \cdot \mathrm{M}^\mathbf{e}_{2}(\boldsymbol{\lambda}^\mathtt{b}_{\mathtt{i}}),
$$
$$
\sigma^-_{11}= \frac{p_{\mathtt{i}}^\mathtt{s}}{2} \cdot\mathrm{M}^\mathbf{e}_{2,11}(\boldsymbol{\lambda}^\mathtt{s}_{\mathtt{i}}),
\quad
\sigma^-_{22}= \frac{p_{\mathtt{i}}^\mathtt{s}}{2}\cdot \mathrm{M}^\mathbf{e}_{2,22}(\boldsymbol{\lambda}^\mathtt{s}_{\mathtt{i}}),
\quad
\sigma^-_{12}= \sigma^-_{21}= -\frac{p_{\mathtt{i}}^\mathtt{s}}{2}\cdot \mathrm{M}^\mathbf{e}_{2,12}(\boldsymbol{\lambda}^\mathtt{s}_{\mathtt{i}}).
$$
Since all order size distributions are light-tailed, there are no jumps in the scaling limit, and the limit model (\ref{eqn3.30}) reduces to the standard Heston volatility model
\begin{eqnarray*}
P(t)\!\!\!&=\!\!\!& P(0)+ a_1 t +\int_0^t \sqrt{2\sigma_{11}V(s)}d B_1(s) , \cr
V(t)\!\!\!&=\!\!\!& V(0)+ \int_0^t b_2\Big[\frac{a_2}{b_2}-V(s)\Big]ds +\int_0^t \sqrt{2\sigma_{22}V(s)}d B_2(s).
\end{eqnarray*}
\end{example}
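A minimal Euler--Maruyama sketch (in Python) of this limiting Heston system reads as follows; the correlated Brownian motions are built from the correlation coefficient $\rho$ of the corollary above, the variance is kept nonnegative by full truncation, and all numerical values are illustrative assumptions rather than calibrated parameters.
\begin{verbatim}
import numpy as np

# Euler-Maruyama sketch of the limiting Heston model:
#   dP = a1 dt + sqrt(2*s11*V) dB1,  dV = (a2 - b2*V) dt + sqrt(2*s22*V) dB2,
# with corr(B1, B2) = rho.  All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
a1, a2, b2, s11, s22, rho = 0.0, 0.4, 1.5, 0.2, 0.1, -0.5
P, V, T, n = 0.0, 0.3, 1.0, 10_000
dt = T / n
for _ in range(n):
    dB1 = rng.normal(0.0, np.sqrt(dt))
    dB2 = rho * dB1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt))
    Vp = max(V, 0.0)                       # full truncation keeps V nonnegative
    P += a1 * dt + np.sqrt(2.0 * s11 * Vp) * dB1
    V += (a2 - b2 * Vp) * dt + np.sqrt(2.0 * s22 * Vp) * dB2
print("P(T), V(T) approx", P, V)
\end{verbatim}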
Next, we consider an exponential-Pareto mixing market model as defined in Example~\ref{Ecample-Exp-Pareto}. In this case, we obtain the jump-diffusion Heston volatility model as analyzed in \cite{DuffiePanSingleton2000,Pan2002} among many others in the scaling limit.
\begin{example}[\rm Pareto-jump-diffusion volatility model] \label{Ecample-Pareto-Diffusion}
Let $\{(P^{(n)}_t,V^{(n)}_t):t\geq 0\}$ be an exponential-Pareto mixing market model as defined in Example~\ref{Ecample-Exp-Pareto} with parameters
$(1/n,\beta^{(n)}, p^{\mathtt{b/s}}_{\mathtt{e}/\mathtt{i}}; \boldsymbol{\lambda}^{\mathtt{b/s}}_{\mathtt{e}},\boldsymbol{\lambda}^{\mathtt{b/s}(n)}_{\mathtt{i}};
\alpha_{\mathtt{e}/\mathtt{i}},\boldsymbol{\theta}^{\mathtt{b/s}(n)}_{\mathtt{e}/\mathtt{i}})$
and selecting mechanism $q^{(n)}_{\mathtt{e}/\mathtt{i}}$ satisfying
\begin{eqnarray*}
\alpha_{\mathtt{e}}>0,\quad \alpha_{\mathtt{i}}=0, \quad q^{(n)}_{\mathtt{i}}=\frac{1}{n^2},\quad q^{(n)}_{\mathtt{e}}=\frac{1}{n},
\quad
\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}(n)}_{\mathtt{e}/\mathtt{i}}
=n\cdot\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}}
\end{eqnarray*}
and (\ref{eqn3.33}) as $n\to\infty$.
Let $\gamma_n=n$. Then, Condition~\ref{MainCondition01} as well as the conditions in Proposition~\ref{MainAssumptionNew} hold
with limit parameters $a^\pm,\sigma^\pm$ as defined in Example~\ref{Ecample-Exp02}, with $\hat{\nu}_\mathtt{i}(\mathbb{U})=0$, and
\begin{eqnarray*}
\hat{\nu}_\mathtt{e}(du)= \sum_{j\in\{\mathtt{b},\mathtt{s}\}} \mathbf{1}_{\mathbb{U}_j}(u) \cdot p^{j}_\mathtt{e} \cdot \frac{\alpha_\mathtt{e}(\alpha_\mathtt{e}+1)}{\theta^j_{\mathtt{e},1}\theta^j_{\mathtt{e},2}} \Big(1+\frac{|u_1|}{\theta^j_{\mathtt{e},1}} +\frac{u_2}{\theta^j_{\mathtt{e},2}} \Big)^{-\alpha_\mathtt{e}-2}du_1du_2,
\end{eqnarray*}
where $\mathbb{U}_b=\mathbb{U}_+$ and $\mathbb{U}_s=\mathbb{U}_-$.
In this case, the limit stochastic system (\ref{eqn3.30}) reduces to the following jump-diffusion Heston volatility model:
\begin{eqnarray*}
P(t)\!\!\!&=\!\!\!& P(0)+ a_1 t +\int_0^t \sqrt{2\sigma_{11}V(s)}d B_1(s)+ \sum_{k=1}^{N_t} \xi_{k,P}^\mathtt{e} , \cr
V(t)\!\!\!&=\!\!\!& V(0)+ \int_0^t b_2\Big[\frac{a_2}{b_2}-V(s)\Big]ds +\int_0^t \sqrt{2\sigma_{22}V(s)}d B_2(s)+\sum_{k=1}^{N_t} \xi_{k,V}^\mathtt{e},
\end{eqnarray*}
where $\{N_t:t\geq 0\}$ is a Poisson process with rate $1$ and $\{(\xi_{k,P}^\mathtt{e},\xi_{k,V}^\mathtt{e} ):k=1,2,\cdots\}$ is a sequence of i.i.d. $\mathbb{U}$-valued random variables with probability law $ \hat{\nu}_\mathtt{e}(du)$.
\end{example}
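For this example, the co-jumps enter a simulation in the obvious way: on each Euler step an exogenous jump of the rate-one Poisson process arrives with probability $dt$ and is added to both components. The self-contained Python sketch below differs from the previous one essentially only in the jump lines inside the loop; the jump sizes used here are a simple exponential stand-in for the bivariate Pareto law $\hat{\nu}_\mathtt{e}$ above, and all numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Euler-Maruyama sketch of the jump-diffusion Heston limit: the diffusion part
# is as in the previous sketch, plus a rate-one stream of exogenous co-jumps.
# The jump sizes (signed exponential for the price, exponential for the
# volatility) are only an illustrative stand-in for the bivariate Pareto law
# nu_e_hat of this example; all numerical values are assumptions.
rng = np.random.default_rng(1)
a1, a2, b2, s11, s22, rho = 0.0, 0.4, 1.5, 0.2, 0.1, -0.5
P, V, T, n = 0.0, 0.3, 1.0, 10_000
dt = T / n
for _ in range(n):
    dB1 = rng.normal(0.0, np.sqrt(dt))
    dB2 = rho * dB1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt))
    Vp = max(V, 0.0)
    P += a1 * dt + np.sqrt(2.0 * s11 * Vp) * dB1
    V += (a2 - b2 * Vp) * dt + np.sqrt(2.0 * s22 * Vp) * dB2
    if rng.random() < dt:                  # arrival of the rate-one process N
        P += rng.choice([-1.0, 1.0]) * rng.exponential(0.1)
        V += rng.exponential(0.05)         # volatility co-jump is nonnegative
print("P(T), V(T) approx", P, V)
\end{verbatim}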
In the previous example co-jumps in prices and volatility emerged in the scaling limit as a result of occasional large exogenous shocks. The key assumption was that $\gamma_n=n$. The following example considers the scaling limit for $\gamma_n=n^\alpha$ for some $\alpha \in (0,1)$.
Two types of jumps emerge in our limit: jumps originating from large exogenous orders as well as self-exciting child-jumps. For the special case where the induced orders do not contribute to the intensity of order arrivals, the child-jumps drop out of the model. In this case, the limiting volatility process reduces to a non-Gaussian process of Ornstein-Uhlenbeck type as analyzed in Barndorff-Nielsen and Shephard \cite{Barndorff-NielsenShephard2001}\footnote{Constant time-changes as allowed in Barndorff-Nielsen and Shephard \cite{Barndorff-NielsenShephard2001} can easily be incorporated into our model.}.
\begin{example}[\rm Stable-volatility model without diffusion]\label{Ecample-Pareto02}
Let $\{(P^{(n)}_t,V^{(n)}_t):t\geq 0\}$ be a Pareto market model as defined in Example~\ref{Ecample-Pareto} with parameters
$(1/n,\beta^{(n)}, p^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}},p^{{\mathtt{b}/\mathtt{s}}(n)}_{\mathtt{i}}; \alpha_{\mathtt{e}/\mathtt{i}},\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}})$ satisfying
$\alpha_{\mathtt{e}}=\alpha_{\mathtt{i}}-1\in(0,1)$.
Moreover, as $n\to\infty$ we have $p^{{\mathtt{b}/\mathtt{s}}(n)}_{\mathtt{i}}\to p^{\mathtt{b}/\mathtt{s}}_{\mathtt{i}}$ with $p^{\mathtt{b}}_{\mathtt{i}}+p^{\mathtt{s}}_{\mathtt{i}}=1$ and
\begin{eqnarray}\label{eqn3.34}
n^{\alpha_{\mathtt{e}}}\big[ p^{\mathtt{s}(n)}_{\mathtt{i}}\mathrm{M}^\mathbf{p}_{1,1}(\alpha_{\mathtt{i}},\boldsymbol{\theta}^\mathtt{s}_{\mathtt{i}})- p^{\mathtt{b}(n)}_{\mathtt{i}}\mathrm{M}^\mathbf{p}_{1,1}(\alpha_{\mathtt{i}},\boldsymbol{\theta}^\mathtt{b}_{\mathtt{i}})\big]\to b_1,\quad
n^{\alpha_{\mathtt{e}}}\big[\beta^{(n)}- p^{\mathtt{s}(n)}_{\mathtt{i}}\mathrm{M}^\mathbf{p}_{1,2}(\alpha_{\mathtt{i}},\boldsymbol{\theta}^\mathtt{s}_{\mathtt{i}})- p^{\mathtt{b}(n)}_{\mathtt{i}}\mathrm{M}^\mathbf{p}_{1,2}(\alpha_{\mathtt{i}},\boldsymbol{\theta}^\mathtt{b}_{\mathtt{i}})\big]\to b_2.
\end{eqnarray}
Let $\gamma_n=n^{\alpha_{\mathtt{e}}}$. Then Condition~\ref{MainCondition01} as well as the conditions in Proposition~\ref{MainAssumptionNew} hold with the following limit parameters: $a^+ =a^- = 0$, $\sigma^+= \sigma^- = 0$, and
\begin{eqnarray*}
\hat{\nu}_i(du)= \sum_{j\in\{\mathtt{b},\mathtt{s}\}} \mathbf{1}_{\mathbb{U}_j}(u) \cdot p^{j}_i \cdot \frac{\alpha_i(\alpha_i+1)}{\theta^j_{i,1}\theta^j_{i,2}} \Big(\frac{|u_1|}{\theta^j_{i,1}} +\frac{u_2}{\theta^j_{i,2}} \Big)^{-\alpha_i-2}du_1du_2,\quad i\in\{\mathtt{e},\mathtt{i}\}.
\end{eqnarray*}
In this case, the limit stochastic system (\ref{eqn3.30}) reduces to the following pure-jump model:
\begin{eqnarray}\label{eqn3.32}
\Big( \begin{array}{c}P(t)\\ V(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) - \int_0^t bV(s)ds + \int_0^t\int_{\mathbb{U}} u N_1(ds,du) +\int_0^t\int_{\mathbb{U}}\int_0^{V(s-)}u\tilde{N}_0(ds,du,dx).
\end{eqnarray}
The dynamics (\ref{eqn3.32}) can be represented in a more convenient way. Let $Z_{\alpha_{\mathtt{e}}}(t)$ denote the third term on the right side of (\ref{eqn3.32}). By Theorem~14.3(ii) in \cite{Sato1999}, $\{Z_{\alpha_{\mathtt{e}}}(t):t\geq 0 \}$ is a $\mathbb{U}$-valued $\alpha_{\mathtt{e}}$-stable process.
As for the last term, let us introduce a random measure $N_{\alpha_{\mathtt{i}}}(ds,du)$ on $[0,\infty)\times\mathbb{U}$ as follows: for any $t\geq 0$ and $U\subset\mathbb{U}$,
\begin{eqnarray*}
N_{\alpha_{\mathtt{i}}}((0,t],U)= \int_0^t\int_{\mathbb{U}}\int_0^{V(s-)} \mathbf{1}_{\{ u \in \sqrt[\alpha_{\mathtt{i}}]{V(s-)}\cdot U \}}N_0(ds,du,dx).
\end{eqnarray*}
Standard computations show that its predictable compensator has the following representation:
\begin{eqnarray*}
\hat{N}_{\alpha_{\mathtt{i}}}((0,t],U)
\!\!\!&=\!\!\!&\sum_{j\in\{\mathtt{b},\mathtt{s}\}}p^{j}_\mathtt{i} \cdot \int_0^tds\int_{\mathbb{U}_j}V(s-) \mathbf{1}_{\{ u \in \sqrt[\alpha_{\mathtt{i}}]{V(s-)}\cdot U \}} \frac{\alpha_\mathtt{i}(\alpha_\mathtt{i}+1)}{\theta^j_{\mathtt{i},1}\theta^j_{\mathtt{i},2}} \Big( \frac{|u_1|}{\theta^j_{\mathtt{i},1}} +\frac{u_2}{\theta^j_{\mathtt{i},2}}\Big)^{-\alpha_\mathtt{i}-2}du_1du_2\cr
\!\!\!&=\!\!\!& \sum_{j\in\{\mathtt{b},\mathtt{s}\}}p^{j}_\mathtt{i} \cdot \int_0^tds\int_{\mathbb{U}_j} \mathbf{1}_{\{ u \in U \}} \frac{\alpha_\mathtt{i}(\alpha_\mathtt{i}+1)}{\theta^j_{\mathtt{i},1}\theta^j_{\mathtt{i},2}} \Big( \frac{|u_1|}{\theta^j_{\mathtt{i},1}} +\frac{u_2}{\theta^j_{\mathtt{i},2}}\Big)^{-\alpha_\mathtt{i}-2}du_1du_2.
\end{eqnarray*}
Thus, $N_{\alpha_{\mathtt{i}}}(ds, du)$ is a Poisson random measure on $[0,\infty)\times\mathbb{U}$ with intensity $ds\hat\nu_{\mathtt{i}}(du)$ and the L\'evy process $\{Z_{\alpha_{\mathtt{i}}}(t):t\geq 0\}$ defined by
\begin{eqnarray*}
Z_{\alpha_{\mathtt{i}}}(t):= \int_0^t\int_{\mathbb{U}} u\tilde{N}_{\alpha_{\mathtt{i}}}(ds,du)
\end{eqnarray*}
is a compensated, $\mathbb{U}$-valued $\alpha_{\mathtt{i}}$-stable process.
We can now rewrite (\ref{eqn3.32}) as
\begin{eqnarray*}
\Big( \begin{array}{c}P(t)\\ V(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) + Z_{\alpha_{\mathtt{e}}}(t) - \int_0^t bV(s)ds +\int_0^t \sqrt[\alpha_{\mathtt{i}}]{V(s-)} dZ_{\alpha_{\mathtt{i}}}(s).
\end{eqnarray*}
\end{example}
So far, we obtained jump-diffusion models with {\sl exogenous} jump dynamics as well as pure jump models with {\sl endogenous} jump dynamics as scaling limits.
The next example combines both dynamics into a single model.
We call this model {\sl generalized $\alpha$-stable Heston volatility model}.
We assume that induced orders arrive at a rate $n^2$ as in Example \ref{Ecample-Pareto-Diffusion}.
However, we now assume that large orders arrive with much higher probabilities.
In Example \ref{Ecample-Pareto-Diffusion} large exogenous and induced orders arrive at rate one per unit time.
Now we assume they arrive at rate $n^{\alpha}$ per unit time; despite the increased rate, their proportion among all orders will still be negligible in the limit.
Several existing models are obtained as special cases.
If the arrival intensity of induced orders depends on exogenous orders only, the model reduces to the stochastic volatility model studied in Barndorff-Nielsen and Shephard \cite{Barndorff-NielsenShephard2001}.
If, in addition, there are no jumps in the volatility, the model reduces to that analyzed in Bates \cite{Bates1996}.
The special case with no child-jumps and no exogenous jumps in prices corresponds to the model studied in Nicolato et al. \cite{NicolatoPisaniSloth2017}.
The special case without exogenous jumps corresponds to the alpha Heston model that has recently been studied in Jiao et al. \cite{JiaoMaScottiZhou2018}.
The multi-factor model of Bates \cite{Bates2019} with both exogenous and self-excited shocks is also contained as a special case.
\begin{example}[\rm Generalized $\alpha$ Heston volatility model] \label{Ecample-Stable-Diffusion}
Let $\{(P^{(n)}_t,V^{(n)}_t):t\geq 0\}$ be an exponential-Pareto mixing market model as defined in Example~\ref{Ecample-Exp-Pareto} with parameters
$(1/n,\beta^{(n)}, p^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}}; \boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}}, \boldsymbol{\lambda}^{\mathtt{b}/\mathtt{s}(n)}_{\mathtt{i}};\alpha_{\mathtt{e}/\mathtt{i}},\boldsymbol{\theta}^{\mathtt{b}/\mathtt{s}}_{\mathtt{e}/\mathtt{i}})$
and selecting mechanism $q^{(n)}_{\mathtt{e}/\mathtt{i}}$ satisfying $\alpha_{\mathtt{e}}\in(0,1)$, $\alpha_{\mathtt{i}}\in(1,2)$, $q^{(n)}_{\mathtt{i}}=n^{\alpha_{\mathtt{i}}-2}$, $q^{(n)}_{\mathtt{e}}=n^{\alpha_{\mathtt{e}}-1}$, and such that (\ref{eqn3.33}) holds as $n\to\infty$.
Let $\gamma_n=n$. Then Condition~\ref{MainCondition01} and the conditions in Proposition~\ref{MainAssumptionNew} hold with limit parameters $a^\pm,\sigma^\pm$ defined in Example~\ref{Ecample-Exp02} and $b,\hat{\nu}_{\mathtt{e}/\mathtt{i}}(du)$ defined in Example~\ref{Ecample-Pareto02}.
In this case, the limit stochastic system (\ref{eqn3.30}) reduces to
\begin{eqnarray*}
\Big( \begin{array}{c}P(t)\\ V(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) +\int_0^t (a- bV(s))ds +\int_0^t \sqrt{V(s)}d\Big( \begin{array}{c}\sqrt{2\sigma_{11}}B_1(s)\\ \sqrt{2\sigma_{22}}B_2(s) \end{array} \Big)
+Z_{\alpha_{\mathtt{e}}}(t) +\int_0^t \sqrt[\alpha_{\mathtt{i}}]{V(s-)} dZ_{\alpha_{\mathtt{i}}}(s). \qquad
\end{eqnarray*}
\end{example}
\section{The genealogy of the limiting market dynamics}\label{gen-limit}
\setcounter{equation}{0}
In this section, we analyze the impact of exogenous shocks on the jump dynamics of the limiting model. Exogenous shocks can be split into two groups: shocks of positive magnitude and shocks of ``insignificant magnitude''. Shocks of positive magnitude are captured by the Poisson random measure $N_1$. As argued in Section \ref{HFHM} (see Remark \ref{remark01}), shocks of ``insignificant magnitude'' are captured by the drift vector $a \in \mathbb{R}\times \mathbb{R}_+$. Both types of shocks can trigger jump cascades. Cascades triggered by shocks of positive magnitude are exogenous, while cascades triggered by shocks of insignificant magnitude are endogenous in nature.
Rewriting the characteristic function (\ref{eqn3.23}) as
\begin{eqnarray}\label{eqn4.01}
\mathbf{E}\left[e^{z_1P(t)+ z_2V(t) }\right]\!\!\!&=\!\!\!&\exp\big\{ z_1P(0)+ \psi^G_t(z)V(0)\big\}\cdot\exp\Big\{\int_0^tds \int_{\mathbb{U}} [e^{z_1u_1+ \psi^G_s(z)u_2}-1]\hat{\nu}_\mathtt{e}(du) \Big\} \cr
\!\!\!&\!\!\!&
\times
\exp\Big\{z_1\cdot a_1t + a_2\int_0^t\psi^G_s(z)ds\Big\}
\end{eqnarray}
the limit model can be decomposed into independent, self-enclosed sub-models in terms of the strong Markov processes induced by each of the exponential functions on the right side of the above equation.
\begin{itemize}
\item[(i)]
From the semigroup property of $\{\psi^G_t(z):t\geq 0\}$ we conclude that the first term on the right side of (\ref{eqn4.01}) induces a Markov semigroup $(Q_{0,t})_{t\geq0}$ on $\mathbb{U}$ via
\begin{eqnarray}\label{eqn4.02}
\int_{\mathbb{U}} e^{\langle z,u\rangle}Q_{0,t}(u',du)= \exp\big\{z_1u'_1+\psi^G_t(z)u'_2\big\}.
\end{eqnarray}
Applying the Kolmogorov consistency theorem and the relationship between the solution to (\ref{eqn3.21}) and its characteristic function (\ref{eqn3.23}), we see that the Markov process $\{(P_0(t),V_0(t)):t\geq 0 \}$ that solves
\begin{eqnarray}\label{eqn4.19}
\Big( \begin{array}{c}P_0(t)\\ V_0(t) \end{array} \Big)\!\!\!&=\!\!\!& \Big( \begin{array}{c}P(0)\\ V(0) \end{array} \Big) - \int_0^tbV_0(s)ds + \int_0^t \int_0^{V_0(s)}\sqrt{2\sigma}\Big( \begin{array}{c}W_1(ds,dx)\\ W_2(ds,dx) \end{array} \Big) \cr
\!\!\!& \!\!\!& +\int_0^t\int_{\mathbb{U}}\int_0^{V_0(s-)}u\tilde{N}_0(ds,du,dx)
\end{eqnarray}
is a realization of the transition semigroup $(Q_{0,t})_{t\geq0}$. This self-enclosed market model captures the impact of all events prior to time $0$ on future order flow. We emphasize that the model is independent of the drift and that the volatility mean-reverts to the level zero if $b_2>0$.
\item[(ii)] Let $\{\mathbf{P}_{e,t}(\cdot):t\geq 0\}$ be the family of probability laws induced by the second term on the right side of (\ref{eqn4.01}).
The family of kernels $(Q_{e,t})_{t\geq 0}$ on $\mathbb{U}$ defined by $Q_{e,t}(u',du) := \mathbf{P}_{e,t}* Q_{0,t}(u',du)$ is a Markov semigroup. The Markov process $\{(P_{e}(t),V_{e}(t)):t\geq 0 \}$ that solves
\begin{eqnarray}\label{eqn4.22}
\Big( \begin{array}{c}P_{e}(t)\\ V_{e}(t) \end{array} \Big)\!\!\!&=\!\!\!&
\int_0^t\int_{\mathbb{U}} u N_1(ds,du) + \int_0^t \int_{V_0(s)}^{V_0(s)+V_{e}(s)}\sqrt{2\sigma}\Big( \begin{array}{c}W_1(ds,dx)\\ W_2(ds,dx) \end{array} \Big) \cr
\!\!\!&\!\!\!& - \int_0^t bV_{e}(s)ds +\int_0^t\int_{\mathbb{U}}\int_{V_0(s-) }^{V_0(s-)+V_{e}(s-)}u\tilde{N}_0(ds,du,dx)
\end{eqnarray}
is a realization of the transition semigroup $(Q_{e,t})_{t\geq0}$. We can interpret this model as describing the cumulative impact of the exogenous orders on the market dynamics. Again, this model is independent of the drift and the volatility mean-reverts to the level zero if $b_2>0$.
\item[(iii)]
Let $\{\mathbf{P}_{a,t}(\cdot):t\geq 0\}$ be the family of probability laws induced by the last term on the right side of (\ref{eqn4.01}), and $(Q_{a,t})_{t\geq 0}$ be the Markov semigroup on $\mathbb{U}$ given by $Q_{a,t}(u',du):= \mathbf{P}_{a,t}* Q_{0,t}(u',du)$.
The Markov process $\{(P_a(t),V_a(t)):t\geq 0 \}$ that solves
\begin{eqnarray}\label{eqn4.20}
\Big( \begin{array}{c}P_a(t)\\ V_a(t) \end{array} \Big)\!\!\!&=\!\!\!& \int_0^t(a-bV_a(s))ds + \int_0^t \int_{V_0(s)+V_{e}(s)}^{V_0(s)+V_{e}(s)+V_a(s)}\sqrt{2\sigma}\Big( \begin{array}{c}W_1(ds,dx)\\ W_2(ds,dx) \end{array} \Big) \cr
\!\!\!&\!\!\!& +\int_0^t\int_{\mathbb{U}}\int_{V_0(s-)+V_{e}(s-)}^{V_0(s-)+V_{e}(s-)+V_a(s-)}u\tilde{N}_0(ds,du,dx)
\end{eqnarray}
is a realization of the transition semigroup $(Q_{a,t})_{t\geq0}$. Unlike the two other sub-models this sub-model depends on the drift $a$. We can interpret this model as describing the impact of the drift of the volatility process on the market dynamics.
\end{itemize}
By the spatial-orthogonality of the Gaussian white noises $W_1(ds,du)$, $W_2(ds,du)$ and the Poisson random measure $N_0(ds,du,dx)$, the processes defined by (\ref{eqn4.19})-(\ref{eqn4.20}) are mutually independent. Their sum equals the process (\ref{eqn3.21}).
\begin{theorem}
The following decomposition of the solution $\{(P(t),V(t)):t\geq 0\}$ to (\ref{eqn3.21}) holds:
\begin{eqnarray*}
\{(P(t),V(t)):t\geq 0\}\overset{\rm a.s.}=\Big\{\Big(
\begin{array}{c}
P_0(t)+P_{e}(t)+ P_a(t) \cr
V_0(t)+V_{e}(t) +V_a(t)
\end{array}\Big):t\geq 0\Big\}.
\end{eqnarray*}
\end{theorem}
The sub-model $\{(P_e(t),V_e(t)):t\geq 0\}$ can be decomposed into a sum of independent and identically distributed sub-models of the form (i). To this end, we denote by ${\mathbf Q}_{p,v}$ the distribution of the process (\ref{eqn4.19}) with initial state $(P(0),V(0))=(p,v)$. Then,
\[
({P}_e(t),{V}_e(t)):= \int_0^t\int_{\mathbb{U}} \int_{\mathbf{D}([0,\infty),\mathbb{U})} \omega(t-s)N_e(ds,d(p,v),d\omega)
\]
where $N_e(ds,d(p,v),d\omega)$ is a Poisson random measure on $(0,\infty)\times\mathbb{U}\times \mathbf{D}([0,\infty),\mathbb{U})$ with intensity $ ds \hat\nu_{\mathtt{e}}(d(p,v))\mathbf{Q}_{p,v}(d\omega) $.
The sub-model $\{(P_a(t),V_a(t)):t\geq 0\}$ can also be decomposed into self-enclosed sub-models where the volatility process evolves as an excursion process selected by a Poisson random measure.
In order to make this more precise, we put $\tau_0(\omega):=\inf\{ t>0: \omega_2(t)=0 \}$ for any $\omega(\cdot):= (\omega_1(\cdot),\omega_2(\cdot))\in \mathbf{D}(\mathbb{R}_+,\mathbb{U})$, denote by $\mathbf{D}_0(\mathbb{R}_+,\mathbb{U})$ the subspace of $\mathbf{D}(\mathbb{R}_+,\mathbb{U})$ defined by
\begin{eqnarray*}
\mathbf{D}_0(\mathbb{R}_+,\mathbb{U})\!\!\!&: =\!\!\!&\{ \omega\in \mathbf{D}(\mathbb{R}_+,\mathbb{U}): \omega(0)=(0,0)\mbox{ and } \omega_2(t)=0 \mbox{ for any } t\geq \tau_0(\omega)\},
\end{eqnarray*}
endowed with the filtration $\mathscr{G}_t:=\sigma(\omega(s):s\in[0,t])$, for $t \geq 0$, and by $(Q_{0,t}^\circ)_{t\geq 0}$ the restriction of the semigroup $(Q_{0,t})_{t\geq 0}$ on $\mathbb{R}\times (0,\infty)$. The following result is proved in the appendix.
\begin{theorem} \label{thm-cluster}
Let $\sigma_{22} > 0$. There exists a $\sigma$-finite measure $\mathbf{Q}(d\omega)$\footnote{We construct such a measure $\mathbf{Q}(d\omega)$ in (\ref{eqn4.31}).} on $\mathbf{D}([0,\infty),\mathbb{U})$ with support $\mathbf{D}_0([0,\infty),\mathbb{U})$, and a Poisson random measure $N_a(ds,d\omega)$ on $(0,\infty)\times \mathbf{D}_0([0,\infty),\mathbb{U})$ with intensity $a_2 ds \mathbf{Q}(d\omega)$ such that the stochastic system $\{(\hat{P}_a(t),\hat{V}_a(t)):t\geq 0 \}$ defined by
\begin{eqnarray}\label{eqn4.18}
(\hat{P}_a(t),\hat{V}_a(t)):= \int_0^t (a_1,0)ds+ \int_0^t \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \omega(t-s)N_a(ds,d\omega)
\end{eqnarray}
is Markov with respect to the filtration $(\mathscr{E}_t)_{t\geq 0}$ with $\mathscr{E}_t:=\sigma(N_a((0,s],U): s\in[0,t],\ U\in\mathscr{G}_{t-s})$ and has the same finite dimensional distributions as $\{(P_a(t),V_a(t)):t\geq 0 \}$.
\end{theorem}
\begin{remark}
Let us benchmark our decomposition result against that of Jiao et al.~\cite{JiaoMaScottiZhou2018}.
The variance process studied in their paper is a special case of (\ref{eqn4.20}).
By considering the jumps larger than some threshold $\bar{y}>0$ as immigrations of a CB-process, they provided a decomposition of the variance process as a sum of a truncated variance process $V^{(y)}_\cdot$ with jump threshold $\bar{y}$ and a sub-model of the form (\ref{eqn4.22}) with $N_1(ds,du)$ replaced by a Poisson random measure with intensity $V^{(y)}_{s-}ds \mathbf{1}_{u_2\geq \bar{y}} \hat\nu_{\mathtt{i}}(du)$\footnote{We emphasize that the assumption of a Poisson arrival of exogenous shocks was made for mathematical convenience and to better distinguish the effects of Poisson and Hawkes arrivals. An extension to Hawkes arrivals is not difficult.}.
Their decomposition offers a way to refine the sub-model (\ref{eqn4.20}), which is different from our cluster representation in Theorem~\ref{thm-cluster}.
However, their truncated variance process $V^{(y)}_\cdot$ is not self-enclosed.
Moreover, we believe that our decomposition result is economically more intuitive as it is based on the different origins of the jumps.
\end{remark}
\subsection{The sub-model $\{(P_0(t),V_0(t)):t\geq 0\}$ }
In this section we study the sub-model $\{(P_0(t),V_0(t)):t\geq 0\}$, that is, the impact of an exogenous shock of magnitude $(P(0), V(0))$ on the market dynamics.
By \cite[Theorem 1.1]{KawazuWatanabe1971} and arguments given in \cite{Grey1974}, the volatility process $\{V_0(t):t\geq 0\}$ is a continuous-state branching process, and $\mathbf{P}\{\lim_{t\to\infty }V_0(t)\in \{0,\infty\}\}=1$.
Let $$\mathscr{T}_{0} := \inf\{t\geq 0: V_0(t)=0 \}$$ be its first hitting time of $0$.
Since $0$ is a trap for this process, $V_0(t) = 0$ for all $t \geq \mathscr{T}_{0}$ almost surely.
The following lemma can be viewed as an analogue of Corollary \ref{Thm207}. It analyzes the impact duration of exogenous shocks on the market dynamics in terms of the function
\begin{eqnarray}\label{eqn4.54}
G_0(x) \!\!\!& := \!\!\!& G(0,-x) = b_2 x + \sigma_{22} x^2 +\int_{\mathbb{U}} (e^{-u_2 x}-1+ u_2 x )\hat\nu_\mathtt{i}(du), \qquad x \geq 0.
\end{eqnarray}
\begin{lemma}[\rm Grey (1974)] \label{Grey}
For any $t>0$, $\mathbf{P}\{\mathscr{T}_{0}\leq t\}>0$ if and only if there exists a constant $\vartheta>0$ such that
\begin{eqnarray}\label{eqn4.52}
G_0(\vartheta)>0\quad \mbox{and}\quad \int_\vartheta^\infty \frac{1}{G_0(x)}dx<\infty.
\end{eqnarray}
In this case, $\mathbf{P}\{\mathscr{T}_{0}\leq t\}=\exp\{-V(0) \cdot \overline{v}_t \}$, where $\{\overline{v}_t :t\geq 0 \}$ is the minimal solution to the following Riccati differential equation
\begin{eqnarray*}
\frac{d}{dt}\overline{v}_t= -G_0(\overline{v}_t)
\end{eqnarray*}
with singular initial condition $\overline{v}_0=\infty$.
Moreover, $\mathbf{P}\{\mathscr{T}_{0}<\infty\}=\exp\big\{-V(0) \cdot \overline{v}_\infty \big\}$ with $\overline{v}_\infty $ being the largest root of $G_0(x)=0$, and $\overline{v}_\infty =0$ if and only if $b_2= G'_0(0)\geq 0$.
\end{lemma}
\begin{remark}
The integral condition (\ref{eqn4.52}) holds if, for instance, $\sigma_{22}>0$. In the absence of jumps, this condition is also necessary.
\end{remark}
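For illustration, the hitting-time distribution of Lemma~\ref{Grey} can be computed numerically by integrating the Riccati equation for $\overline{v}_t$ from a large finite starting value as a proxy for the singular initial condition. The Python sketch below assumes the jump-free case, where $G_0(x)=b_2x+\sigma_{22}x^2$ and (\ref{eqn4.52}) holds whenever $\sigma_{22}>0$; the parameter values are assumptions for illustration only.
\begin{verbatim}
import numpy as np

# Sketch: P{T_0 <= t} = exp(-V(0) * v_bar_t), where v_bar solves
# dv/dt = -G_0(v) with v_bar_0 = +infinity.  Jump-free case assumed, so
# G_0(x) = b2*x + s22*x**2; the singular start is approximated by 1e8.
b2, s22, V0 = 1.0, 0.5, 2.0                # illustrative parameters
T, n = 5.0, 5000
dt = T / n

v_bar, prob = 1e8, []
for _ in range(n + 1):
    prob.append(np.exp(-V0 * v_bar))
    # semi-implicit Euler step; keeps the decreasing solution stable
    v_bar = v_bar / (1.0 + dt * (b2 + s22 * v_bar))

print("P(T_0 <= 1) approx", prob[int(1.0 / dt)])
\end{verbatim}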
We now consider the distribution of the induced jumps with magnitude $(p,v)\in(0,\infty)^2$ or larger.
To this end, let $A:= \{u\in\mathbb{U}: |u_1|\geq p,|u_2|\geq v\}$, let $$\mathcal{T}_A:= \sup \{t\geq 0: (\Delta P_0(t),\Delta V_0(t))\in A\}$$ denote the arrival time of the last jump whose magnitude belongs to $A$, and for any $T\in[0,\infty]$, let
\begin{eqnarray*}
\mathcal{J}_A(T):= \#\{t\in[0,T]: (\Delta P_0(t),\Delta V_0(t))\in A\}=\int_0^T \int_{A} \int_0^{V_0(s-)} N_0(ds,du,dx)
\end{eqnarray*}
denote the number of jumps with magnitudes in $A$ during the time interval $[0,T]$. Since the process $\{V_0(t):t\geq 0\}$ is conservative, i.e. $\mathbf{P}\{\sup_{s\in[0,T]}V_0(s)<\infty\}=1$, $\mathcal{T}_A<\infty$ if and only if $\mathcal{J}_A(\infty)<\infty$.
By definition,
\begin{eqnarray*}
\{\mathcal{T}_A\leq r, \mathscr{T}_{0}\leq t\} = \Big\{ \int_r^t \int_{A} \int_0^{V_0(s-)} N_0(ds,du,dx)=0, V_0(t)=0 \Big\}
\end{eqnarray*}
almost surely, and so
\[
\mathbf{P} \{\mathcal{T}_A\leq r, \mathscr{T}_{0}\leq t\} = \mathbf{E}\left[ \exp \left\{-\hat\nu_\mathtt{i}(A) \cdot\int_r^t V_0(s) ds \right\} {\bf 1}_{\{\mathcal{T}_A\leq r, V_0(t)=0\}} \right].
\]
The indicator function is inconvenient when computing the expected value. To bypass this problem we use a result from He and Li \cite{HeLi2016}. Under the assumptions of Lemma \ref{Grey}, the event $\{\mathcal{T}_A\leq r\}$ has strictly positive probability for any $r > 0$. Conditioned on this event, the process $\{V_0(t) :t\geq r\}$ almost surely equals the process $\{V^A_0(t) :t\geq r\}$ defined by
\begin{eqnarray*}
V^A_0(t) \!\!\!&=\!\!\!& V_0(r) - \int_r^t\Big(b_2+\int_A u_2\hat{\nu}_{\mathtt{i}}(du)\Big)V^A_0(s)ds + \int_r^t \sqrt{2\sigma_{22}V^A_0(s)} dB_2(s) \cr
\!\!\!& \!\!\!& +\int_r^t\int_{A^{\rm c}}\int_0^{V^A_0(s-)}u_2\tilde{N}_0(ds,du,dx).
\end{eqnarray*}
By Theorem 2.2 in \cite{DawsonLi2012}, $V_0^A(t) \leq V_0(t)$ for all $t \geq r$.
In particular, under the conditions of Lemma \ref{Grey}, $\mathbf{P}\{V^A_0(t) = 0\}> 0$ for all $t > r$. Using the fact that $0$ is a trap for the volatility process, straightforward modifications of arguments given in the proof of \cite[Theorem 3.1]{HeLi2016} show that for any $t > r$,
\begin{eqnarray*}
\{\mathcal{T}_A\leq r, \mathscr{T}_{0}\leq t\} = \Big\{ \int_r^t \int_{A} \int_0^{V^A_0(s-)} N_0(ds,du,dx)=0, V^A_0(t)=0 \Big\}.
\end{eqnarray*}
Using the dominated convergence theorem, and the fact that $\{(V^A_0(t),\int_0^t V^A_0(w)dw):t\geq r\}$ is an affine process,
\begin{eqnarray}\label{eqn4.55}
\mathbf{P}_{\mathscr{F}_r}\{\mathcal{T}_A\leq r,\mathscr{T}_{0}\leq t\}\!\!\!&=\!\!\!& \lim_{\lambda_0\to\infty}\mathbf{E}_{\mathscr{F}_r}\Big[ \int_r^t \int_{A} \int_0^{V^A_0(s-)} N_0(ds,du,dx)=0,\ \exp\big\{-\lambda_0\cdot V^A_0(t)\big\} \Big] \cr
\!\!\!&=\!\!\!& \lim_{\lambda_0\to\infty}\mathbf{E}_{\mathscr{F}_r}\Big[ \exp\Big\{-\hat\nu_\mathtt{i}(A)\int_r^t V^A_0(s)ds -\lambda_0\cdot V^A_0(t)\Big\} \Big] \cr
& = & \lim_{\lambda_0\to\infty} \exp\big\{-\psi^A_{t-r}(\lambda_0) V_0(r)\big\},
\end{eqnarray}
where $\psi^A_\cdot(\lambda_0 ): [0,\infty) \to \mathbb R_+$ is the unique positive solution to the Riccati differential equation
\begin{eqnarray}\label{eqn4.28}
\psi^A_t(\lambda_0)=\lambda_0+\int_0^t \big[\hat\nu_\mathtt{i}(A)- G^A_0\big(\psi^A_w(\lambda_0)\big)\big]dw,
\end{eqnarray}
and the function $G^A_0: \mathbb R_+ \to \mathbb R$ is defined by
\[
G^A_0(x) := G_0(x)-\int_A (e^{-xu_2}-1)\hat{\nu}_{\mathtt{i}}(du);
\]
see Theorem~2.7 in Duffie et al.~\cite{DuffieFilipovicSchachermayer2003} for details.
The following theorem shows that the distribution of the random variable $\left(\mathscr{T}_0, \mathcal{T}_A, \mathcal{J}_A(T)\right)$ can be expressed in terms of the unique continuous solution to the Riccati equation (\ref{eqn4.28}) with singular initial condition, and the unique non-negative continuous solution to the Riccati differential equation
\begin{eqnarray}
\frac{d}{ds}\phi^A_s(x,\lambda)
= - G_0(\phi^A_s(x,\lambda) )
- (e^{-\lambda}-1) \int_A \exp\{ - \phi^A_s (x,\lambda) \cdot u_2 \} \hat{\nu}_{\mathtt{i}}(du) \label{eqn4.44}
\end{eqnarray}
with initial condition $\phi^A_0(x,\lambda) = x$ for $x,\lambda\geq 0$.
\begin{theorem}
Suppose that (\ref{eqn4.52}) holds for some $\vartheta>0$. In the class of continuous functions there exists a minimal positive solution to the Riccati equation
\begin{equation}
\frac{d}{ds}
\bar\psi^A_s = \hat\nu_\mathtt{i}(A) - G^A_0(\bar\psi^A_s), \label{eqn4.25}
\end{equation}
with singular initial condition $\bar\psi^A_0 = \infty$. This function is finite on $(0,\infty)$, and for any $t> r\geq 0$ and $\lambda\geq 0$,
\begin{eqnarray}\label{eqn4.27}
\mathbf{E}\Big[\exp\big\{-\lambda \mathcal{J}_A(T) \big\} ;\mathcal{T}_A\leq r,\mathscr{T}_{0}\leq t\Big]
=\exp\Big\{ - \phi^A_{r\wedge T}\Big(-\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r} \big),\lambda\Big)\cdot V(0) \Big\}.
\end{eqnarray}
\end{theorem}
\noindent{\it Proof.~~}
By \cite[Proposition 6.1]{DuffieFilipovicSchachermayer2003}, the solution $\psi^A_\cdot(\cdot)$ to the Riccati equation (\ref{eqn4.28}) is continuous in both variables. From this and the uniqueness of the solution, we conclude that the ODE satisfies a comparison principle. In particular, $\psi^A_t(\lambda_0)$ is increasing in $\lambda_0$ for any $t \geq 0$, and hence the limit $\bar{\psi}^A_t:=\lim_{\lambda_0\to \infty}\psi^A_t(\lambda_0)$ exists in $[0,\infty]$. In order to prove that $\bar{\psi}^A_t < \infty$ for any $t > 0$, we first conclude from (\ref{eqn4.55}) that
\begin{eqnarray*}
\exp\big\{-\bar\psi^A_{t-r} V_0(r)\big\}
\!\!\!&=\!\!\!& \lim_{\lambda_0 \to \infty} \mathbf{E}_{\mathscr{F}_r} \Big[ \exp\Big\{-\hat\nu_\mathtt{i}(A)\int_r^t V^A_0(s)ds -\lambda_0 \cdot V^A_0(t)\Big\} \Big] \\
\!\!\!&=\!\!\!& \mathbf{E} \Big[ \exp\Big\{-\hat\nu_\mathtt{i}(A)\int_r^t V^A_0(s)ds \Big\};V^A_0(t)=0 \Big].
\end{eqnarray*}
Since $\{V_0(t): t \geq 0\}$ is conservative, so is $\{ V^A_0(t): t \geq r\}$ and hence $\int_r^t V^A_0(s)ds<\infty.$ Thus, the expectation on the right side of the last equality is positive since $\mathbf{P}\{V^A_0(t) = 0\} > 0$ for all $t > r$. This shows that $\bar\psi^A_{t-r}<\infty$ for any $t>r$. Stochastic continuity of $\{V^A_0(t) : t \geq r\}$ yields continuity of $\bar\psi_\cdot^A$ on $(0,\infty)$.
We now show that $\{\bar\psi^A_{t}:t>0\}$ solves (\ref{eqn4.25}) with singular initial condition.
Using the semigroup property $\psi^A_{t+s}(\lambda_0)= \psi^A_t(\psi^A_s(\lambda_0))$ for any $t,s> 0$ and letting $\lambda_0 \to\infty$ shows that $\bar{\psi}^A_{t+s}= \psi^A_t(\bar{\psi}^A_s)$, from which we deduce that $t\mapsto\bar{\psi}^A_t$ is a continuous solution to (\ref{eqn4.25}). Moreover, for any sequence $\{s_n:n\geq 1\}$ with $s_n\to 0+$, it follows from $\sup_{n\geq 1}\bar\psi^A_{s_n}\geq \sup_{n\geq 1}\psi^A_{s_n}(\lambda_0)= \lambda_0$ that $\lim_{t\to0+} \bar\psi^A_t =\infty$.
Finally, the constructed solution is also the minimal solution in the class of continuous functions. Indeed, if $\psi(\cdot)$ is another continuous solution to (\ref{eqn4.25}) with $\psi(0+)=\infty$, then the semigroup property yields $\psi(t)=\lim_{s\to 0+}\psi(t+s)=\lim_{s\to 0+}\psi^A_t(\psi(s))\geq \psi^A_t(\lambda_0)$ for any $t>0$ and all $\lambda_0\geq 0$, from which the assertion follows.
We finally prove (\ref{eqn4.27}). For any $t>r\geq 0$, it follows from (\ref{eqn4.55}) that
\begin{eqnarray*}
\lefteqn{\mathbf{E}\big[\exp\{-\lambda \mathcal{J}_A(T) \} ;\mathcal{T}_A\leq r,\mathscr{T}_{0}\leq t\big]
=\mathbf{E}\big[\exp\{-\lambda \mathcal{J}_A(T\wedge r) \} \times\mathbf{E}_{\mathscr{F}_{r\wedge T}}\big[\exp\big\{-\bar\psi^A_{t-r} V_0(r)\big\}\big]\big]}\qquad \qquad\qquad\!\!\!&\!\!\!&\cr
\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \mathbf{E}\Big[\exp\Big\{-\int_0^{r\wedge T}\int_A\int_0^{V_0(s-)} \lambda N_0(ds,du,dx)+\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r}\big)\cdot V_0(r\wedge T) \Big\} \Big].
\end{eqnarray*}
Following the standard arguments given in, e.g., the proof of \cite[Theorem 2.12]{DuffieFilipovicSchachermayer2003}, we have
\begin{eqnarray*}
\lefteqn{\exp\Big\{ -\int_0^{r\wedge T}\int_A\int_0^{V_0(s-)} \lambda N_0(ds,du,dx)+ \psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r}\big) \cdot V_0(r\wedge T) \Big\}}\qquad\!\!\!&\!\!\!&\cr
\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!&
\exp\Big\{ - \phi^A_{r\wedge T}\Big(-\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r}\big),\lambda\Big)\cdot V(0) \Big\} + \mbox{martingale}.
\end{eqnarray*}
Taking expectations on both sides of the above equation yields the desired result.
$\Box$
From (\ref{eqn4.52}), we see that $G^A_0(\infty)=\infty$. Hence, the right inverse $G_0^{A,-1}(y):= \inf\big\{x\geq 0: G^{A}_0(x)> y \big\}$ of $G_0^A$
is well defined. This allows us to obtain the distribution of the random variable $\left( \mathcal{J}_A(T), \mathcal{T}_A \right)$.
\begin{corollary}\label{Thm407}
Suppose that (\ref{eqn4.52}) holds for some $\vartheta>0$.
As $t\to\infty$, the function $t \mapsto \bar\psi^A_t$ decreases to $\bar\psi^A_\infty := G^{A,-1}_0\big(\hat\nu_\mathtt{i}(A)\big)$.
Moreover, for any $r \geq 0$ and $\lambda\geq 0$,
\begin{eqnarray}\label{eqn4.50}
\mathbf{E}\big[\exp\{-\lambda \mathcal{J}_A(T) \} ;\ \mathcal{T}_A\leq r\big]= \exp\Big\{ - \phi^A_{r\wedge T}\Big(-\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_\infty \big),\lambda\Big)\cdot V(0) \Big\}.
\end{eqnarray}
\end{corollary}
\noindent{\it Proof.~~}
From (\ref{eqn4.27}), for any $\lambda \geq 0$, we see that $\phi^A_{r\wedge T}\big(-\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r} \big),\lambda \big)$ is decreasing in $t$. Moreover,
by continuity and the uniqueness of the solution to (\ref{eqn4.44}), the equation satisfies a comparison principle, and so the mapping $x \mapsto \phi^A_t(x,0)$ is increasing. As a result, the mapping $t \mapsto -\psi^G_{(r-T)^+}\big(0,-\bar\psi^A_{t-r} \big)$ is decreasing.
Moreover, from (\ref{eqn3.24}),
$-\psi^G_t(0,-x)=x- \int_0^t G_0(\psi^G_s(0,-x))ds$, which is increasing in $x$. The last two results imply that the mapping $t \mapsto \bar\psi^A_{t-r}$ is decreasing. In particular, the limit $\bar\psi^A_\infty:=\lim_{t\to\infty} \bar\psi^A_{t-r}$ exists. Since $\bar{\psi}^A_{s+t}= \psi^A_t(\bar{\psi}^A_s)$, letting $s\to\infty$ gives $\bar{\psi}^A_\infty= \psi^A_t(\bar{\psi}^A_\infty)$. Substituting this into (\ref{eqn4.28}), we conclude that $\int_0^t [\hat\nu_\mathtt{i}(A) -G^A_0\big(\bar\psi^A_\infty\big)]ds\equiv0$ and hence
\begin{eqnarray*}
\bar\psi^A_\infty = G^{A,-1}_0\left(\hat\nu_\mathtt{i}(A)\right).
\end{eqnarray*}
$\Box$
The following corollary is the analogue of Corollary \ref{Thm209}. It shows that the quantity $b_2$ is the analogue of the quantity $\tilde \beta$ introduced in equation (\ref{beta-tilde}); it determines whether or not the impact duration of an external shock is almost surely finite.
\begin{corollary}\label{Thm408}
We have $\mathbf{P}\{\mathcal{T}_A<\infty \}=\mathbf{P}\{\mathcal{J}_A(\infty)<\infty \}=\exp\{-V(0)\cdot \bar{v}_\infty\}$, where $\bar v_\infty$ is the largest root of $G_0(x)=0$. Moreover,
$\mathbf{P}\{\mathcal{T}_A<\infty \}=\mathbf{P}\{\mathcal{J}_A(\infty)<\infty \}=1$ if and only if $b_2=G_0'(0) \geq 0$.
\end{corollary}
\noindent{\it Proof.~~}
By equation (\ref{eqn3.23}) and Corollary~\ref{Thm407}, $\mathbf{P}\{\mathcal{T}_A\leq r \} =\exp\big\{ - \phi^A_{r}( \bar\psi^A_\infty,0)\cdot V(0) \big\}$. This implies that $\phi^A_{r}( \bar\psi^A_\infty,0)$ decreases to some limit $\phi^A_\infty(\bar\psi^A_\infty,0)$ as $r\to\infty$.
From (\ref{eqn4.44}), we see that the semigroup property holds for $\phi^A_{r}(x,0)$ for any $x\geq 0$ and hence $G_0(\phi^A_\infty(x,0))=0$. We now show that $\phi^A_\infty(x,0) \equiv \bar{v}_\infty$.
From (\ref{eqn4.54}), we see that $G_0(0) = 0$ and that $G_0$ is strictly convex. Hence, $G_0(x)>0$ for any $x > 0$ if and only if $b_2=G_0'(0) \geq 0$. In this case, $\bar{v}_\infty=0$ is the only root of $G_0(x)=0$ and $\mathbf{P}\{\mathcal{T}_A<\infty \}=\mathbf{P}\{\mathcal{J}_A(\infty)<\infty \}=1$.
If $b_2=G_0'(0) < 0$, since $G_0(x)\to\infty$ as $x\to\infty$, there is only one positive root $\bar{v}_\infty$ of $G_0(x)=0$ and $G_0(y)<0$ for $y\in(0,\bar{v}_\infty)$. It suffices to prove $\phi^A_\infty(x,0)>0$, which follows directly from the fact that $\phi^A_t(x,0)>0$ continuously decreases to $\phi^A_\infty(x,0)>0$ as $t\to\infty$.
$\Box$
Taking expectation on both sides of (\ref{eqn4.19}), we have $\mathbf{E}[V_0(t)]=V(0) \exp\{-b_2 t\}$ for any $t\geq 0$.
From the definition of $\mathcal{J}_A(T)$, we have
\begin{eqnarray*}
\mathbf{E}\big[\mathcal{J}_A(T)\big]\!\!\!&=\!\!\!& \mathbf{E}\Big[ \int_0^T \int_{A} \int_0^{V_0(s-)} N_0(ds,du,dx) \Big] =\hat\nu_\mathtt{i}(A) \int_0^T \mathbf{E}[V_0(s)] ds = \hat\nu_\mathtt{i}(A)\cdot \frac{V(0)}{b_2}\cdot (1-e^{-b_2 T}).
\end{eqnarray*}
In particular, $\mathbf{E}[\mathcal{J}_A(T)] = \hat\nu_\mathtt{i}(A) V(0) T+ O(T^2)$ for small $T >0$ and $\mathbf{E}[\mathcal{J}_A(T)]$ converges as $T\to\infty$ if and only if $b_2 > 0$. In this case, the number of induced jumps of a given magnitude following a large shock is finite as shown above and the expected number of shocks is proportional to the shock size. Specifically, we have the following corollary.
\begin{corollary}\label{cor-cluster}
We have $\mathbf{E}[\mathcal{J}_A(\infty)]<\infty$ if and only if $b_2>0$.
In this case, $\mathbf{E}[\mathcal{J}_A(\infty)]= \hat\nu_\mathtt{i}(A)\cdot V(0)/b_2$.
\end{corollary}
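As a purely illustrative numerical reading of this corollary (with assumed, uncalibrated values): if $b_2=1$, $V(0)=0.5$ and $\hat\nu_\mathtt{i}(A)=2$, then an exogenous shock of this size triggers on average $\hat\nu_\mathtt{i}(A)\cdot V(0)/b_2=1$ induced jump with magnitude in $A$ over its entire impact duration, and by the preceding display a fraction $1-e^{-b_2 T}\approx 0.63$ of this expected total has already occurred by time $T=1$.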
The following corollary is the analogue of Corollary \ref{Thm210}. It provides four regimes for the impact duration of exogenous shocks on the market dynamics. If the volatility strictly mean-reverts to $0$, then the impact decreases exponentially. In the critical case $b_2=0$, the impact decays only slowly.
\begin{corollary}
We have the following four regimes:
\begin{enumerate}
\item[(1)] If $b_2<0$, we have as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\} \sim \mathbf{P}\{\mathscr{T}_{0}> t\}\to 1- \exp\big\{-V(0) \cdot \bar{v}_\infty\big\};
\end{eqnarray*}
\item[(2)] If $b_2>0$, there exists a constant $C>0$ such that for any $t\geq 1$,
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\}\leq \mathbf{P}\{\mathscr{T}_{0}> t\} \leq C\cdot V(0)\cdot e^{-b_2t};
\end{eqnarray*}
\item[(3)] If $b_2=0$ and $\hat\nu_\mathtt{i}(|u_2|^2):=\frac{1}{2}\int_\mathbb{U}|u_2|^2 \hat\nu_\mathtt{i}(du)<\infty$, we have, as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\} \sim \mathbf{P}\{\mathscr{T}_{0}> t\} \sim \frac{V(0)}{\sigma_{22}+\hat\nu_\mathtt{i}(|u_2|^2)}\cdot t^{-1};
\end{eqnarray*}
\item[(4)] If $b_2=\sigma_{22}=0$ and $\hat\nu_\mathtt{i}(\mathbb{R}\times [x,\infty))\sim Cx^{-1-\alpha}$ as $x\to\infty$ for some $\alpha\in(0,1)$, there exists a constant $C>0$ such that as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\} \sim \mathbf{P}\{\mathscr{T}_{0}> t\} \sim C \cdot V(0) \cdot t^{-1/\alpha}.
\end{eqnarray*}
\end{enumerate}
\end{corollary}
\noindent{\it Proof.~~}
The first regime follows directly from Lemma~\ref{Grey} and Corollary~\ref{Thm408}.
For the second regime, we have
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\}\leq\mathbf{P}\{\mathscr{T}_{0}> t\}= 1-\exp\{-V(0) \cdot \overline{v}_t\}\leq V(0) \cdot \overline{v}_t.
\end{eqnarray*}
From (\ref{eqn3.25}), we have $G_0(x)\geq b_2 x$ for any $x\geq 0$.
From Gr\"onwall's inequality, we also have for any $t\geq 1$,
\begin{eqnarray*}
\overline{v}_t\leq \overline{v}_1- \int_1^t b_2\overline{v}_sds \leq \overline{v}_1e^{-b_2(t-1)}.
\end{eqnarray*}
The last two regimes for $\mathbf{P}\{\mathscr{T}_{0}> t\}$ can be proved following the proof of Corollary~\ref{Thm210} with the fact that $G_0(x)\sim [\sigma_{22}+\hat\nu_\mathtt{i}(|u_2|^2)] x^2$ in regime (3) and $G_0(x)\sim C x^{\alpha+1}$ in regime (4) as $x\to 0+$.
We now prove the last two regimes for $\mathbf{P}\{\mathcal{T}_A> t\}$.
From Corollaries~\ref{Thm407} and \ref{Thm408}, we have, as $t\to\infty$,
\begin{eqnarray*}
\mathbf{P}\{\mathcal{T}_A> t\}\!\!\!&=\!\!\!& 1-\exp\big\{ - \phi^A_t( G^{A,-1}_0(\hat\nu_\mathtt{i}(A)),0)\cdot V(0) \big\} \sim V(0)\cdot \phi^A_t( G^{A,-1}_0(\hat\nu_\mathtt{i}(A)),0).
\end{eqnarray*}
From (\ref{eqn4.44}), we also have
\begin{eqnarray*}
\phi^A_t( G^{A,-1}_0(\hat\nu_\mathtt{i}(A)),0)\!\!\!&=\!\!\!& G^{A,-1}_0(\hat\nu_\mathtt{i}(A)) - \int_0^t G_0(\phi^A_s( G^{A,-1}_0(\hat\nu_\mathtt{i}(A)),0))ds.
\end{eqnarray*}
From this and the proof of Corollary~\ref{Thm210}, we can also establish the last two regimes for $\mathbf{P}\{\mathcal{T}_A> t\}$.
$\Box$
\subsection{The sub-model $\{(P_a(t),V_a(t)):t\geq 0\}$ }
We now study the distribution of induced jumps in the system $\{(P_a(t),V_a(t)):t\geq 0\}$ assuming that $a_2 > 0$. The analysis is much simpler than the preceding one because now the volatility process is recurrent. For any $T>0$, let
\begin{eqnarray*}
\mathcal{J}_A^a(T):= \#\big\{ t\in[0,T]: (\Delta P_a(t),\Delta V_a(t))\in A\big\}= \int_0^T \int_A\int_0^{V_a(s-)} N_0(ds,du,dx).
\end{eqnarray*}
Taking expectations on both sides of the above equation, we see that the expected number of jumps is given by
\begin{eqnarray*}
\mathbf{E}\big[\mathcal{J}_A^a(T)\big] \!\!\!&=\!\!\!& \hat\nu_\mathtt{i}(A)\int_0^T \mathbf{E}[V_a(s)]ds.
\end{eqnarray*}
Taking expectation on both sides of (\ref{eqn4.20}), we also have
\begin{eqnarray*}
\mathbf{E}[V_a(t)]= \int_0^t (a_2-b_2 \mathbf{E}[V_a(s)])ds =\frac{a_2}{b_2} (1-e^{-b_2t})
\quad\mbox{and}\quad
\mathbf{E}\big[\mathcal{J}_A^a(T)\big]= \hat\nu_\mathtt{i}(A)\cdot \frac{a_2}{b_2}\cdot \big[T-(1-e^{-b_2 T})/b_2\big].
\end{eqnarray*}
\begin{lemma}
For any $T\in[0,\infty)$, we have
\begin{eqnarray*}
\mathbf{E}\big[\exp\{-\lambda\mathcal{J}_A^a(T)\}\big]= \exp\Big\{ - \int_0^T a_2 \psi_s^a(\lambda)ds\Big\},
\end{eqnarray*}
where $s\mapsto\psi_s^a(\lambda)$ is the unique solution to the following Riccati equation:
\begin{eqnarray*}
\psi_t^a(\lambda)\!\!\!&=\!\!\!& \hat\nu_\mathtt{i}(A)\cdot(1-e^{-\lambda})\cdot t - \int_0^t G_0(\psi_s^a(\lambda))ds.
\end{eqnarray*}
\end{lemma}
\noindent{\it Proof.~~}
From the exponential formula for Poisson random measures (see \cite[p.8]{Bertoin1996}), we have
\begin{eqnarray*}
\mathbf{E}\big[\exp\{-\lambda\mathcal{J}_A^a(T)\}\big]\!\!\!&=\!\!\!& \mathbf{E}\Big[\exp\Big\{- \hat\nu_\mathtt{i}(A)\cdot(1-e^{-\lambda})\cdot\int_0^T V_a(s)ds\Big\}\Big].
\end{eqnarray*}
Since $\{(\int_0^t V_a(s)ds,V_a(t)):t\geq 0\}$ is affine,
the result follows from \cite[Theorem 2.7]{DuffieFilipovicSchachermayer2003}.
$\Box$
\begin{remark}\label{concluding-rem}
Let us compare exogenously and endogenously triggered jump cascades assuming that the volatility process strictly mean-reverts
and the arrival rate of exogenous shocks $\lambda_\mathtt{e}:=\nu_\mathtt{e}(\mathbb{U})$ is finite.
By Corollary \ref{cor-cluster} exogenously triggered clusters with jumps in a set $A$ arrive at rate $\lambda_\mathtt{e} \hat{\nu}_\mathtt{i}(A)$, and an external volatility shock of size $V(0)$ triggers $\frac{V(0)}{b_2} (1-e^{-b_2t})$ jumps by time $t>0$ on average. By contrast, endogenously triggered jump cascades arrive at a rate $a_2 \hat\nu_\mathtt{i}(A)$ and each cascade comprises $\frac{1}{b_2}(1-e^{-b_2t})$ jumps by time $t>0$ on average. This shows that both cascades have the same ``duration distribution'' but that exogenously triggered cascades usually comprise more jumps immediately and hence cluster more heavily. This effect is illustrated by Figure 3.
\end{remark}
\begin{appendix}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\section{A cluster representation for $\{(P_a(t),V_a(t)):t\geq 0\}$}
\setcounter{equation}{0}
This appendix proves Theorem \ref{thm-cluster}. The proof uses arguments given in Li \cite{Li2019} where the result is established for the volatility process. We assume throughout that $\sigma_{22} > 0$ and start with the following simple but useful lemma.
\begin{lemma}\label{Thm404}
Let $\nu_1(du)$ and $\nu_2(du)$ be measures on $\mathbb{U}\setminus \{0\}$ such that $(u_2\wedge 1)\nu_1(du)$ and $(u_2\wedge 1)\nu_2(du)$ are finite. Then $\nu_1(du)=\nu_2(du)$ if and only if for any $z=(z_1,z_2)\in \mathbb{U}_*$,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{eqn4.12}
\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\nu_1(du)= \int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\nu_2(du).
\eeqlb
\end{lemma}
\noindent{\it Proof.~~}
We first extend $\nu_1(du)$ and $\nu_2(du)$ to $\mathbb{U}$ with $\nu_1(\{0\})=\nu_2(\{0\})=0$. From (\ref{eqn4.12}) with $z$ replaced by $z-(0,1)$, we have
\begin{eqnarray}\label{eqn4.13}
\int_{\mathbb{U}}(e^{\langle z,u\rangle-u_2}-e^{z_1u_1})\nu_1(du)= \int_{\mathbb{U}}(e^{\langle z,u\rangle-u_2}-e^{z_1u_1})\nu_2(du).
\end{eqnarray}
Taking the difference between (\ref{eqn4.12}) and (\ref{eqn4.13}), we have
\begin{eqnarray*}
\int_{\mathbb{U}}e^{\langle z,u\rangle}(1-e^{-u_2})\nu_1(du)= \int_{\mathbb{U}}e^{\langle z,u\rangle}(1-e^{-u_2})\nu_2(du).
\end{eqnarray*}
By assumption $(1-e^{-u_2})\nu_1(du)$ and $(1-e^{-u_2})\nu_2(du)$ are finite measures on $\mathbb{U}$.
By the one-to-one correspondence between finite measures and their characteristic functions, $(1-e^{-u_2})\nu_1(du)=(1-e^{-u_2})\nu_2(du)$ and hence $\nu_1(du)=\nu_2(du)$.
$\Box$
For any $t>0$ and $u'\in\mathbb{U}$, the probability measure $Q_{0,t}(u',du)$ introduced in (\ref{eqn4.02}) is infinitely divisible, i.e., for any $n\geq 1$,
\begin{eqnarray}\label{eqn4.08}
\int_\mathbb{U} e^{\langle z,u\rangle}Q_{0,t}(u',du)= \Big(\exp\big\{z_1\frac{u'_1}{n}+\psi^G_t(z)\frac{u'_2}{n}\big\}\Big)^n= \int_\mathbb{U} e^{\langle z,u\rangle}Q^{(*n)}_{0,t}(u'/n,du).
\end{eqnarray}
Using the L\'evy-Khintchine formula for infinitely divisible distributions (see \cite[Theorem 1]{Bertoin1996}), the representation (\ref{eqn3.24}) for the characteristic exponent $\{\psi^G_t(z): z\in \mathbb{U}_*\}$ and Theorem 3.13 in \cite{Li2019} along with the assumption that $\sigma_{22}>0$, we obtain that
\begin{eqnarray}\label{eqn4.09}
\psi^G_t(z)= b_1(t)z_1 + \sigma_{11}(t) |z_1|^2 +\int_{\mathbb{U}\setminus \{0\}} (e^{\langle z,u\rangle}-1- z_1u_1)\eta_t(du),
\end{eqnarray}
where $b_1(t) \in \mathbb R,$
$\sigma_{11}(t)\geq 0$ and $(|u_1|\wedge |u_1|^2+ |u_2|\wedge 1)\eta_t(du)$ is a finite measure on $\mathbb{U}\setminus \{0\}$.
\begin{lemma}\label{Thm405}
The family of $\sigma$-finite measures $\{\eta_t(du):t\geq 0 \}$ is an entrance law for $(Q_{0,t}^\circ)_{t\geq 0}$, i.e.
$$\eta_{s+t}(du)= \eta_t Q_{0,s}^\circ(du), \qquad s,t\geq 0.$$
\end{lemma}
\noindent{\it Proof.~~}
In view of Lemma~\ref{Thm404}, it suffices to prove that for any $s,t\geq 0$
\begin{eqnarray}\label{eqn4.14}
\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\eta_{s+t}(du)= \int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\eta_tQ^{\circ}_{0,s}(du).
\end{eqnarray}
From (\ref{eqn4.09}),
\begin{eqnarray}\label{eqn4.15}
\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\eta_{t}(du)=\psi^G_{t}(z_1,z_2)-\psi^G_{t}(z_1,0).
\end{eqnarray}
Moreover, from the definition of $Q^{\circ}_{0,s}(u',du)$ and (\ref{eqn4.02}), we have
\begin{eqnarray*}
\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})Q^{\circ}_{0,s}(u',du)
\!\!\!&=\!\!\!&\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})Q_{0,s}(u',du)\cr
\!\!\!&=\!\!\!& \exp\big\{z_1u'_1+\psi^G_s(z)u'_2\big\} -\exp\big\{z_1u'_1+\psi^G_s(z_1,0)u'_2\big\}.
\end{eqnarray*}
Substituting this into the right-hand side of (\ref{eqn4.14}) and using (\ref{eqn4.15}), we obtain
\begin{eqnarray*}
\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\eta_tQ^{\circ}_{0,s}(du)
\!\!\!&=\!\!\!& \int_{\mathbb{U}}\eta_t(du') \int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})Q^{\circ}_{0,s}(u',du)\cr
\!\!\!&=\!\!\!&\int_{\mathbb{U}}[e^{z_1u'_1+\psi^G_s(z)u'_2} -e^{z_1u'_1+\psi^G_s(z_1,0)u'_2}]\eta_t(du')\cr
\!\!\!&=\!\!\!& \psi^G_t(z_1,\psi^G_s(z))-\psi^G_t(z_1,\psi^G_s(z_1,0)).
\end{eqnarray*}
From the semigroup property of $(\psi^G_t)_{t\geq 0}$, we also have $\psi^G_t(z_1,\psi^G_s(z_1,0))= \psi^G_{s+t}(z_1,0)$ and $\psi^G_t(z_1,\psi^G_s(z)) =\psi^G_{s+t}(z)$.
Along with (\ref{eqn4.15}) this yields the desired result.
$\Box$
In view of the preceding lemma, we can define a $\sigma$-finite measure $\mathbf{Q}(d\omega)$ on $\mathbf{D}([0,\infty),\mathbb{U})$ as follows: for any $0<t_1<t_2<\cdots<t_n$ and $u^{(1)},\cdots, u^{(n)} \in \mathbb{R}\times(0,\infty)$,
\begin{align}\label{eqn4.31}
& \mathbf{Q}(\omega(t_1)\in du^{(1)}, \omega(t_2)\in du^{(2)},\cdots, \omega(t_n)\in du^{(n)}) \nonumber \\
:= & \eta_{t_1}(du^{(1)})Q_{0,t_2-t_1}^\circ(u^{(1)},du^{(2)})\cdots Q_{0,t_n-t_{n-1}}^\circ(u^{(n-1)},du^{(n)}).
\end{align}
The following lemma provides the analogue of equation (3.17) in \cite{Li2019}.
\begin{lemma}\label{Thm406}
For any $t>0$, we have $\frac{1}{u'_2}Q_{0,t}((0,u'_2),du)\to \eta_t(du)$ as $u'_2\to 0+$.
\end{lemma}
\noindent{\it Proof.~~}
From (\ref{eqn4.15}) we have for any $z:=(z_1,z_2)\in \mathbb{U}_*$,
\begin{eqnarray*}
\lim_{u'_2\to 0+}\int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\frac{1}{u'_2} Q_{0,t}((0,u'_2),du)
\!\!\!&=\!\!\!&\lim_{u'_2\to 0+} \frac{1}{u'_2} \big(e^{u'_2 \psi^G_t(z)} -e^{u'_2 \psi^G_t(z_1,0)}\big) \cr
\!\!\!& =\!\!\!& \psi^G_t(z)-\psi^G_t(z_1,0)= \int_{\mathbb{U}}(e^{\langle z,u\rangle}-e^{z_1u_1})\eta_{t}(du).
\end{eqnarray*}
The same arguments as in the proof of Lemma~\ref{Thm404} yield
\begin{eqnarray*}
\lim_{u'_2\to 0+}\int_{\mathbb{U}}e^{\langle z,u\rangle}(1-e^{-u_2})\frac{1}{u'_2} Q_{0,t}((0,u'_2),du) = \int_{\mathbb{U}}e^{\langle z,u\rangle}(1-e^{-u_2})\eta_{t}(du),
\end{eqnarray*}
and hence the desired result.
$\Box$
By Lemma~\ref{Thm406}, we have formally that
\begin{eqnarray*}
& & \mathbf{Q}(\omega(t_1)\in du^{(1)}, \omega(t_2)\in du^{(2)},\cdots \omega(t_n)\in du^{(n)}) \nonumber \\
&=& \lim_{u_2 \to 0} \frac{1}{u_2} Q_{0,t_1}^\circ((0,u_2); du^{(1)}) Q_{0,t_2-t_1}^\circ(u^{(1)},du^{(2)})\cdots Q_{0,t_n-t_{n-1}}^\circ(u^{(n-1)},du^{(n)}),
\end{eqnarray*}
which shows that $\mathbf{Q}(d\omega)$ is supported on $\mathbf{D}_0([0,\infty),\mathbb{U})$. We refer to the proof of Theorem 6.1 in \cite{Li2019} for further details.
\mbox{ }
\noindent \textsc{Proof of Theorem \ref{thm-cluster}.} It suffices to prove that $(\hat{P}_a(t),\hat{V}_a(t))$ is a Markov process with transition semigroup $(Q_{a,t})_{t\geq 0}$ on $\mathbb{U}$. For this, it is enough to prove that for any $0\leq t_1\leq r\leq t_2$, any $z=(z_1,z_2)\in \mathbb{U}_*$ and every $\mathscr{E}_{t_1}$-measurable $\mathbb{C}$-valued random variable $X_{t_1}$,
\begin{eqnarray}\label{eqn4.17}
\mathbf{E}[X_{t_1}\cdot e^{z_1P_a(t_2)+z_2V_a(t_2)} ]= \mathbf{E}\Big[ X_{t_1}\cdot \exp\Big\{z_1P_a(r)+\psi^G_{t_2-r}(z)V_a(r)+\int_0^{t_2-r} [a_1z_1+ a_2 \psi^G_s(z)] ds\Big\}\Big].
\end{eqnarray}
From the definition of $(\mathscr{E}_t)_{t\geq 0}$, we just need to prove this statement with
\begin{eqnarray*}
X_{t_1}= \exp\Big\{ \int_0^{t_1} \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} h(\omega)N_a(ds,d\omega)\Big\},
\end{eqnarray*}
where $h\colon \mathbf{D}([0,\infty),\mathbb{U}) \to \mathbb{C}_-$ is measurable.
From this and (\ref{eqn4.18}),
\begin{eqnarray}\label{eqn4.32}
\lefteqn{\mathbf{E}\Big[ \exp\Big\{ \int_0^{t_1} \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} h(\omega)N_a(ds,d\omega)+ z_1P_a(t_2)+z_2V_a(t_2) \Big\} \Big]}\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \mathbf{E}\Big[ \exp\Big\{ a_1t_2z_1+ \int_0^{t_2} \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+\langle z,\omega(t_2-s)\rangle \big]N_a(ds,d\omega) \Big\} \Big].
\end{eqnarray}
By the exponential formula for Poisson random measures, the last term equals
\begin{eqnarray}\label{eqn4.30}
\lefteqn{\exp\Big\{ a_1t_2z_1+ \int_0^{t_2} a_2 ds\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+\langle z,\omega(t_2-s)\rangle\}-1\big]\mathbf{Q}(d\omega) \Big\} }\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \exp\Big\{ a_1rz_1+\int_0^r a_2 ds\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+\langle z,\omega(t_2-s)\rangle\}-1\big]\mathbf{Q}(d\omega)\Big\} \cr
\!\!\!&\!\!\!& \times \exp\Big\{ a_1(t_2-r)z_1+ \int_{r}^{t_2} a_2 ds\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big(e^{\langle z,\omega(t_2-s)\rangle}-1\big)\mathbf{Q}(d\omega)\Big\}.
\end{eqnarray}
From the definition of $\mathbf{Q}(d\omega)$, we have for any $s\leq r$,
\begin{eqnarray*}
\lefteqn{\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}\}\big(e^{\langle z,\omega(t_2-s)\rangle}-1\big)\mathbf{Q}(d\omega)}\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}\}\big[\exp\{ z_1\omega_1(r-s)+ \psi^G_{t_2-r} (z) \omega_2(r-s) \}-1\big]\mathbf{Q}(d\omega)
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+\langle z,\omega(t_2-s)\rangle\}-1\big]\mathbf{Q}(d\omega)}\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}\}-1\big]\mathbf{Q}(d\omega)
+ \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \exp\{h(\omega)\mathbf{1}_{\{s\leq t_1 \}}\}\big(e^{\langle z,\omega(t_2-s)\rangle}-1\big)\mathbf{Q}(d\omega)\cr
\!\!\!&=\!\!\!& \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{ h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+ z_1\omega_1(r-s)+ \psi^G_{t_2-r} (z) \omega_2(r-s) \}-1\big]\mathbf{Q}(d\omega).
\end{eqnarray*}
Substituting this into the first term on the right-hand side of the last equality in (\ref{eqn4.30}), that term equals
\begin{eqnarray*}
\exp\Big\{ a_1rz_1+\int_0^r a_2 ds\int_{\mathbf{D}_0([0,\infty),\mathbb{U})} \big[\exp\{ h(\omega)\mathbf{1}_{\{s\leq t_1 \}}+ z_1\omega_1(r-s)+ \psi^G_{t_2-r} (z) \omega_2(r-s) \}-1\big]\mathbf{Q}(d\omega)\Big\} ,
\end{eqnarray*}
which equals
\begin{eqnarray*}
\mathbf{E}\Big[ \exp\Big\{ \int_0^{t_1} \int_{\mathbf{D}_0([0,\infty),\mathbb{U})} h(\omega)N_a(ds,d\omega)+ z_1P_a(r)+\psi^G_{t_2-r} (z)V_a(r) \Big\} \Big].
\end{eqnarray*}
Moreover, from (\ref{eqn4.31}) and Lemma~\ref{Thm406} we have
\begin{eqnarray*}
\int_{\mathbf{D}([0,\infty),\mathbb{U})}\big(e^{\langle z,\omega(t_2-s)\rangle}-1\big) \mathbf{Q}(d\omega)
\!\!\!&=\!\!\!& \int_{\mathbb{U}} \big(e^{\langle z,u\rangle}-1\big)\eta_{t_2-s}(du) \cr
\!\!\!&=\!\!\!& \lim_{u'_2\to 0+} \int_{\mathbb{U}} \big(e^{\langle z,u\rangle}-1\big)\frac{1}{u'_2}Q_{0,t_2-s}((0,u'_2),du) \cr
\!\!\!&=\!\!\!& \lim_{u'_2\to 0+}\frac{1}{u'_2} ( e^{u'_2\psi^G_{t_2-s}(z)} -1)= \psi^G_{t_2-s}(z).
\end{eqnarray*}
Taking these back into (\ref{eqn4.32}), we have
\begin{eqnarray*}
\lefteqn{\mathbf{E}\Big[ \exp\Big\{ \int_0^{t_1} \int_{\mathbf{D}([0,\infty),\mathbb{U})} h(\omega)N_a(ds,d\omega)+ z_1P_a(t_2)+z_2V_a(t_2) \Big\} \Big]}\!\!\!&\!\!\!&\cr
\!\!\!&=\!\!\!& \mathbf{E}\Big[ \exp\Big\{ \int_0^{t_1} \int_{\mathbf{D}([0,\infty),\mathbb{U})} h(\omega)N_a(ds,d\omega)+ z_1P_a(r)+\psi^G_{t_2-r}(z)V_a(r) + \int_{r}^{t_2} \big(a_1z_1+ a_2 \psi^G_{t_2-s}(z)\big) ds \Big\}\Big].
\end{eqnarray*}
This establishes the desired identity (\ref{eqn4.17}).
$\Box$
\end{appendix}
\bibliographystyle{siam}
\bibliography{ReferenceforVolatility}
\end{document}
\begin{document}
\date{\today}
\keywords{Sobolev space, infinitesimal Hilbertianity, weighted Euclidean space, decomposability bundle, closability of the Sobolev norm}
\subjclass[2010]{53C23, 46E35, 26B05}
\begin{abstract}
We provide a quick proof of the following known result: the Sobolev space
associated with the Euclidean space, endowed with the Euclidean distance and
an arbitrary Radon measure, is Hilbert. Our new approach relies upon the
properties of the Alberti--Marchese decomposability bundle. As a consequence
of our arguments, we also prove that if the Sobolev norm is closable on
compactly-supported smooth functions, then the reference measure is
absolutely continuous with respect to the Lebesgue measure.
\end{abstract}
\title{A short proof of the infinitesimal Hilbertianity of the weighted Euclidean space}
\section*{Introduction}
In recent years, the theory of weakly differentiable functions over an abstract
metric measure space \(({\rm X},{\sf d},\mu)\) has been extensively studied.
Starting from the seminal paper \cite{Cheeger00}, several (essentially equivalent) versions of Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) have been proposed in \cite{Shanmugalingam00,AmbrosioGigliSavare11,DiM14a}.
The definition we shall adopt in this paper is the one via test plans and weak upper
gradients, which has been introduced by L.\ Ambrosio, N.\ Gigli and G.\ Savar\'{e}
in \cite{AmbrosioGigliSavare11}. In general, \(W^{1,2}({\rm X},{\sf d},\mu)\) is a Banach space,
but it might be non-Hilbert: for instance, consider the Euclidean space
endowed with the \(\ell^\infty\)-norm and the Lebesgue measure. Those metric measure
spaces whose associated Sobolev space is Hilbert -- which are said to be
\emph{infinitesimally Hilbertian}, cf.\ \cite{Gigli12} -- play a very
important role. We refer to the introduction of \cite{LP20} for an account
of the main advantages and features of this class of spaces.
The aim of this manuscript is to provide a quick proof of the following result
(cf.\ Theorem \ref{thm:Eucl_inf_Hilb}):
\begin{equation}\tag{\(\star\)}\label{eq:statement_main_result}
(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\;\text{ is infinitesimally Hilbertian for any Radon measure }
\mu\geq 0\text{ on }\mathbb{R}^d,
\end{equation}
where \({\sf d}_{\rm Eucl}(x,y)\coloneqq|x-y|\) stands for the Euclidean distance on \(\mathbb{R}^d\).
This fact has been originally proven in \cite{GP16-2}, but it can also be alternatively
considered as a special case of the main result in \cite{DMGSP18}. The approach we propose
here is more direct and is based upon the differentiability theorem \cite{AM16} for Lipschitz
functions in \(\mathbb{R}^d\) with respect to a given Radon measure, as we are going to describe.
Let \(\mu\geq 0\) be any Radon measure on \(\mathbb{R}^d\). G.\ Alberti and
A.\ Marchese proved in \cite{AM16} that it is possible to select the maximal
measurable sub-bundle \(V(\mu,\cdot)\) of \(T\mathbb{R}^d\) -- called the \emph{decomposability bundle}
of \(\mu\) -- along which all Lipschitz functions are \(\mu\)-a.e.\ differentiable.
This way, any given Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\) is naturally associated with
a gradient \(\nabla_{\!\scriptscriptstyle\rm AM} f\), which is an \(L^\infty\)-section of \(V(\mu,\cdot)\).
Since \(\nabla_{\!\scriptscriptstyle\rm AM}\) is a linear operator, its induced Dirichlet energy functional
\(\sfE_{\scriptscriptstyle{\rm AM}}\) on \(L^2(\mu)\) is a quadratic form. Hence, the proof of
\eqref{eq:statement_main_result} presented here follows along these lines:
\begin{itemize}
\item[\(\rm a)\)] The maximality of \(V(\mu,\cdot)\) ensures that the curves selected
by a test plan \({\mbox{\boldmath\(\pi\)}}\) on \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) are `tangent' to \(V(\mu,\cdot)\),
namely, \(\dot\gamma_t\in V(\mu,\gamma_t)\) for
\(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\)-a.e.\ \((\gamma,t)\). See Lemma \ref{lem:pi_tangent_to_V}.
\item[\(\rm b)\)] Given any Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\), we can deduce from
item a) that the modulus of the gradient \(\nabla_{\!\scriptscriptstyle\rm AM} f\) is a weak upper gradient of \(f\);
cf.\ Proposition \ref{prop:nablaAM_wug}.
\item[\(\rm c)\)] Since Lipschitz functions with compact support are dense in energy
in \(W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) -- cf.\ Theorem \ref{thm:density_in_energy} below --
we conclude from b) that the Cheeger energy \({\sf E}_{\rm Ch}\) is the lower semicontinuous
envelope of \(\sfE_{\scriptscriptstyle{\rm AM}}\). This grants that \({\sf E}_{\rm Ch}\) is a quadratic form, thus accordingly
the space \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is infinitesimally Hilbertian.
See Theorem \ref{thm:Eucl_inf_Hilb} for the details.
\end{itemize}
Finally, by combining our techniques with a structural result
for Radon measures in the Euclidean space by De Philippis--Rindler
\cite{DPR}, we eventually prove (in Theorem \ref{thm:no_closable})
the following claim:
\[
\text{The Sobolev norm }\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}
\text{ is closable on }C^\infty_c\text{-functions}\quad\Longrightarrow\quad\mu\ll\mathcal L^d.
\]
Cf.\ Definition \ref{def:closability} for the notion of closability we
are referring to. This result solves a conjecture that has been posed by
M.\ Fukushima (according to V.I.\ Bogachev \cite[Section 2.6]{Bogachev10}).
{\bf Acknowledgements.} The second and third named authors acknowledge the support by
the Academy of Finland, projects 274372, 307333, 312488, and 314789.
\section{Preliminaries}
\subsection{Sobolev calculus on metric measure spaces}
By \emph{metric measure space} \(({\rm X},{\sf d},\mu)\) we mean a complete, separable
metric space \(({\rm X},{\sf d})\) together with a non-negative Radon measure \(\mu\neq 0\).
We denote by \({\rm LIP}({\rm X})\) the space of all real-valued Lipschitz functions on \({\rm X}\),
whereas \({\rm LIP}_c({\rm X})\) stands for the family of all elements of \({\rm LIP}({\rm X})\) having
compact support. Given any \(f\in{\rm LIP}({\rm X})\), we shall denote by
\({\rm lip}(f)\colon{\rm X}\to[0,+\infty)\) its \emph{local Lipschitz constant},
which is defined as
\[{\rm lip}(f)(x)\coloneqq\left\{\begin{array}{ll}
\varlimsup_{y\to x}\big|f(x)-f(y)\big|/{\sf d}(x,y)\\
0
\end{array}\quad\begin{array}{ll}
\text{ if }x\in{\rm X}\text{ is an accumulation point,}\\
\text{ otherwise.}
\end{array}\right.\]
The metric space \(({\rm X},{\sf d})\) is said to be \emph{proper} provided its bounded, closed
subsets are compact.
To introduce the notion of Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) that has been proposed
in \cite{AmbrosioGigliSavare11}, we first need to recall some terminology.
The space \(C\big([0,1],{\rm X}\big)\) of all continuous curves in \({\rm X}\) is a complete,
separable metric space if endowed with the sup-distance
\({\sf d}_\infty(\gamma,\sigma)\coloneqq\max\big\{{\sf d}(\gamma_t,\sigma_t)\;\big|\;t\in[0,1]\big\}\).
We say that \(\gamma\in C\big([0,1],{\rm X}\big)\) is \emph{absolutely continuous}
provided there exists a function \(g\in L^1(0,1)\) such that
\({\sf d}(\gamma_s,\gamma_t)\leq\int_s^t g(r)\,{\mathrm d} r\) holds for all \(s,t\in[0,1]\)
with \(s<t\). The \emph{metric speed} \(|\dot\gamma|\) of \(\gamma\), defined as
\(|\dot\gamma_t|\coloneqq\lim_{h\to 0}{\sf d}(\gamma_{t+h},\gamma_t)/|h|\)
for \(\mathcal L^1\)-a.e.\ \(t\in[0,1]\), is the minimal integrable function
(in the \(\mathcal L^1\)-a.e.\ sense)
that can be chosen as \(g\) in the previous inequality;
cf.\ \cite[Theorem 1.1.2]{AmbrosioGigliSavare08}.
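As an elementary illustration (the curve here is chosen only for this example), consider \(\gamma_t\coloneqq(\cos(2\pi t),\sin(2\pi t))\) in the Euclidean plane: since \({\sf d}_{\rm Eucl}(\gamma_{t+h},\gamma_t)=2|\sin(\pi h)|\), one gets
\[
|\dot\gamma_t|=\lim_{h\to 0}\frac{2|\sin(\pi h)|}{|h|}=2\pi\quad\text{ for every }t\in[0,1],
\]
and the constant function \(g\equiv 2\pi\) is the minimal admissible choice in the inequality above.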
A \emph{test plan} over \(({\rm X},{\sf d},\mu)\) is a Borel probability measure \({\mbox{\boldmath\(\pi\)}}\)
on \(C\big([0,1],{\rm X}\big)\), concentrated on absolutely continuous curves,
such that the following properties are satisfied:
\begin{itemize}
\item \textsc{Bounded compression.} There exists \({\rm Comp}({\mbox{\boldmath\(\pi\)}})>0\)
such that \(({\rm e}_t)_*{\mbox{\boldmath\(\pi\)}}\leq{\rm Comp}({\mbox{\boldmath\(\pi\)}})\,\mu\) holds for all \(t\in[0,1]\),
where \({\rm e}_t\colon C\big([0,1],{\rm X}\big)\to{\rm X}\) stands for the evaluation map
\(\gamma\mapsto{\rm e}_t(\gamma)\coloneqq\gamma_t\).
\item \textsc{Finite kinetic energy.} It holds that
\(\int\!\!\int_0^1|\dot\gamma_t|^2\,{\mathrm d} t\,{\mathrm d}{\mbox{\boldmath\(\pi\)}}(\gamma)<+\infty\).
\end{itemize}
Let \(f\colon{\rm X}\to\mathbb{R}\) be a given Borel function. We say that \(G\in L^2(\mu)\)
is a \emph{weak upper gradient} of \(f\) provided for any test plan \({\mbox{\boldmath\(\pi\)}}\)
on \(({\rm X},{\sf d},\mu)\) it holds that \(f\circ\gamma\in W^{1,1}(0,1)\) for
\({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) and that
\[\big|(f\circ\gamma)'_t\big|\leq G(\gamma_t)\,|\dot\gamma_t|
\quad\text{ for }({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\text{-a.e.\ }(\gamma,t).\]
The minimal such function \(G\) (in the \(\mu\)-a.e.\ sense) is called the
\emph{minimal weak upper gradient} of \(f\) and is denoted by \(|Df|\in L^2(\mu)\).
\begin{definition}[Sobolev space \cite{AmbrosioGigliSavare11}]
The \emph{Sobolev space} \(W^{1,2}({\rm X},{\sf d},\mu)\) is defined as the family of all
those functions \(f\in L^2(\mu)\) that admit a weak upper gradient \(G\in L^2(\mu)\).
We endow the vector space \(W^{1,2}({\rm X},{\sf d},\mu)\) with the Sobolev norm
\(\|f\|_{W^{1,2}({\rm X},{\sf d},\mu)}^2\coloneqq\|f\|_{L^2(\mu)}^2+\big\||Df|\big\|_{L^2(\mu)}^2\).
\end{definition}
The Sobolev space \(\big(W^{1,2}({\rm X},{\sf d},\mu),\|\cdot\|_{W^{1,2}({\rm X},{\sf d},\mu)}\big)\)
is a Banach space, but in general it is not a Hilbert space. This fact motivates the
following definition, which has been proposed by N.\ Gigli:
\begin{definition}[Infinitesimal Hilbertianity \cite{Gigli12}]
We say that a metric measure space \(({\rm X},{\sf d},\mu)\) is \emph{infinitesimally Hilbertian}
provided its associated Sobolev space \(W^{1,2}({\rm X},{\sf d},\mu)\) is a Hilbert space.
\end{definition}
Let us define the \emph{Cheeger energy} functional
\({\sf E}_{\rm Ch}\colon L^2(\mu)\to[0,+\infty]\) as
\begin{equation}\label{eq:def_E_Ch}
{\sf E}_{\rm Ch}(f):=\left\{\begin{array}{ll}
\frac{1}{2}\int|Df|^2\,{\mathrm d}\mu\\
+\infty
\end{array}\quad\begin{array}{ll}
\text{ if }f\in W^{1,2}({\rm X},{\sf d},\mu),\\
\text{ otherwise.}
\end{array}\right.
\end{equation}
It holds that the metric measure space \(({\rm X},{\sf d},\mu)\) is infinitesimally
Hilbertian if and only if \({\sf E}_{\rm Ch}\) satisfies the \emph{parallelogram rule}
when restricted to \(W^{1,2}({\rm X},{\sf d},\mu)\), \emph{i.e.},
\begin{equation}\label{eq:parallelogram_id}
{\sf E}_{\rm Ch}(f+g)+{\sf E}_{\rm Ch}(f-g)=2\,{\sf E}_{\rm Ch}(f)+2\,{\sf E}_{\rm Ch}(g)
\quad\text{ for every }f,g\in W^{1,2}({\rm X},{\sf d},\mu).
\end{equation}
Furthermore, we define the functional \({\sf E}_{\rm lip}\colon L^2(\mu)\to[0,+\infty]\) as
\begin{equation}\label{eq:def_E_lip}
{\sf E}_{\rm lip}(f)\coloneqq\left\{\begin{array}{ll}
\frac{1}{2}\int{\rm lip}^2(f)\,{\mathrm d}\mu\\
+\infty
\end{array}\quad\begin{array}{ll}
\text{ if }f\in{\rm LIP}_c({\rm X}),\\
\text{ otherwise.}
\end{array}\right.
\end{equation}
Given any \(f\in{\rm LIP}_c({\rm X})\), it holds that \(f\in W^{1,2}({\rm X},{\sf d},\mu)\)
and \(|Df|\leq{\rm lip}(f)\) in the \(\mu\)-a.e.\ sense. This ensures that the
inequality \({\sf E}_{\rm Ch}\leq{\sf E}_{\rm lip}\) is satisfied. Actually, \({\sf E}_{\rm Ch}\)
is the \(L^2(\mu)\)-relaxation of \({\sf E}_{\rm lip}\):
\begin{theorem}[Density in energy \cite{AmbrosioGigliSavare11-3}]
\label{thm:density_in_energy}
Let \(({\rm X},{\sf d},\mu)\) be a metric measure space, with \(({\rm X},{\sf d})\) proper.
Then \({\sf E}_{\rm Ch}\) is the \emph{\(L^2(\mu)\)-lower semicontinuous envelope}
of \({\sf E}_{\rm lip}\), \emph{i.e.}, it holds that
\[{\sf E}_{\rm Ch}(f)=\inf\varliminf_{n\to\infty}{\sf E}_{\rm lip}(f_n)\quad\text{ for every }f\in L^2(\mu),\]
where the infimum is taken among all sequences
\((f_n)_n\subseteq L^2(\mu)\) such that \(f_n\to f\) in \(L^2(\mu)\).
\end{theorem}
\subsection{Decomposability bundle}\label{ss:decomposability_bundle}
Let us denote by \({\rm Gr}(\mathbb{R}^d)\) the set of all linear subspaces of \(\mathbb{R}^d\).
Given any \(V,W\in{\rm Gr}(\mathbb{R}^d)\), we define the distance \({\sf d}_{\rm Gr}(V,W)\)
as the Hausdorff distance in \(\mathbb{R}^d\) between the closed unit ball of \(V\)
and that of \(W\). Hence, \(\big({\rm Gr}(\mathbb{R}^d),{\sf d}_{\rm Gr}\big)\) is a compact metric space.
\begin{theorem}[Decomposability bundle \cite{AM16}]\label{thm:Alberti-Marchese}
Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\). Then there exists
a \(\mu\)-a.e.\ unique Borel mapping \(V(\mu,\cdot)\colon\mathbb{R}^d\to{\rm Gr}(\mathbb{R}^d)\),
called the \emph{decomposability bundle} of \(\mu\), such that the following
properties hold:
\begin{itemize}
\item[\(\rm i)\)] Any function \(f\in{\rm LIP}(\mathbb{R}^d)\) is differentiable at
\(\mu\)-a.e.\ \(x\in\mathbb{R}^d\) with respect to \(V(\mu,x)\), \emph{i.e.}, there exists a Borel
map \(\nabla_{\!\scriptscriptstyle\rm AM} f\colon\mathbb{R}^d\to\mathbb{R}^d\) such that \(\nabla_{\!\scriptscriptstyle\rm AM} f(x)\in V(\mu,x)\)
for all \(x\in\mathbb{R}^d\) and
\begin{equation}\label{eq:formula_nablaAM}
\lim_{V(\mu,x)\ni v\to 0}\frac{f(x+v)-f(x)-\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v}{|v|}=0
\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d.
\end{equation}
\item[\(\rm ii)\)] There exists a function \(f_0\in{\rm LIP}(\mathbb{R}^d)\) such that
for \(\mu\)-a.e.\ point \(x\in\mathbb{R}^d\) it holds that \(f_0\) is not differentiable
at \(x\) with respect to any direction \(v\in\mathbb{R}^d\setminus V(\mu,x)\).
\end{itemize}
\end{theorem}
We refer to \(\nabla_{\!\scriptscriptstyle\rm AM} f\) as the \emph{Alberti--Marchese gradient} of \(f\).
It readily follows from \eqref{eq:formula_nablaAM} that \(\nabla_{\!\scriptscriptstyle\rm AM} f\) is
uniquely determined (up to \(\mu\)-a.e.\ equality) and that for every
\(f,g\in{\rm LIP}(\mathbb{R}^d)\) it holds that
\begin{equation}\label{eq:nablaAM_linear}
\nabla_{\!\scriptscriptstyle\rm AM}(f\pm g)(x)=\nabla_{\!\scriptscriptstyle\rm AM} f(x)\pm\nabla_{\!\scriptscriptstyle\rm AM} g(x)
\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d.
\end{equation}
\begin{remark}{\rm
Theorem \ref{thm:Alberti-Marchese} was actually proven under the
additional assumption of \(\mu\) being a finite measure. However,
the statement depends only on the null sets of \(\mu\), not on the
measure \(\mu\) itself. Therefore, in order to obtain Theorem
\ref{thm:Alberti-Marchese} as a consequence of the original result in \cite{AM16}, it is sufficient to
replace \(\mu\) with the following Borel probability measure on \(\mathbb{R}^d\):
\[
\tilde\mu\coloneqq\sum_{j=1}^\infty\frac{\mu|_{B_j(\bar x)}}{2^j\mu\big(B_j(\bar x)\big)},\quad\text{ for some }\bar x\in{\rm spt}(\mu).
\]
Observe, indeed, that the measure \(\tilde\mu\) satisfies
\(\mu\ll\tilde\mu\ll\mu\).
$\blacksquare$}
\end{remark}
\begin{remark}{\rm
Given any function \(f\in{\rm LIP}(\mathbb{R}^d)\), it holds that
\begin{equation}\label{eq:nablaAM_leq_lip}
\big|\nabla_{\!\scriptscriptstyle\rm AM} f(x)\big|\leq{\rm lip}(f)(x)\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d.
\end{equation}
Indeed, fix any point \(x\in\mathbb{R}^d\) such that \(f\) is differentiable at \(x\) with
respect to \(V(\mu,x)\). Then for all \(v\in V(\mu,x)\setminus\{0\}\) it holds that
\(\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v=|v|\,\lim_{h\searrow 0}\big(f(x+hv)-f(x)\big)/|hv|
\leq|v|\,{\rm lip}(f)(x)\) by \eqref{eq:formula_nablaAM}, thus accordingly
\(\big|\nabla_{\!\scriptscriptstyle\rm AM} f(x)\big|=\sup\big\{\nabla_{\!\scriptscriptstyle\rm AM} f(x)\cdot v\;\big|\;
v\in V(\mu,x),\,|v|\leq 1\big\}\leq{\rm lip}(f)(x)\).
$\blacksquare$}
\end{remark}
\section{Universal infinitesimal Hilbertianity of the Euclidean space}
The objective of this section is to show that the Euclidean space is
\emph{universally infinitesimally Hilbertian}, meaning that it is
infinitesimally Hilbertian when equipped with any Radon measure;
cf.\ Theorem \ref{thm:Eucl_inf_Hilb} below. The strategy of the proof we are going to
present here is based upon the structure of the decomposability bundle
described in Subsection \ref{ss:decomposability_bundle}.
First of all, we prove that any given test plan over the weighted Euclidean space
is `tangent', in a suitable sense, to the Alberti--Marchese decomposability bundle:
\begin{lemma}\label{lem:pi_tangent_to_V}
Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\).
Let \({\mbox{\boldmath\(\pi\)}}\) be a test plan on \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\).
Then for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) it holds that
\[\dot\gamma_t\in V(\mu,\gamma_t)\quad
\text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1].\]
\end{lemma}
\begin{proof}
Let \(f_0\) be an \(L\)-Lipschitz function as in ii) of Theorem \ref{thm:Alberti-Marchese}.
Set \(B\subseteq C\big([0,1],\mathbb{R}^d\big)\times[0,1]\) as
\[B\coloneqq\Big\{(\gamma,t)\;\Big|\;\gamma\text{ and }f_0\circ\gamma
\text{ are differentiable at }t,\text{ and }\dot\gamma_t\notin V(\mu,\gamma_t)\Big\}.\]
It can be easily shown that \(B\) is Borel measurable. We can assume that \(\gamma\) is
absolutely continuous (since by definition a test plan is concentrated on absolutely
continuous curves); in particular, also \(f_0 \circ \gamma\) is absolutely continuous,
and thus both \(\gamma\) and \(f_0 \circ \gamma\) are differentiable
\(\mathcal{L}^1\)-almost everywhere. In particular, we are done if
we can prove that \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)(B)=0\).
Call
\(B_t\coloneqq\big\{\gamma\;\big|\;(\gamma,t)\in B\big\}\) for every \(t\in[0,1]\).
Moreover, \(G\) stands for the set of all \(x\in\mathbb{R}^d\) such that \(f_0\) is not
differentiable at \(x\) with respect to any direction \(v\in\mathbb{R}^d\setminus V(\mu,x)\).
Thus, \(\mu(\mathbb{R}^d\setminus G)=0\) by Theorem \ref{thm:Alberti-Marchese}.
We claim that the inclusion \({\rm e}_t(B_t)\subseteq\mathbb{R}^d\setminus G\)
holds for every \(t\in[0,1]\). Indeed, for every \(\gamma\in B_t\) one has that
\[\begin{split}
\bigg|\frac{f_0(\gamma_t+h\dot\gamma_t)-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg|
&\leq\bigg|\frac{f_0(\gamma_t+h\dot\gamma_t)-f_0(\gamma_{t+h})}{h}\bigg|
+\bigg|\frac{f_0(\gamma_{t+h})-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg|\\
&\leq\,L\,\bigg|\frac{\gamma_{t+h}-\gamma_t}{h}-\dot\gamma_t\bigg|
+\bigg|\frac{f_0(\gamma_{t+h})-f_0(\gamma_t)}{h}-(f_0\circ\gamma)'_t\bigg|,
\end{split}\]
so by letting \(h\to 0\) we conclude that \(f_0\) is differentiable at \(\gamma_t\)
in the direction \(\dot\gamma_t\), \emph{i.e.}, \(\gamma_t\notin G\).
Therefore, we conclude that \({\mbox{\boldmath\(\pi\)}}(B_t)\leq{\mbox{\boldmath\(\pi\)}}\big({\rm e}_t^{-1}(\mathbb{R}^d\setminus G)\big)\leq
{\rm Comp}({\mbox{\boldmath\(\pi\)}})\,\mu(\mathbb{R}^d\setminus G)=0\) for all \(t\in[0,1]\). This grants that
\(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)(B)=0\) by Fubini's theorem, whence the statement follows.
\end{proof}
As a consequence of Lemma \ref{lem:pi_tangent_to_V}, we can readily prove
that the modulus of the Alberti--Marchese gradient of a given Lipschitz
function is a weak upper gradient of the function itself:
\begin{proposition}\label{prop:nablaAM_wug}
Let \(\mu\geq 0\) be a Radon measure on \(\mathbb{R}^d\).
Let \(f\in{\rm LIP}_c(\mathbb{R}^d)\) be given. Then the function
\(|\nabla_{\!\scriptscriptstyle\rm AM} f|\in L^2(\mu)\) is a weak upper gradient of \(f\).
\end{proposition}
\begin{proof}
Let \({\mbox{\boldmath\(\pi\)}}\) be any test plan over \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\). We claim that
for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ \(\gamma\) it holds
\begin{equation}\label{eq:nablaAM_wug_claim}
(f\circ\gamma)'_t=\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\cdot\dot\gamma_t\quad
\text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1].
\end{equation}
Indeed, for \(({\mbox{\boldmath\(\pi\)}}\otimes\mathcal L^1)\)-a.e.\ \((\gamma,t)\) we
have that \(f\) is differentiable at \(\gamma_t\) with respect to \(V(\mu,\gamma_t)\)
and that \(\dot\gamma_t\in V(\mu,\gamma_t)\); this stems from item i) of Theorem
\ref{thm:Alberti-Marchese} and Lemma \ref{lem:pi_tangent_to_V}.
Hence, \eqref{eq:formula_nablaAM} yields
\[\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\cdot\dot\gamma_t=
\lim_{h\searrow 0}\frac{f(\gamma_t+h\dot\gamma_t)-f(\gamma_t)}{h}=
\lim_{h\searrow 0}\frac{f(\gamma_{t+h})-f(\gamma_t)}{h}
=(f\circ\gamma)'_t,\]
which proves the claim \eqref{eq:nablaAM_wug_claim}.
In particular, for \({\mbox{\boldmath\(\pi\)}}\)-a.e.\ curve \(\gamma\) it holds
\[\big|(f\circ\gamma)'_t\big|\leq\big|\nabla_{\!\scriptscriptstyle\rm AM} f(\gamma_t)\big|\,|\dot\gamma_t|
\quad\text{ for }\mathcal L^1\text{-a.e.\ }t\in[0,1].\]
Given that \(|\nabla_{\!\scriptscriptstyle\rm AM} f|\in L^2(\mu)\) by \eqref{eq:nablaAM_leq_lip},
we conclude that \(|Df|\leq|\nabla_{\!\scriptscriptstyle\rm AM} f|\) holds in the \(\mu\)-a.e.\ sense.
\end{proof}
We are now in a position to prove the universal infinitesimal Hilbertianity of
the Euclidean space, as an immediate consequence of Proposition \ref{prop:nablaAM_wug}
and of the linearity of \(\nabla_{\!\scriptscriptstyle\rm AM}\):
\begin{theorem}[Infinitesimal Hilbertianity of weighted \(\mathbb{R}^d\)]\label{thm:Eucl_inf_Hilb}
Let \(\mu\geq 0\) be a Radon measure on \(\mathbb{R}^d\).
Then the metric measure space \((\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is infinitesimally Hilbertian.
\end{theorem}
\begin{proof}
First of all, let us define the \emph{Alberti--Marchese energy}
functional \(\sfE_{\scriptscriptstyle{\rm AM}}\colon L^2(\mu)\to[0,+\infty]\) as
\[\sfE_{\scriptscriptstyle{\rm AM}}(f)\coloneqq\left\{\begin{array}{ll}
\frac{1}{2}\int|\nabla_{\!\scriptscriptstyle\rm AM} f|^2\,{\mathrm d}\mu\\
+\infty
\end{array}\quad\begin{array}{ll}
\text{ if }f\in{\rm LIP}_c(\mathbb{R}^d),\\
\text{ otherwise.}
\end{array}\right.\]
Since \(|Df|\leq|\nabla_{\!\scriptscriptstyle\rm AM} f|\leq{\rm lip}(f)\) holds \(\mu\)-a.e.\ for
any \(f\in{\rm LIP}_c(\mathbb{R}^d)\) by Proposition \ref{prop:nablaAM_wug} and
\eqref{eq:nablaAM_leq_lip}, we have that \({\sf E}_{\rm Ch}\leq\sfE_{\scriptscriptstyle{\rm AM}}\leq{\sf E}_{\rm lip}\),
where \({\sf E}_{\rm Ch}\) and \({\sf E}_{\rm lip}\) are defined as in \eqref{eq:def_E_Ch}
and \eqref{eq:def_E_lip}, respectively.
In view of Theorem \ref{thm:density_in_energy}, we deduce that \({\sf E}_{\rm Ch}\)
is the \(L^2(\mu)\)-lower semicontinuous envelope of \(\sfE_{\scriptscriptstyle{\rm AM}}\). Thanks to
the identities in \eqref{eq:nablaAM_linear}, we also know that \(\sfE_{\scriptscriptstyle{\rm AM}}\) satisfies
the parallelogram rule when restricted to \({\rm LIP}_c(\mathbb{R}^d)\), which means that
\begin{equation}\label{eq:parallelogram_id_AM}
\sfE_{\scriptscriptstyle{\rm AM}}(f+g)+\sfE_{\scriptscriptstyle{\rm AM}}(f-g)=2\,\sfE_{\scriptscriptstyle{\rm AM}}(f)+2\,\sfE_{\scriptscriptstyle{\rm AM}}(g)\quad\text{ for every }f,g\in{\rm LIP}_c(\mathbb{R}^d).
\end{equation}
Fix \(f,g\in W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\). Let us choose any
two sequences $(f_n)_n,(g_n)_n\subseteq{\rm LIP}_c(\mathbb{R}^d)$ such that
\begin{itemize}
\item \(f_n\to f\) and \(g_n\to g\) in \(L^2(\mu)\),
\item \(\sfE_{\scriptscriptstyle{\rm AM}}(f_n)\to{\sf E}_{\rm Ch}(f)\) and \(\sfE_{\scriptscriptstyle{\rm AM}}(g_n)\to{\sf E}_{\rm Ch}(g)\).
\end{itemize}
In particular, observe that \(f_n+g_n\to f+g\) and \(f_n-g_n\to f-g\) in \(L^2(\mu)\).
Therefore, it holds that
\[\begin{split}
{\sf E}_{\rm Ch}(f+g)+{\sf E}_{\rm Ch}(f-g)&\leq\varliminf_{n\to\infty}\big(\sfE_{\scriptscriptstyle{\rm AM}}(f_n+g_n)+\sfE_{\scriptscriptstyle{\rm AM}}(f_n-g_n)\big)
\overset{\eqref{eq:parallelogram_id_AM}}=2\lim_{n\to\infty}\big(\sfE_{\scriptscriptstyle{\rm AM}}(f_n)+\sfE_{\scriptscriptstyle{\rm AM}}(g_n)\big)\\
&=2\,{\sf E}_{\rm Ch}(f)+2\,{\sf E}_{\rm Ch}(g).
\end{split}\]
By replacing $f$ and $g$ with $f+g$ and $f-g$, respectively,
we conclude that the converse inequality is verified as well. Consequently,
the Cheeger energy \({\sf E}_{\rm Ch}\) satisfies the parallelogram rule
\eqref{eq:parallelogram_id}, thus \(W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)\) is a Hilbert space.
This completes the proof of the statement.
\end{proof}
\begin{remark}{\rm
As a byproduct of the proof of Theorem \ref{thm:Eucl_inf_Hilb}, we
see that for all $f\in W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)$
there exists a sequence $(f_n)_n\subseteq{\rm LIP}_c(\mathbb{R}^d)$ such that
$f_n\to f$ and $|\nabla_{\!\scriptscriptstyle\rm AM} f_n|\to|Df|$ in $L^2(\mu)$.
$\blacksquare$}
\end{remark}
\begin{example}\label{ex:example_Cantor}{\rm
Given an arbitrary Radon measure \(\mu\) on \(\mathbb{R}^d\), it might happen that
\[|Df|\neq|\nabla_{\!\scriptscriptstyle\rm AM} f|\quad\text{ for some }f\in{\rm LIP}_c(\mathbb{R}^d).\]
For instance, consider the measure \(\mu\coloneqq\mathcal L^1|_C\) on \(\mathbb{R}\),
where \(C\subseteq\mathbb{R}\) is any Cantor set of positive Lebesgue measure. Since the
support of \(\mu\) is totally disconnected, one has that every \(f\in L^2(\mu)\)
is a Sobolev function with \(|Df|=0\). However, it holds
\(V(\mu,x)=\mathbb{R}\) for \(\mathcal L^1\)-a.e.\ \(x\in C\) by Rademacher's theorem,
whence for any \(f\in{\rm LIP}(\mathbb{R})\) we have that \(\nabla_{\!\scriptscriptstyle\rm AM} f(x)=f'(x)\) for
\(\mathcal L^1\)-a.e.\ \(x\in C\).
$\blacksquare$}
\end{example}
\section{Closability of the Sobolev norm on smooth functions}
The aim of this conclusive section is to address a problem that has been
raised by M.\ Fukushima (as reported in \cite[Section 2.6]{Bogachev10}).
Namely, we provide a (negative) answer to the following question:
{\it Does there
exist a singular Radon measure \(\mu\) on \(\mathbb{R}^2\) for which the Sobolev
norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^2,{\sf d}_{\rm Eucl},\mu)}\) is closable on
compactly-supported smooth functions (in the sense of Definition
\ref{def:closability} below)?}
Actually, we are going to prove a stronger result:
{\it Given any Radon measure \(\mu\) on \(\mathbb{R}^d\)
that is not absolutely continuous with
respect to \(\mathscr L^d\), it holds that
\(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is not closable on
compactly-supported smooth functions.} Cf.\ Theorem \ref{thm:no_closable}
below.
Let \(f\in C^\infty_c(\mathbb{R}^d)\) be given. Then we denote by \(\nabla f\colon\mathbb{R}^d\to\mathbb{R}^d\)
its classical gradient.
Note that the identity \(|\nabla f|={\rm lip}(f)\) holds.
Given a Radon measure \(\mu\) on \(\mathbb{R}^d\), it is immediate
to check that
\begin{equation}\label{eq:proj_grad}
\nabla_{\!\scriptscriptstyle\rm AM} f(x)=\pi_x\big(\nabla f(x)\big)\quad\text{ for }\mu\text{-a.e.\ }x\in\mathbb{R}^d,
\end{equation}
where \(\pi_x\colon\mathbb{R}^d\to V(\mu,x)\) stands for the orthogonal projection map.
We denote by \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) the space of all (equivalence classes, up
to \(\mu\)-a.e.\ equality, of) Borel maps \(v\colon\mathbb{R}^d\to\mathbb{R}^d\) with \(|v|\in L^2(\mu)\).
It holds that \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) is a Hilbert space if
endowed with the norm \(v\mapsto\big(\int|v|^2\,{\mathrm d}\mu\big)^{1/2}\).
\begin{definition}[Closability of the Sobolev norm on smooth functions]
\label{def:closability}
Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Then the Sobolev norm
\(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\) is \emph{closable on
compactly-supported smooth functions} provided the following property is verified:
if a sequence \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) satisfies \(f_n\to 0\) in \(L^2(\mu)\)
and \(\nabla f_n\to v\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) for some element
\(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), then it holds that \(v=0\).
\end{definition}
In order to provide some alternative characterisations of the above-defined
closability property, we need to recall the following improvement of
Theorem \ref{thm:density_in_energy} in the weighted Euclidean space case:
\begin{theorem}[Density in energy of smooth functions
\cite{GP16-2}]\label{thm:density_in_energy_smooth}
Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\).
Then \({\sf E}_{\rm Ch}\) is the \(L^2(\mu)\)-lower semicontinuous envelope
of the functional
\[
L^2(\mu)\ni f\longmapsto\left\{\begin{array}{ll}
\frac{1}{2}\int|\nabla f|^2\,{\mathrm d}\mu\\
+\infty
\end{array}\quad\begin{array}{ll}
\text{ if }f\in C^\infty_c(\mathbb{R}^d),\\
\text{ otherwise.}
\end{array}\right.
\]
\end{theorem}
\begin{lemma}\label{lem:equiv_closable}
Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\).
Then the following conditions are equivalent:
\begin{itemize}
\item[\(\rm i)\)] The Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\)
is closable on compactly-supported smooth functions.
\item[\(\rm ii)\)] The functional \({\sf E}_{\rm lip}\) -- see \eqref{eq:def_E_lip} --
is \(L^2(\mu)\)-lower semicontinuous when restricted to \(C^\infty_c(\mathbb{R}^d)\).
\item[\(\rm iii)\)] The identity \(|Df|=|\nabla f|\) holds \(\mu\)-a.e.\ on \(\mathbb{R}^d\),
for every function \(f\in C^\infty_c(\mathbb{R}^d)\).
\end{itemize}
\end{lemma}
\begin{proof}\ \\
{\color{blue}\({\rm i)}\Longrightarrow{\rm ii)}\)} Fix any \(f\in C^\infty_c(\mathbb{R}^d)\)
and \((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to f\) in \(L^2(\mu)\).
We claim that
\begin{equation}\label{eq:equiv_closable_claim}
\int|\nabla f|^2\,{\mathrm d}\mu\leq\varliminf_{n\to\infty}\int|\nabla f_n|^2\,{\mathrm d}\mu.
\end{equation}
Without loss of generality, we may assume the right-hand side
in \eqref{eq:equiv_closable_claim} is finite. Therefore, we can find a subsequence
\((f_{n_k})_k\) of \((f_n)_n\) and an element \(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) such that
\(\lim_k\int|\nabla f_{n_k}|^2\,{\mathrm d}\mu=\varliminf_n\int|\nabla f_n|^2\,{\mathrm d}\mu\) and
\(\nabla f_{n_k}\rightharpoonup v\) in the weak topology of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\).
By virtue of Banach--Saks theorem, we can additionally require that
\(\nabla\tilde f_k\to v\) in the strong topology of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\),
where we set \(\tilde f_k\coloneqq\frac{1}{k}\sum_{i=1}^k f_{n_i}\in C^\infty_c(\mathbb{R}^d)\)
for all \(k\in\mathbb{N}\). Since \(\tilde f_k-f\to 0\) in \(L^2(\mu)\)
and \(\nabla(\tilde f_k-f)\to v-\nabla f\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\),
we deduce from i) that \(v=\nabla f\). Consequently, we have
that \(\nabla f_n\rightharpoonup\nabla f\) in the weak topology
of \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\), thus proving \eqref{eq:equiv_closable_claim}
by semicontinuity of the norm. In other words, it holds that
\({\sf E}_{\rm lip}(f)\leq\varliminf_n{\sf E}_{\rm lip}(f_n)\), which yields the
validity of item ii).\\
{\color{blue}\({\rm ii)}\Longrightarrow{\rm iii)}\)} Let
\(f\in C^\infty_c(\mathbb{R}^d)\) be given. Theorem
\ref{thm:density_in_energy_smooth} yields existence of a sequence
\((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to f\)
and \(|\nabla f_n|\to|Df|\) in \(L^2(\mu)\). Therefore,
item ii) ensures that
\[
\frac{1}{2}\int|\nabla f|^2\,{\mathrm d}\mu
={\sf E}_{\rm lip}(f)\leq\varliminf_{n\to\infty}{\sf E}_{\rm lip}(f_n)=\lim_{n\to\infty}
\frac{1}{2}\int|\nabla f_n|^2\,{\mathrm d}\mu=\frac{1}{2}\int|Df|^2\,{\mathrm d}\mu.
\]
Since \(|Df|\leq|\nabla f|\) holds \(\mu\)-a.e.\ on \(\mathbb{R}^d\),
we conclude that \(|Df|=|\nabla f|\), thus proving item iii).\\
{\color{blue}\({\rm iii)}\Longrightarrow{\rm i)}\)} We argue by
contradiction: suppose that there exists a sequence
\((f_n)_n\subseteq C^\infty_c(\mathbb{R}^d)\) such that \(f_n\to 0\)
in \(L^2(\mu)\) and \(\nabla f_n\to v\) in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\)
for some \(v\in L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\setminus\{0\}\).
Fix any \(k\in\mathbb{N}\) such that \(\|\nabla f_k-v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}
\leq\frac{1}{3}\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\). In particular,
\(\|\nabla f_k\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}
\geq\frac{2}{3}\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\). Let us define
\(g_n\coloneqq f_k-f_n\in C^\infty_c(\mathbb{R}^d)\) for every \(n\in\mathbb{N}\).
Since \(g_n\to f_k\) in \(L^2(\mu)\) and \(\nabla g_n\to\nabla f_k-v\)
in \(L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)\) as \(n\to\infty\), we conclude that
\[
\|\nabla f_k\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\geq\frac{2}{3}\,\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}
>\frac{1}{3}\,\|v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}\geq
\|\nabla f_k-v\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)}=
\lim_{n\to\infty}\|\nabla g_n\|_{L^2_\mu(\mathbb{R}^d,\mathbb{R}^d)},
\]
whence \({\sf E}_{\rm lip}(f_k)>\lim_n{\sf E}_{\rm lip}(g_n)\). This contradicts
the lower semicontinuity of \({\sf E}_{\rm lip}\) on \(C^\infty_c(\mathbb{R}^d)\).
Consequently, item i) is proven.
\end{proof}
The last ingredient we need is the following result proven by
G.\ De Philippis and F.\ Rindler:
\begin{theorem}[Weak converse of Rademacher theorem
\cite{DPR}]\label{thm:DPR}
Let \(\mu\) be a Radon measure on \(\mathbb{R}^d\). Suppose all Lipschitz
functions \(f\colon\mathbb{R}^d\to\mathbb{R}\) are \(\mu\)-a.e.\ differentiable.
Then it holds that \(\mu\ll\mathcal L^d\).
\end{theorem}
We are finally in a position to prove the following statement
concerning closability:
\begin{theorem}[Failure of closability for singular measures]
\label{thm:no_closable}
Let \(\mu\geq 0\) be a given Radon measure on \(\mathbb{R}^d\).
Suppose that \(\mu\) is not absolutely continuous with respect
to the Lebesgue measure \(\mathcal L^d\). Then
the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\)
is not closable on compactly-supported smooth functions.
\end{theorem}
\begin{proof}
First of all, Theorem \ref{thm:DPR} grants the existence of a
Lipschitz function \(f\colon\mathbb{R}^d\to\mathbb{R}\) and a Borel set
\(P\subseteq\mathbb{R}^d\) such that \(\mu(P)>0\) and \(f\) is not
differentiable at any point of \(P\). Recalling Theorem
\ref{thm:Alberti-Marchese}, we then see that \(V(\mu,x)\neq\mathbb{R}^d\)
for \(\mu\)-a.e.\ \(x\in P\). Therefore, we can find a compact
set \(K\subseteq P\) and a vector \(v\in\mathbb{R}^d\) such that \(\mu(K)>0\)
and \(v\notin V(\mu,x)\) for \(\mu\)-a.e.\ \(x\in K\). Now pick
any \(g\in C^\infty_c(\mathbb{R}^d)\) such that \(\nabla g(x)=v\)
holds for all \(x\in K\). Then Proposition \ref{prop:nablaAM_wug}
and \eqref{eq:proj_grad} yield
\[
|D g|(x)\leq|\nabla_{\!\scriptscriptstyle\rm AM}\,g|(x)=\big|\pi_x\big(\nabla g(x)\big)\big|
=\big|\pi_x(v)\big|<|v|=|\nabla g|(x)
\quad\text{ for }\mu\text{-a.e.\ }x\in K,
\]
thus accordingly \(\|\cdot\|_{W^{1,2}(\mathbb{R}^d,{\sf d}_{\rm Eucl},\mu)}\)
is not closable on compactly-supported smooth functions by
Lemma \ref{lem:equiv_closable}. Hence, the statement is achieved.
\end{proof}
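To make the mechanism behind Theorem \ref{thm:no_closable} explicit, we record a concrete instance of the failure of closability (the measure and the approximating functions below are chosen purely for illustration).
\begin{example}{\rm
Let \(d=2\) and let \(\mu\) be the one-dimensional Hausdorff measure restricted to the segment \(K\coloneqq[0,1]\times\{0\}\), so that \(\mu\) is singular with respect to \(\mathcal L^2\). Pick \(\varphi\in C^\infty_c(\mathbb{R})\) with \(\varphi=1\) on \([0,1]\) and \(\chi\in C^\infty_c(\mathbb{R})\) with \(\chi(0)=0\) and \(\chi'(0)=1\), and set \(f_n(x,y)\coloneqq\varphi(x)\,\chi(ny)/n\in C^\infty_c(\mathbb{R}^2)\). Then \(f_n=0\) on \(K\), whence \(f_n\to 0\) in \(L^2(\mu)\), while \(\nabla f_n(x,0)=\big(\varphi'(x)\chi(0)/n,\varphi(x)\chi'(0)\big)=(0,1)\) for every \(n\) and every \(x\in[0,1]\). Hence \(\nabla f_n\to(0,1)\neq 0\) in \(L^2_\mu(\mathbb{R}^2,\mathbb{R}^2)\), and the Sobolev norm \(\|\cdot\|_{W^{1,2}(\mathbb{R}^2,{\sf d}_{\rm Eucl},\mu)}\) is not closable on compactly-supported smooth functions.
$\blacksquare$}
\end{example}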
\begin{remark}{\rm
The converse of Theorem \ref{thm:no_closable} might fail.
For instance, the measure \(\mu\) described in Example
\ref{ex:example_Cantor} is absolutely continuous with respect
to \(\mathcal L^1\), but the Sobolev norm
\(\|\cdot\|_{W^{1,2}(\mathbb{R},{\sf d}_{\rm Eucl},\mu)}\) is not closable
on compactly-supported smooth functions as a consequence of
Lemma \ref{lem:equiv_closable}.
$\blacksquare$}
\end{remark}
\begin{thebibliography}{10}
\bibitem{AM16}
{\sc G.~Alberti and A.~Marchese}, {\em On the differentiability of {L}ipschitz
functions with respect to measures in the {E}uclidean space}, Geom. Funct.
Anal., 26 (2016), pp.~1--66.
\bibitem{AmbrosioGigliSavare08}
{\sc L.~Ambrosio, N.~Gigli, and G.~Savar{\'e}}, {\em Gradient flows in metric
spaces and in the space of probability measures}, Lectures in Mathematics ETH
Z\"urich, Birkh\"auser Verlag, Basel, second~ed., 2008.
\bibitem{AmbrosioGigliSavare11-3}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Density of
{L}ipschitz functions and equivalence of weak gradients in metric measure
spaces}, Rev. Mat. Iberoam., 29 (2013), pp.~969--996.
\bibitem{AmbrosioGigliSavare11}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Calculus and heat
flow in metric measure spaces and applications to spaces with {R}icci bounds
from below}, Invent. Math., 195 (2014), pp.~289--391.
\bibitem{Bogachev10}
{\sc V.~I. Bogachev}, {\em Differentiable {M}easures and the {M}alliavin
{C}alculus}, Mathematical surveys and monographs, American Mathematical Soc.,
2010.
\bibitem{Cheeger00}
{\sc J.~Cheeger}, {\em Differentiability of {L}ipschitz functions on metric
measure spaces}, Geom. Funct. Anal., 9 (1999), pp.~428--517.
\bibitem{DPR}
{\sc G.~De~Philippis and F.~Rindler}, {\em On the structure of
$\mathscr{A}$-free measures and applications}, Annals of Mathematics, 184(3)
(2016), pp.~1017--1039.
\bibitem{DMGSP18}
{\sc S.~Di~Marino, N.~Gigli, E.~Pasqualetto, and E.~Soultanis}, {\em
Infinitesimal {H}ilbertianity of locally {${\rm CAT}(\kappa)$}-spaces}.
\newblock Submitted, arXiv:1812.02086, 2018.
\bibitem{Gigli12}
{\sc N.~Gigli}, {\em On the differential structure of metric measure spaces and
applications}, Mem. Amer. Math. Soc., 236 (2015), pp.~vi+91.
\bibitem{GP16-2}
{\sc N.~Gigli and E.~Pasqualetto}, {\em Behaviour of the reference measure on
{$\sf RCD$} spaces under charts}.
\newblock Accepted at Comm. Anal. Geom., arXiv: 1607.05188, 2016.
\bibitem{LP20}
{\sc D.~Lu\v{c}i\'{c} and E.~Pasqualetto}, {\em Infinitesimal {H}ilbertianity
of {W}eighted {R}iemannian {M}anifolds}, Canadian Mathematical Bulletin, 63
(2020), pp.~118--140.
\bibitem{DiM14a}
{\sc S.~D. Marino}, {\em Recent advances on {BV} and {S}obolev {S}paces in
metric measure spaces}, PhD Thesis, (2014).
\bibitem{Shanmugalingam00}
{\sc N.~Shanmugalingam}, {\em Newtonian spaces: an extension of {S}obolev
spaces to metric measure spaces}, Rev. Mat. Iberoamericana, 16 (2000),
pp.~243--279.
\end{thebibliography}
\end{document}
\begin{document}
\title[Schwarz Lemmata]{General Schwarz Lemmata and their applications}
\author{Lei Ni}
\address{Lei Ni. Department of Mathematics, University of California, San Diego, La Jolla, CA 92093, USA}
\email{[email protected]}
\begin{abstract} We prove estimates interpolating the Schwarz Lemmata of Royden-Yau and the ones recently established by the author. These more flexible estimates provide additional information on (algebraic) geometric aspects of compact K\"ahler manifolds with nonnegative holomorphic sectional curvature, nonnegative $\operatorname{Ric}_\ell$ or positive $S_\ell$.
{\it Dedicated to Professor Luen-Fai Tam on the occasion of his 70th birthday.}
\end{abstract}
\maketitle
\section{Introduction}
There are many generalizations of the classical Schwarz Lemma on holomorphic maps between unit balls via the work of Ahlfors, Chen-Cheng-Look, Lu, Mok-Yau, Royden, Yau, etc (see \cite{Kobayashi-H} and \cite{Roy, Yau-sch} and references therein). The one obtained by Royden \cite{Roy} states:
\begin{theorem}\label{thm-sch-roy}
Let $f: M^m\to N^n$ be a holomorphic map. Assume that the holomorphic sectional curvature of $N$ satisfies $H(Y)\le -\kappa |Y|^4, \, \forall Y\in T'N$, and that the Ricci curvature of $M$ satisfies $\operatorname{Ric}^M(X, \overline{X})\ge -K |X|^2, \, \forall X\in T'M$, with $\kappa, K>0$. Let $d=\dim(f(M))$. Then
\begin{equation}\label{eq:sch-roy1}
\|\partial f\|^2(x) \le \frac{2d}{d+1}\frac{K}{\kappa}.
\end{equation}
\end{theorem}
In \cite{Ni-1807} the author proved a new version which only involves the holomorphic sectional curvature of the domain and target manifolds. Recall that for the tangent map $\partial f: T_x'M \to T'_{f(x)}N$ we define its maximum norm square to be
\begin{equation}\label{eq:1}
\|\partial f\|^2_0(x)\doteqdot \sup_{v\ne 0}\frac{|\partial f(v)|^2}{|v|^2}.
\end{equation}
\begin{theorem}\label{thm:sch1} Let $(M, g)$ be a complete K\"ahler manifold such that the holomorphic sectional curvature satisfies $H^M(X)/|X|^4 \ge -K$ for some $K\ge0$. Let $(N^n, h)$ be a K\"ahler manifold such that $H^N(Y)<-\kappa |Y|^4$ for some $\kappa>0$. Let $f:M\to N$ be a holomorphic map. Then
\begin{equation}\label{eq:sch-ni}
\|\partial f\|^2_0(x) \le \frac{K}{\kappa}, \quad \forall x\in M,
\end{equation}
provided that the bisectional curvature of $M$ is bounded from below if $M$ is not compact. In particular, if $K=0$, any holomorphic map $f: M\to N$ must be a constant map.
\end{theorem}
The assumption on the bisectional curvature lower bound can be replaced with the existence of an exhaustion function $\rho(x)$ which satisfies
\begin{equation}\label{eq:2}
\limsup_{\rho\to \infty} \left(\frac{|\partial \rho|+[\sqrt{-1}\partial \bar{\partial} \rho]_{+}}{\rho}\right)=0.
\end{equation}
The proof uses a viscosity consideration from PDE theory. It is also reminiscent of Pogorelov's Lemma \cite{Pogo} (cf. Lemma 4.1.1 of \cite{Gu}) for the Monge-Amp\`ere equation, since the maximum eigenvalue of $\nabla^2 u$ is the $\|\cdot\|_0$ for the normal map $\nabla u$ for any smooth $u$.
A consequence of Theorem \ref{thm:sch1} asserts that {\it the equivalence of the negative amplitude of the holomorphic sectional curvature implies the equivalence of the metrics}. Namely, if $M^m$ admits two K\"ahler metrics $g_1$ and $g_2$ satisfying
$$
-L_1|X|_{g_1}^4\le H_{g_1}(X)\le -U_1|X|_{g_1}^4, \quad -L_2|X|_{g_2}^4 \le H_{g_2}(X)\le -U_2|X|_{g_2}^4
$$
then for any $v\in T_x'M$ we have the estimates:
$$
|v|^2_{g_2}\le \frac{L_1}{U_2}|v|^2_{g_1};\quad |v|^2_{g_1}\le \frac{L_2}{U_1}|v|^2_{g_2}.
$$
This result can be viewed as a stability statement of the classical result asserting that a complete K\"ahler manifold with negative constant holomorphic sectional curvature must be a quotient of the complex hyperbolic space form. Motivated by Rauch's work, which inspired much work towards the $1/4$-pinching theorem, and by the above stability of K\"ahler metrics, it is natural to ask {\it whether or not a K\"ahler manifold $M$ whose holomorphic sectional curvature is close to $-1$ is biholomorphic to a quotient of the complex hyperbolic space.} Besides the Liouville type theorem for holomorphic maps into manifolds with negative holomorphic sectional curvature, we shall show in Section 5 further implications of this estimate towards the structure of the fundamental groups of manifolds with nonnegative holomorphic sectional curvature.
Before we state another recent result of the author we first recall some basic notions from Grassmann algebra \cite{Fede, Whit}. Let $\mathbb{C}^m$ be a complex Hermitian space (later we will identify the holomorphic tangent spaces $T_x'M$ and $T'_{f(x)}N$ with $\mathbb{C}^m$ and $\mathbb{C}^n$). Let $\wedge^\epsilonll \mathbb{C}^m$ be the spaces of $\epsilonll$-multi-vectors $\{v_1\wedge\cdots \wedge v_\epsilonll\}$ with $v_i\sqrt {-1}n \mathbb{C}^m$. For ${\bf{a}}=v_1\wedge\cdots \wedge v_\epsilonll, {\bf{b}}=w_1\wedge \cdots w_\epsilonll$, the inner product can be defined as $\operatorname{l}angle {\bf{a}}, \overlineerline{{\bf{b}}}\rangle =\det(\operatorname{l}angle v_i, {\bar{\alpha}}r{w}_j\rangle)$. This endows $\wedge^\epsilonll \mathbb{C}^n$ an Hermitian structure, hence a norm $|\cdot|$. There are also other norms, such as the {\sqrt {-1}t mass} and the {\sqrt {-1}t comass}, which shall be denoted as $|\cdot|_0$ as in \cite{Whit}, and could be useful for some problems. We refer \cite{Whit} Sections 13, 14 for detailed discussions. Assume that $f: (M^m, g)\to (N^n, h)$ is a holomorphic map between two K\"ahler manifolds. Let $\partialartial f: T'M\to T'N$ be the tangent map. Let $\Lambda^\epsilonll \partialartial f:\wedge^\epsilonll T_x'M \to \wedge^\epsilonll T_{f(x)}'N$ be the associated map defined as
$\Lambda^\epsilonll \partialartial f( v_1\wedge \cdots \wedge v_\epsilonll)=\partialartial f(v_1)\wedge\cdots\wedge \partialartial f(v_\epsilonll)$. Define $\|\cdot\|_0$ as $$\|\Lambda^\epsilonll \partialartial f\|_0(x) \doteqdot\sup_{{\bf a}=v_1\wedge\cdots\wedge v_\epsilonll\ne 0, {\bf a}\sqrt {-1}n \wedge^\epsilonll T_x'M} \frac{|\Lambda^\epsilonll \partialartial f({\bf a})|}{|{\bf a}|}.$$
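For instance (a toy linear-algebra example, not tied to any particular map): for $L:\mathbb{C}^3\to\mathbb{C}^3$ given by $L(v^1, v^2, v^3)=(2v^1, v^2, \frac{1}{2} v^3)$ one has
$$
\|\Lambda^1 L\|_0=2,\qquad \|\Lambda^2 L\|_0=|L(e_1)\wedge L(e_2)|=2,\qquad \|\Lambda^3 L\|_0=2\cdot 1\cdot \frac{1}{2}=1,
$$
where $\{e_1, e_2, e_3\}$ denotes the standard unitary basis.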
The notation $\|\cdot\|_0$ is adapted to be consistent with the comass notion in \cite{Whit}.
By the singular value decomposition, we may choose normal coordinates centered at $x_0$ and $f(x_0)$ such that at $x_0$, $df\left(\frac{\partial\, }{\partial z^{\alpha}}\right)=\lambda_\alpha \delta_{i\alpha} \frac{\partial\, }{\partial w^i}$. If we order $\{\lambda_\alpha\}$ such that $|\lambda_1|\ge |\lambda_2|\ge \cdots \ge |\lambda_m|$, then $\|\Lambda^\ell \partial f\|_0(x_0)=|\lambda_1\cdots\lambda_\ell|$. It is also easy to see that $\|\partial f \|^2 \doteqdot g^{\alpha\bar{\beta}}h_{i\bar{j}}\frac{\partial f^i}{\partial z^\alpha} \overline{ \frac{\partial f^j}{\partial z^\beta}}=\sum_{\alpha=1}^m |\lambda_\alpha|^2$. The following was proved in Corollary 3.4 of \cite{Ni-1807}.
\begin{theorem}\label{thm-schni2} Let $f:M^m\to N^n$ ($m\le n$) be a holomorphic map with $M$ being a complete manifold. Assume that $\operatorname{Ric}^M $ is bounded from below and the scalar curvature $S^M(x)\ge -K$. Assume further that $\operatorname{Ric}^N_m(x)\le -\kappa<0$. Then we have the estimate
$$
\|\Lambda^m \partial f\|^2_0(x)\le \left(\frac{K}{m\kappa}\right)^m.
$$
\end{theorem}
Here recall that in \cite{Ni-1807} $\operatorname{Ric}(x, \Sigma)$ is defined as the Ricci curvature of the curvature tensor restricted to the $k$-dimensional subspace $\Sigma\subset T_x'M$. Precisely, for any $v\in \Sigma$, $\operatorname{Ric}(x, \Sigma)(v,\bar{v})\doteqdot \sum_{i=1}^k R(E_i,\overline{E}_i, v,\bar{v})$ with $\{E_i\}$ being a unitary basis of $\Sigma$. We say that $\operatorname{Ric}_k(x)<0$ if $\operatorname{Ric}(x, \Sigma)<0$ for every $k$-dimensional subspace $\Sigma$. Clearly $\operatorname{Ric}_k(x)<0$ implies that $S_k(x)<0$; it coincides with $H$ when $k=1$ and with the Ricci curvature $\operatorname{Ric}$ when $k=\dim(N)$. Here $S_k(x, \Sigma)$ is defined to be the scalar curvature of the curvature operator restricted to $\Sigma\subset T'_x N$. One can refer to \cite{Ni-1807, Ni, Ni-Zheng2} for the definitions and related results on the geometric significance of $\operatorname{Ric}_\ell$ and $S_\ell$.
Note that Theorem \ref{thm-schni2} has at least two limitations in the study of holomorphic maps. The first is that it applies only to the case where $\dim(N)$, the dimension of the target manifold, is at least as large as the dimension of the domain. The second limitation is that it can only be applied to detect whether or not the map is full-dimensional, namely whether $\dim(f(M))=\dim(M)$ or not.
The first goal of this paper is to prove a family of estimates for holomorphic maps between K\"ahler manifolds containing the above three results as special cases. The result below removes the above mentioned constraints of Theorem \ref{thm-schni2}.
\begin{theorem}\label{thm:main1} Let $f:M^m\to N^n$ be a holomorphic map with $M$ being a complete manifold. When $M$ is noncompact assume either that the bisectional curvature is bounded from below or that (\ref{eq:2}) holds for some exhaustion function $\rho$. Let $\ell\le \dim(M)$ be a positive integer.
(i) Assume that the holomorphic sectional curvature of $N$ satisfies $H^N(Y)\le -\kappa |Y|^4$ and that $M$ satisfies $\operatorname{Ric}_\ell^M\ge -K $, for some $K\ge 0, \kappa >0$. Then
$$
\sigma_\ell(x)\le \frac{2\ell'}{\ell'+1}\frac{K}{\kappa},
$$
where $\sigma_\ell(x)=\sum_{\alpha=1}^\ell |\lambda_\alpha|^2(x)$, and $\ell'=\min\{\ell, \dim(f(M))\}$. In particular, if $K=0$, the map $f$ must be a constant.
(ii) Assume that $S^M_\ell(x)\ge -K$ and that $\operatorname{Ric}^N_\ell(x)\le -\kappa$ for some $K\ge 0, \kappa >0$. Then
$$
\|\Lambda^\ell \partial f\|^2_0(x)\le \left(\frac{K}{\ell\kappa}\right)^\ell.
$$
In particular, if $K=0$, the map $f$ has rank smaller than $\ell$.
\end{theorem}
Note that part (i) above recovers Theorem \ref{thm-sch-roy} for $\ell=\dim(M)$, and recovers Theorem \ref{thm:sch1} for $\ell=1$. Hence it provides a family of estimates interpolating between Theorems \ref{thm-sch-roy} and \ref{thm:sch1}. Similarly, part (ii) recovers Theorem \ref{thm-schni2} when $\ell=\dim(M)$, and recovers Theorem \ref{thm:sch1} for $\ell=1$; moreover, in the case $\ell=\dim(M)$ the assumption on the lower bound of the bisectional curvature can be weakened to a lower bound of the Ricci curvature (this is obvious from the proof). Hence part (ii) provides a family of estimates interpolating between Theorems \ref{thm:sch1} and \ref{thm-schni2}. Part (ii) also implies that any K\"ahler manifold with $\operatorname{Ric}_\ell\le -\kappa<0$ must be $\ell$-hyperbolic, a result proved in \cite{Ni-1807}. Moreover it can also be applied to $M$ with $\dim(M)>\ell$ or even $\dim(M)>\dim(N)$, yielding more detailed degeneracy information on the map, re-enforcing the relationship between the $\ell$-dimensional ``holomorphic'' area of $N$ and $\operatorname{Ric}^N_\ell$.
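Concretely, the two endpoint cases of part (i) read as follows (this is merely a restatement, assuming $f$ is nonconstant so that $\ell'\ge 1$): for $\ell=1$ one has $\sigma_1=\|\partial f\|^2_0$ and $\ell'=1$, so the bound becomes
$$
\|\partial f\|^2_0(x)\le \frac{2\cdot 1}{1+1}\frac{K}{\kappa}=\frac{K}{\kappa},
$$
which is (\ref{eq:sch-ni}); for $\ell=m$ one has $\sigma_m=\|\partial f\|^2$ and $\ell'=d=\dim(f(M))$, so the bound becomes
$$
\|\partial f\|^2(x)\le \frac{2d}{d+1}\frac{K}{\kappa},
$$
which is (\ref{eq:sch-roy1}).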
The proof of the result (in Section 4) is built upon extensions of the $\partial\bar{\partial}$-Bochner formulae of \cite{Ni-1807}, which are proved in Section 3 after some preliminaries in Section 2. In Section 5 we show that the estimates can be used to rule out the existence of certain holomorphic mappings under some curvature conditions (cf. Theorem \ref{thm:51}). In particular Theorem \ref{thm:sch1} (cf. Corollary 5.4 of \cite{Ni-1807}) implies that {\it if a compact K\"ahler manifold $(M, g)$ has $H\ge 0$, then there is no onto homomorphism from its fundamental group to the fundamental group of any oriented Riemann surface (complex curve) of genus greater than one.} The more flexible Theorem \ref{thm:main1} extends this statement to include all K\"ahler manifolds with $\operatorname{Ric}_\ell \ge0$ (for some $\ell\in \{1, \cdots, m\}$). Note that a similar statement was proved for Riemannian manifolds with positive isotropic curvature in \cite{FW}. In \cite{Tsu, Ni} it was proved that if the holomorphic sectional curvature $H>0$, or more generally $\operatorname{Ric}_\ell>0$, then $\pi_1(M)=\{0\}$. The result here provides some information for the nonnegative case. Note that the examples in \cite{Hitchin} indicate that the class of K\"ahler manifolds with $H>0$ (most of them are not Fano) seems to be much larger than that with $\operatorname{Ric}>0$. Very little is known for manifolds $M$ with $H\ge 0$ (or $\operatorname{Ric}_\ell\ge 0$ for $\ell<\dim(M)$) compared with the situation for compact manifolds with $\operatorname{Ric}\ge 0$. In fact when $M$ is a compact K\"ahler manifold with nonnegative bisectional curvature, Mok's classification result \cite{Mok} implies that the fundamental group $\pi_1(M)$ must be a Bieberbach one. In Corollary 5.1 of \cite{Ni-Tam}, a paper by Tam and the author, this was extended (as a result of F. Zheng) to the case when $M$ is a non-compact complete K\"ahler manifold, but under the nonnegativity of the sectional curvature. For compact Riemannian manifolds with nonnegative Ricci curvature Cheeger-Gromoll \cite{CG} proved that $\pi_1(M)$ must be a finite extension of a Bieberbach group. {\it Could this be proven for a compact K\"ahler manifold with $\operatorname{Ric}_\ell \ge 0$ with $\ell<\dim(M)$}? Note that such a statement cannot possibly be true for K\"ahler manifolds with $B^\perp\ge 0$ (hence nor with $\operatorname{Ric}^\perp\ge 0$). In a recent preprint \cite{Mu}, the question has been answered positively for $H\ge 0$, assuming additionally that $M$ is a projective variety. Given that there are many non-algebraic K\"ahler manifolds with $H\ge 0$, our result for general K\"ahler manifolds is not contained in \cite{Mu}.
In \cite{ABCKT}, two invariants were defined for a K\"ahler manifold $M$. One is the so-called Albanese dimension $a(M)\doteqdot \dim_{\mathbb{C}}(Alb(M))$ (we use the complex dimension instead), the dimension of the image of the Albanese map $Alb: M\to \mathbb{C}^{\dim(H^{1,0}(M))}/H_1(M, \mathbb{Z})$. The other invariant is the genus of $M$, $g(M)$, which is defined as the maximal $\dim(U)$ with $U$ being an isotropic subspace of $H^1(M, \mathbb{C})$. The above consequence of Theorem \ref{thm:51} can be rephrased as saying that for $M$ with $H^M(X)\ge 0$, or more generally $\operatorname{Ric}_\ell\ge 0$, we must have $g(M)\le 1$. The same conclusion is obtained in Section 6 for K\"ahler manifolds $M$ with the Picard number $\rho(M)=1$ and $S_2>0$, or with $h^{1,1}(M)=1$. A corollary of Theorem \ref{thm:51} concludes that if $S^M_\ell>0$, then $a(M)\le \ell-1$. (This is also a consequence of the vanishing theorem proved in \cite{Ni-Zheng2}.) These results endow the curvatures $\operatorname{Ric}_\ell$ and $S_\ell$ with some algebraic geometric/topological implications.
In Section 5 we also illustrate that the $C^2$-estimate for the complex Monge-Amp\`ere equation is a special case of our computation in Section 3. In Section 6 we derive some estimates on the minimal ``energy'' needed for a non-constant holomorphic map between certain K\"ahler manifolds, extending earlier results in \cite{Ni-1807}.
\section{Preliminaries}
We collect some needed algebraic results. For a holomorphic map $f: (M^m, g)\to (N^n, h)$, let $\partial f(\frac{\partial\ }{\partial z^{\alpha}})=\sum_{i=1}^n f^i_{\alpha} \frac{\partial\ }{\partial w^i}$ with respect to local coordinates $(z^1, \cdots, z^m)$ and $(w^1, \cdots, w^n)$. The Hermitian form $A_{\alpha\bar{\beta}}dz^\alpha \wedge d\bar{z}^{\beta}$ with
$A_{\alpha\bar{\beta}}=f^i_{\alpha} \overline{f^j_\beta} h_{i\bar{j}}$ is the pull-back of the K\"ahler form $\omega_h$ via $f$. By the singular value decomposition, for $x_0\in M$ and $f(x_0)\in N$ we may choose normal coordinates centered at $x_0$ and $f(x_0)$ such that $\partial f(\frac{\partial\ }{\partial z^\alpha})=\lambda_\alpha \delta^i_{\alpha} \frac{\partial\ }{\partial w^i}$. Then $|\lambda_\alpha|$ are the singular values of $\partial f: (T'_{x_0}M, g) \to (T_{f(x_0)}'N, h)$. It is easy to see that $|\lambda_1|^2 \ge \cdots \ge |\lambda_m|^2$ are the eigenvalues of $A$ (with respect to $g$).
\begin{proposition} \label{prop:21} For any $1\le \ell\le m$ the following holds:
$$
\sigma_\ell\doteqdot \sum_{\alpha=1}^\ell |\lambda_\alpha|^2 \ge \sum_{1\le \alpha, \beta\le \ell} g^{\alpha \bar{\beta}}A_{\alpha\bar{\beta}}\doteqdot U_\ell.
$$
\end{proposition}
\begin{proof} Arguing invariantly, we choose a unitary basis of $T'_{x_0}M$ with respect to $g$. Then the left hand side is the partial sum of the eigenvalues of $A$ in descending order, and the right hand side is the trace of the first $\ell\times \ell$ block of $(A_{\alpha\bar{\beta}})$. Hence the result is well-known (cf. \cite{Horn-Johnson}, Corollary 4.3.34).
\end{proof}
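As a toy illustration of Proposition \ref{prop:21} (hypothetical numbers, with $g=\operatorname{id}$ and $m=2$): for
$$
A=\left(\begin{array}{cc} 2 & 1\\ 1 & 2\end{array}\right),
$$
whose eigenvalues are $3\ge 1$, one has $\sigma_1=3\ge A_{1\bar{1}}=2=U_1$, while $\sigma_2=4=\operatorname{tr}(A)=U_2$, with equality when $\ell=m$.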
For a linear map $L: \mathbb{C}^m\to \mathbb{C}^n$ between two Hermitian linear spaces, $\Lambda^\ell L: \wedge^\ell \mathbb{C}^m \to \wedge^\ell \mathbb{C}^n$ is defined as the linear extension of the action on simple vectors: $\Lambda^\ell L({\bf{a}})\doteqdot L(v_1)\wedge\cdots\wedge L(v_\ell)$ with ${\bf{a}}=v_1\wedge\cdots \wedge v_\ell$. The metric on $\wedge^\ell \mathbb{C}^m$ is defined as $\langle {\bf{a}}, \overline{{\bf{b}}}\rangle =\det(\langle v_i, \bar{w}_j\rangle)$. If $\{e_\alpha\}$ is a unitary frame of $\mathbb{C}^m$, then $\{e_{\lambda}\}$, with $\lambda=(\alpha_1, \cdots, \alpha_\ell)$, $\alpha_1< \cdots < \alpha_\ell$, being the multi-index and $e_{\lambda}=e_{\alpha_1}\wedge \cdots\wedge e_{\alpha_\ell}$, is a unitary frame for $\wedge^\ell \mathbb{C}^m$. The Binet-Cauchy formula implies that this is consistent with the Hermitian product $\langle {\bf{a}}, \overline{{\bf{b}}}\rangle$ defined in the previous section. The norm $\|\Lambda^\ell L\|_0$ is the operator norm with respect to the Hermitian structures of $\wedge^\ell \mathbb{C}^m $ and $\wedge^\ell \mathbb{C}^n$ defined above, which equals the Jacobian of a Lipschitz map $f$, when $\ell=m$ or $n$, applied to $L=\partial f$ (cf. Section 3.1 of \cite{Fede}).
For the local Hermitian matrices $A=(A_{\alpha\bar{\beta}})$ and $G=(g_{\alpha\bar{\beta}})$ we denote by $A_\ell$ and $G_{\ell}$ the upper-left $\ell\times \ell$ blocks of them.
\begin{proposition}\label{prop:22} For any $1\le \ell\le m$ the following holds:
\begin{eqnarray}
\|\Lambda^\ell \partial f\|_0^2=\Pi_{\alpha=1}^\ell |\lambda_\alpha|^2&\ge& \frac{\det(A_\ell)}{\det(G_\ell)}\doteqdot W_\ell. \label{eq:21}
\end{eqnarray}
\end{proposition}
\begin{proof} For the inequality in (\ref{eq:21}), as in the above proposition we may choose a unitary frame of $T'_{x_0}M$ such that $G=\operatorname{id}$. Then the claimed result is also a well-known statement about the partial products of the descending eigenvalues. The result can be seen by applying 4.1.6 of \cite{MM} to $(A+\epsilon G)^{-1}$ and letting $\epsilon \to 0$ (see also Problem 4.3.P15 of \cite{Horn-Johnson}).
For the equality in (\ref{eq:21}), first observe that
$$
\|\Lambda^\ell \partial f\|^2_0(x) \ge \frac{| \partial f\left(v_1\right)\wedge \cdots \wedge \partial f\left(v_\ell\right)|^2}{|v_1\wedge\cdots \wedge v_\ell|^2}=\Pi_{\alpha=1}^\ell |\lambda_\alpha|^2
$$
if $\{v_\alpha \}$ are the eigenvectors of $A$ with eigenvalues $\{|\lambda_\alpha|^2\}$. On the other hand, for general orthonormal vectors $\{v_\alpha\}$, the above paragraph implies $ \frac{| \partial f\left(v_1\right)\wedge \cdots \wedge \partial f\left(v_\ell\right)|^2}{|v_1\wedge\cdots \wedge v_\ell|^2}\le \Pi_{\alpha=1}^\ell |\lambda_\alpha|^2
$. Combining them we have the equality in (\ref{eq:21}). \end{proof}
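Returning to the toy $2\times 2$ example used after Proposition \ref{prop:21} (same $A$, $G=\operatorname{id}$, illustrative only): for $\ell=1$,
$$
\Pi_{\alpha=1}^{1} |\lambda_\alpha|^2=3\ge \frac{\det(A_1)}{\det(G_1)}=2,
$$
while for $\ell=2$ one has $\Pi_{\alpha=1}^{2}|\lambda_\alpha|^2=3\cdot 1=3=\det(A)/\det(G)$, again with equality when $\ell=m$.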
\section{$\partial\bar{\partial}$-Bochner formulae}
Here we generalize the $\partial\bar{\partial}$-Bochner formulae derived in \cite{Ni-1807} on $\|\partial f\|^2$ and $\|\Lambda^m \partial f\|_0^2$ to $\sigma_\ell$ and $\|\Lambda^\ell \partial f\|_0^2$. Since both $\sigma_\ell(x)$ and $\|\Lambda^\ell \partial f\|_0^2(x)$ are only continuous in general, we first derive formulae for their barriers supplied by Propositions \ref{prop:21} and \ref{prop:22}.
\begin{proposition}\label{prop:31} Under the normal coordinates near $x_0$ and $f(x_0)$ such that $\partial f(\frac{\partial\ }{\partial z^\alpha})=\lambda_\alpha \delta^i_{\alpha} \frac{\partial\ }{\partial w^i}$ with $|\lambda_1|\ge \cdots\ge |\lambda_\alpha|\ge \cdots \ge |\lambda_m|$ being the singular values of $\partial f: (T'_{x_0}M, g) \to (T_{f(x_0)}'N, h)$, let $U_\ell(x)$ and $W_\ell(x)$ be the functions defined in the last section in a small neighborhood of $x_0$. Then at $x_0$, for $v\in T'_{x_0}M$, and nonzero $U_\ell$ and $W_\ell$,
\begin{eqnarray}
\langle \sqrt{-1}\partial \bar{\partial} \log U_\ell, \frac{1}{\sqrt{-1}}v\wedge \bar{v}\rangle &=&\frac{U_\ell \sum_{1\le i\le n, 1\le \alpha \le \ell} |f^i_{\alpha v}|^2-|\sum_{\alpha =1}^\ell \overline{\lambda_\alpha}f^\alpha_{\alpha v}|^2}{U_\ell^2}\label{eq:31}\\
&\quad&+\sum_{\alpha=1}^\ell \frac{|\lambda_\alpha|^2}{U_\ell}(-R^N(\alpha, \bar{\alpha}, \partial f(v), \overline{\partial f(v)})+R^M(\alpha, \bar{\alpha}, v, \bar{v}));\nonumber \\
\langle \sqrt{-1}\partial \bar{\partial} \log W_\ell, \frac{1}{\sqrt{-1}}v\wedge \bar{v}\rangle &=& \sum_{\alpha=1}^\ell \sum_{ \ell+1\le i \le n} \frac{|f^i_{\alpha v}|^2}{|\lambda_\alpha|^2}\label{eq:32} \\
&\quad& + \sum_{\alpha=1}^\ell (- R^N(\alpha, \bar{\alpha}, \partial f(v), \overline{\partial f(v)})+R^M(\alpha,\bar{\alpha}, v, \bar{v})). \nonumber
\end{eqnarray}
\end{proposition}
\begin{proof} The calculation is similar to that of \cite{Ni-1807}. Here we include the details of the first. Choose holomorphic normal coordinates $(z_1, z_2, \cdots, z_m)$ near a point $p$ on the domain manifold $M$, and correspondingly $(w_1, w_2, \cdots, w_n)$ near $f(p)$ in the target. Let $\omega_g=\sqrt{-1}g_{\alpha\bar{\beta}}dz^\alpha\wedge d\bar{z}^{\beta}$ and $\omega_h=\sqrt{-1}h_{i\bar{j}}dw^i\wedge d\bar{w}^{j}$ be the K\"ahler forms of $M$ and $N$ respectively. Correspondingly, the Christoffel symbols are given by
$$
^M\Gamma_{\alpha \gamma}^\beta =\frac{\partial g_{\alpha \bar{\delta}}}{\partial z^{\gamma}}g^{\bar{\delta}\beta}=\Gamma_{\gamma \alpha }^\beta; \quad \quad ^N\Gamma_{i k}^j =
\frac{\partial h_{i \bar{l}}}{\partial w^{k}}h^{\bar{l}j}=\Gamma_{k i }^j.
$$
We always use Einstein's summation convention when there is a repeated index. The symmetry in the Christoffel symbols is due to the K\"ahlerity. If the appearance of the indices can distinguish the manifolds, we omit the superscripts $^M$ and $^N$. Correspondingly the curvatures are given by
$$
^MR^\beta_{\alpha \bar{\delta} \gamma}=-\frac{\partial}{\partial \bar{z}^{\delta}} \Gamma_{\alpha \gamma}^\beta; \quad \quad \quad \,^NR^j_{i \bar{l} k}=-\frac{\partial}{\partial \bar{w}^{l}} \Gamma_{i k}^j.
$$
At the points $x_0$ and $f(x_0)$, where the normal coordinates are centered, we have that
$$
R_{\bar{\beta}\alpha \bar{\delta} \gamma}=-\frac{\partial^2 g_{\bar{\beta}\alpha}}{\partial z^{\gamma}\partial \bar{z}^{\delta}}; \quad \quad R_{\bar{j}i \bar{l} k}=-\frac{\partial^2 h_{\bar{j}i}}{\partial w^{k}\partial \bar{w}^{l}}.
$$
Direct calculation shows that at the point $x_0$ (here the repeated indices $\alpha, \beta $ are summed from $1$ to $\ell$, while $i, j, k, l$ are summed from $1$ to $n$)
\begin{eqnarray*}
(\log U_\ell)_\gamma &=&\frac{g^{\alpha \bar{\beta}}_{\quad, \gamma} A_{\alpha\bar{\beta}}+g^{\alpha \bar{\beta}}f^i_{\alpha \gamma}h_{i\bar{j}}\overline{f^j_\beta}+g^{\alpha \bar{\beta}}f^i_{\alpha }\overline{f^j_\beta}f^k_\gamma h_{i\bar{j}, k} }{U_\ell}=\frac{f^i_{\alpha\gamma}\overline{f^i_\alpha}}{U_\ell};\\
(\log U_\ell)_{\bar{\gamma}} &=&\frac{g^{\alpha \bar{\beta}}_{\quad, \bar{\gamma}} A_{\alpha\bar{\beta}}+g^{\alpha \bar{\beta}}f^i_{\alpha}h_{i\bar{j}}\overline{f^j_{\beta\gamma}}+g^{\alpha \bar{\beta}}f^i_{\alpha }\overline{f^j_\beta}\overline{f^k_\gamma} h_{i\bar{j}, \bar{k}} }{U_\ell}=\frac{\overline{f^i_{\alpha\gamma}}f^i_\alpha}{U_\ell};\\
\left(\log U_\ell\right)_{\gamma \bar{\gamma}}&=& \frac{R^M_{\alpha\bar{\beta}\gamma\bar{\gamma}}f^i_\alpha \overline{f^i_\beta}+|f^i_{\alpha \gamma}|^2-R^N_{i\bar{j}k\bar{l}}f^i_\alpha \overline{f^j_\beta}f^k_\gamma \overline{f^l_\gamma}}{U_\ell}-\frac{|\sum_{1\le \alpha\le \ell; 1\le i\le n} f^i_{\alpha \gamma}\overline{f^i_\alpha}|^2}{U_\ell^2}.
\end{eqnarray*}
The claimed equation then follows.
\end{proof}
\begin{corollary} Let $f: M\to N$ be a holomorphic map between two K\"ahler manifolds.
(i) If the bisectional curvature of $N$ is non-positive and the bisectional curvature of $M$ is nonnegative, then $\log \sigma_\ell(x)$ is a plurisubharmonic function.
(ii) Assume that $\operatorname{Ric}^N_\ell\le 0$ and $\operatorname{Ric}^M_\ell\ge0$. If $\|\Lambda^\ell \partial f\|^2_0$ is not identically zero, then for every $x$ there exists a $\Sigma \subset T_x'M$ with $\dim(\Sigma)\ge \ell$ such that $\log \|\Lambda^\ell \partial f\|^2_0(x)$ is plurisubharmonic on $\Sigma$.
\end{corollary}
\section{Proof of Theorem \ref{thm:main1}}
Since in general $\sigma_\ell$ and $\|\Lambda^\ell \partial f\|_0$ are not smooth, we adopt the viscosity consideration as in Section 5 of \cite{Ni-1807} to prove the result. We also need to modify the algebraic argument in the Appendix of \cite{Ni-1807} for some pointwise estimates that are needed. Another difference in the argument is that we shall apply the maximum principle to a degenerate operator. First we need a Royden type lemma.
\begin{lemma}\label{lem:41} If the holomorphic sectional curvature $R^N$ has an upper bound $-\kappa$, then with respect
to the normal coordinates as in Proposition \ref{prop:21} at $x_0$ (and $f(x_0)$),
$$
\sum_{1\le \alpha,\beta, \gamma, \delta\le \ell} g^{\alpha \bar{\beta}}g^{\gamma \bar{\delta}}R^N_{i\bar{j}k\bar{l}}f^i_{\alpha} \overline{f^{j}_\beta} f^k_\gamma \overline{f^l_\delta} \le -\frac{\ell'+1}{2\ell'} \kappa U^2_\ell, \mbox{ when }\kappa>0; \quad \le -\kappa U^2_\ell \mbox{ when } \kappa\le 0.
$$
Here $\ell'=\min\{ \ell, \dim(f(M))\}$.
\end{lemma}
\begin{proof} We follow the argument in the Appendix of \cite{Ni-1807}, which is due to F. Zheng. The left hand side can be written as $ \sum_{1\le \alpha, \beta\le \ell'} R^N_{\alpha\bar{\alpha}\beta\bar{\beta}}|\lambda_\alpha|^2|\lambda_\beta|^2$. In the space $$\Sigma\doteqdot \operatorname{span} \{ \partial f\left( \frac{\partial\ }{\partial z^1}\right), \cdots, \partial f\left( \frac{\partial\ }{\partial z^{\ell'}}\right)\}, $$ consider the vector $Y=\sum_{1\le i \le \ell'} w^i\lambda_i \frac{\partial \ }{\partial w^i}$ with $(w^1, \cdots, w^{\ell'})\in \mathbb{S}^{2\ell'-1}\subset \Sigma$. Then direct calculations show that
\begin{eqnarray*}
\sum_{1\le \alpha, \beta\le \ell'} R^N_{\alpha\bar{\alpha}\beta\bar{\beta}}|\lambda_\alpha|^2|\lambda_\beta|^2&=&\frac{\ell' (\ell'+1)}{2}\cdot \frac{1}{Vol (\mathbb{S}^{2\ell'-1})}\int_{\mathbb{S}^{2\ell'-1}} R^N(Y, \overline{Y}, Y,\overline{Y})\\
&\le & -\kappa \frac{\ell' (\ell'+1)}{2}\cdot \frac{1}{Vol (\mathbb{S}^{2\ell'-1})}\int_{\mathbb{S}^{2\ell'-1}} |Y|^4\\
&=& \frac{-\kappa }{2}\left(U_\ell^2+\sum_{1\le \alpha \le \ell'} |\lambda_\alpha|^4\right).
\end{eqnarray*}
The result follows from the elementary inequalities $\sum_{1\le \alpha \le \ell'} |\lambda_\alpha|^4\le U_\ell^2 \le\ell'\, \sum_{1\le \alpha \le \ell'} |\lambda_\alpha|^4$.
\end{proof}
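For the reader's convenience, here is how the elementary inequalities used above can be checked (a short aside; recall that at $x_0$ one has $\lambda_\alpha=0$ for $\alpha>\ell'$, since the rank of $\partial f$ never exceeds $\dim(f(M))$, so that $U_\ell=\sum_{1\le \alpha\le \ell'}|\lambda_\alpha|^2$). Writing $a_\alpha=|\lambda_\alpha|^2\ge 0$,
$$
\sum_{1\le \alpha\le \ell'} a_\alpha^2\le \Big(\sum_{1\le \alpha\le \ell'} a_\alpha\Big)^2=U_\ell^2\le \ell' \sum_{1\le \alpha\le \ell'} a_\alpha^2,
$$
where the first inequality holds since the cross terms are nonnegative and the second is the Cauchy-Schwarz inequality.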
To prove part (i), let $\eta(t):[0, +\infty)\to [0, 1]$ be a function supported in $[0, 1]$ with $\eta'=0$ on $[0, \frac{1}{2}]$, $\eta' \le 0$, $\frac{|\eta'|^2}{\eta}+(-\eta'')\le C_1$. The construction of such $\eta$ is elementary.
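One admissible choice (given here only for concreteness, and up to a standard smoothing at $t=1/2$) is
$$
\eta(t)=\left\{\begin{array}{ll} 1, & 0\le t\le 1/2,\\ \cos^2\big(\pi(t-1/2)\big), & 1/2\le t\le 1,\\ 0, & t\ge 1,\end{array}\right.
$$
for which $\eta'\le 0$, $\eta'=0$ on $[0,1/2]$, $\frac{|\eta'|^2}{\eta}=4\pi^2\sin^2\big(\pi(t-1/2)\big)\le 4\pi^2$ and $-\eta''\le 2\pi^2$, so that one may take $C_1=6\pi^2$.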
Let $\varphi_R(x)=\eta(\frac{r(x)}{R})$. When the meaning is clear we omit the subscript $R$ in $\varphi_R$. Clearly $\sigma_\ell \cdot \varphi$ attains a maximum somewhere, say at $x_0$, in $B_p(R)$. With respect to the normal coordinates near $x_0$ and $f(x_0)$, $(U_\ell \varphi)(x_0)=(\sigma_\ell \varphi)(x_0)$, and $(U_\ell\varphi)(x)\le (\sigma_\ell \varphi)(x)\le (\sigma_\ell\varphi)(x_0)\le (U_\ell \varphi)(x_0)$ for $x$ in the small normal neighborhood. The maximum principle then implies that at $x_0$
$$
\nabla (U_\ell \varphi)=0; \quad \sum_{1\le \alpha \le \ell} \frac{1}{2}(\nabla_{\alpha}\nabla_{\bar{\alpha}} +\nabla_{\bar{\alpha}}\nabla_{\alpha}) \log (U_\ell \varphi) \le 0.
$$
Now the $\partial \bar{\partial}$ formula (\ref{eq:31}), the above Lemma and the complex Hessian comparison theorem of Li-Wang \cite{LW}, together with the argument in \cite{Ni-1807}, imply the result. It is clear from the proof that if $\ell=m=\dim(M)$, only the Laplacian comparison theorem is needed. Hence one only needs to assume that the Ricci curvature of $M$ is bounded from below.
The proof of part (ii) is similar. For the sake of completeness we include the argument under the assumption (\ref{eq:2}). In this case we let $\varphi=\eta\left(\frac{\rho}{R}\right)$. Now $\varphi$ has support in $D(2R)\doteqdot \{\rho\le 2R\}$. Hence $W_\ell \cdot \varphi$ attains its maximum somewhere, say at $x_0 \in D(2R)$. Now at $x_0$ we have
\begin{eqnarray*}
0&\ge& \sum_{\gamma=1}^\ell \frac{\partial^2}{\partial z^\gamma \partial \bar{z}^{\gamma}}\, \left(\log (W_\ell \, \varphi)\right) \ge \sum_{\alpha,\gamma=1}^\ell R^M_{\alpha\bar{\alpha}\gamma \bar{\gamma}}-R^N_{\alpha\bar{\alpha }\gamma\bar{\gamma}}|\lambda_\gamma|^2 + \sum_{\gamma=1}^\ell \frac{\partial^2 \log \varphi}{\partial z^\gamma \partial \bar{z}^{\gamma}} \\
&\ge& -K +\ell \cdot \kappa \cdot W_\ell^{1/\ell}+\frac{ \eta''}{R^2\varphi} |\nabla \rho|^2+\frac{\ell \eta'}{R \varphi}\left([\sqrt{-1}\partial \bar{\partial} \rho]_{+}\right)-\frac{|\eta'|^2}{\varphi^2 R^2}\cdot |\nabla \rho|^2\\
&\ge& -K +\ell \cdot \kappa \cdot W_\ell^{1/\ell} -\frac{C_1}{\varphi R^2}|\nabla \rho|^2 -\frac{C_1}{\varphi R} \cdot C(m)\left( [\sqrt{-1}\partial \bar{\partial} \rho]_{+}\right).
\end{eqnarray*}
Multiplying by $\varphi$ on both sides of the above we have that
$$
\sup_{D(R)}\|\Lambda^\ell \partial f\|_0^2(x)\le \left(\frac{K +\frac{C_1}{\varphi R^2}|\nabla \rho|^2 +\frac{C_1}{\varphi R} \cdot C(m)\left( [\sqrt{-1}\partial \bar{\partial} \rho]_{+}\right)}{\ell \kappa}\right)^\ell.
$$
The result follows by observing that $\frac{|\nabla \rho|^2}{R^2}\le \frac{4|\nabla \rho|^2}{\rho^2} \to 0$ and $\frac{ [\sqrt{-1}\partial \bar{\partial} \rho]_{+}}{R}\le 2 \frac{ [\sqrt{-1}\partial \bar{\partial} \rho]_{+}}{\rho}\to 0$ as $R\to \infty$.
\section{Applications}
First we show that the Pogorelov type estimate of \cite{Ni-1807} can be adapted to derive the $C^2$-estimate for the complex Monge-Amp\`ere equation related to the existence of K\"ahler-Einstein metrics and the problem of prescribing the Ricci curvature. Recall that these geometric problems reduce to a complex Monge-Amp\`ere equation
$$
\frac{\det(g_{\alpha\bar{\beta}}+\varphi_{\alpha\bar{\beta}})}{\det(g_{\alpha\bar{\beta}})}=e^{t\varphi +f}
$$
with $t\in [-1, 1]$ and $f$ being a fixed function with prescribed complex Hessian. Here $g'_{\alpha\bar{\beta}}=g_{\alpha\bar{\beta}}+\varphi_{\alpha\bar{\beta}}$ is another K\"ahler metric with $[\omega_{g'}]=[\omega_g]$. We apply our previous setting to the map $\operatorname{id}:(M, g)\to (M, g')$. The computation in \cite{Aub, Yau} (see also the exposition in \cite{Siu}) is on $\mathcal{L} \|\partial f\|^2$. By the computation from Sections 3 and 4, at the point where the maximum of $\|\partial \operatorname{id}\|^2_0$ is attained we have that
$$
0\ge \frac{\partial^2}{\partial z^\gamma \partial \bar{z}^\gamma} \log (1+\varphi_{1\bar{1}})\ge R_{1\bar{1}\gamma\bar{\gamma}}-R'_{1\bar{1}\gamma \bar{\gamma}}\left(1+\varphi_{\gamma\bar{\gamma}}\right).
$$
Here $R'$ is the curvature of $g'$ and $|\lambda_\gamma|^2=1+\varphi_{\gamma\bar{\gamma}}$. Since we do not have information on $R'$ in general, but only on $\operatorname{Ric}^{g'}(\frac{\partial}{\partial z^1}, \frac{\partial}{\partial \bar{z}^1})=\operatorname{Ric}^g_{1\bar{1}}-t\varphi_{1\bar{1}}-f_{1\bar{1}}$, we multiply by $\frac{1}{1+\varphi_{\gamma\bar{\gamma}}}$ on both sides of the above inequality and then sum $\gamma$ from $1$ to $m$, arriving at
\begin{eqnarray*}
0&\ge& \sum_{\gamma=1}^m\frac{1}{1+\varphi_{\gamma\bar{\gamma}}}R^g_{1\bar{1}\gamma\bar{\gamma}}-
\frac{\operatorname{Ric}^g_{1\bar{1}}}{1+\varphi_{1\bar{1}}}
+t\frac{\varphi_{1\bar{1}}}{1+\varphi_{1\bar{1}}}+\frac{f_{1\bar{1}}}{1+\varphi_{1\bar{1}}}\\
&\ge& -C(M, g, f)\sum_{\gamma=1}^m\frac{1}{1+\varphi_{\gamma\bar{\gamma}}}-1.
\end{eqnarray*}
Now we apply/repeat the same consideration/calculation to $Q\doteqdot \log \sigma_1 -(C(M, g, f)+1)\varphi$. Then at the point $x_0$ where $Q$ attains its maximum, we have that
$$
0\ge -C(M, g, f)\sum_{\gamma=1}^m\frac{1}{1+\varphi_{\gamma\bar{\gamma}}}-(C(M, g, f)+2) +(C(M, g, f)+1)\sum_{\gamma=1}^m\frac{1}{1+\varphi_{\gamma\bar{\gamma}}},
$$
which then implies that
$$
\sum_{\gamma=1}^m\frac{1}{1+\varphi_{\gamma\bar{\gamma}}}\le C(M, g, f)+2.
$$
This implies that at the maximum point of $\sigma_1 e^{-(C(M, g, f)+1)\varphi}$,
\begin{eqnarray*}
\sigma_1 e^{-(C(M, g, f)+1)\varphi}&=&\sigma_1\frac{\omega^m_g}{\omega^m_{g'}}e^{t\varphi +f}e^{-(C(M, g, f)+1)\varphi}\\
&\le& \left(\frac{1}{m-1} \sum_{\gamma=2}^m \frac{1}{1+\varphi_{\gamma\bar{\gamma}}}\right)^{m-1} e^{t\varphi +f}e^{-(C(M, g, f)+1)\varphi}\\
&\le& \left(\frac{C(M, g, f)+2}{m-1} \right)^{m-1} e^{t\varphi +f}e^{-(C(M, g, f)+1)\varphi}.
\end{eqnarray*}
If we write $K=\left(\frac{C(M, g, f)+2}{m-1} \right)^{m-1}$ and $\kappa=C(M, g, f)+1$, the above implies
\begin{equation}
1+\varphi_{\gamma\bar{\gamma}}(x)\le \sigma_1(x)\le Ke^{\kappa (\varphi(x)-\varphi(x_0))} e^{t\varphi(x_0) +f(x_0)}, \quad \forall \gamma\in \{1, \cdots, m\}.
\end{equation}
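In passing, we note a standard consequence of the above (under the additional assumption of a uniform $C^0$ bound on $\varphi$, which we do not address here): denoting by $B(x)$ the right hand side of the last displayed estimate, the equation $\Pi_{\gamma=1}^m(1+\varphi_{\gamma\bar{\gamma}})=e^{t\varphi+f}$, computed at a point in coordinates diagonalizing $\varphi_{\alpha\bar{\beta}}$ with respect to $g$, together with $1+\varphi_{\gamma\bar{\gamma}}\le B$, yields
$$
1+\varphi_{\gamma\bar{\gamma}}\ge \frac{e^{t\varphi+f}}{B^{m-1}},
$$
so the metrics $g$ and $g'$ are uniformly equivalent.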
As mentioned in the introduction, Theorem \ref{thm:main1} removes the constraint that $\dim(M)\le \dim(N)$ from the previous results proved in \cite{Ni-1807}. As in \cite{NZ} we denote by $B^\perp$ the orthogonal bisectional curvature. We say $B^\perp\le \kappa$ if for any $X, Y\in T'N$ with $\langle X, \overline{Y}\rangle=0$, $R(X, \bar{X}, Y, \bar{Y})\le \kappa |X|^2|Y|^2$. The following is a corollary of the proof of Theorem \ref{thm:main1}.
\begin{theorem}\label{thm:51} Let $f: (M, g)\to (N, h)$ be a holomorphic map.
(i) Assume that $M$ is compact. Under the assumption that either $\operatorname{Ric}^M_\ell >0$ and the holomorphic sectional curvature $H^N\le 0$, or $\operatorname{Ric}^M_\ell\ge 0$ and $H^N < 0$, $f$ must be constant. The same result also holds if $(B^M)^\perp>0$ and $(B^N)^\perp\le 0$, or $(B^M)^\perp\ge0$ and $(B^N)^\perp< 0$.
(ii) If $M$ is compact with $S^M_\ell\ge 0$ and $\operatorname{Ric}^N_\ell <0$, or with $S^M_\ell >0$ and $\operatorname{Ric}^N_\ell \le 0$, then $\dim(f(M))< \ell$. The same result holds if $\operatorname{Ric}^M_\ell\ge 0$ and $S^N_\ell <0$, or $\operatorname{Ric}^M_\ell> 0$ and $S^N_\ell \le 0$.
\end{theorem}
\begin{proof} Since $M$ is compact, $\sigma_\ell$ attains a maximum somewhere, say at $x_0$. If $f$ is not constant, $\sigma_\ell(x_0)>0$. Applying (\ref{eq:31}), using the normal coordinates around $x_0$ and $f(x_0)$ specified as in the last two sections, we have that
$$
0\ge \sum_{\gamma=1}^\ell \frac{\partial^2\ }{\partial z^\gamma \partial \bar{z}^\gamma}\left(\log U_\ell\right)\ge \sum_{1\le \alpha, \gamma \le \ell} \frac{-R^N_{\alpha\bar{\alpha}\gamma\bar{\gamma}}|\lambda_\alpha|^2|\lambda_\gamma|^2}{U_\ell}+ \sum_{\alpha =1}^\ell \frac{\operatorname{Ric}^M(x_0, \Sigma)(\alpha, \bar{\alpha})|\lambda_\alpha|^2}{U_\ell}.
$$
Here $\Sigma=\operatorname{span}\{ \frac{\partial\ }{\partial z^1}, \cdots, \frac{\partial\ }{\partial z^\ell} \}$. By Lemma \ref{lem:41}, if $H^N<0$, the first term is positive, and the second one is nonnegative since $\operatorname{Ric}^M_\ell\ge 0$. Hence we have a contradiction. From the proof, the same holds if $H^N\le0$ and $\operatorname{Ric}^M_\ell>0$. For the case concerning $B^\perp$ the proof is similar.
For (ii), if $\operatorname{rank}(f)\ge \ell$, then $\|\Lambda^\ell\partial f\|_0$ has a nonzero maximum somewhere, say at $x_0$. Then applying (\ref{eq:32}), using the normal coordinates around $x_0$ and $f(x_0)$ specified as in the last two sections, we have that
$$
0\ge \sum_{\gamma=1}^\ell \frac{\partial^2\ }{\partial z^\gamma \partial \bar{z}^\gamma}\left(\log W_\ell\right)\ge \sum_{1\le \gamma \le \ell} (-\operatorname{Ric}^N_\ell(x_0, \Sigma) |\lambda_\gamma|^2)+\operatorname{Scal}^M(x_0, \Sigma).
$$
This leads to a contradiction under the assumption that either $S^M_\ell\ge 0$ and $\operatorname{Ric}^N_\ell <0$, or $S^M_\ell >0$ and $\operatorname{Ric}^N_\ell \le 0$. For the second part, we introduce the operator
$$
\mathcal{L}_\ell=\sum_{\gamma=1}^\ell\frac{1}{2|\lambda_{\gamma}|^2}\left(\nabla_{\gamma}\nabla_{\bar{\gamma}}
+\nabla_{\bar{\gamma}}\nabla_{\gamma}\right).
$$
Since $W_\ell\ne 0$ at $x_0$, the above operator is well defined in a small neighborhood of $x_0$. As before, applying $\mathcal{L}_\ell$ at $x_0$ implies that
$$
0\ge \mathcal{L}_\ell \left(\log W_\ell\right)\ge -\operatorname{Scal}^N(x_0, \partial f(\Sigma))+\sum_{1\le \gamma \le \ell} \frac{\operatorname{Ric}^M(x_0, \Sigma)(\gamma, \bar{\gamma})}{|\lambda_\gamma|^2}.
$$
The above also yields a contradiction under either $\operatorname{Ric}^M_\ell\ge 0$ and $S^N_\ell <0$, or $\operatorname{Ric}^M_\ell> 0$ and $S^N_\ell \le 0$.
\end{proof}
This can be combined with the following result of Siu-Beauville (cf. Theorem 1.5 of \cite{ABCKT}) to infer information regarding the fundamental group of manifolds with $\operatorname{Ric}_\ell\ge0$.
\begin{theorem}[Siu-Beauville] Let $M$ be a compact K\"ahler manifold. There exists a compact Riemann surface $C_g$ of genus $g$ greater than one and a surjective holomorphic map $f: M \to C'$ onto a Riemann surface $C'$ with $g(C')\ge g$ and connected fibers if and only if there exists a surjective homomorphism $h: \pi_1(M)\to \pi_1(C_g)$.
\end{theorem}
\begin{corollary} (i) Let $(M, g)$ be a compact K\"ahler manifold with $\operatorname{Ric}_\ell\ge 0$ for some $1\le \ell\le m$. Then
there exists no surjective homomorphism $h: \pi_1(M)\to \pi_1(C_g)$. Furthermore, there is no subspace $V\subset H^1(M, \mathbb{C})$ with $\wedge^2 V=0$ in $H^2(M, \mathbb{C})$ and $\dim(V)\ge 2$; namely $g(M)\le 1$. Similarly, if $\operatorname{Ric}_\ell\ge 0$, $\pi_1(M)$ cannot be an amalgamated product $\Gamma_1*_{\Delta}\Gamma_2$ with the index of $\Delta$ in $\Gamma_1$ greater than one and the index of $\Delta$ in $\Gamma_2$ greater than two.
(ii) Let $(M, g)$ be a compact K\"ahler manifold with $S^M_\ell> 0$ for some $1\le \ell\le m$. Then $a(M)\le\ell-1$.
(iii) If $S^M_n\ge 0$, then any harmonic map $f: M\to N$ with $N$ being a locally Hermitian symmetric space cannot have $\operatorname{rank}(f)=\dim(N)$.
\end{corollary}
\begin{proof} The first part of (i) follows from part (i) of Theorem \ref{thm:51}: namely, apply it to $N=C_g$ and combine it with the above result of Siu-Beauville. The second part follows by combining Theorem \ref{thm:51} with Theorem 1.4 of \cite{ABCKT}, due to Catanese (cf. Theorem 1.10 of \cite{Cat}). For the last part of (i), involving the amalgamated product, apply Theorem 6.27 of \cite{ABCKT}, namely the result of Gromov-Schoen below, to conclude that there exists an equivariant holomorphic map from $\widetilde{M} $ into the Poincar\'e disk. This yields a contradiction with part (i) of Theorem \ref{thm:51} since the maximum principle argument still applies (see also \cite{NR}). The statement of (ii) is an easy consequence of part (ii) of Theorem \ref{thm:51}.
For part (iii), by Siu's result on the holomorphicity of harmonic maps between K\"ahler manifolds, namely Theorem 6.13 of \cite{ABCKT}, any such harmonic map must be holomorphic. Then part (ii) of Theorem \ref{thm:51} yields a contradiction, noting that the canonical metric on $N$ is K\"ahler-Einstein with negative Einstein constant.
\end{proof}
\begin{theorem}[Gromov-Schoen]
Let $M$ be a compact K\"ahler manifold with fundamental group $\Gamma=\Gamma_1*_{\Delta}\Gamma_2$ such that the index of $\Delta$ in $\Gamma_1$ is greater than one and the index of $\Delta$ in $\Gamma_2$ is greater than two. Then there exists a representation $\rho: \pi_1(M)\to \operatorname{Aut}(\mathbb{D})$, where $\mathbb{D}=\{z\, |\, |z|<1\}$, with discrete cocompact image, and a holomorphic equivariant map from the universal cover $\widetilde{M}\to \mathbb{D}$, which also descends to a surjective map $M\to \mathbb{D}/\rho(\Gamma)$.
\end{theorem}
In fact the vanishing theorem of \cite{Ni-Zheng2} implies that for K\"ahler manifolds with $S_\ell>0$, there does not exist a $k$-wedge subspace in $H^{1, 0}$ (in the sense of \cite{Cat}) for any $k\ge \ell$. Moreover, such manifolds have to be Albanese primitive for $k\ge \ell$.
For noncompact manifolds, Theorem \ref{thm:sch1} and Theorem \ref{thm:main1} can also be applied, together with Theorems 4.14 and 4.28 of \cite{ABCKT}, to infer some restrictions on K\"ahler manifolds with nonnegative holomorphic sectional curvature or with $\operatorname{Ric}_\ell\ge 0$.
\begin{corollary} Assume that $M$ is a complete K\"ahler manifold with bounded geometry and with $\operatorname{Ric}^M_\ell\ge 0$. Then
(i) $H^1(M, \mathbb{C})=\{0\}$ implies that $\mathcal{H}^1_{L^2}(M)=\{0\}$;
(ii) and $\dim (\mathcal{H}^1_{L^2, ex}(M))\le 1$.
\end{corollary}
Here $\mathcal{H}^1_{L^2}(M)$ is the space of harmonic $L^2$ one-forms and $\mathcal{H}^1_{L^2, ex}(M)$ is the space of exact harmonic $L^2$ one-forms. The statements are trivial when $M$ is compact.
\section{Mappings from positively curved manifolds}
In \cite{NZ}, the orthogonal Ricci curvature $\operatorname{Ric}^\perp$ was studied. Recall that $\operatorname{Ric}^\perp (X, \overline{X})=\operatorname{Ric}(X, \overline{X})-H(X)/|X|^2$. We say $\operatorname{Ric}^\perp\ge K$ if $\operatorname{Ric}^\perp (X, \overline{X})\ge K|X|^2$. It is easy to see that $B^\perp\ge \kappa$ implies that $\operatorname{Ric}^\perp \ge (m-1)\kappa$. A similar upper estimate also holds if $B^\perp$ is bounded from above. It was also shown in \cite{NZ} via explicit examples that $B^\perp$ is independent of the holomorphic sectional curvature $H$, as well as of the Ricci curvature. Similarly $\operatorname{Ric}^\perp$ is independent of $\operatorname{Ric}$, as well as of $H$. It was proved in \cite{NZ} that a manifold whose $\operatorname{Ric}^\perp$ has a positive lower bound is compact with an effective upper diameter bound. (See \cite{Tsu} for the corresponding result for the holomorphic sectional curvature.) It is not hard to see that K\"ahler manifolds with $\operatorname{Ric}_\ell\ge K>0$ must be compact with an upper diameter estimate.
Applying the $\partial\bar{\partial}$-Bochner formulae we have the following estimates in the spirit of \cite{Ni-1807}.
\begin{theorem}\label{thm:hoop}
(i) Assume that $\operatorname{Ric}^M_\ell (X, \overline{X})\ge K|X|^2$, and $H^N(Y)\le \kappa |Y|^4$, with $K, \kappa>0$. Then for any nonconstant holomorphic map $f: M\to N$
$$
\max_{x\in M} \sigma_\ell(x)\ge \frac{K}{\kappa}.
$$
(ii) Assume that $(B^M)^{\perp}\ge K$, and $(B^N)^\perp\le \kappa$, with $K, \kappa>0$. Then for any nonconstant holomorphic map $f: M\to N$, $\dim(f(M))=m$. Moreover for any $\ell<\dim(M)$
$$
\max_{x\in M} \sigma_\ell(x)\ge \ell \frac{K}{\kappa}.
$$
(iii) Assume that $\operatorname{Ric}^M_\ell\ge K$, and that $\operatorname{Ric}^N_\ell \le \kappa $, with $K,\kappa>0$. Then for any holomorphic map $f:M\to N$ with $\dim(f(M))\ge \ell$
$$
\max_{x}\|\Lambda^\ell \partial f\|_0^2(x) \ge \left(\frac{K}{\kappa}\right)^\ell.
$$
(iv) Assume that $(\operatorname{Ric}^M)^\perp \ge K$, and that $(B^N)^\perp \le \kappa $, with $K,\kappa>0$. Then for any holomorphic map $f:M\to N$ with $\dim(f(M))\ge m-1$, $\dim(f(M))=m$. Moreover
$$
\max_{x}\|\Lambda^m \partial f\|_0^2(x) \ge \left(\frac{K}{(m-1)\kappa}\right)^{m}.
$$
In the case $\dim(M)=\dim(N)$, only $(\operatorname{Ric}^N)^{\perp}\le (m-1)\kappa$ is needed. In general $(B^N)^\perp \le \kappa $ can be weakened to $(\operatorname{Ric}^N_m)^{\perp}\le (m-1)\kappa$. Here $(\operatorname{Ric}^N_\ell)^{\perp}$ is the orthogonal Ricci curvature of the curvature tensor $R^N$ restricted to $\ell$-dimensional subspaces.
\end{theorem}
\begin{proof} First observe that under any assumption of the above theorem $M$ is compact. From Lemma \ref{lem:41} and (\ref{eq:31}), part (i) follows. For part (ii), at the point $x_0$ where $\sigma_\ell(x)$ attains its maximum, applying (\ref{eq:31}) to $v=\frac{\partial\ }{\partial z^m}$, we have that
$$
0\ge -\kappa |\lambda_m|^2+K,
$$
which implies that $|\lambda_m|^2\ge \frac{K}{\kappa}$. The claimed estimate then follows from $\sigma_\ell\ge \ell |\lambda_m|^2$.
For part (iii), we apply (\ref{eq:32}) at the point $x_0$ where $\|\Lambda^\ell \partial f\|_0^2(x)$ attains its maximum. In particular we apply it to $v=\frac{\partial\ }{\partial z^\ell}$ and let $\Sigma=\operatorname{span}\{ \frac{\partial\ }{\partial z^1}, \cdots, \frac{\partial\ }{\partial z^\ell}\}$. Hence at $x_0$
$$
0\ge -\operatorname{Ric}^N(x_0, \partial f(\Sigma)) |\lambda_\ell|^2+\operatorname{Ric}^M (x_0, \Sigma).
$$
Hence we derive that $|\lambda_\ell|^2\ge \frac{K}{\kappa}$. The claimed result then follows.
Part (iv) can be proved similarly.
\end{proof}
Part (ii) of the theorem is not as strong as it appears, since $B^\perp>0$ implies that $h^{1,1}(M)=1$. On the other hand, we have the following observation.
\begin{proposition}
Let $M$ be a K\"ahler manifold with $h^{1,1}(M)=1$. Then any holomorphic map $f: M\to N$ with $\dim(f(M))<\dim(M)$ must be a constant map. Hence $g(M)\le 1$ if $\dim(M)\ge 2$. In particular, if the Picard number $\rho(M)=1$ and $S_2^M>0$, any holomorphic map $f: M\to N$ with $\dim(f(M))<\dim(M)$ must be a constant map.
\end{proposition}
\begin{proof}
In fact $f^*\omega_h$, with $\omega_h$ being the K\"ahler form of $N$, is a $d$-closed nonnegative $(1,1)$-form. By the assumption $[f^*\omega_h]$ is proportional to $[\omega_g]$. Hence it must be either zero or a positive multiple of $[\omega_g]$. Since the second case implies that $\dim(f(M))=m$, only the first case can occur, which implies that $f$ is a constant map.
Note that this implies that for any K\"ahler manifold $M$ with $\dim(M)\ge 2$ and $h^{1,1}(M)=1$, the genus $g(M)\le 1$, in view of the result of Catanese (cf. Theorem 1.10 of \cite{Cat}), since otherwise there exists a nonconstant holomorphic map $f: M\to C_g$ with $C_g$ being a Riemann surface of genus $g(M)$. Since the first Chern class map $c_1: H^{1}(M, \mathcal{O}^*)\to \mathcal{H}^{1,1}(M)\cap H^2(M, \mathbb{Z})$ is onto, and $S^M_2>0$ implies that $H^2(M, \mathbb{C})=\mathcal{H}^{1,1}(M)$, the assumption then implies $h^{1,1}(M)=1$. The last result then follows from the first.
\end{proof}
Taking $\kappa\to 0$, part (ii) of Theorem \ref{thm:hoop} also implies that any holomorphic map from a compact manifold with $B^\perp>0$ into one with $B^\perp\le 0$ must be a constant map (cf. Theorem \ref{thm:51}). Given that $B^\perp$ is independent of $H$ and $\operatorname{Ric}$, this does not follow from the Yau-Royden estimate (Theorem \ref{thm-sch-roy}), nor from Theorem \ref{thm:sch1}. Part (iv) provides additional information on compact K\"ahler manifolds with $\operatorname{Ric}^\perp>0$.
\section*{Acknowledgments} {We would like to thank James McKernan (particularly for bringing our attention to the work \cite{Laz}) and Fangyang Zheng for conversations regarding holomorphic maps from $\mathbb{P}^m$. We are also grateful to Yanyan Niu for informing us of \cite{Mu}.}
\begin{thebibliography}{A}
\bibitem{ABCKT} J. Amor\'os, M. Burger, K. Corlette, D. Kotschick, and D. Toledo, \textit{ Fundamental groups of compact K\"ahler manifolds.} Mathematical Surveys and Monographs, \textbf{44}. American Mathematical Society, Providence, RI, 1996.
\bibitem{Aub} T. Aubin, \textit{ Nonlinear analysis on manifolds. Monge-Amp\`ere equations.} Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], \textbf{252}. Springer-Verlag, New York, 1982.
\bibitem{Cat} F. Catanese, \textit{ Moduli and classification of irregular K\"ahler manifolds (and algebraic varieties) with Albanese general type fibrations.} Invent. Math. \textbf{104} (1991), no. 2, 263--289.
\bibitem{CG} J. Cheeger and D. Gromoll, \textit{ The splitting theorem for manifolds of nonnegative Ricci curvature.} J. Differential Geometry \textbf{6} (1971/72), 119--128.
\bibitem{Fede} H. Federer, \textit{ Geometric Measure Theory. } Die Grundlehren der mathematischen Wissenschaften, Band 153 Springer-Verlag New York Inc., New York 1969.
\bibitem{FW} A. Fraser and J. Wolfson, \textit{ The fundamental group of manifolds of positive isotropic curvature and surface groups.} Duke Math. J. \textbf{133} (2006), no. 2, 325--334.
\bibitem{Gu} C. E. Guti\'errez, \textit{ The Monge-Amp\`ere Equation.} 2nd edition. Progress in Nonlinear Differential Equation and Their Applications. \textbf{89} (2016), Birkh\"auser.
\bibitem{Hitchin}
N. Hitchin, \textit{ On the curvature of rational surfaces.} In Differential Geometry ({\em Proc. Sympos. Pure Math., Vol XXVII, Part 2, Stanford University, Stanford, Calif., 1973}), pages 65--80. Amer. Math. Soc., Providence, RI, 1975.
\bibitem{Horn-Johnson} R. Horn and C. Johnson, \textit{ Matrix Analysis.} 2nd Edition. Cambridge University Press, 2013.
\bibitem{Kobayashi-H} S. Kobayashi, \textit{Hyperbolic Complex Spaces.} Springer, New York, 1998.
\bibitem{Laz} R. Lazarsfeld, \textit{Some applications of the theory of positive vector bundles. Complete intersections} (Acireale, 1983), 29–61, Lecture Notes in Math., 1092, Springer, Berlin, 1984.
\bibitem{LW} P. Li and J.-P. Wang, \textit{Comparison theorem for K\"ahler manifolds and positivity of spectrum.} J. Differential Geom. \textbf{69} (2005), no. 1, 43--74.
\bibitem{MM} M. Marcus and H. Minc, \textit{A survey of matrix theory and matrix inequalities.} Reprint of the 1969 edition. Dover Publications, Inc., New York, 1992.
\bibitem{Mok} N. Mok, \textit{ The uniformization theorem for compact K\"ahler manifolds of nonnegative holomorphic bisectional curvature.} J. Differential Geom. \textbf{27} (1988), no. 2, 179--214.
\bibitem{Mu} S. Matsumura, \textit{ On projective manifolds with semi-positive holomorphic sectional curvature.} ArXiv:1811.04182.
\bibitem{NR} T. Napier and M. Ramachandran, \textit{Filtered ends, proper holomorphic mappings of K\"ahler manifolds to Riemann surfaces, and K\"ahler groups.} Geom. Funct. Anal. \textbf{17} (2008), no. 5, 1621--1654.
\bibitem{Ni-1807}L. Ni, \textit{ Liouville theorems and a Schwarz Lemma for holomorphic mappings between K\"ahler manifolds.} ArXiv preprint: 1807.02674.
\bibitem{Ni} L. Ni, \textit{The fundamental group, rational connectedness and the positivity of K\"ahler manifolds.} ArXiv preprint:1902.00974
\bibitem{Ni-Tam} L. Ni and L.-F. Tam, \textit{ Plurisubharmonic functions and the structure of complete K\"ahler manifolds with nonnegative curvature.} J. Differential Geom. \textbf{64} (2003), no. 3, 457--524.
\bibitem{NZ} L. Ni and F. Zheng, \textit{ Comparison and vanishing theorems for K\"ahler manifolds.} Calc. Var. Partial Differential Equations, \textbf{57}(2018), no. 6, Art. 151, 31 pp.
\bibitem{Ni-Zheng2} L. Ni and F. Zheng, \textit{ Positivity and Kodaira embedding theorem.} ArXiv preprint:1804.09696.
\bibitem{Pogo} A.-V. Pogorelov, \textit{ The Minkowski multidimensional problem.} Translated from the Russian by Vladimir Oliker. Introduction by Louis Nirenberg. Scripta Series in Mathematics. V. H. Winston \& Sons, Washington, D.C.; Halsted Press [John Wiley \& Sons], New York-Toronto-London, 1978.
\bibitem{Roy} H. L. Royden, \textit{ The Ahlfors-Schwarz lemma in several complex variables.} Comment. Math. Helv. \textbf{55} (1980), no. 4, 547--558.
\bibitem{Siu} Y.-T. Siu, \textit{ Lecture on Hermitian-Einstein Metrics for Stable Bundles and K\"ahler-Einstein Metrics.} Birkh\"auser, Basel, 1987.
\bibitem{Tsu} Y. Tsukamoto,\textit{ On K\"ahlerian manifolds with positive holomorphic sectional curvature. } Proc. Japan Acad. \textbf{33} (1957), 333--335.
\bibitem{Whit} H.
Whitney, \textit{ Geometric Integration Theory.} Princeton University Press, Princeton, N. J., 1957.
\bibitem{Yau} S.-T. Yau, \textit{ On Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation, I,} Comm. Pure and Appl. Math. \textbf{31} (1978), 339--411.
\bibitem{Yau-sch} S. T. Yau, \textit{ A general Schwarz lemma for K\"ahler manifolds.} Amer. J. Math. \textbf{100} (1978), no. 1, 197--203.
\end{thebibliography}
\end{document}
\begin{document}
\begin{sloppy}
\title{ Iteration complexity analysis of random coordinate descent methods
for $\ell_0$ regularized convex problems }
\begin{abstract}
In this paper we analyze a family of general random block coordinate
descent methods for the minimization of $\ell_0$ regularized
optimization problems, i.e. the objective function is composed of a
smooth convex function and the $\ell_0$ regularization. Our family
of methods covers particular cases such as random block coordinate
gradient descent and random proximal coordinate descent methods. We
analyze necessary optimality conditions for this nonconvex $\ell_0$
regularized problem and devise a separation of the set of local
minima into restricted classes based on approximation versions of
the objective function. We provide a unified analysis of the almost
sure convergence for this family of block coordinate descent
algorithms and prove that, for each approximation version, the limit
points are local minima from the corresponding restricted class of
local minimizers. Under the strong convexity assumption, we prove
linear convergence in probability for our family of methods.
\end{abstract}
\begin{keywords}
$\ell_0$ regularized convex problems, Lipschitz gradient, restricted classes of local minima, random coordinate descent methods, iteration complexity analysis.
\end{keywords}
\pagestyle{myheadings} \thispagestyle{plain} \markboth{A. Patrascu and I. Necoara}{Coordinate descent methods for $\ell_0$ regularized optimization}
\section{Introduction}
\noindent In this paper we analyze the properties of local minima
and devise a family of random block coordinate descent methods for
the following $\ell_0$ regularized optimization problem:
\begin{equation}\label{l0regular}
\min\limits_{x \in \rset^n} F(x) \quad \left(= f(x) +
\norm{x}_{0,\lambda} \right),
\end{equation}
where the function $f$ is smooth and convex and the weighted quasinorm of $x$ is
defined as: \[ \norm{x}_{0,\lambda}= \sum\limits_{i=1}^N
\lambda_i\norm{x_i}_0, \] where $\|x_i\|_0$ counts
the number of nonzero components of the vector $x_i \in
\rset^{n_i}$, the $i$th block component of $x$, and
$\lambda_i \ge 0$ for all $i=1, \dots, N$. Note that in this
formulation we do not impose sparsity on all block components of
$x$, but only on those $i$th blocks for which the corresponding
penalty parameter $\lambda_i>0$. However, in order to avoid the
convex case, intensively studied in the literature, we assume that
there is at least one $i$ such that $\lambda_i>0$.
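\noindent To fix ideas, the following minimal Python sketch (an illustration only, not part of the analysis) evaluates the objective $F$ from \eqref{l0regular} for a least squares loss $f(x)=\frac{1}{2}\norm{Ax-b}^2$ and scalar blocks ($n_i=1$); the data and the penalty weights $\lambda_i$ are chosen arbitrarily.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = np.full(n, 0.5)        # penalty weights lambda_i >= 0 (scalar blocks)

def f(x):
    """Smooth convex part: least squares loss."""
    r = A @ x - b
    return 0.5 * r @ r

def F(x):
    """Full objective: f(x) + sum_i lambda_i * ||x_i||_0."""
    return f(x) + np.sum(lam * (x != 0))

x = rng.standard_normal(n)
x[rng.choice(n, size=5, replace=False)] = 0.0   # a sparse test point
print("F(x) =", F(x))
\end{verbatim}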
\noindent In many applications such as compressed sensing
\cite{BluDav:08,CanTao:04}, sparse support
vector machines \cite{Bah:13}, sparse nonnegative factorization
\cite{Gil:12}, sparse principal component analysis \cite{JouNes:10}
or robust estimation \cite{Kek:13} we deal with a convex
optimization problem for which we would like to obtain an (approximate)
solution, but we also desire a solution which has the additional
property of sparsity (i.e. few nonzero components). The typical
approach for obtaining a sparse minimizer of an optimization problem
involves minimizing the number of nonzero components of the
solution. In the literature for sparse optimization two
formulations are widespread: \textit{(i) the regularized
formulation} obtained by adding an $\ell_0$ regularization term to
the original objective function as in \eqref{l0regular};
\textit{(ii) the sparsity constrained formulation} obtained by
including an additional constraint on the number of nonzero elements
of the variable vector. However, both formulations are hard
combinatorial problems, since solving them exactly would require
trying all possible sparsity patterns in a brute-force manner. Moreover,
there is no clear equivalence between them in the general case.
\noindent Several greedy algorithms have been developed in the last
decade for the sparse linear least squares setting under certain
restricted isometry assumptions \cite{Bah:13,BluDav:08,CanTao:04}.
In particular, the iterative hard thresholding algorithm has gained
a lot of interest lately due to its simple iteration
\cite{BluDav:08}. Recently, in \cite{Lu:12}, a generalization of the
iterative hard thresholding algorithm has been given for general
$\ell_0$ regularized convex cone programming. The author shows
linear convergence of this algorithm for strongly convex objective
functions, while for general convex objective functions the author
considers the minimization over a bounded box set. Moreover, since
there could be an exponential number of local minimizers for the
$\ell_0$ regularized problem, there is no characterization in
\cite{Lu:12} of the local minima at which the iterative hard
thresholding algorithm converges.
Further, in \cite{LuZha:12}, penalty decomposition methods were
devised for both regularized and constrained formulations of sparse
nonconvex problems and convergence analysis was provided for these
algorithms. An analysis of sparsity constrained problems was provided
e.g. in \cite{Bec:12}, where the authors introduced several classes
of stationary points and developed greedy coordinate descent
algorithms converging to different classes of stationary points.
Coordinate descent methods are used frequently to solve sparse
optimization problems
\cite{Bec:12,LuXia:13,NecCli:13,NecNes:13,BanGha:08} since they are
based on the strategy of updating one (block) coordinate of the
vector of variables per iteration using some index selection
procedure (e.g. cyclic, greedy or random). This often reduces
drastically the iteration complexity and memory requirements, making
these methods simple and scalable. There exist numerous papers
dealing with the convergence analysis of this type of methods: for
deterministic index selection see
\cite{HonWan:13,BecTet:13,LuoTse:92}, while for random index
selection see
\cite{LuXia:13,Nes:12,NecCli:13,Nec:13,NecPat:14,PatNec:14,RicTac:12}.
\subsection{Main contribution}
\noindent In this paper we analyze a family of general random block
coordinate descent iterative hard thresholding based methods for
the minimization of $\ell_0$ regularized optimization problems, i.e.
the objective function is composed of a smooth convex function and
the $\ell_0$ regularization. The family of algorithms we
consider takes a very general form, consisting of the minimization
of a certain approximate version of the objective function with respect to one
block of variables at a time, while keeping the remaining blocks
fixed. Such methods are particularly well suited for solving
nonsmooth $\ell_0$ regularized problems since at each iteration they solve an easy low
dimensional subproblem, often in closed form. Our
family of methods covers particular cases such as random block
coordinate gradient descent and random proximal coordinate descent
methods. We analyze necessary optimality conditions for this
nonconvex $\ell_0$ regularized problem and devise a procedure for the separation of
the set of local minima into restricted classes based on
approximation versions of the objective function. We provide a
unified analysis of the almost sure convergence for this family of
random block coordinate descent algorithms and prove that, for each
approximation version, the limit points are local minima from the
corresponding restricted class of local minimizers. Under the strong
convexity assumption, we prove linear convergence in probability for
our family of methods. We also provide numerical experiments which
show the superior behavior of our methods in comparison with the
usual iterative hard thresholding algorithm.
\subsection{Notations and preliminaries}
We consider the space $\rset^n$ composed by column
vectors. For $x,y \in \rset^n$ denote the scalar product by
$\langle x,y \rangle = x^T y$ and the Euclidean norm by
$\|x\|=\sqrt{x^T x}$. We use the same notation $\langle \cdot,\cdot
\rangle$ ($\|\cdot\|$) for scalar product (norm) in spaces of
different dimensions. For any symmetric matrix $A$ we
use $\sigma_{\min}(A)$ to denote its smallest
eigenvalue. We use the notation $[n] = \{1,2,
\dots, n\}$ and $e = [1 \cdots 1]^T \in \rset^n$. \noindent In the
sequel, we consider the following decompositions of the variable
dimension and of the $n \times n$ identity matrix:
\begin{equation*}
n = \sum\limits_{i=1}^N n_i, \qquad \qquad I_n= \left[ U_1 \dots U_N
\right], \qquad \qquad I_n= \left[ U_{(1)} \dots U_{(n)} \right],
\end{equation*}
where $U_i \in \rset^{n \times n_i}$ and $U_{(j)} \in
\rset^{n}$ for all $i \in [N]$ and $j \in [n]$. If the
index set corresponding to block $i$ is given by $\mathcal{S}_i$,
then $\abs{\mathcal{S}_i} = n_i$. Given $x \in \rset^n$, then
for any $i \in [N]$ and $j \in [n]$, we denote:
\begin{align*}
x_i &= U_i^T x \in \rset^{n_i}, \quad \quad \quad \quad \ \ \nabla_i f(x)= U_i^T \nabla f(x) \in \rset^{n_i},\\
x_{(j)} &= U_{(j)}^T x \in \rset^{}, \quad \quad \quad \quad \ \ \ \nabla_{(j)} f(x)= U_{(j)}^T \nabla f(x) \in \rset^{}.
\end{align*}
\noindent For any vector $x \in \rset^n$, the support of $x$ is
given by $\text{supp}(x)$, which denotes the set of indices corresponding
to the nonzero components of $x$. We denote $\bar{x} = \max\limits_{j
\in \text{supp}(x)} \abs{x_{(j)}}$ and $\underline{x} = \min\limits_{j \in
\text{supp}(x)} \abs{x_{(j)}}$. Additionally, we introduce the following
set of indices:
$$I(x) = \text{supp}(x) \cup \{j \in [n]: \ j \in \mathcal{S}_i, \ \lambda_i=0\}$$
and $I^c(x) = [n] \backslash I(x)$. Given two scalars $p\ge 1, r>0$
and $ x \in \rset^n$, the $p-$ball of radius $r$ and centered in $x$
is denoted by $\mathcal{B}_p(x,r) = \{y \in \rset^n: \; \norm{y-x}_p
< r \}$. Let $I \subseteq [n]$ and denote the subspace of all
vectors $x \in \rset^n$ satisfying $I(x) \subseteq I$ with $S_I$,
i.e. $S_I = \{x \in \rset^n: \; x_{(j)}=0 \quad \forall j \notin I\}$.
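\noindent The index sets introduced above are straightforward to compute; the following Python sketch (for scalar blocks, $n_i=1$, and an arbitrary test vector) returns $\text{supp}(x)$, $I(x)$ and the projection onto a subspace $S_I$.
\begin{verbatim}
import numpy as np

def supp(x, tol=0.0):
    """Indices of the nonzero components of x."""
    return set(np.flatnonzero(np.abs(x) > tol))

def index_set_I(x, lam):
    """I(x) = supp(x) united with {j : lambda_j = 0} (scalar blocks)."""
    return supp(x) | set(np.flatnonzero(lam == 0))

def project_S_I(x, I):
    """Zero out every component of x outside the index set I."""
    y = np.zeros_like(x)
    idx = sorted(I)
    y[idx] = x[idx]
    return y

x = np.array([0.0, 2.0, 0.0, -1.5])
lam = np.array([1.0, 1.0, 0.0, 1.0])
print(supp(x), index_set_I(x, lam), project_S_I(x, {1, 2}))
\end{verbatim}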
\noindent We denote with $f^*$ the optimal value of the convex
problem $f^* = \min_{x \in \rset^n} f(x)$ and its optimal set with
$X^*_f=\left\{x \in \rset^n: \nabla f(x)=0 \right\}$. In this paper
we consider the following assumption on function $f$:
\begin{assumption}\label{assump_grad_1}
The function $f$ has (block) coordinatewise Lipschitz continuous
gradient with constants $L_i>0$ for all $i \in [N]$, i.e. the
convex function $f$ satisfies the following inequality for all $i \in [N]$:
\begin{equation*}
\norm{\nabla_i f(x+U_ih_i) - \nabla_i f(x)} \le L_i \norm{h_i} \quad \forall x \in \rset^n, h_i \in \rset^{n_i}.
\end{equation*}
\end{assumption}
\noindent An immediate consequence of Assumption \ref{assump_grad_1}
is the following relation \cite{Nes:12}:
\begin{equation}\label{Lipschitz_gradient}
f(x+U_ih_i) \le f(x) + \langle \nabla_i f(x), h_i\rangle +
\frac{L_i}{2}\norm{h_i}^2 \quad \forall x \in \rset^n, h_i \in \rset^{n_i}.
\end{equation}
We denote with $\lambda=[\lambda_1 \cdots \lambda_N]^T \in \rset^N$,
$L=[L_1 \cdots L_N]^T $ and $L_f$ the global Lipschitz constant
of the gradient $\nabla f(x)$. In the Euclidean settings, under
Assumption \ref{assump_grad_1} a tight upper bound of the global
Lipschitz constant is $L_f \le \sum_{i=1}^N L_i$ (see \cite[Lemma
2]{Nes:12}). Note that a global inequality based on $L_f$,
similar to \eqref{Lipschitz_gradient}, can be also derived. Moreover,
we should remark that Assumption \ref{assump_grad_1} has been frequently considered in coordinate descent settings (see e.g. \cite{Nec:13,Nes:12,
NecNes:13,NecCli:13,NecPat:14,RicTac:12}).
\section{Characterization of local minima}
\noindent In this section we present the necessary optimality
conditions for problem \eqref{l0regular} and provide a detailed
description of local minimizers. First, we establish necessary
optimality conditions satisfied by any local minimum. Then, we
separate the set of local minima into restricted classes around the
set of global minimizers. The next theorem provides conditions for
obtaining local minimizers of problem \eqref{l0regular}:
\begin{theorem}\label{lemmaaux}
If Assumption \ref{assump_grad_1} holds, then any $z \in \rset^n
\backslash \{0\}$ is a local minimizer of problem \eqref{l0regular}
on the ball $\mathcal{B}_{\infty}(z,r)$, with
$r=\min\left\{\underline{z},
\frac{\underline{\lambda}}{\norm{\nabla f(z)}_1}\right\}$,
if and only if $z$ is a global minimizer of convex problem
$\min\limits_{x \in S_{I(z)}} f(x)$. Moreover, $0$ is a local
minimizer of problem \eqref{l0regular} on the ball
$\mathcal{B}_{\infty} \left(0, \frac{\min_{i \in [N]}
\lambda_i}{\norm{\nabla f(0)}_1} \right)$ provided that $0 \not \in
X_f^*$; otherwise it is a global minimizer of \eqref{l0regular}.
\end{theorem}
\begin{proof}
For the first implication, we assume that $z$ is a local minimizer
of problem \eqref{l0regular} on the open ball
$\mathcal{B}_{\infty}(z,r)$, i.e. we have:
$$f(z) \le f(y) \quad \forall y \in \mathcal{B}_{\infty}(z,r) \cap S_{I(z)}.$$
\noindent Based on Assumption \ref{assump_grad_1} it follows that
$f$ has also global Lipschitz continuous gradient, with constant
$L_f$, and thus we have:
$$f(z) \le f(y) \le f(z) + \langle \nabla f(z),y- z\rangle + \frac{L_f}{2}\norm{y- z}^2
\quad \forall y \in \mathcal{B}_{\infty}(z,r)\cap S_{I(z)}.$$
Taking $\alpha = \min\{\frac{1}{L_f}, \frac{r}{\max\limits_{j \in
I(z)}\abs{\nabla_{(j)} f(z) }}\}$ and $y= z - \alpha \nabla_{I(z)}
f(z)$, we obtain:
$$0 \le \left(\frac{L_f\alpha^2}{2} - \alpha\right)\norm{\nabla_{I(z)} f(z)}^2 \le 0.$$
Therefore, we have $ \nabla_{I(z)} f(z) =0$, which means
that:
\begin{equation}\label{localmin}
z = \arg\min\limits_{x \in S_{I(z)}} f(x).
\end{equation}
\noindent For the second implication we first note that for any $y,
d \in \rset^n$, with $ y \neq 0$ and $\norm{d}_{\infty} <
\underline{y} $, we have:
\begin{equation}\label{positive}
\abs{y_{(i)}+d_{(i)}} \ge \abs{y_{(i)}} - \abs{d_{(i)}} \ge \underline{y} - \norm{d}_{\infty} > 0 \quad \forall i \in \text{supp}(y).
\end{equation}
Clearly, for any $d \in \mathcal{B}_{\infty}(0,r) \backslash
S_{I(y)}$, with $r= \underline{y}$, we have:
\begin{equation*}
\norm{y+d}_{0,\lambda} = \norm{y}_{0,\lambda} + \sum\limits_{i \in I^c(y) \cap \text{supp}(d)} \norm{d_{(i)}}_{0,\lambda} \ge \norm{y}_{0,\lambda} + \underline{\lambda}.
\end{equation*}
\noindent Let $d \in \mathcal{B}_{\infty}(0,r) \backslash S_{I(y)}$,
with $r = \min\left\{ \underline{y},
\frac{\underline{\lambda}}{\norm{\nabla f(y)}_1} \right\}$. The
convexity of the function $f$ and the H\"older inequality lead to:
\begin{align}
\label{parti}
F(y+d) &\ge f(y) + \langle \nabla f(y), d \rangle + \norm{y+d}_{0,\lambda} \nonumber \\
&\ge F(y) - \norm{\nabla f(y)}_1 \norm{d}_{\infty} +
\underline{\lambda} \ge F(y) \quad \forall y \in \rset^n.
\end{align}
We now assume that $z$ satisfies \eqref{localmin}. For any $x \in
\mathcal{B}_{\infty}(z,r) \cap S_{I(z)}$ we have
$\norm{x-z}_{\infty} < \underline{z}$, which by \eqref{positive}
implies that $\abs{x_{(i)}} > 0$ whenever $\abs{z_{(i)}} > 0$.
Therefore, we get:
\begin{equation*}
F(x) = f(x) + \norm{x}_{0,\lambda} \ge f(z) + \norm{z}_{0,\lambda} = F(z),
\end{equation*}
and combining with the inequality \eqref{parti} leads to the second
implication. Furthermore, if $0 \not \in X_f^*$, then $\nabla f(0)
\not =0$. Assuming that $\min_{i \in [N]} \lambda_i>0$, then $F(x)
\geq f(0) + \langle \nabla f(0), x \rangle + \norm{x}_{0,\lambda}
\geq F(0) - \norm{\nabla f(0)}_1 \norm{x}_{\infty} + \min_{i \in
[N]} \lambda_i \geq F(0)$ for all $x \in \mathcal{B}_{\infty}
\left(0, \frac{\min_{i \in [N]} \lambda_i}{\norm{\nabla f(0)}_1}
\right)$. If $0 \in X_f^*$, then $\nabla f(0) =0$ and thus $F(x)
\geq f(0) + \langle \nabla f(0), x \rangle + \norm{x}_{0,\lambda}
\geq F(0)$ for all $x \in \rset^n$.
\end{proof}
\noindent From Theorem \ref{lemmaaux} we conclude that any vector
$z \in \rset^n$ is a local minimizer of problem \eqref{l0regular} if
and only if the following equality holds:
$$ \nabla_{I(z)} f(z)=0.$$
We denote with $\mathcal{T}_f$ the set of all local minima of
problem \eqref{l0regular}, i.e.
\[ \mathcal{T}_f =\left\{ z \in \rset^n:\; \nabla_{I(z)} f(z)=0 \right \}, \]
and we call them \textit{basic local minimizers}. It is not hard to
see that when the function $f$ is strongly convex, the number of
basic local minima of problem \eqref{l0regular} is finite,
otherwise we might have an infinite number of basic local
minimizers.
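\noindent Verifying numerically whether a given point is a basic local minimizer thus amounts to checking that the gradient vanishes on $I(z)$. A minimal Python sketch, assuming scalar blocks and an (arbitrarily generated) least squares $f$, is given below; the restricted minimizer on a chosen support is computed by solving the corresponding small least squares problem, as in case $(a)$ below.
\begin{verbatim}
import numpy as np

def is_basic_local_min(z, grad_f, lam, tol=1e-8):
    """Check nabla_{I(z)} f(z) = 0 with I(z) = supp(z) U {j: lambda_j = 0}."""
    g = grad_f(z)
    I = (np.abs(z) > tol) | (lam == 0)
    return bool(np.all(np.abs(g[I]) <= tol))

# Illustration with f(x) = 0.5*||Ax - b||^2 and randomly generated data.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5)); b = rng.standard_normal(8)
lam = np.ones(5)
grad_f = lambda x: A.T @ (A @ x - b)

I = [0, 2]                      # a chosen support
z = np.zeros(5)
z[I] = np.linalg.lstsq(A[:, I], b, rcond=None)[0]
print(is_basic_local_min(z, grad_f, lam))   # expected: True
\end{verbatim}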
\subsection{Strong local minimizers}
\noindent In this section we introduce a family of strong local
minimizers of problem \eqref{l0regular} based on an approximation of
the function $f$. It can be easily seen that finding a basic local
minimizer is a trivial procedure e.g.: $(a)$ if we choose some set
of indices $I \subseteq [n]$ such that $\{j \in [n]: j \in
\mathcal{S}_i, \lambda_i=0\} \subseteq I$, then from Theorem
\ref{lemmaaux} the minimizer of the convex
problem $\min_{x \in S_I} f(x)$ is a basic local minimizer for problem
\eqref{l0regular}; $(b)$ if we minimize the convex function $f$
w.r.t. all blocks $i$ satisfying $\lambda_i=0$, then from Theorem
\ref{lemmaaux} we obtain again some basic local minimizer for
\eqref{l0regular}. This motivates us to introduce more restricted
classes of local minimizers. Thus, we first define an approximation
version of function $f$ satisfying certain assumptions. In
particular, given $i \in [N]$ and $x \in \rset^n$, the convex
function $u_i:\rset^{n_i} \to \rset^{}$ is an upper bound of
function $f(x+U_i(y_i-x_i))$ if it satisfies:
\begin{equation}
\label{u_upperapr}
f(x+U_i(y_i-x_i)) \le u_i(y_i;x) \quad \forall y_i \in \rset^{n_i}.
\end{equation}
\noindent We additionally impose the following assumptions on each
function $u_i$.
\begin{assumption}\label{approximation}
The approximation function $u_i$ satisfies the assumptions: \\
(i) The function $u_i(y_i;x)$ is strictly convex and differentiable
in the first argument, is continuous in the second argument and
satisfies $u_i(x_i;x) = f(x)$ for all
$x \in \rset^n$.\\
(ii) Its gradient in the first argument satisfies $\nabla u_i(x_i;x)
= \nabla_i f(x) \quad
\forall x \in \rset^n$. \\
(iii) For any $x \in \rset^n$, the function $u_i(y_i;x)$ has
Lipschitz continuous gradient in the first argument with constant
$M_i > L_i$, i.e. there exists $M_i>L_i$ such that:
$$\norm{\nabla u_i(y_i;x) - \nabla u_i(z_i;x)} \le M_i \norm{y_i-z_i} \quad
\forall y_i, z_i \in \rset^{n_i}.$$
(iv) There exists $\mu_i$ such that $0< \mu_i \le M_i - L_i$ and
$$u_i(y_i;x) \ge f(x+U_i(y_i-x_i)) + \frac{\mu_i}{2}\norm{y_i-x_i}^2 \quad \forall x \in \rset^n,
y_i \in \rset^{n_i}.$$
\end{assumption}
Note that a similar set of assumptions has been considered in
\cite{HonWan:13}, where the authors derived a general framework for
the block coordinate descent methods on composite convex problems.
Clearly, Assumption \ref{approximation} $(iv)$ implies the upper
bound \eqref{u_upperapr} and in \cite{HonWan:13} this inequality is replaced with
the assumption of strong convexity of $u_i$ in the first argument.
\noindent We now provide several examples of approximation versions
of the objective function $f$ which satisfy Assumption
\ref{approximation}.
\begin{example}
\label{ex_3u} We now provide three examples of approximation
versions for the function $f$. The reader can easily find many
other examples of approximations satisfying Assumption \ref{approximation}. \\
\text{1. Separable quadratic approximation}: given $M \in \rset^N$,
such that $M_i > L_i$ for all $i \in [N]$, we define the
approximation version
$$u_i^q(y_i; x, M_i) = f(x) + \langle \nabla_i f(x), y_i-x_i\rangle +
\frac{M_i}{2}\norm{y_i-x_i}^2.$$
It satisfies Assumption \ref{approximation}, in particular
condition $(iv)$ holds for $\mu_i=M_i - L_i$. This type of
approximations was used by Nesterov for deriving the random
coordinate gradient descent method for solving smooth convex
problems \cite{Nes:12} and further extended to the composite convex
case in \cite{NecCli:13,RicTac:12}.
\noindent \text{2. General quadratic approximation}: given $H_i
\succeq 0$, such that $H_i \succ L_iI_{n_i}$ for all $i \in [N]$,
we define the approximation version
$$u_i^Q(y_i;x,H_i) = f(x) + \langle \nabla_i f(x), y_i-x_i\rangle +
\frac{1}{2}\langle y_i-x_i, H_i(y_i-x_i)\rangle. $$
It satisfies Assumption
\ref{approximation}, in particular condition $(iv)$ holds for
$\mu_i = \sigma_{\min}(H_i - L_i I_{n_i})$ (the smallest
eigenvalue). This type of approximations was used by Luo, Yun and
Tseng in deriving the greedy coordinate descent method based on the
Gauss-Southwell rule for solving composite convex problems
\cite{LuoTse:92,LuoTse:93,TseYun:09}.
\noindent \text{3. Exact approximation}: given $\beta \in \rset^N$,
such that $\beta_i > 0$ for all $i \in [N]$, we define the
approximation version
$$u_i^{e}(y_i;x,\beta) = f(x+U_i(y_i-x_i)) + \frac{\beta_i}{2}\norm{y_i-x_i}^2.$$
It satisfies Assumption \ref{approximation}, in particular
condition $(iv)$ holds for $\mu_i = \beta_i$. This type of
approximation functions was used especially in the nonconvex
settings \cite{GriSci:00,HonWan:13}.
\end{example}
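\noindent As a numerical sanity check (for a least squares $f$ with scalar blocks and arbitrarily generated data, in which case $u^q_i$ and $u^Q_i$ coincide), the following Python sketch evaluates the separable quadratic and the exact approximation at random points and verifies the upper bound \eqref{u_upperapr} together with condition $(iv)$ of Assumption \ref{approximation}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 6
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
L = np.sum(A ** 2, axis=0)     # coordinatewise Lipschitz constants L_i = ||A_i||^2
M = 1.5 * L                    # any M_i > L_i
beta = 1e-3 * np.ones(n)

def u_q(i, y_i, x):            # separable quadratic approximation u^q_i
    return f(x) + grad(x)[i] * (y_i - x[i]) + 0.5 * M[i] * (y_i - x[i]) ** 2

def u_e(i, y_i, x):            # exact approximation u^e_i
    z = x.copy(); z[i] = y_i
    return f(z) + 0.5 * beta[i] * (y_i - x[i]) ** 2

for _ in range(100):
    x = rng.standard_normal(n); i = rng.integers(n); y_i = rng.standard_normal()
    z = x.copy(); z[i] = y_i
    fz = f(z)
    assert u_q(i, y_i, x) >= fz + 0.5 * (M[i] - L[i]) * (y_i - x[i]) ** 2 - 1e-9
    assert u_e(i, y_i, x) >= fz + 0.5 * beta[i] * (y_i - x[i]) ** 2 - 1e-9
print("upper bound and condition (iv) hold on the sampled points")
\end{verbatim}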
\noindent Based on each approximation function $u_i$ satisfying
Assumption \ref{approximation}, we introduce a class of restricted
local minimizers for our nonconvex optimization problem
\eqref{l0regular}.
\begin{definition}\label{local_min_general}
For any set of approximation functions $u_i$ satisfying Assumption
\ref{approximation}, a vector $z$ is called a \textit{$u$-strong
local minimizer} for problem \eqref{l0regular} if it satisfies:
$$ F(z) \le \min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda} \quad \forall i \in [N].$$
Moreover, we denote the set of strong local minima, corresponding to
the approximation functions $u_i$, with $\mathcal{L}_u$.
\end{definition}
\noindent It can be easily seen that \[ \min_{y_i \in \rset^{n_i}}
u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda}
\overset{y_i=z_i}{\leq} u_i(z_i;z) + \norm{z}_{0,\lambda} = F(z) \]
and thus a $u$-strong local minimizer $z \in \mathcal{L}_u$ has the
property that each block $z_i$ is a fixed point of the operator
defined by the minimizers of the function $u_i(y_i;z) +
\lambda_i\norm{y_i}_0$, i.e. we have for all $i \in [N]$:
$$z_i = \arg\min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) +
\lambda_i\norm{y_i}_0.$$
\begin{theorem}
Let the set of approximation functions $u_i$ satisfy Assumption
\ref{approximation}. Then any $u$-strong local minimizer is a local
minimum of problem \eqref{l0regular}, i.e. the following inclusion
holds:
\[ \mathcal{L}_u \subseteq \mathcal{T}_f. \]
\end{theorem}
\begin{proof}
From Definition \ref{local_min_general} and Assumption
\ref{approximation} we have:
\begin{align*}
F(z) & \le \min\limits_{y_i \in \rset^{n_i}} u_i(y_i;z) + \norm{z+ U_i(y_i-z_i)}_{0,\lambda}\\
&\le \min\limits_{y_i \in \rset^{n_i}} u_i(z_i;z) + \langle \nabla u_i (z_i;z), y_i-z_i \rangle + \frac{M_i}{2}\norm{y_i-z_i}^2+ \norm{z+ U_i(y_i-z_i)}_{0,\lambda}\\
&= \min\limits_{y_i \in \rset^{n_i}} F(z) + \langle \nabla_i f(z), y_i-z_i \rangle + \frac{M_i}{2}\norm{y_i-z_i}^2+ \lambda_i(\norm{y_i}_0-\norm{z_i}_{0})\\
&\le F(z) + \langle \nabla_i f(z), h_i \rangle +
\frac{M_i}{2}\norm{h_i}^2+
\lambda_i(\norm{z_i+h_i}_0-\norm{z_i}_{0})
\end{align*}
for all $h_i \in \rset^{n_i}$ and $i \in [N]$. Choosing now $h_i$
as follows:
\[ h_{i} = -\frac{1}{M_i}U_{(j)}\nabla_{(j)} f(z) \;\;\; \text{for some} \;\; j \in I(z)
\cap \mathcal{S}_i,\] we have from the definition of $I(z)$ that
\[ \lambda_i(\norm{z_i+h_i}_0-\norm{z_i}_{0}) \leq 0 \]
and thus $0 \leq -\frac{1}{2M_i} \|\nabla_{(j)} f(z) \|^2$ or
equivalently $\nabla_{(j)} f(z) = 0$. Since this holds for any $j
\in I(z) \cap \mathcal{S}_i$, it follows that $z$ satisfies
$\nabla_{I(z)} f(z) = 0$. Using now Theorem \ref{lemmaaux} we obtain
our statement.
\end{proof}
\noindent For the three approximation versions given in Example
\ref{ex_3u} we obtain explicit expressions for the corresponding
u-strong local minimizers. In particular, for some $M \in
\rset^N_{++}$ and $i \in [N]$, if we consider the previous separable
quadratic approximation $u_i^q(y_i;x,M_i)$, then any strong local
minimizer $z \in \mathcal{L}_{u^q}$ satisfies the following
relations:
\begin{enumerate}
\item[\textit{(i)}] $\nabla_{I(z)} f(z) = 0$ and additionally
\item[\textit{(ii)}] for all $i \in [N]$ and $j \in \mathcal{S}_i$:
$\begin{cases} \abs{\nabla_{(j)} f(z)} \le \sqrt{2\lambda_{i} M_{i} }, &\text{if} \ z_{(j)}=0, \\
\abs{z_{(j)}} \ge \sqrt{\frac{2\lambda_{i}}{M_{i}}}, &\text{if} \ z_{(j)} \neq 0.
\end{cases}$
\end{enumerate}
The relations given in $(ii)$ can be derived based on the separable
structure of the approximation $u_i^q(y_i;x,M_i)$ and of the
quasinorm $\|\cdot\|_0$ using similar arguments as in Lemma 3.2 from \cite{Lu:12}. For completeness, we present the main steps in the derivation. First, it is clear that any $z \in \mathcal{L}_{u^q}$ satisfies:
\begin{equation}\label{u^q_argmin}
z_{(j)} = \arg\min_{y_{(j)} \in \rset^{}} \nabla_{(j)} f(z) (y_{(j)}-z_{(j)}) + \frac{M_i}{2}\abs{y_{(j)}-z_{(j)}}^2
+ \lambda_i \norm{y_{(j)}}_0
\end{equation}
for all $j \in \mathcal{S}_i $ and $ i\in [N]$. On the other hand, since the optimal point of each of the previous one-dimensional problems is either $0$ or nonzero, we have:
\begin{align*}
&\min_{y_{(j)} \in \rset^{}} \nabla_{(j)} f(z) (y_{(j)}-z_{(j)}) + \frac{M_i}{2}\abs{y_{(j)}-z_{(j)}}^2 + \lambda_i \norm{y_{(j)}}_0 \\
&=\min \left\{ \frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2, \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \right\}.
\end{align*}
If $z_{(j)}=0$, then from the fixed-point relation \eqref{u^q_argmin} and the expression for its optimal value we have $\frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \leq \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2$ and thus
$\abs{\nabla_{(j)} f(z)} \le \sqrt{2\lambda_i M_i}$. Otherwise, $j \in I(z)$ and by Theorem \ref{lemmaaux} we have $\nabla_{(j)} f(z) =0$; combining this with $\frac{M_i}{2}\abs{z_{(j)}- \frac{1}{M_i}\nabla_{(j)} f(z)}^2 - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2 \geq \lambda_i - \frac{1}{2M_i}\abs{\nabla_{(j)} f(z)}^2$ leads to $\abs{z_{(j)}} \ge \sqrt{\frac{2\lambda_i}{M_i}}$. A similar derivation applies to the general quadratic approximation $u_i^Q(y_i;x,H_i)$ provided that $H_i$ is a diagonal matrix. For a general matrix $H_i$, the corresponding strong local minimizers are fixed points of small $\ell_0$ regularized quadratic problems of dimension $n_i$.
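\noindent The above conditions are easy to test numerically. The short Python sketch below (scalar blocks, a least squares $f$ with arbitrary data, chosen only for illustration) checks relations $(i)$--$(ii)$ for a candidate point $z$; for instance, for $z=0$ the test reduces to the threshold condition on $\abs{\nabla_{(j)} f(0)}$.
\begin{verbatim}
import numpy as np

def is_uq_strong(z, grad_f, lam, M, tol=1e-8):
    """Conditions (i)-(ii) for u^q-strong local minimizers (scalar blocks):
    gradient zero on I(z); |grad_j f(z)| <= sqrt(2*lam_j*M_j) if z_j = 0;
    |z_j| >= sqrt(2*lam_j/M_j) if z_j != 0."""
    g = grad_f(z)
    nz = np.abs(z) > tol                 # supp(z)
    zf = (~nz) & (lam > 0)               # zero entries with positive penalty
    I = nz | (lam == 0)                  # index set I(z)
    cond_i  = np.all(np.abs(g[I]) <= tol)
    cond_z  = np.all(np.abs(g[zf]) <= np.sqrt(2 * lam[zf] * M[zf]) + tol)
    cond_nz = np.all(np.abs(z[nz]) >= np.sqrt(2 * lam[nz] / M[nz]) - tol)
    return bool(cond_i and cond_z and cond_nz)

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 6)); b = rng.standard_normal(10)
lam = np.ones(6); M = 1.5 * np.sum(A ** 2, axis=0)
grad_f = lambda x: A.T @ (A @ x - b)
print(is_uq_strong(np.zeros(6), grad_f, lam, M))
\end{verbatim}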
\noindent Finally, for some $\beta \in \rset^N_{++}$ and $i \in
[N]$, considering the exact approximation $u_i^{e}(y_i;x,\beta_i)$
we obtain that any corresponding strong local minimizer $z \in
\mathcal{L}_{u^e}$ satisfies:
\begin{equation*}
z_i = \arg \min\limits_{h_i \in \rset^{n_i}} F(z+ U_ih_i) + \frac{\beta_{i}}{2}\norm{h_i}^2 \quad \forall i \in [N].
\end{equation*}
\begin{theorem}\label{inclusions}
Let Assumption \ref{assump_grad_1} hold and $u^1, u^2$ be two approximation functions satisfying Assumption \ref{approximation}.
Additionally, let
$$u^1(y_i;x) \le u^2(y_i;x), \quad \forall y_i \in \rset^{n_i}, x \in \rset^n, i \in [N].$$
Then the following inclusions are valid:
$$ \mathcal{X}^* \subseteq \mathcal{L}_{u^1} \subseteq \mathcal{L}_{u^2} \subseteq \mathcal{T}_f.$$
\end{theorem}
\begin{proof}
Assume $z \in \mathcal{X}^*$, i.e. it is a global minimizer of our
original nonconvex problem \eqref{l0regular}. Then, we have:
\begin{align*}
F(z) &\le \min\limits_{y_i \in \rset^{n_i}} F(z + U_i(y_i-z_i)) \\
& = \min\limits_{y_i \in \rset^{n_i}} f(z+U_i(y_i-z_i)) + \lambda_i \norm{y_i}_0 + \sum\limits_{j \neq i}\lambda_j \norm{z_j}_0\\
&\le \min\limits_{y_i \in \rset^{n_i}} u_i^1(y_i;z)
+\norm{z+U_i(y_i-z_i)}_{0,\lambda} \quad \forall i\in [N],
\end{align*}
and thus $z \in \mathcal{L}_{u^1}$, i.e. we proved that
$\mathcal{X}^* \subseteq \mathcal{L}_{u^1}$. Therefore, any class of
$u$-strong local minimizers contains the global minima of problem
\eqref{l0regular}.
\noindent Further, let us take $z \in \mathcal{L}_{u^1}$. Using
Definition \ref{local_min_general} and defining \[ t_i =
\arg\min\limits_{y_i \in \rset^{n_i}} u_i^2(y_i;z) +
\norm{z+U_i(y_i-z_i)}_{0,\lambda}, \] we get:
\begin{align*}
F(z) &\le \min\limits_{y_i \in \rset^{n_i}} u_i^1(y_i;z) + \norm{z+U_i(y_i-z_i)}_{0,\lambda}\\
&\le u_i^1(t_i;z) + \norm{z+U_i(t_i-z_i)}_{0,\lambda}\\
&\le u_i^2(t_i;z) + \norm{z+U_i(t_i-z_i)}_{0,\lambda}\\
&= \min\limits_{y_i \in \rset^{n_i}} u_i^2 (y_i;z) +
\norm{z+U_i(y_i-z_i)}_{0,\lambda}.
\end{align*}
This shows that $z \in \mathcal{L}_{u^2}$ and thus
$\mathcal{L}_{u^1} \subseteq \mathcal{L}_{u^2}$.
\end{proof}
\noindent Note that if the following inequalities hold \[ (L_i +
\beta_i) I_{n_i} \preceq H_i \preceq M_i I_{n_i} \quad \forall i \in
[N],\] using the Lipschitz gradient relation
\eqref{Lipschitz_gradient}, we obtain that
$$u_i^{e}(y_i;x,\beta_i) \le u_i^{Q}(y_i;x,H_i) \leq u_i^{q}(y_i;x,M_i) \quad \forall x \in \rset^n,
y_i \in \rset^{n_i}.$$ \noindent Therefore, from Theorem
\ref{inclusions} we observe that $u^{q} \ (u^{Q})$-strong local
minimizers for problem \eqref{l0regular} are included in the class
of all basic local minimizers $\mathcal{T}_f$. Thus, designing an
algorithm which converges to a local minimum from
$\mathcal{L}_{u^q}$ ($\mathcal{L}_{u^Q}$) will be of interest.
Moreover, $u^e$-strong local minimizers for problem
\eqref{l0regular} are included in the class of all $u^{q} \
(u^Q)$-strong local minimizers. Thus, designing an algorithm which
converges to a local minimum from $\mathcal{L}_{u^{e}}$ will be of
interest. To illustrate the relationships between the previously
defined classes of restricted local minima and see how much they are
related to global minima of \eqref{l0regular}, let us consider an
example.
\begin{example}
\label{classes_points} We consider the least square settings $f(x) =
\norm{Ax-b}^2$, where $A \in \rset^{m \times n}$ and $b \in \rset^m$
satisfying:
\begin{equation*}
A = \begin{bmatrix}
1 & \alpha_1 &\cdots &\alpha_1^{n-1} \\
1 &\alpha_2 &\cdots &\alpha_2^{n-1} \\
1 &\alpha_3 &\cdots &\alpha_3^{n-1} \\
1 &\alpha_4 &\cdots &\alpha_4^{n-1} \\
\end{bmatrix}
+ \left[p I_4 \quad O_{4,n-4} \right], \qquad b=q e,
\end{equation*}
with $e \in \rset^{4}$ the vector having all entries $1$. We choose
the following parameter values: $\alpha = [1 \ 1.1 \ 1.2 \ 1.3]^T,
n=7, p=3.3, q=25, \lambda=1$ and $\beta_i=0.0001$ for all $i\in
[n]$. We further consider the scalar case, i.e. $n_i =1$ for all
$i$. In this case we have that $u_i^q=u_i^Q$, i.e. the separable and
general quadratic approximation versions coincide. The results are
given in Table \ref{local_min}. From $128$ possible local minima, we
found $19$ local minimizers in $\mathcal{L}_{u^{q}}$ given by
$u^q_i(y_i;x,L_f)$, and only $6$ local minimizers in
$\mathcal{L}_{u^q}$ given by $u^q_i(y_i;x,L_i)$. Moreover, the class
of $u^{e}$-strong local minima $\mathcal{L}_{u^{e}}$ given by
$u^e_i(y_i;x,\beta_i)$ contains only one vector which is also the
global optimum of problem \eqref{l0regular}, i.e. in this case
$\mathcal{L}_{u^{e}} = \mathcal{X}^*$. From Table \ref{local_min} we
can clearly see that the newly introduced classes of local
minimizers are much more restricted (in the sense of having a small
number of elements, close to that of the set of global minimizers)
than the class of basic local minimizers that is much larger.
\setlength{\extrarowheight}{5pt}
\begin{center}
\begin{table}[ht]
\begin{center}\caption{Strong local minima distribution on a least square example.}
\label{local_min}
\begin{tabular}{|c|c|c|c|c|}
\hline \textbf{Class of local minima} & $\mathcal{T}_f$ &
$\overset{\mathcal{L}_{u^{q}}}{u^q_i(y_i;x,L_f)}$
& $\overset{\mathcal{L}_{u^{q}}}{u^q_i(y_i;x,L_i)}$ & $\overset{\mathcal{L}_{u^{e}}}{u^e_i(y_i;x,\beta_i)}$ \\
\hline
\textbf{Number of local minima} & 128 & 19 & 6 & 1 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{center}
\end{example}
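\noindent The counts reported in Table \ref{local_min} can be explored by brute force: for every support one solves the corresponding restricted least squares problem (giving a representative basic local minimizer) and then tests the threshold conditions of the previous subsection. The Python sketch below follows the construction of this example; it is meant only as an illustration of the enumeration procedure, and the resulting counts depend on the chosen representatives and tolerances.
\begin{verbatim}
import numpy as np
from itertools import combinations

# Data as in the example above: f(x) = ||Ax - b||^2, scalar blocks.
alpha = np.array([1.0, 1.1, 1.2, 1.3])
n, p, q = 7, 3.3, 25.0
A = np.vander(alpha, n, increasing=True)    # rows [1, a_k, ..., a_k^{n-1}]
A[:, :4] += p * np.eye(4)
b = q * np.ones(4)
lam = np.ones(n)

grad = lambda x: 2.0 * A.T @ (A @ x - b)
L = 2.0 * np.sum(A ** 2, axis=0)            # coordinatewise Lipschitz constants
L_f = 2.0 * np.linalg.norm(A, 2) ** 2       # global Lipschitz constant

def strong_ok(z, M, tol=1e-6):
    """Threshold conditions (i)-(ii) for a u^q-strong local minimizer."""
    g, nz = grad(z), np.abs(z) > tol
    return bool(np.all(np.abs(g[nz]) <= tol)
                and np.all(np.abs(g[~nz]) <= np.sqrt(2 * lam[~nz] * M[~nz]) + tol)
                and np.all(np.abs(z[nz]) >= np.sqrt(2 * lam[nz] / M[nz]) - tol))

basic, strong_Lf, strong_Li = 0, 0, 0
for k in range(n + 1):
    for I in combinations(range(n), k):
        z = np.zeros(n)
        if k > 0:
            z[list(I)] = np.linalg.lstsq(A[:, list(I)], b, rcond=None)[0]
        basic += 1                           # one representative per support
        strong_Lf += strong_ok(z, np.full(n, L_f))
        strong_Li += strong_ok(z, L)
print(basic, strong_Lf, strong_Li)
\end{verbatim}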
\section{Random coordinate descent type methods}
In this section we present a family of random block coordinate descent methods
suitable for solving the class of problems \eqref{l0regular}. The family of algorithms we consider takes a very general form, consisting of the minimization of a certain approximate
version of the objective function with respect to one block of variables at a time, while keeping the
rest of the blocks fixed. Thus, these algorithms combine an iterative hard thresholding scheme with a general random coordinate descent method, and they are particularly well suited for solving nonsmooth $\ell_0$ regularized problems since they solve an easy low-dimensional subproblem at each iteration, often in closed form. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods.
\noindent Let $x \in \rset^n$ and $i \in
[N]$. Then, we introduce the following \textit{thresholding map} for a given approximation version $u$ satisfying Assumption \ref{approximation}:
\begin{align*}
T^{u}_i(x) &=
\arg\min\limits_{y_i \in \rset^{n_i}} u_i(y_i;x) + \lambda_i\norm{y_i}_{0}.
\end{align*}
\noindent In order to find a local minimizer of problem
\eqref{l0regular}, we introduce the family of \textit{random block coordinate
descent iterative hard thresholding} (RCD-IHT) methods, whose
iteration is described as follows:
\begin{algorithm}{{\bf RCD-IHT}}
\begin{itemize}
\item[1.] Choose $ x^0 \in \rset^n$ and approximation version $u$ satisfying Assumption \ref{approximation}. For $k \ge 0$ do:
\item[2.] Choose a (block) coordinate $i_k \in [N]$ with uniform probability
\item[3.] Set $x^{k+1}_{i_k} = T^{u}_{i_k}(x^k)$ and $x^{k+1}_i=x^k_i \;\; \forall i \neq i_k$.
\end{itemize}
\end{algorithm}
\noindent Note that our algorithm depends directly on the
choice of the approximation $u$, and the operator
$T^{u}_{i}(x)$ can in general be computed easily, sometimes even in closed form.
For example, when $u_i(y_i;x) = u_i^q(y_i;x,M_i)$ and $\nabla_{i_k}
f(x^k)$ is available, we can easily compute the closed form solution
of $T^{u}_{i_k}(x^k)$ as in the iterative hard thresholding schemes
\cite{Lu:12}. Indeed, if we define $\Delta^i(x) \in \rset^{n_i}$ as
follows:
\begin{align}
\label{deltaq} (\Delta^i(x))_{(j)} = \frac{M_i}{2} \abs{x_{(j)}-
(1/M_i)\nabla_{(j)} f(x)}^2,
\end{align}
then the iteration of (RCD-IHT) method becomes:
\begin{equation*}
x^{k+1}_{(j)} =
\begin{cases}
x^k_{(j)} - \frac{1}{M_{i_k}}\nabla_{(j)}f(x^k), & \text{if} \quad
(\Delta^{i_k}(x^k))_{(j)}\ge \lambda_{i_k}\\
0, &\text{if} \quad (\Delta^{i_k}(x^k))_{(j)}\le \lambda_{i_k},
\end{cases}
\end{equation*}
for all $j \in \mathcal{S}_{i_k}$. Note that if at some iteration
$\lambda_{i_k}=0$, then the iteration of algorithm (RCD-IHT) is
identical with the iteration of the usual \textit{random block
coordinate gradient descent method} \cite{NecCli:13,Nes:12}.
Further, our algorithm has, in this case, similarities with the iterative hard
thresholding algorithm (IHTA) analyzed in \cite{Lu:12}. For
completeness, we also present the algorithm (IHTA).
\begin{algorithm}{{IHTA}} \cite{Lu:12}
\begin{itemize}
\item[1.] Choose $x^0 \in \rset^n$ and $M_f > L_f$. For $k \ge 0$ do:
\item[2.] $x^{k+1} = \arg\min_{y \in \rset^{n}} f(x^k) +
\langle \nabla f(x^k), y - x^k \rangle + \frac{M_f}{2}\norm{ y -
x^k}^2 + \norm{y}_{0,\lambda} $,
\end{itemize}
\end{algorithm}
or equivalently for each component we have the update:
\begin{equation*}
x^{k+1}_{(j)} =
\begin{cases}
x^k_{(j)} - \frac{1}{M_{f}}\nabla_{(j)}f(x^k), & \text{if} \quad
\frac{M_f}{2} \abs{x^k_{(j)} - \frac{1}{M_{f}}
\nabla_{(j)}f(x^k)}^2 \ge \lambda_{i}\\
0, &\text{if} \quad \frac{M_f}{2} \abs{x^k_{(j)} - \frac{1}{M_{f}}\nabla_{(j)}f(x^k)}^2 \le
\lambda_{i},
\end{cases}
\end{equation*}
for all $j \in \mathcal{S}_i$ and $i \in [N].$ Note that the
arithmetic complexity of computing the next iterate $x^{k+1}$ in
(RCD-IHT), once $\nabla_{i_k}f(x^k)$ is known, is of order
$\mathcal{O}(n_{i_k})$, which is much lower than the arithmetic
complexity per iteration $\mathcal{O}(n)$ of (IHTA) for $N \gg 1$,
which additionally requires the computation of the full gradient $\nabla
f(x^k)$. A similar derivation applies to the
general quadratic approximation $u_i^Q(y_i;x,H_i)$ provided that
$H_i$ is a diagonal matrix. For a general matrix $H_i$, the
corresponding algorithm requires solving small $\ell_0$ regularized
quadratic problems of dimension $n_i$.
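\noindent To make the iteration concrete, the following Python sketch implements (RCD-IHT) with the separable quadratic approximation $u^q_i(y_i;x,M_i)$ in the scalar case, using the componentwise update above; the least squares data are generated arbitrarily and serve only as an illustration. Along the iterations the objective values $F(x^k)$ are nonincreasing, in agreement with the descent property established below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
m, n = 30, 12
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
lam = 0.2 * np.ones(n)            # lambda_i > 0 for all i (scalar blocks)
L = np.sum(A ** 2, axis=0)        # L_i = ||A_i||^2 for f(x) = 0.5*||Ax-b||^2
M = 1.1 * L                       # M_i > L_i

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
F = lambda x: f(x) + np.sum(lam * (x != 0))

x = np.zeros(n)
for k in range(2000):
    i = rng.integers(n)                  # block chosen with uniform probability
    g_i = A[:, i] @ (A @ x - b)          # nabla_i f(x^k)
    t = x[i] - g_i / M[i]                # coordinate gradient step
    # hard thresholding: keep t if (M_i/2) t^2 >= lambda_i, otherwise set to 0
    x[i] = t if 0.5 * M[i] * t * t >= lam[i] else 0.0
print("F(x) =", F(x), " support size =", int(np.sum(x != 0)))
\end{verbatim}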
\noindent Finally, in the particular case when we consider the exact
approximation $u_i(y_i;x)=u_i^{e}(y_i;x,\beta_i)$, at each
iteration of our algorithm we need to perform an exact minimization
of the objective function $f$ w.r.t. one randomly chosen (block)
coordinate. If $\lambda_{i_k} = 0$, then the iteration of algorithm
(RCD-IHT) requires solving a small dimensional subproblem with a
strongly convex objective function as in the classical
\textit{proximal block coordinate descent method} \cite{HonWan:13}.
In the case when $\lambda_{i_k}>0$ and $n_i > 1$, this subproblem is
nonconvex and usually hard to solve. However, for certain particular
cases of the function $f$ and $n_i = 1$ (i.e. scalar case $n=N$), we
can easily compute the solution of the small dimensional subproblem
in algorithm (RCD-IHT). Indeed, for $x \in \rset^n$ let us define:
\begin{align}
\label{deltae} v^i(x) & = x + U_i h_i(x), \ \text{where} \ h_i(x) =
\arg\min\limits_{h_i \in \rset^{}}
f(x + U_ih_i) + \frac{\beta_i}{2}\norm{h_i}^2 \nonumber\\
\Delta^i(x) &= f(x-U_ix_i) + \frac{\beta_i}{2}\norm{x_i}^2 -
f(v^{i}(x)) - \frac{\beta_i}{2}\norm{(v^{i}(x))_i - x_i}^2 \quad
\forall i \in [n].
\end{align}
Then, it can be seen that the iteration of (RCD-IHT) in the scalar
case for the exact approximation $u_i^{e}(y_i;x,\beta_i)$ has the
following form:
\begin{equation*}
x^{k+1}_{i_k} =
\begin{cases}
(v^{i_k}(x^k))_{i_k}, \ &\text{if} \ \Delta^{i_k}(x^k) \ge \lambda_{i_k} \\
0 , \ &\text{if} \ \Delta^{i_k}(x^k) \le \lambda_{i_k}.
\end{cases}
\end{equation*}
In general, if the function $f$ satisfies Assumption
\ref{assump_grad_1}, computing $v^{i_k}(x^k)$ at each iteration of
(RCD-IHT) requires the minimization of an unidimensional convex
smooth function, which can be efficiently performed using
unidimensional search algorithms. Let us analyze the least squares
settings in order to highlight the simplicity of the iteration of
algorithm (RCD-IHT) in the scalar case for the approximation
$u_i^{e}(y_i;x,\beta_i)$.
\begin{example}
Let $A \in \rset^{m \times n}, b \in \rset^m$ and $f(x) =
\frac{1}{2}\norm{Ax-b}^2$. In this case (recall that we consider
$n_i=1$ for all $i$) we have the following expression for
$\Delta^{i}(x)$:
$$\Delta^{i}(x) =\frac{1}{2}\norm{r-A_{i}x_{i}}^2+\frac{\beta_{i}}{2}\norm{x_{i}}^2
- \frac{1}{2}\left\lVert
\left(I_m-\frac{A_{i}A_{i}^T}{\norm{A_{i}}^2+\beta_{i}}\right)r\right\rVert^2-\frac{\beta_{i}}{2}\left
\lVert \frac{A_{i}^Tr}{\norm{A_{i}}^2+\beta_{i}}\right\rVert^2,$$
where $r = Ax-b$. Under these circumstances, the iteration of
(RCD-IHT) has the following closed form expression:
\begin{equation}
x^{k+1}_{i_k}=
\begin{cases}
x^k_{i_k} - \frac{A_{i_k}^Tr^k}{\norm{A_{i_k}}^2+\beta_{i_k}}, \
&\text{if} \
\Delta^{i_k}(x^k) \ge \lambda_{i_k} \\
0 , \ &\text{if} \ \Delta^{i_k}(x^k) \le \lambda_{i_k}.
\end{cases}
\end{equation}
\end{example}
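\noindent A corresponding Python sketch of the (RCD-IHT) iteration with the exact approximation $u^{e}_i(y_i;x,\beta_i)$ in this least squares scalar setting is given below; it uses the closed form expressions for $h_i(x)$, $\Delta^{i}(x)$ and the update, with arbitrarily generated data serving only as an illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 12
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
lam = 0.2 * np.ones(n)
beta = 1e-4 * np.ones(n)
col2 = np.sum(A ** 2, axis=0)                # ||A_i||^2

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
F = lambda x: f(x) + np.sum(lam * (x != 0))

x = np.zeros(n)
for k in range(2000):
    i = rng.integers(n)
    r = A @ x - b                            # current residual
    h = -A[:, i] @ r / (col2[i] + beta[i])   # exact increment h_i(x)
    zeroed = r - A[:, i] * x[i]              # residual with x_i set to 0
    moved = r + A[:, i] * h                  # residual after the exact step
    # Delta^i(x): decrease of u^e_i from y_i = 0 to its exact minimizer
    delta = (0.5 * zeroed @ zeroed + 0.5 * beta[i] * x[i] ** 2
             - 0.5 * moved @ moved - 0.5 * beta[i] * h ** 2)
    x[i] = x[i] + h if delta >= lam[i] else 0.0
print("F(x) =", F(x), " support size =", int(np.sum(x != 0)))
\end{verbatim}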
\noindent In the sequel we use the following notations for the
entire history of index choices, the expected value of objective
function $f$ w.r.t. the entire history and for the support of the
sequence $x^k$:
\begin{equation*}
\xi^k = \{i_0, \dots, i_{k-1} \}, \qquad f^k=\mathbb{E}[f(x^k)], \qquad I^k=I(x^k).
\end{equation*}
\noindent Due to the randomness of algorithm (RCD-IHT),
at any iteration $k$ with $\lambda_{i_k} > 0$, the
sequence $I^k$ changes if one of the following situations holds
for some $j \in \mathcal{S}_{i_k}$:
\begin{align*}
(i)& \ x^k_{(j)} = 0 \ \text{and} \ (T^u_{i_k}(x^k))_{(j)} \neq 0 \\
(ii)& \ x^k_{(j)} \neq 0 \ \text{and} \ (T^u_{i_k}(x^k))_{(j)} = 0.
\end{align*}
\noindent In other words, at a given iteration $k$ with
$\lambda_{i_k}>0$, we expect no change in the sequence $I^k$ of
algorithm (RCD-IHT) if there is no index $j \in \mathcal{S}_{i_k}$
satisfying the above corresponding set of relations $(i)$ and
$(ii)$. We define the notion of \textit{change of $I^k$ in
expectation} at iteration $k$, for algorithm (RCD-IHT) as follows:
let $x^k$ be the sequence generated by (RCD-IHT), then the sequence
$I^k=I(x^k)$ changes in expectation if the following situation
occurs:
\begin{equation}\label{expectation}
\mathbb{E}[\abs{I^{k+1} \setminus I^k} + \abs{I^k \setminus I^{k+1}}
\ | \ x^k] > 0,
\end{equation}
which implies (recall that we consider uniform probabilities for the
index selection):
\begin{align*}
\mathbb{P}\left(\abs{I^{k+1} \setminus I^k} + \abs{I^k \setminus
I^{k+1}} > 0 \ | \ x^k \right) \ge \frac{1}{N}.
\end{align*}
\noindent In the next section we show that there is a finite number
of changes of $I^k$ in expectation generated by algorithm
(RCD-IHT) and then we prove global convergence of this algorithm;
in particular, we show that the limit points of the generated
sequence belong to the class of strong local minima
$\mathcal{L}_{u}$.
\section{Global convergence analysis}
\noindent In this section we analyze the descent properties of the
previously introduced family of coordinate descent algorithms under
Assumptions \ref{assump_grad_1} and \ref{approximation}. Based on
these properties, we establish the nature of the limit points of the
sequence generated by Algorithm (RCD-IHT). In particular, we derive
that any accumulation point of this sequence is almost surely a
local minimum which belongs to the class $\mathcal{L}_u$. Note that
the classical results for any iterative algorithm used for solving
general nonconvex problems state global convergence to stationary
points, while for the $\ell_0$ regularized nonconvex and NP-hard
problem \eqref{l0regular} we show that our family of algorithms has
the property that the generated sequences converge to strong
local minima.
\noindent In order to prove almost sure convergence results for our
family of algorithms, we use the following supermartingale
convergence lemma of Robbins and Siegmund (see e.g.
\cite{PatNec:14}):
\begin{lemma}
\label{mart} Let $v_k, u_k$ and $\alpha_k$ be three sequences of
nonnegative random variables satisfying the following conditions:
\[ \mathbb{E}[v_{k+1} | {\cal F}_k] \leq (1+\alpha_k) v_k - u_k
\;\; \forall k \geq 0 \;\; \text{a.s.} \;\;\; \text{and} \;\;\;
\sum_{k=0}^\infty \alpha_k < \infty \;\; \text{a.s.}, \] where
${\cal F}_k$ denotes the collection $v_0, \dots, v_k, u_0, \dots,
u_k$, $\alpha_0, \dots, \alpha_k$. Then, we have $\lim_{k \to
\infty} v_k = v$ for a random variable $v \geq 0$ a.s. \; and \;
$\sum_{k=0}^\infty u_k < \infty$ a.s.
\end{lemma}
\noindent Further, we analyze the convergence properties of
algorithm (RCD-IHT). First, we derive a descent inequality for this
algorithm.
\begin{lemma}
\label{descent_ramiht}
Let $x^k$ be the sequence generated by (RCD-IHT) algorithm. Under
Assumptions \ref{assump_grad_1} and \ref{approximation} the
following descent inequality holds:
\begin{align}
\label{decrease}
\mathbb{E}[F(x^{k+1})\;|\; x^k] \le F(x^k) -
\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 \;|\; x^k
\right].
\end{align}
\end{lemma}
\begin{proof}
From Assumption \ref{approximation} we have:
\begin{align*}
F(x^{k+1}) + \frac{\mu_{i_k}}{2}\norm{x^{k+1}_{i_k}-x_{i_k}^k}^2 &\le
u_{i_k}(x^{k+1}_{i_k}, x^k) + \norm{x^{k+1}}_{0,\lambda} \\
&\le u_{i_k}(x^{k}_{i_k}, x^k) + \norm{x^{k}}_{0,\lambda} \\
&\le f(x^k) + \norm{x^{k}}_{0,\lambda} = F(x^k).
\end{align*}
In conclusion, our family of algorithms belong to the class of
descent methods:
\begin{align}
\label{decrease_iter}
F(x^{k+1}) & \le F(x^k) -
\frac{\mu_{i_k}}{2}\norm{x^{k+1}_{i_k}-x_{i_k}^k}^2.
\end{align}
Taking expectation w.r.t. $i_k$ we get our descent inequality.
\end{proof}
\noindent We now prove the global convergence of the sequence
generated by algorithm (RCD-IHT) to local minima which belongs to
the restricted set of local minimizers $\mathcal{L}_u$.
\begin{theorem}
\label{convergence_rpamiht} Let $x^k$ be the sequence generated by
algorithm (RCD-IHT). Under Assumptions \ref{assump_grad_1} and
\ref{approximation} the following statements hold:
\noindent $(i)$ There exists a scalar $\tilde{F}$ such that:
$$ \lim\limits_{k \to \infty} F(x^{k})= \tilde{F} \ a.s. \quad
\text{and} \quad \lim\limits_{k \to \infty} \norm{x^{k+1}-x^k} = 0 \
a.s.$$
\noindent $(ii)$ At each change of sequence $I^k$ in expectation we
have the following relation:
$$\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 \;|\; x^k \right] \ge \delta,$$
where $\delta= \frac{1}{N}\min\left\{\min\limits_{i \in [N]:
\lambda_i>0} \frac{\mu_i\lambda_i }{M_i}, \min\limits_{i \in [N], j
\in {\mathcal S}_i \cap \text{supp}(x^0)} \frac{ \mu_i}{2}
|x^0_{(j)}|^2 \right\} > 0.$
\noindent $(iii)$ The sequence $I^k$ changes a finite number of
times as $k \to \infty$ almost surely. The sequence $\norm{x^k}_0$
converges to some $\norm{x^*}_0$ almost surely. Furthermore, any
limit point of the sequence $x^k$ belongs to the class of strong
local minimizers $\mathcal{L}_u$ almost surely.
\end{theorem}
\begin{proof}
\noindent $(i)$ From the descent inequality given in Lemma
\ref{descent_ramiht} and Lemma \ref{mart} we have that there
exists a scalar $\tilde{F}$ such that $\lim_{k \to \infty}
F(x^{k})= \tilde{F}$ almost surely. Consequently, we also have
$\lim_{k \to \infty} F(x^k) - F(x^{k+1}) = 0$ almost surely and, since
our method is of descent type, from \eqref{decrease_iter} we
get $\frac{\mu_{i_k}}{2}\norm{x^{k+1} - x^k}^2 \le F(x^k) -
F(x^{k+1})$, which leads to $\lim_{k \to \infty} \norm{x^{k+1}-x^k}
= 0$ almost surely.
\noindent $(ii)$ For simplicity of the notation we denote $x^{+} =
x^{k+1}, x = x ^k $ and $ i=i_k$. First, we show that any nonzero
component of the sequence generated by (RCD-IHT) is bounded below by
a positive constant. Let $x \in \rset^n$ and $i \in [N]$. From
definition of $T^{u}_i(x)$, for any $j \in
\text{supp}(T^{u}_i(x))$, the $j$th component of the minimizer
$T^{u}_i(x)$ of the function $u_i(y_i;x) + \lambda_i\norm{y_i}_{0}$
is denoted $(T^{u}_i(x))_{(j)}$. Let us define $y^+ = x +
U_i(T^{u}_i(x)-x_i)$. Then, for any $j \in \text{supp}(T^{u}_i(x))$
the following optimality condition holds:
\begin{align}
\label{nablaui} \nabla_{(j)} u_i(y^+_i;x)=0.
\end{align}
\noindent On the other hand, given $j \in \text{supp}(T^{u}_i(x))$, from the definition of $T^{u}_i(x)$ we get:
\begin{align*}
u_i(y^+_i;x) + \lambda_i \norm{y^+_i}_0 \le
u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) + \lambda_i
\norm{y^{+}_i-U_{(j)}y^{+}_{(j)}}_0.
\end{align*}
Subtracting $\lambda_i \norm{y^{+}_i-U_{(j)}y^{+}_{(j)}}_{0}$ from
both sides, leads to:
\begin{equation} \label{ineq_iter}
u_i(y^+_i;x) + \lambda_i \le u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) .
\end{equation}
Further, if we apply the Lipschitz gradient relation given in
Assumption \ref{approximation} $(iii)$ in the right hand side and
use the optimality conditions for the unconstrained problem solved
at each iteration, we get:
\begin{align*}
u_i(y^{+}_i-U_{(j)}y^+_{(j)}; x) &\le u_i(y^{+}_i;x) - \langle \nabla_{(j)} u_i(y^{+}_i;x),y^{+}_{(j)}\rangle\
+ \frac{M_{i}}{2} |y^{+}_{(j)}|^2\\
& \overset{\eqref{nablaui}}{=} u_i(y^{+}_i;x) + \frac{M_{i}}{2}
|y^{+}_{(j)}|^2.
\end{align*}
Combining with the left hand side of \eqref{ineq_iter} we get:
\begin{equation}\label{bound_x_k}
|(T^{u}_i(x))_{(j)}|^2 \ge \frac{2\lambda_{i}}{M_{i}} \qquad \forall
j \in \text{supp}(T^{u}_i(x)).
\end{equation}
Replacing $x = x^k$ for $k \ge 0$, it can be easily seen that, for
any $j \in \text{supp}(x^k_i)$ and $i \in [N]$, we have:
\begin{equation*}
\abs{x^k_{(j)}}^2
\begin{cases}
\ge \frac{2\lambda_{i}}{M_i}, &\text{if} \quad x^k_{(j)} \neq 0 \quad \text{and} \quad i \in \xi^k \\
= \abs{x^{0}_{(j)}}^2, &\text{if} \quad x^k_{(j)} \neq 0 \quad
\text{and} \quad i \notin \xi^k.
\end{cases}
\end{equation*}
\noindent Further, assume that at some iteration $k > 0$ a change of
sequence $I^k$ in expectation occurs. Thus, there is an index $j \in
[n]$ (and block $i$ containing $j$) such that either $\left(
x^k_{(j)}=0 \ \text{and} \ \left(T^{u}_i(x^{k})\right)_{(j)} \neq
0\right)$ or $\left( x^k_{(j)} \neq 0 \ \text{and} \
\left(T^{u}_i(x^{k})\right)_{(j)} = 0\right)$. Analyzing these cases
we have:
\begin{equation*}
\norm{T^{u}_i(x^k)-x^k_i}^2 \ge \left |
\left(T^{u}_i(x^k)\right)_{(j)}-x^k_{(j)} \right |^2 \;\;
\begin{cases}
\ge \frac{2\lambda_{i}}{M_{i}} &\text{if} \quad x^k_{(j)} = 0 \\
\ge \frac{2\lambda_{i}}{M_{i}} &\text{if} \quad x^k_{(j)} \neq 0 \ \text{and} \ i \in \xi^k \\
= |x^{0}_{(j)}|^2 &\text{if} \quad x^k_{(j)} \neq 0 \ \text{and} \ i
\notin \xi^k.
\end{cases}
\end{equation*}
\noindent Observing that under uniform probabilities we have:
$$\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2| x^k \right] =
\frac{1}{N}\sum\limits_{i=1}^N\frac{\mu_i}{2}\norm{T^{u}_i(x^k)-x^k_i}^2,$$
we can conclude that at each change of sequence $I^k$ in expectation we get:
\begin{align*}
\mathbb{E}\left[\frac{\mu_{i_k}}{2}\norm{x^{k+1}-x^k}^2 | x^k\right] \ge \frac{1}{N}\min\left\{\min\limits_{i \in [N]: \lambda_i>0} \frac{\mu_i\lambda_i }{M_i},
\min\limits_{i \in [N], j \in {\mathcal S}_i \cap \text{supp}(x^0)} \frac{ \mu_i}{2}
|x^0_{(j)}|^2 \right\}.
\end{align*}
\noindent $(iii)$ From $\lim\limits_{k \to \infty}
\norm{x^{k+1}-x^k} = 0$ a.s. we have $\lim\limits_{k \to \infty}
\mathbb{E}\left[\norm{x^{k+1}-x^k} \ | \ x^k\right] = 0$ a.s. On
the other hand from part $(ii)$ we have that if the sequence $I^k$
changes in expectation, then $\mathbb{E}[\norm{x^{k+1}-x^{k}}^2 \ |
\ x^{k}] \ge \delta > 0.$ These facts imply that there is a finite
number of changes in expectation of the sequence $I^k$, i.e. there exists
$K>0$ such that for any $k > K$ we have $I^k = I^{k+1}$.
\noindent Further, if the sequence $I^k$ is constant for $k > K$,
then we have $I^k=I^*$ and $\norm{x^k}_{0,\lambda} =
\norm{x^*}_{0,\lambda}$ for any vector $x^*$ satisfying
$I(x^*)=I^*$. Also, for $k> K$ algorithm (RCD-IHT) is equivalent
to the classical random coordinate descent method
\cite{HonWan:13}, and thus shares its convergence properties; in
particular any limit point of the sequence $x^k$ is a minimizer on
the coordinates $I^*$ for $\min_{x \in S_{I^*}} f(x)$. Therefore,
if the sequence $I^k$ is fixed, then we have for any $k > K$ and
$i_k \in I^k$:
\begin{equation}\label{fixedsupp}
u_{i_k}(x^{k+1}_{i_k};x^k)+ \norm{x^{k+1}}_{0,\lambda} \le
u_{i_k}(y_{i_k};x^k)+\norm{x^{k} +
U_{i_k}(y_{i_k}-x^k_{i_k})}_{0,\lambda} \quad \forall y_{i_k} \in
\rset^{n_{i_k}}.
\end{equation}
\noindent On the other hand, denoting with $x^*$ an accumulation
point of $x^k$, taking limit in \eqref{fixedsupp} and using that
$\norm{x^k}_{0,\lambda} = \norm{x^*}_{0,\lambda}$ as $k \to \infty$,
we obtain the following relation:
$$F(x^*)\le \min_{y_{i} \in \rset^{n_i}} u_i(y_i;x^*)+\norm{x^*+U_i(y_i-x^*_i)}_{0,\lambda} \quad a.s.$$
for all $i \in [N]$ and thus $x^*$ is the minimizer of the previous
right hand side expression. Using the definition of local minimizers
from the set $\mathcal{L}_u$, we conclude that any limit point $x^*$
of the sequence $x^k$ belongs to this set, which proves our
statement.
\end{proof}
\noindent It is important to note that the classical results for
any iterative algorithm used for solving nonconvex problems usually
state global convergence to stationary points, while for our
algorithms we were able to prove global convergence to local minima
of our nonconvex and NP-hard problem \eqref{l0regular}. Moreover, if
$\lambda_i=0$ for all $i \in [N]$, then the optimization problem
\eqref{l0regular} becomes convex and we see that our convergence
results cover also this setting.
\section{Rate of convergence analysis}
\noindent In this section we prove the linear convergence in
probability of the random coordinate descent algorithm (RCD-IHT)
under the additional assumption of strong convexity for function
$f$ with parameter $\sigma$ and for the scalar case, i.e. we assume
$n_i=1$ for all $i \in [n] =[N]$. Note that, for algorithm (RCD-IHT)
the scalar case is the most practical since it requires solving a
simple unidimensional convex subproblem, while for $n_i > 1$ it
requires the solution of a small NP-hard subproblem at each
iteration. First, let us recall that complexity results of random
block coordinate descent methods for solving convex problems $f^*
=\min_{x \in \rset^n} f(x)$, under convexity and Lipschitz gradient
assumptions on the objective function, have been derived e.g. in
\cite{HonWan:13}, where the authors showed sublinear rate of
convergence for a general class of coordinate descent methods.
Using a similar reasoning as in \cite{HonWan:13,NecPat:14a}, we
obtain that the randomized version of the general block coordinate
descent method, in the strongly convex case, presents a linear rate
of convergence in expectation of the~form:
\begin{equation*}
\mathbb{E}[f(x^{k}) - f^*] \le \left(1-\theta\right)^k
\left(f(x^{0}) - f^*\right),
\end{equation*}
where $\theta \in (0,1)$. Using the strong convexity property for $f$ we have:
\begin{equation}\label{rcd_rate_of_conv2}
\mathbb{E}\left[\norm{x^k - x^*} \right] \le \left(1- \theta\right)^{k/2}
\sqrt{\frac{2}{\sigma}\left(f(x^{0}) - f^* \right)} \quad \forall x^* \in
X_f^*,
\end{equation}
where we recall that we denote $X_f^* = \arg \min_{x \in \rset^n}
f(x)$. For attaining an $\epsilon$-suboptimality this algorithm has
to perform the following number of iterations:
\begin{equation}\label{rcd_complexity2}
k \ge \frac{2}{\theta} \log
\left(\frac{1}{\epsilon}\sqrt{\frac{2\left(f(x^0)-f^*\right)}{\sigma}}\right).
\end{equation}
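\noindent As a quick numerical illustration of the bound \eqref{rcd_complexity2}, the following Python lines compute the number of iterations that guarantees $\mathbb{E}[\norm{x^k-x^*}] \le \epsilon$ via \eqref{rcd_rate_of_conv2}; the constants are chosen arbitrarily.
\begin{verbatim}
import numpy as np

theta, sigma, gap, eps = 1e-2, 0.5, 10.0, 1e-3    # illustrative values only
k_min = (2.0 / theta) * np.log(np.sqrt(2.0 * gap / sigma) / eps)
print("iterations guaranteeing the epsilon-accuracy:", int(np.ceil(k_min)))
\end{verbatim}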
\noindent In order to derive the rate of convergence in probability
for algorithm (RCD-IHT), we first define the following notion which
is a generalization of relations \eqref{deltaq} and \eqref{deltae}
for $u_i(y_i,x) = u_i^q(y_i,x,M_i)$ and $u_i(y_i,x) =
u_i^e(y_i,x,\beta_i)$, respectively:
\begin{align}
\label{deltau1} & v^i(x) = x + U_i(h_i(x) - x_i), \quad \text{where}
\quad h_i(x) = \arg\min\limits_{y_i \in \rset^{}} u_i(y_i;x) \\
&\Delta^i(x) = u_i(0;x) - u_i(h_i(x);x). \label{deltau2}
\end{align}
We make the following assumption on functions $u_i$ and consequently
on $\Delta^i(x)$:
\begin{assumption}
\label{assump_delta}
There exist some positive constants $C_i$ and $D_i$ such that the
approximation functions $u_i$ satisfy for all $i \in [n]$:
$$\abs{\Delta^i(x) - \Delta^i(z)} \le C_i\norm{x-z} + D_i\norm{x-z}^2 \quad \forall x \in
\rset^n, z \in \mathcal{T}_f$$
and
$$ \min_{z \in \mathcal{T}_f} \min\limits_{i \in [n]} \abs{\Delta^i(z) - \lambda_i} >0. $$
\end{assumption}
\noindent Note that if $f$ is strongly convex, then the set
$\mathcal{T}_f$ of basic local minima has a finite number of
elements. Next, we show that this assumption holds for the most
important approximation functions $u_i$ (recall that $u_i^q =u_i^Q$
in the scalar case $n_i=1$).
\begin{lemma}\label{delta_xy}
Under Assumption \ref{assump_grad_1} the following statements
hold:\\
$(i)$ If we consider the separable quadratic approximation
$u_i(y_i;x) = u_i^q(y_i;x,M_i)$,~then:
$$ \abs{\Delta^i(x) - \Delta^i(z)} \le M_i v^i_{\max}\left(1 +
\frac{L_f}{M_i}\right)\norm{x - z} + \frac{M_i}{2}\left(1 + \frac{L_f}{M_i}\right)^2 \norm{x - z}^2,$$
for all $x \in \rset^n$ and $z \in \mathcal{T}_f$, where we have
defined $v^i_{\max}$ as follows $v^i_{\max} = \max
\{\norm{(v^i(y))_i} :\; y \in \mathcal{T}_f\}$ for all $i \in [n]$. \\
$(ii)$ If we consider the exact approximation $u_i(y_i;x) =
u_i^e(y_i;x,\beta_i)$, then we have:
\[ \abs{\Delta^i(x) - \Delta^i(z)} \le \gamma^i\norm{x-z} +
\frac{L_f+\beta_i}{2} \norm{x-z}^2,\] for all $x \in \rset^n$ and
$z \in \mathcal{T}_f$, where we have defined $\gamma^i$ as follows
$\gamma^i = \max\{\norm{\nabla f(y-U_iy_i)}+ \norm{\nabla f(v^i(y))}
+ \beta_i\norm{y_i}: \; y \in \mathcal{T}_f \}$ for all $i \in
[n]$.
\end{lemma}
\begin{proof}
$(i)$ For the separable quadratic approximation $u_i(y_i;x) =
u_i^q(y_i;x,M_i)$, using the definition of $\Delta^i(x)$ and
$v^i(x)$ given in \eqref{deltau1}--\eqref{deltau2} (see also
\eqref{deltaq}), we get:
\begin{align}
\label{deltaqq} \Delta^i(x) =
\frac{M_i}{2}\norm{x_i-\frac{1}{M_i}\nabla_if(x)}^2 =
\frac{M_i}{2}\norm{(v^i(x))_i}^2.
\end{align}
\noindent Then, since $\norm{\nabla_i f(x) - \nabla_i f(z)} \leq L_f
\norm{x-z}$ and using the property of the norm $|\norm{a} -
\norm{b}| \leq \norm{a-b}$ for any two vectors $a$ and $b$, we
obtain:
\begin{align*}
\abs{\Delta^i(x) -\Delta^i(z)} &= \frac{M_i}{2}\left\lvert \norm{(v^i(x))_i}^2 - \norm{(v^i(z))_i}^2\right\lvert \\
& \le \frac{M_i}{2}\left\lvert \norm{(v^i(x))_i} - \norm{(v^i(z))_i}\right\lvert \; \left\lvert \norm{(v^i(x))_i} + \norm{(v^i(z))_i}\right\lvert \\
& \overset{\eqref{deltaqq}}{\le} \frac{M_i}{2}\left(1 +
\frac{L_f}{M_i}\right)\norm{x - z} \left( 2\norm{(v^i(z))_i} +
\left(1 + \frac{L_f}{M_i}\right)\norm{x - z}\right).
\end{align*}
\noindent $(ii)$ For the exact approximation $u_i(y_i;x) =
u_i^e(y_i;x,\beta_i)$, using the definition of $\Delta^i(x)$ and
$v^i(x)$ given in \eqref{deltau1}--\eqref{deltau2} (see also
\eqref{deltae}), we get:
\[ \Delta^i(x) = f(x-U_ix_i) - f(v^i(x)) +
\frac{\beta_i}{2}\norm{x_i}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i -
x_i}^2. \] Then, using the triangle inequality we derive the
following relation:
\begin{align*}
\abs{\Delta^i(x) - \Delta^i(z)} &\le \Big |f(x-U_ix_i)- f(z-U_iz_i) + f(v^i(z)) - f(v^i(x))\\
&\;\;\;\; + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 -
\frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2 \Big |
+\Big\lvert\frac{\beta_i}{2}\norm{x_i}^2 -
\frac{\beta_i}{2}\norm{z_i}^2\Big\rvert.
\end{align*}
\noindent For simplicity, we denote:
\begin{align*}
\delta_{1i}(x,z)= &f(x-U_ix_i)- f(z-U_iz_i) + f(v^i(z)) - f(v^i(x)) \\
& \;\; + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2\\
\delta_{2i}(x,z)=&\frac{\beta_i}{2}\norm{x_i}^2 -
\frac{\beta_i}{2}\norm{z_i}^2.
\end{align*}
\noindent In order to bound $\Delta^i(x) - \Delta^i(z)$, it is
sufficient to find upper bounds on $\abs{\delta_{1i}(x,z)}$ and
$\abs{\delta_{2i}(x,z)}$. For a bound on $\abs{\delta_{1i}(x,z)}$ we
use $\abs{\delta_{1i}(x,z)} =
\max\{\delta_{1i}(x,z),-\delta_{1i}(x,z)\}$. Using the optimality
conditions for the map $v^i(x)$ and convexity of $f$ we obtain:
\begin{align*}
f(v^i(x)) & \ge f(v^i(z)) + \langle \nabla f(v^i(z)), v^i(x) - v^i(z)\rangle \nonumber\\
&= \! f(v^i(z)) \!+\! \langle \nabla f(v^i(z)), x \!-\! z\rangle
\!+\! \langle \nabla_i f(v^i(z)), ((v^i(x))_i \!-\! x_i)
\!-\! ((v^i(z))_i \!-\! z_i)\rangle \nonumber\\
&=\! f(v^i(z)\!) \!+\! \langle \nabla f(v^i(z)\!), x \!-\! z\rangle
\!-\! \beta_i \langle (v^i(z)\!)_i \!-\! z_i,((v^i(x)\!)_i \!-\!
x_i)\!-\! ((v^i(z)\!)_i \!-\! z_i) \rangle \nonumber\\
&= f(v^i(z)) + \langle \nabla f(v^i(z)), x-z\rangle + \frac{\beta_i}{2}\norm{(v^i(z))_i - z_i}^2 \nonumber\\
&\qquad + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i}^2 - \beta_i \langle (v^i(z))_i -z_i, (v^i(x))_i-x_i\rangle \nonumber\\
&=f(v^i(z)) + \langle \nabla f(v^i(z)), x-z\rangle + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i}^2 \nonumber \\
&\qquad + \frac{\beta_i}{2}\norm{(v^i(z))_i-z_i - ((v^i(x))_i-x_i)}^2 - \frac{\beta_i}{2}\norm{(v^i(x))_i - x_i}^2 \nonumber\\
& \ge \! f(v^i(z)) \!+\! \frac{\beta_i}{2}\norm{(v^i(z))_i \!-\!
z_i}^2 \!-\! \frac{\beta_i}{2}\norm{(v^i(x))_i \!-\! x_i}^2 \!-\!
\norm{\nabla f(v^i(z))}\norm{x \!-\! z},
\end{align*}
where in the last inequality we used the Cauchy-Schwarz
inequality. On the other hand, from the global Lipschitz continuous
gradient inequality we get:
\begin{equation*}
f(x-U_ix_i) \le f(z-U_iz_i) + \norm{\nabla f(z-U_iz_i)}\norm{x-z} +
\frac{L_f}{2}\norm{x-z}^2.
\end{equation*}
\noindent From the previous two relations we obtain:
\begin{equation}\label{delta}
\delta_{1i}(x,z) \le \left( \norm{\nabla f(z-U_iz_i)}+ \norm{\nabla
f(v^i(z))} \right) \norm{x-z} + \frac{L_f}{2}\norm{x-z}^2.
\end{equation}
\noindent In order to obtain a bound on $-\delta_{1i}(x,z)$ we
observe that:
\begin{align}
& f(v^i(x)) + \frac{\beta_i}{2}\norm{(v^i(x))_i-x_i}^2 - f(v^i(z)) - \frac{\beta_i}{2}\norm{(v^i(z))_i
-z_i}^2 \nonumber\\
&\quad \le f(x + U_i((v^i(z))_i-z_i)) - f(v^i(z)) \nonumber \\
& \quad \le \norm{\nabla f(v^i(z))}\norm{x-z} +
\frac{L_f}{2}\norm{x-z}^2,\label{bound_aux3}
\end{align}
where in the last inequality we used the Lipschitz gradient
relation and the Cauchy-Schwarz inequality. Also, from the convexity of
$f$ and the Cauchy-Schwarz inequality we get:
\begin{equation}\label{bound_aux4}
f(x-U_ix_i) \ge f(z-U_iz_i) - \norm{\nabla f(z-U_iz_i)}\norm{x-z}.
\end{equation}
Combining now the bounds \eqref{bound_aux3} and \eqref{bound_aux4}
we obtain:
\begin{equation}\label{-delta}
-\delta_{1i}(x,z)\le \left(\norm{\nabla f(z-U_iz_i)}+ \norm{\nabla f(v^i(z))}\right)\norm{x-z} + \frac{L_f}{2}\norm{x-z}^2.
\end{equation}
Therefore, from \eqref{delta} and \eqref{-delta} we obtain a bound
on $\abs{\delta_{1i}(x,z)}$:
\begin{equation}\label{delta1abs}
\abs{\delta_{1i} (x,z)}\le \left( \norm{\nabla f(z-U_iz_i)}+
\norm{\nabla f(v^i(z))} \right) \norm{x-z} +
\frac{L_f}{2}\norm{x-z}^2.
\end{equation}
\noindent Regarding the second quantity $\delta_{2i}(x,z)$, we
observe that:
\begin{align}
\abs{\delta_{2i}(x,z)} &= \frac{\beta_i}{2} \Big\lvert\norm{x_i}+\norm{z_i}
\Big\rvert \Big\lvert \norm{x_i}-\norm{z_i}\Big\rvert
= \frac{\beta_i}{2}\Big\lvert \norm{x_i}-\norm{z_i}+2\norm{z_i}\Big\rvert
\Big\lvert\norm{x_i}-\norm{z_i}\Big\rvert \nonumber\\
& \le \frac{\beta_i}{2}\left(\norm{x - z}+2\norm{z_i}\right) \norm{x - z}.\label{delta2abs}
\end{align}
From the upper bounds on $\abs{\delta_{1i} (x,z)}$ and
$\abs{\delta_{2i} (x,z)}$ given in \eqref{delta1abs} and
\eqref{delta2abs}, respectively, we obtain our result.
\end{proof}
\noindent We further show that the second part of Assumption \ref{assump_delta} holds for the most important approximation functions $u_i$.
\begin{lemma}\label{lemma_alpha}
Under Assumption \ref{assump_grad_1} the following statements
hold:\\
$(i)$ If we consider the separable quadratic approximation $u_i(y_i;x)
= u_i^q(y_i;x,M_i)$, then for any fixed $z \in \mathcal{T}_f$
there exist at most two values of the parameter $M_i$ satisfying $\abs{\Delta^i(z) - \lambda_i} = 0$.\\
$(ii)$ If we consider the exact approximation $u_i(y_i;x) =
u_i^e(y_i;x,\beta_i)$, then for any fixed $z \in \mathcal{T}_f$
there exists at most one value of $\beta_i$ satisfying $\abs{\Delta^i(z) - \lambda_i} = 0$.\\
\end{lemma}
\begin{proof}
$(i)$ For the approximation $u_i(y_i;x) = u_i^q(y_i;x,M_i)$ we have: $$\Delta^i(z) = \frac{M_i}{2}\norm{z_i - \frac{1}{M_i}\nabla_i f(z)}^2.$$
Thus, we observe that $\Delta^i(z) = \lambda_i$ is equivalent to the following relation:
$$ \frac{\norm{z_i}^2 }{2}M_i^2 - \left( \langle \nabla_i f(z), z_i\rangle +\lambda_i \right)M_i + \frac{\norm{\nabla_i f(z)}^2}{2} = 0,$$
which holds for at most two values of $M_i$.
\noindent $(ii)$ For the approximation $u_i(y_i;x) = u_i^e(y_i;x,\beta_i)$ we have:
$$\Delta^i(z) = f(z-U_iz_i) + \frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) -
\frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2,$$ where
$v^i_{\beta}(z)$ and $h^i_{\beta}(z)$ are defined as in
\eqref{deltau1} corresponding to the exact approximation. Assume, for
the purpose of contradiction, that there exist two distinct constants
$\beta_i > \gamma_i > 0$ (the ordering being without loss of generality)
for which $\Delta^i(z) = \lambda_i$. In other terms, we have:
$$ \frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) - \frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2 =
\frac{\gamma_i}{2}\norm{z_i}^2 - f(v^i_{\gamma}(z)) -
\frac{\gamma_i}{2}\norm{h^i_{\gamma}(z) - z_i}^2.$$ We analyze two
possible cases. Firstly, if $z_i = 0$, then the above equality leads
to the following relation:
\begin{align*}
f(v^i_{\beta}(z)) + \frac{\beta_i}{2}\norm{h^i_{\beta}(z)}^2 &=
f(v^i_{\gamma}(z)) + \frac{\gamma_i}{2}\norm{h^i_{\gamma}(z)}^2 \\
& \le f(v^i_{\beta}(z)) +
\frac{\gamma_i}{2}\norm{h^i_{\beta}(z)}^2, \end{align*} which
implies that $\beta_i \le \gamma_i$, which is a contradiction.
Secondly, assuming $z_i \neq 0$ we observe from optimality of
$h^i_{\beta}(z)$ that:
\begin{equation}\label{ineq1}
\frac{\beta_i}{2}\norm{z_i}^2 - f(v^i_{\beta}(z)) - \frac{\beta_i}{2}\norm{h^i_{\beta}(z) - z_i}^2 \ge \frac{\beta_i}{2}\norm{z_i}^2 - f(z).
\end{equation}
On the other hand, taking into account that $z \in \mathcal{T}_f$ we
have:
\begin{equation}\label{ineq2}
\frac{\gamma_i}{2}\norm{z_i}^2 - f(v^i_{\gamma}(z)) -
\frac{\gamma_i}{2}\norm{h^i_{\gamma}(z) - z_i}^2 \le
\frac{\gamma_i}{2}\norm{z_i}^2 - f(z).
\end{equation}
From \eqref{ineq1} and \eqref{ineq2} we get $\beta_i \le \gamma_i$,
which yields the same contradiction.
\end{proof}
\noindent We use the following notations:
\begin{align*}
C_{\max} = \max_{1\le i \le n} C_i,\quad D_{\max} = \max_{1\le i \le
n} D_i,\quad \tilde{\alpha} &= \min_{z \in \mathcal{T}_f}
\min\limits_{i \in [n]} \abs{\Delta^i(z) - \lambda_i}.
\end{align*}
\noindent Since the set $\mathcal{T}_f$ of basic local minima is finite
for strongly convex functions $f$, there is only a finite number of
possible values for $\abs{\Delta^i(z) - \lambda_i}$. Therefore, from the
previous lemma we obtain that $\tilde{\alpha}=0$ only for a finite number
of values of the parameters $(M_i,\mu_i)$ of the approximations
$u_i = u_i^q$ or $u_i=u_i^e$. We can reason in a similar fashion for
general approximations $u_i$ satisfying Assumption \ref{approximation},
i.e. $\tilde{\alpha}=0$ only for a finite number of values of the
parameters $(M_i,\mu_i)$. In conclusion, if the parameters $(M_i,\mu_i)$
of the approximations $u_i$ are chosen randomly at the initialization
stage of our algorithm, then $\tilde{\alpha}>0$ almost surely.
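\noindent To make the preceding argument concrete, the following short Python sketch (with purely illustrative numbers) shows for the quadratic approximation that $\Delta^i(z)=\lambda_i$ holds only on the root set of a quadratic equation in $M_i$, so a random draw of $M_i$ from a continuous distribution avoids these at most two values with probability one.
\begin{verbatim}
import numpy as np

# At most two "bad" values of M_i solve Delta^i(z) = lambda_i for a fixed z
# (cf. the previous lemma); a randomly drawn M_i misses them almost surely.
# All numbers below are hypothetical and used only for illustration.

def bad_M_values(g_i, z_i, lam_i):
    # roots of (z_i^2/2) M^2 - (g_i*z_i + lam_i) M + g_i^2/2 = 0
    roots = np.roots([0.5 * z_i**2, -(g_i * z_i + lam_i), 0.5 * g_i**2])
    return roots[np.isreal(roots) & (roots.real > 0)].real

g_i, z_i, lam_i = 1.3, 0.7, 0.2      # hypothetical grad_i f(z), z_i and penalty
bad = bad_M_values(g_i, z_i, lam_i)

rng = np.random.default_rng(0)
M_i = rng.uniform(0.5, 5.0)          # parameter drawn at the initialization stage
delta = 0.5 * M_i * (z_i - g_i / M_i)**2
print(bad, abs(delta - lam_i) > 0)   # the random M_i avoids the bad values
\end{verbatim}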
\noindent Further, we state the linear rate of convergence with high
probability for algorithm (RCD-IHT). Our analysis will employ ideas
from the convergence proof of the deterministic iterative hard
thresholding method in \cite{Lu:12}. However, the random nature of
our family of methods and the properties of the approximation
functions $u_i$ require a new approach. We use the notation $k_p$
for the iterations when a change in expectation of $I^k$ occurs, as
given in the previous section. We also denote with $F^*$ the global
optimal value of our original $\ell_0$ regularized problem
\eqref{l0regular}.
\begin{theorem}
Let $x^k$ be the sequence generated by the family of algorithms
(RCD-IHT) under Assumptions \ref{assump_grad_1},
\ref{approximation} and \ref{assump_delta} and the additional assumption of strong convexity of
$f$ with parameter $\sigma$. Denote with $\kappa$ the number of
changes in expectation of $I^k$ as $k \to \infty$. Let $x^*$ be some
limit point of $x^k$ and $\rho>0$ be some confidence level.
Considering the scalar case $n_i=1$ for all $i \in [n]$, the
following statements hold:
\noindent \textit{(i)} The number of changes in expectation $\kappa$
of $I^k$ is bounded by $\left \lceil \frac{ \mathbb{E} \left [
F(x^0) - F(x^*) \right] }{\delta} \right \rceil$, where $\delta$ is
specified in Theorem \ref{convergence_rpamiht} $(ii)$.
\noindent \textit{(ii)} The sequence $x^k$ converges linearly in the
objective function values with high probability, i.e. it satisfies
$\mathbb{P}\left(F(x^k) - F(x^*) \le \epsilon \right) \ge 1-\rho$
for $k \ge \frac{1}{\theta}\log\frac{\tilde{\omega}}{\rho\epsilon}$,
where $\tilde{\omega}=2^{\omega}(F(x^0)-F^*)$, with $\omega =
\max\left\{ \alpha t -\beta t^2 : 0 \le t
\le \left\lfloor\frac{\mathbb{E}[F(x^0) -
F(x^*)]}{\delta}\right\rfloor \right\}$, $\beta =
\frac{\delta}{2(F(x^0)-F^*)}$, $\alpha = \left(\log \left[2(F(x^0) -
F^*)\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} - \frac{\delta}{2
(F(x^0)-F^*)} + \theta \right)$ and
$\xi=\frac{1}{2}\left(\sqrt{\frac{C_{\max}^2}{D_{\max}^2} +
\frac{\tilde{\alpha}}{D_{\max}}} -
\frac{C_{\max}}{D_{\max}}\right)$.
\end{theorem}
\begin{proof}
$(i)$ From \eqref{decrease} and Theorem \ref{convergence_rpamiht}
$(ii)$ it can be easily seen that:
\begin{align*}
\delta & \le
\mathbb{E}\left[\frac{\mu_{i_{k_p}}}{2}\norm{x^{k_p+1}-x^{k_p}}^2
\Big| x^{k_p}\right] \le F(x^{k_p}) -
\mathbb{E}[F(x^{k_p+1})|x^{k_p}] \\
& \leq F(x^{k_p}) - \mathbb{E}[F(x^{k_{p+1}})|x^{k_p}].
\end{align*}
Taking expectation in this relation w.r.t. the entire history
$\xi^{k_p}$ we get the bound: $\delta \le \mathbb{E}\left[
F(x^{k_p}) - F(x^{k_{p+1}}) \right].$ Further, summing up over $p
\in [\kappa]$ we have:
$$\kappa \delta \le \mathbb{E}\left[F(x^{k_1}) -
F(x^{k_{\kappa}+1})\right]\le \mathbb{E}\left[F(x^0) -
F(x^*)\right],$$ i.e. we have proved the first part of our theorem.
\noindent $(ii)$ In order to establish the linear rate of
convergence in probability of algorithm (RCD-IHT), we first derive a
bound on the number of iterations performed between two changes in
expectation of $I^k$. Secondly, we also derive a bound on the
number of iterations performed after the support is fixed (a similar
analysis for deterministic iterative hard thresholding method was
given in \cite{Lu:12}). Combining these two bounds, we obtain the
linear convergence of our algorithm. Recall that for any $p \in
[\kappa]$, at iteration $k_{p}+1$, there is a change in expectation
of $I^{k_p}$, i.e.
\begin{equation*}
\mathbb{E}[\abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1}
\setminus I^{k_{p}}} \Big| \ x^{k_{p}}] > 0,
\end{equation*}
which implies that
$$\mathbb{P}\left( \abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1} \setminus
I^{k_{p}}} > 0 | x^{k_p}\right) = \mathbb{P}\left( I^{k_{p}} \neq I^{k_{p}+1} | x^{k_p}\right)
\ge \frac{1}{n}$$
and furthermore
\begin{equation}\label{probabilitysupp2}
\mathbb{P}\left( \abs{I^{k_{p}} \setminus I^{k_{p}+1}} + \abs{I^{k_{p}+1}
\setminus I^{k_{p}}} = 0 | x^{k_p} \right) = \mathbb{P}\left( I^{k_{p}} =
I^{k_{p}+1} | x^{k_p}\right) \le \frac{n-1}{n}.
\end{equation}
\noindent Let $p$ be an arbitrary integer from $[\kappa]$. Denote
$\hat{x}^* = \arg\min\limits_{x \in S_{I^{k_p}}} f(x)$ and
$\hat{f}^*=\mathbb{E}\left[f(\hat{x}^*) \ | \ x^{k_{p-1}+1} \right]$.
\noindent Assume that the number of iterations performed between two
changes in expectation satisfies:
\begin{equation}\label{iter_rcd_aux2}
k_{p} - k_{p-1} > \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^* -
(p-1)\delta )\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) +
1,
\end{equation}
where we recall that $\sigma$ is the strong convexity parameter of
$f$. For any $k \in [k_{p-1}+1, k_{p}]$ we denote $f^k =\mathbb{E}[f(x^k) \ | \ x^{k_{p-1}+1}]$. From Lemma \ref{descent_ramiht} and Theorem
\ref{convergence_rpamiht} we have:
$$f^{k_{p-1}+1} - \hat{f}^* \le \mathbb{E}[F(x^{k_{p-1}+1}) \ | \ x^{k_{p-1}+1}] - \mathbb{E}[F(\hat{x}^*)\ | \ x^{k_{p-1}+1}]
\le F(x^0) - (p-1)\delta - F^*,$$
\noindent so that we can claim that \eqref{iter_rcd_aux2} implies
\begin{equation}\label{iter_rcd2}
k_{p} - k_{p-1} > \frac{2}{\theta} \log \frac{
2\sqrt{2(f^{k_{p-1}+1} - \hat{f}^*)}n}{\sqrt{\sigma}\xi} + 1 \ge
\frac{2}{\theta} \log \frac{ \sqrt{2n(f^{k_{p-1}+1} -
\hat{f}^*)}}{\sqrt{\sigma}\xi(\sqrt{n}-\sqrt{n-1})} +1.
\end{equation}
\noindent We show that under relation \eqref{iter_rcd2}, the
probability \eqref{probabilitysupp2} does not hold. First, we
observe that between two changes in expectation of $I^k$, i.e.
$k \in [k_{p-1}+1, k_{p}]$, the algorithm (RCD-IHT) is equivalent with
the randomized version of coordinate descent method \cite{HonWan:13,
NecPat:14a} for strongly convex problems. Therefore, the method has
linear rate of convergence \eqref{rcd_rate_of_conv2}, which in our
case is given by the following expression:
\begin{equation*}
\mathbb{E}\left[\norm{x^k\!-\!\hat{x}^*} \;|\; x^{k_{p-1}+1} \right]\!\le\!\left(1\!-\!\theta\right)^{(k-k_{p-1}-1)/2}\sqrt{\frac{2}{\sigma}\left(f^{k_{p-1}+1}-\hat{f}^*\right)},
\end{equation*}
for all $k \in [k_{p-1}+1, k_{p}]$. Taking $k=k_{p}$, if we apply
the complexity estimate \eqref{rcd_complexity2} and use the bound
\eqref{iter_rcd2}, we obtain:
$$\mathbb{E}\left[\norm{x^{k_{p}} - \hat{x}^*} \;|\; x^{k_{p-1}+1} \right] \le \left(1 - \theta\right)^{(k_{p} - k_{p-1}-1)/2}
\!\sqrt{\frac{2}{\sigma}\left(f^{k_{p-1}+1}-\hat{f}^*\right)} \!<\!
\!\xi\!\left(1\!-\!\sqrt{\frac{n\!-\!1}{n}}\right).$$ From the Markov
inequality, it can be easily seen that we have:
\begin{equation*}
\mathbb{P}\left(\norm{x^{k_{p}} - \hat{x}^*} < \xi \;|\; x^{k_{p-1}+1} \right) = 1-
\mathbb{P}\left(\norm{x^{k_{p}} - \hat{x}^*} \ge \xi \;|\; x^{k_{p-1}+1}\right) >
\sqrt{1-\frac{1}{n}}.
\end{equation*}
\noindent Let $i \in [n]$ be such that $\lambda_i>0$. From Assumption
\ref{assump_delta} and definition of parameter $\xi$ we see that the
event $\norm{x^{k_{p}} - \hat{x}^*} < \xi$ implies:
\begin{equation*}
\abs{\Delta^i(x^{k_p}) - \Delta^i(\hat{x}^*)} \le
C_{\max}\norm{x^{k_p}-\hat{x}^*} +
D_{\max}\norm{x^{k_p}-\hat{x}^*}^2 < \tilde{\alpha} \le
\abs{\Delta^i(\hat{x}^*) - \lambda_i}.
\end{equation*}
\noindent The first and the last terms from the above inequality
further imply:
\begin{equation*}
\begin{cases}
\abs{\Delta^i(x^{k_p})} > \lambda_i, & \text{if} \quad \abs{\Delta^i(\hat{x}^*)} > \lambda_i\\
\abs{\Delta^i(x^{k_p})} < \lambda_i, & \text{if} \quad \abs{\Delta^i(\hat{x}^*)} < \lambda_i,
\end{cases}
\end{equation*}
or equivalently $I^{k_p+1} = \hat{I}^* = \left\{ j \in [n]:
\lambda_j=0 \right\} \cup \left\{ i \in [n]: \lambda_i>0,
\abs{\Delta^i(\hat{x}^*)} > \lambda_i \right\}$. \noindent In
conclusion, if \eqref{iter_rcd2} holds, then we have:
\begin{equation*}
\mathbb{P}\left( I^{k_{p}+1} = \hat{I}^* \;|\; x^{k_{p-1}+1} \right) >
\sqrt{1-\frac{1}{n}}.
\end{equation*}
Applying the same procedure as before for iteration $k = k_{p} - 1$
we obtain:
\begin{equation*}
\mathbb{P}\left( I^{k_{p}} = \hat{I}^* \;|\; x^{k_{p-1}+1}\right) >
\sqrt{1-\frac{1}{n}}.
\end{equation*}
\noindent Considering the events $\{I^{k_{p}} = \hat{I}^*\}$ and
$\{I^{k_{p}+1} = \hat{I}^*\}$ to be independent (according to the
definition of $k_p$), we have:
\begin{equation*}
\mathbb{P}\left( I^{k_{p}+1} = I^{k_{p}} \;|\; x^{k_{p-1}+1} \right) \ge
\mathbb{P}\left( \left\{I^{k_{p}+1} = \hat{I}^*\right\} \cap
\left\{I^{k_{p}} = \hat{I}^*\right\} \;|\; x^{k_{p-1}+1} \right) > \frac{n-1}{n},
\end{equation*}
which contradicts the assumption $\mathbb{P}\left(I^{k_{p}} =
I^{k_{p}+1} \ | \ x^{k_{p}}\right) \le \frac{n-1}{n}$ (see
\eqref{probabilitysupp2} and the definition of $k_p$ regarding the
support of $x$).
\noindent Therefore, between two changes of support the number of
iterations is bounded by:
\begin{equation*}
k_{p} - k_{p-1} \le \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^*
- (p-1)\delta )\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right)
+1.
\end{equation*}
We can further derive the following:
\begin{align*}
&\frac{1}{\theta} \left(\log \left[2(F(x^0) - F^* - (p-1)\delta )\right] +
2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) \\
&= \frac{1}{\theta} \left(\log \left[2 (F(x^0) - F^*)\left(1 -
\frac{(p-1)\delta}{F(x^0)-F^*}
\right)\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) \\
& = \frac{1}{\theta} \left(\log \left[2 (F(x^0) - F^*)\right] +
\log\left[1 - \frac{(p-1)\delta}{F(x^0)-F^*}\right] + 2\log\frac{2 n }{\sqrt{\sigma}\xi}
\right) \\
&\le \frac{1}{\theta} \left(\log \left[2(F(x^0) - F^*)\right] -
\frac{(p-1)\delta}{F(x^0)-F^*} + 2\log\frac{2 n }{\sqrt{\sigma}\xi}
\right),
\end{align*}
\noindent where we used the inequality $\log(1-t) \le -t$ for any
$t \in (0, \ 1)$. Denoting with $k_\kappa$ the number of iterations
until the last change of support, we have:
\begin{align*}
k_\kappa & \le \sum\limits_{p=1}^{\kappa}\left[\frac{1}{\theta} \left(\log
\left[2
(F(x^0) - F^*)\right] - \frac{(p-1)\delta}{F(x^0)-F^*}
+ 2\log\frac{2 n }{\sqrt{\sigma}\xi} \right) +1\right] \\
& = \kappa \frac{1}{\theta}\left(\log \left[2(F(x^0) - F^*)\right]
+ 2\log\frac{2 n }{\sqrt{\sigma}\xi} + \frac{\delta}{2 (F(x^0)-F^*)} + \theta \right)
- \frac{\kappa^2}{\theta}
\underbrace{\frac{\delta}{2(F(x^0)-F^*)}}_{\beta}.
\end{align*}
\noindent Once the support is fixed (i.e. after $k_\kappa$
iterations), in order to reach some $\epsilon$-local minimum in
probability with some confidence level $\rho$, the algorithm
(RCD-IHT) has to perform an additional
$$\frac{1}{\theta}\log \frac{f^{k_\kappa+1}-f(x^*)}{\epsilon \rho}$$
iterations, where we used again \eqref{rcd_complexity2} and Markov
inequality. Taking into account that the iteration $k_\kappa$ is the
largest possible integer at which the support of sequence $x^k$
could change, we can bound:
$$f^{k_\kappa + 1}- f(x^*) = \mathbb{E}[F(x^{k_\kappa + 1})-
F(x^*)] \le F(x^0) - F^* - \kappa\delta.$$
\noindent Thus, we obtain:
\begin{align*}
\frac{1}{\theta}&\log \frac{f^{k_\kappa+1}-f(x^*)}{\epsilon \rho} \le \frac{1}{\theta}\log \frac{F(x^0) - F^* - \kappa\delta}{\epsilon \rho} \\
&\le \frac{1}{\theta}\left(\log \left[(F(x^0) - F^*)\left(1 - \frac{\kappa\delta}{F(x^0) - F^*}\right)\right] - \log \epsilon \rho \right)\\
&\overset{\log (1-t) \leq -t}{\le} \frac{1}{\theta}\left(\log (F(x^0) - F^*) - \frac{\kappa\delta}{F(x^0) - F^*} - \log \epsilon \rho \right)\\
&\le \frac{1}{\theta}\left(\log \frac{F(x^0) - F^*}{\epsilon \rho }
- \frac{\kappa\delta}{F(x^0) - F^*} \right).
\end{align*}
Adding up this quantity and the upper bound on $k_{\kappa}$, we get
that the algorithm (RCD-IHT) has to perform at most
$$\frac{1}{\theta} \left(\alpha\kappa - \beta \kappa^2 + \log \frac{F(x^0) - F^*}{\epsilon \rho }\right)\le \frac{1}{\theta} \left(\omega + \log \frac{F(x^0) - F^*}{\epsilon \rho }\right)$$
iterations in order to attain an $\epsilon$-suboptimal point with
probability at least $1-\rho$, which proves the second statement of
our theorem.
\end{proof}
\noindent Note that we have obtained global linear convergence for
our family of random coordinate descent methods on the class of
$\ell_0$ regularized problems with strongly convex objective
function $f$.
\section{Random data experiments on sparse learning} In this
section we analyze the practical performances of our family of
algorithms (RCD-IHT) and compare them with that of algorithm
(IHTA) \cite{Lu:12}. We perform several numerical tests on sparse
learning problems with randomly generated data. All algorithms were
implemented in Matlab and the numerical simulations were
performed on a PC with an Intel Xeon E5410 CPU and 8~GB of RAM.
\noindent Sparse learning represents a collection of learning
methods which seek a tradeoff between some goodness-of-fit measure
and sparsity of the result, the latter property allowing better
interpretability. One of the models widely used in machine learning
and statistics is the linear model (least squares setting). Thus, in
the first set of tests we consider the sparse least squares formulation:
$$\min\limits_{x \in \rset^n} F(x) \quad \left(=\frac{1}{2}\norm{Ax-b}^2 + \lambda \norm{x}_0 \right),$$
where $A \in \rset^{m \times n} $ and $\lambda >0$. We analyze the
practical efficiency of our algorithms in terms of the probability
of reaching a global optimal point. Due to the difficulty of finding
the global solution of this problem, we consider a small model $m=6$
and $n=12$. For each penalty parameter $\lambda$, ranging from
small values (0.01) to large values (2), we ran the family of
algorithms (RCD-IHT), with the separable quadratic approximation (denoted
(RCD-IHT-$u^q$)) and with the exact approximation (denoted (RCD-IHT-$u^e$)),
and (IHTA) \cite{Lu:12} from 100 randomly generated (with random
support) initial vectors. The number of runs out of 100 in which
each method found the global optimum is given in Table \ref{tabel2}.
We observe that for all values of $\lambda$ our algorithms
(RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) are able to identify the global
optimum with a rate of success superior to algorithm (IHTA), and for
extreme values of $\lambda$ our algorithms perform much better than
(IHTA).
\setlength{\tabcolsep}{4pt}
\begin{table}[ht]
\centering \caption{Numbers of runs out of 100 in which algorithms
(IHTA), (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) found global optimum.}
{\small \label{tabel2}
\begin{tabular}{|c|c|c|c|}
\hline
$\lambda $ &\textbf{(IHTA)} & \textbf{(RCD-IHT-$u^q$)} & \textbf{(RCD-IHT-$u^e$)}\\
\hline \hline
$0.01$ & 95 & 96 & 100\\
\hline
$0.07$ & 92 & 92 & 100\\
\hline
$0.09$ & 43 & 51 & 70\\
\hline
$0.15$ & 41 & 47 & 66\\
\hline
$0.35$ & 24 & 28 & 31\\
\hline
$0.8$ & 36 & 43 & 44\\
\hline
$1.2$ & 29 & 29 & 54\\
\hline
$1.8$ & 76 & 81 & 91\\
\hline
$2$ & 79 & 86 & 97 \\
\hline
\end{tabular}
}
\end{table}
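\noindent For completeness, we include a minimal Python sketch of one coordinate-wise hard-thresholding step for the sparse least squares model above, reconstructed from \eqref{deltaqq} and from the comparison of $\Delta^i$ with the penalty parameter used in our analysis; it is meant only as an illustration and is not claimed to reproduce the exact Matlab implementation behind Table \ref{tabel2}. The choice $M_i=\norm{Ae_i}^2$ below is one natural option for the quadratic approximation.
\begin{verbatim}
import numpy as np

# Illustrative coordinate hard-thresholding step for
#   F(x) = 1/2 ||Ax - b||^2 + lambda * ||x||_0,
# keeping coordinate i only if Delta^i(x) > lambda.
rng = np.random.default_rng(1)
m, n, lam = 6, 12, 0.15
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
M = np.sum(A**2, axis=0)                  # per-coordinate curvature constants

def rcd_iht_step(x):
    i = rng.integers(n)                   # uniformly chosen coordinate
    g_i = A[:, i] @ (A @ x - b)           # grad_i f(x)
    h_i = x[i] - g_i / M[i]               # minimizer of the quadratic model
    delta_i = 0.5 * M[i] * h_i**2         # Delta^i(x) for the quadratic model
    x[i] = h_i if delta_i > lam else 0.0  # keep the coordinate only if it pays off
    return x

x = rng.standard_normal(n)
for _ in range(200 * n):
    x = rcd_iht_step(x)
print(np.count_nonzero(x),
      0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.count_nonzero(x))
\end{verbatim}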
\noindent In the second set of experiments we consider the
$\ell_2$ regularized logistic loss model from machine learning
\cite{Bah:13}. In this model the relation between the data,
represented by a random vector $a \in \rset^n$, and its associated
label, represented by a random binary variable $y \in \{0, 1\}$, is
determined by the conditional probability:
$$P\{y | a;x \}= \frac{e^{y \langle a,x\rangle}}{1+e^{\langle a,x\rangle}},$$
where $x$ denotes a parameter vector. Then, for a set of $m$
independently drawn data samples $\{(a_i , y_i )\}_{i=1}^m$, the
joint likelihood can be written as a function of $x$. To find the
maximum likelihood estimate one should maximize the likelihood
function, or equivalently minimize the negative log-likelihood (the
logistic loss):
$$\min\limits_{x \in \rset^n} \frac{1}{m}\sum\limits_{i=1}^m
\left[\log\left(1 + e^{\langle a_i,x\rangle} \right) - y_i\langle a_i,x
\rangle\right]. $$ Under the assumptions that $n \le m$ and that $A = \left[a_1,
\dots, a_m \right] \in \rset^{n \times m}$ has full rank, it is
well known that $f(\cdot)$ is strictly convex. However, there are
important applications (e.g. feature selection) where these
assumptions are not satisfied and the problem is highly ill-posed.
In order to compensate for this drawback, the logistic loss is
regularized by some penalty term (e.g. $\ell_2$ norm $\norm{x}^2_2$,
see \cite{Bah:13,Has:09}). Furthermore, the penalty term implicitly
bounds the length of the minimizer, but does not promote sparse
solutions. Therefore, it is desirable to impose an additional
sparsity regularizer, such as the $\ell_0$ quasinorm. In conclusion,
the problem to be minimized is given by:
$$\min\limits_{x \in \rset^n} F(x) \quad \left(=\frac{1}{m}\sum\limits_{i=1}^m \left[\log\left(1 + e^{\langle a_i,x\rangle} \right)
- y_i\langle a_i,x \rangle\right] + \frac{\nu}{2}\norm{x}^2 +
\norm{x}_{0,\lambda}\right),$$ where now $f$ is strongly convex with
parameter $\nu$. For the simulations, the data were generated uniformly
at random and we fixed the parameters $\nu=0.5$ and $\lambda=0.2$.
Once an instance of random data has been generated, we ran 10 times
our algorithms (RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) and algorithm
(IHTA) \cite{Lu:12} starting from 10 different initial points. We
reported in Table \ref{tabel1} the best results of each algorithm
obtained over all 10 trials, in terms of best function value that
has been attained with associated sparsity and number of iterations.
In order to report relevant information, we have measured the
performance of coordinate descent methods (RCD-IHT-$u^q$) and
(RCD-IHT-$u^e$) in terms of full iterations obtained by dividing the
number of all iterations by the dimension $n$. The column $F^*$
denotes the final function value attained by the algorithms,
$\norm{x^*}_0$ represents the sparsity of the last generated point
and \textit{iter} (\textit{full-iter}) represents the number of
iterations (the number of full iterations). Note that our algorithms
(RCD-IHT-$u^q$) and (RCD-IHT-$u^e$) have superior performance in
comparison with algorithm (IHTA) on the reported instances. We
observe that algorithm (RCD-IHT-$u^e$) performs very few full
iterations in order to attain best function value amongst all three
algorithms. Moreover, the number of full iterations performed by
algorithm (RCD-IHT-$u^e$) scales up very well with the dimension of
the problem.
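\noindent To make the objective of this second experiment concrete, the following Python sketch evaluates the strongly convex smooth part $f$ (logistic loss plus $\ell_2$ term) and the full objective $F$; the data generated below are hypothetical and serve only as an illustration of the model, not of our Matlab implementation.
\begin{verbatim}
import numpy as np

# Smooth part f and full objective F of the l2- plus l0-regularized logistic
# model (samples a_i stored as columns of A, labels y_i in {0,1}).
nu, lam = 0.5, 0.2

def f_and_grad(x, A, y):
    # f(x) = (1/m) sum_i [log(1+exp(<a_i,x>)) - y_i <a_i,x>] + nu/2 ||x||^2
    m = A.shape[1]
    z = A.T @ x
    f = np.mean(np.logaddexp(0.0, z) - y * z) + 0.5 * nu * (x @ x)
    grad = A @ (1.0 / (1.0 + np.exp(-z)) - y) / m + nu * x
    return f, grad

def F(x, A, y):
    # strongly convex f plus the l0 penalty lambda * ||x||_0
    return f_and_grad(x, A, y)[0] + lam * np.count_nonzero(x)

rng = np.random.default_rng(2)
n, m = 20, 100
A, y = rng.uniform(-1.0, 1.0, (n, m)), rng.integers(0, 2, m)
print(F(rng.standard_normal(n), A, y))
\end{verbatim}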
\setlength{\tabcolsep}{4pt}
\begin{table}[ht]
\centering \caption{Performance of Algorithms (IHTA),
(RCD-IHT-$u^q$), (RCD-IHT-$u^e$)} {\small \label{tabel1}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$m\backslash n$ &\multicolumn{3}{c|}{\textbf{(IHTA)}} & \multicolumn{3}{c|}{\textbf{(RCD-IHT-$u^q$)}} &
\multicolumn{3}{c|}{\textbf{(RCD-IHT-$u^e$)}}\\
\cline{2-10}
& $F^*$ & $\norm{x^*}_0$ & iter & $F^*$ & $\norm{x^*}_0$ & full-iter & $F^*$ & $\norm{x^*}_0$ & full-iter\\
\hline
\hline
$20\backslash 100$ & 1.56 & 23 & 797 & 1.39 & 21 & 602 & -0.67 & 15 & 12 \\
\hline
$50\backslash 100$ & -95.88 & 31 & 4847 & -95.85 & 31 & 4046 & -449.99 & 89 & 12 \\
\hline
$30\backslash 200$ & -14.11 & 35 & 2349 & -14.30 & 33 & 1429 & -92.95 & 139 & 12 \\
\hline
$50\backslash 200$ & -0.88 & 26 & 3115 & -0.98 & 25 & 2494 & -13.28 & 83 & 19 \\
\hline
$70\backslash 300$ & -12.07 & 70 & 5849 & -11.94 & 71 & 5296 & -80.90 & 186 & 19 \\
\hline
$70\backslash 500$ & -20.60 & 157 & 6017 & -19.95 & 163 & 5642 & -69.10 & 250 & 16 \\
\hline
$100\backslash 500$ & -0.55 & 16 & 4898 & -0.52 & 16 & 5869 & -47.12 & 233 & 14 \\
\hline
$80\backslash 1000$ & 13.01 & 197 & 9516 & 13.71 & 229 & 7073 & -0.56 & 19 & 13 \\
\hline
$80\backslash 1500$ & 5.86 & 75 & 7825 & 6.06 & 77 & 7372 & -0.22 & 24 & 14 \\
\hline
$150\backslash 2000$ & 26.43 & 418 & 21353 & 25.71 & 509 & 20093 & -30.59 & 398 & 16 \\
\hline
$150\backslash 2500$ & 26.52 & 672 & 15000 & 27.09 & 767 & 15000 & -55.26 & 603 & 17 \\
\hline
\end{tabular}
}
\end{table}
\end{sloppy}
\end{document} |
\begin{document}
\title{Sublinear signal production in a two-dimensional Keller-Segel-Stokes system}
\begin{abstract}
\noindent
{\textbf{Abstract:}
We study the chemotaxis-fluid system
\begin{align*}
\left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c}
n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c)-u\cdot\!\nabla n,\ &x\in\Omega,& t>0,\\
c_{t}\ &=\Delta c-c+f(n)-u\cdot\!\nabla c,\ &x\in\Omega,& t>0,\\
u_{t}\ &=\Delta u+\nabla P+n\cdot\!\nabla\phi,\ &x\in\Omega,& t>0,\\
\nabla\cdot u\ &=0,\ &x\in\Omega,& t>0,
\end{array}\right.
\end{align*}
where $\Omega\subset\mathbb{R}^2$ is a bounded and convex domain with smooth boundary, $\phi\in W^{1,\infty}\left(\Omega\right)$ and $f\in C^1([0,\infty))$ satisfies $0\leq f(s)\leq K_0 s^\alpha$ for all $s\in[0,\infty)$, with $K_0>0$ and $\alpha\in(0,1]$. This system models the chemotactic movement of actively communicating cells in a slowly moving liquid.
We will show that in the two-dimensional setting, for any $\alpha\in(0,1)$, the classical solution to this Keller-Segel-Stokes system is global and remains bounded for all times.
}
\\
{\textbf{Keywords:} chemotaxis, Keller-Segel, Stokes, chemotaxis-fluid interaction, global existence, boundedness}\\
{\textbf{MSC (2010):} 35K35 (primary), 35A01, 35Q35, 35Q92, 92C17}
\end{abstract}
\section{Introduction}\label{sec1:intro}
\textbf{Keller-Segel models.}\quad Chemotaxis is the biological phenomenon of oriented movement of cells under influence of a chemical signal substance. This process is known to play a large role in various biological applications (\cite{HP09}). One of the first mathematical models concerning chemotaxis was introduced by Keller and Segel to describe the aggregation of bacteria (see \cite{KS70} and \cite{KS71}).
A simple realization of a standard Keller-Segel system, which models the assumption that the cells are not only attracted by higher concentration of the signal chemical but also produce the chemical themselves, can be expressed by
\begin{align}\label{KS}\tag{$KS$}
\left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c}
n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\
c_{t}\ &=\Delta c-c+n,\ &x\in\Omega,& t>0,
\end{array}\right.
\end{align}
in a bounded domain $\Omega\subset\mathbb{R}^N$ with $N\geq1$. Herein, $n=n(x,t)$ denotes the unknown density of the involved cells and $c=c(x,t)$ the unknown concentration of the attracting chemical substance.
The Keller-Segel system alone has been studied intensively in the last decades and a wide array of interesting properties, such as finite-time blow-up and spatial pattern formation, has been discovered (see also the surveys \cite{BBWT15},\cite{HP09},\cite{Ho03}). For instance, the Keller-Segel system obtained from \eqref{KS} with homogeneous Neumann boundary conditions, where $\Omega\subset\mathbb{R}^N$ is a ball, admits blow-up solutions for $N\geq2$ if the total initial mass of cells lies above a critical value (\cite{mizoguchi_winkler_13},\cite{win10jde}), while all solutions remain bounded when either $N=1$, or $N=2$ and the initial total mass of cells is below the critical value (\cite{OY01},\cite{NSY97}).
Through its application to various biological contexts, many variants of the Keller-Segel model have been proposed over the years. In particular, adaptions of \eqref{KS} in the form of
\begin{align}\label{kssens}
\refstepcounter{gleichung}
n_{t} =\Delta n-\nabla\!\cdot(nS(x,n,c)\cdot\nabla c),\quad x\in\Omega, t>0,
\end{align}
with given chemotactic sensitivity function $S$, which can either be a scalar function, or more general a tensor valued function (see e.g. \cite{XO09-MSmodels}), for the first equation or
\begin{align}\label{ksox}
\refstepcounter{gleichung}
c_{t} =\Delta c-ng(c),\quad x\in\Omega, t>0,
\end{align}
with given function $g$ for the second equation, have been studied. Both of these adjustments are known to have an influence on the boundedness of solutions to their respective systems. For instance, if we replace the first equation of \eqref{KS} with \eqref{kssens} for a scalar function $S$ satisfying $S(r)\leq C(1+r)^{-\gamma}$ for all $r\geq1$ and some $\gamma>1-\frac{2}{N}$, then all solutions to the corresponding Neumann problem are global and uniformly bounded. On the other hand if $N\geq2$, $\Omega\subset\mathbb{R}^N$ is a ball and $S(r)>cr^{-\gamma}$ for some $\gamma<1-\frac{2}{N}$ then the solution may blow up (\cite{HoWin05_bvblowchemo}).
Considering the adaption of \eqref{KS} with \eqref{ksox} as second equation, which basically corresponds to the assumption that the cells consume some of the chemical instead of producing it, it was shown in \cite{TaoWin12_evsmooth} that for $N=2$ the corresponding Neumann problem possesses bounded classical solutions for suitably regular initial data without any smallness condition. For $N=3$ it was proved that there exist global weak solutions which eventually become smooth and bounded after some waiting time.
A combination of both adjustments, where $S$ is matrix-valued with non-trivial nondiagonal parts, was studied in \cite{win15_chemorot}. There it was shown that under fairly general assumptions on $g$ and $S$ at least one generalized solution exists which is global. This result contains neither a restriction on the spatial dimension nor one on the size of the initial data.
One last adaption of \eqref{KS} we would like to mention has only recently been studied thoroughly and concerns the system
\begin{align}\label{KSa}\tag{$KS^\alpha$}
\left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c}
n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\
c_{t}\ &=\Delta c-c+f(n),\ &x\in\Omega,& t>0,
\end{array}\right.
\end{align}
with $f\in C^1\left([0,\infty)\right)$ satisfying $0\leq f(n)\leq K n^\alpha$ for any $n\geq0$ with $K>0$ and $\alpha>0$. In this setting it is known that the system \eqref{KSa} does not admit any blow-up solutions if $\alpha<\frac{2}{N}$ (\cite{liudongmei15_boundchemo}), but it remains an open question whether this exponent is indeed critical.
Similar forms of $f(n)$ have been treated before, either in the linear case $f(n)=n$ (\cite{Mimura1996499}) or in (sub-)linear cases with an additional logistic growth term introduced to the first equation (e.g. \cite{Os02-chemologatract},\cite{Win10-chemolog},\cite{NaOs13}).
\textbf{Chemotaxis-fluid systems.}\quad Nonetheless, one assumption is shared by all of these adapted Keller-Segel models. That is, only the cell density $n$ and the chemical concentration $c$ are unknown and all other system parameters are fixed. In particular, the models assume that there is no interaction between the cells and their surroundings. However, experimental observations indicate that chemotactic motion inside a liquid can be substantially influenced by the mutual interaction between cells and fluid. For instance, in \cite{tuval2005bacterial} the dynamical generation of patterns and emergence of turbulence in populations of aerobic bacteria suspended in sessile drops of water are reported, whereas examples involving instationary fluids are important in the context of broadcast spawning phenomena related to successful coral fertilization (\cite{coll1994chemical},\cite{miller1985demonstration}).
A model considering the chemotaxis-fluid interaction, building on experimental observations of Bacillus subtilis, was given in \cite{tuval2005bacterial}. In the system in question, the fluid velocity $u=u(x,t)$ and the associated pressure $P=P(x,t)$ are introduced as additional unknown quantities utilizing the incompressible Navier-Stokes equations. One of the first theoretical results concerning solvability in this context was given in \cite{lorz10}, where the local existence of weak solutions for $N\in\{2,3\}$ was established. This setting, however, involved signal consumption in the form of per-capita oxygen consumption of the bacteria, which corresponds to an equation of the form \eqref{ksox}. Since we want to focus on the case of signal production by the cells as realized in \eqref{KS}, a more suitable system in this context is the Keller-Segel-Navier-Stokes system
\begin{align}\label{KSNS}\tag{$KSNS$}
\left\{\begin{array}{r@{\,}r@{\,}r@{\,}l@{\quad}l@{\,}c}
n_{t}\,&+\,&u\cdot\!\nabla n\ &=\Delta n-\nabla\!\cdot(n\nabla c),\ &x\in\Omega,& t>0,\\
c_{t}\,&+\,&u\cdot\!\nabla c\ &=\Delta c-c+n,\ &x\in\Omega,& t>0,\\
u_{t}\,&+\,&u\cdot\!\nabla u\ &=\Delta u-\nabla P+n\cdot\!\nabla\phi,\ &x\in\Omega,& t>0,\\
&&\dive u\ &=0,\ &x\in\Omega,& t>0,
\end{array}\right.
\end{align}
where the fluid is supposed to be driven by forces induced by the fixed gravitational potential $\phi$ and transports both the cells and the chemical.
The mathematical analysis of \eqref{KSNS} regarding global and bounded solutions is far from trivial, as on the one hand its Navier-Stokes subsystem lacks a complete existence theory (\cite{Wie99-NS}) and on the other hand the previously mentioned properties of the Keller-Segel system can still emerge. In order to lessen the analytical effort necessary, a commonly made simplification is to assume that the fluid flow is comparatively slow, so that the fluid velocity evolution may be described by the Stokes equation, obtained by dropping the convective term $u\cdot\!\nabla u$, rather than the full Navier-Stokes system.
Of course, all alterations to \eqref{KS} described above can be included as adjustments to the systems in this Keller-Segel(-Navier)-Stokes setting as well. Their influence on global and bounded solutions is one focal point of recent studies. For instance, an adjustment making use of both sensitivity and chemical consumption has been applied to Keller-Segel-Stokes systems in \cite{win15_globweak3d}, where for scalar-valued sensitivity functions $S$ the existence of global weak solutions for bounded three-dimensional domains has been established. Building on this existence result, it was shown in \cite{win15_chemonavstokesfinal} that the generalized solution approaches a spatially homogeneous steady state under fairly weak assumptions imposed on the parameter functions $S$ and $g$. Under similar assumptions the existence of global weak solutions for suitable non-linear diffusion types has been proven in \cite{francescolorz10}, and the existence of bounded and global weak solutions, even allowing matrix-valued $S$ without a decay assumption, in \cite{win_ct_fluid_3d}.
A Keller-Segel-Stokes system corresponding to the adjustment made to \eqref{KS} by only making use of rotational sensitivity was studied in \cite{Wang20157578}, where it was shown that the Neumann problem for the Keller-Segel-Stokes system possesses a unique global classical solution which remains bounded for all times, if we assume $S$ to satisfy $|S(x,n,c)|\leq C_S(1+n)^{-\alpha}$ with $C_S>0$ for some $\alpha>0$.
Regarding the introduction of the additional logistic growth term $+rn-\mu n^2$ with $r\geq0$ and $\mu>0$ to the first equation, it was shown in \cite[Theorem 1.1]{tao_winkler15_zampfinal} for space dimension $N=3$, that every solution remains bounded if $\mu\geq23$ and thus any blow-up phenomena are excluded. Moreover, these solutions tend to zero (\cite[Theorem 1.2]{tao_winkler15_zampfinal}).
Some of these results have in part been transferred to the full chemotaxis Navier-Stokes system. These include global existence of classical solutions for $N=2$ with scalar-valued sensitivity (\cite{win_fluid_final}), large time behavior and eventual smoothness of such solutions (\cite{win15_chemonavstokesfinal}) and even global existence of mild solutions to double chemotaxis systems under the effect of an incompressible viscous fluid (\cite{kozono15}). Boundedness results with matrix-valued sensitivity without decay requirements but for small initial data have been discussed in \cite{caolan16_smalldatasol3dnavstokes}, and boundedness results under the influence of a logistic growth term in \cite{tao_winkler_non2015}.
\textbf{Main results.}\quad Most of the results stated above are concerned with the chemical consumption version of the chemotaxis model (\cite{Wang20157578} and \cite{tao_winkler_non2015} being the exceptions). To the best of our knowledge, the Stokes variant of chemotaxis-fluid interaction has only been discussed outside of the chemical consumption case either by introducing a logistic growth term as in \cite{tao_winkler_non2015} or by taking a more general chemotactic sensitivity as in \cite{Wang20157578}. Motivated by this fact and the result of \cite{liudongmei15_boundchemo} for \eqref{KSa} mentioned above, we are now interested in whether the influence of a coupled, slowly moving fluid described by the Stokes equations affects the possible choices of $\alpha\in(0,1)$ for which unbounded solutions are excluded. Henceforth, we will consider that the evolution of $(n,c,u,P)$ is governed by the Keller-Segel-Stokes system
\begin{align}\label{KSaS}\tag{$KS^{\alpha}S$}
\left\{\begin{array}{r@{\,}l@{\quad}l@{\,}c}
n_{t}\ &=\Delta n-\nabla\!\cdot(n\nabla c)-u\cdot\!\nabla n,\ &x\in\Omega,& t>0,\\
c_{t}\ &=\Delta c-c+f(n)-u\cdot\!\nabla c,\ &x\in\Omega,& t>0,\\
u_{t}\ &=\Delta u+\nabla P+n\cdot\!\nabla\phi,\ &x\in\Omega,& t>0,\\
\dive u\ &=0,\ &x\in\Omega,& t>0,
\end{array}\right.
\end{align}
where $\Omega\subset\mathbb{R}^2$ is a bounded and smooth domain and $f\in C^1([0,\infty))$ satisfies
\begin{align}\label{fprop}
\refstepcounter{gleichung}
0\leq f(s)\leq K_0 s^\alpha\quad\mbox{for all }s\in[0,\infty)
\end{align}
with some $\alpha\in(0,1]$ and $K_0>0$. We shall examine this system along with no-flux boundary conditions for $n$ and $c$ and a no-slip boundary condition for $u$,
\begin{align}\label{bcond}
\refstepcounter{gleichung}
\frac{\partial n}{\partial\nu}=\frac{\partial c}{\partial\nu}=0\quad\mbox{and}\quad u=0\qquad \mbox{for }x\in\romega\mbox{ and }t>0,
\end{align}
and initial conditions
\begin{align}\label{idcond}
\refstepcounter{gleichung}
n(x,0)=n_0(x),\quad c(x,0)=c_0(x),\quad u(x,0)=u_0(x),\quad x\in\Omega.
\end{align}
For simplicity we will assume $\phi\in\W[1,\infty]$ and that for some $\theta>2$ and $\delta\in(\frac{1}{2},1)$ the initial data satisfy the regularity and positivity conditions
\begin{align}\label{idreg}
\refstepcounter{gleichung}
\begin{cases}&n_0\in \CSp{0}{\bomega}\mbox{ with }n_0> 0\mbox{ in }\bomega,\\
&c_0\in\W[1,\theta]\mbox{ with }c_0> 0\mbox{ in }\bomega,\\
&u_0\in \DA,\end{cases}
\end{align}
where here and below $A^\delta$ denotes the fractional power of the Stokes operator $A:=-\mathcal{P}\Delta$ in $\Lo[2]$ regarding homogeneous Dirichlet boundary conditions, with the Helmholtz projection $\mathcal{P}$ from $\Lo[2]$ to the solenoidal subspace $L^2_\sigma(\Omega):=\left\{\left.\varphi\in\Lo[2]\right\vert\dive\varphi=0\right\}$.
In this framework we can state our main result in the following way:
\begin{theo}\label{Thm:globEx}
Let $\theta>2$, $\delta\in(\frac{1}{2},1)$ and $\Omega\subset\mathbb{R}^2$ be a bounded and convex domain with smooth boundary. Assume $\phi\in\W[1,\infty]$ and that $n_0,c_0$ and $u_0$ comply with \eqref{idreg}. Then for any $\alpha\in(0,1)$, the PDE system \eqref{KSaS} coupled with boundary conditions \eqref{bcond} and initial conditions \eqref{idcond} possesses a solution $(n,c,u,P)$ satisfying
\begin{align*}
\begin{cases}
n\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\
c\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\
u\in\CSp{0}{\bomega\times[0,\infty)}\cap\CSp{2,1}{\bomega\times(0,\infty)},\\
P\in\CSp{1,0}{\bomega\times[0,\infty)},
\end{cases}
\end{align*}
which solves \eqref{KSaS} in the classical sense and remains bounded for all times. This solution is unique within the class of functions which for all $T\in(0,\infty)$ satisfy the regularity properties
\begin{align}\label{locExUClass}
\refstepcounter{gleichung}
\begin{cases}
n\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\CSp{0}{\bomega}}\cap\CSp{2,1}{\bomega\times(0,T)},\\
c\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\W[1,\theta]}\cap\CSp{2,1}{\bomega\times(0,T)},\\
u\in\CSp{0}{[0,T);\Lo[2]}\cap\LSp{\infty}{(0,T);\DA}\cap\CSp{2,1}{\bomega\times(0,T)},\\
P\in \LSp{1}{(0,T);\W[1,2]},
\end{cases}
\end{align}
up to addition of functions $\hat{p}$ to $P$, such that $\hat{p}(\cdot,t)$ is constant for any $t\in(0,\infty)$.
\end{theo}
In view of Theorem \ref{Thm:globEx}, there is no evident difference regarding $\alpha$ between the coupled system \eqref{KSaS} and the chemotaxis system without fluid \eqref{KSa} for dimension $N=2$.
In Section \ref{sec1:lEx} we will briefly discuss local existence of classical solutions and basic a priori estimates. Section \ref{sec2:regu} is dedicated to the connection between the regularity of $n$ and the regularity of the spatial derivative of $u$, which plays a crucial part in obtaining additional information on the regularity of $c$. In Section \ref{sec3:N2} we will combine standard testing procedures with the results from the previous sections to prove the global existence and boundedness of classical solutions to \eqref{KSaS}.
\setcounter{gleichung}{0}
\section{Local existence of classical solutions}\label{sec1:lEx}
The following lemma concerning the local existence of classical solutions, as well as an extensibility criterion, can be proven with exactly the same steps demonstrated in \cite[Lemma 2.1]{win_fluid_final} and \cite[Lemma 2.1]{tao_winkler_chemohapto11siam}.
\begin{lem}[Local existence of classical solutions]\label{Lem:locEx}
Let $\theta>2$, $\delta\in(\frac{1}{2},1)$ and $\Omega\subset\mathbb{R}^2$ be a bounded and convex domain with smooth boundary. Suppose $\phi\in\W[1,\infty]$ and that $n_0,c_0$ and $u_0$ satisfy \eqref{idreg}. Then there exist $T_{max}\in(0,\infty]$ and functions $(n,c,u,P)$ satisfying
\begin{align*}
\begin{cases}
n\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\
c\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\
u\in\CSp{0}{\bomega\times[0,T_{max})}\cap\CSp{2,1}{\bomega\times(0,T_{max})},\\
P\in\CSp{1,0}{\bomega\times[0,T_{max})},
\end{cases}
\end{align*}
that solve \eqref{KSaS} with \eqref{bcond} and \eqref{idcond} in the classical sense in $\Omega\times(0,T_{max})$. Moreover, we have $n>0$ and $c>0$ in $\bomega\times[0,T_{max})$ and the alternative
\begin{align}\label{lExAlt}
\refstepcounter{gleichung}
\mbox{either }\,T_{max}=\infty\,\mbox{ or }\,\|n(\cdot,t)\|_{\Lo[\infty]}+\|c(\cdot,t)\|_{\W[1,\theta]}+\|A^{\delta}u(\cdot,t)\|_{\Lo[2]}\to\infty\ \mbox{as }t\nearrow T_{max}.
\end{align}
This solution is unique among all functions satisfying \eqref{locExUClass} for all $T\in(0,T_{max})$, up to addition of functions $\hat{p}$, such that $\hat{p}(\cdot,t)$ is constant for any $t\in(0,T)$, to the pressure $P$.
\end{lem}
Local existence at hand, we can immediately prove two elementary properties, which will be the starting point for all of our regularity results to come.
\begin{lem}\label{Lem:masscons}
Under the assumptions of Lemma \ref{Lem:locEx}, the solution of \eqref{KSaS} satisfies
\begin{align}\label{masscons-n}
\refstepcounter{gleichung}
\int_{\Omega}\! n(x,t)\intd x=\int_{\Omega}\! n_0=:m\quad\mbox{for all}\ t\in(0,T_{max})
\end{align}
and there exists a constant $C>0$ such that
\begin{align}\label{L1-bound-c}
\refstepcounter{gleichung}
\int_{\Omega}\! c(x,t)\intd x\leq C\quad\mbox{for all}\ t\in(0,T_{max}).
\end{align}
\end{lem}
\begin{bew}
The first property follows immediately from simple integration of the first equation in \eqref{KSaS}. For \eqref{L1-bound-c} we integrate the second equation of \eqref{KSaS} and recall \eqref{fprop} to obtain
\begin{align*}
\frac{\intd}{\intd t}\int_{\Omega}\! c+\int_{\Omega}\! c\leq K_0\int_{\Omega}\! n^\alpha\quad\mbox{for all }t\in(0,T_{max}).
\end{align*}
Hence, making use of \eqref{masscons-n} and the fact $\alpha<1$, $y(t)=\int_{\Omega}\! c(x,t)\intd x$ satisfies the ODI
\begin{align*}
y'(t)+y(t)\leq C_1\|n_0\|_{\Lo[1]}^{\alpha}=C_2\quad\mbox{for all }t\in(0,T_{max})
\end{align*}
for some $C_1>0$ and $C_2:=C_1m^\alpha>0$ in view of \eqref{idreg}. Upon integration we infer
\begin{align*}
y(t)\leq y(0)e^{-t}+C_2\left(1-e^{-t}\right)\quad\mbox{for all }t\in(0,T_{max}),
\end{align*}
which, due to the assumed regularity of $c_0$ in \eqref{idreg}, completes the proof.
\end{bew}
\setcounter{gleichung}{0}
\section{Regularity of u implied by regularity of n}\label{sec2:regu}
Let us recall that $\mathcal{P}$ denotes the Helmholtz projection from $\Lo[2]$ to the subspace $L^2_\sigma\left(\Omega\right)=\left\{\varphi\in\Lo[2]\,\vert\,\dive\varphi=0\right\}$ and $A:=-\mathcal{P}\Delta$ denotes the Stokes operator in $\Lo[2]$ under homogeneous Dirichlet boundary conditions.
For now we limit our observations to a projected version of the Stokes subsystem $\frac{\intd}{\intd t}u+Au=\mathcal{P}\left(n\nabla\phi\right)$ in \eqref{KSaS} without regard for the rest of the system. In contrast to the setting with the full Navier-Stokes equations we can make use of the absence of the convective term $(u\cdot\nabla)u$ in the Stokes equation to gain results concerning the regularity of the spatial derivative $Du$ based on the regularity of the term $\mathcal{P}\left(n\nabla\phi\right)$, which in fact solely depends on the regularity of $n$, due to the assumed boundedness of $\nabla\phi$.
In \cite[Lemma 2.4]{Wang20157578} this correlation between the regularity of $u$ and $n$ is proven in space dimension $N=2$. The proof of \cite[Lemma 2.4]{Wang20157578} is based on an approach employed in \cite[Section 3.1]{win_ct_fluid_3d}, which makes use of general results for sectorial operators shown in \cite{fr69}, \cite{hen81} and \cite{gig81} and mainly relies on an embedding of the domains of fractional powers $D\left(A^\beta\right)$ into $\Lo[p]$, see \cite[Theorem 1.6.1]{hen81} or \cite[Theorem 3]{gig81}, for instance. Since we are only working in two-dimensional domains we will only state the result from \cite[Lemma 2.4]{Wang20157578} here and refer the reader to \cite[Corollary 3.4]{win_ct_fluid_3d} and \cite[Lemma 2.5]{Wang20157578} for the remaining details regarding the proof.
\begin{lem}\label{Lem:u_w1r_from_n_lp}
Let $p\in[1,\infty)$ and $r\in[1,\infty]$ be such that
\begin{align*}
\begin{cases}
r<\frac{2p}{2-p}\quad&\mbox{if }p\leq 2,\\
r\leq\infty\quad&\mbox{if }p>2.
\end{cases}
\end{align*}
Furthermore, let $T>0$ be such that $n:\Omega\times(0,T)\mapsto\mathbb{R}$ satisfies
\begin{align*}
\|n(\cdot,t)\|_{\Lo[p]}\leq \eta\quad\mbox{for all }t\in(0,T),
\end{align*}
with some $\eta>0$. Then for $u_0\in\DA$ with $\delta\in\left(\frac{1}{2},1\right)$ and $\phi\in\W[1,\infty]$ all solutions $u$ of the third and fourth equations in \eqref{KSaS} fulfill
\begin{align*}
\|Du(\cdot,t)\|_{\Lo[r]}\leq C\quad\mbox{for all }t\in(0,T),
\end{align*}
with a constant $C=C(p,r,\eta,u_0,\phi)>0$.
\end{lem}
Evidently, with a suitable bound for $n$ at hand, we immediately obtain the desired boundedness of $u$ in view of Sobolev embeddings. Nevertheless, since we only have the time-independent $L^1$--bound of $n$ from Lemma \ref{Lem:masscons} as a starting point, obtaining a bound for $n$ in $\Lo[p]$ with suitably large $p>1$ will require additional work.
\setcounter{gleichung}{0}
\section{Global existence and boundedness in two-dimensional domains}\label{sec3:N2}
For this section we fix $\theta>2,\delta\in(\frac{1}{2},1)$ and initial data satisfying \eqref{idreg}. In particular, this ensures that all requirements of Lemma \ref{Lem:locEx} are met. Let $(n,c,u,P)$ denote the solution given by Lemma \ref{Lem:locEx} and $T_{max}$ its maximal time of existence. Making use of the connection between the regularity of $u$ and $n$ discussed in the previous section we immediately obtain
\begin{prop}\label{Prop:u_bound_N2_all_p}
For all $r<2$ and all $q<\infty$ there exist constants $C_1>0$ and $C_2>0$ such that the solution to \eqref{KSaS} satisfies
\begin{align*}
\|Du(\cdot,t)\|_{\Lo[r]}\leq C_1\quad\mbox{for all }t\in(0,T_{max})
\end{align*}
and
\begin{align*}
\|u(\cdot,t)\|_{\Lo[q]}\leq C_2\quad\mbox{for all }t\in(0,T_{max}).
\end{align*}
\end{prop}
\begin{bew}
Due to \eqref{masscons-n} and the regularity of $n_0$ we can find $C_3>0$ such that $\|n(\cdot,t)\|_{\Lo[1]}=\|n_0\|_{\Lo[1]}\leq C_3$ holds for all $t\in(0,T_{max})$. Thus, we may apply Lemma \ref{Lem:u_w1r_from_n_lp} with $p=1$ to obtain for any $r<2$ that $\|Du(\cdot,t)\|_{\Lo[r]}\leq C_1\ \mbox{for all }t\in(0,T_{max})$ with some $C_1>0$. The second claim then follows immediately from the Sobolev embedding theorem (\cite[Theorem 5.6.6]{evans}).
\end{bew}
\subsection{Obtaining a first information on the gradient of c}\label{ssec32:reg_c}
In order to derive the bounds necessary in our approach towards the boundedness result we require an estimate on the gradient of $c$ as a starting point. To obtain a first information in this matter, we apply standard testing procedures to derive an energy inequality involving integrals of $n\ln n$ and $|\nabla c|^2$. But first, let us briefly recall Young's inequality in order to fix notation.
\begin{lem}\label{young}
Let $a,b,\varepsilon>0$ and $1<p,q<\infty$ with $\frac{1}{p}+\frac{1}{q}=1$. Then
\begin{align*}
ab\leq\varepsilon a^p+C(\varepsilon,p,q) b^q,
\end{align*}
where $C(\varepsilon,p,q)=(\varepsilon p)^{-\frac{q}{p}}q^{-1}$.
\end{lem}
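The particular case $p=q=2$, which we will use repeatedly below (for instance with $\varepsilon=\frac{3}{K_1}$ in the proof of Lemma \ref{Lem:n-energy}), then reads
\begin{align*}
ab\leq\varepsilon a^2+\frac{1}{4\varepsilon}b^2\quad\mbox{for all }a,b,\varepsilon>0,
\end{align*}
since $C(\varepsilon,2,2)=(2\varepsilon)^{-1}\cdot\frac{1}{2}=\frac{1}{4\varepsilon}$.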
Before we derive an inequality for the time evolution of $\int_{\Omega}\! n\ln n$, we employ the Gagliardo-Nirenberg inequality\ to show one simple preparatory lemma on which we will rely multiple times later on.
\begin{lem}\label{Lem:ngnb}
Let $\Omega\subset\mathbb{R}^2$ be a bounded domain with smooth boundary. Let $r\geq1$ and $s\geq1$. Then for any $\eta>0$ there exists $C>0$ such that
\begin{align*}
\int_{\Omega}\!|\varphi|^{rs}\leq C\left(\int_{\Omega}\!|\nabla(|\varphi|^\nfrac{r}{2})|^2\right)^{\frac{(rs-1)}{r}}+C
\end{align*}
holds for all functions $\varphi\in\Lo[1]$ satisfying $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$ and $\int_{\Omega}\!|\varphi|\leq\eta$.
\end{lem}
\begin{bew}
By an application of the Gagliardo-Nirenberg inequality\ (see \cite[Lemma 2.3]{lankchapto15} for a version including integrability exponents less than $1$) we can pick $C_1>0$ such that
\begin{align*}
\int_{\Omega}\!|\varphi|^{rs}=\||\varphi|^\nfrac{r}{2}\|_{\Lo[2s]}^{2s}\leq C_1\|\nabla(|\varphi|^\nfrac{r}{2})\|_{\Lo[2]}^{2sa}\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}^{2s(1-a)}+C_1\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}^{2s}
\end{align*}
holds for all $\varphi\in\Lo[1]$ with $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$, with $a\in(0,1)$ provided by
\begin{align*}
a=\frac{\frac{r}{2}-\frac{1}{2s}}{\frac{r}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{rs}.
\end{align*}
Since $\int_{\Omega}\!|\varphi|\leq\eta$ we have $\||\varphi|^\nfrac{r}{2}\|_{\Lo[\nfrac{2}{r}]}=\left(\int_{\Omega}\!|\varphi|\right)^\nfrac{r}{2}\leq\eta^\nfrac{r}{2}$ and thus
\begin{align*}
\int_{\Omega}\!|\varphi|^{rs}\leq C_2\left(\int_{\Omega}\!|\nabla(|\varphi|^\nfrac{r}{2})|^2\right)^{\frac{(rs-1)}{r}}+C_2
\end{align*}
for all $\varphi\in\Lo[1]$ satisfying $\nabla(|\varphi|^{\nfrac{r}{2}})\in\Lo[2]$, where $C_2=C_1\max\{\eta,\eta^{rs}\}>0$.
\end{bew}
The particular form in which we will need this inequality most often is the following, which results from Lemma \ref{Lem:ngnb} upon choosing $\varphi=n$, $r=1$ and $s=2$ and using the mass identity \eqref{masscons-n}:
\begin{cor}\label{cor:ngnb}
There exists a constant $K_1>0$ such that the solution of \eqref{KSaS} fulfills
\begin{align*}
\int_{\Omega}\! n^2\leq K_1\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+K_1
\end{align*}
for all $t\in(0,T_{max})$.
\end{cor}
Testing the first equation in \eqref{KSaS} with $\ln n$ yields the following estimate.
\begin{lem}\label{Lem:n-energy}
There exists a constant $K_2>0$ such that the solution of \eqref{KSaS} fulfills
\begin{align}\label{n-energy}
\refstepcounter{gleichung}
\frac{\intd}{\intd t}\int_{\Omega}\! n\ln n +\int_{\Omega}\!|\nabla( n^{\nfrac{1}{2}})|^2\leq K_2\int_{\Omega}\!\left|\Delta c\right|^2+K_2\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
\end{lem}
\begin{bew}
Making use of \eqref{masscons-n} and $\dive u=0$ in $\Omega$, multiplication of the first equation in \eqref{KSaS} with $\ln n$ and integration by parts yield
\begin{align}\label{n-energy-proof-eq1}
\refstepcounter{gleichung}
\frac{\intd}{\intd t}\int_{\Omega}\! n\ln n+\int_{\Omega}\!\frac{|\nabla n|^2}{n}=\int_{\Omega}\!\nabla c\divdot\nabla n\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
To further estimate the right hand side, we first let $K_1>0$ be as in Corollary \ref{cor:ngnb}. Then, integrating the right hand side of \eqref{n-energy-proof-eq1} by parts once more and applying Young's inequality with $p=q=2$ and $\varepsilon=\frac{3}{K_1}$ (see Lemma \ref{young}) and Corollary \ref{cor:ngnb}, we obtain
\begin{align*}
\frac{\intd}{\intd t}\int_{\Omega}\! n\ln n+4\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2&\leq\frac{3}{K_1}\int_{\Omega}\! n^2+C_1\int_{\Omega}\!|\Delta c|^2\\
&\leq\frac{3}{K_1}\left(K_1\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2+K_1\right)+C_1\int_{\Omega}\!|\Delta c|^2
\end{align*}
for all $t\in(0,T_{max})$ and some $C_1>0$. Reordering the terms appropriately completes the proof with $K_2:=\max\{3,C_1\}$.
\end{bew}
The second of the separate inequalities treats the time evolution of $\int_{\Omega}\!|\nabla c|^2$.
\begin{lem}\label{Lem:c-energy}
Given any $\xi>0$, there exists a constant $K_3>0$ such that
\begin{align}\label{c-energy}
\refstepcounter{gleichung}
\frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\! |\nabla c|^2 +\frac{\xi}{4}\int_{\Omega}\!|\Delta c|^2+ \xi\int_{\Omega}\!|\nabla c|^2\leq \frac{1}{2}\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2+K_3
\end{align}
holds for all $t\in(0,T_{max})$.
\end{lem}
\begin{bew}
Testing the second equation of \eqref{KSaS} with $-\xi\Delta c$ and integrating by parts we obtain
\begin{align*}
\frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2 +\xi\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2=-\xi\int_{\Omega}\! f(n)\Delta c+\xi\int_{\Omega}\!\Delta c\nabla c\cdot u
\end{align*}
for all $t\in(0,T_{max})$. An application of Young's inequality to both integrals on the right side therefore implies that
\begin{align}\label{c-energy-eq0}
\refstepcounter{gleichung}
\frac{\xi}{2}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2+\xi\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2 \leq \xi\int_{\Omega}\! f(n)^{2}+\frac{\xi}{2}\int_{\Omega}\!|\Delta c|^2+\xi\int_{\Omega}\!|\nabla c|^2 |u|^2
\end{align}
holds for all $t\in(0,T_{max})$. We fix $q>2$ and make use of the Hölder inequality to see that
\begin{align}\label{c-energy-eq1}
\refstepcounter{gleichung}
\xi\int_{\Omega}\!|\nabla c|^2 |u|^2\leq \xi\|\nabla c\|_{\Lo[\frac{2q}{q-2}]}^2\|u \|_{\Lo[q]}^2
\end{align}
is valid for all $t\in(0,T_{max})$. An application of the Gagliardo-Nirenberg inequality\ combined with \cite[Theorem 3.4]{sima90m} allows us to further estimate
\begin{align*}
\|\nabla c\|_{\Lo[\frac{2q}{q-2}]}^2&\leq C_1\|\Delta c\|_{\Lo[2]}^{\frac{4q+4}{3q}}\| c\|_{\Lo[1]}^{\frac{2q-4}{3q}}+C_1\|c\|_{\Lo[1]}^2\\
&\leq C_2\|\Delta c\|_{\Lo[2]}^{\frac{4}{3}+\frac{4}{3q}}+C_2\quad\mbox{for all }t\in(0,T_{max})
\end{align*}
for some $C_1>0$ and $C_2>0$ in view of \eqref{L1-bound-c}. Plugging this into \eqref{c-energy-eq1} and recalling Proposition \ref{Prop:u_bound_N2_all_p}, we thus find $C_3>0$ such that
\begin{align*}
\xi\int_{\Omega}\!|\nabla c|^2|u|^2\leq C_3\|\Delta c\|_{\Lo[2]}^{\frac{4}{3}+\frac{4}{3q}}+C_3\quad\mbox{for all }t\in(0,T_{max}).
\end{align*}
Since $q>2$, we have $\frac{4}{3}+\frac{4}{3q}<2$ and may apply Young's inequality to obtain
\begin{align}\label{c-energy-eq2}
\refstepcounter{gleichung}
\xi\int_{\Omega}\!|\nabla c|^2|u|^2\leq \frac{\xi}{4}\|\Delta c\|_{\Lo[2]}^2+C_4,
\end{align}
for some $C_4>0$ and all $t\in(0,T_{max})$. To estimate the term containing $f(n)^{2}$ in \eqref{c-energy-eq0} we let $K_1$ denote the positive constant from Corollary \ref{cor:ngnb}. Then, recalling \eqref{fprop} and making use of the fact $\alpha<1$, an application of Young's inequality yields $C_5>0$ fulfilling $\xi f(n)^{2}\leq \frac{1}{2K_1}n^2+C_5$ for all $(x,t)\in\Omega\times(0,T_{max})$ and thus, by Corollary \ref{cor:ngnb}
\begin{align}\label{c-energy-eq3}
\refstepcounter{gleichung}
\xi\int_{\Omega}\! f(n)^{2}\leq \frac{1}{2K_1}\int_{\Omega}\! n^2+C_5|\Omega|\leq \frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+C_6\quad\mbox{for all }t\in(0,T_{max})
\end{align}
with $C_6:=\frac{1}{2}+C_5|\Omega|$. Combining \eqref{c-energy-eq0}, \eqref{c-energy-eq2} and \eqref{c-energy-eq3} completes the proof.
\end{bew}
Before we are able to combine the previous lemmata to derive an ODI appropriate for our purpose, we require one additional result which is a corollary from Lemma \ref{Lem:ngnb}.
\begin{cor}\label{cor:ln_n-gradbound}
There exists a constant $K_4>0$ such that the solution to \eqref{KSaS} obeys
\begin{align*}
\frac{1}{2}\int_{\Omega}\!|\nabla(n^{\nfrac{1}{2}})|^2\geq K_4\int_{\Omega}\! n\ln n -\frac{1}{2}\quad \mbox{for all }t\in(0,T_{max}).
\end{align*}
\end{cor}
\begin{bew}
In view of the pointwise inequality $x\ln x\leq x^2$ for $x\in(0,\infty)$, the positivity of $n$ ascertained in Lemma \ref{Lem:locEx} implies $n\ln n\leq n^2$ for all $t\in(0,T_{max})$, and thus an application of Corollary \ref{cor:ngnb} immediately shows that there exists $C_1>0$ such that
\begin{align*}
\int_{\Omega}\! n\ln n\leq\int_{\Omega}\! n^2\leq C_1\|\nabla(n^{\nfrac{1}{2}})\|_{\Lo[2]}^2+C_1
\end{align*}
holds for all $t\in(0,T_{max})$. Therefore, multiplying by $K_4:=\frac{1}{2C_1}$ and reordering the terms appropriately proves the asserted inequality.
\end{bew}
Adding up suitable multiples of the differential inequalities in Lemma \ref{Lem:n-energy} and Lemma \ref{Lem:c-energy}, we obtain a first bound on the gradient of $c$.
\begin{prop}\label{Prop:grad_c2-bound}
There exists a constant $C>0$ such that the solution of \eqref{KSaS} fulfills
\begin{align}\label{grad_c2-bound}
\refstepcounter{gleichung}
\int_{\Omega}\!|\nabla c|^2\leq C\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
\end{prop}
\begin{bew}
Letting $K_2$ denote the positive constant from Lemma \ref{Lem:n-energy}, we set $\xi=4K_2+4$ and then $K_3>0$ as the corresponding constant given by Lemma \ref{Lem:c-energy}. With the constants defined this way, we know that the inequality
\begin{align}\label{grad_c2_sptemp-eq1}
\refstepcounter{gleichung}
(2K_2+2)\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^2+(K_2+1)\int_{\Omega}\!|\Delta c|^2+(4K_2+4)\int_{\Omega}\!|\nabla c|^2\leq\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+K_3,
\end{align}
holds for all $t\in(0,T_{max})$ due to Lemma \ref{Lem:c-energy}. Thus, adding up \eqref{n-energy} and \eqref{grad_c2_sptemp-eq1} we obtain
\begin{align*}
\frac{\intd}{\intd t}\bigg(\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2\bigg)+\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2+\int_{\Omega}\!|\Delta c|^2+(4K_2+4)\int_{\Omega}\!|\nabla c|^2\leq C_1
\end{align*}
for all $t\in(0,T_{max})$ with $C_1=K_2+K_3>0$. By Corollary \ref{cor:ln_n-gradbound} we can estimate $\frac{1}{2}\int_{\Omega}\!|\nabla(n^\nfrac{1}{2})|^2$ from below to obtain
\begin{align*}
\frac{\intd}{\intd t}\bigg(\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2\bigg)+K_4\int_{\Omega}\! n\ln n+\!\int_{\Omega}\!|\Delta c|^2+2(2K_2+2)\!\int_{\Omega}\!|\nabla c|^2\leq C_2
\end{align*}
for all $t\in(0,T_{max})$, with $K_4>0$ as in Corollary \ref{cor:ln_n-gradbound} and $C_2=C_1+\frac{1}{2}>0$. Dropping the non-negative term involving $|\Delta c|^2$, this implies that $y(t):=\int_{\Omega}\! n\ln n+(2K_2+2)\int_{\Omega}\!|\nabla c|^2$, $t\in[0,T_{max})$ satisfies
\begin{align*}
y'(t)+C_3y(t)\leq C_2\quad\mbox{for all }t\in(0,T_{max}),
\end{align*}
where $C_3:=\min\left\{K_4,2\right\}>0$. Upon an ODE comparison, this leads to the boundedness of $y$ and hence \eqref{grad_c2-bound}, due to $n\ln n$ being bounded from below by the positivity of $n$.
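More precisely, the comparison argument shows that
\begin{align*}
y(t)\leq\max\Big\{y(0),\frac{C_2}{C_3}\Big\}\quad\mbox{for all }t\in(0,T_{max}),
\end{align*}
since $y'(t)<0$ whenever $y(t)>\frac{C_2}{C_3}$.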
\end{bew}
\subsection{Further testing procedures}\label{ssec:33:testing}
The $L^2$--bound on the gradient of $c$ proven in the previous proposition will be our starting point for improving the regularity of both $n$ and $c$. The preparation and combination of differential inequalities concerning $n^p$ and $|\nabla c|^{2q}$, for appropriately chosen $p$ and $q$, will be the main part of this section. The testing procedures employed in this approach are based on those applied to a similar chemotaxis-Stokes system in \cite{win_ct_fluid_3d}.
The following preparatory result, taken from \cite[Lemma 2.5]{win_ct_fluid_2d}, will be a useful tool in estimations later on and is a simple derivation from Young's inequality.
\begin{lem}\label{young_exp-sum1}
Let $a>0$ and $b>0$ be such that $a+b<1$. Then for all $\varepsilon>0$ there exists $C>0$ such that
\begin{align*}
x^ay^b\leq\varepsilon(x+y)+C\quad\mbox{for all }x\geq0\mbox{ and }y\geq0.
\end{align*}
\end{lem}
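One possible argument, included here for completeness, is the following: for $x,y\geq0$ we have $x^ay^b\leq\max\{x,y\}^{a+b}\leq(x+y)^{a+b}$, and since $a+b<1$ an application of Lemma \ref{young} with exponents $\frac{1}{a+b}$ and its conjugate shows $(x+y)^{a+b}\leq\varepsilon(x+y)+C$ for some $C=C(\varepsilon,a,b)>0$.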
The first step towards improving the known regularities of $n$ and $c$ consists of an application of standard testing procedures to obtain separate inequalities regarding the time evolution of $\int_{\Omega}\! n^p$ and $\int_{\Omega}\!|\nabla c|^{2q}$, respectively.
\begin{lem}\label{Lem:np-ineq}
Let $p>1$. Then the solution of \eqref{KSaS} satisfies
\begin{align}\label{np-ineq}
\refstepcounter{gleichung}
\frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p+\frac{2(p-1)}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\leq\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2
\end{align}
for all $t\in(0,T_{max})$.
\end{lem}
\begin{bew}
We multiply the first equation of \eqref{KSaS} with $n^{p-1}$ and integrate by parts to see that
\begin{align*}
\frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p=-(p-1)\int_{\Omega}\!|\nabla n|^2 n^{p-2}+(p-1)\int_{\Omega}\! n^{p-1}\nabla c\cdot\nabla n-\frac{1}{p}\int_{\romega}\! n^{p}u\cdot\vec{\nu}
\end{align*}
holds for all $t\in(0,T_{max})$, where we made use of the fact that $\dive u=0$ and of the divergence theorem to rewrite the last term accordingly. Due to the boundary condition imposed on $u$, this boundary term vanishes, so that an application of Young's inequality to the second integral on the right-hand side implies
\begin{align*}
\frac{1}{p}\frac{\intd}{\intd t}\int_{\Omega}\! n^p+(p-1)\int_{\Omega}\!|\nabla n|^2n^{p-2}\leq\frac{p-1}{2}\int_{\Omega}\!|\nabla n|^2 n^{p-2}+\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2
\end{align*}
for all $t\in(0,T_{max})$. Reordering the terms and rewriting $|\nabla n|^2n^{p-2}=\frac{4}{p^2}|\nabla(n^\nfrac{p}{2})|^2$ completes the proof.
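Note that the final rewriting relies on the pointwise identity $\nabla(n^{\nfrac{p}{2}})=\frac{p}{2}\,n^{\frac{p}{2}-1}\nabla n$, which upon squaring yields $|\nabla(n^{\nfrac{p}{2}})|^2=\frac{p^2}{4}\,n^{p-2}|\nabla n|^2$.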
\end{bew}
\begin{lem}\label{Lem:grad_c2q}
Let $q>1$. Then
\begin{align}\label{grad_c2q}
\refstepcounter{gleichung}
\frac{1}{2q}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^{2q}&+\frac{2(q-1)}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q}\nonumber\\
&\leq\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}+\int_{\Omega}\!|\nabla c|^{2q}|Du|
\end{align}
for all $t\in(0,T_{max})$.
\end{lem}
\begin{bew}
Differentiating the second equation of \eqref{KSaS} and making use of the fact $\Delta|\nabla c|^2=2\nabla c\cdot \nabla\Delta c+2|D^2c|^2$, we obtain
\begin{align*}
\frac{1}{2}\left(|\nabla c|^2\right)_t&=\nabla c\cdot\nabla\left(\Delta c-c+f(n)-u\cdot\nabla c\right)\\
&=\frac{1}{2}\Delta|\nabla c|^2-|D^2c|^2-|\nabla c|^2+\nabla c\cdot\nabla f(n)-\nabla c\cdot\nabla\left(u\cdot\nabla c\right)\mbox{ in }\Omega\times(0,T_{max}).
\end{align*}
We multiply this identity by $\left(|\nabla c|^2\right)^{q-1}$ and integrate by parts over $\Omega$, where due to the Neumann boundary conditions imposed on $n$ and $c$ every boundary integral except the one involving $\frac{\partial|\nabla c|^2}{\partial \nu}$ disappears. Thus, the equality
\begin{align}\label{grad_c2q-eq1}
\refstepcounter{gleichung}
\frac{1}{2q}\frac{\intd}{\intd t}\int_{\Omega}\!|\nabla c|^{2q}&+\frac{q-1}{2}\int_{\Omega}\!|\nabla c|^{2q-4}\Big\vert\nabla|\nabla c|^2\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\int_{\Omega}\!|\nabla c|^{2q}\\
&=\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f(n)-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot\nabla c\right)+\frac{1}{2}\int_{\romega}\!|\nabla c|^{2q-2}\frac{\partial|\nabla c|^2}{\partial\nu}\nonumber
\end{align}
holds for all $t\in(0,T_{max})$. Recalling \eqref{fprop}, we integrate the first integral by parts to see that
\begin{align*}
\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f\left(n\right)\leq K_0\int_{\Omega}\!\Big\vert\nabla|\nabla c|^{2q-2}\Big\vert|\nabla c|\,n^\alpha +K_0\int_{\Omega}\!|\nabla c|^{2q-2}|\Delta c|\,n^\alpha
\end{align*}
holds for all $t\in(0,T_{max})$. Since $\nabla|\nabla c|^{2q-2}=2(q-1)|\nabla c|^{2q-4} D^2 c\cdot\nabla c$ in $\Omega\times(0,T_{max})$, and since the Cauchy-Schwarz inequality implies $|\Delta c|\leq \sqrt{2}|D^2 c|$, we may apply Young's inequality to obtain
\begin{align}\label{grad_c2q-eq2}
\refstepcounter{gleichung}
\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla f\left(n\right)&\leq\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\frac{\left(2K_0(q-1)+\sqrt{2}K_0\right)^2}{4}\int_{\Omega}\!|\nabla c|^{2q-2} n^{2\alpha}\nonumber\\
&=\int_{\Omega}\!|\nabla c|^{2q-2}|D^2 c|^2+\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\int_{\Omega}\!|\nabla c|^{2q-2} n^{2\alpha}
\end{align}
for all $t\in(0,T_{max})$. To treat the second integral on the right hand side of \eqref{grad_c2q-eq1}, we first rewrite
\begin{align}\label{grad_c2q-eq3}
\refstepcounter{gleichung}
-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot \nabla c\right)&=-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(Du\cdot\nabla c\right)-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(D^2 c\cdot u\right)
\end{align}
for all $t\in(0,T_{max})$, and then make use of the pointwise equality
\begin{align*}
|\nabla c|^{2q-2}\nabla c\cdot\left(D^2c\cdot u\right)=\frac{1}{2q}u\cdot\nabla|\nabla c|^{2q}\mbox{ in }\Omega\times(0,T_{max}),
\end{align*}
to see that, since $u$ is divergence free,
\begin{align*}
-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\left(D^2 c\cdot u\right)=\frac{1}{2q}\int_{\Omega}\!(\dive u)|\nabla c|^{2q}=0
\end{align*}
holds for all $t\in(0,T_{max})$. Thus, \eqref{grad_c2q-eq3} implies
\begin{align}\label{grad_c2q-eq4}
\refstepcounter{gleichung}
-\int_{\Omega}\!|\nabla c|^{2q-2}\nabla c\cdot\nabla\left(u\cdot \nabla c\right)\leq\int_{\Omega}\!|\nabla c|^{2q}|Du|\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
For the remaining boundary integral in \eqref{grad_c2q-eq1} we recall that the convexity of $\Omega$ ensures $\frac{\partial|\nabla c|^2}{\partial\nu}\leq0$ on $\romega$ (see \cite[Lemme I.1, p.350]{lion}). Combining this with \eqref{grad_c2q-eq1}, \eqref{grad_c2q-eq2} and \eqref{grad_c2q-eq4} completes the proof due to the identity
\[
\Big\vert\nabla|\nabla c|^q\Big\vert^2=\Big\vert\nabla\left(|\nabla c|^{2\cdot\nfrac{q}{2}}\right)\Big\vert^2=\frac{q^2}{4}|\nabla c|^{2q-4}\Big\vert\nabla|\nabla c|^2\Big\vert^2\quad\mbox{in }\Omega\times(0,T_{max}).\qedhere
\]
\end{bew}
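We remark that the inequality $|\Delta c|\leq\sqrt{2}\,|D^2c|$ used in the preceding proof is, in the present two-dimensional setting, an immediate consequence of the Cauchy-Schwarz inequality, since $|\Delta c|^2=(\partial_{x_1}^2c+\partial_{x_2}^2c)^2\leq2\big((\partial_{x_1}^2c)^2+(\partial_{x_2}^2c)^2\big)\leq2|D^2c|^2$.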
Before uniting the inequalities from \eqref{np-ineq} and \eqref{grad_c2q} into a single energy-type inequality, we estimate the right hand sides therein separately.
\begin{lem}\label{Lem:right_hand_sides}
Let $\infty>q>\max\{2,\frac{1}{\alpha}\}$, $p=\alpha q $. For any $\kappa>0$ there exist constants $K_5,K_6$ and $K_7>0$ such that
\begin{align}\label{right_hand_sides-1}
\refstepcounter{gleichung}
\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2\leq \frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+K_5,
\end{align}
\begin{align}\label{right_hand_sides-2}
\refstepcounter{gleichung}
\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq\frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+K_6
\end{align}
and
\begin{align}\label{right_hand_sides-3}
\refstepcounter{gleichung}
\int_{\Omega}\! |\nabla c|^{2q}|Du|\leq\frac{\kappa}{6}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+K_7
\end{align}
hold for all $t\in(0,T_{max})$.
\end{lem}
\begin{bew}
To prove \eqref{right_hand_sides-1}, we first fix some $\beta_1>1$ and apply Hölder's inequality to obtain
\begin{align}\label{right_hand_sides-eq1}
\refstepcounter{gleichung}
\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2\leq\frac{p-1}{2}\left(\int_{\Omega}\! n^{p\beta_1}\right)^{\nfrac{1}{\beta_1}}\left(\int_{\Omega}\! |\nabla c|^{2\beta'_1}\right)^{\nfrac{1}{\beta'_1}}
\end{align}
for all $t\in(0,T_{max})$, where $\beta'_1$ denotes the Hölder conjugate of $\beta_1$. By \eqref{masscons-n} and Lemma \ref{Lem:ngnb} applied to $\varphi=n$, $\eta=m$, $r=p$ and $s=\beta_1$, we can find $C_1>0$ such that
\begin{align}\label{right_hand_sides-eq1.3}
\refstepcounter{gleichung}
\left(\int_{\Omega}\! n^{p\beta_1}\right)^\nfrac{1}{\beta_1}\leq C_1\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{1-\nfrac{1}{p\beta_1}}+C_1\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
Similarly to the application of the Gagliardo-Nirenberg inequality\ (\cite[Lemma 2.3]{lankchapto15}) utilized in Lemma \ref{Lem:ngnb}, we can show that the second integral on the right in \eqref{right_hand_sides-eq1} satisfies
\begin{align}\label{right_hand_sides-eq1.6}
\refstepcounter{gleichung}
\left(\int_{\Omega}\! |\nabla c|^{2\beta'_1}\right)^{\nfrac{1}{\beta'_1}}\leq C_2\left(\Big\|\nabla|\nabla c|^q\Big\|_{\Lo[2]}^\nfrac{2b_1}{q}\Big\||\nabla c|^q\Big\|_{\Lo[\nfrac{2}{q}]}^\nfrac{(2-2b_1)}{q}+\Big\||\nabla c|^q\Big\|^{\nfrac{2}{q}}_{\Lo[\nfrac{2}{q}]}\right)
\end{align}
for all $t\in(0,T_{max})$ with $C_2>0$ and $b_1\in(0,1)$ provided by
\begin{align*}
b_1=\frac{\frac{q}{2}-\frac{q}{2\beta'_1}}{\frac{q}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{\beta'_1}=\frac{1}{\beta_1}.
\end{align*}
Since Proposition \ref{Prop:grad_c2-bound} implies the boundedness of $\||\nabla c|^q\|_{\Lo[\nfrac{2}{q}]}$, plugging \eqref{right_hand_sides-eq1.3} and \eqref{right_hand_sides-eq1.6} into \eqref{right_hand_sides-eq1} we obtain $C_3>0$ such that
\begin{align*}
\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2
\leq C_3&\bigg(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\bigg)^{1-\nfrac{1}{p\beta_1}}\left(\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)^{\nfrac{1}{q\beta_1}}\\
+&\ C_3\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{1-\nfrac{1}{p\beta_1}}+C_3\left(\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)^{\nfrac{1}{q\beta_1}}+C_3
\end{align*}
holds for all $t\in(0,T_{max})$. Due to $\alpha<1$ the choice of $p=\alpha q$
implies $p<q$ and thus, $1-\frac{1}{p\beta_1}+\frac{1}{q\beta_1}<1$. Therefore, we may apply Lemma \ref{young_exp-sum1} with $\varepsilon=\frac{\kappa}{12}$ to the three terms on the right hand side containing an integral and obtain for some $C_4>0$ that
\begin{align*}
\frac{p-1}{2}\int_{\Omega}\! n^p|\nabla c|^2\leq\frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big|^2\right)+C_4
\end{align*}
holds for all $t\in(0,T_{max})$, which proves \eqref{right_hand_sides-1}. The proof of \eqref{right_hand_sides-2} follows the same reasoning. First, we apply Hölder's inequality with $\beta_2=\frac{q+1}{2}$ and $\beta'_2$ as corresponding Hölder conjugate
to obtain
\begin{align}\label{right_hand_sides-eq2}
\refstepcounter{gleichung}
\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq\left(\int_{\Omega}\! n^{2\alpha\beta_2}\right)^{\nfrac{1}{\beta_2}}\left(\int_{\Omega}\! |\nabla c|^{(2q-2)\beta'_2}\right)^{\nfrac{1}{\beta'_2}}
\end{align}
for all $t\in(0,T_{max})$. Since the choices of $\beta_2$ and $p$ imply $\frac{2\alpha\beta_2}{p}=\frac{\alpha(q+1)}{\alpha q}>1$, we can utilize Lemma \ref{Lem:ngnb} with $\varphi=n$, $r=p$ and $s=\frac{2\alpha\beta_2}{p}$ to estimate
\begin{align}\label{right_hand_sides-eq3}
\refstepcounter{gleichung}
\left(\int_{\Omega}\! n^{2\alpha\beta_2}\right)^{\nfrac{1}{\beta_2}}\leq C_5\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}+C_5\quad\mbox{for all }t\in(0,T_{max}),
\end{align}
with some $C_5>0$. For the integral involving $|\nabla c|^{(2q-2)\beta'_2}$, we make use of the Gagliardo-Nirenberg inequality\ as shown before to obtain $C_6>0$ such that
\begin{align}\label{right_hand_sides-eq4}
\refstepcounter{gleichung}
\left(\int_{\Omega}\! |\nabla c|^{(2q-2)\beta'_2}\right)^{\nfrac{1}{\beta'_2}}\leq C_6\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}+C_6
\end{align}
holds for all $t\in(0,T_{max})$, with $b_2\in(0,1)$ determined by
\begin{align*}
b_2=\frac{\frac{q}{2}-\frac{q}{2(q-1)\beta'_2}}{\frac{q}{2}+\frac{1}{2}-\frac{1}{2}}=1-\frac{1}{(q-1)\beta'_2}=1-\frac{1}{(q-1)}+\frac{1}{(q-1)\beta_2}.
\end{align*}
Thus, a combination of \eqref{right_hand_sides-eq2},\eqref{right_hand_sides-eq3} and \eqref{right_hand_sides-eq4} leads to
\begin{align*}
\bigg(K_0(q-1)+\frac{K_0}{\sqrt{2}}&\bigg)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq C_7\bigg(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\bigg)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}\\
&\ +C_7\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\right)^{\nfrac{(2\alpha\beta_2-1)}{p\beta_2}}+C_7\left(\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)^{\frac{q-1}{q}b_2}+C_7
\end{align*}
for all $t\in(0,T_{max})$ with some $C_7>0$.
Here the choice of $p$ and the fact that $\alpha<1$ imply
\begin{align*}
\frac{2\alpha\beta_2-1}{p\beta_2}+\frac{q-1}{q}b_2&=\frac{2\alpha}{p}-\frac{1}{p\beta_2}+\frac{q-2}{q}+\frac{1}{q\beta_2}\\
&=\frac{2}{q}-\frac{1}{\alpha q\beta_2 }+\frac{q-2}{q}+\frac{1}{q\beta_2 }=1-\frac{1-\alpha}{\alpha q\beta_2 }<1.
\end{align*}
Therefore, the requirements of Lemma \ref{young_exp-sum1} are satisfied again and an application thereof yields $C_8>0$ such that
\begin{align*}
\left(K_0(q-1)+\frac{K_0}{\sqrt{2}}\right)^2\!\int_{\Omega}\! n^{2\alpha}|\nabla c|^{2q-2}\leq \frac{\kappa}{6}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2\right)+C_8
\end{align*}
holds for all $t\in(0,T_{max})$ and thus proving \eqref{right_hand_sides-2}. To verify \eqref{right_hand_sides-3} we fix $\beta_3=\frac{3}{2}$ and $\beta'_3=3$. Since $\beta_3<2$ Hölder's inequality yields
\begin{align*}
\int_{\Omega}\!|\nabla c|^{2q}|Du|\leq\left(\int_{\Omega}\!|\nabla c|^{2q\beta'_3}\right)^{\nfrac{1}{\beta'_3}}\left(\int_{\Omega}\!|D u|^{\beta_3}\right)^{\nfrac{1}{\beta_3}}\leq C_9\Big\||\nabla c|^q\Big\|_{\Lo[6]}^2
\end{align*}
for some $C_9>0$, in view of the boundedness of $\|D u\|_{\Lo[\frac{3}{2}]}$ shown in Proposition \ref{Prop:u_bound_N2_all_p}. Similarly to the previous applications of the Gagliardo-Nirenberg and Young inequalities we can make use of the boundedness of $\||\nabla c|^q\|_{\Lo[\nfrac{2}{q}]}$ to obtain $C_{10}>0$ such that
\begin{align*}
\int_{\Omega}\!|\nabla c|^{2q}|Du|\leq\frac{\kappa}{6}\int_{\Omega}\!\Big|\nabla|\nabla c|^q\Big|^2+C_{10}
\end{align*}
for all $t\in(0,T_{max})$, which completes the proof.
\end{bew}
Combining the three previous lemmata, we are now in a position to control $L^p$--norms of $n$ and $\nabla c$ for arbitrarily large exponents. In fact we have
\begin{prop}\label{Prop:np_grad_c2q-bounds}
Let $\infty>q>\max\{2,\frac{1}{\alpha}\}$ and $p=\alpha q$. Then we can find $C>0$ such that the solution to \eqref{KSaS} satisfies
\begin{align}\label{np-bound}
\refstepcounter{gleichung}
\int_{\Omega}\! n^p\leq C\quad\mbox{for all }t\in(0,T_{max})
\end{align}
and
\begin{align}\label{grad_c2q-bound}
\refstepcounter{gleichung}
\int_{\Omega}\! |\nabla c|^{2q}\leq C\quad\mbox{for all }t\in(0,T_{max}).
\end{align}
\end{prop}
\begin{bew}
Given $q>\max\{2,\frac{1}{\alpha}\}$ and $p=\alpha q$ we fix $\kappa=\min\left\{\frac{2(q-1)}{q^2},\,\frac{2(p-1)}{p^2}\right\}$. By Lemmata \ref{Lem:np-ineq}, \ref{Lem:grad_c2q} and \ref{Lem:right_hand_sides}, we can find $C_1:=K_5+K_6+K_7>0$ such that
\begin{align*}
\frac{\intd}{\intd t}\bigg(\frac{1}{p}\int_{\Omega}\! n^p&+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}\bigg)+\frac{2(p-1)}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2\\
&+\frac{2(q-1)}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q}
\leq\frac{\kappa}{2}\left(\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2\right)+C_1
\end{align*}
holds for all $t\in(0,T_{max})$. Herein the choice of $\kappa$ implies
\begin{align}\label{np_grad_c2q-bounds-eq1}
\refstepcounter{gleichung}
\frac{\intd}{\intd t}\bigg(\frac{1}{p}\int_{\Omega}\! n^p+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}\bigg)+\frac{p-1}{p^2}\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2+\frac{q-1}{q^2}\int_{\Omega}\!\Big\vert\nabla|\nabla c|^q\Big\vert^2+\int_{\Omega}\!|\nabla c|^{2q}\leq C_1
\end{align}
for all $t\in(0,T_{max})$. We drop the non-negative term $\frac{q-1}{q^2}\int_{\Omega}\!|\nabla|\nabla c|^q|^2$ and apply Lemma \ref{Lem:ngnb} to estimate $\int_{\Omega}\!|\nabla(n^\nfrac{p}{2})|^2$ from below in \eqref{np_grad_c2q-bounds-eq1}, to obtain $C_2,C_3>0$ such that $y(t):=\frac{1}{p}\int_{\Omega}\! n^p+\frac{1}{2q}\int_{\Omega}\!|\nabla c|^{2q}$, $t\in(0,T_{max})$ satisfies
\begin{align*}
y'(t)+C_2 y(t)\leq C_3\quad\mbox{for all }t\in(0,T_{max}),
\end{align*}
from which we infer the boundedness of $y$ upon an ODE comparison and thus \eqref{np-bound} and \eqref{grad_c2q-bound}.
\end{bew}
\subsection{Global existence and boundedness}\label{ssec:34:bound}
We can now begin to verify the boundedness of the three quantities appearing in the extensibility criterion \eqref{lExAlt}. The first of these quantities will be $\|A^\delta u(\cdot,t)\|_{\Lo[2]}$.
\begin{prop}\label{Prop:Au2-bound}
Let $\delta\in(\frac{1}{2},1)$ be as in Lemma \ref{Lem:locEx}. There exists a constant $C>0$ such that the solution of \eqref{KSaS} satisfies
\begin{align*}
\|A^\delta u(\cdot,t)\|_{\Lo[2]}\leq C\quad\mbox{for all }t\in(0,T_{max}).
\end{align*}
\end{prop}
\begin{bew}
The proof essentially follows the argumentation of \cite[Lemma 2.3]{win_ct_fluid_2d}, whilst making use of the previously proven bound $\|n\|_{\Lo[p]}\leq C$ for all $t\in(0,T_{max})$ with some $p>2$. Nonetheless, let us recount the main arguments.
It is well known, see \cite[Theorem 38.6]{sellyou} and \cite[p.204]{sohr} for instance, that the Stokes operator $A$ is a positive, sectorial operator and generates a contraction semigroup $\left(e^{-tA}\right)_{t\geq0}$ in $L^2_\sigma(\Omega)$ with operator norm bounded by
\begin{align*}
\|e^{-tA}\|\leq e^{-\mu_1 t}\quad\mbox{for all }t\geq0,
\end{align*}
with some $\mu_1>0$. Furthermore, the operator norm of the fractional powers of the Stokes operator satisfy an exponential decay property (\cite[Theorem 37.5]{sellyou}). That is, there exists $C_1>0$ such that
\begin{align}\label{Au2-bound-expdecprop}
\refstepcounter{gleichung}
\left\| A^\delta e^{-tA}\right\|\leq C_1t^{-\delta}e^{-\mu_1 t}\quad\mbox{for all }t>0.
\end{align}
Thus, representing $u$ by its variation of constants formula
\begin{align*}
u(\cdot,t)=e^{-tA} u_0+\int_0^{t} e^{-(t-s)A}\mathcal{P}\left(n(\cdot,s)\nabla\phi\right)\intd s,\quad t\in(0,T_{max}),
\end{align*}
and applying the fractional power $A^\delta$, we can make use of the fact that $e^{-tA}$ commutes with $A^\delta$ (\cite[IV.(1.5.16), p.206]{sohr}), the contraction property and \eqref{Au2-bound-expdecprop} to find $C_2>0$ such that
\begin{align}\label{Au2-bound-eq1}
\refstepcounter{gleichung}
\|A^\delta u(\cdot,t)\|_{\Lo[2]}&\leq \|A^\delta u_0\|_{\Lo[2]}+C_1\int_0^{t} \left(t-s\right)^{-\delta}e^{-\mu_1(t-s)}\left\|\mathcal{P}\left(n(\cdot,s)\nabla\phi\right)\right\|_{\Lo[2]}\intd s\nonumber\\
&\leq \|A^\delta u_0\|_{\Lo[2]}+C_2\sup_{t\in(0,T_{max})}\left\|n(\cdot,t)\right\|_{\Lo[2]}\int_0^\infty\!\! \sigma^{-\delta} e^{-\mu_1\sigma}\intd \sigma
\end{align}
holds for all $t\in(0,T_{max})$, by the boundedness of $\nabla\phi$. Due to \eqref{idreg} we have $\|A^\delta u_0\|_{\Lo[2]}\leq C_3$ for some $C_3>0$. Furthermore, since $\delta<1$ the integral converges and by Proposition \ref{Prop:np_grad_c2q-bounds} we can find $C_4>0$ such that $\|n(\cdot,t)\|_{\Lo[2]}\leq C_4$ for all $t\in(0,T_{max})$. Combined with \eqref{Au2-bound-eq1} these facts yield
\begin{align*}
\|A^\delta u(\cdot,t)\|_{\Lo[2]}\leq C_5\quad\mbox{for all }t\in(0,T_{max})
\end{align*}
with some $C_5>0$, which completes the proof.
\end{bew}
The second quantity of the extensibility criterion we treat is $\|c(\cdot,t)\|_{\W[1,\theta]}$. In view of Proposition \ref{Prop:np_grad_c2q-bounds}, we can take some $q>\max\{2,\frac{1}{\alpha},\theta\}$ and obtain, by a simple application of the Poincaré inequality, the following.
\begin{cor}\label{Cor:grad_ctheta-bound}
There exists a constant $C>0$ such that
\begin{align*}
\|c(\cdot,t)\|_{\W[1,\theta]}\leq C
\end{align*}
holds for all $t\in(0,T_{max})$.
\end{cor}
Now, to prove the last remaining bound required for the extensibility criterion \eqref{lExAlt}, as well as one of the estimates required for the boundedness result, we require some well known results concerning the Neumann heat semigroup $\left(e^{t\Delta}\right)_{t\geq0}$. These semigroup estimates and Proposition \ref{Prop:np_grad_c2q-bounds} will be the main ingredients of our proof. For more details concerning the estimations used, we refer the reader to \cite[Lemma 2.1]{cao2014global}, \cite[Lemma 1.3]{win10jde} and \cite{hen81}.
\begin{prop}\label{Prop:ninf-bound}
There exists a constant $C>0$ such that
\begin{align*}
\|n(\cdot,t)\|_{\Lo[\infty]}\leq C
\end{align*}
holds for all $t\in(0,T_{max})$.
\end{prop}
\begin{bew}
First, we fix $p>2$ and represent $n$ by its variation of constants formula
\begin{align*}
n(\cdot,t)=e^{t\Delta} n_0-\int_0^{t}e^{(t-s)\Delta}\big(\nabla\cdot\left(n\nabla c\right)+u\cdot\nabla n\big)(\cdot,s)\intd s,\quad t\in(0,T_{max}).
\end{align*}
The fact $\dive u=0$ and the maximum principle therefore yield
\begin{align*}
\|n(\cdot,t)\|_{\Lo[\infty]}&\leq \| n_0\|_{\Lo[\infty]}+\int_0^{t}\left\|e^{(t-s)\Delta}\big(\nabla\cdot\left(n\nabla c+u n\right)\big)(\cdot,s)\right\|_{\Lo[\infty]}\intd s
\end{align*}
for all $t\in(0,T_{max})$. Now, we can make use of the well known smoothing properties of the Neumann heat semigroup (see \cite[Lemma 2.1 (iv)]{cao2014global}), to estimate
\begin{align}\label{ninf-bound-eq1}
\refstepcounter{gleichung}
\|n(\cdot,t)\|_{\Lo[\infty]}&\leq \|n_0\|_{\Lo[\infty]}\\&\ +C_1\int_0^{t}\!\!\left(1+\left(t-s\right)^{-\frac{1}{2}-\frac{1}{p}}\right)e^{-\lambda_1(t-s)}\left(\left\|\left(n\left(\nabla c +u\right)\right)(\cdot,s)\right\|_{\Lo[p]}\right)\intd s\nonumber
\end{align}
for all $t\in(0,T_{max})$ and some $C_1>0$, where $\lambda_1$ denotes the first nonzero eigenvalue of $-\Delta$ in $\Omega$ with regards to the homogeneous Neumann boundary conditions. To estimate $\|n(\nabla c+u)\|_{\Lo[p]}$ we apply Hölder's inequality to obtain some $C_2>0$ such that
\begin{align*}
\|n(\nabla c+u)(\cdot,t)\|_{\Lo[p]}\leq\|n(\cdot,t)\|_{\Lo[2p]}\left(\|\nabla c(\cdot,t)\|_{\Lo[2p]}+\|u(\cdot,t)\|_{\Lo[2p]}\right)\leq C_2
\end{align*}
holds for all $t\in(0,T_{max})$, wherein the boundedness of all quantities on the right hand side followed in view of Propositions \ref{Prop:u_bound_N2_all_p} and \ref{Prop:np_grad_c2q-bounds}. Plugging this into \eqref{ninf-bound-eq1} and recalling $n_0\in \CSp{0}{\bomega}$ yields $C_3>0$ such that
\begin{align*}
\|n(\cdot,t)\|_{\Lo[\infty]}&\leq C_3+C_3\int_0^\infty\!\!\left(1+\sigma^{-\frac{1}{2}-\frac{1}{p}}\right)e^{-\lambda_1\sigma}\intd \sigma
\end{align*}
is valid for all $t\in(0,T_{max})$. By the choice of $p$ we have $-\frac{1}{2}-\frac{1}{p}>-1$ and thus there exists $C_4>0$ such that
\begin{align*}
\|n(\cdot,t)\|_{\Lo[\infty]}\leq C_4\quad\mbox{for all }t\in(0,T_{max}),
\end{align*}
which completes the proof.
\end{bew}
Let us gather the previous results to prove our main theorem.
\begin{proof}[\textbf{Proof of Theorem \ref{Thm:globEx}:}]
As an immediate consequence of the bounds in Proposition \ref{Prop:Au2-bound}, Corollary \ref{Cor:grad_ctheta-bound} and Proposition \ref{Prop:ninf-bound}, we obtain $T_{max}=\infty$ in view of the extensibility criterion \eqref{lExAlt}.
Secondly, since $\theta>2$ we have $\W[1,\theta]\hookrightarrow\Co[\mu_1]$ with $\mu_1=\frac{\theta-2}{\theta}$ (\cite[Theorem 5.6.5]{evans}). Thus, Corollary \ref{Cor:grad_ctheta-bound} implies $\|c(\cdot,t)\|_{\Lo[\infty]}\leq C$ for all $t\in(0,T_{max})$. Additionally, since for $\delta\in(\frac{1}{2},1)$ the fractional powers of the Stokes operator satisfy $D(A^\delta)\hookrightarrow\Co[\mu_2]$ for any $\mu_2\in(0,2\delta-1)$ (see \cite[Lemma III.2.4.3]{sohr} and \cite[Theorem 5.6.5]{evans}), Proposition \ref{Prop:Au2-bound} shows that $\|u(\cdot,t)\|_{\Lo[\infty]}\leq C$ for all $t\in(0,T_{max})$ and the boundedness of $\|n(\cdot,t)\|_{\Lo[\infty]}$ for all $t\in(0,T_{max})$ follows directly from Proposition \ref{Prop:ninf-bound}.
\end{proof}
\end{document} |
\begin{document}
\author{Nero Budur}
\email{[email protected]}
\address{KU Leuven and University of Notre Dame}
\curraddr{KU Leuven, Department of Mathematics,
Celestijnenlaan 200B, B-3001 Leuven, Belgium}
\author{Botong Wang}
\email{[email protected]}
\address{University of Notre Dame}
\curraddr{Department of Mathematics,
255 Hurley Hall, IN 46556, USA}
\date{}
\classification{14D15, 32G08, 14F35}
\keywords{deformation theory, differential graded Lie algebra, cohomology jump locus, local system}
\thanks{The first author was partly sponsored by the Simons Foundation, NSA, and a KU Leuven OT grant. }
\begin{abstract}
To study infinitesimal deformation problems with cohomology constraints, we introduce and study cohomology jump functors for differential graded Lie algebra (DGLA) pairs. We apply this to local systems, vector bundles, Higgs bundles, and representations of fundamental groups. The results obtained describe the analytic germs of the cohomology jump loci inside the corresponding moduli space, extending previous results of Goldman-Millson, Green-Lazarsfeld, Nadel, Simpson, Dimca-Papadima, and of the second author.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Motivation and overview.}\label{subsMot} Consider a representation $\rho:\pi_1(X)\rightarrow GL(n,\mathbb{C})$ of the fundamental group of a topological space $X$. How can one describe all the infinitesimal deformations of $\rho$ constrained by the condition that the degree $i$ cohomology of the corresponding local system $L_\rho$ has dimension $\ge k$? More precisely, fixing $n$, define the cohomology jump locus $\sV^i_k$ as the set of all such representations. $\sV^i_k$ has a natural scheme structure when $X$ is a finite CW-complex, and we are asking for a description of the formal scheme $\sV^i_{k,{(\rho)}}$ at the point $\rho$.
A nice answer to this question was given in certain cases. Define the resonance variety $\mathcal{R}^i_k$ as the set consisting of $\omega\in H^1(X,\cc)$ such that the degree $i$ cohomology of the cup-product complex $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X,\cc),\omega \cup \cdot\, )$ has dimension $\ge k$. $\mathcal{R}^i_k$ also has a natural scheme structure. When $X$ is the complement of a complex hyperplane arrangement and $\rho=\mathbf{1}$ is the trivial rank $n=1$ representation, Esnault, Schechtman, and Viehweg \cite{ESV} showed that there is an isomorphism of reduced formal germs
\begin{equation}\label{eqHA}
(\sV^i_{k})^{red}_{(\mathbf{1})} \cong (\mathcal{R}^i_{k})^{red}_{(0)}.
\end{equation}
This result has been generalized further by Dimca, Papadima, and Suciu \cite{dps} and recently by Dimca and Papadima \cite{dp}. Their more general result identifies $(\sV^i_k)_{(\mathbf{1})}^{red}$ for rank $n\ge 1$ representations on a finite CW-complex $X$ with the reduced formal germ at the origin of a space $\mathcal{R}^i_k$ defined by replacing the cup-product complexes with Aomoto complexes of the differential graded Lie algebra (DGLA) $\sA^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_\mathbb{C} \mathfrak{gl}(\mathbb{C}^r)$, where $\sA^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is any commutative differential graded algebra (CDGA) homotopy equivalent with Sullivan's CDGA $\Omega^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X,\mathbb{C})$ of piecewise smooth $\mathbb{C}$-forms. In particular, (\ref{eqHA}) is recovered since in that case $\Omega^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X,\mathbb{C})$ is formal, that is, it is homotopy equivalent with its cohomology.
In this article we generalize further these results by providing a description of the formal germ $\sV^i_{k,{(\rho)}}$ at any representation $\rho$ of any rank. In other words, we deal with possibly non-reduced formal germs and with possibly non-trivial local systems.
The deformation problem with cohomology constraints can also be posed for different objects. For addressing all deformation problems with cohomology constraints at once, we provide a unified framework via differential graded Lie algebras (DGLA), in a sense which we describe next.
By a deformation problem we mean describing the formal germ $\mathcal{M}_{(\rho)}$ at some object $\rho$ in some moduli space $\mathcal{M}$. This is equivalent to describing the corresponding functor on Artinian local algebras which we denote also by $\mathcal{M}_{(\rho)}$. In fact, this functor is usually well-defined even if the moduli space is not. In practice, every deformation problem over a field of characteristic zero is governed by a DGLA $C$ depending on $(\mathcal{M},\rho)$. This means that the functor $\mathcal{M}_{(\rho)}$ is naturally isomorphic to the deformation functor ${\rm {Def}}(C)$ canonically attached to $C$ via solutions of the Maurer-Cartan equation modulo gauge. An answer to the deformation problem is then obtained by replacing $C$ with a homotopy equivalent DGLA $D$ with enough finiteness conditions which make ${\rm {Def}}(C)={\rm {Def}}(D)$ representable by an honest space, providing a simpler description of $\mathcal{M}_{(\rho)}$ than the original definition. A particularly nice answer to the deformation problem is achieved when $D$ can be taken to be the cohomology $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C)$ of $C$, that is, when $C$ is formal. For an overview of this subject, see \cite{M}.
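For the reader's convenience we recall the standard description underlying this formalism: for an Artinian local algebra $A$ with maximal ideal $m_A$, the set ${\rm {Def}}(C)(A)$ consists of the Maurer-Cartan elements
$$
\Big\{\omega\in C^1\otimes_\mathbb{C} m_A \;\Big|\; d_C\omega+\frac{1}{2}[\omega,\omega]=0\Big\},
$$
taken modulo the gauge action of $\exp(C^0\otimes_\mathbb{C} m_A)$; see \cite{gm} or \cite{M}.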
By a deformation problem with cohomology constraints we mean that we have formal germs $\sV^i_{k ,(\rho)}\subset \mathcal{M}_{(\rho)}$ of objects with a cohomology theory constrained by the condition that the degree $i$ cohomology has dimension $\ge k$. Then we would like to describe $\sV^i_{k ,(\rho)}$. This is equivalent to describing the corresponding functor on Artinian local algebras which we denote also by $\sV^i_{k ,(\rho)}$, and which in fact is usually well-defined even if the formal germs are not. The point of this article is to stress that, in practice, a deformation problem with cohomology constraints over a field of characteristic zero is governed by a pair $(C,M)$ of a DGLA together with a module over it. Given any such pair, we will canonically define {\it cohomology jump functors} ${\rm {Def}}^i_k(C,M)$. When $\sV^i_{k ,(\rho)}\cong{\rm {Def}}^i_k(C,M)$ as subfunctors of $\mathcal{M}_{(\rho)}\cong{\rm {Def}}(C)$, we obtain an answer to the deformation problem with cohomology constraints by replacing $(C,M)$ with a homotopy equivalent pair $(D,N)$ with enough finiteness conditions which makes ${\rm {Def}}^i_k(C,M)\cong{\rm {Def}}^i_k(D,N)$ representable by an honest space (like $\mathcal{R}^i_k$ above). A particularly nice answer is achieved when $(D,N)=(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C),H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M))$, that is when the pair $(C,M)$ is formal.
In this paper we consider the deformation problem with cohomology constraints for linear representations of fundamental groups, local systems, holomorphic vector bundles, and Higgs bundles.
The idea of using DGLA pairs is already implicit in \cite{M-a}, where a functor ${\rm {Def}}_\chi$ of semi-trivialized deformations is attached to a DGLA map $\chi:C\rightarrow C'$. Such a situation arises for a DGLA pair $(C,M)$ by setting $C'={\rm {End}}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M)$ with the induced natural DGLA structure. It is shown in this case in \cite{M-a} that the image of ${\rm {Def}}_\chi\rightarrow {\rm {Def}}(C)$ describes the deformation problem with no-change-in-cohomology constraint. Therefore our ${\rm {Def}}^i_k(C,M)$ can be seen as refinements of ${\rm {Def}}_\chi$ for $\chi:C\rightarrow {\rm {End}}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M)$.
A theorem due to Lurie \cite{Lu} and Pridham \cite{Pr} in the framework of derived algebraic geometry states that, with the appropriate axiomatization, every infinitesimal deformation problem is governed by a DGLA and that a converse holds. A natural question is whether this can be extended to an equivalence between infinitesimal deformation problems with cohomology constraints and DGLA pairs.
This article puts together, simplifies, and extends two previous articles by the second author \cite{w,wb}. The DGLA pairs were introduced in the first version of this article \cite{wb} by the second author to address the reduced structure of the cohomology jump loci. Since \cite{w, wb} will not be published, we will provide complete arguments, even though they may have already appeared in \cite{w, wb}. The main concept introduced in this second version is that of cohomology jump functors of a DGLA pair. Since this refines the deformation functor, and leads to more direct proofs of even stronger results, we feel that this approach is the closest to a hypothetical final answer to the general problem of infinitesimal deformations with cohomology constraints.
Let us describe next in more detail the results of this article.
\subsection{Cohomology jump loci of complexes.}\label{subsCpx} Let $M$ be a finitely generated module over a Noetherian ring $R$. Let
$
G\;\displaystyle{\mathop{ \rightarrow }^d}\; F \rightarrow M\rightarrow 0
$
be a presentation of $M$ by finitely generated free $R$-modules. Then the ideal $$J_k(M)=I_{rank(F)-k}(d)$$ of minors of size $rank(F)-k$ of a matrix representing $d$ does not depend on the choice of presentation. This result goes back to J.W. Alexander and H. Fitting. We generalize it to complexes as follows.
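Before doing so, let us record a minimal illustration of the module case. Take $R=\mathbb{C}[t]$ and $M=R/(t)\oplus R$, with presentation $R\;{\mathop{\rightarrow}^{d}}\; R^2\rightarrow M\rightarrow 0$, where $d$ is given by the one-column matrix $\left(\begin{smallmatrix} t\\ 0\end{smallmatrix}\right)$. Then $J_0(M)=I_2(d)=0$, $J_1(M)=I_1(d)=(t)$ and $J_2(M)=I_0(d)=R$, and the vanishing locus of $J_1(M)$ is exactly the point $t=0$, where the fiber dimension of $M$ jumps from $1$ to $2$.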
Now let $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be a complex of $R$-modules, bounded above, such that $H^i(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ is a finitely generated $R$-module for every $i$. By a lemma of Mumford \cite[III.12.3]{h}, there exists a complex $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ of finitely generated free $R$-modules, and a morphism of complexes $g: F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ which is a quasi-isomorphism. We define the {\it cohomology jump ideals of $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$} to be
$$
J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=I_{rank(F^i)-k+1}(d^{i-1}\oplus d^{i})
$$
where $d^{i-1}: F^{i-1}\to F^i$ and $d^i: F^i\to F^{i+1}$ are the differentials of the complex $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. We show that $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ does not depend on the choice of $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ (Definition-Proposition \ref{defprop}). This is the content of Section \ref{secCJI}.
We are interested in the subscheme of $\spec(R)$ associated to such $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$. This setup occurs in many situations, for example when $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ can be obtained from topology or from DGLA pairs as below.
\subsection{Cohomology jump loci of DGLA pairs.} Let $C=(C^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }, d_C)$ be a DGLA over $\cc$. The deformation functor ${\rm {Def}}(C)$ attached to $C$ is a functor from the category of Artinian local algebras to the category of sets. Let $M=(M^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ },d_M)$ be a differential graded module over $C$. Then, using \ref{subsCpx}, we define and study in Section \ref{secDGLA} the {\it cohomology jump functors} ${\rm {Def}}^i_k(C,M)$ as subfunctors of ${\rm {Def}}(C)$. Let us state the crucial property next.
In general, the DGLA that governs a deformation problem is infinite dimensional in each degree. The following result of Deligne-Goldman-Millson-Schlessinger-Stasheff allows one to replace the DGLA with a finite dimensional one within the same homotopy equivalence class, when such a DGLA is available.
\begin{theorem}[\cite{gm}]\label{gm0}
The deformation functor ${\rm {Def}}(C)$ only depends on the 1-homotopy type of $C$. More precisely, if a morphism of DGLA $f: C\to D$ is 1-equivalent, then the induced transformation on functors $f_*: {\rm {Def}}(C)\to {\rm {Def}}(D)$ is an isomorphism.
\end{theorem}
The familiar notions of $i$-equivalence and $i$-homotopy extend easily to DGLA pairs, see Section \ref{secDGLA}. We also extend the last theorem to DGLA pairs:
\begin{theorem}\label{independence}
The cohomology jump functor ${\rm {Def}}^i_k(C, M)$ only depends on the $i$-homotopy type of $(C, M)$. More precisely, if a morphism of DGLA pairs $g=(g_1, g_2): (C, M)\to (D, N)$ satisfies that $g_1$ is 1-equivalent and $g_2$ is $i$-equivalent, then the induced transformation on functors $g_*: {\rm {Def}}^i_k(C, M)\to {\rm {Def}}^i_k(D, N)$ is an isomorphism.
\end{theorem}
A typical application of Theorem \ref{gm0} is when $C$ is formal. Similarly, we define formality for DGLA pairs and make use of it via Theorem \ref{independence} when we consider concrete deformation problems as below.
In Sections \ref{secQR} and \ref{augment} we address the quadratic cones, resonance varieties, and augmentations of DGLA pairs, needed in our analysis of concrete deformation problems.
\subsection{Holomorphic vector bundles.} In Section \ref{holomorphic}, we consider the moduli space $\mathcal{M}$ of stable rank $n$ holomorphic vector bundles $E$ with vanishing Chern classes on a compact K\"ahler manifold $X$. These holomorphic vector bundles are the ones that admit flat unitary connections. In $\mathcal{M}$, we consider the cohomology jump loci
$$
\sV^{pq}_k(F)=\{E\in\sM \mid \dim H^q(X, E\otimes_{\sO_X} F \otimes_{\sO_X} \Omega^p_X)\geq k \}
$$
with the natural scheme structure, for fixed $p$ and fixed poly-stable bundle $F$ with vanishing Chern classes. We show that this deformation problem with cohomology constraints is governed by the DGLA pair $({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$ constructed from Dolbeault complexes (Theorem \ref{vb1}). Let
\begin{align*}
\mathcal{Q}(E) &=\{\eta\in H^1(X, \enmo(E))\mid
\eta\wedge\eta=0\in H^2(X, \enmo(E))\}, \\
\sR^{pq}_k(E; F) & =\{\eta\in \mathcal{Q}(E) \mid \dim H^q(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, E\otimes F\otimes \Omega^p_X),\eta \wedge\cdot )\geq k\},
\end{align*}
with natural scheme structures defined using \ref{subsCpx}. Formality of the DGLA pair implies:
\begin{thrm}\label{thrmHolVB} Let $X$ be a compact K\"ahler manifold. Let $E$ and $F$ be a stable and, respectively, a poly-stable holomorphic vector bundle with vanishing Chern classes on $X$. Then there is an isomorphism of formal schemes
$$
\sV^{pq}_k(F)_{(E)}\cong\sR^{pq}_k(E;F)_{(0)}.
$$
\end{thrm}
This generalizes the result of Nadel \cite{n} and Goldman-Millson \cite{gm} that $\sM_{(E)}\cong \mathcal{Q}(E)_{(0)}$, and it also generalizes a result of Green-Lazarsfeld \cite{gl1,gl2} for rank $n=1$ bundles. It also implies that if $k=\dim H^q(X, E\otimes F\otimes \Omega^p_X)$, then $\sV^{pq}_k(F)$ has quadratic algebraic singularities at $E$ (Corollary \ref{corQuad}), a result also shown for $F\otimes\Omega_X^p=\mathcal{O}_X$ by Martinengo \cite{ma} and the second author \cite{w}.
\subsection{Irreducible local systems and Higgs bundles.} In Section \ref{localsystem}, we consider the moduli space $\mathcal{M}_{\textrm{B}}$ of irreducible rank $n$ local systems $L$ on a compact K\"ahler manifold $X$, and we consider the cohomology jump loci
$$
\mathcal{V}^i_k(W)=\{ L\in \mathcal{M}_{\textrm{B}}\mid \dim_\mathbb{C} H^i(X,L\otimes_\mathbb{C} W)\ge k \}
$$
with the natural scheme structure, for a fixed semi-simple local system $W$ of any rank. The DGLA pair governing this deformation problem with cohomology constraints is $({A}^{\ubul}_{\rm{DR}}(\enmo(L)), {A}^{\ubul}_{\rm{DR}}(L\otimes W))$, constructed from the de Rham complex. Parallel results and proofs similar to the case of holomorphic vector bundles hold. Let
$$
\mathcal{Q}(L) =\{\eta\in H^1(X, \enmo(L)) \mid
\eta\wedge\eta=0\in H^2(X, \enmo(L))\},$$
$$
\sR^i_k(L;W) =\{ \eta\in \mathcal{Q}(L)\mid \dim H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, L\otimes W), \eta\wedge \cdot)\geq k \},
$$
with the natural scheme structures.
\begin{thrm}\label{thmIrrLS} Let $X$ be a compact K\"ahler manifold. Let $L$ be an irreducible local system on $X$, and let $W$ be a semi-simple local system. The isomorphism of formal schemes
$$
(\mathcal{M}_{\textrm{B}})_{(L)}\cong \mathcal{Q}(L)_{(0)}
$$
induces an isomorphism
$$
\sV^i_k(W)_{(L)}\cong\, \sR^i_k(L;W)_{(0)}.
$$
\end{thrm}
The proof of this result generalizes the main ``strong linearity'' result of Popa-Schnell as stated in \cite[Theorem 3.7]{PS}, proved there for rank one local systems $E$, $W=\mathbb{C}_X$, and $X$ a smooth projective complex variety, see Remark \ref{rmkPS}.
Higgs bundles are similarly treated in Section \ref{secHB}, via a DGLA pair arising from the Higgs complex.
\subsection{Representations of the fundamental group.}
Also in Section \ref{localsystem}, we look at representations of the fundamental group. This case is closely related to the case of local systems. This relation, at the level of deformations, is a particular case of the relation between the deformation functors of an augmented DGLA pair and those of the DGLA pair itself, see Theorem \ref{mainaug}.
Let $X$ be a smooth manifold which is of the homotopy type of a finite type CW-complex, and let $x\in X$ be a base point. The set of group homomorphisms $\homo(\pi_1(X, x), GL(n, \mathbb{C}))$ has naturally a scheme structure. We denote this scheme by $\mathbf{R}(X, n)$. Every closed point $\rho\in\mathbf{R}(X, n)$ corresponds to a rank $n$ local system $L_\rho$ on $X$. Let $W$ be a local system of any rank on $X$. In $\mathbf{R}(X, n)$, we define the cohomology jump loci
$$\tilde\sV^i_k(W)=\{\rho\in \mathbf{R}(X, n)\,|\, \dim H^i(X, L_\rho\otimes_\mathbb{C} W)\geq k\}
$$
with the natural scheme structure (these were denoted $\sV^i_k$ in \ref{subsMot}).
We show that an augmented DGLA pair $({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)), {A}^{\ubul}_{\rm{DR}}(L_\rho\otimes_\mathbb{C} W); \varepsilon)$ governs this deformation problem with cohomology constraints (Theorem \ref{thmRP1}). This generalizes the result of Goldman-Millson \cite{gm} who showed that the deformation problem without cohomology constraints is governed by the augmented DGLA $({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)); \varepsilon)$. This also generalizes the result of Dimca-Papadima \cite{dp} mentioned in \ref{subsMot}. In \cite{dp}, $X$ is allowed to be a connected CW-complex of finite type by replacing the de Rham complex with Sullivan's de Rham complex, but, for simplicity, we opted to leave out this topological refinement.
Thus, the formal scheme of $\tilde\sV^i_k(\mathbb{C}_X^n)$ at the trivial representation only depends on the $k$-homotopy type of the topological space $X$, generalizing a result of \cite{dp} for the underlying reduced germs.
Let
$$
\mathcal{Q}(\rho)=\{\eta\in Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})\,|\,\bar\eta\wedge\bar\eta=0 \in H^2(X, \enmo(L_\rho))\},$$
$$
\sR^i_k(\rho,W)=\{\eta\in \mathcal{Q}(\rho)\,|\, \dim H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, L_\rho\otimes_\mathbb{C} W), \bar\eta\wedge\cdot)\geq k\},
$$
with the natural scheme structures, where $Z^1$ stands for the vector space of 1-cocycles, and $\bar\eta$ is the image of $\eta$ in cohomology.
\begin{thrm}\label{thmRPP}
Let $X$ be a compact K\"ahler manifold, $\rho\in\mathbf{R}(X,n)$ be a semi-simple representation, and $W$ a semi-simple local system on $X$. Then
$$
\tilde\sV^i_k(W)_{(\rho)}\cong \sR^i_k(\rho,W)_{(0)}.
$$
\end{thrm}
This generalizes the result of Simpson \cite{s1} that $
\mathbf{R}(X, n)_{(\rho)}\cong \mathcal{Q}(\rho)_{(0)}
$. With the same assumptions, if in addition $k=\dim H^i(X,L_\rho)$, then $\tilde\sV^i_k(W)$ has quadratic singularities at $\rho$ (Corollary \ref{corQRep}).
\subsection{Other consequences of formality.} Theorems \ref{thrmHolVB}, \ref{thmIrrLS}, \ref{thmRPP} describing the local structure of cohomology jump loci $\sV^i_k$ in terms of cohomology resonance loci $\sR^i_k$ are consequences of the formality of the DGLA pair governing the corresponding deformation problem with cohomology constraints. In Section \ref{secIneq}, we show that formality for a DGLA pair $(C,M)$ leads to more information about the geometry of cohomology resonance loci and about the possible shapes of the sequence of Betti numbers $\dim H^i(M)$. This puts together and extends to DGLA pairs a method which was previously employed in different setups by Lazarsfeld-Popa \cite{LP}, the first author \cite{B-h}, and Popa-Schnell \cite{PS}.
\subsection{Analytic and \'etale local germs.}
According to Artin's approximation theorem \cite{ar}, two analytic germs $(X, x)$ and $(Y, y)$ are isomorphic if and only if the formal schemes $X_{(x)}$ and $Y_{(y)}$ are isomorphic. Furthermore, Artin also showed in \cite{ar1} that as \'etale local germs $(X, x)$ and $(Y, y)$ are isomorphic in the algebraic category. Thus, our results on isomorphisms between formal schemes can be stated as isomorphisms between analytic germs and also between algebraic \'etale germs.
\subsection{Notation.}
Throughout this paper, all rings are defined over $\mathbb{C}$. By an Artinian local algebra, we mean an Artinian local algebra which is of finite type over $\mathbb{C}$. Denote the category of Artinian local algebras with local homomorphisms by $\mathsf{ART}$ and the category of sets by $\mathsf{SET}$. Suppose $\mathbf{F}$ is a functor from $\mathsf{ART}$ to $\mathsf{SET}$. We shall say that a formal scheme $\mathcal{X}$ consisting of only one closed point (or a complete local ring $R$, respectively) prorepresents the functor $\mathbf{F}$ if $\homo(\Gamma(\mathcal{X}, \sO_\mathcal{X}), -)$ (resp. $\homo(R, -)$) is naturally isomorphic to the functor $\mathbf{F}$. By abusing notation, we will frequently use the same letter to denote a closed point in some moduli space and the object the closed point represents. Also by abusing notation, we will frequently use a formal scheme $\mathcal{X}$ (supported at a point) to denote the functor it prorepresents, i.e., $\homo(-, \mathcal{X}): \mathsf{ART}\to \mathsf{SET}$.
\section{Cohomology jump loci of complexes}\label{secCJI}
Let $R$ be a noetherian ring, and let $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be a complex of $R$-modules, bounded above. Suppose $H^i(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ is a finitely generated $R$-module. In this section, we define the notion of cohomology jump ideals for the complex $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. Throughout this section, we assume all complexes of $R$-modules are bounded above and have finitely generated cohomology.
First, we want to replace $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ by a complex of finitely generated free $R$-modules. This is achieved by a lemma of Mumford.
\begin{lemma}[\cite{h}-III.12.3]\label{lemM}
Let $R$ and $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be defined as above. There exists a bounded above complex $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ of finitely generated free $R$-modules and a morphism of complexes $\phi: F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ which is a quasi-isomorphism.
\end{lemma}
\begin{defprop}\label{defprop}{\it
Under the above notations, we define the {\bf cohomology jump ideals} to be
$$
J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=I_{rank(F^i)-k+1}(d^{i-1}\oplus d^{i})
$$
where $I$ denotes the determinantal ideal, $d^{i-1}: F^{i-1}\to F^i$ and $d^i: F^i\to F^{i+1}$ are the differentials of the complex $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. Then $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ does not depend on the choice of $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. }
\end{defprop}
\begin{proof} This is a generalization of the proof of the Fitting Lemma from \cite[20.4]{eisenbud}. We can assume $R$ is local. By \cite[Proposition 4.4.2]{R}, $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ has a unique minimal free resolution $G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } \rightarrow F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. Let $\rho:G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\rightarrow E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ the composition with $\phi$. If $\bar{\phi}: \bar{F}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\rightarrow E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is another free resolution of $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ with finite rank terms, then by \cite[Theorem 3.1.7]{R}, there exists a map $\beta: G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\rightarrow \bar{F}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ unique up to homotopy such that $\bar{\phi}\beta$ is homotopic with $\rho$. Hence $\beta$ is a quasi-isomorphism, and so $G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a minimal free resolution of $\bar{F}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ also. Thus, it is enough to prove that $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ is the same if computed with $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ and $G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$.
By \cite[Proposition 4.4.2]{R}, $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a direct sum of $G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ with a direct sum of shifts of the trivial complex
$$ 0\rightarrow R{\mathop{\rightarrow}^1} R\rightarrow 0.$$ It is enough, by induction, to assume that only one such shifted trivial complex is added to $G^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ to obtain $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. Fix $i$. There are four shifts of the trivial complex that can be added to $G^{i-1}\rightarrow G^i\rightarrow G^{i+1}$. Let $r$ be the rank of $G^i$ and $M$ the matrix of $d_G^{i-1}\oplus d_G^{i}$. The ideals $I_{rank(F^i)-k+1}(d_F^{i-1}\oplus d_F^{i})$ for each of the four possible cases are:
\begin{align*}
I_{r-k+1}\begin{pmatrix} M & 0 \end{pmatrix}, I_{r+2-k}\begin{pmatrix} M & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, I_{r+2-k}\begin{pmatrix} M & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}, I_{r-k+1}\begin{pmatrix} M \\ 0 \end{pmatrix},
\end{align*}
and all are equal to $I_{r-k+1}(M)$, as we wanted to show.
\end{proof}
\begin{cor}\label{idealindep}
If $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is quasi-isomorphic to $E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$, then $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$.
\end{cor}
\begin{cor}\label{tensor}
Let $R$ and $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be defined as above, and let $S$ be a noetherian $R$-algebra. Suppose moreover that $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a complex of flat $R$-modules. Then $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })\otimes_R S=J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R S)$, where we regard $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R S$ as a complex of $S$-modules.
\end{cor}
\begin{proof}
By Lemma \ref{lemM}, there is a quasi-isomorphism $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$, where $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a bounded above complex of finitely generated free $R$-modules. Since $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is bounded above and flat, $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R S$ is quasi-isomorphic to $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } \otimes_R S$. Thus, $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R S)$ can be computed as determinantal ideals of $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R S$. Hence, the corollary follows from the fact that taking determinantal ideals commutes with taking tensor product.
\end{proof}
When $R$ is a field, by definition $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=0$ if $\dim H^i(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })\geq k$ and $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=R$ if $\dim H^i(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })<k$. Thus, we have the following.
\begin{cor}\label{corFl}
Suppose $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a complex of flat $R$-modules. Then for any maximal ideal $m$ of $R$, $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })\subset m$ if and only if $\dim_{R/m} H^i(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R R/m)\geq k$.
\end{cor}
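To illustrate the definition on a minimal example, let $R=\cc[x]$ and consider the two-term complex $R\xrightarrow{\;x\;} R$, placed in degrees $0$ and $1$; it is already a bounded complex of finitely generated free $R$-modules, so it may be used directly to compute the jump ideals. Then
$$
J^0_1=J^1_1=I_1(x)=(x), \qquad J^1_2=I_0=R,
$$
using the usual convention that $I_j=R$ for $j\le 0$. In agreement with Corollary \ref{corFl}, the ideals $J^0_1$ and $J^1_1$ are contained in a maximal ideal $m$ exactly when $m=(x)$, the only point where the fiber complex $\cc\xrightarrow{\;0\;}\cc$ has nonzero $H^0$ and $H^1$, while $J^1_2=R$ reflects the fact that $\dim H^1$ of a fiber never reaches $2$.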
Next, we address a partial generalization of Corollary \ref{idealindep}.
\begin{defn} A morphism of complexes is $q$-{\bf equivalent} if it induces an isomorphism on cohomology up to degree $q$ and a monomorphism at degree $q+1$. For example, $\infty$-equivalent means quasi-isomorphic.
\end{defn}
\begin{prop}\label{propQequiv}
Let $(R,m)$ be a noetherian local ring and let $f: E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\rightarrow E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be a $q$-equivalence between two bounded above complexes of free $R$-modules with finitely generated cohomology. If $f\otimes \id_{R/m}: E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R R/m\rightarrow E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_R R/m$ is also a $q$-equivalence, then $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ for $i\le q$.
\end{prop}
\begin{proof}
Let $\phi: F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ and $\phi': F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ be the minimal free resolutions of $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ and $E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ respectively. Since $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is a complex of free $R$-modules, we can lift the composition $f\circ \phi: F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\to E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ via $\phi'$ to $g:F\to F'$. Thus, we obtain the following diagram,
\begin{equation*}
\xymatrix{
F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\ar[r]^{g}\ar[d]^{\phi}&F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\ar[d]^{\phi'}\\
E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\ar[r]^{f}&E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }
}
\end{equation*}
where $\phi$ and $\phi'$ are $\infty$-equivalent, and $f$ is $q$-equivalent. Since the diagram commutes, $g$ is also $q$-equivalent. Taking the tensor product of the above diagram with $R/m$ over $R$ gives us another diagram,
\begin{equation*}
\xymatrix{
F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m\ar[r]^{\bar{g}}\ar[d]^{\bar\phi}&F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m\ar[d]^{\bar\phi'}\\
E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m\ar[r]^{\bar{f}}&E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m
}
\end{equation*}
where $\bar{f}$ is $q$-equivalent by assumption. Since $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$, $E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$, $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ and $F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ are complexes of free, hence flat, $R$-modules, $\bar\phi$ and $\bar\phi'$ are $\infty$-equivalent. Therefore, $\bar{g}$ is also $q$-equivalent.
Since $F$ and $F'$ are minimal, the differentials in $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m$ and $F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m$ are all zero. Therefore, $\bar{g}: F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m\to F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes_{R}R/m$ being $q$-equivalent means
$$\bar{g}^i: F^i\otimes_{R}R/m\to F'^i\otimes_{R}R/m$$
is an isomorphism for $i\leq q$ and a monomorphism for $i=q+1$. In particular, $rank(F^i)=rank(F'^i)$ for $i\leq q$.
By definition, $J^i_k(E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ and $J^i_k(E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$. Hence we only need to show $J^i_k(F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ for $i\le q$. Recall that $J^i_k(F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=I_{rank(F^i)-k+1}(d^{i-1}\oplus d^i)$, where $d^{i-1}$ and $d^i$ are the differentials in $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. Notice that
$$I_{rank(F^i)-k+1}(d^{i-1}\oplus d^i)=\sum_{0\leq j\leq rank(F^i)-k+1}I_{j}(d^{i-1})\cdot I_{rank(F^i)-k+1-j}(d^{i}).$$
Since $rank(F^i)=rank(F'^i)$ for $i\leq q$, to show $J^i_k(F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })=J^i_k(F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ for $i\le q$, it suffices to show $I_j(d^i)=I_j(d'^i)$ for any $j\in \nn$ and $i\leq q$, where $d'^i$ is the differential in $F'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$. This follows from the following two statements, which will be proved in the next two lemmas.
\begin{enumerate}
\item $g^i: F^i\to F'^i$ is an isomorphism for $i\leq q$;
\item $g^{q+1}: F^{q+1}\to F'^{q+1}$ is injective and its image is a direct summand of $F'^{q+1}$.
\end{enumerate}
\end{proof}
\begin{lemma}
Let $(R, m)$ be a noetherian local ring. Let $h: M\to M'$ be a morphism between finite free $R$-modules. Suppose $h\otimes \id_{R/m}: M\otimes_R R/m\to M'\otimes_R R/m$ is an isomorphism. Then $h$ is an isomorphism.
\end{lemma}
\begin{proof}
The composition $M\stackrel{h}{\to}M'\to M'\otimes_R R/m$ is surjective. By Nakayama's lemma, $h$ is surjective. Since $M'$ is free, the surjection $h$ splits, so tensoring the short exact sequence $0\to Ker(h)\to M\to M'\to 0$ with $R/m$ yields a short exact sequence
$$0\to Ker(h)\otimes_R R/m\to M\otimes_R R/m\to M'\otimes_R R/m\to 0.$$
Since $M\otimes_R R/m\to M'\otimes_R R/m$ is an isomorphism, $Ker(h)\otimes_R R/m=0$. Hence, $Ker(h)=0$ by Nakayama's lemma.
\end{proof}
\begin{lemma}
Let $(R, m)$ be a noetherian local ring. Let $h: M\to M'$ be a morphism between finite free $R$-modules. Suppose $h\otimes \id_{R/m}: M\otimes_R R/m\to M'\otimes_R R/m$ is injective. Then $h$ is injective, and the cokernel of $h$ is a free $R$-module.
\end{lemma}
\begin{proof}
Denote the kernel, image and cokernel of $h$ by $Ker(h)$, $Im(h)$ and $Coker(h)$ respectively. Then we have two short exact sequences,
\begin{equation}\label{ses1}
0\to Ker(h)\to M\to Im(h)\to 0
\end{equation}
and
\begin{equation}\label{ses2}
0\to Im(h)\to M'\to Coker(h)\to 0
\end{equation}
Since $M$ and $M'$ are free $R$-modules, we have the following exact sequences by taking tensor with $R/m$.
$$0\to Tor_1(Im(h), R/m)\to Ker(h)\otimes R/m\to M\otimes R/m\to Im(h)\otimes R/m\to 0$$
and
$$0\to Tor_1(Coker(h), R/m)\to Im(h)\otimes R/m\to M'\otimes R/m\to Coker(h)\otimes R/m\to 0$$
where all the tensor and $Tor$ are over $R$.
Notice that the morphism $h\otimes \id_{R/m}: M\otimes_R R/m\to M'\otimes_R R/m$ factors as $M\otimes_R R/m\to Im(h)\otimes R/m\to M'\otimes R/m$. Since $h\otimes_R \id_{R/m}$ is injective and since $M\otimes_R R/m\to Im(h)\otimes R/m$ is obviously surjective, $Im(h)\otimes R/m\to M'\otimes R/m$ must be injective. Therefore, we have the vanishing $Tor_1(Coker(h), R/m)=0$. Over a noetherian local ring, this means $Coker(h)$ is free. By short exact sequences (\ref{ses2}) and (\ref{ses1}), we can conclude $Im(h)$ and $Ker(h)$ are both free. Thus, $M$ splits as a direct sum of free $R$-modules $Ker(h)$ and $Im(h)$. Now, since $h\otimes \id_{R/m}: M\otimes_R R/m\to M'\otimes_R R/m$ is injective, clearly $Ker(h)=0$.
\end{proof}
\begin{rmk}
The assertion of Proposition \ref{propQequiv} is not necessarily true with only the assumption that $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ and $E'^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ are $q$-equivalent. This can be seen by taking a zero-complex and a free resolution of a non-free $R$-module.
\end{rmk}
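Concretely, over $R=\cc[x]_{(x)}$ one may take the zero complex together with the free resolution $R\xrightarrow{\;x\;}R$ of the non-free module $R/(x)$, placed in degrees $-1$ and $0$. The zero map from the zero complex to the resolution is a $(-1)$-equivalence, yet the two complexes have different jump ideals in degree $-1$: $J^{-1}_1=R$ for the zero complex, while $J^{-1}_1=I_1(x)=(x)$ for the resolution. The point is that after tensoring with $R/m=\cc$ the differential of the resolution vanishes, so its $H^{-1}$ becomes nonzero and the map is no longer a $(-1)$-equivalence; it is this extra hypothesis of Proposition \ref{propQequiv} that fails.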
\section{Cohomology jump loci of DGLA pairs}\label{secDGLA}
In this section we recall the definition of the deformation functor of a DGLA, we define DGLA pairs and their cohomology jump functors, and we prove Theorem \ref{independence} on the invariance of the cohomology jump functors under a change of the DGLA pair.
Firstly, recall the definition of a DGLA over $\mathbb{C}$, for example from \cite{gm}:
\begin{defn}
A DGLA consists of the following set of data,
\begin{enumerate}
\item a graded vector space $C=\bigoplus_{i\in \mathbb{N}}C^i$ over $\mathbb{C}$,
\item a Lie bracket which is bilinear, graded skew-commutative, and satisfies the graded Jacobi identity, i.e., for any $\alpha\in C^i, \beta\in C^j$ and $\gamma\in C^k$,
$$[\alpha, \beta]+(-1)^{ij}[\beta, \alpha]=0$$
and
$$(-1)^{ki}[\alpha, [\beta, \gamma]]+(-1)^{ij}[\beta, [\gamma, \alpha]]+(-1)^{jk}[\gamma, [\alpha, \beta]]=0$$
\item a family of linear maps, called the differential maps, $d^i: C^i\to C^{i+1}$, satisfying $d^{i+1}d^i=0$ and the Leibniz rule, i.e., for $\alpha\in C^i$ and $\beta\in C$
$$d[\alpha, \beta]=[d\alpha, \beta]+(-1)^i[\alpha, d\beta]$$
where $d=\sum d^i: C\to C$.
\end{enumerate}
A homomorphism of DGLAs is a linear map which preserves the grading, Lie bracket, and the differential maps.
\end{defn}
We denote this DGLA by $(C, d)$, or $C$ when there is no risk of confusion.
\begin{defn}\label{module}
Given a DGLA $(C, d_C)$, we define a \textbf{module} over $(C, d_C)$ to be the following set of data,
\begin{enumerate}
\item a graded vector space $M=\bigoplus_{i\in \mathbb{N}} M^i$ together with a bilinear multiplication map $C\tildemes M\to M$, $(a, \xi)\mapsto a\xi$, such that for any $\alpha\in C^i$ and $\xi\in M^j$, $\alpha\xi \in M^{i+j}$. Furthermore, for any $\alpha\in C^i, \beta\in C^j$ and $\zeta\in M$, we require
$$[\alpha, \beta]\zeta=\alpha(\beta\zeta)-(-1)^{ij}\beta(\alpha\zeta).$$
\item a family of linear maps $d^i_M: M^i\to M^{i+1}$ (write $d_M=\sum_{i\in\zz} d^i_M: M\to M$), satisfying $d^{i+1}_M d^i_M=0$. We require $d_M$ to be compatible with the differential on $C$, i.e., for any $\alpha\in C^i$ and $\xi\in M$,
$$d_M(\alpha\xi)=(d_C\alpha)\xi+(-1)^i\alpha(d_M\xi).$$
\end{enumerate}
\end{defn}
We will call such a module a $(C, d_C)$-module, or simply a $C$-module.
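A basic example of a module is the DGLA $C$ itself, acting on itself by the adjoint action $\alpha\xi=[\alpha,\xi]$: condition (1) of Definition \ref{module} is then the graded Jacobi identity, and condition (2) is the Leibniz rule.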
\begin{defn}
A \textbf{homomorphism} of $(C, d_C)$-modules $f: (M, d_M)\to (N, d_N)$ is a linear map $f: M\to N$ which satisfies
\begin{enumerate}
\item $f$ preserves the grading, i.e., $f(M^i)\subset N^i$,
\item $f$ is compatible with multiplication by elements in $C$, i.e., $f(\alpha\xi)=\alpha f(\xi)$, for any $\alpha\in C$ and $\xi \in M$,
\item $f$ is compatible with the differentials, i.e., $f(d_M\alpha)=d_N f(\alpha)$.
\end{enumerate}
\end{defn}
Fixing a DGLA $(C, d_C)$, the category of $C$-modules is an abelian category.
\begin{defn}\label{defnHot}
A \textbf{DGLA pair} is a DGLA $(C, d_C)$ together with a $(C, d_C)$-module $(M, d_M)$. Usually, we write such a pair simply by $(C, M)$. A homomorphism of DGLA pairs $g: (C, M)\to (D, N)$ consists of a map $g_1: C\to D$ of DGLA and a $C$-module homomorphism $g_2: M\to N$, considering $N$ as a $C$-module induced by $g_1$. For $q\in \nn\cup \{\infty\}$, we call $g$ a $q$-\textbf{equivalence} if $g_1$ is 1-equivalent and $g_2$ is $q$-equivalent. Moreover, we define two DGLA pairs to be of the same $q$-\textbf{homotopy type}, if they can be connected by a zig-zag of $q$-equivalences. Two DGLA pairs have the same {\textbf{homotopy type}} if they have the same $\infty$-homotopy type.
\end{defn}
\begin{defn} Let $(C, M)$ be a DGLA pair. Then $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C),0)$, the cohomology of $C$ with zero differentials, is a DGLA, and $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M),0)$, the cohomology of $M$ with zero differentials, is an $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C)$-module. We call the DGLA pair $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M))$ the \textbf{cohomology DGLA pair} of $(C, M)$.
\end{defn}
\begin{assumption} From now on, for a DGLA pair $(C,M)$ we always assume that $M$ is bounded above as a complex and that $H^j(M)$ is a finite dimensional $\cc$-vector space for every $j\in \zz$.
\end{assumption}
\begin{defn}
We say the DGLA pair $(C, M)$ is $q$-\textbf{formal} if $(C, M)$ is of the same $q$-homotopy type as $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M))$. A pair is {\bf formal} if it is $\infty$-formal.
\end{defn}
Given a DGLA pair, we can abstractly define the space of flat connections and the cohomology jump loci as functors from $\mathsf{ART}$ to $\mathsf{SET}$. We will be mainly interested in the case when these functors are prorepresentable.
Given a DGLA $(C, d)$ over $\mathbb{C}$ together with an Artinian local algebra $A$, a groupoid $\sC(C; A)$ is defined in \cite{gm}. We recall this definition. $C\otimes_{\cc}A$ is naturally a DGLA by letting $[\alpha\otimes a, \beta\otimes b]=[\alpha, \beta]\otimes ab$ and $d(\alpha\otimes a)=d\alpha \otimes a$. Let $m$ be the maximal ideal in $A$. Then under the same formula, $C\otimes_{\cc} m$ is also a DGLA. Since $(C\otimes_{\cc}m)^0=C^0\otimes_{\cc}m$ is a nilpotent Lie algebra, the Campbell-Hausdorff multiplication defines a nilpotent Lie group structure on the space $C^0\otimes m$. We denote this Lie group by $\exp(C^0\otimes m)$. Now, an element $\lambdabda\in C^0\otimes m$ acts on $C^1\otimes m$ by
$$\overline\exp(\lambdabda): \alpha \mapsto \exp(\ad \lambdabda)\alpha+\frac{1-\exp(\ad \lambdabda)}{\ad \lambdabda}(d\lambdabda)$$
in terms of power series. This is a group action for the group $\exp(C^0\otimes m)$ on $C^1\otimes m$.
\begin{defn}\label{cat}
The category $\sC(C; A)$ is defined to have objects
$$\obj \sC(C; A)=\{\omegaega\in C^1\otimes_{\cc}m\;|\; d\omegaega+\frac{1}{2}[\omegaega, \omegaega]=0\},$$
and with the morphisms between two elements $\omegaega_1$, $\omegaega_2$
$$\morph(\omegaega_1, \omegaega_2)=\{\lambdabda\in C^0\otimes m\;|\; \overline\exp(\lambdabda)\omegaega_1=\omegaega_2\}.$$
Define the \textbf{deformation functor} to be the functor
$$
{\rm {Def}}(C): A\mapsto \iso\sC(C; A)
$$
from $\mathsf{ART}$ to $\mathsf{SET}$. Here we denote the set of isomorphism classes of a category by $\iso$.
\end{defn}
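For instance, if the bracket of $C$ vanishes identically, then the objects of $\sC(C; A)$ are the closed elements of $C^1\otimes_{\cc}m$, and the action reduces to $\overline\exp(\lambdabda)(\alpha)=\alpha-d\lambdabda$ (since $\exp(\ad\lambdabda)=\id$), so that
$$
{\rm {Def}}(C)(A)=\ker\big(d\colon C^1\otimes_{\cc}m\to C^2\otimes_{\cc}m\big)\big/\,d(C^0\otimes_{\cc}m)\cong H^1(C)\otimes_{\cc}m.
$$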
\begin{defn}
Given any $\omegaega\in \obj\sC(C; A)$ and a $C$-module $M$, we can associate an {\bf Aomoto complex} to it:
\begin{equation}\label{eqAo}
(M\otimes_\mathbb{C} A, d_\omegaega)
\end{equation}
with $$d_\omegaega:=d\otimes \id_A+\omegaega.$$ The condition $d\omegaega+\frac{1}{2}[\omegaega, \omegaega]=0$ implies $d_\omegaega\circ d_\omegaega=0$.
\end{defn}
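The vanishing $d_\omegaega\circ d_\omegaega=0$ can also be checked directly: for $\xi\in M\otimes_\cc A$, the compatibility of $d_M$ with the $C$-action and the identity $[\omegaega,\omegaega]\xi=2\,\omegaega(\omegaega\xi)$ give
$$
d_\omegaega(d_\omegaega\xi)=(d\omegaega)\xi-\omegaega(d\xi)+\omegaega(d\xi)+\omegaega(\omegaega\xi)=\Big(d\omegaega+\frac{1}{2}[\omegaega,\omegaega]\Big)\xi=0.
$$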
\begin{lemma}\label{lemFGCoh} $(M\otimes_\mathbb{C} A,d_\omegaega)$ has finitely generated cohomology over $A$.
\end{lemma}
\begin{proof}
The finite decreasing filtration $M\otimes_\mathbb{C} m^s$ of $M\otimes_\mathbb{C} A$ is compatible with $d_\omegaega$. Consider the associated spectral sequence
$$
E_1^{s,t}=H^{s+t}(M\otimes_\mathbb{C} m^s/m^{s+1},d_\omegaega)\Rightarrow H^{s+t}(M\otimes_\mathbb{C} A,d_\omegaega),
$$
which degenerates after finitely many pages. It is enough to show that $E_1^{s,t}$ are finitely generated. However, this follows from the fact that $d_\omegaega=d\otimes id_A$ on $M\otimes m^s/m^{s+1}$, together with our assumption that $(M,d)$ has finitely generated cohomology.
\end{proof}
\begin{prop}\label{propexp}
Given any $\lambdabda\in C^0\otimes m$, the morphism $\overline\exp(\lambdabda): \omegaega_1\to \omegaega_2$ in $\sC(C; A)$ induces functorially a morphism between complexes $(M\otimes A, d_{\omegaega_1})\to (M\otimes A, d_{\omegaega_2})$.
\end{prop}
\begin{proof}
We need to show the commutativity of the following diagram.
\[
\xymatrixcolsep{8pc}\xymatrix{
M^i\otimes A\ar[rr]^{d_{\omegaega_1}=d_M\otimes \id_A+\omegaega_1}\ar[d]^{\exp(\lambdabda)}&& M^{i+1}\otimes A\ar[d]^{\exp(\lambdabda)}\\
M^i\otimes A\ar[rr]^{d_{\omegaega_2}=d_M\otimes \id_A+\overline\exp(\lambdabda)(\omegaega_1)}&&M^{i+1}\otimes A
}
\]
A direct computation reduces the commutativity to the following lemma.
\end{proof}
\begin{lemma}
Under the above notations, for any $\xi\in M\otimes A$, the following equations hold.
\begin{equation}\label{eqLL}
\exp(\lambdabda)(\omegaega_1\xi)=(\exp(\ad \lambdabda)\omegaega_1)\exp(\lambdabda)\xi
\end{equation}
\begin{equation}\label{eqLLL}
\exp(\lambdabda)d\xi=d(\exp(\lambdabda)\xi)+\left(\frac{1-\exp(\ad \lambdabda)}{\ad \lambdabda}d\lambdabda\right)\exp(\lambdabda)\xi
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma] The equation (\ref{eqLL}) is equivalent to the usual relation $e^\lambda\circ\omegaega_1\circ e^{-\lambda}=e^{[\lambda,-]}\omegaega_1$. Let us recall the proof. We expand the right side of (\ref{eqLL}) and calculate the coefficient of the term $\lambdabda^p \omegaega_1 \lambdabda^q \xi$. It is equal to
\begin{align*}
\sum_{i=0}^q(-1)^{q-i}\,\frac{1}{i!}\frac{1}{(p+q-i)!}{p+q-i\choose p}&=\sum_{i=0}^q(-1)^{q-i}\,\frac{1}{i!p!(q-i)!}\\
&=(-1)^q\frac{1}{p!q!}\sum_{i=0}^q\left((-1)^{i}{q \choose i}\right)
\end{align*}
The last sum is zero unless $q=0$, and in this case, the coefficient is $\frac{1}{p!}$. This is exactly the coefficient of $\lambdabda^p\omegaega_1\xi$ on the left side of the equation.
To show (\ref{eqLLL}), by comparing the coefficients of the term $\lambdabda^p(d\lambdabda)\lambdabda^q\xi$, we are led to show the following equality,
$$\frac{1}{(p+q+1)!}=\sum_{i=0}^{q}\frac{(-1)^{q-i}}{i! (p+q-i+1)!} {p+q-i\choose p}$$
and this is equivalent to
$$\frac{p!q!}{(p+q+1)!}=\sum_{i=0}^q\frac{(-1)^{q-i}}{p+1+q-i}{q\choose i}.$$
Now, the right-hand side is equal to $\int_0^1(1-t)^qt^p\,dt$. Integrating by parts gives $\int_0^1(1-t)^qt^p\,dt=\frac{q}{p+1}\int_0^1(1-t)^{q-1}t^{p+1}\,dt$, so by induction on $q$ the integral is equal to $\frac{p!q!}{(p+q+1)!}$.
\end{proof}
\begin{defn}\label{cohdef}
Let $(C,M)$ be a DGLA pair. We define $\sC^i_k(C, M; A)$ to be the full subcategory of $\sC(C; A)$ consisting of the objects $\omegaega\in \obj\sC(C; A)$ such that $J^i_k(M\otimes_\cc A, d_\omegaega)=0$. This is well-defined since $(M\otimes_\cc A, d_\omegaega)$ is a bounded-above complex with finitely generated cohomology by Lemma \ref{lemFGCoh}.
\end{defn}
\begin{cor}
Under the notations of the previous definition, if $\sC^i_k(C, M; A)$ contains an object $\omegaega$ of $\sC(C; A)$, then $\sC^i_k(C, M; A)$ contains the isomorphism class of $\omegaega$ in $\sC(C; A)$. In other words, if $\omegaega\in \obj\sC^i_k(C, M; A)$, then $\overline\exp(\lambdabda)(\omegaega)\in \obj\sC^i_k(C, M; A)$ for any $\lambdabda\in\exp(C^0\otimes m)$.
\end{cor}
\begin{proof}
Since $\overline{\exp}(\lambdabda)$ has an inverse $\overline{\exp}(-\lambdabda)$, Proposition \ref{propexp} implies that $(M\otimes A, d_\omegaega)$ is isomorphic to $(M\otimes A, d_{\omegaega'})$, where $\omegaega'=\overline\exp(\lambdabda)(\omegaega)$. Thus, for any $\lambdabda\in \exp(C^0\otimes m)$, $\omegaega\in \obj\sC^i_k(C, M; A)$ is equivalent to $\overline\exp(\lambdabda)(\omegaega)\in \obj\sC^i_k(C, M; A)$.
\end{proof}
\begin{lemma}
Let $(C, M)$ be a DGLA pair, and let $p: A\to A'$ be a local ring homomorphism of Artinian local algebras. For any $\omegaega\in \obj\sC^i_k(C, M; A)$, the image of $\omegaega$ under $p_*: \sC(C; A)\to\sC(C; A')$ is contained in $\obj\sC^i_k(C, M; A')$.
\end{lemma}
\begin{proof}
Denote by $\omegaega'$ the image of $\omegaega$ under $p_*$. Since $\omegaega\in \obj\sC^i_k(C, M; A)$, $J^i_k(M\otimes_\cc A, d_\omegaega)=0$. By definition,
$$(M\otimes_\cc A', d_{\omegaega'})=(M\otimes_\cc A, d_{\omegaega})\otimes_A A'. $$
Since $(M\otimes_\cc A, d_{\omegaega})$ is a complex of flat $A$-modules, by Corollary \ref{tensor}
$$J^i_k((M\otimes_\cc A, d_{\omegaega})\otimes_A A')=J^i_k(M\otimes_\cc A, d_{\omegaega})\otimes_A A'$$
and hence
$$
J^i_k(M\otimes_\cc A', d_{\omegaega'})=J^i_k(M\otimes_\cc A, d_{\omegaega})\otimes_A A'=0.
$$
Therefore, $\omegaega'\in \obj\sC^i_k(C, M; A')$.
\end{proof}
\begin{defn}\label{cohfunctor}
The \textbf{cohomology jump functor} associated to a DGLA pair $(C, M)$ is defined to be the functor
$${\rm {Def}}^i_k(C, M): A\mapsto \iso\sC^i_k(C, M; A)$$
from $\mathsf{ART}$ to $\mathsf{SET}$. By the previous lemma, ${\rm {Def}}^i_k(C, M)$ is a subfunctor of ${\rm {Def}}(C)$.
\end{defn}
\begin{theorem} {\bf [= Theorem \ref{independence}.]}\label{independence2}
The cohomology jump functor ${\rm {Def}}^i_k(C, M)$ only depends on the $i$-homotopy type of $(C, M)$. More precisely, if a morphism of DGLA pairs $g: (C, M)\to (D, N)$ is an $i$-equivalence, then the induced transformation on functors $g_*: {\rm {Def}}^i_k(C, M)\to {\rm {Def}}^i_k(D, N)$ is an isomorphism.
\end{theorem}
\begin{proof} Given an Artinian local algebra $A$, we need to show that the following two conditions are equivalent for any $\omegaega\in \obj\sC(C; A)$:
\begin{enumerate}
\item $\omegaega\in \obj\sC^i_k(C, M; A)$
\item $g_{1*}(\omegaega)\in \obj\sC^i_k(D, N; A)$,
\end{enumerate}
where $g=(g_1, g_2)$. According to Proposition \ref{propQequiv}, it is sufficient to show that the two complexes $(M\otimes_\mathbb{C} A, d_\omegaega)$ and $(N\otimes_\mathbb{C} A, d_{g_1(\omegaega)})$ are $i$-equivalent. Now this follows from the argument of \cite[Theorem 3.7]{dp}: our hypothesis on $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } (g_2)$ implies that the map between the $E_1$ terms of the spectral sequences of the two complexes formed as in the proof of Lemma \ref{lemFGCoh} is an isomorphism for ${\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\le i$ and a monomorphism for ${\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }=i+1$, and this suffices.
\end{proof}
\section{Resonance varieties of DGLA pairs}\label{secQR}
Let $(C,M)$ be a DGLA pair. In this section we consider a nice description of ${\rm {Def}}(C)$ and ${\rm {Def}}^i_k(C,M)$ in terms of the space of flat connections and the resonance varieties, which can be defined when $(C,M)$ satisfies some finiteness conditions.
\begin{defn} The {\bf space of flat connections} of $C$ is
$$
\sF(C)=\obj\sC(C;\mathbb{C})=\{\omegaega\in C^1\mid d\omegaega+\frac{1}{2}[\omegaega,\omegaega]=0 \}.
$$
When $\dim C^1<\infty$, $\sF(C)$ is an affine scheme of finite type over $\mathbb{C}$.
The space of flat connections of $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C)$ is called the {\bf quadratic cone} of $C$,
$$\mathcal{Q}(C)=\{\eta\in H^1(C)\mid [\eta,\eta]=0 \}.$$
Since by assumption $\dim H^1(C)<\infty$, $\mathcal{Q}(C)$ is an affine scheme of finite type over $\mathbb{C}$.
\end{defn}
Suppose that $[C^0, C^1]=0$, i.e., $[\alpha, \beta]=0$ for any $\alpha\in C^0$ and $\beta\in C^1$, and that the differential vanishes on $C^0$; both conditions hold, for example, for the cohomology DGLA $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C)$ when $[H^0(C), H^1(C)]=0$. Then, by the formula for $\overline{\exp}$, the action of $C^0$ on $C^1$ is trivial. Thus, we have the following:
\begin{lemma}\label{lemF}
Let $C$ be a DGLA with $[C^0, C^1]=0$, with differential vanishing on $C^0$, and with $\dim C^1<\infty$. Then ${\rm {Def}}(C)$ is prorepresented by $\sF(C)_{(0)}$.
\end{lemma}
\begin{cor}\label{corQ} Let $C$ be a DGLA with $[H^0(C), H^1(C)]=0$. If $C$ is 1-formal, then ${\rm {Def}}(C)$ is prorepresented by $\mathcal{Q}(C)_{(0)}$.
\end{cor}
\begin{proof}
By Theorem \ref{gm0}, ${\rm {Def}}(C)$ is naturally isomorphic to ${\rm {Def}}(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C))$. Since $H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C)$ has zero differential and $[H^0(C), H^1(C)]=0$, the last functor is prorepresented, by Lemma \ref{lemF}, by $\sF(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C))_{(0)}=\mathcal{Q}(C)_{(0)}$.
\end{proof}
\begin{defn}\label{dglaresonance}
Let $(C, M)$ be a DGLA pair with $\dim C^1<\infty$. There is a tautological section $\zeta=\zeta_{\sF(C)}$ of the sheaf $C^1\otimes_\cc\mathcal{O}_{\sF(C)}$, whose value at a closed point $\omegaega\in\sF(C)$ is $\omegaega$ itself. Hence there is a universal complex on $\sF(C)$,
$$(M\otimes_\cc \sO_{\sF(C)}, d_\zeta=d_M\otimes \id_{\sO_{\sF(C)}} +\zeta),$$
which interpolates point-wise all the complexes as in (\ref{eqAo}) with $A=\mathbb{C}$.
Define the \textbf{resonance variety} $\sR^i_k(C,M)$ to be the closed subscheme of $\sF(C)$ of finite type over $\mathbb{C}$ defined by the ideal $J^i_k(M\otimes_\cc \sO_{\sF(C)}, d_\zeta)$. This is well-defined as long as the complex $(M\otimes_\cc \sO_{\sF(C)}, d_\zeta)$ has finitely generated cohomology, so, in particular, when $M^i$ are finite-dimensional. The {\bf cohomology resonance variety} ${}^h\sR^i_k(C,M)=\sR^i_k(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M))$ is always well-defined, and admits a presentation in terms of linear algebra: it is the subscheme of $\mathcal{Q}(C)$ defined by the cohomology jump ideal $J^i_k$ of the complex $(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M)\otimes_\cc \sO_{\mathcal{Q}(C)}, \zeta_{\mathcal{Q}(C)})$, where $\zeta_{\mathcal{Q}(C)}$ is the tautological section of $H^1(C)\otimes_\cc\mathcal{O}_{\mathcal{Q}(C)}$.
\end{defn}
By Corollary \ref{corFl}, we have:
\begin{lemma}\label{lemRR}
Set-theoretically,
$$\sR^i_k(C,M)=\{\omegaega\in \sF(C)\mid \dim H^i(M, d_{\omegaega}=d_M+\omegaega)\geq k\} $$
when well-defined, and
$${}^h\sR^i_k(C,M)=\{\eta\in \mathcal{Q}(C)\mid \dim H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M), \eta)\geq k\}. $$
\end{lemma}
By Lemma \ref{lemF} and the definitions:
\begin{lemma}
Let $(C, M)$ be a DGLA pair with $[C^0,C^1]=0$, with differential vanishing on $C^0$, $\dim C^1<\infty$, and $\dim M^i<\infty$ for $i\le q$ for some $q\ge 1$. Then ${\rm {Def}}^i_k(C,M)$ is prorepresented by $\sR^i_k(C,M)_{(0)}$ for $i\le q$.
\end{lemma}
Hence, together with Corollary \ref{corQ} and by definitions:
\begin{cor}\label{cohformal}
Let $(C, M)$ be a $q$-formal DGLA pair with $[H^0(C),H^1(C)]=0$, $q\ge 1$.
Then ${\rm {Def}}^i_k(C, M)$ is prorepresented by ${}^h\sR^i_k(C,M)_{(0)}$ for $i\le q$.
\end{cor}
\section{Augmented DGLA pairs}\label{augment}
\begin{defn}
Let $C$ be a DGLA, and let $\mathfrak{g}$ be a Lie algebra. We can regard $\mathfrak{g}$ as a DGLA concentrated in degree zero. An augmentation map is a DGLA map $\varepsilon: C\to \mathfrak{g}$. The augmentation ideal of $\varepsilon$ is defined to be the kernel of $\varepsilon$, which is clearly a DGLA too. Denote the augmentation ideal of $\varepsilon$ by $C_0$. Moreover, suppose $M$ is a $C$-module. Then naturally, $M$ is also a $C_0$-module. Define the deformation functor of the augmented DGLA $(C; \varepsilon)$ by
$$
{\rm {Def}}(C; \varepsilon)\stackrel{\textrm{def}}{=}{\rm {Def}}(C_0),
$$
and the deformation functor of the augmented DGLA pair $(C, M;\varepsilon)$ by
$$
{\rm {Def}}(C, M; \varepsilon)\stackrel{\textrm{def}}{=}{\rm {Def}}(C_0, M).
$$
\end{defn}
\begin{theorem}\cite[Theorem 3.5]{gm}\label{gmaug}
Under the above notations, suppose $C$ is 1-formal, and suppose the degree zero part of $\varepsilon$, $\varepsilon^0: C^0\to \mathfrak{g}$ is surjective. Moreover, suppose the restriction of $\varepsilon^0$ to $H^0(C)$ is injective. Then ${\rm {Def}}(C; \varepsilon)$ is prorepresented by the formal scheme of $\mathcal{Q}(C)\tildemes \mathfrak{g}/\varepsilon(H^0(C))$ at the origin.
\end{theorem}
We will generalize the theorem to DGLA pairs.
\begin{theorem}\label{mainaug}
Let $(C, M)$ be a DGLA pair, and let $\varepsilon: C\to \mathfrak{g}$ be an augmentation map. Suppose all the assumptions in the previous theorem hold, and moreover $(C, M)$ is $q$-formal, $q\ge 1$. Then for $i\le q$, ${\rm {Def}}^i_k(C, M;\varepsilon)$ is prorepresented by the formal scheme of ${}^h\sR^i_k(C,M)\tildemes \mathfrak{g}/\varepsilon(H^0(C))$ at the origin. Furthermore, we have a commutative diagram of natural transformations of functors from $\mathsf{ART}$ to $\mathsf{SET}$,
\begin{equation}\label{comm}
\xymatrix{
{\rm {Def}}^i_k(C, M; \varepsilon)\ar[r]\ar[d]&({}^h\sR^i_k(C,M)\tildemes \mathfrak{g}/\varepsilon(H^0(C)))_{(0)}\ar[d]\\
{\rm {Def}}(C; \varepsilon)\ar[r]&(\mathcal{Q}(C)\tildemes \mathfrak{g}/\varepsilon(H^0(C)))_{(0)}
}
\end{equation}
\end{theorem}
\begin{proof} This essentially follows from the previous theorem of Goldman-Millson and Theorem \ref{independence}. According to \cite[3.9]{gm}, ${\rm {Def}}(C; \epsilon)$ associates to every Artinian local algebra $A$ the isomorphism classes of the transformation groupoid $\sC(C; A)\bowtie \exp(\mathfrak{g}\otimes m)$, where $m$ is the maximal ideal of $A$. Recall that in Definition \ref{cat}, we defined $\sC(C; A)$ to be the transformation groupoid with objects
$$\obj \sC(C; A)=\{\omegaega\in C^1\otimes_{\cc}m\;|\; d\omegaega+\frac{1}{2}[\omegaega, \omegaega]=0\},$$
and the morphisms are defined by the action of the nilpotent group $\exp(C^0\otimes m)$. The augmentation map induces a map of Lie groups $\exp(C^0\otimes m)\to \exp(\mathfrak{g}\otimes m)$. The objects in $\sC(C; A)\bowtie \exp(\mathfrak{g}\otimes m)$ are defined to be the Cartesian product of sets $\obj \sC(C; A)\tildemes \exp(\mathfrak{g}\otimes m)$, and the morphisms are defined by the diagonal group action of $\exp(C^0\otimes m)$.
By definition, ${\rm {Def}}^i_k(C, M; \varepsilon)$ is the subfunctor which associates to an Artinian local algebra $A$ the isomorphism classes of the transformation groupoid $\sC^i_k(C, M; A)\bowtie \exp(\mathfrak{g}\otimes m)$. Now, by \cite[Lemma 3.8]{gm}, we have an equivalence of groupoids
$$\sC^i_k(C, M; A)\bowtie \exp(\mathfrak{g}\otimes m)\simeq\sC^i_k(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M); A)\bowtie \exp(\mathfrak{g}\otimes m). $$
One can easily check that the functor $A\mapsto \iso\sC^i_k(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(C), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M); A)\bowtie \exp(\mathfrak{g}\otimes m)$ from $\mathsf{ART}$ to $\mathsf{SET}$ is prorepresented by the formal scheme $({}^h\sR^i_k(C,M)\tildemes \mathfrak{g}/\varepsilon(H^0(C)))_{(0)}$, and the diagram (\ref{comm}) commutes.
\end{proof}
\section{Holomorphic vector bundles.} \label{holomorphic}
In this and the next few sections we study concrete deformation problems with cohomology constraints. To a fixed setup consisting of a moduli space $\mathcal{M}$ with a fixed object $\rho$ and cohomology-defined strata $\mathcal{V}^i_k$ for all $i$ and $k$, we attach a DGLA pair $
(C,M)
$
such that the formal germs $(\mathcal{M}_{(\rho)},(\mathcal{V}^i_k)_{(\rho)})$ prorepresent $
({\rm {Def}}(C) , {\rm {Def}}^i_k(C,M))$ for all $i$ and $k$. We also try to find when the right-hand side admits further simplifications via formality, allowing a description of the left-hand side in terms of linear algebra.
Let $X$ be a compact K\"ahler manifold. Fix $n$ and $p$. Fix a poly-stable holomorphic vector bundle, i.e. locally free $\mathcal{O}_X$-module, $F$ on $X$ of any rank with vanishing Chern classes. We consider the following deformation problem with cohomology constraints:
$$
(\mathcal{M} ,\rho, \mathcal{V}^q_k) = (\mathcal{M}(X,n), E, \mathcal{V}^{pq}_k(F) ),
$$
where $\mathcal{M}=\sM(X, n)$ is the moduli space of rank $n$ stable holomorphic vector bundles on $X$ with vanishing Chern classes, and in $\sM$ one defines point-wise the Hodge cohomology jump loci with respect to $F$ to be
\begin{equation}\label{defvb}
\sV^{pq}_k(F)=\{E\in\sM \mid \dim H^q(X, E\otimes_{\sO_X} F \otimes_{\sO_X} \Omega^p_X)\geq k \}.
\end{equation}
$\sM$ is an analytic scheme \cite{lo}. The scheme structure of $\sV^{pq}_k(F)$ is defined locally as follows. Over a small open subset $U$ of $\sM$, there is a vector bundle $\sE$ on $X\tildemes U$ which is locally the Kuranishi family of vector bundles. Denote by $p_1$ and $p_2$ the projections from $X\tildemes U$ to its first and second factor.
\begin{defn}\label{subscheme}
Locally, as a subscheme of $U$, $\sV^{pq}_k(F)$ is defined by the ideal
$$J^q_k(\Gamma(U, \mathbf{R}p_{2*}(\sE\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F\otimes_{\sO_X} \Omega_X^p)))).$$
\end{defn}
Since locally two Kuranishi families are isomorphic to each other, the subschemes patch together, and hence $\sV^{pq}_k(F)$ is a well-defined closed subscheme of $\sM$. By base change and the property of determinantal ideals, one can easily check that the closed points of $\sV^{pq}_k$ satisfy (\ref{defvb}).
\begin{defn}
For a locally free sheaf $\mathcal{F}$ on $X$, denote the {Dolbeault complex of sheaves} of $\mathcal{F}\otimes_{\mathcal{O}_X}\Omega_X^p$ by
$$
(\mathcal{A}^{p,\ubul}_{\rm{Dol}}(\mathcal{F}),\bar{\partial})\stackrel{\rm{def}}{=}(\mathcal{F}\otimes_{\mathcal{O}_X}\Omega_X^{p,\bullet},\bar{\partial}).
$$ The corresponding complex of global sections on $X$, which we will call the {\bf Dolbeault complex} of $\mathcal{F}\otimes_{\mathcal{O}_X}\Omega_X^p$, will be denoted by
$$
({{A}}^{p,\ubul}_{\rm{Dol}}(\mathcal{F}),\bar{\partial})\stackrel{\rm{def}}{=}(\Gamma(X,\mathcal{F}\otimes_{\mathcal{O}_X}\Omega_X^{p,\bullet}),\bar{\partial}).
$$
\end{defn}
\begin{rmk}\label{rmkDGen}
It is a standard fact (cf. \cite{gm}, \cite{M}) that the DGLA $({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), \bar\partial)$ controls the deformation theory of $\sM$ at $E$. It means that the deformation functor ${\rm {Def}}({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)))$ is prorepresented by the formal scheme $\sM_{(E)}$. This is a particular case of a more general result of \cite{FIM} which states that for any complex manifold or complex algebraic variety $X$, the infinitesimal deformations of an $\mathcal{O}_X$-coherent sheaf $E$ are controlled by the DGLA of global sections $\Gamma (X,\mathcal A^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } (\enmo^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } (\tildelde{E}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })))$ of any acyclic resolution $\mathcal A^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ of the sheaf of DGLAs $\enmo^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(\tildelde{E}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })$ of a locally free resolution $\tildelde{E}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ of $E$. If $X$ is smooth, then $\mathcal A^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ can be chosen to be the Dolbeault resolution. Note that for this type of statement it does not matter if a moduli space can be constructed. Note also that to have a meaningful infinitesimal deformation problem with cohomology constraints as in (\ref{defvb}), we must ask for $X$ to be a compact manifold or a proper algebraic variety.
\end{rmk}
Suppose $E\in \sV^{pq}_k(F)$. Then the DGLA pair
$$({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$$ controls the deformation theory of $\sV^{pq}_k(F)$ at $E$, where the DGLA pair structure comes from the usual map $\enmo(E)\otimes E\to E$:
\begin{thrm}\label{vb1} Let $X$ be a compact K\"ahler manifold.
For any $E\in \sM$, the natural isomorphism of functors from $\mathsf{ART}$ to $\mathsf{SET}$
$$\sM_{(E)}\cong {\rm {Def}}({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)))$$
induces for any $p,q, k\in \nn$ a natural isomorphism of subfunctors
$$\sV^{pq}_{k}(F)_{(E)}\cong {\rm {Def}}^q_k({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)).$$
\end{thrm}
\begin{proof}
Let $A$ be an Artinian local algebra. Given any $s\in \sM_{(E)}(A)=\homo(\spec(A), \sM_{(E)})$, denote its image in ${\rm {Def}}({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)))(A)$ by $\omegaega$. We need to show that $s\in \sV^{pq}_{k}(F)_{(E)}(A)$ if and only if $\omegaega\in {\rm {Def}}^q_k({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))(A)$. As in Definition \ref{cohdef}, the complex associated to $\omegaega$ is $({{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)\otimes_\cc A, d_\omegaega)$.
Denote by $E_s$ the pull-back of the Kuranishi family $\sE$ by the composition $\spec(A)\stackrel{s}{\to}\sM_{(E)}\to \sM$. Then $E_s$ is a vector bundle on $X_A\stackrel{\textrm{def}}{=}X\tildemes_{\spec(\cc)} \spec(A)$.
Denote the second projection by $p_2: X_A\to \spec(A)$. By the construction (cf. \cite[Section 6]{gm}, \cite[Proposition 3.4]{w}), $({{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)\otimes_\cc A, d_\omegaega)$ is equal to the Dolbeault complex of the vector bundle $E_s\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F \otimes_{\sO_X}\Omega^p_X)$, and hence it is quasi-isomorphic to $\mathbf{R} p_{2*}(E_s\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F \otimes_{\sO_X}\Omega^p_X))$ as complexes of $A$-modules. Therefore,
\begin{equation}\label{idealeq1}
J^q_k({{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)\otimes_\cc A, d_\omegaega)=J^q_k(\mathbf{R} p_{2*}(E_s\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F \otimes_{\sO_X}\Omega^p_X)))
\end{equation}
as ideals of $A$.
Since taking determinantal ideals commutes with base change,
\begin{equation}\label{idealeq2}
J^q_k(\mathbf{R} p_{2*}(E_s\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F \otimes_{\sO_X}\Omega^p_X)))=J^q_k(\mathbf{R} p_{2*}(\sE\otimes_{p_1^{-1}\sO_X} p_1^{-1}(F \otimes_{\sO_X}\Omega^p_X)))\otimes_{\sO_{U}}A,
\end{equation}
where $U$ is an open subset of $\sM$ where the Kuranishi family is defined, and we use $p_1$, $p_2$ for projections to first and second factors of the products $X\tildemes_{\spec{\cc}} \spec(A)$ and $X\tildemes_{\spec{\cc}} U$, respectively, on each side of the equality.
By definition, $s\in \sV^{pq}_{k}(F)_{(E)}(A)$ if and only if $$J^q_k(\mathbf{R} p_{2*}(\sE\otimes_{p_1^{-1}\sO_X} p_1^{-1}F \otimes_{p_1^{-1}\sO_X} p_1^{-1}\Omega^p_X))\otimes_{\sO_{U}}A=0.$$ On the other hand, $\omegaega\in {\rm {Def}}^q_k({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))(A)$ if and only if $$J^q_k({{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)\otimes_\cc A, d_\omegaega)=0.$$ We obtain that $s\in \sV^{pq}_{k}(F)_{(E)}(A)$ if and only if $\omegaega\in {\rm {Def}}^q_k({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}} (E\otimes F))(A)$ by (\ref{idealeq1}) and (\ref{idealeq2}).
\end{proof}
\begin{rmk}\label{rmkAbsHol}
If we replace $\mathcal{M}_{(E)}$ and $\sV^{pq}_{k}(F)_{(E)}$ by the abstract deformation functors, the theorem still holds for any compact complex manifold $X$ and any holomorphic vector bundles $E$ and $F$, cf. also Remark \ref{rmkDGen}. For the purpose of this paper, we focus on the case leading to formality of the DGLA pairs. This will require the K\"ahler and vanishing Chern classes assumptions.
\end{rmk}
\begin{que}\label{queGQVB}
One can ask a general question, in light of Remarks \ref{rmkDGen} and \ref{rmkAbsHol}: are the infinitesimal deformations of a bounded complex of $\mathcal{O}_X$-coherent sheaves $E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ on a compact complex manifold, or smooth complex algebraic variety $X$, with the hypercohomology constraint
$$
\dim_\mathbb{C} \hh^q(X,E^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes \Omega_X^p)\ge k,
$$
for a bounded-above complex of $\mathcal{O}_X$-coherent sheaves $F^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$, governed by the DGLA pair
$$
({A}^{0,\ubul}_{\rm{Dol}}(\enmo^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(\tildelde{E}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ })), {{A}}^{p,\ubul}_{\rm{Dol}} (Tot^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(\tildelde{E}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }\otimes \tildelde{F}^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }))),
$$
where $\tildelde{E}$, $\tildelde{F}$ are locally free resolutions of $E$ and $F$, and $Tot^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }$ is the total complex?
\end{que}
The next formality result will provide a concrete description of the formal scheme of the cohomology jump loci via linear algebra.
\begin{thrm}[\cite{dgms}]\label{formal1}
Let $X$ be a compact K\"ahler manifold.
For any $E\in \sM$, the DGLA pair
$$({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$$ is formal.
\end{thrm}
\begin{proof}
Since both $E$ and $F$ are poly-stable and have vanishing Chern classes, there exist flat unitary metrics on both $E$ and $F$, according to \cite{uy}. Hence $E\otimes F$ admits a flat unitary metric too. The Chern connection on $E\otimes F$ induced by the flat unitary metric is flat. Similarly, on $\enmo(E)$ there is also a flat unitary metric, whose Chern connection is also flat. Denote the $(1,0)$ part of the flat connections by $\partial$. Denote the subcomplexes of ${A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))$ and ${{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)$ consisting of $\partial$-closed forms by $K{A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))$ and $K{{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)$, respectively. Clearly, $(K{A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), K{{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$ is a sub-DGLA pair of $({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$, i.e., the inclusion map
\begin{equation}\label{dglamap1}
(K{A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), K{{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))\to ({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))
\end{equation}
is a map of DGLA pairs. On the other hand, thanks to the existence of flat unitary metrics, $H^q(X, \enmo(E))$ can be computed by $\partial$-closed $\enmo(E)$-valued $(0, q)$-forms modulo $\partial$-exact forms, and similarly $H^q(X, E\otimes F\otimes \Omega^p)$ can be computed by $\partial$-closed $E\otimes F$-valued $(p, q)$-forms modulo $\partial$-exact forms. Hence, there is a natural surjective map of DGLA pairs.
\begin{equation}\label{dglamap2}
(K{A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), K{{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))\to (H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))), H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }({{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)))
\end{equation}
As in Lemma 2.2 of \cite{s1}, one can easily show that the cohomology classes of $K{A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))$ and $K{{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F)$ are represented by harmonic forms. Therefore, the two maps (\ref{dglamap1}) and (\ref{dglamap2}) are both $\infty$-equivalent maps. Thus, $({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$ is formal.
\end{proof}
\begin{rmk}\label{resonance1} Let us spell out what the quadratic cone and the cohomology resonance varieties, as defined in Section \ref{secQR}, are in this case. The quadratic cone $\mathcal{Q}$ of the DGLA ${A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))$ will be denoted $\mathcal{Q}(E)$ and is
$$
\mathcal{Q}(E)=\{\eta\in H^1(X, \enmo(E))\mid
\eta\wedge\eta=0\in H^2(X, \enmo(E))\}.
$$
The cohomology resonance variety $^h\sR^q_k({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)), {{A}}^{p,\ubul}_{\rm{Dol}}(E\otimes F))$ will be denoted by $\sR^{pq}_k(E; F)$ to simplify the notation. Point-wise,
$$
\sR^{pq}_k(E; F)=\{\eta\in \mathcal{Q}(E) \mid \dim H^q(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, E\otimes F\otimes \Omega^p_X),\eta \wedge\cdot )\geq k\},
$$
and its scheme structure is defined using the cohomology jump ideal of the universal cohomology Aomoto complex as in Definition \ref{dglaresonance}.
\end{rmk}
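For instance, when $n=1$ and $F=\mathcal{O}_X$ (the case taken up in Remark \ref{rmkGL} below), $\enmo(E)\cong\mathcal{O}_X$ and the bracket vanishes, so $\mathcal{Q}(E)=H^1(X,\mathcal{O}_X)$; point-wise, $\sR^{pq}_k(E;\mathcal{O}_X)$ then consists of the classes $\eta\in H^1(X,\mathcal{O}_X)$ for which the cohomology at the middle spot of
$$
H^{q-1}(X, E\otimes\Omega^p_X)\xrightarrow{\ \eta\wedge\cdot\ } H^{q}(X, E\otimes\Omega^p_X)\xrightarrow{\ \eta\wedge\cdot\ } H^{q+1}(X, E\otimes\Omega^p_X)
$$
has dimension at least $k$; this is the derived complex of Green-Lazarsfeld recalled in Remark \ref{rmkGL}.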
It was shown by Nadel \cite{n} and Goldman-Millson \cite{gm} that there is an isomorphism of formal schemes $\mathcal{M}_{(E)}\cong \mathcal{Q}(E)_{(0)}$, and thus $\mathcal{M}$ has quadratic algebraic singularities. The proof follows easily from Corollary \ref{corQ}. We generalize this as follows.
\begin{cor}\label{corSVB} {\bf [ = Theorem \ref{thrmHolVB}.]}
The isomorphism of formal schemes
$$
\sM_{(E)}\cong \mathcal{Q}(E)_{(0)}
$$
induces for any $p,q, k\in \nn$ an isomorphism
$$
\sV^{pq}_k(F)_{(E)}\cong \sR^{pq}_k(E; F)_{(0)}.
$$
\end{cor}
\begin{proof}
Using Yoneda's lemma, one can easily see that if two formal schemes prorepresent the same functor from $\mathsf{ART}$ to $\mathsf{SET}$, then the two formal schemes are isomorphic. Thus, the corollary is a direct consequence of Theorem \ref{formal1}, Corollary \ref{cohformal} and Theorem \ref{vb1}. The only thing we need to check is that
\begin{equation}\label{zero}
[H^0({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E))), H^1({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)))]=0.
\end{equation}
Since $E$ is stable, $H^0({A}^{0,\ubul}_{\rm{Dol}}(\enmo(E)))= H^0(X, \enmo(E))= \cc\cdot \id_E$. Clearly, $[\id_E, -]=0$.
\end{proof}
\begin{rmk}
If $E$ is only poly-stable, then (\ref{zero}) is not true in general. So for the whole moduli space of semi-stable vector bundles we do not have a nice local description of the Hodge cohomology jump loci as in the above corollary. In fact, the moduli space itself may fail to have quadratic singularities at points which are semi-stable but not stable.
\end{rmk}
\begin{cor}\label{corQuad}
Suppose $k=\dim H^q(X, E\otimes F\otimes \Omega^p_X)$. Then $\sV^{pq}_k(F)$ has quadratic algebraic singularities at $E$.
\end{cor}
\begin{proof}
When $k=\dim H^q(X, E\otimes F\otimes \Omega^p_X)$, the resonance variety $\sR^{pq}_k(E; F)$ is a quadratic cone in $H^1(X, \enmo(E))$. Indeed, $\sR^{pq}_k(E; F)$ is defined by a determinantal ideal of $1\tildemes 1$ minors, so $\sR^{pq}_k(E; F)$ is isomorphic to the intersection of the quadratic cone $\mathcal{Q}(E)$ and a linear subspace. Now, it follows from the previous corollary that $\sV^{pq}_k(F)_{(E)}$ is isomorphic to the formal scheme of a quadratic cone at the origin.
\end{proof}
\begin{rmk}
Corollary \ref{corQuad} was shown for $F\otimes\Omega_X^p=\mathcal{O}_X$ by Martinengo \cite{ma} and the second author \cite{w}.
\end{rmk}
\begin{rmk}\label{rmkGL}
The case $n=1$, $F=\mathcal{O}_X$ of Corollary \ref{corSVB} is due to Green-Lazarsfeld \cite{gl1,gl2} and phrased in terms of their {\it derived complex}. This complex is the universal complex used by us to define cohomology resonance varieties in Definition \ref{dglaresonance}. In this case, $\mathcal{M}={\rm{Pic}}^\tau (X)$ is locally isomorphic, via the inverse of the exponential map, to the cone $\mathcal{Q}(E)$, which is the whole $H^1(X,\enmo(E))=H^1(X,\mathcal{O}_X)$. As in \cite{w}, by choosing $E$ to be a smooth point on the cohomology jump loci, the proof of Corollary \ref{corQuad} then implies a result in {\it loc. cit.} that $\mathcal{V}^{pq}_k$ are unions of translates of subtori (this has been generalized in \cite{bw}). It also implies the next corollary.
\end{rmk}
\begin{cor}\label{corRk1} {{\rm (}}\cite{gl2}, \cite[Theorem 4.2]{M-a}{\rm{)}} Assume $n=1$, that is, $\mathcal{M}=Pic^\tau(X)$. If $E$ is a singular point of $\mathcal{V}^{pq}_k(F)$, then $E\in \mathcal{V}^{pq}_{k+1}(F)$.
\end{cor}
\section{Representations of $\pi_1(X)$ and local systems.} \label{localsystem}
Let $X$ be a finite-type CW-complex with base point $x\in X$.
Fix $n$. Let
$$\mathbf{R}(X, n)=\homo(\pi_1(X, x), GL(n, \cc)).$$ Since $\pi_1(X, x)$ is finitely presented, $\mathbf{R}(X,n)$ is an algebraic scheme.
Fix $W$ a local system of any rank on $X$. We consider now the deformation problem with cohomology constraints
$$
(\mathbf{R}(X,n), \rho, \tilde\sV^i_k(W))
$$
where the cohomology jump loci are defined as
$$
\tilde\sV^i_k(W)=\{\rho\in \mathbf{R}(X, n)\,|\, \dim H^i(X, L_\rho\otimes_\mathbb{C} W)\geq k\},
$$
where $L_\rho$ is the rank $n$ local system on $X$ associated to the representation $\rho$.
One can give $\tilde\sV^i_k(W)$ a closed subscheme structure via the universal local system $\sL$ on $X\tildemes \mathbf{R}(X, n)$ as follows. Here $\sL$ is actually a local system of $R$-modules on $X$, where $R=H^0(\mathcal{O}_{\mathbf{R}(X,n)})$, such that $\sL\otimes_R (R/m_\rho)=L_\rho$, where $m_\rho$ is the maximal ideal of the closed point $\rho$ in $R$. Let $a:X\rightarrow pt$ be the map from $X$ to a point. Then $Ra_*({\mathcal L}\otimes_\mathbb{C} W)$ is represented by a bounded complex of free $R$-modules with finitely generated cohomology. Thus we can define the closed subscheme $\tilde\sV^i_k(W)$ of $\mathbf{R}(X,n)=\spec R$ by the ideal
$$
J^i_k(Ra_*({\mathcal L}\otimes_\mathbb{C} W)).
$$
By base change and Corollary \ref{corFl}, the closed points of $\tilde\sV^i_k(W)$ are the representations $\rho$ with $\dim H^i(X, L_\rho\otimes_\mathbb{C} W)\geq k$. Equivalently, one can use the definition of the cohomology of local systems in terms of twisted cochain complexes on the universal covering of $X$ to define the scheme structure on $\tilde\sV^i_k(W)$. The cohomology jump loci for finite CW-complexes can be rather arbitrary \cite{w-ex}.
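As a minimal illustration, take $X=S^1$, $n=1$, and $W=\mathbb{C}_X$ the trivial local system. Then $\mathbf{R}(X, 1)=\homo(\zz, \cc^*)=\spec R$ with $R=\cc[t,t^{-1}]$, and computing $Ra_*\sL$ from the standard CW-structure on $S^1$ gives the two-term complex $R\xrightarrow{\;t-1\;}R$. Hence
$$
J^0_1(Ra_*\sL)=J^1_1(Ra_*\sL)=(t-1),
$$
so $\tilde\sV^0_1(\mathbb{C}_X)=\tilde\sV^1_1(\mathbb{C}_X)$ is the reduced point given by the trivial representation, as expected since a nontrivial rank one local system on $S^1$ has vanishing cohomology.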
Assume from now that $X$ is a smooth manifold of the homotopy type of a finite CW-complex.
\begin{defn} For a local system $\mathcal{F}$ on $X$, let $(\mathcal{A}^{\ubul}_{\rm{DR}}(\mathcal{F}),d)$ be the {de Rham complex of sheaves} of $\mathcal{F}$-valued $C^\infty$-forms on $X$. The corresponding complex of global sections on $X$, which we will call the {\bf de Rham complex} of $\mathcal{F}$, will be denoted ${A}^{\ubul}_{\rm{DR}}(\mathcal{F})$.
\end{defn}
Let $\rho\in\mathbf{R}(X, n)$. Let $\mathfrak{g}=\enmo(L_\rho)|_{x}$, and let the DGLA augmentation map $\varepsilon: {A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho))\to \mathfrak{g}$ be the restriction map. Goldman-Millson \cite{gm} showed that the formal scheme of $\mathbf{R}(X, n)$ at $\rho$ prorepresents the functor ${\rm {Def}}({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)); \varepsilon)$. See Section \ref{augment} for the definition of this functor. We generalize this to $\tilde\sV^i_k (W)$, noting first that $$({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)), {A}^{\ubul}_{\rm{DR}}(L_\rho\otimes_\mathbb{C} W); \varepsilon)$$ is naturally an augmented DGLA pair.
\begin{thrm}\label{thmRP1} Let $X$ be a smooth manifold of the homotopy type of a finite CW-complex. The natural isomorphism \begin{equation*}\label{isoaug}
\mathbf{R}(X, n)_{(\rho)}\cong {\rm {Def}}({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)); \varepsilon)
\end{equation*}
induces for any $i,k\in \nn$ a natural isomorphism of subfunctors,
$$
(\tilde\sV^i_k(W))_{(\rho)}\cong {\rm {Def}}^i_k({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)), {A}^{\ubul}_{\rm{DR}}(L_\rho\otimes_\mathbb{C} W); \varepsilon).
$$
\end{thrm}
\begin{proof} This is similar to the proof of Theorem \ref{vb1}. Let $A$ be an Artinian local algebra. Given any $s:\spec A\rightarrow \mathbf{R}(X,n)_{(\rho)}$, denote its image in ${\rm {Def}}({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho));\varepsilon)$ by $\omegaega$. Let ${\mathcal L}_s$ be the induced $A$-local system ${\mathcal L}\otimes_R A$ on $X$. Then $({A}^{\ubul}_{\rm{DR}}(L_\rho\otimes_\mathbb{C} W)\otimes_\mathbb{C} A,d_\omegaega)$ is the de Rham complex of the $A$-local system ${\mathcal L}_s\otimes_\mathbb{C} W$ on $X$ (cf. \cite[Section 6]{gm}). Thus it is quasi-isomorphic to $Ra_*({\mathcal L}_s\otimes_\mathbb{C} W)$ as complexes of $A$-modules. So
$$
J^i_k({A}^{\ubul}_{\rm{DR}}(L_\rho\otimes_\mathbb{C} W)\otimes_\mathbb{C} A,d_\omegaega)=J^i_k(Ra_*({\mathcal L}_s\otimes_\mathbb{C} W)),
$$
which in turn equals $J^i_k(Ra_*({\mathcal L}\otimes_\mathbb{C} W))\otimes_R A$ by Corollary \ref{tensor}.
\end{proof}
\begin{rmk}
This theorem generalizes a result of Dimca-Papadima \cite{dp} who proved it for the reduced structure of the cohomology jump loci at the trivial representation, that is, for the germ of $(\tilde\sV^i_k)^{red}$ at $\mathbf{1}$. In \cite{dp}, $X$ is allowed to be a connected CW-complex of finite type by replacing the de Rham complex with Sullivan's de Rham complex of piecewise $C^\infty$ forms. For simplicity, we opted to leave out this topological refinement.
\end{rmk}
Along with representations of the fundamental group, let us consider the closely-related deformation problem for the associated local systems. The relation at the level of deformations between representations (i.e. local systems with a frame at a fixed point) and local systems is a particular case of the relation between the deformation functors of an augmented DGLA pair and those of the DGLA pair itself, see Theorem \ref{mainaug}.
For now, the assumptions are the same: $X$ is a smooth manifold of the homotopy type of a finite CW-complex and $W$ is a local system on $X$. We consider the deformation problem with cohomology constraints:
$$
(\mathcal{M}_{\textrm{B}}=\mathcal{M}(X,n), L, \mathcal{V}^{i}_k(W)),
$$
where $\mathcal{M}_{\textrm{B}}=\mathcal{M}_{\textrm{B}}(X, n)$ is the moduli space of irreducible rank $n$ local systems on $X$ and
$$
\mathcal{V}^i_k(W)=\{ L\in \mathcal{M}_{\textrm{B}}\mid \dim_\mathbb{C} H^i(X,L\otimes_\mathbb{C} W)\ge k \}.
$$
The natural subscheme structure of $\mathcal{V}^i_k(W)$ in $\mathcal{M}_{\textrm{B}}$ is defined as follows. $GL(n, \cc)$ acts on $\mathbf{R}(X, n)$ by conjugation. Clearly this action preserves all the cohomology jump loci $\tilde\sV^i_k(W)$ of representations. Since $\mathcal{M}_{\textrm{B}}$ is an open subset of the GIT quotient of $\mathbf{R}(X, n)$ by $GL(n, \cc)$, $\sV^i_k(W)$ can be defined as the intersection of $\mathcal{M}_{\textrm{B}}$ and the image of $\tilde\sV^i_k(W)$ under the GIT quotient map.
The argument in Section \ref{holomorphic} works similarly for moduli spaces of local systems. Since the proofs are essentially the same, we only state the results.
Let $L$ be in $\mathcal{M}_{\textrm{B}}$. Then $({A}^{\ubul}_{\rm{DR}}(\enmo(L)), {A}^{\ubul}_{\rm{DR}}(L\otimes W))$ is naturally a DGLA pair. It is a standard fact that the deformation functor ${\rm {Def}}({A}^{\ubul}_{\rm{DR}}(\enmo(L)))$ is prorepresented by the formal scheme $(\mathcal{M}_{\textrm{B}})_{(L)}$.
\begin{thrm} Let $X$ be a smooth manifold of the homotopy type of a finite CW-complex.
The natural isomorphism of functors $$(\mathcal{M}_{\textrm{B}})_{(L)}\cong{\rm {Def}}({A}^{\ubul}_{\rm{DR}}(\enmo(L)))$$ induces for any $i, k\in \nn$ a natural isomorphism of subfunctors $$\sV^i_k(W)_{(L)}\cong{\rm {Def}}^i_k({A}^{\ubul}_{\rm{DR}}(\enmo(L)), {A}^{\ubul}_{\rm{DR}}(L\otimes W)).$$
\end{thrm}
In this last result, the condition of irreducibility of the local system can be removed if we replace $(\mathcal{M}_{\textrm{B}})_{(L)}$ and $\sV^i_k(W)_{(L)}$ by the abstract deformation functors, cf. Remark \ref{rmkAbsHol}. However, we are again focusing on the case leading to formality, for which at least a semi-simplicity condition is crucial. Irreducibility will be used to further simplify the answer to the deformation problem in terms of resonance varieties.
\begin{thrm}\label{lsformal} Let $X$ be a compact K\"{a}hler manifold, $L\in \mathcal{M}_{\textrm{B}}$, and let $W$ be a semi-simple local system on $X$.
Then the DGLA pair $({A}^{\ubul}_{\rm{DR}}(\enmo(L)), {A}^{\ubul}_{\rm{DR}}(L\otimes W))$ is formal.
\end{thrm}
\begin{proof}
The proof is essentially the same as the proof of Theorem \ref{formal1}, except here we need to use the harmonic metric on the flat bundle in the sense of \cite{s1} instead of the flat unitary metric used before.
\end{proof}
In the situation of Theorem \ref{lsformal}, as in Remark \ref{resonance1}, the quadratic cone of ${A}^{\ubul}_{\rm{DR}}(\enmo(L))$ is
$$
\mathcal{Q}(L)=\{\eta\in H^1(X, \enmo(L)) \mid
\eta\wedge\eta=0\in H^2(X, \enmo(L))\},
$$
and the cohomology resonance varieties of the DGLA pair are
$$
\sR^i_k(L;W)=\{ \eta\in \mathcal{Q}(L)\mid \dim H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, L\otimes W), \eta\wedge \cdot)\geq k \},
$$
with the scheme structure of $\sR^i_k(L;W)$ defined using the universal Aomoto complex, as in Definition \ref{dglaresonance}. The condition on the irreducibility of $L$, as opposed to just semi-simplicity, is now used to derive the analog of Corollary \ref{corSVB} for local systems with a similar proof:
\begin{cor}\label{corVLS}{\bf [ = Theorem \ref{thmIrrLS}.]} Let $X$ be a compact K\"{a}hler manifold, $L\in \mathcal{M}_{\textrm{B}}$, and let $W$ be a semi-simple local system on $X$.
The isomorphism of formal schemes
$$
(\mathcal{M}_{\textrm{B}})_{(L)}\cong \mathcal{Q}(L)_{(0)}
$$
induces for any $i, k\in \nn$ an isomorphism
$$
\sV^i_k(W)_{(L)}\cong\, \sR^i_k(L;W)_{(0)}.
$$
\end{cor}
The analogs of Corollaries \ref{corQuad} and \ref{corRk1} also hold.
\begin{rmk}\label{rmkPS}
Corollary \ref{corVLS} for rank one local systems $E$ and $W=\mathbb{C}_X$ also follows from the strong linearity theorem of Popa-Schnell, namely \cite[Theorem 3.7]{PS}. In fact, our approach gives a different proof of the strong linearity theorem, at least of the fact that the two complexes appearing in \cite[Theorem 3.7]{PS} are quasi-isomorphic (in the derived category) after restricting to the formal neighborhood of the origin. One can argue as follows. For an Artinian local algebra $A$ and a map from $\spec(A)$ to the formal neighborhood, one can restrict the two complexes in \cite[Theorem 3.7]{PS} to $\spec(A)$. After the restriction, the two complexes can be connected to another one via a zig-zag using the proof of Theorem \ref{independence2} and the proof of Theorem \ref{formal1}. The two maps in the zig-zag are quasi-isomorphisms. Since the zig-zag is canonical, it allows us to take the inverse limit over all such $A$. After taking the limit, we obtain two quasi-isomorphisms which connect the two complexes on the formal neighborhood of the origin. Note that the proof of \cite[Theorem 3.7]{PS} gives a stronger statement: the quasi-isomorphism is obtained by one single map. The proof we sketched gives that the quasi-isomorphism is obtained by one zig-zag. However, this suffices for the application to cohomology jump loci.
\end{rmk}
Now, coming back to representations, by Theorem \ref{mainaug} and Theorem \ref{lsformal} we have the following:
\begin{cor}\label{rep1} Let $X$ be a compact K\"ahler manifold. Let $\rho\in \mathbf{R}(X, n)$ be a semi-simple representation, and let $W$ be a semi-simple local system. Then there is an isomorphism of formal schemes
$$\tilde\sV^i_k(W)_{(\rho)}\cong (\sR^i_k(L_\rho,W)\times \mathfrak{g}/\mathfrak{h})_{(0)}$$
where $\mathfrak{h}=\varepsilon(H^0(X,\enmo (L_\rho)))$.
\end{cor}
\begin{proof}
To apply Theorem \ref{mainaug}, we only need to check the assumptions in Theorem \ref{gmaug}. It is obvious that $A^0_{\textrm{DR}}(\enmo(L_\rho))\to \enmo(L_\rho)|_{x}$ is surjective. Since $\rho$ is semi-simple, or equivalently $L_\rho$ is a semi-simple local system, we can assume it splits into a direct sum of simple local systems $L_\rho=\bigoplus_{j\in J} L_j$. Then $H^0({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)))$ is generated by $\id_{L_j}$, $j\in J$. Therefore, $\epsilon^0: H^0({A}^{\ubul}_{\rm{DR}}(\enmo(L_\rho)))\to \enmo(L_\rho)|_{x}$ is injective.
\end{proof}
We can give another equivalent description of the cohomology resonance variety $\sR^i_k(L_\rho,W)$ and the affine space $\mathfrak{g}/\mathfrak{h}$. It is well-known that the tangent space of $\mathbf{R}(X, n)$ at the point $\rho$ is isomorphic to the vector space of 1-cocycles $Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})$, see \cite{gm}. Moreover, we have the following isomorphism,
\begin{equation}\label{eqZ1}
Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})/B^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})\cong H^1(X, \enmo(L_\rho)).
\end{equation}
In fact, one can easily check that $B^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})\cong \mathfrak{g}/\mathfrak{h}$. For a 1-cocycle
$\eta$ in the vector space $Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})$, denote its image in $H^1(X, \enmo(L_\rho))$ under the above isomorphism by $\bar\eta$.
\begin{defn}\label{defnQrho}
Define the \textbf{quadratic cone} of $\rho$ to be
$$\mathcal{Q}(\rho)=\{\eta\in Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})\,|\,\bar\eta\wedge\bar\eta=0 \in H^2(X, \enmo(L_\rho))\}.$$
Define the \textbf{twisted resonance varieties} of $\rho$ to be
$$
\sR^i_k(\rho,W)=\{\eta\in \mathcal{Q}(\rho)\,|\, \dim H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(X, L_\rho\otimes_\mathbb{C} W), \bar\eta\wedge\cdot)\geq k\}.
$$
As in Definition \ref{subscheme}, using the universal family, we can give $\sR^i_k(\rho,W)$ a subscheme structure.
\end{defn}
Simpson \cite{s1} showed that there is an isomorphism of formal schemes
$
\mathbf{R}(X, n)_{(\rho)}\cong \mathcal{Q}(\rho)_{(0)}
$ for a semi-simple representation $\rho$. We generalize this to $\tilde\sV^i_k(W)_{(\rho)}$. First, we need the following:
\begin{lemma}\label{rep2} Let $X$ be a compact K\"ahler manifold.
There is a non-canonical isomorphism of schemes
$$
H^1(X,\enmo(L_\rho))\times \mathfrak{g}/\mathfrak{h}\cong Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho}).
$$
This induces an isomorphism of subschemes
$$
\sR^i_k(L_\rho,W)\times \mathfrak{g}/\mathfrak{h}\cong \sR^i_k(\rho,W)
$$
if $\rho$ and $W$ are semi-simple.
\end{lemma}
\begin{proof}
The first claim follows from (\ref{eqZ1}) and the remark after. Now $B^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})$ acts on $Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})$. By definition, $ \sR^i_k(\rho,W)$ is invariant under this action. Therefore, $ \sR^i_k(\rho,W)$ is equal to the pull-back of some closed subscheme $\bar \sR^i_k(\rho,W)$ of the quotient
$Z^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})/B^1(\pi_1(X), \mathfrak{gl}(n, \cc)_{\ad \rho})$. Under the isomorphism (\ref{eqZ1}) one can easily see that $\sR^i_k(L_\rho,W)$ and $\bar \sR^i_k(\rho,W)$ are defined by the same universal complexes, and hence they are isomorphic. The conclusion follows.
\end{proof}
From Corollary \ref{rep1} and Lemma \ref{rep2} we get:
\begin{theorem}\label{mainkahler2}{\bf [= Theorem \ref{thmRPP}.]} Let $X$ be a compact K\"ahler manifold.
Let $\rho\in \mathbf{R}(X, n)$ be a semi-simple representation, and let $W$ be a semi-simple local system. Then the isomorphism of formal schemes
$$
\mathbf{R}(X, n)_{(\rho)}\cong \mathcal{Q}(\rho)_{(0)}
$$ induces an isomorphism
$$
\tilde\sV^i_k(W)_{(\rho)}\cong \sR^i_k(\rho,W)_{(0)}.
$$
\end{theorem}
When $k=\dim H^i(X, L_\rho)$, the resonance variety $\sR^i_k(\rho,W)$ is equal to the intersection of the quadratic cone $\mathcal{Q}(\rho)$ and a linear subspace of $Z^1(\pi_1(X), \mathfrak{gl}(n, \mathbb{C})_{\textrm{ad}\rho})$, see the proof of Corollary \ref{corQuad}. Hence, $\sR^i_k(\rho,W)$ is also a quadratic cone. Thus we have the following corollary.
\begin{cor}\label{corQRep} Let $X$ be a compact K\"ahler manifold.
Let $\rho\in \mathbf{R}(X, n)$ be a semi-simple representation, and let $W$ be a semi-simple local system. Suppose $k=\dim H^i(X, L_\rho)$. Then $\tilde\sV^i_k(W)$ has quadratic singularities at $\rho$.
\end{cor}
\section{Stable Higgs bundles}\label{secHB}
According to nonabelian Hodge theory due to Simpson, given a smooth projective complex variety $X$, one can consider three moduli spaces and the cohomology jump loci in them: $\mathcal{M}_{\textrm{B}}(X, n)$, $\mathcal{M}_{\textrm{DR}}(X, n)$, $\mathcal{M_{\textrm{Dol}}}(X, n)$, denoting the moduli spaces of irreducible local systems of rank $n$, stable flat bundles of rank $n$, and stable Higgs bundles with vanishing Chern classes of rank $n$, respectively, see \cite{s3}. Although $\mathcal{M}_{\textrm{B}} (X,n)$ can be constructed for any topological space with finitely generated fundamental group, the assumption that $X$ is smooth projective is essential for the construction of $\mathcal{M}_{\textrm{DR}}(X, n)$ and $\mathcal{M_{\textrm{Dol}}}(X, n)$. Since $\mathcal{M}_{\textrm{B}}(X, n)$ and $\mathcal{M}_{\textrm{DR}}(X, n)$ are isomorphic as analytic spaces, and since the isomorphism induces isomorphisms on the cohomology jump loci, the deformation problems with cohomology constraints are the same for irreducible local systems and stable flat bundles.
We consider now the deformation problem with cohomology constraints
$$
(\mathcal{M_{\textrm{Dol}}}=\mathcal{M_{\textrm{Dol}}}(X,n), E=(E,\theta), \sV^i_{k}(F))
$$
where $F=(F,\phi)$ is a poly-stable Higgs bundle with vanishing Chern classes and
\begin{align*}
\sV^i_{k}(F) =\{ (E, \theta) \in \mathcal{M_{\textrm{Dol}}}\mid \dim\hh^i(X, (E\otimes_{\mathcal{O}_X}F\otimes_{\mathcal{O}_X} \Omega^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }_X, \theta\otimes 1+1\otimes\phi))\ge k\}.
\end{align*}
The subscheme structure of $\sV^i_{k}(F)$ is defined as follows. Fix a base point $x\in X$. $\mathcal{M_{\textrm{Dol}}}$ is the GIT quotient by $GL(n, \cc)$ of a fine moduli space $\mathbf{R}_{\textrm{Dol}}(X, x, n)$ of rank $n$ stable Higgs bundles $(E, \theta)$ on $X$ with vanishing Chern classes together with a basis $\beta: E|_{x}\to \cc^n$. On $\mathbf{R}_{\textrm{Dol}}(X, x, n)$, there is a universal family of Higgs bundles. Using this universal family, we can define cohomology jump loci in $\mathbf{R}_{\textrm{Dol}}(X, x, n)$ as closed subschemes. These cohomology jump loci are invariant under the $GL(n, \cc)$ action. Thus we can define their image under the quotient map to be $\sV^i_{k}(F)$, which has a closed subscheme structure.
This deformation problem with cohomology constraints is parallel to the case of irreducible local systems. We will only state the main theorem. We leave all the statements and the proofs of the other corollaries to the reader.
\begin{defn}
For a Higgs bundle $(\mathcal{F}, \psi)$ we define the {\bf Higgs complex} as the complex of global $\mathcal{F}$-valued $C^\infty$-forms with differential $\bar\partial+\psi$. We denote this complex by $({A}^{\ubul}_{\rm{Higgs}} (\mathcal{F}),\bar\partial+\psi)$, or simply ${A}^{\ubul}_{\rm{Higgs}}(\mathcal{F})$.
\end{defn}
For a Higgs bundle $(E, \theta)$ in $\mathcal{M_{\textrm{Dol}}}$, we have a DGLA pair
$$
({A}^{\ubul}_{\rm{Higgs}}(\enmo(E)), {A}^{\ubul}_{\rm{Higgs}}(E\otimes_{\mathcal{O}_X}F)),
$$
where the Higgs field on the locally free $\mathcal{O}_X$-module $\enmo(E)$ is
$[\theta,\cdot]$, and the Higgs field on $E\otimes _{\mathcal{O}_X}F$ is as in the complex from the definition of $\sV^i_k(F)$. The standard fact is that the formal scheme of $\mathcal{M_{\textrm{Dol}}}$ at ${(E, \theta)}$ prorepresents the functor ${\rm {Def}}({A}^{\ubul}_{\rm{Higgs}}(\enmo(E)))$, see \cite{Mart}.
\begin{thrm}
The natural isomorphism of functors $$(\mathcal{M_{\textrm{Dol}}})_{(E, \theta)}\cong{\rm {Def}}({A}^{\ubul}_{\rm{Higgs}}(\enmo(E)))$$ induces for any $i, k\in \nn$ a natural isomorphism of subfunctors $$(\sV^i_{k}(F))_{(E, \theta)}\cong{\rm {Def}}^i_k({A}^{\ubul}_{\rm{Higgs}}(\enmo(E)), {A}^{\ubul}_{\rm{Higgs}}(E\otimes F)).$$ The DGLA pair $({A}^{\ubul}_{\rm{Higgs}}(\enmo(E)), {A}^{\ubul}_{\rm{Higgs}}(E\otimes F))$ is formal, and hence its quadratic cone and cohomology resonance variety determine the formal germs at $(E,\theta)$ of $\mathcal{M_{\textrm{Dol}}}$ and $\sV^i_k(F)$.
\end{thrm}
\section{Other consequences of formality}\label{secIneq}
In this section we point out how the formality of a DGLA pair $(C,M)$ has implications on the possible shape of the Betti numbers of $M$ and on the geometry of the cohomology resonance varieties ${}^h\sR^i_k(C,M)$.
Let $(C,M)$ be a formal DGLA pair. We will use the following simplifying notation in this section:
\begin{align*}
Q&=Q(C), \\
\sR^i_k &= {}^h\sR^i_k(C,M),\\
\pp &=\pp(H^1(C)),\\
b_i& =\dim_\cc H^i(M).
\end{align*}
Let $R$ be the homogeneous coordinate ring of the projectivization $\pp Q$ of the quadratic cone $Q$ in $\pp$. Consider the universal complex from Definition \ref{dglaresonance} on $\pp Q$
\begin{equation}\label{eqSC}
H^0(M)\otimes_\cc\sO_{\pp Q}\mathop{\longrightarrow}^{\zeta_Q}
\ldots\longrightarrow H^k(M)\otimes_{\cc}\sO_{\pp Q}(k) \mathop{\longrightarrow}^{\zeta_{Q}}\ldots
\end{equation}
and the associated complex of graded $R$-modules
\begin{equation}\label{eqGC}
H^0(M)\otimes_\cc R\mathop{\longrightarrow}^{\zeta_Q}
\ldots\longrightarrow H^k(M)\otimes_{\cc}R(k) \mathop{\longrightarrow}^{\zeta_{Q}}\ldots
\end{equation}
By definition, multiplication by $\zeta_Q$ gives graded maps of degree one, hence the shifts. The cohomology jump ideals $J^i_k$ of these complexes define $\mathbb{P}\sR^i_k$ inside $\mathbb{P} Q$. Let
$$a=a(C,M)\stackrel{\textrm{def}}{=}\min \{i\mid H^i(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ }(M)\otimes_{\cc}R(\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ ),\zeta_Q)\ne 0\}$$
measure how far to the right the complex (\ref{eqGC}) is exact. Therefore the complexes
\begin{equation}\label{eqSC2}
H^0(M)\otimes_\cc\sO_{\pp Q}(-a)\mathop{\longrightarrow}
\ldots\longrightarrow H^a(M)\otimes_{\cc}\sO_{\pp Q}
\end{equation}
and
\begin{equation}\label{eqGC2}
H^0(M)\otimes_\cc R(-a)\mathop{\longrightarrow}
\ldots\longrightarrow H^a(M)\otimes_{\cc}R
\end{equation}
are exact except in degree $a$, and the complex (\ref{eqGC2}) is a minimal graded free resolution of the cokernel of the last map. We will call $\phi_i$ the maps in these complexes from the degree $i$ term to the degree $i+1$ term.
There are two sources of restrictions on the possible Betti numbers $b_i$ and on the geometry of the resonance varieties $\pp\mathcal{R}^i_k$ for $i<a$: one from the Chern classes of the vector bundles in (\ref{eqSC2}), and another one from the relation of $\mathbb{P}\mathcal{R}^i_k$ with Fitting ideals of the maps in (\ref{eqGC2}). The Chern classes technique was first applied by Lazarsfeld-Popa \cite{LP} to, in our language, the DGLA pair controlling the infinitesimal deformations of $\mathcal{O}_X$ in ${\rm{Pic}}^\tau(X)=\mathcal{M}(X,1)$ with cohomology constraints when $X$ is a compact K\"ahler manifold, see Section \ref{holomorphic}. Fitting ideals were also used by Fulton-Lazarsfeld to prove connectedness results for Brill-Noether loci, which are particular cases of cohomology jump loci. For applications of Fitting ideals to twisted higher-rank Brill-Noether loci, see the survey \cite{TiB}. It was noticed in \cite{B-h} that the case of the trivial local system of rank one on the complement of a hyperplane arrangement is similar, where the Chern classes approach and the relation with Fitting ideals were also explored. This similarity is explained and generalized in this section by observing that we can run the arguments for any formal DGLA pair.
The following results were stated in \cite{B-h} for hyperplane arrangement complements. However, in that case $\mathbb{P} Q=\mathbb{P}$, which is not true in general.
\begin{prop}\label{propIneqs}
With notation as above for a formal DGLA pair $(C,M)$, let $i<a$.
(a) Let $\beta_i={\rm{rank}} (\phi_i)$ and let $I_{\beta_i}(\phi_i)$ be the ideal in $R$ generated by the minors of rank $\beta_i$ of $\phi_i$. Then $b_i=\beta_i+\beta_{i-1}$ and ${\rm{depth}}(I_{\beta_i}(\phi_i))\ge a-i$.
(b) $(\mathbb{P}\mathcal{R}^i_1)^{red}$ is the support of $I_{\beta_i}(\phi_i)$.
(c) $(\mathbb{P}\mathcal{R}^{i-1}_1)^{red}\subset (\mathbb{P}\mathcal{R}^i_1)^{red}$.
(d) $\codim_{\mathbb{P} Q} \mathbb{P}\mathcal{R}^i_1 \ge a-i$ if $R$ is Cohen-Macaulay.
(e) $\codim_{\mathbb{P} Q} \mathbb{P}\mathcal{R}^i_1\le (\beta_{i-1}+1)(\beta_{i+1}+1)$ if $R$ is Cohen-Macaulay.
(f) $\mathcal{R}^0_k$ is defined by $I_{\beta_0+1-k}(\phi_0)$.
(g) $(\mathbb{P}\mathcal{R}^i_k)^{red}$ contains the support of $I_{\beta_i+1-k}(\phi_i)$, and equals it away from $(\mathbb{P}\mathcal{R}^i_1)$.
(h) $\codim_{\mathbb{P} Q} \mathbb{P}\mathcal{R}^i_k\le (\beta_{i-1}+k)(\beta_{i+1}+k)$ if $R$ is Cohen-Macaulay.
(i) $\mathbb{P}\mathcal{R}^i_k$ is connected away from the components of $\mathbb{P}\mathcal{R}^i_1$ which are disconnected from the support of $I_{\beta_i+1-k}(\phi_i)$, if $\mathbb{P} Q$ is irreducible and reduced.
(j) $(\mathcal{R}^{i}_1)^{red}\subset (\mathcal{R}^{i+1}_2)^{red}$. If $i<a-1$ and $k\le 1+\frac{a-2}{i+1}$, then $(\mathcal{R}^i_1)^{red}\subset (\mathcal{R}^{i+1}_k)^{red}$.
(k) $b_i\ge \binom{a}{i}$ if $R$ is a polynomial ring. $\beta_i\ge a-i$ if $R$ is Cohen-Macaulay.
Let $q_i=\codim_{\mathbb{P} Q}\mathbb{P} \mathcal{R}^i_1$, and for $i>0$ let $$c_t^{(i)}=\prod_{k=1}^{i+1}(1-k\cdot t)^{(-1)^kb_{i+1-k}}.$$ Let $c_j^{(i)}$ be the coefficient of $t^j$ in $c_t^{(i)}$. Assume that $\chi_a(M):=b_a-b_{a-1}+b_{a-2}-\ldots\ne 0$.
(l) Any Schur polynomial of weight $< q_i$ in $c_1^{(i)},\ldots, c_{q_i-1}^{(i)}$ is non-negative.
\end{prop}
\begin{proof}
(a) This is \cite[Theorem 20.9]{eisenbud}.
(b) The proof is essentially the same as for \cite[Proposition 3.4]{B-h}. By truncating (\ref{eqGC2}) and repeating the following argument, it is enough to show only the case $i=a-1$. Using the complex of sheaves (\ref{eqSC2}), let $\sF={\rm{coker}}(\phi_{a-1})$. The support of the Fitting ideal $I_{\beta_{a-1}}(\phi_{a-1})$ is the locus of closed points in $\mathbb{P} Q$ where $\sF$ fails to be locally free. The claim follows now from Lemma \ref{lemRR}, Lemma \ref{lemEPY}, and the fact that ${\rm{Tor}}_1^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa(\bar{\eta}),\mathcal{F}_{\bar{\eta}})=0$ iff the stalk $\mathcal{F}_{\bar{\eta}}$ is free.
(c) Follows from (b) and \cite[Corollary 20.12]{eisenbud}.
(d) Follows from (a) and (b), since the Cohen-Macaulay condition implies that depth equals codimension.
(e) See \cite[Theorem 1.2]{B-h}. The result of Eagon-Northcott used there holds if $R$ is Cohen-Macaulay.
(f) It follows by definition.
(g) This is essentially the proof of Theorem 1.1 from \cite{B-h} and its Erratum. Again, it is enough to prove the case $i=a-1$. By Lemma \ref{lemRR} and Lemma \ref{lemEPY},
$$
(\mathbb{P}\mathcal{R}^{a-1}_k)^{red}=\{\bar{\eta}\in \mathbb{P} Q\mid \dim{\rm{Tor}}_1^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa(\bar{\eta}),\mathcal{F}_{\bar{\eta}})\ge k\}.
$$
The support of the Fitting ideal $I_{\beta_{a-1}+1-k}(\phi_{a-1})$ is
$$
\{\bar{\eta}\in\mathbb{P} Q\mid m(\mathcal{F}_{\bar{\eta}})-{\rm{rank}}(\mathcal{F}_{\bar{\eta}})\ge k\},
$$
where $m(\mathcal{F}_{\bar{\eta}})$ is the minimal number of generators of $\mathcal{F}_{\bar{\eta}}$ over $\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}$. The rank is well-defined since minimal free resolutions exist over local rings, and by the characterization of exactness of a complex from \cite[Theorem 20.9]{eisenbud}. Thus we do not need to assume that $\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}$ is a domain as in {\it loc. cit.}. The rest of the argument is as in {\it loc. cit.}
(h) Follows from (f) as in the proof of (e).
(i) Follows from (f) and from the Fulton-Lazarsfeld connectedness theorem, see Erratum, Corollary 1.2 of \cite{B-h}.
(j) See \cite[Corollary 1.3]{B-h}.
(k) See \cite[Proposition 3.2]{B-h}. The result of Herzog-K\"uhl used holds for the case when $R$ is a polynomial ring. The result of Evans-Griffiths used holds for the case when $R$ is Cohen-Macaulay.
(l) This is essentially the same proof as for \cite[Theorem 3.1]{B-h}. Consider first the case $i=a-1$. By (b), $\mathbb{P}\mathcal{R}^{a-1}_1$ is the locus of points in $\mathbb{P} Q$ where $\mathcal{F}$ fails to be locally free. Let $W$ be a generic vector subspace of $H^1(C)$ of codimension $\dim \mathbb{P}\mathcal{R}^{a-1}_1+1$. Then the restriction of (\ref{eqSC2}) to $X=\mathbb{P} Q\cap\mathbb{P} W$ gives an exact sequence of locally free sheaves on $X$:
\begin{equation}\label{eqRes}
0\rightarrow H^0M\otimes\mathcal{O}_{X}(-a)\rightarrow\ldots\rightarrow H^aM\otimes\mathcal{O}_{X}\rightarrow \mathcal{F}_{| X}\rightarrow 0.
\end{equation}
Since we assume $\chi_a(M)\ne 0$, the restriction of $\mathcal{F}$ to $X$ is non-zero. Moreover, this is a globally generated vector bundle, so Fulton-Lazarsfeld positivity applies, see \cite[12.1.7 (a)]{Fu}: for any positive $k$-cycle $\alpha$ on $X$, the intersection $P_j\cap \alpha$ is the rational equivalence class of a non-negatively supported $(k-j)$-cycle on $X$, where $P_j$ is any Schur polynomial of weight $j\le k$ in the Chern classes of $\mathcal{F}_{|X}$.
Let $i:X\rightarrow \mathbb{P}$ be the natural inclusion. For a vector bundle $E$ on $\mathbb{P}$ and for $\alpha\in A_*(X)$, the projection formula says that $i_*(c_j(i^* E)\cap\alpha)=c_j(E)\cap i_*\alpha$. From (\ref{eqRes}) it is not difficult to see that the same holds for $\mathcal{F}_{|X}$, namely
\begin{equation}\label{eqInt}
i_*(c_j(\mathcal{F}_{|X})\cap\alpha) =C_j\left\{\prod_{k=1}^a(1+c_1(\mathcal{O}_{\mathbb{P}}(-k))\cdot t)^{(-1)^k b_{a-k}}\right\}\cap i_*\alpha,
\end{equation}
where $C_j$ stands for the coefficient of $t^j$. Let $\alpha=[X]\cap [L]\in A_k(X)$, where $[L]\in A_*(\mathbb{P})$ is the class of a linear section. Then the degree of $i_*\alpha$ is $\deg (\mathbb{P} Q)$. Thus the non-negativity result of Fulton-Lazarsfeld implies that
$$
C_j\left\{\prod_{k=1}^a(1-k\cdot t)^{(-1)^k b_{a-k}}\right\}\cdot \deg(\mathbb{P} Q)\ge 0
$$
for $j\le k$, and so for $j\le \dim X=q_{a-1}-1$. Thus the claim follows in this case for $P_j=c_j(\mathcal{F}_{|X})$. For the other Schur polynomials, a repeated application of (\ref{eqInt}) reduces the claim to this case.
For the case $i<a-1$, note that $\chi_{i}(M)\ne 0$ for $i<a$ by (c) and the assumption that $\chi_a(M)\ne 0$. Hence this case follows by the same argument, truncating (\ref{eqRes}) and shifting to obtain global generation.
\end{proof}
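As an elementary sanity check of the rank bookkeeping in part (a) of Proposition \ref{propIneqs}, which takes place over the graded ring $R$ with $\beta_i$ a generic rank, one can test the identity $b_i=\beta_i+\beta_{i-1}$ on a toy complex of vector spaces over a field. The following Python sketch is ours and purely illustrative; the specific complex is chosen so that it is exact except in its top degree.
\begin{verbatim}
import numpy as np

# Toy complex of vector spaces  0 -> Q -> Q^2 -> Q^2 -> 0, exact in
# degrees 0 and 1 and with one-dimensional cohomology in degree 2,
# so that a = 2 in the notation above.  Matrices act on column vectors.
phi0 = np.array([[1.0], [1.0]])             # x |-> (x, x)
phi1 = np.array([[1.0, -1.0], [0.0, 0.0]])  # (u, v) |-> (u - v, 0)

assert np.allclose(phi1 @ phi0, 0)          # the maps form a complex

b = [1, 2, 2]                               # dimensions of the terms
beta = [np.linalg.matrix_rank(phi0), np.linalg.matrix_rank(phi1)]

# Part (a): b_i = beta_i + beta_{i-1} for i < a (with beta_{-1} = 0).
assert b[0] == beta[0]                      # i = 0
assert b[1] == beta[1] + beta[0]            # i = 1
# In degree a = 2 the identity fails; dim H^2 = b_2 - beta_1 = 1.
assert b[2] != beta[1]
\end{verbatim}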
The following, which was used above, was proved for the case $\mathbb{P} Q=\mathbb{P}$ in \cite[Theorem 4.1]{EPY} using the BGG correspondence.
\begin{lm}\label{lemEPY} With the notation as above, let $\mathcal{F}={\rm{coker}}(\phi_{a-1})$ in (\ref{eqSC2}). Let $0\ne\eta\in Q$ and denote its image in $\mathbb{P} Q$ by $\bar{\eta}$. Then for $i\ge 0$
$$
H^{a-i}(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } M,\eta .)={\rm{Tor}}_i^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa (\bar{\eta}),\mathcal{F}_{\bar{\eta}}),
$$
where $\kappa(\bar{\eta})$ is the residue field of $\bar{\eta}$, and $\mathcal{F}_{\bar{\eta}}$ is the stalk of the sheaf $\mathcal{F}$ at $\bar{\eta}$.
\end{lm}
\begin{proof}
By induction on $k$, we can assume that
$$
H^{k-i}\sigma_{\le k}(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } M,\eta .)={\rm{Tor}}_i^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa(\bar{\eta}), \mathcal{K}_{k-1,\bar{\eta}})
$$
for $i\ge 0, k<a$, where $\sigma_{\le k}$ is the stupid truncation and $\mathcal{K}_k={\rm{coker}}(\phi_{k})$ in (\ref{eqSC2}). By applying $.\otimes_{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}\kappa(\bar{\eta})$ to the exact sequence
$$
0\longrightarrow \mathcal{K}_{a-2} \longrightarrow H^a(M)\otimes\mathcal{O}_{\mathbb{P} Q}\longrightarrow \mathcal{K}_{a-1}\longrightarrow 0,
$$
we obtain that
$$
{\rm{Tor}}_{i}^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa(\bar{\eta}),\mathcal{K}_{a-1,\bar{\eta}})= {\rm{Tor}}_{i-1}^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa({\bar{\eta}}),\mathcal{K}_{a-2,\bar{\eta}})=H^{a-i}(H^{\,\begin{picture}(-1,1)(-1,-3)\circle*{2}\end{picture}\ } M,\eta .)
$$
for $i\ge 2$. Since the case $i=0$ is obvious, it remains to prove the case $i=1$. This case follows since the map $H^{a-1}(M)\rightarrow H^a(M)$ decomposes via the surjection $H^{a-1}(M)\rightarrow \mathcal{K}_{a-2,\bar{\eta}}\otimes \kappa(\bar{\eta})$, and we have an exact sequence
$$
0\longrightarrow {\rm{Tor}}_{1}^{\mathcal{O}_{\mathbb{P} Q,\bar{\eta}}}(\kappa(\bar{\eta}),\mathcal{K}_{a-1,\bar{\eta}})\longrightarrow \mathcal{K}_{a-2,\bar{\eta}}\otimes\kappa(\bar{\eta}) \longrightarrow H^a(M)\longrightarrow \mathcal{K}_{a-1}\otimes\kappa(\bar{\eta}) \longrightarrow 0. $$
\end{proof}
\begin{rmk} Given a particular deformation problem with cohomology constraints, it is interesting to determine geometrically the number $a=a(C,M)$ for a DGLA pair governing the deformation problem. Let us give some examples.
(a) Consider the formal DGLA pair $({A}^{0,\ubul}_{\rm{Dol}} (\enmo(\mathcal{O}_X)),{{A}}^{p,\ubul}_{\rm{Dol}} (\mathcal{O}_X))$ governing the deformation problem with cohomology constraints $({\rm{Pic}}^\tau(X), \mathcal{O}_X, \mathcal{V}^{pq}_k(\mathcal{O}_X))$ from Remark \ref{rmkGL}. Then Proposition \ref{propIneqs} becomes a result about $\mathcal{V}^{pq}_k(\mathcal{O}_X)$ and $h^q(X,\Omega_X^p)$ via Corollary \ref{corSVB}. In this case the deformation problem can be stated on the Albanese of $X$, but one pays the price that one has to know something about the Albanese map. For $p=0$ or $\dim X$,
$$
a=\dim X-\dim (\text{generic fiber of the Albanese map}),
$$
see \cite{LP} where the statements on the numbers $h^q(\mathcal{O}_X)$ are also proven. For other values of $p$, it is enough to consider $p+q\le\dim X$ by Hodge symmetry. Then Popa-Schnell \cite{PS} show that
\begin{equation}\label{eqPS}
a\ge n-p-\delta,
\end{equation}
where $\delta$ is the defect of the semismallness of the Albanese map. This fact is implicit in the proof of their result that $\codim \sV^{pq}_1(\mathcal{O}_X)\ge |n-p-q|-\delta$, which follows from (\ref{eqPS}) together with part (d) of Proposition \ref{propIneqs} above.
(b) Consider the DGLA pair $({A}^{\ubul}_{\rm{DR}}(\enmo(\mathbb{C}_X)), {A}^{\ubul}_{\rm{DR}}(\mathbb{C}_X))$ governing the deformation problem with cohomology constraints $
(\mathbf{R}(X,1), \mathbf{1}, \tilde\sV^i_k(\mathbb{C}_X))
$
from Section \ref{localsystem}. When $X$ is the complement in $\mathbb{C}^n$ of a central essential indecomposable hyperplane arrangement, the pair is formal because $X$ is formal. Moreover, in this case, $a=n-1$ by \cite{EPY} and Proposition \ref{propIneqs} is proved in \cite{B-h}.
\end{rmk}
\end{document} |
\begin{document}
\title{A Complete Enumeration of Ballot Permutations Avoiding Sets of Small Patterns}
\begin{abstract}
Permutations whose prefixes contain at least as many ascents as descents are called ballot permutations. Lin, Wang, and Zhao have previously enumerated ballot permutations avoiding small patterns and have proposed the problem of enumerating ballot permutations avoiding a pair of permutations of length $3$. We completely enumerate ballot permutations avoiding two patterns of length $3$ and relate these avoidance classes to their respective recurrence relations and formulas, which leads to an interesting bijection between ballot permutations avoiding $132$ and $213$ and left factors of Dyck paths. In addition, we also conclude the Wilf-classification of ballot permutations avoiding sets of two patterns of length $3$, and we then extend our results to completely enumerate ballot permutations avoiding three patterns of length $3$.
\end{abstract}
\section{Introduction}
The distribution of descents over permutations has been thoroughly researched and has several important combinatorial properties. Specifically, the Eulerian polynomials $A_n(t)$ encapsulate information about the number of descents in every permutation in $S_n$, and $q$-analogues defined using additional permutation statistics have been considered by Agrawal, Choi, and Sun \cite{agrawal2020permutation}, Carlitz \cite{carlitz1954q}, and Foata and Sch{\"u}tzenberger \cite{foata1978major}. In particular, the Eulerian polynomials can also be equivalently defined using the excedance permutation statistic. Spiro \cite{spiro2020ballot} introduced a variation of this in his work on ballot permutations, of which we will now give a brief history.
The following ballot problem was first introduced by Bertrand \cite{bertrand} in 1887 for the case $\lambda = 1$.
\begin{problem}
Suppose in an election, candidate A receives $a$ votes and candidate B receives $b$ votes, where $a \geq \lambda b$ for some positive integer $\lambda$. How many ways can the ballots in the election be ordered such that candidate A maintains more than $\lambda$ times as many votes as candidate B throughout the counting of the ballots?
\end{problem}
Almost immediately after Bertrand introduced the ballot problem, André \cite{andre} introduced ballot sequences as part of a combinatorial solution, and more recently, Goulden and Serrano \cite{goulden} provided a solution to the case where $\lambda>1$ using a variation of ballot sequences. However, the most famous variation of ballot sequences is ballot permutations, which represent each vote for candidate A and candidate B via an ascent and a descent in the permutation, respectively. Ballot permutations have been studied by Spiro \cite{spiro2020ballot}, Bernardi, Duplantier, and Nadeau \cite{bernardi2010bijection}, and Lin, Wang, and Zhao \cite{lin2022decomposition}. In particular, Bernardi, Duplantier, and Nadeau \cite{bernardi2010bijection} proved that the set of ballot permutations of length $n$ is equinumerous to the set of odd order permutations of the same length. Spiro \cite{spiro2020ballot} introduced a variation of excedance numbers, whose distribution over the set of odd order permutations is the same as the distribution of the descent numbers over the set of ballot permutations.
In an extension of Spiro's \cite{spiro2020ballot} work, Lin, Wang, and Zhao \cite{lin2022decomposition} constructed an explicit bijection between these two sets of permutations, which can be extended to positive well-labeled paths and proves a conjecture due to Spiro \cite{spiro2020ballot} using the statistic of peak values. Lin, Wang, and Zhao \cite{lin2022decomposition} also established a connection between $213$-avoiding ballot permutations and Gessel walks and initiated the enumeration of ballot permutations avoiding a single pattern of length $3$. They have also suggested the problem of enumerating ballot permutations avoiding pairs of permutations of length $3$, for which we now present our main results. We first completely enumerate ballot permutations avoiding two patterns of length $3$ and prove their respective recurrence relations and formulas. In doing this, we characterize the sets of ballot permutations avoiding these pairs of patterns. We then show a bijection between $132$- and $213$-avoiding ballot permutations and left factors of Dyck paths and establish all Wilf-equivalences between patterns. We finally initiate and completely enumerate ballot permutations avoiding three patterns of length $3$.
This paper is organized as follows. In Section 2, we introduce preliminary definitions and notation. In Section 3, we completely enumerate ballot permutations avoiding two patterns of length $3$ and prove their respective recurrence relations and formulas. In addition, we prove Wilf-equivalences of patterns and show a bijection to left factors of Dyck paths. In Section 4, we extend our enumeration to ballot permutations avoiding three patterns of length $3$. In Section 5, we conclude with open problems and further directions.
\section{Preliminaries} \label{sec:preliminaries}
The following notation is borrowed from \cite{sun2022d}. We write $S_n$ to denote the set of permutations of $[n] = \{ 1,2, \dots, n \}$. Note that we can represent each permutation $\sigma \in S_n$ as a sequence $\sigma(1) \cdots \sigma(n)$. Further, let $\mathrm{Id}_n$ denote the identity permutation $12 \cdots n$ of size $n$ and given a permutation $\sigma \in S_n$, let $\rev(\sigma)$ denote the reverse permutation $\sigma(n) \sigma(n-1) \cdots \sigma(1)$. We further say that a sequence $w$ is \emph{consecutively increasing} (respectively \emph{decreasing}) if for every index $i$, $w(i+1) = w(i)+1$ (respectively $w(i+1) = w(i)-1$).
For a sequence $w = w(1) \cdots w(n)$ with distinct real values, the \emph{standardization} of $w$ is the unique permutation with the same relative order. Note that once standardized, a consecutively-increasing sequence is the identity permutation and a consecutively-decreasing sequence is the reverse identity permutation. Moreover, we say that in a permutation $\sigma$, the elements $\sigma(i)$ and $\sigma(i+1)$ are \emph{adjacent} to each other. More specifically, $\sigma(i)$ is \emph{left-adjacent} to $\sigma(i+1)$ and similarly, the element $\sigma(i+1)$ is \emph{right-adjacent} to $\sigma(i)$. The definitions in this section are taken from \cite{lin2022decomposition}.
\begin{definition}
A \emph{prefix} of a permutation $\sigma$ is a contiguous initial subsequence $\sigma(1) \cdots \sigma(p)$ for some $p$.
\end{definition}
\begin{definition}
Given a permutation $\sigma \in S_n$, we say that $i \in [n-1]$ is a \emph{descent} of $\sigma$ if $\sigma(i)>\sigma(i+1)$. Similarly, we say that $i \in [n-1]$ is an \emph{ascent} of $\sigma$ if $\sigma(i) < \sigma(i+1)$.
\end{definition}
\begin{definition}
A \emph{ballot permutation} is a permutation $\sigma$ such that any prefix of $\sigma$ has at least as many ascents as descents.
\end{definition}
We let $B_n$ denote the set of all ballot permutations of length $n$. It is interesting to consider the notion of pattern avoidance on ballot permutations, which we will now introduce.
\begin{definition}
We say that the permutation $\sigma$ \emph{contains} the permutation $\pi$ if there exist indices $c_1 < \dots < c_k$ such that $\sigma(c_1) \cdots \sigma(c_k)$ is order-isomorphic to $\pi$. We say that a permutation \emph{avoids} a pattern $\pi$ if it does not contain it.
\end{definition}
Given patterns $\pi_1, \dots, \pi_m$, we let $B_n(\pi_1, \dots, \pi_m)$ denote the set of all ballot permutations of length $n$ that avoid the patterns $\pi_1, \dots, \pi_m$.
\begin{definition}
We say that two sets of patterns $\pi_1, \dots, \pi_k$ and $\tau_1, \dots, \tau_\ell$ are \emph{Wilf-equivalent} if $|S_n(\pi_1, \dots, \pi_k)| = |S_n(\tau_1, \dots, \tau_\ell)|.$ In the context of ballot permutations, we say that these two sets of patterns are Wilf-equivalent if $|B_n(\pi_1, \dots, \pi_k)| = |B_n(\tau_1, \dots, \tau_\ell)|.$
\end{definition}
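All of the counts recorded in Tables \ref{single avoidance} and \ref{double avoidance} below can be checked by brute force for small $n$, directly from the definitions above. The following minimal Python sketch (the helper names \texttt{is\_ballot}, \texttt{contains}, and \texttt{ballot\_avoiders} are ours, introduced purely for illustration and not part of any proof) implements these definitions.
\begin{verbatim}
from itertools import combinations, permutations

def is_ballot(p):
    # Every prefix must have at least as many ascents as descents.
    asc = desc = 0
    for a, b in zip(p, p[1:]):
        if a < b:
            asc += 1
        else:
            desc += 1
        if asc < desc:
            return False
    return True

def contains(p, pattern):
    # p contains pattern iff some subsequence of p is order-isomorphic to it.
    k = len(pattern)
    order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(p)), k):
        sub = [p[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == order:
            return True
    return False

def ballot_avoiders(n, patterns):
    # All ballot permutations of [n] avoiding every pattern in `patterns`.
    return [p for p in permutations(range(1, n + 1))
            if is_ballot(p) and not any(contains(p, q) for q in patterns)]
\end{verbatim}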
To characterize permutations, we will now define the direct sum and the skew sum of permutations.
\begin{definition}
Let $\sigma$ be a permutation of length $n$ and $\pi$ be a permutation of length $m$. Then the \emph{skew sum} of $\sigma$ and $\pi$, denoted $\sigma \ominus \pi$, is defined by \begin{equation*}
\sigma \ominus \pi (i) = \begin{cases}
\sigma(i) + m & 1 \leq i \leq n \\
\pi(i-n) & n+1 \leq i \leq m+n.
\end{cases}
\end{equation*}
\end{definition}
\begin{example}\label{skewsum}
As illustrated in Figure \ref{box}, $$132 \ominus 123 = 465123.$$
\end{example}
\begin{figure}
\caption{The graph of the skew sum described in Example \ref{skewsum}.}
\label{box}
\end{figure}
\begin{definition}
Let $\sigma$ be a permutation of length $n$ and $\pi$ be a permutation of length $m$. Then the \emph{direct sum} of $\sigma$ and $\pi$, denoted $\sigma \oplus \pi$, is defined by \begin{equation*}
\sigma \oplus \pi (i) = \begin{cases}
\sigma(i) & 1 \leq i \leq n \\
\pi(i-n)+n & n+1 \leq i \leq m+n.
\end{cases}
\end{equation*}
\end{definition}
\begin{example}\label{directsum}
As illustrated in Figure \ref{box2}, $$132 \oplus 123 = 132456.$$
\end{example}
\begin{figure}
\caption{The graph of the direct sum described in Example \ref{directsum}.}
\label{box2}
\end{figure}
\section{Enumeration of Pattern Avoidance Classes of Size 2}
\label{sec:enumeration}
Lin, Wang, and Zhao \cite{lin2022decomposition} have enumerated sequences of ballot permutations avoiding small patterns. They provide the following Table \ref{single avoidance}:
\begin{table}[htp]
\centering
\begin{tabular}{|c | c | c | c|}
\hline
Patterns & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
$123$ & $1,1,2,2,5,5,14,14, \dots$ & \href{http://oeis.org/A208355}{A208355} & Catalan number $C(\lceil \frac{n}{2} \rceil)$ \\
\hline
$132$ & $1,1,2,4,10,25,70, \dots$ & \href{https://oeis.org/A005817}{A005817} & $C(\lceil \frac{n}{2} \rceil)C(\lceil \frac{n+1}{2} \rceil)$ \\
\hline
$213$ & $1,1,3,6,21,52,193, \dots$ & \href{https://oeis.org/A151396}{A151396} & Gessel walks ending on the $y$-axis \\
\hline
$231$ & $1,1,2,4,10,25,70, \dots$ & \href{https://oeis.org/A005817}{A005817} & Wilf-equivalent to pattern $132$ \\
\hline
$312$ & $1,1,3,6,21,52,193, \dots$ & \href{https://oeis.org/A151396}{A151396} & Wilf-equivalent to pattern $213$ \\
\hline
$321$ & $1,1,3,9,28,90,297, \dots$ & \href{https://oeis.org/A071724}{A071724} & $\frac{3}{n+1} {{2n-2} \choose {n-2}}$ for $n >1$ \\
\hline
\end{tabular}
\caption{Sequences of ballot permutations avoiding one pattern of length $3$.}
\label{single avoidance}
\end{table}
We extend Lin, Wang, and Zhao's \cite{lin2022decomposition} results to enumerate ballot permutations avoiding two patterns of length $3$. Table \ref{double avoidance} presents the counting sequences of ballot permutations avoiding two patterns of length $3$.
\begin{table}[htp]
\centering
\begin{tabular}{|c | c | c | c|}
\hline
Patterns & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
$123, 132$ & $1,1,1,1,1,1,1 \dots$ & & Sequence of all $1$s; Theorem \ref{123,132} \\
\hline
$123, 213$ & $1,1,2,1,2,1,2, \dots$ & & Excluding $n=1$; Theorem \ref{123,213} \\
\hline
$123, 231$ & $1,1,1,0,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123, 312$ & $1,1, 2,0,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123, 321$ & $1,1,2,2,0,0,0, \dots$ & & Terminates after $n=4$ \\
\hline
$132, 213$ & $1,1,2,3,6,10,20, \dots$ & \href{https://oeis.org/A001405}{A001405} & Theorem \ref{132,213} \\
\hline
$132, 231$ & $1,1,1,1,1,1,1, \dots$ & & Sequence of all $1$s; Theorem \ref{132,231} \\
\hline
$132, 312$ & $1,1,2,3,6,10,20, \dots$ & \href{https://oeis.org/A001405}{A001405} & Wilf-equivalent to patterns $132,213$ \\
\hline
$132, 321$ & $1,1,2,4,7,11,16, \dots$ & \href{https://oeis.org/A152947}{A152947} & Theorem \ref{132,321} \\
\hline
$213, 231$ & $1,1,2,3,6,10,20, \dots$ & \href{https://oeis.org/A001405}{A001405} & Wilf-equivalent to patterns $132,213$ \\
\hline
$213, 312$ & $1,1,3,4,11,16,42, \dots$ & \href{https://oeis.org/A027306}{A027306} & Theorem \ref{213,312} \\
\hline
$213, 321$ & $1,1,3,6,10,15,21, \dots$ & \href{https://oeis.org/A000217}{A000217} & Excluding $n=1$; Theorem \ref{213,321} \\ \hline
$231, 312$ & $1,1,2,3,6,10,20, \dots$ & \href{https://oeis.org/A001405}{A001405} & Wilf-equivalent to patterns $132,213$ \\
\hline
$231, 321$ & $1,1,2,4,8,16,32, \dots$ & \href{https://oeis.org/A011782}{A011782} & Theorem \ref{231,321} \\
\hline
$312, 321$ & $1,1,3,6,12,24,48, \dots$ & \href{https://oeis.org/A003945}{A003945} & Excluding $n=1$; Theorem \ref{312,321} \\
\hline
\end{tabular}
\caption{Sequences of ballot permutations avoiding two patterns of length 3.}
\label{double avoidance}
\end{table}
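For instance, assuming the brute-force helpers sketched at the end of Section \ref{sec:preliminaries}, the $132,321$ row of Table \ref{double avoidance} can be reproduced for small $n$ as follows; this is a usage sketch only and is not used in any proof.
\begin{verbatim}
# Counts |B_n(132,321)| for n = 1,...,7; the 132,321 row of the table
# above reads 1, 1, 2, 4, 7, 11, 16.
print([len(ballot_avoiders(n, [(1, 3, 2), (3, 2, 1)])) for n in range(1, 8)])
\end{verbatim}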
We first present a lemma, which will be used in the proofs of Theorems \ref{123,132} and \ref{123,213}.
\begin{lemma}\label{123 characterization}
Let $\sigma \in B_n(123)$, where $n$ is odd. Then either $\sigma(n)=1$ or $\sigma(n-2)=1$.
\end{lemma}
\begin{proof}
Let $\sigma$ be a ballot permutation avoiding $123$ and write $\sigma = \sigma_L 1 \sigma_R$. Since $\sigma$ avoids the pattern $123$, it cannot have two consecutive ascents, so it has at most one more ascent than descent. Because $\sigma$ is also a ballot permutation, it has at least as many ascents as descents, and since $n$ is odd the total number $n-1$ of ascents and descents is even; we conclude that $\sigma$ has an equal number of ascents and descents. Now if $\sigma_R$ is empty, then $\sigma(n)=1$. If $\sigma_R$ is nonempty, $\sigma_R$ must be decreasing to avoid the pattern $123$. Also, $\sigma_R$ can have at most $2$ elements, since if it contained more, then either it would contain an instance of $123$ or some prefix of $\sigma$ would have more descents than ascents. But $\sigma_R$ cannot be a single element, or else $\sigma$ would end in an ascent; then the prefix $\sigma(1) \cdots \sigma(n-1)$ would have more descents than ascents, which is impossible. Hence $\sigma_R$ must have $2$ elements if it is nonempty, and thus $\sigma(n-2)=1$.
\end{proof}
Now we proceed to enumerate ballot permutations avoiding pairs of patterns. We first consider when one of patterns is $123$.
\begin{theorem}\label{123,132}
For all $n$, there exists a unique ballot permutation avoiding the patterns $123$ and $132$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(123,132)$ and write $\sigma = \sigma_L 1 \sigma_R$. Note that the cases $n \leq 2$ are immediate, so for the following proof, assume $n>2$. We have two cases:
\begin{enumerate}
\item $n$ is even.
Using the same logic as in Lemma \ref{123 characterization}, we conclude that $\sigma$ has one more ascent than descent. So $\sigma_R$ cannot be empty and must only be one element to simultaneously avoid $123$ and $132$.
We claim that $\sigma_R$ must be $2$ (the second minimal element in $\sigma$). For the sake of contradiction, suppose that $\sigma_R = r >2$. If there exists an element $m>2$ between $2$ and $1$ in $\sigma$, then $2mr$ is a subsequence of $\sigma$ and is an occurrence of $132$ or $123$. If there does not exist such an element $m$ between $2$ and $1$, then they are adjacent, and hence $\sigma$ contains two consecutive descents; since $\sigma$ has no two consecutive ascents, this contradicts $\sigma$ being a ballot permutation. Thus $\sigma_R = 2$.
Note that $\sigma = \sigma_L 12$. Then $\sigma_L$ is a prefix of $\sigma$ and therefore is in $B_{n-2}(123,132)$, so we can inductively use the above reasoning to conclude that $\sigma_L = (12) \ominus (12) \ominus \dots \ominus (12)$. Hence there is a unique $\sigma$ in $B_n(123,132)$.
\item $n$ is odd.
Using Lemma \ref{123 characterization}, $\sigma_R$ must either be empty or two elements. But if $\sigma_R$ contains two elements, it either contains $123$ or $132$. So we conclude that $\sigma_R$ must be empty and hence $\sigma = \sigma_L 1$. And because $\sigma_L$ is a prefix of $\sigma$, it must be in $B_{n-1}(123,132)$, and note that $\sigma_L = (12) \ominus (12) \ominus \dots \ominus (12)$ follows immediately from Case 1. Hence there is a unique $\sigma$ in $B_n(123,132)$.
\end{enumerate}
Thus there is a unique ballot permutation avoiding the patterns $123$ and $132$.
\end{proof}
\begin{theorem}\label{123,213}
Let $a_n = |B_n(123,213)|$. Then $$a_n = \begin{cases}
1, & n=1 \text{ or } n \text{ is even}, \\
2, & \text{ otherwise}.
\end{cases}$$
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(123,213)$ and write $\sigma = \sigma_L 1 \sigma_R$. We have two cases:
\begin{enumerate}
\item $n$ is even.
Then following the same reasoning as in Theorem \ref{123,132}, $\sigma$ contains one more ascent than descent and hence $\sigma_R$ cannot be empty; namely $\sigma_R = 2$. So $\sigma_L$ is a prefix of $\sigma$ and therefore is in $B_{n-2}(123,213)$. An inductive argument similar to that presented in Theorem \ref{123,132} above shows that $\sigma_L = (12) \ominus (12) \ominus \dots \ominus (12)$. Hence there is a unique element in $B_n(123,213)$.
\item $n$ is odd.
Then from Lemma \ref{123 characterization}, either $\sigma = \sigma_L 1$ or $\sigma = \sigma_L 1 a b$.
If $\sigma = \sigma_L 1$, then the same argument in Theorem \ref{123,132} concludes that $\sigma_L = (12) \ominus \dots \ominus (12)$.
If $\sigma = \sigma_L 1 a b$, then $ab$ must be $32$ to avoid $123$ and $213$. Then $\sigma_L = (12) \ominus \dots \ominus (12)$ follows by the same argument.
Hence, if $n$ is odd, there are exactly two elements in $B_n(123,213)$.
\end{enumerate}
Therefore, $a_n = 2$ if $n>1$ is odd and $a_n = 1$ otherwise.
\end{proof}
We will now show four sets of patterns are Wilf-equivalent to each other via bijection.
\begin{theorem}\label{wilf equivalence}
The four sets $B_n(132,213)$, $B_n(213,231)$, $B_n(231,312)$, and $B_n(132,312)$ are Wilf-equivalent.
\end{theorem}
\begin{proof}
In each of the following bijections between $B_n(\pi_1, \pi_2)$ and $B_n(\pi_1', \pi_2')$, we first construct a bijection from $S_n(\pi_1, \pi_2)$ to $S_n(\pi_1', \pi_2')$ that preserves the positions of each descent and ascent in every permutation, and hence the bijection restricts to one from $B_n(\pi_1, \pi_2)$ to $B_n(\pi_1', \pi_2')$.
We first show a bijection between $B_n(132,213)$ and $B_n(213,231)$. Elements in $B_n(132,213)$ are of the form $\Id_{k_1} \ominus \Id_{k_2} \ominus \dots \ominus \Id_{k_m}$, as illustrated in Figure \ref{Wilfs}. Note that $k_1 >1$ while any other $k_i$ (where $i \neq 1$) may equal $1$, as long as the resulting permutation is a ballot permutation. We must have $k_1 > 1$ since the permutation is a ballot permutation and must start with an ascent. The permutation must be in this form since ascents must be consecutive to avoid $132$ and if there is a descent between element $i$ and element $j$, then consecutive ascents after $j$ must cover all elements up to $i$ to avoid $213$.
Similarly, elements in $B_n(213,231)$ can be written as $((\Id_{k_1'} n \oplus \Id_{k_2'}) (n-1) \oplus \dots \oplus \Id_{k_m'}) (n-m+1)$, where the $(n-i+1)$ terms do not change under direct sum and each $(n-i+1)$ term is larger than every element after it. In other words, these terms are essentially ignored in the direct sum operations. This is also shown in Figure \ref{Wilfs}. Note that $k_1'>0$ while any other $k_i'$ may equal $0$, as long as the resulting permutation is a ballot permutation. Now we can rewrite this as $\sigma_{k_1} \oplus \sigma_{k_2} \oplus \dots \oplus \sigma_{k_m}$, where $\sigma_{k_i} = \Id_{k_i'} (n-i+1)$ and the $(n-i+1)$ terms do not change under direct sum.
Hence we can send $\sigma_{k_1} \oplus \sigma_{k_2} \oplus \dots \oplus \sigma_{k_m}$ to $\Id_{k_1} \ominus \Id_{k_2} \ominus \dots \ominus \Id_{k_m}$, due to each $\sigma_{k_i}$ ending in $(n-i+1)$. Note that this is a bijection from $S_n(132,213)$ to $S_n(213,231)$ that preserves the positions of ascents and descents in every permutation, so it restricts to a bijection between $B_n(132,213)$ and $B_n(213,231)$.
Now we show a bijection between $B_n(213,231)$ and $B_n(231,312)$. As discussed above, elements in $B_n(213,231)$ can be written in the form $\sigma_{k_1} \oplus \sigma_{k_2} \oplus \dots \oplus \sigma_{k_m}$, where $\sigma_{k_i} = \Id_{k_i'} (n-i+1)$. Now note that elements in $B_n(231,312)$ are in the form of $1 \oplus \rev(\Id_{k_1}) \oplus \dots \oplus \rev(\Id_{k_m})$, where each $\Id_{k_i}$ may be one element. Elements in $B_n(213,231)$ are of the form $((\Id_{k_1'} n \oplus \Id_{k_2'}) (n-1) \oplus \dots \oplus \Id_{k_m'}) (n-m+1)$. Note that $k_1'>0$ while any other $k_i'$ may equal $0$. Now we transform $1 \oplus \rev(\Id_{k_1}) \oplus \dots \oplus \rev(\Id_{k_m})$ into an element in $B_n(213,231)$ by preserving the place of each ascent and descent. This expression can be rewritten as the direct sum of identity permutations, with maximal elements to represent the places where descents occur. In other words, every element of the form $1 \oplus \rev(\Id_{k_1}) \oplus \dots \oplus \rev(\Id_{k_m})$ can be turned into an element of the form $((\Id_{k_1'} n \oplus \Id_{k_2'}) (n-1) \oplus \dots \oplus \Id_{k_m'}) (n-m+1)$ such that the place of every descent and ascent is preserved. And the same argument works in reverse, so we conclude that there is a bijection between $B_n(213,231)$ and $B_n(231,312)$.
We show a bijection between $B_n(231,312)$ and $B_n(132,312)$.
Note that elements in $B_n(231,312)$ can be written in the form $1 \oplus \rev(\Id_{k_1}) \oplus \dots \oplus \rev(\Id_{k_m})$, where each $\Id_{k_i}$ may be one element.
Observe that elements in $B_n(132,312)$ can be written in the form $(((( \cdots (m \oplus \Id_{k_{m}}) \ominus \cdots ) \ominus 2 ) \oplus \Id_{k_2} ) \ominus 1 ) \oplus \Id_{k_1}$, where $1, \dots , m$ are the first $m$ minimal elements in $\sigma$ and $\Id_{k_i}$ may be empty.
These are also shown in Figure \ref{Wilfs}. Now we will turn $\sigma$ into an element of $B_n(231,312)$. Note that by this construction, there will always be a descent after each identity permutation in the sum, which we may write in terms of a reverse of an identity permutation. Also noting that $\Id_{k_i}$ may be written as $\rev(1) \oplus \dots \oplus \rev(1)$, we can turn the expression above to $1 \oplus \rev(\Id_{k_1}) \oplus \dots \oplus \rev(\Id_{k_j})$ while preserving every descent and ascent. A similar argument works in reverse, and hence there is a bijection between $B_n(132,312)$ and $B_n(231,312)$.
\end{proof}
\begin{figure}
\caption{The form of a permutation in $B_n(132,213)$.}
\caption{The form of a permutation in $B_n(213,231)$.}
\caption{The form of a permutation in $B_n(132,312)$.}
\caption{Example forms of permutations in $B_n(132,213)$, $B_n(213,231)$, and $B_n(231,312)$. All of these permutations can be mapped to each other and to the left factor $UUDUUD$, as will be shown in Theorem \ref{132,213}.}
\label{Wilfs}
\end{figure}
Note that this result also shows that the distribution of descents is the same over each of these four sets.
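Since the bijections in the proof of Theorem \ref{wilf equivalence} preserve the positions of ascents and descents, this can also be checked experimentally for small $n$ with the brute-force helpers sketched in Section \ref{sec:preliminaries}; the snippet below is only such a sanity check.
\begin{verbatim}
from collections import Counter

def descent_distribution(perms):
    # Multiset of descent numbers over a collection of permutations.
    return Counter(sum(a > b for a, b in zip(p, p[1:])) for p in perms)

pairs = [((1, 3, 2), (2, 1, 3)), ((2, 1, 3), (2, 3, 1)),
         ((2, 3, 1), (3, 1, 2)), ((1, 3, 2), (3, 1, 2))]
for n in range(1, 8):
    dists = [descent_distribution(ballot_avoiders(n, pq)) for pq in pairs]
    assert all(d == dists[0] for d in dists)
\end{verbatim}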
We will show in the following theorem that the elements in $B_n(132,213)$ are in bijection with left factors of Dyck paths of $n-1$ steps. But first we provide the following definition:
\begin{definition}
A \emph{left factor} of a Dyck path is the path made up of all steps that precede the last $D$ step in a Dyck path. Left factors of Dyck paths of $n$ steps are left factors of all possible Dyck paths such that the path preceding the last $D$ step contains $n$ steps.
\end{definition}
\begin{example}\label{left factor}
Consider the Dyck path $UUDUUDDDUUDD$ shown in Figure \ref{Dyck}. The left factor associated with this Dyck path is $UUDUUDDDUU$.
\end{example}
\begin{figure}
\caption{The Dyck path and its corresponding left factor in Example \ref{left factor}.}
\label{Dyck}
\end{figure}
The following theorem presents a bijection between ballot permutations avoiding $132$ and $213$ and left factors of Dyck paths. Since every prefix of a ballot permutation contains no more descents than ascents, left factors of Dyck paths are a very natural combinatorial object to biject to.
\begin{theorem}\label{132,213}
The elements in $B_n(132,213)$ are in bijection with left factors of Dyck paths of $n-1$ steps, which are counted by the OEIS sequence \href{https://oeis.org/A001405}{A001405} \cite{oeis}.
\end{theorem}
\begin{proof}
Note that the elements in $B_n(132,213)$ are the skew sum of consecutively increasing permutations. Moreover, since they have to be ballot, the first two elements in any $\sigma \in B_n(132,213)$ must be increasing.
Let $\sigma = \Id_{k_1} \ominus \Id_{k_2} \ominus \dots \ominus \Id_{k_m}$. Note that $k_1 >1$ while any other $k_i$ may equal $1$. Now we can group together consecutive $k_i$s where each $k_i = 1$. So we get $\sigma = \Id_{k_1} \ominus \rev(\Id_{\ell_1}) \ominus \Id_{\ell_2} \ominus \cdots$. Then note that $\Id_{k_1} \ominus \rev(\Id_{\ell_1})$ uniquely determines a series of ups and downs in the corresponding left factor of a Dyck path. We can use the same argument for the rest of the terms in the skew sum of $\sigma$ to conclude that each $\sigma \in B_n(132,213)$ uniquely determines a left factor of a Dyck path. And we can see that this argument works in reverse as well, since consecutive ascents can be grouped together into an identity term in $\sigma$, and consecutive descents can be grouped together into a reverse identity term in $\sigma$. Hence we conclude that there exists a bijection between the elements in $B_n(132,213)$ and left factors of Dyck paths of $n-1$ steps.
And hence \begin{align*}
|B_{n+1}(132,213)| = {n \choose {\lfloor \frac{n}{2} \rfloor}}. & \qedhere
\end{align*}
\end{proof}
The following example illustrates the bijection presented in Theorem \ref{132,213}.
\begin{example}\label{bijectexample}
The ballot permutation $\sigma = 456312$ is in $B_6(132,213)$ and corresponds to the left factor $UUDDU$, as shown in Figure \ref{biject}.
\end{example}
\begin{figure}
\caption{A permutation in $B_6(132,213)$ and its corresponding left factor of a Dyck path in Example \ref{bijectexample}.}
\label{biject}
\end{figure}
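The bijection of Theorem \ref{132,213} amounts to reading off the ascent--descent word of the permutation, with $U$ recording an ascent and $D$ a descent. The following sketch, which again assumes the brute-force helpers of Section \ref{sec:preliminaries}, recovers the left factor of Example \ref{bijectexample} and checks the resulting counting formula for small $n$; it is illustrative only.
\begin{verbatim}
from math import comb

def word(p):
    # Ascent-descent word of p: U for an ascent, D for a descent.
    return ''.join('U' if a < b else 'D' for a, b in zip(p, p[1:]))

print(word((4, 5, 6, 3, 1, 2)))   # prints UUDDU, as in the example above

for n in range(1, 8):
    # |B_{n+1}(132,213)| should equal binomial(n, floor(n/2)).
    assert len(ballot_avoiders(n + 1, [(1, 3, 2), (2, 1, 3)])) == comb(n, n // 2)
\end{verbatim}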
\begin{theorem}\label{132,231}
There exists a unique ballot permutation avoiding the patterns $132$ and $231$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(132,231)$. The first two elements of $\sigma$ must be increasing since $\sigma$ is a ballot permutation, and they must be consecutive in value to avoid an occurrence of $132$; call them $k$ and $k+1$. Any element smaller than $k$ would have to appear before $k$ to avoid an occurrence of $231$ (formed with $k$ and $k+1$), but $k$ is the first element, so $k=1$ and $\sigma$ begins $12$. Now suppose $\sigma(1)\cdots\sigma(j) = 12\cdots j$ for some $j \ge 2$ and $\sigma(j+1) = c > j+1$. Then $j+1$ appears after $c$, and $j\,c\,(j+1)$ is an occurrence of $132$, a contradiction. Hence $\sigma(j+1)=j+1$ for every $j$, and we conclude that $\sigma = \Id_n$.
\end{proof}
Now we show a bijection between ballot permutations of length $n+1$ avoiding the patterns $132$ and $321$ with permutations of length $n$ avoiding the same patterns. This involves removing the second element of the ballot permutation and noting that the remaining subpermutation will still avoid $132$ and $321$.
\begin{theorem}\label{132,321}
The elements in $B_{n+1}(132,321)$ are in bijection with the elements in $S_n(132,321)$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_{n+1}(132,321)$. Since $\sigma$ is a ballot permutation, the first two elements of $\sigma$ are increasing, and they must be consecutive in value to avoid an occurrence of $132$. So let us write $\sigma = k(k+1) \sigma_R$. Removing the element $k+1$ (and standardizing) yields a permutation that still avoids these patterns, so the standardization of $k \sigma_R$ lies in $S_n(132,321)$.
Now let $\sigma \in S_n(132,321)$ and set $k = \sigma(1)$. We first note that $\sigma$ has at most one descent: if $\sigma$ had descents at positions $i<j$, then either $\sigma(i)\sigma(i+1)\sigma(j+1)$ is an occurrence of $321$ or $\sigma(i+1)\sigma(j)\sigma(j+1)$ is an occurrence of $132$.
Now insert the new value $k+1$ into the second position, increasing by $1$ every entry of $\sigma$ that is at least $k+1$; this is the inverse of the removal above. If an occurrence of $132$ or $321$ in the new permutation used the inserted entry $k+1$ but not $k$, then replacing $k+1$ by the entry $k$ immediately to its left (no value lies strictly between $k$ and $k+1$) would give an occurrence of the same pattern already present before the insertion, a contradiction. An occurrence using both $k$ and $k+1$ is also impossible, since no entry precedes $k$ and no value lies strictly between $k$ and $k+1$. Hence the new permutation still avoids $132$ and $321$. It is moreover a ballot permutation: it begins with the ascent $k(k+1)$ and, by the observation above, contains at most one descent.
And hence the resulting permutation is in $B_{n+1}(132,321)$. This is sufficient to show a bijection between the elements in $B_{n+1}(132,321)$ and the elements in $S_n(132,321)$. Simion and Schmidt \cite{simion1985restricted} proved that $|S_n(132,321)| = {n \choose 2} + 1$, so
\begin{align*}
|B_{n+1}(132,321)| = |S_n(132,321)| ={n \choose 2} + 1. & \qedhere
\end{align*}
\end{proof}
\begin{theorem}\label{213,312}
Let $a_n = |B_n(213,312)|$. Then $$a_n = \sum_{k=0}^{\lfloor \frac{n-1}{2} \rfloor} {n-1 \choose k},$$ which is listed as the OEIS sequence \href{https://oeis.org/A027306}{A027306} \cite{oeis}.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(213,312)$. Then writing $\sigma = \sigma_L n \sigma_R$, note that $\sigma_L$ must be increasing and $\sigma_R$ must be decreasing (but not necessarily consecutive).
Then we construct all possible ballot permutations in $B_n(213,312)$. Let $|\sigma_R| = k$. Then there are ${n-1 \choose k}$ ways to pick the elements of $\sigma_R$ from the $n-1$ elements other than $n$, and these elements are forced to decrease. The rest of the elements must be in $\sigma_L$, and are forced to increase. Hence there are ${n-1 \choose k}$ ways to construct $\sigma$ for each $k$. But $\sigma$ has $n-1-k$ ascents followed by $k$ descents, so the ballot condition holds precisely when $k \leq \lfloor \frac{n-1}{2} \rfloor$. And hence \begin{align*}
a_n = \sum_{k=0}^{\lfloor \frac{n}{2} \rfloor} {n \choose k}. & \qedhere
\end{align*}
\end{proof}
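All of the counts in this section are small enough to verify by brute force. The following short Python sketch (assuming the definition of a ballot permutation used throughout, namely that every prefix has at least as many ascents as descents, and representing patterns as tuples) enumerates $B_n$ for any set of forbidden patterns; it can be used to confirm the formula above as well as the other enumerations in this and the next section.
\begin{verbatim}
from itertools import combinations, permutations

def is_ballot(p):
    # every prefix must contain at least as many ascents as descents
    asc = desc = 0
    for a, b in zip(p, p[1:]):
        if a < b:
            asc += 1
        else:
            desc += 1
        if desc > asc:
            return False
    return True

def contains(p, patt):
    # classical containment: some subsequence of p is order-isomorphic to patt
    k = len(patt)
    return any(
        all((p[idx[i]] < p[idx[j]]) == (patt[i] < patt[j])
            for i in range(k) for j in range(i + 1, k))
        for idx in combinations(range(len(p)), k))

def ballot_avoiders(n, patterns):
    return [p for p in permutations(range(1, n + 1))
            if is_ballot(p) and not any(contains(p, t) for t in patterns)]

# for example, the counts |B_n(213, 312)| for small n
for n in range(1, 8):
    print(n, len(ballot_avoiders(n, [(2, 1, 3), (3, 1, 2)])))
\end{verbatim}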
Next we show a bijection between ballot permutations avoiding $213$ and $321$ and permutations avoiding the same patterns other than $n \Id_{n-1}$, which is clearly not a ballot permutation.
\begin{theorem}\label{213,321}
Elements in $B_n(213,321)$ are in bijection with elements in $S_n(213,321) \setminus \{n \Id_{n-1}\}$.
\end{theorem}
\begin{proof}
It is clear that every $\sigma \in B_n(213,321)$ is in $S_n(213,321) \setminus \{n \Id_{n-1}\}$.
Now let $\sigma \in S_n(213,321)$ and write $\sigma = \sigma_L n \sigma_R$, noting that $\sigma_L$ and $\sigma_R$ are both increasing to avoid $213$ and $321$. This means that $\sigma$ has at most one descent. If $\sigma_L$ is nonempty, then $\sigma$ starts with an ascent and so is a ballot permutation that avoids $213$ and $321$. But there is only one permutation in $S_n(213,321)$ where $\sigma_L$ is empty, namely $n \Id_{n-1}$. Hence every $\sigma \in S_n(213,321)$ other than $n \Id_{n-1}$ is in $B_n(213,321)$, and the identity map gives a bijection between $B_n(213,321)$ and $S_n(213,321) \setminus \{n \Id_{n-1}\}$.
So by Simion and Schmidt \cite{simion1985restricted}, we conclude that \begin{align*}
|B_n(213,321)| = |S_n(213,321)| - 1 = {n \choose{2}}. & \qedhere
\end{align*}
\end{proof}
The following theorem shows a bijection between $B_{n+1}(231,321)$ and $S_n(231,321)$, which involves removing the first element of every ballot permutation avoiding $231$ and $321$ and noting that the remaining permutation will still avoid these patterns.
\begin{theorem}\label{231,321}
The elements in $B_{n+1}(231,321)$ are in bijection with the elements in $S_n(231,321)$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_{n+1}(231,321)$. Note that the minimum element $1$ must be either the first element or the second element of $\sigma$ to avoid $231$ and $321$. However, since $\sigma$ is a ballot permutation, it cannot start with a descent, and hence $1$ must be the first element. Removing $1$ from $\sigma$ and standardizing still avoids $231$ and $321$, and hence yields an element of $S_n(231,321)$.
Now let $\sigma \in S_n(231,321)$. Since $\sigma$ avoids $321$, there are no consecutive descents, which means that every prefix has at most one more descent than ascent. If we insert a minimal element $0$ at the beginning of $\sigma$, then $0\sigma$ will still avoid $231$ and $321$. Moreover, we have guaranteed an ascent at the beginning of the permutation, and there are still no consecutive descents, so every prefix of $0\sigma$ has at least as many ascents as descents. Hence $0\sigma \in B_{n+1}(231,321)$.
This is sufficient to show a bijection between the elements in $B_{n+1}(231,321)$ and the elements in $S_n(231,321)$. By Simion and Schmidt \cite{simion1985restricted}, we conclude that \begin{align*}
|B_{n+1}(231,321)| = |S_n(231,321)| = 2^{n-1}. & \qedhere
\end{align*}
\end{proof}
And lastly, we provide a constructive approach to show the following result:
\begin{theorem}\label{312,321}
Let $a_n = |B_n(312,321)|$. Then $a_n = 3 \cdot 2^{n-3}$ for $n \geq 3$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(312,321)$ and write $\sigma = \sigma_L r$, where $r \in [n]$. We insert a maximal element $(n+1)$ into $\sigma$ to generate an element in $B_{n+1}(312,321)$. Note that we must insert $(n+1)$ adjacent to $r$ in $\sigma$ to avoid an occurrence of $312$ and $321$. Further, $\sigma_L (n+1) r$ avoids $312$ and $321$ and is further ballot. A similar argument shows that $\sigma_L r (n+1)$ is also in $B_{n+1}(312,321)$. So each $\sigma \in B_n(312,321)$ will generate two distinct elements in $B_{n+1}(312,321)$.
Since we've shown that $(n+1)$ must be inserted adjacent to the last element of a permutation in $B_n(312,321)$, now we show that inserting $(n+1)$ anywhere else into some $\sigma' \notin B_n(312,321)$ will not generate an element in $B_{n+1}(312,321)$.
As stated above, we must insert $(n+1)$ adjacent to the last element of $\sigma'$ to avoid $312$ and $321$. Now write $\sigma' = \sigma_L' r'$. We have two permutations to consider:
\begin{enumerate}
\item $\sigma_L' r' (n+1)$
Then for $\sigma_L' r' (n+1)$ to be ballot and avoid $312$ and $321$, the prefix $\sigma_L' r'$ would have to be ballot and avoid $312$ and $321$ as well, which is impossible since $\sigma' \notin B_n(312,321)$.
\item $\sigma_L' (n+1) r'$
Now if $\sigma_L' r'$ does not avoid $312$ and $321$, then $\sigma_L' (n+1) r'$ does not avoid these patterns either. If $\sigma_L' r'$ does avoid $312$ and $321$, then it must not be a ballot permutation. But since this permutation avoids $321$, there cannot exist consecutive descents in this permutation. Note that $\sigma_L' r'$ cannot start with an ascent, or else it would be a ballot permutation. And hence $\sigma_L' r'$ must start with a descent, and hence $\sigma_L' (n+1) r'$ is not a ballot permutation.
\end{enumerate}
So inserting a maximal element $(n+1)$ anywhere else of $\sigma' \notin B_n(312,321)$ will not generate an element in $B_{n+1}(312,321)$. So we conclude that $a_{n+1} = 2a_n$. Since we know that $a_3 = 3$, then we conclude that $a_n = 3 \cdot 2^{n-3}$.
\end{proof}
\section{Enumeration of Pattern Avoidance Classes of length 3}
Having enumerated ballot permutations avoiding pairs of patterns of length $3$, we now turn our attention to enumerating ballot permutations avoiding triples of patterns of length $3$, as Simion and Schmidt \cite{simion1985restricted} have done for classical permutations. Table \ref{triple avoidance} presents the sequences of ballot permutations avoiding three patterns of length $3$.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Patterns & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
$123,132,213$ & $1,1,1,1,1,1, \dots$ & & Sequence of all $1$s; Corollary \ref{123,132,213} \\
\hline
$123,132,231$ & $1,1,0,0,0,0, \dots$ & & Terminates after $n=2$ \\
\hline
$123,132,312$ & $1,1,1,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,132,321$ & $1,1,1,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,213,231$ & $1,1,1,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,213,312$ & $1,1,2,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,213,321$ & $1,1,2,1,0,0, \dots$ & & Terminates after $n=4$ \\
\hline
$123,231,312$ & $1,1,1,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123, 231, 321$ & $1,1,1,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,312,321$ & $1,1,2,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$132,213,231$ & $1,1,1,1,1,1, \dots$ & & Sequence of all $1$s; Corollary \ref{triple corollary} \\
\hline
$132,213,312$ & $1,1,2,2,3,3, \dots$ & \href{https://oeis.org/A004526}{A004526} & Theorem \ref{132,213,312} \\
\hline
$132,213,321$ & $1,1,2,3,4,5, \dots$ & \href{https://oeis.org/A000027}{A000027} & Excluding $n=1$; Theorem \ref{132,213,321} \\
\hline
$132,231,312$ & $1,1,1,1,1,1, \dots$ & & Sequence of all $1$s; Corollary \ref{triple corollary} \\
\hline
$132,231,321$ & $1,1,1,1,1,1, \dots$ & & Sequence of all $1$s; Corollary \ref{triple corollary} \\
\hline
$132,312,321$ & $1,1,2,3,4,5, \dots$ & \href{https://oeis.org/A000027}{A000027} & Excluding $n=1$; Theorem \ref{triple bijection} \\
\hline
$213,231,312$ & $1,1,2,2,3,3, \dots$ & \href{https://oeis.org/A004526}{A004526} & Theorem \ref{213,231,312} \\
\hline
$213,231,321$ & $1,1,2,3,4,5, \dots$ & \href{https://oeis.org/A000027}{A000027} & Excluding $n=1$; Theorem \ref{triple bijection} \\
\hline
$213,312,321$ & $1,1,3,4,5,6, \dots$ & \href{https://oeis.org/A000027}{A000027} & Excluding $n=1$ and $n=2$; Theorem \ref{213,312,321} \\
\hline
$231,312,321$ & $1,1,2,3,5,8, \dots$ & \href{https://oeis.org/A000045}{A000045} & Theorem \ref{231,312,321} \\
\hline
\end{tabular}
\caption{Sequences of ballot permutations avoiding three permutations of length $3$.}
\label{triple avoidance}
\end{table}
\begin{corollary}\label{123,132,213}
We have $|B_n(123,132,213)|=1$ for all $n$.
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{123,132}, since the unique ballot permutation avoiding $123$ and $132$ also avoids $213$. Specifically, if $\sigma \in B_n(123,132,213)$, then $\sigma = (12) \ominus (12) \ominus \dots \ominus (12)$ when $n$ is even and $\sigma = (12) \ominus (12) \ominus \dots \ominus (12) \ominus (1)$ when $n$ is odd.
\end{proof}
\begin{corollary}\label{triple corollary}
We have $|B_n(132,213,231)| = |B_n(132,231,312)| = |B_n(132,231,321)| =1$ for all $n$.
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{132,231}. In particular, \begin{align*}
B_n(132,213,231) = B_n(132,231,312) = B_n(132,231,321) = \{ \Id_n \}. & \qedhere
\end{align*}
\end{proof}
Now we will show that the sets of patterns $\{132,213,312 \}$ and $\{213,231,312\}$ are Wilf-equivalent.
\begin{theorem}\label{213,231,312}
The sets $B_n(132,213,312)$ and $B_n(213,231,312)$ are Wilf-equivalent.
\end{theorem}
\begin{proof}
Note that an element in $B_n(132,213,312)$ can be written as $\Id_{k_L} \ominus \rev(\Id_{k_R})$. Similarly, an element in $B_n(213,231,312)$ can be written as $\Id_{k_L} \oplus \rev(\Id_{k_R})$.
Now we can write $\Id_{k_L} \oplus \rev(\Id_{k_R})$ as $\Id_{k_L} \oplus (1 \ominus \rev(\Id_{k_R-1}))$. But note that we can send this to $\Id_{k_L+1} \ominus \rev(\Id_{k_R-1})$ while preserving the positions of every descent and ascent in the permutation. A similar reasoning applies in the reverse direction, and hence there is a descent-preserving bijection between $B_n(132,213,312)$ and $B_n(213,231,312)$.
\end{proof}
\begin{theorem}\label{132,213,312}
Let $a_n = |B_n(132,213,312)|$. Then $a_n = \lfloor \frac{n+1}{2} \rfloor$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(132,213,312)$. Then, because $\sigma$ avoids $132$, $213$, and $312$, it can be written in the form $\sigma_L n \sigma_R$, where $\sigma_L n$ is consecutively increasing and $\sigma_R$ is consecutively decreasing. Now we count how many different $\sigma$ there are. Note that $n$ can be in the last $\lfloor \frac{n+1}{2} \rfloor$ places to ensure that there are at least as many ascents as descents in $\sigma$. And hence $|B_n(132,213,312)| = \lfloor \frac{n+1}{2} \rfloor$.
\end{proof}
Now we show a Wilf-equivalence between three other sets of patterns.
\begin{theorem}\label{triple bijection}
The three sets $B_n(132,213,321)$, $B_n(132,312,321)$, and $B_n(213,231,321)$ are Wilf-equivalent.
\end{theorem}
\begin{proof}
Note that an element in $B_n(132,213,321)$ can be written as $\Id_{k_L} \ominus (1 \oplus \Id_{k_R})$. Similarly, an element in $B_n(132,312,321)$ can be written as $(\Id_{k_L} \ominus 1) \oplus \Id_{k_R}$ and an element in $B_n(213,231,321)$ can be written as $\Id_{k_L} \oplus (1 \ominus \Id_{k_R})$.
Observe that for each value of $k_L, k_R \in \mathbb N$ with $k_L+k_R+1 = n$, we can send $\Id_{k_L} \ominus (1 \oplus \Id_{k_R})$ to $(\Id_{k_L} \ominus 1) \oplus \Id_{k_R}$. This preserves the position of every descent in the permutation and gives our bijection.
Similarly, $\Id_{k_L} \ominus (1 \oplus \Id_{k_R})$ can be bijected to $\Id_{k_L-1} \oplus (1 \ominus \Id_{k_R+1})$, which also preserves the position of every descent in the permutation.
Since the bijections from $S_n(132,213,321)$ to $S_n(132,312,321)$ to $S_n(213,231,321)$ are descent-preserving, we can now restrict them to bijections from $B_n(132,213,321)$ to $B_n(132,312,321)$ to $B_n(213,231,321)$.
Hence there exists descent-preserving bijections between $B_n(132,213,321)$ and $B_n(132,312,321)$ and between $B_n(132,213,321)$ and $B_n(213,231,321)$, and all three sets are Wilf-equivalent.
\end{proof}
\begin{theorem}\label{132,213,321}
Let $a_n = |B_n(132,213,321)|$. Then $a_n = n-1$.
\end{theorem}
\begin{proof}
Let $\sigma \in B_n(132,213,321)$. Because $\sigma$ avoids $132$, $213$, and $321$, it can be written in the form $\sigma_L n \sigma_R$, where $\sigma_L n$ is consecutively increasing and $\sigma_R$ is consecutively increasing. Note that there is at most one descent in this permutation, and hence $n$ can be anywhere except the first position for $\sigma$ to be a ballot permutation (in other words, $\sigma_L$ cannot be empty), and hence there are $n-1$ different permutations in $B_n(132,213,321)$.
\end{proof}
\begin{theorem}\label{213,312,321}
Let $a_n = |B_n(213,312,321)|$. Then $a_n = n$.
\end{theorem}
\begin{proof}
Let $\sigma$ be in $B_n(213,312,321)$. Now $\sigma$ can be written as $\sigma_L n \sigma_R$, where $\sigma_L$ is increasing and $\sigma_R$ is either empty or one element to avoid $312$ and $321$.
When $\sigma_R$ is empty, the identity permutation is the only one that satisfies the above criteria. When $\sigma_R$ is nonempty, we can choose $n-1$ different elements to be the last element. Then all the other elements must go in increasing order in $\sigma_L$, so there are a total of $n$ different permutations in $B_n(213,312,321)$.
\end{proof}
Finally we present a constructive approach to show that ballot permutations avoiding the patterns $231$, $312$, and $321$ follow the Fibonacci sequence with initial terms $a_1 = 1$ and $a_2 = 1$.
\begin{theorem}\label{231,312,321}
Let $a_n = |B_n(231,312,321)|$. Then $a_n$ follows the recurrence relation $a_{n} = a_{n-1} + a_{n-2}$ with the initial terms $a_1 = 1$ and $a_2 = 1$, which is the Fibonacci sequence.
\end{theorem}
\begin{proof}
Note that given some $\sigma \in B_{n-1}(231,312,321)$, the permutation $\sigma n$ will be in $B_{n}(231,312,321)$, since inserting $n$ at the end of a permutation that avoids $231,312$, and $321$ will still avoid these three permutations. Moreover, an ascent has been added by inserting $n$ onto the end of $\sigma$, and hence $\sigma n$ will still be a ballot permutation. This case contributes $a_{n-1}$ different elements in $B_{n}(231,312,321)$.
Now also note that given some $\tau \in B_{n-2}(231,312,321)$, the permutation $\tau n (n-1)$ will also be in $B_{n}(231,312,321)$. This still avoids $231$, $312$, and $321$, and we've added an ascent followed by a descent, so $\tau n (n-1)$ is still a ballot permutation. This case contributes $a_{n-2}$ different elements in $B_{n}(231,312,321)$.
Given $\sigma \in B_{n-1}(231,312,321)$, we show that inserting the maximal element $n$ in any other place cannot produce an element in $B_{n}(231,312,321)$. Now if $\sigma$ ends in $n-1$ and we insert $n$ left-adjacent to $n-1$, this case is already accounted for above because this is in the form of $\tau n(n-1)$, where $\tau \in B_{n-2}(231,312,321)$. If $\sigma$ ends in $k<n-1$, then inserting $n$ left-adjacent to $k$ will create an occurrence of $231$. Inserting $n$ anywhere else leaves at least two elements after $n$, which together with $n$ form an occurrence of $312$ or $321$.
For $\sigma \notin B_{n-1}(231,312,321)$, we show that we cannot produce an element in $B_{n}(231,312,321)$ by inserting the maximal element $n$ anywhere. Note that if $\sigma$ contains either $231$, $312$, or $321$, inserting $n$ anywhere will still contain an occurrence of these patterns. Now let $\sigma$ be a non-ballot permutation. Note that we must insert $n$ adjacent to the last element of $\sigma$ or else there is an occurrence of $312$ or $321$. If $n$ is inserted left-adjacent to the last element, then $\sigma$ must be $\sigma_L (n-1)$ to avoid $231$. Then $\sigma_L n (n-1)$ is not a ballot permutation because we've inserted a descent at the end of $\sigma_L (n-1)$. Now if we insert $n$ at the end of $\sigma$, note that $\sigma$ is a prefix of $\sigma n$. And since $\sigma$ is not a ballot permutation, $\sigma n$ cannot be either. And hence if we insert $n$ anywhere else, we cannot produce an element in $B_{n}(231,312,321)$.
Now let $\tau \in B_{n-2}(231,312,321)$. We show that we cannot produce an element in $B_{n}(231,312,321)$ by inserting the maximal elements $n-1$ and $n$ in any other places. Now note that $\tau (n-1) \in B_{n-1}(231,312,321)$, which is covered by the other case above. Now similar to the reasoning above, we have to insert $n-1$ left-adjacent to the last element of $\tau$. And doing this forces the last element of $\tau$ to be $n-2$ to avoid $231$. So write $\tau$ as $\tau_L (n-2)$ and consider $\tau_L (n-1) (n-2)$. Then note that $\tau_L (n-1) (n-2)n$ is already counted in the case above since $\tau_L (n-1) (n-2) \in B_{n-1}(231,312,321)$. Moreover, $\tau_L (n-1) n (n-2)$ contains $231$, so inserting $n$ and $n-1$ anywhere else in $\tau$ will not produce an element in $B_{n}(231,312,321)$.
For $\tau \notin B_{n-2}(231,312,321)$, we show that we cannot produce an element in $B_{n}(231,312,321)$ by inserting the maximal elements $n$ and $n-1$ anywhere. As discussed above, if $\tau$ contains $231$, $312$, or $321$, then inserting $n$ and $n-1$ anywhere in $\tau$ will still contain these patterns. So assume that $\tau$ is not a ballot permutation. So we must insert $n-1$ either left-adjacent or right-adjacent to the last element of $\tau$.
Let us consider the case where we insert $n-1$ left-adjacent to the last element of $\tau$. Similarly as above, this forces the last element of $\tau$ to be $n-2$ to avoid $231$. So write $\tau$ as $\tau_L (n-2)$ and consider $\tau_L (n-1) (n-2)$. Now this is simply adding a descent at the end of $\tau_L (n-2)$. Similarly, we must insert $n$ adjacent to $n-2$, and hence $\tau_L (n-1) (n-2)$ is not a ballot permutation. This implies that $\tau_L (n-1) (n-2) n$ is also not a ballot permutation. Moreover, $\tau_L (n-1) n (n-2)$ contains an occurrence of $231$.
Note that if we insert $n-1$ right-adjacent to the last element of $\tau$, then $\tau (n-1)$ is not a ballot permutation because $\tau$ is a prefix of $\tau (n-1)$ and $\tau$ is not a ballot permutation. Hence both $\tau (n-1) n$ and $\tau n (n-1)$ cannot be ballot permutations.
Hence if we insert $n$ and $n-1$ anywhere else, we cannot produce an element in $B_{n}(231,312,321)$.
So we conclude that \begin{align*}
a_{n} = a_{n-1} + a_{n-2}. & \qedhere
\end{align*}
\end{proof}
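The recurrence can likewise be checked by brute force for small $n$ along the lines of the sketch given after Theorem \ref{213,312}; a compact standalone version for this pattern set is the following (again assuming the prefix-ascent definition of a ballot permutation).
\begin{verbatim}
from itertools import combinations, permutations

PATTERNS = [(2, 3, 1), (3, 1, 2), (3, 2, 1)]

def is_ballot(p):
    asc = desc = 0
    for a, b in zip(p, p[1:]):
        asc, desc = (asc + 1, desc) if a < b else (asc, desc + 1)
        if desc > asc:
            return False
    return True

def avoids_all(p):
    # True when p contains none of the three patterns
    return not any(
        all((p[idx[i]] < p[idx[j]]) == (t[i] < t[j])
            for i in range(3) for j in range(i + 1, 3))
        for t in PATTERNS for idx in combinations(range(len(p)), 3))

def a(n):
    return sum(1 for p in permutations(range(1, n + 1))
               if is_ballot(p) and avoids_all(p))

vals = [a(n) for n in range(1, 9)]
print(vals)
# the theorem predicts vals[i] == vals[i-1] + vals[i-2] for i >= 2
print(all(vals[i] == vals[i - 1] + vals[i - 2] for i in range(2, len(vals))))
\end{verbatim}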
\section{Conclusion and Open Problems}
In this paper, we have exhaustively enumerated ballot permutations avoiding two patterns of length $3$ and three patterns of length $3$. The results presented in this paper extend Lin, Wang, and Zhao's \cite{lin2022decomposition} enumeration of permutations avoiding a single pattern of length $3$ and prove Wilf-equivalences of pattern classes. In particular, bijections between ballot permutations avoiding certain patterns and left factors of Dyck paths were also shown. We conclude with the following open problems, as proposed by Lin, Wang, and Zhao \cite{lin2022decomposition}:
\begin{problem}
Can ballot permutations avoiding sets of patterns of length $4$ be enumerated?
\end{problem}
Although this paper has shown connections between ballot permutations avoiding patterns of length $3$ and their recurrence relations and formulas, there are no existing OEIS sequences \cite{oeis} that correspond with the number of ballot permutations avoiding one pattern with length $4$. Moreover, can ballot permutations avoiding consecutive patterns or vincular patterns be enumerated?
And finally, Lin, Wang, and Zhao \cite{lin2022decomposition} have suggested the following notion of a \emph{ballot multipermutation}.
\begin{definition}
For a tuple of natural numbers $\textbf{m} = (m_1, \dots, m_n)$, let $\mathfrak{S}_{\textbf{m}}$ be the set of multipermutations of $\{ 1^{m_1}, 2^{m_2}, \dots, n^{m_n} \}$. An element $\sigma \in \mathfrak{S}_{\textbf{m}}$ is a ballot multipermutation if for each $i$ such that $1 \leq i < \sum_{k=1}^n m_k$, the following inequality holds:
$$|\{ j \in [i] : \sigma(j) < \sigma(j+1) \}| \geq |\{ j \in [i] : \sigma(j) > \sigma(j+1) \}|.$$
\end{definition}
\begin{problem}
For fixed $\textbf{m}$, is it possible to enumerate ballot multipermutations in $\mathfrak{S}_{\textbf{m}}$? Further, is it possible to enumerate ballot multipermutations avoiding patterns in $S_n$?
\end{problem}
In addition, inspired by Bertrand's \cite{bertrand} ballot problem for $\lambda > 1$, we propose the following problem:
\begin{problem}
Can ballot permutations avoiding a single pattern of length $3$ or pairs of patterns of length $3$ with at least $\lambda$ times as many ascents as descents be enumerated?
\end{problem}
Furthermore, the enumeration of even and odd ballot permutations avoiding small patterns has not been studied and would be a further avenue for future research.
\end{document} |
\begin{document}
\title{\bf Indecomposable Decomposition and Couniserial Dimension}
\begin{abstract}
{\noindent Dimensions such as the Gelfand, Krull, and Goldie dimensions play an intrinsic role in
the study of the theory of rings and modules and provide useful technical
tools for studying their structure. In this paper we define a dimension, called the couniserial dimension,
that measures how close a ring or module is to being uniform. Despite their different objectives, it
turns out that the couniserial dimension and the Krull dimension share certain common properties:
for example, each module having such a dimension
contains a uniform submodule and has finite uniform dimension, among others.
Like all dimensions, this is an ordinal-valued
invariant. Every module of finite length has couniserial dimension, and
its value lies between the uniform dimension and the length of the module.
Modules with countable couniserial dimension are shown to possess
an indecomposable decomposition. In particular, a von Neumann regular ring with
countable couniserial dimension is semisimple artinian. If the maximal right quotient ring of a non-singular ring
$R$ has couniserial dimension as an $R$-module, then $R$ is a semiprime right Goldie ring. As one of the applications, it follows
that all
right $R$-modules have couniserial dimension if and only if $R$ is a
semisimple artinian ring. }
\end{abstract}
\noindent{\bf 0. Introduction}
In this article we introduce a notion of dimension of a module, to
be called couniserial dimension. It is an ordinal-valued invariant that is
in some sense a measure of how far a module is from being uniform.
In order to define couniserial dimension for modules over a ring $R$, we
first define, by transfinite induction, classes $\zeta _{\alpha }$ of $R$
-modules for all ordinals $\alpha \geq 1$. First we remark
that if a module $M$ is isomorphic to all its non-zero submodules, then $M$ must be uniform.
To start with, let $\zeta _{1}$
be the class of all uniform modules. Next, consider an ordinal $\alpha >1$; if $
\zeta _{\beta }$ has been defined for all ordinals $\beta <\alpha $, let $
\zeta _{\alpha }$ be the class of those $R$-modules $M$ such that for every
non-zero submodule $N$ of $M$, where $N\ncong M$, we have $N\in \bigcup_{\beta
<\alpha }\zeta _{\beta }$. If an $R$-module $M$ belongs to some $\zeta
_{\alpha }$, then the least such $\alpha $ is called the {\it couniserial dimension} of
$M$, denoted by c.u.dim$(M)$. For $M=0$, we define
c.u.dim$(M)=0$. If a non-zero module $M$ does not belong to any $\zeta_{\alpha }$, then we say that c.u.dim$(M)$ is not
defined, or that $M$ has no couniserial dimension. Equivalently, Proposition \ref{2.2}
shows that an $R$-module $M$ has couniserial dimension if and only if for each
descending chain of submodules of $M$, $M_{1}\geq M_{2}\geq ...$, there
exists $n\geq 1$ such that either $M_{n}$ is uniform or $M_{n}\cong M_{k}$ for all
$k\geq n$. It is clear by the definition that every submodule, and so every summand, of a
module with couniserial dimension has couniserial dimension. Also note that, for a positive
integer $n$, the couniserial dimension of $\Bbb{Z}^{n}$ is $n$. An example is given
to show that the direct sum of two modules each with couniserial
dimension (even copies of a module) need not have couniserial dimension. In Section 2, we prove some basic properties of the couniserial
dimension. In Section 3, we prove our main results.
It is shown in Theorem \ref{decomposation} that a module of countable
(finite or infinite) couniserial dimension can be decomposed into
indecomposable modules. Theorem \ref{dedekind} shows that a Dedekind finite module with couniserial dimension is a finite direct
sum of indecomposable modules. Theorem \ref{2.5} in Section 3 shows that for a right non-singular ring $R$
with maximal right quotient ring $Q$, if $Q_R$ has
couniserial dimension, then $R$ is a semiprime right Goldie ring which is a finite product of piecewise domains.
The reader may compare this with the well-known result that a prime ring with Krull dimension is a right Goldie ring
but need not be a piecewise domain. Furthermore, a prime right Goldie ring need not have couniserial dimension as is also
the case for
Krull dimension. \\
\indent In Section 4, we give some applications of couniserial dimension.
It is shown in Proposition \ref{Artinian} that a module $M$ with finite
length is semisimple if and only if for every submodule $N$ of $M$ the right
$R$-module $\oplus _{i=1}^{\infty }M/N$ has couniserial dimension. As a
consequence a commutative noetherian ring $R$ is semisimple if and only if
for every finite length module $M$ the module $\oplus _{i=1}^{\infty }M$ has
couniserial dimension. It is shown in Proposition \ref{anti is injective} that
if $P$ is an anti-coHopfian projective right $R$-module and $\oplus
_{i=1}^{\infty }E(P)$ has couniserial dimension, then $P$ is injective.
As another application we show that all right (left) $R$-modules have couniserial dimension if and only if $R$ is semisimple
artinian (see Theorem \ref{final}).
Several examples are included in the paper that demonstrate why
the conditions imposed are necessary and how, if at all, the results relate to corresponding results in the literature.
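To illustrate the definition in the smallest non-uniform case, one can check directly that c.u.dim$(\Bbb{Z}^{2})=2$: every non-zero submodule of the $\Bbb{Z}$-module $\Bbb{Z}^{2}$ is free of rank one or two, a rank one submodule is isomorphic to $\Bbb{Z}$ and hence uniform, and a rank two submodule is isomorphic to $\Bbb{Z}^{2}$ itself. Thus every non-zero submodule $N$ of $\Bbb{Z}^{2}$ with $N\ncong \Bbb{Z}^{2}$ lies in $\zeta _{1}$, so $\Bbb{Z}^{2}\in \zeta _{2}$; since $(\Bbb{Z}\oplus 0)\cap (0\oplus \Bbb{Z})=0$, the module $\Bbb{Z}^{2}$ is not uniform, and therefore c.u.dim$(\Bbb{Z}^{2})=2$.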
\section{\hspace{-6mm}. Definitions and Notation.}
Recall that a semisimple module $M$ is said to be {\it homogeneous} if $M$
is a direct sum of pairwise isomorphic simple submodules.
A module $M$ has
{\it finite uniform dimension} (or {\it finite Goldie rank}) if $M$ contains no infinite direct
sum of non-zero submodules, or equivalently, there exist independent
uniform submodules $U_1, ... ,U_n$ in $M$ such that
$\oplus _{i = 1}^n U_i$ is an essential
submodule of $M$. Note that $n$ is uniquely determined by $M$. In this case, it is written u.dim$(M) = n$.\\
\indent For any module $M$, we define Z$(M) = \lbrace x \in M : $ r.ann$(x)$ is an essential right ideal of $R \rbrace$ . It can be
easily checked that Z$(M)$ is a submodule of $M$. If Z$(M) = 0$, then $ M$
is called a {\it non-singular} module. In particular, if we take $M = R_R$, then $R$ is called right non-singular
if Z$(R_R) = 0$.\\
\indent A ring $R$ is called a {\it right Goldie} ring if it satisfies the following two conditions:
(i) $R$ has ascending chain condition on right annihilator ideals and, (ii) u.dim$(R_R)$ is finite. \\
\indent Recall that a ring $R$ is a {\it right V-ring} if all simple right $R$-modules are injective. A ring $R$ is called {\it fully right idempotent}
if $I = I^2$, for every right ideal $I$. We
recall that a right V-ring is fully right idempotent (see \cite [Corollary 2.2] {7}) and a prime fully right idempotent ring is right
non-singular
(see \cite[Lemma 4.3]{2}). So a prime right V-ring is right non-singular.
Recall that a module
$M$ is called {\it $\Sigma$-injective} if every direct sum of copies of $M$ is injective.
A ring $R$ is called {\it right $\Sigma$-V-ring} if each simple right module is $\Sigma$-injective. \\
\indent In this paper, for a ring $R$, $Q = Q _{max} (R)$ stands for the maximal right quotient ring of $R$.
It is well known that if $R$ is right non-singular, then the injective hull of $R_R$, $E(R_R)$, is a ring and is equal to
the maximal right quotient ring of $R$, \cite[Corollary 2.31]{Gooderlnonsingular}.\\
\indent A module $M$ is called {\it Hopfian} if $M$ is not isomorphic to any of its proper factor modules (equivalently, every onto
endomorphism of
$M$ is 1-1).
{\it Anti-Hopfian} modules were introduced by Hirano and Mogami \cite{Hirano}. Such modules are isomorphic to all of their non-zero
factor modules. A module $M$ is called uniserial if its lattice of submodules is linearly ordered. Anti-Hopfian modules are
uniserial artinian. \\
\indent Recall that a module $M$ is called {\it coHopfian} if it is not isomorphic to a proper submodule (equivalently, every 1-1 endomorphism of $M$ is onto).
Varadarjan \cite{varadarjan} dualized the concept of anti-Hopfian
module and called it an anti-coHopfian module. With a slight modification, we call a non-zero module
{\it anti-coHopfian} if it is isomorphic to all of its non-zero submodules. A non-zero
module $M$ is called {\it uniform} if the intersection of any two non-zero
submodules is non-zero. We will see that an anti-coHopfian module is noetherian and
uniform.\\
\indent An $R$-module $M$ has the cancellation property if for all $R$-modules $N$ and $T$, $M \oplus N \cong M\oplus T $ implies
$N\cong T$.
Every module with a semilocal endomorphism ring has the cancellation property \cite{crash}. Since the endomorphism ring of a simple
module is a division ring,
every simple module has the cancellation property. \\
\indent Throughout this paper, let $R$ denote an arbitrary ring with
identity and all modules are assumed to be unitary right modules, unless otherwise stated. If $N$ is a submodule
(resp. proper submodule) of $M$ we write $N\leq M$ (resp. $N<M$). Also, for
a module $M$, $\oplus _{i=1}^{\infty }M$ stands for countably infinite
direct sum of copies of $M$. If $N$ is a submodule of $M$
and $k > 1$, $\oplus _{i = k}^{\infty} N = \oplus _{i = 1}^{\infty} N_{i}$ is a submodule of
$\oplus _{i = 1}^{\infty} M$
with
$N_1 = N_2 = ... = N_{k - 1} = 0$ and for $i \geq k$ $N_i = N$.
\section{\hspace{-6mm}. Basic and Preliminary Results.}
As defined in the introduction, couniserial dimension is an ordinal valued number. The reader may refer to \cite {stoll} regarding
ordinal
numbers. We begin this section with a lemma and a remark on the definition of couniserial dimension.
\begin{lemma} \label{anti-coHopfian}
An anti-coHopfian module is uniform noetherian.
\end{lemma}
\begin{proof}
Since $M$ is isomorphic to each of its non-zero cyclic submodules, $M$ is cyclic and every non-zero submodule of $M$ is cyclic, and so $M$ is noetherian.
Thus $M$ has a uniform submodule, say $U$. Since $U \cong M$, $M$ is uniform.$~\square$
\end{proof}
\begin{remark} \label{1.3}
{\rm We make the convention that a statement ``c.u.dim$(M) = \alpha$'' will mean that the couniserial dimension of $M$ exists
and equals $\alpha$. By the definition of couniserial dimension,
if $M$ has couniserial dimension and $N$ is a submodule of $M$,
then $N$ has couniserial dimension and c.u.dim$(N) \leq $ c.u.dim$(M)$.
Moreover, if $M$ is not uniform and c.u.dim$(M) = $ c.u.dim$(N)$, where $N$ is a submodule of $M$,
then $M \cong N$.
On the other hand,
since every set of ordinal numbers has supremum, it follows immediately from the definition that $M$ has
couniserial dimension if and only if for all submodules $N$ of $M$
with $N \ncong M$ , c.u.dim$(N)$ is defined. In the latter case,
if $\alpha = $ sup$\lbrace$ c.u.dim$(N) ~ \vert \ N \leq M, N \ncong M \rbrace $, then c.u.dim$(M) \leq \alpha + 1$. }
\end{remark}
The next proposition provides a working definition for a module $M$ that has couniserial dimension.
\begin{proposition}\label{2.2}
An $R$-module $M$ has couniserial dimension if and only if for every descending chain of submodules
$ M_1 \geq M_2 \geq ... $, there exists $ n \geq 1$ such that
$M_n$ is uniform or $ M_n \cong M_k$ for all $k \geq n $.
\end{proposition}
\begin{proof}
{\rm ($\Rightarrow$) Let
$ M_1 \geq M_2 \geq ... $ be a descending chain of submodules of $M$.
Put $\gamma =$ inf $\lbrace$c.u.dim$(M_n) ~ \vert ~ n\geq 1\rbrace $. So $\gamma = $ c.u.dim $(M_n)$ for some $n \geq 1$.
If $M_n$ is not uniform, then for each $k \geq n$ we have $\gamma \leq$ c.u.dim$(M_k) \leq$ c.u.dim$(M_n) = \gamma$, so by Remark \ref{1.3}, $M_n \cong M_k$. \\
($ \Leftarrow $) If $M$ does not have couniserial dimension,
then $M$ is not uniform and so there exists a submodule $M_1$ of $M$ such that $M_1 \ncong M$ and $M_1$ does not
have couniserial dimension, by the above remark. So
there exists a submodule $M_2$ of $M_1$ such that $M_2 \ncong M_1$ and $M_2$ does not have couniserial dimension.
Continuing in this manner, we obtain a descending chain of submodules $ M_1 \geq M_2 \geq ... $, such that
for every $i \geq 1$, $M_i$ does not have couniserial dimension and $M_i \ncong M_{i+1}$, a contradiction. This completes the
proof. $~\square$ }
\end{proof}
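For instance, in the $\Bbb{Z}$-module $\Bbb{Z}\oplus \Bbb{Z}$ the descending chain $2\Bbb{Z}\oplus \Bbb{Z}\geq 4\Bbb{Z}\oplus \Bbb{Z}\geq 8\Bbb{Z}\oplus \Bbb{Z}\geq ...$ consists of non-uniform submodules, but every term is isomorphic to $\Bbb{Z}\oplus \Bbb{Z}$ itself, so the criterion of Proposition \ref{2.2} is satisfied with $n=1$; this is consistent with the fact, noted in the introduction, that c.u.dim$(\Bbb{Z}^{2})=2$.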
As a consequence, we have the following corollary.
\begin{corollary}\label{2.3}
Every artinian module has couniserial dimension.
\end{corollary}
\begin{lemma} \label{less}
If $M$ is an $R$-module and {\rm c.u.dim}$(M) = \alpha$, then for any $ 0 \leq \beta \leq \alpha$,
there exists a submodule $N$ of $M$ such that {\rm c.u.dim}$(N) = \beta$.
\end{lemma}
\begin{proof}
{\rm The proof is by transfinite induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear,
and for $\beta = \alpha$ one may take $N = M$. Let
$\alpha > 1$ and $0 \leq \beta < \alpha$;
then, using Remark \ref{1.3}, there exists a submodule $K$ of $M$ such that $K \ncong M$ and
$\beta \leq$ c.u.dim$(K)$. Now since $\beta \leq$ c.u.dim$(K) < \alpha$, by the induction hypothesis,
there exists a submodule $N$ of $K$ such that c.u.dim$(N) = \beta$. $~\square$ }
\end{proof}
As a consequence we have the following.
\begin{lemma} \label{uniform submodule}
Every module with couniserial dimension has a uniform submodule.
\end{lemma}
In the next lemma
we observe that every module of finite couniserial dimension has finite uniform dimension.
\begin{lemma} \label{non}
Let $M$ be an $R$-module of finite couniserial dimension. Then $M$ has finite uniform dimension and {\rm u.dim}$(M) \leq$ {\rm c.u.dim}$(M)$.
\end{lemma}
\begin{proof}
The proof is by induction on c.u.dim$(M) = n$. The case $n = 1$ is clear. Let $n > 1 $ and $N$ be a submodule of $M$
such that c.u.dim$(N) = n - 1 $. Thus by the inductive hypothesis, $N$ has finite uniform dimension. Put $m = $ u.dim$(N)$.
If $N $ is not essential in $M$,
then there exists a uniform submodule $U$ of $M$ such that $N \cap U = 0 $. Thus $N \oplus U$ is a submodule of $M$ of uniform
dimension $m + 1$. Then $ (N \oplus U) \ncong N$ and so $ n - 1 < $ c.u.dim$(N \oplus U) \leq n$.
Thus $(N \oplus U) \cong M$, by Remark \ref{1.3}. This proves the lemma. $~\square$
\end{proof}
\begin{example}
{\rm There exist modules of infinite couniserial dimension but of finite uniform dimension.
Take $M = \Bbb{Z}_{{p}^{\infty}} \oplus \Bbb{Z}_{{p}^{\infty}}$. Then $M$ is an artinian $\Bbb{Z}$-module
of infinite couniserial dimension but of finite uniform dimension $2$.}
\end{example}
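Indeed, each summand $\Bbb{Z}_{{p}^{\infty}}$ is uniserial, hence uniform, so u.dim$(M)=2$, while Corollary \ref{2.3} guarantees that the artinian module $M$ has couniserial dimension.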
In the following we consider equality in the above lemma in a special case.
\begin{lemma} \label{non1}
Let $M$ be an injective non-uniform $R$-module of finite couniserial dimension. Then
{\rm c.u.dim}$(M) = $ {\rm u.dim}$(M)$
if and only if $M$ is a finitely generated semisimple module.
\end{lemma}
\begin{proof}{ $(\Leftarrow) $ is clear.\\
$(\Rightarrow)$. Let c.u.dim$(M) = $ u.dim$(M) = m > 1$.
Then $M = E_1 \oplus ... \oplus E_m$, where $E_i$ are uniform injective modules.
If $E_1$ is not simple then there exists a non-injective
submodule
$K$ of $E_1$. Thus $K \oplus E_2 \oplus ... \oplus E_m$ is not isomorphic to $M$. But clearly
c.u.dim$(K \oplus E_2 \oplus ... \oplus E_m) \geq m$, a
contradiction. This completes the proof.
$~\square$}
\end{proof}
Note that the condition of being injective is necessary in the above lemma.
\begin{example}
{\rm We can see easily that for $M = \Bbb{Z} \oplus \Bbb{Z}$,
c.u.dim$(M) = $ u.dim$(M)$ $ = 2$ but $M$ is not semisimple. Also, the next lemma shows that there exists a module of finite uniform dimension without couniserial dimension.}
\end{example}
The following lemma shows that direct sum of two uniform modules may not have couniserial dimension.
\begin{lemma} \label{example}
Let $D$ be a domain and $S $ be a simple $D$-module. If $S \oplus D$ as a $D$-module has couniserial dimension, then $D$ is a principal
right ideal
domain.
\end{lemma}
\begin{proof}
Suppose, to the contrary, that $D$ has a non-cyclic right ideal $I$. Choose a non-zero element $x \in I$. Set $J_1 = xD$, which is isomorphic to $D$.
Thus
there exists a right ideal $J_2$ of $D$ such that $J_2 \cong I$ and $J_2 \leq J_1$. Now let $J_3$ be
a non-zero cyclic right ideal contained in $J_2$, and by continuing in this manner we obtain a descending chain
$J_1 \geq J_2 \geq ...$ of right ideals of $D$
where for each odd integer $i$, $J_i$ is cyclic and for each even integer $i$, $J_i$ is not cyclic. Now consider
the descending chain $S \oplus J_1 \geq S \oplus J_2 \geq ...$ of submodules of $S \oplus D$. Since $S$ has cancellation property
and for each $i$, $S \oplus J_i$ is not uniform, by using Proposition \ref{2.2}, we see that, for some $n$,
$J_n \cong J_{n + 1}$, a contradiction. Thus $D$ is a principal right ideal domain.
\end{proof}
\begin{remark}
{\rm (1)
The simple module $S$ in the statement of Lemma \ref{example}
can be replaced by any cancellable module. Indeed, it follows from
Theorem \ref{2.5}, proved later, that if the maximal right quotient ring $Q$ of a domain
$D$ has couniserial dimension as a $D$-module,
then $Q_D$ has the cancellation property; so if $Q \oplus D$ as a $D$-module has couniserial dimension,
then $D$ must be a principal right ideal domain. \\
(2) Also, since a Dedekind domain has the cancellation property, a similar proof shows
that if $D$ is a Dedekind domain which is
not a principal ideal domain, then $D \oplus D$ does not have couniserial dimension. This example shows that even
direct sum of a uniform module with itself may not have couniserial dimension.
}
\end{remark}
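For a concrete instance of (2), one may take $D=\Bbb{Z}[\sqrt{-5}]$, a Dedekind domain which is not a principal ideal domain (the ideal $(2,1+\sqrt{-5})$ is not principal); by the above remark, $D\oplus D$ does not have couniserial dimension as a $D$-module.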
The definition of the addition of two ordinal numbers can be given inductively.
If $ \alpha $ and $ \beta $ are two ordinal numbers,
then $ \alpha + 0 = \alpha $, $ \alpha + (\beta + 1) = (\alpha + \beta) + 1 $, and if $ \gamma $ is a limit ordinal then
$ \alpha+\gamma $ is the limit of $ \alpha + \beta $ for all $ \beta < \gamma $ (see \cite{stoll}).
\begin{lemma} \label{ord}\rm (See \cite [Theorem 7.10]{stoll}). For ordinal numbers $\alpha $, $\beta $ and $\gamma$,
we have the following:\\
{\rm (1)} If $\alpha < \beta $, then $\gamma + \alpha < \gamma + \beta $. \\
{\rm (2)} If $\alpha < \beta $, then $ \alpha + \gamma \leq \beta + \gamma$.
\end{lemma}
We call an $R$-module $M$ {\it fully coHopfian} if every submodule of $M$ is coHopfian. Note that
artinian modules are fully coHopfian.
If $I$ is the set of prime numbers, then $\oplus _{p \in I} {\Bbb {Z} _p}$ is an example of a fully coHopfian
$\Bbb{Z}$-module that is not artinian.
\begin{proposition}\label{b} Let $M = M_1 \oplus M_2$ be a fully coHopfian $R$-module with couniserial dimension.
Then c.u.dim$(M) \geq $ c.u.dim$(M_1) + $ c.u.dim$(M_2)$.
\end{proposition}
\begin{proof}
{\rm We may assume $M_1, M_2 \neq 0$. We use transfinite induction on c.u.dim$(M_2) = \alpha$.
Since $M_1 \ncong M$,
c.u.dim$(M)$ $\geq $ c.u.dim$(M_{1}) + 1$. So the case $\alpha = 1$ is clear. Thus, suppose $\alpha > 1$
and for every right $R$-module $L$ of couniserial dimension less than $\alpha$, c.u.dim$(M_1 \oplus L ) \geq $ c.u.dim$(M_{1}) + $
c.u.dim$(L)$.
If $\alpha$ is a successor ordinal, then there exists an ordinal number $\gamma$ such that
c.u.dim$(M_2) = \gamma + 1$. Using Lemma \ref{less}, there exists a non-zero submodule $K$ of $M_2$
such that c.u.dim$(K) = \gamma < \alpha $. So by induction hypothesis $${\rm c.u.dim}(M_1) +\gamma =
{\rm c.u.dim}(M_1) + {\rm c.u.dim}(K) \leq {\rm c.u.dim}(M_1 \oplus K).$$
Using our assumption and Remark \ref{1.3}, we have c.u.dim$(M_1 \oplus K) < $ c.u.dim$(M)$ and hence
c.u.dim$(M_{1}) + $ c.u.dim$(M_{2}) \leq$ c.u.dim$(M)$.\\
\indent If $\alpha$
is a limit ordinal and $ 1 \leq \beta < \alpha$,
then by Remark \ref{1.3}, there exists a non-zero submodule $K$ of $M_2$ such that
$\beta \leq $ c.u.dim$(K)$.
Then by the induction hypothesis ${\rm c.u.dim}(M_1) + \beta \leq {\rm c.u.dim}(M_1) + {\rm c.u.dim}(K) \leq
{\rm c.u.dim}(M_1 \oplus K) < {\rm c.u.dim}(M).$
Therefore c.u.dim$(M_1) + \alpha = $ sup$\lbrace$ c.u.dim$(M_{1}) + \beta \mid \beta < \alpha
\rbrace \leq $ c.u.dim$(M)$. $~\square$ }
\end{proof}
The fully coHopfian condition in Proposition \ref{b} is necessary.
\begin{example}
{\rm For the $\Bbb{Z}$-modules $M = \oplus_{i = 1}^{\infty} \Bbb{Z}_{p}$,
and $ L = \Bbb{Z}_{p}$, we have
$M \cong M \oplus L$. One can see c.u.dim$(M) = \omega$ and so
c.u.dim$(M) \ngeq $ c.u.dim$(M) + $ c.u.dim$(L)$.
Also, in general, we don't have the equality
in Proposition \ref{b}. Consider the $\Bbb{Z}$-module
$ M = \Bbb{Z}_2 \oplus \Bbb{Z}_4$. Then, $M$ is fully coHopfian and $3 = $ c.u.dim$(M) > $
c.u.dim$(\Bbb{Z} _2) + $ c.u.dim$(\Bbb{Z} _4 )$.}
\end{example}
Here we prove another result on fully coHopfian module:
\begin{proposition} \label{simple2}
Let $M$ be an $R$-module and $N$ be a cancellable module (for example a simple module)
such that $N \oplus M$ has couniserial dimension. If $M$ is fully coHopfian, then $M$ is artinian.
\end{proposition}
\begin{proof}
Let $M$ be fully coHopfian and let $M_1 \geq M_2 \geq ... $ be a descending chain of submodules of $M$. Then
$N \oplus M_1 \geq N \oplus M_2 \geq ... $ is a descending chain of submodules of $N \oplus M$ and so for some $n$,
$M_i \cong M_n$ for each $i \geq n$. Now since $M$ is fully coHopfian, we have $M_i = M_n$ for each $i \geq n$.
$~\square$
\end{proof}
Let us recall the definition of uniserial dimension \cite{j.algebra}.
\begin{definition} \label{uniserial dimension}
{\rm In order to define uniserial dimension for modules over a ring $R$, we
first define, by transfinite induction, classes $ \zeta_\alpha $ of $R$-modules for all ordinals
$ \alpha \geq 1 $. To start with, let $ \zeta_1 $ be the class of non-zero uniserial modules.
Next, consider an ordinal $ \alpha > 1 $; if $ \zeta_\beta $ has been defined for all ordinals $ \beta < \alpha $,
let $ \zeta_\alpha $ be the class of those $R$-modules $M$ such that, for every submodule $N < M$, where $M/N \ncong M$, we have
$M/N \in \bigcup_{\beta < \alpha} \zeta_\beta$. If an $R$-module $M$ belongs to some $\zeta_\alpha$,
then the least such $\alpha$ is the
{\it uniserial dimension of} $M$, denoted u.s.dim$(M)$. For $M = 0$, we define u.s.dim$(M) = 0$.
If $M$ is non-zero and $M$ does not
belong to any $\zeta_\alpha$, then we say that ``u.s.dim$(M)$ is not defined,'' or that `` $M$ has no uniserial dimension.''}
\end {definition}
\begin{remark}\label{semisimple eq}
{\rm Note that, in general,
there is no relation between the existence of the uniserial dimension and the
existence of the couniserial dimension of a module. For example, the polynomial ring in infinite number of commutative
indeterminates
over a field $k$, $R = k[x_1, x_2, ...]$ has this property that
c.u.dim$(R_R) = 1$ but $R_R$ does not have uniserial dimension (see
\cite [ Remark 2.3] {j.a.its}).
It follows by the definition that a semisimple module $M$ has uniserial dimension if and only if $M$ has couniserial dimension, in
which case
u.s.dim$(M) = \alpha$ if and only if c.u.dim$(M) = \alpha$.
Furthermore a semisimple module $M$ has couniserial dimension if and only if $M$ is a finite direct sum of
homogeneous semisimple modules ( see \cite [Proposition 1.18]{j.algebra}).}
\end{remark}
Using the above remark we have the following
interesting results.
\begin{corollary} \label{finite simple}
All right semisimple modules over a ring $R$ have couniserial dimension if and only if
there exist only finitely many non-isomorphic simple right $R$-modules.
\end{corollary}
\begin{lemma} \label{semisimple1}
Suppose that $M$ is simple. Then $\oplus _{i = 1}^{\infty} E(M)$ has couniserial dimension if and only if
$M$ is injective.
\end{lemma}
\begin{proof}
($\Leftarrow$) It is clear by the statement in Remark \ref{semisimple eq}. \\
($\Rightarrow$) Consider the
descending chain
$$ M \oplus ( \oplus _{ i = 2}^{\infty} E(M)) \geq M^{(2)} \oplus ( \oplus _{ i = 3}^{\infty} E(M)) \geq ... $$
of submodules of
$\oplus _{i = 1}^{\infty} E(M)$ where $M^{(n)} = \oplus_{i = 1}^{\infty} M_i $ with $M_1 = ... = M_n = M$
and for each $i > n$, $M _i = 0$.
Then, by Proposition \ref{2.2},
there exists $n\geq 1$ such that
$$M^{(n)} \oplus ( \oplus _{i = n + 1}^{\infty}E(M))\cong M^{(n + 1)}\oplus(\oplus _{i = n + 2}^{\infty} E(M))$$
and so
$\oplus _{i = n + 1}^{\infty} E(M) \cong M \oplus (\oplus _{i = n + 2}^{\infty} E(M))$, because $M$ is cancellable. Since $M$
is cyclic, there exist a right module $L$ and some $k$ such that $E(M) ^{k} \cong M \oplus L $. This shows $M$ is injective. $~\square$
\end{proof}
\section{\hspace{-6mm}. Main Results.}
In this section we use our basic results to prove the main results.
\begin{proposition} \label{simple1}
Let $M_R$ be an injective module and $N_R$ be a cancellable module over a commutative ring $R$
(for example a simple module)
such that $N \oplus M$ has couniserial dimension. Then $M$ is $\Sigma$-injective.
\end{proposition}
\begin{proof}
According to \cite [Theorem 6.17]{cyclic}, it is enough to show that $R$ satisfies the ascending chain condition on ideals
of
$R$ that are annihilators of subsets of $M$. Let $I_1 \leq I_2 \leq ... $ be a chain of such annihilator ideals.
Then for each $i$,
$M_i = $ ann$ _{M}(I_i) $
is a submodule of $M$ and so we have descending chain $N \oplus M_1 \geq N \oplus M_2 \geq ... $
of submodules of $N \oplus M$. Then there exists a positive integer $n$ such that $M_n \cong M_i$ for all $i \geq n$. Thus
ann$(M_i) = $ ann$(M_n) $. Therefore for each $i \geq n$, $ I_i = I_n$. $~\square$
\end{proof}
\begin{remark}
{\rm One can see that the above result provides another proof of the fact that commutative
V-rings (i.e., von Neumann regular rings) are
$\Sigma$-V-rings. For an example of a right V-ring that is not a $\Sigma$-V-ring, the reader may refer to
\cite [ Example, page 60]{cyclic}.}
\end{remark}
The next result shows that if a module
has countable couniserial dimension then it can be decomposed into indecomposable modules.
\begin{theorem}\label{decomposation}
For an $R$-module $M$, if {\rm c.u.dim}$(M) \leq \omega $, then $M$ has indecomposable decomposition.
\end{theorem}
\begin{proof}
The proof is by induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear. If $\alpha > 1$ and
$M$ is not indecomposable, then $M = N_1 \oplus N_2$, where $N_1$ and $N_2$ are non-zero submodules of $M$.
If c.u.dim$(N_i) < $ c.u.dim$(M)$, $ i = 1, 2$, then by induction hypothesis $M$ has indecomposable decomposition.
If not, for definiteness let
c.u.dim$(N_1) = $ c.u.dim$(M)$. Then $M \cong N_1$, by Remark \ref{1.3}. Thus
it contains an infinite direct sum of uniform modules, say
$\oplus_{i = 1}^{\infty} K_i $. Clearly, c.u.dim$(\oplus_{i = 1}^{\infty} K_i ) \geq \omega$.
Thus we have $M \cong \oplus_{i = 1}^{\infty} K_i $.
$~\square$
\end{proof}
\begin{remark}
{\rm We do not know whether the above theorem holds for a module of arbitrary couniserial dimension.
For countably infinite couniserial dimension one can show, under some conditions, that the module can be represented as a direct sum
of
uniform modules. }
\end{remark}
Recall that a module $M$ is called {\it Dedekind finite} if $M$ is not isomorphic to
any proper direct summand of itself. Clearly, every direct summand of a Dedekind finite module is a Dedekind finite module.
Obviously, a Hopfian module is Dedekind finite.
Since all finitely generated modules over a commutative ring are Hopfian (see \cite{good}),
they provide examples of Dedekind finite modules.
\begin{theorem} \label{dedekind}
If $M$ is a Dedekind finite module with couniserial dimension, then $M$ has a finite indecomposable decomposition.
\end{theorem}
\begin{proof}
The proof is by induction on c.u.dim$(M) = \alpha$. The case $\alpha = 1$ is clear. Let $\alpha > 1$ and suppose that every
Dedekind finite module with couniserial dimension less than $\alpha$ can be decomposed into finitely many
indecomposable modules. If $M$ is not indecomposable,
then $M = M_1 \oplus M_2$ with $M_1, M_2$ non-zero. Since $M_i \ncong M$, using Remark \ref{1.3}, c.u.dim$(M_i) < $ c.u.dim$(M)$ and so,
by the induction hypothesis, each $M_i$ has a finite indecomposable decomposition. This completes the proof. $~\square$
\end{proof}
A ring $R$ is called a {\it von Neumann regular ring} if for each $x \in R$, there exists
$y \in R$ such that $xyx = x$, equivalently, every principal right ideal is a direct summand. $R$ is
{\it unit regular ring} if for each $x \in R,$ there exists a unit element $u \in R$
such that $x = xux.$ As a consequence of the above theorem we have the following corollary.
\begin{corollary}
Every Dedekind finite von Neumann regular ring (in particular, every unit regular ring)
with couniserial dimension is semisimple artinian.
\end{corollary}
A ring $R$ is called a PWD {\it (piecewise domain)} if it
possesses a complete set $ \lbrace e_{i} \vert 0 \leq i \leq n \rbrace$ of orthogonal idempotents such that $xy = 0$
implies $x = 0 $ or $y = 0$ whenever $x \in e_i Re_k$ and $y \in e_k Re_j $. Note that the
definition is left-right symmetric and all $e_i R e_i$ are domains; see \cite{lance small}.
An element $x$ of $R$ is called regular if its right and left annihilators are zero.
\begin{proposition} \label{semiprime right Goldie}
Let $R$ be a semiprime right Goldie ring with couniserial dimension. If u.dim$(R_R) = n$, then $R$ has a decomposition
into
$n$ uniform modules. In particular, it is a piecewise domain.
\end{proposition}
\begin{proof}
We can assume that $n > 1$.
Let $I_1 = U_1 \oplus ... \oplus U_n$ be an essential right ideal of $R$. Then, by \cite [Proposition 6.13]{goodearl},
$I_1$ contains a regular element $x$ and thus $J_1 = xR$ is a right ideal of $R$ which is $R$-isomorphic to $ R$.
So u.dim$(J_1) = n$ and it contains an essential right ideal $I_2$ of $R$ such that it is a
direct sum of $n$ uniform right ideals.
By continuing in this manner we obtain a descending chain $ I_1 \geq J_1 \geq I_2 \geq ...$ of right ideals of
$R$ such that $I_i$ are direct sum of
$n$ uniform right ideals and the $J_i$ are isomorphic to $R$. Since $R$ has couniserial dimension, $I_k \cong R$ for some $k$. The last
statement follows from \cite [Pages 2-3]{lance small}. This
completes the proof. $~\square$
\end{proof}
\begin{remark} \label{example of prime right Goldie}
{\rm
There exists an example of simple noetherian ring of uniform dimension $2$ which has no
non-trivial idempotents (c.f. \cite [Example 7.16, page 441 ] {robson}).
So by the above proposition this provides an example of prime right Goldie ring without couniserial dimension.
}
\end{remark}
\begin{lemma}\label{Q-map}
Let $R$ be a right non-singular ring with maximal right
quotient ring $Q$, and let $M$ be a $Q$-module. If $M$ is a non-singular $R$-module
such that $M_R$ has couniserial dimension, then
$M_Q$ has couniserial dimension.
\end{lemma}
\begin{proof}
{\rm Let $M \geq M_1 \geq M_2 \geq ...$ be a descending chain of $Q$-submodules
of $M$. So it is a descending chain of
$R$-submodules of $M$ and thus, for some $n$, $M_n$ is uniform $R$-module or
$M_n \cong M_i$ as $R$-modules for all $i \geq n$. If $M_n $ is uniform $R$-module, then it is also
uniform $Q$-module. So let
$M_n \cong M_i$ as $R$-modules and let
$\varphi_{i}$ be this isomorphism.
If $q \in Q$ and
$t \in M_n$
there exists an essential right ideal $E$ of $R$ such that $qE \leq R$.
So $\varphi_{i} (tqE) = \varphi_{i} (tq) E $ and also $ \varphi_{i} (tqE) = \varphi_{i} (t) qE$.
Then $\varphi_{i} (tq) E = \varphi_{i} (t) qE$.
Since $M$ is
non-singular and $E$ is essential, $\varphi_{i} (tq) = \varphi_{i} (t)q$. Thus $\varphi_{i}$ is a $Q$-isomorphism.
This completes the proof. $~\square$}
\end{proof}
A ring $R$ is semiprime (prime) right Goldie ring if and only if its maximal right quotient ring is
semisimple (simple) artinian ring, \cite [Theorems 3.35 and 3.36]{Gooderlnonsingular}.
Semiprime right Goldie rings are non-singular.
A right non-singular ring $R$ is semiprime right Goldie ring if and only if u.dim$(R_R)$ is
finite, \cite [Theorem 3.17]{Gooderlnonsingular}. Recall that a {\it right full linear ring} is the ring of all linear transformations
(written on the left) of a right vector space over a division ring. If the dimension of the vector space is finite, a right full linear ring is
exactly a simple artinian ring.
\begin{theorem}\label{2.5}
Let $R$ be a right non-singular ring with
maximal right quotient ring, $Q$. If $Q$ as an $R$-module has couniserial dimension,
then $R$ is a semiprime right Goldie ring which is a finite product of prime Goldie rings, each of which is
a piecewise domain.
\end{theorem}
\begin{proof}
It is enough to show that $R$ has finite uniform dimension.
Since $Q_R$ has couniserial dimension, $R_R$ has couniserial dimension and so every right ideal of $R$ has
couniserial dimension. Thus Lemma \ref{uniform submodule} implies that every right ideal contains a uniform submodule.
Now by
\cite [Theorem 3.29] {goodearl} the maximal right quotient ring of
$R$ is a product of right full linear rings, say $Q = \prod _{ i \in I} Q_{i}$, where $Q_{i}$ are right full linear rings. Note that
since $R_R$ is right non-singular, $Q_R$ is also non-singular and so, using Lemma \ref{Q-map}, $Q_Q$ has
couniserial dimension.
First we claim that each $Q_{i}$ is the endomorphism ring of a finite dimensional vector space. Assume the contrary. Then
$Q_{j}$ is the endomorphism ring of an infinite dimensional vector space, for some $j$. Thus $Q_{j} \cong Q_{j} \times Q_{j} $ and so,
if $\iota: Q_{j} \longrightarrow Q$ is the canonical embedding, then $\iota ( Q_{j})$ is a right ideal of $Q$ and there exists a
$Q$-isomorphism
$Q \cong \iota ( Q_{j}) \times Q$. Then
there exist right ideals $T_1$ and $T$ of $Q$ such that $Q = T_1 \oplus T$, $T_1$ and $Q$ are isomorphic as
$Q$-modules and $T \cong \iota ( Q_{j})$ as $Q$-module. Because
$Q_{j}$ is the endomorphism ring of an infinite dimensional vector space, it has a right ideal which is not principal, for example
its socle. So $\iota ( Q_{j})$ and thus $T$ contains a non-cyclic right ideal of $Q$ and thus since $T \cong Q/T_1$,
there exists a non-cyclic right ideal of $Q$, say $K_1$ such that $ Q \geq K_1 \geq T_1 $.
Now $T_1 $ is isomorphic to $Q$. So
we can have a descending chain
$Q > K_1 > T_1 > K_2 > T_2 > ... $ of right ideals of $Q$ such that $T_i$ are cyclic but $K_i$ are not
cyclic. This is a contradiction. So all $Q_i$ are
endomorphism rings of finite dimensional vector spaces. Now to show that $R$ is a semiprime right Goldie ring it is enough to show
that
the index set $I$ is finite.
If $I$ is infinite, there exist infinite subsets $I_1 $ and $I_2$ of $I$ such that $I = I_1 \cup I_2$
and $I_1\cap I_2 = \emptyset$. Let $T_1 = \prod _{i \in I} N_i$ such that $N_i = Q_i$ for all $i \in I_1$ and
$N_i = 0$ for all $i \in I_2$. Similarly let
$ T = \prod _ {i \in I} M_i$ such that $M_i = Q_i$ for all $i \in I_2$ and
$M_i = 0$ for all $i \in I_1$. Then $T_1$ and $T$ are right ideals of $Q$ and $Q = T_1 \oplus T$.
$T$ contains a right ideal of $Q$ which is not cyclic, for example $\oplus_{i \in I} M_i$. Since $T \cong Q/ T_1$, there exists a
non-cyclic right ideal $K_1$ of $Q$ such that
$Q \geq K_1 \geq T_1$. Note that $T_1$ is a cyclic $Q$-module and because $I_1$ is infinite, the structure of $T_1$ is
similar to that of $Q$. We can continue in this manner and
find a descending chain of right ideals of $Q$ such that $K_i$ are non cyclic $Q$-modules and $T_i$ are cyclic $Q$ modules,
which is a contradiction.
Therefore $I$ is finite and $R$ must have finite uniform dimension. This shows $R$ is semiprime right Goldie ring and so
Proposition \ref{semiprime right Goldie}
and \cite [ Corollary 3]{lance small} imply that it is a direct sum of prime right Goldie rings. $~\square$
\end{proof}
The reader may ask what happens if $R_R$, rather than $Q_R$, has couniserial dimension.
We point out that, unlike a semiprime ring with right Krull dimension, a semiprime ring
with couniserial dimension need not be a right Goldie ring; see Dubrovin \cite{Uniserial with nil},
which contains an example of a primitive uniserial ring with non-zero nilpotent elements. \\
Next we show that the converse of the above theorem is not true, in general. In fact we show that there exists a prime right Goldie ring $R$
such that c.u.dim$(R_R) = 2$ and $Q _R$ does not have couniserial dimension. We need the following lemma to give the
example.
\begin{lemma}\label{morita}
For an ordinal number $\alpha$, being of couniserial dimension $\alpha$ is a Morita invariant property for modules.
\end{lemma}
\begin{proof}
This is clear by the definition of couniserial dimension and \cite [ Proposition 21.7 ]{Anderson}. $~\square$
\end{proof}
\begin{example}
{\rm Here we give an example of a prime right Goldie ring $R$ with maximal right
quotient ring $Q$ such that $Q_R$ does not have couniserial dimension. Take $R = M_2 (\Bbb{Z})$, the $2 \times 2$ matrix ring over $\Bbb{Z}$.
Then $R$ is a prime right Goldie ring with maximal right quotient ring
$Q = M_2(\Bbb{Q})$. Note that under the standard Morita equivalence between the ring
$\Bbb{Z}$ and $R= M_2(\Bbb{Z})$, see \cite [Theorem 17.20 ]{Lam}, $R$ corresponds to
$\Bbb{Z} \oplus \Bbb{Z}$,
and so, using the above lemma, $R$ has couniserial dimension $2$.
If $\lbrace p_i \vert i \geq 1 \rbrace$ is the set of all prime numbers,
then $\Bbb{Q}/ \Bbb{Z} = \sum _{i = 1}^{\infty} K_i /\Bbb{Z}$,
where $K_i = \lbrace m/p_{i}^{n} \vert n \geq 0 $ and $m \in \Bbb{Z}\rbrace$. Then
take $Q_{n} = \sum _{i = n} ^{\infty} K_i$. Then $M_2(Q_1) \geq M_2({Q_2}) \geq ... $ is a descending chain
of $R$-submodules of $Q$ which are not uniform $R$-modules.
Assume that for some $n$, $M_2({Q_{n}}) \cong M_2({Q_{n + 1}})$ with
an $R$-isomorphism $\phi$. Let $\phi \left( \begin{array}{cc} 1& 0 \\ 0& 1 \end{array}\right) =
\left( \begin{array}{cc} m_1/t_1& m_2/t_2\\ m_3/t_3& m_4/t_4 \end{array}\right)$, where $m_i/t_i \in Q_{n + 1}$.
Suppose that $j \geq 1$ and
$\phi \left( \begin{array}{cc} 1/p_{n}^{j}& 0 \\ 0& 1/p_{n}^{j} \end{array}\right) =
\left( \begin{array}{cc} m_{1,j}/t_{1,j} & m_{2,j}/t_{2,j}\\ m_{3,j}/t_{3,j}& m_{4,j}/t_{4,j} \end{array}\right)$, where
$p_n$ does not divide any of the $t_{i,j}$ for $1 \leq i \leq 4$.
Then, since $\phi$ is additive, we can easily see that
$\left( \begin{array}{cc} m_{1,j} p_n^{j}/t_{1,j} & m_{2,j}p_n^{j}/t_{2,j}\\ m_{3,j}p_n^{j}/t_{3,j}& m_{4,j}p_n^{j}/t_{4,j} \end{array}\right) =
\left( \begin{array}{cc} m_1/t_1& m_2/t_2\\ m_3/t_3& m_4/t_4 \end{array}\right)$,
and this implies that $p_n^{j} \vert m_i$ for all $j \geq 1$ and $1 \leq i \leq 4$. Hence $m_i = 0$ for each $i$, so $\phi$ sends the identity matrix to zero, contradicting the injectivity of $\phi$.
So $Q _R$ does not have couniserial dimension.
}
\end{example}
\section{Some Applications}
A right $R$-module $M$ which has a composition series is called a
module of {\it finite length.} A right $R$-module $M$ is of finite length if and only if $M$ is both
artinian and noetherian. The length of a composition series of $M_R$ is said to be the
length of $M_R$ and is denoted by length$(M)$. Clearly, by Corollary \ref{2.3}, a module of finite length
has couniserial dimension. The next result shows a relation between couniserial dimension
of a finite length module $M$ and length$(M)$.
\begin{proposition}\label{semi1} Let $M$ be a right $R$-module of finite length. Then the following statements hold: \\
{\rm (1)} If $N$ is a submodule of $M$, then {\rm c.u.dim}$(M/N) \leq $ {\rm c.u.dim}$(M)$.\\
{\rm (2)} {\rm c.u.dim}$(M) \leq $ {\rm length}$(M)$.
\end{proposition}
\begin{proof}
{\rm (1) The proof is by induction on $n$, where length$(M) = n$. The case $n = 1$ is clear.
Now, let $ n > 1 $ and
assume that the assertion is true for all
modules with length less than $n$. If $N$ is a non-zero submodule of $M$, then the length$(M/N) < n$. Thus for every proper
submodule
$K/N$ of
$M/N$, by induction, c.u.dim$(K/N) \leq $ c.u.dim$(K) < $ c.u.dim$(M)$. Now, Remark \ref{1.3} implies that c.u.dim$(M/N) \leq $ c.u.dim$(M)$. \\
(2) The proof is by induction on length$(M) = n$. The case $n = 1$ is clear. Now if $n > 1$ and $K$ is
a proper submodule of $M$,
then, by the induction hypothesis, c.u.dim$(K) \leq $ length$(K) < $ length$(M)$. Thus by Remark \ref{1.3}, c.u.dim$(M) \leq $ length$(M)$. $~\square$}
\end{proof}
Recall that an $R$-module $M$ is called {\it co-semisimple} if every simple $R$-module is $M$-injective,
or equivalently, Rad$(M/N) = 0$ for every submodule $N\leq M$ (See \cite[Theorem 23.1]{wis}).
The next proposition gives a condition for when a module of finite length is semisimple. It may be of
interest to note that for the finite length $\Bbb{Z}$-module $\Bbb{Z} _4$, the module $\oplus _{i = 1}^{\infty}\Bbb{Z} _4$ does not possess
couniserial dimension.
\begin{proposition}\label{Artinian} Let $M$ be a non-zero right $R$-module of finite length.
Then $M$ is a semisimple $R$-module if and only if
for every submodule $N$ of $M$ the right $R$-module $\oplus _{i = 1}^{\infty }M/N$ has
couniserial dimension.
\end{proposition}
\begin{proof}
($\Rightarrow$) Cf. Remark \ref{semisimple eq}.\\
($\Leftarrow$)
For every submodule $N$ of $M$ the right $R$-module $\oplus _{i = 1}^{\infty }M/N$ has
couniserial dimension. Clearly, this also holds for any factor module of $M$.
We will prove the result by induction on length$(M) = n$. The case $n = 1$ is clear.
Now assume that $n > 1$ and the result is
true for all modules of length less than $n$.
Let $K $ be a non-zero submodule of $M$.
Since length$(M/K) < n$, by
the inductive hypothesis, $M/K$ is semisimple.
Therefore, for every non-zero submodule $K$ of $M$, Rad$(M/K) = 0$.
If Rad$(M) = 0$, then $M$ is co-semisimple.
Let $S$ be a simple submodule of $M$. Consider
the exact sequence $0 \longrightarrow S \longrightarrow M \longrightarrow M/S \longrightarrow 0$ which splits, because $M$ is co-semisimple.
Therefore, $M$ is semisimple. Next suppose that
Rad$(M) \neq 0$. Let $S$ be a simple submodule of $M$. Since, by the above,
Rad$(M/S) = 0$, we obtain Rad$(M) \leq S$. This implies Rad$(M) = S$ and so $M$ has only one simple submodule.
Thus Rad$(M) =$ soc$(M) = S$ is a simple module.
Suppose that $M$ is not semisimple.
Let $N$ be a maximal submodule of $M$. Then for every submodule $K \leq N < M$,
$\oplus _{i = 1}^{\infty }N/K $ is a submodule of $ \oplus _{i = 1}^{\infty }M/K$ and thus
$\oplus _{i = 1}^{\infty }N/K$ has couniserial
dimension. Since length$(N) < n$, we conclude that $N$ is semisimple.
Thus $N = $ soc$(M) = $ Rad$(M)$ is a simple module and so $M$ is of length $2$. \\
\indent Now consider the descending chain
$$ N \oplus (\oplus _{i = 2}^{\infty }M ) > N^{(2)} \oplus (\oplus _{i = 3}^{\infty }M ) > ... $$
of submodules of $\oplus _{i = 1}^{\infty }M $.
Using Proposition \ref{2.2}, there exists $k \geq 1$ such that
$N^{(k)} \oplus (\oplus _{i = k + 1}^{\infty }M) \cong N^{(k + 1)} \oplus (\oplus _{i = k + 2}^{\infty }M) $. Since
$ N^{(k + 1)}$ is finitely generated, there exists $m \geq 0$ and an $R$-module $T$, such that
$ N^{(k)} \oplus M^{m} \cong N^{(k + 1)} \oplus T$.
$N$ is simple and so it has the cancellation property; thus $M^{m} \cong N \oplus T$. This implies that
Rad$(T)$ is semisimple of length $m$ while length$($soc$(T)) = m-1$, a contradiction. $~\square$
\end{proof}
Recall that a ring $R$ is called {\it right bounded} if every essential right ideal contains a
two-sided ideal which is essential as a right ideal. A ring $R$ is called right {\it fully bounded}
if every prime factor ring is right bounded. A right noetherian right fully bounded ring
is commonly abbreviated as a right FBN ring. Clearly all commutative noetherian rings are examples of right FBN rings.
Finite matrix rings over commutative noetherian rings are a large class of right FBN rings which are not commutative.
In \cite [Theorem 2.11] {Hiranocom}, Hirano et al.\ showed that a right FBN ring $R$ is semisimple if and only if every right
module of
finite length is semisimple. As a consequence of the above proposition we have:
\begin{corollary}
A right FBN ring $R$ is semisimple
if and only if for every finite length module $M$, the module $\oplus _{i = 1}^{\infty }M$
has couniserial dimension.
\end{corollary}
\begin{proposition} \label{anti is injective}
Let $P$ be an anti-coHopfian projective right $R$-module. If $\oplus_{ i = 1}^{\infty}E(P) $ has couniserial dimension, then
$P$ is injective.
\end{proposition}
\begin{proof}
We first show that $P$ has the cancellation property.
Let $M = P \oplus B \cong P \oplus B'$. Then there exist submodules $P'$ and $C $ of $M$ such that $M = P \oplus B = P' \oplus C$
with
$P' \cong P$ and $ C \cong B'$. Let $p_1$ be the projection map from $M = P \oplus B$ onto $P$. Then, restricting
$p_1$ to $C$, we have an
exact
sequence $ 0 \longrightarrow C \cap B \longrightarrow C \longrightarrow I \longrightarrow 0$, where $I$ is a submodule of $P$.
Note that every submodule of $P$ is projective, because $P$ is anti-coHopfian. So $I$ is projective and
thus $C \cong (C \cap B) \oplus I$. Similarly, by considering the projection map $p_2$ from $M = P' \oplus C$ to $P'$, we
have
$B \cong (C \cap B) \oplus J$ for some submodule $J$ of $P'$. Since $J \cong I \cong P$, we have $B\cong C$ and so $B\cong B'$.
Thus $P$ has the cancellation property. Now consider the
descending
chain
$$ P \oplus ( \oplus _{ i = 2}^{\infty} E(P)) \geq P^{(2)} \oplus ( \oplus _{ i = 3}^{\infty} E(P)) \geq ... $$
of submodules of
$\oplus _{i = 1}^{\infty} E(P)$.
Then, by Proposition \ref{2.2},
there exists $n\geq 1$ such that
$$P^{(n)} \oplus ( \oplus _{i = n + 1}^{\infty}E(P))\cong P^{(n + 1)}\oplus(\oplus _{i = n + 2}^{\infty} E(P))$$
and so
$\oplus _{i = n + 1}^{\infty} E(P) \cong P \oplus (\oplus _{i = n + 2}^{\infty} E(P))$, because $P$ is cancellable. Since $P$
is finitely
generated, there exists a right module $L$ such that for some $k$, $E(P) ^{k} \cong P\oplus L $. This shows $P$ is injective.
$~\square$
\end{proof}
As a consequence of the above proposition we have the following corollary:
\begin{corollary} \label{domain}
Let $R$ be a principal right ideal domain with maximal right quotient ring $Q$ (which is a division ring).
If the right $R$-module $\oplus_{i = 1}^{\infty} Q$ has couniserial dimension, then
$R = Q$.
\end{corollary}
We need the following lemmas to prove the next theorem. Using Proposition \ref{2.2} we can see that:
\begin{lemma} \label{factor}
Let $I$ be a two-sided ideal of $R$ and let $M$ be an $R/I$-module. If $M$ has couniserial dimension as an $R$-module, then
$M$ has couniserial dimension as an $R/I$-module.
\end{lemma}
\begin{lemma} \label{notherian uniform}
If all finitely generated right modules have couniserial dimension, then every right module contains a noetherian uniform module.
\end{lemma}
\begin{proof}
By Lemma \ref{anti-coHopfian} it is enough to show that every cyclic module contains an anti-coHopfian module.
Let $M$ be a non-zero cyclic right module which does not contain an anti-coHopfian module and let $S$ be a simple module.
Since $M$ is not anti-coHopfian, $M$ has a
non-zero submodule $M_1 \ncong M$, and $M_ 1$ has a non-zero submodule $M_2$ such that
$M_2 \ncong M_1$. By continuing in
this manner we have a descending chain $ S \oplus M \geq S\oplus M_1 \geq S \oplus M_2 \geq ... $ of
submodules of
$S \oplus M$. Since $S\oplus M$ is finitely generated, by Proposition \ref{2.2}, $ S \oplus M_n \cong S \oplus M_{n + 1}$
for some $n$. This implies that
$M_n \cong M_{n + 1}$ for some $n$, because $S$ is cancellable and this is a contradiction. $~\square$
\end{proof}
\begin{theorem} \label{final} For a ring $R$ the following are equivalent. \\
{\rm (1)} $R$ is a semisimple artinian ring.\\
{\rm (2)} All right $R$-modules have couniserial dimension.\\
{\rm (3)} All left $R$-modules have couniserial dimension.\\
{\rm (4)} All right $R$-modules have uniserial dimension.\\
{\rm (5)} All left $R$-modules have uniserial dimension.
\end{theorem}
\begin{proof}
For the equivalence of (1), (4) and (5) see \cite [Theorem 2.6]{j.algebra}. \\
$(1) \Rightarrow (2)$. This is clear by Corollary \ref{finite simple}.\\
$(2) \Rightarrow (1)$. First we show that $R$ satisfies the ascending chain condition on two-sided ideals.
Let $I_1 \leq I_2 \leq ... $ be a chain of ideals of $R$. Since the right module $\oplus _{i = 1} ^{\infty} R/I_i $ has couniserial
dimension,
there exists $n$ such that, for each $j \geq n$, $\oplus _{i = n} ^{\infty} R/I_i \cong \oplus _{i = j} ^{\infty} R/I_i $.
Thus they have the same
annihilators and so for each $j \geq n$, $I_n = I_j$.
Suppose $R$ is a non-semisimple ring.
By Lemma \ref{factor} every module over a factor ring of $R$ also has couniserial
dimension. Thus, by invoking the ascending chain condition on two-sided ideals, we may assume
$R$ is not semisimple artinian but every factor ring of $R$ is semisimple artinian.
Using Lemma \ref{semisimple1}, $R$ is a right V-ring. First let us assume that $R$ is primitive.
So, by Theorem \ref{2.5}, $R$ is a prime right Goldie ring.
By \cite [Theorem 5.16] {simple noetherian ring},
a prime right Goldie right V-ring is simple. By Lemma \ref{notherian uniform},
$R$ has a noetherian uniform right submodule and so, using \cite [Corollary 7.25] {goodearl},
$R$ is right noetherian.
Now we show that $R$ is Morita equivalent to
a domain. By \cite [Lemma 5.12]{5}, the endomorphism ring of every uniform right ideal of a prime right
Goldie ring is a right Ore domain.
So by \cite [Theorem 1.2] {simple ring},
it is enough to show that $R$ has a uniform projective generator $U$. Let us
assume that $R$ is not uniform and u.dim$(R) = n$ and let $U$ be
a uniform right ideal of $R$. By \cite [Corollary 7.25] {goodearl}, $U^{n}$ can be embedded in $R$ and also $R$ can be embedded in
$U^{n}$. Then c.u.dim$(R) = $ c.u.dim$(U^{n})$ and hence $R\cong U^{n}$, because $R$ is not uniform. Thus $U$ is a uniform
right ideal of $R$ which is a projective generator. So $R$ is Morita equivalent to a domain. Now Lemma \ref{morita}
and Lemma \ref {example} and Corollary \ref{domain} show that
$R$ is simple artinian, a contradiction. So $R$ is not primitive, but every primitive factor ring is artinian (indeed all proper factor
rings are artinian).
Then, since $R$ is a right V-ring, by \cite{Bacel proc}, $R$ is regular and a $\Sigma$-V-ring.
Also, every non-zero right ideal contains a non-zero uniform right ideal, hence a minimal right ideal. So $R$ has non-zero essential socle soc$(R)$.
But $R$ is a $\Sigma$-V-ring and, by Corollary \ref{finite simple},
we have only finitely many non-isomorphic simple modules. Thus soc$(R)$ is injective.
This implies $R$ is semisimple, a contradiction. This completes the proof. $~\square$
\end{proof}
\begin{center}{\bf Summary}\end{center}
This paper defines the couniserial dimension of a module, which measures how far a module is from being uniform. The results
proved in the paper demonstrate its importance for studying
the structure of modules and rings and are the beginning of a larger project to
study its impact. We close with some open questions:\\
1) Does a module with arbitrary couniserial dimension possess indecomposable dimension?\\
2) Is there a theory for modules with both finite uniserial and couniserial dimensions that parallels the Krull-Schmidt-Remak-Azumaya
theorem?
\begin{center}{\bf Acknowledgments}\end{center}
This paper was written when the third author was visiting Ohio University, United States during May-August 2014.
She wishes to express her deepest gratitude to Professor S. K. Jain for his kind guidance in her research project and
Mrs. Parvesh Jain for the warm hospitality extended to her during her stay.
She would also like to express her thanks to Professor E. Zelmanov for his kind invitation to visit the University of
California at San Diego and to give a talk.
\begin{thebibliography}{99}
\bibitem {Anderson} F. W. Anderson and K. R. Fuller, Rings and Categories of Modules, second ed., Grad. Texts in Math.,
Vol. 13, Springer, Berlin, (1992).
\bibitem{2} G. Baccella, On flat factor rings and fully right idempotent rings, Ann. Univ. Ferrara Sez. 26 (1980), 125-141
\bibitem{Bacel proc} G. Baccella, Von Neumann regularity of V-rings with artinian primitive factor
rings, Proc. Amer. Math. Soc., 103, 3 (1988), 747-749.
\bibitem {simple noetherian ring} J. Cozzens and C. Faith, Simple noetherian rings, Cambridge Univ. Press, Cambridge, (1975).
\bibitem {Uniserial with nil} N. I. Dubrovin, An example of a chain primitive ring with nilpotent elements,
Mat. Sb. (N.S.) 120(162) (1983), no. 3, 441-447 (Russian). MR 691988 (84f:16012)
\bibitem{j.a.its} A. Ghorbani, Z. Nazemian, On commutative rings with uniserial dimension,
Journal of Algebra and Its Applications, 14 (1) (2015), 1550008. (Available on line).
\bibitem{5} A. W. Goldie, Rings with maximum condition, Lecture Notes, Yale University, New Haven, Conn., (1961).
\bibitem{goodearl} K. R. Goodearl, An Introduction to Non commutative noetherian Rings,
London Math. Soc. 61, Cambridge University Press, Cambridge, (2004).
\bibitem{Gooderlnonsingular} K. R. Goodearl, Ring Theory: Nonsingular Rings and Modules, Dekker, New York (1976).
\bibitem {good} K. R. Goodearl, {\it Surjective endomorphisms of finitely generated modules,}
Comm. Algebra 15 (1987), 589-609.
\bibitem{simple ring} R. Hart, J. C. Robson,
Simple rings and rings Morita equivalent to ore domains. Proc. London Math. Soc. 21 (3) (1970), 232-242.
\bibitem{lance small} R. Gordon and L. W. Small, Piecewise domains, J. Algebra 23 (1972), 553-564.
\bibitem{Hirano} Y. Hirano and I. Mogami, Modules whose proper submodules are non-hopf kernels,
Comm. Algebra. 15 (8) (1987), 1549-1567.
\bibitem {Hiranocom}
Y. Hirano, E. Poon and H. Tsutsui, A Generalization of Complete Reducibility, Comm. Algebra 40 (6) (2012), 1901-1910.
\bibitem{cyclic} S. K. Jain, Ashish K. Srivastava and Askar A. Tuganbaev, Cyclic modules and the structure of rings,
Oxford Mathematical
Monographs, Oxford University Press, (2012).
\bibitem {crash} T. Y. Lam, A crash course on stable range, cancellation, substitution, and exchange. J. Algebra Appl. 3 (2004),
301-343.
\bibitem {Lam} T. Y. Lam, Lectures on Modules and Rings, Grad. Texts in Math., Vol.
189, Springer, New York, (1999).
\bibitem{robson} J. C. McConnell and J. C. Robson, Noncommutative Noetherian Rings, Wiley, New York, (1987).
\bibitem{7} G. O. Michler and O. E. Villamayor, On rings whose simple modules are injective, J. Algebra 25 (1973), 185-201.
\bibitem {j.algebra} Z. Nazemian, A. Ghorbani, M. Behboodi, Uniserial dimension of modules, J. Algebra, 399 (2014), 894-903.
\bibitem{stoll} R. R. Stoll, Set Theory and Logic. Dover Publication Inc, New York, (1961).
\bibitem {varadarjan} K. Varadarajan, Anti-Hopfian and anti-coHopfian modules,
AMS Contemporary Math Series 456, (2008), 205-218.
\bibitem {wis} R. Wisbauer, Foundations of Module and Ring Theory, Gordon and Breach, Reading, (1991).
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
Let $G$ be a finite group with cyclic Sylow $p$-subgroup,
and let $k$ be a field of characteristic $p$. Then
$H^*(BG;k)$ and $H_*(\Omega BG{}^{^\wedge}_p;k)$ are $A_\infty$ algebras whose
structure we determine up to quasi-isomorphism.
\end{abstract}
\title{Massey products in the homology of the loopspace of a
$p$-completed classifying space:
finite groups with cyclic Sylow $p$-subgroups}
\section{Introduction}
The general context is that we have a finite group $G$, and a field
$k$ of characteristic $p$. We are interested in the differential
graded cochain algebra $C^*(BG;k)$
and the differential graded algebra $C_*(\Omega (BG{}^{^\wedge}_p);k)$ of chains
on the loop space: these two are Koszul dual to each
other, and the Eilenberg-Moore and Rothenberg-Steenrod spectral
sequences relate the cohomology ring $H^*(BG;k)$ to the homology
ring $H_*(\Omega (BG{}^{^\wedge}_p);k)$. Of course if $G$ is a $p$-group, $BG$ is $p$-complete so $\Omega
(BG{}^{^\wedge}_p)\simeq G$, but in general $H_*(\Omega (BG{}^{^\wedge}_p); k)$ is
infinite dimensional. Henceforth we will omit the brackets from
$\Omega (BG{}^{^\wedge}_p)$. \\[1ex]
We consider a simple case where
the two rings are not formal, but we can identify the $A_{\infty}$
structures precisely (see Section \ref{sec:Ainfty} for a brief summary on $A_{\infty}$-algebras). From now on we suppose specifically that $G$ is a finite group with cyclic Sylow $p$-subgroup $P$,
and let $BG$ be its classifying space. Then the inclusion of
the Sylow $p$-normaliser $N_G(P) \to G$ and the quotient map $N_G(P)
\to N_G(P)/O_{p'}N_G(P)$
induce mod $p$ cohomology
equivalences
\[ B(N_G(P)/O_{p'}N_G(P)) \leftarrow BN_G(P) \to BG, \]
and
hence homotopy equivalences after
$p$-completion
\[ B(N_G(P)/O_{p'}N_G(P)){}^{^\wedge}_p \xleftarrow{\ \sim\ } BN_G(P){}^{^\wedge}_p
\xrightarrow{\ \sim \ } BG{}^{^\wedge}_p. \]
Here, $O_{p'}N_G(P)$ denotes the largest normal $p'$-subgroup of
$N_G(P)$. Thus $N_G(P)/O_{p'}N_G(P)$ is a semidirect product
$\mathbb Z/p^n\rtimes \mathbb Z/q$, where $q$ is a divisor of $p-1$, and
$\mathbb Z/q$ acts faithfully as a group of automorphisms of $\mathbb Z/p^n$. In
particular, the isomorphism type of $N_G(P)/O_{p'}N_G(P)$ only depends
on $|P|=p^n$ and the inertial index $q=|N_G(P):C_G(P)|$, and therefore
so does the homotopy type of $BG{}^{^\wedge}_p$. Our main theorem determines
the multiplication maps $m_i$ in the $A_\infty$ structure on
$H^*(BG;k)$ and $H_*(\Omega (BG{}^{^\wedge}_p);k)$ arising from $C^*(BG;k)$ and
$C_*(\Omega (BG{}^{^\wedge}_p); k)$ respectively. We will suppose from now on
that $p^n>2, q>1$ since the case of a $p$-group is well understood.
The starting point is the cohomology ring
$$H^*(BG; k)=H^*(B\mathbb Z/p^n; k)^{\mathbb Z/q}=k[x]\otimes \Lambda(t)
\mbox{ with }|x|=-2q, |t|=-2q+1. $$
There is a preferred generator $t\in
H^1(B\mathbb Z/p^n;k)=\mathrm{Hom}(\mathbb Z/p^n,k)$ and we take $x$ to be the
$n$th Bockstein of $t$.
Before stating our result we should be clear about grading and signs.
\begin{remark}
\label{rem:deg}
We will be discussing both homology and cohomology, so we should be
explicit that everything is graded homologically, so that
differentials always lower degree. Explicitly, the degree of an
element of $H^i(G;k)$ is $-i$.
\end{remark}
\begin{remark}
\label{rem:signs}
Sign conventions for Massey products and $A_{\infty}$ algebras mean
that a specific sign will enter repeatedly in our statements, so for
brevity we write
$$\varepsilon (s)=
\begin{cases}
+1 & s\equiv 0, 1 \mod 4\\
-1 &s\equiv 2, 3 \mod 4\\
\end{cases}. $$
\end{remark}
\begin{theorem}
Let $G$ be a finite group with cyclic Sylow $p$-subgroup $P$ of order
$p^n$ and inertial index $q$ so that
$$H^*(BG;k)
=k[x]\otimes \Lambda(t) \mbox{ with } |x|=-2q, |t|=-2q+1 \mbox{ and } \beta_nt=x.$$ Up to quasi-isomorphism, the $A_\infty$ structure on $H^*(BG;k)$ is determined by
\[ m_{p^n}(t,\dots,t)=\varepsilon (p^n) x^{h} \]
where $h=p^n-(p^n-1)/q$. This implies
\[ m_{p^n}(x^{j_1}t,\dots,x^{j_{p^n}}t)=\varepsilon (p^n) x^{h+j_1+\dots+j_{p^n}}\]
for all $j_1, \ldots, j_{p^n}\geqslant 0$.
All $m_i$ for $i>2$ on all other
$i$-tuples of monomials give zero.
If $q>1$ and $p^n\ne 3$ then
\[ H_*(\Omega BG{}^{^\wedge}_p;k) = k[\tau] \otimes \Lambda(\xi) \mbox{ where }
|\tau|=2q-2, |\xi|=2q-1. \]
Up to quasi-isomorphism, the
$A_\infty$ structure is determined by
\[ m_h(\xi,\dots,\xi )=\varepsilon (h) \tau^{p^n}. \]
This implies
\[ m_h(\tau^{j_1}\xi,\dots,\tau^{j_h}\xi)=\varepsilon (h) \tau^{p^n+j_1+\dots+j_h} \]
for all $j_1, \ldots, j_{h}\geqslant 0$. All $m_i$ for $i>2$ on all other $i$-tuples of monomials give
zero.
If $q>1$ and $p^n=3$ then $q=2$ and
\[ H_*(\Omega BG{}^{^\wedge}_p;k) = k[\tau,\xi]/(\xi^2+\tau^3), \]
and all $m_i$ are zero for $i>2$.
\end{theorem}
\section{The group algebra and its cohomology}
We assume from now on, without loss of generality,
that $G$ has a normal cyclic Sylow $p$-subgroup
$P=C_G(P)$, with inertial index $q=|G:P|$. We shall assume that $q>1$,
which then forces $p$ to be odd. For notation, let
\[ G=\langle g,s\mid g^{p^n}=1,\ s^q=1,\
sgs^{-1}=g^\gamma\rangle\cong \mathbb Z/p^n\rtimes\mathbb Z/q, \]
where $\gamma$ is a primitive $q$th root of unity modulo $p^n$.
Let $P=\langle g\rangle$ and $H=\langle s\rangle$ as subgroups of
$G$.
Let $k$ be a field of characteristic $p$.
The action of $H$ on
$kP$ by conjugation preserves the radical series, and since $|H|$ is
not divisible by $p$, there are invariant complements. Thus we may
choose an element $U\in J(kP)$ such that $U$ spans an $H$-invariant
complement of $J^2(kP)$ in $J(kP)$. It can be checked that
\[ U = \sum_{\substack{1\leqslant j \leqslant p^n-1, \\(p,j)=1}} g^j/j \]
is such an element, and that $sUs^{-1}=\gamma U$. This gives us the
following presentation for $kG$:
\[ kG = k\langle s,U \mid U^{p^n}=0,\ s^q=1,\ sU = \gamma Us
\rangle. \]
We shall regard $kG$ as a $\mathbb Z[\frac{1}{q}]$-graded
algebra with $|s|=0$ and $|U|=1/q$. Then the bar resolution
is doubly graded, and taking homomorphisms into $k$, the
cochains $C^*(BG;k)$ inherit a double grading. The differential
decreases the homological grading and preserves the internal
grading. Thus the cohomology $H^*(G,k)=H^*(BG;k)$ is doubly graded:
\[ H^*(BG;k) = k[x] \otimes \Lambda(t) \]
where $|x|=(-2q,p^n)$, $|t|=(-2q+1,h)$, and $h=p^n-(p^n-1)/q$.
Here, the first degree is homological, the second internal.
The Massey product $\langle t,t,\dots,t\rangle$ ($p^n$ repetitions) is
equal to $-x^h$. This may easily be determined by restriction to $P$,
where it is well known that the $p^n$-fold Massey product of the
degree one exterior generator is a non-zero degree two element.
The usual convention is to make the constant $-1$, because
this Massey product is minus the $n$th Bockstein of $t$ \cite[Theorem 14]{Kraines:1966a}.
\section{$A_\infty$-algebras}
\label{sec:Ainfty}
An $A_{\infty}$-algebra over a field is a $\mathbb Z$-graded vector space
$A$ with graded maps $m_n: A^{\otimes n}\rightarrow A$ of degree $n-2$ for
$n\geqslant 1$ satisfying
$$\sum_{r+s+t=n}(-1)^{r+st}m_{r+1+t}(id^{\otimes r}\otimes m_s \otimes
id^{\otimes t})=0 $$
for $n\geqslant 1$. The map $m_1$ is therefore a differential, and the map
$m_2$ induces a product on $H_*(A)$.
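For instance, unravelling this identity for small $n$ (with the usual Koszul sign convention when tensor products of maps are evaluated on elements) gives
\[ m_1m_1=0,\qquad m_1m_2=m_2(m_1\otimes \mathrm{id}+\mathrm{id}\otimes m_1), \]
\[ m_2(m_2\otimes \mathrm{id})-m_2(\mathrm{id}\otimes m_2)
=-m_1m_3-m_3(m_1\otimes \mathrm{id}\otimes \mathrm{id}+\mathrm{id}\otimes m_1\otimes \mathrm{id}+\mathrm{id}\otimes \mathrm{id}\otimes m_1), \]
so $m_1$ is a derivation with respect to $m_2$, and $m_2$ is associative up to a homotopy given by $m_3$.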
A theorem of Kadeishvili~\cite{Kadeishvili:1982a} (see also
Keller~\cite{Keller:2001a,Keller:2002a} or
Merkulov~\cite{Merkulov:1999a}) may be stated as
follows. Suppose that we are given a differential
graded algebra $A$, over a field $k$. Let $Z^*(A)$ be the cocycles,
$B^*(A)$ be the coboundaries, and $H^*(A)=Z^*(A)/B^*(A)$.
Choose a vector space splitting $f_1\colon H^*(A) \to Z^*(A) \subseteq A$
of the quotient.
Then this gives by an inductive procedure an $A_\infty$ structure
on $H^*(A)$ so that the map $f_1$ is the degree one part of a quasi-isomorphism of
$A_\infty$-algebras.
If $A$ happens to carry auxiliary gradings respected by the product
structure and preserved by the differential, then it is easy to check
from the inductive procedure that
the maps in the construction may be chosen so that they also
respect these gradings. It then follows that the structure maps $m_i$
of the $A_\infty$ structure on $H^*(A)$ also respect these gradings.
Let us apply this to $H^*(BG;k)$. We examine the elements $m_i(t,\dots,t)$.
By definition, we have $m_1(t)=0$ and $m_2(t,t)=0$. The degree of
$m_i(t,\dots,t)$ is $i$ times the degree of $t$, increased in the
homological direction by $i-2$. This gives
\[ |m_i(t,\dots,t)| = i(-2q+1,h)+(i-2,0) =(-2iq+2i-2,ih). \]
The homological degree is even, so if $m_i(t, \cdots , t)$ is non-zero then it is a
multiple of a power of $x$. Comparing degrees, if $m_i(t,\dots,t)$ is
a non-zero multiple of $x^\alpha$ then we have
\[ 2iq-2i+2=2\alpha q,\qquad ih = \alpha p^n. \]
Eliminating $\alpha$, we obtain $(iq-i+1)p^n=ihq$. Substituting
$h=p^n-(p^n-1)/q$, this gives $i=p^n$. Finally, since the Massey
product of $p^n$ copies of $t$ is equal to $-x^h$, it follows that
$m_{p^n}(t,\dots,t)=\varepsilon (p^n) x^h$, where the sign is as defined in Remark
\ref{rem:signs} \cite[Theorem 3.1]{LuPalmieriWuZhang:2009}. Thus we have
\[ m_i(t,\dots,t) = \begin{cases} \varepsilon (p^n) x^h & i=p^n \\
0 & \text{otherwise.} \end{cases} \]
We shall elaborate on this argument in a more general context in the
next section, where we shall see that the rest of the $A_\infty$
structure is also determined in a similar way.
\section{$A_\infty$ structures on a polynomial tensor exterior algebra}
In this section, we shall examine the following general situation.
Our goal is to establish that there are only two possible $A_\infty$
structures satisfying Hypothesis \ref{hyp:grading} below, and that the Koszul dual also
satisfies the same hypothesis with the roles of $a$ and $b$, and of
$h$ and $\ell$ reversed.
\begin{hypothesis}
\label{hyp:grading}
$A$ is a $\mathbb Z\times\mathbb Z$-graded $A_\infty$-algebra over a field $k$,
where the operators $m_i$ have degree $(i-2,0)$, satisfying
\begin{enumerate}
\item $m_1=0$, so that $m_2$ is strictly associative,
\item ignoring the $m_i$ with $i>2$, the algebra $A$ is
$k[x] \otimes \Lambda(t)$ where $|x|=(-2a,\ell)$ and $|t|=(-2b-1,h)$, and
\item $ha-\ell b = 1$.
\end{enumerate}
\end{hypothesis}
\begin{remarks}
(i) The $A_\infty$-algebra $H^*(BG;k)$ of the last section satisfies this
hypothesis, with $a=q$, $b=q-1$, $h=p^n-(p^n-1)/q$, $\ell=p^n$.
(ii) By comparing degrees, if we have $m_\ell(t,\dots,t)=\varepsilon (p^n) x^h$ then
$(2b+1)\ell + 2-\ell = 2ah$ and so $ha-\ell b = 1$. This explains the
role of part (3) of the hypothesis. The consequence is, of course,
that $a$ and $b$ are coprime, and so are $h$ and $\ell$.
\end{remarks}
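For instance, in the situation of Remark (i) one checks directly that
\[ ha-\ell b \;=\; q\Bigl(p^n-\frac{p^n-1}{q}\Bigr)-p^n(q-1) \;=\; qp^n-(p^n-1)-qp^n+p^n \;=\; 1. \]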
\begin{lemma}
If $m_i(t,\dots,t)$ is non-zero, then $i=\ell > 2$ and $m_\ell(t,\dots,t)$ is
a multiple of $x^h$.
\end{lemma}
\begin{proof}
The argument is the same as in the last section. The degree of
$m_i(t,\dots,t)$ is $i|t| +(i- 2,0) = (-2ib - 2, ih)$. Since the
homological degree is even, if $m_i(t,\dots,t)$ is non-zero then it is
a multiple of some power of $x$, say $x^\alpha$. Then we have
\[ 2ib+2 = 2\alpha a,\qquad ih = \alpha\ell. \]
Eliminating $\alpha$ gives $(ib+1)\ell=iha$, and so
using $ha-\ell b =1$ we have $i=\ell$. Substituting back gives
$\alpha = h$.
\end{proof}
Elaborating on this argument gives the entire $A_\infty$ structure.
If $m_\ell(t,\dots,t)$ is non-zero, then by rescaling the variables
$t$ and $x$ if necessary
we can assume that $m_\ell(t,\dots,t)= x^h$ (note that we can even do
this without extending the field, since $\ell$ and $h$ are coprime).
\begin{proposition}
If $m_\ell(t,\dots,t)=0$ then all $m_i$ are zero for $i>2$. If
$m_\ell(t,\dots,t)= x^h$ then
$m_\ell(x^{j_1}t,\dots,x^{j_\ell}t)= x^{h+j_1+\dots+j_\ell}$,
and all $m_i$ for $i>2$ on all other $i$-tuples of monomials give zero.
\end{proposition}
\begin{proof}
All monomials live in different degrees, so we do not need to consider
linear combinations of monomials.
Suppose that
$m_i(x^{j_1}t^{\varepsilon_1},\dots,x^{j_i}t^{\varepsilon_i})$ is some constant multiple
of $x^jt^\varepsilon$,
where each of $\varepsilon_1,\dots,\varepsilon_i,\varepsilon$ is either zero or one. Then
comparing degrees, we have
\[ (j_1+\dots+j_i)|x| + (\varepsilon_1+\dots+\varepsilon_i)|t| + (i-2,0) = j|x| + \varepsilon|t|. \]
Setting
\[ \alpha=j_1+\dots+j_i-j,\qquad \beta = \varepsilon_1+\dots+\varepsilon_i-\varepsilon \]
we have $\beta \leqslant i$, and
\[ \alpha(-2a,\ell) + \beta(-2b-1,h) + (i-2,0) = 0. \]
Thus
\[ 2\alpha a + 2\beta b + \beta + 2 - i = 0, \qquad \alpha \ell +
\beta h = 0. \]
Eliminating $\alpha$, we obtain
\[ -2\beta ha + 2\beta \ell b + \beta \ell + 2 \ell - i \ell = 0. \]
Since $ha-\ell b=1$, this gives $\beta = \ell (i-2) /(\ell -2)$.
Combining this with $\beta \leqslant i$ gives $i\leqslant \ell$. If $i<\ell$ then
(since $i>2$) $0<\beta<\ell$, so $\beta$ is not divisible by $\ell$; as $h$ and $\ell$ are coprime, the relation $\alpha \ell + \beta h = 0$
cannot hold. So we have $\beta=i=\ell$, $\varepsilon_1=\dots=\varepsilon_\ell=1$,
$\varepsilon=0$, $\alpha=-h$, and $j=h+j_1+\dots+j_\ell$.
Finally, the identities satisfied by the $m_i$ for an
$A_\infty$ structure show that all the constant multiples have to be
the same, hence all equal to zero or, after rescaling, all equal to one.
\end{proof}
Combining the lemma with the proposition, we obtain the following.
\begin{theorem}\label{th:Ainfty}
Under the hypothesis above, if $\ell > 2$ then there are two possible $A_\infty$
structures on $A$.
There is the formal one, where the $m_i$ are zero for $i>2$,
and the non-formal one, where
after replacing $x$ and $t$ by suitable multiples, the only
non-zero $m_i$ with $i>2$ is $m_\ell$, and the only non-zero values on
monomials are given by
\begin{equation*}
m_\ell(x^{j_1}t,\dots,x^{j_\ell}t)=x^{h+j_1+\dots+j_\ell}.
\end{equation*}
\end{theorem}
\begin{theorem}
Let $G=\mathbb Z/p^n \rtimes \mathbb Z/q$ as above, and $k$ a field of
characteristic $p$. Then
the $A_\infty$ structure on $H^*(G,k)$ given by Kadeishvili's theorem
may be taken to be the non-formal possibility named in the above
theorem, with $a=q$, $b=q-1$, $h=p^n-(p^n-1)/q$, $\ell=p^n$.
\end{theorem}
\begin{proof}
Since we have $m_{p^n}(t,\dots,t)=\varepsilon (p^n) x^h$, the formal possibility does
not hold.
\end{proof}
\begin{remark}
Dag Madsen's thesis~\cite{Madsen:2002a}
has an appendix in which the $A_\infty$ structure
is computed for the cohomology of a truncated polynomial ring, reaching similar
conclusions by more direct methods.
\end{remark}
\section{Loops on $BG{}^{^\wedge}_p$}
In general for a finite group $G$ we have $H^*(BG{}^{^\wedge}_p;k)\cong
H^*(BG;k)=H^*(G,k)$ and $\pi_1(BG{}^{^\wedge}_p)=G/O^p(G)$,
the largest $p$-quotient of $G$. In our case, $G=\mathbb Z/p^n\rtimes\mathbb Z/q$
with $q>1$, we have $G=O^p(G)$ and so $BG{}^{^\wedge}_p$ is simply
connected. So the Eilenberg--Moore spectral sequence converges to the
homology of its loop space:
\[ \mathsf{Tor}_{*,*}^{H^*(G,k)}(k,k) \Rightarrow H_*(\Omega BG{}^{^\wedge}_p;k). \]
The internal grading on $C^*(BG;k)$ gives this spectral sequence a
third grading that is preserved by the differentials, and $H_*(\Omega
BG{}^{^\wedge}_p;k)$ is again doubly graded. Since $H^*(G,k)=k[x] \otimes
\Lambda(t)$ with $|x|=(-2q,p^n)$ and $|t|=(-2q+1,h)$, it follows that
the $E^2$ page of this spectral sequence is
$k[\tau] \otimes \Lambda(\xi)$
where $|\xi|=(-1,2q,p^n)$ and $|\tau|=(-1,2q-1,h)$
(recall $h=p^n-(p^n-1)/q$). Provided that we are not in the case
$h=2$, which only happens if
$p^n=3$, ungrading $E^\infty$ gives
\[ H_*(\Omega BG{}^{^\wedge}_p;k) = k[\tau] \otimes \Lambda(\xi) \]
with $|\tau|=(2q-2,h)$ and $|\xi|=(2q-1,p^n)$.
In the exceptional case $h=2$, $p^n=3$, we have $q=2$, and
the group $G$ is the symmetric group $\Sigma_3$ of degree
three. An explicit computation (for example by squeezed resolutions
\cite{Benson:2009b}) gives
\[ H_*(\Omega (B\Sigma_3){}^{^\wedge}_3;k) = k[\tau,\xi]/(\xi^2+\tau^3) \]
with $|\tau|=(2,2)$ and $|\xi|=(3,3)$, and the two gradings collapse
to a single grading.
Applying
Theorem~\ref{th:Ainfty}, and using the fact that the formal structure on either side is
Koszul dual to the formal structure on the other, we have the following.
\begin{theorem}
Suppose that $p^n\ne 3$. Then
the $A_\infty$ structure on $H_*(\Omega
BG{}^{^\wedge}_p;k)=k[\tau]\otimes\Lambda(\xi)$ is given by
\[ m_h(\tau^{j_1}\xi,\dots,\tau^{j_h}\xi)=\varepsilon (h) \tau^{p^n+j_1+\dots+j_h}, \]
and for $i>2$, all $m_i$ on all other $i$-tuples of monomials give zero.\qed
\end{theorem}
Using \cite{LuPalmieriWuZhang:2009} again, we
may reinterpret this in terms of Massey products.
\begin{corollary}
In $H_*(\Omega BG{}^{^\wedge}_p)$,
the Massey products $\langle \xi,\dots,\xi \rangle$ ($i$ times) vanish
for $0<i<h$, and give $-\tau^{p^n}$ for $i=h$.\qed
\end{corollary}
Note that the exceptional case $p^n=3$ also fits the corollary, if we
interpret a $2$-fold Massey product as an ordinary product.
\end{document}
\begin{document}
\title{Quantum tomography of entangled spin-multi-photon states.}
\author{Dan Cogan}
\affiliation{The Physics Department and the Solid State Institute, Technion\textendash Israel
Institute of Technology, 3200003 Haifa, Israel}
\author{Giora Peniakov}
\affiliation{The Physics Department and the Solid State Institute, Technion\textendash Israel
Institute of Technology, 3200003 Haifa, Israel}
\author{Oded Kenneth}
\affiliation{The Physics Department and the Solid State Institute, Technion\textendash Israel
Institute of Technology, 3200003 Haifa, Israel}
\author{Yaroslav Don}
\affiliation{The Physics Department and the Solid State Institute, Technion\textendash Israel
Institute of Technology, 3200003 Haifa, Israel}
\author{David Gershoni}
\email{[email protected]}
\affiliation{The Physics Department and the Solid State Institute, Technion\textendash Israel
Institute of Technology, 3200003 Haifa, Israel}
\begin{abstract}
We present a novel method for quantum tomography of multi-qubit states.
We apply the method to spin-multi-photon states, which we produce
by periodic excitation of a semiconductor quantum-dot-confined spin
every 1/4 of its coherent precession period. These timed excitations
lead to the deterministic generation of strings of entangled photons
in a cluster state. We show that our method can be used for characterizing
the periodic process map, which produces the photonic cluster. From
the measured process map, we quantify the robustness of the entanglement
in the cluster. The 3-fold enhanced generation rate over previous
demonstrations reduces the spin decoherence between the pulses and
thereby increases the entanglement.
\end{abstract}
\maketitle
\global\long\def\ket#1{\left|#1\right\rangle }
\global\long\def\bra#1{\left\langle #1\right|}
\section{Introduction}
Measurement-based quantum protocols are very promising for quantum
computation in general \citep{Raussendorf2001,Raussendorf2003,Briegel2009}
and for quantum communication in particular \citep{Briegel1998,Zwerger2012,Zwerger2013,Azuma2015}.
The use of multi-partite entangled states named graph states \citep{Briegel2001,Hein2004}
enables quantum computation by single-qubit measurements and rapid
classical feedforward, depending on the measurement outcome \citep{Raussendorf2003}.
For quantum communication, graph-states of photons are particularly
attractive \citep{Walther_2005,Prevedel_2007,Lu_2007,Tokunaga_2008},
since they provide redundancy against photon loss, and compensation
for the finite efficiency of quantum gates. Moreover, since the quantum
information is contained in the graph state, they eliminate the need
to communicate within the coherence time of the local nodes \citep{Azuma2015}.
Graph-states are therefore considered for efficient distribution of
entanglement between remote nodes \citep{Kimble2008} as well as for
quantum repeaters \citep{Zwerger2012,Zwerger2013}. Developing devices
capable of deterministically producing high-quality photonic graph
states at a fast rate is, therefore, a scientific and technological
challenge of utmost importance \citep{Munro2012}.
The technological quest for generating photonic graph states which
are required for building scalable quantum network architectures,
led to new schemes. Of particular importance and relevance to this
work is the Lindner and Rudolph proposal \citep{Lindner2009} for
generating a one-dimensional cluster state of entangled photons using
semiconductor quantum dots (QDs). The scheme uses a single confined
electronic spin in a coherent superposition of its two eigenstates.
The spin precesses in a magnetic field while driven by a temporal
sequence of resonant laser pulses. Upon excitation of the QD spin,
a single photon is deterministically emitted and the photon polarization
is entangled with the polarization of the QD spin. This timed excitation
repeats itself indefinitely, thus generating a long 1D-cluster of
entangled photons.
Schwartz and coworkers demonstrated the first proof-of-concept realization
of this proposal in 2016 \citep{Schwartz2016}. They showed that the
entanglement robustness of the 1D photonic string is mainly determined
by the ratio between the photon radiative time and the spin-precession
time, and to a lesser extent also by the ratio between the latter and
the confined-spin coherence time \citep{Schwartz2016}.
In Ref.~\citep{Schwartz2016}, the entangler was the spin of the dark
exciton (DE). The short-range electron-hole exchange interaction removes
the degeneracy of the DE even in the absence of external field, therefore
a coherent superposition of the DE eigenstates naturally precesses.
Due to the limited temporal resolution of the silicon avalanche photodetectors
which Schwartz et al.\ used, the spin was re-excited every 3/4 of its
precession period. In this work, we use instead superconducting single
photon detectors with an order of magnitude better temporal resolution.
Therefore we are able to drive the system every 1/4 of the DE precession
period. This leads to a photon generation rate which is threefold faster
than previously demonstrated.
We develop a novel experimental and theoretical method for characterizing
the improved cluster state and the spin - multi-photon quantum states
that we generate. Our tomographic method differs from the traditional
method \citep{James2001} in the sense that it enables measuring
the spin that remains in the QD after projecting all the emitted photons.
The method uses time resolved spin-multi-photon correlations for measuring
the quantum state, and for characterizing the periodically used process
map which generates the photonic cluster.
We use a novel gradient descent method to find the process map which
best fits the data in the sense of having maximum likelihood. Our
gradient descent method differs from the standard one in the fact
that we define the gradient relative to a specific non-Euclidean metric
which is adapted to the geometry of the set of physical (completely
positive) process maps. This approach is very different from known
algorithms such as projected gradient descent \citep{Goncalves2015,Bolduc2017}.
In the following, we demonstrate our tomographic technique by characterizing
the cluster state generated at the enhanced gigahertz rate. We show that
as a result of the time reduction between the sequential excitations,
the effect of the DE spin decoherence \citep{Cogan2018} is reduced,
and the robustness of the entanglement in the cluster state increases,
persisting for 6 consecutive photons.
The tomographic method and our experimental results are described
below.
\begin{figure*}
\caption{\label{fig:Method}}
\end{figure*}
\section{Cluster-state Generation - Method}
At the heart of our device is a semiconductor QD. The QD contains
a confined electronic-spin, which serves as the entangler qubit \citep{Loss1998,DiVincenzo_2000,Lindner2009}.
We define the sample growth direction, which is also the QD\textquoteright s
shortest dimension (about 3nm), as the quantization z-axis. The QD
is embedded in a planar microcavity, formed by 2 Bragg-reflecting
mirrors (Fig.~\ref{fig:Method}a), facilitating efficient light-harvesting
by an objective placed above the QD.
The DE is an electron-hole pair with parallel spins \citep{Poem2010,Schwartz2015,Schwartz2015a},
having two total angular momentum states of $\pm2$ as projected on
the QD z-axis. Since the DE optical activity is weak \citep{zielinsky2014},
it has a long lifetime and a long coherence time \citep{Schwartz2015,Cogan2018}.
Upon optical excitation, the DE ($\ket{\Uparrow\uparrow}$) is excited
to form a biexciton (BIE) ($\ket{\Uparrow\Uparrow\uparrow\downarrow}$).
The BIE is formed by a pair of electrons in their first conduction-subband
level and two heavy holes with parallel spins, one in the first and one
in the second valence-subband level. Fig.~\ref{fig:Method}b and
Fig.~\ref{fig:Method}c schematically describe the DE-BIE energy
level structure, and the selection rules for optical transitions between
these levels, respectively \citep{Bayer2002,Ivchenko2005,Poem2010}.
Each laser pulse (in red) excites the QD confined DE to its corresponding
BIE state. The BIE decays to the DE{*} level within about 370ps by
radiative recombination in which a single photon is emitted (marked
in pink). The DE{*} then decays to the ground DE state by about 70ps
spin-preserving acoustical-phonon relaxation. The energy difference
between the emitted photon and the exciting laser allows us to spectrally
filter the emitted single photons.
The eigenstates of the DE and BIE are given by: $\ket{\pm X_{DE}}=\left(\ket{+Z_{DE}}\pm\ket{-Z_{DE}}\right)/\sqrt{2}$
and $\ket{\pm X_{BIE}}=\left(\ket{+Z_{BIE}}\pm\ket{-Z_{BIE}}\right)/\sqrt{2}$.
The energy differences between these eigenstates are about $1\mu eV$,
smaller than the radiative width of the BIE optical transition ($\simeq3\mu eV$)
and much smaller than the spectral width of our laser pulse ($\simeq100\mu eV$).
It follows that a coherent superposition of the DE (BIE) eigenstates
precesses with a period $T_{DE}=3.1ns$ ($T_{BIE}=5.6ns$), an order
of magnitude longer than the BIE radiative time.
The DE and BIE act as spin qubits. Angular momentum conservation during
the optical transitions between these two qubits implies the following
$\pi$-system selection rules \citep{Cogan2018}:
\begin{align}
\ket{\Uparrow\uparrow} & \stackrel{\ket{+Z}}{\longleftrightarrow}\ket{\Uparrow\Uparrow\downarrow\uparrow},\nonumber \\
\ket{\Downarrow\downarrow} & \stackrel{\ket{-Z}}{\longleftrightarrow}\ket{\Downarrow\Downarrow\uparrow\downarrow}.\label{eq:selection-rules-1-1}
\end{align}
where $\ket{+Z}$ ($\ket{-Z}$) is a right- (left-) hand circularly
polarized photon propagating along the +Z-direction. It thereby follows
that a laser pulse polarized $\ket{+X}=\left(\ket{+Z}+\ket{-Z}\right)/\sqrt{2}$
coherently excites a superposition $\alpha\ket{\Uparrow\uparrow}+\beta\ket{\Downarrow\downarrow}$
of the DE spin qubit states to a similar superposition $\alpha\ket{\Uparrow\Uparrow\uparrow\downarrow}+\beta\ket{\Downarrow\Downarrow\uparrow\downarrow}$
of the BIE states. The BIE then radiatively decays into an entangled
spin-photon state $\alpha\ket{\Uparrow\uparrow}\ket{+Z}+\beta\ket{\Downarrow\downarrow}\ket{-Z}$.
Therefore the excitation and photon emission act as a 2-qubit entangling
(CNOT) gate between the spin and the photon.
The method for generating the cluster state is described in Fig.~\ref{fig:Method}d.
The confined DE is resonantly excited repeatedly by a laser pulse
to its corresponding BIE. The BIE decays radiatively by emitting a
photon. The excitation and photon emission act as a two-qubit CNOT
gate which entangles the emitted photon polarization qubit and the
spin qubit, thus adding a photon to the growing photonic cluster.
The excitation pulses are timed such that between the pulses the DE-spin
precesses a quarter of its precession period. This temporal precession
can be ideally described as a unitary Hadamard gate acting on the
spin qubit only. The combination of the CNOT 2-qubit gate and the
Hadamard 1-qubit gate forms the basic cycle of the protocol which
when repeated periodically generates the entangled spin + photons
cluster state \citep{Lindner2009}.
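To make the ideal cycle concrete, the following is a minimal numerical sketch of the protocol, assuming perfect gates and no decoherence, and representing the emitted photon polarization as an abstract qubit; the helper-function and variable names are purely illustrative. Each cycle appends a photon qubit in $\ket{0}$, copies the spin onto it with a CNOT, and then applies a Hadamard to the spin.
\begin{verbatim}
import numpy as np

# Ideal CNOT + Hadamard cycle: a sketch, not the physical emission model.
# Qubit 0 is the spin; later qubits are the emitted photons.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def apply_to_qubit(state, gate, target, n_qubits):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    ops = [np.eye(2)] * n_qubits
    ops[target] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def one_cycle(state, n_qubits):
    """Append a photon in |0>, CNOT (spin = control, photon = target), H on spin."""
    state = np.kron(state, np.array([1.0, 0.0]))       # new photon in |0>
    n_qubits += 1
    new_state = np.zeros_like(state)
    for idx, amp in enumerate(state):                   # CNOT as a basis permutation
        bits = [(idx >> (n_qubits - 1 - k)) & 1 for k in range(n_qubits)]
        if bits[0] == 1:
            bits[-1] ^= 1
        new_idx = sum(b << (n_qubits - 1 - k) for k, b in enumerate(bits))
        new_state[new_idx] = amp
    return apply_to_qubit(new_state, H, 0, n_qubits), n_qubits

# Spin initialised to |+x>; three cycles give a spin + 3-photon cluster state.
state, n = np.array([1.0, 1.0]) / np.sqrt(2), 1
for _ in range(3):
    state, n = one_cycle(state, n)
\end{verbatim}
Repeating the cycle in this way yields, up to local unitaries, a linear cluster state on the spin and the emitted photons.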
For the experimental realization of the cluster protocol and the characterization
of the generated state, we use the experimental setup described in
Fig.~\ref{fig:Method}e. The QD is first optically emptied of carriers,
making it ready for initialization. The first 7-ns-long optical pulse
(blue downward arrow) depletes the QD of charges and of any remaining
DE \citep{Schmidgall2015}. We then write the DE spin state using a
horizontally polarized 12-ps optical $\pi$-pulse to the DE{*} state.
This is possible due to small mixing between the bright exciton (BE)
and the DE \citep{Schwartz2015a}. The pulse polarization defines
the initial DE{*} spin state. The DE{*} then relaxes to its ground
DE state, making the QD ready for implementing the cluster protocol.
A sequence of resonantly tuned linearly polarized $\pi$-area laser
pulses is then applied to the QD. Each pulse results in the emission
of a photon from the QD's BIE-DE optical transition. During the last
emission, the BIE spin evolution can be conveniently used as a resource
for the DE spin tomography \citep{Cogan2020}.
To characterize the generated multi-qubit quantum state, we project
the polarization of the detected photons on 6 different polarization
states using liquid crystal variable retarders (LCVRs) and polarizing beam splitters
(PBSs). We then use highly efficient transmission gratings to spectrally
filter the emitted photons from the laser light. The photons are eventually
detected by 6 efficient (>80\%) fast single-photon superconducting
detectors with temporal resolution of about 30ps.
\section{\label{sec:Measurements}Cluster-state characterization}
The cluster state entanglement robustness is characterized using three
cycles of the repeated protocol. The characterization is done by correlating
one, two, and three detected photon events. In all cases the last
detected photon is used for the tomography of the DE-spin. We use
two orthogonal linear polarizations, +X and +Y, for the excitation
pulses \citep{Cogan2020}. The +X-polarized laser pulse
promotes the DE state to a similar superposition of BIE states, while
+Y excitation introduces a $\pi/2$ phase shift to the superposition
\citep{Cogan2020}. In addition, we utilize the BIE state evolution
during its radiative decay back to the DE{*} to measure the degree-of-circular-polarization
($D_{cp}$) of the emission as a function of time:
\begin{equation}
D_{cp}(t)=\frac{P_{+Z}(t)-P_{-Z}(t)}{P_{+Z}(t)+P_{-Z}(t)},
\end{equation}
where $P_{j}$ represents the detected photon polarization-projection
on the j basis. By fitting the measured $D_{cp}(t)$ to a central-spin-evolution-model
that we recently developed for QD confined charge carriers \citep{Cogan2020},
we accurately extract the DE spin state at the time of its excitation.
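As an aside, computing $D_{cp}(t)$ from the raw data is straightforward; the following is a minimal sketch, assuming \texttt{counts\_plus} and \texttt{counts\_minus} are time-binned photon counts for the $+Z$ and $-Z$ polarization projections (the function name is purely illustrative).
\begin{verbatim}
import numpy as np

def degree_of_circular_polarization(counts_plus, counts_minus):
    """Time-resolved D_cp from time-binned counts; empty bins return 0."""
    counts_plus = np.asarray(counts_plus, dtype=float)
    counts_minus = np.asarray(counts_minus, dtype=float)
    total = counts_plus + counts_minus
    return np.divide(counts_plus - counts_minus, total,
                     out=np.zeros_like(total), where=total > 0)
\end{verbatim}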
As we implement the protocol, we perform full tomographic measurements
of the growing quantum state. First, we measure the initialized DE
state. Then we apply one cycle of the protocol and measure the resulting
spin+1photon state. Finally, we apply a second cycle of the protocol
and measure the spin+2photons state.
\begin{figure}
\caption{\label{fig: Correlations}}
\end{figure}
In Fig.~\ref{fig: Correlations} we present a small set of the measurements
used to deduce the spin, spin+1photon, spin+2photon quantum states
and the process map of the periodic cycle of the cluster protocol.
The first row shows $D_{cp}$ measurements characterizing the initialized
spin, the second row shows the $D_{cp}$ of the correlated spin+1photon
state after the photon was projected on a +Y polarization basis, and
the third row shows the $D_{cp}$ of the correlated spin+2photons
state after both photons were projected on +Y polarization basis.
Each row differs from the row above it by applying one additional
cycle of the protocol. In all measurements shown in the Fig.~\ref{fig: Correlations},
the DE was initialized to -X state. In each row, the left-panel displays
time resolved PL, the center-panel (right-panel) displays time resolved
$D_{cp}$ measured after +X (+Y) polarized excitation of the final
spin. By fitting the $D_{cp}$ correlation measurements, we extract
the following spin, spin+1photon, and spin+2photon polarization density
matrix elements, respectively:
\[
[S_{X},S_{Y},S_{Z}]=[-0.73,0.05,0.06]
\]
\[
[P_{Y}^{(1)}S_{X},P_{Y}^{(1)}S_{Y},P_{Y}^{(1)}S_{Z}]=[0.01,0.16,-0.59]
\]
\[
[P_{Y}^{(1)}P_{Y}^{(2)}S_{X},P_{Y}^{(1)}P_{Y}^{(2)}S_{Y},P_{Y}^{(1)}P_{Y}^{(2)}S_{Z}]=[0.03,-0.49,-0.08],
\]
where $P_{j}^{(i)}$ represents the polarization projection of the
i'th-photon in the string, on the j polarization base, and $S_{j}$
is the DE-spin polarization, projected on the j base. The typical
measurement uncertainties are about 0.01, 0.02, and 0.04 for the spin,
spin+1photon, and spin+2photon polarization density matrix elements,
respectively.
For a perfect initialization and application of the process we expect
these polarization elements to be {[}-1,0,0{]}, {[}0,0,-1{]}, {[}0,-1,0{]}
respectively. Here, the DE spin is initialized to the $-X$ state
with polarization degree of 0.73, due to the limited efficiency of
the depleting pulse \citep{Schmidgall2015}. After each cycle of the
protocol, the measured $D_{cp}(t)$ is reduced by approximately 20\%,
indicating an exponential decay as expected \citep{Popp_2005,Schwartz2016}.
\begin{figure*}
\caption{\label{fig:Process}}
\end{figure*}
The measured correlations are used to infer directly the spin+1photon
2-qubit state, obtained by applying one cycle of the protocol, the
spin+2photon state obtained by applying two cycles of the protocol,
and a full process tomography of the periodic cycle of the cluster
protocol.
To measure the spin+1photon density matrix (displayed in Fig.~\ref{fig:Process}a),
we use a set of 12 DE-photon $D_{cp}(t)$ correlation measurements.
Two of those measurements are displayed in Fig.~\ref{fig: Correlations}e
and Fig.~\ref{fig: Correlations}h. The measured density matrix has
fidelity of $F=0.77\pm0.04$ with the maximally entangled Bell state.
To measure the spin+2photons 3-qubit density matrix (displayed in
Fig.~\ref{fig:Process}b) we use a set of 72 $D_{cp}(t)$ DE-2photon
correlation measurements. Two of those measurements are displayed
in Fig.~\ref{fig: Correlations}f and \ref{fig: Correlations}i.
The measured spin+2photons density matrix has fidelity of $0.68\pm0.07$
with the maximally entangled 3-qubit state.
We produce the cluster state by repeatedly applying the same process,
as shown in the protocol of Fig.~\ref{fig:Method}d. As a result,
one can fully characterize the cluster state for any number of qubits
if the single-cycle process-map is known \citep{Schwartz2016}. Ideally,
the process-map contains a CNOT and a Hadamard gate. It maps the 2$\times$2
spin qubit density matrix into a 4$\times$4 density matrix representing
the entangled spin-photon state. The process map $\Phi$ can be fully
described by a 4$\times$16 positive and trace-preserving map with
64 real matrix elements.
We use the convention $\Phi(\hat{\rho}_{DE})=\underset{\alpha,\beta,\gamma}{\sum}\Phi_{\alpha\beta}^{\gamma}\rho_{\gamma}^{DE}\,\hat{\sigma}_{\alpha}\otimes\hat{\sigma}_{\beta}$,
where $\hat{\rho}_{DE}=\underset{\gamma}{\sum}\rho_{\gamma}^{DE}\hat{\sigma}_{\gamma}$
is the density matrix that describes the input DE state and $\Phi(\hat{\rho}_{DE})$
describes the DE+1photon state after one application of the process
to the input DE state. The sums are taken over $\alpha,\beta,\gamma=0,X,Y,Z$,
where $\hat{\sigma}_{0}$ is the identity matrix and $\hat{\sigma}_{X}$, $\hat{\sigma}_{Y}$, $\hat{\sigma}_{Z}$
are the corresponding Pauli matrices. The 64 real parameters $\Phi_{\alpha\beta}^{\gamma}$
thus fully specify $\Phi$.
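For concreteness, the following is a minimal sketch of how this convention acts on an input spin state, assuming $\Phi$ is stored as a real array \texttt{Phi[alpha, beta, gamma]}; the storage order and the ordering of the two tensor factors in the output are our assumptions for the illustration.
\begin{verbatim}
import numpy as np

# sigma[0] is the identity; sigma[1..3] are the Pauli X, Y, Z matrices.
sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def apply_process(Phi, rho_de):
    """Map a 2x2 spin density matrix to the 4x4 spin+photon density matrix."""
    # Pauli components of the input: rho_de = sum_gamma r[gamma] * sigma[gamma]
    r = np.array([np.trace(rho_de @ sigma[g]).real / 2 for g in range(4)])
    out = np.zeros((4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            for g in range(4):
                out += Phi[a, b, g] * r[g] * np.kron(sigma[a], sigma[b])
    return out
\end{verbatim}
Applying the same single-cycle map repeatedly to the spin factor of the growing state is how the measured map determines the full cluster state.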
Fig.~\ref{fig:Process}c shows the results of the full-tomographic
measurements of the process map. For acquiring these measurements,
we initialize the DE-spin-state to six different states from three
orthogonal bases \citep{Schwartz2015a}. For each of those 6 states
we use 2 $D_{cp}(t)$ single-photon measurements like the measurements
displayed in Fig.~\ref{fig: Correlations}d and Fig.~\ref{fig: Correlations}g
for tomography. Then we apply one cycle of the protocol for each initialization
and measure the resulting spin+1photon states by projecting the first
photon on different orthogonal polarization bases \citep{James2001}
and correlating it with the $D_{cp}(t)$ of the second photon. For
characterizing each of those 6 spin+1photon states, we use 12 $D_{cp}(t)$
2-photon correlations measurements like the ones presented in Fig.~\ref{fig: Correlations}e
and Fig.~\ref{fig: Correlations}h.
To obtain the physical process map that best fits our measured results,
we use a specifically developed edge-sensitive twisted-gradient-descent
minimization method (see Appendix). It is well known that the space
of physical completely-positive (CP) maps can be identified with a
cone-like space where any unitary process sits on an extremal ray
of the cone. Finding the best CP fit of $\Phi$ therefore requires
minimizing a known function $F$ (representing minus log likelihood)
over this cone. Gradient descent tries to find the minimum of a function
$F(x)$ by going roughly along gradient lines of $F$. Our newly developed
approach uses an edge-sensitive twisted-gradient-descent in order
to prevent our gradient descent solution from getting stuck in the
boundary of the physically allowed cone-like region.
Fig.~\ref{fig:Process}c-d presents the physical CP-process-map obtained
using this method. We compare the acquired physical process with the
ideal unitary process of the cluster protocol. The fidelity \citep{Jozsa1994,Schwartz2016}
between the two processes is 0.83. The obtained fidelity is higher
than in the previous demonstration \citep{Schwartz2016}. The higher
fidelity is attributed to the 3-fold shorter time between the excitations,
which reduces the influence of the DE decoherence during its precession.
The relatively high fidelity to the ideal protocol indicates that
our device can deterministically generate photonic cluster states
of high quality, thereby providing a better resource for quantum information
processing.\\
\section{Discussion}
\begin{figure}
\caption{\label{fig:EL} Localizable entanglement (negativity) in the generated cluster state as a function of the distance between two qubits.}
\end{figure}
We characterize the robustness of the entanglement in the 1D cluster-state
using the notion of localizable entanglement (LE) \citep{Verstraete2004}.
The LE is the negativity \citep{Peres1996} between two qubits in
the cluster after all the other qubits are projected onto a suitable
polarization-basis. The LE decays exponentially with the distance
between the qubits \citep{Popp_2005,Schwartz2016}:
\begin{equation}
N(d)=N_{nn}\exp\left(-(d-1)/\zeta_{LE}\right),\label{eq:LE}
\end{equation}
where $N_{nn}$ is the negativity between nearest-neighbor qubits,
$d$ is the distance between the qubits, and $\zeta_{LE}$ is the characteristic
decay-length of the LE. In Fig.~\ref{fig:EL}, we plot using pink
circles the LE in the state of a spin+Nphotons, obtained from the
measured process map, as a function of the distance between two qubits
in the string. As expected, the LE in the 1D cluster state decays
exponentially with the distance between the two qubits \citep{Popp_2005}.
Fig.~\ref{fig:EL} shows that the entanglement in the cluster persists
up to six photons. This presents an improvement over Ref.~\citep{Schwartz2016},
resulting from the reduction in the DE spin decoherence between the
optical pulses.
The negativity between nearest-neighbor and next-nearest-neighbor
pairs of qubits can be directly obtained from our quantum state tomography.
The entanglement between the DE and the photon emitted after one cycle
of the protocol is obtained from the density matrix of the DE+1photon
in Fig.~\ref{fig:Process}a which has negativity of $N=0.27\pm0.03$.
This negativity is marked as a purple data point in Fig.~\ref{fig:EL}.
Similarly, the spin+2photons 3-qubit state resulting from the application
of two cycles of the protocol is represented by the 3-qubit density
matrix, displayed in Fig.~\ref{fig:Process}b. The negativity of
the density matrix of the two external qubits, after projecting the
central qubit on the X polarization basis is $N=0.18\pm0.05$, marked
by the yellow point in Fig.~\ref{fig:EL}. \\
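Before summarizing, we note as an aside that the decay model of Eq.~(\ref{eq:LE}), together with the two directly measured negativities quoted above, already gives a rough two-point estimate of the decay length. The following sketch is for orientation only and is not the fit shown in Fig.~\ref{fig:EL}:
\begin{verbatim}
import numpy as np

N_nn = 0.27    # negativity of the DE + 1 photon state (d = 1)
N_nnn = 0.18   # negativity of the two outer qubits of the 3-qubit state (d = 2)

# Two-point estimate of the decay length in N(d) = N_nn * exp(-(d - 1) / zeta)
zeta_LE = 1.0 / np.log(N_nn / N_nnn)

def negativity(d):
    """Localizable-entanglement decay model of Eq. (LE)."""
    return N_nn * np.exp(-(d - 1) / zeta_LE)

for d in range(1, 7):
    print(d, round(negativity(d), 3))
\end{verbatim}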
In summary, we demonstrate a gigahertz-rate deterministic generation
of entangled photons in a cluster state, which is 3-times faster than
previously demonstrated. We developed a novel method for spin--multi-photon
quantum state tomography and for characterizing the periodic
process map which generates the photonic cluster state. Using this
method we show that the enhanced cluster generation rate also improves
the robustness of the entanglement in the generated multi-photon
state. The measured process map has a fidelity of 0.83 to the ideal
one, and the entanglement in the cluster state persists up to 6 consecutive
qubits. Our studies combined with further feasible optimizations of
the device may lead to implementations of quantum communication and
efficient distribution of quantum entanglement between remote nodes.
\section*{Acknowledgments}
The support of the Israeli Science Foundation (ISF), and that of the
European Research Council (ERC) under the European Union\textquoteright s
Horizon 2020 research and innovation programme (Grant Agreement No.
695188) are gratefully acknowledged.
\input{ms.bbl}
\appendix
\maketitle
\section{The process likelihood function}
\label{df0}
The basic (single cycle) process taking the QD-spin into a spin plus
emitted photon is described by a process map $\Phi$ taking a one-qubit
state into a two-qubit state. Expanding these quantum states in terms
of Pauli matrices allows writing the map as
\[
\rho=\sum r_{\mu}\sigma^{\mu}\mapsto\Phi(\rho)=\sum r_{\mu}\phi_{\nu\lambda}^{\mu}\sigma^{\nu}\otimes\sigma^{\lambda}.
\]
Here $\phi_{\nu\lambda}^{\mu}$ with $\mu\nu\lambda\in\{0,x,y,z\}$
are 64 real coefficients which define the process map $\Phi$.
The trace-preserving condition $Tr\,\Phi(\rho)=Tr\,\rho\;\forall\rho$ fixes
4 out of the 64 coefficients $\phi_{\nu\lambda}^{\mu}$; namely, it requires
that $\phi_{00}^{\mu}=\frac{1}{2}\delta_{\mu0}$. We wish to estimate
the other 60 parameters by finding the best fit to the experimental
data.
In an experiment where the initial spin was $\vec{s}$ and the emitted
photon was projected on a state of spin \footnote{We identify the H, B, R polarizations with $\hat{x},\hat{y},\hat{z}$, respectively.}
$\vec{p}$, we would ideally expect the measured rate $R$ of photon
emission and final spin $\vec{S}$ to satisfy $\phi_{\nu\lambda\mu}s_{\mu}p_{\lambda}=R\;S_{\nu}$.
Here $s,p,S$ are four-vectors whose zeroth component equals 1 and
their spatial components are $\vec{s},\vec{p},\vec{S}$. Repeating
such measurement for six different spin polarizations $\vec{s}$ and
six different photon polarizations $\vec{p}$ gives $6\times6\times4=144$
equations for the $60$ unknown components of $\Phi$.
A straightforward approach to determine $\Phi$ is then to use a least-squares
method minimizing the expression $F_{0}=\sum_{p,s,\nu}\frac{1}{\Delta_{ps\nu}^{2}}\left[\phi_{\nu\lambda\mu}s_{\mu}p_{\lambda}-R\;S_{\nu}\right]^{2}.$
(Essentially representing minus log likelihood.) Here $\Delta_{ps\nu}$
are the error estimates of the corresponding measurement.
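For concreteness, a schematic (purely illustrative) implementation of this weighted sum of squares could look as follows; the array names and shapes are our own bookkeeping choices and are not taken from the actual analysis code:
\begin{verbatim}
import numpy as np

def F0(phi, s_vecs, p_vecs, R, S, Delta):
    """Weighted least-squares function F_0 (summation over lambda, mu is implicit).

    phi    : (4, 4, 4) array phi[nu, lam, mu]
    s_vecs : (6, 4) initial-spin four-vectors (zeroth component equal to 1)
    p_vecs : (6, 4) photon-projection four-vectors (zeroth component equal to 1)
    R      : (6, 6) measured rates, indexed by (p, s)
    S      : (6, 6, 4) measured final-spin four-vectors
    Delta  : (6, 6, 4) error estimates of the corresponding measurements
    """
    total = 0.0
    for ip, p in enumerate(p_vecs):
        for i_s, s in enumerate(s_vecs):
            pred = np.einsum('nlm,m,l->n', phi, s, p)   # phi_{nu lam mu} s_mu p_lam
            resid = pred - R[ip, i_s] * S[ip, i_s]
            total += np.sum((resid / Delta[ip, i_s]) ** 2)
    return total
\end{verbatim}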
A slightly more sophisticated approach would recall that the appearance
of each of the six initial polarizations $\vec{s}$ in 24 different
sums can potentially make the errors correlated. One way to take
this into consideration is to use a more complicated square sum constructed
using a non-diagonal covariance matrix. A completely equivalent method
consists of defining new variables $\vec{s'}$ representing the 'true'
initial polarization of the DE and using the new sum of square error
function $\tilde{F}_{0}=\tilde{F}_{0}(\phi,s')$ defined by \footnote{The error estimate $\Delta_{ps\nu}$ appearing in $\tilde{F}_{0}$
is constructed in the standard way from the error estimates $\Delta R$
and $\Delta S$. The error $\Delta s$ is now taken care of by the
second term (and $\Delta p$ is negligible anyway).}
\[
\tilde{F}_{0}(\phi,s')=\sum_{p,s,\nu}\frac{1}{\Delta_{ps\nu}^{2}}\left[\phi_{\nu\lambda\mu}s'_{\mu}p_{\lambda}-R\;S_{\nu}\right]^{2}+\sum_{s}\sum_{i=1}^{3}\left(\frac{s'_{i}-s_{i}}{\Delta s_{i}}\right)^{2}
\]
Minimizing this expression with respect to $s'$ yields a standard
square sum $F_{0}(\phi)$ corresponding to the correct non-diagonal
covariance matrix. We looked for a minimum of $\tilde{F}_{0}(\phi,s')$
with respect to both the process map $\phi$ and the unknown initialization
polarizations $s'$.
\section{Completely positive condition and twisted gradient descent}
It is well known that to be physically acceptable, a process map $\Phi$
must be completely positive (CP). A process map is CP iff the associated
Choi matrix, which may be defined (up to an unimportant normalization
factor) by
\begin{equation}
C_{\Phi}=\sum\phi_{\nu\lambda}^{\mu}\;\overline{\sigma_{\mu}}\otimes\sigma^{\nu}\otimes\sigma^{\lambda}\label{choi}
\end{equation}
is positive semi-definite, $C_{\Phi}\geq0$. The bar over $\sigma_{\mu}$
denotes complex conjugation and we make no distinction here between
lower and upper indices. In general the Choi matrix $C_{\Phi}$ is
a (complex) hermitian $8\times8$ matrix which may be used as an alternative
description of $\Phi$.
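As a minimal numerical sketch of Eq.~(\ref{choi}) (again with an index ordering chosen only for illustration), the Choi matrix and the resulting CP test read:
\begin{verbatim}
import numpy as np

PAULI = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def choi_matrix(phi):
    """C = sum_{mu,nu,lam} phi[mu, nu, lam] * conj(sigma_mu) (x) sigma_nu (x) sigma_lam."""
    C = np.zeros((8, 8), dtype=complex)
    for mu in range(4):
        for nu in range(4):
            for lam in range(4):
                C += phi[mu, nu, lam] * np.kron(PAULI[mu].conj(),
                                                np.kron(PAULI[nu], PAULI[lam]))
    return C

def is_completely_positive(phi, tol=1e-10):
    """The process map is CP iff its Choi matrix is positive semi-definite."""
    eigenvalues = np.linalg.eigvalsh(choi_matrix(phi))
    return bool(np.all(eigenvalues > -tol))
\end{verbatim}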
Naive, unconstrained minimization of $F_{0}$ leads to a process map
which is not CP and hence not physically acceptable. To understand
why this happens, recall that our process is very close to an idealized
process which is unitary. Any unitary process is an extreme point
of the cone of CP-maps and has a rank-1 Choi matrix. In other words,
for a unitary process 7 out of the 8 eigenvalues of the Choi matrix
vanish. Our $\Phi$ being close to unitary has therefore 7 very small
Choi matrix eigenvalues. It is thus not surprising that very small
experimental errors can lead us to estimate some of these eigenvalues
as negative, in contradiction with the CP condition. (Had our $\Phi$
been equal to the ideal unitary map, a small random error in each
of the 7 vanishing eigenvalues would lead to a non-CP map with probability $1-2^{-7}=\frac{127}{128}\approx0.992$.)
In other words, the encountered difficulty is actually a good sign
indicating that our process has quite low decoherence.
To find the best CP fit of $\Phi$ therefore requires minimizing a
known function $F$ (representing minus log likelihood) over the subset
of CP-maps, which as explained above may be identified (through the
Choi representation) with the cone of positive matrices. This is closely
related to the extensively studied field of convex optimization. We
have not found however in the convex optimization literature a method
which looks to exactly fit our problem. We have therefore devised
a method of our own (explained below) which is a variant of the well
known gradient descent.
Gradient descent tries to find the minimum of a function $F(x)$ by
going roughly along gradient lines of $F$. It corresponds to (numerically
discretized) solution to the equation $\frac{d}{dt}x^{i}=-g^{ij}\partial_{j}F(x)$.
The (inverse) metric tensor $g^{ij}$ is often taken to be the standard
euclidean metric $\delta_{ij}$. Such choice is not mandatory and
in fact one may choose any (positive) metric. We suggest to use a
smarter choice of the metric in order to prevent our gradient descent
solution from getting stuck in the boundary of the physically allowed
region.
It is easiest to understand our approach by considering minimization
over a simple region like $\{(x,y)\in\mathbb{R}^{2}\mid x,y\geq0\}$.
In this case our approach to minimizing $F(x,y)$ would correspond
to using iteration steps with $(\Delta x,\Delta y)\propto(x\partial_{x}F,y\partial_{y}F)$.
Assuming that both derivatives of $F$ are $O(1)$, one sees that
if our approximate estimate $(x,y)$ of the minimizer is very close
to one of the boundaries e.g. if it has $x\ll1$ and $y=O(1)$ then
the next step $(\Delta x,\Delta y)$ would adapt to this fact by being
almost parallel to the $y$-axis. One can then go a long $\sim O(1)$
distance along this direction without crossing the boundary. This
is in contrast to hitting the boundary after a distance $O(x)\ll1$
which would result from using the standard metric $g_{ij}=\delta_{ij}$.
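A toy version of this idea on the quadrant, with a fixed step size in place of a line search and written out only for illustration, is:
\begin{verbatim}
def twisted_descent_2d(grad_F, x, y, step=0.1, n_iter=200):
    """Minimize F over {x, y >= 0} with steps proportional to (x dF/dx, y dF/dy),
    so that the step bends parallel to a boundary as the iterate approaches it."""
    for _ in range(n_iter):
        gx, gy = grad_F(x, y)
        x = max(x - step * x * gx, 0.0)
        y = max(y - step * y * gy, 0.0)
    return x, y

# Example: F(x, y) = (x - 2)**2 + (y + 1)**2; the constrained minimum is (2, 0).
print(twisted_descent_2d(lambda x, y: (2 * (x - 2), 2 * (y + 1)), 1.0, 1.0))
\end{verbatim}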
Since the CP condition is easier to formulate in terms of Choi matrices,
let us consider the (square sum) function we want to minimize as a
function $F(A)$ over the (real vector space) of hermitian matrices
\footnote{It is convenient to extend $F$ to arbitrary matrices by defining
$F(A)=F(\frac{1}{2}(A+A^{\dag}))$.}. The standard euclidean gradient of $F$ may be identified with the
matrix $\nabla F$ whose elements are $(\nabla F)_{ij}=\frac{\partial F}{\partial a_{ji}}$.
(This relation may be a bit confusing since the elements $a_{ij}$
of $A$ are complex.) An equivalent and possibly more rigorous definition
starts by expressing $A$ as $A=\sum a_{\alpha}\Xi_{\alpha}$ where
$a_{\alpha}\in\mathbb{R}$ and $\{\Xi_{\alpha}\}$ are some basis
for the space of Hermitian matrices which is orthonormal in the sense
$Tr(\Xi_{\alpha}\Xi_{\beta})=N\delta_{\alpha\beta}$ with some normalization
$N$. Note that our $A$, being a process Choi matrix, is already given
to us in such a form by Eq.(\ref{choi}). One can then write $\nabla F=\sum\Xi_{\alpha}\partial_{\alpha}F$.
The gradient we use corresponds to a non-flat Riemannian metric and
may be expressed as $\tilde{\nabla}F=\sqrt{A}(\nabla F)\sqrt{A}$
(where $\nabla F$ is as above). We therefore look for a minimum of
$F$ over the set of positive (semi-definite) matrices by using a
gradient descent step of the form
\[
A\mapsto A+\Delta A,\;\;\;\;\;\Delta A=-q\sqrt{A}(\nabla F)\sqrt{A}
\]
Here $q>0$ is a scalar chosen so that $F(A+\Delta A)$ is minimal
under the constraint $A+\Delta A\geq0$. Note that if $\nabla F=O(1)$
then the positivity constraint allows $q$ to remain $O(1)$ even
if $A$ is very close to the boundary of the cone of positive matrices.
All our process map estimations were obtained using this minimization
scheme which we implemented using Mathematica$^{TM}$.
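A schematic single iteration of this scheme, with a fixed scalar $q$ in place of the one-dimensional minimization over $q$ described above and including the small regularizing push discussed in the next paragraph, could be sketched as follows (this is only an outline in Python, not the Mathematica code actually used):
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def twisted_step(A, grad_F, q=0.1, eps=1e-8):
    """One iteration: A -> A - q * sqrt(A) @ grad_F(A) @ sqrt(A), followed by
    the regularizing update A -> (1 - eps) * A + (eps / 2) * I."""
    root = sqrtm(A)
    A_new = A - q * (root @ grad_F(A) @ root)
    A_new = 0.5 * (A_new + A_new.conj().T)   # remove the numerical anti-Hermitian part
    return (1 - eps) * A_new + 0.5 * eps * np.eye(A.shape[0])
\end{verbatim}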
Although the basic method described above works reasonably well, we
found that some extra improvement \footnote{Euclidean gradient descent fails even if one includes similar improvement
in it.} is gained if after each step we push $A$ slightly away from the
boundary of the allowed region by updating it as $A\rightarrow(1-\varepsilon)A+\frac{1}{2}\varepsilon I$
with $\varepsilon\ll1$. (We suspect that the need for this step might
be related to the finite precision of the numerical calculations.)
We increase or decrease $\varepsilon$ dynamically during the computation,
depending on the performance of the previous iteration step.
In practice $\varepsilon$ ranged between $10^{-4}$ and $10^{-10}$
and scaled roughly as 2-3 times the minimal eigenvalue of $A$.
The modified gradient descent method described here is quite general
and can be applied to any $F(A)$. In practice, our $F(A)$ was of
the form $F(A)=F_{0}(A)+\sum_{\mu=0}^{3}\lambda_{\mu}Tr(A(\sigma_{\mu}\otimes I\otimes I))$
where $F_{0}(A)$ is the square sum described in Appendix \ref{df0},
and the second term consists of 4 Lagrange multipliers required to
enforce the normalization condition \footnote{The need for Lagrange multipliers is a cost we pay for using a non-flat
metric. In a flat metric, the constraints may be solved trivially.} $\phi_{00}^{\mu}=\frac{1}{2}\delta_{\mu0}$. The values of the multipliers
$\lambda_{\mu}$ at each iteration step are easily determined numerically
by requiring that $\Delta A=-q\sqrt{A}(\nabla F)\sqrt{A}$ does not
break the normalization condition. This amounts to demanding that the partial
trace $Tr_{2,3}(\sqrt{A}(\nabla F)\sqrt{A})$ vanish, which is
just a linear set of equations for $\lambda_{\mu}$.
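For completeness, the partial trace entering this constraint can be computed, for an $8\times8$ matrix regarded as an operator on three qubits, by a short sketch such as:
\begin{verbatim}
import numpy as np

def partial_trace_23(M):
    """Tr_{2,3} of an 8x8 matrix viewed as an operator on C^2 (x) C^2 (x) C^2."""
    T = np.asarray(M).reshape(2, 2, 2, 2, 2, 2)   # indices (i1, i2, i3, j1, j2, j3)
    return np.einsum('iabjab->ij', T)
\end{verbatim}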
\end{document} |
\begin{document}
\title{Stability of properties of locales under groups}
\author{Christopher Townsend}
\maketitle
\begin{abstract}
Given a particular collection of categorical axioms, aimed at capturing properties of the category of locales, we show that if $\mathcal{C}$ is a category that satisfies the axioms then so too is the category $[ G, \mathcal{C}]$ of $G$-objects, for any internal group $G$. To achieve this we prove a general categorical result: if an object $S$ is double exponentiable in a category with finite products then so is its associated trivial $G$-object $(S, \pi_2: G \times S \rTo S)$. The result holds even if $S$ is not exponentiable.
An example is given of a category $\mathcal{C}$ that satisfies the axioms, but for which there is no elementary topos $\mathcal{E}$ such that $\mathcal{C}$ is the category of locales over $\mathcal{E}$.
It is shown, in outline, how the results can be extended from groups to groupoids.
\end{abstract}
\section{Introduction}
Given a category $\mathcal{C}$ with finite products and an internal group $G$, a categorical axiom is said to be \emph{$G$-stable} provided that, if it is true of $\mathcal{C}$, then it is also true of $[ G , \mathcal{C}]$, the category of $G$-objects. An example is the property of having equalizers. A non-example is the property `every epimorphism splits', which holds in the category $\mathbf{Set}$ if the axiom of choice is true, but for any group $G$, $G \rTo 1$ is an epimorphic $G$-homomorphism which is split if and only if $G$ is trivial.
A set of categorical axioms, investigated in \cite{towhofman}, \cite{closedsubgroup}, \cite{towaxioms} and \cite{towslice}, captures various properties of the category of locales. Certain aspects of locale theory can be developed axiomatically: proper and open maps are pullback stable, the Hofmann-Mislove result can be shown, the closed subgroup theorem holds, Plewe's result that triquotient surjections are of effective descent can be proved, the patch construction can be developed, etc. The purpose of this paper is to explore the question of whether the axioms are $G$-stable for an internal group $G$. The answer is that, with a minor modification that does not weaken the theory, the axioms are $G$-stable. The minor modification is that the existence of coequalizers is no longer an axiom. Intuitively a modification of this sort is needed as constructing coequalizers in $[G,\mathcal{C}]$ appears to require coequalizers that are stable under products, and this stability is an additional property not true of the category of locales.
Once we have established that the axioms are $G$-stable we then establish a new result which is that not every category that satisfies the axioms is a category of locales for some topos. Any open or compact localic group that is not \'{e}tale complete (in the sense of Moerdijk, e.g. Section 7 of \cite{MoerClassTop}) provides an example.
Our next step is to verify that even without any coequalizers in $\mathcal{C}$, key results about coequalizers still hold. Specifically we show that triquotient surjections are coequalizers and that for every open or compact group $G$, there is a connected components adjunction $[ G , \mathcal{C}] \pile{\rTo \\ \lTo} \mathcal{C}$.
Finally we include some comments on how it is easy to extend the results from internal groups to groupoids, given that the axioms are slice stable (\cite{towslice}).
\section{Preliminary categorical definitions and main categorical result}\label{prelim}
Let $\mathcal{C}$ be a category with finite products and $S$ an object of $\mathcal{C}$. We use the notation $S^X$ for the presheaf
\begin{eqnarray*}
\mathcal{C}^{op} &\rTo &\mathbf{Set} \\
Y &\rMapsto &\mathcal{C}(Y\times X,S)
\end{eqnarray*}
It can be verified, using Yoneda's lemma, that $S^{X}$ is the exponential ${\bf y}S^{ {\bf y}X}$ in the presheaf category $[\mathcal{C}^{op},\mathbf{Set}]$, so the notation is reasonable (where $\bf{y}$ is the Yoneda embedding). We use $\mathcal{C}_S^{op}$ as notation for the full subcategory of $[\mathcal{C}^{op},\mathbf{Set}]$ consisting of objects of the form $S^X$; there is a contravariant functor $\mathcal{C} \rTo^{S^{(\_)}} \mathcal{C}_S^{op}$
Our first lemma is rather simple.
\begin{lemma}\label{hannah}
Let $\mathcal{C}$ be a category with finite products and $\mathbb{T}=(T,\eta,\mu)$ a monad on $\mathcal{C}$. Then for any two $\mathbb{T}$-algebras $(X,a: TX \rTo X)$ and $(S,s: TS \rTo S)$ the diagram
\begin{eqnarray*}
(S,s)^{(X,a)}\rTo^{(S,s)^a} (S,s)^{(TX,\mu_X)} \pile{\rTo^{(S,s)^{\mu_X}} \\ \rTo_{(S,s)^{Ta}}} (S,s)^{(TTX,\mu_{TX})}
\end{eqnarray*}
is an equalizer in $(\mathcal{C}^{\mathbb{T}})^{op}_{(S,s)}$.
\end{lemma}
\begin{proof}
If $\epsilon: (S,s)^{(Y,b)} \rTo (S,s)^{(TX,\mu_X)}$ is a natural transformation such that $(S,s)^{\mu_X}\epsilon=(S,s)^{Ta}\epsilon$ then define $\bar{\epsilon}:(S,s)^{(Y,b)}\rTo (S,s)^{(X,a)}$ by setting $\bar{\epsilon}_{(Z,c)}(u)$ to
\begin{eqnarray*}
Z \times X \rTo^{Id_Z \times \eta_X} Z \times TX \rTo^{\epsilon_{(Z,c)}(u)}S\text{.}
\end{eqnarray*}
This is well defined (i.e. defines a $\mathbb{T}$-algebra homomorphism from $(Z,c) \times (X,a)$ to $(S,s)$) because
\begin{eqnarray*}
(Z,c) \times (TTX,\mu_{TX}) \pile{ \rTo^{Id_Z \times \mu_X} \\ \rTo_{Id_Z \times Ta}} (Z,c) \times (TX, \mu_X) \rTo^{Id_Z \times a} (Z,c) \times (X,a)
\end{eqnarray*}
is a coequalizer in $\mathcal{C}^{\mathbb{T}}$ (it is $U$-split, by $Id_Z \times \eta_X$ and $Id_Z \times \eta_{TX}$, where $U: \mathcal{C}^{\mathbb{T}} \rTo \mathcal{C}$ is the forgetful functor and $U$, being monadic, creates coequalizers for $U$-split forks).
\end{proof}
Recall that an adjunction $L\dashv R:\mathcal{D}\pile{\rTo \\ \lTo} \mathcal{C}$ between categories, both with finite products, satisfies \emph{Frobenius reciprocity} provided the map
$L(R(X)\times W)\rTo^{(L\pi_{1},L\pi_{2})}LRX\times LW\rTo^{\varepsilon _{X}\times Id_{LW}}X\times LW$ is an isomorphism for all objects $W$ and $X$ of $\mathcal{D}$
and $\mathcal{C}$ respectively. For example any morphism $f:X\rTo Y$ of a cartesian category $\mathcal{C}$ gives rise to a pullback adjunction $\Sigma_f \dashv f^*: \mathcal{C}/X \rTo \mathcal{C}/Y$ that satisfies Frobenius reciprocity. For another example if $G=(G,m:G \times G \rTo G, e : 1 \rTo G, i:G \rTo G)$ is a group object in a category $\mathcal{C}$ with finite products, then the adjunction $G \times (\_) \dashv U : \mathcal{C} \rTo $$ [ G , \mathcal{C}]$ satisfies Frobenius reciprocity. Here $G \times (\_)$ sends an object $X$ of $\mathcal{C}$ to the $G$-object $(G\times X, m \times Id_X)$ and $U$ is the forgetful functor (forget the $G$ action). The counit of this adjunction, at a $G$-object $(X,a)$, is given by the $G$-homomorphism $a : G \times X \rTo X$ so establishing Frobenius reciprocity for the adjunction amounts to finding, for any object $Y$ of $\mathcal{C}$ and any $G$-object $(X,a)$, an inverse for $(G \times X \times Y, m \times Id_X \times Id_Y) \rTo^{(a\pi_{12},\pi_{13})} (X,a) \times (G \times Y,m \times Id_Y)$. The inverse is given by $ X \times G \times Y \rTo^{(\pi_2,a(i\pi_2,\pi_1),\pi_3)} G \times X \times Y$. Another way to establish Frobenius reciprocity is to recall that for any $G$-object $(X,a:G\times X \rTo X)$ there is a $G$-isomorphism $(G,m) \times (X, \pi_2) \cong (G,m) \times (X,a)$.
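As a pointwise sanity check (written with generalized elements, and recorded only as an aid to the reader), the two composites of the map and its stated inverse are indeed identities:
\begin{eqnarray*}
(g,x,y) &\mapsto & (a(g,x),g,y)=(gx,g,y) \mapsto (g,g^{-1}(gx),y)=(g,x,y)\text{,}\\
(x,g,y) &\mapsto & (g,g^{-1}x,y) \mapsto (a(g,g^{-1}x),g,y)=(x,g,y)\text{.}
\end{eqnarray*}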
The adjunctions $G \times (\_) \dashv U$ are key to the considerations of this paper so we recall a couple of basic facts: (1) $U$ is monadic and (2) if $G_1$ and $G_2$ are two internal groups then to prove that $G_1$ is isomorphic to $G_2$ it is sufficient to exhibit an equivalence of categories $\psi: [ G_1 , \mathcal{C}] \rTo^{\simeq} $$[ G_2 , \mathcal{C}]$ which commutes with the adjunction; that is, there exists a natural isomorphism $\beta: \psi G_1 \times (\_) \rTo^{\cong} G_2 \times (\_)$. To see (2) notice that $G_i = [U \circ G_i \times (\_)](1)$ for $i=1,2$ and $\psi$ (together with the natural isomorphism $\beta$) commute with the two monad structures.
The next lemma is a generalisation of change of base:
\begin{lemma}\label{Frob}
Let $\mathcal{C}$ and $\mathcal{D}$ be two categories with finite products and $L: \mathcal{D} \rTo \mathcal{C}$ and $R: \mathcal{C} \rTo \mathcal{D}$ two functors such that $L \dashv R$ and the adjunction satisfies Frobenius reciprocity. Then for any object $S$ of $\mathcal{C}$, $L \dashv R$ extends contravariantly to an adjunction $\mathcal{D}_{RS}^{op}\pile{\rTo \\ \lTo} \mathcal{C}^{op}_S$\end{lemma}
This lemma is essentially originally shown in \cite{towgeom}. In the case that the adjunction is a pullback adjunction arising from a locale map and $S$ is the Sierpi\'{n}ski locale, the morphisms of $\mathcal{C}_{S}^{op}$ can be used to represent dcpo homomorphisms and the adjunction established by the lemma shows how to move dcpo homomorphisms between sheaf toposes; this is how the lemma can be viewed as a generalisation of change of base. Consult \cite{towaxioms} for more detail.
\begin{proof}
Precomposition with $L$ and $R$ defines for any adjunction $L \dashv R$ an adjunction between presheaf categories, $[\mathcal{D}^{op},\mathbf{Set}] \pile{ \rTo \\ \lTo } $$ [\mathcal{C}^{op},\mathbf{Set}]$. But the Frobenius condition implies for $W$ and $X$ of $\mathcal{D}$
and $\mathcal{C}$ respectively that $S^XL\cong RS^{RX}$ and $S^WR\cong {S}^{LW}$ and so the adjunction restricts to $\mathcal{D}_{RS}^{op}\pile{\rTo \\ \lTo} \mathcal{C}^{op}_S$ which can be seen to extend (via $S^{(\_)}$) the adjunction $L \dashv R$. The unit of the extension is given by $S^{\epsilon}$ and the counit by $RS^{\eta}$ where $\eta$ (respectively $\epsilon$) is the unit (counit) of $L \dashv R$.
\end{proof}
It is an exercise, based on the result just given, to verify that if $\delta : RS^{RX} \rTo RS^W$ then the adjoint transpose of $\delta$, written $\bar{\delta}: S^X \rTo S^{LW}$, is defined by setting, for any $u : Z \times X \rTo S$, $\bar{\delta}_Z(u)$ to be
\begin{eqnarray*}
Z \times LW \rTo^{[(\epsilon_Z \times Id_{LW})L(\pi_1,\pi_2)]^{-1}} L(RZ \times W) \rTo^{\widetilde{\delta_Z(Ru)}}S
\end{eqnarray*}
where $\tilde{(\_)}$ is the action of taking adjoint transpose under $L \dashv R$. Given this observation and our observation that the adjunction $G \times (\_) \dashv U : \mathcal{C} \pile{ \rTo \\ \lTo } $$[ G , \mathcal{C} ]$ satisfies Frobenius reciprocity the following corollary is almost immediate:
\begin{corollary}\label{nathan}
Let $\mathcal{C}$ be a category with finite products and $G$ an internal group. For any objects $Y$ and $S$ of $\mathcal{C}$ and $(X,a)$ a $G$-object,
\begin{eqnarray*}
Nat[S^{X},S^Y] \cong Nat[(S,\pi_2)^{(X,a)},(S,\pi_2)^{(G \times Y, m \times Id_Y)}]
\end{eqnarray*}
naturally in both arguments. The mate of $\delta:S^X \rTo S^Y$, evaluated at $u: (Z,c) \times (X,a) \rTo (S,\pi_2)$ is given by
\begin{eqnarray*}
Z \times G \times Y \rTo^{(c(i \pi_2 , \pi_1),\pi_3)} Z \times Y \rTo^{\delta_Z(u)} S
\end{eqnarray*}
i.e. $(z,g,y)$ is in $\bar{\delta}_{(Z,c)}(u)$ if and only if $(g^{-1}z,y)$ is in $\delta(u)$.
\end{corollary}
\begin{proof}
In addition to the comments in the preamble, observe that the adjoint transpose of $\delta_Z(u)$ (under $G \times (\_) \dashv U$) is given by $G \times Z \times Y \rTo^{\pi_{23}} Z \times Y \rTo^{\delta(u)} S$ because $S$ has the trivial action.
\end{proof}
Why are we so interested in these natural transformations? Essentially because they are by construction the points of the double exponential $S^{S^X}$, if that double exponential exists. The category of locales provides an example of a category where exponentials do not always exist (not all locales are locally compact) but for which double exponentiation (at the Sierpi\'{n}ski locale at least) does always exist, \cite{victow}. Therefore there is good reason to investigate double exponentiation categorically in the absence of an assumption of cartesian closedness and these natural transformations play a central role. Let us make this more precise, beginning with a definition.
\begin{definition}
An object $S$ in a category $\mathcal{C}$ with finite products is \emph{double exponentiable} provided for every other object $X$ the exponential $({\bf y}S)^{S^X}$ exists in $[\mathcal{C}^{op},\mathbf{Set}]$ and is representable.
\end{definition}
If an object is double exponentiable then a strong \emph{double exponential} monad can be defined on $\mathcal{C}$; its functor part sends an object $X$ to the object that represents $({\bf y}S)^{S^X}$ and the rest of the monad structure and the strength are determined by the universal property of the double exponential. The key universal property can be expressed by saying that if $P(X)$ is the functor part of the double exponential monad, evaluated at $X$, then for any other object $Y$, there is a bijection, natural in $X$ and $Y$, between morphisms $Y \rTo P(X)$ and natural transformations $S^X \rTo S^Y$. Notice that if $S$ is double exponentiable, the opposite of the Kleisli category of the double power monad, $\mathcal{C}^{op}_P$, can be identified with $\mathcal{C}^{op}_S$ (i.e. the full subcategory of $[\mathcal{C}^{op},\mathbf{Set}]$ consisting of objects of the form $S^X$). Composition of Kleisli arrows is just composition of natural transformations. We will treat the opposite of the Kleisli category as this full subcategory below without notating the equivalence.
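To spell out the universal property just described (this is simply an unwinding of the definitions, recorded for convenience): for objects $X$ and $Y$ of $\mathcal{C}$,
\begin{eqnarray*}
\mathcal{C}(Y,PX) &\cong & [\mathcal{C}^{op},\mathbf{Set}]({\bf y}Y,({\bf y}S)^{S^{X}})\\
&\cong & [\mathcal{C}^{op},\mathbf{Set}]({\bf y}Y\times S^{X},{\bf y}S)\\
&\cong & [\mathcal{C}^{op},\mathbf{Set}](S^{X},({\bf y}S)^{{\bf y}Y})\;=\;Nat[S^{X},S^{Y}]
\end{eqnarray*}
where the first bijection is representability together with the Yoneda lemma and the remaining two are the exponential adjunction in $[\mathcal{C}^{op},\mathbf{Set}]$.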
We can now prove a categorical proposition which is of general interest and is the main technical insight of this paper.
\begin{proposition}\label{mainplain}
Let $\mathcal{C}$ be a category with finite products, $G$ an internal group and $S$ a double exponentiable object. Then $(S,\pi_2)$ is a double exponentiable object in $[G,\mathcal{C}]$.
\end{proposition}
\begin{proof}
Let $(X,a)$ be a $G$-object. Our first observation is that $PX$ (i.e. the object representing ${\bf y}S^{S^X}$) can be made into a $G$-object by defining $a^{P}$ to be $G \times PX\rTo^{t_{G,X}} P(G \times X) \rTo^{P(a)}PX$, where $t$ is the strength on $P$. This follows by application of the definition of strength ($t_{1,X}\cong Id_{PX}$ and $t_{X \times Y, Z} = t_{X, Y\times Z} (Id_X \times t_{Y,Z})$). For any other $G$-object $(Y,b)$, $G$-homomorphisms from $Y$ to $PX$ correspond to natural transformations $\delta: S^X \rTo S^Y$ with the property that $\delta^G S^a = S^b\delta$. So to conclude the proof all we need to do is to show that such natural transformations are in bijection with natural transformations $(S,\pi_2)^{(X,a)}\rTo (S,\pi_2)^{(Y,b)}$. (The bijection must be natural in $(Y,b)$, but this aspect is straightforward and is not commented on further.)
Lemma \ref{hannah}, with $\mathbb{T}$ the monad induced by $ G\times (\_) \dashv U$, shows that natural transformations $\epsilon:(S,\pi_2)^{(X,a)}\rTo (S,\pi_2)^{(Y,b)}$ are in (natural) bijection with natural transformations $\epsilon':(S,\pi_2)^{(X,a)}\rTo (S,\pi_2)^{(G \times Y, m \times Id_Y)}$ such that $(S,\pi_2)^{m \times Id_Y} \epsilon' = (S,\pi_2)^{Id_G \times b} \epsilon' $ and the corollary shows
\begin{eqnarray*}
Nat[S^{X},S^Y] \cong Nat[(S,\pi_2)^{(X,a)},(S,\pi_2)^{(G \times Y, m \times Id_Y)}]\text{.}
\end{eqnarray*}
Since this last bijection is natural we can see that the mate of $S^b\delta$ is $(S,\pi_2)^{Id_G \times b} \bar{\delta}$ where $\bar{\delta}$ is the mate of $\delta:S^X \rTo S^Y$. So to complete the proof all that is required is a verification that $\overline{\delta^G S^a} = (S,\pi_2)^{m \times Id_Y} \bar{\delta}$.
Say we are given a $G$-homomorphism $u:(Z,c) \times (X,a) \rTo (S,\pi_2)$. By the corollary we have that $(z,g_1,g_2,y)$ belongs to $[(S,\pi_2)^{m \times Id_Y} \bar{\delta}]_{(Z,c)}(u)$ if and only if $(g_2^{-1}g_1^{-1}z,y)$ belongs to $\delta_Z(u)$. Now since $u$ is a $G$-homomorphism $u(z,gx)=u(g^{-1}z,x) $ and so by applying naturality of $\delta$ at $Z \times G \rTo^{c(i \pi_2, \pi_1)} Z$ we have $\delta_{Z \times G}( u ( Id_Z \times a))(z,g,y)=\delta_Z(u)(g^{-1}z,y)$. But then $\overline{\delta^G S^a}(u)$ is given by
\begin{diagram}
Z \times G \times G \times Y & \rTo & Z \times G \times Y & \rTo & Z \times Y & \rTo^{\delta_Z(u)} & S\\
(z,g_1,g_2,y) & \mapsto & (g_1^{-1}z,g_2,y) & \mapsto & (g_2^{-1}g_1^{-1}z,y) & & \\
\end{diagram}
\end{proof}
One of our categorical axioms, to follow, is that the category in question must be order enriched. Finite limits are assumed to be order enriched finite limits; that is, their universal property is an order isomorphism, not just a bijection. The above analysis works equally well with order isomorphisms in place of bijections; therefore,
\begin{proposition}\label{main}
Let $\mathcal{C}$ be an order enriched category with finite products, $G$ an internal group and $S$ a double exponentiable object. Then $(S,\pi_2)$ is a double exponentiable object in $[G,\mathcal{C}]$.
\end{proposition}
We need to discuss \emph{order internal} lattices in the context of an order enriched category; i.e. lattices such that the meet and join operations are adjoints to the diagonal (so being a lattice, join semilattice, meet semilattice, distributive lattice etc is a property of the object, not additional structure on the object). The following lemma will be needed:
\begin{lemma}\label{meet}
If $\mathcal{C}$ is an order enriched category with finite products, then for any order internal meet semilattice $A$, if $A_0 \pile{ \rInto^i \\ \lOnto_q}A$ is a splitting of an inflationary idempotent $\psi: A \rTo A$ (i.e. $Id_A \sqsubseteq \psi=iq$ and $Id_{A_0}=qi$), then $A_0$ is an order internal meet semilattice and $i$ is a meet semilattice homomorphism. Further, $q$ preserves the top element (i.e. $q1_A = 1_{A_0}$).
\end{lemma}
\begin{proof}
Define $1_{A_0} : 1 \rTo A_0$ to be $q1_A$ (so $q$ preserves top) and $\sqcap_{A_0} : A_0 \times A_0 \rTo A_0$ to be $q\sqcap_A (i \times i ) : A \times A \rTo A$. It can be verified that $!^{A_0} \dashv 1_{A_0}$ and $\Delta_{A_0} \dashv \sqcap_{A_0} $ and so $A_0$ is an order internal meet semilattice. To prove $i$ is a meet semilattice homomorphism we need to show (i) that $i$ preserves the top element and (ii) $iq\sqcap_{A}(i \times i) = \sqcap_A (i \times i)$. For (i) notice that $Id_A \sqsubseteq i 1_{A_0} !^A$ because $Id_A \sqsubseteq iq$ and so $i1_{A_0}=1_A$ by uniqueness of right adjoints. For (ii), as $Id_A \sqsubseteq iq$ it just needs to be checked that $iq\sqcap_{A}(i \times i) \sqsubseteq \sqcap_A (i \times i)$; equivalently, $\Delta_A iq \sqcap_A ( i \times i ) \sqsubseteq i \times i$ since $\Delta_A \dashv \sqcap_A$. But this last inequality is clear because $\Delta_A iq = (i \times i )(q \times q) \Delta_A$, $\Delta_A \sqcap_A \sqsubseteq Id_{A \times A}$ and $(i \times i )(q \times q) (i \times i)= i \times i $.
\end{proof}
If, further, $\mathcal{C}$ has finite coproducts and is distributive (i.e. the canonical map $X \times Y + X \times Z \rTo X \times (Y + Z) $ is an isomorphism for any three objects $X$, $Y$ and $Z$ and $ X\times 0 \cong 0$ for any $X$) then for any object $S$, $\mathcal{C}^{op}_S$ has products; $S^X \times S^Y$ is given by $S^{X+Y}$ and the final object is $S^0$.
If additionally, $S$ is an order internal lattice and is double exponentiable then provided $\mathcal{C}$ has equalizers (and so is cartesian) two submonads of $P$ can be defined; a lower one, whose points are those natural transformations $S^X \rTo S^Y$ that are join semilattice homomorphisms and an upper one, whose points are meet semilattice homomorphisms. By reversing the order enrichment you switch between the lower and upper submonads. By construction the opposite of the Kleisli categories of the lower and upper monads can be identified with subcategories $\mathcal{C}_S^{op}$; they have the same objects and have as morphisms those natural transformations that are join (respectively meet) semilattice homomorphisms. Notice that all objects of the opposites of the Kleisli categories are order internal lattices which are distributive if $S$ is. See \cite{towhofman} for more detail on the construction of the lower and upper submonads.
Our final categorical definition is that of an object which behaves like the Sierpi\'{n}ski space. Given a cartesian order enriched category, an object $\mathbb{S}$ is a \emph{Sierpi\'{n}ski object} if it is an order internal distributive lattice such that given a pullback
\begin{diagram}
a^{\ast }(i) & \rTo & 1 \\
\dInto & & \dInto_i \\
X & \rTo^{a} & \mathbb{S}
\end{diagram}
$a$ is uniquely determined by $a^{\ast }(i)\rTo X$ for $ i:1\rInto \mathbb{S}$ equal to either $0_{\mathbb{S}}$ or $1_{\mathbb{S}}$.
If a Sierpi\'{n}ski object is double exponentiable then we use $\mathbb{P}$ for the double exponential monad and call it a \emph{double power} monad; $P_L$ and $P_U$ are used for the lower and upper power monads, when these can be defined as submonads of $\mathbb{P}$.
\section{The axioms}\label{axioms}
{\bf Axiom 1.}
\emph{$\mathcal{C}$ is an order enriched category with order enriched finite limits and finite coproducts.}
{\bf Axiom 2.}
\emph{For any morphism $f:X \rTo Y$ of $\mathcal{C}$ the pullback functor $f^* : \mathcal{C}/Y \rTo \mathcal{C}/X$ preserves finite coproducts.}
The property of being order enriched and having finite limits is $G$-stable, for any internal group $G$, as finite limits are created in $\mathcal{C}$ and the order enrichment on $[G,\mathcal{C}]$ can be taken from $\mathcal{C}$. Given Axiom 2, $[G,\mathcal{C}]$ has coproducts since if $(X,a)$ and $(Y,b)$ are two $G$-objects then
\begin{eqnarray*}
G \times ( X+ Y) \rTo^{\cong} (G \times X ) + (G \times Y )\rTo^{a +b } X+Y
\end{eqnarray*}
makes $X+Y$ into a $G$-object that can be easily checked to be the coproduct. The nullary case is similar. If $f$ is a morphism of $G$-objects (i.e. a $G$-homomorphism) then pullback along $f$ preserves coproducts in $[G,\mathcal{C}]$ since $G$-object pullbacks and coproducts are created in $\mathcal{C}$. Therefore,
\begin{lemma}
Axioms 1 and 2 are jointly $G$-stable for any internal group $G$.
\end{lemma}
{\bf Axiom 3.}
\emph{$\mathcal{C}$ has a Sierpi\'{n}ski object, $\mathbb{S}$.}
It is immediate that this axiom is $G$-stable, for any order enriched cartesian category $\mathcal{C}$, because pullbacks are created in $\mathcal{C}$. The canonical Sierpi\'{n}ski object in $[G,\mathcal{C}]$ is $(\mathbb{S},\pi_2)$.
{\bf Axiom 4.}
\emph{$\mathbb{S}$ is double exponentiable.}
That this axiom is $G$-stable follows from Proposition \ref{main}. Notice from the proposition that the morphisms of the Kleisli category $ [G , \mathcal{C} ]_{\mathbb{P}_G}$ can be identified with natural transformations $\delta: \mathbb{S}^X \rTo \mathbb{S}^Y$ with the property $\mathbb{S}^b\delta=\delta^G \mathbb{S}^a$. It is easy to see that the lower (upper) Kleisli maps correspond to $\delta $s that are join (meet) semilattice homomorphisms.
{\bf Axiom 5.}
\emph{For any objects $X$ and $Y$, any natural transformation $\alpha : \mathbb{S}^X\rTo \mathbb{S}^Y$ that is also a distributive lattice homomorphism is of the form $\mathbb{S}^f$ for some unique $f:Y \rTo X$.}
If $\epsilon:(\mathbb{S},\pi_2)^{(Y,b)} \rTo (\mathbb{S},\pi_2)^{(X,a)} $ is a natural transformation that is also a distributive lattice homomorphism then the corresponding natural transformation $\delta: \mathbb{S}^X \rTo \mathbb{S}^Y$ is also a distributive lattice homomorphism and so, assuming the axiom, is equal to $\mathbb{S}^f$ for some unique $f: Y \rTo X$. However, by applying the uniqueness part of the axiom, we see that $f$ is a $G$-homomorphism. It is routine to then check, using the order isomorphism established in Proposition \ref{main}, that if $\delta$ is of the form $\mathbb{S}^f$ then $\epsilon$ must be $(\mathbb{S},\pi_2)^f$. This shows that the axiom is $G$-stable.
{\bf Axiom 6.}
\emph{(i) Inflationary idempotents split in the Kleisli category $\mathcal{C}_{P_L}$.}
\emph{(ii) Deflationary idempotents split in the Kleisli category $\mathcal{C}_{P_U}$.}
\cite{towhofman} shows that these conditions are equivalent to the assumption that the monad $P_L$ (respectively $P_U$) is KZ (respectively coKZ).
Say $\alpha: \mathbb{S}^X \rTo \mathbb{S}^X$ is an inflationary idempotent join semilattice homomorphism that splits as $ \mathbb{S}^{X_0}\pile{ \rInto^{\theta} \\ \lOnto_{\gamma}} \mathbb{S}^X$ in the (opposite of) the lower Kleisli category; so $\theta$ and $\gamma$ are both join semilattice homomorphisms. Then, in the presence of Axiom 5, $\theta$ must be equal to $\mathbb{S}^q$ for some unique $q$. This follows as lemma \ref{meet} shows that $\theta$ is a meet semilattice homomorphism. Notice also, by the `Further' part of that lemma, that $\gamma$ preserves top.
\begin{lemma}
Axiom 6 is $G$-stable (given Axioms 1-5).
\end{lemma}
In summary the proof follows by applying our description of the double power Kleisli morphisms of $[G,\mathcal{C}]$ in terms of the double power Kleisli morphisms of $\mathcal{C}$.
\begin{proof}
If $(X,a)$ is a $G$-object and $\delta: \mathbb{S}^X \rTo \mathbb{S}^X$ an idempotent inflationary join semilattice homomorphism such that $\mathbb{S}^a\delta=\delta^G \mathbb{S}^a$, then $\delta$ factors as $\mathbb{S}^X \rOnto^{\gamma} \mathbb{S}^{X_0} \rInto^{\mathbb{S}^q} \mathbb{S}^X$; see the preamble to the statement of the lemma. Further $\delta^G$ factors as $\mathbb{S}^{Id_G \times q} \gamma^G$. Consider $\nu: \mathbb{S}^{X_0}\rTo \mathbb{S}^{G \times X_0}$ defined as $\gamma^G \mathbb{S}^a\mathbb{S}^q$. The two squares in the following diagram commute since $\gamma$ is a (split) epimorphism and $\mathbb{S}^{Id_G \times q}$ is a (split) monomorphism:
\begin{diagram}
\mathbb{S}^X & \rTo^{\gamma} & \mathbb{S}^{X_0} & \rTo^{\mathbb{S}^q } & \mathbb{S}^X \\
\dTo^{\mathbb{S}^{a}} & & \dTo_{\nu} & & \dTo_{\mathbb{S}^a} \\
\mathbb{S}^{G \times X} & \rTo^{\gamma^G} & \mathbb{S}^{G \times X_0} & \rTo^{\mathbb{S}^{Id_G \times q}} & \mathbb{S}^{G \times X} \\
\end{diagram}
We now claim that $\nu$ is a meet semilattice homomorphism. To see this, by the `Further' part of Lemma \ref{meet}, we see that $\nu$ preserves top because both $\gamma$ and $\gamma^G$ preserve top. To establish preservation by $\nu$ of binary meets one needs but to check that $\sqcap_{\mathbb{S}^{G \times X_0}} (\nu \times \nu)\sqsubseteq \nu \sqcap_{\mathbb{S}^{X_0}}$. Now from Lemma \ref{meet} we know that $\sqcap_{\mathbb{S}^{X_0}} = \gamma \sqcap_{\mathbb{S}^X} ( \mathbb{S}^q \times \mathbb{S}^q)$ (and similarly for $\sqcap_{\mathbb{S}^{G \times X_0}}$). Therefore:
\begin{eqnarray*}
\nu \sqcap_{\mathbb{S}^{X_0}} & = & \gamma^G \mathbb{S}^a\mathbb{S}^q \gamma \sqcap_{\mathbb{S}^X} ( \mathbb{S}^q \times \mathbb{S}^q) \\
& \sqsupseteq & \gamma^G \mathbb{S}^a \sqcap_{\mathbb{S}^X} ( \mathbb{S}^q \times \mathbb{S}^q) \\
& = & \gamma^G \sqcap_{\mathbb{S}^{G \times X}} (\mathbb{S}^a \times \mathbb{S}^a )( \mathbb{S}^q \times \mathbb{S}^q) \\
& = & \gamma^G \sqcap_{\mathbb{S}^{G \times X} } (\mathbb{S}^{Id_G \times q} \times \mathbb{S}^{Id_G \times q}) ( \nu \times \nu) \\
& = & \sqcap_{\mathbb{S}^{G \times X_0} } ( \nu \times \nu) \text{.}\\
\end{eqnarray*}
Since then $\nu$ is a distributive lattice homomorphism it is of the form $\mathbb{S}^t$ for some (unique) $t: G \times X_0 \rTo X_0$ and it is readily checked that $(X_0,t)$ is a $G$-object. By construction $\gamma$ and $\mathbb{S}^q$ commute with $\mathbb{S}^a$ and $\mathbb{S}^t$ and so correspond to morphisms of $[G,\mathcal{C}]^{op}_{\mathbb{P}_G}$ (i.e. natural transformations relative to the category of $G$-objects).
This proves stability of 6(i); part (ii) is order dual.
\end{proof}
{\bf Axiom 7.}
\emph{For any equalizer diagram }
\begin{diagram}
E & \rTo^{e} & X & \pile { \rTo^f \\ \rTo_g} & Y
\end{diagram}
in $\mathcal{C}$ the diagram
\begin{diagram}
\mathbb{S}^{X}\times \mathbb{S}^{X}\times \mathbb{S}^{Y} & \pile{ \rTo^{\sqcap (Id\times \sqcup )(Id\times Id\times \mathbb{S}^{f})} \\ \rTo_{\sqcap (Id\times \sqcup )(Id\times Id\times \mathbb{S}^{g})} } & \mathbb{S}^{X} & \rTo^{\mathbb{S}^{e}} & \mathbb{S}^{E}
\end{diagram}
\emph{is a coequalizer in $\mathcal{C}_{\mathbb{P}}^{op}$}.
Note that Axiom 7 does not break the symmetry given by the order
enrichment. A short calculation using the distributivity assumption on $
\mathbb{S}$ shows that the composite $\sqcup (Id\times \sqcap )$ could have
been used in the place of $\sqcap (Id\times \sqcup )$.
Stability of this axiom is also straightforward as $\mathbb{S}^e$ is an epimorphism in $\mathcal{C}_{\mathbb{P}}^{op}$. In more detail say $(E,d)\rTo^e (X,a)$ is an equalizer of $f,g:(X,a) \pile{ \rTo \\ \rTo } (Y,b) $ in $[ G , \mathcal{C}]$ and $(Z,c)$ is a $G$-object, then for any $\delta : \mathbb{S}^X \rTo \mathbb{S}^Z$ which has $\mathbb{S}^c \delta = \delta^G \mathbb{S}^a$ we also have $\mathbb{S}^c \delta' \mathbb{S}^e= (\delta')^G \mathbb{S}^d\mathbb{S}^e$ if $\delta$ factors as $\delta'\mathbb{S}^e$ because $e$ is a $G$-homomorphism. $\delta'$ must then correspond to a morphism of $[G,\mathcal{C}]^{op}_{\mathbb{P}_G}$.
\begin{definition}
A category $\mathcal{C}$ satisfying the axioms is called a \emph{category of spaces}.
\end{definition}
\begin{example}
The category of locales relative to an elementary topos $\mathcal{E}$, written $\mathbf{Loc}_{\mathcal{E}}$, is a category of spaces. The axioms are all known properties of the category of locales; e.g. \cite{towaxioms} and \cite{towhofman}.
\end{example}
For clarity, collecting together the various observations already made:
\begin{theorem}
The axioms are $G$-stable for any internal group $G$; in other words if $\mathcal{C}$ is a category of spaces then so is $[G,\mathcal{C}]$ for any internal group $G$.
\end{theorem}
\section{Categories of spaces that are not categories of locales}
In this section we provide a class of examples which shows that not every category of spaces is a category of locales for some elementary topos $\mathcal{E}$. To give this example we must first recall a few basic definitions and results about categories of spaces and a proposition about the representation of geometric morphisms as certain adjunctions between categories of locales.
\begin{definition}
(1) A morphism $f:X \rTo Y $ of a category of spaces is \emph{open} if there exists $\exists_f : \mathbb{S}^X \rTo \mathbb{S}^Y$ left adjoint to $\mathbb{S}^f$ such that $\exists_f\sqcap_{\mathbb{S}^X}(Id_{\mathbb{S}^X} \times \mathbb{S}^f)=\sqcap_{\mathbb{S}^Y}(\exists_f \times Id_{\mathbb{S}^Y})$ (Frobenius condition).
(2) An object $X$ of a category of spaces is \emph{open} if $!:X \rTo 1$ is an open map.
(3) An object $X$ of a category of spaces is \emph{discrete} if it is open and $\Delta : X \rTo X \times X$ is open.
\end{definition}
In the case where the category of spaces is a category of locales, the usual meanings are recovered; \cite{towaxioms}. Any elementary topos $\mathcal{E}$ can be identified with the full subcategory of $\mathbf{Loc}_{\mathcal{E}}$ consisting of discrete objects. One easily checks that all isomorphisms are open maps (notice: $\exists_{\phi^{-1}}=\mathbb{S}^{\phi}$ for any isomorphism $\phi$), and the property of being an open map is stable under composition, relative to any category of spaces; $\exists_{fg}=\exists_f\exists_g$ for any composable pair of morphisms $f$ and $g$. Further, open maps are pullback stable (\cite{towaxioms}) and the usual Beck-Chevalley condition holds for any pullback square (where an open map is being pulled back).
\begin{lemma}
If $\mathcal{C}$ is a category of spaces and $G=(G,m,e,i)$ is an internal group then a $G$-homomorphism $f:(X,a) \rTo (Y,b)$ is open relative to $[ G , \mathcal{C} ]$ if and only if $f: X \rTo Y $ is open relative to $\mathcal{C}$.
\end{lemma}
\begin{proof}
If $f$ is open as a $G$-homomorphism then there is a natural transformation $ (\mathbb{S}, \pi_2)^{(X,a)} \rTo (\mathbb{S}, \pi_2)^{(Y,b)} $ left adjoint to $(\mathbb{S}, \pi_2)^f$ and satisfying the Frobenius condition. But this natural transformation corresponds to a natural transformation $\mathbb{S}^X \rTo \mathbb{S}^Y $ which can be seen to witness that $f$ is open relative to $\mathcal{C}$. In the other direction if $f$ is open relative to $\mathcal{C}$ then there is $\exists_f : \mathbb{S}^X \rTo \mathbb{S}^Y $ left adjoint to $\mathbb{S}^f$ witnessing that $f$ is an open map of $\mathcal{C}$. So to complete the proof we can just check that the diagram
\begin{diagram}
\mathbb{S}^X & \rTo^{\exists_f} & \mathbb{S}^Y \\
\dTo^{\mathbb{S}^a} & & \dTo_{\mathbb{S}^b} \\
\mathbb{S}^{G \times X} & \rTo^{\exists_f^G} & \mathbb{S}^{G \times Y}\\
\end{diagram}
commutes, since then $\exists_f$ corresponds to a natural transformation $ (\mathbb{S}, \pi_2)^{(X,a)} \rTo (\mathbb{S}, \pi_2)^{(Y,b)} $ relative to $[ G , \mathcal{C}]$, which can be seen to witness that $f$ is open as a $G$-homomorphism.
To prove that the square commutes, notice that $b: G \times Y \rTo Y $ factors as $G\times Y \rTo^{(\pi_1,b)} G \times Y \rTo^{\pi_2^Y} Y$ where the first factor is an isomorphism, and so
\begin{eqnarray*}
\mathbb{S}^b \exists_f & = & \mathbb{S}^{(\pi_1,b)} \mathbb{S}^{\pi_2^Y}\exists_f \\
& = & \mathbb{S}^{(\pi_1,b)} \exists_{Id_G \times f} \mathbb{S}^{\pi_2^X} \\
& = & \exists_{(\pi_1,b(i \times Id_Y))} \exists_{Id_G \times f} \mathbb{S}^{\pi_2^X} \\
& = & \exists_{Id_G \times f} \exists_{(\pi_1,a(i \times Id_X))} \mathbb{S}^{\pi_2^X} \\
& = & \exists^G_ f \mathbb{S}^{(\pi_1,a )} \mathbb{S}^{\pi_2^X} \\
& = & \exists^G_ f \mathbb{S}^a \\
\end{eqnarray*}
where the second line is by Beck-Chevalley applied to the pullback square that is formed by pulling $f:X \rTo Y$ back along $\pi_2^Y : G \times Y \rTo Y$, the third and fifth lines use $\exists_{\phi^{-1}}=\mathbb{S}^{\phi}$ for any isomorphism $\phi$, and the fourth line follows because $f$ is a $G$-homomorphism.
\end{proof}
If $G$ is a group in a category of spaces $\mathcal{C}$ then we use $BG$ for the full subcategory of $[ G , \mathcal{C}]$ consisting of discrete objects; the lemma can be applied to show that $BG$ is the full subcategory that consists of those $G$-objects $(X,a)$ such that $X$ is discrete relative to $\mathcal{C}$. So, in the case $\mathcal{C}=\mathbf{Loc}$, $BG$ recovers its usual meaning: $G$-sets.
\begin{proposition}\label{geommorph}
Let $\mathcal{F}$ and $\mathcal{E}$ be two elementary toposes. There is an equivalence between the category of order enriched Frobenius adjunctions $L \dashv R : \mathbf{Loc}_{\mathcal{F}} \pile{\rTo \\ \lTo } \mathbf{Loc}_{\mathcal{E}}$ such that $R$ preserves the Sierpi\'{n}ski locale and the category of geometric morphisms from $\mathcal{F}$ to $\mathcal{E}$. Every such Frobenius adjunction is determined up to isomorphism by the restriction of its right adjoint to discrete objects.
\end{proposition}
\begin{proof}
This is the main result of \cite{towgeom}.
\end{proof}
If $f: \mathcal{F} \rTo \mathcal{E}$ is a geometric morphism then we use $\Sigma_f \dashv f^*$ for the corresponding adjunction between categories of locales. We are now in a position to give our example.
\begin{example}
It is not the case that every category of spaces arises as the category of locales for some elementary topos. Let $G$ be a localic group, and say $\psi: [ G , \mathbf{Loc}] \rTo^{\simeq} \mathbf{Loc}_{\mathcal{E}}$ for some elementary topos $\mathcal{E}$ (such that the equivalence sends the Sierpi\'{n}ski locale relative to $\mathcal{E}$ to the canonical Sierpi\'{n}ski object of $[ G , \mathbf{Loc}]$). It follows that the discrete objects of $ \mathbf{Loc}_{\mathcal{E}}$ can be identified with the discrete objects of $[ G , \mathbf{Loc}]$; but these last are $BG$. It follows that $BG \simeq \mathcal{E}$ and therefore that there is an equivalence $\phi: \mathbf{Loc}_{\mathcal{E}} \rTo^{\simeq} \mathbf{Loc}_{BG}$. So there is an adjunction
\begin{eqnarray*}
\mathbf{Loc} \pile{\rTo^{G \times (\_)} \\ \lTo_U} [ G , \mathbf{Loc}] \pile{ \rTo^{\psi} \\ \lTo_{\psi^{-1}}} \mathbf{Loc}_{\mathcal{E}} \pile{ \rTo^{\phi} \\ \lTo_{\phi^{-1}}} \mathbf{Loc}_{BG}
\end{eqnarray*}
which satisfies Frobenius reciprocity and whose right adjoint preserves the Sierpi\'{n}ski locale. Further the restriction of the right adjoint of this adjunction to discrete locales is the forgetful functor and so by the last proposition this adjunction must be isomorphic to the adjunction $\Sigma_{p_G} \dashv p_G^*$ determined by the canonical point $p_G : \mathbf{Set} \rTo BG$ of $BG$.
For any open localic group we know that the geometric morphism $p_G : \mathbf{Set} \rTo BG$ is an open surjection (see Lemma C5.3.6 of \cite{Elephant} and the comments before it). But locales descend along open surjections (Theorem C5.1.5 of \cite{Elephant}) and the definition of locales descending along $p_G$ is that the functor $\rho: \mathbf{Loc}_{BG} \rTo $$ [ \hat{G} , \mathbf{Loc}]$, induced by $p_G^* : \mathbf{Loc}_{BG} \rTo \mathbf{Loc}$ (i.e. $U\rho=p_G^*$), is an equivalence, where $\hat{G}$ is the \'{e}tale completion of $G$ (see e.g. Lemma C5.3.16 of \cite{Elephant} for a bit more detail). Therefore there exists an equivalence of categories $[ G , \mathbf{Loc} ] \simeq [ \hat{G} , \mathbf{Loc}]$ which commutes with the canonical adjunction back to $\mathbf{Loc}$. This is sufficient to show that $G \cong \hat{G}$ (see the comments before Lemma \ref{Frob}); i.e. that $G$ is \'{e}tale complete. Since not every open localic group is \'{e}tale complete, it is not the case that every category of spaces is a category of locales over some topos.
\end{example}
\section{Making do without coequalizers}
\subsection{Making do: inside $\mathcal{C}$}
An achievement of the axiomatic approach to locale theory is that it covers Plewe's result that localic triquotient surjections are effective descent morphisms (which generalises the more well known results that localic proper and open surjections are effective descent morphisms). To prove the result one needs to show that triquotient surjections are regular epimorphisms and, on the surface, this appears to require some coequalizers of the ambient category $\mathcal{C}$. We now show how to avoid this requirement.
\begin{definition}
Given a morphism $p: Z \rTo Y$ in a category of spaces, a \emph{triquotient assignment on $p$} is a natural transformation $p_{\#}: \mathbb{S}^Z \rTo \mathbb{S}^Y$ satisfying
(i) $\sqcap_{\mathbb{S}^Y}( p_{\#} \times Id_{\mathbb{S}^Y}) \sqsubseteq p_{\#}\sqcap_{\mathbb{S}^Z} (Id_{\mathbb{S}^Z} \times \mathbb{S}^p )$ and
(ii) $ p_{\#}\sqcup_{\mathbb{S}^Z} (Id_{\mathbb{S}^Z} \times \mathbb{S}^p )\sqsubseteq \sqcup_{\mathbb{S}^Y}( p_{\#} \times Id_{\mathbb{S}^Y})$.
Further $p$ is a \emph{triquotient surjection} if it has a triquotient assignment $p_{\#}$ such that $p_{\#}\mathbb{S}^p = Id_{\mathbb{S}^Y}$.
\end{definition}
Consult \cite{towaxioms} for more detail on triquotient assignments and the role they play in the axiomatic approach. In particular note that the usual `Beck-Chevalley for pullback squares' result holds: if $p_{\#}$ is a triquotient assignment on $p:Z \rTo Y$ then for any $f: X \rTo Y$ there is a triquotient assignment $(\pi_1)_{_\#}$ on $\pi_1 :X \times_Y Z \rTo X$ such that $(\pi_1)_{_\#}\mathbb{S}^{\pi_2} = \mathbb{S}^f p_{\#}$. Notice that if $p:Z \rTo Y$ is a triquotient surjection witnessed by the triquotient assignment $p_{\#} : \mathbb{S}^Z \rTo \mathbb{S}^Y$, then $p_{\#}(1)=1$ and $p_{\#}(0)=0$. Conversely if $p: Z \rTo Y$ has a triquotient assignment $p_{\#}$ with $p_{\#}(1)=1$ and $p_{\#}(0)=0$ then $p_{\#}(\mathbb{S}^p(b))=p_{\#}(0 \sqcup \mathbb{S}^p(b))\sqsubseteq p_{\#}(0) \sqcup b = b$ and order dually $b \sqsubseteq p_{\#}(\mathbb{S}^p(b))$ and so $p$ is a triquotient surjection. Using this characterization of triquotient surjections it is clear from Beck-Chevalley for pullback squares that triquotient surjections are pullback stable. We now prove that triquotient surjections are regular epimorphisms.
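For orientation (this is just the familiar motivating example and is not needed in what follows): if $p: Z \rTo Y$ is an open map then $\exists_{p}$ is a triquotient assignment on $p$. The Frobenius condition gives (i), in fact with equality, and for (ii)
\begin{eqnarray*}
\exists_{p}(c\sqcup\mathbb{S}^{p}(b)) &=& \exists_{p}(c)\sqcup\exists_{p}\mathbb{S}^{p}(b)\;\sqsubseteq\;\exists_{p}(c)\sqcup b
\end{eqnarray*}
since $\exists_{p}$, being a left adjoint, preserves joins and $\exists_{p}\mathbb{S}^{p}\sqsubseteq Id_{\mathbb{S}^{Y}}$. With this assignment $p$ is a triquotient surjection exactly when $\exists_{p}\mathbb{S}^{p}=Id_{\mathbb{S}^{Y}}$, i.e. when $p$ is an open surjection; proper maps provide an order dual family of examples.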
\begin{proposition}
If $\mathcal{C}$ is a category of spaces and $p: Z \rTo Y$ a triquotient surjection then $p$ is a regular epimorphism.
\end{proposition}
\begin{proof}
Let $p_1,p_2: Z \times_Y Z \pile{\rTo \\ \rTo} Z$ be the kernel pair of $p$. The diagram
\begin{diagram}
\mathbb{S}^Y & \pile{\rTo^{\mathbb{S}^p} \\ \lTo_{p_{\#}}} & \mathbb{S}^Z & \pile{ \rTo^{\mathbb{S}^{p_2}} \\ \rTo^{\mathbb{S}^{p_1} } \\ \lTo_{(p_1)_{\#}}} & \mathbb{S}^{Z \times_Y Z }\\
\end{diagram}
is a split fork in $\mathcal{C}_{\mathbb{P}}^{op}$. For any $q: Z \rTo W$ with $qp_1=qp_2$ we therefore have that $\mathbb{S}^q$ factors (uniquely) as $\mathbb{S}^p \alpha $ for some natural transformation $\alpha$ (it is given by $p_{\#}\mathbb{S}^q$). By Axiom 5 it therefore only remains to check that $\alpha$ is a distributive lattice homomorphism. Since we have already observed $p_{\#}$ preserves $0$ and $1$ we just need to show that $\alpha$ preserves binary meet and join, and for this it is sufficient to check $p_{\#}\mathbb{S}^q(c_1) \sqcap p_{\#}\mathbb{S}^q(c_2) \sqsubseteq p_{\#}\mathbb{S}^q(c_1 \sqcap c_2)$ and $ p_{\#}\mathbb{S}^q(c_1 \sqcup c_2) \sqsubseteq p_{\#}\mathbb{S}^q(c_1) \sqcup p_{\#}\mathbb{S}^q(c_2)$.
But
\begin{eqnarray*}
p_{\#}\mathbb{S}^q(c_1) \sqcap p_{\#}\mathbb{S}^q(c_2) & \sqsubseteq & p_{\#}(\mathbb{S}^p c_1 \sqcap \mathbb{S}^p p_{\#} \mathbb{S}^q c_2) \\
& = & p_{\#}(\mathbb{S}^q c_1 \sqcap (p_1)_{\#} \mathbb{S}^{p_2} \mathbb{S}^q c_2) \text{ (Beck-Chevalley)}\\
& = & p_{\#}(\mathbb{S}^q c_1 \sqcap (p_1)_{\#} \mathbb{S}^{p_1} \mathbb{S}^q c_2) \text{ (since $qp_1=qp_2$)} \\
& = & p_{\#}(\mathbb{S}^q c_1 \sqcap \mathbb{S}^q c_2) \text{ ($p_1$ triquotient surjection)}\\
& = & p_{\#}\mathbb{S}^q ( c_1 \sqcap c_2)\\
\end{eqnarray*}
and $ p_{\#}\mathbb{S}^q(c_1 \sqcup c_2) \sqsubseteq p_{\#}\mathbb{S}^q(c_1) \sqcup p_{\#}\mathbb{S}^q(c_2)$ follows an order dual proof and so we are done.
\end{proof}
Further details on the axiomatic proof that triquotient surjections are of effective descent are contained in \cite{towaxiomsdraft}.
\subsection{Making do: maps between $\mathcal{C}$s}
If $G$ is an internal group in a category of spaces we have established that $[G,\mathcal{C}]$ is a category of spaces. Since we have also recalled in Proposition \ref{geommorph} that geometric morphisms can be represented as certain adjunctions between categories of spaces it would be odd if there was not a natural `connected components' adjunction $\Sigma_G \dashv G^*$; i.e.
\begin{eqnarray*}
[G,\mathcal{C}] \pile{\rTo^{\Sigma_G} \\ \lTo_{G^*} } \mathcal{C}
\end{eqnarray*}
where $G^*$ sends an object $X$ of $\mathcal{C}$ to the trivial $G$-object $(X,\pi_2)$. But for $\Sigma_G$ to exist it would appear that coequalizers are required, since $\Sigma_G(X,a)$ must (by uniqueness of left adjoints) be isomorphic to the coequalizer of $a$ and $\pi_2$. We now show, for open groups at least, that in fact $\Sigma_G$ can always be defined. (Order dually, $\Sigma_G$ will always exist for compact groups.) The proof does not require $G$ to be a group, only a monoid, but it is not clear what this extra level of generality offers us. To prove this result we need three lemmas, the first of which is a simple order enriched result:
\begin{lemma}\label{hannahs}
If
\begin{eqnarray*}
C \rTo^c A \pile{\rTo^a \\ \rTo_b } B
\end{eqnarray*}
is a fork diagram in an order enriched category $\mathcal{C}$ (i.e. $ac=bc$), and there exist $q:A\rTo C$ and $t:B\rTo A$ such that $ta=cq \sqsupseteq Id_A$, $qc=Id_C$ and $tb \sqsubseteq Id_A$, then $c$ is the equalizer of $a$ and $b$.
\end{lemma}
The result that $c$ is an equalizer is similar to the familiar result that split forks are coequalizers, used in Beck's monadicity theorem. An order enriched monadicity theorem can be written down, based on this result.
\begin{proof}
Say $d:D \rTo A$ has $ad=bd$. Then $cqd \sqsupseteq d$ and $cqd = tad = tbd \sqsubseteq d$; so $cqd=d$, showing that $d$ factors via $c$, clearly uniquely since $c$ is a split monomorphism. This establishes an order isomorphism as the action of morphism composition preserves order.
\end{proof}
For the next lemma observe that if $p:Z \rTo X$ is an open map with a section $s: X \rTo Z$ (i.e. $ps=Id_X$) then $\mathbb{S}^s \sqsubseteq \exists_p$. This is trivial to establish: $Id_{\mathbb{S}^Z} \sqsubseteq \mathbb{S}^p \exists_p$ since $\mathbb{S}^p$ is right adjoint to $\exists_p$, and so $\mathbb{S}^s \sqsubseteq \mathbb{S}^s \mathbb{S}^p \exists_p = \mathbb{S}^{ps}\exists_p = \exists_p$. In particular we observe that for any open object $Y$, arbitrary object $X$, and map $g:X \rTo Y$, we have that $\mathbb{S}^{(g,Id_X)} \sqsubseteq \exists_{\pi_2}$, where $\pi_2 : Y \times X \rTo X$, which is open because it is the pullback of the open map $!:Y \rTo 1$. Our next lemma builds on this last observation.
\begin{lemma}
If $g: Z_1 \rTo Z_2$ is a map between two open objects and $X$ is some other object of $\mathcal{C}$, then $\exists_{\pi_2^{Z_1}}\mathbb{S}^{g \times Id_X} \sqsubseteq \exists_{\pi_2^{Z_2}}$ (where $\pi_2^{Z_1}: Z_1 \times X \rTo X$ and $\pi_2^{Z_2}: Z_2 \times X \rTo X$).
\end{lemma}
\begin{proof}
As $\exists_{\pi_2^{Z_1}}$ is left adjoint to $\mathbb{S}^{\pi_2^{Z_1}}$, the proof can be completed by showing $\mathbb{S}^{g \times Id_X} \sqsubseteq \mathbb{S}^{\pi_2^{Z_1}}\exists_{\pi_2^{Z_2}}$, which is equivalent to showing $\mathbb{S}^{g \times Id_X} \sqsubseteq \exists_{\pi_{23}} \mathbb{S}^{\pi_{13}}$ by Beck-Chevalley on the pullback square
\begin{diagram}
Z_2 \times Z_1 \times X & \rTo^{\pi_{13}} & Z_2 \times X \\
\dTo^{\pi_{23}} & & \dTo_{\pi_2^{Z_2}} \\
Z_1 \times X & \rTo_{\pi_2^{Z_1}} & X \\
\end{diagram}
But the proof is then complete by our observations in the preamble: $\mathbb{S}^{g \times Id_X}$ factors as $\mathbb{S}^{(g,Id_{Z_1 \times X})}\mathbb{S}^{\pi_{13}}$, and $(g,Id_{Z_1 \times X})$ is a section of the open map $\pi_{23}$, so $\mathbb{S}^{(g,Id_{Z_1 \times X})} \sqsubseteq \exists_{\pi_{23}}$.
\end{proof}
This leads us to our first result about open monoids; that is, monoid objects $(M,m:M\times M \rTo M,e: 1 \rTo M)$ internal to $\mathcal{C}$ such that $M$ is open.
\begin{lemma}
If $M$ is an open monoid then for any $M$-object $(X,a:M \times X \rTo X)$, $\mathbb{S}^X\rTo^{\mathbb{S}^a}\mathbb{S}^{M \times X} \rTo^{\exists_{\pi_2}}\mathbb{S}^X$ is (a) inflationary and (b) idempotent.
\end{lemma}
\begin{proof}
(a) Immediate because $\exists_{\pi_2}$ is greater than $\mathbb{S}^{(e!,Id_X)}$ and $Id_{\mathbb{S}^X}$ factors as $\mathbb{S}^{(e!,Id_X)}\mathbb{S}^a$.
(b) By Beck-Chevalley on the pullback square
\begin{diagram}
M \times M \times X & \rTo^{Id_M \times a} & M \times X \\
\dTo^{\pi_{23}} & & \dTo_{\pi_2} \\
M \times X & \rTo^a & X \\
\end{diagram}
and using $a(Id_M \times a) = a (m \times Id_X)$ we have
\begin{eqnarray*}
\exists_{\pi_2} \mathbb{S}^a \exists_{\pi_2} \mathbb{S}^a & = & \exists_{\pi_2} \exists_{\pi_{23}} \mathbb{S}^{Id_M
\times a} \mathbb{S}^a \\
& = & \exists_{\pi_2} \exists_{\pi_{23}} \mathbb{S}^{m \times Id_X} \mathbb{S}^a \\
& \sqsubseteq & \exists_{\pi_2} \mathbb{S}^a
\end{eqnarray*}
where the last line is by the lemma (take $g=m$). This completes the proof of (b), given that (a) shows that $\exists_{\pi_2} \mathbb{S}^a$ is inflationary.
\end{proof}
The next result establishes the aim of this subsection.
\begin{proposition}\label{nathan}
If $M$ is an open monoid then $M^*: \mathcal{C} \rTo [ M , \mathcal{C}]$ has a left adjoint, $\Sigma_M$.
\end{proposition}
\begin{proof}
For any $M$-object $(X,a)$ consider the map $\exists_{\pi_2}\mathbb{S}^a$, which we have established is an inflationary idempotent. So by applying Axioms 5 and 6 we have a diagram
\begin{eqnarray*}
\mathbb{S}^{\Sigma_M(X,a)} \pile{ \rInto^{\mathbb{S}^{q^X}} \\ \lOnto_{\tau} } \mathbb{S}^X \pile{ \rTo^{\mathbb{S}^a} \\ \rTo^{\mathbb{S}^{\pi_2}} \\ \lTo_{\exists_{\pi_2}}} \mathbb{S}^{M \times X}
\end{eqnarray*}
which, by Lemma \ref{hannahs}, is an equalizer in $\mathcal{C}^{op}_{P_L}$. From this it follows that $q^X$ is the coequalizer of $a$ and $\pi_2$; if $t: X \rTo Z$ composes equally with $a$ and $\pi_2$ then $\mathbb{S}^t$ factors uniquely via $\mathbb{S}^{q^X}$ so it just remains to check, as in earlier proofs, that $\tau \mathbb{S}^t$ is a meet semilattice homomorphism. Certainly it preserves top (as $\tau$ does); the manipulation below shows that $\mathbb{S}^{q^X}( \tau \mathbb{S}^t (a_1) \sqcap \tau \mathbb{S}^t (a_2) ) \sqsubseteq \mathbb{S}^{q^X} \tau \mathbb{S}^t ( a_1 \sqcap a_2)$ from which it is clear that $\tau \mathbb{S}^t$ is a meet semilattice homomorphism (post compose the inequality with $\tau$).
\begin{eqnarray*}
\mathbb{S}^{q^X}( \tau \mathbb{S}^t (a_1) \sqcap \tau \mathbb{S}^t (a_2) ) & = & \mathbb{S}^{q^X} \tau \mathbb{S}^t (a_1) \sqcap \mathbb{S}^{q^X} \tau \mathbb{S}^t (a_2) \\
& = & \exists_{\pi_2} \mathbb{S}^{a} \mathbb{S}^t (a_1) \sqcap \exists_{\pi_2} \mathbb{S}^{a} \mathbb{S}^t (a_2) \\
& = & \exists_{\pi_2} \mathbb{S}^{\pi_2} \mathbb{S}^t (a_1) \sqcap \exists_{\pi_2} \mathbb{S}^{\pi_2} \mathbb{S}^t (a_2) \\
& \sqsubseteq & \mathbb{S}^t (a_1) \sqcap \mathbb{S}^t (a_2) \\
& = & \mathbb{S}^t (a_1 \sqcap a_2) \\
& \sqsubseteq & \mathbb{S}^{q^X}\tau \mathbb{S}^t (a_1 \sqcap a_2)\\
\end{eqnarray*}
\end{proof}
\section{Extending to groupoids}
In this section we outline how the above arguments extend to groupoids. We start by establishing some notation for slice categories. If $f : Y \rTo X$ is a morphism of a category $\mathcal{C}$ then we use $Y_f$ as notation for $f$ when considered as an object of the slice category $\mathcal{C}/X$. We use $Y_X$ as notation for the object $\pi_2 : Y \times X \rTo X $. Now any morphism $g:Z \rTo X$ of a cartesian category $\mathcal{C}$ gives rise to a pullback adjunction $\Sigma_g \dashv g^*: \mathcal{C}/Z \rTo \mathcal{C}/X$ that satisfies Frobenius reciprocity. So by the change of base result (Lemma \ref{Frob}) there is an adjunction, which we will write $g^{\#} \dashv g_*$, between $(\mathcal{C}/Z)_{S_Z}^{op} $ and $(\mathcal{C}/X)_{S_X}^{op} $. In the case that $X=1$ observe that for any $\delta : S^A \rTo S^B$ we have $g_*g^{\#}(\delta)=\delta^Z$.
If $\mathbb{G}=(G_1 \pile{\rTo^{d_0} \\ \rTo_{d_1} } G_0, m: G_1 \times_{G_0} G_1 \rTo G_1, s:G_0 \rTo G_1, i: G_1 \rTo G_1)$ is a groupoid relative to an order enriched cartesian category $\mathcal{C}$, with object of objects $G_0$ and object of morphisms $G_1$, then there is an adjunction $\mathcal{C}/G_0 \pile{ \rTo^{\Sigma_{d_1} d_0^*}\\ \lTo_U} [ \mathbb{G} ,\mathcal{C}]$. It satisfies Frobenius reciprocity and $U$ is monadic. A $\mathbb{G}$-object consists of $(X_f, a: \Sigma_{d_1} d_0^* X_f \rTo X_f)$ where $f: X \rTo G_0$ and $a$ is a morphism over $G_0$ that satisfies the usual unit and associative identities (the domain of $\Sigma_{d_1} d_0^* X_f$ is $G_1 \times_{G_0} X$). By taking adjoint transpose across $\Sigma_{d_1} \dashv d_1^*$ it is well known that having such an $a$ on $X_f$ is equivalent to having a morphism $a': d_0^* X_f \rTo d_1^* X_f$ of $\mathcal{C}/G_1$ such that $s^*a'$ is isomorphic to the identity and $m^*a' \cong \pi_2^*a' \nu_{X_f}\pi_1^*a'$ (where $\nu:\pi_1^*d_1^* \rTo^{\cong} \pi_2^* d_0^*$ is the canonical isomorphism). If $(X_f,a)$ and $(Y_g,b)$ are two $\mathbb{G}$-objects then a morphism $h:X_f\rTo Y_g$ of $\mathcal{C}/G_0$ is a $\mathbb{G}$-homomorphism if and only if $(d_1^*h)a'=b'(d_0^*h)$.
For any object $S$ of $\mathcal{C}$, $S_{G_0}$ can be made into a $\mathbb{G}$-object by defining the trivial action on it: $Id_S \times d_1 : S \times G_1 \rTo S \times G_0$. This $\mathbb{G}$-object is written $\mathbb{G}^*S$.
To summarise the technical difference we have to account for when generalising from groups to groupoids, observe that the role of $\delta^G$, i.e. $!^G_* (!^G)^{\#}(\delta)$, must be taken by $({d_1})_*d_0^{\#}(\delta)$. So the exponential in the presheaf category, something that is determined by the categorical structure of $\mathcal{C}$ alone, must be replaced by an endofunctor relative to $\mathcal{C}$ that contains information about the groupoid. However, once this replacement is made it is easy to see how to make the generalisation.
It does not appear to be possible to generalise Proposition \ref{main} from groups to groupoids. If it were possible then the property of being double exponentiable would be stable under slicing, since for any object $X$ the category of $\mathbb{G}$-objects is the same as the slice of $\mathcal{C}$ over $X$ when $\mathbb{G}$ is taken to be the trivial groupoid $X \pile{\rTo^{Id_X} \\ \rTo_{Id_X} } X$. But, as discussed in \cite{towslice}, proving slice stability of double exponentiability appears to require something like Axiom 7 (or, at least, that the double exponentiation functor preserves coreflexive equalizers).
\begin{proposition}
Let $S$ be a double exponentiable object in an order enriched cartesian category $\mathcal{C}$ such that double exponentiation is stable under slicing. For any internal groupoid $\mathbb{G}$, $\mathbb{G}^*S$ is a double exponentiable object of $[ \mathbb{G} ,\mathcal{C}]$.
\end{proposition}
By `stable under slicing' we mean that for any morphism $g : Z \rTo X$ the canonical morphism $g^*\mathbb{P}_X \rTo \mathbb{P}_Z g^*$, determined by the fact that $\Sigma_g \dashv g^*$ satisfies Frobenius reciprocity, is an isomorphism.
\begin{proof}
Let $(X_f,a)$ be a $\mathbb{G}$-object. Then $P_{G_0}(X_f)$ can be made into a $\mathbb{G}$-object by using
\begin{eqnarray*}
d_0^*P_{G_0}(X_f) \rTo^{\cong} P_{G_1}d_0^*(X_f) \rTo^{P_{G_1}a'} P_{G_1}d_1^*(X_f) \rTo^{\cong} d_1^*P_{G_0}(X_f)\text{.}
\end{eqnarray*}
$\mathbb{G}$-homomorphisms from $(Y_g, b)$ to $P_{G_0}(X_f)$ correspond to natural transformations $\delta : S_{G_0}^{X_f} \rTo S_{G_0}^{Y_g}$ such that $({d_1})_*d_0^{\#}(\delta)S_{G_0}^{a}=S_{G_0}^{b}\delta$ (equivalently $d_0^{\#}(\delta)S_{G_1}^{a'}=S_{G_1}^{b'}d_1^{\#}\delta$, by adjoint transpose via $d_1^{\#} \dashv (d_1)_*$). Since $\mathcal{C}/G_0 \pile{ \rTo^{\Sigma_{d_1} d_0^*}\\ \lTo_U} [ \mathbb{G} ,\mathcal{C}]$ satisfies Frobenius reciprocity we know that there is an order isomorphism
\begin{eqnarray*}
Nat[S_{G_0}^{X_f},S_{G_0}^{Y_g}] \cong Nat[(\mathbb{G}^*S)^{(X_f,a)},(\mathbb{G}^*S)^{(G_1 \times_{G_0} Y,m \times Id_Y)}]
\end{eqnarray*}
natural in $Y_g$ and so the result follows as in Section \ref{prelim} from the explicit description of this order isomorphism and application of Lemma \ref{hannah} with $\mathbb{T}$ the monad induced by $ \Sigma_{d_1} d_0^* \dashv U$.
\end{proof}
Let us recall the slice stability result that we need to proceed. The proof is clear from \cite{towslice}.
\begin{proposition}
If $\mathcal{C}$ is a category of spaces then for any object $X$, so is $\mathcal{C}/X$. The canonical Sierpi\'{n}ski object relative to $\mathcal{C}/X$ is $\mathbb{S}_X$ and pullback commutes with double exponentiation.
\end{proposition}
By combining these last two propositions we have that $\mathbb{G}^*\mathbb{S}$ is a double exponentiable object in $[ \mathbb{G} ,\mathcal{C}]$, if $\mathcal{C}$ is a category of spaces. Checking that the remaining axioms are $\mathbb{G}$-stable is a straightforward re-application of the arguments deployed in Section \ref{axioms}, given that \cite{towslice} shows that the axioms hold in $\mathcal{C}/G_0$. So we have shown in outline:
\begin{theorem}
If $\mathcal{C}$ is a category of spaces and $\mathbb{G}$ an internal groupoid, then $[\mathbb{G},\mathcal{C}]$ is a category of spaces.
\end{theorem}
\end{document} |
\begin{document}
\title{Generalized Fourier--Feynman transforms and generalized convolution
products on Wiener space II}
\titlerunning{Generalized Fourier--Feynman transforms and generalized convolution
products II}
\author{Sang Kil Shim \and Jae Gil Choi}
\institute{Sang Kil Shim \at
Department of Mathematics, Dankook University, Cheonan 330-714, Republic of Korea\\
\email{[email protected]}
\and
Jae Gil Choi (Corresponding author)\at
School of General Education, Dankook University, Cheonan 330-714, Republic of Korea\\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
The purpose of this article is to present a second type of fundamental relationship
between the generalized Fourier--Feynman transform and the generalized
convolution product on Wiener space. The relationships in this article are
also natural extensions (to the case on an infinite dimensional Banach space)
of the structure which exists between the Fourier transform and the convolution
of functions on Euclidean spaces.
\keywords{Wiener space \and
Gaussian process \and
generalized Fourier--Feynman transform \and
generalized convolution product.}
\subclass{Primary 46G12; Secondary 28C20 \and 60G15 \and 60J65}
\end{abstract}
\setcounter{equation}{0}
\section{Introduction}\label{sec:introduction}
\par
Given a positive real $T>0$, let $C_0[0,T]$ denote one-parameter Wiener space,
that is, the space of all real-valued continuous functions $x$ on $[0,T]$ with $x(0)=0$.
Let $\mathcal{M}$ denote the class of all Wiener measurable subsets
of $C_0[0,T]$ and let $\mathfrak{m}$ denote Wiener measure.
Then, as is well-known, $(C_0[0,T],\mathcal{M},\mathfrak{m})$ is
a complete measure space.
\par
In \cite{HPS95,HPS96,HPS97-1,PSS98} Huffman, Park, Skoug and Storvick established fundamental
relationships between the analytic Fourier--Feynman transform (FFT)
and the convolution product (CP) for functionals $F$ and $G$ on
$C_0[0,T]$, as follows:
\begin{equation}\label{eq:offt-ocp}
T_{q}^{(p)}\big((F*G)_q\big)(y)
= T_{q}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg)
T_{q}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg)
\end{equation}
and
\begin{equation}\label{eq:ocp-offt}
\big(T_{q}^{(p)}(F)*T_{q}^{(p)}(G)\big)_{-q}(y)
= T_{q}^{(p)}\bigg(F\bigg(\frac{\cdot}{\sqrt2}\bigg)G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{equation}
for scale-almost every $y\in C_{0}[0,T]$,
where $T_{q}^{(p)}(F)$ and $(F*G)_q$ denote the $L_p$ analytic FFT and the CP
of functionals $F$ and $G$ on $C_0[0,T]$.
For an elementary introduction to the FFT and the corresponding CP, see \cite{SS04}.
\par
For $f\in L_2(\mathbb R)$, let the Fourier transform of $f$ be given by
\[
\mathcal{F}(f)(u)=\int_{\mathbb R}e^{iuv}f(v)dm_L^{\mathfrak{n}}(v)
\]
and for $f, g\in L_2(\mathbb R)$, let the convolution of $f$ and $g$ be
given by
\[
(f*g)(u)=\int_{\mathbb R} f(u-v)g(v)dm_L^{\mathfrak{n}}(v)
\]
where $dm_L^{\mathfrak{n}} (v)$ denotes the normalized Lebesgue measure
$(2\pi)^{-1/2}dv$ on $\mathbb R$. As commented in \cite{Indag}, the Fourier
transform $\mathcal{F}$ acts like a homomorphism, carrying the convolution $*$
to ordinary multiplication on $L_2(\mathbb R)$, as follows:
for $f, g \in L_2(\mathbb R)$
\begin{equation}\label{worthy}
\mathcal{F}(f*g)=\mathcal{F}(f)\mathcal{F}(g).
\end{equation}
The Fourier transform $\mathcal{F}$ and the convolution $*$ also enjoy the dual
property
\begin{equation}\label{eq:F-02}
\mathcal{F}(f)*\mathcal{F}(g)=\mathcal{F}(f g).
\end{equation}
Equations \eqref{eq:offt-ocp} and \eqref{eq:ocp-offt} above
are natural extensions (to the case on an infinite dimensional Banach space)
of the equations \eqref{worthy} and \eqref{eq:F-02}, respectively.
\par
In \cite{CCKSY05,HPS97-2}, the authors extended the relationships \eqref{eq:offt-ocp}
and \eqref{eq:ocp-offt} to corresponding relationships between the generalized FFT (GFFT)
and the generalized CP (GCP) of functionals on $C_0[0,T]$.
The definitions of the ordinary FFT and the corresponding CP are based on the Wiener integral,
see \cite{HPS95,HPS96,HPS97-1}, while the definitions of the GFFT and the GCP studied in \cite{CCKSY05,HPS97-2}
are based on the generalized Wiener integral \cite{CPS93,PS91}.
The generalized Wiener integral (associated with a Gaussian process)
was defined by
$\int_{C_0[0,T]} F(\mathcal Z_h(x,\cdot))d\mathfrak{m}(x)$
where $\mathcal Z_h$ is the Gaussian process on $C_0[0,T]\times[0,T]$ given by
$\mathcal Z_h (x,t)=\int_0^t h(s)\tilde{d}x(s)$,
and where $h$ is a nonzero function in $L_2[0,T]$ and
$\int_0^t h(s)\tilde{d}x(s)$
denotes the Paley--Wiener--Zygmund stochastic integral \cite{PWZ33,Park69,PS88}.
On the other hand, in \cite{Indag}, the authors defined a more general CP (see Definition \ref{def:cp}
below) and developed a relationship, analogous to \eqref{eq:offt-ocp},
between their GFFT and this GCP (see Theorem \ref{thm:gfft-gcp-compose} below).
Equation \eqref{eq:gfft-gcp} in Theorem \ref{thm:gfft-gcp-compose} is useful
in that it permits one to calculate the GFFT of the GCP of functionals on $C_0[0,T]$
without actually calculating the GCP.
In this paper we establish the second type of relationship,
analogous to equation \eqref{eq:ocp-offt}, between the GFFT and the GCP of
functionals on $C_0[0,T]$.
Our new results correspond to equation \eqref{eq:F-02} rather
than equation \eqref{worthy}.
It turns out, as noted in Remark \ref{re:meaning-main} below, that
our second relationship between the GFFT and the GCP also permits one to calculate the GCP of
the GFFTs of functionals on $C_0[0,T]$ without actually calculating the GCP.
\setcounter{equation}{0}
\section{Preliminaries}\label{sec:preliminaries}
\par
In order to present our relationship between the GFFT and the GCP,
we follow the exposition of \cite{Indag}.
\par
A subset $B$ of $C_0[0,T]$ is said to be scale-invariant measurable
provided $\rho B\in \mathcal{M}$ for all $\rho>0$, and a scale-invariant
measurable set $N$ is said to be scale-invariant null provided $\mathfrak{m}(\rho N)=0$
for all $\rho>0$. A property that holds except on a scale-invariant null set
is said to hold scale-invariant almost everywhere (s-a.e.).
A functional $F$ is said to be scale-invariant measurable provided $F$ is defined on a scale-invariant
measurable set and $F(\rho\,\cdot\,)$ is Wiener-measurable for every $\rho> 0$.
If two functionals
$F$ and $G$ are equal s-a.e., we write $F\approx G$.
\par
Let $\mathbb C$, $\mathbb C_+$ and $\mathbb{\widetilde C}_+$ denote the set of
complex numbers, complex numbers with positive real part and nonzero complex
numbers with nonnegative real part, respectively. For each $\lambda \in \mathbb C$,
$\lambda^{1/2}$ denotes the principal square root of $\lambda$; i.e., $\lambda^{1/2}$
is always chosen to have positive real part, so that
$\lambda^{-1/2}=(\lambda^{-1})^{1/2}$ is in $\mathbb C_+$ for
all $\lambda\in\widetilde{\mathbb C}_+$.
\par
Let $h$ be a function in $L_2[0,T]\setminus\{0\}$ and let $F$ be
a $\mathbb C$-valued scale-invariant measurable functional on $C_0[0,T]$
such that
\[
\int_{C_0[0,T]} F\big(\lambda^{-1/2}\mathcal Z_h(x,\cdot)\big)d\mathfrak{m}(x)
=J(h;\lambda)
\]
exists as a finite number for all $\lambda>0$. If there exists a function
$J^* (h;\lambda)$ analytic on $\mathbb C_+$ such that
$J^*(h;\lambda)=J(h;\lambda)$ for all $\lambda>0$, then $J^*(h;\lambda)$
is defined to be the generalized analytic Wiener integral (associated with
the Gaussian process $\mathcal{Z}_h$) of $F$ over $C_0[0,T]$ with parameter $\lambda$,
and for $\lambda \in \mathbb C_+$ we write
\[
\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}}
F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x)
= J^*(h;\lambda).
\]
Let $q\ne 0$ be a real number and let $F$ be a functional such that
\[
\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x)
\]
exists for all $\lambda \in \mathbb C_+$. If the following limit exists, we call
it the generalized analytic Feynman integral of $F$ with parameter $q$ and we
write
\[
\int_{C_0[0,T]}^{\mathrm{anf}_{q}}
F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x)
= \lim_{\substack{
\lambda\to -iq \\ \lambda\in \mathbb C_+}}
\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}}
F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x).
\]
\par
Next (see \cite{CCKSY05,Indag,HPS97-2}) we state the definition of the GFFT.
\renewcommand{\thesection.3}{\thesection.1}
\begin{definition}
Let $h$ be a function in $L_2[0,T]\setminus\{0\}$. For $\lambda\in\mathbb{C}_+$
and $y \in C_{0}[0,T]$, let
\[
T_{\lambda,h}(F)(y)
=\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}}
F\big(y+\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x).
\]
For $p\in (1,2]$ we define the $L_p$ analytic GFFT (associated with the Gaussian process $\mathcal{Z}_h$),
$T^{(p)}_{q,h}(F)$ of $F$, by the formula,
\[
T^{(p)}_{q,h}(F)(y)
=\operatorname*{l.i.m.}_{\substack{
\lambda\to -iq \\ \lambda\in \mathbb C_+}}
T_{\lambda,h} (F)(y)
\]
if it exists; i.e., for each $\rho>0$,
\[
\lim_{\substack{
\lambda\to -iq \\ \lambda\in \mathbb C_+}}
\int_{C_{0}[0,T]}\big| T_{\lambda,h} (F)(\rho y)
-T^{(p)}_{q, h }(F)(\rho y) \big|^{p'}
d\mathfrak m (y)=0
\]
where $1/p+1/p' =1$. We define the $L_1$ analytic GFFT, $T_{q, h }^{(1)}(F)$ of $F$,
by the formula
\[
T_{q, h }^{(1)}(F)(y)
= \lim_{\substack{
\lambda\to -iq \\ \lambda\in \mathbb C_+}}
T_{\lambda,h} (F)(y)
\]
for s-a.e. $y\in C_0[0,T]$ whenever this limit exists.
\end{definition}
\par
We note that for $p \in [1,2]$, $T_{q,h}^{(p)}(F)$ is defined only s-a.e.
We also note that if $T_{q,h}^{(p)}(F)$ exists and if $F\approx G$, then
$T_{q,h}^{(p)}(G)$ exists and $T_{q,h}^{(p)}(G)\approx T_{q,h }^{(p)}(F)$.
One can see that for each $h\in L_2[0,T]$,
$T_{q,h}^{(p)}(F)\approx T_{q,-h}^{(p)}(F)$ since
\[
\int_{C_0[0,T]}F(x)d\mathfrak{m}(x)=\int_{C_0[0,T]}F(-x)d\mathfrak{m}(x).
\]
\renewcommand{\thesection.4}{\thesection.2}
\begin{remark}\label{remark:ordinary-fft}
Note that if $h\equiv 1$ on $[0,T]$, then the generalized analytic Feynman
integral and the $L_p$ analytic GFFT, $T_{q,1}^{(p)}(F)$, agree
with the previous definitions of the analytic Feynman integral and the analytic
FFT, $T_{q}^{(p)}(F)$, respectively \cite{HPS95,HPS96,HPS97-1,PSS98} because
$\mathcal Z_1(x,\cdot)=x$ for all $x \in C_0[0,T]$.
\end{remark}
\par
Next (see \cite{Indag}) we give the definition of our GCP.
\renewcommand{\thesection.3}{\thesection.3}
\begin{definition}\label{def:cp}
Let $F$ and $G$ be scale-invariant measurable functionals on $C_{0}[0,T]$.
For $\lambda \in \widetilde{\mathbb C}_+$ and $h_1,h_2\in L_2[0,T]\setminus\{0\}$,
we define their GCP with respect to $\{\mathcal{Z}_{h_1},\mathcal{Z}_{h_2}\}$
(if it exists) by
\begin{equation}\label{eq:gcp-Z}
\begin{aligned}
(F*G)_{\lambda}^{(h_1,h_2)}(y)
=
\begin{cases}
\int_{C_0[0,T]}^{\mathrm{ anw}_{\lambda}}
F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big)
G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak m(x),
\quad \lambda \in \mathbb C_+ \\
\int_{C_0[0,T]}^{\mathrm{ anf}_{q}}
F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big)
G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak{m}(x),\\
\qquad \qquad \qquad \qquad \qquad
\qquad \lambda=-iq,\,\, q\in \mathbb R, \,\,q\ne 0.
\end{cases}
\end{aligned}
\end{equation}
When $\lambda =-iq$, we denote $(F*G)_{\lambda}^{(h_1,h_2)}$
by $(F*G)_{q}^{(h_1,h_2)}$.
\end{definition}
\renewcommand{\thesection.4}{\thesection.4}
\begin{remark}\label{remark:ordinary-cp}
(i) Given a function $h$ in $L_2[0,T]\setminus\{0\}$ and letting
$h_1=h_2\equiv h$, equation \eqref{eq:gcp-Z} yields the convolution
product studied in \cite{CCKSY05,HPS97-2}:
\[
\begin{aligned}
(F*G)_{q}^{(h,h)}(y)&
\equiv(F*G)_{q,h}(y)\\
&=\int_{C_0[0,T]}^{\mathrm{ anf}_{q}}
F\bigg(\frac{y+ \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg)
G\bigg(\frac{y- \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg)d \mathfrak{m}(x) .
\end{aligned}
\]
(ii) Choosing $h_1=h_2\equiv 1$, equation \eqref{eq:gcp-Z} yields
the convolution product studied in \cite{HPS95,HPS96,HPS97-1,PSS98}:
\[
\begin{aligned}
(F*G)_{q}^{(1,1) }(y)
& \equiv (F*G)_{q}(y)\\
&
=\int_{C_0[0,T]}^{\mathrm{ anf}_{q}}
F\bigg(\frac{y+ x}{\sqrt2}\bigg)
G\bigg(\frac{y- x}{\sqrt2}\bigg)d \mathfrak{m}(x).
\end{aligned}
\]
\end{remark}
\par
In order to establish our assertion we define the following conventions.
Let $h_1$ and $h_2$ be nonzero functions in $L_2[0,T]$. Then
there exists a function $\mathbf{s}\in L_2[0,T]$ such
that
\begin{equation}\label{eq:fn-rot}
\mathbf{s}^2(t)=h_1^2(t)+h_2^2(t)
\end{equation}
for $m_L$-a.e. $t\in [0,T]$, where $m_L$ denotes Lebesgue measure on $[0,T]$.
Note that the function `$\mathbf{s}$' satisfying \eqref{eq:fn-rot} is not
unique. We will use the symbol $\mathbf{s}(h_1,h_2)$ for the functions
`$\mathbf{s}$' that satisfy \eqref{eq:fn-rot} above.
Given nonzero functions $h_1$ and $h_2$ in $L_{2}[0,T]$,
infinitely many functions, $\mathbf{s}(h_1,h_2)$, exist in $L_{2}[0,T]$.
Thus $\mathbf{s}(h_1,h_2)$ can be considered as an equivalence class
of the equivalence relation $\sim$ on $L_2[0,T]$ given by
\[
\mathbf{s}_1\sim \mathbf{s}_2 \,\,\Longleftrightarrow\,\, \mathbf{s}_1^2=\mathbf{s}_2^2
\,\,\,m_L\mbox{-a.e.}.
\]
But we observe that for every function $\mathbf{s}$ in the
equivalence class $\mathbf{s}(h_1,h_2)$, the Gaussian random
variable ${\mathcal {Z}}_{\mathbf{s}}(x,T)$ has the normal
distribution $N(0,\|h_1\|_2^2+\|h_2\|_2^2)$.
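Indeed, for every representative $\mathbf{s}$ of the class, the Paley--Wiener--Zygmund integral
$\mathcal Z_{\mathbf{s}}(x,T)=\int_0^T \mathbf{s}(u)\tilde{d}x(u)$ is a mean-zero Gaussian random variable
with variance $\|\mathbf{s}\|_2^2$, and by \eqref{eq:fn-rot},
\[
\|\mathbf{s}\|_2^2=\int_0^T\big(h_1^2(u)+h_2^2(u)\big)du=\|h_1\|_2^2+\|h_2\|_2^2.
\]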
Inductively, given a
sequence $\mathcal H=\{h_1,\ldots, h_n\}$ of nonzero functions in $L_2[0,T]$,
let $\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,h_2,\ldots,h_n)$
be the equivalence class of the functions $\mathbf{s}$ which satisfy the relation
\[
\mathbf{s}^2(t)=h_1^2(t)+\cdots+h_n^2(t)
\]
for $m_L$-a.e. $t\in[0,T]$.
Throughout the rest of this paper, for convenience, we will regard
$\mathbf{s}(\mathcal H)$ as a function in $L_2[0,T]$.
We note that if the functions $h_1,\ldots, h_n$ are in $L_{\infty}[0,T]$,
then we can take $\mathbf{s}(\mathcal H)$ to be in $L_{\infty}[0,T]$.
By an induction argument it follows that
\[
\mathbf{s}(\mathbf{s}(h_1,h_2,\ldots,h_{k-1}),h_k)
=\mathbf{s}(h_1,h_2,\ldots,h_k)
\]
for all $k\in\{2,\ldots,n\}$.
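Indeed, modulo the equivalence relation $\sim$, both sides of this identity have the same square:
\[
\mathbf{s}(\mathbf{s}(h_1,h_2,\ldots,h_{k-1}),h_k)^2
=\mathbf{s}(h_1,h_2,\ldots,h_{k-1})^2+h_k^2
=h_1^2+\cdots+h_k^2 \quad m_L\mbox{-a.e. on } [0,T].
\]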
\renewcommand{\thesection.6}{\thesection.5}
\begin{example}
Let $h_1(t)=t^4$, $h_2(t)=\sqrt{2}t^3$, $h_3(t)=\sqrt{3}t^2$, $h_4(t)={\sqrt{2}}t$, $h_5(t)=1$,
and $\mathbf{s}(t)=t^4 +t^2 +1$ for $t\in [0,T]$.
Then $\mathcal H=\{h_1,h_2,h_3,h_4,h_5\}$ is a sequence of functions in $L_2[0,T]$
and it follows that
\[
\mathbf{s}^2(t)= h_1^2(t)+ h_2^2(t)+ h_3^2(t)+ h_4^2(t)+ h_5^2(t).
\]
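Indeed, expanding both sides gives the same polynomial:
\[
(t^4+t^2+1)^2=t^8+2t^6+3t^4+2t^2+1
=(t^4)^2+(\sqrt2\,t^3)^2+(\sqrt3\,t^2)^2+(\sqrt2\,t)^2+1^2.
\]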
Thus we can write $\mathbf{s}\equiv \mathbf{s}(h_1,h_2,h_3,h_4,h_5)$.
Furthermore, one can see that
\[
(-1)^{m}\mathbf{s}\equiv \mathbf{s}((-1)^{n_1}h_1,(-1)^{n_2}h_2,(-1)^{n_3}h_3,(-1)^{n_4}h_4,(-1)^{n_5}h_5)
\]
with $m, n_1,n_2,n_3,n_4,n_5 \in \{1,2\}$.
On the other hand, it also follows that
\[
\mathbf{s}(h_1,h_2,h_3,h_4,h_5)(t)\equiv \mathbf{s}(g_1,g_2,g_3) (t)
\]
for each $t\in [0,T]$,
where $g_1(t)=-t^4-1$, $g_2(t)={\sqrt2} t\sqrt{t^4+1}$, and $g_3(t)=t^2$ for $t\in [0,T]$.
\end{example}
\renewcommand{\thesection.6}{\thesection.6}
\begin{example}
Let $h_1(t)=t^4+t^2$, $h_2(t)=t^4-t^2$, $h_3(t)=\sqrt2 t^3$,
and $\mathbf{s}(t)=\sqrt{2(t^8 +t^4)}$ for $t\in [0,T]$.
Then, by the convention for $\mathbf{s}$, it follows that
\[
\mathbf{s}(t)\equiv\mathbf{s}(h_1,h_2) (t)
\equiv \mathbf{s}({\sqrt2} h_2 ,{\sqrt2} h_3)(t).
\]
\end{example}
\renewcommand{\thesection.6}{\thesection.7}
\begin{example}
Using the well-known formulas for trigonometric and hyperbolic functions,
it follows that
\[
\begin{aligned}
\sec \big(\tfrac{\pi}{4 T} t\big)
&=\mathbf{s}\big(1, \tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t)\\
&=\mathbf{s}\big(\sin,\cos,\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t) \\
&
=\mathbf{s}\big(\sin\big(\tfrac{\pi}{4 T} \cdot\big),\cos\big(\tfrac{\pi}{4 T}
\cdot\big),\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t),
\end{aligned}
\]
\[
\cosh t =\mathbf{s}(1, \sinh)(t)=\mathbf{s}(-1, \sinh)(t)=\mathbf{s}(\sin,\cos,\sinh)(t) ,
\]
and
\[
-\coth \big(t+\tfrac12\big) =\mathbf{s}\big(1, \mathrm{csch}\big(\cdot+\tfrac12\big)\big)(t)
=\mathbf{s}(-\sin,\cos,- \mathrm{csch}\big(\cdot+\tfrac12\big))(t)
\]
for each $t\in [0,T]$.
\end{example}
\setcounter{equation}{0}
\section{The relationship between the GFFT and the GCP}
\par
The Banach algebra $\mathcal S(L_2[0,T])$ consists of functionals
on $C_0[0,T]$ expressible in the form
\begin{equation}\label{eq:element}
F(x)=\int_{L_2[0,T]}\exp\{i\langle{u,x}\rangle\}df(u)
\end{equation}
for s-a.e. $x\in C_0[0,T]$, where the associated measure $f$ is an
element of $\mathcal M(L_2[0,T])$, the space of $\mathbb C$-valued
countably additive (and hence finite) Borel measures on $L_2[0,T]$,
and the pair $\langle{u,x}\rangle$ denotes the Paley--Wiener--Zygmund
stochastic integral $\mathcal Z_u(x,T) \equiv \int_0^T u(s)\tilde{d}x(s)$.
For more details, see \cite{CS80,CPS93,HPS97-2,PSS98}.
\par
We first present two known results for the GFFT and the GCP
of functionals in the Banach algebra $\mathcal S(L_2[0,T])$.
\renewcommand{\thesection.3}{\thesection.1}
\begin{theorem}[\cite{HPS97-2}]\label{thm:gfft}
Let $h$ be a nonzero function in $L_\infty[0,T]$, and let
$F\in\mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}. Then,
for all $p\in[1,2]$, the $L_p$ analytic GFFT, $T_{q,h}^{(p)}(F)$ of $F$
exists for all nonzero real numbers $q$, belongs to $\mathcal S(L_2[0,T])$,
and is given by the formula
\[
T_{q,h}^{(p)}(F)(y)
= \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^h(u)
\]
for s-a.e. $y\in C_{0}[0,T]$, where $f_t^h$ is the complex measure in
$\mathcal M(L_2[0,T])$ given by
\[
f_t^{h}(B)=\int_B \exp\bigg\{-\frac{i}{2q}\|uh\|_2^2\bigg\}df(u)
\]
for $B \in \mathcal B(L_2[0,T])$.
\end{theorem}
\renewcommand{\thesection.3}{\thesection.2}
\begin{theorem}[\cite{Indag}] \label{thm:gcp}
Let $k_1$ and $k_2$ be nonzero functions in $L_\infty[0,T]$ and let $F$
and $G$ be elements of $\mathcal S(L_2[0,T])$ with corresponding finite
Borel measures $f$ and $g$ in $\mathcal M(L_2[0,T])$. Then, the GCP
$(F*G)_q^{(k_1,k_2)}$ exists for all nonzero real $q$, belongs to
$\mathcal S(L_2[0,T])$, and is given by the formula
\[
(F*G)_q^{(k_1,k_2)}(y)
= \int_{L_2[0,T]}\exp\{i\langle{w,y}\rangle\}d\varphi^{k_1,k_2}_c(w)
\]
for s-a.e. $y\in C_{0}[0,T]$, where
\[
\varphi^{k_1,k_2}_c
=\varphi^{k_1,k_2}\circ\phi^{-1},
\]
$\varphi^{k_1,k_2}$ is the complex measure in $\mathcal M(L_2^2[0,T])$ given
by
\[
\varphi^{k_1,k_2}(B)
=\int_B \exp\bigg\{-\frac{i}{4q}\|uk_1-vk_2\|_2^2\bigg\}df(u)dg(v)
\]
for $B \in \mathcal B(L_2^2[0,T])$, and $\phi:L_2^2[0,T]\to L_2[0,T]$ is the
continuous function given by $\phi(u,v)=(u+v)/\sqrt2$.
\end{theorem}
\par
The following corollary and theorem will be very useful in proving our main theorem
(namely, Theorem \ref{thm:cp-tpq02}), in which we establish
a relationship between the GFFT and the GCP analogous to equation \eqref{eq:ocp-offt}.
The following corollary is a simple consequence of Theorem \ref {thm:gfft}.
\renewcommand{\thesection.9}{\thesection.3}
\begin{corollary}\label{thm:afft-inverse}
Let $h$ and $F$ be as in Theorem \ref{thm:gfft}.
Then, for all $p\in[1,2]$, and all nonzero real $q$,
\begin{equation}\label{eq:inverse}
T_{-q, h}^{(p)}\big(T_{q,h}^{(p)}(F)\big)\approx F.
\end{equation}
As such, the GFFT, $T_{q,h}^{(p)}$, has the
inverse transform $\{T_{q,h}^{(p)}\}^{-1}=T_{-q,h}^{(p)}$.
\end{corollary}
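This can be read off from Theorem \ref{thm:gfft}: applying $T_{q,h}^{(p)}$ and then $T_{-q,h}^{(p)}$ multiplies
the measure $f$ associated with $F$ by
\[
\exp\bigg\{-\frac{i}{2q}\|uh\|_2^2\bigg\}\exp\bigg\{-\frac{i}{2(-q)}\|uh\|_2^2\bigg\}=1,
\]
so the composition returns the original functional $F$ up to s-a.e. equivalence.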
\par
The following theorem is due to Chang, Chung and Choi \cite{Indag}.
\renewcommand{\thesection.3}{\thesection.4}
\begin{theorem} \label{thm:gfft-gcp-compose}
Let $k_1$, $k_2$, $F$, and $G$ be as in Theorem \ref{thm:gcp},
and let $h$ be a nonzero function in $L_\infty[0,T]$. Assume that $h^2=k_1k_2$ $m_L$-a.e.
on $[0,T]$. Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation}\label{eq:gfft-gcp}
\begin{aligned}
&T_{q,h}^{(p)}\big((F*G)_q^{(k_1,k_2)}\big)(y) \\
& = T_{q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg)
T_{q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg)
\end{aligned}
\end{equation}
for s-a.e. $y\in C_{0}[0,T]$, where
for $j\in \{1,2\}$, $\mathbf{s}(h,k_j)$ is a function satisfying the relation
\eqref{eq:fn-rot} with $h_1=h$ and $h_2=k_j$.
\end{theorem}
\renewcommand{\thesection.4}{\thesection.5}
\begin{remark}
In equation \eqref{eq:gfft-gcp}, choosing $h=k_1=k_2\equiv 1$
yields equation \eqref{eq:offt-ocp} above.
Also, letting $h=k_1=k_2$ yields the results studied in \cite{CCKSY05,HPS97-2}.
As mentioned above, equation \eqref{eq:gfft-gcp} is a more general extension of equation \eqref{worthy}
to the case on an infinite dimensional Banach space.
\end{remark}
\par
We are now ready to establish our main theorem in this paper.
\renewcommand{\thesection.3}{\thesection.6}
\begin{theorem} \label{thm:cp-tpq02}
Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}.
Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation}\label{eq:cp-fft-basic}
\begin{aligned}
&\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y) \\
&=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\end{equation}
for s-a.e. $y\in C_0[0,T]$, where
for $j\in \{1,2\}$, $\mathbf{s}(h,k_j)$ is a function satisfying the relation
\eqref{eq:fn-rot} with $h_1=h$ and $h_2=k_j$.
\end{theorem}
\begin{proof}
Applying \eqref{eq:inverse}, \eqref{eq:gfft-gcp} with $F$, $G$, and $q$
replaced with $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$,
$T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G)$,
and $-q$, respectively, and \eqref{eq:inverse} again,
it follows that for s-a.e. $y\in C_0[0,T]$,
\[
\begin{aligned}
&\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y)\\
&= T_{q,h}^{(p)}\Big(T_{-q,h}^{(p)}\Big(\big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}\Big)\Big)(y)\\
&= T_{q,h}^{(p)}\bigg(
T_{-q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)}(F)\Big)
\bigg(\frac{\cdot}{\sqrt2}\bigg)\\
&\qquad\quad\times
T_{-q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)}(G)\Big)
\bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)
(y)\\
&=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\]
as desired.
\qed\end{proof}
\renewcommand{\thesection.4}{\thesection.7}
\begin{remark}\label{re:meaning-main}
(i) Equation \eqref{eq:gfft-gcp} shows that the GFFT of the GCP of two functionals
is the ordinary product of their transforms.
On the other hand, equation \eqref{eq:cp-fft-basic} above shows that
the GCP of the GFFTs of two functionals is the GFFT of the product of the functionals.
These equations are useful in that they permit one to calculate
$T_{q,h}^{(p)}((F*G)_q^{(k_1,k_2)})$ and
$(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G))_{-q}^{(k_1,k_2)}$
without actually calculating the GCPs involved, respectively.
In practice, equation \eqref{eq:cp-fft-basic} tells us that
$T_{q,h}^{(p)}(F(\frac{\cdot}{\sqrt2}) G(\frac{\cdot}{\sqrt2} ))$
is easier to calculate
than $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$,
$T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)} (G)$, and
$(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) )_{-q}^{(k_1,k_2)}$.
(ii) Equation \eqref{eq:cp-fft-basic} is a more general extension of equation \eqref{eq:F-02}
to the case on an infinite dimensional Banach space.
\end{remark}
\renewcommand{\thesection.9}{\thesection.8}
\begin{corollary}[Theorem 3.1 in \cite{PSS98}]
Let $F$ and $G$ be as in Theorem \ref{thm:gcp}. Then, for all $p\in[1,2]$
and all real $q\in\mathbb R\setminus\{0\}$,
\[
\Big( T_q^{(p)}(F)*T_q^{(p)}(G)\Big)_{-q} (y)
=T_q^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y)
\]
for s-a.e. $y\in C_{0}[0,T]$, where $T_q^{(p)}(F)$ denotes the ordinary
analytic FFT of $F$ and $(F*G)_q$
denotes the CP of $F$ and $G$ (see Remarks \ref{remark:ordinary-fft}
and \ref{remark:ordinary-cp}).
\end{corollary}
\begin{proof}
In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2\equiv 1$.
\qed\end{proof}
\renewcommand{\thesection.9}{\thesection.9}
\begin{corollary}[Theorem 3.2 in \cite{CCKSY05}]
Let $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}.
Then, for all $p\in[1,2]$ and all real $q\in\mathbb R\setminus\{0\}$,
\[
\Big(T_{q,h}^{(p)}(F)*T_{q,h}^{(p)}(G)\Big)_{-q} (y)
=T_{q,h}^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y)
\]
for s-a.e. $y\in C_{0}[0,T]$, where $(F*G)_q\equiv (F*G)_q^{(h,h)}$ denotes the GCP of $F$
and $G$ studied in \cite{CCKSY05,HPS97-2} (see Remark \ref{remark:ordinary-cp}).
\end{corollary}
\begin{proof}
In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2$.
\qed\end{proof}
\setcounter{equation}{0}
\section{Examples}
The assertion in Theorem \ref{thm:cp-tpq02} above can be applied to many
Gaussian processes $\mathcal Z_h$ with $h\in L_\infty[0,T]$.
In view of the assumption in Theorems \ref{thm:gfft-gcp-compose} and \ref{thm:cp-tpq02},
we have to check that
there exist solutions $\{h,k_1,k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of the system
\[
\begin{cases}
\mbox{(i)} &h^2 =k_1k_2,\\
\mbox{(ii)} &\mathbf{s}_1=\mathbf{s}(h,k_1) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\
\mbox{(iii)} &\mathbf{s}_2=\mathbf{s}(h,k_2) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],
\end{cases}
\]
or, equivalently,
\begin{equation}\label{system}
\begin{cases}
\mbox{(i)} &h^2 =k_1k_2,\\
\mbox{(ii)} &\mathbf{s}_1^2=h^2 +k_1^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\
\mbox{(iii)} &\mathbf{s}_2^2=h^2 +k_2^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T].
\end{cases}
\end{equation}
Throughout this section, we will present some examples for
the solution sets of the system \eqref{system}.
To do this we consider the Wiener space $C_0[0,1]$
and the Hilbert space $L_2[0,1]$ for simplicity.
\renewcommand{\thesection.6}{\thesection.1}
\begin{example} (Polynomials)
The set $\mathcal P =\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
& h(t) = 2t(t^2-1) \\
&k_1(t) =(t^2-1)^2, \\
&k_2(t) =4t^2, \\
&\mathbf{s}_1(t) = (t^2-1)(t^2+1), \\
&\mathbf{s}_2(t) = 2 t(t^2+1)
\end{cases}
\] is a solution set of the system \eqref{system}.
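All three identities of \eqref{system} can be checked by direct expansion:
\[
\begin{aligned}
h^2(t)&=4t^2(t^2-1)^2=k_1(t)k_2(t),\\
h^2(t)+k_1^2(t)&=(t^2-1)^2\big(4t^2+(t^2-1)^2\big)=\big((t^2-1)(t^2+1)\big)^2=\mathbf{s}_1^2(t),\\
h^2(t)+k_2^2(t)&=4t^2\big((t^2-1)^2+4t^2\big)=\big(2t(t^2+1)\big)^2=\mathbf{s}_2^2(t).
\end{aligned}
\]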
Thus
\[
\mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t)
=(t^2-1)(t^2+1),
\]
and
\[
\mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t)
=2 t(t^2+1)
\]
for all $t\in [0,1]$. In this case, equation \eqref{eq:cp-fft-basic}
with the functions in $\mathcal P$ holds for any functionals $F$ and $G$
in $\mathcal S(L_2[0,1])$.
\end{example}
\renewcommand{\thesection.6}{\thesection.2}
\begin{example} (Trigonometric functions I)
The set $\mathcal T_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
h(t)=\sin 2t=2\sin t\cos t, \\
k_1(t)=2\sin^2t, \\
k_2(t)=2\cos^2t, \\
\mathbf{s}_1(t)=2\sin t, \\
\mathbf{s}_2(t)=2\cos t
\end{cases}
\] is a solution set of the system \eqref{system}.
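Again the verification is elementary:
\[
\begin{aligned}
h^2(t)&=4\sin^2 t\cos^2 t=k_1(t)k_2(t),\\
h^2(t)+k_1^2(t)&=4\sin^2 t\,(\cos^2 t+\sin^2 t)=4\sin^2 t=\mathbf{s}_1^2(t),\\
h^2(t)+k_2^2(t)&=4\cos^2 t\,(\sin^2 t+\cos^2 t)=4\cos^2 t=\mathbf{s}_2^2(t).
\end{aligned}
\]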
Thus
\[
\mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t)
=\mathbf{s}(2\sin\cos,2\sin^2)(t)
=2 \sin t,
\]
and
\[
\mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t)
=\mathbf{s}(2\sin\cos,2\cos^2)(t)
=2 \cos t
\]
for all $t\in [0,1]$. Also, using equation \eqref{eq:cp-fft-basic},
it follows that for all $p\in[1,2]$, all nonzero real $q$, and all functionals $F$ and $G$ in
$\mathcal S(L_2[0,1])$,
\[
\Big(T_{q,\sqrt{2}\sin }^{(p)} (F)*
T_{q,\sqrt{2}\cos}^{(p)}(G) \Big)_{-q}^{(2\sin^2,2\cos^2)}(y)
=T_{q,2\sin\cos}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\]
for s-a.e. $y\in C_0[0,1]$.
\end{example}
\renewcommand{\thesection.6}{\thesection.3}
\begin{example} (Trigonometric functions II)
The set $\mathcal T_2=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
h(t)=\sqrt2\sin t, \\
k_1(t)=\sqrt2\sin t\tan t, \\
k_2(t)=\sqrt2\cos t, \\
\mathbf{s}_1(t)=\sqrt2\tan t, \\
\mathbf{s}_2(t)=\sqrt2
\end{cases}
\]
is a solution set of the system \eqref{system}.
Thus
\[
\mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t)
=\mathbf{s}(\sqrt2\sin,\sqrt2\sin\tan)(t)
= \sqrt2\tan t,
\]
and
\[
\mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t)
=\mathbf{s}(\sqrt2\sin,\sqrt2\cos)(t)
=\sqrt2 \,\,\,\,\,(\mbox{constant function})
\]
for all $t\in [0,1]$.
\end{example}
\renewcommand{\thesection.6}{\thesection.4}
\begin{example} (Hyperbolic functions)
The hyperbolic functions are defined in terms of the exponential functions
$e^{x}$ and $e^{-x}$. The set $\mathcal H=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
h(t)=1 , \\
k_1(t)= \sinh\big(t+\tfrac12\big), \\
k_2(t)= \mathrm{csch} \big(t+\tfrac12\big), \\
\mathbf{s}_1(t)= \cosh\big(t+\tfrac12\big), \\
\mathbf{s}_2(t)= \coth\big(t+\tfrac12\big)
\end{cases}
\] is a solution set of the system \eqref{system}.
Thus
\[
\mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t)
=\mathbf{s}\big(1,\sinh\big(\cdot+\tfrac12\big)\big)(t)
= \cosh\big(t+\tfrac12\big),
\]
and
\[
\mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t)
=\mathbf{s}\big(1,\mathrm{csch} \big(\cdot+\tfrac12\big)\big)(t)
= \coth\big(t+\tfrac12\big)
\]
for all $t\in [0,1]$.
\end{example}
\setcounter{equation}{0}
\section{Iterated GFFTs and GCPs}\label{relation2}
\par
In this section, we present general relationships
between the iterated GFFT and the GCP for functionals
in $\mathcal S(L_2[0,T])$ which extend equation \eqref{eq:cp-fft-basic}.
To do this we quote a result from \cite{Indag}.
\renewcommand{\thesection.3}{\thesection.1}
\begin{theorem}\label{thm:2018-step1}
Let $F\in \mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}, and
let $\mathcal H=\{h_1,\ldots,h_n\}$ be a finite sequence of nonzero functions in $L_\infty[0,T]$.
Then, for all $p\in[1,2]$ and all nonzero real $q$, the iterated $L_p$ analytic GFFT,
\[
T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big)
\]
of $F$ exists, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula
\[
T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)
= \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^{h_1,\ldots,h_n}(u)
\]
for s-a.e. $y\in C_{0}[0,T]$, where $f_t^{h_1,\ldots,h_n}$ is the complex
measure in $\mathcal M(L_2[0,T])$ given by
\[
f_t^{h_1,\ldots,h_n}(B)
=\int_B \exp\bigg\{-\frac{i}{2q}\sum_{j=1}^n\|uh_j\|_2^2\bigg\}df(u)
\]
for $B \in \mathcal B(L_2[0,T])$. Moreover it follows that
\begin{equation}\label{eq:gfft-n-fubini-add}
T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big)(y)
=T_{q, \mathbf{s}(\mathcal H)}^{(p)}(F)(y)
\end{equation}
for s-a.e. $y\in C_{0}[0,T]$, where
$\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,\ldots,h_n)$ is a function in $L_{\infty}[0,T]$
satisfying the relation
\begin{equation}\label{eq:fn-rot-ind}
\mathbf{s}(\mathcal H)^2(t)=h_1^2(t)+\cdots+h_n^2(t)
\end{equation}
for $m_L$-a.e. $t\in [0,T]$.
\end{theorem}
We next establish two types of extensions of Theorem \ref{thm:cp-tpq02} above.
\renewcommand{\thesection.3}{\thesection.2}
\begin{theorem} \label{thm:iter-gfft-gcp-compose}
Let $k_1$, $k_2$, $F$, and $G$ be as in
Theorem \ref{thm:gcp}, and let $\mathcal H=\{h_1,\ldots,h_n\}$
be a finite sequence of nonzero functions in $L_{\infty}[0,T]$.
Assume that
\[
\mathbf{s}(\mathcal H)^2 \equiv \mathbf{s} (h_1,\ldots,h_n)^2=k_1k_2
\]
for $m_L$-a.e. on $[0,T]$, where $\mathbf{s}(\mathcal H)$ is the function in $L_{\infty}[0,T]$
satisfying \eqref{eq:fn-rot-ind} above.
Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation} \label{eq:multi-rel-01}
\begin{aligned}
&\Big(T_{q,k_1/\sqrt2}^{(p)}
\big(T_{q,h_n/\sqrt2}^{(p)}\big(\cdots\big(T_{q,h_2/\sqrt2}^{(p)}
\big(T_{q,h_1/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\\
&\qquad
*T_{q,k_2/\sqrt2}^{(p)}\big( T_{q,h_n/\sqrt2}^{(p)}\big( \cdots\big(T_{q,h_2/\sqrt2}^{(p)}
\big(T_{q,h_1/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\Big)_{-q}^{(k_1,k_2)}(y)\\
&=\Big(T_{q, \mathbf{s}(\mathcal H,k_1)/\sqrt2}^{(p)}(F)
*T_{q, \mathbf{s}(\mathcal H,k_2)/\sqrt2}^{(p)}(G)\Big)_{-q}^{(k_1,k_2)}(y)\\
&= T_{q,\mathbf{s}(\mathcal H)}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\end{equation}
for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(\mathcal H,k_1)$
and $\mathbf{s}(\mathcal H,k_2)$
are functions in $L_{\infty}[0,T]$ satisfying the relations
\[
\mathbf{s}(\mathcal H,k_1)^2
\equiv \mathbf{s}(h_1,\ldots,h_n,k_1)^2
=h_1^2 +\cdots+h_n^2 +k_1^2
\]
and
\[
\mathbf{s}(\mathcal H,k_2)^2\equiv \mathbf{s}(h_1,\ldots,h_n,k_2)^2
=h_1^2 +\cdots+h_n^2 +k_2^2
\]
for $m_L$-a.e. on $[0,T]$, respectively.
\end{theorem}
\begin{proof}
Applying \eqref{eq:gfft-n-fubini-add}, the first equality of \eqref{eq:multi-rel-01}
follows immediately. Next using \eqref{eq:cp-fft-basic} with $h$ replaced with $\mathbf{s}(\mathcal H)$,
the second equality of \eqref{eq:multi-rel-01} also follows.
\qed\end{proof}
\par
In view of equations \eqref{eq:gfft-n-fubini-add} and \eqref{eq:cp-fft-basic},
we also obtain the following assertion.
\renewcommand{\thesection.3}{\thesection.3}
\begin{theorem} \label{thm:iter-gfft-gcp-compose-2nd}
Let $F$ and $G$ be as in Theorem \ref{thm:gcp}.
Given a nonzero function $h$ in $L_{\infty}[0,T]$
and finite sequences $\mathcal K_1=\{k_{11},k_{12},\ldots,k_{1n}\}$
and $\mathcal K_2=\{k_{21},k_{22},\ldots,k_{2m}\}$ of nonzero functions in $L_{\infty}[0,T]$,
assume that
\[
h^2=\mathbf{s}(\mathcal K_1)\mathbf{s}(\mathcal K_2)
\]
for $m_L$-a.e. on $[0,T]$.
Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation} \label{eq:multi-rel-02-2nd}
\begin{aligned}
&\Big(T_{q,h/\sqrt2}^{(p)}
\big(T_{q,k_{1n}/\sqrt2}^{(p)}
\big(\cdots
\big(T_{q,k_{12}/\sqrt2}^{(p)}\big(T_{q,k_{11}/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\big)\\
&\quad
*T_{q,h/\sqrt2}^{(p)}\big( T_{q,k_{2m}/\sqrt2}^{(p)}
\big( \cdots
\big(T_{q,k_{22}/\sqrt2}^{(p)}\big(T_{q,k_{21}/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\big)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\
&=\Big(T_{q,h/\sqrt2}^{(p)}\big(T_{q,\mathbf{s}(\mathcal K_1)/\sqrt2}^{(p)}(F)\big)
*T_{q,h/\sqrt2}^{(p)}\big( T_{q,\mathbf{s}(\mathcal K_2)/\sqrt2}^{(p)}(G)\big) \Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\
&=\Big(T_{q, \mathbf{s}(h,\mathbf{s}(\mathcal K_1))/\sqrt2}^{(p)}(F)
*T_{q,\mathbf{s}(h,\mathbf{s}(\mathcal K_2))/\sqrt2}^{(p)}(G)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\
&= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\end{equation}
for s-a.e. $y\in C_0[0,T]$, where
$\mathbf{s}(h,\mathbf{s}(\mathcal K_1))$, and $\mathbf{s}(h,\mathbf{s}(\mathcal K_2))$
are functions in $L_{\infty}[0,T]$ satisfying the relations
\[
\mathbf{s}(h,\mathbf{s}(\mathcal K_1))^2
=h^2 +\mathbf{s} (\mathcal K_1)^2=h^2+k_{11}^2 + \cdots+k_{1n}^2,
\]
and
\[
\mathbf{s}(h,\mathbf{s}(\mathcal K_2))^2
=h^2 +\mathbf{s} (\mathcal K_2)^2=h^2+k_{21}^2 +\cdots+k_{2m}^2
\]
for $m_L$-a.e. on $[0,T]$, respectively.
\end{theorem}
\renewcommand{\thesection.4}{\thesection.4}
\begin{remark}
Note that given the functions $\{\mathbf{s}(\mathcal H),k_1,k_2, \mathbf{s}(\mathcal H,k_1),\mathbf{s}(\mathcal H,k_2) \}$
in Theorem \ref{thm:iter-gfft-gcp-compose}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,T]$ with
\[
\begin{cases}
h(t)=\mathbf{s}(\mathcal H)(t), \\
\mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\
\mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t)
\end{cases}
\]
is a solution set of the system \eqref{system}.
Also, given the functions
\[
\{h,\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2),\mathbf{s}(h,\mathbf{s}(\mathcal K_1)),\mathbf{s}(h,\mathbf{s}(\mathcal K_2))\}
\]
in Theorem \ref{thm:iter-gfft-gcp-compose-2nd}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,T]$ with
\[
\begin{cases}
k_1(t)=\mathbf{s}(\mathcal K_1)(t), \\
k_2(t)=\mathbf{s}(\mathcal K_2)(t), \\
\mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\
\mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t)
\end{cases}
\]
is a solution set of the system \eqref{system}.
\end{remark}
In the following two examples, we also consider the Wiener space $C_0[0,1]$
and the Banach space $L_\infty[0,1]$ for simplicity.
\renewcommand{\thesection.6}{\thesection.5}
\begin{example}
Let
$h_1(t)=\sin \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$,
$h_2(t)=\cos \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$,
$h_3(t)=\tan\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$,
$k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$,
and
$k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$
on $[0,1]$.
Then $\{h_1,h_2,h_3,k_1,k_2\}$ is a set of functions in $L_\infty[0,1]$, and given the set $\mathcal H=\{h_1,h_2,h_3\}$, it
follows that
\[
\begin{aligned}
\mathbf{s}(\mathcal H)^2(t)
&\equiv\mathbf{s}(h_1,h_2,h_3)^2(t)\\
&=\mathbf{s}\big(\sin\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big),\cos\tfrac{\pi}{4}
\big(\cdot+\tfrac{1}{2} \big),\tan\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big)\big)^2(t)\\
&=\sec^2 \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\
&=k_1(t)k_2(t),
\end{aligned}
\]
\[
\begin{aligned}
\mathbf{s}(\mathcal H,k_1)^2(t)
&\equiv \mathbf{s}(h_1,h_2,h_3,k_1)^2(t)\\
&=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)+\tan^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)
=\mathbf{s}(\mathbf{s}(\mathcal H),k_1)^2(t),
\end{aligned}
\]
and
\[
\begin{aligned}
\mathbf{s}(\mathcal H,k_2)^2(t)
&\equiv \mathbf{s}(h_1,h_2,h_3,k_2)^2(t)\\
&=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) +\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\
&=\mathbf{s}(\mathbf{s}(\mathcal H),k_2)^2(t),
\end{aligned}
\]
for all $t\in [0,1]$.
From this we see that
the set $\mathcal F_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
h(t)= \mathbf{s}(h_1,h_2,h_3)(t)=\sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), \\
k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\
k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\
\mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\
\mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t)
\end{cases}
\]
is a solution set of the system \eqref{system}, and
equation \eqref{eq:multi-rel-01} holds with the sequence $\mathcal H=\{h_1,h_2,h_3\}$ and the functions $k_1$ and $k_2$.
\end{example}
In the next example, the kernel functions of the Gaussian processes
defining the transforms and convolutions involve trigonometric and hyperbolic (and hence exponential) functions.
\renewcommand{\thesection.6}{\thesection.6}
\begin{example}
Consider the function
\[
h(t)= 2\sqrt{\csc\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big) \mathrm{cosh}\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big)}
\]
on $[0,1]$, and the finite sequences
\[
\mathcal K_1=\big\{2\mathrm{tanh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),
2\mathrm{sech} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),2 \cot\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\}
\]
and
\[
\mathcal K_2=\big\{\sqrt2\sin\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\cos\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),
\sqrt2\mathrm{sinh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\mathrm{cosh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\}
\]
of functions in $L_{\infty}[0,1]$.
Then using the relationships among hyperbolic functions and among trigonometric functions,
one can see that
\[
\mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\quad \mbox{ and }
\quad \mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)
\]
on $[0,1]$.
From this we also see that
the set $\mathcal F_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$
of functions in $L_\infty[0,1]$ with
\[
\begin{cases}
h(t)= 2\sqrt{\csc\frac{\pi}{4}( t+\frac{1}{2}) \mathrm{cosh}\frac{\pi}{4}( t+\frac{1}{2})}, \\
k_1(t)=\mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\
k_2(t)=\mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\
\mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\
\mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t)
\end{cases}
\]
is a solution set of the system \eqref{system}, and
equation \eqref{eq:multi-rel-02-2nd} holds with the function $h$, and the sequences $\mathcal K_1$ and $\mathcal K_2$.
\end{example}
\setcounter{equation}{0}
\section{Further results}
\par
In this section, we derive a more general relationship
between the iterated GFFT and the GCP for functionals
in $\mathcal S(L_2[0,T])$.
To do this we also quote a result from \cite{Indag}.
\renewcommand{\thesection.3}{\thesection.1}
\begin{theorem} \label{thm:iter-gfft-more-1}
Let $F$ and $\mathcal H=\{h_1,\ldots,h_n\}$ be as in Theorem \ref{thm:2018-step1}.
Assume that $q_1,q_2,\ldots, q_n$ are nonzero real numbers with
$\mathrm{sgn}(q_1)=\cdots=\mathrm{sgn}(q_n)$, where `$\mathrm{sgn}$' denotes the
sign function. Then,
for all $p\in[1,2]$,
\[
\begin{aligned}
&T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big(
\cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)\\
&=T_{\alpha_n,\tau_n^{(n)}h_n}^{(p)}\Big(T_{\alpha_{n},\tau_n^{(n-1)}h_{n-1}}^{(p)}
\Big(\cdots \big(T_{\alpha_n,\tau_n^{(2)}h_2}^{(p)}
\big(T_{\alpha_n,\tau_n^{(1)}h_1}^{(p)}(F) \big)\big)\cdots\Big)\Big) (y)
\end{aligned}
\]
for s-a.e. $y\in C_{0}[0,T]$, where $\alpha_n$ is given by
\[
\alpha_n=\frac{1}{\frac{1}{q_1}+\frac{1}{q_2}+\cdots+\frac{1}{q_n}}
\]
and $\tau_n^{(j)}=\sqrt{{\alpha_n}/{q_j}}$
for each $j\in \{1,\ldots,n\}$. Moreover it follows that
\[
T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big(
\cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)
=T_{\alpha_n,\mathbf{s}(\tau\mathcal H)}^{(p)}(F)(y)
\]
for s-a.e. $y\in C_{0}[0,T]$, where
$\mathbf{s}(\tau\mathcal H) \equiv \mathbf{s}(\tau_n^{(1)}h_1, \ldots, \tau_n^{(n)}h_n )$
is a function in $L_{\infty}[0,T]$ satisfying the relation
\[
\mathbf{s}(\tau\mathcal H) ^2(t)
=(\tau_n^{(1)}h_1)^2(t)+ \ldots+ (\tau_n^{(n)}h_n)^2(t)
\]
for $m_L$-a.e. $t\in [0,T]$.
\end{theorem}
\par
Next, by a careful examination we see that for all $F\in \mathcal S(L_2[0,T])$
and any real $\beta>0$,
\begin{equation}\label{eq:2018-new-parameter-change}
T_{\beta q,h}^{(p)} (F) \approx T_{q,h/\sqrt{\beta}}^{(p)}(F) .
\end{equation}
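Indeed, for $F$ given by \eqref{eq:element}, Theorem \ref{thm:gfft} shows that both sides of
\eqref{eq:2018-new-parameter-change} are obtained by integrating $\exp\{i\langle u,y\rangle\}$ against
the measure which multiplies $f$ by
\[
\exp\bigg\{-\frac{i}{2\beta q}\|uh\|_2^2\bigg\}
=\exp\bigg\{-\frac{i}{2 q}\big\|u h/\sqrt{\beta}\big\|_2^2\bigg\}.
\]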
Using \eqref{eq:2018-new-parameter-change} and \eqref{eq:cp-fft-basic},
we have the following lemma.
\renewcommand{\thesection.2}{\thesection.2}
\begin{lemma} \label{thm:2018-last-pre}
Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in
Theorem \ref{thm:gfft-gcp-compose}. Let
$q$, $q_1$, and $q_2$ be nonzero real numbers with
$\mathrm{sgn}(q)=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_2)$.
Then, for all $p\in [1,2]$,
\[
\begin{aligned}
&
\big(T_{q_1,\sqrt{q_1/(2q)} \mathbf{s}(h,k_1)}^{(p)} (F)*
T_{q_2,\sqrt{q_2/(2q)}\mathbf{s}(h,k_2)}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}(y)\\
&
=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\]
for s-a.e. $y\in C_{0}[0,T]$.
\end{lemma}
\par
Finally, in view of Theorem \ref{thm:iter-gfft-more-1}
and Lemma \ref{thm:2018-last-pre}, we obtain the following
assertion.
\renewcommand{\thesection.3}{\thesection.3}
\begin{theorem} \label{thm:2018-last}
Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in
Theorem \ref{thm:gfft-gcp-compose}. Let
$\mathcal H_1=\{h_{1j}\}_{j=1}^n$
and
$\mathcal H_2=\{h_{2l}\}_{l=1}^m$
be finite sequences of nonzero
functions in $L_{\infty}[0,T]$.
Given nonzero real numbers $q$, $q_1$, $q_{11}$, $\ldots$, $q_{1n}$, $q_2$, $q_{21}$,
$\ldots$, $q_{2m}$ with
\[
\begin{aligned}
\mathrm{sgn}(q)
&=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_{11})=\cdots=\mathrm{sgn}(q_{1n})\\
&=\mathrm{sgn}(q_2)=\mathrm{sgn}(q_{21})=\cdots=\mathrm{sgn}(q_{2m}),
\end{aligned}
\]
let
\[
\alpha_{1n}=\frac{1}{\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}},
\]
\[
\alpha_{2m}=\frac{1}{\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}},
\]
\[
\beta_{1n}=\frac{1}{\frac{1}{q_{1}}+\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}},
\]
and
\[
\beta_{2m}=\frac{1}{\frac{1}{q_{2}}+\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}}.
\]
Furthermore, assume that
\[
h^2 = \mathbf{s}(\tau_{1n}\mathcal H_1)\mathbf{s}(\tau_{2m}\mathcal H_2)
\]
$m_L$-a.e. on $[0,T]$, where $\mathbf{s}(\tau_{1n}\mathcal H_1)$
and $\mathbf{s}(\tau_{2m}\mathcal H_2)$ are functions in $L_{\infty}[0,T]$
satisfying the relation
\[
\mathbf{s}(\tau_{1n}\mathcal H_1)^2
\equiv \mathbf{s}(\tau_{1n}^{(1)}h_{11}, \ldots, \tau_{1n}^{(n)}h_{1n} )^2
=(\tau_{1n}^{(1)}h_{11})^2 + \cdots+ (\tau_{1n}^{(n)}h_{1n})^2
\]
and
\[
\mathbf{s}(\tau_{2m}\mathcal H_2)^2
\equiv \mathbf{s}(\tau_{2m}^{(1)}h_{21}, \ldots, \tau_{2m}^{(m)}h_{2m} )^2
=(\tau_{2m}^{(1)}h_{21})^2 + \cdots+ (\tau_{2m}^{(m)}h_{2m})^2 ,
\]
respectively, and where
$\tau_{1n}^{(j)}=\sqrt{{\alpha_{1n}}/{q_{1j}}}$ for each $j\in \{1,\ldots,n\}$, and
$\tau_{2m}^{(l)}=\sqrt{{\alpha_{2m}}/{q_{2l}}}$ for each $l\in \{1,\ldots,m\}$.
For notational convenience, let
\[
h_1'=\sqrt{q_{1}/(2q)}h,\quad h_{1j}'=\sqrt{\alpha_{1n}/(2q)}h_{1j}, \quad j=1,\ldots,n,
\]
and let
\[
h_2' =\sqrt{q_{2}/(2q)}h,\quad h_{2l}'=\sqrt{\alpha_{2m}/(2q)}h_{2l}, \quad l=1,\ldots,m.
\]
Then, for all $p\in[1,2]$,
\[
\begin{aligned}
&\Big(T_{q_1,h_1'}^{(p)}\big(T_{q_{1n},h_{1n}'}^{(p)}\big(
\cdots\big(T_{q_{11},h_{11}'}^{(p)} (F)\big)\cdots \big)\big) \\
& \quad\quad\quad
*T_{q_2,h_2'}^{(p)}\big( T_{q_{2m}, h_{2m}'}^{(p)}\big(
\cdots\big(T_{q_{21}, h_{21}' }^{(p)} (G)\big)\cdots \big)\big)
\Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&=\Big(T_{q_1,\sqrt{q_{1}/(2q)} h}^{(p)}
\big( T_{\alpha_{1n},\sqrt{\alpha_{1n}/(2q)}\mathbf{s}(\tau_{1n}\mathcal H_1)}^{(p)}(F)\big)\\
&\quad \quad \quad
*T_{q_2,\sqrt{q_{2}/(2q)}h}^{(p)}
\big(T_{\alpha_{2m},\sqrt{\alpha_{2m}/(2q)} \mathbf{s}(\tau_{2m}\mathcal H_2)}^{(p)}(G)\big)
\Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&=\Big( T_{\beta_{1n},\sqrt{\beta_{1n}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{1n}\mathcal H_1))}^{(p)}(F)\\
&\qquad \qquad \qquad\qquad\,\,\,
*T_{\beta_{2m},\sqrt{\beta_{2m}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{2m}\mathcal H_2))}^{(p)}(G)
\Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\end{aligned}
\]
for s-a.e. $y\in C_{0}[0,T]$.
\end{theorem}
\end{document} |
\begin{document}
\title{Algebraic stacks}
\begin{abstract}
This is an expository article on the theory of algebraic stacks.
After introducing the general theory, we concentrate in the example
of the moduli
stack of vector bundles, giving a detailed comparison with the
moduli scheme obtained via geometric invariant theory.
\end{abstract}
\section{Introduction}
The concept of algebraic stack is a generalization of the concept of
scheme, in the same sense that the concept of scheme is a generalization of
the concept of projective variety.
In many moduli problems, the functor that we want to study is not
representable by a scheme. In other words, there is no fine moduli
space. Usually this is because the objects that we want to parametrize
have automorphisms. But if we enlarge the category of schemes
(following ideas that go back to Grothendieck and Giraud, and were
developed by Deligne, Mumford and Artin) and consider algebraic
stacks, then we can construct the ``moduli stack'', that captures all
the information that we would like in a fine moduli space.
The idea of enlarging the category of algebraic varieties to study
moduli problems is not new. In fact A. Weil invented the concept of
abstract variety to give an algebraic construction of the Jacobian
of a curve.
These notes are an introduction to the theory of algebraic stacks.
I have tried to emphasize ideas and concepts through examples
instead of detailed proofs (I give references where these can be
found). In particular, section
\ref{sectionversus} is a detailed comparison between the moduli
\textit{scheme} and the moduli \textit{stack} of vector bundles.
First I will give a quick
introduction in subsection \ref{quick}, just to give some motivations
and get a flavour of the theory of algebraic stacks.
Section \ref{sectionstacks} has a more detailed exposition.
There are mainly two ways of introducing stacks. We can
think of them as 2-functors (I learnt this approach from N. Nitsure
and C. Sorger, cf. subsection \ref{subsfunctors}), or as categories
fibered on groupoids (this is the
approach used in the references, cf. subsection
\ref{subsgroupoids}). From the first point of view
it is easier to see in which
sense stacks are generalizations of schemes, and the
definition looks more natural, so conceptually it seems more
satisfactory. But since the references use categories fibered on
groupoids, after we present both points of view, we will
mainly use the second.
The concept of stack is merely a categorical concept. To do geometry
we have to add some conditions, and then we get the concept of
algebraic stack. This is done in subsection \ref{subsalgebraic}.
In subsection \ref{subsgroupspaces} we introduce a third point of view to
understand stacks: as groupoid spaces.
In subsection \ref{subsproperties} we define for algebraic stacks many of the
geometric properties that are defined for schemes (smoothness,
irreducibility, separatedness, properness, etc...). In
subsection \ref{subspoints} we introduce the concept of point and dimension of
an algebraic stack, and in subsection \ref{subssheaves} we define sheaves on
algebraic stacks.
In section \ref{sectionversus} we study in detail the example
of the moduli of
vector bundles on a scheme $X$, comparing the moduli stack with
the moduli scheme.
Appendix A is a brief introduction to Grothendieck topologies, sheaves
and algebraic spaces. In appendix B we define some notions related to
the theory of 2-categories.
\subsection{Quick introduction to algebraic stacks}
\label{quick}
We will start with an example: vector bundles (with fixed prescribed
Chern classes and rank) on a projective scheme $X$ over an
algebraically closed field $k$. What is the moduli stack
${\mathcal{M}}$ of vector bundles on $X$? I don't know a short answer to
this, but instead it is easy to define what is a morphism from a
scheme $B$ to the moduli stack ${\mathcal{M}}$. It is just a family of vector
bundles parametrized by $B$. More precisely, it is a vector bundle $V$
on $B\times X$, flat over $B$, such that the restrictions to the slices
$b\times X$ have the prescribed Chern classes and rank. In other words,
${\mathcal{M}}$ has the property that we expect from a fine moduli space:
the set of morphisms $\operatorname{Hom}(B,{\mathcal{M}})$ is equal to the set of families
parametrized by $B$.
We will say that a diagram
\begin{eqnarray}
\label{commdiag}
\xymatrix{
{B} \ar[r]^f \ar[rd]_{g} & {B'} \ar[d]^{g'} \\
& {{\mathcal{M}}}
}
\end{eqnarray}
is commutative if the vector bundle $V$ on $B\times X$ corresponding
to $g$ is isomorphic to the vector bundle $(f\times \operatorname{id}_X)^*V'$, where
$V'$ is the vector bundle corresponding to $g'$.
Note that in general, if $L$ is a line bundle on $B$, then $V$ and
$V\otimes p^*_B L$ won't be isomorphic, and then the corresponding
morphisms from $B$ to ${\mathcal{M}}$ will be different, as opposed to what
happens with moduli schemes.
A $k$-point in the stack ${\mathcal{M}}$ is a morphism
$u:\operatorname{Spec} k \to{\mathcal{M}}$, in other words, it is a vector
bundle $V$ on $X$, and we say that two points are isomorphic if they
correspond to isomorphic vector bundles. But we shouldn't think of ${\mathcal{M}}$
just as a set of points; it should be thought of as a
category. The objects of ${\mathcal{M}}$ are points\footnote{
To be precise, we should consider also $B$-valued points,
for any scheme $B$, but we will only consider $k$-valued points for
the moment}, i.e. vector
bundles on $X$, and a morphism in ${\mathcal{M}}$ is an isomorphism of vector
bundles. This is the main difference between a scheme and an algebraic
stack: a scheme is a \textit{set} of points,
but an algebraic stack is a \textit{category}, in fact a
\textit{groupoid} (i.e. a category in which all morphisms are isomorphisms).
Each point comes with a group of
automorphisms. Roughly speaking, a scheme (or more generally, an
algebraic space \cite{Ar1}, \cite{K}) can be thought of as an
algebraic stack in which these groups of automorphisms are all trivial.
If $p$ is the $k$-point in ${\mathcal{M}}$ corresponding to
a vector bundle $V$ on $X$, then the group of automorphisms
associated to $p$ is the group of vector bundle automorphisms of $V$.
This is why algebraic stacks are well suited to serve as moduli of
objects that have automorphisms.
An algebraic stack has an atlas. This is a scheme $U$ and a surjective
morphism $u:U \to {\mathcal{M}}$ (with some other properties).
As we have seen, such a morphism $u$ is equivalent to a family of
vector bundles parametrized by $U$, and we say that $u$ is surjective
if for every vector bundle $V$ over $X$ there is at least one point in
$U$ whose corresponding vector bundle is isomorphic to $V$.
The existence of an atlas for an algebraic stack is the
analogue of the fact that for a scheme $B$ there is always an
\textit{affine} scheme $U$ and a surjective morphism $U \to B$ (if
$\{U_i\to B\}$ is a covering of $B$ by affine subschemes, take $U$ to
be the disjoint union $\coprod U_i$). Many local properties (smooth,
normal, reduced...) can be studied by looking at the atlas $U$. It is
true that in some sense an algebraic stack looks, locally, like a
scheme, but we shouldn't take this too far. For instance
the atlas of the classifying stack $BG$ (parametrizing principal
$G$-bundles, cf. example \ref{quotient}) is just a single point.
The dimension of an algebraic stack ${\mathcal{M}}$ will be defined as the
dimension of $U$ minus the relative dimension of the morphism $u$.
The dimension of an algebraic stack can be negative (for instance,
$\dim (BG)=-\dim(G)$).
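For instance (a heuristic computation, made precise once the atlas of a quotient stack is described in example \ref{atlasquotient}): if a smooth affine group $G$ acts on a scheme $X$, the natural morphism $u:X\to [X/G]$ is an atlas whose relative dimension is $\dim(G)$, so that
\[
\dim [X/G]=\dim(X)-\dim(G).
\]
Taking $X=S$ to be a point recovers $\dim(BG)=-\dim(G)$.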
A coherent sheaf $L$ on an algebraic stack ${\mathcal{M}}$ is a law that,
for each morphism $g:B
\to {\mathcal{M}}$, gives a coherent sheaf $L_B$ on $B$, and for each
commutative diagram like (\ref{commdiag}), gives an isomorphism
between $f^* L_{B'}$ and $L_B$. The coherent sheaf $L_B$ should be
thought of as the pullback ``$g^*L$'' of $L$ under $g$ (the compatibility
condition for commutative diagrams is just the condition that
$(g'\circ f)^*L$ should be isomorphic to $f^* {g'}^* L$).
Let's look at another example: the moduli quotient (example
\ref{quotient}). Let $G$ be an
affine algebraic group acting on $X$. For simplicity, assume that there is a
normal subgroup $H$ of $G$ that acts trivially on $X$, and that
$\overline G=G/H$
is an affine group acting freely on $X$; furthermore, assume that there is a
quotient $X \to B$ of this action and that this quotient is a principal
$\overline G$-bundle.
We call $B=X/G$ the \textit{quotient scheme}. Each point
corresponds to a
$G$-orbit of the action. But note that $B$ is also equal to the
quotient $X/\overline G$, because $H$ acts trivially and then $G$-orbits are
the same thing as $\overline G$-orbits. We can say that the quotient scheme
``forgets'' $H$.
One can also define the \textit{quotient stack} $[X/G]$. Roughly
speaking, a point $p$ of $[X/G]$ again corresponds to a $G$-orbit of
the action, but now each
point comes with an automorphism group: given a point $p$ in $[X/G]$,
choose a point $x\in X$ in the orbit corresponding to $p$. The
automorphism group attached to $p$ is the stabilizer $G_x$ of $x$. With
the assumptions that we have made on the action of $G$, the
automorphism group of any point is always $H$.
Then the quotient stack $[X/G]$ is not a scheme, since the automorphism
groups are not trivial. The action of $H$ is trivial, but the
moduli stack still ``remembers'' that there was an action by $H$.
Observe that the stack $[X/\overline G]$ is not isomorphic to the stack
$[X/G]$ (as opposed to what happens with the quotient schemes). Since
the action of $\overline G$ is free on $X$, the automorphism group
corresponding to
each point of $[X/\overline G]$ is trivial, and it can be shown
that, with the assumptions that we made, $[X/\overline G]$ is
represented by the scheme $B$ (this terminology will be made precise
in section \ref{sectionstacks}).
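For a concrete instance of this phenomenon (an illustration chosen here; it is not discussed elsewhere in the text): take $X=\operatorname{Spec} k$ with $G=\mathbb{G}_m$ (the multiplicative group) acting trivially, so that $H=G$ and $\overline G$ is the trivial group. The quotient scheme is just the point $\operatorname{Spec} k$, while the quotient stack is the classifying stack
\[
[\operatorname{Spec} k\,/\,\mathbb{G}_m]=B\mathbb{G}_m ,
\]
whose unique point has automorphism group $\mathbb{G}_m$ and whose dimension is $\dim(B\mathbb{G}_m)=-1$.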
\section{Stacks}
\label{sectionstacks}
\subsection{Stacks as 2-functors. Sheaves of sets.}
\label{subsfunctors}
Given a scheme $M$ over a base scheme $S$, we define its
(contravariant) functor of
points $\operatorname{Hom}_S(-,M)$
$$
\begin{array}{rccc}
{\operatorname{Hom}_S(-,M):} &{({Sch} /S)}& \longrightarrow &{({Sets})} \\
& {B} &\longmapsto &{\operatorname{Hom}_S(B,M)}
\end{array}
$$
where $({Sch} /S)$ is the category of $S$-schemes, $B$ is an
$S$-scheme, and $\operatorname{Hom}_S(B,M)$ is the set of $S$-scheme morphisms.
If we give $({Sch}/S)$ the \'etale topology, $\operatorname{Hom}_S(-,M)$ is a sheaf.
A sheaf of sets on $({Sch}/S)$ with the \'etale topology is called a space.
Then schemes can be thought of as sheaves of sets. Moduli problems can
usually be
described by functors. We say that a sheaf of
sets $F$ is representable by a scheme $M$ if $F$ is isomorphic to the
functor of points $\operatorname{Hom}_S(-,M)$. The scheme $M$ is then called the
fine moduli scheme.
Roughly speaking, this means that there is a one-to-one correspondence
between families of objects parametrized by a scheme $B$ and morphisms
from $B$ to $M$.
\begin{example}[Vector bundles]
\label{defvectorbundle}
\textup{
Let $X$ be a projective scheme over a Noetherian base $S$.
We define the moduli functor
$\underline{\Bund}'$ of vector
bundles of fixed rank $r$ and Chern classes $c_i$ by sending the
scheme $B$ to the set $\underline{\Bund}'(B)$ of isomorphism classes of vector
bundles on $X\times B$, flat over $B$ with rank $r$ and whose
restriction to the
slices $X\times \{b\}$ have Chern classes $c_i$. These vector bundles
should be thought of as families of vector bundles parametrized by $B$.
A morphism $f:B'\to B$ is sent to $\underline{\Bund}'(f)=f^*:\underline{\Bund}'(B) \to
\underline{\Bund}'(B')$, the map of sets induced by the pullback. Usually we will
also fix a polarization $H$ in $X$ and restrict our attention to stable or
semistable vector bundles with respect to this polarization, and
then we consider the corresponding functors $\underline{\Bund}^{\prime s}$ and
$\underline{\Bund}^{\prime ss}$.
}
\end{example}
\begin{example}[Curves]
\textup{
The moduli functor $M_g$ of smooth curves of genus $g$ over $S$ is the
functor that sends each scheme $B$ to the set $M_g(B)$ of isomorphism
classes of smooth and proper morphisms $C \to B$ (where $C$ is an
$S$-scheme) whose fibers are geometrically connected curves of genus
$g$. Each morphism $f:B'\to B$ is sent to the map of sets induced by
the pullback $f^*$.
}
\end{example}
None of these examples is a sheaf (and hence none of them is
representable), because of the presence
of automorphisms. They are just presheaves (=functors).
For instance, given a curve $C$ over $S$ with nontrivial
automorphisms, it is possible to construct a family $f:{\mathcal{C}} \to B$ such
that every fiber of $f$ is isomorphic to $C$, but ${\mathcal{C}}$ is not
isomorphic to $B \times C$. This implies that $M_g$ doesn't satisfy
the monopresheaf axiom.
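In symbols (this merely restates the previous sentences for the standard isotrivial examples, where ${\mathcal{C}}$ is constructed as a quotient $(B'\times C)/\Gamma$ for a finite \'etale Galois cover $B'\to B$ with group $\Gamma$ acting on $C$): one has an \'etale cover $\{B'\to B\}$ with
\[
[{\mathcal{C}}\times_B B']=[B'\times C]\ \text{in } M_g(B'),
\qquad\text{but}\qquad
[{\mathcal{C}}]\neq[B\times C]\ \text{in } M_g(B),
\]
so two distinct elements of $M_g(B)$ agree on a cover.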
This can be solved by taking the sheaf associated to the presheaf
(sheafification). In the examples, this amounts to changing
isomorphism classes of families to equivalence classes of families,
where two families are equivalent if they are locally (using the
\'etale topology over the
parametrizing scheme $B$) isomorphic. In the case of vector bundles,
this is the reason why one usually declares two vector bundles $V$ and
$V'$ on $X \times B$ equivalent if $V\cong V'\otimes p_B^* L$ for some
line bundle $L$ on $B$. The functor obtained with this equivalence
relation is denoted $\underline{\Bund}$ (and analogously for $\underline{\Bund}^{s}$
and $\underline{\Bund}^{ss}$).
Note that if two families $V$ and $V'$ are equivalent in this sense,
then they are locally isomorphic. The converse is only true if the
vector bundles are simple (their only automorphisms are
scalar multiplications). This will happen, for instance, if
we are considering the functor $\underline{\Bund}^{\prime s}$ of stable vector
bundles, since stable vector bundles are simple.
In general, if we want the functor to be a sheaf, we have to use
a weaker notion of equivalence, but this is not done because for
other reasons there is only hope of obtaining a fine moduli space if
we restrict our attention to stable vector bundles.
Once this modification is made, there are some situations in which
these examples are representable (for instance, stable vector bundles
on curves with coprime
rank and degree), but in general they
will still not be representable,
because in general we don't have a universal family:
\begin{definition}[Universal family]
Let $F$ be a representable functor, and
let $\phi:F \to \operatorname{Hom}_S(-,X)$ be the isomorphism. The object of $F(X)$
corresponding to the element $\operatorname{id}_X$ of $\operatorname{Hom}_S(X,X)$ is called the
universal family.
\end{definition}
\begin{example}[Vector bundles]
\textup{
If $V$ is a universal vector bundle (over $X\times M$, where $M$
is the fine moduli space), it has the property that for
any family $W$
of vector bundles (i.e. $W$ is a vector bundle over $X\times B$ for
some parameter scheme $B$) there exists a morphism $f:B\to M$ such that
$(f\times \operatorname{id}_X)^* V$ is equivalent to $W$.}
\end{example}
When a moduli functor $F$ is not representable and then there is no scheme
$X$ whose functor of points is isomorphic to $F$, one can still try to
find a scheme $X$ whose functor of points is an approximation to $F$
in some sense. There are two different notions:
\begin{definition}[Corepresents]
\textup{\cite[p. 60]{S}, \cite[def 2.2.1]{HL}}.
We say that a scheme $M$ corepresents the functor $F$ if there
is a natural
transformation of functors $\phi:F \to \operatorname{Hom}_S(-,M)$ such that
\begin{itemize}
\item Given another scheme $N$ and a natural transformation $\psi:F
\to \operatorname{Hom}_S(-,N)$, there is a unique natural transformation $\eta:
\operatorname{Hom}_S(-,M)\to \operatorname{Hom}_S(-,N)$ with $\psi= \eta \circ \phi$.
$$
\xymatrix{
{F} \ar[d]^{\phi} \ar[rd]^{\psi} \\
{\operatorname{Hom}_S(-,M)} \ar[r]^{\eta} & \operatorname{Hom}_S(-,N)\\
}
$$
\end{itemize}
\end{definition}
This characterizes $M$ up to unique isomorphism. Let $({Sch}/S)'$ be the
functor category, whose objects are contravariant functors from
$({Sch}/S)$ to $({Sets})$ and whose morphisms are natural transformations
of functors. Then $M$ represents $F$ iff $\operatorname{Hom}_S(Y,M)=
\operatorname{Hom}_{({Sch}/S)'}({\mathcal{Y}},F)$ for all schemes $Y$, where
${\mathcal{Y}}$ is the functor represented by $Y$. On the other hand,
one can check that $M$ corepresents $F$ iff $\operatorname{Hom}_S(M,Y)=
\operatorname{Hom}_{({Sch}/S)'}(F,{\mathcal{Y}})$ for all schemes $Y$.
If $M$ represents $F$, then it corepresents it, but the converse is
not true. From now on we will usually denote a scheme and the functor
that it represents by the same letter.
\begin{definition}[Coarse moduli]
A scheme $M$ is called a coarse moduli scheme if it corepresents $F$
and furthermore
\begin{itemize}
\item For any algebraically closed field $k$, the map
$\phi(k):F(\operatorname{Spec} k) \to \operatorname{Hom}_S(\operatorname{Spec} k, M)$ is bijective.
\end{itemize}
\end{definition}
In both cases, given a family of objects parametrized by $B$ we get a
morphism
from $B$ to $M$, but we don't require the converse to be true.
\begin{example}[Vector bundles]
\label{vb1}
\textup{
There is a scheme $\mathfrak{M}^{ss}$ that corepresents $\underline{\Bund}^{ss}$. It
fails to be a
coarse moduli scheme because its closed points are in one-to-one
correspondence with S-equivalence classes of vector bundles, and not
with isomorphism classes of vector bundles. Of course, this can be
solved `by hand' by modifying the functor and considering two vector
bundles equivalent if they are S-equivalent. Once this modification
is done, $\mathfrak{M}^{ss}$ is
a coarse moduli space.
}
\textup{
But in general $\mathfrak{M}^{ss}$ doesn't represent the moduli functor
$\underline{\Bund}^{ss}$. The reason for this is that vector bundles always have
nontrivial automorphisms (multiplication by scalars), but the moduli
functor doesn't record information about automorphisms: recall that to
a scheme $B$ it associates just the set of equivalence classes of vector
bundles. To record the automorphisms of these vector bundles, we define
$$
\begin{array}{rccc}
{\mathcal{M}}: & ({Sch}/S) & \longrightarrow & (\operatorname{groupoids}) \\
& B & \longmapsto & {\mathcal{M}}(B)
\end{array}
$$
where ${\mathcal{M}}(B)$ is the category whose objects are vector bundles $V$
on $X\times B$ of rank $r$ and with fixed Chern classes (note that the
objects are vector bundles, not isomorphism classes of vector
bundles), and whose
morphisms are vector bundle isomorphisms (note that we use
isomorphisms of vector bundles, not S-equivalence nor equivalence
classes as before).
This defines a 2-functor between the 2-category associated to
$({Sch}/S)$ and the 2-category $(\operatorname{groupoids})$.}
\end{example}
\begin{definition}
Let $(\operatorname{groupoids})$ be the 2-category whose objects are
groupoids, whose 1-morphisms are functors between groupoids, and whose 2-morphisms
are natural transformations between these functors.
A presheaf in groupoids (also called a quasi-functor) is a
contravariant 2-functor ${\mathcal{F}}$ from $({Sch}/S)$ to $(\operatorname{groupoids})$.
For
each scheme $B$ we have a groupoid ${\mathcal{F}}(B)$ and for each morphism $f:B'\to
B$ we have a natural transformation of functors ${\mathcal{F}}(f)$ that is
denoted by $f^*$ (usually it is actually defined by a pullback).
\end{definition}
\begin{example}[Vector bundles]
\label{bbund}
\textup{\cite[1.3.4]{La}.
${\mathcal{M}}$ is a presheaf. For each object $B$ of
$({Sch}/S)$ it gives the
groupoid ${\mathcal{M}}(B)$ that we have defined in example \ref{vb1}.
For each 1-morphism
$f:B' \to B$ it gives the functor $F(f)=f^*:{\mathcal{M}}(B)\to {\mathcal{M}}(B')$
given by pull-back, and for every diagram
\begin{eqnarray}
\label{compo}
B'' \stackrel{g}\longrightarrow B' \stackrel{f}\longrightarrow B
\end{eqnarray}
it gives a natural transformation of functors (a 2-isomorphism)
$\epsilon_{g,f}:g^*\circ f^* \to (f\circ g)^*$. This is the only
subtle point. First recall that the pullback $f^*V$ of a vector bundle
(or more generally, any fiber product) is not uniquely defined: it is
only defined up to unique isomorphism. First choose once and for all a
pullback $f^*V$ for each $f$ and $V$. Then, given a diagram like
(\ref{compo}), in principle $g^*(f^*V)$ and $(f\circ g)^*V$ are not the
same, but (because both solve the same universal problem) there is a
canonical isomorphism (the unique isomorphism of the universal
problem) $g^*(f^*V) \to (f\circ g)^*V$ between them,
and this defines the natural transformation of functors
$\epsilon_{g,f}:g^*\circ f^* \to (f\circ g)^*$. By a slight abuse of
language, usually we won't write explicitly these isomorphisms
$\epsilon_{g,f}$, and we will write $g^*\circ f^* = (f\circ g)^*$.
Since they are uniquely defined this will cause no
ambiguity.}
\end{example}
Now we will define the concept of stack. First we have to choose a
Grothendieck topology on $(Sch/S)$, either the \'etale or the fppf
topology. Later on, when we define algebraic stack, the \'etale
topology will lead to the definition of a
Deligne-Mumford stack (\cite{DM}, \cite{Vi}, \cite{E}), and the fppf to
an Artin stack (\cite{La}). For the moment we will give a unified
description.
In the following definition, to simplify notation we denote by $X|_i$
the pullback $f^*_i X$ where $f_i:U_i \to U$ and $X$ is an object of
${\mathcal{F}}(U)$, and by $X_i|_{ij}$ the
pullback $f^*_{ij,i} X_i$ where $f_{ij,i}:U_i \times_U U_j \to U_i$
and $X_i$ is an object of ${\mathcal{F}}(U_i)$. We will also use the obvious
variations of this convention, and will simplify the notation using
remark \ref{B2}.
\begin{definition}[Stack]
\label{sheaf}
A stack is a sheaf of groupoids, i.e. a 2-functor (presheaf) that
satisfies the following sheaf axioms. Let $\{U_i \to U\}_{i\in I}$ be
a covering of $U$ in the site $({Sch}/S)$. Then
\begin{enumerate}
\item (Glueing of morphisms) If $X$ and $Y$ are two objects of
${\mathcal{F}}(U)$, and $\varphi_i:X|_i\to
Y|_i$ are morphisms such that $\varphi_i|_{ij}=\varphi_j|_{ij}$, then
there exists a morphism $\eta:X\to Y$ such that $\eta|_i=\varphi_i$.
\item (Monopresheaf) If $X$ and $Y$ are two objects of ${\mathcal{F}}(U)$,
and $\varphi:X\to
Y$, $\psi:X \to Y$ are morphisms such that $\varphi|_i=\psi|_i$, then
$\varphi = \psi$.
\item \label{sheafthree} (Glueing of objects) If $X_i$ are objects
of ${\mathcal{F}}(U_i)$ and
$\varphi_{ij}:X_j|_{ij}
\to X_i|_{ij}$ are morphisms satisfying the cocycle condition
$\varphi_{ij}|_{ijk}\circ \varphi_{jk}|_{ijk}= \varphi_{ik}|_{ijk}$,
then there exists an object $X$ of ${\mathcal{F}}(U)$ and $\varphi_i:X|_i
\stackrel{\cong}\to X_i$ such that $\varphi_{ji}\circ \varphi_i|_{ij}=
\varphi_j|_{ij}$.
\end{enumerate}
\end{definition}
Let's stop for a moment and look at how we have enlarged the category
of schemes by defining the category of stacks. We can draw the following
diagram
$$
\xymatrix{
& {Algebraic\,Stacks} \ar[r] & {Stacks} \ar[r] &{Presheaves\,of\,
groupoids} \\
{{Sch}/S} \ar[r] \ar[ur] &
{Algebraic\,Spaces} \ar[r] \ar[u]
&{Spaces} \ar[r] \ar[u] &{Presheaves\,of\,sets} \ar[u]
}
$$
where $A \to B$ means that the category $A$ is a subcategory of $B$.
Recall that a presheaf of sets is just
a functor from $({Sch}/S)$ to the category $({Sets})$, a presheaf of
groupoids is
just a 2-functor to the 2-category $(\operatorname{groupoids})$. A sheaf (for example a
space or a stack) is a presheaf that satisfies the sheaf axioms
(these axioms are slightly different in the context of categories or
2-categories), and if this sheaf satisfies some geometric conditions
(that we haven't yet specified), we will have an algebraic stack or
algebraic space.
\subsection{Stacks as categories. Groupoids}
\label{subsgroupoids}
There is an alternative way of defining a stack. From this point of
view a stack will be a category, instead of a functor.
\begin{definition}
A category over $({Sch}/S)$ is a category ${\mathcal{F}}$ and a covariant functor
$p^{}_{\mathcal{F}}:{\mathcal{F}} \to ({Sch}/S)$. If $X$ is an object (resp. $\phi$ is a
morphism) of ${\mathcal{F}}$, and $p^{}_{\mathcal{F}}(X)=B$ (resp. $p^{}_{\mathcal{F}}(\phi)=f$), then we
say that $X$ lies over $B$ (resp. $\phi$ lies over $f$).
\end{definition}
\begin{definition}[Groupoid]
A category ${\mathcal{F}}$ over $({Sch}/S)$ is called a category fibered on
groupoids (or just groupoid) if
\begin{enumerate}
\item \label{groupoidone} For every $f:B'\to B$ in $({Sch}/S)$ and every object $X$ with
$p^{}_{\mathcal{F}}(X)=B$, there exists at least one object $X'$ and a morphism
$\phi:X'\to
X$ such that $p^{}_{\mathcal{F}}(X')=B'$ and $p^{}_{\mathcal{F}}(\phi)=f$.
$$
\xymatrix{
{X'} \ar@{-->}[r]^{\phi} \ar@{-->}[d] & {X} \ar[d] \\
{B'} \ar[r]^{f} & {B} }
$$
\item \label{groupoidtwo} For every diagram
$$\xymatrix{
{X_3} \ar[rr]^{\psi} \ar[dd]& & {X_1} \ar[dd] \\
& {X_2} \ar[ru]^{\phi} \ar[dd] \\
{B_3} \ar '[r][rr]^{f\circ f'} \ar[rd]_{f'} & & {B_1} \\
& {B_2} \ar[ru]_f
}
$$
(where $p^{}_{\mathcal{F}}(X_i)=B_i$, $p^{}_{\mathcal{F}}(\phi)=f$, $p^{}_{\mathcal{F}}(\psi)=f\circ f'$),
there exists a unique $\varphi:X_3 \to X_2$ with $\psi=\phi\circ
\varphi$ and $p^{}_{\mathcal{F}}(\varphi)=f'.$
\end{enumerate}
\end{definition}
Condition \ref{groupoidtwo} implies that the object $X'$ whose existence
is asserted in condition \ref{groupoidone} is unique up to canonical
isomorphism. For each $X$ and $f$ we choose once and for all such an
$X'$ and call it $f^*X$.
Another consequence of condition \ref{groupoidtwo} is
that $\phi$ is an isomorphism
if and only if $p^{}_{\mathcal{F}}(\phi)=f$ is an isomorphism.
Let $B$ be an object of $({Sch}/S)$. We define ${\mathcal{F}}(B)$, the fiber of
${\mathcal{F}}$ over $B$, to be the subcategory of ${\mathcal{F}}$ whose objects lie over
$B$ and whose morphisms lie over $\operatorname{id}_B$. It is a groupoid.
The association $B\to {\mathcal{F}}(B)$ in fact defines a presheaf of groupoids
(note that the 2-isomorphisms $\epsilon_{f,g}$ required in the
definition of presheaf of groupoids are well defined thanks to
condition \ref{groupoidtwo}). Conversely, given a presheaf of
groupoids ${\mathcal{G}}$ on
$(Sch/S)$, we can define the category ${\mathcal{F}}$ whose objects are pairs
$(B,X)$ where $B$ is an object of $({Sch}/S)$ and $X$ is an object of
${\mathcal{G}}(B)$, and whose morphisms $(B',X')\to (B,X)$ are pairs
$(f,\alpha)$ where $f:B'\to B$ is a morphism in $(Sch/S)$ and
$\alpha:f^* X \to X'$ is an isomorphism, where $f^*={\mathcal{G}}(f)$.
This gives the relationship between both points of view.
\begin{example}[Stable curves]
\label{defstablecurve}
\textup{\cite[def 1.1]{DM}.
Let $B$ be an $S$-scheme. Let $g\geq 2$. A stable curve of genus $g$
over $B$ is a proper and flat morphism $\pi:C \to B$ whose geometric
fibers are reduced, connected and one-dimensional schemes $C_b$ such
that
\begin{enumerate}
\item The only singularities of $C_b$ are ordinary double points.
\item If $E$ is a non-singular rational component of $C_b$, then $E$
meets the other components of $C_b$ in at least 3 points.
\item $\dim H^1({\mathcal{O}}_{C_b})=g$.
\end{enumerate}
Condition 2 is imposed so that the automorphism group of $C_b$ is finite.
A stable curve over $B$ should be thought of as a family of stable
curves (over $S$) parametrized by $B$.}
\textup{
We define $\overline{\mathcal{M}}_g$, the groupoid over $({Sch}/S)$ whose objects
are stable curves over arbitrary $S$-schemes $B$ and whose morphisms are Cartesian diagrams
$$
\xymatrix{
{X'} \ar[r] \ar[d] & {X} \ar[d] \\
{B'} \ar[r] & {B}}
$$}
\end{example}
\begin{example}[Quotient by group action]
\label{quotient}
\textup{\cite[1.3.2]{La}, \cite[example 4.8]{DM},
\cite[example 2.2]{E}.
Let $X$ be an $S$-scheme (assume all schemes are Noetherian),
and $G$ an affine flat group $S$-scheme acting on
the right on $X$. We define the groupoid $[X/G]$ whose objects are
principal $G$-bundles $\pi:E\to B$ together with a $G$-equivariant
morphism $f:E\to X$. A morphism is a Cartesian diagram
$$
\xymatrix{
{E'} \ar[r]^{p} \ar[d]_{\pi'} & {E} \ar[d]_{\pi} \\
{B'} \ar[r] & {B}}
$$
such that $f\circ p= f'$.}
\end{example}
\begin{definition}[Stack]
A stack is a groupoid that satisfies
\begin{enumerate}
\item (\textit{Prestack}). For every scheme $B$ and every pair of objects $X$,
$Y$ of ${\mathcal{F}}$ over $B$,
the contravariant functor
$$
\begin{array}{rccc}
\operatorname{Iso}_B(X,Y): & ({Sch}/B)& \longrightarrow & ({Sets}) \\
& (f:B'\to B) & \longmapsto & \operatorname{Hom}(f^*X,f^*Y)
\end{array}
$$
is a sheaf on the site $(Sch/B).$
\item Descent data is effective (this is just condition
\ref{sheafthree} in the
definition \ref{sheaf} of sheaf).
\end{enumerate}
\end{definition}
\begin{example}
\textup{
If $G$ is smooth and affine, the groupoid $[X/G]$ is a stack
\cite[2.4.2]{La}, \cite[example 7.17]{Vi}, \cite[prop 2.2]{E}.
Then also $\overline{\mathcal{M}}_g$ (cf. example \ref{defstablecurve})
is a stack, because it is isomorphic to a
quotient stack of a subscheme of a Hilbert scheme by ${PGL(N)}$
\cite[thm 3.2]{E}, \cite{DM}.
The groupoid ${\mathcal{M}}$ defined in example \ref{defvectorbundle}
is also a stack \cite[2.4.4]{La}.}
\end{example}
From now on we will mainly use this approach. Now we will give some
definitions for stacks.
\textbf{Morphisms of stacks}. A morphism of stacks $f:{\mathcal{F}} \to {\mathcal{G}}$
is a functor between the
categories, such that $p_{\mathcal{G}} \circ f= p^{}_{\mathcal{F}}$.
A commutative diagram of stacks is a diagram
$$
\xymatrix{
& {{\mathcal{G}}} \ar[rd]^g \ar@2[d]^{\alpha} \\
{{\mathcal{F}}} \ar[ur]^f \ar[rr]_h & &{{\mathcal{H}}}
}
$$
such that $\alpha:g\circ f \to h$ is an isomorphism of functors.
If $f$ is an equivalence of categories, then we say that the stacks
${\mathcal{F}}$ and ${\mathcal{G}}$ are isomorphic.
We denote by $\operatorname{Hom}_S({\mathcal{F}},{\mathcal{G}})$ the category whose objects are morphisms
of stacks and whose morphisms are natural transformations.
\textbf{Stack associated to a scheme}. Given a scheme $U$ over $S$,
consider the category $({Sch}/U)$. Define
the functor $p^{}_U:({Sch}/U)\to ({Sch}/S)$ which sends the $U$-scheme
$f:B\to U$ to the composition $B\stackrel{f}\to U \to S$. Then
$({Sch}/U)$ becomes a stack. Usually we denote this stack also by $U$.
From the point of view of 2-functors, the stack associated to $U$ is
the 2-functor that for each scheme $B$ gives the category whose
objects are the elements of the set $\operatorname{Hom}_S(B,U)$, and whose only
morphisms are identities.
We say that a stack is represented by a scheme $U$ when it is
isomorphic to the stack associated to $U$. We have the following
very useful lemmas:
\begin{lemma}
\label{nonrepresentable}
If a stack has an
object with an automorphism other than the identity, then the stack
cannot be represented by a scheme.
\end{lemma}
\begin{proof}
In the stack associated to a scheme the only automorphisms of any
object are identities, so a stack containing an object with a nontrivial
automorphism cannot be isomorphic to such a stack.
\end{proof}
\begin{lemma}
\label{yoneda}
\textup{\cite[7.10]{Vi}}.
Let ${\mathcal{F}}$ be a stack and $U$ a scheme. The functor
$$
u:\operatorname{Hom}_S(U,{\mathcal{F}}) \to {\mathcal{F}}(U)
$$
that sends a morphism of stacks $f:({Sch}/U)\to {\mathcal{F}}$ to $f(\operatorname{id}_U)$ is
an equivalence of categories.
\end{lemma}
\begin{proof}
This follows from the Yoneda lemma: a morphism of stacks $f:({Sch}/U)\to {\mathcal{F}}$ is determined, up to canonical isomorphism, by the object $f(\operatorname{id}_U)$.
\end{proof}
This useful
observation, which we will use very often, means that an object of ${\mathcal{F}}$
lying over $U$ is equivalent to a morphism (of stacks) from $U$ to
${\mathcal{F}}$.
\textbf{Fiber product}. Given two morphisms $f_1:{\mathcal{F}}_1\to {\mathcal{G}}$,
$f_2:{\mathcal{F}}_2\to {\mathcal{G}}$, we define a new stack ${\mathcal{F}}_1 \times_{\mathcal{G}} {\mathcal{F}}_2$
(with projections to ${\mathcal{F}}_1$ and ${\mathcal{F}}_2$) as follows.
The objects are triples $(X_1,X_2,\alpha)$ where $X_1$ and $X_2$ are
objects of ${\mathcal{F}}_1$ and ${\mathcal{F}}_2$ that lie over the same scheme $U$, and
$\alpha: f_1(X_1)\to f_2(X_2)$ is an isomorphism in ${\mathcal{G}}$
(equivalently, $p_{\mathcal{G}}(\alpha)=\operatorname{id}_U$).
A morphism from $(X_1,X_2,\alpha)$ to $(Y_1,Y_2,\beta)$ is a pair
$(\phi_1,\phi_2)$ of morphisms $\phi_i:X_i\to Y_i$ that lie over the
same morphism of schemes $f:U \to V$, and such that $\beta \circ
f_1(\phi_1) = f_2(\phi_2)\circ \alpha$.
The fiber product satisfies the usual universal property.
\textbf{Representability}. A stack ${\mathcal{X}}$ is said to be representable by an
algebraic space (resp. scheme) if there is an algebraic space
(resp. scheme) $X$ such that the stack associated to $X$ is isomorphic
to ${\mathcal{X}}$.
If ``P'' is a property of algebraic spaces (resp. schemes) and ${\mathcal{X}}$
is a representable stack, we will say that ${\mathcal{X}}$ has ``P'' iff $X$
has ``P''.
A morphism of stacks $f:{\mathcal{F}}\to {\mathcal{G}}$ is said to be representable if for
all objects $U$ in $({Sch}/S)$ and morphisms $U\to {\mathcal{G}}$, the fiber
product stack $U\times_{\mathcal{G}} {\mathcal{F}}$ is representable by an algebraic
space.
Let ``P'' be a property of morphisms of schemes that is local in nature
on the target for the topology chosen on $({Sch}/S)$ (\'etale or
fppf), and that is stable under arbitrary base change. For instance:
separated, quasi-compact, unramified, flat, smooth, \'etale, surjective,
finite type, locally of finite type,... Then we say that
$f$ has ``P'' if for every $U\to {\mathcal{G}}$, the pullback $U\times_{\mathcal{G}} {\mathcal{F}}
\to U$ has ``P'' (\cite[p.17]{La}, \cite[p.98]{DM}).
\textbf{Diagonal}. Let $\Delta_{\mathcal{F}}:{\mathcal{F}} \to {\mathcal{F}}\times_S {\mathcal{F}}$ be the
obvious diagonal morphism. A morphism from a scheme $U$ to ${\mathcal{F}}
\times_S {\mathcal{F}}$ is equivalent to two objects $X_1$, $X_2$ of
${\mathcal{F}}(U)$. Taking the fiber product of these we have
$$
\xymatrix{
{\operatorname{Iso}_U(X_1,X_2)} \ar[r] \ar[d]& {{\mathcal{F}}} \ar[d]^{\Delta_{\mathcal{F}}} \\
{U} \ar[r]^{(X_1,X_2)} & {{\mathcal{F}}\times_S {\mathcal{F}}}}
$$
hence the group of automorphisms of an object is encoded in the
diagonal morphism.
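For example (merely restating the last remark in symbols): taking $U=\operatorname{Spec} k$ and $X_1=X_2=X$, the diagram identifies
\[
\operatorname{Iso}_U(X,X)\;=\;U\times^{}_{{\mathcal{F}}\times_S{\mathcal{F}}}{\mathcal{F}}
\]
with the automorphism group of the object $X$; for a quotient stack ${\mathcal{F}}=[Y/G]$ and an object corresponding to an orbit through a point $y\in Y$, this group is the stabilizer $G_y$ described in subsection \ref{quick}.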
\begin{proposition}
\label{diag}
\textup{\cite[cor 2.12]{La}, \cite[prop 7.13]{Vi}}.
The following are equivalent
\begin{enumerate}
\item The morphism $\Delta_{\mathcal{F}}$ is representable.
\item The stack $\operatorname{Iso}_U(X_1,X_2)$ is representable for all $U$, $X_1$
and $X_2$.
\item For every scheme $U$, every morphism $U\to {\mathcal{F}}$ is representable.
\item For all schemes $U$, $V$ and morphisms $U\to {\mathcal{F}}$ and $V\to {\mathcal{F}}$,
the fiber product $U\times_{\mathcal{F}} V$ is representable.
\end{enumerate}
\end{proposition}
\begin{proof}
The implications $1 \Leftrightarrow 2$ and $3 \Leftrightarrow 4$
follow easily from the definitions.
$1 \Rightarrow 4$) Assume that $\Delta_{\mathcal{F}}$ is representable. We have
to show that $U\times_{\mathcal{F}} V$ is representable for any $f:U\to {\mathcal{F}}$ and
$g:V\to {\mathcal{F}}$. Check that the following diagram is Cartesian
$$
\xymatrix{
{U\times_{\mathcal{F}} V} \ar[r] \ar[d]& {{\mathcal{F}}}\ar[d]^{\Delta_{\mathcal{F}}}\\
U\times_S V \ar[r]^{f\times g} &{{\mathcal{F}}\times_S {\mathcal{F}}}}
$$
Then $U\times_{\mathcal{F}} V$ is representable.
$1 \Leftarrow 4$) First note that the Cartesian diagram defined by
$h:U\to {\mathcal{F}}\times_S {\mathcal{F}}$ and $\Delta_{\mathcal{F}}$ factors as follows
$$
\xymatrix{
{U\times^{}_{{\mathcal{F}}\times_S {\mathcal{F}}} {\mathcal{F}}} \ar[r] \ar[d] &
{U\times^{}_{\mathcal{F}} U} \ar[r] \ar[d] &{{\mathcal{F}}} \ar[d] \\
{U} \ar[r]^{\Delta_U} & {U\times_S U} \ar[r] &
{{\mathcal{F}}\times_S {\mathcal{F}}}}
$$
Both squares are Cartesian and by hypothesis $U\times_{\mathcal{F}} U$ is
representable, hence $U\times^{}_{{\mathcal{F}}\times_S {\mathcal{F}}} {\mathcal{F}}$ is also
representable.
\end{proof}
\subsection{Algebraic stacks}
\label{subsalgebraic}
Now we will define the notion of algebraic stack. As we have said,
first we have to choose a topology on $({Sch}/S)$. Depending on whether
we choose the \'etale or fppf topology, we get different notions.
\begin{definition}[Deligne-Mumford stack]
Let $({Sch}/S)$ be the category of
$S$-schemes with the \'etale topology. Let ${\mathcal{F}}$ be a stack. Assume
\begin{enumerate}
\item The diagonal $\Delta_{\mathcal{F}}$ is representable, quasi-compact and
separated.
\item There exists a scheme $U$ (called atlas) and an \'etale
surjective morphism
$u:U\to {\mathcal{F}}$.
\end{enumerate}
Then we say that ${\mathcal{F}}$ is a Deligne-Mumford stack.
\end{definition}
The morphism of stacks $u$ is representable because of proposition
\ref{diag} and the fact that the diagonal $\Delta_{\mathcal{F}}$ is
representable. Then the notion of \'etale is well defined for $u$.
In \cite{DM} this was called an algebraic stack. In the literature,
algebraic stack usually refers to Artin stack (that we will define
later). To avoid confusion, we will use ``algebraic stack'' only when
we refer in general to both notions, and we will use
``Deligne-Mumford'' or ``Artin'' stack when we want to be specific.
Note that the definition of Deligne-Mumford stack is the same as the
definition of algebraic space, but in the context of stacks instead of
spaces.
As with schemes, a stack whose diagonal $\Delta_{\mathcal{F}}$ is
quasi-compact and
separated is called quasi-separated. We always assume this technical
condition, as is usually done both for schemes and for algebraic
spaces.
Sometimes it is difficult to find explicitly an \'etale atlas, and the
following proposition is useful.
\begin{proposition}
\label{represen}
\textup{\cite[thm 4.21]{DM}, \cite{E}}.
Let ${\mathcal{F}}$ be a stack over the \'etale site $({Sch}/S)$. Assume
\begin{enumerate}
\item The diagonal $\Delta_{\mathcal{F}}$ is representable, quasi-compact,
separated and \textbf{unramified}.
\item There exists a scheme $U$ of finite type over $S$ and a
\textbf{smooth} surjective morphism
$u:U\to {\mathcal{F}}$.
\end{enumerate}
Then ${\mathcal{F}}$ is a Deligne-Mumford stack.
\end{proposition}
Now we define the analogue for the fppf topology \cite{Ar2}.
\begin{definition}[Artin stack]
Let $({Sch}/S)$ be the category of
$S$-schemes with the fppf topology. Let ${\mathcal{F}}$ be a stack. Assume
\begin{enumerate}
\item The diagonal $\Delta_{\mathcal{F}}$ is representable, quasi-compact and
separated.
\item There exists a scheme $U$ (called atlas) and a smooth
(hence locally of finite type) and
surjective morphism
$u:U\to {\mathcal{F}}$.
\end{enumerate}
Then we say that ${\mathcal{F}}$ is an Artin stack.
\end{definition}
For propositions analogous to proposition \ref{represen} see \cite[4]{La}.
\begin{proposition}
\textup{\cite[prop 7.15]{Vi}, \cite[lemme 3.3]{La}}.
If ${\mathcal{F}}$ is a
Deligne-Mumford
(resp. Artin) stack, then the diagonal
$\Delta_{\mathcal{F}}$ is unramified (resp. finite type).
\end{proposition}
Recall that $\Delta_{\mathcal{F}}$ is unramified (resp. finite type) if for
every scheme $B$ and objects $X$, $Y$ of ${\mathcal{F}}(B)$, the morphism
$\operatorname{Iso}_B(X,Y)\to B$ is unramified (resp. finite type). If $B$ is the
spectrum of a field and $X=Y$, then this means that the automorphism group of $X$ is
discrete and reduced for a Deligne-Mumford stack, and just of
finite type for an Artin stack.
\begin{example}[Vector bundles]
\label{quotconstruction}
\textup{The stack ${\mathcal{M}}$ is an Artin stack, locally of finite type
\cite[4.14.2.1]{La}. The atlas is constructed as follows. Let
$P^H_{r,c_i}$ be the Hilbert polynomial corresponding to
sheaves on $X$ with rank $r$ and Chern classes $c_i$.
Let $\operatorname{Quot}({\mathcal{O}}(-m)^{\oplus N}, P^H_{r,c_i})$ be the Quot scheme
parametrizing quotients of sheaves on $X$
\begin{eqnarray}
\label{quotmap}
{\mathcal{O}}(-m)^{\oplus N} \twoheadrightarrow V,
\end{eqnarray}
where $V$ is a coherent sheaf on $X$ with Hilbert polynomial
$P^H_{r,c_i}$. Let $R_{N,m}$ be the subscheme corresponding to quotients
(\ref{quotmap}) such that $V$ is a vector
bundle with $H^p(V(m))=0$ for $p>0$ and the morphism (\ref{quotmap})
induces an isomorphism on global sections
$$
H^0({\mathcal{O}})^{\oplus N} \stackrel{\cong}{\longrightarrow} H^0(V(m)).
$$
The scheme $R^{}_{N,m}$ has a universal vector bundle, induced from the
universal bundle of the Quot scheme, and then there is a morphism
$u^{}_{N,m}: R^{}_{N,m}\to {\mathcal{M}}$.
Since $H$ is ample, for every vector bundle $V$,
there exist integers $N$ and $m$ such that
$R_{N,m}$ has a point whose corresponding quotient is $V$, and then if
we take the infinite disjoint union of these morphisms
we get a surjective morphism
$$
u: \Big( \coprod_{N,m>0} R^{}_{N,m}\Big) \longrightarrow {\mathcal{M}}.
$$
It can be shown that this morphism is smooth, and then it gives an
atlas. Each scheme $R_{N,m}$ is of finite type, so the union is
locally of finite type, which in turn implies that the stack ${\mathcal{M}}$
is locally of finite type.
}
\end{example}
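A standard observation, recorded here only as an aside (it is not needed in what follows): if $V$ is a vector bundle with $V(m)$ globally generated, $H^p(V(m))=0$ for $p>0$ and $h^0(V(m))=N$, then the fiber (in the sense of the fiber product of stacks) of $u^{}_{N,m}$ over the corresponding point of ${\mathcal{M}}$ is the set of choices of basis of $H^0(V(m))$,
\[
u_{N,m}^{-1}(V)\;\cong\;\operatorname{Isom}\big(H^0({\mathcal{O}})^{\oplus N},\,H^0(V(m))\big),
\]
a torsor under $GL_N$; this is what makes the smoothness of the morphisms $u^{}_{N,m}$, and hence of $u$, plausible.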
\begin{example}[Quotient by group action]
\label{atlasquotient}
\textup{The stack $[X/G]$ is an Artin stack
\cite[4.14.1.1]{La}.
If $G$ is smooth, an atlas is defined as follows (for more general
$G$, see \cite[4.14.1.1]{La}):
Take the trivial principal $G$-bundle
$X\times G$ over $X$, and let the map $f:X\times G \to X$ be the
action of the group. This defines an object of $[X/G](X)$, and by
lemma \ref{yoneda}, it defines a morphism $u:X\to [X/G]$. It is
representable, because if $B$ is a scheme and $g:B\to [X/G]$ is the
morphism corresponding
to a principal $G$-bundle $E$ over $B$ with an equivariant morphism
$f:E\to X$, then $B\times_{[X/G]}X$ is isomorphic to the scheme $E$,
and in fact we have a Cartesian diagram
$$
\xymatrix{
{E} \ar[r]^{f} \ar[d]_{\pi} & {X} \ar[d]_{u} \\
{B} \ar[r]^{g} & {[X/G].}
}
$$
The morphism $u$ is surjective and smooth because $\pi$ is surjective
and smooth for every $g$ (if $G$ is not smooth, but only separated,
flat and of finite presentation, then $u$ is not an atlas, but if we
apply the representation theorem \cite[thm 4.1]{La}, we conclude that
there is a smooth atlas).}
\textup{
If either $G$ is \'etale
over $S$ (\cite[example 4.8]{DM}) or
the stabilizers of the geometric points of $X$ are finite and reduced
(\cite[example 7.17]{Vi}),
then $[X/G]$ is a Deligne-Mumford stack. In particular
$\overline{\mathcal{M}}_g$ is a Deligne-Mumford stack.}
\textup{Note that if the action is not free, then $[X/G]$ is not
representable by lemma \ref{nonrepresentable}. On the other hand,
if there is a scheme $Y$ such that $X \to Y$ is a principal
$G$-bundle, then $[X/G]$ is represented by $Y$.}
\textup{Let $G$ be a reductive group acting on $X$. Let $H$ be an
ample line bundle on $X$, and assume that the action is polarized.
Let $X^s$ and $X^{ss}$ be the subschemes of stable and semistable
points. Let $Y=X{/\!\!/} G$ be the GIT quotient.
Recall that there is a
good quotient $X^{ss}\to Y$, and that the restriction to the stable
part $X^s\to Y$ is a principal bundle. There is a natural morphism
$[X^{ss}/G] \to X^{ss}{/\!\!/} G$. By the previous remark, the restriction
$[X^s/G] \to Y^s$ (where $Y^s$ denotes the image of $X^s$ in $Y$) is an
isomorphism of stacks.}
\textup{If $X=S$ (with trivial action of $G$ on $S$), then $[S/G]$ is
denoted $BG$, the classifying groupoid of principal $G$-bundles.}
\end{example}
\subsection{Algebraic stacks as groupoid spaces}
\label{subsgroupspaces}
We will introduce a third equivalent definition of stack.
First consider a category $C$. Let $U$ be the set of objects and $R$
the set of morphisms. The axioms of a category give us four maps of
sets
$$
\xymatrix{
{R} \ar@<0.5ex>[r]^{s}
\ar@<-0.5ex>[r]_{t} & {U} \ar[r]^{e} & {R}}
\qquad
\xymatrix{
\save[]+<-5.5ex,-0.55ex>*{R\times^{}_{s,U,t} R}\restore \ar[r]^{m} & {R}}
$$
where $s$ and $t$ give the source and target for each morphism, $e$
gives the identity morphism, and $m$ is composition of morphisms.
If the category is a groupoid then we have a fifth morphism
$$
\xymatrix{{R} \ar[r]^i & {R}}
$$
that gives the inverse. These maps satisfy
\begin{enumerate}
\item $s\circ e= t\circ e = \operatorname{id}_U$, $s\circ i=t$, $t\circ i=s$,
$s\circ m=s\circ p_2$, $t\circ m=t\circ p_1$.
\item \textit{Associativity}. $m\circ (m\times \operatorname{id}_R)=m\circ
(\operatorname{id}_R \times m)$.
\item \textit{Identity}. Both compositions
$$
R=R\times^{}_{s,U} U=U\times^{}_{U,t}R
\xymatrix{
{}\ar@<0.5ex>[r]^{\operatorname{id}_R \times e}
\ar@<-0.5ex>[r]_{e \times \operatorname{id}_R} & {}}
R\times^{}_{s,U,t} R
\xymatrix{
{}\ar[r]^{m} & {R}}
$$
are equal to the identity map on $R$.
\item \textit{Inverse}. $m\circ (i\times \operatorname{id}_R)= e\circ s$,
$m\circ (\operatorname{id}_R \times i)= e\circ t$.
\end{enumerate}
\begin{definition}[Groupoid space]
\textup{\cite[1.3.3]{La}, \cite[pp. 668--669]{DM}}.
A groupoid space is a pair of spaces (sheaves of sets) $U$, $R$, with
five morphisms $s$, $t$, $e$, $m$, $i$ with the same properties as
above.
\end{definition}
\begin{definition}
\textup{\cite[1.3.3]{La}}.
Given a groupoid space, define the groupoid over $({Sch}/S)$ as the
category $[R,U]'$ over $({Sch}/S)$ whose objects over the scheme $B$
are elements of the set $U(B)$ and whose morphisms over $B$ are
elements of the set $R(B)$. Given $f:B' \to B$ we define a functor
$f^*: [R,U]'(B) \to [R,U]'(B')$ using the maps $U(B) \to U(B')$ and
$R(B) \to R(B')$.
\end{definition}
The groupoid $[R,U]'$ is in general only a prestack. We denote by
$[R,U]$ the associated stack. The stack $[R,U]$ can be thought of as
the sheaf associated to the presheaf of groupoids $B \mapsto
[R,U]'(B)$ (\cite[2.4.3]{La}).
\begin{example}[Quotient by group action]
\textup{Let $X$ be a scheme and $G$ an affine group scheme. We denote
by the same letters the associated spaces (functors of points). We
take $U=X$ and $R=X\times G$. Using the group action we can define the
five morphisms ($t$ is the action of the group, $s=p_1$, $m$ is the
product in the group, $e$ is defined with the identity of $G$, and $i$
with the inverse).}
\textup{
The objects of $[X\times G,X]'(B)$ are morphisms $f:B\to
X$. Equivalently, they are trivial principal $G$-bundles $B\times G$
over $B$ and a map $B\times G \to X$ defined as the composition of the
action of $G$ and $f$. The stack $[X\times G,X]$ is isomorphic to $[X/G]$.}
\end{example}
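As a sanity check (a routine verification, spelled out only for concreteness, using the convention that $m\big((x,g),(y,h)\big)=(y,hg)$ for composable pairs, i.e.\ those with $x=y\cdot h$): the axiom $t\circ m=t\circ p_1$ becomes
\[
y\cdot(hg)=(y\cdot h)\cdot g,
\]
the associativity of the right action, while $s\circ e=t\circ e=\operatorname{id}_U$ says that the identity element of $G$ acts trivially.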
\begin{example}[Algebraic stacks]
\textup{
Let $R$, $U$ be a groupoid space such that $R$ and $U$ are algebraic
spaces, locally of finite presentation (equivalently locally of finite
type if $S$ is noetherian). Assume that the morphisms $s$, $t$ are
flat, and that $\delta=(s,t):R\to U\times_S U$ is separated and
quasi-compact. Then $[R,U]$ is an Artin stack, locally of finite type
(\cite[cor 4.7]{La}).}
\textup{
In fact, any Artin stack ${\mathcal{F}}$ can be defined in this fashion. The
algebraic space $U$ will be the atlas of ${\mathcal{F}}$, and we set
$R=U\times_{\mathcal{F}} U$. The morphisms $s$ and $t$ are the two projections,
$i$ exchanges the factors, $e$ is the diagonal, and $m$ is defined by
projection to the first and third factor.}
\end{example}
Let $\delta:R\to U\times_S U$ be an equivalence relation in the
category of spaces. One can define a groupoid space, and $[R,U]$ is to
be thought of as the stack-theoretic quotient of this equivalence
relation, as opposed to the quotient space, used for instance to
define algebraic spaces (for more details and the definition of
equivalence relation see appendix A).
\subsection{Properties of Algebraic Stacks}
\label{subsproperties}
So far we have only defined scheme-theoretic properties for
representable stacks and morphisms. We can define some properties for
arbitrary algebraic stacks (and morphisms among them) using the atlas.
Let ``P'' be a property of schemes, local in nature for the smooth
(resp. \'etale) topology. For example: regular, normal, reduced, of
characteristic $p$,... Then we say that an Artin
(resp. Deligne-Mumford) stack has ``P'' iff the atlas has ``P''
(\cite[p.25]{La}, \cite[p.100]{DM}).
Let ``P'' be a property of morphisms of schemes, local on source and
target for the smooth (resp. \'etale) topology, i.e. for any
commutative diagram
$$
\xymatrix{
{X'} \ar[r]^{p} \ar[dr]_{f''}& {Y'\times_Y X} \ar[r]^{g'} \ar[d]_{f'}
& {X} \ar[d]^{f} \\
& {Y'} \ar[r]^{g} & {Y} }
$$
with $p$ and $g$ smooth (resp. \'etale) and surjective, $f$ has ``P''
iff $f''$ has ``P''. For example: flat, smooth, locally of finite
type,... For the \'etale topology we also have: \'etale,
unramified,... Then if $f:{\mathcal{X}} \to {\mathcal{Y}}$ is a morphism of Artin
(resp. Deligne-Mumford) stacks, we say that $f$ has ``P'' iff for one
(and then for all) commutative diagram of stacks
$$
\xymatrix{
{X'} \ar[r]^{p} \ar[dr]_{f''}& {Y'\times_Y {\mathcal{X}}} \ar[r]^{g'} \ar[d]_{f'}
& {{\mathcal{X}}} \ar[d]^{f} \\
& {Y'} \ar[r]^{g} & {{\mathcal{Y}}} }
$$
where $X'$, $Y'$ are schemes and $p$, $g$ are smooth (resp. \'etale)
and surjective, $f''$ has ``P'' (\cite[pp. 27-29]{La}).
For Deligne-Mumford stacks it is enough to find a commutative diagram
$$
\xymatrix{
{X'} \ar[r]^{p} \ar[d]_{f''}& {{\mathcal{X}}} \ar[d]^{f} \\
{Y'} \ar[r]^{g} & {{\mathcal{Y}}} }
$$
where $p$ and $g$ are \'etale and surjective and $f''$ has ``P''.
Then it follows that $f$ has ``P'' (\cite[p. 100]{DM}).
Other notions are defined as follows.
\begin{definition}[Substack]
\label{substack}
\textup{\cite[def 2.5]{La}, \cite[p.102]{DM}}.
A stack ${\mathcal{E}}$ is a substack of ${\mathcal{F}}$ if it is a full subcategory of
${\mathcal{F}}$ and
\begin{enumerate}
\item If an object $X$ of ${\mathcal{F}}$ is in ${\mathcal{E}}$, then all isomorphic
objects are also in ${\mathcal{E}}$.
\item For all morphisms of schemes $f:U\to V$, if $X$ is in ${\mathcal{E}}(V)$,
then $f^* X$ is in ${\mathcal{E}}(U)$.
\item Let $\{U_i \to U\}$ be a cover of $U$ in the site
$({Sch}/S)$. Then $X$ is in ${\mathcal{E}}$ iff $X|_i$ is in ${\mathcal{E}}$ for all $i$.
\end{enumerate}
\end{definition}
\begin{definition}
\textup{\cite[def 2.13]{La}}.
A substack ${\mathcal{E}}$ of ${\mathcal{F}}$ is called open (resp. closed, resp. locally
closed) if the inclusion morphism ${\mathcal{E}} \to {\mathcal{F}}$ is
\textbf{representable} and it is an open immersion (resp. closed
immersion, resp. locally closed immersion).
\end{definition}
\begin{definition}[Irreducibility]
\textup{\cite[def 3.10]{La}, \cite[p.102]{DM}}.
An algebraic stack ${\mathcal{F}}$ is irreducible if it is not the union of two
distinct and nonempty proper closed substacks.
\end{definition}
\begin{definition}[Separatedness]
\textup{\cite[def 3.17]{La}, \cite[def 4.7]{DM}}.
An algebraic stack ${\mathcal{F}}$ is separated if the (representable) diagonal
morphism $\Delta_{\mathcal{F}}$ is universally closed (and hence proper, because
it is automatically separated and of finite type).
A morphism $f:{\mathcal{F}} \to {\mathcal{G}}$ of algebraic stacks is separated if for all $U
\to {\mathcal{G}}$ with $U$ affine, $U\times_{\mathcal{G}} {\mathcal{F}}$ is a separated (algebraic) stack.
\end{definition}
For Deligne-Mumford stacks, $\Delta_{\mathcal{F}}$ is universally closed iff it
is finite.
There is a valuative criterion of separatedness, similar to the
criterion for schemes. Recall that by Yoneda lemma (lemma
\ref{yoneda}), a morphism $f:U\to {\mathcal{F}}$ between a scheme and a stack is
equivalent to an object in ${\mathcal{F}}(U)$. Then we will say that $\alpha$ is
an isomorphism between two morphisms $f_1,f_2:U\to {\mathcal{F}}$ when $\alpha$
is an isomorphism between the corresponding objects of ${\mathcal{F}}(U)$.
\begin{proposition}[Valuative criterion of separatedness (stacks)]
\textup{\cite[prop
3.19]{La}, \cite[thm 4.18]{DM}}.
An algebraic stack ${\mathcal{F}}$ is separated (over $S$) if and
only if the following holds. Let $A$ be a valuation ring with fraction
field $K$.
Let $g^{}_1:\operatorname{Spec} A\to {\mathcal{F}}$ and
$g^{}_2:\operatorname{Spec} A \to {\mathcal{F}}$ be two morphisms such that:
\begin{enumerate}
\item $p^{}_{\mathcal{F}}\circ g^{}_1= p^{}_{\mathcal{F}}\circ g^{}_2$.
\item There exists an isomorphism $\alpha: g^{}_1|_{\operatorname{Spec} K}
\to g^{}_2|_{\operatorname{Spec} K}$.
\end{enumerate}
$$
\xymatrix{
& & {{\mathcal{F}}} \ar[d]^{p^{}_{{\mathcal{F}}}} \\
{\operatorname{Spec} K} \ar@(u,l)[rru] \ar[r]^{i} &
{\operatorname{Spec} A} \ar@<0.5ex>[ru]^{g^{}_1} \ar@<-0.5ex>[ru]_{g^{}_2} \ar[r]
& S
}
$$
then there exists an isomorphism (in fact unique) $\tilde\alpha:
g^{}_1\to g^{}_2$ that
extends $\alpha$, i.e. $\tilde\alpha|_{\operatorname{Spec} K}=\alpha$.
\end{proposition}
\begin{remark}
\label{dvr}
\textup{
It is enough to consider complete valuation rings $A$ with
algebraically closed residue field \cite[3.20.1]{La}. If furthermore
$S$ is locally Noetherian and ${\mathcal{F}}$ is locally of finite type, it is
enough to consider discrete valuation rings $A$ \cite[3.20.2]{La}.}
\end{remark}
\begin{example}
\textup{The stack $BG$ won't be separated if $G$ is not proper over
$S$ \cite[3.20.3]{La}; since we assumed $G$ to be affine, $G$ is proper
only if it is finite, so $BG$ is separated only for finite $G$.}
\textup{In general the moduli stack of vector bundles ${\mathcal{M}}$
is not separated. It is easy to find
families of vector bundles that contradict the criterion.}
\textup{The stack of stable curves $\overline{\mathcal{M}}_g$ is separated
\cite[prop 5.1]{DM}.}
\end{example}
The criterion for morphisms is more involved because we are
working with stacks and we have to keep track of the isomorphisms.
\begin{proposition}[Valuative criterion of separatedness (morphisms)]
\textup{\cite[prop 3.19]{La}}
A morphism of algebraic stacks $f:{\mathcal{F}} \to {\mathcal{G}}$ is separated if and
only if the following holds. Let $A$ be a valuation ring with fraction
field $K$. Let $g^{}_1:\operatorname{Spec} A\to {\mathcal{F}}$ and
$g^{}_2:\operatorname{Spec} A \to {\mathcal{F}}$ be two morphisms such that:
\begin{enumerate}
\item There exists an isomorphism $\beta: f\circ g^{}_1\to f\circ g^{}_2$.
\item There exists an isomorphism $\alpha: g^{}_1|_{\operatorname{Spec} K}
\to g^{}_2|_{\operatorname{Spec} K}$.
\item $f(\alpha)=\beta|_{\operatorname{Spec} K}$.
\end{enumerate}
then there exists an isomorphism (in fact unique) $\tilde\alpha:
g^{}_1\to g^{}_2$ that
extends $\alpha$, i.e. $\tilde\alpha|_{\operatorname{Spec} K}=\alpha$ and
$f(\tilde\alpha)=\beta$.
\end{proposition}
Remark \ref{dvr} is also true in this case.
\begin{definition}
\textup{\cite[def 3.21]{La}, \cite[def 4.11]{DM}}.
An algebraic stack ${\mathcal{F}}$ is proper (over $S$) if it is separated and of
finite type, and if there is a scheme $X$ proper over $S$ and a
(representable) surjective morphism $X\to {\mathcal{F}}$.
A morphism ${\mathcal{F}}\to {\mathcal{G}}$ is proper if for any affine scheme $U$ and
morphism $U\to {\mathcal{G}}$, the fiber product $U\times_{\mathcal{G}} {\mathcal{F}}$ is proper
over $U$.
\end{definition}
For properness we only have a satisfactory criterion for stacks (see
\cite[prop 3.23 and conj 3.25]{La} for a generalization for morphisms).
\begin{proposition}[Valuative criterion of properness]
\textup{\cite[prop 3.23]{La}, \cite[thm 4.19]{DM}}.
Let ${\mathcal{F}}$ be a separated algebraic stack (over $S$). It is proper
(over $S$) if and only if the following condition holds.
Let $A$ be a valuation ring with fraction
field $K$.
For any commutative diagram
$$
\xymatrix{
& & {{\mathcal{F}}} \ar[d]^{p^{}_{\mathcal{F}}} \\
{\operatorname{Spec} K} \ar[r]^{i} \ar[rru]^{g} & {\operatorname{Spec} A} \ar[r] & S }
$$
there exists a finite field extension $K'$ of $K$ such that $g$ extends to
$\operatorname{Spec}(A')$, where $A'$ is the integral closure of $A$ in $K'$.
$$
\xymatrix{
& & {{\mathcal{F}}} \ar[dd]^{p^{}_{\mathcal{F}}} \\
{\operatorname{Spec} K'} \ar[rru]^{g\circ u} \ar[d]_{u} \ar[r] &
{\operatorname{Spec} A'} \ar[d] \ar@{-->}[ru] \\
{\operatorname{Spec} K} \ar[r]^{i} & {\operatorname{Spec} A} \ar[r] & S }
$$
\end{proposition}
\begin{example}[Stable curves]
\textup{The Deligne-Mumford stack of stable curves
$\overline{\mathcal{M}}_g$ is proper
\cite[thm 5.2]{DM}}.
\end{example}
\subsection{Points and dimension}
\label{subspoints}
We will introduce the concept of point of an algebraic
stack and dimension of a stack at a point. The reference for this is
\cite[chapter 5]{La}.
\begin{definition}
Let ${\mathcal{F}}$ be an algebraic stack over $S$. The set of points of ${\mathcal{F}}$
is the set of equivalence classes of pairs $(K,x)$, with $K$ a field
over $S$ (i.e. a field with a morphism of schemes $\operatorname{Spec} K \to S$)
and $x:\operatorname{Spec} K \to {\mathcal{F}}$ a morphism of stacks. Two pairs $(K',x')$ and
$(K'',x'')$ are equivalent if there is a field $K$, extension of both $K'$
and $K''$, and a commutative diagram
$$
\xymatrix{
{\operatorname{Spec} K} \ar[r] \ar[d] & {\operatorname{Spec} K'} \ar[d]^{x'} \\
{\operatorname{Spec} K''} \ar[r]^{x''} & {\mathcal{F}}
}
$$
Given a morphism ${\mathcal{F}} \to {\mathcal{G}}$ of algebraic stacks and a point of
${\mathcal{F}}$, we define the image of that point in ${\mathcal{G}}$ by composition.
\end{definition}
Every point of an algebraic stack is the image of a point of an
atlas. To see this, given a point represented by $\operatorname{Spec} K \to {\mathcal{F}}$
and an atlas
$X\to {\mathcal{F}}$, take any point $\operatorname{Spec} K' \to X\times_{\mathcal{F}} \operatorname{Spec} K$. The
image of this point in $X$ maps to the given point.
To define the concept of dimension, recall that if $X$ and $Y$ are
locally Noetherian schemes and $f:X\to Y$ is flat, then for any point
$x\in X$ we have
$$
\dim_x(X)= \dim_x(f) + \dim_{f(x)}(Y),
$$
with $\dim_x(f)=\dim_x(X_{f(x)})$, where $X_y$ is the fiber of $f$
over $y$.
\begin{definition}
Let $f:{\mathcal{F}}\to {\mathcal{G}}$ be a representable morphism, locally of finite type,
between two algebraic stacks. Let $\xi$ be a point of ${\mathcal{F}}$. Let $Y$
be an atlas of ${\mathcal{G}}$.
Take a point $x$ in the algebraic space $Y\times_{\mathcal{G}} {\mathcal{F}}$ that maps to
$\xi$,
$$
\xymatrix{
{Y\times_{\mathcal{G}} {\mathcal{F}}} \ar[r] \ar[d]_{\tilde f} & {\mathcal{F}} \ar[d]^{f} \\
{Y} \ar[r] & {\mathcal{G}}
}
$$
and define the dimension of the morphism $f$ at the point $\xi$ as
$$
\dim_\xi(f)=\dim_x(\tilde f).
$$
\end{definition}
It can be shown that this definition is independent of the choices
made.
\begin{definition}
Let ${\mathcal{F}}$ be a locally Noetherian algebraic stack and $\xi$ a point of
${\mathcal{F}}$. Let $u: X\to {\mathcal{F}}$ be an atlas, and $x$ a point of $X$ mapping
to $\xi$. We define the dimension of ${\mathcal{F}}$ at the point $\xi$ as
$$
\dim_\xi({\mathcal{F}})=\dim_x(X)-\dim_x(u).
$$
The dimension of ${\mathcal{F}}$ is defined as
$$
\dim({\mathcal{F}})=\operatorname{Sup}_{\xi} (\dim_\xi({\mathcal{F}})).
$$
\end{definition}
Again, this is independent of the choices made.
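For instance (as a sanity check), if ${\mathcal{F}}$ is (the stack associated to) a
scheme $X$, we can take $u=\operatorname{id}_X$ as an atlas; then $\dim_x(u)=0$ and the
definition recovers the usual dimension $\dim_x(X)$.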
\begin{example}[Quotient by group action]
\textup{
Let $X$ be a smooth scheme of dimension $\dim(X)$ and $G$ a smooth
group of dimension $\dim(G)$ acting on $X$. Let $[X/G]$ be the
quotient stack defined in example \ref{quotient}. Using the atlas
defined in example
\ref{atlasquotient}, we see that
$$
\dim[X/G]=\dim(X)-\dim(G).
$$
Note that we haven't made any assumption on the action. In particular,
the action could be trivial. The dimension of an algebraic stack can
then be negative. For instance, the classifying stack
$BG$ defined in example \ref{quotient} has dimension $\dim(BG)=-\dim(G)$.
}
\end{example}
\subsection{Quasi-coherent sheaves on stacks}
\label{subssheaves}
\begin{definition}
\textup{\cite[def 7.18]{Vi}, \cite[def 6.11, prop 6.16]{La}.}
A quasi-coherent sheaf ${\mathcal{S}}$ on an algebraic stack ${\mathcal{F}}$ is the
following set of data:
\begin{enumerate}
\item For each morphism $X\to {\mathcal{F}}$ where $X$ is a scheme, a
quasi-coherent sheaf ${\mathcal{S}}_X$ on $X$.
\item For each commutative diagram
$$
\xymatrix{
{X} \ar[r]^f \ar[dr] & {Y} \ar[d] \\
& {{\mathcal{F}}}
}
$$
an isomorphism $\varphi^{}_f: {\mathcal{S}}_X \stackrel{\cong}{\longrightarrow} f^*{\mathcal{S}}_Y$,
satisfying the cocycle condition, i.e. for any commutative diagram
\begin{eqnarray}
\label{sheaf2}
\xymatrix{
{X} \ar[r]^{f} \ar[dr] & {Y} \ar[d] \ar[r]^{g}& {Z} \ar[dl] \\
& {{\mathcal{F}}}
}
\end{eqnarray}
we have $\varphi^{}_{g\circ f} = f^* \varphi^{}_g \circ \varphi^{}_f$.
\end{enumerate}
We say that ${\mathcal{S}}$ is coherent (resp. finite type, finite
presentation, locally free) if ${\mathcal{S}}_X$ is coherent (resp. finite
type, finite presentation, locally free) for all $X$.
A morphism of quasi-coherent sheaves $h:{\mathcal{S}} \to {\mathcal{S}}'$ is a
collection of morphisms of sheaves $h^{}_X:{\mathcal{S}}^{}_X \to {\mathcal{S}}'_X$
compatible with the isomorphisms $\varphi$.
\end{definition}
\begin{remark}\textup{
Since a sheaf on a scheme can be obtained by glueing the restriction
to an affine cover, it is enough to
consider affine schemes.}
\end{remark}
\begin{example}[Structure sheaf]
\textup{
Let ${\mathcal{F}}$ be an algebraic stack. The structure sheaf ${\mathcal{O}}_{\mathcal{F}}$ is
defined by taking $({\mathcal{O}}_{\mathcal{F}})_X={\mathcal{O}}_X$.}
\end{example}
\begin{example}[Sheaf of differentials]
\textup{
Let ${\mathcal{F}}$ be a Deligne-Mumford stack. To define the sheaf of
differentials $\Omega_{\mathcal{F}}$, if $U\to {\mathcal{F}}$ is an \'etale morphism we
set $(\Omega_{\mathcal{F}})_U=\Omega_U$, the sheaf of differentials of the
scheme $U$. If $V \to {\mathcal{F}}$ is another \'etale
morphism and we have a commutative diagram
$$
\xymatrix{
{U} \ar[r]^f \ar[dr] & {V} \ar[d] \\
& {{\mathcal{F}}}
}
$$
then $f$ has to be \'etale, so there is a canonical isomorphism
$\varphi^{}_f :\Omega_{U/S} \to f^* \Omega_{V/S}$, and these canonical
isomorphisms satisfy the cocycle condition.}
\textup{Once we have defined $(\Omega_{\mathcal{F}})_U$ for \'etale morphisms
$U\to {\mathcal{F}}$, we can extend the definition for any morphism $X\to {\mathcal{F}}$
with $X$ an arbitrary scheme as follows: take an (\'etale) atlas
$U=\coprod U_i \to {\mathcal{F}}$. Consider the composition morphism
$$
X\times_{\mathcal{F}} U \stackrel{p_2}{\longrightarrow} U \longrightarrow {\mathcal{F}},
$$
and define $(\Omega_{\mathcal{F}})_{X\times_{\mathcal{F}} U}=p^*_2\Omega_U$. The cocycle
condition for $\Omega_{U_i}$ and \'etale descent implies that
$(\Omega_{\mathcal{F}})_{X\times_{\mathcal{F}} U}$ descends to give a sheaf
$(\Omega_{{\mathcal{F}}})_X$ on $X$. It is easy to check that this doesn't depend
on the atlas $U$ used, and that given a commutative diagram like
(\ref{sheaf2}), there are canonical isomorphisms $\varphi$ satisfying
the cocycle condition.}
\end{example}
\begin{example}[Universal vector bundle]
\textup{
Let ${\mathcal{M}}$ be the moduli stack of vector bundles on a scheme $X$
defined in \ref{bbund}. The universal vector bundle $V$ on ${\mathcal{M}}
\times X$ is defined as follows:}
\textup{
Let $B$ be a scheme and $f=(f_1,f_2): B \to {\mathcal{M}} \times X$ a
morphism. By lemma
\ref{yoneda}, the morphism $f_1:B\to {\mathcal{M}}$ is equivalent to a vector
bundle $W$ on $B \times X$. We define $V_B$ as ${\tilde f}^*W$,
where ${\tilde f}=(\operatorname{id}_B, f_2):B\to B\times X$.
Let
$$
\xymatrix{
{B'} \ar[r]^g \ar[dr]_{f'} & {B} \ar[d]^{f} \\
& {{\mathcal{M}} \times X}
}
$$
be a commutative diagram. Recall that this means that there is an
isomorphism $\alpha:f \circ g \to f'$, and looking at the projection
to ${\mathcal{M}}$ we have an isomorphism $\alpha^{}_1:f^{}_1\circ g \to f'_1$.
Using lemma \ref{yoneda}, $f^{}_1\circ g$ and $f'_1$ correspond
respectively to the vector
bundles $(g\times \operatorname{id}_X)^*W$ and $W'$ on $B'\times X$, and (again by
lemma \ref{yoneda}) $\alpha^{}_1$
gives an isomorphism between them. It is easy to check that these
isomorphisms satisfy the cocycle condition for diagrams of the form
(\ref{sheaf2}).
}
\end{example}
\section{Vector bundles: moduli stack vs. moduli scheme}
\label{sectionversus}
In this section we will compare, in the context of vector bundles, the
new approach of stacks versus the standard approach of moduli schemes
via geometric invariant theory (GIT).
Fix a scheme $X$, a positive integer $r$ and
classes $c_i\in
H^{2i}(X)$.
All vector bundles over $X$ in this section will have rank $r$ and
Chern classes $c_i$. We will also consider vector bundles
on products $B\times X$ where $B$ is a scheme. We will always assume
that these vector bundles are flat over $B$, and that the restrictions
to the slices $\{p\}\times X$ are vector bundles with rank $r$ and
Chern classes $c_i$. Fix also a polarization on $X$. All references to
stability or semistability of vector bundles will mean Gieseker
stability with respect to this fixed polarization.
Recall that the functor $\underline{\Bund}^{s}$ (resp. $\underline{\Bund}^{ss}$)
is the functor from
$(Sch/S)$ to $(Sets)$ that for each scheme $B$ gives the set of
\textit{equivalence} classes of vector bundles over $B\times X$,
flat over $B$ and such that the restrictions $V|_b$ to the slices
$p\times X$ are stable (resp. semistable) vector bundles with fixed
rank and Chern classes, where two vector bundles $V$ and $V'$ on
$B\times X$ are considered \textit{equivalent}
if there is a line bundle $L$ on
$B$ such that $V$ is isomorphic to $V'\otimes p^*_B L$.
\begin{theorem}
There are schemes $\mathfrak{M}^{s}$ and $\mathfrak{M}^{ss}$, called moduli schemes,
corepresenting the
functors ${\underline{\Bund}}^{s}$ and ${\underline{\Bund}}^{ss}$.
\end{theorem}
The moduli scheme $\mathfrak{M}^{ss}$ is constructed using the Quot schemes
introduced in example \ref{quotconstruction} (for a detailed
exposition of the construction, see \cite{HL}). Since the set of
\textit{semistable} vector bundles is bounded, we can choose once and
for all $N$ and $m$ (depending only on the Chern classes and rank)
with the property that for any semistable vector bundle $V$ there is a
point in $R=R_{N,m}$ whose corresponding quotient is isomorphic to $V$.
The scheme $R$ parametrizes vector bundles $V$ on $X$
together with a basis of $H^0(V(m))$ (up to
multiplication by scalar). Recall that $N=h^0(V(m))$.
There is an action of ${GL(N)}$ on $R$,
corresponding to change of basis,
but since two bases that differ only
by a scalar give the same point of $R$, this ${GL(N)}$ action
factors through ${PGL(N)}$. Then the moduli scheme $\mathfrak{M}^{ss}$
is defined as the GIT quotient $R {/\!\!/} {PGL(N)}$.
The closed points of $\mathfrak{M}^{ss}$ correspond to S-equivalence
classes of vector bundles, so if there is a strictly semistable vector
bundle, the functor ${\underline{\Bund}}^{ss}$ is not representable.
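For instance (a standard example, for rank two bundles with trivial
determinant on a smooth projective curve of genus $g\geq 1$): the trivial
bundle $\mathcal{O}\oplus\mathcal{O}$ and a nonsplit extension
$$
0\longrightarrow \mathcal{O}\longrightarrow E\longrightarrow \mathcal{O}\longrightarrow 0,
$$
which exists because $H^1(\mathcal{O})\neq 0$, have the same Jordan--H\"older
factors, so they are S-equivalent but not isomorphic, and they define the
same closed point of $\mathfrak{M}^{ss}$.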
Now we will compare this scheme with the moduli stack ${\mathcal{M}}$ defined on
example \ref{bbund}. We will also consider the moduli stack
${\mathcal{M}}^{s}$ defined in the same way, but with the extra requirement
that the vector bundles should be stable. The moduli stack
${\mathcal{M}}^{s}$ is a substack (definition \ref{substack}) of ${\mathcal{M}}$.
The following are some of the
differences between the moduli scheme and the
moduli stack:
\begin{enumerate}
\item
The stack ${\mathcal{M}}$ parametrizes all vector bundles, but the scheme
$\mathfrak{M}^{ss}$ only
parametrizes semistable vector bundles.
\item
From the point of view of the scheme $\mathfrak{M}^{ss}$, we identify two
vector bundles if they are S-equivalent. On the other hand, from the
point of view of the moduli stack, two vector bundles are identified
only if they are isomorphic.
\item
Let $V$ and $V'$ be two families of vector bundles parametrized by a
scheme $B$, i.e. two vector bundles (flat over $B$) on $B\times X$.
If there is a line bundle $L$ on $B$ such that $V$ is isomorphic to
$V'\otimes p^*_B L$, then from the point of view of the moduli scheme,
$V$ and $V'$ are identified as being the same family. On the other
hand, from the point of view of the moduli stack, $V$ and $V'$ are
identified only if they are isomorphic as vector bundles on $B\times
X$.
\item
The functor $\underline{\Bund}^{s}$ of stable vector bundles is
sometimes representable (by the scheme $\mathfrak{M}^{s}$), but the moduli stack ${\mathcal{M}}^{s}$
is never representable by a scheme. To see this, note that any vector
bundle has automorphisms different from the identity (multiplication
by scalars) and apply lemma \ref{nonrepresentable}.
\end{enumerate}
Now we will restrict our attention to stable bundles, i.e. to the
scheme $\mathfrak{M}^s$ and the stack ${\mathcal{M}}^s$. For stable bundles the
notions of $S$-equivalence and isomorphism coincide, so the points of
$\mathfrak{M}^s$ correspond to isomorphism classes of vector
bundles. Consider $R^{s}\subset R$, the subscheme corresponding to stable
bundles. There
is a map $\pi :R^s \to \mathfrak{M}^s=R^s/{PGL(N)}$, and $\pi$ is in fact a
principal ${PGL(N)}$-bundle (this is a consequence of Luna's \'etale slice
theorem).
\begin{remark}[Universal bundle on moduli scheme]\textup{
The scheme $\mathfrak{M}^s$ represents the functor $\underline{\Bund}^s$ if there is a
universal family. Recall that a universal family for this functor is a vector
bundle $E$ on $\mathfrak{M}^s \times X$ such that the isomorphism class of
$E|_{p\times X}$ is the isomorphism class corresponding to the point
$p\in \mathfrak{M}^s$, and for any family of vector bundles $V$ on $B\times
X$ there is a morphism $f:B\to \mathfrak{M}^s$ and a line bundle $L$ on $B$
such that $V \otimes p^*_B L$ is isomorphic to $(f\times \operatorname{id})^*E$.
Note that if $E$ is a universal family, then $E\otimes p^*_{\mathfrak{M}^s}L$
will also be a universal family for any line bundle $L$ on $\mathfrak{M}^s$.}
\textup{
The universal bundle for the Quot scheme gives a universal family
$\widetilde V$ on $R^s\times X$, but this family doesn't always descend to
give a universal family on the quotient $\mathfrak{M}^s$.}
\textup{
Let $X\stackrel{G}\longrightarrow Y$ be a principal $G$-bundle. A vector bundle
$V$ on $X$ descends to $Y$ if the action of $G$ on $X$ can be lifted
to $V$. In our case, if a certain numerical criterion involving $r$ and
$c_i$ is satisfied (if $X$ is a smooth curve this criterion is
$\operatorname{gcd}(r,c_1)=1$), then we can find a line bundle $L$ on
$R^s$ such that the ${PGL(N)}$ action on $R^s$ can be lifted to
$\widetilde V \otimes p^*_{R^s}L$, and then this vector bundle descends to
give a universal family on $\mathfrak{M}^s \times X$. But in general the best
that we can get is a universal family on an \'etale cover of
$\mathfrak{M}^s$.}
\end{remark}
Recall from example \ref{atlasquotient} that there is a morphism
$[R^{ss}/{PGL(N)}] \to \mathfrak{M}^{ss}$, and that the morphism
$[R^{s}/{PGL(N)}] \to \mathfrak{M}^{s}$ is an isomorphism of stacks.
\begin{proposition}
\label{versus}
There is a commutative diagram of stacks
$$
\xymatrix{
{[R^{s}/{GL(N)}]} \ar[rr]^{q} \ar[d]_{g}^{\simeq}&
&{[R^{s}/{PGL(N)}]} \ar[d]^{h}_{\simeq} \\
{{\mathcal{M}}^{s}} \ar[rr]_{\varphi} & &{\;\mathfrak{M}^{s},}
}
$$
where $g$ and $h$ are isomorphisms of stacks, but $q$ and $\varphi$
are not.
If we replace ``stable'' with
``semistable'' we still have a commutative diagram,
but the corresponding morphism $h^{ss}$ is not an
isomorphism of stacks.
\end{proposition}
\begin{proof}
The morphism $\varphi$ is the composition of the natural morphism
${\mathcal{M}}^{s} \to \underline{\Bund}^{s}$ (sending each category to the set of
isomorphism classes of objects) and the morphism $\underline{\Bund}^{s} \to
\mathfrak{M}^{s}$ given
by the fact that the scheme $\mathfrak{M}^{s}=R^s{/\!\!/} {PGL(N)}$ corepresents the
functor.
The morphism $h$ was constructed in example \ref{quotient}.
The key ingredient needed to define $g$ is the fact that the ${GL(N)}$
action on the Quot scheme lifts to the universal bundle, i.e.
the universal bundle on the Quot scheme has a ${GL(N)}$-linearization.
Let
$$
\xymatrix{
{\widetilde{B}} \ar[r]^{f} \ar[d] & R^{ss} \\
{B} }
$$
be an object of $[R^{ss}/{GL(N)}]$. Since $R^{ss}$ is a subscheme of a
Quot scheme, it carries (the restriction of) a universal bundle, and this universal bundle has a ${GL(N)}$-linearization.
Let $\widetilde E$ be the vector
bundle on $\widetilde B\times X$ defined by the pullback of this universal
bundle. Since $f$ is ${GL(N)}$-equivariant, $\widetilde E$ is also
${GL(N)}$-linearized. Since $\widetilde B \times X \to B\times X$ is a principal
bundle, the vector bundle $\widetilde E$ descends to give a vector bundle $E$
on $B\times X$, i.e. an object of ${\mathcal{M}}^{ss}$. Let
$$
\xymatrix{ & & R^{ss}\\
{\widetilde{B}} \ar[r]_{\phi} \ar[d] \ar[rru]^{f}
& {\widetilde{B}'} \ar[d] \ar[ru]_{f'} \\
{B} \ar@{=}[r] & {B} }
$$
be a morphism in $[R^{ss}/{GL(N)}]$. Consider the vector bundles $\widetilde E$
and $\widetilde E'$ defined as before. Since $f'\circ \phi=f$, we get an
isomorphism of $\widetilde E$ with $(\phi \times \operatorname{id})^* \widetilde E'$. Furthermore
this isomorphism is ${GL(N)}$-equivariant, and then it descends to give an
isomorphism of the vector bundles $E$ and $E'$ on $B\times X$, and
we get a morphism in ${\mathcal{M}}^{ss}$.
To prove that this gives an equivalence of categories, we construct a
functor $\overline g$ from ${\mathcal{M}}^{ss}$ to $[R^{ss}/{GL(N)}]$.
Given a vector bundle $E$ on $B\times X$, let $q:\widetilde B \to B$ be the
${GL(N)}$-principal bundle associated with the vector bundle ${p^{}_{B}}_{*} E$ on
$B$. Let $\widetilde E=(q\times \operatorname{id})^*E$ be the pullback of $E$ to $\widetilde
B\times X$. It has a canonical ${GL(N)}$-linearization because it is
defined as a pullback by a principal ${GL(N)}$-bundle. The vector bundle
${p^{}_{\widetilde B}}_* \widetilde E$ is canonically isomorphic to the trivial
bundle ${\mathcal{O}}^N_{\widetilde B}$, and this isomorphism is ${GL(N)}$-equivariant, so
we get an \textit{equivariant} morphism $\widetilde B\to R^{ss}$, and hence an object
of $[R^{ss}/{GL(N)}]$.
If we have an isomorphism between two vector bundles $E$ and $E'$ on
$B\times X$, it is easy to check that it induces an isomorphism
between the associated objects of $[R^{ss}/{GL(N)}]$.
It is easy to check that there are natural isomorphisms of functors
$g\circ \overline g \cong \operatorname{id}$ and $\overline g\circ g \cong \operatorname{id}$,
and then $g$ is an equivalence of
categories.
The morphism $q$ is defined using the following lemma, with $G={GL(N)}$,
$H$ the subgroup consisting of scalar multiples of the identity,
$\overline G={PGL(N)}$ and $Y=R^{ss}$.
\end{proof}
\begin{lemma}
Let $Y$ be an $S$-scheme and $G$ an affine flat group $S$-scheme,
acting on $Y$ on the right. Let $H$ be a normal closed
subgroup of
$G$. Assume that $\overline G=G/H$ is affine. If $H$ acts trivially
on $Y$, then there is a
morphism of stacks
$$
[Y/G]\longrightarrow [Y/\overline G].
$$
If $H$ is nontrivial, then this morphism is not faithful,
so it is not an isomorphism.
\end{lemma}
\begin{proof}
Let
$$
\xymatrix{
{E} \ar[r]^{f} \ar[d]^{\pi} & Y \\
{B} }
$$
be an object of $[Y/G]$. There is a scheme $E/H$ such that $\pi$
factors
$$
E \stackrel{q}\longrightarrow E/H \stackrel{\pi'}\longrightarrow B.
$$
To construct $E/H$, note that there is an \'etale cover $U_i$ of
$B$ and isomorphisms $\phi_i:\pi^{-1}(U_i)\to U_i\times G$, with transition
functions $\psi_{ij}=\phi^{}_i \circ \phi^{-1}_j$. Since these
isomorphisms are $G$-equivariant, they descend to give isomorphisms
$\overline{\psi}_{ij}:U_j\times G/H \to U_i\times G/H$, and using these
transition functions we get $E/H$. This construction shows that $\pi'$
is a principal $\overline G$-bundle.
Furthermore, $q$ is also a
principal $H$-bundle (\cite[example 4.2.4]{HL}), and in particular it
is a categorical quotient.
Since $f$ is $H$-invariant, there is a morphism $\overline f: E/H \to
Y$, and this gives an object of $[Y/\overline G]$.
If we have a morphism in $[Y/G]$, given by a morphism $g:E\to E'$ of
principal $G$-bundles over $B$, it is easy to see that it descends
(since $g$ is equivariant) to
a morphism $\overline{g}:E/H \to E'/H$, giving a morphism in
$[Y/\overline G]$.
This morphism is not faithful, since the automorphism
$E\stackrel{\cdot z}{\longrightarrow} E$ given by
multiplication on the right by a nontrivial element $z\in H$
is sent
to the identity automorphism $E/H \to E/H$, and then $\operatorname{Hom}(E,E)\to
\operatorname{Hom}(E/H,E/H)$ is not injective.
\end{proof}
If $X$ is a smooth curve, then it can be shown that ${\mathcal{M}}$ is a
smooth stack of dimension $r^2(g-1)$, where $r$ is the rank and $g$ is
the genus of $X$. In particular, the open substack ${\mathcal{M}}^{ss}$ is
also smooth of dimension $r^2(g-1)$, but the moduli scheme $\mathfrak{M}^{ss}$ is
of dimension $r^2(g-1)+1$ and might not be smooth. Proposition
\ref{versus} explains the difference in the dimensions (at least on
the smooth part): we obtain the moduli stack by taking the quotient by
the group ${GL(N)}$, of dimension $N^2$, but the moduli scheme is obtained
by a quotient by the group ${PGL(N)}$, of dimension $N^2-1$. The moduli
scheme $\mathfrak{M}^{ss}$ is not smooth in general because in the strictly
semistable part of $R^{ss}$ the action of ${PGL(N)}$ is not free. On the
other hand, the smoothness of a stack quotient doesn't depend on the
freeness of the action of the group.
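Schematically, and restricting to the stable locus where the ${PGL(N)}$
action is free and $R^{s}$ is smooth (this is only an informal summary of
the discussion above), the two constructions differ in the group one divides
by:
$$
\dim {\mathcal{M}}^{s}=\dim [R^{s}/{GL(N)}]=\dim R^{s}-N^2,
\qquad
\dim \mathfrak{M}^{s}=\dim R^{s}-\dim {PGL(N)}=\dim R^{s}-(N^2-1),
$$
which accounts for the difference of one between the dimensions quoted above.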
\section{Appendix A: Grothendieck topologies, sheaves and algebraic spaces}
\label{grothendiecktopologies}
The standard reference for Grothendieck topologies is SGA
(\textit{S\'e\-mi\-naire de G\'eo\-m\'e\-trie Alg\'e\-bri\-que}). For an
introduction see \cite{T} or \cite{MM}. For algebraic spaces, see
\cite{K} or \cite{Ar1}.
An open cover of a
topological space $U$ can be seen as a family of morphisms in the category of
topological spaces $f_i:U_i \to U$, with the property that $f_i$ is an open
inclusion and the union of their images is $U$, i.e.\ we are choosing a
class of morphisms (open inclusions)
in the category of topological spaces.
A Grothendieck topology on an arbitrary category is basically a choice of a
class of morphisms, that play the role of ``open
sets''. A morphism $f:V\to U$ in this class is to be thought of as an
``open set'' in the object $U$. The concept of intersection of open
sets, for instance, can be replaced by the fiber product: the
``intersection'' of $f_1:U_1\to U$ and $f_2:U_2\to U$ is
$f_{12}:U_1\times _U U_2 \to U$.
A category with a Grothendieck topology is called a site. We will
consider two topologies on $({Sch}/S)$.
\textbf{fppf topology}. Let $U$ be a scheme. Then a cover of $U$ is a
finite collection of morphisms $\{f_i:U_i\to U\}_{i\in I}$ such that
each $f_i$ is a finitely presented flat morphism (for Noetherian
schemes, this is equivalent to flat and finite type), and $U$ is the
(set theoretic) union of the images of $f_i$. In other words,
$\coprod U_i \to U$ is \textit{``fid\`element plat de pr\'esentation
finie''}.
\textbf{\'Etale topology}. Same definition, but substituting flat by
\'etale.
A presheaf of sets on $({Sch}/S)$ is a contravariant functor $F$ from
$({Sch}/S)$ to $({Sets})$. Choose a topology on $({Sch}/S)$.
We say that
$F$ is a sheaf (or an $S$-space) with respect to that topology if
for every cover
$\{f_i:U_i\to U\}_{i\in I}$ in the topology the following two axioms
are satisfied:
\begin{enumerate}
\item \textit{(Mono)} Let $X$ and $Y$ be two elements of $F(U)$. If
$X|_i=Y|_i$ for all $i$, then $X=Y$.
\item \textit{(Glueing)} Let $X_i$ be an object of $F(U_i)$ for each
$i$ such that
$X_i|_{ij}=X_j|_{ij}$, then there exists $X \in F(U)$ such that
$X|_i=X_i$ for each $i$.
\end{enumerate}
We have used the following notation: if $X\in F(U)$, then $X|_i$ is
the element of $F(U_i)$ given by $F(f_i)(X)$, and if $X_i\in F(U_i)$,
then $X_i|_{ij}$ is the element of $F(U_{ij})$ given by
$F(f_{ij,i})(X_i)$ where $f_{ij,i}:U_i\times_U U_j \to U_i$ is the
pullback of $f_j$.
We can define morphisms of $S$-spaces as morphisms of sheaves (natural
transformation of functors with the obvious conditions).
Note that a scheme can be viewed as an $S$-space via its functor of
points, and a morphism between two such $S$-spaces is equivalent to a
scheme morphism between the schemes (by the Yoneda embedding lemma);
hence the category of $S$-schemes is a full subcategory of the category of
$S$-spaces.
\textbf{Equivalence relation and quotient space}.
An equivalence relation in the category of $S$-spaces consists of two
$S$-spaces $R$ and $U$ and a monomorphism of $S$-spaces
$$
\delta:R \to U \times_S U
$$
such that for every $S$-scheme $B$, the map $\delta(B):R(B)\to
U(B)\times U(B)$ is the graph of an equivalence relation between
sets. A quotient $S$-space for such an equivalence relation is by
definition the sheaf cokernel of the diagram
$$
\xymatrix{
{R} \ar@<0.5ex>[r]^{p_2\circ \delta}
\ar@<-0.5ex>[r]_{p_1\circ \delta} & {U}}
$$
\begin{definition}
\textup{\cite[0]{La}}.
An $S$-space $F$ is called an algebraic space if it is the quotient
$S$-space
for an equivalence
relation such that $R$ and $U$ are $S$-schemes, $p_1\circ \delta$,
$p_2\circ \delta$ are \'etale (morphisms of $S$-schemes), and $\delta$
is a quasi-compact morphism (of $S$-schemes).
\end{definition}
Roughly speaking, an algebraic space is a quotient of a scheme by an
\'etale equivalence relation. The following is an equivalent
definition.
\begin{definition}
\textup{\cite[def 1.1]{K}}.
An $S$-space $F$ is called an algebraic space if there
exists a scheme
$U$ (atlas) and a morphism of $S$-spaces $u:U\to F$ such that
\begin{enumerate}
\item (The morphism $u$ is \'etale) For any $S$-scheme $V$ and
morphism $V \to F$, the (sheaf) fiber
product $U\times_F V$ is representable by a scheme, and the map
$U\times_F V\to V$ is an \'etale morphism of schemes.
\item (Quasi-separatedness) The morphism $U\times_F U \to
U\times_S U$ is quasi-compact.
\end{enumerate}
\end{definition}
We recover the first definition by taking $R=U\times_F U$. Then
roughly speaking, we can also think of an algebraic space as
``something'' that looks locally in the \'etale topology like an
affine scheme,
in the same sense that a scheme is something that looks locally in
the Zariski topology like an affine scheme.
Algebraic spaces are used, for instance, to give algebraic structure
to certain
complex manifolds (for instance Moishezon manifolds) that are not
schemes, but
can be realized as algebraic spaces.
All smooth algebraic spaces of dimension 1 and
2 are actually schemes. An example of a smooth algebraic space of
dimension 3 that is not a scheme can be found in \cite{H}.
But \'etale topology is useful even if we are only interested in
schemes.
The idea is that the \'etale topology is finer than the Zariski
topology, and in many situations it is ``fine enough'' to do the
analogue of the manipulations that can be done with the analytic
topology of complex manifolds. As an example, consider the affine
complex line $\operatorname{Spec}(\mathbb{C}[x])$, and take a (closed) point $x_0$
different from $0$. Assume that we want to define the function $\sqrt{x}$
in a neighborhood of $x_0$. In the analytic topology we only need to
take a neighborhood small enough so that it doesn't contain a loop
that goes around the
origin, and then we choose one of the branches (a sign) of the square
root. In the Zariski topology this cannot be done, because all open
sets are too large (they contain loops going around the origin, so the sign of
the square root would change, and $\sqrt{x}$ would be multivalued).
But take the 2:1 \'etale map $V= \operatorname{Spec}(\mathbb{C}[y,x,x^{-1}]/(y^2-x)) \to
\operatorname{Spec}(\mathbb{C}[x])$.
The function $\sqrt{x}$ can certainly be defined on $V$: it is just
the function $y$. It is in this sense that we say that
the \'etale topology is finer: $V$ is a ``small enough open subset''
because the square root can be defined on it.
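To make the \'etale claim concrete (a short verification, with the details
left to the reader): since $x$ is invertible on $V$, we have
$$
V=\operatorname{Spec}\bigl(\mathbb{C}[y,x,x^{-1}]/(y^2-x)\bigr)\cong \operatorname{Spec}(\mathbb{C}[y,y^{-1}]),
$$
and the map to $\operatorname{Spec}(\mathbb{C}[x])$ is given by $x\mapsto y^2$. It is flat,
and it is unramified because the derivative $\partial(y^2-x)/\partial y=2y$
is invertible on $V$; hence it is \'etale, and it is 2:1 because $y$ and $-y$
have the same image.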
\section{Appendix B: 2-categories}
In this section we recall the notions of 2-category and 2-functor.
A 2-category $\mathfrak{C}$ consists of the following data \cite{Hak}:
\begin{enumerate}
\item [(i)] A class of objects $\operatorname{ob}\mathfrak{C}$
\item [(ii)] For each pair $X$, $Y \in \operatorname{ob}\mathfrak{C}$, a category $\operatorname{Hom}(X,Y)$
\item [(iii)] \textit{horizontal composition of 1-morphisms and
2-morphisms}. For each triple $X$, $Y$, $Z \in \operatorname{ob}\mathfrak{C}$, a functor
$$
\mu_{X,Y,Z}:\operatorname{Hom}(X,Y) \times \operatorname{Hom}(Y,Z) \to \operatorname{Hom} (X,Z)
$$
\end{enumerate}
with the following conditions
\begin{enumerate}
\item [(i')] \textit{(Identity 1-morphism)} For each object $X\in \operatorname{ob}\mathfrak{C}$,
there exists an object
$\operatorname{id}_X\in \operatorname{Hom}(X,X)$ such that
$$
\mu_{X,X,Y}(\operatorname{id}_X,\;)=\mu_{X,Y,Y}(\;,\operatorname{id}_Y)=\operatorname{id}_{\operatorname{Hom}(X,Y)},
$$
where $\operatorname{id}_{\operatorname{Hom}(X,Y)}$ is the identity functor on the category
$\operatorname{Hom}(X,Y)$
\item[(ii')] \textit{(Associativity of horizontal compositions)}
For each quadruple $X$, $Y$, $Z$, $T\in \operatorname{ob}\mathfrak{C}$,
$$
\mu_{X,Z,T}\circ (\mu_{X,Y,Z}\times \operatorname{id}_{\operatorname{Hom}(Z,T)})=
\mu_{X,Y,T}\circ (\operatorname{id}_{\operatorname{Hom}(X,Y)}\times\mu_{Y,Z,T})
$$
\end{enumerate}
The example to keep in mind is the 2-category $\mathfrak{Cat}$ of
categories. The objects of $\mathfrak{Cat}$ are categories, and for
each pair $X$, $Y$ of categories, $\operatorname{Hom}(X,Y)$ is the category of
functors between $X$ and $Y$.
Note that the main difference between a 1-category (a usual category)
and a 2-category is that $\operatorname{Hom}(X,Y)$, instead of being a set, is a
category.
Given a 2-category, an object $f$ of the category $\operatorname{Hom}(X,Y)$ is
called a 1-morphisms of
$\mathfrak{C}$, and is represented with a diagram
$$
\xymatrix
{
{\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore &
{\bullet}\save[]+<0ex,2.5ex>*{Y}\restore}
$$
and a morphism $\alpha$ of the category $\operatorname{Hom}(X,Y)$ is called a
2-morphisms of $\mathfrak{C}$, and is represented as
$$
\xymatrix
{
{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_{f'}^{}="fp"
\save[]+<0ex,2.5ex>*{X}\restore & &{\bullet}
\save[]+<0ex,2.5ex>*{Y}\restore
\ar @2^{\alpha} "f";"fp"}
$$
Now we will rewrite the axioms of a 2-category using diagrams.
\begin{enumerate}
\item \textit{(Composition of 1-morphisms)} Given a diagram
$$
\xymatrix
{{\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore &
{\bullet} \ar[r]^g \save[]+<0ex,2.5ex>*{Y}\restore &
{\bullet} \save[]+<0ex,2.5ex>*{Z}\restore}
\quad\text{there exist}\quad
\xymatrix
{{\bullet} \ar[r]^{g\circ f} \save[]+<0ex,2.5ex>*{X}\restore &
{\bullet}\save[]+<0ex,2.5ex>*{Z}\restore}
$$
(this is (iii) applied to objects) and this composition is
associative: $(h\circ g) \circ f= h\circ (g\circ f)$ (this is (ii')
applied to objects).
\item \textit{(Identity for 1-morphisms)} For each object $X$ there is a
1-morphism $\operatorname{id}_X$ such that $f\circ \operatorname{id}_Y =\operatorname{id}_X \circ f=f$ (this is
(i')).
\item \label{three} \textit{(Vertical composition of 2-morphisms)}
Given a diagram
$$
\xymatrix
{{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar [rr]|g^{}="g"_{}="g2"
\ar @(dr,dl)[rr]_h^{}="h"
\save[]+<0ex,2.5ex>*{X}\restore & &{\bullet}
\save[]+<0ex,2.5ex>*{Y}\restore
\ar @2^{\alpha} "f";"g"
\ar @2^{\beta} "g2";"h"}
\quad\text{there exists}\quad
\xymatrix
{
{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_h^{}="g"
\save[]+<0ex,2.5ex>*{X}\restore & &{\bullet}
\save[]+<0ex,2.5ex>*{Y}\restore
\ar @2^{\beta\circ\alpha} "f";"g"}
$$
and this composition is associative $(\gamma\circ\beta)\circ\alpha =
\gamma\circ(\beta\circ\alpha)$.
\item \textit{(Horizontal composition of 2-morphisms)} Given a diagram
$$
\xymatrix
{
{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_{f'}^{}="fp"
\save[]+<0ex,2.5ex>*{X}\restore & &{\bullet}
\save[]+<0ex,2.5ex>*{Y}\restore
\ar @(ur,ul)[rr]^{g}_{}="g" \ar @(dr,dl)[rr]_{g'}^{}="gp"
& &{\bullet}
\save[]+<0ex,2.5ex>*{Z}\restore
\ar @2^{\alpha} "f";"fp"
\ar @2^{\beta} "g";"gp"}
\quad\text{there exists}\quad
\xymatrix
{
{\bullet} \ar @(ur,ul)[rrr]^{g\circ f}_{}="gf" \ar @(dr,dl)[rrr]
_{g'\circ f'}^{}="gpfp"
\save[]+<0ex,2.5ex>*{X}\restore & & &{\bullet}
\save[]+<0ex,2.5ex>*{Z}\restore
\ar @2^{\beta\ast\alpha} "gf";"gpfp"}
$$
(this is (iii) applied to morphisms) and it is associative $(\gamma\ast
\beta)\ast\alpha=\gamma\ast(\beta\ast\alpha)$ (this is (ii') applied
to morphisms).
\item \textit{(Identity for 2-morphisms)} For every 1-morphism $f$ there
is a 2-morphism $\operatorname{id}_f$ such that $\alpha\circ\operatorname{id}_g=\operatorname{id}_f\circ\alpha=
\alpha$ (this and item \ref{three} are (ii)). We have $\operatorname{id}_g
\ast \operatorname{id}_f=\operatorname{id}_{g\circ f}$ (this means that $\mu_{X,Y,Z}$ respects the
identity).
\item \textit{(Compatibility between horizontal and vertical
composition of 2-morphisms)} Given a diagram
$$
\xymatrix
{{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar [rr]|{f'}^{}="f1"_{}="f2"
\ar @(dr,dl)[rr]_{f''}^{}="fpp"
\save[]+<0ex,2.5ex>*{X}\restore & &
{\bullet} \ar @(ur,ul)[rr]^g_{}="g" \ar [rr]|{g'}^{}="g1"_{}="g2"
\ar @(dr,dl)[rr]_{g''}^{}="gpp"
\save[]+<0ex,2.5ex>*{Y}\restore & &{\bullet}
\save[]+<0ex,2.5ex>*{Z}\restore
\ar @2^{\alpha} "f";"f1"
\ar @2^{\alpha'} "f2";"fpp"
\ar @2^{\beta} "g";"g1"
\ar @2^{\beta'} "g2";"gpp"}
$$
then $(\beta'\circ \beta)\ast(\alpha'\circ \alpha)=(\beta'\ast\alpha')
\circ(\beta\ast\alpha)$ (this is (iii) applied to morphisms).
\end{enumerate}
Two objects $X$ and $Y$ of a 2-category are called equivalent if
there exist two 1-morphisms
$f:X\to Y$, $g:Y\to X$ and two 2-isomorphisms (invertible
2-morphism) $\alpha:g\circ f \to
\operatorname{id}_X$ and $\beta:f\circ g \to \operatorname{id}_Y$.
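For instance, in the 2-category $\mathfrak{Cat}$ described above, two objects
are equivalent in this sense exactly when they are equivalent categories in
the usual sense: $f$ and $g$ are quasi-inverse functors and $\alpha$, $\beta$
are the natural isomorphisms $g\circ f \cong \operatorname{id}_X$ and $f\circ g \cong \operatorname{id}_Y$.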
A commutative diagram of 1-morphisms in a 2-category is a diagram
$$
\xymatrix{
& {\bullet} \ar[rd]^g \save[]+<0ex,2.5ex>*{Y}\restore
\ar @2[d]^{\alpha} \\
{\bullet} \ar[ru]^f \ar[rr]_{h}
\save[]-<3ex,0ex>*{X}\restore & &
{\bullet} \save[]+<3ex,0ex>*{Z}\restore}
$$
such that $\alpha:g\circ f \to h$ is a 2-isomorphisms.
\begin{remark}
\textup{
Since 2-functors only respect composition of 1-morphisms up to a
2-isomorphism (condition 3), sometimes they are called pseudofunctors
or lax functors.}
\end{remark}
\begin{remark}
\textup{
Note that we don't require $g\circ f=h$ to say that the diagram is
commutative, but just require that there is a 2-isomorphisms between
them. This is the reason why 2-categories are used to describe
stacks.}
\end{remark}
On the other hand, a diagram of 2-morphisms will be called
commutative only if the compositions are actually equal.
Now we will define the concept of covariant 2-functor (a contravariant
2-functor is defined in a similar way).
A covariant 2-functor $F$ between two 2-categories $\mathfrak{C}$ and $\mathfrak{C}'$ is a law
that for each object $X$ in $\mathfrak{C}$ gives an
object $F(X)$ in $\mathfrak{C}'$. For each 1-morphism $f:X\to Y$ in $\mathfrak{C}$ it gives
a 1-morphism $F(f):F(X)\to F(Y)$ in $\mathfrak{C}'$, and for each 2-morphism
$\alpha:f\Rightarrow g$ in $\mathfrak{C}$ it gives a 2-morphism
$F(\alpha):F(f)\Rightarrow F(g)$ in $\mathfrak{C}'$, such that
\begin{enumerate}
\item \textit{(Respects identity 1-morphism)} $F(\operatorname{id}_X)=\operatorname{id}_{F(X)}$.
\item \textit{(Respects identity 2-morphism)} $F(\operatorname{id}_f)=\operatorname{id}_{F(f)}$.
\item \label{twoisom} \textit{(Respects composition of 1-morphism up to a
2-isomorphism)} For every diagram
$$
\xymatrix
{{\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore &
{\bullet} \ar[r]^g \save[]+<0ex,2.5ex>*{Y}\restore &
{\bullet} \save[]+<0ex,2.5ex>*{Z}\restore}
$$
there exists a 2-isomorphism $\epsilon_{g,f}:F(g)\circ F(f) \to
F(g\circ f)$
$$
\xymatrix{
& {\bullet} \ar[rd]^{F(g)} \save[]+<0ex,2.5ex>*{F(Y)}\restore
\ar @2[d]^{\epsilon_{g,f}} \\
{\bullet} \ar[ru]^{F(f)} \ar[rr]_{F(g\circ f)}
\save[]-<3ex,0ex>*{F(X)}\restore & &
{\bullet} \save[]+<3ex,0ex>*{F(Z)}\restore}
$$
\begin{enumerate}
\item $\epsilon_{f,\operatorname{id}_X}=\epsilon_{\operatorname{id}_Y,f}=\operatorname{id}_{F(f)}$
\item $\epsilon$ \textit{ is associative}. The following
diagram is commutative
$$
\xymatrix
{F(h)\circ F(g)\circ F(f) \ar@2[rr]^{\epsilon_{h,g} \times \operatorname{id}}
\ar@2[d]_{\operatorname{id} \times \epsilon_{g,f}} & &
F(h\circ g)\circ F(f) \ar@2[d]^{\epsilon_{h\circ g,f}} \\
F(h)\circ F(g\circ f) \ar@2[rr]^{\epsilon_{h,g\circ f}} & &
F(h\circ g\circ f)}
$$
\end{enumerate}
\item \textit{(Respects vertical composition of 2-morphisms)}
For every pair of 2-morphisms $\alpha:f \to f'$, $\beta:g \to g'$,
we have $F(\beta\circ
\alpha)=F(\beta)\circ F(\alpha)$.
\item \label{last} \textit{(Respects horizontal composition of 2-morphisms)}
For every pair of 2-morphisms $\alpha:f \to f'$, $\beta:g \to g'$,
the following diagram commutes
$$
\xymatrix
{F(g)\circ F(f) \ar@2[rr]^{F(\beta)\ast F(\alpha)}
\ar@2[d]_{\epsilon_{g,f}} & &
F(g')\circ F(f') \ar@2[d]^{\epsilon_{g',f'}} \\
F(g\circ f) \ar@2[rr]^{F(\beta\ast\alpha)} & &
F(g'\circ f')}
$$
\end{enumerate}
By a slight abuse of language, condition \ref{last} is usually written
as $F(\beta)\ast F(\alpha)=F(\beta\ast \alpha)$. Note that strictly
speaking this equality doesn't make sense, because the sources (and
the targets) don't coincide, but if we chose once and for all the
2-isomorphisms $\epsilon$ of condition \ref{twoisom}, then there is a
unique way
of making sense of this equality.
\begin{remark}
\label{B2}
\textup{
In the applications to stacks, the isomorphism $\epsilon_{g,f}$ of
item \ref{twoisom} is canonically defined, and by abuse of language we will
say that $F(g)\circ F(f)= F(g\circ f)$, instead of saying that they
are isomorphic.}
\end{remark}
Given a 1-category $C$ (a usual category), we can define a 2-category:
we just have to make the set $\operatorname{Hom}(X,Y)$ into a category, and we do
this just by defining the unit morphisms for each element.
On the other hand, given a 2-category $\mathfrak{C}$ there are two ways of
defining a 1-category. We have to make each category $\operatorname{Hom}(X,Y)$ into a
set. The naive way is just to take the set of objects of $\operatorname{Hom}(X,Y)$,
and then we obtain what is called the underlying category of $\mathfrak{C}$
(see \cite{Hak}). This has the problem that a 2-functor $F:\mathfrak{C} \to \mathfrak{C}'$
is not in general a functor of the underlying categories (because in
item \ref{twoisom} we only require the composition of 1-morphisms to
be respected up to 2-isomorphism).
The best way of constructing a 1-category from a 2-category is to
define the set of morphisms between the objects $X$ and $Y$ as the
set of isomorphism classes of objects of $\operatorname{Hom}(X,Y)$: two objects
$f$ and $g$ of $\operatorname{Hom}(X,Y)$ are isomorphic if there exists a
2-isomorphism $\alpha:f \Rightarrow g$ between them. We call the category
obtained in this way the 1-category associated to $\mathfrak{C}$. Note that a
2-functor between 2-categories then becomes a functor between the
associated 1-categories.
\textbf{Acknowledgments.}
This article is based on a series of lectures that I gave in February
1999 in the Geometric Langlands programme seminar of the Tata Institute
of Fundamental Research.
First of all, I would like to thank N. Nitsure for proposing
that I give these lectures.
Most of my understanding of stacks comes from
conversations with N. Nitsure and C. Sorger.
I would also like to thank T.R. Ramadas for encouraging me
to write these notes, and the participants in the
seminar in TIFR for their active participation, interest,
questions and comments.
In ICTP (Trieste) I gave two informal talks in August 1999
on this subject, and the comments of the participants, especially L.
Brambila-Paz and Y.I. Holla, helped to
remove mistakes and improve the original notes.
This work was supported by a postdoctoral fellowship of
Ministerio de Educaci\'on y Cultura (Spain).
\end{document} |
\begin{document}
\begin{titlepage}
\begin{center}
{\large \bf Path integral approach \\ to \\the full Dicke model}\\
{\large\em M.~Aparicio Alcalde,\footnotemark[1] and
B. M. Pimentel\,\footnotemark[2]}\\
Instituto de F\'{\i}sica Te\'orica, UNESP - S\~ao Paulo State University,\\
Caixa Postal 70532-2, 01156-970 S\~ao Paulo, SP, Brazil. \\
05/11/2010\\
\subsection*{\\Abstract}
\end{center}
\baselineskip .1in
The full Dicke model describes a system of $N$ identical two-level atoms coupled to a
single-mode quantized bosonic field. The model contains rotating and counter-rotating
coupling terms between the atoms and the bosonic field, with coupling constants $g_1$ and $g_2$
for the two types of terms, respectively. We study finite temperature properties of the model
using the path integral approach and functional methods. In the thermodynamic limit,
$N\rightarrow\infty$, the system exhibits a phase transition from the normal to the superradiant phase
at certain critical values of the temperature and coupling constants. We distinguish three particular cases:
the first one corresponds to the rotating wave approximation,
in which $g_1\neq 0$ and $g_2=0$; the second one corresponds to the case $g_1=0$ and $g_2\neq 0$;
in these two cases the model has a continuous symmetry. The last one corresponds to the case
$g_1\neq 0$ and $g_2\neq 0$, in which the model has a discrete symmetry. The phase transition in each case is
related to the spontaneous breaking of the respective symmetry.
For each of these three particular cases, we find the asymptotic behaviour of
the partition function in the thermodynamic limit, and the collective spectrum of the system in the
normal and the superradiant phase. For the rotating wave approximation, and also for the case of
$g_1=0$ and $g_2\neq 0$, in the superradiant phase the collective spectrum has a zero energy
value, corresponding to the Goldstone mode associated with the continuous symmetry breaking of the model. Our
analysis and results remain valid in the limit of zero temperature, $\beta\rightarrow\infty$, in which
the model exhibits a quantum phase transition.
PACS numbers: 03.65.Db, 05.30.Jp, 73.43.Nq, 73.43.Lp
\footnotetext[1]{e-mail: \,[email protected]}
\footnotetext[2]{e-mail:\,\,[email protected]}
\end{titlepage}
\baselineskip .18in
\section{Introduction}
\quad $\,\,$
The Dicke model is an interesting spin-boson model because, despite being
a simple model, it exhibits the superradiance effect \cite{dicke}. This model describes a system of $N$
identical two-level atoms coupled to a single-mode radiation field, simplified according to the rotating wave
approximation. In this context, superradiance is characterized as coherent spontaneous
radiation emission with intensity proportional to $N^2$.
Thermodynamic properties of the Dicke model were studied in the thermodynamic limit, $N\rightarrow\infty$.
It was found that the model exhibits a second order phase transition from the normal to the superradiant phase
at a certain critical temperature, provided the coupling constant
between the atoms and the field is sufficiently large \cite{hepp} \cite{wang}. The influence of the counter-rotating
term on the thermodynamics of the Dicke model was also studied in the literature \cite{hepp2}
\cite{duncan}. Allowing different coupling constants for
the rotating and the counter-rotating terms, the
critical temperature and the free energy of the model were calculated in
\cite{hioe} \cite{pimentel1}. We call this generalization the full Dicke model.
The path integral approach and functional methods were used to study spin-boson problems,
yielding the critical temperature, free energy and collective spectrum of the models
in the thermodynamic limit \cite{moshchi} \cite{yarunin}. With this approach,
Popov and Fedotov \cite{popov1} \cite{popov2} rigorously calculated the partition
function and collective spectrum for the Dicke model in the normal and superradiant phase.
The relation between the phase transition and continuous symmetry breaking in the Dicke model was
pointed out in reference \cite{popov3}. The full Dicke model was studied using the path integral
approach in \cite{aparicio1},
where the authors found the asymptotic behaviour of the partition function and collective spectrum
in the normal phase. Using the same approach, thermodynamic properties of some other spin-boson
models were also studied \cite{aparicio2} \cite{aparicio3}.
In this paper, using the path integral approach and functional methods, we find the
asymptotic behaviour of the partition function and the collective spectrum
of the full Dicke model in the thermodynamic limit, $N\rightarrow\infty$, in the normal
and superradiant phases.
The full Dicke model exhibits a phase transition from the normal to the superradiant phase
at certain critical values of the temperature and coupling constants. In our study we
distinguish three particular cases.
The first one corresponds to the rotating wave approximation,
$g_1\neq 0$ and $g_2=0$; in this case the model has a continuous symmetry, associated
with the conservation of the sum of the excitation number of the $N$ atoms and the excitation
number of the boson field. The second case corresponds to the model with $g_1=0$ and $g_2\neq 0$;
in this case the model also has a continuous symmetry, associated with the conservation
of the difference between the excitation number of the $N$ atoms and the excitation
number of the boson field. The last one corresponds to the case
$g_1\neq 0$ and $g_2\neq 0$, in which the model has a discrete symmetry. The phase transition in each case is
related to the spontaneous breaking of the respective symmetry. For the rotating
wave approximation, and also for the case of $g_1=0$ and $g_2\neq 0$, in the superradiant phase
the collective spectrum has a zero energy value, corresponding to the Goldstone mode associated with
the breaking of the respective continuous symmetry. The collective spectrum obtained in this paper
remains valid in the zero temperature limit, corresponding to the case of a quantum phase transition.
Practical realization of the full Dicke model in the laboratory was discussed by Dimer {\it et al.} \cite{dimer}.
Since the radiation frequency and the energy separation between the two levels of the atoms exceed
the coupling strength by many orders of magnitude, the counter-rotating terms have little
effect on the dynamics. These authors proposed that in a cavity containing $N$ qubits, a single
mode of the quantized field and classical (laser) fields, it is possible to obtain an effective
Hamiltonian equal to the full Dicke Hamiltonian. The parameters
of this effective Hamiltonian can be controlled, so that it is possible to operate in the phase transition regime. Other authors stressed the
importance for quantum information technology of experimental realizations of generalizations of the
Dicke model in cavity quantum electrodynamics \cite{harkonen} \cite{baumann}.
The quantum phase transition of the Dicke model, in the thermodynamic limit, was studied by diagonalizing the
Hamiltonian \cite{hillery}. For this purpose the Holstein-Primakoff map is applied, which represents the total
angular momentum of the $N$ atoms by a single bosonic field. These authors find the collective spectrum
in the normal phase. A similar method was used by Emary and Brandes to study the connection between the
quantum phase transition and quantum chaos in the Dicke model without using the rotating wave
approximation \cite{emary2}. They find the collective spectrum of the model in the normal and superradiant phases,
as well as other quantities characteristic of quantum chaos. The relationship between entanglement and the quantum phase transition
in the Dicke model was also studied \cite{ent1} \cite{ent2}; the authors find that the atom-field entanglement entropy
diverges at the critical point of the phase transition. Studies of this relationship between entanglement and quantum
phase transition for other collective models exist in the literature \cite{vidal}.
This paper is organized as follows. In section 2, we introduce the full Dicke Hamiltonian and study
its symmetries. In section 3, we introduce a map from the spin operators of each atom to bilinear
forms of fermionic operators, defining the fermion full Dicke model. In section 4, we introduce
the path integral approach for the full Dicke model; using functional methods we obtain
the critical temperature and the asymptotic behaviour of the partition function in some particular cases of the model.
In section 5, the partition function and collective spectrum of the model are presented in the normal phase. In section 6,
the partition function and collective spectrum of the model are presented in the superradiant phase. In
section 7 we discuss our conclusions. Throughout the paper we use $k_{B}=c=\hbar=1$.
\section{The full Dicke Hamiltonian and symmetries}
\quad $\,\,$
The full Dicke model describes a system of $N$ identical two-level atoms coupled to a
single-mode quantized bosonic field. The Hamiltonian contains rotating and counter-rotating
coupling terms between the atoms and the bosonic field, with coupling constants $g_1$ and $g_2$
for the two types of terms, respectively. The Hamiltonian of the full Dicke model can thus be
written as
\begin{eqnarray}
H\,=\,\frac{\Omega}{2}\,\sum_{j=1}^{N}\,\sigma_{(j)}^z+\omega_{0}\,b^{\dagger}\,b\,+\frac{g_1}{\sqrt{N}}
\sum_{j=1}^{N}\, \Bigl(b\,
\sigma_{(j)}^{+}+b^{\dagger}\sigma_{(j)}^{-}\Bigr)+\,\frac{g_2}{\sqrt{N}}
\sum_{j=1}^{N}\, \Bigl(b\,
\sigma_{(j)}^{-}+b^{\dagger}\sigma_{(j)}^{+}\Bigr)\,.
\label{fullDHamil}
\end{eqnarray}
In the above equation we define the operators $\sigma_{(j)}^{\pm}=\frac{1}{2}\,(\sigma_{(j)}^{1} \pm i\,\sigma_{(j)}^{2})$,
where the operators $\sigma_{(j)}^1$,
$\sigma_{(j)}^2$ and $\sigma_{(j)}^z=\sigma_{(j)}^3$ satisfy the commutation relations $[\sigma_{(j)}^p,\sigma_{(j)}^q]=
2\,i\,\epsilon^{pqr}\,\sigma_{(j)}^r$ with $p,q,r=1,2,3$. Therefore, $[\sigma_{(j)}^+,\sigma_{(j)}^-]=\sigma_{(j)}^z$ and $[\sigma_{(j)}^z,\sigma_{(j)}^{\pm}]=
\pm\,2\,\sigma_{(j)}^{\pm}$. The operators
$b$ and $b^{\dagger}$ are the boson annihilation and creation
operators of the mode excitations and satisfy the usual commutation
relations.
Let us define three different operators. The first one is the operator $N$, defined by
\begin{eqnarray}
N=b^{\dagger}b+\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^z\,.
\label{Nexc1}
\end{eqnarray}
The second one is the operator $N_-$, defined by
\begin{eqnarray}
N_-=b^{\dagger}b-\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^z\,.
\label{N-1}
\end{eqnarray}
Finally, we define the parity operator $\Pi$ by
\begin{eqnarray}
\Pi=e^{i\,\pi\,N}\,,
\label{parity1}
\end{eqnarray}
with the operator $N$ defined in Eq. (\ref{Nexc1}). In the particular case of $g_1\neq 0$ and $g_2=0$,
which corresponds to the rotating wave approximation, it is possible to show that $[H,N]=0$.
In the particular case of $g_1=0$ and $g_2\neq 0$, it is possible to show that $[H,N_-]=0$.
Moreover, $[H,\Pi]=0$ for arbitrary non-negative values of $g_1$ and
$g_2$. These commutation relations of the Hamiltonian with the operators defined above correspond
to symmetries of the model in each case. It is interesting
to see that, for the case of $g_1\neq 0$ and $g_2\neq 0$, we only have $[H,\Pi]=0$, which means that
the system only has the parity symmetry. The operators defined by
$J^p=\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^p$ with $p=1,2,3$, satisfy the usual angular momentum
commutation relations. The Hilbert space corresponding to the atomic states can be generated
by the basis $\{|j\,m\rangle\}$ with $j=N/2$ and $m=-j,-j+1,...,j-1,j$; each basis state satisfies
$J^3|j\,m\rangle =m|j\,m\rangle$ and ${\bf J}^2|j\,m\rangle =j(j+1)|j\,m\rangle$. The Hilbert space
in which the photon states are defined can be generated by the basis $\{|n\rangle\}$, with elements satisfying
$b^{\dagger}b|n\rangle =n|n\rangle$, where $n$ is the number of
photons. We can now construct a basis for the total system as the tensor product of the bases
introduced above, i.e., the set $\{|n\rangle \otimes |j\,m\rangle\}$. The symmetries mentioned above
are related to conserved quantities. In the case of $g_1\neq 0$ and $g_2=0$, with $[H,N]=0$, the
excitation number of the system, $n+m$, is conserved.
This means that a state $|n\rangle \otimes |j\,m\rangle$ evolves
only into states $|n'\rangle \otimes |j\,m'\rangle$ with $n'+m'=n+m$. In a similar fashion, for the case
of $g_1=0$ and $g_2\neq 0$, with $[H,N_-]=0$, the difference of excitation numbers, $n-m$, is conserved.
When $g_1\neq 0$ and $g_2\neq 0$, for which $[H,\Pi]=0$, the value $e^{i\,\pi\, (n+m)}$ is conserved. This means that
a state $|n\rangle \otimes |j\,m\rangle$ evolves only into states
$|n'\rangle \otimes |j\,m'\rangle$ with $n+m$ and $n'+m'$ both even or both odd.
In all the cases mentioned, the phase transition is related to the spontaneous breaking of the respective
symmetry. In the further analysis we shall see that the symmetry associated with the commutation relation
$[H,\Pi]=0$ is discrete, while the symmetries associated with the commutation relations $[H,N]=0$
and $[H,N_-]=0$ are continuous. In the cases of continuous symmetry breaking the Goldstone theorem holds,
with the appearance of a zero energy value in the phase with the broken symmetry.
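As an explicit check of these symmetries (a standard computation, included
here for clarity), consider the unitary operator $U(\theta)=e^{i\,\theta\,N}$,
with $N$ given by Eq. (\ref{Nexc1}). From $[N,b]=-b$ and
$[N,\sigma_{(j)}^{\pm}]=\pm\,\sigma_{(j)}^{\pm}$ one obtains
\begin{equation}
U(\theta)\,b\,U(\theta)^{\dagger}=e^{-i\,\theta}\,b\,,\qquad
U(\theta)\,\sigma_{(j)}^{\pm}\,U(\theta)^{\dagger}=e^{\pm i\,\theta}\,\sigma_{(j)}^{\pm}\,,
\end{equation}
so the rotating terms $b\,\sigma_{(j)}^{+}$ and $b^{\dagger}\sigma_{(j)}^{-}$
of Eq. (\ref{fullDHamil}) are invariant for every $\theta$, while the
counter-rotating terms acquire factors $e^{\pm 2\,i\,\theta}$ and are
invariant only for $\theta=\pi$, i.e. under the parity operator $\Pi$ of
Eq. (\ref{parity1}). An analogous computation with $N_-$ in place of $N$
exhibits the continuous symmetry of the case $g_1=0$ and $g_2\neq 0$.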
\section{The fermion full Dicke model}
\quad $\,\,$
Let us define the fermion full Dicke model. For this purpose, let us define the raising
and lowering Fermi operators $\alpha^{\dagger}_{i}$, $\alpha_{i}$,
$\beta^{\dagger}_{i}$ and $\beta_{i}$, which satisfy the
anti-commutation relations
$\alpha_{i}\alpha^{\dagger}_{j}+\alpha^{\dagger}_{j}\alpha_{i}
=\delta_{ij}$ and
$\beta_{i}\beta^{\dagger}_{j}+\beta^{\dagger}_{j}\beta_{i}
=\delta_{ij}$. In this analysis, we use a representation of
the operators $\sigma_{(i)}^z$, $\sigma_{(i)}^+$ and $\sigma_{(i)}^-$ by the following bilinear
combinations of Fermi operators, $\alpha^{\dagger}_{i}\alpha_{i}
-\beta^{\dagger}_{i}\beta_{i}$, $\alpha^{\dagger}_{i}\beta_{i}$
and $\beta^{\dagger}_{i}\alpha_{i}$; the correspondence is given by
\begin{equation}
\sigma_{(i)}^{z}\longrightarrow \alpha_{i}^{\dagger}\alpha_{i}
-\beta_{i}^{\dagger}\beta_{i}\, , \label{34}
\end{equation}
\begin{equation}
\sigma_{(i)}^{+}\longrightarrow \alpha_{i}^{\dagger}\beta_{i}\, ,
\label{35}
\end{equation}
and
\begin{equation}
\sigma_{(i)}^{-}\longrightarrow \beta_{i}^{\dagger}\alpha_{i}\, .
\label{36}
\end{equation}
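As a consistency check (a short computation not spelled out in the text),
the anti-commutation relations give
\begin{equation}
\Bigl[\alpha_{i}^{\dagger}\beta_{i}\,,\,\beta_{i}^{\dagger}\alpha_{i}\Bigr]=
\alpha_{i}^{\dagger}\alpha_{i}-\beta_{i}^{\dagger}\beta_{i}\,,
\end{equation}
so the bilinears of Eq. (\ref{34}), Eq. (\ref{35}) and Eq. (\ref{36})
reproduce the spin relation $[\sigma_{(i)}^{+},\sigma_{(i)}^{-}]=\sigma_{(i)}^{z}$.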
Using the representation given in Eq. (\ref{34}), Eq. (\ref{35}) and Eq. (\ref{36})
in the full Dicke Hamiltonian of Eq. (\ref{fullDHamil}), we define
the Hamiltonian of the fermion full Dicke model $H_F$, namely
\begin{eqnarray}
H_F=\omega_0\; b^{\dagger}b+ \frac{\Omega}{2}\sum_{i=1}^N
\Bigl(\alpha_i^{\dagger}\alpha_i -\beta_i^{\dagger}\beta_i\Bigr)
+\frac{g_1}{\sqrt{N}}\sum_{i=1}^N
\Bigl(b\,\alpha_i^{\dagger}\beta_i \,+\, b^{\dagger}\,
\beta_i^{\dagger}\alpha_i \Bigr)+\,
\frac{g_2}{\sqrt{N}}\sum_{i=1}^N
\Bigl(b^{\dagger}\,\alpha_i^{\dagger}\beta_i \,+\, b\,
\beta_i^{\dagger}\alpha_i \Bigr)\, . \label{37}
\end{eqnarray}
We are interested in studying thermodynamic properties of the system, therefore we must find the partition
function $Z$. It is important to note that the Hamiltonians $H$ and $H_F$ are defined on different spaces.
Each operator $\sigma_i^{\alpha}$ appearing in the Hamiltonian $H$ acts on a two-dimensional Hilbert space,
whereas the Fermi operators $\alpha^{\dagger}_{i}$, $\alpha_{i}$,
$\beta^{\dagger}_{i}$ and $\beta_{i}$ appearing in the Hamiltonian
$H_F$ act on a four-dimensional Fock space. The following property relates the
partition function of the full Dicke model with the partition function of the fermion full Dicke model:
\begin{eqnarray}
Z=Tr\Bigl(\exp(-\beta\,H)\Bigr)=i^N\,Tr\left(\exp\left(-\beta\,H_F-\frac{i\pi}{2}\,N_F\right)\right)\,.
\label{partitionsfunctions}
\end{eqnarray}
In this last relation $H$ is given by the Eq. (\ref{fullDHamil}), $H_F$ is given by Eq. (\ref{37})
and the operator $N_F$ is defined by
\begin{eqnarray}
N_F=\sum_{i=1}^{N}(\alpha_{i}^{\dagger}\alpha_{i}+\beta_{i}^{\dagger}\beta_{i})\,.
\end{eqnarray}
The traces in Eq. (\ref{partitionsfunctions}) are taken over the respective space of each Hamiltonian.
The relation given by Eq. (\ref{partitionsfunctions}) lets us express the partition function of the
full Dicke model $Z$ using the fermion full Dicke Hamiltonian given by Eq. (\ref{37}).
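Relation (\ref{partitionsfunctions}) can be checked numerically for small systems. The following Python sketch is a minimal illustration for $N=1$; it assumes that the full Dicke Hamiltonian $H$ of Eq. (\ref{fullDHamil}) has the standard spin--boson form obtained by replacing the fermionic bilinears of Eq. (\ref{37}) with the corresponding spin operators, and the boson-space truncation \texttt{n\_max} and the parameter values are illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

kron = np.kron
n_max = 12                                     # boson Fock-space truncation
b  = np.diag(np.sqrt(np.arange(1, n_max)), 1)  # boson annihilation operator
bd = b.T
Ib, Is, If = np.eye(n_max), np.eye(2), np.eye(4)

# spin-1/2 operators on the two-dimensional atomic space
sz, sp = np.diag([1., -1.]), np.array([[0., 1.], [0., 0.]])
sm = sp.T
# Fermi operators on the four-dimensional Fock space of the single site
a, Z = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.])
al, be = kron(a, np.eye(2)), kron(Z, a)

omega0, Omega, g1, g2, beta_T = 1.0, 0.8, 0.5, 0.3, 2.0   # illustrative values

# spin-boson Hamiltonian H for a single atom (standard full Dicke form)
H  = omega0 * kron(bd @ b, Is) + 0.5 * Omega * kron(Ib, sz) \
   + g1 * (kron(b, sp) + kron(bd, sm)) + g2 * (kron(bd, sp) + kron(b, sm))
# fermionic Hamiltonian of Eq. (37) for N = 1, and the operator N_F
HF = omega0 * kron(bd @ b, If) \
   + 0.5 * Omega * kron(Ib, al.T @ al - be.T @ be) \
   + g1 * (kron(b, al.T @ be) + kron(bd, be.T @ al)) \
   + g2 * (kron(bd, al.T @ be) + kron(b, be.T @ al))
NF = kron(Ib, al.T @ al + be.T @ be)

lhs = np.trace(expm(-beta_T * H))
rhs = 1j * np.trace(expm(-beta_T * HF - 0.5j * np.pi * NF))
print(np.allclose(lhs, rhs))   # True
\end{verbatim}
The check works because the factor $e^{-i\pi N_F/2}$ makes the contributions of the unphysical fermionic states with site occupation $0$ and $2$ cancel pairwise, while each physical, singly occupied site contributes a factor $-i$ that is compensated by the overall $i^N$.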
\section{The partition function with path integral approach}
In this section we perform calculations in order to obtain an asymptotic expression for the partition function $Z$
of the full Dicke model in the limit $N\rightarrow\infty$. For this purpose we use the path integral approach
and functional methods. Let us define the Euclidean action $S$ of the full Dicke model in the following form
\begin{equation}
S=\int_0^{\beta} d\tau \left(b^*(\tau)\,\partial_{\tau}b(\tau)+ \sum_{i=1}^{N}
\Bigl(\alpha^*_i(\tau)\,\partial_{\tau}\alpha_i(\tau)
+\beta^*_i (\tau)\,\partial_{\tau}\beta_i(\tau)\Bigr)\right) -\int_0^{\beta}d\tau H_{F}(\tau)\,,
\label{66}
\end{equation}
where the Hamiltonian $H_{F}(\tau)$ of the fermion full
Dicke model is given by
\begin{eqnarray}
H_{F}(\tau)\,=\,\omega_{0}\,b^{\,*}(\tau)\,b(\tau)\,+
\,\frac{\Omega}{2}\,\displaystyle\sum_{i\,=\,1}^{N}\,
\biggl(\alpha^{\,*}_{\,i}(\tau)\,\alpha_{\,i}(\tau)\,-
\,\beta^{\,*}_{\,i}(\tau)\beta_{\,i}(\tau)\biggr)\,+
\nonumber\\
+\,\frac{g_{\,1}}{\sqrt{N}}\,\displaystyle\sum_{i\,=\,1}^{N}\,
\biggl(\alpha^{\,*}_{\,i}(\tau)\,\beta_{\,i}(\tau)\,b(\tau)\,+
\alpha_{\,i}(\tau)\,\beta^{\,*}_{\,i}(\tau)\,b^{\,*}(\tau)\,\biggr)\,+
\nonumber\\
+\,\frac{g_{\,2}}{\sqrt{N}}\,\displaystyle\sum_{i\,=\,1}^{N}\,
\biggl(\alpha_{\,i}(\tau)\,\beta^{\,*}_{\,i}(\tau)\,b(\tau)\,+
\,\alpha^{\,*}_{\,i}(\tau)\,\beta_{\,i}(\tau)\,b^{\,*}(\tau)\biggr).
\label{66a}
\end{eqnarray}
Let us define the formal quotient of the partition function of the full Dicke
model and the partition function of the free Dicke model.
We are therefore interested in calculating the following quantity
\begin{equation}
\frac{Z}{Z_0}=\frac{\int [d\eta]\,\exp{\left(\,S-\frac{i\pi}{2\beta}\int_0^{\beta}n(\tau)d\tau\right)}}{\int
[d\eta]\,\exp{\left(\,S_{0}-\frac{i\pi}{2\beta}\int_0^{\beta}n(\tau)d\tau\right)}}\, ,
\label{65}
\end{equation}
where the function $n(\tau)$ is defined by
\begin{eqnarray}
n(\tau)=\sum_{i=1}^{N}
\Bigl(\alpha^{\,*}_i(\tau)\,\alpha_i(\tau)+\beta^{\,*}_i(\tau)\beta_i(\tau)\Bigr)\,,
\end{eqnarray}
$S=S(b,b^*,\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})$
is the Euclidean action of the full Dicke model
given by Eq. (\ref{66}),
$S_0=S_{0}(b,b^*,\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})$
is the free Euclidean action for the free single bosonic mode and
the free atoms, i.e., the expression for the complete action $S$
with $g_1=g_2=0$, and finally $[d\eta]$ is the functional
measure.
The functional integrals involved in Eq. (\ref{65}) are taken
with respect to the complex functions
$b^*(\tau)$ and $b(\tau)$ and the Fermi fields
$\alpha_i^*(\tau)$, $\alpha_i(\tau)$, $\beta_i^*(\tau)$ and
$\beta_i(\tau)$. Since we are using thermal equilibrium boundary
conditions in the imaginary time formalism, the integration
variables in Eq. (\ref{65}) obey periodic boundary conditions for
the Bose field, i.e., $b(\beta)=b(0)$, and anti-periodic boundary
conditions for the Fermi fields, i.e., $\alpha_i(\beta)=-\alpha_i(0)$
and $ \beta_i(\beta)=-\beta_i(0)$.
In section 2, we analysed the symmetry of the model by studying the commutation
relations between the Hamiltonian given by Eq. (\ref{fullDHamil}) and
the operators defining the symmetry.
Now we are able to analyse the symmetry of the model by studying the invariance of the action
given by Eq. (\ref{66}) under symmetry transformations. To this end, let us introduce
the following field transformation
\begin{eqnarray}
\begin{array}{ccc}
b(\tau)\rightarrow\,e^{i\,\gamma}\,b(\tau)\,,\;\; & \alpha(\tau)\rightarrow\,e^{i\,\theta}\,\alpha(\tau)\,,\;\; &
\beta(\tau)\rightarrow\,e^{i\,\phi}\,\beta(\tau)\,,\\
\;\;\;\;b^*(\tau)\rightarrow\,e^{-i\,\gamma}\,b^*(\tau)\,,\;\; & \;\;\;\;\alpha^*(\tau)\rightarrow\,e^{-i\,\theta}\,\alpha^*(\tau)\,,\;\; &
\;\;\;\;\beta^*(\tau)\rightarrow\,e^{-i\,\phi}\,\beta^*(\tau)\,.
\label{symmtranf1}
\end{array}
\end{eqnarray}
In the case of $g_1\neq 0$ and $g_2=0$, corresponding to the rotating wave approximation, the respective
action is invariant under the transformation given by Eq. (\ref{symmtranf1}) with $\gamma=\theta-\phi$.
In the case of $g_1=0$ and $g_2\neq 0$, the corresponding action is invariant under the transformation given by
Eq. (\ref{symmtranf1}) with $\gamma=\phi-\theta$. Finally, in the case of $g_1\neq 0$ and $g_2\neq 0$, the
corresponding action is invariant under the transformation given by Eq. (\ref{symmtranf1}) with $\gamma=\theta-\phi=0$ or
$\gamma=\theta-\phi=\pi$. In the first two cases, $g_1\neq 0$ and $g_2=0$, and
$g_1=0$ and $g_2\neq 0$, the respective actions are invariant under a
continuous transformation, $U(1)$, of the boson field $b(\tau)$. In the case of $g_1\neq 0$ and $g_2\neq 0$,
the action is invariant under discrete transformations, $Z_2$, of the boson field $b(\tau)$, i.e.,
$b(\tau)\rightarrow b(\tau)$ and $b(\tau)\rightarrow -b(\tau)$.
Continuing with the calculation of the quantity $\frac{Z}{Z_0}$ given by Eq. (\ref{65}),
let us apply the following transformation
\begin{eqnarray}
\begin{array}{cc}
\alpha_i(\tau)\rightarrow e^{\frac{i\pi}{2\beta}\tau}\,\alpha_i(\tau)\,,\;\;\; &
\alpha_i^*(\tau)\rightarrow e^{-\,\frac{i\pi}{2\beta}\tau}\,\alpha_i^*(\tau)\,,\\
\beta_i(\tau)\rightarrow e^{\frac{i\pi}{2\beta}\tau}\,\beta_i(\tau)\,,\;\;\; &
\beta_i^*(\tau)\rightarrow e^{-\,\frac{i\pi}{2\beta}\tau}\,\beta_i^*(\tau)\,.
\end{array}
\label{trans2}
\end{eqnarray}
With this last transformation, the term $n(\tau)$ appearing in Eq. (\ref{65}) can be dropped. Therefore, applying
the transformation given by Eq. (\ref{trans2}) to the expression given by Eq. (\ref{65}), we obtain
\begin{equation}
\frac{Z}{Z_0}=\frac{\int [d\eta]\,e^S}{\int[d\eta]\,e^{S_0}}\,.
\label{65n}
\end{equation}
In Eq. (\ref{65n}), the Bose field obeys periodic boundary conditions, i.e., $b(\beta)=b(0)$,
and the Fermi fields obey the following boundary conditions:
\begin{eqnarray}
\begin{array}{cc}
\alpha_i(\beta)=i\,\alpha_i(0)\,,\;\;\;&
\alpha_i^*(\beta)=-\,i\,\alpha_i^*(0)\,,\\
\beta_i(\beta)=i\,\beta_i(0)\,,\;\;\;&
\beta_i^*(\beta)=-\,i\,\beta_i^*(0)\,.
\end{array}
\label{nbound}
\end{eqnarray}
The free action for the single mode bosonic field $S_{B0}(b)$ is
given by
\begin{equation}
S_{B0}(b) = \int_{0}^{\beta} d\tau\; b^{*}(\tau)\,\Bigl(
\partial_{\tau}-\omega_{0}\Bigr)\,b(\tau)\, . \label{67}
\end{equation}
We can then write the action $S$ of the full fermion Dicke
model, given by Eq. (\ref{66}), as the free action for the
single mode bosonic field $S_{B0}(b)$ defined by Eq. (\ref{67}) plus
an additional term that can be expressed in matrix form.
Therefore the total action $S$ can be written as
\begin{equation}
S = S_{B0}(b) + \int_{0}^{\beta} d\tau\,\sum_{i=1}^{N}\,
\rho^{\dagger}_{i}(\tau)\, M(b^{*},b)\,\rho_{i}(\tau)\, ,
\label{68}
\end{equation}
where the column matrix $\rho_{\,i}(\tau)$ is given in terms of
the Fermi fields as
\begin{eqnarray}
\rho_{\,i}(\tau) &=& \left(
\begin{array}{c}
\beta_{\,i}(\tau) \\
\alpha_{\,i}(\tau)
\end{array}
\right),
\nonumber\\
\rho^{\dagger}_{\,i}(\tau) &=& \left(
\begin{array}{cc}
\beta^{*}_{\,i}(\tau) & \alpha^{*}_{\,i}(\tau)
\end{array}
\right) \label{69a}
\end{eqnarray}
and the matrix $M(b^{*},b)$ is given by
\begin{equation}
M(b^{*},b) = \left( \begin{array}{cc}
L & (N)^{-1/2}\,\biggl(g_{1}\,b^{*}\,(\tau) + g_{2}\,b\,(\tau)\biggr)\\
(N)^{-1/2}\,\biggl(g_{1}\,b\,(\tau) + g_{2}\,b^{*}\,(\tau)\biggr)
&
L_*
\end{array} \right)\,,
\label{69b}
\end{equation}
where the operators $L$ and $L_*$ are defined by $\partial_{\tau} + \Omega/2$ and
$\partial_{\tau} - \Omega/2$, respectively.
Substituting the action $S$ given by Eq. (\ref{68}) into the functional integral form of the partition
function given by Eq. (\ref{65n}), we see that this functional integral is Gaussian in the Fermi fields.
Integrating over these Fermi fields, we obtain
\begin{eqnarray}
Z=\int[d\eta(b)]\,e^{S_{B0}} \Bigl(\det{M(b^{*},b)}\Bigr)^N\,,
\label{Zop}
\end{eqnarray}
where $[d\eta(b)]$ is the functional measure for the bosonic field alone.
With the help of the following property for matrices with operator components
\begin{eqnarray}
\det\left(\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)=\det\left(AD-ACA^{-1}B\right)\,,
\label{mazprop}
\end{eqnarray}
and determinant properties, we have that
\begin{eqnarray}
\det{M(b^{*},b)}=\det{\Bigl(LL_*\Bigr)}\,\det{\left(1-N^{-1}L_*^{-1}
\Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\right)}\,.
\label{Mop1}
\end{eqnarray}
Substituting Eq. (\ref{Zop}) and Eq. (\ref{Mop1}) in Eq. (\ref{65n}), we have that
\begin{eqnarray}
\frac{Z}{Z_0}=\frac{Z_A}{\int[d\eta(b)]\,e^{S_{B0}}}\,,
\label{ZA0}
\end{eqnarray}
with $Z_A$ defined by
\begin{eqnarray}
Z_A=\int[d\eta(b)]\exp{\left(S_{B0}+N\,tr\ln\biggl(1-N^{-1}L_*^{-1}
\Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\biggr)\right)}\,.
\label{ZA1}
\end{eqnarray}
We are interested in the asymptotic behaviour of the quotient $\frac{Z}{Z_0}$ in the
thermodynamic limit, i.e., $N\rightarrow\infty$. With this intention, we analyse
the asymptotic behaviour of the expression $Z_A$ defined above. First, let us scale the bosonic
field by $b\rightarrow\sqrt{N}\,b$ and $b^*\rightarrow\sqrt{N}\,b^*$, so that we get
\begin{eqnarray}
Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi(b^*,b)\right)}\,,
\label{ZA2}
\end{eqnarray}
with the function $\Phi(b^*,b)$ defined by
\begin{eqnarray}
\Phi(b^*,b)=S_{B0}+tr\ln\biggl(1-L_*^{-1}
\Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\biggr)\,.
\label{fi1}
\end{eqnarray}
The term $A(N)$ in Eq. (\ref{ZA2}) comes from the transformation of the functional measure $[d\eta(b)]$ under
the scaling of the bosonic field by $b\rightarrow\sqrt{N}\,b$ and $b^*\rightarrow\sqrt{N}\,b^*$. The asymptotic behaviour
of the functional integral appearing in Eq. (\ref{ZA2})
when $N\rightarrow\infty$ can be obtained by the method of steepest descent \cite{amit}. In this method,
we expand the function $\Phi(b^*,b)$ around a point $b(\tau)=b_0(\tau)$ and
$b^*(\tau)=b^*_0(\tau)$, which can be of two kinds: points that maximize $Re(\Phi(b^*,b))$
and saddle points.
We keep the first terms of the expansion in the functional integral, which are the leading
contributions to its value. The maximum points, or
saddle points, are found among the stationary points, which are solutions of the equations $\frac{\delta\,
\Phi(b^*,b)}{\delta\,b(\tau)}=0$ and $\frac{\delta\,\Phi(b^*,b)}{\delta\,b^*(\tau)}=0$.
For the full Dicke model, the stationary points are constant functions $b(\tau)=b_0$ and $b^*(\tau)=b^*_0$.
It is not difficult to show that for $\beta\leq\beta_c$ the stationary point
is given by $b_0=b_0^*=0$, which is a maximum point. The critical value $\beta_c$ is obtained by solving the following equation
\begin{eqnarray}
\frac{\omega_0\,\Omega}{(g_1+g_2)^2}=\tanh\left(\frac{\beta_c\,\Omega}{2}
\right)\,.
\label{tcrit}
\end{eqnarray}
A solution for $\beta_c$ of this last equation exists only in the
case $(g_1+g_2)^2>\omega_0\Omega$; under this condition the system undergoes a phase transition.
When $\beta<\beta_c$ we say that the system is in the normal phase.
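Since the hyperbolic tangent in Eq. (\ref{tcrit}) is monotonic, the critical value $\beta_c$ can be obtained directly. A minimal Python sketch, with illustrative parameter values:
\begin{verbatim}
import numpy as np

def beta_c(omega0, Omega, g1, g2):
    """Critical inverse temperature from Eq. (tcrit); returns None when
    (g1 + g2)**2 <= omega0*Omega and no phase transition occurs."""
    x = omega0 * Omega / (g1 + g2)**2
    if x >= 1.0:
        return None
    return 2.0 / Omega * np.arctanh(x)

print(beta_c(1.0, 0.8, 0.7, 0.4))   # couplings with (g1+g2)^2 > omega0*Omega
print(beta_c(1.0, 0.8, 0.3, 0.2))   # None: normal phase at all temperatures
\end{verbatim}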
For $\beta>\beta_c$ the stationary points $b(\tau)=b_0$ and $b^*(\tau)=b^*_0$ satisfy the following
equation
\begin{eqnarray}
\frac{\omega_0\,\Omega_{\Delta}}{(g_1+g_2)^2}=\tanh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,,
\label{tra1}
\end{eqnarray}
with $\Omega_{\Delta}$ defined by
\begin{eqnarray}
\Omega_{\Delta}=
\sqrt{\Omega^2+4\,(g_1+g_2)^2\,|b_0|^2}\,.
\label{omegadelta}
\end{eqnarray}
The phase transition occurs if a real solution with $|b_0|\neq 0$ of Eq. (\ref{tra1}) exists,
which is only possible when $(g_1+g_2)^2>\omega_0\,\Omega$ and $\beta>\beta_c$. In the case of $g_1\neq 0$ and
$g_2=0$, and also in the case of $g_1=0$ and $g_2\neq 0$, the maximum points form a continuous set of values given
by $b_0=\rho\,e^{i\,\phi}$ and $b^*_0=\rho\,e^{-i\,\phi}$ with $\phi\in [0,2\pi)$ and
$\rho=|b_0|$, where $|b_0|$ is defined by Eq. (\ref{tra1}). In the case of $g_1\neq 0$ and $g_2\neq 0$, we have
two maximum points, given by $b^*_0=b_0=\pm |b_0|$, with $|b_0|$ defined by
Eq. (\ref{tra1}). When $\beta>\beta_c$ we say that the system is in the superradiant phase.
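In practice, Eq. (\ref{tra1}) together with Eq. (\ref{omegadelta}) can be solved for $\Omega_{\Delta}$, and hence for $|b_0|$, by a one-dimensional root search. A minimal Python sketch, with illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def order_parameter(omega0, Omega, g1, g2, beta):
    """Solve Eqs. (tra1)/(omegadelta) for Omega_Delta and |b0| in the
    superradiant phase; assumes (g1+g2)**2 > omega0*Omega and beta > beta_c."""
    G2 = (g1 + g2)**2
    f = lambda od: np.tanh(0.5 * beta * od) - omega0 * od / G2
    od = brentq(f, Omega, G2 / omega0)   # the root is bracketed in this interval
    b0 = np.sqrt((od**2 - Omega**2) / (4.0 * G2))
    return od, b0

print(order_parameter(1.0, 0.8, 0.7, 0.4, beta=6.0))
\end{verbatim}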
Let us continue with the computation of the asymptotic behaviour of the functional integral appearing in Eq. (\ref{ZA2})
in the thermodynamic limit $N\rightarrow\infty$. In the following steps, we find this asymptotic behaviour
when there is only one maximum point, defined by $b_0=b^*_0$. The resulting expressions will be useful for the normal phase
of the full Dicke model, and also for the superradiant phase in the case of $g_1\neq 0$ and $g_2\neq 0$. We consider
the first two leading terms in the functional integral appearing in Eq. (\ref{ZA2}), coming from the expansion
of $\Phi(b^*,b)$ around the maximal value $b^*_0=b_0$; this expansion is given by
\begin{eqnarray}
\Phi(b^*,b)=\Phi(b^*_0,b_0)+\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\,
(b^*(\tau_1)-b^*_0\,,\,b(\tau_1)-b_0)\,M_{\Phi}
\left(\begin{array}{c}
b^*(\tau_2)-b^*_0\\
b(\tau_2)-b_0
\end{array}\right)\,,
\label{fi2}
\end{eqnarray}
where the matrix $M_{\Phi}$ is given by
\begin{eqnarray}
M_{\Phi}=\left(\begin{array}{cc}
\frac{\delta^2\Phi(b^*,b)}{\delta b^*(\tau_1)\,\delta b^*(\tau_2)}&\frac{\delta^2\Phi(b^*,b)}{\delta b^*(\tau_1)\,\delta b(\tau_2)}\\
\frac{\delta^2\Phi(b^*,b)}{\delta b(\tau_1)\,\delta b^*(\tau_2)}&\frac{\delta^2\Phi(b^*,b)}{\delta b(\tau_1)\,\delta b(\tau_2)}
\end{array}\right)
\Biggr|_{b^*=b=b_0}\,.
\label{Mfi}
\end{eqnarray}
Substituting this expansion given by Eq. (\ref{fi2}) in Eq. (\ref{ZA2}) we obtain
\begin{eqnarray}
Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(b)]\exp{\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\,
\Bigl(b^*(\tau_1)\,,\,b(\tau_1)\Bigr)\,M_{\Phi}
\left(\begin{array}{c}
b^*(\tau_2)\\
b(\tau_2)
\end{array}\right)\right)}\,,
\label{ZA3}
\end{eqnarray}
to obtain the last expression, we have applied the transformation $b(\tau)\rightarrow \Bigl(b(\tau)+b_0\Bigr)/\sqrt{N}$ and
$b^*(\tau)\rightarrow \Bigl(b^*(\tau)+b^*_0\Bigr)/\sqrt{N}$ in the functional integral involved. In order to simplify
the integration of the functional integral given by Eq. (\ref{ZA2}), let us use the following transformation
\begin{eqnarray}
c\,(\tau)&=&\alpha\,\Bigl(g_2\,b(\tau)+g_1\,b^*(\tau)\Bigr)\nonumber\\
c^*(\tau)&=&\alpha\,\Bigl(g_1\,b(\tau)+g_2\,b^*(\tau)\Bigr)\,,
\label{Trv1}
\end{eqnarray}
where the parameter $\alpha$ is defined by the equation $\alpha^2=(g_2^2-g_1^2)^{-1}$. It is worth mentioning that the Jacobian of this transformation
is $1$. Applying this transformation in Eq. (\ref{ZA2}), we obtain
\begin{eqnarray}
Z_A=A(N)\int[d\eta(c)]\exp{\left(N\,\Phi_I(c^*,c)\right)}\,,
\label{ZA4}
\end{eqnarray}
where the function $\Phi_I(c^*,c)$ is given by
\begin{eqnarray}
\Phi_I(c^*,c)&=&\alpha^2\int_0^{\beta}d\tau\,\Bigl(g_1\,c(\tau)-g_2\,c^*(\tau)\Bigr)\times\nonumber\\
&&\times\Bigl(\partial_{\tau}-\omega_0\Bigr)\,
\Bigl(g_1\,c^*(\tau)- g_2\,c(\tau)\Bigr)+tr\ln\biggl(1-\alpha^{-2}L_*^{-1}c^*L^{-1}c\biggr)\,.
\label{fi3}
\end{eqnarray}
The maximum point corresponds to $c^*_0=c_0=\alpha(g_1+g_2)b_0$, since the point $b^*_0=b_0$ is a maximum of
the function $Re(\Phi(b^*,b))$. Using the same expansion given in Eq. (\ref{fi2})
for $\Phi_I(c^*,c)$ and substituting in Eq. (\ref{ZA4}), we obtain
\begin{eqnarray}
Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\,
\Bigl(c^*(\tau_1)\,,\,c(\tau_1)\Bigr)\,M_{\Phi_I}
\left(\begin{array}{c}
c^*(\tau_2)\\
c(\tau_2)
\end{array}\right)\right)}\,,
\label{ZA5}
\end{eqnarray}
where we have used the identity $\Phi_I(c^*_0,c_0)=\Phi(b^*_0,b_0)$, and the matrix $M_{\Phi_I}$ is
defined by
\begin{eqnarray}
M_{\Phi_I}=\left(\begin{array}{cc}
\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c^*(\tau_2)}&\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c(\tau_2)}\\
\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c^*(\tau_2)}&\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c(\tau_2)}
\end{array}\right)
\Biggr|_{c^*=c=c_0}\,.
\label{MfiI}
\end{eqnarray}
At this point, it is convenient to use the Fourier representation of the field $c(\tau)$ in the functional integral
of Eq. (\ref{ZA5}). From the boundary conditions of the bosonic field $b(\tau)$ and from Eq. (\ref{Trv1}),
we deduce that $c(\tau)$ and $c^*(\tau)$ satisfy the periodic boundary conditions $c(\beta)=c(0)$ and $c^*(\beta)=c^*(0)$
respectively. Therefore, the Fourier representations of $c(\tau)$ and $c^*(\tau)$ are given by
\begin{eqnarray}
c(\tau)&=&\frac{1}{\sqrt{\beta}}\sum_{\omega}c(\omega)e^{i\omega\tau}\,,\nonumber\\
c^*(\tau)&=&\frac{1}{\sqrt{\beta}}\sum_{\omega}c^*(\omega)e^{-i\omega\tau}\,,
\label{fourier1}
\end{eqnarray}
where the parameter $\omega$ takes the values $2\pi\,n/\beta$, with $n$ ranging over all the integers. These values
correspond to the Matsubara frequencies for bosonic fields. Substituting this Fourier representation,
Eq. (\ref{fourier1}), in Eq. (\ref{ZA5}), we obtain
\begin{eqnarray}
Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\left(\frac{1}{2}\sum_{\omega_1\omega_2}
\Bigl(c^*(\omega_1)\,,\,c(\omega_1)\Bigr)\,\delta^2\Phi(\omega_1,\omega_2)
\left(\begin{array}{c}
c^*(\omega_2)\\
c(\omega_2)
\end{array}\right)\right)}\,,
\label{ZA6}
\end{eqnarray}
with $\delta^2\Phi(\omega_1,\omega_2)$ being defined by
\begin{eqnarray}
\delta^2\Phi(\omega_1,\omega_2)=\left(\begin{array}{cc}
\delta^2\Phi_{11}(\omega_1,\omega_2) & \delta^2\Phi_{12}(\omega_1,\omega_2)\\
\delta^2\Phi_{21}(\omega_1,\omega_2) & \delta^2\Phi_{22}(\omega_1,\omega_2)
\end{array}\right)\,,
\label{deltafi}
\end{eqnarray}
and each component of this matrix satisfies
\begin{eqnarray}
\delta^2\Phi_{11}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\,
e^{-i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c^*(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{-i\omega_2\tau_2}\,,\nonumber\\
\delta^2\Phi_{12}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\,
e^{-i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{i\omega_2\tau_2}\,,\nonumber\\
\delta^2\Phi_{21}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\,
e^{i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c^*(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{-i\omega_2\tau_2}\,,\nonumber\\
\delta^2\Phi_{22}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\,
e^{i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{i\omega_2\tau_2}\,.
\label{deltaficomp}
\end{eqnarray}
In this Fourier representation of the functional integral given by Eq. (\ref{ZA6}), the integration
measure $[d\eta(c)]$ takes the tractable form $\prod_{\omega}{dc^*(\omega)\,dc(\omega)}$.
Using the expression for $\Phi_I(c^*,c)$ given in Eq. (\ref{fi3}), we can calculate the matrix
$\delta^2\Phi(\omega_1,\omega_2)$ with components given by Eq. (\ref{deltaficomp}). Performing
these calculations we obtain
\begin{eqnarray}
\delta^2\Phi_{11}(\omega_1,\omega_2)&=&\delta^2\Phi_{12}(\omega_1,\omega_2)=
\delta_{\omega_1\,,\,-\omega_2}\,R(\omega_1)\,,\nonumber\\
\delta^2\Phi_{21}(\omega_1,\omega_2)&=&\delta^2\Phi_{22}(\omega_1,\omega_2)=
\delta_{\omega_1\,,\,\omega_2}\,S(\omega_1)\,,
\label{deltaficomp1}
\end{eqnarray}
where $\delta_{\omega_1\,,\,\omega_2}$ is the Kronecker delta and the functions $R(\omega)$
and $S(\omega)$ are given by
\begin{eqnarray}
R(\omega)&=&2\,\omega_0\,g_1\,g_2\,\alpha^2-\frac{(\,\Omega^2_{\Delta}-\Omega^2)\,\alpha^{-2}}
{2\,\Omega_{\Delta}(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\,,\nonumber\\
S(\omega)&=&i\,\omega\left(1-\frac{\Omega\,\alpha^{-2}}
{\Omega_{\Delta}(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\right)+\nonumber\\
&-&
\omega_0\,(\,g_1^2+g_2^2\,)\,\alpha^2+\frac{(\,\Omega^2_{\Delta}+\Omega^2)\,\alpha^{-2}}
{2\,\Omega_{\Delta}\,(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\,.
\label{RS}
\end{eqnarray}
The expression for $\Omega_{\Delta}$ is given by Eq. (\ref{omegadelta}).
Substituting the matrix $\delta^2\Phi(\omega_1,\omega_2)$, with components given by Eq. (\ref{deltaficomp1}), into
the functional integral appearing in $Z_A$, given by Eq. (\ref{ZA6}), we obtain
\begin{eqnarray}
Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\sum_{\omega}\left(\,
S(\omega)\,c(\omega)\,c^*(\omega)+\frac{1}{2}\,R(\omega)\,
\Bigl(\,c(\omega)\,c(-\omega)+c^*(\omega)\,c^*(-\omega)\,\Bigr)\right)}\,.
\label{ZA7}
\end{eqnarray}
Performing this Gaussian functional integral, we finally obtain that
\begin{eqnarray}
Z_A=e^{N\Phi(b^*_0,b_0)}\frac{2\,\pi\,i}{(S^2(0)-R^2(0))^{1/2}}\,\,
\prod_{\omega\geq 1}\frac{(\,2\,\pi\,i\,)^2}{\,S(\omega)\,S(-\omega)\,-\,R^2(\omega)}\,.
\label{ZA8}
\end{eqnarray}
In order to find the asymptotic behaviour of $\frac{Z}{Z_0}$ when $N\rightarrow\infty$, we must
calculate $\int{[d\eta(b)]\,e^{S_{B0}}}$ appearing in Eq. (\ref{ZA0}). Using the free bosonic
action $S_{B0}$ given by Eq. (\ref{67}), we obtain that
\begin{eqnarray}
\int{[d\eta(b)]\,e^{S_{B0}}}=\prod_{\omega}\,\frac{2\,\pi\,i}{\omega_0-i\,\omega}\,.
\label{sb0}
\end{eqnarray}
Substituting Eq. (\ref{ZA8}) and Eq. (\ref{sb0}) in Eq. (\ref{ZA0}) we have that
\begin{eqnarray}
\frac{Z}{Z_0}=e^{N\Phi(b^*_0,b_0)}\frac{1}{(H(0))^{1/2}}\,\,
\prod_{\omega\geq 1}\,\frac{1}{\,H(\omega)}\,,
\label{Zfin}
\end{eqnarray}
where the function $H(\omega)$ is given by
\begin{eqnarray}
H(\omega)=\frac{S(\omega)\,S(-\omega)-R^2(\omega)}{\omega^2+\omega^2_0}\,.
\label{H1}
\end{eqnarray}
Eq. (\ref{RS}) gives the expressions for the functions $S(\omega)$ and $R(\omega)$; substituting
these functions into Eq. (\ref{H1}), we obtain
\begin{eqnarray}
&&H(\omega)=\,1\,+\,\frac{(\,g_1^2-g_2^2\,)^2\,\Omega^2}{\Omega_{\Delta}^2\,(\omega^2+\Omega_{\Delta}^2)\,
(\omega^2+\omega^2_0)}\tanh^2\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,+\nonumber\\
&+&\frac{2\,(g_1^2-g_2^2\,)\,\Omega\,\omega^2\,-\,(\,g_1^2+g_2^2\,)\,(\Omega^2+\Omega_{\Delta}^2)\,\omega_0\,+\,
2\,g_1\,g_2\,(\Omega_{\Delta}^2-\Omega^2)\,\omega_0}{\Omega_{\Delta}\,(\omega^2+\Omega_{\Delta}^2)\,
(\omega^2+\omega^2_0)}\, \tanh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,.
\label{H2}
\end{eqnarray}
The expression given by Eq. (\ref{Zfin}), with $H(\omega)$ given by Eq. (\ref{H2}), provides a
valid expression for the quotient $\frac{Z}{Z_0}$ in the normal phase, and also in the superradiant
phase for the particular case of $g_1\neq 0$ and $g_2\neq 0$.
\section{Normal phase: $\beta <\beta_c$}
In the normal phase, $\beta <\beta_c$, from Eq. (\ref{tra1}) we have that $b_0=b^*_0=0$, i.e.
$\Omega_{\Delta}=\Omega$. Substituting this equality in Eq. (\ref{Zfin}) and Eq. (\ref{H2}),
we obtain
\begin{eqnarray}
\frac{Z}{Z_0}=\frac{1}{(H_I(0))^{1/2}}\,\,
\prod_{\omega\geq 1}\,\frac{1}{\,H_I(\omega)}\,,
\label{Zfinb0}
\end{eqnarray}
where
\begin{eqnarray}
H_I(\omega)&=&\,1\,+\,\frac{(\,g_1^2-g_2^2\,)^2}{(\omega^2+\Omega^2)\,
(\omega^2+\omega^2_0)}\tanh^2\left(\frac{\beta\,\Omega}{2}\right)\,+\nonumber\\
&+&\frac{2\,(g_1^2-g_2^2\,)\,\omega^2\,-\,2\,(\,g_1^2+g_2^2\,)\,\Omega\,\omega_0}
{(\omega^2+\Omega^2)\,
(\omega^2+\omega^2_0)}\, \tanh\left(\frac{\beta\,\Omega}{2}\right)\,.
\label{H21}
\end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_I(\omega)$, we
solve the equation $H_I(-i\,E)=0$, which corresponds to the collective spectrum
equation. Solving the equation, we have that
\begin{eqnarray}
2\,E^2&=&\omega_0^2+\Omega^2\,+\,2\,(g_1^2-g_2^2)\,\tanh\left(\frac{\beta\,\Omega}{2}\right)+\nonumber\\
&\pm&\left(\Bigl(\omega_0^2-\Omega^2\Bigr)^2+4\,\Bigl(g_1^2\,(\omega_0+\Omega)^2-g_2^2\,(\omega_0-\Omega)^2\Bigr)
\,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,.
\label{Espb0}
\end{eqnarray}
It is interesting to see that, when $\beta =\beta_c$, we find the following roots \cite{aparicio1}
\begin{equation}
E_{\,1}\,=\,0
\label{106}
\end{equation}
and
\begin{equation}
E_{\,2}\,=\,\Biggl(\,\frac{g_{\,1}\,(\Omega\,+\,\omega_{\,0})^{\,2}\,+\,
g_{\,2}\,(\Omega\,-\,\omega_{\,0})^{\,2}}{(g_{\,1}\,+\,g_{\,2})}\,\Biggr)^{\,1/2}\,.
\label{107}
\end{equation}
From Eq. (\ref{Espb0}) we can obtain the collective spectrum
for the two following known cases. The first one, when $g_2=0$, corresponds to the Dicke
model in the rotating wave approximation \cite{popov1}. Here we have
\begin{eqnarray}
2\,E=\omega_0+\Omega\,\pm\,\left(\Bigl(\omega_0-\Omega\Bigr)^2+4\,g_1^2
\,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,.
\label{Espb0g20}
\end{eqnarray}
The second one corresponds to $g_1=g_2=g$. Here we have that
\begin{eqnarray}
2\,E^2=\omega_0^2+\Omega^2\,\pm\,\left(\Bigl(\omega_0^2-\Omega^2\Bigr)^2+16\,g^2\,
\omega_0\,\Omega\,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,.
\label{Espb0g1g2}
\end{eqnarray}
In the case of the quantum phase transition, we are in the particular situation where $\beta=\infty$. Here,
the collective spectrum corresponds to Eq. (\ref{Espb0g1g2}) with
$\tanh\left(\beta\,\Omega/{2}\right)=1$ \cite{emary2}.
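A short numerical check of Eq. (\ref{Espb0}) is instructive: evaluating the two branches at $\beta=\beta_c$ reproduces the roots given by Eq. (\ref{106}) and Eq. (\ref{107}). A Python sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np

def normal_phase_spectrum_sq(omega0, Omega, g1, g2, beta):
    """Squared collective excitation energies in the normal phase, Eq. (Espb0)."""
    t = np.tanh(0.5 * beta * Omega)
    A = omega0**2 + Omega**2 + 2.0 * (g1**2 - g2**2) * t
    B = np.sqrt((omega0**2 - Omega**2)**2
                + 4.0 * (g1**2 * (omega0 + Omega)**2
                         - g2**2 * (omega0 - Omega)**2) * t)
    return 0.5 * (A - B), 0.5 * (A + B)

omega0, Omega, g1, g2 = 1.0, 0.8, 0.7, 0.4
beta_c = 2.0 / Omega * np.arctanh(omega0 * Omega / (g1 + g2)**2)
Em2, Ep2 = normal_phase_spectrum_sq(omega0, Omega, g1, g2, beta_c)
E2_sq = (g1 * (Omega + omega0)**2 + g2 * (Omega - omega0)**2) / (g1 + g2)
print(np.isclose(Em2, 0.0))     # soft mode E_1 = 0 at beta = beta_c, Eq. (106)
print(np.isclose(Ep2, E2_sq))   # second branch matches the square of Eq. (107)
\end{verbatim}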
\section{Superradiant phase: $\beta >\beta_c$}
\subsection{Case of $g_1\neq 0$ and $g_2\neq 0$:}
In the superradiant phase, $\beta >\beta_c$, in the case of $g_1\neq 0$ and $g_2\neq 0$, we have two maximum points.
Both maximum points contribute equally to the partition function; therefore $\frac{Z}{Z_0}$, given by Eq. (\ref{Zfin}),
must be multiplied by a factor $2$. In this case $b_0\neq 0$, i.e. $\Omega_{\Delta}\neq\Omega$. From
Eq. (\ref{Zfin}), the expression for $\frac{Z}{Z_0}$ is given by
\begin{eqnarray}
\frac{Z}{Z_0}=2\,e^{N\phi}\frac{1}{(H_{II}(0))^{1/2}}\,\,
\prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,,
\label{Zfinb1}
\end{eqnarray}
where the factor $\phi$ is defined by
\begin{eqnarray}
\phi\,=\,-\,\frac{\omega_0\,\beta\,(\Omega_{\Delta}^2-\Omega^2)}{4\,(g_1+g_2)^2}+
\ln{\left(\frac{\cosh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}
{\cosh\left(\frac{\beta\,\Omega}{2}\right)}\right)}\,.
\label{phi1}
\end{eqnarray}
The function $H_{II}(\omega)$ has the form
\begin{eqnarray}
&&H_{II}(\omega)=\frac{1}{(\omega^2+\Omega_{\Delta}^2)\,(\omega^2+\omega^2_0)}\,\times\nonumber\\
&&\times\left[\,\omega^4+\left(\omega_0^2+\Omega_{\Delta}^2+\frac{2\,(g_1^2-g_2^2)}{(g_1+g_2)^2}\,\omega_0\,\Omega\right)\,
\omega^2+\frac{4\,g_1\,g_2}{(g_1+g_2)^2}\,\omega_0^2\,(\Omega_{\Delta}^2-\Omega^2)\right]\,,
\label{H22}
\end{eqnarray}
and setting $\omega=0$ in Eq. (\ref{H22}), we obtain the expression for $H_{II}(0)$, so that
\begin{eqnarray}
H_{II}(0)=\frac{4\,g_1\,g_2\,(\Omega_{\Delta}^2-\Omega^2)}{(\,g_1+g_2\,)^2\,\Omega_{\Delta}^2}\,.
\label{H022}
\end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$ given by Eq. (\ref{H22}),
we solve the equation $H_{II}(-i\,E)=0$. The set of solutions $E$ constitutes the collective spectrum in the
superradiant phase for the case of $g_1\neq 0$ and $g_2\neq 0$. Solving the equation, we have
\begin{eqnarray}
2\,E^2&=&\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\frac{(\,g_1^2-g_2^2\,)}{(\,g_1+g_2)^2}\,\Omega\,\omega_0\,+\nonumber\\
&\pm&\left[\left(\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\frac{(\,g_1^2-g_2^2\,)}{(\,g_1+g_2)^2}\,\Omega\,\omega_0\right)^2
-\frac{16\,g_1\,g_2}{(\,g_1+g_2)^2}\,\omega_0^2\,\Bigl(\Omega_{\Delta}^2-\Omega^2\Bigr)\right]^{1/2}\,.
\label{Espb1}
\end{eqnarray}
For the particular case of $g_1=g_2=g$, the collective spectrum takes the form
\begin{eqnarray}
2\,E^2\,=\,\omega_0^2+\Omega_{\Delta}^2\,\pm\,\left((\omega_0^2-\Omega_{\Delta}^2)^2+4\,\omega_0^2\,\Omega^2\right)^{1/2}\,.
\label{Espb1g}
\end{eqnarray}
In the limit of zero temperature, $\beta\rightarrow\infty$, from Eq. (\ref{tra1}) we have that
$\Omega_{\Delta}=4\,g^2/\omega_0$. Consequently, at zero temperature we obtain that \cite{emary2}
\begin{eqnarray}
2\,E^2\,=\,\omega_0^2+\,\frac{16\,g^4}{\omega_0^2}\,\pm\,
\left[\left(\omega_0^2-\frac{16\,g^4}{\omega_0^2}\right)^2+4\,\omega_0^2\,\Omega^2\right]^{1/2}\,.
\label{Espb1g2}
\end{eqnarray}
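The reduction of Eq. (\ref{Espb1}) to Eq. (\ref{Espb1g2}) can also be checked numerically by solving Eq. (\ref{tra1}) at low temperature with $g_1=g_2=g$ and evaluating the two branches. A Python sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def superradiant_spectrum_sq(omega0, Omega, g1, g2, beta):
    """Squared collective energies in the superradiant phase, Eq. (Espb1),
    with Omega_Delta obtained from Eqs. (tra1) and (omegadelta)."""
    G2 = (g1 + g2)**2
    od = brentq(lambda x: np.tanh(0.5 * beta * x) - omega0 * x / G2,
                Omega, G2 / omega0)
    A = omega0**2 + od**2 + 2.0 * (g1**2 - g2**2) * Omega * omega0 / G2
    B = np.sqrt(A**2 - 16.0 * g1 * g2 * omega0**2 * (od**2 - Omega**2) / G2)
    return 0.5 * (A - B), 0.5 * (A + B), od

omega0, Omega, g = 1.0, 0.8, 0.9
Em2, Ep2, od = superradiant_spectrum_sq(omega0, Omega, g, g, beta=10.0)
ref = np.sqrt((omega0**2 - 16.0 * g**4 / omega0**2)**2
              + 4.0 * omega0**2 * Omega**2)
print(np.isclose(od, 4.0 * g**2 / omega0))   # Omega_Delta -> 4 g^2 / omega0
print(np.isclose(2.0 * Em2, omega0**2 + 16.0 * g**4 / omega0**2 - ref))
print(np.isclose(2.0 * Ep2, omega0**2 + 16.0 * g**4 / omega0**2 + ref))
\end{verbatim}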
\subsection{Case of $g_1\neq 0$ and $g_2=0$:}
Now let us study the case of the rotating wave approximation, i.e., the case of $g_1\neq 0$ and $g_2=0$,
in the superradiant phase. Here, the expression for $\frac{Z}{Z_0}$ is obtained by setting $g_2=0$ in Eq. (\ref{ZA2})
and Eq. (\ref{fi1}); therefore we have
\begin{eqnarray}
Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi_{g_1}(b^*,b)\right)}\,,
\label{ZApopov}
\end{eqnarray}
where the function $\Phi_{g_1}(b^*,b)$ is defined by
\begin{eqnarray}
\Phi_{g_1}(b^*,b)=\int_0^{\beta}d\tau\,b^*(\tau)\,(\partial_{\tau}-\omega_0)\,b(\tau)+tr\ln\biggl(1-\,g_1^2\,L_*^{-1}
\,b\,L^{-1}\,b^*\biggr)\,.
\label{fi1g1}
\end{eqnarray}
From the last equation, Eq. (\ref{fi1g1}), we can see that the function $\Phi_{g_1}(b^*,b)$ is invariant under the
transformation $b(\tau)\rightarrow\exp{(i\,\theta\,\tau)}\,b(\tau)$ and
$b^*(\tau)\rightarrow\exp{(-i\,\theta\,\tau)}\,b^*(\tau)$, where $\theta$
is an arbitrary factor independent of $\tau$. This continuous invariance is responsible for the appearance
of the Goldstone mode in the system. In order to perform the functional integral given by Eq. (\ref{ZApopov}),
let us separate the function $b(\tau)$ in the following form
\begin{eqnarray}
b\,(\tau)&=&b_c+b'\,(\tau)\,,\nonumber\\
b^*(\tau)&=&b^*_c+b'^{\,*}(\tau)\,,
\label{bg1}
\end{eqnarray}
where $b_c$ is a constant function, and the fields $b'(\tau)$ and $b'^{\,*}(\tau)$ satisfy the boundary
conditions $b'(0)=b'(\beta)=0$ and $b'^{\,*}(0)=b'^{\,*}(\beta)=0$. Using the representation
$b_c=\rho\, e^{i\,\phi}$ and $b^*_c=\rho\, e^{-i\,\phi}$ in the functional integral given by Eq. (\ref{ZApopov})
and Eq. (\ref{fi1g1}), and after applying the transformation $b'(\tau)\rightarrow e^{i\,\phi}\,b'(\tau)$ and $b'^{\,*}(\tau)\rightarrow e^{-i\,\phi}\,b'^{\,*}(\tau)$, we obtain
\begin{eqnarray}
Z_A=2\,\pi\,i\,A(N)\,\int_0^{\infty}d\rho^2\,\int[d\eta(b')]\exp{\left(N\,\Phi_{g_1}(\rho,b'^{\,*},b')\right)}\,,
\label{ZApopov1}
\end{eqnarray}
where the function $\Phi_{g_1}(\rho,b'^{\,*},b')$ is given by
\begin{eqnarray}
\Phi_{g_1}(\rho,b'^{\,*},b')&=&\int_0^{\beta}d\tau\,\Bigl(\rho+b'^{\,*}(\tau)\Bigr)
\Bigl(\partial_{\tau}-\omega_0\Bigr)\Bigl(\rho+b'(\tau)\Bigr)\,+\nonumber\\
&+&tr\ln\left(1-g_1^2\,L_*^{-1}\,\Bigl(\rho+
b'\Bigr)\,L^{-1}\,\Bigl(\rho+b'^{\,*}\Bigr)\right)\,.
\label{phipopov1}
\end{eqnarray}
In the functional integral appearing in Eq. (\ref{ZApopov1}), one variable of integration is $\rho^2$.
Here we use the steepest descent method in order to analyse the limit $N\rightarrow\infty$,
and we find the stationary point with respect to the variable $\rho^2$. The stationary
point thus satisfies the equation $\frac{\delta\,\Phi_{g_1}}{\delta\,(\rho^2)}\Bigr|_{\rho=\rho_0}=0$
with $b'^{\,*}(\tau)=b'(\tau)=0$. In this case the value of $\rho_0$ is the same as $b_0$ defined by
Eq. (\ref{tra1}) with $g_2=0$. Let us consider the first two
leading terms in the functional integral appearing in Eq. (\ref{ZApopov1}), coming from the expansion
of $\Phi_{g_1}(\rho,b'^{\,*},b')$ around the point defined by $\rho_0$ and $b'^{\,*}(\tau)=b'(\tau)=0$,
which gives the maximum of $Re\Bigl(\Phi_{g_1}(\rho,b'^{\,*},b')\Bigr)$. This expansion is given by
\begin{eqnarray}
\Phi_{g_1}(\rho,b'^{\,*},b')&=&\Phi_{g_1}(\rho_0,0,0)+\frac{1}{2}\,\frac{\delta^2\Phi_{g_1}}
{\delta(\rho^2)^2}\Biggr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,\Bigl(\rho^2-\rho_0^2\Bigr)^2\,+\nonumber\\
&+&\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\,
(b'^{\,*}(\tau_1)\,,\,b'(\tau_1))\,M_{\Phi_{g_1}}
\left(\begin{array}{c}
b'^{\,*}(\tau_2)\\
b'(\tau_2)
\end{array}\right)\,,
\label{fipopov2}
\end{eqnarray}
where the matrix $M_{\Phi_{g_1}}$ is given by
\begin{eqnarray}
\left(\begin{array}{cc}
\frac{\delta^2\Phi_{g_1}}{\delta b'^{*}(\tau_1)\,\delta b'^{*}(\tau_2)}&\frac{\delta^2\Phi_{g_1}}{\delta b'^{*}(\tau_1)\,\delta b'(\tau_2)}\\
\frac{\delta^2\Phi_{g_1}}{\delta b'(\tau_1)\,\delta b'^{*}(\tau_2)}&\frac{\delta^2\Phi_{g_1}}{\delta b'(\tau_1)\,\delta b'(\tau_2)}
\end{array}\right)
\Biggr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,\,.
\label{Mfipopov}
\end{eqnarray}
Using the expansion given by Eq. (\ref{fipopov2}) to perform the functional integral given by Eq. (\ref{ZApopov1}),
we have
\begin{eqnarray}
Z_A&=&2\,\pi\,i\,\sqrt{N}\,e^{N\phi_{g_1}}\,\int_{-\sqrt{N}\rho^2_0}^{\infty}dy\,e^{\frac{1}{2}\,\frac{\delta^2\Phi_{g_1}}
{\delta(\rho^2)^2}\Bigr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,y^2}\,\times\nonumber\\
&\times&\int[d\eta(b')]\exp\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\,
(b'^{\,*}(\tau_1)\,,\,b'(\tau_1))\,M_{\Phi_{g_1}}
\left(\begin{array}{c}
b'^{\,*}(\tau_2)\\
b'(\tau_2)
\end{array}\right)\right)\,,
\label{ZApopov2}
\end{eqnarray}
where $\phi_{g_1}$ corresponds to the expression for $\phi$ defined in Eq. (\ref{phi1}) with $g_2=0$. The factor $\sqrt{N}$ appearing in Eq. (\ref{ZApopov2})
comes from the scaling $\rho^2\rightarrow\rho^2/\sqrt{N}$. For $N\rightarrow\infty$, the integrals appearing in Eq. (\ref{ZApopov2})
are Gaussian. We represent the functions $b'(\tau)$ and $b'^{\,*}(\tau)$ as Fourier series, which do not contain the
zero mode, since they satisfy the boundary conditions $b'(0)=b'(\beta)=0$ and $b'^{\,*}(0)=b'^{\,*}(\beta)=0$.
Therefore, performing the functional integral and substituting in Eq. (\ref{ZA0}), we obtain
\begin{eqnarray}
\frac{Z}{Z_0}=\sqrt{N}\,e^{N\phi_{g_1}}\,\frac{1}{A_0}\,
\prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,,
\label{Zfinpopov}
\end{eqnarray}
where the functions $\phi_{g_1}$ and $H_{II}(\omega)$ are given respectively by Eq. (\ref{phi1}) and Eq. (\ref{H22})
with $g_2=0$, and $A_0$ is given by
\begin{eqnarray}
A_0=\frac{g_1}{\Omega_{\Delta}\,\sqrt{\pi\,\beta\,\omega_0}}\,\left(1-\frac{\beta\,
\Omega_{\Delta}}{\sinh(\beta\Omega_{\Delta})}\right)^{\frac{1}{2}}\,.
\label{A0}
\end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$ given by Eq. (\ref{H22})
with $g_2=0$, the collective spectrum is obtained by solving the equation $H_{II}(-i\,E)=0$.
In this way, we obtain the following spectrum
\begin{eqnarray}
E_1=0\,,
\label{Espg11}
\end{eqnarray}
and
\begin{eqnarray}
E_2^{\,2}=\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\omega_0\,\Omega\,.
\label{Espg12}
\end{eqnarray}
The particular value of the spectrum given by $E_1=0$ in Eq. (\ref{Espg11}) corresponds to the Goldstone mode
\cite{popov1}.
\subsection{Case of $g_1=0$ and $g_2\neq 0$:}
Now let us study the case of $g_1=0$ and $g_2\neq 0$ in the superradiant phase. Here, the expression for
$\frac{Z}{Z_0}$ is obtained by setting $g_1=0$ in Eq. (\ref{ZA2}) and Eq. (\ref{fi1}). For this case we have
\begin{eqnarray}
Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi_{g_2}(b^*,b)\right)}\,,
\label{ZApopovg2}
\end{eqnarray}
where the function $\Phi_{g_2}(b^*,b)$ is defined by
\begin{eqnarray}
\Phi_{g_2}(b^*,b)=\int_0^{\beta}d\tau\,b^*(\tau)\,(\partial_{\tau}-\omega_0)\,b(\tau)+tr\ln\biggl(1-\,g_2^2\,L_*^{-1}
\,b^*\,L^{-1}\,b\biggr)\,.
\label{fi1g2}
\end{eqnarray}
From the last equation, Eq. (\ref{fi1g2}), we can see that the function $\Phi_{g_2}(b^*,b)$ is invariant under the
transformation $b(\tau)\rightarrow\exp{(i\,\theta\,\tau)}\,b(\tau)$ and
$b^*(\tau)\rightarrow\exp{(-i\,\theta\,\tau)}\,b^*(\tau)$, where $\theta$
is an arbitrary factor independent of $\tau$. This continuous invariance is
responsible for the appearance of the Goldstone mode in the system. Since Eq. (\ref{fi1g2})
is very similar to Eq. (\ref{fi1g1}), the calculation of
$\frac{Z}{Z_0}$ in the case of $g_1=0$ follows the same steps as the calculation performed
to obtain $\frac{Z}{Z_0}$ in the case of the rotating wave approximation. Consequently, we have
\begin{eqnarray}
\frac{Z}{Z_0}=\sqrt{N}\,e^{N\phi_{g_2}}\,\frac{1}{A_0}\,
\prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,,
\label{Zfinpopovg2}
\end{eqnarray}
where $H_{II}(\omega)$ is given by Eq. (\ref{H22}) with $g_1=0$, the value $\phi_{g_2}$
corresponds to the expression for $\phi$ defined in Eq. (\ref{phi1}) with $g_1=0$, and $A_0$ is given by
\begin{eqnarray}
A_0=\frac{g_2}{\Omega_{\Delta}\,\sqrt{\pi\,\beta\,\omega_0}}\,\left(1-\frac{\beta\,
\Omega_{\Delta}}{\sinh(\beta\Omega_{\Delta})}\right)^{\frac{1}{2}}\,.
\label{A0g2}
\end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$ given by Eq. (\ref{H22})
with $g_1=0$,
the collective spectrum is obtained by solving the equation $H_{II}(-i\,E)=0$. In this way, we obtain the
following spectrum
\begin{eqnarray}
E_1=0\,,
\label{Espg21}
\end{eqnarray}
and
\begin{eqnarray}
E_2^{\,2}=\omega_0^2+\Omega_{\Delta}^2\,-\,2\,\omega_0\,\Omega\,.
\label{Espg22}
\end{eqnarray}
The particular value of the spectrum given by $E_1=0$ in Eq. (\ref{Espg21}) corresponds to the Goldstone mode.
\section{Summary}
\quad $\,\,$
In this paper, using the path integral approach and functional methods, we find,
in the thermodynamic limit $N\rightarrow\infty$, the
asymptotic behaviour of the partition function and the collective spectrum
of the full Dicke model in the normal and the superradiant phase. In our study we
distinguish three particular cases.
The first one corresponds to the rotating wave approximation,
$g_1\neq 0$ and $g_2=0$; in this case the model has a continuous symmetry, associated
with the conservation of the sum of the excitation number of the $N$ atoms and the excitation
number of the boson field. The second case corresponds to the model with $g_1=0$ and $g_2\neq 0$;
in this case the model has a continuous symmetry, associated with the conservation
of the difference between the excitation number of the $N$ atoms and the excitation
number of the boson field. The last one corresponds to the case of
$g_1\neq 0$ and $g_2\neq 0$, in which the model has a discrete symmetry. The phase transition in each case is
related to the spontaneous breaking of the respective symmetry. In the case of the rotating
wave approximation, and also in the case of $g_1=0$ and $g_2\neq 0$, the collective spectrum has a zero-energy
value, corresponding to the Goldstone mode associated with the breaking of the continuous symmetry in these cases.
\end{document} |
\begin{document}
\title{\Large {\textbf{Randomness and Statistical Inference of Shapes via\\ the Smooth Euler Characteristic Transform}}}
\author[1,*]{Kun Meng}
\author[2]{Jinyu Wang}
\author[3,4]{Lorin Crawford}
\author[3]{Ani Eloyan}
\affil[1]{\small Division of Applied Mathematics, Brown University, RI, USA}
\affil[2]{\small Data Science Initiative, Brown University, RI, USA}
\affil[3]{\small Department of Biostatistics, Brown University School of Public Health, Providence, RI, USA}
\affil[4]{\small Microsoft Research New England, Cambridge, MA, USA}
\affil[*]{Corresponding Author: e-mail: \texttt{kun\[email protected]}, Address: 182 George Street, Providence, RI 02906, USA.}
\maketitle
\begin{abstract}
\noindent In this paper, we provide the foundations for deriving the distributional properties of the smooth Euler characteristic transform. Motivated by functional data analysis, we propose two algorithms for testing hypotheses on random shapes based on these foundations. Simulation studies are provided to support our mathematical derivations and show the performance of our hypothesis testing framework. We apply our proposed algorithms to analyze a data set of mandibular molars from four genera of primates to test for shape differences and interpret the corresponding results from the morphology viewpoint. Our discussions connect the following fields: algebraic and computational topology, probability theory and stochastic processes, Sobolev spaces and functional analysis, statistical inference, morphology, and medical imaging.\footnote{
\begin{itemize}
\item \textbf{Keywords:} Karhunen–Loève expansion, persistent diagrams, reproducing kernel Hilbert spaces, random fields, Sobolev spaces, separable Banach spaces.
\item \textbf{Abbreviations:} ECT, Euler characteristic transform; ECC, Euler characteristic curve; GBM, glioblastoma multiforme; HCP, homotopy critical point; i.i.d., independent and identically distributed; LECT, lifted Euler characteristic transform; NHST, null hypothesis significance test; PD, persistent diagram; PHT, persistent homology transform; RKHS, reproducing kernel Hilbert space; RCLL, right continuous with left limits; SECT, smooth Euler characteristic transform; TDA, topological data analysis.
\end{itemize}
}
\end{abstract}
\section{Introduction}\label{Introduction}
The quantification of shapes has become an important research direction in both the applied and theoretical sciences. It has brought advances to many fields including network analysis \citep{lee2011discriminative}, geometric morphometrics \citep{boyer2011algorithms,gao2019gaussian}, biophysics and structural biology \citep{wang2021statistical, tang2022topological}, and radiogenomics (i.e., the field that aims to predict outcomes such as survival using clinical imaging features and genomic assays) \citep{crawford2020predicting}. If shapes are considered random, then their corresponding quantitative summaries are also random --- implying such summaries of random shapes are statistics. The statistical inference of shapes based on these quantitative summaries has been of particular interest (e.g., the derivation of confidence sets by \cite{fasy2014confidence} and a framework for hypothesis testing by \cite{robinson2017hypothesis}).
\subsection{Overview of Topological Data Analysis}
Topological data analysis (TDA) presents a collection of statistical methods that quantitatively summarize the shapes represented in data using computational topology \citep{edelsbrunner2010computational}. One common statistical invariant in TDA is the persistent diagram (PD) \citep{edelsbrunner2000topological}. When equipped with the $p$-th Wasserstein distance for $1\le p<\infty$, the collection of PDs denoted as $\mathscr{D}$ is a Polish space \citep{mileyko2011probability}; hence, probability measures can be applied, and the randomness of shapes can be potentially represented using the probability measures on $\mathscr{D}$. However, a single PD does not preserve all relevant information of a shape \citep{crawford2020predicting}. Using the Euler calculus, \cite{ghrist2018persistent} (Corollary 6 therein) showed that the persistent homology transform (PHT) \citep{turner2014persistent}, motivated by integral geometry and differential topology, concisely summarizes information within shapes. The PHT takes values in $C(\mathbb{S}^{d-1};\mathscr{D}^d)=\{\mbox{all continuous maps }F:\mathbb{S}^{d-1} \rightarrow \mathscr{D}^d\}$, where $\mathbb{S}^{d-1}$ denotes the sphere $\{x\in\mathbb{R}^d:\Vert x\Vert=1\}$ and $\mathscr{D}^d$ is the $d$-fold Cartesian product of $\mathscr{D}$ (see Lemma 2.1 and Definition 2.1 of \cite{turner2014persistent}). Since $\mathscr{D}$ is not a vector space and the distances on $\mathscr{D}$ (e.g., the $p$-th Wasserstein and bottleneck distances \citep{cohen2007stability}) are abstract, many fundamental concepts in classical statistics are not easy to implement with summaries resulting from the PHT. For example, the definition of moments corresponding to probability measures on $\mathscr{D}$ (e.g., means and variances) is highly nontrivial \citep{mileyko2011probability, turner2014frechet}. The difficulty in defining these properties prevents the application of PHT-based statistical inference methods in $C(\mathbb{S}^{d-1};\mathscr{D}^d)$.
The smooth Euler characteristic transform (SECT) proposed by \cite{crawford2020predicting} provides an alternative summary statistic for shapes. The SECT not only preserves the information of shapes of interest (see Corollary 6 of \cite{ghrist2018persistent}), but it also represents shapes using univariate continuous functions instead of PDs. Specifically, values of the SECT are maps from the sphere $\mathbb{S}^{d-1}$ to a separable Banach space --- the collection of real-valued continuous functions on a compact interval, say $C([0,T]) = \mathcal{B}$ for some $T>0$ (values of $T$ will be given in Eq.~\eqref{eq: def of sublevel sets}). That is, for any shape $K$, its SECT, denoted as $SECT(K) = \{SECT(K)(\nu)\}_{\nu\in\mathbb{S}^{d-1}}$, is in $\mathcal{B}^{\mathbb{S}^{d-1}}=\{\mbox{all maps }f: \mathbb{S}^{d-1}\rightarrow\mathcal{B}\}$; specifically, $SECT(K)(\nu)\in\mathcal{B}$ for each $\nu\in\mathbb{S}^{d-1}$. Therefore, the randomness of shapes $K$ is represented via the SECT by a collection of $\mathcal{B}$-valued random variables. The probability theory on separable Banach spaces has been better developed than on $\mathscr{D}$. Specifically, a $\mathcal{B}$-valued random variable is a stochastic process with its sample paths in $\mathcal{B}$ (we will further show in Section \ref{The Definition of Smooth Euler Characteristic Transform} that $\mathcal{B}$ herein can be replaced with a reproducing kernel Hilbert space (RKHS)). The theory of stochastic processes has been developed for nearly a century. Correspondingly, functional data analysis is a well-developed branch of statistics. Many tools are available for building the foundations of the randomness and statistical inference of shapes.
The work from \cite{crawford2020predicting} applied the SECT to magnetic resonance images taken from tumors of a cohort of glioblastoma multiforme (GBM) patients. Using summary statistics derived from the SECT as predictors within Gaussian process regression, the authors showed that the SECT has the power to predict clinical outcomes better than existing tumor shape quantification approaches and common molecular assays. The relative performance of the SECT in the GBM study represents a promising future for the utility of the SECT in medical imaging and more general statistical applications investigating shapes. Similarly, \cite{wang2021statistical} applied derivatives of the Euler characteristic transform as predictors of statistical models for subimage analysis which is related to the task of variable selection and seeks to identify physical features that are most important for differentiating between two classes of shapes. In both \cite{crawford2020predicting} and \cite{wang2021statistical}, the shapes were implemented as the predictors of regressions, and the randomness of these predictors was ignored. Finally, \cite{marsh2022detecting} showed that the SECT supersedes the standard measures used in organoid morphology.
\subsection{Overview of Contributions}
In this paper, we model the distributions of shapes via the SECT using RKHS-valued random fields and provide the corresponding foundations using tools in algebraic and computational topology, Sobolev spaces, and functional analysis. In contrast to work like \cite{crawford2020predicting} and \cite{wang2021statistical}, we model realizations from the SECT as the responses rather than as input variables or predictors. Modeling the distributions of shapes helps answer the following statistical inference question: \textit{Is the difference between two groups of shapes significant or just random?} For example, the mandibular molars in Figure \ref{fig: Teeth} are from four genera of primates. A statistical inference question from the geometric morphometrics perspective is: \textit{Are the mandibular molars in the yellow panels of Figure \ref{fig: Teeth} significantly different from those in other panels?} Based on the foundations of the randomness of shapes, we propose an approach for testing hypotheses on random shapes and answering this statistical inference question.
\begin{figure}
\caption{Mandibular molars from two different suborders of the primates: Haplorhini (\url{https://gaotingran.com/codes/codes.html}).}
\label{fig: Teeth}
\end{figure}
The ``theory of random sets" is a well-developed giant machinery characterizing set-valued random variables (e.g., see books such as \cite{molchanov2005theory}). However, the application of this theory to persistent homology-based statistics (e.g., PHT in \cite{turner2014persistent} and SECT in \cite{crawford2020predicting}) is underdeveloped. In the current work, we propose a new probability space for characterizing random shapes, which is compatible with SECT. This newly proposed probability space provides theoretical foundations for the SECT-based hypothesis testing in Section \ref{section: hypothesis testing}. Importantly, our proposed framework provides the theoretical foundations and algorithms for addressing hypothesis testing problems in shape analyses encountered frequently in practice. Future work may investigate combining the random sets and persistent homology theories.
Using the homotopy theory and PDs, we first propose a collection of shapes on which the SECT is well-defined. Then, we show that, for each shape in this collection, its SECT is in $C(\mathbb{S}^{d-1};\mathcal{H})=\{\mbox{all continuous maps }F: \mathbb{S}^{d-1}\rightarrow\mathcal{H}\}$, where $\mathcal{H} = H_0^1([0,T])$ is not only a Sobolev space (see Chapter 8.3 of \cite{brezis2011functional}) but also an RKHS.$^\dagger$\footnote{$\dagger$: Strictly speaking, the functions in Sobolev space $\mathcal{H}$ are defined on the open interval $(0,T)$ instead of the closed interval $[0,T]$ (see Chapter 8.2 of \cite{brezis2011functional}). Hence, the rigorous notation of $\mathcal{H}$ should be $H_0^1((0,T))$. However, Theorem 8.8 of \cite{brezis2011functional} indicates that each function in $H_0^1((0,T))$ can be uniquely represented by a continuous function defined on $[0,T]$, which implies that functions in $H_0^1((0,T))$ can be viewed as being defined on the closed interval $[0,T]$. Therefore, to implement the boundary values on $\partial (0,T)=\{0,T\}$, we use the notation $H_0^1([0,T])$ throughout this paper to indicate that all functions in $\mathcal{H}$ are viewed as defined on $[0,T]$. The same reasoning is applied for the space $W_0^{1,p}([0,T])$ implemented later in this paper (see Theorem 8.8 and the Remark 8 after Proposition 8.3 in \cite{brezis2011functional}).} Importantly, $C(\mathbb{S}^{d-1};\mathcal{H})$ is a separable Banach space (e.g., see Theorem \ref{thm: the separability of C(Shere;H)} in Appendix \ref{section: appendix, proofs}) and, hence, a Polish space; it helps us construct a probability space to characterize the distributions of shapes. This probability space makes the SECT an $\mathcal{H}$-valued random field indexed by $\mathbb{S}^{d-1}$ with continuous paths, which is equivalent to the SECT being a random variable taking values in $C(\mathbb{S}^{d-1};\mathcal{H})$. Based on the proposed probability space, we define the mean and covariance of the SECT. Using a Sobolev embedding result, we show some properties of the mean and covariance. These properties allow us to develop the Karhunen–Loève expansion of the SECT, which is the key tool for our proposed hypothesis testing framework.
Traditionally, the statistical inference of shapes in TDA is conducted in persistent diagram space $\mathscr{D}$, which is not suitable for exponential family-based distributions and requires any corresponding statistical inference to be highly nonparametric. For example, \cite{fasy2014confidence} used subsampling, and \cite{robinson2017hypothesis} applied permutation tests. The PHT-based statistical inference in $C(\mathbb{S}^{d-1};\mathscr{D}^d)$ is even more difficult. With the Karhunen–Loève expansion of the SECT and the central limit theorem, some hypotheses on the distributions of shapes can be tested using the normal distribution-based methods.
\subsection{Relevant Notation and Paper Organization}
Throughout this paper, a ``shape" refers to a finitely triangulable subset of $\mathbb{R}^d$ defined as follows.
\begin{definition}\label{def: finite triangularization}
(i) Let $K$ be a subset of $\mathbb{R}^d$. If there exists a finite simplicial complex $S$ such that the corresponding polytope $\vert S\vert=\bigcup_{s\in S}s$ is homeomorphic to $K$, we say $K$ is finitely triangulable. (ii) Let $\mathscr{S}_d$ denote the collection of all finitely triangulable subsets of $\mathbb{R}^d$.
\end{definition}
\noindent The definitions of simplicial complexes and finitely triangulable spaces can be found in \cite{munkres2018elements}. The triangulability assumption is standard in the TDA literature (e.g., \cite{cohen2010lipschitz} and \cite{turner2014persistent}). Throughout this paper, we apply the following:
\begin{enumerate}
\item All the linear spaces are defined with respect to field $\mathbb{R}$. For any normed space $\mathcal{V}$, let $\Vert\cdot\Vert_{\mathcal{V}}$ denote its norm. Let $\Vert x\Vert$ denote the Euclidean norm if $x$ is a finite-dimensional vector.
\item Let $X$ be a compact metric space equipped with metric $d_X$ and let $\mathcal{V}$ denote a normed space. $C(X;\mathcal{V})$ is the collection of continuous maps from $X$ to $\mathcal{V}$. Furthermore, $C(X;\mathcal{V})$ is a normed space equipped with $\Vert f\Vert_{C(X;\mathcal{V})} = \sup_{x\in X}\Vert f(x)\Vert_{\mathcal{V}}$. The Hölder space $C^{0,\frac{1}{2}}(X;\mathcal{V})$ is defined as
\begin{align*}
\left\{f\in C(X;\mathcal{V}) \Bigg\vert \sup_{x,y\in X,\, x\ne y}\left(\frac{\Vert f(x)-f(y)\Vert_{\mathcal{V}}}{\sqrt{d_X(x,y)}}\right)<\infty\right\}.
\end{align*}
Here, $C^{0,\frac{1}{2}}(X;\mathcal{V})$ is a normed space equipped with the norm
\begin{align*}
\Vert f\Vert_{C^{0,\frac{1}{2}}(X;\mathcal{V})} = \Vert f\Vert_{C(X;\mathcal{V})}+\sup_{x,y\in X,\, x\ne y}\left(\frac{\Vert f(x)-f(y)\Vert_{\mathcal{V}}}{\sqrt{d_X(x,y)}}\right).
\end{align*}
Obviously, $C^{0,\frac{1}{2}}(X;\mathcal{V})\subset C(X;\mathcal{V})$. For simplicity, we denote $C(X) = C(X;\mathbb{R})$ and $C^{0,\frac{1}{2}}(X) = C^{0,\frac{1}{2}}(X;\mathbb{R})$. For a given $T>0$ (see Eq.~\eqref{eq: def of sublevel sets}), we denote $C([0,T])$ as $\mathcal{B}$.
\item All derivatives implemented in this paper are \textit{weak derivatives} (defined in Chapter 5.2.1 of \cite{evans2010partial}). The inner product of $\mathcal{H} = H_0^1([0,T]) = \{f\in L^2([0,T])\,\vert\, f'\in L^2([0,T]) \mbox{ and }f(0)=f(T)=0\}$ is defined as $\langle f, g \rangle_{\mathcal{H}} = \int_0^T f'(t) g'(t) dt$ (see Chapter 8.3 of \cite{brezis2011functional}). Unless otherwise stated, the inner product $\langle\cdot,\cdot\rangle$ denotes $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ for simplicity.
\item Suppose $(X, d_X)$ is a metric space. $\mathscr{B}(X)$ and $\mathscr{B}(d_X)$ denote the Borel algebra generated by the metric topology corresponding to $d_X$.
\end{enumerate}
The following inequalities are also useful for deriving many results presented in this paper
\begin{align}\label{eq: Sobolev embedding from Morrey}
\Vert f\Vert_{\mathcal{B}}\le \Vert f\Vert_{C^{0,\frac{1}{2}}([0,T])}\le \tilde{C}_T \Vert f\Vert_{\mathcal{H}}, \ \ \mbox{ for all }f\in\mathcal{H},
\end{align}
where $\tilde{C}_T$ is a constant depending only on $T$. The first inequality in Eq.~\eqref{eq: Sobolev embedding from Morrey} results from the definition of $\Vert\cdot\Vert_{\mathcal{B}}$ and $\Vert\cdot\Vert_{C^{0,\frac{1}{2}}([0,T])}$, while the second inequality is from \cite{evans2010partial} (Theorem 5 of Chapter 5.6). Eq.~\eqref{eq: Sobolev embedding from Morrey} further implies the following embeddings
\begin{align}\label{eq: H, Holder, B embeddings}
H_0^1([0,T]) = \mathcal{H} \subset C^{0,\frac{1}{2}}([0,T]) \subset \mathcal{B} = C([0,T]).
\end{align}
The algebraic topology concepts referred to in this paper (e.g., Betti numbers, homology groups, and homotopy equivalence) can be found in \cite{hatcher2002algebraic} and \cite{munkres2018elements}.
We organize this paper as follows. In Section \ref{SECTs and Their Gaussian Distributions}, we provide a collection of shapes and show that the SECT is well defined for elements in this collection. Additionally, we provide several properties of the SECT using Sobolev and Hölder spaces. These properties will be necessary in later sections for developing the probability theory of both the SECT and SECT-based hypothesis testing. In Section \ref{section: distributions of Gaussian bridge}, we construct a probability space for modeling the distributions of shapes. Based on this probability space, we model the SECT as a $C(\mathbb{S}^{d-1};\mathcal{H})$-valued random variable. In Section \ref{section: hypothesis testing}, we propose the Karhunen–Loève expansion of the SECT. This expansion leads to a normal distribution-based statistic for testing hypotheses on the distributions of shapes. We propose two algorithms therein based on this statistic. In Section \ref{section: Simulation experiments}, we provide simulation studies showing the performance of the proposed hypothesis testing algorithms. In Section \ref{section: Applications}, we apply the proposed algorithms to a silhouette database (see Figure \ref{fig: Silhouette Database}) and a data set of mandibular molars from primates to test hypotheses on shape comparisons (see Figure \ref{fig: Teeth}). In Section \ref{Conclusions and Discussions}, we conclude the paper and propose several relevant topics for future research. The Appendix in the Supplementary Material \citep{meng2022supplementary} provides the necessary mathematical tools for this paper. To avoid distraction from the main flow of the paper, unless otherwise stated, the proof of each theorem is in the Appendix.
\section{The Smooth Euler Characteristic Transform of Shapes}\label{SECTs and Their Gaussian Distributions}
In this section, we give the background on the smooth Euler characteristic transform (SECT) and propose corresponding mathematical foundations under the assumption that shapes are deterministic.
\subsection{The Definition of Smooth Euler Characteristic Transform}\label{The Definition of Smooth Euler Characteristic Transform}
We assume $K\subset \overline{B(0,R)} = \{x\in\mathbb{R}^d:\Vert x\Vert\le R\}$, the closed ball of radius $R>0$ centered at the origin. For example, the mandibular molars in Figure \ref{fig: Teeth} are bounded shapes in $\mathbb{R}^3$; they can be pre-aligned, normalized within a unit ball, and centered at the origin using the \texttt{auto3dgm} software \citep{puente2013distances}. In this example, the parameter $R=1$. For each direction $\nu\in\mathbb{S}^{d-1}$, we define a filtration $\{K_t^\nu\}_{t\in[0,T]}$ of sublevel sets by
\begin{align}\label{eq: def of sublevel sets}
K_t^\nu \overset{\operatorname{def}}{=} \left\{x\in K\vert x\cdot\nu \le t-R \right\},\ \ \mbox{ for all } t\in[0,T],\ \ \mbox{ where }T = 2R.
\end{align}
We then have the following Euler characteristic curve (ECC) in direction $\nu$
\begin{align}\label{Eq: first def of Euler characteristic curve}
\chi_t^{\nu}(K)& \overset{\operatorname{def}}{=} \mbox{ the Euler characteristic of }K_{t}^\nu = \chi (K^\nu_t) = \sum_{k=0}^{d-1} (-1)^{k}\cdot\beta_k(K_t^\nu),
\end{align}
for $t\in[0,T]$, where $\beta_k(K_t^\nu)$ is the $k$-th Betti number of $K_t^\nu$. For example, if $K_t^\nu$ is a polygon mesh, $\chi_t^{\nu}(K)=\#V-\#E+\#F$, where $\#V$, $\#E$, and $\#F$ denote the number of vertices, edges, and faces of the polygon mesh $K_t^\nu$, respectively. The Euler characteristic transform (ECT) defined as $ECT(K): \mathbb{S}^{d-1} \rightarrow \mathbb{Z}^{[0,T]}, \nu \mapsto \{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ was first proposed by \cite{turner2014persistent} as an alternative to the PHT. Based on the ECT, \cite{crawford2020predicting} further proposed the SECT as follows
\begin{align}\label{Eq: definition of SECT}
\begin{aligned}
& SECT(K): \ \ \mathbb{S}^{d-1}\rightarrow\mathbb{R}^{[0,T]},\ \ \ \nu \mapsto SECT(K)(\nu) = \left\{SECT(K)(\nu;t) \right\}_{t\in[0,T]}, \\
& \mbox{ where}\ \ SECT(K)(\nu;t) \overset{\operatorname{def}}{=} \int_0^t \chi_{\tau}^\nu(K) d\tau-\frac{t}{T}\int_0^T \chi_{\tau}^\nu(K)d\tau.
\end{aligned}
\end{align}
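For shapes given as polygon meshes, the quantities in Eqs.~\eqref{Eq: first def of Euler characteristic curve} and \eqref{Eq: definition of SECT} can be computed directly. The following Python sketch is a minimal illustration under simplifying assumptions (the helper names are ours, a vertex-induced sublevel filtration of the mesh is used, and the integrals are replaced by left Riemann sums); it is not the implementation used for the data analyses in this paper.
\begin{verbatim}
import numpy as np

def ecc_curve(vertices, edges, faces, nu, R, t_grid):
    """Euler characteristic curve t -> chi(K_t^nu) for a polygon mesh (V, E, F).

    A vertex enters the sublevel set K_t^nu = {x : x . nu <= t - R} once its
    height x . nu + R falls below t; an edge or face enters once all of its
    vertices have entered.
    """
    heights = vertices @ nu + R                      # filtration value of each vertex
    v_in = heights[:, None] <= t_grid[None, :]                                   # (#V, #levels)
    e_in = np.all(heights[edges][:, :, None] <= t_grid[None, None, :], axis=1)   # (#E, #levels)
    f_in = np.all(heights[faces][:, :, None] <= t_grid[None, None, :], axis=1)   # (#F, #levels)
    return v_in.sum(axis=0) - e_in.sum(axis=0) + f_in.sum(axis=0)   # chi = #V - #E + #F

def sect_from_ecc(chi, t_grid, T):
    """SECT(K)(nu; t) = int_0^t chi dtau - (t/T) int_0^T chi dtau, via left Riemann sums."""
    dt = t_grid[1] - t_grid[0]
    integral = np.concatenate([[0.0], np.cumsum(chi[:-1]) * dt])   # int_0^t chi dtau on the grid
    return integral - (t_grid / T) * integral[-1]

# Usage sketch: a single triangle in R^2, direction nu = (1, 0), R = 1, T = 2R.
V = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
E = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[0, 1, 2]])
R, T = 1.0, 2.0
t_grid = np.linspace(0.0, T, 201)
chi = ecc_curve(V, E, F, np.array([1.0, 0.0]), R, t_grid)
sect = sect_from_ecc(chi, t_grid, T)   # zero at t = 0 and t = T, as in Eq. (definition of SECT)
\end{verbatim}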
In order for the integrals in Eq.~\eqref{Eq: definition of SECT} to be well-defined, we need to introduce additional conditions. We first propose the following concept motivated by \cite{cohen2007stability}.
\begin{definition}\label{def: HCP and tameness}
Suppose $K\in\mathscr{S}_d$ and $K\subset\overline{B(0,R)}$. (i) If $K^\nu_{t^*}$ is not homotopy equivalent to $K^\nu_{t^*-\delta}$ for any $\delta>0$, we call $t^*\in[0,T]$ a homotopy critical point (HCP) of $K$ in direction $\nu$. (ii) If $K$ has finitely many HCPs in every direction $\nu\in\mathbb{S}^{d-1}$, we call $K$ tame.
\end{definition}
\noindent Because the Euler characteristic is homotopy invariant, for each $\nu\in \mathbb{S}^{d-1}$, the ECC $\chi^\nu_t(K)$ of a tame shape $K$ is a step function of $t$ with finitely many discontinuities, and hence measurable. These discontinuities are the HCPs of $K$ in direction $\nu$. We may refine the definition of the ECC $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ in Eq.~\eqref{Eq: first def of Euler characteristic curve} as follows: for a tame $K\in\mathscr{S}_d$, define $\chi_t^\nu(K) = \chi(K_t^\nu)$ if $t$ is not an HCP in direction $\nu$, and $\chi_t^\nu(K) = \lim_{t'\rightarrow t+}\chi_{t'}^\nu(K)$ if $t$ is an HCP in direction $\nu$. Under these conditions, $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}$ is a right continuous with left limits (RCLL) function with finitely many discontinuities. We can define the $k$-th Betti number curves $\{\beta_k(K_t^{\nu})\}_{t\in[0,T]}$ to be RCLL functions of $t\in[0,T]$ in the same way. To investigate $SECT(K)$ across shapes $K$, especially the distribution of $SECT(K)$ across $K$, we need the following condition (used in the proofs of several theorems in this paper) for tame shapes $K\subset\overline{B(0,R)}$, which restricts our attention to a specific subset of $\mathscr{S}_d$
\begin{align}\label{Eq: topological invariants boundedness condition}
\sup_{k\in\{0,\cdots,d-1\}}\left[\sup_{\nu\in\mathbb{S}^{d-1}}\left(\#\Big\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \Big\vert \, \operatorname{pers}(\xi)>0\Big\}\right)\right] \le\frac{M}{d},
\end{align}
where $\operatorname{Dgm}_k(K;\phi_{\nu})$ is the PD of $K$ with respect to function $\phi_\nu(x)=x\cdot \nu+R$, $\operatorname{pers}(\xi)$ is the persistence of the homology feature $\xi$, $\#\{\cdot\}$ denotes the cardinality of a multiset, and $M>0$ is allowed to be any sufficiently large fixed number. The condition in Eq.~\eqref{Eq: topological invariants boundedness condition} involves some technicality in computational topology. To avoid distraction from the main flow of the paper, we save the details of conditions involving Eq.~\eqref{Eq: topological invariants boundedness condition} and the definitions of $\operatorname{Dgm}_k(K;\phi_{\nu})$ and $\operatorname{pers}(\xi)$ for Appendix \ref{The Relationship between PHT and SECT}. A heuristic interpretation of the condition in Eq.~\eqref{Eq: topological invariants boundedness condition} is that there exists a uniform upper bound for the numbers of nontrivial homology features of shape $K$ across all levels $t$ and directions $\nu$ (see Theorem \ref{thm: boundedness topological invariants theorem} and Eq.~\eqref{eq: approximated sample space} below). This condition is usually satisfied in applications (e.g., a tumor has finitely many connected components and cavities across all levels and directions). In this paper, we focus on shapes in the following collection
\begin{align*}
\mathscr{S}_{R,d}^M \overset{\operatorname{def}}{=} \left\{K \in\mathscr{S}_d \big\vert K\subset\overline{B(0,R)},\, K \mbox{ is tame and satisfies condition (\ref{Eq: topological invariants boundedness condition}) with fixed }M>0 \right\}.
\end{align*}
The following boundedness can be derived from condition (\ref{Eq: topological invariants boundedness condition})
\begin{theorem}\label{thm: boundedness topological invariants theorem}
For any $K\in\mathscr{S}_{R,d}^M$, we have the following bounds:\\ (i) $\sup_{\nu\in\mathbb{S}^{d-1}}\left[\sup_{0\le t\le T}\left(\sup_{k\in\{0,\cdots,d-1\}}\beta_k(K_t^{\nu})\right)\right] \le M/d$, and (ii) $\sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\chi_{t}^\nu(K)\right\vert\right) \le M$.
\end{theorem}
\noindent Theorem \ref{thm: boundedness topological invariants theorem} implies the following approximate representation of $\mathscr{S}_{R,d}^M$.
\begin{align}\label{eq: approximated sample space}
\mathscr{S}_{R,d}^M \approx \left\{K \in\mathscr{S}_d \,\Bigg\vert \, K\subset\overline{B(0,R)},\ \ K \mbox{ is tame and }\sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\chi_{t}^\nu(K)\right\vert\right) \le M \right\}.
\end{align}
The tameness of $K$ and boundedness of $\chi^\nu_t(K)$ in Theorem \ref{thm: boundedness topological invariants theorem} guarantee that, for each fixed direction $\nu\in\mathbb{S}^{d-1}$, an ECC $\chi^\nu_{(\cdot)}(K)$ is a measurable and bounded function defined on $[0,T]$. Therefore, the integrals in Eq.~\eqref{Eq: definition of SECT} are well-defined Lebesgue integrals. Since $\{\chi_{t}^\nu(K)\}_{t\in[0,T]}\in L^1([0,T])$, $SECT(K)(\nu)$ is absolutely continuous on $[0,T]$. Furthermore, we have the following regularity result of the Sobolev type.
\begin{theorem}\label{thm: Sobolev function paths}
For any $K\in\mathscr{S}_{R,d}^M$ and $\nu\in\mathbb{S}^{d-1}$, we have the following:
(i) the function $\{\int_0^t \chi_\tau^\nu(K) d\tau\}_{t\in[0,T]}$ has first-order weak derivative $\{ \chi_t^\nu(K) \}_{t\in[0,T]}$; (ii) $SECT(K)(\nu)\in W^{1,p}_0([0,T]) \subset \mathcal{B}$ for all $p\in[1,\infty)$.
\end{theorem}
\noindent Here, $W^{1,p}_0([0,T])$ is a Sobolev space defined as $W^{1,p}_0([0,T]) = \{f\in L^p([0,T]) \, \vert \, \mbox{weak derivative }f'$ exists, $f'\in L^p([0,T]), \mbox{ and }f(0)=f(T)=0\}$ (see Theorem 8.12 of \cite{brezis2011functional} for this specific definition). We will focus on the case $p=2$ in Theorem \ref{thm: Sobolev function paths}, where $\mathcal{H}=H_0^1([0,T])=W^{1,2}_0([0,T])$. Theorem \ref{thm: Sobolev function paths} indicates that $SECT(\mathscr{S}_{R,d}^M) \subset \mathcal{H}^{\mathbb{S}^{d-1}} = \{\mbox{all maps }F:\mathbb{S}^{d-1}\rightarrow\mathcal{H}\}$, which is strengthened by the following result.
\begin{theorem}\label{lemma: The continuity lemma}
For each $K\in\mathscr{S}_{R,d}^M$:\\
(i) There exists a constant $C^*_{M,R,d}$ depending only on $M$, $R$, and $d$ such that the following two inequalities hold for any two directions $\nu_1,\nu_2\in\mathbb{S}^{d-1}$,
\begin{align}\label{Eq: continuity inequality}
& \left( \int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert^2 d\tau \right)^{1/2} \le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert}, \\
\notag & \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2)\Big\Vert_{\mathcal{H}} \le C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 }.
\end{align}
(ii) $SECT(K) \in C^{0,\frac{1}{2}}(\mathbb{S}^{d-1};\mathcal{H})$, where $\mathbb{S}^{d-1}$ is equipped with the geodesic distance $d_{\mathbb{S}^{d-1}}$.\\
(iii) The constant $\tilde{C}_T$ in Eq.~\eqref{eq: Sobolev embedding from Morrey} provides the inequality
\begin{align}\label{eq: bivariate Holder continuity}
\begin{aligned}
& \Big\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_2; t_2)\Big\vert\\
& \le \tilde{C}_T \left\{\Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})}\cdot \sqrt{\vert t_1-t_2\vert} + C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 } \right\},
\end{aligned}
\end{align}
for all $\nu_1, \nu_2\in\mathbb{S}^{d-1}$ and $t_1, t_2\in[0,T]$, which implies that $(\nu,t)\mapsto SECT(K)(\nu;t)$, as a function on $\mathbb{S}^{d-1}\times[0,T]$, belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}\times[0,T];\mathbb{R})$.
\end{theorem}
\noindent Part (i) of Theorem \ref{lemma: The continuity lemma} is a counterpart of Lemma 2.1 in \cite{turner2014persistent} and is derived using ``bottleneck stability'' \citep{cohen2007stability}. Part (ii) of Theorem \ref{lemma: The continuity lemma} guarantees that $SECT(\mathscr{S}_{R,d}^M) \subset C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}; \mathcal{H}) \subset C(\mathbb{S}^{d-1}; \mathcal{H}) \subset \mathcal{H}^{\mathbb{S}^{d-1}}$. As a result, Eq.~\eqref{Eq: definition of SECT} defines the following map
\begin{align}\label{Eq: final def of SECT}
SECT: \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1}; \mathcal{H}),\ \ \ K \mapsto \left\{SECT(K)(\nu)\right\}_{\nu\in\mathbb{S}^{d-1}} \overset{\operatorname{def}}{=} SECT(K).
\end{align}
Part (iii) of Theorem \ref{lemma: The continuity lemma} will be implemented in the mathematical foundation of the hypothesis testing in Section \ref{section: hypothesis testing} (specifically, see Theorems \ref{thm: lemma for KL expansions} and \ref{thm: KL expansions of SECT}).
Corollary 6 of \cite{ghrist2018persistent} implies the following result, which shows that the map in Eq.~\eqref{Eq: final def of SECT} preserves all the information of shapes $K\in \mathscr{S}_{R,d}^M$.
\begin{theorem}\label{thm: invertibility}
The map $SECT$ defined in Eq.~\eqref{Eq: final def of SECT} is invertible for all dimensions $d$.
\end{theorem}
\noindent Together with Eq.~\eqref{Eq: final def of SECT}, Theorem \ref{thm: invertibility} enables us to view each shape $K\in\mathscr{S}_{R,d}^M$ as $SECT(K)$ which is an element of $C(\mathbb{S}^{d-1}; \mathcal{H})$. This viewpoint will help us characterize the randomness of shapes $K \in \mathscr{S}_{R,d}^M$ using probability measures on the separable Banach space $C(\mathbb{S}^{d-1}; \mathcal{H})$.
If the shapes of interest are represented as polygon meshes, their ECC defined in Eq.~\eqref{Eq: first def of Euler characteristic curve} can easily be computed (e.g., \cite{wang2021statistical}). The SECT is then derived directly from the computed ECC (see Eq.~\eqref{Eq: definition of SECT}). The mandibular molars in Figure \ref{fig: Teeth} are represented as polygon meshes, and we will use the mesh structure to compute the SECT of the molars in Section \ref{section: Applications}. In Appendix \ref{section: Computation of SECT Using the Čech Complexes}, we briefly introduce an approach for approximating the ECC of shapes in $\mathbb{R}^d$ in scenarios where the polygon mesh representation of shapes is not available, which in turn provides a corresponding approximation to SECT. This approach applies Čech complexes and the nerve theorem (see Chapter III of \cite{edelsbrunner2010computational}), which are implemented in proof-of-concept examples and simulations in this paper (Sections \ref{Proof-of-Concept Simulation Examples I: Deterministic Shapes}, \ref{section: Proof-of-Concept Simulation Examples II: Random Shapes}, and \ref{section: Simulation experiments}). In addition, an approach was studied in \cite{niyogi2008finding} for finding the Betti numbers of submanifolds of Euclidean spaces from random point clouds. For the case where a point cloud is drawn near a submanifold, this approach can be applied to estimate the SECT of the submanifold. The application of \cite{niyogi2008finding} to SECT is left for future research.
\subsection{Proof-of-Concept Simulation Examples I: Deterministic Shapes}\label{Proof-of-Concept Simulation Examples I: Deterministic Shapes}
In this subsection, we compute the SECT of two simulated shapes $K^{(1)}$ and $K^{(2)}$ of dimension $d=2$. These shapes are defined as the following and presented in Figures \ref{fig: SECT visualizations, deterministic}(a) and (g), respectively:
\begin{align}\label{eq: example shapes K1 and K2}
\begin{aligned}
& K^{(j)} = \left\{x\in\mathbb{R}^2 \,\Bigg\vert \, \inf_{y\in S^{(j)}}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where }j\in\{1,2\}, \\
& S^{(1)} = \left\{\left(\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{\pi}{5}\le t\le\frac{9\pi}{5}\right\}\bigcup\left\{\left(-\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},\\
& S^{(2)} = \left\{\left(\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, 0\le t\le 2\pi\right\}\bigcup\left\{\left(-\frac{2}{5}+\cos t, \sin t\right) \,\Bigg\vert \, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\}.
\end{aligned}
\end{align}
We compute $SECT(K^{(j)})(\nu;t)$ across directions $\nu\in\mathbb{S}^{1}$ and sublevel sets $t\in[0,T]$ with $T=3$. For the following visualization, we identify $\nu\in\mathbb{S}^1$ through the parametrization $\nu=(\cos\vartheta, \sin\vartheta)$ with $\vartheta\in[0,2\pi)$.
The surfaces of the bivariate maps $(\vartheta, t)\mapsto SECT(K^{(j)})(\nu;t)$, for $j\in\{1,2\}$, are presented in Figures \ref{fig: SECT visualizations, deterministic}(b), (c), (h), and (i). The curves of the univariate maps $t\mapsto SECT(K^{(j)})\left((1,0)^\intercal;t\right)$, for $j\in\{1,2\}$, are presented by the black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(d) and (j); while the curves of the univariate maps $t\mapsto SECT(K^{(j)})\left((0,1)^\intercal;t\right)$, for $j\in\{1,2\}$, are presented by black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(e) and (k). Lastly, the curves of the univariate maps $\vartheta\mapsto SECT(K^{(j)})\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, for $j\in\{1,2\}$, are presented by the black solid lines in Figures \ref{fig: SECT visualizations, deterministic}(f) and (l).
\begin{figure}
\caption{Visualizations of $SECT(K^{(j)})$ for the shapes $K^{(j)}$, $j\in\{1,2\}$, defined in Eq.~\eqref{eq: example shapes K1 and K2}.}
\label{fig: SECT visualizations, deterministic}
\end{figure}
These figures illustrate the continuity of $(\nu,t)\mapsto SECT(K^{(j)})(\nu;t)$ stated in Theorem \ref{lemma: The continuity lemma} (iii). Specifically, the curves and surfaces in these figures look smoother than the paths of a Brownian motion, while they are not differentiable everywhere. With probability one, paths of a Brownian motion are not locally $C^{0,\frac{1}{2}}$-continuous (see Remark 22.4 of \cite{klenke2013probability}). Hence, based on Figure \ref{fig: SECT visualizations, deterministic}, the regularity of $(\nu,t)\mapsto SECT(K^{(j)})(\nu;t)$ is likely to be better than that of Brownian motion paths, but worse than that of continuously differentiable functions. Therefore, Figure \ref{fig: SECT visualizations, deterministic} supports the $C^{0,\frac{1}{2}}$-continuity stated in Theorem \ref{lemma: The continuity lemma}.
The invertibility in Theorem \ref{thm: invertibility} indicates that all information of $K^{(1)}$ and $K^{(2)}$ is stored in the surfaces presented by Figures \ref{fig: SECT visualizations, deterministic}(b), (c), (h), and (i). The red dashed curves in Figures \ref{fig: SECT visualizations, deterministic}(j), (k), and (l) are the counterparts of $K^{(1)}$ (see the curves in Figures \ref{fig: SECT visualizations, deterministic}(d), (e), and (f)). The discrepancy between the solid black and dashed red curves illustrates the ability of the SECT to distinguish shapes, which motivates us to develop the hypothesis testing approach in Section \ref{section: hypothesis testing}.
\section{Random Shapes and Probabilistic Distributions over the SECT}\label{section: distributions of Gaussian bridge}
From this section onward, we investigate the randomness of shapes.
\subsection{Formulation of the SECT as a Random Variable}\label{Distributions of SECT in Each Direction}
Suppose $\mathscr{S}_{R,d}^M$ is equipped with a $\sigma$-algebra $\mathscr{F}$ and a distribution of shapes $K$ across $\mathscr{S}_{R,d}^M$ can be represented by a probability measure $\mathbb{P}=\mathbb{P}(dK)$ on $\mathscr{F}$. Then, $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$ is a probability space. In this subsection, we construct a $\sigma$-algebra $\mathscr{F}$ such that SECT is a random variable taking values in a separable Banach space (hence, a Polish space). For each fixed direction $\nu\in\mathbb{S}^{d-1}$ and level $t\in[0,T]$, the integer-valued map $\chi^\nu_t: K \mapsto \chi^\nu_t(K)$ is defined on $\mathscr{S}_{R,d}^M$. Suppose the following assumption on the measurability of $\chi_t^\nu$ holds.
\begin{assumption}\label{assumption: the measurability of ECC}
For each fixed $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$, the map $\chi^\nu_t:\, (\mathscr{S}^M_{R,d},\, \mathscr{F} ) \rightarrow \left(\mathbb{R}, \, \mathscr{B}(\mathbb{R}) \right)$ is a real-valued random variable where $(\chi^{\nu}_t)^{-1}(B) = \{K\in\mathscr{S}_{R,d}^M \, \vert \,
\chi^{\nu}_t(K)\in B\}\in\mathscr{F}$ for all $B\in\mathscr{B}(\mathbb{R})$.
\end{assumption}
\noindent A $\sigma$-algebra $\mathscr{F}$ satisfying Assumption \ref{assumption: the measurability of ECC} exists --- we construct a metric on $\mathscr{S}_{R,d}^M$ and show that the Borel algebra induced by this metric satisfies Assumption \ref{assumption: the measurability of ECC}. We define the semi-distance $\rho$ by
\begin{align}\label{Eq: distance between shapes}
\rho(K_1, K_2) \overset{\operatorname{def}}{=} \sup_{\nu\in\mathbb{S}^{d-1}} \left\{ \left( \int_0^T \Big\vert\chi_\tau^{\nu}(K_1)-\chi_\tau^{\nu}(K_2)\Big\vert^2 d\tau \right)^{1/2} \right\},\ \ \mbox{ for all }K_1, K_2 \in \mathscr{S}_{R,d}^M.
\end{align}
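In practice, the supremum over $\mathbb{S}^{d-1}$ and the integral over $[0,T]$ can only be approximated from finitely many directions and levels. The short sketch below (reusing the hypothetical \texttt{ecc\_curve} helper from the earlier sketch, with the supremum replaced by a maximum over sampled directions and the integral by a left Riemann sum) is an illustration of Eq.~\eqref{Eq: distance between shapes}, not an exact evaluation.
\begin{verbatim}
import numpy as np

def rho_hat(mesh1, mesh2, directions, R, t_grid):
    """Discrete surrogate of rho(K_1, K_2): the maximum, over sampled directions,
    of the L^2([0,T]) distance between the two ECCs."""
    dt = t_grid[1] - t_grid[0]
    dists = [
        np.sqrt(np.sum((ecc_curve(*mesh1, nu, R, t_grid)
                        - ecc_curve(*mesh2, nu, R, t_grid)) ** 2) * dt)
        for nu in directions
    ]
    return max(dists)
\end{verbatim}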
The following theorem shows that the $\rho$ defined in Eq.~\eqref{Eq: distance between shapes} is indeed a metric on $\mathscr{S}_{R,d}^M$.
\begin{theorem}\label{Thm: metric theorem for shapes}
(i) The map $\rho$ defined in Eq.~\eqref{Eq: distance between shapes} is a metric on $\mathscr{S}_{R,d}^M$. \\
(ii) Assumption \ref{assumption: the measurability of ECC} is satisfied if $\mathscr{F}=\mathscr{B}(\rho)$.
\end{theorem}
Under Assumption \ref{assumption: the measurability of ECC}, the ECC $\{\chi^\nu_t\}_{t\in[0,T]}$, for each $\nu\in\mathbb{S}^{d-1}$, is a continuous-time stochastic process with RCLL paths defined on probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$. Since each path $\{\chi^\nu_t(K)\}_{t\in[0,T]}$ is a step function with finitely many discontinuities, integrals $\int_0^t \chi_\tau^\nu(K) d\tau$ for $t\in[0,T]$ are Riemann integrals of the form
\begin{align}\label{Eq: Riemann sum}
\int_0^t \chi_\tau^\nu(K) d\tau = \lim_{n\rightarrow\infty} \left\{\frac{t}{n} \sum_{l=1}^n \chi^\nu_{\frac{tl}{n}}(K)\right\}, \ \mbox{ for all }K\in\mathscr{S}_{R,d}^M.
\end{align}
Because each $\chi^\nu_{\frac{tl}{n}}$ in Eq.~\eqref{Eq: Riemann sum} is a random variable under Assumption \ref{assumption: the measurability of ECC}, the limit in Eq.~\eqref{Eq: Riemann sum} for each fixed $t\in[0,T]$ is a random variable as well. Therefore, for each fixed $\nu\in\mathbb{S}^{d-1}$, $\{\int_0^t \chi_\tau^\nu d\tau\}_{t\in[0,T]}$ with $\int_0^t \chi_\tau^\nu d\tau: K \mapsto \int_0^t \chi_\tau^\nu(K) d\tau$ is a real-valued stochastic process with continuous paths. Then, under Assumption \ref{assumption: the measurability of ECC}, Eq.~\eqref{Eq: definition of SECT} defines the following stochastic process on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$
\begin{align}\label{Eq: def SECTs as stochastic processes}
\begin{aligned}
& SECT(\cdot)(\nu) \overset{\operatorname{def}}{=} \left\{\int_0^t\chi_\tau^\nu d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu d\tau \overset{\operatorname{def}}{=} SECT(\cdot)(\nu;t) \right\}_{t\in[0,T]}.
\end{aligned}
\end{align}
Theorems \ref{thm: Sobolev function paths} and \ref{lemma: The continuity lemma}, together with \cite{berlinet2011reproducing} (Corollary 13 in Chapter 4 therein), directly imply the following theorem; hence, the corresponding proof is omitted.
\begin{theorem}\label{thm: SECT distribution theorem in each direction}
(i) For each fixed direction $\nu\in\mathbb{S}^{d-1}$, under Assumption \ref{assumption: the measurability of ECC}, $SECT(\cdot)(\nu)$ is a real-valued stochastic process with sample paths in $\mathcal{H}$. Equivalently, $SECT(\cdot)(\nu)$ is a random variable taking values in $(\mathcal{H}, \mathscr{B}(\mathcal{H}))$. Additionally, $SECT(\cdot)(\nu;0)=SECT(\cdot)(\nu;T)=0$. \\
(ii) The following map is a random variable taking values in the separable Banach space $C(\mathbb{S}^{d-1},\mathcal{H})$
\begin{align*}
SECT:\ \ \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1},\mathcal{H}),\ \ K\mapsto \left\{SECT(K)(\nu)\right\}_{\nu\in\mathbb{S}^{d-1}}.
\end{align*}
\end{theorem}
Here, we adopt $C(\mathbb{S}^{d-1}; \mathcal{H})$ instead of the $\frac{1}{2}$-Hölder space $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}; \mathcal{H})$ in Theorem \ref{thm: SECT distribution theorem in each direction} because $\frac{1}{2}$-Hölder spaces are generally not separable (e.g., see Remark 3.19 of \cite{hairer2009introduction}). In probability theory, the separability condition is crucial for probability measures on Banach spaces to behave in a non-pathological way (e.g., see Section 3 of \cite{hairer2009introduction}).
\subsection{Mean and Covariance of the SECT}\label{section: distributions of H-valued GP}
To define the mean and covariance for the SECT, we need the following theorem on the second moments.
\begin{theorem}\label{assumption: existence of second moments}
For any $\nu\in\mathbb{S}^{d-1}$, we have $\mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}=\int_{\mathscr{S}_{R,d}^M}\Vert SECT(K)(\nu)\Vert^2_{\mathcal{H}}\mathbb{P}(dK)<\infty$.
\end{theorem}
\begin{proof}
Theorem \ref{thm: boundedness topological invariants theorem} (i), Theorem \ref{thm: Sobolev function paths} (i), and the definition of $\Vert \cdot\Vert_\mathcal{H}$ imply
\begin{align}\label{eq: CSH upper bound}
\sup_{\nu\in\mathbb{S}^{d-1}} \left\{\Vert SECT(K)(\nu)\Vert^2_{\mathcal{H}} \right\}= \sup_{\nu\in\mathbb{S}^{d-1}} \left\{\int_0^T \left\vert \chi_t^\nu(K) - \frac{1}{T}\int_0^T \chi_\tau^\nu(K)d\tau \right\vert^2 dt\right\} \le 4M^2T.
\end{align}
Therefore, $\mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}\le 4M^2T<\infty$.
\end{proof}
Together, Eq.~\eqref{eq: Sobolev embedding from Morrey} and Theorem \ref{assumption: existence of second moments} imply that
\begin{align}\label{eq: finite second moments for all v and t}
\mathbb{E}\vert SECT(\cdot)(\nu;t)\vert^2 \le \tilde{C}^2_T \cdot \mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}<\infty,\ \ \mbox{ for all }\nu\mbox{ and }t.
\end{align}
Therefore, we may define the mean function $\{m_\nu\}_{\nu\in\mathbb{S}^{d-1}}$ as follows
\begin{align}\label{Eq: mean function of our Gaussian bridge}
\begin{aligned}
m_\nu(t) & = \mathbb{E}\left\{ SECT(\cdot)(\nu;t) \right\} =\int_{\mathscr{S}_{R,d}^M}\left\{ \int_0^t\chi_\tau^\nu(K) d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu(K) d\tau\right\} \mathbb{P}(dK),\ \ t\in[0,T].
\end{aligned}
\end{align}
The next theorem will play an important role in the Karhunen–Loève expansion and the hypothesis testing framework that we will detail in Section \ref{section: hypothesis testing}.
\begin{theorem}\label{thm: mean is in H}
(i) For each direction $\nu\in\mathbb{S}^{d-1}$, the function $m_\nu \overset{\operatorname{def}}{=} \{m_\nu(t)\}_{t\in[0,T]}$ of $t$ belongs to $\mathcal{H}$. \\ (ii) The map $(K, t)\mapsto SECT(K)(\nu;t)$ belongs to $L^2(\mathscr{S}_{R,d}^M\times[0,T],\, \mathbb{P}(dK)\otimes dt)$, where $\mathbb{P}(dK)\otimes dt$ denotes the product measure generated by $\mathbb{P}(dK)$ and the Lebesgue measure $dt$. \\
(iii) The map $\nu\mapsto m_\nu$ belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1};\mathcal{H})$; hence, this map belongs to $C(\mathbb{S}^{d-1};\mathcal{H})$.
\end{theorem}
Because of the finite second moments in Eq.~\eqref{eq: finite second moments for all v and t}, the following covariance function is well-defined
\begin{align}\label{eq: def of covariance functions}
\Xi_\nu(s,t) = \operatorname{Cov}\Big(SECT(\cdot)(\nu;s), SECT(\cdot)(\nu;t)\Big),\ \ \mbox{ for }s,t\in[0,T].
\end{align}
The following theorem on $\Xi_\nu(s,t)$ justifies our implementation of the Karhunen–Loève expansion in Section \ref{section: hypothesis testing}.
\begin{theorem}\label{thm: lemma for KL expansions}
For each fixed direction $\nu\in\mathbb{S}^{d-1}$, we have: (i) $SECT(\cdot)(\nu)$ is mean-square continuous (i.e., $\lim_{\epsilon\rightarrow0}\mathbb{E}\vert SECT(\cdot)(\nu;t+\epsilon)-SECT(\cdot)(\nu;t)\vert^2=0$); and
(ii) $(s,t)\mapsto\mathbb Xi_\nu(s,t)$ is continuous on $[0,T]^2$.
\end{theorem}
\noindent The first part of Theorem \ref{thm: lemma for KL expansions} directly follows from Eq.~\eqref{eq: bivariate Holder continuity} and Eq.~\eqref{eq: CSH upper bound}, while the second half follows from Lemma 4.2 of \cite{alexanderian2015brief}; hence, the proof of Theorem \ref{thm: lemma for KL expansions} is omitted.
In applications, we cannot sample infinitely many directions $\nu\in\mathbb{S}^{d-1}$ and levels $t\in[0,T]$. For each given shape $K$, we can only compute $SECT(K)(\nu;t)$ for finitely many directions $\{\nu_1, \cdots, \nu_\Gamma\}\subset\mathbb{S}^{d-1}$ and levels $\{t_1, \cdots, t_\Delta\}\subset[0,T]$ with $t_1<t_2<\cdots<t_\Delta$. Hence, for a collection of shapes $\{K_i\}_{i=1}^n \subset \mathscr{S}_{R,d}^M$ sampled from $\mathbb{P}$, the data we obtain are represented by the following 3-dimensional array
\begin{align}\label{Eq: data matrix}
\Big\{\ SECT(K_i)(\nu_p; t_q)\ \Big\vert\ i=1,\cdots,n,\ p=1,\cdots, \Gamma, \mbox{ and }q=1, \cdots, \Delta\ \Big\}.
\end{align}
To preserve as much information about a shape $K$ as possible, we need to set the numbers of directions and levels (i.e., $\Gamma$ and $\Delta$ in Eq.~\eqref{Eq: data matrix}) at sufficiently large values \citep{turner2014persistent,crawford2020predicting}. Detailed simulation studies on the choices of $\Gamma$ and $\Delta$ are available in \cite{wang2021statistical}.
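In code, such data are naturally stored as an array with one axis per shape, direction, and level. A minimal sketch, assuming the hypothetical helpers \texttt{ecc\_curve} and \texttt{sect\_from\_ecc} from Section \ref{The Definition of Smooth Euler Characteristic Transform} and meshes given as $(V,E,F)$ triples, is the following.
\begin{verbatim}
import numpy as np

def sect_array(meshes, directions, R, T, Delta):
    """Assemble the array in Eq. (data matrix): entry [i, p, q] = SECT(K_i)(nu_p; t_q).

    Here t_grid is a uniform grid on [0, T]; the paper samples t_q = T q / Delta,
    and for this sketch either convention serves the same purpose.
    """
    t_grid = np.linspace(0.0, T, Delta)
    out = np.empty((len(meshes), len(directions), Delta))
    for i, mesh in enumerate(meshes):
        for p, nu in enumerate(directions):
            chi = ecc_curve(*mesh, nu, R, t_grid)
            out[i, p, :] = sect_from_ecc(chi, t_grid, T)
    return out
\end{verbatim}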
\subsection{Proof-of-Concept Simulation Examples II: Random Shapes}\label{section: Proof-of-Concept Simulation Examples II: Random Shapes}
In this subsection, we compute the SECT for a collection of random shapes $\{K_i\}_{i=1}^n$ of dimension $d=2$. These shapes are randomly generated as follows
\begin{align}\label{eq: randon shapes under null}
K_i = & \left\{x\in\mathbb{R}^2 \, \Bigg\vert\, \inf_{y\in S_i}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where } \\
\notag S_i = & \left\{\left(\frac{2}{5}+a_{1,i}\times\cos t, b_{1,i}\times\sin t\right) \, \Bigg\vert\, \frac{\pi}{5}\le t\le\frac{9\pi}{5}\right\} \bigcup\left\{\left(-\frac{2}{5}+a_{2,i}\times\cos t, b_{2,i}\times\sin t\right) \, \Bigg\vert\, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},
\end{align}
and the coefficients $\{a_{1,i}, a_{2,i}, b_{1,i}, b_{2,i}\}_{i=1}^n \overset{i.i.d.}{\sim} N(1, 0.05^2)$ are drawn independently from a normal distribution. One element of the shape collection $\{K_i\}_{i=1}^n$ is presented in Figure \ref{fig: SECT visualizations, random}(a). The underlying distribution on $\mathscr{S}_{R,d}^M$ generating $\{K_i\}_{i=1}^n$ is denoted by $\mathbb{P}$, and the expectation associated with $\mathbb{P}$ is denoted by $\mathbb{E}$. We estimate the expected value $\mathbb{E}\{SECT(\cdot)(\nu;t)\}$ by the sample average $\frac{1}{n}\sum_{i=1}^n SECT(K_i)(\nu;t)$ with $n=100$. We identify each direction $\nu\in\mathbb{S}^1$ through the parametrization $\nu=(\cos\vartheta, \sin\vartheta)$ with some $\vartheta\in[0,2\pi)$ as we did in Section \ref{Proof-of-Concept Simulation Examples I: Deterministic Shapes}.
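As an illustration of how such a random collection might be generated, the sketch below (helper name ours) draws the coefficients from $N(1,0.05^2)$ and returns dense point samples of the boundary curves $S_i$; the shape $K_i$ is the $\frac{1}{5}$-neighborhood of these points, whose ECC could then be approximated, e.g., via the Čech-complex approach of Appendix \ref{section: Computation of SECT Using the Čech Complexes}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_random_shape_points(rng, n_pts=400):
    """Point samples of the boundary curve S_i in Eq. (random shapes under null);
    the shape K_i is the 1/5-neighborhood of these points."""
    a1, a2, b1, b2 = rng.normal(loc=1.0, scale=0.05, size=4)
    t1 = np.linspace(np.pi / 5, 9 * np.pi / 5, n_pts)
    t2 = np.linspace(6 * np.pi / 5, 14 * np.pi / 5, n_pts)
    arc1 = np.column_stack([2 / 5 + a1 * np.cos(t1), b1 * np.sin(t1)])
    arc2 = np.column_stack([-2 / 5 + a2 * np.cos(t2), b2 * np.sin(t2)])
    return np.vstack([arc1, arc2])

boundary_samples = [sample_random_shape_points(rng) for _ in range(100)]   # n = 100 shapes
\end{verbatim}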
The surface of the map $(\vartheta,t)\mapsto \mathbb{E}\{SECT(\cdot)(\nu;t)\}$ is presented in Figures \ref{fig: SECT visualizations, random}(b) and (c). The black solid curves in Figure \ref{fig: SECT visualizations, random}(d) present the 100 sample paths $t\mapsto SECT(K_i)\left((1,0)^\intercal;t\right)$; the black solid curves in Figure \ref{fig: SECT visualizations, random}(e) present sample paths $t\mapsto SECT(K_i)\left((0,1)^\intercal;t\right)$; and the black solid curves in Figure \ref{fig: SECT visualizations, random}(f) present paths $\vartheta\mapsto SECT(K_i)\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, for $i\in\{1,\cdots,100\}$. The red solid curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) present mean curves $t\mapsto \mathbb{E}\{SECT(\cdot)\left((1,0)^\intercal;t\right)\}$, $t\mapsto \mathbb{E}\{SECT(\cdot)\left((0,1)^\intercal;t\right)\}$, and $\vartheta\mapsto \mathbb{E}\{SECT(\cdot)\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)\}$, respectively.
The smoothness of the red solid curves in Figures \ref{fig: SECT visualizations, random}(d) and (e) supports the regularity of $\{m_\nu(t)\}_{t\in[0,T]}$ in Theorem \ref{thm: mean is in H}. The finite variance of $SECT(\cdot)(\nu;t)$ for $\nu=(1,0)^\intercal, (0,1)^\intercal$ and $t=3/2$, visually presented in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f), supports Eq.~\eqref{eq: finite second moments for all v and t}.
In addition, the blue dashed curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) present curves $t\mapsto SECT(K^{(1)})\left((1,0)^\intercal;t\right)$, $t\mapsto SECT(K^{(1)})\left((0,1)^\intercal;t\right)$, and $\vartheta\mapsto SECT(K^{(1)})\left((\cos\vartheta, \sin\vartheta)^\intercal;\frac{3}{2}\right)$, respectively, where shape $K^{(1)}$ is defined in Eq.~\eqref{eq: example shapes K1 and K2}. Since $\mathbb{E}\{a_{1,i}\}=\mathbb{E}\{a_{2,i}\}=\mathbb{E}\{b_{1,i}\}=\mathbb{E}\{b_{2,i}\}=1$, the shape $K^{(1)}$ defined in Eq.~\eqref{eq: example shapes K1 and K2} can loosely be viewed as the ``mean shape'' of the random collection $\{K_i\}_{i=1}^n$. The similarity between the red solid curves and blue dashed curves in Figures \ref{fig: SECT visualizations, random}(d), (e), and (f) supports the ``mean shape'' role of $K^{(1)}$. The rigorous definition of a ``mean shape'' and its relationship to the mean function $\mathbb{E}\{SECT(\cdot)(\nu;t)\}$ is outside the scope of this work. A potential approach for defining mean shapes is through the following Fréchet mean form \citep{frechet1948elements}
\begin{align}\label{Frechet mean shape}
K_\oplus \overset{\operatorname{def}}{=} \argmin_{K\in\mathscr{S}_{R,d}^M} \mathbb{E}\left[\left\{\rho(\cdot, K)\right\}^2\right] = \argmin_{K\in\mathscr{S}_{R,d}^M} \left[ \int_{\mathscr{S}_{R,d}^M} \left\{\rho(K', K)\right\}^2 \mathbb{P}(dK') \right],
\end{align}
where $\rho$ can be either the metric on $\mathscr{S}_{R,d}^M$ defined in Eq.~\eqref{Eq: distance between shapes} or any other metrics generating $\sigma$-algebras satisfying Assumption \ref{assumption: the measurability of ECC}. The existence and uniqueness of the minimizer $K_\oplus$ in Eq.~\eqref{Frechet mean shape}, the relationship between $SECT(K_\oplus)$ and $\mathbb{E}\{SECT(\cdot)\}$, and the extension of Eq.~\eqref{Frechet mean shape} to Fréchet regression \citep{petersen2019frechet} for random shapes are left for future research. The study of the existence of $K_\oplus$ will be a counterpart to Section 4 in \cite{mileyko2011probability}.
In the scenarios where the SECT of shapes from distribution $\mathbb{P}$ are computed only in finitely many directions and at finitely many levels (see the end of Section \ref{section: distributions of H-valued GP}), the mean surface $(\vartheta, t) \mapsto \mathbb{E}\{SECT(\cdot)(\nu;t)\}$ in Figures \ref{fig: SECT visualizations, random}(b) and (c) can also be potentially estimated using manifold learning methods (e.g., \cite{yue2016parameterization}, \cite{dunson2021inferring}, and \cite{meng2021principal}).
\begin{figure}
\caption{Panel (a) presents a shape generated from the distribution in Eq.~\eqref{eq: randon shapes under null}.}
\label{fig: SECT visualizations, random}
\end{figure}
\section{Testing Hypotheses on Shapes}\label{section: hypothesis testing}
In this section, we apply the probabilistic formulation in Section \ref{section: distributions of Gaussian bridge} to testing hypotheses on shapes. Suppose $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$ are two distributions on the measurable space $(\mathscr{S}_{R,d}^M, \mathscr{F})$. Let $\mathbb{P}^{(1)}\otimes \mathbb{P}^{(2)}$ be the product probability measure defined on the product $\sigma$-algebra $\mathscr{F}\otimes\mathscr{F}$ and satisfying $\mathbb{P}^{(1)}\otimes \mathbb{P}^{(2)}(A\times B)=\mathbb{P}^{(1)}(A)\cdot \mathbb{P}^{(2)}(B)$ for all $A,B\in\mathscr{F}$ (see Definition 14.4 and Theorem 14.14 of \cite{klenke2013probability}).
Define functions $m_\nu^{(j)}(t) = \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t)\mathbb{P}^{(j)}(dK)$, for $j\in\{1,2\}$, as the mean functions corresponding to $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$. We are interested in testing the following hypotheses
\begin{align}\label{eq: the main hypotheses}
\begin{aligned}
& H_0: m_\nu^{(1)}(t)=m_\nu^{(2)}(t)\mbox{ for all }(\nu,t)\in\mathbb{S}^{d-1}\times[0,T] \ \ \ vs.\ \ \ H_1: m_\nu^{(1)}(t)\ne m_\nu^{(2)}(t)\mbox{ for some }(\nu,t).
\end{aligned}
\end{align}
For example, suppose we are interested in the distributions of mandibular molars from two genera of primates (e.g., the mandibular molars in Figure \ref{fig: Teeth}). Testing the hypotheses in Eq.~\eqref{eq: the main hypotheses} helps us distinguish the two genera of primates from a geometric morphometrics perspective.
The null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses} is equivalent to $\sup_{\nu\in\mathbb{S}^{d-1}}\{\Vert m_{\nu}^{(1)}-m_{\nu}^{(2)} \Vert_{\mathcal{B}}\}=0$. Part (iii) of Theorem \ref{thm: mean is in H}, together with Eq.~\eqref{eq: Sobolev embedding from Morrey}, indicates that the following maximizer exists
\begin{align}\label{eq: def of distinguishing direction}
\nu^* \overset{\operatorname{def}}{=} \argmax_{\nu\in\mathbb{S}^{d-1}} \left\{\Vert m_{\nu}^{(1)}-m_{\nu}^{(2)} \Vert_{\mathcal{B}} \right\}.
\end{align}
The maximizer defined in Eq.~\eqref{eq: def of distinguishing direction} may not be unique. If it is not unique, we may arbitrarily choose one of the maximizers. Note that this choice does not influence our framework. The null hypothesis $H_0$ is equivalent to $\Vert m_{\nu^*}^{(1)}-m_{\nu^*}^{(2)} \Vert_{\mathcal{B}}=0$. The $\nu^*$ defined in Eq.~\eqref{eq: def of distinguishing direction} is called a \textit{distinguishing direction}. Hence, we investigate the distributions of the real-valued stochastic process $SECT(\cdot)(\nu^*)$ corresponding to $\mathbb{P}^{(1)}$ and $\mathbb{P}^{(2)}$, respectively.
\subsection{Karhunen–Loève Expansion}
Let $\Xi_{\nu^*}^{(j)}(s,t)$ be the covariance function of the process $SECT(\cdot)(\nu^*)$ corresponding to $\mathbb{P}^{(j)}$, for $j\in\{1,2\}$ (see Eq.~\eqref{eq: def of covariance functions}). Hereafter, we assume the following.
\begin{assumption}\label{assumption: equal covariance aasumption}
$\Xi_{\nu^*}^{(1)}=\Xi_{\nu^*}^{(2)}$, where $\nu^*$ is a distinguishing direction defined in Eq.~\eqref{eq: def of distinguishing direction}.
\end{assumption}
Under Assumption \ref{assumption: equal covariance aasumption}, we derive a $\chi^2$-test (see Algorithm \ref{algorithm: testing hypotheses on mean functions}) for the hypotheses in Eq.~\eqref{eq: the main hypotheses}. In Section \ref{The Numerical foundation for Hypothesis Testing}, we provide an approach for evaluating whether Assumption \ref{assumption: equal covariance aasumption} holds (see Eq.~\eqref{eq: norm-ratios} therein). If Assumption \ref{assumption: equal covariance aasumption} is violated, our proposed permutation strategy presented in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} can be used. The permutation-based strategy in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} is robust to the violation of Assumption \ref{assumption: equal covariance aasumption}, as shown empirically by the simulation studies in Section \ref{section: Simulation experiments}.
We first derive the $\chi^2$-test under Assumption \ref{assumption: equal covariance aasumption}. Denote $\Xi_{\nu^*}^{(1)}=\Xi_{\nu^*}^{(2)} \overset{\operatorname{def}}{=} \Xi_{\nu^*}$. The second half of Theorem \ref{thm: lemma for KL expansions} implies $\Xi_{\nu^*}(\cdot, \cdot)\in L^2([0,T]^2)$. Hence, we may define the integral operator
\begin{align}\label{eq: def of the L2 compact operator}
L^2([0,T])\rightarrow L^2([0,T]),\ \ \ \ f\mapsto \int_0^T f(s)\Xi_{\nu^*}(s,\cdot)ds.
\end{align}
Theorems VI.22 (e) and VI.23 of \cite{reed2012methods} indicate that the integral operator defined in Eq.~\eqref{eq: def of the L2 compact operator} is compact. It is straightforward that this integral operator is also self-adjoint. Furthermore, the Hilbert-Schmidt theorem (Theorem VI.16 of \cite{reed2012methods}) implies that this operator has countably many eigenfunctions $\{\phi_l\}_{l=1}^\infty$ and eigenvalues $\{\lambda_l\}_{l=1}^\infty$ with $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_l\ge\cdots\ge0$, and $\{\phi_l\}_{l=1}^\infty$ form an orthonormal basis of $L^2([0,T])$. Then, Theorems \ref{thm: mean is in H}, \ref{thm: lemma for KL expansions}, and Corollary 5.5 of \cite{alexanderian2015brief} imply the following expansions of the SECT in the distinguishing direction $\nu^*$.
\begin{theorem}\label{thm: KL expansions of SECT}
(i) (Karhunen–Loève expansions) For each fixed $j\in\{1,2\}$, we have
\begin{align*}
\begin{aligned}
& \lim_{L\rightarrow\infty} \left\{\sup_{t\in[0,T]} \int_{\mathscr{S}_{R,d}^M} \left\vert SECT(K)(\nu^*;t) - m^{(j)}_{\nu^*}(t) - \sum_{l=1}^L \sqrt{\lambda_l} \cdot Z_{l}(K) \cdot \phi_l(t)\right\vert^2 \mathbb{P}^{(j)}(dK) \right\} =0 \\
& \mbox{ where } Z_{l}(K) = \frac{1}{\sqrt{\lambda_l}}\int_0^T \left\{SECT(K)(\nu^*;t)-m_{\nu^*}^{(j)}(t) \right\} \phi_l(t)dt, \ \ \mbox{ for all }l=1,2,\ldots.
\end{aligned}
\end{align*}
For each fixed $j\in\{1,2\}$, random variables $\{Z_{l}\}_{l=1}^\infty$ are defined on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P}^{(j)})$, are mutually uncorrelated, and have mean 0 and variance 1.\\
(ii) There exists $\mathcal{N}\in\mathscr{F}\otimes \mathscr{F}$ such that $\mathbb{P}^{(1)}\otimes \mathbb{P}^{(2)}(\mathcal{N})=0$ and
\begin{align}\label{eq: KL expansions of SECT}
\begin{aligned}
& \frac{1}{\sqrt{2 \lambda_l}} \int_0^T \left\{ SECT(K^{(1)})(\nu^*;t)-SECT(K^{(2)})(\nu^*;t)\right\} \phi_l(t)dt= \theta_l + \left( \frac{Z_{l}(K^{(1)})-Z_{l}(K^{(2)})}{\sqrt{2}} \right),\\
& \mbox{where }\theta_l \overset{\operatorname{def}}{=} \frac{1}{\sqrt{2 \lambda_l}} \int_0^T \left\{ m_{\nu^*}^{(1)}(t) - m_{\nu^*}^{(2)}(t) \right\} \phi_l(t) dt
\end{aligned}
\end{align}
for any $(K^{(1)}, K^{(2)})\notin \mathcal{N}$. The null set $\mathcal{N}$ is allowed to be empty.
\end{theorem}
\subsection{The Theoretical Foundation for Hypothesis Testing}\label{The Theoretical foundation for Hypothesis Testing}
Suppose we have two independent collections of random samples $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(j)}$, for $j\in\{1,2\}$; equivalently, $\{(K_i^{(1)}, K_i^{(2)})\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(1)}\otimes \mathbb{P}^{(2)}$. In practice, the shapes of interest are usually preprocessed (e.g., pre-aligned, scaled, and centered at the origin) to implicitly create correspondence between them. For example, the mandibular molars in Figure \ref{fig: Teeth} have been preprocessed using an ECT-based alignment approach (see Section 4 of the Supplementary Material for \cite{wang2021statistical}). Alternatively, we may also apply the \texttt{auto3dgm} software to the molars.
In this subsection, we provide the theoretical foundation of using $\{(K_i^{(1)}, K_i^{(2)})\}_{i=1}^n$ to test the hypotheses in Eq.~\eqref{eq: the main hypotheses}. Without loss of generality, we assume $(K_i^{(1)}, K_i^{(2)})\notin \mathcal{N}$, for all $i=1,2,\ldots,n$, where $\mathcal{N}$ is the ``null set" in Theorem \ref{thm: KL expansions of SECT} (ii). Then, we have
\begin{align}\label{eq: def of the xi statistic}
\begin{aligned}
\xi_{l,i} &\overset{\operatorname{def}}{=} \frac{1}{\sqrt{2 \lambda_l}} \int_0^T \left\{ SECT(K_i^{(1)})(\nu^*;t)-SECT(K_i^{(2)})(\nu^*;t)\right\} \phi_l(t)dt \\
&= \theta_l + \left( \frac{Z_{l}(K_i^{(1)})-Z_{l}(K_i^{(2)})}{\sqrt{2}} \right),
\end{aligned}
\end{align}
where $\theta_l$ is defined in Eq.~\eqref{eq: KL expansions of SECT}. Theorem \ref{thm: KL expansions of SECT} implies that, for each fixed $l$, the random variables $\{\xi_{l,i}\}_{i=1}^n$ are i.i.d. across $i=1, 2, \ldots,n$ with mean $\theta_l$ and variance 1. In addition, for each fixed $i$, random variables $\{\xi_{l,i}\}_{l=1}^\infty$ are mutually uncorrelated across $l$. The following theorem transforms the null $H_0$ in Eq.~\eqref{eq: the main hypotheses} using the means $\{\theta_l\}_{l=1}^\infty$.
\begin{theorem}
The null $H_0$ in Eq.~\eqref{eq: the main hypotheses} is equivalent to $\theta_l=0$ for all $l=1, 2, 3, \cdots$.
\end{theorem}
\begin{proof}
We have shown that the null $H_0$ is equivalent to $m_{\nu^*}^{(1)}(t)=m_{\nu^*}^{(2)}(t)$ for all $t\in[0,T]$, where $\nu^*$ is defined in Eq.~\eqref{eq: def of distinguishing direction}. The null $H_0$ directly implies that $\theta_l=0$ for all $l$. On the other hand, if $\theta_l=0$ for all $l$, then the fact that $\{\phi_l\}_{l=1}^\infty$ is an orthonormal basis of $L^2([0,T])$ implies that $m_{\nu^*}^{(1)}=m_{\nu^*}^{(2)}$ almost everywhere with respect to the Lebesgue measure $dt$. Part (i) of Theorem \ref{thm: mean is in H} and the embedding $\mathcal{H}\subset\mathcal{B}$ in Eq.~\eqref{eq: H, Holder, B embeddings} imply that $m_{\nu^*}^{(1)}$ and $m_{\nu^*}^{(2)}$ are continuous functions. As a result, $m_{\nu^*}^{(1)}(t)=m_{\nu^*}^{(2)}(t)$ for all $t\in[0,T]$.
\end{proof}
When eigenvalues $\lambda_l$ in the denominator of Eq.~\eqref{eq: KL expansions of SECT} are close to zero for large $l$, the estimated $\theta_l$ corresponding to the small eigenvalues can be unstable. Specifically, even if $ m_{\nu^*}^{(1)}(t) \approx m_{\nu^*}^{(2)}(t)$ for all $t$, an extremely small $\lambda_l$ can move the corresponding $\theta_l$ far away from zero. Motivated by the standard approach in the principal component analysis literature, we focus on $\{\theta_l\}_{l=1}^L$ with
\begin{align}\label{eq: def of L}
L \overset{\operatorname{def}}{=} \min \left\{ l\in\mathbb{N}\, \bigg\vert\, \frac{\sum_{l'=1}^l \lambda_{l'}}{\sum_{l''=1}^\infty \lambda_{l''}} >0.95\right\},
\end{align}
where the threshold 0.95 can be replaced with any other value in $(0,1)$ --- we take 0.95 as an example. Hence, to test the null and alternative hypotheses in Eq.~\eqref{eq: the main hypotheses}, we test the following
\begin{align}\label{eq: approximate hypotheses}
\begin{aligned}
& \widehat{H}_0: \theta_1=\theta_2=\cdots=\theta_L=0,\ \ \ vs. \ \ \ \widehat{H}_1: \mbox{ there exists } l'\in\{1,\cdots,L\} \mbox{ such that }\theta_{l'}\ne0.
\end{aligned}
\end{align}
\noindent Under $\widehat{H}_0$ in Eq.~\eqref{eq: approximate hypotheses}, for each $l\in\{1,\cdots,L\}$, the central limit theorem indicates that $\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}$ is asymptotically distributed as $N(0,1)$ when $n$ is large. The mutual uncorrelatedness from Theorem \ref{thm: KL expansions of SECT} and the asymptotic normality of $\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}$ imply the asymptotic independence of $\{\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}\}_{l=1}^L$ across $l=1, 2, \ldots,L$. As a result, $\sum_{l=1}^L (\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i})^2$ is asymptotically $\chi_L^2$-distributed under the null $\widehat{H}_0$ in Eq.~\eqref{eq: approximate hypotheses}. Therefore, at the asymptotic confidence level $1-\alpha$, for $\alpha\in(0,1)$, we reject $\widehat{H}_0$ if
\begin{align}\label{eq: rejection region}
\sum_{l=1}^L \left(\frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_{l,i}\right)^2 > \chi^2_{L, 1-\alpha} = \mbox{ the $1-\alpha$ lower quantile of the $\chi^2_L$ distribution}.
\end{align}
\subsection{The Numerical foundation for Hypothesis Testing}\label{The Numerical foundation for Hypothesis Testing}
In Section \ref{The Theoretical foundation for Hypothesis Testing}, we proposed an approach to testing the hypotheses in Eq.~\eqref{eq: the main hypotheses} based on $\{\xi_{l,i}\}_{l}$ defined in Eq.~\eqref{eq: def of the xi statistic}. In applications, neither the mean function $m_{\nu}^{(j)}(t)$ nor the covariance function $\Xi_{\nu}(t',t)$ is known. Hence, the corresponding Karhunen–Loève expansions in Eq.~\eqref{eq: KL expansions of SECT} are not available, and the proposed hypothesis testing approach is not directly applicable. In this subsection, motivated by Chapter 4.3.2 of \cite{williams2006gaussian}, we propose a method for estimating the $\{\xi_{l,i}\}_{l}$ in Eq.~\eqref{eq: def of the xi statistic}.
For random shapes $\{K_i^{(j)}\}_{i=1}^n\overset{i.i.d.}{\sim}\mathbb{P}^{(j)}$, with $j\in\{1,2\}$, we compute their corresponding SECT in finitely many directions and sublevel sets as discussed in Section \ref{section: distributions of H-valued GP} to get $\{SECT(K_i^{(j)})(\nu_p;t_q)\,|\, p=1,\cdots, \Gamma \mbox{ and }q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$, where $t_q = \frac{T}{\Delta}q$. The SECT of all shapes $K_i^{(j)}$ in the two collections are computed in the same collection of directions $\{\nu_p\}_{p=1}^\Gamma$ and at the same collection of sublevel sets $\{t_q\}_{q=1}^\Delta$. We estimate the mean $\widehat{m}_{\nu_p}^{(j)}(t_q)$ of $m_{\nu_p}^{(j)}(t_q)$ at level $t_q$ by taking the sample mean of $\{SECT(K_i^{(j)})(\nu_p;t_q)\}_{i=1}^n$ across $i\in\{1,\cdots,n\}$. Then, we estimate the distinguishing direction $\nu^*$ by
\begin{align}\label{eq: estimated distinguishing direction}
\widehat{\nu}^* \overset{\operatorname{def}}{=} \argmax_{\nu_p} \left[ \max_{t_q} \left\{\left\vert \widehat{m}_{\nu_p}^{(1)}(t_q) - \widehat{m}_{\nu_p}^{(2)}(t_q) \right\vert \right\} \right].
\end{align}
To evaluate whether Assumption \ref{assumption: equal covariance aasumption} holds, we may estimate the covariance matrix $\left(\Xi_{\nu^*}^{(j)}(t_{q'},t_{q})\right)_{q,q'=1,\cdots,\Delta}$, for each fixed $j\in\{1,2\}$, by taking the sample covariance matrix $\boldsymbol{C}^{(j)}$ derived from the centered vectors $\left\{ \left( SECT(K_i^{(j)})(\widehat{\nu}^*;t_1) - \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_1), \cdots, SECT(K_i^{(j)})(\widehat{\nu}^*;t_\Delta)- \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_\Delta) \right)^\intercal \right\}_{i=1}^n$ across $i$ (for each fixed $j$).
We may evaluate Assumption \ref{assumption: equal covariance aasumption} using the following matrix norm ratios
\begin{align}\label{eq: norm-ratios}
\mathfrak{R}_F \overset{\operatorname{def}}{=} \frac{\Vert \boldsymbol{C}^{(1)}-\boldsymbol{C}^{(2)}\Vert_F}{\Vert \boldsymbol{C}^{(1)}\Vert_F}\ \ \ \mbox{ or }\ \ \ \mathfrak{R}_\infty \overset{\operatorname{def}}{=} \frac{\Vert \boldsymbol{C}^{(1)}-\boldsymbol{C}^{(2)}\Vert_\infty}{\Vert \boldsymbol{C}^{(1)}\Vert_\infty},
\end{align}
where $\Vert \cdot\Vert_F$ denotes the Frobenius norm, and $\Vert \cdot\Vert_\infty$ denotes the $\infty$-norm $\Vert (a_{q',q})\Vert_\infty=\max_{q',q}\vert a_{q',q}\vert$. We take the two norms as examples. The larger $\mathfrak{R}_F$ and $\mathfrak{R}_\infty$ are, the more likely Assumption \ref{assumption: equal covariance aasumption} is to be violated. In Section \ref{section: Simulation experiments}, we will use simulations to show that our proposed algorithms (especially Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}) are robust to the violation of Assumption \ref{assumption: equal covariance aasumption}. Therefore, assessing whether Assumption \ref{assumption: equal covariance aasumption} is violated is not the focus of this paper. Checking Assumption \ref{assumption: equal covariance aasumption} via $\mathfrak{R}_F$ and $\mathfrak{R}_\infty$ is left for future research.
Under Assumption \ref{assumption: equal covariance aasumption}, we estimate the covariance matrix $\left(\Xi_{\nu^*}(t_{q'},t_{q})\right)_{q,q'=1,\cdots,\Delta}$ using the sample covariance matrix $\pmb{C} = (\widehat{\Xi}_{\nu^*}(t_{q'},t_{q}))_{q,q'=1,\cdots,\Delta}$ derived from the following centered sample vectors across both $i\in\{1,\cdots,n\}$ and $j\in\{1,2\}$
\begin{align}\label{eq: centered sample vectors}
\left\{ \left( SECT(K_i^{(j)})(\widehat{\nu}^*;t_1) - \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_1), \cdots, SECT(K_i^{(j)})(\widehat{\nu}^*;t_\Delta)- \widehat{m}_{\widehat{\nu}^*}^{(j)}(t_\Delta) \right)^\intercal\,\Big\vert\, j=1,2 \right\}_{i=1}^n.
\end{align}
Note $\pmb{C}$ is different from $\pmb{C}^{(j)}$ as we do not fix $j$ when we derive $\pmb{C}$.
Since the eigenfunctions $\{\phi_l\}_{l=1}^\infty$ and eigenvalues $\{\lambda_l\}_{l=1}^\infty$ satisfy $\lambda_l\phi_l=\int_0^T \phi_l(s)\Xi_{\nu^*}(s,\cdot)ds$, we have $\lambda_l\phi_l(t_q)=\int_0^T \phi_l(s)\Xi_{\nu^*}(s,t_q)ds\approx\frac{T}{\Delta}\sum_{q'=1}^\Delta \phi_l(t_{q'})\Xi_{\nu^*}(t_{q'}, t_q) \approx \frac{T}{\Delta}\sum_{q'=1}^\Delta \phi_l(t_{q'})\widehat{\Xi}_{\nu^*}(t_{q'}, t_q)$, which is represented in the following matrix form
\begin{align}\label{eq: matrix form, approximate integrals by riemann sums}
&\lambda_l\begin{pmatrix}
\phi_l(t_1)\\
\vdots \\
\phi_l(t_\Delta)
\end{pmatrix}\approx\frac{T}{\Delta}
\begin{pmatrix}
\widehat{\Xi}_{\nu^*}(t_{1}, t_1) & \ldots & \widehat{\Xi}_{\nu^*}(t_\Delta, t_1) \\
\vdots & \ddots & \vdots \\
\widehat{\Xi}_{\nu^*}(t_{1}, t_\Delta) & \ldots & \widehat{\Xi}_{\nu^*}(t_\Delta, t_\Delta)
\end{pmatrix}
\begin{pmatrix}
\phi_l(t_1)\\
\vdots \\
\phi_l(t_\Delta)
\end{pmatrix}.
\end{align}
We denote the eigenvectors and eigenvalues of $\pmb{C}$ as $\{\pmb{v}_l=(v_{l,1}, \cdots, v_{l,\Delta})^\intercal\}_{l=1}^\Delta$ and $\{\Lambda_l\}_{l=1}^\Delta$, respectively. The following equation motivates the estimator $\phi_l(t_q)\approx \widehat{\phi}_l(t_q) \overset{\operatorname{def}}{=} \sqrt{\frac{\Delta}{T}} \cdot v_{l,q}$, for all $l\in\{1,\cdots,\Delta\}$,
\begin{align*}
\sum_{q=1}^\Delta v_{l,q}^2 = \Vert \pmb{v}_l \Vert^2 = 1 = \int_0^T\vert \phi_l(t)\vert^2 dt \approx \frac{T}{\Delta}\sum_{q=1}^\Delta \left(\phi_l(t_q)\right)^2 = \sum_{q=1}^\Delta \left(\sqrt{\frac{T}{\Delta}}\cdot\phi_l(t_q)\right)^2.
\end{align*}
The following equation motivates the estimator $\lambda_l\approx\widehat{\lambda}_l \overset{\operatorname{def}}{=} \frac{T}{\Delta}\Lambda_l$, for all $l\in\{1,\cdots,\Delta\}$,
\begin{align*}
\lambda_l \left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal &\approx \frac{T}{\Delta}\pmb{C}\left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal = \sqrt{\frac{T}{\Delta}} \pmb{C} \pmb{v}_l = \sqrt{\frac{T}{\Delta}} \Lambda_l \pmb{v}_l = \left(\frac{T}{\Delta}\Lambda_l\right) \left(\widehat{\phi}_l(t_1), \cdots, \widehat{\phi}_l(t_\Delta)\right)^\intercal.
\end{align*}
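A minimal numerical rendering of these two estimators, assuming the sample covariance matrix $\pmb{C}$ has already been computed from the centered vectors in Eq.~\eqref{eq: centered sample vectors}, is the following; the symmetrization and sorting are numerical safeguards we add, not part of the derivation above.
\begin{verbatim}
import numpy as np

def nystrom_eigenpairs(C, T):
    """Turn eigenpairs of the sample covariance matrix C (Delta x Delta) into estimates
    lambda_hat_l = (T / Delta) * Lambda_l and phi_hat_l(t_q) = sqrt(Delta / T) * v_{l, q}."""
    Delta = C.shape[0]
    Lam, Vec = np.linalg.eigh((C + C.T) / 2.0)      # symmetrize for numerical safety
    order = np.argsort(Lam)[::-1]                   # decreasing eigenvalue order
    Lam, Vec = Lam[order], Vec[:, order]
    lambda_hat = (T / Delta) * Lam                  # estimated eigenvalues
    phi_hat = np.sqrt(Delta / T) * Vec              # column l holds phi_hat_l(t_1), ..., phi_hat_l(t_Delta)
    return lambda_hat, phi_hat
\end{verbatim}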
Additionally, we estimate the $L$ defined in Eq.~\eqref{eq: def of L} by the following
\begin{align}\label{eq: estimated L}
L\approx \widehat{L} \overset{\operatorname{def}}{=} \min \left\{ l=1,\cdots, \Delta \,\Bigg\vert\, \frac{\sum_{l'=1}^l \vert \widehat{\lambda}_{l'}\vert}{\sum_{l''=1}^\Delta \vert \widehat{\lambda}_{l''}\vert} >0.95\right\}.
\end{align}
We take the absolute value of the estimated eigenvalues in computing $\widehat{L}$ because they may be numerically negative in applications. We estimate the $\xi_{l,i}$ defined in Eq.~\eqref{eq: def of the xi statistic} by the following
\begin{align}\label{eq: def of xi_hat}
\xi_{l,i} \approx \widehat{\xi}_{l,i} = \frac{1}{\sqrt{2\widehat{\lambda}_l}} \cdot \frac{T}{\Delta} \sum_{q=1}^\Delta \left\{ SECT(K_i^{(1)})(\widehat{\nu}^*;t_q) - SECT(K_i^{(2)})(\widehat{\nu}^*;t_q) \right\} \widehat{\phi}_l(t_q),
\end{align}
for $l=1,\ldots,\widehat{L}$ and $i=1,\ldots,n$. Then, we can implement the $\chi^2$-test in Eq.~\eqref{eq: rejection region} as follows
\begin{align}\label{eq: numerical chisq rejection region}
\sum_{l=1}^{ \widehat{L}}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n \widehat{\xi}_{l,i}\right)^2 > \chi^2_{\widehat{L}, 1-\alpha} = \mbox{ the $1-\alpha$ lower quantile of the $\chi^2_{\widehat{L}}$ distribution}.
\end{align}
A complete outline of this $\chi^2$-hypothesis testing procedure is given in Algorithm \ref{algorithm: testing hypotheses on mean functions}.
\begin{algorithm}[h]
\caption{: ($\chi^2$-based) Testing of the hypotheses in Eq.~\eqref{eq: the main hypotheses}}\label{algorithm: testing hypotheses on mean functions}
\begin{algorithmic}[1]
\INPUT
\noindent (i) SECT of two collections of shapes $\{SECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$;
(ii) desired confidence level $1-\alpha$ with $\alpha\in(0,1)$.
\OUTPUT \texttt{Accept} or \texttt{Reject} the null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses}.
\STATE For $j\in\{1,2\}$, compute $\widehat{m}_{\nu_p}^{(j)}(t_q) \overset{\operatorname{def}}{=}$ sample mean of $\{SECT(K_i^{(j)})(\nu_p;t_q)\}_{i=1}^n$ across $i\in\{1,\cdots,n\}$.
\STATE Compute the estimated distinguishing direction $\widehat{\nu}^*$ using Eq.~\eqref{eq: estimated distinguishing direction}.
\STATE Compute $\pmb{C}=(\widehat{\mathbb Xi}_{\nu^*}(t_{q'},t_{q}))_{q,q'=1,\cdots,\Delta} \overset{\operatorname{def}}{=} $ the sample covariance matrix derived from the centered sample vectors in Eq.~\eqref{eq: centered sample vectors} across $i\in\{1,\cdots,n\}$ and $j\in\{1,2\}$.
\STATE Compute the eigenvectors $\{\pmb{v}_l\}_{l=1}^\Delta$ and eigenvalues $\{\Lambda_l\}_{l=1}^\Delta$ of the sample covariance matrix $\pmb{C}$.
\STATE Compute $\widehat{\phi}_l(t_q) \overset{\operatorname{def}}{=} \sqrt{\frac{\Delta}{T}} v_{l,q}$ and $\widehat{\lambda}_l \overset{\operatorname{def}}{=} \frac{T}{\Delta}\Lambda_l$ for all $l=1,\cdots,\Delta$.
\STATE Compute $\widehat{L}$ using Eq.~\eqref{eq: estimated L}.
\STATE Compute $\{\widehat{\xi}_{l,i}:l=1,\cdots,\widehat{L}\}_{i=1}^n$ using Eq.~\eqref{eq: def of xi_hat}, test the null hypothesis $H_0$ using Eq.~\eqref{eq: numerical chisq rejection region}, and report the output.
\end{algorithmic}
\end{algorithm}
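For concreteness, a minimal Python sketch of Steps 6--7 of Algorithm \ref{algorithm: testing hypotheses on mean functions} is given below. It assumes that the differences $SECT(K_i^{(1)})(\widehat{\nu}^*;t_q)-SECT(K_i^{(2)})(\widehat{\nu}^*;t_q)$ are stored in an $n\times\Delta$ array \texttt{diff} and that \texttt{lambda\_hat} and \texttt{phi\_hat} come from the rescaled eigendecomposition in Step 5; all variable names are ours and merely illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def chi_square_test(diff, lambda_hat, phi_hat, T, alpha=0.05):
    """Return (reject, statistic, L_hat) for the chi^2 test."""
    n, Delta = diff.shape
    # Step 6: smallest L whose leading eigenvalues explain > 95% of sum |lambda|
    ratios = np.cumsum(np.abs(lambda_hat)) / np.sum(np.abs(lambda_hat))
    L_hat = int(np.argmax(ratios > 0.95)) + 1
    # Step 7: xi_hat_{l,i} via the Riemann sum approximating the integral
    xi_hat = np.empty((L_hat, n))
    for l in range(L_hat):
        proj = (T / Delta) * (diff @ phi_hat[:, l])
        xi_hat[l] = proj / np.sqrt(2 * lambda_hat[l])
    # Chi^2 statistic and decision
    stat = np.sum((xi_hat.sum(axis=1) / np.sqrt(n)) ** 2)
    return stat > chi2.ppf(1 - alpha, df=L_hat), stat, L_hat
\end{verbatim}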
In addition to the $\chi^2$-test detailed in Algorithm \ref{algorithm: testing hypotheses on mean functions}, we also propose a permutation-based test as an alternative approach for assessing the statistical hypotheses in Eq.~\eqref{eq: the main hypotheses}. The main idea behind the permutation test is that, under the null hypothesis, shuffling the group labels of the shapes should not substantially change the test statistic of interest. To perform the permutation-based test, we first apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to our original data and then repeatedly re-apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the shapes with shuffled labels.$^\S$\footnote{$\S$: When we apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the original SECT, the result of Eq.~\eqref{eq: estimated L} is denoted as $\widehat{L}_0$. When we apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the shuffled SECT, the $\widehat{L}$ resulting from Eq.~\eqref{eq: estimated L} may differ from $\widehat{L}_0$. To make the comparison between $\mathfrak{S}_0$ and $\mathfrak{S}_{k^*}$ fair (see the last step of Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}), we set $\widehat{L}$ to be $\widehat{L}_0$.} We then compare the test statistic derived from the original data with those computed on the shuffled data. The details of this permutation-based approach are provided in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}. Simulation studies in Section \ref{section: Simulation experiments} show that Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} is robust to the violation of Assumption \ref{assumption: equal covariance aasumption} and eliminates the moderate type I error inflation of Algorithm \ref{algorithm: testing hypotheses on mean functions}; however, its power under the alternative is moderately lower than that of Algorithm \ref{algorithm: testing hypotheses on mean functions}.
\begin{algorithm}[h]
\caption{: (Permutation-based) Testing of the hypotheses in Eq.~\eqref{eq: the main hypotheses}}\label{algorithm: permutation-based testing hypotheses on mean functions}
\begin{algorithmic}[1]
\INPUT
\noindent (i) SECT of two collections of shapes $\{SECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$;
(ii) desired confidence level $1-\alpha$ with $\alpha\in(0,1)$; (iii) the number of permutations $\Pi$.
\OUTPUT \texttt{Accept} or \texttt{Reject} the null hypothesis $H_0$ in Eq.~\eqref{eq: the main hypotheses}.
\STATE Apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the original input SECT data, compute $\widehat{L}_0$ using Eq.~\eqref{eq: estimated L} (see footnote $\S$), and compute the $\chi^2$-test statistic denoted as $\mathfrak{S}_0$ using Eq.~\eqref{eq: numerical chisq rejection region}.
\FORALL{$k=1,\cdots,\Pi$, }
\STATE Randomly permute the group labels $j\in\{1,2\}$ of the input SECT data.
\STATE Apply Algorithm \ref{algorithm: testing hypotheses on mean functions} to the permuted SECT data while setting $\widehat{L}$ to be the $\widehat{L}_0$, instead of using Eq.~\eqref{eq: estimated L}, and compute a $\chi^2$-test statistic $\mathfrak{S}_k$ using Eq.~\eqref{eq: numerical chisq rejection region}.
\ENDFOR
\STATE Compute $k^* \overset{\operatorname{def}}{=}[(1-\alpha)\cdot\Pi]\overset{\operatorname{def}}{=}$ the largest integer smaller than $(1-\alpha)\cdot\Pi$.
\STATE \texttt{Reject} the null hypothesis $H_0$ if $\mathfrak{S}_0>\mathfrak{S}_{k^*}$, where $\mathfrak{S}_{k^*}$ denotes the $k^*$-th smallest value among $\{\mathfrak{S}_k\}_{k=1}^\Pi$, and report the output.
\end{algorithmic}
\end{algorithm}
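The permutation wrapper in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} can be sketched in Python as follows. Here \texttt{chi\_square\_statistic} is assumed to implement Steps 1--8 of Algorithm \ref{algorithm: testing hypotheses on mean functions} and to accept an optional fixed $\widehat{L}$ (cf.\ footnote $\S$), the labels are shuffled by pooling the $2n$ shapes, and $\mathfrak{S}_{k^*}$ is interpreted as the $k^*$-th smallest permuted statistic. This is a sketch under our own conventions, not the released implementation.
\begin{verbatim}
import numpy as np

def permutation_test(sect1, sect2, chi_square_statistic, Pi=1000, alpha=0.05, seed=None):
    """sect1, sect2: (n, Gamma, Delta) arrays of SECT values of the two groups.
    chi_square_statistic(groupA, groupB, L_fixed=None) -> (statistic, L_hat)."""
    rng = np.random.default_rng(seed)
    n = sect1.shape[0]
    S0, L0 = chi_square_statistic(sect1, sect2)        # original statistic and L_hat_0
    pooled = np.concatenate([sect1, sect2], axis=0)     # 2n shapes with labels pooled
    S_perm = np.empty(Pi)
    for k in range(Pi):
        idx = rng.permutation(2 * n)                    # shuffle the group labels
        S_perm[k], _ = chi_square_statistic(pooled[idx[:n]], pooled[idx[n:]], L_fixed=L0)
    k_star = int((1 - alpha) * Pi)
    threshold = np.sort(S_perm)[k_star - 1]             # k*-th smallest permuted statistic
    return S0 > threshold                               # True means "Reject H_0"
\end{verbatim}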
\subsection{Randomization-style Null Hypothesis Significance Test}\label{Randomization-style Null Hypothesis Significance Test}
In Section \ref{section: Simulation experiments}, we will compare Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} with the ``randomization-style null hypothesis significance test (NHST)" (\cite{robinson2017hypothesis}, particularly Section 5.3 therein), which is designed to test the following hypotheses
\begin{align}\label{eq: hypotheses for randomization-style NHST}
H_0:\ \ \mathbb{P}^{(1)}=\mathbb{P}^{(2)}\ \ \ vs.\ \ \ H_1:\ \ \mathbb{P}^{(1)}\ne\mathbb{P}^{(2)}.
\end{align}
The randomization-style NHST is based on the permutation test and the following loss function
\begin{align}\label{eq: randomization-style NHST}
F\left(\{K_i^{(1)}\}_{i=1}^n, \, \{K_i^{(2)}\}_{i=1}^n\right) \overset{\operatorname{def}}{=}\frac{1}{2n(n-1)}\sum_{k,l=1}^n\left\{\rho\left(K_k^{(1)},\, K_l^{(1)}\right)+\rho\left(K_k^{(2)},\, K_l^{(2)}\right)\right\},
\end{align}
where $\rho$ is the distance function defined in Eq.~\eqref{Eq: distance between shapes}. Given the discrete ECT $\{ECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$, we may adopt the following approximation
\begin{align}\label{eq: approximate rho}
\rho\left(K_k^{(j)},\, K_l^{(j)}\right) \approx \sup_{p=1,\ldots,\Gamma}\left(\sum_{q=1}^\Delta \left\vert ECT(K_k^{(j)})(\nu_p;t_q)-ECT(K_l^{(j)})(\nu_p;t_q)\right\vert^2\right)^{1/2}.
\end{align}
We apply Algorithm \ref{algorithm: randomization-style NHST} to implement the randomization-style NHST.
\begin{algorithm}[h]
\caption{: Randomization-style NHST}\label{algorithm: randomization-style NHST}
\begin{algorithmic}[1]
\INPUT
\noindent (i) ECT of two collections of shapes $\{ECT(K_i^{(j)})(\nu_p;t_q):p=1,\cdots,\Gamma \mbox{ and } q=1,\cdots,\Delta\}_{i=1}^n$ for $j\in\{1,2\}$;
(ii) desired confidence level $1-\alpha$ with $\alpha\in(0,1)$; (iii) the number of permutations $\Pi$.
\OUTPUT \texttt{Accept} or \texttt{Reject} the null hypothesis $H_0$ in Eq.~\eqref{eq: hypotheses for randomization-style NHST}.
\STATE Apply Eq.~\eqref{eq: randomization-style NHST} and Eq.~\eqref{eq: approximate rho} to the original input ECT data and compute the value of the loss $\mathfrak{S}_0 \overset{\operatorname{def}}{=} F(\{K_i^{(1)}\}_{i=1}^n, \, \{K_i^{(2)}\}_{i=1}^n)$.
\FORALL{$k=1,\cdots,\Pi$, }
\STATE Randomly permute the group labels $j\in\{1,2\}$ of the input ECT data.
\STATE Apply Eq.~\eqref{eq: randomization-style NHST} and Eq.~\eqref{eq: approximate rho} to the permuted ECT data and compute the value of the loss $\mathfrak{S}_k \overset{\operatorname{def}}{=} F(\{K_i^{(1)}\}_{i=1}^n, \, \{K_i^{(2)}\}_{i=1}^n)$.
\ENDFOR
\STATE Compute $k^* \overset{\operatorname{def}}{=}[\alpha\cdot\Pi]\overset{\operatorname{def}}{=}$ the largest integer smaller than $\alpha\cdot\Pi$.
\STATE \texttt{Reject} the null hypothesis $H_0$ if $\mathfrak{S}_0<\mathfrak{S}_{k^*}$, where $\mathfrak{S}_{k^*}$ denotes the $k^*$-th smallest value among $\{\mathfrak{S}_k\}_{k=1}^\Pi$, and report the output.
\end{algorithmic}
\end{algorithm}
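For completeness, a Python sketch of Algorithm \ref{algorithm: randomization-style NHST} follows, with the ECT values stored as $(n,\Gamma,\Delta)$ arrays, the distance approximated as in Eq.~\eqref{eq: approximate rho}, and $\mathfrak{S}_{k^*}$ again interpreted as the $k^*$-th smallest permuted loss; the function names are ours and merely illustrative.
\begin{verbatim}
import numpy as np

def rho_approx(ect_a, ect_b):
    """Approximation of rho as in Eq. (approximate rho); inputs are (Gamma, Delta) arrays."""
    return np.max(np.sqrt(np.sum((ect_a - ect_b) ** 2, axis=1)))

def loss_F(ect1, ect2):
    """Within-group dispersion F in Eq. (randomization-style NHST)."""
    n = ect1.shape[0]
    total = sum(rho_approx(ect1[k], ect1[l]) + rho_approx(ect2[k], ect2[l])
                for k in range(n) for l in range(n))
    return total / (2 * n * (n - 1))

def randomization_nhst(ect1, ect2, Pi=1000, alpha=0.05, seed=None):
    rng = np.random.default_rng(seed)
    n = ect1.shape[0]
    S0 = loss_F(ect1, ect2)
    pooled = np.concatenate([ect1, ect2], axis=0)
    S_perm = np.empty(Pi)
    for k in range(Pi):
        idx = rng.permutation(2 * n)                    # shuffle the group labels
        S_perm[k] = loss_F(pooled[idx[:n]], pooled[idx[n:]])
    k_star = max(int(alpha * Pi), 1)
    threshold = np.sort(S_perm)[k_star - 1]             # k*-th smallest permuted loss
    return S0 < threshold                               # True means "Reject H_0"
\end{verbatim}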
\section{Experiments Using Simulations}\label{section: Simulation experiments}
In this section, we present proof-of-concept simulation studies showing the performance of the hypothesis testing framework in Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}. In addition, we compare our proposed algorithms to the randomization-style NHST in Algorithm \ref{algorithm: randomization-style NHST}. We focus on a family of distributions $\{\mathbb{P}^{(\varepsilon)}\}_{0\le\varepsilon\le0.1}$ on $\mathscr{S}_{R,d}^M$. For each $\varepsilon$, a collection of shapes $\{K_i^{(\varepsilon)}\}_{i=1}^n$ is generated i.i.d. from the distribution $\mathbb{P}^{(\varepsilon)}$ via the following
\begin{align}\label{eq: explicit P varepsilon}
K_i^{(\varepsilon)} = & \left\{x\in\mathbb{R}^2 \, \Bigg\vert\, \inf_{y\in S_i^{(\varepsilon)}}\Vert x-y\Vert\le \frac{1}{5}\right\},\ \ \mbox{ where} \\
\notag S_i^{(\varepsilon)} = & \left\{\left(\frac{2}{5}+a_{1,i}\cdot\cos t, b_{1,i}\cdot\sin t\right) \, \Bigg\vert\, \frac{1-\varepsilon}{5}\pi\le t\le\frac{9+\varepsilon}{5}\pi\right\} \bigcup\left\{\left(-\frac{2}{5}+a_{2,i}\cdot\cos t, b_{2,i}\cdot\sin t\right) \, \Bigg\vert\, \frac{6\pi}{5}\le t\le\frac{14\pi}{5}\right\},
\end{align}
where $\{a_{1,i}, a_{2,i}, b_{1,i}, b_{2,i}\}_{i=1}^n \overset{i.i.d.}{\sim} N(1, 0.05^2)$, and the parameter $\varepsilon$ indexes the dissimilarity between the distributions $\mathbb{P}^{(\varepsilon)}$ and $\mathbb{P}^{(0)}$. For each $\varepsilon\in[0,0.1]$, we test the following hypotheses using Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}
\begin{align*}
\begin{aligned}
& H_0: m_\nu^{(0)}(t)=m_\nu^{(\varepsilon)}(t)\mbox{ for all }(\nu,t)\in\mathbb{S}^{d-1}\times[0,T]\ \ \ vs. \ \ \ H_1: m_\nu^{(0)}(t)\ne m_\nu^{(\varepsilon)}(t)\mbox{ for some }(\nu,t),
\end{aligned}
\end{align*}
where the mean $m_\nu^{(\varepsilon)}(t) \overset{\operatorname{def}}{=} \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t) \mathbb{P}^{(\varepsilon)}(dK)$, and the null hypothesis $H_0$ is true when $\varepsilon=0$. We test the hypotheses $H_0:\, \mathbb{P}^{(0)}=\mathbb{P}^{(\varepsilon)}\ \ vs.\ \ H_1:\, \mathbb{P}^{(0)} \ne \mathbb{P}^{(\varepsilon)}$ using the scheme described in Algorithm \ref{algorithm: randomization-style NHST}.
We set $T=3$, directions $\nu_p=(\cos\frac{p-1}{4}\pi, \sin\frac{p-1}{4}\pi)^\intercal$ for $p\in\{1,2,3,4\}$, sublevel sets $t_q=\frac{T}{50}q=0.06q$ for $q\in\{1,\cdots,50\}$ (i.e., $\Gamma=4$ and $\Delta=50$ in Algorithms \ref{algorithm: testing hypotheses on mean functions}, \ref{algorithm: permutation-based testing hypotheses on mean functions}, and \ref{algorithm: randomization-style NHST}), the confidence level $95\%$ (i.e., $\alpha=0.05$ in Eq.~\eqref{eq: numerical chisq rejection region}), and the number of permutations $\Pi=1000$. For each $\varepsilon\in$ \{0, 0.0125, 0.025, 0.0375, 0.05, 0.075, 0.1\}, we independently generate two collections of shapes, $\{K_i^{(0)}\}_{i=1}^n\overset{i.i.d.}{\sim} \mathbb{P}^{(0)}$ and $\{K_i^{(\varepsilon)}\}_{i=1}^n\overset{i.i.d.}{\sim} \mathbb{P}^{(\varepsilon)}$, through Eq.~\eqref{eq: explicit P varepsilon} with the number of shape pairs set to $n=100$, and we compute the ECT and SECT of each generated shape in directions $\{\nu_p\}_{p=1}^4$ and at levels $\{t_q\}_{q=1}^{50}$. Then, we apply Algorithms \ref{algorithm: testing hypotheses on mean functions}-\ref{algorithm: randomization-style NHST} to these computed ECT and SECT values and obtain the corresponding \texttt{Accept}/\texttt{Reject} outputs. We repeat this procedure 100 times and report the rejection rates across all replicates for each $\varepsilon$ in Table \ref{table: epsilon vs. rejection rates}. Additionally, the rejection rates of our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} are visually presented in Figure \ref{fig: simulation visualizations}.
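For reproducibility, one way to draw a shape $K_i^{(\varepsilon)}$ from Eq.~\eqref{eq: explicit P varepsilon} is to discretize it as a binary image, marking every grid point within distance $1/5$ of the random curve $S_i^{(\varepsilon)}$. The Python sketch below does this; the grid resolution and the bounding half-width of the grid are our own illustrative choices and are not tied to the released implementation.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sample_shape(eps, half_width=2.0, grid_size=200, n_curve=1000, seed=None):
    """Draw one K_i^(eps) from Eq. (explicit P varepsilon) as a boolean image
    on the square [-half_width, half_width]^2 (True = inside the shape)."""
    rng = np.random.default_rng(seed)
    a1, a2, b1, b2 = rng.normal(1.0, 0.05, size=4)          # i.i.d. N(1, 0.05^2)
    t1 = np.linspace((1 - eps) * np.pi / 5, (9 + eps) * np.pi / 5, n_curve)
    t2 = np.linspace(6 * np.pi / 5, 14 * np.pi / 5, n_curve)
    curve = np.concatenate([
        np.stack([2 / 5 + a1 * np.cos(t1), b1 * np.sin(t1)], axis=1),
        np.stack([-2 / 5 + a2 * np.cos(t2), b2 * np.sin(t2)], axis=1),
    ])
    xs = np.linspace(-half_width, half_width, grid_size)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    dists, _ = cKDTree(curve).query(pts)                     # distance to S_i^(eps)
    return (dists <= 0.2).reshape(grid_size, grid_size)
\end{verbatim}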
\setlength{\extrarowheight}{2pt}
\begin{table}[h]
\centering
\caption{Rejection rates of Algorithms \ref{algorithm: testing hypotheses on mean functions}-\ref{algorithm: randomization-style NHST} across different indices $\varepsilon$. We assess the type I error rate of the different algorithms when the null model is satisfied (i.e., $\varepsilon=0$). We then assess the power of these approaches in the other cases.}
\label{table: epsilon vs. rejection rates}
\vspace*{0.5em}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Indices $\varepsilon$ & 0.000 & 0.0125 & 0.0250 & 0.0375 & 0.0500 & 0.0750 & 0.100 \\ [2pt]\hline
$\mathfrak{R}_F$ defined in Eq.~\eqref{eq: norm-ratios} & 0.111 & 0.095 & 0.106 & 0.122 & 0.143 & 0.122 & 0.114 \\ [2pt]\hline
$\mathfrak{R}_\infty$ defined in Eq.~\eqref{eq: norm-ratios} & 0.307 & 0.240 & 0.328 & 0.358 & 0.457 & 0.387 & 0.415 \\ [2pt]\hline
Rejection rates of Algorithm \ref{algorithm: testing hypotheses on mean functions} & 0.15 & 0.16 & 0.32 & 0.67 & 0.92 & 0.98 & 1.00 \\ [2pt]\hline
Rejection rates of Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} & 0.03 & 0.14 & 0.22 & 0.47 & 0.74 & 1.00 & 1.00 \\ [2pt]\hline
Rejection rates of Algorithm \ref{algorithm: randomization-style NHST} & 0.01 & 0.10 & 0.11 & 0.34 & 0.66 & 0.99 & 1.00 \\ [2pt]\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{The ``\texttt{rejection rates}'' of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} across the values of $\varepsilon$, together with the estimated mean functions $\widehat{m}_{\widehat{\nu}^*}^{(0)}$ (blue solid) and $\widehat{m}_{\widehat{\nu}^*}^{(\varepsilon)}$ (red dashed) and example shapes generated from $\mathbb{P}^{(0)}$ (blue) and $\mathbb{P}^{(0.075)}$ (pink).}
\label{fig: simulation visualizations}
\end{figure}
The simulation results in this section show that our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} are powerful in detecting the difference between $\mathbb{P}^{(\varepsilon)}$ and $\mathbb{P}^{(0)}$ in terms of distinguishing between the corresponding mean functions. The estimated mean functions $\widehat{m}_{\widehat{\nu}^*}^{(0)}$ and $\widehat{m}_{\widehat{\nu}^*}^{(\varepsilon)}$ in direction $\widehat{\nu}^*$ are presented by the blue solid and red dashed curves, respectively, in Figure \ref{fig: simulation visualizations}, where the discrepancy between the two estimated mean functions can also be seen. As $\varepsilon$ increases, $\mathbb{P}^{(\varepsilon)}$ deviates further from $\mathbb{P}^{(0)}$, and the power of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} (i.e., the rejection rates under the alternative hypothesis) in detecting the deviation increases. When $\varepsilon\ge0.075$, the power of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} exceeds 0.95. For all values of $\varepsilon$, the deviation of $\mathbb{P}^{(\varepsilon)}$ from $\mathbb{P}^{(0)}$ is difficult to see by eye. For example, by just visualizing the shapes in Figure \ref{fig: simulation visualizations}, it is difficult to distinguish between the shape collections generated by $\mathbb{P}^{(0)}$ (blue) and $\mathbb{P}^{(0.075)}$ (pink), while our hypothesis testing framework detected the difference between the two shape collections in more than $95\%$ of the simulations. The randomization-style NHST approach in Algorithm \ref{algorithm: randomization-style NHST} \citep{robinson2017hypothesis} also performs well in detecting the discrepancy between the shape-generating distributions $\mathbb{P}^{(0)}$ and $\mathbb{P}^{(\varepsilon)}$. However, its power under the alternative is weaker than that of our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} (see Table \ref{table: epsilon vs. rejection rates}).
Despite the strong ability of Algorithm \ref{algorithm: testing hypotheses on mean functions} to detect the true discrepancy between mean functions, its type I error rate (i.e., the rejection rate under the null model) is moderately inflated. Specifically, the type I error rate of Algorithm \ref{algorithm: testing hypotheses on mean functions} is 0.15, while the nominal level is 0.05 (see Table \ref{table: epsilon vs. rejection rates}). In contrast, Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} does not suffer from type I error inflation, although its power under the alternative is moderately lower than that of Algorithm \ref{algorithm: testing hypotheses on mean functions}. Our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} are derived under Assumption \ref{assumption: equal covariance aasumption}. This assumption is violated in the proof-of-concept simulations presented herein, as illustrated by the values of $\mathfrak{R}_F$ and $\mathfrak{R}_\infty$ in Table \ref{table: epsilon vs. rejection rates}. The moderate type I error inflation of Algorithm \ref{algorithm: testing hypotheses on mean functions} (see Figure \ref{fig: simulation visualizations} and Table \ref{table: epsilon vs. rejection rates}) is essentially due to this assumption violation. Although Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} is also based on the $\chi^2$-test derived under Assumption \ref{assumption: equal covariance aasumption}, the permutation procedure makes Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} robust to the violation of this assumption.
We used $\Gamma=4$ directions, $\Delta=50$ levels, and $n=100$ pairs of shapes in the simulation studies above. The values $\Gamma, \Delta$, and $n$ substantially influence the runtime of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}. As a proof-of-concept, we present the runtime of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} in replicating the simulation studies above with different combinations of inputs $\Gamma, \Delta$, and $n$. Details of the runtime study are presented in Appendix \ref{section: Runtime}.
\section{Applications}\label{section: Applications}
In this section, we first apply our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to the MPEG-7 shape silhouette database \citep{sikora2001mpeg} as a toy example to test for differences of shapes between groups. Then, we apply the proposed algorithms to analyze a data set of mandibular molars from
different genera of primates (see Figure \ref{fig: Teeth}).
\subsection{Silhouette Database}
We used a subset of the silhouette database including three classes of shapes: apples, hearts, and children (see Figure \ref{fig: Silhouette Database}; each class has 20 shapes). For each shape in Figure \ref{fig: Silhouette Database}, we computed its SECT. Specifically, we computed the ECCs for 72 directions evenly sampled over the interval $[0,2\pi]$; in each direction, we used 100 sublevel sets. We applied Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to test for differences between each pair of shape classes and present the results in Table \ref{tab: Silhouette Database}. The p-values in Table \ref{tab: Silhouette Database} are either $\chi^2$-test p-values (Algorithm \ref{algorithm: testing hypotheses on mean functions}) or permutation-test p-values (Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} with $\Pi=1000$). In addition to testing the difference between shape classes, we applied the algorithms within each shape class (i.e., to assess the calibration of the type I error under the null model). Specifically, for each shape class, we randomly split the class into two halves and applied the algorithms to test the difference between the two halves. We repeated the random splitting procedure 100 times and present the corresponding p-values in Table \ref{tab: Silhouette Database} (rows 4-6). That is, for each shape class, the 100 p-values are summarized by their mean and standard deviation (in parentheses).
\begin{figure}
\caption{Each row corresponds to one of the shape classes: apples, hearts, and children.}
\label{fig: Silhouette Database}
\end{figure}
Rows 1-3 of Table \ref{tab: Silhouette Database} show that our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} can distinguish the shape classes of apples, hearts, and children in Figure \ref{fig: Silhouette Database}. Rows 4-6 show that, when the two groups contain similar shapes, our proposed algorithms do not falsely distinguish them (i.e., the algorithms tend to avoid type I errors for the shape data in Figure \ref{fig: Silhouette Database}). The p-values within each shape class measure the homogeneity/heterogeneity of the class (e.g., the class of children has the highest homogeneity among the three shape classes). Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} tends to have larger p-values than Algorithm \ref{algorithm: testing hypotheses on mean functions} when applied within each shape class, which is essentially due to the permutation procedure implemented in Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}.
\setlength{\extrarowheight}{2pt}
\begin{table}[h]
\caption{P-values of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} for the silhouette database.}
\label{tab: Silhouette Database}
\centering
\begin{tabular}{|c|c|c|}
\hline
& Algorithm \ref{algorithm: testing hypotheses on mean functions} & Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} \\[2pt] \hline
Apples vs. Hearts & $<0.01$ & $<0.01$ \\[2pt]
Apples vs. Children & $<0.01$ & $<0.01$ \\ [2pt]
Hearts vs. Children &$<0.01$ &$<0.01$\\[2pt]\hline
Apples vs. Apples & 0.26 (0.23) & 0.46 (0.27) \\[2pt]
Hearts vs. Hearts & 0.17 (0.16) & 0.47 (0.29) \\[2pt]
Children vs. Children & 0.39 (0.28) & 0.49 (0.30)\\[2pt]
\hline
\end{tabular}
\end{table}
\subsection{Mandibular Molars from Primates}
We consider a data set of mandibular molars from two suborders of primates --- Haplorhini
and Strepsirrhini (see Figure \ref{fig: INATRA_fig_6}). In the haplorhine suborder collection, 33 molars came from the genus Tarsius (see Figure \ref{fig: INATRA_fig_6} and the yellow panels in Figure \ref{fig: Teeth}), and 9 molars came from the genus Saimiri (see Figure \ref{fig: INATRA_fig_6} and the grey panels in Figure \ref{fig: Teeth}). In the strepsirrhine suborder collection, 11 molars came from the genus Microcebus (see Figure \ref{fig: INATRA_fig_6} and the blue panels in Figure \ref{fig: Teeth}), and 6 molars came from the genus Mirza (see Figure \ref{fig: INATRA_fig_6} and the green panels in Figure \ref{fig: Teeth}). Before applying Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}, we performed the ECT on the raw data and normalized the polygon meshes by aligning the ECCs. The details of the ECT-based alignment procedure are given in \cite{wang2021statistical} (particularly, Section 4 of its Supplementary Material). The aligned molars are presented in Figure \ref{fig: Teeth}.
\begin{figure}
\caption{Illustration of the relationship between the phylogenetics and the unique paraconids in molars belonging to primates in the Tarsius genus. Tarsier teeth have additional high cusps (highlighted in red). A version of this figure has been previously published in \cite{wang2021statistical}.}
\label{fig: INATRA_fig_6}
\end{figure}
We applied our proposed Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to the preprocessed teeth data. For each aligned molar, we computed its SECT. Specifically, we computed the ECCs for 2918 directions; in each direction, we used 200 sublevel sets. To compare any pair of mandibular molar groups, as a proof of concept, we used the smaller of the two group sizes as the sample size input $n$ in our proposed algorithms. For example, when comparing the Tarsius and Microcebus groups, we chose $n=11$; that is, we compared the first 11 molars of the Tarsius group to all the molars in the Microcebus group. We applied Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to the four groups of mandibular molars and present the results in Table \ref{tab: Mandibular Molars Database}. The p-values in Table \ref{tab: Mandibular Molars Database} are either $\chi^2$-test p-values (Algorithm \ref{algorithm: testing hypotheses on mean functions}) or permutation-test p-values (Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} with $\Pi=1000$).
The small p-values ($<0.05$) in Table \ref{tab: Mandibular Molars Database} show that our proposed algorithms can distinguish the four different genera of primates. Because the genera Microcebus and Mirza belong to the same suborder Strepsirrhini (see Figure \ref{fig: INATRA_fig_6}), the p-value from Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} for ``Microcebus vs. Mirza" is comparatively large. In addition, the Microcebus molar and Mirza molar examples (visually presented in Figure \ref{fig: INATRA_fig_6}) look relatively similar. Although the genera Tarsius and Saimiri belong to the same suborder Haplorhini, the molars of the two genera are drastically different (see Figure \ref{fig: INATRA_fig_6}). More specifically, the paraconids (i.e., the cusp highlighted in red in Figure \ref{fig: INATRA_fig_6}) are retained only by the genus Tarsius \citep{st2016lower}. The paraconids are one important reason for the drastically small p-values ($P<10^{-3}$) for ``Tarsius vs. Saimiri." The other drastically small p-values ($P<10^{-3}$) arise because the corresponding genera belong to different suborders.
\setlength{\extrarowheight}{2pt}
\begin{table}[h]
\caption{P-values of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} for the data set of mandibular molars.}
\label{tab: Mandibular Molars Database}
\centering
\begin{tabular}{|c|c|c|}
\hline
& Algorithm \ref{algorithm: testing hypotheses on mean functions} & Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} \\[2pt] \hline
Tarsius vs. Microcebus & $<10^{-3}$ & $<10^{-3}$ \\[2pt]
Tarsius vs. Mirza & $<10^{-3}$ & $<10^{-3}$ \\ [2pt]
Tarsius vs. Saimiri & $<10^{-3}$ & $<10^{-3}$ \\[2pt]
Microcebus vs. Mirza & $<10^{-3}$ & $0.009$ \\[2pt]
Microcebus vs. Saimiri & $<10^{-3}$ & $<10^{-3}$ \\[2pt]
Mirza vs. Saimiri & $<10^{-3}$ & $<10^{-3}$\\[2pt]\hline
Tarsius vs. Tarsius & 0.206 (0.195) & 0.519 (0.274) \\[2pt]
\hline
\end{tabular}
\end{table}
In addition to testing the difference between genera, as a proof of concept, we applied the proposed algorithms within the genus Tarsius. Specifically, we focused on the first 32 molars in the Tarsius group. We randomly split the 32 molars into two halves and applied Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} to test the difference between the two halves. We repeated the random splitting procedure 100 times and present the corresponding p-values in the last row of Table \ref{tab: Mandibular Molars Database}. Specifically, the 100 p-values are summarized by their mean and standard deviation (in parentheses). These p-values show that our proposed algorithms, especially Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}, tend to avoid type I errors for the molars from the genus Tarsius.
\section{Conclusions and Discussions}\label{Conclusions and Discussions}
In this paper, we provided the mathematical foundations for the randomness of shapes and the distributions of the SECT. The probability space $(\mathscr{S}_{R,d}^M, \mathscr{B}(\rho), \mathbb{P})$ was constructed as an underlying probability space for modeling the randomness of shapes, and the SECT was modeled as a $C(\mathbb{S}^{d-1};\mathcal{H})$-valued random variable defined on this probability space. We established several properties of the SECT ensuring its Karhunen–Loève expansion, which led to a normal-distribution-based statistic for testing hypotheses on random shapes. Simulation studies were provided to support our mathematical derivations and to show the performance of the proposed hypothesis testing algorithms. Our approach was shown to be powerful in detecting the difference between two shape-generating distributions. In particular, our proposed Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions} was shown to have larger power under the alternative hypothesis than the existing randomization-style NHST \citep{robinson2017hypothesis} while not suffering from type I error inflation. Lastly, we applied our proposed algorithms to real data sets of silhouettes and mandibular molars from primates.
Below, we list several potential future research directions that we believe are related to our work:
\paragraph{Definition of Mean Shapes and Fréchet Regression.} The existence and uniqueness of the mean shapes $K_\oplus$ as defined in Eq.~\eqref{Frechet mean shape} are still unknown. If mean shapes $K_\oplus$ do exist, the relationship between $SECT(K_\oplus)$ and $\mathbb{E}\{SECT(\cdot)\}$ is of particular interest. In addition, the relationship of $\mathbb{E}\{SECT(\cdot)\}$ to the theory of ``expectations of random sets" (see Chapter 2 of \cite{molchanov2005theory}) is of interest and left for future research. The Fréchet mean in Eq.~\eqref{Frechet mean shape} may be extended to the conditional Fréchet mean and implemented in Fréchet regression --- predicting shapes $K$ from multiple scalar predictors \citep{petersen2019frechet}. For example, predicting molecular shapes and structures from scalar-valued indicators or sequences has become of high interest in systems biology \citep{jumper2021highly, yang2020improved}. Conversely, predicting clinical outcomes from tumor shapes is also of interest \citep{moon2020predicting, somasundaram2021persistent, vipond2021multiparameter}.
\paragraph{Generative Models for Shapes.} The foundations for the randomness of shapes and the distributions of the SECT allow for the generative modeling of shapes. Suppose we are interested in a collection of shapes $\{K_i\}_{i=1}^n\subset\mathscr{S}_{R,d}^M$ (e.g., mandibular molars or tumors), and the underlying distribution of $\{SECT(K_i)\}_{i=1}^n$ is $\mathbf{P}$, where $\mathbf{P}$ is a probability measure on $C(\mathbb{S}^{d-1};\mathcal{H})$. Suppose $\mathbf{P}$ is known or accurately estimated. We can then simulate $\{SECT_{i'}\}_{i'=1}^{n'} \overset{i.i.d.}{\sim} \mathbf{P}$. Since the shape-to-SECT map $K\mapsto SECT(K)$ is invertible (see Theorem \ref{thm: invertibility}), for each $i'\in\{1,\cdots,n'\}$, there exists a unique shape $K_{i'}$ whose SECT is $SECT_{i'}$. Then the simulated $\{K_{i'}\}_{i'=1}^{n'}$ and the shapes of interest share the same distribution $\mathbf{P}$. The estimation of the distribution $\mathbf{P}$ from $\{K_i\}_{i=1}^n$ is left for future research. The shape reconstruction step (i.e., constructing an approximate inverse map $SECT_i\mapsto K_i$) is outside the scope of the paper. Challenges in reconstructing shapes from ECCs were discussed in \cite{fasy2018challenges}.
\paragraph{Lifted Euler Characteristic
Transform.} \cite{kirveslahti2021representing} generalized the ECT to field type data using a lifting argument and proposed the lifted ECT (LECT). The randomness, probabilistic distributions, and potential corresponding hypothesis testing frameworks using the LECT are left for future research.
\section*{Software Availability}
The source code for implementing the simulation studies in Section \ref{section: Simulation experiments} and applications in Section \ref{section: Applications} is publicly available online at \url{https://github.com/JinyuWang123/TDA.git}
\section*{Acknowledgments}
We want to thank Dr. Matthew T.~Harrison in the Division of Applied Mathematics at Brown University for useful comments and suggestions on previous versions of the manuscript. LC would like to acknowledge the support of a David \& Lucile Packard Fellowship for Science and Engineering.
\section*{Competing Interests}
The authors declare no competing interests.
\begin{center}
\Large{\textbf{Appendix --- Supplementary Material\\ for ``Randomness and Statistical Inference of Shapes via the Smooth Euler Characteristic Transform"}}
\end{center}
\begin{appendix}
\section{Overview of Persistence Diagrams}\label{The Relationship between PHT and SECT}
\noindent This section gives an overview of persistence diagrams (PDs) in the literature. The overview is provided for the following purposes:
\begin{itemize}
\item we provide the details for the definition of $\mathscr{S}_{R,d}^M$, particularly the condition in Eq.~\eqref{Eq: topological invariants boundedness condition};
\item the PD framework is the necessary tool for proving several theorems in this paper (see Appendix \ref{section: appendix, proofs}).
\end{itemize}
Most of the material in this overview comes from, or is adapted from, \cite{mileyko2011probability} and \cite{turner2013means}.
Let $\mathbb{K}$ be a compact topological space and $\varphi$ be a real-valued continuous function defined on $\mathbb{K}$. Because of the compactness of $\mathbb{K}$ and continuity of $\varphi$, we assume $\varphi(\mathbb{K})\subset[0,T]$ without loss of generality. For each $t\in [0,T]$, denote
\begin{align*}
\mathbb{K}^\varphi_t \overset{\operatorname{def} }{=} \{x\in\mathbb{K} \,\vert \, \varphi(x)\le t\}.
\end{align*}
Then $\mathbb{K}^\varphi_{t_1} \subset\mathbb{K}^\varphi_{t_2}$ for all $0\le t_1\le t_2 \le T$, and $i_{t_1 \rightarrow t_2}$ denotes the corresponding inclusion map. Definition \ref{def: HCP and tameness} is an analogue of the following concepts:
\begin{enumerate}
\item \textit{$t^*$ is a homotopy critical point (HCP) of $\mathbb{K}$ with respect to $\varphi$ if $\mathbb{K}^\varphi_{t^*}$ is not homotopy equivalent to $\mathbb{K}^\varphi_{t^*-\delta}$ for any $\delta>0$;}
\item \textit{$\mathbb{K}$ is called tame with respect to $\varphi$ if $\mathbb{K}$ has finitely many HCP with respect to $\varphi$.}
\end{enumerate}
If we take $\mathbb{K}=K\in\mathscr{S}_{R,d}^M$ and
\begin{align}\label{Eq: Morse function 1}
\varphi(x)=x\cdot \nu+R \overset{\operatorname{def}}{=} \phi_\nu(x),\ \ \ x\in K,\ \ \nu\in\mathbb{S}^{d-1},
\end{align}
we have the scenario discussed in Section \ref{The Definition of Smooth Euler Characteristic Transform}. The definition of $\mathscr{S}_{R,d}^M$ provides the tameness of $K$ with respect to $\phi_\nu$ and $\phi_\nu(K)\subset[0,T]$, for any fixed $\nu\in\mathbb{S}^{d-1}$.
The inclusion maps $i_{t_1 \rightarrow t_2}: \mathbb{K}_{t_1}^\varphi \rightarrow \mathbb{K}_{t_2}^\varphi$ induce the group homomorphisms
\begin{align*}
i^{\#}_{t_1 \rightarrow t_2}: H_k(\mathbb{K}_{t_1}^\varphi) \rightarrow H_k(\mathbb{K}_{t_2}^\varphi), \ \ \mbox{ for all }k\in\mathbb{Z},
\end{align*}
where $H_k(\cdot)=H_k(\cdot;\mathbb{Z}_2)$ denotes the $k$-th homology group with coefficients in the field $\mathbb{Z}_2$, and $\mathbb{Z}_2$ is omitted for simplicity. Because of the tameness of $\mathbb{K}$ with respect to $\varphi$, for any $t_1 \le t_2$, we have that the image
\begin{align*}
im \left(i^{\#}_{(t_1-\delta) \rightarrow t_2} \right)=im \left(i^{\#}_{(t_1-\delta) \rightarrow t_1} \circ i^{\#}_{t_1 \rightarrow t_2} \right)
\end{align*}
does not depend on $\delta>0$ when $\delta$ is sufficiently small, and then this constant image is denoted as $im(i^{\#}_{(t_1-) \rightarrow t_2})$. For any $t$, the $k$-th birth group at $t$ is defined as the quotient group
\begin{align*}
B_k^{t} \overset{\operatorname{def}}{=} H_k(\mathbb{K}_t^\varphi)/im(i^{\#}_{(t-)\rightarrow t}),
\end{align*}
and $\pi_{B_k^t}: H_k(\mathbb{K}_t^\varphi) \rightarrow B_k^{t}$ denotes the corresponding quotient map. For any $\alpha\in H_k(\mathbb{K}_t^\varphi)$, we say $\alpha$ is born at $t$ if $\pi_{B_k^t}(\alpha)\ne 0$ in $B_k^t$. The tameness implies that $B_k^t$ is a nontrivial group only for finitely many $t$. For any $t_1<t_2$, we denote the quotient group
\begin{align*}
E_k^{t_1, t_2} \overset{\operatorname{def}}{=} H_k(\mathbb{K}_{t_2}^\varphi)/im(i^{\#}_{(t_1-)\rightarrow t_2})
\end{align*}
and the corresponding quotient map $\pi_{E_k^{t_1, t_2}}: H_k(\mathbb{K}_{t_2}^\varphi) \rightarrow E_k^{t_1, t_2}$. Furthermore, we define the following map
\begin{align*}
g_k^{t_1, t_2}:\ \ B_k^{t_1} \rightarrow E_k^{t_1, t_2},\ \ \ \ \pi_{B_k^{t_1}}(\alpha) \mapsto \pi_{E_k^{t_1, t_2}}\left(i^{\#}_{t_1 \rightarrow t_2}(\alpha)\right),
\end{align*}
for all $\alpha \in H_k(\mathbb{K}_{t_1}^\varphi)$. Then we define the death group
\begin{align*}
D_k^{t_1, t_2} \overset{\operatorname{def}}{=} ker(g_k^{t_1, t_2}).
\end{align*}
We say a homology class $\alpha\in H_k(\mathbb{K}_{t_1}^\varphi)$ is born at $t_1$ and dies at $t_2$ if (i) $\pi_{B_k^{t_1}}(\alpha)\ne 0$, (ii) $\pi_{B_k^{t_1}}(\alpha)\in D_{k}^{t_1, t_2}$, and (iii) $\pi_{B_k^{t_1}}(\alpha)\notin D_{k}^{t_1, t_2-\delta}$ for any $\delta\in(0, t_2-t_1)$. If $\alpha$ does not die, we artificially say that it dies at $T$, as $\mathbb{K}_T^\varphi=\mathbb{K}$. Then we denote $\operatorname{birth}(\alpha)=t_1$ and $\operatorname{death}(\alpha)=t_2$, and the persistence of $\alpha$ is defined as
\begin{align*}
\operatorname{pers}(\alpha) \overset{\operatorname{def}}{=} \operatorname{death}(\alpha) - \operatorname{birth}(\alpha).
\end{align*}
With the notions of $\operatorname{death}(\alpha)$ and $\operatorname{birth}(\alpha)$, the $k$-th PD of $\mathbb{K}$ with respect to $\varphi$ is defined as the following multiset of 2-dimensional points (see Definition 2 of \cite{mileyko2011probability}).
\begin{align}\label{Eq: def of PD}
\operatorname{Dgm}_k(\mathbb{K};\varphi) \overset{\operatorname{def}}{=} \bigg\{\big( \operatorname{birth}(\alpha), \operatorname{death}(\alpha)\big) \,\bigg\vert\, \alpha\in H_k(\mathbb{K}_t^\varphi) \mbox{ for some }t\in[0,T] \mbox{ with }\operatorname{pers}(\alpha)>0 \bigg\}\bigcup \mathfrak{D},
\end{align}
where $\big( \operatorname{birth}(\alpha_1), \operatorname{death}(\alpha_1) \big)$ and $\big( \operatorname{birth}(\alpha_2), \operatorname{death}(\alpha_2) \big)$ for $\alpha_1\ne\alpha_2$ are counted as two points even if $\alpha_1$ and $\alpha_2$ are born and die at the same times, respectively; that is, the multiplicity of the point $\big( \operatorname{birth}(\alpha_1), \operatorname{death}(\alpha_1) \big) = \big( \operatorname{birth}(\alpha_2), \operatorname{death}(\alpha_2) \big)$ is at least $2$. Here, $\mathfrak{D}$ denotes the diagonal $\{(t,t)\,|\, t\in\mathbb{R}\}$, where the multiplicity of each point on this diagonal is the cardinality of $\mathbb{Z}$. Since $\operatorname{birth}(\alpha)$ is no later than $\operatorname{death}(\alpha)$, the PD $\operatorname{Dgm}_k(\mathbb{K};\varphi)$ is contained in the triangular region $\{(s,t)\in\mathbb{R}^2: 0\le s\le t\le T\}$.
\paragraph*{Condition (\ref{Eq: topological invariants boundedness condition}) in the definition of $\mathscr{S}_{R,d}^M$} \textit{The function $\phi_\nu$ defined in Eq.~\eqref{Eq: Morse function 1}, the corresponding PDs defined by Eq.~\eqref{Eq: def of PD}, and the definition of $\operatorname{pers}(\cdot)$ provide the details of condition (\ref{Eq: topological invariants boundedness condition}). The notation $\#\{\cdot\}$ counts the multiplicity of the corresponding multiset.}
Generally, a persistence diagram is a countable multiset of points in the triangular region $\{(s,t)\in\mathbb{R}^2: 0\le s, t\le T \mbox{ and }s\le t\}$ together with $\mathfrak{D}$ (see Definition 2 of \cite{mileyko2011probability}). The collection of all persistence diagrams is denoted as $\mathscr{D}$. Obviously, all the $\operatorname{Dgm}_k(\mathbb{K};\varphi)$ defined in Eq.~\eqref{Eq: def of PD} are in $\mathscr{D}$. The following definition and stability result for the \textit{bottleneck distance} are from \cite{cohen2007stability}, and they will play important roles in the proof of Theorem \ref{lemma: The continuity lemma}.
\begin{definition}\label{def: bottleneck distance}
Let $\mathbb{K}$ be a compact topological space. $\varphi_1$ and $\varphi_2$ are two continuous real-valued functions on $\mathbb{K}$ such that $\mathbb{K}$ is tame with respect to both $\varphi_1$ and $\varphi_2$. The bottleneck distance between PDs $\operatorname{Dgm}_k(\mathbb{K};\varphi_1)$ and $\operatorname{Dgm}_k(\mathbb{K};\varphi_2)$ is defined as
\begin{align*}
W_\infty \Big(\operatorname{Dgm}_k(\mathbb{K};\varphi_1), \operatorname{Dgm}_k(\mathbb{K};\varphi_2) \Big) \overset{\operatorname{def}}{=} \inf_{\gamma} \Big(\sup \left\{\Vert \xi - \gamma(\xi) \Vert_{l^\infty} \, \Big\vert \, \xi\in \operatorname{Dgm}_k(\mathbb{K};\varphi_1)\right\} \Big),
\end{align*}
where $\gamma$ ranges over bijections from $\operatorname{Dgm}_k(\mathbb{K};\varphi_1)$ to $\operatorname{Dgm}_k(\mathbb{K};\varphi_2)$, and
\begin{align}\label{eq: def of l infinity norm}
\Vert \xi\Vert_{l^\infty} \overset{\operatorname{def}}{=} \max\{\vert \xi_1\vert , \vert \xi_2\vert\},\ \ \mbox{ for all }\xi=(\xi_1, \xi_2)^\intercal\in\mathbb{R}^2.
\end{align}
\end{definition}
\begin{theorem}\label{thm: bottleneck stability}
Let $\mathbb{K}$ be a compact and finitely triangulable topological space. $\varphi_1$ and $\varphi_2$ are two continuous real-valued functions on $\mathbb{K}$ such that $\mathbb{K}$ is tame with respect to both $\varphi_1$ and $\varphi_2$. Then, the bottleneck distance satisfies
\begin{align*}
W_\infty \Big(\operatorname{Dgm}_k(\mathbb{K};\varphi_1), \operatorname{Dgm}_k(\mathbb{K};\varphi_2) \Big) \le \sup_{x\in\mathbb{K}} \left\vert \varphi_1(x) - \varphi_2(x) \right\vert.
\end{align*}
\end{theorem}
\section{Computation of SECT}\label{section: Computation of SECT Using the Čech Complexes}
\noindent Let $K \subset\mathbb{R}^d$ be a shape of interest. Suppose a finite set of points $\{x_i\}_{i=1}^I\subset K$ and a radius $r>0$ are properly chosen such that
\begin{align}\label{eq: ball unions approx shapes}
\begin{aligned}
& K_t^\nu = \left\{x\in K \, \vert \, x\cdot\nu \le t-R \right\} \approx \bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)},\ \ \mbox{ for all }t\in[0,T]\mbox{ and }\nu\in\mathbb{S}^{d-1},\\
& \mbox{ where }\, \mathfrak{I}_t^\nu \overset{\operatorname{def}}{=} \left\{i\in\mathbb{N} \, \Big\vert \, 1\le i\le I \, \mbox{ and }\, x_i\cdot\nu\le t-R\right\},
\end{aligned}
\end{align}
and $\overline{B(x_i,r)} := \{x\in\mathbb{R}^d:\Vert x-x_i\Vert\le r\}$ denotes a closed ball centered at $x_i$ with radius $r$. For example, when $d=2$, centers $x_i$ may be chosen as a subset of the grid points
\begin{align*}
\left\{y_{j,j'} \overset{\operatorname{def}}{=} \left (-R+j\cdot\delta, \, -R+j'\cdot\delta \right)^\intercal \right\}_{j,j'=1}^J
\end{align*}
of the square $[-R,R]^2$ containing shape $K$, where $\delta=\frac{2R}{J}$ and radius $r=\delta$. Specifically,
\begin{align}\label{eq: approximation using grid points}
K_t^\nu \approx \bigcup_{y_{j,j'}\in K_t^\nu} \overline{B\left(y_{j,j'}, \, \delta\right)}\ \ \mbox{ for all }t\in[0,T]\mbox{ and }\nu\in\mathbb{S}^{d-1},
\end{align}
which is a special case of Eq.~\eqref{eq: ball unions approx shapes}. The shape approximation in Eq.~\eqref{eq: approximation using grid points} is illustrated by Figures \ref{fig: Computing_SECT}(a) and (b).
\paragraph*{Čech complexes} The Čech Complex determined by the point set $\{x_i\}_{i\in \mathfrak{I}_t^\nu}$ and radius $r$ in Eq.~\eqref{eq: ball unions approx shapes} is defined as the following simplicial complex
\begin{align*}
\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right) \overset{\operatorname{def}}{=} \left\{ \operatorname{conv}\left(\{x_i\}_{i\in s}\right) \, \Bigg\vert \, s\in 2^{\mathfrak{I}_t^\nu} \mbox{ and } \bigcap_{i\in s}\overline{B(x_i,r)}\ne\emptyset\right\},
\end{align*}
where $\operatorname{conv}\left(\{x_i\}_{i\in s}\right)$ denotes the convex hull generated by points $\{x_i\}_{i\in s}$. The nerve theorem (see Chapter III of \cite{edelsbrunner2010computational}) indicates that the Čech Complex $\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)$ and the union $\bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)}$ have the same homotopy type. Hence, they share the same Betti numbers, i.e.,
\begin{align*}
\beta_k\Big(\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)\Big) = \beta_k\left(\bigcup_{i\in \mathfrak{I}_t^\nu} \overline{B(x_i,r)}\right),\ \ \mbox{ for all }k\in\mathbb{Z}.
\end{align*}
Using the shape approximation in Eq.~\eqref{eq: ball unions approx shapes}, we have the following approximation for ECC
\begin{align}\label{eq: ECC approximation using Cech complexes}
\chi_t^{\nu}(K) \approx \sum_{k=0}^{d-1} (-1)^{k} \cdot \beta_k\Big(\check{C}_r\left( \{x_i\}_{i\in \mathfrak{I}_t^\nu} \right)\Big),\ \ \ t\in[0,T].
\end{align}
The method of computing the Betti numbers of simplicial complexes in Eq.~\eqref{eq: ECC approximation using Cech complexes} is standard and can be found in the literature (e.g., Chapter IV of \cite{edelsbrunner2010computational} and Section 3.1 of \cite{niyogi2008finding}). Then, the SECT of $K$ is estimated using Eq.~\eqref{Eq: definition of SECT}. The smoothing effect of the integrals in Eq.~\eqref{Eq: definition of SECT} reduces the estimation error.
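As a simplified, self-contained alternative to the Čech-complex computation above (a sketch under our own pixel-grid discretization, not the pipeline used for the results in this paper), the Euler characteristic of a sublevel set stored as a binary image can be obtained by counting the vertices, edges, and squares of the corresponding cubical complex, and the SECT in a fixed direction then follows from Riemann sums of the integrals in Eq.~\eqref{Eq: definition of SECT}.
\begin{verbatim}
import numpy as np

def euler_characteristic(img):
    """Euler characteristic of the union of the closed pixels marked True,
    computed as chi = #corners - #edges + #squares of the cubical complex."""
    img = np.asarray(img, dtype=bool)
    H, W = img.shape
    padded = np.zeros((H + 2, W + 2), dtype=bool)
    padded[1:-1, 1:-1] = img
    squares = int(img.sum())
    h_edges = int(np.sum(padded[:-1, 1:-1] | padded[1:, 1:-1]))   # (H+1) x W edges
    v_edges = int(np.sum(padded[1:-1, :-1] | padded[1:-1, 1:]))   # H x (W+1) edges
    corners = int(np.sum(padded[:-1, :-1] | padded[:-1, 1:]
                         | padded[1:, :-1] | padded[1:, 1:]))     # (H+1) x (W+1) corners
    return corners - (h_edges + v_edges) + squares

def ecc_and_sect(img, nu, T, Delta, half_width):
    """ECC chi_t^nu and the centered integral (SECT) of the pixelated shape."""
    H, W = img.shape
    xs = np.linspace(-half_width, half_width, H)
    ys = np.linspace(-half_width, half_width, W)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    height = X * nu[0] + Y * nu[1] + half_width               # x . nu + R for each pixel
    t_grid = T / Delta * np.arange(1, Delta + 1)
    ecc = np.array([euler_characteristic(img & (height <= t)) for t in t_grid])
    integral = np.cumsum(ecc) * (T / Delta)                    # int_0^t chi_tau^nu d tau
    sect = integral - t_grid / T * integral[-1]                # subtract the linear trend
    return ecc, sect
\end{verbatim}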
\paragraph*{Computing the SECT for our proof-of-concept and simulation examples} For the shape $K^{(1)}$ defined in Eq. \eqref{eq: example shapes K1 and K2} of Section \ref{Proof-of-Concept Simulation Examples I: Deterministic Shapes}, we estimate the SECT of $K^{(1)}$ using the aforementioned Čech complex approach with the following setup: $R=\frac{3}{2}$, $r=\frac{1}{5}$, and the point set $\{x_i\}_i$ is equal to the following collection
\begin{align}\label{eq: the centers of our shape approximation}
\left\{\left(\frac{2}{5}+\cos t_j, \sin t_j\right) \,\Bigg\vert\, t_j=\frac{\pi}{5}+\frac{j}{J}\cdot\frac{8\pi}{5}\right\}_{j=1}^J\bigcup\left\{\left(-\frac{2}{5}+\cos t_j, \sin t_j\right) \,\Bigg\vert\, t_j=\frac{6\pi}{5}+\frac{j}{J}\cdot\frac{8\pi}{5}\right\}_{j=1}^J,
\end{align}
where $J$ is a sufficiently large integer. We use $J=100$ in our proof-of-concept example. Figure \ref{fig: Computing_SECT}(c) illustrates the shape approximation using this setup. The SECT for the other shapes in our proof-of-concept/simulation examples is estimated in a similar way.
\begin{figure}
\caption{Illustrations of the shape approximation in Eq.~\eqref{eq: ball unions approx shapes}: panels (a) and (b) correspond to the grid-point approximation in Eq.~\eqref{eq: approximation using grid points}, and panel (c) corresponds to the point set in Eq.~\eqref{eq: the centers of our shape approximation}.}
\label{fig: Computing_SECT}
\end{figure}
\section{Runtime of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions}}\label{section: Runtime}
We simulate 20 different data sets for each combination of the parameters. The mean and standard deviation (in parentheses) of the runtime across the 20 simulations are presented in Table \ref{table: runtime of Algorithm 1}. The source code for the runtime study is publicly available online through the link provided at the end of the paper. The runtime study is conducted on a computer with an AMD Ryzen 7 5800H processor running at 3200 MHz using 16 GB of RAM, running Windows version 21H2.
\begin{table}[H]
\centering
\caption{Runtime of Algorithms \ref{algorithm: testing hypotheses on mean functions} and \ref{algorithm: permutation-based testing hypotheses on mean functions} (in seconds) based on $\Gamma, \Delta$, and $n$.}\label{table: runtime of Algorithm 1}
\vspace*{0.5em}
\begin{tabular}{|ccccc|}
\hline
\multicolumn{5}{|c|}{Algorithm \ref{algorithm: testing hypotheses on mean functions}}\\
\hline
\multicolumn{1}{|c|}{Number of directions} & \multicolumn{1}{c|}{Number of levels} & \multicolumn{1}{c|}{\textit{n} = 25} & \multicolumn{1}{c|}{\textit{n} = 50} &\multicolumn{1}{c|}{\textit{n} = 100} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{0.75 (0.10)} & \multicolumn{1}{c|}{1.40 (0.08)} & \multicolumn{1}{c|}{2.23 (0.09)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=2$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{1.26 (0.09)} & \multicolumn{1}{c|}{2.36 (0.08)} & \multicolumn{1}{c|}{4.25 (0.10)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{2.32 (0.09)} & \multicolumn{1}{c|}{4.52 (0.15)} & \multicolumn{1}{c|}{8.10 (0.21)} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{1.25 (0.09)} & \multicolumn{1}{c|}{2.26 (0.08)} & \multicolumn{1}{c|}{4.13 (0.15)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=4$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{2.26 (0.09)} & \multicolumn{1}{c|}{4.29 (0.14)} &\multicolumn{1}{c|}{8.00 (0.18)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{4.28 (0.13)} & \multicolumn{1}{c|}{8.31 (0.17)} & \multicolumn{1}{c|}{15.77 (0.29)} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{2.27 (0.10)} & \multicolumn{1}{c|}{4.25 (0.12)} & \multicolumn{1}{c|}{8.06 (0.21)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=8$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{4.32 (0.13)} & \multicolumn{1}{c|}{8.39 (0.17)} & \multicolumn{1}{c|}{15.81 (0.14)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{8.38 (0.12)} & \multicolumn{1}{c|}{16.69 (0.34)} & \multicolumn{1}{c|}{32.69 (1.16)} \\ \hline
\multicolumn{5}{|c|}{Algorithm \ref{algorithm: permutation-based testing hypotheses on mean functions}}\\
\hline
\multicolumn{1}{|c|}{Number of directions} & \multicolumn{1}{c|}{Number of levels} & \multicolumn{1}{c|}{\textit{n} = 25} & \multicolumn{1}{c|}{\textit{n} = 50} &\multicolumn{1}{c|}{\textit{n} = 100} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{1.53 (0.14)} & \multicolumn{1}{c|}{2.89 (0.11)} & \multicolumn{1}{c|}{4.20 (0.13)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=2$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{2.69 (0.12)} & \multicolumn{1}{c|}{4.18 (0.15)} & \multicolumn{1}{c|}{7.34 (0.24)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{4.89 (0.25)} & \multicolumn{1}{c|}{9.12 (0.26)} & \multicolumn{1}{c|}{13.89 (0.83)} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{2.07 (0.17)} & \multicolumn{1}{c|}{3.55 (0.15)} & \multicolumn{1}{c|}{6.16 (0.18)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=4$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{3.52 (0.21)} & \multicolumn{1}{c|}{6.31 (0.13)} &\multicolumn{1}{c|}{11.19 (0.28)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{6.97 (0.22)} & \multicolumn{1}{c|}{12.60 (0.67)} & \multicolumn{1}{c|}{21.62 (1.39)} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=25$} & \multicolumn{1}{c|}{3.12 (0.13)} & \multicolumn{1}{c|}{5.64 (0.19)} & \multicolumn{1}{c|}{10.22 (0.22)} \\ \cline{2-5}
\multicolumn{1}{|c|}{$\Gamma=8$} & \multicolumn{1}{c|}{$\Delta=50$} & \multicolumn{1}{c|}{5.64 (0.17)} & \multicolumn{1}{c|}{10.46 (0.16)} & \multicolumn{1}{c|}{19.36 (0.48)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\Delta=100$} & \multicolumn{1}{c|}{11.05 (0.17)} & \multicolumn{1}{c|}{20.42 (0.31)} & \multicolumn{1}{c|}{37.85 (1.87)} \\ \hline
\end{tabular}
\end{table}
\section{Proofs}\label{section: appendix, proofs}
\begin{theorem}\label{thm: the separability of C(Shere;H)}
(i) Let $\mathcal{H}$ be a separable Hilbert space. Then, $C(\mathbb{S}^{d-1};\mathcal{H})$ is separable.\\
(ii) Let $\mathcal{H}$ be a separable Hilbert space. Then, $C(\mathbb{S}^{d-1};\mathcal{H})$ is a Banach space.\\
(iii) $C(\mathbb{S}^{d-1};H_0^1([0,T]))$ is a separable Banach space.
\end{theorem}
\noindent\textbf{Remark:} The results in Theorem \ref{thm: the separability of C(Shere;H)} are well-known. The proof of Theorem \ref{thm: the separability of C(Shere;H)} is provided herein only for the convenience of the readers.
\begin{proof}
Since $H_0^1([0,T])$ is a separable Hilbert space (see Chapter 8.3 of \cite{brezis2011functional}), it suffices to show the results (i) and (ii).
The separability of $\mathcal{H}$ implies that $\mathcal{H}$ has an orthonormal basis $\{\boldsymbol{e}_j\}_{j=1}^\infty$ (see Theorem 5.11 of \cite{brezis2011functional}). Since $C(\mathbb{S}^{d-1})=C(\mathbb{S}^{d-1};\mathbb{R})$ is separable (see Chapter 3.6 of \cite{brezis2011functional}), $C(\mathbb{S}^{d-1})$ has a dense and countable subset $D$. Then, the collection of finite sums $\widetilde{D}\overset{\operatorname{def}}{=}\{\sum_{j=1}^n g_j\boldsymbol{e}_j\,\vert\, n\in\mathbb{N} \mbox{ and } g_j\in D \mbox{ for } j=1,\cdots,n\}$ is a dense and countable subset of $C(\mathbb{S}^{d-1};\mathcal{H})$, and the reasoning is the following.
For any $f\in C(\mathbb{S}^{d-1};\mathcal{H})$, we have
\begin{align*}
f(\nu)=\sum_{j=1}^\infty \langle f(\nu), \boldsymbol{e}_j\rangle \boldsymbol{e}_j,\ \ \mbox{ for each }\nu\in\mathbb{S}^{d-1},
\end{align*}
where $\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathcal{H}$ and $\sum_{j=1}^\infty$ converges in the $\mathcal{H}$-topology. It is straightforward that the function $\nu\mapsto \langle f(\nu), \boldsymbol{e}_j\rangle$ is an element of $C(\mathbb{S}^{d-1})$, for each fixed $j=1,2,\cdots$. Hence, for any $\epsilon>0$, there exists $\{g_j\}_{j=1}^\infty\subset D$ such that
\begin{align*}
\sup_{\nu\in\mathbb{S}^{d-1}}\left\vert \langle f(\nu), \boldsymbol{e}_j\rangle -g_j(\nu) \right\vert<\frac{\epsilon}{2^{j+1}},
\end{align*}
for all $j=1,2,\cdots$, which implies
\begin{align*}
\left\Vert f - \sum_{j=1}^\infty g_j \boldsymbol{e}_j \right\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})} &= \sup_{\nu\in\mathbb{S}^{d-1}} \left\Vert f(\nu) - \sum_{j=1}^\infty g_j(\nu) \boldsymbol{e}_j \right\Vert_{\mathcal{H}} \\
& = \sup_{\nu\in\mathbb{S}^{d-1}} \left\Vert \sum_{j=1}^\infty \Big(\langle f(\nu), \boldsymbol{e}_j\rangle-g_j(\nu)\Big) \boldsymbol{e}_j \right\Vert_{\mathcal{H}}\\
&\le \sup_{\nu\in\mathbb{S}^{d-1}} \sum_{j=1}^\infty \left\vert \langle f(\nu), \boldsymbol{e}_j\rangle -g_j(\nu) \right\vert <\epsilon.
\end{align*}
Since $\{\sum_{j=1}^n g_j \boldsymbol{e}_j\}_{n=1}^\infty \subset \widetilde{D}$, the proof of result (i) is completed.
Result (ii) can be proved using the same argument as in the proof of result (i).
\end{proof}
\paragraph*{Proof of Theorem \ref{thm: boundedness topological invariants theorem}} The following inclusion is straightforward
\begin{align*}
\left\{\operatorname{Dgm}_k(K;\phi_{\nu})\cap (-\infty,t)\times(t,\infty)\right\} \subset \left\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \vert \, \operatorname{pers}(\xi)>0\right\},
\end{align*}
where the definitions of $\operatorname{Dgm}_k(K;\phi_{\nu})$ and $\operatorname{pers}(\xi)$ are provided in Appendix \ref{The Relationship between PHT and SECT}. Together with the $k$-triangle lemma (see \cite{edelsbrunner2000topological} and \cite{cohen2007stability}), this inclusion implies
\begin{align*}
\beta_k(K_t^\nu) &=\# \{\operatorname{Dgm}_k(K;\phi_{\nu})\cap (-\infty,t)\times(t,\infty)\} \\
& \le \# \{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu}) \, \vert \, \operatorname{pers}(\xi)>0\}\le\frac{M}{d},
\end{align*}
for all $k\in\{0,\cdots,d-1\}$, all $\nu\in\mathbb{S}^{d-1}$, and all $t$ that are not HCPs in direction $\nu$, where $\#\{\cdot\}$ counts the multiplicity of the multisets. Then, result (i) comes from the RCLL property of the functions $\{\beta_k(K^\nu_{t})\}_{t\in[0,T]}$. Eq.~\eqref{Eq: first def of Euler characteristic curve} implies
\begin{align*}
\sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\chi_{t}^\nu(K)\right\vert\right) & = \sup_{\nu\in\mathbb{S}^{d-1}}\left(\sup_{0\le t\le T}\left\vert\sum_{k=0}^{d-1} (-1)^{k}\cdot\beta_k(K_t^{\nu})\right\vert\right) \\
&\le \sup_{\nu\in\mathbb{S}^{d-1}}\left[\sup_{0\le t\le T} \left( d\cdot\sup_{k\in\{0,\cdots,d-1\}}\beta_k(K_t^{\nu})\right) \right] \le M,
\end{align*}
which follows from result (i) and gives result (ii).
$\square$
\paragraph*{Proof of Theorem \ref{thm: Sobolev function paths}} Because of
\begin{align*}
\left\{\int_0^t \chi_\tau^\nu(K) d\tau\right\}_{t\in[0,T]} &\in\{\mbox{all absolutely continuous functions on }[0,T]\}\\
&=\{x\in L^1([0,T]): \mbox{the weak derivative $x'$ exists and }x'\in L^1([0,T])\}\\
& \overset{\operatorname{def}}{=} W^{1,1}([0,T])
\end{align*}
(see the Remark 8 after Proposition 8.3 in \cite{brezis2011functional} for details), the weak derivative of $\{\int_0^t \chi_\tau^\nu(K) d\tau\}_{t\in[0,T]}$ exists. This weak derivative is $\{\chi_t^\nu(K)\}_{t\in[0,T]}$: the tameness of $K$ makes $\chi_{(\cdot)}^\nu(K)$ a bounded RCLL step function, so Fubini's theorem (equivalently, integration by parts) gives $-\int_0^T \big(\int_0^t \chi_\tau^\nu(K) d\tau\big)\varphi'(t)\,dt=\int_0^T \chi_t^\nu(K)\varphi(t)\,dt$ for every smooth $\varphi$ compactly supported in $(0,T)$, which is the definition of the weak derivative. The proof of result (i) is completed. For simplicity of notation, we denote
\begin{align*}
F(t) \overset{\operatorname{def}}{=} \int_0^t\chi_\tau^\nu(K) d\tau - \frac{t}{T} \int_0^T \chi_\tau^\nu(K) d\tau, \ \ \mbox{ for }t\in[0,T].
\end{align*}
Theorem \ref{thm: boundedness topological invariants theorem} implies
\begin{align*}
\vert F(t)\vert \le \int_0^T \vert\chi_\tau^\nu(K) \vert d\tau + \frac{t}{T} \int_0^T \vert\chi_\tau^\nu(K)\vert d\tau \le 2TM,\ \ \mbox{ for }t\in[0,T].
\end{align*}
Hence, $F\in L^p([0,T])$ for $p\in[1,\infty)$. Result (i) implies that the weak derivative of $F$ exists and equals $F'(t)=\chi_t^\nu(K)-\frac{1}{T}\int_0^T \chi_\tau^\nu(K) d\tau$. We have the boundedness
\begin{align*}
\vert F'(t)\vert \le \vert \chi_{t}^\nu(K) \vert + \frac{1}{T} \int_{0}^T \vert \chi_\tau^\nu(K)\vert d\tau \le 2M, \ \ \mbox{ for }t\in[0,T],
\end{align*}
which implies $F'\in L^p([0,T])$ for $p\in[1,\infty)$. Furthermore, $F(0)=F(T)=0$, together with the discussion above, implies $F\in W^{1,p}_0([0,T])$ for all $p\in[1,\infty)$ (see the Theorem 8.12 of \cite{brezis2011functional}). Theorem 8.8 and the Remark 8 after Proposition 8.3 in \cite{brezis2011functional} imply $W^{1,p}_0([0,T]) \subset \mathcal{B}$ for $p\in[1,\infty)$. Result (ii) follows.
$\square$
The following lemmas are prepared for the proof of Theorem \ref{lemma: The continuity lemma}.
\begin{lemma}\label{lemma: stability lemma 1}
Suppose $K\in\mathscr{S}_{R,d}^M$. Then, we have the following estimate for all $t$ that are neither HCPs in direction $\nu_1$ nor HCPs in direction $\nu_2$.
\begin{align}\label{Eq: counting estimate}
\begin{aligned}
& \Upsilon_k(t;\nu_1, \nu_2) \overset{\operatorname{def}}{=} \left\vert \beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \right\vert \\
& \le \#\left\{x\in \operatorname{Dgm}_k(K;\phi_{\nu_1}) \, \Big\vert \, x\ne\gamma^*(x) \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{aligned}
\end{align}
where $\underline{(x,\gamma^*(x))}$ denotes the straight line segment connecting points $x$ and $\gamma^*(x)$ in $\mathbb{R}^2$, $\gamma^*$ is any optimal bijection such that
\begin{align}\label{eq: optimal bijection condition}
W_\infty \Big(\operatorname{Dgm}_k(K;\phi_{\nu_1}), \operatorname{Dgm}_k(K;\phi_{\nu_2}) \Big) = \sup \Big\{\Vert \xi - \gamma^*(\xi) \Vert_{l^\infty} \, \Big\vert \, \xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\Big\}
\end{align}
(see Definition \ref{def: bottleneck distance}, and $\Vert\cdot\Vert_{l^\infty}$ is defined in Eq.~\eqref{eq: def of l infinity norm}), and ``$\#$" counts the corresponding multiplicity.
\end{lemma}
\begin{remark}
Because $(\mathscr{D}, W_\infty)$ is a geodesic space, the optimal bijection $\gamma^*$ does exist (see Proposition 1 of \cite{turner2013means} and its proof therein).
\end{remark}
\begin{proof}
Since $t$ is not an HCP, neither $\operatorname{Dgm}_k(K;\phi_{\nu_1})$ nor $\operatorname{Dgm}_k(K;\phi_{\nu_2})$ has a point on the boundary $\partial\big((-\infty,t)\times(t,\infty)\big)$. If $\beta_k(K_t^{\nu_1}) = \beta_k(K_t^{\nu_2})$, Eq.~\eqref{Eq: counting estimate} is true. Otherwise, without loss of generality, we assume $\beta_k(K_t^{\nu_1}) > \beta_k(K_t^{\nu_2})$. Notice
\begin{align*}
\beta_k(K_t^{\nu_i})=\#\left\{\operatorname{Dgm}_k(K;\phi_{\nu_i})\bigcap (-\infty,t)\times(t,\infty) \right\},\ \ \mbox{ for }i\in\{1,2\}.
\end{align*}
Let $\gamma^*$ be any optimal bijection. Since $\operatorname{Dgm}_k(K;\phi_{\nu_2})$ has only $\beta_k(K_t^{\nu_2})$ points (counted with multiplicity) inside the open quadrant $(-\infty,t)\times(t,\infty)$, at least $\beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2})$ of the points $x\in \operatorname{Dgm}_k(K;\phi_{\nu_1})$ lying inside this quadrant must be mapped by $\gamma^*$ to points outside it; for each such $x$ we have $x\ne\gamma^*(x)$, and the straight line segment $\underline{(x,\gamma^*(x))}$ crosses $\partial\big((-\infty,t)\times(t,\infty)\big)$. Hence,
\begin{align*}
&\beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \\
& \le \#\left\{x\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\,\Big\vert\, x\ne\gamma^*(x) \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{align*}
and Eq.~\eqref{Eq: counting estimate} follows.
\end{proof}
\begin{lemma}\label{lemma: stability lemma 2}
Suppose $K\in\mathscr{S}_{R,d}^M$. Except for finitely many $t$, we have
\begin{align*}
\Upsilon_k(t;\nu_1, \nu_2) \le \frac{2M}{d} \cdot \mathbf{1}_{\mathcal{T}_k}(t),\ \ \mbox{where}
\end{align*}
\begin{align*}
\mathcal{T}_k \overset{\operatorname{def}}{=} \left\{t\in[0,T] \mbox{ neither an HCP in direction }\nu_1\mbox{ nor in direction }\nu_2\,\Big\vert\, \mbox{there exists } x\in \operatorname{Dgm}_k(K;\phi_{\nu_1}) \mbox{ such that } x\ne\gamma^*(x) \right.
\\ \left. \mbox{ and } \underline{(x,\gamma^*(x))}\bigcap\partial\big((-\infty,t)\times(t,\infty)\big)\ne\emptyset\right\},
\end{align*}
and $\gamma^*: \operatorname{Dgm}_k(K;\phi_{\nu_1})\rightarrow \operatorname{Dgm}_k(K;\phi_{\nu_2})$ is any optimal bijection satisfying Eq.~\eqref{eq: optimal bijection condition}.
\end{lemma}
\begin{proof}
Theorem \ref{thm: boundedness topological invariants theorem} implies
\begin{align*}
\Upsilon_k(t;\nu_1, \nu_2) = \left\vert \beta_k(K_t^{\nu_1}) - \beta_k(K_t^{\nu_2}) \right\vert \le 2M/d.
\end{align*}
Furthermore, the inequality in Eq.~\eqref{Eq: counting estimate} indicates that $\Upsilon_k(t;\nu_1, \nu_2)=0$ if $t\notin\mathcal{T}_k$, except for finitely many HCPs in directions $\nu_1$ and $\nu_2$. Then the desired estimate follows.
\end{proof}
\paragraph*{Proof of Theorem \ref{lemma: The continuity lemma}} The definition of Euler characteristic (see Eq.~\eqref{Eq: first def of Euler characteristic curve}) and Lemma \ref{lemma: stability lemma 2} imply the following: for $p\in[1,\infty)$, we have
\begin{align}\label{Eq: estimate in proof}
\begin{aligned}
& \int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \\
& = \int_0^T \left\vert \sum_{k=0}^{d-1} (-1)^k\cdot\Big(\beta_k(K_\tau^{\nu_1}) - \beta_k(K_\tau^{\nu_2}) \Big) \right\vert^p d\tau \\
& \le \int_0^T \left(\sum_{k=0}^{d-1}\Upsilon_k(\tau;\nu_1,\nu_2)\right)^p d\tau \\
& \le d^{(p-1)} \cdot \sum_{k=0}^{d-1} \int_0^T \Big(\Upsilon_k(\tau;\nu_1,\nu_2)\Big)^p d\tau \\
& \le \frac{(2M)^p}{d} \cdot \sum_{k=0}^{d-1} \int_{\mathcal{T}_k} d\tau \\
& \le \frac{(2M)^p}{d} \cdot \sum_{k=0}^{d-1} \left(\sum_{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})} 2\cdot\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}\right),
\end{aligned}
\end{align}
where the last inequality follows from the definition of $\mathcal{T}_k$: for each $\xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})$ with $\xi\ne\gamma^*(\xi)$, the segment $\underline{(\xi,\gamma^*(\xi))}$ can intersect $\partial\big((-\infty,t)\times(t,\infty)\big)$ only when $t$ lies between the corresponding coordinates of $\xi$ and $\gamma^*(\xi)$, a set of $t$ of Lebesgue measure at most $2\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}$. Since $\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}$ can be positive only if $\operatorname{pers}(\xi)>0$ or $\operatorname{pers}(\gamma^*(\xi))>0$, there are at most $N$ terms with $\Vert \xi-\gamma^*(\xi)\Vert_{l^\infty}>0$, where condition (\ref{Eq: topological invariants boundedness condition}) implies
\begin{align*}
N \overset{\operatorname{def}}{=} \sum_{i=1}^2 \#\{\xi\in \operatorname{Dgm}_k(K;\phi_{\nu_i}) \,\vert \, \operatorname{pers}(\xi)>0\} \le 2M/d.
\end{align*}
Therefore, the inequality in Eq.~\eqref{Eq: estimate in proof} implies
\begin{align*}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau & \le \frac{2 \cdot (2M)^{(p+1)}}{d} \cdot \sup\Big\{ \Vert \xi-\gamma^*(\xi)\Vert_{l^\infty} \,\Big\vert\, \xi\in \operatorname{Dgm}_k(K;\phi_{\nu_1})\Big\} \\
& = \frac{2 \cdot (2M)^{(p+1)}}{d} \cdot W_\infty \Big(\operatorname{Dgm}_k(K;\phi_{\nu_1}), \operatorname{Dgm}_k(K;\phi_{\nu_2}) \Big).
\end{align*}
Then, Theorem \ref{thm: bottleneck stability} implies
\begin{align*}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \le \frac{2\cdot(2M)^{(p+1)}}{d} \cdot \sup_{x\in K} \vert x\cdot(\nu_1-\nu_2)\vert.
\end{align*}
Additionally, $\vert x\cdot(\nu_1-\nu_2)\vert\le \Vert x\Vert\cdot\Vert \nu_1-\nu_2\Vert$ and $K\subset\overline{B(0,R)}$ provide
\begin{align}\label{Eq: Continuity inequality lemma}
\int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^p d\tau \le \frac{2 \cdot R \cdot (2M)^{(p+1)}}{d} \cdot \Vert \nu_1-\nu_2\Vert.
\end{align}
Define the constant $C^*_{M,R,d}$ as follows
\begin{align*}
C^*_{M,R,d} \overset{\operatorname{def}}{=} \sqrt{ \frac{16M^3R}{d} + \frac{32M^3R}{d} + \frac{64 M^4 R}{d^2} } .
\end{align*}
Setting $p=2$, Eq.~\eqref{Eq: Continuity inequality lemma} implies the following
\begin{align*}
\left( \int_0^T \left\vert\Big\{\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\} \right\vert^2 d\tau \right)^{1/2} \le \sqrt{ \frac{16M^3R}{d} \cdot \Vert \nu_1-\nu_2\Vert } \le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert},
\end{align*}
which is the inequality in Eq.~\eqref{Eq: continuity inequality}.
The definition of $SECT(K)$, together with Eq.~\eqref{Eq: Continuity inequality lemma}, implies
\begin{align*}
& \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert^2_{\mathcal{H}} \\
& = \int_0^T \left\vert \frac{d}{dt}SECT(K)(\nu_1;t) - \frac{d}{dt}SECT(K)(\nu_2;t) \right\vert^2 dt \\
& = \int_0^T \left\vert \Big(\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big) - \frac{1}{T} \int_0^T \Big(\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big) d\tau \right\vert^2 dt\\
& \le \int_0^T \left( \Big\vert\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big\vert + \frac{1}{T} \int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert d\tau \right)^2 dt \\
& \le 2\int_0^T \Big\vert\chi_t^{\nu_1}(K)-\chi_t^{\nu_2}(K)\Big\vert^2 dt + \frac{2}{T}\left(\int_0^T \Big\vert\chi_\tau^{\nu_1}(K)-\chi_\tau^{\nu_2}(K)\Big\vert d\tau \right)^2 \\
& \le \frac{32M^3R}{d} \cdot \Vert \nu_1 - \nu_2\Vert + \frac{64 M^4 R}{d^2} \cdot \Vert
\nu_1-\nu_2 \Vert^2,
\end{align*}
where the last inequality above comes from Eq.~\eqref{Eq: Continuity inequality lemma}. Then, we have
\begin{align*}
& \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert_{\mathcal{H}} \\
& \le \sqrt{ \frac{32M^3R}{d} \cdot \Vert \nu_1 - \nu_2\Vert + \frac{64 M^4 R}{d^2} \cdot \Vert
\nu_1-\nu_2 \Vert^2 } \\
& \le C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 }.
\end{align*}
The proof of the second inequality in result (i) is completed.
The law of cosines (for unit vectors, $\Vert \nu_1-\nu_2\Vert^2 = 2-2\cos\left(d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\right)$) and Taylor's expansion indicate
\begin{align*}
\frac{\Vert \nu_1-\nu_2 \Vert}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} & = \sqrt{ 2 \cdot \frac{1-\cos\left(d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\right)}{\left\{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\right\}^2} }\\
& = \sqrt{ 2 \cdot \left[ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{(2n)!} \cdot \Big\{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)\Big\}^{2n-2} \right] } = O(1).
\end{align*}
Then, result (ii) comes from the following
\begin{align}\label{eq: 1/2 Holder argument}
\begin{aligned}
& \frac{\Vert SECT(K)(\nu_1)-SECT(K)(\nu_2) \Vert_{\mathcal{H}}}{\sqrt{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)}}\\
& \le C^*_{M,R,d} \cdot \sqrt{ \frac{\Vert \nu_1 - \nu_2\Vert}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} + \frac{\Vert
\nu_1-\nu_2 \Vert^2}{d_{\mathbb{S}^{d-1}}(\nu_1, \nu_2)} }=O(1).
\end{aligned}
\end{align}
We consider the following inequality for all $\nu_1,\nu_2\in\mathbb{S}^{d-1}$ and $t_1, t_2\in[0,T]$
\begin{align}\label{eq: pm terms for Holder}
\begin{aligned}
&\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_2; t_2)\vert\\
&\le \vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_1; t_2)\vert\\
& + \vert SECT(K)(\nu_1; t_2)-SECT(K)(\nu_2; t_2)\vert\\
& \overset{\operatorname{def}}{=} I+II.
\end{aligned}
\end{align}
From the definition of $\Vert\cdot\Vert_{C^{0,\frac{1}{2}}([0,T])}$ and Eq.~\eqref{eq: Sobolev embedding from Morrey}, we have
\begin{align*}
& \sup_{t_1, t_2\in[0,T], \, t_1\ne t_2}\frac{\vert SECT(K)(\nu_1; t_1)-SECT(K)(\nu_1; t_2)\vert}{\vert t_1-t_2 \vert^{1/2}} \\
& \le \Vert SECT(K)(\nu_1)\Vert_{C^{0,\frac{1}{2}}([0,T])}\\
& \le \widetilde{C}_T \Vert SECT(K)(\nu_1)\Vert_\mathcal{H}\\
& \le \widetilde{C}_T \Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})},
\end{align*}
which implies $I\le \widetilde{C}_T \Vert SECT(K)\Vert_{C(\mathbb{S}^{d-1};\mathcal{H})} \cdot \vert t_1-t_2 \vert^{1/2}$ for all $t_1, t_2\in[0,T]$. Applying Eq.~\eqref{eq: Sobolev embedding from Morrey} again, we have
\begin{align*}
II & \le \Vert SECT(K)(\nu_1)-SECT(K)(\nu_2)\Vert_{\mathcal{B}} \\
& \le \widetilde{C}_T \Vert SECT(K)(\nu_1)-SECT(K)(\nu_2)\Vert_{\mathcal{H}}.
\end{align*}
Then, the inequality in Eq.~\eqref{eq: bivariate Holder continuity} follows from Eq.~\eqref{eq: pm terms for Holder} and result (ii). With an argument similar to Eq.~\eqref{eq: 1/2 Holder argument}, the function $(\nu,t)\mapsto SECT(K)(\nu;t)$ belongs to $C^{0,\frac{1}{2}}(\mathbb{S}^{d-1}\times[0,T];\mathbb{R})$. The proof of Theorem \ref{lemma: The continuity lemma} is completed.
$\square$
The proof of Theorem \ref{thm: invertibility} needs the following concepts, which can be found in \cite{andradas2012constructible}.
\begin{definition}\label{Definition: constructible sets}
(Constructible sets) (i) A locally closed set is a subset of a topological space that is the intersection of an open and a closed subset. (ii) A constructible set is a finite union of the aforementioned locally closed sets.
\end{definition}
\paragraph*{Proof of Theorem \ref{thm: invertibility}} Since $K\in\mathscr{S}_{R,d}^M$ is a finitely triangulable subset of $\mathbb{R}^d$, the shape $K$ is homeomorphic to a polytope $\vert S\vert = \bigcup_{s\in S} s \subset\mathbb{R}^d$, where $S$ is a finite simplicial complex and each $s$ is a simplex. We may assume $K=\vert S\vert$ without loss of generality. Since each simplex $s$ is closed in $\mathbb{R}^d$ and $s=\mathbb{R}^d\bigcap s$, each simplex is a locally closed set; hence, $K$ is constructible (see Definition \ref{Definition: constructible sets}). Then, Corollary 6 of \cite{ghrist2018persistent} implies the desired result.
$\square$
\paragraph*{Proof of Theorem \ref{Thm: metric theorem for shapes}}
To simplify the notations of the proof, we introduce another topological invariant --- the primitive Euler characteristic transform (PECT) --- which is related to SECT. The PECT is defined as follows
\begin{align}\label{Eq: def of PECT}
\begin{aligned}
& PECT: \ \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1};\mathcal{H}_{BM}),\ \ \ K \mapsto PECT(K) \overset{\operatorname{def}}{=} \{PECT(K)(\nu)\}_{\nu\in\mathbb{S}^{d-1}}, \\
& \mbox{where }\ PECT(K)(\nu) \overset{\operatorname{def}}{=} \left\{\int_0^t \chi_\tau^\nu(K) d\tau\right\}_{t\in[0,T]},
\end{aligned}
\end{align}
and $\mathcal{H}_{BM} \overset{\operatorname{def}}{=} \{f\in L^2([0,T]) \,\vert\, \mbox{weak derivative }f' \mbox{ exists, }f'\in L^2([0,T]), \mbox{ and }f(0)=0 \}$ is a separable Hilbert space equipped with the inner product $\langle f, g \rangle_{\mathcal{H}_{BM}} = \int_0^T f'(t)g'(t) dt$. The inequality in Eq.~\eqref{Eq: continuity inequality}, together with the weak derivative of $PECT(K)(\nu)$ being $\{\chi_t^\nu(K)\}_{t\in[0,T]}$ (see Theorem \ref{thm: Sobolev function paths}), implies that
\begin{align*}
\left\Vert PECT(K)(\nu_1)-PECT(K)(\nu_2)\right\Vert_{\mathcal{H}_{BM}}\le C^*_{M,R,d} \cdot \sqrt{\Vert \nu_1-\nu_2\Vert}.
\end{align*}
Therefore, $PECT(K)\in C(\mathbb{S}^{d-1}; \mathcal{H}_{BM})$, as claimed in Eq.~\eqref{Eq: def of PECT}. PECT relates to SECT via the following
\begin{align}\label{eq: relationship between PECT and SECT}
SECT(K)(\nu;t)=PECT(K)(\nu;t)-\frac{t}{T}PECT(K)(\nu;T),
\end{align}
for all $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$. Additionally, Theorem \ref{thm: invertibility}, together with Eq.~\eqref{eq: relationship between PECT and SECT}, implies that the $PECT$ in Eq.~\eqref{Eq: def of PECT} is invertible.
The triangle inequality and symmetry of $\rho$ follow from those of the metric of $C(\mathbb{S}^{d-1};\mathcal{H})$. The equation $\rho(K_1, K_2)=0$ indicates $\Vert PECT(K_1)(\nu)-PECT(K_2)(\nu)\Vert_{\mathcal{H}_{BM}}=0$ for all $\nu\in\mathbb{S}^{d-1}$. Theorem 5 of Chapter 5.6 in \cite{evans2010partial} then implies $\Vert PECT(K_1)(\nu)-PECT(K_2)(\nu)\Vert_{\mathcal{B}}=0$ for all $\nu\in\mathbb{S}^{d-1}$. Hence, $\int_0^t\chi_\tau^\nu(K_1) d\tau=\int_0^t\chi_\tau^\nu(K_2) d\tau$ for all $t\in[0,T]$ and $\nu\in\mathbb{S}^{d-1}$, so $SECT(K_1)=SECT(K_2)$. The invertibility in Theorem \ref{thm: invertibility} then implies $K_1=K_2$. Therefore, $\rho$ is a distance, and the proof of result (i) is completed.
The proof of result (ii) is motivated by the following chain of maps for any fixed $\nu\in\mathbb{S}^{d-1}$ and $t\in[0,T]$.
\begin{align*}
& \mathscr{S}_{R,d}^M \ \ \ \xrightarrow{PECT}\ \ \ C(\mathbb{S}^{d-1};\mathcal{H}_{BM})\ \ \ \xrightarrow{\text{projection}}\ \ \ \mathcal{H}_{BM}\mbox{, which is embedded into }\mathcal{B} \ \ \ \xrightarrow{\text{projection}} \ \ \mathbb{R}, \\
& K \mapsto \left\{PECT(K)(\nu')\right\}_{\nu'\in\mathbb{S}^{d-1}} \mapsto \left\{PECT(K)(\nu;t') \right\}_{t'\in[0,T]} \ \mapsto PECT(K)(\nu;t)=\int_0^{t} \chi_{\tau}^\nu(K) d\tau,
\end{align*}
where all spaces above are metric spaces equipped with their Borel $\sigma$-algebras. We notice the following facts:
\begin{itemize}
\item mapping $PECT: \mathscr{S}_{R,d}^M \rightarrow C(\mathbb{S}^{d-1}; \mathcal{H}_{BM})$ is isometric;
\item projection $C(\mathbb{S}^{d-1};\mathcal{H}_{BM})\rightarrow\mathcal{H}_{BM}, \, \{F(\nu')\}_{\nu'\in\mathbb{S}^{d-1}}\mapsto F(\nu)$ is continuous for each fixed direction $\nu$;
\item applying \cite{evans2010partial} (Theorem 5 of Chapter 5.6) again, the embedding $\mathcal{H}_{BM}\rightarrow\mathcal{B}, \, F(\nu) \mapsto F(\nu)$ is continuous;
\item projection $\mathcal{B}\rightarrow \mathbb{R}, \{x(t')\}_{t'\in[0,T]}\mapsto x(t)$ is continuous.
\end{itemize}
Therefore, $\mathscr{S}_{R,d}^M\rightarrow\mathbb{R}, K\mapsto PECT(K)(\nu;t)$ is continuous, hence measurable. Because $\chi_{(\cdot)}^\nu(K)$, for each $K\in\mathscr{S}_{R,d}^M$, is an RCLL step function with finitely many discontinuities,
\begin{align*}
\chi^\nu_{t}(K)=\lim_{n\rightarrow\infty}\left[\frac{1}{\delta_n}\left
\{PECT(K)(\nu;t+\delta_n)-PECT(K)(\nu;t)\right\}\right],
\end{align*}
for all $K\in\mathscr{S}_{R,d}^M$, where $\delta_n>0$ and $\lim_{n\rightarrow\infty}\delta_n=0$; indeed, since $\chi_{(\cdot)}^\nu(K)$ is right-continuous and piecewise constant, the difference quotient above equals $\chi_t^\nu(K)$ exactly for all sufficiently small $\delta_n$. The measurability of $PECT(\cdot)(\nu;t+\delta_n)$ and $PECT(\cdot)(\nu;t)$ implies that $\chi_t^\nu(\cdot): \mathscr{S}_{R,d}^M\rightarrow\mathbb{R}, K\mapsto \chi_t^\nu(K)$ is measurable, for any fixed $\nu$ and $t$. The proof of result (ii) is completed.
$\square$
\paragraph*{Proof of Theorem \ref{thm: mean is in H}} For each fixed direction $\nu\in\mathbb{S}^{d-1}$, Theorem \ref{thm: SECT distribution theorem in each direction} indicates that the mapping $SECT(\cdot)(\nu): K\mapsto SECT(K)(\nu)$ is an $\mathcal{H}$-valued measurable function defined on the probability space $(\mathscr{S}_{R,d}^M, \mathscr{F}, \mathbb{P})$. We first show the Bochner $\mathbb{P}$-integrability of $SECT(\cdot)(\nu)$ (see Section 5 in Chapter V of \cite{yosida1965functional} for the definition of Bochner $\mathbb{P}$-integrability), and the Bochner integral of $SECT(\cdot)(\nu)$ will be fundamental to our proof of Theorem \ref{thm: mean is in H}. Lemma 1.3 of \cite{da2014stochastic} indicates that $SECT(\cdot)(\nu)$ is strongly $\mathscr{F}$-measurable (see Section 4 in Chapter V of \cite{yosida1965functional} for the definition of strong $\mathscr{F}$-measurability). Then, Assumption \ref{assumption: existence of second moments} indicates that the Bochner integral
\begin{align*}
m^*_\nu \overset{\operatorname{def}}{=} \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu)\mathbb{P}(dK)
\end{align*}
is well-defined, i.e., $SECT(\cdot)(\nu)$ is Bochner $\mathbb{P}$-integrable, and $m^*_\nu \in\mathcal{H}$ (see \cite{yosida1965functional}, Section 5 of Chapter V, Theorem 1 therein particularly). Corollary 2 in Section 5 of Chapter V of \cite{yosida1965functional}, together with the fact that $\mathcal{H}$ is the RKHS generated by the kernel $\kappa(s,t)=\min\{s,t\}-\frac{st}{T}$ (see Example 4.9 of \cite{lifshits2012lectures}), implies
\begin{align*}
m^*_\nu(t) &= \langle \kappa(t,\cdot), m^*_\nu \rangle \\
& = \int_{\mathscr{S}_{R,d}^M} \Big\langle \kappa(t,\cdot), SECT(K)(\nu) \Big\rangle \mathbb{P}(dK) \\
& =\int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu;t)\mathbb{P}(dK) \\
& = \mathbb{E}\left\{ SECT(\cdot)(\nu;t)\right\} =m_\nu(t),\ \ \mbox{ for all }t\in[0,T].
\end{align*}
Therefore, $m_\nu=m^*_\nu\in\mathcal{H}$. The proof of result (i) is completed.
To prove result (ii), we first show the product measurability of the following map for each fixed direction $\nu\in\mathbb{S}^{d-1}$
\begin{align}\label{eq: product measurability}
\begin{aligned}
& \Big(\mathscr{S}_{R,d}^M \times [0,T], \mathscr{F}\times \mathscr{B}([0,T])\Big)\ \ \rightarrow \ \ (\mathbb{R}, \mathscr{B}(\mathbb{R})),\\
& (K,t) \ \ \mapsto \ \ SECT(K)(\nu;t),
\end{aligned}
\end{align}
where $\mathscr{F}\times \mathscr{B}([0,T])$ denotes the product $\sigma$-algebra generated by $\mathscr{F}$ and $\mathscr{B}([0,T])$. Define the filtration $\{\mathscr{F}_t\}_{t\in[0,T]}$ by $\mathscr{F}_t=\sigma(\{SECT(\cdot)(\nu;t')\,\vert\, t'\in[0,t]\})$ for $t\in[0,T]$. Because the paths of $SECT(\cdot)(\nu)$ are in $\mathcal{H}$, these paths are continuous (see Eq.~\eqref{eq: H, Holder, B embeddings}). Proposition 1.13 of \cite{karatzas2012brownian} implies that the stochastic process $SECT(\cdot)(\nu)$ is progressively measurable with respect to the filtration $\{\mathscr{F}_t\}_{t\in[0,T]}$. Then, the mapping in Eq.~\eqref{eq: product measurability} is measurable with respect to the product $\sigma$-algebra $\mathscr{F}\times \mathscr{B}([0,T])$ (see \cite{karatzas2012brownian}, particularly Definitions 1.6 and 1.11, also the paragraph right after Definition 1.11 therein). Eq.~\eqref{eq: finite second moments for all v and t} implies
\begin{align*}
\int_0^T \int_{\mathscr{S}_{R,d}^M} \vert SECT(K)(\nu;t)\vert^2 \mathbb{P}(dK) dt \le T \cdot \widetilde{C}^2_T \cdot \mathbb{E}\Vert SECT(\cdot)(\nu)\Vert^2_{\mathcal{H}}<\infty,
\end{align*}
where the double integral is well-defined because of the product measurability of the mapping in Eq.~\eqref{eq: product measurability} and Fubini's theorem. Then, the proof of result (ii) is completed.
For any $\nu_1,\nu_2\in\mathbb{S}^{d-1}$, the proof of result (i) implies the following Bochner integral representation
\begin{align*}
\Vert m_{\nu_1} - m_{\nu_2} \Vert_{\mathcal{H}} & = \left\Vert \int_{\mathscr{S}_{R,d}^M} SECT(K)(\nu_1) - SECT(K)(\nu_2) \mathbb{P}(dK) \right\Vert_{\mathcal{H}} \\
& \overset{(1)}{\le} \int_{\mathscr{S}_{R,d}^M} \Big\Vert SECT(K)(\nu_1) - SECT(K)(\nu_2) \Big\Vert_{\mathcal{H}} \mathbb{P}(dK) \\
& \overset{(2)}{\le} C^*_{M,R,d} \cdot \sqrt{ \Vert \nu_1 - \nu_2\Vert + \Vert
\nu_1-\nu_2 \Vert^2 },
\end{align*}
where the inequality (1) follows from the Corollary 1 in Section 5 of Chapter V of \cite{yosida1965functional}, and the inequality (2) follows from Theorem \ref{lemma: The continuity lemma} (i). With the argument in Eq.~\eqref{eq: 1/2 Holder argument}, the proof of result (iii) is completed.
$\square$
\paragraph*{Proof of Theorem \ref{thm: KL expansions of SECT}} Theorems \ref{thm: mean is in H} and \ref{thm: lemma for KL expansions} imply that, for each $j\in\{1,2\}$, the stochastic process $\{SECT(\cdot)(\nu^*;t)-m_{\nu^*}^{(j)}(t)\}_{t\in[0,T]}$ has mean zero, is mean-square continuous, and belongs to $L^2(\mathscr{S}_{R,d}^M\times[0,T],\, \mathbb{P}(dK)\times dt)$. Then, result (i) follows from Corollary 5.5 of \cite{alexanderian2015brief}. Denote
\begin{align*}
D_L(K^{(1)},K^{(2)};t) \overset{\operatorname{def}}{=}&\left\{ SECT(K^{(1)})(\nu^*;t) - SECT(K^{(2)})(\nu^*;t) \right\} \\ &- \left[ \left\{m^{(1)}_{\nu^*}(t) + \sum_{l'=1}^L \sqrt{\lambda_{l'}} \cdot Z_{l'}(K^{(1)}) \cdot \phi_{l'}(t)\right\} -\left\{m^{(2)}_{\nu^*}(t) + \sum_{l'=1}^L \sqrt{\lambda_{l'}} \cdot Z_{l'}(K^{(2)}) \cdot \phi_{l'}(t)\right\} \right]
\end{align*}
Then, result (i) implies the following
\begin{align*}
&\lim_{L\rightarrow\infty} \left\{\sup_{t\in[0,T]} \left\Vert D_L(\cdot, \cdot;t)\right\Vert^2_{L^2}\right\}=\lim_{L\rightarrow\infty}\left\{\sup_{t\in[0,T]} \int_{\mathscr{S}_{R,d}^M \times \mathscr{S}_{R,d}^M} \left\vert D_L(K^{(1)},K^{(2)};t) \right\vert^2 \mathbb{P}^{(1)}\otimes\mathbb{P}^{(2)}(dK^{(1)}, dK^{(2)})\right\} = 0,
\end{align*}
where $L^2$ is the abbreviation for $L^2(\mathscr{S}_{R,d}^M\times \mathscr{S}_{R,d}^M, \mathscr{F}\otimes\mathscr{F}, \mathbb{P}^{(1)}\otimes\mathbb{P}^{(2)})$. For each fixed $l=1,2,\ldots$, we have
\begin{align}\label{eq: zero L2 norm}
\begin{aligned}
\left\Vert \frac{1}{\sqrt{2\lambda_l}}\int_0^T D_L(\cdot,\cdot;t)\phi_l(t) dt\right\Vert_{L^2} &\le \frac{1}{\sqrt{2\lambda_l}}\int_0^T \left\Vert D_L(\cdot,\cdot;t) \right\Vert_{L^2} \vert\phi_l(t)\vert dt \\
&\le \sup_{t\in[0,T]}\left\Vert D_L(\cdot,\cdot;t) \right\Vert_{L^2} \cdot\frac{1}{\sqrt{2\lambda_l}}\int_0^T \vert\phi_l(t)\vert dt \rightarrow0, \ \ \mbox{ as }L\rightarrow\infty.
\end{aligned}
\end{align}
In addition, for each fixed $l=1,2,\ldots$ and $L>l$, we have
\begin{align*}
\frac{1}{\sqrt{2\lambda_l}}\int_0^T D_L(K^{(1)}, K^{(2)};t)\phi_l(t) dt =& \frac{1}{\sqrt{2\lambda_l}}\int_0^T \left\{ SECT(K^{(1)})(\nu^*;t) - SECT(K^{(2)})(\nu^*;t) \right\}\phi_l(t)dt \\
&-\frac{1}{\sqrt{2\lambda_l}}\int_0^T \left\{m^{(1)}_{\nu^*}(t)-m^{(2)}_{\nu^*}(t)\right\}\phi_l(t)dt \\
&-\frac{1}{\sqrt{2\lambda_l}}\int_0^T \left[ \sum_{l'=1}^L \sqrt{\lambda_{l'}} \cdot \left\{ Z_{l'}(K^{(1)})-Z_{l'}(K^{(2)}) \right\} \cdot \phi_{l'}(t) \right]\cdot\phi_l(t) dt.
\end{align*}
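To make the last term explicit: by the $L^2([0,T])$-orthonormality of $\{\phi_{l'}\}_{l'=1}^\infty$ and since $L>l$,
\begin{align*}
\frac{1}{\sqrt{2\lambda_l}}\int_0^T \left[ \sum_{l'=1}^L \sqrt{\lambda_{l'}} \cdot \left\{ Z_{l'}(K^{(1)})-Z_{l'}(K^{(2)}) \right\} \cdot \phi_{l'}(t) \right]\cdot\phi_l(t) dt = \frac{Z_{l}(K^{(1)})-Z_{l}(K^{(2)})}{\sqrt{2}},
\end{align*}
so the left-hand side of the preceding display does not depend on $L$ once $L>l$.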
The $L^2([0,T])$-orthonormality of $\{\phi_l\}_{l=1}^\infty$ and the limit in Eq.~\eqref{eq: zero L2 norm} imply
\begin{align*}
\int_{\mathscr{S}_{R,d}^M \times \mathscr{S}_{R,d}^M} \left\vert \left[\frac{1}{\sqrt{2\lambda_l}}\int_0^T \left\{ SECT(K^{(1)})(\nu^*;t) - SECT(K^{(2)})(\nu^*;t) \right\}\phi_l(t)dt\right]\right.\\
\left.-\left[\theta_l + \frac{Z_{l}(K^{(1)})-Z_{l}(K^{(2)})}{\sqrt{2}}\right] \right\vert^2 \mathbb{P}^{(1)}\otimes\mathbb{P}^{(2)}(dK^{(1)}, dK^{(2)})=0,
\end{align*}
which implies result (ii).
$\square$
\end{appendix}
\end{document} |
\begin{document}
\title[Parabolic equations on conic domains]{Sobolev space theory and H\"older estimates for the stochastic partial differential equations on conic and polygonal domains}
\thanks{The first and third authors were supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2020R1A2C1A01003354)}
\thanks{The second author was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2019R1F1A1058988)}
\author{Kyeong-Hun Kim}
\address{Kyeong-Hun Kim, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea}
\email{[email protected]}
\author{Kijung Lee}
\address{Kijung Lee, Department of Mathematics, Ajou University, Worldcup-ro 206, Yeongtong-gu, Suwon, 16499, Republic of Korea}
\email{[email protected]}
\author{Jinsol Seo}
\address{Jinsol Seo, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea}
\email{[email protected]}
\subjclass[2010]{60H15; 35R60, 35R05}
\keywords{parabolic equation, conic domains, weighted Sobolev regularity, mixed weight}
\begin{abstract}
We establish existence, uniqueness, and Sobolev and H\"older regularity results for the stochastic partial differential equation
\begin{equation*}
\begin{aligned}
du=\Big(\sum_{i,j=1}^d a^{ij}u_{x^ix^j}&+f^0+\sum_{i=1}^d f^i_{x^i}\Big)dt\\
&+\sum_{k=1}^{\infty}g^kdw^k_t, \quad t>0, \,x\in \cD
\end{aligned}
\end{equation*}
given with non-zero initial data. Here $\{w^k_t: k=1,2,\cdots\}$ is a family of independent Wiener processes defined on a probability space $(\Omega, \bP)$, $a^{ij}=a^{ij}(\omega,t)$ are merely measurable functions on $\Omega\times (0,\infty)$, and $\cD$ is either a polygonal domain in $\bR^2$ or an arbitrary dimensional conic domain of the type
\begin{equation}
\label{conic}
\cD(\cM):=\left\{x\in \bR^d :\,\frac{x}{\lvert x\rvert}\in \cM\right\}, \quad \quad \cM\subsetneq S^{d-1}, \quad (d\geq 2)
\end{equation}
where $\cM$ is an open subset of $S^{d-1}$ with $C^2$ boundary.
We measure the Sobolev and H\"older regularities of arbitrary order derivatives of the solution using a system of mixed weights consisting of appropriate powers of the distance to the vertices and of the distance to the boundary.
The ranges of admissible powers of the distance to the vertices and to the boundary are sharp.
\end{abstract}
\maketitle
\mysection{Introduction}\label{sec:Introduction}
The goal of this article is to present a Sobolev space theory and H\"older regularity results for the stochastic partial differential equation (SPDE)
\begin{align}\label{main equation in introduction}
d u =\left(\sum_{i,j=1}^d a^{ij}u_{x^ix^j}+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k, \,\,\, t>0\,; \,\,\, u(0,\cdot)=u_0
\end{align}
defined on either multi-dimensional conic domains $\cD(\cM)$ (see \eqref{conic}) or two dimensional polygonal domains. Here, $\cM$ is an open subset of $S^{d-1}$ with $\cC^2$ boundary, $\{w^k_t: k=1,2,\cdots\}$ is an infinite sequence of independent one dimensional Wiener processes, and the coefficients $a^{ij}$ are merely measurable functions of $(\omega,t)$ with the uniform parabolicity condition; see Assumption \ref{ass coeff} below.
To give the reader a flavor of our results in this article we state a particular one, an estimate, below: Let $\cD=\cD(\cM)$ be a conic domain in $\bR^d$, $\rho(x):=dist(x,\partial \cD)$, and $\rho_{\circ}(x):=\lvert x\rvert$. Then for the solution $u$ of \eqref{main equation in introduction} with zero boundary and zero initial conditions, the following holds for any $p\geq 2$:
\begin{align}\label{main estimate simple}
&\bE \int^T_0 \int_{\cD} \left(\lvert\rho^{-1}u\rvert^p+ \lvert u_x\rvert^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber\\
\leq\quad & C\,\bE \int^T_0 \int_{\cD} \Big( \lvert \rho f^0\rvert^p+\sum_{i=1}^d\lvert f^i\rvert^p +\lvert g\rvert_{l_2}^p\Big) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt
\end{align}
with $d-1<\Theta<d-1+p$ accompanied with the sharp admissible range of $\theta$; see \eqref{theta con intro} below. Also see \eqref{main estimate intro} for higher order derivative estimates. Unlike the range of $\Theta$, the range of $\theta$ is affected by the shape of domain $\cD$, which is determined by $\cM$.
Estimate \eqref{main estimate simple}, if $\rho_{\circ}$ is replaced by the distance to the set of vertices, also holds when $\cD$ is a (bounded) polygonal domain in $\bR^2$. Regarding H\"older regularity, we have for instance, if $1-\frac{d}{p}=\delta>0$,
$$
\lvert \rho^{-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ}u(\omega,t,\cdot)\rvert_{ \cC(\cD)}+
[\rho^{-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} u( \omega,t,\cdot)]_{\cC^{\delta}(\cD)}<\infty,
$$
for a.e. $(\omega,t)$. In particular,
\begin{align}
\lvert u(\omega,t,x)\rvert\leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x)\quad \text{for all }x\in\cD \label{Holder simple}.
\end{align}
Estimate \eqref{Holder simple} shows how $\theta$ and $\Theta$ are involved in measuring the boundary behavior of the solution with respect to $\rho$ and $\rho_{\circ}$. See Theorem \ref{cor 8.10} and Theorem \ref{cor 8.23} for the full H\"older regularity results with respect to both space and time variables.
To position our results in the context of regularity theory of stochastic parabolic equations, let us provide a stream of historical remarks.
The $L_p$-theory ($p\geq 2$) of equation \eqref{main equation in introduction} defined on the entire space $\bR^d$ was first introduced by N.V. Krylov \cite{Krylov 1999-4, Krylov 1996}. In these articles the author used an analytic approach and proved the maximal regularity estimate
\begin{align}
\label{krylov lp}
\|u_x\|_{\bL_p(T)}\leq C\Big(\|f^0\|_{\bL_p(T)}+\sum_{i=1}^d \|f^i\|_{\bL_p(T)}
+\||g|_{\ell_2}\|_{\bL_p(T)}\Big), \qquad p\geq 2,
\end{align}
provided that $u(0,\cdot)\equiv 0$, where $\bL_p(T):=L_p(\Omega\times (0,T); L_p(\bR^d))$.
As for other approaches on Sobolev regularity theory, the method based on $H^{\infty}$-calculus is also available in the literature. This approach was introduced in \cite{Veraar}, in which the maximal regularity of $\sqrt{-A}u$ is obtained for the stochastic convolution
$$
u(t):=\int^t_0 e^{(t-s)A} g(s) dW_H (s).
$$
Here, $W_H(t)$ is a cylindrical Brownian motion on a Hilbert space $H$, and the operator $-A$ is assumed to admit a bounded $H^{\infty}$-calculus of angle less than $\pi/2$ on $L^q(\cO)$, where $q\geq 2$ and $\cO$ is a domain in $\bR^d$. The result of \cite{Veraar} generalizes \eqref{krylov lp} with $f^i=0$, $i=1,\ldots,d$ as one can take $A=\Delta$ and $\cO=\bR^d$.
One advantage of the approach based on $H^{\infty}$-calculus is that it provides a unified way of handling a class of differential operators satisfying the above mentioned condition. However, this approach is not applicable to SPDEs with operators depending on $(\omega,t)$, and even in the simplest case $A=\Delta$ it requires $\partial \cO$ to be regular enough, that is, $\partial \cO \in \cC^2$. Compared to the approach based on $H^{\infty}$-calculus, Krylov's analytic approach works well for SPDEs with operators depending also on $(\omega,t)$, and it provides arbitrary order regularity of solutions without much extra effort, even under weaker smoothness conditions on domains.
Since the work of \cite{Krylov 1999-4, Krylov 1996} on $\bR^d$, the analytic approach has been further used for the regularity theory of SPDEs on half space \cite{Krylov 1999-2, Krylov 1999-22, KK2004-2} and on $\cC^1$-domains \cite{KK2004, Kim2004, Kim2004-2}.
The major obstacle in studying SPDEs on domains is that, unless certain compatibility conditions (cf. \cite{Flandoli}) are fulfilled, the second and higher order derivatives of solutions to SPDEs blow up near the boundary, and such blow-ups are inevitable even on $\cC^{\infty}$-domains. Hence, one needs an appropriate weight system to understand the behavior of solutions near the boundary.
It is shown in \cite{Krylov 1999-2, KK2004, Kim2004} that if domains satisfy $\cC^1$ boundary condition, then blow-ups of derivatives of solutions can be described very accurately by a weight system introduced in \cite{Krylov 1999-1, KK2004, Lo1}.
This weight system is based solely on the distance to the boundary. Surprisingly enough, under this weight system it is irrelevant whether domains have $\cC^{\infty}$-boundary or $\cC^1$-boundary, that is, the regularity of solutions is not affected by the smoothness of the boundary provided that the boundary is at least of class $\cC^1$. To be more specific, let $\cO$ be a $\cC^1$-domain and $\rho(x)=dist (x,\partial \cO)$; then it holds (see \cite{Kim2004, KK2004}) that for any $d-1<\Theta<d-1+p$,
\begin{align}
\label{eqn 9.3.1}
&\bE \int^T_0 \int_{\cO}(\vert \rho^{-1}u\vert + \vert u_x\vert )^p \rho^{\Theta-d}\,dx\,dt \nonumber\\
\leq\,& C\bE \int^T_0 \int_{\cO}\big( \vert \rho f^0\vert ^p+\sum_{i=1}^d \vert f^i\vert ^p+\vert g\vert ^p_{\ell_2} \big) \rho^{\Theta-d}\,dx\,dt.
\end{align}
The condition $\Theta \in (d-1, d-1+p)$ is sharp and is not affected by further smoothness of $\partial \cO$ as long as $\partial \cO\in \cC^1$. Note that estimate \eqref{eqn 9.3.1} with smaller $\Theta$ gives better decay of solutions near the boundary than that with larger $\Theta$. In particular, we have $u(\omega,t,\cdot)\in W^{1,p}_{0}(\cO)$ from \eqref{eqn 9.3.1} if $\Theta\leq d$.
As for results on non-smooth domains, that is, $\partial \cO \not\in \cC^1$, only a few fragmentary results are known. It turns out that \eqref{eqn 9.3.1} holds true on general Lipschitz domains if $\Theta \approx d-2+p$ (see \cite{Kim2014}), and hence the case $\Theta=d$ is not included in general if $p>2$. An example in \cite{Kim2014} also shows that if $\Theta<p/2$, then estimate \eqref{eqn 9.3.1} fails to hold even on simple wedge domains of the type
\begin{equation}\label{angular domain1}
\cD^{(\kappa)}=\big\{(r\cos \eta, r\sin \eta)\in \bR^2: r>0, \eta\in (-\kappa/2, \kappa/2)\big\}, \quad \kappa < 2\pi.
\end{equation}
The vertex $0$ makes the boundary non-smooth and changes the game.
Our interest in conic and polygonal domains arises from this type of question, which, in particular, asks whether estimates similar to \eqref{eqn 9.3.1} hold on such simple Lipschitz domains. We took our cue from
a PDE result on conic domains \cite{Kozlov Nazarov 2014} (also see \cite{Na, Sol2001}) which is similar to \eqref{eqn 9.3.1}, without the term $g=(g^1, g^2,\cdots)$ of course.
It uses a weight based only on the distance to the vertex.
A work on SPDEs using a weight system based only on the distance to the vertex was introduced in \cite{CKLL 2018}
(also see \cite{CKL 2019+}), in which we studied the model case of $d=2$ and $a^{ij}=\delta_{ij}$ as a starting point of the program.
Even for the model case considered in \cite{ CKL 2019+,CKLL 2018} we struggled to obtain higher order derivative estimates and left the problem as future work. The main issue is to include the distance to the boundary in our weight system so as to have a satisfactory regularity relation between the solution and the inputs. In fact, a sign of the aforementioned difficulty is already implied in the Green's function estimate used in \cite{CKLL 2018} and \cite{CKL 2019+}: the bound dominating the Green's function does not vanish at the boundary, although it does at the vertex. We need a more refined Green's function estimate as the starting point of a satisfactory regularity result.
We then set up a program of three steps: (i) preparing a refined $d$-dimensional Green's function estimate for operators with measurable coefficients, (ii) preparing the PDE result, and (iii) establishing the SPDE result addressing the higher order derivative estimates. The first two steps are carried out in \cite{Green} and \cite{ConicPDE}, and this article fulfills the last step. In \cite{Green} the refined Green's function estimate involves both the distance to the vertex and the distance to the boundary, and it now vanishes at all points of the boundary with an informative decay rate. The work \cite{ConicPDE} makes full use of what we prepared in \cite{Green}, and it is designed to serve this article well.
Now let us explain our $L_p$-regularity result in more detail. Recall
$
\rho_{\circ}(x):=\vert x\vert \quad \text{and} \quad \rho(x):=d(x,\partial \cD),
$
which denote the distance from $x$ to the vertex and to the boundary of the conic domain $\cD=\cD(\cM)$, respectively. We prove that for any $p\ge 2$ and $n=0,1,2,\cdots$, the estimate
\begin{align}
&\bE \int^T_0 \int_{\cD} \left(\vert \rho^{-1}u\vert ^p+\vert u_x\vert ^p+\cdots+ \vert \rho^{n}D^{n+1}u\vert ^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber
\\
\leq& C \bE
\int^T_0 \int_{\cD} \Big( \vert \rho f^0\vert ^p+\cdots+\vert \rho^{n+1}D^nf^0\vert ^p \nonumber
\\
&\quad \quad \quad \quad \quad\,\,\,\, +\sum^d_{i=1}\vert f^i\vert ^p+\cdots+\sum_{i=1}^d\vert \rho^{n}D^nf^i\vert ^p \nonumber \\
&\quad \quad \quad \quad \quad\,\,\,\, +\vert g\vert _{\ell_2}^p+\cdots+\vert \rho^{n}D^{n}g\vert _{\ell_2}^p\Big)
\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \label{main estimate intro}
\end{align}
holds for the solution $u=u(\omega,t,x)$ to equation \eqref{main equation in introduction} with zero initial condition, provided that
\begin{equation}
\label{theta con intro}
d-1<\Theta<d-1+p, \quad\,\, p(1-\lambda^+_c)<\theta<p(d-1+\lambda^-_c).
\end{equation}
Here, $\lambda^+_c$ and $\lambda^-_c$ are positive constants which depend on $\cM$ and are defined in Definition \ref{lambda} below (also see Proposition \ref{critical exponents} and Remark \ref{example proposition}). The same estimate holds for polygonal domains in $\bR^2$. Estimate \eqref{main estimate intro} with condition \eqref{theta con intro} is indeed a (seamless) extension of \cite{ConicPDE} to SPDEs, and what is satisfactory is that the ranges of $\Theta$ and $\theta$ in \eqref{theta con intro} are not smaller than the ranges for the deterministic parabolic equation. This, however, requires very delicate computation, and carrying it out successfully is one of the main purposes of this article.
Finally, we want to summarize the improvements in this article over the results in \cite{CKLL 2018} and \cite{CKL 2019+}. Our domains $\cD(\cM)$ in $\bR^d$, $d\ge 2$, generalize the two dimensional angular domains \eqref{angular domain1}; the choice of $\cM$ is much richer when $d>2$. Our operator $\sum_{i,j}a^{ij}(\omega,t)D_{ij}$ far generalizes the Laplacian $\Delta$ used in \cite{CKLL 2018} and \cite{CKL 2019+}. These generalizations make the computations much more involved, especially for the stochastic part of the solution. Also, thanks to the mixed weight system, we can now study the higher order derivatives in an appropriate manner, and implementing this requires quite a bit of work. Moreover, in this article we do not impose a zero initial condition, and hence we propose the right function spaces for the initial data in terms of regularity relations between the inputs and the output, where the initial condition is one of the inputs. This result is new even for deterministic PDEs on conic domains. The H\"older regularity results based on the aforementioned improvements are also new even for PDEs on conic domains.
This article is organized as follows. In Section 2 we introduce some properties of weighted Sobolev spaces and present our main results on conic domains, including H\"older regularity results. In Section 3 we estimate the weighted $L_p$ norm of the zeroth order derivative of the solution on conic domains, based on the solution representation via the Green's function and elementary but highly involved computations. The estimates of the derivatives of the solution on conic domains are obtained in Section 4, and the proofs of the main results on conic domains are given there, too. In Section 5 we establish a regularity theory on polygonal domains in $\bR^2$.
\noindent\textbf{Notations.}
\begin{itemize}
\item We use $:=$ to denote a definition.
\item For a measure space $(A, \cA, \mu)$, a Banach space $B$ and $p\in[1,\infty)$, we write $L_p(A,\cA, \mu;B)$ for the collection of all $B$-valued $\bar{\cA}$-measurable functions $f$ such that
$$
\|f\|^p_{L_p(A,\cA,\mu;B)}:=\int_{A} \lVert f\rVert^p_{B} \,d\mu<\infty.
$$
Here, $\bar{\cA}$ is the completion of $\cA$ with respect to $\mu$. We will drop $\cA$ or $\mu$ or even $B$ in $L_p(A,\cA, \mu;B)$ when they are obvious from the context.
\item $\bR^d$ stands for the $d$-dimensional Euclidean space of points $x=(x^1,\cdots,x^d)$, $B_r(x):=\{y\in \bR^d: \vert x-y\vert <r\}$,
$\bR^d_+:=\{x=(x^1,\ldots,x^d): x^1>0\}$, and $S^{d-1}:=\{x\in \bR^d: \vert x\vert =1\}$.
\item For a domain $\mathcal{O} \subset \bR^d$, $B^{\mathcal{O}}_R(x):=B_R(x)\cap \mathcal{O}$ and $Q^{\mathcal{O}}_R(t,x):=(t-R^2,t]\times B^{\mathcal{O}}_R(x)$.
\item $\bN$ denotes the natural number system, $\bN_0=\{0\}\cup \bN$, and $\bZ$ denotes the set of integers.
\item For $x$, $y$ in $\bR^d$, $x\cdot y :=\sum^d_{i=1}x^iy^i$ denotes the standard inner product.
\item For a domain $\mathcal{O}$ in $\bR^d$, $\partial \mathcal{O}$ denotes the boundary of $\mathcal{O}$.
\item For any multi-index $\alpha=(\alpha_1,\ldots,\alpha_d)$, $\alpha_i\in \{0\}\cup \bN$,
$$
f_t=\frac{\partial f}{\partial t}, \quad f_{x^i}=D_if:=\frac{\partial f}{\partial x^i}, \quad D^{\alpha}f(x):=D^{\alpha_d}_d\cdots D^{\alpha_1}_1f(x).
$$
We denote $\vert \alpha\vert :=\sum_{i=1}^d \alpha_i$. For the second order derivatives we denote $D_jD_if$ by $D_{ij}f$. We often use the notation
$\vert gf_x\vert ^p$ for $\vert g\vert ^p\sum_i\vert D_if\vert ^p$ and $\vert gf_{xx}\vert ^p$ for $\vert g\vert ^p\sum_{i,j}\vert D_{ij}f\vert ^p$. We also use $D^m f$ to denote arbitrary partial derivatives of order $m$ with respect to the space variable.
\item $\Delta_x f:=\sum_i D_{ii}f$, the Laplacian for $f$.
\item For $n\in \{0\}\cup \bN$, $W^n_p(\mathcal{O}):=\{f: \sum_{\vert \alpha\vert \le n}\int_{\mathcal{O}}\vert D^{\alpha}f\vert ^p dx<\infty\}$, the Sobolev space.
\item For a domain $\mathcal{O}\subseteq\bR^d$ and a Banach space $X$ with the norm $\vert \cdot\vert _X$, $\cC(\mathcal{O};X)$ denotes the set of $X$-valued continuous functions $f$ in $\mathcal{O}$ such that $\vert f\vert _{\cC(\mathcal{O};X)}:=\sup_{x\in\mathcal{O}}\vert f(x)\vert _X<\infty$. Also, for $\alpha\in (0,1]$, we define the H\"older space
$\cC^{\alpha}(\mathcal{O};X)$ as the set of all $X$-valued functions $f$ such that
$$
\vert f\vert _{\cC^{\alpha}(\mathcal{O};X)}:=\vert f\vert _{\cC(\mathcal{O};X)}+[f]_{\cC^{\alpha}(\mathcal{O};X)}<\infty
$$
with the semi-norm $[f]_{\cC^{\alpha}(\mathcal{O};X)}$ defined by
$$
[f]_{\cC^{\alpha}(\mathcal{O};X)}=\sup_{x\neq y\in \mathcal{O}} \frac{\vert f(x)-f(y)\vert _X}{\vert x-y\vert ^{\alpha}}.
$$
In particular, $\mathcal{O}$ can be an interval in $\bR$.
\item For a domain $\mathcal{O}\subseteq\bR^d$, $\cC^{\infty}_c(\mathcal{O})$ is the space of infinitely differentiable functions with compact support in $\mathcal{O}$. $supp(f)$ denotes the support of the function $f$. Also, $\cC^{\infty}(\mathcal{O})$ denotes the space of infinitely differentiable functions in $\mathcal{O}$.
\item For a distribution $f$ on $\mathcal{O}$ and $\varphi\in \cC^{\infty}_c(\mathcal{O})$, the expression $(f,\varphi)$ denotes the evaluation of $f$ against the test function $\varphi$.
\item For functions $f=f(\omega,t,x)$ depending on $\omega\in\Omega$, $t\geq 0$ and $x\in\bR^d$, we usually drop the argument $\omega$ and just write $f(t,x)$ when there is no confusion.
\item Throughout the article, the letter $C$ denotes a finite positive constant which may have different values along the argument while the dependence will be informed; $C=C(a,b,\cdots)$, meaning that $C$ depends only on the parameters inside the parentheses.
\item $A\sim B$ means that there exist constants $C_1, C_2>0$ independent of $A$ and $B$ such that $A\leq C_1B \leq C_2A$.
\item $d(x,\mathcal{O})$ stands for the distance between a point $x$ and a set $\mathcal{O}\subset\bR^d$.
\item $a \vee b =\max\{a,b\}$, $a \wedge b =\min\{a,b\}$.
\item $1_U$ denotes the indicator function of $U$.
\item We will use the following sets of functions (see \cite{Kozlov Nazarov 2014}).
\begin{itemize}
\item[-]
$\mathcal{V}(Q^{\mathcal{\mathcal{O}}}_R(t_0,x_0))$ : the set of functions $u$ defined at least on $Q^{\mathcal{\mathcal{O}}}_R(t_0,x_0)$ and satisfying
\begin{equation*}
\sup_{t\in(t_0-R^2,t_0]}\|u(t,\cdot)\|_{L_2(B^{\mathcal{O}}_{R}(x_0))} +\|\nabla u\|_{L_2(Q^{\mathcal{O}}_{R}(t_0,x_0))}<\infty.\nonumber
\end{equation*}
\item[-]
$\mathcal{V}_{loc}(Q^{\mathcal{O}}_R(t_0,x_0))$ : the set of functions $u$ defined at least on $Q^{\mathcal{O}}_R(t_0,x_0)$ and satisfying
\begin{equation*}
u\in \mathcal{V}(Q^{\mathcal{O}}_r(t_0,x_0)), \quad \forall r\in (0,R).\nonumber
\end{equation*}
\end{itemize}
\end{itemize}
\mysection{SPDE on $d$-dimensional conic domains}\label{sec:Cone}
Throughout this article we assume $d\ge 2$. Let $\cM$ be a nonempty open set in $S^{d-1}:=\left\{x\in \bR^d\,:\,\vert x\vert =1\right\}$, and let $\overline{\cM}$ denote the closure of $\cM$. We assume $\overline{\cM}\neq S^{d-1}$,
and define the $d$-dimensional conic domain $\mathcal{D}$ by
$$
\mathcal{D}=\cD(\cM):=\Big\{x\in\mathds{R}^d\setminus\{0\} \ \Big\vert \ \ \frac{x}{\vert x\vert}\in \mathcal{M} \Big\}.
$$
When $d=2$, the shapes of conic domains are quite simple. For instance, with a fixed angle $\kappa$ in the range of $\left(0,2\pi\right)$ we can consider
\begin{equation}\label{wedge in 2d}
\mathcal{D}=\mathcal{D}^{(\kappa)}:=\left\{(r\cos\eta,\ r\sin\eta)\in\mathds{R}^2 \mid r\in(0,\ \infty),\ -\frac{\kappa}{2}<\eta<\frac{\kappa}{2}\right\}.
\end{equation}
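In the notation $\cD(\cM)$ above, this wedge corresponds to taking $\cM$ to be the open arc
\begin{equation*}
\cM=\left\{(\cos\eta,\ \sin\eta)\in S^{1}\ :\ \vert \eta\vert <\kappa/2\right\},
\end{equation*}
so that $\cD^{(\kappa)}=\cD(\cM)$.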
\begin{figure}
\caption{Cases of $d=2$ and $d=3$}
\end{figure}
Let $\{w_{t}^{k}\}_{k\in\bN}$ be a family of independent one-dimensional
Wiener processes defined on a complete probability space $(\Omega,\mathscr{F},\bP)$ equipped with an increasing filtration of
$\sigma$-fields $\mathscr{F}_{t}\subset\mathscr{F}$, each of which
contains all $(\mathscr{F},\bP)$-null sets. By $\cP$ we denote the predictable $\sigma$-field on $\Omega \times (0,\infty)$ generated by $\mathscr{F}_{t}$.
In this article we study the regularity theory of the stochastic partial differential equation
\begin{equation}\label{stochastic parabolic equation}
d u =\Big( \cL u+f^0+\sum_{i=1}^d f^i_{x^i}\Big)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t>0, \;x\in \cD(\cM)
\end{equation}
under the zero Dirichlet boundary condition. Here
\begin{equation*}
\cL := \sum_{i,j=1}^d a^{ij}(\omega,t) D_{ij}.
\end{equation*}
\begin{itemize}
\item[-] Each of the stochastic integrals in \eqref{stochastic parabolic equation} is understood as an It\^o stochastic integral against the given Wiener process.
\item[-] The infinite sum of stochastic integrals is understood as the limit in probability (uniformly in $t$) of the finite sums of stochastic integrals. See Remark \ref{sto series}.
\end{itemize}
Here are our assumptions on $\cM$ and the diffusion coefficients.
\begin{assumption}
\label{ass M}
The boundary $\partial \cM $ of $\cM$ in $S^{d-1}$ is of class $\cC^2$.
\end{assumption}
\begin{assumption}
\label{ass coeff}
The diffusion coefficients $a^{ij}$, $i,j=1,\cdots,d$, are real-valued $\cP$-measurable functions of $(\omega,t)$, symmetric, i.e. $a^{ij}=a^{ji}$, and satisfy the uniform parabolicity condition, i.e. there exist constants $\nu_1, \nu_2>0$ such that for any $t\in\mathds{R}$, $\omega\in \Omega$ and $\xi=(\xi^1,\ldots,\xi^d)\in\mathds{R}^d$,
\begin{equation}
\nu_1 \vert \xi\vert ^2\le \sum_{i,j}a^{ij}(\omega, t)\xi_i\xi_j\le \nu_2 \vert \xi\vert ^2. \label{uniform parabolicity}
\end{equation}
\end{assumption}
To explain our main result in the framework of weighted Sobolev regularity, we introduce some function spaces (cf. \cite{CKL 2019+, ConicPDE}). These spaces collect the functions whose weak derivatives can be measured with the help of appropriate weights consisting of powers of the distance to the vertex and of the distance to the boundary. Let us define
$$
\rho_{\circ}(x)=\rho_{\circ,\cD}:=\vert x\vert ,\quad \quad \rho(x)=\rho_{\cD}(x):=d(x,\partial\cD).
$$
For $p\in(1,\infty)$, $\theta\in\bR$ and $\Theta\in \bR$, we define
$$
L_{p,\theta,\Theta}(\cD):=L_p(\cD,\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dx),
$$
and for $m\in \bN_0$ define
$$
K^m_{p,\theta,\Theta}(\cD):=\{f\, : \rho^{\vert \alpha\vert } D^{\alpha}f\in L_{p,\theta,\Theta}(\cD), \, \,\vert \alpha\vert \leq m \}.
$$
The norm in $K^m_{p,\theta,\Theta}(\cD)$ is defined by
\begin{equation}
\|f\|_{K^m_{p,\theta,\Theta}(\cD)}
=\sum_{\vert \alpha\vert \leq m} \left(\int_{\cD} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\,dx\right)^{1/p}. \label{space K norm}
\end{equation}
The space $K^m_{p,\theta,\Theta}(\cD)$ is related to the weighted Sobolev space $H^m_{p,\Theta}(\cD)$ introduced in \cite{KK2004, Krylov 1999-1,Lo1} as follows:
$$
H^m_{p,\Theta}(\cD)=K^m_{p,\Theta,\Theta}(\cD),
$$
whose norm is given by
\begin{equation}
\label{eqn 8.9.5}
\|f\|_{H^m_{p,\Theta}(\cD)}:=\sum_{\vert \alpha\vert \leq m} \left(\int_{\cD} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\Theta-d}\,dx\right)^{1/p}, \quad m\in \bN_0.
\end{equation}
Note that the weight of $H^m_{p,\Theta}(\cD)$ is based only on the distance to the boundary.
Using the fact that for any $\mu\in \bR$ and multi-index $\alpha$
\begin{equation}
\label{eqn 8.28.4}
\sup_{x\in \cD} \rho^{\vert \alpha\vert -\mu}_{\circ} \vert D^{\alpha} \rho^{\mu}_{\circ}(x)\vert \leq C(\mu,\alpha)<\infty,
\end{equation}
one can easily check
$$f\in K^m_{p,\theta,\Theta}(\cD) \quad \text{ if and only if} \quad \rho^{(\theta-\Theta)/p}_{\circ}f\in H^m_{p,\Theta}(\cD),
$$ and the norms in their corresponding spaces are equivalents, that is,
\begin{equation}
\label{eqn 8.9.7}
\|f\|_{K^m_{p,\theta,\Theta}(\cD)}\sim \|\rho^{(\theta-\Theta)/p}_{\circ}f\|_{H^m_{p,\Theta}(\cD)}, \quad m\in \bN_0.
\end{equation}
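For instance, here is a sketch of the first-order case $m=1$. Write $\mu:=(\theta-\Theta)/p$. Since $0\in\partial\cD$, we have $\rho\leq\rho_{\circ}$ on $\cD$, and the product rule together with \eqref{eqn 8.28.4} gives
\begin{align*}
\rho\,\vert D(\rho^{\mu}_{\circ}f)\vert \leq \rho\,\rho^{\mu}_{\circ}\vert Df\vert + C(\mu)\,\rho\,\rho^{\mu-1}_{\circ}\vert f\vert \leq \rho^{\mu}_{\circ}\big(\rho\,\vert Df\vert +C(\mu)\,\vert f\vert \big) \quad \text{on }\cD.
\end{align*}
Raising this to the $p$-th power and integrating against $\rho^{\Theta-d}dx$, together with the equality of the zero order terms, yields $\|\rho^{\mu}_{\circ}f\|_{H^1_{p,\Theta}(\cD)}\leq C\|f\|_{K^1_{p,\theta,\Theta}(\cD)}$; the reverse inequality follows from the same computation with $\rho^{-\mu}_{\circ}$ in place of $\rho^{\mu}_{\circ}$, and the cases $m\geq 2$ are similar.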
Below we use
relation \eqref{eqn 8.9.7} to define $K^{\gamma}_{p,\theta,\Theta}(\cD)$ for all $\gamma \in \bR$.
Let $\psi=\psi_{\cD}$ be a smooth function in $\cD$ (see e.g. \cite[Lemma 4.13]{Ku})
such that for any $m\in \bN_0$,
\begin{equation}
\label{eqn 8.9.1}
\psi_{\cD}(x)\sim \rho_{\cD}(x),\quad \rho^{m}_{\cD}\vert D^{m+1}\psi_{\cD}\vert \leq
N(m)<\infty.
\end{equation}
Actually, such a $\psi$ exists on any domain. Indeed, let $\cO$ be an arbitrary domain, put $\rho_{\cO}(x)=d(x,\partial \cO)$, and define
\begin{equation}
\label{eqn 8.25.1}
\cO_{n,k}:=\{x\in \cO: e^{-n-k}<\rho_{\cO} (x)<e^{-n+k}\}.
\end{equation}
Then mollifying $1_{\cO_{n,2}}$ one can easily construct $\xi_n$ such that
$$
\xi_n \in \cC^{\infty}_c(\cO_{n,3}), \quad \vert D^m \xi_n\vert \leq C(m)e^{mn}, \quad \sum_{n\in \bZ} \xi_n(x) \sim 1,
$$
and then one can take
\begin{equation}\label{eqn 8.25.2}
\psi=\psi_{\cO}=\sum_{n\in \bZ} e^{-n}\xi_n(x).
\end{equation}
It is easy to check that $\psi=\psi_{\cO}$ satisfies \eqref{eqn 8.9.1} with $\rho_{\cO}$ in place of
$\rho_{\cD}$.
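For the reader's convenience, here is a sketch of why this $\psi$ satisfies \eqref{eqn 8.9.1}. If $\rho_{\cO}(x)\in(e^{-n_0-1},e^{-n_0+1})$ for some $n_0\in\bZ$, then $\xi_n(x)\neq 0$ is possible only for $\vert n-n_0\vert \leq 3$, and for such $n$ we have $e^{-n}\sim e^{-n_0}\sim \rho_{\cO}(x)$. Consequently,
\begin{align*}
\psi_{\cO}(x)=\sum_{\vert n-n_0\vert \leq 3} e^{-n}\xi_n(x)\sim \rho_{\cO}(x)\sum_{n\in\bZ}\xi_n(x)\sim \rho_{\cO}(x),
\qquad
\vert D^{m+1}\psi_{\cO}(x)\vert \leq \sum_{\vert n-n_0\vert \leq 3} e^{-n}\, C(m+1)\, e^{(m+1)n}\leq C(m)\,\rho^{-m}_{\cO}(x),
\end{align*}
which gives \eqref{eqn 8.9.1} with $\rho_{\cO}$ in place of $\rho_{\cD}$.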
Next we choose a nonnegative function $\zeta\in \cC^{\infty}_{c}(\bR_{+})$ such that $\zeta>0$ on $[e^{-1},e]$. Then, by the periodicity,
\begin{equation}
\label{11.4.1}
\sum_{n=-\infty}^{\infty}\zeta(e^{n+t})>c>0,\quad\forall\; t\in\bR.
\end{equation}
For $p\in(1,\infty)$ and $\gamma\in \bR$, by $H^{\gamma}_p=H^{\gamma}_p(\bR^d)$ we denote the space of Bessel potential with the norm
$$
\|u\|_{H^{\gamma}_p}:=\|(1-\Delta)^{\gamma/2}u\|_{L_p(\bR^d)}:=\|\cF^{-1}[(1+\vert \xi\vert ^2)^{\gamma/2} \cF(u)(\xi)]\|_{L_p(\bR^d)}.
$$
In case $\gamma\in \bN_0$, $H^{\gamma}_p(\bR^d)$ coincides with $W^{\gamma}_p(\bR^d)$. The spaces of Bessel potentials enjoy the property
$$
\|u\|_{H^{\gamma_1}_p}\le \|u\|_{H^{\gamma_2}_p},\quad \gamma_1\le \gamma_2.
$$
Especially, we have $\|u\|_{L_p}\le \|u\|_{H^{\gamma}_p}$ for any $\gamma\ge 0$.
For $\ell_2$-valued functions $g$ we also define
$$
\|g\|_{H^{\gamma}_p(\ell_2)}:=\|\vert (1-\Delta)^{\gamma/2}g\vert _{\ell_2}\|_{L_p(\bR^d)}.
$$
Moreover, for $\bR^d$-valued functions $\tbf=(f^1,\ldots,f^d)$ we define
$$
\|\tbf\|_{H^{\gamma}_p(d)}:=\|\,\vert (1-\Delta)^{\gamma/2}\tbf\vert \,\|_{L_p(\bR^d)}.
$$
From now on, if a function defined on a domain $\cO$ vanishes near the boundary of $\cO$, then by a trivial extension we consider it as a function defined on $\bR^d$. In particular, for any $k\in \bZ$ and a function $f$ on $\cO$, the function $\zeta(e^{-k}\psi_{\cO}(x))f(x)$ has a compact support in $\cO$ and can be considered as a function on $\bR^d$.
\begin{defn}
\label{defn 8.28}
Let $p\in(1,\infty), \Theta, \gamma\in \bR$, and $\cO$ be a domain in $\bR^d$. By $H^{\gamma}_{p,\Theta}(\cO)$ we denote the class of all distributions $f$ on $\cO$ such that
\begin{equation}
\label{eqn 8.10.14}
\|f\|^p_{H^{\gamma}_{p,\Theta}(\cO)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d)}<\infty,
\end{equation}
where $\psi=\psi_{\cO}$ is taken from \eqref{eqn 8.25.2}. Similarly, $H^{\gamma}_{p,\Theta}(\cO;\ell_2)$ is the set of $\ell_2$-valued functions $g$ such that
\begin{equation*}
\|g\|^p_{H^{\gamma}_{p,\Theta}(\cO;\ell_2)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))g(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d;\ell_2)}<\infty.
\end{equation*}
\end{defn}
It turns out (see \cite[Proposition 2.2]{Lo1} or \cite[Lemma 4.3]{ConicPDE}) that the new norm in \eqref{eqn 8.10.14} is equivalent to the norm in \eqref{eqn 8.9.5} if $\gamma\in \bN_0$. In other words,
for $\gamma \in \bN_0$,
\begin{equation}
\label{eqn 8.9.8}
\sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi_{\cO} (e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p} \quad \sim \quad \sum_{\vert \alpha\vert \leq \gamma} \int_{\cO} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\Theta-d}\,dx,
\end{equation}
and the equivalence relation depends only on $p,\gamma, \Theta, d, \zeta, \psi$ and $\cO$.
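As a quick sanity check, in the case $\gamma=0$ relation \eqref{eqn 8.9.8} can be seen directly: by the change of variables $y=e^{n}x$,
\begin{align*}
\sum_{n\in \bZ} e^{n\Theta} \int_{\bR^d}\vert \zeta(e^{-n}\psi_{\cO} (e^n x))f(e^{n}x)\vert^p\,dx
=\sum_{n\in \bZ} e^{n(\Theta-d)} \int_{\cO}\vert \zeta(e^{-n}\psi_{\cO} (y))\vert^p\,\vert f(y)\vert^p\,dy
\sim \int_{\cO} \vert f(y)\vert^p \rho^{\Theta-d}_{\cO}(y)\,dy,
\end{align*}
where the last relation holds because $\zeta(e^{-n}\psi_{\cO}(y))\neq 0$ forces $\psi_{\cO}(y)\sim e^{n}$, hence $e^{n}\sim \rho_{\cO}(y)$, and because $\sum_{n\in\bZ}\vert \zeta(e^{-n}\psi_{\cO}(y))\vert^p\sim 1$ by \eqref{11.4.1} and the compact support of $\zeta$.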
Now we use equivalence relations \eqref{eqn 8.9.7} and \eqref{eqn 8.9.8}, and define $K^{\gamma}_{p,\theta,\Theta}(\cD)$ for any chosen $\gamma\in \bR$.
\begin{defn}
\label{defn 8.19}
Let $p\in (1,\infty), \theta, \Theta, \gamma \in \bR$, and $\cD$ be a conic domain in $\bR^d$.
We write $f\in K^{\gamma}_{p,\theta,\Theta}(\cD)$ if and only if $\rho^{(\theta-\Theta)/p}_{\circ} f\in H^{\gamma}_{p,\Theta}(\cD)$, and define
\begin{equation}
\label{eqn 8.10.1}
\|f\|_{ K^{\gamma}_{p,\theta,\Theta}(\cD)} := \|\rho^{(\theta-\Theta)/p}_{\circ} f\|_{H^{\gamma}_{p,\Theta}(\cD)}.
\end{equation}
The space $K^{\gamma}_{p,\theta,\Theta}(\cD;\ell_2)$ and its norm are defined similarly. Also we write
$\tbf=(f^1,f^2,\cdots,f^d)\in K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)$ if
$$
\|\tbf\|_{K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)}:=\sum_{i=1}^d \|f^i\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}<\infty.
$$
\end{defn}
Note that the new norm of the space $K^{\gamma}_{p,\theta,\Theta}(\cD)$ is equivalent to the previous one if $\gamma\in \bN_0$.
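Two special cases may help in keeping track of the two weight parameters; both follow directly from Definition \ref{defn 8.19}, the second also using the $\gamma=0$ case of \eqref{eqn 8.9.8}:
$$
K^{\gamma}_{p,\Theta,\Theta}(\cD)=H^{\gamma}_{p,\Theta}(\cD),
\qquad
\|f\|^p_{K^{0}_{p,\theta,\Theta}(\cD)}\sim \int_{\cD}\vert f\vert^p\,\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\,dx.
$$
Thus $\theta$ governs the weight near the vertex (through $\rho_{\circ}$), while $\Theta$ governs the weight near the rest of the boundary (through $\rho$).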
Below we collect some basic properties of the space $K^{\gamma}_{p,\theta,\Theta}(\cD)$.
\begin{lemma}\label{property1}
Let $p\in(1,\infty)$ and $\theta, \Theta, \gamma \in\bR$.
(i) For a domain $\cO$ and $\eta\in \cC^{\infty}_c(\bR_+)$,
\begin{equation}
\label{eqn 4.24.5}
\sum_{n \in\bZ}
e^{n\Theta} \|\eta(e^{-n}\psi_{\cO} (e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p} \leq C(p,\Theta,d,\gamma, \eta, \cO) \|f\|_{H^{\gamma}_{p,\Theta}(\cO)}^{p}.
\end{equation}
The reverse inequality also holds if $\eta$ satisfies \eqref{11.4.1}.
Moreover, the same statements hold for $\ell_2$-valued functions.
(ii) $\cC_c^{\infty}(\cD)$ is dense in $K^{\gamma}_{p,\theta,\Theta}(\cD)$.
(iii) For any $\mu \in \bR$,
\begin{equation}
\label{eqn 8.19.81}
\|\psi^{\mu}f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}\sim \|f\|_{K^{\gamma}_{p,\theta+\mu p, \Theta+\mu p}(\cD)},
\end{equation}
where $\psi$ satisfies \eqref{eqn 8.9.1}. The same statement holds for $\ell_2$-valued functions.
(iv) (Pointwise multiplier) Let $\gamma\in \bR$, $n\in \bN_0$ with $\vert \gamma\vert \leq n$. If $\vert a\vert ^{(0)}_n:=\sup_{\cD} \sum_{\vert \alpha\vert \leq n } \rho^{\vert \alpha\vert }\vert D^{\alpha}a\vert <\infty$, then
\begin{equation}
\label{eqn 8.19.11}
\|af\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}\leq C(n,p,d)\vert a\vert ^{(0)}_n \|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}.
\end{equation}
(v) The operator $D_i:K^{\gamma}_{p,\theta,\Theta}(\cD)\to K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)$ is bounded for any $i=1,\ldots,d$. In general, for any multi-index $\alpha$ we have
\begin{align}
\label{eqn 4.16.1}
\|D^{\alpha}f\|_{K^{\gamma-\vert \alpha\vert }_{p,\theta+\vert \alpha\vert p,\Theta+\vert \alpha\vert p}(\cD)}\leq C \|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}.
\end{align}
The same statement holds for $\ell_2$-valued functions.
(vi) (Sobolev-H\"older embedding) Let $\gamma-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1)$.
Then for any $f\in K^{\gamma}_{p,\theta-p,\Theta-p}(\cD)$,
\begin{eqnarray}
&&\sum_{k\leq n} \vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}f\vert _{\cC(\cD)} \nonumber \\
&&\quad + [\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n} f]_{\cC^{\delta}(\cD)} \leq C \|f\|_{K^{\gamma}_{p,\theta-p,\Theta-p}(\cD)},
\label{eqn 8.21.1}
\end{eqnarray}
where $C=C(d,\gamma,p,\theta,\Theta,\cM)$.
\end{lemma}
\begin{proof}
All the results follow from Definition \ref{defn 8.19} and properties of the weighted Sobolev space $H^{\gamma}_{p,\Theta}(\cO)$ (cf. \cite{Lo1,Krylov 1999-1, Krylov 2001,KK2004}). See e.g. \cite[Proposition 2.2]{Lo1} for (i)-(iii) and see \cite[Theorem 3.1]{Lo1} for (iv).
To prove (v), we put $\xi=\rho^{(\theta-\Theta)/p}_{\circ}$. Then, using $\xi Df=D(\xi f)-\xi (\xi^{-1}D \xi) f$ and \eqref{eqn 8.10.1}, we get
$$
\|Df\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}\leq \|D(\xi f)\|_{H^{\gamma-1}_{p,\Theta+p}(\cD)}+\|(\xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}.
$$
By \cite[Theorem 3.1]{Lo1},
$$\|D(\xi f)\|_{H^{\gamma-1}_{p,\Theta+p}(\cD)} \leq C \|\xi f\|_{H^{\gamma}_{p,\Theta}(\cD)}=C\|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}.
$$
Using \eqref{eqn 8.28.4}, one can check $\vert \psi \xi^{-1}D\xi\vert ^{(0)}_m<\infty$ for any $m\in \bN$. Thus, by \eqref{eqn 8.19.81} and \eqref{eqn 8.19.11},
\begin{eqnarray*}
\|(\xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}
&\leq&C\| (\psi \xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta,\Theta}(\cD)}\leq C \|f\|_{K^{\gamma-1}_{p,\theta,\Theta}(\cD)}.
\end{eqnarray*}
Thus (v) is proved.
Finally we prove (vi). Put $g=\xi f$. Then by \cite[Theorem 4.3]{Lo1},
\begin{equation}
\label{eqn 8.28.8}
\sum_{k\leq n} \vert \rho^{k-1+\frac{\Theta}{p}} D^{k}g\vert _{\cC(\cD)}
+ [\rho^{n-1+\delta+\frac{\Theta}{p}} D^{n} g]_{\cC^{\delta}(\cD)} \leq C \|g\|_{H^{\gamma}_{p,\Theta-p}(\cD)}.
\end{equation}
Hence, to prove (vi), it is enough to note that the left hand side of \eqref{eqn 8.21.1} is bounded by a constant times the left hand side of \eqref{eqn 8.28.8}. The lemma is proved.
\end{proof}
Using the aforementioned spaces, we now introduce the function spaces for the solutions $u$ to equation \eqref{stochastic parabolic equation} as well as the function spaces for the inputs $f^0,\tbf$, and $g$.
To make equation \eqref{stochastic parabolic equation} well defined, we restrict ourselves to $p\in [2,\infty)$; see Remark \ref{sto series} ($i$) below. With such $p$ and a fixed time $T\in(0,\infty)$ we first define
\begin{eqnarray*}\label{entire}
&&\bH^{\gamma}_{p}(T):=L_p(\Omega\times (0,T], \cP ; H^{\gamma}_p),\\
&&\bH^{\gamma}_{p}(T,\ell_2):=L_p(\Omega\times (0,T], \cP ; H^{\gamma}_p(\ell_2)).
\end{eqnarray*}
Next, for $\theta, \Theta, \gamma \in\bR$ we define the function spaces
\begin{eqnarray*}
&&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)\,:=L_p(\Omega\times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD)),\\
&&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T,d)\,:=L_p(\Omega\times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)),\\
&&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T, \ell_2)\,:=L_p(\Omega \times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD;\ell_2)),
\end{eqnarray*}
and denote
$$
\bL_{p,\theta,\Theta}(\cD,T):=\bK^0_{p,\theta,\Theta}(\cD,T),\quad \bL_{p,\theta,\Theta}(\cD,T,d):=\bK^0_{p,\theta,\Theta}(\cD,T,d),
$$
$$ \bL_{p,\theta,\Theta}(\cD,T, \ell_2):=\bK^0_{p,\theta,\Theta}(\cD,T, \ell_2).
$$
Also, by $\bK^{\infty}_c(\cD,T)$ we denote the space of all functions $f$ of the form
\begin{equation*}
f(\omega,t,x)=\sum^m_{i=1}{\bf{1}}_{(\tau_{i-1}(\omega),\tau_i(\omega)]}(t)f_i(x),
\end{equation*}
where $\tau_0\le \cdots\le \tau_m$ is a finite sequence of bounded stopping times with respect to the filtration $(\rF_t)_{t\geq 0}$, and $f_i\in \cC^{\infty}_c(\cD)$, $i=1,\ldots,m$. Similarly, we define $\bK^{\infty}_c(\cD,T, \ell_2)$ as the space of $\ell_2$-valued functions $g=(g^1,g^2,
\ldots)$ such that the first finite number of $g^k$ are in $\bK^{\infty}_c(\cD,T)$ and the rest are all identically zero. We also define $\bK^{\infty}_c(\cD,T, d)$ for $\bR^d$-valued functions $\tbf=(f^1,\ldots,f^d)$ in the same manner.
Moreover, by $\bK^{\infty}_c(\cD)$ we denote the space of all functions $f$ of the form
\begin{equation*}
f(\omega,x)=\sum^m_{i=1}{\bf{1}}_{A_i}(\omega)f_i(x),
\end{equation*}
where $A_i\in\rF_0$ and $f_i\in \cC^{\infty}_c(\cD)$, $i=1,\ldots,m$.
\begin{remark}\label{dense space}
For any $\theta, \Theta, \gamma \in \bR$, $\bK^{\infty}_c(\cD,T)$ is dense in $\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)$ and so is $\bK^{\infty}_c(\cD,T,\ell_2)$ in $\bK^{\gamma}_{p,\theta,\Theta}(\cD,T,\ell_2)$. Indeed, by the definition of $\cP$, any function $f\in \bK^{\gamma}_{p,\theta,\Theta}(\cD,T)$ can be approximated by functions of the type
$$
\sum_{i=1}^m 1_{(\tau_i(\omega), \tau_{i+1}(\omega)]}(t) h_i(x),
$$
where the $\tau_i$ are bounded stopping times and $h_i\in K^{\gamma}_{p,\theta,\Theta}(\cD)$, $i=1,\ldots,m$. Thus the claim follows from Lemma \ref{property1} (ii). Similarly, $\bK^{\infty}_c(\cD)$ is dense in $L_p(\Omega;K^{\gamma}_{p,\theta,\Theta}(\cD)):=L_p(\Omega, \rF_0,\bP;K^{\gamma}_{p,\theta,\Theta}(\cD))$.
\end{remark}
From now on we will also use the notation
$$U^{\gamma+2}_{p,\theta,\Theta}(\cD):=K^{\gamma+2-2/p}_{p,\theta+2-p,\Theta+2-p}(\cD).
$$
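For instance, in the lowest-order case $\gamma=-1$ this reads $U^{1}_{p,\theta,\Theta}(\cD)=K^{1-2/p}_{p,\theta+2-p,\Theta+2-p}(\cD)$; this is exactly the space that appears for the initial data in Remark \ref{initial.p ge 2.} below.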
The following definition introduces the solution spaces for our SPDE.
\begin{defn}\label{first spaces}
Let $p\in[2,\infty)$ and $\theta, \Theta,\gamma \in\bR$. We write $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ if
$u \in \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\mathcal{D},T)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD):=L_p(\Omega,\rF_0,\bP;U^{\gamma+2}_{p,\theta,\Theta}(\cD))$, and there exists $(\tilde{f}, \tilde{g}) \in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\mathcal{D},T)\times \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$ such that
\begin{align}
du=\tilde{f}\,dt+\sum_k \tilde{g}^kdw^k_t,\quad t\in(0,T] \nonumber
\end{align}
in the sense of distributions on $\cD$, that is, for any $\varphi\in \cC_c^{\infty}(\cD)$ the equality
\begin{align}
\label{eqn sol}
(u(t,\cdot),\varphi)=(u(0,\cdot),\varphi)+\int^{t}_{0}(\tilde{f}(s,\cdot),\varphi)ds+\sum_{k=1}^{\infty}\int^t_0(\tilde{g}^k(s,\cdot),\varphi)dw^k_s
\end{align}
holds for all $t\in (0,T]$ (a.s.).
In this case we write
\begin{align*}
\bD u:=\tilde{f}\quad\text{and}\quad \bS u:=\tilde{g}.
\end{align*}
The norm in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is given by
\begin{align*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}&=\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}+\|\bD u\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}+\|\bS u\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}\\
&\quad +\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}.
\end{align*}
\end{defn}
\begin{remark}
\label{sol}
Let us go back to our main equation \eqref{stochastic parabolic equation}. Let $f^0\in \bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$,
$\tbf=(f^1,\cdots,f^d)\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, d)$, $g\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, and $u$ belong to $ \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)$ and be a solution to equation \eqref{stochastic parabolic equation}, that is, $u$ satisfies
$$
d u =\left( \cL u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t\in(0,T]
$$
in the sense of distributions on $\cD$. Then by \eqref{eqn 4.16.1} in Lemma \ref{property1} ($v$), we have
$$
\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\in \bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)
$$
and consequently $u$ belongs to $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ with the accompanied inequality
\begin{eqnarray}
&& \|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}\nonumber\\ &\leq& C \Big(\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}+ \|f^0\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}+ \sum_{i=1}^d \|f^i\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T)} \nonumber \\
&&\quad \quad +\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\Big). \label{eqiv norm}
\end{eqnarray}
\end{remark}
\begin{remark}
\label{sto series}
(i) Note that for any $m,n\in \bN$ with $m>n$, the quadratic variation of the continuous martingale $\sum_{k=n}^m \int^t_0(\tilde{g}^k, \varphi) dw^k_s$ is $\sum_{k=n}^m \int^t_0 (\tilde{g}^k(s), \varphi)^2ds$. Following the lines in \cite[Remark 3.2]{Krylov 1999-4} and using the condition $p\geq 2$, one can easily check
$$
\bE\sum_{k=1}^{\infty} \int^T_0 (\tilde{g}^k(t),\varphi)^2 dt
\leq N(\varphi,p,T) \|\tilde{g}\|^p_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)},
$$
which implies the infinite series $\sum_{k=1}^{\infty}\int^t_0 (\tilde{g}^k(s),\varphi)dw^k_s$ converges in $L_2\big(\Omega;\cC([0,T])\big)$ and in probability uniformly in $t\in [0,T]$. As a consequence, $(u(t,\cdot),\varphi)$ in \eqref{eqn sol} is a continuous semi-martingale on $[0,T]$.
(ii) In Definition~\ref{first spaces}, $\bD u$ and $\bS u$ are uniquely determined. This can be seen by using the same arguments in \cite[Remark 3.3]{Krylov 1999-4}.
\end{remark}
\begin{thm}
\label{banach}
For any $p\in [2,\infty)$ and $\theta,\Theta, \gamma\in \bR$, $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is a Banach space.
\end{thm}
\begin{proof}
We only need to prove completeness. This can be done by repeating the argument of Remark 3.8 in \cite{Krylov 2001}, which treats the case $\theta=\Theta$ and
$\cD=\bR^d_+$. That argument is quite general and works, without any changes, on any conic domain $\cD$ and for any $\theta,\Theta\in \bR$.
\end{proof}
The following theorem addresses important temporal properties of the functions in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. See Section \ref{sec:Introduction} for the notations $[\cdot]_{\cC^{\alpha}}$ and $\vert \cdot\vert _{\cC^{\alpha}}$.
\begin{thm}
\label{embedding}
Let $p\in[2,\infty)$ and $\theta,\Theta, \gamma\in \bR$.
(i) If $2/p<\alpha<\beta \leq 1$, then for any $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$,
\begin{eqnarray}
&&\bE [\psi^{\beta-1}u]^p_{\cC^{\alpha/2-1/p}\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)}\leq C\,T^{(\beta-\alpha)p/2}\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}, \label{eqn 8.10.10}\label{Holder1}
\end{eqnarray}
and in addition, if $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$,
\begin{eqnarray}
\bE \vert \psi^{\beta-1}u\vert ^p_{\cC\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)}&\leq& C\bE\|\psi^{\beta-1}u(0,\cdot)\|^p_{K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)}\nonumber\\
&& \quad+C T^{p\beta/2-1}\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)},\label{Holder2}
\end{eqnarray}
where $\psi$ satisfies \eqref{eqn 8.9.1} and constants $C$ are independent of $T$ and $u$.
(ii) For any $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ with $u(0,\cdot)= 0$, $u$ belongs to $L_p\big(\Omega; \cC([0,T]; K^{\gamma+1}_{p,\theta,\Theta}(\cD))\big)$ and
$$
\bE \sup_{t\leq T} \|u(t)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}\leq C \|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)},
$$
where $C=C(d,p,n,\theta,\Theta,\cD, T)$. In particular, for any $t\leq T$,
\begin{equation}
\label{eqn 8.25.31}
\|u\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,t)}\leq \int^t_0 \bE\sup_{r\leq s} \|u(r)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)} ds \leq
C\int^t_0 \|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,s)}ds.
\end{equation}
\end{thm}
\begin{proof}
We follow the argument in \cite[Section 6]{Krylov 2001} (or the proof of \cite[Theorem 2.8]{Kim2014}), using \cite[Corollary 4.12]{Krylov 2001}.
(i). As usual, we suppress the argument $\omega$. Put $\xi(x)=|x|^{(\theta-\Theta)/p}$ and set $v=\xi u$, $\bar{f}=\xi \bD u$, $\bar{g}=\xi \bS u$. Then we have
$$
dv=\bar{f}dt+\sum_{k=1}^{\infty} \bar{g}^k dw^k_t, \quad t\in(0,T]
$$
in the sense of distributions on $\cD$ with the initial condition $v(0,\cdot)=\xi u(0,\cdot)$. By \eqref{eqn 8.19.81} and Definition \ref{defn 8.19}, we have
\begin{eqnarray}
I_1&:=&\bE\left[\psi^{\beta-1}u\right]^p_{\cC^{\alpha/2-1/p}([0,T],K^{\gamma+2-\beta}_{p,\theta, \Theta}(\cD))} \nonumber\\
&\sim& \bE\left[v\right]^p_{\cC^{\alpha/2-1/p}([0,T],H^{\gamma+2-\beta}_{p,\Theta+p(\beta-1)}(\cD))} \label{eqn 8.20.11}\\
&\leq& C\sum_n e^{n(\Theta+p(\beta-1))}\bE
\left[v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\right]^p_{\cC^{\alpha/2-1/p}([0,T];H^{\gamma+2-\beta}_{p})}. \nonumber
\end{eqnarray}
Now, by assumption, the function $v_n(t,x):=v(t,e^nx)\zeta(e^{-n}\psi(e^nx))$ belongs to $\bH^{\gamma+2}_p(T)$ and satisfies
\begin{equation}
\label{eqn 08.31.1}
dv_n=\bar{f}(t,e^nx)\zeta(e^{-n}\psi(e^nx))dt+ \sum_{k=1}^{\infty} \bar{g}^k(t,e^nx) \zeta(e^{-n}\psi(e^nx)) dw^k_t, \quad t>0
\end{equation}
on the entire space $\bR^d$. Then, by \cite[Corollary 4.12]{Krylov 2001} and \eqref{eqn 08.31.1}, there exists a constant $C>0$,
independent of $T$ and $u$, so that for any constant $a>0$,
\begin{eqnarray*}
&&\bE\left[v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\right]^p_{\cC^{\alpha/2-1/p}([0,T];H^{\gamma+2-\beta}_{p})}\\
&\leq&
C\,T^{(\beta-\alpha)p/2}a^{\beta-1} \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}\\
&&\hspace{1cm}+\,a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)} \Big)
\end{eqnarray*}
holds. Taking $a=e^{-np}$, we note that
(\ref{eqn 8.20.11}) yields
\begin{eqnarray}
\nonumber
I_1 &\leq& C\,T^{(\beta-\alpha)p/2}\Big(\sum_n
e^{n(\Theta-p)}\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_p(T)}\\ \nonumber
&&\quad\quad+ \sum_n e^{n(\Theta+p)}
\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}\\
&&\quad \quad+\sum_n e^{n\Theta}\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_p(T,\ell_2)} \Big) \nonumber\\
&=& C\,T^{(\beta-\alpha)p/2} \Big(\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p, \Theta-p}(\cD,T)}
+\|\bD u\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)} +\|\bS u\|^p_{\bK^{\gamma+1}_{p,\theta, \Theta}(\cD,T,\ell_2)}\Big) \nonumber\\
&\le& C\, T^{(\beta-\alpha)p/2}\|u\|^p_{\cK^{\gamma+2}_{p,\theta, \Theta}(\cD,T)}. \nonumber
\end{eqnarray}
Thus \eqref{Holder1} is proved.
If $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$, then $\psi^{\beta-1}u$ belongs to $\cC\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)$. For estimate \eqref{Holder2}, we have
\begin{eqnarray}
I_2&:=&\bE \vert \psi^{\beta-1}u\vert ^p_{\cC\big([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\big)}\nonumber\\
&\leq& C\,\sum_n e^{n(\Theta+p(\beta-1))}\bE
\vert v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\vert ^p_{\cC([0,T];H^{\gamma+2-\beta}_{p})} \label{Holder2 proof}
\end{eqnarray}
and by \cite[Corollary 4.12]{Krylov 2001} again, for any constant $a>0$,
\begin{eqnarray*}
&&\bE\vert v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\vert ^p_{\cC\big([0,T];H^{\gamma+2-\beta}_{p}\big)}\\
&\leq&
C\,\bE\|v(0,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+2-\beta}_{p}}\\
&&+C\,T^{p\beta/2-1}a^{\beta-1} \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}\\
&&\hspace{0.7cm}+a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)} \Big).
\end{eqnarray*}
This, \eqref{Holder2 proof}, and the same argument as above, in particular the choice $a=e^{-np}$ for each $n$, lead us to \eqref{Holder2}.
(ii). We use the notation from (i). Obviously,
$$
\bE\sup_{t\leq
T}\|u(t)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}\leq C\,
\sum_ne^{n\Theta} \bE \sup_{t\leq T}
\|v(t,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+1}_{p}}.
$$
By Remark 4.14 in \cite{Krylov 2001} with $\beta=1$ there, $v_n \in L_p(\Omega; \cC([0,T]; H^{\gamma+1}_p))$ and for any $a>0$,
\begin{align*}
\bE \sup_{t\leq T}
\|v(t,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+1}_{p}}\leq
C\, \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}&\\
+a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}
+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)}&\Big).
\end{align*}
Again, taking $a=e^{-np}$ and following the above arguments, we get
\begin{eqnarray*}
&&\bE\sup_{t\leq T}\|u(t)\|^p_{K^{\gamma+1}_{p,\theta, \Theta}(\cD)} \\
&&\leq
C\, \Big(\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p, \Theta-p}(\cD,T)}
+\|\bD u\|^p_{\bK^{\gamma}_{p,\theta+p, \Theta+p}(\cD,T)}+\|\bS u\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}\Big)\\
&&= C\,\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}.
\end{eqnarray*}
The theorem is proved.
\end{proof}
\begin{remark}\label{additional condition initial}
The additional condition $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$ for \eqref{Holder2} does not follow from the assumption $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$.
This condition is not needed when we prove the corresponding result on polygonal domains; see Remark \ref{remark 8.29} for details.
\end{remark}
\begin{remark}
Theorems \ref{banach} and \ref{embedding} hold for any $\theta, \Theta\in \bR$, but certain restrictions will be given later for our main results, Theorems \ref{main result} and \ref{main result-random}.
Actually the admissible range of $\theta$ for our Sobolev-regularity theory of equation \eqref{stochastic parabolic equation} is affected by \emph{the shape of} $\cD=\cD(\cM)$, the uniform parabolicity of the leading coefficients, the space dimension $d$, and the summability parameter $p$. On the other hand, the admissible range of $\Theta$ depends only on $d$ and $p$, that is,
\begin{equation*}
d-1<\Theta<d-1+p.
\end{equation*}
\end{remark}
To explain the admissible range of $\theta$ for equation \eqref{stochastic parabolic equation} we need the following definitions. For some of the notations in them one can refer to Section \ref{sec:Introduction}.
\begin{defn}[cf. Section 2 of \cite{Kozlov Nazarov 2014}]
\label{lambda}
Let $L=\sum_{i,j=1}^d \alpha^{ij}(t)D_{ij}$ be a uniformly parabolic ``deterministic'' operator with bounded coefficients $\alpha^{ij}$.
(i) By $\lambda^+_{c,L}=\lambda^+_{c,L,\cD}$ we denote the supremum of all $\lambda\geq 0$ such that for some constant $K_0=K_0(\lambda, L,\cM)$ it holds that
\begin{equation} \label{eqn 8.17.10}
\vert v(t,x)\vert\le K_0 \left(\frac{\vert x\vert}{R}\right)^{\lambda}\sup_{Q^{\mathcal{D}}_{\frac{3R}{4}}(t_0,0)}\ \vert v\vert,
\quad
\forall \;(t,x)\in Q^{\mathcal{D}}_{R/2}(t_0,0)
\end{equation}
for any $R>0$, any $t_0$, and any deterministic function $v=v(t,x)$ belonging to $\mathcal{V}_{loc}(Q^{\mathcal{D}}_R(t_0,0))$ and satisfying
\begin{equation}
\label{eqn 8.17.14}
v_t=L v \quad \text{in}\; Q^{\mathcal{D}}_R(t_0,0)\quad ; \;\quad
v(t,x)=0\quad\text{for}\;\; x\in\partial\mathcal{D}.
\end{equation}
(ii) By $\lambda^-_{c,L}$ we denote the supremum of $\lambda \geq 0$ with the above property for the operator
\begin{equation*}
\hat{L}:=\sum_{i,j}\alpha^{ij}(-t)D_{ij}.
\end{equation*}
\end{defn}
Note that $K_0$ in \eqref{eqn 8.17.10} may depend on the operator $L$. Such dependence on $L$ is one of the major obstacles when one handles SPDEs with random coefficients, since these naturally involve infinitely many operators at the same time. To treat such a case, which is in fact the case in this article, we introduce the following definition.
\begin{defn}
\label{lambda2}
(i) By $\cT_{\nu_1,\nu_2}$ we denote the collection of all ``deterministic'' operators of the form $L=\sum_{i,j=1}^d \alpha^{ij}(t)D_{ij}$, where the $\alpha^{ij}(t)$ are measurable in $t$ and satisfy Assumption \ref{ass coeff} with the fixed constants $\nu_1,\nu_2$ in the uniform parabolicity condition \eqref{uniform parabolicity}.
(ii) For a fixed $\cD=\cD(\cM)$, by $\lambda_c(\nu_1,\nu_2)=\lambda_c(\nu_1,\nu_2,\cD)$ we denote the supremum of all $\lambda\geq 0$ such that for some constant $K_0=K_0(\lambda, \nu_1,\nu_2,\cM)$ it holds that for any operator $L \in \cT_{\nu_1,\nu_2}$, $R>0$ and $t_0$,
\begin{equation}
\label{eqn 8.17.11}
\vert v(t,x)\vert\le K_0 \left(\frac{\vert x\vert}{R}\right)^{\lambda}\sup_{Q^{\mathcal{D}}_{\frac{3R}{4}}(t_0,0)}\ \vert v\vert , \quad
\forall \;(t,x)\in Q^{\mathcal{D}}_{R/2}(t_0,0),
\end{equation}
provided that $v$ is a deterministic function in $\mathcal{V}_{loc}(Q^{\mathcal{D}}_R(t_0,0))$ satisfying
\begin{equation*}
v_t=L v \quad \text{in}\; Q^{\mathcal{D}}_R(t_0,0)\quad ; \;\quad
v(t,x)=0\quad\text{for}\;\; x\in\partial\mathcal{D}.
\end{equation*}
\end{defn}
\begin{remark}
(i) Note that the dependence of $K_0$ in Definition \ref{lambda2} is more explicit than that in Definition \ref{lambda}. By the definitions, if $L$ is an operator in $\cT_{\nu_1,\nu_2}$, then
$$\lambda^{\pm}_{c,L}\geq \lambda_c(\nu_1,\nu_2).
$$
(ii) The values of $\lambda^{\pm}_{c,L}$ and $\lambda_c(\nu_1,\nu_2)$ do not change if one replaces $\frac{3}{4}$ in \eqref{eqn 8.17.10} and \eqref{eqn 8.17.11} by any number in $(1/2,1)$ (see \cite[Lemma 2.2]{Kozlov Nazarov 2014}). Following the proof of \cite[Lemma 2.2]{Kozlov Nazarov 2014}, one can also show that for any constant $\beta>0$
$$
\lambda^{\pm}_{c,\beta L}= \lambda^{\pm}_{c,L}, \qquad \lambda_c(\beta \nu_1,\beta \nu_2)= \lambda_c(\nu_1,\nu_2).
$$
\end{remark}
Below are some sharp estimates for $\lambda^{\pm}_{c,L}$ and $\lambda_c(\nu_1,\nu_2)$. See \cite{Kozlov Nazarov 2014} for more information.
\begin{prop}\label{critical exponents}
(i) If $L=\Delta_x$, then
\begin{equation*}
\lambda^{\pm}_{c,L}=-\frac{d-2}{2}+\sqrt{\Lambda+\frac{(d-2)^2}{4}} \, >0,
\end{equation*}
where $\Lambda=\Lambda_{\cD}$ is the first eigenvalue of the Laplace--Beltrami operator with the Dirichlet condition on $\mathcal{M}$.
In particular, if $d=2$ and $\cD=\cD^{(\kappa)}$ (see \eqref{wedge in 2d}), then
\begin{equation*}
\lambda^{\pm}_{c,L}=\frac{\pi}{\kappa}.
\end{equation*}
(ii) Let $0<\nu_1\leq \nu_2<\infty$. Then we have $\lambda_{c}(\nu_1,\nu_2)>0$ and
\begin{equation}\label{CUB3}
\lambda_{c}(\nu_1,\nu_2) \geq -\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}.
\end{equation}
\end{prop}
\begin{proof}
(i) follows from \cite[Theorem 2.4.3]{Kozlov Nazarov 2014}. (ii) also follows from
the proofs of \cite[Theorem 2.4.1, Theorem 2.4.7]{Kozlov Nazarov 2014}, which only consider the case $\nu_2=1/{\nu_1}$. Inspecting the proofs of \cite[Theorem 2.4.1, Theorem 2.4.7]{Kozlov Nazarov 2014} one can easily check
$$
\lambda^{\pm}_{c,L}\geq -\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}\quad\text{and}\quad\lambda^{\pm}_{c,L}>c>0\quad\text{if}\quad L\in \cT_{\nu_1,\nu_2},
$$
where the constant $c$ is the H\"older exponent of solutions to equation \eqref{eqn 8.17.14}, and it can be chosen so that it depends only on $\nu_1,\nu_2$ and $\cM$.
Moreover, for $\lambda>0$ satisfying
$$
\lambda<c\vee \Big(-\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}\Big)
$$
the constant $K_0$ in \eqref{eqn 8.17.11} can be chosen so that it depends only on $\nu_1,\nu_2$ and $\cM$.
This proves \eqref{CUB3}.
\end{proof}
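As a quick consistency check of (i) in the two-dimensional wedge case (a direct computation recorded only for the reader's convenience): for $\cD=\cD^{(\kappa)}$ from \eqref{wedge in 2d} the cross-section $\cM$ is an arc of length $\kappa$, so the Dirichlet eigenvalue problem for the Laplace--Beltrami operator on $\cM$ reduces to $-\phi''=\Lambda \phi$ on $(0,\kappa)$ with $\phi(0)=\phi(\kappa)=0$, whence $\Lambda_{\cD}=(\pi/\kappa)^2$ and, since $d=2$,
$$
\lambda^{\pm}_{c,L}=-\frac{d-2}{2}+\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}=\sqrt{\Lambda_{\cD}}=\frac{\pi}{\kappa}.
$$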
\begin{example}[$d=2$]\label{example proposition}
For $\kappa\in (0,2\pi)$ and $\alpha\in [0,2\pi)$, we consider
$$
\cD=\mathcal{D}_{\kappa,\alpha}:=\left\{x=(r\cos\theta,\ r\sin\theta)\in\bR^2 \,\vert\, r\in(0,\ \infty),\ -\frac{\kappa}{2}+\alpha<\theta<\frac{\kappa}{2}+\alpha\right\}
$$
and the constant operator
$$
L=aD_{x_1x_1}+b(D_{x_1x_2}+D_{x_2x_1})+cD_{x_2x_2},
$$
where $a,b,c$ are constants such that $a+c>0$ and $ac-b^2>0$. Then, by \cite[Proposition 4.1]{Green}, we have
\begin{align*}
\lambda^{\pm}_{c,L}=\lambda^{\pm}_{c,L,\cD_{\kappa,\alpha}}=\frac{\,\pi\,}{\widetilde{\kappa}},
\end{align*}
where
$$
\widetilde{\kappa}=\pi-\arctan\Big(\,\frac{\bar{c}\,\cot(\kappa/2)+\bar{b}}{\sqrt{\det(A)}}\,\Big)-\arctan\Big(\,\frac{\bar{c}\,\cot(\kappa/2)-\bar{b}}{\sqrt{\det(A)}}\,\Big)
$$
where $\det(A):=ac-b^2\ (=\bar{a}\bar{c}-\bar{b}^{\,2})$ and the constants $\bar{a}, \bar{b}, \bar{c}$ are given by the relation
$$
\begin{pmatrix} \bar{a} & \bar{b}\\
\bar{b}& \bar{c} \end{pmatrix}
= \begin{pmatrix} \cos \alpha & \sin \alpha\\
-\sin \alpha & \cos \alpha \end{pmatrix}
\begin{pmatrix} a & b\\
b& c \end{pmatrix}
\begin{pmatrix} \cos \alpha & - \sin \alpha\\
\sin \alpha & \cos \alpha \end{pmatrix}.
$$
In particular, we have $\widetilde{\kappa}=\pi$ if $\kappa=\pi$.
Now, let $\kappa\neq \pi$ and $\alpha=0$ in the definition of $\cD$, and let $b=0$ in $L$. In this case we can take $\nu_1=a\wedge c$ and $\nu_2=a\vee c$ in \eqref{uniform parabolicity}, and $\widetilde{\kappa}$ is determined by the simple relation
\begin{equation*}
\tan\Big(\frac{\widetilde{\kappa}}{\,2\,}\Big)=\sqrt{\frac{a}{c}}\tan\Big(\frac{\kappa}{\,2\,}\Big).
\end{equation*}
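As a purely illustrative numerical example (with constants chosen only for demonstration), take $a=4$, $b=0$, $c=1$ and $\kappa=\pi/2$: then $\tan(\widetilde{\kappa}/2)=2\tan(\pi/4)=2$, so $\widetilde{\kappa}=2\arctan 2\approx 2.214$ and $\lambda^{\pm}_{c,L}=\pi/\widetilde{\kappa}\approx 1.419$, which is strictly smaller than the value $\pi/\kappa=2$ obtained for the Laplacian on the same wedge.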
\end{example}
We are now ready to state our Sobolev regularity results on conic domains. We formulate them as two theorems, treating non-random and random coefficients separately; their proofs are given in Section \ref{sec:main proofs}.
Note that the admissible range of $\theta$ for non-random coefficients is wider than that for random coefficients.
\begin{thm}[SPDE on conic domains with non-random coefficients]
\label{main result}
Let $\cL=\sum_{ij}a^{ij}(t)D_{ij}$ be non-random, $p\in[2,\infty)$, and $\gamma \geq -1$. Also assume that Assumptions \ref{ass M} and \ref{ass coeff} hold, and $\theta, \Theta\in\bR$ satisfy
\begin{equation}
\label{theta11}
p(1-\lambda^+_{c,\cL})<\theta<p(d-1+\lambda^-_{c,\cL}), \qquad d-1<\Theta<d-1+p.
\end{equation}
Then for any $f^0\in\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf=(f^1,\cdots,f^d)\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,l_2)$, and $u_0\in\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, equation \eqref{stochastic parabolic equation} has a unique solution $u$ in the class $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ and moreover we have
\begin{eqnarray}\label{main estimate}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}
&\leq& C\big(\|f^0\|_{\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)}
+ \|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,l_2)}\nonumber\\
&&\;\;\quad+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\big),
\end{eqnarray}
where the constant $C$ depends only on $\cM,d,p,\theta,\Theta,\cL, \gamma$. In particular, it is independent of $T$.
\end{thm}
\begin{remark}
(i) A particular case of the above theorem appears in \cite{CKL 2019+} (cf. \cite{CKLL 2018}). More precisely, the combination of
Theorem 2.8 and Corollary 2.11 in \cite{CKL 2019+} covers the case
$$
\cL=\Delta, \quad \Theta=d=2, \quad \cD=\cD^{(\kappa)}\text{ of}\;\; \eqref{wedge in 2d}.
$$
(ii) If $\gamma \geq 0$, the separation of the two terms $f^0$ and $\tbf=(f^1,\cdots,f^d)$ in our equation is redundant, and we may simply take $f\in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$ instead. This is because, by \eqref{eqn 4.16.1}, we have
$
h^0+\sum_{i=1}^d h^i_{x^i}\in K^{\gamma}_{p,\theta+p,\Theta+p}(\cD)
$ for
$
h^0\in K^{\gamma}_{p,\theta+p,\Theta+p}(\cD),\;\; h^i \in K^{\gamma+1}_{p,\theta,\Theta}(\cD),\;i=1,\ldots,d.
$
The corresponding change in the estimate \eqref{main estimate} is clear.
\end{remark}
\begin{thm}[SPDE on conic domains with random coefficients]
\label{main result-random}
Let $\cL=\sum_{ij}a^{ij}(\omega,t)D_{ij}$ be random, $p\in[2,\infty)$, and $\gamma \geq -1$. Also assume that Assumptions \ref{ass M} and \ref{ass coeff} hold, $d-1<\Theta<d-1+p$, and
\begin{equation}
\label{theta}
p\big(1-\lambda_{c}(\nu_1,\nu_2)\big)<\theta<p\big(d-1+\lambda_{c}(\nu_1,\nu_2)\big).
\end{equation}
Then all the claims of Theorem \ref{main result} hold with a constant $C=C(\cM,d,p,\gamma,\theta,\Theta,\nu_1,\nu_2)$.
\end{thm}
\begin{remark}
By Proposition \ref{critical exponents}, \eqref{theta} is fulfilled if
\begin{equation}
\label{eqn 8.28.10}
p\left(\frac{d+2}{2}-\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right)<\theta< p\left(\frac{d-2}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right).
\end{equation}
In the case $\cL=\Delta$, by Proposition \ref{critical exponents}, \eqref{theta11} is fulfilled if
$$
p\left(\frac{d}{2}-\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right)<\theta< p\left(\frac{d}{2}+\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right).
$$
\end{remark}
\begin{remark}
By \eqref{space K norm}, inequality \eqref{main estimate} yields \eqref{main estimate intro}. In particular, if $\gamma=-1$ and $u(0,\cdot)\equiv 0$, then we have
\begin{eqnarray*}
&&\bE \int^T_0 \int_{\cD} \left(\vert\rho^{-1}u\vert ^p+\vert u_x\vert ^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber
\\
&\leq& C\, \bE
\int^T_0 \int_{\cD} \Big( \vert \rho f^0\vert ^p+\sum_{i=1}^d \vert f^i\vert ^p+\vert g\vert ^p_{\ell_2}\Big)
\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt.
\end{eqnarray*}
\end{remark}
\begin{remark}
The solutions $u$ in Theorems \ref{main result} and \ref{main result-random} satisfy zero Dirichlet boundary condition. Indeed, under the assumption $d-1<\Theta<d-1+p$, \cite[Theorem 2.8]{doyoon} implies that the trace operator is well defined for functions in $\bK^1_{p,\theta-p, \Theta-p}(\cD,T)$, and hence by Lemma \ref{property1} (iv) we have $u\vert _{\partial \cD}=0$.
\end{remark}
Here comes our H\"older regularity properties of solutions on conic domains.
\begin{thm}[H\"older estimates on conic domains]
\label{cor 8.10}
Let $p\in [2,\infty)$, $\theta, \Theta\in \bR$, and $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$
be the solution taken from Theorem \ref{main result} (or from Theorem \ref{main result-random}).
(i) If $\gamma+2-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1]$, then for any $0\le k\leq n$,
$$
\vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}u(\omega,t,\cdot)\vert _{\cC(\cD)}+
[\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n}u(\omega,t,\cdot)]_{\cC^{\delta}(\cD)}<\infty
$$
holds for a.e. $(\omega,t)$, in particular,
\begin{equation}
\label{eqn 8.10.21}
\vert u(\omega,t,x)\vert \leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x)\quad \text{for all }x\in\cD.
\end{equation}
(ii) Let
$$
2/p<\alpha<\beta\leq 1, \quad \gamma+2-\beta-d/p \geq m+\varepsilon,
$$
where $m\in \bN_0$ and $\varepsilon\in (0,1]$. Put $\eta=\beta-1+\Theta/p$. Then for any $0\le k\leq m$,
\begin{align} \label{eqn 8.11.11}
&\bE \sup_{t\ne s\leq T}
\frac
{\big\vert \rho^{\eta+k} \rho^{(\theta-\Theta)/p}_{\circ} \big(D^ku(t,\cdot)-D^ku(s,\cdot)\big)\big\vert^p_{\cC(\cD)}}
{\vert t-s\vert ^{p\alpha/2-1}}<\infty, \\
& \bE \sup_{t\ne s\leq T}
\frac
{\left[\rho^{\eta+m+\varepsilon}
\rho^{(\theta-\Theta)/p}_{\circ} \left(D^mu(t,\cdot)-D^mu(s,\cdot)\right)\right]^p_{\cC^{\varepsilon}(\cD)}}
{\vert t-s\vert ^{p\alpha/2-1}} <\infty. \label{eqn 8.11.21}
\end{align}
\end{thm}
\begin{proof}
(i) By definition, for almost all $(\omega, t)$, we have $u(\omega,t,\cdot)\in K^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD)$. Thus (i) is a consequence of \eqref{eqn 8.21.1}. Similarly, the claims of (ii) follow from \eqref{eqn 8.21.1}, \eqref{eqn 8.10.10}, and the observation
\begin{eqnarray*}
&&\bE \sup_{t\ne s\le T} \frac{\|\psi^{\beta-1}(u(t)-u(s))\|^p_{K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)}}{\vert t-s\vert ^{(\alpha/2-1/p)p}} \\
&\sim&
\bE \sup_{t\ne s\le T} \frac{\|u(t)-u(s)\|^p_{K^{\gamma+2-\beta}_{p,\theta+\beta p-p,\Theta+\beta p-p}(\cD)}}{\vert t-s\vert ^{(\alpha/2-1/p)p}}.
\end{eqnarray*}
\end{proof}
\begin{remark}
(i) Estimate \eqref{eqn 8.10.21} shows how fast the solution from Theorem \ref{main result} (or Theorem \ref{main result-random}) vanishes near the boundary. Near boundary points away from the vertex, $u$ is controlled by $\rho^{1-\Theta/p}$, and, if $p>\Theta$, the decay near the vertex is not slower than $\rho^{1-\theta/p}_{\circ}$.
(ii) In \eqref{eqn 8.11.11} and \eqref{eqn 8.11.21}, $\alpha/2-1/p$ is the H\"older exponent in time and $\eta$ is related to the decay rate near the boundary.
As $\alpha/2-1/p\to 1/2-1/p$, $\eta$ must increase accordingly.
(iii) Suppose $\theta=d$ satisfies \eqref{eqn 8.28.10}, and let $u\in \cK^{1}_{p,d,d}(\cD,T)$ be the solution from Theorem \ref{main result-random}. Assume
$$
\kappa_0:=1-\frac{(d+2)}{p}>0.
$$
Then for any $\kappa\in (0,\kappa_0)$, we have
\begin{equation}
\label{eqn 8.11.12}
\bE \sup_{t\leq T} \sup_{x,y\in\cD} \Big\vert \frac{\vert u(t,x)-u(t,y)\vert }{\vert x-y\vert ^{\kappa}}\Big\vert^p + \bE \sup_{t\ne s\leq T}\sup_{x\in \cD}\Big\vert \frac{\vert u(t,x)-u(s,x)\vert }{\vert t-s\vert ^{\kappa/2}}\Big\vert^p <\infty.
\end{equation}
Indeed, \eqref{eqn 8.11.12} can be obtained from \eqref{eqn 8.11.11} and \eqref{eqn 8.11.21} with appropriate choices of
$\alpha,\beta$. For the first part, to apply \eqref{eqn 8.11.21} we take $\beta=\kappa_0-\kappa+2/p$ such that $2/p<\beta<1$, and take $\varepsilon=1-\beta-d/p=\kappa=-\eta$. For the second part, we use \eqref{eqn 8.11.11} with $\alpha=\kappa+2/p, \beta=1-d/p$ so that $1-\alpha p/2=-p\kappa/2$.
\end{remark}
\mysection{Key estimates on conic domains}
In this section we consider the solutions to SPDEs having a non-random operator. We fix a deterministic operator
\begin{equation}
\label{8.29.1}
L_0:=\sum_{i,j}\alpha^{ij}(t)D_{ij}\,\, \in \, \cT_{\nu_1,\nu_2}.
\end{equation}
See Definition \ref{lambda2}.
We will estimate the zeroth order derivative of the solution of the equation
\begin{equation}\label{one event equation}
d u =\left( L_0 u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t>0, \;x\in \cD(\cM).
\end{equation}
Let $G(t,s,x,y)$ denote the Green's function for the operator $\partial_t-L_0$ on $\cD=\cD(\cM)$. By definition (cf. \cite[Lemma 3.7]{Kozlov Nazarov 2014}), $G$ is a nonnegative function such that for any fixed $s\in \bR$ and $y\in\mathcal{D}$, the function $v(t,x)=G(t,s,x,y)$ satisfies
\begin{align*}
&\big(\partial_t -L_0\big)v(t,x)=\delta(x-y)\delta(t-s) \quad \text{in}\quad \bR \times \cD, \nonumber\\
& v(t,x)=0\quad \textrm{on}\quad \bR \times\partial\mathcal{D} \; ; \quad v(t,x)=0\quad \textrm{for} \quad t<s.
\end{align*}
Now, for any given
$$
f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T), \quad \textbf{f}=(f^1,\cdots,f^d)\in \bL_{p,\theta,\Theta}(\cD,T,d), \quad
$$
$$
g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2), \quad u_0 \in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD)),
$$
we define the function $\cR(u_0,f^0,\textbf{f},g)$ by
\begin{eqnarray}
&&\cR(u_0,f^0,\textbf{f},g)(t,x)\nonumber\\
&:=&\int_{\cD} G(t,0,x,y)u_0(y)dy\nonumber\\
&&+
\int^t_0\int_{\cD}G(t,s,x,y)f^0(s,y)dyds-\sum_{i=1}^d \int^t_0\int_{\cD}G_{y^i}(t,s,x,y)f^i(s,y)dyds\nonumber\\
&&+\sum_{k=1}^{\infty}\int^t_0\int_{\cD}G(t,s,x,y)g^k(s,y)dy\,dw^k_s. \label{eqn 8.21.11}
\end{eqnarray}
One immediately notices that the function $\cR(u_0,f^0,\textbf{f},g)$ is a representation of a solution of \eqref{one event equation} with zero boundary condition and initial condition $u(0,\cdot)=u_0(\cdot)$; see Lemma \ref{lemma rep} in the next section.
Our main result of this section is about this representation and it is given in the following lemma.
\begin{lemma}\label{main est}
Let $T<\infty$, $p\in [2,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy
$$
p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p.
$$
If $f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf\in \bL_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2)$, and $u_0\in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD)):=L_p(\Omega,\rF_0; K^0_{\theta+2-p,\Theta+2-p}(\cD))$, then $u:=\cR(u_0,f^0,\tbf,g)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}&\leq& C \Big(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cD,T)} + \|\tbf\|_{\bL_{p,\theta,\Theta}(\cD,T,d)}\nonumber \\
&&\quad\quad+
\|g\|_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)} +\|u_0\|_{ L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}\Big)\nonumber\\
\end{eqnarray*}
holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if
\begin{equation*}
p\left(1-\lambda_c(\nu_1,\nu_2)\right)<\theta< p\left(d-1+ \lambda_c(\nu_1,\nu_2)\right),
\end{equation*}
then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$ and $\nu_2$.
\end{lemma}
To prove Lemma~\ref{main est}, we use the following two results. Lemma \ref{lemma3.1} gathers rather technical but important inequalities that we use repeatedly in this section.
\begin{lemma}\label{lemma3.1}
(i) Let $\alpha+\beta>0,\ \beta > 0$, and $\gamma>0$. Then for any $a\geq b>0$
\begin{align*}
\int^{\infty}_0 \frac{1}{\left(a+\sqrt{t}\right)^{\alpha}\left(b+\sqrt{t}\right)^{\beta+\gamma}t^{1-\frac{\gamma}{2}}}dt \leq \frac{C}{a^\alpha b^\beta},
\end{align*}
where $C= C(\alpha,\beta,\gamma)$.
(ii) Let $\sigma>0,\ \alpha+\gamma>-d$, $\gamma>-1$ and $\beta,\ \nu \in \bR$. Then for any $x\in\cD$,
\begin{align*}
\int_{\mathcal{D}} \frac{\vert y\vert ^{\alpha}}{\left(\vert y\vert +1\right)^{\beta}}\frac{\rho(y)^{\gamma}}{\left(\rho(y)+1\right)^{\nu}}\ e^{-\sigma \vert x-y\vert ^2} dy \leq
C \left(\vert x\vert +1\right)^{\alpha-\beta}\left(\rho(x)+1\right)^{\gamma-\nu},
\end{align*}
where $C=C(\cM,d, \alpha, \beta, \gamma,\nu,\sigma)$.
\end{lemma}
\begin{proof}
See Lemma 3.2 and Lemma 3.7 in \cite{ConicPDE}.
\end{proof}
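For orientation, a direct check of (i) in the simplest admissible case $\alpha=0$, $\beta=\gamma=1$: substituting $\sqrt{t}=s$,
$$
\int^{\infty}_0 \frac{dt}{\left(b+\sqrt{t}\right)^{2}\,t^{1/2}}
=\int^{\infty}_0 \frac{2\,ds}{(b+s)^{2}}=\frac{2}{b},
$$
which is indeed of the claimed form $C\,a^{-\alpha}b^{-\beta}$ with $C=2$.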
For the operator $L_0$, we take the constants $K_0, \lambda^+_{c,L_0}, \lambda^-_{c,L_0}$ and the operator $\hat{L}_0$ from Definition \ref{lambda}.
\begin{lemma}\label{Green estimate}
Let $\lambda^+\in \big(0,\lambda^+_{c,L_0}\big)$ and $\lambda^-\in\big(0,\lambda^-_{c,L_0}\big)$. Denote
$$
K_0^+=K_0(L_0,\cM,\lambda^+), \quad K_0^-=K_0(\hat{L}_0,\cM,\lambda^-).$$
Then, there exist positive constants
$C=C(\mathcal{M},\nu_1,\nu_2,\,\lambda^{\pm}, K_0^{\pm})$
and $\sigma=\sigma(\nu_1,\nu_2)$ such that for any $t>s$ and $x,y\in \mathcal{D}(\cM)$, the estimates
\begin{align*}
&(i)\quad G(t,s,x,y)\leq \frac{C}{(t-s)^{d/2}}J_{t-s,x}\,J_{t-s,y}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\\
&(ii)\quad \big\vert \nabla_y G(t,s,x,y)\big\vert \leq \frac{C}{(t-s)^{(d+1)/2}}J_{t-s,x}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}
\end{align*}
hold, where
$$
R_{t,x}:=\frac{\rho_{\circ}(x)}{\rho_{\circ}(x)+\sqrt{t}},\quad J_{t,x}:=\frac{\rho(x)}{\rho(x)+\sqrt{t}}.
$$
In particular, if $\lambda^{\pm}\in (0,\lambda_c(\nu_1,\nu_2))$, then $C$ depends only on $\cM, \nu_1,\nu_2, \lambda^{\pm}$.
\end{lemma}
\begin{proof}
(i) See inequality (2.8) in \cite{Green}.
(ii) Denote $\hat{G}(t,s,x,y)=G(-s,-t,y,x)$. Then $\hat{G}$ is the Green's function of the operator $\partial_t-\hat{L}_0$,
where $\hat{L}_0:=\sum_{i,j}\alpha^{ij}(-t)D_{ij}$. By inequality (2.14) of \cite{Green} applied to $\hat{G}$, for any $\lambda^+\in(0,\lambda_c^+)$ and $\lambda^-\in(0,\lambda_c^-)$ there exist constants $C, \sigma>0$, with the dependencies prescribed in the lemma, such that
\begin{align*}
\vert \nabla_x \hat{G}(t,s,x,y)\vert \leq \frac{C}{(t-s)^{(d+1)/2}}J_{t-s,y}R^{\lambda^--1}_{t-s,x}R^{\lambda^+-1}_{t-s,y}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}
\end{align*}
for any $t>s$ and $x,\,y\in\cD$. This and the fact $\nabla_y G(t,s,x,y)=\nabla_x \hat{G}(-s,-t,y,x)$ prove (ii).
\end{proof}
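Let us record, only as a reading aid, how the correction factors in Lemma \ref{Green estimate} may be interpreted. Since $\rho\le\rho_{\circ}$ on $\cD$, one has $J_{t,x}\le R_{t,x}\le 1$ and therefore $J_{t,x}R^{\lambda^{+}-1}_{t,x}\le R^{\lambda^{+}}_{t,x}\le 1$ (and analogously in the $y$-variable with $\lambda^{-}$). Hence estimate (i) always implies the classical Gaussian bound
$$
G(t,s,x,y)\le \frac{C}{(t-s)^{d/2}}\,e^{-\sigma\frac{\vert x-y\vert^2}{t-s}},
$$
and the extra factors only refine it: heuristically, they record additional decay of order $\rho(x)/\sqrt{t-s}$ when $x$ is close to the lateral boundary and of order $\big(\rho_{\circ}(x)/\sqrt{t-s}\big)^{\lambda^{+}}$ when $x$ is close to the vertex, relative to the parabolic scale $\sqrt{t-s}$.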
Since $\cR(u_0,f^0,\tbf,g)=\cR(u_0,0,0,0)+\cR(0,f^0,\tbf,0)+\cR(0,0,0,g)$, where $0$ denotes the zero function in the corresponding function space, we treat these three parts separately in the following three lemmas and then combine them to obtain the claim of Lemma \ref{main est}.
In particular, the stochastic part $\cR(0,0,0,g)$ is the most important one for this article and is treated thoroughly in Lemma \ref{main est2}.
\begin{lemma}\label{main est3}
Let $p\in(1,\infty)$, and let $\theta\in\bR$, $\Theta\in\bR$ satisfy
$$
p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p.
$$
If $u_0\in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))$, then $u=\cR(u_0,0,0,0)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and
\begin{align*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \|u_0\|_{L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}
\end{align*}
holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if
\begin{equation}\label{theta range restricted}
p\big(1-\lambda_c(\nu_1,\nu_2)\big)<\theta<p\big(d-1+\lambda_{c}(\nu_1,\nu_2)\big),
\end{equation}
then the constant $C$ depends only on $\cM,d,p,\theta,\Theta,\nu_1$ and $\nu_2$.
\end{lemma}
\begin{proof}
The Green's function itself is not random. Hence, recalling the definitions of $\cR(u_0,0,0,0)$ and $\bL=\bK^0$, we may for simplicity assume that $u_0$, and hence $u$, are non-random, and it suffices to prove
\begin{align}
\int^T_0\int_{\cD}\vert \rho^{-1}u\vert ^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dxdt\leq C\int_{\cD}\vert \rho^{-1+\frac{2}{p}}\,u_0\vert ^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dx.
\label{main inequality3}
\end{align}
{\bf 1}. Let us denote $\mu:=-1+(\theta-d+2)/p$, $\alpha:=-1+(\Theta-d+2)/p$, and
$$
h(x):=\rho_{\circ}(x)^{\mu-\alpha}\rho(x)^{\alpha}u_0(x).
$$
Then the claimed estimate \eqref{main inequality3} takes the simpler form
\begin{equation}\label{main inequality 3}
\Big\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac{2}{p}}u\Big\|_{L_p\left([0,T]\times \mathcal{D}\right)}\le C \|h\|_{L_p\left(\mathcal{D}\right)}.
\end{equation}
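Indeed, with $\mu$ and $\alpha$ as above we have $\mu-\alpha=(\theta-\Theta)/p$ and $\Theta-d=(\alpha+1)p-2$, so that
$$
\vert \rho^{-1}u\vert^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}
=\big\vert \rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac{2}{p}}u\big\vert^p,
\qquad
\vert \rho^{-1+\frac{2}{p}}u_0\vert^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}
=\vert h\vert^p,
$$
and \eqref{main inequality3} is exactly \eqref{main inequality 3}.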
On the other hand, by the range of $\theta$ given in the condition, we can always find $\lambda^+\in\big(0,\lambda^+_{c,L_0}\big)$ and $\lambda^-\in\big(0,\lambda^-_{c,L_0}\big)$ satisfying
\begin{align}
-\frac{d-2}{p}-\lambda^+<\mu<\frac{d-2}{p}+\lambda^-.\label{inequality mu1}
\end{align}
Also, by the given range of $\Theta$ we have
\begin{align}
-1+\frac{1}{p}<\alpha<\frac{1}{p}.\label{inequality alpha1}
\end{align}
Hence, we can choose and fix the constants $\gamma$, $\beta$ satisfying
\begin{align*}
0<\gamma<\lambda^-+\frac{d-2}{p}-\mu\,,\,\quad 0<\beta<\frac{1}{p}-\alpha.
\end{align*}
Noting $\frac{d-2}{p}<d-\frac{d}{p}$, $\frac1p<2-\frac1p$ which is due to condition $p\in(1,\infty)$, we then have
\begin{align}
0<\gamma<\lambda^-+d-\frac{d}{p}-\mu\,,\,\quad 0<\beta<2-\frac{1}{p}-\alpha.\label{gamma beta chosen 1}
\end{align}
Moreover, as $\lambda^+\in (0,\lambda^+_{c,L_0})$ and $\lambda^-\in (0,\lambda^-_{c,L_0})$, by Lemma~\ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1,\nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that
\begin{align}\label{20.09.15}
\nonumber G(t,0,x,y)&\leq C\,t^{-\frac{d}{2}}\,R^{\lambda^+-1}_{t,x}R^{\lambda^- -1}_{t,y}\,J_{t,x} J_{t,y}\,e^{-\sigma\frac{\vert x-y\vert^2}{t}}\\
& = C\,t^{-\frac{d}{2}}\,R^{\lambda^+-1}_{t,x}J_{t,x}\,R^{\gamma}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta}\, R^{\lambda^--\gamma}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{1-\beta}\,e^{-\sigma\frac{\vert x-y\vert^2}{t}}
\end{align}
holds for all $t>s$ and $x,y\in\cD$. Let us prove estimate \eqref{main inequality 3}.
{\bf 2}. Using H\"older inequality and \eqref{20.09.15}, we have
\begin{align*}
\vert u(t,x)\vert&=\Big\vert \int_{\mathcal{D}}G(t,0,x,y)u_0(y)dy\Big\vert \\
&\leq \int_{\mathcal{D}}G(t,0,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(y)\vert dy\\
&\leq C\cdot I_1(t,x)\cdot I_2(t,x),
\end{align*}
where $q=p/(p-1)$ (so that $\frac1p+\frac1q=1$),
$$
I_1(t,x)= \left( \int_{\mathcal{D}}t^{-\frac{d}{2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\cdot R_{t,x}^{(\lambda^+-1)p}J_{t,x}^{\,p}\cdot K_{1}(t,y)\cdot \vert h(y)\vert ^pdy\right)^{1/p},
$$
and
$$
I_2(t,x)=\left(\int_{\mathcal{D}}t^{-\frac{d}{2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\cdot K_{2}(t,y)\cdot \vert y\vert ^{(-\mu+\alpha)q}\rho^{-\alpha q}(y)dy\right)^{1/q}
$$
with
$$
K_{1}(t,y) =R^{\gamma p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta p},\quad \, K_{2}(t,y)=R^{(\lambda^--\gamma)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{(1-\beta)q}.
$$
{\bf 3}. We show that there exists a constant $C$ depending only on $\cM, d,p,\theta,\Theta, \nu_1, \nu_2$ and $\lambda^{-}$ such that
\begin{align*}
I_2(t,x)\leq C \left(\vert x\vert +\sqrt{t}\right)^{-\mu+\alpha}\left(\rho(x)+\sqrt{t}\right)^{-\alpha}.
\end{align*}
This is done by Lemma~\ref{lemma3.1} (ii). Indeed, by change of variables $y/\sqrt{t}\to y$ and the fact $\rho(y)/\sqrt{t}=\rho(y/\sqrt{t})$, we have
\begin{align*}
I_2^q(t,x)&=t^{-\frac{d}{2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t}}K_{2}(t,y)\vert y\vert ^{(-\mu+\alpha)q}\vert \rho(y)\vert ^{-\alpha q}dy\\
&=t^{-\mu q/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-\gamma-1+\alpha+\beta)q}}{(\vert y\vert +1)^{(\lambda^--\gamma-1+\beta)q}}\cdot\frac{\rho(y)^{(1-\alpha-\beta)q}}{(\rho(y)+1)^{(1-\beta)q}}dy,
\end{align*}
for which we can apply Lemma~\ref{lemma3.1} since \eqref{gamma beta chosen 1} implies $(\lambda^--\mu-\gamma)q>-d$ and $(1-\alpha-\beta)q>-1$. Thus we get constant $C=C(\cM,d,p,\theta,\Theta, \lambda^-,\sigma)$ such that
$$
I_2^q(t,x) \leq C \left(\vert x\vert +\sqrt{t}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t}\right)^{-\alpha q}
$$
holds for all $t,x$.
{\bf 4}. To prove estimate \eqref{main inequality 3}, by Step 3 we first note
\begin{align*}
\vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-\frac{2}{p}}\cdot \vert u(t,x)\vert &\leq C\, \vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-\frac{2}{p}} \cdot I_1(t,x)\cdot I_2(t,x)\\
& \leq C\,\rho(x)^{-2/p}R^{\,\mu-\alpha}_{t,x} J^{\,\alpha}_{t,x}\cdot I_1(t,x)
\end{align*}
for any $t,x$. Using this and Fubini's Theorem, we have
\begin{align*}
\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac2p}u\|^p_{L_p([0,T]\times\mathcal{D})}&\leq C \int_0^T\int_{\mathcal{D}}\vert \rho(x)\vert ^{-2}\Big(R^{\,\mu-\alpha}_{t,x} J^{\,\alpha}_{t,x}\, I_1(t,x) \Big)^p\,dxdt\\
&=C\int_{\mathcal{D}}I_3(y)\cdot \vert h(y)\vert ^pdy,
\end{align*}
where
\begin{align*}
I_3(y)&=\int^T_0t^{-\frac{d}{2}}K_{1}(t,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\, R^{\,(\lambda^++\mu-\alpha-1)p}_{t,x} J^{\,(\alpha+1) p}_{t,x}\rho(x)^{-2}\,dx\right)dt.
\end{align*}
Since \eqref{inequality mu1} and \eqref{inequality alpha1} imply $(\lambda^++\mu) p-2>-d$ and $(\alpha+1) p-2>-1$, by change of variables $x/\sqrt{t}\to x$, the fact $\rho(x)/\sqrt{t}=\rho(x/\sqrt{t})$, and Lemma~\ref{lemma3.1} (ii), we have
\begin{align*}
I_3(y)&=\int^T_0\frac{1}{t}K_{1}(t,y)\, \int_{\mathcal{D}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\frac{\vert x\vert ^{(\lambda^++\mu-\alpha-1)p}}{(\vert x\vert +1)^{(\lambda^++\mu-\alpha-1)p}}\frac{\rho(x)^{(\alpha+1) p-2}}{(\rho(x)+1)^{(\alpha+1) p}}\,dx\,dt\\
&\leq C\int_0^{\infty}K_{1}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-2}dt\\
&= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma-\beta)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma-\beta)p}}\cdot\frac{\rho(y)^{\beta p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta p+2}}dt.
\end{align*}
Lastly, owing to $\gamma p>0$, $\beta p> 0$, and the fact $|y|\ge \rho(y)$ in $\cD$, we can apply Lemma~\ref{lemma3.1} (i) and we obtain
\begin{align*}
I_3(y)\leq C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2, \lambda^{\pm}).
\label{Last constant}
\end{align*}
Hence, there exists a constant $C$ having the dependency described in the lemma such that
\begin{align*}
\left\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac2p}u\right\|^p_{L_p([0,T]\times\mathcal{D})}\leq C \|h\|^p_{L_p(\mathcal{D})}.
\end{align*}
Estimate \eqref{main inequality 3} and the lemma are proved.
{\bf 5}. When $\theta$ obeys \eqref{theta range restricted}, we choose $\lambda^{\pm}$ in the interval $(0,\lambda_c(\nu_1,\nu_2))$. Then the constant $C$ of the Green's function estimates in Lemma \ref{Green estimate} depends only on $\cM,\nu_1,\nu_2,\lambda^{\pm}$; in particular, the constant $C$ in
\eqref{20.09.15} does not depend on $L_0$. Tracking the constants through Steps 1--4, we see that the constant in \eqref{main inequality 3} does not depend on the particular operator $L_0$ either; rather, it depends on $\nu_1,\nu_2$, and hence $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)$.
\end{proof}
\begin{remark}\label{initial.p ge 2.}
For $\gamma\ge 0$, the inequality $\|u\|_{L_p(\bR^d)}\le \|u\|_{H^{\gamma}_p(\bR^d)}$ is a basic property of the Bessel potential spaces. This, together with Lemma \ref{property1} and Definition \ref{defn 8.19}, yields, in the context of Lemma \ref{main est3},
\begin{equation*}
\|u_0\|_{L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}\le \|u_0\|_{L_p(\Omega; K^{1-2/p}_{\theta+2-p,\Theta+2-p}(\cD))}=\|u_0\|_{\bU^{1}_{p,\theta,\Theta}(\cD)}
\end{equation*}
if $p\ge 2$.
\end{remark}
\begin{lemma}\label{main est1}
Let $p\in (1,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy
$$
p\big(1-\lambda^+_{c,L_0}\big)<\theta<p\big(d-1+\lambda^-_{c,L_0}\big)\quad\text{and}\quad d-1<\Theta<d-1+p.
$$
If $f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T)$ and $\tbf\in \bL_{p,\theta,\Theta}(\cD,T,d)$, then $u:=\cR(0,f^0,\tbf,0)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \big(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cD,T)}+ \|\tbf\|_{\bL_{p,\theta,\Theta}(\cD,T,d)}\big)
\end{eqnarray*}
holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if
\begin{equation*}
p\left(1-\lambda_c(\nu_1,\nu_2)\right)<\theta< p\left(d-1+ \lambda_c(\nu_1,\nu_2)\right),
\end{equation*}
then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$ and $\nu_2$.
\end{lemma}
\begin{proof}
For the same reason as explained at the beginning of the proof of Lemma \ref{main est3}, we may assume that $f^0$, $\tbf$, and hence $u$, are non-random, and it suffices to prove
\begin{align}
\label{main est 2.1}
\int_0^T \int_{\cD} \vert \rho^{-1}u\vert ^p \rho_{\circ}^{\theta-\Theta} \rho^{\Theta-d}\, dxdt
\leq C \int_0^T \int_{\cD} \left(\vert \rho\,f^0\vert ^p + \vert \tbf\vert ^p \right)\rho_{\circ}^{\theta-\Theta} \rho^{\Theta-d}\, dxdt .
\end{align}
Furthermore, when $\tbf=0$ estimate \eqref{main est 2.1} is already proved in \cite[Lemma 3.1]{ConicPDE}, the deterministic counterpart of this article.
Hence, we may assume $f^0=0$. Finally, for simplicity we further assume $f^2=\cdots=f^d=0$.
{\bf 1}. We denote $\mu:=(\theta-d)/p$ and $\alpha:=(\Theta-d)/p$ and set
$$
h(t,x)=\rho_{\circ}^{\mu-\alpha}(x)\rho^{\alpha}(x) f^1(t,x).
$$
Then \eqref{main est 2.1} turns into
\begin{equation}\label{main inequality 2-220830}
\Big\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\Big\|_{L_p\left([0,T]\times \mathcal{D}\right)}\le C \|h\|_{L_p\left([0,T]\times \mathcal{D}\right)}.
\end{equation}
We prepare a few things as we did in Step 1 of the proof of Lemma \ref{main est3}. By the range of $\theta$ given in the statement, we can find $\lambda^+\in(0,\lambda^+_{c,L_0})$ and $\lambda^-\in(0,\lambda^-_{c,L_0})$ satisfying
\begin{align*}
1-\frac{d}{p}-\lambda^+<\mu<d-1-\frac{d}{p}+\lambda^-.
\end{align*}
Also, by the range of $\Theta$ given we have
\begin{align*}
-\frac{1}{p}<\alpha<1-\frac{1}{p}.
\end{align*}
Then we can choose and fix the constants $\gamma_1$, $\gamma_2$, $\beta_1$ and $\beta_2$ satisfying
\begin{align}
-\frac{d-1}{p} < \gamma_1 < \lambda^+ - 1 + \mu+\frac{1}{p},&\qquad 0<\gamma_2<\lambda^-+d-1-\frac{d}{p}-\mu\nonumber\\
0<\beta_1<\alpha+\frac{1}{p},\qquad\qquad&\qquad 0<\beta_2<1-\frac{1}{p}-\alpha.\label{inequality gamma beta2}
\end{align}
Moreover, since $\lambda^+\in (0,\lambda^+_c)$ and $\lambda^-\in (0,\lambda^-_c)$, by Lemma~\ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1, \nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that
\begin{align}
\vert \nabla_y G(t,s,x,y)\vert &\leq \frac{C}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\,J_{t-s,x}R^{\lambda^+-1}_{t-s,x}R^{\lambda^--1}_{t-s,y} \nonumber\\
&=\frac{C}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2} \nonumber\\
&\qquad \times R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}R^{\lambda^--1-\gamma_2}_{t-s,y}
\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{-\beta_2} \label{deriv}
\end{align}
holds for all $t>s$ and $x,y\in\cD$. Now, we start proving \eqref{main inequality 2-220830}.
{\bf 2}.
By H\"older inequality and \eqref{deriv}, we have
\begin{align}
\vert u(t,x)\vert &=\Big\vert \int^t_0\int_{\mathcal{D}}G_{y^1}(t,s,x,y)f^1(s,y)dyds\Big\vert \nonumber\\
&\leq \int^t_0\int_{\mathcal{D}}\vert \nabla_y G(t,s,x,y)\vert \cdot\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(s,y)\vert dyds \nonumber\\
&\leq C\, I_1(t,x)\cdot I_2(t,x), \label{I12}
\end{align}
where $q=p/(p-1)$,
\begin{align*}
&I_1(t,x)\\
&=\left(\int^t_0\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\vert h(s,y)\vert ^pdyds\right)^{1/p}
\end{align*}
and
\begin{align*}
&I_2(t,x)\\
&=\left(\int^t_0\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,1}(t-s,x)K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\rho^{-\alpha q}(y)dyds\right)^{1/q}
\end{align*}
with
\begin{align*}
K_{1,1}(t,x)=R^{\gamma_1p}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{\beta_1p},\quad &
K_{1,2}(t,y)=R^{\gamma_2 p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta_2 p},\\
K_{2,1}(t,x)=R^{(\lambda^+-\gamma_1)q}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{(1-\beta_1)q},\quad
&K_{2,2}(t,y)=R^{(\lambda^--1-\gamma_2)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{-\beta_2q}.
\end{align*}
{\bf 3}. We show that there exists a constant $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)>0$ such that
\begin{equation}\label{I2}
I_2(t,x)\leq C \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+\frac{1}{q}}
\end{equation}
holds for all $t,x$; we note that the right hand side is independent of $t$.
First, by change of variables $y/\sqrt{t-s}\to y$ and Lemma~\ref{lemma3.1} (ii), which we can apply since \eqref{inequality gamma beta2} gives $(\lambda^--1-\mu-\gamma_2)q>-d$ and $(-\alpha-\beta_2)q>-1$, we have
\begin{align*}
&\quad\frac{1}{(t-s)^{(d+1)/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\vert \rho(y)\vert ^{-\alpha q}dy\\
&=(t-s)^{-(\mu q+1)/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-1-\gamma_2+\alpha+\beta_2)q}}{(\vert y\vert +1)^{(\lambda^--1-\gamma_2+
\beta_2)q}}\cdot\frac{\rho(y)^{(-\alpha-\beta_2)q}}{(\rho(y)+1)^{-\beta_2 q}}dy\\
&\leq C (t-s)^{-1/2}\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q},
\end{align*}
where $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)$. Using this, we have
\begin{align*}
&I_{2}^q(t,x)\\
&\leq C\int^t_0K_{2,1}(t-s,x)\cdot(t-s)^{-1/2}\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}ds\\
&\le C\int^t_{-\infty}\frac{\vert x\vert ^{(\lambda^+-1-\gamma_1+\beta_1)q}}{(\vert x\vert +\sqrt{t-s})^{(\lambda^+-1+\mu-\alpha-\gamma_1+\beta_1)q}}\cdot\frac{\rho(x)^{(1-\beta_1)q}}{(\rho(x)+\sqrt{t-s})^{(1+\alpha-\beta_1)q}}\cdot \frac{1}{(t-s)^{1/2}}ds.
\end{align*}
Then, by the change of variables $t-s\to s$ and Lemma~\ref{lemma3.1} (i), which we can apply since we have $(\lambda^++\mu-\gamma_1)q >1 $ and $(1+\alpha-\beta_1)q> 1$ from \eqref{inequality gamma beta2}, we further obtain
\begin{align*}
I_2^q(t,x)&\leq C \vert x\vert ^{(-\mu+\alpha)q}\rho(x)^{-\alpha q+1},
\end{align*}
which is equivalent to \eqref{I2}.
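For completeness we record how the two conditions used above follow from \eqref{inequality gamma beta2}:
\begin{align*}
\gamma_1<\lambda^+-1+\mu+\frac{1}{p}\;\Longrightarrow\;\lambda^++\mu-\gamma_1>1-\frac{1}{p}=\frac{1}{q},
\qquad
\beta_1<\alpha+\frac{1}{p}\;\Longrightarrow\;1+\alpha-\beta_1>\frac{1}{q}.
\end{align*}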
{\bf 4}. Now, by \eqref{I2} and \eqref{I12} we have
\begin{align*}
\vert u(t,x)\vert \leq C\, I_1(t,x)\cdot I_2(t,x)\leq C\, \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+\frac{1}{q}}\, I_1(t,x)
\end{align*}
and hence
\begin{align*}
\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\|^p_{L_p([0,T]\times\mathcal{D})}&\leq C \int_0^T\int_{\mathcal{D}}\vert \rho(x)\vert ^{-1}I_1^p(t,x)\,dxdt\\
&=C\int_0^T\int_{\mathcal{D}}I_3(s,y)\cdot \vert h(s,y)\vert ^pdyds,
\end{align*}
where
\begin{align*}
I_3(s,y)=\int^T_s\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\rho(x)^{-1}\,dxdt.
\end{align*}
By change of variables $t-s\to t$ followed by $x/\sqrt{t}\to x$ and Lemma~\ref{lemma3.1} (ii) with $\gamma_1p-1>-d$ and $\beta_1p-1>-1$ from \eqref{inequality gamma beta2}, we have
\begin{align*}
I_3(s,y)&=\int^T_s\frac{1}{(t-s)^{(d+1)/2}}K_{1,2}(t-s,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)\rho(x)^{-1}\,dx\right)dt\\
&\leq \int^{\infty}_0\frac{1}{t}K_{1,2}(t,y)\left(\int_{\mathcal{D}}\frac{\vert x\vert ^{(\gamma_1-\beta_1)p}}{(\vert x\vert +1)^{(\gamma_1-\beta_1)p}}\frac{\rho(x)^{\beta_1p-1}}{(\rho(x)+1)^{\beta_1p}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\,dx\right)dt\\
&\leq C\int_0^{\infty}K_{1,2}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-1}t^{-1/2}dt\\
&= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma_2-\beta_2)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma_2-\beta_2)p}}\cdot\frac{\rho(y)^{\beta_2p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta_2p+1}}\cdot\frac{1}{t^{1/2}}dt.
\end{align*}
Lastly, due to $\gamma_2p>0$ and $\beta_2p> 0$, Lemma~\ref{lemma3.1} (i) yields
\begin{align*}
I_3(s,y)\leq C(\cM,d,p,\theta,\Theta, \nu_1,\nu_2).
\end{align*}
Hence, there exists a constant $C$ having the dependency described in the lemma such that
\begin{align*}
\left\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\right\|^p_{L_p([0,T]\times\mathcal{D})}\leq C \|h\|^p_{L_p([0,T]\times\mathcal{D})}.
\end{align*}
\eqref{main inequality 2-220830} and the lemma are proved.
{\bf 5}. The last part of the claim related to the range of $\theta$ holds for the same reason as explained in Step 5 of the proof of Lemma \ref{main est3}.
\end{proof}
Now, we move on to the stochastic part, the most important and involved one.
\begin{lemma}\label{main est2}
Let $p\in [2,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy
$$
p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p.
$$
If $g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2)$, then $u:=\cR(0,0,0,g)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray}\label{main inequality2}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \|g\|_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)}
\end{eqnarray}
holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if
\begin{equation*}
p\big(1-\lambda_c(\nu_1,\nu_2)\big)<\theta< p\big(d-1+ \lambda_c(\nu_1,\nu_2)\big),
\end{equation*}
then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$, and $\nu_2$.
\end{lemma}
\begin{proof}
{\bf 1}. Again, we denote $\mu:=(\theta-d)/p$ and $\alpha:=(\Theta-d)/p$.
We put $h(\omega,t,x)=\rho_{\circ}^{\mu-\alpha}(x) \rho(x)^\alpha g(\omega,t,x)$ and recall
$$\Omega_T=\Omega \times (0,T], \quad L_p(\Omega_T \times \cD):=L_p(\Omega_T\times \cD, d\bP dt dx).
$$
Then \eqref{main inequality2} is the same as
\begin{equation}\label{main inequality 2}
\big\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\big\|^p_{L_p(\Omega_T\times\mathcal{D})}\le C\,\big\|\vert h\vert_{\ell_2}\big\|^p_{L_p\left(\Omega_T\times \mathcal{D}\right)}.
\end{equation}
As we did in the proof of Lemma \ref{main est1}, we prepare a few things. By the range of $\theta$ given, we can find constants $\lambda^+\in(0,\lambda^+_{c,L_0})$ and $\lambda^-\in(0,\lambda^-_{c,L_0})$ satisfying
\begin{align*}
1-\frac{d}{p}-\lambda^+<\mu<d-\frac{d}{p}+\lambda^-.
\end{align*}
Also, by the range of $\Theta$ we have
\begin{align*}
-\frac{1}{p}<\alpha<1-\frac{1}{p}.
\end{align*}
Then we can choose and fix the constants $\gamma_1$, $\gamma_2$, $\beta_1$, and $\beta_2$ satisfying
\begin{align}
-\frac{d-2}{p} < \gamma_1 < \lambda^+ - 1 + \mu + \frac{2}{p},&\qquad 0<\gamma_2<\lambda^-+d-\frac{d}{p}-\mu\nonumber\\
\frac{1}{p}<\beta_1<\alpha+\frac{2}{p},\qquad\qquad & \qquad 0<\beta_2<2-\frac{1}{p}-\alpha.\label{inequality gamma beta3}
\end{align}
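We record two consequences of \eqref{inequality gamma beta3} which will be used repeatedly in Step 3 below; writing $q:=p/(p-1)$, so that $d-\frac{d}{p}=\frac{d}{q}$ and $1-\frac{1}{p}=\frac{1}{q}$, we have
\begin{align*}
\gamma_2<\lambda^-+d-\frac{d}{p}-\mu\;\Longrightarrow\;(\lambda^--\mu-\gamma_2)q>-d,
\qquad
\beta_2<2-\frac{1}{p}-\alpha\;\Longrightarrow\;(1-\alpha-\beta_2)q>-1.
\end{align*}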
Further, by Lemma \ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1, \nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that for any $t>s$ and $x,y\in\cD$,
\begin{align}
\nonumber
G(t,s,x,y)\leq& \frac{C}{(t-s)^{d/2}}e^{-\sigma\frac{\vert x-y\vert^2}{t-s}} \,J_{t-s,x}\, J_{t-s,y}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\\ \nonumber
=&C\, (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}
R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2}\\
& \times R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}
R^{\lambda^--\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{1-\beta_2} \label{eqn 8.7.3}
\end{align}
holds.
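As in the previous proof, the grouping of the factors in \eqref{eqn 8.7.3} is consistent, since
\begin{align*}
R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}\cdot R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}=J_{t-s,x}R^{\lambda^+-1}_{t-s,x},
\end{align*}
and similarly the two factors in $y$ multiply to $J_{t-s,y}R^{\lambda^--1}_{t-s,y}$; thus the second expression in \eqref{eqn 8.7.3} is a rewriting of the first.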
{\bf 2}. We first estimate the $p$-th moment $\bE|u(t,x)|^p$ for any given $t$ and $x$. Using the Burkholder--Davis--Gundy inequality and Minkowski's integral inequality, we have
\begin{align*}
\bE\vert u(t,x)\vert ^p&=\bE\Big\vert \sum_{k\in\bN}\int^t_0\int_{\mathcal{D}}G(t,s,x,y)g^k(s,y)dydw^k_s\Big\vert ^p\\
&\leq C\bE\left(\int_0^t\sum_{k\in\bN}\left(\int_{\cD}G(t,s,x,y)g^k(s,y)dy\right)^2 ds \right)^{p/2}\\
&\leq C\bE\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert g(s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{p/2}\\
&=C\bE\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{p/2}.
\end{align*}
We denote
$$
I(\omega,t,x):=\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(\omega,s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{1/2}.
$$
Then, using \eqref{eqn 8.7.3} and applying the H\"older inequality twice, first in $y$ and then in $s$, we get
\begin{align}
I(\omega,t,x)&\leq C\left(\int_0^t\Big(\int_{\cD}I_1\cdot I_2\;dy\Big)^2ds\right)^{1/2} \label{eqn 8.7.7}\\
&\leq C\|I_1(\omega,t,\cdot,x,\cdot)\|_{L_p((0,t)\times \cD, ds\,dy)}\Big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\Big\|_{L_{r}((0,t),ds)} \nonumber
\end{align}
where $q=\frac{p}{p-1}$, $r=\frac{2p}{p-2}\,(=\infty\text{ if }p=2)$,
\begin{align}
&I_1^p(\omega,t,s,x,y) \label{I_1^p}\\
&= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \left(R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2}\right)^p\vert h(\omega,s,y)\vert _{\ell_2}^p \nonumber \\
&= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \,\,K_{1,1}(t-s,x)\,K_{1,2}(t-s,y)\,\,\vert h(\omega,s,y)\vert _{\ell_2}^p, \nonumber
\end{align}
and
\begin{align*}
&I_2^q(t,s,x,y)\\
&= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\\
&\quad \times \left(R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}R^{\lambda^--\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{1-\beta_2}\right)^q\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q}\\
&= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \,\,K_{2,1}(t-s,x)\,K_{2,2}(t-s,y)\,\,\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q},
\end{align*}
with
\begin{align*}
&K_{1,1}(t,x)=R^{\gamma_1p}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{\beta_1p},\quad
K_{1,2}(t,y)=R^{\gamma_2 p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta_2 p},\\
&K_{2,1}(t,x)=R^{(\lambda^+-\gamma_1)q}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{(1-\beta_1)q},\quad
K_{2,2}(t,y)=R^{(\lambda^--\gamma_2)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{(1-\beta_2)q}.
\end{align*}
Note that, by \eqref{eqn 8.7.7}, we have
\begin{eqnarray}
\label{eqn 8.7.8}
\bE \vert u(t,x)\vert ^p &\leq& C \bE I^p(t,x) \\
&\leq& C \Big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\Big\|^p_{L_{r}((0,t),ds)}
\bE\|I_1(\omega,t,\cdot,x,\cdot)\|^p_{L_p((0,t)\times \cD, ds\,dy)}. \nonumber
\end{eqnarray}
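For the reader's convenience we record the exponents behind \eqref{eqn 8.7.7}: the H\"older inequality in $y$ is applied with the conjugate pair $(p,q)$, and then the H\"older inequality in $s$ is applied with the pair $\big(\frac{p}{2},\frac{p}{p-2}\big)$, namely
\begin{align*}
\left(\int_0^t\|I_1\|^2_{L_p(\cD,dy)}\|I_2\|^2_{L_q(\cD,dy)}\,ds\right)^{1/2}
\leq\left(\int_0^t\|I_1\|^p_{L_p(\cD,dy)}\,ds\right)^{1/p}\left(\int_0^t\|I_2\|^{r}_{L_q(\cD,dy)}\,ds\right)^{1/r}
\end{align*}
with $r=2\cdot\frac{p}{p-2}=\frac{2p}{p-2}$; when $p=2$ the last factor is interpreted as $\sup_{s\le t}\|I_2\|_{L_2(\cD,dy)}$.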
{\bf 3}. In this step we will show that there exists a constant $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)>0$ such that
\begin{equation}\label{I_2}
\big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\big\|_{L_{r}((0,t),ds)}\leq C \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}.
\end{equation}
In particular, the right hand side is independent of $t$.
{\bf Case 1.} Assume $p=2$ (hence, $q=2$ and $r=\infty$). First, we consider
\begin{align*}
&\int_{\cD}I_2^2(t,s,x,y) \,dy\\
&=K_{2,1}(t-s,x)\cdot\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{2(-\mu+\alpha)}\vert \rho(y)\vert ^{-2\alpha}\,dy.
\end{align*}
Since $2(\lambda^--\mu-\gamma_2)>-d$ and $2(1-\alpha-\beta_2)>-1$ from \eqref{inequality gamma beta3}, by change of variables $y/\sqrt{t-s}\to y$ and Lemma~\ref{lemma3.1} (ii), we have
\begin{align*}
&\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{2(-\mu+\alpha)}\vert \rho(y)\vert ^{-2\alpha}\,dy\\
&=(t-s)^{-\mu}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{2(\lambda^--\mu-\gamma_2-1+\alpha+\beta_2)}}{(\vert y\vert +1)^{2(\lambda^--\gamma_2-1+\beta_2)}}\cdot\frac{\rho(y)^{2(1-\alpha-\beta_2)}}{(\rho(y)+1)^{2(1-\beta_2)}}dy\\
&\leq C\left(\vert x\vert +\sqrt{t-s}\right)^{2(-\mu+\alpha)}\left(\rho(x)+\sqrt{t-s}\right)^{-2\alpha}.
\end{align*}
Hence, we have
\begin{align*}
&\sup_{s\in[0,t]}\left(\int_{\cD}I_{2}^2\,dy\right)^{1/2}\\
&\leq C\sup_{s\in[0,t]}\left(K_{2,1}(t-s,x)\cdot\left(\vert x\vert +\sqrt{t-s}\right)^{2(-\mu+\alpha)}\left(\rho(x)+\sqrt{t-s}\right)^{-2\alpha}\right)^{1/2}\\
&= C\sup_{s\in[0,t]}\left(\frac{\vert x\vert ^{\lambda^+-1-\gamma_1+\beta_1}}{(\vert x\vert +\sqrt{t-s})^{\lambda^+-1+\mu-\gamma_1-\alpha+\beta_1}}\cdot\frac{\rho(x)^{1-\beta_1}}{(\rho(x)+\sqrt{t-s})^{\alpha+1-\beta_1}}\right)\\
&=C\,\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha}\sup_{s\in[0,t]}\left(R_{t-s,x}^{\lambda^++\mu-\gamma_1}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\alpha+1-\beta_1}\right)\\
&\leq C \,\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha}
\end{align*}
due to $\lambda^++\mu-\gamma_1>0$, $\alpha+1-\beta_1>0$ and
$0\leq J_{t-s,x}\leq R_{t-s,x}\leq 1$. Thus \eqref{I_2} holds.
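We note that the two positivity conditions used in the last step are consequences of \eqref{inequality gamma beta3} with $p=2$:
\begin{align*}
\lambda^++\mu-\gamma_1>\lambda^++\mu-\Big(\lambda^+-1+\mu+\frac{2}{p}\Big)=1-\frac{2}{p}=0,
\qquad
\alpha+1-\beta_1>\alpha+1-\Big(\alpha+\frac{2}{p}\Big)=0.
\end{align*}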
{\bf Case 2.} Let $p>2$.
Again, since $(\lambda^--\mu-\gamma_2)q>-d$ and $(1-\alpha-\beta_2)q>-1$, by change of variables and Lemma~\ref{lemma3.1} (ii), we observe
\begin{align*}
&\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q}dy\\
=&\,(t-s)^{-\mu q/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-\gamma_2-1+\alpha+\beta_2)q}}{(\vert y\vert +1)^{(\lambda^--\gamma_2-1+\beta_2)q}}\cdot\frac{\rho(y)^{(1-\alpha-\beta_2)q}}{(\rho(y)+1)^{(1-\beta_2)q}}dy\\
\leq&\,C \left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}.
\end{align*}
Hence, we have
\begin{align*}
&\int_0^t\|I_2(t,s,x,\cdot)\|_{L_{q}(\cD,dy)}^r ds\\
\leq &\,C \int^t_0\left\{K_{2,1}(t-s,x)\cdot\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}\right\}^{r/q}ds\\
=&\,C \int^t_0\frac{\vert x\vert ^{(\lambda^+-1-\gamma_1+\beta_1)r}}{(\vert x\vert +\sqrt{t-s})^{(\lambda^+-1+\mu-\gamma_1-\alpha+\beta_1)r}}\cdot\frac{\rho(x)^{(1-\beta_1)r}}{(\rho(x)+\sqrt{t-s})^{(\alpha+1-\beta_1)r}}ds\,.
\end{align*}
Moreover, since \eqref{inequality gamma beta3} also gives $\left(\lambda^++\mu-\gamma_1\right)r>2$ and $\left(\alpha+1-\beta_1\right)r>2$,
using Lemma~\ref{lemma3.1} we again obtain
\begin{align*}
\big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\big\|_{L_{r}((0,t),ds)}&=\left(\int_0^t\|I_2\|_{L_{q}(\cD,dy)}^r ds\right)^{1/r}\\
&\leq C\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}.
\end{align*}
{\bf 4}. Now, by \eqref{eqn 8.7.8} and \eqref{I_2} we have
\begin{align*}
&\quad\bE\,\big\vert \vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-1}u(t,x)\big\vert ^p\\
&\leq C\big(\vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-1}\big)^p \cdot \bE\,\int_0^t\int_{\cD}I_1^p(t,s,x,y) dy\,ds\cdot \big(\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}\big)^p\\
&= C\,\rho(x)^{-2}\;\bE\,\int_0^t\int_{\cD}I_1^p(t,s,x,y) dy\,ds.
\end{align*}
Therefore, integrating with respect to $x$ and $t$, using the Fubini theorem, and recalling \eqref{I_1^p}, we have
\begin{align}
\nonumber
\bE\,\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\|^p_{L_p(\Omega_T\times\mathcal{D})}&\leq C \,\bE\int_0^T\int_{\mathcal{D}}\int^t_0\int_{\cD}\vert \rho(x)\vert ^{-2}I_1^p\,dyds\;dxdt\\
&=C\,\bE\,\int_0^T\int_{\mathcal{D}}I_3(s,y)\cdot \vert h(s,y)\vert _{\ell_2}^pdyds, \label{eqn 8.11.31}
\end{align}
where
\begin{align*}
I_3(s,y):=\int^T_s\int_{\mathcal{D}}\frac{1}{(t-s)^{d/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\rho(x)^{-2}\,dxdt.
\end{align*}
Since \eqref{inequality gamma beta3} also implies $\gamma_1p-2>-d$ and $\beta_1p-2>-1$, by the change of variables $t-s\to t$ followed by $x/\sqrt{t}\to x$ and Lemma~\ref{lemma3.1} (ii), we have
\begin{align*}
I_3(s,y)&=\int^T_s\frac{1}{(t-s)^{d/2}}K_{1,2}(t-s,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)\rho(x)^{-2}\,dx\right)dt\\
&\leq \int^{\infty}_0\frac{1}{t}K_{1,2}(t,y)\left(\int_{\mathcal{D}}\frac{\vert x\vert ^{(\gamma_1-\beta_1)p}}{(\vert x\vert +1)^{(\gamma_1-\beta_1)p}}\frac{\rho(x)^{\beta_1p-2}}{(\rho(x)+1)^{\beta_1p}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\,dx\right)dt\\
&\leq C\int_0^{\infty}K_{1,2}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-2}dt\\
&= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma_2-\beta_2)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma_2-\beta_2)p}}\cdot\frac{\rho(y)^{\beta_2p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta_2p+2}}\,dt.
\end{align*}
Hence, by Lemma~\ref{lemma3.1} (i) with the conditions $\gamma_2p>0$ and $\beta_2p> 0$, we finally get
\begin{align*}
I_3(s,y)\leq C(\cM,d,p,\theta,\Theta, \nu_1,\nu_2).
\end{align*}
This and \eqref{eqn 8.11.31} lead to \eqref{main inequality 2} and the lemma is proved.
{\bf 5}. Again, the last part of the claim related to the range of $\theta$ holds for the same reason as explained in Step 5 of the proof of Lemma \ref{main est3}.
\end{proof}
\mysection{Proof of Theorems \ref{main result} and \ref{main result-random}}\label{sec:main proofs}
In this section we prove Theorems \ref{main result} and \ref{main result-random}, following the strategy below:
{\bf 1}. \emph{ A priori estimate and the uniqueness}:
\begin{itemize}
\item[-]
In Lemma \ref{regularity.induction} below, we first prove that for any solution $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ to equation \eqref{stochastic parabolic equation} equipped with the general operator $\cL=\sum_{i,j=1}^d a^{ij}(\omega,t)D_{ij}$, we have
\begin{eqnarray}
& \|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}
\leq C \big( \|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)} +\text{norms of the free terms} \big). \label{eqn 8.18.21}
\end{eqnarray}
\item[-] If $\cL$ is non-random, we estimate $\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}$ based on Lemma \ref{main est}.
\item[-] To treat the SPDE with random coefficients, we introduce an SPDE having non-random coefficients and the same free terms $f^0$, $\tbf$, $g$, $u_0$. Then we prove the a priori estimate for the original SPDE, based on the fact that the difference of the two solutions satisfies a PDE (with random coefficients).
\item[-] The uniqueness of the solution to the original SPDE follows from the uniqueness result for PDEs.
\end{itemize}
{\bf 2}. \emph{The existence}:
\begin{itemize}
\item[-] If the coefficients of $\cL$ are non-random, we use the representation formula.
\item[-] For the general case, we use the method of continuity with the help of the a priori estimate.
\end{itemize}
Now we start our proofs. The following lemma is what we meant in \eqref{eqn 8.18.21}.
We emphasize that the lemma holds for any $\theta,\Theta\in \bR$ and the condition $\partial \cM\in C^2$ is not needed in the proof.
\begin{lemma}\label{regularity.induction}
Let $p\in [2,\infty)$, $\gamma, \mu, \theta, \Theta \in\bR$, $\mu<\gamma$, and the diffusion coefficients $a^{ij}=a^{ij}(\omega,t)$ satisfy Assumption \ref{ass coeff}. Assume that $f^0\in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf=(f^1,\cdots,f^d) \in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, and $u\in\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)$ satisfies
\begin{align}
du=(\cL u+f^0+\sum_{i=1}^d f^i_{x^i})\,dt+\sum_{k=1}^{\infty} g^kdw^k_t,\quad t\in(0,T] \label{eqn 8.19.1}
\end{align}
in the sense of distributions on $\cD$. Then $u\in\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)$, hence $ u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$, and the estimate
\begin{align*}
\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}
&\leq C\Big(\|u\|_{\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)}+\|f^0\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}\nonumber\\
&\quad \quad \quad +\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)} \Big)
\end{align*}
holds with $C=C(\cM,d,p,\gamma,\theta,\Theta,\nu_1,\nu_2)$.
\end{lemma}
The proof of Lemma \ref{regularity.induction} is based on the following result on $\bR^d$.
\begin{lemma}
\label{lemma entire}
Let $p\in[2,\infty)$, $\gamma \in \bR$, and Assumption \ref{ass coeff} hold. Assume $f\in \bH^{\gamma}_p(T)$, $g\in \bH^{\gamma+1}_p(T,\ell_2)$, $u(0,\cdot)\in L_p(\Omega;H^{\gamma+2-2/p}_p)$, and $u\in \bH^{\gamma+1}_p(T)$ satisfies
\begin{align}
du=(\cL u+f)\,dt+\sum^{\infty}_{k=1}g^kdw^k_t,\quad t\in(0,T]\nonumber
\end{align}
in the sense of distributions on the whole space $\bR^d$. Then $u\in \bH^{\gamma+2}_p(T)$ and
\begin{eqnarray}
\|u\|_{\bH^{\gamma+2}_p(T)} &\leq& C\Big(\|u\|_{\bH^{\gamma+1}_p(T)}\nonumber\\
&&\quad+\|f\|_{\bH^{\gamma}_p(T)}
+\|g\|_{\bH^{\gamma+1}_p(T,\ell_2)}+\|u(0,\cdot)\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}\Big),\label{whole space estimate}
\end{eqnarray}
where $C=C(d,p,\nu_1,\nu_2)$ is independent of $T$.
\end{lemma}
\begin{proof}
{\bf 1}. First, we consider the case $u(0,\cdot) \equiv 0$. Then, by e.g. \cite[Theorem 4.10]{Krylov 1999-4}, $u\in \bH^{\gamma+2}_p(T)$ and
$$
\|u_{xx}\|_{\bH^{\gamma}_p(T)}\leq C(d,p,\nu_1,\nu_2) (\|f\|_{\bH^{\gamma}_p(T)}+\|g\|_{\bH^{\gamma+1}_p(T,\ell_2)}).
$$
This and the inequality
$$
\|u\|_{\bH^{\gamma+2}_p(T)}\leq C\,(\|u_{xx}\|_{\bH^{\gamma}_p(T)}+\|u\|_{\bH^{\gamma}_p(T)} )
$$
together with the inequality $\|u\|_{\bH^{\gamma}_p(T)} \le \|u\|_{\bH^{\gamma+1}_p(T)}$, which holds by a basic property of the spaces of Bessel potentials,
yield the claim of the lemma.
{\bf 2}. For the general case $u(0,\cdot)\not\equiv 0$, we use the solution $v=v(\omega,t,x)$ to the equation
$$
dv=\cL v \,dt, \quad t\in(0,T]
$$
with $v(\omega,0,\cdot)=u(\omega,0,\cdot)$ for all $\omega\in\Omega$ (see \cite[Theorem 5.2]{Krylov 1999-4}).
From the classical theory of PDEs, which we apply for each $\omega$, we have
$$
\|v\|_{\bH^{\gamma+2}_p(T)}\leq C \|u_0\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}.
$$
Then for the function $u-v$, which has zero initial condition, we can apply Step 1 and we obtain estimate \eqref{whole space estimate} for $u$ simply by the triangle inequality.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma \ref{regularity.induction}}]
We first note that we only need to consider the case $\mu=\gamma-1$. Indeed, suppose that the lemma holds true if $\mu=\gamma-1$. Now let $\mu=\gamma-n$, $n\in \bN$.
Then applying the result for $\mu'=\gamma-k$ and $\gamma'=\mu'+1$ with $k=n,n-1,\cdots, 1$ in order, we get the claim when $\mu=\gamma-n$.
Now suppose that the difference between $\gamma$ and $\mu$ is not an integer, i.e. $\gamma-\mu=n+\delta$, $n=0,1,2,\cdots$ and $\delta\in (0,1)$. Then, since $\mu>\gamma-(n+1)=:\mu'$ and $\|\cdot \|_{\bK^{\mu'+2}_{p,\theta-p,\Theta-p}(\cD,T)}\le \|\cdot \|_{\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)}$, we conclude that our assumption holds for $\mu'$, that is, $u\in \bK^{\mu'+2}_{p,\theta-p,\Theta-p}(\cD,T)$. Therefore, the case $\gamma-\mu \not\in \bN$ is also covered by what we just discussed.
Now we prove the lemma when $\mu=\gamma-1$, i.e. $u\in\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)$.
As usual, we omit the argument $\omega$ for the simplicity of presentation.
{\bf 1}. For $u\in\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)$, put
$$\xi(x)=\vert x\vert^{(\theta-\Theta)/p},\quad v:=\xi u, \quad f:=f^0+\sum_{i=1}^{d}f^i_{x^i},\quad v_0:=\xi u_0.
$$
Using Definition \ref{defn 8.19}, Definition \ref{defn 8.28}, and the change of variables $t\to e^{2n}t$, we have
\begin{eqnarray}
&&\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}=\|v\|^p_{\bH^{\gamma+2}_{p,\Theta-p}(\cD,T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p)}\|\zeta(e^{-n}\psi(e^n\cdot))v(\cdot,e^n\cdot)\|^p_{\bH^{\gamma+2}_p(T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|\zeta(e^{-n}\psi(e^n\cdot))v(e^{2n}\cdot,e^n\cdot)\|^p_{\bH^{\gamma+2}_p(e^{-2n}T)}. \label{eqn 4.23.1}
\end{eqnarray}
For each $n\in \bZ$, we denote
$$
v_n(t,x):=\zeta(e^{-n}\psi(e^nx))v(e^{2n}t,e^nx),
\quad
v_{0,n}(x)=\zeta(e^{-n}\psi(e^nx))v_0(e^nx).
$$
Then using equation \eqref{eqn 8.19.1} and the product rule of differentiation, one can easily check that $v_n$ satisfies
$$
d v_n=(\cL_n v_n +f_n)dt+ \sum_{k=1}^{\infty} g^k_n dw^{n,k}_t \quad t\in(0,e^{-2n}T]
$$
in the sense of distributions on $\bR^d$ with the initial condition $v_{n}(0,\cdot)=v_{0,n}(\cdot)$,
where
$$
\cL_n:=\sum_{i,j} a^{ij}_n(t)D_{ij},\quad a^{ij}_n(t):=a^{ij}(e^{2n}t),
$$
$$
g^k_n(t,x):=e^{n} \zeta(e^{-n}\psi(e^nx))\xi(e^nx) g^k(e^{2n}t,e^nx), \qquad w^{n,k}_t:=e^{-n}w^k_{e^{2n}t},
$$
and, with Einstein's summation convention with respect to $i,j$,
\begin{eqnarray*}
f_n(t,x)&:=&\quad e^{2n}\zeta(e^{-n}\psi(e^nx))\xi(e^nx) f(e^{2n}t,e^nx) \\
&&+e^{n} a^{ij}_n(t)D_iu(e^{2n}t,e^nx) \zeta'(e^{-n}\psi(e^nx)) D_j\psi(e^nx) \xi(e^nx) \\
&&+e^{2n}a^{ij}_n(t)D_iu(e^{2n}t,e^nx) \zeta(e^{-n}\psi(e^nx)) D_j \xi(e^nx) \\
&&+e^na^{ij}_n(t)u(e^{2n}t,e^nx)\zeta'(e^{-n}\psi(e^nx)) D_i\psi(e^nx)D_j\xi(e^nx)\\
&&+e^{2n}a^{ij}_n(t)u(e^{2n}t,e^nx)\zeta(e^{-n}\psi(e^nx))D_{ij}\xi(e^nx) \\
&&+a^{ij}_n(t)u(e^{2n}t,e^nx)\zeta''(e^{-n}\psi(e^nx))D_i\psi(e^nx) D_j\psi(e^nx) \xi(e^nx)\\
&&+ e^na^{ij}_n(t)u(e^{2n}t,e^nx)\zeta'(e^{-n}\psi(e^nx)) D_{ij}\psi(e^nx)\\
&=:& \sum_{l=1}^7 f^{l}_n(t,x).
\end{eqnarray*}
Here, $\zeta'$ and $\zeta''$ denote the first and second derivative of $\zeta$, respectively. We note that for each $n\in \bZ$, the operator $\cL_n$ still satisfies the uniform parabolicity condition \eqref{uniform parabolicity} and $\{w^{n,k}_t: k\in \bN\}$ is a sequence of independent Brownian motions. Hence,
we can apply Lemma \ref{lemma entire} and from \eqref{eqn 4.23.1} we get
\begin{eqnarray}
\nonumber
\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}
&\leq& C \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T)}\\
&&+C \sum_{l=1}^7 \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^l_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber\\
&&+C\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|g_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T, \ell_2)}\nonumber\\
&&+ C \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_{0,n}\|^p_{L_p(\Omega;H^{\gamma+2-2/p}_p)}\label{eqn 8.19.21}
\end{eqnarray}
provided that
\begin{equation}\label{in fact true}
v_n\in \bH^{\gamma+1}_p(e^{-2n}T), \quad f^l_n \in \bH^{\gamma}_p(e^{-2n}T),\quad
g_n \in \bH^{\gamma+1}_p(e^{-2n}T,\ell_2), \quad (l=1,\ldots,7).
\end{equation}
It turns out that the claims in \eqref{in fact true} hold true.
Indeed, the change of variable $e^{2n}t \to t$ and Definition \ref{defn 8.19} yield
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p)}\| \zeta(e^{-n}\psi(e^n\cdot))v(\cdot,e^n\cdot) \|^p_{\bH^{\gamma+1}_p(T)}
= \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)} \label{eqn 8.19.31}
\end{eqnarray}
and
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|g_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T,\ell_2)} \nonumber \\
&&= \sum_{n\in \bZ} e^{n\Theta}\|\zeta(e^{-n}\psi(e^n\cdot))\xi(e^n\cdot)g(\cdot,e^n\cdot) \|^p_{\bH^{\gamma+1}_p(T,\ell_2)}= \|g\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}. \label{eqn 8.19.41}
\end{eqnarray}
In particular,
$$
v_n \in \bH^{\gamma+1}_p(e^{-2n}T), \quad
g_n \in \bH^{\gamma+1}_p(e^{-2n}T, \ell_2), \quad \forall\, n\in \bZ.
$$
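The identity \eqref{eqn 8.19.41} is obtained by the same bookkeeping: the prefactor $e^{n}$ in the definition of $g^k_n$ contributes $e^{np}$ to the $p$-th power of the norm, the change of variables $e^{2n}t\to t$ contributes $e^{-2n}$, and
\begin{align*}
e^{n(\Theta-p+2)}\cdot e^{np}\cdot e^{-2n}=e^{n\Theta}.
\end{align*}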
Next, we show that $f^l_n$ belong to $\bH^{\gamma}_p(e^{-2n}T)$ in the following manner.
For $l=1$, by Definition \ref{defn 8.19} and the change of variables $e^{2n}t \to t$, we have
\begin{eqnarray*}
&& \quad\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^1_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)}\\
&& =
\sum_{n\in \bZ} e^{n(\Theta+p)}\|\zeta(e^{-n}\psi(e^n\cdot))\xi(e^n\cdot)f(\cdot,e^n\cdot) \|^p_{\bH^{\gamma}_p(T)}
= \|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}.
\end{eqnarray*}
For $l=2$, by Definition \ref{defn 8.19} and \eqref{eqn 4.24.5},
we get
\begin{eqnarray*}
&& \quad\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^2_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)}\\
&&\leq C
\sum_{n\in \bZ} \sum_{i,j}e^{n\Theta}\|D_iu(\cdot,e^n\cdot)\zeta'(e^{-n}\psi(e^n\cdot))\xi(e^n \cdot) D_j\psi(e^n\cdot)\|^p_{\bH^{\gamma}_p(T)} \\
&&\leq C \|\psi_x \xi u_x \|^p_{\bH^{\gamma}_{p,\Theta}(\cD,T)}=C\|\psi_x u_x \|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)} \\
&&\leq
C \|u_x\|^p_{\bK^{\gamma}_{p,\theta, \Theta}(\cD,T)}\leq
C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p, \Theta-p}(\cD,T)},
\end{eqnarray*}
where the last two inequalities are due to \eqref{eqn 8.9.1}, \eqref{eqn 8.19.11}, and \eqref{eqn 4.16.1}. For $l=3$, by definitions of norms, we have
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^3_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber \\
& \leq& C \sum_{n\in \bZ} \sum_{i,j}e^{n(\Theta+p)}\|D_iu(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) D_j\xi(e^n\cdot)\|^p_{\bH^{\gamma}_p(T)} \nonumber \\
&=&C
\|u_x \xi_x\|^p_{\bH^{\gamma}_{p,\Theta+p}(\cD,T)}
= C
\|\xi \xi^{-1}\xi_xu_x\|^p_{\bH^{\gamma}_{p,\Theta+p}(\cD,T)} \nonumber \\
&=&C
\|\xi^{-1}\xi_x u_x \|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)} \leq C\|\psi \xi^{-1}\xi_x u_x\|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)}, \label{eqn 8.20.1}
\end{eqnarray}
where the last inequality is due to \eqref{eqn 8.19.81}. Now we note that for any $n\in \bN$,
$$
\vert\psi \xi^{-1}\xi_x\vert ^{(0)}_n+\vert \psi^2 \xi^{-1}\xi_{xx}\vert ^{(0)}_n\leq C(n,\xi)<\infty.
$$
Thus, by \eqref{eqn 8.19.11} the last term in \eqref{eqn 8.20.1} is bounded by
$$
C\|u_x\|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)} \leq C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}.
$$
For the other values of $l$ one can argue similarly, and gathering the results we obtain:
\begin{eqnarray}
&& \sum_{l=1}^7 \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^l_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber \\
&\leq& C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}+
C\|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}. \label{eqn 8.19.51}
\end{eqnarray}
Consequently, coming back to \eqref{eqn 8.19.21} and using \eqref{eqn 8.19.31}, \eqref{eqn 8.19.41}, and \eqref{eqn 8.19.51}, we get
\begin{align*}
&\quad\quad\quad\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}\\
& \leq C\Big(\|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}
+\|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}
+\|g\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}
+\|u_0\|^p_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)} \Big).
\end{align*}
This yields the desired estimate, since $\|f^i_{x^i}\|_{K^{\gamma}_{p,\theta+p,\Theta+p}(\cD)}\leq C \|f^i\|_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}$. The lemma is proved.
\end{proof}
Now, we take the deterministic operator $L_0$ introduced in \eqref{8.29.1} and the Green function $G$ related to $L_0$. Also, recall the representation $\cR (u_0,f^0,\tbf,g)$ defined in \eqref{eqn 8.21.11} in connection with $L_0$.
\begin{lemma}
\label{lemma rep}
If $f^0\in \bK^{\infty}_c(\cD,T)$, $\tbf\in \bK^{\infty}_c(\cD,T,d)$, $g\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_0\in\bK^{\infty}_c(\cD)$, then $u=\cR (u_0,f^0,\tbf,g)$ belongs to $\cK^0_{p,\theta,\Theta}(\cD,T)$ and satisfies
\begin{equation}
\label{eqn 8.21.13}
du=\left(L_0u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t\in (0,T]
\end{equation}
in the sense of distributions on $\cD$ with $u(0,\cdot)=u_0$.
\end{lemma}
\begin{proof} First, we note that
\begin{eqnarray*}\cR (u_0,f^0,\tbf,g)&=&\cR (u_0,0,0,0)+\cR (0,f^0,\tbf,0)+\cR (0,0,0,g)\\
&=:&v_1+v_2+v_3.
\end{eqnarray*}
By considering $v_1$ for each $\omega$ and by the definition of Green's function with the condition $u_0\in\bK^{\infty}_c(\cD)$, we note that $v_1$ satisfies
$$
dv_1=L_0v_1dt,\quad t>0\,; \quad v_1(0,\cdot)=u_0(\cdot)
$$
in the sense of distributions on $\cD$. Then Lemma \ref{main est3} and the facts that $\bK^{\infty}_c(\cD)$ is dense in $L_p(\Omega;K^0_{p,\theta+2-p,\Theta+2-p}(\cD))$ and $\|u_0\|_{\bU^0_{p,\theta,\Theta}(\cD)}\le \|u_0\|_{L_p(\Omega;K^0_{p,\theta+2-p,\Theta+2-p}(\cD))}$ confirm $v_1\in \cK^0_{p,\theta,\Theta}(\cD,T)$. Similarly, $v_2$ satisfies
$$
dv_2=(L_0v_2+f^0+\sum_{i=1}^d f^i_{x^i})dt,\quad t>0
$$
in the sense of distributions on $\cD$ with zero initial condition and Lemma \ref{main est1} leads us to have $v_2\in \cK^0_{p,\theta,\Theta}(\cD,T)$.
The fact that $v_3$ satisfies
$$
dv_3=L_0v_3dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t>0
$$
in the sense of distributions on $\cD$ with zero initial condition can be proved in the same way as in the proof of \cite[Lemma 3.11]{CKLL 2018}, which deals with the case $d=2$. Then Lemma \ref{main est2} gives $v_3\in \cK^0_{p,\theta,\Theta}(\cD,T)$.
Hence, $u=v_1+v_2+v_3$ satisfies the assertions and the lemma is proved.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{main result}}]\quad
Note that, since $\cL$ is non-random, we can take $L_0=\cL$ (see \eqref{8.29.1}).
{\bf 1}. \emph{ Existence and estimate \eqref{main estimate}} :
First, we assume that $f^0\in \bK^{\infty}_c(\cD,T)$, $\tbf\in \bK^{\infty}_c(\cD,T,d)$, $g\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_0\in\bK^{\infty}_c(\cD)$. Then by Lemma \ref{lemma rep}, $u=\cR(u_0,f^0,\tbf,g) \in \cK^0_{p,\theta,\Theta}(\cD,T)$ satisfies equation \eqref{eqn 8.21.13} in the sense of distributions on $\cD$ with initial condition $u_0$. Then, we use Lemma \ref{regularity.induction} with $\mu=-2$. As $\gamma+2\ge 1$, Lemma \ref{main est} and Remark \ref{initial.p ge 2.} imply
$u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ and \eqref{main estimate}.
The general case can be easily handled by a standard approximation argument. Indeed, take $f^0_n\in \bK^{\infty}_c(\cD,T)$, $\tbf_n\in \bK^{\infty}_c(\cD,T,d)$, $g_n\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_{0,n}\in\bK^{\infty}_c(\cD)$
such that $f^0_n \to f^0$, $\tbf_n \to \tbf$, $g_n \to g$, and $u_{0,n}\to u_{0}$, as $n\to \infty$, in the corresponding spaces. Now let $u_n:=\cR(u_{0,n},f^0_n, \tbf_n,g_n)$. Then, estimate \eqref{main estimate} applied for $u_n-u_m$ shows that $\{u_n\}$ is a Cauchy sequence in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. Taking $u$ as the limit of $u_n$ in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$, we find that $u$ is a solution to equation \eqref{eqn 8.21.13}. Estimate \eqref{main estimate} for $u$ also follows from those of $u_n$.
{\bf 2}. \emph{Uniqueness} :
Let $u \in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ be a solution to equation \eqref{eqn 8.21.13} with $f^0\equiv 0$, $\tbf \equiv 0$, $g\equiv 0$, and $u_0\equiv 0$. Due to $\gamma+2\geq 1$, $u$ at least belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$, and therefore by Lemma \ref{regularity.induction} we have $u\in \cK^2_{p,\theta,\Theta}(\cD,T)$ as all the inputs are zeros. Hence, for almost all $\omega\in \Omega$, $u^{\omega}:=u(\omega,\cdot,\cdot)\in L_p((0,T]; K^{2}_{p,\theta-p,\Theta-p}(\cD))$, and satisfies
$$
u^{\omega}_t=\cL u^{\omega}, \quad t\in (0,T]\quad ; \quad u^{\omega}(0,\cdot)=0.
$$
Hence, from the uniqueness result for the deterministic parabolic equation (see \cite[Theorem 2.12]{ConicPDE}), we conclude $u^{\omega}=0$ for almost all $\omega$. This handles the uniqueness.
\end{proof}
\begin{remark}\label{sol.representation}
The approximation argument and uniqueness result in the above proof show that if $\cL$ is non-random, then the solution in Theorem \ref{main result} is given by the formula
$$
u=\cR(u_0,f^0,\tbf,g), \quad \text{where}\quad \tbf=(f^1,\cdots,f^d).
$$
\end{remark}
\begin{proof}[\textbf{Proof of Theorem \ref{main result-random}}]\quad
{\bf 1}. \emph{The a priori estimate} :
Having the method of continuity in mind, we consider the following operators.
Denote $L_0=\nu_1 \Delta$, and for $\lambda \in [0,1]$ denote
\begin{eqnarray*}
\cL_{\lambda}=(1-\lambda)L_0+\lambda \cL
\end{eqnarray*}
Obviously,
\begin{equation*}
\cL_{\lambda}(\omega,\cdot)\in \cT_{\nu_1,\nu_2}, \quad \forall \, \lambda\in [0,1], \, \omega\in \Omega.
\end{equation*}
Now we prove that the a priori estimate
\begin{eqnarray}
\|v\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}&\leq& C\Big(\|f^0\|_{\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)}+ \|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)} \nonumber \\
&&\quad\quad +\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,l_2)}+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\Big) \label{the a priori}
\end{eqnarray}
holds with $C=C(\cM,d,p,\gamma,\theta,\Theta,\nu_1,\nu_2)$, provided that
$v\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is a solution to the equation
\begin{equation}
\label{method}
dv=\left(\cL_{\lambda} v+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t\in(0,T]\,\,\,; \quad v(0,\cdot)=u_0(\cdot).
\end{equation}
To prove \eqref{the a priori}, we take $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ from Theorem \ref{main result}, which is the solution to equation \eqref{eqn 8.21.13} with the operator $L_0=\nu_1 \Delta$ and the initial condition $u(0,\cdot)=u_0$. Then $\bar{v}:=v-u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ satisfies
$$
\bar{v}_t=\cL_{\lambda} \bar{v}+\bar{f}=\cL_{\lambda}\bar{v}+\sum_{i=1}^d\bar{f}^i_{x^i}, \quad t\in(0,T]\quad;\quad \bar{v}(0,\cdot)=0
$$
where
$$\bar{f}:=(L_0-\cL_{\lambda})u=\sum_{i=1}^d \left(\sum_{j=1}^d [\nu_1 \delta^{ij}-a^{ij}(\omega,t)]u_{x^j}\right)_{x^i}=:\sum_{i=1}^d \bar{f}^i_{x^i}.
$$
Note that for each fixed $\omega$, $\bar{v}(\omega,\cdot)$ satisfies a deterministic PDE with non-random operator $\cL_{\lambda}(\omega,\cdot)$ and non-random free terms $\bar{f}^i(\omega,\cdot)$. Hence, using the deterministic counterpart of Theorem \ref{main result} for each $\omega$, and then taking the expectation, we get
$$
\|v-u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}= \|\bar{v}\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)} \leq C \sum_{i=1}^d \|\bar{f}^i\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T)}\leq C\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}.
$$
For the last inequality above we used \eqref{eqn 4.16.1}. This with estimate \eqref{main estimate} obtained for $u$ finally gives \eqref{the a priori}.
{\bf 2}. \emph{Existence, uniqueness and the estimate} :
Estimate \eqref{main estimate} and the uniqueness of the solution are direct consequences of the a priori estimate \eqref{the a priori}, for which the constant $C$ is independent of $\cL$ and $\lambda$. Thus we only need to prove the existence result.
Let $J$ denote the set of $\lambda\in [0,1]$ such that for any given $f^0,\tbf, g,u_0$ in their corresponding spaces, equation \eqref{method} with given $\lambda$ has a solution $v$ in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. Then by Theorem \ref{main result}, $0\in J$. Hence, the method of continuity
(see e.g. proof of \cite[Theorem 5.1]{Krylov 1999-4}) and a priori estimate \eqref{the a priori} together yield $J=[0,1]$, and in particular $1\in J$. This proves the existence result. The theorem is proved.
\end{proof}
In the next section, we use the result of Theorem \ref{main result-random} to study the regularity of SPDEs on polygonal domains in $\bR^2$. We also use the following result which helps us prove the existence of a solution on polygonal domains.
\begin{lemma}\label{global.uniqueness}
For $j=1,\,2$, let $p_j\geq 2$ and $\theta_j,\Theta_j\in\bR$, and $d-1<\Theta_j<d-1+p_j$. Also let $\theta_j$ ($j=1,2$) satisfy
\begin{equation*}
p_j(1-\lambda^+_c)<\theta_j<p_j(d-1+\lambda^-_c) \quad \text{if $\cL$ is non-random},
\end{equation*}
and
$$
p_j(1-\lambda_{c}(\nu_1,\nu_2))<\theta_j<p_j(d-1+\lambda_{c}(\nu_1,\nu_2)) \quad \text{if $\cL$ is random}.
$$
Then, if $u\in\cK^1_{p_1,\theta_1,\Theta_1}(\cD,T)$ is a solution to equation \eqref{stochastic parabolic equation} with the initial condition $u(0,\cdot)=u_0(\cdot)$ and $f^0, \tbf=(f^1,\cdots,f^d)$, $g$, $u_0$ satisfying
\begin{align*}
&f^0\in\bL_{p_j,\theta_j+p_j,\Theta_j+p_j}(\cD,T), \quad \tbf \in\bL_{p_j,\theta_j,\Theta_j}(\cD,T,d ),
\end{align*}
$$
g\in \bL_{p_j,\theta_j,\Theta_j}(\cD,T,\ell_2), \quad u_0\in\bU^1_{p_j,\theta_j,\Theta_j}(\cD)
$$
for both $j=1$ and $j=2$, then $u\in\cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$.
\end{lemma}
\begin{proof}
If $\cL$ is non-random, the lemma follows from Remark \ref{sol.representation}. In general, as before we fix a deterministic operator $L_0(t)=\sum_{i,j}\alpha^{ij}(t)D_{ij} \in \cT_{\nu_1,\nu_2}$ and set $v=\cR(u_0,f^0,\tbf,g)$. Then, since $L_0$ is non-random, by Remark \ref{sol.representation}
\begin{equation}
\label{eqn 8.31.5}
v\in \cK^1_{p_1,\theta_1,\Theta_1}(\cD,T) \cap \cK^1_{p_2,\theta_2,\Theta_2}(\cD,T).
\end{equation}
Put $\bar{u}_1:=u-v$. Then $\bar{u}=\bar{u}_1$ satisfies
\begin{equation}
\label{eqn 8.23.1}
d\bar{u}=\left[\cL \bar{u}+\sum_{i=1}^d \Big(\sum_{j=1}^d [\alpha^{ij}(t)-a^{ij}(\omega,t)]v_{x^j}\Big)_{x^i}\right]dt, \quad t\in(0,T].
\end{equation}
Also, due to \eqref{eqn 8.31.5}, equation \eqref{eqn 8.23.1} has a solution $\bar{u}_2\in \cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$. Now note that for each fixed
$\omega$, both $\bar{u}_1(\omega,\cdot,\cdot)$ and $\bar{u}_2(\omega,\cdot,\cdot)$ satisfy equation \eqref{eqn 8.23.1}, which we can consider as a deterministic equation with non-random operator. By the above result for non-random operator we conclude
$$\bar{u}_1(\omega,\cdot,\cdot)=\bar{u}_2(\omega,\cdot,\cdot)
$$
for almost all $\omega$. From this we conclude that both $v$ and $u-v$ are in $\cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$, and therefore the lemma is proved.
\end{proof}
\mysection{SPDE on polygonal domains}\label{sec:polygonal domains}
In this section, based on Theorem~\ref{main result-random}, we develop a regularity theory of the stochastic parabolic equations on polygonal domains in $\bR^2$. This development is an enhanced version of the corresponding result in \cite{CKL 2019+} in which
$\cL=\Delta_x$ and $\Theta=d$. Our generalization is as follows:
\begin{itemize}
\item{}
$\Delta \quad \rightarrow \quad \cL=\sum_{i,j}a^{ij}(\omega,t)D_{ij}$; operator with (random) predictable coefficients
\item{}
$\Theta=2 \quad \rightarrow \quad 1<\Theta<1+p$
\item{}
The restriction on $\theta$ is weakened
\item{}
Sobolev regularity with $\gamma\in \{-1,0,\cdots\}$\\
$ \quad \rightarrow \quad$ Sobolev and H\"older regularities with real number $ \gamma \geq -1$
\end{itemize}
Let $\cO\subset \bR^2$ be a bounded polygonal domain with a finite number of vertices $\{p_1,\ldots,p_M\}\subset \partial \cO$.
For any $x\in\cO$, we denote
\begin{align*}
\rho(x):=\rho_{\cO}(x):=d(x,\partial\cO).
\end{align*}
In the polygonal domain, the function of $x$ defined by
$$
\min_{1\leq m\leq M}\vert x-p_m\vert
$$
will play the role of $\rho_{\circ, \cD}$, which is the distance to the vertex in an angular domain $\cD$. We first construct a smooth version of the function $\min_{1\leq m\leq M}\vert x-p_m\vert$ as follows. Consider the domain $V:=\bR^2\setminus \{p_1,\cdots,p_M\}$ and note that
$$\rho_V(x):=d(x,\partial V)=\min_{1\leq m\leq M}\vert x-p_m\vert .
$$
Then, applying \eqref{eqn 8.25.1} and \eqref{eqn 8.25.2} for $\rho_V$ and the domain $V$, we define $\psi_{V}$ and set
$$
\rho_{\circ}=\rho_{\circ,\cO}:=\psi_{V}.
$$
We can check that for any multi-index $\alpha$ and $\mu\in \bR$,
$$
\rho_{\circ} \sim \min_{1\leq m\leq M}\vert x-p_m\vert , \quad
\sup_{\cO}\big\vert \rho^{\vert \alpha\vert -\mu}_{\circ}D^{\alpha}\rho^{\mu}_{\circ}\big\vert <\infty.
$$
On the other hand, we also choose a smooth function $\psi=\psi_{\cO}$ such that $\psi\sim \rho_{\cO}$ and satisfies \eqref{eqn 8.9.1} with $\rho_{\cO}$ in place of $\rho_{\cD}$.
Then, we recall the norms of the spaces $H^{\gamma}_{p,\Theta}(\cO)$ and $H^{\gamma}_{p,\Theta}(\cO; \ell_2)$ introduced in Definition \ref{defn 8.28};
\begin{equation*}
\|f\|^p_{H^{\gamma}_{p,\Theta}(\cO)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d)},
\end{equation*}
\begin{equation*}
\|g\|^p_{H^{\gamma}_{p,\Theta}(\cO;\ell_2)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))g(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d;\ell_2)},
\end{equation*}
where $\psi=\psi_{\cO}$. Using $\rho_{\circ,\cO}$ in place of
$\rho_{\circ,\cD}$, and following Definition \ref{defn 8.19}, we define the function spaces
$$
K^{\gamma}_{p,\theta,\Theta}(\cO), \quad K^{\gamma}_{p,\theta,\Theta}(\cO;\bR^d), \quad K^{\gamma}_{p,\theta,\Theta}(\cO;\ell_2),
$$
as well as the stochastic spaces
$$\bK^{\gamma}_{p,\theta,\Theta}(\cO,T), \quad \bK^{\gamma}_{p,\theta,\Theta}(\cO,T,d ),\quad \bK^{\gamma}_{p,\theta,\Theta}(\cO,T,\ell_2),
$$
$$
\cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T), \quad \bK^{\infty}_c(\cO,T), \quad \bK^{\infty}_c(\cO,T,\ell_2), \quad \bK^{\infty}_c(\cO).
$$
More specifically, we write $f\in K^{\gamma}_{p,\theta,\Theta}(\cO)$ if and only if $\rho^{(\theta-\Theta)/p}_{\circ} f\in H^{\gamma}_{p,\Theta}(\cO)$, and define
$$
\|f\|_{ K^{\gamma}_{p,\theta,\Theta}(\cO)} := \|\rho^{(\theta-\Theta)/p}_{\circ} f\|_{H^{\gamma}_{p,\Theta}(\cO)}.
$$
As in Section 2, if $\gamma\in \bN_0$, then we have
\begin{equation}
\label{eqn 8.25.8}
\|f\|^p_{ K^{\gamma}_{p,\theta,\Theta}(\cO)} \sim \sum_{\vert \alpha\vert \leq \gamma}\int_{\cO} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\theta-\Theta}_{\circ}\rho^{\Theta-d} dx.
\end{equation}
\begin{defn}\label{definition solution polygon}
We write $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$ if
$u \in \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cO,T)$ and there exist
$(\tilde{f}, \tilde{g}) \in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cO,T)\times \bK^{\gamma+1}_{p,\theta,\Theta}(\cO,T, \ell_2)$ and $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cO)$ satisfying
$$
du=\tilde{f}\,dt+\sum_k \tilde{g}^kdw^k_t,\quad t\in(0,T]
$$
in the sense of distributions on $\cO$. The norm is defined by
\begin{eqnarray*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)}&:=&\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cO,T)}+\|\tilde{f}\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tilde{g}\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&&+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}.
\end{eqnarray*}
\end{defn}
\begin{thm}
\label{thm all}
With $\cD$ replaced by $\cO$, all the claims of Lemma \ref{property1}, Remark \ref{dense space}, Theorem \ref{banach}, Theorem \ref{embedding}, and Lemma \ref{regularity.induction} hold. \end{thm}
\begin{proof}
All of these claims in Section 2 are proved based on \eqref{eqn 8.10.14}, \eqref{eqn 8.10.1}, and some properties of weighted Sobolev spaces $H^{\gamma}_{p,\Theta}(\cD)$ taken e.g.\ from \cite{Lo1}. Since these properties in \cite{Lo1} hold true on arbitrary domains, exactly the same proofs as in Section 2 work with $\cD$ replaced by $\cO$.
\end{proof}
\begin{remark}
\label{remark 8.29}
For the analog of Theorem \ref{embedding} in the case of polygonal domains we do not need the additional condition on the initial data. This is because, since $\psi$ is bounded and $\beta>2/p$, Lemma \ref{property1} (iv) gives
$$
\|\psi^{\beta-1}u(0,\cdot)\|_{L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cO))}\leq C \|\psi^{2/p-1}u(0,\cdot)\|_{L_p(\Omega;K^{\gamma+2-2/p}_{p,\theta,\Theta}(\cO))} \leq C \|u\|_{\cK_{p,\theta,\Theta}^{\gamma+2}(\cO,T)}.
$$
$$
\end{remark}
For $m=1,\ldots,M$, let $\kappa_m$ denote the interior angle at the vertex $p_m$, and denote
\begin{equation*}
\kappa_0:=\max_{1\leq m\leq M}\kappa_m.
\end{equation*}
Also, for each $m$, let $\cD_m$ denote the conic domain in $\bR^2$ such that
$$
\{p_m+x: x\in \cD_m\} \cap B_{\varepsilon}(p_m) = \cO \cap B_{\varepsilon}(p_m)
$$
for all sufficiently small $\varepsilon>0$. Denote
$$\lambda^{\pm}_{c,\cL,\cO}:=\min_{m}\lambda^{\pm}_{c,\cL, \cD_m} \quad \text{if $\cL$ is non-random}
$$
and
$$\lambda_{c,\cO}(\nu_1,\nu_2):=\min_{m}\lambda_c(\nu_1,\nu_2,\cD_m)\quad \text{ if $\cL$ is random}.
$$
In Theorem \ref{main result polygon} below, we pose the condition
\begin{equation}
\label{theta poly}
p(1-\lambda^+_{c,\cL,\cO})<\theta< p(1+\lambda^-_{c,\cL,\cO})
\end{equation}
if $\cL$ is non-random, and
\begin{equation}
\label{theta poly2}
p(1-\lambda_{c,\cO}(\nu_1,\nu_2))<\theta <p(1+\lambda_{c,\cO}(\nu_1,\nu_2))
\end{equation}
if $\cL$ is random.
Here are our main results on polygonal domains.
\begin{thm}[SPDE on polygonal domains with random or non-random coefficients]
\label{main result polygon}
Let $p\in[2,\infty)$, $\gamma \geq -1$, and Assumption \ref{ass coeff} hold. Also assume that
\begin{equation}
\label{theta application}
1<\Theta<p+1,
\end{equation}
and that condition \eqref{theta poly} holds if $\cL$ is non-random, while condition \eqref{theta poly2} holds if $\cL$ is random.
Then for given $f^0\in\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)$, $\tbf=(f^1,\cdots,f^d) \in\bK^{\gamma +1}_{p,\theta,\Theta}(\mathcal{O},T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)$, and $u_0\in\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)$, the equation
\begin{equation}\label{stochastic parabolic equation polygon}
d u =\left(\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t\in(0,T]\quad\,; \quad u(0,\cdot)=u_0
\end{equation}
admits a unique solution $u$ in the class $\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)$.
Moreover, the estimate
\begin{eqnarray*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)}
&\leq& C\big(\|f^0\|_{\bK^{\gamma\vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)}
+\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)}\nonumber\\
&&\quad\quad+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}\big)
\end{eqnarray*}
holds with a constant $C=C(\mathcal{O},p,\gamma,\nu_1,\nu_2,\theta,\Theta,T)$.
\end{thm}
\begin{remark}
Since $d=2$ in this section, the range of $\Theta$ in \eqref{theta application} coincides with $(d-1,d-1+p)$ which we have kept throughout this article.
\end{remark}
\begin{thm}[H\"older estimates on polygonal domains]
\label{cor 8.23}
Let $p\geq 2$, $\theta, \Theta\in \bR$ and $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$.
(i) If $\gamma+2-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1)$, then for any $k\leq n$,
$$
\vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}u(\omega,t,\cdot)\vert _{\cC(\cO)}+
[\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n} u(\omega,t,\cdot)]_{\cC^{\delta}(\cO)}<\infty
$$
holds for almost all $(\omega,t)$. In particular,
\begin{equation*}
\vert u(\omega,t,x)\vert \leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x).
\end{equation*}
(ii) Let
$$
2/p<\alpha<\beta\leq 1, \quad \gamma+2-\beta-d/p \geq m+\varepsilon,
$$
where $m\in \bN_0$ and $\varepsilon\in (0,1]$. Put $\eta=\beta-1+\Theta/p$. Then for any $k\leq m$,
\begin{eqnarray*}
&&\bE \sup_{t,s\leq T} \frac {\big\vert \rho^{\eta+k} \rho^{(\theta-\Theta)/p}_{\circ} \left(D^ku(t)-D^ku(s)\right)\big\vert ^p_{\cC(\cO)}}
{\vert t-s\vert ^{p\alpha/2-1}}<\infty, \\
&& \bE \sup_{t,s\leq T} \frac {\left[\rho^{\eta+m+\varepsilon} \rho^{(\theta-\Theta)/p}_{\circ} \left(D^mu(t)-D^mu(s)\right)\right]^p_{\cC^{\varepsilon}(\cO)}}
{\vert t-s\vert ^{p\alpha/2-1}} <\infty.
\end{eqnarray*}
\end{thm}
\begin{proof}
The claims follow from the corresponding results of \eqref{eqn 8.21.1} and \eqref{eqn 8.10.10} mentioned in Theorem \ref{thm all}.
\end{proof}
For the proof of Theorem \ref{main result polygon}, we first prove the following estimate.
\begin{lemma}[A priori estimate]
\label{a priori p}
Let the assumptions in Theorem \ref{main result polygon} hold. Then there exists a constant $C=C(d,p,\theta,\Theta,\nu_1,\nu_2,\cO,T)$ such that the a priori estimate
\begin{eqnarray}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)}&\leq& C\big(\|f^0\|_{\bK^{\gamma\vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)}
+\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)}\nonumber\\
&&\quad\quad+ \|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}\big)\label{polygon a priori}
\end{eqnarray}
holds provided that a solution $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$ to equation \eqref{stochastic parabolic equation polygon} exists.
\end{lemma}
\begin{proof}
First, choose a sufficiently small constant $r>0$ such that each $B_{3r}(p_m)$ contains only one vertex $p_m$ and intersects with only two edges for each $m=1,\,\ldots,\,M$. Then we choose a function $\xi\in\cC_c^{\infty}(\bR^2)$ satisfying
\begin{align*}
1_{B_r(0)}(x)\leq \xi(x)\leq 1_{B_{2r}(0)}(x)\quad\text{for all }x\in\bR^2.
\end{align*}
Let $\xi_m(x):=\xi(x-p_m)$ and $\xi_0:=1-\sum_{m=1}^M\xi_m$.
By the choice of $r$ and $\xi$, the supports of $\xi_1,\ldots,\xi_M$ are disjoint and hence $0\leq \xi_0\leq 1$.
Moreover, $\xi_0(x)=1$ if $\rho_{V}(x)>2r$.
For $m=1,\ldots,M$, let $\cD_m$ be the angular (conic) domain centered at $p_m$ with interior angle $\kappa_m$ such that $\cD_m\cap B_{3r}(p_m)=\cO\cap B_{3r}(p_m)$.
Now let $G$ be a $C^1$-domain in $\cO$ such that
$$
\xi_0(x)=0\quad\text{for }x\in\cO\setminus G\quad\text{and}\quad \inf_{x\in G} \rho_{\circ}(x)\geq c>0\text{ with a constant}\; c.
$$
Then, due to the choices of $\xi_m$ and $\cD_m$ $(m=1,\ldots,M)$, \eqref{space K norm} and \eqref{eqn 8.25.8} together easily yield
\begin{align*}
\|\xi_m v\|^p_{K^n_{p,\theta,\Theta}(\cO)} \sim\|\xi_m v\|^p_{K^n_{p,\theta,\Theta}(\cD_m)},\quad m=1,\ldots,M,
\end{align*}
for any $\theta,\Theta\in\bR$, $n\in\{0,1,2,\ldots\}$, and $v\in K^n_{p,\theta,\Theta}(\cO)$. Similarly,
$$
\|\xi_0 v\|^p_{K^n_{p,\theta,\Theta}(\cO)}\sim \sum_{\vert \alpha\vert \leq n}\int_{G} \vert \rho^{\vert \alpha\vert }D^{\alpha}(\xi_0v)\vert ^p \rho^{\Theta-d}dx\sim \|\xi_0 v\|^p_{H^n_{p,\Theta}(G)},
$$
and the same relations hold for $\ell_2$-valued functions. Denote
$$
\bH^{\gamma}_{p,\Theta}(G,T):=L_p(\Omega\times (0,T], \cP; H^{\gamma}_{p,\Theta}(G)),
$$
$$
\bH^{\gamma}_{p,\Theta}(G,T,\ell_2):=L_p(\Omega\times (0,T], \cP; H^{\gamma}_{p,\Theta}(G;\ell_2)).
$$
Then, the above observations in particular imply
\begin{align}\label{partition-of-unity.eq}
\|v\|_{\bK^n_{p,\theta,\Theta}(\cO,T)} \sim \Big(\|\xi_0 v\|_{\bH^n_{p,\Theta}(G,T)}+\sum_{m=1}^M\|\xi_m v\|_{\bK^n_{p,\theta,\Theta}(\cD_m,T)}\Big)
\end{align}
for any $v\in \bK^n_{p,\theta,\Theta}(\cO,T)$, where $n\in \{0,1,2,\cdots\}$.
Now, for each $m=1,\ldots,M$ we define $u_m:=\xi_mu$. Then, since $\gamma+2\geq 1$, $u_m$ belongs to $\bK^{1}_{p,\theta-p,\Theta-p}(\cD_m,T)$. Also, $\xi_0 u$ belongs to $\bH^{1}_{p,\Theta-p}(G,T)$. Note that each $u_m$ satisfies
\begin{equation}\label{equation for m}
d(u_m)=\Big(\cL u_m+f^0_m+\sum_{i=1}^d (f^i_m)_{x^i}\Big)dt+\sum_k g^{k}_m dw_t^k,\quad t\in (0,T]
\end{equation}
in the sense of distributions on $\cD_m$ with the initial condition $
u_m(0,\cdot)=\xi_m u_0$ and $\xi_0 u$ satisfies
\begin{equation}\label{equation for m=0}
d(\xi_0 u)=\Big(\cL (\xi_0 u)+f^0_0+\sum_{i=1}^d (f^i_0)_{x^i}\Big)dt+\sum_k g^{k}_0 dw_t^k,\quad t\in (0,T]
\end{equation}
in the sense of distributions on $G$ with the initial condition $(\xi_0 u)(0,\cdot)=\xi_0 u_0$, where
\begin{equation}\label{f g for m}
f^0_m=f^0\xi_m-\sum_{i=1}^d f^i (\xi_m)_{x^i}+u \cL(\xi_{m}), \quad f^i_m=f^i\xi_m-2\sum_{j=1}^d a^{ij}u\,(\xi_{m})_{x^j},\quad
g_m=\, g\xi_m
\end{equation}
for $m=0,1,2,\ldots,M$.
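For completeness, let us indicate how \eqref{f g for m} arises from the Leibniz rule; we may assume $a^{ij}=a^{ji}$, since only the symmetric part of $(a^{ij})$ enters $\sum_{i,j}a^{ij}D_{ij}$. Since $\xi_m$ does not depend on $t$, multiplying \eqref{stochastic parabolic equation polygon} by $\xi_m$ and using
\begin{align*}
\xi_m\cL u&=\cL(\xi_m u)-2\sum_{i,j}a^{ij}(\xi_m)_{x^i}u_{x^j}-u\,\cL(\xi_m),\qquad
\xi_m f^i_{x^i}=(\xi_m f^i)_{x^i}-f^i(\xi_m)_{x^i},\\
-2\sum_{i,j}a^{ij}(\xi_m)_{x^i}u_{x^j}&=-\sum_{i}\Big(2\sum_j a^{ij}u\,(\xi_m)_{x^j}\Big)_{x^i}+2u\,\cL(\xi_m),
\end{align*}
we obtain \eqref{equation for m} with the free terms given in \eqref{f g for m}.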
Since $\operatorname{supp}(\xi_m) \subset \overline{B_{2r}(p_m)}$ and $(\xi_m)_x=0$ in a neighborhood of $p_m$ for $m=1,\ldots,M$, we have
\begin{align*}
\|u (\xi_m)_x\|_{\bL_{p,\theta,\Theta}(\cO,t)}+\|u(\xi_m)_{xx}\|_{\bL_{p,\theta+p,\Theta+p}(\cO,t)}\leq C\|u\|_{\bL_{p,\theta,\Theta}(\cO,t)}
\end{align*}
for $t\leq T$, where $C$ depends only on $\cO,\,p,\,\theta$ and $\Theta$.
Hence, for $m=1,\,\ldots,\,M$, by Theorems \ref{main result} and \ref{main result-random}, which our range of $\theta$ allows us to use, we have for any $t\leq T$,
\begin{align*}
&\quad \quad \|\xi_m u\|_{\bK^1_{p,\theta-p,\Theta-p}(\cD_m,t)}
\\
& \leq C\big(\|f^0_m\|_{\bL_{p,\theta+p,\Theta+p}(\cD_m,t)}+ \sum_{i=1}^d \|f^i_m\|_{\bL_{p,\theta,\Theta}(\cD_m,t)}+\|g_m\|_{\bL_{p,\theta,\Theta}(\cD_m,t,\ell_2)}\nonumber\\
&\hspace{1cm}+\|\xi_m u_0\|_{\bU^1_{p,\theta,\Theta}(\cD_m)}\big)
\\
& \leq C\big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T,d)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm}+\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\big).
\end{align*}
For $m=0$, by \cite[Theorem 2.7]{Kim2004-2} (or \cite[Theorem 2.9]{Kim2004}),
we have
\begin{align*}
&\quad \quad\quad \|\xi_0 u\|_{\bH^1_{p,\Theta-p}(G,t)}\\
&\leq C\Big(\|f^0_0\|_{\bL_{p,\Theta+p}(G,t)}+ \sum_{i=1}^d \|f^i_0\|_{\bL_{p,\Theta}(G,t)}+\|g_0\|_{\bL_{p,\Theta}(G,t,\ell_2)}+\|\xi_0u_0\|_{L_p(\Omega;H^{1-2/p}_{p,\Theta+2-p}(G))}\Big)
\\
&\leq C\Big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T,d)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm}+ \|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\Big).
\end{align*}
Summing up over all $m=0,\,\ldots,\,M$ and using \eqref{partition-of-unity.eq}, for each $t\leq T$, we have
\begin{align*}
&\quad\quad\|u\|_{\bK^1_{p,\theta-p,\Theta-p}(\cO,t)}
\\
&\leq C \Big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,t)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T,d)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm} +\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\Big).
\end{align*}
Using this and the polygonal versions of \eqref{eqiv norm} and \eqref{eqn 8.25.31}, which are mentioned in Theorem \ref{thm all}, we get, for each $t\leq T$,
\begin{align*}
&\quad\quad \|u\|^p_{\cK^1_{p,\theta,\Theta}(\cO,t)}\\
\leq &\,C\int^t_0\|u\|^p_{\cK^1_{p,\theta,\Theta}(\cO,s)}ds
\\
&+C\left(\|f^0\|^p_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}+\|\tbf\|^p_{\bL_{p,\theta,\Theta}(\cO,T,d)}+\|g\|^p_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}+\|u_0\|^p_{\bU^1_{p,\theta,\Theta}(\cO)}\right).
\end{align*}
Applying Gronwall's inequality, we further obtain
\begin{align*}
&\quad\quad \|u\|_{\cK^1_{p,\theta,\Theta}(\cO,T)}\\
&\leq C\left(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}+\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\right).
\end{align*}
This and the polygonal version of Lemma \ref{regularity.induction}, which is mentioned in Theorem \ref{thm all}, yield a priori estimate \eqref{polygon a priori}.
The lemma is proved.
\end{proof}
The following is a $\cC^1$-domain version of Lemma \ref{global.uniqueness}. We use it in the proof of Theorem \ref{main result polygon} below.
\begin{lemma}\label{lem for uniqueness2}
Let $G$ be a bounded $\cC^1$ domain in $\bR^d$ and let $p_j\in[2,\infty)$, $\Theta_j\in (d-1,d-1+p_j)$ for $j=1,2$. Assume that $u\in \bH^1_{p_1,\Theta_1-p_1}(G,T)$
satisfies
\begin{align*}
du=\Big(\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\Big)\,dt+\sum_kg^kdw^k_t,\quad t\in(0,T]\quad
\end{align*}
in the sense of distributions on $G$ with the initial condition $u(0,\cdot)=u_0(\cdot)$ and $f^0$, $f^i$ ($i=1,\ldots,d$), $g$, $u_0$ satisfying
$$
f^0\in \bL_{p_j,\Theta_j+p_j}(G,T)\cap \bL_{p_j,d+p_j}(G,T),\quad f^i \in \bL_{p_j,\Theta_j}(G,T)\cap \bL_{p_j,d}(G,T), \, i=1,\cdots,d,
$$
$$
g\in \bL_{p_j,\Theta_j}(G,T,\ell_2)\cap \bL_{p_j,d}(G,T,\ell_2),
$$
$$
u_0\in L_{p_j}(\Omega,\rF_0;H^{1-2/p_j}_{p_j,\Theta_j+2-p_j}(G))\cap L_{p_j}(\Omega,\rF_0;H^{1-2/p_j}_{p_j,d+2-p_j}(G))
$$
for both $j=1$ and $j=2$.
Then $u$ belongs to $\bH^1_{p_2,\Theta_2-p_2}(G,T)$.
\end{lemma}
\begin{proof}
See \cite[Lemma 3.8]{CKL 2019+}. We remark that only the Laplacian $\Delta$ is considered in \cite{CKL 2019+}; however, the proof of \cite[Lemma 3.8]{CKL 2019+} works for the general case without any changes, since it depends only on \cite[Theorem 2.7]{Kim2004-2} (or \cite[Theorem 2.9]{Kim2004}),
which covers operators having coefficients measurable in $(\omega,t)$ and continuous in $x$.
\end{proof}
We recall that $d=2$ in this section.
\begin{proof}[\textbf{Proof of Theorem \ref{main result polygon}}]
Due to Lemma \ref{a priori p}, we only need to prove the existence result. Furthermore, by a standard approximation argument, we may assume
$$
f^0\in\bK^{\infty}_c(\cO,T),\quad \tbf\in\bK^{\infty}_c(\cO,T,2), \quad g\in\bK^{\infty}_c(\cO,T,\ell_2),\quad u_0\in \bK^{\infty}_c(\cO).
$$
Considering $u-u_0$ as usual, we may assume $u_0\equiv 0$.
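For completeness, we record this standard reduction; the function $\tilde{u}$ below is introduced only for this remark. Since $u_0\in \bK^{\infty}_c(\cO)$ does not depend on $t$, the function $\tilde{u}:=u-u_0$ satisfies
$$
d\tilde{u}=\Big(\cL\tilde{u}+\big(f^0+\cL u_0\big)+\sum_{i=1}^2 f^i_{x^i}\Big)\,dt+\sum_kg^kdw^k_t,\quad t\in(0,T]\quad;\quad \tilde{u}(0,\cdot)=0,
$$
and the modified free term $f^0+\cL u_0$ is still bounded and vanishes near the vertices, so the properties of $f^0$ exploited below are preserved.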
Also, note that $g^k=0$ for all large $k$ (say, for all $k> N$), and each $g^k$ is of the type $\sum_{j=1}^{n(k)} 1_{(\tau^k_{j-1},\tau^k_j]}(t) h^{kj}(x)$, where $\tau^k_j$ are bounded stopping times and $h^{kj}\in \cC^{\infty}_c(\cO)$.
Thus the function $v$ defined by
$$
v(t,x):=\sum_{k=1}^{\infty}\int^t_0 g^k dw^k_s=\sum_{k\leq N} \sum_{j\leq n(k)} \left(w^k_{\tau^k_j \wedge t}-w^k_{\tau^k_{j-1} \wedge t}\right) h^{kj}(x)
$$
is infinitely differentiable in $x$ and vanishes near the boundary of $\cO$. Consequently, $v$ belongs to $\cK^{\nu+2}_{p,\theta,\Theta}(\cO,T)$ for any $\nu, \theta, \Theta \in \bR$; see Definition \ref{definition solution polygon}. Now, $u$ satisfies equation \eqref{stochastic parabolic equation polygon} if and only if $\bar{u}:=u-v$ satisfies
$$
d\bar{u}=\Big(\cL\bar{u}+\bar{f}^0+\sum_{i=1}^2 f^i_{x^i}\Big)dt, \quad t\in(0,T]\quad;\quad \bar{u}(0,\cdot)= 0,
$$
where $\bar{f}^0=f^0+\cL v$. Hence, considering $\bar{f}^0$ in place of $f^0$, to prove the existence we may further assume $g=0$.
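This equivalence is a direct computation, recorded here only for the reader's convenience: since $dv=\sum_k g^k\,dw^k_t$ and $v(0,\cdot)=0$, subtracting yields
$$
d\bar{u}=du-dv=\Big(\cL u+f^0+\sum_{i=1}^2 f^i_{x^i}\Big)dt=\Big(\cL\bar{u}+\big(f^0+\cL v\big)+\sum_{i=1}^2 f^i_{x^i}\Big)dt,
$$
and, as noted above, $v$ is smooth in $x$ and vanishes near the boundary of $\cO$, so $\bar{f}^0=f^0+\cL v$ is an admissible free term.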
Then, by the classical results without weights for $p=2$ (see, e.g., \cite{Roz1990} or \cite[Theorem~2.12, Corollary~2.14]{Kim2014}),
there exists a solution $u$ in $\cK^1_{2,2,2}(\cO,T)$ to equation \eqref{stochastic parabolic equation polygon}, which now simplifies to
$$
u_t=\cL u+f^0+\sum_{i=1}^2 f^i_{x^i}, \quad t\in(0,T]\quad; \quad u(0,\cdot)=0.
$$
By Theorem A in \cite{Aronson} (or see estimate (2.11) and the proof of Theorem 2.4 in \cite{Kim2004-3} for more details), for any $r>4$, we have
\begin{equation}
\label{bound}
\bE \sup_{t,x} \vert u(t,x)\vert ^p \leq C \bE \|\,\vert f^0\vert +\vert \tbf\vert \,\|^p_{L_r((0,T]\times \cO)} <\infty.
\end{equation}
Now we prove $u\in \cK^1_{p,\theta,\Theta}(\cO,T)$ using Lemma \ref{global.uniqueness} and Lemma \ref{lem for uniqueness2} along with $u\in \cK^1_{2,2,2}(\cO,T)$. Define $u_m:=\xi_m u$ in the same way we did in the proof of Lemma \ref{a priori p}. Then $\xi_m u$ satisfies \eqref{equation for m} in the sense of distributions on $\cD_m$ for $m=1,\ldots,M$ and $\xi_0u$ satisfies \eqref{equation for m=0}
on $G$ for $m=0$ with the same $f^0_m,\,f^i_m,\, \xi_mu_0$ as in \eqref{f g for m}.
Note that since $f^0,\tbf$ are bounded and $f^0,\tbf, (\xi_m)_x, (\xi_m)_{xx}$ vanish near the vertices, we have for any $\theta\in \bR$, $q\geq 2$ and $1<\Theta<1+q$,
\begin{eqnarray*}
&&\|f^0_m\|^q_{\bL_{q,\theta+q,\Theta+q}(\cO,T)} +\sum_{i=1}^2\|f^i_m\|^q_{\bL_{q,\theta,\Theta}(\cO,T)}\\
&\leq& C \bE \int^T_0 \int_{\cO}(1+|u|^q) \rho^{\Theta-2}\,dx\,dt\leq C \left(\int_{\cO}\rho^{\Theta-2}dx\right) \bE \sup_{t,x}(1+|u|^q)<\infty.
\end{eqnarray*}
For the last inequality we used \eqref{bound} and the fact that $\Theta-2>-1$.
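Concerning the factor $\int_{\cO}\rho^{\Theta-2}dx$: assuming, as elsewhere in the paper, that $\rho(x)$ denotes the distance from $x$ to the boundary of the bounded domain $\cO\subset\bR^2$, its finiteness follows from an elementary one-dimensional computation; near the boundary, in boundary-fitted coordinates, the integral is comparable to
$$
\int^1_0 r^{\Theta-2}\,dr<\infty,
$$
which converges precisely because $\Theta-2>-1$.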
Hence, $f^0_m,\,f^i_m$, along with $\xi_mu_0$, satisfy the assumptions in Lemma \ref{global.uniqueness} and Lemma \ref{lem for uniqueness2}. Consequently, $\xi_m u \in \bK^1_{p,\theta-p,\Theta-p}(\cD_m,T)$, and thus $\xi_m u \in \cK^1_{p,\theta,\Theta}(\cD_m,T)$, for $m=1,2,\ldots, M$, and $\xi_0 u \in \bH^1_{p,\Theta-p}(G,T)$. These and \eqref{partition-of-unity.eq} with $n=1$ yield $u\in \bK^1_{p,\theta-p,\Theta-p}(\cO,T)$ and in turn $u\in \cK^1_{p,\theta,\Theta}(\cO,T)$.
Finally, the analogue of Lemma \ref{regularity.induction} for polygonal domains (see Theorem \ref{thm all}) proves that the solution $u$ found above actually belongs to the space $\cH^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$. The theorem is proved. \end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document}